Optimisation-Based Solution Methods for Set Partitioning Models
DEFF Research Database (Denmark)
Rasmussen, Matias Sevel
The scheduling of crew, i.e. the construction of work schedules for crew members, is often not a trivial task, but a complex puzzle. The task is complicated by rules, restrictions, and preferences. Therefore, manual solutions as well as solutions from standard software packages are not always sufficient with respect to solution quality and solution time. Enhancement of the overall solution quality as well as the solution time can be of vital importance to many organisations. The fields of operations research and mathematical optimisation deal with mathematical modelling of difficult scheduling problems (among other topics). The fields also deal with the development of sophisticated solution methods for these mathematical models. This thesis describes the set partitioning model, which has been widely used for modelling crew scheduling problems. Integer properties for the set partitioning model are shown...
Optimisation-based worst-case analysis and anti-windup synthesis for uncertain nonlinear systems
Menon, Prathyush Purushothama
This thesis describes the development and application of optimisation-based methods for worst-case analysis and anti-windup synthesis for uncertain nonlinear systems. The worst-case analysis methods developed in the thesis are applied to the problem of nonlinear flight control law clearance for highly augmented aircraft. Local, global and hybrid optimisation algorithms are employed to evaluate worst-case violations of a nonlinear response clearance criterion, for a highly realistic aircraft simulation model and flight control law. The reliability and computational overheads associated with different optimisation algorithms are compared, and the capability of optimisation-based approaches to clear flight control laws over continuous regions of the flight envelope is demonstrated. An optimisation-based method for computing worst-case pilot inputs is also developed, and compared with current industrial approaches for this problem. The importance of explicitly considering uncertainty in aircraft parameters when computing worst-case pilot demands is clearly demonstrated. Preliminary results on extending the proposed framework to the problems of limit-cycle analysis and robustness analysis in the presence of time-varying uncertainties are also included. A new method for the design of anti-windup compensators for nonlinear constrained systems controlled using nonlinear dynamics inversion control schemes is presented and successfully applied to some simple examples. An algorithm based on the use of global optimisation is proposed to design the anti-windup compensator. Some conclusions are drawn from the results of the research presented in the thesis, and directions for future work are identified.
Energy-efficient cooking methods
Energy Technology Data Exchange (ETDEWEB)
De, Dilip K. [Department of Physics, University of Jos, P.M.B. 2084, Jos, Plateau State (Nigeria); Muwa Shawhatsu, N. [Department of Physics, Federal University of Technology, Yola, P.M.B. 2076, Yola, Adamawa State (Nigeria); De, N.N. [Department of Mechanical and Aerospace Engineering, The University of Texas at Arlington, Arlington, TX 76019 (United States); Ikechukwu Ajaeroh, M. [Department of Physics, University of Abuja, Abuja (Nigeria)
2013-02-15
Energy-efficient new cooking techniques have been developed in this research. Using a stove with 649 ± 20 W of power, the minimum heat, specific heat of transformation, and on-stove time required to completely cook 1 kg of dry beans (with water and other ingredients) and 1 kg of raw potato are found to be 710 ± kJ, 613 ± kJ, and 1,144 ± 10 s, respectively, for beans, and 287 ± 12 kJ, 200 ± 9 kJ, and 466 ± 10 s for Irish potato. Extensive research shows that these figures are, to date, the lowest amounts of heat ever used to cook beans and potato, and less than half the energy used in conventional cooking with a pressure cooker. The efficiency of the stove was estimated to be 52.5 ± 2%. Ways to further improve the efficiency of cooking with a normal stove and a solar cooker, and to further preserve food nutrients, are discussed. Our method of cooking, when applied globally, is expected to contribute to the Clean Development Mechanism (CDM) potential. The approximate values of the minimum and maximum CDM potentials are estimated to be 7.5 × 10^11 and 2.2 × 10^13 kg of carbon credit annually. A precise estimate of the CDM potential of our cooking method will be reported later.
Efficient methods of piping cleaning
Directory of Open Access Journals (Sweden)
Orlov Vladimir Aleksandrovich
2014-01-01
Full Text Available The article contains an analysis of efficient methods for cleaning the piping of water supply and sanitation systems. Special attention is paid to the ice cleaning method, in the course of which biological film and various mineral and organic deposits are removed due to the ice crust buildup on the inner surface of water supply and drainage pipes. These impurities are responsible for the deterioration of the organoleptic properties of the transported drinking water or for the narrowing of the cross-section of drainage pipes. The co-authors emphasize that the use of ice, compared to other methods of pipe cleaning, has a number of advantages owing to the relative simplicity and cheapness of the process, its economic efficiency and the lack of environmental risk. The equipment for performing ice cleaning is presented, along with its technological options, the duration of cleansing operations, and the volumes of removed pollution per unit length of the water supply and drainage pipelines. It is noted that ice cleaning requires careful planning, both of the ice production process and of the supply of the ice into the pipe, and that there are specific requirements to its quality. In particular, when cleaning a drinking water system, the ice applied should be hygienically clean and meet sanitary requirements. In pilot projects, a quantitative and qualitative analysis of the sediments adsorbed by the ice is conducted, as well as of the temperature and the duration of the process. The degree of pollution of the pipeline was estimated by the volume of sediment removed per 1 km of pipeline. Cleaning pipelines using ice can be considered one of the methods of trenchless technologies, and a significant alternative to traditional methods of cleaning the pipes. The method can be applied in urban pipeline systems of drinking water supply for diameters of 100-600 mm, and also to diversion collectors. In the world today, 450 km of pipelines are subject to the ice cleaning method. The ice cleaning method is simple...
Efficient Training Methods for Conditional Random Fields
National Research Council Canada - National Science Library
Sutton, Charles A
2008-01-01
.... In this thesis, I investigate efficient training methods for conditional random fields with complex graphical structure, focusing on local methods which avoid propagating information globally along the graph...
Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.
2017-03-01
The general strategic bidding procedure has been formulated in the literature as a bi-level search problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex; hence, researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms. The problem becomes even more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem to a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered without KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on the IEEE 14-bus and IEEE 30-bus systems, and the performance is compared against differential evolution-based, genetic algorithm-based and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.
Quantitative Efficiency Evaluation Method for Transportation Networks
Directory of Open Access Journals (Sweden)
Jin Qin
2014-11-01
Full Text Available An effective evaluation of transportation network efficiency/performance is essential to the establishment of sustainable development in any transportation system. Based on a redefinition of transportation network efficiency, a quantitative efficiency evaluation method for transportation network is proposed, which could reflect the effects of network structure, traffic demands, travel choice, and travel costs on network efficiency. Furthermore, the efficiency-oriented importance measure for network components is presented, which can be used to help engineers identify the critical nodes and links in the network. The numerical examples show that, compared with existing efficiency evaluation methods, the network efficiency value calculated by the method proposed in this paper can portray the real operation situation of the transportation network as well as the effects of main factors on network efficiency. We also find that the network efficiency and the importance values of the network components both are functions of demands and network structure in the transportation network.
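As a hedged illustration of the kind of measure described above, the sketch below computes a demand-weighted average of the reciprocal shortest-path travel costs over all origin-destination pairs. The toy graph, the demand values, and the exact form of the efficiency measure are illustrative assumptions, not the paper's actual definition.

```python
from heapq import heappush, heappop

def shortest_cost(graph, src, dst):
    # Dijkstra over a dict-of-dicts graph: graph[u][v] = travel cost of link (u, v)
    dist, seen = {src: 0.0}, set()
    pq = [(0.0, src)]
    while pq:
        d, u = heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            return d
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heappush(pq, (nd, v))
    return float("inf")

def network_efficiency(graph, demands):
    # Demand-weighted average of 1/cost over all origin-destination pairs
    total_q = sum(demands.values())
    eff = sum(q / shortest_cost(graph, o, d) for (o, d), q in demands.items())
    return eff / total_q

graph = {
    "A": {"B": 2.0, "C": 5.0},
    "B": {"A": 2.0, "C": 2.0},
    "C": {"A": 5.0, "B": 2.0},
}
demands = {("A", "C"): 100.0, ("B", "C"): 50.0}
print(network_efficiency(graph, demands))  # A->C routes via B (cost 4)
```

Note how the measure responds to both structure and demand: increasing the demand on a costly OD pair lowers the network efficiency value.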
Efficient searching in meshfree methods
Olliff, James; Alford, Brad; Simkins, Daniel C.
2018-04-01
Meshfree methods such as the Reproducing Kernel Particle Method and the Element Free Galerkin method have proven to be excellent choices for problems involving complex geometry, evolving topology, and large deformation, owing to their ability to model the problem domain without the constraints imposed on Finite Element Method (FEM) meshes. However, meshfree methods have an added computational cost over FEM that comes from at least two sources: the increased cost of shape function evaluation and the determination of adjacency or connectivity. The focus of this paper is to formally address the types of adjacency information that arise in various uses of meshfree methods; to discuss available techniques for computing the various adjacency graphs; to propose a new search algorithm and data structure; and finally to compare the memory and run-time performance of the methods.
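A common way to obtain the adjacency information discussed above is a uniform background grid whose cell size equals the kernel support radius, so that each query inspects only a 3x3 block of cells. The sketch below is a minimal 2-D illustration under that assumption; it is not the data structure proposed in the paper.

```python
from collections import defaultdict

def build_grid(points, h):
    # Bin each particle into a square cell of side h (h = support radius)
    grid = defaultdict(list)
    for i, (x, y) in enumerate(points):
        grid[(int(x // h), int(y // h))].append(i)
    return grid

def neighbors(points, grid, h, q):
    # Only the 3x3 block of cells around the query point can contain neighbors
    cx, cy = int(q[0] // h), int(q[1] // h)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for i in grid.get((cx + dx, cy + dy), []):
                x, y = points[i]
                if (x - q[0]) ** 2 + (y - q[1]) ** 2 <= h * h:
                    out.append(i)
    return out

points = [(0.1, 0.1), (0.2, 0.15), (0.9, 0.9), (0.5, 0.5)]
grid = build_grid(points, 0.25)
print(sorted(neighbors(points, grid, 0.25, (0.15, 0.1))))  # -> [0, 1]
```

Build cost is linear in the number of particles, and each adjacency query touches a constant number of cells, which is why grid-based searching is a common baseline against which new meshfree search structures are compared.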
Efficient Methods for Fast Shading
Directory of Open Access Journals (Sweden)
ROMANYUK, A.
2008-06-01
Full Text Available On battery-powered devices without specialized rendering hardware, it is important to improve speed and quality so that these methods are suitable for real-time rendering. Furthermore, such algorithms are needed on the coming multicore architectures. We show how the methods by Gouraud and Phong, the most commonly used methods for shading, can be improved and made faster both for software rendering and for simple, low-energy hardware implementations. Moreover, this paper summarizes the authors' achievements in increasing shading speed and performance, and a Bidirectional Reflectance Distribution Function is simplified for faster computation and hardware implementation.
ASSESSMENT OF THE EFFICIENCY OF DISINFECTION METHOD ...
African Journals Online (AJOL)
ABSTRACT. The efficiencies of three disinfection methods, namely boiling, water guard and PUR purifier, were assessed. ... Water is an indispensable resource for supporting life systems ...
Towards Cost-efficient Sampling Methods
Peng, Luo; Yongli, Li; Chong, Wu
2014-01-01
The sampling method has received much attention in the field of complex networks in general and statistical physics in particular. This paper presents two new sampling methods based on the perspective that a small part of the vertices, those with high node degree, can possess most of the structural information of a network. The two proposed sampling methods are efficient in sampling nodes with high degree. The first new sampling method is improved on the basis of the stratified random sampling method and...
Efficiency Test Method for Electric Vehicle Chargers
DEFF Research Database (Denmark)
Kieldsen, Andreas; Thingvad, Andreas; Martinenas, Sergejus
2016-01-01
This paper investigates different methods for measuring the charger efficiency of mass-produced electric vehicles (EVs), in order to compare the different models. Consumers pay little attention to losses in the charger, even though the impact on the driving cost is high. It is not a high priority...... different vehicles. A unified method for testing the efficiency of the charger in EVs, without direct access to the component, is presented. The method is validated through extensive tests of the models Renault Zoe, Nissan LEAF and Peugeot iOn. The results show a loss between 15% and 40%, which is far...
Toward cost-efficient sampling methods
Luo, Peng; Li, Yongli; Wu, Chong; Zhang, Guijie
2015-09-01
The sampling method has received much attention in the field of complex networks in general and statistical physics in particular. This paper proposes two new sampling methods based on the idea that a small part of the vertices, those with high node degree, can possess most of the structural information of a complex network. The two proposed sampling methods are efficient in sampling high-degree nodes, so they remain useful even when the sampling rate is low, which makes them cost-efficient. The first new sampling method is developed on the basis of the widely used stratified random sampling (SRS) method, and the second one improves the well-known snowball sampling (SBS) method. In order to demonstrate the validity and accuracy of the two new sampling methods, we compare them with existing sampling methods in three commonly used simulation networks, namely a scale-free network, a random network and a small-world network, and also in two real networks. The experimental results illustrate that the two proposed sampling methods perform much better than the existing sampling methods in terms of recovering the true network structure characteristics reflected by the clustering coefficient, Bonacich centrality and average path length, especially when the sampling rate is low.
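The degree-based stratification idea can be sketched roughly as follows: split the vertices into high- and low-degree strata and oversample the former. The median split rule, the 70% high-degree share, and the toy network are illustrative assumptions; the paper's SRS- and SBS-based variants differ in detail.

```python
import random

def degree_stratified_sample(adj, n, high_frac=0.7, seed=1):
    # Split nodes into a high-degree and a low-degree stratum (top/bottom half
    # by degree), then draw disproportionately many from the high-degree one.
    rng = random.Random(seed)
    nodes = sorted(adj, key=lambda v: len(adj[v]), reverse=True)
    cut = len(nodes) // 2
    high, low = nodes[:cut], nodes[cut:]
    n_high = min(len(high), int(round(n * high_frac)))
    return rng.sample(high, n_high) + rng.sample(low, n - n_high)

# Toy hub-and-spoke network: node 0 is the high-degree hub
adj = {0: [1, 2, 3, 4, 5], 1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [0, 3], 5: [0]}
s = degree_stratified_sample(adj, 4)
print(s)
```

Because the high-degree stratum is exhausted before the low-degree one, the hub is almost always captured, which is exactly the property that makes such samples informative at low sampling rates.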
Efficient Methods of Estimating Switchgrass Biomass Supplies
Switchgrass (Panicum virgatum L.) is being developed as a biofuel feedstock for the United States. Efficient and accurate methods to estimate switchgrass biomass feedstock supply within a production area will be required by biorefineries. Our main objective was to determine the effectiveness of in...
Computational efficiency for the surface renewal method
Kelley, Jason; Higgins, Chad
2018-04-01
Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and they were tested for sensitivity to the length of the flux-averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications, demonstrating simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased computation speed grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, in applied monitoring, and in novel field deployments.
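One of the quantities SR calibration needs repeatedly is the structure function of the high-frequency signal at many lags; computing it with array slicing rather than per-sample loops is an example of the kind of vectorized speed-up described above. The surrogate signal and the chosen lags and orders are assumptions for illustration, not the paper's algorithms.

```python
import numpy as np

def structure_functions(x, lags, orders=(2, 3, 5)):
    # S^n(r) = mean over i of (x[i] - x[i-r])**n, computed with array slicing
    # instead of an explicit Python loop over the samples.
    out = {}
    for r in lags:
        d = x[r:] - x[:-r]
        out[r] = {n: float(np.mean(d ** n)) for n in orders}
    return out

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(10_000))  # surrogate 10 Hz scalar trace
S = structure_functions(x, lags=[1, 2, 5])
print(S[1][2])  # second-order structure function at lag 1
```

Each lag costs one vectorized subtraction and a few reductions, so sweeping a large range of lag timescales stays cheap even for long averaging periods.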
Efficiency and stability of the DSBGK method
Li, Jun
2012-07-09
Recently, the DSBGK method (note: the original name DS-BGK was changed to DSBGK for simplicity) was proposed to reduce the stochastic noise in simulating rarefied gas flows at low velocity. Its total computational time is almost independent of the magnitude of the deviation from the equilibrium state. It was verified against the DSMC method in different benchmark problems over a wide range of Kn numbers. Simulation results for the closed lid-driven cavity flow, the thermal transpiration flow and the open channel flow obtained with the DSBGK method are given here to show its efficiency and numerical stability. In closed problems, the density distribution is subject to unphysical fluctuation due to the absence of a density constraint at the boundary. Thus, many simulated molecules are employed in DSBGK simulations to improve the stability and reduce the magnitude of the fluctuation. This increases the memory usage remarkably but has little influence on the efficiency of DSBGK simulations. In open problems, the DSBGK simulation remains stable when using about 10 simulated molecules per cell, because the fixed number densities at the open boundaries eliminate the unphysical fluctuation. A small modification to the CLL reflection model is introduced to further improve the efficiency slightly.
An Efficient Vital Area Identification Method
International Nuclear Information System (INIS)
Jung, Woo Sik
2017-01-01
A new Vital Area Identification (VAI) method was developed in this study to minimise the burden of the VAI procedure. This is accomplished by simplifying sabotage event trees or Probabilistic Safety Assessment (PSA) event trees at the very first stage of the VAI procedure. Target sets and prevention sets are calculated from the sabotage fault tree. The rooms in the shortest (most economical) prevention set are selected and protected as vital areas, and physical protection is concentrated on these vital areas. All rooms in the protected area whose sabotage could lead to core damage should be incorporated into the sabotage fault tree, so sabotage fault tree development is a very difficult task that incurs high engineering costs. IAEA published INFCIRC/225/Rev.5 in 2011, which includes the principal international guidelines for the physical protection of nuclear material and nuclear installations. A new, efficient VAI method was developed and demonstrated in this study. Since this method drastically reduces the size of the VAI problem, it provides a very quick and economical VAI procedure. A consistent and integrated VAI procedure had previously been developed by taking advantage of PSA results; the method of this study develops it further by inserting PSA event tree simplification at the initial stage of the VAI procedure.
Efficient Load Scheduling Method For Power Management
Directory of Open Access Journals (Sweden)
Vijo M Joy
2015-08-01
Full Text Available An efficient load scheduling method to meet varying power supply needs is presented in this paper. At peak load times, the power generation system can fail due to instability. Traditionally, this is handled by load shedding, in which unnecessary and extra loads are disconnected. The proposed method overcomes this problem by scheduling the load based on the requirement. Artificial neural networks are used for this optimal load scheduling process. A neural network is used to generate the economic schedule because power generation from each source has a different cost. The total load required is the input of the network, and the power generation from each source together with the transmission losses are its outputs. Training and programming of the artificial neural networks are done using MATLAB.
An Efficient Simulation Method for Rare Events
Rached, Nadhir B.
2015-01-07
Estimating the probability that a sum of random variables (RVs) exceeds a given threshold is a well-known challenging problem. Closed-form expressions for the sum distribution do not generally exist, which has led to an increasing interest in simulation approaches. A crude Monte Carlo (MC) simulation is the standard technique for the estimation of this type of probability. However, this approach is computationally expensive, especially when dealing with rare events. Variance reduction techniques are alternative approaches that can improve the computational efficiency of naive MC simulations. We propose an Importance Sampling (IS) simulation technique based on the well-known hazard rate twisting approach, that presents the advantage of being asymptotically optimal for any arbitrary RVs. The wide scope of applicability of the proposed method is mainly due to our particular way of selecting the twisting parameter. It is worth observing that this interesting feature is rarely satisfied by variance reduction algorithms whose performances were only proven under some restrictive assumptions. It comes along with a good efficiency, illustrated by some selected simulation results comparing the performance of our method with that of an algorithm based on a conditional MC technique.
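The flavour of such an IS scheme can be sketched for the simplest case: i.i.d. Exp(1) variables with plain exponential twisting (not the paper's hazard rate twisting). Samples are drawn from the tilted density and reweighted by the likelihood ratio; the tilt that centres the twisted mean at the threshold is a standard heuristic assumed here for illustration.

```python
import math
import numpy as np

def rare_prob_is(n, gamma, theta, n_samples=100_000, seed=0):
    # Importance sampling for P(X1+...+Xn > gamma), Xi ~ Exp(1) i.i.d.
    # Draw from the exponentially twisted density Exp(1-theta) and reweight
    # each sample by the likelihood ratio (1-theta)^(-n) * exp(-theta * S).
    rng = np.random.default_rng(seed)
    s = rng.exponential(1.0 / (1.0 - theta), size=(n_samples, n)).sum(axis=1)
    weights = (1.0 - theta) ** (-n) * np.exp(-theta * s)
    return float(np.mean((s > gamma) * weights))

n, gamma = 5, 30.0
# Exact value: P(Gamma(n, 1) > gamma) = exp(-gamma) * sum_{k<n} gamma^k / k!
exact = math.exp(-gamma) * sum(gamma ** k / math.factorial(k) for k in range(n))
est = rare_prob_is(n, gamma, theta=1.0 - n / gamma)  # tilt so twisted mean = gamma
print(est, exact)
```

A crude MC estimate of a probability around 4e-9 would need on the order of 1e11 samples for any accuracy; the twisted estimator gets within a few percent with 1e5 samples because almost every twisted sample lands near the rare event.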
Efficient methods for overlapping group lasso.
Yuan, Lei; Liu, Jun; Ye, Jieping
2013-09-01
The group Lasso is an extension of the Lasso for feature selection on (predefined) nonoverlapping groups of features. The nonoverlapping group structure limits its applicability in practice. There have been several recent attempts to study a more general formulation where groups of features are given, potentially with overlaps between the groups. The resulting optimization is, however, much more challenging to solve due to the group overlaps. In this paper, we consider the efficient optimization of the overlapping group Lasso penalized problem. We reveal several key properties of the proximal operator associated with the overlapping group Lasso, and compute the proximal operator by solving the smooth and convex dual problem, which allows the use of gradient descent type algorithms for the optimization. Our methods and theoretical results are then generalized to tackle the general overlapping group Lasso formulation based on the ℓq norm. We further extend our algorithm to solve a nonconvex overlapping group Lasso formulation based on the capped norm regularization, which reduces the estimation bias introduced by the convex penalty. We have performed empirical evaluations using both a synthetic dataset and a breast cancer gene expression dataset, which consists of 8,141 genes organized into (overlapping) gene sets. Experimental results show that the proposed algorithm is more efficient than existing state-of-the-art algorithms. Results also demonstrate the effectiveness of the nonconvex formulation for overlapping group Lasso.
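For the simpler nonoverlapping case mentioned above, the proximal operator has a closed form, blockwise soft-thresholding, sketched below; the overlapping case requires solving the dual problem the paper describes and is not reproduced here.

```python
import numpy as np

def prox_group_lasso(v, groups, lam):
    # Blockwise soft-thresholding for NON-overlapping groups:
    #   prox(v_g) = max(0, 1 - lam / ||v_g||_2) * v_g  for each group g.
    out = np.zeros_like(v)
    for g in groups:
        vg = v[g]
        norm = np.linalg.norm(vg)
        if norm > lam:
            out[g] = (1.0 - lam / norm) * vg
    return out

v = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]
print(prox_group_lasso(v, groups, lam=1.0))  # second group is zeroed out
```

The first group (norm 5) is shrunk toward zero; the second group (norm about 0.14) falls below the threshold and is set exactly to zero, which is the group-level sparsity the penalty is designed to produce.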
Efficient computation method of Jacobian matrix
International Nuclear Information System (INIS)
Sasaki, Shinobu
1995-05-01
As is well known, the elements of the Jacobian matrix are complex trigonometric functions of the joint angles, resulting in a matrix of staggering complexity when written out in full. This article shows how these difficulties are overcome by using a velocity representation. The main point is that its recursive algorithm and computer algebra technologies allow us to derive the analytical formulation with no human intervention. In particular, it is to be noted that, compared with previous results, the elements are greatly simplified through the effective use of frame transformations. Furthermore, in the case of a spherical wrist, the present approach is shown to be computationally the most efficient. Owing to these advantages, the proposed method is useful for studying kinematically peculiar properties such as singularities. (author)
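As a generic illustration of the object being computed (not the paper's velocity-representation algorithm), the sketch below writes out the analytic Jacobian of a 2-link planar arm and cross-checks it against finite differences. The link lengths and joint angles are arbitrary.

```python
import math

def fk(t1, t2, l1=1.0, l2=0.7):
    # Forward kinematics of a 2-link planar arm: end-effector position (x, y)
    return (l1 * math.cos(t1) + l2 * math.cos(t1 + t2),
            l1 * math.sin(t1) + l2 * math.sin(t1 + t2))

def jacobian_2link(t1, t2, l1=1.0, l2=0.7):
    # Analytic Jacobian of fk: rows are (x, y), columns are (theta1, theta2)
    s1, s12 = math.sin(t1), math.sin(t1 + t2)
    c1, c12 = math.cos(t1), math.cos(t1 + t2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

# Cross-check against a forward-difference Jacobian
t1, t2, h = 0.3, 0.9, 1e-6
p0 = fk(t1, t2)
num = [[(fk(t1 + h, t2)[r] - p0[r]) / h,
        (fk(t1, t2 + h)[r] - p0[r]) / h] for r in range(2)]
J = jacobian_2link(t1, t2)
print(J)
```

Even for two links the entries are already compound trigonometric expressions; the abstract's point is that for 6-DOF arms such expressions explode unless frame transformations are used to simplify them systematically.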
Travel Efficiency Assessment Method: Three Case Studies
This slide presentation summarizes three case studies EPA conducted in partnership with Boston, Kansas City, and Tucson, to assess the potential benefits of employing travel efficiency strategies in these areas.
Simplified method for calculating SNCR system efficiency
Directory of Open Access Journals (Sweden)
Pronobis Marek
2017-01-01
Full Text Available SNCR (Selective Non-Catalytic Reduction) technology is aimed at reducing NOx emissions. SNCR efficiency is appropriately high only within the reaction temperature range called 'the SNCR temperature window'. This is a narrow temperature range defined in various ways in the literature, which makes it difficult to evaluate the DeNOx system's efficiency. Therefore, this study attempts to approximate the relationship between SNCR system efficiency and the flue gas temperature. The approximation was performed on the basis of literature data and verified using data from an experiment. Measurements were performed in a Polish boiler with a maximum continuous rating of 230 t/h. The verified function can be used to forecast the efficiency of SNCR systems in existing units that use urea or ammonia as a reagent. The approximation results are polynomial functions of the flue gas temperature, which fit the literature data with a coefficient of determination R2 = 0.83-0.86. These equations could therefore be used by the designer or operator of the boiler for a preliminary determination of the current SNCR system efficiency.
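The fitting procedure itself is straightforward to reproduce: fit a low-order polynomial of efficiency against flue gas temperature and report R². The data points below are synthetic stand-ins shaped like a typical SNCR temperature window; they are not the paper's measurements, and the paper's own polynomials are not reproduced here.

```python
import numpy as np

# Synthetic (temperature, efficiency) points with a peak near 950 degC,
# standing in for measured SNCR data
T = np.array([850, 880, 910, 940, 970, 1000, 1030, 1060], dtype=float)
eta = np.array([0.12, 0.28, 0.46, 0.58, 0.57, 0.45, 0.27, 0.10])

# Centre the temperatures before fitting to keep the problem well conditioned
Tc = T - T.mean()
coeffs = np.polyfit(Tc, eta, deg=2)      # eta ~ a*Tc^2 + b*Tc + c
eta_hat = np.polyval(coeffs, Tc)

ss_res = np.sum((eta - eta_hat) ** 2)
ss_tot = np.sum((eta - eta.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
print(round(r2, 3))
```

A quadratic captures the window's rise-and-fall shape well; the same R² computation is how a fit in the 0.83-0.86 range, as reported in the abstract, would be assessed.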
New methods in efficient coal transportation
Energy Technology Data Exchange (ETDEWEB)
Monroe, C.O.; Wolach, D.G.; Alexander, A.B. [Savage Industries Inc., Salt Lake City, UT (United States)
1998-10-01
With the increasing trend towards railroad mergers in the USA, there is a growing awareness of competition and of the need for railroads to ensure a better value service. This paper discusses the concept of business process outsourcing and its potential to provide an efficient and integrated transport system for coal handling. Examples at US coal distribution facilities are given. 6 photos., 1 fig.
EPA’s Travel Efficiency Method (TEAM) AMPO Presentation
Presentation describes EPA’s Travel Efficiency Assessment Method (TEAM) assessing potential travel efficiency strategies for reducing travel activity and emissions, includes reduction estimates in Vehicle Miles Traveled in four different geographic areas.
Time-efficient multidimensional threshold tracking method
DEFF Research Database (Denmark)
Fereczkowski, Michal; Kowalewski, Borys; Dau, Torsten
2015-01-01
Traditionally, adaptive methods have been used to reduce the time it takes to estimate psychoacoustic thresholds. However, even with adaptive methods, there are many cases where the testing time is too long to be clinically feasible, particularly when estimating thresholds as a function of another...
Efficient pseudospectral methods for density functional calculations
International Nuclear Information System (INIS)
Murphy, R. B.; Cao, Y.; Beachy, M. D.; Ringnalda, M. N.; Friesner, R. A.
2000-01-01
Novel improvements of the pseudospectral method for assembling the Coulomb operator are discussed. These improvements consist of a fast atom-centered multipole method and a variation of the Head-Gordon J-engine analytic integral evaluation. The details of the methodology are discussed and performance evaluations are presented for larger molecules within the context of DFT energy and gradient calculations. (c) 2000 American Institute of Physics
Statistical methods towards more efficient infiltration measurements.
Franz, T; Krebs, P
2006-01-01
A comprehensive knowledge of the infiltration situation in a catchment is required for operation and maintenance. Due to the high expenditure involved, optimisation of the necessary measurement campaigns is essential. Methods based on multivariate statistics were developed to improve the information yield of measurements by identifying appropriate gauge locations. The methods place few demands on the data and were successfully tested on real and artificial data. For suitable catchments, it is estimated that the optimisation potential amounts to up to 30% accuracy improvement compared with non-optimised gauge distributions. Besides this, a correlation between independent reach parameters and dependent infiltration rates could be identified, which is not dominated by the groundwater head.
Efficient protein structure search using indexing methods.
Kim, Sungchul; Sael, Lee; Yu, Hwanjo
2013-01-01
Understanding the functions of proteins is one of the most important challenges in many studies of biological processes. The function of a protein can be predicted by analyzing the functions of structurally similar proteins, so finding structurally similar proteins accurately and efficiently in a large set of proteins is crucial. A protein structure can be represented as a vector by the 3D-Zernike Descriptor (3DZD), which compactly represents the surface shape of the protein tertiary structure. This simplified representation accelerates the search process. However, computing the similarity of two protein structures is still computationally expensive, so it is hard to efficiently process many simultaneous requests for structurally similar protein search. This paper proposes indexing techniques which substantially reduce the search time for finding structurally similar proteins. In particular, we first exploit two indexing techniques, iDistance and iKernel, on the 3DZDs. We then extend the techniques to further improve the search speed for protein structures. The extended indexing techniques build and use a reduced index constructed from the first few attributes of the 3DZDs of the protein structures. To retrieve the top-k similar structures, the top-(10 × k) similar structures are first found using the reduced index, and the top-k structures are selected among them. We also modify the indexing techniques to support θ-based nearest neighbor search, which returns the data points within distance θ of the query point. The results show that both iDistance and iKernel significantly enhance the search speed. In top-k nearest neighbor search, the search time is reduced by 69.6%, 77%, 77.4% and 87.9% using iDistance, iKernel, the extended iDistance, and the extended iKernel, respectively. In θ-based nearest neighbor search, the search time is reduced by 80%, 81%, 95.6% and 95.6% using iDistance, iKernel, the extended iDistance, and the extended iKernel, respectively.
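The two-stage retrieval idea (coarse filtering on a reduced index built from the first few attributes, then exact re-ranking of the over-fetched candidates) can be sketched with plain arrays as follows. The dimensionality, the number of reduced attributes and the random data are illustrative assumptions; this is not the iDistance or iKernel structure itself.

```python
import numpy as np

def two_stage_topk(db, q, k, m=8, overfetch=10):
    # Stage 1: coarse filter using only the first m attributes (the
    # "reduced index"), keeping overfetch*k candidates.
    coarse = np.linalg.norm(db[:, :m] - q[:m], axis=1)
    cand = np.argsort(coarse)[: overfetch * k]
    # Stage 2: exact distances on the full vectors, only for the candidates
    exact = np.linalg.norm(db[cand] - q, axis=1)
    return cand[np.argsort(exact)[:k]]

rng = np.random.default_rng(0)
db = rng.standard_normal((1000, 121))   # e.g. 3DZD-like 121-D descriptors
q = db[42] + 0.01 * rng.standard_normal(121)  # query near structure 42
print(two_stage_topk(db, q, k=3))
```

Stage 2 computes full-dimensional distances for only overfetch*k vectors instead of the whole database, which is where the reported speed-ups come from; the overfetch factor trades recall against work, mirroring the paper's top-(10 × k) choice.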
Efficient Training Methods for Conditional Random Fields
2008-02-01
Computationally efficient methods for digital control
Guerreiro Tome Antunes, D.J.; Hespanha, J.P.; Silvestre, C.J.; Kataria, N.; Brewer, F.
2008-01-01
The problem of designing a digital controller is considered with the novelty of explicitly taking into account the computation cost of the controller implementation. A class of controller emulation methods inspired by numerical analysis is proposed. Through various examples it is shown that these
Efficient orbit integration by manifold correction methods.
Fukushima, Toshio
2005-12-01
Triggered by a desire to investigate, numerically, the planetary precession through a long-term numerical integration of the solar system, we developed a new formulation of numerical integration of orbital motion named manifold correction methods. The main trick is to rigorously retain the consistency of physical relations, such as the orbital energy, the orbital angular momentum, or the Laplace integral, of a binary subsystem. This maintenance is done by applying a correction to the integrated variables at each integration step. Typical corrections are certain geometric transformations, such as spatial scaling and spatial rotation, which are commonly used in the comparison of reference frames, or mathematically reasonable operations, such as modularization of angle variables into the standard domain [-pi, pi). The final form of the manifold correction methods is the orbital longitude methods, which enable us to conduct an extremely precise integration of orbital motions. In unperturbed orbits, the integration errors are suppressed at the machine-epsilon level for an indefinitely long period. In perturbed cases, on the other hand, the errors initially grow in proportion to the square root of time and then increase more rapidly, the onset of which depends on the type and magnitude of the perturbations. This feature also holds for highly eccentric orbits by applying the same idea as used in KS-regularization. In particular, the introduction of time elements greatly enhances the performance of numerical integration of KS-regularized orbits, whether the scaling is applied or not.
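As a concrete illustration of the correction idea (not the authors' orbital longitude formulation), the sketch below integrates a planar Kepler orbit with leapfrog and, after each step, rescales the velocity so the state is projected back onto the constant-energy manifold. All names, units, and the choice of a velocity-scaling correction are illustrative assumptions:

```python
import math

MU = 1.0  # gravitational parameter (illustrative units)

def accel(r):
    d = math.hypot(r[0], r[1])
    return (-MU * r[0] / d**3, -MU * r[1] / d**3)

def energy(r, v):
    # orbital energy integral: kinetic minus potential
    return 0.5 * (v[0]**2 + v[1]**2) - MU / math.hypot(r[0], r[1])

def leapfrog(r, v, h):
    ax, ay = accel(r)
    vx, vy = v[0] + 0.5 * h * ax, v[1] + 0.5 * h * ay
    r = (r[0] + h * vx, r[1] + h * vy)
    ax, ay = accel(r)
    return r, (vx + 0.5 * h * ax, vy + 0.5 * h * ay)

def corrected_step(r, v, h, e0):
    r, v = leapfrog(r, v, h)
    # manifold correction: rescale the velocity so the integrated
    # state satisfies the energy relation exactly again
    ke_target = e0 + MU / math.hypot(r[0], r[1])
    ke = 0.5 * (v[0]**2 + v[1]**2)
    if ke_target > 0 and ke > 0:
        s = math.sqrt(ke_target / ke)
        v = (s * v[0], s * v[1])
    return r, v
```

After every corrected step the energy error is reduced to rounding level, which is the mechanism behind the machine-epsilon error behavior reported for unperturbed orbits.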
Method for Household Refrigerators Efficiency Increasing
Lebedev, V. V.; Sumzina, L. V.; Maksimov, A. V.
2017-11-01
The relevance of optimizing working-process parameters in air conditioning systems is demonstrated in this work. The research is performed using the simulation modeling method. The parameter optimization criteria are considered, an analysis of the target functions is given, and the key factors of technical and economic optimization are discussed in the article. The search for the optimal solution in the multi-objective optimization of the system is made by finding the minimum of the dual-target vector created by the Pareto method of linear and weighted compromises from the target functions of total capital costs and total operating costs. The tasks are solved in the MathCAD environment. The research results show that the values of the technical and economic parameters of air conditioning systems outside the regions of optimal solutions deviate considerably from the minimum values. At the same time, both the capital investments and the operating costs deviate increasingly strongly as the technical parameters move away from their optimal values. The production and operation of conditioners with parameters that deviate considerably from the optimal values will increase material and power costs. The research allows one to establish the borders of the region of optimal values for the technical and economic parameters in the design of air conditioning systems.
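The weighted-compromise step in such a bi-objective search can be illustrated generically: scalarize the two cost functions with a sweep of weights and collect the minimizers, which trace out (the convex part of) the Pareto front. The single-parameter cost functions below are hypothetical stand-ins for capital and operating costs:

```python
def pareto_by_weighted_sum(xs, f_capital, f_operating, n_weights=21):
    # Linear (weighted-sum) compromise: each weight w picks one point
    # on the Pareto front of the two objectives (for convex fronts).
    front = set()
    for i in range(n_weights):
        w = i / (n_weights - 1)
        best = min(xs, key=lambda x: w * f_capital(x) + (1 - w) * f_operating(x))
        front.add(best)
    return sorted(front)

# hypothetical single-parameter cost models
capital = lambda x: (x - 2.0) ** 2 + 1.0     # minimal at x = 2
operating = lambda x: (x - 5.0) ** 2 + 2.0   # minimal at x = 5
```

Every Pareto-optimal parameter then lies between the two single-objective optima; designs outside that band are dominated, matching the abstract's observation that parameters far from the optimal region inflate both cost components.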
An assessment of diagnostic efficiency by Taguchi/DEA methods.
Taner, Mehmet Tolga; Sezen, Bulent
2009-01-01
The aim of this paper is to propose a new, objective and consistent method for the calculation of the diagnostic efficiency in medical applications. In this study, a hybrid method of Taguchi and DEA is proposed. This method reflects the diversity of inputs and outputs by incorporating the stepwise application of sensitivity, specificity, leveling threshold, and efficiency score. A hypothetical case study is given which involves eight readers of X-ray films in clinical radiology. The selected pairs of sensitivity and specificity yielded two efficient readers. After super efficiency analysis, Reader 6 is found to be the most efficient reader. The paper presents a new, objective and consistent method for the calculation of the diagnostic efficiency in medical applications.
New efficient methods for calculating watersheds
International Nuclear Information System (INIS)
Fehr, E; Andrade, J S Jr; Herrmann, H J; Kadau, D; Moukarzel, C F; Da Cunha, S D; Da Silva, L R; Oliveira, E A
2009-01-01
We present an advanced algorithm for the determination of watershed lines on digital elevation models (DEMs) which is based on the iterative application of invasion percolation (IP). The main advantage of our method over previously proposed ones is that it has a sub-linear time-complexity. This enables us to process systems comprising up to 10^8 sites in a few CPU seconds. Using our algorithm we are able to demonstrate, convincingly and with high accuracy, the fractal character of watershed lines. We find the fractal dimension of watersheds to be D_f = 1.211 ± 0.001 for artificial landscapes, D_f = 1.10 ± 0.01 for the Alps, and D_f = 1.11 ± 0.01 for the Himalayas.
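A simplified sketch of the basin-growing idea: the priority-flood variant below grows basins from outlet seeds in order of increasing elevation, which captures the spirit of percolation-style flooding (the authors' actual sub-linear IP iteration is more sophisticated), and reads the watershed line off where two basins meet. All names are illustrative:

```python
import heapq

def watershed_line(dem, seeds):
    """Label basins by flooding from seed cells in order of increasing
    elevation; return the labels and the cell pairs where basins meet."""
    rows, cols = len(dem), len(dem[0])
    label = [[0] * cols for _ in range(rows)]
    heap = []
    for lab, (i, j) in enumerate(seeds, start=1):
        label[i][j] = lab
        heapq.heappush(heap, (dem[i][j], i, j))
    ridge = set()
    while heap:
        _, i, j = heapq.heappop(heap)  # lowest unprocessed cell first
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < rows and 0 <= nj < cols:
                if label[ni][nj] == 0:
                    label[ni][nj] = label[i][j]   # basin grows
                    heapq.heappush(heap, (dem[ni][nj], ni, nj))
                elif label[ni][nj] != label[i][j]:
                    # two different basins touch: watershed line
                    ridge.add((min((i, j), (ni, nj)), max((i, j), (ni, nj))))
    return label, ridge
```

On a DEM with a high central ridge, the returned `ridge` set traces that divide between the two outlet basins.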
Efficiency profile method to study the hit efficiency of drift chambers
International Nuclear Information System (INIS)
Abyzov, A.; Bel'kov, A.; Lanev, A.; Spiridonov, A.; Walter, M.; Hulsbergen, W.
2002-01-01
A method based on the usage of efficiency profile is proposed to estimate the hit efficiency of drift chambers with a large number of channels. The performance of the method under real conditions of the detector operation has been tested analysing the experimental data from the HERA-B drift chambers
An Evaluation of the Efficiency of Different Hygienisation Methods
Zrubková, M.
2017-10-01
The aim of this study is to evaluate the efficiency of hygienisation by pasteurisation, temperature-phased anaerobic digestion and sludge liming. A summary of the legislation concerning sludge treatment, disposal and recycling is included. The hygienisation methods are compared not only in terms of hygienisation efficiency but a comparison of other criteria is also included.
Efficient decomposition and linearization methods for the stochastic transportation problem
International Nuclear Information System (INIS)
Holmberg, K.
1993-01-01
The stochastic transportation problem can be formulated as a convex transportation problem with nonlinear objective function and linear constraints. We compare several different methods based on decomposition techniques and linearization techniques for this problem, trying to find the most efficient method or combination of methods. We discuss and test a separable programming approach, the Frank-Wolfe method with and without modifications, the new technique of mean value cross decomposition and the more well known Lagrangian relaxation with subgradient optimization, as well as combinations of these approaches. Computational tests are presented, indicating that some new combination methods are quite efficient for large scale problems. (authors) (27 refs.)
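The Frank-Wolfe method compared above reduces each iteration to a linear subproblem over the feasible polytope (for the transportation problem, a transportation LP). The sketch below shows the bare principle on a probability simplex, where the linear subproblem has a closed-form solution; it illustrates the algorithm class, not the paper's implementation:

```python
def frank_wolfe_simplex(grad, x, iters=5000):
    """Frank-Wolfe on the probability simplex: the linear subproblem
    min_s <grad(x), s> over the simplex is solved by putting all mass
    on the coordinate with the smallest partial derivative."""
    for t in range(iters):
        g = grad(x)
        i = min(range(len(x)), key=lambda j: g[j])  # vertex of the simplex
        gamma = 2.0 / (t + 2.0)                     # classical step-size rule
        x = [(1.0 - gamma) * xj for xj in x]        # move toward that vertex
        x[i] += gamma
    return x
```

Because each iterate is a convex combination of feasible points, the method never leaves the polytope, which is why it suits linearly constrained convex problems such as the stochastic transportation problem.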
New Efficient Fourth Order Method for Solving Nonlinear Equations
Directory of Open Access Journals (Sweden)
Farooq Ahmad
2013-12-01
In a paper [Appl. Math. Comput., 188 (2) (2007) 1587-1591], the authors suggested and analyzed a method for solving nonlinear equations. In the present work, we modify this method by using a finite difference scheme, which has quintic convergence. We compare this modified Halley method with some other iterative methods of fifth-order convergence, which shows that this new method, having fourth-order convergence, is efficient.
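For context, the sketch below implements Ostrowski's classical two-step method, a well-known fourth-order scheme of the same order class; it is not the modified Halley iteration of the paper, just a representative of what fourth-order root finding looks like:

```python
def ostrowski(f, df, x, tol=1e-12, max_iter=50):
    # Ostrowski's two-step method: a Newton predictor followed by a
    # corrector; fourth-order convergent with three function
    # evaluations per iteration.
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        dfx = df(x)
        y = x - fx / dfx                          # Newton predictor
        fy = f(y)
        x = y - fy * fx / ((fx - 2.0 * fy) * dfx) # corrector
    return x
```

A handful of iterations from a reasonable starting point typically drives the residual to machine precision, the practical payoff of fourth-order convergence.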
Efficient numerical method for district heating system hydraulics
International Nuclear Information System (INIS)
Stevanovic, Vladimir D.; Prica, Sanja; Maslovaric, Blazenka; Zivkovic, Branislav; Nikodijevic, Srdjan
2007-01-01
An efficient method for the numerical simulation and analysis of the steady-state hydraulics of complex pipeline networks is presented. It is based on the loop model of the network and the method of square roots for solving the system of linear equations. The procedure is presented in a comprehensive mathematical form that can be straightforwardly programmed into a computer code. An application of the method to energy efficiency analyses of a real complex district heating system is demonstrated. The obtained results show a potential for electricity savings in pump operation. It is shown that the method is considerably more effective than the standard Hardy Cross method still widely used in engineering practice. Because of its ease of implementation and high efficiency, the method presented in this paper is recommended for steady-state hydraulic calculations of complex networks.
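For reference, the Hardy Cross loop-balancing iteration that the abstract benchmarks against can be sketched in a few lines; the pipe data, head-loss law (r·q·|q|), and names are illustrative assumptions:

```python
def hardy_cross(loops, r, q, iters=100):
    """loops: each loop is a list of (pipe, sign) pairs, sign = +1 if the
    pipe is traversed in the loop direction; r: pipe resistances;
    q: initial flows already satisfying node continuity (dict pipe -> flow).
    Head loss per pipe is modeled as r * q * |q|."""
    for _ in range(iters):
        for loop in loops:
            # flow correction that zeroes the net head loss around the loop
            num = sum(s * r[p] * q[p] * abs(q[p]) for p, s in loop)
            den = sum(2.0 * r[p] * abs(q[p]) for p, _ in loop)
            dq = -num / den
            for p, s in loop:
                q[p] += s * dq  # same correction applied around the loop
    return q
```

Each loop is corrected one at a time, which is exactly the locality that makes Hardy Cross slow on large interconnected networks compared with solving all loop equations simultaneously.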
Method for calculating annual energy efficiency improvement of TV sets
International Nuclear Information System (INIS)
Varman, M.; Mahlia, T.M.I.; Masjuki, H.H.
2006-01-01
The popularization of 24 h pay-TV, interactive video games, web-TV, VCD and DVD is poised to have a large impact on overall TV electricity consumption in Malaysia. Given this increased consumption, energy efficiency standards present a highly effective measure for decreasing electricity consumption in the residential sector. The main problem in setting an energy efficiency standard is identifying the annual efficiency improvement, due to the lack of time-series statistical data in developing countries. This study presents a method of calculating the annual energy efficiency improvement of TV sets, which can be used for implementing energy efficiency standards for TV sets in Malaysia and other developing countries. Although the presented result is only an approximation, it is one way of establishing an energy standard. Furthermore, the method can be applied to other appliances without any major modification.
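The core quantity in such an analysis is a compound annual improvement rate inferred from two efficiency observations; the sketch below shows that generic formula and its forward projection (the paper's own estimation procedure involves more than this):

```python
def annual_improvement_rate(eff_start, eff_end, years):
    # Compound annual rate r such that eff_start * (1 + r)**years == eff_end.
    return (eff_end / eff_start) ** (1.0 / years) - 1.0

def project_efficiency(eff_start, rate, years):
    # Forward projection used when setting a future standard level.
    return eff_start * (1.0 + rate) ** years
```

With only two data points this is necessarily an approximation, which is consistent with the abstract's caveat about scarce time-series data in developing countries.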
An efficient method for sampling the essential subspace of proteins
Amadei, A; Linssen, A.B M; de Groot, B.L.; van Aalten, D.M.F.; Berendsen, H.J.C.
A method is presented for a more efficient sampling of the configurational space of proteins as compared to conventional sampling techniques such as molecular dynamics. The method is based on the large conformational changes in proteins revealed by the ''essential dynamics'' analysis. A form of
An efficient Korringa-Kohn-Rostoker method for ''complex'' lattices
International Nuclear Information System (INIS)
Yussouff, M.; Zeller, R.
1980-10-01
We present a modification of the exact KKR-band structure method which uses (a) a new energy expansion for structure constants and (b) only the reciprocal lattice summation. It is quite efficient and particularly useful for 'complex' lattices. The band structure of hexagonal-close-packed Beryllium at symmetry points is presented as an example of this method. (author)
Method for determining efficiency in a liquid scintillation system
International Nuclear Information System (INIS)
Laney, B.H.
1975-01-01
In a liquid scintillation system utilizing plural photomultiplier means, a method for determining the efficiency of coincident pulse detection. Various incremental counting efficiency levels are associated with asymptotic functions in a two-dimensional matrix in which the abscissa and ordinate correspond to the pulse heights of each of a pair of coincident pulses from different photomultiplier means. An efficiency determining point is located in the matrix based on the sum of the pulse heights of the coincident pulses as well as on the amplitude of the smallest pulse of the coincident pulses. The single counting efficiency determining point is recorded as the level of efficiency at which the photomultiplier means detect scintillations that generate coincident pulses having pulse heights equal to those recorded. (Patent Office Record)
Modeling of Methods to Control Heat-Consumption Efficiency
Tsynaeva, E. A.; Tsynaeva, A. A.
2016-11-01
In this work, consideration has been given to thermophysical processes in automated heat consumption control systems (AHCCSs) of buildings, flow diagrams of these systems, and mathematical models describing the thermophysical processes during the systems' operation; an analysis of adequacy of the mathematical models has been presented. A comparison has been made of the operating efficiency of the systems and the methods to control the efficiency. It has been determined that the operating efficiency of an AHCCS depends on its diagram and the temperature chart of central quality control (CQC) and also on the temperature of a low-grade heat source for the system with a heat pump.
An efficient unstructured WENO method for supersonic reactive flows
Zhao, Wen-Geng; Zheng, Hong-Wei; Liu, Feng-Jun; Shi, Xiao-Tian; Gao, Jun; Hu, Ning; Lv, Meng; Chen, Si-Cong; Zhao, Hong-Da
2018-03-01
An efficient high-order numerical method for supersonic reactive flows is proposed in this article. The reactive source term and convection term are solved separately by splitting scheme. In the reaction step, an adaptive time-step method is presented, which can improve the efficiency greatly. In the convection step, a third-order accurate weighted essentially non-oscillatory (WENO) method is adopted to reconstruct the solution in the unstructured grids. Numerical results show that our new method can capture the correct propagation speed of the detonation wave exactly even in coarse grids, while high order accuracy can be achieved in the smooth region. In addition, the proposed adaptive splitting method can reduce the computational cost greatly compared with the traditional splitting method.
A Method for Determining Optimal Residential Energy Efficiency Packages
Energy Technology Data Exchange (ETDEWEB)
Polly, B. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Gestwick, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Bianchi, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Anderson, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Horowitz, S. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Christensen, C. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Judkoff, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States)
2011-04-01
This report describes an analysis method for determining optimal residential energy efficiency retrofit packages and, as an illustrative example, applies the analysis method to a 1960s-era home in eight U.S. cities covering a range of International Energy Conservation Code (IECC) climate regions. The method uses an optimization scheme that considers average energy use (determined from building energy simulations) and equivalent annual cost to recommend optimal retrofit packages specific to the building, occupants, and location.
An efficient multilevel optimization method for engineering design
Vanderplaats, G. N.; Yang, Y. J.; Kim, D. S.
1988-01-01
An efficient multilevel design optimization technique is presented. The proposed method is based on the concept of providing linearized information between the system-level and subsystem-level optimization tasks. The advantages of the method are that it does not require optimum sensitivities, nonlinear equality constraints are not needed, and the method is relatively easy to use. The disadvantage is that the coupling between subsystems is not dealt with in a precise mathematical manner.
76 FR 21673 - Alternative Efficiency Determination Methods and Alternate Rating Methods
2011-04-18
... EERE-2011-BP-TP-00024] RIN 1904-AC46 Alternative Efficiency Determination Methods and Alternate Rating Methods AGENCY: Office of Energy Efficiency and Renewable Energy, Department of Energy. ACTION: Notice of... and data related to the use of computer simulations, mathematical methods, and other alternative...
An efficient method for evaluating RRAM crossbar array performance
Song, Lin; Zhang, Jinyu; Chen, An; Wu, Huaqiang; Qian, He; Yu, Zhiping
2016-06-01
An efficient method is proposed in this paper to mitigate the computational burden of resistive random access memory (RRAM) array simulation. In the worst-case scenario, a 4 Mb RRAM array with line resistance is reduced to a much smaller equivalent array using this method. For 1S1R-RRAM array structures, static and statistical parameters in both reading and writing processes are simulated. Error analysis is performed to prove the reliability of the algorithm when the line resistance is extremely small compared with the junction resistance. Results show that high precision is maintained even if the size of the RRAM array is reduced one thousand times, which indicates significant improvements in both computational efficiency and memory requirements.
Application of an efficient Bayesian discretization method to biomedical data
Directory of Open Access Journals (Sweden)
Gopalakrishnan Vanathi
2011-07-01
Background: Several data mining methods require data that are discrete, and other methods often perform better with discrete data. We introduce an efficient Bayesian discretization (EBD) method for optimal discretization of variables that runs efficiently on high-dimensional biomedical datasets. The EBD method consists of two components, namely, a Bayesian score to evaluate discretizations and a dynamic programming search procedure to efficiently search the space of possible discretizations. We compared the performance of EBD to Fayyad and Irani's (FI) discretization method, which is commonly used for discretization. Results: On 24 biomedical datasets obtained from high-throughput transcriptomic and proteomic studies, the classification performance of the C4.5 classifier and the naïve Bayes classifier was statistically significantly better when the predictor variables were discretized using EBD rather than FI. EBD was statistically significantly more stable to the variability of the datasets than FI. However, EBD was less robust, though not statistically significantly so, than FI, and produced slightly more complex discretizations than FI. Conclusions: On a range of biomedical datasets, a Bayesian discretization method (EBD) yielded better classification performance and stability but was less robust than the widely used FI discretization method. The EBD discretization method is easy to implement, permits the incorporation of prior knowledge and belief, and is sufficiently fast for application to high-dimensional data.
Simple Methods to Approximate CPC Shape to Preserve Collection Efficiency
Directory of Open Access Journals (Sweden)
David Jafrancesco
2012-01-01
The compound parabolic concentrator (CPC) is the most efficient reflective geometry for collecting light to an exit port. However, to allow its actual use in solar plants or photovoltaic concentration systems, a tradeoff between system efficiency and cost reduction, the two key issues for sunlight exploitation, must be found. In this work, we analyze various methods to model an approximated CPC that is simpler and more cost-effective than the ideal one while preserving the system efficiency. The ease of manufacturing arises from the use of truncated conic surfaces only, which can be produced by cheap machining techniques. We compare different configurations on the basis of their collection efficiency, evaluated by means of nonsequential ray-tracing software. Moreover, because some configurations are beam dependent and to approximate a real case more closely, the input beam is simulated as nonsymmetric, with a nonconstant irradiance on the CPC internal surface.
Removing efficiency of radon from water by different methods
International Nuclear Information System (INIS)
Muellerova, M.; Holy, K.; Gulasova, Z.; Polaskova, A.
2008-01-01
In this contribution, the removal of radon from water samples by different methods was tested. The lowest deemanation efficiency was achieved by tossing water from one vessel into another. To increase the deemanation efficiency, the needle-bath principle was also used. A low deemanation efficiency was likewise found when trapping radon from a water sample with toluene, (83 ± 5)%. Conversely, the highest deemanation efficiency of radon from water was reached by aerating with argon, (95 ± 6)%. It is shown that reducing the volume activity of radon in water below 0.1 Bq/dm³ is a serious problem. Getting below this limit will require more complete and sophisticated approaches. (author)
Efficient forced vibration reanalysis method for rotating electric machines
Saito, Akira; Suzuki, Hiromitsu; Kuroishi, Masakatsu; Nakai, Hideo
2015-01-01
Rotating electric machines are subject to forced vibration by magnetic force excitation with a wide-band frequency spectrum that depends on the operating conditions. Therefore, when designing electric machines, it is essential to compute the vibration response of the machines at various operating conditions efficiently and accurately. This paper presents an efficient frequency-domain vibration analysis method for electric machines. The method enables efficient re-analysis of the vibration response of electric machines at various operating conditions without the need to re-compute the harmonic response by finite element analyses. The theoretical background of the proposed method is provided, which is based on the modal reduction of the magnetic force excitation by a set of amplitude-modulated standing waves. The method is applied to the forced vibration response of an interior permanent magnet motor at a fixed operating condition. The results computed by the proposed method agree very well with those computed by the conventional harmonic response analysis by the FEA. The proposed method is then applied to the spin-up test condition to demonstrate its applicability to various operating conditions. It is observed that the proposed method can successfully be applied to the spin-up test conditions, and the measured dominant frequency peaks in the frequency response can be well captured by the proposed approach.
An efficient method for solving fractional Sturm-Liouville problems
International Nuclear Information System (INIS)
Al-Mdallal, Qasem M.
2009-01-01
The numerical approximation of the eigenvalues and the eigenfunctions of the fractional Sturm-Liouville problems, in which the second order derivative is replaced by a fractional derivative, is considered. The present results can be implemented on the numerical solution of the fractional diffusion-wave equation. The results show the simplicity and efficiency of the numerical method.
Cattle slurry on grassland - application methods and nitrogen use efficiency
Lalor, S.T.J.
2014-01-01
Cattle slurry represents a significant resource on grassland-based farming systems. The objective of this thesis was to investigate and devise cattle slurry application methods and strategies that can be implemented on grassland farms to improve the efficiency with which nitrogen (N) in
The relative efficiency of three methods of estimating herbage mass ...
African Journals Online (AJOL)
The methods involved were randomly placed circular quadrats; randomly placed narrow strips; and disc meter sampling. Disc meter and quadrat sampling appear to be more efficient than strip sampling. In a subsequent small plot grazing trial the estimates of herbage mass, using the disc meter, had a consistent precision ...
Efficiency analysis of hydrogen production methods from biomass
Ptasinski, K.J.
2008-01-01
Abstract: Hydrogen is considered as a universal energy carrier for the future, and biomass has the potential to become a sustainable source of hydrogen. This article presents an efficiency analysis of hydrogen production processes from a variety of biomass feedstocks by a thermochemical method –
An Automatic High Efficient Method for Dish Concentrator Alignment
Directory of Open Access Journals (Sweden)
Yong Wang
2014-01-01
for the alignment of faceted solar dish concentrator. The isosceles triangle configuration of facet’s footholds determines a fixed relation between light spot displacements and foothold movements, which allows an automatic determination of the amount of adjustments. Tests on a 25 kW Stirling Energy System dish concentrator verify the feasibility, accuracy, and efficiency of our method.
Simple and efficient methods for isolation and activity measurement ...
African Journals Online (AJOL)
Jane
2011-06-29
Key words: hirudin, thrombin titration method, chromatography, purification.
A Memory and Computation Efficient Sparse Level-Set Method
Laan, Wladimir J. van der; Jalba, Andrei C.; Roerdink, Jos B.T.M.
Since its introduction, the level set method has become the favorite technique for capturing and tracking moving interfaces, and found applications in a wide variety of scientific fields. In this paper we present efficient data structures and algorithms for tracking dynamic interfaces through the
Efficient method for transport simulations in quantum cascade lasers
Directory of Open Access Journals (Sweden)
Maczka Mariusz
2017-01-01
An efficient method for simulating quantum transport in quantum cascade lasers is presented. The calculations are performed within a simple approximation inspired by Büttiker probes and based on a finite model for semiconductor superlattices. The formalism of non-equilibrium Green's functions is applied to determine selected transport parameters in a typical structure of a terahertz laser. The results are compared with those obtained for an infinite model as well as with other methods described in the literature.
Methods of Efficient Study Habits and Physics Learning
Zettili, Nouredine
2010-02-01
We want to discuss the methods of efficient study habits and how they can be used by students to help them improve learning physics. In particular, we deal with the most efficient techniques needed to help students improve their study skills. We focus on topics such as how to develop long-term memory, how to improve concentration power, how to take class notes, how to prepare for and take exams, and how to study scientific subjects such as physics. We argue that students who conscientiously use the methods of efficient study habits achieve higher results than those who do not; moreover, a student equipped with the proper study skills will spend much less time learning a subject than a student who has no good study habits. The underlying issue here is not the quantity of time allocated to study efforts by the students, but the efficiency and quality of their actions, so that the student can function at peak efficiency. These ideas were developed as part of Project IMPACTSEED (IMproving Physics And Chemistry Teaching in SEcondary Education), an outreach grant funded by the Alabama Commission on Higher Education. This project is motivated by a major pressing local need: a large number of high school physics teachers teach out of field.
Efficient and robust implementation of the TLISMNI method
Aboubakr, Ahmed K.; Shabana, Ahmed A.
2015-09-01
The dynamics of large scale and complex multibody systems (MBS) that include flexible bodies and contact/impact pairs is governed by stiff equations. Because explicit integration methods can be inefficient and often fail in the case of stiff problems, the use of implicit numerical integration methods is recommended in this case. This paper presents a new and efficient implementation of the two-loop implicit sparse matrix numerical integration (TLISMNI) method proposed for the solution of constrained rigid and flexible MBS differential and algebraic equations. The TLISMNI method has desirable features that include avoiding numerical differentiation of the forces, allowing for an efficient sparse matrix implementation, and ensuring that the kinematic constraint equations are satisfied at the position, velocity and acceleration levels. In this method, a sparse Lagrangian augmented form of the equations of motion that ensures that the constraints are satisfied at the acceleration level is used to solve for all the accelerations and Lagrange multipliers. The generalized coordinate partitioning or recursive methods can be used to satisfy the constraint equations at the position and velocity levels. In order to improve the efficiency and robustness of the TLISMNI method, the simple iteration and the Jacobian-Free Newton-Krylov approaches are used in this investigation. The new implementation is tested using several low order formulas that include Hilber-Hughes-Taylor (HHT), L-stable Park, A-stable Trapezoidal, and A-stable BDF methods. The HHT method allows for including numerical damping. Discussion on which method is more appropriate to use for a certain application is provided. The paper also discusses TLISMNI implementation issues including the step size selection, the convergence criteria, the error control, and the effect of the numerical damping. The use of the computer algorithm described in this paper is demonstrated by solving complex rigid and flexible tracked
Efficient modeling of chiral media using SCN-TLM method
Directory of Open Access Journals (Sweden)
Yaich M.I.
2004-01-01
An efficient approach is presented that allows linear bi-isotropic chiral materials to be included in time-domain transmission line matrix (TLM) calculations by employing recursive evaluation of the convolution of the electric and magnetic fields with the susceptibility functions. The new technique consists of adding both voltage and current sources in supplementary stubs of the symmetrical condensed node (SCN) of the TLM method. In this article, the details and a complete description of this approach are given. A comparison of the obtained numerical results with those of the literature confirms its validity and efficiency.
Method for determining efficiency in a liquid scintillation system
International Nuclear Information System (INIS)
Laney, B.H.
1975-01-01
This invention relates to a method of counting radioactive events in a liquid scintillation radiation detecting and counting apparatus by utilizing pulses generated by a photomultiplying means resulting from scintillations caused by radioactive events. A counting efficiency value is assigned to each pulse generated in the photomultiplying means according to the height of the pulse. The numerical inverse of each assigned counting efficiency value is determined and each numerical inverse is recorded as an actual number of radioactive events with each having a pulse height identical to that of the corresponding pulse generated in the photomultiplying means. (Patent Office Record)
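The counting rule in this patent abstract (record each pulse as the inverse of its assigned counting efficiency) can be sketched as follows. The efficiency curve below is purely hypothetical; only the inverse-weighting idea comes from the abstract.

```python
# Sketch of the patent's counting idea: each detected pulse is assigned a
# counting efficiency based on its height, and the inverse of that
# efficiency is accumulated as the estimated number of true events.

def counting_efficiency(pulse_height):
    """Hypothetical monotone mapping from pulse height to efficiency."""
    # Taller pulses are assumed to be detected more reliably; clamp to (0, 1].
    return max(0.05, min(1.0, pulse_height / 100.0))

def estimate_true_events(pulse_heights):
    """Sum the inverse efficiencies: each pulse stands in for 1/eff events."""
    return sum(1.0 / counting_efficiency(h) for h in pulse_heights)

pulses = [100, 50, 20]          # heights in arbitrary units
# 100 -> eff 1.0 (1 event), 50 -> eff 0.5 (2 events), 20 -> eff 0.2 (5 events)
print(estimate_true_events(pulses))  # 8.0
```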
Method for Determining Optimal Residential Energy Efficiency Retrofit Packages
Energy Technology Data Exchange (ETDEWEB)
Polly, B.; Gestwick, M.; Bianchi, M.; Anderson, R.; Horowitz, S.; Christensen, C.; Judkoff, R.
2011-04-01
Businesses, government agencies, consumers, policy makers, and utilities currently have limited access to occupant-, building-, and location-specific recommendations for optimal energy retrofit packages, as defined by estimated costs and energy savings. This report describes an analysis method for determining optimal residential energy efficiency retrofit packages and, as an illustrative example, applies the analysis method to a 1960s-era home in eight U.S. cities covering a range of International Energy Conservation Code (IECC) climate regions. The method uses an optimization scheme that considers average energy use (determined from building energy simulations) and equivalent annual cost to recommend optimal retrofit packages specific to the building, occupants, and location. Energy savings and incremental costs are calculated relative to a minimum upgrade reference scenario, which accounts for efficiency upgrades that would occur in the absence of a retrofit because of equipment wear-out and replacement with current minimum standards.
A Computationally Efficient Method for Polyphonic Pitch Estimation
Directory of Open Access Journals (Sweden)
Ruohua Zhou
2009-01-01
Full Text Available This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then the incorrect estimates are removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and the results demonstrate the high performance and computational efficiency of the approach.
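The harmonic-grouping and peak-picking stage might look like the following toy sketch; the spectrum, candidate pitches, and threshold are invented, and the real method derives the pitch energy spectrum from the RTFI rather than a raw magnitude spectrum.

```python
# Toy sketch of harmonic grouping: the salience of a candidate pitch is
# the summed spectral energy at its first few harmonics, and preliminary
# pitches are the candidates whose salience exceeds a threshold.

def pitch_salience(spectrum, f0, n_harmonics=4):
    """Sum energy at integer multiples of the candidate pitch f0."""
    return sum(spectrum.get(f0 * k, 0.0) for k in range(1, n_harmonics + 1))

def pick_pitches(spectrum, candidates, threshold):
    """Keep candidates whose harmonic-grouped salience exceeds the threshold."""
    return [f0 for f0 in candidates if pitch_salience(spectrum, f0) > threshold]

# Hypothetical energy spectrum indexed by frequency (Hz): an A2 (110 Hz)
# with harmonics, plus an A4 (440 Hz) with its own harmonics.
spectrum = {110: 1.0, 220: 0.6, 330: 0.4, 440: 0.9, 880: 0.5, 1320: 0.2}
print(pick_pitches(spectrum, candidates=[110, 440, 500], threshold=1.0))  # [110, 440]
```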
An Efficient Approach for Identifying Stable Lobes with Discretization Method
Directory of Open Access Journals (Sweden)
Baohai Wu
2013-01-01
Full Text Available This paper presents a new approach for quick identification of chatter stability lobes with the discretization method. Firstly, three different kinds of stability regions are defined: absolute stable region, valid region, and invalid region. Secondly, while identifying the chatter stability lobes, the three different regions within the chatter stability lobes are identified with relatively large time intervals. Thirdly, the stability boundary within the valid regions is finely calculated to obtain the exact chatter stability lobes. The proposed method only needs to test a small portion of the spindle speed and cutting depth set; about 89% of the computation time is saved compared with the full discretization method, and it takes only about 10 minutes to obtain the exact chatter stability lobes. Since it is based on the discretization method, the proposed approach can be used for different immersion conditions, including low-immersion cutting, and can be directly implemented in the workshop to improve the efficiency of machining parameter selection.
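The coarse-then-fine idea can be sketched generically: classify a coarse grid of (spindle speed, depth) points with a cheap stability test, then refine the boundary only where the classification flips. The toy stability predicate below is a stand-in for the actual discretization-method test, and all numbers are invented.

```python
# Generic coarse/fine sketch: bisect the cutting depth for the
# stable/unstable boundary at each spindle speed of interest. The
# predicate is a toy lobe shape, not the real discretization test.

def is_stable(speed, depth):
    """Toy lobe shape: the stable depth limit varies with spindle speed."""
    limit = 2.0 + 1.5 * ((speed % 1000) / 1000.0)
    return depth < limit

def boundary_depth(speed, d_lo, d_hi, tol=1e-3):
    """Bisect for the stable/unstable boundary at one spindle speed."""
    assert is_stable(speed, d_lo) and not is_stable(speed, d_hi)
    while d_hi - d_lo > tol:
        mid = 0.5 * (d_lo + d_hi)
        if is_stable(speed, mid):
            d_lo = mid
        else:
            d_hi = mid
    return 0.5 * (d_lo + d_hi)

# Only speeds whose coarse interval [0, 5] straddles the boundary need
# the expensive fine search; here all three do.
lobes = {s: round(boundary_depth(s, 0.0, 5.0), 3) for s in (0, 250, 500)}
print(lobes)
```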
Evaluation of Test Method for Solar Collector Efficiency
DEFF Research Database (Denmark)
Fan, Jianhua; Shah, Louise Jivan; Furbo, Simon
The test method of the standard EN12975-2 (European Committee for Standardization, 2004) is used by European test laboratories to determine the efficiency of solar collectors. In the test methods the mean solar collector fluid temperature in the solar collector, Tm, is determined by the approximated equation ... where Tin is the inlet temperature to the collector and Tout is the outlet temperature from the collector. The specific heat of the solar collector fluid is in the test method, as an approximation, determined as a constant equal to the specific heat of the solar collector fluid at the temperature Tm ... and the sky temperature. Based on the investigations, recommendations for changes to the test methods and test conditions are considered. The investigations are carried out within the NEGST (New Generation of Solar Thermal Systems) project financed by the EU.
An efficient method for DNA extraction from Cladosporioid fungi.
Moslem, M A; Bahkali, A H; Abd-Elsalam, K A; Wit, P J G M
2010-11-23
We developed an efficient method for DNA extraction from Cladosporioid fungi, which are important fungal plant pathogens. The cell wall of Cladosporioid fungi is often melanized, which makes it difficult to extract DNA from their cells. In order to overcome this we grew these fungi for three days on agar plates and extracted DNA from mycelium mats after manual or electric homogenization. High-quality DNA was isolated, with an A(260)/A(280) ratio ranging between 1.6 and 2.0. Isolated genomic DNA was efficiently digested with restriction enzymes and produced distinct banding patterns on agarose gels for the different Cladosporium species. Clear DNA fragments from the isolated DNA were amplified by PCR using small and large subunit rDNA primers, demonstrating that this method provides DNA of sufficiently high quality for molecular analyses.
Efficient solution method for optimal control of nuclear systems
International Nuclear Information System (INIS)
Naser, J.A.; Chambre, P.L.
1981-01-01
To improve the utilization of existing fuel sources, the use of optimization techniques is becoming more important. A technique for solving systems of coupled ordinary differential equations with initial, boundary, and/or intermediate conditions is given. This method has a number of inherent advantages over existing techniques as well as being efficient in terms of computer time and space requirements. An example of computing the optimal control for a spatially dependent reactor model with and without temperature feedback is given. 10 refs
2012-05-30
...-AC46 Energy Conservation Program: Alternative Efficiency Determination Methods and Alternative Rating... regulations authorizing the use of alternative methods of determining energy efficiency or energy consumption... alternative methods of determining energy efficiency or energy consumption of various consumer products and...
"System evaluates system": method for evaluating the efficiency of IS
Directory of Open Access Journals (Sweden)
Dita Blazkova
2016-10-01
Full Text Available In this paper I deal with a possible solution for evaluating the efficiency of information systems in companies. A large number of the existing methods used to assess the efficiency of information systems depend on the subjective responses of users, which may distort the output of the evaluation. Therefore, I propose a method that eliminates the subjective opinion of a user as the primary data source. An application, which I suggest as part of the method, collects the relevant data. In this paper I describe the application in detail. It is a follow-on program for any system that runs in parallel with it and automatically collects data for the evaluation. The data mainly include timing data, mouse cursor positions, screenshots (printScreens), i-grams, etc. I propose a method for evaluating these data that identifies the degree of friendliness of the information system towards the user. The output of the method is thus a conclusion as to whether the users who work with the information system can work with it effectively.
Efficient data retrieval method for similar plasma waveforms in EAST
Energy Technology Data Exchange (ETDEWEB)
Liu, Ying, E-mail: liuying-ipp@szu.edu.cn [SZU-CASIPP Joint Laboratory for Applied Plasma, Shenzhen University, Shenzhen 518060 (China); Huang, Jianjun; Zhou, Huasheng; Wang, Fan [SZU-CASIPP Joint Laboratory for Applied Plasma, Shenzhen University, Shenzhen 518060 (China); Wang, Feng [Institute of Plasma Physics Chinese Academy of Sciences, Hefei 230031 (China)
2016-11-15
Highlights: • The proposed method is carried out by means of a bounding envelope and angle distance. • It allows retrieving whole similar waveforms of any time length. • In addition, the proposed method can also retrieve subsequences. - Abstract: Fusion research relies heavily on data analysis due to its massive database. In the present work, we propose an efficient method for searching and retrieving similar plasma waveforms in the Experimental Advanced Superconducting Tokamak (EAST). Based on Piecewise Linear Aggregate Approximation (PLAA) for extracting feature values, the searching process is accomplished in two steps. The first is coarse searching to narrow down the search space, which is carried out by means of a bounding envelope. The second step is fine searching to retrieve similar waveforms, which is implemented using the angle distance. The proposed method is tested on EAST databases and turns out to have good performance in retrieving similar waveforms.
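A minimal sketch of the two-stage search follows, with per-segment means as a crude stand-in for the PLAA features described in the abstract; the waveforms, names, and envelope tolerance are invented.

```python
import math

# Two-stage retrieval sketch: a cheap bounding-envelope filter discards
# poor candidates, then the survivors are ranked by angle distance.

def segment_means(wave, n_seg=4):
    """Crude stand-in for PLAA features: the mean of each segment."""
    seg = len(wave) // n_seg
    return [sum(wave[i*seg:(i+1)*seg]) / seg for i in range(n_seg)]

def within_envelope(q, c, eps):
    """Coarse step: every feature must lie inside the query's envelope."""
    return all(abs(a - b) <= eps for a, b in zip(q, c))

def angle_distance(u, v):
    """Fine step: angle between feature vectors (0 means identical shape)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

def retrieve(query, database, eps=0.5):
    q = segment_means(query)
    survivors = [(name, segment_means(w)) for name, w in database.items()
                 if within_envelope(q, segment_means(w), eps)]
    return sorted(survivors, key=lambda nf: angle_distance(q, nf[1]))

query = [0, 1, 2, 3, 4, 5, 6, 7]
database = {"ramp": [0.1, 1.1, 2.1, 3.1, 4.1, 5.1, 6.1, 7.1],
            "flat": [3.0] * 8}
print([name for name, _ in retrieve(query, database)])  # ['ramp']
```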
Method for Determining Volumetric Efficiency and Its Experimental Validation
Directory of Open Access Journals (Sweden)
Ambrozik Andrzej
2017-12-01
Full Text Available Modern means of transport are basically powered by piston internal combustion engines. Increasingly rigorous demands are placed on IC engines in order to minimise their detrimental impact on the natural environment. That stimulates the development of research on piston internal combustion engines, involving experimental and theoretical investigations carried out using computer technologies. While being filled, the cylinder is considered to be an open thermodynamic system, in which non-stationary processes occur. To calculate the thermodynamic parameters of the engine operating cycle, based on the comparison of cycles, it is necessary to know the mean constant value of cylinder pressure throughout this process. Because of the character of the in-cylinder pressure pattern and the difficulties in its experimental determination, a novel method for the determination of this quantity is presented in this paper. In the new approach, an iteration method was used. In the method developed for determining the volumetric efficiency, the following equations were employed: the law of conservation of the amount of substance, the first law of thermodynamics for an open system, dependences for changes in the cylinder volume vs. the crankshaft rotation angle, and the state equation. The results of calculations performed with this method were validated by means of experimental investigations carried out for a selected engine on an engine test bench. A satisfactory congruence of computational and experimental results as regards determining the volumetric efficiency was obtained. The method for determining the volumetric efficiency presented in this paper can be used to investigate the processes taking place in the cylinder of an IC engine.
DASPfind: new efficient method to predict drug–target interactions
Ba Alawi, Wail
2016-03-16
Background Identification of novel drug–target interactions (DTIs) is important for drug discovery. Experimental determination of such DTIs is costly and time consuming, hence it necessitates the development of efficient computational methods for the accurate prediction of potential DTIs. To date, many computational methods have been proposed for this purpose, but they suffer from a high rate of false positive predictions. Results Here, we developed a novel computational DTI prediction method, DASPfind. DASPfind uses simple paths of particular lengths inferred from a graph that describes DTIs, similarities between drugs, and similarities between the protein targets of drugs. We show that on average, over the four gold standard DTI datasets, DASPfind significantly outperforms other existing methods when the single top-ranked predictions are considered, resulting in 46.17% of these predictions being correct, and it achieves 49.22% correct single top-ranked predictions when the set of all DTIs for a single drug is tested. Furthermore, we demonstrate that our method is best suited for predicting DTIs in cases of drugs with no known targets or with few known targets. We also show the practical use of DASPfind by generating novel predictions for the Ion Channel dataset and validating them manually. Conclusions DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of necessary experimental verifications in the process of drug discovery. DASPfind can be accessed online at: http://www.cbrc.kaust.edu.sa/daspfind.
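The "simple paths of particular lengths" scoring can be illustrated on a toy drug–target graph. The graph, weights, and maximum path length below are invented, and this is a simplified stand-in rather than the exact DASPfind scoring formula.

```python
# Illustrative path-based scoring: the score of a (drug, target) pair is
# the sum, over simple paths up to a fixed length, of the product of the
# edge weights (similarities/interactions) along each path.

graph = {  # undirected weighted graph; all weights hypothetical
    "d1": {"d2": 0.8, "t1": 1.0},
    "d2": {"d1": 0.8, "t1": 1.0, "t2": 1.0},
    "t1": {"d1": 1.0, "d2": 1.0},
    "t2": {"d2": 1.0},
}

def path_score(src, dst, max_len=3):
    """Sum of edge-weight products over simple paths from src to dst."""
    total = 0.0
    def dfs(node, visited, weight, depth):
        nonlocal total
        if node == dst:
            total += weight
            return
        if depth == max_len:
            return
        for nxt, w in graph[node].items():
            if nxt not in visited:
                dfs(nxt, visited | {nxt}, weight * w, depth + 1)
    dfs(src, {src}, 1.0, 0)
    return total

# d1 reaches t2 via d1-d2-t2 (0.8) and d1-t1-d2-t2 (1.0): total 1.8.
print(round(path_score("d1", "t2"), 2))  # 1.8
```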
Detecting Android Malwares with High-Efficient Hybrid Analyzing Methods
Directory of Open Access Journals (Sweden)
Yu Liu
2018-01-01
Full Text Available In order to tackle the security issues caused by malware on the Android OS, we propose a highly efficient hybrid detection scheme for Android malware. Our scheme employs different analysis methods (static and dynamic) to construct a flexible detection scheme. In this paper, we propose detection techniques such as the Com+ feature, based on traditional Permission and API call features, to improve the performance of static detection. The collapsing issue of traditional function call graph-based malware detection is also avoided, as we adopt feature selection and clustering to unify function call graph features of various dimensions into the same dimension. In order to verify the performance of our scheme, we built an open-access malware dataset for our experiments. The experimental results show that the suggested scheme achieves high malware-detection accuracy, and the scheme can be used to establish Android malware-detection cloud services, which can automatically adopt high-efficiency analysis methods according to the properties of the Android applications.
A modified efficient method for dental pulp stem cell isolation.
Raoof, Maryam; Yaghoobi, Mohammad Mehdi; Derakhshani, Ali; Kamal-Abadi, Ali Mohammadi; Ebrahimi, Behnam; Abbasnejad, Mehdi; Shokouhinejad, Noushin
2014-03-01
Dental pulp stem cells can be used in regenerative endodontic therapy. The aim of this study was to introduce an efficient method for dental pulp stem cell isolation. In this in-vitro study, 60 extracted human third molars were split and the pulp tissue was extracted. Dental pulp stem cells were isolated by the following three different methods: (1) digestion of pulp by collagenase/dispase enzyme and culture of the released cells; (2) outgrowth of the cells by culture of undigested pulp pieces; (3) digestion of pulp tissue pieces and fixing them. The cells were cultured in minimum essential medium alpha modification (αMEM) medium supplemented with 20% fetal bovine serum (FBS) in a humid 37°C incubator with 5% CO2. The markers of stem cells were studied by reverse transcriptase polymerase chain reaction (RT-PCR). The student t-test was used for comparing the means of independent groups. P < 0.05 was considered significant. By the first method, a few cell colonies were detectable after 4 days, whereas the outgrowth method needed more time (10-12 days); with the improved third method, we obtained stem cells successfully with about 60% efficiency after 2 days. The results of RT-PCR suggested the expression of Nanog, Oct-4, and Nucleostemin markers in the cells isolated from dental pulps. This study proposes a new method with high efficacy to obtain dental pulp stem cells in a short time.
A modified efficient method for dental pulp stem cell isolation
Directory of Open Access Journals (Sweden)
Maryam Raoof
2014-01-01
Full Text Available Background: Dental pulp stem cells can be used in regenerative endodontic therapy. The aim of this study was to introduce an efficient method for dental pulp stem cell isolation. Materials and Methods: In this in-vitro study, 60 extracted human third molars were split and pulp tissue was extracted. Dental pulp stem cells were isolated by the following three different methods: (1) digestion of pulp by collagenase/dispase enzyme and culture of the released cells; (2) outgrowth of the cells by culture of undigested pulp pieces; (3) digestion of pulp tissue pieces and fixing them. The cells were cultured in minimum essential medium alpha modification (αMEM) medium supplemented with 20% fetal bovine serum (FBS) in a humid 37°C incubator with 5% CO2. The markers of stem cells were studied by reverse transcriptase polymerase chain reaction (PCR). The student t-test was used for comparing the means of independent groups. P <0.05 was considered as significant. Results: The results indicated that by the first method a few cell colonies with homogeneous morphology were detectable after 4 days, while in the outgrowth method more time was needed (10-12 days) to allow sufficient numbers of heterogeneous phenotype stem cells to migrate out of the tissue. Interestingly, with the improved third method, we obtained stem cells successfully with about 60% efficiency after 2 days. The results of RT-PCR suggested the expression of Nanog, Oct-4, and Nucleostemin markers in the isolated cells from dental pulps. Conclusion: This study proposes a new method with high efficacy to obtain dental pulp stem cells in a short time.
Efficient methods for time-absorption (α) eigenvalue calculations
International Nuclear Information System (INIS)
Hill, T.R.
1983-01-01
The time-absorption eigenvalue (α) calculation is one of the options found in most discrete-ordinates transport codes. Several methods have been developed at Los Alamos to improve the efficiency of this calculation. Two procedures, based on coarse-mesh rebalance, to accelerate the α eigenvalue search are derived. A hybrid scheme to automatically choose the more-effective rebalance method is described. The α rebalance scheme permits some simple modifications to the iteration strategy that eliminates many unnecessary calculations required in the standard search procedure. For several fast supercritical test problems, these methods resulted in convergence with one-fifth the number of iterations required for the conventional eigenvalue search procedure
DASPfind: new efficient method to predict drug–target interactions
Ba Alawi, Wail; Soufan, Othman; Essack, Magbubah; Kalnis, Panos; Bajic, Vladimir B.
2016-01-01
DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of necessary experimental verifications in the process of drug discovery. DASPfind can be accessed online at: http://www.cbrc.kaust.edu.sa/daspfind.
Efficient Pruning Method for Ensemble Self-Generating Neural Networks
Directory of Open Access Journals (Sweden)
Hirotaka Inoue
2003-12-01
Full Text Available Recently, multiple classifier systems (MCS) have been used for practical applications to improve classification accuracy. Self-generating neural networks (SGNN) are one of the suitable base classifiers for MCS because of their simple setting and fast learning. However, the computation cost of the MCS increases in proportion to the number of SGNN. In this paper, we propose an efficient pruning method for the structure of the SGNN in the MCS. We compare the pruned MCS with two sampling methods. Experiments have been conducted to compare the pruned MCS with an unpruned MCS, the MCS based on C4.5, and the k-nearest neighbor method. The results show that the pruned MCS can improve its classification accuracy as well as reduce the computation cost.
Efficient electronic structure methods applied to metal nanoparticles
DEFF Research Database (Denmark)
Larsen, Ask Hjorth
...of efficient approaches to density functional theory and the application of these methods to metal nanoparticles. We describe the formalism and implementation of localized atom-centered basis sets within the projector augmented wave method. Basis sets allow for a dramatic increase in performance compared... The basis set method is used to study the electronic effects for the contiguous range of clusters up to several hundred atoms. The s-electrons hybridize to form electronic shells consistent with the jellium model, leading to electronic magic numbers for clusters with full shells. Large electronic gaps and jumps in Fermi level near magic numbers can lead to alkali-like or halogen-like behaviour when main-group atoms adsorb onto gold clusters. A non-self-consistent Newns-Anderson model is used to more closely study the chemisorption of main-group atoms on magic-number Au clusters. The behaviour at magic...
Highly efficient vitrification method for cryopreservation of human oocytes.
Kuwayama, Masashige; Vajta, Gábor; Kato, Osamu; Leibo, Stanley P
2005-09-01
Two experiments were performed to develop a method to cryopreserve MII human oocytes. In the first experiment, three vitrification methods were compared using bovine MII oocytes with regard to their developmental competence after cryopreservation: (i) vitrification within 0.25-ml plastic straws followed by in-straw dilution after warming (ISD method); (ii) vitrification in open-pulled straws (OPS method); and (iii) vitrification in plastic handle (Cryotop method). In the second experiment, the Cryotop method, which had yielded the best results, was used to vitrify human oocytes. Out of 64 vitrified oocytes, 58 (91%) exhibited normal morphology after warming. After intracytoplasmic sperm injection, 52 became fertilized, and 32 (50%) developed to the blastocyst stage in vitro. Analysis by fluorescence in-situ hybridization of five blastocysts showed that all were normal diploid embryos. Twenty-nine embryo transfers with a mean number of 2.2 embryos per transfer on days 2 and 5 resulted in 12 initial pregnancies, seven healthy babies and three ongoing pregnancies. The results suggest that vitrification using the Cryotop is the most efficient method for human oocyte cryopreservation.
Efficient parallel implicit methods for rotary-wing aerodynamics calculations
Wissink, Andrew M.
Euler/Navier-Stokes Computational Fluid Dynamics (CFD) methods are commonly used for prediction of the aerodynamics and aeroacoustics of modern rotary-wing aircraft. However, their widespread application to large complex problems is limited by the lack of adequate computing power. Parallel processing offers the potential for dramatic increases in computing power, but most conventional implicit solution methods are inefficient in parallel and new techniques must be adopted to realize this potential. This work proposes alternative implicit schemes for Euler/Navier-Stokes rotary-wing calculations which are robust and efficient in parallel. The first part of this work proposes an efficient parallelizable modification of the Lower-Upper Symmetric Gauss-Seidel (LU-SGS) implicit operator used in the well-known Transonic Unsteady Rotor Navier-Stokes (TURNS) code. The new hybrid LU-SGS scheme couples the point-relaxation approach of the Data-Parallel Lower-Upper Relaxation (DP-LUR) algorithm for inter-processor communication with the Symmetric Gauss-Seidel algorithm of LU-SGS for on-processor computations. With the modified operator, TURNS is implemented in parallel using the Message Passing Interface (MPI) for communication. Numerical performance and parallel efficiency are evaluated on the IBM SP2 and Thinking Machines CM-5 multiprocessors for a variety of steady-state and unsteady test cases. The hybrid LU-SGS scheme maintains the numerical performance of the original LU-SGS algorithm in all cases and shows a good degree of parallel efficiency. It exhibits a higher degree of robustness than DP-LUR for third-order upwind solutions. The second part of this work examines the use of Krylov subspace iterative solvers for the nonlinear CFD solutions. The hybrid LU-SGS scheme is used as a parallelizable preconditioner. Two iterative methods are tested, Generalized Minimum Residual (GMRES) and Orthogonal s-Step Generalized Conjugate Residual (OSGCR). The Newton method demonstrates good
Comparison of high efficiency particulate filter testing methods
International Nuclear Information System (INIS)
1985-01-01
High Efficiency Particulate Air (HEPA) filters are used for the removal of submicron-size particulates from air streams. In the nuclear industry they are used as an important engineered safeguard to prevent the release of airborne radioactive particulates to the environment. HEPA filters used in the nuclear industry should therefore be manufactured and operated under strict quality control. There are three levels of testing HEPA filters: i) testing of the filter media; ii) testing of the assembled filter, including filter media and filter housing; and iii) on-site testing of the complete filter installation before putting it into operation and later for the purpose of periodic control. A co-ordinated research programme on particulate filter testing methods was taken up by the Agency and contracts were awarded to the Member Countries Belgium, the German Democratic Republic, India and Hungary. The investigations carried out by the participants of the present co-ordinated research programme cover the results of the most frequently used HEPA filter testing methods for filter medium tests, rig tests and in-situ tests. Most of the experiments were carried out at ambient temperature and humidity, but indications were given to extend the investigations to elevated temperature and humidity in the future for the purpose of testing the performance of HEPA filters under severe conditions. A major conclusion of the co-ordinated research programme was that it was not possible to recommend one method as a reference method for in-situ testing of high efficiency particulate air filters. Most of the present conventional methods are adequate for current requirements. The reasons why no method could be recommended were multiple, ranging from economic aspects, through incompatibility of materials, to national regulations.
Memory Efficient PCA Methods for Large Group ICA.
Rachakonda, Srinivas; Silva, Rogers F; Liu, Jingyu; Calhoun, Vince D
2016-01-01
Principal component analysis (PCA) is widely used for data reduction in group independent component analysis (ICA) of fMRI data. Commonly, group-level PCA of temporally concatenated datasets is computed prior to ICA of the group principal components. This work focuses on reducing very high dimensional temporally concatenated datasets into its group PCA space. Existing randomized PCA methods can determine the PCA subspace with minimal memory requirements and, thus, are ideal for solving large PCA problems. Since the number of dataloads is not typically optimized, we extend one of these methods to compute PCA of very large datasets with a minimal number of dataloads. This method is coined multi power iteration (MPOWIT). The key idea behind MPOWIT is to estimate a subspace larger than the desired one, while checking for convergence of only the smaller subset of interest. The number of iterations is reduced considerably (as well as the number of dataloads), accelerating convergence without loss of accuracy. More importantly, in the proposed implementation of MPOWIT, the memory required for successful recovery of the group principal components becomes independent of the number of subjects analyzed. Highly efficient subsampled eigenvalue decomposition techniques are also introduced, furnishing excellent PCA subspace approximations that can be used for intelligent initialization of randomized methods such as MPOWIT. Together, these developments enable efficient estimation of accurate principal components, as we illustrate by solving a 1600-subject group-level PCA of fMRI with standard acquisition parameters, on a regular desktop computer with only 4 GB RAM, in just a few hours. MPOWIT is also highly scalable and could realistically solve group-level PCA of fMRI on thousands of subjects, or more, using standard hardware, limited only by time, not memory. Also, the MPOWIT algorithm is highly parallelizable, which would enable fast, distributed implementations ideal for big
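The key MPOWIT idea, iterating a block larger than the desired subspace while monitoring only the components of interest, can be sketched in miniature with a block power (orthogonal) iteration on a small symmetric matrix. The matrix, block size, and iteration count below are illustrative only; this is not the MPOWIT algorithm itself, and the real method also minimizes dataloads for huge concatenated fMRI datasets.

```python
# Miniature sketch of oversampled block power iteration: we iterate a
# 2-vector block to recover the top 1 component, reading off only the
# leading vector (the "smaller subset of interest").

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def orthonormalize(block):
    """Gram-Schmidt on a list of vectors."""
    basis = []
    for v in block:
        for b in basis:
            c = dot(v, b)
            v = [x - c * y for x, y in zip(v, b)]
        norm = dot(v, v) ** 0.5
        basis.append([x / norm for x in v])
    return basis

def top_eigenpair(A, n_want=1, oversample=1, iters=50):
    # Start from unit vectors; the block is larger than what we want.
    block = orthonormalize([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]][: n_want + oversample])
    for _ in range(iters):
        block = orthonormalize([matvec(A, v) for v in block])
    v = block[0]                      # only the subset of interest is read off
    return dot(v, matvec(A, v)), v    # Rayleigh quotient, eigenvector

A = [[2.0, 1.0, 0.0],
     [1.0, 2.0, 0.0],
     [0.0, 0.0, 1.0]]  # eigenvalues 3, 1, 1
val, vec = top_eigenpair(A)
print(round(val, 6))  # 3.0
```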
Memory efficient PCA methods for large group ICA
Directory of Open Access Journals (Sweden)
Srinivas eRachakonda
2016-02-01
Full Text Available Principal component analysis (PCA) is widely used for data reduction in group independent component analysis (ICA) of fMRI data. Commonly, group-level PCA of temporally concatenated datasets is computed prior to ICA of the group principal components. This work focuses on reducing very high dimensional temporally concatenated datasets into its group PCA space. Existing randomized PCA methods can determine the PCA subspace with minimal memory requirements and, thus, are ideal for solving large PCA problems. Since the number of dataloads is not typically optimized, we extend one of these methods to compute PCA of very large datasets with a minimal number of dataloads. This method is coined multi power iteration (MPOWIT). The key idea behind MPOWIT is to estimate a subspace larger than the desired one, while checking for convergence of only the smaller subset of interest. The number of iterations is reduced considerably (as well as the number of dataloads), accelerating convergence without loss of accuracy. More importantly, in the proposed implementation of MPOWIT, the memory required for successful recovery of the group principal components becomes independent of the number of subjects analyzed. Highly efficient subsampled eigenvalue decomposition techniques are also introduced, furnishing excellent PCA subspace approximations that can be used for intelligent initialization of randomized methods such as MPOWIT. Together, these developments enable efficient estimation of accurate principal components, as we illustrate by solving a 1600-subject group-level PCA of fMRI with standard acquisition parameters, on a regular desktop computer with only 4 GB RAM, in just a few hours. MPOWIT is also highly scalable and could realistically solve group-level PCA of fMRI on thousands of subjects, or more, using standard hardware, limited only by time, not memory. Also, the MPOWIT algorithm is highly parallelizable, which would enable fast, distributed implementations
Efficiency of Choice Set Generation Methods for Bicycle Routes
DEFF Research Database (Denmark)
Halldórsdóttir, Katrín; Rieser-Schüssler, Nadine; W. Axhausen, Kay
behaviour, observed choices and alternatives composing the choice set of each cyclist are necessary. However, generating the alternative choice sets can prove challenging. This paper analyses the efficiency of various choice set generation methods for bicycle routes in order to contribute to our...... travelling information with GPS loggers, compared to self-reported RP data, is more accurate geographic locations and routes. Also, the GPS traces give more reliable information on times and prevent trip underreporting, and it is possible to collect information on many trips by the same person without...
An efficient method for DNA extraction from Cladosporioid fungi
Moslem, M.A.; Bahkali, A.H.; Abd-Elsalam, K.A.; Wit, de, P.J.G.M.
2010-01-01
We developed an efficient method for DNA extraction from Cladosporioid fungi, which are important fungal plant pathogens. The cell wall of Cladosporioid fungi is often melanized, which makes it difficult to extract DNA from their cells. In order to overcome this we grew these fungi for three days on agar plates and extracted DNA from mycelium mats after manual or electric homogenization. High-quality DNA was isolated, with an A260/A280 ratio ranging between 1.6 and 2.0. Isolated genomic DNA w...
An Efficient Evolutionary Based Method For Image Segmentation
Aslanzadeh, Roohollah; Qazanfari, Kazem; Rahmati, Mohammad
2017-01-01
The goal of this paper is to present a new efficient image segmentation method based on evolutionary computation, a model inspired by human behavior. Based on this model, a four-layer process for image segmentation is proposed using the split/merge approach. In the first layer, an image is split into numerous regions using the watershed algorithm. In the second layer, a co-evolutionary process is applied to form centers of final segments by merging similar primary regions. In the t...
Thermal Efficiency Degradation Diagnosis Method Using Regression Model
International Nuclear Information System (INIS)
Jee, Chang Hyun; Heo, Gyun Young; Jang, Seok Won; Lee, In Cheol
2011-01-01
This paper proposes an idea for thermal efficiency degradation diagnosis in turbine cycles, which is based on turbine cycle simulation under abnormal conditions and a linear regression model. The correlation between the inputs representing degradation conditions (normally unmeasured but intrinsic states) and the simulation outputs (normally measured but superficial states) was analyzed with the linear regression model. The regression models can then inversely infer the associated intrinsic state from a superficial state observed in a power plant. The diagnosis method proposed herein is divided into three processes: 1) simulations of degradation conditions to obtain measured states (referred to as the what-if method), 2) development of the linear model correlating intrinsic and superficial states, and 3) determination of an intrinsic state using the superficial states of the current plant and the linear regression model (referred to as the inverse what-if method). The what-if method generates the outputs for inputs covering various root causes and/or boundary conditions, whereas the inverse what-if method calculates the inverse matrix for the given superficial states, that is, the component degradation modes. The method suggested in this paper was validated using the turbine cycle model for an operating power plant.
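The what-if / inverse what-if pairing can be sketched with a linear toy model standing in for the turbine-cycle simulator; every matrix and dimension below is a hypothetical stand-in, not plant data:

```python
import numpy as np

# "What-if" step: a hypothetical simulator mapping 3 intrinsic degradation
# modes to 5 measured (superficial) states, sampled over many conditions.
rng = np.random.default_rng(0)
true_map = rng.standard_normal((3, 5))
D = rng.uniform(0, 1, size=(200, 3))                      # intrinsic states
S = D @ true_map + 1e-3 * rng.standard_normal((200, 5))   # superficial states

# Fit the linear regression model S ~ D @ W.
W, *_ = np.linalg.lstsq(D, S, rcond=None)

# "Inverse what-if" step: given a new measurement, recover the intrinsic
# state with the pseudo-inverse of the fitted model.
d_true = np.array([0.2, 0.7, 0.4])
s_obs = d_true @ true_map          # observed superficial state
d_est = s_obs @ np.linalg.pinv(W)  # inferred degradation condition
```

With more measured states than degradation modes, the pseudo-inverse plays the role of the "inverse matrix" referred to in the abstract.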
Directory of Open Access Journals (Sweden)
Alicia Cordero
2018-01-01
Full Text Available We construct a family of derivative-free optimal iterative methods without memory to approximate a simple zero of a nonlinear function. Error analysis demonstrates that the without-memory class has eighth-order convergence and is extendable to a with-memory class. The extension of the new family to the with-memory one is also presented; it attains convergence order 15.5156 and a very high efficiency index of 15.5156^(1/4) ≈ 1.9847. Some particular schemes of the with-memory family are also described. Numerical examples and some dynamical aspects of the new schemes are given to support the theoretical results.
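The eighth-order family itself is not reproduced here, but the derivative-free principle it builds on, replacing f'(x) by a divided difference computed from f alone, is shown by the classical Steffensen iteration (a hedged, second-order illustration, not the paper's scheme):

```python
def steffensen(f, x0, tol=1e-12, max_iter=50):
    """Steffensen's derivative-free iteration for a simple zero of f.
    A classical second-order scheme; optimal higher-order families build
    on the same divided-difference idea with extra evaluation steps."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if fx == 0:
            return x
        # First-order divided difference f[x, x + f(x)] stands in for f'(x).
        g = (f(x + fx) - fx) / fx
        x_new = x - fx / g
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = steffensen(lambda x: x**2 - 2.0, 1.0)   # approximates sqrt(2)
```

Two function evaluations per step and order 2 give efficiency index 2^(1/2) ≈ 1.414; the with-memory family above pushes this to ≈ 1.9847.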
Division of methods for counting helminths’ eggs and the problem of efficiency of these methods
Directory of Open Access Journals (Sweden)
Katarzyna Jaromin-Gleń
2017-03-01
Full Text Available From the sanitary and epidemiological aspects, information concerning the developmental forms of intestinal parasites, especially the eggs of helminths present in our environment (in water, soil, sandpits, sewage sludge, and crops watered with wastewater), is very important. The methods described in the relevant literature may be classified in various ways, primarily according to the methodology of preparing samples from environmental matrices for analysis, and according to the actual counting methods and the chambers/instruments used for this purpose. In addition, the methods may be classified by the manner and time of identification of the individuals counted, or by the necessity of staining them. Standard methods for identification of helminths’ eggs from environmental matrices are usually characterized by low efficiency, i.e. from 30% to approximately 80%. The efficiency of the method applied may be measured in two ways: by using an internal standard or by the ‘Split/Spike’ method. When the efficiency of the method and the number of eggs are measured simultaneously in an examined object, the ‘actual’ number of eggs may be calculated by multiplying the number of helminths’ eggs found by the inverse of the efficiency.
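The correction described in the last sentence amounts to dividing the observed count by the measured recovery efficiency, as in this illustrative sketch (numbers hypothetical):

```python
# Efficiency correction for egg counts: if a counting method recovers only
# a fraction of the eggs actually present, the 'actual' count is the
# observed count multiplied by the inverse of the recovery efficiency.
def corrected_egg_count(observed, efficiency):
    if not 0 < efficiency <= 1:
        raise ValueError("efficiency must be in (0, 1]")
    return observed / efficiency

# 24 eggs found with a method whose measured recovery is 30%:
estimate = corrected_egg_count(24, 0.30)   # about 80 eggs estimated present
```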
Approximation methods for efficient learning of Bayesian networks
Riggelsen, C
2008-01-01
This publication offers and investigates efficient Monte Carlo simulation methods in order to realize a Bayesian approach to approximate learning of Bayesian networks from both complete and incomplete data. For large amounts of incomplete data when Monte Carlo methods are inefficient, approximations are implemented, such that learning remains feasible, albeit non-Bayesian. The topics discussed are: basic concepts about probabilities, graph theory and conditional independence; Bayesian network learning from data; Monte Carlo simulation techniques; and, the concept of incomplete data. In order to provide a coherent treatment of matters, thereby helping the reader to gain a thorough understanding of the whole concept of learning Bayesian networks from (in)complete data, this publication combines in a clarifying way all the issues presented in the papers with previously unpublished work.
Efficient model learning methods for actor-critic control.
Grondman, Ivo; Vaandrager, Maarten; Buşoniu, Lucian; Babuska, Robert; Schuitema, Erik
2012-06-01
We propose two new actor-critic algorithms for reinforcement learning. Both algorithms use local linear regression (LLR) to learn approximations of the functions involved. A crucial feature of the algorithms is that they also learn a process model, and this, in combination with LLR, provides an efficient policy update for faster learning. The first algorithm uses a novel model-based update rule for the actor parameters. The second algorithm does not use an explicit actor but learns a reference model which represents a desired behavior, from which desired control actions can be calculated using the inverse of the learned process model. The two novel methods and a standard actor-critic algorithm are applied to the pendulum swing-up problem, in which the novel methods achieve faster learning than the standard algorithm.
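The local linear regression (LLR) building block can be sketched as follows; this is a generic memory-based LLR approximator under assumed parameters (plain least squares on the k nearest stored samples), not the authors' actor-critic implementation:

```python
import numpy as np

def llr_predict(X, y, x_query, k=10):
    """Local linear regression: fit an affine model to the k nearest
    stored samples and evaluate it at the query point. In the actor-critic
    setting above, approximators of this kind are used for the value
    function, the policy, and the learned process model."""
    X = np.atleast_2d(X)
    dist = np.linalg.norm(X - x_query, axis=1)
    idx = np.argsort(dist)[:k]                  # k nearest neighbours
    A = np.hstack([X[idx], np.ones((k, 1))])    # affine design matrix
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return np.append(x_query, 1.0) @ coef

# Usage: approximate a smooth function from scattered samples.
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 1))
y = np.sin(X[:, 0])
pred = llr_predict(X, y, np.array([1.0]), k=15)
```

Because the fit is local and affine, adding a sample is cheap and the local model's slope is immediately available, which is what makes LLR attractive for fast policy updates.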
An efficient method for model refinement in diffuse optical tomography
Zirak, A. R.; Khademi, M.
2007-11-01
Diffuse optical tomography (DOT) is a non-linear, ill-posed, boundary value and optimization problem that necessitates regularization. Bayesian methods are also suitable, because the measurement data are sparse and correlated. In such problems, which are solved with iterative methods, the solution space must be kept small for stabilization and better convergence. These constraints lead to an extensive, overdetermined system of equations, for which model-retrieval criteria, especially total least squares (TLS), must refine the model error. However, TLS is limited to linear systems, which is not achievable when applying traditional Bayesian methods. This paper presents an efficient method for model refinement using regularized total least squares (RTLS) applied to the linearized DOT problem, with a maximum a posteriori (MAP) estimator and a Tikhonov regulator. This is done by combining Bayesian and regularization tools as preconditioner matrices, applying them to the equations, and then using RTLS on the resulting linear equations. The preconditioning matrices are guided by patient-specific information as well as a priori knowledge gained from the training set. Simulation results illustrate that the proposed method improves image reconstruction performance and localizes the abnormality well.
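The regularization backbone of such formulations can be illustrated with ordinary Tikhonov-regularized least squares. This is a simplified sketch: RTLS additionally accounts for errors in the operator itself, which this plain Tikhonov solve does not, and the test problem below is an illustrative stand-in, not a DOT forward model:

```python
import numpy as np

def tikhonov_solve(J, b, lam):
    """Tikhonov-regularized least squares:
    minimize ||J x - b||^2 + lam * ||x||^2.
    Solved via the normal equations of the augmented system
    [J; sqrt(lam) I] x = [b; 0]."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + lam * np.eye(n), J.T @ b)

# Usage: an ill-conditioned inverse problem where the regularizer
# stabilizes the solution against noise amplification.
rng = np.random.default_rng(0)
J = rng.standard_normal((50, 20)) @ np.diag(10.0 ** -np.arange(20))
x_true = np.ones(20)
b = J @ x_true + 1e-6 * rng.standard_normal(50)
x_naive = np.linalg.lstsq(J, b, rcond=None)[0]   # noise-amplified
x_reg = tikhonov_solve(J, b, lam=1e-8)           # stabilized
```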
The differential method for grating efficiencies implemented in mathematica
Energy Technology Data Exchange (ETDEWEB)
Valdes, V.; McKinney, W. [Lawrence Berkeley Lab., CA (United States)]; Palmer, C. [Milton Roy Co., Rochester, NY (United States). Analytical Products Div.]
1993-08-01
In order to facilitate the accurate calculation of diffraction grating efficiencies in the soft x-ray region, we have implemented the differential method of Neviere and Vincent in Mathematica [1]. This simplifies the programming to maximize the transparency of the theory for the user. We alleviate some of the overhead burden of the Mathematica program by coding the time-consuming numerical integration in C subprograms. We recall the differential method directly from Maxwell's equations. The pseudo-periodicity of the grating profile and the electromagnetic fields allows us to use their Fourier series expansions to formulate an infinite set of coupled differential equations. A finite subset of the equations is then numerically integrated using the Numerov method for the transverse electric (TE) case and a fourth-order Runge-Kutta algorithm for the transverse magnetic (TM) case. We have tested our program by comparisons with the scalar theory and with published theoretical results for the blazed, sinusoidal and square wave profiles. The Reciprocity Theorem has also been used as a means to verify the method. We have found it to be verified for several cases to within the computational accuracy of the method.
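The Numerov scheme used for the TE case can be sketched on a single scalar equation y'' + g(x) y = 0. This minimal Python version (the paper itself uses Mathematica with C subprograms) is a generic textbook implementation, not the coupled-equation code:

```python
import numpy as np

def numerov(g, x, y0, y1):
    """Integrate y'' + g(x) y = 0 on the uniform grid x with the Numerov
    method (fourth-order accurate), given the first two values y0, y1."""
    h = x[1] - x[0]
    y = np.empty_like(x)
    y[0], y[1] = y0, y1
    f = 1.0 + (h**2 / 12.0) * g(x)       # Numerov weight at each node
    for n in range(1, len(x) - 1):
        y[n + 1] = ((12.0 - 10.0 * f[n]) * y[n] - f[n - 1] * y[n - 1]) / f[n + 1]
    return y

# Usage: y'' + y = 0 with y(0) = 0 reproduces y = sin(x).
x = np.linspace(0.0, np.pi, 2001)
y = numerov(lambda t: np.ones_like(t), x, 0.0, np.sin(x[1]))
```

For the grating problem, the same recurrence is applied component-wise to the truncated system of coupled Fourier amplitudes.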
Ringing Artefact Reduction By An Efficient Likelihood Improvement Method
Fuderer, Miha
1989-10-01
In MR imaging, the extent of the acquired spatial frequencies of the object is necessarily finite. The resulting image shows artefacts caused by "truncation" of its Fourier components. These are known as Gibbs artefacts or ringing artefacts. These artefacts are particularly visible when the time-saving reduced acquisition method is used, say, when scanning only the lowest 70% of the 256 data lines. Filtering the data results in loss of resolution. A method is described that estimates the high-frequency data from the low-frequency data lines, with the likelihood of the image as criterion. It is a computationally very efficient method, since it requires practically only two extra Fourier transforms in addition to the normal reconstruction. The results of this method on MR images of human subjects are promising. Evaluations on a 70% acquisition image show about a 20% decrease of the error energy after processing. "Error energy" is defined as the total power of the difference to a 256-data-lines reference image. The elimination of ringing artefacts then appears almost complete.
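The truncation artefact itself, and the "error energy" measure, are easy to reproduce. The sketch below only demonstrates Gibbs ringing from keeping the lowest 70% of the Fourier lines of a 1-D profile; it does not implement the likelihood-based estimator:

```python
import numpy as np

# Reconstructing a sharp-edged profile from only the lowest 70% of its
# Fourier coefficients produces Gibbs ringing: oscillations near the edges
# and residual "error energy" relative to the full reference.
n = 256
profile = np.zeros(n)
profile[64:192] = 1.0                      # sharp-edged object

spectrum = np.fft.fft(profile)
truncated = spectrum.copy()
keep = int(0.70 * n)                       # keep the lowest 70% of lines
freqs = np.fft.fftfreq(n)
truncated[np.argsort(np.abs(freqs))[keep:]] = 0.0
recon = np.real(np.fft.ifft(truncated))

# "Error energy": total power of the difference to the full reference.
error_energy = np.sum((recon - profile) ** 2)
```

The overshoot next to each edge (roughly 9% for an abrupt cutoff) is what a good high-frequency estimator should remove without the blurring a simple filter would cause.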
A Power-Efficient Propulsion Method for Magnetic Microrobots
Directory of Open Access Journals (Sweden)
Gioia Lucarini
2014-07-01
Full Text Available Current magnetic systems for microrobotic navigation consist of assemblies of electromagnets, which allow for the wireless accurate steering and propulsion of sub-millimetric bodies. However, large numbers of windings and/or high currents are needed in order to generate suitable magnetic fields and gradients. This means that magnetic navigation systems are typically cumbersome and require a lot of power, thus limiting their application fields. In this paper, we propose a novel propulsion method that is able to dramatically reduce the power demand of such systems. This propulsion method was conceived for navigation systems that achieve propulsion by pulling microrobots with magnetic gradients. We compare this power-efficient propulsion method with the traditional pulling propulsion, in the case of a microrobot swimming in a micro-structured confined liquid environment. Results show that both methods are equivalent in terms of accuracy and the velocity of the motion of the microrobots, while the new approach requires only one ninth of the power needed to generate the magnetic gradients. Substantial equivalence is demonstrated also in terms of the manoeuvrability of user-controlled microrobots along a complex path.
An Efficient Ensemble Learning Method for Gene Microarray Classification
Directory of Open Access Journals (Sweden)
Alireza Osareh
2013-01-01
Full Text Available Gene microarray analysis and classification have demonstrated an effective way for the diagnosis of diseases and cancers. However, it has also been revealed that basic classification techniques have intrinsic drawbacks in achieving accurate gene classification and cancer diagnosis. On the other hand, classifier ensembles have received increasing attention in various applications. Here, we address the gene classification issue using the RotBoost ensemble methodology. This method is a combination of the Rotation Forest and AdaBoost techniques, which in turn preserves both desirable features of an ensemble architecture, that is, accuracy and diversity. To select a concise subset of informative genes, 5 different feature selection algorithms are considered. To assess the efficiency of RotBoost, other non-ensemble/ensemble techniques, including Decision Trees, Support Vector Machines, Rotation Forest, AdaBoost, and Bagging, are also deployed. Experimental results have revealed that the combination of the fast correlation-based feature selection method with the ICA-based RotBoost ensemble is highly effective for gene classification. In fact, the proposed method can create ensemble classifiers which outperform not only the classifiers produced by conventional machine learning but also the classifiers generated by two widely used conventional ensemble learning methods, that is, Bagging and AdaBoost.
An efficient strongly coupled immersed boundary method for deforming bodies
Goza, Andres; Colonius, Tim
2016-11-01
Immersed boundary methods treat the fluid and immersed solid with separate domains. As a result, a nonlinear interface constraint must be satisfied when these methods are applied to flow-structure interaction problems. This typically results in a large nonlinear system of equations that is difficult to solve efficiently. Often, this system is solved with a block Gauss-Seidel procedure, which is easy to implement but can require many iterations to converge for small solid-to-fluid mass ratios. Alternatively, a Newton-Raphson procedure can be used to solve the nonlinear system. This typically leads to convergence in a small number of iterations for arbitrary mass ratios, but involves the use of large Jacobian matrices. We present an immersed boundary formulation that, like the Newton-Raphson approach, uses a linearization of the system to perform iterations. It therefore inherits the same favorable convergence behavior. However, we avoid large Jacobian matrices by using a block LU factorization of the linearized system. We derive our method for general deforming surfaces and perform verification on 2D test problems of flow past beams. These test problems involve large amplitude flapping and a wide range of mass ratios. This work was partially supported by the Jet Propulsion Laboratory and Air Force Office of Scientific Research.
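The block LU idea can be sketched on a generic 2x2 block system: eliminate the first unknown, solve the Schur complement for the second, and back-substitute, instead of forming and factoring one large Jacobian. The matrices below are random stand-ins, not a discretized flow-structure system:

```python
import numpy as np

def block_lu_solve(A, B, C, D, f, g):
    """Solve [[A, B], [C, D]] [x; y] = [f; g] by block LU factorization:
    form the Schur complement S = D - C A^{-1} B, solve for y, then
    back-substitute for x. Only A-sized and S-sized solves are needed."""
    Ainv_B = np.linalg.solve(A, B)
    Ainv_f = np.linalg.solve(A, f)
    S = D - C @ Ainv_B                   # Schur complement
    y = np.linalg.solve(S, g - C @ Ainv_f)
    x = Ainv_f - Ainv_B @ y
    return x, y

# Usage on a small random well-posed block system.
rng = np.random.default_rng(0)
A = np.eye(6) + 0.1 * rng.standard_normal((6, 6))
B, C = rng.standard_normal((6, 3)), rng.standard_normal((3, 6))
D = 50 * np.eye(3)                       # dominant D keeps S invertible
f, g = rng.standard_normal(6), rng.standard_normal(3)
x, y = block_lu_solve(A, B, C, D, f, g)
```

In the flow-structure setting, the A-block solve reuses the machinery of the plain fluid solver, which is what keeps the iteration cheap.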
Parallel efficient rate control methods for JPEG 2000
Martínez-del-Amor, Miguel Á.; Bruns, Volker; Sparenberg, Heiko
2017-09-01
Since the introduction of JPEG 2000, several rate control methods have been proposed. Among them, post-compression rate-distortion optimization (PCRD-Opt) is the most widely used, and the one recommended by the standard. The approach followed by this method is to first compress the entire image split into code blocks, and subsequently truncate the set of generated bit streams optimally according to the maximum target bit rate constraint. The literature proposes various strategies on how to estimate ahead of time where a block will get truncated in order to stop the execution prematurely and save time. However, none of them have been defined bearing in mind a parallel implementation. Today, multi-core and many-core architectures are becoming popular for JPEG 2000 codec implementations. Therefore, in this paper, we analyze how some techniques for efficient rate control can be deployed in GPUs. In order to do that, the design of our GPU-based codec is extended, allowing the process to be stopped at a given point. This extension also harnesses a higher level of parallelism on the GPU, leading to up to 40% speedup with 4K test material on a Titan X. In a second step, three selected rate control methods are adapted and implemented in our parallel encoder. A comparison is then carried out and used to select the best candidate to be deployed in a GPU encoder, which gave an extra 40% speedup in those situations where it was actually employed.
Dissolution-recrystallization method for high efficiency perovskite solar cells
Energy Technology Data Exchange (ETDEWEB)
Han, Fei; Luo, Junsheng; Wan, Zhongquan; Liu, Xingzhao; Jia, Chunyang, E-mail: cyjia@uestc.edu.cn
2017-06-30
Highlights: • Dissolution-recrystallization method can improve perovskite crystallization. • Dissolution-recrystallization method can improve the TiO₂/perovskite interface. • The optimal perovskite solar cell obtains the champion PCE of 16.76%. • The optimal devices are of high reproducibility. - Abstract: In this work, a dissolution-recrystallization method (DRM) in which chlorobenzene and dimethylsulfoxide treat the perovskite films during the spin-coating process is reported. This is the first time that DRM is used to control perovskite crystallization and improve device performance. Furthermore, the DRM is effective for reducing defects and grain boundaries, improving perovskite crystallization and even improving the TiO₂/perovskite interface. After optimization, the DRM2-treated perovskite solar cell (PSC) obtains the best photoelectric conversion efficiency (PCE) of 16.76% under AM 1.5 G illumination (100 mW cm⁻²) with enhanced J_sc and V_oc compared to the CB-treated PSC.
An efficient immunodetection method for histone modifications in plants.
Nic-Can, Geovanny; Hernández-Castellano, Sara; Kú-González, Angela; Loyola-Vargas, Víctor M; De-la-Peña, Clelia
2013-12-16
Epigenetic mechanisms can be highly dynamic, but the cross-talk among them and with the genome is still poorly understood. Many of these mechanisms work at different places in the cell and at different times of organism development. Covalent histone modifications are one of the most complex and studied epigenetic mechanisms involved in cellular reprogramming and development in plants. Therefore, the knowledge of the spatial distribution of histone methylation in different tissues is important to understand their behavior on specific cells. Based on the importance of epigenetic marks for biology, we present a simplified, inexpensive and efficient protocol for in situ immunolocalization on different tissues such as flowers, buds, callus, somatic embryo and meristematic tissue from several plants of agronomical and biological importance. Here, we fully describe all the steps to perform the localization of histone modifications. Using this method, we were able to visualize the distribution of H3K4me3 and H3K9me2 without loss of histological integrity of tissues from several plants, including Agave tequilana, Capsicum chinense, Coffea canephora and Cedrela odorata, as well as Arabidopsis thaliana. There are many protocols to study chromatin modifications; however, most of them are expensive, difficult and require sophisticated equipment. Here, we provide an efficient protocol for in situ localization of histone methylation that dispenses with the use of expensive and sensitive enzymes. The present method can be used to investigate the cellular distribution and localization of a wide array of proteins, which could help to clarify the biological role that they play at specific times and places in different tissues of various plant species.
Public-Private Investment Partnerships: Efficiency Estimation Methods
Directory of Open Access Journals (Sweden)
Aleksandr Valeryevich Trynov
2016-06-01
Full Text Available The article focuses on assessing the effectiveness of investment projects implemented on the principles of public-private partnership (PPP). This article puts forward the hypothesis that the inclusion of multiplicative economic effects will increase the attractiveness of public-private partnership projects, which in turn will contribute to the more efficient use of budgetary resources. The author proposes a methodological approach and methods of evaluating the economic efficiency of PPP projects. The author’s technique is based upon the synthesis of approaches to evaluation of projects implemented in the private and public sectors and, in contrast to the existing methods, makes it possible to take into account the indirect (multiplicative) effect arising during the implementation of a project. In the article, to estimate the multiplier effect, a model of the regional economy, the social accounting matrix (SAM), was developed. The matrix is based on the data of the Sverdlovsk region for 2013. The genesis of the balance models of economic systems is presented, and the evolution of balance models in the Russian (Soviet) and foreign sources from their emergence up to now is reviewed. It is shown that SAM is widely used in the world for a wide range of applications, primarily to assess the impact of various exogenous factors on the regional economy. In order to refine the estimates of multiplicative effects, the “industry” account of the social accounting matrix was disaggregated in accordance with the All-Russian Classifier of Types of Economic Activities (OKVED). This step makes it possible to consider the particular characteristics of the industry of the estimated investment project. The method was tested on the example of evaluating the effectiveness of the construction of a toll road in the Sverdlovsk region. It is proved that, due to the multiplier effect, the more capital-intensive version of the project may be more beneficial in
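The multiplier logic that a SAM formalizes can be sketched in a few lines. The numbers below are purely hypothetical expenditure shares, not the Sverdlovsk region matrix; the point is only that an exogenous injection d induces a total effect x = (I - A)^(-1) d larger than d itself:

```python
import numpy as np

# A: hypothetical matrix of endogenous expenditure shares among three
# accounts (column sums < 1, so the multiplier matrix exists).
A = np.array([[0.20, 0.10, 0.05],
              [0.15, 0.25, 0.10],
              [0.10, 0.05, 0.30]])
d = np.array([100.0, 0.0, 0.0])      # direct injection into account 1

multiplier_matrix = np.linalg.inv(np.eye(3) - A)
x = multiplier_matrix @ d            # total (direct + multiplicative) effect
total_effect = x.sum()
```

This is why including multiplicative effects raises a project's measured attractiveness: the total effect always exceeds the direct injection when the accounts re-spend part of their income.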
A mathematical method to calculate efficiency of BF3 detectors
International Nuclear Information System (INIS)
Si Fenni; Hu Qingyuan; Peng Taiping
2009-01-01
In order to calculate the absolute efficiency of the BF3 detector, the MCNP/4C code is first applied to calculate the relative efficiency of the BF3 detector, and then the absolute efficiency is obtained through mathematical techniques. Finally, an energy response curve of the BF3 detector for 1-20 MeV neutrons is derived. It turns out that the efficiency of the BF3 detector is relatively uniform for 2-16 MeV neutrons. (authors)
Robust and efficient method for matching features in omnidirectional images
Zhu, Qinyi; Zhang, Zhijiang; Zeng, Dan
2018-04-01
Binary descriptors have been widely used in many real-time applications due to their efficiency. These descriptors are commonly designed for perspective images but perform poorly on omnidirectional images, which are severely distorted. To address this issue, this paper proposes tangent plane BRIEF (TPBRIEF) and adapted log polar grid-based motion statistics (ALPGMS). TPBRIEF projects keypoints to a unit sphere and applies the fixed test set of the BRIEF descriptor on the tangent plane of the unit sphere. The fixed test set is then backprojected onto the original distorted images to construct the distortion-invariant descriptor. TPBRIEF directly enables keypoint detecting and feature describing on original distorted images, whereas other approaches correct the distortion through image resampling, which introduces artifacts and adds time cost. With ALPGMS, omnidirectional images are divided into circular arches named adapted log polar grids. Whether a match is true or false is then determined by simply thresholding the match numbers in a grid pair where the two matched points are located. Experiments show that TPBRIEF greatly improves the feature matching accuracy and ALPGMS robustly removes wrong matches. Our proposed method outperforms the state-of-the-art methods.
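The Hamming-distance matching stage that binary descriptors make cheap can be sketched generically; this brute-force matcher (descriptor sizes, threshold, and data are all illustrative assumptions) is not the TPBRIEF/ALPGMS pipeline itself:

```python
import numpy as np

def hamming_match(desc_a, desc_b, max_dist=40):
    """Brute-force matching of binary descriptors (rows of packed uint8
    bytes, as produced by BRIEF-style descriptors): for each descriptor
    in desc_a, find the nearest descriptor in desc_b by Hamming distance
    and accept the match if it is close enough."""
    # Pairwise Hamming distances via XOR and bit counting.
    x = np.bitwise_xor(desc_a[:, None, :], desc_b[None, :, :])
    dist = np.unpackbits(x, axis=2).sum(axis=2)
    nearest = dist.argmin(axis=1)
    return [(i, j) for i, j in enumerate(nearest) if dist[i, j] <= max_dist]

# Usage: each descriptor matches a lightly corrupted copy of itself.
rng = np.random.default_rng(0)
a = rng.integers(0, 256, size=(5, 32), dtype=np.uint8)   # 256-bit descriptors
b = a.copy()
b[:, 0] ^= 0xFF                  # flip 8 bits in each descriptor
matches = hamming_match(a, b)
```

A verification stage such as ALPGMS would then prune the surviving matches by counting supporting matches in neighbouring grid cells.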
Efficiency of High Order Spectral Element Methods on Petascale Architectures
Hutchinson, Maxwell; Heinecke, Alexander; Pabst, Hans; Henry, Greg; Parsani, Matteo; Keyes, David E.
2016-01-01
High order methods for the solution of PDEs expose a tradeoff between computational cost and accuracy on a per degree of freedom basis. In many cases, the cost increases due to higher arithmetic intensity while affecting data movement minimally. As architectures tend towards wider vector instructions and expect higher arithmetic intensities, the best order for a particular simulation may change. This study highlights preferred orders by identifying the high order efficiency frontier of the spectral element method implemented in Nek5000 and NekBox: the set of orders and meshes that minimize computational cost at fixed accuracy. First, we extract Nek’s order-dependent computational kernels and demonstrate exceptional hardware utilization by hardware-aware implementations. Then, we perform production-scale calculations of the nonlinear single mode Rayleigh-Taylor instability on BlueGene/Q and Cray XC40-based supercomputers to highlight the influence of the architecture. Accuracy is defined with respect to physical observables, and computational costs are measured by the core-hour charge of the entire application. The total number of grid points needed to achieve a given accuracy is reduced by increasing the polynomial order. On the XC40 and BlueGene/Q, polynomial orders as high as 31 and 15 come at no marginal cost per timestep, respectively. Taken together, these observations lead to a strong preference for high order discretizations that use fewer degrees of freedom. From a performance point of view, we demonstrate up to 60% full application bandwidth utilization at scale and achieve ≈1 PFlop/s of compute performance in Nek’s most flop-intense methods.
Highly efficient DNA extraction method from skeletal remains
Directory of Open Access Journals (Sweden)
Irena Zupanič Pajnič
2011-03-01
Full Text Available Background: This paper precisely describes the method of DNA extraction developed to acquire high-quality DNA from Second World War skeletal remains. The same method is also used for molecular genetic identification of unknown decomposed bodies in routine forensic casework where only bones and teeth are suitable for DNA typing. We analysed 109 bones and two teeth from WWII mass graves in Slovenia. Methods: We cleaned the bones and teeth, removed surface contaminants and ground the bones into powder using liquid nitrogen. Prior to isolating the DNA in parallel using the BioRobot EZ1 (Qiagen), the powder was decalcified for three days. The nuclear DNA of the samples was quantified by a real-time PCR method. We acquired autosomal genetic profiles and Y-chromosome haplotypes of the bones and teeth with PCR amplification of microsatellites, and mtDNA haplotypes. For the purpose of traceability in the event of contamination, we prepared elimination databases including genetic profiles of the nuclear and mtDNA of all persons who had been in touch with the skeletal remains in any way. Results: We extracted up to 55 ng DNA/g from the teeth, up to 100 ng DNA/g from the femurs, up to 30 ng DNA/g from the tibias and up to 0.5 ng DNA/g from the humerus. The typing of autosomal and Y-STR loci was successful in all of the teeth, in 98 % of the femurs, and in 75 % to 81 % of the tibias and humerus. The typing of mtDNA was successful in all of the teeth, and in 96 % to 98 % of the bones. Conclusions: We managed to obtain nuclear DNA for successful STR typing from skeletal remains that were over 60 years old. The method of DNA extraction described here has proved to be highly efficient. We obtained 0.8 to 100 ng DNA/g of teeth or bones and complete genetic profiles of autosomal DNA, Y-STR haplotypes, and mtDNA haplotypes from only 0.5 g bone and teeth samples.
Method of oocyte activation affects cloning efficiency in pigs.
Whitworth, Kristin M; Li, Rongfeng; Spate, Lee D; Wax, David M; Rieke, August; Whyte, Jeffrey J; Manandhar, Gaurishankar; Sutovsky, Miriam; Green, Jonathan A; Sutovsky, Peter; Prather, Randall S
2009-05-01
The following experiments compared the efficiency of three fusion/activation protocols following somatic cell nuclear transfer (SCNT) with porcine somatic cells transfected with enhanced green fluorescent protein driven by the chicken beta-actin/rabbit beta-globin hybrid promoter (pCAGG-EGFP). The three protocols included electrical fusion/activation (NT1), electrical fusion/activation followed by treatment with a reversible proteasomal inhibitor MG132 (NT2) and electrical fusion in low Ca(2+) followed by chemical activation with thimerosal/dithiothreitol (NT3). Data were collected at Days 6, 12, 14, 30, and 114 of gestation. Fusion rates, blastocyst-stage mean cell numbers, recovery rates, and pregnancy rates were calculated and compared between protocols. Fusion rates were significantly higher for NT1 and NT2 compared to NT3 (P < 0.05), and the overall pregnancy rate was highest for NT2 (71.4%, n = 28; P < 0.05). All fusion/activation treatments produced live, pCAGG-EGFP positive piglets from SCNT. Treatment with MG132 after fusion/activation of reconstructed porcine embryos was the most effective method when comparing the overall pregnancy rates. The beneficial effect of the NT2 protocol may be due to the stimulation of proteasomes that infiltrate the donor cell nucleus shortly after nuclear transfer. (c) 2008 Wiley-Liss, Inc.
An efficient method of reducing glass dispersion tolerance sensitivity
Sparrold, Scott W.; Shepard, R. Hamilton
2014-12-01
Constraining the Seidel aberrations of optical surfaces is a common technique for relaxing tolerance sensitivities in the optimization process. We offer an observation that a lens's Abbe number tolerance is directly related to the magnitude by which its longitudinal and transverse color are permitted to vary in production. Based on this observation, we propose a computationally efficient and easy-to-use merit function constraint for relaxing dispersion tolerance sensitivity. Using the relationship between an element's chromatic aberration and dispersion sensitivity, we derive a fundamental limit for lens scale and power that is capable of achieving high production yield for a given performance specification, which provides insight on the point at which lens splitting or melt fitting becomes necessary. The theory is validated by comparing its predictions to a formal tolerance analysis of a Cooke Triplet, and then applied to the design of a 1.5x visible linescan lens to illustrate optimization for reduced dispersion sensitivity. A selection of lenses in high volume production is then used to corroborate the proposed method of dispersion tolerance allocation.
An efficient iterative method for the generalized Stokes problem
Energy Technology Data Exchange (ETDEWEB)
Sameh, A. [Univ. of Minnesota, Twin Cities, MN (United States); Sarin, V. [Univ. of Illinois, Urbana, IL (United States)
1996-12-31
This paper presents an efficient iterative scheme for the generalized Stokes problem, which arises frequently in the simulation of time-dependent Navier-Stokes equations for incompressible fluid flow. The general form of the linear system is

    [ A   B ] [u]   [f]
    [ B^T 0 ] [p] = [0]

where A = αM + νT is an n x n symmetric positive definite matrix, in which M is the mass matrix, T is the discrete Laplace operator, α and ν are positive constants proportional to the inverses of the time-step Δt and the Reynolds number Re respectively, and B is the discrete gradient operator of size n x k (k < n). Even though the matrix A is symmetric and positive definite, the system is indefinite due to the incompressibility constraint (B^T u = 0). This causes difficulties both for iterative methods and commonly used preconditioners. Moreover, depending on the ratio α/ν, A behaves like the mass matrix M at one extreme and the Laplace operator T at the other, thus complicating the issue of preconditioning.
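A toy version of the symmetric indefinite saddle-point system above can be assembled and solved with a Krylov method suited to such systems, e.g. MINRES. This is a generic illustration of the problem structure (1D Laplacian for T, lumped mass for M, a random full-rank B), not the paper's scheme or preconditioner.

```python
import numpy as np
from scipy.sparse import bmat, csr_matrix, diags
from scipy.sparse.linalg import minres

rng = np.random.default_rng(0)
n, k = 50, 10
alpha, nu = 1.0, 0.01                       # ~ 1/dt and 1/Re (illustrative)

# M: lumped mass matrix (identity here), T: 1D discrete Laplacian
M = diags([np.ones(n)], [0])
T = diags([2 * np.ones(n), -np.ones(n - 1), -np.ones(n - 1)], [0, 1, -1])
A = csr_matrix(alpha * M + nu * T)          # symmetric positive definite
B = csr_matrix(rng.standard_normal((n, k))) # stand-in discrete gradient

# Symmetric indefinite saddle-point system [[A, B], [B^T, 0]]
K = bmat([[A, B], [B.T, None]]).tocsr()
rhs = np.concatenate([rng.standard_normal(n), np.zeros(k)])  # enforces B^T u = 0

x, info = minres(K, rhs)                    # MINRES handles symmetric indefinite K
```

MINRES (rather than CG) is the natural unpreconditioned choice here precisely because the incompressibility block makes K indefinite, which is the difficulty the abstract highlights.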
Method for efficient establishment of technical biodosimetry competence
International Nuclear Information System (INIS)
Stricklin, D.; Jaworska, Alicja; Arvidsson, E.
2007-01-01
The current gold standard in biodosimetry, the dicentric assay, requires documented technical competence. Expertise is developed over time by evaluation of thousands of metaphases. Competence is documented through establishment of a dose-response curve, required by any service laboratory performing biodosimetry. Consistent and reliable evaluations must be established for new observers that might contribute to analyses for biological dose assessments. Discrepancies in evaluations jeopardize the reliability of assessments. The Swedish Defence Research Agency (FOI) together with the Norwegian Radiation Protection Authority (NRPA) conducted an inter-calibration exercise for the purpose of establishing comparable scoring criteria for evaluation of aberrations in metaphases. The exercise revealed specific aberrations that were difficult to identify and were consistent sources of uncertainty. Subsequently, a report detailing the FOI's scoring criteria was developed with visual examples, and a strategy for establishing technical competence in metaphase scoring in an efficient manner evolved. Key components of the strategy are the review of guidance for biodosimetry, performance of inter-calibration exercises with previously established data sets, review of incongruous evaluations with a well-established observer, follow-up exercises depending on the initial outcome, and inter-comparisons to document agreement. Methods suggested here could be applied in training of new personnel. Documentation of methods in other laboratories could facilitate more consistent scoring criteria among the biodosimetry community, a problem observed in previous international inter-comparisons. Improved consistency among biodosimetry laboratories could facilitate reliably sharing the work load among different members of the biodosimetry community in the event of a mass casualty accident.
Evolutionary dynamics on graphs: Efficient method for weak selection
Fu, Feng; Wang, Long; Nowak, Martin A.; Hauert, Christoph
2009-04-01
Investigating the evolutionary dynamics of game theoretical interactions in populations where individuals are arranged on a graph can be challenging in terms of computation time. Here, we propose an efficient method to study any type of game on arbitrary graph structures for weak selection. In this limit, evolutionary game dynamics represents a first-order correction to neutral evolution. Spatial correlations can be empirically determined under neutral evolution and provide the basis for formulating the game dynamics as a discrete Markov process by incorporating a detailed description of the microscopic dynamics based on the neutral correlations. This framework is then applied to one of the most intriguing questions in evolutionary biology: the evolution of cooperation. We demonstrate that the degree heterogeneity of a graph impedes cooperation and that the success of tit for tat depends not only on the number of rounds but also on the degree of the graph. Moreover, considering the mutation-selection equilibrium shows that the symmetry of the stationary distribution of states under weak selection is skewed in favor of defectors for larger selection strengths. In particular, degree heterogeneity—a prominent feature of scale-free networks—generally results in a more pronounced increase in the critical benefit-to-cost ratio required for evolution to favor cooperation as compared to regular graphs. This conclusion is corroborated by an analysis of the effects of population structures on the fixation probabilities of strategies in general 2×2 games for different types of graphs. Computer simulations confirm the predictive power of our method and illustrate the improved accuracy as compared to previous studies.
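The weak-selection limit the abstract works in can be sanity-checked against a known anchor: under neutral drift (selection intensity delta = 0), death-birth updating on a regular graph gives a single mutant a fixation probability of exactly 1/N. The sketch below is a direct Monte Carlo simulation on a cycle graph, not the authors' neutral-correlation Markov-process method; the payoff matrix and parameters are illustrative.

```python
import numpy as np

def fixation_probability(n_nodes, delta, payoff, trials, seed=0):
    """Monte Carlo fixation probability of a single mutant under death-birth
    updating on a cycle graph. payoff[a][b] is the payoff of strategy a
    against strategy b; delta is the selection intensity (0 = neutral)."""
    rng = np.random.default_rng(seed)
    fixed = 0
    for _ in range(trials):
        state = [0] * n_nodes
        state[rng.integers(n_nodes)] = 1            # place one mutant
        count = 1
        while 0 < count < n_nodes:
            dead = int(rng.integers(n_nodes))       # uniform death event
            nbrs = [(dead - 1) % n_nodes, (dead + 1) % n_nodes]
            fit = []
            for v in nbrs:                          # fitness from average payoff
                left = state[(v - 1) % n_nodes]
                right = state[(v + 1) % n_nodes]
                avg = 0.5 * (payoff[state[v]][left] + payoff[state[v]][right])
                fit.append(1.0 + delta * avg)
            winner = nbrs[0] if rng.random() < fit[0] / (fit[0] + fit[1]) else nbrs[1]
            count += state[winner] - state[dead]
            state[dead] = state[winner]
        fixed += (count == n_nodes)
    return fixed / trials

# Sanity anchor: neutral drift (delta = 0) on a regular graph gives 1/N.
p_neutral = fixation_probability(10, 0.0, [[0, 0], [0, 0]], trials=1000)
```

The paper's point is precisely that such brute-force simulation becomes expensive; the neutral-correlation framework replaces it with a first-order correction around this delta = 0 baseline.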
Directory of Open Access Journals (Sweden)
Jingyu Sun
2014-07-01
Full Text Available To survive in the current shipbuilding industry, it is of vital importance for shipyards to have the accuracy of ship components evaluated efficiently during most of the manufacturing steps. Evaluating component accuracy by comparing each component's point cloud data, scanned by laser scanners, against the ship's design data in CAD format cannot be processed efficiently when (1) the components extracted from the point cloud data include irregular obstacles, or when (2) the registration of the two data sets has no clear direction setting. This paper presents reformative point cloud data processing methods to solve these problems. K-d tree construction of the point cloud data speeds up the neighbor search for each point. A region-growing method performed on the neighbor points of a seed point extracts the continuous part of a component, while curved-surface fitting and B-spline curve fitting at the edge of the continuous part recognize neighboring domains of the same component divided by obstacles' shadows. The ICP (Iterative Closest Point) algorithm conducts a registration of the two data sets after the proper registration direction is decided by principal component analysis. In experiments conducted at the shipyard, 200 curved shell plates were extracted from the scanned point cloud data, and registrations were conducted between them and the designed CAD data using the proposed methods for an accuracy evaluation. Results show that the proposed methods support efficient point cloud data processing for accuracy evaluation in practice.
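The k-d tree plus region-growing step described above can be sketched in a few lines: a k-d tree answers radius queries quickly, and the region grows outward from a seed without crossing gaps. This is a simplified stand-in for the paper's component-extraction step (2D points, a fixed radius, no surface fitting); the data and threshold are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def region_grow(points, seed_idx, radius):
    """Grow a connected region from a seed point using k-d tree radius
    queries: points within `radius` of any already-accepted point join."""
    tree = cKDTree(points)
    visited = {seed_idx}
    frontier = [seed_idx]
    while frontier:
        idx = frontier.pop()
        for nbr in tree.query_ball_point(points[idx], r=radius):
            if nbr not in visited:
                visited.add(nbr)
                frontier.append(nbr)
    return sorted(visited)

# Two well-separated clusters: growing from a seed in the first cluster must
# not leak across the gap (an "obstacle shadow" in the paper's terms).
cluster_a = np.stack(np.meshgrid(np.arange(5), np.arange(5)), -1).reshape(-1, 2) * 0.1
cluster_b = cluster_a + 10.0
points = np.vstack([cluster_a, cluster_b])
region = region_grow(points, seed_idx=0, radius=0.15)
```

The k-d tree makes each radius query roughly logarithmic in the number of points, which is what makes growing regions over millions of scanned points tractable.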
CHOICE OF EFFICIENT METHOD OF ADDING FLOUR FROM BUCKWHEAT BRAN
Directory of Open Access Journals (Sweden)
E. I. Ponomareva
2015-01-01
with cardiovascular diseases and renal insufficiency. Comparing methods of adding buckwheat bran flour to the dough for no-salt bread, we found that adding the enricher in the sponge provides the best physical-chemical and structural-mechanical properties of the bakery product, and this method can be recommended for mass production of the bread at bakery plants.
Yucel, Abdulkadir C.; Bagci, Hakan; Michielssen, Eric
2015-01-01
An efficient method for statistically characterizing multiconductor transmission line (MTL) networks subject to a large number of manufacturing uncertainties is presented. The proposed method achieves its efficiency by leveraging a high
Efficient parsimony-based methods for phylogenetic network reconstruction.
Jin, Guohua; Nakhleh, Luay; Snir, Sagi; Tuller, Tamir
2007-01-15
Phylogenies, the evolutionary histories of groups of organisms, play a major role in representing relationships among biological entities. Although many biological processes can be effectively modeled as tree-like relationships, others, such as hybrid speciation and horizontal gene transfer (HGT), result in networks, rather than trees, of relationships. Hybrid speciation is a significant evolutionary mechanism in plants, fish and other groups of species. HGT plays a major role in bacterial genome diversification and is a significant mechanism by which bacteria develop resistance to antibiotics. Maximum parsimony is one of the most commonly used criteria for phylogenetic tree inference. Roughly speaking, inference based on this criterion seeks the tree that minimizes the amount of evolution. In 1990, Jotun Hein proposed using this criterion for inferring the evolution of sequences subject to recombination. Preliminary results on small synthetic datasets by Nakhleh et al. (2005) demonstrated the criterion's application to phylogenetic network reconstruction in general and HGT detection in particular. However, the naive algorithms used by the authors are inapplicable to large datasets due to their demanding computational requirements. Further, no rigorous theoretical analysis of computing the criterion was given, nor was it tested on biological data. In the present work we prove that the problem of scoring the parsimony of a phylogenetic network is NP-hard and provide an improved fixed-parameter tractable algorithm for it. Further, we devise efficient heuristics for parsimony-based reconstruction of phylogenetic networks. We test our methods on both synthetic and biological data (the rbcL gene in bacteria) and obtain very promising results.
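The tree case of the parsimony criterion (which the paper proves NP-hard to extend to networks) is computed in linear time by the classic Fitch algorithm: a postorder pass over a rooted binary tree counting the state-set unions. A minimal sketch for one character, with the tree encoded as nested tuples; this is the standard Fitch small-parsimony algorithm, not the paper's network-scoring method.

```python
def fitch_score(tree):
    """Small-parsimony (Fitch) score of one character on a rooted binary tree.
    Leaves are state strings like 'A'; internal nodes are (left, right) tuples."""
    changes = 0

    def post(node):
        nonlocal changes
        if isinstance(node, str):        # leaf: observed state
            return {node}
        left, right = (post(child) for child in node)
        inter = left & right
        if inter:                        # states agree: no change needed
            return inter
        changes += 1                     # disagreement costs one substitution
        return left | right

    post(tree)
    return changes
```

For a network, the same score must be minimized over the exponentially many trees the network displays, which is where the NP-hardness enters.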
Efficient screening methods for glucosyltransferase genes in Lactobacillus strains
Kralj, S; van Geel-schutten, GH; van der Maarel, MJEC; Dijkhuizen, L
Limited information is available about homopolysaccharide synthesis in the genus Lactobacillus . Using efficient screening techniques, extracellular glucosyltransferase (GTF) enzyme activity, resulting in alpha-glucan synthesis from sucrose, was detected in various lactobacilli. PCR with degenerate
Efficient screening methods for glucosyltransferase genes in Lactobacillus strains
Kralj, S.; Geel van - Schutten, G.H.; Maarel, M.J.E.C. van der; Dijkhuizen, L.
2003-01-01
Limited information is available about homopolysaccharide synthesis in the genus Lactobacillus. Using efficient screening techniques, extracellular glucosyltransferase (GTF) enzyme activity, resulting in α-glucan synthesis from sucrose, was detected in various lactobacilli. PCR with degenerate
An Improved, Highly Efficient Method for the Synthesis of Bisphenols
Directory of Open Access Journals (Sweden)
L. S. Patil
2011-01-01
Full Text Available An efficient synthesis of bisphenols is described, by condensation of substituted phenols with the corresponding cyclic ketones in the presence of cetyltrimethylammonium chloride and 3-mercaptopropionic acid as catalyst, giving products of extremely high purity in high yields.
Advanced scoring method of eco-efficiency in European cities.
Moutinho, Victor; Madaleno, Mara; Robaina, Margarita; Villar, José
2018-01-01
This paper analyzes a set of selected German and French cities' performance in terms of the relative behavior of their eco-efficiencies, computed as the ratio of their gross domestic product (GDP) over their CO2 emissions. For this analysis, eco-efficiency scores of the selected cities are computed using the data envelopment analysis (DEA) technique, taking the eco-efficiencies as outputs, and the inputs being the energy consumption, the population density, the labor productivity, the resource productivity, and the patents per inhabitant. Once DEA results are analyzed, the Malmquist productivity indexes (MPI) are used to assess the time evolution of the technical efficiency, technological efficiency, and productivity of the cities over the window periods 2000 to 2005 and 2005 to 2008. Some of the main conclusions are that (1) most of the analyzed cities seem to have suboptimal scales, being one of the causes of their inefficiency; (2) there is evidence that high GDP over CO2 emissions does not imply high eco-efficiency scores, meaning that DEA like approaches are useful to complement more simplistic ranking procedures, pointing out potential inefficiencies at the input levels; (3) efficiencies performed worse during the period 2000-2005 than during the period 2005-2008, suggesting the possibility of corrective actions taken during or at the end of the first period but impacting only on the second period, probably due to an increasing environmental awareness of policymakers and governors; and (4) MPI analysis shows a positive technological evolution of all cities, according to the general technological evolution of the reference cities, reflecting a generalized convergence of most cities to their technological frontier and therefore an evolution in the right direction.
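The DEA scores the abstract relies on come from solving one small linear program per decision-making unit. Below is a generic input-oriented CCR DEA sketch (the textbook envelopment form, not necessarily the exact DEA variant the paper uses), with a two-unit toy example where the second unit uses twice the input for the same output and so scores 0.5.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR DEA efficiency scores (envelopment form).
    X: (m, J) inputs; Y: (s, J) outputs, one column per decision-making unit.
    For each unit o: min theta s.t. X@lam <= theta*x_o, Y@lam >= y_o, lam >= 0."""
    m, J = X.shape
    s = Y.shape[0]
    thetas = []
    for o in range(J):
        c = np.zeros(J + 1)                  # variables z = [lam_1..lam_J, theta]
        c[-1] = 1.0                          # minimise theta
        A_ub = np.vstack([
            np.hstack([X, -X[:, [o]]]),      # X@lam - theta*x_o <= 0
            np.hstack([-Y, np.zeros((s, 1))])  # -Y@lam <= -y_o
        ])
        b_ub = np.concatenate([np.zeros(m), -Y[:, o]])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * J + [(None, None)])
        thetas.append(res.x[-1])
    return np.array(thetas)

# Toy example: unit 2 needs twice the input for the same output -> score 0.5.
scores = dea_ccr_input(np.array([[1.0, 2.0]]), np.array([[1.0, 1.0]]))
```

This illustrates conclusion (2) above: a unit can have a decent output/input ratio on one dimension yet still be revealed inefficient once all inputs enter the LP jointly.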
DEFF Research Database (Denmark)
Le, T.H.A.; Pham, D. T.; Canh, Nam Nguyen
2010-01-01
Both the efficient and weakly efficient sets of an affine fractional vector optimization problem, in general, are neither convex nor given explicitly. Optimization problems over one of these sets are thus nonconvex. We propose two methods for optimizing a real-valued function over the efficient and weakly efficient sets of an affine fractional vector optimization problem. The first method is a local one. By using a regularization function, we reformulate the problem into a standard smooth mathematical programming problem that allows applying available methods for smooth programming. In case the objective function is linear, we have investigated a global algorithm based upon a branch-and-bound procedure. The algorithm uses Lagrangian bounding coupled with a simplicial bisection in the criteria space. Preliminary computational results show that the global algorithm is promising.
Radiochemical methods to enhance efficiency of α-spectral measurements
International Nuclear Information System (INIS)
Silkina, G.P.; Artem'ev, O.I.
2001-01-01
The paper describes possible ways to improve a plutonium radiochemical separation technique developed in the Khlopin Radium Institute and modify it to account for the site-specific features of samples from the former Semipalatinsk test site (STS) and enhance the alpha spectrometry efficiency. (author)
Absolute efficiency calibration of HPGe detector by simulation method
International Nuclear Information System (INIS)
Narayani, K.; Pant, Amar D.; Verma, Amit K.; Bhosale, N.A.; Anilkumar, S.
2018-01-01
High-resolution gamma-ray spectrometry with HPGe detectors is a powerful radioanalytical technique for estimating the activity of various radionuclides. In the present work, absolute efficiency calibration of the HPGe detector was carried out using a Monte Carlo simulation technique, and the results are compared with those obtained by experiment using the standard radionuclides 152Eu and 133Ba. The coincidence summing correction factors for the measurement of these nuclides were also calculated.
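The Monte Carlo idea behind such efficiency calibrations can be shown on its simplest geometric piece: sample isotropic photon directions from a point source and count those that intersect the detector face, then compare against the closed-form solid-angle fraction. This toy sketch ignores attenuation, scattering, and detector response, which a real simulation of an HPGe detector must model; the source-detector geometry is illustrative.

```python
import numpy as np

def geometric_efficiency_mc(distance, radius, n=200_000, seed=1):
    """Monte Carlo estimate of the geometric (solid-angle) efficiency of a
    disc detector facing an on-axis point source: a toy stand-in for full
    photon-transport simulation (no attenuation or detector response)."""
    rng = np.random.default_rng(seed)
    cos_t = rng.uniform(-1.0, 1.0, n)        # isotropic: cos(theta) uniform
    sin_t = np.sqrt(1.0 - cos_t**2)
    forward = cos_t > 0                      # only forward rays can hit
    rho = distance * sin_t[forward] / cos_t[forward]  # radial hit position
    hits = np.count_nonzero(rho <= radius)
    return hits / n

def geometric_efficiency_exact(distance, radius):
    """Closed-form solid-angle fraction for an on-axis disc."""
    return 0.5 * (1.0 - distance / np.hypot(distance, radius))

eff_mc = geometric_efficiency_mc(5.0, 3.0)       # e.g. 5 cm away, 3 cm radius
eff_exact = geometric_efficiency_exact(5.0, 3.0)
```

Agreement between the sampled and analytic values is the basic consistency check one runs before layering on photon interactions and coincidence-summing corrections.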
Efficient screening methods for glucosyltransferase genes in Lactobacillus strains
Kralj, S; van Geel-schutten, GH; van der Maarel, MJEC; Dijkhuizen, L
2003-01-01
Limited information is available about homopolysaccharide synthesis in the genus Lactobacillus . Using efficient screening techniques, extracellular glucosyltransferase (GTF) enzyme activity, resulting in alpha-glucan synthesis from sucrose, was detected in various lactobacilli. PCR with degenerate primers based on homologous boxes of known glucosyltransferase (gtf ) genes of lactic acid bacteria strains allowed cloning of fragments of 10 putative gtf genes from eight different glucan produci...
An efficient visualization method for analyzing biometric data
Rahmes, Mark; McGonagle, Mike; Yates, J. Harlan; Henning, Ronda; Hackett, Jay
2013-05-01
We introduce a novel application for biometric data analysis. This technology can be used as part of a unique and systematic approach designed to augment existing processing chains. Our system provides image quality control and analysis capabilities. We show how analysis and efficient visualization are used as part of an automated process. The goal of this system is to provide a unified platform for the analysis of biometric images that reduces manual effort and increases the likelihood of a match being brought to an examiner's attention from either a manual or lights-out application. We discuss the functionality of FeatureSCOPE™, which provides an efficient tool for feature analysis and quality control of biometric extracted features. Biometric databases must be checked for accuracy across a large volume of data attributes. Our solution accelerates the review of features by a factor of up to 100 times. Qualitative results and the cost reduction achieved by efficient parallel visual review for quality control are presented. Our process automatically sorts and filters features for examination, and packs these into a condensed view. An analyst can then rapidly page through screens of features and flag and annotate outliers as necessary.
Efficiency of snake sampling methods in the Brazilian semiarid region.
Mesquita, Paula C M D; Passos, Daniel C; Cechin, Sonia Z
2013-09-01
The choice of sampling methods is a crucial step in every field survey in herpetology. In countries where time and financial support are limited, the choice of methods is critical. The methods used to sample snakes often lack objective criteria, and tradition has apparently weighed more heavily than such criteria when making the choice. Consequently, studies using non-standardized methods are frequently found in the literature. We compared four commonly used methods for sampling snake assemblages in a semiarid area in Brazil. We compared the efficacy of each method based on cost-benefit in terms of the number of individuals and species captured, time, and financial investment. We found that pitfall traps were the least effective method in all aspects evaluated, and they were not complementary to the other methods in terms of species abundance and assemblage structure. We conclude that methods can only be considered complementary if they are standardized to the objectives of the study. The use of pitfall traps in short-term surveys of the snake fauna in areas with shrubby vegetation and stony soil is not recommended.
An efficient direct method for image registration of flat objects
Nikolaev, Dmitry; Tihonkih, Dmitrii; Makovetskii, Artyom; Voronin, Sergei
2017-09-01
Image alignment of rigid surfaces is a rapidly developing area of research with many practical applications. Alignment methods can be roughly divided into two types: feature-based methods and direct methods. The well-known SURF and SIFT algorithms are examples of feature-based methods. Direct methods are those that exploit pixel intensities without resorting to image features; image-based deformation is a general direct method for aligning images of deformable objects in 3D space. Nevertheless, it is not well suited to the registration of images of 3D rigid objects, since the underlying structure cannot be directly evaluated. In this article, we propose a model that is suitable for image alignment of rigid flat objects under various illumination models. The brightness consistency assumption is used to reconstruct the optimal geometrical transformation. Computer simulation results are provided to illustrate the performance of the proposed algorithm for computing the correspondence between pixels of two images.
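The brightness consistency idea behind direct methods can be sketched in its simplest setting, a pure translation: linearize image intensity around the current alignment and solve the resulting least-squares system in the image gradients. This is the generic Lucas-Kanade-style step, not the paper's flat-object/illumination model; the synthetic Gaussian scene and the shift values are illustrative.

```python
import numpy as np

def estimate_translation(img0, img1):
    """One linearised step of direct (intensity-based) registration under a
    pure-translation model: least-squares solution of the brightness-constancy
    equations gx*dx + gy*dy = -(img1 - img0) over all pixels."""
    gy, gx = np.gradient(img0)               # gradients along rows (y), cols (x)
    A = np.stack([gx.ravel(), gy.ravel()], axis=1)
    b = -(img1 - img0).ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy

# Synthetic smooth scene and a sub-pixel shifted copy (assumed test data).
x = np.arange(64)
X, Y = np.meshgrid(x, x)
def scene(shift_x, shift_y):
    return np.exp(-(((X - shift_x) - 32)**2 + ((Y - shift_y) - 32)**2) / 50.0)

img0 = scene(0.0, 0.0)
img1 = scene(0.4, -0.3)                      # content moved by (0.4, -0.3) px
dx, dy = estimate_translation(img0, img1)
```

For larger motions or richer transformations (affine, homography, illumination change) the same normal-equation structure is iterated inside a Gauss-Newton loop, usually on an image pyramid.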
Generalized Truncated Methods for an Efficient Solution of Retrial Systems
Directory of Open Access Journals (Sweden)
Ma Jose Domenech-Benlloch
2008-01-01
Full Text Available We are concerned with the analytic solution of multiserver retrial queues including the impatience phenomenon. As there are no closed-form solutions for these systems, approximate methods are required. We propose two different generalized truncated methods to solve this type of system effectively. The proposed methods are based on the homogenization of the state space beyond a given number of users in the retrial orbit. We compare the proposed methods with the best-known methods in the literature across a wide range of scenarios. We conclude that the proposed methods generally outperform previous proposals in terms of accuracy for the most common performance parameters used in retrial systems, with only moderate growth in computational cost.
Efficient Composite Repair Methods for Launch Vehicles, Phase I
National Aeronautics and Space Administration — Polymer matrix composites are increasingly replacing traditional metallic materials in NASA launch vehicles. However, the repair and subsequent inspection methods...
Energy Technology Data Exchange (ETDEWEB)
Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Huang, Robert [The Cadmus Group, Portland, OR (United States); Masanet, Eric [Northwestern Univ., Evanston, IL (United States)
2017-11-02
This chapter focuses on IT measures in the data center and examines the techniques and analysis methods used to verify savings that result from improving the efficiency of two specific pieces of IT equipment: servers and data storage.
A Method for Efficient Searching at Online Shopping
Sanjo, Tomomi; Nagata, Moiro
In recent years, online shopping has become widespread. However, users cannot efficiently find the items they want in online markets. This paper proposes an engine to find items easily in an online market. The engine has the following facilities. First, it presents information in a fixed format. Second, the user can find items by selected keywords. Third, it presents only necessary information by using the user's history. Finally, it has a customize function for each user. Moreover, the system asks users to download a page of recommended items. We show the effectiveness of our proposal with experiments.
Method of increasing efficiency of uranium sorption from acid pulp
International Nuclear Information System (INIS)
Parobek, P.; Hinterholzinger, O.; Baloun, S.; Homolka, V.; Vanek, J.; Vebr, Z.
1989-01-01
Acid pulp containing uranium is adjusted to pH 2.5 to 4 with alkaline agents, such as alkaline pulp, lime milk, finely ground limestone or soda, or a combination thereof. The treated pulp is put into contact with an ion exchanger whose pH has been adjusted to a range of 2.5 to 4. Partial pulp neutralization causes the hydrolysis of the iron present, an overall reduction in salt content, and a significant increase in the ion exchanger sorption capacity and thus the overall sorption efficiency. The quality of the eluate and of the uranium concentrate improves. (B.S.)
A highly efficient method for Agrobacterium mediated transformation ...
African Journals Online (AJOL)
An Agrobacterium mediated transformation method was developed for the Thai rice variety, Pathumthani 1 (PT1), and the Indian rice variety, Pokkali (PKL). Various aspects of the transformation method, including callus induction, callus age, Agrobacterium concentration and co-cultivation period were examined, in order to ...
Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance
Happola, Juho
2017-09-19
Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.
An efficient method for studying and analysing the propagation ...
African Journals Online (AJOL)
The paper describes a method, based on the solution of travelling-wave phenomena in polyphase systems by the use of matrix methods, of deriving the basic matrices of the conductor system taking into account the effect of conductor geometry, conductor internal impedance and the earth-return path. It is then shown how ...
Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance
Happola, Juho
2017-01-01
Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.
Efficient Calculation of Near Fields in the FDTD Method
DEFF Research Database (Denmark)
Franek, Ondrej
2011-01-01
When calculating frequency-domain near fields by the FDTD method, almost 50 % reduction in memory and CPU operations can be achieved if only E-fields are stored during the main time-stepping loop and the H-fields are computed later. An improved method of obtaining the H-fields from Faraday's Law is presented.
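The memory saving rests on two pieces: a running DFT accumulates only the complex E-field phasor during time stepping, and the H-phasor can then be derived afterwards from Faraday's law, H = curl(E) / (-i*omega*mu). The sketch below demonstrates the first piece on a single scalar sample; the waveform, frequency, and step count are illustrative, not taken from the paper.

```python
import numpy as np

# Running discrete Fourier transform of one field sample during time stepping:
# only the complex phasor is accumulated, never the full time history.
f0 = 1.0e9                  # frequency of interest (Hz), illustrative
dt = 1.0 / (f0 * 50)        # 50 samples per period
n_steps = 500               # exactly 10 periods

phasor = 0.0 + 0.0j
for n in range(n_steps):
    t = n * dt
    e_field = 3.0 * np.cos(2 * np.pi * f0 * t + 0.7)   # stand-in FDTD E sample
    phasor += e_field * np.exp(-2j * np.pi * f0 * t) * dt

# Over an integer number of periods the accumulator equals
# (amplitude/2) * exp(i*phase) * T_total, so amplitude and phase recover exactly.
amplitude = 2.0 * abs(phasor) / (n_steps * dt)
phase = np.angle(phasor)
```

In an actual FDTD run this accumulation is done per cell for the E components only; the H phasors follow from a discrete curl of the stored E phasors, which is the roughly 50 % saving the abstract quotes.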
An efficient non-dominated sorting method for evolutionary algorithms.
Fang, Hongbing; Wang, Qian; Tu, Yi-Cheng; Horstemeyer, Mark F
2008-01-01
We present a new non-dominated sorting algorithm to generate the non-dominated fronts in multi-objective optimization with evolutionary algorithms, particularly the NSGA-II. The non-dominated sorting algorithm used by NSGA-II has a time complexity of O(MN^2) in generating non-dominated fronts in one generation (iteration) for a population size N and M objective functions. Since generating non-dominated fronts takes the majority of total computational time (excluding the cost of fitness evaluations) of NSGA-II, making this algorithm faster will significantly improve the overall efficiency of NSGA-II and other genetic algorithms using non-dominated sorting. The new non-dominated sorting algorithm proposed in this study reduces the number of redundant comparisons existing in the algorithm of NSGA-II by recording the dominance information among solutions from their first comparisons. By utilizing a new data structure called the dominance tree and the divide-and-conquer mechanism, the new algorithm is faster than NSGA-II for different numbers of objective functions. Although the number of solution comparisons by the proposed algorithm is close to that of NSGA-II when the number of objectives becomes large, the total computational time shows that the proposed algorithm still has better efficiency because of the adoption of the dominance tree structure and the divide-and-conquer mechanism.
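The O(MN^2) baseline the paper improves on can be written down directly: repeatedly peel off the set of solutions not dominated by any remaining solution. This sketch is that naive reference algorithm (for minimization), not the paper's dominance-tree method.

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one (minimisation convention)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(pop):
    """Baseline NSGA-II-style sort into fronts: the O(M N^2) reference point
    for the paper's faster dominance-tree algorithm."""
    remaining = list(range(len(pop)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(pop[j], pop[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

Every pair is compared from scratch on each pass; the dominance tree avoids exactly these repeated comparisons by caching dominance relations from the first pass.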
Harada, Ryuhei; Nakamura, Tomotake; Shigeta, Yasuteru
2016-03-30
As an extension of the Outlier FLOODing (OFLOOD) method [Harada et al., J. Comput. Chem. 2015, 36, 763], the sparsity of the outliers defined by a hierarchical clustering algorithm, FlexDice, was considered to achieve an efficient conformational search as sparsity-weighted "OFLOOD." In OFLOOD, FlexDice detects areas of sparse distribution as outliers. The outliers are regarded as candidates that have high potential to promote conformational transitions and are employed as initial structures for conformational resampling by restarting molecular dynamics simulations. When detecting outliers, FlexDice assigns each outlier a rank in the hierarchy, which relates to sparsity in the distribution. In this study, we define lower rank (first ranked), medium rank (second ranked), and highest rank (third ranked) outliers, respectively. For instance, the first-ranked outliers are located in a given conformational space away from the clusters (highly sparse distribution), whereas the third-ranked outliers lie near the clusters (a moderately sparse distribution). To achieve the conformational search efficiently, resampling from the outliers with a given rank is performed. As demonstrations, this method was applied to several model systems: alanine dipeptide, Met-enkephalin, Trp-cage, T4 lysozyme, and glutamine binding protein. In each demonstration, the present method successfully reproduced transitions among metastable states. In particular, the first-ranked OFLOOD strongly accelerated the exploration of conformational space by expanding its edges. In contrast, the third-ranked OFLOOD intensively reproduced local transitions among neighboring metastable states. For quantitative evaluation of the sampled snapshots, free energy calculations were performed in combination with umbrella sampling, providing rigorous landscapes of the biomolecules. © 2015 Wiley Periodicals, Inc.
A simple and efficient electrochemical reductive method for ...
Indian Academy of Sciences (India)
Administrator
This approach opens up a new, practical and green reducing method to prepare large-scale graphene. ... has the following significant advantages: (1) It is simple to operate. ...
Cholinesterase assay by an efficient fixed time endpoint method
Directory of Open Access Journals (Sweden)
Mónica Benabent
2014-01-01
The method may be adapted to the user's needs by modifying the enzyme concentration, and applied to test many samples simultaneously in parallel, e.g. for complex kinetics experiments with organophosphate inhibitors in different tissues.
Efficient and effective implementation of alternative project delivery methods.
2017-05-01
Over the past decade, the Maryland Department of Transportation State Highway : Administration (MDOT SHA) has implemented Alternative Project Delivery (APD) methods : in a number of transportation projects. While these innovative practices have produ...
Efficient k⋅p method for the calculation of total energy and electronic density of states
Iannuzzi, Marcella; Parrinello, Michele
2001-01-01
An efficient method for calculating the electronic structure of large systems with fully converged BZ sampling is presented. The method is based on a k·p-like approximation developed in the framework of density functional perturbation theory. The reliability and efficiency of the method are demonstrated in test calculations on Ar and Si supercells.
An Efficient Method for Electron-Atom Scattering Using Ab-initio Calculations
Energy Technology Data Exchange (ETDEWEB)
Xu, Yuan; Yang, Yonggang; Xiao, Liantuan; Jia, Suotang [Shanxi University, Taiyuan (China)
2017-02-15
We present an efficient method based on ab-initio calculations to investigate electron-atom scattering. These calculations take advantage of methods implemented in standard quantum chemistry programs. The new approach is applied to electron-helium scattering. The results are compared with experimental and other theoretical references to demonstrate the efficiency of our method.
An efficient motion-resistant method for wearable pulse oximeter.
Yan, Yong-Sheng; Zhang, Yuan-Ting
2008-05-01
Reduction of motion artifact and power saving are crucial in designing a wearable pulse oximeter for long-term telemedicine application. In this paper, a novel algorithm, the minimum correlation discrete saturation transform (MCDST), has been developed for the estimation of arterial oxygen saturation (SaO2), based on an optical model derived from photon diffusion analysis. Simulation shows that the new MCDST algorithm is more robust at low SNRs than the clinically verified motion-resistant algorithm, the discrete saturation transform (DST). Further, experiments with different severities of motion demonstrate that MCDST performs slightly better than the DST algorithm. Moreover, MCDST is more computationally efficient than DST because the former uses linear algebra instead of the time-consuming adaptive filter used by the latter, which indicates that MCDST can reduce the required power consumption and circuit complexity of the implementation. This is vital for wearable devices, where small physical size and long battery life are crucial.
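As a rough illustration of what such a saturation estimate computes, here is the conventional "ratio of ratios" SpO2 calculation; this is a sketch of the textbook approach, not the MCDST algorithm itself, whose details are not given in the abstract, and the linear calibration constants are a commonly quoted empirical approximation, not a clinically validated curve:

```python
def spo2_ratio_of_ratios(ac_red, dc_red, ac_ir, dc_ir):
    """Estimate arterial oxygen saturation (percent) from the pulsatile (AC)
    and baseline (DC) photoplethysmogram amplitudes at red and infrared
    wavelengths."""
    r = (ac_red / dc_red) / (ac_ir / dc_ir)   # "ratio of ratios"
    return 110.0 - 25.0 * r                    # empirical linear calibration

print(round(spo2_ratio_of_ratios(0.01, 1.0, 0.02, 1.0), 1))  # R = 0.5 -> 97.5
```

Motion-resistant algorithms such as DST and MCDST can be viewed as robust ways of extracting the ratio R from noisy signals before applying a calibration of this kind.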
A method for the efficient prioritization of infrastructure renewal projects
International Nuclear Information System (INIS)
Karydas, D.M.; Gifun, J.F.
2006-01-01
Infrastructure renewal at the Massachusetts Institute of Technology (MIT) is the process of evaluating and investing in the maintenance of facility systems and basic structure to preserve existing campus buildings. The infrastructure renewal program at MIT consists of a large number of projects with an estimated budget that could approach $1 billion. The selection and prioritization of projects must therefore be addressed with a systematic method for the optimal allocation of funds and other resources. This paper presents a case study of a prioritization method utilizing multi-attribute utility theory. The method was developed in MIT's Department of Nuclear Engineering and was deployed by the Department of Facilities after appropriate modifications were implemented to address the idiosyncrasies of infrastructure renewal projects and the competing criteria and constraints that influence the judgment of the decision-makers. Such criteria include minimization of risk, optimization of economic impact, and coordination with academic policies, programs, and operations of the Institute. A brief overview of the method is presented, as well as the results of its application to the prioritization of infrastructure renewal projects. Results of workshops held at MIT with the participation of stakeholders demonstrate the feasibility of the prioritization method and the usefulness of this approach.
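A minimal sketch of how a multi-attribute utility ranking of this kind works, assuming a weighted additive utility over normalized criterion scores; the criterion names, weights, and projects below are illustrative, not MIT's actual model:

```python
def additive_utility(scores, weights):
    """scores, weights: dicts keyed by criterion; scores normalized to [0, 1]."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * scores[c] for c in weights)

# Hypothetical criteria mirroring those named in the abstract
weights = {"risk_reduction": 0.5, "economic_impact": 0.3, "academic_fit": 0.2}
projects = {
    "roof_replacement": {"risk_reduction": 0.9, "economic_impact": 0.4, "academic_fit": 0.3},
    "hvac_upgrade":     {"risk_reduction": 0.5, "economic_impact": 0.8, "academic_fit": 0.6},
}
ranked = sorted(projects, key=lambda p: additive_utility(projects[p], weights),
                reverse=True)
print(ranked)
```

In practice each criterion score would itself come from a single-attribute utility function elicited from the stakeholders, which is where most of the modelling effort goes.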
An efficient dose-compensation method for proximity effect correction
International Nuclear Information System (INIS)
Wang Ying; Han Weihua; Yang Xiang; Zhang Yang; Yang Fuhua; Zhang Renping
2010-01-01
A novel, simple dose-compensation method is developed for proximity effect correction in electron-beam lithography. The sizes of exposed patterns depend on dose factors while other exposure parameters (including accelerating voltage, resist thickness, exposure step size, substrate material, and so on) remain constant. This method is based on two reasonable assumptions in the evaluation of the compensated dose factor: one is that the relation between dose factors and circle diameters is linear in the range under consideration; the other is that, for simplicity, the compensated dose factor is only affected by the nearest neighbors. Four-layer-hexagon photonic crystal structures were fabricated as test patterns to demonstrate this method. Compared to the uncorrected structures, the homogeneity of the corrected hole size in the photonic crystal structures was clearly improved. (semiconductor technology)
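The two assumptions in the abstract can be sketched as follows; the calibration values, neighbor count, and neighbor dose fraction are hypothetical numbers chosen for illustration, not values from the paper:

```python
def calibrate_linear(dose1, diam1, dose2, diam2):
    """Fit diameter = a + b * dose from two calibration exposures
    (assumption 1: diameter is linear in dose factor over the range used)."""
    b = (diam2 - diam1) / (dose2 - dose1)
    a = diam1 - b * dose1
    return a, b

def compensated_dose(target_diam, a, b, n_neighbors, neighbor_fraction):
    """Dose factor yielding target_diam when each of n_neighbors contributes
    neighbor_fraction of its own dose via proximity exposure (assumption 2:
    only nearest neighbors matter)."""
    nominal = (target_diam - a) / b
    return nominal / (1.0 + n_neighbors * neighbor_fraction)

a, b = calibrate_linear(1.0, 100.0, 2.0, 140.0)   # diameters in nm, say
print(round(compensated_dose(120.0, a, b, n_neighbors=6, neighbor_fraction=0.05), 3))
```

For a hexagonal lattice the six nearest holes dominate the proximity exposure, which is why a nearest-neighbor-only correction can already improve hole-size homogeneity.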
Advanced non-destructive methods for an efficient service performance
International Nuclear Information System (INIS)
Rauschenbach, H.; Clossen-von Lanken Schulz, M.; Oberlin, R.
2015-01-01
Due to the power generation industry's desire to decrease outage time and extend inspection intervals for highly stressed turbine parts, advanced and reliable non-destructive methods were developed by the Siemens non-destructive testing laboratory. Effective outage performance requires the optimized planning of all outage activities as well as modern non-destructive examination methods, in order to examine the highly stressed components (turbine rotor, casings, valves, generator rotor) reliably and in short periods of access. This paper describes the experience of Siemens Energy with an ultrasonic phased-array inspection technique for radial-entry pinned turbine blade roots. The developed inspection technique allows the ultrasonic inspection of steam turbine blades without blade removal. Furthermore, advanced non-destructive examination methods for joint bolts are described, which offer a significant reduction of outage duration in comparison to conventional inspection techniques. (authors)
Statistically Efficient Methods for Pitch and DOA Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2013-01-01
Traditionally, direction-of-arrival (DOA) and pitch estimation of multichannel, periodic sources have been considered as two separate problems. Separate estimation may render the task of resolving sources with similar DOA or pitch impossible, and it may decrease the estimation accuracy. Therefore, it was recently considered to estimate the DOA and pitch jointly. In this paper, we propose two novel methods for DOA and pitch estimation. They both yield maximum-likelihood estimates in white Gaussian noise scenarios, where the SNR may be different across channels, as opposed to state-of-the-art methods.
Adaptive cluster sampling: An efficient method for assessing inconspicuous species
Andrea M. Silletti; Joan Walker
2003-01-01
Restorationists typically evaluate the success of a project by estimating the population sizes of species that have been planted or seeded. Because a total census is rarely feasible, they must rely on sampling methods for population estimates. However, traditional random sampling designs may be inefficient for species that, for one reason or another, are challenging to...
An Efficient Optimization Method for Solving Unsupervised Data Classification Problems
Directory of Open Access Journals (Sweden)
Parvaneh Shabanzadeh
2015-01-01
Unsupervised data classification (or cluster analysis) is one of the most useful tools and a descriptive task in data mining that seeks to classify homogeneous groups of objects based on similarity; it is used in many medical disciplines and various applications. In general, there is no single algorithm that is suitable for all types of data, conditions, and applications. Each algorithm has its own advantages, limitations, and deficiencies. Hence, research on novel and effective approaches for unsupervised data classification is still active. In this paper a heuristic algorithm, the Biogeography-Based Optimization (BBO) algorithm, which is inspired by the natural biogeographic distribution of different species, was adapted for data clustering problems by modifying its main operators. Similar to other population-based algorithms, the BBO algorithm starts with an initial population of candidate solutions to an optimization problem and an objective function that is calculated for them. To evaluate the performance of the proposed algorithm, an assessment was carried out on six medical and real-life datasets, and the algorithm was compared with eight well-known and recent unsupervised data classification algorithms. Numerical results demonstrate that the proposed evolutionary optimization algorithm is efficient for unsupervised data classification.
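A heavily simplified, BBO-flavored clustering sketch, under the assumption that each "habitat" encodes a set of centroids and that migration copies centroid coordinates from fitter habitats with rank-dependent probability; these are not the paper's exact operators, and the 1-D data are synthetic:

```python
import random

def sse(centroids, data):
    """Clustering objective: sum of squared distances to nearest centroid."""
    return sum(min((x - c) ** 2 for c in centroids) for x in data)

def bbo_cluster(data, k=2, pop=10, gens=60, seed=0):
    rng = random.Random(seed)
    lo, hi = min(data), max(data)
    habitats = [[rng.uniform(lo, hi) for _ in range(k)] for _ in range(pop)]
    for _ in range(gens):
        habitats.sort(key=lambda h: sse(h, data))     # best habitat first
        for i in range(1, pop):                       # elitism: index 0 kept
            for j in range(k):
                if rng.random() < i / pop:            # worse rank -> more immigration
                    habitats[i][j] = habitats[rng.randrange(i)][j]
                if rng.random() < 0.1:                # occasional random mutation
                    habitats[i][j] = rng.uniform(lo, hi)
    return min(habitats, key=lambda h: sse(h, data))

data = [0.9, 1.0, 1.1, 4.9, 5.0, 5.1]                # two obvious clusters
centroids = sorted(bbo_cluster(data))
print([round(c, 2) for c in centroids])
```

The real BBO algorithm derives immigration and emigration rates from a species-count model and typically also adapts mutation rates; the rank-proportional rule above is only a stand-in for that machinery.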
Efficient iris texture analysis method based on Gabor ordinal measures
Tajouri, Imen; Aydi, Walid; Ghorbel, Ahmed; Masmoudi, Nouri
2017-07-01
With the remarkably increasing interest in the security dimension, iris recognition is considered one of the most versatile techniques for biometric identification and authentication. This is mainly due to every individual's unique iris texture. An efficient approach to feature extraction is proposed. First, the iris zigzag "collarette" is extracted from the rest of the image by means of the circular Hough transform, as it includes the most significant regions of the iris texture. Second, the linear Hough transform is used for eyelid detection, while the median filter is applied for eyelash removal. Then, a technique combining the richness of Gabor features and the compactness of ordinal measures is implemented for feature extraction, so that a discriminative feature representation for every individual can be achieved. Subsequently, a modified Hamming distance is used for matching. The proposed procedure proves reliable compared to some state-of-the-art approaches, with recognition rates of 99.98%, 98.12%, and 95.02% on the CASIA V1.0, CASIA V3.0, and IIT Delhi V1 iris databases, respectively.
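For the matching step, the common mask-aware (normalized) Hamming distance between binary iris codes can be sketched as below; the paper's "modified" variant is not specified in the abstract, so this shows only the standard form, with toy bit patterns:

```python
def masked_hamming(code_a, code_b, mask_a, mask_b):
    """Fraction of disagreeing bits over positions valid in both codes;
    mask bits of 0 mark occluded regions (eyelids, lashes) to be ignored."""
    valid = [i for i in range(len(code_a)) if mask_a[i] and mask_b[i]]
    if not valid:
        return 1.0                      # nothing comparable: treat as non-match
    return sum(code_a[i] != code_b[i] for i in valid) / len(valid)

a    = [1, 0, 1, 1, 0, 0, 1, 0]
b    = [1, 0, 0, 1, 0, 1, 1, 0]
mask = [1, 1, 1, 1, 1, 1, 0, 0]         # last two bits occluded
print(masked_hamming(a, b, mask, mask))  # 2 mismatches over 6 valid bits
```

A distance below a tuned threshold (often in the neighborhood of a third of the valid bits for real iris codes) is taken as a same-identity match.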
A hybrid approach for efficient anomaly detection using metaheuristic methods
Directory of Open Access Journals (Sweden)
Tamer F. Ghanem
2015-07-01
Network intrusion detection based on anomaly detection techniques has a significant role in protecting networks and systems against harmful activities. Different metaheuristic techniques have been used for anomaly detector generation. Yet, the reported literature has not studied the use of the multi-start metaheuristic method for detector generation. This paper proposes a hybrid approach for anomaly detection in large-scale datasets using detectors generated with the multi-start metaheuristic method and genetic algorithms. The proposed approach takes some inspiration from negative-selection-based detector generation. The evaluation of this approach is performed on the NSL-KDD dataset, a modified version of the widely used KDD CUP 99 dataset. The results show its effectiveness in generating a suitable number of detectors, with an accuracy of 96.1% compared to other competing machine learning algorithms.
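The negative-selection idea that inspired the detector generation can be sketched in a toy 1-D form: random candidate detectors are kept only if they match no "self" (normal) sample, then used to flag anomalies. The multi-start metaheuristic and genetic operators of the paper are not reproduced here; the data and radius are arbitrary:

```python
import random

def generate_detectors(self_set, n, radius, rng):
    """Keep random candidates that lie outside the self region (negative selection)."""
    detectors = []
    while len(detectors) < n:
        cand = rng.uniform(0.0, 1.0)
        if all(abs(cand - s) > radius for s in self_set):
            detectors.append(cand)
    return detectors

def is_anomalous(x, detectors, radius):
    """Anomalous if any detector covers x."""
    return any(abs(x - d) <= radius for d in detectors)

rng = random.Random(1)
normal = [0.1, 0.15, 0.2, 0.25]              # "self" (normal traffic) samples
dets = generate_detectors(normal, 50, 0.1, rng)
print(is_anomalous(0.8, dets, 0.1), is_anomalous(0.15, dets, 0.1))
```

Metaheuristics enter when, instead of pure random generation, detector positions and radii are optimized to cover the non-self space with as few detectors as possible.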
An efficient method for facial component detection in thermal images
Paul, Michael; Blanik, Nikolai; Blazek, Vladimir; Leonhardt, Steffen
2015-04-01
A method to detect certain regions in thermal images of human faces is presented. In this approach, the following steps are necessary to locate the periorbital and the nose regions: First, the face is segmented from the background by thresholding and morphological filtering. Subsequently, a search region within the face, around its center of mass, is evaluated. Automatically computed temperature thresholds are used per subject and image or image sequence to generate binary images, in which the periorbital regions are located by integral projections. Then, the located positions are used to approximate the nose position. It is possible to track features in the located regions. Therefore, these regions are interesting for different applications like human-machine interaction, biometrics and biomedical imaging. The method is easy to implement and does not rely on any training images or templates. Furthermore, the approach saves processing resources due to simple computations and restricted search regions.
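The thresholding and integral-projection steps can be illustrated on a synthetic "thermal image"; the temperatures and the fixed threshold below are made up (the paper computes per-subject thresholds automatically):

```python
import numpy as np

img = np.full((8, 8), 30.0)          # synthetic background "skin" temperature
img[2, 1:3] = 35.0                   # warm left periorbital-like region
img[2, 5:7] = 35.0                   # warm right periorbital-like region

binary = (img > 33.0).astype(int)    # temperature threshold -> binary image
row_proj = binary.sum(axis=1)        # integral projection onto rows
col_proj = binary.sum(axis=0)        # integral projection onto columns

eye_row = int(np.argmax(row_proj))   # row with most warm pixels
print(eye_row, col_proj.tolist())
```

Peaks in the column projection at that row then separate the left and right periorbital regions, and the nose position is approximated below their midpoint.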
Radiolysis: an efficient method of studying radicalar antioxidant mechanisms
International Nuclear Information System (INIS)
Gardes-Albert, M.; Jore, D.
1998-01-01
The use of the radiolysis method for studying radical antioxidant mechanisms offers the following possibilities: (1) quantitative evaluation of the antioxidant activity of molecules soluble in aqueous or non-aqueous media (oxidation yields, molecular mechanisms, rate constants); (2) evaluation of the yield of prevention of polyunsaturated fatty acid peroxidation; (3) evaluation of antioxidant activity towards biological systems such as liposomes or low-density lipoproteins (LDL); (4) simple comparison, in different model systems, of drug effects versus natural antioxidants. (authors)
Method Development for Efficient Incorporation of Unnatural Amino Acids
Harris, Paul D.
2014-04-01
The synthesis of proteins bearing unnatural amino acids has the potential to enhance and elucidate many processes in biochemistry and molecular biology. There are two primary methods for site-specific unnatural amino acid incorporation, both of which use the cell's native protein-translating machinery: in vitro chemical acylation of suppressor tRNAs and the use of orthogonal aminoacyl-tRNA synthetases. Total chemical synthesis is theoretically possible, but current methods severely limit the maximum size of the product protein. In vivo orthogonal synthetase methods suffer from the high cost of the unnatural amino acid. In this thesis I sought to address this limitation by increasing cell density, first in shake flasks and then in a bioreactor, in order to increase the yield of protein per amount of unnatural amino acid used. In a parallel project, I used the in vitro chemical acylation system to incorporate several unnatural amino acids, key among them the fluorophore BODIPY FL, with the aim of producing site-specifically fluorescently labeled protein for single-molecule FRET studies. I demonstrated successful incorporation of these amino acids into the trial protein GFP, although incorporation was not demonstrated in the final target, FEN1. This also served to confirm the effectiveness of a new procedure developed for chemical acylation.
Efficient solution of parabolic equations by Krylov approximation methods
Gallopoulos, E.; Saad, Y.
1990-01-01
Numerical techniques for solving parabolic equations by the method of lines are addressed. The main motivation for the proposed approach is the possibility of exploiting a high degree of parallelism in a simple manner. The basic idea of the method is to approximate the action of the evolution operator on a given state vector by means of a projection process onto a Krylov subspace. Thus, the resulting approximation consists of applying an evolution operator of very small dimension to a known vector, which is, in turn, computed accurately by exploiting well-known rational approximations to the exponential. Because the rational approximation is only applied to a small matrix, the only operations required with the original large matrix are matrix-by-vector multiplications, and as a result the algorithm can easily be parallelized and vectorized. Some relevant approximation and stability issues are discussed. We present some numerical experiments with the method and compare its performance with a few explicit and implicit algorithms.
System and method to determine electric motor efficiency using an equivalent circuit
Lu, Bin [Kenosha, WI; Habetler, Thomas G [Snellville, GA
2011-06-07
A system and method for determining electric motor efficiency includes a monitoring system having a processor programmed to determine efficiency of an electric motor under load while the electric motor is online. The determination of motor efficiency is independent of a rotor speed measurement. Further, the efficiency is based on a determination of stator winding resistance, an input voltage, and an input current. The determination of the stator winding resistance occurs while the electric motor under load is online.
Efficient Numerical Methods for Nonequilibrium Re-Entry Flows
2014-01-14
[Fragmentary text recovered from the report: the number of sub-iterations, kmax, used in this update needs to be chosen for optimal convergence. Cited references include: '...Upper Symmetric Gauss-Seidel Method for the Euler and Navier-Stokes Equations,' AIAA Journal, Vol. 26, No. 9, pp. 1025-1026, Sept. 1988; Edwards, J.R., ...; Candler, 'The Solution of the Navier-Stokes Equations Using Gauss-Seidel Line Relaxation,' Computers and Fluids, Vol. 17, No. 1, pp. 135-150, 1989.]
A simple and efficient method to enhance audiovisual binding tendencies
Directory of Open Access Journals (Sweden)
Brian Odegaard
2017-04-01
Individuals vary in their tendency to bind signals from multiple senses. For the same set of sights and sounds, one individual may frequently integrate multisensory signals and experience a unified percept, whereas another individual may rarely bind them and often experience two distinct sensations. While this binding/integration tendency is specific to each individual, it is not clear how plastic this tendency is in adulthood and how sensory experiences may cause it to change. Here, we conducted an exploratory investigation which provides evidence that (1) the brain's tendency to bind in spatial perception is plastic, (2) it can change following brief exposure to simple audiovisual stimuli, and (3) exposure to temporally synchronous, spatially discrepant stimuli provides the most effective method to modify it. These results can inform current theories about how the brain updates its internal model of the surrounding sensory world, as well as future investigations seeking to increase integration tendencies.
Hepatitis B vaccination: Efficiency of pretesting by RIA-methods
International Nuclear Information System (INIS)
Hale, T.I.; Schmid, B.
1984-01-01
Vaccination of individuals who possess antibodies against the HBs virus from a previous infection is not necessary. Health-care personnel represent a large population of potential vaccine recipients. The risk of developing hepatitis B among these workers is proportional to the degree of their exposure to blood and blood products as well as to patients with hepatitis B. The decision to screen before vaccination depends on the costs of screening, the costs of vaccination, and the likelihood of vaccination candidates having had hepatitis B. We have demonstrated the cost-effective use of screening for anti-HBs using RIA methods in a group of health workers. If care is taken in the organization of the vaccination program, prevaccination screening of vaccine candidates can save considerable amounts of money. (orig.)
Efficient Method to Approximately Solve Retrial Systems with Impatience
Directory of Open Access Journals (Sweden)
Jose Manuel Gimenez-Guzman
2012-01-01
We present a novel technique to solve multiserver retrial systems with impatience. Unfortunately, these systems do not admit an exact analytic solution, so it is mandatory to resort to approximate techniques. The novel technique does not rely on the numerical solution of the steady-state Kolmogorov equations of the Continuous Time Markov Chain, as is common for this kind of system, but instead considers the system in its Markov Decision Process setting. The technique, known as value extrapolation, truncates the infinite state space, using a polynomial extrapolation method to approximate the states outside the truncated state space. A numerical evaluation is carried out to assess the technique and to compare its performance with previous techniques. The obtained results show that value extrapolation greatly outperforms the previous approaches in the literature, not only in terms of accuracy but also in terms of computational cost.
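The extrapolation step can be illustrated in a toy form: values are known on a truncated state space, and values for states just outside it are approximated by a low-order polynomial fitted near the boundary. The value function below is a synthetic quadratic, not a retrial-queue solution:

```python
import numpy as np

states = np.arange(0, 10)                   # truncated state space 0..9
values = 0.5 * states**2 + 2.0 * states     # pretend these came from the solver

# Fit a quadratic to the last few boundary values and extrapolate one state out
coef = np.polyfit(states[-4:], values[-4:], deg=2)
outside = float(np.polyval(coef, 10))       # approximate value of state 10
print(round(outside, 6))
```

In the actual method, such extrapolated values close the truncated Markov Decision Process equations, so the truncation boundary can be kept small without distorting the solution.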
SCoPE: an efficient method of Cosmological Parameter Estimation
International Nuclear Information System (INIS)
Das, Santanu; Souradeep, Tarun
2014-01-01
The Markov Chain Monte Carlo (MCMC) sampler is widely used for cosmological parameter estimation from CMB and other data. However, due to the intrinsically serial nature of the MCMC sampler, convergence is often very slow. Here we present a fast and independently written Monte Carlo method for cosmological parameter estimation, named Slick Cosmological Parameter Estimator (SCoPE), that employs delayed rejection to increase the acceptance rate of a chain, and pre-fetching that helps an individual chain to run on parallel CPUs. An inter-chain covariance update is also incorporated to prevent clustering of the chains, allowing faster and better mixing. We use an adaptive method to calculate and update the covariance automatically as the chains progress. Our analysis shows that the acceptance probability of each step in SCoPE is more than 95% and that the chains converge faster. Using SCoPE, we carry out cosmological parameter estimation for different cosmological models using WMAP-9 and Planck results. One of the current research interests in cosmology is quantifying the nature of dark energy. We analyse the cosmological parameters from two illustrative, commonly used parameterisations of dark energy models. We also assess whether the primordial helium fraction in the universe can be constrained by the present CMB data from WMAP-9 and Planck. The results from our MCMC analysis on the one hand help us to understand the workability of SCoPE better, and on the other hand provide a completely independent estimation of cosmological parameters from WMAP-9 and Planck data.
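One of SCoPE's ingredients, delayed rejection, can be sketched in a minimal scalar form: after a first rejection, a second, narrower proposal is tried before the chain stays put, which raises the acceptance rate. This is a simplified two-stage random-walk version on a toy Gaussian target, not SCoPE itself:

```python
import math
import random

def dr_metropolis(logpdf, x0, steps, scale, rng):
    """Two-stage delayed-rejection Metropolis with symmetric Gaussian proposals."""
    x, accepted = x0, 0
    for _ in range(steps):
        y1 = x + rng.gauss(0.0, scale)                  # stage-1 proposal
        a1 = min(1.0, math.exp(logpdf(y1) - logpdf(x)))
        if rng.random() < a1:
            x, accepted = y1, accepted + 1
            continue
        y2 = x + rng.gauss(0.0, scale / 4)              # stage-2: narrower
        # second-stage acceptance ratio for symmetric proposals
        num = math.exp(logpdf(y2) - logpdf(x)) * \
              (1.0 - min(1.0, math.exp(logpdf(y1) - logpdf(y2))))
        a2 = min(1.0, num / (1.0 - a1))
        if rng.random() < a2:
            x, accepted = y2, accepted + 1
    return x, accepted / steps

rng = random.Random(0)
_, rate = dr_metropolis(lambda z: -0.5 * z * z, 0.0, 5000, 2.5, rng)
print(rate)
```

SCoPE combines this with pre-fetching (speculatively evaluating likely next states on parallel CPUs) and inter-chain covariance updates, which together push its per-step acceptance probability above 95%.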
Bitcoin as an Efficient Method for the International Contract Settlement
Directory of Open Access Journals (Sweden)
Diachek Olga Yu.
2017-04-01
The article aims to study the functioning of Bitcoin in today's world. The nature of Bitcoin and tendencies in its development are considered. Given the growing spread of Bitcoin, the number of transactions and the size of the blocks in the system will only increase; to maintain correct operation under this progressive growth, the article considers the dynamics of the increasing role of virtual currency, payment channels based on digital coins, and the importance of assimilating such platforms. An analysis of the methods for international contract settlements was carried out, and the relevance of implementing cryptocurrency was substantiated. The features of the functioning of Bitcoin compared to world currencies were considered, and the economic properties of the blockchain were studied; the true value of the technology lies in its growing potential. In order to develop a complete picture of the effect and scope of Bitcoin, the stages of its development and its current status, both in Ukraine and in the world as a whole, were examined. Accordingly, recommendations have been made to improve the status of Bitcoin in Ukraine.
Efficient diagnostic methods for nuclear power plant monitoring
International Nuclear Information System (INIS)
Sunder, R.
1997-01-01
There are systems for the operational monitoring of vibrations, valves, solid-borne sound, leaks, and embrittlement. The paper focuses on the vibration monitoring of pressurized components and systems in PWRs and BWRs. Unlike conventional systems, the task is not to globally monitor the vibration level; rather, frequency-selective information is used as a sensitive information source, the level of vibrations being of secondary significance. The essential feature therefore is the comprehensive and selective scanning of individual frequency components in the recorded vibration spectra, which in some cases correspond to displacements of less than a thousandth of a millimetre. Upon identification of the various vibration components - from so-called baseline analyses using correlation methods - vibration analysis is done in a single step, comparing a defined reference state with the actual vibration state of the monitored system component. In the event of detected deviations, information on the causes of vibrations - essentially component-related structural resonances - gives the relevant cause-effect relationship. The paper uses some practical examples to illustrate that reliable diagnoses are achieved by the above-mentioned frequency-selective technique. It is important, however, to carry out a sufficiently reliable statistical verification of diagnostic data by means of vibration trending. This is ensured with the digital systems and their high data acquisition rates. (orig./CB)
Emergy assessment method for wheat cultivar efficiency and environmental sustainability
Directory of Open Access Journals (Sweden)
Janusz Jankowiak
2009-01-01
An emergy-based method was applied to quantify the fluxes of energy, matter and monetary investment (water, seeds, work, fertilizer and plant-protection agents, fuel, goods and services), the productivity, environmental services and sustainability of typical wheat cultivation in Wielkopolska. In order to convert all the flows mentioned to a common base (seJ), conversion factors (solar transformities) were used. In this way it was possible to also consider flows that are free and generally neglected in traditional balances. Only 52% of the emergy inflow is delivered by financial investment, while the remaining part, delivered in the form of environmental services, is free. The Emergy Yield Ratio EYR = 1.14 indicates a low level of output per unit of emergy investment. The values of the Environmental Loading Ratio ELR = 11 and the Emergy Sustainability Index ESI = 0.1 indicate environmental stress and a low level of cultivation sustainability, respectively. The final product (wheat) has an emergy density of 4.35E12 seJ/kg and a transformity of 26.3E4 seJ/J.
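The reported indices follow from aggregate emergy flows via standard ratios, which can be sketched as follows; the split between renewable and non-renewable free inflows below is hypothetical, chosen only so the ratios match the EYR and ELR reported in the abstract:

```python
def emergy_indices(R, N, F):
    """R: free renewable emergy inflow, N: free non-renewable inflow,
    F: purchased (financial) inflow; all in solar emjoules (seJ)."""
    Y = R + N + F                  # total emergy yield
    eyr = Y / F                    # Emergy Yield Ratio
    elr = (N + F) / R              # Environmental Loading Ratio
    esi = eyr / elr                # Emergy Sustainability Index
    return eyr, elr, esi

# Hypothetical flow split reproducing EYR = 1.14 and ELR = 11
eyr, elr, esi = emergy_indices(R=0.095, N=0.045, F=1.0)
print(round(eyr, 2), round(elr, 1), round(esi, 2))
```

Note that ESI = EYR / ELR = 1.14 / 11 is approximately 0.10, consistent with the ESI = 0.1 reported in the abstract.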
Energy Technology Data Exchange (ETDEWEB)
Li, Michael [Dept. of Energy (DOE), Washington DC (United States). Office of Energy Efficiency and Renewable Energy; Haeri, Hossein [The Cadmus Group, Portland, OR (United States); Reynolds, Arlis [The Cadmus Group, Portland, OR (United States)
2017-09-28
This chapter provides a set of model protocols for determining the energy and demand savings that result from specific energy efficiency measures implemented through state and utility efficiency programs. The methods described here are among the most commonly used and accepted in the energy efficiency industry for certain measures or programs. As such, they draw from the existing body of research and best practices for energy efficiency program evaluation, measurement, and verification (EM&V). These protocols were developed as part of the Uniform Methods Project (UMP), funded by the U.S. Department of Energy (DOE). The principal objective of the project was to establish easy-to-follow protocols, based on commonly accepted methods, for a core set of widely deployed energy efficiency measures.
Method and apparatus for high-efficiency direct contact condensation
Bharathan, Desikan; Parent, Yves; Hassani, A. Vahab
1999-01-01
A direct contact condenser having a downward vapor flow chamber and an upward vapor flow chamber, wherein each of the vapor flow chambers includes a plurality of cooling liquid supplying pipes and a vapor-liquid contact medium disposed thereunder to facilitate contact and direct heat exchange between the vapor and cooling liquid. The contact medium includes a plurality of sheets arranged to form vertical interleaved channels or passageways for the vapor and cooling liquid streams. The upward vapor flow chamber also includes a second set of cooling liquid supplying pipes disposed beneath the vapor-liquid contact medium which operate intermittently in response to a pressure differential within the upward vapor flow chamber. The condenser further includes separate wells for collecting condensate and cooling liquid from each of the vapor flow chambers. In alternate embodiments, the condenser includes a cross-current flow chamber and an upward flow chamber, a plurality of upward flow chambers, or a single upward flow chamber. The method of use of the direct contact condenser of this invention includes passing a vapor stream sequentially through the downward and upward vapor flow chambers, where the vapor is condensed as a result of heat exchange with the cooling liquid in the contact medium. The concentration of noncondensable gases in the resulting condensate-liquid mixtures can be minimized by controlling the partial pressure of the vapor, which depends in part upon the geometry of the vapor-liquid contact medium. In another aspect of this invention, the physical and chemical performance of a direct contact condenser can be predicted based on the vapor and coolant compositions, the condensation conditions, and the geometric properties of the contact medium.
Energy Technology Data Exchange (ETDEWEB)
Ortiz-Ramírez, Pablo, E-mail: rapeitor@ug.uchile.cl; Ruiz, Andrés [Departamento de Física, Facultad de Ciencias, Universidad de Chile (Chile)
2016-07-07
Monte Carlo simulation of gamma spectroscopy systems is common practice these days; the most popular software packages for this are the MCNP and Geant4 codes. The intrinsic spatial efficiency method is a general and absolute method to determine the absolute efficiency of a spectroscopy system for any extended source, but it had only been demonstrated experimentally for cylindrical sources. Given the difficulty of preparing sources of arbitrary shape, the simplest way to do this is by simulating the spectroscopy system and the source. In this work we present the validation of the intrinsic spatial efficiency method for sources with different geometries and for photons with an energy of 661.65 keV. The simulation does not consider matrix effects (self-attenuation); therefore, these results are only preliminary. The MC simulation is carried out using the FLUKA code, and the absolute efficiency of the detector is determined using two methods: the statistical count of the Full Energy Peak (FEP) area (the traditional method) and the intrinsic spatial efficiency method. The obtained results show total agreement between the absolute efficiencies determined by the traditional method and the intrinsic spatial efficiency method, with a relative bias of less than 1% in all cases.
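The traditional FEP-based absolute efficiency used for the comparison reduces to a simple ratio, sketched below; the counts, live time, and activity are made-up numbers, while the 661.65 keV line of Cs-137 has an emission probability of about 0.851 per decay:

```python
def fep_efficiency(net_counts, live_time_s, activity_bq, gamma_intensity):
    """Absolute efficiency = detected full-energy-peak counts / photons emitted."""
    emitted = activity_bq * live_time_s * gamma_intensity
    return net_counts / emitted

# Hypothetical measurement of a Cs-137 source (661.65 keV line)
eff = fep_efficiency(net_counts=42500, live_time_s=600,
                     activity_bq=5000.0, gamma_intensity=0.851)
print(round(eff, 5))
```

The intrinsic spatial efficiency method instead builds the extended-source efficiency from a spatial efficiency function of the detector, which is what the simulations in the abstract validate against this ratio.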
Biological optimization systems for enhancing photosynthetic efficiency and methods of use
Hunt, Ryan W.; Chinnasamy, Senthil; Das, Keshav C.; de Mattos, Erico Rolim
2012-11-06
Biological optimization systems for enhancing photosynthetic efficiency, and methods of their use, are disclosed. Specifically, the methods include applying pulsed light to a photosynthetic organism, using a chlorophyll fluorescence feedback control system to determine one or more photosynthetic efficiency parameters, and adjusting those parameters to drive photosynthesis by delivering an amount of light that optimizes light absorption by the organism while providing enough dark time between light pulses to prevent oversaturation of the chlorophyll reaction centers.
Efficiency Optimization Methods in Low-Power High-Frequency Digitally Controlled SMPS
Directory of Open Access Journals (Sweden)
Aleksandar Prodić
2010-06-01
Full Text Available This paper gives a review of several power-efficiency optimization techniques that exploit the advantages of emerging digital control in high-frequency switch-mode power supplies (SMPS) processing power from a fraction of a watt to several hundred watts. Loss mechanisms in semiconductor components are briefly reviewed, and the related principles of online efficiency optimization through power-stage segmentation and gate-voltage variation are presented. Practical implementations of such methods utilizing load prediction or data extraction from a digital control loop are shown. The benefits of the presented efficiency methods are verified through experimental results showing efficiency improvements ranging from 2% to 30%, depending on the load conditions.
International Nuclear Information System (INIS)
Ducasse, Q.; Jurado, B.; Mathieu, L.; Marini, P.; Morillon, B.; Aiche, M.; Tsekhanovich, I.
2016-01-01
The study of transfer-induced gamma-decay probabilities is very useful for understanding the surrogate-reaction method and, more generally, for constraining statistical-model calculations. One of the main difficulties in the measurement of gamma-decay probabilities is the determination of the gamma-cascade detection efficiency. In Boutoux et al. (2013) [10] we developed the EXtrapolated Efficiency Method (EXEM), a new method to measure this quantity. In this work, we have applied, for the first time, the EXEM to infer the gamma-cascade detection efficiency in the actinide region. In particular, we have considered the ²³⁸U(d,p)²³⁹U and ²³⁸U(³He,d)²³⁹Np reactions. We have performed Hauser–Feshbach calculations to interpret our results and to verify the hypothesis on which the EXEM is based. The determination of fission and gamma-decay probabilities of ²³⁹Np below the neutron separation energy allowed us to validate the EXEM.
Energy Technology Data Exchange (ETDEWEB)
Ducasse, Q. [CENBG, CNRS/IN2P3-Université de Bordeaux, Chemin du Solarium B.P. 120, 33175 Gradignan (France); CEA-Cadarache, DEN/DER/SPRC/LEPh, 13108 Saint Paul lez Durance (France); Jurado, B., E-mail: jurado@cenbg.in2p3.fr [CENBG, CNRS/IN2P3-Université de Bordeaux, Chemin du Solarium B.P. 120, 33175 Gradignan (France); Mathieu, L.; Marini, P. [CENBG, CNRS/IN2P3-Université de Bordeaux, Chemin du Solarium B.P. 120, 33175 Gradignan (France); Morillon, B. [CEA DAM DIF, 91297 Arpajon (France); Aiche, M.; Tsekhanovich, I. [CENBG, CNRS/IN2P3-Université de Bordeaux, Chemin du Solarium B.P. 120, 33175 Gradignan (France)
2016-08-01
The study of transfer-induced gamma-decay probabilities is very useful for understanding the surrogate-reaction method and, more generally, for constraining statistical-model calculations. One of the main difficulties in the measurement of gamma-decay probabilities is the determination of the gamma-cascade detection efficiency. In Boutoux et al. (2013) [10] we developed the EXtrapolated Efficiency Method (EXEM), a new method to measure this quantity. In this work, we have applied, for the first time, the EXEM to infer the gamma-cascade detection efficiency in the actinide region. In particular, we have considered the ²³⁸U(d,p)²³⁹U and ²³⁸U(³He,d)²³⁹Np reactions. We have performed Hauser–Feshbach calculations to interpret our results and to verify the hypothesis on which the EXEM is based. The determination of fission and gamma-decay probabilities of ²³⁹Np below the neutron separation energy allowed us to validate the EXEM.
A method to identify energy efficiency measures for factory systems based on qualitative modeling
Krones, Manuela
2017-01-01
Manuela Krones develops a method that supports factory planners in generating energy-efficient planning solutions. The method provides qualitative description concepts for factory planning tasks and energy efficiency knowledge as well as an algorithm-based linkage between these measures and the respective planning tasks. Its application is guided by a procedure model which allows a general applicability in the manufacturing sector. The results contain energy efficiency measures that are suitable for a specific planning task and reveal the roles of various actors for the measures’ implementation. Contents Driving Concerns for and Barriers against Energy Efficiency Approaches to Increase Energy Efficiency in Factories Socio-Technical Description of Factory Planning Tasks Description of Energy Efficiency Measures Case Studies on Welding Processes and Logistics Systems Target Groups Lecturers and Students of Industrial Engineering, Production Engineering, Environmental Engineering, Mechanical Engineering Practi...
International Nuclear Information System (INIS)
Svec, A.; Schrader, H.
2002-01-01
An ionization chamber without and with an iron liner (absorber) was calibrated by a set of radionuclide activity standards of the Physikalisch-Technische Bundesanstalt (PTB). The ionization chamber is used as a secondary standard measuring system for activity at the Slovak Institute of Metrology (SMU). Energy-dependent photon-efficiency curves were established for the ionization chamber in a defined measurement geometry, without and with the liner, and radionuclide efficiencies were calculated. Fitting used programmed calculation with an analytical efficiency function and the nonlinear regression algorithm of Microsoft (MS) Excel. Efficiencies from bremsstrahlung of pure beta-particle emitters were calibrated to a 10% accuracy level. Such efficiency components are added to obtain the total radionuclide efficiency of photon emitters after beta decay. For most photon-emitting radionuclides, the method yields differences between experimental and calculated radionuclide efficiencies on the order of a few percent.
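Analytical photon-efficiency functions of this kind are commonly fitted as low-order polynomials in log energy. A minimal sketch follows; the functional form and the coefficients are assumptions for illustration, not the SMU/PTB function from the paper.

```python
import numpy as np

# Hedged sketch: fit ln(efficiency) as a quadratic in ln(energy) on synthetic
# data generated from invented coefficients.
true_coeffs = (0.5, -0.8, 0.1)                  # a0, a1, a2 (invented)
energies_kev = np.array([60.0, 122.0, 344.0, 662.0, 1173.0, 1332.0])
ln_e = np.log(energies_kev)
ln_eff = true_coeffs[0] + true_coeffs[1] * ln_e + true_coeffs[2] * ln_e ** 2

# Least-squares fit recovers the generating coefficients exactly here
a2, a1, a0 = np.polyfit(ln_e, ln_eff, deg=2)    # highest degree first
print(a0, a1, a2)
```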
Methodical Approach to Diagnostics of Efficiency of Production Economic Activity of an Enterprise
Directory of Open Access Journals (Sweden)
Zhukov Andrii V.
2014-03-01
The article offers a methodical approach to diagnostics of the efficiency of the production and economic activity of an enterprise which, unlike existing ones, is realised through the following stages: analysis of the enterprise's external environment; analysis of its internal environment; identification of the components of efficiency for complex diagnostics along the following directions: efficiency of the subsystems of production and economic activity, efficiency of use of separate types of resources, and socio-economic efficiency; scorecard formation; study of tendencies of change of indicators; identification of cause-effect dependencies between the main components of efficiency in order to diagnose the reasons for its level; diagnosing deviations of indicator values from their optimal values; and development of a managerial decision on preserving and increasing the efficiency of the production and economic activity of the enterprise.
A Biologically Inspired Energy-Efficient Duty Cycle Design Method for Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Jie Zhou
2017-01-01
The recent success of emerging wireless sensor network technology has encouraged researchers to develop new energy-efficient duty-cycle design algorithms in this field. The energy-efficient duty-cycle design problem is a typical NP-hard combinatorial optimization problem. In this paper, we investigate an improved elite immune evolutionary algorithm (IEIEA) strategy to jointly optimize the duty-cycle design scheme and the monitored area in order to enhance network lifetime. Simulation results show that the network lifetime of the proposed IEIEA method is longer than that of the other two compared methods while the full-coverage constraints are satisfied.
Saito, Terubumi; Tatsuta, Muneaki; Abe, Yamato; Takesawa, Minato
2018-02-01
We have succeeded in the direct measurement of solar cell/module internal conversion efficiency based on a calorimetric method, or electrical substitution method, by which the absorbed radiant power is determined by replacing the heat absorbed in the cell/module with electrical power. The technique is advantageous in that the reflectance and transmittance measurements required by conventional methods are not necessary. Also, the internal quantum efficiency can be derived from the conversion efficiencies by using the average photon energy. Agreement of the measured data with values estimated from the nominal values supports the validity of this technique.
A novel MPPT method for enhancing energy conversion efficiency taking power smoothing into account
International Nuclear Information System (INIS)
Liu, Jizhen; Meng, Hongmin; Hu, Yang; Lin, Zhongwei; Wang, Wei
2015-01-01
Highlights: • We discuss the disadvantages of the conventional OTC MPPT method. • We study the relationship between enhancing efficiency and power smoothing. • The conversion efficiency is enhanced and the volatility of power is suppressed. • Small-signal analysis is used to verify the effectiveness of the proposed method. - Abstract: With the increasing capacity of wind energy conversion systems (WECS), the rotational inertia of wind turbines is becoming larger, and this large inertia significantly reduces the efficiency of energy conversion. This paper proposes a novel maximum power point tracking (MPPT) method to enhance the energy-conversion efficiency of large-scale wind turbines. Since improving the efficiency may increase the fluctuations of output power, power smoothing is considered as the second control objective. A T-S fuzzy inference system (FIS) is adopted to reduce the fluctuations according to the volatility of wind speed and the accelerated rotor speed by regulating the compensation gain. To verify the effectiveness, stability and dynamic performance of the new method, mechanism analyses, small-signal analyses, and simulation studies are carried out on a doubly-fed induction generator (DFIG) wind turbine. Study results show that both the response speed and the efficiency of the proposed method are increased. In addition, the extra fluctuations of output power caused by the high efficiency are effectively reduced by the proposed method with the FIS.
Directory of Open Access Journals (Sweden)
C. F. D. Rocha
Studies on anurans in restinga habitats are few and, as a result, there is little information on which methods are more efficient for sampling them in this environment. Ten methods are usually used for sampling anuran communities in tropical and sub-tropical areas. In this study we evaluate which methods are more appropriate for this purpose in the restinga environment of Parque Nacional da Restinga de Jurubatiba. We analyzed six methods among those usually used for anuran samplings. For each method, we recorded the total amount of time spent (in min.), the number of researchers involved, and the number of species captured. We calculated a capture efficiency index (time necessary for a researcher to capture an individual frog) in order to make comparable the data obtained. Of the methods analyzed, the species inventory (9.7 min/searcher/ind. - MSI; richness = 6; abundance = 23) and the breeding site survey (9.5 MSI; richness = 4; abundance = 22) were the most efficient. The visual encounter inventory (45.0 MSI) and patch sampling (65.0 MSI) methods were of comparatively lower efficiency in the restinga, whereas the plot sampling and the pit-fall traps with drift-fence methods resulted in no frog capture. We conclude that there is a considerable difference in efficiency of the methods used in the restinga environment and that the complete species inventory method is highly efficient for sampling frogs in the restinga studied and may be so in other restinga environments. Methods that are usually efficient in forested areas seem to be of little value in open restinga habitats.
Rocha, C F D; Van Sluys, M; Hatano, F H; Boquimpani-Freitas, L; Marra, R V; Marques, R V
2004-11-01
Studies on anurans in restinga habitats are few and, as a result, there is little information on which methods are more efficient for sampling them in this environment. Ten methods are usually used for sampling anuran communities in tropical and sub-tropical areas. In this study we evaluate which methods are more appropriate for this purpose in the restinga environment of Parque Nacional da Restinga de Jurubatiba. We analyzed six methods among those usually used for anuran samplings. For each method, we recorded the total amount of time spent (in min.), the number of researchers involved, and the number of species captured. We calculated a capture efficiency index (time necessary for a researcher to capture an individual frog) in order to make comparable the data obtained. Of the methods analyzed, the species inventory (9.7 min/searcher /ind.- MSI; richness = 6; abundance = 23) and the breeding site survey (9.5 MSI; richness = 4; abundance = 22) were the most efficient. The visual encounter inventory (45.0 MSI) and patch sampling (65.0 MSI) methods were of comparatively lower efficiency in the restinga, whereas the plot sampling and the pit-fall traps with drift-fence methods resulted in no frog capture. We conclude that there is a considerable difference in efficiency of methods used in the restinga environment and that the complete species inventory method is highly efficient for sampling frogs in the restinga studied and may be so in other restinga environments. Methods that are usually efficient in forested areas seem to be of little value in open restinga habitats.
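The capture-efficiency index used above (minutes of search effort per searcher needed to capture one frog, lower being more efficient) can be sketched in a few lines. The example numbers are hypothetical, not the study's raw data.

```python
# Sketch of the capture-efficiency index (MSI); inputs are invented.

def msi(total_minutes, n_searchers, n_individuals):
    """Person-minutes of effort per captured individual."""
    return (total_minutes * n_searchers) / n_individuals

# e.g. two searchers working 112 minutes and capturing 23 frogs
print(f"{msi(total_minutes=112, n_searchers=2, n_individuals=23):.1f} min/searcher/ind.")
```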
International Nuclear Information System (INIS)
Yanotovskii, M.T.; Mogilevskaya, M.P.; Obol'nikova, E.A.; Kogan, L.M.; Samokhvalov, G.I.
1986-01-01
A method has been developed for the qualitative and quantitative determination of ubiquinones CoQ6–CoQ10, using high-efficiency reversed-phase liquid chromatography. Tocopherol acetate was used as the internal standard.
A novel method for coil efficiency estimation: Validation with a 13C birdcage
DEFF Research Database (Denmark)
Giovannetti, Giulio; Frijia, Francesca; Hartwig, Valentina
2012-01-01
Coil efficiency, defined as the B1 magnetic field induced at a given point per square root of supplied power P, is an important parameter that characterizes both the transmit and receive performance of a radiofrequency (RF) coil. Maximizing coil efficiency will also maximize the signal-to-noise ratio. In this work, we propose a novel method for RF coil efficiency estimation based on the use of a perturbing loop. The proposed method consists of loading the coil with a known resistor by inductive coupling and measuring the quality factor with and without the load. We tested the method by measuring the efficiency of a 13C birdcage coil tuned at 32.13 MHz and verified its accuracy by comparing the results with the nuclear magnetic resonance nutation experiment. The method allows coil performance characterization in a short time and with great accuracy, and it can be used both on the bench...
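The quality-factor bookkeeping behind such a measurement can be illustrated numerically. The relation used here (fraction of supplied power dissipated in the known coupled load equals 1 minus the ratio of loaded to unloaded Q) is a standard perturbation result assumed for illustration, and the Q values are invented, not taken from the paper.

```python
# Hedged sketch: infer the power fraction absorbed by a known inductively
# coupled load from Q measured with and without the load. Values are invented.

def load_power_fraction(q_unloaded, q_loaded):
    """Fraction of supplied power dissipated in the added known load."""
    return 1.0 - q_loaded / q_unloaded

frac = load_power_fraction(q_unloaded=250.0, q_loaded=180.0)
print(f"fraction of supplied power in the known load: {frac:.2f}")
```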
Energy Technology Data Exchange (ETDEWEB)
Verdu, G.; Miro, R. [Departamento de Ingenieria Quimica y Nuclear, Universidad Politecnica de Valencia, Valencia (Spain); Ginestar, D. [Departamento de Matematica Aplicada, Universidad Politecnica de Valencia, Valencia (Spain); Vidal, V. [Departamento de Sistemas Informaticos y Computacion, Universidad Politecnica de Valencia, Valencia (Spain)
1999-05-01
To calculate the neutronic steady state of a nuclear power reactor core and its subcritical modes, it is necessary to solve a partial eigenvalue problem. In this paper, an implicit restarted Arnoldi method is presented as an advantageous alternative to classical methods such as the Power Iteration method and the Subspace Iteration method. The efficiency of these methods has been compared by calculating the dominant Lambda modes of several configurations of the Three Mile Island reactor core.
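For reference, the classical Power Iteration that the Arnoldi approach is benchmarked against can be sketched in a few lines. The matrix here is a small random symmetric stand-in, not a reactor core operator.

```python
import numpy as np

# Baseline Power Iteration for the dominant eigenvalue; test matrix is an
# invented symmetric positive semi-definite stand-in.

def power_iteration(a, tol=1e-10, max_iter=10_000):
    """Estimate the dominant eigenvalue of a (here: symmetric PSD) matrix."""
    x = np.ones(a.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = a @ x
        lam_new = np.linalg.norm(y)
        x = y / lam_new                 # normalize the iterate
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam_new

rng = np.random.default_rng(0)
m = rng.standard_normal((50, 50))
a = m @ m.T                             # symmetric PSD test matrix
lam = power_iteration(a)
print(f"dominant eigenvalue: {lam:.6f}")
```

For the implicitly restarted Arnoldi method itself, `scipy.sparse.linalg.eigs` (a wrapper around ARPACK) is a readily available implementation.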
CREPT-MCNP code for efficiency calibration of HPGe detectors with the representative point method.
Saegusa, Jun
2008-01-01
The representative point method for the efficiency calibration of volume samples has been previously proposed. For smoothly implementing the method, a calculation code named CREPT-MCNP has been developed. The code estimates the position of a representative point which is intrinsic to each shape of volume sample. The self-absorption correction factors are also given to make correction on the efficiencies measured at the representative point with a standard point source. Features of the CREPT-MCNP code are presented.
Wang, H.; Chen, S.; Tao, C.; Qiu, L.
2017-12-01
High-density, high-fold and wide-azimuth seismic data acquisition methods are widely used to address increasingly sophisticated exploration targets, but acquisition periods are growing longer and acquisition costs higher. We study highly efficient seismic data acquisition and processing methods based on sparse representation theory (compressed sensing theory) and achieve some innovative results. First, the theoretical principles of highly efficient acquisition and processing are studied. We formulate sparse representation theory based on the wave equation; we then study highly efficient seismic sampling methods and present an optimized piecewise-random sampling method based on sparsity prior information; finally, we develop a reconstruction strategy with a sparsity constraint, including a two-step recovery approach combining a sparsity-promoting method with the hyperbolic Radon transform. These three aspects constitute the theory of highly efficient seismic data acquisition. Second, specific implementation strategies for highly efficient acquisition and processing are studied according to this theory. We propose a method for designing highly efficient acquisition networks with the help of the optimized piecewise-random sampling method; we propose two types of highly efficient seismic data acquisition based on (1) single sources and (2) blended (or simultaneous) sources; and we propose the reconstruction procedures corresponding to these two acquisition types to recover the seismic data on a regular acquisition network. The impact of blended shooting on the imaging result is discussed. In the end, we implement numerical tests based on the Marmousi model. The achieved results show: (1) the theoretical framework of highly efficient seismic data acquisition and processing
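The piecewise-random sampling idea mentioned above can be sketched as jittered decimation: split the receiver line into equal segments and draw one trace position at random inside each segment, which avoids the large gaps of fully random sampling. Grid size and segment length below are invented for illustration; this is not the paper's optimized design.

```python
import numpy as np

# Hedged sketch of a piecewise-random (jittered) sampling mask.

def piecewise_random_mask(n_points, segment, seed=42):
    rng = np.random.default_rng(seed)
    mask = np.zeros(n_points, dtype=bool)
    for start in range(0, n_points, segment):
        stop = min(start + segment, n_points)
        mask[rng.integers(start, stop)] = True  # one sample per segment
    return mask

mask = piecewise_random_mask(n_points=120, segment=6)
print(mask.sum(), "of", mask.size, "trace positions kept")
```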
Discussion on Boiler Efficiency Correction Method with Low Temperature Economizer-Air Heater System
Ke, Liu; Xing-sen, Yang; Fan-jun, Hou; Zhi-hong, Hu
2017-05-01
This paper points out that it is wrong to take the outlet flue-gas temperature of the low-temperature economizer as the exhaust-gas temperature in the boiler efficiency calculation based on GB10184-1988. Furthermore, this paper proposes a new correction method, which decomposes the low-temperature economizer-air heater system into two hypothetical parts, an air preheater and a pre-condensed-water heater, and takes the equivalent outlet gas temperature of the air preheater as the exhaust-gas temperature in the boiler efficiency calculation. This method makes the boiler efficiency calculation more concise, with no air-heater correction, and has positive reference value for dealing with this kind of problem correctly.
DETERMINATION OF HYDRAULIC TURBINE EFFICIENCY BY MEANS OF THE CURRENT METER METHOD
Directory of Open Access Journals (Sweden)
PURECE C.
2016-12-01
The paper presents the methodology used for determining the efficiency of a low-head Kaplan hydraulic turbine with a short converging intake. The measurement method used was the current-meter method, the only measurement method recommended by the IEC 41 standard for flow measurement in this case. The paper also presents the methodology used for measuring the flow by means of the current-meter method and the various procedures for calculating the flow. In the last part, the paper presents the flow measurements carried out on the Fughiu HPP hydraulic turbines to determine their actual operating efficiency.
An Efficient Approach for Solving Mesh Optimization Problems Using Newton’s Method
Directory of Open Access Journals (Sweden)
Jibum Kim
2014-01-01
We present an efficient approach for solving various mesh optimization problems. Our approach is based on Newton's method, which uses both first-order (gradient) and second-order (Hessian) derivatives of the nonlinear objective function. The volume and surface mesh optimization algorithms are developed such that mesh validity and surface constraints are satisfied. We also propose several Hessian modification methods for when the Hessian matrix is not positive definite. We demonstrate our approach by comparing our method with the nonlinear conjugate gradient and steepest descent methods in terms of both efficiency and mesh quality.
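One common Hessian modification of the kind alluded to above adds multiples of the identity until a Cholesky factorization succeeds. A minimal sketch follows; the toy objective is invented for illustration and is not a mesh-quality metric, and the specific modification is an assumption, not necessarily the one the paper uses.

```python
import numpy as np

# Hedged sketch: Newton step with identity-shift Hessian regularization.

def modified_newton_step(grad, hess, tau0=1e-3):
    """Return the Newton step, regularizing the Hessian until positive definite."""
    h = hess.copy()
    tau = tau0
    while True:
        try:
            np.linalg.cholesky(h)       # succeeds iff h is positive definite
            break
        except np.linalg.LinAlgError:
            h = hess + tau * np.eye(hess.shape[0])
            tau *= 10.0
    return -np.linalg.solve(h, grad)

# Minimize the toy objective f(x, y) = x**4 + y**2 - 2*x*y from (1.5, 1.0)
x = np.array([1.5, 1.0])
for _ in range(50):
    g = np.array([4 * x[0] ** 3 - 2 * x[1], 2 * x[1] - 2 * x[0]])
    h = np.array([[12 * x[0] ** 2, -2.0], [-2.0, 2.0]])
    x = x + modified_newton_step(g, h)
print(x)  # converges to a local minimizer near (0.707, 0.707)
```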
Costs and Efficiency of Online and Offline Recruitment Methods: A Web-Based Cohort Study
Riis, Anders H; Hatch, Elizabeth E; Wise, Lauren A; Nielsen, Marie G; Rothman, Kenneth J; Toft Sørensen, Henrik; Mikkelsen, Ellen M
2017-01-01
Background: The Internet is widely used to conduct research studies on health issues. Many different methods are used to recruit participants for such studies, but little is known about how various recruitment methods compare in terms of efficiency and costs. Objective: The aim of our study was to compare online and offline recruitment methods for Internet-based studies in terms of efficiency (number of recruited participants) and costs per participant. Methods: We employed several online and offline recruitment methods to enroll 18- to 45-year-old women in an Internet-based Danish prospective cohort study on fertility. Offline methods included press releases, posters, and flyers. Online methods comprised advertisements placed on five different websites, including Facebook and Netdoktor.dk. We defined seven categories of mutually exclusive recruitment methods and used electronic tracking via unique Uniform Resource Locator (URL) and self-reported data to identify the recruitment method for each participant. For each method, we calculated the average cost per participant and efficiency, that is, the total number of recruited participants. Results: We recruited 8252 study participants. Of these, 534 were excluded as they could not be assigned to a specific recruitment method. The final study population included 7724 participants, of whom 803 (10.4%) were recruited by offline methods, 3985 (51.6%) by online methods, 2382 (30.8%) by online methods not initiated by us, and 554 (7.2%) by other methods. Overall, the average cost per participant was €6.22 for online methods initiated by us versus €9.06 for offline methods. Costs per participant ranged from €2.74 to €105.53 for online methods and from €0 to €67.50 for offline methods. Lowest average costs per participant were for those recruited from Netdoktor.dk (€2.99) and from Facebook (€3.44). Conclusions: In our Internet-based cohort study, online recruitment methods were superior to offline methods in terms
Costs and Efficiency of Online and Offline Recruitment Methods: A Web-Based Cohort Study.
Christensen, Tina; Riis, Anders H; Hatch, Elizabeth E; Wise, Lauren A; Nielsen, Marie G; Rothman, Kenneth J; Toft Sørensen, Henrik; Mikkelsen, Ellen M
2017-03-01
The Internet is widely used to conduct research studies on health issues. Many different methods are used to recruit participants for such studies, but little is known about how various recruitment methods compare in terms of efficiency and costs. The aim of our study was to compare online and offline recruitment methods for Internet-based studies in terms of efficiency (number of recruited participants) and costs per participant. We employed several online and offline recruitment methods to enroll 18- to 45-year-old women in an Internet-based Danish prospective cohort study on fertility. Offline methods included press releases, posters, and flyers. Online methods comprised advertisements placed on five different websites, including Facebook and Netdoktor.dk. We defined seven categories of mutually exclusive recruitment methods and used electronic tracking via unique Uniform Resource Locator (URL) and self-reported data to identify the recruitment method for each participant. For each method, we calculated the average cost per participant and efficiency, that is, the total number of recruited participants. We recruited 8252 study participants. Of these, 534 were excluded as they could not be assigned to a specific recruitment method. The final study population included 7724 participants, of whom 803 (10.4%) were recruited by offline methods, 3985 (51.6%) by online methods, 2382 (30.8%) by online methods not initiated by us, and 554 (7.2%) by other methods. Overall, the average cost per participant was €6.22 for online methods initiated by us versus €9.06 for offline methods. Costs per participant ranged from €2.74 to €105.53 for online methods and from €0 to €67.50 for offline methods. Lowest average costs per participant were for those recruited from Netdoktor.dk (€2.99) and from Facebook (€3.44). In our Internet-based cohort study, online recruitment methods were superior to offline methods in terms of efficiency (total number of participants
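The study's two headline metrics per recruitment method (efficiency, i.e. the number recruited, and average cost per participant) amount to simple bookkeeping, sketched below. All figures are invented for illustration, not the study's data.

```python
# Toy per-method cost/efficiency bookkeeping; the numbers are made up.

methods = {
    "facebook_ad":   {"cost_eur": 1720.0, "participants": 500},
    "press_release": {"cost_eur": 0.0,    "participants": 120},
    "posters":       {"cost_eur": 810.0,  "participants": 12},
}

for name, m in methods.items():
    avg = m["cost_eur"] / m["participants"] if m["participants"] else float("inf")
    print(f"{name}: {m['participants']} recruited, {avg:.2f} EUR per participant")
```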
Stamenkovic, Dragan D.; Popovic, Vladimir M.
2015-02-01
Warranty is a powerful marketing tool, but it always involves additional costs to the manufacturer. In order to reduce these costs and make use of warranty's marketing potential, the manufacturer needs to master the techniques for warranty cost prediction according to the reliability characteristics of the product. In this paper a combination free replacement and pro rata warranty policy is analysed as the warranty model for one type of light bulb. Since operating conditions have a great impact on product reliability, they need to be considered in such analysis. A neural network model is used to predict light bulb reliability characteristics based on the data from tests of light bulbs in various operating conditions. Compared with a linear regression model used in the literature for similar tasks, the neural network model proved to be a more accurate method for such prediction. Reliability parameters obtained in this way are later used in Monte Carlo simulation for the prediction of the times to failure needed for warranty cost calculation. The results of the analysis make it possible for the manufacturer to choose the optimal warranty policy based on expected product operating conditions. In such a way, the manufacturer can lower the costs and increase the profit.
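The Monte Carlo pricing step can be sketched as follows: draw times to failure from an assumed life distribution and apply a combined free-replacement / pro-rata policy (full refund before w1, linear pro-rata refund between w1 and w2, nothing after w2). All parameters here are invented, and the Weibull assumption is ours for illustration; the paper obtains its reliability parameters from a neural network model instead.

```python
import numpy as np

# Hedged sketch of Monte Carlo warranty costing under an assumed Weibull life.

def expected_warranty_cost(price, w1, w2, shape, scale, n=100_000, seed=1):
    rng = np.random.default_rng(seed)
    t = rng.weibull(shape, size=n) * scale      # simulated times to failure
    cost = np.where(t < w1, price,
                    np.where(t < w2, price * (w2 - t) / (w2 - w1), 0.0))
    return cost.mean()

c = expected_warranty_cost(price=10.0, w1=0.5, w2=2.0, shape=2.0, scale=3.0)
print(f"expected warranty cost per unit sold: {c:.2f}")
```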
Meng, Zeng; Yang, Dixiong; Zhou, Huanlin; Yu, Bo
2018-05-01
The first order reliability method has been extensively adopted for reliability-based design optimization (RBDO), but it shows inaccuracy in calculating the failure probability with highly nonlinear performance functions. Thus, the second order reliability method is required to evaluate the reliability accurately. However, its application to RBDO is quite challenging owing to the high computational cost incurred by the repeated reliability evaluation and Hessian calculation of probabilistic constraints. In this article, a new improved stability transformation method is proposed to search the most probable point efficiently, and the Hessian matrix is calculated by the symmetric rank-one update. The computational capability of the proposed method is illustrated and compared to the existing RBDO approaches through three mathematical and two engineering examples. The comparison results indicate that the proposed method is very efficient and accurate, providing an alternative tool for RBDO of engineering structures.
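The symmetric rank-one (SR1) update mentioned in the abstract revises a Hessian approximation B so that it satisfies the secant condition B_new @ s == y. A minimal sketch with arbitrary test vectors; the skipping safeguard is a standard one, not quoted from the paper.

```python
import numpy as np

# Hedged sketch of the SR1 Hessian update with the usual skip safeguard.

def sr1_update(b, s, y, skip_tol=1e-8):
    r = y - b @ s                               # secant-condition residual
    denom = r @ s
    if abs(denom) < skip_tol * np.linalg.norm(r) * np.linalg.norm(s):
        return b                                # denominator too small: skip
    return b + np.outer(r, r) / denom

b = np.eye(3)                                   # initial Hessian approximation
s = np.array([1.0, 0.5, -0.25])                 # step taken
y = np.array([2.0, -1.0, 0.5])                  # observed gradient change
b_new = sr1_update(b, s, y)
print(np.allclose(b_new @ s, y))                # secant condition is satisfied
```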
Energy Technology Data Exchange (ETDEWEB)
Han, Jong-Boo; Song, Hajun; Kim, Sung-Soo [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)
2017-06-15
Flexible multibody simulations are widely used in the industry to design mechanical systems. In flexible multibody dynamics, deformation coordinates are described either relatively in the body reference frame that is floating in the space or in the inertial reference frame. Moreover, these deformation coordinates are generated based on the discretization of the body according to the finite element approach. Therefore, the formulation of the flexible multibody system always deals with a huge number of degrees of freedom and the numerical solution methods require a substantial amount of computational time. Parallel computational methods are a solution for efficient computation. However, most of the parallel computational methods are focused on the efficient solution of large-sized linear equations. For multibody analysis, we need to develop an efficient formulation that could be suitable for parallel computation. In this paper, we developed a subsystem synthesis method for a flexible multibody system and proposed efficient parallel computational schemes based on the OpenMP API in order to achieve efficient computation. Simulations of a rotating blade system, which consists of three identical blades, were carried out with two different parallel computational schemes. Actual CPU times were measured to investigate the efficiency of the proposed parallel schemes.
The Tracer Gas Method of Determining the Charging Efficiency of Two-stroke-cycle Diesel Engines
Schweitzer, P H; Deluca, Frank, Jr
1942-01-01
A convenient method has been developed for determining the scavenging efficiency or the charging efficiency of two-stroke-cycle engines. The method consists of introducing a suitable tracer gas into the inlet air of the running engine and measuring chemically its concentration both in the inlet and exhaust gas. Monomethylamine (CH₃NH₂) was found suitable for the purpose as it burns almost completely during combustion, whereas the "short-circuited" portion does not burn at all and can be determined quantitatively in the exhaust. The method was tested both on four-stroke and on two-stroke engines and is considered accurate within 1 percent.
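The tracer idea lends itself to a back-of-envelope sketch: tracer retained in the cylinder burns completely, so any tracer measured in the exhaust was short-circuited with the fresh charge. The concentrations below are invented, and the model is deliberately simplified (it omits the exhaust-dilution correction a real measurement needs).

```python
# Hedged, simplified sketch of the tracer-gas balance; inputs are invented.

def trapping_efficiency(c_inlet, c_exhaust):
    """Fraction of the delivered charge retained in the cylinder (simplified)."""
    return 1.0 - c_exhaust / c_inlet

eta = trapping_efficiency(c_inlet=0.010, c_exhaust=0.002)
print(f"trapping efficiency: {eta:.0%}")
```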
DEFF Research Database (Denmark)
Haller, Michel; Cruickshank, Chynthia; Streicher, Wolfgang
2009-01-01
This paper reviews, from a theoretical point of view, different methods that have been proposed to characterize thermal stratification in energy storages. Specifically, this paper focuses on the methods that can be used to determine the ability of a storage to promote and maintain stratification during charging, storing and discharging, and to represent this ability with a single numerical value, a stratification efficiency, for a given experiment or under given boundary conditions. Existing methods for calculating stratification efficiencies have been applied to hypothetical storage...
Efficient formalism for treating tapered structures using the Fourier modal method
DEFF Research Database (Denmark)
Østerkryger, Andreas Dyhl; Gregersen, Niels
2016-01-01
We investigate the development of the mode occupations in tapered structures using the Fourier modal method. In order to use the Fourier modal method, tapered structures are divided into layers of uniform refractive index in the propagation direction and the optical modes are found within each...... layer. This is not very efficient and in this proceeding we take the first steps towards a more efficient formalism for treating tapered structures using the Fourier modal method. We show that the coupling coefficients through the structure are slowly varying and that only the first few modes...
DEFF Research Database (Denmark)
Zhang, Jiaying; Pivnenko, Sergey; Breinbjerg, Olav
2010-01-01
, but not for balanced antennas like loops or dipoles. In this paper, a modified Wheeler cap method is proposed for the radiation efficiency measurement of balanced electrically small antennas and a three-port network model of the Wheeler cap measurement is introduced. The advantage of the modified method...... is that it is wideband, thus does not require any balun, and both the antenna input impedance and radiation efficiency can be obtained. An electrically small loop antenna and a wideband dipole were simulated and measured according to the proposed method and the results of measurements and simulations are presented...
DEFF Research Database (Denmark)
Gundersen, H J; Bendtsen, T F; Korbo, L
1988-01-01
Stereology is a set of simple and efficient methods for the quantitation of three-dimensional microscopic structures, specifically tuned to provide reliable data from sections. Within the last few years, a number of new methods have been developed which are of special interest to pathologists...... are invariably simple and easy....
IP2P K-means: an efficient method for data clustering on sensor networks
Directory of Open Access Journals (Sweden)
Peyman Mirhadi
2013-03-01
Full Text Available Many wireless sensor network applications require data gathering as the most important part of their operations. There are increasing demands for innovative methods to improve energy efficiency and to prolong the network lifetime. Clustering is considered an efficient topology control method in wireless sensor networks, which can increase network scalability and lifetime. This paper presents IP2P K-means (Improved P2P K-means), a method that uses efficient leveling in its clustering approach, reduces false labeling, and restricts the necessary communication among sensors, which saves energy. The proposed method is examined in Network Simulator Ver. 2 (NS2), and the preliminary results show that the algorithm works effectively and relatively more precisely.
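The record describes a peer-to-peer variant of K-means whose distributed leveling details cannot be reconstructed from the abstract, but the underlying Lloyd-style K-means loop that each node would run can be sketched. This is plain centralized K-means, an assumption standing in for the paper's IP2P scheme; initial centroids are simply the first k points.

```python
from math import dist


def kmeans(points, k, iters=100):
    """Lloyd's K-means on a list of coordinate tuples.

    Returns (centroids, clusters); deterministic given the input order,
    since the first k points seed the centroids.
    """
    centroids = [list(p) for p in points[:k]]
    for _ in range(iters):
        # assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda j: dist(p, centroids[j]))
            clusters[i].append(p)
        # recompute centroids as cluster means (keep empty clusters fixed)
        new = []
        for j, c in enumerate(clusters):
            if c:
                new.append([sum(coord) / len(c) for coord in zip(*c)])
            else:
                new.append(centroids[j])
        if new == centroids:  # converged
            break
        centroids = new
    return centroids, clusters
```

On two well-separated 2D groups this recovers the obvious clustering in a few iterations.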
A robust AHP-DEA method for measuring the relative efficiency: An application of airport industry
Directory of Open Access Journals (Sweden)
Amin Foroughi
2012-01-01
Full Text Available Measuring the relative efficiency of similar units has been an important topic of research. Data envelopment analysis (DEA) has been one of the most important techniques for measuring the efficiency of different units. However, there are some limitations to this technique, and some practitioners prefer other methods, such as the analytic hierarchy process (AHP), to measure relative efficiencies. Besides, uncertainty in the input data is another issue, which can lead to misleading results. In this paper, we present an integrated robust DEA-AHP method to measure the relative efficiency of similar units. The proposed model is believed to be capable of producing better results in terms of efficiency than the exclusive use of DEA or AHP. The implementation of the proposed model is demonstrated for a real-world case study of the airport industry, and the results are analyzed.
Directory of Open Access Journals (Sweden)
Lorenzo Clementi
2013-05-01
Full Text Available Efficiency has a key role in measuring the impact of National Health Service (NHS) reforms. We investigate the issue of inefficiency in the health sector and provide empirical evidence derived from Italian public hospitals. Despite the importance of efficiency measurement in health care services, only recently have advanced econometric methods been applied to hospital data. We provide a synoptic survey of a few empirical analyses of efficiency measurement in health care services. An estimate of the cost efficiency level in Italian public hospitals during 2001-2003 is obtained through a sample. We propose an efficiency indicator and provide cost frontiers for such hospitals, using stochastic frontier analysis (SFA) for longitudinal data.
An efficient digital signal processing method for RRNS-based DS-CDMA systems
Directory of Open Access Journals (Sweden)
Peter Olsovsky
2017-09-01
Full Text Available This paper deals with an efficient method for achieving low power and high speed in advanced Direct-Sequence Code Division Multiple-Access (DS-CDMA) wireless communication systems based on the Residue Number System (RNS). A modified algorithm for multiuser DS-CDMA signal generation in MATLAB is proposed and investigated. The most important characteristics of the generated PN code are also presented. Subsequently, a DS-CDMA system based on the RNS, or the so-called Redundant Residue Number System (RRNS), is proposed. An enhanced method using a spectrally efficient 8-PSK data modulation scheme to improve the bandwidth efficiency of RRNS-based DS-CDMA systems is presented. By using the C-measure (complexity measure) of the error detection function, it is possible to estimate the size of the circuit. The error detection function in RRNSs can be efficiently implemented by LookUp Table (LUT) cascades.
An Efficient Parallel Multi-Scale Segmentation Method for Remote Sensing Imagery
Directory of Open Access Journals (Sweden)
Haiyan Gu
2018-04-01
Full Text Available Remote sensing (RS) image segmentation is an essential step in geographic object-based image analysis (GEOBIA) for ultimately deriving "meaningful objects". While many segmentation methods exist, most of them are not efficient for large data sets. Thus, the goal of this research is to develop an efficient parallel multi-scale segmentation method for RS imagery by combining graph theory and the fractal net evolution approach (FNEA). Specifically, a minimum spanning tree (MST) algorithm from graph theory is combined with the minimum heterogeneity rule (MHR) algorithm used in FNEA. The MST algorithm is used for the initial segmentation, while the MHR algorithm is used for object merging. An efficient implementation of the segmentation strategy is presented using data partitioning and a "reverse searching-forward processing" chain based on message passing interface (MPI) parallel technology. Segmentation results of the proposed method using images from multiple sensors (airborne, SPECIM AISA EAGLE II, WorldView-2, RADARSAT-2) and different selected landscapes (residential/industrial, residential/agriculture) covering four test sites indicated its efficiency in accuracy and speed. We conclude that the proposed method is applicable and efficient for the segmentation of a variety of RS imagery (airborne optical, satellite optical, SAR, hyperspectral), while the accuracy is comparable with that of the FNEA method.
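The abstract pairs an MST-based initial segmentation with MHR merging; the MHR and MPI layers cannot be reconstructed from the abstract alone, but the graph-theoretic core, Kruskal's minimum spanning tree over an adjacency graph, is standard and can be sketched. The tiny edge list in the test is an illustrative stand-in for pixel-adjacency edges weighted by spectral difference.

```python
def kruskal_mst(n_nodes, edges):
    """Kruskal's MST with a union-find structure.

    edges: iterable of (weight, u, v) tuples over nodes 0..n_nodes-1.
    Returns (total_weight, list_of_mst_edges).
    """
    parent = list(range(n_nodes))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    total, mst = 0.0, []
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:  # edge joins two distinct segments: no cycle
            parent[ru] = rv
            total += w
            mst.append((u, v))
    return total, mst
```

In a segmentation setting, cutting the heaviest MST edges (or merging under a heterogeneity threshold, as FNEA's MHR does) yields the initial regions.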
Analysis of Power Transfer Efficiency of Standard Integrated Circuit Immunity Test Methods
Directory of Open Access Journals (Sweden)
Hai Au Huynh
2015-01-01
Full Text Available Direct power injection (DPI) and bulk current injection (BCI) methods are defined in IEC 62132-3 and IEC 62132-4 as electromagnetic immunity test methods for integrated circuits (ICs). The forward power measured at the RF noise generator when the IC malfunctions is used as the measure of the immunity level of the IC. However, the actual power that causes failure in ICs differs from the forward power measured at the noise source. Power transfer efficiency is used as a measure of the power loss of the noise injection path. In this paper, the power transfer efficiencies of the DPI and BCI methods are derived and validated experimentally with an immunity test setup for a clock divider IC. Power transfer efficiency varies significantly over the frequency range as a function of the test method used and the IC input impedance. For the frequency range of 15 kHz to 1 GHz, the power transfer efficiency of the BCI test was consistently higher than that of the DPI test. In the DPI test, power transfer efficiency is particularly low in the lower test frequency range, up to 10 MHz. When performing IC immunity tests following the standards, these characteristics of the test methods need to be considered.
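As a toy illustration of why delivered power differs from forward power, a first-order model (my simplification, not the derivation in the paper) attributes the loss to impedance mismatch between the 50 Ω injection path and the IC input:

```python
def reflection_coefficient(z_load, z0=50 + 0j):
    """Voltage reflection coefficient of a load terminating a z0 line."""
    return (z_load - z0) / (z_load + z0)


def transfer_efficiency(z_load, z0=50 + 0j):
    """Fraction of forward power delivered to the load: 1 - |Gamma|^2."""
    gamma = reflection_coefficient(z_load, z0)
    return 1.0 - abs(gamma) ** 2
```

A matched 50 Ω input absorbs all forward power; a 100 Ω input reflects 1/9 of it, so only 8/9 is delivered. The real injection paths in the standards add further frequency-dependent losses that this sketch ignores.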
International Nuclear Information System (INIS)
Manoj, P.; Parimalam, P.; Shanmugam, A.; Murali, N.
2013-01-01
The objective of this paper is to present effective and efficient methods for developing application software for the distributed real-time systems of the Prototype Fast Breeder Reactor. It discusses effective ways to reduce language and syntax errors while capturing requirements. The paper suggests an efficient way of capturing requirements and coding application software for I&C systems so that software quality factors such as reliability, maintainability, and testability are improved. (author)
Yucel, Abdulkadir C.
2015-05-05
An efficient method for statistically characterizing multiconductor transmission line (MTL) networks subject to a large number of manufacturing uncertainties is presented. The proposed method achieves its efficiency by leveraging a high-dimensional model representation (HDMR) technique that approximates observables (quantities of interest in MTL networks, such as voltages/currents on mission-critical circuits) in terms of iteratively constructed component functions of only the most significant random variables (parameters that characterize the uncertainties in MTL networks, such as conductor locations and widths, and lumped element values). The efficiency of the proposed scheme is further increased using a multielement probabilistic collocation (ME-PC) method to compute the component functions of the HDMR. The ME-PC method makes use of generalized polynomial chaos (gPC) expansions to approximate the component functions, where the expansion coefficients are expressed in terms of integrals of the observable over the random domain. These integrals are numerically evaluated and the observable values at the quadrature/collocation points are computed using a fast deterministic simulator. The proposed method is capable of producing accurate statistical information pertinent to an observable that is rapidly varying across a high-dimensional random domain at a computational cost that is significantly lower than that of gPC or Monte Carlo methods. The applicability, efficiency, and accuracy of the method are demonstrated via statistical characterization of frequency-domain voltages in parallel wire, interconnect, and antenna corporate feed networks.
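The HDMR idea of approximating an observable by low-order component functions can be illustrated with a first-order cut-HDMR. This is a bare-bones sketch; the adaptive ME-PC machinery and gPC expansions of the paper are not reproduced, and the anchor point, grids, and test function are my own choices.

```python
def cut_hdmr_first_order(f, cut, grids):
    """First-order cut-HDMR: f(x) ~ f0 + sum_i f_i(x_i).

    f: function taking a list of coordinates.
    cut: the anchor (cut) point about which components are built.
    grids: per-variable lists of grid values at which f_i is tabulated.
    Returns an approximating callable defined on the grid points.
    """
    f0 = f(cut)  # zeroth-order term: f at the anchor
    comps = []
    for i, grid in enumerate(grids):
        table = {}
        for x in grid:
            p = list(cut)
            p[i] = x  # vary only coordinate i
            table[x] = f(p) - f0  # component function f_i(x_i)
        comps.append(table)

    def approx(point):
        return f0 + sum(comps[i][point[i]] for i in range(len(point)))

    return approx
```

For an additive function the first-order expansion is exact, which is the sense in which HDMR is efficient when only a few variables (or low-order interactions) matter.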
Management Methods of Energy Efficiency and reduction of Greenhouse Gas Emissions
International Nuclear Information System (INIS)
Actina, G.; Grackova, L.; Zebergs, V.; Zeltins, N.
2007-01-01
The management methods for energy efficiency and reduction of GHG emissions, and their introduction, depend on financing possibilities and management structures. The following methods for managing the process of raising energy efficiency are analysed: energy audit and certification; third-party financing; energy efficiency networks; and energy efficiency services. In Latvia, more than half of all energy resources are consumed for heating and the supply of hot water. The thermal parameters of buildings are poor; therefore, wide introduction of building certification based on energy audits is of particular importance. Third-party financing would allow resolving the problems of audit and certification in order to hasten the heating process of buildings, particularly owing to the appearance of foreign third-party financing companies, although the privatisation of dwelling houses and the reorganisation of their management is not yet completed. Energy efficiency networks have not found supporters in Latvia; however, great importance is attached to the thermal parameters of industrial premises, which are as poor as in the other buildings of the country, and here there is considerable potential for energy savings. Concerning energy efficiency services, this management method is supposed to reach maximum energy savings after the thermo-technical renovation of buildings at their various stages. It is connected with the general organisational and financial adjustment of building management, as well as with the development of energy service companies. (author)
A method for the efficient and effective evaluation of contract health physics technicians
International Nuclear Information System (INIS)
Burkdoll, S.C.; Conley, T.A.
1992-01-01
This paper reports that Wolf Creek has developed a method for efficiently training contract health physics technicians. The time allotted for training contractors prior to a refueling and maintenance outage is normally limited to a few days. Therefore, it was necessary to develop a systematic method to evaluate prior experience as well as practical skills and knowledge. In addition, instruction in the particular methodologies used at Wolf Creek had to be included, along with methods for evaluating technician comprehension.
International Nuclear Information System (INIS)
Bolivar, J.P.; Garcia-Leon, M.
1996-01-01
In this paper a general method for γ-ray efficiency calibration is presented. The method takes into account the differences in density and counting geometry between the real sample and the calibration sample. It is based on the γ-transmission method and gives the correction factor f as a function of Eγ, the density, and the counting geometry. Although developed for soil samples, its underlying working philosophy is useful for any sample whose geometry can be adequately reproduced. (orig.)
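Transmission-based density corrections of the kind described are often built on the uniform-slab self-absorption factor. The sketch below assumes that slab model; the naming and the correction recipe in the usage note are mine and not necessarily the exact factor f derived in the paper.

```python
import math


def self_absorption_factor(transmission):
    """Ratio of counts from a uniformly absorbing slab sample to an
    absorption-free one: (1 - T) / (-ln T), where T = exp(-mu * t)
    is the measured gamma-ray transmission through the sample.
    """
    if not 0.0 < transmission <= 1.0:
        raise ValueError("transmission must lie in (0, 1]")
    if transmission == 1.0:
        return 1.0  # no attenuation
    return (1.0 - transmission) / (-math.log(transmission))
```

Under this model, the efficiency measured with a calibration standard can be corrected by the ratio of the factors evaluated at the transmissions of sample and standard.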
An Energy Efficiency Evaluation Method Based on Energy Baseline for Chemical Industry
Yao, Dong-mei; Zhang, Xin; Wang, Ke-feng; Zou, Tao; Wang, Dong; Qian, Xin-hua
2016-01-01
According to the requirements and structure of ISO 50001 energy management system, this study proposes an energy efficiency evaluation method based on energy baseline for chemical industry. Using this method, the energy plan implementation effect in the processes of chemical production can be evaluated quantitatively, and evidences for system fault diagnosis can be provided. This method establishes the energy baseline models which can meet the demand of the different kinds of production proce...
A comparison of efficient methods for the computation of Born gluon amplitudes
International Nuclear Information System (INIS)
Dinsdale, Michael; Ternick, Marko; Weinzierl, Stefan
2006-01-01
We compare four different methods for the numerical computation of the pure gluonic amplitudes in the Born approximation. We are in particular interested in the efficiency of the various methods as the number n of the external particles increases. In addition we investigate the numerical accuracy in critical phase space regions. The methods considered are based on (i) Berends-Giele recurrence relations, (ii) scalar diagrams, (iii) MHV vertices and (iv) BCF recursion relations
Tolba, Khaled Ibrahim; Morgenthal, Guido
2018-01-01
This paper presents an analysis of the scalability and efficiency of a simulation framework based on the vortex particle method. The code is applied to the numerical aerodynamic analysis of line-like structures. The numerical code runs on multicore CPU and GPU architectures using the OpenCL framework. The focus of this paper is the analysis of the parallel efficiency and scalability of the method as applied to an engineering test case, specifically the aeroelastic response of a long-span bridge girder at the construction stage. The target is to assess the optimal configuration and the required computer architecture, such that it becomes feasible to efficiently utilise the method within the computational resources available to a regular engineering office. The simulations and the scalability analysis are performed on a regular gaming-type computer.
International Nuclear Information System (INIS)
Murphy, Fionnuala; Sosa, Amanda; McDonnell, Kevin; Devlin, Ger
2016-01-01
The energy sector is the major contributor to GHG (greenhouse gas emissions) in Ireland. Under EU Renewable energy targets, Ireland must achieve contributions of 40%, 12% and 10% from renewables to electricity, heat and transport respectively by 2020, in addition to a 20% reduction in GHG emissions. Life cycle assessment methodology was used to carry out a comprehensive, holistic evaluation of biomass-to-energy systems in 2020 based on indigenous biomass supply chains optimised to reduce production and transportation GHG emissions. Impact categories assessed include; global warming, acidification, eutrophication potentials, and energy demand. Two biomass energy conversion technologies are considered; co-firing with peat, and biomass CHP (combined heat and power) systems. Biomass is allocated to each plant according to a supply optimisation model which ensures minimal GHG emissions. The study shows that while CHP systems produce lower environmental impacts than co-firing systems in isolation, determining overall environmental impacts requires analysis of the reference energy systems which are displaced. In addition, if the aims of these systems are to increase renewable energy penetration in line with the renewable electricity and renewable heat targets, the optimal scenario may not be the one which achieves the greatest environmental impact reductions. - Highlights: • Life cycle assessment of biomass co-firing and CHP systems in Ireland is carried out. • GWP, acidification and eutrophication potentials, and energy demand are assessed. • Biomass supply is optimised based on minimising GHG emissions. • CHP systems cause lower environmental impacts than biomass co-firing with peat. • Displacing peat achieves higher GHG emission reductions than replacing fossil heat.
Energy Technology Data Exchange (ETDEWEB)
Yang Yulan [Key Laboratory of the Three Gorges Reservoir Region's Eco-Environment under Ministry of Education, Chongqing University, Chongqing (China); College of Civil Engineering and Architecture, Zhejiang University of Technology, Hangzhou (China)]; Li Baizhan [Key Laboratory of the Three Gorges Reservoir Region's Eco-Environment under the Ministry of Education, Chongqing University, Chongqing (China)]; Yao Runming, E-mail: r.yao@reading.ac.u [School of Construction Management and Engineering, University of Reading, Reading (United Kingdom); Key Laboratory of the Three Gorges Reservoir Region's Eco-Environment under Ministry of Education, Chongqing University, Chongqing (China)]
2010-12-15
This paper describes a method of identifying and weighting indicators for assessing the energy efficiency of residential buildings in China. A list of indicators for energy efficiency assessment in residential buildings in the hot summer and cold winter zone of China has been proposed, which supplies an important reference for policy making in building energy efficiency assessment. The research method combines a wide-ranging literature review with a questionnaire survey of experts in the field. The group analytic hierarchy process (group AHP) has been used to weight the identified indicators. The size of the survey sample is sufficient to support the results, which has been validated by consistency estimation. The proposed method could also be extended to develop weighted indicators for other climate zones in China. - Research highlights: →Method of identifying indicators of building energy efficiency assessment. →The group AHP method for weighting indicators. →Method of solving multi-criteria decision-making problems of choice and prioritisation in policy making.
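The group-AHP weighting step rests on extracting the principal eigenvector of a pairwise comparison matrix and checking its consistency. A minimal power-iteration sketch follows; the 3x3 example matrix in the test is mine, built to be perfectly consistent so the expected weights are known.

```python
def ahp_weights(matrix, iters=200):
    """Priority weights and consistency index (CI) of a pairwise
    comparison matrix, via power iteration for the principal eigenvector.
    """
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]  # renormalise to sum to 1
    # Saaty's lambda_max estimate and consistency index
    v = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
    lam_max = sum(v[i] / w[i] for i in range(n)) / n
    ci = (lam_max - n) / (n - 1)
    return w, ci
```

In practice the CI is divided by a random index to obtain the consistency ratio used to accept or reject the expert judgements.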
A simple and efficient method for isolating small RNAs from different plant species
Directory of Open Access Journals (Sweden)
de Folter Stefan
2011-02-01
Full Text Available Abstract Background Small RNAs emerged over the last decade as key regulators of diverse biological processes in eukaryotic organisms. To identify and study small RNAs, good and efficient protocols are necessary for isolating them, which may be challenging due to the composition of specific tissues of certain plant species. Here we describe a simple and efficient method to isolate small RNAs from different plant species. Results We developed a simple and efficient method to isolate small RNAs from different plant species by first comparing different total RNA extraction protocols and then streamlining the best one, finally resulting in a small RNA extraction method that does not require prior total RNA extraction and is not based on the commercially available TRIzol® Reagent or columns. This small RNA extraction method works well not only for plant tissues with high polysaccharide content, like cactus, agave, banana, and tomato, but also for plant species like Arabidopsis or tobacco. Furthermore, the obtained small RNA samples were successfully used in northern blot assays. Conclusion Here we provide a simple and efficient method to isolate small RNAs from different plant species, such as cactus, agave, banana, tomato, Arabidopsis, and tobacco, and the small RNAs from this simplified and low-cost method are suitable for downstream handling like northern blot assays.
Robust fault detection of linear systems using a computationally efficient set-membership method
DEFF Research Database (Denmark)
Tabatabaeipour, Mojtaba; Bak, Thomas
2014-01-01
In this paper, a computationally efficient set-membership method for robust fault detection of linear systems is proposed. The method computes an interval outer-approximation of the output of the system that is consistent with the model, the bounds on noise and disturbance, and the past measureme...... is trivially parallelizable. The method is demonstrated for fault detection of a hydraulic pitch actuator of a wind turbine. We show the effectiveness of the proposed method by comparing our results with two zonotope-based set-membership methods....
Power-efficient method for IM-DD optical transmission of multiple OFDM signals.
Effenberger, Frank; Liu, Xiang
2015-05-18
We propose a power-efficient method for transmitting multiple frequency-division multiplexed (FDM) orthogonal frequency-division multiplexing (OFDM) signals in intensity-modulation direct-detection (IM-DD) optical systems. This method is based on quadratic soft clipping in combination with odd-only channel mapping. We show, both analytically and experimentally, that the proposed approach is capable of improving the power efficiency by about 3 dB as compared to conventional FDM OFDM signals under practical bias conditions, making it a viable solution in applications such as optical fiber-wireless integrated systems where both IM-DD optical transmission and OFDM signaling are important.
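The paper's exact quadratic soft-clipping transfer curve is not given in the abstract; the generic quadratic soft clipper below (value- and slope-continuous at the knee; the parametrization is mine) illustrates the idea of rounding signal peaks instead of hard-limiting them, which is what reduces clipping distortion at a given bias point.

```python
def quadratic_soft_clip(x, knee=0.5, limit=1.0):
    """Odd-symmetric soft clipper: linear below `knee`, a quadratic bend
    between `knee` and 2*limit - knee, hard limit beyond.
    The quadratic segment matches value and unit slope at the knee and
    reaches `limit` with zero slope.
    """
    sign = -1.0 if x < 0 else 1.0
    a = abs(x)
    if a <= knee:
        y = a
    elif a >= 2 * limit - knee:
        y = limit
    else:
        y = a - (a - knee) ** 2 / (4 * (limit - knee))
    return sign * y
```

Small samples pass through unchanged, large excursions saturate smoothly at the limit.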
Efficient 3D Volume Reconstruction from a Point Cloud Using a Phase-Field Method
Directory of Open Access Journals (Sweden)
Darae Jeong
2018-01-01
Full Text Available We propose an explicit hybrid numerical method for the efficient 3D volume reconstruction from unorganized point clouds using a phase-field method. The proposed three-dimensional volume reconstruction algorithm is based on the 3D binary image segmentation method. First, we define a narrow band domain embedding the unorganized point cloud and an edge indicating function. Second, we define a good initial phase-field function which speeds up the computation significantly. Third, we use a recently developed explicit hybrid numerical method for solving the three-dimensional image segmentation model to obtain efficient volume reconstruction from point cloud data. In order to demonstrate the practical applicability of the proposed method, we perform various numerical experiments.
Feng, Shuo; Ji, Jim
2014-04-01
Parallel excitation (pTx) techniques with multiple transmit channels have been widely used in high-field MRI to shorten the RF pulse duration and/or reduce the specific absorption rate (SAR). However, the efficiency of pulse design still needs substantial improvement for practical real-time applications. In this paper, we present a detailed description of a fast pulse design method based on Fourier-domain gridding and a conjugate gradient method. Simulation results show that the proposed method can design pTx pulses with an efficiency 10 times higher than that of the conventional conjugate-gradient-based method, without reducing the accuracy of the desired excitation patterns.
A chain-of-states acceleration method for the efficient location of minimum energy paths
International Nuclear Information System (INIS)
Hernández, E. R.; Herrero, C. P.; Soler, J. M.
2015-01-01
We describe a robust and efficient chain-of-states method for computing Minimum Energy Paths (MEPs) associated with barrier-crossing events in polyatomic systems, which we call the acceleration method. The path is parametrized in terms of a continuous variable t ∈ [0, 1] that plays the role of time. In contrast to previous chain-of-states algorithms such as the nudged elastic band or string methods, where the positions of the states in the chain are taken as variational parameters in the search for the MEP, our strategy is to formulate the problem in terms of the second derivatives of the coordinates with respect to t, i.e., the state accelerations. We show this to result in a very simple and efficient method for determining the MEP. We describe the application of the method to a series of test cases, including two low-dimensional problems and the Stone-Wales transformation in C60.
A new method for calculating volumetric sweep efficiency using streamline simulation concepts
International Nuclear Information System (INIS)
Hidrobo, E A
2000-01-01
One of the purposes of reservoir engineering is to quantify volumetric sweep efficiency for optimizing reservoir management decisions. The estimation of this parameter has always been a difficult task. Until now, sweep efficiency correlations and calculations have been limited to mostly homogeneous 2-D cases. Calculating volumetric sweep efficiency in a 3-D heterogeneous reservoir becomes difficult due to the inherent complexity of multiple layers and arbitrary well configurations. In this paper, a new method for computing volumetric sweep efficiency for arbitrary heterogeneity and well configurations is presented. The proposed method is based on Datta-Gupta and King's (1995) formulation of streamline time-of-flight. Given that the time-of-flight reflects the fluid front propagation at various times, connectivity in the time-of-flight represents a direct measure of the volumetric sweep efficiency. The proposed approach has been applied to synthetic as well as field examples. Synthetic examples are used to validate the volumetric sweep efficiency calculations using the streamline time-of-flight connectivity criterion by comparison with analytic solutions and published correlations. The field example, which illustrates the feasibility of the approach for large-scale field applications, is from the North Robertson unit, a low-permeability carbonate reservoir in west Texas.
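Once streamline time-of-flight is available per cell, the connectivity criterion translates into a very simple computation: the swept fraction at time t is the pore-volume-weighted fraction of cells whose time-of-flight does not exceed t. A sketch with hypothetical per-cell arrays (not the authors' implementation):

```python
def sweep_efficiency(tof, pore_volume, t):
    """Volumetric sweep efficiency at time t: the fraction of total pore
    volume whose streamline time-of-flight is <= t, i.e. the volume the
    injected front has reached by time t.
    """
    total = sum(pore_volume)
    swept = sum(pv for tau, pv in zip(tof, pore_volume) if tau <= t)
    return swept / total
```

Evaluating this over a range of t gives the sweep-efficiency curve for the well configuration under study.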
Sutherland, Andrew M; Parrella, Michael P
2011-08-01
Western flower thrips, Frankliniella occidentalis (Pergande) (Thysanoptera: Thripidae), is a major horticultural pest and an important vector of plant viruses in many parts of the world. Methods for assessing thrips population density for pest management decision support are often inaccurate or imprecise due to thrips' positive thigmotaxis, small size, and naturally aggregated populations. Two established methods, flower tapping and an alcohol wash, were compared with a novel method, plant desiccation coupled with passive trapping, using accuracy, precision and economic efficiency as comparative variables. Observed accuracy was statistically similar and low (37.8-53.6%) for all three methods. Flower tapping was the least expensive method, in terms of person-hours, whereas the alcohol wash method was the most expensive. Precision, expressed by relative variation, depended on location within the greenhouse, location on greenhouse benches, and the sampling week, but it was generally highest for the flower tapping and desiccation methods. Economic efficiency, expressed by relative net precision, was highest for the flower tapping method and lowest for the alcohol wash method. Advantages and disadvantages are discussed for all three methods used. If relative density assessment methods such as these can all be assumed to accurately estimate a constant proportion of absolute density, then high precision becomes the methodological goal in terms of measuring insect population density, decision making for pest management, and pesticide efficacy assessments.
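The two comparison statistics named here are standard in sampling methodology: relative variation (RV = 100·SE/mean) and relative net precision (RNP = 100/(RV·cost)). A sketch of both, with the common definitions assumed (the counts and cost in the test are made up, not the study's data):

```python
import math
import statistics


def relative_variation(counts):
    """RV = 100 * standard error / mean, in percent (lower = more precise)."""
    mean = statistics.mean(counts)
    se = statistics.stdev(counts) / math.sqrt(len(counts))
    return 100.0 * se / mean


def relative_net_precision(counts, cost_per_sample):
    """RNP = 100 / (RV * cost): precision obtained per unit sampling cost."""
    return 100.0 / (relative_variation(counts) * cost_per_sample)
```

Under these definitions a cheap method with moderate precision (like flower tapping in the study) can outscore a more precise but costly one on RNP.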
Evaluation of the method for determining organic fertilizer efficiency by indirect labelling of 15N
International Nuclear Information System (INIS)
Liu Delin; Zhu Zhaomin; Wu Min
1995-01-01
By using the A-value method, the direct method, and the differential method, the absorption and utilization of organic fertilizer N by rice were studied. The results are as follows. The utilization efficiency of organic fertilizer N was 25.48%~50.5% by the differential method, 19.70%~27.17% by the A-value method, and 18.49%~24.80% by the direct method. The values from the differential method were higher than those from the other two methods, and there was no significant difference between the direct method and the A-value method. Meanwhile, when the ratio of inorganic fertilizer N to organic fertilizer N was 1:0.48, the results from the above two methods were similar. The nitrogen efficiency of 1.5 x 10^4 kg of fresh Astragalus sinicus L. was equivalent to 53.43 kg of urea for early rice and 39.15 kg of urea for late rice.
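The direct method rests on two standard 15N relations: the fraction of plant N derived from the fertilizer (Ndff) is the ratio of atom% 15N excess in the plant to that in the labelled fertilizer, and utilization is the fertilizer-derived uptake divided by the N applied. A sketch with illustrative numbers (not the paper's data):

```python
def n_utilization_percent(atom_excess_plant, atom_excess_fertilizer,
                          plant_n_uptake, n_applied):
    """Fertilizer-N utilization (%) by the 15N direct method.

    Ndff = plant atom% 15N excess / fertilizer atom% 15N excess;
    utilization = 100 * Ndff * plant N uptake / N applied,
    with uptake and application in the same mass units.
    """
    ndff = atom_excess_plant / atom_excess_fertilizer
    return 100.0 * ndff * plant_n_uptake / n_applied
```

For example, a plant at 0.5 atom% excess fed fertilizer at 2.0 atom% excess derives 25% of its N from the fertilizer; with 80 kg N uptake against 100 kg applied, utilization is 20%.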
Methods to classify maize cultivars in use efficiency and response to nitrogen
Directory of Open Access Journals (Sweden)
Cleiton Lacerda Godoy
2013-10-01
Full Text Available In plant breeding programs that aim to obtain cultivars with nitrogen (N) use efficiency, the focus is on selection methods and experimental procedures that have low cost, fast response, and high repeatability, and that can be applied to a large number of cultivars. Thus, the objectives of this study were to classify maize cultivars regarding their N use efficiency and response to N in a breeding program, and to validate the methodology with contrasting doses of the nutrient. The experimental design was a randomized block with the treatments arranged in a split-plot scheme with three replicates, five N doses (0, 30, 60, 120 and 200 kg ha-1) in the plots, and six cultivars in the subplots. We compared a method examining efficiency and response (ER) based on two contrasting doses of N. After that, analysis of variance, mean comparison, and regression analysis were performed. In conclusion, the efficiency-and-response method based on two N levels classifies the cultivars in the same way as regression analysis, and it is appropriate for plant breeding routine. Thus, it is necessary to identify the N levels required to discriminate maize cultivars under conditions of low and high N availability in plant breeding programs that aim to obtain efficient and responsive cultivars. Moreover, analysis of the genotype x environment interaction in experiments with contrasting doses is always required, even when the interaction is not significant.
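The two-dose ER classification can be sketched directly: a cultivar is "efficient" if its yield at low N is above the trial mean at low N, and "responsive" if its yield gain from low to high N is above the mean gain. The variable names, labels, and data below are illustrative, not the study's.

```python
def classify_cultivars(yields):
    """yields: dict name -> (yield_at_low_N, yield_at_high_N).
    Returns dict name -> quadrant label, e.g. 'efficient/responsive'.
    """
    low_mean = sum(lo for lo, hi in yields.values()) / len(yields)
    gains = {name: hi - lo for name, (lo, hi) in yields.items()}
    gain_mean = sum(gains.values()) / len(gains)
    labels = {}
    for name, (lo, hi) in yields.items():
        eff = "efficient" if lo >= low_mean else "inefficient"
        resp = "responsive" if gains[name] >= gain_mean else "non-responsive"
        labels[name] = f"{eff}/{resp}"
    return labels
```

Breeding programs of the kind described would then favour the efficient/responsive quadrant.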
Technical Efficiency and Organ Transplant Performance: A Mixed-Method Approach
de-Pablos-Heredero, Carmen; Fernández-Renedo, Carlos; Medina-Merodio, Jose-Amelio
2015-01-01
Mixed methods research is useful for understanding complex processes. Organ transplants are complex processes in need of improved final performance in times of budgetary restrictions. As the main objective, a mixed method approach is used in this article to quantify the technical efficiency and the excellence achieved in organ transplant systems, and to prove the influence of organizational structures and internal processes on the observed technical efficiency. The results show that it is possible to implement mechanisms for the measurement of the different components by making use of quantitative and qualitative methodologies. The analysis shows a positive relationship between the levels of the Baldrige indicators and the observed technical efficiency in the donation and transplant units of the 11 analyzed hospitals. Therefore, it is possible to conclude that high levels in the Baldrige indexes are a necessary condition for reaching an increased level of the service offered. PMID:25950653
Schwarz, Karsten; Rieger, Heiko
2013-03-01
We present an efficient Monte Carlo method to simulate reaction-diffusion processes with spatially varying particle annihilation or transformation rates as it occurs for instance in the context of motor-driven intracellular transport. Like Green's function reaction dynamics and first-passage time methods, our algorithm avoids small diffusive hops by propagating sufficiently distant particles in large hops to the boundaries of protective domains. Since for spatially varying annihilation or transformation rates the single particle diffusion propagator is not known analytically, we present an algorithm that generates efficiently either particle displacements or annihilations with the correct statistics, as we prove rigorously. The numerical efficiency of the algorithm is demonstrated with an illustrative example.
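As a point of contrast, the naive fixed-time-step simulation that the protective-domain algorithm accelerates can be sketched for a single particle with a position-dependent annihilation rate k(x); the rate function and all parameters are illustrative assumptions:

```python
import math
import random

# Naive small-step baseline for diffusion with a spatially varying
# annihilation rate k(x) -- the regime the paper's protective-domain
# algorithm is designed to speed up. All parameters are illustrative.

def survival_time(k, D=1.0, dt=1e-3, x0=0.0, t_max=10.0, rng=None):
    """Simulate one particle; return its annihilation time (or t_max)."""
    rng = rng or random.Random(1)
    x, t = x0, 0.0
    while t < t_max:
        if rng.random() < k(x) * dt:                 # annihilation this step
            return t
        x += rng.gauss(0.0, math.sqrt(2 * D * dt))   # small diffusive hop
        t += dt
    return t_max

# rate grows away from the origin, e.g. k(x) = 0.5 + x^2
t_ann = survival_time(lambda x: 0.5 + x * x)
print(0.0 <= t_ann <= 10.0)  # True
```

The inefficiency is visible here: every particle takes thousands of tiny hops; the paper's method replaces them with single large hops to protective-domain boundaries while preserving the annihilation statistics.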
An efficient and accurate method for calculating nonlinear diffraction beam fields
Energy Technology Data Exchange (ETDEWEB)
Jeong, Hyun Jo; Cho, Sung Jong; Nam, Ki Woong; Lee, Jang Hyun [Division of Mechanical and Automotive Engineering, Wonkwang University, Iksan (Korea, Republic of)
2016-04-15
This study develops an efficient and accurate method for calculating nonlinear diffraction beam fields propagating in fluids or solids. The Westervelt equation and quasilinear theory, from which the integral solutions for the fundamental and second harmonics can be obtained, are first considered. A computationally efficient method is then developed using a multi-Gaussian beam (MGB) model that easily separates the diffraction effects from the plane wave solution. The MGB models provide accurate beam fields when compared with the integral solutions for a number of transmitter-receiver geometries. These models can also serve as fast, powerful modeling tools for many nonlinear acoustics applications, especially in making diffraction corrections for the nonlinearity parameter determination, because of their computational efficiency and accuracy.
International Nuclear Information System (INIS)
Lin, Meng; Haussener, Sophia
2015-01-01
Solar-driven non-stoichiometric thermochemical redox cycling of ceria for the conversion of solar energy into fuels shows promise in achieving high solar-to-fuel efficiency. This efficiency is significantly affected by the operating conditions, e.g. redox temperatures, reduction and oxidation pressures, solar irradiation concentration, or heat recovery effectiveness. We present a thermodynamic analysis of five redox cycle designs to investigate the effects of working conditions on the fuel production. We focused on approaches to reducing the partial pressure of oxygen in the reduction step, namely mechanical approaches (sweep gassing or vacuum pumping), chemical approaches (a chemical scavenger), and combinations thereof. The results indicated that the sweep gas schemes work more efficiently at non-isothermal than at isothermal conditions, and that efficient gas-phase heat recovery and sweep gas recycling are important to ensure efficient fuel processing. The vacuum pump scheme achieved its best efficiencies at isothermal conditions, and at non-isothermal conditions heat recovery was less essential. The use of oxygen scavengers combined with the sweep gas and vacuum pump schemes further increased the system efficiency. The present work can be used to predict the performance of solar-driven non-stoichiometric redox cycles and further offers quantifiable guidelines for system design and operation. - Highlights: • A thermodynamic analysis was conducted for ceria-based thermochemical cycles. • Five novel cycle designs and various operating conditions were proposed and investigated. • The pressure reduction method affects the optimal operating conditions for maximized efficiency. • A chemical oxygen scavenger proves promising in further increasing efficiency. • Quantifiable design guidelines are formulated for economically competitive solar fuel processing.
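The solar-to-fuel efficiency compared across the five designs takes the usual form of fuel heating value over total energy input, with the pumping or sweep-gas penalties in the denominator; a minimal sketch, with all numbers illustrative rather than the paper's values:

```python
# Solar-to-fuel efficiency of a thermochemical cycle, in the generic form
#   eta = (fuel produced x heating value) / (solar input + parasitic penalties).
# All numeric inputs are illustrative placeholders, not from the paper.

HHV_H2 = 286e3  # J/mol, higher heating value of hydrogen

def solar_to_fuel_eff(n_fuel_mol, q_solar_J, q_pump_J=0.0, q_inert_J=0.0):
    """q_pump_J: vacuum-pump work; q_inert_J: sweep-gas heating/recycling cost."""
    return n_fuel_mol * HHV_H2 / (q_solar_J + q_pump_J + q_inert_J)

print(round(solar_to_fuel_eff(1.0, 2.0e6, q_pump_J=0.2e6), 3))  # 0.13
```

The trade-off the abstract describes lives in the penalty terms: sweep-gas schemes shift cost into `q_inert_J` (hence the importance of gas-phase heat recovery), vacuum pumping into `q_pump_J`.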
Analysis of efficient preconditioned defect correction methods for nonlinear water waves
DEFF Research Database (Denmark)
Engsig-Karup, Allan Peter
2014-01-01
Robust computational procedures for the solution of non-hydrostatic, free surface, irrotational and inviscid free-surface water waves in three space dimensions can be based on iterative preconditioned defect correction (PDC) methods. Such methods can be made efficient and scalable to enable prediction of free-surface wave transformation and accurate wave kinematics in both deep and shallow waters in large marine areas, or for predicting the outcome of experiments in large numerical wave tanks. We revisit the classical governing equations, which are the fully nonlinear and dispersive potential flow equations. We present a new detailed fundamental analysis using finite-amplitude wave solutions for iterative solvers. We demonstrate that the PDC method in combination with a high-order discretization method enables efficient and scalable solution of the linear system of equations arising in potential flow...
Najafi, Amir Abbas; Pourahmadi, Zahra
2016-04-01
Selecting the optimal combination of assets in a portfolio is one of the most important decisions in investment management. As investment is a long-term activity, looking at a portfolio optimization problem over just a single period may forgo opportunities that could be exploited in a long-term view. Hence, we extend the problem from a single-period to a multi-period model. We include trading costs and uncertain conditions in this model, which makes it more realistic and complex, and we propose an efficient heuristic method to tackle the resulting problem. The efficiency of the method is examined and compared with the results of rolling single-period optimization and the buy-and-hold method, which shows the superiority of the proposed method.
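The multi-period setting with proportional trading costs that the heuristic addresses can be sketched against the buy-and-hold baseline; the two-asset rebalancing rule, return series, and cost rate below are illustrative assumptions, not the paper's model:

```python
# Multi-period wealth under proportional trading costs, for a simple
# two-asset rebalance-to-target policy. Returns, weights, and the cost
# rate are illustrative; the paper's heuristic optimizes the weights.

def wealth_with_rebalancing(returns, target_w, cost_rate=0.002, w0=1.0):
    """Rebalance to weight target_w in asset 0 at the end of each period."""
    wealth, w = w0, [target_w, 1.0 - target_w]
    for r0, r1 in returns:
        v0 = wealth * w[0] * (1 + r0)
        v1 = wealth * w[1] * (1 + r1)
        wealth = v0 + v1
        traded = abs(wealth * target_w - v0)   # turnover in asset 0
        wealth -= 2 * cost_rate * traded       # pay costs on both legs
        w = [target_w, 1.0 - target_w]
    return wealth

rets = [(0.05, 0.01), (-0.02, 0.01), (0.04, 0.01)]
print(wealth_with_rebalancing(rets, 0.6) > 1.0)  # True
```

Buy-and-hold corresponds to skipping the `traded`/cost step entirely; the comparison in the abstract is between such baselines and the heuristic's cost-aware multi-period plan.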
An Efficient Graph-based Method for Long-term Land-use Change Statistics
Directory of Open Access Journals (Sweden)
Yipeng Zhang
2015-12-01
Full Text Available Statistical analysis of land-use change plays an important role in sustainable land management and has received increasing attention from scholars and administrative departments. However, the statistical process involving spatial overlay analysis remains difficult and needs improvement to deal with mass land-use data. In this paper, we introduce a spatio-temporal flow network model to reveal the hidden relational information among spatio-temporal entities. Based on graph theory, the constant condition of saturated multi-commodity flow is derived. A new method based on a network partition technique of the spatio-temporal flow network is proposed to optimize the transition statistical process. The effectiveness and efficiency of the proposed method are verified through experiments using land-use data for Hunan from 2009 to 2014. In a comparison among three different land-use change statistical methods, the proposed method exhibits remarkable superiority in efficiency.
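The transition statistics being optimized are, at bottom, counts of class-to-class changes between two snapshots; a minimal sketch of that baseline (the paper's contribution is doing this efficiently at scale via graph partitioning), with made-up parcel data:

```python
from collections import Counter

# Baseline land-use transition statistics: count parcel transitions
# between two classified snapshots. The parcel classes are illustrative;
# the paper's graph-partition method optimizes this overlay/count step.

def transition_matrix(landuse_t0, landuse_t1):
    """Counter mapping (class_at_t0, class_at_t1) -> number of parcels."""
    return Counter(zip(landuse_t0, landuse_t1))

t0 = ["forest", "forest", "crop", "urban", "crop"]
t1 = ["forest", "crop",   "crop", "urban", "urban"]
m = transition_matrix(t0, t1)
print(m[("forest", "crop")])  # 1
```

Off-diagonal entries of this matrix are the land-use changes; diagonal entries are unchanged area, which is what the saturated-flow condition in the network model encodes.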
A New Method of Histogram Computation for Efficient Implementation of the HOG Algorithm
Directory of Open Access Journals (Sweden)
Mariana-Eugenia Ilas
2018-03-01
Full Text Available In this paper we introduce a new histogram computation method to be used within the histogram of oriented gradients (HOG algorithm. The new method replaces the arctangent with the slope computation and the classical magnitude allocation based on interpolation with a simpler algorithm. The new method allows a more efficient implementation of HOG in general, and particularly in field-programmable gate arrays (FPGAs, by considerably reducing the area (thus increasing the level of parallelism, while maintaining very close classification accuracy compared to the original algorithm. Thus, the new method is attractive for many applications, including car detection and classification.
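The arctangent-free idea can be sketched in software: the orientation bin follows from comparing the gradient slope gy/gx against precomputed tangents of the bin boundaries. This is a sketch of the general technique, not the paper's exact scheme; a hardware version would also cross-multiply (compare gy against t·gx) to avoid the division:

```python
import math

# Orientation binning without arctangent: compare the slope gy/gx against
# precomputed tangents of the 20-degree bin boundaries (9 unsigned bins
# over [0, 180)). Illustrative sketch of the slope-comparison idea.

N_BINS = 9
T = [math.tan(math.radians(a)) for a in (20, 40, 60, 80)]

def hog_bin(gx, gy):
    """Return the bin index 0..8 for the unsigned gradient orientation."""
    if gx < 0:                       # fold into the gx >= 0 half-plane
        gx, gy = -gx, -gy
    if gx == 0:
        return 4                     # 90 degrees lies in bin [80, 100)
    s = gy / gx                      # slope = tan(theta), theta in (-90, 90)
    if s >= 0:                       # theta in [0, 90)
        for i, t in enumerate(T):
            if s < t:
                return i
        return 4
    for i, t in enumerate([-T[3], -T[2], -T[1], -T[0]]):  # theta in (90, 180)
        if s < t:
            return i + 4
    return 8

print(hog_bin(1, 1))  # 2  (45 degrees falls in bin [40, 60))
```

Only four boundary constants per quadrant are needed, which is what makes the FPGA implementation small compared with an arctangent lookup.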
A new method for flight test determination of propulsive efficiency and drag coefficient
Bull, G.; Bridges, P. D.
1983-01-01
A flight test method is described from which propulsive efficiency as well as parasite and induced drag coefficients can be directly determined using relatively simple instrumentation and analysis techniques. The method uses information contained in the transient response in airspeed for a small power change in level flight in addition to the usual measurement of power required for level flight. Measurements of pitch angle and longitudinal and normal acceleration are eliminated. The theoretical basis for the method, the analytical techniques used, and the results of application of the method to flight test data are presented.
A highly efficient pricing method for European-style options based on Shannon wavelets
L. Ortiz Gracia (Luis); C.W. Oosterlee (Cornelis)
2017-01-01
In the search for robust, accurate and highly efficient financial option valuation techniques, we present here the SWIFT method (Shannon Wavelets Inverse Fourier Technique), based on Shannon wavelets. SWIFT comes with control over approximation errors made by means of sharp quantitative...
Expanding applications of gene-based targeting biotechnology in functional genomics and the treatment of plants, animals, and microbes has synergized the need for new methods to measure binding efficiencies of these products to their genetic targets. The adaptation and innovative use of Cell–Penetra...
Histological method for evaluation of the efficiency of Enerlit-Clima.
Gol'dshtein, D V; Vikhlyantseva, E V; Sakharova, N K; Maevskii, E I; Pogorelov, A G; Uchitel', M L
2004-08-01
We propose a method of evaluation of anticlimacteric efficiency of a drug by its effect on the estrous cycle. The study was carried out on 9-month-old mice with retained, but notably reduced reproductive function. Analysis of the cell components of the estrous cycle was carried out on histological preparations of vaginal smears.
de Graaf, C.S.L.; Kandhai, B.D.; Sloot, P.M.A.
2017-01-01
According to Basel III, financial institutions have to charge a credit valuation adjustment (CVA) to account for a possible counterparty default. Calculating this measure and its sensitivities is one of the biggest challenges in risk management. Here, we introduce an efficient method for the...
Energy Technology Data Exchange (ETDEWEB)
Guttenberg, Philipp; Lin, Mengyan [Romax Technology, Nottingham (United Kingdom)
2009-07-01
The following paper presents a comparative efficiency analysis of the Toyota Prius versus the Honda Insight using advanced energy flow analysis methods. The sample study shows that even very different hybrid concepts, such as a split hybrid and a parallel hybrid, can be compared at a high level of detail, and it demonstrates the benefit by showing exemplary results. (orig.)
Differentiability properties of the efficient (u,q2)-set in the Markowitz portfolio selection method
Kriens, J.; Strijbosch, L.W.G.; Vörös, J.
1994-01-01
The set of efficient (u,q2)-combinations in the (u,q2)-plane of the Markowitz portfolio selection method consists of a series of strictly convex parabolas. At the transition points from one parabola to the next, the curve may be non-differentiable. The article gives necessary and sufficient...
International Nuclear Information System (INIS)
Marder, L.I.; Myzin, A.I.
1993-01-01
A methodological approach to justifying the efficiency of the integration process within the Unified electric power system is given, together with the selection of a rational areal structure and concentration of power-generating source capacities. The formation of an economic functional according to alternative scenarios, including cost components that take account of regional interests, is considered. A method for the estimation and distribution of the effect of integrated electric power production in the power systems under new economic conditions is proposed.
An efficient method of fuel ice formation in moving free-standing ICF/IFE targets
Aleksandrova, I. V.; Bazdenkov, S. V.; Chtcherbakov, V. I.; Gromov, A. I.; Koresheva, E. R.; Koshelev, E. A.; Osipov, I. E.; Yaguzinskiy, L. S.
2004-04-01
Currently, research fields related to the elaboration of efficient layering methods for ICF/IFE applications are rapidly expanding. Significant progress has been made in the technology development based on rapid fuel layering inside moving free-standing targets (FST) which is referred to as the FST layering method. This paper presents our new results obtained in this area and describes technologically elegant solutions towards demonstrating a credible pathway for mass production of IFE cryogenic targets.
An Efficient Method for the N-Bromosuccinimide Catalyzed Synthesis of Indolyl-Nitroalkanes
Directory of Open Access Journals (Sweden)
Ching-Fa Yao
2009-10-01
Full Text Available An efficient and practical method for the synthesis of indolyl-nitroalkane derivatives catalyzed by N-bromosuccinimide is described. The generality of this method was demonstrated by synthesizing an array of diverse 3-substituted indole derivatives by the reaction of different β-nitrostyrenes with various substituted indoles. Simple reaction conditions accompanied by good yields of indolyl-nitroalkanes are the merits of this methodology.
Comparative efficiency of different methods of gluten extraction in indigenous varieties of wheat
Imran, Samra; Hussain, Zaib; Ghafoor, Farkhanda; Ahmad Nagra, Saeed; Ashbeal Ziai, Naheeda
2013-01-01
The present study investigated six varieties of locally grown wheat (Lasani, Sehar, Miraj-08, Chakwal-50, Faisalabad-08 and Inqlab), procured from Punjab Seed Corporation, Lahore, Pakistan, for their proximate contents. On the basis of protein content and ready availability, Faisalabad-08 (FD-08) was selected for the assessment of the comparative efficiency of various methods used for gluten extraction. Three methods, namely mechanical, chemical and microbiological, were used for the extraction ...
Optical efficiency of solar concentrators by a reverse optical path method.
Parretta, A; Antonini, A; Milan, E; Stefancich, M; Martinelli, G; Armani, M
2008-09-15
A method for the optical characterization of a solar concentrator, based on the reverse illumination by a Lambertian source and measurement of intensity of light projected on a far screen, has been developed. It is shown that the projected light intensity is simply correlated to the angle-resolved efficiency of a concentrator, conventionally obtained by a direct illumination procedure. The method has been applied by simulating simple reflective nonimaging and Fresnel lens concentrators.
An Investigation on the Efficiency Correction Method of the Turbocharger at Low Speed
Directory of Open Access Journals (Sweden)
Jin Eun Chung
2018-01-01
Full Text Available Heat transfer in the turbocharger occurs due to the temperature differences between the exhaust gas and the intake air, coolant, and oil. This heat transfer causes the measured efficiencies of the compressor and turbine to be distorted, an effect known to be exacerbated at low rotational speeds. Thus, this study proposes a method to mitigate the distortion of test data caused by heat transfer in the turbocharger. With this method, a representative compressor temperature is defined and the heat transfer rate of the compressor is calculated by considering the effect of the oil and turbine inlet temperatures at low rotational speeds, when the cold and hot gas tests are performed simultaneously. The correction of compressor efficiency, depending on the turbine inlet temperature, was performed through both hot and cold gas tests; the results showed a maximum error of 16% prior to correction and a maximum error of 3% after correction. In addition, the results show that the efficiency distortion of the turbocharger due to heat transfer can be corrected by converting to a combined turbine efficiency based on the corrected compressor efficiency.
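The correction idea can be sketched with the standard isentropic relations: the apparent efficiency uses the measured total-temperature rise, while the corrected value removes an estimated heat-transfer share from the enthalpy rise. All quantities and the heat-flow estimate below are illustrative assumptions, not the paper's procedure:

```python
# Apparent vs heat-corrected compressor efficiency (illustrative sketch).
# Heat flowing from the hot turbine side inflates the measured outlet
# temperature T2, so the apparent efficiency understates the aerodynamic one.

CP_AIR = 1005.0   # J/(kg K), specific heat of air
KAPPA = 1.4       # ratio of specific heats

def isentropic_dT(T1, pr):
    """Ideal temperature rise for pressure ratio pr from inlet temperature T1."""
    return T1 * (pr ** ((KAPPA - 1) / KAPPA) - 1)

def eta_apparent(T1, T2, pr):
    return isentropic_dT(T1, pr) / (T2 - T1)

def eta_corrected(T1, T2, pr, mdot, q_dot):
    """Remove the estimated heat input q_dot [W] from the enthalpy rise."""
    dh_aero = CP_AIR * (T2 - T1) - q_dot / mdot
    return CP_AIR * isentropic_dT(T1, pr) / dh_aero

T1, T2, pr, mdot, q = 293.0, 350.0, 1.5, 0.05, 400.0  # made-up operating point
print(eta_apparent(T1, T2, pr) < eta_corrected(T1, T2, pr, mdot, q))  # True
```

Estimating `q_dot` from the oil and turbine inlet temperatures, via the combined cold and hot gas tests, is the substance of the paper's method.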
International Nuclear Information System (INIS)
Swisher, J.N.; Martino Jannuzzi, G. de; Redlinger, R.Y.
1997-01-01
This book resulted from our recognition of the need to have systematic teaching and training materials on energy efficiency, end-use analysis, demand-side management (DSM) and integrated resource planning (IRP). This book addresses energy efficiency programs and IRP, exploring their application in the electricity sector. We believe that these methods will provide powerful and practical tools for designing efficient and environmentally-sustainable energy supply and demand-side programs to minimize the economic, environmental and other social costs of electricity conversion and use. Moreover, the principles of IRP can be and already are being applied in other areas such as natural gas, water supply, and even transportation and health services. Public authorities can use IRP principles to design programs to encourage end-use efficiency and environmental protection through environmental charges and incentives, non-utility programs, and utility programs applied to the functions remaining in monopoly concessions such as the distribution wires. Competitive supply firms can use IRP principles to satisfy customer needs for efficiency and low prices, to comply with present and future environmental restrictions, and to optimize supply and demand-side investments and returns, particularly at the distribution level, where local-area IRP is now being actively practiced. Finally, in those countries where a strong planning function remains in place, IRP provides a way to integrate end-use efficiency and environmental protection into energy development. (EG) 181 refs
A method to unfold the efficiency of gaseous detectors exposed to broad X-ray spectra
International Nuclear Information System (INIS)
Almeida, Gevaldo L. de; Souza, Maria Ines S. de; Lopes, Ricardo T.
2000-01-01
A method to obtain the efficiency of a gaseous detector exposed to broad-energy X-ray spectra was developed. It consists of the deconvolution of the integrated detector response, using the shapes of those spectra as a tool to unfold the sought detector efficiency curve. For this purpose, the spectra emitted by an X-ray tube under several anode voltages were properly characterized through measurements with a NaI(Tl) spectrometer. A Lorentz function was then fitted to each of the spectra, and their parameters expressed as a function of the anode voltage by using polynomial and Gaussian fittings. The integral of the product of each Lorentz function with another, unknown Lorentz function expressing the detector efficiency curve represents the response of the detector for each anode voltage, i.e., for each X-ray spectrum. The symbolic integration of that product produces a general function containing the unknown parameters of the efficiency curve. A non-linear fitting of this general function to the experimentally obtained detector response points generates the parameters of the efficiency curve. The final detector efficiency curve is obtained after normalization procedures. (author)
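The unfolding idea can be sketched numerically: the response to a spectrum S_V(E) is the integral of S_V(E)·ε(E), with ε(E) an unknown Lorentzian. Below, a crude grid search over the peak position stands in for the paper's full non-linear fit, and all spectra and parameters are synthetic:

```python
# Sketch of the unfolding idea: detector response R(V) = integral of
# S_V(E) * eps(E) dE, with eps(E) an unknown Lorentzian. We recover the
# peak position of eps from synthetic responses by a crude grid search
# (the paper fits all parameters non-linearly). Numbers are illustrative.

def lorentz(E, A, E0, w):
    return A * w ** 2 / ((E - E0) ** 2 + w ** 2)

def response(spec, eps, Es):
    dE = Es[1] - Es[0]
    return sum(spec(E) * eps(E) for E in Es) * dE

Es = [0.1 * i for i in range(1, 600)]                       # energy grid
spectra = [lambda E, v=v: lorentz(E, 1.0, v / 3, 5.0) for v in (30, 45, 60)]
true_eps = lambda E: lorentz(E, 1.0, 12.0, 4.0)             # "unknown" curve
meas = [response(s, true_eps, Es) for s in spectra]         # measured responses

best = min(
    (sum((response(s, (lambda E, e0=e0: lorentz(E, 1.0, e0, 4.0)), Es) - m) ** 2
         for s, m in zip(spectra, meas)), e0)
    for e0 in range(5, 25)
)[1]
print(best)  # 12
```

With amplitude and width fixed at their true values the search recovers the peak exactly; the real problem fits all three Lorentz parameters simultaneously and then normalizes.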
Efficient alpha particle detection by CR-39 applying 50 Hz-HV electrochemical etching method
International Nuclear Information System (INIS)
Sohrabi, M.; Soltani, Z.
2016-01-01
Alpha particles can be detected by CR-39 by applying either chemical etching (CE), electrochemical etching (ECE), or combined pre-etching and ECE, usually through a multi-step HF-HV ECE process at temperatures much higher than room temperature. With pre-etching, the characteristic responses of fast-neutron-induced recoil tracks in CR-39 under HF-HV ECE versus KOH normality (N) have shown two high-sensitivity peaks around 5–6 and 15–16 N and a large-diameter peak with a minimum sensitivity around 10–11 N at 25°C. On the other hand, the 50 Hz-HV ECE method recently advanced in our laboratory detects alpha particles with high efficiency and a broad registration energy range with small ECE tracks in polycarbonate (PC) detectors. By taking advantage of the CR-39 sensitivity to alpha particles, the efficacy of the 50 Hz-HV ECE method, and the exotic responses of CR-39 under different KOH normalities, the detection characteristics of 0.8 MeV alpha particle tracks were studied in 500 μm CR-39 for different fluences, ECE durations and KOH normalities. The alpha registration efficiency increased with ECE duration, reaching 90 ± 2% after 6–8 h, beyond which plateaus are reached. Alpha track density versus fluence is linear up to 10⁶ tracks cm⁻². The efficiency and mean track diameter versus alpha fluence up to 10⁶ alphas cm⁻² decrease as the fluence increases. Background track density and the minimum detection limit are linear functions of ECE duration and increase as normality increases. The CR-39 processed for the first time in this study by the 50 Hz-HV ECE method proved to provide a simple, efficient and practical alpha detection method at room temperature. - Highlights: • Alpha particles of 0.8 MeV were detected in CR-39 by the 50 Hz-HV ECE method. • Efficiency and track diameter were studied vs fluence and time for 3 KOH normalities. • Background track density and minimum detection limit vs duration were studied. • A new simple, efficient and low-cost alpha detection method is provided.
Evaluation of the efficiency and utility of recombinant enzyme-free seamless DNA cloning methods
Directory of Open Access Journals (Sweden)
Ken Motohashi
2017-03-01
Full Text Available Simple and low-cost recombinant enzyme-free seamless DNA cloning methods have recently become available. In vivo Escherichia coli cloning (iVEC can directly transform a mixture of insert and vector DNA fragments into E. coli, which are ligated by endogenous homologous recombination activity in the cells. Seamless ligation cloning extract (SLiCE cloning uses the endogenous recombination activity of E. coli cellular extracts in vitro to ligate insert and vector DNA fragments. An evaluation of the efficiency and utility of these methods is important in deciding the adoption of a seamless cloning method as a useful tool. In this study, both seamless cloning methods incorporated inserting DNA fragments into linearized DNA vectors through short (15–39 bp end homology regions. However, colony formation was 30–60-fold higher with SLiCE cloning in end homology regions between 15 and 29 bp than with the iVEC method using DH5α competent cells. E. coli AQ3625 strains, which harbor a sbcA gene mutation that activates the RecE homologous recombination pathway, can be used to efficiently ligate insert and vector DNA fragments with short-end homology regions in vivo. Using AQ3625 competent cells in the iVEC method improved the rate of colony formation, but the efficiency and accuracy of SLiCE cloning were still higher. In addition, the efficiency of seamless cloning methods depends on the intrinsic competency of E. coli cells. The competency of chemically competent AQ3625 cells was lower than that of competent DH5α cells, in all cases of chemically competent cell preparations using the three different methods. Moreover, SLiCE cloning permits the use of both homemade and commercially available competent cells because it can use general E. coli recA− strains such as DH5α as host cells for transformation. Therefore, between the two methods, SLiCE cloning provides both higher efficiency and better utility than the iVEC method for seamless DNA plasmid
DEFF Research Database (Denmark)
Haller, M.Y.; Yazdanshenas, Eshagh; Andersen, Elsa
2010-01-01
A new method for the calculation of a stratification efficiency of thermal energy storages, based on the second law of thermodynamics, is presented. The biasing influence of heat losses is studied theoretically and experimentally. Theoretically, it does not make a difference if the stratification ... process is in agreement with the first law of thermodynamics. A comparison of the stratification efficiencies obtained from experimental results of charging, standby, and discharging processes gives meaningful insights into the different mixing behaviors of a storage tank that is charged and discharged ...
Efficient evaluation of the Coulomb force in the Gaussian and finite-element Coulomb method.
Kurashige, Yuki; Nakajima, Takahito; Sato, Takeshi; Hirao, Kimihiko
2010-06-28
We propose an efficient method for evaluating the Coulomb force in the Gaussian and finite-element Coulomb (GFC) method, which is a linear-scaling approach for evaluating the Coulomb matrix and energy in large molecular systems. The efficient evaluation of the analytical gradient in the GFC is not as straightforward as the evaluation of the energy, because the SCF procedure with the Coulomb matrix does not give a variational solution for the Coulomb energy. Thus, an efficient approximate method is proposed as an alternative, in which the Coulomb potential is expanded in the Gaussian and finite-element auxiliary functions, as done in the GFC. To minimize the error in the gradient, not just in the energy, the derived functions of the original auxiliary functions of the GFC are used additionally for the evaluation of the Coulomb gradient. In fact, the use of the derived functions significantly improves the accuracy of this approach. Although these additional auxiliary functions enlarge the size of the discretized Poisson equation and thereby increase the computational cost, the approach maintains near-linear scaling like the GFC and does not affect the overall efficiency of the GFC approach.
An Efficient Mesh Generation Method for Fractured Network System Based on Dynamic Grid Deformation
Directory of Open Access Journals (Sweden)
Shuli Sun
2013-01-01
Full Text Available The meshing quality of the discrete model influences the accuracy, convergence, and efficiency of the solution for fractured network systems in geological problems. However, modeling and meshing of such a fractured network system are usually tedious and difficult, due to the geometric complexity of the computational domain induced by the existence and extension of fractures. The traditional meshing method for dealing with fractures usually involves a boundary recovery operation based on topological transformation, which relies on many complicated techniques and skills. This paper presents an alternative and efficient approach for meshing fractured network systems. The method first presets points on the fractures and then performs Delaunay triangulation to obtain a preliminary mesh by a point-by-point centroid insertion algorithm. Then the fractures are exactly recovered by local correction with a revised dynamic grid deformation approach. A smoothing algorithm is finally applied to improve the quality of the mesh. The proposed approach is efficient, easy to implement, and applicable to cases of initially existing fractures and the extension of fractures. The method is successfully applied to the modeling of two- and three-dimensional discrete fracture network (DFN) systems in geological problems to demonstrate its effectiveness and high efficiency.
Evaluation Method for Fieldlike-Torque Efficiency by Modulation of the Resonance Field
Kim, Changsoo; Kim, Dongseuk; Chun, Byong Sun; Moon, Kyoung-Woong; Hwang, Chanyong
2018-05-01
The spin Hall effect has attracted a lot of interest in spintronics because it offers the possibility of a faster switching route with an electric current than with a spin-transfer-torque device. Recently, fieldlike spin-orbit torque has been shown to play an important role in the magnetization switching mechanism. However, there is no simple method for observing the fieldlike spin-orbit torque efficiency. We suggest a method for measuring fieldlike spin-orbit torque using a linear change in the resonance field in spectra of direct-current (dc)-tuned spin-torque ferromagnetic resonance. The fieldlike spin-orbit torque efficiency can be obtained in both a macrospin simulation and in experiments by simply subtracting the Oersted field from the shifted amount of resonance field. This method analyzes the effect of fieldlike torque using dc in a normal metal; therefore, only the dc resistivity and the dimensions of each layer are considered in estimating the fieldlike spin-torque efficiency. The evaluation of fieldlike-torque efficiency of a newly emerging material by modulation of the resonance field provides a shortcut in the development of an alternative magnetization switching device.
Measuring efficiency of university-industry Ph.D. projects using best worst method.
Salimi, Negin; Rezaei, Jafar
A collaborative Ph.D. project, carried out by a doctoral candidate, is a type of collaboration between university and industry. Due to the importance of such projects, researchers have considered different ways to evaluate their success, with a focus on the outputs of these projects. However, what has been neglected is the other side of the coin: the inputs. The main aim of this study is to incorporate both the inputs and outputs of these projects into a more meaningful measure called efficiency. A ratio of the weighted sum of outputs over the weighted sum of inputs identifies the efficiency of a Ph.D. project. The weights of the inputs and outputs can be identified using a multi-criteria decision-making (MCDM) method. Data on inputs and outputs are collected from 51 Ph.D. candidates who graduated from Eindhoven University of Technology. The weights are identified using a new MCDM method called the Best Worst Method (BWM). Because there may be differences between the opinions of Ph.D. candidates and supervisors on weighing the inputs and outputs, data for BWM are collected from both groups. It is interesting to see that there are differences in the level of efficiency from the two perspectives, because of the weight differences. Moreover, a comparison between the efficiency scores of these projects and their success scores reveals differences that may have significant implications. A sensitivity analysis divulges the most contributing inputs and outputs.
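The weighted-sum efficiency ratio described in the abstract can be sketched directly. The weights and project data below are illustrative placeholders, not the BWM weights elicited in the study:

```python
def efficiency(inputs, outputs, w_in, w_out):
    """Weighted-sum output/input efficiency ratio. In the study the
    weights come from the Best Worst Method; here they are hypothetical."""
    return sum(w * v for w, v in zip(w_out, outputs)) / \
           sum(w * v for w, v in zip(w_in, inputs))

# hypothetical Ph.D. project: inputs (supervision hours, funding in kEUR),
# outputs (papers, patents), with illustrative BWM-style weights
eff = efficiency([100, 50], [4, 1], [0.6, 0.4], [0.7, 0.3])
```

Ranking projects by this ratio, rather than by outputs alone, is exactly the shift from "success" to "efficiency" the paper argues for.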
A simple and efficient method of nickel electrodeposition for the cyclotron production of 64Cu
International Nuclear Information System (INIS)
Manrique-Arias, Juan C.; Avila-Rodriguez, Miguel A.
2014-01-01
Nickel targets for the cyclotron production of 64Cu were prepared by electrodeposition on a gold backing from nickel chloride solutions using boric acid as a buffer. The parameters studied were nickel chloride and boric acid concentration, temperature, and current density. All plating conditions studied were successful, obtaining efficiencies of approximately 90% in 2-3 h and reaching almost quantitative plating (>97%) in 10-20 h depending on the current density. All plated targets withstood proton irradiations up to 40 µA for 2 h. Recovered nickel was successfully recycled and reused with an overall efficiency >95%. - Highlights: • Simple and efficient method of Ni electrodeposition from NiCl2 solutions. • Represents an improvement over current methods for the preparation of Ni targets. • All plated targets underwent irradiation and withstood currents up to 40 µA for 2 h. • Nickel target material was recycled and reused with an overall efficiency >95%. • The specific activity of 64Cu was similar to that obtained with older methods of Ni plating
Dimitrakopoulos, Panagiotis
2018-03-01
The calculation of polytropic efficiencies is a very important task, especially during the development of new compression units such as compressor impellers, stages, and stage groups. Such calculations are also crucial for determining the performance of a whole compressor. As processors and computational capacities have improved substantially in recent years, the need has emerged for a new, rigorous, robust, accurate, and at the same time standardized method for computing polytropic efficiencies, especially one based on the thermodynamics of real gases. The proposed method is based on the rigorous definition of the polytropic efficiency. The input consists of pressure and temperature values at the end points of the compression path (suction and discharge) for a given working fluid. The average relative error for the studied cases was 0.536%. This high-accuracy method is therefore proposed for efficiency calculations related to turbocompressors and their compression units, especially when they operate at high power levels, for example in jet engines and high-power plants.
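The paper works with real-gas thermodynamics, which requires an equation of state. For a perfect gas, however, the polytropic efficiency of a compression from suction (p1, T1) to discharge (p2, T2) reduces to a closed form, which is a useful sanity check for any rigorous implementation (the assumption here is ideal-gas behavior with constant heat-capacity ratio kappa):

```python
import math

def polytropic_efficiency(p1, T1, p2, T2, kappa=1.4):
    """Perfect-gas polytropic compression efficiency from measured suction
    and discharge states: eta_p = ((kappa-1)/kappa) * ln(p2/p1) / ln(T2/T1).
    This is the ideal-gas simplification, not the paper's real-gas method."""
    return (kappa - 1.0) / kappa * math.log(p2 / p1) / math.log(T2 / T1)

# isentropic compression of air from 1 bar, 300 K to 4 bar:
# T2 = T1 * (p2/p1)**((kappa-1)/kappa), so the efficiency must be 1
T2_is = 300.0 * 4.0 ** (0.4 / 1.4)
eta = polytropic_efficiency(1e5, 300.0, 4e5, T2_is)
```

A real compression ends hotter than the isentropic discharge temperature, so the same formula then returns a value below one.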
Smýkal, Petr
2006-01-01
Fast and efficient DNA fingerprinting of crop cultivars and individuals is frequently used both in theoretical population genetics and in practical breeding. Numerous DNA marker technologies exist, and the balance of speed, cost, and accuracy is important. Therefore, even in species where highly accurate and polymorphic marker systems are available, such as microsatellite SSRs (simple sequence repeats), alternative methods may also be of interest. Thanks to their high abundance and ubiquity, mobile retrotransposable elements have recently come into focus. Their properties, such as genome-wide distribution and the well-defined origin of individual insertions by descent, predetermine them for use as molecular markers. In this study, several Ty3-gypsy type retrotransposons have been developed and adopted for the inter-retrotransposon amplified polymorphism (IRAP) method, which is suitable for fast and efficient pea cultivar fingerprinting. The method can easily distinguish even genetically closely related pea cultivars and provides high polymorphic information content (PIC) in a single PCR analysis.
An efficient method for hybrid density functional calculation with spin-orbit coupling
Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui
2018-03-01
In first-principles calculations, hybrid functionals are often used to improve accuracy over local exchange-correlation functionals. A drawback is that evaluating the hybrid functional requires significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases the computing effort by at least a factor of eight. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbitals (LCAO) scheme. We demonstrate the power of this method using several examples, showing that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires a computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.
Research on an efficient preconditioner using GMRES method for the MOC
International Nuclear Information System (INIS)
Takeda, Satoshi; Kitada, Takanori; Smith, Michael A.
2011-01-01
The modeling accuracy of reactor analysis techniques has improved considerably with progressive improvements in computational capabilities. The method of characteristics (MOC) solves the neutron transport equation using tracking lines that simulate neutron paths. The MOC is an accurate calculation method and is becoming a major solver owing to the rapid advancement of computers. In this methodology, the transport equation is discretized into many spatial meshes and energy groups, and the discretization generates a large system that incurs high computational cost. To reduce the computational cost of MOC calculations, we investigated the Generalized Minimal RESidual (GMRES) method as an accelerator and developed an efficient preconditioner for the MOC calculation. The preconditioner was constructed by simplifying a rigorous preconditioner, and its efficiency was verified by comparing the numbers of iterations required by a one-dimensional MOC code
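The abstract does not reproduce the MOC-specific preconditioner, but the mechanics of left-preconditioned GMRES that it accelerates can be sketched generically. The Jacobi (diagonal) preconditioner below is a stand-in assumption, not the simplified preconditioner of the paper:

```python
import numpy as np

def gmres_left_prec(A, b, M_inv, max_iter=50, tol=1e-10):
    """Left-preconditioned GMRES: solves M^{-1} A x = M^{-1} b from x0 = 0.
    M_inv is a callable applying the preconditioner inverse to a vector."""
    r = M_inv(b)
    beta = np.linalg.norm(r)
    Q = [r / beta]                       # Krylov basis of M^{-1} A
    H = np.zeros((max_iter + 1, max_iter))
    x = np.zeros_like(b)
    for j in range(max_iter):
        w = M_inv(A @ Q[j])
        for i in range(j + 1):           # Arnoldi, modified Gram-Schmidt
            H[i, j] = Q[i] @ w
            w = w - H[i, j] * Q[i]
        H[j + 1, j] = np.linalg.norm(w)
        e1 = np.zeros(j + 2)
        e1[0] = beta
        # least-squares solve of the projected (j+2) x (j+1) system
        y, *_ = np.linalg.lstsq(H[:j + 2, :j + 1], e1, rcond=None)
        x = np.column_stack(Q) @ y
        if H[j + 1, j] < tol:            # happy breakdown: solution found
            return x
        Q.append(w / H[j + 1, j])
    return x

# tiny symmetric system with a Jacobi preconditioner as illustration
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = gmres_left_prec(A, b, lambda v: v / np.diag(A))
```

A good preconditioner clusters the spectrum of M^{-1} A, which is what cuts the iteration counts the paper compares.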
An Efficient Integer Coding and Computing Method for Multiscale Time Segment
Directory of Open Access Journals (Sweden)
TONG Xiaochong
2016-12-01
This article focuses on the problems and current status of time-segment coding and proposes a new approach: multi-scale time segment integer coding (MTSIC). The approach utilizes the tree structure and size ordering formed among integers to reflect the relationships among multi-scale time segments (order, inclusion/containment, intersection, etc.), thereby achieving a unified integer coding scheme for multi-scale time. On this foundation, the research also develops computing methods for the time relationships of MTSIC, to support efficient calculation and querying based on time segments, and preliminarily discusses application scenarios and prospects of MTSIC. Tests indicate that MTSIC is convenient and reliable to implement, that conversion between it and the traditional representation is straightforward, and that it achieves very high efficiency in querying and calculation.
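The abstract does not give the exact coding scheme, but the general idea of mapping a hierarchy of time segments onto integers can be illustrated with a binary hierarchy, where segment i at level l receives the code 2^l + i (this concrete scheme is an assumption for illustration, not necessarily the paper's MTSIC):

```python
def code(level, index):
    """Integer code for the index-th segment at the given level of a
    binary multiscale hierarchy (level 0 = the whole time span)."""
    assert 0 <= index < 2 ** level
    return (1 << level) + index

def parent(c):
    """Code of the coarser segment containing segment c."""
    return c >> 1

def contains(a, b):
    """True if segment a contains (or equals) segment b: walk b up
    the hierarchy until it is no larger than a, then compare."""
    while b > a:
        b >>= 1
    return a == b
```

With this encoding, containment and ordering reduce to integer shifts and comparisons, which is the kind of purely arithmetic relationship query the MTSIC idea aims for.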
Improved DEA Cross Efficiency Evaluation Method Based on Ideal and Anti-Ideal Points
Directory of Open Access Journals (Sweden)
Qiang Hou
2018-01-01
A new model is introduced for evaluating the efficiency values of decision making units (DMUs) through the data envelopment analysis (DEA) method. Two virtual DMUs, called the ideal point DMU and the anti-ideal point DMU, are combined to form a comprehensive model based on the DEA method. The ideal point DMU adopts a self-assessment system according to the efficiency concept, while the anti-ideal point DMU adopts an other-assessment system according to the fairness concept. The two distinctive ideal point models are introduced into the DEA method and combined using a variance ratio. The new model yields a reasonable result. Numerical examples are provided to illustrate the newly constructed model and to certify its rationality through comparative analysis with the traditional DEA model.
Comparative efficiency of different methods of gluten extraction in indigenous varieties of wheat.
Imran, Samra; Hussain, Zaib; Ghafoor, Farkhanda; Nagra, Saeedahmad; Ziai, Naheeda Ashbeal
2013-06-01
The present study investigated six varieties of locally grown wheat (Lasani, Sehar, Miraj-08, Chakwal-50, Faisalabad-08 and Inqlab), procured from Punjab Seed Corporation, Lahore, Pakistan, for their proximate contents. On the basis of protein content and ready availability, Faisalabad-08 (FD-08) was selected for the assessment of the comparative efficiency of various methods used for gluten extraction. Three methods, mechanical, chemical and microbiological, were used for the extraction of gluten from FD-08. Each method was carried out under ambient conditions using a drying temperature of 55 °C. The mechanical method utilized four different processes, viz. the dough process, dough batter process, batter process and ethanol washing process, using a standard 150 mesh. The starch thus obtained was analyzed for its proximate contents. The dough batter process proved to be the most efficient mechanical method and was further investigated using 200 and 300 mesh. Gluten content was determined using a sandwich omega-gliadin enzyme-linked immunosorbent assay (ELISA). The results of the dough batter process using 200 mesh indicated a starch product with a gluten content of 678 ppm. The chemical method yielded a high gluten content of more than 5000 ppm, and the microbiological method reduced the gluten content from 2500 ppm to 398 ppm. From the results it was observed that no gluten extraction method was viable to produce starch that fulfils the criteria of a gluten-free product (20 ppm).
Energy Technology Data Exchange (ETDEWEB)
Dixon, D.A., E-mail: ddixon@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663, MS P365, Los Alamos, NM 87545 (United States); Prinja, A.K., E-mail: prinja@unm.edu [Department of Nuclear Engineering, MSC01 1120, 1 University of New Mexico, Albuquerque, NM 87131-0001 (United States); Franke, B.C., E-mail: bcfrank@sandia.gov [Sandia National Laboratories, Albuquerque, NM 87123 (United States)
2015-09-15
This paper presents the theoretical development and numerical demonstration of a moment-preserving Monte Carlo electron transport method. Foremost, a full implementation of the moment-preserving (MP) method within the Geant4 particle simulation toolkit is demonstrated. Beyond implementation details, it is shown that the MP method is a viable alternative to the condensed history (CH) method for inclusion in current and future generation transport codes through demonstration of the key features of the method including: systematically controllable accuracy, computational efficiency, mathematical robustness, and versatility. A wide variety of results common to electron transport are presented illustrating the key features of the MP method. In particular, it is possible to achieve accuracy that is statistically indistinguishable from analog Monte Carlo, while remaining up to three orders of magnitude more efficient than analog Monte Carlo simulations. Finally, it is shown that the MP method can be generalized to any applicable analog scattering DCS model by extending previous work on the MP method beyond analytical DCSs to the partial-wave (PW) elastic tabulated DCS data.
International Nuclear Information System (INIS)
May, Gökan; Barletta, Ilaria; Stahl, Bojan; Taisch, Marco
2015-01-01
Highlights: • We propose a 7-step methodology to develop firm-tailored energy-related KPIs (e-KPIs). • We provide a practical guide for companies to identify their most important e-KPIs. • e-KPIs support identification of energy efficiency improvement areas in production. • The method employs an action plan for achieving energy saving targets. • The paper strengthens the theoretical base for energy-based decision making in manufacturing. - Abstract: Measuring the energy efficiency performance of equipment, processes and factories is the first step to effective energy management in production. The resulting energy-related information allows the assessment of the progress of manufacturing companies toward their energy efficiency goals. In that respect, the study addresses the challenge that current industrial approaches lack the means and appropriate performance indicators to compare the energy-use profiles of machines and processes, and to compare their energy efficiency performance with that of competitors. Focusing on this challenge, the main objective of the paper is to present a method which supports manufacturing companies in the development of energy-based performance indicators. For this purpose, we provide a 7-step method to develop production-tailored and energy-related key performance indicators (e-KPIs). These indicators allow the interpretation of cause-effect relationships and therefore support companies in their operative decision-making process. Consequently, the proposed method supports the identification of weaknesses and areas for energy efficiency improvements related to the management of production and operations. The study therefore aims to strengthen the theoretical base necessary to support energy-based decision making in manufacturing industries
The efficiency of different estimation methods of hydro-physical limits
Directory of Open Access Journals (Sweden)
Emma María Martínez
2012-12-01
The soil water available to crops is defined by specific values of water potential limits. Underlying the estimation of hydro-physical limits, identified as the permanent wilting point (PWP) and field capacity (FC), is the selection of a suitable method based on a multi-criteria analysis that is not always clear and well defined. In this kind of analysis, the time required for measurements must be taken into consideration, as well as other external measurement factors, e.g., the reliability and suitability of the study area, measurement uncertainty, cost, effort and labour invested. In this paper, the efficiency of different methods for determining hydro-physical limits is evaluated using indices that allow for the calculation of efficiency in terms of effort and cost. The analysis evaluates both direct determination methods (pressure plate, PP, and water activity meter, WAM) and indirect estimation methods (pedotransfer functions, PTFs). The PTFs must be validated for the area of interest before use, but the time and cost associated with this validation are not included in the cost of analysis. Compared to the other methods, the combined use of PP and WAM to determine hydro-physical limits differs significantly in the time and cost required and the quality of information. For direct methods, increasing sample size significantly reduces cost and time. This paper assesses the effectiveness of combining a general analysis based on efficiency indices with more specific analyses based on the different influencing factors, which were considered separately so as not to mask potential benefits or drawbacks that are not evidenced in the efficiency estimation.
Efficient free energy calculations by combining two complementary tempering sampling methods.
Xie, Liangxu; Shen, Lin; Chen, Zhe-Ning; Yang, Mingjun
2017-01-14
Although energy barriers can be efficiently crossed in reaction coordinate (RC) guided sampling, this type of method suffers from the difficulty of identifying the correct RCs, or requires high dimensionality of the defined RCs for a given system. If only approximate RCs with significant barriers are used in the simulations, hidden energy barriers of small to medium height may exist in other degrees of freedom (DOFs) relevant to the target process and consequently cause insufficient sampling. To address sampling in this so-called hidden-barrier situation, here we propose an effective approach that combines temperature accelerated molecular dynamics (TAMD), an efficient RC-guided sampling method, with integrated tempering sampling (ITS), a generalized ensemble sampling method. In this combined ITS-TAMD method, the sampling along the major RCs with high energy barriers is guided by TAMD, and the sampling of the remaining DOFs with lower but not negligible barriers is enhanced by ITS. The performance of ITS-TAMD has been examined on three systems with hidden barriers. In comparison to the standalone TAMD or ITS approach, the present hybrid method shows three main improvements. (1) Sampling efficiency can be improved by at least five times, even in the presence of hidden energy barriers. (2) The canonical distribution can be more accurately recovered, from which thermodynamic properties along other collective variables can be computed correctly. (3) The robustness of the selection of major RCs suggests that the dimensionality of the necessary RCs can be reduced. Our work shows further potential applications of the ITS-TAMD method as an efficient and powerful tool for the investigation of a broad range of interesting cases.
International Nuclear Information System (INIS)
Grigorescu, L.
1976-07-01
The efficiency extrapolation method was improved by establishing ''linearity conditions'' for the discrimination on the gamma channel of the coincidence equipment. These conditions were proved to eliminate the systematic error of the method. A control procedure for the fulfilment of the linearity conditions and the estimation of the residual systematic error is given. For low-energy gamma transitions an ''equivalent scheme principle'' was established, which allows a correct application of the method. Solutions of Cs-134, Co-57, Ba-133 and Zn-65 were standardized with an ''effective standard deviation'' of 0.3-0.7 per cent. For Zn-65, ''special linearity conditions'' were applied. (author)
Efficient propagation of the hierarchical equations of motion using the matrix product state method
Shi, Qiang; Xu, Yang; Yan, Yaming; Xu, Meng
2018-05-01
We apply the matrix product state (MPS) method to propagate the hierarchical equations of motion (HEOM). It is shown that the MPS approximation works well in different types of problems, including boson and fermion baths. The MPS method based on the time-dependent variational principle is also found to be applicable to HEOM with over one thousand effective modes. Combining the flexibility of the HEOM in defining the effective modes with the efficiency of the MPS method may thus provide a promising tool for simulating quantum dynamics in condensed phases.
Collaborative validation of a rapid method for efficient virus concentration in bottled water
DEFF Research Database (Denmark)
Schultz, Anna Charlotte; Perelle, Sylvie; Di Pasquale, Simona
2011-01-01
Enteric viruses, including norovirus (NoV) and hepatitis A virus (HAV), have emerged as a major cause of waterborne outbreaks worldwide. Due to their low infectious doses and low concentrations in water samples, an efficient and rapid virus concentration method is required for routine control. Three newly developed methods, A, B and C, for virus concentration in bottled water were compared against the reference method D: (A) Convective Interaction Media (CIM) monolithic chromatography; filtration of viruses followed by (B) direct lysis of viruses on the membrane or (C) concentration of viruses by ultracentrifugation; and (D) concentration of viruses by ultrafiltration. Each method (A, B and C) was assessed for its efficacy in recovering 10-fold dilutions of HAV and feline calicivirus (FCV) spiked into bottles of 1.5 L of mineral water. Within the tested characteristics, all the new methods showed better performance than method D...
An Energy Efficiency Evaluation Method Based on Energy Baseline for Chemical Industry
Directory of Open Access Journals (Sweden)
Dong-mei Yao
2016-01-01
According to the requirements and structure of the ISO 50001 energy management system, this study proposes an energy efficiency evaluation method based on energy baselines for the chemical industry. Using this method, the implementation effect of an energy plan in chemical production processes can be evaluated quantitatively, and evidence for system fault diagnosis can be provided. The method establishes energy baseline models that can meet the demands of different kinds of production processes and gives a general solving method for each kind of model based on the production data. The implementation effect of the energy plan can then be evaluated, and whether the system is running normally can be determined, through the baseline model. Finally, the method is applied to the cracked-gas compressor unit of an ethylene plant in a petrochemical enterprise, demonstrating that it is correct and practical.
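A common form of energy baseline model is a least-squares fit of energy use against a production driver, with savings evaluated as the gap between baseline-predicted and actual consumption. The linear form and the numbers below are illustrative assumptions; the paper's baseline models are more general:

```python
def fit_baseline(production, energy):
    """Ordinary least squares for energy = a * production + b,
    a simple single-driver energy baseline model."""
    n = len(production)
    mx = sum(production) / n
    my = sum(energy) / n
    a = sum((x - mx) * (y - my) for x, y in zip(production, energy)) / \
        sum((x - mx) ** 2 for x in production)
    return a, my - a * mx

def savings(a, b, production, actual_energy):
    """Positive value: actual use is below the baseline prediction."""
    return sum(a * x + b for x in production) - sum(actual_energy)

# hypothetical pre-project data: 2 kWh per unit plus a 10 kWh fixed load
a, b = fit_baseline([10, 20, 30, 40], [30, 50, 70, 90])
# post-project periods: baseline predicts 60 and 80 kWh, actual is lower
s = savings(a, b, [25, 35], [55, 70])
```

Large residuals of actual use against the baseline, in either direction, are also what flags the fault conditions the paper mentions.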
An Efficient, Noniterative Method of Identifying the Cost-Effectiveness Frontier.
Suen, Sze-chuan; Goldhaber-Fiebert, Jeremy D
2016-01-01
Cost-effectiveness analysis aims to identify treatments and policies that maximize benefits subject to resource constraints. However, the conventional process of identifying the efficient frontier (i.e., the set of potentially cost-effective options) can be algorithmically inefficient, especially when considering a policy problem with many alternative options or when performing an extensive suite of sensitivity analyses for which the efficient frontier must be found for each. Here, we describe an alternative one-pass algorithm that is conceptually simple, easier to implement, and potentially faster for situations that challenge the conventional approach. Our algorithm accomplishes this by exploiting the relationship between the net monetary benefit and the cost-effectiveness plane. To facilitate further evaluation and use of this approach, we also provide scripts in R and Matlab that implement our method and can be used to identify efficient frontiers for any decision problem. © The Author(s) 2015.
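The efficient frontier is the sequence of non-dominated, non-extendedly-dominated strategies with increasing incremental cost-effectiveness ratios. A one-pass scan over cost-sorted strategies, equivalent to an upper convex-hull sweep, can be sketched as follows (this is an illustrative reimplementation of the idea, not the authors' published R/Matlab code):

```python
def frontier(strategies):
    """Return the cost-effectiveness frontier from (cost, effect) pairs.
    One pass over cost-sorted strategies: skip simply dominated options,
    pop extendedly dominated ones via an ICER comparison on a stack."""
    pts = sorted(strategies, key=lambda s: (s[0], -s[1]))
    hull = []
    for c, e in pts:
        if hull and e <= hull[-1][1]:
            continue                      # dominated: dearer, no more effect
        while len(hull) >= 2:
            c1, e1 = hull[-2]
            c2, e2 = hull[-1]
            # pop the top if its ICER is at least that of the new option
            # relative to it (weak extended dominance), cross-multiplied
            if (c2 - c1) * (e - e2) >= (c - c2) * (e2 - e1):
                hull.pop()
            else:
                break
        hull.append((c, e))
    return hull
```

The resulting list has strictly increasing costs, effects, and ICERs, which is exactly the set over which a net-monetary-benefit maximizer would ever land.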
Efficient generalized Golub-Kahan based methods for dynamic inverse problems
Chung, Julianne; Saibaba, Arvind K.; Brown, Matthew; Westman, Erik
2018-02-01
We consider efficient methods for computing solutions to and estimating uncertainties in dynamic inverse problems, where the parameters of interest may change during the measurement procedure. Compared to static inverse problems, incorporating prior information in both space and time in a Bayesian framework can become computationally intensive, in part, due to the large number of unknown parameters. In these problems, explicit computation of the square root and/or inverse of the prior covariance matrix is not possible, so we consider efficient, iterative, matrix-free methods based on the generalized Golub-Kahan bidiagonalization that allow automatic regularization parameter and variance estimation. We demonstrate that these methods for dynamic inversion can be more flexible than standard methods and develop efficient implementations that can exploit structure in the prior, as well as possible structure in the forward model. Numerical examples from photoacoustic tomography, space-time deblurring, and passive seismic tomography demonstrate the range of applicability and effectiveness of the described approaches. Specifically, in passive seismic tomography, we demonstrate our approach on both synthetic and real data. To demonstrate the scalability of our algorithm, we solve a dynamic inverse problem with approximately 43 000 measurements and 7.8 million unknowns in under 40 s on a standard desktop.
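The generalized Golub-Kahan bidiagonalization at the heart of these solvers builds, in a matrix-free fashion, the relation A V_k = U_{k+1} B_k with B_k lower bidiagonal. A minimal (non-generalized, dense, no-reorthogonalization) sketch of the standard recurrence is:

```python
import numpy as np

def golub_kahan(A, b, k):
    """k steps of Golub-Kahan bidiagonalization seeded by b, yielding
    A @ V = U @ B with U (m x k+1), V (n x k), B (k+1 x k) lower
    bidiagonal. The paper's generalized variant also weights by
    covariance matrices; this is the plain version for illustration."""
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k))
    B = np.zeros((k + 1, k))
    beta = np.linalg.norm(b)
    U[:, 0] = b / beta
    for i in range(k):
        # alpha_i v_i = A^T u_i - beta_i v_{i-1}
        w = A.T @ U[:, i] - (beta * V[:, i - 1] if i > 0 else 0.0)
        alpha = np.linalg.norm(w)
        V[:, i] = w / alpha
        # beta_{i+1} u_{i+1} = A v_i - alpha_i u_i
        p = A @ V[:, i] - alpha * U[:, i]
        beta = np.linalg.norm(p)
        U[:, i + 1] = p / beta
        B[i, i], B[i + 1, i] = alpha, beta
    return U, V, B
```

Only matrix-vector products with A and A^T are needed, which is why the approach scales to the millions of unknowns quoted in the abstract.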
Rapid and efficient method to extract metagenomic DNA from estuarine sediments.
Shamim, Kashif; Sharma, Jaya; Dubey, Santosh Kumar
2017-07-01
Metagenomic DNA from sediments of selected estuaries of Goa, India was extracted using a simple, fast, efficient and environmentally friendly method. The recovery of pure metagenomic DNA with our method was significantly higher than with other well-known methods: the concentration of recovered metagenomic DNA ranged from 1185.1 to 4579.7 µg/g of sediment. The purity of the metagenomic DNA was also considerably high, as the ratio of absorbance at 260 and 280 nm ranged from 1.88 to 1.94. The recovered metagenomic DNA was therefore used directly in various molecular biology experiments, viz. restriction digestion, PCR amplification, cloning and metagenomic library construction. This clearly proved that our protocol for metagenomic DNA extraction using silica gel efficiently removed contaminants and prevented shearing of the metagenomic DNA. Thus, this modified method can be used to recover pure metagenomic DNA from various estuarine sediments in a rapid, efficient and eco-friendly manner.
Methods to improve efficiency of four stroke, spark ignition engines at part load
International Nuclear Information System (INIS)
Kutlar, Osman Akin; Arslan, Hikmet; Calik, Alper Tolga
2005-01-01
The four stroke, spark ignition (SI) engine pressure-volume (p-V) diagram contains two main parts: the compression-combustion-expansion part (high pressure loop) and the exhaust-intake part (low pressure, or gas exchange, loop). The main reason for the efficiency decrease at part load in these engines is the flow restriction at the cross-sectional area of the intake system caused by partially closing the throttle valve, which leads to increased pumping losses and an increased low pressure loop area on the p-V diagram. Meanwhile, poorer combustion quality, i.e. lower combustion speed and cycle-to-cycle variations, additionally influences these loop areas. In this study, methods for increasing efficiency at part load conditions and their potential for practical use are investigated. The study also includes a review of the vast literature on this problem. The investigation shows that the potential for increasing the efficiency of SI engines at part load conditions is not yet exhausted. Each method has its own advantages and disadvantages. Among these, the most promising methods for decreasing fuel consumption at part load conditions are stratified charge and variable displacement engines. The other listed methods are more effective when used in combination than alone
A two-factor method for appraising building renovation and energy efficiency improvement projects
International Nuclear Information System (INIS)
Martinaitis, Vytautas; Kazakevicius, Eduardas; Vitkauskas, Aloyzas
2007-01-01
The renovation of residential buildings usually involves a variety of measures aiming at reducing energy and building maintenance bills, increasing safety and market value, and improving comfort and aesthetics. A significant number of project appraisal methods in current use, such as calculations of payback time, net present value, internal rate of return or cost of conserved energy (CCE), only quantify energy efficiency gains. These approaches are relatively easy to use, but offer a distorted view of complex modernization projects. On the other hand, various methods using multiple criteria take a much wider perspective but are usually time-consuming, based on sometimes uncertain assumptions and require sophisticated tools. A 'two-factor' appraisal method offers a compromise between these two approaches. The main idea of the method is to separate investments into those related to energy efficiency improvements, and those related to building renovation. Costs and benefits of complex measures, which both influence energy consumption and improve building constructions, are separated by using a building rehabilitation coefficient. The CCE is used for the appraisal of energy efficiency investments, while investments in building renovation are appraised using standard tools for the assessment of investments in maintenance, repair and rehabilitation
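The CCE mentioned in the abstract has a standard formulation: annualized investment cost (via the capital recovery factor) divided by annual energy saved. The numbers below are illustrative, not from the paper:

```python
def cost_of_conserved_energy(investment, annual_savings_kwh, rate, years):
    """CCE = annualized capital cost / annual energy saved, using the
    standard capital recovery factor crf = r / (1 - (1 + r)^-n)."""
    crf = rate / (1.0 - (1.0 + rate) ** -years)
    return investment * crf / annual_savings_kwh

# hypothetical insulation measure: 10 000 EUR investment, 5000 kWh/yr
# saved, 5 % discount rate, 20-year lifetime -> CCE in EUR/kWh
cce = cost_of_conserved_energy(10_000, 5000, 0.05, 20)
```

Comparing the CCE against the energy price tells whether the energy-efficiency share of the investment pays for itself, which is the role it plays in the two-factor appraisal.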
Method for Measuring Cooling Efficiency of Water Droplets Impinging onto Hot Metal Discs
Directory of Open Access Journals (Sweden)
Joachim Søreng Bjørge
2018-06-01
The present work outlines a method for measuring the cooling efficiency of droplets impinging onto hot metal discs in the temperature range of 85 °C to 400 °C, i.e., covering the boiling regimes experienced when applying water to heated objects in fires. Stainless steel and aluminum test discs (with 50-mm diameter, 10-mm thickness, and a surface roughness of Ra 0.4 or Ra 3) were suspended horizontally by four thermocouples that were used to record disc temperatures. The discs were heated by a laboratory burner prior to the experiments, and left to cool with and without applying 2.4-mm diameter water droplets while the disc temperatures were recorded. The droplets were generated under gravity from a hypodermic injection needle and hit the disc center at a speed of 2.2 m/s and a rate of 0.02 g/s, i.e., about three droplets per second. Based on the recorded rate of temperature change, as well as the disc mass and heat capacity, the absolute droplet cooling effect and the cooling efficiency relative to complete droplet evaporation were obtained. There were significant differences in the cooling efficiency as a function of temperature for the two metals investigated, but no statistically significant difference with respect to whether the surface roughness was Ra 0.4 or Ra 3. Aluminum showed a higher cooling efficiency in the temperature range of 110 °C to 140 °C, and a lower cooling efficiency in the range of 180 °C to 300 °C, compared to stainless steel. Both metals gave a maximum cooling efficiency in the range of 75% to 85%. A minimum of 5% cooling efficiency was found for the aluminum disc at 235 °C, i.e., the observed Leidenfrost point. Stainless steel, however, did not give a clear minimum in cooling efficiency, which was about 12-14% for disc temperatures above 300 °C. This simple and straightforward technique is well suited for assessing the cooling efficiency of
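The efficiency evaluation described in the abstract, cooling power inferred from the change in the disc's temperature decay rate, referenced to the power that complete droplet evaporation would extract, can be sketched as follows. The disc mass, heat capacity, and decay rates below are hypothetical inputs, not the paper's measurements:

```python
def droplet_cooling_efficiency(m_disc, c_disc, dTdt_dry, dTdt_wet,
                               m_dot_w, T_water=20.0,
                               c_w=4186.0, h_fg=2.257e6):
    """Cooling power attributable to the droplets (from the extra disc
    temperature decay rate, dT/dt values are negative K/s) divided by
    the power complete heating to 100 C plus full evaporation of the
    water flow m_dot_w (kg/s) would extract. Water properties in SI."""
    q_droplets = m_disc * c_disc * (abs(dTdt_wet) - abs(dTdt_dry))
    q_max = m_dot_w * (c_w * (100.0 - T_water) + h_fg)
    return q_droplets / q_max

# hypothetical stainless-steel disc of 0.154 kg, c = 502 J/(kg K),
# cooling at -0.5 K/s dry vs -1.0 K/s with 0.02 g/s of droplets applied
eta = droplet_cooling_efficiency(0.154, 502.0, -0.5, -1.0, 0.02e-3)
```

An efficiency near the Leidenfrost point collapses because most droplets bounce off the vapor film without evaporating, which is the minimum the paper reports for aluminum.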
Energy Technology Data Exchange (ETDEWEB)
Kang, M. Y.; Kim, J. H.; Choi, H. D. [Seoul National Univ., Seoul (Korea, Republic of); Sun, G. M. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2014-05-15
In the field of γ-ray measurements, determining the full energy (FE) absorption peak efficiency for a voluminous sample is difficult, because preparing a certified radiation source with the same chemical composition and geometry as the original voluminous sample is not easy. To overcome this inconvenience, simulation or semi-empirical methods are preferred in many cases. The Effective Solid Angle (ESA) code, which implements a semi-empirical approach, has been developed by the Applied Nuclear Physics Group at Seoul National University. In this study, we validated the ESA code using Marinelli-type voluminous KRISS (Korea Research Institute of Standards and Science) CRM (Certified Reference Material) sources and IAEA standard γ-ray point sources, and the semi-empirically determined efficiency curve for the voluminous source obtained with the ESA code was compared with the experimental values. We calculated the efficiency curve of the voluminous source from the measured efficiency of the standard point source using the ESA code. We will continue the ESA code validation by measuring various CRM volume sources with detectors of different efficiencies.
Directory of Open Access Journals (Sweden)
Qianwang Deng
2015-11-01
Remanufacturing can bring considerable economic and environmental benefits such as cost savings, conservation of energy and resources, and reduction of emissions. With the increasing awareness of sustainable manufacturing, remanufacturing has gradually become a research priority. Most studies concentrate on the analysis of influencing factors or the evaluation of economic and environmental performance in remanufacturing, while little effort has been devoted to investigating the critical factors influencing the eco-efficiency of remanufacturing. Considering the current development of the remanufacturing industry in China, this paper proposes a set of factors influencing the eco-efficiency of remanufacturing and then utilizes a fuzzy Decision Making Trial and Evaluation Laboratory (DEMATEL) method to establish relation matrixes reflecting the interdependent relationships among these factors. Finally, the contributions of each factor to eco-efficiency and the mutual influence values among them are obtained, and the critical factors in the eco-efficiency of remanufacturing are identified. The results of the present work can provide theoretical support for the government to make appropriate policies to improve the eco-efficiency of remanufacturing.
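The crisp core of the DEMATEL procedure mentioned above (the fuzzy variant defuzzifies the expert judgments first) normalises a direct-relation matrix D and computes the total relation matrix T = D(I − D)⁻¹; a minimal sketch with an invented 3-factor matrix:

```python
# Hedged sketch of the crisp DEMATEL core; the input matrix is invented
# for illustration, not taken from the paper's survey data.
import numpy as np

def dematel(direct):
    """Return the total relation matrix T plus the prominence (importance)
    and relation (net cause/effect) indices for each factor."""
    A = np.asarray(direct, dtype=float)
    # normalise by the largest row/column sum so the Neumann series converges
    D = A / max(A.sum(axis=1).max(), A.sum(axis=0).max())
    T = D @ np.linalg.inv(np.eye(len(A)) - D)   # T = D (I - D)^-1
    prominence = T.sum(axis=1) + T.sum(axis=0)  # overall importance
    relation = T.sum(axis=1) - T.sum(axis=0)    # net cause (+) / effect (-)
    return T, prominence, relation
```

Factors with a large prominence and positive relation are the "critical cause" factors the abstract refers to.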
An Efficient Explicit-time Description Method for Timed Model Checking
Directory of Open Access Journals (Sweden)
Hao Wang
2009-12-01
Timed model checking, the method to formally verify real-time systems, is attracting increasing attention from both the model checking community and the real-time community. Explicit-time description methods verify real-time systems using general model constructs found in standard un-timed model checkers. Lamport proposed an explicit-time description method using a clock-ticking process (Tick) to simulate the passage of time, together with a group of global variables to model time requirements. Two methods, the Sync-based Explicit-time Description Method using rendezvous synchronization steps and the Semaphore-based Explicit-time Description Method using only one global variable, were subsequently proposed; both achieve better modularity than Lamport's method in modeling real-time systems. In contrast to timed-automata-based model checkers like UPPAAL, explicit-time description methods can access and store the current time instant for future calculations, which is necessary for many real-time systems, especially those with pre-emptive scheduling. However, the Tick process in the above three methods increments the time by one unit in each tick; the state spaces therefore grow relatively fast as the time parameters increase, which is a problem when the system's time period is relatively long. In this paper, we propose a more efficient method which enables the Tick process to leap multiple time units in one tick. Preliminary experimental results in a high performance computing environment show that this new method significantly reduces the state space and improves both the time and memory efficiency.
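The leaping-Tick idea can be illustrated with a toy scheduler that jumps straight to the next expiring time requirement instead of ticking one unit at a time; this is an illustrative sketch, not the paper's model-checker encoding:

```python
# Hedged sketch: a Tick that leaps to the nearest pending deadline.
# "timers" stands in for the global variables holding time requirements.

def run(now, horizon, timers):
    """Advance the clock by leaping directly to the next expiring timer,
    so only the time instants where something can happen become states."""
    states = []
    while now < horizon and timers:
        leap = min(timers)                 # nearest time requirement
        now += leap                        # one leap instead of `leap` ticks
        timers = [t - leap for t in timers if t - leap > 0]
        states.append(now)
    return states
```

For timers {5, 12, 5} this visits only the instants 5 and 12, whereas a unit-tick process would generate twelve intermediate states.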
Norzagaray-Valenzuela, Claudia D; Germán-Báez, Lourdes J; Valdez-Flores, Marco A; Hernández-Verdugo, Sergio; Shelton, Luke M; Valdez-Ortiz, Angel
2018-05-16
Microalgae are photosynthetic microorganisms widely used for the production of highly valued compounds, and recently they have been shown to be promising as a system for the heterologous expression of proteins. Several transformation methods have been successfully developed, of which the Agrobacterium tumefaciens-mediated method remains the most promising. However, microalgae transformation efficiency by A. tumefaciens has been shown to vary depending on several transformation conditions. The present study aimed to establish an efficient genetic transformation system in the green microalga Dunaliella tertiolecta using the A. tumefaciens method. The parameters assessed were the infection medium, the concentration of A. tumefaciens, and the co-culture time. As a preliminary screening, the expression of the gusA gene and the viability of transformed cells were evaluated and used to calculate a novel parameter called the Transformation Efficiency Index (TEI). The statistical analysis of TEI values showed five treatments with the highest gusA gene expression. To ensure stable transformation, transformed colonies were cultured on selective medium using hygromycin B, and the DNA of resistant colonies was extracted after five subcultures and molecularly analyzed by PCR. Results revealed that treatments which used solid infection medium, A. tumefaciens OD 600 = 0.5, and co-culture times of 72 h exhibited the highest percentage of stable gusA expression. Overall, this study established an efficient, optimized A. tumefaciens-mediated genetic transformation of D. tertiolecta, which represents a relatively easy procedure with no expensive equipment required. This simple and efficient protocol opens the possibility for further genetic manipulation of this commercially important microalga for biotechnological applications. Copyright © 2018 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Rodríguez, Daniel González; Lira, Carlos Alberto Brayner de Oliveira
2017-01-01
The hydrogen economy is one of the most promising concepts for the energy future. In this scenario, oil is replaced by hydrogen as an energy carrier, and this hydrogen must be produced in volumes not provided by the currently employed methods. In this work, two high temperature hydrogen production methods coupled to an advanced nuclear system are presented. A new design of a pebble-bed accelerator-driven nuclear system called TADSEA is chosen because of the advantages it offers in terms of transmutation and safety. For the conceptual design of the high temperature electrolysis process, a detailed computational fluid dynamics model was developed to analyze the solid oxide electrolytic cell, which has a strong influence on the process efficiency. A detailed flowsheet of the high temperature electrolysis process coupled to TADSEA through a Brayton gas cycle was developed using chemical process simulation software: Aspen HYSYS®. The model with optimized operating conditions produces 0.1627 kg/s of hydrogen, resulting in an overall process efficiency of 34.51%, a value in the range of results reported by other authors. A conceptual design of the iodine-sulfur thermochemical water splitting cycle was also developed. The overall efficiency of this process was calculated by performing an energy balance, resulting in 22.56%. The efficiency, hydrogen production rate, and energy consumption of the proposed models are within the ranges considered acceptable in the hydrogen economy concept, and are also compatible with the TADSEA design parameters. (author)
Energy Technology Data Exchange (ETDEWEB)
Kurnik, Charles W [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Violette, Daniel M. [Navigant, Boulder, CO (United States); Rathbun, Pamela [Tetra Tech, Madison, WI (United States)
2017-11-02
This chapter focuses on the methods used to estimate net energy savings in evaluation, measurement, and verification (EM&V) studies for energy efficiency (EE) programs. The chapter provides a definition of net savings, which remains an unsettled topic both within the EE evaluation community and across the broader public policy evaluation community, particularly in the context of attributing savings to a program. The chapter differs from the measure-specific Uniform Methods Project (UMP) chapters in both its approach and its work product. Unlike other UMP resources that provide recommended protocols for determining gross energy savings, this chapter describes and compares current industry practices for determining net energy savings but does not prescribe methods.
Numerical multistep methods for the efficient solution of quantum mechanics and related problems
International Nuclear Information System (INIS)
Anastassi, Z.A.; Simos, T.E.
2009-01-01
In this paper we present recent developments in the numerical integration of the Schrödinger equation and related systems of ordinary differential equations with oscillatory solutions, such as the N-body problem. We examine several types of multistep methods (explicit, implicit, predictor-corrector, hybrid) and several properties (P-stability, trigonometric fitting of various orders, phase fitting, high phase-lag order, algebraic order). We analyze the local truncation error and the stability of the methods. The error for the Schrödinger equation is also presented, which reveals the relation of the error to the energy. The efficiency of the methods is evaluated through the integration of five problems. Figures are presented and analyzed, and some general conclusions are drawn. Code written in Maple is given for the development of all methods analyzed in this paper. The Matlab subroutines concerning the integration of the methods are also presented.
Zhang, Ming; Xie, Fei; Zhao, Jing; Sun, Rui; Zhang, Lei; Zhang, Yue
2018-04-01
The prosperity of license plate recognition technology has made a great contribution to the development of Intelligent Transport Systems (ITS). In this paper, a robust and efficient license plate recognition method is proposed, based on a combined feature extraction model and a BPNN (Back Propagation Neural Network) algorithm. Firstly, the candidate region detection and segmentation method for the license plate is developed. Secondly, a new feature extraction model is designed that combines three sets of features. Thirdly, the license plate classification and recognition method using the combined feature model and the BPNN algorithm is presented. Finally, the experimental results indicate that both license plate segmentation and recognition can be achieved effectively by the proposed algorithm. Compared with three traditional methods, the recognition accuracy of the proposed method increased to 95.7% and the processing time decreased to 51.4 ms.
Xu, Zheng; Wang, Sheng; Li, Yeqing; Zhu, Feiyun; Huang, Junzhou
2018-02-08
The most recent history of parallel Magnetic Resonance Imaging (pMRI) has in large part been devoted to finding ways to reduce acquisition time. While the joint total variation (JTV) regularized model has been demonstrated to be a powerful tool for increasing sampling speed in pMRI, the major bottleneck is the inefficiency of the optimization method. Whereas all present state-of-the-art optimizations for the JTV model reach only a sublinear convergence rate, in this paper we squeeze out more performance by proposing a linearly convergent optimization method for the JTV model. The proposed method is based on the Iteratively Reweighted Least Squares algorithm. Due to the complexity of the tangled JTV objective, we design a novel preconditioner to further accelerate the proposed method. Extensive experiments demonstrate the superior performance of the proposed algorithm for pMRI regarding both accuracy and efficiency compared with state-of-the-art methods.
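The Iteratively Reweighted Least Squares idea underlying the proposed solver can be shown on a toy ℓ1 regression problem; this sketch omits the JTV structure and the preconditioner entirely and is not the paper's pMRI solver:

```python
# Hedged sketch of IRLS: minimise ||Ax - b||_1 by solving a sequence of
# weighted least-squares problems with weights w_i = 1 / max(|r_i|, eps).
import numpy as np

def irls_l1(A, b, iters=50, eps=1e-8):
    """IRLS for the l1 regression problem min_x ||Ax - b||_1."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # least-squares warm start
    for _ in range(iters):
        r = A @ x - b
        w = 1.0 / np.maximum(np.abs(r), eps)   # reweighting step
        W = np.diag(w)
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    return x
```

On a line-fitting problem with one gross outlier, the ℓ1 solution recovers the clean slope and intercept where ordinary least squares would be dragged away, illustrating why the reweighting scheme is attractive as an inner solver.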
Chen, Zhongxian; Yu, Haitao; Wen, Cheng
2014-01-01
The goal of a direct drive ocean wave energy extraction system is to convert ocean wave energy into electricity. The problem explored in this paper is the design and optimal control of such a system. Whereas most ocean wave energy extraction systems are optimized through their structure, weight, and material, an optimal control method based on internal model proportional-integral-derivative (IM-PID) control is proposed in this paper. With this control method, the heave speed of the outer buoy of the energy extraction system is brought into resonance with the incident wave, and the system efficiency is largely improved. The validity of the proposed optimal control method is verified in both regular and irregular ocean waves, and it is shown that the IM-PID control method is optimal in that it maximizes the energy conversion efficiency. In addition, the anti-interference ability of the IM-PID control method has been assessed, and the results show that it has good robustness, high precision, and strong anti-interference ability. PMID:25152913
Directory of Open Access Journals (Sweden)
Hesheng Cheng
2016-01-01
A metamaterial-inspired, efficient electrically small antenna is first proposed. Several methods for improving the power transfer efficiency (PTE) of wireless power transfer (WPT) systems composed of the proposed antenna in the radiating near-field region are then investigated. Method one uses a proposed antenna as a power retriever. This WPT system consists of three of the proposed antennas: a transmitter, a receiver, and a retriever, and is fed by only one power source. At a fixed distance from receiver to transmitter, the distance between the transmitter and the retriever is tuned to maximize power transfer from the transmitter to the receiver. Method two uses two of the proposed antennas as transmitters and one antenna as a receiver, with the receiver placed between the two transmitters. In this system, two power sources feed the two transmitters, respectively. By adjusting the phase difference between the two feeding sources, the maximum PTE can be obtained at the optimal phase difference. Using the same configuration as method two, method three is proposed, in which the maximum PTE is increased by regulating the voltage (or power) ratio of the two feeding sources. In addition, we combine the proposed methods to construct another two schemes, which improve the PTE to different extents compared with the classical WPT system.
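Method two's phase adjustment can be mimicked numerically: sweep the feeding phase difference and keep the value that maximises the combined field at the receiver. The complex channel gains h1 and h2 below are invented stand-ins for the real antenna couplings, not measured values:

```python
# Hedged sketch: brute-force sweep of the feeding phase difference between
# two transmitters; the fields from the two paths add coherently at the
# receiver, so PTE is maximised when the phases align.
import cmath
import math

def best_phase(h1, h2, steps=360):
    """Return the feeding phase difference (radians) that maximises
    |h1 + h2 * exp(j*phi)|, i.e. the coherent sum at the receiver."""
    best_k = max(range(steps),
                 key=lambda k: abs(h1 + h2 * cmath.exp(1j * 2.0 * math.pi * k / steps)))
    return 2.0 * math.pi * best_k / steps
```

If the second path lags the first by some phase, the sweep returns the compensating feed phase, which is exactly the alignment behaviour the abstract describes.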
An efficient method for evaluating the effect of input parameters on the integrity of safety systems
International Nuclear Information System (INIS)
Tang, Zhang-Chun; Zuo, Ming J.; Xiao, Ningcong
2016-01-01
Safety systems are significant in reducing or preventing risk from potentially dangerous activities in industry. The probability of failure to perform its functions on demand (PFD) for a safety system usually exhibits variation due to the epistemic uncertainty associated with various input parameters. This paper uses the complementary cumulative distribution function of the PFD to define the exceedance probability (EP) that the PFD of the system is larger than the designed value. Sensitivity analysis of the safety system is further investigated, focusing on the effect of the variance of an individual input parameter on the EP resulting from the epistemic uncertainty associated with the input parameters. An available numerical technique, the finite difference method, is first employed to evaluate this effect, but it requires extensive computational cost and needs a step size to be selected. To address these difficulties, this paper proposes an efficient simulation method to estimate the effect. The proposed method needs only one evaluation to estimate the effects corresponding to all input parameters. Two examples are used to demonstrate that the proposed method can obtain more accurate results with less computation time compared to reported methods.
Highlights:
• We define a sensitivity index to measure the effect of a parameter on a safety system.
• We analyze the physical meaning of the sensitivity index.
• We propose an efficient simulation method to assess the sensitivity index.
• We derive formulations of this index for the lognormal and beta distributions.
• The results identify the important parameters for the exceedance probability of the safety system.
Efficient 3D frequency response modeling with spectral accuracy by the rapid expansion method
Chu, Chunlei
2012-07-01
Frequency responses of seismic wave propagation can be obtained either by directly solving the frequency domain wave equations or by transforming the time domain wavefields using the Fourier transform. The former approach requires solving systems of linear equations, which becomes progressively difficult to tackle for larger scale models and for higher frequency components. In contrast, the latter approach can be efficiently implemented using explicit time integration methods in conjunction with running summations as the computation progresses. Commonly used explicit time integration methods correspond to truncated Taylor series approximations that can cause significant errors for large time steps. The rapid expansion method (REM) uses the Chebyshev expansion and offers an optimal solution to the second-order-in-time wave equations. When applying the Fourier transform to the time domain wavefield solution computed by the REM, we can derive a frequency response modeling formula that has the same form as the original time domain REM equation but with different summation coefficients. In particular, the summation coefficients for the frequency response modeling formula correspond to the Fourier transform of those for the time domain modeling equation. As a result, we can directly compute frequency responses from the Chebyshev expansion polynomials rather than from the time domain wavefield snapshots, as other time domain frequency response modeling methods do. When combined with the pseudospectral method in space, this new frequency response modeling method can produce spectrally accurate results with high efficiency. © 2012 Society of Exploration Geophysicists.
Collier, Nathan; Dalcin, Lisandro; Calo, Victor M.
2014-01-01
We compare the computational efficiency of isogeometric Galerkin and collocation methods for partial differential equations in the asymptotic regime. We define a metric to identify when numerical experiments have reached this regime. We then apply these ideas to analyze the performance of different isogeometric discretizations, which encompass C0 finite element spaces and higher-continuous spaces. We derive convergence and cost estimates in terms of the total number of degrees of freedom and then perform an asymptotic numerical comparison of the efficiency of these methods applied to an elliptic problem. These estimates are derived assuming that the underlying solution is smooth, that full Gauss quadrature is used in each non-zero knot span, and that the numerical solution of the discrete system is found using a direct multi-frontal solver. We conclude that, under the assumptions detailed in this paper, higher-continuous basis functions provide marginal benefits.
An Improved Supplier Driven Packaging Design and Development Method for Supply Chain Efficiency
DEFF Research Database (Denmark)
Sohrabpour, Vahid; Oghazi, Pejvak; Olsson, Annika
2016-01-01
Packaging and the role it plays in supply chain efficiency are overlooked in most design and development research. An opportunity exists to meet the needs of supply chains to increase efficiency. This research presents three propositions on how to reduce the gap between supply chain needs and satisfaction in interaction with the product and packaging system. It also proposes a supply chain focused packaging design and development method to better satisfy supply chain needs placed on packaging. An extensive literature review was conducted, and a Tetra Pak derived case study was developed. The propositions were formulated and became the basis for improving Tetra Pak's existing packaging design and development method by better integrating supply chain needs. This was accomplished by using an expanded operational life cycle perspective that includes the entire supply chain. The resulting supply chain...
Practical Validation of Economic Efficiency Modelling Method for Multi-Boiler Heating System
Directory of Open Access Journals (Sweden)
Aleksejs Jurenoks
2017-12-01
In present-day conditions, information technology is frequently associated with the modelling process, using computer technology as well as information networks. Statistical modelling is one of the most widespread methods for the study of economic systems. The selection of methods for modelling economic systems depends on a great number of conditions of the system under study. Modelling is frequently associated with the factor of uncertainty (or risk), whose description goes beyond the confines of traditional statistical modelling, which in turn complicates the modelling process. This article describes the modelling process for assessing the economic efficiency of a multi-boiler adaptive heating system in real time, which allows for dynamic change in the operation scenarios of system service installations while enhancing the economic efficiency of the system under consideration.
New Method of Selecting Efficient Project Portfolios in the Presence of Hybrid Uncertainty
Directory of Open Access Journals (Sweden)
Bogdan Rębiasz
2016-01-01
A new method for selecting efficient project portfolios in the presence of hybrid uncertainty is presented. Pareto-optimal solutions are defined by an algorithm for generating project portfolios. The method presented allows us to select efficient project portfolios taking into account statistical and economic dependencies between projects, when some of the parameters used in the calculation of effectiveness can be expressed in the form of an interactive possibility distribution and some in the form of a probability distribution. The procedure for processing such hybrid data combines stochastic simulation with nonlinear programming. The interactions between data are modeled by correlation matrices and interval regression. Economic dependences are taken into account by equations balancing the production capacity of the company. The practical example presented indicates that interaction between projects has a significant impact on the results of calculations. (original abstract)
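A stripped-down version of the portfolio generation and Pareto filtering can be sketched with plain Gaussian sampling; the paper's hybrid possibility/probability data, dependency matrices, and balance equations are omitted, and the project parameters below are invented:

```python
# Hedged sketch: enumerate portfolios, estimate (mean NPV, NPV std) by
# stochastic simulation, and keep the Pareto-optimal (non-dominated) ones.
import random
from itertools import combinations
from statistics import mean, stdev

def evaluate(portfolio, n=2000, seed=7):
    """Sample the portfolio NPV n times; each project is (mean, std)."""
    rng = random.Random(seed)
    npvs = [sum(rng.gauss(m, s) for m, s in portfolio) for _ in range(n)]
    return mean(npvs), stdev(npvs)

def efficient_portfolios(projects, k):
    """Return the Pareto front over k-project portfolios:
    maximise mean NPV, minimise NPV standard deviation (risk)."""
    cands = [(c, evaluate(c)) for c in combinations(projects, k)]
    return [(c, (m, s)) for c, (m, s) in cands
            if not any(m2 >= m and s2 <= s and (m2, s2) != (m, s)
                       for _, (m2, s2) in cands)]
```

A project that is worse in both expected return and risk drops out of the front, which is the selection behaviour the abstract describes.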
Efficient algorithm for generating spectra using line-by-line methods
International Nuclear Information System (INIS)
Sonnad, V.; Iglesias, C.A.
2011-01-01
A method is presented for the efficient generation of spectra using line-by-line approaches. The only approximation is replacing the line shape function with an interpolation procedure, which makes the method independent of the line profile functional form. The resulting computational savings for a large number of lines are proportional to the number of frequency points in the spectral range. Therefore, for large-scale problems the method can provide speedups of two orders of magnitude or more. The first step is to replace the explicit calculation of the profile by the Newton divided-differences interpolating polynomial. The second step is to accumulate the lines, effectively reducing their number to the number of frequency points. The final step is recognizing the resulting expression as a convolution, amenable to FFT methods. The reduction in computational effort for a configuration-to-configuration transition array with a large number of lines is proportional to the number of frequency points. The method involves no approximations except for replacing the explicit profile evaluation by interpolation. Specifically, the line accumulation and convolution are exact given the interpolation procedure. Furthermore, the interpolation makes the method independent of the line profile functional form, unlike other schemes that use FFT methods to generate line-by-line spectra but rely on the analytic form of the profile's Fourier transform. Finally, the method relies on a uniform frequency mesh. For non-uniform frequency meshes, however, the method can be applied by using a suitable temporary uniform mesh, and the results interpolated onto the final mesh with little additional cost.
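The accumulate-then-convolve structure of the method can be sketched as below; for brevity this sketch deposits each line on the nearest mesh point rather than using the Newton divided-difference interpolation described in the text, so it is an illustration of the pipeline, not the full scheme:

```python
# Hedged sketch: accumulate line strengths on a uniform frequency mesh,
# then apply a single FFT convolution with the shared line profile.
import numpy as np

def spectrum_fft(grid, lines, profile):
    """grid: uniform frequency mesh; lines: (center, strength) pairs;
    profile: sampled line shape on the same mesh (centered at grid[0]).
    Returns the spectrum via one circular FFT convolution."""
    strengths = np.zeros_like(grid)
    dv = grid[1] - grid[0]
    for center, s in lines:
        i = int(round((center - grid[0]) / dv))  # nearest-point deposition
        if 0 <= i < len(grid):
            strengths[i] += s
    # circular convolution; pad in practice to avoid wrap-around
    return np.real(np.fft.ifft(np.fft.fft(strengths) * np.fft.fft(profile)))
```

The cost is one deposition pass over the lines plus one FFT pair over the mesh, which is where the quoted savings over per-line profile evaluation come from.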
DEFF Research Database (Denmark)
Duan, Zhi; Siegumfeldt, Henrik
2010-01-01
We generated monoclonal scFv (single chain variable fragment) antibodies from an antibody phage display library towards three small synthetic peptides derived from the sequence of s1-casein. Key difficulties for the selection of scFv-phages against small peptides were addressed. Small peptides do ... The scFvs were sequenced and characterized, and their specificity was characterized by ELISA. The methods developed in this study are universally applicable for antibody phage display to efficiently produce antibody fragments against small peptides.
Efficiency of solvent extraction methods for the determination of methyl mercury in forest soils
Energy Technology Data Exchange (ETDEWEB)
Qian, J. [Department of Forest Ecology, Swedish University of Agricultural Sciences, Umeå (Sweden); Dept. of Analytical Chemistry, Umeå Univ. (Sweden); Skyllberg, U. [Department of Forest Ecology, Swedish University of Agricultural Sciences, Umeå (Sweden); Tu, Q.; Frech, W. [Dept. of Analytical Chemistry, Umeå Univ. (Sweden); Bleam, W.F. [Dept. of Soil Science, University of Wisconsin, Madison, WI (United States)
2000-07-01
Methyl mercury was determined by gas chromatography, microwave induced plasma, atomic emission spectrometry (GC-MIP-AES) using two different methods. One was based on extraction of mercury species into toluene, pre-concentration by evaporation, and butylation of methyl mercury with a Grignard reagent followed by determination. With the other, methyl mercury was extracted into dichloromethane and back-extracted into water, followed by in situ ethylation, collection of the ethylated mercury species on Tenax, and determination. The accuracy of the entire procedure based on butylation was validated for the individual steps involved in the method. Methyl mercury added to various types of soil samples showed an overall average recovery of 87.5%. Reduced recovery was only caused by losses of methyl mercury during extraction into toluene and during pre-concentration by evaporation. The extraction of methyl mercury added to the soil was therefore quantitative. Since it is not possible to directly determine the extraction efficiency of incipient methyl mercury, the extraction efficiency of total mercury with an acidified solution containing CuSO₄ and KBr was compared with high-pressure microwave acid digestion. The solvent extraction efficiency was 93%. For the IAEA 356 sediment certified reference material, mercury was less efficiently extracted, and the determined methyl mercury concentrations were below the certified value. Incomplete extraction could be explained by the presence of a large fraction of inorganic sulfides, as determined by X-ray absorption near-edge structure spectroscopy (XANES). Analyses of the sediment reference material CRM 580 gave results in agreement with the certified value. The butylation method gave a detection limit for methyl mercury of 0.1 ng g⁻¹, calculated as three times the standard deviation for repeated analysis of soil samples. Lower values were obtained with the ethylation method. The precision, expressed as RSD for concentrations 20 times
Efficiency determination of whole-body counters by Monte Carlo method, using a microcomputer
International Nuclear Information System (INIS)
Fernandes Neto, J.M.
1987-01-01
A computer program using the Monte Carlo method to calculate the whole-body efficiency of counters for radiation distributed in the human body is developed. A phantom of human proportions was used, which was filled with a known and uniform solution containing a quantity of radioisotopes. 99mTc, 131I and 42K were used in this experiment, and their activities were compared by a liquid scintillator. (C.G.C.)
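The geometric core of such a Monte Carlo efficiency calculation can be sketched in a few lines. This is an illustrative reduction (a point source and an ideal conical acceptance region, both hypothetical), not the program described in the record:

```python
import math
import random

def geometric_efficiency_mc(half_angle_rad, n_samples, seed=0):
    """Monte Carlo estimate of the geometric detection efficiency: the
    fraction of isotropically emitted photons whose direction falls
    within a cone of the given half-angle toward the detector."""
    rng = random.Random(seed)
    cos_min = math.cos(half_angle_rad)
    hits = 0
    for _ in range(n_samples):
        # Isotropic emission: cos(theta) is uniform on [-1, 1]
        cos_theta = rng.uniform(-1.0, 1.0)
        if cos_theta >= cos_min:
            hits += 1
    return hits / n_samples

half_angle = math.radians(30)
estimate = geometric_efficiency_mc(half_angle, 200_000)
exact = (1 - math.cos(half_angle)) / 2  # analytic solid-angle fraction
print(estimate, exact)
```

The Monte Carlo estimate converges to the analytic solid-angle fraction as the sample count grows, which is the same principle the program applies to the far more complex phantom geometry.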
Estimation of the drift eliminator efficiency using numerical and experimental methods
Directory of Open Access Journals (Sweden)
Stodůlka Jiří
2016-01-01
The purpose of drift eliminators is to prevent significant amounts of water from escaping the cooling tower. They are designed to catch the droplets dragged by the tower draft, and the efficiency determined by the shape of the eliminator is the main evaluation criterion. The ability to eliminate the escaping water droplets is studied using CFD and the experimental IPI method.
Method and apparatus for improving the quality and efficiency of ultrashort-pulse laser machining
Stuart, Brent C.; Nguyen, Hoang T.; Perry, Michael D.
2001-01-01
A method and apparatus for improving the quality and efficiency of machining of materials with laser pulse durations shorter than 100 picoseconds by orienting and maintaining the polarization of the laser light such that the electric field vector is perpendicular to the edges of the material being processed. It can be used in any machining operation requiring remote delivery and/or high precision with minimal collateral damage.
QoE Power-Efficient Multimedia Delivery Method for LTE-A
Mushtaq, M. Sajid; Mellouk, Abdelhamid; Augustin, Brice; Fowler, Scott
2016-01-01
The fast growth of multimedia services over future wireless communication systems demands more network resources, efficient delivery of multimedia services with high user satisfaction, and power optimization of User Equipments (UEs). Resource and power optimization are significant in future mobile computing systems, because emerging multimedia services consume more resources and power. The 4G standard of the LTE-A wireless system has adopted the Discontinuous Reception (DRX) method to extend and o...
Highly efficient strong stability preserving Runge-Kutta methods with Low-Storage Implementations
Ketcheson, David I.
2008-01-01
Strong stability-preserving (SSP) Runge–Kutta methods were developed for time integration of semidiscretizations of partial differential equations. SSP methods preserve stability properties satisfied by forward Euler time integration, under a modified time-step restriction. We consider the problem of finding explicit Runge–Kutta methods with optimal SSP time-step restrictions, first for the case of linear autonomous ordinary differential equations and then for nonlinear or nonautonomous equations. By using alternate formulations of the associated optimization problems and introducing a new, more general class of low-storage implementations of Runge–Kutta methods, new optimal low-storage methods and new low-storage implementations of known optimal methods are found. The results include families of low-storage second and third order methods that achieve the maximum theoretically achievable effective SSP coefficient (independent of stage number), as well as low-storage fourth order methods that are more efficient than current full-storage methods. The theoretical properties of these methods are confirmed by numerical experiment.
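The SSP property rests on writing each stage as a convex combination of forward Euler steps. As a concrete (classical, not low-storage-optimal) instance, the three-stage, third-order Shu-Osher method can be sketched as:

```python
import math

def ssprk3_step(f, u, dt):
    """One step of the three-stage, third-order SSP Runge-Kutta method in
    Shu-Osher form: every stage is a convex combination of forward Euler
    steps, which is what transfers the forward-Euler stability
    properties under a modified time-step restriction."""
    u1 = u + dt * f(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(u2))

# Integrate the linear test problem u' = -u from u(0) = 1 to t = 1
f = lambda u: -u
u, dt = 1.0, 0.01
for _ in range(100):
    u = ssprk3_step(f, u, dt)
print(u)  # close to exp(-1) ~ 0.3678794
```

The low-storage implementations studied in the paper rearrange such stage combinations so that only two or three register vectors are kept, but the convex-combination structure above is the common starting point.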
Energy Technology Data Exchange (ETDEWEB)
Hu, Rui, E-mail: rhu@anl.gov; Yu, Yiqi
2016-11-15
Highlights:
• Developed a computationally efficient method for full-core conjugate heat transfer modeling of sodium fast reactors.
• Applied a fully-coupled JFNK solution scheme to avoid operator-splitting errors.
• The accuracy and efficiency of the method are confirmed with a 7-assembly test problem.
• The effects of different spatial discretization schemes are investigated and compared to RANS-based CFD simulations.
Abstract: For efficient and accurate temperature predictions of sodium fast reactor structures, a 3-D full-core conjugate heat transfer modeling capability is developed for an advanced system analysis tool, SAM. The hexagon lattice core is modeled with 1-D parallel channels representing the subassembly flow, and 2-D duct walls and inter-assembly gaps. The six sides of the hexagon duct wall and the near-wall coolant region are modeled separately to account for different temperatures and heat transfer between the coolant flow and each side of the duct wall. The Jacobian-Free Newton-Krylov (JFNK) solution method is applied to solve the fluid and solid fields simultaneously in a fully coupled fashion. The 3-D full-core conjugate heat transfer modeling capability in SAM has been demonstrated by a verification test problem with 7 fuel assemblies in a hexagon lattice layout. Additionally, the SAM simulation results are compared with RANS-based CFD simulations. Very good agreement has been achieved between the results of the two approaches.
Carotenoids from Foods of Plant, Animal and Marine Origin: An Efficient HPLC-DAD Separation Method
Directory of Open Access Journals (Sweden)
Irini F. Strati
2012-12-01
Carotenoids are important antioxidant compounds, present in many foods of plant, animal and marine origin. The aim of the present study was to describe the carotenoid composition of tomato waste, prawn muscle and cephalothorax, and avian (duck and goose) egg yolks through the use of a modified gradient elution HPLC method with a C30 reversed-phase column for the efficient separation and analysis of carotenoids and their cis-isomers. Elution time was reduced from 60 to 45 min without affecting the separation efficiency. All-trans lycopene predominated in tomato waste, followed by all-trans-β-carotene, 13-cis-lutein and all-trans lutein, while minor amounts of 9-cis-lutein, 13-cis-β-carotene and 9-cis-β-carotene were also detected. Considering the above findings, tomato waste is confirmed to be an excellent source for the recovery of carotenoids, especially all-trans lycopene, for commercial use. Xanthophylls were the major carotenoids of avian egg yolks: all-trans lutein and all-trans zeaxanthin in duck and goose egg yolk, respectively. In the Penaeus kerathurus prawn, several carotenoids (zeaxanthin, all-trans-lutein, canthaxanthin, cryptoxanthin, and optical and geometrical astaxanthin isomers) were identified in considerable amounts by the same method. A major advantage of this HPLC method was the efficient separation of carotenoids and their cis-isomers originating from a wide range of matrices.
Study of the efficiency transfer method using NaI(Tl) detectors
International Nuclear Information System (INIS)
Ramos, Thiago L.; Salgado, César M.
2017-01-01
The use of NaI(Tl) scintillation detectors for measurements requires the determination of the detection efficiency as a function of the energy of the incident photons. The efficiency curve can be obtained experimentally with the use of several calibrated mono-energetic sources with emission energies covering the whole range of interest, or by using the Monte Carlo method. The Institute of Nuclear Engineering develops several methodologies using these detectors as they are robust, inexpensive and do not need cooling. The assembly of an experimental arrangement is usually complex, since several factors influence the result and affect the reproducibility of measurements, such as: parallelism between source and detector, alignment between source and detector, and accuracy of the source-detector distance. In view of such difficulties, an automated positioning system for the source-detector set, controlled by an Arduino-based microcontroller, was developed in order to guarantee the reproducibility of the experimental arrangements. In the initial phase of this study, a mathematical model of a NaI(Tl) detector was developed in the MCNP-X code. A theoretical validation using the Efficiency Transfer Method was performed at three different positions on the detector's axial axis (10.6 cm, 11.3 cm and 12.0 cm). This method is based on the ratio of effective solid angles. The experimental validation presented maximum relative errors of 7.74% for the position 11.3 cm
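The efficiency-transfer idea, based on a ratio of solid angles, can be illustrated with the purely geometric on-axis formula for a disk detector. The detector radius and reference efficiency below are hypothetical, and the record's effective solid angles additionally weight for attenuation:

```python
import math

def disk_solid_angle(d, r):
    """Solid angle subtended by a circular detector face of radius r
    at an on-axis point source a distance d away."""
    return 2 * math.pi * (1 - d / math.sqrt(d * d + r * r))

def transfer_efficiency(eff_ref, d_ref, d_new, r):
    """Efficiency-transfer estimate: scale a reference efficiency by the
    ratio of solid angles at the new and reference positions."""
    return eff_ref * disk_solid_angle(d_new, r) / disk_solid_angle(d_ref, r)

# Hypothetical numbers: reference efficiency 1.2% at 10.6 cm,
# transferred to 12.0 cm for a 2.54 cm radius crystal face.
eff_new = transfer_efficiency(eff_ref=0.012, d_ref=10.6, d_new=12.0, r=2.54)
print(eff_new)
```

Moving the source farther away shrinks the subtended solid angle, so the transferred efficiency is lower than the reference value, as expected.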
Target-ion source unit ionization efficiency measurement by method of stable ion beam implantation
Panteleev, V.N; Fedorov, D.V; Moroz, F.V; Orlov, S.Yu; Volkov, Yu.M
The ionization efficiency is one of the most important parameters of an on-line used target-ion source system exploited for production of exotic radioactive beams. The ionization efficiency value determination as a characteristic of a target-ion source unit in the stage of its normalizing before on-line use is a very important step in the course of the preparation for an on-line experiment. At the IRIS facility (Petersburg Nuclear Physics Institute, Gatchina) a reliable and rather precise method of the target-ion source unit ionization efficiency measurement by the method of stable beam implantation has been developed. The method worked out exploits an off-line mass-separator for the implantation of the ion beams of selected stable isotopes of different elements into a tantalum foil placed inside the Faraday cup in the focal plane of the mass-separator. The amount of implanted ions has been measured with a high accuracy by the current integrator connected to the Faraday cup. After the implantation of needed a...
Efficiency of cleaning and disinfection of surfaces: correlation between assessment methods
Frota, Oleci Pereira; Ferreira, Adriano Menis; Guerra, Odanir Garcia; Rigotti, Marcelo Alessandro; Andrade, Denise de; Borges, Najla Moreira Amaral; Almeida, Margarete Teresa Gottardo de
2017-01-01
ABSTRACT Objective: to assess the correlation among the ATP-bioluminescence assay, visual inspection and microbiological culture in monitoring the efficiency of cleaning and disinfection (C&D) of high-touch clinical surfaces (HTCS) in a walk-in emergency care unit. Method: a prospective and comparative study was carried out from March to June 2015, in which five HTCS were sampled before and after C&D by means of the three methods. The HTCS were considered dirty when dust, waste, humidity an...
International Nuclear Information System (INIS)
Goshtasbi, K.; Ahmadi, M; Naeimi, Y.
2008-01-01
Locating the critical slip surface and the associated minimum factor of safety are two complementary parts of a slope stability analysis. A large number of computer programs exist to solve slope stability problems. Most of these programs, however, have used inefficient and unreliable search procedures to locate the global minimum factor of safety. This paper presents an efficient and reliable method to determine the global minimum factor of safety, coupled with a modified version of the Monte Carlo technique. Examples are presented to illustrate the reliability of the proposed method
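A pure random-search variant of the Monte Carlo technique can be sketched as follows. The two-parameter multimodal objective is a stand-in for a factor-of-safety function over trial slip surfaces, not the paper's modified procedure:

```python
import random

def monte_carlo_minimize(f, bounds, n_samples=10_000, seed=1):
    """Pure random search: sample candidate parameter vectors uniformly
    within the given bounds and keep the one with the lowest objective.
    In a slope-stability setting the parameters would describe a trial
    slip surface and f would return its factor of safety."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(n_samples):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Multimodal stand-in objective with two global minima at (+/-1, 0):
# a local-descent search started near the wrong basin can get trapped,
# which is why global sampling matters.
def objective(x):
    return (x[0] ** 2 - 1.0) ** 2 + x[1] ** 2

best_x, best_f = monte_carlo_minimize(objective, [(-2.0, 2.0), (-2.0, 2.0)])
print(best_x, best_f)
```

Unlike gradient-based descent, the sampling does not commit to a single basin of attraction, which is the property that makes Monte Carlo search attractive for locating the global minimum factor of safety.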
A Comfort-Aware Energy Efficient HVAC System Based on the Subspace Identification Method
Directory of Open Access Journals (Sweden)
O. Tsakiridis
2016-01-01
A proactive heating method is presented, aimed at reducing the energy consumption of an HVAC system while maintaining the thermal comfort of the occupants. The proposed technique fuses time predictions of the zones' temperatures, based on a deterministic subspace identification method, with zone occupancy predictions, based on a mobility model, in a decision scheme that is capable of regulating the balance between the total energy consumed and the total discomfort cost. Simulation results for various occupation-mobility models demonstrate the efficiency of the proposed technique.
A simple and efficient method for deriving neurospheres from bone marrow stromal cells
International Nuclear Information System (INIS)
Yang Qin; Mu Jun; Li Qi; Li Ao; Zeng Zhilei; Yang Jun; Zhang Xiaodong; Tang Jin; Xie Peng
2008-01-01
Bone marrow stromal cells (MSCs) can be differentiated into neuronal and glial-like cell types under appropriate experimental conditions. However, previously reported methods are complicated and involve the use of toxic reagents. Here, we present a simplified and nontoxic method for efficient conversion of rat MSCs into neurospheres that express the neuroectodermal marker nestin. These neurospheres can proliferate and differentiate into neuron, astrocyte, and oligodendrocyte phenotypes. We thus propose that MSCs are an emerging model cell for the treatment of a variety of neurological diseases.
Energy Technology Data Exchange (ETDEWEB)
Campos, F.F. [Universidade Federal de Minas Gerais, Belo Horizonte (Brazil); Birkett, N.R.C. [Oxford Univ. Computing Lab. (United Kingdom)
1996-12-31
The Controlled Cholesky factorisation has been shown to be a robust preconditioner for the Conjugate Gradient method. In this scheme the amount of fill-in is defined in terms of a parameter {eta}, the number of extra elements allowed per column. It is demonstrated how an optimum value of {eta} can be automatically determined when solving time-dependent PDEs using an implicit time-step method. A comparison between CCCG({eta}) and the standard ICCG when solving parabolic problems on general grids shows CCCG({eta}) to be an efficient general-purpose solver.
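A Controlled Cholesky factorisation is beyond a short sketch, but the preconditioned CG iteration it plugs into can be shown with a simple Jacobi (diagonal) preconditioner as a stand-in for the incomplete factorisation:

```python
import numpy as np

def preconditioned_cg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient.  M_inv applies the inverse of
    the preconditioner; here a Jacobi (diagonal) preconditioner stands
    in for an incomplete-Cholesky factorisation such as CCCG(eta)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p   # update search direction
        rz = rz_new
    return x

# SPD test system: 1-D discrete Laplacian (a typical parabolic stencil)
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
d = np.diag(A)
x = preconditioned_cg(A, b, lambda r: r / d)
print(np.linalg.norm(A @ x - b))
```

A better preconditioner (more fill-in, i.e. larger {eta} in the record's notation) reduces the iteration count at the cost of a more expensive factorisation, which is exactly the trade-off the paper automates.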
Increasing the computational efficiency of digital cross correlation by a vectorization method
Chang, Ching-Yuan; Ma, Chien-Ching
2017-08-01
This study presents a vectorization method for use in MATLAB programming aimed at increasing the computational efficiency of digital cross correlation in sound and images, resulting in a speedup of 6.387 and 36.044 times compared with performance values obtained from looped expression. This work bridges the gap between matrix operations and loop iteration, preserving flexibility and efficiency in program testing. This paper uses numerical simulation to verify the speedup of the proposed vectorization method as well as experiments to measure the quantitative transient displacement response subjected to dynamic impact loading. The experiment involved the use of a high speed camera as well as a fiber optic system to measure the transient displacement in a cantilever beam under impact from a steel ball. Experimental measurement data obtained from the two methods are in excellent agreement in both the time and frequency domain, with discrepancies of only 0.68%. Numerical and experiment results demonstrate the efficacy of the proposed vectorization method with regard to computational speed in signal processing and high precision in the correlation algorithm. We also present the source code with which to build MATLAB-executable functions on Windows as well as Linux platforms, and provide a series of examples to demonstrate the application of the proposed vectorization method.
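The vectorization idea carries over directly to NumPy (the paper itself uses MATLAB): all lags of the cross-correlation are computed in one call instead of a double loop. The array sizes below are arbitrary:

```python
import numpy as np

def xcorr_loop(x, y):
    """Looped cross-correlation over all full-overlap lags (the slow
    baseline the vectorized version is compared against)."""
    n = len(x)
    out = []
    for lag in range(-(n - 1), n):
        s = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                s += x[j] * y[i]
        out.append(s)
    return np.array(out)

def xcorr_vec(x, y):
    """Vectorized equivalent: numpy evaluates every lag in a single
    compiled call, eliminating the Python-level loops."""
    return np.correlate(x, y, mode="full")

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
y = rng.standard_normal(64)
print(np.allclose(xcorr_loop(x, y), xcorr_vec(x, y)))
```

Both routines return identical values for all 2n-1 lags; the speedup factors reported in the paper come purely from moving the loop iteration out of the interpreter.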
Development of efficient time-evolution method based on three-term recurrence relation
International Nuclear Information System (INIS)
Akama, Tomoko; Kobayashi, Osamu; Nanbu, Shinkoh
2015-01-01
The advantage of the real-time (RT) propagation method is that it directly solves the time-dependent Schrödinger equation, which describes frequency properties as well as all dynamics of a molecular system composed of electrons and nuclei in quantum physics and chemistry. Its applications have been limited by computational feasibility, as the evaluation of the time-evolution operator is computationally demanding. In this article, a new efficient time-evolution method based on the three-term recurrence relation (3TRR) is proposed to reduce the time-consuming numerical procedure. The basic formula of this approach was derived by introducing a transformation of the operator using the arcsine function. Since this operator transformation causes a transformation of time, we derived the relation between the original and transformed time. The formula was applied to assess the performance of the RT time-dependent Hartree-Fock (RT-TDHF) method and the time-dependent density functional theory. Compared to the commonly used fourth-order Runge-Kutta method, our new approach decreased the computational time of the RT-TDHF calculation by about a factor of four, showing the 3TRR formula to be an efficient time-evolution method for reducing computational cost.
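The paper's arcsine-transformed recurrence is specific to its derivation, but a classical example of a three-term-recurrence propagator is the Chebyshev expansion of exp(-iHt), sketched below under the assumption that H is real symmetric with spectrum inside [-1, 1] (the 2x2 test matrix is hypothetical):

```python
import math
import numpy as np

def bessel_j(k, t, m_max=30):
    """Bessel function J_k(t) by its power series (adequate for small t)."""
    return sum((-1) ** m / (math.factorial(m) * math.factorial(m + k))
               * (t / 2.0) ** (2 * m + k) for m in range(m_max))

def chebyshev_propagate(H, psi, t, n_terms=30):
    """Apply exp(-i t H) to psi via the Chebyshev expansion, whose
    polynomials are generated by the three-term recurrence
    T_{k+1} = 2 H T_k - T_{k-1}; only two work vectors are stored,
    which is the storage advantage of recurrence-based propagators."""
    t0 = psi.astype(complex)             # T_0(H) psi
    t1 = H @ t0                          # T_1(H) psi
    result = bessel_j(0, t) * t0 + 2 * (-1j) * bessel_j(1, t) * t1
    for k in range(2, n_terms):
        t2 = 2 * (H @ t1) - t0           # three-term recurrence
        result = result + 2 * (-1j) ** k * bessel_j(k, t) * t2
        t0, t1 = t1, t2
    return result

# Compare against exact propagation via eigendecomposition
H = np.array([[0.3, 0.2], [0.2, -0.4]])
psi = np.array([1.0, 0.0])
approx = chebyshev_propagate(H, psi, t=1.0)
w, V = np.linalg.eigh(H)
exact = V @ (np.exp(-1j * 1.0 * w) * (V.T @ psi))
print(np.abs(approx - exact).max())
```

Because the expansion coefficients decay super-exponentially, a modest number of recurrence steps reproduces the exact propagator to machine precision, which is what makes recurrence-based schemes competitive with Runge-Kutta time stepping.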
A chain-of-states acceleration method for the efficient location of minimum energy paths
Energy Technology Data Exchange (ETDEWEB)
Hernández, E. R., E-mail: Eduardo.Hernandez@csic.es; Herrero, C. P. [Instituto de Ciencia de Materiales de Madrid (ICMM–CSIC), Campus de Cantoblanco, 28049 Madrid (Spain); Soler, J. M. [Departamento de Física de la Materia Condensada and IFIMAC, Universidad Autónoma de Madrid, 28049 Madrid (Spain)
2015-11-14
We describe a robust and efficient chain-of-states method for computing Minimum Energy Paths (MEPs) associated with barrier-crossing events in polyatomic systems, which we call the acceleration method. The path is parametrized in terms of a continuous variable t ∈ [0, 1] that plays the role of time. In contrast to previous chain-of-states algorithms such as the nudged elastic band or string methods, where the positions of the states in the chain are taken as variational parameters in the search for the MEP, our strategy is to formulate the problem in terms of the second derivatives of the coordinates with respect to t, i.e., the state accelerations. We show this to result in a very simple and efficient method for determining the MEP. We describe the application of the method to a series of test cases, including two low-dimensional problems and the Stone-Wales transformation in C{sub 60}.
A robust and efficient stepwise regression method for building sparse polynomial chaos expansions
Energy Technology Data Exchange (ETDEWEB)
Abraham, Simon, E-mail: Simon.Abraham@ulb.ac.be [Vrije Universiteit Brussel (VUB), Department of Mechanical Engineering, Research Group Fluid Mechanics and Thermodynamics, Pleinlaan 2, 1050 Brussels (Belgium); Raisee, Mehrdad [School of Mechanical Engineering, College of Engineering, University of Tehran, P.O. Box: 11155-4563, Tehran (Iran, Islamic Republic of); Ghorbaniasl, Ghader; Contino, Francesco; Lacor, Chris [Vrije Universiteit Brussel (VUB), Department of Mechanical Engineering, Research Group Fluid Mechanics and Thermodynamics, Pleinlaan 2, 1050 Brussels (Belgium)
2017-03-01
Polynomial Chaos (PC) expansions are widely used in various engineering fields for quantifying uncertainties arising from uncertain parameters. The computational cost of classical PC solution schemes is unaffordable, as the number of deterministic simulations to be calculated grows dramatically with the number of stochastic dimensions. This considerably restricts the practical use of PC at the industrial level. A common approach to address such problems is to make use of sparse PC expansions. This paper presents a non-intrusive regression-based method for building sparse PC expansions. The most important PC contributions are detected sequentially through an automatic search procedure. The variable selection criterion is based on efficient tools from probabilistic methods. Two benchmark analytical functions are used to validate the proposed algorithm. The computational efficiency of the method is then illustrated by a more realistic CFD application, consisting of the non-deterministic flow around a transonic airfoil subject to geometrical uncertainties. To assess the performance of the developed methodology, a detailed comparison is made with the well-established LAR-based selection technique. The results show that the developed sparse regression technique is able to identify the most significant PC contributions describing the problem. Moreover, the most important stochastic features are captured at a reduced computational cost compared to the LAR method. The results also demonstrate the superior robustness of the method when the analyses are repeated using random experimental designs.
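A minimal greedy forward-selection loop conveys the flavour of such stepwise sparse regression. This simplified sketch scores candidates by correlation with the residual; the paper's selection criterion and PC basis are more elaborate:

```python
import numpy as np

def forward_stepwise(Phi, y, max_terms=5):
    """Greedy forward selection of basis columns: at each step, add the
    column most correlated with the current residual, then refit the
    active set by least squares.  Phi would hold evaluations of the PC
    basis at the experimental design points."""
    n, p = Phi.shape
    active = []
    residual = y.copy()
    for _ in range(max_terms):
        scores = [abs(Phi[:, j] @ residual) if j not in active else -1.0
                  for j in range(p)]
        active.append(int(np.argmax(scores)))
        coeffs, *_ = np.linalg.lstsq(Phi[:, active], y, rcond=None)
        residual = y - Phi[:, active] @ coeffs
    return active, coeffs

# Synthetic test: the response depends on only two of twenty basis columns
rng = np.random.default_rng(42)
Phi = rng.standard_normal((100, 20))
y = 3.0 * Phi[:, 3] - 2.0 * Phi[:, 7]
active, coeffs = forward_stepwise(Phi, y)
print(active)
```

The truly active columns are recovered within the first few selections, illustrating how a sequential search identifies a sparse representation without fitting the full basis.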
Tian, Fang-Bao; Luo, Haoxiang; Zhu, Luoding; Liao, James C.; Lu, Xi-Yun
2012-01-01
We have introduced a modified penalty approach into the flow-structure interaction solver that combines an immersed boundary method (IBM) and a multi-block lattice Boltzmann method (LBM) to model an incompressible flow and elastic boundaries with finite mass. The effect of the solid structure is handled by the IBM in which the stress exerted by the structure on the fluid is spread onto the collocated grid points near the boundary. The fluid motion is obtained by solving the discrete lattice Boltzmann equation. The inertial force of the thin solid structure is incorporated by connecting this structure through virtual springs to a ghost structure with the equivalent mass. This treatment ameliorates the numerical instability issue encountered in this type of problems. Thanks to the superior efficiency of the IBM and LBM, the overall method is extremely fast for a class of flow-structure interaction problems where details of flow patterns need to be resolved. Numerical examples, including those involving multiple solid bodies, are presented to verify the method and illustrate its efficiency. As an application of the present method, an elastic filament flapping in the Kármán gait and the entrainment regions near a cylinder is studied to model fish swimming in these regions. Significant drag reduction is found for the filament, and the result is consistent with the metabolic cost measured experimentally for the live fish. PMID:23564971
International Nuclear Information System (INIS)
Tolstooukhov, D.A.; Karkhov, A.N.
2001-01-01
At present, a transition is being made to market mechanisms of economic functioning, based on equilibrium price formation for the products of enterprises and their self-financing. Based on long-term forecasts of economic development, the electric power industry should not only ensure preservation of the accumulated potential but should also provide for modernization, reconstruction, service life extension of operating power facilities and construction of new ones. Under market conditions, nuclear power installations will have to prove their right to exist and develop in competition with other power technologies. In these conditions, the responsibility is growing for the correctness of investment decisions taken in the power industry and the methods on which they are based. This paper analyzes currently used methods for calculating the economic efficiency of investment projects. It emphasizes the limitations and drawbacks of the existing methodical approaches, and their inconsistency with a market economy and scientific and technological progress (STP). The said drawbacks lead to serious mistakes in evaluating the prospects for the development of nuclear power. The paper describes a methodical approach based on equilibrium price formation that does not have the said drawbacks and may be used as the basis for further work on the creation of improved methods for calculating the economic efficiency of investment projects in nuclear power. (authors)
A Robust and Efficient Numerical Method for RNA-Mediated Viral Dynamics
Directory of Open Access Journals (Sweden)
Vladimir Reinharz
2017-10-01
The multiscale model of hepatitis C virus (HCV) dynamics, which includes intracellular viral RNA (vRNA) replication, has been formulated in recent years in order to provide a new conceptual framework for understanding the mechanism of action of a variety of agents for the treatment of HCV. We present a robust and efficient numerical method, an implicit adaptive-stepsize Rosenbrock-type method, that is highly suited to solving this problem. We provide a Graphical User Interface that applies this method and is useful for simulating viral dynamics during treatment with anti-HCV agents that act against HCV on the molecular level.
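The defining feature of a Rosenbrock-type method is that it is linearly implicit: each step solves one linear system with the Jacobian instead of iterating Newton to convergence. A minimal one-stage sketch on a hypothetical stiff test problem:

```python
import numpy as np

def rosenbrock_euler_step(f, jac, y, h):
    """One step of the simplest Rosenbrock (linearly implicit) scheme:
    solve (I - h J) k = h f(y) and set y <- y + k.  No Newton iteration
    is needed, which is what makes Rosenbrock-type methods attractive
    for stiff systems such as multiscale viral-dynamics models."""
    n = len(y)
    J = jac(y)
    k = np.linalg.solve(np.eye(n) - h * J, h * f(y))
    return y + k

# Stiff linear test problem y' = -1000 (y - 1), y(0) = 0, so y -> 1.
# Explicit Euler with h = 0.01 would diverge here (amplification -9);
# the linearly implicit step remains stable.
f = lambda y: -1000.0 * (y - 1.0)
jac = lambda y: np.array([[-1000.0]])
y = np.array([0.0])
for _ in range(100):
    y = rosenbrock_euler_step(f, jac, y, h=0.01)
print(y[0])
```

Production Rosenbrock solvers (like the adaptive-stepsize method of the record) use several stages and an embedded error estimate, but each stage has this same solve-one-linear-system structure.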
[Efficiency of combined methods of hemorrhoid treatment using HAL-RAR and laser destruction].
Rodoman, G V; Kornev, L V; Shalaeva, T I; Malushenko, R N
2017-01-01
To develop a combined method of treatment of hemorrhoids with arterial ligation under Doppler control and laser destruction of internal and external hemorrhoids. The study included 100 patients with chronic hemorrhoids of stages II and III. The combined HAL-laser method was used in the study group, the HAL-RAR technique in control group 1, and closed hemorrhoidectomy with a linear stapler in control group 2. Comparative evaluation of the results in all groups was performed. The combined method overcomes the drawbacks of traditional surgical treatment and the limitations in eliminating the external component that are inherent to HAL-RAR. Moreover, it has a higher efficiency in treating hemorrhoids of stages II-III compared with HAL-RAR, and is equally safe and well tolerated by patients. This method does not increase the risk of recurrence, and reduces the incidence of complications and the time of disability.
An efficient method for computing the absorption of solar radiation by water vapor
Chou, M.-D.; Arking, A.
1981-01-01
Chou and Arking (1980) have developed a fast but accurate method for computing the IR cooling rate due to water vapor. Using a similar approach, the present investigation develops a method for computing the heating rates due to the absorption of solar radiation by water vapor in the wavelength range from 4 to 8.3 micrometers. The validity of the method is verified by comparison with line-by-line calculations. An outline is provided of an efficient method for transmittance and flux computations based upon actual line parameters. High speed is achieved by employing a one-parameter scaling approximation to convert an inhomogeneous path into an equivalent homogeneous path at suitably chosen reference conditions.
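The one-parameter scaling approximation can be sketched as follows; the exponent, layer amounts and pressures are illustrative values, not those of the paper:

```python
def scaled_absorber_amount(layer_amounts, pressures, p_ref, n=0.9):
    """One-parameter pressure scaling: the inhomogeneous atmospheric
    path is replaced by an equivalent homogeneous path at reference
    pressure p_ref by weighting each layer's absorber amount with
    (p / p_ref)**n.  The exponent n = 0.9 is a hypothetical choice."""
    return sum(u * (p / p_ref) ** n
               for u, p in zip(layer_amounts, pressures))

# Three model layers: absorber amounts (g/cm^2) and mean pressures (hPa)
u_layers = [0.05, 0.3, 1.2]
p_layers = [250.0, 500.0, 900.0]
u_eff = scaled_absorber_amount(u_layers, p_layers, p_ref=1013.0)
print(u_eff)
```

The transmittance of the whole path is then evaluated once from a homogeneous-path table at (u_eff, p_ref), instead of integrating line-by-line through every layer, which is where the speedup comes from.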
Energy Technology Data Exchange (ETDEWEB)
Tuvshinjargal, Doopalam; Lee, Deok Jin [Kunsan National University, Gunsan (Korea, Republic of)
2015-06-15
In this paper, an efficient dynamic reactive motion planning method for an autonomous vehicle in a dynamic environment is proposed. The purpose of the proposed method is to improve the robustness of autonomous robot motion planning capabilities within dynamic, uncertain environments by integrating a virtual plane-based reactive motion planning technique with a sensor fusion-based obstacle detection approach. The dynamic reactive motion planning method assumes a local observer in the virtual plane, which allows the effective transformation of complex dynamic planning problems into simple stationary ones, providing the speed and orientation information between the robot and obstacles. In addition, the sensor fusion-based obstacle detection technique allows the pose estimation of moving obstacles using a Kinect sensor and sonar sensors, thus improving the accuracy and robustness of the reactive motion planning approach. The performance of the proposed method was demonstrated through not only simulation studies but also field experiments using multiple moving obstacles in hostile dynamic environments.
Efficient methods for solving discrete topology design problems in the PLATO-N project
DEFF Research Database (Denmark)
Canh, Nam Nguyen; Stolpe, Mathias
This paper considers the general multiple-load structural topology design problems in the framework of the PLATO-N project. The problems involve a large number of discrete design variables and were modeled as a non-convex mixed 0–1 program. For the class of problems considered, a global optimization method based on the branch-and-cut concept was developed and implemented. In the method a large number of continuous relaxations were solved. We also present an algorithm for generating cuts to strengthen the quality of the relaxations. Several heuristics were also investigated to obtain efficient algorithms. The branch-and-cut method is used to solve benchmark examples which can be used to validate other methods and heuristics.
A fast and efficient method for sequential cone-beam tomography
International Nuclear Information System (INIS)
Koehler, Th.; Proksa, R.; Grass, M.
2001-01-01
Sequential cone-beam tomography is a method that uses data from two or more parallel circular trajectories of a cone-beam scanner to reconstruct the object function. We propose a condition for the data acquisition that ensures that all object points between two successive circles are irradiated over an angular span of the x-ray source position of exactly 360 deg. in total, as seen along the rotation axis. A fast and efficient approximate reconstruction method for the proposed acquisition is presented which uses data from exactly 360 deg. for every object point. It is based on the Tent-FDK method, which was recently developed for single-circle cone-beam CT. The measurement geometry does not provide sufficient data for exact reconstruction, but it is shown that the proposed reconstruction method provides satisfactory image quality for small cone angles.
International Nuclear Information System (INIS)
Ishikawa, H.; Nakano, S.; Yuuki, R.; Chung, N.Y.
1991-01-01
In the virtual crack extension method, the stress intensity factor, K, is obtained from the converged value of the energy release rate, which is computed from the difference of the finite element stiffness matrices when small crack extensions are taken. Instead of the numerical difference of the finite element stiffness, a new method that uses a direct derivative of the finite element stiffness matrix with respect to crack length is proposed. By the present method, the results of some example problems, such as uniform tension problems of a square plate with a center crack and a rectangular plate with an internal slant crack, are obtained with high accuracy and good efficiency. Compared with analytical results, the present values of the stress intensity factors of the problems are obtained with an error of less than 0.6%. This demonstrates the usefulness of the present method. A personal computer program for the analysis has been developed.
An efficient implicit direct forcing immersed boundary method for incompressible flows
International Nuclear Information System (INIS)
Cai, S-G; Ouahsine, A; Smaoui, H; Favier, J; Hoarau, Y
2015-01-01
A novel efficient implicit direct forcing immersed boundary method for incompressible flows with complex boundaries is presented. In the previous work [1], the calculation is performed on the Cartesian grid regardless of the immersed object, with a fictitious force evaluated on the Lagrangian points to mimic the presence of the physical boundaries. However, the explicit direct forcing method [1] fails to accurately impose the no-slip boundary condition on the immersed interface. In the present work, the calculation is based on an implicit treatment of the artificial force within an efficient system iteration. The accuracy is also improved by solving the Navier-Stokes equation with the rotational incremental pressure-correction projection method of Guermond and Shen [2]. Numerical simulations performed with the proposed method are in good agreement with those in the literature
O'Connor, Sydney; Ayres, Alison; Cortellini, Lynelle; Rosand, Jonathan; Rosenthal, Eric; Kimberly, W Taylor
2012-08-01
Reliable and efficient data repositories are essential for the advancement of research in Neurocritical care. Various factors, such as the large volume of patients treated within the neuro ICU, their differing length and complexity of hospital stay, and the substantial amount of desired information can complicate the process of data collection. We adapted the tools of process improvement to the data collection and database design of a research repository for a Neuroscience intensive care unit. By the Shewhart-Deming method, we implemented an iterative approach to improve the process of data collection for each element. After an initial design phase, we re-evaluated all data fields that were challenging or time-consuming to collect. We then applied root-cause analysis to optimize the accuracy and ease of collection, and to determine the most efficient manner of collecting the maximal amount of data. During a 6-month period, we iteratively analyzed the process of data collection for various data elements. For example, the pre-admission medications were found to contain numerous inaccuracies after comparison with a gold standard (sensitivity 71% and specificity 94%). Also, our first method of tracking patient admissions and discharges contained higher than expected errors (sensitivity 94% and specificity 93%). In addition to increasing accuracy, we focused on improving efficiency. Through repeated incremental improvements, we reduced the number of subject records that required daily monitoring from 40 to 6 per day, and decreased daily effort from 4.5 to 1.5 h/day. By applying process improvement methods to the design of a Neuroscience ICU data repository, we achieved a threefold improvement in efficiency and increased accuracy. Although individual barriers to data collection will vary from institution to institution, a focus on process improvement is critical to overcoming these barriers.
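The accuracy audit described above rests on standard sensitivity and specificity calculations against a gold standard. A minimal sketch, with hypothetical confusion-matrix counts chosen only to reproduce the kind of figures quoted in the abstract:

```python
# Sketch of a data-quality audit: comparing a collected field against a
# gold standard and reporting sensitivity/specificity. The counts are
# hypothetical illustrations, not the study's actual audit data.

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true positives among actual positives
    specificity = tn / (tn + fp)   # true negatives among actual negatives
    return sensitivity, specificity

# Hypothetical audit of pre-admission medication records:
sens, spec = sensitivity_specificity(tp=71, fn=29, tn=94, fp=6)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```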
Radiation use efficiency of rice under different planting methods and environmental conditions
International Nuclear Information System (INIS)
Apakupakul, R.
1995-01-01
Radiation use efficiency is an important parameter which has often been used in many crop growth models to estimate total biomass and yield. The relationships between above-ground biomass and cumulative absorbed photosynthetically active radiation (PARa, MJ/square m) of rice were examined both on-farm and on-station in Phatthalung. Planting methods were wet-sown and transplanted rice, for Suphanburi 90 in the 1993 dry season and Chieng in the 1993-94 wet season. Solar radiation of the two growing seasons was calculated from climatic data. The objectives of this experiment were (1) to establish the pattern of the relationship between above-ground biomass and cumulative absorbed PAR of rice cultivars grown in South Thailand, (2) to compare the radiation use efficiency of rice cultivars under different planting methods and (3) to obtain primary data for rice growth modelling in the southern climate. Results showed that, during the first growing period up to stem elongation in both cultivars, above-ground biomass and leaf area index were higher in wet-sown than in transplanted rice. The relationship between above-ground biomass accumulation through the growing period and cumulative absorbed PAR was a positive linear regression with R² > 0.85. The erect leaf of Suphanburi 90 had a higher radiation use efficiency (RUE, g/MJ) than the non-erect leaf of Chieng. Weed infestation in wet-sown rice of both cultivars affected the RUE, which was significantly lower than in transplanted rice. The RUE of wet-sown and transplanted rice was 2.77 and 3.20 g/MJ, respectively, for Suphanburi 90, and 2.13 and 2.67 g/MJ for Chieng. These results suggest that, when dealing with radiation use efficiency in rice growth modelling, the differences between cultivars and planting methods should be taken into consideration
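RUE of this kind is conventionally estimated as the slope of a linear regression of above-ground biomass on cumulative absorbed PAR. A minimal sketch with invented data points (not the Phatthalung measurements):

```python
# RUE estimation sketch: slope of biomass vs. cumulative absorbed PAR.
# The data are illustrative only, constructed to be roughly linear.
import numpy as np

par_cum = np.array([50., 120., 210., 320., 450.])     # cumulative PARa, MJ/m^2
biomass = np.array([160., 380., 670., 1020., 1440.])  # above-ground biomass, g/m^2

slope, intercept = np.polyfit(par_cum, biomass, 1)    # RUE = slope, in g/MJ
r2 = np.corrcoef(par_cum, biomass)[0, 1] ** 2
print(f"RUE = {slope:.2f} g/MJ, R^2 = {r2:.3f}")
```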
Energy efficient SO2 removal from flue gases using the method Wellman-Lord
International Nuclear Information System (INIS)
Dzhonova-Atanasova, D.; Razkazova-Velkova, E.; Ljutzkanov, L.; Kolev, N.; Kolev, D.
2013-01-01
Full text: Investigations on the development of an energy efficient technology for SO2 removal from flue gases of combustion systems using the Wellman-Lord method are presented. The method is characterized by absorption of sulfur dioxide with sodium sulfite solution, which reacts to form sodium bisulfite. The absorber is a packed column with multiple stages. After evaporation of the solution, SO2 and sodium sulfite are obtained. The latter is dissolved in water from condensation of the steam carrying SO2 from the evaporator. The regenerated solution returns to the absorber. The SO2 removed from the flue gases is obtained as a pure product for use in chemical, food or wine production. The data discussed in the literature sources on flue gas desulfurization demonstrate the predominance of the methods with lime or limestone as absorbent, due to the higher capital investments associated with the Wellman-Lord method. A technological and economical evaluation of this regenerative method is presented in comparison to the non-regenerative gypsum method, using data from the existing sources and our own experience from the development of an innovative gypsum technology. Three solutions are discussed for significant enhancement of the method's efficiency, on the basis of a considerable increase of the SO2 concentration in the saturated absorbent. The improved method uses about 40% less heat for absorbent regeneration in comparison to the existing applications of the Wellman-Lord method, and additionally makes it possible to regenerate 95% of the consumed heat by heating water streams to about 90°C. Moreover, the incorporation in the installation of our system with contact economizers of second generation, already in industrial application, enables utilization of the waste heat of the flue gases for district heating. The employment of this system also leads to a significant decrease of the NOx emissions. Key words: SO2 removal, flue gases, absorption
Methods for fitting of efficiency curves obtained by means of HPGe gamma rays spectrometers
International Nuclear Information System (INIS)
Cardoso, Vanderlei
2002-01-01
The present work describes a few methodologies developed for fitting efficiency curves obtained by means of a HPGe gamma-ray spectrometer. The interpolated values were determined by simple polynomial fitting, and by polynomial fitting of the ratio between the experimental peak efficiency and the total efficiency calculated by the Monte Carlo technique, as a function of gamma-ray energy. Moreover, non-linear fitting has been performed using a segmented polynomial function and applying the Gauss-Marquardt method. For obtaining the peak area, different methodologies were developed in order to estimate the background area under the peak. This information was obtained by numerical integration or by using analytical functions associated with the background. One non-calibrated radioactive source has been included in the efficiency curve in order to provide additional calibration points. As a by-product, it was possible to determine the activity of this non-calibrated source. For all fittings developed in the present work the covariance matrix methodology was used, which is an essential procedure in order to give a complete description of the partial uncertainties involved. (author)
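A common parameterisation of HPGe efficiency curves is a polynomial in log(efficiency) versus log(energy), fitted with weights and a coefficient covariance matrix for uncertainty propagation. The sketch below follows that generic recipe; the calibration points and 3% uncertainties are invented, and this is not the author's specific segmented-polynomial procedure.

```python
# Weighted polynomial fit of an HPGe efficiency curve in log-log space,
# with the coefficient covariance matrix returned for later uncertainty
# propagation. All data values are illustrative assumptions.
import numpy as np

energy = np.array([122., 245., 344., 662., 779., 964., 1112., 1408.])  # keV
eff    = np.array([0.012, 0.0085, 0.0068, 0.0041, 0.0036, 0.0030, 0.0027, 0.0022])
sigma  = 0.03 * eff                    # assumed 3% relative uncertainties

x, y = np.log(energy), np.log(eff)
# weights 1/sigma_y where sigma_y = sigma/eff (uncertainty of ln eff);
# cov=True also returns the covariance matrix of the coefficients
coeffs, cov = np.polyfit(x, y, deg=2, w=eff / sigma, cov=True)

def efficiency(e_kev):
    """Interpolated peak efficiency at an arbitrary gamma-ray energy."""
    return np.exp(np.polyval(coeffs, np.log(e_kev)))

print(f"eff(500 keV) = {efficiency(500.0):.4f}")
```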
Rezaei, Satar; Zandian, Hamed; Baniasadi, Akram; Moghadam, Telma Zahirian; Delavari, Somayeh; Delavari, Sajad
2016-02-01
Hospitals are the most expensive health services providers in the world. Therefore, the evaluation of their performance can be used to reduce costs. The aim of this study was to determine the efficiency of the hospitals at the Kurdistan University of Medical Sciences using stochastic frontier analysis (SFA). This was a cross-sectional and retrospective study that assessed the performance of Kurdistan teaching hospitals (n = 12) between 2007 and 2013. The SFA method was used to achieve this aim. The numbers of active beds, nurses, physicians, and other staff members were considered as input variables, while inpatient admissions were considered as the output. The data were analyzed using Frontier 4.1 software. The mean technical efficiency of the hospitals we studied was 0.67. The results of the Cobb-Douglas production function showed that the maximum elasticity was related to the active beds and that the elasticity of nurses was negative. Also, the return to scale was increasing. The results of this study indicated that the performances of the hospitals were not appropriate in terms of technical efficiency. In addition, there was room to enhance the output of the hospitals, compared with the most efficient hospitals studied, by about 33%. It is suggested that the effect of various factors, such as the quality of health care and the patients' satisfaction, be considered in future studies to assess hospitals' performances.
2010-01-01
... efficiency of commercial packaged boilers. 431.86 Section 431.86 Energy DEPARTMENT OF ENERGY ENERGY... Boilers Test Procedures § 431.86 Uniform test method for the measurement of energy efficiency of... packaged boiler equipment classes. (B) On or after March 2, 2012, conduct the thermal efficiency test as...
A simple and efficient total genomic DNA extraction method for individual zooplankton.
Fazhan, Hanafiah; Waiho, Khor; Shahreza, Md Sheriff
2016-01-01
Molecular approaches are widely applied in species identification and taxonomic studies of minute zooplankton. One of the most intensively studied zooplankton groups nowadays is Subclass Copepoda. Accurate species identification of all life stages of the generally small-sized copepods through molecular analysis is important, especially in taxonomic and systematic assessment of harpacticoid copepod populations and for understanding their dynamics within the marine community. However, total genomic DNA (TGDNA) extraction from individual harpacticoid copepods can be problematic due to their small size and epibenthic behavior. In this research, six TGDNA extraction methods applied to individual harpacticoid copepods were compared. A new simple, feasible, efficient and consistent TGDNA extraction method was designed and compared with a commercial kit and with modified available TGDNA extraction methods. The newly described TGDNA extraction method, the "Incubation in PCR buffer" method, yielded good and consistent results based on its high rate of successful PCR amplification (82%) compared to the other methods. Given its consistency and economy, the "Incubation in PCR buffer" method is highly recommended for TGDNA extraction from other minute zooplankton species.
Han, Guomin; Shao, Qian; Li, Cuiping; Zhao, Kai; Jiang, Li; Fan, Jun; Jiang, Haiyang; Tao, Fang
2018-05-01
Aspergillus flavus often invades many important crops and produces harmful aflatoxins both at preharvest and during storage stages. The regulation mechanism of aflatoxin biosynthesis in this fungus has not been well explored, mainly due to the lack of an efficient transformation method for constructing a genome-wide gene mutant library. This challenge was resolved in this study, where a reliable and efficient Agrobacterium tumefaciens-mediated transformation (ATMT) protocol for A. flavus NRRL 3357 was established. The results showed that removal of multinucleate conidia, to collect a homogenous sample of uninucleate conidia for use as the transformation material, is the key step in this procedure. A. tumefaciens strain AGL-1 harboring the ble gene for zeocin resistance under the control of the gpdA promoter from A. nidulans is suitable for genetic transformation of this fungus. We successfully generated A. flavus transformants with an efficiency of ∼60 positive transformants per 10^6 conidia using our protocol. A small-scale insertional mutant library (∼1,000 mutants) was constructed using this method, and several of the resulting mutants lacked both conidia production and aflatoxin biosynthesis capacity. Southern blotting analysis demonstrated that the majority of the transformants contained a single T-DNA insert in the genome. To the best of our knowledge, this is the first report of genetic transformation of A. flavus via ATMT, and our protocol provides an effective tool for construction of genome-wide gene mutant libraries for functional analysis of important genes in A. flavus.
Using the fuzzy linear regression method to benchmark the energy efficiency of commercial buildings
International Nuclear Information System (INIS)
Chung, William
2012-01-01
Highlights: ► Fuzzy linear regression method is used for developing benchmarking systems. ► The systems can be used to benchmark energy efficiency of commercial buildings. ► The resulting benchmarking model can be used by public users. ► The resulting benchmarking model can capture the fuzzy nature of input–output data. -- Abstract: Benchmarking systems from a sample of reference buildings need to be developed to conduct benchmarking processes for the energy efficiency of commercial buildings. However, not all benchmarking systems can be adopted by public users (i.e., other non-reference building owners) because of the different methods in developing such systems. An approach for benchmarking the energy efficiency of commercial buildings using statistical regression analysis to normalize other factors, such as management performance, was developed in a previous work. However, the field data given by experts can be regarded as a distribution of possibility. Thus, the previous work may not be adequate to handle such fuzzy input–output data. Consequently, a number of fuzzy structures cannot be fully captured by statistical regression analysis. This present paper proposes the use of fuzzy linear regression analysis to develop a benchmarking process, the resulting model of which can be used by public users. An illustrative example is given as well.
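A standard formulation of fuzzy linear regression of the kind named above is Tanaka's possibilistic model, solvable as a small linear program. The sketch below uses invented toy data and generic variables; it does not reproduce the paper's benchmarking model or its building sample.

```python
# Tanaka-style possibilistic (fuzzy) linear regression via LP:
# coefficients are symmetric triangular numbers A_j = (a_j, c_j), c_j >= 0;
# minimise the total spread subject to every observation lying inside
# the fuzzy output band. Data are an invented illustration.
import numpy as np
from scipy.optimize import linprog

X = np.array([[1., 10.], [1., 14.], [1., 18.], [1., 25.], [1., 30.]])  # [intercept, regressor]
y = np.array([52., 63., 76., 100., 118.])

n, p = X.shape
absX = np.abs(X)
# variables: [a_1..a_p, c_1..c_p]; objective = total spread over the sample
obj = np.concatenate([np.zeros(p), absX.sum(axis=0)])
# inclusion: a.x_i - c.|x_i| <= y_i  and  y_i <= a.x_i + c.|x_i|
A_ub = np.vstack([np.hstack([X, -absX]), np.hstack([-X, -absX])])
b_ub = np.concatenate([y, -y])
bounds = [(None, None)] * p + [(0, None)] * p
res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
a, c = res.x[:p], res.x[p:]
print("centres:", a.round(3), "spreads:", c.round(3))
```

The resulting band captures the "possibility distribution" nature of expert-given data that ordinary least squares cannot express.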
An efficient and general numerical method to compute steady uniform vortices
Luzzatto-Fegiz, Paolo; Williamson, Charles H. K.
2011-07-01
Steady uniform vortices are widely used to represent high Reynolds number flows, yet their efficient computation still presents some challenges. Existing Newton iteration methods become inefficient as the vortices develop fine-scale features; in addition, these methods cannot, in general, find solutions with specified Casimir invariants. On the other hand, available relaxation approaches are computationally inexpensive, but can fail to converge to a solution. In this paper, we overcome these limitations by introducing a new discretization, based on an inverse-velocity map, which radically increases the efficiency of Newton iteration methods. In addition, we introduce a procedure to prescribe Casimirs and remove the degeneracies in the steady vorticity equation, thus ensuring convergence for general vortex configurations. We illustrate our methodology by considering several unbounded flows involving one or two vortices. Our method enables the computation, for the first time, of steady vortices that do not exhibit any geometric symmetry. In addition, we discover that, as the limiting vortex state for each flow is approached, each family of solutions traces a clockwise spiral in a bifurcation plot consisting of a velocity-impulse diagram. By the recently introduced "IVI diagram" stability approach [Phys. Rev. Lett. 104 (2010) 044504], each turn of this spiral is associated with a loss of stability for the steady flows. Such spiral structure is suggested to be a universal feature of steady, uniform-vorticity flows.
An Efficient Forensic Method for Copy–move Forgery Detection based on DWT-FWHT
Directory of Open Access Journals (Sweden)
B. Yang
2013-12-01
Full Text Available With the increased availability of sophisticated image-processing software and the widespread use of the Internet, digital images are easy to acquire and manipulate, and the authenticity of received images is becoming more and more important. Copy-move forgery is one of the most common forgery methods. When creating a copy-move forgery, it is often necessary to add or remove important features from an image. To carry out such forensic analysis, various technological instruments have been developed in the literature. However, most of them are time-consuming. In this paper, a more efficient method is proposed. First, the image size is reduced by the Discrete Wavelet Transform (DWT). Second, the image is divided into overlapping blocks of equal size, and the feature of each block is extracted by the fast Walsh-Hadamard Transform (FWHT). Duplicated regions are then detected by lexicographically sorting all features of the image blocks. To make the range matching more efficient, a multi-hop jump (MHJ) algorithm is used to skip some of the "unnecessary testing blocks" (UTBs). Experimental results demonstrate that the proposed method not only detects copy-move forgery accurately but also reduces the processing time greatly compared with other methods.
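The core block-matching stage can be sketched compactly: extract a Walsh-Hadamard feature per overlapping block, sort the features lexicographically, and report adjacent duplicates. This omits the paper's DWT size reduction and MHJ pruning, and matches only exact duplicates; it is a simplified illustration, not the authors' full algorithm.

```python
# Simplified copy-move detection: WHT features of overlapping blocks,
# lexicographic sorting, adjacent-duplicate reporting. Illustrative only.
import numpy as np

def fwht(a):
    """Fast Walsh-Hadamard transform of a length-2^k vector (returns a copy)."""
    a = a.astype(float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def find_duplicates(img, B=4):
    """Return pairs of top-left corners of identical BxB blocks."""
    feats = []
    for r in range(img.shape[0] - B + 1):
        for col in range(img.shape[1] - B + 1):
            block = img[r:r + B, col:col + B].ravel()
            feats.append((tuple(fwht(block)), (r, col)))
    feats.sort()  # lexicographic sort brings duplicate features together
    return [(p1, p2) for (f1, p1), (f2, p2) in zip(feats, feats[1:]) if f1 == f2]

# Toy image with a copied 4x4 patch to simulate a copy-move forgery:
rng = np.random.default_rng(0)
img = rng.integers(0, 256, (16, 16))
img[10:14, 10:14] = img[2:6, 2:6]
print(find_duplicates(img))
```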
International Nuclear Information System (INIS)
Bacchim Neto, F.A.; Alves, A.F.F.; Rosa, M.E.D.; Pina, D.R.
2017-01-01
Interventional Radiology (IR) is the area of medicine that involves the largest occupational exposures. The dose values to which interventionists are exposed are difficult to standardize. The objective of this study is to perform a complete evaluation of occupational exposures and to determine the efficiency of different personal dosimetry methods used in IR. We evaluated the efficiencies of 6 different personal dosimetry methodologies used internationally to estimate the effective dose received by interventional professionals and, based on this analysis, determined the characteristics of each methodology. One of the methods of personal dosimetry recommended by Brazilian legislation was the most conservative, overestimating the effective dose of professionals by, on average, up to 200%, reaching maximum values close to 400%. The most accurate method was that used in North America. This method did not overestimate the effective dose of the professionals by more than a few percent, and its standard deviation relative to the reference effective dose was the lowest. Based on these results, the choice of methodologies employing at least two dosimeters, one under and one above the protective apron, is recommended. In addition, in some situations where the dose to the hands may be high, additional dosimeters for this region are also recommended
International Nuclear Information System (INIS)
Cuce, Erdem; Cuce, Pinar Mert
2015-01-01
Highlights: • Homotopy perturbation method has been applied to porous fins. • Dimensionless efficiency and effectiveness expressions have been developed for the first time. • Effects of porous and convection parameters on thermal analysis have been clarified. • Ratio of porous fin to solid fin heat transfer rate has been given for various cases. • Reliability and practicality of homotopy perturbation method has been illustrated. - Abstract: In our previous works, the thermal performance of straight fins with both constant and temperature-dependent thermal conductivity was investigated in detail, and dimensionless analytical expressions of fin efficiency and fin effectiveness were developed for the first time in the literature via the homotopy perturbation method. In this study, the previous works have been extended to porous fins. Governing equations have been formulated using Darcy's model. The dimensionless temperature distribution along the length of the porous fin has been determined as a function of the porosity and convection parameters. The ratio of porous fin to solid fin heat transfer rate has also been evaluated as a function of the thermo-geometric fin parameter. The results have been compared with those of the finite difference method for a specific case, and an excellent agreement has been observed. The expressions developed are beneficial for thermal engineers for preliminary assessment of thermophysical systems, instead of spending time on heat conduction problems governed by strongly nonlinear differential equations
International Nuclear Information System (INIS)
Mirani, A.A.; Dahri, Z.H.
2011-01-01
This study was conducted at PARC's research station Kala Shah Kaku, Lahore, in order to calculate the water productivity and economic efficiency of the wheat crop under different sowing methods in a combine-harvested paddy field. The sowing methods were direct drilling with the FMI Seeder, zero tillage and the conventional method. Data were collected during 2008-09. Wheat yield was 2750 kg/ha, 2665 kg/ha and 2610 kg/ha for direct drilling with the FMI Seeder, zero tillage and the conventional method, respectively. Direct drilling in heavy residue gave 5.4% more yield than the conventional method and 3.2% more yield than zero tillage. Zero tillage ensured 2.1% more yield than the conventional method. The net water applied was 323, 354 and 380 mm for direct drilling with the FMI Seeder, zero tillage and the conventional method respectively, against the potential crop evapotranspiration of 383 mm. This indicates that direct drilling of the wheat crop into heavy rice stubbles saves 15% of irrigation water as compared to the conventional method and 8.8% over zero tillage. The zero tillage method saves 6.8% of irrigation water over the conventional method. The water productivity was found to be 0.851 kg/m³, 0.753 kg/m³ and 0.687 kg/m³ for direct drilling with the FMI Seeder, zero tillage and the conventional method respectively. This indicates that direct drilling ensures a 23.9% increase in water productivity over the conventional method and 13.01% over zero tillage. Zero tillage gave 9.6% more water productivity than the conventional method. The costs of production for the three sowing methods were Rs. 39123/ha, Rs. 43737/ha and Rs. 53047/ha for direct drilling, zero tillage and the conventional method respectively. This indicates an overall saving of Rs. 13924/ha (26.2%) by the direct drilling method as compared to the conventional method and Rs. 4613/ha (10.5%) over zero tillage. Zero tillage saves Rs. 9319/ha (17.6%) over the conventional method. Thus, the resource
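The water-productivity figures above follow from a simple conversion: yield (kg/ha) divided by net water applied, with 1 mm of water over 1 ha equal to 10 m³. A short check using the study's own numbers:

```python
# Reproducing the water-productivity arithmetic: yield per unit of water.
def water_productivity(yield_kg_ha, water_mm):
    volume_m3 = water_mm * 10.0        # 1 mm over 1 ha = 10 m^3 of water
    return yield_kg_ha / volume_m3     # kg of grain per m^3 of water

for method, y, w in [("direct drilling", 2750, 323),
                     ("zero tillage",    2665, 354),
                     ("conventional",    2610, 380)]:
    print(f"{method:16s} {water_productivity(y, w):.3f} kg/m^3")
# -> 0.851, 0.753 and 0.687 kg/m^3, matching the reported values
```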
Hussain, S.; Brennan, C.
2017-07-01
This paper presents an efficient ray tracing algorithm for propagation prediction in urban environments. The work presented in this paper builds upon previous work in which the maximum coverage area where rays can propagate after interaction with a wall or vertical edge is described by a lit polygon. The shadow regions formed by buildings within the lit polygon are described by shadow polygons. In this paper, the lit polygons of images are mapped to a coarse grid superimposed over the coverage area. This mapping reduces the active image tree significantly for a given receiver point to accelerate the ray finding process. The algorithm also presents an efficient method of quickly determining the valid ray segments for a mobile receiver moving along a linear trajectory. The validation results show considerable computation time reduction with good agreement between the simulated and measured data for propagation prediction in large urban environments.
SCM: A method to improve network service layout efficiency with network evolution
Zhao, Qi; Zhang, Chuanhao
2017-01-01
Network services are an important component of the Internet, which are used to expand network functions for third-party developers. Network function virtualization (NFV) can improve the speed and flexibility of network service deployment. However, with the evolution of the network, network service layout may become inefficient. Regarding this problem, this paper proposes a service chain migration (SCM) method with the framework of “software defined network + network function virtualization” (SDN+NFV), which migrates service chains to adapt to network evolution and improves the efficiency of the network service layout. SCM is modeled as an integer linear programming problem and resolved via particle swarm optimization. An SCM prototype system is designed based on an SDN controller. Experiments demonstrate that SCM could reduce the network traffic cost and energy consumption efficiently. PMID:29267299
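The SCM record above resolves an integer linear program via particle swarm optimisation. A minimal, generic PSO sketch is given below; the cost function is a stand-in (the sphere function) rather than the paper's migration-cost objective, and all parameter values are conventional assumptions.

```python
# Minimal particle swarm optimisation sketch (continuous, unconstrained).
# Inertia w and acceleration coefficients c1, c2 are standard choices.
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))        # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest, pbest_cost = x.copy(), np.apply_along_axis(cost, 1, x)
    g = pbest[pbest_cost.argmin()].copy()             # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.apply_along_axis(cost, 1, x)
        better = fx < pbest_cost
        pbest[better], pbest_cost[better] = x[better], fx[better]
        g = pbest[pbest_cost.argmin()].copy()
    return g, pbest_cost.min()

best, best_cost = pso(lambda z: np.sum(z ** 2), dim=3)
print(best.round(4), f"cost = {best_cost:.2e}")
```

Applying PSO to an ILP, as the paper does, additionally requires rounding or repairing particle positions to integer-feasible placements; that problem-specific step is not shown here.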
Ivanov, Mikhail V; Babikov, Dmitri
2012-05-14
An efficient method is proposed for computing the thermal rate constant of a recombination reaction that proceeds according to the energy transfer mechanism, in which an energized molecule is formed from reactants first and is stabilized later by collision with a quencher. The mixed quantum-classical theory for the collisional energy transfer and the ro-vibrational energy flow [M. Ivanov and D. Babikov, J. Chem. Phys. 134, 144107 (2011)] is employed to treat the dynamics of the molecule + quencher collision. Efficiency is achieved by sampling simultaneously (i) the thermal collision energy, (ii) the impact parameter, and (iii) the incident direction of the quencher, as well as (iv) the rotational state of the energized molecule. This approach is applied to calculate the third-order rate constant of the recombination reaction that forms the (16)O(18)O(16)O isotopomer of ozone. A comparison of the predicted rate vs. the experimental result is presented.
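The simultaneous sampling named above can be sketched with standard distributions: a flux-weighted Boltzmann collision energy, an impact parameter with P(b) ∝ b, and an isotropic incident direction. The temperature, cutoff b_max and distribution choices below are conventional assumptions for illustration; the mixed quantum-classical dynamics themselves are not reproduced.

```python
# Sketch of simultaneous Monte Carlo sampling of collision parameters.
# All numerical values are assumed, not the paper's.
import numpy as np

rng = np.random.default_rng(1)
kT = 0.025        # eV, roughly room temperature
b_max = 10.0      # assumed maximum impact parameter (arbitrary units)

n = 100_000
energy = rng.gamma(shape=2.0, scale=kT, size=n)   # P(E) ~ E exp(-E/kT), mean 2kT
b = b_max * np.sqrt(rng.random(n))                # P(b) ~ b on [0, b_max]
phi = 2 * np.pi * rng.random(n)                   # isotropic incident direction:
cos_theta = 1 - 2 * rng.random(n)                 # uniform in (phi, cos theta)

print(f"<E> = {energy.mean():.4f} eV (expected 2kT = {2 * kT:.4f})")
print(f"<b> = {b.mean():.3f} (expected 2/3 b_max = {2 * b_max / 3:.3f})")
```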
Measurement of productive efficiency with frontier methods. A case study for wind farms
International Nuclear Information System (INIS)
Iglesias, Guillermo; Castellanos, Pablo; Seijas, Amparo
2010-01-01
In this paper, we measure the productive efficiency of a group of wind farms during the period 2001-2004 using the frontier methods Data Envelopment Analysis (DEA) and Stochastic Frontier Analysis (SFA). Taking an extensive definition of the productive process of wind electricity as our starting point, we obtain results which allow us to identify, on the one hand, an essentially ex ante efficiency measure and, on the other hand, aspects of relevance for wind farm development companies (developers), technology suppliers and operators in terms of their economic impact. These results may also be of interest for regulators and other stakeholders in the sector. Furthermore, we discuss the implications of the simultaneous use of DEA and SFA methodologies. (author)
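The DEA side of the comparison above reduces to one small linear program per decision-making unit. A sketch of the standard input-oriented CCR model follows, with made-up inputs and outputs for four hypothetical wind farms; the paper's actual variable definitions are not reproduced.

```python
# Input-oriented CCR DEA: for unit o, minimise theta subject to
# sum_j lam_j x_j <= theta * x_o and sum_j lam_j y_j >= y_o, lam >= 0.
# Data are invented toy values for four "wind farms".
import numpy as np
from scipy.optimize import linprog

X = np.array([[5., 3.], [8., 5.], [6., 8.], [4., 2.]])  # two inputs per farm
Y = np.array([[20.], [26.], [25.], [15.]])              # one output per farm

def dea_ccr(o):
    """Technical efficiency of unit o (1.0 = on the frontier)."""
    n, m = X.shape
    s = Y.shape[1]
    obj = np.concatenate([[1.0], np.zeros(n)])          # variables: [theta, lam]
    A_ub = np.vstack([np.hstack([-X[o].reshape(-1, 1), X.T]),   # X'lam <= theta x_o
                      np.hstack([np.zeros((s, 1)), -Y.T])])     # Y'lam >= y_o
    b_ub = np.concatenate([np.zeros(m), -Y[o]])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (1 + n), method="highs")
    return res.fun

for o in range(len(X)):
    print(f"farm {o}: efficiency = {dea_ccr(o):.3f}")
```

By construction every unit scores at most 1, and at least one unit lies on the frontier with a score of exactly 1.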
International Nuclear Information System (INIS)
Kubota, Kanya; Yamana, Hajime; Takeda, Seiichiro.
1984-01-01
Purpose: To significantly improve the ruthenium removing efficiency in a nitric acid solution in an acid recovery system for the recovery of nitric acid from nitric acid liquid wastes through evaporating condensation. Method: Upon evaporating treatment of nitric acid solution containing ruthenium by supplying and heating the solution to a nitric acid evaporating device, hydrazine is previously added to the nitric acid solution. Hydrazine and intermediate reaction product of hydrazine such as azide causes a reduction reaction with intermediate reaction product of ruthenium tetraoxide to suppress the oxidation of ruthenium and thereby improve the decontaminating efficiency of ruthenium. The amount of hydrazine to be added is preferably between 20 - 500 mg/l and most suitably between 200 - 2000 mg/l per one liter of the liquid in the evaporating device. (Seki, T.)
Energy Technology Data Exchange (ETDEWEB)
Voitenko, N. V., E-mail: tevn@hvd.tpu.ru; Yudin, A. S.; Kuznetsova, N. S. [National Research Tomsk Polytechnic University (Russian Federation); Krastelev, E. G. [Russian Academy of Sciences, Joint Institute for High Temperatures (Russian Federation)
2016-12-15
The paper deals with the relevance of using electrical discharge technology for construction works in the city. The technical capabilities of the installation for multi-borehole electro-discharge splitting off and destruction of rocks and concrete are described. Ways to increase the efficiency of the electrical discharge method for the destruction of solids are proposed. In order to augment the discharge circuit energy, the energy storage is separated into two individual capacitor batteries. A choke with an inductance of 28.6 μH is installed in one of the batteries, which increases the duration of the channel energy release to 400 μs and the efficiency of electrical discharge splitting off of concrete.
An accurate and efficient method for large-scale SSR genotyping and applications.
Li, Lun; Fang, Zhiwei; Zhou, Junfei; Chen, Hong; Hu, Zhangfeng; Gao, Lifen; Chen, Lihong; Ren, Sheng; Ma, Hongyu; Lu, Long; Zhang, Weixiong; Peng, Hai
2017-06-02
Accurate and efficient genotyping of simple sequence repeats (SSRs) constitutes the basis of SSRs as an effective genetic marker with various applications. However, the existing methods for SSR genotyping suffer from low sensitivity, low accuracy, low efficiency and high cost. In order to fully exploit the potential of SSRs as a genetic marker, we developed a novel method for SSR genotyping, named AmpSeq-SSR, which combines multiplexing polymerase chain reaction (PCR), targeted deep sequencing and comprehensive analysis. AmpSeq-SSR is able to genotype potentially more than a million SSRs at once using current sequencing techniques. In the current study, we simultaneously genotyped 3105 SSRs in eight rice varieties, which were further validated experimentally. The results showed that the accuracies of AmpSeq-SSR were nearly 100% and 94% with a single-base resolution for homozygous and heterozygous samples, respectively. To demonstrate the power of AmpSeq-SSR, we adopted it in two applications. The first was to construct discriminative fingerprints of the rice varieties using 3105 SSRs, which offer much greater discriminative power than the 48 SSRs commonly used for rice. The second was to map Xa21, a gene that confers persistent resistance to rice bacterial blight. We demonstrated that genome-scale fingerprints of an organism can be efficiently constructed and candidate genes, such as Xa21 in rice, can be accurately and efficiently mapped using an innovative strategy consisting of multiplexing PCR, targeted sequencing and computational analysis. While the work we present focused on rice, AmpSeq-SSR can be readily extended to animals and micro-organisms. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
Resource potential methods using for efficiency of activities in the region increase
Directory of Open Access Journals (Sweden)
M. P. Vasiliev
2016-01-01
Full Text Available The article argues that methods for influencing the economic results and effectiveness of a regional economic complex should be based on a high-quality classification of the basic characteristics of the region's state. Applying a combination of techniques to ensure a comprehensive impact on this goal should, in synthesized form, unite the target orientation of the region's development with parameters that objectively reveal its condition. Organisational, economic, financial and investment techniques for achieving the planned targets require aligning the resource potential of the region with the available capacity of the regional economic complex, so as to promote economic growth and improve operating efficiency. The main characteristics of the region's potential resource opportunities are the skill level of workers, the degree of depreciation of fixed assets and their rate of renewal, the level of innovation in the region and in its branches and facilities, the strength of competitive advantages, the average annual number of employees, the cost of fixed and current assets, and financial stability. The region's potential ability to influence its structural components towards the financial and economic performance targets acts as the capacity to sustain the dynamics of regional production efficiency and to raise the benefit obtained per unit of resource consumed. Applying individual methods, or their entire structure, created to provide a comprehensive impact on goal achievement combines, in synthesized form, the target orientation of regional development with the parameters that most objectively reveal its condition, so that the planned level of efficiency of activity is achieved under the fullest possible involvement and intensity of resource use.
International Nuclear Information System (INIS)
Ikeda, Hideaki; Takeda, Toshikazu
2001-01-01
A space/time nodal diffusion code based on the nodal expansion method (NEM), EPISODE, was developed in order to evaluate transient neutron behavior in light water reactor cores. The code employs the improved quasistatic (IQS) method for spatial neutron kinetics, and the neutron flux distribution is obtained numerically by solving the neutron diffusion equation with a nonlinear iteration scheme to achieve fast computation. A predictor-corrector (PC) method developed in the present study allows a coarser time mesh to be applied to the transient spatial neutron calculation than is possible in the conventional IQS model, which further improves computational efficiency. Its computational advantage was demonstrated on numerical benchmark problems simulating reactivity-initiated events, showing reductions in computational time of up to a factor of three relative to the conventional IQS. A thermohydraulics model was also incorporated in EPISODE, and the capability for realistic reactivity event analyses was verified using the SPERT-III/E-Core experimental data. (author)
Zeng, Lang; He, Yu; Povolotskyi, Michael; Liu, XiaoYan; Klimeck, Gerhard; Kubis, Tillmann
2013-06-01
In this work, the low rank approximation concept is extended to the non-equilibrium Green's function (NEGF) method to achieve a very efficient approximated algorithm for coherent and incoherent electron transport. This new method is applied to inelastic transport in various semiconductor nanodevices. Detailed benchmarks with exact NEGF solutions show (1) a very good agreement between approximated and exact NEGF results, (2) a significant reduction of the required memory, and (3) a large reduction of the computational time (a factor of speed up as high as 150 times is observed). A non-recursive solution of the inelastic NEGF transport equations of a 1000 nm long resistor on standard hardware illustrates nicely the capability of this new method.
An Efficient Method of Vibration Diagnostics For Rotating Machinery Using a Decision Tree
Directory of Open Access Journals (Sweden)
Bo Suk Yang
2000-01-01
Full Text Available This paper describes an efficient method for automating vibration diagnosis of rotating machinery using a decision tree, applicable to vibration diagnosis expert systems. The decision tree is a widely known formalism for expressing classification knowledge and has been used successfully in many diverse areas such as character recognition, medical diagnosis, and expert systems. In order to build a decision tree for vibration diagnosis, classes and attributes have to be defined, and a set of cases based on past experience is also needed. The tree is induced from this training set using a result-cause matrix newly developed in the present work, instead of the conventionally implemented cause-result matrix. The method was applied to diagnostics for various cases taken from published work, and it is found to predict the causes of abnormal vibration for the test cases with high reliability.
DEFF Research Database (Denmark)
Li, J; Villemoes, K; Zhang, Y
2009-01-01
The purpose of our work was to establish an efficient oriented enucleation method to produce transgenic embryos with handmade cloning (HMC). After 41–42 h of oocyte maturation, the oocytes were further cultured with or without 0.4 μg/ml demecolcine for 45 min [chemically assisted handmade...... cytoplasts without extrusion cones or PB were selected as recipients. Two cytoplasts were electrofused with one transgenic fibroblast expressing green fluorescent protein (GFP), while non-transgenic fibroblasts were used as controls. Reconstructed embryos were cultured in Well of Wells (WOWs) with porcine......%) of cloned embryos with GFP transgenic fibroblast cells after CAHE vs OHE. With adjusted time-lapse for zona-free cloned embryos cultured in WOWs with PZM-3, it was obvious that in vitro developmental competence after CAHE was compromised when compared with the OHE method. OHE enucleation method seems
An efficient cloud detection method for high resolution remote sensing panchromatic imagery
Li, Chaowei; Lin, Zaiping; Deng, Xinpu
2018-04-01
In order to increase the accuracy of cloud detection for remote sensing satellite imagery, we propose an efficient cloud detection method for remote sensing satellite panchromatic images. This method includes three main steps. First, an adaptive intensity threshold value combined with a median filter is adopted to extract the coarse cloud regions. Second, a guided filtering process is conducted to strengthen the textural features difference and then we conduct the detection process of texture via gray-level co-occurrence matrix based on the acquired texture detail image. Finally, the candidate cloud regions are extracted by the intersection of two coarse cloud regions above and we further adopt an adaptive morphological dilation to refine them for thin clouds in boundaries. The experimental results demonstrate the effectiveness of the proposed method.
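The first step described above (adaptive intensity threshold combined with a median filter) can be sketched in a few lines. The mean-plus-k·std threshold rule and the 3×3 window are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def coarse_cloud_mask(img, k=1.0, win=3):
    """Median-filter a grayscale image, then threshold it adaptively.

    Bright, locally smooth regions (candidate clouds) end up above the
    adaptive threshold; isolated bright pixels are suppressed by the
    median filter first.
    """
    h, w = img.shape
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    filtered = np.empty_like(img, dtype=float)
    for i in range(h):
        for j in range(w):
            filtered[i, j] = np.median(padded[i:i + win, j:j + win])
    # adaptive threshold: image statistics, not a fixed constant
    thresh = filtered.mean() + k * filtered.std()
    return filtered > thresh
```

In the full method this coarse mask would be intersected with a texture-based mask (guided filtering plus gray-level co-occurrence matrix) before morphological refinement.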
Smolin, John A; Gambetta, Jay M; Smith, Graeme
2012-02-17
We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
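The projection step can be sketched directly: assuming the candidate matrix μ is already Hermitian, the nearest physical state is obtained by projecting its eigenvalue spectrum onto the probability simplex (the "closest probability distribution" step) and reassembling. This is a minimal sketch of just that final stage, not the paper's full pipeline:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of a real vector onto the probability simplex,
    i.e. the closest (2-norm) distribution to a set of numbers."""
    d = len(v)
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    idx = np.arange(1, d + 1)
    # largest k with u_k + (1 - sum_{i<=k} u_i)/k > 0
    k = np.nonzero(u + (1.0 - css) / idx > 0)[0][-1]
    theta = (1.0 - css[k]) / (k + 1)
    return np.maximum(v + theta, 0.0)

def nearest_physical_state(mu):
    """Project a Hermitian candidate matrix mu onto the nearest density
    matrix under the 2-norm: unit trace, no negative eigenvalues.
    Only the spectrum is adjusted; the eigenvectors are kept."""
    vals, vecs = np.linalg.eigh(mu)
    p = project_to_simplex(vals)
    return (vecs * p) @ vecs.conj().T         # sum_j p_j |v_j><v_j|
```

For example, a trace-one candidate with spectrum (0.9, 0.3, −0.2) is mapped to a valid state with spectrum (0.8, 0.2, 0), as the paper's construction prescribes.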
Efficient numerical methods for fluid- and electrodynamics on massively parallel systems
Energy Technology Data Exchange (ETDEWEB)
Zudrop, Jens
2016-07-01
In the last decade, computer technology has evolved rapidly. Modern high performance computing systems offer a tremendous amount of computing power in the range of a few peta floating point operations per second. In contrast, numerical software development is much slower and most existing simulation codes cannot exploit the full computing power of these systems. Partially, this is due to the numerical methods themselves and partially it is related to bottlenecks within the parallelization concept and its data structures. The goal of the thesis is the development of numerical algorithms and corresponding data structures to remedy both kinds of parallelization bottlenecks. The approach is based on a co-design of the numerical schemes (including numerical analysis) and their realizations in algorithms and software. Various kinds of applications, from multicomponent flows (Lattice Boltzmann Method) to electrodynamics (Discontinuous Galerkin Method) to embedded geometries (Octree), are considered and efficiency of the developed approaches is demonstrated for large scale simulations.
An efficient preconditioning technique using Krylov subspace methods for 3D characteristics solvers
International Nuclear Information System (INIS)
Dahmani, M.; Le Tellier, R.; Roy, R.; Hebert, A.
2005-01-01
The Generalized Minimal RESidual (GMRES) method, using a Krylov subspace projection, is adapted and implemented to accelerate a 3D iterative transport solver based on the characteristics method. Another acceleration technique called the self-collision rebalancing technique (SCR) can also be used to accelerate the solution or as a left preconditioner for GMRES. The GMRES method is usually used to solve a linear algebraic system (Ax=b). It uses the Krylov subspace K(r^(0), A) as the projection subspace and AK(r^(0), A) for the orthogonalization of the residual. This paper compares the performance of these two combined methods on various problems. To implement the GMRES iterative method, the characteristics equations are derived in linear algebra formalism by using the equivalence between the method of characteristics and the method of collision probability to end up with a linear algebraic system involving fluxes and currents. Numerical results show good performance of the GMRES technique especially for the cases presenting large material heterogeneity with a scattering ratio close to 1. Similarly, the SCR preconditioning slightly increases the GMRES efficiency.
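As a toy analogue of the setup described, the sketch below solves a sparse system Ax=b with SciPy's GMRES and a simple Jacobi (diagonal) left preconditioner standing in for the SCR rebalancing step; the tridiagonal system and the choice of preconditioner are illustrative assumptions, not the transport equations of the paper:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, LinearOperator

# Illustrative diagonally dominant sparse system Ax = b
n = 200
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi preconditioner: M approximates A^{-1} using only the diagonal,
# playing the role that SCR plays for the characteristics solver.
Dinv = 1.0 / A.diagonal()
M = LinearOperator((n, n), matvec=lambda r: Dinv * r)

x, info = gmres(A, b, M=M)   # info == 0 signals convergence
```

A better preconditioner clusters the spectrum of MA near 1, which is why GMRES benefits most in the hard cases the abstract mentions (strong heterogeneity, scattering ratio close to 1).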
Jensen, Scott A; Blumberg, Sean; Browning, Megan
2017-09-01
Although time-out has been demonstrated to be effective across multiple settings, little research exists on effective methods for training others to implement time-out. The present set of studies is an exploratory analysis of a structured feedback method for training time-out using repeated role-plays. The three studies examined (a) a between-subjects comparison to a more traditional didactic/video modeling method of time-out training, (b) a within-subjects comparison to traditional didactic/video modeling training for another skill, and (c) the impact of structured feedback training on in-home time-out implementation. Though findings are only preliminary and more research is needed, the structured feedback method appears across studies to be an efficient, effective method that demonstrates good maintenance of skill up to 3 months post training. Findings suggest, though do not confirm, a benefit of the structured feedback method over a more traditional didactic/video training model. Implications and further research on the method are discussed.
Novel and Efficient Methods for Calculating Pressure in Polymer Lattice Models
Zhang, Pengfei; Wang, Qiang
2014-03-01
Pressure calculation in polymer lattice models is an important but nontrivial subject. The three existing methods - thermodynamic integration, repulsive wall, and sedimentation equilibrium methods - all have their limitations and cannot be used to accurately calculate the pressure at all polymer volume fractions φ. Here we propose two novel methods. In the first method, we combine Monte Carlo simulation in an expanded grand-canonical ensemble with the Wang-Landau Optimized Ensemble (WL-OE) simulation to calculate the pressure as a function of polymer volume fraction, which is very efficient at low to intermediate φ and exhibits negligible finite-size effects. In the second method, we introduce a repulsive plane with bridging bonds, which is similar to the repulsive wall method but eliminates its confinement effects, and estimate the two-dimensional density of states (in terms of the number of bridging bonds and the contact number) using the 1/t version of the Wang-Landau algorithm. This works well at all φ, especially at high φ where all the methods involving chain insertion trial moves fail.
Modeling of detection efficiency of HPGe semiconductor detector by Monte Carlo method
International Nuclear Information System (INIS)
Rapant, T.
2003-01-01
Over the past ten years, following the gradual adoption of new legislative standards for protection against ionizing radiation, gamma-spectrometry has penetrated significantly into standard radioanalytical practice. At nuclear power plants, gamma-spectrometry has proved the most effective method for determining the activity of individual radionuclides, and spectrometric laboratories have gradually been equipped with the most modern instrumentation. Nevertheless, reliance on costly and time-intensive experimental calibration methods partially limited the possibilities of gamma-spectrometry, particularly during the substantial renovation and modernization works of the late 1990s. For this reason, the spectrometric laboratory of the Bohunice Nuclear Power Plant, in cooperation with the Department of Nuclear Physics of FMPI in Bratislava, developed and tested several calibration procedures based on computer simulations using the GEANT program. The present thesis describes a calibration method for the measurement of bulk samples based on self-absorption factors. The accuracy of the proposed method is at least comparable with that of the other methods in use, while significantly surpassing them in efficiency, cost, time and simplicity. The method has been used successfully for almost two years in the spectrometric laboratory of the Radiation Protection Division at the Bohunice plant, as shown by the results of international comparison measurements and by repeated validation measurements performed by the Slovak Institute of Metrology in Bratislava.
Directory of Open Access Journals (Sweden)
Marcos Aurelio Lopes
2017-03-01
Full Text Available We aimed to evaluate the technical efficiency and economic viability of the implementation and use of four cattle identification methods allowed by the Brazilian traceability system. The study was conducted in a beef cattle production system located in the State of Mato Grosso, from January to June 2012. Four identification methods (treatments were compared: T1: ear tag in one ear and ear button in the other ear (eabu; T2: ear tag and iron brand on the right leg (eaib; T3: ear tag in one ear and tattoo on the other ear (eata; and T4: ear tag in one ear and electronic ear tag (eael on the other. Each treatment was applied to 60 Nelore animals, totaling 240 animals, divided equally into three life stages (calves, young cattle, adult cattle. The study had two phases: implementation (phase 1 and reading and transfer of identification numbers to an electronic database (phase 2. All operating expenses related to the two phases of the study were determined. The database was constructed, and the statistical analyses were performed using SPSS® 17.0 software. Regarding the time spent on implementation (phase 1, conventional ear tags and electronic ear tags produced similar results, which were lower than those of hot iron and tattoo methods, which differed from each other. Regarding the time required for reading the numbers on animals and their transcription into a database (phase 2, electronic ear-tagging was the fastest method, followed by conventional ear tag, hot iron and tattoo. Among the methods analyzed, the electronic ear tag had the highest technical efficiency because it required less time to implement identifiers and to complete the process of reading and transcription to an electronic database and because it did not exhibit any errors. However, the cost of using the electronic ear-tagging method was higher primarily due to the cost of the device.
An Efficient Method for Generation of Transgenic Rats Avoiding Embryo Manipulation
Directory of Open Access Journals (Sweden)
Bhola Shankar Pradhan
2016-01-01
Full Text Available Although rats are preferred over mice as an animal model, transgenic animals are generated predominantly using mouse embryos, and there are limitations in generating transgenic rats by embryo manipulation. Unlike mouse embryos, most rat embryos do not survive male pronuclear DNA injection, which reduces the efficiency of generating transgenic rats by this method. More importantly, this method requires hundreds of eggs, collected by killing several females, for insertion of the transgene. To this end, we developed a noninvasive, non-lethal technique for generating transgenic rats by integrating the transgene into the genome of spermatogonial cells through testicular injection of DNA followed by electroporation. After standardization of this technique using EGFP as a transgene, a transgenic rat model of alpha thalassemia was successfully generated. This efficient method eases the generation of transgenic rats without sacrificing animals, while simultaneously reducing the number of rats used to produce a transgenic line.
Simple-MSSM: a simple and efficient method for simultaneous multi-site saturation mutagenesis.
Cheng, Feng; Xu, Jian-Miao; Xiang, Chao; Liu, Zhi-Qiang; Zhao, Li-Qing; Zheng, Yu-Guo
2017-04-01
To develop a practically simple and robust multi-site saturation mutagenesis (MSSM) method that enables simultaneous recombination of amino acid positions for focused mutant library generation. A general restriction enzyme-free and ligase-free MSSM method (Simple-MSSM) based on prolonged overlap extension PCR (POE-PCR) and Simple Cloning techniques. As a proof of principle of Simple-MSSM, the gene of eGFP (enhanced green fluorescent protein) was used as a template for simultaneous mutagenesis of five codons. Forty-eight randomly selected clones were sequenced. Sequencing revealed that all 48 clones showed at least one mutant codon (mutation efficiency = 100%), and 46 of the 48 clones had mutations at all five codons. The diversities obtained at these five codons are 27, 24, 26, 26 and 22, respectively, which correspond to 84, 75, 81, 81 and 69% of the theoretical diversity offered by NNK-degeneration (32 codons; NNK, K = T or G). The enzyme-free Simple-MSSM method can simultaneously and efficiently saturate five codons within one day, avoiding missed interactions between residues in interacting amino acid networks.
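The quoted theoretical diversity of 32 codons per NNK-degenerate position can be reproduced from the standard genetic code; the check below is an illustration of why NNK is used (it covers all 20 amino acids with a single stop codon), not part of the Simple-MSSM protocol itself:

```python
from itertools import product

bases = "TCAG"
# Standard genetic code: amino acids for the 64 codons enumerated in TCAG order
aa = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codon_table = {"".join(c): aa[i] for i, c in enumerate(product(bases, repeat=3))}

# NNK degeneracy: N = any base, K = G or T  ->  4 * 4 * 2 = 32 codons
nnk = ["".join(c) for c in product("ACGT", "ACGT", "GT")]
encoded = {codon_table[c] for c in nnk}

print(len(nnk))              # 32 codons per saturated position
print(len(encoded - {"*"}))  # 20 distinct amino acids (plus the TAG stop)
```

With five NNK positions, the full theoretical codon diversity is 32^5, which is why the abstract reports per-position diversities against a ceiling of 32.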
A Building Energy Efficiency Optimization Method by Evaluating the Effective Thermal Zones Occupancy
Directory of Open Access Journals (Sweden)
Franco Cotana
2012-12-01
Full Text Available Building energy efficiency is strongly linked to the operations and control systems, together with the integrated performance of passive and active systems. In new high quality buildings in particular, where these two latter aspects have been already implemented at the design stage, users’ perspective, obtained through post-occupancy assessment, has to be considered to reduce whole energy requirement during service life. This research presents an innovative and low-cost methodology to reduce buildings’ energy requirements through post-occupancy assessment and optimization of energy operations using effective users’ attitudes and requirements as feedback. As a meaningful example, the proposed method is applied to a multipurpose building located in New York City, NY, USA, where real occupancy conditions are assessed. The effectiveness of the method is tested through dynamic simulations using a numerical model of the case study, calibrated through real monitoring data collected on the building. Results show that, for the chosen case study, the method provides optimized building energy operations which allow a reduction of primary energy requirements for HVAC, lighting, room-electricity, and auxiliary supply by about 21%. This paper shows that the proposed strategy represents an effective way to reduce buildings’ energy waste, in particular in those complex and high-efficiency buildings that are not performing as well as expected during the concept-design-commissioning stage, in particular due to the lack of feedback after the building handover.
A simple and efficient method for assembling TALE protein based on plasmid library.
Zhang, Zhiqiang; Li, Duo; Xu, Huarong; Xin, Ying; Zhang, Tingting; Ma, Lixia; Wang, Xin; Chen, Zhilong; Zhang, Zhiying
2013-01-01
DNA binding domain of the transcription activator-like effectors (TALEs) from Xanthomonas sp. consists of tandem repeats that can be rearranged according to a simple cipher to target new DNA sequences with high DNA-binding specificity. This technology has been successfully applied in a variety of species for genome engineering. However, assembling long TALE tandem repeats remains a big challenge precluding wide use of this technology. Although several new methodologies for efficiently assembling TALE repeats have been recently reported, all of them require either sophisticated facilities or skilled technicians to carry them out. Here, we describe a simple and efficient method for generating customized TALE nucleases (TALENs) and TALE transcription factors (TALE-TFs) based on a TALE repeat tetramer library. The library of 256 tetramers covers all possible combinations of 4 base pairs. A set of unique primers was designed for amplification of these tetramers, and PCR products were assembled in a single digestion/ligation reaction. Twelve TALE constructs, including 4 TALEN pairs targeted to mouse Gt(ROSA)26Sor gene and mouse Mstn gene sequences as well as 4 TALE-TF constructs targeted to mouse Oct4, c-Myc, Klf4 and Sox2 gene promoter sequences, were generated using our method. The construction routine took 3 days, and constructs could be assembled in parallel. The rate of positive clones during colony PCR verification was 64% on average, and sequencing showed that all TALE constructs were assembled with a high success rate. This is a rapid and cost-efficient method using the most common enzymes and facilities.
2010-01-01
... 10 Energy 3 2010-01-01 2010-01-01 false Uniform test method for the measurement of energy efficiency of commercial heat pump water heaters. [Reserved] 431.107 Section 431.107 Energy DEPARTMENT OF....107 Uniform test method for the measurement of energy efficiency of commercial heat pump water heaters...
Yuldashev, M. N.; Vlasov, A. I.; Novikov, A. N.
2018-05-01
This paper focuses on the development of an energy-efficient algorithm for classification of states of a wireless sensor network using machine learning methods. The proposed algorithm reduces energy consumption by: 1) elimination of monitoring of parameters that do not affect the state of the sensor network, 2) reduction of communication sessions over the network (the data are transmitted only if their values can affect the state of the sensor network). The studies of the proposed algorithm have shown that at classification accuracy close to 100%, the number of communication sessions can be reduced by 80%.
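The second energy-saving measure (transmit data only when their values could affect the network state) resembles a send-on-delta rule. The sketch below uses an illustrative threshold and readings; the actual algorithm decides via machine-learning classification of network states, which this sketch does not reproduce:

```python
def send_on_delta(readings, threshold):
    """Return only the readings that differ from the last transmitted
    value by more than `threshold`; all other readings are suppressed,
    saving one radio transmission each."""
    last = None
    sent = []
    for r in readings:
        if last is None or abs(r - last) > threshold:
            sent.append(r)
            last = r
    return sent

readings = [20.0, 20.1, 20.05, 22.3, 22.4, 25.0, 24.9, 24.95]
print(send_on_delta(readings, threshold=1.0))  # [20.0, 22.3, 25.0]
```

Here five of eight readings are suppressed; the reported 80% reduction in communication sessions comes from applying such state-aware filtering across the whole network, not from any single node.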
METHODS FOR IMPROVING AVAILABILITY AND EFFICIENCY OF COMPUTER INFRASTRUCTURE IN SMART CITIES
Directory of Open Access Journals (Sweden)
Jerzy Balicki
2017-09-01
Full Text Available This paper discusses methods for increasing the availability and efficiency of information infrastructure in smart cities. Two criteria are formulated for assigning key resources in a smart city system, and the process of finding compromise solutions from the Pareto-optimal set is illustrated. Collective-intelligence metaheuristics for improving smart city infrastructure are described, including particle swarm optimization (PSO), ant colony optimization (ACO), the artificial bee colony algorithm (ABC), and differential evolution (DE). Other applications of these metaheuristics in smart cities are also presented.
Modern efficient methods of steel vertical oil tanks clean-up
Directory of Open Access Journals (Sweden)
Nekrasov Vladimir
2016-01-01
Full Text Available The article reviews the legislative base of the Russian Federation governing the operation of tanks and tank farms, and presents the successive stages of the technological process of cleaning vertical steel tanks from oil sludge deposits. Shortcomings of the most widespread existing electromechanical mixers are described for the hydraulic method of removing, and preventing the formation of, sludge deposits in tanks holding oil and oil products. To increase the efficiency and reliability of sludge washout and to reduce its power consumption, a new design of a funneled washout and deposit-prevention system is proposed.
Directory of Open Access Journals (Sweden)
Lim, C. H.
2007-01-01
Full Text Available Production of Lactobacillus salivarius i 24, a probiotic strain for chicken, was studied in batch fermentation using a 500 mL Erlenmeyer flask. The response surface method (RSM) was used to optimize the medium for efficient cultivation of the bacterium. The factors investigated were yeast extract, glucose and initial culture pH. A polynomial regression model with cubic and quartic terms was used for the analysis of the experimental data. Estimated optimal conditions of the factors for growth of L. salivarius i 24 were 3.32 % (w/v) glucose, 4.31 % (w/v) yeast extract and an initial culture pH of 6.10.
Analysis and design of substrate integrated waveguide using efficient 2D hybrid method
Wu, Xuan Hui
2010-01-01
Substrate integrated waveguide (SIW) is a new type of transmission line. It implements a waveguide on a piece of printed circuit board by emulating the side walls of the waveguide using two rows of metal posts. It inherits the merits both from the microstrip for compact size and easy integration, and from the waveguide for low radiation loss, and thus opens another door to design efficient microwave circuits and antennas at a low cost. This book presents a two-dimensional fullwave analysis method to investigate an SIW circuit composed of metal and dielectric posts. It combines the cylindrical
International Nuclear Information System (INIS)
Park, Beom Woo; Joo, Han Gyu
2015-01-01
Highlights: • The stiffness confinement method is combined with multigroup CMFD with SENM nodal kernel. • The systematic methods for determining the shape and amplitude frequencies are established. • Eigenvalue problems instead of fixed source problems are solved in the transient calculation. • It is demonstrated that much larger time step sizes can be used with the SCM–CMFD method. - Abstract: An improved Stiffness Confinement Method (SCM) is formulated within the framework of the coarse mesh finite difference (CMFD) formulation for efficient multigroup spatial kinetics calculation. The algorithm for searching for the amplitude frequency that makes the dynamic eigenvalue unity is developed in a systematic way along with the methods for determining the shape and precursor frequencies. A nodal calculation scheme is established within the CMFD framework to incorporate the cross section changes due to thermal feedback and dynamic frequency update. The conditional nodal update scheme is employed such that the transient calculation is performed mostly with the CMFD formulation and the CMFD parameters are conditionally updated by intermittent nodal calculations. A quadratic representation of amplitude frequency is introduced as another improvement. The performance of the improved SCM within the CMFD framework is assessed by comparing the solution accuracy and computing times for the NEACRP control rod ejection benchmark problems with those obtained with the Crank–Nicholson method with exponential transform (CNET). It is demonstrated that the improved SCM is beneficial for large time step size calculations with stability and accuracy enhancement
Geostatistical Sampling Methods for Efficient Uncertainty Analysis in Flow and Transport Problems
Liodakis, Stylianos; Kyriakidis, Phaedon; Gaganis, Petros
2015-04-01
In hydrogeological applications involving flow and transport in heterogeneous porous media the spatial distribution of hydraulic conductivity is often parameterized in terms of a lognormal random field based on a histogram and variogram model inferred from data and/or synthesized from relevant knowledge. Realizations of simulated conductivity fields are then generated using geostatistical simulation involving simple random (SR) sampling and are subsequently used as inputs to physically-based simulators of flow and transport in a Monte Carlo framework for evaluating the uncertainty in the spatial distribution of solute concentration due to the uncertainty in the spatial distribution of hydraulic conductivity [1]. Realistic uncertainty analysis, however, calls for a large number of simulated concentration fields; hence, can become expensive in terms of both time and computer resources. A more efficient alternative to SR sampling is Latin hypercube (LH) sampling, a special case of stratified random sampling, which yields a more representative distribution of simulated attribute values with fewer realizations [2]. Here, the term representative implies realizations spanning efficiently the range of possible conductivity values corresponding to the lognormal random field. In this work we investigate the efficiency of alternative methods to classical LH sampling within the context of simulation of flow and transport in a heterogeneous porous medium. More precisely, we consider the stratified likelihood (SL) sampling method of [3], in which attribute realizations are generated using the polar simulation method by exploring the geometrical properties of the multivariate Gaussian distribution function. In addition, we propose a more efficient version of the above method, here termed minimum energy (ME) sampling, whereby a set of N representative conductivity realizations at M locations is constructed by: (i) generating a representative set of N points distributed on the
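The difference between SR and LH sampling can be sketched for a lognormal conductivity marginal: LH draws exactly one uniform value per equal-probability stratum, then maps through the inverse CDF. The helper below is a generic one-dimensional LH sampler for illustration, not the SL or ME method proposed in the abstract:

```python
import numpy as np
from scipy.stats import norm

def latin_hypercube_lognormal(n, mu, sigma, seed=None):
    """Draw n Latin hypercube samples from a lognormal(mu, sigma) marginal:
    one uniform draw per equal-probability stratum [i/n, (i+1)/n),
    mapped through the inverse CDF, then shuffled to break the ordering."""
    rng = np.random.default_rng(seed)
    u = (np.arange(n) + rng.random(n)) / n   # one point in each stratum
    rng.shuffle(u)
    return np.exp(mu + sigma * norm.ppf(u))  # inverse lognormal CDF
```

Because every probability stratum is hit exactly once, the sample spans the tails of the conductivity distribution with far fewer realizations than simple random sampling, which is the efficiency gain the abstract builds on.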
Impact of Irrigation Method on Water Use Efficiency and Productivity of Fodder Crops in Nepal
Directory of Open Access Journals (Sweden)
Ajay K Jha
2016-01-01
Full Text Available Improved irrigation use efficiency is an important tool for intensifying and diversifying agriculture in Nepal, resulting in higher economic yield from irrigated farmlands with a minimum input of water. Research was conducted to evaluate the effect of irrigation method (furrow vs. drip) on the productivity of nutritious fodder species during off-monsoon dry periods in different elevation zones of central Nepal. A split-block factorial design was used. The factors considered were treatment location, fodder crop, and irrigation method. Commonly used local agronomical practices were followed in all respects except irrigation method. Results revealed that the location effect was significant (p < 0.01), with the highest fodder productivity seen for the middle elevation site, Syangja. Species effects were also significant, with teosinte (Euchlaena mexicana) having higher yield than cowpea (Vigna unguiculata). Irrigation method impacted green biomass yield (higher with furrow irrigation), but both methods yielded similar dry biomass, while water use was 73% less under drip irrigation. Our findings indicated that the controlled application of water through drip irrigation is able to produce acceptable yields of nutritionally dense fodder species during dry seasons, leading to more effective utilization and resource conservation of available land, fertilizer and water. Higher productivity of these nutritional fodders resulted in higher milk productivity for livestock smallholders. The ability to grow fodder crops year-round in lowland and hill regions of Nepal with limited water storage using low-cost, water-efficient drip irrigation may greatly increase livestock productivity and, hence, the economic security of smallholder farmers.
Zhang, Jin jing; Shi, Liang; Chen, Hui; Sun, Yun qi; Zhao, Ming wen; Ren, Ang; Chen, Ming jie; Wang, Hong; Feng, Zhi yong
2014-01-01
Hypsizygus marmoreus is one of the major edible mushrooms in East Asia. Because no efficient transformation method was available, molecular and genetic studies of this species have been hindered. The glyceraldehyde-3-phosphate dehydrogenase (GPD) gene of H. marmoreus was isolated, and its promoter was used to drive expression of the hygromycin B phosphotransferase (hph) and enhanced green fluorescent protein (EGFP) genes in H. marmoreus. Agrobacterium tumefaciens-mediated transformation (ATMT) was successfully applied to H. marmoreus. The transformation parameters were optimized, and it was found that co-cultivation of bacteria with protoplasts at a ratio of 1000:1, at a temperature of 26 °C, in medium containing 0.3 mM acetosyringone resulted in the highest transformation efficiency for the Agrobacterium strain. In addition, three plasmids, each carrying a different promoter (from H. marmoreus, Ganoderma lucidum and Lentinula edodes) driving the expression of an antibiotic resistance marker, were also tested. The construct carrying the H. marmoreus gpd promoter produced more transformants than the other constructs. Our analysis showed that over 85% of the transformants tested remained mitotically stable even after five successive rounds of subculturing. Putative transformants were analyzed for the presence of the hph gene by PCR and Southern blot, and the expression of EGFP in H. marmoreus transformants was detected by fluorescence imaging. This ATMT system increases the transformation efficiency of H. marmoreus and may represent a useful tool for molecular genetic studies in this mushroom species. Copyright © 2014 Elsevier GmbH. All rights reserved.
New results to BDD truncation method for efficient top event probability calculation
International Nuclear Information System (INIS)
Mo, Yuchang; Zhong, Farong; Zhao, Xiangfu; Yang, Quansheng; Cui, Gang
2012-01-01
A Binary Decision Diagram (BDD) is a graph-based data structure that allows an exact top event probability (TEP) to be calculated. Developing an efficient BDD algorithm that can solve large problems has proved very difficult because memory consumption is very high. Recently, in order to solve large reliability problems within limited computational resources, Jung presented an efficient method that maintains a small BDD size by truncating the BDD during its calculation. In this paper, it is first identified that Jung's BDD truncation algorithm can be improved for more practical use. A more efficient truncation algorithm is then proposed, which generates truncated BDDs of smaller size and approximate TEPs with smaller truncation error. Empirical results showed that the new algorithm uses slightly less running time and slightly more storage than Jung's algorithm. It was also found that designing a truncation algorithm with ideal features for every possible fault tree is very difficult, if not impossible. The ideal features referred to here are that, as the truncation limit decreases, the size of the truncated BDD converges to the size of the exact BDD but never exceeds it.
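The effect of truncation on a TEP calculation can be illustrated without a full BDD implementation. The sketch below uses a hypothetical four-event fault tree (not the authors' algorithm or data): it evaluates the TEP by Shannon decomposition over the basic events and drops branches whose path probability falls below a truncation limit, which makes the result a lower bound that converges to the exact TEP as the limit decreases.

```python
# Basic-event failure probabilities of a toy fault tree (hypothetical values).
P = {"a": 0.01, "b": 0.02, "c": 0.05, "d": 0.001}

def top(assign):
    """Toy top-event structure function: TOP = (a AND b) OR (c AND d)."""
    return (assign["a"] and assign["b"]) or (assign["c"] and assign["d"])

def tep(events, assign=None, path_p=1.0, limit=0.0):
    """TEP by Shannon decomposition over the basic events.

    Failure branches whose path probability drops below `limit` are
    truncated (their contribution is counted as zero), so the result is a
    lower bound on the exact TEP; limit=0 recovers the exact value.
    """
    if assign is None:
        assign = {}
    if not events:
        return path_p if top(assign) else 0.0
    e, rest = events[0], events[1:]
    p = P[e]
    hi = 0.0
    if path_p * p >= limit:  # truncate negligible failure branches
        hi = tep(rest, {**assign, e: True}, path_p * p, limit)
    lo = tep(rest, {**assign, e: False}, path_p * (1 - p), limit)
    return hi + lo
```

With `limit=0` the enumeration reproduces the inclusion-exclusion value P(ab) + P(cd) - P(abcd); with a positive limit the answer only ever shrinks, mirroring the one-sided error behaviour of BDD truncation.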
An Efficient Estimation Method for Reducing the Axial Intensity Drop in Circular Cone-Beam CT
Directory of Open Access Journals (Sweden)
Lei Zhu
2008-01-01
Full Text Available Reconstruction algorithms for circular cone-beam (CB) scans have been extensively studied in the literature. Since insufficient data are measured, an exact reconstruction is impossible for such a geometry. If the reconstruction algorithm assumes zeros for the missing data, as the standard FDK algorithm does, a major type of resulting CB artifact is the intensity drop along the axial direction. Many algorithms have been proposed to improve image quality in the face of this missing-data problem; however, the development of an effective and computationally efficient algorithm remains a major challenge. In this work, we propose a novel method for estimating the unmeasured data and reducing the intensity-drop artifacts. Each CB projection is analyzed in the Radon space via Grangeat's first derivative. Assuming the CB projection is taken from a parallel-beam geometry, we extract those data that reside in the unmeasured region of the Radon space. These data are then used, as in a parallel-beam geometry, to calculate a correction term, which is added together with Hu's correction term to the FDK result to form the final reconstruction. Further approximations are then made in the calculation of the additional term, and the final formula is implemented very efficiently. The algorithm's performance is evaluated using computer simulations on analytical phantoms. Comparison with reconstructions from other existing algorithms shows that the proposed algorithm achieves superior reduction of axial intensity-drop artifacts with high computational efficiency.
Luo, Hu; Guo, Meijian; Yin, Shaohui; Chen, Fengjun; Huang, Shuai; Lu, Ange; Guo, Yuanfan
2018-06-01
Zirconia ceramic is a crucial material for fabricating functional components used in aerospace, biology, precision machinery, the military industry and other fields. However, its high brittleness and high hardness seriously reduce the finishing efficiency and surface quality achievable with conventional processing technology. In this work, we present a high-efficiency, high-quality finishing process using magnetorheological finishing (MRF), which employs a permanent magnetic yoke with a straight air gap as the excitation unit. Sub-nanoscale surface roughness and a damage-free surface can be obtained after magnetorheological finishing. The XRD results and SEM morphologies confirmed that mechanical shear removal in ductile mode is the dominant material-removal mechanism in the magnetorheological finishing of zirconia ceramic. With the developed experimental apparatus, the effects of workpiece speed, trough speed and working gap on material removal rate and surface roughness were systematically investigated. An ultra-smooth surface with roughness below Ra 1 nm was repeatedly achieved during the parametric experiments. Additionally, the highest material removal rate exceeded 1 mg/min when diamond was used as the abrasive particle. Magnetorheological finishing promises to be an adaptable and efficient method for finishing zirconia ceramics.
Methods of increasing thermal efficiency of steam and gas turbine plants
Vasserman, A. A.; Shutenko, M. A.
2017-11-01
Three new methods of increasing the efficiency of turbine power plants are described. The first increases the average temperature of heat supply in a steam turbine plant by mixing the steam leaving the superheaters with the products of combustion of natural gas in oxygen; developing this idea, the steam temperature is maintained close to its initial value over the major part of the expansion in the turbine. The second increases the efficiency of a gas turbine plant by regenerative heating of the air by the gas after its expansion in the high-pressure turbine and before its expansion in the low-pressure turbine. The temperature of the air entering the combustion chamber is thereby increased, so the average temperature of heat supply rises while the average temperature of heat removal falls. The third increases the efficiency of a combined-cycle power plant by avoiding heat transfer from gas to wet steam and transferring heat from gas to water and superheated steam only. Steam is generated by multi-stage throttling of water from supercritical pressure and a temperature close to critical down to a pressure slightly above the condensation pressure. Throttling of the water and separation of the wet steam into saturated water and steam does not require complicated technical devices.
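The common thread of all three methods, raising the average temperature of heat supply and lowering that of heat removal, can be illustrated with the Carnot-style bound on cycle efficiency. The temperature figures below are hypothetical and not taken from the paper.

```python
def carnot_like_efficiency(t_supply_mean_k, t_reject_mean_k):
    """Upper bound on cycle efficiency from the mean heat-supply and
    heat-rejection temperatures (Kelvin): eta = 1 - T_reject / T_supply."""
    return 1.0 - t_reject_mean_k / t_supply_mean_k

# Hypothetical figures: regenerative air heating raises the mean
# heat-supply temperature from 900 K to 1100 K at a 320 K rejection mean.
base = carnot_like_efficiency(900.0, 320.0)
regen = carnot_like_efficiency(1100.0, 320.0)
assert regen > base  # a higher mean supply temperature raises the bound
```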
International Nuclear Information System (INIS)
Han, Yongming; Geng, Zhiqiang; Zhu, Qunxiong; Qu, Yixin
2015-01-01
DEA (data envelopment analysis) has been widely used for the efficiency analysis of industrial production processes. However, it is difficult for the conventional DEA model to distinguish the pros and cons of multiple DMUs (decision-making units). The DEACM (DEA cross-model) can distinguish the pros and cons of the effective DMUs, but it is unable to take the effect of uncertain data into account. This paper proposes an efficiency analysis method based on the FDEACM (fuzzy DEA cross-model) with fuzzy data. The proposed method has better objectivity and resolving power for decision-making. First, we obtain the minimum, median and maximum values of the multi-criteria ethylene energy consumption data by data fuzzification. On the basis of the multi-criteria fuzzy data, the FDEACM yields benchmarks for the effective production situations and improvement directions for the inefficient ethylene plants under different production data configurations. The experimental results show that the proposed method can improve ethylene production conditions and guide the efficiency of energy utilization during the ethylene production process. - Highlights: • This paper proposes an efficiency analysis method based on the FDEACM (fuzzy DEA cross-model) with data fuzzification. • The proposed method is more efficient and accurate than other methods. • We obtain an energy efficiency analysis framework and process based on the FDEACM for the ethylene production industry. • The proposed method is valid and efficient in improving energy efficiency in ethylene plants
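The fuzzification step described above, reducing each set of crisp measurements to a minimum/median/maximum triple (a triangular fuzzy number), can be sketched as follows. This illustrates only the general idea, not the paper's full FDEACM procedure.

```python
def fuzzify(values):
    """Triangular fuzzy number (min, median, max) from crisp measurements,
    mirroring the fuzzification of multi-criteria consumption data."""
    s = sorted(values)
    n = len(s)
    # Median: middle element for odd n, mean of the two middle ones for even n.
    median = s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])
    return (s[0], median, s[-1])
```

Each criterion of each DMU would then enter the cross-model as such a triple instead of a single crisp number.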
Evaluation of the efficiency of some sediment trapping methods after a Mediterranean forest fire.
Fox, D M
2011-02-01
Forest fires are common in Mediterranean environments and may become increasingly frequent as the climate changes. Destruction of the forest cover and litter layer leads to greater overland flow and increased erosion rates. The greatest risk occurs during the first rainstorms following a major fire, so local authorities must act quickly to put erosion control methods in place in order to avoid excessive post-fire sediment loads in river channels. Deciding on which methods to use requires accurate knowledge of their impact on sediment load and an estimate of their cost efficiency. The objective of this study was to evaluate the effectiveness of Log Debris Dams (LDDs) and a sedimentation basin in trapping sediments. Paired sub-catchments were studied to quantify the amount of sediment trapped in stream channels by a series of LDDs and a sedimentation basin. Cost efficiency was evaluated for each of the measures as a function of the cost per unit volume of sediment trapped. In addition, grain size analyses were performed to characterise the nature of the trapped sediments. A third sediment trapping method, Log Erosion Barriers (LEBs), was evaluated more superficially than the first two, and conclusions regarding this method are tentative. LDDs trapped a mean volume of 1.57 m³ per unit (median=1.28 m³); mean LDD height was 105.4 cm (std. dev.=21.9 cm), and mean height of trapped sediments was only 50.0 cm (std. dev.=22.9 cm), showing that the traps were only half filled. Sediment height was limited by the presence of gaps between logs or branches that allowed runoff to flow through. Comparison of the textural characteristics of slope and trapped sediments showed distinct sorting: particles greater than 20 mm were not mobilised from the slopes during the study period, sediments in the medium to coarse sand size fractions were trapped preferentially by the LDDs, and sediments in the sedimentation basin were enriched by clay and silt sized
Efficient Implementation of Many-body Quantum Chemical Methods on the Intel Xeon Phi Coprocessor
Energy Technology Data Exchange (ETDEWEB)
Apra, Edoardo; Klemm, Michael; Kowalski, Karol
2014-12-01
This paper presents the implementation and performance of the highly accurate CCSD(T) quantum chemistry method on the Intel Xeon Phi coprocessor within the context of the NWChem computational chemistry package. The widespread use of highly correlated methods in electronic structure calculations is contingent upon the interplay between advances in theory and the possibility of utilizing the ever-growing computer power of emerging heterogeneous architectures. We discuss the design decisions of our implementation as well as the optimizations applied to the compute kernels and data transfers between host and coprocessor. We show the feasibility of adopting the Intel Many Integrated Core Architecture and the Intel Xeon Phi coprocessor for developing efficient computational chemistry modeling tools. Remarkable scalability is demonstrated by benchmarks. Our solution scales up to a total of 62560 cores with the concurrent utilization of Intel Xeon processors and Intel Xeon Phi coprocessors.
Xu, Zhiqiang
2017-02-16
Attributed graph clustering, also known as community detection on attributed graphs, has attracted much interest recently due to the ubiquity of attributed graphs in real life. Many existing algorithms have been proposed for this problem, which are either distance based or model based. However, model selection in attributed graph clustering has not been well addressed; that is, most existing algorithms assume the cluster number to be known a priori. In this paper, we propose two efficient approaches for attributed graph clustering with automatic model selection. The first approach is a popular Bayesian nonparametric method, while the second is an asymptotic method based on a recently proposed model selection criterion, the factorized information criterion. Experimental results on both synthetic and real datasets demonstrate that our approaches for attributed graph clustering with automatic model selection significantly outperform the state-of-the-art algorithm.
An efficient shutter-less non-uniformity correction method for infrared focal plane arrays
Huang, Xiyan; Sui, Xiubao; Zhao, Yao
2017-02-01
The non-uniformity response of infrared focal plane array (IRFPA) detectors degrades images with fixed-pattern noise. At present, it is common to use a shutter to block radiation from the target while the non-uniformity correction parameters of the infrared imaging system are updated. The use of a shutter "freezes" the image, and inevitably raises problems of system stability and reliability, power consumption, and concealment of infrared detection. In this paper, we present an efficient shutter-less non-uniformity correction (NUC) method for infrared focal plane arrays. The imaging system uses data acquired in a thermostat to calculate, in real time, the infrared radiation incident from the shell, and the primary detector output, with the shell radiation removed, is corrected by the gain coefficient. This method has been tested in a real infrared imaging system, reaching a high correction level, reducing fixed-pattern noise, and adapting to a wide temperature range.
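The correction itself, whether the offset comes from a shutter frame or, as here, from a model of the shell radiation, is a per-pixel affine map. A minimal sketch follows; the gain and offset maps are assumed to be given, and the paper's thermostat-based shell estimation is not reproduced.

```python
def nuc_correct(raw, gain, offset):
    """Per-pixel non-uniformity correction: corrected = gain * raw + offset.

    raw, gain and offset are equally sized 2-D lists (one entry per pixel).
    In a shutter-less scheme the offset map would be derived from a model of
    the shell (housing) radiation rather than from a shutter frame.
    """
    return [[gain[i][j] * raw[i][j] + offset[i][j]
             for j in range(len(raw[0]))]
            for i in range(len(raw))]
```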
An Efficient Method for Image and Audio Steganography using Least Significant Bit (LSB) Substitution
Chadha, Ankit; Satam, Neha; Sood, Rakshak; Bade, Dattatray
2013-09-01
In order to improve data hiding in multimedia formats such as image and audio and to make the hidden message imperceptible, a novel method for steganography is introduced in this paper. It is based on Least Significant Bit (LSB) manipulation and the inclusion of redundant noise as a secret key in the message. The method is applied to data hiding in images; for data hiding in audio, both the Discrete Cosine Transform (DCT) and the Discrete Wavelet Transform (DWT) are used. All the results prove the method to be time-efficient and effective. The algorithm is also tested for various numbers of substituted bits; for those values, the Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) are calculated and plotted. Experimental results show that the stego-image is visually indistinguishable from the original cover-image and that the steganography process does not reveal the presence of any hidden message, thus meeting the criterion of message imperceptibility.
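Plain LSB substitution, the building block of the method, can be sketched as follows. This illustrates generic n-bit LSB embedding and extraction over a byte sequence, not the paper's exact scheme with its redundant-noise secret key.

```python
def embed_lsb(cover, message_bits, n_bits=1):
    """Embed message bits into the n_bits least significant bits of each
    cover byte (n_bits=1 is classic LSB substitution)."""
    stego = list(cover)
    idx = 0
    for i in range(len(stego)):
        if idx >= len(message_bits):
            break
        chunk = message_bits[idx:idx + n_bits]
        value = int("".join(map(str, chunk)), 2)   # chunk bits as an integer
        mask = ~((1 << len(chunk)) - 1) & 0xFF     # clear the low bits
        stego[i] = (stego[i] & mask) | value
        idx += n_bits
    return stego

def extract_lsb(stego, n_msg_bits, n_bits=1):
    """Read back n_msg_bits message bits from the stego bytes."""
    bits = []
    for byte in stego:
        for k in range(n_bits - 1, -1, -1):
            bits.append((byte >> k) & 1)
            if len(bits) == n_msg_bits:
                return bits
    return bits
```

Since each byte changes by at most 2^n_bits - 1, small n_bits keeps the stego data visually and audibly close to the cover, which is what the MSE/PSNR plots in the paper quantify.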
An Efficient Method to Search Real-Time Bulk Data for an Information Processing System
International Nuclear Information System (INIS)
Kim, Seong Jin; Kim, Jong Myung; Suh, Yong Suk; Keum, Jong Yong; Park, Heui Youn
2005-01-01
The Man Machine Interface System (MMIS) of the System-integrated Modular Advanced ReacTor (SMART) is designed with fully digitalized features. The Information Processing System (IPS) of the MMIS acquires and processes plant data from other systems. In addition, the IPS provides plant operation information to operators in the control room. The IPS is required to process bulk data in real time, so it is necessary to consider a special processing method with regard to flexibility and performance, because several thousand plant information points converge on the IPS. Among the processing times, the time spent searching the bulk data is by far the largest. Thus, this paper explores an efficient method for this search and examines its feasibility
An efficient and robust method for shape-based image retrieval
International Nuclear Information System (INIS)
Salih, N.D.; Besar, R.; Abas, F.S.
2007-01-01
Shapes can be thought of as the words of the visual language. Shape boundaries need to be simplified and estimated in a wide variety of image analysis applications. The representation and description of shapes is one of the major problems in content-based image retrieval (CBIR). This paper presents a novel method for shape representation and description, named block-based shape representation (BSR), which is capable of extracting reliable information about the object outline in a concise manner. Our technique is translation, scale, and rotation invariant. It works well on different types of shapes and is fast enough for use in real time. The technique has been implemented and evaluated in order to analyse its accuracy and efficiency. Based on the experimental results, we argue that the proposed BSR is a compact and reliable shape representation method. (author)
An Optimal Design Method of Centrifugal Compressors in Consideration of the Efficiency and the Noise
International Nuclear Information System (INIS)
Ha, K. G.; Sung, S. M.; Kang, S. H.
2007-01-01
A centrifugal compressor is a principal part of fuel cell vehicles, aircraft and home appliances. Therefore not only efficiency but also compact size and a low operating RPM for noise reduction become important criteria in centrifugal compressor design. These criteria often conflict with each other. In the case of RPM in particular, a lower RPM is advantageous for noise reduction and endurance, but for compact size and light weight the reverse undoubtedly holds. It is therefore necessary to introduce a new optimisation concept into centrifugal compressor design. A one-dimensional optimal design method for centrifugal compressors, considering the impeller, the vaneless diffuser and the volute at the same time, is described. The new optimisation process, the underlying design methods for centrifugal compressors, and some optimal design results are included in the paper
EFFICIENCY ANALYSIS OF HASHING METHODS FOR FILE SYSTEMS IN USER MODE
Directory of Open Access Journals (Sweden)
E. Y. Ivanov
2013-05-01
Full Text Available The article deals with the characteristics and performance of interaction protocols between the virtual file system and the file system, and their influence on the processing power of microkernel operating systems. A user-mode implementation of the ext2 file system for MINIX 3 OS is used to show that in microkernel operating systems file object identification time may increase by up to 26 times in comparison with monolithic systems. We therefore present an efficiency analysis of various hashing methods for file systems running in user mode. Studies have shown that, using the hashing methods recommended in this paper, it is possible to achieve competitive performance in the considered component of the I/O stacks of microkernel and monolithic operating systems.
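The practical difference between hashing methods is easy to demonstrate: a poorly distributed hash clusters file names into a few buckets and forces long lookup chains. The sketch below compares djb2, a hash commonly used for name lookup tables, with a naive additive hash on synthetic file names; the specific functions are illustrative, and the methods actually recommended in the article may differ.

```python
def hash_djb2(s):
    """djb2 string hash: h = h*33 + c, a common choice for name tables."""
    h = 5381
    for ch in s:
        h = ((h * 33) + ord(ch)) & 0xFFFFFFFF
    return h

def hash_naive(s):
    """Naive additive hash: sum of character codes (poor distribution,
    since permutations and small edits collide heavily)."""
    return sum(ord(ch) for ch in s)

def bucket_collisions(names, hash_fn, n_buckets=64):
    """Count how many names land in an already-occupied bucket."""
    buckets = [0] * n_buckets
    collisions = 0
    for name in names:
        b = hash_fn(name) % n_buckets
        if buckets[b]:
            collisions += 1
        buckets[b] += 1
    return collisions

# Synthetic directory of similar file names, a typical worst case for
# additive hashing.
names = [f"file{i:04d}.txt" for i in range(200)]
```

On such names the additive hash collapses onto the few distinct digit sums, while djb2 spreads entries across far more buckets, shortening the chains a user-mode file system must walk on every lookup.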
Supplementary Material for: DASPfind: new efficient method to predict drug–target interactions
Ba Alawi, Wail
2016-01-01
Abstract Background Identification of novel drug–target interactions (DTIs) is important for drug discovery. Experimental determination of such DTIs is costly and time consuming, which necessitates the development of efficient computational methods for the accurate prediction of potential DTIs. To date, many computational methods have been proposed for this purpose, but they suffer from a high rate of false-positive predictions. Results Here, we developed a novel computational DTI prediction method, DASPfind. DASPfind uses simple paths of particular lengths inferred from a graph that describes DTIs, similarities between drugs, and similarities between the protein targets of drugs. We show that on average, over the four gold standard DTI datasets, DASPfind significantly outperforms other existing methods when the single top-ranked predictions are considered, resulting in 46.17 % of these predictions being correct, and it achieves 49.22 % correct single top-ranked predictions when the set of all DTIs for a single drug is tested. Furthermore, we demonstrate that our method is best suited for predicting DTIs in the case of drugs with no known targets or with few known targets. We also show the practical use of DASPfind by generating novel predictions for the Ion Channel dataset and validating them manually. Conclusions DASPfind is a computational method for finding reliable new interactions between drugs and proteins. We show over six different DTI datasets that DASPfind outperforms other state-of-the-art methods when the single top-ranked predictions are considered, or when a drug with no known targets or with few known targets is considered. We illustrate the usefulness and practicality of DASPfind by predicting novel DTIs for the Ion Channel dataset. The validated predictions suggest that DASPfind can be used as an efficient method to identify correct DTIs, thus reducing the cost of the experimental verifications necessary in the process of drug discovery
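The core idea, scoring a drug-target pair by the simple paths connecting them in a heterogeneous graph of DTIs and similarities, can be sketched as follows. The weighting used here (product of edge weights, summed over paths up to a length cap) is a simplification; DASPfind's actual normalisation and path-length handling may differ.

```python
def simple_path_score(graph, source, target, max_len=3):
    """Score a drug-target pair by summing the weight products of all
    simple paths from source to target with at most max_len edges.

    graph maps each node to a list of (neighbor, weight) pairs; weights
    encode interaction evidence or drug/target similarity.
    """
    def dfs(node, visited, product, length):
        if node == target:
            return product
        if length == max_len:
            return 0.0
        total = 0.0
        for nxt, w in graph.get(node, []):
            if nxt not in visited:
                total += dfs(nxt, visited | {nxt}, product * w, length + 1)
        return total
    return dfs(source, {source}, 1.0, 0)

# Toy graph (hypothetical): two drugs linked by similarity 0.5, each with
# one known target.
graph = {
    "drugA": [("drugB", 0.5), ("t1", 1.0)],
    "drugB": [("drugA", 0.5), ("t2", 1.0)],
    "t1": [("drugA", 1.0)],
    "t2": [("drugB", 1.0)],
}
```

In this toy graph, drugA scores 1.0 for its known target t1 and 0.5 for t2, reached only through the similar drugB; this is how path-based scoring ranks candidate targets for drugs with few or no known interactions.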
Directory of Open Access Journals (Sweden)
Tatiana Prado
2013-02-01
Full Text Available The presence of enteric viruses in biosolids can be underestimated due to the inefficient methods (mainly molecular methods) used to recover the viruses from these matrices. Therefore, the goal of this study was to evaluate different methods used to recover adenoviruses (AdV), rotavirus species A (RVA), norovirus genogroup II (NoV GII) and the hepatitis A virus (HAV) from biosolid samples at a large urban wastewater treatment plant in Brazil after they had been treated by mesophilic anaerobic digestion. Quantitative polymerase chain reaction (PCR) was used in spiking experiments to compare the detection limits of feasible methods, such as beef extract elution and ultracentrifugation. Tests were performed to detect the inhibition levels, and the bacteriophage PP7 was used as an internal control. The results showed that inhibitors affected the efficiency of the PCR reaction and that beef extract elution is a suitable method for detecting enteric viruses, mainly AdV, in biosolid samples. All of the viral groups were detected in the biosolid samples: AdV (90%), RVA, NoV GII (45%) and HAV (18%), indicating the viruses' resistance to the anaerobic treatment process. This is the first study in Brazil to detect the presence of RVA, AdV, NoV GII and HAV in anaerobically digested sludge, highlighting the importance of adequate waste management.
New Hybrid Monte Carlo methods for efficient sampling. From physics to biology and statistics
International Nuclear Information System (INIS)
Akhmatskaya, Elena; Reich, Sebastian
2011-01-01
We introduce a class of novel hybrid methods for detailed simulations of large complex systems in physics, biology, materials science and statistics. These generalized shadow Hybrid Monte Carlo (GSHMC) methods combine the advantages of stochastic and deterministic simulation techniques. They utilize a partial momentum update to retain some of the dynamical information, employ modified Hamiltonians to overcome exponential performance degradation with the system's size, and make use of the multi-scale nature of complex systems. Variants of GSHMC were developed for atomistic simulation, particle simulation and statistics: GSHMC (a thermodynamically consistent implementation of constant-temperature molecular dynamics), MTS-GSHMC (multiple-time-stepping GSHMC), meso-GSHMC (a Metropolis-corrected dissipative particle dynamics (DPD) method), and a generalized shadow Hamiltonian Monte Carlo, GSHmMC (a GSHMC for statistical simulations). All of these are compatible with other enhanced sampling techniques and suitable for massively parallel computing, allowing for a range of multi-level parallel strategies. A brief description of the GSHMC approach, examples of its application on high-performance computers, and comparisons with other existing techniques are given. Our approach is shown to resolve such problems as resonance instabilities of MTS methods and non-preservation of thermodynamic equilibrium properties in DPD, and to outperform known methods in sampling efficiency by an order of magnitude. (author)
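The distinguishing feature of the GSHMC family, a partial momentum update in place of full momentum resampling, can be sketched for a one-dimensional target. This toy version omits the modified (shadow) Hamiltonians and the associated reweighting of the actual method.

```python
import math
import random

def leapfrog(q, p, grad_u, eps, n_steps):
    """Leapfrog integration of Hamiltonian dynamics for H = U(q) + p^2/2."""
    p -= 0.5 * eps * grad_u(q)
    for _ in range(n_steps - 1):
        q += eps * p
        p -= eps * grad_u(q)
    q += eps * p
    p -= 0.5 * eps * grad_u(q)
    return q, p

def ghmc_step(q, p, u, grad_u, eps=0.1, n_steps=10, phi=0.3, rng=random):
    """One generalized HMC step with partial momentum update.

    p <- cos(phi)*p + sin(phi)*noise retains part of the old momentum
    (and hence dynamical information), unlike plain HMC's full resampling.
    On rejection the momentum is flipped to preserve detailed balance.
    """
    p = math.cos(phi) * p + math.sin(phi) * rng.gauss(0.0, 1.0)
    h0 = u(q) + 0.5 * p * p
    q_new, p_new = leapfrog(q, p, grad_u, eps, n_steps)
    h1 = u(q_new) + 0.5 * p_new * p_new
    if rng.random() < math.exp(min(0.0, h0 - h1)):
        return q_new, p_new    # accept
    return q, -p               # reject with momentum flip
```

For a standard Gaussian target (U(q) = q²/2) the chain reproduces unit variance while the leapfrog keeps the energy error small, which is why acceptance stays high.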
Kim, Shin Woong; Moon, Jongmin; An, Youn-Joo
2015-01-01
The success of soil toxicity tests using Caenorhabditis elegans may depend in large part on recovering the organisms from the soil. However, it can be difficult to learn the International Organization for Standardization/ASTM International recovery process that uses the colloidal silica flotation method. The present study determined that a soil-agar isolation method provides a highly efficient and less technically demanding alternative to the colloidal silica flotation method. Test soil containing C. elegans was arranged on an agar plate in a donut shape, a linear shape, or a C curve; and microbial food was placed outside the soil to encourage the nematodes to leave the soil. The effects of ventilation and the presence of food on nematode recovery were tested to determine the optimal conditions for recovery. A linear arrangement of soil on an agar plate that was sprinkled with microbial food produced nearly 83% and 90% recovery of live nematodes over a 3-h and a 24-h period, respectively, without subjecting the nematodes to chemical stress. The method was tested using copper (II) chloride dihydrate, and the resulting recovery rate was comparable to that obtained using colloidal silica flotation. The soil-agar isolation method portrayed in the present study enables live nematodes to be isolated with minimal additional physicochemical stress, making it a valuable option for use in subsequent sublethal tests where live nematodes are required. © 2014 SETAC.
Tahayori, B; Khaneja, N; Johnston, L A; Farrell, P M; Mareels, I M Y
2016-01-01
The design of slice selective pulses for magnetic resonance imaging can be cast as an optimal control problem. The Fourier synthesis method is an existing approach to solve these optimal control problems. In this method the gradient field as well as the excitation field are switched rapidly and their amplitudes are calculated based on a Fourier series expansion. Here, we provide a novel insight into the Fourier synthesis method via representing the Bloch equation in spherical coordinates. Based on the spherical Bloch equation, we propose an alternative sequence of pulses that can be used for slice selection which is more time efficient compared to the original method. Simulation results demonstrate that while the performance of both methods is approximately the same, the required time for the proposed sequence of pulses is half of the original sequence of pulses. Furthermore, the slice selectivity of both sequences of pulses changes with radio frequency field inhomogeneities in a similar way. We also introduce a measure, referred to as gradient complexity, to compare the performance of both sequences of pulses. This measure indicates that for a desired level of uniformity in the excited slice, the gradient complexity for the proposed sequence of pulses is less than the original sequence. Copyright © 2015 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
A Rapid and Efficient Screening Method for Antibacterial Compound-Producing Bacteria.
Hettiarachchi, Sachithra; Lee, Su-Jin; Lee, Youngdeuk; Kwon, Young-Kyung; De Zoysa, Mahanama; Moon, Song; Jo, Eunyoung; Kim, Taeho; Kang, Do-Hyung; Heo, Soo-Jin; Oh, Chulhong
2017-08-28
Antibacterial compounds are widely used in the treatment of human and animal diseases. The overuse of antibiotics has led to a rapid rise in the prevalence of drug-resistant bacteria, making the development of new antibacterial compounds essential. This study focused on developing a fast and easy method for identifying marine bacteria that produce antibiotic compounds. Eight randomly selected marine target bacterial species (Agrococcus terreus, Bacillus algicola, Mesoflavibacter zeaxanthinifaciens, Pseudoalteromonas flavipulchra, P. peptidolytica, P. piscicida, P. rubra, and Zunongwangia atlantica) were tested for production of antibacterial compounds against four strains of test bacteria (B. cereus, B. subtilis, Halomonas smyrnensis, and Vibrio alginolyticus). Colony picking was used as the primary screening method. Clear zones were observed around colonies of P. flavipulchra, P. peptidolytica, P. piscicida, and P. rubra tested against B. cereus, B. subtilis, and H. smyrnensis. The efficiency of colony scraping and broth culture methods for antimicrobial compound extraction was also compared using a disk diffusion assay. P. peptidolytica, P. piscicida, and P. rubra showed antagonistic activity against H. smyrnensis, B. cereus, and B. subtilis, respectively, only in the colony scraping method. Our results show that colony picking and colony scraping are effective, quick, and easy methods of screening for antibacterial compound-producing bacteria.
Study on efficient methods for removal and treatment of graphite blocks in a gas cooled reactor
International Nuclear Information System (INIS)
Fujii, S.; Shirakawa, M.; Murakami, T.
2001-01-01
Tokai Power Station (GCR, 166 MWe) started commercial operation in July 1966 and ceased activities at the end of March 1998, after 32 years of operation. Decommissioning plans are being developed to prepare for dismantling in the near future. In this study, methods for the removal of about 1,600 t of graphite blocks have been developed so that the work can be carried out safely and in a short period of time, and methods for the treatment of the graphite have also been developed. All technological items requiring R and D work for removal from the core and treatment for disposal have been identified. (1) In order to shorten the programme required for the dismantling of the reactor internals, an efficient method for removal of the graphite blocks is necessary. For this purpose, the design of a dismantling machine that can extract several blocks at a time has been investigated. The conceptual design has been developed, and a model has been manufactured and tested in a mock-up facility. (2) In order to reduce disposal costs, it will be necessary to segment the graphite blocks, maximising the packing density available in the disposal containers. Some of the graphite blocks will be cut into pieces longitudinally by a remote machine. Relevant technical matters have been identified, such as graphite cutting methods, the nature of the fine particles arising from the cutting operation, the treatment of fine particles for disposal, and the method of mortar filling inside the waste container. (author)
Efficiency of cleaning and disinfection of surfaces: correlation between assessment methods
Directory of Open Access Journals (Sweden)
Oleci Pereira Frota
Full Text Available ABSTRACT Objective: to assess the correlation among the ATP-bioluminescence assay, visual inspection and microbiological culture in monitoring the efficiency of cleaning and disinfection (C&D) of high-touch clinical surfaces (HTCS) in a walk-in emergency care unit. Method: a prospective and comparative study was carried out from March to June 2015, in which five HTCS were sampled before and after C&D by means of the three methods. The HTCS were considered dirty when dust, waste, humidity or stains were detected by visual inspection; when ≥2.5 colony-forming units per cm2 were found in culture; or when ≥5 relative light units per cm2 were found in the ATP-bioluminescence assay. Results: 720 analyses were performed, 240 per method. The overall rates of clean surfaces per visual inspection, culture and ATP-bioluminescence assay were 8.3%, 20.8% and 44.2% before C&D, and 92.5%, 50% and 84.2% after C&D, respectively (p<0.001). There were only occasional statistically significant relationships between methods. Conclusion: the methods did not present a good correlation, either quantitatively or qualitatively.
Evaluation of the Efficiency of Application Methods of Zinc and Iron for Canola (Brassica napus L.)
Directory of Open Access Journals (Sweden)
Ahmad BYBORDI
2010-03-01
Full Text Available In order to evaluate the efficiency of application methods of zinc and iron microelements in canola, an experiment was conducted in the Agricultural Research Station of Eastern Azerbaijan province in 2008. The experimental design was an RCBD with eight treatments (F1: control; F2: iron, F3: zinc, and F4: iron + zinc applied to the soil; F5: iron, F6: zinc, and F7: iron + zinc applied as foliar sprays; and F8: iron + zinc applied both to the soil and as a foliar spray). Analysis of variance showed significant differences among treatments for the given traits: antioxidant enzyme activity, fatty acid percentage, plant height, seed weight to capitulum weight ratio, protein percentage, oil percentage, oil yield, 1000-seed weight, seed yield, nitrogen, phosphorus and potassium percentage of leaves, zinc and iron content of leaves, and capitulum diameter. The highest seed yield, oil yield, oil percentage, 1000-seed weight, seed weight to capitulum weight ratio and protein percentage were obtained from the combined soil and foliar application of iron + zinc (F8). Also, the highest concentrations of nitrogen, phosphorus and potassium in leaves were achieved in the control treatment, an indication that iron and zinc did not enhance the absorption of these substances into the leaves. The correlations between traits affecting seed yield, such as capitulum diameter, number of seed rows in the capitulum, seed weight to capitulum weight ratio and 1000-seed weight, were positively significant. In general, combined foliar and soil application of zinc and iron had the highest efficiency with respect to seed production. The comparison of the various methods of fertilisation showed that foliar application was more effective than soil application. Also, micronutrient foliar application increased the concentration of elements, especially zinc and iron. Antioxidant enzyme activity also differed in response to the treatments.
An efficient parallel stochastic simulation method for analysis of nonviral gene delivery systems
Kuwahara, Hiroyuki
2011-01-01
Gene therapy has great potential to become an effective treatment for a wide variety of diseases. One of the main challenges in making gene therapy practical in clinical settings is the development of efficient and safe mechanisms to deliver foreign DNA molecules into the nucleus of target cells. Several computational and experimental studies have shown that the design process of synthetic gene transfer vectors can be greatly enhanced by computational modeling and simulation. This paper proposes a novel, effective parallelization of the stochastic simulation algorithm (SSA) for pharmacokinetic models that characterize the rate-limiting, multi-step processes of intracellular gene delivery. While efficient parallelizations of the SSA are still an open problem in a general setting, the proposed parallel simulation method is able to substantially accelerate the next-reaction selection scheme and the reaction update scheme in the SSA by exploiting and decomposing the structures of stochastic gene delivery models. This thus makes computationally intensive analyses, such as parameter optimization and gene dosage control for specific cell types, gene vectors, and transgene expression stability, substantially more practical than would otherwise be possible with the standard SSA. Here, we translated the nonviral gene delivery model based on mass-action kinetics by Varga et al. [Molecular Therapy, 4(5), 2001] into a more realistic model that captures intracellular fluctuations based on stochastic chemical kinetics, and as a case study we applied our parallel simulation to this stochastic model. Our results show that our simulation method is able to increase the efficiency of statistical analysis by at least 50% in various settings. © 2011 ACM.
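The next-reaction selection the paper accelerates is the core loop of Gillespie's algorithm. A minimal serial sketch, for a hypothetical three-compartment delivery chain (plasmid to endosome to nucleus, with made-up rates) rather than the Varga et al. model itself:

```python
import random

def ssa(rates, x0, t_end, seed=0):
    """Serial Gillespie SSA for a linear conversion chain
    X1 -> X2 -> X3 (a stand-in for multi-step intracellular
    delivery; the rates are hypothetical)."""
    rng = random.Random(seed)
    x = list(x0)
    t = 0.0
    while t < t_end:
        # Propensity of step i: rate_i times the population of species i
        props = [rates[i] * x[i] for i in range(len(rates))]
        total = sum(props)
        if total == 0.0:
            break  # no reaction can fire any more
        t += rng.expovariate(total)   # waiting time to the next reaction
        r = rng.random() * total      # next-reaction selection
        acc = 0.0
        for i, p in enumerate(props):
            acc += p
            if r < acc:
                x[i] -= 1             # one molecule of species i converts
                x[i + 1] += 1
                break
    return x

# 100 plasmids traverse the whole chain when run to completion
final = ssa(rates=[1.0, 0.5], x0=[100, 0, 0], t_end=1e9, seed=42)
```

The paper's contribution is to parallelize exactly the selection and update steps above by decomposing the model structure; this serial loop is the baseline being accelerated.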
Efficiency determination of an electrostatic lunar dust collector by discrete element method
Afshar-Mohajer, Nima; Wu, Chang-Yu; Sorloaica-Hickman, Nicoleta
2012-07-01
Lunar grains become charged by the sun's radiation in the tenuous atmosphere of the moon. This leads to lunar dust levitation and particle deposition, which often create serious problems in the costly systems deployed in lunar exploration. In this study, an electrostatic lunar dust collector (ELDC) is proposed to address the issue, and the discrete element method (DEM) is used to investigate the effects of electrical particle-particle interactions, non-uniformity of the electrostatic field, and characteristics of the ELDC. The simulations on 20-μm-sized lunar particles reveal that the electrical particle-particle interactions of the dust particles within the ELDC plates require 29% higher electrostatic field strength than that without the interactions for 100% collection efficiency. For the given ELDC geometry, consideration of non-uniformity of the electrostatic field along with electrical interactions between particles on the same ELDC geometry leads to a higher requirement of ~3.5 kV/m to ensure 100% particle collection. Notably, such an electrostatic field is about 10³ times less than that required for electrodynamic self-cleaning methods. Finally, it is shown for a "half-size" system that the DEM model predicts greater collection efficiency than the Eulerian-based model at all voltages less than that required for 100% efficiency. Halving the ELDC dimensions boosts the particle concentration inside the ELDC, as well as the resulting field strength for a given voltage. Though a lunar photovoltaic system was the subject, the results of this study are useful for the evaluation of any system for collecting charged particles in other high-vacuum environments using an electrostatic field.
An efficient implementation of parallel molecular dynamics method on SMP cluster architecture
International Nuclear Information System (INIS)
Suzuki, Masaaki; Okuda, Hiroshi; Yagawa, Genki
2003-01-01
The authors have applied the MPI/OpenMP hybrid parallel programming model to parallelize a molecular dynamics (MD) method on a symmetric multiprocessor (SMP) cluster architecture. On that architecture, the hybrid parallel programming model, which uses a message passing library such as MPI for inter-SMP-node communication and loop directives such as OpenMP for intra-SMP-node parallelization, can be expected to be the most effective one. In this study, the parallel performance of the hybrid style has been compared with that of the conventional flat parallel programming style, which uses only MPI, both with and without the fast multipole method (FMM) for computing long-range interactions. The computing environment used here is the Hitachi SR8000/MPP at the University of Tokyo. The results are as follows. Without FMM, the parallel efficiency using 16 SMP nodes (128 PEs) is 90% with the hybrid style and 75% with the flat-MPI style for an MD simulation with 33,402 atoms. With FMM, the parallel efficiency using 16 SMP nodes (128 PEs) is 60% with the hybrid style and 48% with the flat-MPI style for an MD simulation with 117,649 atoms. (author)
An algorithm to improve sampling efficiency for uncertainty propagation using sampling based method
International Nuclear Information System (INIS)
Campolina, Daniel; Lima, Paulo Rubens I.; Pereira, Claubia; Veloso, Maria Auxiliadora F.
2015-01-01
Sample size and computational uncertainty were varied in order to investigate sampling efficiency and convergence of the sampling-based method for uncertainty propagation. The transport code MCNPX was used to simulate an LWR model and allow the mapping from uncertain inputs of the benchmark experiment to uncertain outputs. Random sampling efficiency was improved through the use of an algorithm for selecting distributions. Mean range, standard deviation range and skewness were verified in order to obtain a better representation of the uncertainty figures. A standard deviation of 5 pcm in the propagated uncertainties over 10 n-sample replicates was adopted as the convergence criterion for the method. An estimate of 75 pcm uncertainty on the reactor k_eff was accomplished by using a sample of size 93 and a computational uncertainty of 28 pcm to propagate the 1σ uncertainty of the burnable poison radius. For a fixed computational time, in order to reduce the variance of the propagated uncertainty, it was found, for the example under investigation, that it is preferable to double the sample size rather than to double the number of particles followed by the Monte Carlo process in the MCNPX code. (author)
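The sampling-based propagation itself is simple to illustrate: draw the uncertain input from its distribution, push each draw through the model, and take the spread of the outputs. The surrogate model and all numbers below are hypothetical stand-ins for the MCNPX calculation:

```python
import random
import statistics

def propagate(n_samples, seed=1):
    """Toy sampling-based uncertainty propagation: sample the
    uncertain input, run a surrogate model, and report the standard
    deviation of the outputs (all values are hypothetical)."""
    rng = random.Random(seed)
    mu, sigma = 0.20, 0.01          # burnable-poison radius (cm), 1-sigma
    outputs = []
    for _ in range(n_samples):
        r = rng.gauss(mu, sigma)    # one realization of the uncertain input
        keff = 1.0 - 2.5 * r ** 2   # surrogate model (stand-in for MCNPX)
        outputs.append(keff)
    return statistics.stdev(outputs)

small = propagate(30)     # noisy estimate of the propagated 1-sigma
large = propagate(3000)   # larger sample: the estimate stabilizes
```

The paper's trade-off lives in `n_samples` versus the per-sample cost: with a fixed budget, increasing `n_samples` reduces the variance of the propagated-uncertainty estimate more effectively than tightening each individual model run.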
A novel method for efficient archiving and retrieval of biomedical images using MPEG-7
Meyer, Joerg; Pahwa, Ash
2004-10-01
Digital archiving and efficient retrieval of radiological scans have become critical steps in contemporary medical diagnostics. Since more and more images and image sequences (single scans or video) from various modalities (CT/MRI/PET/digital X-ray) are now available in digital formats (e.g., DICOM-3), hospitals and radiology clinics need to implement efficient protocols capable of managing the enormous amounts of data generated daily in a typical clinical routine. We present a method that appears to be a viable way to eliminate the tedious step of manually annotating image and video material for database indexing. MPEG-7 is a new framework that standardizes the way images are characterized in terms of color, shape, and other abstract, content-related criteria. A set of standardized descriptors that are automatically generated from an image is used to compare an image to other images in a database, and to compute the distance between two images for a given application domain. Text-based database queries can be replaced with image-based queries using MPEG-7. Consequently, image queries can be conducted without any prior knowledge of the keys that were used as indices in the database. Since the decoding and matching steps are not part of the MPEG-7 standard, this method also enables searches that were not planned by the time the keys were generated.
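The descriptor-and-distance idea can be sketched with a deliberately simplified descriptor, a normalized intensity histogram; real MPEG-7 descriptors such as ScalableColor are standardized and far richer, and the matching step (as the abstract notes) is outside the standard:

```python
import math

def histogram_descriptor(image, bins=4, max_val=256):
    """Simplified content descriptor: normalized intensity histogram
    of a grayscale image given as a list of pixel rows (illustrative
    only; not an actual MPEG-7 descriptor)."""
    counts = [0] * bins
    flat = [p for row in image for p in row]
    for p in flat:
        counts[p * bins // max_val] += 1
    n = len(flat)
    return [c / n for c in counts]

def descriptor_distance(d1, d2):
    """Euclidean distance between two descriptors, usable as the
    similarity score behind an image-based database query."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

dark = [[10, 20], [30, 40]]
light = [[200, 210], [220, 230]]
d_same = descriptor_distance(histogram_descriptor(dark), histogram_descriptor(dark))
d_diff = descriptor_distance(histogram_descriptor(dark), histogram_descriptor(light))
```

An image query then amounts to ranking all stored descriptors by their distance to the query's descriptor, with no manually assigned text keys involved.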
Directory of Open Access Journals (Sweden)
Deepa Devasenapathy
2015-01-01
Full Text Available The traffic in the road network is increasing progressively. Good knowledge of network traffic can minimize congestion using information pertaining to the road network obtained with the aid of communal callers, pavement detectors, and so on. Using these methods, low-featured information is generated with respect to the user in the road network. Although the existing schemes obtain urban traffic information, they fail to calculate the energy drain rate of nodes and to locate an equilibrium between the overhead and the quality of the routing protocol, which poses a great challenge. Thus, an energy-efficient cluster-based vehicle detection in road networks using the intention numeration method (CVDRN-IN) is developed. Initially, sensor nodes that detect a vehicle are grouped into separate clusters. Further, we approximate the strength of the node drain rate for a cluster using a polynomial regression function. In addition, the total node energy is estimated by taking the integral over the area. Finally, enhanced data aggregation is performed to reduce the amount of data transmission using a digital signature tree. The experimental performance is evaluated with the Dodgers loop sensor data set from the UCI repository, and the evaluation outperforms existing work on energy consumption, clustering efficiency, and node drain rate.
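The drain-rate approximation step can be sketched with an ordinary quadratic least-squares fit; the battery readings below are hypothetical, and the exact regression used by CVDRN-IN is not specified in the abstract:

```python
def polyfit2(ts, ys):
    """Fit y = a*t^2 + b*t + c by solving the 3x3 normal equations,
    a minimal stand-in for the polynomial-regression step that
    estimates a cluster's energy drain rate (data are hypothetical)."""
    n = len(ts)
    # Power sums needed for the normal equations X^T X beta = X^T y
    S = [sum(t ** k for t in ts) for k in range(5)]
    A = [[S[4], S[3], S[2]],
         [S[3], S[2], S[1]],
         [S[2], S[1], n]]
    rhs = [sum(y * t ** 2 for t, y in zip(ts, ys)),
           sum(y * t for t, y in zip(ts, ys)),
           sum(ys)]
    # Gauss-Jordan elimination (the normal matrix is positive definite,
    # so no pivoting is needed for this small system)
    for i in range(3):
        piv = A[i][i]
        A[i] = [v / piv for v in A[i]]
        rhs[i] /= piv
        for j in range(3):
            if j != i:
                f = A[j][i]
                A[j] = [vj - f * vi for vj, vi in zip(A[j], A[i])]
                rhs[j] -= f * rhs[i]
    return rhs  # coefficients [a, b, c]

# Hypothetical battery level following 100 - 3t - 0.05 t^2 (% vs hours)
ts = [0, 1, 2, 3, 4, 5]
ys = [100 - 3 * t - 0.05 * t ** 2 for t in ts]
a, b, c = polyfit2(ts, ys)
```

The fitted polynomial can then be differentiated to get the instantaneous drain rate at any time, which is what a cluster head would report upstream.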
An efficient method for in vitro callus induction in Myrciaria dubia (Kunth) McVaugh "Camu Camu"
Directory of Open Access Journals (Sweden)
Ana M. Córdova
2014-03-01
Full Text Available Due to the high variability in vitamin C production in Myrciaria dubia "camu camu", biotechnological procedures are necessary for mass clonal propagation of promising genotypes of this species. The aim was to establish an efficient method for in vitro callus induction from explants of M. dubia. Leaf and knot explants were obtained from branches grown in the laboratory, and pulp from fruits collected in the field. These were disinfected and sown on Murashige-Skoog (1962) medium supplemented with 2,4-dichlorophenoxyacetic acid (2,4-D), benzylaminopurine (BAP) and kinetin (Kin). The cultures were maintained at 25±2°C in darkness for 2 weeks and subsequently with a photoperiod of 16 hours of light and 8 hours of darkness for 6 weeks. Treatment with 2 mg/L 2,4-D and 0.1 mg/L BAP allowed the greatest callus formation in the three types of explants. Calluses were generated from the first week (knots), fourth week (leaves) and sixth week (pulp), and these were friable (leaves and knots) and non-friable (pulp). In conclusion, the described method is efficient for in vitro callus induction in leaves, knots and pulp of M. dubia, with leaf and knot explants being the most suitable for obtaining callus.
Directory of Open Access Journals (Sweden)
H. Wan
2014-09-01
Full Text Available This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is lower, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring the sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model, version 5. In the first example, the method is used to characterize sensitivities of the simulated clouds to time-step length. Results show that 3-day ensembles of 20 to 50 members are sufficient to reproduce the main signals revealed by traditional 5-year simulations. A nudging technique is applied to an additional set of simulations to help understand the contribution of physics–dynamics interaction to the detected time-step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol life cycle are perturbed simultaneously in order to find out which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. It turns out that 12-member ensembles of 10-day simulations are able to reveal the same sensitivities as seen in 4-year simulations performed in a previous study. In both cases, the ensemble method reduces the total computational time by a factor of about 15, and the turnaround time by a factor of several hundred. The efficiency of the method makes it particularly useful for the development of
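The statistical idea, that many independent short runs can replace one long serial run, can be illustrated on a toy autocorrelated series (an AR(1) process standing in for a "fast" model statistic; all parameters are made up):

```python
import random

def ar1_mean(n_steps, rng, phi=0.8):
    """Time average of an AR(1) series x_{t+1} = phi*x_t + noise,
    a toy stand-in for a fast-process climate statistic."""
    x, total = 0.0, 0.0
    for _ in range(n_steps):
        x = phi * x + rng.gauss(0.0, 1.0)
        total += x
    return total / n_steps

rng = random.Random(7)
# One long serial "climate" integration...
long_run = ar1_mean(200_000, rng)
# ...versus an ensemble of short, independent members using the same
# total number of steps, which could all run in parallel
ensemble = [ar1_mean(2_000, rng) for _ in range(100)]
ensemble_mean = sum(ensemble) / len(ensemble)
```

Both estimators converge to the same statistic for the same total cost, but the ensemble members are embarrassingly parallel, which is exactly the turnaround-time advantage the paper exploits (real GCM ensembles also need care with initial-condition spin-up, which the toy ignores).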
Moustafa, Sabry Gad Al-Hak Mohammad
Molecular simulation (MS) methods (e.g. Monte Carlo (MC) and molecular dynamics (MD)) provide a reliable tool (especially at extreme conditions) for measuring solid properties. However, measuring them accurately and efficiently (smallest uncertainty for a given time) using MS can be a big challenge, especially with ab initio-type models. In addition, comparing with experimental results by extrapolating properties from finite size to the thermodynamic limit can be a critical obstacle. We first estimate the free energy (FE) of a crystalline system of a simple discontinuous potential, hard spheres (HS), at its melting condition. Several approaches are explored to determine the most efficient route. The comparison study shows a considerable improvement in efficiency over the standard MS methods that are known for solid phases. In addition, we were able to accurately extrapolate to the thermodynamic limit using relatively small system sizes. Although the method is applied to the HS model, it is readily extended to more complex hard-body potentials, such as hard tetrahedra. The harmonic approximation of the potential energy surface is usually an accurate model (especially at low temperature and high density) for describing many realistic solid phases. In addition, since the analysis is done numerically, the method is relatively cheap. Here, we apply lattice dynamics (LD) techniques to get the FE of clathrate hydrate structures. A rigid-bond model is assumed to describe the water molecules; this, however, requires additional orientational degrees of freedom in order to specify each molecule. Nevertheless, we were able to efficiently avoid using those degrees of freedom through a mathematical transformation that only uses the atomic coordinates of the water molecules. In addition, the proton-disorder nature of hydrate water networks adds extra complexity to the problem, especially when extrapolation to the thermodynamic limit is needed. The finite-size effects of the proton disorder contribution is
An efficient modularized sample-based method to estimate the first-order Sobol' index
International Nuclear Information System (INIS)
Li, Chenzhao; Mahadevan, Sankaran
2016-01-01
Sobol' index is a prominent methodology in global sensitivity analysis. This paper aims to directly estimate the Sobol' index based only on available input–output samples, even if the underlying model is unavailable. For this purpose, a new method to calculate the first-order Sobol' index is proposed. The innovation is that the conditional variance and mean in the formula of the first-order index are calculated at an unknown but existing location of model inputs, instead of an explicit user-defined location. The proposed method is modularized in two aspects: 1) index calculations for different model inputs are separate and use the same set of samples; and 2) model input sampling, model evaluation, and index calculation are separate. Due to this modularization, the proposed method is capable of computing the first-order index if only input–output samples are available but the underlying model is unavailable, and its computational cost is not proportional to the dimension of the model inputs. In addition, the proposed method can also estimate the first-order index with correlated model inputs. Considering that the first-order index is a desired metric to rank model inputs but current methods can only handle independent model inputs, the proposed method helps to fill this gap. - Highlights: • An efficient method to estimate the first-order Sobol' index. • Estimates the index from input–output samples directly. • Computational cost is not proportional to the number of model inputs. • Handles both uncorrelated and correlated model inputs.
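For contrast with the sample-only approach proposed here, the classic pick-freeze estimator of the first-order index, which does require evaluating the model at specially paired inputs, can be sketched as follows (the test model and all numbers are illustrative):

```python
import random

def first_order_sobol(f, dim, n, seed=0):
    """Classic pick-freeze estimator of first-order Sobol' indices.
    Unlike the paper's method, it needs fresh model evaluations at a
    paired sampling design (two independent input matrices A and B)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [f(a) for a in A]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n
    indices = []
    for i in range(dim):
        # C_i: column i frozen from A, every other column taken from B
        yC = [f([a[j] if j == i else b[j] for j in range(dim)])
              for a, b in zip(A, B)]
        cov = sum(ya * yc for ya, yc in zip(yA, yC)) / n - mean ** 2
        indices.append(cov / var)  # Var(E[Y|X_i]) / Var(Y)
    return indices

# Additive test model Y = X1 + 2*X2, X_i ~ U(0,1): exact S = [0.2, 0.8]
S = first_order_sobol(lambda x: x[0] + 2.0 * x[1], dim=2, n=20_000, seed=1)
```

The cost of this design grows with the input dimension (one extra matrix of model runs per input), which is precisely the proportionality the modularized sample-based method avoids.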
Energy Technology Data Exchange (ETDEWEB)
Kurnik, Charles W. [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Stewart, James [Cadmus, Waltham, MA (United States); Todd, Annika [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2017-11-01
Residential behavior-based (BB) programs use strategies grounded in the behavioral and social sciences to influence household energy use. These may include providing households with real-time or delayed feedback about their energy use; supplying energy efficiency education and tips; rewarding households for reducing their energy use; comparing households to their peers; and establishing games, tournaments, and competitions. BB programs often target multiple energy end uses and encourage energy savings, demand savings, or both. Savings from BB programs are usually a small percentage of energy use, typically less than 5 percent. Utilities will continue to implement residential BB programs as large-scale, randomized control trials (RCTs); however, some are now experimenting with alternative program designs that are smaller scale; involve new communication channels such as the web, social media, and text messaging; or that employ novel strategies for encouraging behavior change (for example, Facebook competitions). These programs will create new evaluation challenges and may require different evaluation methods than those currently employed to verify any savings they generate. Quasi-experimental methods, however, require stronger assumptions to yield valid savings estimates and may not measure savings with the same degree of validity and accuracy as randomized experiments.
Efficient digitalization method for dental restorations using micro-CT data.
Kim, Changhwan; Baek, Seung Hoon; Lee, Taewon; Go, Jonggun; Kim, Sun Young; Cho, Seungryong
2017-03-15
The objective of this study was to demonstrate the feasibility of using micro-CT scans of dental impressions for fabricating dental restorations and to compare the dimensional accuracy of dental models generated by various methods. The key idea of the proposed protocol is that a patient's dental impression can be accurately digitized by micro-CT scanning and that a digital cast model can be made directly from the micro-CT data. As the air regions of the micro-CT scan data of a dental impression are equivalent to the real teeth and surrounding structures, one can segment the air regions and fabricate a digital cast model in the STL format from them. The proposed method was validated by a phantom study using a typodont with prepared teeth. Actual measurements and deviation map analysis were performed after acquiring digital cast models for each restoration method. Comparisons of the milled restorations were also performed by placing them on the prepared teeth of the typodont. The results demonstrated that efficient fabrication of precise dental restorations is achievable by use of the proposed method.
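The segmentation step can be sketched as a simple threshold on a voxel array; the threshold value and the tiny synthetic "scan" below are hypothetical, and a real pipeline would add connected-component filtering and surface extraction to STL:

```python
def segment_air(volume, threshold=300):
    """Label voxels below a CT-number threshold as air (1) and the
    rest as impression material (0). In the workflow above, the air
    region of an impression scan is what becomes the digital cast
    (the threshold here is a made-up illustration value)."""
    return [[[1 if v < threshold else 0 for v in row]
             for row in plane] for plane in volume]

# Tiny synthetic 2x2x2 scan: low values = air, high = impression material
scan = [[[100, 900], [120, 850]],
        [[80, 950], [110, 870]]]
mask = segment_air(scan)
air_voxels = sum(v for plane in mask for row in plane for v in row)
```

From such a binary mask, a surface-meshing step (e.g. marching cubes in a dedicated library) would produce the STL cast model described in the abstract.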
Directory of Open Access Journals (Sweden)
Xiaotang Hu
2015-01-01
Full Text Available Cell staining is a necessary and useful technique for visualizing cell morphology and structure under a microscope. This technique has been used in many areas such as cytology, hematology, oncology, histology, virology, serology, microbiology, cell biology, and immunochemistry. One of the key pieces of equipment for preparing a slide for cell staining is a cytology centrifuge (cytocentrifuge), such as a cytospin. However, many small labs do not have this expensive equipment and its accessory, cytoclips (also relatively expensive), which makes it difficult for them to study cell cytology. Here we present an alternative method for preparing a slide and staining cells in the absence of a cytocentrifuge (and cytoclips). This method is based on the principle that a regular cell centrifuge can be used to concentrate cells harvested from cell culture and then deposit the concentrated cell suspension evenly onto a slide using a cell spreader, followed by cell staining. The method presented is simple, rapid, economic, and efficient. This method may also avoid a possible change in cell morphology induced by the cytocentrifuge.
Set-up and methods for SiPM Photo-Detection Efficiency measurements
International Nuclear Information System (INIS)
Zappalà, G.; Acerbi, F.; Ferri, A.; Gola, A.; Paternoster, G.; Zorzi, N.; Piemonte, C.
2016-01-01
In this work, a compact set-up and three different methods to measure the Photo-Detection Efficiency (PDE) of Silicon Photomultipliers (SiPMs) and Single-Photon Avalanche Diodes (SPADs) are presented. The methods, based on either continuous or pulsed light illumination, are discussed in detail and compared in terms of measurement precision and time. For the SiPM, these methods have the feature of minimizing the effect of both the primary and correlated noise on the PDE estimation. The PDE of SiPMs (produced at FBK, Trento, Italy) was measured in a range from UV to NIR, obtaining similar results with all the methods. Furthermore, the advantages of measuring, when possible, the PDE of SPADs (of the same technology and with the same layout as a single SiPM cell) instead of larger devices are also discussed, and a direct comparison between measurement results is shown. Using a SPAD, it is possible to reduce the measurement complexity and uncertainty, since the correlated noise sources are reduced with respect to the SiPM case.
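One common pulsed-light technique, estimating the mean photoelectron number from the Poisson zero-peak so that crosstalk and afterpulsing cannot bias it, can be sketched as follows (the counts are made up, and this is the general technique rather than necessarily this paper's exact procedure):

```python
import math

def photoelectron_mean(n_zero, n_total):
    """Mean number of detected photoelectrons per trigger from the
    fraction of empty events, assuming Poisson statistics:
    mu = -ln(N_zero / N_total). Correlated noise only adds extra
    pulses to non-empty events, so the zero-peak fraction is immune."""
    return -math.log(n_zero / n_total)

def pde(n_zero_light, n_zero_dark, n_total, photons_per_pulse):
    """Photo-detection efficiency from pulsed-light counting; the
    dark-run term subtracts the dark-count contamination
    (all counts below are hypothetical illustration values)."""
    mu_light = photoelectron_mean(n_zero_light, n_total)
    mu_dark = photoelectron_mean(n_zero_dark, n_total)
    return (mu_light - mu_dark) / photons_per_pulse

eff = pde(n_zero_light=1353, n_zero_dark=9900, n_total=10000,
          photons_per_pulse=5.0)
```

With 1353 empty light triggers out of 10000, mu_light is about 2.0 photoelectrons per pulse; dividing the dark-corrected mean by the calibrated 5 photons per pulse yields a PDE near 40%.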
Omar, Mahmoud A.; Mohamed, Abdel-Maaboud I.; Derayea, Sayed M.; Hammad, Mohamed A.; Mohamed, Abobakr A.
2018-04-01
A new, selective and sensitive spectrofluorimetric method was designed for the quantitation of doxazosin (DOX), terazosin (TER) and alfuzosin (ALF) in their dosage forms and in human plasma. The method adopts efficient derivatization of the studied drugs with ortho-phthalaldehyde (OPA), in the presence of 2-mercaptoethanol in borate buffer (pH 9.7), to generate highly fluorescent isoindole derivatives, which strongly enhance the fluorescence intensities of the studied drugs, allowing their sensitive determination at 430 nm after excitation at 337 nm. The fluorescence-concentration plots were rectilinear over the range 10.0-400.0 ng/mL. Detection and quantification limits were found to be 0.52-3.88 and 1.59-11.76 ng/mL, respectively. The proposed method was validated according to ICH guidelines, and successfully applied for the determination of pharmaceutical preparations of the studied drugs. Moreover, the high sensitivity of the proposed method permits its successful application to the analysis of the studied drugs in spiked human plasma, with % recovery of 96.12 ± 1.34-100.66 ± 0.57 (n = 3). A proposal for the reaction mechanism is presented.
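Detection and quantification limits of the kind quoted above are conventionally obtained from the calibration line via the ICH formulas LOD = 3.3σ/S and LOQ = 10σ/S, with σ the residual standard deviation and S the slope. A sketch with hypothetical fluorescence readings (not the paper's data):

```python
def linear_fit(x, y):
    """Least-squares line y = slope*x + intercept for a calibration curve."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

# Hypothetical fluorescence intensities over the 10-400 ng/mL range
conc = [10, 50, 100, 200, 300, 400]        # ng/mL
intensity = [12, 52, 102, 201, 299, 401]   # arbitrary units
slope, intercept = linear_fit(conc, intensity)

# ICH-style limits from the residual standard deviation of the fit
residuals = [yi - (slope * xi + intercept) for xi, yi in zip(conc, intensity)]
sigma = (sum(r ** 2 for r in residuals) / (len(conc) - 2)) ** 0.5
lod = 3.3 * sigma / slope   # limit of detection, ng/mL
loq = 10.0 * sigma / slope  # limit of quantification, ng/mL
```

By construction LOQ/LOD is always 10/3.3, roughly a factor of three, which matches the ratio between the two ranges reported in the abstract.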
Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L
2016-01-01
Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge, as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical/molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We also describe the use of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases relevant to enzymes. © 2016 Elsevier Inc. All rights reserved.
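The simplest FES estimator that methods like non-Boltzmann Bennett generalize is Zwanzig's free energy perturbation, dF = -kT ln <exp(-dU/kT)> over samples from the reference state. A toy check on a shifted harmonic well, where the exact answer is dF = 0 because both states have the same curvature (kT = 1 throughout, a deliberately trivial stand-in for a QM/MM energy difference):

```python
import math
import random

def fep_delta_f(delta_u_samples):
    """Zwanzig free energy perturbation with kT = 1:
    dF = -ln < exp(-dU) >_0, averaged over configurations
    sampled from the reference state 0."""
    n = len(delta_u_samples)
    return -math.log(sum(math.exp(-du) for du in delta_u_samples) / n)

# State 0: U0(x) = x^2/2.  State 1: the same well shifted by d,
# U1(x) = (x - d)^2/2.  Equal curvatures imply an exact dF of 0.
rng = random.Random(3)
d = 0.5
samples = [rng.gauss(0.0, 1.0) for _ in range(50_000)]  # Boltzmann of U0
delta_u = [(x - d) ** 2 / 2 - x ** 2 / 2 for x in samples]
dF = fep_delta_f(delta_u)
```

The exponential average converges poorly when the two states overlap badly, which is exactly the sampling problem the chapter's Bennett-type and nonequilibrium-work estimators are designed to tame.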
A robust, efficient and flexible method for staining myelinated axons in blocks of brain tissue.
Wahlsten, Douglas; Colbourne, Frederick; Pleus, Richard
2003-03-15
Previous studies have demonstrated the utility of the gold chloride method for en bloc staining of a bisected brain in mice and rats. The present study explores several variations in the method, assesses its reliability, and extends the limits of its application. We conclude that the method is very efficient, highly robust, sufficiently accurate for most purposes, and adaptable to many morphometric measures. We obtained acceptable staining of commissures in every brain, despite a wide variety of fixation methods. One half could be stained 24 h after the brain was extracted and the other half could be stained months later. When staining failed because of an exhausted solution, the brain could be stained successfully in fresh solution. Relatively small changes were found in the sizes of commissures several weeks after initial fixation or staining. A half brain stained to reveal the mid-sagittal section could then be sectioned coronally and stained again in either gold chloride for myelin or cresyl violet for Nissl substance. Uncertainty arising from pixelation of digitized images was far less than errors arising from human judgments about the histological limits of major commissures. Useful data for morphometric analysis were obtained by scanning the surface of a gold chloride stained block of brain with an inexpensive flatbed scanner.
Bayram, B.; Erdem, F.; Akpinar, B.; Ince, A. K.; Bozkurt, S.; Catal Reis, H.; Seker, D. Z.
2017-11-01
Coastal monitoring plays a vital role in environmental planning and hazard management. Since shorelines are fundamental data for environment management, disaster management, coastal erosion studies, modelling of sediment transport, and coastal morphodynamics, various techniques have been developed to extract them. Random Forest, a machine learning method based on decision trees, is the technique used for shoreline extraction in this study. Decision trees analyse classes of training data and create rules for classification. The Terkos region was chosen for the proposed method within the scope of the TUBITAK Project (Project No: 115Y718) titled "Integration of Unmanned Aerial Vehicles for Sustainable Coastal Zone Monitoring Model - Three-Dimensional Automatic Coastline Extraction and Analysis: Istanbul-Terkos Example". The Random Forest algorithm was implemented to extract the shoreline of the Black Sea near the lake from LANDSAT-8 and GOKTURK-2 satellite imagery taken in 2015. The MATLAB environment was used for classification. To obtain land and water-body classes, the Random Forest method was applied to the NIR bands of the LANDSAT-8 (5th band) and GOKTURK-2 (4th band) imagery. Each image was digitized manually to obtain reference shorelines for accuracy assessment. According to the accuracy assessment results, the Random Forest method is efficient for shoreline extraction from both medium and high resolution images.
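The classification idea in this abstract can be sketched in miniature. The following is a hypothetical pure-Python illustration, not the study's MATLAB code: a "forest" of bootstrap-trained decision stumps votes on land vs. water from single-band NIR values (water reflects little NIR, so low digital numbers suggest water). All pixel values and labels below are invented toy data.

```python
import random

def train_stump(samples):
    """Pick the midpoint threshold that best separates a bootstrap sample.
    samples: list of (nir_value, label) pairs, label 'water' or 'land'."""
    values = sorted(v for v, _ in samples)
    best_t, best_err = values[0], float("inf")
    for lo, hi in zip(values, values[1:]):
        t = (lo + hi) / 2.0
        # a pixel is called 'land' when its NIR value exceeds the threshold
        err = sum((v > t) != (lab == "land") for v, lab in samples)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def train_forest(samples, n_trees=25, seed=0):
    rng = random.Random(seed)
    # each tree sees a bootstrap resample of the training pixels
    return [train_stump([rng.choice(samples) for _ in samples])
            for _ in range(n_trees)]

def classify(forest, nir_value):
    votes = sum(nir_value > t for t in forest)  # stumps vote 'land'
    return "land" if votes > len(forest) / 2 else "water"

# Toy training pixels: (NIR digital number, class label)
training = [(12, "water"), (18, "water"), (25, "water"),
            (120, "land"), (135, "land"), (150, "land")]
forest = train_forest(training)
print(classify(forest, 20), classify(forest, 140))  # -> water land
```

A real workflow would train on many bands and thousands of labelled pixels; the voting and bootstrap resampling shown here are the essential Random Forest ingredients.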
Automatic and efficient methods applied to the binarization of a subway map
Durand, Philippe; Ghorbanzadeh, Dariush; Jaupi, Luan
2015-12-01
The purpose of this paper is the study of efficient methods for image binarization, applied to metro maps. The goal is to binarize the maps while preventing noise from disturbing the reading of the subway stations. Different methods have been tested; among them, Otsu's method gives particularly interesting results. The difficulty of binarization lies in the choice of the threshold, so that the reconstructed image stays as close as possible to reality. Vectorization is the step subsequent to binarization: it retrieves the coordinates of the points containing information and stores them in two matrices, X and Y. Subsequently, these matrices can be exported to a 'CSV' (Comma-Separated Values) file, enabling us to process them in a variety of software, including Excel. The algorithm requires considerable computation time in Matlab because it is composed of two nested 'for' loops; such loops are poorly supported by Matlab, especially when nested, which penalizes the computation time, but this seems to be the only way to proceed.
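Otsu's method, mentioned above, selects the binarization threshold automatically by maximizing the between-class variance of the grey-level histogram. A minimal Python sketch (not the paper's MATLAB implementation; the toy pixel values are invented):

```python
def otsu_threshold(pixels):
    """Return the grey level (0-255) maximizing between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * hist[i] for i in range(256))
    sum_bg, w_bg, best_t, best_var = 0.0, 0, 0, -1.0
    for t in range(256):
        w_bg += hist[t]                 # background (<= t) pixel count
        if w_bg == 0:
            continue
        w_fg = total - w_bg             # foreground (> t) pixel count
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Bimodal toy "image": dark subway-line pixels vs. light background
pixels = [10, 12, 11, 9, 200, 205, 198, 202, 210, 13]
t = otsu_threshold(pixels)
binary = [1 if p > t else 0 for p in pixels]
```

On a real map the same histogram pass runs over every pixel of the scanned image; no nested search over thresholds and pixels together is needed.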
Efficient numerical methods for the large-scale, parallel solution of elastoplastic contact problems
Frohne, Jörg; Heister, Timo; Bangerth, Wolfgang
2015-08-06
© 2016 John Wiley & Sons, Ltd. Quasi-static elastoplastic contact problems are ubiquitous in many industrial processes and other contexts, and their numerical simulation is consequently of great interest in accurately describing and optimizing production processes. The key component in these simulations is the solution of a single load step of a time iteration. From a mathematical perspective, the problems to be solved in each time step are characterized by the difficulties of variational inequalities for both the plastic behavior and the contact problem. Computationally, they also often lead to very large problems. In this paper, we present and evaluate a complete set of methods that are (1) designed to work well together and (2) allow for the efficient solution of such problems. In particular, we use adaptive finite element meshes with linear and quadratic elements, a Newton linearization of the plasticity, active set methods for the contact problem, and multigrid-preconditioned linear solvers. Through a sequence of numerical experiments, we show the performance of these methods. This includes highly accurate solutions of a three-dimensional benchmark problem and scaling our methods in parallel to 1024 cores and more than a billion unknowns.
Directory of Open Access Journals (Sweden)
Mohammad Osama
2014-06-01
Full Text Available Pleurotus ostreatus, a white rot fungus, is capable of bioremediating a wide range of organic contaminants, including Polycyclic Aromatic Hydrocarbons (PAHs). Ergosterol is produced by living fungal biomass and used as a measure of fungal biomass. The first part of this work deals with the extraction and quantification of PAHs from contaminated sediments by the Lipid Extraction Method (LEM). The second part consists of the development of a novel extraction method, the Ergosterol Extraction Method (EEM), followed by quantification and bioremediation. The novelty of this method is the simultaneous extraction and quantification of two different types of compounds, a sterol (ergosterol) and PAHs, and it is more efficient than LEM. EEM successfully extracted ergosterol from the fungus grown on barley at concentrations of 17.5-39.94 µg g-1, and quantified many more PAHs, in both number and amount, than LEM. In addition, cholesterol, usually found in animals, was also detected in the fungus P. ostreatus at easily detectable levels.
Tejos, Nicolas; Rodríguez-Puebla, Aldo; Primack, Joel R.
2018-01-01
We present a simple, efficient and robust approach to improve cosmological redshift measurements. The method is based on the presence of a reference sample for which a precise redshift number distribution (dN/dz) can be obtained for different pencil-beam-like sub-volumes within the original survey. For each sub-volume we then impose that: (i) the redshift number distribution of the uncertain redshift measurements matches the reference dN/dz corrected by their selection functions and (ii) the rank order in redshift of the original ensemble of uncertain measurements is preserved. The latter step is motivated by the fact that random variables drawn from Gaussian probability density functions (PDFs) of different means and arbitrarily large standard deviations satisfy stochastic ordering. We then repeat this simple algorithm for multiple arbitrary pencil-beam-like overlapping sub-volumes; in this manner, each uncertain measurement has multiple (non-independent) 'recovered' redshifts which can be used to estimate a new redshift PDF. We refer to this method as the Stochastic Order Redshift Technique (SORT). We have used a state-of-the-art N-body simulation to test the performance of SORT under simple assumptions and found that it can improve the quality of cosmological redshifts in a robust and efficient manner. Particularly, SORT redshifts (zsort) are able to recover the distinctive features of the so-called 'cosmic web' and can provide unbiased measurement of the two-point correlation function on scales ≳4 h-1Mpc. Given its simplicity, we envision that a method like SORT can be incorporated into more sophisticated algorithms aimed to exploit the full potential of large extragalactic photometric surveys.
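The rank-preserving recovery step described above can be sketched as follows. This is a simplified illustration under assumed inputs, not the authors' implementation: noisy redshifts in one sub-volume are replaced by draws from the reference dN/dz, assigned in the same rank order as the noisy estimates, so stochastic ordering is respected.

```python
import random

def sort_recover(noisy_z, reference_z, rng):
    """Replace noisy redshifts by reference draws, preserving rank order."""
    draws = sorted(rng.sample(reference_z, len(noisy_z)))
    order = sorted(range(len(noisy_z)), key=lambda i: noisy_z[i])
    recovered = [0.0] * len(noisy_z)
    for rank, i in enumerate(order):
        recovered[i] = draws[rank]   # i-th smallest noisy z gets i-th smallest draw
    return recovered

rng = random.Random(42)
# hypothetical reference dN/dz for one sub-volume, as a list of redshifts
reference = [0.10, 0.12, 0.15, 0.20, 0.22, 0.30, 0.31, 0.40]
noisy = [0.28, 0.09, 0.41]           # uncertain measurements in the same volume
rec = sort_recover(noisy, reference, rng)
# ranks are preserved: rec[1] < rec[0] < rec[2], and each value comes from reference
```

Repeating this over many overlapping sub-volumes gives each object multiple recovered redshifts, from which the new redshift PDF is estimated.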
Energy Technology Data Exchange (ETDEWEB)
Wan, Hui; Rasch, Philip J.; Zhang, Kai; Qian, Yun; Yan, Huiping; Zhao, Chun
2014-09-08
This paper explores the feasibility of an experimentation strategy for investigating sensitivities in fast components of atmospheric general circulation models. The basic idea is to replace the traditional serial-in-time long-term climate integrations by representative ensembles of shorter simulations. The key advantage of the proposed method lies in its efficiency: since fewer days of simulation are needed, the computational cost is less, and because individual realizations are independent and can be integrated simultaneously, the new dimension of parallelism can dramatically reduce the turnaround time in benchmark tests, sensitivity studies, and model tuning exercises. The strategy is not appropriate for exploring sensitivity of all model features, but it is very effective in many situations. Two examples are presented using the Community Atmosphere Model version 5. The first example demonstrates that the method is capable of characterizing the model cloud and precipitation sensitivity to time step length. A nudging technique is also applied to an additional set of simulations to help understand the contribution of physics-dynamics interaction to the detected time step sensitivity. In the second example, multiple empirical parameters related to cloud microphysics and aerosol lifecycle are perturbed simultaneously in order to explore which parameters have the largest impact on the simulated global mean top-of-atmosphere radiation balance. Results show that in both examples, short ensembles are able to correctly reproduce the main signals of model sensitivities revealed by traditional long-term climate simulations for fast processes in the climate system. The efficiency of the ensemble method makes it particularly useful for the development of high-resolution, costly and complex climate models.
New Design Methods And Algorithms For High Energy-Efficient And Low-cost Distillation Processes
Energy Technology Data Exchange (ETDEWEB)
Agrawal, Rakesh [Purdue Univ., West Lafayette, IN (United States)
2013-11-21
This project sought and successfully answered two big challenges facing the creation of low-energy, cost-effective, zeotropic multi-component distillation processes: first, identification of an efficient search space that includes all the useful distillation configurations and no undesired configurations; second, development of an algorithm to search the space efficiently and generate an array of low-energy options for industrial multi-component mixtures. Such mixtures are found in large-scale chemical and petroleum plants. Commercialization of our results was addressed by building a user interface allowing practical application of our methods for industrial problems by anyone with basic knowledge of distillation for a given problem. We also provided our algorithm to a major U.S. Chemical Company for use by the practitioners. The successful execution of this program has provided methods and algorithms at the disposal of process engineers to readily generate low-energy solutions for a large class of multicomponent distillation problems in a typical chemical and petrochemical plant. In a petrochemical complex, the distillation trains within crude oil processing, hydrotreating units containing alkylation, isomerization, reformer, LPG (liquefied petroleum gas) and NGL (natural gas liquids) processing units can benefit from our results. Effluents from naphtha crackers and ethane-propane crackers typically contain mixtures of methane, ethylene, ethane, propylene, propane, butane and heavier hydrocarbons. We have shown that our systematic search method with a more complete search space, along with the optimization algorithm, has a potential to yield low-energy distillation configurations for all such applications with energy savings up to 50%.
An efficient soil water balance model based on hybrid numerical and statistical methods
Mao, Wei; Yang, Jinzhong; Zhu, Yan; Ye, Ming; Liu, Zhao; Wu, Jingwei
2018-04-01
Most soil water balance models only consider downward soil water movement driven by gravitational potential, and thus cannot simulate upward soil water movement driven by evapotranspiration especially in agricultural areas. In addition, the models cannot be used for simulating soil water movement in heterogeneous soils, and usually require many empirical parameters. To resolve these problems, this study derives a new one-dimensional water balance model for simulating both downward and upward soil water movement in heterogeneous unsaturated zones. The new model is based on a hybrid of numerical and statistical methods, and only requires four physical parameters. The model uses three governing equations to consider three terms that impact soil water movement, including the advective term driven by gravitational potential, the source/sink term driven by external forces (e.g., evapotranspiration), and the diffusive term driven by matric potential. The three governing equations are solved separately by using the hybrid numerical and statistical methods (e.g., linear regression method) that consider soil heterogeneity. The four soil hydraulic parameters required by the new model are as follows: saturated hydraulic conductivity, saturated water content, field capacity, and residual water content. The strength and weakness of the new model are evaluated by using two published studies, three hypothetical examples and a real-world application. The evaluation is performed by comparing the simulation results of the new model with corresponding results presented in the published studies, obtained using HYDRUS-1D and observation data. The evaluation indicates that the new model is accurate and efficient for simulating upward soil water flow in heterogeneous soils with complex boundary conditions. The new model is used for evaluating different drainage functions, and the square drainage function and the power drainage function are recommended. Computational efficiency of the new
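The separate treatment of the three governing terms can be illustrated with a toy operator-split update. This is a hypothetical sketch, not the authors' scheme; the parameter names and values are invented, and the diffusion step is a simple explicit smoothing standing in for the matric-potential term.

```python
def step(theta, drain_frac, et_sink, diff_coef, theta_r, theta_s):
    """One operator-split update of a layered water-content profile."""
    n = len(theta)
    # 1) advective term: a fraction of the drainable water moves down one layer
    flux = [drain_frac * max(t - theta_r, 0.0) for t in theta]
    theta = [min(theta_s, theta[i] - flux[i] + (flux[i - 1] if i else 0.0))
             for i in range(n)]
    # 2) source/sink term: evapotranspiration removes water from the top layer
    theta[0] = max(theta_r, theta[0] - et_sink)
    # 3) diffusive term: explicit smoothing toward neighbouring layers
    new = theta[:]
    for i in range(n):
        lo = theta[i - 1] if i > 0 else theta[i]
        hi = theta[i + 1] if i < n - 1 else theta[i]
        new[i] = theta[i] + diff_coef * (lo - 2 * theta[i] + hi)
    return new

profile = [0.30, 0.20, 0.15, 0.15]    # volumetric water content per layer
out = step(profile, drain_frac=0.1, et_sink=0.02, diff_coef=0.2,
           theta_r=0.05, theta_s=0.45)
```

Splitting this way lets each term use its own solver, which is the structural point of the hybrid approach; only residual water content (theta_r) and saturated water content (theta_s) map directly onto two of the four physical parameters named in the abstract.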
Directory of Open Access Journals (Sweden)
Alejo J Irigoyen
Full Text Available Underwater visual census (UVC) is the most common approach for estimating diversity, abundance and size of reef fishes in shallow and clear waters. Abundance estimation through UVC is particularly problematic in species occurring at low densities and/or highly aggregated, because of their high variability at both spatial and temporal scales. The statistical power of experiments involving UVC techniques may be increased by augmenting the number of replicates or the area surveyed. In this work we present and test the efficiency of a UVC method based on diver-towed GPS, the Tracked Roaming Transect (TRT), designed to maximize transect length (and thus the surveyed area) with respect to diving time invested in monitoring, as compared to Conventional Strip Transects (CST). Additionally, we analyze the effect of increasing transect width and length on the precision of density estimates by comparing TRT vs. CST methods using different fixed widths of 6 and 20 m (FW3 and FW10, respectively) and the Distance Sampling (DS) method, in which the perpendicular distance of each fish or group of fishes to the transect line is estimated by divers up to 20 m from the transect line. The TRT was 74% more time and cost efficient than the CST (all transect widths considered together) and, for a given time, the use of TRT and/or increasing the transect width increased the precision of density estimates. In addition, since with the DS method distances of fishes to the transect line have to be estimated, and not measured directly as in terrestrial environments, errors in estimations of perpendicular distances can seriously affect DS density estimations. To assess the occurrence of distance estimation errors and their dependence on the observer's experience, a field experiment using wooden fish models was performed. We tested the precision and accuracy of density estimators based on fixed widths and the DS method. The accuracy of the estimates was measured comparing the actual
Sadeghifar, Hamidreza
2015-10-01
Developing general methods that rely on column data for the efficiency estimation of operating (existing) distillation columns has been overlooked in the literature. Most of the available methods are based on empirical mass transfer and hydraulic relations correlated to laboratory data. Therefore, these methods may not be sufficiently accurate when applied to industrial columns. In this paper, an applicable and accurate method was developed for the efficiency estimation of distillation columns filled with trays. This method can calculate efficiency as well as mass and heat transfer coefficients without using any empirical mass transfer or hydraulic correlations and without the need to estimate operational or hydraulic parameters of the column. For example, the method does not need to estimate the tray interfacial area, which can be its most important advantage over all the available methods. The method can be used for the efficiency prediction of any trays in distillation columns. For the efficiency calculation, the method employs the column data and uses the true rates of the mass and heat transfers occurring inside the operating column. It is highly emphasized that estimating the efficiency of an operating column has to be distinguished from that of a column being designed.
Teramoto, Reiji; Saito, Chiaki; Funahashi, Shin-ichi
2014-06-30
Knockdown or overexpression of genes is widely used to identify genes that play important roles in many aspects of cellular functions and phenotypes. Because next-generation sequencing generates high-throughput data that allow us to detect genes, it is important to identify genes that drive functional and phenotypic changes of cells. However, conventional methods rely heavily on the assumption of normality and they often give incorrect results when the assumption is not true. To relax the Gaussian assumption in causal inference, we introduce the non-paranormal method to test conditional independence in the PC-algorithm. Then, we present the non-paranormal intervention-calculus when the directed acyclic graph (DAG) is absent (NPN-IDA), which incorporates the cumulative nature of effects through a cascaded pathway via causal inference for ranking causal genes against a phenotype with the non-paranormal method for estimating DAGs. We demonstrate that causal inference with the non-paranormal method significantly improves the performance in estimating DAGs on synthetic data in comparison with the original PC-algorithm. Moreover, we show that NPN-IDA outperforms the conventional methods in exploring regulators of the flowering time in Arabidopsis thaliana and regulators that control the browning of white adipocytes in mice. Our results show that performance improvement in estimating DAGs contributes to an accurate estimation of causal effects. Although the simplest alternative procedure was used, our proposed method enables us to design efficient intervention experiments and can be applied to a wide range of research purposes, including drug discovery, because of its generality.
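The core of the non-paranormal relaxation is a rank-based Gaussianization of each variable, after which Gaussian-based conditional-independence tests become applicable. A minimal sketch of that transform (an assumed simplification for illustration, not the authors' code):

```python
from statistics import NormalDist

def gaussianize(x):
    """Rank-based transform: x_i -> Phi^{-1}(rank_i / (n + 1))."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    nd = NormalDist()
    z = [0.0] * n
    for rank, i in enumerate(order, start=1):
        z[i] = nd.inv_cdf(rank / (n + 1))   # Gaussian score for this rank
    return z

# A heavily right-skewed sample becomes symmetric after the transform
sample = [1, 2, 4, 8, 16, 32, 64]
z = gaussianize(sample)
```

Because the transform is monotone, it preserves the ordering (and hence the dependence structure) of each variable while removing the marginal non-Gaussianity that breaks the PC-algorithm's normality assumption.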
An Efficient Acoustic Density Estimation Method with Human Detectors Applied to Gibbons in Cambodia.
Directory of Open Access Journals (Sweden)
Darren Kidney
Full Text Available Some animal species are hard to see but easy to hear. Standard visual methods for estimating population density for such species are often ineffective or inefficient, but methods based on passive acoustics show more promise. We develop spatially explicit capture-recapture (SECR) methods for territorial vocalising species, in which humans act as an acoustic detector array. We use SECR and estimated bearing data from a single-occasion acoustic survey of a gibbon population in northeastern Cambodia to estimate the density of calling groups. The properties of the estimator are assessed using a simulation study, in which a variety of survey designs are also investigated. We then present a new form of the SECR likelihood for multi-occasion data which accounts for the stochastic availability of animals. In the context of gibbon surveys this allows model-based estimation of the proportion of groups that produce territorial vocalisations on a given day, thereby enabling the density of groups, instead of the density of calling groups, to be estimated. We illustrate the performance of this new estimator by simulation. We show that it is possible to estimate density reliably from human acoustic detections of visually cryptic species using SECR methods. For gibbon surveys we also show that incorporating observers' estimates of bearings to detected groups substantially improves estimator performance. Using the new form of the SECR likelihood we demonstrate that estimates of availability, in addition to population density and detection function parameters, can be obtained from multi-occasion data, and that the detection function parameters are not confounded with the availability parameter. This acoustic SECR method provides a means of obtaining reliable density estimates for territorial vocalising species. It is also efficient in terms of data requirements since it only requires routine survey data. We anticipate that the low-tech field requirements will
Comparison of different testing methods for gas fired domestic boiler efficiency determination
International Nuclear Information System (INIS)
De Paepe, M.; T'Joen, C.; Huisseune, H.; Van Belleghem, M.; Kessen, V.
2013-01-01
As the Energy Performance of Buildings Directive is being implemented throughout the European Union, a clear need for certification of boilers and domestic heating devices has arisen. Several ‘Notified Bodies’ exist, spread around the different member states. They act as the notified body of their member state and focus on local certification. A boiler manufacturer has its equipment tested according to the ‘Boiler Efficiency directive 92/42/EC’. Recently, tests done in sequence by several notified bodies on an identical unit from a manufacturer showed that results could differ depending on which notified body performed the test. In cooperation with ‘Technigas’ (the Notified Body in Belgium), a detailed study was made of the measurement setup and devices for determining boiler efficiencies. Several aspects were studied: measurement devices (absolute or differential types), their location within the test setup (focussing on accuracy and their overall impact on the result) and the measurement strategy (measuring on the primary or the secondary water side). The study was performed for both full load and part load scenarios of a gas fired domestic boiler (smaller than 70 kW [4]). The results clearly indicate that temperature measurements are critical for assessing boiler efficiency. Secondly, a test setup using secondary circuit measurements should be preferred. Tests were performed at ‘Technigas’ on different setups in order to validate the findings. - Highlights: ► Labelling of boilers is now required by European standards. ► Error propagation is analysed for different methods of boiler performance testing. ► Secondary water side measurement with separate calibration has the highest quality. ► A sensitivity analysis showed that the water temperatures are important factors.
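The role of error propagation in such an analysis can be illustrated with a first-order uncertainty budget for a water-side efficiency measurement. All values and sensor uncertainties below are hypothetical, not taken from the study; the point is that the two temperature sensors dominate the combined uncertainty.

```python
import math

def efficiency_uncertainty(m_dot, cp, t_in, t_out, q_in, u_m, u_t, u_q):
    """First-order propagation for eta = m_dot*cp*(T_out - T_in)/Q_in."""
    eta = m_dot * cp * (t_out - t_in) / q_in
    d_m = cp * (t_out - t_in) / q_in       # d(eta)/d(m_dot)
    d_t = m_dot * cp / q_in                # |d(eta)/dT| for each temperature sensor
    d_q = -eta / q_in                      # d(eta)/d(Q_in)
    u_eta = math.sqrt((d_m * u_m) ** 2 + 2 * (d_t * u_t) ** 2
                      + (d_q * u_q) ** 2)
    return eta, u_eta

# Hypothetical full-load point: 0.25 kg/s water, 20 K rise, 23 kW gas input
eta, u = efficiency_uncertainty(m_dot=0.25, cp=4186.0, t_in=303.15,
                                t_out=323.15, q_in=23000.0,
                                u_m=0.001, u_t=0.1, u_q=100.0)
```

Running this with differential instead of absolute temperature sensing would replace the two independent u_t terms with a single, usually smaller, uncertainty on the temperature difference, which is one way the choice of measurement device changes the result.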
Akshey, Y S; Malakar, D; De, A K; Jena, M K; Sahu, S; Dutta, R
2011-08-01
The present investigation was carried out to find an efficient chemically assisted procedure for enucleation of goat oocytes related to handmade cloning (HMC) technique. After 22-h in vitro maturation, oocytes were incubated with 0.5 μg/ml demecolcine for 2 h. Cumulus cells were removed by pipetting and vortexing in 0.5 mg/ml hyaluronidase, and zona pellucida were digested with pronase. Oocytes with extrusion cones were subjected to oriented bisection. One-third of the cytoplasm with the extrusion cone was removed with a micro blade. The remaining cytoplasts were used as recipients in HMC. Goat foetal fibroblasts were used as nuclear donors. The overall efficiency measured as the number of cytoplasts obtained per total number of oocytes used was significantly (p < 0.05) higher in chemically assisted handmade enucleation (CAHE) than oriented handmade enucleation without demecolcine (OHE) (80.02 ± 1.292% vs. 72.9 ± 1.00%, respectively, mean ± SEM). The reconstructed and activated embryos were cultured in embryo development medium (EDM) for 7 days. Fusion, cleavage and blastocyst development rate were 71.63 ± 1.95%, 92.94 ± 0.91% and 23.78 ± 3.33% (mean ± SEM), respectively which did not differ significantly from those achieved with random handmade enucleation and OHE. In conclusion, chemically assisted enucleation is a highly efficient and reliable enucleation method for goat HMC which eliminates the need of expensive equipment (inverted fluorescence microscope) and potentially harmful chromatin staining and ultraviolet (UV) irradiation for cytoplast selection. © 2010 Blackwell Verlag GmbH.
International Nuclear Information System (INIS)
Cheng Yingsheng; Yang Renjie; Li Minghua; Chen Weixiong; Shang Kezhong; Zhuang Qixin; Xu Jianrong; Chen Niwei; Zhu Yude
2000-01-01
Objective: To evaluate method selection and the mid-term and long-term therapeutic efficiency of three interventional procedures for achalasia. Method: 50 achalasia cases were treated under fluoroscopy: 30 with balloon dilation (group A), 5 with permanent metallic internal stent dilation (group B), and 15 with temporary metallic internal stent dilation (group C). Results: The 30 cases of group A underwent 56 dilations (mean 1.9 per case). The mean diameter of the cardia was (2.4 ± 1.2) mm before dilation and (9.7 ± 3.0) mm after dilation. The mean dysphagia scores were 2.4 ± 1.2 grades before dilation and 1.0 ± 0.3 grades after dilation. Complications in the 30 cases included chest pain (n = 9), reflux (n = 8) and bleeding (n = 3). 18 (60%) of the 30 cases showed dysphagia relapse during follow-up over 6 months, and 18 (90%) of 20 cases during follow-up over 12 months. 5 uncovered expandable metal stents were permanently placed in the 5 cases of group B. The mean diameter of the cardia was (3.2 ± 2.0) mm before dilation and (18.4 ± 1.7) mm after dilation. The mean dysphagia scores were (2.4 ± 1.1) grades before dilation and (0.4 ± 0.2) grades after dilation. Complications in the 5 cases included chest pain (n = 3), reflux (n = 4), bleeding (n = 1) and hyperplasia of granulation tissue (n = 2). 3 (60%) of the 5 cases showed dysphagia relapse during follow-up over 6 months, and 1 (50%) of 2 cases during follow-up over 12 months. 15 covered expandable metal stents were temporarily placed in the 15 cases of group C and withdrawn 3-7 days later via gastroscopy. The mean diameter of the cardia was (3.4 ± 2.9) mm before dilation and (14.7 ± 2.9) mm after dilation. The mean dysphagia scores were (2.5 ± 1.1) grades before dilation and (0.6 ± 0.3) grades after dilation. Complications in the 15 cases included chest pain (n = 3), reflux (n = 3) and bleeding (n = 2). 3 (20%) of the 15 cases showed dysphagia relapse during follow-up over 6 months
Bai, Yu; Iwasaki, Yuki; Kanaya, Shigehiko; Zhao, Yue; Ikemura, Toshimichi
2014-01-01
With remarkable increase of genomic sequence data of a wide range of species, novel tools are needed for comprehensive analyses of the big sequence data. Self-Organizing Map (SOM) is an effective tool for clustering and visualizing high-dimensional data such as oligonucleotide composition on one map. By modifying the conventional SOM, we have previously developed Batch-Learning SOM (BLSOM), which allows classification of sequence fragments according to species, solely depending on the oligonucleotide composition. In the present study, we introduce the oligonucleotide BLSOM used for characterization of vertebrate genome sequences. We first analyzed pentanucleotide compositions in 100 kb sequences derived from a wide range of vertebrate genomes and then the compositions in the human and mouse genomes in order to investigate an efficient method for detecting differences between the closely related genomes. BLSOM can recognize the species-specific key combination of oligonucleotide frequencies in each genome, which is called a "genome signature," and the specific regions specifically enriched in transcription-factor-binding sequences. Because the classification and visualization power is very high, BLSOM is an efficient powerful tool for extracting a wide range of information from massive amounts of genomic sequences (i.e., big sequence data).
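The competitive-learning idea behind SOM can be shown with a toy 1-D map over two-dimensional inputs. This is a sketch of the standard online SOM rule, not the BLSOM batch variant itself, and the "composition" vectors are invented: nodes compete for each input, and the winner plus its map neighbours move toward it.

```python
def bmu(nodes, x):
    """Index of the best-matching unit (closest node) for input x."""
    return min(range(len(nodes)),
               key=lambda j: sum((w - v) ** 2 for w, v in zip(nodes[j], x)))

def som_train(data, n_nodes=4, epochs=30, lr=0.3, radius=1):
    # initialise node weights from the first inputs (deterministic toy choice)
    nodes = [list(data[i % len(data)]) for i in range(n_nodes)]
    for _ in range(epochs):
        for x in data:
            b = bmu(nodes, x)
            for j, node in enumerate(nodes):
                if abs(j - b) <= radius:        # winner and its map neighbours
                    for k in range(len(x)):
                        node[k] += lr * (x[k] - node[k])
    return nodes

# Two well-separated clusters of hypothetical 2-D composition vectors
data = [(0.1, 0.9), (0.12, 0.88), (0.9, 0.1), (0.88, 0.12)]
nodes = som_train(data)
# inputs from different clusters end up mapping to different best-matching units
```

In the genomic setting each input is a high-dimensional oligonucleotide-frequency vector per 100 kb fragment, and the trained map clusters fragments by genome signature; the batch-learning modification makes the result independent of input order.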
Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas
2009-10-01
This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three dimensional mask-induced imaging artifacts in EUV lithography. The major mask diffraction induced imaging artifacts are first identified by applying the Zernike analysis of the mask nearfield spectrum of 2D lines/spaces. Three dimensional mask features like 22nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts such as feature orientation dependent best focus shift, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers to understand the reasons of EUV-specific imaging artifacts and to devise illumination and feature dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. Finally, an efficient approach using the Zernike analysis together with the Waveguide decomposition technique is proposed to characterize the impact of mask properties for the future OPC process.
A General Catalytic Method for Highly Cost- and Atom-Efficient Nucleophilic Substitutions.
Huy, Peter H; Filbrich, Isabel
2018-05-23
A general formamide-catalyzed protocol for the efficient transformation of alcohols into alkyl chlorides, promoted by substoichiometric amounts (down to 34 mol %) of inexpensive trichlorotriazine (TCT), is introduced. This is the first example of a TCT-mediated dehydroxychlorination of an OH-containing substrate (e.g., alcohols and carboxylic acids) in which all three chlorine atoms of TCT are transferred to the starting material. The consequently enhanced atom economy facilitates a significantly improved waste balance (E-factors down to 4), cost efficiency, and scalability (>50 g). Furthermore, the current procedure is distinguished by high levels of functional-group compatibility and stereoselectivity, as only weakly acidic cyanuric acid is released as the exclusive byproduct. Finally, a one-pot protocol for the preparation of amines, azides, ethers, and sulfides enabled the synthesis of the drug rivastigmine with twofold SN2 inversion, which demonstrates the high practical value of the presented method. © 2018 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
A strategy for improved computational efficiency of the method of anchored distributions
Over, Matthew William; Yang, Yarong; Chen, Xingyuan; Rubin, Yoram
2013-06-01
This paper proposes a strategy for improving the computational efficiency of model inversion using the method of anchored distributions (MAD) by "bundling" similar model parametrizations in the likelihood function. Inferring the likelihood function typically requires a large number of forward model (FM) simulations for each possible model parametrization; as a result, the process is quite expensive. To ease this prohibitive cost, we present an approximation for the likelihood function, called bundling, that relaxes the requirement for high quantities of FM simulations. This approximation redefines the conditional statement of the likelihood function as the probability that a set of similar model parametrizations (a "bundle") replicates field measurements, which we show is neither a model reduction nor a sampling approach to improving the computational efficiency of model inversion. To evaluate the effectiveness of these modifications, we compare the quality of predictions and the computational cost of bundling relative to a baseline MAD inversion of 3-D flow and transport model parameters. Additionally, to aid understanding of the implementation, we provide a tutorial for bundling in the form of a sample data set and a script for the R statistical computing language. For our synthetic experiment, bundling achieved a 35% reduction in overall computational cost and had a limited negative impact on the predicted probability distributions of the model parameters. Strategies for minimizing error in the bundling approximation, for enforcing similarity among the sets of model parametrizations, and for identifying convergence of the likelihood function are also presented.
METHOD OF DETERMINING ECONOMICAL EFFICIENCY OF HOUSING STOCK RECONSTRUCTION IN A CITY
Directory of Open Access Journals (Sweden)
Petreneva Ol’ga Vladimirovna
2016-03-01
Full Text Available The demand for comfortable housing has always been very high. Building density differs between regions, and sometimes there is no land for new housing construction, especially in the central districts of cities. Moreover, many cities retain cultural and historical centres that define the historical appearance of the city, so new construction is impossible in these areas. At the same time, owing to physical depreciation and obsolescence, the operating life of many buildings comes to an end and they fall into disrepair. In such cases the question arises of reconstructing the existing residential, public and industrial buildings. The aim of reconstruction is to bring the existing worn-out building stock into correspondence with technical, social and sanitary requirements and with living standards and conditions. The authors consider the relevance of and reasons for the reconstruction of residential buildings, and attempt to answer the question of which is more economically efficient: new construction or reconstruction of residential buildings. The article offers a method to calculate the efficiency of residential building reconstruction.
Efficient Multilevel and Multi-index Sampling Methods in Stochastic Differential Equations
Haji-Ali, Abdul Lateef
2016-05-22
Most problems in engineering and natural sciences involve parametric equations in which the parameters are not known exactly due to measurement errors, lack of measurement data, or even intrinsic variability. In such problems, one objective is to compute point or aggregate values, called "quantities of interest". A rapidly growing research area that tries to tackle this problem is Uncertainty Quantification (UQ). As the name suggests, UQ aims to accurately quantify the uncertainty in quantities of interest. To that end, the approach followed in this thesis is to describe the parameters using probabilistic measures and then to employ probability theory to approximate the probabilistic information of the quantities of interest. In this approach, the parametric equations must be accurately solved for multiple values of the parameters to explore the dependence of the quantities of interest on these parameters, using various so-called "sampling methods". In almost all cases, the parametric equations cannot be solved exactly and suitable numerical discretization methods are required. The high computational complexity of these numerical methods, coupled with the fact that the parametric equations must be solved for multiple values of the parameters, makes UQ problems computationally intensive, particularly when the dimensionality of the underlying problem and/or the parameter space is high. This thesis is concerned with optimizing existing sampling methods and developing new ones. Starting with the Multilevel Monte Carlo (MLMC) estimator, we first prove its asymptotic normality using the Lindeberg-Feller central limit theorem. We then design the Continuation Multilevel Monte Carlo (CMLMC) algorithm that efficiently approximates the parameters required to run MLMC. We also optimize the hierarchies of one-dimensional discretization parameters that are used in MLMC and analyze the tolerance splitting parameter between the statistical error and the bias constraints. An important contribution
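The telescoping idea behind MLMC can be shown on a toy problem: estimate E[P] when level-l sampling carries a discretization error that halves with each level. The payoff below (U² plus an artificial O(2^-l) error term) is an assumption for the demo, not a problem from the thesis; the key point is that the fine and coarse evaluations on each level share the same random sample, so the level differences have small variance and need few samples.

```python
import random

def mlmc(L, N, seed=0):
    """Multilevel Monte Carlo estimate of E[P_L] via the telescoping sum
    E[P_0] + sum_{l=1..L} E[P_l - P_{l-1}].  Toy payoff: P_l(U) = U**2 + U*2**-l,
    i.e. the exact quantity U**2 plus a discretization error halving per level."""
    rng = random.Random(seed)
    est = 0.0
    for l in range(L + 1):
        acc = 0.0
        for _ in range(N[l]):
            u = rng.random()                       # SAME sample drives fine and coarse
            fine = u * u + u * 2.0 ** (-l)
            if l == 0:
                acc += fine                        # coarsest level: plain Monte Carlo
            else:
                coarse = u * u + u * 2.0 ** (-(l - 1))
                acc += fine - coarse               # low-variance level difference
        est += acc / N[l]
    return est

# Geometrically decreasing sample counts: most of the work sits on the cheap coarse level.
estimate = mlmc(L=6, N=[4000, 2000, 1000, 500, 250, 125, 64])
```

The true value at the finest level is 1/3 + 2^-7; the estimator recovers it with far fewer fine-level solves than a single-level Monte Carlo run of comparable accuracy would need.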
Cotton Water Use Efficiency under Two Different Deficit Irrigation Scheduling Methods
Directory of Open Access Journals (Sweden)
Jeffrey T. Baker
2015-08-01
Full Text Available Declines in the Ogallala aquifer levels used for irrigation have prompted research to identify methods for optimizing the water use efficiency (WUE) of cotton (Gossypium hirsutum L.). In this experiment, conducted at Lubbock, TX, USA in 2014, our objective was to test two canopy-temperature-based stress indices, each at two different irrigation trigger set points: the Stress Time (ST) method with irrigation triggers set at 5.5 h (ST_5.5) and 8.5 h (ST_8.5), and the Crop Water Stress Index (CWSI) method with irrigation triggers set at 0.3 (CWSI_0.3) and 0.6 (CWSI_0.6). When these irrigation triggers were exceeded on a given day, the crop was deficit irrigated with 5 mm of water via subsurface drip tape. Also included in the experimental design were a well-watered (WW) control irrigated at 110% of potential evapotranspiration and a dryland (DL) treatment that relied on rainfall only. Seasonal crop water use ranged from 353 to 625 mm across these six treatments. As expected, cotton lint yield increased with increasing crop water use, but lint yield WUE displayed a significant (p ≤ 0.05) peak near 3.6 to 3.7 kg ha−1 mm−1 for the ST_5.5 and CWSI_0.3 treatments, respectively. Our results suggest that WUE may be optimized in cotton with less water than that needed for maximum lint yield.
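An empirical CWSI trigger of the kind used in the experiment can be sketched as follows; the lower and upper canopy-air temperature-difference baselines here are illustrative placeholders, not the calibrated values from the study.

```python
def cwsi(canopy_temp_c, air_temp_c, dt_lower=-2.0, dt_upper=5.0):
    """Empirical Crop Water Stress Index: 0 = well watered, 1 = fully stressed.
    dt_lower / dt_upper are the non-stressed and non-transpiring canopy-air
    temperature differences (illustrative values, NOT the study's baselines)."""
    dt = canopy_temp_c - air_temp_c
    return max(0.0, min(1.0, (dt - dt_lower) / (dt_upper - dt_lower)))

def needs_irrigation(canopy_temp_c, air_temp_c, trigger=0.3):
    """Apply the daily deficit irrigation (e.g. 5 mm) when CWSI exceeds the
    set point, mirroring the CWSI_0.3 / CWSI_0.6 treatments."""
    return cwsi(canopy_temp_c, air_temp_c) > trigger
```

A canopy running 2 °C warmer than the air would trigger irrigation at the 0.3 set point but not at 0.6 with these baselines, which is the mechanism behind the two deficit treatments.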
Efficient implicit LES method for the simulation of turbulent cavitating flows
International Nuclear Information System (INIS)
Egerer, Christian P.; Schmidt, Steffen J.; Hickel, Stefan; Adams, Nikolaus A.
2016-01-01
We present a numerical method for efficient large-eddy simulation of compressible liquid flows with cavitation based on an implicit subgrid-scale model. Phase change and subgrid-scale interface structures are modeled by a homogeneous mixture model that assumes local thermodynamic equilibrium. Unlike previous approaches, emphasis is placed on operating on a small stencil (at most four cells). The truncation error of the discretization is designed to function as a physically consistent subgrid-scale model for turbulence. We formulate a sensor functional that detects shock waves or pseudo-phase boundaries within the homogeneous mixture model for localizing numerical dissipation. In smooth regions of the flow field, a formally non-dissipative central discretization scheme is used in combination with a regularization term to model the effect of unresolved subgrid scales. The new method is validated by computing standard single- and two-phase test-cases. Comparison of results for a turbulent cavitating mixing layer obtained with the new method demonstrates its suitability for the target applications.
Shojaeefard, Mohammad Hasan; Khalkhali, Abolfazl; Yarmohammadisatri, Sadegh
2017-06-01
The main purpose of this paper is to propose a new method for designing the Macpherson suspension, based on Sobol indices in terms of Pearson correlation, which determine the importance of each member in the behaviour of the vehicle suspension. The formulation of the dynamic analysis of the Macpherson suspension system is developed using the suspension members as modified links in order to achieve the desired kinematic behaviour. The mechanical system is replaced with equivalent constrained links, and kinematic laws are then utilised to obtain a new modified geometry of the Macpherson suspension. The equivalent mechanism of the Macpherson suspension increases the speed of analysis and reduces its complexity. The ADAMS/CAR software is utilised to simulate a full vehicle, a Renault Logan car, in order to analyse the accuracy of the modified geometry model. An experimental 4-poster test rig is used to validate both the ADAMS/CAR simulation and the analytical geometry model. The Pearson correlation coefficient is applied to analyse the sensitivity of each suspension member with respect to vehicle objective functions such as sprung mass acceleration. In addition, the estimation of the Pearson correlation coefficient between variables is analysed in this method. It is understood that the Pearson correlation coefficient provides an efficient method for analysing the vehicle suspension, which leads to a better design of the Macpherson suspension system.
Directory of Open Access Journals (Sweden)
Alejandro C Crespo
Full Text Available Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. The computational demands of simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or graphics processing units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speed-ups of up to two orders of magnitude over a single-core CPU. It is demonstrated that the code achieves different speed-ups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam-break flow impacting an obstacle, where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability.
Efficient Data Gathering Methods in Wireless Sensor Networks Using GBTR Matrix Completion
Directory of Open Access Journals (Sweden)
Donghao Wang
2016-09-01
Full Text Available To obtain efficient data gathering methods for wireless sensor networks (WSNs), a novel graph-based transform regularized (GBTR) matrix completion algorithm is proposed. The graph-based transform sparsity of the sensed data is exploited and treated as a penalty term in the matrix completion problem. The proposed GBTR-ADMM algorithm utilizes the alternating direction method of multipliers (ADMM) in an iterative procedure to solve the constrained optimization problem. Since the performance of the ADMM method is sensitive to the number of constraints, the GBTR-A2DM2 algorithm is derived to accelerate the convergence of GBTR-ADMM. GBTR-A2DM2 benefits from merging the two constraint conditions into one as well as from using a restart rule. Theoretical analysis shows that the proposed algorithms achieve satisfactory time complexity. Extensive simulation results verify that our proposed algorithms outperform state-of-the-art algorithms for data collection problems in WSNs with respect to recovery accuracy, convergence rate, and energy consumption.
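The ADMM machinery that GBTR-ADMM builds on can be illustrated on a much smaller problem: an l1-regularized recovery split into two subproblems coupled by a consensus constraint. This sketch shows only the characteristic x-update / z-update / dual-update cycle; the paper's actual objective (matrix completion with a graph-based transform penalty) is considerably more involved.

```python
def soft_threshold(v, k):
    """Elementwise shrinkage: the proximal operator of k * ||.||_1."""
    return [max(abs(t) - k, 0.0) * (1.0 if t >= 0 else -1.0) for t in v]

def admm_l1(a, lam=0.5, rho=1.0, iters=200):
    """ADMM for min_x 0.5*||x - a||^2 + lam*||x||_1, split as f(x) + g(z) with x = z.
    Cycle: quadratic x-update, shrinkage z-update, dual (u) ascent on the residual."""
    n = len(a)
    x, z, u = [0.0] * n, [0.0] * n, [0.0] * n
    for _ in range(iters):
        x = [(a[i] + rho * (z[i] - u[i])) / (1.0 + rho) for i in range(n)]
        z = soft_threshold([x[i] + u[i] for i in range(n)], lam / rho)
        u = [u[i] + x[i] - z[i] for i in range(n)]
    return z
```

This particular problem has the closed-form solution soft_threshold(a, lam), so convergence of the iteration is easy to verify; in GBTR-ADMM the same cycle is run with matrix-valued variables and the graph-transform penalty in place of the l1 term.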
A method for real-time memory efficient implementation of blob detection in large images
Directory of Open Access Journals (Sweden)
Petrović Vladimir L.
2017-01-01
Full Text Available In this paper we propose a method for real-time blob detection in large images with low memory cost. The method is suitable for implementation on specialized parallel hardware such as multi-core platforms, FPGAs and ASICs. It uses parallelism to speed up blob detection. The input image is divided into blocks of equal size, to which the maximally stable extremal regions (MSER) blob detector is applied in parallel. We propose the use of multiresolution analysis for the detection of large blobs that are not detected by processing the small blocks. The method can find its place in many applications such as medical imaging, text recognition, video surveillance, and wide area motion imagery (WAMI). We also explored the use of the detected blobs in feature-based image alignment. When large images are processed, our approach is 10 to over 20 times more memory efficient than the state-of-the-art hardware implementation of the MSER.
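The block-wise structure of the method can be sketched with a stand-in detector: the image is tiled into equal blocks and a blob detector runs independently on each, so memory stays proportional to the block size and the blocks parallelise trivially. A simple connected-components pass substitutes here for the MSER detector used in the paper.

```python
def find_blobs(block, offset_r, offset_c, min_size=2):
    """Stand-in blob detector: 4-connected components on a binary block,
    reported in global coordinates.  (The paper uses MSER, which is more
    involved; only the block-wise structure is mirrored here.)"""
    rows, cols = len(block), len(block[0])
    seen = [[False] * cols for _ in range(rows)]
    blobs = []
    for r in range(rows):
        for c in range(cols):
            if block[r][c] and not seen[r][c]:
                stack, comp = [(r, c)], []
                seen[r][c] = True
                while stack:
                    i, j = stack.pop()
                    comp.append((i + offset_r, j + offset_c))
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and block[ni][nj] and not seen[ni][nj]):
                            seen[ni][nj] = True
                            stack.append((ni, nj))
                if len(comp) >= min_size:
                    blobs.append(comp)
    return blobs

def blockwise_blobs(image, block_size=4):
    """Tile the image into equal blocks and detect blobs per block.
    Each find_blobs call is independent, so the loop maps directly onto
    parallel hardware (one block per core / FPGA pipeline stage), and
    working memory is O(block_size**2) instead of O(image size)."""
    blobs = []
    for r0 in range(0, len(image), block_size):
        for c0 in range(0, len(image[0]), block_size):
            block = [row[c0:c0 + block_size] for row in image[r0:r0 + block_size]]
            blobs.extend(find_blobs(block, r0, c0))
    return blobs
```

Blobs spanning block borders are missed at this resolution, which is exactly why the paper adds multiresolution analysis for large blobs.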
A-VCI: A flexible method to efficiently compute vibrational spectra
Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier
2017-06-01
The adaptive vibrational configuration interaction (A-VCI) algorithm has been introduced as a new method to efficiently reduce the dimension of the set of basis functions used in a vibrational configuration interaction process. It is based on the construction of nested bases for the discretization of the Hamiltonian operator according to a theoretical criterion that ensures the convergence of the method. In the present work, the Hamiltonian is written as a sum of products of operators. The purpose of this paper is to study the properties and outline the performance details of the main steps of the algorithm. New parameters have been incorporated to increase flexibility, and their influence has been thoroughly investigated. The robustness and reliability of the method are demonstrated for the computation of the vibrational spectrum up to 3000 cm-1 of a widely studied 6-atom molecule (acetonitrile). Our results are compared to the most accurate computation to date, and we also give a new reference calculation for future work on this system. The algorithm has also been applied to a more challenging 7-atom molecule (ethylene oxide). The computed spectrum up to 3200 cm-1 is the most accurate computation available today for such systems.
Methods to determine deer diet composition: A comparison of their efficiency and feasibility
Directory of Open Access Journals (Sweden)
Olivas, S.M.
2014-01-01
Full Text Available This literature review examines the different methods that exist to determine the composition of deer diets. We considered research on different deer species in several regions of the world. The aim was to compare the efficiency and feasibility of the methods. Among the aspects considered were the type of samples and the capacity of each method to discriminate diet composition down to minor taxa, such as genus and species, or by forage type (grasses, herbaceous plants, or shrubs). These studies cover regions of Europe, Africa, Australia, and North and South America. Six methodologies were found to exist: 1) field observation, 2) fecal analysis, 3) esophageal and rumen fistula techniques, 4) stomach analysis, 5) n-alkanes in plant cuticles (waxes), and 6) infrared reflectance spectroscopy. The most commonly used is micro-histological analysis of feces, which offers the advantages of easy access to samples, an unlimited quantity of them, and the use of non-specialized equipment.
Kizil, Caghan; Brand, Michael
2011-01-01
The teleost fish Danio rerio (zebrafish) has a remarkable ability to generate newborn neurons in its brain at adult stages of its lifespan, a process called adult neurogenesis. This ability relies on proliferating ventricular progenitors and is in striking contrast to mammalian brains, which have a rather restricted capacity for adult neurogenesis. Therefore, investigating the zebrafish brain can help not only to elucidate the molecular mechanisms of widespread adult neurogenesis in a vertebrate species, but also to design therapies in humans based on what we learn from this teleost. Yet, understanding the cellular behavior and molecular programs underlying different biological processes in the adult zebrafish brain requires techniques that allow manipulation of gene function. As a complementary method to the currently used misexpression techniques in zebrafish, such as transgenic approaches or electroporation-based delivery of DNA, we devised a cerebroventricular microinjection (CVMI)-assisted knockdown protocol that relies on vivo morpholino oligonucleotides, which do not require electroporation for cellular uptake. This rapid method allows uniform and efficient knockdown of genes in the ventricular cells of the zebrafish brain, which contain the neurogenic progenitors. We also provide data on the use of CVMI for growth factor administration to the brain, in our case FGF8, which modulates the proliferation rate of the ventricular cells. In this paper, we describe the CVMI method and discuss its potential uses in zebrafish. PMID:22076157
Directory of Open Access Journals (Sweden)
Qiao Wei
2017-01-01
Full Text Available Deep neural networks (DNNs) have recently yielded strong results on a range of applications. Training these DNNs using a cluster of commodity machines is a promising approach, since training is time consuming and compute-intensive. Furthermore, putting DNN tasks into containers on clusters would enable broader and easier deployment of DNN-based algorithms. Toward this end, this paper addresses the problem of scheduling DNN tasks in a containerized cluster environment. Efficiently scheduling data-parallel computation jobs like DNN training over containerized clusters is critical for job performance, system throughput, and resource utilization, and it becomes even more challenging with complex workloads. We propose a scheduling method called Deep Learning Task Allocation Priority (DLTAP), which makes scheduling decisions in a distributed manner; each decision takes the aggregation degree of parameter-server and worker tasks into account, in particular to reduce cross-node network transmission traffic and, correspondingly, to decrease the DNN training time. We evaluate the DLTAP scheduling method using a state-of-the-art distributed DNN training framework on 3 benchmarks. The results show that the proposed method can reduce cross-node network traffic by 12% on average and decrease the DNN training time even on a cluster of low-end servers.
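The co-location idea behind such a scheduler can be sketched as a greedy placement rule: prefer the node that already hosts tasks of the same DNN job (keeping parameter-server and worker tasks together reduces cross-node traffic), subject to a per-node capacity. The data layout and scoring rule below are illustrative assumptions, not the paper's exact priority function.

```python
def schedule(tasks, nodes, capacity=4):
    """Greedy DLTAP-style placement sketch.  tasks: (task_id, job_name) pairs
    in arrival order.  Score each node by (tasks of the same job already
    hosted, negative load): co-locate first, balance load second."""
    placement = {}
    load = {n: 0 for n in nodes}
    hosted = {n: [] for n in nodes}   # job names already placed on each node
    for task_id, job in tasks:
        free = [n for n in nodes if load[n] < capacity]
        best = max(free, key=lambda n: (hosted[n].count(job), -load[n]))
        placement[task_id] = best
        hosted[best].append(job)
        load[best] += 1
    return placement
```

With two jobs and two nodes, the rule packs each job's parameter-server and worker tasks onto one node until capacity forces a spill, which is the traffic-reduction behaviour the abstract describes.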
Merrikh-Bayat, Farshad
2011-04-01
One main approach for time-domain simulation of linear output-feedback systems containing fractional-order controllers is to approximate the transfer function of the controller with an integer-order transfer function and then perform the simulation. In general, this approach suffers from two main disadvantages: first, the internal stability of the resulting feedback system is not guaranteed, and second, the amount of error caused by this approximation is not exactly known. The aim of this paper is to propose an efficient method for time-domain simulation of such systems without facing the above-mentioned drawbacks. For this purpose, the fractional-order controller is approximated with an integer-order transfer function (possibly in combination with a delay term) such that the internal stability of the closed-loop system is guaranteed, and then the simulation is performed. It is also shown that the resulting approximate controller can effectively be realized by using the proposed method. Some formulas for estimating and correcting the simulation error, when the feedback system under consideration is subjected to a unit step command or a unit step disturbance, are also presented. Finally, three numerical examples are studied and the results are compared with the Oustaloup continuous approximation method. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
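The Oustaloup continuous approximation used as the comparison baseline replaces s^α over a frequency band [ω_b, ω_h] with an integer-order filter whose N zero/pole frequencies are recursively (geometrically) spaced; a standard form of the recursion is sketched below.

```python
import cmath
import math

def oustaloup(alpha, wb=0.01, wh=100.0, N=5):
    """Oustaloup recursive approximation of s**alpha on [wb, wh] rad/s:
    returns the gain wh**alpha and N geometrically spaced zero/pole
    frequencies of the integer-order substitute filter."""
    zeros = [wb * (wh / wb) ** ((2 * k - 1 - alpha) / (2 * N)) for k in range(1, N + 1)]
    poles = [wb * (wh / wb) ** ((2 * k - 1 + alpha) / (2 * N)) for k in range(1, N + 1)]
    return wh ** alpha, zeros, poles

def freq_response(gain, zeros, poles, w):
    """Evaluate the rational approximation at s = j*w."""
    s = 1j * w
    h = complex(gain)
    for z, p in zip(zeros, poles):
        h *= (s + z) / (s + p)
    return h
```

At the geometric band centre (w = 1 for the symmetric band [0.01, 100]), the approximation of s^0.5 should reproduce |(j·1)^0.5| = 1 and a phase near 45°, which is a quick sanity check on the pole/zero spacing.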
An Efficient numerical method to calculate the conductivity tensor for disordered topological matter
Garcia, Jose H.; Covaci, Lucian; Rappoport, Tatiana G.
2015-03-01
We propose a new, efficient numerical approach to calculate the conductivity tensor in solids. We use a real-space implementation of the Kubo formalism where both diagonal and off-diagonal conductivities are treated on the same footing. We adopt a formulation of the Kubo theory known as the Bastin formula and expand the Green's functions involved in terms of Chebyshev polynomials using the kernel polynomial method. Within this method, all the computational effort lies in the calculation of the expansion coefficients. It also has the advantage of obtaining both conductivities in a single calculation step and for various values of temperature and chemical potential, capturing the topology of the band structure. Our numerical technique is very general and is suitable for the calculation of transport properties of disordered systems. We analyze how the method's accuracy varies with the number of moments used in the expansion and illustrate our approach by calculating the transverse conductivity of different topological systems. T.G.R., J.H.G. and L.C. acknowledge the Brazilian agencies CNPq, FAPERJ and INCT de Nanoestruturas de Carbono, and the Flemish Science Foundation for financial support.
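The core of the kernel polynomial method is the Chebyshev three-term recurrence for moments of the (rescaled) Hamiltonian; a minimal dense-matrix sketch is shown below. A production KPM code would add a stochastic trace estimator and a damping kernel (e.g. Jackson), and the Bastin formula combines such expansions for both energy arguments of the Green's functions.

```python
def matmul(A, B):
    """Dense square matrix product (pure-Python stand-in for sparse matvecs)."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def kpm_moments(H, n_moments):
    """Chebyshev moments mu_n = Tr[T_n(H)] via the recurrence
    T_{n+1}(H) = 2*H*T_n(H) - T_{n-1}(H).  H must already be rescaled so
    that its spectrum lies inside [-1, 1]."""
    n = len(H)
    t_prev = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]  # T_0 = I
    t_curr = [row[:] for row in H]                                           # T_1 = H
    moments = [float(n), sum(H[i][i] for i in range(n))]
    for _ in range(2, n_moments):
        ht = matmul(H, t_curr)
        t_next = [[2.0 * ht[i][j] - t_prev[i][j] for j in range(n)] for i in range(n)]
        moments.append(sum(t_next[i][i] for i in range(n)))
        t_prev, t_curr = t_curr, t_next
    return moments
```

For a diagonal test Hamiltonian the moments reduce to sums of Chebyshev polynomials of the eigenvalues, which makes the recurrence easy to verify; in large disordered systems H is sparse and only matrix-vector products are ever formed, which is where the method's efficiency comes from.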
A New Efficient Analytical Method for Picolinate Ion Measurements in Complex Aqueous Solutions
Energy Technology Data Exchange (ETDEWEB)
Parazols, M.; Dodi, A. [CEA Cadarache, Lab Anal Radiochim and Chim, DEN, F-13108 St Paul Les Durance (France)
2010-07-01
This study focuses on the development of a new simple but sensitive, fast and quantitative liquid chromatography method for picolinate ion measurement in high ionic strength aqueous solutions. It involves cation separation over a chromatographic CS16 column using methane sulfonic acid as a mobile phase and detection by UV absorbance (254 nm). The CS16 column is a high-capacity stationary phase exhibiting both cation exchange and RP properties. It allows interaction with picolinate ions, which are in their zwitterionic form at the pH of the mobile phase (1.3-1.7). Analysis is performed in 30 min with a detection limit of about 0.05 μM and a quantification limit of about 0.15 μM. Moreover, this analytical technique has been tested efficiently on complex aqueous samples from an effluent treatment facility. (authors)
Directory of Open Access Journals (Sweden)
Banna Hasanul
2016-03-01
Full Text Available This paper assesses farmers' willingness to pay for an efficient adaptation programme to climate change for Malaysian agriculture. We used the contingent valuation method to obtain a monetary assessment of farmers' preferences for an adaptation programme. We distributed a structured questionnaire to farmers in Selangor, Malaysia. Based on the survey, 74% of respondents are willing to pay for the adaptation programme, with several factors, such as socio-economic and motivational factors, exerting greater influence over their willingness to pay. However, a significant number of respondents are not willing to pay for the adaptation programme. The Malaysian government, along with social institutions, banks, NGOs, and the media, could develop effective awareness programmes to motivate financing of the programme. Financial institutions such as banks, insurance companies, and leasing firms, along with the government and farmers, could also contribute a substantial portion to the adaptation programme as part of their corporate social responsibility (CSR).
Efficiency estimation method of three-wired AC to DC line transfer
Solovev, S. V.; Bardanov, A. I.
2018-05-01
The development of power semiconductor converter technology expands the scope of its application to medium-voltage distribution networks (6-35 kV). In particular, rectifiers and inverters of appropriate power capacity complement the topology of networks at this voltage level with DC links and lines. The article presents a coefficient that accounts for the increase in transmission line capacity as a function of the line parameters. The application of the coefficient is illustrated by the example of transferring a three-wire AC line to DC by various methods. Dependences of the change in capacity on the load power factor of the line and on the reactive component of the transmission line resistance are obtained. Conclusions are drawn about the most efficient ways of converting a three-wire AC line to direct current.
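A back-of-the-envelope version of such a coefficient, under one common set of assumptions (bipolar DC reuse of the same conductors, pole voltage limited by the same insulation peak √2·U_phase, unchanged conductor current), can be sketched as follows; the article's coefficient additionally involves the line's reactive resistance, which this sketch omits.

```python
import math

def ac_capacity(u_phase, i_max, cos_phi):
    """Three-phase AC line capacity: P = 3 * U_phase * I * cos(phi)."""
    return 3.0 * u_phase * i_max * cos_phi

def dc_capacity(u_phase, i_max):
    """Bipolar DC over the same conductors (assumed scheme): two poles at the
    insulation peak voltage sqrt(2)*U_phase, same conductor current limit."""
    return 2.0 * math.sqrt(2.0) * u_phase * i_max

def transfer_gain(cos_phi):
    """Capacity coefficient k = P_dc / P_ac = 2*sqrt(2) / (3*cos(phi))."""
    return 2.0 * math.sqrt(2.0) / (3.0 * cos_phi)
```

At cos φ = 0.8 the coefficient is about 1.18, i.e. roughly 18% more transferable power after conversion; as the abstract notes, the gain grows as the load power factor falls.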
An efficient method for qualitative screening of phosphate-solubilizing bacteria.
Mehta, S; Nautiyal, C S
2001-07-01
An efficient protocol was developed for the qualitative screening of phosphate-solubilizing bacteria, based upon visual observation. Our results indicate that, by using our formulation containing bromophenol blue, it is possible to quickly screen phosphate-solubilizing bacteria on a qualitative basis. Qualitative analysis of the phosphate solubilized by various groups correlated well with grouping based upon quantitative analysis of bacteria isolated from soil, the effects of carbon, nitrogen and salts, and phosphate-solubilization-defective transposon mutants. However, unlike quantitative analysis methods that involve time-consuming biochemical procedures, the time for screening phosphate-solubilizing bacteria is significantly reduced by using our simple protocol. It is therefore envisaged that use of this formulation for qualitative analysis will be valuable for the quick screening of phosphate-solubilizing bacteria. Our results indicate that the formulation can also be used as a quality control test for expeditiously screening commercial bioinoculant preparations based on phosphate solubilizers.
An efficient numerical method for evolving microstructures with strong elastic inhomogeneity
International Nuclear Information System (INIS)
Jeong, Darae; Lee, Seunggyu; Kim, Junseok
2015-01-01
In this paper, we consider a fast and efficient numerical method for the modified Cahn–Hilliard equation with a logarithmic free energy for microstructure evolution. Even though it is physically more appropriate to use a logarithmic free energy, a quartic polynomial approximation is typically used for the logarithmic function due to a logarithmic singularity. In order to overcome the singularity problem, we regularize the logarithmic function and then apply an unconditionally stable scheme to the Cahn–Hilliard part in the model. We present computational results highlighting the different dynamic aspects from two different bulk free energy forms. We also demonstrate the robustness of the regularization of the logarithmic free energy, which implies the time-step restriction is based on accuracy and not stability. (paper)
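The regularisation step can be sketched directly: below, the singular x·ln x entropy terms are replaced outside a small interval by their second-order Taylor extensions, so the free energy stays finite (and twice differentiable) for compositions beyond the physical range. The functional form is a standard regularisation of the logarithmic potential; the parameter values are illustrative, not taken from the paper.

```python
import math

def xlogx_reg(x, eps=1e-2):
    """x*log(x) for x >= eps; below eps, its second-order Taylor extension
    about x = eps (matching value, slope and curvature), which removes the
    singularity of the logarithm at x = 0."""
    if x >= eps:
        return x * math.log(x)
    d = x - eps
    return eps * math.log(eps) + (math.log(eps) + 1.0) * d + d * d / (2.0 * eps)

def free_energy_reg(c, theta=1.0, theta_c=3.0, eps=1e-2):
    """Regularised logarithmic bulk free energy for Cahn-Hilliard-type models:
    entropic terms at temperature theta minus a theta_c interaction well
    (illustrative parameter values)."""
    return 0.5 * theta * (xlogx_reg(1.0 + c, eps) + xlogx_reg(1.0 - c, eps)) \
        - 0.5 * theta_c * c * c
```

Inside the admissible interval the regularised energy coincides with the exact logarithmic one, while at and beyond c = ±1 it remains finite, which is what lets an unconditionally stable scheme take time steps limited by accuracy rather than stability.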
Efficient methods of nanoimprint stamp cleaning based on imprint self-cleaning effect
Energy Technology Data Exchange (ETDEWEB)
Meng Fantao; Chu Jinkui [Key Laboratory for Micro/Nano Technology and System of Liaoning Province, Dalian University of Technology, 116024 Dalian (China); Luo Gang; Zhou Ye; Carlberg, Patrick; Heidari, Babak [Obducat AB, SE-20125 Malmoe (Sweden); Maximov, Ivan; Montelius, Lars; Xu, H Q [Division of Solid State Physics, Lund University, Box 118, S-22100 Lund (Sweden); Nilsson, Lars, E-mail: ivan.maximov@ftf.lth.se [Department of Food Technology, Engineering and Nutrition, Lund University, Box 117, S-22100 Lund (Sweden)
2011-05-06
Nanoimprint lithography (NIL) is a nonconventional lithographic technique that promises low-cost, high-throughput patterning of structures with sub-10 nm resolution. Contamination of nanoimprint stamps is one of the key obstacles to industrializing NIL technology. Here, we report two efficient approaches for the removal of typical contamination, particles and residual resist, from stamps: thermal and ultraviolet (UV) imprint cleaning, both based on the self-cleaning effect of the imprinting process. The contaminated stamps were imprinted onto polymer substrates and, after demolding, were treated with an organic solvent. Images of the stamps before and after the cleaning processes show that the two cleaning approaches can effectively remove contamination from stamps without destroying the stamp structures. The contact angles of the stamps before and after the cleaning processes indicate that the cleaning methods do not significantly degrade the anti-sticking layer. The cleaning processes reported in this work could also be used for substrate cleaning.
Computationally efficient method for optical simulation of solar cells and their applications
Semenikhin, I.; Zanuccoli, M.; Fiegna, C.; Vyurkov, V.; Sangiorgi, E.
2013-01-01
This paper presents two novel implementations of the differential method to solve the Maxwell equations in nanostructured optoelectronic solid-state devices. The first proposed implementation is based on an improved and computationally efficient T-matrix formulation that adopts multiple-precision arithmetic to tackle the numerical instability problem which arises due to evanescent modes. The second implementation adopts an iterative approach that achieves low computational complexity, O(N log N) or better. The proposed algorithms can handle structures with arbitrary spatial variation of the permittivity. The developed two-dimensional numerical simulator is applied to analyze the dependence of the absorption characteristics of a thin silicon slab on the morphology of the front interface and on the angle of incidence of the radiation with respect to the device surface.
Efficiency determination of whole-body counter by Monte Carlo method, using a microcomputer
International Nuclear Information System (INIS)
Fernandes Neto, Jose Maria
1986-01-01
The purpose of this investigation was the development of an analytical microcomputer model to evaluate whole-body counter efficiency. The model is based on a modified Snyder model. A stretcher-type geometry was used, along with the Monte Carlo method and a Sinclair-type microcomputer. Experimental measurements were performed using two phantoms, one representing an adult and the other a 5-year-old child. The phantoms were made of acrylic, and 99mTc, 131I and 42K were the radioisotopes utilized. Results showed a close relationship between experimental and predicted data for energies ranging from 250 keV to 2 MeV, but some discrepancies were found for lower energies. (author)
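The geometric part of such a Monte Carlo efficiency estimate can be sketched compactly: sample isotropic emission directions from a point source and count the fraction intercepted by a disc detector. Photon attenuation and scattering in the phantom, which the full model must include, are ignored in this sketch.

```python
import math
import random

def mc_geometric_efficiency(d, r, n=200000, seed=1):
    """Monte Carlo estimate of the geometric efficiency (intercepted
    solid-angle fraction) of a disc detector of radius r at height d above
    a point source; absorption and scatter are ignored in this sketch."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        mu = rng.uniform(-1.0, 1.0)     # isotropic: cos(theta) uniform on [-1, 1]
        if mu <= 0.0:
            continue                    # emitted away from the detector plane
        phi = rng.uniform(0.0, 2.0 * math.pi)
        sin_t = math.sqrt(1.0 - mu * mu)
        t = d / mu                      # ray parameter at the plane z = d
        x, y = t * sin_t * math.cos(phi), t * sin_t * math.sin(phi)
        if x * x + y * y <= r * r:
            hits += 1
    return hits / n

def analytic_efficiency(d, r):
    """Exact on-axis solid-angle fraction for cross-checking the estimate."""
    return 0.5 * (1.0 - d / math.sqrt(d * d + r * r))
```

Checking the sampled estimate against the closed-form on-axis solid angle is the standard validation step before adding phantom geometry and photon transport.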
A novel, efficient and facile method for the template removal from mesoporous materials
Chen, Lu
2014-11-12
© 2014, Jilin University, The Editorial Department of Chemical Research in Chinese Universities and Springer-Verlag GmbH. A new catalytic-oxidation method was adopted to remove the templates from SBA-15 and MCM-41 mesoporous materials via Fenton-like techniques under microwave irradiation. The mesoporous silica materials were treated with different Fenton agents chosen according to the template's properties and the textural properties of the material. The samples were characterized by powder X-ray diffraction (XRD), N2 adsorption-desorption isotherms, infrared spectroscopy, 29Si MAS NMR and thermogravimetric analysis (TGA). The results reveal that this is an efficient and facile approach to thorough template removal from mesoporous silica materials, yielding products with more stable structures, higher BET surface areas, larger pore volumes and larger quantities of silanol groups.
Highly efficient plastic solar cells fabricated with a high-throughput gravure printing method
Energy Technology Data Exchange (ETDEWEB)
Kopola, P.; Jin, H.; Tuomikoski, M.; Maaninen, A.; Hast, J. [VTT, Kaitovaeylae 1, FIN-90571 Oulu (Finland); Aernouts, T. [IMEC, Organic PhotoVoltaics, Polymer and Molecular Electronics, Kapeldreef 75, B-3001 Leuven (Belgium); Guillerez, S. [CEA-INES RDI, 50 Avenue Du Lac Leman, 73370 Le Bourget Du Lac (France)
2010-10-15
We report on polymer-based solar cells prepared by the high-throughput roll-to-roll gravure printing method. The engravings of the printing plate, along with process parameters like printing speed and ink properties, are studied to optimise the printability of the photoactive as well as the hole transport layer. For the hole transport layer, the focus is on testing different formulations to produce thorough wetting of the indium-tin-oxide (ITO) substrate. The challenge for the photoactive layer is to form a uniform layer with optimal nanomorphology in the poly-3-hexylthiophene (P3HT) and [6,6]-phenyl-C61-butyric acid methyl ester (PCBM) blend. This results in a power conversion efficiency of 2.8% under simulated AM1.5G solar illumination for a solar cell device with gravure-printed hole transport and photoactive layers. (author)
Two efficient methods for isolation of high-quality genomic DNA from entomopathogenic fungi.
Serna-Domínguez, María G; Andrade-Michel, Gilda Y; Arredondo-Bernal, Hugo C; Gallou, Adrien
2018-03-27
Conventional and commercial methods for the isolation of nucleic acids are available for fungal samples, including entomopathogenic fungi (EPF). However, there is no single optimal method for all organisms. The cell wall structure and the wide range of secondary metabolites of EPF can broadly interfere with the efficiency of a DNA extraction protocol. This study compares three commercial protocols: DNeasy® Plant Mini Kit (Qiagen), Wizard® Genomic DNA Purification Kit (Promega), and Axygen™ Multisource Genomic DNA Miniprep Kit (Axygen), and three conventional methods based on different buffers (SDS, CTAB/PVPP, and CTAB/β-mercaptoethanol), combined with three cell lysis procedures: liquid nitrogen homogenization and two bead-beating materials (tungsten-carbide and stainless-steel), for four representative species of EPF (Beauveria bassiana, Hirsutella citriformis, Isaria javanica, and Metarhizium anisopliae). Liquid nitrogen homogenization combined with the DNeasy® Plant Mini Kit (QN) or SDS buffer (SN) significantly improved the yield, with good purity (~1.8) and high integrity (>20,000 bp) of genomic DNA, in contrast with the other methods; these results were also better than those obtained with the two bead-beating materials. The purified DNA was evaluated by PCR-based techniques: amplification of translation elongation factor 1-α (TEF) and two highly sensitive molecular markers (ISSR and AFLP), with reliable and reproducible results. Despite variation in the yield, purity, and integrity of DNA extracted from the four species of EPF with the different extraction methods, the SN and QN protocols maintained the high quality of DNA required for downstream molecular applications. Copyright © 2018 Elsevier B.V. All rights reserved.
International Nuclear Information System (INIS)
Wu, Qiong-Li; Cournède, Paul-Henry; Mathieu, Amélie
2012-01-01
Global sensitivity analysis has a key role to play in the design and parameterisation of functional–structural plant growth models, which combine the description of plant structural development (organogenesis and geometry) and functional growth (biomass accumulation and allocation). We are particularly interested in this study in Sobol's method, which decomposes the variance of the output of interest into terms due to individual parameters and to interactions between parameters. Such information is crucial for systems with potentially high levels of non-linearity and interactions between processes, like plant growth. However, the computation of Sobol's indices relies on Monte Carlo sampling and re-sampling, whose cost can be very high, especially when model evaluation is itself expensive, as for tree models. In this paper, we therefore propose a new method to compute Sobol's indices, inspired by the Homma–Saltelli method, which makes slightly better use of model evaluations, and we then derive, for this generic class of computational methods, an estimator of the error in the sensitivity indices with respect to the sampling size. This allows detailed control of the balance between accuracy and computing time. Numerical tests on a simple non-linear model are convincing, and the method is finally applied to a functional–structural model of tree growth, GreenLab, whose particularity is the strong level of interaction between plant functioning and organogenesis. - Highlights: ► We study global sensitivity analysis in the context of functional–structural plant modelling. ► A new estimator based on the Homma–Saltelli method is proposed to compute Sobol indices, based on a more balanced re-sampling strategy. ► The estimation accuracy of sensitivity indices for a class of Sobol estimators can be controlled by error analysis. ► The proposed algorithm is implemented efficiently to compute Sobol indices for a complex tree growth model.
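The Monte Carlo sampling/re-sampling that drives the cost of Sobol indices can be illustrated with a standard pick-and-freeze (Saltelli-style) first-order estimator on the classical Ishigami test function. This is a generic sketch of the class of estimators the abstract refers to, not the authors' improved Homma–Saltelli variant:

```python
import numpy as np

def sobol_first_order(f, dim, n=100_000, seed=0):
    """Pick-and-freeze Monte Carlo estimator of Sobol first-order
    indices for a model f with inputs uniform on [-pi, pi]^dim.
    Cost: (dim + 2) * n model evaluations."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(-np.pi, np.pi, (n, dim))
    B = rng.uniform(-np.pi, np.pi, (n, dim))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]   # freeze x_i, re-sample all other inputs
        S[i] = np.mean(fB * (f(ABi) - fA)) / var
    return S

# Ishigami function: its first-order indices are known analytically
def ishigami(X, a=7.0, b=0.1):
    return np.sin(X[:, 0]) + a*np.sin(X[:, 1])**2 + b*X[:, 2]**4*np.sin(X[:, 0])

S = sobol_first_order(ishigami, 3)
print(S)  # analytically ~ [0.314, 0.442, 0.0]
```

The (dim + 2)·n evaluation count is exactly why the abstract's concern about expensive tree-model evaluations matters, and why estimators that reuse evaluations are worth the effort.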
Energy Technology Data Exchange (ETDEWEB)
Lee, Dong-Eun [Advanced Radiation Technology Institute, Korea Atomic Energy Research Institute, Jeongeup 580-185 (Korea, Republic of); Kim, Kwangmeyung [Center for Theragnosis, Biomedical Research Institute, Korea Institute of Science and Technology (KIST), Seoul 136-791 (Korea, Republic of); Park, Sang Hyun [Advanced Radiation Technology Institute, Korea Atomic Energy Research Institute, Jeongeup 580-185 (Korea, Republic of); Department of Radiobiotechnology and Applied Radioisotope Science, Korea University of Science and Technology, Deajeon 305-350 (Korea, Republic of)
2015-07-01
Recently, nanoparticles have received a great deal of interest in diagnosis and therapy applications. Since nanoparticles possess intrinsic features that are often required for a drug delivery system and diagnosis, they have the potential to be used as platforms for integrating imaging and therapeutic functions simultaneously. Intrinsic issues associated with theranostic nanoparticles, particularly in cancer treatment, include an efficient and straightforward radiolabeling method for understanding the in vivo biodistribution of nanoparticles reaching the tumor region, and for monitoring therapeutic responses. Herein, we investigated a facile and highly efficient strategy to prepare nanoparticles radiolabeled with 64Cu via a strain-promoted azide–alkyne cycloaddition strategy, often referred to as click chemistry. First, the azide (N3) group, which allows for the preparation of radiolabeled nanoparticles by copper-free click chemistry, was incorporated into glycol chitosan nanoparticles (CNPs). Second, the strained cyclooctyne derivative, dibenzyl cyclooctyne (DBCO) conjugated with a 1,4,7,10-tetraazacyclododecane-1,4,7,10-tetraacetic acid (DOTA) chelator, was synthesized for preparing the pre-radiolabeled alkyne complex with the 64Cu radionuclide. Following incubation with the 64Cu-radiolabeled DBCO complex (DBCO-PEG4-Lys-DOTA-64Cu, with high specific activity, 18.5 GBq/μmol), the azide-functionalized CNPs were radiolabeled successfully with 64Cu, with a high radiolabeling efficiency and a high radiolabeling yield (>98%). Importantly, the radiolabeling of CNPs by copper-free click chemistry was accomplished within 30 min, with great efficiency under aqueous conditions. After 64Cu-CNPs were intravenously administered to tumor-bearing mice, the real-time in vivo biodistribution and tumor-targeting ability of 64Cu-CNPs were quantitatively evaluated by micro-PET imaging. These results
Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian
2018-05-08
An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations ( Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016 , 144 , 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017 , 13 , 1647 - 1655 ) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.
Video Coaching as an Efficient Teaching Method for Surgical Residents-A Randomized Controlled Trial.
Soucisse, Mikael L; Boulva, Kerianne; Sideris, Lucas; Drolet, Pierre; Morin, Michel; Dubé, Pierre
As surgical training evolves and operative exposure decreases, new, effective, and experiential learning methods are needed to ensure surgical competency and patient safety. Video coaching is an emerging concept in surgery that needs further investigation. In this randomized controlled trial conducted at a single teaching hospital, participating residents were filmed performing a side-to-side intestinal anastomosis on cadaveric dog bowel for baseline assessment. The Surgical Video Coaching (SVC) group then participated in a one-on-one video playback coaching and debriefing session with a surgeon, during which constructive feedback was given. The control group went on with their normal clinical duties without coaching or debriefing. All participants were filmed making a second intestinal anastomosis, which was compared to their first using a 7-category validated technical skill global rating scale, the Objective Structured Assessment of Technical Skills (OSATS). A single independent surgeon who did not take part in the coaching or debriefing of the SVC group reviewed all videos. A satisfaction survey was then sent to the residents in the coaching group. Department of Surgery, Hôpital Maisonneuve-Rosemont, a tertiary teaching hospital affiliated with the University of Montreal, Canada. General surgery residents from the University of Montreal were recruited to take part in this trial. A total of 28 residents were randomized and completed the study. After the intervention, the SVC group (n = 14) significantly increased their OSATS score (mean of differences 3.36 [1.09-5.63], p = 0.007) compared to the control group (n = 14) (mean of differences 0.29, p = 0.759). All residents agreed or strongly agreed that video coaching was a time-efficient teaching method. Video coaching is an effective and efficient teaching intervention to improve surgical residents' technical skills. Crown Copyright © 2017. Published by Elsevier
Directory of Open Access Journals (Sweden)
Nadejda Shatsilo
2018-05-01
The purpose is to substantiate principles for assessing the effectiveness of innovation and investment projects for rural area development on the basis of sustainability. Research methodology. General scientific and special methods were used to solve the tasks and obtain the corresponding results, in particular: the method of logical analysis, in determining the factors that influence the efficiency of investment projects; systematisation and generalisation, in synthesising modern methodological approaches to the evaluation of innovation and investment projects for rural area development; and the abstract-logical method, for theoretical generalisations and the formulation of conclusions. Results. The principles for estimating the efficiency of innovation and investment projects are generalised. The interrelation and interdependence of goals and tasks in the development of the three subsystems of sustainability are studied; these need to be taken into account when identifying the effects arising from the implementation of investment projects. The methodological principles for evaluating innovation and investment projects for rural area development under the requirements of sustainable development are highlighted. The factors hindering the investment of resources in rural area development are determined. The principles for implementing investment projects oriented towards sustainable development are substantiated. Priority directions for investing resources in rural area development on the principles of sustainability, within the framework of solving economic, social and environmental problems, are identified. A mechanism for estimating the efficiency of an innovation and investment project for rural area development under limited financial resources is offered. It is substantiated that it is
Efficient Use of Clickers: A Mixed-Method Inquiry with University Teachers
Directory of Open Access Journals (Sweden)
George Cheung
2018-03-01
With the advancement of information technology and policies encouraging interactivity in teaching and learning, the use of student response systems (SRS), commonly known as clickers, has grown substantially in recent years. The reported effectiveness of SRS has varied. Based on the framework of technological pedagogical content knowledge (TPACK), the current study explored this disparity in the efficiency of adopting SRS. A concurrent mixed-method design was adopted to delineate factors conducive to the efficient adoption of SRS through closed-ended survey responses and qualitative data. Participants were purposefully sampled from diverse academic disciplines and backgrounds. Seventeen teachers from various disciplines (i.e., tourism management, business, health sciences, applied sciences, engineering, and social sciences) at the Hong Kong Polytechnic University formed a teacher focus group for the current study. In the facilitated focus group on the efficient use of clickers, participants explored questions on teachers' knowledge of various technologies, knowledge of their subject matter, methods and processes of teaching, and how to integrate all of this knowledge into their teaching. The TPACK model was adopted to guide the discussions. Emergent themes from the discussions were extracted using NVivo 10 for Windows and categorised according to the TPACK framework. The survey, implemented on an online survey platform, solicited participants' knowledge and technology acceptance: 30 closed-ended items based on the TPACK framework and 20 items based on the Unified Theory of Acceptance and Use of Technology (UTAUT). Participating teachers concurred that the use of clickers is instrumental in engaging students in learning and in formatively assessing students' progress. Converging with the survey results
Economical Efficiency of Combined Cooling Heating and Power Systems Based on an Enthalpy Method
Directory of Open Access Journals (Sweden)
Yan Xu
2017-11-01
As the living standards of Chinese people improve, the energy demand for cooling and heating, mainly in the form of electricity, has also expanded. Since an integrated combined cooling, heating and power (CCHP) system will serve this demand better, the government is attaching more importance to the application of CCHP energy systems. Based on the characteristics of CCHP systems and the method of levelized cost of energy, two calculation methods for evaluating the economical efficiency of the system are employed, with the energy production treated from the perspective of exergy. In the first method, fuel costs account for about 75% of the total cost. In the second method, the profits from heating and cooling are converted to fuel costs, resulting in a significant reduction of fuel costs, which then account for 60% of the total cost. The heating and cooling parameters of the gas turbine exhaust, heat recovery boiler and lithium-bromide absorption chiller, and the commercial tariffs of provincial capitals, were then set as benchmarks based on geographic differences among provinces, and the economical efficiency of CCHP systems in each province was evaluated. The results show that CCHP systems are economical in the developed areas of central and eastern China, especially in Hubei and Zhejiang provinces, but not in other regions. A sensitivity analysis was also performed on the related influencing factors of fuel cost, demand intensity for heating and cooling energy, and bank loan ratio. The analysis shows that the levelized cost of energy of CCHP systems is very sensitive to exergy consumption and fuel costs. When the consumption of heating and cooling energy increases, the unit cost decreases by 0.1 yuan/kWh, and when the on-grid power ratio decreases by 20%, the cost may increase by 0.1 yuan
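The levelized-cost-of-energy logic behind both calculation methods reduces to discounted lifetime costs divided by discounted lifetime energy. A minimal sketch with purely illustrative CCHP figures (none of these numbers are the study's data):

```python
def lcoe(capex, annual_opex, annual_fuel, annual_energy_kwh, years, rate):
    """Levelized cost of energy (yuan/kWh): discounted lifetime costs
    divided by discounted lifetime energy output."""
    disc = [(1 + rate) ** -t for t in range(1, years + 1)]
    costs = capex + sum((annual_opex + annual_fuel) * d for d in disc)
    energy = sum(annual_energy_kwh * d for d in disc)
    return costs / energy

# illustrative CCHP plant figures (yuan); fuel dominates the total cost,
# consistent with the ~75% fuel share reported for the first method
capex, opex, fuel = 8e6, 4e5, 3e6
c = lcoe(capex, opex, fuel, annual_energy_kwh=1e7, years=20, rate=0.06)
fuel_share = fuel / (opex + fuel + capex / 20)
print(round(c, 3), round(fuel_share, 2))
```

The second method in the abstract amounts to subtracting heating/cooling revenue from `annual_fuel` before discounting, which lowers the fuel share of the levelized cost.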
Hasegawa, Takanori; Nagasaki, Masao; Yamaguchi, Rui; Imoto, Seiya; Miyano, Satoru
2014-07-01
Recently, several biological simulation models of, e.g., gene regulatory networks and metabolic pathways, have been constructed based on existing knowledge of biomolecular reactions, e.g., DNA-protein and protein-protein interactions. However, since these do not always contain all necessary molecules and reactions, their simulation results can be inconsistent with observational data. Therefore, improvements in such simulation models are urgently required. A previously reported method created multiple candidate simulation models by partially modifying existing models. However, this approach was computationally costly and could not handle a large number of candidates that are required to find models whose simulation results are highly consistent with the data. In order to overcome the problem, we focused on the fact that the qualitative dynamics of simulation models are highly similar if they share a certain amount of regulatory structures. This indicates that better fitting candidates tend to share the basic regulatory structure of the best fitting candidate, which can best predict the data among candidates. Thus, instead of evaluating all candidates, we propose an efficient explorative method that can selectively and sequentially evaluate candidates based on the similarity of their regulatory structures. Furthermore, in estimating the parameter values of a candidate, e.g., synthesis and degradation rates of mRNA, for the data, those of the previously evaluated candidates can be utilized. The method is applied here to the pharmacogenomic pathways for corticosteroids in rats, using time-series microarray expression data. In the performance test, we succeeded in obtaining more than 80% of consistent solutions within 15% of the computational time as compared to the comprehensive evaluation. Then, we applied this approach to 142 literature-recorded simulation models of corticosteroid-induced genes, and consequently selected 134 newly constructed better models. The
TU-AB-BRA-02: An Efficient Atlas-Based Synthetic CT Generation Method
International Nuclear Information System (INIS)
Han, X
2016-01-01
Purpose: A major obstacle for MR-only radiotherapy is the need to generate an accurate synthetic CT (sCT) from MR image(s) of a patient for the purposes of dose calculation and DRR generation. We propose here an accurate and efficient atlas-based sCT generation method, which has a computation speed largely independent of the number of atlases used. Methods: Atlas-based sCT generation requires a set of atlases with co-registered CT and MR images. Unlike existing methods that align each atlas to the new patient independently, we first create an average atlas and pre-align every atlas to the average atlas space. When a new patient arrives, we compute only one deformable image registration to align the patient MR image to the average atlas, which indirectly aligns the patient to all pre-aligned atlases. A patch-based non-local weighted fusion is performed in the average atlas space to generate the sCT for the patient, which is then warped back to the original patient space. We further adapt a PatchMatch algorithm that can quickly find top matches between patches of the patient image and all atlas images, which makes the patch fusion step also independent of the number of atlases used. Results: Nineteen brain tumour patients with both CT and T1-weighted MR images are used as testing data and a leave-one-out validation is performed. Each sCT generated is compared against the original CT image of the same patient on a voxel-by-voxel basis. The proposed method produces a mean absolute error (MAE) of 98.6±26.9 HU overall. The accuracy is comparable with a conventional implementation scheme, but the computation time is reduced from over an hour to four minutes. Conclusion: An average atlas space patch fusion approach can produce highly accurate sCT estimations very efficiently. Further validation on dose computation accuracy and using a larger patient cohort is warranted. The author is a full time employee of Elekta, Inc.
A laboratory method to estimate the efficiency of plant extract to neutralize soil acidity
Directory of Open Access Journals (Sweden)
Marcelo E. Cassiolato
2002-06-01
Water-soluble plant organic compounds have been proposed as efficient in alleviating soil acidity. Laboratory methods were evaluated to estimate the efficiency of plant extracts in neutralising soil acidity. Plant samples were dried at 65°C for 48 h and ground to pass a 1 mm sieve. The plant extraction procedure was: transfer 3.0 g of plant sample to a beaker, add 150 ml of deionised water, shake for 8 h at 175 rpm, and filter. Three laboratory methods were evaluated: the sum Σ(Ca + Mg + K) of the plant extracts; the electrical conductivity of the plant extracts; and titration of the plant extracts with NaOH solution between pH 3 and 7. These methods were compared with the effect of the plant extracts on acid soil chemistry. All laboratory methods were related to the soil reaction. Increasing Σ(Ca + Mg + K), electrical conductivity, and the volume of NaOH solution needed to neutralise the H+ ions of the plant extracts were correlated with the effect of the plant extract in increasing soil pH and exchangeable Ca and decreasing exchangeable Al. The electrical conductivity method is proposed for estimating the efficiency of plant extracts to neutralise soil acidity because it is easily adapted for routine analysis and uses simple instrumentation and materials.
Bao, Weizhu; Marahrens, Daniel; Tang, Qinglin; Zhang, Yanzhi
2013-01-01
We propose a simple, efficient, and accurate numerical method for simulating the dynamics of rotating Bose-Einstein condensates (BECs) in a rotational frame with or without long-range dipole-dipole interaction (DDI). We begin with the three
An efficient modeling method for thermal stratification simulation in a BWR suppression pool
Energy Technology Data Exchange (ETDEWEB)
Haihua Zhao; Ling Zou; Hongbin Zhang; Hua Li; Walter Villanueva; Pavel Kudinov
2012-09-01
The suppression pool in a BWR plant is not only the major heat sink within the containment system but also the major source of emergency cooling water for the reactor core. In several accident scenarios, such as a LOCA or an extended station blackout, thermal stratification tends to form in the pool after the initial rapid venting stage. Accurately predicting pool stratification is important because it affects the peak containment pressure; the pool temperature distribution also affects the NPSHa (available net positive suction head) and therefore the performance of the pump that draws cooling water back to the core. Current safety analysis codes use 0-D lumped-parameter methods to calculate the energy and mass balance in the pool and therefore have large uncertainty in predicting scenarios in which stratification and mixing are important. While 3-D CFD methods can analyze realistic 3-D configurations, they normally require very fine grid resolution to resolve thin substructures such as jets and wall boundaries, and therefore long simulation times. For mixing in stably stratified large enclosures, the BMIX++ code has been developed to implement a highly efficient analysis method for stratification, in which the ambient fluid volume is represented by 1-D transient partial differential equations and substructures such as free or wall jets are modeled with 1-D integral models. This allows very large reductions in computational effort compared to 3-D CFD modeling. The POOLEX experiments in Finland, which were designed to study phenomena relevant to Nordic-design BWR suppression pools, including thermal stratification and mixing, are used for validation. GOTHIC lumped-parameter models are used to obtain boundary conditions for the BMIX++ code and CFD simulations. Comparisons of the BMIX++, GOTHIC, and CFD calculations against the POOLEX experimental data are discussed in detail.
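The 1-D ambient-fluid idea can be sketched as a transient vertical diffusion equation with heat added at the pool surface. This toy model (illustrative parameters, explicit finite differences, no jet or plume integral models) only shows why a stable hot layer forms at the top; it is not BMIX++'s actual formulation:

```python
import numpy as np

def pool_1d(T0, alpha, dz, dt, steps, q_top=0.0):
    """1-D transient temperature profile of a stratifying pool:
    vertical heat diffusion with an insulated bottom and a heat
    source term q_top (K*m/s) applied at the surface cell."""
    assert dt <= dz**2 / (2 * alpha), "explicit scheme stability limit"
    T = np.array(T0, dtype=float)
    for _ in range(steps):
        lap = np.zeros_like(T)
        lap[1:-1] = (T[2:] - 2*T[1:-1] + T[:-2]) / dz**2
        lap[0] = (T[1] - T[0]) / dz**2      # insulated bottom
        lap[-1] = (T[-2] - T[-1]) / dz**2   # insulated top wall
        T += dt * alpha * lap
        T[-1] += dt * q_top / dz            # heat injected at the surface
    return T

# uniform 30 C pool, heat added at the top -> stable stratification:
# a warm surface layer forms while the bottom stays near 30 C
T = pool_1d(T0=[30.0]*50, alpha=1.5e-7, dz=0.1, dt=10.0, steps=5000, q_top=1e-4)
print(T[0], T[-1])
```

Because molecular diffusion is slow, the injected heat stays near the surface, which is the stable-stratification behavior that 0-D lumped models cannot represent.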
International Nuclear Information System (INIS)
Hagag, O.M.; Nafee, S.S.; Naeem, M.A.; El Khatib, A.M.
2011-01-01
A direct mathematical method has been developed for calculating the total efficiency of cylindrical gamma detectors, especially HPGe and NaI detectors. Different source geometries are considered (point and disk). Gamma attenuation by the detector window or any interfacing absorbing layer is also taken into account. Results are compared with published experimental data to study the validity of the direct mathematical method for calculating the total efficiency of any gamma detector size.
International Nuclear Information System (INIS)
Geraldo, L.P.; Smith, D.L.
1989-01-01
The methodology of covariance matrices and least-squares methods has been applied to the relative efficiency calibration of a Ge(Li) detector. Procedures employed to generate, manipulate and test covariance matrices, which serve to properly represent the uncertainties of experimental data, are discussed. Calibration data fitting using least-squares methods has been performed for a particular experimental data set. (author) [pt]
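The core of such a calibration is generalized least squares, where the data covariance matrix both weights the fit and propagates into the parameter covariance. A sketch with synthetic efficiency points; the log-log linear efficiency model and the 3% uncorrelated errors are illustrative assumptions, not the paper's data:

```python
import numpy as np

# illustrative Ge(Li) relative-efficiency points: energies (keV),
# synthetic "measured" efficiencies, and a diagonal covariance matrix
E = np.array([122., 245., 344., 662., 779., 1112., 1408.])
eff = 0.2 * (E / 122.) ** -0.9          # data generated from a power law
V = np.diag((0.03 * eff) ** 2)          # covariance of the measurements

# generalized least squares for ln(eff) = a + b * ln(E)
x, y = np.log(E), np.log(eff)
A = np.column_stack([np.ones_like(x), x])
Vy = np.diag(np.diag(V) / eff**2)       # propagate variances to log space
Vinv = np.linalg.inv(Vy)
cov_p = np.linalg.inv(A.T @ Vinv @ A)   # parameter covariance matrix
p = cov_p @ A.T @ Vinv @ y              # GLS parameter estimate [a, b]
print(p)
```

Since the synthetic data lie exactly on the model, the fitted slope recovers -0.9; with real data, `cov_p` is the quantity one tests for consistency with the quoted uncertainties.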
An efficient inverse radiotherapy planning method for VMAT using quadratic programming optimization.
Hoegele, W; Loeschel, R; Merkle, N; Zygmanski, P
2012-01-01
The purpose of this study is to investigate the feasibility of an inverse planning optimization approach for volumetric modulated arc therapy (VMAT) based on quadratic programming and the projection method. The performance of this method is evaluated against a reference commercial planning system (Eclipse™ for RapidArc™) for clinically relevant cases. The inverse problem is posed in terms of a linear combination of basis functions representing arclet dose contributions, with their linear coefficients as degrees of freedom. MLC motion is decomposed into basic motion patterns in an intuitive manner, leading to a system of equations with a relatively small number of equations and unknowns. These equations are solved using quadratic programming under certain limiting physical conditions on the solution, such as the avoidance of negative dose during optimization and monitor unit reduction. Modelling via the projection method ensures a unique treatment plan with beneficial properties, such as an explicit relation between organ weightings and the final dose distribution. The clinical cases studied include prostate and spine treatments. The optimized plans are evaluated by comparing isodose lines, DVH profiles for target and normal organs, and monitor units to those obtained with the clinical treatment planning system Eclipse™. The resulting dose distributions for a prostate case (with rectum and bladder as organs at risk) and for a spine case (with kidneys, liver, lung and heart as organs at risk) are presented. Overall, the results indicate that similar plan quality for quadratic programming (QP) and RapidArc™ could be achieved, with significantly less computational and planning effort using QP. Additionally, results for the Quasimodo phantom [Bohsung et al., "IMRT treatment planning: A comparative inter-system and inter-centre planning exercise of the ESTRO Quasimodo group," Radiother. Oncol. 76(3), 354-361 (2005)] are presented as an example
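The non-negativity constraint on the arclet coefficients turns the fit into a small quadratic program. A minimal stand-in using projected gradient descent on a toy dose-deposition matrix (illustrative dimensions and data, not a clinical dose model or the paper's solver):

```python
import numpy as np

def nonneg_qp(D, d, iters=5000):
    """Solve min_w ||D w - d||^2 subject to w >= 0 by projected
    gradient descent: a minimal illustration of the quadratic
    programming + projection idea (toy scale)."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(iters):
        grad = D.T @ (D @ w - d)
        w = np.maximum(w - grad / L, 0.0)  # project onto the feasible set w >= 0
    return w

rng = np.random.default_rng(1)
# rows: voxels; columns: arclet dose-deposition basis functions
D = rng.random((40, 8))
w_true = np.array([2., 0., 1., 0., 3., 0., 0.5, 0.])
d = D @ w_true                              # prescribed voxel doses
w = nonneg_qp(D, d)
print(np.round(w, 3))
```

Because the prescribed dose here is exactly reachable with non-negative weights, the solver recovers `w_true`; in a real plan the residual trades off target coverage against organ-at-risk weightings.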
An Effective Transform Unit Size Decision Method for High Efficiency Video Coding
Directory of Open Access Journals (Sweden)
Chou-Chen Wang
2014-01-01
High efficiency video coding (HEVC) is the latest video coding standard. HEVC can achieve higher compression performance than previous standards, such as MPEG-4, H.263, and H.264/AVC. However, HEVC requires enormous computational complexity in the encoding process due to its quadtree structure. In order to reduce the computational burden of the HEVC encoder, an early transform unit (TU) decision algorithm (ETDA) is adopted to prune the residual quadtree (RQT) at an early stage based on the number of nonzero DCT coefficients (called NNZ-ETDA), accelerating the encoding process. However, NNZ-ETDA cannot effectively reduce the computational load for sequences with active motion or rich texture. Therefore, to further improve the performance of NNZ-ETDA, we propose an adaptive RQT-depth decision for NNZ-ETDA (called ARD-NNZ-ETDA) by exploiting the high temporal-spatial correlation that exists in natural video sequences. Simulation results show that the proposed method can achieve a time improvement ratio (TIR) of about 61.26%~81.48% when compared to the HEVC test model 8.1 (HM 8.1), with insignificant loss of image quality. Compared with NNZ-ETDA, the proposed method achieves a further average TIR of about 8.29%~17.92%.
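An NNZ-style early TU decision can be sketched by quantizing the DCT of a residual block and counting the surviving coefficients. The threshold and quantization step below are illustrative, not the paper's NNZ-ETDA or ARD-NNZ-ETDA rules:

```python
import numpy as np

def dct2(block):
    """Orthonormal 2-D DCT-II built from the 1-D DCT matrix."""
    n = block.shape[0]
    k, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    C = np.cos(np.pi * (2*i + 1) * k / (2*n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C @ block @ C.T

def early_tu_skip(residual, qstep, nnz_threshold=2):
    """Quantize the residual's DCT and stop splitting the RQT when the
    number of nonzero coefficients (NNZ) falls below a threshold."""
    coeffs = np.round(dct2(residual) / qstep)
    nnz = int(np.count_nonzero(coeffs))
    return nnz, nnz <= nnz_threshold

flat = np.full((8, 8), 0.3)              # smooth residual -> few NNZ
rng = np.random.default_rng(0)
textured = rng.normal(0, 20, (8, 8))     # busy residual -> many NNZ
print(early_tu_skip(flat, qstep=8.0))    # (0, True): prune early
print(early_tu_skip(textured, qstep=8.0))
```

This also illustrates why the abstract says NNZ alone fails on rich texture: a busy residual keeps many nonzero coefficients at every RQT depth, so the early-exit test rarely fires without an adaptive depth rule.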
Development of Efficient Screening Methods for Resistant Cucumber Plants to Meloidogyne incognita
Directory of Open Access Journals (Sweden)
Sung Min Hwang
2014-06-01
Root-knot nematodes represent a significant problem in cucumber, causing reductions in yield and quality. To develop screening methods for the resistance of cucumber to the root-knot nematode Meloidogyne incognita, the development of the nematode on four cucumber cultivars ('Dragonsamchuk', 'Asiastrike', 'Nebakja' and 'Hanelbakdadaki') under several conditions, such as inoculum concentration, plant growth stage and transplanting period, was assessed by the number of galls and egg masses produced on each seedling 45 days after inoculation. There was no difference in galls and egg masses among the tested conditions except for inoculum concentration. Reproduction of the nematode on all the tested cultivars increased with inoculum concentration in a dose-dependent manner. On the basis of these results, the optimum conditions for root-knot development on the cultivars are a transplanting period of 1 week, an inoculum concentration of 5,000 eggs/plant and a plant growth stage of 3 weeks, in a greenhouse (25 ± 5°C). In addition, under the optimum conditions, the resistance of 45 commercial cucumber cultivars was evaluated. One rootstock cultivar, Union, was moderately resistant to the root-knot nematode; no significant difference was found in the resistance of the other cultivars. Based on these results, we suggest an efficient screening method for new cucumber cultivars resistant to the root-knot nematode M. incognita.