WorldWideScience

Sample records for integral scaling method

  1. Stepwise integral scaling method and its application to severe accident phenomena

    International Nuclear Information System (INIS)

    Ishii, M.; Zhang, G.

    1993-10-01

    Severe accidents in light water reactors are characterized by the occurrence of multiphase flow with complicated phase changes, chemical reactions and various bifurcation phenomena. Because of the inherent difficulties associated with full-scale testing, scaled-down and simulation experiments are an essential part of severe accident analyses. However, one of the most significant shortcomings in the area is the lack of well-established and reliable scaling methods and scaling criteria. In view of this, the stepwise integral scaling method is developed for severe accident analyses. This new scaling method is quite different from the conventional approach. However, its focus on dominant transport mechanisms and its use of the integral response of the system make it relatively simple to apply to very complicated multiphase flow problems. In order to demonstrate its applicability and usefulness, three case studies have been made. The phenomena considered are (1) corium dispersion in DCH, (2) corium spreading in BWR MARK-I containment, and (3) the in-core boil-off and heating process. The results of these studies clearly indicate the effectiveness of the stepwise integral scaling method. Such a simple and systematic scaling method has not previously been available for severe accident analyses.

  2. The Software Reliability of Large Scale Integration Circuit and Very Large Scale Integration Circuit

    OpenAIRE

    Artem Ganiyev; Jan Vitasek

    2010-01-01

    This article describes a method for evaluating the fault-free operation of large scale integration circuits (LSI) and very large scale integration circuits (VLSI). The article presents a comparative analysis of the factors that determine the fault-free operation of integrated circuits, an analysis of existing methods, and a model for evaluating the fault-free operation of LSI and VLSI circuits. The main part describes a proposed algorithm and program for analyzing the fault rate in LSI and VLSI circuits.

  3. Symplectic integrators for large scale molecular dynamics simulations: A comparison of several explicit methods

    International Nuclear Information System (INIS)

    Gray, S.K.; Noid, D.W.; Sumpter, B.G.

    1994-01-01

    We test the suitability of a variety of explicit symplectic integrators for molecular dynamics calculations on Hamiltonian systems. These integrators are extremely simple algorithms with low memory requirements, and appear to be well suited for large-scale simulations. We first apply all the methods to a simple test case using the ideas of Berendsen and van Gunsteren. We then use the integrators to generate long-time trajectories of a 1000-unit polyethylene chain. Calculations are also performed with two popular but nonsymplectic integrators. The most efficient integrators of the set investigated are identified. We also discuss certain variations on the basic symplectic integration technique.
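    To make the idea concrete: the simplest explicit symplectic integrator of the kind compared in such studies is the second-order leapfrog (velocity Verlet) scheme. The sketch below is a generic illustration for a separable Hamiltonian, not the paper's own code; the harmonic test potential is an assumption.

```python
import numpy as np

def leapfrog(q, p, grad_V, dt, n_steps, mass=1.0):
    """Second-order symplectic (velocity Verlet) integration of
    H = p^2/(2m) + V(q); grad_V(q) returns dV/dq."""
    traj = []
    p = p - 0.5 * dt * grad_V(q)                # initial half kick
    for _ in range(n_steps):
        q = q + dt * p / mass                   # drift
        g = grad_V(q)
        traj.append((q.copy(), (p - 0.5 * dt * g).copy()))  # momenta synchronized with q
        p = p - dt * g                          # adjacent half kicks merged
    return traj

# Harmonic oscillator test: the energy error stays bounded instead of
# drifting secularly, the hallmark of a symplectic method.
traj = leapfrog(np.array([1.0]), np.array([0.0]), lambda q: q, dt=0.05, n_steps=2000)
q, p = traj[-1]
print(0.5 * p**2 + 0.5 * q**2)                  # stays ~0.5 for all times
```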

  4. Test methods of total dose effects in very large scale integrated circuits

    International Nuclear Information System (INIS)

    He Chaohui; Geng Bin; He Baoping; Yao Yujuan; Li Yonghong; Peng Honglun; Lin Dongsheng; Zhou Hui; Chen Yusheng

    2004-01-01

    A test method for total dose effects (TDE) in very large scale integrated circuits (VLSI) is presented. The consumption current of the devices is measured while the functional parameters of the devices (or circuits) are measured. The relation between data errors and consumption current can then be analyzed, and the mechanism of TDEs in VLSI can be proposed. Experimental results of 60Co γ TDE tests are given for SRAMs, EEPROMs, FLASH ROMs and one kind of CPU.

  5. A New Class of Scaling Correction Methods

    International Nuclear Information System (INIS)

    Mei Li-Jie; Wu Xin; Liu Fu-Yao

    2012-01-01

    When conventional integrators like Runge-Kutta-type algorithms are used, numerical errors can make an orbit deviate from a hypersurface determined by many constraints, which leads to unreliable numerical solutions. Scaling correction methods are a powerful tool to avoid this. We focus on their applications, and also develop a family of new velocity multiple scaling correction methods where the scale factors act only on the related components of the integrated momenta. They can preserve exactly some first integrals of motion in discrete or continuous dynamical systems, so that the rapid growth of roundoff or truncation errors is suppressed significantly. (general)
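    As an illustration of the general idea (not the authors' specific velocity multiple-scaling formulas), the sketch below applies the simplest scaling correction: after each raw Runge-Kutta step the momentum is rescaled so that the energy integral is restored exactly. The pendulum test problem is an assumed example.

```python
import numpy as np

def rk4_step(f, y, dt):
    k1 = f(y)
    k2 = f(y + 0.5 * dt * k1)
    k3 = f(y + 0.5 * dt * k2)
    k4 = f(y + dt * k3)
    return y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Pendulum H = p^2/2 - cos(q); the energy is the first integral to preserve.
V = lambda q: -np.cos(q)
f = lambda y: np.array([y[1], -np.sin(y[0])])    # (dq/dt, dp/dt)

y = np.array([1.0, 0.0])
E0 = 0.5 * y[1]**2 + V(y[0])

for _ in range(10000):
    y = rk4_step(f, y, 0.01)
    q, p = y
    T = 0.5 * p**2
    if T > 0.0 and E0 - V(q) > 0.0:
        y[1] = p * np.sqrt((E0 - V(q)) / T)      # scale p so T + V = E0 exactly
print(0.5 * y[1]**2 + V(y[0]) - E0)              # ~0 by construction
```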

  6. Wafer scale integration of catalyst dots into nonplanar microsystems

    DEFF Research Database (Denmark)

    Gjerde, Kjetil; Kjelstrup-Hansen, Jakob; Gammelgaard, Lauge

    2007-01-01

    In order to successfully integrate bottom-up fabricated nanostructures such as carbon nanotubes or silicon, germanium, or III-V nanowires into microelectromechanical systems on a wafer scale, reliable ways of integrating catalyst dots are needed. Here, four methods for integrating sub-100-nm diameter nickel catalyst dots on a wafer scale are presented and compared. Three of the methods are based on a p-Si layer utilized as an in situ mask, an encapsulating layer, and a sacrificial window mask, respectively. All methods enable precise positioning of nickel catalyst dots at the end...

  7. Multiple time scale methods in tokamak magnetohydrodynamics

    International Nuclear Information System (INIS)

    Jardin, S.C.

    1984-01-01

    Several methods are discussed for integrating the magnetohydrodynamic (MHD) equations in tokamak systems on other than the fastest time scale. The dynamical grid method for simulating ideal MHD instabilities utilizes a natural nonorthogonal time-dependent coordinate transformation based on the magnetic field lines. The coordinate transformation is chosen to be free of the fast time scale motion itself, and to yield a relatively simple scalar equation for the total pressure, P = p + B²/(2μ₀), which can be integrated implicitly to average over the fast time scale oscillations. Two methods are described for the resistive time scale. The zero-mass method uses a reduced set of two-fluid transport equations obtained by expanding in the inverse magnetic Reynolds number, and in the small ratio of perpendicular to parallel mobilities and thermal conductivities. The momentum equation becomes a constraint equation that forces the pressure and magnetic fields and currents to remain in force balance equilibrium as they evolve. The large mass method artificially scales up the ion mass and viscosity, thereby reducing the severe time scale disparity between wavelike and diffusionlike phenomena, but not changing the resistive time scale behavior. Other methods addressing the intermediate time scales are discussed.

  8. Integration of expression data in genome-scale metabolic network reconstructions

    Directory of Open Access Journals (Sweden)

    Anna S. Blazier

    2012-08-01

    With the advent of high-throughput technologies, the field of systems biology has amassed an abundance of omics data, quantifying thousands of cellular components across a variety of scales, ranging from mRNA transcript levels to metabolite quantities. Methods are needed not only to integrate this omics data but also to use it to heighten the predictive capabilities of computational models. Several recent studies have successfully demonstrated how flux balance analysis (FBA), a constraint-based modeling approach, can be used to integrate transcriptomic data into genome-scale metabolic network reconstructions to generate predictive computational models. In this review, we summarize such FBA-based methods for integrating expression data into genome-scale metabolic network reconstructions, highlighting their advantages as well as their limitations.
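    The core computation behind FBA is a single linear program: maximize c·v subject to S·v = 0 and flux bounds. The toy network below is purely illustrative (three hypothetical reactions, two metabolites) and is not taken from the review.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (hypothetical): uptake -> A, A -> B, B -> biomass
S = np.array([[ 1, -1,  0],     # mass balance on metabolite A
              [ 0,  1, -1]])    # mass balance on metabolite B
c = np.array([0.0, 0.0, 1.0])   # objective: maximize the biomass flux
bounds = [(0, 10), (0, 10), (0, 10)]

# linprog minimizes, so negate the objective to maximize c^T v
res = linprog(-c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
print("optimal flux distribution:", res.x)   # expect [10, 10, 10]
```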

  9. Scaling for integral simulation of thermal-hydraulic phenomena in SBWR during LOCA

    Energy Technology Data Exchange (ETDEWEB)

    Ishii, M.; Revankar, S.T.; Dowlati, R. [Purdue Univ., West Lafayette, IN (United States)] [and others]

    1995-09-01

    A scaling study has been conducted for simulation of thermal-hydraulic phenomena in the Simplified Boiling Water Reactor (SBWR) during a loss of coolant accident. The scaling method consists of a three-level approach. The integral system scaling (global scaling, or top-down approach) consists of two levels: the integral response function scaling, which forms the first level, and the control volume and boundary flow scaling, which forms the second level. The bottom-up approach is carried out by local phenomena scaling, which forms the third level. Based on this scaling study the design of the model facility called Purdue University Multi-Dimensional Integral Test Assembly (PUMA) has been carried out. The PUMA facility has 1/4 height and 1/100 area ratio scaling, corresponding to a volume scaling of 1/400. The PUMA power scaling based on the integral scaling is 1/200. The present scaling method predicts that the PUMA time scale will be one-half that of the SBWR. The system pressure for PUMA is full scale; therefore, a prototypic pressure is maintained. PUMA is designed to operate at and below 1.03 MPa (150 psi), which allows it to simulate prototypic SBWR accident conditions below 1.03 MPa (150 psi). The facility includes models for all components of importance.
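    The quoted PUMA ratios are mutually consistent under the usual integral-scaling relation that velocity scales as the square root of length. A quick arithmetic check (my own, using only the numbers in the record):

```python
import math

height_ratio = 1 / 4                         # axial length scale (model/prototype)
area_ratio = 1 / 100                         # flow-area scale
volume_ratio = height_ratio * area_ratio     # 1/400, as quoted

# With velocity ~ sqrt(length), time scales as length/velocity = sqrt(length):
time_ratio = math.sqrt(height_ratio)         # 1/2, i.e. PUMA runs twice as fast

# Power ~ volume/time at full-scale pressure and fluid properties:
power_ratio = volume_ratio / time_ratio      # (1/400)/(1/2) = 1/200, as quoted
print(volume_ratio, time_ratio, power_ratio)
```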

  10. Scaling analysis for a Savannah River reactor scaled model integral system

    International Nuclear Information System (INIS)

    Boucher, T.J.; Larson, T.K.; McCreery, G.E.; Anderson, J.L.

    1990-11-01

    The Savannah River Laboratory has requested that the Idaho National Engineering Laboratory perform an analysis to help define, examine, and assess potential concepts for the design of a scaled integral hydraulics test facility representative of the current Savannah River Plant reactor design. In this report the thermal-hydraulic phenomena of importance to reactor safety during the design basis loss-of-coolant accident (based on the knowledge and experience of the authors and the results of the joint INEL/TPG/SRL phenomena identification and ranking effort) were examined and identified. Established scaling methodologies were used to develop potential concepts for integral hydraulic testing facilities. Analyses were conducted to examine the scaling of various phenomena in each of the selected concepts. Results generally support that a one-fourth (1/4) linear scale visual facility capable of operating at pressures up to 350 kPa (51 psia) and temperatures up to 330 K (134 °F) will scale most hydraulic phenomena reasonably well. However, additional research will be necessary to determine the most appropriate method of simulating several of the reactor components, since the scaling methodology allows for several approaches which may only be assessed via appropriate research. 34 refs., 20 figs., 14 tabs.

  11. Efficient orbit integration by manifold correction methods.

    Science.gov (United States)

    Fukushima, Toshio

    2005-12-01

    Triggered by a desire to investigate, numerically, the planetary precession through a long-term numerical integration of the solar system, we developed a new formulation of numerical integration of orbital motion named manifold correction methods. The main trick is to rigorously retain the consistency of physical relations, such as the orbital energy, the orbital angular momentum, or the Laplace integral, of a binary subsystem. This maintenance is done by applying a correction to the integrated variables at each integration step. Typical methods of correction are certain geometric transformations, such as spatial scaling and spatial rotation, which are commonly used in the comparison of reference frames, or mathematically reasonable operations, such as modularization of angle variables into the standard domain [-π, π). The form into which the manifold correction methods finally evolved is the orbital longitude methods, which enable us to conduct an extremely precise integration of orbital motions. In unperturbed orbits, the integration errors are suppressed at the machine epsilon level for an indefinitely long period. In perturbed cases, on the other hand, the errors initially grow in proportion to the square root of time and then increase more rapidly, the onset of which depends on the type and magnitude of the perturbations. This feature is also realized for highly eccentric orbits by applying the same idea as used in KS-regularization. In particular, the introduction of time elements greatly enhances the performance of numerical integration of KS-regularized orbits, whether the scaling is applied or not.
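    A minimal sketch of the idea, assuming a Kepler two-body problem and only the simplest of the corrections mentioned: after each raw integration step the velocity is rescaled so the orbital energy is held exactly. Fukushima's full methods also correct the angular momentum, the Laplace integral, and the orbital longitude.

```python
import numpy as np

mu = 1.0                                        # GM in assumed units

def accel(r):
    return -mu * r / np.linalg.norm(r)**3

def rk4(r, v, dt):                              # plain (non-symplectic) base integrator
    def f(state):
        r, v = state
        return np.array([v, accel(r)])
    s = np.array([r, v])
    k1 = f(s); k2 = f(s + 0.5*dt*k1); k3 = f(s + 0.5*dt*k2); k4 = f(s + dt*k3)
    s = s + dt / 6.0 * (k1 + 2*k2 + 2*k3 + k4)
    return s[0], s[1]

r = np.array([1.0, 0.0, 0.0])                   # circular-orbit initial conditions
v = np.array([0.0, 1.0, 0.0])
E0 = 0.5 * v @ v - mu / np.linalg.norm(r)       # Kepler energy, exact invariant

for _ in range(10000):
    r, v = rk4(r, v, 1e-3)
    # manifold correction: rescale v so the energy relation holds exactly
    v = v * np.sqrt(2.0 * (E0 + mu / np.linalg.norm(r))) / np.linalg.norm(v)

print(0.5 * v @ v - mu / np.linalg.norm(r) - E0)   # ~0 at machine precision
```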

  12. Stepwise integral scaling method for severe accident analysis and its application to corium dispersion in direct containment heating

    International Nuclear Information System (INIS)

    Ishii, M.; Zhang, G.; No, H. C.; Eltwila, F.

    1994-01-01

    Accident sequences which lead to severe core damage and to possible release of radioactive fission products into the environment have a very low probability. However, interest in this area increased significantly due to the occurrence of the small-break loss-of-coolant accident at TMI-2, which led to partial core damage, and of the Chernobyl accident in the former USSR, which led to extensive core disassembly and significant release of fission products over several countries. In particular, the latter accident raised international concern over the potential consequences of severe accidents in nuclear reactor systems. One of the significant shortcomings in the analyses of severe accidents is the lack of well-established and reliable scaling criteria for various multiphase flow phenomena. Scaling criteria are essential to severe accident analysis, because full-scale tests are basically impossible to perform. They are required for (1) designing scaled-down or simulation experiments, (2) evaluating data and extrapolating the data to prototypic conditions, and (3) developing correctly scaled physical models and correlations. In view of this, a new scaling method is developed for the analysis of severe accidents. Its approach is quite different from the conventional methods. In order to demonstrate its applicability, this new stepwise integral scaling method has been applied to the analysis of the corium dispersion problem in direct containment heating.

  13. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    Science.gov (United States)

    Dednam, W.; Botha, A. E.

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution...
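    The two estimators being compared can be sketched in a few lines. The fluctuation formula below is the standard open-subvolume expression; the extrapolation in inverse subvolume size that the finite-size-scaling step adds is omitted, and all names and grids are illustrative.

```python
import numpy as np

def kb_integral_rdf(r, g):
    """Running Kirkwood-Buff integral G(R) = 4*pi * int_0^R (g(r) - 1) r^2 dr
    from a radial distribution function sampled on a grid (trapezoid rule)."""
    integrand = 4.0 * np.pi * (g - 1.0) * r**2
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)
    return np.concatenate(([0.0], np.cumsum(steps)))

def kb_integral_fluct(n_i, n_j, volume, same_species=False):
    """Estimate from particle-number fluctuations in an open sub-volume:
    G_ij = V * <dN_i dN_j> / (<N_i><N_j>) - delta_ij * V / <N_j>."""
    ni, nj = np.asarray(n_i, float), np.asarray(n_j, float)
    cov = (ni * nj).mean() - ni.mean() * nj.mean()
    G = volume * cov / (ni.mean() * nj.mean())
    if same_species:
        G -= volume / nj.mean()
    return G
```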

  14. Evaluation of Kirkwood-Buff integrals via finite size scaling: a large scale molecular dynamics study

    International Nuclear Information System (INIS)

    Dednam, W; Botha, A E

    2015-01-01

    Solvation of bio-molecules in water is severely affected by the presence of co-solvent within the hydration shell of the solute structure. Furthermore, since solute molecules can range from small molecules, such as methane, to very large protein structures, it is imperative to understand the detailed structure-function relationship on the microscopic level. For example, it is useful to know the conformational transitions that occur in protein structures. Although such an understanding can be obtained through large-scale molecular dynamics simulations, it is often the case that such simulations would require excessively large simulation times. In this context, Kirkwood-Buff theory, which connects the microscopic pair-wise molecular distributions to global thermodynamic properties, together with the recently developed technique called finite size scaling, may provide a better method to reduce system sizes, and hence also the computational times. In this paper, we present molecular dynamics trial simulations of biologically relevant low-concentration solvents, solvated by aqueous co-solvent solutions. In particular we compare two different methods of calculating the relevant Kirkwood-Buff integrals. The first (traditional) method computes running integrals over the radial distribution functions, which must be obtained from large system-size NVT or NpT simulations. The second, newer method employs finite size scaling to obtain the Kirkwood-Buff integrals directly by counting the particle number fluctuations in small, open sub-volumes embedded within a larger reservoir that can be well approximated by a much smaller simulation cell. In agreement with previous studies, which made a similar comparison for aqueous co-solvent solutions without the additional solvent, we conclude that the finite size scaling method is also applicable to the present case, since it can produce computationally more efficient results which are equivalent to the more costly radial distribution...

  15. VLSI scaling methods and low power CMOS buffer circuit

    International Nuclear Information System (INIS)

    Sharma Vijay Kumar; Pattanaik Manisha

    2013-01-01

    Device scaling is an important part of very large scale integration (VLSI) design that has boosted the success of the VLSI industry, resulting in denser and faster integration of devices. As the technology node moves towards the very deep submicron region, leakage current and circuit reliability become the key issues. Both increase with each new technology generation and affect the performance of the overall logic circuit. VLSI designers must balance power dissipation against circuit performance as devices are scaled. In this paper, different scaling methods are studied first. These scaling methods are then used to identify their effects on the power dissipation and propagation delay of a CMOS buffer circuit. To mitigate power dissipation in scaled devices, we propose a reliable leakage-reduction low-power transmission gate (LPTG) approach and test it on a complementary metal oxide semiconductor (CMOS) buffer circuit. All simulation results are obtained with the HSPICE tool using Berkeley predictive technology model (BPTM) BSIM4 bulk CMOS files. The LPTG CMOS buffer reduces power dissipation by 95.16% with an 84.20% improvement in figure of merit at the 32 nm technology node. Various process, voltage and temperature variations are analyzed to demonstrate the robustness of the proposed approach. Leakage current uncertainty decreases from 0.91 to 0.43 in the CMOS buffer circuit, which improves circuit reliability. (semiconductor integrated circuits)

  16. Conservative multi-implicit integral deferred correction methods with adaptive mesh refinement

    International Nuclear Information System (INIS)

    Layton, A.T.

    2004-01-01

    In most models of reacting gas dynamics, the characteristic time scales of chemical reactions are much shorter than the hydrodynamic and diffusive time scales, rendering the reaction part of the model equations stiff. Moreover, nonlinear forcings may introduce into the solutions sharp gradients or shocks, the robust behavior and correct propagation of which require the use of specialized spatial discretization procedures. This study presents high-order conservative methods for the temporal integration of model equations of reacting flows. By means of a method of lines discretization on the flux difference form of the equations, these methods compute approximations to the cell-averaged or finite-volume solution. The temporal discretization is based on a multi-implicit generalization of integral deferred correction methods. The advection term is integrated explicitly, and the diffusion and reaction terms are treated implicitly but independently, with the splitting errors present in traditional operator splitting methods reduced via the integral deferred correction procedure. To reduce computational cost, time steps used to integrate processes with widely-differing time scales may differ in size. (author)
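    A minimal explicit, single-rate variant of the scheme (no operator splitting or adaptive mesh refinement) shows the essential structure: a forward Euler prediction followed by correction sweeps that integrate the residual with spectral quadrature. Node and sweep counts below are arbitrary choices, not the paper's settings.

```python
import numpy as np

def integration_matrix(nodes):
    """S[m, j] = integral over [nodes[m], nodes[m+1]] of the j-th Lagrange
    basis polynomial on `nodes` (the deferred-correction quadrature)."""
    M = len(nodes)
    S = np.zeros((M - 1, M))
    for j in range(M):
        e = np.zeros(M); e[j] = 1.0
        coef = np.polyfit(nodes, e, M - 1)       # j-th Lagrange basis polynomial
        P = np.polyint(coef)
        for m in range(M - 1):
            S[m, j] = np.polyval(P, nodes[m + 1]) - np.polyval(P, nodes[m])
    return S

def idc_step(f, t0, y0, dt, n_nodes=4, sweeps=3):
    """One explicit integral-deferred-correction step for y' = f(t, y)."""
    tau = t0 + dt * np.linspace(0.0, 1.0, n_nodes)
    S = integration_matrix(tau)
    y = np.empty(n_nodes); y[0] = y0
    for m in range(n_nodes - 1):                 # forward-Euler prediction
        y[m + 1] = y[m] + (tau[m + 1] - tau[m]) * f(tau[m], y[m])
    for _ in range(sweeps):                      # correction sweeps
        F = np.array([f(t, v) for t, v in zip(tau, y)])
        ynew = np.empty_like(y); ynew[0] = y0
        for m in range(n_nodes - 1):
            h = tau[m + 1] - tau[m]
            ynew[m + 1] = ynew[m] + h * (f(tau[m], ynew[m]) - F[m]) + S[m] @ F
        y = ynew
    return y[-1]

# quick check on y' = -y, y(0) = 1
y, t = 1.0, 0.0
for _ in range(10):
    y = idc_step(lambda t, v: -v, t, y, 0.1); t += 0.1
print(y, np.exp(-1.0))   # agree to several digits
```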

  17. Supply chain integration scales validation and benchmark values

    Directory of Open Access Journals (Sweden)

    Juan A. Marin-Garcia

    2013-06-01

    Purpose: The clarification of the constructs of supply chain integration (clients, suppliers, external and internal), the creation of a measurement instrument based on a list of items taken from earlier papers, the validation of these scales, and a preliminary benchmark to interpret the scales by percentiles based on a set of control variables (size of the plant, country, sector and degree of vertical integration). Design/methodology/approach: Our empirical analysis is based on the HPM project database (2005-2007 timeframe). The international sample is made up of 266 plants across ten countries: Austria, Canada, Finland, Germany, Italy, Japan, Korea, Spain, Sweden and the USA. In each country, we analyzed the descriptive statistics, internal consistency testing to purify the items (inter-item correlations, Cronbach's alpha, squared multiple correlation, corrected item-total correlation), exploratory factor analysis, and finally a confirmatory factor analysis to check the convergent and discriminant validity of the scales. The analyses were done with the SPSS and EQS programs using the maximum likelihood parameter estimation method. Findings: The four proposed scales show excellent psychometric properties. Research limitations/implications: With a clearer and more concise designation of the supply chain integration measurement scales, more reliable and accurate data could be taken to analyse the relations between these constructs and other variables of interest to the academic field. Practical implications: Providing scales that are valid as a diagnostic tool for best practices, as well as providing a benchmark with which to compare the score for each individual plant against a collection of industrial companies from the machinery, electronics and transportation sectors. Originality/value: Supply chain integration may be a major factor in explaining the performance of companies. The results are nevertheless inconclusive; the vast range...
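    One of the internal-consistency statistics listed, Cronbach's alpha, is easy to state in code. The toy responses below are invented for illustration; they are not HPM data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of item scores.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    X = np.asarray(items, float)
    k = X.shape[1]
    item_var = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# toy example: 5 plants answering a 3-item integration scale
print(cronbach_alpha([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]]))
```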

  18. Steffensen's Integral Inequality on Time Scales

    Directory of Open Access Journals (Sweden)

    Ozkan Umut Mutlu

    2007-01-01

    We establish generalizations of Steffensen's integral inequality on time scales via the diamond-α dynamic integral, which is defined as a linear combination of the delta and nabla integrals.
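    For reference, the diamond-α integral is the stated convex combination of the delta and nabla integrals, and the classical inequality being generalized reads as follows (a LaTeX restatement from standard sources, not the paper's exact theorem):

```latex
% diamond-alpha dynamic integral on a time scale \mathbb{T}, with 0 \le \alpha \le 1:
\int_a^b f(t)\,\diamond_\alpha t
  = \alpha \int_a^b f(t)\,\Delta t + (1-\alpha)\int_a^b f(t)\,\nabla t

% classical Steffensen inequality: for f nonincreasing, 0 \le g \le 1 on [a,b],
% and \lambda = \int_a^b g(t)\,dt,
\int_{b-\lambda}^{b} f(t)\,dt \;\le\; \int_a^b f(t)\,g(t)\,dt \;\le\; \int_a^{a+\lambda} f(t)\,dt
```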

  19. Dynamic Reactive Power Compensation of Large Scale Wind Integrated Power System

    DEFF Research Database (Denmark)

    Rather, Zakir Hussain; Chen, Zhe; Thøgersen, Paul

    2015-01-01

    Due to progressive displacement of conventional power plants by wind turbines, the dynamic security of large-scale wind integrated power systems gets significantly compromised. In this paper we first highlight the importance of dynamic reactive power support/voltage security in large-scale wind integrated power systems with the least presence of conventional power plants. Then we propose a mixed integer dynamic optimization based method for optimal dynamic reactive power allocation in large-scale wind integrated power systems. One of the important aspects of the proposed methodology is that unlike ... (i) wind turbines, especially wind farms with additional grid support functionalities like dynamic support (e.g. dynamic reactive power support etc.), and (ii) refurbishment of existing conventional central power plants to synchronous condensers could be one of the efficient, reliable and cost-effective options...

  20. SCALE-6 Sensitivity/Uncertainty Methods and Covariance Data

    International Nuclear Information System (INIS)

    Williams, Mark L.; Rearden, Bradley T.

    2008-01-01

    Computational methods and data used for sensitivity and uncertainty analysis within the SCALE nuclear analysis code system are presented. The methodology used to calculate sensitivity coefficients and similarity coefficients and to perform nuclear data adjustment is discussed. A description is provided of the SCALE-6 covariance library based on ENDF/B-VII and other nuclear data evaluations, supplemented by 'low-fidelity' approximate covariances. SCALE (Standardized Computer Analyses for Licensing Evaluation) is a modular code system developed by Oak Ridge National Laboratory (ORNL) to perform calculations for criticality safety, reactor physics, and radiation shielding applications. SCALE calculations typically use sequences that execute a predefined series of executable modules to compute particle fluxes and responses like the critical multiplication factor. SCALE also includes modules for sensitivity and uncertainty (S/U) analysis of calculated responses. The S/U codes in SCALE are collectively referred to as TSUNAMI (Tools for Sensitivity and UNcertainty Analysis Methodology Implementation). SCALE-6, scheduled for release in 2008, contains significant new capabilities, including important enhancements in S/U methods and data. The main functions of TSUNAMI are to (a) compute nuclear data sensitivity coefficients and response uncertainties, (b) establish similarity between benchmark experiments and design applications, and (c) reduce uncertainty in calculated responses by consolidating integral benchmark experiments. TSUNAMI includes easy-to-use graphical user interfaces for defining problem input and viewing three-dimensional (3D) geometries, as well as an integrated plotting package.
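    The "response uncertainty" step in such S/U analyses rests on the first-order sandwich rule: the relative variance of a response is SᵀCS, with S the relative sensitivity profile and C the relative covariance matrix. A toy numerical sketch follows; the sensitivities and covariances are invented for illustration, not SCALE data.

```python
import numpy as np

# Hypothetical relative sensitivities (dk/k per dsigma/sigma) for 3 nuclear-data
# parameters, and a hypothetical relative covariance matrix in %^2:
S = np.array([0.8, -0.2, 0.05])
C = np.array([[4.0, 0.5, 0.0],
              [0.5, 9.0, 0.1],
              [0.0, 0.1, 1.0]])

rel_var = S @ C @ S                      # sandwich rule: S^T C S
print("dk/k =", np.sqrt(rel_var), "%")   # relative standard deviation of the response
```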

  1. Improving predictions of large scale soil carbon dynamics: Integration of fine-scale hydrological and biogeochemical processes, scaling, and benchmarking

    Science.gov (United States)

    Riley, W. J.; Dwivedi, D.; Ghimire, B.; Hoffman, F. M.; Pau, G. S. H.; Randerson, J. T.; Shen, C.; Tang, J.; Zhu, Q.

    2015-12-01

    Numerical model representations of decadal- to centennial-scale soil-carbon dynamics are a dominant cause of uncertainty in climate change predictions. Recent attempts by some Earth System Model (ESM) teams to integrate previously unrepresented soil processes (e.g., explicit microbial processes, abiotic interactions with mineral surfaces, vertical transport), poor performance of many ESM land models against large-scale and experimental manipulation observations, and complexities associated with spatial heterogeneity highlight the nascent nature of our community's ability to accurately predict future soil carbon dynamics. I will present recent work from our group to develop a modeling framework to integrate pore-, column-, watershed-, and global-scale soil process representations into an ESM (ACME), and apply the International Land Model Benchmarking (ILAMB) package for evaluation. At the column scale and across a wide range of sites, observed depth-resolved carbon stocks and their ¹⁴C-derived turnover times can be explained by a model with explicit representation of two microbial populations, a simple representation of mineralogy, and vertical transport. Integrating soil and plant dynamics requires a 'process-scaling' approach, since all aspects of the multi-nutrient system cannot be explicitly resolved at ESM scales. I will show that one approach, the Equilibrium Chemistry Approximation, improves predictions of forest nitrogen and phosphorus experimental manipulations and leads to very different global soil carbon predictions. Translating model representations from the site to the ESM scale requires a spatial scaling approach that either explicitly resolves the relevant processes or, more practically, accounts for fine-resolution dynamics at coarser scales. To that end, I will present recent watershed-scale modeling work that applies reduced-order model methods to accurately scale fine-resolution soil carbon dynamics to coarse-resolution simulations. Finally, we...

  2. New method for calculation of integral characteristics of thermal plumes

    DEFF Research Database (Denmark)

    Zukowska, Daria; Popiolek, Zbigniew; Melikov, Arsen Krikor

    2008-01-01

    A method for calculation of integral characteristics of thermal plumes is proposed. The method allows for determination of the integral parameters of plumes based on speed measurements performed with omnidirectional low-velocity thermoanemometers. The method includes a procedure for calculation of the directional velocity (upward component of the mean velocity). The method is applied for determination of the characteristics of an asymmetric thermal plume generated by a sitting person. The method was validated in full-scale experiments in a climatic chamber with a thermal manikin as a simulator of a sitting...

  3. Effect of Integrated Pest Management Training on Ugandan Small-Scale Farmers

    DEFF Research Database (Denmark)

    Clausen, Anna Sabine; Jørs, Erik; Atuhaire, Aggrey

    2017-01-01

    Small-scale farmers in developing countries use hazardous pesticides while taking few or no safety measures. Farmer field schools (FFSs) teaching integrated pest management (IPM) have been shown to reduce pesticide use among trained farmers. This cross-sectional study compares pesticide-related knowledge, attitude, and practice (KAP) ... and self-reported symptoms. The study supports IPM as a method to reduce pesticide use and potential exposure and to improve pesticide-related KAP among small-scale farmers in developing countries.

  4. The Genome-Scale Integrated Networks in Microorganisms

    Directory of Open Access Journals (Sweden)

    Tong Hao

    2018-02-01

    The genome-scale cellular network has become a necessary tool in the systematic analysis of microbes. In a cell, there are several layers (i.e., types) of molecular networks, for example, the genome-scale metabolic network (GMN), the transcriptional regulatory network (TRN), and the signal transduction network (STN). It has been realized that limitations and inaccuracies in prediction exist when using only a single-layer network. Therefore, integrated networks constructed from networks of the three types attract more interest. The function of a biological process in living cells is usually performed by the interaction of biological components. Therefore, it is necessary to integrate and analyze all the related components at the systems level to comprehensively and correctly realize the physiological functions of living organisms. In this review, we discuss three representative genome-scale cellular networks: the GMN, the TRN, and the STN, representing different levels (i.e., metabolism, gene regulation, and cellular signaling) of a cell's activities. Furthermore, we discuss the integration of the networks of the three types. With a growing understanding of the complexity of microbial cells, the development of integrated networks has become an inevitable trend in analyzing genome-scale cellular networks of microorganisms.

  5. Optimized negative dimensional integration method (NDIM) and multiloop Feynman diagram calculation

    International Nuclear Information System (INIS)

    Gonzalez, Ivan; Schmidt, Ivan

    2007-01-01

    We present an improved form of the integration technique known as NDIM (negative dimensional integration method), which is a powerful tool in the analytical evaluation of Feynman diagrams. Using this technique we study a φ³+φ⁴ theory in D = 4-2ε dimensions, considering generic topologies of L loops and E independent external momenta, and where the propagator powers are arbitrary. The method transforms the Schwinger parametric integral associated with the diagram into a multiple series expansion, whose main characteristic is that the argument contains several Kronecker deltas which appear naturally in the application of the method, and which we call the diagram presolution. The optimization we present here consists of a procedure that minimizes the series multiplicity, through appropriate factorizations in the multinomials that appear in the parametric integral, and that maximizes the number of Kronecker deltas generated in the process. The solutions are presented in terms of generalized hypergeometric functions, obtained once the Kronecker deltas have been used in the series. Although the technique is general, we apply it to cases in which there are 2 or 3 different energy scales (masses or kinematic variables associated with the external momenta), obtaining solutions in terms of finite sums of generalized hypergeometric series of 1 and 2 variables, respectively, each of them expressible as ratios between the different energy scales that characterize the topology. The main result is a method capable of solving Feynman integrals, expressing the solutions as hypergeometric series of multiplicity (n-1), where n is the number of energy scales present in the diagram.

  6. Studies of Sub-Synchronous Oscillations in Large-Scale Wind Farm Integrated System

    Science.gov (United States)

    Yue, Liu; Hang, Mend

    2018-01-01

    With the rapid development and construction of large-scale wind farms and their grid-connected operation, series-compensated AC transmission is gradually becoming the main way to deliver wind power and to improve wind power availability and grid stability; however, the integration of wind farms changes the SSO (sub-synchronous oscillation) damping characteristics of the synchronous generator system. Regarding the SSO problems caused by the integration of large-scale wind farms, this paper, focusing on doubly fed induction generator (DFIG) based wind farms, aims to summarize the SSO mechanisms in large-scale wind power integrated systems with series compensation, which can be classified into three types: sub-synchronous control interaction (SSCI), sub-synchronous torsional interaction (SSTI), and sub-synchronous resonance (SSR). Then, SSO modelling and analysis methods are categorized and compared by their applicable areas. Furthermore, this paper summarizes the suppression measures of actual SSO projects based on different control objectives. Finally, the research prospects in this field are explored.

  7. Integrating computational methods to retrofit enzymes to synthetic pathways.

    Science.gov (United States)

    Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula

    2012-02-01

    Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive with traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.

  8. Integrated simulation of continuous-scale and discrete-scale radiative transfer in metal foams

    Science.gov (United States)

    Xia, Xin-Lin; Li, Yang; Sun, Chuang; Ai, Qing; Tan, He-Ping

    2018-06-01

    A novel integrated simulation of radiative transfer in metal foams is presented. It integrates the continuous-scale simulation with the direct discrete-scale simulation in a single computational domain. It relies on the coupling of the real discrete-scale foam geometry with the equivalent continuous-scale medium through a specially defined scale-coupled zone. This zone holds continuous but nonhomogeneous volumetric radiative properties. The scale-coupled approach is compared to the traditional continuous-scale approach using volumetric radiative properties in the equivalent participating medium and to the direct discrete-scale approach employing the real 3D foam geometry obtained by computed tomography. All the analyses are based on geometrical optics. The Monte Carlo ray-tracing procedure is used for computations of the absorbed radiative fluxes and the apparent radiative behaviors of metal foams. The results obtained by the three approaches are in tenable agreement. The scale-coupled approach is fully validated in calculating the apparent radiative behaviors of metal foams composed of very absorbing to very reflective struts and that composed of very rough to very smooth struts. This new approach leads to a reduction in computational time by approximately one order of magnitude compared to the direct discrete-scale approach. Meanwhile, it can offer information on the local geometry-dependent feature and at the same time the equivalent feature in an integrated simulation. This new approach is promising to combine the advantages of the continuous-scale approach (rapid calculations) and direct discrete-scale approach (accurate prediction of local radiative quantities).

  9. An integrated approach coupling physically based models and probabilistic method to assess quantitatively landslide susceptibility at different scale: application to different geomorphological environments

    Science.gov (United States)

    Vandromme, Rosalie; Thiéry, Yannick; Sedan, Olivier; Bernardie, Séverine

    2016-04-01

    Landslide hazard assessment is the estimation of a target area where landslides of a particular type, volume, runout and intensity may occur within a given period. The first step in analyzing landslide hazard consists of assessing the spatial and temporal failure probability (when the information is available, i.e. susceptibility assessment). Two types of approach are generally recommended to achieve this goal: (i) qualitative approaches (i.e. inventory-based methods and knowledge-driven methods) and (ii) quantitative approaches (i.e. data-driven methods or deterministic physically based methods). Among quantitative approaches, deterministic physically based methods (PBM) are generally used at local and/or site-specific scales (1:5,000-1:25,000 and >1:5,000, respectively). The main advantage of these methods is the calculation of the probability of failure (safety factor) under specific environmental conditions. For some models it is possible to integrate land use and climatic change. On the other hand, the major drawbacks are the large amounts of reliable and detailed data required (especially the material types, their thickness, and the heterogeneity of geotechnical parameters over a large area) and the fact that only shallow landslides are taken into account. This is why they are often used at site-specific scales (>1:5,000). Thus, to take into account (i) the materials' heterogeneity, (ii) the spatial variation of physical parameters, and (iii) different landslide types, the French Geological Survey (i.e. BRGM) has developed a physically based model (PBM) implemented in a GIS environment. This PBM couples a global hydrological model (GARDENIA®), including a transient unsaturated/saturated hydrological component, with a physically based model computing the stability of slopes (ALICE®, Assessment of Landslides Induced by Climatic Events) based on the Morgenstern-Price method for any slip surface. The variability of mechanical parameters is handled by a Monte Carlo approach. The...
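    The Monte Carlo treatment of parameter variability can be illustrated with a much simpler slope-stability model than Morgenstern-Price: the infinite-slope factor of safety, sampled over uncertain soil parameters. All distributions and values below are invented for illustration and are not from the BRGM model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Infinite-slope factor of safety (a simple stand-in for Morgenstern-Price):
# FS = [c' + (gamma - m*gamma_w) * z * cos^2(beta) * tan(phi')]
#      / [gamma * z * sin(beta) * cos(beta)]
beta  = np.radians(30.0)                       # slope angle
z     = 2.0                                    # slip-surface depth, m
gw    = 9.81                                   # unit weight of water, kN/m^3
gamma = 19.0                                   # soil unit weight, kN/m^3
m     = rng.uniform(0.0, 1.0, n)               # water-table ratio (transient hydrology)
c     = rng.normal(8.0, 2.0, n).clip(0)        # cohesion c', kPa (assumed statistics)
phi   = np.radians(rng.normal(32.0, 3.0, n))   # friction angle phi' (assumed statistics)

fs = (c + (gamma - m * gw) * z * np.cos(beta)**2 * np.tan(phi)) \
     / (gamma * z * np.sin(beta) * np.cos(beta))
print("P(FS < 1) =", (fs < 1.0).mean())        # failure-probability proxy for the cell
```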

  10. Systematic approximation of multi-scale Feynman integrals arXiv

    CERN Document Server

    Borowka, Sophia; Hulme, Daniel

    An algorithm for the systematic analytical approximation of multi-scale Feynman integrals is presented. The algorithm produces algebraic expressions as functions of the kinematical parameters and mass scales appearing in the Feynman integrals, allowing for fast numerical evaluation. The results are valid in all kinematical regions, both above and below thresholds, up to in principle arbitrary orders in the dimensional regulator. The scope of the algorithm is demonstrated by presenting results for selected two-loop three-point and four-point integrals with an internal mass scale that appear in the two-loop amplitudes for Higgs+jet production.

  11. An Integrated Computational Materials Engineering Method for Woven Carbon Fiber Composites Preforming Process

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Weizhao; Ren, Huaqing; Wang, Zequn; Liu, Wing K.; Chen, Wei; Zeng, Danielle; Su, Xuming; Cao, Jian

    2016-10-19

    An integrated computational materials engineering method is proposed in this paper for analyzing the design and preforming process of woven carbon fiber composites. The goal is to reduce the cost and time needed for the mass production of structural composites. The method integrates simulation methods from the micro-scale to the macro-scale to capture the behavior of the composite material in the preforming process. In this way, the time-consuming and high-cost physical experiments and prototypes in the development of the manufacturing process can be circumvented. The method contains three parts: a micro-scale representative volume element (RVE) simulation to characterize the material; a metamodeling algorithm to generate the constitutive equations; and a macro-scale preforming simulation to predict the behavior of the composite material during forming. The results show the potential of this approach as guidance for the design of composite materials and their manufacturing processes.

  12. Scaling of graphene integrated circuits.

    Science.gov (United States)

    Bianchi, Massimiliano; Guerriero, Erica; Fiocco, Marco; Alberti, Ruggero; Polloni, Laura; Behnam, Ashkan; Carrion, Enrique A; Pop, Eric; Sordan, Roman

    2015-05-07

    The influence of transistor size reduction (scaling) on the speed of realistic multi-stage integrated circuits (ICs) represents the main performance metric of a given transistor technology. Despite extensive interest in graphene electronics, scaling efforts have so far focused on individual transistors rather than multi-stage ICs. Here we study the scaling of graphene ICs based on transistors from 3.3 to 0.5 μm gate lengths and with different channel widths, access lengths, and lead thicknesses. The shortest gate delay of 31 ps per stage was obtained in sub-micron graphene ring oscillators (ROs) oscillating at 4.3 GHz, which is the highest oscillation frequency obtained in any strictly low-dimensional material to date. We also derived the fundamental Johnson limit, showing that scaled graphene ICs could be used at high frequencies in applications with small voltage swing.

  13. Investigation on the integral output power model of a large-scale wind farm

    Institute of Scientific and Technical Information of China (English)

    BAO Nengsheng; MA Xiuqian; NI Weidou

    2007-01-01

    The integral output power model of a large-scale wind farm is needed when estimating the wind farm's output over a period of time in the future. The actual wind speed power model and calculation method of a wind farm made up of many wind turbine units are discussed. After analyzing the incoming wind flow characteristics and their energy distributions, and after considering the multiple effects among the wind turbine units under certain assumptions, the incoming wind flow model for multiple units is built. The calculation algorithms and steps of the integral output power model of a large-scale wind farm are provided. Finally, the actual power output of a wind farm is calculated and analyzed using practical wind speed measurement data. The characteristics of a large-scale wind farm are also discussed.
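    A minimal sketch of such an integral output model, assuming a generic cubic power curve, Weibull-distributed hub-height wind speeds, and a single mean wake factor standing in for the multi-unit interaction effects; every number below is a placeholder, not data from the paper.

```python
import numpy as np

def turbine_power(v, rated_kw=2000.0, v_in=3.0, v_rated=12.0, v_out=25.0):
    """Generic turbine power curve (assumed cubic ramp between cut-in and rated)."""
    v = np.asarray(v, float)
    p = np.where((v >= v_in) & (v < v_rated),
                 rated_kw * (v**3 - v_in**3) / (v_rated**3 - v_in**3), 0.0)
    return np.where((v >= v_rated) & (v < v_out), rated_kw, p)

# Integral farm output: sum unit outputs over a year of hourly wind speeds,
# with a uniform wake deficit as a crude stand-in for multi-unit effects.
rng = np.random.default_rng(1)
v_free = rng.weibull(2.0, size=8760) * 8.0     # hourly free-stream speeds, m/s
wake_factor = 0.9                              # assumed mean wake speed factor
n_units = 50
farm_kwh = n_units * turbine_power(wake_factor * v_free).sum()   # kWh over a year
print(f"annual energy ~ {farm_kwh / 1e6:.1f} GWh")
```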

  14. MULTI-SCALE SEGMENTATION OF HIGH RESOLUTION REMOTE SENSING IMAGES BY INTEGRATING MULTIPLE FEATURES

    Directory of Open Access Journals (Sweden)

    Y. Di

    2017-05-01

    Most multi-scale segmentation algorithms are not aimed at high resolution remote sensing images and have difficulty exchanging and using information between layers. In view of this, we propose a method for multi-scale segmentation of high resolution remote sensing images that integrates multiple features. First, the Canny operator is used to extract edge information, and then a band-weighted distance function is built to obtain the edge weights. According to the criterion, the initial segmentation objects of color images can be obtained by Kruskal's minimum spanning tree algorithm. Finally, segmentation images are obtained by an adaptive Mumford-Shah region merging rule combined with spectral and texture information. The proposed method is evaluated using synthetic images and ZY-3 satellite images through quantitative and qualitative analysis. The experimental results show that the proposed multi-scale segmentation method outperforms the fractal net evolution approach (FNEA) of the eCognition software in accuracy, while being slightly inferior to FNEA in efficiency.

  15. GIGGLE: a search engine for large-scale integrated genome analysis.

    Science.gov (United States)

    Layer, Ryan M; Pedersen, Brent S; DiSera, Tonya; Marth, Gabor T; Gertz, Jason; Quinlan, Aaron R

    2018-02-01

    GIGGLE is a genomics search engine that identifies and ranks the significance of genomic loci shared between query features and thousands of genome interval files. GIGGLE (https://github.com/ryanlayer/giggle) scales to billions of intervals and is over three orders of magnitude faster than existing methods. Its speed extends the accessibility and utility of resources such as ENCODE, Roadmap Epigenomics, and GTEx by facilitating data integration and hypothesis generation.

  16. Analysis for Large Scale Integration of Electric Vehicles into Power Grids

    DEFF Research Database (Denmark)

    Hu, Weihao; Chen, Zhe; Wang, Xiaoru

    2011-01-01

    Electric Vehicles (EVs) provide a significant opportunity for reducing the consumption of fossil energies and the emission of carbon dioxide. With more and more electric vehicles integrated into power systems, it becomes important to study the effects of EV integration on the power systems, especially on the low and medium voltage level networks. In this paper, the basic structure and characteristics of electric vehicles are introduced. The possible impacts of large-scale integration of electric vehicles on the power systems, especially the advantages for the integration of renewable energies, are discussed. Finally, the research projects related to the large-scale integration of electric vehicles into power systems are introduced, providing a reference for the large-scale integration of electric vehicles into power grids.

  17. GIGGLE: a search engine for large-scale integrated genome analysis

    Science.gov (United States)

    Layer, Ryan M; Pedersen, Brent S; DiSera, Tonya; Marth, Gabor T; Gertz, Jason; Quinlan, Aaron R

    2018-01-01

    GIGGLE is a genomics search engine that identifies and ranks the significance of genomic loci shared between query features and thousands of genome interval files. GIGGLE (https://github.com/ryanlayer/giggle) scales to billions of intervals and is over three orders of magnitude faster than existing methods. Its speed extends the accessibility and utility of resources such as ENCODE, Roadmap Epigenomics, and GTEx by facilitating data integration and hypothesis generation. PMID:29309061

  18. Wafer integrated micro-scale concentrating photovoltaics

    Science.gov (United States)

    Gu, Tian; Li, Duanhui; Li, Lan; Jared, Bradley; Keeler, Gordon; Miller, Bill; Sweatt, William; Paap, Scott; Saavedra, Michael; Das, Ujjwal; Hegedus, Steve; Tauke-Pedretti, Anna; Hu, Juejun

    2017-09-01

    Recent development of a novel micro-scale PV/CPV technology is presented. The Wafer Integrated Micro-scale PV approach (WPV) seamlessly integrates multijunction micro-cells with a multi-functional silicon platform that provides optical micro-concentration, hybrid photovoltaics, and mechanical micro-assembly. The wafer-embedded micro-concentrating elements are shown to considerably improve the concentration-acceptance-angle product, potentially leading to dramatically reduced module materials and fabrication costs, sufficient angular tolerance for low-cost trackers, and an ultra-compact optical architecture, which makes the WPV module compatible with commercial flat-panel infrastructures. The PV/CPV hybrid architecture further allows the collection of both direct and diffuse sunlight, thus extending the geographic and market domains for cost-effective PV system deployment. The WPV approach can potentially benefit from both the high performance of multijunction cells and the low cost of flat-plate Si PV systems.

  19. An integrating factor matrix method to find first integrals

    International Nuclear Information System (INIS)

    Saputra, K V I; Quispel, G R W; Van Veen, L

    2010-01-01

    In this paper we develop an integrating factor matrix method to derive conditions for the existence of first integrals. We use this novel method to obtain first integrals, along with the conditions for their existence, for two- and three-dimensional Lotka-Volterra systems with constant terms. The results are compared to previous results obtained by other methods.

  20. Report of the Workshop on Petascale Systems Integration for LargeScale Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Kramer, William T.C.; Walter, Howard; New, Gary; Engle, Tom; Pennington, Rob; Comes, Brad; Bland, Buddy; Tomlison, Bob; Kasdorf, Jim; Skinner, David; Regimbal, Kevin

    2007-10-01

    There are significant issues regarding large-scale system integration that are not being addressed in other forums such as current research portfolios or vendor user groups. Unfortunately, the issues in the area of large-scale system integration often fall into a netherworld: not research, not facilities, not procurement, not operations, not user services. Taken together, these issues, along with the impact of sub-optimal integration technology, mean the time required to deploy, integrate and stabilize large-scale systems may consume up to 20 percent of the useful life of such systems. Improving the state of the art for large-scale systems integration has the potential to increase the scientific productivity of these systems. Sites have significant expertise, but there are no easy ways to leverage this expertise among them. Many issues inhibit the sharing of information, including available time and effort, as well as issues with sharing proprietary information. Vendors also benefit in the long run from the solutions to issues detected during site testing and integration. There is a great deal of enthusiasm for making large-scale system integration a full-fledged partner along with the other major thrusts supported by funding agencies in the definition, design, and use of petascale systems. Integration technology and issues should have a full 'seat at the table' as petascale and exascale initiatives and programs are planned. The workshop attendees identified a wide range of issues and suggested paths forward. Pursuing these with funding opportunities and innovation offers the opportunity to dramatically improve the state of large-scale system integration.

  1. A Novel Multiple-Time Scale Integrator for the Hybrid Monte Carlo Algorithm

    International Nuclear Information System (INIS)

    Kamleh, Waseem

    2011-01-01

    Hybrid Monte Carlo simulations that implement the fermion action using multiple terms are commonly used. By the nature of their formulation they involve multiple integration time scales in the evolution of the system through simulation time. These different scales are usually dealt with by the Sexton-Weingarten nested leapfrog integrator. In this scheme the choice of time scales is somewhat restricted as each time step must be an exact multiple of the next smallest scale in the sequence. A novel generalisation of the nested leapfrog integrator is introduced which allows for far greater flexibility in the choice of time scales, as each scale now must only be an exact multiple of the smallest step size.
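    For context, the conventional Sexton-Weingarten scheme that the record's novel integrator generalizes can be sketched recursively; here each scale's step must be an integer multiple of the next-finer scale's step, exactly the restriction the new integrator relaxes. The force split in the demo is an arbitrary toy, not a lattice fermion action.

```python
import numpy as np

def nested_leapfrog(q, p, forces, steps, dt, level=0):
    """Sexton-Weingarten-style nested leapfrog.

    forces : force terms, slowest first; forces[level](q) gives that term's dp/dt
    steps  : steps[level] = sub-steps of the next-finer level per step of this level
    The innermost level carries the position update (drift).
    """
    n = steps[level]
    h = dt / n
    for _ in range(n):
        p = p + 0.5 * h * forces[level](q)         # half kick with this scale's force
        if level + 1 < len(forces):
            q, p = nested_leapfrog(q, p, forces, steps, h, level + 1)
        else:
            q = q + h * p                          # drift at the finest scale
        p = p + 0.5 * h * forces[level](q)         # closing half kick
    return q, p

# toy split: a soft slow force plus a stiff fast force (purely illustrative)
f_slow = lambda q: -0.1 * q
f_fast = lambda q: -25.0 * q
q, p = np.array([1.0]), np.array([0.0])
for _ in range(2000):
    q, p = nested_leapfrog(q, p, [f_slow, f_fast], steps=[1, 5], dt=0.05)
```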

  2. TIGER: Toolbox for integrating genome-scale metabolic models, expression data, and transcriptional regulatory networks

    Directory of Open Access Journals (Sweden)

    Jensen Paul A

    2011-09-01

    Background: Several methods have been developed for analyzing genome-scale models of metabolism and transcriptional regulation. Many of these methods, such as Flux Balance Analysis, use constrained optimization to predict relationships between metabolic flux and the genes that encode and regulate enzyme activity. Recently, mixed integer programming has been used to encode these gene-protein-reaction (GPR) relationships into a single optimization problem, but these techniques are often of limited generality and lack a tool for automating the conversion of rules to a coupled regulatory/metabolic model. Results: We present TIGER, a Toolbox for Integrating Genome-scale Metabolism, Expression, and Regulation. TIGER converts a series of generalized, Boolean or multilevel rules into a set of mixed integer inequalities. The package also includes implementations of existing algorithms to integrate high-throughput expression data with genome-scale models of metabolism and transcriptional regulation. We demonstrate how TIGER automates the coupling of a genome-scale metabolic model with GPR logic and models of transcriptional regulation, thereby serving as a platform for algorithm development and large-scale metabolic analysis. Additionally, we demonstrate how TIGER's algorithms can be used to identify inconsistencies and improve existing models of transcriptional regulation, with examples from the reconstructed transcriptional regulatory network of Saccharomyces cerevisiae. Conclusion: The TIGER package provides a consistent platform for algorithm development and for extending existing genome-scale metabolic models with regulatory networks and high-throughput data.
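    The Boolean-rule-to-inequality conversion at the heart of such tools follows a standard linearization pattern. The sketch below prints the textbook encodings for AND/OR indicator variables and the flux coupling; the names are illustrative and this is not TIGER's actual API.

```python
# Encoding Boolean gene-protein-reaction (GPR) rules as mixed-integer linear
# inequalities (all indicator variables binary in {0, 1}):
#   y = x1 AND x2  ->  y <= x1,  y <= x2,  y >= x1 + x2 - 1
#   y = x1 OR  x2  ->  y >= x1,  y >= x2,  y <= x1 + x2
# A reaction whose indicator y is 0 then has its flux v forced to zero:
#   lb * y <= v <= ub * y

def and_rule(y, x1, x2):
    return [f"{y} <= {x1}", f"{y} <= {x2}", f"{y} >= {x1} + {x2} - 1"]

def or_rule(y, x1, x2):
    return [f"{y} >= {x1}", f"{y} >= {x2}", f"{y} <= {x1} + {x2}"]

# Hypothetical GPR "(g1 and g2) or g3" gating reaction R1:
constraints = and_rule("a", "g1", "g2") + or_rule("R1", "a", "g3")
constraints += ["lb_R1 * R1 <= v_R1", "v_R1 <= ub_R1 * R1"]
print("\n".join(constraints))
```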

  3. Statistical Methods in Integrative Genomics

    Science.gov (United States)

    Richardson, Sylvia; Tseng, George C.; Sun, Wei

    2016-01-01

    Statistical methods in integrative genomics aim to answer important biology questions by jointly analyzing multiple types of genomic data (vertical integration) or aggregating the same type of data across multiple studies (horizontal integration). In this article, we introduce different types of genomic data and data resources, and then review statistical methods of integrative genomics, with emphasis on the motivation and rationale of these methods. We conclude with some summary points and future research directions. PMID:27482531

  4. Integral methods in low-frequency electromagnetics

    CERN Document Server

    Solin, Pavel; Karban, Pavel; Ulrych, Bohus

    2009-01-01

    A modern presentation of integral methods in low-frequency electromagnetics. This book provides state-of-the-art knowledge on integral methods in low-frequency electromagnetics. Blending theory with numerous examples, it introduces key aspects of the integral methods used in engineering as a powerful alternative to PDE-based models. Readers will get complete coverage of: the electromagnetic field and its basic characteristics; an overview of solution methods; solutions of electromagnetic fields by integral expressions; and integral and integrodifferential methods.

  5. Problems of large-scale vertically-integrated aquaculture

    Energy Technology Data Exchange (ETDEWEB)

    Webber, H H; Riordan, P F

    1976-01-01

    The problems of vertically-integrated aquaculture are outlined; they are concerned with: species limitations (in the market, biological and technological); site selection, feed, manpower needs, and legal, institutional and financial requirements. The gaps in understanding of, and the constraints limiting, large-scale aquaculture are listed. Future action is recommended with respect to: types and diversity of species to be cultivated, marketing, biotechnology (seed supply, disease control, water quality and concerted effort), siting, feed, manpower, legal and institutional aids (granting of water rights, grants, tax breaks, duty-free imports, etc.), and adequate financing. The lack of hard data based on experience suggests that large-scale vertically-integrated aquaculture is a high risk enterprise, and with the high capital investment required, banks and funding institutions are wary of supporting it. Investment in pilot projects is suggested to demonstrate that large-scale aquaculture can be a fully functional and successful business. Construction and operation of such pilot farms is judged to be in the interests of both the public and private sector.

  6. Linear-scaling quantum mechanical methods for excited states.

    Science.gov (United States)

    Yam, ChiYung; Zhang, Qing; Wang, Fan; Chen, GuanHua

    2012-05-21

    The poor scaling of many existing quantum mechanical methods with respect to the system size hinders their applications to large systems. In this tutorial review, we focus on the latest research on linear-scaling or O(N) quantum mechanical methods for excited states. Based on the locality of quantum mechanical systems, O(N) quantum mechanical methods for excited states comprise two categories, the time-domain and frequency-domain methods. The former solves the dynamics of the electronic systems in real time while the latter involves direct evaluation of the electronic response in the frequency domain. The localized density matrix (LDM) method is the first and most mature linear-scaling quantum mechanical method for excited states. It has been implemented in both the time and frequency domains. The O(N) time-domain methods also include the approach that solves the time-dependent Kohn-Sham (TDKS) equation using the non-orthogonal localized molecular orbitals (NOLMOs). Besides the frequency-domain LDM method, other O(N) frequency-domain methods have been proposed and implemented at the first-principles level. Except for one-dimensional or quasi-one-dimensional systems, the O(N) frequency-domain methods are often not applicable to resonant responses because of the convergence problem. For linear response, the most efficient O(N) first-principles method is found to be the LDM method with Chebyshev expansion for time integration. For off-resonant response (including nonlinear properties) at a specific frequency, the frequency-domain methods with iterative solvers are quite efficient and thus practical. For nonlinear response, both on-resonance and off-resonance, the time-domain methods can be used; however, as the time-domain first-principles methods are quite expensive, time-domain O(N) semi-empirical methods are often the practical choice. Compared to the O(N) frequency-domain methods, the O(N) time-domain methods for excited states are much more mature and numerically stable, and

  7. Fabricating a multi-level barrier-integrated microfluidic device using grey-scale photolithography

    International Nuclear Information System (INIS)

    Nam, Yoonkwang; Kim, Minseok; Kim, Taesung

    2013-01-01

    Most polymer-replica-based microfluidic devices are fabricated using standard soft-lithography technology, so that multi-level masters (MLMs) require multiple spin-coatings, mask alignments, exposures, developments, and baking steps. In this paper, we describe a simple method for fabricating MLMs for planar microfluidic channels with multi-level barriers (MLBs). A single photomask is necessary for standard photolithography technology to create a polydimethylsiloxane grey-scale photomask (PGSP), which adjusts the total amount of UV absorption in a negative-tone photoresist via a wide range of dye concentrations. Since the PGSP in turn adjusts the degree of cross-linking of the photoresist, this method enables the fabrication of MLMs for an MLB-integrated microfluidic device. Since the PGSP-based soft-lithography technology provides a simple but powerful fabrication method for MLBs in a microfluidic device, we believe that the fabrication method can be widely used for micro total analysis systems that benefit from MLBs. We demonstrate an MLB-integrated microfluidic device that can separate microparticles. (paper)
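
    The principle can be illustrated with the Beer-Lambert law: the dye concentration in the PDMS mask sets the UV dose that reaches the resist, and hence how deep the negative resist cross-links. All numbers below are illustrative assumptions, not values from the paper:

```python
# Beer-Lambert illustration of the PGSP principle: higher dye concentration
# means less transmitted UV dose and therefore a thinner cross-linked layer.
eps = 120.0        # molar absorptivity of the dye [L mol^-1 cm^-1] (assumed)
thickness = 0.1    # PGSP layer thickness [cm] (assumed)
dose_in = 200.0    # incident UV dose [mJ/cm^2] (assumed)

for c in [0.0, 0.005, 0.01, 0.02]:           # dye concentrations [mol/L]
    dose_out = dose_in * 10.0 ** (-eps * c * thickness)
    print(f"c = {c:.3f} mol/L -> transmitted dose {dose_out:6.1f} mJ/cm^2")
```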

  8. A multiple-scale power series method for solving nonlinear ordinary differential equations

    Directory of Open Access Journals (Sweden)

    Chein-Shan Liu

    2016-02-01

    The power series solution is a cheap and effective method to solve nonlinear problems, like the Duffing-van der Pol oscillator, the Volterra population model and nonlinear boundary value problems. A novel power series method is developed by considering multiple scales $R_k$ in the power term $(t/R_k)^k$, which are derived explicitly to reduce the ill-conditioned behavior in the data interpolation. In this method, multiplying a huge value by a tiny value is avoided, which decreases the numerical instability that is the main cause of failure of the conventional power series method. The multiple scales derived from an integral can be used in the power series expansion, which provides very accurate numerical solutions of the problems considered in this paper.
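
    A minimal numerical illustration of why the scaled basis helps: the scaled monomials $(t/R_k)^k$ keep the design matrix far better conditioned than raw powers $t^k$. The simple choice $R_k = t_{\max}$ below is a stand-in for the integral-derived scales of the paper:

```python
import numpy as np

t = np.linspace(0.0, 10.0, 50)
y = np.sin(t)                                # sample data to fit

order = 12
A_plain = np.vander(t, order + 1, increasing=True)        # raw monomials t^k

R = t.max()                                               # stand-in scale choice
A_scaled = np.column_stack([(t / R) ** k for k in range(order + 1)])

print(np.linalg.cond(A_plain), np.linalg.cond(A_scaled))  # scaled is far smaller
coeff, *_ = np.linalg.lstsq(A_scaled, y, rcond=None)      # stable least-squares fit
```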

  9. Evaluation of integration methods for hybrid simulation of complex structural systems through collapse

    Science.gov (United States)

    Del Carpio R., Maikol; Hashemi, M. Javad; Mosqueda, Gilberto

    2017-10-01

    This study examines the performance of integration methods for hybrid simulation of large and complex structural systems in the context of structural collapse due to seismic excitations. The target application is not necessarily real-time testing, but rather models that involve large-scale physical sub-structures and highly nonlinear numerical models. Four case studies are presented and discussed. In the first case study, the accuracy of integration schemes, including two widely used methods, namely a modified version of the implicit Newmark method with a fixed number of iterations (iterative) and the operator-splitting method (non-iterative), is examined through pure numerical simulations. The second case study presents the results of 10 hybrid simulations repeated with the two aforementioned integration methods considering various time steps and fixed numbers of iterations for the iterative integration method. The physical sub-structure in these tests consists of a single-degree-of-freedom (SDOF) cantilever column with replaceable steel coupons that provides repeatable highly nonlinear behavior including fracture-type strength and stiffness degradations. In case study three, the implicit Newmark method with a fixed number of iterations is applied for hybrid simulations of a 1:2 scale steel moment frame that includes a relatively complex nonlinear numerical substructure. Lastly, a more complex numerical substructure is considered by constructing a nonlinear computational model of a moment frame coupled to a hybrid model of a 1:2 scale steel gravity frame. The last two case studies are conducted on the same prototype structure, and the selection of time steps and fixed numbers of iterations is closely examined in pre-test simulations. The generated unbalanced forces are used as an index to track the equilibrium error and predict the accuracy and stability of the simulations.
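
    A minimal sketch of the iterative scheme discussed above, an implicit Newmark (average acceleration) step with a fixed number of Newton iterations for a nonlinear SDOF system, may clarify the idea. The function names and the SDOF setting are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def newmark_fixed_iter(m, c, fs, kt, p, dt, n_iter=3, beta=0.25, gamma=0.5):
    """Implicit Newmark with a fixed number of Newton iterations per step
    for the nonlinear SDOF system m*a + c*v + fs(u) = p(t).
    fs is the restoring force and kt its tangent stiffness."""
    n = len(p)
    u, v = np.zeros(n), np.zeros(n)
    a = (p[0] - fs(0.0)) / m                 # initial acceleration (at rest)
    for i in range(n - 1):
        ui = u[i]                            # predictor: previous displacement
        for _ in range(n_iter):              # fixed number of iterations
            ai = (ui - u[i] - dt * v[i]) / (beta * dt**2) - (0.5 / beta - 1) * a
            vi = v[i] + dt * ((1 - gamma) * a + gamma * ai)
            res = p[i + 1] - m * ai - c * vi - fs(ui)
            keff = kt(ui) + gamma * c / (beta * dt) + m / (beta * dt**2)
            ui += res / keff                 # Newton update via effective stiffness
        a_new = (ui - u[i] - dt * v[i]) / (beta * dt**2) - (0.5 / beta - 1) * a
        v[i + 1] = v[i] + dt * ((1 - gamma) * a + gamma * a_new)
        u[i + 1], a = ui, a_new
    return u, v

# Usage with a softening (tanh) spring under a sinusoidal load
t = np.linspace(0.0, 5.0, 501)
u, v = newmark_fixed_iter(1.0, 0.5,
                          fs=lambda u: 100.0 * np.tanh(u),
                          kt=lambda u: 100.0 / np.cosh(u) ** 2,
                          p=np.sin(2.0 * t), dt=t[1] - t[0])
```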

  10. Grey Language Hesitant Fuzzy Group Decision Making Method Based on Kernel and Grey Scale.

    Science.gov (United States)

    Li, Qingsheng; Diao, Yuzhu; Gong, Zaiwu; Hu, Aqin

    2018-03-02

    Based on grey language multi-attribute group decision making, a kernel and grey scale scoring function is put forward according to the definition of grey language and the meaning of the kernel and grey scale. The function introduces grey scale into the decision-making method to avoid information distortion. This method is applied to grey language hesitant fuzzy group decision making, and the grey correlation degree is used to rank the schemes. The effectiveness and practicability of the decision-making method are further verified by an example evaluating the sustainable development ability of a circular-economy industry chain. Moreover, its simplicity and feasibility are verified by comparing it with the traditional grey language decision-making method and the grey language hesitant fuzzy weighted arithmetic averaging (GLHWAA) operator integration method after determining the index weight based on the grey correlation.

  11. Integrated multi-scale modelling and simulation of nuclear fuels

    International Nuclear Information System (INIS)

    Valot, C.; Bertolus, M.; Masson, R.; Malerba, L.; Rachid, J.; Besmann, T.; Phillpot, S.; Stan, M.

    2015-01-01

    This chapter aims at discussing the objectives, implementation and integration of multi-scale modelling approaches applied to nuclear fuel materials. We will first show why the multi-scale modelling approach is required, due to the nature of the materials and by the phenomena involved under irradiation. We will then present the multiple facets of multi-scale modelling approach, while giving some recommendations with regard to its application. We will also show that multi-scale modelling must be coupled with appropriate multi-scale experiments and characterisation. Finally, we will demonstrate how multi-scale modelling can contribute to solving technology issues. (authors)

  12. Positive semidefinite tensor factorizations of the two-electron integral matrix for low-scaling ab initio electronic structure.

    Science.gov (United States)

    Hoy, Erik P; Mazziotti, David A

    2015-08-14

    Tensor factorization of the 2-electron integral matrix is a well-known technique for reducing the computational scaling of ab initio electronic structure methods toward that of Hartree-Fock and density functional theories. The simplest factorization that maintains the positive semidefinite character of the 2-electron integral matrix is the Cholesky factorization. In this paper, we introduce a family of positive semidefinite factorizations that generalize the Cholesky factorization. Using an implementation of the factorization within the parametric 2-RDM method [D. A. Mazziotti, Phys. Rev. Lett. 101, 253002 (2008)], we study several inorganic molecules, alkane chains, and potential energy curves and find that this generalized factorization retains the accuracy and size extensivity of the Cholesky factorization, even in the presence of multi-reference correlation. The generalized family of positive semidefinite factorizations has potential applications to low-scaling ab initio electronic structure methods that treat electron correlation with a computational cost approaching that of the Hartree-Fock method or density functional theory.
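
    For orientation, the baseline the paper generalizes is the pivoted Cholesky factorization of a positive semidefinite matrix, sketched below on a toy PSD matrix; the stopping tolerance and test data are illustrative assumptions:

```python
import numpy as np

def pivoted_cholesky(M, tol=1e-8):
    """Low-rank pivoted Cholesky factorization M ~ L @ L.T of a positive
    semidefinite matrix. Stops once the largest remaining diagonal error
    falls below tol."""
    d = np.diag(M).astype(float).copy()      # residual diagonal of M - L L^T
    cols = []
    while len(cols) < M.shape[0] and d.max() > tol:
        piv = int(np.argmax(d))              # pivot on largest residual
        l = M[:, piv].astype(float) - sum(c * c[piv] for c in cols)
        l /= np.sqrt(d[piv])
        cols.append(l)
        d -= l * l
    return np.column_stack(cols)

# Example: a PSD "2-electron-integral-like" matrix of low numerical rank
rng = np.random.default_rng(0)
B = rng.normal(size=(50, 8))
M = B @ B.T
L = pivoted_cholesky(M)
print(L.shape, np.abs(M - L @ L.T).max())    # rank ~8, error near tol
```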

  13. Positive semidefinite tensor factorizations of the two-electron integral matrix for low-scaling ab initio electronic structure

    Energy Technology Data Exchange (ETDEWEB)

    Hoy, Erik P.; Mazziotti, David A., E-mail: damazz@uchicago.edu [Department of Chemistry and The James Franck Institute, The University of Chicago, Chicago, Illinois 60637 (United States)

    2015-08-14

    Tensor factorization of the 2-electron integral matrix is a well-known technique for reducing the computational scaling of ab initio electronic structure methods toward that of Hartree-Fock and density functional theories. The simplest factorization that maintains the positive semidefinite character of the 2-electron integral matrix is the Cholesky factorization. In this paper, we introduce a family of positive semidefinite factorizations that generalize the Cholesky factorization. Using an implementation of the factorization within the parametric 2-RDM method [D. A. Mazziotti, Phys. Rev. Lett. 101, 253002 (2008)], we study several inorganic molecules, alkane chains, and potential energy curves and find that this generalized factorization retains the accuracy and size extensivity of the Cholesky factorization, even in the presence of multi-reference correlation. The generalized family of positive semidefinite factorizations has potential applications to low-scaling ab initio electronic structure methods that treat electron correlation with a computational cost approaching that of the Hartree-Fock method or density functional theory.

  14. Integrated fringe projection 3D scanning system for large-scale metrology based on laser tracker

    Science.gov (United States)

    Du, Hui; Chen, Xiaobo; Zhou, Dan; Guo, Gen; Xi, Juntong

    2017-10-01

    Large-scale components are widespread in the advanced manufacturing industry, and 3D profilometry plays a pivotal role in their quality control. This paper proposes a flexible, robust large-scale 3D scanning system that integrates a robot with a binocular structured-light scanner and a laser tracker. The measurement principle and construction of the integrated system are introduced, and a mathematical model is established for the global data fusion. Subsequently, a flexible and robust method and mechanism are introduced for establishing the end coordinate system. Based on this method, a virtual robot noumenon is constructed for hand-eye calibration, and the transformation matrix between the end coordinate system and the world coordinate system is solved. A validation experiment was implemented to verify the proposed algorithms: first, the hand-eye transformation matrix is solved; then a car body rear is measured 16 times to verify the global data fusion algorithm, and the 3D shape of the rear is reconstructed successfully.
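
    The data-fusion step amounts to chaining homogeneous transforms so that a point measured in the scanner frame lands in the tracker's world frame. The sketch below assumes identity rotations and placeholder translations; T_end_cam stands for the hand-eye matrix the paper solves for:

```python
import numpy as np

def hom(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

T_world_end = hom(np.eye(3), np.array([1.0, 0.2, 0.5]))  # robot end pose (from tracker)
T_end_cam = hom(np.eye(3), np.array([0.0, 0.05, 0.1]))   # hand-eye calibration result

p_cam = np.array([0.1, 0.1, 0.3, 1.0])                   # point measured by the scanner
p_world = T_world_end @ T_end_cam @ p_cam                # fused into the world frame
print(p_world[:3])
```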

  15. Applying recursive numerical integration techniques for solving high dimensional integrals

    International Nuclear Information System (INIS)

    Ammon, Andreas; Genz, Alan; Hartung, Tobias; Jansen, Karl; Volmer, Julia; Leoevey, Hernan

    2016-11-01

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
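
    A minimal sketch of the RNI idea for a nearest-neighbour product integrand: with a Gaussian quadrature rule, the d-dimensional integral collapses into a chain of small matrix-vector products, one per integrated variable. The kernel and dimension below are illustrative assumptions:

```python
import numpy as np

def rni_chain(kernel, d, n_nodes=16):
    """Recursive numerical integration of
    int prod_{i=1}^{d-1} K(x_i, x_{i+1}) dx_1...dx_d over [-1, 1]^d
    using Gauss-Legendre quadrature: I = s^T T^(d-1) s."""
    x, w = np.polynomial.legendre.leggauss(n_nodes)
    K = kernel(x[:, None], x[None, :])          # kernel evaluated on the node grid
    T = np.sqrt(w)[:, None] * K * np.sqrt(w)[None, :]
    s = np.sqrt(w)
    v = s.copy()
    for _ in range(d - 1):                      # integrate one variable at a time
        v = T @ v
    return s @ v

# A 10-dimensional integral from just 9 small matrix-vector products
val = rni_chain(lambda a, b: np.exp(-(a - b) ** 2), d=10)
print(val)
```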

  16. Applying recursive numerical integration techniques for solving high dimensional integrals

    Energy Technology Data Exchange (ETDEWEB)

    Ammon, Andreas [IVU Traffic Technologies AG, Berlin (Germany); Genz, Alan [Washington State Univ., Pullman, WA (United States). Dept. of Mathematics; Hartung, Tobias [King' s College, London (United Kingdom). Dept. of Mathematics; Jansen, Karl; Volmer, Julia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leoevey, Hernan [Humboldt Univ. Berlin (Germany). Inst. fuer Mathematik

    2016-11-15

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.

  17. Integral equation methods for electromagnetics

    CERN Document Server

    Volakis, John

    2012-01-01

    This text/reference is a detailed look at the development and use of integral equation methods for electromagnetic analysis, specifically for antennas and radar scattering. Developers and practitioners will appreciate the broad-based approach to understanding and utilizing integral equation methods and the unique coverage of historical developments that led to the current state-of-the-art. In contrast to existing books, Integral Equation Methods for Electromagnetics lays the groundwork in the initial chapters so students and basic users can solve simple problems and work their way up to the mo

  18. OffshoreDC DC grids for integration of large scale wind power

    DEFF Research Database (Denmark)

    Zeni, Lorenzo; Endegnanew, Atsede Gualu; Stamatiou, Georgios

    The present report summarizes the main findings of the Nordic Energy Research project “DC grids for large scale integration of offshore wind power – OffshoreDC”. The project was funded by Nordic Energy Research through the TFI programme and was active between 2011 and 2016. The overall objective of the project was to drive the development of the VSC based HVDC technology for future large scale offshore grids, supporting a standardised and commercial development of the technology, and improving the opportunities for the technology to support power system integration of large scale offshore...

  19. CMT scaling analysis and distortion evaluation in passive integral test facility

    International Nuclear Information System (INIS)

    Deng Chengcheng; Qin Benke; Wang Han; Chang Huajian

    2013-01-01

    The core makeup tank (CMT) is a crucial device of the AP1000 passive core cooling system, and reasonable scaling analysis of the CMT plays a key role in the design of passive integral test facilities. The H2TS method was used to perform scaling analysis for both the circulating mode and the draining mode of the CMT. The similarity criteria for the important CMT processes were then applied in the CMT scaling design of the ACME (advanced core-cooling mechanism experiment) facility now being built in China. Furthermore, the scaling distortions of the characteristic CMT Π groups of ACME were calculated. Finally, the causes of the scaling distortion were analyzed and a distortion evaluation was conducted for the ACME facility. The dominant processes of the CMT circulating mode can be adequately simulated in the ACME facility, but the steam condensation process during CMT draining is not well preserved because the excessive CMT mass leads to more energy being absorbed by cold metal. However, comprehensive analysis indicates that the ACME facility, with its high-pressure simulation scheme, is able to properly represent the important CMT phenomena and processes of the prototype nuclear plant. (authors)
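
    The distortion evaluation reduces to comparing characteristic dimensionless Π groups between prototype and model. The sketch below is an H2TS-style check with an invented Π group and made-up numbers, not ACME design values:

```python
# Compare a characteristic Pi group between prototype and scaled model.
def pi_condensation(q_cond, m_flow, h_fg):
    """Illustrative Pi group: condensation heat removal over draining enthalpy flow."""
    return q_cond / (m_flow * h_fg)

pi_proto = pi_condensation(q_cond=5.0e6, m_flow=40.0, h_fg=1.0e6)   # prototype (made up)
pi_model = pi_condensation(q_cond=2.2e5, m_flow=1.3, h_fg=1.0e6)    # facility (made up)

distortion = (pi_model - pi_proto) / pi_proto   # 0 means perfect similarity
print(f"Pi_proto = {pi_proto:.3f}, Pi_model = {pi_model:.3f}, "
      f"distortion = {distortion:+.1%}")
```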

  20. Non-destructive screening method for radiation hardened performance of large scale integration

    International Nuclear Information System (INIS)

    Zhou Dong; Xi Shanbin; Guo Qi; Ren Diyuan; Li Yudong; Sun Jing; Wen Lin

    2013-01-01

    The space radiation environment can induce radiation damage in electronic devices. As the performance of commercial devices is generally superior to that of radiation-hardened devices, it is desirable to screen out commercial devices with good radiation-hardened performance; applying these devices to space systems could improve system reliability. Combining mathematical regression analysis with different physical stressing experiments, we investigated a non-destructive screening method for the radiation-hardened performance of integrated circuits. The relationship between the change of typical parameters and the radiation performance of the circuit was discussed, and the irradiation-sensitive parameters were identified. A multiple linear regression equation for predicting the radiation performance was established. Finally, the regression equations under stress conditions were verified by practical irradiation. The results show that the reliability and accuracy of the non-destructive screening method can be improved by combining mathematical regression analysis with practical stressing experiments. (authors)
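
    The screening idea can be sketched as a multiple linear regression that predicts post-irradiation degradation from parameter shifts measured under non-destructive stress. The data below are synthetic; the real predictors and coefficients come from the paper's experiments:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))        # stress-induced shifts of 3 sensitive parameters
true_w = np.array([0.8, -0.3, 0.5])                # hidden relationship (synthetic)
y = X @ true_w + rng.normal(scale=0.1, size=40)    # observed degradation

A = np.column_stack([np.ones(len(X)), X])          # design matrix with intercept
coef, *_ = np.linalg.lstsq(A, y, rcond=None)       # fit the regression equation

def predicted_degradation(shifts):
    return coef[0] + shifts @ coef[1:]

# Devices whose predicted degradation stays below a threshold are screened in.
print(predicted_degradation(np.array([0.1, -0.2, 0.05])))
```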

  1. Diverse methods for integrable models

    NARCIS (Netherlands)

    Fehér, G.

    2017-01-01

    This thesis is centered around three topics sharing integrability as a common theme, and explores different methods in the field of integrable models. The first two chapters are about integrable lattice models in statistical physics; the last chapter describes an integrable quantum chain.

  2. A Large-Scale Design Integration Approach Developed in Conjunction with the Ares Launch Vehicle Program

    Science.gov (United States)

    Redmon, John W.; Shirley, Michael C.; Kinard, Paul S.

    2012-01-01

    This paper presents a method for performing large-scale design integration, taking a classical 2D drawing envelope and interface approach and applying it to modern three-dimensional computer aided design (3D CAD) systems. Today, the paradigm often used when performing design integration with 3D models involves a digital mockup of an overall vehicle, in the form of a massive, fully detailed CAD assembly, thereby adding unnecessary burden and overhead to design and product data management processes. While fully detailed data may yield a broad depth of design detail, pertinent integration features are often obscured under the excessive amounts of information, making them difficult to discern. In contrast, the envelope and interface method results in a reduction in both the amount and complexity of information necessary for design integration while yielding significant savings in time and effort when applied to today's complex design integration projects. This approach, combining classical and modern methods, proved advantageous during the complex design integration activities of the Ares I vehicle. Downstream processes that benefit from this approach through reduced development and design cycle time include: creation of analysis models for the aerodynamics discipline; vehicle-to-ground interface development; and documentation development for the vehicle assembly.

  3. Towards an integrated multiscale simulation of turbulent clouds on PetaScale computers

    International Nuclear Information System (INIS)

    Wang Lianping; Ayala, Orlando; Parishani, Hossein; Gao, Guang R; Kambhamettu, Chandra; Li Xiaoming; Rossi, Louis; Orozco, Daniel; Torres, Claudio; Grabowski, Wojciech W; Wyszogrodzki, Andrzej A; Piotrowski, Zbigniew

    2011-01-01

    The development of precipitating warm clouds is affected by several effects of small-scale air turbulence including enhancement of droplet-droplet collision rate by turbulence, entrainment and mixing at the cloud edges, and coupling of mechanical and thermal energies at various scales. Large-scale computation is a viable research tool for quantifying these multiscale processes. Specifically, top-down large-eddy simulations (LES) of shallow convective clouds typically resolve scales of turbulent energy-containing eddies while the effects of turbulent cascade toward viscous dissipation are parameterized. Bottom-up hybrid direct numerical simulations (HDNS) of cloud microphysical processes resolve fully the dissipation-range flow scales but only partially the inertial subrange scales. It is desirable to systematically decrease the grid length in LES and increase the domain size in HDNS so that they can be better integrated to address the full range of scales and their coupling. In this paper, we discuss computational issues and physical modeling questions in expanding the ranges of scales realizable in LES and HDNS, and in bridging LES and HDNS. We review our on-going efforts in transforming our simulation codes towards PetaScale computing, in improving physical representations in LES and HDNS, and in developing better methods to analyze and interpret the simulation results.

  4. Preface: Introductory Remarks: Linear Scaling Methods

    Science.gov (United States)

    Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.

    2008-07-01

    It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computer effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3] but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop is that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is non-linear-scaling) methods; this highlights the important question of crossover—that is, at what size of system does it become more efficient to use a linear-scaling method? As well as fundamental algorithmic questions, this brings up

  5. Extreme Scale FMM-Accelerated Boundary Integral Equation Solver for Wave Scattering

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed

    2018-03-27

    Algorithmic and architecture-oriented optimizations are essential for achieving performance worthy of anticipated energy-austere exascale systems. In this paper, we present an extreme scale FMM-accelerated boundary integral equation solver for wave scattering, which uses FMM as a matrix-vector multiplication inside the GMRES iterative method. Our FMM Helmholtz kernels treat nontrivial singular and near-field integration points. We implement highly optimized kernels for both shared and distributed memory, targeting emerging Intel extreme performance HPC architectures. We extract the potential thread- and data-level parallelism of the key Helmholtz kernels of FMM. Our application code is well optimized to exploit the AVX-512 SIMD units of Intel Skylake and Knights Landing architectures. We provide different performance models for tuning the task-based tree traversal implementation of FMM, and develop optimal architecture-specific and algorithm aware partitioning, load balancing, and communication reducing mechanisms to scale up to 6,144 compute nodes of a Cray XC40 with 196,608 hardware cores. With shared memory optimizations, we achieve roughly 77% of peak single precision floating point performance of a 56-core Skylake processor, and on average 60% of peak single precision floating point performance of a 72-core KNL. These numbers represent nearly 5.4x and 10x speedup on Skylake and KNL, respectively, compared to the baseline scalar code. With distributed memory optimizations, on the other hand, we report near-optimal efficiency in the weak scalability study with respect to both the logarithmic communication complexity as well as the theoretical scaling complexity of FMM. In addition, we exhibit up to 85% efficiency in strong scaling. We compute in excess of 2 billion DoF on the full-scale of the Cray XC40 supercomputer.
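
    The solver structure described above, GMRES driven by a user-supplied matrix-vector product, can be sketched as follows. Here the product with a Helmholtz kernel is formed densely for clarity; in the paper that matvec is what the FMM evaluates without ever assembling the matrix, and all data below are toy values:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
pts = rng.random((400, 3))                       # quadrature points on a scatterer
k = 2.0 * np.pi                                  # wavenumber (illustrative)

r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(r, 1.0)                         # placeholder, self-term zeroed below
G = np.exp(1j * k * r) / (4.0 * np.pi * r)       # free-space Green's function
np.fill_diagonal(G, 0.0)                         # drop the singular self-term

def matvec(x):                                   # second-kind operator (I + K) x
    return x + (G @ x) / len(pts)

op = LinearOperator((len(pts), len(pts)), matvec=matvec, dtype=complex)
b = rng.normal(size=len(pts)).astype(complex)
x, info = gmres(op, b)                           # info == 0 on convergence
print(info, np.linalg.norm(matvec(x) - b))
```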

  6. Methods for Quantifying the Uncertainties of LSIT Test Parameters, Test Results, and Full-Scale Mixing Performance Using Models Developed from Scaled Test Data

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, Gregory F. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Cooley, Scott K. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Kuhn, William L. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rector, David R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Heredia-Langner, Alejandro [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2015-05-01

    This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).

  7. Evaluation of scaling concepts for integral system test facilities

    International Nuclear Information System (INIS)

    Condie, K.G.; Larson, T.K.; Davis, C.B.

    1987-01-01

    A study was conducted by EG and G Idaho, Inc., to identify and technically evaluate potential concepts which will allow the U.S. Nuclear Regulatory Commission to maintain the capability to conduct future integral, thermal-hydraulic facility experiments of interest to light water reactor safety. This paper summarizes the methodology used in the study and presents a ranking for each facility concept relative to its ability to simulate phenomena identified as important in selected reactor transients in Babcock and Wilcox and Westinghouse large pressurized water reactors. Established scaling methodologies are used to develop potential concepts for scaled integral thermal-hydraulic experiment facilities. Concepts selected included: full height, full pressure water; reduced height, reduced pressure water; reduced height, full pressure water; one-tenth linear, full pressure water; and reduced height, full scaled pressure Freon. Results from this study suggest that a facility capable of operating at typical reactor operating conditions will scale most phenomena reasonably well. Local heat transfer phenomena are best scaled by the full height facility, while the reduced height facilities provide better scaling where multi-dimensional phenomena are considered important. Although many phenomena in facilities using Freon or water at nontypical pressure will scale reasonably well, those phenomena which are heavily dependent on quality can be distorted. Furthermore, relating data produced in facilities operating with nontypical fluids or at nontypical pressures to large plants will be a difficult and time-consuming process.

  8. Argentinean integrated small reactor design and scale economy analysis of integrated reactor

    International Nuclear Information System (INIS)

    Florido, P. C.; Bergallo, J. E.; Ishida, M. V.

    2000-01-01

    This paper describes the design of CAREM, the Argentinean integrated small reactor project, and the results of a scale economy analysis of integrated reactors. The CAREM project consists of the development, design and construction of a small nuclear power plant. CAREM is an advanced reactor conceived with new-generation design solutions and building on the extensive experience accumulated in the safe operation of light water reactors. CAREM is an indirect-cycle reactor with some distinctive and characteristic features that greatly simplify the reactor and also contribute to a high level of safety: an integrated primary cooling system, self-pressurization, primary cooling by natural circulation, and safety systems relying on passive features. For a fully coupled economic evaluation of integrated reactors performed with the IREP (Integrated Reactor Evaluation Program) code transferred to the IAEA, CAREM has been used as a reference point. The results show that integrated reactors become competitive, at powers larger than 200 MWe, with the cheapest Argentinean electricity option. Due to reactor pressure vessel construction limits, low-pressure-drop steam generators are used to reach a power output of 200 MWe with natural circulation. With forced circulation, 300 MWe can be achieved. (author)

  9. Fractionaly Integrated Flux model and Scaling Laws in Weather and Climate

    Science.gov (United States)

    Schertzer, Daniel; Lovejoy, Shaun

    2013-04-01

    The Fractionally Integrated Flux model (FIF) has been extensively used to model intermittent observables, like the velocity field, by defining them with the help of a fractional integration of a conservative (i.e. strictly scale invariant) flux, such as the turbulent energy flux. It indeed corresponds to a well-defined modelling that yields the observed scaling laws. Generalised Scale Invariance (GSI) enables FIF to deal with anisotropic fractional integrations and has been rather successful in defining and modelling a unique regime of scaling anisotropic turbulence up to planetary scales. This turbulence has an effective dimension of 23/9=2.55... instead of the classical hypothesised 2D and 3D turbulent regimes, respectively for large and small spatial scales. It therefore theoretically eliminates a non-plausible "dimension transition" between these two regimes and the resulting requirement of a turbulent energy "mesoscale gap", whose empirical evidence has been brought more and more into question. More recently, GSI-FIF was used to analyse climate, therefore at much larger time scales. Indeed, the 23/9-dimensional regime necessarily breaks up at the outer spatial scales. The corresponding transition range, which can be called "macroweather", seems to have many interesting properties, e.g. it rather corresponds to a fractional differentiation in time with a roughly flat frequency spectrum. Furthermore, this transition yields the possibility of scaling space-time climate fluctuations at much larger time scales, with a much stronger scaling anisotropy between time and space. Lovejoy, S. and D. Schertzer (2013). The Weather and Climate: Emergent Laws and Multifractal Cascades. Cambridge Press (in press). Schertzer, D. et al. (1997). Fractals 5(3): 427-471. Schertzer, D. and S. Lovejoy (2011). International Journal of Bifurcation and Chaos 21(12): 3417-3456.
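
    For orientation, the FIF construction is commonly summarized as below. This is a sketch in standard multifractal notation; the symbols are assumptions of this note rather than quotations from the abstract:

$$
\Delta v(\Delta x) = \varphi_{\Delta x}\,\Delta x^{H},
\qquad
\langle \varphi_{\lambda}^{\,q} \rangle \sim \lambda^{K(q)},
\qquad
\langle |\Delta v(\Delta x)|^{q} \rangle \sim \Delta x^{\,\xi(q)},
\quad \xi(q) = qH - K(q),
$$

    where $\varphi$ is the conservative flux, $H$ the order of fractional integration, $\lambda = L/\Delta x$ the scale ratio, and $K(q)$ the moment scaling function of the flux.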

  10. Methods for Quantifying the Uncertainties of LSIT Test Parameters, Test Results, and Full-Scale Mixing Performance Using Models Developed from Scaled Test Data

    International Nuclear Information System (INIS)

    Piepel, Gregory F.; Cooley, Scott K.; Kuhn, William L.; Rector, David R.; Heredia-Langner, Alejandro

    2015-01-01

    This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).

  11. Guide for Regional Integrated Assessments: Handbook of Methods and Procedures, Version 5.1. Appendix 1

    Science.gov (United States)

    Rosenzweig, Cynthia E.; Jones, James W.; Hatfield, Jerry; Antle, John; Ruane, Alex; Boote, Ken; Thorburn, Peter; Valdivia, Roberto; Porter, Cheryl; Janssen, Sander

    2015-01-01

    The purpose of this handbook is to describe recommended methods for a trans-disciplinary, systems-based approach for regional-scale (local to national scale) integrated assessment of agricultural systems under future climate, bio-physical and socio-economic conditions. An earlier version of this Handbook was developed and used by several AgMIP Regional Research Teams (RRTs) in Sub-Saharan Africa (SSA) and South Asia (SA)(AgMIP handbook version 4.2, www.agmip.org/regional-integrated-assessments-handbook/). In contrast to the earlier version, which was written specifically to guide a consistent set of integrated assessments across SSA and SA, this version is intended to be more generic such that the methods can be applied to any region globally. These assessments are the regional manifestation of research activities described by AgMIP in its online protocols document (available at www.agmip.org). AgMIP Protocols were created to guide climate, crop modeling, economics, and information technology components of its projects.

  12. Time Scale in Least Square Method

    Directory of Open Access Journals (Sweden)

    Özgür Yeniay

    2014-01-01

    The study of dynamic equations on time scales is a new area of mathematics. Time scale calculus tries to build a bridge between the real numbers and the integers. Two derivatives on time scales have been introduced, called the delta and nabla derivatives: the delta derivative is defined in the forward direction, and the nabla derivative in the backward direction. Within the scope of this study, we consider the method of obtaining the parameters of a regression equation over integer values through time scales. We therefore implemented the least squares method according to the time-scale derivative definitions and obtained the coefficients of the model. Here there exist two sets of coefficients for the same model, originating from the forward and backward jump operators, which differ from each other. Such a situation amounts to taking the total of the vertical deviations between the regression equations of the forward and backward jump operators and the observed values, divided by two. We also estimated coefficients for the model using the ordinary least squares method. As a result, we give an introduction to the least squares method on time scales. We think that time scale theory offers a new vision for least squares, especially when the assumptions of linear regression are violated.

  13. Multidimensional quantum entanglement with large-scale integrated optics

    DEFF Research Database (Denmark)

    Wang, Jianwei; Paesani, Stefano; Ding, Yunhong

    2018-01-01

    The ability to control multidimensional quantum systems is key for the investigation of fundamental science and for the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimension up to 15 × 15 on a large-scale silicon-photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality...

  14. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    Energy Technology Data Exchange (ETDEWEB)

    Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Max Planck Institute for Chemical Energy Conversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr (Germany); Valeev, Edward F., E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de [Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24061 (United States)

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in
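
    A minimal sketch of the sparse-map concept and two of its elementary operations (chaining and intersection) is given below. The locality data are invented for illustration and do not reflect the paper's actual data structures or API:

```python
from collections import defaultdict

def chain(m1, m2):
    """Compose sparse maps: i -> union of m2[j] over j in m1[i]."""
    out = defaultdict(set)
    for i, js in m1.items():
        for j in js:
            out[i] |= m2.get(j, set())
    return dict(out)

def intersect(m1, m2):
    """Per-source intersection of two sparse maps over the same index sets."""
    return {i: m1[i] & m2.get(i, set()) for i in m1}

# Hypothetical locality data: atom -> nearby basis shells, and
# shell -> auxiliary fitting functions in its density-fitting domain.
atom_to_shell = {0: {0, 1}, 1: {1, 2}}
shell_to_aux = {0: {10}, 1: {10, 11}, 2: {12}}

atom_to_aux = chain(atom_to_shell, shell_to_aux)   # aux functions an atom touches
print(atom_to_aux)   # {0: {10, 11}, 1: {10, 11, 12}}
```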

  15. A numerical integration-based yield estimation method for integrated circuits

    International Nuclear Information System (INIS)

    Liang Tao; Jia Xinzhang

    2011-01-01

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)
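
    The flow of the method can be sketched as follows: transform skewed simulated performances toward normality with Box-Cox, fit a multivariate normal, and integrate it over the acceptability region. Plain Monte Carlo stands in for both OA-MLHS sampling and the numerical integration here, and all data and spec limits are synthetic:

```python
import numpy as np
from scipy import stats
from scipy.special import inv_boxcox

rng = np.random.default_rng(1)
perf = rng.lognormal(mean=0.0, sigma=0.4, size=(5000, 2))  # two circuit performances
spec_lo, spec_hi = 0.5, 2.5                                # acceptability region

z = np.empty_like(perf)
lams = []
for j in range(perf.shape[1]):
    z[:, j], lam = stats.boxcox(perf[:, j])                # transform toward normal
    lams.append(lam)

mu, cov = z.mean(axis=0), np.cov(z, rowvar=False)          # fitted density parameters

zs = rng.multivariate_normal(mu, cov, size=200_000)        # integrate by sampling
x = np.column_stack([inv_boxcox(zs[:, j], lams[j]) for j in range(2)])
ok = np.all((x > spec_lo) & (x < spec_hi), axis=1)
print("estimated parametric yield:", ok.mean())
```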

  16. A numerical integration-based yield estimation method for integrated circuits

    Energy Technology Data Exchange (ETDEWEB)

    Liang Tao; Jia Xinzhang, E-mail: tliang@yahoo.cn [Key Laboratory of Ministry of Education for Wide Bandgap Semiconductor Materials and Devices, School of Microelectronics, Xidian University, Xi' an 710071 (China)

    2011-04-15

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)

  17. A promising future for integrative biodiversity research: an increased role of scale-dependency and functional biology

    Science.gov (United States)

    Schmitz, L.

    2016-01-01

    Studies into the complex interaction between an organism and changes to its biotic and abiotic environment are fundamental to understanding what regulates biodiversity. These investigations occur at many phylogenetic, temporal and spatial scales and within a variety of biological and geological disciplines but often in relative isolation. This issue focuses on what can be achieved when ecological mechanisms are integrated into analyses of deep-time biodiversity patterns through the union of fossil and extant data and methods. We expand upon this perspective to argue that, given its direct relevance to the current biodiversity crisis, greater integration is needed across biodiversity research. We focus on the need to understand scaling effects, how lower-level ecological and evolutionary processes scale up and vice versa, and the importance of incorporating functional biology. Placing function at the core of biodiversity research is fundamental, as it establishes how an organism interacts with its abiotic and biotic environment and it is functional diversity that ultimately determines important ecosystem processes. To achieve full integration, concerted and ongoing efforts are needed to build a united and interactive community of biodiversity researchers, with education and interdisciplinary training at its heart. PMID:26977068

  18. A promising future for integrative biodiversity research: an increased role of scale-dependency and functional biology.

    Science.gov (United States)

    Price, S A; Schmitz, L

    2016-04-05

    Studies into the complex interaction between an organism and changes to its biotic and abiotic environment are fundamental to understanding what regulates biodiversity. These investigations occur at many phylogenetic, temporal and spatial scales and within a variety of biological and geological disciplines but often in relative isolation. This issue focuses on what can be achieved when ecological mechanisms are integrated into analyses of deep-time biodiversity patterns through the union of fossil and extant data and methods. We expand upon this perspective to argue that, given its direct relevance to the current biodiversity crisis, greater integration is needed across biodiversity research. We focus on the need to understand scaling effects, how lower-level ecological and evolutionary processes scale up and vice versa, and the importance of incorporating functional biology. Placing function at the core of biodiversity research is fundamental, as it establishes how an organism interacts with its abiotic and biotic environment and it is functional diversity that ultimately determines important ecosystem processes. To achieve full integration, concerted and ongoing efforts are needed to build a united and interactive community of biodiversity researchers, with education and interdisciplinary training at its heart. © 2016 The Author(s).

  19. PIV measurements of the turbulence integral length scale on cold combustion flow field of tangential firing boiler

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Wen-fei; Xie, Jing-xing; Gong, Zhi-jun; Li, Bao-wei [Inner Mongolia Univ. of Science and Technology, Baotou (China). Inner Mongolia Key Lab. for Utilization of Bayan Obo Multi-Metallic Resources: Elected State Key Lab.

    2013-07-01

    The process of pulverized coal combustion in a tangential firing boiler has prominent significance for improving boiler operation efficiency and reducing NOx emissions. To study the complex turbulent vortex coherent structures formed by the four corner jets in the burner zone, a cold experimental model of a tangential firing boiler has been built. By employing the spatial correlation analysis method and the PIV (Particle Image Velocimetry) technique, the vortex scale distribution on three typical horizontal layers of the model has been investigated in terms of the turbulence Integral Length Scale (ILS). The correlation analysis of the ILS and the time-averaged velocity shows that the turbulent vortex scale distribution in the burner zone of the model is affected by both the jet velocity and the position of the wind layers, and does not vary linearly with jet velocity. The vortex scale distribution of the upper primary air is significantly different from the others. Studying the ILS of the turbulent vortices therefore provides theoretical guidance for the high-efficiency clean combustion of pulverized coal.
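
    A common way to extract the ILS from one spatial row of PIV velocity fluctuations is to integrate the normalized autocorrelation up to its first zero crossing; the sketch below uses synthetic data and a simple biased estimator, which are assumptions of this note rather than the paper's exact procedure:

```python
import numpy as np

def integral_length_scale(u, dx):
    """Estimate the turbulence integral length scale ILS = int R(r) dr
    from a 1D velocity-fluctuation record sampled at spacing dx."""
    u = u - u.mean()
    n = len(u)
    corr = np.correlate(u, u, mode="full")[n - 1:]
    corr = corr / corr[0]                  # normalized (biased) autocorrelation R(r)
    zero = np.argmax(corr <= 0.0)          # index of first zero crossing
    if zero == 0:                          # no crossing found: use full record
        zero = len(corr)
    return corr[:zero].sum() * dx          # rectangle-rule integration

# Stand-in data: a synthetic velocity row with a ~20-sample correlation length
rng = np.random.default_rng(2)
u = np.convolve(rng.normal(size=4096), np.ones(20) / 20, mode="same")
print(integral_length_scale(u, dx=1e-3))
```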

  20. Battery energy storage systems: Assessment for small-scale renewable energy integration

    Energy Technology Data Exchange (ETDEWEB)

    Nair, Nirmal-Kumar C.; Garimella, Niraj [Power Systems Group, Department of Electrical and Computer Engineering, The University of Auckland, 38 Princes Street, Science Centre, Auckland 1142 (New Zealand)

    2010-11-15

    Concerns arising due to the variability and intermittency of renewable energy sources while integrating with the power grid can be mitigated to an extent by incorporating a storage element within the renewable energy harnessing system. Thus, battery energy storage systems (BESS) are likely to have a significant impact on the small-scale integration of renewable energy sources into commercial buildings and residential dwellings. These storage technologies not only enable improvements in consumption levels from renewable energy sources but also provide a range of technical and monetary benefits. This paper provides a modelling framework to quantify the associated benefits of renewable resource integration, followed by an overview of various small-scale energy storage technologies. A simple, practical and comprehensive assessment of battery energy storage technologies for small-scale renewable applications based on their technical merit and economic feasibility is presented. Software such as Simulink and HOMER provides the platforms for the technical and economic assessments of the battery technologies, respectively. (author)

  1. The method of arbitrarily large moments to calculate single scale processes in quantum field theory

    Energy Technology Data Exchange (ETDEWEB)

    Bluemlein, Johannes [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Schneider, Carsten [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation (RISC)

    2017-01-15

    We devise a new method to calculate a large number of Mellin moments of single-scale quantities using the systems of differential and/or difference equations obtained by integration-by-parts identities between the corresponding Feynman integrals of loop corrections to physical quantities. These scalar quantities have a much simpler mathematical structure than the complete quantity. A sufficiently large set of moments may even allow the analytic reconstruction of the whole quantity considered, which holds in the case of first-order factorizing systems. In any case, one may derive highly precise numerical representations using this method, which is otherwise completely analytic.
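
    A toy version of the moment-generation step may help: suppose IBP identities yield a first-order recurrence for the Mellin moments of some single-scale quantity; arbitrarily many exact rational moments then follow by simple iteration. The recurrence below is invented for illustration (it gives M(N) = 1/N) and is not from the paper:

```python
from fractions import Fraction

def moments(n_max):
    """Iterate the invented recurrence (N + 1) * M(N + 1) = N * M(N), M(1) = 1,
    producing exact rational Mellin moments M(1..n_max)."""
    M = {1: Fraction(1)}
    for N in range(1, n_max):
        M[N + 1] = Fraction(N, N + 1) * M[N]
    return M

print(moments(8))   # {1: 1, 2: 1/2, 3: 1/3, ...}
```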

  2. Timing of Formal Phase Safety Reviews for Large-Scale Integrated Hazard Analysis

    Science.gov (United States)

    Massie, Michael J.; Morris, A. Terry

    2010-01-01

    Integrated hazard analysis (IHA) is a process used to identify and control unacceptable risk. As such, it does not occur in a vacuum. IHA approaches must be tailored to fit the system being analyzed. Physical, resource, organizational and temporal constraints on large-scale integrated systems impose additional direct or derived requirements on the IHA. The timing and interaction between engineering and safety organizations can provide either benefits or hindrances to the overall end product. The traditional approach for formal phase safety review timing and content, which generally works well for small- to moderate-scale systems, does not work well for very large-scale integrated systems. This paper proposes a modified approach to timing and content of formal phase safety reviews for IHA. Details of the tailoring process for IHA will describe how to avoid temporary disconnects in major milestone reviews and how to maintain a cohesive end-to-end integration story particularly for systems where the integrator inherently has little to no insight into lower level systems. The proposal has the advantage of allowing the hazard analysis development process to occur as technical data normally matures.

  3. Application of the Hybrid Simulation Method for the Full-Scale Precast Reinforced Concrete Shear Wall Structure

    Directory of Open Access Journals (Sweden)

    Zaixian Chen

    2018-02-01

    The hybrid simulation (HS) testing method combines physical testing and numerical simulation, and provides a viable alternative for evaluating structural seismic performance. Most studies have focused on the accuracy, stability and reliability of the HS method in small-scale tests. It is a challenge to evaluate the seismic performance of a twelve-story precast reinforced concrete shear-wall structure using this HS method, which takes the full-scale bottom three-story structural model as the physical substructure and an elastic non-linear model as the numerical substructure. This paper employs an equivalent force control (EFC) method with an implicit integration algorithm to deal with the numerical integration of the equation of motion (EOM) and the control of the loading device. Because of the arrangement of the test model, an elastic non-linear numerical model is used to simulate the numerical substructure. A non-subdivision strategy for the displacement inflection point of the numerical substructure is used to simplify its simulation and thus reduce the measurement error. The parameters of the EFC method are calculated based on analytical and numerical studies and applied to the actual full-scale HS test. Finally, the accuracy and feasibility of the EFC-based HS method are verified experimentally through substructure HS tests of the precast reinforced concrete shear-wall structure model, and the test results of the descending stage can be conveniently obtained with the EFC-based HS method.

  4. Automatic numerical integration methods for Feynman integrals through 3-loop

    International Nuclear Information System (INIS)

    De Doncker, E; Olagbemi, O; Yuasa, F; Ishikawa, T; Kato, K

    2015-01-01

    We give numerical integration results for Feynman loop diagrams through 3-loop such as those covered by Laporta [1]. The methods are based on automatic adaptive integration, using iterated integration and extrapolation with programs from the QUADPACK package, or multivariate techniques from the PARINT package. The Dqags algorithm from QUADPACK accommodates boundary singularities of fairly general types. PARINT is a package for multivariate integration layered over MPI (Message Passing Interface), which runs on clusters and incorporates advanced parallel/distributed techniques such as load balancing among processes that may be distributed over a network of nodes. Results are included for 3-loop self-energy diagrams without IR (infra-red) or UV (ultra-violet) singularities. A procedure based on iterated integration and extrapolation yields a novel method of numerical regularization for integrals with UV terms, and is applied to a set of 2-loop self-energy diagrams with UV singularities. (paper)
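
    A small sketch of the iterated-integration idea (a generic toy integrand with a corner singularity, not one of the paper's diagrams): the outer integrand is itself an adaptive QUADPACK integral, so each level handles its own singular behaviour.

        from scipy import integrate

        # int_0^1 int_0^1 dx dy / (1 - x*y) = pi^2/6; the integrand blows up
        # at the corner x = y = 1, the kind of endpoint behaviour QAGS
        # (wrapped by scipy.integrate.quad) treats by adaptive subdivision
        # plus epsilon-extrapolation.  Gauss-Kronrod nodes are interior, so
        # the singular point itself is never evaluated.
        def inner(x):
            val, _ = integrate.quad(lambda y: 1.0 / (1.0 - x * y), 0.0, 1.0)
            return val

        value, err = integrate.quad(inner, 0.0, 1.0)
        print(value)   # ~1.644934 = pi^2/6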

  5. Wafer-scale integration of piezoelectric actuation capabilities in nanoelectromechanical systems resonators

    OpenAIRE

    DEZEST, Denis; MATHIEU, Fabrice; MAZENQ, Laurent; SOYER, Caroline; COSTECALDE, Jean; REMIENS, Denis; THOMAS, Olivier; DEÜ, Jean-François; NICU, Liviu

    2013-01-01

    In this work, we demonstrate the integration of piezoelectric actuation means on arrays of nanocantilevers at the wafer scale. We use lead zirconate titanate (PZT) as piezoelectric material mainly because of its excellent actuation properties even when geometrically constrained at extreme scale

  6. Integrating Traditional Ecological Knowledge and Ecological Science: a Question of Scale

    Directory of Open Access Journals (Sweden)

    Catherine A. Gagnon

    2009-12-01

    Full Text Available The benefits and challenges of integrating traditional ecological knowledge and scientific knowledge have led to extensive discussions over the past decades, but much work is still needed to facilitate the articulation and co-application of these two types of knowledge. Through two case studies, we examined the integration of traditional ecological knowledge and scientific knowledge by emphasizing their complementarity across spatial and temporal scales. We expected that combining Inuit traditional ecological knowledge and scientific knowledge would expand the spatial and temporal scales of currently documented knowledge on the arctic fox (Vulpes lagopus) and the greater snow goose (Chen caerulescens atlantica), two important tundra species. Using participatory approaches in Mittimatalik (also known as Pond Inlet), Nunavut, Canada, we documented traditional ecological knowledge about these species and found that, in fact, it did expand the spatial and temporal scales of current scientific knowledge for local arctic fox ecology. However, the benefits were not as apparent for snow goose ecology, probably because of the similar spatial and temporal observational scales of the two types of knowledge for this species. Comparing sources of knowledge at similar scales allowed us to gain confidence in our conclusions and to identify areas of disagreement that should be studied further. Emphasizing complementarities across scales was more powerful for generating new insights and hypotheses. We conclude that determining the scales of the observations that form the basis for traditional ecological knowledge and scientific knowledge represents a critical step when evaluating the benefits of integrating these two types of knowledge. This is also critical when examining the congruence or contrast between the two types of knowledge for a given subject.

  7. Cross-scale phenological data integration to benefit resource management and monitoring

    Science.gov (United States)

    Richardson, Andrew D.; Weltzin, Jake F.; Morisette, Jeffrey T.

    2017-01-01

    Climate change is presenting new challenges for natural resource managers charged with maintaining sustainable ecosystems and landscapes. Phenology, a branch of science dealing with seasonal natural phenomena (bird migration or plant flowering in response to weather changes, for example), bridges the gap between the biosphere and the climate system. Phenological processes operate across scales that span orders of magnitude, from leaf to globe and from days to seasons, making phenology ideally suited to multiscale, multiplatform data integration and delivery of information at spatial and temporal scales suitable to inform resource management decisions. A workshop held in June 2016 investigated the opportunities and challenges facing multi-scale, multi-platform integration of phenological data to support natural resource management decision-making.

  8. A highly accurate boundary integral equation method for surfactant-laden drops in 3D

    Science.gov (United States)

    Sorgentone, Chiara; Tornberg, Anna-Karin

    2018-05-01

    The presence of surfactants alters the dynamics of viscous drops immersed in an ambient viscous fluid. This is specifically true at small scales, such as in applications of droplet based microfluidics, where the interface dynamics become of increased importance. At such small scales, viscous forces dominate and inertial effects are often negligible. Considering Stokes flow, a numerical method based on a boundary integral formulation is presented for simulating 3D drops covered by an insoluble surfactant. The method is able to simulate drops with different viscosities and close interactions, automatically controlling the time step size and maintaining high accuracy even when substantial drop deformation appears. To achieve this, the drop surfaces as well as the surfactant concentration on each surface are represented by spherical harmonics expansions. A novel reparameterization method is introduced to ensure a high-quality representation of the drops also under deformation, specialized quadrature methods for the singular and nearly singular integrals that appear in the formulation are invoked, and the adaptive time stepping scheme for the coupled drop and surfactant evolution is designed with a preconditioned implicit treatment of the surfactant diffusion.
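
    A brief sketch of the surface representation used in such methods (generic, not the authors' solver): the drop radius is a truncated spherical harmonics expansion, so smoothing, differentiation and quadrature all act on a small set of coefficients.

        import numpy as np
        from scipy.special import sph_harm

        # r(az, pol) = sum of c_{mn} * Re Y_n^m; scipy's sph_harm(m, n, az, pol)
        # takes order m, degree n, azimuthal angle in [0, 2*pi) and polar
        # angle in [0, pi].
        coeffs = {(0, 0): np.sqrt(4.0 * np.pi),  # unit sphere mode
                  (0, 2): 0.3}                   # small ellipsoidal perturbation

        def radius(az, pol):
            return sum(c * np.real(sph_harm(m, n, az, pol))
                       for (m, n), c in coeffs.items())

        az, pol = np.meshgrid(np.linspace(0.0, 2.0 * np.pi, 64),
                              np.linspace(0.0, np.pi, 33))
        r = radius(az, pol)   # surface samples for plotting or quadrature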

  9. Expected Future Conditions for Secure Power Operation with Large Scale of RES Integration

    International Nuclear Information System (INIS)

    Majstrovic, G.; Majstrovic, M.; Sutlovic, E.

    2015-01-01

    EU energy strategy is strongly focused on the large scale integration of renewable energy sources. The most dominant part here is taken by variable sources - wind power plants. Grid integration of intermittent sources along with keeping the system stable and secure is one of the biggest challenges for the TSOs. This part is often neglected by the energy policy makers, so this paper deals with expected future conditions for secure power system operation with large scale wind integration. It gives an overview of expected wind integration development in EU, as well as expected P/f regulation and control needs. The paper is concluded with several recommendations. (author).

  10. “HABITAT MAPPING” GEODATABASE, AN INTEGRATED INTERDISCIPLINARY AND MULTI-SCALE APPROACH FOR DATA MANAGEMENT

    OpenAIRE

    Grande, Valentina; Angeletti, Lorenzo; Campiani, Elisabetta; Conese, Ilaria; Foglini, Federica; Leidi, Elisa; Mercorella, Alessandra; Taviani, Marco

    2016-01-01

    Abstract Historically, a number of different key concepts and methods dealing with marine habitat classification and mapping have been developed. The EU CoCoNET project provides a new attempt at establishing an integrated approach to the definition of habitats. This scheme combines multi-scale geological and biological data; in fact, it consists of three levels (Geomorphological level, Substrate level and Biological level) which in turn are divided into several h...

  11. The method of arbitrarily large moments to calculate single scale processes in quantum field theory

    Directory of Open Access Journals (Sweden)

    Johannes Blümlein

    2017-08-01

    Full Text Available We devise a new method to calculate a large number of Mellin moments of single scale quantities using the systems of differential and/or difference equations obtained by integration-by-parts identities between the corresponding Feynman integrals of loop corrections to physical quantities. These scalar quantities have a much simpler mathematical structure than the complete quantity. A sufficiently large set of moments may even allow the analytic reconstruction of the whole quantity considered, holding in case of first order factorizing systems. In any case, one may derive highly precise numerical representations in general using this method, which is otherwise completely analytic.
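
    A toy illustration of the recursion idea (ours, not the paper's code; a one-line recurrence stands in for the much larger IBP-derived systems): once a quantity is reduced to a difference equation in the Mellin variable N, arbitrarily many exact moments follow by forward recursion in rational arithmetic.

        from fractions import Fraction

        # Toy difference equation a(N) - a(N-1) = 1/N with a(0) = 0, whose
        # solution is the harmonic sum S_1(N), a typical building block of
        # single-scale Mellin moments.  Exact rational arithmetic keeps the
        # moments free of rounding error at any N.
        def moments(n_max):
            a = [Fraction(0)]
            for N in range(1, n_max + 1):
                a.append(a[N - 1] + Fraction(1, N))
            return a

        m = moments(8000)   # thousands of exact moments in one forward sweep
        print(m[10])        # 7381/2520 = S_1(10)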

  12. Distributed constraint satisfaction for coordinating and integrating a large-scale, heterogenous enterprise

    CERN Document Server

    Eisenberg, C

    2003-01-01

    Market forces are continuously driving public and private organisations towards higher productivity, shorter process and production times, and fewer labour hours. To cope with these changes, organisations are adopting new organisational models of coordination and cooperation that increase their flexibility, consistency, efficiency, productivity and profit margins. In this thesis an organisational model of coordination and cooperation is examined using a real life example: the technical integration of a distributed large-scale project of an international physics collaboration. The distributed resource constraint project scheduling problem is modelled and solved with the methods of distributed constraint satisfaction. A distributed local search method, the distributed breakout algorithm (DisBO), is used as the basis for the coordination scheme. The efficiency of the local search method is improved by extending it with an incremental problem solving scheme with variable ordering. The scheme is implemented as cen...
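
    A compact, centralized sketch of the breakout idea behind DisBO (toy graph-coloring constraints; the thesis distributes this across agents and adds the incremental scheme with variable ordering):

        import random

        # Weighted min-conflicts search: repair the variable with the best
        # cost reduction; at a local minimum, raise the weight of every
        # violated constraint so the landscape changes ("breakout").
        def breakout_coloring(edges, n_vars, n_colors, max_steps=10000, seed=0):
            rng = random.Random(seed)
            color = [rng.randrange(n_colors) for _ in range(n_vars)]
            weight = {e: 1 for e in edges}
            cost = lambda: sum(w for (i, j), w in weight.items()
                               if color[i] == color[j])
            for _ in range(max_steps):
                current = cost()
                if current == 0:
                    return color                    # all constraints satisfied
                best_gain, best_move = 0, None
                for v in range(n_vars):
                    old = color[v]
                    for c in range(n_colors):
                        if c == old:
                            continue
                        color[v] = c
                        gain = current - cost()
                        if gain > best_gain:
                            best_gain, best_move = gain, (v, c)
                    color[v] = old
                if best_move is None:               # trapped: breakout step
                    for (i, j) in weight:
                        if color[i] == color[j]:
                            weight[(i, j)] += 1
                else:
                    color[best_move[0]] = best_move[1]
            return None

        print(breakout_coloring([(0, 1), (1, 2), (0, 2), (2, 3)], 4, 3))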

  13. Integral Equation Methods for Electromagnetic and Elastic Waves

    CERN Document Server

    Chew, Weng; Hu, Bin

    2008-01-01

    Integral Equation Methods for Electromagnetic and Elastic Waves is an outgrowth of several years of work. There have been no recent books on integral equation methods. There are books written on integral equations, but either they have been around for a while, or they were written by mathematicians. Much of the knowledge in integral equation methods still resides in journal papers. With this book, important relevant knowledge for integral equation methods is consolidated in one place and researchers need only read the pertinent chapters in this book to gain important knowledge needed for integral eq

  14. HMC algorithm with multiple time scale integration and mass preconditioning

    Science.gov (United States)

    Urbach, C.; Jansen, K.; Shindler, A.; Wenger, U.

    2006-01-01

    We present a variant of the HMC algorithm with mass preconditioning (Hasenbusch acceleration) and multiple time scale integration. We have tested this variant for standard Wilson fermions at β=5.6 and at pion masses ranging from 380 to 680 MeV. We show that in this situation its performance is comparable to the recently proposed HMC variant with domain decomposition as preconditioner. We give an update of the "Berlin Wall" figure, comparing the performance of our variant of the HMC algorithm to other published performance data. Advantages of the HMC algorithm with mass preconditioning and multiple time scale integration are that it is straightforward to implement and can be used in combination with a wide variety of lattice Dirac operators.
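
    A bare-bones sketch of the multiple-time-scale (nested leapfrog) integrator, with toy forces standing in for the mass-preconditioned lattice fermion forces:

        # The cheap force f1 is integrated with m_inner leapfrog sub-steps
        # inside every dt-step of the expensive force f2, so f2 is evaluated
        # only on the coarse scale.
        def nested_leapfrog(q, p, f1, f2, dt, n_outer, m_inner):
            p += 0.5 * dt * f2(q)                    # half kick, coarse scale
            for i in range(n_outer):
                for _ in range(m_inner):             # fine-scale leapfrog
                    p += 0.5 * (dt / m_inner) * f1(q)
                    q += (dt / m_inner) * p
                    p += 0.5 * (dt / m_inner) * f1(q)
                if i != n_outer - 1:
                    p += dt * f2(q)                  # full coarse kick
            p += 0.5 * dt * f2(q)                    # final half kick
            return q, p

        # e.g. a stiff/soft split of an anharmonic oscillator:
        q, p = nested_leapfrog(1.0, 0.0,
                               f1=lambda q: -25.0 * q,     # stiff, cheap
                               f2=lambda q: -0.1 * q**3,   # soft, "expensive"
                               dt=0.1, n_outer=100, m_inner=10)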

  15. Subdomain Precise Integration Method for Periodic Structures

    Directory of Open Access Journals (Sweden)

    F. Wu

    2014-01-01

    Full Text Available A subdomain precise integration method is developed for the dynamical responses of periodic structures comprising many identical structural cells. The proposed method is based on the precise integration method, the subdomain scheme, and the repeatability of the periodic structures. In the proposed method, each structural cell is seen as a super element that is solved using the precise integration method, considering the repeatability of the structural cells. The computational efforts and the memory size of the proposed method are reduced, while high computational accuracy is achieved. Therefore, the proposed method is particularly suitable to solve the dynamical responses of periodic structures. Two numerical examples are presented to demonstrate the accuracy and efficiency of the proposed method through comparison with the Newmark and Runge-Kutta methods.
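
    A short sketch of the precise integration kernel itself (the 2^N algorithm; the subdomain and super-element bookkeeping of the paper is omitted):

        import numpy as np

        # T = exp(H*dt) via N doublings of the tiny-interval increment
        # Ta = exp(H*dt/2^N) - I.  Keeping the increment separate from the
        # identity until the very end avoids the round-off of adding 1 to
        # tiny numbers, which is the essence of the precise integration method.
        def pim_expm(H, dt, N=20, taylor_terms=4):
            n = H.shape[0]
            Htau = H * (dt / 2.0**N)
            Ta = np.zeros_like(H)
            term = np.eye(n)
            for k in range(1, taylor_terms + 1):     # Taylor series of exp - I
                term = term @ Htau / k
                Ta = Ta + term
            for _ in range(N):                       # (I+Ta)^2 = I + (2Ta + Ta@Ta)
                Ta = 2.0 * Ta + Ta @ Ta
            return np.eye(n) + Ta

        H = np.array([[0.0, 1.0], [-4.0, 0.0]])      # x'' + 4x = 0 as a system
        T = pim_expm(H, 0.1)                         # one-step transfer matrix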

  16. Effect of CMOS Technology Scaling on Fully-Integrated Power Supply Efficiency

    OpenAIRE

    Pillonnet , Gaël; Jeanniot , Nicolas

    2016-01-01

    International audience; Integrating a power supply in the same die as the powered circuits is an appropriate solution for granular, fine and fast power management. To allow same-die co-integration, fully integrated DC-DC converters designed in the latest CMOS technologies have been greatly studied by academics and industrialists in the last decade. However, there is little study concerning the effects of the CMOS scaling on these particular circuits. To show the trends, this paper compares th...

  17. Numerical methods for engine-airframe integration

    International Nuclear Information System (INIS)

    Murthy, S.N.B.; Paynter, G.C.

    1986-01-01

    Various papers on numerical methods for engine-airframe integration are presented. The individual topics considered include: scientific computing environment for the 1980s, overview of prediction of complex turbulent flows, numerical solutions of the compressible Navier-Stokes equations, elements of computational engine/airframe integration, computational requirements for efficient engine installation, application of CAE and CFD techniques to complete tactical missile design, CFD applications to engine/airframe integration, and application of second-generation low-order panel methods to powerplant installation studies. Also addressed are: three-dimensional flow analysis of turboprop inlet and nacelle configurations, application of computational methods to the design of large turbofan engine nacelles, comparison of full potential and Euler solution algorithms for aeropropulsive flow field computations, subsonic/transonic and supersonic nozzle flows and nozzle integration, subsonic/transonic prediction capabilities for nozzle/afterbody configurations, three-dimensional viscous design methodology for supersonic inlet systems for advanced technology aircraft, and a user's technology assessment

  18. Optimal Wind Energy Integration in Large-Scale Electric Grids

    Science.gov (United States)

    Albaijat, Mohammad H.

    The major concern in electric grid operation is to operate in the most economical and reliable fashion, ensuring affordability and continuity of electricity supply. This dissertation investigates several challenges that affect electric grid reliability and economic operation. These challenges are: 1. Congestion of transmission lines, 2. Transmission line expansion, 3. Large-scale wind energy integration, and 4. Phasor Measurement Units (PMUs) optimal placement for highest electric grid observability. Performing congestion analysis aids in evaluating the required increase of transmission line capacity in electric grids. However, it is necessary to evaluate the expansion of transmission line capacity with methods that ensure optimal electric grid operation. Therefore, the expansion of transmission line capacity must enable grid operators to provide low-cost electricity while maintaining reliable operation of the electric grid. Because congestion affects the reliability of delivering power and increases its cost, congestion analysis in electric grid networks is an important subject. Consequently, next-generation electric grids require novel methodologies for studying and managing congestion in electric grids. We suggest a novel method of long-term congestion management in large-scale electric grids. Owing to the complication and size of transmission line systems and the competitive nature of current grid operation, it is important for electric grid operators to determine how much transmission line capacity to add. Traditional questions requiring answers are "Where" to add, "How much of transmission line capacity" to add, and "Which voltage level". Because of electric grid deregulation, transmission line expansion is more complicated as it is now open to investors, whose main interest is to generate revenue, to build new transmission lines. Adding a new transmission capacity will help the system to relieve the transmission system congestion, create

  19. A numerical method for resonance integral calculations

    International Nuclear Information System (INIS)

    Tanbay, Tayfun; Ozgener, Bilge

    2013-01-01

    A numerical method has been proposed for resonance integral calculations, and a cubic fit based on least squares approximation to compute the optimum Bell factor is given. The numerical method is based on the discretization of the neutron slowing down equation. The scattering integral is approximated by taking into account the location of the upper limit in the energy domain. The accuracy of the method has been tested by computing resonance integrals for isolated uranium dioxide rods and comparing the results with empirical values. (orig.)
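
    As a minimal numerical illustration (a single unbroadened Breit-Wigner resonance with U-238-like parameters; this is not the paper's slowing-down discretization), the infinitely dilute resonance integral is just a weighted quadrature:

        import numpy as np

        # RI = int sigma_a(E) dE / E with a single-level Breit-Wigner shape;
        # the analytic narrow-resonance value is sigma0 * pi * Gamma / (2*E0).
        E0, Gamma, sigma0 = 6.67, 0.027, 2.2e4      # eV, eV, barn (illustrative)
        E = np.linspace(E0 - 50.0 * Gamma, E0 + 50.0 * Gamma, 200001)
        sigma = sigma0 / (1.0 + (2.0 * (E - E0) / Gamma) ** 2)
        f = sigma / E
        RI = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(E))   # trapezoidal rule
        print(RI)   # ~140 barns, close to the analytic estimate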

  20. Review of Statistical Learning Methods in Integrated Omics Studies (An Integrated Information Science).

    Science.gov (United States)

    Zeng, Irene Sui Lan; Lumley, Thomas

    2018-01-01

    Integrated omics is becoming a new channel for investigating the complex molecular system in modern biological science and sets a foundation for systematic learning for precision medicine. The statistical/machine learning methods that have emerged in the past decade for integrated omics are not only innovative but also multidisciplinary, with integrated knowledge in biology, medicine, statistics, machine learning, and artificial intelligence. Here, we review the nontrivial classes of learning methods from the statistical aspects and streamline these learning methods within the statistical learning framework. The intriguing findings from the review are that the methods used are generalizable to other disciplines with complex systematic structure, and that integrated omics is part of an integrated information science which has collated and integrated different types of information for inferences and decision making. We review the statistical learning methods of exploratory and supervised learning from 42 publications. We also discuss the strengths and limitations of the extended principal component analysis, cluster analysis, network analysis, and regression methods. Statistical techniques such as penalization for sparsity induction when there are fewer observations than the number of features, and use of a Bayesian approach when there is prior knowledge to be integrated, are also included in the commentary. For the completeness of the review, a table of currently available software and packages from 23 publications for omics is summarized in the appendix.
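
    A tiny sketch of the penalization point (simulated data; the feature blocks stand in for concatenated omics layers with far fewer samples than features):

        import numpy as np
        from sklearn.linear_model import Lasso

        rng = np.random.default_rng(0)
        n, p_rna, p_met = 60, 500, 200            # 60 samples, 700 features
        X = rng.standard_normal((n, p_rna + p_met))
        beta = np.zeros(p_rna + p_met)
        beta[[3, 10, 550]] = [2.0, -1.5, 1.0]     # few truly active features
        y = X @ beta + 0.5 * rng.standard_normal(n)

        model = Lasso(alpha=0.1).fit(X, y)        # L1 penalty induces sparsity
        selected = np.flatnonzero(model.coef_)    # ideally close to {3, 10, 550}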

  1. Integrating continental-scale ecological data into university courses: Developing NEON's Online Learning Portal

    Science.gov (United States)

    Wasser, L. A.; Gram, W.; Lunch, C. K.; Petroy, S. B.; Elmendorf, S.

    2013-12-01

    'Big Data' are becoming increasingly common in many fields. The National Ecological Observatory Network (NEON) will be collecting data over 30 years, using consistent, standardized methods across the United States. Similar efforts are underway in other parts of the globe (e.g. Australia's Terrestrial Ecosystem Research Network, TERN). These freely available new data provide an opportunity for increased understanding of continental- and global-scale processes such as changes in vegetation structure and condition, biodiversity and land use. However, while 'big data' are becoming more accessible and available, integrating big data into university courses is challenging. New and potentially unfamiliar data types, and the associated processing methods required to work with a growing diversity of available data, may demand time and resources that present a barrier to classroom integration. Analysis of these big datasets may further present a challenge given large file sizes and uncertainty regarding the best methods to properly summarize and analyze results statistically. Finally, teaching resources, in the form of demonstrative illustrations and other supporting media that might help teach key data concepts, take time to find and more time to develop. Available resources are often spread widely across multiple online spaces. This presentation will provide an overview of the development of NEON's collaborative university-focused online education portal. Portal content will include 1) interactive, online multi-media content that explains key concepts related to NEON's data products, including collection methods, key metadata to consider and consideration of potential error and uncertainty surrounding data analysis; and 2) packaged 'lab' activities that include supporting data to be used in an ecology, biology or earth science classroom. To facilitate broad use in classrooms, lab activities will take advantage of freely and commonly available processing tools, techniques and scripts. All

  2. Research on Linux Trusted Boot Method Based on Reverse Integrity Verification

    Directory of Open Access Journals (Sweden)

    Chenlin Huang

    2016-01-01

    Full Text Available Trusted computing aims to build a trusted computing environment for information systems with the help of secure hardware (TPM), which has been proved to be an effective way against network security threats. However, TPM chips are not yet widely deployed in most computing devices so far, thus limiting the applied scope of trusted computing technology. To solve the problem of lacking trusted hardware in existing computing platforms, an alternative security hardware, USBKey, is introduced in this paper to simulate the basic functions of TPM, and a new reverse USBKey-based integrity verification model is proposed to implement the reverse integrity verification of the operating system boot process, which can achieve the effect of trusted boot of the operating system in end systems without TPMs. A Linux operating system booting method based on reverse integrity verification is designed and implemented in this paper, with which the integrity of data and executable files in the operating system is verified and protected during the trusted boot process phase by phase. It implements the trusted boot of the operating system without TPM and supports remote attestation of the platform. Enhanced by our method, the flexibility of the trusted computing technology is greatly improved and it is possible for trusted computing to be applied in large-scale computing environments.
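
    A conceptual sketch of measured-boot-style integrity checking (a hash chain plus a keyed comparison; the paper's USBKey protocol, drivers and reverse-ordered phases are not reproduced, and the key location is hypothetical):

        import hashlib, hmac

        SECRET = b"secret-held-in-the-security-hardware"  # hypothetical

        # Each stage's image extends a running SHA-256 measurement chain;
        # the final chain is authenticated with an HMAC and compared to a
        # stored reference, so tampering with any stage is detected.
        def measure_chain(images):
            chain = b"\x00" * 32
            for img in images:                    # bootloader, kernel, initrd...
                chain = hashlib.sha256(chain + hashlib.sha256(img).digest()).digest()
            return chain

        def attest(images, reference_tag):
            tag = hmac.new(SECRET, measure_chain(images), hashlib.sha256).digest()
            return hmac.compare_digest(tag, reference_tag)

        stages = [b"bootloader-image", b"kernel-image", b"initrd-image"]
        ref = hmac.new(SECRET, measure_chain(stages), hashlib.sha256).digest()
        print(attest(stages, ref))                # True until a stage changes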

  3. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

    Dia, Ben Mansour

    2015-01-07

    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base of eigenfunctions which are orthonormal in weighted L2 spaces. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes.

  4. Mining method selection by integrated AHP and PROMETHEE method.

    Science.gov (United States)

    Bogdanovic, Dejan; Nikolic, Djordje; Ilic, Ivana

    2012-03-01

    Selecting the best mining method among many alternatives is a multicriteria decision making problem. The aim of this paper is to demonstrate the implementation of an integrated approach that employs AHP and PROMETHEE together for selecting the most suitable mining method for the "Coka Marin" underground mine in Serbia. The related problem includes five possible mining methods and eleven criteria to evaluate them. Criteria are accurately chosen in order to cover the most important parameters that impact on the mining method selection, such as geological and geotechnical properties, economic parameters and geographical factors. The AHP is used to analyze the structure of the mining method selection problem and to determine weights of the criteria, and PROMETHEE method is used to obtain the final ranking and to make a sensitivity analysis by changing the weights. The results have shown that the proposed integrated method can be successfully used in solving mining engineering problems.
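
    A small sketch of the AHP weighting step (made-up pairwise judgments, not those of the "Coka Marin" study): weights are the principal eigenvector of a reciprocal comparison matrix, with a consistency check before use.

        import numpy as np

        A = np.array([[1.0, 3.0, 5.0],             # 3 criteria, Saaty scale
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                               # normalized criterion weights

        RI = 0.58                                  # Saaty random index for n = 3
        CR = (eigvals[k].real - 3.0) / (2.0 * RI)  # CR < 0.1: acceptably consistent
        print(w, CR)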

  5. Dual-scale Galerkin methods for Darcy flow

    Science.gov (United States)

    Wang, Guoyin; Scovazzi, Guglielmo; Nouveau, Léo; Kees, Christopher E.; Rossi, Simone; Colomés, Oriol; Main, Alex

    2018-02-01

    The discontinuous Galerkin (DG) method has found widespread application in elliptic problems with rough coefficients, of which the Darcy flow equations are a prototypical example. One of the long-standing issues of DG approximations is the overall computational cost, and many different strategies have been proposed, such as the variational multiscale DG method, the hybridizable DG method, the multiscale DG method, the embedded DG method, and the Enriched Galerkin method. In this work, we propose a mixed dual-scale Galerkin method, in which the degrees-of-freedom of a less computationally expensive coarse-scale approximation are linked to the degrees-of-freedom of a base DG approximation. We show that the proposed approach has always similar or improved accuracy with respect to the base DG method, with a considerable reduction in computational cost. For the specific definition of the coarse-scale space, we consider Raviart-Thomas finite elements for the mass flux and piecewise-linear continuous finite elements for the pressure. We provide a complete analysis of stability and convergence of the proposed method, in addition to a study on its conservation and consistency properties. We also present a battery of numerical tests to verify the results of the analysis, and evaluate a number of possible variations, such as using piecewise-linear continuous finite elements for the coarse-scale mass fluxes.

  6. Scale Expansion of Community Investigations and Integration of the Effects of Abiotic and Biotic Processes on Maintenance of Species Diversity

    Directory of Open Access Journals (Sweden)

    Zhenhong Wang

    2011-01-01

    Full Text Available Information on the maintenance of diversity patterns from regional to local scales is dispersed among academic fields due to the local focus of community ecology. To better understand these patterns, the study of ecological communities needs to be expanded to larger scales and the various processes affecting them need to be integrated using a suitable quantitative method. We determined a range of communities on a flora-subregional scale in Yunnan province, China (383210.02 km2). A series of species pools were delimited from the regional to plot scales. Plant diversity was evaluated and abiotic and biotic processes identified at each pool level. The species pool effect was calculated using an innovative model, and the contribution of these processes to the maintenance of plant species diversity was determined and integrated: climate had the greatest effect at the flora-subregional scale, with historical and evolutionary processes contributing ∼11%; climate and human disturbance had the greatest effect at the local site pool scale; competition exclusion and stress limitation explained strong filtering at the successional stage pool scale; biotic processes contributed more on the local community scale than on the regional scale. Scale expansion combined with the filtering model approach solves the local problem in community ecology.

  7. Integrated circuit and method of arbitration in a network on an integrated circuit.

    NARCIS (Netherlands)

    2011-01-01

    The invention relates to an integrated circuit and to a method of arbitration in a network on an integrated circuit. According to the invention, a method of arbitration in a network on an integrated circuit is provided, the network comprising a router unit, the router unit comprising a first input

  8. An extended Halanay inequality of integral type on time scales

    Directory of Open Access Journals (Sweden)

    Boqun Ou

    2015-07-01

    Full Text Available In this paper, we obtain a Halanay-type inequality of integral type on time scales which improves and extends some earlier results for both the continuous and discrete cases. Several illustrative examples are also given.
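
    For orientation, the classical continuous-time special case that such results generalize (stated from the standard literature, not from this paper): if

        $$v'(t) \le -\alpha\, v(t) + \beta \sup_{s \in [t-\tau,\, t]} v(s),
          \qquad t \ge t_0, \quad \alpha > \beta > 0,$$

    then $v(t) \le \big(\sup_{s \in [t_0-\tau,\, t_0]} v(s)\big)\, e^{-\gamma (t-t_0)}$, where $\gamma > 0$ is the unique positive root of $\gamma = \alpha - \beta e^{\gamma \tau}$.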

  9. An integration weighting method to evaluate extremum coordinates

    International Nuclear Information System (INIS)

    Ilyushchenko, V.I.

    1990-01-01

    The numerical version of the Laplace asymptotics has been used to evaluate the coordinates of extrema of multivariate continuous and discontinuous test functions. The computer experiments performed demonstrate the high efficiency of the proposed integration method. The saturating dependence of extremum coordinates on such parameters as the number of integration subregions and the exponent K going (theoretically) to infinity has been studied in detail for the limitand being a ratio of two Laplace integrals with exponentiated K. The given method is an integral equivalent of the method of weighted means. As opposed to standard optimization methods of zeroth, first and second order, the proposed method can be successfully applied to optimize discontinuous objective functions, too. There are possibilities of applying the integration method in cases when the conventional techniques fail due to poor analytical properties of the objective functions near extremal points. The proposed method is efficient in searching for both local and global extrema of multimodal objective functions. 12 refs.; 4 tabs
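
    A small sketch of the weighted-integration idea (toy objective, finite exponent K): the extremum coordinate is estimated as a ratio of two Laplace-type integrals, which needs no derivatives and therefore tolerates discontinuous objectives.

        import numpy as np

        # x* ~ int x f(x)^K dx / int f(x)^K dx  ->  argmax f  as K -> infinity
        f = lambda x: np.exp(-(x - 1.3)**2) + 0.4 * (x > 2.0)  # discontinuous
        x = np.linspace(0.0, 3.0, 200001)
        for K in (1, 10, 100, 1000):
            w = f(x) ** K                     # grid quadrature; dx cancels
            print(K, np.sum(x * w) / np.sum(w))
        # the estimate drifts toward the global maximizer (just above x = 2)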

  10. Scale relativity theory and integrative systems biology: 2. Macroscopic quantum-type mechanics.

    Science.gov (United States)

    Nottale, Laurent; Auffray, Charles

    2008-05-01

    In these two companion papers, we provide an overview and a brief history of the multiple roots, current developments and recent advances of integrative systems biology and identify multiscale integration as its grand challenge. Then we introduce the fundamental principles and the successive steps that have been followed in the construction of the scale relativity theory, which aims at describing the effects of a non-differentiable and fractal (i.e., explicitly scale dependent) geometry of space-time. The first paper of this series was devoted, in this new framework, to the construction from first principles of scale laws of increasing complexity, and to the discussion of some tentative applications of these laws to biological systems. In this second review and perspective paper, we describe the effects induced by the internal fractal structures of trajectories on motion in standard space. Their main consequence is the transformation of classical dynamics into a generalized, quantum-like self-organized dynamics. A Schrödinger-type equation is derived as an integral of the geodesic equation in a fractal space. We then indicate how gauge fields can be constructed from a geometric re-interpretation of gauge transformations as scale transformations in fractal space-time. Finally, we introduce a new tentative development of the theory, in which quantum laws would hold also in scale space, introducing complexergy as a measure of organizational complexity. Initial possible applications of this extended framework to the processes of morphogenesis and the emergence of prokaryotic and eukaryotic cellular structures are discussed. Having founded elements of the evolutionary, developmental, biochemical and cellular theories on the first principles of scale relativity theory, we introduce proposals for the construction of an integrative theory of life and for the design and implementation of novel macroscopic quantum-type experiments and devices, and discuss their potential

  11. Adequacy of power-to-volume scaling philosophy to simulate natural circulation in Integral Test Facilities

    International Nuclear Information System (INIS)

    Nayak, A.K.; Vijayan, P.K.; Saha, D.; Venkat Raj, V.; Aritomi, Masanori

    1998-01-01

    Theoretical and experimental investigations were carried out to study the adequacy of power-to-volume scaling philosophy for the simulation of natural circulation and to establish the scaling philosophy applicable for the design of the Integral Test Facility (ITF-AHWR) for the Indian Advanced Heavy Water Reactor (AHWR). The results indicate that a reduction in the flow channel diameter of the scaled facility as required by the power-to-volume scaling philosophy may affect the simulation of natural circulation behaviour of the prototype plants. This is caused by the distortions due to the inability to simulate the frictional resistance of the scaled facility. Hence, it is recommended that the flow channel diameter of the scaled facility should be as close as possible to the prototype. This was verified by comparing the natural circulation behaviour of a prototype 220 MWe Indian PHWR and its scaled facility (FISBE-1) designed based on power-to-volume scaling philosophy. It is suggested from examinations using a mathematical model and a computer code that the FISBE-1 simulates the steady state and the general trend of transient natural circulation behaviour of the prototype reactor adequately. Finally the proposed scaling method was applied for the design of the ITF-AHWR. (author)

  12. Integral criteria for large-scale multiple fingerprint solutions

    Science.gov (United States)

    Ushmaev, Oleg S.; Novikov, Sergey O.

    2004-08-01

    We propose the definition and analysis of the optimal integral similarity score criterion for large-scale multimodal civil ID systems. Firstly, the general properties of score distributions for genuine and impostor matches for different systems and input devices are investigated. The empirical statistics were taken from real biometric tests. Then we carry out the analysis of simultaneous score distributions for a number of combined biometric tests, primarily for multiple fingerprint solutions. The explicit and approximate relations for the optimal integral score, which provides the least value of the FRR while the FAR is predefined, have been obtained. The results of a real multiple fingerprint test show good correspondence with the theoretical results in a wide range of False Acceptance and False Rejection Rates.
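
    A toy numerical version of the criterion (simulated Gaussian score distributions for two matchers; real systems use empirical ones): fix the FAR from the impostor tail, then read off the FRR.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 200000
        genuine = rng.normal(2.0, 1.0, (n, 2))      # two matchers' scores
        impostor = rng.normal(0.0, 1.0, (n, 2))

        # With independent equal-variance Gaussian scores the optimal
        # integral score reduces to the plain sum of the component scores.
        s_gen, s_imp = genuine.sum(axis=1), impostor.sum(axis=1)

        FAR_target = 1e-3
        thr = np.quantile(s_imp, 1.0 - FAR_target)  # threshold fixes the FAR
        print(thr, np.mean(s_gen < thr))            # attained FRR at that FAR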

  13. Integrative taxonomy for continental-scale terrestrial insect observations.

    Directory of Open Access Journals (Sweden)

    Cara M Gibson

    Full Text Available Although 21st century ecology uses unprecedented technology at the largest spatio-temporal scales in history, the data remain reliant on sound taxonomic practices that derive from 18th century science. The importance of accurate species identifications has been assessed repeatedly and in instances where inappropriate assignments have been made there have been costly consequences. The National Ecological Observatory Network (NEON) will use a standardized system based upon an integrative taxonomic foundation to conduct observations of the focal terrestrial insect taxa, ground beetles and mosquitoes, at the continental scale for a 30 year monitoring program. The use of molecular data for continental-scale, multi-decadal research conducted by a geographically widely distributed set of researchers has not been evaluated until this point. The current paper addresses the development of a reference library for verifying species identifications at NEON and the key ways in which this resource will enhance a variety of user communities.

  14. Integrative Taxonomy for Continental-Scale Terrestrial Insect Observations

    Science.gov (United States)

    Gibson, Cara M.; Kao, Rebecca H.; Blevins, Kali K.; Travers, Patrick D.

    2012-01-01

    Although 21st century ecology uses unprecedented technology at the largest spatio-temporal scales in history, the data remain reliant on sound taxonomic practices that derive from 18th century science. The importance of accurate species identifications has been assessed repeatedly and in instances where inappropriate assignments have been made there have been costly consequences. The National Ecological Observatory Network (NEON) will use a standardized system based upon an integrative taxonomic foundation to conduct observations of the focal terrestrial insect taxa, ground beetles and mosquitoes, at the continental scale for a 30 year monitoring program. The use of molecular data for continental-scale, multi-decadal research conducted by a geographically widely distributed set of researchers has not been evaluated until this point. The current paper addresses the development of a reference library for verifying species identifications at NEON and the key ways in which this resource will enhance a variety of user communities. PMID:22666362

  15. Variational Multi-Scale method with spectral approximation of the sub-scales.

    KAUST Repository

    Dia, Ben Mansour; Chacón-Rebollo, Tomás

    2015-01-01

    A variational multi-scale method where the sub-grid scales are computed by spectral approximations is presented. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated base

  16. Comparison of Multi-Criteria Decision Support Methods for Integrated Rehabilitation Prioritization

    Directory of Open Access Journals (Sweden)

    Franz Tscheikner-Gratl

    2017-01-01

    Full Text Available The decisions taken in rehabilitation planning for urban water networks will have a long-lasting impact on the functionality and quality of future services provided by urban infrastructure. These decisions can be assisted by different approaches, ranging from linear depreciation for estimating the economic value of the network, through deterioration models that assess the probability of failure or the technical service life, to sophisticated multi-criteria decision support systems. Subsequently, the aim of this paper is to compare five available multi-criteria decision-making (MCDM) methods (ELECTRE, AHP, WSM, TOPSIS, and PROMETHEE) for application in an integrated rehabilitation management scheme for a real-world case study, and to analyze them with respect to their suitability for use in integrated asset management of water systems. The results of the different methods are not equal. This occurs because the chosen score scales, weights and the resulting distributions of the scores within the criteria do not have the same impact on all the methods. Independently of the method used, the decision maker must be familiar with its strengths as well as its weaknesses. Therefore, in some cases, it would be rational to use one of the simplest methods. However, to check for consistency and increase the reliability of the results, the application of several methods is encouraged.
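
    A compact sketch of why such methods can disagree (an invented 4-alternative, 3-criterion benefit matrix, not the case-study data): WSM and TOPSIS normalize and aggregate differently, so rankings need not coincide.

        import numpy as np

        X = np.array([[7.0, 9.0, 3.0],     # alternatives x benefit criteria
                      [8.0, 5.0, 6.0],
                      [6.0, 6.0, 8.0],
                      [9.0, 4.0, 5.0]])
        w = np.array([0.5, 0.3, 0.2])

        wsm = (X / X.max(axis=0)) @ w      # WSM: linear normalization + weighted sum

        V = w * X / np.linalg.norm(X, axis=0)          # TOPSIS: vector normalization
        d_plus = np.linalg.norm(V - V.max(axis=0), axis=1)
        d_minus = np.linalg.norm(V - V.min(axis=0), axis=1)
        topsis = d_minus / (d_plus + d_minus)          # closeness to the ideal

        print(np.argsort(-wsm), np.argsort(-topsis))   # compare the two rankings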

  17. Selective Integration in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Lars; Andersen, Søren; Damkilde, Lars

    2009-01-01

    The paper deals with stress integration in the material-point method. In order to avoid parasitic shear in bending, a formulation is proposed, based on selective integration in the background grid that is used to solve the governing equations. The suggested integration scheme is compared to a traditional material-point-method computation in which the stresses are evaluated at the material points. The deformation of a cantilever beam is analysed, assuming elastic or elastoplastic material behaviour.

  18. An Integrated Assessment of Location-Dependent Scaling for Microalgae Biofuel Production Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Coleman, Andre M.; Abodeely, Jared; Skaggs, Richard; Moeglein, William AM; Newby, Deborah T.; Venteris, Erik R.; Wigmosta, Mark S.

    2014-06-19

    Successful development of a large-scale microalgae-based biofuels industry requires comprehensive analysis and understanding of the feedstock supply chain—from facility siting/design through processing/upgrading of the feedstock to a fuel product. The evolution from pilot-scale production facilities to energy-scale operations presents many multi-disciplinary challenges, including a sustainable supply of water and nutrients, operational and infrastructure logistics, and economic competitiveness with petroleum-based fuels. These challenges are addressed in part by applying the Integrated Assessment Framework (IAF)—an integrated multi-scale modeling, analysis, and data management suite—to address key issues in developing and operating an open-pond facility by analyzing how variability and uncertainty in space and time affect algal feedstock production rates, and determining the site-specific “optimum” facility scale to minimize capital and operational expenses. This approach explicitly and systematically assesses the interdependence of biofuel production potential, associated resource requirements, and production system design trade-offs. The IAF was applied to a set of sites previously identified as having the potential to cumulatively produce 5 billion-gallons/year in the southeastern U.S. and results indicate costs can be reduced by selecting the most effective processing technology pathway and scaling downstream processing capabilities to fit site-specific growing conditions, available resources, and algal strains.

  19. Method of complex scaling

    International Nuclear Information System (INIS)

    Braendas, E.

    1986-01-01

    The method of complex scaling is taken to include bound states, resonances, remaining scattering background and interference. Particular points of the general complex coordinate formulation are presented. It is shown that care must be exercised to avoid paradoxical situations resulting from inadequate definitions of operator domains. A new resonance localization theorem is presented
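
    For orientation, the standard dilation transformation behind the method (textbook form, in atomic units, for a dilation-analytic potential; not specific to this paper): under $r \to r\,e^{i\theta}$ the Hamiltonian becomes

        $$H(\theta) = -\tfrac{1}{2}\, e^{-2i\theta} \nabla^2 + V\!\big(r e^{i\theta}\big),$$

    whose bound-state eigenvalues are unchanged, whose continuous spectrum is rotated by $-2\theta$ about each threshold, and whose resonances appear as isolated complex eigenvalues $E = E_r - i\Gamma/2$ once the rotation uncovers them ($2\theta > |\arg E|$).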

  20. Do large-scale assessments measure students' ability to integrate scientific knowledge?

    Science.gov (United States)

    Lee, Hee-Sun

    2010-03-01

    Large-scale assessments are used as means to diagnose the current status of student achievement in science and compare students across schools, states, and countries. For efficiency, multiple-choice items and dichotomously-scored open-ended items are pervasively used in large-scale assessments such as the Trends in International Math and Science Study (TIMSS). This study investigated how well these items measure secondary school students' ability to integrate scientific knowledge. This study collected responses of 8400 students to 116 multiple-choice and 84 open-ended items and applied an Item Response Theory analysis based on the Rasch Partial Credit Model. Results indicate that most multiple-choice items and dichotomously-scored open-ended items can be used to determine whether students have normative ideas about science topics, but cannot measure whether students integrate multiple pieces of relevant science ideas. Only when the scoring rubric is redesigned to capture subtle nuances of open-ended responses do open-ended items become a valid and reliable tool to assess students' knowledge integration ability.

  1. Hydrologic design methodologies for small-scale hydro at ungauged sites. Phase 2A, British Columbia. Summary report. Integrated method for power analysis (IMP)

    Energy Technology Data Exchange (ETDEWEB)

    1986-01-01

    Analysis of the feasibility of sites for small-scale hydroelectric power generation in British Columbia can be a difficult process in cases where gauged streamflow data are not available. This report outlines the capabilities of a package of computer programs, designed to aid the planner of small-scale hydro projects in evaluating the hydrology and energy potential of ungauged sites in that province. The programs were developed by using appropriate programming and hydrological research that had already been done, and taking advantage of experience already available from previous use of components of the package. As the province's mountainous terrain, with numerous microclimates, precluded use of methods of regional analysis based on regression or comparison with gauged streams, the package was developed on the basis of a conceptual model of physical processes on the land and in the atmosphere. The resulting package, the Integrated Method for Power Analysis, consists of five basic components: an atmospheric model of annual precipitation and one-in-ten-year 24-h maximum rainfall; a watershed model that will generate streamflow data based on information from the atmospheric model; a flood frequency analysis system that uses site-specific topographic information and information from the atmospheric model to generate a flood frequency curve; a hydroelectric power simulation program which determines the daily energy output for a run-of-river or reservoir storage site; and a graphics analysis package that provides direct visualization of atmospheric and streamflow data, and the results of watershed and power simulation modelling. 2 figs., 1 tab.

  2. Integration test of ITER full-scale vacuum vessel sector

    International Nuclear Information System (INIS)

    Nakahira, M.; Koizumi, K.; Oka, K.

    2001-01-01

    The full-scale Sector Model Project, which was initiated in 1995 as one of the Large Seven R and D Projects, completed all R and D activities planned in the ITER-EDA period with the joint effort of the ITER Joint Central Team (JCT), the Japanese, the Russian Federation (RF) and the United States (US) Home Teams. The fabrication of a full-scale 18° toroidal sector, which is composed of two 9° sectors spliced at the port center, was successfully completed in September 1997 with the dimensional accuracy of ± 3 mm for the total height and total width. Both sectors were shipped to the test site in JAERI and the integration test was begun in October 1997. The integration test involves the adjustment of field joints, automatic Narrow Gap Tungsten Inert Gas (NG-TIG) welding of field joints with splice plates, and inspection of the joints by ultrasonic testing (UT), which are required for the initial assembly of the ITER vacuum vessel. This first demonstration of field joint welding and the performance test on the mechanical characteristics were completed in May 1998 and all the results obtained satisfied the ITER design. In addition to these tests, the integration with the mid plane port extension fabricated by the Russian Home Team, and the cutting and re-welding test of field joints using the fully remotized welding and cutting system developed by the US Home Team, are planned as post-EDA activities. (author)

  3. Integration test of ITER full-scale vacuum vessel sector

    International Nuclear Information System (INIS)

    Nakahira, M.; Koizumi, K.; Oka, K.

    1999-01-01

    The full-scale Sector Model Project, which was initiated in 1995 as one of the Large Seven ITER R and D Projects, completed all R and D activities planned in the ITER-EDA period with the joint effort of the ITER Joint Central Team (JCT), the Japanese, the Russian Federation (RF) and the United States (US) Home Teams. The fabrication of a full-scale 18° toroidal sector, which is composed of two 9° sectors spliced at the port center, was successfully completed in September 1997 with the dimensional accuracy of ± 3 mm for the total height and total width. Both sectors were shipped to the test site in JAERI and the integration test was begun in October 1997. The integration test involves the adjustment of field joints, automatic Narrow Gap Tungsten Inert Gas (NG-TIG) welding of field joints with splice plates, and inspection of the joints by ultrasonic testing (UT), which are required for the initial assembly of the ITER vacuum vessel. This first demonstration of field joint welding and the performance test on the mechanical characteristics were completed in May 1998 and all the results obtained satisfied the ITER design. In addition to these tests, the integration with the mid plane port extension fabricated by the Russian Home Team, and the cutting and re-welding test of field joints using the fully remotized welding and cutting system developed by the US Home Team, are planned as post EDA activities. (author)

  4. A Hamiltonian-based derivation of Scaled Boundary Finite Element Method for elasticity problems

    International Nuclear Information System (INIS)

    Hu Zhiqiang; Lin Gao; Wang Yi; Liu Jun

    2010-01-01

    The Scaled Boundary Finite Element Method (SBFEM) is a semi-analytical approach for solving partial differential equations. For problems in elasticity, the governing equations can be obtained by mechanically based formulation, scaled-boundary-transformation-based formulation, or the principle of virtual work. The governing equations are described in the frame of the Lagrange system and the unknowns are displacements. But in the solution procedure, auxiliary variables are introduced and the equations are solved in the state space. Based on the observation that the duality system proposed by W.X. Zhong for solving elastic problems is similar to the above solution approach, the discretization of the SBFEM and the duality system are combined in this paper to derive the governing equations in the Hamiltonian system by introducing dual variables. The Precise Integration Method (PIM) used in the duality system is also an efficient method for the solution of the governing equations of the SBFEM for displacements and the boundary stiffness matrix, especially for cases which cause numerical difficulties in the usual eigenvalue method. Numerical examples are used to demonstrate the validity and effectiveness of the PIM for the solution of the boundary static stiffness.

  5. Electric vehicles and large-scale integration of wind power

    DEFF Research Database (Denmark)

    Liu, Wen; Hu, Weihao; Lund, Henrik

    2013-01-01

    with this imbalance and to reduce its high dependence on oil production. For this reason, it is interesting to analyse the extent to which transport electrification can further the renewable energy integration. This paper quantifies this issue in Inner Mongolia, where the share of wind power in the electricity supply was 6.5% in 2009 and which has the plan to develop large-scale wind power. The results show that electric vehicles (EVs) have the ability to balance the electricity demand and supply and to further the wind power integration. In the best case, the energy system with EV can increase wind power integration by 8%. The application of EVs benefits from saving both energy system cost and fuel cost. However, the negative consequences of decreasing energy system efficiency and increasing the CO2 emission should be noted when applying the hydrogen fuel cell vehicle (HFCV). The results also indicate...

  6. Computational methods assuring nuclear power plant structural integrity and safety: an overview of the recent activities at VTT

    International Nuclear Information System (INIS)

    Keinaenen, H.; Talja, H.; Rintamaa, R.

    1998-01-01

    Numerical, simplified engineering and standardised methods are applied in the safety analyses of primary circuit components and reactor pressure vessels. The integrity assessment procedures require input relating both to steady-state and transient loading, actual material properties data, and precise knowledge of the size and geometry of defects. Current procedures hold extensive information regarding these aspects. It is important to verify the accuracy of the different assessment methods, especially in the case of complex structures and loading. The focus of this paper is on the recent results and development of computational fracture assessment methods at VTT Manufacturing Technology. The methods include effective engineering-type tools for rapid structural integrity assessments and more sophisticated finite-element based methods. An integrated PC-based program system, MASI, for engineering fracture analysis is described. A summary of the verification of the methods in computational benchmark analyses and against the results of large scale experiments is presented. (orig.)

  7. Method of producing nano-scaled inorganic platelets

    Science.gov (United States)

    Zhamu, Aruna; Jang, Bor Z.

    2012-11-13

    The present invention provides a method of exfoliating a layered material (e.g., transition metal dichalcogenide) to produce nano-scaled platelets having a thickness smaller than 100 nm, typically smaller than 10 nm. The method comprises (a) dispersing particles of a non-graphite laminar compound in a liquid medium containing therein a surfactant or dispersing agent to obtain a stable suspension or slurry; and (b) exposing the suspension or slurry to ultrasonic waves at an energy level for a sufficient length of time to produce separated nano-scaled platelets. The nano-scaled platelets are candidate reinforcement fillers for polymer nanocomposites.

  8. Some effects of integrated production planning in large-scale kitchens

    DEFF Research Database (Denmark)

    Engelund, Eva Høy; Friis, Alan; Jacobsen, Peter

    2005-01-01

    Integrated production planning in large-scale kitchens proves advantageous for increasing the overall quality of the food produced and the flexibility in terms of a diverse food supply. The aim is to increase the flexibility and the variability in the production as well as the focus on freshness ...

  9. The Integrative Psychotherapy Alliance: Family, Couple and Individual Therapy Scales.

    Science.gov (United States)

    Pinsof, William M.; Catherall, Donald R.

    1986-01-01

    Presents an integrative definition of the therapeutic alliance that conceptualizes individual, couple and family therapy as occurring within the same systemic framework. The implications of this concept for therapy research are examined. Three new systemically oriented scales to measure the alliance are presented along with some preliminary data…

  10. Achieving Integration in Mixed Methods Designs—Principles and Practices

    Science.gov (United States)

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-01-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent to which the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. PMID:24279835

  11. Wafer-scale integrated micro-supercapacitors on an ultrathin and highly flexible biomedical platform.

    Science.gov (United States)

    Maeng, Jimin; Meng, Chuizhou; Irazoqui, Pedro P

    2015-02-01

    We present wafer-scale integrated micro-supercapacitors on an ultrathin and highly flexible parylene platform, as progress toward sustainably powering biomedical microsystems suitable for implantable and wearable applications. All-solid-state, low-profile supercapacitors are formed on an ultrathin (~20 μm) freestanding parylene film by a wafer-scale parylene packaging process in combination with a polyaniline (PANI) nanowire growth technique assisted by surface plasma treatment. These micro-supercapacitors are highly flexible and shown to be resilient toward flexural stress. Further, direct integration of micro-supercapacitors into a radio frequency (RF) rectifying circuit is achieved on a single parylene platform, yielding a complete RF energy harvesting microsystem. The system discharging rate is shown to improve by ~17 times in the presence of the integrated micro-supercapacitors. This result suggests that the integrated micro-supercapacitor technology described herein is a promising strategy for sustainably powering biomedical microsystems dedicated to implantable and wearable applications.

  12. Genome scale models of yeast: towards standardized evaluation and consistent omic integration

    DEFF Research Database (Denmark)

    Sanchez, Benjamin J.; Nielsen, Jens

    2015-01-01

    Genome scale models (GEMs) have enabled remarkable advances in systems biology, acting as functional databases of metabolism, and as scaffolds for the contextualization of high-throughput data. In the case of Saccharomyces cerevisiae (budding yeast), several GEMs have been published and are currently in use. Here we review the ways in which all levels of omics data (from gene expression to flux) have been integrated in yeast GEMs. Relevant conclusions and current challenges for both GEM evaluation and omic integration are highlighted.

  13. Numerical method of singular problems on singular integrals

    International Nuclear Information System (INIS)

    Zhao Huaiguo; Mou Zongze

    1992-02-01

    As the first part of numerical research on singular problems, a numerical method is proposed for singular integrals. It is shown that the procedure is quite powerful for solving physics calculations with singularities, such as the plasma dispersion function. Useful quadrature formulas for some classes of singular integrals are derived. In general, integrals with more complex singularities can be dealt with easily by this method
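
    The abstract does not reproduce the paper's quadrature formulas, but the flavor of such methods can be sketched with one standard technique, singularity subtraction for a Cauchy principal value; the grid size and the trapezoidal rule below are illustrative choices, not the authors' scheme.

      import numpy as np

      def principal_value(f, a, b, x0, n=4001):
          """Cauchy PV of integral_a^b f(x)/(x - x0) dx via singularity
          subtraction: f(x)/(x - x0) = [f(x) - f(x0)]/(x - x0) + f(x0)/(x - x0);
          the bracketed part is regular, and the remainder integrates
          analytically to f(x0) * ln((b - x0)/(x0 - a))."""
          x = np.linspace(a, b, n)
          d = x - x0
          g = np.zeros_like(x)
          mask = d != 0.0
          g[mask] = (f(x[mask]) - f(x0)) / d[mask]  # regular integrand
          # at x == x0 the limit is f'(x0); leaving 0 there is a tiny local error
          return np.trapz(g, x) + f(x0) * np.log((b - x0) / (x0 - a))

      # Example: PV of integral_{-1}^{1} e^x / x dx = 2*Shi(1) ~ 2.11450
      print(principal_value(np.exp, -1.0, 1.0, 0.0))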

  14. Integrated care pilot in north west London: a mixed methods evaluation

    Directory of Open Access Journals (Sweden)

    Natasha Curry

    2013-07-01

    Full Text Available Introduction: This paper provides the results of a year-long evaluation of a large-scale integrated care pilot in North West London. The pilot aimed to integrate care across primary, acute, community, mental health and social care for people with diabetes and those over 75 years through: care planning; multidisciplinary case reviews; information sharing; and project management support. Methods: The evaluation team conducted qualitative studies of change at organisational, clinician, and patient levels (using interviews, focus groups and a survey); and quantitative analysis of change in service use and patient-level clinical outcomes (using patient-level data sets and a matched control study). Results: The pilot had successfully engaged provider organisations, created a shared strategic vision and established governance structures. However, engagement of clinicians was variable and there was no evidence to date of significant reductions in emergency admissions. There was some evidence of changes in care processes. Conclusion: Although the pilot has demonstrated the beginnings of large-scale change, it remains in the early stages and faces significant challenges as it seeks to become sustainable for the longer term. It is critical that NHS managers and clinicians have realistic expectations of what can be achieved in a relatively short period of time.

  16. Ecosystem assessment methods for cumulative effects at the regional scale

    International Nuclear Information System (INIS)

    Hunsaker, C.T.

    1989-01-01

    Environmental issues such as nonpoint-source pollution, acid rain, reduced biodiversity, land use change, and climate change have widespread ecological impacts and require an integrated assessment approach. Since 1978, the implementing regulations for the National Environmental Policy Act (NEPA) have required assessment of potential cumulative environmental impacts. Current environmental issues have encouraged ecologists to improve their understanding of ecosystem process and function at several spatial scales. However, management activities usually occur at the local scale, and there is little consideration of the potential impacts to the environmental quality of a region. This paper proposes that regional ecological risk assessment provides a useful approach for assisting scientists in accomplishing the task of assessing cumulative impacts. Critical issues such as spatial heterogeneity, boundary definition, and data aggregation are discussed. Examples from an assessment of acidic deposition effects on fish in Adirondack lakes illustrate the importance of integrated data bases, associated modeling efforts, and boundary definition at the regional scale

  17. IoT European Large-Scale Pilots – Integration, Experimentation and Testing

    OpenAIRE

    Guillén, Sergio Gustavo; Sala, Pilar; Fico, Giuseppe; Arredondo, Maria Teresa; Cano, Alicia; Posada, Jorge; Gutierrez, Germán; Palau, Carlos; Votis, Konstantinos; Verdouw, Cor N.; Wolfert, Sjaak; Beers, George; Sundmaeker, Harald; Chatzikostas, Grigoris; Ziegler, Sébastien

    2017-01-01

    The IoT European Large-Scale Pilots Programme includes the innovation consortia that are collaborating to foster the deployment of IoT solutions in Europe through the integration of advanced IoT technologies across the value chain, demonstration of multiple IoT applications at scale and in a usage context, and as close as possible to operational conditions. The programme projects are targeted, goal-driven initiatives that propose IoT approaches to specific real-life industrial/societal challe...

  18. Numerov iteration method for second order integral-differential equation

    International Nuclear Information System (INIS)

    Zeng Fanan; Zhang Jiaju; Zhao Xuan

    1987-01-01

    In this paper, Numerov iterative methods for the second order integral-differential equation and for systems of such equations are constructed. Numerical examples show that this method is better than the direct method (Gauss elimination) in CPU time and memory requirements. Therefore, this is an efficient method for solving integral-differential equations in nuclear physics
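
    The record gives no formulas; for reference, the underlying Numerov recursion for equations of the form y'' = f(x) y can be sketched as follows (a minimal illustration of the recursion itself, not the paper's iteration for the integro-differential case).

      import numpy as np

      def numerov(f, y0, y1, x):
          """Integrate y'' = f(x) * y on the uniform grid x with the Numerov
          recursion (local error O(h^6)); y0, y1 are the first two values."""
          h2 = (x[1] - x[0]) ** 2
          y = np.empty_like(x)
          y[0], y[1] = y0, y1
          w = 1.0 - h2 * f(x) / 12.0          # Numerov weights
          for n in range(1, len(x) - 1):
              y[n + 1] = ((12.0 - 10.0 * w[n]) * y[n] - w[n - 1] * y[n - 1]) / w[n + 1]
          return y

      # Example: y'' = -y with y(0) = 0; the exact solution is sin(x).
      x = np.linspace(0.0, np.pi, 101)
      y = numerov(lambda t: -np.ones_like(t), 0.0, np.sin(x[1]), x)
      print(abs(y - np.sin(x)).max())   # prints a very small maximum error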

  19. Achieving integration in mixed methods designs-principles and practices.

    Science.gov (United States)

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-12-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs-exploratory sequential, explanatory sequential, and convergent-and through four advanced frameworks-multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. © Health Research and Educational Trust.

  20. Equivalent Method of Integrated Power Generation System of Wind, Photovoltaic and Energy Storage in Power Flow Calculation and Transient Simulation

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    The integrated power generation system of wind, photovoltaic (PV) and energy storage is composed of several wind turbines, PV units and energy storage units. The detailed model of integrated generation is not suitable for large-scale power system simulation because of the model's complexity and long computation time. An equivalent method for power flow calculation and transient simulation of the integrated generation system is proposed based on actual projects, so as to establish the foundation for simulation and analysis of such integrated systems.
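
    The abstract does not state the equivalencing equations. A common first-order aggregation for identical units at one bus, which may or may not match the authors' method, sums the unit powers and parallels the unit impedances:

      # Hypothetical single-machine equivalent of N identical units at one bus:
      # powers add, and identical per-unit impedances in parallel divide by N.
      def aggregate_units(n_units, p_mw, q_mvar, z_ohm):
          """Lump n identical generating units into one equivalent machine."""
          return {
              "P_MW": n_units * p_mw,       # active power adds
              "Q_MVAr": n_units * q_mvar,   # reactive power adds
              "Z_ohm": z_ohm / n_units,     # identical impedances in parallel
          }

      print(aggregate_units(n_units=33, p_mw=1.5, q_mvar=0.3, z_ohm=(0.01 + 0.08j)))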

  1. Multidimensional quantum entanglement with large-scale integrated optics.

    Science.gov (United States)

    Wang, Jianwei; Paesani, Stefano; Ding, Yunhong; Santagati, Raffaele; Skrzypczyk, Paul; Salavrakos, Alexia; Tura, Jordi; Augusiak, Remigiusz; Mančinska, Laura; Bacco, Davide; Bonneau, Damien; Silverstone, Joshua W; Gong, Qihuang; Acín, Antonio; Rottwitt, Karsten; Oxenløwe, Leif K; O'Brien, Jeremy L; Laing, Anthony; Thompson, Mark G

    2018-04-20

    The ability to control multidimensional quantum systems is central to the development of advanced quantum technologies. We demonstrate a multidimensional integrated quantum photonic platform able to generate, control, and analyze high-dimensional entanglement. A programmable bipartite entangled system is realized with dimensions up to 15 × 15 on a large-scale silicon photonics quantum circuit. The device integrates more than 550 photonic components on a single chip, including 16 identical photon-pair sources. We verify the high precision, generality, and controllability of our multidimensional technology, and further exploit these abilities to demonstrate previously unexplored quantum applications, such as quantum randomness expansion and self-testing on multidimensional states. Our work provides an experimental platform for the development of multidimensional quantum technologies. Copyright © 2018 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

  2. Integral Methods in Science and Engineering

    CERN Document Server

    Constanda, Christian

    2011-01-01

    An enormous array of problems encountered by scientists and engineers are based on the design of mathematical models using many different types of ordinary differential, partial differential, integral, and integro-differential equations. Accordingly, the solutions of these equations are of great interest to practitioners and to science in general. Presenting a wealth of cutting-edge research by a diverse group of experts in the field, Integral Methods in Science and Engineering: Computational and Analytic Aspects gives a vivid picture of both the development of theoretical integral techniques

  3. An Integrative Bioinformatics Framework for Genome-scale Multiple Level Network Reconstruction of Rice

    Directory of Open Access Journals (Sweden)

    Liu Lili

    2013-06-01

    Full Text Available Understanding how metabolic reactions translate the genome of an organism into its phenotype is a grand challenge in biology. Genome-wide association studies (GWAS) statistically connect genotypes to phenotypes, without any recourse to known molecular interactions, whereas a molecular mechanistic description ties gene function to phenotype through gene regulatory networks (GRNs), protein-protein interactions (PPIs) and molecular pathways. Integration of different regulatory information levels of an organism is expected to provide a good way for mapping genotypes to phenotypes. However, the lack of a curated metabolic model of rice is blocking the exploration of genome-scale multi-level network reconstruction. Here, we have merged GRN, PPI and genome-scale metabolic network (GSMN) approaches into a single framework for rice via reconstruction and integration of omics regulatory information. Firstly, we reconstructed a genome-scale metabolic model, containing 4,462 functional genes and 2,986 metabolites involved in 3,316 reactions, compartmentalized into ten subcellular locations. Furthermore, 90,358 pairs of protein-protein interactions, 662,936 pairs of gene regulations and 1,763 microRNA-target interactions were integrated into the metabolic model. Eventually, a database was developed for systematically storing and retrieving the genome-scale multi-level network of rice. This provides a reference for understanding the genotype-phenotype relationship of rice, and for analysis of its molecular regulatory network.

  4. Recent Advances in the Method of Forces: Integrated Force Method of Structural Analysis

    Science.gov (United States)

    Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.

    1998-01-01

    Stress that can be induced in an elastic continuum can be determined directly through the simultaneous application of the equilibrium equations and the compatibility conditions. In the literature, this direct stress formulation is referred to as the integrated force method. This method, which uses forces as the primary unknowns, complements the popular equilibrium-based stiffness method, which considers displacements as the unknowns. The integrated force method produces accurate stress, displacement, and frequency results even for modest finite element models. This version of the force method should be developed as an alternative to the stiffness method because the latter method, which has been researched for the past several decades, may have entered its developmental plateau. Stress plays a primary role in the development of aerospace and other products, and its analysis is difficult. Therefore, it is advisable to use both methods to calculate stress and eliminate errors through comparison. This paper examines the role of the integrated force method in analysis, animation and design.
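
    As a concrete illustration of solving for forces directly from equilibrium plus compatibility, here is a minimal three-bar truss sketch (geometry, stiffness and load values are invented); the stiffness-method solution is computed alongside as a cross-check.

      import numpy as np

      # Three-bar truss, one free node: 2 equilibrium equations, 3 unknown bar
      # forces, so one compatibility condition is needed to close the system.
      # Unit vectors from the free node toward the three supports (assumed).
      d = np.array([[0.0, 1.0],
                    [np.sin(np.pi / 4), np.cos(np.pi / 4)],
                    [-np.sin(np.pi / 4), np.cos(np.pi / 4)]]).T   # 2 x 3
      L = np.array([1.0, np.sqrt(2.0), np.sqrt(2.0)])             # bar lengths
      EA = 1.0e7                                                  # axial stiffness (assumed)
      P = np.array([1.0e4, -2.0e4])                               # nodal load

      B = -d                          # equilibrium: B @ F = P
      G = np.diag(L / EA)             # flexibility: elongations e = G @ F
      # Compatibility: e must equal B.T @ u for some displacement u, i.e.
      # C @ e = 0 where the rows of C span the null space of B.
      _, _, Vt = np.linalg.svd(B)
      C = Vt[2:]                      # 1 x 3 null-space basis

      A = np.vstack([B, C @ G])       # IFM governing matrix [B; C G]
      F = np.linalg.solve(A, np.append(P, 0.0))

      # Cross-check with the displacement (stiffness) method.
      K = B @ np.linalg.inv(G) @ B.T
      u = np.linalg.solve(K, P)
      F_check = np.linalg.inv(G) @ (B.T @ u)
      print(F, np.allclose(F, F_check))   # the two force solutions agree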

  5. ASPECTS OF INTEGRATION MANAGEMENT METHODS

    Directory of Open Access Journals (Sweden)

    Artemy Varshapetian

    2015-10-01

    Full Text Available For manufacturing companies to succeed in today's unstable economic environment, it is necessary to restructure the main components of their activities: designing innovative products, production using modern reconfigurable manufacturing systems, a business model that takes the global strategy into account, and management methods using modern management models and tools. The first three components are discussed in numerous publications, for example (Koren, 2010), and are therefore not considered in the article. A large number of publications are devoted to the methods and tools of production management, for example (Halevi, 2007). On this basis, the article discusses the possibility of integrating the three methods that have become the most widely used in recent years, namely: the Six Sigma method (SS) (George et al., 2005) and its supplement, Design for Six Sigma (DFSS) (Taguchi, 2003); Lean production, which evolved into "Lean management" and further into "Lean thinking" (Lean) (Hirano et al., 2006); and the Theory of Constraints (TOC) developed by E. Goldratt (Dettmer, 2001). The article investigates some aspects of this integration: applications in diverse fields, positive features, changes in management structure, etc.

  6. Constructing sites at a large scale - towards new design (education) methods

    DEFF Research Database (Denmark)

    Braae, Ellen Marie; Tietjen, Anne

    2010-01-01

    Since the 1990s the regional scale has regained importance in urban and landscape design. In parallel, the focus in design tasks has shifted from master plans for urban extension to strategic urban transformation projects. The current paradigm of planning by projects reinforces the role of the design disciplines within the development of our urban landscapes. At the same time, urban and landscape designers are confronted with new methodological problems. Within a strategic transformation perspective the formulation of the design problem or brief becomes an integrated part of the design process. This paper discusses new design (education) methods based on a relational concept of urban sites and design processes, using actor-network theory as the theoretical frame.

  7. Pesticide fate at regional scale: Development of an integrated model approach and application

    Science.gov (United States)

    Herbst, M.; Hardelauf, H.; Harms, R.; Vanderborght, J.; Vereecken, H.

    As a result of agricultural practice many soils and aquifers are contaminated with pesticides. In order to quantify the side-effects of these anthropogenic impacts on groundwater quality at regional scale, a process-based, integrated model approach was developed. The Richards’ equation based numerical model TRACE calculates the three-dimensional saturated/unsaturated water flow. For the modeling of regional scale pesticide transport we linked TRACE with the plant module SUCROS and with 3DLEWASTE, a hybrid Lagrangian/Eulerian approach to solve the convection/dispersion equation. We used measurements, standard methods like pedotransfer-functions or parameters from literature to derive the model input for the process model. A first-step application of TRACE/3DLEWASTE to the 20 km² test area ‘Zwischenscholle’ for the period 1983-1993 reveals the behaviour of the pesticide isoproturon. The selected test area is characterised by an intense agricultural use and shallow groundwater, resulting in a high vulnerability of the groundwater to pesticide contamination. The model results stress the importance of the unsaturated zone for the occurrence of pesticides in groundwater. Remarkable isoproturon concentrations in groundwater are predicted for locations with thin layered and permeable soils. For four selected locations we used measured piezometric heads to validate predicted groundwater levels. In general, the model results are consistent and reasonable. Thus the developed integrated model approach is seen as a promising tool for the quantification of the agricultural practice impact on groundwater quality.
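
    The transport piece solved by 3DLEWASTE is the convection/dispersion (advection-dispersion) equation; a generic form for concentration c, water content theta and Darcy flux q, with sorption and degradation lumped into a sink term s (this notation is mine, not taken from the paper), is:

      % Generic convection/dispersion equation for solute transport:
      \frac{\partial (\theta c)}{\partial t}
          = \nabla \cdot \big( \theta D \nabla c \big)
          - \nabla \cdot \big( q\, c \big) + s .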

  8. Temperature scaling method for Markov chains.

    Science.gov (United States)

    Crosby, Lonnie D; Windus, Theresa L

    2009-01-22

    The use of ab initio potentials in Monte Carlo simulations aimed at investigating the nucleation kinetics of water clusters is complicated by the computational expense of the potential energy determinations. Furthermore, the common desire to investigate the temperature dependence of kinetic properties leads to an urgent need to reduce the expense of performing simulations at many different temperatures. A method is detailed that allows a Markov chain (obtained via Monte Carlo) at one temperature to be scaled to other temperatures of interest without the need to perform additional large simulations. This Markov chain temperature-scaling (TeS) can be generally applied to simulations geared for numerous applications. This paper shows the quality of results which can be obtained by TeS and the possible quantities which may be extracted from scaled Markov chains. Results are obtained for a 1-D analytical potential for which the exact solutions are known. Also, this method is applied to water clusters consisting of between 2 and 5 monomers, using Dynamical Nucleation Theory to determine the evaporation rate constant for monomer loss. Although ab initio potentials are not utilized in this paper, the benefit of this method is made apparent by using the Dang-Chang polarizable classical potential for water to obtain statistical properties at various temperatures.
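
    The abstract does not give the TeS equations. Standard Boltzmann reweighting of a chain sampled at T1 to a nearby temperature T2 conveys the basic idea (the paper's actual scaling procedure may differ):

      import numpy as np

      rng = np.random.default_rng(0)

      # Suppose E holds potential energies sampled by Monte Carlo at T1. A
      # common way to re-use the chain at T2 is Boltzmann reweighting,
      #   w_i ~ exp(-(1/kT2 - 1/kT1) * E_i),
      # which is reliable only while T2 stays close to T1 (good overlap).
      def reweight(E, A, T1, T2, k=1.0):
          """Estimate <A> at T2 from samples (E_i, A_i) generated at T1."""
          w = np.exp(-(1.0 / (k * T2) - 1.0 / (k * T1)) * (E - E.min()))
          return np.sum(w * A) / np.sum(w)

      # Toy harmonic example: E = x^2/2 sampled at T1 = 1; exact <E> = T/2.
      x = rng.normal(0.0, 1.0, 200_000)       # Boltzmann samples at T1 = 1
      E = 0.5 * x**2
      print(reweight(E, E, T1=1.0, T2=0.9))   # ~0.45 = T2/2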

  9. Development of an integrated method for long-term water quality prediction using seasonal climate forecast

    Directory of Open Access Journals (Sweden)

    J. Cho

    2016-10-01

    Full Text Available The APEC Climate Center (APCC) produces climate prediction information utilizing a multi-climate model ensemble (MME) technique. In this study, four different downscaling methods, in accordance with the degree of utilizing the seasonal climate prediction information, were developed in order to improve predictability and to refine the spatial scale. These methods include: (1) the Simple Bias Correction (SBC) method, which directly uses APCC's dynamic prediction data with a 3- to 6-month lead time; (2) the Moving Window Regression (MWR) method, which indirectly utilizes dynamic prediction data; (3) the Climate Index Regression (CIR) method, which predominantly uses observation-based climate indices; and (4) the Integrated Time Regression (ITR) method, which uses predictors selected from both CIR and MWR. Then, a sampling-based temporal downscaling was conducted using the Mahalanobis distance method in order to create daily weather inputs to the Soil and Water Assessment Tool (SWAT) model. Long-term predictability of water quality within the Wecheon watershed of the Nakdong River Basin was evaluated. According to the Korean Ministry of Environment's Provisions of Water Quality Prediction and Response Measures, modeling-based predictability was evaluated by using 3-month lead prediction data issued in February, May, August, and November as model input to SWAT. Finally, an integrated approach, which takes into account various climate information and downscaling methods for water quality prediction, was presented. This integrated approach can be used to prevent potential problems caused by extreme climate in advance.
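
    Of the four methods, only the Simple Bias Correction lends itself to a one-screen sketch. One plausible additive reading of SBC, with made-up numbers, is:

      import numpy as np

      def simple_bias_correction(forecast, hindcast, observed):
          """Additive simple bias correction (one plausible reading of SBC):
          shift the new forecast by the mean bias of historical hindcasts
          against observations for the same season."""
          bias = np.mean(hindcast) - np.mean(observed)
          return forecast - bias

      hindcast = np.array([12.1, 10.4, 13.0, 11.7])   # model, past years (made up)
      observed = np.array([10.0,  9.2, 11.1, 10.3])   # gauge data (made up)
      print(simple_bias_correction(12.5, hindcast, observed))  # corrected value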

  10. Accurate and Efficient Parallel Implementation of an Effective Linear-Scaling Direct Random Phase Approximation Method.

    Science.gov (United States)

    Graf, Daniel; Beuerle, Matthias; Schurkus, Henry F; Luenser, Arne; Savasci, Gökcen; Ochsenfeld, Christian

    2018-05-08

    An efficient algorithm for calculating the random phase approximation (RPA) correlation energy is presented that is as accurate as the canonical molecular orbital resolution-of-the-identity RPA (RI-RPA) with the important advantage of an effective linear-scaling behavior (instead of quartic) for large systems due to a formulation in the local atomic orbital space. The high accuracy is achieved by utilizing optimized minimax integration schemes and the local Coulomb metric attenuated by the complementary error function for the RI approximation. The memory bottleneck of former atomic orbital (AO)-RI-RPA implementations (Schurkus, H. F.; Ochsenfeld, C. J. Chem. Phys. 2016, 144, 031101 and Luenser, A.; Schurkus, H. F.; Ochsenfeld, C. J. Chem. Theory Comput. 2017, 13, 1647-1655) is addressed by precontraction of the large 3-center integral matrix with the Cholesky factors of the ground state density, reducing the memory requirements of that matrix by a factor of [Formula: see text]. Furthermore, we present a parallel implementation of our method, which not only leads to faster RPA correlation energy calculations but also to a scalable decrease in memory requirements, opening the door for investigations of large molecules even on small- to medium-sized computing clusters. Although it is known that AO methods are highly efficient for extended systems, where sparsity allows for reaching the linear-scaling regime, we show that our work also extends the applicability when considering highly delocalized systems for which no linear scaling can be achieved. As an example, the interlayer distance of two covalent organic framework pore fragments (comprising 384 atoms in total) is analyzed.
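
    The precontraction trick described above can be illustrated with plain tensor contractions; the dimensions below are arbitrary stand-ins for the basis, auxiliary and occupied sizes, not values from the paper:

      import numpy as np

      naux, nbf, nocc = 200, 120, 15      # illustrative sizes
      B = np.random.rand(naux, nbf, nbf)  # 3-center RI integrals B^Q_{mu,nu}
      L = np.random.rand(nbf, nocc)       # occupied factors (e.g. density Cholesky)

      # Precontract one AO index before storing: memory for the tensor drops
      # from naux*nbf^2 to naux*nbf*nocc, i.e. by a factor nbf/nocc (8 here).
      B_occ = np.einsum('qmn,ni->qmi', B, L, optimize=True)
      print(B.nbytes / B_occ.nbytes)      # 8.0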

  11. Structural reliability calculation method based on the dual neural network and direct integration method.

    Science.gov (United States)

    Li, Haibin; He, Yun; Nie, Xiaobo

    2018-01-01

    Structural reliability analysis under uncertainty has received wide attention from engineers and scholars because it reflects structural characteristics and actual bearing conditions. The direct integration method, which starts from the definition of reliability, is easy to understand, but mathematical difficulties remain in the calculation of the multiple integrals involved. Therefore, a dual neural network method is proposed for calculating multiple integrals in this paper. The dual neural network consists of two neural networks: network A is used to learn the integrand function, and network B is used to simulate the original (antiderivative) function. According to the derivative relationships between the network output and the network input, network B is derived from network A. On this basis, a normalized performance function is employed in the proposed method to overcome the difficulty of multiple integration and to improve the accuracy of reliability calculations. Comparisons between the proposed method and the Monte Carlo simulation method, the Hasofer-Lind method, and the mean value first-order second moment method demonstrate that the proposed method is an efficient and accurate approach to structural reliability problems.
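
    The dual-network construction cannot be reproduced from the abstract alone; as a point of reference, the direct-integration definition of failure probability that it accelerates can be written down for a toy linear limit state g = R - S with normal variables (values invented):

      import numpy as np
      from scipy import stats

      # Limit state g = R - S (resistance minus load), R ~ N(5,1), S ~ N(3,1).
      muR, sR, muS, sS = 5.0, 1.0, 3.0, 1.0

      # Direct integration: Pf = integral of f_S(s) * P[R < s] ds on a grid.
      s = np.linspace(muS - 8 * sS, muS + 8 * sS, 4001)
      integrand = stats.norm.pdf(s, muS, sS) * stats.norm.cdf(s, muR, sR)
      pf_direct = np.trapz(integrand, s)

      beta = (muR - muS) / np.hypot(sR, sS)       # exact reliability index
      print(pf_direct, stats.norm.cdf(-beta))     # both ~0.0786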

  12. Integrated calibration of a 3D attitude sensor in large-scale metrology

    International Nuclear Information System (INIS)

    Gao, Yang; Lin, Jiarui; Yang, Linghui; Zhu, Jigui; Muelaner, Jody; Keogh, Patrick

    2017-01-01

    A novel calibration method is presented for a multi-sensor fusion system in large-scale metrology, which improves the calibration efficiency and reliability. The attitude sensor is composed of a pinhole prism, a converging lens, an area-array camera and a biaxial inclinometer. A mathematical model is established to determine its 3D attitude relative to a cooperative total station by using two vector observations from the imaging system and the inclinometer. There are two areas of unknown parameters in the measurement model that should be calibrated: the intrinsic parameters of the imaging model, and the transformation matrix between the camera and the inclinometer. An integrated calibration method using a three-axis rotary table and a total station is proposed. A single mounting position of the attitude sensor on the rotary table is sufficient to solve for all parameters of the measurement model. A correction technique for the reference laser beam of the total station is also presented to remove the need for accurate positioning of the sensor on the rotary table. Experimental verification has proved the practicality and accuracy of this calibration method. Results show that the mean deviations of attitude angles using the proposed method are less than 0.01°. (paper)
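
    Determining attitude from two vector observations has a classical closed-form solution, the TRIAD construction, sketched below as an illustration; the authors' estimator may differ in detail.

      import numpy as np

      def triad(b1, b2, r1, r2):
          """Classical TRIAD attitude solution: rotation matrix A with
          A @ r ~ b, from two vector observations in the body frame (b1, b2)
          and their reference-frame counterparts (r1, r2); the first pair is
          trusted more than the second."""
          def frame(u, v):
              t1 = u / np.linalg.norm(u)
              t2 = np.cross(u, v)
              t2 /= np.linalg.norm(t2)
              return np.column_stack([t1, t2, np.cross(t1, t2)])
          return frame(b1, b2) @ frame(r1, r2).T

      # Toy check: recover a known rotation about z by 30 degrees.
      c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
      A_true = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1.0]])
      r1, r2 = np.array([1.0, 0, 0]), np.array([0, 1.0, 0])
      print(np.allclose(triad(A_true @ r1, A_true @ r2, r1, r2), A_true))  # True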

  13. Two-scale large deviations for chemical reaction kinetics through second quantization path integral

    International Nuclear Information System (INIS)

    Li, Tiejun; Lin, Feng

    2016-01-01

    Motivated by the study of rare events for a typical genetic switching model in systems biology, in this paper we aim to establish the general two-scale large deviations for chemical reaction systems. We build a formal approach to explicitly obtain the large deviation rate functionals for the considered two-scale processes based upon the second quantization path integral technique. We get three important types of large deviation results when the underlying two timescales are in three different regimes. This is realized by singular perturbation analysis to the rate functionals obtained by the path integral. We find that the three regimes possess the same deterministic mean-field limit but completely different chemical Langevin approximations. The obtained results are natural extensions of the classical large volume limit for chemical reactions. We also discuss its implication on the single-molecule Michaelis–Menten kinetics. Our framework and results can be applied to understand general multi-scale systems including diffusion processes. (paper)
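
    The two-scale functionals require the full paper, but the classical large-volume rate functional for chemical kinetics that they generalize has a standard Wentzell-Freidlin form (notation: reaction propensities a_j, stoichiometric vectors nu_j; this is the textbook single-scale expression, not the paper's two-scale result):

      % Classical large-volume rate functional for a chemical jump process:
      I_{[0,T]}(\phi) = \int_0^T \sup_{\theta}\Big[ \langle \theta, \dot\phi(t) \rangle
          - \sum_j a_j\big(\phi(t)\big)\,\big(e^{\langle \theta, \nu_j \rangle} - 1\big) \Big]\, dt .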

  14. Scaling of vegetation indices for environmental change studies

    International Nuclear Information System (INIS)

    Qi, J.; Huete, A.; Sorooshian, S.; Chehbouni, A.; Kerr, Y.

    1992-01-01

    The spatial integration of physical parameters in remote sensing studies is of critical concern when evaluating global biophysical processes on the earth's surface. When high resolution physical parameters, such as vegetation indices, are degraded for integration into global scale studies, they differ from lower spatial resolution data due to spatial variability and the method by which these parameters are integrated. In this study, multi-spatial resolution data sets of SPOT and ground-based data obtained at Walnut Gulch Experimental Watershed in southern Arizona, US during MONSOON '90 were used. These data sets were examined to study the variations of the vegetation index parameters when integrated into coarser resolutions. Different integration methods (conventional mean and geostatistical mean) were used in simulations of high-to-low resolutions. The sensitivity of the integrated parameters was found to vary with both the spatial variability of the area and the integration methods. Modeled equations describing the scale-dependency of the vegetation index are suggested
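
    Why a nonlinear index is scale-dependent can be seen in a two-pixel toy example (reflectance values invented): averaging NDVI over sub-pixels and computing NDVI from degraded reflectances give different answers.

      import numpy as np

      # Two sub-pixels: dense vegetation and bare soil (made-up reflectances).
      red = np.array([0.05, 0.30])
      nir = np.array([0.45, 0.35])
      ndvi = (nir - red) / (nir + red)

      fine_scale_mean = ndvi.mean()                                   # average the index
      coarse = (nir.mean() - red.mean()) / (nir.mean() + red.mean())  # degrade first
      print(fine_scale_mean, coarse)  # 0.438 vs 0.391: integration method matters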

  15. A mixed-methods study of system-level sustainability of evidence-based practices in 12 large-scale implementation initiatives.

    Science.gov (United States)

    Scudder, Ashley T; Taber-Thomas, Sarah M; Schaffner, Kristen; Pemberton, Joy R; Hunter, Leah; Herschell, Amy D

    2017-12-07

    In recent decades, evidence-based practices (EBPs) have been broadly promoted in community behavioural health systems in the United States of America, yet reported EBP penetration rates remain low. Determining how to systematically sustain EBPs in complex, multi-level service systems has important implications for public health. This study examined factors impacting the sustainability of parent-child interaction therapy (PCIT) in large-scale initiatives in order to identify potential predictors of sustainment. A mixed-methods approach to data collection was used. Qualitative interviews and quantitative surveys examining sustainability processes and outcomes were completed by participants from 12 large-scale initiatives. Sustainment strategies fell into nine categories, including infrastructure, training, marketing, integration and building partnerships. Strategies involving integration of PCIT into existing practices and quality monitoring predicted sustainment, while financing also emerged as a key factor. The reported factors and strategies impacting sustainability varied across initiatives; however, integration into existing practices, monitoring quality and financing appear central to high levels of sustainability of PCIT in community-based systems. More detailed examination of the progression of specific activities related to these strategies may aid in identifying priorities to include in strategic planning of future large-scale initiatives. ClinicalTrials.gov ID NCT02543359 ; Protocol number PRO12060529.

  16. Mixed methods in psychotherapy research: A review of method(ology) integration in psychotherapy science.

    Science.gov (United States)

    Bartholomew, Theodore T; Lockard, Allison J

    2018-06-13

    Mixed methods can foster depth and breadth in psychological research. However, its use remains in development in psychotherapy research. Our purpose was to review the use of mixed methods in psychotherapy research. Thirty-one studies were identified via the PRISMA systematic review method. Using Creswell & Plano Clark's typologies to identify design characteristics, we assessed each study for rigor and how each used mixed methods. Key features of mixed methods designs and these common patterns were identified: (a) integration of clients' perceptions via mixing; (b) understanding group psychotherapy; (c) integrating methods with cases and small samples; (d) analyzing clinical data as qualitative data; and (e) exploring cultural identities in psychotherapy through mixed methods. The review is discussed with respect to the value of integrating multiple data in single studies to enhance psychotherapy research. © 2018 Wiley Periodicals, Inc.

  17. Methods for enhancing numerical integration

    International Nuclear Information System (INIS)

    Doncker, Elise de

    2003-01-01

    We give a survey of common strategies for numerical integration (adaptive, Monte-Carlo, Quasi-Monte Carlo), and attempt to delineate their realm of applicability. The inherent accuracy and error bounds for basic integration methods are given via such measures as the degree of precision of cubature rules, the index of a family of lattice rules, and the discrepancy of uniformly distributed point sets. Strategies incorporating these basic methods often use paradigms to reduce the error by, e.g., increasing the number of points in the domain or decreasing the mesh size, locally or uniformly. For these processes the order of convergence of the strategy is determined by the asymptotic behavior of the error, and may be too slow in practice for the type of problem at hand. For certain problem classes we may be able to improve the effectiveness of the method or strategy by such techniques as transformations, absorbing a difficult part of the integrand into a weight function, suitable partitioning of the domain, transformations and extrapolation or convergence acceleration. Situations warranting the use of these techniques (possibly in an 'automated' way) are described and illustrated by sample applications
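
    As one example of the convergence-acceleration techniques surveyed, a single Richardson extrapolation step applied to the trapezoidal rule (one Romberg step) is sketched below; the integrand and step sizes are arbitrary.

      import numpy as np

      def trapezoid(f, a, b, n):
          x = np.linspace(a, b, n + 1)
          return np.trapz(f(x), x)

      # Richardson extrapolation: the trapezoid error is O(h^2), so combining
      # step sizes h and h/2 as (4*T(h/2) - T(h)) / 3 cancels the leading term.
      f, a, b = np.exp, 0.0, 1.0
      T1, T2 = trapezoid(f, a, b, 8), trapezoid(f, a, b, 16)
      print(T2, (4 * T2 - T1) / 3, np.e - 1)   # extrapolated value is far closer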

  18. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. II. Linear scaling domain based pair natural orbital coupled cluster theory

    International Nuclear Information System (INIS)

    Riplinger, Christoph; Pinski, Peter; Becker, Ute; Neese, Frank; Valeev, Edward F.

    2016-01-01

    Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate

  19. The validity of the density scaling method in primary electron transport for photon and electron beams

    International Nuclear Information System (INIS)

    Woo, M.K.; Cunningham, J.R.

    1990-01-01

    In the convolution/superposition method of photon beam dose calculations, inhomogeneities are usually handled by using some form of scaling involving the relative electron densities of the inhomogeneities. In this paper the accuracy of density scaling as applied to primary electrons generated in photon interactions is examined. Monte Carlo calculations are compared with density scaling calculations for air and cork slab inhomogeneities. For individual primary photon kernels as well as for photon interactions restricted to a thin layer, the results can differ significantly, by up to 50%, between the two calculations. However, for realistic photon beams where interactions occur throughout the whole irradiated volume, the discrepancies are much less severe. The discrepancies for the kernel calculation are attributed to the scattering characteristics of the electrons and the consequent oversimplified modeling used in the density scaling method. A technique called the kernel integration technique is developed to analyze the general effects of air and cork inhomogeneities. It is shown that the discrepancies become significant only under rather extreme conditions, such as immediately beyond the surface after a large air gap. In electron beams all the primary electrons originate from the surface of the phantom and the errors caused by simple density scaling can be much more significant. Various aspects relating to the accuracy of density scaling for air and cork slab inhomogeneities are discussed
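
    The scaling under examination replaces geometric depth with radiological (density-scaled) depth; a minimal sketch of that arithmetic, with invented layer data:

      # Radiological (density-scaled) depth: sum of layer thicknesses weighted
      # by relative electron density; the paper examines when this
      # approximation breaks down for primary electron transport.
      layers = [(2.0, 1.0),    # (thickness cm, relative electron density): water
                (3.0, 0.001),  # air gap
                (1.5, 0.3)]    # cork
      d_eff = sum(t * rho for t, rho in layers)
      print(d_eff)  # 2.453 cm water-equivalent depth vs 6.5 cm geometric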

  20. Momentum integral network method for thermal-hydraulic transient analysis

    International Nuclear Information System (INIS)

    Van Tuyle, G.J.

    1983-01-01

    A new momentum integral network method has been developed, and tested in the MINET computer code. The method was developed in order to facilitate the transient analysis of complex fluid flow and heat transfer networks, such as those found in the balance of plant of power generating facilities. The method employed in the MINET code is a major extension of a momentum integral method reported by Meyer. Meyer integrated the momentum equation over several linked nodes, called a segment, and used a segment average pressure, evaluated from the pressures at both ends. Nodal mass and energy conservation determined nodal flows and enthalpies, accounting for fluid compression and thermal expansion
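
    In abstract form, integrating the one-dimensional momentum equation over the linked nodes of a segment yields a single ODE for the segment flow rate; the notation below is illustrative, not taken from the MINET documentation:

      % Segment-integrated momentum equation for segment flow rate W, with the
      % segment average pressure taken from the two end pressures:
      \Big( \sum_i \frac{\ell_i}{A_i} \Big) \frac{dW}{dt}
          = p_{\mathrm{in}} - p_{\mathrm{out}}
          - \Delta p_{\mathrm{friction}} - \Delta p_{\mathrm{gravity}}
          + \Delta p_{\mathrm{pump}} ,
      \qquad \bar p = \tfrac{1}{2}\,\big( p_{\mathrm{in}} + p_{\mathrm{out}} \big).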

  1. An Integrated Method for Airfoil Optimization

    Science.gov (United States)

    Okrent, Joshua B.

    Design exploration and optimization is a large part of the initial engineering and design process. To evaluate the aerodynamic performance of a design, viscous Navier-Stokes solvers can be used. However, this method can prove to be overwhelmingly time-consuming when performing an initial design sweep. Therefore, another evaluation method is needed to provide accurate results at a faster pace. To accomplish this goal, a coupled viscous-inviscid method is used. This thesis proposes an integrated method for analyzing, evaluating, and optimizing an airfoil using a coupled viscous-inviscid solver along with a genetic algorithm to find the optimal candidate. The method proposed is different from prior optimization efforts in that it greatly broadens the design space, while allowing the optimization to search for the best candidate that will meet multiple objectives over a characteristic mission profile rather than over a single condition and single optimization parameter. The increased design space is due to the use of multiple parametric airfoil families, namely the NACA 4 series, CST family, and the PARSEC family. Almost all possible airfoil shapes can be created with these three families allowing for all possible configurations to be included. This inclusion of multiple airfoil families addresses a possible criticism of prior optimization attempts since by only focusing on one airfoil family, they were inherently limiting the number of possible airfoil configurations. By using multiple parametric airfoils, it can be assumed that all reasonable airfoil configurations are included in the analysis and optimization and that a global and not local maximum is found. Additionally, the method used is amenable to customization to suit any specific needs as well as including the effects of other physical phenomena or design criteria and/or constraints. This thesis found that an airfoil configuration that met multiple objectives could be found for a given set of nominal
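
    A genetic algorithm of the kind coupled to the viscous-inviscid solver can be sketched generically; the fitness function below is a stand-in for the aerodynamic evaluation, and the genes stand in for a parametric airfoil description (e.g. CST coefficients):

      import numpy as np
      rng = np.random.default_rng(0)

      def ga(fitness, n_genes=8, pop=40, gens=60, sigma=0.1):
          """Minimal selection-plus-mutation genetic algorithm (maximization)."""
          x = rng.normal(0.0, 1.0, (pop, n_genes))
          for _ in range(gens):
              f = np.array([fitness(ind) for ind in x])
              parents = x[np.argsort(f)[-pop // 2:]]            # keep best half
              kids = parents[rng.integers(0, len(parents), pop - len(parents))]
              kids = kids + rng.normal(0.0, sigma, kids.shape)  # mutate
              x = np.vstack([parents, kids])
          return x[np.argmax([fitness(ind) for ind in x])]

      print(ga(lambda v: -np.sum(v**2)))   # toy objective: drive genes to zero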

  2. The Tunneling Method for Global Optimization in Multidimensional Scaling.

    Science.gov (United States)

    Groenen, Patrick J. F.; Heiser, Willem J.

    1996-01-01

    A tunneling method for global minimization in multidimensional scaling is introduced and adjusted for multidimensional scaling with general Minkowski distances. The method alternates a local search step with a tunneling step in which a different configuration is sought with the same STRESS value. (SLD)
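
    In classical tunneling methods (Levy-Montalvo style), after a local minimum X* of the stress sigma is found, the tunneling phase seeks a zero of a pole-transformed function; the exact transformation used by the authors may differ:

      % Generic tunneling transformation: search for a zero of
      T(X) = \frac{\sigma(X) - \sigma(X^{*})}{\lVert X - X^{*} \rVert^{2\lambda}} ,
      % whose pole at X* repels the search; any zero of T is a different
      % configuration with the same stress, seeding the next local search.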

  3. Concepts: Integrating population survey data from different spatial scales, sampling methods, and species

    Science.gov (United States)

    Dorazio, Robert; Delampady, Mohan; Dey, Soumen; Gopalaswamy, Arjun M.; Karanth, K. Ullas; Nichols, James D.

    2017-01-01

    Conservationists and managers are continually under pressure from the public, the media, and political policy makers to provide “tiger numbers,” not just for protected reserves, but also for large spatial scales, including landscapes, regions, states, nations, and even globally. Estimating the abundance of tigers within relatively small areas (e.g., protected reserves) is becoming increasingly tractable (see Chaps. 9 and 10), but doing so for larger spatial scales still presents a formidable challenge. Those who seek “tiger numbers” are often not satisfied by estimates of tiger occupancy alone, regardless of the reliability of the estimates (see Chaps. 4 and 5). As a result, wherever tiger conservation efforts are underway, either substantially or nominally, scientists and managers are frequently asked to provide putative large-scale tiger numbers based either on a total count or on an extrapolation of some sort (see Chaps. 1 and 2).

  4. Zero boil-off methods for large-scale liquid hydrogen tanks using integrated refrigeration and storage

    Science.gov (United States)

    Notardonato, W. U.; Swanger, A. M.; E Fesmire, J.; Jumper, K. M.; Johnson, W. L.; Tomsik, T. M.

    2017-12-01

    NASA has completed a series of tests at the Kennedy Space Center to demonstrate the capability of using integrated refrigeration and storage (IRAS) to remove energy from a liquid hydrogen (LH2) tank and control the state of the propellant. A primary test objective was the keeping and storing of the liquid in a zero boil-off state, so that the total heat leak entering the tank is removed by a cryogenic refrigerator with an internal heat exchanger. The LH2 is therefore stored and kept with zero losses for an indefinite period of time. The LH2 tank is a horizontal cylindrical geometry with a vacuum-jacketed, multilayer insulation system and a capacity of 125,000 liters. The closed-loop helium refrigeration system was a Linde LR1620 capable of 390W cooling at 20K (without any liquid nitrogen pre-cooling). Three different control methods were used to obtain zero boil-off: temperature control of the helium refrigerant, refrigerator control using the tank pressure sensor, and duty cycling (on/off) of the refrigerator as needed. Summarized are the IRAS design approach, zero boil-off control methods, and results of the series of zero boil-off tests.
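
    The third control method, duty cycling, amounts to bang-bang control on tank pressure; a sketch with invented thresholds (not the values used in the tests):

      def duty_cycle_step(p_tank, refrigerator_on, p_low=100.0, p_high=110.0):
          """Bang-bang duty cycling on tank pressure (kPa): run the
          refrigerator until pressure falls to p_low, idle until it climbs
          back to p_high. Thresholds here are illustrative only."""
          if refrigerator_on and p_tank <= p_low:
              return False          # tank cold enough: switch refrigerator off
          if not refrigerator_on and p_tank >= p_high:
              return True           # heat leak has raised pressure: switch on
          return refrigerator_on    # otherwise hold the current state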

  5. Integrals of Frullani type and the method of brackets

    Directory of Open Access Journals (Sweden)

    Bravo Sergio

    2017-01-01

    Full Text Available The method of brackets is a collection of heuristic rules, some of which have been made rigorous, that provide a flexible, direct method for the evaluation of definite integrals. The present work uses this method to establish classical formulas due to Frullani which provide values of a specific family of integrals. Some generalizations are established.
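
    For reference, the classical Frullani result in question (for suitable f with finite limits at 0 and infinity, and a, b > 0) is:

      \int_0^{\infty} \frac{f(ax) - f(bx)}{x}\, dx
          = \big( f(0) - f(\infty) \big) \ln\frac{b}{a},
      \qquad \text{e.g.}\qquad
      \int_0^{\infty} \frac{e^{-ax} - e^{-bx}}{x}\, dx = \ln\frac{b}{a}.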

  6. Alternative containment integrity test methods, an overview of possible techniques

    International Nuclear Information System (INIS)

    Spletzer, B.L.

    1986-01-01

    A study is being conducted to develop and analyze alternative methods for testing of containment integrity. The study is focused on techniques for continuously monitoring containment integrity to provide rapid detection of existing leaks, thus providing greater certainty of the integrity of the containment at any time. The study is also intended to develop techniques applicable to the currently required Type A integrated leakage rate tests. A brief discussion of the range of alternative methods currently being considered is presented. The methods include applicability to all major containment types, operating and shutdown plant conditions, and quantitative and qualitative leakage measurements. The techniques are analyzed in accordance with the current state of knowledge of each method. The bulk of the techniques discussed are in the conceptual stage, have not been tested in actual plant conditions, and are presented here as a possible future direction for evaluating containment integrity. Of the methods considered, no single method provides optimum performance for all containment types. Several methods are limited in the types of containment for which they are applicable. The results of the study to date indicate that techniques for continuous monitoring of containment integrity exist for many plants and may be implemented at modest cost

  7. Approximation of the exponential integral (well function) using sampling methods

    Science.gov (United States)

    Baalousha, Husam Musa

    2015-04-01

    Exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximate the exponential integral. The new approach is based on sampling methods. Three different sampling methods, Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained by the Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was of the order of 1E-08. This method can be used with any argument value, and can be used to solve other integrals in hydrogeology such as the leaky aquifer integral.
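
    A sketch of the sampling idea: after the substitution t = u - ln(1 - s), the well function becomes an integral over the unit interval, which can be estimated with one-dimensional Latin-hypercube-style stratified samples (the transformation and code are illustrative, not the authors' exact scheme):

      import numpy as np
      from scipy.special import exp1   # benchmark

      def well_function_lhs(u, n=10_000, rng=np.random.default_rng(1)):
          """W(u) = E1(u) via t = u - ln(1 - s), which maps the integral to
          E1(u) = e^{-u} * integral_0^1 ds / (u - ln(1 - s)), then one
          stratified (Latin-hypercube-style, 1-D) sample per stratum of s."""
          s = (np.arange(n) + rng.random(n)) / n
          return np.exp(-u) * np.mean(1.0 / (u - np.log1p(-s)))

      u = 0.5
      print(well_function_lhs(u), exp1(u))   # both ~0.5598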

  8. Modelling across bioreactor scales: methods, challenges and limitations

    DEFF Research Database (Denmark)

    Gernaey, Krist

    Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial-scale systems, considering that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial-scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what…

  9. Continuum Level Density in Complex Scaling Method

    International Nuclear Information System (INIS)

    Suzuki, R.; Myo, T.; Kato, K.

    2005-01-01

    A new calculational method of continuum level density (CLD) at unbound energies is studied in the complex scaling method (CSM). It is shown that the CLD can be calculated by employing the discretization of continuum states in the CSM without any smoothing technique
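
    For reference, the standard definition that the CSM discretization evaluates is:

      % Continuum level density as the difference of full and free level
      % densities, equivalently the derivative of the scattering phase shift:
      \Delta(E) = -\frac{1}{\pi}\,\mathrm{Im}\,\mathrm{Tr}
          \Big[ \frac{1}{E - H + i0} - \frac{1}{E - H_0 + i0} \Big]
          = \frac{1}{\pi} \frac{d\delta(E)}{dE}.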

  10. An Integrated Start-Up Method for Pumped Storage Units Based on a Novel Artificial Sheep Algorithm

    Directory of Open Access Journals (Sweden)

    Zanbin Wang

    2018-01-01

    Full Text Available Pumped storage units (PSUs) are an important storage tool for power systems containing large-scale renewable energy, and their rapid start-up enables PSUs to modulate and stabilize the power system. In this paper, PSU start-up strategies are studied and a new integrated start-up method is proposed for the purpose of achieving swift and smooth start-up. A two-phase closed-loop start-up strategy, composed of switching between Proportion Integration (PI) and Proportion Integration Differentiation (PID) controllers, is designed, and an integrated optimization scheme is proposed for synchronous optimization of the parameters in the strategy. To enhance the optimization performance, a novel meta-heuristic called the Artificial Sheep Algorithm (ASA) is proposed and applied to the optimization task after sufficient verification against seven popular meta-heuristic algorithms on 13 typical benchmark functions. A simulation model has been built for a Chinese PSU and comparative experiments are conducted to evaluate the proposed integrated method. Results show that start-up performance is significantly improved on both the overshoot and start-up time indices, with up to a 34% reduction in start-up time under different working conditions. The significant improvement in PSU start-up is interesting and meaningful for further application on real units.
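
    A two-phase switching start-up loop can be sketched generically; the gains, the 90% switching point and the control structure details below are invented, not the tuned values from the paper:

      def two_phase_startup(speed_pu, err_prev, integ, dt, phase):
          """Two-phase closed-loop start-up (sketch): PI on speed while the
          unit runs up, switching to PID near rated speed to damp the final
          approach to synchronization."""
          err = 1.0 - speed_pu                     # target is rated speed (1.0 pu)
          if phase == "PI" and speed_pu >= 0.9:    # switching criterion (assumed)
              phase = "PID"
          integ += err * dt
          if phase == "PI":
              u = 2.0 * err + 0.5 * integ
          else:
              u = 2.0 * err + 0.5 * integ + 0.1 * (err - err_prev) / dt
          return max(0.0, min(1.0, u)), integ, phase   # guide-vane opening in [0, 1]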

  11. Continuous integration congestion cost allocation based on sensitivity

    International Nuclear Information System (INIS)

    Wu, Z.Q.; Wang, Y.N.

    2004-01-01

    Congestion cost allocation is a very important topic in congestion management. Allocation methods based on the Aumann-Shapley value use discrete numerical integration, which needs to solve the incremental OPF solution many times, and as such is not suitable for practical application to large-scale systems. The optimal solution and its sensitivity change tendency during congestion removal using a DC optimal power flow (OPF) process is analysed. A simple continuous integration method based on the sensitivity is proposed for the congestion cost allocation. The proposed sensitivity analysis method needs less computation time than the approach based on the quadratic method and interior point iteration. The proposed congestion cost allocation method uses continuous rather than discrete numerical integration. The method does not need to solve the incremental OPF solutions, which allows its use in large-scale systems. The method can also be used for AC OPF congestion management. (author)
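
    For reference, the Aumann-Shapley allocation that both the discrete and the proposed continuous schemes evaluate prices each participant at the average marginal cost along the ray from zero to the actual usage vector d:

      % Aumann-Shapley unit price for participant i with usage vector d:
      \pi_i = \int_0^1 \frac{\partial C(\tau d)}{\partial d_i}\, d\tau ,
      \qquad \text{cost share of } i = d_i\, \pi_i .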

  12. Fuel pin integrity assessment under large scale transients

    International Nuclear Information System (INIS)

    Dutta, B.K.

    2006-01-01

    The integrity of fuel rods under normal, abnormal and accident conditions is an important consideration during fuel design of advanced nuclear reactors. The fuel matrix and the sheath form the first barrier to prevent the release of radioactive materials into the primary coolant. An understanding of the fuel and clad behaviour under different reactor conditions, particularly under the beyond-design-basis accident scenario leading to large scale transients, is always desirable to assess the inherent safety margins in fuel pin design and to plan for the mitigation of the consequences of accidents, if any. The severe accident conditions are typically characterized by energy deposition rates far exceeding the heat removal capability of the reactor coolant system. This may lead to clad failure due to fission gas pressure at high temperature, large-scale pellet-clad interaction and clad melting. The fuel rod performance is affected by many interdependent complex phenomena involving extremely complex material behaviour. The versatile experimental database available in this area has led to the development of powerful analytical tools to characterize fuel under extreme scenarios

  13. Direct integration multiple collision integral transport analysis method for high energy fusion neutronics

    International Nuclear Information System (INIS)

    Koch, K.R.

    1985-01-01

    A new analysis method specially suited for the inherent difficulties of fusion neutronics was developed to provide detailed studies of the fusion neutron transport physics. These studies should provide a better understanding of the limitations and accuracies of typical fusion neutronics calculations. The new analysis method is based on the direct integration of the integral form of the neutron transport equation and employs a continuous energy formulation with the exact treatment of the energy angle kinematics of the scattering process. In addition, the overall solution is analyzed in terms of uncollided, once-collided, and multi-collided solution components based on a multiple collision treatment. Furthermore, the numerical evaluations of integrals use quadrature schemes that are based on the actual dependencies exhibited in the integrands. The new DITRAN computer code was developed on the Cyber 205 vector supercomputer to implement this direct integration multiple-collision fusion neutronics analysis. Three representative fusion reactor models were devised and the solutions to these problems were studied to provide suitable choices for the numerical quadrature orders as well as the discretized solution grid and to understand the limitations of the new analysis method. As further verification and as a first step in assessing the accuracy of existing fusion-neutronics calculations, solutions obtained using the new analysis method were compared to typical multigroup discrete ordinates calculations
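
    The multiple-collision treatment mentioned here is a Neumann-series decomposition of the integral transport equation; schematically:

      % Multiple-collision (Neumann series) decomposition of the integral
      % transport equation phi = K phi + phi_0:
      \phi = \sum_{n=0}^{\infty} \phi_n , \qquad \phi_{n+1} = K\, \phi_n ,
      % where phi_0 is the uncollided flux from the source and K is the
      % integral transport (collision) operator.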

  14. Integrated modelling of nitrate loads to coastal waters and land rent applied to catchment scale water management

    DEFF Research Database (Denmark)

    Jacosen, T.; Refsgaard, A.; Jacobsen, Brian H.

    Abstract The EU WFD requires an integrated approach to river basin management in order to meet environmental and ecological objectives. This paper presents concepts and full-scale application of an integrated modelling framework. The Ringkoebing Fjord basin is characterized by intensive agricultural production, and leakage of nitrate constitutes a major pollution problem with respect to groundwater aquifers (drinking water), fresh surface water systems (water quality of lakes) and coastal receiving waters (eutrophication). The case study presented illustrates the potential and limitations of comprehensive, integrated modelling tools.

  15. Integrated modelling of nitrate loads to coastal waters and land rent applied to catchment-scale water management

    DEFF Research Database (Denmark)

    Refsgaard, A.; Jacobsen, T.; Jacobsen, Brian H.

    2007-01-01

    The EU Water Framework Directive (WFD) requires an integrated approach to river basin management in order to meet environmental and ecological objectives. This paper presents concepts and full-scale application of an integrated modelling framework. The Ringkoebing Fjord basin is characterized by intensive agricultural production, and leakage of nitrate constitutes a major pollution problem with respect to groundwater aquifers (drinking water), fresh surface water systems (water quality of lakes) and coastal receiving waters (eutrophication). The case study presented illustrates an advanced modelling approach and demonstrates the potential and limitations of comprehensive, integrated modelling tools.

  16. Hydrologic design methodologies for small-scale hydro at ungauged sites. Phase 2A, British Columbia. Primer for Integrated Method for Power Analysis (IMP)

    Energy Technology Data Exchange (ETDEWEB)

    1986-01-01

    This report provides a user's manual for the Integrated Method for Power Analysis (IMP) series of programs, developed to aid in evaluating small-scale hydroelectric power sites in British Columbia. The primary objective of IMP is to address the problem posed by hydroelectric development at sites completely lacking in streamflow records. The province's mountainous terrain, with numerous microclimates, precluded methods of regional analysis based on regression or comparisons with gauged streams; instead, a conceptual model of physical processes on the land and in the atmosphere was used as a basis for developing IMP. The IMP package consists of five basic components: an atmospheric model of annual precipitation and 1-in-10-year 24-h maximum rainfall; a watershed model that will generate a daily series of streamflow data; a flood frequency analysis system that uses site-specific topographical information and information from the atmospheric model to generate the flood frequency curve; a hydroelectric power simulation program which determines the daily energy output for a run-of-river or reservoir storage site; and a graphics analysis package which provides direct visualization of atmospheric and streamflow data, and the results of watershed and power simulation modelling. Within each component of the program, the program requirements and capabilities are discussed. A series of sample screens are shown, illustrating the use of the IMP package for a particular simulation. 18 figs., 2 tabs.
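
    The watershed component above is a conceptual physical-process model rather than a regression against gauged streams. As a rough illustration of the kind of daily water-balance bookkeeping such a model performs (a minimal sketch: the melt factor, storage coefficient, and structure below are invented for illustration and are not IMP's calibrated formulation):

        import numpy as np

        def daily_streamflow(precip_mm, temp_c, melt_factor=3.0, storage_k=0.05):
            """Toy conceptual watershed model: snow accumulates below freezing,
            degree-day melt and rain feed a single linear storage reservoir
            that drains to streamflow. Parameters are illustrative only."""
            snow, storage = 0.0, 0.0
            flow = np.zeros(len(precip_mm))
            for i, (p, t) in enumerate(zip(precip_mm, temp_c)):
                if t <= 0.0:
                    snow += p                          # precipitation falls as snow
                    melt = 0.0
                else:
                    melt = min(snow, melt_factor * t)  # degree-day snowmelt
                    snow -= melt
                    storage += p                       # rain enters the soil store
                storage += melt
                flow[i] = storage_k * storage          # linear reservoir outflow (mm/day)
                storage -= flow[i]
            return flow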

  17. Achieving Integration in Mixed Methods Designs—Principles and Practices

    OpenAIRE

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-01-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participato...

  18. How Does Scale of Implementation Impact the Environmental Sustainability of Wastewater Treatment Integrated with Resource Recovery?

    Science.gov (United States)

    Cornejo, Pablo K; Zhang, Qiong; Mihelcic, James R

    2016-07-05

    Energy and resource consumptions required to treat and transport wastewater have led to efforts to improve the environmental sustainability of wastewater treatment plants (WWTPs). Resource recovery can reduce the environmental impact of these systems; however, limited research has considered how the scale of implementation impacts the sustainability of WWTPs integrated with resource recovery. Accordingly, this research uses life cycle assessment (LCA) to evaluate how the scale of implementation impacts the environmental sustainability of wastewater treatment integrated with water reuse, energy recovery, and nutrient recycling. Three systems were selected: a septic tank with aerobic treatment at the household scale, an advanced water reclamation facility at the community scale, and an advanced water reclamation facility at the city scale. Three sustainability indicators were considered: embodied energy, carbon footprint, and eutrophication potential. This study determined that as with economies of scale, there are benefits to centralization of WWTPs with resource recovery in terms of embodied energy and carbon footprint; however, the community scale was shown to have the lowest eutrophication potential. Additionally, technology selection, nutrient control practices, system layout, and topographical conditions may have a larger impact on environmental sustainability than the implementation scale in some cases.

  19. Elements of a method to scale ignition reactor Tokamak

    International Nuclear Information System (INIS)

    Cotsaftis, M.

    1984-08-01

    Due to unavoidable uncertainties in present scaling laws when projected to the thermonuclear regime, a method is proposed to minimize these uncertainties in order to determine the main parameters of an ignited tokamak. The method mainly consists in searching for a domain, if any, in an adapted parameter space which allows ignition but is least sensitive to possible changes in the scaling laws. In other words, the ignition domain sought is the intersection of all possible ignition domains corresponding to all possible scaling laws produced by all possible transports.
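
    A minimal sketch of the intersection idea: evaluate each candidate scaling law on a grid of machine parameters, mark where each predicts ignition, and keep only the points on which every law agrees. The parameter choices and power-law exponents below are invented placeholders, not the paper's actual scalings:

        import numpy as np

        # Grid of adapted parameters: density n (1e20 m^-3) and plasma current I (MA).
        n = np.linspace(0.5, 3.0, 200)
        I = np.linspace(5.0, 25.0, 200)
        N, Icur = np.meshgrid(n, I)

        # Hypothetical confinement-time scalings tau(n, I); exponents are placeholders.
        scaling_laws = [
            lambda n, i: 0.05 * n**0.4 * i**1.0,
            lambda n, i: 0.04 * n**0.6 * i**0.9,
            lambda n, i: 0.06 * n**0.3 * i**1.1,
        ]

        tau_ignition = 1.0  # required confinement time (s), illustrative threshold
        domains = [law(N, Icur) >= tau_ignition for law in scaling_laws]

        # Robust ignition domain: points ignited under *every* candidate scaling law.
        robust = np.logical_and.reduce(domains)
        print(f"{robust.mean():.1%} of the grid ignites under all scaling laws")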

  20. Towards integrated modelling of soil organic carbon cycling at landscape scale

    Science.gov (United States)

    Viaud, V.

    2009-04-01

    Soil organic carbon (SOC) is recognized as a key factor in the chemical, biological and physical quality of soil. Numerous models of soil organic matter turnover have been developed since the 1930s, most of them dedicated to plot-scale applications. More recently, they have been applied at national scales to establish the inventories of carbon stocks required by the Kyoto protocol. However, only few studies consider the intermediate landscape scale, where the spatio-temporal pattern of land management practices, its interactions with the physical environment and its impacts on SOC dynamics can be investigated to provide guidelines for sustainable management of soils in agricultural areas. Modelling SOC cycling at this scale requires access to accurate, spatially explicit input data on soils (SOC content, bulk density, depth, texture) and land use (land cover, farm practices), and combining both kinds of data in a relevant integrated landscape representation. The purpose of this paper is to present a first approach to modelling SOC evolution in a small catchment. The impact of the way landscape is represented on SOC stocks in the catchment was more specifically addressed. This study was based on the field map, the soil survey, the crop rotations and land management practices of an actual 10-km² agricultural catchment located in Brittany (France). The RothC model was used to drive soil organic matter dynamics. Landscape representation in the form of a systematic regular grid, where driving properties vary continuously in space, was compared to a representation where the landscape is subdivided into a set of homogeneous geographical units. This preliminary work made it possible to identify future needs for improving integrated soil-landscape modelling in agricultural areas.
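
    RothC, used above to drive the SOC dynamics, belongs to the family of multi-pool models in which each carbon pool decays at a first-order rate modified by climate and cover factors. A minimal sketch of that core mechanism (the pool names follow RothC, but the rate constants, the lumped rate modifier, and the omission of re-partitioning of decomposed carbon are illustrative simplifications, not a calibrated parameterization):

        import numpy as np

        # RothC-style first-order decay of carbon pools (t C/ha), monthly step.
        pools = {"DPM": 0.5, "RPM": 5.0, "BIO": 1.0, "HUM": 30.0}
        k = {"DPM": 10.0, "RPM": 0.3, "BIO": 0.66, "HUM": 0.02}  # nominal rates, 1/yr

        def step_month(pools, rate_modifier=1.0):
            """Advance each pool one month: C <- C * exp(-k * m * dt).
            rate_modifier lumps the temperature/moisture/cover factors;
            decomposed C is simply lost here (no re-partitioning to BIO/HUM)."""
            dt = 1.0 / 12.0
            return {p: c * np.exp(-k[p] * rate_modifier * dt) for p, c in pools.items()}

        for month in range(12):
            pools = step_month(pools, rate_modifier=1.2)
        print({p: round(c, 2) for p, c in pools.items()})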

  1. Obtaining bixin from semi-defatted annatto seeds by a mechanical method and solvent extraction: Process integration and economic evaluation.

    Science.gov (United States)

    Alcázar-Alay, Sylvia C; Osorio-Tobón, J Felipe; Forster-Carneiro, Tânia; Meireles, M Angela A

    2017-09-01

    This work involves the application of physical separation methods to concentrate the pigment of semi-defatted annatto seeds, a noble vegetal biomass rich in bixin pigments. Semi-defatted annatto seeds are the residue produced after the extraction of the lipid fraction from annatto seeds using supercritical fluid extraction (SFE). Semi-defatted annatto seeds are used in this work for three important reasons: i) prior lipid extraction is necessary to recover the tocotrienol-rich oil present in the annatto seeds; ii) initial removal of the oil via the SFE process favors bixin separation; and iii) the cost of the raw material is nil. Physical methods including i) the mechanical fractionation method and ii) an integrated process of the mechanical fractionation method and low-pressure solvent extraction (LPSE) were studied. The integrated process was proposed for processing two different semi-defatted annatto materials, denoted Batches 1 and 2. The cost of manufacture (COM) was calculated for two different production scales (5 and 50L), considering the integrated process vs. only the mechanical fractionation method. The integrated process showed a significantly higher COM than the mechanical fractionation method. This work suggests that the mechanical fractionation method is an adequate and low-cost process for obtaining a pigment-rich product from semi-defatted annatto seeds. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Structural Integrity in Measures of Self Concept.

    Science.gov (United States)

    Stenner, A. Jackson; Katzenmeyer, W.G.

    Structural integrity of a measure is defined in terms of its replicability, constancy, invariance, and stability. Work completed in the development and validation of the Self Observation Scales (SOS) Primary Level (Stenner and Katzenmeyer, 1973) serves to illustrate one method of establishing structural integrity. The name of each scale of the SOS…

  3. [Development method of healthcare information system integration based on business collaboration model].

    Science.gov (United States)

    Li, Shasha; Nie, Hongchao; Lu, Xudong; Duan, Huilong

    2015-02-01

    Integration of heterogeneous systems is the key to hospital information construction due to the complexity of the healthcare environment. Currently, during the process of healthcare information system integration, people participating in an integration project usually communicate through free-format documents, which impairs the efficiency and adaptability of integration. A method utilizing business process model and notation (BPMN) to model integration requirements and automatically transform them into an executable integration configuration was proposed in this paper. Based on the method, a tool was developed to model integration requirements and transform them into integration configurations. In addition, an integration case in a radiology scenario was used to verify the method.

  4. Explicit integration of extremely stiff reaction networks: partial equilibrium methods

    International Nuclear Information System (INIS)

    Guidry, M W; Hix, W R; Billings, J J

    2013-01-01

    In two preceding papers (Guidry et al 2013 Comput. Sci. Disc. 6 015001 and Guidry and Harris 2013 Comput. Sci. Disc. 6 015002), we have shown that when reaction networks are well removed from equilibrium, explicit asymptotic and quasi-steady-state approximations can give algebraically stabilized integration schemes that rival standard implicit methods in accuracy and speed for extremely stiff systems. However, we also showed that these explicit methods remain accurate but are no longer competitive in speed as the network approaches equilibrium. In this paper, we analyze this failure and show that it is associated with the presence of fast equilibration timescales that neither asymptotic nor quasi-steady-state approximations are able to remove efficiently from the numerical integration. Based on this understanding, we develop a partial equilibrium method to deal effectively with the approach to equilibrium and show that explicit asymptotic methods, combined with the new partial equilibrium methods, give an integration scheme that can plausibly deal with the stiffest networks, even in the approach to equilibrium, with accuracy and speed competitive with that of implicit methods. Thus we demonstrate that such explicit methods may offer alternatives to implicit integration of even extremely stiff systems and that these methods may permit integration of much larger networks than have been possible before in a number of fields. (paper)
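
    For orientation, the explicit asymptotic approximation referred to above replaces the forward-Euler update with an algebraically stabilized one when the destruction timescale becomes short. A minimal single-species sketch for dy/dt = F⁺ − k·y (the switching threshold and test problem are illustrative, not taken from the papers cited):

        import numpy as np

        def asymptotic_step(y, dt, f_plus, k, stiff_threshold=1.0):
            """One explicit step of dy/dt = f_plus - k*y.
            If dt*k is small the equation is non-stiff and forward Euler is fine;
            otherwise use the algebraically stabilized asymptotic update,
            which remains stable for arbitrarily large dt*k."""
            if dt * k < stiff_threshold:
                return y + dt * (f_plus - k * y)          # forward Euler
            return (y + dt * f_plus) / (1.0 + dt * k)     # asymptotic approximation

        # Stiff test problem: k = 1e8, equilibrium y_eq = f_plus/k = 1e-6.
        y, dt = 0.0, 1e-4   # dt*k = 1e4 >> 1: forward Euler would explode
        for _ in range(100):
            y = asymptotic_step(y, dt, f_plus=100.0, k=1e8)
        print(y)  # ~1e-6, the correct equilibrium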

  5. Operation Modeling of Power Systems Integrated with Large-Scale New Energy Power Sources

    Directory of Open Access Journals (Sweden)

    Hui Li

    2016-10-01

    In most current methods of probabilistic power system production simulation, the output characteristics of new energy power generation (NEPG) have not been comprehensively considered. In this paper, the power output characteristics of wind power generation and photovoltaic power generation are first analyzed based on statistical methods according to their historical operating data. Then the characteristic indexes and the filtering principle of the NEPG historical output scenarios are introduced with the confidence level, and the calculation model of NEPG's credible capacity is proposed. Based on this, taking the minimum production costs or the best energy-saving and emission-reduction effect as the optimization objective, the power system operation model with large-scale integration of new energy power generation (NEPG) is established considering the power balance, the electricity balance and the peak balance. Besides, the constraints of the operating characteristics of different power generation types, the maintenance schedule, the load reservation, the emergency reservation, the water abandonment and the transmitting capacity between different areas are also considered. With the proposed power system operation model, operation simulations are carried out based on the actual Northwest power grid of China, which resolves new energy power accommodation under different system operating conditions. The simulation results verify the validity of the proposed power system operation model in the accommodation analysis for a power system penetrated with large-scale NEPG.

  6. The linearly scaling 3D fragment method for large scale electronic structure calculations

    Energy Technology Data Exchange (ETDEWEB)

    Zhao Zhengji [National Energy Research Scientific Computing Center (NERSC) (United States); Meza, Juan; Shan Hongzhang; Strohmaier, Erich; Bailey, David; Wang Linwang [Computational Research Division, Lawrence Berkeley National Laboratory (United States); Lee, Byounghak, E-mail: ZZhao@lbl.go [Physics Department, Texas State University (United States)

    2009-07-01

    The linearly scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.

  7. Integrated Data Collection Analysis (IDCA) Program - SSST Testing Methods

    Energy Technology Data Exchange (ETDEWEB)

    Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Geoffrey W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Preston, Daniel N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pollard, Colin J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Remmers, Daniel L. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC), Indian Head, MD (United States). Indian Head Division; Whinnery, LeRoy L. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Phillips, Jason J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Shelley, Timothy J. [Bureau of Alcohol, Tobacco and Firearms (ATF), Huntsville, AL (United States); Reyes, Jose A. [Applied Research Associates, Tyndall AFB, FL (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-03-25

    The Integrated Data Collection Analysis (IDCA) program is conducting a proficiency study for Small- Scale Safety and Thermal (SSST) testing of homemade explosives (HMEs). Described here are the methods used for impact, friction, electrostatic discharge, and differential scanning calorimetry analysis during the IDCA program. These methods changed throughout the Proficiency Test and the reasons for these changes are documented in this report. The most significant modifications in standard testing methods are: 1) including one specified sandpaper in impact testing among all the participants, 2) diversifying liquid test methods for selected participants, and 3) including sealed sample holders for thermal testing by at least one participant. This effort, funded by the Department of Homeland Security (DHS), is putting the issues of safe handling of these materials in perspective with standard military explosives. The study is adding SSST testing results for a broad suite of different HMEs to the literature. Ultimately the study will suggest new guidelines and methods and possibly establish the SSST testing accuracies needed to develop safe handling practices for HMEs. Each participating testing laboratory uses identical test materials and preparation methods wherever possible. The testing performers involved are Lawrence Livermore National Laboratory (LLNL), Los Alamos National Laboratory (LANL), Indian Head Division, Naval Surface Warfare Center, (NSWC IHD), Sandia National Laboratories (SNL), and Air Force Research Laboratory (AFRL/RXQL). These tests are conducted as a proficiency study in order to establish some consistency in test protocols, procedures, and experiments and to compare results when these testing variables cannot be made consistent.

  8. An integral nodal variational method for multigroup criticality calculations

    International Nuclear Information System (INIS)

    Lewis, E.E.; Tsoulfanidis, N.

    2003-01-01

    An integral formulation of the variational nodal method is presented and applied to a series of benchmark criticality problems. The method combines an integral transport treatment of the even-parity flux within the spatial node with an odd-parity spherical harmonics expansion of the Lagrange multipliers at the node interfaces. The response matrices that result from this formulation are compatible with those in the VARIANT code at Argonne National Laboratory. Either homogeneous or heterogeneous nodes may be employed. In general, for calculations requiring higher-order angular approximations, the integral method yields solutions with comparable accuracy while requiring substantially less CPU time and memory than the standard spherical harmonics expansion using the same spatial approximations. (author)

  9. Economies of scale and vertical integration in the investor-owned electric utility industry

    International Nuclear Information System (INIS)

    Thompson, H.G.; Islam, M.; Rose, K.

    1996-01-01

    This report analyzes the nature of costs in a vertically integrated electric utility. The findings provide new insights into the operations of the vertically integrated electric utility and support earlier research on economies of scale and density; the results also provide insights for policy makers dealing with electric industry restructuring issues such as competitive structure and mergers. Overall, results indicate that for most firms in the industry, average costs would not be reduced through expansion of generation, numbers of customers, or the delivery system. Evidently, the combination of benefits from large-scale technologies, managerial experience, coordination, or load diversity has been exhausted by the larger firms in the industry; however, many firms would benefit from reducing their generation-to-sales ratio and by increasing sales to their existing customer base. Three cost models were used in the analysis.

  10. Quantifying the Impacts of Large Scale Integration of Renewables in Indian Power Sector

    Science.gov (United States)

    Kumar, P.; Mishra, T.; Banerjee, R.

    2017-12-01

    India's power sector is responsible for nearly 37 percent of India's greenhouse gas emissions. For a fast-emerging economy like India, whose population and energy consumption are poised to rise rapidly in the coming decades, renewable energy can play a vital role in decarbonizing the power sector. In this context, India has targeted a 33-35 percent emission intensity reduction (with respect to 2005 levels) along with large-scale renewable energy targets (100GW solar, 60GW wind, and 10GW biomass energy by 2022) in the INDCs submitted under the Paris agreement. But large-scale integration of renewable energy is a complex process which faces a number of problems, such as capital intensiveness, the matching of intermittent generation to loads with minimal storage capacity, and reliability. In this context, this study attempts to assess the technical feasibility of integrating renewables into the Indian electricity mix by 2022 and analyze the implications for power sector operations. This study uses TIMES, a bottom-up energy optimization model with unit commitment and dispatch features. We model coal and gas fired units discretely with region-wise representation of wind and solar resources. The dispatch features are used for operational analysis of power plant units under ramp rate and minimum generation constraints. The study analyzes India's electricity sector transition for the year 2022 with three scenarios. The base case scenario (no RE addition) along with an INDC scenario (with 100GW solar, 60GW wind, 10GW biomass) and a low RE scenario (50GW solar, 30GW wind) have been created to analyze the implications of large-scale integration of variable renewable energy. The results provide insights on the trade-offs involved in achieving mitigation targets and the investment decisions required. The study also examines operational reliability and flexibility requirements of the system for integrating renewables.

  11. The influence of a scaled boundary response on integral system transient behavior

    International Nuclear Information System (INIS)

    Dimenna, R.A.; Kullberg, C.M.

    1989-01-01

    Scaling relationships associated with the thermal-hydraulic response of a closed-loop system are applied to a calculational assessment of a feed-and-bleed recovery in a nuclear reactor integral effects test. The analysis demonstrates both the influence of scale on the system response and the ability of the thermal-hydraulics code to represent those effects. The qualitative response of the fluid is shown to be coupled to the behavior of the bounding walls through the energy equation. The results of the analysis described in this paper influence the determination of computer code applicability. The sensitivity of the code response to scaling variations introduced in the analysis is found to be appropriate with respect to scaling criteria determined from the scaling literature. Differences in the system response associated with different scaling criteria are found to be plausible and easily explained using well-known principles of heat transfer. Therefore, it is concluded that RELAP5/MOD2 can adequately represent the scaled effects of heat transfer boundary conditions of the thermal-hydraulic calculations through the mechanism of communicating walls. The results of the analysis also serve to clarify certain aspects of experiment and facility design

  12. Automating large-scale reactor systems

    International Nuclear Information System (INIS)

    Kisner, R.A.

    1985-01-01

    This paper conveys a philosophy for developing automated large-scale control systems that behave in an integrated, intelligent, flexible manner. Methods for operating large-scale systems under varying degrees of equipment degradation are discussed, and a design approach that separates the effort into phases is suggested. 5 refs., 1 fig

  13. Semantic Representation and Scale-Up of Integrated Air Traffic Management Data

    Science.gov (United States)

    Keller, Richard M.; Ranjan, Shubha; Wei, Mie; Eshow, Michelle

    2016-01-01

    Each day, the global air transportation industry generates a vast amount of heterogeneous data from air carriers, air traffic control providers, and secondary aviation entities handling baggage, ticketing, catering, fuel delivery, and other services. Generally, these data are stored in isolated data systems, separated from each other by significant political, regulatory, economic, and technological divides. These realities aside, integrating aviation data into a single, queryable, big data store could enable insights leading to major efficiency, safety, and cost advantages. In this paper, we describe an implemented system for combining heterogeneous air traffic management data using semantic integration techniques. The system transforms data from its original disparate source formats into a unified semantic representation within an ontology-based triple store. Our initial prototype stores only a small sliver of air traffic data covering one day of operations at a major airport. The paper also describes our analysis of difficulties ahead as we prepare to scale up data storage to accommodate successively larger quantities of data -- eventually covering all US commercial domestic flights over an extended multi-year timeframe. We review several approaches to mitigating scale-up related query performance concerns.

  14. Large-scale building integrated photovoltaics field trial. First technical report - installation phase

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2004-07-01

    This report summarises the results of the first eighteen months of the Large-Scale Building Integrated Photovoltaic Field Trial focussing on technical aspects. The project aims included increasing awareness and application of the technology, raising the UK capabilities in application of the technology, and assessing the potential for building integrated photovoltaics (BIPV). Details are given of technology choices; project organisation, cost, and status; and the evaluation criteria. Installations of BIPV described include University buildings, commercial centres, and a sports stadium, wildlife park, church hall, and district council building. Lessons learnt are discussed, and a further report covering monitoring aspects is planned.

  15. A dynamic integrated fault diagnosis method for power transformers.

    Science.gov (United States)

    Gao, Wensheng; Bai, Cuifen; Liu, Tong

    2015-01-01

    In order to diagnose transformer fault efficiently and accurately, a dynamic integrated fault diagnosis method based on Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationship among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most possible failure mode. And then considering the evidence input into the diagnosis model is gradually acquired and the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified.

  16. A Dynamic Integrated Fault Diagnosis Method for Power Transformers

    Science.gov (United States)

    Gao, Wensheng; Liu, Tong

    2015-01-01

    In order to diagnose transformer fault efficiently and accurately, a dynamic integrated fault diagnosis method based on Bayesian network is proposed in this paper. First, an integrated fault diagnosis model is established based on the causal relationship among abnormal working conditions, failure modes, and failure symptoms of transformers, aimed at obtaining the most possible failure mode. And then considering the evidence input into the diagnosis model is gradually acquired and the fault diagnosis process in reality is multistep, a dynamic fault diagnosis mechanism is proposed based on the integrated fault diagnosis model. Different from the existing one-step diagnosis mechanism, it includes a multistep evidence-selection process, which gives the most effective diagnostic test to be performed in next step. Therefore, it can reduce unnecessary diagnostic tests and improve the accuracy and efficiency of diagnosis. Finally, the dynamic integrated fault diagnosis method is applied to actual cases, and the validity of this method is verified. PMID:25685841

  17. 14th International Conference on Integral Methods in Science and Engineering

    CERN Document Server

    Riva, Matteo; Lamberti, Pier; Musolino, Paolo

    2017-01-01

    This contributed volume contains a collection of articles on the most recent advances in integral methods.  The first of two volumes, this work focuses on the construction of theoretical integral methods. Written by internationally recognized researchers, the chapters in this book are based on talks given at the Fourteenth International Conference on Integral Methods in Science and Engineering, held July 25-29, 2016, in Padova, Italy. A broad range of topics is addressed, such as: • Integral equations • Homogenization • Duality methods • Optimal design • Conformal techniques This collection will be of interest to researchers in applied mathematics, physics, and mechanical and electrical engineering, as well as graduate students in these disciplines, and to other professionals who use integration as an essential tool in their work.

  18. Cultural adaptation and translation of measures: an integrated method.

    Science.gov (United States)

    Sidani, Souraya; Guruge, Sepali; Miranda, Joyal; Ford-Gilboe, Marilyn; Varcoe, Colleen

    2010-04-01

    Differences in the conceptualization and operationalization of health-related concepts may exist across cultures. Such differences underscore the importance of examining conceptual equivalence when adapting and translating instruments. In this article, we describe an integrated method for exploring conceptual equivalence within the process of adapting and translating measures. The integrated method involves five phases including selection of instruments for cultural adaptation and translation; assessment of conceptual equivalence, leading to the generation of a set of items deemed to be culturally and linguistically appropriate to assess the concept of interest in the target community; forward translation; back translation (optional); and pre-testing of the set of items. Strengths and limitations of the proposed integrated method are discussed. (c) 2010 Wiley Periodicals, Inc.

  19. Reframed Genome-Scale Metabolic Model to Facilitate Genetic Design and Integration with Expression Data.

    Science.gov (United States)

    Gu, Deqing; Jian, Xingxing; Zhang, Cheng; Hua, Qiang

    2017-01-01

    Genome-scale metabolic network models (GEMs) have played important roles in the design of genetically engineered strains and helped biologists to decipher metabolism. However, due to the complex gene-reaction relationships that exist in model systems, most algorithms have limited capabilities with respect to directly predicting accurate genetic design for metabolic engineering. In particular, methods that predict reaction knockout strategies leading to overproduction are often impractical in terms of gene manipulations. Recently, we proposed a method named logical transformation of model (LTM) to simplify the gene-reaction associations by introducing intermediate pseudo reactions, which makes it possible to generate genetic design. Here, we propose an alternative method to relieve researchers from deciphering complex gene-reactions by adding pseudo gene controlling reactions. In comparison to LTM, this new method introduces fewer pseudo reactions and generates a much smaller model system named as gModel. We showed that gModel allows two seldom reported applications: identification of minimal genomes and design of minimal cell factories within a modified OptKnock framework. In addition, gModel could be used to integrate expression data directly and improve the performance of the E-Fmin method for predicting fluxes. In conclusion, the model transformation procedure will facilitate genetic research based on GEMs, extending their applications.

  20. ARE METHODS USED TO INTEGRATE STANDARDIZED MANAGEMENT SYSTEMS A CONDITIONING FACTOR OF THE LEVEL OF INTEGRATION? AN EMPIRICAL STUDY

    Directory of Open Access Journals (Sweden)

    Merce Bernardo

    2011-09-01

    Organizations are increasingly implementing multiple Management System Standards (MSSs) and considering managing the related Management Systems (MSs) as a single system. The aim of this paper is to analyze whether the methods used to integrate standardized MSs condition the level of integration of those MSs. A descriptive methodology has been applied to 343 Spanish organizations registered to at least ISO 9001 and ISO 14001. Seven groups of these organizations, using different combinations of methods, have been analyzed. Results show that these organizations have a high level of integration of their MSs. The most common method used was the process map. Organizations using a combination of different methods achieve higher levels of integration than those using a single method. However, no evidence has been found to confirm the relationship between the method used and the integration level achieved.

  1. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    DEFF Research Database (Denmark)

    King, Zachary A.; Lu, Justin; Dräger, Andreas

    2016-01-01

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repo...

  2. INTEGRATED FUSION METHOD FOR MULTIPLE TEMPORAL-SPATIAL-SPECTRAL IMAGES

    Directory of Open Access Journals (Sweden)

    H. Shen

    2012-08-01

    Data fusion techniques have been widely researched and applied in the remote sensing field. In this paper, an integrated fusion method for remotely sensed images is presented. Unlike existing methods, the proposed method can integrate the complementary information in multiple temporal-spatial-spectral images. In order to represent and process the images in one unified framework, two general image observation models are first presented, and then the maximum a posteriori (MAP) framework is used to set up the fusion model. The gradient descent method is employed to solve for the fused image. The efficacy of the proposed method is validated using simulated images.
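
    As a rough sketch of the MAP-plus-gradient-descent machinery (a minimal version assuming identity observation operators and a quadratic smoothness prior; the paper's actual formulation uses two general temporal-spatial-spectral observation models):

        import numpy as np

        def fuse_map(observations, lam=0.1, step=0.1, iters=200):
            """Fuse several noisy co-registered images y_k into one image x by
            gradient descent on E(x) = sum_k ||x - y_k||^2 + lam * ||grad x||^2.
            (Identity observation operators for simplicity.)"""
            x = np.mean(observations, axis=0)          # initialize with the mean
            for _ in range(iters):
                data_grad = sum(x - y for y in observations)
                # Discrete Laplacian: the gradient of the smoothness prior is -2*lap(x)
                lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
                       np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
                x -= step * (2 * data_grad - 2 * lam * lap)
            return x

        rng = np.random.default_rng(0)
        truth = rng.random((32, 32))
        obs = [truth + 0.1 * rng.standard_normal(truth.shape) for _ in range(3)]
        fused = fuse_map(obs)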

  3. Progress on scaling up integrated services for sexual and reproductive health and HIV.

    Science.gov (United States)

    Dickinson, Clare; Attawell, Kathy; Druce, Nel

    2009-11-01

    This paper considers new developments to strengthen sexual and reproductive health and HIV linkages and discusses factors that continue to impede progress. It is based on a previous review undertaken for the United Kingdom Department for International Development in 2006 that examined the constraints and opportunities to scaling up these linkages. We argue that, despite growing evidence that linking sexual and reproductive health and HIV is feasible and beneficial, few countries have achieved significant scale-up of integrated service provision. A lack of common understanding of terminology and clear technical operational guidance, and separate policy, institutional and financing processes continue to represent significant constraints. We draw on experience with tuberculosis and HIV integration to highlight some lessons. The paper concludes that there is little evidence to determine whether funding for health systems is strengthening linkages and we make several recommendations to maximize opportunities represented by recent developments.

  4. Scale factor measure method without turntable for angular rate gyroscope

    Science.gov (United States)

    Qi, Fangyi; Han, Xuefei; Yao, Yanqing; Xiong, Yuting; Huang, Yuqiong; Wang, Hua

    2018-03-01

    In this paper, a scale factor test method without a turntable is newly designed for the angular rate gyroscope. A test system consisting of a test device, a data acquisition circuit and data processing software based on the Labview platform is designed. Taking advantage of the gyroscope's sensitivity to angular rate, a gyroscope with a known scale factor serves as a standard gyroscope. The standard gyroscope is installed on the test device together with the measured gyroscope. By shaking the test device around the edge that is parallel to the input axis of the gyroscopes, the scale factor of the measured gyroscope can be obtained in real time by the data processing software. This test method is fast, and it keeps the test system miniaturized and easy to carry or move. Measuring a quartz MEMS gyroscope's scale factor multiple times by this method, the difference is less than 0.2%. Compared with testing on a turntable, the scale factor difference is less than 1%. The accuracy and repeatability of the test system appear good.
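
    The essential computation is simple: because both gyroscopes experience the same angular rate, the reference gyroscope's output converts each sample to a true rate, and the measured gyroscope's scale factor falls out of a least-squares fit. A minimal sketch (the variable names, zero-bias assumption, and synthetic shake signal are illustrative, not the paper's implementation):

        import numpy as np

        def estimate_scale_factor(v_ref, v_meas, sf_ref):
            """Estimate the scale factor of a gyro co-mounted with a reference gyro.
            v_ref, v_meas: output samples (e.g. volts) recorded while the shared
            fixture is shaken about the common input axis; sf_ref: known scale
            factor of the reference gyro (volts per deg/s)."""
            rate = v_ref / sf_ref                 # true angular rate from reference
            # Least-squares slope of v_meas vs rate, forced through zero
            # (assumes both outputs are already bias-compensated).
            return np.dot(rate, v_meas) / np.dot(rate, rate)

        # Synthetic shake: sinusoidal rate, reference SF = 0.02 V/(deg/s)
        t = np.linspace(0, 2, 2000)
        rate_true = 50 * np.sin(2 * np.pi * 3 * t)            # deg/s
        v_ref = 0.02 * rate_true + 1e-4 * np.random.randn(t.size)
        v_meas = 0.035 * rate_true + 1e-4 * np.random.randn(t.size)
        print(estimate_scale_factor(v_ref, v_meas, sf_ref=0.02))  # ~0.035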

  5. Application of heat-balance integral method to conjugate thermal explosion

    Directory of Open Access Journals (Sweden)

    Novozhilov Vasily

    2009-01-01

    Conjugate thermal explosion is an extension of the classical theory, proposed and studied recently by the author. The paper reports the application of the heat-balance integral method for developing phase portraits for systems undergoing conjugate thermal explosion. The heat-balance integral method is used as an averaging method, reducing the partial differential equation problem to a set of first-order ordinary differential equations. The latter reduced problem allows a natural interpretation in an appropriately chosen phase space. It is shown that, with the help of the heat-balance integral technique, the conjugate thermal explosion problem can be described with good accuracy by a set of non-linear first-order differential equations involving the complex error function. Phase trajectories are presented for typical regimes emerging in conjugate thermal explosion. Use of the heat-balance integral as a spatial averaging method allows an efficient description of system evolution to be developed.
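
    For readers unfamiliar with the heat-balance integral idea, Goodman's textbook treatment of a semi-infinite solid with a suddenly heated surface shows how the averaging collapses the heat equation T_t = α T_xx into a single ODE (this classical case is given for orientation only; it is not the conjugate thermal explosion formulation itself). Assuming a quadratic profile over a penetration depth δ(t),

        T(x,t) \approx T_s \left(1 - \frac{x}{\delta(t)}\right)^2, \qquad
        \int_0^{\delta} T \, dx = \frac{T_s \, \delta}{3},

    the integrated heat equation gives

        \frac{T_s}{3} \frac{d\delta}{dt} = \frac{2 \alpha T_s}{\delta}
        \;\Longrightarrow\;
        \delta(t) = \sqrt{12\,\alpha t}, \qquad
        q(0,t) = \frac{2 k T_s}{\delta} = \frac{k T_s}{\sqrt{3\,\alpha t}},

    a surface heat flux within about 2% of the exact similarity solution k T_s / \sqrt{\pi \alpha t}.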

  6. Silicon hybrid integration

    International Nuclear Information System (INIS)

    Li Xianyao; Yuan Taonu; Shao Shiqian; Shi Zujun; Wang Yi; Yu Yude; Yu Jinzhong

    2011-01-01

    Recently, much attention has been concentrated on silicon-based photonic integrated circuits (PICs), which provide a cost-effective solution for high-speed, wide-bandwidth optical interconnection and optical communication. To integrate III-V compound and germanium semiconductors on silicon substrates, at present there are two kinds of manufacturing methods, i.e., heteroepitaxy and bonding. Low-temperature wafer bonding, which can overcome the high growth temperature, lattice mismatch, and incompatibility of thermal expansion coefficients encountered during heteroepitaxy, has offered the possibility of large-scale heterogeneous integration. In this paper, several commonly used bonding methods are reviewed, and the future trends of low-temperature wafer bonding are envisaged. (authors)

  7. An integrated lean-methods approach to hospital facilities redesign.

    Science.gov (United States)

    Nicholas, John

    2012-01-01

    Lean production methods for eliminating waste and improving processes in manufacturing are now being applied in healthcare. As the author shows, the methods are appropriate for redesigning hospital facilities. When used in an integrated manner and employing teams of mostly clinicians, the methods produce facility designs that are custom-fit to patient needs and caregiver work processes, and reduce operational costs. The author reviews lean methods and an approach for integrating them in the redesign of hospital facilities. A case example of the redesign of an emergency department shows the feasibility and benefits of the approach.

  8. Method of manufacturing Josephson junction integrated circuits

    International Nuclear Information System (INIS)

    Jillie, D.W. Jr.; Smith, L.N.

    1985-01-01

    Josephson junction integrated circuits of the current injection type and magnetically controlled type utilize a superconductive layer that forms both the Josephson junction electrodes for the Josephson junction devices on the integrated circuit and a ground plane for the integrated circuit. Large-area Josephson junctions are utilized for effecting contact to lower superconductive layers, and islands are formed in superconductive layers to provide isolation between the ground-plane function and the Josephson junction electrode function as well as to effect crossovers. A superconductor-barrier-superconductor trilayer patterned by local anodization is also utilized, with additional layers formed thereover. Methods of manufacturing the embodiments of the invention are disclosed.

  9. Criteria for quantitative and qualitative data integration: mixed-methods research methodology.

    Science.gov (United States)

    Lee, Seonah; Smith, Carrol A M

    2012-05-01

    Many studies have emphasized the need and importance of a mixed-methods approach for evaluation of clinical information systems. However, those studies had no criteria to guide integration of multiple data sets. Integrating different data sets serves to actualize the paradigm that a mixed-methods approach argues; thus, we require criteria that provide the right direction to integrate quantitative and qualitative data. The first author used a set of criteria organized from a literature search for integration of multiple data sets from mixed-methods research. The purpose of this article was to reorganize the identified criteria. Through critical appraisal of the reasons for designing mixed-methods research, three criteria resulted: validation, complementarity, and discrepancy. In applying the criteria to empirical data of a previous mixed methods study, integration of quantitative and qualitative data was achieved in a systematic manner. It helped us obtain a better organized understanding of the results. The criteria of this article offer the potential to produce insightful analyses of mixed-methods evaluations of health information systems.

  10. Integrative methods for analyzing big data in precision medicine.

    Science.gov (United States)

    Gligorijević, Vladimir; Malod-Dognin, Noël; Pržulj, Nataša

    2016-03-01

    We provide an overview of recent developments in big data analyses in the context of precision medicine and health informatics. With the advance in technologies capturing molecular and medical data, we entered the area of "Big Data" in biology and medicine. These data offer many opportunities to advance precision medicine. We outline key challenges in precision medicine and present recent advances in data integration-based methods to uncover personalized information from big data produced by various omics studies. We survey recent integrative methods for disease subtyping, biomarkers discovery, and drug repurposing, and list the tools that are available to domain scientists. Given the ever-growing nature of these big data, we highlight key issues that big data integration methods will face. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  11. Level density in the complex scaling method

    International Nuclear Information System (INIS)

    Suzuki, Ryusuke; Kato, Kiyoshi; Myo, Takayuki

    2005-01-01

    It is shown that the continuum level density (CLD) at unbound energies can be calculated with the complex scaling method (CSM), in which the energy spectra of bound states, resonances and continuum states are obtained in terms of L² basis functions. In this method, the extended completeness relation is applied to the calculation of the Green functions, and the continuum-state part is approximately expressed in terms of discretized complex-scaled continuum solutions. The obtained result is compared with the CLD calculated exactly from the scattering phase shift. The discretization in the CSM is shown to give a very good description of continuum states. We discuss how the scattering phase shifts can inversely be calculated from the discretized CLD using a basis function technique in the CSM. (author)
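
    Schematically, each discretized complex eigenvalue E_r − iΓ_r/2 of the complex-scaled Hamiltonian contributes a Lorentzian term to the continuum level density (resonance part only; the rotated-continuum contribution, which the paper treats via the extended completeness relation, is omitted here for brevity):

        \Delta\rho(E) \;\simeq\; \frac{1}{\pi} \sum_{r} \frac{\Gamma_r/2}{(E - E_r)^2 + \Gamma_r^2/4},
        \qquad
        \delta(E) \;=\; \pi \int_{-\infty}^{E} \Delta\rho(E')\, dE',

    the second relation being what allows scattering phase shifts to be recovered from the discretized CLD.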

  12. A Multi-Objective Optimization Method to integrate Heat Pumps in Industrial Processes

    OpenAIRE

    Becker, Helen; Spinato, Giulia; Maréchal, François

    2011-01-01

    Aim of process integration methods is to increase the efficiency of industrial processes by using pinch analysis combined with process design methods. In this context, appropriate integrated utilities offer promising opportunities to reduce energy consumption, operating costs and pollutants emissions. Energy integration methods are able to integrate any type of predefined utility, but so far there is no systematic approach to generate potential utilities models based on their technology limit...

  13. An Accurate Integral Method for Vibration Signal Based on Feature Information Extraction

    Directory of Open Access Journals (Sweden)

    Yong Zhu

    2015-01-01

    After summarizing the advantages and disadvantages of current integration methods, a novel vibration signal integration method based on feature information extraction was proposed. This method took full advantage of the self-adaptive filtering characteristic and waveform correction feature of ensemble empirical mode decomposition in dealing with nonlinear and nonstationary signals. This research merged the strengths of kurtosis, mean square error, energy, and singular value decomposition in signal feature extraction. The values of the four indexes were combined into a feature vector. Then, the characteristic components contained in the vibration signal were accurately extracted by a Euclidean distance search, and the desired integrated signals were precisely reconstructed. With this method, the interference from invalid signal components such as trend items and noise, which plagues traditional methods, is commendably resolved. The large cumulative error of the traditional time-domain integration is effectively overcome, and the large low-frequency error of the traditional frequency-domain integration is successfully avoided. Compared with traditional integration methods, this method is outstanding at removing noise while retaining useful feature information, and shows higher accuracy and superiority.
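
    For reference, the traditional frequency-domain integration that the method is benchmarked against divides each Fourier coefficient by jω; the low-frequency error mentioned above comes from the 1/ω blow-up near DC, usually suppressed with a high-pass cutoff. A minimal sketch (the cutoff value is an illustrative choice, and this is the baseline method, not the paper's EEMD-based one):

        import numpy as np

        def integrate_freq_domain(accel, fs, f_min=0.5):
            """Integrate an acceleration signal once in the frequency domain:
            V(w) = A(w) / (j*w), with components below f_min zeroed to avoid
            the 1/w blow-up at low frequency (the 'low-frequency error')."""
            n = accel.size
            spec = np.fft.rfft(accel)
            freqs = np.fft.rfftfreq(n, d=1.0 / fs)
            w = 2 * np.pi * freqs
            spec_int = np.zeros_like(spec)
            keep = freqs >= f_min
            spec_int[keep] = spec[keep] / (1j * w[keep])
            return np.fft.irfft(spec_int, n)

        fs = 1000.0
        t = np.arange(0, 1, 1 / fs)
        accel = np.cos(2 * np.pi * 50 * t)       # a = cos(w t)
        vel = integrate_freq_domain(accel, fs)   # ~ sin(w t)/w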

  14. Large scale integration of photovoltaics in cities

    International Nuclear Information System (INIS)

    Strzalka, Aneta; Alam, Nazmul; Duminil, Eric; Coors, Volker; Eicker, Ursula

    2012-01-01

    Highlights: We implement photovoltaics on a large scale. We use three-dimensional modelling for accurate photovoltaic simulations. We consider the shadowing effect in the photovoltaic simulation. We validate the simulated results using detailed hourly measured data. Abstract: For a large scale implementation of photovoltaics (PV) in the urban environment, building integration is a major issue. This includes installations on roof or facade surfaces with orientations that are not ideal for maximum energy production. To evaluate the performance of PV systems in urban settings and compare it with the building user's electricity consumption, three-dimensional geometry modelling was combined with photovoltaic system simulations. As an example, the modern residential district of Scharnhauser Park (SHP) near Stuttgart/Germany was used to calculate the potential of photovoltaic energy and to evaluate the local own consumption of the energy produced. For most buildings of the district only annual electrical consumption data was available and only selected buildings have electronic metering equipment. The available roof area for one of these multi-family case study buildings was used for a detailed hourly simulation of the PV power production, which was then compared to the hourly measured electricity consumption. The results were extrapolated to all buildings of the analyzed area by normalizing them to the annual consumption data. The PV systems can produce 35% of the quarter's total electricity consumption and half of this generated electricity is directly used within the buildings.

  15. Neural networks supporting audiovisual integration for speech: A large-scale lesion study.

    Science.gov (United States)

    Hickok, Gregory; Rogalsky, Corianne; Matchin, William; Basilakos, Alexandra; Cai, Julia; Pillay, Sara; Ferrill, Michelle; Mickelsen, Soren; Anderson, Steven W; Love, Tracy; Binder, Jeffrey; Fridriksson, Julius

    2018-06-01

    Auditory and visual speech information are often strongly integrated resulting in perceptual enhancements for audiovisual (AV) speech over audio alone and sometimes yielding compelling illusory fusion percepts when AV cues are mismatched, the McGurk-MacDonald effect. Previous research has identified three candidate regions thought to be critical for AV speech integration: the posterior superior temporal sulcus (STS), early auditory cortex, and the posterior inferior frontal gyrus. We assess the causal involvement of these regions (and others) in the first large-scale (N = 100) lesion-based study of AV speech integration. Two primary findings emerged. First, behavioral performance and lesion maps for AV enhancement and illusory fusion measures indicate that classic metrics of AV speech integration are not necessarily measuring the same process. Second, lesions involving superior temporal auditory, lateral occipital visual, and multisensory zones in the STS are the most disruptive to AV speech integration. Further, when AV speech integration fails, the nature of the failure-auditory vs visual capture-can be predicted from the location of the lesions. These findings show that AV speech processing is supported by unimodal auditory and visual cortices as well as multimodal regions such as the STS at their boundary. Motor related frontal regions do not appear to play a role in AV speech integration. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. A simple analytical scaling method for a scaled-down test facility simulating SB-LOCAs in a passive PWR

    International Nuclear Information System (INIS)

    Lee, Sang Il

    1992-02-01

    A simple analytical scaling method is developed for a scaled-down test facility simulating SB-LOCAs in a passive PWR. The whole scenario of a SB-LOCA is divided into two phases on the basis of the pressure trend: a depressurization phase and a pot-boiling phase. The pressure and the core mixture level are selected as the most critical parameters to be preserved between the prototype and the scaled-down model. In each phase, the highly important phenomena influencing the critical parameters are identified, and the scaling parameters governing these phenomena are generated by the present method. To validate the models used, the Marviken CFT and a 336-rod-bundle experiment are simulated. The models overpredict both the pressure and the two-phase mixture level, but show at least qualitative agreement with the experimental results. In order to validate whether the scaled-down model properly represents the important phenomena, we simulate the nondimensional pressure response of a cold-leg 4-inch break transient for AP-600 and for the scaled-down model. The results of the present method are in excellent agreement with those of AP-600. It can be concluded that the present method is suitable for scaling a test facility simulating SB-LOCAs in a passive PWR.

  17. Tau method approximation of the Hubbell rectangular source integral

    International Nuclear Information System (INIS)

    Kalla, S.L.; Khajah, H.G.

    2000-01-01

    The Tau method is applied to obtain expansions, in terms of Chebyshev polynomials, which approximate the Hubbell rectangular source integral: I(a,b) = ∫₀ᵇ [1/√(1+x²)] arctan[a/√(1+x²)] dx. This integral corresponds to the response of an omni-directional radiation detector situated over a corner of a plane isotropic rectangular source. A discussion of the error in the Tau method approximation follows.
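
    A quick numerical sanity check of this integral (straight adaptive quadrature, not the Chebyshev/Tau expansion the paper constructs):

        import numpy as np
        from scipy.integrate import quad

        def hubbell(a, b):
            """Hubbell rectangular source integral by adaptive quadrature."""
            f = lambda x: np.arctan(a / np.sqrt(1 + x**2)) / np.sqrt(1 + x**2)
            val, err = quad(f, 0.0, b)
            return val

        print(hubbell(1.0, 1.0))  # detector response for a square source, a = b = 1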

  18. Challenges and options for large scale integration of wind power

    International Nuclear Information System (INIS)

    Tande, John Olav Giaever

    2006-01-01

    Challenges and options for large-scale integration of wind power are examined. The immediate challenges are related to weak grids. Assessment of system stability requires numerical simulation. Models are being developed - validation is essential. Coordination of wind and hydro generation is a key to allowing more wind power capacity in areas with limited transmission corridors. For the case study grid, depending on technology and control, the allowed wind farm size is increased from 50 to 200 MW. The real-life example from 8 January 2005 demonstrates that existing market-based mechanisms can handle large amounts of wind power. In wind integration studies it is essential to take account of the controllability of modern wind farms, the power system flexibility and the smoothing effect of geographically dispersed wind farms. Modern wind farms contribute to system adequacy - combining wind and hydro constitutes a win-win system

  19. Large-scale modeling of condition-specific gene regulatory networks by information integration and inference.

    Science.gov (United States)

    Ellwanger, Daniel Christian; Leonhardt, Jörn Florian; Mewes, Hans-Werner

    2014-12-01

    Understanding how regulatory networks globally coordinate the response of a cell to changing conditions, such as perturbations by shifting environments, is an elementary challenge in systems biology which has yet to be met. Genome-wide gene expression measurements are high dimensional, as they reflect the condition-specific interplay of thousands of cellular components. The integration of prior biological knowledge into the modeling of systems-wide gene regulation enables the large-scale interpretation of gene expression signals in the context of known regulatory relations. We developed COGERE (http://mips.helmholtz-muenchen.de/cogere), a method for the inference of condition-specific gene regulatory networks in human and mouse. We integrated existing knowledge of regulatory interactions from multiple sources into a comprehensive model of prior information. COGERE infers condition-specific regulation by evaluating the mutual dependency between regulator (transcription factor or miRNA) and target gene expression using prior information. This dependency is scored by the non-parametric, nonlinear correlation coefficient η² (eta squared), derived from a two-way analysis of variance. We show that COGERE significantly outperforms alternative methods in predicting condition-specific gene regulatory networks on simulated data sets. Furthermore, by inferring the cancer-specific gene regulatory network from the NCI-60 expression study, we demonstrate the utility of COGERE to promote hypothesis-driven clinical research. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.
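
    The dependency score is the classical effect-size measure from analysis of variance: η² is the fraction of total variance in target-gene expression explained by the regulator, which, unlike Pearson correlation, also captures nonlinear dependencies. A minimal one-way illustration (COGERE uses a two-way ANOVA; the quantile binning of regulator expression here is an illustrative simplification):

        import numpy as np

        def eta_squared(regulator, target, n_bins=4):
            """Eta squared: share of variance in `target` explained by grouping
            samples into bins of `regulator` expression (one-way ANOVA version)."""
            bins = np.quantile(regulator, np.linspace(0, 1, n_bins + 1)[1:-1])
            groups = np.digitize(regulator, bins)
            grand_mean = target.mean()
            ss_total = ((target - grand_mean) ** 2).sum()
            ss_between = sum(
                target[groups == g].size * (target[groups == g].mean() - grand_mean) ** 2
                for g in np.unique(groups)
            )
            return ss_between / ss_total

        rng = np.random.default_rng(1)
        tf = rng.standard_normal(500)
        gene = 2.0 * np.tanh(tf) + 0.5 * rng.standard_normal(500)  # nonlinear dependency
        print(eta_squared(tf, gene))  # close to 1 => strong (possibly nonlinear) dependency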

  20. Indirect methods for wake potential integration

    International Nuclear Information System (INIS)

    Zagorodnov, I.

    2006-05-01

    The development of modern accelerator and free-electron laser projects requires consideration of the wake fields of very short bunches in arbitrary three-dimensional structures. Obtaining the wake numerically by direct integration is difficult, since it takes a long time for the scattered fields to catch up to the bunch. On the other hand, no general algorithm for indirect wake field integration has been available in the literature so far. In this paper we review the known indirect methods for computing wake potentials in rotationally symmetric and cavity-like three-dimensional structures. For arbitrary three-dimensional geometries we introduce several new techniques and test them numerically. (Orig.)

  1. The prospect of modern thermomechanics in structural integrity calculations of large-scale pressure vessels

    Science.gov (United States)

    Fekete, Tamás

    2018-05-01

    Structural integrity calculations play a crucial role in designing large-scale pressure vessels. Used in the electric power generation industry, these kinds of vessels undergo extensive safety analyses and certification procedures before being deemed feasible for future long-term operation. The calculations are nowadays directed and supported by international standards and guides based on state-of-the-art results of applied research and technical development. However, their ability to predict a vessel's behavior under accidental circumstances after long-term operation is largely limited by the strong dependence of the analysis methodology on empirical models that are correlated to the behavior of structural materials and their changes during material aging. Recently a new scientific engineering paradigm, structural integrity, has been developing that is essentially a synergistic collaboration between a number of scientific and engineering disciplines, modeling, experiments and numerics. Although the application of the structural integrity paradigm has highly contributed to improving the accuracy of safety evaluations of large-scale pressure vessels, the predictive power of the analysis methodology has not yet improved significantly. This is due to the fact that already existing structural integrity calculation methodologies are based on the widespread and commonly accepted 'traditional' engineering thermal stress approach, which is essentially based on the weakly coupled model of thermomechanics and fracture mechanics. Recently, research has been initiated at MTA EK with the aim of reviewing and evaluating current methodologies and models applied in structural integrity calculations, including their scope of validity. The research intends to come to a better understanding of the physical problems that are inherently present in the pool of structural integrity problems of reactor pressure vessels, and to ultimately find a theoretical framework that could serve as a well

  2. Variational method for integrating radial gradient field

    Science.gov (United States)

    Legarda-Saenz, Ricardo; Brito-Loeza, Carlos; Rivera, Mariano; Espinosa-Romero, Arturo

    2014-12-01

    We propose a variational method for integrating information obtained from a circular fringe pattern. The proposed method is a suitable choice for objects with radial symmetry. First, we analyze the information contained in the fringe pattern captured by the experimental setup, and then formulate the problem of recovering the wavefront using techniques from the calculus of variations. The performance of the method is demonstrated by numerical experiments with both synthetic and real data.
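
    As a point of reference, the non-variational baseline simply accumulates the measured radial slope along the radius; the variational formulation in the paper replaces this cumulative integration with the minimizer of a functional, which is more robust to noise. A sketch of the baseline under the radial-symmetry assumption (all names hypothetical):

        import numpy as np

        def integrate_radial_gradient(r, dw_dr):
            """Recover a radially symmetric wavefront w(r) from its radial
            gradient by trapezoidal accumulation, fixing w(0) = 0."""
            steps = 0.5 * (dw_dr[1:] + dw_dr[:-1]) * np.diff(r)
            return np.concatenate(([0.0], np.cumsum(steps)))

        r = np.linspace(0.0, 1.0, 100)
        dw = 2.0 * r   # gradient of the test wavefront w(r) = r^2
        print(np.max(np.abs(integrate_radial_gradient(r, dw) - r**2)))  # ~0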

  3. Scoping Evaluation of the IRIS Radiation Environment by Using the FW-CADIS Method and SCALE MAVRIC Code

    International Nuclear Information System (INIS)

    Petrovic, B.

    2008-01-01

    IRIS is an advanced pressurized water reactor of integral configuration. This integral configuration, with its relatively large reactor vessel and thick downcomer (1.7 m), results in a significant reduction of the radiation field and material activation. It thus enables setting up aggressive dose reduction objectives, but at the same time presents challenges for the shielding analysis, which needs to be performed over a large spatial domain and include flux attenuation by many orders of magnitude. The Monte Carlo method enables accurately representing irregular geometry and potential streaming paths, but may require significant computational effort to reduce statistical uncertainty to within an acceptable range. Variance reduction methods do exist, but they are designed to provide results for individual detectors and in limited regions, whereas in the scoping phase of the IRIS shielding analysis results are sought throughout the whole containment. To facilitate such analysis, the SCALE MAVRIC sequence was employed. Based on the recently developed FW-CADIS method, MAVRIC uses forward and adjoint deterministic transport theory calculations to generate effective biasing parameters for Monte Carlo simulations throughout the problem. Previous studies have confirmed the potential of this method for obtaining Monte Carlo solutions with acceptable statistics over large spatial domains. The objective of this work was to evaluate the capability of FW-CADIS/MAVRIC to efficiently perform the required shielding analysis of IRIS. For that purpose, a representative model was prepared, retaining the main problem characteristics, i.e., a large spatial domain (over 10 m in each dimension) and significant attenuation (over 12 orders of magnitude), but geometrically rather simplified. The obtained preliminary results indicate that the FW-CADIS method implemented through the MAVRIC sequence in SCALE will enable determination of the radiation field throughout the large spatial domain of the IRIS nuclear

  4. First integral method for an oscillator system

    Directory of Open Access Journals (Sweden)

    Xiaoqian Gong

    2013-04-01

    Full Text Available In this article, we consider the nonlinear Duffing-van der Pol-type oscillator system by means of the first integral method. This system has physical relevance as a model in certain flow-induced structural vibration problems, and it includes the van der Pol oscillator and the damped Duffing oscillator, among others, as particular cases. Firstly, we apply the Division Theorem for two variables in the complex domain, which is based on the ring theory of commutative algebra, to explore a quasi-polynomial first integral of an equivalent autonomous system. Then, through solving an algebraic system we derive the first integral of the Duffing-van der Pol-type oscillator system under a certain parametric condition.
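
    For orientation, a commonly cited form of the Duffing-van der Pol-type oscillator, its equivalent autonomous system, and the quasi-polynomial ansatz of the first integral method can be written in LaTeX as follows; the parameter names are illustrative rather than the article's own:

        \ddot{x} + (\delta + \beta x^{2})\,\dot{x} - \mu x + x^{3} = 0
        \quad\Longleftrightarrow\quad
        \dot{x} = y, \qquad \dot{y} = \mu x - x^{3} - (\delta + \beta x^{2})\, y ,

        q(x, y) = \sum_{i=0}^{m} a_{i}(x)\, y^{i} = C .

    The Division Theorem is then used to determine the coefficient functions a_i(x), and the parametric condition arises from requiring the resulting algebraic system for the a_i(x) to be consistent.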

  5. Integration by cell algorithm for Slater integrals in a spline basis

    International Nuclear Information System (INIS)

    Qiu, Y.; Fischer, C.F.

    1999-01-01

    An algorithm for evaluating Slater integrals in a B-spline basis is introduced. Based on the piecewise property of the B-splines, the algorithm divides the two-dimensional (r1, r2) region into a number of rectangular cells according to the chosen grid and implements the two-dimensional integration over each individual cell using Gaussian quadrature. Over the off-diagonal cells, the integrands are separable so that each two-dimensional cell-integral is reduced to a product of two one-dimensional integrals. Furthermore, the scaling invariance of the B-splines in the logarithmic region of the chosen grid is fully exploited such that only some of the cell integrations need to be implemented. The values of given Slater integrals are obtained by assembling the cell integrals. This algorithm significantly improves the efficiency and accuracy of the traditional method that relies on the solution of differential equations and renders the B-spline method more effective when applied to multi-electron atomic systems
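
    The off-diagonal reduction is the heart of the speed-up: on a cell that does not touch the diagonal r1 = r2, the Slater kernel r<^k / r>^(k+1) factorizes, so one Gauss-Legendre rule per dimension suffices. A schematic sketch (plain exponentials standing in for the B-spline products; not the authors' code):

        import numpy as np

        def gauss_1d(f, a, b, n=8):
            """Gauss-Legendre quadrature of f on [a, b]."""
            x, w = np.polynomial.legendre.leggauss(n)
            t = 0.5 * (b - a) * x + 0.5 * (a + b)   # map nodes to [a, b]
            return 0.5 * (b - a) * np.dot(w, f(t))

        def offdiag_cell(f1, f2, cell1, cell2, n=8):
            """2-D integral of f1(r1)*f2(r2) over an off-diagonal cell:
            the separable integrand reduces it to a product of 1-D integrals."""
            return gauss_1d(f1, *cell1, n=n) * gauss_1d(f2, *cell2, n=n)

        # contribution of the cell [1,2]x[3,4] for k = 1, where r< = r1 and
        # r> = r2 everywhere on this cell (so the kernel factorizes):
        I = offdiag_cell(lambda r: r * np.exp(-2.0 * r),        # r<^k weight
                         lambda r: np.exp(-2.0 * r) / r**2,     # 1/r>^(k+1)
                         (1.0, 2.0), (3.0, 4.0))
        print(I)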

  6. The MIRAGE project: large scale radionuclide transport investigations and integral migration experiments

    International Nuclear Information System (INIS)

    Come, B.; Bidoglio, G.; Chapman, N.

    1986-01-01

    Predictions of radionuclide migration through the geosphere must be supported by large-scale, long-term investigations. Several research areas of the MIRAGE Project are devoted to acquiring reliable data for developing and validating models. Apart from man-made migration experiments in boreholes and/or underground galleries, attention is paid to natural geological migration systems which have been active for very long time spans. The potential role of microbial activity, either resident or introduced into the host media, is also considered. In order to clarify basic mechanisms, smaller scale "integral" migration experiments under fully controlled laboratory conditions are also carried out using real waste forms and representative geological media. (author)

  7. Evaluating high risks in large-scale projects using an extended VIKOR method under a fuzzy environment

    Directory of Open Access Journals (Sweden)

    S. Ebrahimnejad

    2012-04-01

    Full Text Available The complexity of large-scale projects has led to numerous risks in their life cycle. This paper presents a new risk evaluation approach in order to rank the high risks in large-scale projects and improve the performance of these projects. It is based on the fuzzy set theory, which is an effective tool to handle uncertainty, and on an extended VIKOR method, one of the well-known multiple criteria decision-making (MCDM) methods. The proposed decision-making approach integrates knowledge and experience acquired from professional experts, who perform the risk identification as well as the subjective judgments of the performance rating for high risks in terms of conflicting criteria, including probability, impact, quickness of reaction toward risk, event measure quantity and event capability criteria. The most notable difference of the proposed VIKOR method from its traditional version is the use of fuzzy decision-matrix data to calculate the ranking index without the need to consult the experts again. Finally, the proposed approach is illustrated with a real-case study in an Iranian power plant project, and the associated results are compared with two well-known decision-making methods under a fuzzy environment.
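
    For reference, the ranking index of the underlying crisp (non-fuzzy) VIKOR method can be sketched as below; the paper's extension replaces the crisp decision matrix with fuzzy numbers, but the S (group utility), R (individual regret) and Q (compromise) indices follow the same pattern. Weights and scores here are invented:

        import numpy as np

        def vikor(F, w, benefit, v=0.5):
            """Crisp VIKOR: F is an (alternatives x criteria) decision matrix,
            w the criteria weights, benefit marks criteria to maximize."""
            f_best = np.where(benefit, F.max(axis=0), F.min(axis=0))
            f_worst = np.where(benefit, F.min(axis=0), F.max(axis=0))
            d = w * (f_best - F) / (f_best - f_worst)   # weighted normalized distance
            S, R = d.sum(axis=1), d.max(axis=1)         # group utility / max regret
            Q = (v * (S - S.min()) / (S.max() - S.min())
                 + (1 - v) * (R - R.min()) / (R.max() - R.min()))
            return S, R, Q                              # rank by ascending Q

        F = np.array([[0.7, 0.4, 3.0],   # three risks scored on three criteria
                      [0.5, 0.8, 2.0],
                      [0.9, 0.6, 4.0]])
        w = np.array([0.5, 0.3, 0.2])
        S, R, Q = vikor(F, w, benefit=np.array([True, True, False]))
        print(np.argsort(Q))             # compromise ordering of the risks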

  8. A fast learning method for large scale and multi-class samples of SVM

    Science.gov (United States)

    Fan, Yu; Guo, Huiming

    2017-06-01

    A fast learning method for multi-class SVM (Support Vector Machine) classification based on a binary tree is presented, addressing the low learning efficiency of SVM when processing large-scale multi-class samples. A bottom-up method is adopted to build the binary tree hierarchy, and according to the resulting hierarchy each sub-classifier learns from the samples of its node. During learning, several class clusters are generated by a first clustering of the training samples. Central points are extracted from those clusters that contain only one type of sample. For clusters containing two types of samples, the cluster numbers of their positive and negative samples are set according to their degree of mixture, a secondary clustering is undertaken, and central points are then extracted from the resulting sub-class clusters. Sub-classifiers are obtained by learning from the reduced sample set formed by the extracted central points. Simulation experiments show that this fast learning method, based on multi-level clustering, can guarantee high classification accuracy, greatly reduce the number of samples and effectively improve learning efficiency.
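
    The sample-reduction idea can be sketched in a single-level form: cluster each class, keep the cluster centers as a reduced training set, and fit the SVM to the centers. The paper's method additionally builds a bottom-up binary tree of sub-classifiers and applies a secondary clustering to mixed clusters; the data set and cluster counts below are arbitrary:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_classification
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=5000, n_features=10, n_classes=3,
                                   n_informative=6, random_state=0)

        centers, labels = [], []
        for c in np.unique(y):
            # first clustering: replace each class by a handful of central points
            km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(X[y == c])
            centers.append(km.cluster_centers_)
            labels.append(np.full(20, c))

        X_red = np.vstack(centers)           # 60 points instead of 5000
        y_red = np.concatenate(labels)
        clf = SVC(kernel="rbf", gamma="scale").fit(X_red, y_red)
        print("accuracy on the full set:", clf.score(X, y))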

  9. The MIMIC Method with Scale Purification for Detecting Differential Item Functioning

    Science.gov (United States)

    Wang, Wen-Chung; Shih, Ching-Lin; Yang, Chih-Chien

    2009-01-01

    This study implements a scale purification procedure into the standard MIMIC method for differential item functioning (DIF) detection and assesses its performance through a series of simulations. It is found that the MIMIC method with scale purification (denoted as M-SP) outperforms the standard MIMIC method (denoted as M-ST) in controlling…

  10. Temporal integration and 1/f power scaling in a circuit model of cerebellar interneurons.

    Science.gov (United States)

    Maex, Reinoud; Gutkin, Boris

    2017-07-01

    Inhibitory interneurons interconnected via electrical and chemical (GABA-A receptor) synapses form extensive circuits in several brain regions. They are thought to be involved in timing and synchronization through fast feedforward control of principal neurons. Theoretical studies have shown, however, that whereas self-inhibition does indeed reduce response duration, lateral inhibition, in contrast, may generate slow response components through a process of gradual disinhibition. Here we simulated a circuit of interneurons (stellate and basket cells) of the molecular layer of the cerebellar cortex and observed circuit time constants that could rise, depending on parameter values, to >1 s. The integration time scaled both with the strength of inhibition, vanishing completely when inhibition was blocked, and with the average connection distance, which determined the balance between lateral and self-inhibition. Electrical synapses could further enhance the integration time by limiting heterogeneity among the interneurons and by introducing a slow capacitive current. The model can explain several observations, such as the slow time course of OFF-beam inhibition, the phase lag of interneurons during vestibular rotation, or the phase lead of Purkinje cells. Interestingly, the interneuron spike trains displayed power that scaled approximately as 1/f at low frequencies. In conclusion, stellate and basket cells in cerebellar cortex, and interneuron circuits in general, may not only provide fast inhibition to principal cells but also act as temporal integrators that build a very short-term memory. NEW & NOTEWORTHY The most common function attributed to inhibitory interneurons is feedforward control of principal neurons. In many brain regions, however, the interneurons are densely interconnected via both chemical and electrical synapses but the function of this coupling is largely unknown. Based on large-scale simulations of an interneuron circuit of cerebellar cortex, we

  11. Techno-economic feasibility study of the integration of a commercial small-scale ORC in a real case study

    International Nuclear Information System (INIS)

    Cavazzini, G.; Dal Toso, P.

    2015-01-01

    Highlights: • The integration of a small-scale commercial ORC in a real case study was analyzed. • The possibility of recovering the waste heat produced by an ICE was considered. • A semi-empirical steady-state model of the commercial small-scale ORC was created. • Both direct and indirect costs were considered in the business model. • The ORC integration was not economically feasible due to increased indirect costs. - Abstract: The ORC certainly represents a promising solution for recovering low-grade waste heat in industries. However, the efficiency of commercial small-scale ORC solutions is still too low in comparison with the high initial costs of the machine, and the lack of simulation models specifically developed for commercial ORC systems prevents industries from defining an accurate business model to correctly evaluate ORC integration in real industrial processes. This paper presents a techno-economic feasibility analysis of the integration of a small-scale commercial ORC in a real case study, represented by a highly efficient industrial distillery. The integration is aimed at maximizing the electricity auto-production by exploiting the heat produced by an internal combustion engine, already partially recovered in internal thermal processes. To analyze the influence of the ORC integration on the industrial processes, a semi-empirical steady-state model of the commercial small-scale ORC was created. The model made it possible to simulate the performance of the commercial ORC within a hypothetical scenario involving the use of the heat from the cooling water and from the exhaust gases of the internal combustion engine. A detailed thermodynamic analysis has been carried out to study the effects of the ORC integration on the plant’s energy system with particular focus on the two processes directly affected by ORC integration, namely vapor production and the drying process of the grape marc. The analysis highlighted the great importance in the

  12. Prediction of irradiation damage effects by multi-scale modelling: EURATOM 6. Framework integrated project PERFECT

    International Nuclear Information System (INIS)

    Massoud, J.P.; Bugat, St.; Marini, B.; Lidbury, D.; Van Dyck, St.; Debarberis, L.

    2008-01-01

    Full text of publication follows. In nuclear PWRs, materials undergo degradation due to severe irradiation conditions that may limit their operational life. Utilities operating these reactors must quantify the aging and the potential degradation of reactor pressure vessels and also of internal structures to ensure safe and reliable plant operation. The EURATOM 6. Framework Integrated Project PERFECT (Prediction of Irradiation Damage Effects in Reactor Components) addresses irradiation damage in RPV materials and components by multi-scale modelling. This state-of-the-art approach offers potential advantages over the conventional empirical methods used in current practice of nuclear plant lifetime management. Launched in January 2004, this 48-month project focuses on two main components of nuclear power plants which are subject to irradiation damage: the ferritic steel reactor pressure vessel and the austenitic steel internals. The project is also an opportunity to integrate the fragmented research and experience that currently exists within Europe in the field of numerical simulation of radiation damage, and it creates links with international organisations involved in similar projects throughout the world. Continuous progress in the physical understanding of the phenomena involved in irradiation damage and continuous progress in computer sciences make possible the development of multi-scale numerical tools able to simulate the effects of irradiation on materials microstructure. The consequences of irradiation on the mechanical and corrosion properties of materials are also tentatively modelled using such multi-scale modelling. But this requires developing different mechanistic models at different levels of physics and engineering, and extending the state of knowledge in several scientific fields. The links between these different kinds of models are particularly delicate to deal with and need specific work. Practically, the main objective of PERFECT is to build

  13. Combined Monte Carlo and path-integral method for simulated library of time-resolved reflectance curves from layered tissue models

    Science.gov (United States)

    Wilson, Robert H.; Vishwanath, Karthik; Mycek, Mary-Ann

    2009-02-01

    Monte Carlo (MC) simulations are considered the "gold standard" for mathematical description of photon transport in tissue, but they can require large computation times. Therefore, it is important to develop simple and efficient methods for accelerating MC simulations, especially when a large "library" of related simulations is needed. A semi-analytical method involving MC simulations and a path-integral (PI) based scaling technique generated time-resolved reflectance curves from layered tissue models. First, a zero-absorption MC simulation was run for a tissue model with fixed scattering properties in each layer. Then, a closed-form expression for the average classical path of a photon in tissue was used to determine the percentage of time that the photon spent in each layer, to create a weighted Beer-Lambert factor to scale the time-resolved reflectance of the simulated zero-absorption tissue model. This method is a unique alternative to other scaling techniques in that it does not require the path length or number of collisions of each photon to be stored during the initial simulation. Effects of various layer thicknesses and absorption and scattering coefficients on the accuracy of the method will be discussed.
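
    The scaling step itself is compact: the zero-absorption reflectance at each time gate is multiplied by a Beer-Lambert factor whose exponent weights each layer's absorption by the fraction of time the average classical path spends in that layer. A sketch with invented curves and coefficients (v is the speed of light in tissue, here for a refractive index near 1.4):

        import numpy as np

        def scale_reflectance(t, R0, f_layers, mu_a, v=0.214):
            """Weighted Beer-Lambert scaling of a zero-absorption
            time-resolved reflectance curve.
            t        : time gates [ps]
            R0       : zero-absorption reflectance at each gate
            f_layers : (len(t), n_layers) fraction of time in each layer
            mu_a     : absorption coefficient of each layer [1/mm]
            v        : speed of light in tissue [mm/ps]"""
            mu_eff = f_layers @ mu_a              # path-weighted absorption per gate
            return R0 * np.exp(-mu_eff * v * t)   # Beer-Lambert attenuation

        t = np.linspace(10.0, 2000.0, 200)                    # ps
        R0 = t**-1.5 * np.exp(-50.0 / t)                      # stand-in MC curve
        f = np.column_stack([np.linspace(0.9, 0.6, t.size),   # time in layer 1
                             np.linspace(0.1, 0.4, t.size)])  # time in layer 2
        R = scale_reflectance(t, R0, f, mu_a=np.array([0.01, 0.02]))

    Note that only the per-gate layer fractions are needed, which is why the method avoids storing per-photon path lengths or collision counts during the initial simulation.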

  14. UK Environmental Prediction - integration and evaluation at the convective scale

    Science.gov (United States)

    Fallmann, Joachim; Lewis, Huw; Castillo, Juan Manuel; Pearson, David; Harris, Chris; Saulter, Andy; Bricheno, Lucy; Blyth, Eleanor

    2016-04-01

    Traditionally, the simulation of regional ocean, wave and atmosphere components of the Earth System have been considered separately, with some information on other components provided by means of boundary or forcing conditions. More recently, the potential value of a more integrated approach, as required for global climate and Earth System prediction, for regional short-term applications has begun to gain increasing research effort. In the UK, this activity is motivated by an understanding that accurate prediction and warning of the impacts of severe weather requires an integrated approach to forecasting. The substantial impacts on individuals, businesses and infrastructure of such events indicate a pressing need to understand better the value that might be delivered through more integrated environmental prediction. To address this need, the Met Office, NERC Centre for Ecology & Hydrology and NERC National Oceanography Centre have begun to develop the foundations of a coupled high resolution probabilistic forecast system for the UK at km-scale. This links together existing model components of the atmosphere, coastal ocean, land surface and hydrology. Our initial focus has been on a 2-year Prototype project to demonstrate the UK coupled prediction concept in research mode. This presentation will provide an update on UK environmental prediction activities. We will present the results from the initial implementation of an atmosphere-land-ocean coupled system, including a new eddy-permitting resolution ocean component, and discuss progress and initial results from further development to integrate wave interactions in this relatively high resolution system. We will discuss future directions and opportunities for collaboration in environmental prediction, and the challenges to realise the potential of integrated regional coupled forecasting for improving predictions and applications.

  15. Stored energy analysis in scale-down test facility

    International Nuclear Information System (INIS)

    Deng Chengcheng; Qin Benke; Fang Fangfang; Chang Huajian; Ye Zishen

    2013-01-01

    In the integral test facilities that simulate the accident transient process of a prototype nuclear power plant, the stored energy in the metal components has a direct influence on the simulation range and the test results of the facilities. Based on heat transfer theory, three methods of analyzing the stored energy were developed, and a thorough study of the stored energy problem in scale-down test facilities was carried out. The lumped parameter method and the power integration method were applied to analyze the transient process of energy release and to evaluate the average total energy stored in the reactor pressure vessel of the ACME (advanced core-cooling mechanism experiment) facility, which is now being built in China. The results show that the similarity requirements of the three methods for analyzing the stored energy in the test facilities are successively relaxed. Under the condition of satisfying the integral similarity of natural circulation, the stored energy release process in scale-down test facilities cannot maintain exact similarity. The stored energy in the reactor pressure vessel wall of ACME, which is released quickly during the early stage of rapid depressurization of the system, will not have a major impact on the long-term behavior of the system, and the scaling distortion of the integral average total energy of the stored heat is acceptable. (authors)
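
    The simplest of the three, the lumped parameter method, treats the vessel wall as a single thermal capacitance, so the released stored energy during a depressurization transient has a closed form. A sketch with invented vessel data (not ACME's):

        import numpy as np

        def lumped_energy_release(t, m, cp, h, A, T0, Tf):
            """Lumped-capacitance estimate of stored energy released by time t:
            tau = m*cp/(h*A); the wall temperature decays exponentially toward
            the fluid temperature Tf, so E(t) = m*cp*(T0 - Tf)*(1 - exp(-t/tau))."""
            tau = m * cp / (h * A)
            return m * cp * (T0 - Tf) * (1.0 - np.exp(-t / tau))

        t = np.linspace(0.0, 600.0, 7)                    # s
        E = lumped_energy_release(t, m=2.0e4, cp=500.0,   # 20 t of steel
                                  h=1.0e3, A=30.0,        # film coefficient, area
                                  T0=600.0, Tf=400.0)     # K
        print(E / 1e6)                                    # MJ released so far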

  16. Properties Important To Mixing For WTP Large Scale Integrated Testing

    International Nuclear Information System (INIS)

    Koopman, D.; Martino, C.; Poirier, M.

    2012-01-01

    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i

  17. PROPERTIES IMPORTANT TO MIXING FOR WTP LARGE SCALE INTEGRATED TESTING

    Energy Technology Data Exchange (ETDEWEB)

    Koopman, D.; Martino, C.; Poirier, M.

    2012-04-26

    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i

  18. A Methodology for Conducting Integrative Mixed Methods Research and Data Analyses

    Science.gov (United States)

    Castro, Felipe González; Kellison, Joshua G.; Boyd, Stephen J.; Kopak, Albert

    2011-01-01

    Mixed methods research has gained visibility within the last few years, although limitations persist regarding the scientific caliber of certain mixed methods research designs and methods. The need exists for rigorous mixed methods designs that integrate various data analytic procedures for a seamless transfer of evidence across qualitative and quantitative modalities. Such designs can offer the strength of confirmatory results drawn from quantitative multivariate analyses, along with “deep structure” explanatory descriptions as drawn from qualitative analyses. This article presents evidence generated from over a decade of pilot research in developing an integrative mixed methods methodology. It presents a conceptual framework and methodological and data analytic procedures for conducting mixed methods research studies, and it also presents illustrative examples from the authors' ongoing integrative mixed methods research studies. PMID:22167325

  19. Deposit and scale prevention methods in thermal sea water desalination

    International Nuclear Information System (INIS)

    Froehner, K.R.

    1977-01-01

    Introductory remarks deal with the 'fouling factor' and its influence on the overall heat transfer coefficient of MSF evaporators. The composition of the matter dissolved in sea water and its thermal and chemical properties lead to the formation of alkaline scale, or even hard sulphate scale, on the heat exchanger tube walls, and can hamper plant operation and economics seriously. Among the scale prevention methods are 1) pH control by acid dosing (decarbonation), 2) 'threshold treatment' by dosing of inhibitors of different kinds, 3) mechanical cleaning by sponge rubber balls guided through the heat exchanger tubes, in general combined with methods no. 1 or 2, and 4) application of a slurry of scale crystal germs (seeding). Mention is made of several other scale prevention proposals. The problems encountered with marine life (suspension, deposit, growth) in desalination plants are touched upon. (orig.) [de]

  20. Methods of scaling threshold color difference using printed samples

    Science.gov (United States)

    Huang, Min; Cui, Guihua; Liu, Haoxue; Luo, M. Ronnier

    2012-01-01

    A series of printed samples on a semi-gloss paper substrate, with color differences of threshold magnitude, was prepared for scaling the visual color difference and evaluating the performance of different methods. The probabilities of perceptibility were normalized to Z-scores, and the different color differences were scaled against the Z-scores. The visual color differences so obtained were checked with the STRESS factor. The results indicated that only the scales changed; the relative scales between pairs in the data were preserved.

  1. A systematic and efficient method to compute multi-loop master integrals

    Science.gov (United States)

    Liu, Xiao; Ma, Yan-Qing; Wang, Chen-Yu

    2018-04-01

    We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. Thus it can be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but also be much faster than sector decomposition, the only existing systematic method. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.

  2. Computation of rectangular source integral by rational parameter polynomial method

    International Nuclear Information System (INIS)

    Prabha, Hem

    2001-01-01

    Hubbell et al. (J. Res. Nat. Bureau Standards 64C (1960) 121) have obtained a series expansion for the calculation of the radiation field generated by a plane isotropic rectangular source (plaque), in which the leading term is the integral H(a,b). In this paper another integral I(a,b), which is related to the integral H(a,b), has been solved by the rational parameter polynomial method. From I(a,b), we compute H(a,b). Using this method the integral I(a,b) is expressed in the form of a polynomial of a rational parameter. Generally, a function f(x) is expressed in terms of x; in this method it is expressed in terms of x/(1+x). In this way, the accuracy of the expression is good over a wide range of x as compared to the earlier approach. The results for I(a,b) and H(a,b) are given for a sixth degree polynomial and are found to be in good agreement with the results obtained by numerically integrating the integral. Accuracy could be increased either by increasing the degree of the polynomial or by dividing the range of integration. The results of H(a,b) and I(a,b) are given for values of b and a up to 2.0 and 20.0, respectively
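
    The change of variable is the essence of the method: instead of expanding f(x) in powers of x, one expands in powers of u = x/(1+x), which maps 0 <= x < infinity onto 0 <= u < 1 and keeps a low-degree fit accurate over a wide range. A generic sketch (a stand-in function, not Hubbell's integral or the paper's coefficients):

        import numpy as np

        def fit_rational_parameter(f, x, degree=6):
            """Least-squares polynomial in u = x/(1+x) approximating f(x)."""
            u = x / (1.0 + x)
            return np.polynomial.polynomial.polyfit(u, f(x), degree)

        def eval_rational_parameter(coeffs, x):
            return np.polynomial.polynomial.polyval(x / (1.0 + x), coeffs)

        f = np.arctan                     # stand-in for I(a,b) at fixed b
        x = np.linspace(0.0, 20.0, 400)   # the wide range quoted in the paper
        c = fit_rational_parameter(f, x)
        err = np.max(np.abs(eval_rational_parameter(c, x) - f(x)))
        print("sixth-degree max error:", err)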

  3. Large-scale synthesis of YSZ nanopowder by Pechini method

    Indian Academy of Sciences (India)

    Administrator

    … structure and chemical purity of 99.1% by inductively coupled plasma optical emission spectroscopy on a large scale. Keywords: sol–gel; yttria-stabilized zirconia; large scale; nanopowder; Pechini method.

  4. Large scale continuous integration and delivery : Making great software better and faster

    NARCIS (Netherlands)

    Stahl, Daniel

    2017-01-01

    Since the inception of continuous integration, and later continuous delivery, the methods of producing software in the industry have changed dramatically over the last two decades. Automated, rapid and frequent compilation, integration, testing, analysis, packaging and delivery of new software

  5. A study of compositional verification based IMA integration method

    Science.gov (United States)

    Huang, Hui; Zhang, Guoquan; Xu, Wanmeng

    2018-03-01

    The rapid development of avionics systems is driving the application of integrated modular avionics (IMA) systems. While IMA improves avionics system integration, it also increases the complexity of system testing, so the test methods for IMA systems need to be simplified. An IMA system provides a module platform that runs multiple applications and shares processing resources. Compared with a federated avionics system, it is difficult to isolate failures in an IMA system, so the critical problem for IMA system verification is how to test resources shared by multiple applications. For a simple avionics system, traditional test methods can readily cover the whole system; for a huge, integrated avionics system, however, complete testing is hard to achieve. This paper therefore proposes applying compositional verification theory to IMA system testing, reducing the number of test processes and improving efficiency, and consequently economizing the costs of IMA system integration.

  6. Higher-Order Integral Equation Methods in Computational Electromagnetics

    DEFF Research Database (Denmark)

    Jørgensen, Erik; Meincke, Peter

    Higher-order integral equation methods have been investigated. The study has focused on improving the accuracy and efficiency of the Method of Moments (MoM) applied to electromagnetic problems. A new set of hierarchical Legendre basis functions of arbitrary order is developed. The new basis...

  7. SITE-94. Discrete-feature modelling of the Aespoe site: 2. Development of the integrated site-scale model

    International Nuclear Information System (INIS)

    Geier, J.E.

    1996-12-01

    A 3-dimensional, discrete-feature hydrological model is developed. The model integrates structural and hydrologic data for the Aespoe site, on scales ranging from semi regional fracture zones to individual fractures in the vicinity of the nuclear waste canisters. Hydrologic properties of the large-scale structures are initially estimated from cross-hole hydrologic test data, and automatically calibrated by numerical simulation of network flow, and comparison with undisturbed heads and observed drawdown in selected cross-hole tests. The calibrated model is combined with a separately derived fracture network model, to yield the integrated model. This model is partly validated by simulation of transient responses to a long-term pumping test and a convergent tracer test, based on the LPT2 experiment at Aespoe. The integrated model predicts that discharge from the SITE-94 repository is predominantly via fracture zones along the eastern shore of Aespoe. Similar discharge loci are produced by numerous model variants that explore uncertainty with regard to effective semi regional boundary conditions, hydrologic properties of the site-scale structures, and alternative structural/hydrological interpretations. 32 refs

  8. SITE-94. Discrete-feature modelling of the Aespoe site: 2. Development of the integrated site-scale model

    Energy Technology Data Exchange (ETDEWEB)

    Geier, J.E. [Golder Associates AB, Uppsala (Sweden)

    1996-12-01

    A 3-dimensional, discrete-feature hydrological model is developed. The model integrates structural and hydrologic data for the Aespoe site, on scales ranging from semi regional fracture zones to individual fractures in the vicinity of the nuclear waste canisters. Hydrologic properties of the large-scale structures are initially estimated from cross-hole hydrologic test data, and automatically calibrated by numerical simulation of network flow, and comparison with undisturbed heads and observed drawdown in selected cross-hole tests. The calibrated model is combined with a separately derived fracture network model, to yield the integrated model. This model is partly validated by simulation of transient responses to a long-term pumping test and a convergent tracer test, based on the LPT2 experiment at Aespoe. The integrated model predicts that discharge from the SITE-94 repository is predominantly via fracture zones along the eastern shore of Aespoe. Similar discharge loci are produced by numerous model variants that explore uncertainty with regard to effective semi regional boundary conditions, hydrologic properties of the site-scale structures, and alternative structural/hydrological interpretations. 32 refs.

  9. The flow measurement methods for the primary system of integral reactors

    International Nuclear Information System (INIS)

    Lee, J.; Seo, J. K.; Lee, D. J.

    2001-01-01

    It is a common feature of integral reactors that the main components of the primary system are installed within the reactor vessel, so there are no flow pipes connecting the reactor coolant pumps or steam generators. With no flow pipes, it is impossible to measure the differential pressure across the primary system of integral reactors, which in turn makes it impossible to measure the primary coolant flow rate directly. The objective of this study is to draw up flow measurement methods for the primary system of integral reactors. As a result of the review, we selected flow measurement by pump speed, by HBM, and by pump motor power as the flow measurement methods for the primary system of integral reactors. Notably, we did not find a precedent in which a direct pump motor power-flow rate curve is used as the flow measurement method in existing commercial nuclear power reactors. Therefore, to use this method for integral reactors, the follow-up measures included in this report need to be borne in mind.
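
    Both curve-based measurements reduce to interpolation on characteristic curves recorded during commissioning: flow versus pump speed, and flow versus pump motor power. A sketch with entirely invented characteristic data:

        import numpy as np

        # hypothetical pump characteristics from commissioning tests
        speed_rpm = np.array([900.0, 1200.0, 1500.0, 1800.0])   # pump speed
        flow_m3h  = np.array([200.0, 270.0, 340.0, 410.0])      # measured flow
        power_kw  = np.array([55.0, 120.0, 230.0, 390.0])       # motor power

        def flow_from_speed(n):
            """Flow rate from pump speed via the measured speed-flow curve."""
            return np.interp(n, speed_rpm, flow_m3h)

        def flow_from_power(p):
            """Flow rate from motor power via the measured power-flow curve
            (power must be monotonic in flow for the inversion to be unique)."""
            return np.interp(p, power_kw, flow_m3h)

        print(flow_from_speed(1350.0), flow_from_power(170.0))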

  10. Energy System Analysis of Large-Scale Integration of Wind Power

    International Nuclear Information System (INIS)

    Lund, Henrik

    2003-11-01

    The paper presents the results of two research projects conducted by Aalborg University and financed by the Danish Energy Research Programme. Both projects include the development of models and system analysis with focus on large-scale integration of wind power into different energy systems. Market reactions and the ability to exploit exchange on the international market for electricity by locating exports in hours of high prices are included in the analyses. This paper focuses on results which are valid for energy systems in general. The paper presents the ability of different energy systems and regulation strategies to integrate wind power. The ability is expressed by three factors: the first is the degree of excess electricity production caused by fluctuations in wind and CHP heat demands; the second is the ability to utilise wind power to reduce CO2 emission in the system; and the third is the ability to benefit from exchange of electricity on the market. Energy systems and regulation strategies are analysed in the range of a wind power input from 0 to 100% of the electricity demand. Based on the Danish energy system, in which 50 per cent of the electricity demand is produced in CHP, a number of future energy systems with CO2 reduction potentials are analysed, i.e. systems with more CHP, systems using electricity for transportation (battery or hydrogen vehicles) and systems with fuel-cell technologies. For the present and such potential future energy systems, different regulation strategies have been analysed, i.e. the inclusion of small CHP plants into the regulation task of electricity balancing and grid stability, and investments in electric heating, heat pumps and heat storage capacity. Also the potential of energy management has been analysed. The results of the analyses make it possible to compare short-term and long-term potentials of different strategies of large-scale integration of wind power

  11. Optimal unit sizing for small-scale integrated energy systems using multi-objective interval optimization and evidential reasoning approach

    International Nuclear Information System (INIS)

    Wei, F.; Wu, Q.H.; Jing, Z.X.; Chen, J.J.; Zhou, X.X.

    2016-01-01

    This paper proposes a comprehensive framework including a multi-objective interval optimization model and an evidential reasoning (ER) approach to solve the unit sizing problem of small-scale integrated energy systems with uncertain wind and solar energies integrated. In the multi-objective interval optimization model, interval variables are introduced to tackle the uncertainties of the optimization problem. Aiming at simultaneously considering the cost and risk of a business investment, the average and deviation of the life cycle cost (LCC) of the integrated energy system are formulated. In order to solve the problem, a novel multi-objective optimization algorithm, MGSOACC (multi-objective group search optimizer with adaptive covariance matrix and chaotic search), is developed, employing an adaptive covariance matrix to make the search strategy adaptive and applying chaotic search to maintain the diversity of the group. Furthermore, the ER approach is applied to deal with the multiple interests of an investor at the business decision-making stage and to determine the final unit sizing solution from the Pareto-optimal solutions. This paper reports on the simulation results obtained using a small-scale direct district heating system (DH) and a small-scale district heating and cooling system (DHC) optimized by the proposed framework. The results demonstrate the superiority of the multi-objective interval optimization model and ER approach in tackling the unit sizing problem of integrated energy systems considering the integration of uncertain wind and solar energies. - Highlights: • Cost and risk of investment in small-scale integrated energy systems are considered. • A multi-objective interval optimization model is presented. • A novel multi-objective optimization algorithm (MGSOACC) is proposed. • The evidential reasoning (ER) approach is used to obtain the final optimal solution. • The MGSOACC and ER can tackle the unit sizing problem efficiently.

  12. Comparison of zero-sequence injection methods in cascaded H-bridge multilevel converters for large-scale photovoltaic integration

    DEFF Research Database (Denmark)

    Yu, Yifan; Konstantinou, Georgios; Townsend, Christopher David

    2017-01-01

    Photovoltaic (PV) power generation levels in the three phases of a multilevel cascaded H-bridge (CHB) converter can be significantly unbalanced, owing to different irradiance levels and ambient temperatures over a large-scale solar PV power plant. Injection of a zero-sequence voltage is required to maintain three-phase balanced grid currents with unbalanced power generation. This study theoretically compares the power balance capabilities of various zero-sequence injection methods based on two metrics which can be easily generalised for all CHB applications to PV systems. Experimental results based on a 430 V, 10 kW, three-phase, seven-level cascaded H-bridge converter prototype confirm the superior performance of the optimal zero-sequence injection technique.

  13. A systematic and efficient method to compute multi-loop master integrals

    Directory of Open Access Journals (Sweden)

    Xiao Liu

    2018-04-01

    Full Text Available We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. Thus it can be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but also be much faster than sector decomposition, the only existing systematic method. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.

  14. Structural Integrity and Hydraulic Stability of Dolos Armour Layers

    DEFF Research Database (Denmark)

    Burcharth, H. F.

    A method for development of design diagrams to ensure structural integrity of slender unreinforced concrete breakwater armour units is presented. The method is based on experimental data from small scale flume tests as well as impact loading of prototype and small scale units. A prerequisite......-parameter characterization makes it possible to develop simple design diagrams for engineering purposes. Specific design diagrams for integrity of Dolos armour units with the waist ratio as a variable have been produced....

  15. Method of mechanical quadratures for solving singular integral equations of various types

    Science.gov (United States)

    Sahakyan, A. V.; Amirjanyan, H. A.

    2018-04-01

    The method of mechanical quadratures is proposed as a common approach for solving integral equations that are defined on finite intervals and contain Cauchy-type singular integrals. This method can be used to solve singular integral equations of the first and second kind, equations with generalized kernel, weakly singular equations, and integro-differential equations. The quadrature rules for several different integrals represented through the same coefficients are presented. This allows one to reduce the integral equations containing integrals of different types to a system of linear algebraic equations.
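
    A classical instance of such a scheme is the Gauss-Chebyshev discretization of the dominant (first-kind) Cauchy singular equation, in which the quadrature nodes and collocation points share the same simple weights and the equation collapses to a linear algebraic system. This is a textbook sketch of that special case, not the authors' general method:

        import numpy as np

        def solve_dominant_cauchy(f, n=32):
            """Solve (1/pi) * int_{-1}^{1} g(t) / ((t - x) sqrt(1 - t^2)) dt = f(x)
            by mechanical quadratures: g is sampled at the Chebyshev nodes t_k,
            and the equation is collocated at the points x_m."""
            k = np.arange(1, n + 1)
            t = np.cos((2 * k - 1) * np.pi / (2 * n))   # quadrature nodes
            m = np.arange(1, n)
            x = np.cos(m * np.pi / n)                   # collocation points
            A = 1.0 / (n * (t[None, :] - x[:, None]))   # equal weights 1/n
            # closure fixing the otherwise non-unique solution:
            # (1/pi) * int g(t)/sqrt(1-t^2) dt = 0
            A = np.vstack([A, np.ones(n) / n])
            b = np.append(f(x), 0.0)
            return t, np.linalg.solve(A, b)

        t, g = solve_dominant_cauchy(lambda x: np.ones_like(x))
        print(np.max(np.abs(g - t)))   # exact solution here is g(t) = t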

  16. Integrating Expert Knowledge with Statistical Analysis for Landslide Susceptibility Assessment at Regional Scale

    Directory of Open Access Journals (Sweden)

    Christos Chalkias

    2016-03-01

    Full Text Available In this paper, an integrated landslide susceptibility model combining expert-based and bivariate statistical analysis (Landslide Susceptibility Index, LSI) approaches is presented. Factors related to the occurrence of landslides, such as elevation, slope angle, slope aspect, lithology, land cover, Mean Annual Precipitation (MAP) and Peak Ground Acceleration (PGA), were analyzed within a GIS environment. This integrated model produced a landslide susceptibility map which categorized the study area according to the probability level of landslide occurrence. The accuracy of the final map was evaluated by Receiver Operating Characteristics (ROC) analysis using an independent validation dataset of landslide events. The prediction ability was found to be 76%, revealing that the integration of statistical analysis with human expertise can provide an acceptable landslide susceptibility assessment at regional scale.
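
    The bivariate statistical half of such a model assigns each factor class a weight from the ratio of the landslide density in that class to the overall density; the LSI of a map cell is then the sum of the weights of the classes the cell falls in. A minimal sketch with invented class statistics:

        import numpy as np

        def lsi_weights(slide_counts, class_areas):
            """LSI weight per factor class:
            w = ln( (slides_in_class / class_area) / (total_slides / total_area) )."""
            overall = slide_counts.sum() / class_areas.sum()
            dens = slide_counts / class_areas
            return np.log(np.where(dens > 0, dens, np.nan) / overall)

        # invented statistics: landslide cell counts and class areas (km^2)
        w_slope = lsi_weights(np.array([5.0, 40.0, 90.0, 25.0]),
                              np.array([120.0, 300.0, 250.0, 60.0]))
        w_litho = lsi_weights(np.array([70.0, 30.0, 60.0]),
                              np.array([310.0, 180.0, 240.0]))
        # LSI of a cell in slope class 3 and lithology class 1:
        print(w_slope, w_slope[2] + w_litho[0])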

  17. Assessing Backwards Integration as a Method of KBO Family Finding

    Science.gov (United States)

    Benfell, Nathan; Ragozzine, Darin

    2018-04-01

    The age of young asteroid collisional families can sometimes be determined by using backwards n-body integrations of the solar system. This method is not used for discovering young asteroid families and is limited by the unpredictable influence of the Yarkovsky effect on individual asteroids over time. Since these limitations are not as important for objects in the Kuiper belt, Marcus et al. 2011 suggested that backwards integration could be used to discover and characterize collisional families in the outer solar system. However, various challenges present themselves when running precise and accurate 4+ Gyr integrations of Kuiper Belt objects. We have created simulated families of Kuiper Belt Objects with identical starting locations and velocity distributions, based on the Haumea Family. We then ran several long-term test integrations to observe the effect of various simulation parameters on integration results. These integrations were used to determine which parameters are significant enough to require inclusion in the integration. Thereby we determined how to construct long-term integrations that both yield significant results and require manageable processing power. Additionally, we have tested the use of backwards integration as a method of discovery of potential young families in the Kuiper Belt.
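
    The core numerical idea, integrating orbits backward in time, needs nothing more than a time-reversible (symplectic) integrator stepped with a negative time step. A self-contained two-body sketch, not the full solar-system setup used for such studies:

        import numpy as np

        def leapfrog(pos, vel, masses, dt, n_steps, G=1.0):
            """Kick-drift-kick leapfrog; a negative dt integrates backward in time."""
            def acc(p):
                a = np.zeros_like(p)
                for i in range(len(masses)):
                    for j in range(len(masses)):
                        if i != j:
                            r = p[j] - p[i]
                            a[i] += G * masses[j] * r / np.linalg.norm(r) ** 3
                return a
            for _ in range(n_steps):
                vel = vel + 0.5 * dt * acc(pos)   # kick
                pos = pos + dt * vel              # drift
                vel = vel + 0.5 * dt * acc(pos)   # kick
            return pos, vel

        m = np.array([1.0, 1e-9])                              # star + test particle
        p = np.array([[0.0, 0.0], [1.0, 0.0]])
        v = np.array([[0.0, 0.0], [0.0, 1.0]])                 # circular orbit
        p1, v1 = leapfrog(p, v, m, dt=+1e-3, n_steps=1000)     # forward ...
        p0, v0 = leapfrog(p1, v1, m, dt=-1e-3, n_steps=1000)   # ... and back
        print(np.max(np.abs(p0 - p)))   # recovers the initial state closely

    Because leapfrog is exactly time-reversible up to round-off, the backward run retraces the forward one; over Gyr time spans the practical limits come from chaos and round-off accumulation, which is what the simulation-parameter tests probe.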

  18. At the Nexus of History, Ecology, and Hydrobiogeochemistry: Improved Predictions across Scales through Integration.

    Science.gov (United States)

    Stegen, James C

    2018-01-01

    To improve predictions of ecosystem function in future environments, we need to integrate the ecological and environmental histories experienced by microbial communities with hydrobiogeochemistry across scales. A key issue is whether we can derive generalizable scaling relationships that describe this multiscale integration. There is a strong foundation for addressing these challenges. We have the ability to infer ecological history with null models and reveal impacts of environmental history through laboratory and field experimentation. Recent developments also provide opportunities to inform ecosystem models with targeted omics data. A major next step is coupling knowledge derived from such studies with multiscale modeling frameworks that are predictive under non-steady-state conditions. This is particularly true for systems spanning dynamic interfaces, which are often hot spots of hydrobiogeochemical function. We can advance predictive capabilities through a holistic perspective focused on the nexus of history, ecology, and hydrobiogeochemistry.

  19. Evaluation of time integration methods for transient response analysis of nonlinear structures

    International Nuclear Information System (INIS)

    Park, K.C.

    1975-01-01

    Recent developments in the evaluation of direct time integration methods for the transient response analysis of nonlinear structures are presented. These developments, which are based on local stability considerations of an integrator, show that the interaction between temporal step size and nonlinearities of structural systems has a pronounced effect on both accuracy and stability of a given time integration method. The resulting evaluation technique is applied to a model nonlinear problem, in order to: 1) demonstrate that it eliminates the present costly process of evaluating time integrators for nonlinear structural systems via extensive numerical experiments; 2) identify the desirable characteristics of time integration methods for nonlinear structural problems; 3) develop improved stiffly-stable methods for application to nonlinear structures. Extension of the methodology for examination of the interaction between a time integrator and the approximate treatment of nonlinearities (such as due to pseudo-force or incremental solution procedures) is also discussed. (Auth.)
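
    The step-size/nonlinearity interaction is easy to reproduce on a model problem: for a hardening Duffing oscillator the effective frequency grows with amplitude, so an explicit scheme whose step is stable for the linear problem can become unstable at large amplitude. A sketch using the central difference scheme (an illustration of the phenomenon, not the paper's evaluation technique):

        import numpy as np

        def central_difference(x0, v0, dt, n, k=1.0, mu=10.0):
            """Explicit central difference for x'' + k*x + mu*x^3 = 0."""
            f = lambda x: -(k * x + mu * x ** 3)
            x_prev = x0 - dt * v0 + 0.5 * dt ** 2 * f(x0)   # starting procedure
            x, out = x0, [x0]
            for _ in range(n):
                x_next = 2.0 * x - x_prev + dt ** 2 * f(x)
                x_prev, x = x, x_next
                out.append(x)
            return np.array(out)

        for dt in (0.05, 0.5):
            traj = central_difference(x0=2.0, v0=0.0, dt=dt, n=200)
            # stays bounded for dt = 0.05; overflows (inf/nan) for dt = 0.5,
            # although dt = 0.5 would be stable for the linear problem (k = 1)
            print(dt, "max |x| =", np.abs(traj).max())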

  20. Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.

    Science.gov (United States)

    Echinaka, Yuki; Ozeki, Yukiyasu

    2016-10-01

    The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without using any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. Applying this method, we introduce the bootstrap method and propose a numerical discrimination for the transition type.
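
    The core trick, scoring a trial scaling collapse with a nonparametric kernel fit instead of a parametric model function, can be sketched with Gaussian process regression (itself a Bayesian kernel method). The scikit-learn API and the synthetic exponential relaxation data below are illustrative assumptions, not the paper's Monte Carlo data or code:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        def collapse_score(datasets, taus):
            """Pool relaxation curves rescaled by trial relaxation times and fit
            one Gaussian process to the cloud: the better the collapse onto a
            single smooth curve, the higher the marginal likelihood."""
            X = np.concatenate([np.log(t / tau) for (t, y), tau in zip(datasets, taus)])
            Y = np.concatenate([np.log(y) for (t, y) in datasets])
            gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X[:, None], Y)
            return gp.log_marginal_likelihood_value_

        # two synthetic relaxation curves sharing one scaling function exp(-t/tau)
        t = np.linspace(1.0, 50.0, 40)
        data = [(t, np.exp(-t / 5.0) + 1e-3), (t, np.exp(-t / 20.0) + 1e-3)]
        print(collapse_score(data, [5.0, 20.0]),   # correct taus: good collapse
              collapse_score(data, [5.0, 5.0]))    # wrong taus: lower score

    Maximizing such a score over the trial parameters replaces the usual hand-chosen scaling ansatz, which is what makes the procedure automatic and reproducible.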

  1. Nonlinear structural analysis using integrated force method

    Indian Academy of Sciences (India)

    A new formulation termed the Integrated Force Method (IFM) was proposed by Patnaik ... designated "Structure (n, m)", where (n, m) are the force and displacement degrees of ... Patnaik S N, Yadagiri S 1976 Frequency analysis of structures.

  2. Modelling future impacts of air pollution using the multi-scale UK Integrated Assessment Model (UKIAM).

    Science.gov (United States)

    Oxley, Tim; Dore, Anthony J; ApSimon, Helen; Hall, Jane; Kryza, Maciej

    2013-11-01

    Integrated assessment modelling has evolved to support policy development in relation to air pollutants and greenhouse gases by providing integrated simulation tools able to produce quick and realistic representations of emission scenarios and their environmental impacts without the need to re-run complex atmospheric dispersion models. The UK Integrated Assessment Model (UKIAM) has been developed to investigate strategies for reducing UK emissions by bringing together information on projected UK emissions of SO2, NOx, NH3, PM10 and PM2.5, atmospheric dispersion, criteria for protection of ecosystems, urban air quality and human health, and data on potential abatement measures to reduce emissions, which may subsequently be linked to associated analyses of costs and benefits. We describe the multi-scale model structure ranging from continental to roadside, UK emission sources, atmospheric dispersion of emissions, implementation of abatement measures, integration with European-scale modelling, and environmental impacts. The model generates outputs from a national perspective which are used to evaluate alternative strategies in relation to emissions, deposition patterns, air quality metrics and ecosystem critical load exceedance. We present a selection of scenarios in relation to the 2020 Business-As-Usual projections and identify potential further reductions beyond those currently being planned. © 2013.

  3. Deterministic factor analysis: methods of integro-differentiation of non-integral order

    Directory of Open Access Journals (Sweden)

    Valentina V. Tarasova

    2016-12-01

    Full Text Available Objective: to summarize the methods of deterministic factor economic analysis, namely the differential calculus and the integral method. Methods: mathematical methods for integro-differentiation of non-integral order; the theory of derivatives and integrals of fractional (non-integral) order. Results: the basic concepts are formulated and new methods are developed that take into account the memory and nonlocality effects in the quantitative description of the influence of individual factors on the change in the effective economic indicator. Two methods of integro-differentiation of non-integral order are proposed for the deterministic factor analysis of economic processes with memory and nonlocality. It is shown that the method of integro-differentiation of non-integral order can give more accurate results compared with standard methods (the method of differentiation using first-order derivatives and the integral method using first-order integration) for a wide class of functions describing effective economic indicators. Scientific novelty: new methods of deterministic factor analysis are proposed: the method of differential calculus of non-integral order and the integral method of non-integral order. Practical significance: the basic concepts and formulas of the article can be used in scientific and analytical activity for factor analysis of economic processes. The proposed method of integro-differentiation of non-integral order extends the capabilities of deterministic factor economic analysis. The new quantitative method of deterministic factor analysis may become the beginning of quantitative studies of the behavior of economic agents with memory (hereditarity) and spatial nonlocality. The proposed methods of deterministic factor analysis can be used in the study of economic processes which follow the exponential law, in which the indicators (endogenous variables) are power functions of the factors (exogenous variables), including the processes
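
    Numerically, the non-integral-order derivative behind these methods is commonly approximated by the Grunwald-Letnikov formula, whose binomial weights make every past value of the function enter the current one; this is exactly how the memory effect appears. A sketch (first-order accurate, checked against the known half-derivative of t^2):

        import numpy as np
        from math import gamma

        def gl_fractional_derivative(f_vals, alpha, h):
            """Grunwald-Letnikov fractional derivative of order alpha on a
            uniform grid with step h. Weight recursion:
            g_0 = 1, g_k = g_{k-1} * (1 - (alpha + 1)/k)."""
            n = len(f_vals)
            g = np.ones(n)
            for k in range(1, n):
                g[k] = g[k - 1] * (1.0 - (alpha + 1.0) / k)
            out = np.empty(n)
            for j in range(n):
                # convolution over the whole history: the memory effect
                out[j] = np.dot(g[: j + 1], f_vals[j::-1]) / h ** alpha
            return out

        h = 0.01
        t = np.arange(0.0, 1.0 + h, h)
        d = gl_fractional_derivative(t ** 2, alpha=0.5, h=h)
        exact = gamma(3) / gamma(2.5) * t ** 1.5   # D^{1/2} t^2
        print(np.max(np.abs(d[1:] - exact[1:])))   # small O(h) error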

  4. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating its structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as the power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases the methods that include randomization and replication elements on the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and the satisfying of some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
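
    As a minimal sketch of the Kronecker-product style of generator mentioned above (the seed matrix and sizes are illustrative placeholders, not the project's own code), one can Kronecker-power a small seed of edge probabilities and then sample edges independently:

        import numpy as np

        def kronecker_graph(seed_probs, power, rng=None):
            # Kronecker-power the seed: replicates the coarse structure at finer scales
            rng = rng or np.random.default_rng(0)
            probs = seed_probs
            for _ in range(power - 1):
                probs = np.kron(probs, seed_probs)
            # Randomize the finest-scale relationships by independent edge sampling
            return (rng.random(probs.shape) < probs).astype(int)

        # Hypothetical 2x2 seed with heavy self-affinity; power=8 gives a 256-node graph
        seed = np.array([[0.9, 0.5],
                         [0.5, 0.1]])
        adj = kronecker_graph(seed, power=8)
        print(adj.sum(), "edges")

    The replication step fixes the coarse-scale geometry while the sampling step randomizes the fine scale, which is exactly where the abstract locates the potential conflict between scales.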

  5. Large-scale integration of off-shore wind power and regulation strategies of cogeneration plants in the Danish electricity system

    DEFF Research Database (Denmark)

    Østergaard, Poul Alberg

    2005-01-01

    The article analyses how the amount of small-scale CHP plants and heat pumps, and the regulation strategies of these, affect the quantity of off-shore wind power that may be integrated into the Danish electricity supply.

  6. Measurement of Galaxy Cluster Integrated Comptonization and Mass Scaling Relations with the South Pole Telescope

    Energy Technology Data Exchange (ETDEWEB)

    Saliwanchik, B. R.; et al.

    2015-01-22

    We describe a method for measuring the integrated Comptonization (Y_SZ) of clusters of galaxies from measurements of the Sunyaev-Zel'dovich (SZ) effect in multiple frequency bands and use this method to characterize a sample of galaxy clusters detected in the South Pole Telescope (SPT) data. We use a Markov Chain Monte Carlo method to fit a β-model source profile and integrate Y_SZ within an angular aperture on the sky. In simulated observations of an SPT-like survey that include cosmic microwave background anisotropy, point sources, and atmospheric and instrumental noise at typical SPT-SZ survey levels, we show that we can accurately recover β-model parameters for input clusters. We measure Y_SZ for simulated semi-analytic clusters and find that Y_SZ is most accurately determined in an angular aperture comparable to the SPT beam size. We demonstrate the utility of this method to measure Y_SZ and to constrain mass scaling relations using X-ray mass estimates for a sample of 18 galaxy clusters from the SPT-SZ survey. Measuring Y_SZ within a 0.'75 radius aperture, we find an intrinsic log-normal scatter of 21% ± 11% in Y_SZ at a fixed mass. Measuring Y_SZ within a 0.3 Mpc projected radius (equivalent to 0.'75 at the survey median redshift z = 0.6), we find a scatter of 26% ± 9%. Prior to this study, the SPT observable found to have the lowest scatter with mass was cluster detection significance. We demonstrate, from both simulations and SPT-observed clusters, that Y_SZ measured within an aperture comparable to the SPT beam size is equivalent, in terms of scatter with cluster mass, to SPT cluster detection significance.
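
    A minimal numerical sketch of the aperture-integration step described above (the β-model parameters below are illustrative placeholders, not SPT-SZ values):

        import numpy as np
        from scipy.integrate import quad

        def y_sz_aperture(y0, theta_c, beta, theta_ap):
            """Integrated Comptonization within an angular aperture for an
            isothermal beta-model profile; angles in arcminutes."""
            # y(theta) = y0 * (1 + (theta/theta_c)**2) ** ((1 - 3*beta) / 2)
            profile = lambda t: y0 * (1.0 + (t / theta_c) ** 2) ** ((1.0 - 3.0 * beta) / 2.0)
            # Y_SZ = 2*pi * integral of y(theta) * theta dtheta over the aperture
            val, _ = quad(lambda t: 2.0 * np.pi * t * profile(t), 0.0, theta_ap)
            return val

        # Hypothetical cluster: central y ~ 1e-4, core radius 0.5', 0.75' aperture
        print(y_sz_aperture(y0=1e-4, theta_c=0.5, beta=1.0, theta_ap=0.75))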

  7. Integrals of random fields treated by the model correction factor method

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  8. Numerical methods for multi-scale modeling of non-Newtonian flows

    Science.gov (United States)

    Symeonidis, Vasileios

    This work presents numerical methods for the simulation of Non-Newtonian fluids at the continuum as well as the mesoscopic level. The former is achieved with Direct Numerical Simulation (DNS) spectral h/p methods, while the latter employs the Dissipative Particle Dynamics (DPD) technique. Physical results are also presented as a motivation for a clear understanding of the underlying numerical approaches. The macroscopic simulations employ two non-Newtonian models, namely the Reiner-Rivlin (RR) and the viscoelastic FENE-P model. (1) A spectral viscosity method defined by two parameters ε, M is used to stabilize the FENE-P conformation tensor c. Convergence studies are presented for different combinations of these parameters. Two boundary conditions for the tensor c are also investigated. (2) Agreement is achieved with other works for Stokes flow of a two-dimensional cylinder in a channel. Comparison of the axial normal stress and drag coefficient on the cylinder is presented. Further, similar results from unsteady two- and three-dimensional turbulent flows past a flat plate in a channel are shown. (3) The RR problem is formulated for nearly incompressible flows, with the introduction of a mathematically equivalent tensor formulation. A spectral viscosity method and polynomial over-integration are studied. Convergence studies, including a three-dimensional channel flow with a parallel slot, investigate numerical problems arising from elemental boundaries and sharp corners. (4) The round hole pressure problem is presented for Newtonian and RR fluids in geometries with different hole sizes. Comparison with experimental data is made for the Newtonian case. The flaw in the experimental assumptions of undisturbed pressure opposite the hole is revealed, while good agreement with the data is shown. The Higashitani-Pritchard kinematical theory for RR fluids is recovered for round holes and an approximate formula for the RR Stokes hole pressure is presented. The mesoscopic

  9. TradeWind. Integrating wind. Developing Europe's power market for the large-scale integration of wind power. Final report

    Energy Technology Data Exchange (ETDEWEB)

    2009-02-15

    Based on a single European grid and power market system, the TradeWind project explores to what extent large-scale wind power integration challenges could be addressed by reinforcing interconnections between Member States in Europe. Additionally, the project looks at the conditions required for a sound power market design that ensures a cost-effective integration of wind power at EU level. In this way, the study addresses two issues of key importance for the future integration of renewable energy, namely the weak interconnectivity levels between control zones and the inflexibility and fragmented nature of the European power market. Work on critical transmission paths and interconnectors is slow for a variety of reasons including planning and administrative barriers, lack of public acceptance, insufficient economic incentives for TSOs, and the lack of a joint European approach by the key stakeholders. (au)

  10. Iterative algorithm for the volume integral method for magnetostatics problems

    International Nuclear Information System (INIS)

    Pasciak, J.E.

    1980-11-01

    Volume integral methods for solving nonlinear magnetostatics problems are considered in this paper. The integral method is discretized by a Galerkin technique. Estimates are given which show that the linearized problems are well conditioned and hence easily solved using iterative techniques. Comparisons of iterative algorithms with the elimination method of GFUN3D shows that the iterative method gives an order of magnitude improvement in computational time as well as memory requirements for large problems. Computational experiments for a test problem as well as a double layer dipole magnet are given. Error estimates for the linearized problem are also derived
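
    The conditioning argument can be illustrated with a generic sketch (a synthetic symmetric positive definite system standing in for the linearized Galerkin equations; this is not GFUN3D code): when the matrix is well conditioned, conjugate-gradient iteration reproduces the elimination answer using only matrix-vector products and no stored factorization:

        import numpy as np
        from scipy.sparse.linalg import cg

        rng = np.random.default_rng(1)
        n = 500
        M = rng.standard_normal((n, n)) / np.sqrt(n)
        A = M @ M.T + np.eye(n)           # SPD and well conditioned, dense like a discretized integral operator
        b = rng.standard_normal(n)

        x_direct = np.linalg.solve(A, b)  # elimination: O(n^3) work and full fill-in
        x_iter, info = cg(A, b)           # iterative: a handful of matrix-vector products
        print(info, np.linalg.norm(x_direct - x_iter))

    For well-conditioned systems the iteration count is essentially independent of n, which is consistent with the order-of-magnitude savings in time and memory reported above.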

  11. An NDVI-assisted remote sensing image adaptive scale segmentation method

    Science.gov (United States)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multi-scale segmentation of images can effectively form boundaries of different objects at different scales. However, for remote sensing images with wide coverage and complicated ground objects, the number of suitable segmentation scales and the size of each scale are still difficult to determine accurately, which severely restricts rapid information extraction from the remote sensing image. A great deal of experimentation has shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For different regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results show that the adaptive segmentation method based on NDVI can effectively create object boundaries for the different ground objects of remote sensing images.
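
    A toy sketch of the NDVI-similarity idea on a synthetic two-band image (the paper's iterative scale-selection loop is more involved; the bands and threshold here are illustrative):

        import numpy as np

        def ndvi(nir, red, eps=1e-9):
            # Normalized difference vegetation index
            return (nir - red) / (nir + red + eps)

        # Synthetic scene: a vegetated patch (high NIR, low red) within bare soil
        nir = np.full((100, 100), 0.30); nir[30:70, 30:70] = 0.80
        red = np.full((100, 100), 0.25); red[30:70, 30:70] = 0.10
        v = ndvi(nir, red)

        # Group pixels whose NDVI lies within a similarity threshold of a seed
        # pixel -- a crude stand-in for one step of the adaptive segmentation
        threshold = 0.2
        segment = np.abs(v - v[50, 50]) < threshold
        print(segment.sum(), "pixels assigned to the vegetation segment")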

  12. A multi-scale method of mapping urban influence

    Science.gov (United States)

    Timothy G. Wade; James D. Wickham; Nicola Zacarelli; Kurt H. Riitters

    2009-01-01

    Urban development can impact environmental quality and ecosystem services well beyond urban extent. Many methods to map urban areas have been developed and used in the past, but most have simply tried to map existing extent of urban development, and all have been single-scale techniques. The method presented here uses a clustering approach to look beyond the extant...

  13. Permutation statistical methods an integrated approach

    CERN Document Server

    Berry, Kenneth J; Johnston, Janis E

    2016-01-01

    This research monograph provides a synthesis of a number of statistical tests and measures, which, at first consideration, appear disjoint and unrelated. Numerous comparisons of permutation and classical statistical methods are presented, and the two methods are compared via probability values and, where appropriate, measures of effect size. Permutation statistical methods, compared to classical statistical methods, do not rely on theoretical distributions, avoid the usual assumptions of normality and homogeneity of variance, and depend only on the data at hand. This text takes a unique approach to explaining statistics by integrating a large variety of statistical methods, and establishing the rigor of a topic that to many may seem to be a nascent field in statistics. This topic is new in that it took modern computing power to make permutation methods available to people working in the mainstream of research. This research monograph addresses a statistically-informed audience, and can also easily serve as a ...
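
    A minimal sketch of the core computation such texts build on -- a two-sample permutation test on the difference of means that depends only on the data at hand, with no normality or homogeneity-of-variance assumption (the data below are synthetic):

        import numpy as np

        def permutation_test(x, y, n_perm=10_000, rng=None):
            """Two-sided p-value: the share of label permutations whose
            statistic is at least as extreme as the observed one."""
            rng = rng or np.random.default_rng(0)
            pooled = np.concatenate([x, y])
            observed = x.mean() - y.mean()
            count = 0
            for _ in range(n_perm):
                perm = rng.permutation(pooled)
                stat = perm[:len(x)].mean() - perm[len(x):].mean()
                count += abs(stat) >= abs(observed)
            return count / n_perm

        x = np.array([4.1, 5.0, 6.2, 5.8, 4.9])
        y = np.array([3.2, 4.0, 3.8, 4.4, 3.5])
        print(permutation_test(x, y))   # small value: the group means differ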

  14. Dual linear structured support vector machine tracking method via scale correlation filter

    Science.gov (United States)

    Li, Weisheng; Chen, Yanquan; Xiao, Bin; Feng, Chen

    2018-01-01

    Adaptive tracking-by-detection methods based on the structured support vector machine (SVM) have performed well on recent visual tracking benchmarks. However, these methods did not adopt an effective strategy for object scale estimation, which limits overall tracking performance. We present a tracking method based on a dual linear structured support vector machine (DLSSVM) with a discriminative scale correlation filter. The collaborative tracker, composed of a DLSSVM model and a scale correlation filter, obtains good results in tracking target position and estimating scale. The fast Fourier transform is applied for detection. Extensive experiments show that our tracking approach outperforms many popular top-ranking trackers. On a benchmark including 100 challenging video sequences, the average precision of the proposed method is 82.8%.

  15. An innovative large scale integration of silicon nanowire-based field effect transistors

    Science.gov (United States)

    Legallais, M.; Nguyen, T. T. T.; Mouis, M.; Salem, B.; Robin, E.; Chenevier, P.; Ternon, C.

    2018-05-01

    Since the early 2000s, silicon nanowire field effect transistors have been emerging as ultrasensitive biosensors that offer label-free, portable and rapid detection. Nevertheless, their large-scale production remains an ongoing challenge due to time-consuming, complex and costly fabrication technology. In order to bypass these issues, we report here on the first integration of silicon nanowire networks, called nanonets, into long-channel field effect transistors using a standard microelectronics process. Special attention is paid to the silicidation of the contacts, which involves a large number of SiNWs. The electrical characteristics of these FETs, constituted by randomly oriented silicon nanowires, are also studied. Compatible integration on the back end of CMOS readout and promising electrical performance open new opportunities for sensing applications.

  16. Integration of equations of parabolic type by the method of nets

    CERN Document Server

    Saul'Yev, V K; Stark, M; Ulam, S

    1964-01-01

    International Series of Monographs in Pure and Applied Mathematics, Volume 54: Integration of Equations of Parabolic Type by the Method of Nets deals with solving parabolic partial differential equations using the method of nets. The first part of this volume focuses on the construction of net equations, with emphasis on the stability and accuracy of the approximating net equations. The method of nets or method of finite differences (used to define the corresponding numerical method in ordinary differential equations) is one of many different approximate methods of integration of partial diff
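
    A minimal sketch of the method of nets for the model parabolic equation u_t = a*u_xx (a standard explicit net equation with illustrative parameters), including the stability restriction r = a*dt/dx**2 <= 1/2 that the book's analysis of approximating net equations centres on:

        import numpy as np

        a, nx, nt = 1.0, 51, 2000
        dx = 1.0 / (nx - 1)
        dt = 0.4 * dx**2 / a            # r = 0.4 satisfies the stability bound r <= 1/2
        r = a * dt / dx**2

        x = np.linspace(0.0, 1.0, nx)
        u = np.sin(np.pi * x)           # initial condition; u = 0 at both boundaries
        for _ in range(nt):
            u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])   # explicit net equation

        exact = np.sin(np.pi * x) * np.exp(-np.pi**2 * a * nt * dt)
        print(np.abs(u - exact).max())  # small discretization error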

  17. The 3D Lagrangian Integral Method

    DEFF Research Database (Denmark)

    Rasmussen, Henrik Koblitz

    2003-01-01

    These are processes such as thermo-forming, gas-assisted injection moulding, and all kinds of simultaneous multi-component polymer processing operations. In all polymer processing operations, free surfaces (or interfaces) are present, and the dynamics of these surfaces are of interest. In the "3D Lagrangian Integral Method" for simulating viscoelastic flow, the governing equations are solved for the particle positions (Lagrangian kinematics). Therefore, the transient motion of surfaces can be followed in a particularly simple fashion, even in 3D viscoelastic flow. The "3D Lagrangian Integral Method" is described

  18. Systems approach to monitoring and evaluation guides scale up of the Standard Days Method of family planning in Rwanda

    Science.gov (United States)

    Igras, Susan; Sinai, Irit; Mukabatsinda, Marie; Ngabo, Fidele; Jennings, Victoria; Lundgren, Rebecka

    2014-01-01

    There is no guarantee that a successful pilot program introducing a reproductive health innovation can also be expanded successfully to the national or regional level, because the scaling-up process is complex and multilayered. This article describes how a successful pilot program to integrate the Standard Days Method (SDM) of family planning into existing Ministry of Health services was scaled up nationally in Rwanda. Much of the success of the scale-up effort was due to systematic use of monitoring and evaluation (M&E) data from several sources to make midcourse corrections. Four lessons learned illustrate this crucially important approach. First, ongoing M&E data showed that provider training protocols and client materials that worked in the pilot phase did not work at scale; therefore, we simplified these materials to support integration into the national program. Second, triangulation of ongoing monitoring data with national health facility and population-based surveys revealed serious problems in supply chain mechanisms that affected SDM (and the accompanying CycleBeads client tool) availability and use; new procedures for ordering supplies and monitoring stockouts were instituted at the facility level. Third, supervision reports and special studies revealed that providers were imposing unnecessary medical barriers to SDM use; refresher training and revised supervision protocols improved provider practices. Finally, informal environmental scans, stakeholder interviews, and key events timelines identified shifting political and health policy environments that influenced scale-up outcomes; ongoing advocacy efforts are addressing these issues. The SDM scale-up experience in Rwanda confirms the importance of monitoring and evaluating programmatic efforts continuously, using a variety of data sources, to improve program outcomes. PMID:25276581

  19. Planar plane-wave matrix theory at the four loop order: integrability without BMN scaling

    International Nuclear Information System (INIS)

    Fischbacher, Thomas; Klose, Thomas; Plefka, Jan

    2005-01-01

    We study SU(N) plane-wave matrix theory up to fourth perturbative order in its large N planar limit. The effective Hamiltonian in the closed su(2) subsector of the model is explicitly computed through a specially tailored computer program to perform large-scale distributed symbolic algebra and generation of planar graphs. The number of graphs involved ran into the billions. The outcome of our computation establishes the four-loop integrability of the planar plane-wave matrix model. To elucidate the integrable structure we apply the recent technology of the perturbative asymptotic Bethe ansatz to our model. The resulting S-matrix turns out to be structurally similar to, but nevertheless distinct from, the long-range spin-chain S-matrices of Inozemtsev, Beisert-Dippel-Staudacher and Arutyunov-Frolov-Staudacher considered so far in the AdS/CFT context. In particular our result displays a breakdown of BMN scaling at the four-loop order. That is, while there exists an appropriate identification of the matrix theory mass parameter with the coupling constant of the N=4 superconformal Yang-Mills theory which yields an eighth-order lattice derivative for well-separated impurities (naively implying BMN scaling), the detailed impurity contact interactions ruin this scaling property at the four-loop order. Moreover we study the issue of 'wrapping' interactions, which show up for the first time at this loop order through a length-four Konishi descendant operator. (author)

  20. Elucidating dynamic metabolic physiology through network integration of quantitative time-course metabolomics

    DEFF Research Database (Denmark)

    Bordbar, Aarash; Yurkovich, James T.; Paglia, Giuseppe

    2017-01-01

    The increasing availability of metabolomics data necessitates novel methods for deeper data analysis and interpretation. We present a flux balance analysis method that allows for the computation of dynamic intracellular metabolic changes at the cellular scale through integration of time-course ab

  1. The scaling of stress distribution under small scale yielding by T-scaling method and application to prediction of the temperature dependence on fracture toughness

    International Nuclear Information System (INIS)

    Ishihara, Kenichi; Hamada, Takeshi; Meshii, Toshiyuki

    2017-01-01

    In this paper, a new method for scaling the crack-tip stress distribution under small-scale yielding conditions is proposed, named the T-scaling method. This method makes it possible to distinguish the different stress distributions of materials with different tensile properties but identical loads in terms of K or J. Then, by assuming that the temperature dependence of a material is represented by the temperature dependence of its stress-strain relationship, a method is proposed to predict the fracture load at an arbitrary temperature from the already known fracture load at a reference temperature. This method combines the T-scaling method with the knowledge that the fracture stress for slip-induced cleavage fracture is temperature independent. Once the fracture load is predicted, the fracture toughness Jc at the temperature under consideration can be evaluated by running an elastic-plastic finite element analysis. Finally, the above-mentioned framework for predicting the Jc temperature dependence of a material in the ductile-to-brittle transition temperature region was validated for the 0.55% carbon steel JIS S55C. The proposed framework seems to offer a possibility of solving the problem that the master curve approach faces in the relatively higher temperature region, by requiring only tensile tests. (author)

  2. CLOSE RANGE HYPERSPECTRAL IMAGING INTEGRATED WITH TERRESTRIAL LIDAR SCANNING APPLIED TO ROCK CHARACTERISATION AT CENTIMETRE SCALE

    Directory of Open Access Journals (Sweden)

    T. H. Kurz

    2012-07-01

    Compact and lightweight hyperspectral imagers allow the application of close range hyperspectral imaging with a ground based scanning setup for geological fieldwork. Using such a scanning setup, steep cliff sections and quarry walls can be scanned with a more appropriate viewing direction and a higher image resolution than from airborne and spaceborne platforms. Integration of the hyperspectral imagery with terrestrial lidar scanning provides the hyperspectral information in a georeferenced framework and enables measurement at centimetre scale. In this paper, three geological case studies are used to demonstrate the potential of this method for rock characterisation. Two case studies are applied to carbonate quarries where mapping of different limestone and dolomite types was required, as well as measurements of faults and layer thicknesses from inaccessible parts of the quarries. The third case study demonstrates the method using artificial lighting, applied in a subsurface scanning scenario where solar radiation cannot be utilised.

  3. Small-scale hybrid plant integrated with municipal energy supply system

    International Nuclear Information System (INIS)

    Bakken, B.H.; Fossum, M.; Belsnes, M.M.

    2001-01-01

    This paper describes a research program started in 2001 to optimize environmental impact and cost of a small-scale hybrid plant based on candidate resources, transportation technologies and conversion efficiency, including integration with existing energy distribution systems. Special attention is given to a novel hybrid energy concept fuelled by municipal solid waste. The commercial interest for the model is expected to be more pronounced in remote communities and villages, including communities subject to growing prosperity. To enable optimization of complex energy distribution systems with multiple energy sources and carriers a flexible and robust methodology must be developed. This will enable energy companies and consultants to carry out comprehensive feasibility studies prior to investment, including technological, economic and environmental aspects. Governmental and municipal bodies will be able to pursue scenario studies involving energy systems and their impact on the environment, and measure the consequences of possible regulation regimes on environmental questions. This paper describes the hybrid concept for conversion of municipal solid waste in terms of energy supply, as well as the methodology for optimizing such integrated energy systems. (author)

  4. Methods for Large-Scale Nonlinear Optimization.

    Science.gov (United States)

    1980-05-01

    STANFORD, CALIFORNIA 94305. METHODS FOR LARGE-SCALE NONLINEAR OPTIMIZATION, by Philip E. Gill, Walter Murray, Michael A. Saunders, and Margaret H. Wright. ... a typical iteration can be partitioned so that ... where B is an m x m basis matrix. This partition effectively divides the variables into three classes ... attention is given to the standard of the coding or the documentation. A much better way of obtaining mathematical software is from a software library

  5. Systematization of simplified J-integral evaluation method for flaw evaluation at high temperature

    International Nuclear Information System (INIS)

    Miura, Naoki; Takahashi, Yukio; Nakayama, Yasunari; Shimakawa, Takashi

    2000-01-01

    The J-integral is an effective inelastic fracture parameter for the flaw evaluation of cracked components at high temperature. The evaluation of the J-integral for an arbitrary crack configuration and an arbitrary loading condition can generally be accomplished by detailed numerical analysis such as finite element analysis; however, this is time-consuming and requires a high degree of expertise for its implementation. Therefore, it is important to develop simplified J-integral estimation techniques from the viewpoint of industrial requirements. In this study, a simplified J-integral evaluation method is proposed to estimate two types of J-integral parameters. One is the fatigue J-integral range, which describes fatigue crack propagation behavior, and the other is the creep J-integral, which describes creep crack propagation behavior. This paper presents the systematization of the simplified J-integral evaluation method, incorporating the reference stress method and the concept of elastic follow-up, and proposes a comprehensive evaluation procedure. The verification of the proposed method is presented in Part II of this paper. (author)

  6. Real-time hybrid simulation using the convolution integral method

    International Nuclear Information System (INIS)

    Kim, Sung Jig; Christenson, Richard E; Wojtkiewicz, Steven F; Johnson, Erik A

    2011-01-01

    This paper proposes a real-time hybrid simulation method that will allow complex systems to be tested within the hybrid test framework by employing the convolution integral (CI) method. The proposed CI method is potentially transformative for real-time hybrid simulation. The CI method can allow real-time hybrid simulation to be conducted regardless of the size and complexity of the numerical model and for numerical stability to be ensured in the presence of high frequency responses in the simulation. This paper presents the general theory behind the proposed CI method and provides experimental verification of the proposed method by comparing the CI method to the current integration time-stepping (ITS) method. Real-time hybrid simulation is conducted in the Advanced Hazard Mitigation Laboratory at the University of Connecticut. A seismically excited two-story shear frame building with a magneto-rheological (MR) fluid damper is selected as the test structure to experimentally validate the proposed method. The building structure is numerically modeled and simulated, while the MR damper is physically tested. Real-time hybrid simulation using the proposed CI method is shown to provide accurate results
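
    A minimal sketch of the CI idea for a single-degree-of-freedom numerical substructure (the properties below are illustrative; the physical MR-damper coupling and real-time hardware are omitted): the substructure is reduced once to its impulse response, and its displacement under any force history is then a discrete convolution rather than a step-by-step integration of the equations of motion:

        import numpy as np

        m, wn, zeta = 1.0, 2.0 * np.pi, 0.05       # hypothetical SDOF mass, frequency, damping
        wd = wn * np.sqrt(1.0 - zeta**2)           # damped natural frequency
        dt = 0.001
        t = np.arange(0.0, 10.0, dt)

        # Unit-impulse response of the SDOF system
        h = np.exp(-zeta * wn * t) * np.sin(wd * t) / (m * wd)
        f = np.sin(3.0 * t)                        # measured/applied force history

        u = np.convolve(f, h)[:len(t)] * dt        # Duhamel convolution integral
        print(u[-1])

    Because the impulse response is computed once offline, the per-step cost during a test is independent of the size of the underlying numerical model, which is the property the CI method exploits.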

  7. Measurements of Turbulent Flame Speed and Integral Length Scales in a Lean Stationary Premixed Flame

    OpenAIRE

    Klingmann, Jens; Johansson, Bengt

    1998-01-01

    Turbulent premixed natural gas - air flame velocities have been measured in a stationary axi-symmetric burner using LDA. The flame was stabilized by letting the flow retard toward a stagnation plate downstream of the burner exit. Turbulence was generated by letting the flow pass through a plate with drilled holes. Three different hole diameters were used, 3, 6 and 10 mm, in order to achieve different turbulent length scales. Turbulent integral length scales were measured using two-point LD...
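
    A sketch of how an integral scale is typically extracted from such single-point velocity records (synthetic signal; Taylor's frozen-turbulence hypothesis is assumed to convert the integral time scale into a length):

        import numpy as np

        def integral_length_scale(u, dt, mean_velocity):
            """Integrate the autocorrelation of the velocity fluctuations up to
            its first zero crossing, then apply L = U * T_int."""
            fluct = u - u.mean()
            acf = np.correlate(fluct, fluct, mode="full")[len(u) - 1:]
            acf /= acf[0]                          # normalise so rho(0) = 1
            zero = np.argmax(acf < 0.0)            # first zero crossing
            t_int = np.trapz(acf[:zero], dx=dt)    # integral time scale
            return mean_velocity * t_int

        rng = np.random.default_rng(0)
        # Correlated synthetic record: white noise smoothed over 50 samples
        u = 5.0 + np.convolve(rng.standard_normal(20_000), np.ones(50) / 50, "same")
        print(integral_length_scale(u, dt=1e-3, mean_velocity=5.0))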

  8. Mathematical methods linear algebra normed spaces distributions integration

    CERN Document Server

    Korevaar, Jacob

    1968-01-01

    Mathematical Methods, Volume I: Linear Algebra, Normed Spaces, Distributions, Integration focuses on advanced mathematical tools used in applications and the basic concepts of algebra, normed spaces, integration, and distributions.The publication first offers information on algebraic theory of vector spaces and introduction to functional analysis. Discussions focus on linear transformations and functionals, rectangular matrices, systems of linear equations, eigenvalue problems, use of eigenvectors and generalized eigenvectors in the representation of linear operators, metric and normed vector

  9. A data-model integration approach toward improved understanding on wetland functions and hydrological benefits at the catchment scale

    Science.gov (United States)

    Yeo, I. Y.; Lang, M.; Lee, S.; Huang, C.; Jin, H.; McCarty, G.; Sadeghi, A.

    2017-12-01

    The wetland ecosystem plays crucial roles in improving hydrological function and ecological integrity for the downstream water and the surrounding landscape. However, changing behaviours and functioning of wetland ecosystems are poorly understood and extremely difficult to characterize. Improved understanding on hydrological behaviours of wetlands, considering their interaction with surrounding landscapes and impacts on downstream waters, is an essential first step toward closing the knowledge gap. We present an integrated wetland-catchment modelling study that capitalizes on recently developed inundation maps and other geospatial data. The aim of the data-model integration is to improve spatial prediction of wetland inundation and evaluate cumulative hydrological benefits at the catchment scale. In this paper, we highlight problems arising from data preparation, parameterization, and process representation in simulating wetlands within a distributed catchment model, and report the recent progress on mapping of wetland dynamics (i.e., inundation) using multiple remotely sensed data. We demonstrate the value of spatially explicit inundation information to develop site-specific wetland parameters and to evaluate model prediction at multi-spatial and temporal scales. This spatial data-model integrated framework is tested using Soil and Water Assessment Tool (SWAT) with improved wetland extension, and applied for an agricultural watershed in the Mid-Atlantic Coastal Plain, USA. This study illustrates necessity of spatially distributed information and a data integrated modelling approach to predict inundation of wetlands and hydrologic function at the local landscape scale, where monitoring and conservation decision making take place.

  10. Application of Stochastic Sensitivity Analysis to Integrated Force Method

    Directory of Open Access Journals (Sweden)

    X. F. Wei

    2012-01-01

    As a new formulation in structural analysis, the Integrated Force Method (IFM) has been successfully applied to many structures in civil, mechanical, and aerospace engineering, owing to its accurate estimation of forces in computation. It is now being further extended to the probabilistic domain. For the assessment of uncertainty effects in system optimization and identification, the probabilistic sensitivity analysis of the IFM was further investigated in this study. A set of stochastic sensitivity analysis formulations of the Integrated Force Method was developed using the perturbation method. Numerical examples are presented to illustrate its application. Its efficiency and accuracy were also substantiated with direct Monte Carlo simulations and the reliability-based sensitivity method. The numerical algorithm was shown to be readily adaptable to the existing program, since the models of the stochastic finite element and stochastic design sensitivity are almost identical.

  11. Investigation of experimental methods for EMP transient-state disturbance effects in integrated circuits (ICs)

    International Nuclear Information System (INIS)

    Li Xiaowei

    2004-01-01

    The study of transient-state disturbance characteristics of integrated circuits (ICs) must start from their coupling paths. Through cable (antenna) coupling, an EMP is converted into a pulsed current and voltage that impact the I/O ports of the integrated circuit along the cable. Considering the structural features of armament systems, the EMP effect on the integrated circuits inside such systems is analyzed. The current-injection method for IC EMP effect experiments is investigated, and several experimental methods are given. (authors)

  12. Nonlinear Fredholm Integral Equation of the Second Kind with Quadrature Methods

    Directory of Open Access Journals (Sweden)

    M. Jafari Emamzadeh

    2010-06-01

    In this paper, a numerical method for solving the nonlinear Fredholm integral equation is presented. We approximate the solution of this equation by quadrature methods and, by doing so, solve the nonlinear Fredholm integral equation more accurately. Several examples are given at the end of this paper.
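
    A minimal quadrature sketch for an equation of this class (the kernel, right-hand side, and λ below are illustrative choices, not the paper's examples): trapezoidal weights discretize the integral in u(x) = f(x) + λ ∫ K(x,t) u(t)² dt, and fixed-point iteration handles the nonlinearity:

        import numpy as np

        n = 101
        x = np.linspace(0.0, 1.0, n)
        w = np.full(n, 1.0 / (n - 1))              # trapezoidal quadrature weights
        w[0] = w[-1] = 0.5 / (n - 1)

        lam = 0.2                                  # small enough for a contraction
        K = np.exp(-np.abs(x[:, None] - x[None, :]))   # kernel K(x, t)
        f = np.sin(np.pi * x)                      # right-hand side

        u = f.copy()
        for it in range(200):                      # fixed-point (successive substitution)
            u_new = f + lam * (K @ (w * u**2))
            if np.abs(u_new - u).max() < 1e-12:
                break
            u = u_new
        print(it, "iterations; u(0.5) =", u[n // 2])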

  13. On the mass-coupling relation of multi-scale quantum integrable models

    Energy Technology Data Exchange (ETDEWEB)

    Bajnok, Zoltán; Balog, János [MTA Lendület Holographic QFT Group, Wigner Research Centre,H-1525 Budapest 114, P.O.B. 49 (Hungary); Ito, Katsushi [Department of Physics, Tokyo Institute of Technology,2-12-1 Ookayama, Meguro-ku, Tokyo 152-8551 (Japan); Satoh, Yuji [Institute of Physics, University of Tsukuba,1-1-1 Tennodai, Tsukuba, Ibaraki 305-8571 (Japan); Tóth, Gábor Zsolt [MTA Lendület Holographic QFT Group, Wigner Research Centre,H-1525 Budapest 114, P.O.B. 49 (Hungary)

    2016-06-13

    We determine exactly the mass-coupling relation for the simplest multi-scale quantum integrable model, the homogeneous sine-Gordon model with two independent mass-scales. We first reformulate its perturbed coset CFT description in terms of the perturbation of a projected product of minimal models. This representation enables us to identify conserved tensor currents on the UV side. These UV operators are then mapped via form factor perturbation theory to operators on the IR side, which are characterized by their form factors. The relation between the UV and IR operators is given in terms of the sought-for mass-coupling relation. By generalizing the Θ sum rule Ward identity we are able to derive differential equations for the mass-coupling relation, which we solve in terms of hypergeometric functions. We check these results against the data obtained by numerically solving the thermodynamic Bethe Ansatz equations, and find complete agreement.

  14. Progress in heavy ion driven inertial fusion energy: From scaled experiments to the integrated research experiment

    International Nuclear Information System (INIS)

    Barnard, J.J.; Ahle, L.E.; Baca, D.; Bangerter, R.O.; Bieniosek, F.M.; Celata, C.M.; Chacon-Golcher, E.; Davidson, R.C.; Faltens, A.; Friedman, A.; Franks, R.M.; Grote, D.P.; Haber, I.; Henestroza, E.; Hoon, M.J.L. de; Kaganovich, I.; Karpenko, V.P.; Kishek, R.A.; Kwan, J.W.; Lee, E.P.; Logan, B.G.; Lund, S.M.; Meier, W.R.; Molvik, A.W.; Olson, C.; Prost, L.R.; Qin, H.; Rose, D.; Sabbi, G.-L.; Sangster, T.C.; Seidl, P.A.; Sharp, W.M.; Shuman, D.; Vay, J.-L.; Waldron, W.L.; Welch, D.; Yu, S.S.

    2001-01-01

    The promise of inertial fusion energy driven by heavy ion beams requires the development of accelerators that produce ion currents (∼hundreds of amperes per beam) and ion energies (∼1-10 GeV) that have not been achieved simultaneously in any existing accelerator. The high currents imply high generalized perveances, large tune depressions, and high space charge potentials of the beam center relative to the beam pipe. Many of the scientific issues associated with ion beams of high perveance and large tune depression have been addressed over the last two decades on scaled experiments at Lawrence Berkeley and Lawrence Livermore National Laboratories, the University of Maryland, and elsewhere. The additional requirement of high space charge potential (or equivalently high line charge density) gives rise to effects (particularly the role of electrons in beam transport) which must be understood before proceeding to a large-scale accelerator. The first phase of a new series of experiments in the Heavy Ion Fusion Virtual National Laboratory (HIF VNL), the High Current Experiments (HCX), is now being constructed at LBNL. The mission of the HCX will be to transport beams with driver line charge density so as to investigate the physics of this regime, including constraints on the maximum radial filling factor of the beam through the pipe. This factor is important for determining both cost and reliability of a driver-scale accelerator. The HCX will provide data for design of the next steps in the sequence of experiments leading to an inertial fusion energy power plant. The focus of the program after the HCX will be on integration of all of the manipulations required for a driver. In the near term following HCX, an Integrated Beam Experiment (IBX) of the same general scale as the HCX is envisioned. The step which bridges the gap between the IBX and an engineering test facility for fusion has been designated the Integrated Research Experiment (IRE). The IRE (like the IBX) will provide an

  15. Coarse-to-Fine Segmentation with Shape-Tailored Continuum Scale Spaces

    KAUST Repository

    Khan, Naeemullah

    2017-11-09

    We formulate an energy for segmentation that is designed to have preference for segmenting the coarse over fine structure of the image, without smoothing across boundaries of regions. The energy is formulated by integrating a continuum of scales from a scale space computed from the heat equation within regions. We show that the energy can be optimized without computing a continuum of scales, but instead from a single scale. This makes the method computationally efficient in comparison to energies using a discrete set of scales. We apply our method to texture and motion segmentation. Experiments on benchmark datasets show that a continuum of scales leads to better segmentation accuracy over discrete scales and other competing methods.

  16. Coarse-to-Fine Segmentation with Shape-Tailored Continuum Scale Spaces

    KAUST Repository

    Khan, Naeemullah; Hong, Byung-Woo; Yezzi, Anthony; Sundaramoorthi, Ganesh

    2017-01-01

    We formulate an energy for segmentation that is designed to have preference for segmenting the coarse over fine structure of the image, without smoothing across boundaries of regions. The energy is formulated by integrating a continuum of scales from a scale space computed from the heat equation within regions. We show that the energy can be optimized without computing a continuum of scales, but instead from a single scale. This makes the method computationally efficient in comparison to energies using a discrete set of scales. We apply our method to texture and motion segmentation. Experiments on benchmark datasets show that a continuum of scales leads to better segmentation accuracy over discrete scales and other competing methods.

  17. Integrated Graduate and Continuing Education in Protein Chromatography for Bioprocess Development and Scale-Up

    Science.gov (United States)

    Carta, Jungbauer

    2011-01-01

    We describe an intensive course that integrates graduate and continuing education focused on the development and scale-up of chromatography processes used for the recovery and purification of proteins with special emphasis on biotherapeutics. The course includes lectures, laboratories, teamwork, and a design exercise and offers a complete view of…

  18. Deciphering the clinical effect of drugs through large-scale data integration

    DEFF Research Database (Denmark)

    Kjærulff, Sonny Kim

    This work demonstrates the power of a strategy that uses clinical data mining in association with chemical biology in order to reduce the search space and aid the identification of novel drug actions. The second article, described in chapter 3, outlines a high-confidence side-effect-drug interaction dataset. We ... demonstrate the importance of using high-confidence drug-side-effect data in deciphering the effect of small molecules in humans. In summary, this thesis presents computational systems chemical biology approaches that can help identify clinical effects of small molecules through large-scale data integration

  19. Quadratic algebras in the noncommutative integration method of wave equation

    International Nuclear Information System (INIS)

    Varaksin, O.L.

    1995-01-01

    The paper investigates applications of the method of noncommutative integration of linear partial differential equations. A nontrivial example is considered: the integration of the three-dimensional wave equation with the use of non-Abelian quadratic algebras.

  20. A method of orbital analysis for large-scale first-principles simulations

    Energy Technology Data Exchange (ETDEWEB)

    Ohwaki, Tsukuru [Advanced Materials Laboratory, Nissan Research Center, Nissan Motor Co., Ltd., 1 Natsushima-cho, Yokosuka, Kanagawa 237-8523 (Japan); Otani, Minoru [Nanosystem Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki 305-8568 (Japan); Ozaki, Taisuke [Research Center for Simulation Science (RCSS), Japan Advanced Institute of Science and Technology (JAIST), 1-1 Asahidai, Nomi, Ishikawa 923-1292 (Japan)

    2014-06-28

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4)

  1. A method of orbital analysis for large-scale first-principles simulations

    International Nuclear Information System (INIS)

    Ohwaki, Tsukuru; Otani, Minoru; Ozaki, Taisuke

    2014-01-01

    An efficient method of calculating the natural bond orbitals (NBOs) based on a truncation of the entire density matrix of a whole system is presented for large-scale density functional theory calculations. The method recovers an orbital picture for O(N) electronic structure methods which directly evaluate the density matrix without using Kohn-Sham orbitals, thus enabling quantitative analysis of chemical reactions in large-scale systems in the language of localized Lewis-type chemical bonds. With the density matrix calculated by either an exact diagonalization or O(N) method, the computational cost is O(1) for the calculation of NBOs associated with a local region where a chemical reaction takes place. As an illustration of the method, we demonstrate how an electronic structure in a local region of interest can be analyzed by NBOs in a large-scale first-principles molecular dynamics simulation for a liquid electrolyte bulk model (propylene carbonate + LiBF4)

  2. A New Feature Extraction Method Based on EEMD and Multi-Scale Fuzzy Entropy for Motor Bearing

    Directory of Open Access Journals (Sweden)

    Huimin Zhao

    2016-12-01

    Feature extraction is one of the most important, pivotal, and difficult problems in mechanical fault diagnosis, directly relating to the accuracy of fault diagnosis and the reliability of early fault prediction. Therefore, a new fault feature extraction method, called the EDOMFE method, based on integrating ensemble empirical mode decomposition (EEMD), mode selection, and multi-scale fuzzy entropy, is proposed in this paper to diagnose faults accurately. The EEMD method is used to decompose the vibration signal into a series of intrinsic mode functions (IMFs) with different physical significance. The correlation coefficient analysis method is used to calculate and determine three improved IMFs, which are close to the original signal. The multi-scale fuzzy entropy, with its ability to effectively distinguish the complexity of different signals, is used to calculate the entropy values of the selected three IMFs in order to form a feature vector with the complexity measure, which is regarded as the input of the support vector machine (SVM) model for training and constructing an SVM classifier (EOMSMFD, based on EDOMFE and SVM) to fulfil fault pattern recognition. Finally, the effectiveness of the proposed method is validated with real bearing vibration signals of a motor with different loads and fault severities. The experimental results show that the proposed EDOMFE method can effectively extract fault features from the vibration signal and that the proposed EOMSMFD method can accurately diagnose the fault types and fault severities for the inner race fault, the outer race fault, and the rolling element fault of the motor bearing. Therefore, the proposed method provides a new fault diagnosis technology for rotating machinery.
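
    A compact sketch of the multi-scale fuzzy entropy stage of the feature vector (simplified parameter choices; the EEMD decomposition and SVM stages are omitted):

        import numpy as np

        def coarse_grain(x, scale):
            # Non-overlapping averages of length `scale` (the multi-scale step)
            n = len(x) // scale
            return x[:n * scale].reshape(n, scale).mean(axis=1)

        def fuzzy_entropy(x, m=2, r=0.2, p=2):
            # Fuzzy entropy with exponential membership exp(-(d/r)**p);
            # r is taken relative to the signal's standard deviation
            r = r * x.std()
            def phi(k):
                tpl = np.array([x[i:i + k] - x[i:i + k].mean()
                                for i in range(len(x) - k)])
                d = np.abs(tpl[:, None, :] - tpl[None, :, :]).max(axis=2)
                sim = np.exp(-(d / r) ** p)
                n = len(tpl)
                return (sim.sum() - n) / (n * (n - 1))   # exclude self-matches
            return -np.log(phi(m + 1) / phi(m))

        rng = np.random.default_rng(0)
        signal = rng.standard_normal(1000)               # stand-in for a selected IMF
        for scale in (1, 2, 4):
            print(scale, fuzzy_entropy(coarse_grain(signal, scale)))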

  3. Monolithic Ge-on-Si lasers for large-scale electronic-photonic integration

    Science.gov (United States)

    Liu, Jifeng; Kimerling, Lionel C.; Michel, Jurgen

    2012-09-01

    A silicon-based monolithic laser source has long been envisioned as a key enabling component for large-scale electronic-photonic integration in future generations of high-performance computation and communication systems. In this paper we present a comprehensive review on the development of monolithic Ge-on-Si lasers for this application. Starting with a historical review of light emission from the direct gap transition of Ge dating back to the 1960s, we focus on the rapid progress in band-engineered Ge-on-Si lasers in the past five years after a nearly 30-year gap in this research field. Ge has become an interesting candidate for active devices in Si photonics in the past decade due to its pseudo-direct gap behavior and compatibility with Si complementary metal oxide semiconductor (CMOS) processing. In 2007, we proposed combining tensile strain with n-type doping to compensate the energy difference between the direct and indirect band gap of Ge, thereby achieving net optical gain for CMOS-compatible diode lasers. Here we systematically present theoretical modeling, material growth methods, spontaneous emission, optical gain, and lasing under optical and electrical pumping from band-engineered Ge-on-Si, culminating in recently demonstrated electrically pumped Ge-on-Si lasers with >1 mW output in the communication wavelength window of 1500-1700 nm. The broad gain spectrum enables on-chip wavelength division multiplexing. A unique feature of band-engineered pseudo-direct gap Ge light emitters is that the emission intensity increases with temperature, exactly opposite to conventional direct gap semiconductor light-emitting devices. This extraordinary thermal anti-quenching behavior greatly facilitates monolithic integration on Si microchips where temperatures can reach up to 80 °C during operation. The same band-engineering approach can be extended to other pseudo-direct gap semiconductors, allowing us to achieve efficient light emission at wavelengths previously

  4. Monolithic Ge-on-Si lasers for large-scale electronic–photonic integration

    International Nuclear Information System (INIS)

    Liu, Jifeng; Kimerling, Lionel C; Michel, Jurgen

    2012-01-01

    A silicon-based monolithic laser source has long been envisioned as a key enabling component for large-scale electronic–photonic integration in future generations of high-performance computation and communication systems. In this paper we present a comprehensive review on the development of monolithic Ge-on-Si lasers for this application. Starting with a historical review of light emission from the direct gap transition of Ge dating back to the 1960s, we focus on the rapid progress in band-engineered Ge-on-Si lasers in the past five years after a nearly 30-year gap in this research field. Ge has become an interesting candidate for active devices in Si photonics in the past decade due to its pseudo-direct gap behavior and compatibility with Si complementary metal oxide semiconductor (CMOS) processing. In 2007, we proposed combining tensile strain with n-type doping to compensate the energy difference between the direct and indirect band gap of Ge, thereby achieving net optical gain for CMOS-compatible diode lasers. Here we systematically present theoretical modeling, material growth methods, spontaneous emission, optical gain, and lasing under optical and electrical pumping from band-engineered Ge-on-Si, culminating in recently demonstrated electrically pumped Ge-on-Si lasers with >1 mW output in the communication wavelength window of 1500–1700 nm. The broad gain spectrum enables on-chip wavelength division multiplexing. A unique feature of band-engineered pseudo-direct gap Ge light emitters is that the emission intensity increases with temperature, exactly opposite to conventional direct gap semiconductor light-emitting devices. This extraordinary thermal anti-quenching behavior greatly facilitates monolithic integration on Si microchips where temperatures can reach up to 80 °C during operation. The same band-engineering approach can be extended to other pseudo-direct gap semiconductors, allowing us to achieve efficient light emission at wavelengths previously

  5. Integral methods of solving boundary-value problems of nonstationary heat conduction and their comparative analysis

    Science.gov (United States)

    Kot, V. A.

    2017-11-01

    The modern state of approximate integral methods used in applications where the processes of heat conduction and heat and mass transfer are of primary importance is considered. Integral methods have found wide utility in different fields of knowledge: problems of heat conduction with different heat-exchange conditions, simulation of thermal protection, Stefan-type problems, microwave heating of a substance, problems on a boundary layer, simulation of a fluid flow in a channel, thermal explosion, laser and plasma treatment of materials, simulation of the formation and melting of ice, inverse heat problems, determination of the temperature and thermal properties of nanoparticles and nanofluids, and others. Moreover, polynomial solutions are of interest because the determination of a temperature (concentration) field is an intermediate stage in the mathematical description of any other process. The following main methods were investigated on the basis of the error norms: the Tsoi and Postol'nik methods, the method of integral relations, the Goodman heat-balance integral method, the improved Volkov integral method, the matched integral method, the modified Hristov method, the Mayer integral method, the Kudinov method of additional boundary conditions, the Fedorov boundary method, the method of weighted temperature function, and the integral method of boundary characteristics. It was established that the two last-mentioned methods are characterized by high convergence and frequently give solutions whose accuracy is not worse than that of numerical solutions.
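
    The flavour of these methods is captured by the classical heat-balance example (a textbook illustration, not a result of the paper): for a semi-infinite solid with u_t = a u_xx suddenly held at surface temperature U_s, assume a quadratic profile inside a penetration depth \delta(t) and satisfy the heat equation only in integral form,

        u(x,t) = U_s \left(1 - \frac{x}{\delta(t)}\right)^{2}, \qquad
        \frac{d}{dt} \int_{0}^{\delta} u\, dx = -a \left. \frac{\partial u}{\partial x} \right|_{x=0}
        \;\Rightarrow\; \frac{d}{dt}\!\left(\frac{U_s \delta}{3}\right) = \frac{2 a U_s}{\delta}
        \;\Rightarrow\; \delta(t) = \sqrt{12\, a\, t},

    so the entire temperature history follows from a single ordinary differential equation for \delta(t). The methods surveyed above differ mainly in the assumed profile and in how the boundary conditions are weighted, which is what the error-norm comparison quantifies.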

  6. Integrated Site Investigation Methods and Modeling: Recent Developments at the BHRS (Invited)

    Science.gov (United States)

    Barrash, W.; Bradford, J. H.; Cardiff, M. A.; Dafflon, B.; Johnson, B. A.; Malama, B.; Thoma, M. J.

    2010-12-01

    The Boise Hydrogeophysical Research Site (BHRS) is a field-scale test facility in an unconfined aquifer with the goals of: developing cost-effective, non-invasive methods for quantitative characterization of heterogeneous aquifers using hydrologic and geophysical techniques; understanding fundamental relations and processes at multiple scales; and testing theories and models for groundwater flow and solute transport. The design of the BHRS supports a wide range of single-well, cross-hole, multiwell and multilevel hydrologic, geophysical, and combined hydrogeophysical experiments. New installations support direct and geophysical monitoring of hydrologic fluxes and states from the aquifer through the vadose zone to the atmosphere, including ET and river boundary behavior. Efforts to date have largely focused on establishing the 1D, 2D, and 3D distributions of geologic, hydrologic, and geophysical parameters which can then be used as the basis for testing methods to integrate direct and indirect data and invert for “known” parameter distributions, material boundaries, and tracer test or other system state behavior. Aquifer structure at the BHRS is hierarchical and includes layers and lenses that are recognized with geologic, hydrologic, radar, electrical, and seismic methods. Recent advances extend findings and method developments, but also highlight the need to examine assumptions and understand secular influences when designing and modeling field tests. Examples of advances and caveats include: New high-resolution 1D K profiles obtained from multi-level slug tests (inversion improves with priors for aquifer K, wellbore skin, and local presence of roots) show variable correlation with porosity and bring into question a Kozeny-Carman-type relation for much of the system. Modeling of 2D conservative tracer transport through a synthetic BHRS-like heterogeneous system shows the importance of including porosity heterogeneity (rather than assuming constant porosity for

  7. Micro- and nano-scale optical devices for high density photonic integrated circuits at near-infrared wavelengths

    Science.gov (United States)

    Chatterjee, Rohit

    In this research work, we explore fundamental silicon-based active and passive photonic devices that can be integrated together to form functional photonic integrated circuits. The devices which include power splitters, switches and lenses are studied starting from their physics, their design and fabrication techniques and finally from an experimental standpoint. The experimental results reveal high performance devices that are compatible with standard CMOS fabrication processes and can be easily integrated with other devices for near infrared telecom applications. In Chapter 2, a novel method for optical switching using nanomechanical proximity perturbation technique is described and demonstrated. The method which is experimentally demonstrated employs relatively low powers, small chip footprint and is compatible with standard CMOS fabrication processes. Further, in Chapter 3, this method is applied to develop a hitless bypass switch aimed at solving an important issue in current wavelength division multiplexing systems namely hitless switching of reconfigurable optical add drop multiplexers. Experimental results are presented to demonstrate the application of the nanomechanical proximity perturbation technique to practical situations. In Chapter 4, a fundamental photonic component namely the power splitter is described. Power splitters are important components for any photonic integrated circuits because they help split the power from a single light source to multiple devices on the same chip so that different operations can be performed simultaneously. The power splitters demonstrated in this chapter are based on multimode interference principles resulting in highly compact low loss and highly uniform power splitting to split the power of the light from a single channel to two and four channels. These devices can further be scaled to achieve higher order splitting such as 1x16 and 1x32 power splits. Finally in Chapter 5 we overcome challenges in device

  8. The impact of continuous integration on other software development practices: a large-scale empirical study

    NARCIS (Netherlands)

    Zhao, Y.; Serebrenik, A.; Zhou, Y.; Filkov, V.; Vasilescu, B.N.

    2017-01-01

Continuous Integration (CI) has become a disruptive innovation in software development: with proper tool support and adoption, positive effects have been demonstrated for pull request throughput and scaling up of project sizes. As with any other innovation, adopting CI implies adapting existing practices

  9. Catalyst-Free Vapor-Phase Method for Direct Integration of Gas Sensing Nanostructures with Polymeric Transducing Platforms

    Directory of Open Access Journals (Sweden)

    Stella Vallejos

    2014-01-01

Full Text Available Tungsten oxide nanoneedles (NNs) are grown and integrated directly with polymeric transducing platforms for gas sensors via the aerosol-assisted chemical vapor deposition (AACVD) method. Material analysis shows the feasibility of growing highly crystalline nanomaterials in the form of NNs with aspect ratios between 80 and 200 and a high concentration of oxygen vacancies at the surface, whereas gas testing demonstrates moderate sensing responses to hydrogen at concentrations between 10 ppm and 50 ppm, comparable with results for tungsten oxide NNs grown on silicon transducing platforms. This method is demonstrated to be an attractive route to fabricating the next generation of gas sensor devices, providing flexibility and functionality, with great potential for cost-effective, large-scale production.

  10. Cochrane Qualitative and Implementation Methods Group guidance series-paper 5: methods for integrating qualitative and implementation evidence within intervention effectiveness reviews.

    Science.gov (United States)

    Harden, Angela; Thomas, James; Cargo, Margaret; Harris, Janet; Pantoja, Tomas; Flemming, Kate; Booth, Andrew; Garside, Ruth; Hannes, Karin; Noyes, Jane

    2018-05-01

    The Cochrane Qualitative and Implementation Methods Group develops and publishes guidance on the synthesis of qualitative and mixed-method evidence from process evaluations. Despite a proliferation of methods for the synthesis of qualitative research, less attention has focused on how to integrate these syntheses within intervention effectiveness reviews. In this article, we report updated guidance from the group on approaches, methods, and tools, which can be used to integrate the findings from quantitative studies evaluating intervention effectiveness with those from qualitative studies and process evaluations. We draw on conceptual analyses of mixed methods systematic review designs and the range of methods and tools that have been used in published reviews that have successfully integrated different types of evidence. We outline five key methods and tools as devices for integration which vary in terms of the levels at which integration takes place; the specialist skills and expertise required within the review team; and their appropriateness in the context of limited evidence. In situations where the requirement is the integration of qualitative and process evidence within intervention effectiveness reviews, we recommend the use of a sequential approach. Here, evidence from each tradition is synthesized separately using methods consistent with each tradition before integration takes place using a common framework. Reviews which integrate qualitative and process evaluation evidence alongside quantitative evidence on intervention effectiveness in a systematic way are rare. This guidance aims to support review teams to achieve integration and we encourage further development through reflection and formal testing. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Integrating Expressive Methods in a Relational-Psychotherapy

    Directory of Open Access Journals (Sweden)

    Richard G. Erskine

    2011-06-01

Full Text Available Therapeutic involvement is an integral part of all effective psychotherapy. This article illustrates the concept of therapeutic involvement in working within a therapeutic relationship, within the transference, and with active expressive and experiential methods to resolve traumatic experiences, relational disturbances, and life-shaping decisions.

  12. Test equating, scaling, and linking methods and practices

    CERN Document Server

    Kolen, Michael J

    2014-01-01

    This book provides an introduction to test equating, scaling, and linking, including those concepts and practical issues that are critical for developers and all other testing professionals.  In addition to statistical procedures, successful equating, scaling, and linking involves many aspects of testing, including procedures to develop tests, to administer and score tests, and to interpret scores earned on tests. Test equating methods are used with many standardized tests in education and psychology to ensure that scores from multiple test forms can be used interchangeably.  Test scaling is the process of developing score scales that are used when scores on standardized tests are reported. In test linking, scores from two or more tests are related to one another. Linking has received much recent attention, due largely to investigations of linking similarly named tests from different test publishers or tests constructed for different purposes. In recent years, researchers from the education, psychology, and...

  13. BOX-COX REGRESSION METHOD IN TIME SCALING

    Directory of Open Access Journals (Sweden)

    ATİLLA GÖKTAŞ

    2013-06-01

Full Text Available The Box-Cox regression method, with power transformations λj for j = 1, 2, ..., k, can be used when the dependent variable and the error term of a linear regression model do not satisfy the continuity and normality assumptions. The choice of the optimum power transformation λj of Y, for j = 1, 2, ..., k, that yields the smallest mean square error is discussed. The Box-Cox regression method is especially appropriate for adjusting skewness or heteroscedasticity of the error terms in a nonlinear functional relationship between the dependent and explanatory variables. In this study, the advantages and disadvantages of using the Box-Cox regression method in differentiation and differential analysis of the time scale concept are discussed.
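
    As a brief illustration of the transformation discussed above (not the authors' analysis), the optimal power λ can be estimated by maximum likelihood, here with SciPy on synthetic skewed data:

    ```python
    # Synthetic illustration of the Box-Cox transformation: estimate the optimal
    # power lambda by maximum likelihood, then regress on the transformed scale.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    x = np.linspace(1, 10, 200)
    y = np.exp(0.3 * x + rng.normal(0, 0.2, x.size))  # right-skewed, positive response

    y_bc, lam = stats.boxcox(y)                       # lambda chosen by maximum likelihood
    fit = stats.linregress(x, y_bc)
    print(f"optimal lambda = {lam:.3f}, R^2 on transformed scale = {fit.rvalue**2:.3f}")
    ```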

  14. Fuchs indices and the first integrals of nonlinear differential equations

    International Nuclear Information System (INIS)

    Kudryashov, Nikolai A.

    2005-01-01

A new method of finding the first integrals of nonlinear differential equations in polynomial form is presented. The basic idea of our approach is to use the scaling of solutions of a nonlinear differential equation and to find the dimensions of the arbitrary constants in the Laurent expansion of the general solution. These dimensions allow us to obtain the scalings of the members of the first integrals of nonlinear differential equations. Taking polynomials with unknown coefficients into account, we present an algorithm for finding the first integrals of nonlinear differential equations in polynomial form. Our method is applied to look for the first integrals of eight fourth-order nonlinear ordinary differential equations. The general solution of one of the fourth-order ordinary differential equations is given.
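
    A toy sketch of the underlying idea, using a second-order equation for brevity rather than one of the fourth-order equations treated in the paper: for y'' = -y³, the scaling y → y/ε, t → εt leaves the equation invariant, so the monomials of a polynomial first integral must share a common scaling weight (here y'² and y⁴ both have weight 4), and SymPy can verify conservation.

    ```python
    # Toy example (not one of the paper's fourth-order equations): for y'' = -y**3,
    # y'^2 and y^4 share the same scaling weight, and I = y'^2/2 + y^4/4 is a
    # polynomial first integral. SymPy verifies that dI/dt vanishes on solutions.
    import sympy as sp

    t = sp.symbols("t")
    y = sp.Function("y")
    I = sp.diff(y(t), t) ** 2 / 2 + y(t) ** 4 / 4

    dI = sp.diff(I, t).subs(sp.diff(y(t), t, 2), -y(t) ** 3)  # impose the ODE
    print(sp.simplify(dI))  # prints 0, so I is conserved
    ```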

  15. Comparison the Students Satisfaction of Traditional and Integrated Teaching Method in Physiology Course

    Directory of Open Access Journals (Sweden)

    Keshavarzi Z.

    2016-02-01

Full Text Available Aims: Different education methods play crucial roles in improving education quality and students' satisfaction. In recent years, medical education has changed substantially through new education methods. The aim of this study was to compare medical students' satisfaction with traditional and integrated methods of teaching a physiology course. Instrument and Methods: In this descriptive analytical study, fifty 4th-semester medical students of Bojnourd University of Medical Sciences were studied in 2015. The subjects were randomly selected based on availability. Data were collected by two researcher-made questionnaires whose validity and reliability were confirmed. Questionnaire 1 was completed by the students after presenting renal and endocrinology topics via traditional and integrated methods. Questionnaire 2 was completed only after presenting the course via the integrated method. Data were analyzed by SPSS 16 software using the dependent t-test. Findings: The mean satisfaction score for the traditional method (24.80±3.48) was higher than for the integrated method (22.30±4.03; p<0.0001). In the integrated method, most of the students agreed or completely agreed with telling stories from daily life (76%), the sitting arrangement in the classroom (48%), attributing cell roles to the students (60%), showing movies and animations (76%), using models (84%), and using real animal parts (72%) during teaching, as well as expressing clinical items to enhance learning motivation (76%). Conclusion: The students' favorable satisfaction with the traditional lecture method for understanding the issues, as well as their acceptance of new and active learning methods, shows the effectiveness and efficiency of the traditional method and the need to enhance it with integrated methods.

  16. Sensitivity of the two-dimensional shearless mixing layer to the initial turbulent kinetic energy and integral length scale

    Science.gov (United States)

    Fathali, M.; Deshiri, M. Khoshnami

    2016-04-01

The shearless mixing layer is generated from the interaction of two homogeneous isotropic turbulence (HIT) fields with different integral scales ℓ1 and ℓ2 and different turbulent kinetic energies E1 and E2. In this study, the sensitivity of the temporal evolution of two-dimensional, incompressible shearless mixing layers to parametric variations of ℓ1/ℓ2 and E1/E2 is investigated. The sensitivity methodology is based on a nonintrusive approach, using direct numerical simulation and generalized polynomial chaos expansion. The analysis is carried out at Re_ℓ1 = 90 for the high-energy HIT region and different integral length scale ratios 1/4 ≤ ℓ1/ℓ2 ≤ 4 and turbulent kinetic energy ratios 1 ≤ E1/E2 ≤ 30. It is found that the most influential parameter on the variability of the mixing layer evolution is the turbulent kinetic energy, while variations of the integral length scale show a negligible influence on the flow field variability. A significant level of anisotropy and intermittency is observed in both large and small scales. In particular, it is found that large scales have higher levels of intermittency and sensitivity to the variations of ℓ1/ℓ2 and E1/E2 than the small scales. Reconstructed response surfaces of the flow field intermittency and the turbulent penetration depth show monotonic dependence on ℓ1/ℓ2 and E1/E2. The mixing layer growth rate and the mixing efficiency both show sensitive dependence on the initial condition parameters. However, the probability density functions of these quantities show relatively small solution variations in response to variations of the initial condition parameters.
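
    The nonintrusive sensitivity machinery can be sketched in a few lines: sample the two uncertain parameters, evaluate the quantity of interest (below a cheap placeholder function, not a DNS of the mixing layer), fit a Legendre polynomial chaos expansion by least squares, and read variance contributions off the coefficients.

    ```python
    # Non-intrusive sensitivity sketch in the spirit of the study: sample the two
    # uncertain parameters on [-1, 1], evaluate the quantity of interest (here a
    # cheap placeholder function, NOT a DNS of the mixing layer), fit a Legendre
    # polynomial chaos expansion by least squares, and read variance shares from
    # the coefficients.
    import numpy as np
    from numpy.polynomial import legendre as L
    from itertools import product

    rng = np.random.default_rng(1)
    deg = 4
    xi = rng.uniform(-1, 1, size=(400, 2))   # xi[:,0] ~ energy ratio, xi[:,1] ~ length-scale ratio

    def qoi(x):                              # placeholder response surface
        return np.exp(0.8 * x[:, 0]) + 0.1 * x[:, 1] + 0.05 * x[:, 0] * x[:, 1]

    multi = [(i, j) for i, j in product(range(deg + 1), repeat=2) if i + j <= deg]
    A = np.column_stack([
        L.legval(xi[:, 0], np.eye(deg + 1)[i]) * L.legval(xi[:, 1], np.eye(deg + 1)[j])
        for i, j in multi
    ])
    c, *_ = np.linalg.lstsq(A, qoi(xi), rcond=None)

    # Variance of mode (i, j) is c^2 / ((2i+1)(2j+1)) for uniform inputs on [-1, 1]
    var = {m: c[k] ** 2 / ((2 * m[0] + 1) * (2 * m[1] + 1)) for k, m in enumerate(multi)}
    total = sum(v for m, v in var.items() if m != (0, 0))
    s_energy = sum(v for m, v in var.items() if m[0] > 0 and m[1] == 0) / total
    s_length = sum(v for m, v in var.items() if m[1] > 0 and m[0] == 0) / total
    print(f"first-order variance shares: energy {s_energy:.2f}, length scale {s_length:.2f}")
    ```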

  17. 13th International Conference on Integral Methods in Science and Engineering

    CERN Document Server

    Kirsch, Andreas

    2015-01-01

    This contributed volume contains a collection of articles on state-of-the-art developments on the construction of theoretical integral techniques and their application to specific problems in science and engineering.  Written by internationally recognized researchers, the chapters in this book are based on talks given at the Thirteenth International Conference on Integral Methods in Science and Engineering, held July 21–25, 2014, in Karlsruhe, Germany.   A broad range of topics is addressed, from problems of existence and uniqueness for singular integral equations on domain boundaries to numerical integration via finite and boundary elements, conservation laws, hybrid methods, and other quadrature-related approaches.   This collection will be of interest to researchers in applied mathematics, physics, and mechanical and electrical engineering, as well as graduate students in these disciplines and other professionals for whom integration is an essential tool.

  18. Built-In Data-Flow Integration Testing in Large-Scale Component-Based Systems

    Science.gov (United States)

    Piel, Éric; Gonzalez-Sanchez, Alberto; Gross, Hans-Gerhard

Modern large-scale component-based applications and service ecosystems are built following a number of different component models and architectural styles, such as the data-flow architectural style. In this style, each building block receives data from a previous one in the flow and sends output data to other components. This organisation expresses information flows adequately and also favours decoupling between the components, leading to easier maintenance and quicker evolution of the system. Integration testing is a major means of ensuring the quality of large systems. Their size and complexity, together with the fact that they are developed and maintained by several stakeholders, make Built-In Testing (BIT) an attractive approach to managing their integration testing. However, so far no technique has been proposed that combines BIT and data-flow integration testing. We have introduced the notion of a virtual component in order to realize such a combination. It permits the behaviour of several components assembled to process a flow of data to be defined using BIT. Test cases are defined in a way that makes them simple to write and flexible to adapt. We present two implementations of our proposed virtual component integration testing technique, and we extend our previous proposal to detect and handle errors in the definition by the user. The evaluation of the virtual component testing approach suggests that more issues can be detected in systems with data-flows than through other integration testing approaches.
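
    A minimal sketch of the virtual-component idea (class and method names are illustrative, not the paper's API): a chain of data-flow components is wrapped as one unit that carries its own built-in test cases.

    ```python
    # Illustrative sketch of the virtual-component idea (names are ours, not the
    # paper's API): a chain of data-flow components wrapped as one testable unit
    # that carries its own built-in test cases.
    from dataclasses import dataclass, field
    from typing import Callable, List, Tuple

    @dataclass
    class Component:
        name: str
        process: Callable          # data in -> data out

    @dataclass
    class VirtualComponent:
        """Treats an assembly of components as a single unit with built-in tests."""
        components: List[Component]
        test_cases: List[Tuple[object, object]] = field(default_factory=list)

        def process(self, data):
            for c in self.components:   # push the datum through the flow
                data = c.process(data)
            return data

        def built_in_test(self):
            """Run the stored (input, expected) pairs; return the failures."""
            return [(given, expected, self.process(given))
                    for given, expected in self.test_cases
                    if self.process(given) != expected]

    flow = VirtualComponent(
        components=[Component("scale", lambda x: x * 10),
                    Component("offset", lambda x: x + 2)],
        test_cases=[(1, 12), (0, 2)],
    )
    print("BIT failures:", flow.built_in_test())   # an empty list means all tests pass
    ```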

  19. Indoor integrated navigation and synchronous data acquisition method for Android smartphone

    Science.gov (United States)

    Hu, Chunsheng; Wei, Wenjian; Qin, Shiqiao; Wang, Xingshu; Habib, Ayman; Wang, Ruisheng

    2015-08-01

Smartphones are widely used at present. Most smartphones have cameras and several kinds of sensors, such as a gyroscope, an accelerometer, and a magnetometer. Indoor navigation based on the smartphone is very important and valuable. According to the features of the smartphone and of indoor navigation, a new indoor integrated navigation method is proposed, which uses the MEMS (Micro-Electro-Mechanical Systems) IMU (Inertial Measurement Unit), camera, and magnetometer of the smartphone. The proposed navigation method mainly involves data acquisition, camera calibration, image measurement, IMU calibration, initial alignment, strapdown integration, zero-velocity updates, and integrated navigation. Synchronous data acquisition from the sensors (gyroscope, accelerometer, and magnetometer) and the camera is the basis of indoor navigation on the smartphone. A camera data acquisition method is introduced, which uses the camera class of Android to record images and timestamps from the smartphone camera. Two kinds of sensor data acquisition methods are introduced and compared. The first method records sensor data and time with the SensorManager of Android. The second method implements the open, close, data-receiving, and saving functions in C, and calls the sensor functions from Java through the JNI interface. Data acquisition software was developed with the JDK (Java Development Kit), Android ADT (Android Development Tools), and the NDK (Native Development Kit). The software can record camera data, sensor data, and time simultaneously. Data acquisition experiments were performed with the developed software and a Samsung Note 2 smartphone. The experimental results show that the first sensor data acquisition method is convenient but occasionally loses sensor data, while the second method offers much better real-time performance and far less data loss. A checkerboard image is recorded, and the corner points of the checkerboard are detected with the Harris method. The sensor data of the gyroscope, accelerometer, and magnetometer have

  20. A Robust WLS Power System State Estimation Method Integrating a Wide-Area Measurement System and SCADA Technology

    Directory of Open Access Journals (Sweden)

    Tao Jin

    2015-04-01

Full Text Available With the development of modern society, the scale of power systems has increased rapidly, and the framework and operating modes of power systems are trending towards greater complexity. It is now much more important for dispatchers to know the exact state parameters of the power network through state estimation. This paper proposes a robust power system WLS state estimation method integrating a wide-area measurement system (WAMS) and SCADA technology, incorporating phasor measurements and the results of the traditional state estimator in a post-processing estimator, which greatly reduces the scale of the nonlinear estimation problem as well as the number of iterations and the processing time per iteration. The paper first analyzes the wide-area state estimation model in detail; then, because least squares does not account for bad data and outliers, it proposes a robust weighted least squares (WLS) method that combines a robust estimation principle with least squares through equivalent weights. The performance assessment is discussed by setting up mathematical models of the distribution network. The proposed method was proved accurate and reliable by simulations and experiments.
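
    The robust-WLS ingredient can be illustrated on a toy linear measurement model (not a power-network Jacobian): iteratively reweighted least squares with a Huber-type equivalent-weight rule down-weights the gross error.

    ```python
    # Hedged sketch of robust WLS with equivalent weights: solve z = H x + e by
    # weighted least squares, then iteratively down-weight large normalized
    # residuals (a Huber-type rule). The measurement model is a toy linear one.
    import numpy as np

    rng = np.random.default_rng(2)
    x_true = np.array([1.0, 0.5])
    H = rng.normal(size=(30, 2))
    sigma = 0.05 * np.ones(30)
    z = H @ x_true + rng.normal(0, sigma)
    z[3] += 1.5                                   # inject one gross error (bad datum)

    W = 1.0 / sigma ** 2
    for _ in range(10):                           # iteratively reweighted least squares
        x = np.linalg.solve(H.T @ (W[:, None] * H), H.T @ (W * z))
        r = (z - H @ x) / sigma                   # normalized residuals
        k = 2.0                                   # Huber threshold
        W = (1.0 / sigma ** 2) * np.where(
            np.abs(r) <= k, 1.0, k / np.maximum(np.abs(r), 1e-12))

    print("estimate:", x, " true:", x_true)
    ```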

  1. Mapping multi-scale vascular plant richness in a forest landscape with integrated LiDAR and hyperspectral remote-sensing.

    Science.gov (United States)

    Hakkenberg, C R; Zhu, K; Peet, R K; Song, C

    2018-02-01

The central role of floristic diversity in maintaining habitat integrity and ecosystem function has propelled efforts to map and monitor its distribution across forest landscapes. While biodiversity studies have traditionally relied largely on ground-based observations, the immensity of the task of generating accurate, repeatable, and spatially continuous data on biodiversity patterns at large scales has stimulated the development of remote-sensing methods for scaling up from field plot measurements. One such approach is through integrated LiDAR and hyperspectral remote sensing. However, despite their efficiencies in cost and effort, LiDAR-hyperspectral sensors are still highly constrained in structurally and taxonomically heterogeneous forests - especially when species' cover is smaller than the image resolution, intertwined with neighboring taxa, or otherwise obscured by overlapping canopy strata. In light of these challenges, this study goes beyond the remote characterization of upper canopy diversity to instead model total vascular plant species richness in a continuous-cover North Carolina Piedmont forest landscape. We focus on two related, but parallel, tasks. First, we demonstrate an application of predictive biodiversity mapping, using nonparametric models trained with spatially nested field plots and aerial LiDAR-hyperspectral data, to predict spatially explicit landscape patterns in floristic diversity across seven spatial scales between 0.01 and 900 m². Second, we employ bivariate parametric models to test the significance of individual remotely sensed predictors of plant richness to determine how parameter estimates vary with scale. Cross-validated results indicate that the predictive models were able to account for 15-70% of the variance in plant richness, with LiDAR-derived estimates of topography and forest structural complexity, as well as spectral variance in the hyperspectral imagery, explaining the largest portion of variance in diversity levels. Importantly
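
    The "nonparametric models" mentioned above could be realized, for example, with a random forest regressor; the sketch below uses synthetic stand-ins for the LiDAR and hyperspectral predictors, not the study's data.

    ```python
    # Hedged sketch of predictive richness mapping with a nonparametric model.
    # Features are synthetic stand-ins for canopy structure, topography, and
    # spectral-variance metrics; richness values are simulated, not field data.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(3)
    X = rng.normal(size=(300, 3))                 # [structural complexity, wetness, spectral var]
    richness = 20 + 5 * X[:, 0] + 3 * X[:, 2] + rng.normal(0, 2, 300)

    model = RandomForestRegressor(n_estimators=300, oob_score=True, random_state=0)
    model.fit(X, richness)
    print("OOB R^2:", round(model.oob_score_, 2))
    print("feature importances:", model.feature_importances_.round(2))
    ```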

  2. Integral staggered point-matching method for millimeter-wave reflective diffraction gratings on electron cyclotron heating systems

    International Nuclear Information System (INIS)

    Xia, Donghui; Huang, Mei; Wang, Zhijiang; Zhang, Feng; Zhuang, Ge

    2016-01-01

    Highlights: • The integral staggered point-matching method for design of polarizers on the ECH systems is presented. • The availability of the integral staggered point-matching method is checked by numerical calculations. • Two polarizers are designed with the integral staggered point-matching method and the experimental results are given. - Abstract: The reflective diffraction gratings are widely used in the high power electron cyclotron heating systems for polarization strategy. This paper presents a method which we call “the integral staggered point-matching method” for design of reflective diffraction gratings. This method is based on the integral point-matching method. However, it effectively removes the convergence problems and tedious calculations of the integral point-matching method, making it easier to be used for a beginner. A code is developed based on this method. The calculation results of the integral staggered point-matching method are compared with the integral point-matching method, the coordinate transformation method and the low power measurement results. It indicates that the integral staggered point-matching method can be used as an optional method for the design of reflective diffraction gratings in electron cyclotron heating systems.

  3. Analysis Method for Integrating Components of Product

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Jun Ho [Inzest Co. Ltd, Seoul (Korea, Republic of); Lee, Kun Sang [Kookmin Univ., Seoul (Korea, Republic of)

    2017-04-15

This paper presents methods for integrating the component parts of a product. A new relation function concept and its structure are introduced to analyze the relationships among component parts. This relation function carries three types of information, which can be used to establish a relation function structure. The relation function structure of the analysis criteria was established to analyze and present the data, and the priority components determined by the analysis criteria can be integrated. The analysis criteria were divided based on their number and orientation, as well as their direct or indirect character. This paper presents a design algorithm for component integration. The algorithm was applied to actual products, and the components inside each product were integrated. The proposed algorithm was then used in research to improve brake discs for bicycles. As a result, an improved product consistent with the relation function structure was actually created.

  4. Approximate calculation method for integral of mean square value of nonstationary response

    International Nuclear Information System (INIS)

    Aoki, Shigeru; Fukano, Azusa

    2010-01-01

The response of a structure subjected to nonstationary random excitation, such as an earthquake, is itself a nonstationary random vibration. Calculating the statistical characteristics of such a response is complicated. The mean square value of the response is usually used to evaluate random response, and the integral of the mean square value corresponds to the total energy of the response. In this paper, a simplified calculation method for obtaining the integral of the mean square value of the response is proposed. As input excitations, nonstationary white noise and nonstationary filtered white noise are used. Integrals of the mean square value of the response are calculated for various parameter values. It is found that the proposed method gives the exact value of the integral of the mean square value of the response.
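
    A brute-force Monte Carlo reference for the quantity in question, the time integral of the mean-square response of a single-degree-of-freedom oscillator under envelope-modulated white noise, can be written as follows; all parameters are illustrative, not taken from the paper.

    ```python
    # Ensemble Monte Carlo reference: mean-square response of a SDOF oscillator
    # to envelope-modulated white noise, then the time integral of E[x^2](t).
    import numpy as np

    wn, zeta = 2 * np.pi, 0.05                 # natural frequency [rad/s], damping ratio
    dt, T, n_sim = 0.005, 10.0, 200
    t = np.arange(0.0, T, dt)
    env = np.exp(-0.3 * t) - np.exp(-1.0 * t)  # modulating envelope (assumed shape)

    rng = np.random.default_rng(4)
    ms = np.zeros_like(t)
    for _ in range(n_sim):                     # explicit Euler scheme over the ensemble
        w = rng.normal(0.0, 1.0, t.size) / np.sqrt(dt)  # discretized white noise
        x = v = 0.0
        resp = np.empty_like(t)
        for i in range(t.size):
            a = env[i] * w[i] - 2 * zeta * wn * v - wn ** 2 * x
            x, v = x + v * dt, v + a * dt
            resp[i] = x
        ms += resp ** 2 / n_sim                # running mean-square estimate

    print("integral of E[x^2](t) dt ~", float((ms * dt).sum()))
    ```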

  5. New Approaches to Aluminum Integral Foam Production with Casting Methods

    Directory of Open Access Journals (Sweden)

    Ahmet Güner

    2015-08-01

Full Text Available Integral foam has long been used in the production of polymer materials. Metal integral foam casting systems are obtained by transferring and adapting polymer injection technology. Metal integral foam produced by casting has a solid skin at the surface and a foamed core. Producing near-net-shape parts reduces production expenses. Insurance companies nowadays want the automotive industry to use metallic foam parts because of their higher impact energy absorption. In this paper, manufacturing processes for aluminum integral foam by casting methods are discussed.

  6. Discrete nodal integral transport-theory method for multidimensional reactor physics and shielding calculations

    International Nuclear Information System (INIS)

    Lawrence, R.D.; Dorning, J.J.

    1980-01-01

    A coarse-mesh discrete nodal integral transport theory method has been developed for the efficient numerical solution of multidimensional transport problems of interest in reactor physics and shielding applications. The method, which is the discrete transport theory analogue and logical extension of the nodal Green's function method previously developed for multidimensional neutron diffusion problems, utilizes the same transverse integration procedure to reduce the multidimensional equations to coupled one-dimensional equations. This is followed by the conversion of the differential equations to local, one-dimensional, in-node integral equations by integrating back along neutron flight paths. One-dimensional and two-dimensional transport theory test problems have been systematically studied to verify the superior computational efficiency of the new method

  7. A variational multi-scale method with spectral approximation of the sub-scales: Application to the 1D advection-diffusion equations

    KAUST Repository

Chacón Rebollo, Tomás; Dia, Ben Mansour

    2015-01-01

This paper introduces a variational multi-scale method in which the sub-grid scales are computed by spectral approximations. It is based upon an extension of the spectral theorem to not necessarily self-adjoint elliptic operators that have an associated basis of eigenfunctions which are orthonormal in weighted L2 spaces. This allows the sub-grid scales to be calculated element-wise by means of the associated spectral expansion. We propose a feasible VMS-spectral method by truncation of this spectral expansion to a finite number of modes. We apply this general framework to the convection-diffusion equation by analytically computing the family of eigenfunctions. We perform a convergence and error analysis. We also present some numerical tests that show the stability of the method for an odd number of spectral modes and an improvement of accuracy in the large resolved scales due to the addition of the sub-grid spectral scales.

  8. Rosenberg's Self-Esteem Scale: Two Factors or Method Effects.

    Science.gov (United States)

    Tomas, Jose M.; Oliver, Amparo

    1999-01-01

    Results of a study with 640 Spanish high school students suggest the existence of a global self-esteem factor underlying responses to Rosenberg's (M. Rosenberg, 1965) Self-Esteem Scale, although the inclusion of method effects is needed to achieve a good model fit. Method effects are associated with item wording. (SLD)

  9. Comparison of Three Different Methods for Pile Integrity Testing on a Cylindrical Homogeneous Polyamide Specimen

    Science.gov (United States)

    Lugovtsova, Y. D.; Soldatov, A. I.

    2016-01-01

Three different methods for pile integrity testing are compared on a cylindrical homogeneous polyamide specimen: low-strain pile integrity testing, multichannel pile integrity testing, and testing with a shaker system. Since low-strain pile integrity testing is a well-established and standardized method, its results are used as a reference for the other two methods.

  10. A new integral method for solving the point reactor neutron kinetics equations

    International Nuclear Information System (INIS)

    Li Haofeng; Chen Wenzhen; Luo Lei; Zhu Qian

    2009-01-01

A numerical integral method that efficiently solves the point kinetics equations by using a better basis function (BBF) for the approximation of the neutron density in one-time-step integrations is described and investigated. The approach is based on an exact analytic integration of the neutron density equation, where the stiffness of the equations is overcome by a fully implicit formulation. The procedure is tested with a variety of reactivity functions, including step reactivity insertions, ramp inputs, and oscillatory reactivity changes. The solution of the better basis function method is compared to other analytical and numerical solutions of the point reactor kinetics equations. The results show that selecting a better basis function can improve the efficiency and accuracy of this integral method. The better basis function method can be used in real-time forecasting for power reactors in order to prevent reactivity accidents.
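
    For contrast with the BBF scheme (which is not reproduced here), a plain fully implicit backward-Euler integration of the stiff point kinetics equations with six delayed-neutron groups can be sketched as follows; the kinetics data are typical thermal-reactor values, assumed for illustration.

    ```python
    # Backward-Euler (fully implicit) integration of the stiff point kinetics
    # equations with six delayed-neutron groups; data are typical thermal values.
    import numpy as np

    beta_i = np.array([2.31e-4, 1.53e-3, 1.37e-3, 2.76e-3, 8.04e-4, 2.94e-4])
    lam = np.array([0.0124, 0.0305, 0.111, 0.301, 1.14, 3.01])  # decay constants [1/s]
    beta, Lambda = beta_i.sum(), 1e-4                           # neutron generation time [s]
    rho = 0.5 * beta                                            # step reactivity insertion

    A = np.zeros((7, 7))                 # state y = [n, c1..c6], dy/dt = A y
    A[0, 0] = (rho - beta) / Lambda
    A[0, 1:] = lam
    A[1:, 0] = beta_i / Lambda
    A[1:, 1:] = -np.diag(lam)

    dt, nsteps = 1e-3, 5000
    y = np.concatenate([[1.0], beta_i / (lam * Lambda)])        # equilibrium at n = 1
    M = np.eye(7) - dt * A                                      # (I - dt A) y_new = y_old
    for _ in range(nsteps):
        y = np.linalg.solve(M, y)
    print(f"relative power after {dt * nsteps:.0f} s: {y[0]:.3f}")
    ```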

  11. Estimating basin scale evapotranspiration (ET) by water balance and remote sensing methods

    Science.gov (United States)

    Senay, G.B.; Leake, S.; Nagler, P.L.; Artan, G.; Dickinson, J.; Cordova, J.T.; Glenn, E.P.

    2011-01-01

    Evapotranspiration (ET) is an important hydrological process that can be studied and estimated at multiple spatial scales ranging from a leaf to a river basin. We present a review of methods in estimating basin scale ET and its applications in understanding basin water balance dynamics. The review focuses on two aspects of ET: (i) how the basin scale water balance approach is used to estimate ET; and (ii) how ‘direct’ measurement and modelling approaches are used to estimate basin scale ET. Obviously, the basin water balance-based ET requires the availability of good precipitation and discharge data to calculate ET as a residual on longer time scales (annual) where net storage changes are assumed to be negligible. ET estimated from such a basin water balance principle is generally used for validating the performance of ET models. On the other hand, many of the direct estimation methods involve the use of remotely sensed data to estimate spatially explicit ET and use basin-wide averaging to estimate basin scale ET. The direct methods can be grouped into soil moisture balance modelling, satellite-based vegetation index methods, and methods based on satellite land surface temperature measurements that convert potential ET into actual ET using a proportionality relationship. The review also includes the use of complementary ET estimation principles for large area applications. The review identifies the need to compare and evaluate the different ET approaches using standard data sets in basins covering different hydro-climatic regions of the world.
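
    At its simplest, the residual water-balance estimate described above reduces to ET = P − Q − ΔS on annual scales; a minimal sketch follows, with placeholder numbers rather than data from any particular basin.

    ```python
    # Annual basin ET as the water-balance residual ET = P - Q - dS, assuming
    # negligible storage change. Values below are placeholders, not basin data.
    def basin_et_mm(precip_mm, discharge_m3, basin_area_km2, delta_storage_mm=0.0):
        """Annual basin-scale ET [mm] from precipitation, discharge, and storage change."""
        runoff_mm = discharge_m3 / (basin_area_km2 * 1e6) * 1000.0  # m^3 -> mm over the basin
        return precip_mm - runoff_mm - delta_storage_mm

    # e.g. 800 mm of rain and 3.0e9 m^3 of annual discharge over a 10,000 km^2 basin
    print(basin_et_mm(800.0, 3.0e9, 1.0e4), "mm/yr")   # -> 500.0 mm/yr
    ```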

  12. Development of polygon elements based on the scaled boundary finite element method

    International Nuclear Information System (INIS)

    Chiong, Irene; Song Chongmin

    2010-01-01

    We aim to extend the scaled boundary finite element method to construct conforming polygon elements. The development of the polygonal finite element is highly anticipated in computational mechanics as greater flexibility and accuracy can be achieved using these elements. The scaled boundary polygonal finite element will enable new developments in mesh generation, better accuracy from a higher order approximation and better transition elements in finite element meshes. Polygon elements of arbitrary number of edges and order have been developed successfully. The edges of an element are discretised with line elements. The displacement solution of the scaled boundary finite element method is used in the development of shape functions. They are shown to be smooth and continuous within the element, and satisfy compatibility and completeness requirements. Furthermore, eigenvalue decomposition has been used to depict element modes and outcomes indicate the ability of the scaled boundary polygonal element to express rigid body and constant strain modes. Numerical tests are presented; the patch test is passed and constant strain modes verified. Accuracy and convergence of the method are also presented and the performance of the scaled boundary polygonal finite element is verified on Cook's swept panel problem. Results show that the scaled boundary polygonal finite element method outperforms a traditional mesh and accuracy and convergence are achieved from fewer nodes. The proposed method is also shown to be truly flexible, and applies to arbitrary n-gons formed of irregular and non-convex polygons.

  13. Combination of statistical and physically based methods to assess shallow slide susceptibility at the basin scale

    Science.gov (United States)

    Oliveira, Sérgio C.; Zêzere, José L.; Lajas, Sara; Melo, Raquel

    2017-07-01

    Approaches used to assess shallow slide susceptibility at the basin scale are conceptually different depending on the use of statistical or physically based methods. The former are based on the assumption that the same causes are more likely to produce the same effects, whereas the latter are based on the comparison between forces which tend to promote movement along the slope and the counteracting forces that are resistant to motion. Within this general framework, this work tests two hypotheses: (i) although conceptually and methodologically distinct, the statistical and deterministic methods generate similar shallow slide susceptibility results regarding the model's predictive capacity and spatial agreement; and (ii) the combination of shallow slide susceptibility maps obtained with statistical and physically based methods, for the same study area, generate a more reliable susceptibility model for shallow slide occurrence. These hypotheses were tested at a small test site (13.9 km2) located north of Lisbon (Portugal), using a statistical method (the information value method, IV) and a physically based method (the infinite slope method, IS). The landslide susceptibility maps produced with the statistical and deterministic methods were combined into a new landslide susceptibility map. The latter was based on a set of integration rules defined by the cross tabulation of the susceptibility classes of both maps and analysis of the corresponding contingency tables. The results demonstrate a higher predictive capacity of the new shallow slide susceptibility map, which combines the independent results obtained with statistical and physically based models. Moreover, the combination of the two models allowed the identification of areas where the results of the information value and the infinite slope methods are contradictory. Thus, these areas were classified as uncertain and deserve additional investigation at a more detailed scale.
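
    The physically based component reduces, in the infinite slope model, to a factor of safety; a minimal sketch using the standard formulation with illustrative soil parameters (not the test-site values) follows.

    ```python
    # Infinite slope factor of safety, FS = [c' + (g - m*gw) z cos^2(b) tan(phi')]
    # / [g z sin(b) cos(b)], with illustrative parameter values.
    import numpy as np

    def infinite_slope_fs(c=5e3, phi_deg=32.0, beta_deg=28.0, z=2.0,
                          gamma=19e3, gamma_w=9.81e3, m=0.5):
        """Factor of safety; m in [0, 1] is the relative groundwater level."""
        beta, phi = np.radians(beta_deg), np.radians(phi_deg)
        num = c + (gamma - m * gamma_w) * z * np.cos(beta) ** 2 * np.tan(phi)
        den = gamma * z * np.sin(beta) * np.cos(beta)
        return num / den

    print("FS dry:", round(infinite_slope_fs(m=0.0), 2),
          "| FS half-saturated:", round(infinite_slope_fs(m=0.5), 2))
    ```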

  14. The Integrated Multi-Level Bilingual Teaching of "Social Research Methods"

    Science.gov (United States)

    Zhu, Yanhan; Ye, Jian

    2012-01-01

    "Social Research Methods," as a methodology course, combines theories and practices closely. Based on the synergy theory, this paper tries to establish an integrated multi-level bilingual teaching mode. Starting from the transformation of teaching concepts, we should integrate interactions, experiences, and researches together and focus…

  15. An integrated model for assessing both crop productivity and agricultural water resources at a large scale

    Science.gov (United States)

    Okada, M.; Sakurai, G.; Iizumi, T.; Yokozawa, M.

    2012-12-01

Agricultural production utilizes regional resources (e.g. river water and ground water) as well as local resources (e.g. temperature, rainfall, solar energy). Future climate changes and increasing demand due to population growth and economic development would strongly affect the availability of water resources for agricultural production. While many studies have assessed the impacts of climate change on agriculture, there are few studies that dynamically account for changes in both water resources and crop production. This study proposes an integrated model for assessing both crop productivity and agricultural water resources at a large scale. Moreover, irrigation management in response to subseasonal variability in weather and crop growth varies by region and crop. To deal with such variations, we used the Markov Chain Monte Carlo technique to quantify region-specific parameters associated with crop growth and irrigation water estimation. We coupled a large-scale crop model (Sakurai et al. 2012) with a global water resources model, H08 (Hanasaki et al. 2008). The integrated model consists of five sub-models covering the following processes: land surface, crop growth, river routing, reservoir operation, and anthropogenic water withdrawal. The land surface sub-model was based on a watershed hydrology model, SWAT (Neitsch et al. 2009). Surface and subsurface runoff simulated by the land surface sub-model were input to the river routing sub-model of the H08 model. Part of the regional water resources available for agriculture, simulated by the H08 model, was input as irrigation water to the land surface sub-model. The timing and amount of irrigation water were simulated at a daily step. The integrated model reproduced the observed streamflow in an individual watershed. Additionally, the model accurately reproduced the trends and interannual variations of crop yields. To demonstrate the usefulness of the integrated model, we compared two types of impact assessment of

  16. Role of two insect growth regulators in integrated pest management of citrus scales.

    Science.gov (United States)

    Grafton-Cardwell, E E; Lee, J E; Stewart, J R; Olsen, K D

    2006-06-01

    Portions of two commercial citrus orchards were treated for two consecutive years with buprofezin or three consecutive years with pyriproxyfen in a replicated plot design to determine the long-term impact of these insect growth regulators (IGRs) on the San Joaquin Valley California integrated pest management program. Pyriproxyfen reduced the target pest, California red scale, Aonidiella aurantii Maskell, to nondetectable levels on leaf samples approximately 4 mo after treatment. Pyriproxyfen treatments reduced the California red scale parasitoid Aphytis melinus DeBach to a greater extent than the parasitoid Comperiella bifasciata Howard collected on sticky cards. Treatments of lemons Citrus limon (L.) Burm. f. infested with scale parasitized by A. melinus showed only 33% direct mortality of the parasitoid, suggesting the population reduction observed on sticky cards was due to low host density. Three years of pyriproxyfen treatments did not maintain citricola scale, Coccus pseudomagnoliarum (Kuwana), below the treatment threshold and cottony cushion scale, Icerya purchasi Maskell, was slowly but incompletely controlled. Buprofezin reduced California red scale to very low but detectable levels approximately 5 mo after treatment. Buprofezin treatments resulted in similar levels of reduction of the two parasitoids A. melinus and C. bifasciata collected on sticky cards. Treatments of lemons infested with scale parasitized by A. melinus showed only 7% mortality of the parasitoids, suggesting the population reduction observed on sticky cards was due to low host density. Citricola scale was not present in this orchard, and cottony cushion scale was slowly and incompletely controlled by buprofezin. These field plots demonstrated that IGRs can act as organophosphate insecticide replacements for California red scale control; however, their narrower spectrum of activity and disruption of coccinellid beetles can allow other scale species to attain primary pest status.

  17. Hypersingular integral equations, waveguiding effects in Cantorian Universe and genesis of large scale structures

    International Nuclear Information System (INIS)

    Iovane, G.; Giordano, P.

    2005-01-01

In this work we introduce hypersingular integral equations and analyze a realistic model of gravitational waveguides on a Cantorian space-time. A waveguiding effect is considered with respect to the large-scale structure of the Universe, where structure formation appears as if it were a classically self-similar random process at all astrophysical scales. The result is that it seems we live in El Naschie's ε(∞) Cantorian space-time, in which gravitational lensing and waveguiding effects can explain the apparent Universe. In particular, we consider filamentary and planar large-scale structures as possible refraction channels for electromagnetic radiation coming from cosmological structures. From this vision, supported by three numerical simulations, the Universe appears like a large set of self-similar adaptive mirrors. Consequently, an infinite Universe is just an optical illusion produced by mirroring effects connected with the large-scale structure of a finite and not so large Universe.

  1. Measurement of H'(0.07) with pulse height weighting integration method

    International Nuclear Information System (INIS)

    Liye, LIU; Gang, JIN; Jizeng, MA

    2002-01-01

H'(0.07) is an important quantity for radiation field measurement in health physics. One plastic scintillator measurement method employs the weak current produced by the PMT. However, this current-based method has some weaknesses, for instance sensitivity to environmental humidity and temperature and a nonlinear energy response. In order to increase the precision of H'(0.07) measurement, a pulse height weighting integration method is introduced, with the advantages of low noise, high sensitivity, processable data, and a wide measurement range. The pulse height weighting integration method appears acceptable for measuring the directional dose equivalent. The representative theoretical energy response of the described method accords with preliminary experimental results.
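
    A sketch of the weighting idea, assuming a synthetic pulse-height spectrum and an assumed weighting function w(h): the dose estimate is accumulated as the weighted sum over spectrum bins rather than as an integrated current.

    ```python
    # Pulse-height weighting sketch: record a spectrum N(h) and estimate dose as
    # sum_h w(h) * N(h). Both the spectrum and w(h) here are synthetic.
    import numpy as np

    rng = np.random.default_rng(5)
    edges = np.linspace(0.0, 2.0, 41)                 # pulse-height bins (arb. units)
    heights = rng.exponential(0.4, size=100_000)      # synthetic pulse heights
    N, _ = np.histogram(heights, bins=edges)

    centers = 0.5 * (edges[:-1] + edges[1:])
    w = 1e-6 * centers                                # assumed weighting function w(h)
    dose = np.sum(w * N)                              # weighted integration over the spectrum
    print(f"weighted-sum dose estimate: {dose:.3e} (arb. units)")
    ```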

  2. Fibonacci-regularization method for solving Cauchy integral equations of the first kind

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Fariborzi Araghi

    2017-09-01

Full Text Available In this paper, a novel scheme is proposed to solve first-kind Cauchy integral equations over a finite interval. For this purpose, the regularization method is considered. Then, the collocation method with Fibonacci basis functions is applied to solve the resulting second-kind singular integral equation. Also, the error estimate of the proposed scheme is discussed. Finally, some sample Cauchy integral equations stemming from the theory of airfoils in fluid mechanics are presented and solved to illustrate the importance and applicability of the given algorithm. The tables in the examples show the efficiency of the method.

  3. Research on performance evaluation and anti-scaling mechanism of green scale inhibitors by static and dynamic methods

    International Nuclear Information System (INIS)

    Liu, D.

    2011-01-01

Increasing environmental concerns and discharge limitations have imposed additional challenges in treating process waters. Thus, the concept of 'green chemistry' was proposed, and green scale inhibitors became a focus of water treatment technology. Finding economical and environmentally friendly inhibitors is one of the major research goals nowadays. In this dissertation, the inhibition performance of different phosphonates as CaCO3 scale inhibitors in simulated cooling water was evaluated. Homo-, co-, and ter-polymers were also investigated for their performance as Ca-phosphonate inhibitors. Adding polymers as inhibitors together with phosphonates could reduce Ca-phosphonate precipitation and enhance the inhibition efficiency for CaCO3 scale. The synergistic effect of polyaspartic acid (PASP) and polyepoxysuccinic acid (PESA) on scale inhibition was studied using both static and dynamic methods. Results showed that the anti-scaling performance of PASP combined with PESA was superior to that of PASP or PESA alone for CaCO3, CaSO4, and BaSO4 scale. The influence of dosage, temperature, and Ca2+ concentration was also investigated in a simulated cooling water circuit. Moreover, SEM analysis demonstrated the modification of crystal morphology in the presence of PASP and PESA. In this work, we also investigated the respective inhibition effectiveness of copper and zinc ions for scaling in drinking water by the method of Rapid Controlled Precipitation (RCP). The results indicated that zinc and copper ions are highly efficient inhibitors at low concentrations, and SEM and IR analyses showed that copper and zinc ions could affect calcium carbonate germination and change the crystal morphology. Moreover, the influence of temperature and dissolved CO2 on the scaling potential of a mineral water (Salvetat) in the presence of copper and zinc ions was studied by laboratory experiments. An ideal scale inhibitor should be a solid form

  4. Collaborative teaching of an integrated methods course

    Directory of Open Access Journals (Sweden)

    George Zhou

    2011-03-01

Full Text Available With increasing diversity in American schools, teachers need to be able to collaborate in teaching. University courses are widely considered a stage on which to demonstrate or model ways of collaborating. To respond to this call, three authors team-taught an integrated methods course at an urban public university in New York City. Following a qualitative research design, this study explored both the instructors' and the pre-service teachers' experiences with this course. The findings indicate that collaborative teaching of an integrated methods course is feasible and beneficial to both instructors and pre-service teachers. For the instructors, collaborative teaching was a reciprocal learning process in which they were engaged in thinking about teaching in a broader and more innovative way. For the pre-service teachers, the collaborative course not only helped them understand how three different subjects could be related to each other, but also provided opportunities to actually see how collaboration could take place in teaching. Their understanding of collaborative teaching was enhanced after the course.

  5. Phase Composition Maps integrate mineral compositions with rock textures from the micro-meter to the thin section scale

    Science.gov (United States)

    Willis, Kyle V.; Srogi, LeeAnn; Lutz, Tim; Monson, Frederick C.; Pollock, Meagen

    2017-12-01

Textures and compositions are critical information for interpreting rock formation. Existing methods to integrate both types of information favor high-resolution images of mineral compositions over small areas or low-resolution images of larger areas for phase identification. The method in this paper produces images of individual phases in which textural and compositional details are resolved over three orders of magnitude, from tens of micrometers to tens of millimeters. To construct these images, called Phase Composition Maps (PCMs), we make use of the resolution in backscattered electron (BSE) images and calibrate the gray scale values with mineral analyses by energy-dispersive X-ray spectrometry (EDS). The resulting images show the area of a standard thin section (roughly 40 mm × 20 mm) with spatial resolution as good as 3.5 μm/pixel, or more than 81 000 pixels/mm², comparable to the resolution of X-ray element maps produced by wavelength-dispersive spectrometry (WDS). Procedures to create PCMs for mafic igneous rocks with multivariate linear regression models for minerals with solid solution (olivine, plagioclase feldspar, and pyroxenes) are presented and are applicable to other rock types. PCMs are processed using threshold functions based on the regression models to image specific composition ranges of minerals. PCMs are constructed using widely-available instrumentation: a scanning-electron microscope (SEM) with BSE and EDS X-ray detectors and standard image processing software such as ImageJ and Adobe Photoshop. Three brief applications illustrate the use of PCMs as petrologic tools: to reveal mineral composition patterns at multiple scales; to generate crystal size distributions for intracrystalline compositional zones and compare growth over time; and to image spatial distributions of minerals at different stages of magma crystallization by integrating textures and compositions with thermodynamic modeling.
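
    The calibration-and-threshold step behind a PCM can be sketched as follows, with a synthetic BSE image and made-up calibration points standing in for the EDS data:

    ```python
    # PCM-style sketch: calibrate BSE gray values against EDS compositions with a
    # linear regression, then threshold the image into one composition-range map.
    # The image and calibration points are synthetic stand-ins.
    import numpy as np

    rng = np.random.default_rng(6)
    # calibration: anorthite content (An%) from EDS vs mean BSE gray value at the same spots
    an_pct = np.array([20, 35, 50, 65, 80], dtype=float)
    gray = 90 + 1.1 * an_pct + rng.normal(0, 2, 5)
    slope, intercept = np.polyfit(gray, an_pct, 1)     # gray -> An% model

    bse = rng.uniform(80, 190, size=(512, 512))        # stand-in BSE image
    an_map = slope * bse + intercept                   # per-pixel composition estimate
    mask = (an_map >= 40) & (an_map < 60)              # image of one composition range
    print("pixels mapped to An40-60:", int(mask.sum()))
    ```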

  6. Scaling Analysis Techniques to Establish Experimental Infrastructure for Component, Subsystem, and Integrated System Testing

    Energy Technology Data Exchange (ETDEWEB)

Sabharwall, Piyush [Idaho National Laboratory (INL), Idaho Falls, ID (United States); O'Brien, James E. [Idaho National Laboratory (INL), Idaho Falls, ID (United States); McKellar, Michael G. [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Housley, Gregory K. [Idaho National Laboratory (INL), Idaho Falls, ID (United States); Bragg-Sitton, Shannon M. [Idaho National Laboratory (INL), Idaho Falls, ID (United States)]

    2015-03-01

    Hybrid energy system research has the potential to expand the application for nuclear reactor technology beyond electricity. The purpose of this research is to reduce both technical and economic risks associated with energy systems of the future. Nuclear hybrid energy systems (NHES) mitigate the variability of renewable energy sources, provide opportunities to produce revenue from different product streams, and avoid capital inefficiencies by matching electrical output to demand by using excess generation capacity for other purposes when it is available. An essential step in the commercialization and deployment of this advanced technology is scaled testing to demonstrate integrated dynamic performance of advanced systems and components when risks cannot be mitigated adequately by analysis or simulation. Further testing in a prototypical environment is needed for validation and higher confidence. This research supports the development of advanced nuclear reactor technology and NHES, and their adaptation to commercial industrial applications that will potentially advance U.S. energy security, economy, and reliability and further reduce carbon emissions. Experimental infrastructure development for testing and feasibility studies of coupled systems can similarly support other projects having similar developmental needs and can generate data required for validation of models in thermal energy storage and transport, energy, and conversion process development. Experiments performed in the Systems Integration Laboratory will acquire performance data, identify scalability issues, and quantify technology gaps and needs for various hybrid or other energy systems. This report discusses detailed scaling (component and integrated system) and heat transfer figures of merit that will establish the experimental infrastructure for component, subsystem, and integrated system testing to advance the technology readiness of components and systems to the level required for commercial

  7. Application of the Monte Carlo integration method in calculations of dose distributions in HDR-Brachytherapy

    Energy Technology Data Exchange (ETDEWEB)

Baltas, D; Geramani, K N; Ioannidis, G T; Kolotas, C; Zamboglou, N [Strahlenklinik, Stadtische Kliniken Offenbach, Offenbach (Germany); Giannouli, S [Department of Electrical and Computer Engineering, National Technical University of Athens, Athens (Greece)]

    1999-12-31

Source anisotropy is a very important factor in brachytherapy quality assurance of high dose rate (HDR) Ir-192 afterloading stepping sources. If anisotropy is not taken into account, then doses received by a brachytherapy patient in certain directions can be in error by a clinically significant amount. Experimental measurements of anisotropy are very labour intensive. We have shown that, within acceptable limits of accuracy, Monte Carlo integration (MCI) of a modified Sievert integral (a 3D generalisation) can provide the necessary data within a much shorter time scale than experiments can. Hence MCI can be used in routine quality assurance schedules whenever a new design of HDR or PDR Ir-192 source is used for brachytherapy afterloading. Our MCI calculation results are comparable with published experimental data and Monte Carlo simulation data for microSelectron and VariSource Ir-192 sources. We have shown not only that MCI offers advantages over alternative numerical integration methods, but also that treating filtration coefficients as radial-distance-dependent functions improves Sievert integral accuracy at low energies. This paper also provides anisotropy data for three new Ir-192 sources, one for the microSelectron-HDR and two for the microSelectron-PDR, for which data are currently not available. The information obtained in this study can be incorporated into clinical practice.
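
    The MCI approach can be illustrated on the classical one-dimensional Sievert integral, a simpler relative of the modified 3D form used in the paper; the geometry and attenuation values below are assumptions, not Ir-192 source data.

    ```python
    # Monte Carlo integration of a classical 1D Sievert integral:
    # S = int_0^theta_max exp(-mu * d / cos(theta)) dtheta,
    # i.e. the filtered contribution of a line source. Values are illustrative.
    import numpy as np

    mu, d, theta_max = 4.0, 0.1, np.pi / 3      # [1/cm], [cm], [rad] (assumed)

    def sievert_mc(n):
        theta = np.random.default_rng(7).uniform(0.0, theta_max, n)
        f = np.exp(-mu * d / np.cos(theta))
        est = theta_max * f.mean()                       # MC estimate of the integral
        err = theta_max * f.std(ddof=1) / np.sqrt(n)     # 1-sigma statistical error
        return est, err

    for n in (10_000, 1_000_000):
        est, err = sievert_mc(n)
        print(f"n={n:>9,d}: S = {est:.6f} +/- {err:.6f}")
    ```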

  8. Operating Reserves and Wind Power Integration: An International Comparison; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Milligan, M.; Donohoo, P.; Lew, D.; Ela, E.; Kirby, B.; Holttinen, H.; Lannoye, E.; Flynn, D.; O'Malley, M.; Miller, N.; Eriksen, P. B.; Gottig, A.; Rawn, B.; Gibescu, M.; Lazaro, E. G.; Robitaille, A.; Kamwa, I.

    2010-10-01

    This paper provides a high-level international comparison of methods and key results from both operating practice and integration analysis, based on informal collaboration within International Energy Agency Task 25: Large-scale Wind Integration.

  9. Low-frequency scaling of the standard and mixed magnetic field and Müller integral equations

    KAUST Repository

    Bogaert, Ignace

    2014-02-01

    The standard and mixed discretizations for the magnetic field integral equation (MFIE) and the Müller integral equation (MUIE) are investigated in the context of low-frequency (LF) scattering problems involving simply connected scatterers. It is proved that, at low frequencies, the frequency scaling of the nonsolenoidal part of the solution current can be incorrect for the standard discretization. In addition, it is proved that the frequency scaling obtained with the mixed discretization is correct. The reason for this problem in the standard discretization scheme is the absence of exact solenoidal currents in the rotated RWG finite element space. The adoption of the mixed discretization scheme eliminates this problem and leads to a well-conditioned system of linear equations that remains accurate at low frequencies. Numerical results confirm these theoretical predictions and also show that, when the frequency is lowered, a finer and finer mesh is required to keep the accuracy constant with the standard discretization. © 1963-2012 IEEE.

  10. A model for scale up of family health innovations in low-income and middle-income settings: a mixed methods study.

    Science.gov (United States)

    Bradley, Elizabeth H; Curry, Leslie A; Taylor, Lauren A; Pallas, Sarah Wood; Talbert-Slagle, Kristina; Yuan, Christina; Fox, Ashley; Minhas, Dilpreet; Ciccone, Dana Karen; Berg, David; Pérez-Escamilla, Rafael

    2012-01-01

    Many family health innovations that have been shown to be both efficacious and cost-effective fail to scale up for widespread use, particularly in low-income and middle-income countries (LMIC). Although individual cases of successful scale-up, in which widespread uptake occurs, have been described, we lack an integrated and practical model of scale-up that may be applicable to a wide range of public health innovations in LMIC. To develop an integrated and practical model of scale-up that synthesises experiences of family health programmes in LMICs, we conducted a mixed methods study that included in-depth interviews with 33 key informants and a systematic review of peer-reviewed and grey literature from 11 electronic databases and 20 global health agency web sites. We included key informants and studies that reported on the scale-up of several family health innovations: Depo-Provera as an example of a product innovation, exclusive breastfeeding as an example of a health behaviour innovation, community health workers (CHWs) as an example of an organisational innovation, and social marketing as an example of a business model innovation. Key informants were drawn from non-governmental, government and international organisations using snowball sampling. An article was excluded if it did not meet the study's definition of the innovation; did not address dissemination, diffusion, scale-up or sustainability of the innovation; did not address low-income or middle-income countries; was superficial in its discussion and/or did not provide empirical evidence about scale-up of the innovation; was not available online in full text; or was not available in English, French, Spanish or Portuguese. This resulted in a final sample of 41 peer-reviewed articles and 30 grey literature sources. We used the constant comparative method of qualitative data analysis to extract recurrent themes from the interviews, and we integrated these themes with findings from the

  11. Cost optimization of biofuel production – The impact of scale, integration, transport and supply chain configurations

    NARCIS (Netherlands)

    de Jong, S.A.|info:eu-repo/dai/nl/41200836X; Hoefnagels, E.T.A.|info:eu-repo/dai/nl/313935998; Wetterlund, Elisabeth; Pettersson, Karin; Faaij, André; Junginger, H.M.|info:eu-repo/dai/nl/202130703

    2017-01-01

    This study uses a geographically-explicit cost optimization model to analyze the impact of and interrelation between four cost reduction strategies for biofuel production: economies of scale, intermodal transport, integration with existing industries, and distributed supply chain configurations

  12. Integration and segregation of large-scale brain networks during short-term task automatization.

    Science.gov (United States)

    Mohr, Holger; Wolfensteller, Uta; Betzel, Richard F; Mišić, Bratislav; Sporns, Olaf; Richiardi, Jonas; Ruge, Hannes

    2016-11-03

    The human brain is organized into large-scale functional networks that can flexibly reconfigure their connectivity patterns, supporting both rapid adaptive control and long-term learning processes. However, it has remained unclear how short-term network dynamics support the rapid transformation of instructions into fluent behaviour. Comparing fMRI data of a learning sample (N=70) with a control sample (N=67), we find that increasingly efficient task processing during short-term practice is associated with a reorganization of large-scale network interactions. Practice-related efficiency gains are facilitated by enhanced coupling between the cingulo-opercular network and the dorsal attention network. Simultaneously, short-term task automatization is accompanied by decreasing activation of the fronto-parietal network, indicating a release of high-level cognitive control, and a segregation of the default mode network from task-related networks. These findings suggest that short-term task automatization is enabled by the brain's ability to rapidly reconfigure its large-scale network organization involving complementary integration and segregation processes.

  13. Gamma Ray Tomographic Scan Method for Large Scale Industrial Plants

    International Nuclear Information System (INIS)

    Moon, Jin Ho; Jung, Sung Hee; Kim, Jong Bum; Park, Jang Geun

    2011-01-01

    Gamma-ray tomography systems have been used to investigate chemical processes for the last decade. There have been many cases of gamma-ray tomography for laboratory-scale work but not many for industrial-scale work. Non-tomographic equipment with gamma-ray sources is often used in process diagnosis: gamma radiography, gamma column scanning and the radioisotope tracer technique are examples of gamma-ray applications in industry. Although much non-tomographic gamma-ray equipment operates outdoors, most gamma-ray tomographic systems have remained indoor equipment. However, as gamma tomography has developed, the demand for gamma tomography of real-scale plants has also increased. To develop an industrial-scale system, we introduced a gamma-ray tomographic system with fixed detectors and a rotating source. The general system configuration is similar to fourth-generation geometry, but the main effort has been made to enable instant installation of the system at a real-scale industrial plant. This work is a first attempt to apply fourth-generation industrial gamma tomographic scanning experimentally. Individual 0.5-inch NaI detectors were used for gamma-ray detection, configured in a circular arrangement around the industrial plant. This tomographic scan method can reduce mechanical complexity and requires a much smaller space than a conventional CT. Those properties make it easy to obtain measurement data for a real-scale plant.

  14. Root Systems Biology: Integrative Modeling across Scales, from Gene Regulatory Networks to the Rhizosphere

    Science.gov (United States)

    Hill, Kristine; Porco, Silvana; Lobet, Guillaume; Zappala, Susan; Mooney, Sacha; Draye, Xavier; Bennett, Malcolm J.

    2013-01-01

    Genetic and genomic approaches in model organisms have advanced our understanding of root biology over the last decade. Recently, however, systems biology and modeling have emerged as important approaches, as our understanding of root regulatory pathways has become more complex and interpreting pathway outputs has become less intuitive. To relate root genotype to phenotype, we must move beyond the examination of interactions at the genetic network scale and employ multiscale modeling approaches to predict emergent properties at the tissue, organ, organism, and rhizosphere scales. Understanding the underlying biological mechanisms and the complex interplay between systems at these different scales requires an integrative approach. Here, we describe examples of such approaches and discuss the merits of developing models to span multiple scales, from network to population levels, and to address dynamic interactions between plants and their environment. PMID:24143806

  15. The Virasoro algebra in integrable hierarchies and the method of matrix models

    International Nuclear Information System (INIS)

    Semikhatov, A.M.

    1992-01-01

    The action of the Virasoro algebra on hierarchies of nonlinear integrable equations, and also the structure and consequences of Virasoro constraints on these hierarchies, are studied. It is proposed that a broad class of hierarchies, restricted by Virasoro constraints, can be defined in terms of dressing operators hidden in the structure of integrable systems. The Virasoro-algebra representation constructed on the dressing operators displays a number of analogies with structures in conformal field theory. The formulation of the Virasoro constraints that stems from this representation makes it possible to translate into the language of integrable systems a number of concepts from the method of the 'matrix models' that describe nonperturbative quantum gravity and, in particular, to realize a 'hierarchical' version of the double scaling limit. From the Virasoro constraints written in terms of the dressing operators, generalized loop equations are derived, which makes it possible to do calculations on a reconstruction of the field-theoretical description. The reduction of the Kadomtsev-Petviashvili (KP) hierarchy, subject to Virasoro constraints, to generalized Korteweg-de Vries (KdV) hierarchies is implemented, and the corresponding representation of the Virasoro algebra on these hierarchies is found both in the language of scalar differential operators and in the matrix formalism of Drinfel'd and Sokolov. The string equation in the matrix formalism does not replicate the structure of the scalar string equation. The symmetry algebras of the KP and N-KdV hierarchies restricted by Virasoro constraints are calculated: a relationship is established with algebras from the family W_∞(J) of infinite W-algebras.
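
    For reference, the Virasoro algebra acting throughout is the central extension of the algebra of vector fields on the circle: its generators L_m and central charge c satisfy

        [L_m, L_n] \;=\; (m - n)\,L_{m+n} \;+\; \frac{c}{12}\,\left(m^{3} - m\right)\,\delta_{m+n,\,0}.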

  16. Climate warming, marine protected areas and the ocean-scale integrity of coral reef ecosystems.

    Directory of Open Access Journals (Sweden)

    Nicholas A J Graham

    Full Text Available Coral reefs have emerged as one of the ecosystems most vulnerable to climate variation and change. While the contribution of a warming climate to the loss of live coral cover has been well documented across large spatial and temporal scales, the associated effects on fish have not. Here, we respond to recent and repeated calls to assess the importance of local management in conserving coral reefs in the context of global climate change. Such information is important, as coral reef fish assemblages are the most species-dense vertebrate communities on earth, contributing critical ecosystem functions and providing crucial ecosystem services to human societies in tropical countries. Our assessment of the impacts of the 1998 mass bleaching event on coral cover, reef structural complexity, and reef-associated fishes spans 7 countries, 66 sites and 26 degrees of latitude in the Indian Ocean. Using Bayesian meta-analysis we show that changes in the size structure, diversity and trophic composition of the reef fish community have followed coral declines. Although the ocean-scale integrity of these coral reef ecosystems has been lost, it is positive to see that the effects are spatially variable at multiple scales, with impacts and vulnerability affected by geography but not management regime. Existing no-take marine protected areas still support high biomass of fish; however, they had no positive effect on the ecosystem response to large-scale disturbance. This suggests a need for future conservation and management efforts to identify and protect regional refugia, which should be integrated into existing management frameworks and combined with policies to improve system-wide resilience to climate variation and change.

  17. Climate warming, marine protected areas and the ocean-scale integrity of coral reef ecosystems.

    Science.gov (United States)

    Graham, Nicholas A J; McClanahan, Tim R; MacNeil, M Aaron; Wilson, Shaun K; Polunin, Nicholas V C; Jennings, Simon; Chabanet, Pascale; Clark, Susan; Spalding, Mark D; Letourneur, Yves; Bigot, Lionel; Galzin, René; Ohman, Marcus C; Garpe, Kajsa C; Edwards, Alasdair J; Sheppard, Charles R C

    2008-08-27

    Coral reefs have emerged as one of the ecosystems most vulnerable to climate variation and change. While the contribution of a warming climate to the loss of live coral cover has been well documented across large spatial and temporal scales, the associated effects on fish have not. Here, we respond to recent and repeated calls to assess the importance of local management in conserving coral reefs in the context of global climate change. Such information is important, as coral reef fish assemblages are the most species-dense vertebrate communities on earth, contributing critical ecosystem functions and providing crucial ecosystem services to human societies in tropical countries. Our assessment of the impacts of the 1998 mass bleaching event on coral cover, reef structural complexity, and reef-associated fishes spans 7 countries, 66 sites and 26 degrees of latitude in the Indian Ocean. Using Bayesian meta-analysis we show that changes in the size structure, diversity and trophic composition of the reef fish community have followed coral declines. Although the ocean-scale integrity of these coral reef ecosystems has been lost, it is positive to see that the effects are spatially variable at multiple scales, with impacts and vulnerability affected by geography but not management regime. Existing no-take marine protected areas still support high biomass of fish; however, they had no positive effect on the ecosystem response to large-scale disturbance. This suggests a need for future conservation and management efforts to identify and protect regional refugia, which should be integrated into existing management frameworks and combined with policies to improve system-wide resilience to climate variation and change.

  18. The future of genome-scale modeling of yeast through integration of a transcriptional regulatory network

    DEFF Research Database (Denmark)

    Liu, Guodong; Marras, Antonio; Nielsen, Jens

    2014-01-01

    Metabolism is regulated at multiple levels in response to the changes of internal or external conditions. Transcriptional regulation plays an important role in regulating many metabolic reactions by altering the concentrations of metabolic enzymes. Thus, integration of the transcriptional regulatory information is necessary to improve the accuracy and predictive ability of metabolic models. Here we review the strategies for the reconstruction of a transcriptional regulatory network (TRN) for yeast and the integration of such a reconstruction into a flux balance analysis-based metabolic model. While many large-scale TRN reconstructions have been reported for yeast, these reconstructions still need to be improved regarding the functionality and dynamic property of the regulatory interactions. In addition, mathematical modeling approaches need to be further developed to efficiently integrate...

  19. Systems and methods for integrating ion mobility and ion trap mass spectrometers

    Science.gov (United States)

    Ibrahim, Yehia M.; Garimella, Sandilya; Prost, Spencer A.

    2018-04-10

    Described herein are examples of systems and methods for integrating IMS and MS systems. In certain examples, systems and methods for decoding double multiplexed data are described. The systems and methods can also perform multiple refining procedures in order to minimize the demultiplexing artifacts. The systems and methods can be used, for example, for the analysis of proteomic and petroleum samples, where the integration of IMS and high mass resolution are used for accurate assignment of molecular formulae.

  20. The integral equation method applied to eddy currents

    International Nuclear Information System (INIS)

    Biddlecombe, C.S.; Collie, C.J.; Simkin, J.; Trowbridge, C.W.

    1976-04-01

    An algorithm for the numerical solution of eddy current problems is described, based on the direct solution of the integral equation for the potentials. In this method only the conducting and iron regions need to be divided into elements, and there are no boundary conditions. Results from two computer programs using this method for iron-free problems for various two-dimensional geometries are presented and compared with analytic solutions. (author)

  1. Extreme Scale FMM-Accelerated Boundary Integral Equation Solver for Wave Scattering

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed; Al Farhan, Mohammed; Al-Harthi, Noha A.; Chen, Rui; Yokota, Rio; Bagci, Hakan; Keyes, David E.

    2018-01-01

    We present an extreme-scale FMM-accelerated boundary integral equation solver for wave scattering, which uses FMM as a matrix-vector multiplication inside the GMRES iterative method. Our FMM Helmholtz kernels treat nontrivial singular and near-field integration points. We implement highly optimized kernels for both shared and distributed memory.

  2. Two new solutions to the third-order symplectic integration method

    International Nuclear Information System (INIS)

    Iwatsu, Reima

    2009-01-01

    Two new solutions are obtained for the symplecticity conditions of the explicit third-order partitioned Runge-Kutta time integration method. One of them has a larger stability limit and better dispersion properties than Ruth's method.
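
    The two new coefficient sets are not reproduced in this record, but the baseline they are measured against, Ruth's classical third-order scheme, is easy to sketch for a separable Hamiltonian H = p²/2 + V(q) with unit mass. A minimal Python illustration follows; note that coefficient-ordering conventions vary between references.

    import numpy as np

    # Ruth's classical third-order symplectic coefficients (drift c_i, kick d_i),
    # in one commonly quoted ordering (Ruth 1983); conventions differ by source.
    C = (1.0, -2.0 / 3.0, 2.0 / 3.0)
    D = (-1.0 / 24.0, 3.0 / 4.0, 7.0 / 24.0)

    def ruth3_step(q, p, h, force):
        """One third-order symplectic step for H = p^2/2 + V(q), unit mass."""
        for c, d in zip(C, D):
            q = q + c * h * p          # drift: advance position
            p = p + d * h * force(q)   # kick: advance momentum
        return q, p

    # Harmonic oscillator test: energy oscillates but shows no secular drift.
    q, p = 1.0, 0.0
    for _ in range(10_000):
        q, p = ruth3_step(q, p, 0.01, lambda x: -x)
    print(q, p, 0.5 * p**2 + 0.5 * q**2)   # energy stays near 0.5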

  3. Two pricing methods for solving an integrated commercial fishery ...

    African Journals Online (AJOL)

    In this paper, we develop two novel pricing methods for solving an integer program. We demonstrate the methods by solving an integrated commercial fishery planning model (IFPM). In this problem, a fishery manager must schedule fishing trawlers (determine when and where the trawlers should go fishing, and when the ...

  4. 300 Area Integrated Field-Scale Subsurface Research Challenge (IFRC) Field Site Management Plan

    Energy Technology Data Exchange (ETDEWEB)

    Freshley, Mark D.

    2008-12-31

    Pacific Northwest National Laboratory (PNNL) has established the 300 Area Integrated Field-Scale Subsurface Research Challenge (300 Area IFRC) on the Hanford Site in southeastern Washington State for the U.S. Department of Energy’s (DOE) Office of Biological and Environmental Research (BER) within the Office of Science. The project is funded by the Environmental Remediation Sciences Division (ERSD). The purpose of the project is to conduct research at the 300 IFRC to investigate multi-scale mass transfer processes associated with a subsurface uranium plume impacting both the vadose zone and groundwater. The management approach for the 300 Area IFRC requires that a Field Site Management Plan be developed. This is an update of the plan to reflect the installation of the well network and other changes.

  5. ADVANCING THE STUDY OF VIOLENCE AGAINST WOMEN USING MIXED METHODS: INTEGRATING QUALITATIVE METHODS INTO A QUANTITATIVE RESEARCH PROGRAM

    Science.gov (United States)

    Testa, Maria; Livingston, Jennifer A.; VanZile-Tamsen, Carol

    2011-01-01

    A mixed methods approach, combining quantitative with qualitative data methods and analysis, offers a promising means of advancing the study of violence. Integrating semi-structured interviews and qualitative analysis into a quantitative program of research on women’s sexual victimization has resulted in valuable scientific insight and generation of novel hypotheses for testing. This mixed methods approach is described and recommendations for integrating qualitative data into quantitative research are provided. PMID:21307032

  6. Calculations of Neutron Flux Distributions by Means of Integral Transport Methods

    Energy Technology Data Exchange (ETDEWEB)

    Carlvik, I

    1967-05-15

    Flux distributions have been calculated, mainly in one energy group, for a number of systems representing geometries of interest for reactor calculations. Integral transport methods of two kinds were utilised: collision probabilities (CP) and the discrete method (DIT). The geometries considered comprise the three one-dimensional geometries (planar, spherical and annular), and further a square cell with a circular fuel rod and a rod cluster cell with a circular outer boundary. For the annular cells both methods (CP and DIT) were used and the results were compared. The purpose of the work is twofold: firstly, to demonstrate the versatility and efficacy of integral transport methods, and secondly, to serve as a guide for anybody who wants to use them.
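
    As a taste of the quantities that collision probability methods are built from, here is the textbook first-flight escape probability for a homogeneous slab with a flat isotropic source, computed with SciPy's exponential integral. This is a standard result, not code from this report.

    from scipy.special import expn

    def slab_escape_probability(tau):
        """First-flight escape probability for a homogeneous slab of optical
        thickness tau with a flat, isotropic source:
            P_esc = (1 - 2*E3(tau)) / (2*tau),
        where E3 is the third exponential integral."""
        if tau < 1e-8:
            return 1.0                  # optically thin limit
        return (1.0 - 2.0 * expn(3, tau)) / (2.0 * tau)

    def slab_collision_probability(tau):
        """Probability that a source neutron collides before escaping."""
        return 1.0 - slab_escape_probability(tau)

    for tau in (0.01, 0.1, 1.0, 10.0):
        print(tau, slab_collision_probability(tau))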

  7. High sensitive quench detection method using an integrated test wire

    International Nuclear Information System (INIS)

    Fevrier, A.; Tavergnier, J.P.; Nithart, H.; Kiblaire, M.; Duchateau, J.L.

    1981-01-01

    A highly sensitive quench detection method that works even in the presence of an external perturbing magnetic field is reported. The quench signal is obtained from the difference between the voltages at the superconducting winding terminals and at the terminals of a secondary winding strongly coupled to the primary. The secondary winding can consist of a "zero-current strand" of the superconducting cable, not connected to one of the winding terminals, or an integrated normal test wire inside the superconducting cable. Experimental results on quench detection obtained by this method are described. It is shown that the integrated test wire method leads to efficient and sensitive quench detection, especially in the presence of an external perturbing magnetic field.

  8. Visualizing Volume to Help Students Understand the Disk Method on Calculus Integral Course

    Science.gov (United States)

    Tasman, F.; Ahmad, D.

    2018-04-01

    Much research has shown that students have difficulty in understanding the concepts of integral calculus. This research therefore designs a classroom activity, using the design research method, to assist students in understanding the integral concept, especially in calculating the volume of solids of revolution using the disk method. To support student development in understanding integral concepts, this research uses a realistic mathematics approach integrating the GeoGebra software. First-year university students taking a calculus course (approximately 30 people) were chosen to implement the classroom activity that had been designed. The results of retrospective analysis show that visualizing the volume of solids of revolution using GeoGebra software can assist students in understanding the disk method as one way of calculating the volume of a solid of revolution.
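
    To complement the visualization, the disk method itself is easy to check numerically: the volume of the solid obtained by rotating y = f(x) about the x-axis is V = π ∫ f(x)² dx. A minimal sketch (not part of the classroom activity described above):

    import numpy as np

    def disk_method_volume(f, a, b, n=100_000):
        """Approximate V = pi * integral_a^b f(x)^2 dx, the disk-method
        volume of the solid of revolution of y = f(x) about the x-axis."""
        dx = (b - a) / n
        x = a + (np.arange(n) + 0.5) * dx   # midpoints of n subintervals
        return np.pi * np.sum(f(x) ** 2) * dx

    # Rotating y = x on [0, 1] gives a cone of volume pi/3 ≈ 1.047198
    print(disk_method_volume(lambda x: x, 0.0, 1.0))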

  9. Deterministic methods to solve the integral transport equation in neutronic

    International Nuclear Information System (INIS)

    Warin, X.

    1993-11-01

    We present a synthesis of the methods used to solve the integral transport equation in neutronics. This formulation is above all used to compute solutions in 2D for heterogeneous assemblies. Three kinds of methods are described: the collision probability method; the interface current method; and the current coupling collision probability method. These methods do not seem to be the most effective in 3D. (author). 9 figs

  10. A multiple-scaling method of the computation of threaded structures

    International Nuclear Information System (INIS)

    Andrieux, S.; Leger, A.

    1989-01-01

    The numerical computation of threaded structures usually leads to very large finite element problems. It is therefore very difficult to carry out parametric studies, especially in non-linear cases involving plasticity or unilateral contact conditions. Nevertheless, these parametric studies are essential in many industrial problems, for instance for the evaluation of various repair processes for the closure studs of PWRs. It is well known that such repairs generally involve several modifications of the thread geometry, of the number of active threads, of the flange clamping conditions, and so on. This paper is devoted to the description of a two-scale method which easily allows parametric studies. The main idea of this method consists of dividing the problem into a global part and a local part. The local problem is solved by FEM on the precise geometry of the thread for some elementary loadings. The global problem is formulated at the gudgeon scale and is reduced to a one-dimensional one. The resolution of this global problem entails an insignificant computational cost. A post-processing step then gives the stress field at the thread scale anywhere in the assembly. After recalling some principles of the two-scale approach, the method is described. Validation by comparison with a direct FE computation and some further applications are presented.

  11. Hybrid Finite Element and Volume Integral Methods for Scattering Using Parametric Geometry

    DEFF Research Database (Denmark)

    Volakis, John L.; Sertel, Kubilay; Jørgensen, Erik

    2004-01-01

    In this paper we address several topics relating to the development and implementation of volume integral and hybrid finite element methods for electromagnetic modeling. Comparisons of volume integral equation formulations with the finite element-boundary integral method are given in terms of accuracy ... of vanishing divergence within the element but non-zero curl. In addition, a new domain decomposition is introduced for solving array problems involving several million degrees of freedom. Three orders of magnitude CPU reduction is demonstrated for such applications.

  12. Operation strategy for a lab-scale grid-connected photovoltaic generation system integrated with battery energy storage

    International Nuclear Information System (INIS)

    Jou, Hurng-Liahng; Chang, Yi-Hao; Wu, Jinn-Chang; Wu, Kuen-Der

    2015-01-01

    Highlights: • The operation strategy for a grid-connected PV generation system integrated with battery energy storage is proposed. • The PV system is composed of an inverter and two DC-DC converters. • The negative impact of grid-connected PV generation systems on the grid can be alleviated by integrating a battery. • The operation of the developed system can be divided into nine modes. - Abstract: The operation strategy for a lab-scale grid-connected photovoltaic generation system integrated with battery energy storage is proposed in this paper. The photovoltaic generation system is composed of a full-bridge inverter, a DC-DC boost converter, an isolated bidirectional DC-DC converter, a solar cell array and a battery set. Since the battery set acts as an energy buffer to adjust the power generation of the solar cell array, the negative impact on power quality caused by the intermittent and unstable output power of a solar cell array is alleviated, and the penetration rate of the grid-connected photovoltaic generation system can be increased. A lab-scale prototype is developed to verify the performance of the system. The experimental results show that it achieves the expected performance.
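
    The nine operation modes are not enumerated in this record. As a generic illustration of the power-balance decision such an operation strategy must make, here is a toy dispatch sketch; the function, thresholds and sign conventions are all hypothetical and are not taken from the paper.

    def dispatch(pv_power, load_power, soc, soc_min=0.2, soc_max=0.9):
        """Toy dispatch logic for a grid-connected PV + battery system.
        Returns (battery_power, grid_power); positive battery_power charges
        the battery, positive grid_power exports. Thresholds are illustrative."""
        surplus = pv_power - load_power
        if surplus >= 0.0:
            battery = surplus if soc < soc_max else 0.0   # store surplus if room
            grid = surplus - battery                      # export the rest
        else:
            battery = surplus if soc > soc_min else 0.0   # discharge if allowed
            grid = surplus - battery                      # import the shortfall
        return battery, grid

    print(dispatch(pv_power=3.0, load_power=1.0, soc=0.5))   # charging mode
    print(dispatch(pv_power=0.0, load_power=2.0, soc=0.5))   # discharging mode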

  13. Method of producing carbon coated nano- and micron-scale particles

    Science.gov (United States)

    Perry, W. Lee; Weigle, John C; Phillips, Jonathan

    2013-12-17

    A method of making carbon-coated nano- or micron-scale particles comprising entraining particles in an aerosol gas, providing a carbon-containing gas, providing a plasma gas, mixing the aerosol gas, the carbon-containing gas, and the plasma gas proximate a torch, bombarding the mixed gases with microwaves, and collecting resulting carbon-coated nano- or micron-scale particles.

  14. A method of integration of atomistic simulations and continuum mechanics by collecting of dynamical systems with dimensional reduction

    International Nuclear Information System (INIS)

    Kaczmarek, J.

    2002-01-01

    Elementary processes responsible for phenomena in materials are frequently related to scales close to the atomic one. Therefore atomistic simulations are important for materials science. On the other hand, continuum mechanics is widely applied in the mechanics of materials. It seems inevitable that both methods will gradually integrate. A multiscale method for the integration of these approaches, called collection of dynamical systems with dimensional reduction, is introduced in this work. The dimensional reduction procedure realizes the transition between models at various scales, from an elementary dynamical system (EDS) to a reduced dynamical system (RDS). Mappings which transform variables and forces, a skeletal dynamical system (SDS) and a set of approximation and identification methods are the main components of this procedure. The skeletal dynamical system is a set of dynamical systems parameterized by some constants and has variables related to the dimensionally reduced model. These constants are identified with the aid of solutions of the elementary dynamical system. As a result we obtain a dimensionally reduced dynamical system which describes phenomena in an averaged way in comparison with the EDS. The concept of integrating atomistic simulations with continuum mechanics consists in using a dynamical system describing the evolution of atoms as the elementary dynamical system. Then, we introduce a continuum skeletal dynamical system within the dimensional reduction procedure. In order to construct such a system we have to modify the continuum mechanics formulation to some degree. Namely, we formalize the scale of averaging for the continuum theory and as a result consider a continuum with finite-dimensional fields only. Then, realization of the dimensional reduction is possible. A numerical example of the dimensional reduction procedure is shown. We consider a one-dimensional chain of atoms interacting through the Lennard-Jones potential. The evolution of this system is described by an elementary dynamical system.

  15. Multi-scale calculation based on dual domain material point method combined with molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Dhakal, Tilak Raj [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-02-27

    This dissertation combines the dual domain material point method (DDMP) with molecular dynamics (MD) in an attempt to create a multi-scale numerical method to simulate materials undergoing large deformations with high strain rates. In these types of problems, the material is often in a thermodynamically non-equilibrium state, and conventional constitutive relations are often not available. In this method, the closure quantities, such as stress, at each material point are calculated from an MD simulation of a group of atoms surrounding the material point. Rather than restricting the multi-scale simulation to a small spatial region, such as phase interfaces or crack tips, this multi-scale method can be used to consider non-equilibrium thermodynamic effects in a macroscopic domain. The method takes advantage of the fact that material points only communicate with mesh nodes, not among themselves; therefore MD simulations for material points can be performed independently in parallel. First, using a one-dimensional shock problem as an example, the numerical properties of the original material point method (MPM), the generalized interpolation material point (GIMP) method, the convected particle domain interpolation (CPDI) method, and the DDMP method are investigated. Among these methods, only the DDMP method converges as the number of particles increases, but the large number of particles needed for convergence makes the method very expensive, especially in our multi-scale method where stress at each material point is calculated using an MD simulation. To improve DDMP, the sub-point method is introduced in this dissertation, which provides high-quality numerical solutions with a very small number of particles. The multi-scale method based on DDMP with sub-points is successfully implemented for a one-dimensional problem of shock wave propagation in a cerium crystal. The MD simulation to calculate stress at each material point is performed on a GPU using CUDA to accelerate the computation.

  16. Fully implicit solution of large-scale non-equilibrium radiation diffusion with high order time integration

    International Nuclear Information System (INIS)

    Brown, Peter N.; Shumaker, Dana E.; Woodward, Carol S.

    2005-01-01

    We present a solution method for fully implicit radiation diffusion problems discretized on meshes having millions of spatial zones. This solution method makes use of high-order time integration techniques, inexact Newton-Krylov nonlinear solvers, and multigrid preconditioners. We explore the advantages and disadvantages of high-order time integration methods for the fully implicit formulation on both two- and three-dimensional problems with tabulated opacities and highly nonlinear fusion source terms.
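
    As an illustration of the solver ingredients named above, here is a minimal sketch of one fully implicit (backward Euler) step solved with an inexact Newton-Krylov method via SciPy's newton_krylov. The 1D model problem, its u³ diffusivity (which only mimics the strong nonlinearity of radiation diffusion), the grid and the tolerances are all illustrative, not the paper's setup.

    import numpy as np
    from scipy.optimize import newton_krylov

    # One backward-Euler step of u_t = d/dx( u^3 du/dx ) on [0, 1].
    n, dt = 200, 1e-3
    dx = 1.0 / (n - 1)
    u_old = 0.1 + 0.9 * np.exp(-200.0 * (np.linspace(0.0, 1.0, n) - 0.5) ** 2)

    def residual(u):
        """Nonlinear residual of the implicit step; roots give u at t + dt."""
        r = np.empty_like(u)
        d = 0.5 * (u[1:] ** 3 + u[:-1] ** 3)        # face diffusivity D = u^3
        flux = d * (u[1:] - u[:-1]) / dx            # nonlinear face fluxes
        r[1:-1] = (u[1:-1] - u_old[1:-1]) / dt - (flux[1:] - flux[:-1]) / dx
        r[0] = u[0] - u_old[0]                      # fixed (Dirichlet) ends
        r[-1] = u[-1] - u_old[-1]
        return r

    u_new = newton_krylov(residual, u_old, method="lgmres", f_tol=1e-8)
    print(u_new.min(), u_new.max())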

  17. Iterative resonance self-shielding methods using resonance integral table in heterogeneous transport lattice calculations

    International Nuclear Information System (INIS)

    Hong, Ser Gi; Kim, Kang-Seog

    2011-01-01

    This paper describes iteration methods that use resonance integral tables to estimate the effective resonance cross sections in heterogeneous transport lattice calculations. Basically, these methods have been devised to reduce the effort of converting resonance integral tables into the subgroup data used in the physical subgroup method. Since these methods use resonance integral tables directly rather than subgroup data, they do not incur the error of converting resonance integrals into subgroup data. The effective resonance cross sections are estimated iteratively for each resonance nuclide through heterogeneous fixed-source calculations over the whole problem domain, which provide the background cross sections. These methods have been implemented in the transport lattice code KARMA, which uses the method of characteristics (MOC) to solve the transport equation. The computational results show that these iteration methods are quite promising in practical transport lattice calculations.
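
    A heavily schematic sketch of the kind of iteration described above. The resonance-integral table, the RI-to-cross-section conversion, and the stand-in for the heterogeneous fixed-source calculation are all made up for illustration; none of this is KARMA's algorithm.

    import numpy as np

    # Hypothetical resonance-integral (RI) table indexed by background cross
    # section sigma_b (barns); grid and values are illustrative only.
    SIGMA_B_GRID = np.array([1e1, 1e2, 1e3, 1e4, 1e5])
    RI_TABLE = np.array([8.0, 14.0, 20.0, 24.0, 26.0])
    DELTA_U = 1.0   # lethargy width of the resonance group (schematic)

    def effective_xs(sigma_b):
        """Interpolate the RI table in log(sigma_b) and convert RI to an
        effective group cross section (sigma_eff = RI / delta_u, schematic)."""
        ri = np.interp(np.log(sigma_b), np.log(SIGMA_B_GRID), RI_TABLE)
        return ri / DELTA_U

    def background_xs(sigma_eff):
        """Stand-in for the heterogeneous fixed-source calculation performed
        each iteration; a made-up smooth dependence on sigma_eff."""
        return 50.0 + 2000.0 / (1.0 + 0.1 * sigma_eff)

    sigma_eff = 10.0                      # initial guess
    for iteration in range(50):           # fixed-point iteration
        updated = effective_xs(background_xs(sigma_eff))
        if abs(updated - sigma_eff) < 1e-10:
            break
        sigma_eff = updated
    print(iteration, sigma_eff)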

  18. An Integrated Assessment Approach to Address Artisanal and Small-Scale Gold Mining in Ghana

    Directory of Open Access Journals (Sweden)

    Niladri Basu

    2015-09-01

    Full Text Available Artisanal and small-scale gold mining (ASGM) is growing in many regions of the world including Ghana. The problems in these communities are complex and multi-faceted. To help increase understanding of such problems, and to enable consensus-building and effective translation of scientific findings to stakeholders, help inform policies, and ultimately improve decision making, we utilized an Integrated Assessment approach to study artisanal and small-scale gold mining activities in Ghana. Though Integrated Assessments have been used in the fields of environmental science and sustainable development, their use in addressing specific matters in public health, and in particular environmental and occupational health, is quite limited despite their many benefits. The aim of the current paper was to describe the specific activities undertaken, how they were organized, and the outputs and outcomes of our activity. In brief, three disciplinary workgroups (Natural Sciences, Human Health, Social Sciences and Economics) were formed, with 26 researchers from a range of Ghanaian institutions plus international experts. The workgroups conducted activities in order to address the following question: What are the causes, consequences and correctives of small-scale gold mining in Ghana? More specifically: What alternatives are available in resource-limited settings in Ghana that allow gold mining to occur in a manner that maintains ecological health and human health without hindering near- and long-term economic prosperity? Several response options were identified and evaluated, and are currently being disseminated to various stakeholders within Ghana and internationally.

  19. An Integrated Assessment Approach to Address Artisanal and Small-Scale Gold Mining in Ghana.

    Science.gov (United States)

    Basu, Niladri; Renne, Elisha P; Long, Rachel N

    2015-09-17

    Artisanal and small-scale gold mining (ASGM) is growing in many regions of the world including Ghana. The problems in these communities are complex and multi-faceted. To help increase understanding of such problems, and to enable consensus-building and effective translation of scientific findings to stakeholders, help inform policies, and ultimately improve decision making, we utilized an Integrated Assessment approach to study artisanal and small-scale gold mining activities in Ghana. Though Integrated Assessments have been used in the fields of environmental science and sustainable development, their use in addressing specific matters in public health, and in particular environmental and occupational health, is quite limited despite their many benefits. The aim of the current paper was to describe the specific activities undertaken, how they were organized, and the outputs and outcomes of our activity. In brief, three disciplinary workgroups (Natural Sciences, Human Health, Social Sciences and Economics) were formed, with 26 researchers from a range of Ghanaian institutions plus international experts. The workgroups conducted activities in order to address the following question: What are the causes, consequences and correctives of small-scale gold mining in Ghana? More specifically: What alternatives are available in resource-limited settings in Ghana that allow gold mining to occur in a manner that maintains ecological health and human health without hindering near- and long-term economic prosperity? Several response options were identified and evaluated, and are currently being disseminated to various stakeholders within Ghana and internationally.

  20. Fast large-scale clustering of protein structures using Gauss integrals

    DEFF Research Database (Denmark)

    Harder, Tim; Borg, Mikael; Boomsma, Wouter

    2011-01-01

    trajectories. Results: We present Pleiades, a novel approach to clustering protein structures with a rigorous mathematical underpinning. The method approximates clustering based on the root mean square deviation by first mapping structures to Gauss integral vectors – which were introduced by Røgen and co-workers – and subsequently performing K-means clustering. Conclusions: Compared to current methods, Pleiades dramatically improves on the time needed to perform clustering, and can cluster a significantly larger number of structures, while providing state-of-the-art results. The number of low-energy structures generated...
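
    A minimal sketch of the final stage of a Pleiades-style pipeline, assuming each structure has already been mapped to a fixed-length Gauss integral vector. The vectors below are synthetic stand-ins; the Gauss integral computation from backbone coordinates is not shown.

    import numpy as np
    from sklearn.cluster import KMeans

    # Synthetic stand-in for Gauss integral vectors: three "basins" of
    # structures, each represented by a fixed-length feature vector.
    rng = np.random.default_rng(0)
    gauss_vectors = np.vstack([
        rng.normal(loc=m, scale=0.3, size=(500, 31))
        for m in (-1.0, 0.0, 1.0)
    ])

    # K-means on the vectors approximates RMSD-based clustering cheaply.
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(gauss_vectors)
    print(np.bincount(kmeans.labels_))   # cluster sizes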

  1. An Integrated Method of Supply Chains Vulnerability Assessment

    Directory of Open Access Journals (Sweden)

    Jiaguo Liu

    2016-01-01

    Full Text Available Supply chain vulnerability identification and evaluation are extremely important to mitigate the supply chain risk. We present an integrated method to assess the supply chain vulnerability. The potential failure mode of the supply chain vulnerability is analyzed through the SCOR model. Combining the fuzzy theory and the gray theory, the correlation degree of each vulnerability indicator can be calculated and the target improvements can be carried out. In order to verify the effectiveness of the proposed method, we use Kendall’s tau coefficient to measure the effect of different methods. The result shows that the presented method has the highest consistency in the assessment compared with the other two methods.

  2. Practical method of calculating time-integrated concentrations at medium and large distances

    International Nuclear Information System (INIS)

    Cagnetti, P.; Ferrara, V.

    1980-01-01

    Previous reports have covered the possibility of calculating time-integrated concentrations (TICs) for a prolonged release, based on concentration estimates for a brief release. This study proposes a simple method of evaluating concentrations in the air at medium and large distances for a brief release. It is known that the stability of the atmospheric layers close to ground level influences diffusion only over short distances. Beyond some tens of kilometres, as the pollutant cloud progressively reaches higher layers, diffusion is affected by factors other than the stability at ground level, such as wind shear at intermediate distances and the divergence and rotational motion of air masses towards the upper limit of the mesoscale and on the synoptic scale. Using the data available in the literature, expressions for σ_y and σ_z are proposed for transfer times corresponding to distances of up to several thousand kilometres, for two initial diffusion situations (up to distances of 10-20 km), characterized by stable and neutral conditions respectively. Using this method, simple hand calculations can be made for any problem relating to the diffusion of radioactive pollutants over long distances.

  3. Bending analysis of embedded nanoplates based on the integral formulation of Eringen's nonlocal theory using the finite element method

    Science.gov (United States)

    Ansari, R.; Torabi, J.; Norouzzadeh, A.

    2018-04-01

    Due to the capability of Eringen's nonlocal elasticity theory to capture the small length scale effect, it is widely used to study the mechanical behaviors of nanostructures. Previous studies have indicated that in some cases, the differential form of this theory cannot correctly predict the behavior of structure, and the integral form should be employed to avoid obtaining inconsistent results. The present study deals with the bending analysis of nanoplates resting on elastic foundation based on the integral formulation of Eringen's nonlocal theory. Since the formulation is presented in a general form, arbitrary kernel functions can be used. The first order shear deformation plate theory is considered to model the nanoplates, and the governing equations for both integral and differential forms are presented. Finally, the finite element method is applied to solve the problem. Selected results are given to investigate the effects of elastic foundation and to compare the predictions of integral nonlocal model with those of its differential nonlocal and local counterparts. It is found that by the use of proposed integral formulation of Eringen's nonlocal model, the paradox observed for the cantilever nanoplate is resolved.

  4. Integrating macro and micro scale approaches in the agent-based modeling of residential dynamics

    Science.gov (United States)

    Saeedi, Sara

    2018-06-01

    With the advancement of computational modeling and simulation (M&S) methods as well as data collection technologies, urban dynamics modeling has improved substantially over the last several decades. Complex urban dynamics processes are most effectively modeled not at the macro-scale, but following a bottom-up approach that simulates the decisions of individual entities, or residents. Agent-based modeling (ABM) provides the key to a dynamic M&S framework that is able to integrate socioeconomic and environmental models, and to operate at both micro and macro geographical scales. In this study, a multi-agent system is proposed to simulate residential dynamics by considering spatiotemporal land use changes. In the proposed ABM, macro-scale land use change prediction is modeled by an Artificial Neural Network (ANN) and deployed as the agent environment, while micro-scale residential dynamics behaviors are autonomously implemented by household agents. These two levels of simulation interact and jointly drive the urbanization process in an urban area of Tehran, Iran. The model simulates the behavior of individual households in finding ideal locations to dwell. The household agents are divided into three main groups based on their income rank, and they are further classified into different categories based on a number of attributes. These attributes determine the households' preferences for finding new dwellings and change with time. The ABM environment is represented by a land-use map in which the properties of the land parcels change dynamically over the simulation time. The outputs of this model are a set of maps showing the pattern of different groups of households in the city. These patterns can be used by city planners to find optimum locations for building new residential units or adding new services to the city. The simulation results show that combining macro- and micro-level simulation can give full play to the potential of the ABM to understand the driving forces of residential dynamics.

  5. Integrating experimental and numerical methods for a scenario-based quantitative assessment of subsurface energy storage options

    Science.gov (United States)

    Kabuth, Alina; Dahmke, Andreas; Hagrey, Said Attia al; Berta, Márton; Dörr, Cordula; Koproch, Nicolas; Köber, Ralf; Köhn, Daniel; Nolde, Michael; Tilmann Pfeiffer, Wolf; Popp, Steffi; Schwanebeck, Malte; Bauer, Sebastian

    2016-04-01

    Within the framework of the transition to renewable energy sources ("Energiewende"), the German government defined the target of producing 60 % of the final energy consumption from renewable energy sources by the year 2050. However, renewable energies are subject to natural fluctuations. Energy storage can help to buffer the resulting time shifts between production and demand. Subsurface geological structures provide large potential capacities for energy stored in the form of heat or gas on daily to seasonal time scales. In order to explore this potential sustainably, the possible induced effects of energy storage operations have to be quantified for both specified normal operation and events of failure. The ANGUS+ project therefore integrates experimental laboratory studies with numerical approaches to assess subsurface energy storage scenarios and monitoring methods. Subsurface storage options for gas, i.e. hydrogen, synthetic methane and compressed air in salt caverns or porous structures, as well as subsurface heat storage are investigated with respect to site prerequisites, storage dimensions, induced effects, monitoring methods and integration into spatial planning schemes. The conceptual interdisciplinary approach of the ANGUS+ project towards the integration of subsurface energy storage into a sustainable subsurface planning scheme is presented here, and this approach is then demonstrated using the examples of two selected energy storage options: Firstly, the option of seasonal heat storage in a shallow aquifer is presented. Coupled thermal and hydraulic processes induced by periodic heat injection and extraction were simulated in the open-source numerical modelling package OpenGeoSys. Situations of specified normal operation as well as cases of failure in operational storage with leaking heat transfer fluid are considered. Bench-scale experiments provided parameterisations of temperature-dependent changes in shallow groundwater hydrogeochemistry. As a

  6. A spatial method to calculate small-scale fisheries effort in data poor scenarios.

    Science.gov (United States)

    Johnson, Andrew Frederick; Moreno-Báez, Marcia; Giron-Nava, Alfredo; Corominas, Julia; Erisman, Brad; Ezcurra, Exequiel; Aburto-Oropeza, Octavio

    2017-01-01

    To gauge the collateral impacts of fishing we must know where fishing boats operate and how much they fish. Although small-scale fisheries land approximately the same amount of fish for human consumption as industrial fleets globally, methods of estimating their fishing effort are comparatively poor. We present an accessible, spatial method of calculating the effort of small-scale fisheries based on two simple measures that are available, or at least easily estimated, in even the most data-poor fisheries: the number of boats and the local coastal human population. We illustrate the method using a small-scale fisheries case study from the Gulf of California, Mexico, and show that our measure of Predicted Fishing Effort (PFE), measured as the number of boats operating in a given area per day adjusted by the number of people in local coastal populations, can accurately predict fisheries landings in the Gulf. Comparing our values of PFE to commercial fishery landings throughout the Gulf also indicates that the current number of small-scale fishing boats in the Gulf is approximately double what is required to land theoretical maximum fish biomass. Our method is fishery-type independent and can be used to quantitatively evaluate the efficacy of growth in small-scale fisheries. This new method provides an important first step towards estimating the fishing effort of small-scale fleets globally.
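
    The record defines PFE only verbally (boats operating in an area per day, adjusted by the local coastal population), so the sketch below is an illustrative toy version; the linear adjustment and the ref_population parameter are assumptions, not the paper's formula.

    def predicted_fishing_effort(n_boats, coastal_population, ref_population=1000.0):
        """Illustrative PFE-style index: boats operating per day in an area,
        adjusted (here: linearly) by the size of the local coastal population.
        The exact adjustment used in the paper is not reproduced here."""
        return n_boats * (coastal_population / ref_population)

    # Two hypothetical coastal zones with equal fleets but different populations
    print(predicted_fishing_effort(n_boats=40, coastal_population=2500))
    print(predicted_fishing_effort(n_boats=40, coastal_population=500))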

  7. Methods of integrating Islamic values in teaching biology for shaping attitude and character

    Science.gov (United States)

    Listyono; Supardi, K. I.; Hindarto, N.; Ridlo, S.

    2018-03-01

    Learning is expected to develop the potential of learners to have spiritual attitudes: moral strength, self-control, personality, intelligence, noble character, as well as the skills needed by themselves, society, and nation. Implementing values and morals in learning is an alternative way that is expected to answer this challenge. The solution offered here is to integrate Islamic religious material into the teaching of biology. The values content of biology teaching materials includes practical value, religious value, daily-life value, socio-political value, and the value of art. Within Islamic religious values (Qur'an and Hadith), various methods can touch human feelings and souls and generate motivation. Integrating learning with Islamic values can be done through a deductive or an inductive approach. Appropriate methods of integration are the amtsal (analogy) method, the hiwar (dialogue) method, the targhib and tarhib (encouragement and warning) method, and the exemplary method (giving a noble role model / good example). The right strategy for integrating Islamic values is set out in the design of the lesson plan. The integration of Islamic values in the lesson plan will help teachers build students' character, because Islamic values can be implemented in every learning step, so students become accustomed to receiving the character values in this integrated learning.

  8. Multiple time-scale methods in particle simulations of plasmas

    International Nuclear Information System (INIS)

    Cohen, B.I.

    1985-01-01

    This paper surveys recent advances in the application of multiple time-scale methods to particle simulation of collective phenomena in plasmas. These methods dramatically improve the efficiency of simulating low-frequency kinetic behavior by allowing the use of a large timestep while retaining accuracy. The numerical schemes surveyed provide selective damping of unwanted high-frequency waves and preserve numerical stability in a variety of physics models: electrostatic, magneto-inductive, Darwin and fully electromagnetic. The paper reviews hybrid simulation models, the implicit-moment-equation method, the direct implicit method, orbit averaging, and subcycling.

  9. On the calculation of length scales for turbulent heat transfer correlation

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, M.J.; Hollingsworth, D.K.

    1999-07-01

    Turbulence length scale calculation methods were critically reviewed for their usefulness in boundary layer heat transfer correlations. Merits and deficiencies in each calculation method were presented. A rigorous method for calculating an energy-based integral scale was introduced. The method uses the variance of the streamwise velocity and a measured dissipation spectrum to calculate the length scale. Advantages and disadvantages of the new method were discussed. A principal advantage is the capability to decisively calculate length scales in a low-Reynolds-number turbulent boundary layer. The calculation method was tested with data from grid-generated, free-shear-layer, and wall-bounded turbulence. In each case, the method proved successful. The length scale is well behaved in turbulent boundary layers with momentum thickness Reynolds numbers from 400 to 2,100 and in flows with turbulent Reynolds numbers as low as 90.
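
    A sketch of an energy-based estimate in the spirit described above, under textbook assumptions: the dissipation rate is obtained from a measured spectrum as ε = ∫ 2νk²E(k) dk, and the integral scale is estimated as L ~ (u'²)^(3/2)/ε. The paper's exact formulation, including any constants, is not reproduced.

    import numpy as np

    def integral_length_scale(u_fluct, k, energy_spectrum, nu):
        """Energy-based integral-scale estimate from the streamwise velocity
        variance and a measured energy spectrum E(k):
            eps = int 2*nu*k^2*E(k) dk   (dissipation rate),
            L   ~ (u'^2)^(3/2) / eps     (textbook scaling)."""
        var = np.var(u_fluct)                            # streamwise variance
        d = 2.0 * nu * k**2 * energy_spectrum            # dissipation spectrum
        eps = np.sum(0.5 * (d[1:] + d[:-1]) * np.diff(k))  # trapezoid integral
        return var ** 1.5 / eps

    # Synthetic demo: model spectrum with an exponential cutoff (illustrative)
    k = np.linspace(1.0, 1e3, 4000)
    E = k ** (-5.0 / 3.0) * np.exp(-k / 400.0)
    u = np.random.default_rng(1).normal(0.0, 0.5, 10_000)   # u', variance 0.25
    print(integral_length_scale(u, k, E, nu=1.5e-5))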

  10. Hierarchical hybrid control of manipulators: Artificial intelligence in large scale integrated circuits

    Science.gov (United States)

    Greene, P. H.

    1972-01-01

    Both in practical engineering and in control of muscular systems, low level subsystems automatically provide crude approximations to the proper response. Through low level tuning of these approximations, the proper response variant can emerge from standardized high level commands. Such systems are expressly suited to emerging large scale integrated circuit technology. A computer, using symbolic descriptions of subsystem responses, can select and shape responses of low level digital or analog microcircuits. A mathematical theory that reveals significant informational units in this style of control and software for realizing such information structures are formulated.

  11. Integrating Quantitative and Qualitative Results in Health Science Mixed Methods Research Through Joint Displays

    Science.gov (United States)

    Guetterman, Timothy C.; Fetters, Michael D.; Creswell, John W.

    2015-01-01

    PURPOSE Mixed methods research is becoming an important methodology to investigate complex health-related topics, yet the meaningful integration of qualitative and quantitative data remains elusive and needs further development. A promising innovation to facilitate integration is the use of visual joint displays that bring data together visually to draw out new insights. The purpose of this study was to identify exemplar joint displays by analyzing the various types of joint displays being used in published articles. METHODS We searched for empirical articles that included joint displays in 3 journals that publish state-of-the-art mixed methods research. We analyzed each of 19 identified joint displays to extract the type of display, mixed methods design, purpose, rationale, qualitative and quantitative data sources, integration approaches, and analytic strategies. Our analysis focused on what each display communicated and its representation of mixed methods analysis. RESULTS The most prevalent types of joint displays were statistics-by-themes and side-by-side comparisons. Innovative joint displays connected findings to theoretical frameworks or recommendations. Researchers used joint displays for convergent, explanatory sequential, exploratory sequential, and intervention designs. We identified exemplars for each of these designs by analyzing the inferences gained through using the joint display. Exemplars represented mixed methods integration, presented integrated results, and yielded new insights. CONCLUSIONS Joint displays appear to provide a structure to discuss the integrated analysis and assist both researchers and readers in understanding how mixed methods provides new insights. We encourage researchers to use joint displays to integrate and represent mixed methods analysis and discuss their value. PMID:26553895

  12. Explicit Bounds to Some New Gronwall-Bellman-Type Delay Integral Inequalities in Two Independent Variables on Time Scales

    Directory of Open Access Journals (Sweden)

    Fanwei Meng

    2011-01-01

    Full Text Available Some new Gronwall-Bellman-type delay integral inequalities in two independent variables on time scales are established, which provide a handy tool in the research of qualitative and quantitative properties of solutions of delay dynamic equations on time scales. The established inequalities generalize some of the results in the work of Zhang and Meng 2008, Pachpatte 2002, and Ma 2010.
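
    For orientation, the classical continuous-time special case that inequalities of this kind generalize can be stated as follows (standard background, not one of the paper's two-variable time-scale results): for continuous u, b ≥ 0 and a constant a ≥ 0,

        u(t) \le a + \int_0^t b(s)\,u(s)\,\mathrm{d}s \quad (t \ge 0)
        \quad\Longrightarrow\quad
        u(t) \le a\,\exp\!\left(\int_0^t b(s)\,\mathrm{d}s\right).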

  13. Analysis of Conflict Centers in Projects Procured with Traditional and Integrated Methods in Nigeria

    Directory of Open Access Journals (Sweden)

    Martin O. Dada

    2012-07-01

    Conflicts in any organization can be either functional or dysfunctional and can contribute to or detract from the achievement of organizational or project objectives. This study investigated the frequency and intensity of conflicts, using five conflict centers, on projects executed with either the integrated or the traditional method in Nigeria. Questionnaires were administered through purposive and snowballing techniques on 274 projects located in twelve states of Nigeria and Abuja; 94 usable responses were obtained. The collected data were subjected to both descriptive and inferential statistical analysis. In projects procured with traditional methods, conflicts relating to resources for project execution had the greatest frequency, while conflicts around project/client goals had the least frequency. For projects executed with integrated methods, conflicts due to administrative procedures were ranked highest while conflicts due to project/client goals were ranked least. Regarding seriousness of conflict, conflicts due to administrative procedures and resources for project execution were ranked highest respectively for projects procured with traditional and integrated methods. Additionally, in terms of seriousness, personality issues and project/client goals were the least sources of conflict in projects executed with traditional and integrated methods. There were no significant differences in the incidence of conflicts, using the selected conflict centers, between the traditional and integrated procurement methods. There was however a significant difference in the intensity or seriousness of conflicts between projects executed with the traditional method and those executed with integrated methods in the following areas: technical issues, administrative matters and personality issues. The study recommends that conscious efforts should be made at team building on projects executed with integrated methods.

  14. Regularization methods for ill-posed problems in multiple Hilbert scales

    International Nuclear Information System (INIS)

    Mazzieri, Gisela L; Spies, Ruben D

    2012-01-01

    Several convergence results in Hilbert scales under different source conditions are proved and orders of convergence and optimal orders of convergence are derived. Also, relations between those source conditions are proved. The concept of a multiple Hilbert scale on a product space is introduced, and regularization methods on these scales are defined, both for the case of a single observation and for the case of multiple observations. In the latter case, it is shown how vector-valued regularization functions in these multiple Hilbert scales can be used. In all cases, convergence is proved and orders and optimal orders of convergence are shown. Finally, some potential applications and open problems are discussed. (paper)
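
    The record gives no formulas; as background, Tikhonov regularization in a single Hilbert scale (X_s), generated by an unbounded self-adjoint operator L, is commonly written as follows (a standard construction, not necessarily the authors' exact formulation):

      % Regularized solution for noisy data y^delta, penalizing the X_s-norm:
      \[
      x_\alpha^\delta
      = \operatorname*{arg\,min}_{x}
        \left\{ \|Ax - y^\delta\|^{2} + \alpha \|L^{s} x\|^{2} \right\}
      = \left( A^{*}A + \alpha L^{2s} \right)^{-1} A^{*} y^\delta .
      \]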

  15. Multistep Methods for Integrating the Solar System

    Science.gov (United States)

    1988-07-01

    Technical Report 1055, "Multistep Methods for Integrating the Solar System," Panayotis A. Skordos, MIT Artificial Intelligence Laboratory, 545 Technology Square, Cambridge, MA 02139. The report describes research done at the Artificial Intelligence Laboratory of the Massachusetts Institute of Technology, supported by the Advanced Research Projects Agency.
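
    The report's exact integrator and parameters are not recoverable from this record; the sketch below only shows the general idea of a multistep method, here the classical 2-step Adams-Bashforth scheme applied to a planar Kepler two-body problem (all sizes and step lengths are illustrative assumptions):

      # Illustrative sketch only: a classical 2-step Adams-Bashforth multistep
      # integrator applied to the planar Kepler two-body problem. The report's
      # actual schemes and parameters are not reproduced here.
      import numpy as np

      def f(state, mu=1.0):
          """Derivative of [x, y, vx, vy] for a body orbiting a point mass."""
          x, y, vx, vy = state
          r3 = (x * x + y * y) ** 1.5
          return np.array([vx, vy, -mu * x / r3, -mu * y / r3])

      def ab2(state0, h, n_steps):
          """y_{n+1} = y_n + h * (3/2 f_n - 1/2 f_{n-1})."""
          states = [state0]
          f_prev = f(state0)
          # Multistep methods are not self-starting: bootstrap with one
          # midpoint (RK2) step to get the second point.
          states.append(state0 + h * f(state0 + 0.5 * h * f_prev))
          for _ in range(n_steps - 1):
              f_curr = f(states[-1])
              states.append(states[-1] + h * (1.5 * f_curr - 0.5 * f_prev))
              f_prev = f_curr
          return np.array(states)

      # Circular orbit of radius 1 (period 2*pi in these units).
      orbit = ab2(np.array([1.0, 0.0, 0.0, 1.0]), h=1e-3, n_steps=10000)
      r_end = np.hypot(orbit[-1, 0], orbit[-1, 1])
      print("radius drift after ~1.6 orbits:", abs(r_end - 1.0))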

  16. The role of large‐scale heat pumps for short term integration of renewable energy

    DEFF Research Database (Denmark)

    Mathiesen, Brian Vad; Blarke, Morten; Hansen, Kenneth

    2011-01-01

    In this report the role of large-scale heat pumps in a future energy system with increased renewable energy is presented. The main concepts for large heat pumps in district heating systems are outlined along with the development for heat pump refrigerants. The development of future heat pump technologies is focusing on natural working fluid hydrocarbons, ammonia, and carbon dioxide. Large-scale heat pumps are crucial for integrating 50% wind power as anticipated to be installed in Denmark in 2020, along with other measures. Also in the longer term heat pumps can contribute to the minimization … savings with increased wind power and may additionally lead to economic savings in the range of 1,500-1,700 MDKK in total in the period until 2020. Furthermore, the energy system efficiency may be increased due to large heat pumps replacing boiler production. Finally data sheets for large-scale ammonium …

  17. Translation, adaptation and validation of "Community Integration Questionnaire"

    Directory of Open Access Journals (Sweden)

    Helena Maria Silveira Fraga-Maia

    2015-05-01

    Objective: To translate, adapt, and validate the "Community Integration Questionnaire" (CIQ), a tool that evaluates community integration after traumatic brain injury (TBI). Methods: A study of 61 TBI survivors was carried out. The appraisal of the measurement equivalence was based on a reliability assessment by estimating inter-rater agreement, item-scale correlation and internal consistency of CIQ scales, concurrent validity, and construct validity. Results: Inter-rater agreement ranged from substantial to almost perfect. The item-scale correlations were generally higher between the items and their respective domains, whereas the intra-class correlation coefficients were high for both the overall scale and the CIQ domains. The correlation between the CIQ and the Disability Rating Scale (DRS), the Extended Glasgow Outcome Scale (GOSE), and the Rancho Los Amigos Level of Cognitive Functioning Scale (RLA) reached values considered satisfactory. However, the factor analysis generated four factors (dimensions) that did not correspond with the dimensional structure of the original tool. Conclusion: The resulting tool herein may be useful in globally assessing community integration after TBI in the Brazilian context, at least until new CIQ psychometric assessment studies are developed with larger samples.

  18. Functional Integration

    Science.gov (United States)

    Cartier, Pierre; DeWitt-Morette, Cecile

    2010-06-01

    Acknowledgements; List symbols, conventions, and formulary; Part I. The Physical and Mathematical Environment: 1. The physical and mathematical environment; Part II. Quantum Mechanics: 2. First lesson: gaussian integrals; 3. Selected examples; 4. Semiclassical expansion: WKB; 5. Semiclassical expansion: beyond WKB; 6. Quantum dynamics: path integrals and operator formalism; Part III. Methods from Differential Geometry: 7. Symmetries; 8. Homotopy; 9. Grassmann analysis: basics; 10. Grassmann analysis: applications; 11. Volume elements, divergences, gradients; Part IV. Non-Gaussian Applications: 12. Poisson processes in physics; 13. A mathematical theory of Poisson processes; 14. First exit time: energy problems; Part V. Problems in Quantum Field Theory: 15. Renormalization 1: an introduction; 16. Renormalization 2: scaling; 17. Renormalization 3: combinatorics; 18. Volume elements in quantum field theory Bryce DeWitt; Part VI. Projects: 19. Projects; Appendix A. Forward and backward integrals: spaces of pointed paths; Appendix B. Product integrals; Appendix C. A compendium of gaussian integrals; Appendix D. Wick calculus Alexander Wurm; Appendix E. The Jacobi operator; Appendix F. Change of variables of integration; Appendix G. Analytic properties of covariances; Appendix H. Feynman's checkerboard; Bibliography; Index.

  19. Highly Uniform Carbon Nanotube Field-Effect Transistors and Medium Scale Integrated Circuits.

    Science.gov (United States)

    Chen, Bingyan; Zhang, Panpan; Ding, Li; Han, Jie; Qiu, Song; Li, Qingwen; Zhang, Zhiyong; Peng, Lian-Mao

    2016-08-10

    Top-gated p-type field-effect transistors (FETs) have been fabricated in batch from carbon nanotube (CNT) network thin films prepared from CNT solution, and they exhibit high yield and highly uniform performance, with a narrow threshold-voltage distribution (standard deviation of 34 mV). Exploiting these FET properties, various logic and arithmetic gates, shifters, and D-latch circuits were designed and demonstrated with rail-to-rail output. In particular, a 4-bit adder consisting of 140 p-type CNT FETs was demonstrated with higher packing density and lower supply voltage than other published integrated circuits based on CNT films, which indicates that CNT-based integrated circuits can reach medium scale. In addition, a 2-bit multiplier has been realized for the first time. Benefiting from the high uniformity and suitable threshold voltage of the CNT FETs, all of the fabricated circuits can be driven by a single voltage as small as 2 V.

  20. Kernel methods for large-scale genomic data analysis

    Science.gov (United States)

    Xing, Eric P.; Schaid, Daniel J.

    2015-01-01

    Machine learning, particularly kernel methods, has emerged as a promising tool to tackle the challenges imposed by today's explosive data growth in genomics. Kernel methods provide a practical and principled approach to learning how a large number of genetic variants are associated with complex phenotypes, and help reveal the complexity in the relationship between the genetic markers and the outcome of interest. In this review, we highlight the potential key role they will have in modern genomic data processing, especially with regard to integration with classical methods for gene prioritizing, prediction and data fusion. PMID:25053743
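
    As a concrete (hypothetical) illustration of the kind of kernel machinery such reviews survey, the sketch below runs kernel ridge regression with a linear genomic kernel built from a standardized SNP matrix; the data, sizes, and penalty are invented for the example:

      # Minimal sketch (invented data, not from the review): kernel ridge
      # regression with a linear "genomic" kernel K = Z Z^T / p built from a
      # standardized SNP matrix, a common kernel-method construction.
      import numpy as np

      rng = np.random.default_rng(0)
      n, p = 200, 1000                                   # samples x SNPs
      X = rng.integers(0, 3, size=(n, p)).astype(float)  # 0/1/2 genotypes
      y = X[:, :10].sum(axis=1) + rng.normal(size=n)     # toy phenotype

      Z = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12) # standardize SNPs
      K = Z @ Z.T / p                                    # linear kernel
      alpha = np.linalg.solve(K + 1.0 * np.eye(n), y)    # ridge, lambda = 1
      y_hat = K @ alpha                                  # in-sample fit
      print("training correlation:", np.corrcoef(y, y_hat)[0, 1])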

  1. Experimental methods for laboratory-scale ensilage of lignocellulosic biomass

    International Nuclear Information System (INIS)

    Tanjore, Deepti; Richard, Tom L.; Marshall, Megan N.

    2012-01-01

    Anaerobic fermentation is a potential storage method for lignocellulosic biomass in biofuel production processes. Since biomass is seasonally harvested, stocks are often dried or frozen at laboratory scale prior to fermentation experiments. Such treatments prior to fermentation studies cause irreversible changes in the plant cells, influencing the initial state of biomass and thereby the progression of the fermentation processes itself. This study investigated the effects of drying, refrigeration, and freezing relative to freshly harvested corn stover in lab-scale ensilage studies. Particle sizes, as well as post-ensilage drying temperatures for compositional analysis, were tested to identify the appropriate sample processing methods. After 21 days of ensilage the lowest pH value (3.73 ± 0.03), lowest dry matter loss (4.28 ± 0.26 g 100 g⁻¹ DM), and highest water soluble carbohydrate (WSC) concentrations (7.73 ± 0.26 g 100 g⁻¹ DM) were observed in control biomass (stover ensiled within 12 h of harvest without any treatments). WSC concentration was significantly reduced in samples refrigerated for 7 days prior to ensilage (3.86 ± 0.49 g 100 g⁻¹ DM). However, biomass frozen prior to ensilage produced statistically similar results to the fresh biomass control, especially in treatments with cell wall degrading enzymes. Grinding to decrease particle size reduced the variance amongst replicates for pH values of individual reactors to a minor extent. Drying biomass prior to extraction of WSCs resulted in degradation of the carbohydrates and a reduced estimate of their concentrations. The methods developed in this study can be used to improve ensilage experiments and thereby help in developing ensilage as a storage method for biofuel production. -- Highlights: ► Laboratory-scale methods to assess the influence of ensilage on biofuel production. ► Drying, freezing, and refrigeration of biomass influenced microbial fermentation. ► Freshly ensiled stover exhibited

  2. Method of producing exfoliated graphite, flexible graphite, and nano-scaled graphene platelets

    Science.gov (United States)

    Zhamu, Aruna; Shi, Jinjun; Guo, Jiusheng; Jang, Bor Z.

    2010-11-02

    The present invention provides a method of exfoliating a layered material (e.g., graphite and graphite oxide) to produce nano-scaled platelets having a thickness smaller than 100 nm, typically smaller than 10 nm. The method comprises (a) dispersing particles of graphite, graphite oxide, or a non-graphite laminar compound in a liquid medium containing therein a surfactant or dispersing agent to obtain a stable suspension or slurry; and (b) exposing the suspension or slurry to ultrasonic waves at an energy level for a sufficient length of time to produce separated nano-scaled platelets. The nano-scaled platelets are candidate reinforcement fillers for polymer nanocomposites. Nano-scaled graphene platelets are much lower-cost alternatives to carbon nano-tubes or carbon nano-fibers.

  3. APPLICATION OF BOUNDARY INTEGRAL EQUATION METHOD FOR THERMOELASTICITY PROBLEMS

    Directory of Open Access Journals (Sweden)

    Vorona Yu.V.

    2015-12-01

    The Boundary Integral Equation Method is used to solve analytically the problems of coupled thermoelastic spherical wave propagation. The resulting mathematical expressions coincide with the solutions obtained in a conventional manner.

  4. Task-Management Method Using R-Tree Spatial Cloaking for Large-Scale Crowdsourcing

    Directory of Open Access Journals (Sweden)

    Yan Li

    2017-12-01

    With the development of sensor technology and the popularization of the data-driven service paradigm, spatial crowdsourcing systems have become an important way of collecting map-based location data. However, large-scale task management and location privacy are important concerns for participants in spatial crowdsourcing. In this paper, we propose an R-tree spatial-cloaking-based task-assignment method for large-scale spatial crowdsourcing. We use an estimated R-tree built from the requested crowdsourcing tasks to reduce the server-side insertion cost and enable scalability. By using Minimum Bounding Rectangle (MBR)-based spatially anonymous data without exact position data, this method preserves the location privacy of participants in a simple way. In our experiments, we show that the proposed method is faster than the existing method and remains efficient as the scale increases.
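
    The paper's estimated-R-tree construction is not reproduced in this record; the sketch below only illustrates the underlying cloaking idea of reporting a grid-aligned Minimum Bounding Rectangle instead of an exact location (cell size and coordinates are arbitrary):

      # Illustrative sketch only: cloak an exact location into the coarse
      # grid cell (an MBR) that contains it, so a server never receives the
      # point itself. The paper's estimated-R-tree machinery is not shown.
      def cloak_to_mbr(lat, lon, cell_deg=0.01):
          """Return (lat_min, lon_min, lat_max, lon_max) of the grid cell."""
          lat0 = (lat // cell_deg) * cell_deg
          lon0 = (lon // cell_deg) * cell_deg
          return (lat0, lon0, lat0 + cell_deg, lon0 + cell_deg)

      # A participant reports only the ~1 km cell, not the exact coordinates.
      print(cloak_to_mbr(37.56612, 126.97794))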

  5. Integrating Quantitative and Qualitative Results in Health Science Mixed Methods Research Through Joint Displays.

    Science.gov (United States)

    Guetterman, Timothy C; Fetters, Michael D; Creswell, John W

    2015-11-01

    Mixed methods research is becoming an important methodology to investigate complex health-related topics, yet the meaningful integration of qualitative and quantitative data remains elusive and needs further development. A promising innovation to facilitate integration is the use of visual joint displays that bring data together visually to draw out new insights. The purpose of this study was to identify exemplar joint displays by analyzing the various types of joint displays being used in published articles. We searched for empirical articles that included joint displays in 3 journals that publish state-of-the-art mixed methods research. We analyzed each of 19 identified joint displays to extract the type of display, mixed methods design, purpose, rationale, qualitative and quantitative data sources, integration approaches, and analytic strategies. Our analysis focused on what each display communicated and its representation of mixed methods analysis. The most prevalent types of joint displays were statistics-by-themes and side-by-side comparisons. Innovative joint displays connected findings to theoretical frameworks or recommendations. Researchers used joint displays for convergent, explanatory sequential, exploratory sequential, and intervention designs. We identified exemplars for each of these designs by analyzing the inferences gained through using the joint display. Exemplars represented mixed methods integration, presented integrated results, and yielded new insights. Joint displays appear to provide a structure to discuss the integrated analysis and assist both researchers and readers in understanding how mixed methods provides new insights. We encourage researchers to use joint displays to integrate and represent mixed methods analysis and discuss their value. © 2015 Annals of Family Medicine, Inc.

  6. Assessment of small-scale integrated water vapour variability during HOPE

    Science.gov (United States)

    Steinke, S.; Eikenberg, S.; Löhnert, U.; Dick, G.; Klocke, D.; Di Girolamo, P.; Crewell, S.

    2015-03-01

    The spatio-temporal variability of integrated water vapour (IWV) on small scales of less than 10 km and hours is assessed with data from the 2 months of the High Definition Clouds and Precipitation for advancing Climate Prediction (HD(CP)²) Observational Prototype Experiment (HOPE). The statistical intercomparison of the unique set of observations during HOPE (microwave radiometer (MWR), Global Positioning System (GPS), sun photometer, radiosondes, Raman lidar, infrared and near-infrared Moderate Resolution Imaging Spectroradiometer (MODIS) on the satellites Aqua and Terra) measuring close together reveals a good agreement in terms of random differences (standard deviation ≤1 kg m⁻²) and correlation coefficient (≥ 0.98). The exception is MODIS, which appears to suffer from insufficient cloud filtering. For a case study during HOPE featuring a typical boundary layer development, the IWV variability in time and space on scales of less than 10 km and less than 1 h is investigated in detail. For this purpose, the measurements are complemented by simulations with the novel ICOsahedral Nonhydrostatic modelling framework (ICON), which for this study has a horizontal resolution of 156 m. These runs show that differences in space of 3-4 km or time of 10-15 min induce IWV variabilities on the order of 0.4 kg m⁻². This model finding is confirmed by observed time series from two MWRs approximately 3 km apart with a comparable temporal resolution of a few seconds. Standard deviations of IWV derived from MWR measurements reveal a high variability (> 1 kg m⁻²) even at very short time scales of a few minutes. These cannot be captured by the temporally lower-resolved instruments and by operational numerical weather prediction models such as COSMO-DE (an application of the Consortium for Small-scale Modelling covering Germany) of Deutscher Wetterdienst, which is included in the comparison. However, for time scales larger than 1 h, a sampling resolution of 15 min is

  7. Method and apparatus for in-system redundant array repair on integrated circuits

    Science.gov (United States)

    Bright, Arthur A.; Crumley, Paul G.; Dombrowa, Marc B.; Douskey, Steven M.; Haring, Rudolf A.; Oakland, Steven F.; Ouellette, Michael R.; Strissel, Scott A.

    2007-12-18

    Disclosed is a method of repairing an integrated circuit of the type comprising a multitude of memory arrays and a fuse box holding control data for controlling redundancy logic of the arrays. The method comprises the steps of providing the integrated circuit with a control data selector for passing the control data from the fuse box to the memory arrays; providing a source of alternate control data, external to the integrated circuit; and connecting the source of alternate control data to the control data selector. The method comprises the further step of, at a given time, passing the alternate control data from the source thereof, through the control data selector and to the memory arrays to control the redundancy logic of the memory arrays.

  8. Method and apparatus for in-system redundant array repair on integrated circuits

    Science.gov (United States)

    Bright, Arthur A [Croton-on-Hudson, NY; Crumley, Paul G [Yorktown Heights, NY; Dombrowa, Marc B [Bronx, NY; Douskey, Steven M [Rochester, MN; Haring, Rudolf A [Cortlandt Manor, NY; Oakland, Steven F [Colchester, VT; Ouellette, Michael R [Westford, VT; Strissel, Scott A [Byron, MN

    2008-07-29

    Disclosed is a method of repairing an integrated circuit of the type comprising a multitude of memory arrays and a fuse box holding control data for controlling redundancy logic of the arrays. The method comprises the steps of providing the integrated circuit with a control data selector for passing the control data from the fuse box to the memory arrays; providing a source of alternate control data, external to the integrated circuit; and connecting the source of alternate control data to the control data selector. The method comprises the further step of, at a given time, passing the alternate control data from the source thereof, through the control data selector and to the memory arrays to control the redundancy logic of the memory arrays.

  9. A simple flow-concentration modelling method for integrating water ...

    African Journals Online (AJOL)

    A simple flow-concentration modelling method for integrating water quality and ... flow requirements are assessed for maintenance low flow, drought low flow ... the instream concentrations of chemical constituents that will arise from different ...

  10. Integrated multiscale modeling of molecular computing devices

    International Nuclear Information System (INIS)

    Cummings, Peter T; Leng Yongsheng

    2005-01-01

    Molecular electronics, in which single organic molecules are designed to perform the functions of transistors, diodes, switches and other circuit elements used in current silicon-based microelectronics, is drawing wide interest as a potential replacement technology for conventional silicon-based lithographically etched microelectronic devices. In addition to their nanoscopic scale, the additional advantage of molecular electronics devices compared to silicon-based lithographically etched devices is the promise of being able to produce them cheaply on an industrial scale using wet chemistry methods (i.e., self-assembly from solution). The design of molecular electronics devices, and the processes to make them on an industrial scale, will require a thorough theoretical understanding of the molecular and higher level processes involved. Hence, the development of modeling techniques for molecular electronics devices is a high priority from both a basic science point of view (to understand the experimental studies in this field) and from an applied nanotechnology (manufacturing) point of view. Modeling molecular electronics devices requires computational methods at all length scales - electronic structure methods for calculating electron transport through organic molecules bonded to inorganic surfaces, molecular simulation methods for determining the structure of self-assembled films of organic molecules on inorganic surfaces, mesoscale methods to understand and predict the formation of mesoscale patterns on surfaces (including interconnect architecture), and macroscopic scale methods (including finite element methods) for simulating the behavior of molecular electronic circuit elements in a larger integrated device. Here we describe a large Department of Energy project involving six universities and one national laboratory aimed at developing integrated multiscale methods for modeling molecular electronics devices. The project is funded equally by the Office of Basic

  11. ELT-scale Adaptive Optics real-time control with the Intel Xeon Phi Many Integrated Core Architecture

    Science.gov (United States)

    Jenkins, David R.; Basden, Alastair; Myers, Richard M.

    2018-05-01

    We propose a solution to the increased computational demands of Extremely Large Telescope (ELT) scale adaptive optics (AO) real-time control with the Intel Xeon Phi Knights Landing (KNL) Many Integrated Core (MIC) Architecture. The computational demands of an AO real-time controller (RTC) scale with the fourth power of telescope diameter and so the next generation ELTs require orders of magnitude more processing power for the RTC pipeline than existing systems. The Xeon Phi contains a large number (≥64) of low power x86 CPU cores and high bandwidth memory integrated into a single socketed server CPU package. The increased parallelism and memory bandwidth are crucial to providing the performance for reconstructing wavefronts with the required precision for ELT scale AO. Here, we demonstrate that the Xeon Phi KNL is capable of performing ELT scale single conjugate AO real-time control computation at over 1.0 kHz with less than 20 μs RMS jitter. We have also shown that with a wavefront sensor camera attached the KNL can process the real-time control loop at up to 966 Hz, the maximum frame-rate of the camera, with jitter remaining below 20 μs RMS. Future studies will involve exploring the use of a cluster of Xeon Phis for the real-time control of the MCAO and MOAO regimes of AO. We find that the Xeon Phi is highly suitable for ELT AO real-time control.
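
    The dominant computation in a matrix-vector-multiply-based AO real-time controller is multiplying a large control matrix by the wavefront-sensor slope vector once per loop iteration. The sketch below is a back-of-envelope benchmark of that single operation; the dimensions are illustrative guesses, not the paper's actual system sizes:

      # Back-of-envelope sketch (not the authors' code): time the core
      # matrix-vector multiply of an MVM-based AO reconstruction step.
      # Dimensions below are hypothetical, not taken from the paper.
      import time
      import numpy as np

      rng = np.random.default_rng(0)
      n_slopes, n_actuators = 2 * 80 * 80, 5000      # hypothetical SCAO sizes
      R = rng.standard_normal((n_actuators, n_slopes), dtype=np.float32)
      s = rng.standard_normal(n_slopes, dtype=np.float32)

      commands = R @ s                               # warm-up iteration
      t0 = time.perf_counter()
      for _ in range(100):
          commands = R @ s                           # one control-loop update
      dt = (time.perf_counter() - t0) / 100
      print(f"{dt * 1e6:.0f} us per MVM -> ~{1 / dt:.0f} Hz loop-rate ceiling")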

  12. Methods for Developing Emissions Scenarios for Integrated Assessment Models

    Energy Technology Data Exchange (ETDEWEB)

    Prinn, Ronald [MIT; Webster, Mort [MIT

    2007-08-20

    The overall objective of this research was to contribute data and methods to support the future development of new emissions scenarios for integrated assessment of climate change. Specifically, this research had two main objectives: 1. Use historical data on economic growth and energy efficiency changes, and develop probability density functions (PDFs) for the appropriate parameters for two or three commonly used integrated assessment models. 2. Using the parameter distributions developed through the first task and previous work, we will develop methods of designing multi-gas emission scenarios that usefully span the joint uncertainty space in a small number of scenarios. Results on the autonomous energy efficiency improvement (AEEI) parameter are summarized, an uncertainty analysis of elasticities of substitution is described, and the probabilistic emissions scenario approach is presented.

  13. Impact of model complexity and multi-scale data integration on the estimation of hydrogeological parameters in a dual-porosity aquifer

    Science.gov (United States)

    Tamayo-Mas, Elena; Bianchi, Marco; Mansour, Majdi

    2018-03-01

    This study investigates the impact of model complexity and multi-scale prior hydrogeological data on the interpretation of pumping test data in a dual-porosity aquifer (the Chalk aquifer in England, UK). In order to characterize the hydrogeological properties, different approaches ranging from a traditional analytical solution (Theis approach) to more sophisticated numerical models with automatically calibrated input parameters are applied. Comparisons of results from the different approaches show that neither traditional analytical solutions nor a numerical model assuming a homogenous and isotropic aquifer can adequately explain the observed drawdowns. A better reproduction of the observed drawdowns in all seven monitoring locations is instead achieved when medium- and local-scale prior information about the vertical hydraulic conductivity (K) distribution is used to constrain the model calibration process. In particular, the integration of medium-scale vertical K variations based on flowmeter measurements led to an improvement in the goodness-of-fit of the simulated drawdowns of about 30%. Further improvements (up to 70%) were observed when a simple upscaling approach was used to integrate small-scale K data to constrain the automatic calibration process of the numerical model. Although the analysis focuses on a specific case study, these results provide insights about the representativeness of the estimates of hydrogeological properties based on different interpretations of pumping test data, and promote the integration of multi-scale data for the characterization of heterogeneous aquifers in complex hydrogeological settings.
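
    For reference, the traditional Theis solution mentioned above has the standard closed form (textbook hydrogeology, independent of this particular study):

      % Drawdown s at radial distance r and time t for pumping rate Q from a
      % confined aquifer with transmissivity T and storativity S:
      \[
      s(r,t) = \frac{Q}{4\pi T}\, W(u), \qquad
      u = \frac{r^{2} S}{4 T t}, \qquad
      W(u) = \int_{u}^{\infty} \frac{e^{-x}}{x}\,\mathrm{d}x .
      \]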

  14. A reduced-scaling density matrix-based method for the computation of the vibrational Hessian matrix at the self-consistent field level

    International Nuclear Information System (INIS)

    Kussmann, Jörg; Luenser, Arne; Beer, Matthias; Ochsenfeld, Christian

    2015-01-01

    An analytical method to calculate the molecular vibrational Hessian matrix at the self-consistent field level is presented. By analysis of the multipole expansions of the relevant derivatives of Coulomb-type two-electron integral contractions, we show that the effect of the perturbation on the electronic structure due to the displacement of nuclei decays at least as r⁻² instead of r⁻¹. The perturbation is asymptotically local, and the computation of the Hessian matrix can, in principle, be performed with O(N) complexity. Our implementation exhibits linear scaling in all time-determining steps, with some rapid but quadratic-complexity steps remaining. Sample calculations illustrate linear or near-linear scaling in the construction of the complete nuclear Hessian matrix for sparse systems. For more demanding systems, scaling is still considerably sub-quadratic to quadratic, depending on the density of the underlying electronic structure

  15. Simulation electromagnetic scattering on bodies through integral equation and neural networks methods

    Science.gov (United States)

    Lvovich, I. Ya; Preobrazhenskiy, A. P.; Choporov, O. N.

    2018-05-01

    The paper deals with electromagnetic scattering from a perfectly conducting diffractive body of complex shape. The scattering performance of the body is calculated by the integral equation method: a Fredholm equation of the second kind is used to compute the electric current density. In solving the integral equation by the method of moments, the singularity of the kernel is treated properly, and piecewise constant functions are chosen as basis functions. Within the Kirchhoff integral approach it is then possible to obtain the scattered electromagnetic field from the computed electric currents. The sector of observation angles lies in the front hemisphere of the diffractive body. To improve the characteristics of the diffractive body, the authors use a neural network in which every neuron has a log-sigmoid activation function and a weighted sum as its discriminant function. The paper presents the weight matrix of the connectionist model, the resulting optimized dimensions of the diffractive body, and the basic steps of the calculation technique for diffractive bodies based on the combination of integral equation and neural network methods.

  16. Improving the integration of recreation management with management of other natural resources by applying concepts of scale from ecology

    Science.gov (United States)

    Wayde c. Morse; Troy E. Hall; Linda E. Kruger

    2008-01-01

    In this article, we examine how issues of scale affect the integration of recreation management with the management of other natural resources on public lands. We present two theories used to address scale issues in ecology and explore how they can improve the two most widely applied recreation-planning frameworks. The theory of patch dynamics and hierarchy theory are...

  17. SCALE Code System

    Energy Technology Data Exchange (ETDEWEB)

    Jessee, Matthew Anderson [ORNL

    2016-04-01

    … depletion with TRITON (T5-DEPL/T6-DEPL),
    • CE sensitivity/uncertainty analysis with TSUNAMI-3D,
    • Simplified and efficient LWR lattice physics with Polaris,
    • Large scale detailed spent fuel characterization with ORIGAMI and ORIGAMI Automator,
    • Advanced fission source convergence acceleration capabilities with Sourcerer,
    • Nuclear data library generation with AMPX, and
    • Integrated user interface with Fulcrum.
    Enhanced capabilities include:
    • Accurate and efficient CE Monte Carlo methods for eigenvalue and fixed source calculations,
    • Improved MG resonance self-shielding methodologies and data,
    • Resonance self-shielding with modernized and efficient XSProc integrated into most sequences,
    • Accelerated calculations with TRITON/NEWT (generally 4x faster than SCALE 6.1),
    • Spent fuel characterization with 1470 new reactor-specific libraries for ORIGEN,
    • Modernization of ORIGEN (Chebyshev Rational Approximation Method [CRAM] solver, API for high-performance depletion, new keyword input format),
    • Extension of the maximum mixture number to values well beyond the previous limit of 2147 to ~2 billion,
    • Nuclear data formats enabling the use of more than 999 energy groups,
    • Updated standard composition library to provide more accurate use of natural abundances, and
    • Numerous other enhancements for improved usability and stability.

  18. An Integrated Approach to Research Methods and Capstone

    Science.gov (United States)

    Postic, Robert; McCandless, Ray; Stewart, Beth

    2014-01-01

    In 1991, the AACU issued a report on improving undergraduate education suggesting, in part, that a curriculum should be both comprehensive and cohesive. Since 2008, we have systematically integrated our research methods course with our capstone course in an attempt to accomplish the twin goals of comprehensiveness and cohesion. By taking this…

  19. Integration of Small-Diameter Wood Harvesting in Early Thinnings using the Two pile Cutting Method

    Energy Technology Data Exchange (ETDEWEB)

    Kaerhae, Kalle (Metsaeteho Oy, P.O. Box 101, FI-00171 Helsinki (Finland))

    2008-10-15

    Metsaeteho Oy studied the integrated harvesting of industrial roundwood (pulpwood) and energy wood based on a two-pile cutting method, i.e. pulpwood and energy wood fractions are stacked into two separate piles when cutting a first-thinning stand. The productivity and cost levels of the integrated, two-pile cutting method were determined, and the harvesting costs of the two-pile method were compared with those of conventional separate wood harvesting methods. In the time study, when the size of removal was 50 dm³, the productivity in conventional whole-tree cutting was 6% higher than in integrated cutting. With a stem size of 100 dm³, the productivity in whole-tree cutting was 7% higher than in integrated cutting. The results indicated, however, that integrated harvesting based on the two-pile method enables harvesting costs to be decreased to below the current cost level of separate pulpwood harvesting in first thinning stands. The greatest cost-saving potential lies in small-sized first thinnings. The results showed that, when integrated wood harvesting based on the two-pile method is applied, the removals of both energy wood and pulpwood should be more than 15-20 m³/ha at the harvesting sites in order to achieve economically viable integrated procurement.

  20. Two pricing methods for solving an integrated commercial fishery ...

    African Journals Online (AJOL)

    a model (Hasan and Raffensperger, 2006) to solve this problem: the integrated ... planning and labour allocation for that processing firm, but did not consider any fleet- .... the DBONP method actually finds such price information, and uses it.

  1. Full scale test platform for European TBM systems integration and maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Vála, Ladislav, E-mail: ladislav.vala@cvrez.cz; Reungoat, Mathieu; Vician, Martin

    2016-11-01

    Highlights: • A platform for EU-TBS maintenance and integration tests is described. • Its modular design allows adaptation to non-EU TBSs. • Assembling of the facility will be followed by initial tests in 2016. - Abstract: This article describes the current status of a non-nuclear, full-size (1:1 scale) test platform dedicated to tests, optimization and validation of integration and maintenance operations for the European TBM systems in the ITER port cell #16. The facility, called the TBM platform, reproduces the ITER port cell #16 and port interspace with all the relevant interfaces and mock-ups of the corresponding main components. Thanks to the modular design of the platform, it is possible to adapt or completely change the interfaces in the future if needed or required according to the updated configuration of the TBSs. In the same way, based on customer requirements, it will be possible to adapt the interfaces and piping inside the mock-ups in order to represent also the other, non-EU configurations of TBM systems designed for port cells #02 and #18. Construction of this test platform is realized and funded within the scope of the SUSEN project.

  2. Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks

    Directory of Open Access Journals (Sweden)

    Raja Jurdak

    2008-11-01

    Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper.

  3. Large Scale Environmental Monitoring through Integration of Sensor and Mesh Networks.

    Science.gov (United States)

    Jurdak, Raja; Nafaa, Abdelhamid; Barbirato, Alessio

    2008-11-24

    Monitoring outdoor environments through networks of wireless sensors has received interest for collecting physical and chemical samples at high spatial and temporal scales. A central challenge to environmental monitoring applications of sensor networks is the short communication range of the sensor nodes, which increases the complexity and cost of monitoring commodities that are located in geographically spread areas. To address this issue, we propose a new communication architecture that integrates sensor networks with medium range wireless mesh networks, and provides users with an advanced web portal for managing sensed information in an integrated manner. Our architecture adopts a holistic approach targeted at improving the user experience by optimizing the system performance for handling data that originates at the sensors, traverses the mesh network, and resides at the server for user consumption. This holistic approach enables users to set high level policies that can adapt the resolution of information collected at the sensors, set the preferred performance targets for their application, and run a wide range of queries and analysis on both real-time and historical data. All system components and processes will be described in this paper.

  4. Medium-scale carbon nanotube thin-film integrated circuits on flexible plastic substrates.

    Science.gov (United States)

    Cao, Qing; Kim, Hoon-sik; Pimparkar, Ninad; Kulkarni, Jaydeep P; Wang, Congjun; Shim, Moonsub; Roy, Kaushik; Alam, Muhammad A; Rogers, John A

    2008-07-24

    The ability to form integrated circuits on flexible sheets of plastic enables attributes (for example conformal and flexible formats and lightweight and shock resistant construction) in electronic devices that are difficult or impossible to achieve with technologies that use semiconductor wafers or glass plates as substrates. Organic small-molecule and polymer-based materials represent the most widely explored types of semiconductors for such flexible circuitry. Although these materials and those that use films or nanostructures of inorganics have promise for certain applications, existing demonstrations of them in circuits on plastic indicate modest performance characteristics that might restrict the application possibilities. Here we report implementations of a comparatively high-performance carbon-based semiconductor consisting of sub-monolayer, random networks of single-walled carbon nanotubes to yield small- to medium-scale integrated digital circuits, composed of up to nearly 100 transistors on plastic substrates. Transistors in these integrated circuits have excellent properties: mobilities as high as 80 cm² V⁻¹ s⁻¹, subthreshold slopes as low as 140 mV dec⁻¹, operating voltages less than 5 V together with deterministic control over the threshold voltages, on/off ratios as high as 10⁵, switching speeds in the kilohertz range even for coarse (approximately 100 μm) device geometries, and good mechanical flexibility; all with levels of uniformity and reproducibility that enable high-yield fabrication of integrated circuits. Theoretical calculations, in contexts ranging from heterogeneous percolative transport through the networks to compact models for the transistors to circuit level simulations, provide quantitative and predictive understanding of these systems. Taken together, these results suggest that sub-monolayer films of single-walled carbon nanotubes are attractive materials for flexible integrated circuits, with many potential areas of

  5. The general 2-D moments via integral transform method for acoustic radiation and scattering

    Science.gov (United States)

    Smith, Jerry R.; Mirotznik, Mark S.

    2004-05-01

    The moments via integral transform method (MITM) is a technique to analytically reduce the 2-D method of moments (MoM) impedance double integrals into single integrals. By using a special integral representation of the Green's function, the impedance integral can be analytically simplified to a single integral in terms of transformed shape and weight functions. The reduced expression requires fewer computations and reduces the fill times of the MoM impedance matrix. Furthermore, the resulting integral is analytic for nearly arbitrary shape and weight function sets. The MITM technique is developed for mixed boundary conditions and predictions with basic shape and weight function sets are presented. Comparisons of accuracy and speed between MITM and brute force are presented. [Work sponsored by ONR and NSWCCD ILIR Board.

  6. Integrative Mixed Methods Data Analytic Strategies in Research on School Success in Challenging Circumstances

    Science.gov (United States)

    Jang, Eunice E.; McDougall, Douglas E.; Pollon, Dawn; Herbert, Monique; Russell, Pia

    2008-01-01

    There are both conceptual and practical challenges in dealing with data from mixed methods research studies. There is a need for discussion about various integrative strategies for mixed methods data analyses. This article illustrates integrative analytic strategies for a mixed methods study focusing on improving urban schools facing challenging…

  7. A simple method to achieve full-field and real-scale reconstruction using a movable stereo rig

    Science.gov (United States)

    Gu, Feifei; Zhao, Hong; Song, Zhan; Tang, Suming

    2018-06-01

    This paper introduces a simple method to achieve full-field and real-scale reconstruction using a movable binocular vision system (MBVS). The MBVS is composed of two cameras: one, called the tracking camera, is used for tracking the positions of the MBVS, and the other, called the working camera, is used for the 3D reconstruction task. The MBVS has several advantages compared with a single moving camera or multi-camera networks. Firstly, the MBVS can recover real-scale depth information from the captured image sequences without using auxiliary objects whose geometry or motion must be precisely known. Secondly, the removability of the system guarantees appropriate baselines, supplying more robust point correspondences. Additionally, using one camera avoids a drawback of multi-camera networks, namely that variability in camera parameters and performance can significantly affect the accuracy and robustness of feature extraction and stereo matching methods. The proposed framework consists of local reconstruction and initial pose estimation of the MBVS based on transferable features, followed by overall optimization and accurate integration of multi-view 3D reconstruction data. The whole process requires no information other than the input images. The framework has been verified with real data, and very good results have been obtained.

  8. Facing the scaling problem: A multi-methodical approach to simulate soil erosion at hillslope and catchment scale

    Science.gov (United States)

    Schmengler, A. C.; Vlek, P. L. G.

    2012-04-01

    Modelling soil erosion requires a holistic understanding of the sediment dynamics in a complex environment. As most erosion models are scale-dependent and their parameterization is spatially limited, their application often requires special care, particularly in data-scarce environments. This study presents a hierarchical approach to overcome the limitations of a single model by using various quantitative methods and soil erosion models to cope with the issues of scale. At hillslope scale, the physically-based Water Erosion Prediction Project (WEPP)-model is used to simulate soil loss and deposition processes. Model simulations of soil loss vary between 5 to 50 t ha⁻¹ yr⁻¹ dependent on the spatial location on the hillslope and have only limited correspondence with the results of the ¹³⁷Cs technique. These differences in absolute soil loss values could be either due to internal shortcomings of each approach or to external scale-related uncertainties. Pedo-geomorphological soil investigations along a catena confirm that estimations by the ¹³⁷Cs technique are more appropriate in reflecting both the spatial extent and magnitude of soil erosion at hillslope scale. In order to account for sediment dynamics at a larger scale, the spatially-distributed WaTEM/SEDEM model is used to simulate soil erosion at catchment scale and to predict sediment delivery rates into a small water reservoir. Predicted sediment yield rates are compared with results gained from a bathymetric survey and sediment core analysis. Results show that specific sediment rates of 0.6 t ha⁻¹ yr⁻¹ by the model are in close agreement with observed sediment yield calculated from stratigraphical changes and downcore variations in ¹³⁷Cs concentrations. Sediment erosion rates averaged over the entire catchment of 1 to 2 t ha⁻¹ yr⁻¹ are significantly lower than results obtained at hillslope scale confirming an inverse correlation between the magnitude of erosion rates and the spatial scale of the model. The

  9. Surface temperature and evapotranspiration: application of local scale methods to regional scales using satellite data

    International Nuclear Information System (INIS)

    Seguin, B.; Courault, D.; Guerif, M.

    1994-01-01

    Remotely sensed surface temperatures have proven useful for monitoring evapotranspiration (ET) rates and crop water use because of their direct relationship with sensible and latent energy exchange processes. Procedures for using the thermal infrared (IR) obtained with hand-held radiometers deployed at ground level are now well established and even routine for many agricultural research and management purposes. The availability of IR from meteorological satellites at scales from 1 km (NOAA-AVHRR) to 5 km (METEOSAT) permits extension of local, ground-based approaches to larger scale crop monitoring programs. Regional observations of surface minus air temperature (i.e., the stress degree day) and remote estimates of daily ET were derived from satellite data over sites in France, the Sahel, and North Africa and summarized here. Results confirm that similar approaches can be applied at local and regional scales despite differences in pixel size and heterogeneity. This article analyzes methods for obtaining these data and outlines the potential utility of satellite data for operational use at the regional scale. (author)

  10. An integration time adaptive control method for atmospheric composition detection of occultation

    Science.gov (United States)

    Ding, Lin; Hou, Shuai; Yu, Fei; Liu, Cheng; Li, Chao; Zhe, Lin

    2018-01-01

    When the sun is used as the light source for atmospheric composition detection, it is necessary to image the sun for accurate identification and stable tracking. Over the roughly 180-second course of the occultation, the intensity of sunlight transmitted through the atmosphere changes greatly: the illumination varies by a factor of nearly 1100 between the maximum and minimum atmospheric paths, and the change can be as fast as a factor of 2.9 per second. It is therefore difficult to control the integration time of the sun-imaging camera. In this paper, a novel adaptive integration time control method for occultation is presented. Taking the distribution of gray values in the image as the reference variable and applying the concepts of velocity-form (incremental) PID control, the method solves the adaptive integration-time control problem for high-frequency imaging, achieving automatic control of integration time over a large dynamic range during the occultation.
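
    The abstract describes the controller only qualitatively; the sketch below shows one plausible reading of it, a velocity-form (incremental) PID update that drives the mean image gray level toward a set-point while the occultation dims the sun. The gains, sensor model, and gray-level statistic are all assumptions:

      # Minimal sketch, not the authors' implementation: adjust camera
      # integration time so the mean gray level tracks a set-point, using a
      # velocity-form PID update. Everything below is illustrative.
      import numpy as np

      KP, KI, KD = 0.4, 0.15, 0.05
      SETPOINT = 0.5                          # target mean gray level (0..1)

      def pid_update(t_int, err_hist):
          """Velocity-form PID: relative change from the last three errors."""
          e0, e1, e2 = err_hist[-1], err_hist[-2], err_hist[-3]
          delta = KP * (e0 - e1) + KI * e0 + KD * (e0 - 2 * e1 + e2)
          return float(np.clip(t_int * (1.0 + delta), 1e-5, 1e-1))  # seconds

      def simulate(frames=80):
          t_int = 1e-3
          errs = [0.0, 0.0, 0.0]
          gray = 0.0
          for k in range(frames):
              illum = max(0.85 ** k, 2e-3)            # ~500x fade, then steady
              gray = min(1.0, illum * t_int * 5e3)    # toy sensor response
              errs.append(SETPOINT - gray)
              t_int = pid_update(t_int, errs)
          print(f"final integration time {t_int * 1e3:.1f} ms, gray {gray:.2f}")

      simulate()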

  11. Vertical equilibrium with sub-scale analytical methods for geological CO2 sequestration

    KAUST Repository

    Gasda, S. E.; Nordbotten, J. M.; Celia, M. A.

    2009-01-01

    The vertical equilibrium with sub-scale analytical method (VESA) combines the flexibility of a numerical method, allowing for heterogeneous and geologically complex systems, with the efficiency and accuracy of an analytical method, thereby eliminating expensive grid

  12. Method for integrating a train of fast, nanosecond wide pulses

    International Nuclear Information System (INIS)

    Rose, C.R.

    1987-01-01

    This paper describes a method used to integrate a train of fast, nanosecond-wide pulses. The pulses come from current transformers in an RF LINAC beamline. Because they are ac signals and have no dc component, true mathematical integration would yield zero over the pulse train period, or an equally erroneous value because of a dc baseline shift. The circuit used to integrate the pulse train first stretches the pulses to 35 ns FWHM. The signals are then fed into a high-speed, precision rectifier which restores a true dc baseline for the following stage - a fast, gated integrator. The rectifier is linear over 55 dB in excess of 25 MHz, and the gated integrator is linear over a 60 dB range with input pulse widths as short as 16 ns. The assembled system is linear over 30 dB with a 6 MHz input signal.
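
    The hardware itself cannot be reproduced here, but a short numerical sketch illustrates why the rectifier stage matters: an ac-coupled pulse train integrates to approximately zero, while rectification restores a dc baseline so a gated integrator sees a usable, nonzero signal. Pulse widths and spacing below are invented for the illustration:

      # Toy simulation (not the hardware described in the record): direct
      # integration of an AC-coupled pulse train vs. rectify-then-integrate.
      import numpy as np

      t = np.linspace(0.0, 1e-6, 100001)             # 1 us window, 10 ps steps
      dt = t[1] - t[0]
      pulses = sum(np.exp(-((t - c) / 5e-9) ** 2)    # train of ~5 ns pulses
                   for c in np.arange(1e-7, 1e-6, 1e-7))
      ac = pulses - pulses.mean()                    # AC coupling removes DC

      print("direct integral   :", ac.sum() * dt)           # ~ 0
      print("rectified integral:", np.abs(ac).sum() * dt)   # nonzero, usable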

  13. Numerical method for solving linear Fredholm fuzzy integral equations of the second kind

    Energy Technology Data Exchange (ETDEWEB)

    Abbasbandy, S. [Department of Mathematics, Imam Khomeini International University, P.O. Box 288, Ghazvin 34194 (Iran, Islamic Republic of)]. E-mail: saeid@abbasbandy.com; Babolian, E. [Faculty of Mathematical Sciences and Computer Engineering, Teacher Training University, Tehran 15618 (Iran, Islamic Republic of); Alavi, M. [Department of Mathematics, Arak Branch, Islamic Azad University, Arak 38135 (Iran, Islamic Republic of)

    2007-01-15

    In this paper we use the parametric form of fuzzy numbers and convert a linear fuzzy Fredholm integral equation into two crisp linear systems of integral equations of the second kind. One can then apply a numerical method, such as the Nystrom method, to find an approximate solution of the system and hence obtain an approximation to the fuzzy solution of the linear fuzzy Fredholm integral equation of the second kind. The proposed method is illustrated by solving some numerical examples.
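
    The fuzzy parametric-form machinery is not reproduced here, but the crisp building block the authors invoke, the Nystrom method for a Fredholm equation of the second kind, can be sketched as follows (the quadrature rule, kernel, and test problem are chosen purely for illustration):

      # Sketch of the crisp Nystrom method: solve
      #   u(x) = f(x) + lam * int_0^1 K(x, t) u(t) dt
      # by replacing the integral with a quadrature rule and solving the
      # resulting linear system at the quadrature nodes.
      import numpy as np

      def nystrom(f, K, lam=1.0, n=64):
          x = (np.arange(n) + 0.5) / n          # midpoint nodes on [0, 1]
          w = np.full(n, 1.0 / n)               # equal quadrature weights
          A = np.eye(n) - lam * K(x[:, None], x[None, :]) * w
          return x, np.linalg.solve(A, f(x))    # (I - lam*K*W) u = f

      # Test problem with known solution u(x) = x:
      # K(x, t) = x*t gives int_0^1 K u dt = x/3, so take f(x) = x - x/3.
      x, u = nystrom(lambda x: x - x / 3.0, lambda x, t: x * t)
      print("max error vs exact solution:", np.abs(u - x).max())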

  14. Measurement of integrated healthcare delivery: a systematic review of methods and future research directions

    Directory of Open Access Journals (Sweden)

    Martin Strandberg-Larsen

    2009-02-01

    Background: Integrated healthcare delivery is a policy goal of healthcare systems. There is no consensus on how to measure the concept, which makes it difficult to monitor progress. Purpose: To identify the different types of methods used to measure integrated healthcare delivery, with emphasis on structural, cultural and process aspects. Methods: Medline/Pubmed, EMBASE, Web of Science, Cochrane Library, WHOLIS, and conventional internet search engines were systematically searched for methods to measure integrated healthcare delivery (published up to April 2008). Results: Twenty-four published scientific papers and documents met the inclusion criteria. In the 24 references we identified 24 different measurement methods; however, 5 methods shared a theoretical framework. The methods can be categorized according to type of data source: (a) questionnaire survey data, (b) automated register data, or (c) mixed data sources. The variety of concepts measured reflects the significant conceptual diversity within the field, and most methods lack information regarding validity and reliability. Conclusion: Several methods have been developed to measure integrated healthcare delivery; 24 methods are available and some are highly developed. The objective governs the method best used. Criteria for sound measures are suggested, and further developments should be based on an explicit conceptual framework and focus on simplifying and validating existing methods.

  15. Leading for the long haul: a mixed-method evaluation of the Sustainment Leadership Scale (SLS).

    Science.gov (United States)

    Ehrhart, Mark G; Torres, Elisa M; Green, Amy E; Trott, Elise M; Willging, Cathleen E; Moullin, Joanna C; Aarons, Gregory A

    2018-01-19

    Despite our progress in understanding the organizational context for implementation, and specifically the role of leadership in implementation, its role in sustainment has received little attention. This paper took a mixed-method approach to examine leadership during the sustainment phase of the Exploration, Preparation, Implementation, Sustainment (EPIS) framework. Utilizing the Implementation Leadership Scale as a foundation, we sought to develop a short, practical measure of sustainment leadership that can be used for both applied and research purposes. Data for this study were collected as part of a larger mixed-method study of the sustainment of an evidence-based intervention, SafeCare®. Quantitative data were collected from 157 providers using web-based surveys. Confirmatory factor analysis was used to examine the factor structure of the Sustainment Leadership Scale (SLS). Qualitative data were collected from 95 providers who participated in one of 15 focus groups. A framework approach guided qualitative data analysis. Mixed-method integration was also utilized to examine convergence of quantitative and qualitative findings. Confirmatory factor analysis supported the a priori higher-order factor structure of the SLS, with subscales indicating a single higher-order sustainment leadership factor. The SLS demonstrated excellent internal consistency reliability. Qualitative analyses offered support for the dimensions of sustainment leadership captured by the quantitative measure, in addition to uncovering a fifth possible factor, available leadership. This study found qualitative and quantitative support for the pragmatic SLS measure. The SLS can be used for assessing leadership of first-level leaders to understand how staff perceive leadership during sustainment and to suggest areas where leaders could direct more attention in order to increase the likelihood that EBIs are institutionalized into the normal functioning of the organization.

  16. Integrating chemical fate and population-level effect models of pesticides: the importance of capturing the right scales

    NARCIS (Netherlands)

    Focks, A.; Horst, ter M.M.S.; Berg, van den E.; Baveco, H.; Brink, van den P.J.

    2014-01-01

    Any attempt to introduce more ecological realism into ecological risk assessment of chemicals faces the major challenge of integrating different aspects of the chemicals and species of concern, for example, spatial scales of emissions, chemical exposure patterns in space and time, and population

  17. A unified double-loop multi-scale control strategy for NMP integrating-unstable systems

    International Nuclear Information System (INIS)

    Seer, Qiu Han; Nandong, Jobrun

    2016-01-01

    This paper presents a new control strategy which unifies the direct and indirect multi-scale control schemes via a double-loop control structure. This unified control strategy is proposed for controlling a class of highly nonminimum-phase processes having both integrating and unstable modes. This type of system is often encountered in fed-batch fermentation processes, which are very difficult to stabilize via most existing well-established control strategies. A systematic design procedure is provided, and its applicability is demonstrated via a numerical example. (paper)

  18. Multi-scale method for the resolution of the neutronic kinetics equations

    International Nuclear Information System (INIS)

    Chauvet, St.

    2008-10-01

    In this PhD thesis, in order to improve the time/precision ratio of numerical simulation calculations, we investigate multi-scale techniques for the resolution of the reactor kinetics equations. We choose to focus on the mixed dual diffusion approximation and the quasi-static methods. We introduce a space dependency for the amplitude function, which depends only on the time variable in the standard quasi-static context. With this new factorization, we develop two mixed dual problems which can be solved with CEA's solver MINOS. An algorithm is implemented, performing the resolution of these problems defined on different scales (for time and space). We name this approach the Local Quasi-Static method. We present here this new multi-scale approach and its implementation. The inherent details of the amplitude and shape treatments are discussed and justified. Results and performance, compared to MINOS, are studied; they illustrate the improvement of the time/precision ratio for kinetics calculations. Furthermore, we open some new possibilities to parallelize computations with MINOS. For the future, we also outline possible improvements based on adaptive scales. (author)
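
    For orientation, the factorization underlying quasi-static methods can be sketched as follows (notation assumed here for illustration, not taken from the thesis). The standard quasi-static method writes the flux as

    \phi(\vec{r}, t) = A(t)\,\psi(\vec{r}, t),

    with a purely time-dependent amplitude A and a slowly varying shape \psi, whereas the Local Quasi-Static method described above generalizes the amplitude to a space-time function,

    \phi(\vec{r}, t) = A(\vec{r}, t)\,\psi(\vec{r}, t),

    so that A can be resolved on a coarse spatial mesh with a fine time step, and \psi on a fine spatial mesh with a coarse time step.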

  19. A comparison of multidimensional scaling methods for perceptual mapping

    NARCIS (Netherlands)

    Bijmolt, T.H.A.; Wedel, M.

    Multidimensional scaling has been applied to a wide range of marketing problems, in particular to perceptual mapping based on dissimilarity judgments. The introduction of methods based on the maximum likelihood principle is one of the most important developments. In this article, the authors compare

  20. Photometric method for determination of acidity constants through integral spectra analysis

    Science.gov (United States)

    Zevatskiy, Yuriy Eduardovich; Ruzanov, Daniil Olegovich; Samoylov, Denis Vladimirovich

    2015-04-01

    An express method for determination of acidity constants of organic acids, based on the analysis of the integral transmittance vs. pH dependence, is developed. The integral value is registered as a photocurrent of a photometric device simultaneously with potentiometric titration. The proposed method allows pKa to be obtained using only simple and low-cost instrumentation. The optical part of the experimental setup has been optimized by excluding the monochromator. As a result, it takes only 10-15 min to obtain one pKa value, with an absolute error of less than 0.15 pH units. Application limitations and reliability of the method have been tested for a series of organic acids of various nature.
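
    The data reduction at the heart of such a method is a fit of the integral transmittance vs. pH curve. A minimal sketch of one plausible implementation follows; the two-state sigmoidal model, the synthetic data and all names are illustrative assumptions, not the authors' code:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def transmittance(ph, t_acid, t_base, pka):
        """Two-state Henderson-Hasselbalch mixing of acid/base transmittance."""
        frac_base = 1.0 / (1.0 + 10.0 ** (pka - ph))  # fraction deprotonated
        return t_acid + (t_base - t_acid) * frac_base

    # Synthetic titration data (stand-in for the photocurrent readings).
    ph = np.linspace(2.0, 8.0, 25)
    rng = np.random.default_rng(0)
    signal = transmittance(ph, 0.30, 0.75, 4.76) + rng.normal(0, 0.005, ph.size)

    popt, pcov = curve_fit(transmittance, ph, signal, p0=(0.2, 0.8, 5.0))
    print(f"pKa = {popt[2]:.2f} +/- {np.sqrt(pcov[2, 2]):.2f}")
    ```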

  1. Improvements of the integral transport theory method

    International Nuclear Information System (INIS)

    Kavenoky, A.; Lam-Hime, M.; Stankovski, Z.

    1979-01-01

    The integral transport theory is widely used in practical reactor design calculations however it is computer time consuming for two dimensional calculations of large media. In the first part of this report a new treatment is presented; it is based on the Galerkin method: inside each region the total flux is expanded over a three component basis. Numerical comparison shows that this method can considerably reduce the computing time. The second part of the this report is devoted to homogeneization theory: a straightforward calculation of the fundamental mode for an heterogeneous cell is presented. At first general presentation of the problem is given, then it is simplified to plane geometry and numerical results are presented

  2. The application of J integral to measure cohesive laws in materials undergoing large scale yielding

    DEFF Research Database (Denmark)

    Sørensen, Bent F.; Goutianos, Stergios

    2015-01-01

    We explore the possibility of determining cohesive laws by the J-integral approach for materials having non-linear stress-strain behaviour (e.g. polymers and composites) by the use of a DCB sandwich specimen, consisting of stiff elastic beams bonded to the non-linear test material, loaded with pure bending moments. For a wide range of parameters of the non-linear material, the plastic unloading during crack extension is small, resulting in J integral values (fracture resistance) that deviate at most 15% from the work of the cohesive traction. Thus the method can be used to extract the cohesive laws directly from experiments without any presumption about their shape. Finally, the DCB sandwich specimen was also analysed using the I integral to quantify the overestimation of the steady-state fracture resistance obtained using the J integral based method.
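
    For reference, part of the appeal of the moment-loaded DCB geometry is that the externally applied J integral has a closed form. For a symmetric specimen of width B, arm height H and Young's modulus E, loaded with equal and opposite moments M, linear elastic beam theory gives (a standard result, stated here for orientation rather than taken from this paper):

    J_{ext} = \frac{12 M^2}{E B^2 H^3},

    and since J also equals the work of the cohesive traction, J = \int_0^{\delta^*} \sigma(\delta)\, d\delta, the cohesive law follows by differentiating the measured resistance curve with respect to the end-opening \delta^*:

    \sigma(\delta^*) = \frac{\partial J}{\partial \delta^*}.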

  3. Large scale air pollution estimation method combining land use regression and chemical transport modeling in a geostatistical framework.

    Science.gov (United States)

    Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey

    2014-04-15

    In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.

  4. A novel fruit shape classification method based on multi-scale analysis

    Science.gov (United States)

    Gui, Jiangsheng; Ying, Yibin; Rao, Xiuqin

    2005-11-01

    Shape is one of the major concerns, and still a difficult problem, in automated inspection and sorting of fruits. In this research, we propose the multi-scale energy distribution (MSED) for object shape description; the relationship between an object's shape and the energy distribution of its boundary at multiple scales is explored for shape extraction. MSED offers not only the main energy, which represents primary shape information at the lower scales, but also subordinate energy, which represents local shape information at the higher differential scales. Thus, it provides a natural tool for multi-resolution representation and can be used as a feature for shape classification. We address the three main processing steps in MSED-based shape classification, namely: (1) image preprocessing and citrus shape extraction; (2) shape resampling and shape feature normalization; (3) energy decomposition by wavelet analysis and classification by a BP neural network. In the resampling step, 256 boundary pixels are resampled from a cubic-spline approximation of the original boundary in order to obtain uniform raw data. A probability function is defined, and an effective method to select a start point is given through maximal expectation; this overcomes the inconvenience of traditional methods and yields rotation invariance. In the experiments, normal citrus and seriously abnormal fruit were classified with a rate above 91.2%; the global correct classification rate is 89.77%, and our method is more effective than traditional methods. This result can meet the requirements of fruit grading.
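
    A minimal sketch of this pipeline's core steps, resampling the boundary with a periodic cubic spline and computing per-scale wavelet energies, might look as follows; the centroid-distance signature, the wavelet choice and all names are assumptions for illustration, not the authors' code:

    ```python
    import numpy as np
    import pywt
    from scipy.interpolate import splprep, splev

    def msed_features(x, y, n_points=256, wavelet="db4", levels=5):
        """Multi-scale energy distribution of a closed boundary (illustrative sketch)."""
        # 1) Resample the boundary to n_points via a periodic cubic spline.
        tck, _ = splprep([x, y], s=0, per=True)
        xs, ys = splev(np.linspace(0, 1, n_points, endpoint=False), tck)
        # 2) Use the centroid-distance signature as a 1-D shape signal.
        r = np.hypot(xs - xs.mean(), ys - ys.mean())
        r = r / r.max()  # scale normalization
        # 3) Wavelet decomposition; the energy per scale is the feature vector.
        coeffs = pywt.wavedec(r, wavelet, level=levels)
        return np.array([np.sum(c ** 2) for c in coeffs])

    # Example: an ellipse-like boundary.
    t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    features = msed_features(np.cos(t), 0.7 * np.sin(t))
    print(features)  # would feed a BP (backpropagation) neural network classifier
    ```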

  5. Direct Calculation of Permeability by High-Accurate Finite Difference and Numerical Integration Methods

    KAUST Repository

    Wang, Yi

    2016-07-21

    Velocity of fluid flow in underground porous media is 6-12 orders of magnitude lower than that in pipelines. If numerical errors are not carefully controlled in such simulations, high distortion of the final results may occur [1-4]. To meet the high accuracy demands of fluid flow simulations in porous media, traditional finite difference methods and numerical integration methods are discussed and corresponding high-accuracy methods are developed. When applied to the direct calculation of full-tensor permeability for underground flow, the high-accuracy finite difference method is confirmed to have a numerical error as low as 10^-5 %, while the high-accuracy numerical integration method has a numerical error around 0%. Thus, the approach combining the high-accuracy finite difference and numerical integration methods is a reliable way to efficiently determine the characteristics of a general full-tensor permeability, such as the maximum and minimum permeability components, principal direction and anisotropic ratio. Copyright © Global-Science Press 2016.
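
    As a small illustration of why stencil order matters when the quantities of interest are this small, the following sketch compares second- and fourth-order central differences on a known derivative (a generic example, not the paper's scheme):

    ```python
    import numpy as np

    def dfdx_2nd(f, x, h):
        return (f(x + h) - f(x - h)) / (2 * h)  # O(h^2) central difference

    def dfdx_4th(f, x, h):
        # O(h^4) five-point central difference
        return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12 * h)

    f, exact = np.sin, np.cos(1.0)  # exact derivative of sin at x = 1
    for h in (1e-1, 1e-2, 1e-3):
        e2 = abs(dfdx_2nd(f, 1.0, h) - exact)
        e4 = abs(dfdx_4th(f, 1.0, h) - exact)
        print(f"h={h:.0e}  2nd-order err={e2:.2e}  4th-order err={e4:.2e}")
    ```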

  6. Optimal Homotopy Asymptotic Method for Solving System of Fredholm Integral Equations

    Directory of Open Access Journals (Sweden)

    Bahman Ghazanfari

    2013-08-01

    Full Text Available In this paper, the optimal homotopy asymptotic method (OHAM) is applied to solve systems of Fredholm integral equations, and the effectiveness of the method is presented. This method provides easy tools to control the convergence region of the approximating solution series wherever necessary. The results of OHAM are compared with the homotopy perturbation method (HPM) and the Taylor series expansion method (TSEM).
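
    As a sketch of the construction (the standard OHAM form, written here in our own notation rather than the paper's): a single Fredholm equation of the second kind, u(x) = f(x) + \lambda \int_a^b K(x,t)\, u(t)\, dt, rewritten as L(u) + g + N(u) = 0, is embedded in the one-parameter family

    (1 - p)\,\big[ L(\phi(x;p)) + g(x) \big] = H(p)\,\big[ L(\phi(x;p)) + g(x) + N(\phi(x;p)) \big], \qquad H(p) = p\,C_1 + p^2 C_2 + \cdots,

    with p \in [0,1]. Expanding \phi in powers of p and choosing the convergence-control constants C_i (e.g. by least-squares minimization of the residual) is what provides the "easy tools to control the convergence region" mentioned above.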

  7. Integrated project delivery methods for energy renovation of social housing

    Directory of Open Access Journals (Sweden)

    Tadeo Baldiri Salcedo Rahola

    2015-11-01

    Full Text Available Optimised project delivery methods for social housing energy renovations. European Social Housing Organisations (SHOs) are currently facing challenging times. The ageing of their housing stock and the economic crisis, which has affected both their finances and the finances of their tenants, are testing their capacity to stick to their aim of providing decent and affordable housing. Housing renovation projects offer the possibility of upgrading the health and comfort levels of their old housing stock to current standards and improving energy efficiency, and this solution also addresses the fuel poverty problems suffered by some tenants. Unfortunately, the limited financial capacity of SHOs is hampering the scale of housing renovation projects and the energy savings achieved. At the same time, the renovation of the existing housing stock is seen as one of the most promising alternative routes to achieving the ambitious CO2 emissions reduction targets set by European authorities – namely, to reduce EU CO2 emissions to 20% below their 1990 levels by 2020. The synergy between European targets and the aims of SHOs has been addressed by the energy policies of the member states, which focus on the potential energy savings achievable by renovating social housing. In fact, the European initiatives have prioritised energy savings in social housing renovations to such an extent that these are referred to as 'energy renovations'. An energy renovation is therefore a renovation project with a higher energy savings target than a regular renovation project. In total, European SHOs own 21.5 million dwellings, representing around 9.4% of the total housing stock. Each SHO owns a large number of dwellings, which means there are fewer people to convince of the need to make energy savings through building renovations, maximising the potentially high impact of decisions. Moreover, SHOs are responsible for maintaining and upgrading their properties in order to continue

  8. A revised method to calculate the concentration time integral of atmospheric pollutants

    International Nuclear Information System (INIS)

    Voelz, E.; Schultz, H.

    1980-01-01

    It is possible to calculate the spreading of a plume in the atmosphere under nonstationary and nonhomogeneous conditions by introducing the ''particle-in-cell'' method (PIC). This is a numerical method in which transport of and diffusion in the plume are reproduced by moving particles, which represent the concentration, time-step-wise within restricted regions (cells), separately with the advection velocity and the diffusion velocity. This has a systematic advantage over the steady-state Gaussian plume model usually used: the fixed-point concentration time integral is calculated directly, instead of being substituted by the locally integrated concentration at a constant time, as is done in the Gaussian model. In this way inaccuracies due to the above-mentioned computational techniques may be avoided for short-time emissions, as may be seen from the fact that the two integrals do not lead to the same results. The PIC method also makes it possible to consider the height-dependent wind speed and its variations, while the Gaussian model can be used only with averaged wind data. The concentration time integral calculated by the PIC method results in higher maximum values at shorter distances from the source, an effect often observed in measurements. (author)
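
    A minimal one-dimensional sketch of the idea, with particles moved time-step-wise by an advection displacement plus a random-walk diffusion step while the fixed-point concentration time integral is accumulated directly, might look like this (all parameter values are illustrative assumptions, not the paper's setup):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, dt, steps = 20000, 1.0, 300
    u, K = 2.0, 5.0                      # advection velocity [m/s], diffusivity [m^2/s]
    x = np.zeros(n)                      # all particles released at the source (short puff)
    cell_edges = (95.0, 105.0)           # fixed receptor cell around x = 100 m
    dose = 0.0                           # fixed-point concentration time integral

    for _ in range(steps):
        # Advection step plus random-walk diffusion step (std = sqrt(2*K*dt)).
        x += u * dt + rng.normal(0.0, np.sqrt(2 * K * dt), n)
        in_cell = np.count_nonzero((x > cell_edges[0]) & (x < cell_edges[1]))
        dose += in_cell / (cell_edges[1] - cell_edges[0]) * dt

    print(f"time-integrated concentration at receptor: {dose / n:.4f} (per unit release)")
    ```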

  9. Application of the critical pathway and integrated case teaching method to nursing orientation.

    Science.gov (United States)

    Goodman, D

    1997-01-01

    Nursing staff development programs must be responsive to current changes in healthcare. New nursing staff must be prepared to manage continuous change and to function competently in clinical practice. The orientation pathway, based on a case management model, is used as a structure for the orientation phase of staff development. The integrated case is incorporated as a teaching strategy in orientation. The integrated case method is based on discussion and analysis of patient situations with emphasis on role modeling and integration of theory and skill. The orientation pathway and integrated case teaching method provide a useful framework for orientation of new staff. Educators, preceptors and orientees find the structure provided by the orientation pathway very useful. Orientation that is developed, implemented and evaluated based on a case management model with the use of an orientation pathway and incorporation of an integrated case teaching method provides a standardized structure for orientation of new staff. This approach is designed for the adult learner, promotes conceptual reasoning, and encourages the social and contextual basis for continued learning.

  10. Small-scale integrated demonstration of high-level radioactive waste processing and vitrification using actual SRP waste

    International Nuclear Information System (INIS)

    Woolsey, G.B.; Baumgarten, P.K.; Eibling, R.E.; Ferguson, R.B.

    1981-01-01

    A small-scale pilot plant for chemical processing and vitrification of actual high-level waste has been constructed at the Savannah River Laboratory (SRL). This fully integrated facility has been constructed in six shielded cells and has eight major unit operations. Equipment performance and processing characteristics of the unit operations are reported

  11. Integral equation models for image restoration: high accuracy methods and fast algorithms

    International Nuclear Information System (INIS)

    Lu, Yao; Shen, Lixin; Xu, Yuesheng

    2010-01-01

    Discrete models are consistently used as practical models for image restoration. They are piecewise constant approximations of true physical (continuous) models and hence inevitably introduce model errors that become a bottleneck. We propose to work directly with continuous models for image restoration, aiming at suppressing the model errors caused by the discrete models. A systematic study is conducted in this paper for the continuous out-of-focus image models, which can be formulated as an integral equation of the first kind. The resulting integral equation is regularized by the Lavrentiev method and the Tikhonov method. We develop fast multiscale algorithms of high accuracy to solve the regularized integral equations of the second kind. Numerical experiments show that the methods based on the continuous model perform much better than those based on discrete models, in terms of PSNR values and visual quality of the reconstructed images
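
    Concretely, the out-of-focus model is a Fredholm integral equation of the first kind, and the two regularizations turn it into well-posed equations of the second kind (standard forms, given here for orientation):

    (K f)(x) = \int_\Omega k(x, y)\, f(y)\, dy = g(x),

    \text{Lavrentiev:}\quad \alpha f_\alpha + K f_\alpha = g, \qquad \text{Tikhonov:}\quad \alpha f_\alpha + K^* K f_\alpha = K^* g,

    with regularization parameter \alpha > 0 and adjoint operator K^*; the fast multiscale algorithms then solve these second-kind equations.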

  12. IMGMD: A platform for the integration and standardisation of In silico Microbial Genome-scale Metabolic Models.

    Science.gov (United States)

    Ye, Chao; Xu, Nan; Dong, Chuan; Ye, Yuannong; Zou, Xuan; Chen, Xiulai; Guo, Fengbiao; Liu, Liming

    2017-04-07

    Genome-scale metabolic models (GSMMs) constitute a platform that combines genome sequences and detailed biochemical information to quantify microbial physiology at the system level. To improve the unity, integrity, correctness, and format of data in published GSMMs, a consensus IMGMD database was built in the LAMP (Linux + Apache + MySQL + PHP) system by integrating and standardizing 328 GSMMs constructed for 139 microorganisms. The IMGMD database can help microbial researchers download manually curated GSMMs, rapidly reconstruct standard GSMMs, design pathways, and identify metabolic targets for strategies on strain improvement. Moreover, the IMGMD database facilitates the integration of wet-lab and in silico data to gain an additional insight into microbial physiology. The IMGMD database is freely available, without any registration requirements, at http://imgmd.jiangnan.edu.cn/database.

  13. Application of wavelets to singular integral scattering equations

    International Nuclear Information System (INIS)

    Kessler, B.M.; Payne, G.L.; Polyzou, W.N.

    2004-01-01

    The use of orthonormal wavelet basis functions for solving singular integral scattering equations is investigated. It is shown that these basis functions lead to sparse matrix equations which can be solved by iterative techniques. The scaling properties of wavelets are used to derive an efficient method for evaluating the singular integrals. The accuracy and efficiency of the wavelet transforms are demonstrated by solving the two-body T-matrix equation without partial wave projection. The resulting matrix equation which is characteristic of multiparticle integral scattering equations is found to provide an efficient method for obtaining accurate approximate solutions to the integral equation. These results indicate that wavelet transforms may provide a useful tool for studying few-body systems
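
    For context, the two-body T-matrix equation solved without partial wave projection is the Lippmann-Schwinger equation, whose kernel carries the singularity that the wavelet scaling properties are exploited to integrate (standard form, up to normalization conventions):

    T(\vec{p}, \vec{p}\,'; E) = V(\vec{p}, \vec{p}\,') + \int d^3 q\, \frac{V(\vec{p}, \vec{q})\, T(\vec{q}, \vec{p}\,'; E)}{E - q^2/(2\mu) + i\epsilon}.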

  14. Correlates of the Rosenberg Self-Esteem Scale Method Effects

    Science.gov (United States)

    Quilty, Lena C.; Oakman, Jonathan M.; Risko, Evan

    2006-01-01

    Investigators of personality assessment are becoming aware that using positively and negatively worded items in questionnaires to prevent acquiescence may negatively impact construct validity. The Rosenberg Self-Esteem Scale (RSES) has demonstrated a bifactorial structure typically proposed to result from these method effects. Recent work suggests…

  15. Quantitative analysis of scaling error compensation methods in dimensional X-ray computed tomography

    DEFF Research Database (Denmark)

    Müller, P.; Hiller, Jochen; Dai, Y.

    2015-01-01

    X-ray Computed Tomography (CT) has become an important technology for quality control of industrial components. As with other technologies, e.g., tactile coordinate measurements or optical measurements, CT is influenced by numerous quantities which may have a negative impact on measurement accuracy, among them scaling errors of the manipulator system (magnification axis). This article also introduces a new compensation method for scaling errors using a database of reference scaling factors and discusses its advantages and disadvantages. In total, three methods for the correction of scaling errors – using the CT ball...

  16. Large scale integration of intermittent renewable energy sources in the Greek power sector

    International Nuclear Information System (INIS)

    Voumvoulakis, Emmanouil; Asimakopoulou, Georgia; Danchev, Svetoslav; Maniatis, George; Tsakanikas, Aggelos

    2012-01-01

    As a member of the European Union, Greece has committed to achieve ambitious targets for the penetration of renewable energy sources (RES) in gross electricity consumption by 2020. Large scale integration of RES requires a suitable mixture of compatible generation units, in order to deal with the intermittency of wind velocity and solar irradiation. The scope of this paper is to examine the impact of the large scale integration of intermittent energy sources, required to meet the 2020 RES target, on the generation expansion plan, the fuel mix and the spinning reserve requirements of the Greek electricity system. We perform hourly simulation of the intermittent RES generation to estimate residual load curves on a monthly basis, which are then input into a WASP-IV model of the Greek power system. We find that the decarbonisation effort, with the rapid entry of RES and the abolishment of the grandfathering of CO2 allowances, will radically transform the Greek electricity sector over the next 10 years, which has wide-reaching policy implications. - Highlights: ► Greece needs 8.8 to 9.3 GW of additional RES installations by 2020. ► RES capacity credit varies between 12.2% and 15.3%, depending on interconnections. ► Without institutional changes, the reserve requirements will be more than double. ► New CCGT installed capacity will probably exceed the cost-efficient level. ► Competitive pressures should be introduced in segments other than the day-ahead market.

  17. User's guide to Monte Carlo methods for evaluating path integrals

    Science.gov (United States)

    Westbroek, Marise J. E.; King, Peter R.; Vvedensky, Dimitri D.; Dürr, Stephan

    2018-04-01

    We give an introduction to the calculation of path integrals on a lattice, with the quantum harmonic oscillator as an example. In addition to providing an explicit computational setup and corresponding pseudocode, we pay particular attention to the existence of autocorrelations and the calculation of reliable errors. The over-relaxation technique is presented as a way to counter strong autocorrelations. The simulation methods can be extended to compute observables for path integrals in other settings.
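
    In the spirit of the computational setup described above, a minimal Metropolis sketch for the lattice path integral of the 1-D harmonic oscillator (units m = omega = hbar = 1; parameter values are illustrative, and a production run would also block-average samples to handle the autocorrelations the paper emphasizes) is:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_slices, a = 100, 0.1              # lattice sites, lattice spacing (beta = n*a = 10)
    x = np.zeros(n_slices)              # the path, with periodic boundary conditions
    step, n_sweeps, burn_in = 0.5, 5000, 500
    x2_samples = []

    def action_diff(x, i, xi_new):
        """Change in the Euclidean action when site i is updated."""
        ip, im = (i + 1) % n_slices, (i - 1) % n_slices
        def s_local(xi):
            return ((xi - x[im])**2 + (x[ip] - xi)**2) / (2 * a) + a * xi**2 / 2
        return s_local(xi_new) - s_local(x[i])

    for sweep in range(n_sweeps):
        for i in range(n_slices):
            xi_new = x[i] + rng.uniform(-step, step)
            if rng.random() < np.exp(-action_diff(x, i, xi_new)):  # Metropolis accept
                x[i] = xi_new
        if sweep >= burn_in:
            x2_samples.append(np.mean(x**2))

    # For beta = 10 the continuum value of <x^2> is 0.5*coth(beta/2), i.e. close to 0.5.
    print(f"<x^2> = {np.mean(x2_samples):.3f}")
    ```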

  18. Classification Method in Integrated Information Network Using Vector Image Comparison

    Directory of Open Access Journals (Sweden)

    Zhou Yuan

    2014-05-01

    Full Text Available A Wireless Integrated Information Network (WMN) consists of integrated information nodes that can gather data from their surroundings, such as images and voice. Transmitting this information requires large resources, which decreases the service time of the network. In this paper we present a Classification Approach based on Vector Image Comparison (VIC) for WMN that improves the service time of the network. Methods for sub-region selection and conversion are also proposed.

  19. Atomization in graphite-furnace atomic absorption spectrometry. Peak-height method vs. integration method of measuring absorbance: carbon rod atomizer 63

    International Nuclear Information System (INIS)

    Sturgeon, R.E.; Chakrabarti, C.L.; Maines, I.S.; Bertels, P.C.

    1975-01-01

    Oscilloscopic traces of transient atomic absorption signals generated during continuous heating of a Carbon Rod Atomizer model 63 show features which are characteristic of the element being atomized. This research was undertaken to determine the significance and usefulness of the two analytically significant parameters, absorbance maximum and integrated absorbance. For measuring integrated absorbance, an electronic integrating control unit consisting of a timing circuit, a lock-in amplifier, and a digital voltmeter, which functions as a direct absorbance × second readout, has been designed, developed, and successfully tested. Oscilloscopic and recorder traces of the absorbance maximum and a digital display of the integrated absorbance are obtained simultaneously. For the elements studied, Cd, Zn, Cu, Al, Sn, Mo, and V, the detection limits and the precision obtained are practically identical for both methods of measurement. The sensitivities of the integration method are about the same as, or less than, those obtained by the peak-height method, whereas the calibration curves of the former are generally linear over wider ranges of concentrations. (U.S.)
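
    The two readouts are easy to contrast on a synthetic transient signal; the following sketch computes both the absorbance maximum and the integrated absorbance (the pulse shape and all values are assumed for illustration):

    ```python
    import numpy as np

    t = np.linspace(0.0, 3.0, 3000)                    # time [s]
    tau_rise, tau_fall = 0.08, 0.35
    # Synthetic transient absorbance signal A(t): fast rise, slower decay.
    signal = (1 - np.exp(-t / tau_rise)) * np.exp(-t / tau_fall)

    peak_height = signal.max()                          # absorbance maximum
    integrated = np.trapezoid(signal, t)                # integrated absorbance [A*s]
    # (use np.trapz instead of np.trapezoid on NumPy < 2.0)
    print(f"peak height = {peak_height:.3f} A, integral = {integrated:.3f} A*s")
    ```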

  20. A 3D bioprinting system to produce human-scale tissue constructs with structural integrity.

    Science.gov (United States)

    Kang, Hyun-Wook; Lee, Sang Jin; Ko, In Kap; Kengla, Carlos; Yoo, James J; Atala, Anthony

    2016-03-01

    A challenge for tissue engineering is producing three-dimensional (3D), vascularized cellular constructs of clinically relevant size, shape and structural integrity. We present an integrated tissue-organ printer (ITOP) that can fabricate stable, human-scale tissue constructs of any shape. Mechanical stability is achieved by printing cell-laden hydrogels together with biodegradable polymers in integrated patterns and anchored on sacrificial hydrogels. The correct shape of the tissue construct is achieved by representing clinical imaging data as a computer model of the anatomical defect and translating the model into a program that controls the motions of the printer nozzles, which dispense cells to discrete locations. The incorporation of microchannels into the tissue constructs facilitates diffusion of nutrients to printed cells, thereby overcoming the diffusion limit of 100-200 μm for cell survival in engineered tissues. We demonstrate capabilities of the ITOP by fabricating mandible and calvarial bone, cartilage and skeletal muscle. Future development of the ITOP is being directed to the production of tissues for human applications and to the building of more complex tissues and solid organs.

  1. Confluent education: an integrative method for nursing (continuing) education.

    NARCIS (Netherlands)

    Francke, A.L.; Erkens, T.

    1994-01-01

    Confluent education is presented as a method to bridge the gap between cognitive and affective learning. Attention is focused on three main characteristics of confluent education: (a) the integration of four overlapping domains in a learning process (readiness, the cognitive domain, the affective

  2. Educational integrating projects as a method of interactive learning

    Directory of Open Access Journals (Sweden)

    Иван Николаевич Куринин

    2013-12-01

    Full Text Available The article describes a method of interactive learning based on educational integrating projects. Some examples of content of such projects for the disciplines related to the study of information and Internet technologies and their application in management are presented.

  3. Integrating Science-Based Co-management, Partnerships, Participatory Processes and Stewardship Incentives to Improve the Performance of Small-Scale Fisheries

    Directory of Open Access Journals (Sweden)

    Kendra A. Karr

    2017-10-01

    Full Text Available Small-scale fisheries are critically important for the provision of food security, livelihoods, and economic development for billions of people. Yet, most of these fisheries appear not to be achieving either fisheries or conservation goals with respect to creating healthier oceans that support more fish, feed more people and improve livelihoods. Research and practical experience have elucidated many insights into how to improve the performance of small-scale fisheries. Here, we present lessons learned from five case studies of small-scale fisheries in Cuba, Mexico, the Philippines, and Belize. The major lessons that arise from these cases are: (1) participatory processes empower fishers, increase compliance, and support integration of local and scientific knowledge; (2) partnership across sectors improves communication and community buy-in; (3) scientific analysis can lead fishery reform and be directly applicable to co-management structures. These case studies suggest that a fully integrated approach that implements a participatory process to generate a scientific basis for fishery management (e.g., data collection, analysis, design) and to design management measures among stakeholders will increase the probability that small-scale fisheries will implement science-based management and improve their performance.

  4. Computing thermal Wigner densities with the phase integration method

    International Nuclear Information System (INIS)

    Beutier, J.; Borgis, D.; Vuilleumier, R.; Bonella, S.

    2014-01-01

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems
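
    For orientation, the thermal Wigner density that PIM samples is (standard one-dimensional definition, up to normalization):

    W(q, p) \propto \int d\xi\; e^{i p \xi / \hbar}\, \langle q - \xi/2 |\, e^{-\beta \hat{H}} \,| q + \xi/2 \rangle;

    PIM represents the off-diagonal Boltzmann matrix element as a path integral and treats the phase integral over \xi with a cumulant expansion, which is what reduces the Wigner function to a form sampleable by standard Monte Carlo.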

  5. Computing thermal Wigner densities with the phase integration method.

    Science.gov (United States)

    Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S

    2014-08-28

    We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.

  6. Advanced image based methods for structural integrity monitoring: Review and prospects

    Science.gov (United States)

    Farahani, Behzad V.; Sousa, Pedro José; Barros, Francisco; Tavares, Paulo J.; Moreira, Pedro M. G. P.

    2018-02-01

    There is a growing trend in engineering to develop methods for structural integrity monitoring and characterization of in-service mechanical behaviour of components. The fast growth in recent years of image processing techniques and image-based sensing for experimental mechanics, brought about a paradigm change in phenomena sensing. Hence, several widely applicable optical approaches are playing a significant role in support of experiment. The current review manuscript describes advanced image based methods for structural integrity monitoring, and focuses on methods such as Digital Image Correlation (DIC), Thermoelastic Stress Analysis (TSA), Electronic Speckle Pattern Interferometry (ESPI) and Speckle Pattern Shearing Interferometry (Shearography). These non-contact full-field techniques rely on intensive image processing methods to measure mechanical behaviour, and evolve even as reviews such as this are being written, which justifies a special effort to keep abreast of this progress.

  7. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    Science.gov (United States)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  8. Parallel Computing in SCALE

    International Nuclear Information System (INIS)

    DeHart, Mark D.; Williams, Mark L.; Bowman, Stephen M.

    2010-01-01

    activities has been developed to provide an integrated framework for future methods development. Some of the major components of the SCALE parallel computing development plan are parallelization and multithreading of computationally intensive modules and redesign of the fundamental SCALE computational architecture.

  9. Large scale mapping of groundwater resources using a highly integrated set of tools

    DEFF Research Database (Denmark)

    Søndergaard, Verner; Auken, Esben; Christiansen, Anders Vest

    large areas with information from an optimum number of new investigation boreholes, existing boreholes, logs and water samples to get an integrated and detailed description of the groundwater resources and their vulnerability. Development of more time-efficient and airborne geophysical data acquisition platforms (e.g. SkyTEM) has made large-scale mapping attractive and affordable in the planning and administration of groundwater resources. The handling and optimized use of huge amounts of geophysical data covering large areas has also required a comprehensive database, where data can easily be stored

  10. Development and validation of a new global well-being outcomes rating scale for integrative medicine research

    Directory of Open Access Journals (Sweden)

    Bell Iris R

    2004-01-01

    Full Text Available Abstract Background Researchers are finding limitations of currently available disease-focused questionnaire tools for outcome studies in complementary and alternative medicine/integrative medicine (CAM/IM). Methods Three substudies investigated the new one-item visual analogue Arizona Integrative Outcomes Scale (AIOS), which assesses self-rated global sense of spiritual, social, mental, emotional, and physical well-being over the past 24 hours and the past month. The first study tested the scale's ability to discriminate unhealthy individuals (n = 50) from healthy individuals (n = 50) in a rehabilitation outpatient clinic sample. The second study examined the concurrent validity of the AIOS by comparing ratings of global well-being to degree of psychological distress as measured by the Brief Symptom Inventory (BSI) in undergraduate college students (N = 458). The third study evaluated the relationships between the AIOS and positively- and negatively-valenced tools (the Positive and Negative Affect Scale and the Positive States of Mind Scale) in a different sample of undergraduate students (N = 62). Results Substudy (i): Rehabilitation patients scored significantly lower than the healthy controls on both forms of the AIOS and a current global health rating. The AIOS 24-hours correlated moderately and significantly with global health (patients r = 0.50; controls r = 0.45). AIOS 1-month correlations with global health were stronger within the controls (patients r = 0.36; controls r = 0.50). Controls (r = 0.64) had a higher correlation between the AIOS 24-hour and 1-month forms than did the patients (r = 0.33), which is consistent with the presumptive improvement in the patients' condition over the previous 30 days in rehabilitation. Substudy (ii): In undergraduate students, AIOS scores were inversely related to distress ratings, as measured by the global severity index on the BSI (r = -0.42 for the AIOS 24-hour and r = -0.40 for the AIOS 1-month). Substudy (iii): AIOS scores were significantly

  11. Barriers and facilitators to integrating care: experiences from the English Integrated Care Pilots

    Directory of Open Access Journals (Sweden)

    Tom Ling

    2012-07-01

    Full Text Available Background. In 2008, the English Department of Health appointed 16 'Integrated Care Pilots' which used a range of approaches to provide better integrated care. We report qualitative analyses from a three-year multi-method evaluation to identify barriers and facilitators to successful integration of care. Theory and methods. Data were analysed from transcripts of 213 in-depth staff interviews, and from semi-structured questionnaires (the 'Living Document') completed by staff in pilot sites at six points over a two-year period. Emerging findings were therefore built from the 'bottom up' and grounded in the data. However, we were then interested in how these findings compared and contrasted with more generic analyses. Therefore, after our analyses were complete, we systematically compared and contrasted the findings with the analysis of barriers and facilitators to quality improvement identified in a systematic review by Kaplan et al. (2010) and the analysis of more micro-level shapers of behaviour found in Normalisation Process Theory (May et al. 2007). Neither of these approaches claims to be a full-blown theory, but both claim to provide mid-range theoretical arguments which may be used to structure existing data and which can be undercut or reinforced by new data. Results and discussion. Many barriers and facilitators to integrating care are those of any large-scale organisational change. These include issues relating to leadership, organisational culture, information technology, physician involvement, and availability of resources. However, activities which appear particularly important for delivering integrated care include personal relationships between leaders in different organisations, the scale of planned activities, governance and finance arrangements, support for staff in new roles, and organisational and staff stability. We illustrate our analyses with a 'routemap' which identifies questions that providers may wish to consider when

  12. Barriers and facilitators to integrating care: experiences from the English Integrated Care Pilots

    Directory of Open Access Journals (Sweden)

    Tom Ling

    2012-07-01

    Full Text Available Background. In 2008, the English Department of Health appointed 16 'Integrated Care Pilots' which used a range of approaches to provide better integrated care. We report qualitative analyses from a three-year multi-method evaluation to identify barriers and facilitators to successful integration of care. Theory and methods. Data were analysed from transcripts of 213 in-depth staff interviews, and from semi-structured questionnaires (the 'Living Document') completed by staff in pilot sites at six points over a two-year period. Emerging findings were therefore built from the 'bottom up' and grounded in the data. However, we were then interested in how these findings compared and contrasted with more generic analyses. Therefore, after our analyses were complete, we systematically compared and contrasted the findings with the analysis of barriers and facilitators to quality improvement identified in a systematic review by Kaplan et al. (2010) and the analysis of more micro-level shapers of behaviour found in Normalisation Process Theory (May et al. 2007). Neither of these approaches claims to be a full-blown theory, but both claim to provide mid-range theoretical arguments which may be used to structure existing data and which can be undercut or reinforced by new data. Results and discussion. Many barriers and facilitators to integrating care are those of any large-scale organisational change. These include issues relating to leadership, organisational culture, information technology, physician involvement, and availability of resources. However, activities which appear particularly important for delivering integrated care include personal relationships between leaders in different organisations, the scale of planned activities, governance and finance arrangements, support for staff in new roles, and organisational and staff stability. We illustrate our analyses with a 'routemap' which identifies questions that providers may wish to consider when planning

  13. Analysis of Elastic-Plastic J Integrals for 3-Dimensional Cracks Using Finite Element Alternating Method

    International Nuclear Information System (INIS)

    Park, Jai Hak

    2009-01-01

    The SGBEM (Symmetric Galerkin Boundary Element Method)-FEM alternating method has been proposed by Nikishkov, Park and Atluri. In the proposed method, arbitrarily shaped three-dimensional crack problems can be solved by alternating between the crack solution in an infinite body and the finite element solution without a crack. In a previous study, the SGBEM-FEM alternating method was extended further in order to solve elastic-plastic crack problems and to obtain elastic-plastic stress fields. For the elastic-plastic analysis, the algorithm developed by Nikishkov et al. is used after modification; in this algorithm, the initial stress method is used to obtain the elastic-plastic stress and strain fields. In this paper, elastic-plastic J integrals for three-dimensional cracks are obtained using the method. For that purpose, accurate values of displacement gradients and stresses are necessary on the integration path. In order to improve the accuracy of stress near the crack surfaces, coordinate transformation and partitioning of the integration domain are used. The coordinate transformation produces a transformation Jacobian, which cancels the singularity of the integrand. Using the developed program, simple three-dimensional crack problems are solved, and elastic and elastic-plastic J integrals are obtained. The obtained J integrals are compared with, and found to be close to, the values from a handbook solution
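
    For reference, the quantity extracted on the integration path is the standard contour form of the J integral (two-dimensional form shown; the three-dimensional computation evaluates it pointwise along the crack front):

    J = \int_\Gamma \left( W\, dy - T_i\, \frac{\partial u_i}{\partial x}\, ds \right),

    with strain energy density W, tractions T_i = \sigma_{ij} n_j and displacements u_i on a contour \Gamma around the crack tip; this is why accurate stresses and displacement gradients on the path, aided by the coordinate transformation that cancels the singularity of the integrand, are essential.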

  14. Research of the effectiveness of parallel multithreaded realizations of interpolation methods for scaling raster images

    Science.gov (United States)

    Vnukov, A. A.; Shershnev, M. B.

    2018-01-01

    The aim of this work is the software implementation of three image scaling algorithms using parallel computations, as well as the development of an application with a graphical user interface for the Windows operating system to demonstrate the operation of the algorithms and to study the relationship between system performance, algorithm execution time and the degree of parallelization of computations. Three methods of interpolation were studied, formalized and adapted to the scaling of images. The result of the work is a program for scaling images by the different methods, and a comparison of the scaling quality of the different methods is given.
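
    As one plausible shape of such an implementation, the following sketch scales an image by bilinear interpolation with the output rows divided among worker threads (Python stands in for the original implementation language; the thread count and chunking strategy are illustrative assumptions):

    ```python
    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    def scale_bilinear(img, out_h, out_w, workers=4):
        """Row-parallel bilinear scaling of a single-channel image."""
        in_h, in_w = img.shape[:2]
        ys = np.linspace(0, in_h - 1, out_h)
        xs = np.linspace(0, in_w - 1, out_w)
        x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, in_w - 1)
        wx = xs - x0
        out = np.empty((out_h, out_w), dtype=np.float64)

        def do_rows(rows):
            for r in rows:
                y0 = int(ys[r]); y1 = min(y0 + 1, in_h - 1); wy = ys[r] - y0
                top = img[y0, x0] * (1 - wx) + img[y0, x1] * wx
                bot = img[y1, x0] * (1 - wx) + img[y1, x1] * wx
                out[r] = top * (1 - wy) + bot * wy

        chunks = np.array_split(np.arange(out_h), workers)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            list(pool.map(do_rows, chunks))  # threads share 'out'; row chunks are disjoint
        return out

    img = np.random.default_rng(3).random((480, 640))
    print(scale_bilinear(img, 960, 1280).shape)  # (960, 1280)
    ```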

  15. Multiscale integration schemes for jump-diffusion systems

    Energy Technology Data Exchange (ETDEWEB)

    Givon, D.; Kevrekidis, I.G.

    2008-12-09

    We study a two-time-scale system of jump-diffusion stochastic differential equations. We analyze a class of multiscale integration methods for these systems, which, in the spirit of [1], consist of a hybridization between a standard solver for the slow components and short runs for the fast dynamics, which are used to estimate the effect that the fast components have on the slow ones. We obtain explicit bounds for the discrepancy between the results of the multiscale integration method and the slow components of the original system.
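
    The generic form of such a two-time-scale system is (our notation; \epsilon is the scale-separation parameter):

    dX = f(X, Y)\, dt, \qquad dY = \frac{1}{\epsilon}\, g(X, Y)\, dt + \frac{1}{\sqrt{\epsilon}}\, \sigma(X, Y)\, dW + \text{jumps driven by a Poisson random measure of } O(1/\epsilon) \text{ intensity};

    the hybrid scheme advances the slow component X with a macro-solver while estimating the effective drift \bar{f}(x) = \int f(x, y)\, \mu_x(dy), the average against the fast process's invariant measure \mu_x, from short bursts of simulation of Y.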

  16. A comparison of non-integrating reprogramming methods

    Science.gov (United States)

    Schlaeger, Thorsten M; Daheron, Laurence; Brickler, Thomas R; Entwisle, Samuel; Chan, Karrie; Cianci, Amelia; DeVine, Alexander; Ettenger, Andrew; Fitzgerald, Kelly; Godfrey, Michelle; Gupta, Dipti; McPherson, Jade; Malwadkar, Prerana; Gupta, Manav; Bell, Blair; Doi, Akiko; Jung, Namyoung; Li, Xin; Lynes, Maureen S; Brookes, Emily; Cherry, Anne B C; Demirbas, Didem; Tsankov, Alexander M; Zon, Leonard I; Rubin, Lee L; Feinberg, Andrew P; Meissner, Alexander; Cowan, Chad A; Daley, George Q

    2015-01-01

    Human induced pluripotent stem cells (hiPSCs) [1-3] are useful in disease modeling and drug discovery, and they promise to provide a new generation of cell-based therapeutics. To date there has been no systematic evaluation of the most widely used techniques for generating integration-free hiPSCs. Here we compare Sendai-viral (SeV) [4], episomal (Epi) [5] and mRNA transfection [6] methods using a number of criteria. All methods generated high-quality hiPSCs, but significant differences existed in aneuploidy rates, reprogramming efficiency, reliability and workload. We discuss the advantages and shortcomings of each approach, and present and review the results of a survey of a large number of human reprogramming laboratories on their independent experiences and preferences. Our analysis provides a valuable resource to inform the use of specific reprogramming methods for different laboratories and different applications, including clinical translation. PMID:25437882

  17. Photometric method for determination of acidity constants through integral spectra analysis.

    Science.gov (United States)

    Zevatskiy, Yuriy Eduardovich; Ruzanov, Daniil Olegovich; Samoylov, Denis Vladimirovich

    2015-04-15

    An express method for determination of acidity constants of organic acids, based on the analysis of the integral transmittance vs. pH dependence, is developed. The integral value is registered as a photocurrent of a photometric device simultaneously with potentiometric titration. The proposed method allows pKa to be obtained using only simple and low-cost instrumentation. The optical part of the experimental setup has been optimized by excluding the monochromator. As a result, it takes only 10-15 min to obtain one pKa value, with an absolute error of less than 0.15 pH units. Application limitations and reliability of the method have been tested for a series of organic acids of various nature. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Social network extraction based on Web: 3. the integrated superficial method

    Science.gov (United States)

    Nasution, M. K. M.; Sitompul, O. S.; Noah, S. A.

    2018-03-01

    The Web as a source of information has become part of social behavior information. Although it involves only the limited information disclosed by search engines, in the form of hit counts, snippets, and URL addresses of web pages, the integrated extraction method produces a social network that is not only trusted but enriched. Non-integrated extraction methods may produce social networks without explanation, with poor supplemental information, or laden with surmise, and consequently unrepresentative social structures. In addition to generating the core social network, the integrated superficial method also generates an expanded network that reaches the full scope of relation clues, i.e., a number of edges computationally close to n(n - 1)/2 for n social actors.
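
    One common way to turn hit counts into an edge weight between two actors is a normalized co-occurrence measure in the spirit of the Normalized Google Distance; the sketch below assumes a hypothetical hit_count() wrapper and made-up counts, and is not this paper's exact measure:

    ```python
    import math

    def tie_strength(fx, fy, fxy, n_docs):
        """Edge weight from hit counts: f(x), f(y), f(x AND y), and index size.

        Uses the Normalized Google Distance and maps it to a (0, 1] weight.
        """
        lx, ly, lxy = math.log(fx), math.log(fy), math.log(fxy)
        ngd = (max(lx, ly) - lxy) / (math.log(n_docs) - min(lx, ly))
        return math.exp(-2.0 * ngd)

    # hit_count() would wrap a search engine API; the counts here are made up.
    print(f"edge weight = {tie_strength(120_000, 45_000, 8_000, 5_000_000_000):.3f}")
    ```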

  19. Understanding hydraulic fracturing: a multi-scale problem

    Science.gov (United States)

    Hyman, J. D.; Jiménez-Martínez, J.; Viswanathan, H. S.; Carey, J. W.; Porter, M. L.; Rougier, E.; Karra, S.; Kang, Q.; Frash, L.; Chen, L.; Lei, Z.; O’Malley, D.; Makedonska, N.

    2016-01-01

    Despite the impact that hydraulic fracturing has had on the energy sector, the physical mechanisms that control its efficiency and environmental impacts remain poorly understood, in part because the length scales involved range from nanometres to kilometres. We characterize flow and transport in shale formations across and between these scales using integrated computational, theoretical and experimental methods. At the field scale, we use discrete fracture network modelling to simulate production of a hydraulically fractured well from a fracture network that is based on the site characterization of a shale gas reservoir. At the core scale, we use triaxial fracture experiments and a finite-discrete element model to study dynamic fracture/crack propagation in low permeability shale. We use lattice Boltzmann pore-scale simulations and microfluidic experiments in both synthetic and shale rock micromodels to study pore-scale flow and transport phenomena, including multi-phase flow and fluids mixing. A mechanistic description and integration of these multiple scales is required for accurate predictions of production and the eventual optimization of hydrocarbon extraction from unconventional reservoirs. Finally, we discuss the potential of CO2 as an alternative working fluid, both in fracturing and re-stimulating activities, beyond its environmental advantages. This article is part of the themed issue ‘Energy and the subsurface’. PMID:27597789

  20. Measurement of definite integral of sinusoidal signal absolute value third power using digital stochastic method

    Directory of Open Access Journals (Sweden)

    Beljić Željko

    2017-01-01

    Full Text Available In this paper a special case of digital stochastic measurement, the definite integral of the third power of a sinusoidal signal's absolute value using 2-bit AD converters, is presented. This case of the digital stochastic method emerged from the need to measure wind power and energy, which are proportional to the third power of wind speed. The anemometer output signal is sinusoidal, and the integral of the third power of a sinusoidal signal over a whole period is zero. Two approaches are therefore proposed for calculating the third power of the wind-speed signal. The first approach uses the absolute value of the sinusoidal signal (before AD conversion), for which no change of the multiplier hardware is needed. The second approach requires a small change of the multiplier hardware, but the input signal remains unchanged; for this approach, a minimal hardware change was made to calculate the absolute value of the result after AD conversion. Simulations have confirmed the theoretical analysis. The expected precision of wind energy measurement of the proposed device is better than 0.00051% of full scale. [Project of the Serbian Ministry of Education, Science and Technological Development, Grant no. TR32019]
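
    A quick numerical check of the statement that motivates the two approaches, namely that the integral of the third power of a sine vanishes over a whole period while that of its absolute value does not, is given below (illustrative only, not the measurement method itself):

    ```python
    import numpy as np

    t = np.linspace(0.0, 2.0 * np.pi, 200001)
    s = np.sin(t)
    # (use np.trapz instead of np.trapezoid on NumPy < 2.0)
    i_signed = np.trapezoid(s**3, t)           # ~ 0 over a whole period
    i_abs = np.trapezoid(np.abs(s)**3, t)      # analytic value: 8/3 = 2.6667
    print(f"integral of sin^3: {i_signed:.2e}, integral of |sin|^3: {i_abs:.4f}")
    ```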

  1. Field Method for Integrating the First Order Differential Equation

    Institute of Scientific and Technical Information of China (English)

    JIA Li-qun; ZHENG Shi-wang; ZHANG Yao-yu

    2007-01-01

    An important modern method in analytical mechanics for finding integrals, called the field method, is used to study the solution of first-order differential equations. First, by introducing an intermediate variable, a more complicated first-order differential equation can be expressed by two simple first-order differential equations; the field method of analytical mechanics is then introduced to solve these two equations. The conclusion shows that the field method of analytical mechanics can be fully used to find the solutions of a first-order differential equation, and a new method for finding such solutions is thus provided.

  2. Phase-integral method allowing nearlying transition points

    CERN Document Server

    Fröman, Nanny

    1996-01-01

    The efficiency of the phase-integral method developed by the present authors has been shown both analytically and numerically in many publications. With the inclusion of supplementary quantities, closely related to new Stokes constants and obtained with the aid of comparison equation technique, important classes of problems in which transition points may approach each other become accessible to accurate analytical treatment. The exposition in this monograph is of a mathematical nature but has important physical applications, some examples of which are found in the adjoined papers. Thus, we would like to emphasize that, although we aim at mathematical rigor, our treatment is made primarily with physical needs in mind. To introduce the reader into the background of this book, we start by describing the phase-integral approximation of arbitrary order generated from an unspecified base function. This is done in Chapter 1, which is reprinted, after minor changes, from a review article. Chapter 2 is the re...

  3. Connotations of pixel-based scale effect in remote sensing and the modified fractal-based analysis method

    Science.gov (United States)

    Feng, Guixiang; Ming, Dongping; Wang, Min; Yang, Jianyu

    2017-06-01

    Scale problems are a major source of concern in the field of remote sensing. Since remote sensing is a complex technology system, there is a lack of sufficient understanding of the connotations of scale and scale effect in remote sensing. Thus, this paper first introduces the connotations of pixel-based scale and summarizes the general understanding of the pixel-based scale effect. Pixel-based scale effect analysis is essential for choosing appropriate remote sensing data and proper processing parameters. Fractal dimension is a useful measurement for analysing pixel-based scale. However, traditional fractal dimension calculation does not consider the impact of spatial resolution, so the change of the scale effect with spatial resolution cannot be clearly reflected. Therefore, this paper proposes to use spatial resolution as the modified scale parameter of two fractal methods to further analyze the pixel-based scale effect. To evaluate the results of the two modified methods (MFBM, the Modified Windowed Fractal Brownian Motion based on surface area, and MDBM, the Modified Windowed Double Blanket Method), the existing scale effect analysis method (the information entropy method) is used. Six sub-regions of building areas and farmland areas were cut out from QuickBird images to be used as the experimental data. The results of the experiment show that both the fractal dimension and the information entropy present the same trend with decreasing spatial resolution, and some inflection points appear at the same feature scales. Further analysis shows that these feature scales (corresponding to the inflection points) are related to the actual sizes of the geo-objects, which results in fewer mixed pixels in the image, and these inflection points are significantly indicative of the observed features. Therefore, the experimental results indicate that the modified fractal methods are effective in reflecting the pixel-based scale effect existing in remote sensing.

  4. A Study of Spectral Integration and Normalization in NMR-based Metabonomic Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Webb-Robertson, Bobbie-Jo M.; Lowry, David F.; Jarman, Kristin H.; Harbo, Sam J.; Meng, Quanxin; Fuciarelli, Alfred F.; Pounds, Joel G.; Lee, Monica T.

    2005-09-15

    Metabonomics involves the quantitation of the dynamic multivariate metabolic response of an organism to a pathological event or genetic modification (Nicholson, Lindon and Holmes, 1999). The analysis of these data involves the use of appropriate multivariate statistical methods. Exploratory data analysis (EDA) linear projection methods, primarily Principal Component Analysis (PCA), have been documented as a valuable pattern recognition technique for 1H NMR spectral data (Brindle et al., 2002; Potts et al., 2001; Robertson et al., 2000; Robosky et al., 2002). Prior to PCA, the raw data are typically processed through four steps: (1) baseline correction, (2) endogenous peak removal, (3) integration over spectral regions to reduce the number of variables, and (4) normalization. The effect of the size of the spectral integration regions and of the normalization has not been well studied. We assess the variability structure and classification accuracy of two distinctly different datasets via PCA and a leave-one-out cross-validation approach, under two normalization approaches and an array of spectral integration regions. This study indicates that, independent of the normalization method, the classification accuracy achieved in metabonomic studies is not highly sensitive to the size of the spectral integration region. Additionally, for both datasets, data scaled to zero mean and unit variance (auto-scaled) show higher variability in classification accuracy across spectral integration window widths than data scaled to the total intensity of the spectrum.
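
    The preprocessing chain described above is easy to prototype. The sketch below bins a set of mock spectra into fixed-width integration regions, applies the two normalizations discussed (total-intensity scaling and auto-scaling), and compares PCA variance capture; all sizes and data are invented for illustration.

        # Spectral integration (binning), two normalizations, then PCA.
        import numpy as np
        from sklearn.decomposition import PCA

        def bin_spectrum(spectrum, bin_width):
            """Integrate the spectrum over consecutive regions of bin_width points."""
            n = len(spectrum) // bin_width * bin_width
            return spectrum[:n].reshape(-1, bin_width).sum(axis=1)

        rng = np.random.default_rng(1)
        spectra = rng.random((20, 4096))             # 20 mock 1H NMR spectra
        binned = np.array([bin_spectrum(s, 64) for s in spectra])

        # Normalization A: scale each spectrum to its total intensity
        total_scaled = binned / binned.sum(axis=1, keepdims=True)
        # Normalization B: auto-scale each variable to zero mean, unit variance
        auto_scaled = (binned - binned.mean(axis=0)) / binned.std(axis=0)

        for name, X in [("total-intensity", total_scaled), ("auto-scaled", auto_scaled)]:
            ratio = PCA(n_components=2).fit(X).explained_variance_ratio_.sum()
            print(f"{name}: PC1+PC2 explain {ratio:.1%} of variance")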

  5. Mixed Element Formulation for the Finite Element-Boundary Integral Method

    National Research Council Canada - National Science Library

    Meese, J; Kempel, L. C; Schneider, S. W

    2006-01-01

    A mixed element approach using right hexahedral elements and right prism elements for the finite element-boundary integral method is presented and discussed for the study of planar cavity-backed antennas...

  6. TL glow ratios at different temperature intervals of integration in thermoluminescence method. Comparison of Japanese standard (MHLW notified) method with CEN standard methods

    International Nuclear Information System (INIS)

    Todoriki, Setsuko; Saito, Kimie; Tsujimoto, Yuka

    2008-01-01

    The effect of the temperature interval over which TL intensities are integrated on the TL glow ratio was examined by comparing the notified method of the Ministry of Health, Labour and Welfare (MHLW method) with EN 1788. Two un-irradiated geological standard rocks and three spices (black pepper, turmeric, and oregano) irradiated at 0.3 kGy or 1.0 kGy were subjected to TL analysis. Although the TL glow ratio of the andesite exceeded 0.1 when calculated by the MHLW notified method (integration interval: 70-490 °C), the maxima of the first glow curves were observed at 300 °C or above, which was attributed to the influence of natural radioactivity and could thus be distinguished from food irradiation. When the integration interval was set to 166-227 °C according to EN 1788, the TL glow ratios became markedly smaller than 0.1, and the evaluation of the un-irradiated samples became clearer. For the spices, the TL glow ratios obtained by the MHLW notified method fell below 0.1 for un-irradiated samples and exceeded 0.1 for irradiated ones. Moreover, the Glow 1 maximum temperatures of the irradiated samples were observed in the range 168-196 °C, whereas those of un-irradiated samples were 258 °C or higher. Therefore, all samples were correctly judged by the criteria of the MHLW method. However, with the integration temperature range defined by EN 1788, the TL glow ratios of un-irradiated samples became markedly smaller than with the MHLW method, and the discrimination of irradiated from un-irradiated samples became clearer. (author)
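
    The quantity under discussion is straightforward to compute: the TL glow ratio is the first-glow intensity integrated over a temperature window, divided by the second (re-irradiated) glow integrated over the same window. The sketch below uses synthetic Gaussian glow curves purely to show how the window choice (70-490 °C versus 166-227 °C) changes the ratio.

        # TL glow ratio over a chosen integration temperature interval.
        import numpy as np

        T = np.linspace(50, 500, 901)                    # temperature grid, deg C
        glow1 = 100 * np.exp(-((T - 190) / 25) ** 2)     # first readout (synthetic)
        glow2 = 800 * np.exp(-((T - 200) / 30) ** 2)     # re-irradiated readout

        def glow_ratio(t_low, t_high):
            m = (T >= t_low) & (T <= t_high)
            return np.trapz(glow1[m], T[m]) / np.trapz(glow2[m], T[m])

        print(f"MHLW window (70-490 deg C):     {glow_ratio(70, 490):.3f}")
        print(f"EN 1788 window (166-227 deg C): {glow_ratio(166, 227):.3f}")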

  7. Model correction factor method for reliability problems involving integrals of non-Gaussian random fields

    DEFF Research Database (Denmark)

    Franchin, P.; Ditlevsen, Ove Dalager; Kiureghian, Armen Der

    2002-01-01

    The model correction factor method (MCFM) is used in conjunction with the first-order reliability method (FORM) to solve structural reliability problems involving integrals of non-Gaussian random fields. The approach replaces the limit-state function with an idealized one, in which the integrals ...

  8. Integrated assessment and environmental policy making. In pursuit of usefulness

    International Nuclear Information System (INIS)

    Parson, E.A.

    1995-01-01

    Current integrated assessment projects primarily seek end-to-end integration through formal models at a national to global scale, and show three significant representational weaknesses: determinants of decadal-scale emissions trends; valuing impacts and adaptive response; and the formation and effects of policies. Meeting the needs of policy audiences may require other forms of integration; may require integration by formal modeling or by other means; and may require representing the decisions of other actors through political and negotiating processes. While rational global environmental policy making requires integrated assessment, current practice admits no single vision of how to do it, so understanding will be best advanced by a diverse collection of projects pursuing distinct methods and approaches. Further practice may yield some consensus on best practice, possibly including generic assessment skills generalizable across issues. (Author)

  9. Development and Preliminary Validation of the Scale for Evaluation of Psychiatric Integrative and Continuous Care—Patient’s Version

    Directory of Open Access Journals (Sweden)

    Yuriy Ignatyev

    2017-08-01

    This pilot study aimed to develop and examine an instrument that integrates relevant aspects of cross-sectoral (in- and outpatient) mental health care, is simple to use, and shows satisfactory psychometric properties. The development of the scale comprised literature research, 14 focus groups and 12 interviews with patients and health care providers, item-pool generation, content validation by a scientific expert panel, and face validation by 90 patients. The preliminary scale was tested on 385 patients across seven German hospitals offering cross-sectoral mental health care (CSMHC) as part of their treatment program. Psychometric properties of the scale were evaluated using genuine and transformed data scoring. To check the reliability and postdictive validity of the scale, Cronbach’s α coefficient and multivariable linear regression were used. This development process led to an 18-item scale called the “Scale for Evaluation of Psychiatric Integrative and Continuous Care” (SEPICC), with two-point and five-point response options. The scale consists of two sections. The first section assesses the presence or absence of patients’ experiences with various components relevant to CSMHC, such as home treatment, flexibility of treatment switching, case management, continuity of care, cross-sectoral therapeutic groups, and multidisciplinary teams. The second section evaluates the patients’ opinions about these components. Using raw and transformed scoring produced comparable results; however, the data distribution under transformed scoring showed a smaller deviation from normality. For the overall scale, Cronbach’s α coefficient was 0.82. Self-reported experiences with relevant components of CSMHC were positively associated with the patients’ approval of these components. In conclusion, the new scale provides a good starting point for further validation and can be used as a tool to evaluate CSMHC.
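
    The reliability figure quoted above is a standard computation. Below is a small sketch of Cronbach's α for an 18-item scale on a mock item-response matrix; the single-latent-trait data generator is invented so that the code runs standalone.

        # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
        import numpy as np

        def cronbach_alpha(items):
            """items: (n_respondents, n_items) array of item scores."""
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        rng = np.random.default_rng(2)
        trait = rng.normal(size=(385, 1))                # 385 mock patients
        responses = trait + rng.normal(scale=1.0, size=(385, 18))
        print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")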

  10. Hypoxia-elicited impairment of cell wall integrity, glycosylation precursor synthesis, and growth in scaled-up high-cell density fed-batch cultures of Saccharomyces cerevisiae.

    Science.gov (United States)

    Aon, Juan C; Sun, Jianxin; Leighton, Julie M; Appelbaum, Edward R

    2016-08-15

    In this study we examine the integrity of the cell wall during scale-up of a yeast fermentation process from laboratory scale (10 L) to industrial scale (10,000 L). In a previous study we observed a clear difference between these scales in the volume fraction occupied by yeast cells, as revealed by wet cell weight (WCW) measurements. That study also included metabolite analysis which suggested hypoxia during scale-up. Here we hypothesize that hypoxia weakens the yeast cell wall during scale-up, leading to changes in cell permeability and/or cell mechanical resistance, which in turn may lead to the observed difference in WCW. We tested cell wall integrity by probing the cell wall's sensitivity to Zymolyase. Exometabolomics data also showed changes in the supply of precursors for the glycosylation pathway. The results show a more sensitive cell wall later in the production process at industrial scale, while the sensitivity at early time points was similar at both scales. We also report exometabolomics data indicating, in particular, a link with the protein glycosylation pathway. Significantly lower levels of Man6P and progressively higher GDP-mannose indicated partially impaired incorporation of this sugar nucleotide during co- or post-translational protein glycosylation at the 10,000 L scale compared to the 10 L scale. This impairment in glycosylation would be expected to affect cell wall integrity. Although cell viability in samples obtained at both scales was similar, cells harvested from the 10 L bioreactors were able to re-initiate growth in fresh shake-flask media faster than those harvested at industrial scale. The results obtained help explain the WCW differences observed between the scales by hypoxia-triggered weakening of the yeast cell wall during scale-up.

  11. Ultra-Fine Scale Spatially-Integrated Mapping of Habitat and Occupancy Using Structure-From-Motion.

    Directory of Open Access Journals (Sweden)

    Philip McDowall

    Organisms respond to and often simultaneously modify their environment. While these interactions are apparent at the landscape extent, the driving mechanisms often occur at very fine spatial scales. Structure-from-Motion (SfM), a computer vision technique, allows the simultaneous mapping of organisms and fine-scale habitat, and will greatly improve our understanding of habitat suitability, ecophysiology, and the bi-directional relationship between geomorphology and habitat use. SfM can be used to create high-resolution (centimeter-scale) three-dimensional (3D) habitat models at low cost. These models can capture the abiotic conditions formed by terrain and simultaneously record the position of individual organisms within that terrain. While coloniality is common in seabird species, we have a poor understanding of the extent to which dense breeding aggregations are driven by fine-scale active aggregation or by limited suitable habitat. We demonstrate the use of SfM for fine-scale habitat suitability analysis by reconstructing the locations of nests in a gentoo penguin colony and fitting models that explicitly account for conspecific attraction. The resulting digital elevation models (DEMs) are used as covariates in an inhomogeneous hybrid point process model. We find that gentoo penguin nest site selection is a function of the topography of the landscape, but that nests are far more aggregated than would be expected based on terrain alone, suggesting a strong role of behavioral aggregation in driving coloniality in this species. This integrated mapping of organisms and fine-scale habitat will greatly improve our understanding of fine-scale habitat suitability, ecophysiology, and the complex bi-directional relationship between geomorphology and habitat use.

  12. A block-iterative nodal integral method for forced convection problems

    International Nuclear Information System (INIS)

    Decker, W.J.; Dorning, J.J.

    1992-01-01

    A new efficient iterative nodal integral method for the time-dependent two- and three-dimensional incompressible Navier-Stokes equations has been developed. Using the approach introduced by Azmy and Dorning to develop nodal methods with high accuracy on coarse spatial grids for two-dimensional steady-state problems, and extended to coarse two-dimensional space-time grids by Wilson et al. for thermal convection problems, we have developed a new iterative nodal integral method for the time-dependent Navier-Stokes equations for mechanically forced convection. A new, extremely efficient block-iterative scheme is employed to invert the Jacobian within each of the Newton-Raphson iterations used to solve the final nonlinear discrete-variable equations. By taking advantage of the special structure of the Jacobian, this scheme greatly reduces memory requirements. The accuracy of the overall method is illustrated by applying it to the time-dependent version of the classic two-dimensional driven cavity problem of computational fluid dynamics.
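
    The linear-algebra idea behind the block-iterative scheme can be shown generically: when the Jacobian has (or is approximated by) block-diagonal structure, each Newton-Raphson update is obtained by solving the small blocks independently rather than one large dense system. The sketch below shows only this skeleton on a toy nonlinear system, not the nodal integral discretization itself.

        # Newton-Raphson with a block-diagonal Jacobian inverted block by block.
        import numpy as np

        def newton_block(f, jac_blocks, x0, tol=1e-10, max_iter=50):
            """Solve f(x) = 0; jac_blocks(x) returns the diagonal blocks."""
            x = x0.astype(float).copy()
            for _ in range(max_iter):
                r = f(x)
                if np.linalg.norm(r) < tol:
                    break
                delta = np.empty_like(x)
                i = 0
                for B in jac_blocks(x):      # invert each small block separately
                    n = B.shape[0]
                    delta[i:i + n] = np.linalg.solve(B, -r[i:i + n])
                    i += n
                x += delta
            return x

        # Two decoupled 2x2 nonlinear problems stacked into one unknown vector
        def f(x):
            return np.array([x[0]**2 + x[1] - 3, x[0] + x[1]**2 - 3,
                             x[2] - x[3], x[2] * x[3] - 4])

        def jac_blocks(x):
            return [np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]]),
                    np.array([[1.0, -1.0], [x[3], x[2]]])]

        print(newton_block(f, jac_blocks, np.ones(4)))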

  13. The philosophy and method of integrative humanism and religious ...

    African Journals Online (AJOL)

    This paper, titled “Philosophy and Method of Integrative Humanism and Religious Crises in Nigeria: Picking the Essentials”, acknowledges the damaging effects of religious bigotry, fanaticism and creed differences on the social, political and economic development of the country. The need for the cessation of religious ...

  14. Passive technologies for future large-scale photonic integrated circuits on silicon: polarization handling, light non-reciprocity and loss reduction

    Directory of Open Access Journals (Sweden)

    Daoxin Dai

    2012-03-01

    Silicon-based large-scale photonic integrated circuits are becoming important, due to the need for higher complexity and lower cost in optical transmitters, receivers and optical buffers. In this paper, passive technologies for large-scale photonic integrated circuits are described, including polarization handling, light non-reciprocity and loss reduction. The design rule for polarization beam splitters based on asymmetrical directional couplers is summarized, and several novel designs for ultra-short polarization beam splitters are reviewed. A novel concept for realizing a polarization splitter-rotator with a very simple fabrication process is presented. The realization of silicon-based light non-reciprocity devices (e.g., optical isolators), which are very important for transmitters to avoid sensitivity to reflections, is also demonstrated with the help of magneto-optical material and bonding technology. Low-loss waveguides are another important technology for large-scale photonic integrated circuits. Ultra-low-loss optical waveguides are achieved by designing a Si3N4 core with a very high aspect ratio. The loss is reduced further to <0.1 dB m−1 with an improved fabrication process incorporating a high-quality thermal oxide upper cladding by means of wafer bonding. With the developed ultra-low-loss Si3N4 optical waveguides, several devices are also demonstrated, including ultra-high-Q ring resonators, low-loss arrayed-waveguide grating (de)multiplexers, and high-extinction-ratio polarizers.

  15. Optimizing some 3-stage W-methods for the time integration of PDEs

    Science.gov (United States)

    Gonzalez-Pinto, S.; Hernandez-Abreu, D.; Perez-Rodriguez, S.

    2017-07-01

    The optimization of some W-methods for the time integration of time-dependent PDEs in several spatial variables is considered. In [2, Theorem 1], several three-parametric families of three-stage W-methods for the integration of IVPs in ODEs were studied. In addition, the optimization of several specific methods for PDEs when Approximate Matrix Factorization (AMF) splitting is used to define the approximate Jacobian matrix (W ≈ f_y(y_n)) was carried out, and some convergence and stability properties were presented [2]. The derived methods were optimized on the basis that the underlying explicit Runge-Kutta method is the one with the largest monotonicity interval among the three-stage, order-three Runge-Kutta methods [1]. Here, we propose an optimization of the methods by imposing an additional order condition [7] to keep order three for parabolic PDE problems [6], at the price of substantially reducing the length of the nonlinear monotonicity interval of the underlying explicit Runge-Kutta method.
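
    For orientation, the simplest member of this class is the one-stage linearly implicit W-method: solve (I - γhW)k = f(y_n) and set y_{n+1} = y_n + hk, where W need only approximate the Jacobian (as with AMF). The sketch below applies it to a stiff linear test problem with a deliberately crude diagonal W; it illustrates the class, not the optimized three-stage schemes of the paper.

        # One-stage W-method (linearly implicit Euler with approximate Jacobian).
        import numpy as np

        def w_method_step(f, W, y, h, gamma=1.0):
            n = len(y)
            k = np.linalg.solve(np.eye(n) - gamma * h * W, f(y))
            return y + h * k

        A = np.array([[-100.0, 1.0], [0.0, -2.0]])   # stiff linear problem y' = A y
        W = np.diag(np.diag(A))                      # cheap AMF-like approximation
        f = lambda y: A @ y

        y = np.array([1.0, 1.0])
        for _ in range(10):
            y = w_method_step(f, W, y, h=0.1)
        print(y)   # decays toward zero despite the crude Jacobian approximation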

  16. A consistent, differential versus integral, method for measuring the delayed neutron yield in fissions

    International Nuclear Information System (INIS)

    Flip, A.; Pang, H.F.; D'Angelo, A.

    1995-01-01

    Due to the persistent uncertainty of ∼5% (here and hereafter, uncertainties are at 1σ) in the prediction of the 'reactivity scale' (β_eff) for a fast power reactor, an international project was recently initiated in the framework of the OECD/NEA activities for the re-evaluation, new measurement and integral benchmarking of delayed neutron (DN) data and related kinetic parameters (principally β_eff). Considering that the major part of this uncertainty is due to uncertainties in the DN yields (v_d), and the difficulty of further improving the precision of differential measurements (e.g. Keepin's method), an international cooperative strategy was adopted aiming at extracting and consistently interpreting information from both differential (nuclear) and integral (in-reactor) measurements. The main problem arises on the integral side; thus the idea was to perform β_eff-like measurements (both deterministic and noise) in 'clean' assemblies. The 'clean' calculational context permitted the authors to develop a theory that links this integral experimental level explicitly with the differential one, via a unified 'Master Model' which relates v_d and measurable quantities (on both levels) linearly. The combined error analysis is consequently greatly simplified and the final uncertainty drastically reduced (theoretically, by a factor of √3). On the other hand, the same theoretical development leading to the 'Master Model' also resulted in a structured scheme of approximations of the general (stochastic) Boltzmann equation, allowing a consistent analysis of the large range of measurements concerned (stochastic, dynamic, static, ...). This paper focuses on the main results of this theoretical development and its application to the analysis of the preliminary results of the BERENICE program (β_eff measurements in MASURCA, the first assembly, at Cadarache, France).

  17. Improving Allergen Prediction in Main Crops Using a Weighted Integrative Method.

    Science.gov (United States)

    Li, Jing; Wang, Jing; Li, Jing

    2017-12-01

    As a public health problem, food allergy is frequently caused by food allergen proteins, which trigger a type-I hypersensitivity reaction in the immune system of atopic individuals. The food allergens in our daily lives come mainly from crops, including rice, wheat, soybean and maize. However, the allergens in these main crops are far from fully characterized. Although some bioinformatics tools or methods for predicting the potential allergenicity of proteins have been proposed, each method has its limitations. In this paper, we built a novel algorithm, PREAL_W, which integrates PREAL, the FAO/WHO criteria and a motif-based method through a weighted average score, to combine the advantages of the different methods. Our results illustrate that PREAL_W performs significantly better in crop allergen prediction. This integrative allergen prediction algorithm could be useful for critical food safety matters. PREAL_W can be accessed at http://lilab.life.sjtu.edu.cn:8080/prealw .
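
    The weighted-average integration itself is simple to express; the sketch below combines three per-protein scores with fixed weights and thresholds the result. The scores, weights and threshold are invented placeholders, and the published PREAL_W weighting is not reproduced here.

        # Weighted-average ensemble of three allergenicity predictors (illustrative).
        def weighted_allergen_score(scores, weights=(0.5, 0.3, 0.2)):
            """scores: (preal, fao_who, motif), each normalized to [0, 1]."""
            return sum(w * s for w, s in zip(weights, scores))

        candidates = {
            "protein_A": (0.91, 1.0, 0.75),
            "protein_B": (0.22, 0.0, 0.10),
        }
        for name, s in candidates.items():
            score = weighted_allergen_score(s)
            label = "allergen" if score > 0.5 else "non-allergen"
            print(f"{name}: {score:.2f} -> {label}")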

  18. Integrating experimental and simulation length and time scales in mechanistic studies of friction

    International Nuclear Information System (INIS)

    Sawyer, W G; Perry, S S; Phillpot, S R; Sinnott, S B

    2008-01-01

    Friction is ubiquitous in all aspects of everyday life and has consequently been under study for centuries. Classical theories of friction have been developed and used to successfully solve numerous tribological problems. However, modern applications that involve advanced materials operating under extreme environments can lead to situations where classical theories of friction are insufficient to describe the physical responses of sliding interfaces. Here, we review integrated experimental and computational studies of atomic-scale friction and wear at solid-solid interfaces across length and time scales. The influence of structural orientation in the case of carbon nanotube bundles, and molecular orientation in the case of polymer films of polytetrafluoroethylene and polyethylene, on friction and wear are discussed. In addition, while friction in solids is generally considered to be athermal, under certain conditions thermally activated friction is observed for polymers, carbon nanotubes and graphite. The conditions under which these transitions occur, and their proposed origins, are discussed. Lastly, a discussion of future directions is presented

  19. Assessment of the methods for determining net radiation at different time-scales of meteorological variables

    Directory of Open Access Journals (Sweden)

    Ni An

    2017-04-01

    When modeling the soil/atmosphere interaction, it is of paramount importance to determine the net radiation flux. There are two common calculation methods for this purpose: Method 1 relies on the use of air temperature alone, while Method 2 relies on both air and soil temperatures. To date there is no consensus on the application of these two methods. In this study, half-hourly solar radiation data recorded at an experimental embankment are used to calculate the net radiation and long-wave radiation at different time-scales (half-hourly, hourly, and daily) using the two methods. The results show that, compared with Method 2, which has been widely adopted in agronomical, geotechnical and geo-environmental applications, Method 1 is preferable for its simplicity and its accuracy at shorter time-scales. Moreover, at longer time-scales (daily, for instance), smaller variations of net radiation and long-wave radiation are obtained, meaning that detailed soil temperature variations cannot be resolved. In other words, shorter time-scales are preferred when determining the net radiation flux.
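
    The contrast between the two formulations can be sketched with the Stefan-Boltzmann law: Method 1 builds the long-wave balance from air temperature alone, while Method 2 uses air and soil surface temperatures separately. The emissivities, albedo and temperatures below are illustrative values, not the paper's parameterization.

        # Two net-radiation estimates (W m-2); illustrative coefficients only.
        SIGMA = 5.67e-8                  # Stefan-Boltzmann constant, W m-2 K-4
        ALBEDO = 0.23
        EPS_AIR, EPS_SOIL = 0.85, 0.95

        def net_radiation_method1(rs_down, t_air_k):
            """Method 1: long-wave terms estimated from air temperature alone."""
            lw_net = (EPS_AIR - EPS_SOIL) * SIGMA * t_air_k**4
            return (1 - ALBEDO) * rs_down + lw_net

        def net_radiation_method2(rs_down, t_air_k, t_soil_k):
            """Method 2: downward term from air, upward term from soil surface."""
            lw_net = EPS_AIR * SIGMA * t_air_k**4 - EPS_SOIL * SIGMA * t_soil_k**4
            return (1 - ALBEDO) * rs_down + lw_net

        print(net_radiation_method1(600.0, 293.15))          # air at 20 deg C
        print(net_radiation_method2(600.0, 293.15, 303.15))  # soil at 30 deg C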

  20. INTEGRATED SENSOR EVALUATION CIRCUIT AND METHOD FOR OPERATING SAID CIRCUIT

    OpenAIRE

    Krüger, Jens; Gausa, Dominik

    2015-01-01

    WO15090426A1: Integrated sensor evaluation circuit for evaluating a sensor signal (14) received from a sensor (12), having a first connection (28a) and a second connection (28b) for connection to the sensor. The integrated sensor evaluation circuit comprises a configuration data memory (16) for storing configuration data which describe the signal properties of a plurality of sensor control signals (26a-c). T...