WorldWideScience

Sample records for integrated mixed interpolation

  1. Integration and interpolation of sampled waveforms

    International Nuclear Information System (INIS)

    Stearns, S.D.

    1978-01-01

    Methods for integrating, interpolating, and improving the signal-to-noise ratio of digitized waveforms are discussed with regard to seismic data from underground tests. The frequency-domain integration method and the digital interpolation method of Schafer and Rabiner are described and demonstrated using test data. The use of bandpass filtering for noise reduction is also demonstrated. With these methods, a backlog of seismic test data has been successfully processed
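
    The frequency-domain integration method mentioned in this record amounts to dividing each Fourier coefficient by jω and transforming back. A minimal sketch of that idea (a naive O(N²) DFT in pure Python for clarity; real seismic processing would use an FFT and treat the DC bin and noise bands explicitly):

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (O(N^2), for illustration only)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Naive inverse DFT."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def integrate_freq_domain(x, dt):
    """Integrate a zero-mean periodic waveform by dividing its spectrum by j*omega."""
    N = len(x)
    X = dft(x)
    Y = [0j] * N
    for k in range(1, N):
        # map the bin index to a signed angular frequency
        kk = k if k <= N // 2 else k - N
        omega = 2 * math.pi * kk / (N * dt)
        Y[k] = X[k] / (1j * omega)
    # the DC bin (k = 0) is left at zero: the constant of integration is arbitrary
    return [y.real for y in idft(Y)]

# demo: integrating cos(w t) should give sin(w t)/w
N, dt = 64, 0.1
w = 2 * math.pi * 3 / (N * dt)          # three cycles per record
x = [math.cos(w * n * dt) for n in range(N)]
y = integrate_freq_domain(x, dt)
expected = [math.sin(w * n * dt) / w for n in range(N)]
err = max(abs(a - b) for a, b in zip(y, expected))
```

    For a pure harmonic the result matches the analytic antiderivative to machine precision; on noisy records the division by ω amplifies low-frequency noise, which is one reason the record pairs integration with bandpass filtering.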

  2. An integral conservative gridding-algorithm using Hermitian curve interpolation.

    Science.gov (United States)

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-11-07

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is provided by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to
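
    The core idea, interpolating the cumulative integral rather than the data itself and then differencing at the new bin edges, can be sketched as follows. This is a simplified stand-in using plain cubic Hermite segments with finite-difference slopes; the paper's single tuning parameter for overshoot control is omitted:

```python
import bisect

def rebin_conservative(edges, contents, new_edges):
    """Re-bin histogrammed data by interpolating its cumulative integral
    with cubic Hermite segments and differencing at the new edges."""
    n = len(edges)
    # cumulative integral F at the old bin edges (bin contents are already integrals)
    F = [0.0]
    for c in contents:
        F.append(F[-1] + c)
    # three-point finite-difference slopes for the Hermite segments
    d = []
    for i in range(n):
        lo, hi = max(i - 1, 0), min(i + 1, n - 1)
        d.append((F[hi] - F[lo]) / (edges[hi] - edges[lo]))

    def F_at(x):
        i = min(max(bisect.bisect_right(edges, x) - 1, 0), n - 2)
        h = edges[i + 1] - edges[i]
        t = (x - edges[i]) / h
        h00 = (1 + 2 * t) * (1 - t) ** 2      # cubic Hermite basis functions
        h10 = t * (1 - t) ** 2
        h01 = t * t * (3 - 2 * t)
        h11 = t * t * (t - 1)
        return h00 * F[i] + h10 * h * d[i] + h01 * F[i + 1] + h11 * h * d[i + 1]

    Fn = [F_at(x) for x in new_edges]
    return [b - a for a, b in zip(Fn, Fn[1:])]

old_edges = [0.0, 1.0, 2.0, 3.0, 4.0]
old_contents = [1.0, 3.0, 2.0, 4.0]           # e.g. energy per bin
new_edges = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
new_contents = rebin_conservative(old_edges, old_contents, new_edges)
total_error = abs(sum(new_contents) - sum(old_contents))
```

    Because the interpolant passes exactly through the cumulative values at the old edges, the total integral (and the integral over any span of old edges) is conserved by construction, which is the property the record emphasizes.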

  3. An integral conservative gridding-algorithm using Hermitian curve interpolation

    International Nuclear Information System (INIS)

    Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K

    2008-01-01

    The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to

  4. Resolution enhancement in integral microscopy by physical interpolation.

    Science.gov (United States)

    Llavador, Anabel; Sánchez-Ortiga, Emilio; Barreiro, Juan Carlos; Saavedra, Genaro; Martínez-Corral, Manuel

    2015-08-01

    Integral-imaging technology has demonstrated its capability for computing depth images from the microimages recorded after a single shot. This capability has been shown in macroscopic imaging and also in microscopy. Although the possibility of refocusing different planes from one snapshot is crucial for the study of some biological processes, the main drawback of integral imaging is the substantial reduction of spatial resolution. In this contribution we report a technique that permits an increase in the two-dimensional spatial resolution of the computed depth images in integral microscopy by a factor of √2. This is achieved by a double-shot approach, carried out by means of a rotating glass plate that shifts the microimages in the sensor plane. We experimentally validate the resolution enhancement and show the benefit of applying the technique to biological specimens.

  5. A GPU-Accelerated Parameter Interpolation Thermodynamic Integration Free Energy Method.

    Science.gov (United States)

    Giese, Timothy J; York, Darrin M

    2018-03-13

    There has been a resurgence of interest in free energy methods motivated by the performance enhancements offered by molecular dynamics (MD) software written for specialized hardware, such as graphics processing units (GPUs). In this work, we exploit the properties of a parameter-interpolated thermodynamic integration (PI-TI) method to connect states by their molecular mechanical (MM) parameter values. This pathway is shown to be better behaved for Mg²⁺ → Ca²⁺ transformations than traditional linear alchemical pathways (with and without soft-core potentials). The PI-TI method has the practical advantage that no modification of the MD code is required to propagate the dynamics, and unlike with linear alchemical mixing, only one electrostatic evaluation is needed (e.g., a single call to particle-mesh Ewald), leading to better performance. In the case of AMBER, this enables all the performance benefits of GPU acceleration to be realized, in addition to unlocking the full spectrum of features available within the MD software, such as Hamiltonian replica exchange (HREM). The TI derivative evaluation can be accomplished efficiently in a post-processing step by reanalyzing the statistically independent trajectory frames in parallel for high throughput. We also show how one can evaluate the particle-mesh Ewald contribution to the TI derivative without needing to perform two reciprocal-space calculations. We apply the PI-TI method with HREM on GPUs in AMBER to predict pKa values in double-stranded RNA molecules and compare with experiment. Convergence to under 0.25 units for these systems required 100 ns or more of sampling per window and coupling of windows with HREM. We find that MM charges derived from ab initio QM/MM fragment calculations improve the agreement between calculation and experiment.
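
    The distinction between parameter interpolation and linear energy mixing can be sketched on a single Lennard-Jones pair. The parameter values below are invented for illustration (not AMBER's Mg²⁺/Ca²⁺ parameters), and the TI derivative is taken by finite difference, in the spirit of the post-processing step described above:

```python
def lj_energy(r, sigma, eps):
    """12-6 Lennard-Jones pair energy."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)

# hypothetical end-state parameters (NOT real force-field values)
P0 = {"sigma": 1.9, "eps": 0.90}
P1 = {"sigma": 2.7, "eps": 0.45}

def u_pi(r, lam):
    """Parameter-interpolated potential: interpolate the parameters and make
    ONE energy evaluation, instead of mixing two full energy evaluations."""
    sigma = (1.0 - lam) * P0["sigma"] + lam * P1["sigma"]
    eps = (1.0 - lam) * P0["eps"] + lam * P1["eps"]
    return lj_energy(r, sigma, eps)

def du_dlam(r, lam, h=1e-6):
    # TI integrand dU/dlambda, here by central finite difference
    return (u_pi(r, lam + h) - u_pi(r, lam - h)) / (2.0 * h)

# integrating dU/dlambda over lambda recovers the end-state energy difference
# (trapezoid rule at one fixed geometry, standing in for an ensemble average)
r = 3.0
m = 200
lams = [k / m for k in range(m + 1)]
vals = [du_dlam(r, lam) for lam in lams]
dA = sum((vals[k] + vals[k + 1]) / 2.0 for k in range(m)) / m
exact = u_pi(r, 1.0) - u_pi(r, 0.0)
```

    In a real simulation the integrand would be an ensemble average over trajectory frames at each λ window, but the key point survives in this sketch: only one potential evaluation per frame is needed, because the λ dependence lives in the parameters rather than in a sum of two end-state energies.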

  6. Numerical Feynman integrals with physically inspired interpolation: Faster convergence and significant reduction of computational cost

    Directory of Open Access Journals (Sweden)

    Nikesh S. Dattani

    2012-03-01

    One of the most successful methods for calculating reduced density operator dynamics in open quantum systems, one that can give numerically exact results, uses Feynman integrals. However, when simulating the dynamics for a given amount of time, the number of time steps that can realistically be used with this method is always limited; therefore one often obtains an approximation of the reduced density operator at a sparse grid of points in time. Instead of relying only on ad hoc interpolation methods (such as splines) to estimate the system density operator in between these points, I propose a method that uses physical information to assist with this interpolation. This method is tested on a physically significant system, on which its use allows important qualitative features of the density operator dynamics to be captured with as few as two time steps in the Feynman integral. This method allows for an enormous reduction in the amount of memory and CPU time required for approximating density operator dynamics within a desired accuracy. Since this method does not change the way the Feynman integral itself is calculated, the value of the density operator approximation at the points in time used to discretize the Feynman integral will be the same whether or not this method is used, but its approximation in between these points in time is considerably improved by this method. A list of ways in which this proposed method can be further improved is presented in the last section of the article.

  7. Perceptually informed synthesis of bandlimited classical waveforms using integrated polynomial interpolation.

    Science.gov (United States)

    Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan

    2012-01-01

    Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
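
    The idea of adding a polynomial correction around each discontinuity can be illustrated with the simplest member of this family, the two-sample "polyBLEP" residual (an integrated first-order B-spline correction; the paper's preferred corrector is the integrated third-order B-spline, which spans more samples and suppresses aliasing further):

```python
def polyblep(t, dt):
    """Polynomial residual of a bandlimited step around the phase wrap at t = 0
    (normalized phase t in [0, 1), transition width dt = f0 / fs)."""
    if t < dt:                     # just after the wrap
        t /= dt
        return t + t - t * t - 1.0
    if t > 1.0 - dt:               # just before the wrap
        t = (t - 1.0) / dt
        return t * t + t + t + 1.0
    return 0.0                     # away from the discontinuity: no correction

def saw_corrected(phase, dt):
    """Naive (aliasing) sawtooth in [-1, 1) minus the correction residual."""
    return 2.0 * phase - 1.0 - polyblep(phase, dt)

fs, f0 = 44100.0, 1000.0
dt = f0 / fs                       # phase increment per sample
phases = [(n * dt) % 1.0 for n in range(512)]
samples = [saw_corrected(p, dt) for p in phases]
```

    The correction touches only the samples around each discontinuity, smoothing the jump so that the waveform lands near the jump midpoint at the wrap, which is what removes most of the audible aliasing at negligible cost.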

  8. Mixed Waste Landfill Integrated Demonstration

    International Nuclear Information System (INIS)

    1994-02-01

    The mission of the Mixed Waste Landfill Integrated Demonstration (MWLID) is to demonstrate, in contaminated sites, new technologies for clean-up of chemical and mixed waste landfills that are representative of many sites throughout the DOE Complex and the nation. When implemented, these new technologies promise to characterize and remediate the contaminated landfill sites across the country that resulted from past waste disposal practices. Characterization and remediation technologies are aimed at making clean-up less expensive, safer, and more effective than current techniques. This will be done by emphasizing in-situ technologies. Most important, MWLID's success will be shared with other Federal, state, and local governments, and private companies that face the important task of waste site remediation. MWLID will demonstrate technologies at two existing landfills. Sandia National Laboratories' Chemical Waste Landfill received hazardous (chemical) waste from the Laboratory from 1962 to 1985, and the Mixed-Waste Landfill received hazardous and radioactive wastes (mixed wastes) over a twenty-nine-year period (1959-1988) from various Sandia nuclear research programs. Both landfills are now closed. Originally, however, the sites were selected because of Albuquerque's arid climate and the thick layer of alluvial deposits that overlie groundwater approximately 480 feet below the landfills. This thick layer of 'dry' soils, gravel, and clays promised to be a natural barrier between the landfills and groundwater

  9. Mixed waste integrated program: Logic diagram

    International Nuclear Information System (INIS)

    Mayberry, J.; Stelle, S.; O'Brien, M.; Rudin, M.; Ferguson, J.; McFee, J.

    1994-01-01

    The Mixed Waste Integrated Program Logic Diagram was developed to provide technical alternatives for mixed waste projects for the Office of Technology Development's Mixed Waste Integrated Program (MWIP). Technical solutions in the areas of characterization, treatment, and disposal were matched to a select number of US Department of Energy (DOE) treatability groups represented by waste streams found in the Mixed Waste Inventory Report (MWIR)

  10. Elastic-Plastic J-Integral Solutions for Surface Cracks in Tension Using an Interpolation Methodology

    Science.gov (United States)

    Allen, P. A.; Wells, D. N.

    2013-01-01

    No closed-form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 ≤ a/c ≤ 1; depth: 0.2 ≤ a/B ≤ 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 ≤ E/ys ≤ 1,000; hardening: 3 ≤ n ≤ 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
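
    The interpolation methodology can be sketched in miniature: tabulated solution curves at bracketing parameter values are blended pointwise to estimate the curve at an intermediate geometry. The table values below are invented placeholders, not the authors' 600-model database, and a real implementation would interpolate across all four variables (a/c, a/B, E/ys, n):

```python
def interp_curve(p, p_lo, p_hi, curve_lo, curve_hi):
    """Pointwise linear blend of two tabulated solution curves
    (e.g. J versus crack mouth opening displacement) for p_lo <= p <= p_hi."""
    w = (p - p_lo) / (p_hi - p_lo)
    return [(1.0 - w) * a + w * b for a, b in zip(curve_lo, curve_hi)]

# hypothetical J-versus-CMOD tables at two crack aspect ratios a/c
J_ac_02 = [0.0, 1.0, 2.4, 4.1, 6.3]     # a/c = 0.2 (placeholder numbers)
J_ac_04 = [0.0, 1.4, 3.0, 5.2, 8.0]     # a/c = 0.4 (placeholder numbers)

# estimated solution curve at the intermediate geometry a/c = 0.3
J_ac_03 = interp_curve(0.3, 0.2, 0.4, J_ac_02, J_ac_04)
```

    The payoff is the one the abstract describes: once the table is built from the expensive nonlinear analyses, an intermediate solution is a cheap lookup-and-blend rather than a new finite element model.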

  11. Mixed waste integrated program: Logic diagram

    Energy Technology Data Exchange (ETDEWEB)

    Mayberry, J.; Stelle, S. [Science Applications International Corp., Idaho Falls, ID (United States); O'Brien, M. [Univ. of Arizona, Tucson, AZ (United States); Rudin, M. [Univ. of Nevada, Las Vegas, NV (United States); Ferguson, J. [Lockheed Idaho Technologies Co., Idaho Falls, ID (United States); McFee, J. [I.T. Corp., Albuquerque, NM (United States)

    1994-11-30

    The Mixed Waste Integrated Program Logic Diagram was developed to provide technical alternatives for mixed waste projects for the Office of Technology Development's Mixed Waste Integrated Program (MWIP). Technical solutions in the areas of characterization, treatment, and disposal were matched to a select number of US Department of Energy (DOE) treatability groups represented by waste streams found in the Mixed Waste Inventory Report (MWIR).

  12. Multivariate interpolation

    Directory of Open Access Journals (Sweden)

    Pakhnutov I.A.

    2017-04-01

    The paper deals with iterative interpolation methods in the form of recursive procedures defined by simple basis functions (the interpolation basis), not necessarily real-valued. These basis functions are of essentially arbitrary type, chosen at the discretion of the user. The interpolant construction studied here is notably versatile: it may be used in a wide range of vector spaces endowed with a scalar product, with no dimension restrictions, in both Euclidean and Hilbert spaces. The choice of basis interpolation functions is as wide as possible, since it is subject only to inessential restrictions. In particular, the interpolation method considered coincides with traditional polynomial interpolation (mimicking the Lagrange method) in the real one-dimensional case, or with rational, exponential, etc. interpolation in other cases. The interpolation, as an iterative process, is fairly flexible and allows a single procedure to change the type of interpolation depending on the node number in a given set. Linear interpolation basis options (and perhaps some nonlinear ones) allow interpolation in noncommutative spaces, such as spaces of nondegenerate matrices, and the interpolated data can also be elements of vector spaces over an arbitrary numeric field. By way of illustration, the author gives examples of interpolation on the real plane, in a separable Hilbert space, and in the space of square matrices with vector-valued source data.

  13. Monotone piecewise bicubic interpolation

    International Nuclear Information System (INIS)

    Carlson, R.E.; Fritsch, F.N.

    1985-01-01

    In a 1980 paper the authors developed a univariate piecewise cubic interpolation algorithm which produces a monotone interpolant to monotone data. This paper is an extension of those results to monotone C¹ piecewise bicubic interpolation of data on a rectangular mesh. Such an interpolant is determined by the first partial derivatives and the first mixed partial derivative (twist) at the mesh points. Necessary and sufficient conditions on these derivatives are derived such that the resulting bicubic polynomial is monotone on a single rectangular element. These conditions are then simplified to a set of sufficient conditions for monotonicity. The latter are translated to a system of linear inequalities, which form the basis for a monotone piecewise bicubic interpolation algorithm. 4 references, 6 figures, 2 tables
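
    The univariate ancestor of this algorithm can be sketched directly: limit the Hermite slopes so that every cubic segment stays monotone. This is a simplified Fritsch-Carlson-style limiter using harmonic-mean slopes, not the bicubic conditions derived in the paper:

```python
import bisect

def monotone_cubic(x, y):
    """Return a callable monotone piecewise-cubic interpolant of monotone data,
    using harmonic-mean slope limiting in the Fritsch-Carlson spirit."""
    n = len(x)
    delta = [(y[i + 1] - y[i]) / (x[i + 1] - x[i]) for i in range(n - 1)]
    d = [0.0] * n
    d[0], d[-1] = delta[0], delta[-1]
    for i in range(1, n - 1):
        if delta[i - 1] * delta[i] > 0:
            # harmonic mean keeps both slope ratios inside the monotonicity region
            d[i] = 2.0 * delta[i - 1] * delta[i] / (delta[i - 1] + delta[i])
        # else: local extremum in the data -> slope stays 0

    def f(xq):
        i = min(max(bisect.bisect_right(x, xq) - 1, 0), n - 2)
        h = x[i + 1] - x[i]
        t = (xq - x[i]) / h
        h00 = (1 + 2 * t) * (1 - t) ** 2      # cubic Hermite basis
        h10 = t * (1 - t) ** 2
        h01 = t * t * (3 - 2 * t)
        h11 = t * t * (t - 1)
        return h00 * y[i] + h10 * h * d[i] + h01 * y[i + 1] + h11 * h * d[i + 1]

    return f

# data with a sharp rise: an unconstrained cubic spline would overshoot here
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.0, 0.1, 0.2, 5.0, 5.1]
f = monotone_cubic(xs, ys)
samples = [f(4.0 * k / 400.0) for k in range(401)]
```

    Because every slope ratio stays within the monotonicity region, the interpolant never overshoots between nodes, which is exactly the property the bicubic extension enforces on rectangular elements.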

  14. Spatial interpolation

    NARCIS (Netherlands)

    Stein, A.

    1991-01-01

    The theory and practical application of techniques of statistical interpolation are studied in this thesis, and new developments in multivariate spatial interpolation and the design of sampling plans are discussed. Several applications to studies in soil science are

  15. The mixed waste landfill integrated demonstration

    International Nuclear Information System (INIS)

    Burford, T.D.; Williams, C.V.

    1994-01-01

    The Mixed Waste Landfill Integrated Demonstration (MWLID) focuses on in-situ characterization, monitoring, remediation, and containment of landfills in arid environments that contain hazardous and mixed waste. The MWLID mission is to assess, demonstrate, and transfer technologies and systems that lead to faster, better, cheaper, and safer cleanup. Most important, the demonstrated technologies will be evaluated against the baseline of conventional technologies and systems. The comparison will include the cost, efficiency, risk, and feasibility of using these innovative technologies at other sites

  16. Integration in 'mixed methods' research

    DEFF Research Database (Denmark)

    Frederiksen, Morten

    2013-01-01

    The development of mixed methods research as an independent research tradition has taken place primarily through the development of mixed methods designs. This article argues that the design approach should be supplemented with a broader focus on how the separate parts of mixed methods projects are integrated into a whole. Based on an analysis of the concept of integration in the mixed methods literature, a classification of six forms of integration is proposed: theory, design, method, data, analysis, and interpretation integration. Each of these describes a way of creating meaningful, tangible relations between the parts of a study. Using this classification, the applicability of the concept of integration is examined through an analysis of three very different traditions of mixed methods research: methodological triangulation, pragmatist design optimization, and theory/method integration. On the basis of this...

  17. An Integrating Framework for Mixed Systems

    Science.gov (United States)

    Coutrix, Céline; Nigay, Laurence

    Technological advances in hardware manufacturing led to an extended range of possibilities for designing physical-digital objects involved in a mixed system. Mixed systems can take various forms and include augmented reality, augmented virtuality, and tangible systems. In this very dynamic context, it is difficult to compare existing mixed systems and to systematically explore the design space. Addressing this design problem, this chapter presents a unified point of view on mixed systems by focusing on mixed objects involved in interaction, i.e., hybrid physical-digital objects straddling physical and digital worlds. Our integrating framework is made of two complementary facets of a mixed object: we define intrinsic as well as extrinsic characteristics of an object by considering its role in the interaction. Such characteristics of an object are useful for comparing existing mixed systems at a fine-grain level. The taxonomic power of these characteristics is discussed in the context of existing mixed systems from the literature. Their generative power is illustrated by considering a system, Roam, which we designed and developed.

  18. Interpolation theory

    CERN Document Server

    Lunardi, Alessandra

    2018-01-01

    This book is the third edition of the 1999 lecture notes of the courses on interpolation theory that the author delivered at the Scuola Normale in 1998 and 1999. In the mathematical literature there are many good books on the subject, but none of them is very elementary, and in many cases the basic principles are hidden below great generality. In this book the principles of interpolation theory are illustrated aiming at simplification rather than at generality. The abstract theory is reduced as far as possible, and many examples and applications are given, especially to operator theory and to regularity in partial differential equations. Moreover the treatment is self-contained, the only prerequisite being the knowledge of basic functional analysis.

  19. Terminal current interpolation for multirate time integration of hierarchical IC models

    NARCIS (Netherlands)

    Verhoeven, A.; Maten, ter E.J.W.; Dohmen, J.J.; Tasic, B.; Mattheij, R.M.M.; Fitt, A.D.; Norbury, J.; Ockendon, H.; Wilson, E.

    2010-01-01

    Multirate time-integration methods [3–5] appear to be attractive for initial value problems for DAEs with latency or multirate behaviour. Latency means that parts of the circuit are constant or slowly time-varying during a certain time interval, while multirate behaviour means that some variables
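
    The multirate idea (big macro-steps for latent, slowly varying variables; small micro-steps for active ones, with the slow variables interpolated inside each macro-step) can be sketched with forward Euler. This is a toy illustration of the coupling, not the authors' terminal-current scheme for hierarchical circuit DAEs:

```python
import math

def multirate_euler(f_fast, f_slow, y0, z0, T, H, m):
    """Advance fast variable y with micro-step H/m and slow variable z with
    macro-step H, linearly interpolating z inside each macro-step."""
    y, z, t = y0, z0, 0.0
    for _ in range(round(T / H)):
        z_new = z + H * f_slow(t, y, z)       # one cheap step for the slow part
        h = H / m
        for k in range(m):
            zi = z + (k / m) * (z_new - z)    # interpolated slow value
            y = y + h * f_fast(t + k * h, y, zi)
        z, t = z_new, t + H
    return y, z

# a fast variable driven by a slowly decaying one:
#   y' = -10 y + z,   z' = -z   (exact: z = e^{-t})
y, z = multirate_euler(lambda t, y, z: -10.0 * y + z,
                       lambda t, y, z: -z,
                       1.0, 1.0, T=1.0, H=0.01, m=10)
z_exact = math.exp(-1.0)
y_exact = (8.0 / 9.0) * math.exp(-10.0) + math.exp(-1.0) / 9.0
```

    The slow variable receives one function evaluation per macro-step instead of m, which is where the savings come from when most of a circuit is latent; the interpolation supplies the fast solver with plausible intermediate values of the slow quantity.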

  20. Mixed Waste Integrated Program emerging technology development

    International Nuclear Information System (INIS)

    Berry, J.B.; Hart, P.W.

    1994-01-01

    The US Department of Energy (DOE) is responsible for the management and treatment of its mixed low-level wastes (MLLW). MLLW are regulated under both the Resource Conservation and Recovery Act and various DOE orders. Over the next 5 years, DOE will manage over 1.2 m³ of MLLW and mixed transuranic (MTRU) wastes. In order to successfully manage and treat these mixed wastes, DOE must adapt and develop characterization, treatment, and disposal technologies which will meet performance criteria, regulatory approvals, and public acceptance. Although technology to treat MLLW is not currently available without modification, DOE is committed to developing such treatment technologies and demonstrating them at the field scale by FY 1997. The Office of Research and Development's Mixed Waste Integrated Program (MWIP) within the DOE Office of Environmental Management (EM), Office of Technology Development, is responsible for the development and demonstration of such technologies for MLLW and MTRU wastes. MWIP advocates and sponsors expedited technology development and demonstrations for the treatment of MLLW

  21. Mixed Waste Integrated Program emerging technology development

    Energy Technology Data Exchange (ETDEWEB)

    Berry, J.B. [Oak Ridge National Lab., TN (United States); Hart, P.W. [USDOE, Washington, DC (United States)

    1994-06-01

    The US Department of Energy (DOE) is responsible for the management and treatment of its mixed low-level wastes (MLLW). MLLW are regulated under both the Resource Conservation and Recovery Act and various DOE orders. Over the next 5 years, DOE will manage over 1.2 m³ of MLLW and mixed transuranic (MTRU) wastes. In order to successfully manage and treat these mixed wastes, DOE must adapt and develop characterization, treatment, and disposal technologies which will meet performance criteria, regulatory approvals, and public acceptance. Although technology to treat MLLW is not currently available without modification, DOE is committed to developing such treatment technologies and demonstrating them at the field scale by FY 1997. The Office of Research and Development's Mixed Waste Integrated Program (MWIP) within the DOE Office of Environmental Management (EM), Office of Technology Development, is responsible for the development and demonstration of such technologies for MLLW and MTRU wastes. MWIP advocates and sponsors expedited technology development and demonstrations for the treatment of MLLW.

  22. Calculation of electromagnetic parameter based on interpolation algorithm

    International Nuclear Information System (INIS)

    Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan

    2015-01-01

    Wave-absorbing materials are important functional materials for electromagnetic protection. Their wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and to facilitate the design of wave-absorbing materials, this paper studied two interpolation methods for the electromagnetic parameters, Lagrange interpolation and Hermite interpolation, based on the measured electromagnetic parameters of spherical and flaky carbonyl iron mixed into a paraffin base. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and that the reflectance calculated with the interpolated electromagnetic parameters is, on the whole, consistent with experiment. - Highlights: • An interpolation algorithm is used to calculate EM parameters from limited samples. • The interpolation method predicts EM parameters well for different added particles. • Hermite interpolation is more accurate than Lagrange interpolation. • Reflection loss calculated from interpolated parameters is consistent with experiment
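
    The two schemes compared in this record can be sketched generically: Lagrange interpolation fits one global polynomial through all samples, while Hermite interpolation fits piecewise cubics that also match slope estimates at the nodes. The data below are arbitrary illustration values, not measured permittivities:

```python
import bisect

def lagrange(xs, ys, xq):
    """Global Lagrange interpolating polynomial evaluated at xq."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        w = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                w *= (xq - xj) / (xi - xj)
        total += yi * w
    return total

def hermite(xs, ys, ds, xq):
    """Piecewise cubic Hermite interpolant with node slopes ds, evaluated at xq."""
    i = min(max(bisect.bisect_right(xs, xq) - 1, 0), len(xs) - 2)
    h = xs[i + 1] - xs[i]
    t = (xq - xs[i]) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2
    h10 = t * (1 - t) ** 2
    h01 = t * t * (3 - 2 * t)
    h11 = t * t * (t - 1)
    return h00 * ys[i] + h10 * h * ds[i] + h01 * ys[i + 1] + h11 * h * ds[i + 1]

# arbitrary sample points standing in for a measured parameter vs. frequency
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [3.2, 2.6, 2.3, 2.1, 2.0]
# finite-difference slope estimates at the nodes (central where possible)
ds = [(ys[min(i + 1, 4)] - ys[max(i - 1, 0)]) /
      (xs[min(i + 1, 4)] - xs[max(i - 1, 0)]) for i in range(5)]

lag_at_node = lagrange(xs, ys, 3.0)
her_at_node = hermite(xs, ys, ds, 3.0)
```

    Both schemes reproduce the samples exactly; the practical difference the paper reports is how they behave between samples, where the local slope information carried by the Hermite form tends to track smoothly varying measured parameters better than one global high-degree polynomial.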

  23. Mixed Waste Integrated Program (MWIP): Technology summary

    International Nuclear Information System (INIS)

    1994-02-01

    The mission of the Mixed Waste Integrated Program (MWIP) is to develop and demonstrate innovative and emerging technologies for the treatment and management of DOE's mixed low-level wastes (MLLW) for use by its customers, the Office of Waste Operations (EM-30) and the Office of Environmental Restoration (EM-40). The primary goal of MWIP is to develop and demonstrate the treatment and disposal of actual mixed waste (MLLW and MTRU). The vitrification process and the plasma hearth process are scheduled for demonstration on actual radioactive waste in FY95 and FY96, respectively. This will be accomplished by sequential studies of lab-scale non-radioactive testing, followed by bench-scale radioactive testing, followed by field-scale radioactive testing. Both processes create a highly durable final waste form that passes leachability requirements while destroying organics. Material handling technology and off-gas requirements and capabilities for the plasma hearth process and the vitrification process will be established in parallel

  24. Mixed time slicing in path integral simulations

    International Nuclear Information System (INIS)

    Steele, Ryan P.; Zwickl, Jill; Shushkov, Philip; Tully, John C.

    2011-01-01

    A simple and efficient scheme is presented for using different time slices for different degrees of freedom in path integral calculations. This method bridges the gap between full quantization and the standard mixed quantum-classical (MQC) scheme and, therefore, still provides quantum mechanical effects in the less-quantized variables. Underlying the algorithm is the notion that time slices (beads) may be 'collapsed' in a manner that preserves quantization in the less quantum mechanical degrees of freedom. The method is shown to be analogous to multiple-time step integration techniques in classical molecular dynamics. The algorithm and its associated error are demonstrated on model systems containing coupled high- and low-frequency modes; results indicate that convergence of quantum mechanical observables can be achieved with disparate bead numbers in the different modes. Cost estimates indicate that this procedure, much like the MQC method, is most efficient for only a relatively few quantum mechanical degrees of freedom, such as proton transfer. In this regime, however, the cost of a fully quantum mechanical simulation is determined by the quantization of the least quantum mechanical degrees of freedom.

  25. Interpolation functors and interpolation spaces

    CERN Document Server

    Brudnyi, Yu A

    1991-01-01

    The theory of interpolation spaces has its origin in the classical work of Riesz and Marcinkiewicz but had its first flowering in the years around 1960 with the pioneering work of Aronszajn, Calderón, Gagliardo, Krein, Lions and a few others. It is interesting to note that what originally triggered off this avalanche were concrete problems in the theory of elliptic boundary value problems related to the scale of Sobolev spaces. Later on, applications were found in many other areas of mathematics: harmonic analysis, approximation theory, theoretical numerical analysis, geometry of Banach spaces, nonlinear functional analysis, etc. Besides this the theory has a considerable internal beauty and must by now be regarded as an independent branch of analysis, with its own problems and methods. Further development in the 1970s and 1980s included the solution by the authors of this book of one of the outstanding questions in the theory of the real method, the K-divisibility problem. In a way, this book harvests the r...

  26. Linear Methods for Image Interpolation

    OpenAIRE

    Pascal Getreuer

    2011-01-01

    We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
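
    Separability, the focus of this record, means an N-dimensional interpolation reduces to the 1-D method applied along each axis in turn. A minimal sketch for bilinear sampling of a grayscale image (a list of rows), implemented as two nested 1-D linear interpolations:

```python
import math

def lerp(a, b, t):
    """1-D linear interpolation between a and b, t in [0, 1]."""
    return (1.0 - t) * a + t * b

def sample_bilinear(img, x, y):
    """Sample image at fractional (x, y): 1-D interpolation along x, then y."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1 = min(x0 + 1, len(img[0]) - 1)      # clamp at the image border
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = lerp(img[y0][x0], img[y0][x1], fx)   # along x in the upper row
    bot = lerp(img[y1][x0], img[y1][x1], fx)   # along x in the lower row
    return lerp(top, bot, fy)                  # then along y

img = [[0.0, 1.0],
       [2.0, 3.0]]
center = sample_bilinear(img, 0.5, 0.5)
```

    The same structure extends to the other separable kernels the record lists: swapping `lerp` for a 1-D cubic or sinc kernel along each axis yields bicubic or sinc interpolation without any genuinely 2-D machinery.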

  7. Achieving Integration in Mixed Methods Designs—Principles and Practices

    OpenAIRE

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-01-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participato...

  8. Spline Interpolation of Image

    OpenAIRE

    I. Kuba; J. Zavacky; J. Mihalik

    1995-01-01

    This paper presents the use of B-spline functions in various digital signal processing applications. The theory of one-dimensional B-spline interpolation is briefly reviewed and then extended to two dimensions. After presenting one- and two-dimensional spline interpolation, algorithms for image interpolation and resolution enhancement are proposed. Finally, experimental results of computer simulations are presented.
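    A short sketch of the uniform cubic B-spline kernel that underlies such schemes. Note that exact B-spline interpolation additionally requires a prefiltering step (computing spline coefficients from the samples), which is omitted here; the names are illustrative, not from the paper.

```python
# Centered uniform cubic B-spline kernel beta3(x) (support [-2, 2]) and a
# helper that evaluates a spline from given coefficients. Prefiltering of
# samples into coefficients, needed for exact interpolation, is not shown.

def beta3(x):
    """Uniform cubic B-spline basis function."""
    x = abs(x)
    if x < 1.0:
        return 2.0 / 3.0 - x * x + 0.5 * x ** 3
    if x < 2.0:
        return (2.0 - x) ** 3 / 6.0
    return 0.0

def spline_eval(coeffs, x):
    """Evaluate sum_k c[k] * beta3(x - k) at position x."""
    return sum(c * beta3(x - k) for k, c in enumerate(coeffs))
```

    The kernel satisfies partition of unity (neighboring shifted copies sum to 1), which is why constant data is reproduced exactly.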

  9. Mixed Waste Integrated Program: A technology assessment for mercury-containing mixed wastes

    International Nuclear Information System (INIS)

    Perona, J.J.; Brown, C.H.

    1993-03-01

    The treatment of mixed wastes must meet US Environmental Protection Agency (EPA) standards for chemically hazardous species and also must provide adequate control of the radioactive species. The US Department of Energy (DOE) Office of Technology Development established the Mixed Waste Integrated Program (MWIP) to develop mixed-waste treatment technology in support of the Mixed Low-Level Waste Program. Many DOE mixed-waste streams contain mercury. This report is an assessment of current state-of-the-art technologies for mercury separations from solids, liquids, and gases. A total of 19 technologies were assessed. This project is funded through the Chemical-Physical Technology Support Group of the MWIP

  10. Prospects for direct measurement of time-integrated Bs mixing

    International Nuclear Information System (INIS)

    Siccama, I.

    1994-01-01

    This note investigates the prospects of measuring time-integrated B_s mixing. Three inclusive decay modes of the B_s meson are discussed. For each reconstruction mode, the expected number of events and the relevant background channels are described. Estimates are given for the uncertainty on the mixing parameter χ_s. (orig.)
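    For context, the time-integrated mixing probability χ_s is related to the mixing parameter x_s = Δm_s/Γ_s by the standard relation below (neglecting the width difference ΔΓ_s); this is general background on neutral-meson mixing, not a formula taken from the note itself:

```latex
% Time-integrated mixing probability in terms of x_s = \Delta m_s / \Gamma_s,
% neglecting the width difference:
\chi_s = \frac{x_s^2}{2\left(1 + x_s^2\right)}
```

    For large x_s, as expected for the B_s system, χ_s approaches its maximal value of 1/2, which is what makes a direct time-integrated measurement challenging.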

  11. Permanently calibrated interpolating time counter

    International Nuclear Information System (INIS)

    Jachna, Z; Szplet, R; Kwiatkowski, P; Różyc, K

    2015-01-01

    We propose a new architecture of an integrated time interval counter that provides its permanent calibration in the background. Time interval measurement and the calibration procedure are based on the use of a two-stage interpolation method and parallel processing of measurement and calibration data. The parallel processing is achieved by a doubling of two-stage interpolators in measurement channels of the counter, and by an appropriate extension of control logic. Such modification allows the updating of transfer characteristics of interpolators without the need to break a theoretically infinite measurement session. We describe the principle of permanent calibration, its implementation and influence on the quality of the counter. The precision of the presented counter is kept at a constant level (below 20 ps) despite significant changes in the ambient temperature (from −10 to 60 °C), which can cause a sevenfold decrease in the precision of the counter with a traditional calibration procedure. (paper)
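    Interpolating counters of this kind combine a coarse clock count with fine interpolations at the two ends of the interval (the classic Nutt method), which the two-stage interpolators above refine. A minimal arithmetic sketch, with all names and numbers illustrative rather than taken from the paper:

```python
# Counter-plus-interpolator ("Nutt") arithmetic: a coarse clock counts whole
# periods between the events, and interpolators measure the fractional times
# from each event to the next clock edge.

def measured_interval(n_periods, t_clk, frac_start, frac_stop):
    """Interval = whole clock periods plus interpolated end corrections.

    frac_start: time from the start event to the next clock edge
    frac_stop:  time from the stop event to the next clock edge
    """
    return n_periods * t_clk + frac_start - frac_stop

# Example: 12 whole periods of a 4 ns clock, with 1.3 ns and 0.5 ns
# interpolated at the start and stop ends respectively.
t = measured_interval(12, 4e-9, 1.3e-9, 0.5e-9)
```

    The calibration discussed in the abstract concerns the transfer characteristics that convert raw interpolator codes into the fractional times used here.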

  12. Mixed problem with integral boundary condition for a high order mixed type partial differential equation

    OpenAIRE

    M. Denche; A. L. Marhoune

    2003-01-01

    In this paper, we study a mixed problem with integral boundary conditions for a high order partial differential equation of mixed type. We prove the existence and uniqueness of the solution. The proof is based on energy inequality, and on the density of the range of the operator generated by the considered problem.

  13. SPLINE, Spline Interpolation Function

    International Nuclear Information System (INIS)

    Allouard, Y.

    1977-01-01

    1 - Nature of physical problem solved: The problem is to obtain an interpolated function, as smooth as possible, that passes through given points. The derivatives of these functions are continuous up to the (2Q-1) order. The program consists of the following two subprograms: ASPLERQ. Transport of relations method for the spline functions of interpolation. SPLQ. Spline interpolation. 2 - Method of solution: The methods are described in the reference under item 10

  14. Achieving integration in mixed methods designs-principles and practices.

    Science.gov (United States)

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-12-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs-exploratory sequential, explanatory sequential, and convergent-and through four advanced frameworks-multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. © Health Research and Educational Trust.

  15. Achieving Integration in Mixed Methods Designs—Principles and Practices

    Science.gov (United States)

    Fetters, Michael D; Curry, Leslie A; Creswell, John W

    2013-01-01

    Mixed methods research offers powerful tools for investigating complex processes and systems in health and health care. This article describes integration principles and practices at three levels in mixed methods research and provides illustrative examples. Integration at the study design level occurs through three basic mixed method designs—exploratory sequential, explanatory sequential, and convergent—and through four advanced frameworks—multistage, intervention, case study, and participatory. Integration at the methods level occurs through four approaches. In connecting, one database links to the other through sampling. With building, one database informs the data collection approach of the other. When merging, the two databases are brought together for analysis. With embedding, data collection and analysis link at multiple points. Integration at the interpretation and reporting level occurs through narrative, data transformation, and joint display. The fit of integration describes the extent the qualitative and quantitative findings cohere. Understanding these principles and practices of integration can help health services researchers leverage the strengths of mixed methods. PMID:24279835

  16. Generalized interpolative quantum statistics

    International Nuclear Information System (INIS)

    Ramanathan, R.

    1992-01-01

    A generalized interpolative quantum statistics is presented by conjecturing a certain reordering of phase space due to the presence of possible exotic objects other than bosons and fermions. Such an interpolation achieved through a Bose-counting strategy predicts the existence of an infinite quantum Boltzmann-Gibbs statistics akin to the one discovered by Greenberg recently

  17. CMB anisotropies interpolation

    NARCIS (Netherlands)

    Zinger, S.; Delabrouille, Jacques; Roux, Michel; Maitre, Henri

    2010-01-01

    We consider the problem of the interpolation of irregularly spaced spatial data, applied to observation of Cosmic Microwave Background (CMB) anisotropies. The well-known interpolation methods and kriging are compared to the binning method which serves as a reference approach. We analyse kriging
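    Ordinary kriging, one of the methods compared above, predicts at a new location by solving a small covariance system with a sum-to-one constraint on the weights. A minimal one-dimensional sketch, assuming an exponential covariance model chosen purely for illustration:

```python
# Ordinary kriging in 1-D with covariance C(h) = exp(-|h|) (an assumed model).
# The augmented linear system enforces that the weights sum to one, which
# makes the predictor unbiased and an exact interpolator at the data points.
import math

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (m[r][n] - sum(m[r][c] * x[c] for c in range(r + 1, n))) / m[r][r]
    return x

def kriging_predict(xs, ys, x0):
    """Ordinary kriging prediction at x0 from samples (xs, ys)."""
    n = len(xs)
    cov = lambda p, q: math.exp(-abs(p - q))
    a = [[cov(xs[i], xs[j]) for j in range(n)] + [1.0] for i in range(n)]
    a.append([1.0] * n + [0.0])  # sum-to-one constraint row
    b = [cov(xs[i], x0) for i in range(n)] + [1.0]
    w = solve(a, b)[:n]
    return sum(w[i] * ys[i] for i in range(n))
```

    In practice the covariance model is estimated from the data (variogram fitting), a step not shown in this sketch.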

  18. On interpolation series related to the Abel-Goncharov problem, with applications to arithmetic-geometric mean relationship and Hellinger integrals

    NARCIS (Netherlands)

    K.O. Dzhaparidze (Kacha)

    1998-01-01

    textabstractIn this paper a convergence class is characterized for special series associated with Gelfond's interpolation problem (a generalization of the Abel-Goncharov problem) when the interpolation nodes are equidistantly distributed within the interval $[0,1]$. As a result, an expansion is

  19. Linear Methods for Image Interpolation

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-09-01

    Full Text Available We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.

  20. A mixed integer linear program for an integrated fishery | Hasan ...

    African Journals Online (AJOL)

    ... and labour allocation of quota based integrated fisheries. We demonstrate the workability of our model with a numerical example and sensitivity analysis based on data obtained from one of the major fisheries in New Zealand. Keywords: mixed integer linear program, fishing, trawler scheduling, processing, quotas ORiON: ...

  1. Mixed Waste Treatment Project: Computer simulations of integrated flowsheets

    International Nuclear Information System (INIS)

    Dietsche, L.J.

    1993-12-01

    The disposal of mixed waste, that is, waste containing both hazardous and radioactive components, is a challenging waste management problem of particular concern to DOE sites throughout the United States. Traditional technologies used for the destruction of hazardous wastes need to be re-evaluated for their ability to handle mixed wastes, and in some cases new technologies need to be developed. The Mixed Waste Treatment Project (MWTP) was set up by DOE's Waste Operations Program (EM30) to provide guidance on mixed waste treatment options. One of MWTP's charters is to develop flowsheets for prototype integrated mixed waste treatment facilities which can serve as models for sites developing their own treatment strategies. Evaluation of these flowsheets is being facilitated through the use of computer modelling. The objective of the flowsheet simulations is to provide mass and energy balances, product compositions, and equipment sizing (leading to cost) information. The modelled flowsheets need to be easily modified to examine how alternative technologies and varying feed streams affect the overall integrated process. One such commercially available simulation program is ASPEN PLUS. This report contains details of the Aspen Plus program

  2. Feature displacement interpolation

    DEFF Research Database (Denmark)

    Nielsen, Mads; Andresen, Per Rønsholt

    1998-01-01

    Given a sparse set of feature matches, we want to compute an interpolated dense displacement map. The application may be stereo disparity computation, flow computation, or non-rigid medical registration. Also estimation of missing image data may be phrased in this framework. Since the features often are very sparse, the interpolation model becomes crucial. We show that a maximum likelihood estimation based on the covariance properties (kriging) shows properties more expedient than methods such as Gaussian interpolation or Tikhonov regularization, also including scale-selection. The computational complexities are identical. We apply the maximum likelihood interpolation to growth analysis of the mandibular bone. Here, the features used are the crest-lines of the object surface.

  3. Extension Of Lagrange Interpolation

    Directory of Open Access Journals (Sweden)

    Mousa Makey Krady

    2015-01-01

    Full Text Available Abstract This paper presents a generalization of Lagrange interpolation polynomials in higher dimensions by using Cramer's formula. The aim is to construct polynomials in space whose error tends to zero.
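    For reference, the classical one-variable Lagrange interpolation being generalized can be written compactly; this sketch does not reproduce the paper's multivariate construction:

```python
# Classical Lagrange interpolation: evaluate the unique polynomial of degree
# < n passing through n given points, using the Lagrange basis directly.

def lagrange(points, x):
    """Evaluate the Lagrange interpolating polynomial through `points` at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(points):
        basis = 1.0
        for j, (xj, _) in enumerate(points):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

# Three points determine a parabola: through (0,0), (1,1), (2,4) this
# reproduces y = x**2.
```

    The Cramer-formula viewpoint mentioned in the abstract amounts to solving the Vandermonde system for the same polynomial coefficients.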

  4. Integrated aeroelastic vibrator for fluid mixing in open microwells

    Science.gov (United States)

    Xia, H. M.; Jin, X.; Zhang, Y. Y.; Wu, J. W.; Zhang, J.; Wang, Z. P.

    2018-01-01

    Fluid mixing in micro-wells/chambers is required in a variety of biological and biochemical processes. However, mixing fluids of small volumes is usually difficult due to increased viscous effects. In this study, we propose a new method for mixing enhancement in microliter-scale open wells. A thin elastic diaphragm is used to seal the bottom of the mixing microwell, beneath which an air chamber connects to an aeroelastic vibrator. Driven by an air flow, the vibrator produces self-excited vibrations and causes pressure oscillations in the air chamber. The elastic diaphragm is thereby actuated to mix the fluids in the microwell. Two designs, having one single well and 2  ×  2 wells respectively, were prototyped. Testing results show that for liquids with a volume ranging from 10-60 µl and viscosity ranging from 1-5 cP, complete mixing can be obtained within 5-20 s. Furthermore, the device is operable with an air micropump, thus facilitating the miniaturization and integration of lab-on-a-chip and microbioreactor systems.

  5. Mixed signal custom integrated circuit development for physics instrumentation

    International Nuclear Information System (INIS)

    Britton, C.L. Jr.; Bryan, W.L.; Emery, M.S.

    1998-01-01

    The Monolithic Systems Development Group at the Oak Ridge National Laboratory has been greatly involved in custom mixed-mode integrated circuit development for the PHENIX detector at the Relativistic Heavy Ion collider (RHIC) at Brookhaven National Laboratory and position-sensitive germanium spectrometer front-ends for the Naval Research Laboratory (NRL). This paper will outline the work done for both PHENIX and the Naval Research Laboratory in the area of full-custom, mixed-signal CMOS integrated electronics. This paper presents the architectures chosen for the various PHENIX detectors which include position-sensitive silicon, capacitive pixel, and phototube detectors, and performance results for the subsystems as well as a system description of the NRL germanium strip system and its performance. The performance of the custom preamplifiers, discriminators, analog memories, analog-digital converters, and control circuitry for all systems will be presented

  6. Mixed signal custom integrated circuit development for physics instrumentation

    Energy Technology Data Exchange (ETDEWEB)

    Britton, C.L. Jr.; Bryan, W.L.; Emery, M.S. [and others

    1998-10-01

    The Monolithic Systems Development Group at the Oak Ridge National Laboratory has been greatly involved in custom mixed-mode integrated circuit development for the PHENIX detector at the Relativistic Heavy Ion collider (RHIC) at Brookhaven National Laboratory and position-sensitive germanium spectrometer front-ends for the Naval Research Laboratory (NRL). This paper will outline the work done for both PHENIX and the Naval Research Laboratory in the area of full-custom, mixed-signal CMOS integrated electronics. This paper presents the architectures chosen for the various PHENIX detectors which include position-sensitive silicon, capacitive pixel, and phototube detectors, and performance results for the subsystems as well as a system description of the NRL germanium strip system and its performance. The performance of the custom preamplifiers, discriminators, analog memories, analog-digital converters, and control circuitry for all systems will be presented.

  7. Numerical treatments for solving nonlinear mixed integral equation

    Directory of Open Access Journals (Sweden)

    M.A. Abdou

    2016-12-01

    Full Text Available We consider a mixed type of nonlinear integral equation (MNLIE) of the second kind in the space C[0,T] × L_2(Ω), T < 1. The Volterra integral terms (VITs) are considered in time with continuous kernels, while the Fredholm integral term (FIT) is considered in position with singular general kernel. Using the quadratic method and separation of variables method, we obtain a nonlinear system of Fredholm integral equations (NLSFIEs) with singular kernel. A Toeplitz matrix method, in each case, is then used to obtain a nonlinear algebraic system. Numerical results are calculated when the kernels take a logarithmic form or Carleman function. Moreover, the error estimates, in each case, are then computed.
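    To illustrate the setting, the sketch below solves a linear Fredholm integral equation of the second kind, u(x) = f(x) + λ ∫₀¹ K(x,t) u(t) dt, by simple successive approximation on a quadrature grid. This is deliberately much simpler than the paper's quadratic and Toeplitz-matrix methods for the nonlinear mixed equation; the kernel, f, and λ are invented so that the exact solution is u(x) = x.

```python
# Fixed-point (successive approximation) solver for a linear Fredholm
# equation of the second kind, discretized with trapezoid-rule quadrature.
# Converges when lam times the kernel norm is small enough (a contraction).

def solve_fredholm(f, kernel, lam, m=101, n_iter=60):
    """Iterate u <- f + lam*K u on a uniform grid of m points over [0, 1]."""
    xs = [i / (m - 1) for i in range(m)]
    h = 1.0 / (m - 1)
    w = [h] * m
    w[0] = w[-1] = h / 2.0  # trapezoid-rule end weights
    u = [f(x) for x in xs]
    for _ in range(n_iter):
        u = [f(xs[i]) + lam * sum(w[j] * kernel(xs[i], xs[j]) * u[j]
                                  for j in range(m))
             for i in range(m)]
    return xs, u

# With K(x,t) = x*t, lam = 0.5, and f(x) = (5/6)*x, the exact solution is
# u(x) = x, since 0.5 * x * \int_0^1 t*t dt = x/6.
xs, u = solve_fredholm(lambda x: 5.0 * x / 6.0,
                       lambda x, t: x * t, lam=0.5)
```

    Replacing the fixed-point loop with a direct solve of the discretized linear system gives the Nyström method; the Toeplitz-matrix approach of the paper exploits kernel structure to build that system efficiently for singular kernels.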

  8. Digital time-interpolator

    International Nuclear Information System (INIS)

    Schuller, S.; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica

    1990-01-01

    This report presents a description of the design of a digital time meter. This time meter should be able to measure, by means of interpolation, times of 100 ns with an accuracy of 50 ps. In order to determine the best principle for interpolation, three methods were simulated on the computer with a Pascal code. On the basis of this the best method was chosen and used in the design. In order to test the principal operation of the circuit, a part of the circuit was constructed with which the interpolation could be tested. The remainder of the circuit was simulated with a computer, so there are no data available about the operation of the complete circuit in practice. The interpolation part however is the most critical part; the remainder of the circuit is more or less simple logic. Use is made of Emitter Coupled Logic (ECL) and analog high-frequency techniques. The difficulty which accompanies the use of ECL logic is keeping the mutual connections as short as possible and properly terminating the outputs in order to avoid reflections. Besides this, the report also gives a description of the principle of interpolation and the design of the circuit. The measurement results of the prototype are presented finally. (author). 3 refs.; 37 figs.; 2 tabs

  9. Multivariate Birkhoff interpolation

    CERN Document Server

    Lorentz, Rudolph A

    1992-01-01

    The subject of this book is Lagrange, Hermite and Birkhoff (lacunary Hermite) interpolation by multivariate algebraic polynomials. It unifies and extends a new algorithmic approach to this subject which was introduced and developed by G.G. Lorentz and the author. One particularly interesting feature of this algorithmic approach is that it obviates the necessity of finding a formula for the Vandermonde determinant of a multivariate interpolation in order to determine its regularity (which formulas are practically unknown anyways) by determining the regularity through simple geometric manipulations in the Euclidean space. Although interpolation is a classical problem, it is surprising how little is known about its basic properties in the multivariate case. The book therefore starts by exploring its fundamental properties and its limitations. The main part of the book is devoted to a complete and detailed elaboration of the new technique. A chapter with an extensive selection of finite elements follows as well a...

  10. Mixed Waste Integrated Program Quality Assurance requirements plan

    International Nuclear Information System (INIS)

    1994-01-01

    Mixed Waste Integrated Program (MWIP) is sponsored by the US Department of Energy (DOE), Office of Technology Development, Waste Management Division. The strategic objectives of MWIP are defined in the Mixed Waste Integrated Program Strategic Plan, and expanded upon in the MWIP Program Management Plan. This MWIP Quality Assurance Requirement Plan (QARP) applies to mixed waste treatment technologies involving both hazardous and radioactive constituents. As a DOE organization, MWIP is required to develop, implement, and maintain a written Quality Assurance Program in accordance with DOE Order 4700.1 Project Management System, DOE Order 5700.6C, Quality Assurance, DOE Order 5820.2A Radioactive Waste Management, ASME NQA-1 Quality Assurance Program Requirements for Nuclear Facilities and ANSI/ASQC E4-19xx Specifications and Guidelines for Quality Systems for Environmental Data Collection and Environmental Technology Programs. The purpose of the MWIP QA program is to establish controls which address the requirements in 5700.6C, with the intent to minimize risks and potential environmental impacts; and to maximize environmental protection, health, safety, reliability, and performance in all program activities. QA program controls are established to assure that each participating organization conducts its activities in a manner consistent with risks posed by those activities

  11. Mixed Waste Integrated Program Quality Assurance requirements plan

    Energy Technology Data Exchange (ETDEWEB)

    1994-04-15

    Mixed Waste Integrated Program (MWIP) is sponsored by the US Department of Energy (DOE), Office of Technology Development, Waste Management Division. The strategic objectives of MWIP are defined in the Mixed Waste Integrated Program Strategic Plan, and expanded upon in the MWIP Program Management Plan. This MWIP Quality Assurance Requirement Plan (QARP) applies to mixed waste treatment technologies involving both hazardous and radioactive constituents. As a DOE organization, MWIP is required to develop, implement, and maintain a written Quality Assurance Program in accordance with DOE Order 4700.1 Project Management System, DOE Order 5700.6C, Quality Assurance, DOE Order 5820.2A Radioactive Waste Management, ASME NQA-1 Quality Assurance Program Requirements for Nuclear Facilities and ANSI/ASQC E4-19xx Specifications and Guidelines for Quality Systems for Environmental Data Collection and Environmental Technology Programs. The purpose of the MWIP QA program is to establish controls which address the requirements in 5700.6C, with the intent to minimize risks and potential environmental impacts; and to maximize environmental protection, health, safety, reliability, and performance in all program activities. QA program controls are established to assure that each participating organization conducts its activities in a manner consistent with risks posed by those activities.

  12. An overview of the Mixed Waste Landfill Integrated Demonstration

    International Nuclear Information System (INIS)

    Williams, C.V.; Burford, T.D.; Betsill, J.D.

    1994-01-01

    The Mixed Waste Landfill Integrated Demonstration (MWLID) focuses on ''in-situ'' characterization, monitoring, remediation, and containment of landfills in arid environments that contain hazardous and mixed waste. The MWLID mission is to assess, demonstrate, and transfer technologies and systems that lead to faster, better, cheaper, and safer cleanup. Most importantly, the demonstrated technologies will be evaluated against the baseline of conventional technologies. Key goals of the MWLID are routine use of these technologies by Environmental Restoration Groups throughout the DOE complex and commercialization of these technologies to the private sector. The MWLID is demonstrating technologies at hazardous waste landfills located at Sandia National Laboratories and on Kirtland Air Force Base. These landfills have been selected because they are representative of many sites throughout the Southwest and in other arid climates

  13. Mixed Waste Integrated Program -- Problem-oriented technology development

    International Nuclear Information System (INIS)

    Hart, P.W.; Wolf, S.W.; Berry, J.B.

    1994-01-01

    The Mixed Waste Integrated Program (MWIP) is responding to the need for DOE mixed waste treatment technologies that meet these dual regulatory requirements. MWIP is developing emerging and innovative treatment technologies to determine process feasibility. Technology demonstrations will be used to determine whether processes are superior to existing technologies in reducing risk, minimizing life-cycle cost, and improving process performance. Technology development is ongoing in technical areas required to process mixed waste: materials handling, chemical/physical treatment, waste destruction, off-gas treatment, final forms, and process monitoring/control. MWIP is currently developing a suite of technologies to process heterogeneous waste. One robust process is the fixed-hearth plasma-arc process that is being developed to treat a wide variety of contaminated materials with minimal characterization. Additional processes encompass steam reforming, including treatment of waste under the debris rule. Advanced off-gas systems are also being developed. Vitrification technologies are being demonstrated for the treatment of homogeneous wastes such as incinerator ash and sludge. An alternative to conventional evaporation for liquid removal--freeze crystallization--is being investigated. Since mercury is present in numerous waste streams, mercury removal technologies are being developed

  14. Sandia National Laboratories Mixed Waste Landfill Integrated Demonstration

    International Nuclear Information System (INIS)

    Tyler, L.D.; Phelan, J.M.; Prindle, N.K.; Purvis, S.T.; Stormont, J.C.

    1992-01-01

    The Mixed-Waste Landfill Integrated Demonstration (MWLID) has been assigned to Sandia National Laboratories (SNL) by the US Department of Energy (DOE) Office of Technology Development. The mission of the MWLID is to assess, implement and transfer technologies and systems that lead to quicker, safer, and more efficient remediation of buried chemical and mixed-waste sites. The MWLID focus is on two landfills at SNL in Albuquerque, New Mexico: The Chemical Waste Landfill (CWL) and the Mixed-Waste Landfill (MWL). These landfills received chemical, radioactive and mixed wastes from various SNL nuclear research programs. A characterization system has been designed for the definition of the extent and concentration of contamination. This system includes historical records, directional drilling, membrane emplacement, sensors, geophysics, a sampling strategy, and on-site sample analysis. In the remediation task, in-situ remediation systems are being designed to remove volatile organic compounds (VOCs) and heavy metals from soils. The VOC remediation includes vacuum extraction with electrical and radio-frequency heating. For heavy metal contamination, electrokinetic processes are being considered. The MWLID utilizes a phased, parallel approach. Initial testing is performed at an uncontaminated site adjacent to the CWL. Once characterization is underway at the CWL, lessons learned can be directly transferred to the more challenging problem of radioactive waste in the MWL. The MWL characterization can proceed in parallel with the remediation work at the CWL. The technologies and systems demonstrated in the MWLID are to be evaluated based on their performance and cost in the real remediation environment of the landfills

  15. Time-interpolator

    International Nuclear Information System (INIS)

    Blok, M. de; Nationaal Inst. voor Kernfysica en Hoge-Energiefysica

    1990-01-01

    This report describes a time-interpolator with which time differences can be measured using digital and analog techniques. It concerns a maximum measuring time of 6.4 μs with a resolution of 100 ps. Use is made of Emitter Coupled Logic (ECL) and analog high-frequency techniques. The difficulty which accompanies the use of ECL logic is keeping the mutual connections as short as possible and properly terminating the outputs in order to avoid reflections. The digital part of the time-interpolator consists of a continuously running clock and logic which converts an input signal into start and stop signals. The analog part consists of a Time to Amplitude Converter (TAC) and an analog to digital converter. (author). 3 refs.; 30 figs

  16. Interpolative Boolean Networks

    Directory of Open Access Journals (Sweden)

    Vladimir Dobrić

    2017-01-01

    Full Text Available Boolean networks are used for modeling and analysis of complex systems of interacting entities. Classical Boolean networks are binary and they are relevant for modeling systems with complex switch-like causal interactions. More descriptive power can be provided by the introduction of gradation in this model. If this is accomplished by using conventional fuzzy logics, the generalized model cannot secure the Boolean frame. Consequently, the validity of the model's dynamics is not secured. The aim of this paper is to present the Boolean consistent generalization of Boolean networks, interpolative Boolean networks. The generalization is based on interpolative Boolean algebra, the [0,1]-valued realization of Boolean algebra. The proposed model is adaptive with respect to the nature of input variables and it offers greater descriptive power as compared with traditional models. For illustrative purposes, IBN is compared to the models based on existing real-valued approaches. Due to the complexity of most systems to be analyzed and the characteristics of interpolative Boolean algebra, software support is developed to provide graphical and numerical tools for complex system modeling and analysis.

  17. Integrative Mixed Methods Data Analytic Strategies in Research on School Success in Challenging Circumstances

    Science.gov (United States)

    Jang, Eunice E.; McDougall, Douglas E.; Pollon, Dawn; Herbert, Monique; Russell, Pia

    2008-01-01

    There are both conceptual and practical challenges in dealing with data from mixed methods research studies. There is a need for discussion about various integrative strategies for mixed methods data analyses. This article illustrates integrative analytic strategies for a mixed methods study focusing on improving urban schools facing challenging…

  18. Properties Important To Mixing For WTP Large Scale Integrated Testing

    International Nuclear Information System (INIS)

    Koopman, D.; Martino, C.; Poirier, M.

    2012-01-01

    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i

  19. PROPERTIES IMPORTANT TO MIXING FOR WTP LARGE SCALE INTEGRATED TESTING

    Energy Technology Data Exchange (ETDEWEB)

    Koopman, D.; Martino, C.; Poirier, M.

    2012-04-26

    Large Scale Integrated Testing (LSIT) is being planned by Bechtel National, Inc. to address uncertainties in the full scale mixing performance of the Hanford Waste Treatment and Immobilization Plant (WTP). Testing will use simulated waste rather than actual Hanford waste. Therefore, the use of suitable simulants is critical to achieving the goals of the test program. External review boards have raised questions regarding the overall representativeness of simulants used in previous mixing tests. Accordingly, WTP requested the Savannah River National Laboratory (SRNL) to assist with development of simulants for use in LSIT. Among the first tasks assigned to SRNL was to develop a list of waste properties that matter to pulse-jet mixer (PJM) mixing of WTP tanks. This report satisfies Commitment 5.2.3.1 of the Department of Energy Implementation Plan for Defense Nuclear Facilities Safety Board Recommendation 2010-2: physical properties important to mixing and scaling. In support of waste simulant development, the following two objectives are the focus of this report: (1) Assess physical and chemical properties important to the testing and development of mixing scaling relationships; (2) Identify the governing properties and associated ranges for LSIT to achieve the Newtonian and non-Newtonian test objectives. This includes the properties to support testing of sampling and heel management systems. The test objectives for LSIT relate to transfer and pump out of solid particles, prototypic integrated operations, sparger operation, PJM controllability, vessel level/density measurement accuracy, sampling, heel management, PJM restart, design and safety margin, Computational Fluid Dynamics (CFD) Verification and Validation (V and V) and comparison, performance testing and scaling, and high temperature operation. The slurry properties that are most important to Performance Testing and Scaling depend on the test objective and rheological classification of the slurry (i

  20. Smooth Phase Interpolated Keying

    Science.gov (United States)

    Borah, Deva K.

    2007-01-01

    Smooth phase interpolated keying (SPIK) is an improved method of computing smooth phase-modulation waveforms for radio communication systems that convey digital information. SPIK is applicable to a variety of phase-shift-keying (PSK) modulation schemes, including quaternary PSK (QPSK), octonary PSK (8PSK), and 16PSK. In comparison with a related prior method, SPIK offers advantages of better performance and less complexity of implementation. In a PSK scheme, the underlying information waveform that one seeks to convey consists of discrete rectangular steps, but the spectral width of such a waveform is excessive for practical radio communication. Therefore, the problem is to smooth the step phase waveform in such a manner as to maintain power and bandwidth efficiency without incurring an unacceptably large error rate and without introducing undesired variations in the amplitude of the affected radio signal. Although the ideal constellation of PSK phasor points does not cause amplitude variations, filtering of the modulation waveform (in which, typically, a rectangular pulse is converted to a square-root raised cosine pulse) causes amplitude fluctuations. If a power-efficient nonlinear amplifier is used in the radio communication system, the fluctuating-amplitude signal can undergo significant spectral regrowth, thus compromising the bandwidth efficiency of the system. In the related prior method, one seeks to solve the problem in a procedure that comprises two major steps: phase-value generation and phase interpolation. SPIK follows the two-step approach of the related prior method, but the details of the steps are different. In the phase-value-generation step, the phase values of symbols in the PSK constellation are determined by a phase function that is said to be maximally smooth and that is chosen to minimize the spectral spread of the modulated signal. In this step, the constellation is divided into two groups by assigning, to information symbols, phase values

  1. Interpolating string field theories

    International Nuclear Information System (INIS)

    Zwiebach, B.

    1992-01-01

    This paper reports that a minimal area problem imposing different length conditions on open and closed curves is shown to define a one-parameter family of covariant open-closed quantum string field theories. These interpolate from a recently proposed factorizable open-closed theory up to an extended version of Witten's open string field theory capable of incorporating on shell closed strings. The string diagrams of the latter define a new decomposition of the moduli spaces of Riemann surfaces with punctures and boundaries based on quadratic differentials with both first order and second order poles

  2. Image Interpolation with Contour Stencils

    OpenAIRE

    Pascal Getreuer

    2011-01-01

    Image interpolation is the problem of increasing the resolution of an image. Linear methods must compromise among artifacts such as jagged edges, blurring, and overshoot (halo). More recent works consider nonlinear methods to improve interpolation of edges and textures. In this paper we apply contour stencils for estimating the image contours based on total variation along curves and then use this estimation to construct a fast edge-adaptive interpolation.

  3. Making the Move: A Mixed Research Integrative Review

    Directory of Open Access Journals (Sweden)

    Sarah Gilbert

    2015-08-01

    The purpose of this mixed research integrative review is to determine factors that influence relocation transitions for older adults who are considering a move from independent living to supervised housing, such as assisted living, using the Theory of Planned Behavior as a conceptual guide. The PubMed, CINAHL, and PsycINFO databases were queried using the keywords relocation, transition, older adults, and elderly, limited to 1992 to 2014. Sixteen articles were retained for review. The majority of articles, qualitative in design, reveal that older adults who comprehend the need to move and participate in the decision-making process of a relocation adjust to new living environments with fewer negative outcomes than older adults who experience a forced relocation. The few quantitative articles examined the elements of impending relocation using a variety of instruments but support the necessity for older adults to recognize the possibility of a future move and contribute to the relocation process. Additionally, the influence of family, friends, and health care providers provides the older adult with support and guidance throughout the process.

  4. Quasi interpolation with Voronoi splines.

    Science.gov (United States)

    Mirzargar, Mahsa; Entezari, Alireza

    2011-12-01

    We present a quasi interpolation framework that attains the optimal approximation-order of Voronoi splines for reconstruction of volumetric data sampled on general lattices. The quasi interpolation framework of Voronoi splines provides an unbiased reconstruction method across various lattices. Therefore this framework allows us to analyze and contrast the sampling-theoretic performance of general lattices, using signal reconstruction, in an unbiased manner. Our quasi interpolation methodology is implemented as an efficient FIR filter that can be applied online or as a preprocessing step. We present visual and numerical experiments that demonstrate the improved accuracy of reconstruction across lattices, using the quasi interpolation framework. © 2011 IEEE

  5. AGILE integration into APC for high mix logic fab

    Science.gov (United States)

    Gatefait, M.; Lam, A.; Le Gratiet, B.; Mikolajczak, M.; Morin, V.; Chojnowski, N.; Kocsis, Z.; Smith, I.; Decaunes, J.; Ostrovsky, A.; Monget, C.

    2015-09-01

    mix logic Fab) in terms of product and technology portfolio. AGILE corrects for up to 120nm of product topography error on process layers with less than 50nm depth of focus. Based on tool functionalities delivered by ASML and on high-volume manufacturing requirements, AGILE integration is a real challenge. Regarding ST requirements, the "Automatic AGILE" functionality developed by ASML was not a turnkey solution, and a dedicated functionality was needed. A "ST homemade AGILE integration" has been fully developed and implemented within ASML and ST constraints. This paper describes this integration in our Advanced Process Control (APC) platform.

  6. Pixel Interpolation Methods

    OpenAIRE

    Mintěl, Tomáš

    2009-01-01

    This master's thesis deals with the acceleration of pixel interpolation methods using the GPU and the NVIDIA (R) CUDA (TM) architecture. The graphical output is represented by a demonstration application for image or video transformation using a selected interpolation method. Time-critical parts of the code are moved to the GPU and executed in parallel. Highly optimized algorithms from Intel's OpenCV library are used for image and video handling.

  7. A Case Study of a Mixed Methods Study Engaged in Integrated Data Analysis

    Science.gov (United States)

    Schiazza, Daniela Marie

    2013-01-01

    The nascent field of mixed methods research has yet to develop a cohesive framework of guidelines and procedures for mixed methods data analysis (Greene, 2008). To support the field's development of analytical frameworks, this case study reflects on the development and implementation of a mixed methods study engaged in integrated data analysis.…

  8. Fuzzy linguistic model for interpolation

    International Nuclear Information System (INIS)

    Abbasbandy, S.; Adabitabar Firozja, M.

    2007-01-01

    In this paper, a fuzzy method for interpolating smooth curves is presented. We present a novel approach to interpolate real data by applying the universal approximation method. In the proposed method, a fuzzy linguistic model (FLM) is applied as a universal approximator for any nonlinear continuous function. Finally, we give some numerical examples and compare the proposed method with the spline method.

  9. A disposition of interpolation techniques

    NARCIS (Netherlands)

    Knotters, M.; Heuvelink, G.B.M.

    2010-01-01

    A large collection of interpolation techniques is available for application in environmental research. To help environmental scientists choose an appropriate technique, a disposition is made, based on 1) applicability in space, time and space-time, 2) quantification of accuracy of interpolated

  10. Contrast-guided image interpolation.

    Science.gov (United States)

    Wei, Zhe; Ma, Kai-Kuang

    2013-11-01

    In this paper, a contrast-guided image interpolation method is proposed that incorporates contrast information into the image interpolation process. Given the image under interpolation, four binary contrast-guided decision maps (CDMs) are generated and used to guide the interpolation filtering through two sequential stages: 1) the 45° and 135° CDMs for interpolating the diagonal pixels and 2) the 0° and 90° CDMs for interpolating the row and column pixels. After applying edge detection to the input image, the generation of a CDM lies in evaluating those nearby non-edge pixels of each detected edge for re-classifying them possibly as edge pixels. This decision is realized by solving two generalized diffusion equations over the computed directional variation (DV) fields using a derived numerical approach to diffuse or spread the contrast boundaries or edges, respectively. The amount of diffusion or spreading is proportional to the amount of local contrast measured at each detected edge. The diffused DV fields are then thresholded for yielding the binary CDMs, respectively. Therefore, the decision bands with variable widths will be created on each CDM. The two CDMs generated in each stage will be exploited as the guidance maps to conduct the interpolation process: for each declared edge pixel on the CDM, a 1-D directional filtering will be applied to estimate its associated to-be-interpolated pixel along the direction as indicated by the respective CDM; otherwise, a 2-D directionless or isotropic filtering will be used instead to estimate the associated missing pixels for each declared non-edge pixel. Extensive simulation results have clearly shown that the proposed contrast-guided image interpolation is superior to other state-of-the-art edge-guided image interpolation methods. In addition, the computational complexity is relatively low when compared with existing methods; hence, it is fairly attractive for real-time image applications.

  11. Interpolation for de-Dopplerisation

    Science.gov (United States)

    Graham, W. R.

    2018-05-01

    'De-Dopplerisation' is one aspect of a problem frequently encountered in experimental acoustics: deducing an emitted source signal from received data. It is necessary when source and receiver are in relative motion, and requires interpolation of the measured signal. This introduces error. In acoustics, typical current practice is to employ linear interpolation and reduce error by over-sampling. In other applications, more advanced approaches with better performance have been developed. Associated with this work is a large body of theoretical analysis, much of which is highly specialised. Nonetheless, a simple and compact performance metric is available: the Fourier transform of the 'kernel' function underlying the interpolation method. Furthermore, in the acoustics context, it is a more appropriate indicator than other, more abstract, candidates. On this basis, interpolators from three families previously identified as promising (piecewise-polynomial, windowed-sinc, and B-spline-based) are compared. The results show that significant improvements over linear interpolation can straightforwardly be obtained. The recommended approach is B-spline-based interpolation, which performs best irrespective of accuracy specification. Its only drawback is a pre-filtering requirement, which represents an additional implementation cost compared to other methods. If this cost is unacceptable, and aliasing errors (on re-sampling) up to approximately 1% can be tolerated, a family of piecewise-cubic interpolators provides the best alternative.
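    Not from the paper, but a minimal sketch of the comparison it describes: resampling a sampled sinusoid at off-grid instants (as de-Dopplerisation requires), once with linear interpolation and once with a piecewise-cubic (Catmull-Rom) interpolator, one member of the piecewise-cubic family mentioned. All signal parameters are illustrative.

```python
import math

def linear_interp(samples, t):
    # t in sample units; assumes 0 <= t <= len(samples) - 1
    i = min(int(math.floor(t)), len(samples) - 2)
    f = t - i
    return (1 - f) * samples[i] + f * samples[i + 1]

def catmull_rom(samples, t):
    # piecewise-cubic (Catmull-Rom) interpolation; index clamped at the ends
    i = min(max(int(math.floor(t)), 1), len(samples) - 3)
    f = t - i
    p0, p1, p2, p3 = samples[i-1], samples[i], samples[i+1], samples[i+2]
    return 0.5 * (2*p1 + (-p0 + p2)*f + (2*p0 - 5*p1 + 4*p2 - p3)*f**2
                  + (-p0 + 3*p1 - 3*p2 + p3)*f**3)

# A sinusoid sampled at 8 samples per period, resampled at off-grid instants
fs = 8.0
samples = [math.sin(2*math.pi*n/fs) for n in range(64)]
ts = [5.0 + 0.37*k for k in range(100)]          # stay away from the ends
err_lin = max(abs(linear_interp(samples, t) - math.sin(2*math.pi*t/fs)) for t in ts)
err_cub = max(abs(catmull_rom(samples, t) - math.sin(2*math.pi*t/fs)) for t in ts)
print(err_lin, err_cub)
```

    The piecewise-cubic kernel's flatter Fourier transform shows up directly as a much smaller worst-case resampling error than linear interpolation on the same data.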

  12. Occlusion-Aware View Interpolation

    Directory of Open Access Journals (Sweden)

    Janusz Konrad

    2009-01-01

    View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in the presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in the view to be computed are classified in terms of their visibility in the input images. Then, disparity for each pixel is estimated from different image pairs depending on the computed visibility map. Finally, luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, both on synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.

  13. Occlusion-Aware View Interpolation

    Directory of Open Access Journals (Sweden)

    Ince Serdar

    2008-01-01

    View interpolation is an essential step in content preparation for multiview 3D displays, free-viewpoint video, and multiview image/video compression. It is performed by establishing a correspondence among views, followed by interpolation using the corresponding intensities. However, occlusions pose a significant challenge, especially if few input images are available. In this paper, we identify challenges related to disparity estimation and view interpolation in the presence of occlusions. We then propose an occlusion-aware intermediate view interpolation algorithm that uses four input images to handle the disappearing areas. The algorithm consists of three steps. First, all pixels in the view to be computed are classified in terms of their visibility in the input images. Then, disparity for each pixel is estimated from different image pairs depending on the computed visibility map. Finally, luminance/color of each pixel is adaptively interpolated from an image pair selected by its visibility label. Extensive experimental results show striking improvements in interpolated image quality over occlusion-unaware interpolation from two images and very significant gains over occlusion-aware spline-based reconstruction from four images, both on synthetic and real images. Although improvements are obvious only in the vicinity of object boundaries, this should be useful in high-quality 3D applications, such as digital 3D cinema and ultra-high resolution multiview autostereoscopic displays, where distortions at depth discontinuities are highly objectionable, especially if they vary with viewpoint change.

  14. Analogue and Mixed-Signal Integrated Circuits for Space Applications

    CERN Document Server

    2014-01-01

    The purpose of AMICSA 2014 (organised in collaboration of ESA and CERN) is to provide an international forum for the presentation and discussion of recent advances in analogue and mixed-signal VLSI design techniques and technologies for space applications.

  15. A Methodology for Conducting Integrative Mixed Methods Research and Data Analyses

    Science.gov (United States)

    Castro, Felipe González; Kellison, Joshua G.; Boyd, Stephen J.; Kopak, Albert

    2011-01-01

    Mixed methods research has gained visibility within the last few years, although limitations persist regarding the scientific caliber of certain mixed methods research designs and methods. The need exists for rigorous mixed methods designs that integrate various data analytic procedures for a seamless transfer of evidence across qualitative and quantitative modalities. Such designs can offer the strength of confirmatory results drawn from quantitative multivariate analyses, along with “deep structure” explanatory descriptions as drawn from qualitative analyses. This article presents evidence generated from over a decade of pilot research in developing an integrative mixed methods methodology. It presents a conceptual framework and methodological and data analytic procedures for conducting mixed methods research studies, and it also presents illustrative examples from the authors' ongoing integrative mixed methods research studies. PMID:22167325

  16. Mixed methods in psychotherapy research: A review of method(ology) integration in psychotherapy science.

    Science.gov (United States)

    Bartholomew, Theodore T; Lockard, Allison J

    2018-06-13

    Mixed methods can foster depth and breadth in psychological research. However, its use remains in development in psychotherapy research. Our purpose was to review the use of mixed methods in psychotherapy research. Thirty-one studies were identified via the PRISMA systematic review method. Using Creswell & Plano Clark's typologies to identify design characteristics, we assessed each study for rigor and how each used mixed methods. Key features of mixed methods designs and these common patterns were identified: (a) integration of clients' perceptions via mixing; (b) understanding group psychotherapy; (c) integrating methods with cases and small samples; (d) analyzing clinical data as qualitative data; and (e) exploring cultural identities in psychotherapy through mixed methods. The review is discussed with respect to the value of integrating multiple data in single studies to enhance psychotherapy research. © 2018 Wiley Periodicals, Inc.

  17. BIMOND3, Monotone Bivariate Interpolation

    International Nuclear Information System (INIS)

    Fritsch, F.N.; Carlson, R.E.

    2001-01-01

    1 - Description of program or function: BIMOND is a FORTRAN-77 subroutine for piecewise bi-cubic interpolation to data on a rectangular mesh, which reproduces the monotonicity of the data. A driver program, BIMOND1, is provided which reads data, computes the interpolating surface parameters, and evaluates the function on a mesh suitable for plotting. 2 - Method of solution: Monotone piecewise bi-cubic Hermite interpolation is used. 3 - Restrictions on the complexity of the problem: The current version of the program can treat data which are monotone in only one of the independent variables, but cannot handle piecewise monotone data.
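    BIMOND builds on one-dimensional monotone piecewise-cubic Hermite interpolation. The 1-D ingredient can be sketched as follows; this is a simplified limiter in the spirit of Fritsch-Carlson (the harmonic-mean slope rule is one common choice), not the BIMOND code itself.

```python
def monotone_slopes(x, y):
    # Fritsch-Carlson-style slope limiting for monotone data
    n = len(x)
    d = [(y[i+1] - y[i]) / (x[i+1] - x[i]) for i in range(n - 1)]  # secants
    m = [0.0] * n
    m[0], m[-1] = d[0], d[-1]
    for i in range(1, n - 1):
        if d[i-1] * d[i] <= 0:
            m[i] = 0.0                              # flat tangent at extrema
        else:
            m[i] = 2*d[i-1]*d[i] / (d[i-1] + d[i])  # harmonic mean of secants
    return m

def hermite_eval(x, y, m, t):
    # evaluate the cubic Hermite interpolant at t
    i = 0
    while i < len(x) - 2 and t > x[i+1]:
        i += 1
    h = x[i+1] - x[i]
    s = (t - x[i]) / h
    h00 = 2*s**3 - 3*s**2 + 1
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    return h00*y[i] + h10*h*m[i] + h01*y[i+1] + h11*h*m[i+1]

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [0.0, 0.1, 0.2, 3.0, 3.1]   # monotone but unevenly increasing
m = monotone_slopes(x, y)
vals = [hermite_eval(x, y, m, 0.05*k) for k in range(80)]
assert all(b >= a - 1e-12 for a, b in zip(vals, vals[1:]))  # stays monotone
```

    The harmonic mean keeps each limited slope below twice the smaller neighbouring secant, which places it inside the Fritsch-Carlson monotonicity region, so the interpolant cannot overshoot between the jumps in the data.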

  18. The research on NURBS adaptive interpolation technology

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Zhang, Sujia; Zhang, Feng

    2017-04-01

    This paper addresses shortcomings of existing NURBS interpolation techniques, such as long interpolation times, complicated calculations, and a step error along the NURBS curve that is not easily controlled. It proposes and simulates an adaptive interpolation algorithm for NURBS curves, which computes the interpolated points (xi, yi, zi) adaptively. Simulation results show that the algorithm is correct and that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems.
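    The record gives no algorithmic detail, but the general idea of adaptive interpolation along a parametric curve (shrink the parameter step wherever the chord deviates too far from the curve) can be sketched. A cubic Bézier stands in for a full NURBS evaluator here, and the tolerance and step sizes are illustrative only.

```python
def bezier(p, u):
    # cubic Bezier point; a stand-in for a full NURBS evaluator
    b = [(1-u)**3, 3*u*(1-u)**2, 3*u**2*(1-u), u**3]
    return tuple(sum(c*pt[k] for c, pt in zip(b, p)) for k in (0, 1))

def chord_error(curve, u0, u1):
    # deviation of the curve midpoint from the chord midpoint
    x0, y0 = curve(u0); x1, y1 = curve(u1)
    xm, ym = curve(0.5*(u0 + u1))
    return ((xm - 0.5*(x0 + x1))**2 + (ym - 0.5*(y0 + y1))**2) ** 0.5

def adaptive_interpolate(curve, tol, du0=0.2):
    # halve the parameter step wherever the chord error exceeds tol
    us, u = [0.0], 0.0
    while u < 1.0 - 1e-12:
        du = min(du0, 1.0 - u)
        while chord_error(curve, u, u + du) > tol:
            du *= 0.5
        u += du
        us.append(u)
    return us

ctrl = [(0, 0), (0, 2), (4, 2), (4, 0)]
curve = lambda u: bezier(ctrl, u)
us = adaptive_interpolate(curve, tol=1e-3)
print(len(us))   # more parameter steps where the curve bends
```

    A production interpolator would derive the step from feedrate and local curvature rather than by bisection, but the control objective (bounded chord error per step) is the same.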

  19. COMPARISONS BETWEEN DIFFERENT INTERPOLATION TECHNIQUES

    Directory of Open Access Journals (Sweden)

    G. Garnero

    2014-01-01

    In the present study, different algorithms are analysed in order to identify an optimal interpolation methodology. The recent digital model produced by the Regione Piemonte with airborne LIDAR, together with test sections realized at higher resolutions and independent digital models of the same territory, make it possible to set up a series of analyses and determine the best interpolation methodologies. Analysis of the residuals on the test sites allows calculation of the descriptive statistics of the computed values: all the algorithms furnished interesting results; notably, for dense models, the IDW (Inverse Distance Weighting) algorithm gives the best results in this case study. Moreover, a comparative analysis was carried out by interpolating data at different input point densities, with the purpose of highlighting thresholds in input density that may influence the quality of the final output in the interpolation phase.
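    For reference, the IDW (Inverse Distance Weighting) interpolator that performed best in this study has a very compact core; a minimal sketch with illustrative data points follows (not the study's implementation).

```python
def idw(known, x, y, power=2.0, eps=1e-12):
    # Inverse Distance Weighting: weights fall off as 1 / distance**power
    num = den = 0.0
    for xi, yi, zi in known:
        d2 = (x - xi)**2 + (y - yi)**2
        if d2 < eps:
            return zi                 # exactly at a data point
        w = d2 ** (-power / 2.0)
        num += w * zi
        den += w
    return num / den

# Illustrative elevation samples (x, y, z)
pts = [(0, 0, 10.0), (1, 0, 20.0), (0, 1, 30.0), (1, 1, 40.0)]
print(idw(pts, 0.5, 0.5))   # centre of the square: all weights equal
```

    IDW is an exact interpolator (it reproduces the data at the sample points) and always stays within the range of the input values, which partly explains its robustness on dense LIDAR-derived models.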

  20. Interpolation in Spaces of Functions

    Directory of Open Access Journals (Sweden)

    K. Mosaleheh

    2006-03-01

    Full Text Available In this paper we consider the interpolation by certain functions such as trigonometric and rational functions for finite dimensional linear space X. Then we extend this to infinite dimensional linear spaces

  1. Mixed Element Formulation for the Finite Element-Boundary Integral Method

    National Research Council Canada - National Science Library

    Meese, J; Kempel, L. C; Schneider, S. W

    2006-01-01

    A mixed element approach using right hexahedral elements and right prism elements for the finite element-boundary integral method is presented and discussed for the study of planar cavity-backed antennas...

  2. On the mixed discretization of the time domain magnetic field integral equation

    KAUST Repository

    Ulku, Huseyin Arda; Bogaert, Ignace; Cools, Kristof; Andriulli, Francesco P.; Bagci, Hakan

    2012-01-01

    Time domain magnetic field integral equation (MFIE) is discretized using divergence-conforming Rao-Wilton-Glisson (RWG) and curl-conforming Buffa-Christiansen (BC) functions as spatial basis and testing functions, respectively. The resulting mixed

  3. Integral enthalpy of mixing of the liquid ternary Au-Cu-Sn system

    International Nuclear Information System (INIS)

    Knott, S.; Li, Z.; Mikula, A.

    2008-01-01

    The integral enthalpy of mixing of the ternary Au-Cu-Sn system has been determined with a Calvet-type calorimeter along 6 different cross sections at 1273 K. The substitutional solution model of Redlich-Kister-Muggianu was used for a least-squares fit of the experimental data in order to get an analytical expression for the integral enthalpy of mixing. The ternary extrapolation models of Kohler, Muggianu and Toop were used to calculate the integral enthalpy of mixing and to compare measured and extrapolated values. Additional calculations of the integral enthalpy of mixing using the Chou model have been performed. With the calculated data, the iso-enthalpy lines have been determined using the Redlich-Kister-Muggianu model. A comparison of the data has been made.
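    The Redlich-Kister-Muggianu substitutional-solution form used for such fits can be sketched as a sum of binary Redlich-Kister polynomials evaluated at the ternary composition. The interaction parameters below are hypothetical placeholders, not the paper's fitted Au-Cu-Sn values.

```python
def rk_binary(xi, xj, L):
    # binary Redlich-Kister term: xi*xj * sum_k L[k]*(xi - xj)**k
    return xi * xj * sum(Lk * (xi - xj)**k for k, Lk in enumerate(L))

def h_mix_muggianu(x, L_pairs):
    # Redlich-Kister-Muggianu: sum the binary contributions over all pairs
    return sum(rk_binary(x[i], x[j], L) for (i, j), L in L_pairs.items())

# Hypothetical interaction parameters in J/mol -- NOT the paper's fitted values
L = {(0, 1): [-20000.0, 5000.0],   # Au-Cu
     (0, 2): [-15000.0, 3000.0],   # Au-Sn
     (1, 2): [-8000.0, 1000.0]}    # Cu-Sn
x = (0.3, 0.3, 0.4)                # mole fractions x_Au, x_Cu, x_Sn
assert abs(sum(x) - 1.0) < 1e-12
print(h_mix_muggianu(x, L))        # integral enthalpy of mixing at x
```

    By construction the model vanishes for the pure components and reduces to the fitted binary polynomials on each edge of the composition triangle, which is what makes it convenient for drawing iso-enthalpy lines.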

  4. Automated Design Tools for Integrated Mixed-Signal Microsystems (NeoCAD)

    National Research Council Canada - National Science Library

    Petre, P; Visher, J; Shringarpure, R; Valley, F; Swaminathan, M

    2005-01-01

    Automated design tools and integrated design flow methodologies were developed that demonstrated more than an order-of-magnitude reduction in cycle time and cost for mixed signal (digital/analog/RF...

  5. Trace interpolation by slant-stack migration

    International Nuclear Information System (INIS)

    Novotny, M.

    1990-01-01

    The slant-stack migration formula based on the Radon transform is studied with respect to the depth step Δz of wavefield extrapolation. It can be viewed as a generalized trace-interpolation procedure including wave extrapolation with an arbitrary step Δz. For Δz = 0 the formula yields the familiar plane-wave decomposition, while for Δz > 0 it provides a robust tool for migration transformation of spatially undersampled wavefields. Using the stationary phase method, it is shown that the slant-stack migration formula degenerates into the Rayleigh-Sommerfeld integral in the far-field approximation. Consequently, even a narrow slant-stack gather applied before the diffraction stack can significantly improve the representation of noisy data in the wavefield extrapolation process. The theory is applied to synthetic and field data to perform trace interpolation and dip-reject filtering. The data examples presented prove that the Radon interpolator works well in the dip range, including waves with mutual stepouts smaller than half the dominant period.
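    A toy numeric illustration of the underlying τ-p (slant-stack) transform: each output trace sums the input along lines t = τ + p·x, so a linear event stacks coherently only at its own slowness. The geometry and slowness values below are invented for the example and use nearest-sample lookup rather than proper interpolation.

```python
def slant_stack(traces, dx, dt, slownesses):
    # tau-p transform: S(p, tau) = sum over offsets x of d(x, tau + p*x),
    # with nearest-sample lookup for simplicity
    nt = len(traces[0])
    out = []
    for p in slownesses:
        row = []
        for it in range(nt):
            s = 0.0
            for ix, trace in enumerate(traces):
                j = int(round((it * dt + p * ix * dx) / dt))
                if 0 <= j < nt:
                    s += trace[j]
            row.append(s)
        out.append(row)
    return out

# Toy data: a single linear event with slowness p0 = 0.002 s/m
dx, dt, nx, nt, p0, t0 = 10.0, 0.004, 16, 128, 0.002, 0.048
traces = [[0.0] * nt for _ in range(nx)]
for ix in range(nx):
    traces[ix][int(round((t0 + p0 * ix * dx) / dt))] = 1.0
S = slant_stack(traces, dx, dt, [0.0, 0.002, 0.004])
best = max(range(3), key=lambda k: max(S[k]))
print(best)   # the trial slowness matching the event stacks coherently
```

    Because only the matching slowness aligns all 16 spikes, the transform concentrates the event into a small τ-p region, which is what makes dip-reject filtering and trace interpolation in that domain possible.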

  6. Image Interpolation with Geometric Contour Stencils

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-09-01

    We consider the image interpolation problem where, given an image with uniformly-sampled pixels vm,n and point spread function h, the goal is to find a function u(x,y) satisfying vm,n = (h*u)(m,n) for all m,n in Z. This article improves upon the IPOL article Image Interpolation with Contour Stencils. In the previous work, contour stencils are used to estimate the image contours locally as short line segments. This article begins with a continuous formulation of total variation integrated over a collection of curves and defines contour stencils as a consistent discretization. This discretization is more reliable than the previous approach and can effectively distinguish contours that are locally shaped like lines, curves, corners, and circles. These improved contour stencils sense more of the geometry in the image. Interpolation is performed using an extension of the method described in the previous article. Using the improved contour stencils, there is an increase in image quality while maintaining similar computational efficiency.
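    The core contour-stencil idea (measure total variation along candidate curves and take the orientation of least variation as the local contour direction) can be sketched with straight-line stencils; the stencil shapes and test patch below are illustrative only, far simpler than the article's curve/corner/circle stencils.

```python
def tv_along(patch, offsets):
    # total variation along a line stencil given by (dx, dy) pixel offsets
    c = len(patch) // 2
    vals = [patch[c + dy][c + dx] for dx, dy in offsets]
    return sum(abs(b - a) for a, b in zip(vals, vals[1:]))

STENCILS = {                      # toy line stencils through the centre pixel
    'horizontal': [(-1, 0), (0, 0), (1, 0)],
    'vertical':   [(0, -1), (0, 0), (0, 1)],
    'diag45':     [(-1, 1), (0, 0), (1, -1)],
    'diag135':    [(-1, -1), (0, 0), (1, 1)],
}

def best_orientation(patch):
    # the contour direction is the stencil with the *least* variation
    return min(STENCILS, key=lambda name: tv_along(patch, STENCILS[name]))

# A 3x3 patch containing a vertical edge between column 0 and column 1
patch = [[0, 1, 1],
         [0, 1, 1],
         [0, 1, 1]]
print(best_orientation(patch))   # contours run along the edge, not across it
```

    Once the minimizing stencil is known, an edge-adaptive interpolator filters along that orientation instead of isotropically, which is what suppresses jagged-edge artifacts.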

  7. Mixed, Nonsplit, Extended Stability, Stiff Integration of Reaction Diffusion Equations

    KAUST Repository

    Alzahrani, Hasnaa H.

    2016-01-01

    A tailored integration scheme is developed to treat stiff reaction-diffusion problems. The construction adapts a stiff solver, namely VODE, to treat reaction implicitly together with explicit treatment of diffusion. The second-order Runge

  8. Environmental Management Integration Project/Mixed Waste Focus Area Partnership

    International Nuclear Information System (INIS)

    Gombert, D.; Kristofferson, K.; Cole, L.

    1999-01-01

    On January 16, 1998, the Assistant Secretary for the Environmental Management (EM) Program at the Department of Energy issued DOE-Idaho the Program Integration and Systems Engineering Guidance for Fiscal Year 1998, herein called the Guidance, which directed that program integration tasks be performed for all EM program areas. This Guidance directed the EM Integration team, as part of Task 1, to develop baseline waste and material disposition maps, which are owned by the site Project Baseline Summary (PBS) manager. With these baselines in place, Task 2 directed that Science and Technology activities be linked to the waste and material streams supported by each technology. This linkage of EM Program needs with the OST activities supports the DOE goal of maximizing cleanup at DOE sites by 2006 and provides a defensible science and technology program. Additionally, this linkage is a valuable tool in the integration of the waste and material disposition efforts for the DOE complex.

  9. Mixed, Nonsplit, Extended Stability, Stiff Integration of Reaction Diffusion Equations

    KAUST Repository

    Alzahrani, Hasnaa H.

    2016-07-26

    A tailored integration scheme is developed to treat stiff reaction-diffusion problems. The construction adapts a stiff solver, namely VODE, to treat reaction implicitly together with explicit treatment of diffusion. The second-order Runge-Kutta-Chebyshev (RKC) scheme is adjusted to integrate diffusion. The spatial operator is discretised by second-order finite differences on a uniform grid. The overall solution is advanced over S fractional stiff integrations, where S corresponds to the number of RKC stages. The behavior of the scheme is analyzed by applying it to three simple problems. The results show that it achieves second-order accuracy, thus preserving the formal accuracy of the original RKC. The presented development sets the stage for future extensions, particularly to multidimensional reacting flows with detailed chemistry.
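    A much-simplified sketch of the mixed implicit/explicit idea follows: a linear reaction term handled implicitly (backward Euler), then an explicit finite-difference diffusion substep. The actual scheme uses VODE for the reaction and RKC stages for diffusion, which are not reproduced here; all coefficients below are illustrative.

```python
import math

def split_step(u, dt, k, D, dx):
    # 1) reaction du/dt = -k*u treated implicitly (backward Euler)
    u = [ui / (1.0 + k * dt) for ui in u]
    # 2) diffusion treated explicitly on a periodic grid;
    #    explicit stability requires D*dt/dx**2 <= 0.5
    n = len(u)
    lap = [(u[(i - 1) % n] - 2*u[i] + u[(i + 1) % n]) / dx**2 for i in range(n)]
    return [u[i] + dt * D * lap[i] for i in range(n)]

n, dx, D, k, dt = 32, 1.0 / 32, 0.001, 1.0, 0.01
u = [math.sin(2 * math.pi * i * dx) for i in range(n)]   # one Fourier mode
for _ in range(100):
    u = split_step(u, dt, k, D, dx)
amp = max(abs(v) for v in u)
print(amp)   # amplitude decays under both reaction and diffusion
```

    The implicit reaction step stays stable however stiff k becomes, while the diffusion substep is cheap and explicit; the paper's contribution is doing the explicit part with RKC stages so that the diffusion stability limit is greatly extended at second-order accuracy.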

  10. Integrated treatment process of hazardous and mixed wastes

    International Nuclear Information System (INIS)

    Shibuya, M.; Suzuki, K.; Fujimura, Y.; Nakashima, T.; Moriya, Y.

    1993-01-01

    An integrated waste treatment system was studied based on technologies developed for the treatment of liquid radioactive, organic, and aqueous wastes containing hazardous materials and soils contaminated with heavy metals. The system consists of submerged incineration, metal ion fixing and stabilization, and soil washing treatments. Introduction of this system allows for the simultaneous processing of toxic waste and contaminated soils. Hazardous organic wastes can be decomposed into harmless gases, and aqueous wastes can be converted into a dischargeable effluent. The contaminated soil is backfilled after the removal of toxic materials. Experimental data show that the integration system is practical for complicated toxic wastes

  11. Elastic-Plastic J-Integral Solutions for Surface Cracks in Tension Using an Interpolation Methodology. Appendix C -- Finite Element Models Solution Database File, Appendix D -- Benchmark Finite Element Models Solution Database File

    Science.gov (United States)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    No closed form solutions exist for the elastic-plastic J-integral for surface cracks due to the nonlinear, three-dimensional nature of the problem. Traditionally, each surface crack must be analyzed with a unique and time-consuming nonlinear finite element analysis. To overcome this shortcoming, the authors have developed and analyzed an array of 600 3D nonlinear finite element models for surface cracks in flat plates under tension loading. The solution space covers a wide range of crack shapes and depths (shape: 0.2 less than or equal to a/c less than or equal to 1, depth: 0.2 less than or equal to a/B less than or equal to 0.8) and material flow properties (elastic modulus-to-yield ratio: 100 less than or equal to E/ys less than or equal to 1,000, and hardening: 3 less than or equal to n less than or equal to 20). The authors have developed a methodology for interpolating between the geometric and material property variables that allows the user to reliably evaluate the full elastic-plastic J-integral and force versus crack mouth opening displacement solution; thus, a solution can be obtained very rapidly by users without elastic-plastic fracture mechanics modeling experience. Complete solutions for the 600 models and 25 additional benchmark models are provided in tabular format.
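Interpolating between tabulated solutions of this kind can be illustrated with a simple bilinear lookup over two geometry variables such as a/c and a/B. This is a sketch only; the authors' methodology also interpolates over the material properties, and the table values below are invented for illustration.

```python
import bisect

def bilinear_table(xs, ys, table, x, y):
    """Bilinear interpolation in a table of precomputed solutions,
    indexed by strictly increasing grid vectors xs and ys."""
    # Locate the cell containing (x, y), clamping to the table edges.
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect.bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    return ((1 - tx) * (1 - ty) * table[i][j] + tx * (1 - ty) * table[i + 1][j]
            + (1 - tx) * ty * table[i][j + 1] + tx * ty * table[i + 1][j + 1])

xs = [0.2, 0.6, 1.0]          # a/c grid
ys = [0.2, 0.5, 0.8]          # a/B grid
table = [[1.0, 2.0, 3.0],     # hypothetical solution values at (xs[i], ys[j])
         [2.0, 3.0, 4.0],
         [3.0, 4.0, 5.0]]
val = bilinear_table(xs, ys, table, 0.4, 0.35)  # query between grid nodes
```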

  12. Criteria for quantitative and qualitative data integration: mixed-methods research methodology.

    Science.gov (United States)

    Lee, Seonah; Smith, Carrol A M

    2012-05-01

    Many studies have emphasized the need and importance of a mixed-methods approach for evaluation of clinical information systems. However, those studies had no criteria to guide integration of multiple data sets. Integrating different data sets serves to actualize the paradigm that a mixed-methods approach argues; thus, we require criteria that provide the right direction to integrate quantitative and qualitative data. The first author used a set of criteria organized from a literature search for integration of multiple data sets from mixed-methods research. The purpose of this article was to reorganize the identified criteria. Through critical appraisal of the reasons for designing mixed-methods research, three criteria resulted: validation, complementarity, and discrepancy. In applying the criteria to empirical data of a previous mixed methods study, integration of quantitative and qualitative data was achieved in a systematic manner. It helped us obtain a better organized understanding of the results. The criteria of this article offer the potential to produce insightful analyses of mixed-methods evaluations of health information systems.

  13. Solving the Schroedinger equation using Smolyak interpolants

    International Nuclear Information System (INIS)

    Avila, Gustavo; Carrington, Tucker Jr.

    2013-01-01

    In this paper, we present a new collocation method for solving the Schroedinger equation. Collocation has the advantage that it obviates integrals. All previous collocation methods have, however, the crucial disadvantage that they require solving a generalized eigenvalue problem. By combining Lagrange-like functions with a Smolyak interpolant, we devise a collocation method that does not require solving a generalized eigenvalue problem. We exploit the structure of the grid to develop an efficient algorithm for evaluating the matrix-vector products required to compute energy levels and wavefunctions. Energies systematically converge as the number of points and basis functions is increased.

  14. The Role of Identity Integration in Enhancing Creativity among Mixed-Race Individuals

    Science.gov (United States)

    Tendayi Viki, G.; Williams, May Liang J.

    2014-01-01

    Identity integration among bicultural individuals refers to the perception that their two cultural identities are compatible. Previous research has shown that identity integration is likely to lead to enhanced creativity. However, this research was conducted among first- and second-generation immigrants, but not among mixed-race individuals. The…

  15. Integrating Quantitative and Qualitative Results in Health Science Mixed Methods Research Through Joint Displays.

    Science.gov (United States)

    Guetterman, Timothy C; Fetters, Michael D; Creswell, John W

    2015-11-01

    Mixed methods research is becoming an important methodology to investigate complex health-related topics, yet the meaningful integration of qualitative and quantitative data remains elusive and needs further development. A promising innovation to facilitate integration is the use of visual joint displays that bring data together visually to draw out new insights. The purpose of this study was to identify exemplar joint displays by analyzing the various types of joint displays being used in published articles. We searched for empirical articles that included joint displays in 3 journals that publish state-of-the-art mixed methods research. We analyzed each of 19 identified joint displays to extract the type of display, mixed methods design, purpose, rationale, qualitative and quantitative data sources, integration approaches, and analytic strategies. Our analysis focused on what each display communicated and its representation of mixed methods analysis. The most prevalent types of joint displays were statistics-by-themes and side-by-side comparisons. Innovative joint displays connected findings to theoretical frameworks or recommendations. Researchers used joint displays for convergent, explanatory sequential, exploratory sequential, and intervention designs. We identified exemplars for each of these designs by analyzing the inferences gained through using the joint display. Exemplars represented mixed methods integration, presented integrated results, and yielded new insights. Joint displays appear to provide a structure to discuss the integrated analysis and assist both researchers and readers in understanding how mixed methods provides new insights. We encourage researchers to use joint displays to integrate and represent mixed methods analysis and discuss their value. © 2015 Annals of Family Medicine, Inc.

  16. Integrating Quantitative and Qualitative Results in Health Science Mixed Methods Research Through Joint Displays

    Science.gov (United States)

    Guetterman, Timothy C.; Fetters, Michael D.; Creswell, John W.

    2015-01-01

    PURPOSE Mixed methods research is becoming an important methodology to investigate complex health-related topics, yet the meaningful integration of qualitative and quantitative data remains elusive and needs further development. A promising innovation to facilitate integration is the use of visual joint displays that bring data together visually to draw out new insights. The purpose of this study was to identify exemplar joint displays by analyzing the various types of joint displays being used in published articles. METHODS We searched for empirical articles that included joint displays in 3 journals that publish state-of-the-art mixed methods research. We analyzed each of 19 identified joint displays to extract the type of display, mixed methods design, purpose, rationale, qualitative and quantitative data sources, integration approaches, and analytic strategies. Our analysis focused on what each display communicated and its representation of mixed methods analysis. RESULTS The most prevalent types of joint displays were statistics-by-themes and side-by-side comparisons. Innovative joint displays connected findings to theoretical frameworks or recommendations. Researchers used joint displays for convergent, explanatory sequential, exploratory sequential, and intervention designs. We identified exemplars for each of these designs by analyzing the inferences gained through using the joint display. Exemplars represented mixed methods integration, presented integrated results, and yielded new insights. CONCLUSIONS Joint displays appear to provide a structure to discuss the integrated analysis and assist both researchers and readers in understanding how mixed methods provides new insights. We encourage researchers to use joint displays to integrate and represent mixed methods analysis and discuss their value. PMID:26553895

  17. Points of Convergence in Music Education: The Use of Data Labels as a Strategy for Mixed Methods Integration

    Science.gov (United States)

    Fitzpatrick, Kate R.

    2016-01-01

    Although the mixing of quantitative and qualitative data is an essential component of mixed methods research, the process of integrating both types of data in meaningful ways can be challenging. The purpose of this article is to describe the use of data labels in mixed methods research as a technique for the integration of qualitative and…

  18. Gaussian Process Interpolation for Uncertainty Estimation in Image Registration

    Science.gov (United States)

    Wachinger, Christian; Golland, Polina; Reuter, Martin; Wells, William

    2014-01-01

    Intensity-based image registration requires resampling images on a common grid to evaluate the similarity function. The uncertainty of interpolation varies across the image, depending on the location of resampled points relative to the base grid. We propose to perform Bayesian inference with Gaussian processes, where the covariance matrix of the Gaussian process posterior distribution estimates the uncertainty in interpolation. The Gaussian process replaces a single image with a distribution over images that we integrate into a generative model for registration. Marginalization over resampled images leads to a new similarity measure that includes the uncertainty of the interpolation. We demonstrate that our approach increases the registration accuracy and propose an efficient approximation scheme that enables seamless integration with existing registration methods. PMID:25333127

  19. Spatiotemporal Interpolation Methods for Solar Event Trajectories

    Science.gov (United States)

    Filali Boubrahimi, Soukaina; Aydin, Berkay; Schuh, Michael A.; Kempton, Dustin; Angryk, Rafal A.; Ma, Ruizhe

    2018-05-01

    This paper introduces four spatiotemporal interpolation methods that enrich complex, evolving region trajectories that are reported from a variety of ground-based and space-based solar observatories every day. Our interpolation module takes an existing solar event trajectory as its input and generates an enriched trajectory with any number of additional time–geometry pairs created by the most appropriate method. To this end, we designed four different interpolation techniques: MBR-Interpolation (Minimum Bounding Rectangle Interpolation), CP-Interpolation (Complex Polygon Interpolation), FI-Interpolation (Filament Polygon Interpolation), and Areal-Interpolation, which are presented here in detail. These techniques leverage k-means clustering, centroid shape signature representation, dynamic time warping, linear interpolation, and shape buffering to generate the additional polygons of an enriched trajectory. Using ground-truth objects, interpolation effectiveness is evaluated through a variety of measures based on several important characteristics that include spatial distance, area overlap, and shape (boundary) similarity. To our knowledge, this is the first research effort of this kind that attempts to address the broad problem of spatiotemporal interpolation of solar event trajectories. We conclude with a brief outline of future research directions and opportunities for related work in this area.
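The simplest of the four techniques, MBR-Interpolation, reduces each reported region to its minimum bounding rectangle and interpolates the corners linearly in time. A rough sketch of that idea (our own toy code, not the authors' module):

```python
def interpolate_mbr(mbr_a, mbr_b, t_a, t_b, t):
    """Linearly interpolate a minimum bounding rectangle (x1, y1, x2, y2)
    between two observation times t_a < t_b. A sketch of the
    MBR-Interpolation idea described in the abstract."""
    w = (t - t_a) / (t_b - t_a)   # fractional position between observations
    return tuple(a + w * (b - a) for a, b in zip(mbr_a, mbr_b))

# Halfway between two observed bounding rectangles of a moving event region.
mid = interpolate_mbr((0, 0, 2, 2), (4, 4, 8, 8), t_a=0.0, t_b=10.0, t=5.0)
# mid == (2.0, 2.0, 5.0, 5.0)
```

The other three techniques enrich this with full polygon boundaries; the bounding-rectangle version trades shape fidelity for simplicity.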

  20. A Note on Cubic Convolution Interpolation

    OpenAIRE

    Meijering, E.; Unser, M.

    2003-01-01

    We establish a link between classical osculatory interpolation and modern convolution-based interpolation and use it to show that two well-known cubic convolution schemes are formally equivalent to two osculatory interpolation schemes proposed in the actuarial literature about a century ago. We also discuss computational differences and give examples of other cubic interpolation schemes not previously studied in signal and image processing.
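For reference, the best-known cubic convolution scheme is Keys' kernel with a = -1/2, one of the classical schemes the abstract relates to osculatory interpolation. A brief sketch of that kernel (our illustration, not code from the paper):

```python
def keys_kernel(x, a=-0.5):
    """Keys' piecewise-cubic convolution kernel; a = -0.5 is the
    standard choice, which reproduces quadratics exactly."""
    x = abs(x)
    if x < 1:
        return (a + 2) * x ** 3 - (a + 3) * x ** 2 + 1
    if x < 2:
        return a * x ** 3 - 5 * a * x ** 2 + 8 * a * x - 4 * a
    return 0.0

def cubic_interp(samples, t):
    """Interpolate uniformly spaced samples at fractional position t,
    using the four nearest samples (t must be >= 1 and <= len - 3)."""
    i = int(t)
    return sum(samples[i + k] * keys_kernel(t - (i + k)) for k in (-1, 0, 1, 2))

# samples of t**2: the kernel is interpolating and exact for quadratics.
vals = [0.0, 1.0, 4.0, 9.0, 16.0]
exact_at_node = cubic_interp(vals, 2.0)      # reproduces the sample: 4.0
between_nodes = cubic_interp(vals, 1.5)      # reproduces 1.5**2 = 2.25
```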

  1. Node insertion in Coalescence Fractal Interpolation Function

    International Nuclear Information System (INIS)

    Prasad, Srijanani Anurag

    2013-01-01

    The Iterated Function System (IFS) used in the construction of Coalescence Hidden-variable Fractal Interpolation Function (CHFIF) depends on the interpolation data. The insertion of a new point in a given set of interpolation data is called the problem of node insertion. In this paper, the effect of insertion of new point on the related IFS and the Coalescence Fractal Interpolation Function is studied. Smoothness and Fractal Dimension of a CHFIF obtained with a node are also discussed

  2. Bayer Demosaicking with Polynomial Interpolation.

    Science.gov (United States)

    Wu, Jiaji; Anisetti, Marco; Wu, Wei; Damiani, Ernesto; Jeon, Gwanggil

    2016-08-30

    Demosaicking is a digital image process to reconstruct full color digital images from incomplete color samples from an image sensor. It is an unavoidable process for many devices incorporating a camera sensor (e.g. mobile phones, tablets, etc.). In this paper, we introduce a new demosaicking algorithm based on polynomial interpolation-based demosaicking (PID). Our method makes three contributions: calculation of error predictors, edge classification based on color differences, and a refinement stage using a weighted sum strategy. Our new predictors are generated on the basis of polynomial interpolation, and can be used as a sound alternative to other predictors obtained by bilinear or Laplacian interpolation. In this paper we show how our predictors can be combined according to the proposed edge classifier. After populating three color channels, a refinement stage is applied to enhance the image quality and reduce demosaicking artifacts. Our experimental results show that the proposed method substantially improves over existing demosaicking methods in terms of objective performance (CPSNR, S-CIELAB E, and FSIM) and visual performance.
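As context for the predictors discussed above, the baseline such methods improve on is plain bilinear demosaicking, which fills each missing color sample from its measured neighbors. A minimal sketch of that baseline for the green channel, with a hypothetical mosaic layout (our toy example, not the PID algorithm):

```python
def bilinear_green(raw, is_green):
    """Bilinear interpolation of the green channel on a Bayer mosaic.
    `raw` is the sensor image; `is_green(y, x)` says whether the sensor
    measured green at that pixel. Border pixels are left untouched."""
    h, w = len(raw), len(raw[0])
    green = [row[:] for row in raw]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            if not is_green(y, x):
                # Average the four measured green neighbors.
                green[y][x] = (raw[y - 1][x] + raw[y + 1][x] +
                               raw[y][x - 1] + raw[y][x + 1]) / 4
    return green

# Hypothetical mosaic: green wherever (y + x) is even; green reads 10 here.
raw = [[10, 20, 10, 20],
       [20, 10, 20, 10],
       [10, 20, 10, 20],
       [20, 10, 20, 10]]
g = bilinear_green(raw, lambda y, x: (y + x) % 2 == 0)
```

Bilinear filling blurs edges, which is exactly what edge-classified predictors like those in the paper are designed to avoid.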

  3. Researches Regarding The Circular Interpolation Algorithms At CNC Laser Cutting Machines

    Science.gov (United States)

    Tîrnovean, Mircea Sorin

    2015-09-01

    This paper presents an integrated simulation approach for studying the circular interpolation regime of CNC laser cutting machines. The circular interpolation algorithm is studied, taking into consideration the numerical character of the system. A simulation diagram, which is able to generate the kinematic inputs for the feed drives of the CNC laser cutting machine is also presented.

  4. Single-event effects in analog and mixed-signal integrated circuits

    International Nuclear Information System (INIS)

    Turflinger, T.L.

    1996-01-01

    Analog and mixed-signal integrated circuits are also susceptible to single-event effects, but they have rarely been tested. Analog circuit single-particle transients require modified test techniques and data analysis. Existing work is reviewed and future concerns are outlined

  5. Triangulation and Mixed Methods Designs: Data Integration with New Research Technologies

    Science.gov (United States)

    Fielding, Nigel G.

    2012-01-01

    Data integration is a crucial element in mixed methods analysis and conceptualization. It has three principal purposes: illustration, convergent validation (triangulation), and the development of analytic density or "richness." This article discusses such applications in relation to new technologies for social research, looking at three…

  6. Integrating Quantitative and Qualitative Data in Mixed Methods Research--Challenges and Benefits

    Science.gov (United States)

    Almalki, Sami

    2016-01-01

    This paper is concerned with investigating the integration of quantitative and qualitative data in mixed methods research and whether, in spite of its challenges, it can be of positive benefit to many investigative studies. The paper introduces the topic, defines the terms with which this subject deals and undertakes a literature review to outline…

  7. Policy Integration and Multi-Level Governance: Dealing with the Vertical Dimension of Policy Mix Designs

    Directory of Open Access Journals (Sweden)

    Michael Howlett

    2017-05-01

    Full Text Available Multifaceted problems such as sustainable development typically involve complex arrangements of institutions and instruments and the subject of how best to design and operate such ‘mixes’, ‘bundles’ or ‘portfolios’ of policy tools is an ongoing issue in this area. One aspect of this question is that some mixes are more difficult to design and operate than others. The paper argues that, ceteris paribus, complex policy-making faces substantial risks of failure when horizontal or vertical dimensions of policy-making are not well integrated. The paper outlines a model of policy mix types which highlights the design problems associated with more complex arrangements and presents two case studies of similarly structured mixes in the areas of marine parks in Australia and coastal zone management in Europe—one a failure and the other a successful case of integration—to illustrate how such mixes can be better designed and managed more effectively.

  8. Approximate solutions for the two-dimensional integral transport equation. The critically mixed methods of resolution

    International Nuclear Information System (INIS)

    Sanchez, Richard.

    1980-11-01

    This work is divided into two parts: the first part (note CEA-N-2165) deals with the solution of complex two-dimensional transport problems; the second treats the critically mixed methods of resolution. These methods are applied to one-dimensional geometries with highly anisotropic scattering. In order to simplify the set of integral equations provided by the integral transport equation, the integro-differential equation is used to obtain relations that allow the number of integral equations to be solved to be reduced; a general mathematical and numerical study is presented [fr

  9. Low-frequency scaling of the standard and mixed magnetic field and Müller integral equations

    KAUST Repository

    Bogaert, Ignace; Cools, Kristof; Andriulli, Francesco P.; Bagci, Hakan

    2014-01-01

    The standard and mixed discretizations for the magnetic field integral equation (MFIE) and the Müller integral equation (MUIE) are investigated in the context of low-frequency (LF) scattering problems involving simply connected scatterers

  10. Mixed

    Directory of Open Access Journals (Sweden)

    Pau Baya

    2011-05-01

    Full Text Available Remenat in Catalan, "revoltillo" (scrambled) in Spanish, is a dish which, in Catalunya, consists of a beaten egg cooked with vegetables or other ingredients, normally prawns or asparagus. It is delicious. "Scrambled" refers to the action of mixing the beaten egg with other ingredients in a pan, normally using a wooden spoon. Thought is frequently an amalgam of past ideas put through a spinner and rhythmically shaken around like a cocktail until a uniform and dense paste is made. This malleable product, rather like a cake mixture, can be deformed by pulling it out, rolling it around, adapting its shape to the commands of one's hands or the tool being used on it. In the piece Mixed, the contortion of the wood seeks to reproduce the plasticity of this slow, heavy movement. Each piece lays itself on the next consecutively, like a tongue of incandescent lava, slowly advancing with unstoppable inertia.

  11. Reclassification of Mixed Oligoastrocytic Tumors Using a Genetically Integrated Diagnostic Approach

    Directory of Open Access Journals (Sweden)

    Seong-Ik Kim

    2018-01-01

    Full Text Available Background: Mixed gliomas, such as oligoastrocytomas (OAs), anaplastic oligoastrocytomas, and glioblastomas (GBMs) with an oligodendroglial component (GBMOs), are defined as tumors composed of a mixture of two distinct neoplastic cell types, astrocytic and oligodendroglial. Recently, mutations of ATRX and TP53 and codeletion of 1p/19q have been shown to be genetic hallmarks of astrocytic and oligodendroglial tumors, respectively. Subsequent molecular analyses of mixed gliomas preferred reclassification to either oligodendroglioma or astrocytoma. This study was designed to apply genetically integrated diagnostic criteria to mixed gliomas and to determine the usefulness and prognostic value of the new classification in Korean patients. Methods: Fifty-eight cases of mixed OAs and GBMOs were retrieved from the pathology archives of Seoul National University Hospital from 2004 to 2015. Reclassification was performed according to genetic and immunohistochemical properties. Clinicopathological characteristics of each subgroup were evaluated. Overall survival was assessed and compared between subgroups. Results: We could reclassify all mixed OAs and GBMOs into either astrocytic or oligodendroglial tumors. Notably, 29 GBMOs could be reclassified into 11 cases of GBM, IDH-mutant; 16 cases of GBM, IDH-wildtype; and two cases of anaplastic oligodendroglioma, IDH-mutant. Overall survival was significantly different among these new groups (p < .001). Overall survival and progression-free survival were statistically better in gliomas with IDH mutation, ATRX mutation, no microscopic necrosis, and young patient age (cutoff, 45 years old). Conclusions: Our results strongly suggest that a genetically integrated diagnosis of glioma better reflects prognosis than former morphology-based methods.

  12. Precipitation interpolation in mountainous areas

    Science.gov (United States)

    Kolberg, Sjur

    2015-04-01

    Different precipitation interpolation techniques as well as external drift covariates are tested and compared in a 26000 km2 mountainous area in Norway, using daily data from 60 stations. The main method of assessment is cross-validation. Annual precipitation in the area varies from below 500 mm to more than 2000 mm. The data were corrected for wind-driven undercatch according to operational standards. While temporal evaluation produces seemingly acceptable at-station correlation values (on average around 0.6), the average daily spatial correlation is less than 0.1. When bias is also penalised, Nash-Sutcliffe R2 values are negative for spatial correspondence, and around 0.15 for temporal. Despite largely violated assumptions, plain Kriging produces better results than simple inverse distance weighting. More surprisingly, the presumably 'worst-case' benchmark of no interpolation at all (simply averaging all 60 stations for each day) actually outperformed the standard interpolation techniques. For logistic reasons, high altitudes are under-represented in the gauge network. The possible effect of this was investigated by a) fitting a precipitation lapse rate as an external drift, and b) applying a linear model of orographic enhancement (Smith and Barstad, 2004). These techniques improved the results only marginally. The gauge density in the region is one gauge per 433 km2, higher than the overall density of the Norwegian national network. Admittedly, the cross-validation technique reduces the gauge density; still, the results suggest that we are far from able to provide hydrological models with adequate data for the main driving force.
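The comparison described here rests on two ingredients that are easy to reproduce: inverse distance weighting and leave-one-out cross-validation. A minimal sketch with made-up station data (not the study's data or code):

```python
def idw(stations, target, power=2):
    """Inverse-distance-weighted estimate at `target` from
    (x, y, value) station tuples."""
    num = den = 0.0
    for x, y, v in stations:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0:
            return v                      # exactly at a station
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

def loo_errors(stations, estimator):
    """Leave-one-out cross-validation, the assessment used in the
    abstract: predict each station from all the others."""
    errs = []
    for i, (x, y, v) in enumerate(stations):
        rest = stations[:i] + stations[i + 1:]
        errs.append(estimator(rest, (x, y)) - v)
    return errs

stations = [(0.0, 0.0, 1.0), (1.0, 0.0, 2.0), (0.0, 1.0, 2.0), (1.0, 1.0, 3.0)]
errs = loo_errors(stations, idw)
```

The 'no interpolation' benchmark in the abstract corresponds to passing an estimator that simply averages the remaining station values.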

  13. Potential problems with interpolating fields

    Energy Technology Data Exchange (ETDEWEB)

    Birse, Michael C. [The University of Manchester, Theoretical Physics Division, School of Physics and Astronomy, Manchester (United Kingdom)

    2017-11-15

    A potential can have features that do not reflect the dynamics of the system it describes but rather arise from the choice of interpolating fields used to define it. This is illustrated using a toy model of scattering with two coupled channels. A Bethe-Salpeter amplitude is constructed which is a mixture of the waves in the two channels. The potential derived from this has a strong repulsive core, which arises from the admixture of the closed channel in the wave function and not from the dynamics of the model. (orig.)

  14. On the mixed discretization of the time domain magnetic field integral equation

    KAUST Repository

    Ulku, Huseyin Arda

    2012-09-01

    Time domain magnetic field integral equation (MFIE) is discretized using divergence-conforming Rao-Wilton-Glisson (RWG) and curl-conforming Buffa-Christiansen (BC) functions as spatial basis and testing functions, respectively. The resulting mixed discretization scheme, unlike the classical scheme which uses RWG functions as both basis and testing functions, is proper: Testing functions belong to dual space of the basis functions. Numerical results demonstrate that the marching on-in-time (MOT) solution of the mixed discretized MFIE yields more accurate results than that of classically discretized MFIE. © 2012 IEEE.

  15. Interpolation of rational matrix functions

    CERN Document Server

    Ball, Joseph A; Rodman, Leiba

    1990-01-01

    This book aims to present the theory of interpolation for rational matrix functions as a recently matured independent mathematical subject with its own problems, methods and applications. The authors decided to start working on this book during the regional CBMS conference in Lincoln, Nebraska organized by F. Gilfeather and D. Larson. The principal lecturer, J. William Helton, presented ten lectures on operator and systems theory and the interplay between them. The conference was very stimulating and helped us to decide that the time was ripe for a book on interpolation for matrix valued functions (both rational and non-rational). When the work started and the first partial draft of the book was ready it became clear that the topic is vast and that the rational case by itself with its applications is already enough material for an interesting book. In the process of writing the book, methods for the rational case were developed and refined. As a result we are now able to present the rational case as an indepe...

  16. Evaluation of various interpolants available in DICE

    Energy Technology Data Exchange (ETDEWEB)

    Turner, Daniel Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Reu, Phillip L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Crozier, Paul [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-02-01

    This report evaluates several interpolants implemented in the Digital Image Correlation Engine (DICe), an image correlation software package developed by Sandia. By interpolants we refer to the basis functions used to represent discrete pixel intensity data as a continuous signal. Interpolation is used to determine intensity values in an image at non-pixel locations. It is also used, in some cases, to evaluate the x and y gradients of the image intensities. Intensity gradients subsequently guide the optimization process. The goal of this report is to inform analysts as to the characteristics of each interpolant and provide guidance towards the best interpolant for a given dataset. This work also serves as an initial verification of each of the interpolants implemented.

  17. Case Mix Management Systems: An Opportunity to Integrate Medical Records and Financial Management System Data Bases

    Science.gov (United States)

    Rusnak, James E.

    1987-01-01

    Due to previous systems selections, many hospitals (health care facilities) are faced with the problem of fragmented data bases containing clinical, demographic and financial information. Projects to select and implement a Case Mix Management System (CMMS) provide an opportunity to reduce the number of separate physical files and to migrate towards systems with an integrated data base. The number of CMMS candidate systems is often restricted due to data base and system interface issues. The hospital must ensure the CMMS project provides a means to implement an integrated on-line hospital information data base for use by departments in operating under a DRG-based Prospective Payment System. This paper presents guidelines for use in selecting a Case Mix Management System to meet the hospital's financial and operations planning, budgeting, marketing, and other management needs, while considering the data base implications of the implementation.

  18. Analysis of ECT Synchronization Performance Based on Different Interpolation Methods

    Directory of Open Access Journals (Sweden)

    Yang Zhixin

    2014-01-01

    Full Text Available There are two synchronization methods for electronic transformers in the IEC 60044-8 standard: impulsive synchronization and interpolation. When the impulsive synchronization method is inapplicable, data synchronization of the electronic transformer can be realized using the interpolation method. Typical interpolation methods are piecewise linear interpolation, quadratic interpolation, cubic spline interpolation, and so on. In this paper, the influences of piecewise linear interpolation, quadratic interpolation, and cubic spline interpolation on the data synchronization of electronic transformers are computed; then the computational complexity, synchronization precision, reliability, and application range of the different interpolation methods are analyzed and compared, which can serve as a guide for practical applications.
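The trade-off this kind of comparison quantifies can be reproduced in miniature: higher-order interpolation of a sampled waveform generally resamples more accurately than piecewise linear interpolation, at extra computational cost. A toy comparison on a sampled sine (our illustration, not the paper's benchmark):

```python
import math

def linear(samples, dt, t):
    """Piecewise-linear resampling of a uniformly sampled signal at time t."""
    i = int(t // dt)
    w = (t - i * dt) / dt
    return (1 - w) * samples[i] + w * samples[i + 1]

def quadratic(samples, dt, t):
    """Three-point Lagrange (quadratic) resampling around the nearest sample."""
    i = max(1, min(len(samples) - 2, round(t / dt)))
    x = t / dt - i
    return (samples[i - 1] * x * (x - 1) / 2
            - samples[i] * (x - 1) * (x + 1)
            + samples[i + 1] * x * (x + 1) / 2)

# 1 Hz sine sampled every 0.05 s; resample at an instant between grid points.
dt = 0.05
samples = [math.sin(2 * math.pi * k * dt) for k in range(64)]
t = 0.262
truth = math.sin(2 * math.pi * t)
err_lin = abs(linear(samples, dt, t) - truth)
err_quad = abs(quadratic(samples, dt, t) - truth)
```

Cubic spline interpolation, also compared in the paper, reduces the error further at the price of solving a tridiagonal system over all samples.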

  19. Visuo-Haptic Mixed Reality with Unobstructed Tool-Hand Integration.

    Science.gov (United States)

    Cosco, Francesco; Garre, Carlos; Bruno, Fabio; Muzzupappa, Maurizio; Otaduy, Miguel A

    2013-01-01

    Visuo-haptic mixed reality consists of adding to a real scene the ability to see and touch virtual objects. It requires the use of see-through display technology for visually mixing real and virtual objects, and haptic devices for adding haptic interaction with the virtual objects. Unfortunately, the use of commodity haptic devices poses obstruction and misalignment issues that complicate the correct integration of a virtual tool and the user's real hand in the mixed reality scene. In this work, we propose a novel mixed reality paradigm where it is possible to touch and see virtual objects in combination with a real scene, using commodity haptic devices, and with a visually consistent integration of the user's hand and the virtual tool. We discuss the visual obstruction and misalignment issues introduced by commodity haptic devices, and then propose a solution that relies on four simple technical steps: color-based segmentation of the hand, tracking-based segmentation of the haptic device, background repainting using image-based models, and misalignment-free compositing of the user's hand. We have developed a successful proof-of-concept implementation, where a user can touch virtual objects and interact with them in the context of a real scene, and we have evaluated the impact on user performance of obstruction and misalignment correction.

  20. Mixed waste focus area integrated master schedule (current as of May 6, 1996)

    International Nuclear Information System (INIS)

    1996-01-01

    The mission of the Mixed Waste Characterization, Treatment, and Disposal Focus Area (MWFA) is to provide acceptable treatment systems, developed in partnership with users and with the participation of stakeholders, tribal governments, and regulators, that are capable of treating the Department of Energy's (DOE's) mixed wastes. In support of this mission, the MWFA produced the Mixed Waste Focus Area Integrated Technical Baseline Report, Phase I, Volume 1, January 16, 1996, which identified a prioritized list of 30 national mixed waste technology deficiencies. The MWFA is targeting funding toward technology development projects that address the current list of deficiencies. A clear connection between the technology development projects and the EM-30 and EM-40 treatment systems that they support is essential for optimizing the MWFA's efforts. The purpose of the Integrated Master Schedule (IMS) is to establish and document these connections and to ensure that all technology development activities performed by the MWFA are developed for timely use in those treatment systems. The IMS is a list of treatment systems from the Site Treatment Plans (STPs)/Consent Orders that have been assigned technology development needs with associated time-driven schedules. Technology deficiencies and associated technology development (TD) needs have been identified for each treatment system based on the physical, chemical, and radiological characteristics of the waste targeted for the treatment system. The schedule, the technology development activities, and the treatment system have been verified through the operations contact from the EM-30 organization at the site.

  1. Efficient evaluation of Coulomb integrals in a mixed Gaussian and plane-wave basis

    Czech Academy of Sciences Publication Activity Database

    Čársky, Petr

    2007-01-01

Roč. 107, č. 1 (2007), s. 56-62 ISSN 0020-7608 R&D Projects: GA AV ČR IAA100400501; GA AV ČR 1ET400400413 Grant - others: European Science Foundation (EIPAM)(XE) PESC7-20; U.S. National Science Foundation (US) OISE-0532040 Institutional research plan: CEZ:AV0Z40400503 Keywords: two-electron integrals * mixed plane-wave and Gaussian basis sets * Coulomb integrals Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 1.368, year: 2007

  2. Case Mix Management Systems: An Opportunity to Integrate Medical Records and Financial Management System Data Bases

    OpenAIRE

    Rusnak, James E.

    1987-01-01

    Due to previous systems selections, many hospitals (health care facilities) are faced with the problem of fragmented data bases containing clinical, demographic and financial information. Projects to select and implement a Case Mix Management System (CMMS) provide an opportunity to reduce the number of separate physical files and to migrate towards systems with an integrated data base. The number of CMMS candidate systems is often restricted due to data base and system interface issues. The h...

  3. Mixed waste focus area integrated technical baseline report. Phase I, Volume 2: Revision 0

    International Nuclear Information System (INIS)

    1996-01-01

    This document (Volume 2) contains the Appendices A through J for the Mixed Waste Focus Area Integrated Technical Baseline Report Phase I for the Idaho National Engineering Laboratory. Included are: Waste Type Managers' Resumes, detailed information on wastewater, combustible organics, debris, unique waste, and inorganic homogeneous solids and soils, and waste data information. A detailed list of technology deficiencies and site needs identification is also provided

  4. The Mixed Waste Management Facility. Design basis integrated operations plan (Title I design)

    International Nuclear Information System (INIS)

    1994-12-01

The Mixed Waste Management Facility (MWMF) will be a fully integrated, pilot-scale facility for the demonstration of low-level, organic-matrix mixed waste treatment technologies. It will provide the bridge from bench-scale demonstrated technologies to the deployment and operation of full-scale treatment facilities. The MWMF is a key element in reducing the risk in deployment of effective and environmentally acceptable treatment processes for organic mixed-waste streams. The MWMF will provide the engineering test data, formal evaluation, and operating experience that will be required for these demonstration systems to become accepted by EPA and deployable in waste treatment facilities. The deployment will also demonstrate how to approach the permitting process with the regulatory agencies and how to operate and maintain the processes in a safe manner. This document describes, at a high level, how the facility will be designed and operated to achieve this mission. It frequently refers the reader to additional documentation that provides more detail in specific areas. Effective evaluation of a technology consists of a variety of informal and formal demonstrations involving individual technology systems or subsystems, integrated technology system combinations, or complete integrated treatment trains. Informal demonstrations will typically be used to gather general operating information and to establish a basis for development of formal demonstration plans. Formal demonstrations consist of a specific series of tests that are used to rigorously demonstrate the operation or performance of a specific system configuration.

  5. Research on interpolation methods in medical image processing.

    Science.gov (United States)

    Pan, Mei-Sen; Yang, Xiao-Li; Tang, Jing-Tian

    2012-04-01

Image interpolation is widely used in the field of medical image processing. In this paper, interpolation methods are divided into three groups: filter interpolation, ordinary interpolation and general partial volume interpolation. Some commonly used filter methods for image interpolation are introduced, but their interpolation effects need to be further improved. When analyzing and discussing ordinary interpolation, many asymmetrical kernel interpolation methods are proposed. Compared with symmetrical kernel ones, the former have some advantages. After analyzing the partial volume and generalized partial volume estimation interpolations, the new concept and constraint conditions of the general partial volume interpolation are defined, and several new partial volume interpolation functions are derived. By performing experiments on image scaling, rotation and self-registration, the interpolation methods mentioned in this paper are compared in terms of entropy, peak signal-to-noise ratio, cross entropy, normalized cross-correlation coefficient and running time. Among the filter interpolation methods, the median and B-spline filter interpolations have relatively better interpolating performance. Among the ordinary interpolation methods, on the whole, the symmetrical cubic kernel interpolations demonstrate a strong advantage, especially the symmetrical cubic B-spline interpolation, although they are very time-consuming. As for the general partial volume interpolation methods, judging from the total error of image self-registration, the symmetrical interpolations provide certain superiority; considering processing efficiency, however, the asymmetrical interpolations are better.
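As a rough illustration of how such kernel-based interpolators are compared, the sketch below measures reconstruction error for nearest-neighbour, linear, and a symmetric cubic (Keys) convolution kernel on a smooth 1D signal. The kernels and the max-error metric are illustrative stand-ins for the filters and quality metrics evaluated in the paper:

```python
import numpy as np

def kernel_nearest(x):
    return 1.0 if abs(x) <= 0.5 else 0.0

def kernel_linear(x):
    return max(0.0, 1.0 - abs(x))

def kernel_keys_cubic(x, a=-0.5):
    # symmetric cubic convolution kernel (Keys); a stand-in for the
    # symmetrical cubic kernels discussed in the paper
    x = abs(x)
    if x < 1:
        return (a + 2) * x**3 - (a + 3) * x**2 + 1.0
    if x < 2:
        return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
    return 0.0

def interpolate(samples, t, kernel, support=2):
    # convolution-style interpolation at a fractional position t
    i0 = int(np.floor(t))
    return sum(samples[i] * kernel(t - i)
               for i in range(i0 - support + 1, i0 + support + 1))

n = np.arange(64)
f = np.sin(2 * np.pi * n / 16.0)                 # smooth test signal
truth = lambda t: np.sin(2 * np.pi * t / 16.0)

ts = np.linspace(8.0, 55.0, 200)                 # interior query points
err = {name: max(abs(interpolate(f, t, k) - truth(t)) for t in ts)
       for name, k in [("nearest", kernel_nearest),
                       ("linear", kernel_linear),
                       ("cubic", kernel_keys_cubic)]}
```

On a smooth signal the cubic kernel is clearly more accurate than linear, which in turn beats nearest-neighbour, matching the ranking reported for the symmetrical cubic kernels.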

  6. Variable Rate Characteristic Waveform Interpolation Speech Coder Based on Phonetic Classification

    Institute of Scientific and Technical Information of China (English)

    WANG Jing; KUANG Jing-ming; ZHAO Sheng-hui

    2007-01-01

A variable-bit-rate characteristic waveform interpolation (VBR-CWI) speech codec with about 1.8 kbit/s average bit rate, which integrates phonetic classification into characteristic waveform (CW) decomposition, is proposed. Each input frame is classified into one of 4 phonetic classes. Non-speech frames are represented with a Bark-band noise model. The extracted CWs become rapidly evolving waveforms (REWs) or slowly evolving waveforms (SEWs) in the cases of unvoiced or stationary voiced frames respectively, while mixed voiced frames use the same CW decomposition as that in the conventional CWI. Experimental results show that the proposed codec can eliminate most buzzy and noisy artifacts existing in the fixed-bit-rate characteristic waveform interpolation (FBR-CWI) speech codec, the average bit rate can be much lower, and its reconstructed speech quality is much better than FS 1016 CELP at 4.8 kbit/s and similar to G.723.1 ACELP at 5.3 kbit/s.
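The phonetic classification step can be illustrated with a toy frame classifier based on short-time energy and zero-crossing rate. The thresholds, class names, and decision rules below are assumptions for illustration, not the codec's actual classifier:

```python
import numpy as np

def classify_frame(frame, energy_thresh=0.01, zcr_thresh=0.25):
    """Toy phonetic classification of one frame by short-time energy and
    zero-crossing rate (ZCR). Thresholds and rules are illustrative."""
    energy = np.mean(frame ** 2)
    if energy < energy_thresh:
        return "non-speech"      # modeled with Bark-band noise
    zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)
    if zcr > zcr_thresh:
        return "unvoiced"        # CWs treated as rapidly evolving (REW)
    return "voiced"              # CWs treated as slowly evolving (SEW)
```

A real classifier would add a fourth, mixed-voiced class and use more robust features, but the energy/ZCR split captures the basic idea of routing frames to different CW models.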

  7. A Hybrid Interpolation Method for Geometric Nonlinear Spatial Beam Elements with Explicit Nodal Force

    Directory of Open Access Journals (Sweden)

    Huiqing Fang

    2016-01-01

Full Text Available Based on geometrically exact beam theory, a hybrid interpolation is proposed for geometric nonlinear spatial Euler-Bernoulli beam elements. First, Hermitian interpolation of the beam centerline is used to calculate the nodal curvatures at the two ends. Then, the internal curvatures of the beam are obtained by a second interpolation. At this point, C1 continuity is satisfied and nodal strain measures can be consistently derived from nodal displacement and rotation parameters. Using the hybrid interpolation, an explicit expression for the nodal force, as a function of the global parameters, is derived without numerical integration. Furthermore, the proposed beam element degenerates into a linear beam element under the condition of small deformation. Objectivity of the strain measures and patch tests are also discussed. Finally, four numerical examples are presented to demonstrate the validity and effectiveness of the proposed beam element.
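The first step, Hermitian interpolation from nodal values and end slopes, rests on the standard cubic Hermite basis. A scalar sketch follows; the function and variable names are illustrative, and the real element applies this to the vector-valued centerline:

```python
def hermite_segment(p0, p1, m0, m1, s):
    """Cubic Hermite interpolation on s in [0, 1] from end values
    (p0, p1) and end derivatives (m0, m1) with respect to s."""
    h00 = 2 * s**3 - 3 * s**2 + 1        # value basis at s = 0
    h10 = s**3 - 2 * s**2 + s            # derivative basis at s = 0
    h01 = -2 * s**3 + 3 * s**2           # value basis at s = 1
    h11 = s**3 - s**2                    # derivative basis at s = 1
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
```

By construction the interpolant matches the end values and end derivatives, and it reproduces any cubic exactly, which is why curvatures (second derivatives) derived from it at the nodes are consistent with the nodal parameters.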

  8. Mixed-integrator-based bi-quad cell for designing a continuous time filter

    International Nuclear Information System (INIS)

    Chen Yong; Zhou Yumei

    2010-01-01

A new mixed-integrator-based bi-quad cell is proposed. An alternative synthesis mechanism of complex poles is proposed, in contrast with source-follower-based bi-quad cells, which are designed using the positive feedback technique. Using the negative feedback technique to combine different integrators, the proposed bi-quad cell synthesizes complex poles for designing a continuous-time filter. It exhibits various advantages, including a compact topology, high gain, no parasitic pole, no CMFB circuit, and high capability. A fourth-order Butterworth lowpass filter using the proposed cells has been fabricated in 0.18 μm CMOS technology. The active area occupied by the filter with test buffer is only 200 × 170 μm². The proposed filter consumes a low power of 201 μW and achieves a 68.5 dB dynamic range. (semiconductor integrated circuits)

  9. THE INTEGRATION OF PIGMEAT MARKETS IN THE EU. EVIDENCE FROM A REGULAR MIXED VINE COPULA

    Directory of Open Access Journals (Sweden)

    Vasilis GRIGORIADIS

    2016-04-01

Full Text Available The objective of this work is to investigate the degree of integration of national pigmeat markets in the EU. This is pursued using monthly wholesale prices from seven major markets and the statistical tool of mixed R-vine copulas. The empirical results suggest that the markets considered do not constitute a single large pool in which prices move, boom, and crash together. The markets of Belgium, Germany, and the Netherlands exhibit a higher degree of integration relative to the others, whereas the Italian market exhibits a lower degree of integration. Also, there is an indication that, in certain cases, the benefits of free trade may be unequally distributed between the trading partners.

  10. Differential Interpolation Effects in Free Recall

    Science.gov (United States)

    Petrusic, William M.; Jamieson, Donald G.

    1978-01-01

    Attempts to determine whether a sufficiently demanding and difficult interpolated task (shadowing, i.e., repeating aloud) would decrease recall for earlier-presented items as well as for more recent items. Listening to music was included as a second interpolated task. Results support views that serial position effects reflect a single process.…

  11. Transfinite C² interpolant over triangles

    International Nuclear Information System (INIS)

    Alfeld, P.; Barnhill, R.E.

    1984-01-01

A transfinite C² interpolant on a general triangle is created. The required data are essentially C², no compatibility conditions arise, and the precision set includes all polynomials of degree less than or equal to eight. The symbol manipulation language REDUCE is used to derive the scheme. The scheme is discretized to two different finite-dimensional C² interpolants in an appendix

  12. Mixed Waste Integrated Program interim evaluation report on thermal treatment technologies

    International Nuclear Information System (INIS)

    Gillins, R.L.; DeWitt, L.M.; Wollerman, A.L.

    1993-02-01

The Mixed Waste Integrated Program (MWIP) is one of several US Department of Energy (DOE) integrated programs established to organize and coordinate, throughout the DOE complex, the development of technologies for treatment of specific waste categories. The goal of the MWIP is to develop and deploy appropriate technologies for the treatment of DOE mixed low-level and alpha-contaminated wastes in order to bring all affected DOE installations and projects into compliance with environmental laws. Evaluation of treatment technologies by the MWIP will focus on meeting waste form performance requirements for disposal. Thermal treatment technologies were an early emphasis for the MWIP because thermal treatment is indicated (or mandated) for many of the hazardous constituents in DOE mixed waste and because these technologies have been widely investigated for these applications. An advisory group, the Thermal Treatment Working Group (TTWG), was formed during the program's infancy to assist the MWIP in evaluating and prioritizing thermal treatment technologies suitable for development. The results of the overall evaluation scoring indicate that the four highest-rated technologies were rotary kilns, slagging kilns, electric-arc furnaces, and plasma-arc furnaces. The four highest-rated technologies were all judged to be applicable to five of the six waste streams and are the only technologies in the evaluation with this distinction. Conclusions as to the superiority of one technology over others are not valid based on this preliminary study, although some general conclusions can be drawn

  13. Using a mixed-methods design to examine nurse practitioner integration in British Columbia.

    Science.gov (United States)

    Sangster-Gormley, Esther; Griffith, Janessa; Schreiber, Rita; Borycki, Elizabeth

    2015-07-01

    To discuss and provide examples of how mixed-methods research was used to evaluate the integration of nurse practitioners (NPs) into a Canadian province. Legislation enabling NPs to practise in British Columbia (BC) was enacted in 2005. This research evaluated the integration of NPs and their effect on the BC healthcare system. Data were collected using surveys, focus groups, participant interviews and case studies over three years. Data sources and methods were triangulated to determine how the findings addressed the research questions. The challenges and benefits of using the multiphase design are highlighted in the paper. The multiphase mixed-methods research design was selected because of its applicability to evaluation research. The design proved to be robust and flexible in answering research questions. As sub-studies within the multiphase design are often published separately, it can be difficult for researchers to find examples. This paper highlights ways that a multiphase mixed-methods design can be conducted for researchers unfamiliar with the process.

  14. Analysis of velocity planning interpolation algorithm based on NURBS curve

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

To reduce the interpolation time and the maximum interpolation error caused by velocity planning in NURBS (Non-Uniform Rational B-Spline) interpolation, this paper proposes a velocity planning interpolation algorithm based on the NURBS curve. First, a second-order Taylor expansion is applied to the curve parameter of the NURBS representation. The velocity plan is then incorporated into the NURBS curve interpolation step. Finally, simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems.
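The second-order Taylor parameter update can be sketched on a rational Bézier curve, a simple NURBS special case. The step below is the standard constant-feedrate expansion u_{k+1} = u_k + V*T/s - (V^2*T^2/2)*(C'·C'')/s^4 with s = |C'(u_k)|; the helper names and the use of numerical derivatives are illustrative, not the paper's formulation:

```python
import numpy as np

def rational_bezier(u, P, w):
    """Quadratic rational Bezier curve, a simple NURBS special case,
    used here as the interpolated toolpath."""
    B = np.array([(1 - u)**2, 2 * u * (1 - u), u**2])
    bw = B * w
    return (bw @ P) / bw.sum()

def taylor2_step(u, V, T, curve, h=1e-6):
    """Advance the curve parameter with a second-order Taylor expansion
    so that the chord traversed per period T approximates feedrate V."""
    c1 = (curve(u + h) - curve(u - h)) / (2 * h)              # C'(u)
    c2 = (curve(u + h) - 2 * curve(u) + curve(u - h)) / h**2  # C''(u)
    s = np.linalg.norm(c1)
    return u + V * T / s - (V**2 * T**2 / 2) * np.dot(c1, c2) / s**4
```

Stepping the parameter this way keeps the chord length per interpolation period close to V*T, which is exactly the feedrate-consistency goal of the velocity-planned interpolator.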

  15. An Improved Rotary Interpolation Based on FPGA

    Directory of Open Access Journals (Sweden)

    Mingyu Gao

    2014-08-01

Full Text Available This paper presents an improved rotary interpolation algorithm, which consists of a standard curve interpolation module and a rotary process module. Compared to conventional rotary interpolation algorithms, the proposed algorithm is simpler and more efficient. The proposed algorithm was realized on an FPGA in the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe, using a rotary ellipse and a rotary parabola as examples. According to the theoretical analysis and practical process validation, the algorithm has the following advantages: firstly, fewer arithmetic terms simplify the interpolation operation; secondly, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have demonstrated the high accuracy and efficiency of the algorithm, which shows that it is highly suited for real-time applications.

  16. Integration of water footprint accounting and costs for optimal pulp supply mix in paper industry

    DEFF Research Database (Denmark)

    Manzardo, Alessandro; Ren, Jingzheng; Piantella, Antonio

    2014-01-01

… studies have focused on these aspects, but there have been no previous reports on the integrated application of raw material water footprint accounting and costs in the definition of the optimal supply mix of chemical pulps from different countries. The current models that have been applied specifically … that minimizes the water footprint accounting results and costs of chemical pulp, thereby facilitating the assessment of the water footprint by accounting for different chemical pulps purchased from various suppliers, with a focus on the efficiency of the production process. Water footprint accounting … was adapted to better represent the efficiency of pulp and paper production. A multi-objective model for supply mix optimization was also developed using multi-criteria decision analysis (MCDA). Water footprint accounting confirmed the importance of the production efficiency of chemical pulp, which affected …

  17. On removing interpolation and resampling artifacts in rigid image registration.

    Science.gov (United States)

    Aganj, Iman; Yeo, Boon Thye Thomas; Sabuncu, Mert R; Fischl, Bruce

    2013-02-01

    We show that image registration using conventional interpolation and summation approximations of continuous integrals can generally fail because of resampling artifacts. These artifacts negatively affect the accuracy of registration by producing local optima, altering the gradient, shifting the global optimum, and making rigid registration asymmetric. In this paper, after an extensive literature review, we demonstrate the causes of the artifacts by comparing inclusion and avoidance of resampling analytically. We show the sum-of-squared-differences cost function formulated as an integral to be more accurate compared with its traditional sum form in a simple case of image registration. We then discuss aliasing that occurs in rotation, which is due to the fact that an image represented in the Cartesian grid is sampled with different rates in different directions, and propose the use of oscillatory isotropic interpolation kernels, which allow better recovery of true global optima by overcoming this type of aliasing. Through our experiments on brain, fingerprint, and white noise images, we illustrate the superior performance of the integral registration cost function in both the Cartesian and spherical coordinates, and also validate the introduced radial interpolation kernel by demonstrating the improvement in registration.

  18. The Contribution of Mixed Methods Research to the Field of Childhood Trauma: A Narrative Review Focused on Data Integration

    Science.gov (United States)

    Boeije, Hennie; Slagt, Meike; van Wesel, Floryt

    2013-01-01

    In mixed methods research (MMR), integrating the quantitative and the qualitative components of a study is assumed to result in additional knowledge (or "yield"). This narrative review examines the extent to which MMR is used in the field of childhood trauma and provides directions for improving mixed methods studies in this field. A…

  19. Integrated process analyses studies on mixed low level and transuranic wastes. Summary report

    International Nuclear Information System (INIS)

    1997-12-01

    Options for integrated thermal and nonthermal treatment systems for mixed low-level waste (MLLW) are compared such as total life cycle cost (TLCC), cost sensitivities, risk, energy requirements, final waste volume, and aqueous and gaseous effluents. The comparisons were derived by requiring all conceptual systems to treat the same composition of waste with the same operating efficiency. Thus, results can be used as a general guideline for the selection of treatment and disposal concepts. However, specific applications of individual systems will require further analysis. The potential for cost saving options and the research and development opportunities are summarized

  1. ADVANCING THE STUDY OF VIOLENCE AGAINST WOMEN USING MIXED METHODS: INTEGRATING QUALITATIVE METHODS INTO A QUANTITATIVE RESEARCH PROGRAM

    Science.gov (United States)

    Testa, Maria; Livingston, Jennifer A.; VanZile-Tamsen, Carol

    2011-01-01

    A mixed methods approach, combining quantitative with qualitative data methods and analysis, offers a promising means of advancing the study of violence. Integrating semi-structured interviews and qualitative analysis into a quantitative program of research on women’s sexual victimization has resulted in valuable scientific insight and generation of novel hypotheses for testing. This mixed methods approach is described and recommendations for integrating qualitative data into quantitative research are provided. PMID:21307032

  2. Interferometric interpolation of sparse marine data

    KAUST Repository

    Hanafy, Sherif M.

    2013-10-11

We present the theory and numerical results for interferometrically interpolating 2D and 3D marine surface seismic profiles. For the interpolation of seismic data we use the combination of a recorded Green's function and a model-based Green's function for a water-layer model. Synthetic (2D and 3D) and field (2D) results show that seismic data with sparse receiver intervals can be accurately interpolated to smaller intervals using multiples in the data. An up- and downgoing separation of both recorded and model-based Green's functions can help in minimizing artefacts in a virtual shot gather. If the up- and downgoing separation is not possible, noticeable artefacts will be generated in the virtual shot gather. As a partial remedy we iteratively use a non-stationary 1D multi-channel matching filter with the interpolated data. Results suggest that a sparse marine seismic survey can yield more information about reflectors if traces are interpolated by interferometry. Comparing our results to those of f-k interpolation shows that the synthetic example gives comparable results while the field example shows better interpolation quality for the interferometric method. © 2013 European Association of Geoscientists & Engineers.

  3. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    Science.gov (United States)

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
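The trellis search over sequences of interpolation functions described above is a minimum-cost Viterbi decode. Below is a generic sketch with an illustrative additive cost model; the paper instead derives a parameter-free probabilistic model for the state sequence:

```python
import numpy as np

def viterbi(emission_cost, transition_cost):
    """Minimum-cost state sequence through a trellis.
    emission_cost: (T, S) array, cost of state s at step t.
    transition_cost: (S, S) array, cost of moving from state i to j."""
    T, S = emission_cost.shape
    cost = emission_cost[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        # total[i, j]: best cost of reaching state j at t via state i
        total = cost[:, None] + transition_cost + emission_cost[t]
        back[t] = np.argmin(total, axis=0)
        cost = total.min(axis=0)
    path = [int(np.argmin(cost))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]
```

In the interpolation setting, each trellis step is a missing HR pixel position, each state is a candidate (for example directional) interpolation function, and the decoded path selects one interpolation function per pixel jointly rather than by independent hard decisions.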

  4. Interpolation method for the transport theory and its application in fusion-neutronics analysis

    International Nuclear Information System (INIS)

    Jung, J.

    1981-09-01

    This report presents an interpolation method for the solution of the Boltzmann transport equation. The method is based on a flux synthesis technique using two reference-point solutions. The equation for the interpolated solution results in a Volterra integral equation which is proved to have a unique solution. As an application of the present method, tritium breeding ratio is calculated for a typical D-T fusion reactor system. The result is compared to that of a variational technique
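In the spirit of synthesizing a solution from two reference-point solutions, the sketch below approximates the solution of a parameter-dependent linear system by a Galerkin combination of two reference solutions. The operators here are generic stand-ins, not a transport model, and the projection choice is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
M = rng.standard_normal((n, n))
A0 = M @ M.T + n * np.eye(n)     # SPD base operator (stand-in)
A1 = np.diag(rng.random(n))      # parameter-dependent perturbation
b = rng.standard_normal(n)

def A(p):
    return A0 + p * A1

# reference-point solutions at p = 0 and p = 1
X = np.column_stack([np.linalg.solve(A(0.0), b),
                     np.linalg.solve(A(1.0), b)])

def synthesized(p):
    """Galerkin projection onto the span of the two reference solutions:
    solve the 2x2 reduced system for the combination coefficients."""
    c = np.linalg.solve(X.T @ A(p) @ X, X.T @ b)
    return X @ c

p = 0.5
exact = np.linalg.solve(A(p), b)
approx = synthesized(p)
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

For intermediate parameter values the two-solution span captures the solution well, which is the same economy the flux-synthesis interpolation exploits between its two reference-point transport solutions.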

  5. Comparison of two interpolative background subtraction methods using phantom and clinical data

    International Nuclear Information System (INIS)

    Houston, A.S.; Sampson, W.F.D.

    1989-01-01

    Two interpolative background subtraction methods used in scintigraphy are tested using both phantom and clinical data. Cauchy integral subtraction was found to be relatively free of artefacts but required more computing time than bilinear interpolation. Both methods may be used with reasonable confidence for the quantification of relative measurements such as left ventricular ejection fraction and myocardial perfusion index but should be avoided if at all possible in the quantification of absolute measurements such as glomerular filtration rate. (author)
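An interpolative background estimate of the kind compared here can be sketched as a Coons-style transfinite interpolation of the region-of-interest boundary. This formulation is illustrative, not the clinical implementation of either method in the paper:

```python
import numpy as np

def interp_background(img, y0, y1, x0, x1):
    """Estimate the background inside the ROI rows y0..y1 and columns
    x0..x1 by transfinite (Coons-style) interpolation of the boundary
    rows/columns, as in interpolative background subtraction."""
    top, bot = img[y0, x0:x1 + 1], img[y1, x0:x1 + 1]
    left, right = img[y0:y1 + 1, x0], img[y0:y1 + 1, x1]
    v = np.linspace(0.0, 1.0, y1 - y0 + 1)[:, None]   # vertical blend
    u = np.linspace(0.0, 1.0, x1 - x0 + 1)[None, :]   # horizontal blend
    corners = ((1 - v) * (1 - u) * img[y0, x0] + (1 - v) * u * img[y0, x1]
               + v * (1 - u) * img[y1, x0] + v * u * img[y1, x1])
    return ((1 - v) * top + v * bot
            + (1 - u) * left[:, None] + u * right[:, None]
            - corners)
```

Subtracting this estimate from the ROI leaves the activity above the interpolated background. The formula reproduces any linear (and bilinear) background exactly.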

  6. US Department of Energy Mixed Waste Integrated Program performance systems analysis

    International Nuclear Information System (INIS)

    Ferrada, J.J.; Berry, J.B.

    1994-01-01

The primary goal of this project is to support decision making for the U.S. Department of Energy (DOE)/EM-50 Mixed Waste Integrated Program (MWIP) and the Mixed Low-Level Waste Program. A systems approach to the assessment of enhanced waste form(s) production will be employed, including coordination and configuration management of activities in specific technology development tasks. The purpose of this paper is to describe the development and application of a methodology for implementing a performance systems analysis on mixed waste treatment process technologies. The second section describes a conventional approach to process systems analysis, followed by a methodology to estimate uncertainties when analyzing innovative technologies. Principles from these methodologies have been used to develop a performance systems analysis for MWIP. The third section describes the systems analysis tools. The fourth section explains how the performance systems analysis will be used to analyze MWIP process alternatives. The fifth and sixth sections summarize this paper and describe future work for this project. Baseline treatment process technologies (i.e., commercially available technologies) and waste management strategies are evaluated systematically using the ASPEN PLUS program applications developed by the DOE Mixed Waste Treatment Project (MWTP). Alternatives to the baseline (i.e., technologies developed by DOE's Office of Technology Development) are analyzed using FLOW, a user-friendly program developed at Oak Ridge National Laboratory (ORNL). Currently, this program is capable of calculating rough order-of-magnitude mass and energy balances to assess the performance of the alternative technologies as compared to the baseline process. In the future, FLOW will be capable of communicating information to the ASPEN PLUS program

  7. NOAA Optimum Interpolation (OI) SST V2

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The optimum interpolation (OI) sea surface temperature (SST) analysis is produced weekly on a one-degree grid. The analysis uses in situ and satellite SST's plus...

  8. Album of the month: Interpol "Antics". Records from the Lasering store

    Index Scriptorium Estoniae

    2005-01-01

On the albums: "Interpol Antics", Scooter "Mind the Gap", Slide-Fifty "The Way Ahead", Psyhhoterror "Freddy, löö esimesena!", Riho Sibul "Must", Bossacucanova "Uma Batida Diferente", "Biscantorat - Sound of the spirit from Glenstal Abbey"

  9. Revisiting Veerman’s interpolation method

    DEFF Research Database (Denmark)

    Christiansen, Peter; Bay, Niels Oluf

    2016-01-01

This article describes an investigation of Veerman's interpolation method and its applicability for determining sheet metal formability. The theoretical foundation is established and its mathematical assumptions are clarified. An exact Lagrangian interpolation scheme is also established for comparison. Bulge testing and tensile testing of aluminium sheets containing electro-chemically etched circle grids are performed to experimentally determine the forming limit of the sheet material. The forming limit is determined using (a) Veerman's interpolation method, (b) exact Lagrangian interpolation and (c) FE simulations. A comparison of the determined forming limits yields insignificant differences in the limit strain obtained with Veerman's method or exact Lagrangian interpolation for the two sheet metal forming processes investigated. The agreement with the FE simulations is reasonable.

  10. NOAA Daily Optimum Interpolation Sea Surface Temperature

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The NOAA 1/4° daily Optimum Interpolation Sea Surface Temperature (or daily OISST) is an analysis constructed by combining observations from different platforms...

  11. Wideband DOA Estimation through Projection Matrix Interpolation

    OpenAIRE

    Selva, J.

    2017-01-01

This paper presents a method to reduce the complexity of the deterministic maximum likelihood (DML) estimator in the wideband direction-of-arrival (WDOA) problem, which is based on interpolating the array projection matrix in the temporal frequency variable. It is shown that an accurate interpolator like Chebyshev's is able to produce DML cost functions comprising just a few narrowband-like summands. Actually, the number of such summands is far smaller (roughly by a factor of ten in the numerical ...

  12. Interpolation for a subclass of H

    Indian Academy of Sciences (India)

|g(z_m)| ≤ c |z_m − z̄_m|, ∀m ∈ N. Thus it is natural to pose the following interpolation problem for H^∞: DEFINITION 4. We say that (z_n) is an interpolating sequence in the weak sense for H^∞ if, given any sequence of complex numbers (λ_n) verifying |λ_n| ≤ c ψ(z_n, z̄_n) |z_n − z̄_n|, ∀n ∈ N, (4) there exists a product fg ∈ H^∞ …

  13. Linear Invariant Tensor Interpolation Applied to Cardiac Diffusion Tensor MRI

    Science.gov (United States)

    Gahm, Jin Kyu; Wisniewski, Nicholas; Kindlmann, Gordon; Kung, Geoffrey L.; Klug, William S.; Garfinkel, Alan; Ennis, Daniel B.

    2015-01-01

Purpose: Various methods exist for interpolating diffusion tensor fields, but none of them linearly interpolate tensor shape attributes. Linear interpolation is expected not to introduce spurious changes in tensor shape. Methods: Herein we define a new linear invariant (LI) tensor interpolation method that linearly interpolates components of tensor shape (tensor invariants) and recapitulates the interpolated tensor from the linearly interpolated tensor invariants and the eigenvectors of a linearly interpolated tensor. The LI tensor interpolation method is compared to the Euclidean (EU), affine-invariant Riemannian (AI), log-Euclidean (LE) and geodesic-loxodrome (GL) interpolation methods using both a synthetic tensor field and three experimentally measured cardiac DT-MRI datasets. Results: EU, AI, and LE introduce significant microstructural bias, which can be avoided through the use of GL or LI. Conclusion: GL introduces the least microstructural bias, but LI tensor interpolation performs very similarly and at substantially reduced computational cost. PMID:23286085
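A simplified sketch of the LI idea: interpolate tensor shape linearly (here via eigenvalues rather than the paper's tensor invariants) and take orientation from the Euclidean interpolant. The function name and this eigenvalue-based simplification are assumptions for illustration:

```python
import numpy as np

def li_interpolate(T1, T2, t):
    """Shape-linear tensor interpolation: eigenvalues (shape) are
    interpolated linearly, orientation is taken from the Euclidean
    interpolant. A simplified stand-in for the invariant-based LI
    method of the paper."""
    lam1 = np.linalg.eigvalsh(T1)             # ascending eigenvalues
    lam2 = np.linalg.eigvalsh(T2)
    lam = (1 - t) * lam1 + t * lam2           # linear shape interpolation
    _, R = np.linalg.eigh((1 - t) * T1 + t * T2)   # orientation estimate
    return R @ np.diag(lam) @ R.T
```

Unlike plain Euclidean averaging, which blurs the eigenvalue profile of two differently oriented anisotropic tensors (the classic "swelling" bias), this reassembly preserves the linearly interpolated shape by construction.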

  14. On the exact interpolating function in ABJ theory

    Energy Technology Data Exchange (ETDEWEB)

    Cavaglià, Andrea [Dipartimento di Fisica and INFN, Università di Torino,Via P. Giuria 1, 10125 Torino (Italy); Gromov, Nikolay [Mathematics Department, King’s College London,The Strand, London WC2R 2LS (United Kingdom); St. Petersburg INP,Gatchina, 188 300, St.Petersburg (Russian Federation); Levkovich-Maslyuk, Fedor [Mathematics Department, King’s College London,The Strand, London WC2R 2LS (United Kingdom); Nordita, KTH Royal Institute of Technology and Stockholm University,Roslagstullsbacken 23, SE-106 91 Stockholm (Sweden)

    2016-12-16

    Based on the recent indications of integrability in the planar ABJ model, we conjecture an exact expression for the interpolating function h(λ₁, λ₂) in this theory. Our conjecture is based on the observation that the integrability structure of the ABJM theory given by its Quantum Spectral Curve is very rigid and does not allow for a simple consistent modification. Under this assumption, we revised the previous comparison of localization results and exact all-loop integrability calculations done for the ABJM theory by one of the authors and Grigory Sizov, fixing h(λ₁, λ₂). We checked our conjecture against various weak coupling expansions, at strong coupling and also demonstrated its invariance under the Seiberg-like duality. This match also gives further support to the integrability of the model. If our conjecture is correct, it extends all the available integrability results in the ABJM model to the ABJ model.

  15. Mixing positive and negative valence: Affective-semantic integration of bivalent words.

    Science.gov (United States)

    Kuhlmann, Michael; Hofmann, Markus J; Briesemeister, Benny B; Jacobs, Arthur M

    2016-08-05

    Single words have affective and aesthetic properties that influence their processing. Here we investigated the processing of a special case of word stimuli that are extremely difficult to evaluate, bivalent noun-noun-compounds (NNCs), i.e. novel words that mix a positive and negative noun, e.g. 'Bombensex' (bomb-sex). In a functional magnetic resonance imaging (fMRI) experiment we compared their processing with easier-to-evaluate non-bivalent NNCs in a valence decision task (VDT). Bivalent NNCs produced longer reaction times and elicited greater activation in the left inferior frontal gyrus (LIFG) than non-bivalent words, especially in contrast to words of negative valence. We attribute this effect to a LIFG-grounded process of semantic integration that requires greater effort for processing converse information, supporting the notion of a valence representation based on associations in semantic networks.

  16. Reduction of EMC Emissions in Mixed Signal Integrated Circuits with Embedded LIN Driver

    Directory of Open Access Journals (Sweden)

    P. Hartl

    2016-06-01

    This paper describes several methods for the reduction of electromagnetic emissions (EME) of mixed-signal integrated circuits (ICs). The focus is on the impact that a LIN bus communication block has on a complex IC containing analog blocks, a noisy digital block, a micro-core (µC) and several types of memories. It is used in an automotive environment, where EMC emission reduction is one of the key success factors. Several proposed methods for EME reduction are described and implemented on three test chips. These methods include current consumption reduction, internal on-chip decoupling, ground separation and different linear voltage regulator topologies. Measurement results of several fabricated test chips are shown and discussed.

  17. The best-mix of power demand and supply. Energy system integration

    International Nuclear Information System (INIS)

    Ogimoto, Kazuhiko

    2012-01-01

    In September 2012, after nationwide discussions, the Energy and Environmental Council decided on the 'Innovative Strategy for Energy and the Environment': (1) realization of a society not dependent on nuclear power, (2) realization of a green energy revolution, (3) ensuring a stable supply of energy, (4) bold implementation of reform of electric power systems, and (5) steady implementation of global warming countermeasures. The energy problem should be considered in terms of the supply and demand of energy as a whole. Long-term energy planning, such as for 2050, must assume global limits on fossil fuel supply and carbon dioxide emissions; to realize sustainable energy demand and supply, maximum deployment of renewable power in primary energy and the widest practicable electrification of final demand for energy conservation should be implemented, making a best mix of power and energy demand and supply significant. This article outlines the analysis of long-term power demand and supply, future power technologies and demand-side management, and problems of power system operation and their solutions, and then describes energy system integration to realize the best mix of power, energy and society. (T. Tanaka)

  18. Servo-controlling structure of five-axis CNC system for real-time NURBS interpolating

    Science.gov (United States)

    Chen, Liangji; Guo, Guangsong; Li, Huiying

    2017-07-01

    NURBS (Non-Uniform Rational B-Spline) curves are widely used in CAD/CAM (Computer-Aided Design / Computer-Aided Manufacturing) to represent sculptured curves or surfaces. In this paper, we develop a 5-axis NURBS real-time interpolator and realize it in our CNC (Computer Numerical Control) system under development. First, we use two NURBS curves to represent the tool-tip and tool-axis paths, respectively. From the feedrate and a Taylor-series expansion, servo-controlling signals for the 5 axes are obtained for each interpolation cycle. Then, the generation procedure of NC (Numerical Control) code with the presented method is introduced, together with the method of integrating the interpolator into our CNC system. The servo-controlling structure of the CNC system is also introduced. The illustrative example indicates that the proposed method can enhance the machining accuracy and that the spline interpolator is feasible for 5-axis CNC systems.
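The feedrate-driven parameter update at the heart of such interpolators can be illustrated with a first-order Taylor step, u_{k+1} = u_k + V·T/|C'(u_k)|. The sketch below is a simplified stand-in (a circular path instead of a NURBS curve, and a finite-difference derivative instead of an analytic one): it shows that each interpolation cycle then advances the tool by roughly V·T along the path.

```python
import math

def feedrate_interpolation(curve, u0, u_end, V, T, h=1e-6):
    """First-order Taylor feedrate interpolation along a parametric curve:
    u_{k+1} = u_k + V*T / |C'(u_k)|, so the chord traversed per
    interpolation cycle T approximates the programmed feedrate V.
    """
    def speed(u):
        # Central-difference estimate of |C'(u)|.
        (x1, y1), (x2, y2) = curve(u - h), curve(u + h)
        return math.hypot(x2 - x1, y2 - y1) / (2 * h)

    us, u = [u0], u0
    while u < u_end:
        u += V * T / speed(u)
        us.append(min(u, u_end))       # clamp the final step to the path end
    return us

circle = lambda u: (10.0 * math.cos(u), 10.0 * math.sin(u))  # radius-10 tool path
params = feedrate_interpolation(circle, 0.0, math.pi / 2, V=50.0, T=0.001)
pts = [circle(u) for u in params]
steps = [math.dist(p, q) for p, q in zip(pts, pts[1:])]
# Every full cycle advances close to V*T = 0.05 along the path.
```

A real interpolator would evaluate the NURBS curve and its derivative directly (and often add a second-order Taylor term to reduce feedrate fluctuation), but the parameter-update structure is the same.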

  19. Building Input Adaptive Parallel Applications: A Case Study of Sparse Grid Interpolation

    KAUST Repository

    Murarasu, Alin

    2012-12-01

    The well-known power wall that led to multi-core processors requires special techniques for speeding up applications, and parallelization plays a crucial role here. Besides standard serial optimizations, techniques such as input specialization can also make a substantial contribution to the speedup. By identifying common patterns in the input data, we propose new algorithms for sparse grid interpolation that accelerate the state-of-the-art non-specialized version. Sparse grid interpolation is an inherently hierarchical method of interpolation employed, for example, in computational steering applications for decompressing high-dimensional simulation data. In this context, improving the speedup is essential for real-time visualization. Using input specialization, we report a speedup of up to 9x over the non-specialized version. The paper covers the steps we took to reach this speedup by means of input adaptivity. Our algorithms will be integrated in fastsg, a library for fast sparse grid interpolation. © 2012 IEEE.
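The hierarchical structure that sparse grid interpolation exploits can be illustrated in one dimension: each new grid point stores a "surplus", the difference between the function and the interpolant built from all coarser levels. The sketch below is a generic 1D illustration of this hierarchization, not the fastsg algorithms themselves.

```python
def hat(x, center, width):
    """Piecewise-linear hierarchical hat basis function."""
    return max(0.0, 1.0 - abs(x - center) / width)

def hierarchize(f, levels):
    """Compute hierarchical surpluses for a 1D dyadic grid on (0, 1).

    At level l the new points are x = i * 2^-l for odd i; each surplus
    is f(x) minus the interpolant from all coarser levels at x.
    """
    surpluses = []  # list of (center, width, coefficient)
    for l in range(1, levels + 1):
        w = 2.0 ** -l
        for i in range(1, 2 ** l, 2):  # odd indices only: points new at this level
            x = i * w
            coarse = sum(c * hat(x, ctr, wd) for ctr, wd, c in surpluses)
            surpluses.append((x, w, f(x) - coarse))
    return surpluses

def interpolate(surpluses, x):
    return sum(c * hat(x, ctr, wd) for ctr, wd, c in surpluses)

f = lambda x: x * (1.0 - x)
surp = hierarchize(f, levels=5)
# The hierarchical interpolant reproduces f exactly at every grid point.
```

Sparse grids combine such 1D hierarchies over many dimensions while truncating the cross terms, which is what keeps the decompression of high-dimensional data tractable.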

  20. Comparison of spatial interpolation techniques to predict soil properties in the colombian piedmont eastern plains

    Directory of Open Access Journals (Sweden)

    Mauricio Castro Franco

    2017-07-01

    Context: Interpolating soil properties at field scale in the Colombian piedmont eastern plains is challenging due to the highly complex and variable nature of some processes and the effects of soil, land use, and management. While interpolation techniques are being adapted to include auxiliary information on these effects, the soil data are often difficult to predict using conventional techniques of spatial interpolation. Method: In this paper, we evaluated and compared six spatial interpolation techniques: Inverse Distance Weighting (IDW), Spline, Ordinary Kriging (KO), Universal Kriging (UK), Cokriging (Ckg), and Residual Maximum Likelihood-Empirical Best Linear Unbiased Predictor (REML-EBLUP), using a conditioned Latin hypercube as the sampling strategy. The ancillary information used in Ckg and REML-EBLUP consisted of indices calculated from a digital elevation model (MDE). The random forest algorithm was used to select the most important terrain index for each soil property. Error metrics from cross validation were used to assess the interpolations. Results: The results support the underlying assumption that the conditioned Latin hypercube adequately captured the full distribution of the ancillary variables under the conditions of the Colombian piedmont eastern plains. They also suggest that Ckg and REML-EBLUP perform best in the prediction of most of the evaluated soil properties. Conclusions: Mixed interpolation techniques using auxiliary soil information and terrain indices provided a significant improvement in the prediction of soil properties in comparison with the other techniques.
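Of the six techniques compared, IDW is the simplest to state: each sample is weighted by the inverse of its distance to the query point raised to a power. A minimal sketch, with hypothetical soil-pH samples invented for illustration:

```python
def idw(points, values, query, power=2.0):
    """Inverse Distance Weighting interpolation in 2D.

    Each sample is weighted by 1/d^power; a query that coincides with a
    sample returns that sample's value exactly.
    """
    weights, num = [], 0.0
    for (x, y), v in zip(points, values):
        d2 = (x - query[0]) ** 2 + (y - query[1]) ** 2
        if d2 == 0.0:
            return v                    # exact at sample locations
        w = d2 ** (-power / 2.0)        # 1 / d^power
        weights.append(w)
        num += w * v
    return num / sum(weights)

# Hypothetical soil-pH samples at field coordinates (for illustration only).
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
ph = [5.2, 5.8, 6.0, 6.6]
center = idw(pts, ph, (0.5, 0.5))
# By symmetry, the centre estimate equals the plain mean of the four samples.
```

IDW estimates always stay within the range of the observed values, which is also why it cannot exploit the ancillary terrain indices that give Ckg and REML-EBLUP their advantage.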

  1. Edge-detect interpolation for direct digital periapical images

    International Nuclear Information System (INIS)

    Song, Nam Kyu; Koh, Kwang Joon

    1998-01-01

    The purpose of this study was to aid the use of direct digital periapical images by means of edge-sensitive interpolation. Twenty digital periapical images were processed with four methods: pixel replication, linear non-interpolation, linear interpolation, and edge-sensitive interpolation. The results were as follows: 1. Pixel replication showed a blocking artifact and serious image distortion. 2. Linear interpolation showed a smoothing effect on the edges. 3. Edge-sensitive interpolation overcame the smoothing effect on the edges and produced better images.
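The contrast between the first of these schemes and plain linear interpolation can be seen in a few lines of code. This is a generic illustration, not the study's implementation: pixel replication preserves hard steps (hence the blocking artifact), while bilinear interpolation smooths across the edge.

```python
def replicate(img, factor):
    """Pixel replication: each pixel becomes a factor x factor block,
    which is what produces the blocking artifact noted in the study."""
    return [[row[x // factor] for x in range(len(row) * factor)]
            for row in img for _ in range(factor)]

def bilinear(img, factor):
    """Bilinear interpolation: weights the four surrounding pixels,
    smoothing edges instead of blocking them."""
    h, w = len(img), len(img[0])
    out = []
    for Y in range(h * factor):
        y = min(Y / factor, h - 1)
        y0 = min(int(y), h - 2)
        fy = y - y0
        row = []
        for X in range(w * factor):
            x = min(X / factor, w - 1)
            x0 = min(int(x), w - 2)
            fx = x - x0
            a = img[y0][x0] * (1 - fx) + img[y0][x0 + 1] * fx
            b = img[y0 + 1][x0] * (1 - fx) + img[y0 + 1][x0 + 1] * fx
            row.append(a * (1 - fy) + b * fy)
        out.append(row)
    return out

edge = [[0, 100], [0, 100]]   # a hard vertical edge
blocky = replicate(edge, 2)   # edge stays a hard step: 0 0 100 100
smooth = bilinear(edge, 2)    # intermediate values appear across the edge
```

An edge-sensitive scheme would go one step further: detect the edge and restrict the interpolation weights to one side of it, avoiding the smoothing that bilinear introduces here.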

  2. MODIS Snow Cover Recovery Using Variational Interpolation

    Science.gov (United States)

    Tran, H.; Nguyen, P.; Hsu, K. L.; Sorooshian, S.

    2017-12-01

    Cloud obscuration is one of the major problems that limit the use of satellite images in general and of NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) global Snow-Covered Area (SCA) products in particular. Among the approaches to the problem, the Variational Interpolation (VI) algorithm, proposed by Xia et al., 2012, obtains cloud-free dynamic SCA images from MODIS. The method is automatic and robust, but its computational inefficiency is a main drawback that hinders applying it at larger spatial and temporal scales. To overcome this difficulty, this study introduces an improved version of the original VI. The modified VI algorithm integrates the MINimum RESidual (MINRES) iteration (Paige and Saunders, 1975) to prevent the system from breaking down when applied to much broader scales. An experiment demonstrated the crash-proof ability of the new algorithm in comparison with the original VI method, an ability obtained by maintaining the distribution of the weights after solving the linear system. The new VI algorithm was then applied to the whole Contiguous United States (CONUS) over four winter months of 2016 and 2017, and validated using the snow station network (SNOTEL). The resulting cloud-free images capture the dynamical changes of snow with high accuracy, in contrast with the MODIS snow cover maps. Lastly, the algorithm was applied to create a cloud-free image dataset from March 10, 2000 to February 28, 2017, which provides an overview of snow trends over CONUS for nearly two decades. Acknowledgments: We would like to acknowledge NASA, the NOAA Office of Hydrologic Development (OHD), the National Weather Service (NWS), the Cooperative Institute for Climate and Satellites (CICS), the Army Research Office (ARO), ICIWaRM, and UNESCO for supporting this research.

  3. Integrated demonstration of molten salt oxidation with salt recycle for mixed waste treatment

    International Nuclear Information System (INIS)

    Hsu, P.C.

    1997-01-01

    Molten Salt Oxidation (MSO) is a thermal, nonflame process that has the inherent capability of completely destroying organic constituents of mixed wastes, hazardous wastes, and energetic materials while retaining inorganic and radioactive constituents in the salt. For this reason, MSO is considered a promising alternative to incineration for the treatment of a variety of organic wastes. Lawrence Livermore National Laboratory (LLNL) has prepared a facility and constructed an integrated pilot-scale MSO treatment system in which tests and demonstrations are performed under carefully controlled (experimental) conditions. The system consists of a MSO processor with dedicated off-gas treatment, a salt recycle system, feed preparation equipment, and equipment for preparing ceramic final waste forms. This integrated system was designed and engineered based on laboratory experience with a smaller engineering-scale reactor unit and extensive laboratory development on salt recycle and final forms preparation. In this paper we present design and engineering details of the system and discuss its capabilities as well as preliminary process demonstration data. A primary purpose of these demonstrations is identification of the most suitable waste streams and waste types for MSO treatment

  4. Discrete Orthogonal Transforms and Neural Networks for Image Interpolation

    Directory of Open Access Journals (Sweden)

    J. Polec

    1999-09-01

    In this contribution we present transform and neural network approaches to the interpolation of images. From the transform point of view, the principles from [1] are modified for 1st- and 2nd-order interpolation, and we present several new interpolation discrete orthogonal transforms. From the neural network point of view, we present the interpolation capabilities of multilayer perceptrons, using various network configurations for 1st- and 2nd-order interpolation. The results are compared by means of tables.

  5. IT-supported skill-mix change and standardisation in integrated eyecare: lessons from two screening projects in The Netherlands

    Directory of Open Access Journals (Sweden)

    Marleen de Mul

    2007-05-01

    Introduction: Information Technology (IT) has the potential to significantly support skill-mix change and, thereby, to improve the efficiency and effectiveness of integrated care. Theory and methods: IT and skill-mix change share an important precondition: the standardisation of work processes. Standardisation plays a crucial role in IT-supported skill-mix change. It is not a matter of more or less standardisation than in the 'old' situation, but of creating an optimal fit. We used qualitative data from our evaluation of two integrated-care projects in Dutch eyecare to identify domains where this fit is important. Results: While standardisation was needed to delegate screening tasks from physicians to non-physicians, and to assure the quality of the integrated-care process as a whole, tensions arose in three domains: the performance of clinical tasks, the documentation, and the communication between professionals. Dysfunctional standardisation led to dissatisfaction and distrust between the professionals involved in screening. Discussion and conclusion: Although the integration seems promising, much work is needed to ensure a synergistic relationship between skill-mix change and IT. Developing IT-supported skill-mix change by means of standardisation is a matter of tailoring standardisation to fit the situation at hand, while dealing with the local constraints of available technology and organisational context.

  6. Integration of complex-wide mixed low-level waste activities for program acceleration and optimization

    International Nuclear Information System (INIS)

    McKenney, D.E.

    1998-01-01

    In July 1996, the US Department of Energy (DOE) chartered a contractor-led effort to develop a suite of technically defensible, integrated alternatives which would allow the Environmental Management program to accomplish its mission objectives in an accelerated fashion and at a reduced cost. These alternatives, or opportunities, could then be evaluated by DOE and stakeholders for possible implementation, provided precursor requirements (regulatory changes, etc.) could be met and benefits to the Complex realized. This contractor effort initially focused on six waste types, one of which was Mixed Low-Level Waste (MLLW). Many opportunities were identified by the contractor team for integrating MLLW activities across the DOE Complex; these were narrowed to the six with the most promise for implementation and savings: (1) consolidation of individual site analytical services procurement efforts, (2) consolidation of individual site MLLW treatment services procurement efforts, (3) establishment of "de minimis" radioactivity levels, (4) standardization of characterization requirements, (5) increased utilization of existing DOE treatment facilities, and (6) use of a combination of DOE and commercial MLLW disposal capacity. The results of the integration effort showed that by managing MLLW activities across the DOE Complex as a cohesive unit rather than as independent site efforts, the DOE could improve the rate of progress toward meeting its objectives and reduce its overall MLLW program costs. Savings potential for MLLW, if the identified opportunities could be implemented, could total $224 million or more. Implementation of the opportunities could also accelerate the MLLW "work off" schedule across the DOE Complex by five years.

  7. New families of interpolating type IIB backgrounds

    Science.gov (United States)

    Minasian, Ruben; Petrini, Michela; Zaffaroni, Alberto

    2010-04-01

    We construct new families of interpolating two-parameter solutions of type IIB supergravity. These correspond to D3-D5 systems on non-compact six-dimensional manifolds which are T² fibrations over Eguchi-Hanson and multi-center Taub-NUT spaces, respectively. One end of the interpolation corresponds to a solution with only D5 branes and vanishing NS three-form flux. A topology changing transition occurs at the other end, where the internal space becomes a direct product of the four-dimensional surface and the two-torus and the complexified NS-RR three-form flux becomes imaginary self-dual. Depending on the choice of the connections on the torus fibre, the interpolating family has either N = 2 or N = 1 supersymmetry. In the N = 2 case it can be shown that the solutions are regular.

  8. Interpolation of quasi-Banach spaces

    International Nuclear Information System (INIS)

    Tabacco Vignati, A.M.

    1986-01-01

    This dissertation presents a method of complex interpolation for families of quasi-Banach spaces, generalizing the theory for families of Banach spaces introduced by others. Intermediate spaces are characterized in several particular cases using different approaches. The situation when all the spaces have finite dimensions is studied first. The second chapter contains the definitions and main properties of the new interpolation spaces, and an example concerning the Schatten ideals associated with a separable Hilbert space. The case of L^p spaces follows from the maximal operator theory contained in Chapter III. Also introduced is a different method of interpolation for quasi-Banach lattices of functions, and conditions are given to guarantee that the two techniques yield the same result. Finally, the last chapter contains a different, and more direct, approach to the case of Hardy spaces.

  9. Quadratic Interpolation and Linear Lifting Design

    Directory of Open Access Journals (Sweden)

    Joel Solé

    2007-03-01

    A quadratic image interpolation method is presented. The formulation is connected to the optimization of lifting steps, a relation that triggers the exploration of several interpolation possibilities within the same context, using the theory of convex optimization to minimize quadratic functions with linear constraints. The methods take into account knowledge available from a given application. A set of linear equality constraints that relate wavelet bases and coefficients with the underlying signal is introduced in the formulation. As a consequence, the formulation turns out to be adequate for the design of lifting steps. The resulting steps are related to the prediction minimizing the detail signal energy and to the update minimizing the l2-norm of the approximation signal gradient. Results are reported for the interpolation methods in terms of PSNR, and coding results are given for the new update lifting steps.
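The predict/update structure referred to here can be illustrated with the classic LeGall 5/3 lifting pair (a standard textbook example, not the optimized steps designed in the paper): the predict step estimates each odd sample from its even neighbours, so the detail signal vanishes on locally linear data, and the update step preserves the approximation signal's mean.

```python
def lifting_forward(x):
    """One level of the LeGall 5/3 lifting scheme (even-length input).

    Predict: detail = odd - average of even neighbours (zero for linear data).
    Update: approx = even + quarter-sum of adjacent details (preserves mean).
    Boundaries are handled by clamping indices (symmetric-like extension).
    """
    even, odd = x[0::2], x[1::2]
    n = len(odd)
    detail = [odd[i] - (even[i] + even[min(i + 1, len(even) - 1)]) / 2.0
              for i in range(n)]
    approx = [even[i] + (detail[max(i - 1, 0)] + detail[min(i, n - 1)]) / 4.0
              for i in range(len(even))]
    return approx, detail

def lifting_inverse(approx, detail):
    """Undo the update, then the predict: lifting is invertible by design."""
    n = len(detail)
    even = [approx[i] - (detail[max(i - 1, 0)] + detail[min(i, n - 1)]) / 4.0
            for i in range(len(approx))]
    odd = [detail[i] + (even[i] + even[min(i + 1, len(even) - 1)]) / 2.0
           for i in range(n)]
    x = [0.0] * (len(even) + len(odd))
    x[0::2], x[1::2] = even, odd
    return x

sig = [3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0, 17.0]
a, d = lifting_forward(sig)
# On a linear ramp the interior details are exactly zero; reconstruction
# from (a, d) recovers the signal.
```

The paper's contribution fits into exactly these two slots: it designs the predict and update steps by solving constrained quadratic minimizations instead of using fixed coefficients like the 5/3 pair above.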

  10. Optimized Quasi-Interpolators for Image Reconstruction.

    Science.gov (United States)

    Sacht, Leonardo; Nehab, Diego

    2015-12-01

    We propose new quasi-interpolators for the continuous reconstruction of sampled images, combining a narrowly supported piecewise-polynomial kernel and an efficient digital filter. In other words, our quasi-interpolators fit within the generalized sampling framework and are straightforward to use. We go against standard practice and optimize for approximation quality over the entire Nyquist range, rather than focusing exclusively on the asymptotic behavior as the sample spacing goes to zero. In contrast to previous work, we jointly optimize with respect to all degrees of freedom available in both the kernel and the digital filter. We consider linear, quadratic, and cubic schemes, offering different tradeoffs between quality and computational cost. Experiments with compounded rotations and translations over a range of input images confirm that, due to the additional degrees of freedom and the more realistic objective function, our new quasi-interpolators perform better than the state of the art, at a similar computational cost.

  11. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.

    2014-12-01

    In this paper, we propose a multiscale empirical interpolation method for solving nonlinear multiscale partial differential equations. The proposed method combines empirical interpolation techniques and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM). To solve nonlinear equations, the GMsFEM is used to represent the solution on a coarse grid with multiscale basis functions computed offline. Computing the GMsFEM solution involves calculating the system residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully resolved fine-scale one. The empirical interpolation method uses basis functions which are built by sampling the nonlinear function we want to approximate a limited number of times. The coefficients needed for this approximation are computed in the offline stage by inverting an inexpensive linear system. The proposed multiscale empirical interpolation techniques: (1) divide computing the nonlinear function into coarse regions; (2) evaluate contributions of nonlinear functions in each coarse region taking advantage of a reduced-order representation of the solution; and (3) introduce multiscale proper-orthogonal-decomposition techniques to find appropriate interpolation vectors. We demonstrate the effectiveness of the proposed methods on several nonlinear multiscale PDEs that are solved with Newton's method and fully-implicit time marching schemes. Our numerical results show that the proposed methods provide a robust framework for solving nonlinear multiscale PDEs on a coarse grid with bounded error and significant computational cost reduction.
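The index-selection idea behind empirical interpolation can be sketched with the standard discrete empirical interpolation method (DEIM); this is a generic illustration, not the paper's multiscale variant. Given a basis for the nonlinear term, a greedy loop picks one sampling row per basis vector, after which the full term is recovered from only those sampled entries:

```python
import numpy as np

def deim_indices(U):
    """Greedy DEIM point selection for an n x m basis U.

    Each new index is where the new basis vector is worst represented
    by the previous ones, which keeps the small interpolation system
    well conditioned.
    """
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, U.shape[1]):
        c = np.linalg.solve(U[np.ix_(p, range(l))], U[p, l])
        r = U[:, l] - U[:, :l] @ c          # residual of the new basis vector
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

def deim_approx(U, p, f_at_p):
    """Reconstruct the full vector from its entries at the DEIM points only."""
    return U @ np.linalg.solve(U[p, :], f_at_p)

# Basis of three smooth "snapshot" modes on a fine grid of 200 points.
x = np.linspace(0.0, 1.0, 200)
U = np.linalg.qr(np.column_stack([np.sin(np.pi * x),
                                  np.sin(2 * np.pi * x),
                                  np.sin(3 * np.pi * x)]))[0]
p = deim_indices(U)
f = 2.0 * np.sin(np.pi * x) - 0.5 * np.sin(3 * np.pi * x)  # lies in span(U)
approx = deim_approx(U, p, f[p])
# A function in the span of U is recovered exactly from just 3 samples.
```

This is the cost-reduction mechanism the abstract describes: the nonlinear term is evaluated at a handful of points (coarse-scale cost) instead of on the full fine grid.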

  12. Positivity Preserving Interpolation Using Rational Bicubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2015-01-01

    This paper discusses positivity-preserving interpolation for positive surface data, extending the C1 rational cubic spline interpolant of Karim and Kong to the bivariate case. The partially blended rational bicubic spline has 12 parameters, 8 of which are free. The sufficient conditions for positivity are derived on every four-boundary-curve network on the rectangular patch. A detailed numerical comparison with existing schemes is also given. Based on the Root Mean Square Error (RMSE), our partially blended rational bicubic spline is on a par with the established methods.

  13. Interpolation algorithm for asynchronous ADC-data

    Directory of Open Access Journals (Sweden)

    S. Bramburger

    2017-09-01

    This paper presents a modified interpolation algorithm for signals with variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT) algorithm is extended to operate with a continuous data stream. An additional preprocessing of data with constant and linear sections, and a weighted overlap of the signals transformed step by step into the spectral domain, improve the reconstruction of the asynchronous ADC signal. The interpolation method can be used if asynchronous ADC data is fed into synchronous digital signal processing.

  14. A Hybrid Method for Interpolating Missing Data in Heterogeneous Spatio-Temporal Datasets

    Directory of Open Access Journals (Sweden)

    Min Deng

    2016-02-01

    Space-time interpolation is widely used to estimate missing or unobserved values in datasets integrating both spatial and temporal records. Although space-time interpolation plays a key role in space-time modeling, existing methods were mainly developed for space-time processes that exhibit stationarity in space and time, and it remains challenging to model the heterogeneity of space-time data in the interpolation model. To overcome this limitation, in this study a novel space-time interpolation method considering both spatial and temporal heterogeneity is developed for estimating missing data in space-time datasets. The interpolation operation is first implemented in the spatial and temporal dimensions. Heterogeneous covariance functions are constructed to obtain the best linear unbiased estimates in the spatial and temporal dimensions. Spatial and temporal correlations are then considered to combine the interpolation results in the two dimensions to estimate the missing data. The proposed method is tested on annual average temperature and precipitation data in China (1984-2009). Experimental results show that, for these datasets, the proposed method outperforms three state-of-the-art methods: spatio-temporal kriging, spatio-temporal inverse distance weighting, and the point estimation model of biased hospitals-based area disease estimation.

  15. Implementation of High Time Delay Accuracy of Ultrasonic Phased Array Based on Interpolation CIC Filter.

    Science.gov (United States)

    Liu, Peilu; Li, Xinghua; Li, Haopeng; Su, Zhikun; Zhang, Hongxu

    2017-10-12

    In order to improve the accuracy of ultrasonic phased array focusing time delay, we analyze the original interpolation Cascade-Integrator-Comb (CIC) filter and propose an 8× interpolation CIC filter parallel algorithm, so that interpolation and multichannel decomposition can be processed simultaneously. Moreover, we summarize the general formula of the arbitrary-multiple interpolation CIC filter parallel algorithm and establish an ultrasonic phased array focusing time delay system based on it. By improving the algorithmic structure, additions are reduced by 12.5% and multiplications by 29.2%, while computation remains fast. To address the known shortcomings of the CIC filter, we compensate it: the compensated CIC filter's passband is flatter, its transition band becomes steeper, and its stopband attenuation increases. Finally, we verified the feasibility of this algorithm on a Field-Programmable Gate Array (FPGA). With a system clock of 125 MHz, after 8× interpolation filtering and decomposition, the time delay accuracy of the defect echo reaches 1 ns. Simulation and experimental results both show that the proposed algorithm is feasible. Because of its fast calculation, small computational load and high resolution, this algorithm is especially suitable for applications requiring high time delay accuracy and fast detection.
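For reference, a plain (non-parallel) N-stage CIC interpolator chains N comb (differencing) stages at the input rate, a zero-stuffing upsampler by R, and N integrator stages at the output rate; the paper's 8× parallel algorithm restructures exactly this computation so the output phases are produced concurrently. A minimal single-channel model (an illustration, not the FPGA implementation):

```python
def cic_interpolate(x, R, N):
    """Reference model of an N-stage CIC interpolation filter.

    Comb stages y[n] = x[n] - x[n-1] run at the low rate, then the
    signal is upsampled by R with zero-stuffing, and N integrator
    (running-sum) stages run at the high rate.
    """
    # Comb stages at the input rate (differential delay of 1).
    for _ in range(N):
        x = [a - b for a, b in zip(x, [0] + x[:-1])]
    # Zero-stuffing upsampler: insert R - 1 zeros after each sample.
    hi = []
    for s in x:
        hi.extend([s] + [0] * (R - 1))
    # Integrator stages at the output rate.
    for _ in range(N):
        acc, out = 0, []
        for s in hi:
            acc += s
            out.append(acc)
        hi = out
    return hi

# With N = 1, the CIC interpolator degenerates to a zero-order hold:
print(cic_interpolate([1, 2, 3], R=4, N=1))
# [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]
```

Because the filter uses only additions and delays (no multipliers), it is the standard choice for high-rate interpolation on FPGAs; the passband droop it introduces is what the compensation filter discussed above corrects.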

  17. Validation of China-wide interpolated daily climate variables from 1960 to 2011

    Science.gov (United States)

    Yuan, Wenping; Xu, Bing; Chen, Zhuoqi; Xia, Jiangzhou; Xu, Wenfang; Chen, Yang; Wu, Xiaoxu; Fu, Yang

    2015-02-01

    on the performance of these variables in estimating daily variations, interannual variability, and extreme events. Although longitude, latitude, and elevation data are included in the model, additional information, such as topography and cloud cover, should be integrated into the interpolation algorithm to improve performance in estimating wind speed, atmospheric pressure, and precipitation.

  18. Medical Student Research: An Integrated Mixed-Methods Systematic Review and Meta-Analysis.

    Directory of Open Access Journals (Sweden)

    Mohamed Amgad

    Despite the rapidly declining number of physician-investigators, there is still no consistent structure within medical education for involving medical students in research. We conducted an integrated mixed-methods systematic review and meta-analysis of published studies about medical students' participation in research, and evaluated the evidence in order to guide policy decision-making on this issue. We followed the PRISMA statement guidelines during the preparation of this review and meta-analysis. We searched various databases as well as the bibliographies of the included studies between March 2012 and September 2013, and identified all relevant quantitative and qualitative studies assessing the effect of medical student participation in research, without restrictions regarding study design or publication date. Prespecified outcome-specific quality criteria were used to judge the admission of each quantitative outcome into the meta-analysis. Initial screening of titles and abstracts resulted in the retrieval of 256 articles for full-text assessment; eventually, 79 articles were included in our study, including eight qualitative studies. An integrated approach was used to combine quantitative and qualitative studies into a single synthesis, and once all included studies were identified, a data-driven thematic analysis was performed. Medical student participation in research is associated with improved short- and long-term scientific productivity, more informed career choices and improved knowledge about, interest in, and attitudes towards research. Financial worries, gender, having a higher degree (MSc or PhD) before matriculation and perceived competitiveness of the residency of choice are among the factors that affect the engagement of medical students in research and/or their scientific productivity.
Intercalated BSc degrees, mandatory graduation theses and curricular research components may help in standardizing research education during

  19. Evaluating patient care communication in integrated care settings: application of a mixed method approach in cerebral palsy programs

    NARCIS (Netherlands)

    Gulmans, J.; Gulmans, J.; Vollenbroek-Hutten, Miriam Marie Rosé; van Gemert-Pijnen, Julia E.W.C.; van Harten, Willem H.

    2009-01-01

    Objective. In this study, we evaluated patient care communication in the integrated care setting of children with cerebral palsy in three Dutch regions in order to identify relevant communication gaps experienced by both parents and involved professionals. - Design. A three-step mixed method

  20. The Contribution of Mixed Methods Research to the Field of Childhood Trauma: A Narrative Review focused on Data Integration

    NARCIS (Netherlands)

    Boeije, H.R.; Slagt, M.I.; van Wesel, F.

    2013-01-01

    In mixed methods research (MMR), integrating the quantitative and the qualitative components of a study is assumed to result in additional knowledge (or "yield"). This narrative review examines the extent to which MMR is used in the field of childhood trauma and provides directions for improving

  1. Students' General Knowledge of the Learning Process: A Mixed Methods Study Illustrating Integrated Data Collection and Data Consolidation

    Science.gov (United States)

    van Velzen, Joke H.

    2018-01-01

    There were two purposes for this mixed methods study: to investigate (a) the realistic meaning of awareness and understanding as the underlying constructs of general knowledge of the learning process and (b) a procedure for data consolidation. The participants were 11th-grade high school and first-year university students. Integrated data…

  2. The contribution of mixed methods research to the field of childhood trauma: a narrative review focused on data integration.

    NARCIS (Netherlands)

    Boeije, H.; Slagt, M.; Wesel, F. van

    2013-01-01

    In mixed methods research (MMR), integrating the quantitative and the qualitative components of a study is assumed to result in additional knowledge (or “yield”). This narrative review examines the extent to which MMR is used in the field of childhood trauma and provides directions for improving

  3. Independent peer review panel report on the integrated nonthermal treatment systems study and the comparison of integrated thermal and integrated nonthermal treatment systems for mixed low level waste

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-08-01

The US Department of Energy's (DOE) Office of Environmental Management (EM) Office of Science and Technology (OST) has conducted studies of integrated thermal treatment systems and integrated nonthermal treatment systems (INTS) for treating contact handled, alpha and non-alpha mixed low level radioactive waste (MLLW). The MLLW in the DOE complex consists of a wide variety of organic and inorganic solids and liquids contaminated with radioactive substances. Treatment systems are needed to destroy organic material and stabilize residues prior to land disposal. In May 1996 the Deputy Assistant Secretary for OST appointed an Independent Peer Review Panel to: (1) review and comment on the INTS Study; (2) make recommendations on the most promising thermal and nonthermal treatment systems; (3) make recommendations on research and development necessary to prove the performance of nonthermal and thermal technologies; and (4) review and comment on the preliminary draft of the ITTS/INTS Comparison Report. This report presents the primary conclusions and recommendations based on the review of the INTS study and the comparison report. System selection, overviews, comparisons, cost estimations and sensitivity analyses, and recommended R and D engineering needs are then described and discussed.

  4. Independent peer review panel report on the integrated nonthermal treatment systems study and the comparison of integrated thermal and integrated nonthermal treatment systems for mixed low level waste

    International Nuclear Information System (INIS)

    1996-08-01

    The US Department of Energy's (DOE) Office of Environmental Management (EM) Office of Science and Technology (OST) has conducted studies of integrated thermal treatment systems and integrated nonthermal treatment systems (INTS) for treating contact handled, alpha and non-alpha mixed low level radioactive waste (MLLW). The MLLW in the DOE complex consists of a wide variety of organic and inorganic solids and liquids contaminated with radioactive substances. Treatment systems are needed to destroy organic material and stabilize residues prior to land disposal. In May 1996 the Deputy Assistant Secretary for OST appointed an Independent Peer Review Panel to: (1) review and comment on the INTS Study; (2) make recommendations on the most promising thermal and nonthermal treatment systems; (3) make recommendations on research and development necessary to prove the performance of nonthermal and thermal technologies; and (4) review and comment on the preliminary draft of the ITTS/INTS Comparison Report. This report presents the primary conclusions and recommendations based on the review of the INTS study and the comparison report. System selection, overviews, comparisons, cost estimations and sensitivity analyses, and recommended R and D engineering needs are then described and discussed

  5. Multiscale empirical interpolation for solving nonlinear PDEs

    KAUST Repository

    Calo, Victor M.; Efendiev, Yalchin R.; Galvis, Juan; Ghommem, Mehdi

    2014-01-01

    residuals and Jacobians on the fine grid. We use empirical interpolation concepts to evaluate these residuals and Jacobians of the multiscale system with a computational cost which is proportional to the size of the coarse-scale problem rather than the fully

  6. Fast image interpolation via random forests.

    Science.gov (United States)

    Huang, Jun-Jie; Siu, Wan-Chi; Liu, Tian-Rui

    2015-10-01

This paper proposes a two-stage framework for fast image interpolation via random forests (FIRF). The proposed FIRF method achieves high accuracy while requiring little computation. The underlying idea is to apply random forests to classify the natural image patch space into numerous subspaces and to learn a linear regression model for each subspace that maps a low-resolution image patch to the corresponding high-resolution patch. The FIRF framework consists of two stages. Stage 1 removes most of the ringing and aliasing artifacts in the initial bicubic interpolated image, while Stage 2 further refines the Stage 1 interpolated image. By varying the number of decision trees in the random forests and the number of stages applied, the proposed FIRF method can realize computationally scalable image interpolation. Extensive experimental results show that the proposed FIRF(3, 2) method achieves more than 0.3 dB improvement in peak signal-to-noise ratio over the state-of-the-art nonlocal autoregressive modeling (NARM) method. Moreover, the proposed FIRF(1, 1) obtains similar or better results than NARM while requiring only 0.3% of its computation time.
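The classify-then-regress idea behind FIRF can be shown with a deliberately simplified sketch. Here a single threshold test stands in for the trained random forest, and one least-squares linear map is fit per subspace on synthetic "patch" vectors; all names and data are illustrative, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "patch" vectors: within each of two subspaces the high-resolution
# target depends linearly on the low-resolution input, as FIRF assumes.
n, d = 400, 4
X = rng.normal(size=(n, d))
labels = (X[:, 0] > 0).astype(int)        # stand-in for the forest's classification
W = [rng.normal(size=(d, d)) for _ in range(2)]
Y = np.stack([X[i] @ W[labels[i]] for i in range(n)])

# "Training": one least-squares linear regressor per subspace.
models = []
for c in range(2):
    idx = labels == c
    coef, *_ = np.linalg.lstsq(X[idx], Y[idx], rcond=None)
    models.append(coef)

def upscale(patch):
    """Classify the patch into a subspace, then apply that subspace's map."""
    c = int(patch[0] > 0)
    return patch @ models[c]

err = float(np.max(np.abs(upscale(X[0]) - Y[0])))
```

Because the synthetic data are exactly piecewise-linear, each regressor recovers its subspace's map and the reconstruction error is negligible; real image patches only satisfy this approximately, which is why FIRF needs many subspaces.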

  7. Spectral Compressive Sensing with Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Dadkhahi, Hamid; F. Duarte, Marco

    2013-01-01

In this paper, we introduce a greedy recovery algorithm that leverages a band-exclusion function and a polar interpolation function to address these two issues in spectral compressive sensing. Our algorithm is geared towards line spectral estimation from compressive measurements and outperforms most existing...

  8. Technique for image interpolation using polynomial transforms

    NARCIS (Netherlands)

    Escalante Ramírez, B.; Martens, J.B.; Haskell, G.G.; Hang, H.M.

    1993-01-01

    We present a new technique for image interpolation based on polynomial transforms. This is an image representation model that analyzes an image by locally expanding it into a weighted sum of orthogonal polynomials. In the discrete case, the image segment within every window of analysis is

  9. Classification of polynomial integrable systems of mixed scalar and vector evolution equations: I

    International Nuclear Information System (INIS)

    Tsuchida, Takayuki; Wolf, Thomas

    2005-01-01

    We perform a classification of integrable systems of mixed scalar and vector evolution equations with respect to higher symmetries. We consider polynomial systems that are homogeneous under a suitable weighting of variables. This paper deals with the KdV weighting, the Burgers (or potential KdV or modified KdV) weighting, the Ibragimov-Shabat weighting and two unfamiliar weightings. The case of other weightings will be studied in a subsequent paper. Making an ansatz for undetermined coefficients and using a computer package for solving bilinear algebraic systems, we give the complete lists of second-order systems with a third-order or a fourth-order symmetry and third-order systems with a fifth-order symmetry. For all but a few systems in the lists, we show that the system (or, at least a subsystem of it) admits either a Lax representation or a linearizing transformation. A thorough comparison with recent work of Foursov and Olver is made

  10. Integrative health care - Toward a common understanding: A mixed method study.

    Science.gov (United States)

    Leach, Matthew J; Wiese, Marlene; Thakkar, Manisha; Agnew, Tamara

    2018-02-01

To generate a multidisciplinary stakeholder-informed definition of integrative health care (IHC). A mixed-method study design was used, employing focus groups/semi-structured interviews (phase-1) and document analysis (phases 2 and 3). Phase-1 recruited a purposive sample of Australian health consumers/health providers. Phase-2 interrogated websites of international IHC organisations for definitions of IHC. Phase-3 systematically searched bibliographic databases for articles defining IHC. Data were analysed using thematic analysis. Data were drawn from 54 health consumers/providers (phase-1), 23 IHC organisation webpages (phase-2) and 23 eligible articles (phase-3). Seven themes emerged from the data. Consensus was reached on a single, 65-word definition of IHC. An unambiguous definition of IHC is critical to establishing a clearer identity for IHC, as well as providing greater clarity for consumers, health providers and policy makers. In recognising the need for a clearer description, we propose a scientifically-grounded, multi-disciplinary stakeholder-informed definition of IHC. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Integrated process analysis of treatment systems for mixed low level waste

    International Nuclear Information System (INIS)

    Cooley, C.R.; Schwinkendorf, W.E.; Bechtold, T.E.

    1997-10-01

Selection of technologies to be developed for treatment of DOE's mixed low level waste (MLLW) requires knowledge and understanding of the expected costs, schedules, risks, performance, and reliability of the total engineered systems that use these technologies. Thus, an integrated process analysis program was undertaken to identify the characteristics and needs of several thermal and nonthermal systems. For purposes of comparison, all systems were conceptually designed for a single facility processing the same amount of waste at the same rate. Thirty treatment systems were evaluated, ranging from standard incineration to innovative thermal systems and innovative nonthermal chemical treatment. Treating 236 million pounds of waste in 20 years through a central treatment facility was found to be the least costly option, with total life-cycle costs ranging from $2.1 billion for a metal melting system to $3.9 billion for a nonthermal acid digestion system. Little cost difference exists among nonthermal systems or among thermal systems. Significant cost savings could be achieved by working towards maximum on-line treatment time per year; vitrifying the final waste residue; decreasing front-end characterization, segregation, and sizing requirements; using contaminated soil as the vitrifying agent; and delisting the final vitrified waste form from Resource Conservation and Recovery Act (RCRA) Land Disposal Restriction (LDR) requirements

  12. Classification of polynomial integrable systems of mixed scalar and vector evolution equations: I

    Energy Technology Data Exchange (ETDEWEB)

    Tsuchida, Takayuki [Department of Physics, Kwansei Gakuin University, 2-1 Gakuen, Sanda 669-1337 (Japan); Wolf, Thomas [Department of Mathematics, Brock University, St Catharines, ON L2S 3A1 (Canada)

    2005-09-02

    We perform a classification of integrable systems of mixed scalar and vector evolution equations with respect to higher symmetries. We consider polynomial systems that are homogeneous under a suitable weighting of variables. This paper deals with the KdV weighting, the Burgers (or potential KdV or modified KdV) weighting, the Ibragimov-Shabat weighting and two unfamiliar weightings. The case of other weightings will be studied in a subsequent paper. Making an ansatz for undetermined coefficients and using a computer package for solving bilinear algebraic systems, we give the complete lists of second-order systems with a third-order or a fourth-order symmetry and third-order systems with a fifth-order symmetry. For all but a few systems in the lists, we show that the system (or, at least a subsystem of it) admits either a Lax representation or a linearizing transformation. A thorough comparison with recent work of Foursov and Olver is made.

  13. An Integrated Approach to Locality-Conscious Processor Allocation and Scheduling of Mixed-Parallel Applications

    Energy Technology Data Exchange (ETDEWEB)

    Vydyanathan, Naga; Krishnamoorthy, Sriram; Sabin, Gerald M.; Catalyurek, Umit V.; Kurc, Tahsin; Sadayappan, Ponnuswamy; Saltz, Joel H.

    2009-08-01

Complex parallel applications can often be modeled as directed acyclic graphs of coarse-grained application tasks with dependences. These applications exhibit both task- and data-parallelism, and combining the two (also called mixed parallelism) has been shown to be an effective execution model for them. In this paper, we present an algorithm to compute the appropriate mix of task- and data-parallelism required to minimize the parallel completion time (makespan) of these applications. In other words, our algorithm determines the set of tasks that should be run concurrently and the number of processors to be allocated to each task. The processor allocation and scheduling decisions are made in an integrated manner and are based on several factors such as the structure of the task graph, the runtime estimates and scalability characteristics of the tasks, and the inter-task data communication volumes. A locality-conscious scheduling strategy is used to improve inter-task data reuse. Evaluation through simulations and actual executions of task graphs derived from real applications, as well as synthetic graphs, shows that our algorithm consistently generates schedules with lower makespan than CPR and CPA, two previously proposed scheduling algorithms. Our algorithm also produces schedules with lower makespan than pure task- and data-parallel schedules. For task graphs with known optimal schedules or lower bounds on the makespan, our algorithm generates schedules that are closer to the optima than other scheduling approaches.

  14. Mixing-to-eruption timescales: an integrated model combining numerical simulations and high-temperature experiments with natural melts

    Science.gov (United States)

    Montagna, Chiara; Perugini, Diego; De Campos, Christina; Longo, Antonella; Dingwell, Donald Bruce; Papale, Paolo

    2015-04-01

Arrival of magma from depth into shallow reservoirs and the associated mixing processes have been documented as possible triggers of explosive eruptions. Quantifying the time from the beginning of mixing to eruption is of fundamental importance in volcanology in order to place constraints on the possible onset of a new eruption. Here we integrate numerical simulations and high-temperature experiments performed with natural melts, aiming to identify mixing-to-eruption timescales. We performed two-dimensional numerical simulations of the arrival of gas-rich magmas into shallow reservoirs, solving the fluid dynamics for the two interacting magmas and evaluating the space-time evolution of the physical properties of the mixture. Convection and mingling develop quickly in the chamber and the feeding conduit/dyke. Over time scales of hours, the magmas in the reservoir appear to have mingled throughout, and convective patterns become harder to identify. High-temperature magma mixing experiments have been performed using a centrifuge, with basaltic and phonolitic melts from Campi Flegrei (Italy) as initial end-members. Concentration Variance Decay (CVD), an inevitable consequence of magma mixing, is exponential with time. The rate of CVD is a powerful new geochronometer for the time from mixing to eruption/quenching. The mingling-to-eruption times of three explosive volcanic eruptions from Campi Flegrei (Italy) yield durations on the order of tens of minutes. These results are in perfect agreement with the numerical simulations, which suggest a maximum mixing time of a few hours to obtain a hybrid mixture. We show that integrating numerical simulations and high-temperature experiments can provide unprecedented results about mixing processes in volcanic systems. The combined application of numerical simulations and the CVD geochronometer to the eruptive products of active volcanoes could be decisive for the preparation of hazard mitigation during volcanic unrest.
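The CVD geochronometer described above reduces to simple arithmetic: if concentration variance decays as sigma2(t) = sigma2_0 * exp(-R*t), a measured variance yields the elapsed time once the rate R is known. A minimal sketch with invented numbers (the rate and variances are illustrative, not the paper's data):

```python
import numpy as np

# Synthetic Concentration Variance Decay: sigma2(t) = sigma2_0 * exp(-R * t).
# The rate and variances below are invented for illustration only.
R_true, s0 = 0.12, 1.0                    # decay rate (1/min), initial variance
t = np.linspace(0.0, 30.0, 16)
sigma2 = s0 * np.exp(-R_true * t)

# The decay rate is the (negated) slope of log-variance against time.
slope, log_s0 = np.polyfit(t, np.log(sigma2), 1)
R_fit = -slope

# Time from mixing to quenching implied by a variance measured in the product:
sigma2_obs = 0.05
t_mix = np.log(np.exp(log_s0) / sigma2_obs) / R_fit   # about 25 minutes here
```

In practice R would be calibrated from time-series mixing experiments such as the centrifuge runs described above, and sigma2_obs measured from compositional profiles in the eruptive products.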

  15. Fast Inverse Distance Weighting-Based Spatiotemporal Interpolation: A Web-Based Application of Interpolating Daily Fine Particulate Matter PM2.5 in the Contiguous U.S. Using Parallel Programming and k-d Tree

    Directory of Open Access Journals (Sweden)

    Lixin Li

    2014-09-01

Full Text Available Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate
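The "extension approach" amounts to scaling the time axis by a factor and running ordinary IDW in the resulting 3-D space-time domain. A brute-force numpy sketch with toy data follows; at the paper's scale the neighbour search would use a k-d tree (e.g. scipy.spatial.cKDTree) instead of a linear scan, and the function and sample values here are invented for illustration:

```python
import numpy as np

def idw_spacetime(query, pts, vals, c=1.0, k=8, p=2.0):
    """IDW in the extension approach: scaled time becomes a third axis.

    pts is an (n, 3) array of (x, y, t); c weighs time against space.
    At production scale the neighbour search would use a k-d tree
    (e.g. scipy.spatial.cKDTree) instead of this brute-force scan.
    """
    w3 = np.array([1.0, 1.0, c])
    d = np.linalg.norm(pts * w3 - np.asarray(query, float) * w3, axis=1)
    if np.any(d == 0.0):                  # exact space-time hit: return the datum
        return float(vals[np.argmin(d)])
    nearest = np.argsort(d)[:k]
    w = 1.0 / d[nearest] ** p
    return float(np.sum(w * vals[nearest]) / np.sum(w))

# Toy PM2.5-like samples (invented): (x, y, day) -> concentration
pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 1], [1, 1, 1]], float)
vals = np.array([10.0, 12.0, 11.0, 13.0])
est = idw_spacetime([0.5, 0.5, 0.5], pts, vals, c=1.0, k=4)   # equidistant: 11.5
```

The factor c is exactly the paper's assumption knob: c = 1 treats one time unit as one space unit, while larger c makes temporally distant samples count for less.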

  16. Fast Inverse Distance Weighting-Based Spatiotemporal Interpolation: A Web-Based Application of Interpolating Daily Fine Particulate Matter PM2.5 in the Contiguous U.S. Using Parallel Programming and k-d Tree

    Science.gov (United States)

    Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard

    2014-01-01

    Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation

17. Fast inverse distance weighting-based spatiotemporal interpolation: a web-based application of interpolating daily fine particulate matter PM2.5 in the contiguous U.S. using parallel programming and k-d tree.

    Science.gov (United States)

    Li, Lixin; Losser, Travis; Yorke, Charles; Piltner, Reinhard

    2014-09-03

    Epidemiological studies have identified associations between mortality and changes in concentration of particulate matter. These studies have highlighted the public concerns about health effects of particulate air pollution. Modeling fine particulate matter PM2.5 exposure risk and monitoring day-to-day changes in PM2.5 concentration is a critical step for understanding the pollution problem and embarking on the necessary remedy. This research designs, implements and compares two inverse distance weighting (IDW)-based spatiotemporal interpolation methods, in order to assess the trend of daily PM2.5 concentration for the contiguous United States over the year of 2009, at both the census block group level and county level. Traditionally, when handling spatiotemporal interpolation, researchers tend to treat space and time separately and reduce the spatiotemporal interpolation problems to a sequence of snapshots of spatial interpolations. In this paper, PM2.5 data interpolation is conducted in the continuous space-time domain by integrating space and time simultaneously, using the so-called extension approach. Time values are calculated with the help of a factor under the assumption that spatial and temporal dimensions are equally important when interpolating a continuous changing phenomenon in the space-time domain. Various IDW-based spatiotemporal interpolation methods with different parameter configurations are evaluated by cross-validation. In addition, this study explores computational issues (computer processing speed) faced during implementation of spatiotemporal interpolation for huge data sets. Parallel programming techniques and an advanced data structure, named k-d tree, are adapted in this paper to address the computational challenges. Significant computational improvement has been achieved. Finally, a web-based spatiotemporal IDW-based interpolation application is designed and implemented where users can visualize and animate spatiotemporal interpolation

  18. Multicriteria decision methodology for selecting technical alternatives in the Mixed Waste Integrated Program

    International Nuclear Information System (INIS)

    Ferrada, J.J.; Berry, J.B.

    1993-11-01

The US Department of Energy (DOE) Mixed Waste Integrated Program (MWIP) has, as one of its tasks, the identification of a decision methodology and of key decision criteria for that methodology. The aim of a multicriteria analysis is to provide an instrument for a systematic evaluation of distinct alternative projects. Determination of this methodology will clarify (1) the factors used to evaluate these alternatives, (2) the evaluator's view of the importance of the factors, and (3) the relative value of each alternative. The selected methodology must consider the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) decision-making criteria for application to the analysis of technology subsystems developed by the DOE Office of Technology Development. This report contains a compilation of several decision methodologies developed in various national laboratories, institutions, and universities. The purpose of these methodologies may vary, but the core decision attributes are very similar. Six approaches were briefly analyzed; from these six, in addition to recommendations made by the MWIP technical support group leaders and CERCLA, the final decision methodology was extracted. Slight variations are observed among the many methodologies developed by different groups, but most of them address similar aspects. These common aspects form the core of the methodology suggested in this report for use within MWIP for the selection of technologies. The set of criteria compiled and developed for this report has been grouped into five categories: (1) process effectiveness, (2) developmental status, (3) life-cycle cost, (4) implementability, and (5) regulatory compliance

  19. TTP AL921102: An integrated geophysics program for non-intrusive characterization of mixed-waste landfill sites

    International Nuclear Information System (INIS)

    Hasbrouck, J.C.

    1992-11-01

Chem-Nuclear Geotech, Inc. (Geotech), operating contractor for the US Department of Energy Grand Junction Projects Office, is conducting the Integrated Geophysics Program for Non-Intrusive Characterization of Mixed-Waste Landfill Sites (Technical Task Plan [TTP] AL921102). The TTP is part of the Mixed-Waste Landfill Integrated Demonstration (MWLID). The objective of this task was to demonstrate that an integrated program of surface geophysics can be used to effectively and nonintrusively characterize mixed-waste landfill sites. To accomplish this objective, integrated field demonstrations were conducted over two previously identified areas of interest (designated Areas A and B) within the MWLID test site at the Chemical Waste Landfill (CWL), Technical Area 3, at the Sandia National Laboratories, Albuquerque, New Mexico (Figures 1 and 2). Area A was centered roughly around the Chromic Acid and Organics Pits in the southeast-central portion of the landfill, and Area B was centered around the "60's Pits" area in the northeast-central portion of the landfill. Pit locations were known in Area A and suspected in Area B. This progress report describes the geophysical surveys conducted by Geotech and presents preliminary displays and analyses. Volume 2 of this report contains the raw data for all the surveys conducted by Geotech for this TTP

  20. SAR image formation with azimuth interpolation after azimuth transform

    Science.gov (United States)

Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]

    2008-07-08

    Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.
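The transform-then-interpolate ordering can be sketched with numpy. The range-dependent axis warp and the signal below are invented purely to illustrate re-gridding each azimuth spectrum onto a common rectangular grid after the transform; this is a conceptual sketch, not the patented processing chain:

```python
import numpy as np

n_az, n_rg = 128, 4
k_common = np.linspace(-0.5, 0.5, n_az)       # target rectangular azimuth grid

sig = np.exp(2j * np.pi * 10 * np.arange(n_az) / n_az)   # toy azimuth signal
spec = np.fft.fftshift(np.fft.fft(sig))       # azimuth transform first...

image = np.zeros((n_az, n_rg), complex)
for j in range(n_rg):
    scale = 1.0 + 0.02 * j                    # invented range-dependent axis warp
    k_warped = k_common * scale               # axis this range bin actually lies on
    # ...then interpolation afterwards: re-grid onto the common axis, one 1-D
    # interpolation per range bin (real and imaginary parts separately).
    image[:, j] = (np.interp(k_common, k_warped, spec.real)
                   + 1j * np.interp(k_common, k_warped, spec.imag))
```

Because the interpolation acts after the transform, each column needs only a simple 1-D re-gridding, which is the simplification the abstract highlights.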

  1. Interpolation of fuzzy data | Khodaparast | Journal of Fundamental ...

    African Journals Online (AJOL)

Considering the many applications of mathematical functions in different fields, it is essential to have a well-defined function. In this study, we used fuzzy Lagrangian interpolation and natural fuzzy spline polynomials to interpolate fuzzy data. In the current world of science and technology, interpolation issues ...
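One common simplification when interpolating fuzzy data is to interpolate each level of a triangular fuzzy number separately with the ordinary Lagrange formula. The sketch below (data invented) illustrates that idea, not the paper's exact construction:

```python
def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

# Triangular fuzzy data as (lower, peak, upper) at each node; interpolating
# each level separately is a common simplification (data invented).
xs = [0.0, 1.0, 2.0]
fuzzy_ys = [(0.9, 1.0, 1.1), (1.8, 2.0, 2.2), (3.6, 4.0, 4.4)]
levels = list(zip(*fuzzy_ys))             # (lowers, peaks, uppers)

def value_at(x):
    return tuple(lagrange_eval(xs, lv, x) for lv in levels)
```

At a data node the interpolant reproduces the fuzzy datum exactly; between nodes each level varies as an ordinary polynomial, and the lower/upper curves bound the peak as long as the data keep that ordering.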

  2. Interpolation of diffusion weighted imaging datasets

    DEFF Research Database (Denmark)

    Dyrby, Tim B; Lundell, Henrik; Burke, Mark W

    2014-01-01

Diffusion weighted imaging (DWI) is used to study white-matter fibre organisation, orientation and structural connectivity by means of fibre reconstruction algorithms and tractography. For clinical settings, limited scan time compromises the possibilities to achieve high image resolution for finer anatomical details and signal-to-noise-ratio for reliable fibre reconstruction. We assessed the potential benefits of interpolating DWI datasets to a higher image resolution before fibre reconstruction using a diffusion tensor model. Simulations of straight and curved crossing tracts smaller than or equal... Interpolation methods fail to disentangle fine anatomical details if PVE is too pronounced in the original data. For validation we used ex-vivo DWI datasets acquired at various image resolutions as well as Nissl-stained sections. Increasing the image resolution by a factor of eight yielded finer geometrical...

  3. Integrated Data Collection Analysis (IDCA) Program - Mixing Procedures and Materials Compatibility

    Energy Technology Data Exchange (ETDEWEB)

    Olinger, Becky D. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sandstrom, Mary M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Warner, Kirstin F. [Naval Surface Warfare Center (NSWC-IHD), Indian Head, MD (United States); Sorensen, Daniel N. [Naval Surface Warfare Center (NSWC-IHD), Indian Head, MD (United States); Remmers, Daniel L. [Naval Surface Warfare Center (NSWC-IHD), Indian Head, MD (United States); Moran, Jesse S. [Naval Surface Warfare Center (NSWC-IHD), Indian Head, MD (United States); Shelley, Timothy J. [Air Force Research Lab. (AFRL), Tyndall AFB, FL (United States); Whinnery, LeRoy L. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Hsu, Peter C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Whipple, Richard E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kashgarian, Michaele [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reynolds, John G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2011-01-14

Three mixing procedures have been standardized for the IDCA proficiency test: solid-solid, solid-liquid, and liquid-liquid. Due to the variety of precursors used in formulating the materials for the test, these three mixing methods have been designed to address all combinations of materials. Hand mixing is recommended for quantities less than 10 grams, and jar-mill mixing is recommended for quantities over 10 grams. Consideration must also be given to the type of container used for the mixing, due to the wide range of chemical reactivity of the precursors and mixtures. Eight website sources from container and chemical manufacturers have been consulted. Compatible materials have been compiled as a resource for selecting containers made of materials stable to the mixtures. In addition, container materials used in practice by the participating laboratories are discussed. Consulting chemical compatibility tables is highly recommended for each operation by each individual engaged in testing the materials in this proficiency test.

  4. Some splines produced by smooth interpolation

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2018-01-01

    Roč. 319, 15 February (2018), s. 387-394 ISSN 0096-3003 R&D Projects: GA ČR GA14-02067S Institutional support: RVO:67985840 Keywords : smooth data approximation * smooth data interpolation * cubic spline Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.738, year: 2016 http://www.sciencedirect.com/science/article/pii/S0096300317302746?via%3Dihub

  6. Quadratic polynomial interpolation on triangular domain

    Science.gov (United States)

    Li, Ying; Zhang, Congcong; Yu, Qian

    2018-04-01

    In the simulation of natural terrain, the continuity of sample points is not always consistent, and traditional interpolation methods often fail to faithfully reflect the shape information contained in the data points. A new method for constructing a polynomial interpolation surface on a triangular domain is therefore proposed. First, the scattered spatial data points are projected onto a plane and triangulated. Second, a C1-continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to stay as close as possible to the linear interpolant. Finally, the unknown quantities are obtained by minimizing the objective functions, with special treatment of the boundary points. The resulting surfaces preserve as many properties of the data points as possible while satisfying prescribed accuracy and continuity requirements, without becoming overly convex. The new method is computationally simple, has good local properties, and is applicable to shape fitting of mines, exploratory wells, and similar problems. Experimental results of the new surface are presented.
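    As an illustration of the kind of patch involved, the following is a minimal sketch of evaluating a quadratic polynomial over a triangle in barycentric coordinates; the Bernstein-Bezier control-value layout and all numbers are assumptions for illustration, not the paper's exact construction.

    ```python
    # Sketch (assumed construction, not the paper's exact one): evaluating a
    # quadratic Bernstein-Bezier patch over a triangle in barycentric
    # coordinates (u, v, w), with control values at vertices and edge midpoints.

    def barycentric(p, a, b, c):
        """Barycentric coordinates (u, v, w) of point p in triangle (a, b, c)."""
        (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
        det = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
        u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / det
        v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / det
        return u, v, 1.0 - u - v

    def quadratic_patch(p, tri, f_vertex, f_edge):
        """Evaluate a quadratic patch at p.

        f_vertex = (f200, f020, f002): control values at the three vertices.
        f_edge   = (f110, f011, f101): control values on edges ab, bc, ca.
        """
        u, v, w = barycentric(p, *tri)
        f200, f020, f002 = f_vertex
        f110, f011, f101 = f_edge
        return (f200 * u * u + f020 * v * v + f002 * w * w
                + 2.0 * (f110 * u * v + f011 * v * w + f101 * w * u))

    tri = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
    # With all six control values equal to 1, the patch reproduces the constant 1.
    val = quadratic_patch((0.25, 0.25), tri, (1.0, 1.0, 1.0), (1.0, 1.0, 1.0))
    ```

    Because the six quadratic Bernstein basis functions sum to one, constant data are reproduced exactly, which is a quick sanity check on any such patch.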

  7. Delimiting areas of endemism through kernel interpolation.

    Science.gov (United States)

    Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.

  8. Delimiting areas of endemism through kernel interpolation.

    Directory of Open Access Journals (Sweden)

    Ubirajara Oliveira

    Full Text Available We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.
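    A toy sketch of the GIE idea (centroids, influence radii, and a kernel-based overlap surface); the Gaussian kernel, the bandwidth rule, and all coordinates below are illustrative assumptions rather than the authors' implementation.

    ```python
    # Toy sketch of the GIE idea: each species range is summarized by its
    # centroid and an influence radius (distance from centroid to farthest
    # occurrence); overlap between species is then estimated by summing a
    # kernel centred on each centroid. Kernel choice and bandwidth handling
    # here are illustrative assumptions.
    import math

    def centroid_and_radius(occurrences):
        n = len(occurrences)
        cx = sum(x for x, y in occurrences) / n
        cy = sum(y for x, y in occurrences) / n
        r = max(math.hypot(x - cx, y - cy) for x, y in occurrences)
        return (cx, cy), max(r, 1e-9)

    def endemism_surface(species_occurrences, grid):
        """Sum, over species, a Gaussian kernel centred on each centroid
        whose bandwidth is that species' influence radius."""
        surface = {}
        params = [centroid_and_radius(occ) for occ in species_occurrences]
        for gx, gy in grid:
            s = 0.0
            for (cx, cy), r in params:
                d2 = (gx - cx) ** 2 + (gy - cy) ** 2
                s += math.exp(-d2 / (2.0 * r * r))
            surface[(gx, gy)] = s
        return surface

    species = [
        [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],   # species A (invented points)
        [(0.4, 0.4), (0.6, 0.6)],               # species B, overlapping A
        [(5.0, 5.0), (6.0, 5.0)],               # species C, far away
    ]
    grid = [(0.4, 0.4), (5.5, 5.0)]
    surf = endemism_surface(species, grid)
    ```

    Grid points inside the overlap of several species accumulate a higher kernel sum, which is the signal GIE thresholds to delimit candidate areas of endemism.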

  9. Sensitivity of the two-dimensional shearless mixing layer to the initial turbulent kinetic energy and integral length scale

    Science.gov (United States)

    Fathali, M.; Deshiri, M. Khoshnami

    2016-04-01

    The shearless mixing layer is generated from the interaction of two homogeneous isotropic turbulence (HIT) fields with different integral scales ℓ1 and ℓ2 and different turbulent kinetic energies E1 and E2. In this study, the sensitivity of temporal evolutions of two-dimensional, incompressible shearless mixing layers to the parametric variations of ℓ1/ℓ2 and E1/E2 is investigated. The sensitivity methodology is based on the nonintrusive approach, using direct numerical simulation and generalized polynomial chaos expansion. The analysis is carried out at Re_ℓ1 = 90 for the high-energy HIT region and different integral length scale ratios 1/4 ≤ ℓ1/ℓ2 ≤ 4 and turbulent kinetic energy ratios 1 ≤ E1/E2 ≤ 30. It is found that the most influential parameter on the variability of the mixing layer evolution is the turbulent kinetic energy, while variations of the integral length scale show a negligible influence on the flow field variability. A significant level of anisotropy and intermittency is observed in both large and small scales. In particular, it is found that large scales have higher levels of intermittency and sensitivity to the variations of ℓ1/ℓ2 and E1/E2 compared to the small scales. Reconstructed response surfaces of the flow field intermittency and the turbulent penetration depth show monotonic dependence on ℓ1/ℓ2 and E1/E2. The mixing layer growth rate and the mixing efficiency both show sensitive dependence on the initial condition parameters. However, the probability density function of these quantities shows relatively small solution variations in response to the variations of the initial condition parameters.

  10. Energy band structure of Cr by the Slater-Koster interpolation scheme

    International Nuclear Information System (INIS)

    Seifu, D.; Mikusik, P.

    1986-04-01

    The matrix elements of the Hamiltonian between nine localized wave-functions in the tight-binding formalism are derived. The symmetry-adapted wave-functions and the secular equations are formed by the group theory method for high-symmetry points in the Brillouin zone. A set of interaction integrals is chosen on physical grounds and fitted via the Slater-Koster interpolation scheme to the ab initio band structure of chromium calculated by the Green function method. The energy band structure of chromium is then interpolated and extrapolated in the Brillouin zone. (author)
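    As background, the simplest possible Slater-Koster-style fit is a one-band nearest-neighbour tight-binding dispersion on the bcc lattice of chromium; the on-site energy `e_s` and hopping `t` below stand in for the fitted interaction integrals, whereas the paper fits a full nine-orbital basis.

    ```python
    # Background sketch (not the paper's nine-orbital fit): a one-band
    # nearest-neighbour tight-binding dispersion on the bcc lattice,
    #   E(k) = e_s - 8 t cos(a kx/2) cos(a ky/2) cos(a kz/2),
    # with illustrative values for the on-site energy e_s and hopping t.
    import math

    def band_energy(k, e_s=0.0, t=1.0, a=1.0):
        kx, ky, kz = k
        return e_s - 8.0 * t * (math.cos(a * kx / 2)
                                * math.cos(a * ky / 2)
                                * math.cos(a * kz / 2))

    gamma = band_energy((0.0, 0.0, 0.0))            # band bottom at Gamma
    h_point = band_energy((2 * math.pi, 0.0, 0.0))  # zone corner H
    ```

    Once the few parameters are fitted at high-symmetry points, E(k) can be evaluated anywhere in the Brillouin zone, which is the interpolation/extrapolation step the abstract describes.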

  11. Some observations on interpolating gauges and non-covariant gauges

    International Nuclear Information System (INIS)

    Joglekar, Satish D.

    2003-01-01

    We discuss the viability of using interpolating gauges to define the non-covariant gauges starting from the covariant ones. We draw attention to the need for a very careful treatment of the boundary-condition defining term. We show that the boundary condition needed to maintain gauge invariance as the interpolating parameter θ varies depends very sensitively on the parameter variation. We do this with a gauge used by Doust. We also consider the Lagrangian path-integrals in Minkowski space for gauges with a residual gauge-invariance. We point out the necessity of including an ε-term even in the formal treatments, without which one may reach incorrect conclusions. We further point out that the ε-term can contribute to the BRST WT-identities in a non-trivial way (even as ε → 0). These contributions lead to additional constraints on Green's functions that are not normally taken into account in the BRST formalism that ignores the ε-term, and they are characteristic of the way the singularities in propagators are handled. We argue that a prescription will, in general, require renormalization if it is to be viable at all. (author)

  12. Test and diagnosis of analogue, mixed-signal and RF integrated circuits the system on chip approach

    CERN Document Server

    Sun, Yichuang

    2008-01-01

    This book provides a comprehensive discussion of automatic testing, diagnosis and tuning of analogue, mixed-signal and RF integrated circuits, and systems, in a single source. The book contains eleven chapters written by leading researchers worldwide. As well as fundamental concepts and techniques, the book systematically reports the state of the art and future research directions in these areas. A complete range of circuit components is covered, and test issues are also addressed from the SoC perspective.

  13. Image Interpolation Scheme based on SVM and Improved PSO

    Science.gov (United States)

    Jia, X. F.; Zhao, B. T.; Liu, X. X.; Song, H. P.

    2018-01-01

    In order to obtain visually pleasing images, a support vector machine (SVM) based interpolation scheme is proposed, in which improved particle swarm optimization is applied to optimize the support vector machine parameters. Training samples are constructed from the pixels around the pixel to be interpolated. The support vector machine with optimal parameters is then trained on these samples. After training, we obtain the interpolation model, which can be employed to estimate the unknown pixel. Experimental results show that the interpolated images achieve improved PSNR compared with traditional interpolation methods, which agrees with their subjective quality.

  14. A 2.9 ps equivalent resolution interpolating time counter based on multiple independent coding lines

    International Nuclear Information System (INIS)

    Szplet, R; Jachna, Z; Kwiatkowski, P; Rozyc, K

    2013-01-01

    We present the design, operation and test results of a time counter that has an equivalent resolution of 2.9 ps, a measurement uncertainty at the level of 6 ps, and a measurement range of 10 s. The time counter has been implemented in a general-purpose reprogrammable device Spartan-6 (Xilinx). To obtain both high precision and wide measurement range the counting of periods of a reference clock is combined with a two-stage interpolation within a single period of the clock signal. The interpolation involves a four-phase clock in the first interpolation stage (FIS) and an equivalent coding line (ECL) in the second interpolation stage (SIS). The ECL is created as a compound of independent discrete time coding lines (TCL). The number of TCLs used to create the virtual ECL has an effect on its resolution. We tested ECLs made from up to 16 TCLs, but the idea may be extended to a larger number of lines. In the presented time counter the coarse resolution of the counting method equal to 2 ns (period of the 500 MHz reference clock) is firstly improved fourfold in the FIS and next even more than 400 times in the SIS. The proposed solution allows us to overcome the technological limitation in achievable resolution and improve the precision of conversion of integrated interpolators based on tapped delay lines. (paper)
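    The way the three stages combine into a single time stamp can be sketched arithmetically; the bin widths below are idealized constants taken from the figures quoted above, whereas in a real counter they come from calibration of the delay lines.

    ```python
    # Arithmetic sketch of the two-stage interpolation: a coarse count of
    # 2 ns reference-clock periods is refined first by the four-phase clock
    # (500 ps steps) and then by the fine code from the equivalent coding
    # line. Idealized, uniform bin widths are an assumption; real hardware
    # uses calibrated, nonuniform bins.

    T_CLK_PS = 2000.0            # 500 MHz reference clock period, in ps
    FIS_STEP_PS = T_CLK_PS / 4   # first interpolation stage: four phases
    ECL_LSB_PS = 2.9             # equivalent resolution reported for the ECL

    def timestamp_ps(coarse_count, fis_phase, ecl_code):
        """Combine the three stages into one time stamp in picoseconds."""
        return (coarse_count * T_CLK_PS
                + fis_phase * FIS_STEP_PS
                + ecl_code * ECL_LSB_PS)

    def interval_ps(stop, start):
        """Measured interval between two (coarse, phase, code) captures."""
        return timestamp_ps(*stop) - timestamp_ps(*start)

    dt = interval_ps((12, 3, 10), (10, 1, 4))
    ```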

  15. Interpolation functions and the Lions-Peetre interpolation construction

    International Nuclear Information System (INIS)

    Ovchinnikov, V I

    2014-01-01

    The generalization of the Lions-Peetre interpolation method of means considered in the present survey is less general than the generalizations known since the 1970s. However, our level of generalization is sufficient to encompass spaces that are most natural from the point of view of applications, like the Lorentz spaces, Orlicz spaces, and their analogues. The spaces φ(X₀,X₁)_{p₀,p₁} considered here have three parameters: two positive numerical parameters p₀ and p₁ of equal standing, and a function parameter φ. For p₀ ≠ p₁ these spaces can be regarded as analogues of Orlicz spaces under the real interpolation method. Embedding criteria are established for the family of spaces φ(X₀,X₁)_{p₀,p₁}, together with optimal interpolation theorems that refine all the known interpolation theorems for operators acting on couples of weighted spaces L_p and that extend these theorems beyond scales of spaces. The main specific feature is that the function parameter φ can be an arbitrary natural functional parameter in the interpolation. Bibliography: 43 titles
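    For orientation, the classical K-method definitions that such function-parameter spaces generalize can be written as follows; the precise definition of φ(X₀,X₁)_{p₀,p₁} via the method of means is given in the survey itself.

    ```latex
    % Peetre K-functional for a compatible couple (X_0, X_1)
    K(t, x; X_0, X_1) = \inf_{x = x_0 + x_1}
        \bigl( \|x_0\|_{X_0} + t \, \|x_1\|_{X_1} \bigr)

    % Classical Lions--Peetre space (X_0, X_1)_{\theta, p}, 0 < \theta < 1
    \|x\|_{\theta, p} = \left( \int_0^\infty
        \bigl( t^{-\theta} K(t, x) \bigr)^p \, \frac{dt}{t} \right)^{1/p}

    % Function-parameter variant: t^{\theta} replaced by a general \varphi
    \|x\|_{\varphi, p} = \left( \int_0^\infty
        \left( \frac{K(t, x)}{\varphi(t)} \right)^p \frac{dt}{t} \right)^{1/p},
    \qquad \varphi(t) = t^{\theta} \ \text{recovers} \ (X_0, X_1)_{\theta, p}.
    ```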

  16. Correlation-based motion vector processing with adaptive interpolation scheme for motion-compensated frame interpolation.

    Science.gov (United States)

    Huang, Ai-Mei; Nguyen, Truong

    2009-04-01

    In this paper, we address the problems of unreliable motion vectors that cause visual artifacts but cannot be detected by high residual energy or bidirectional prediction difference in motion-compensated frame interpolation. A correlation-based motion vector processing method is proposed to detect and correct those unreliable motion vectors by explicitly considering motion vector correlation in the motion vector reliability classification, motion vector correction, and frame interpolation stages. Since our method gradually corrects unreliable motion vectors based on their reliability, we can effectively discover the areas where no motion vector is reliable enough to be used, such as occlusions and deformed structures. We also propose an adaptive frame interpolation scheme for the occlusion areas based on the analysis of their surrounding motion distribution. As a result, the interpolated frames using the proposed scheme have clearer structure edges and ghost artifacts are also greatly reduced. Experimental results show that our interpolated results have better visual quality than other methods. In addition, the proposed scheme is robust even for those video sequences that contain multiple and fast motions.
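    A drastically simplified stand-in for this idea replaces the correlation-based reliability classification with a neighbourhood-median check: a vector that disagrees with its neighbours is flagged unreliable and replaced. The threshold and the median-replacement rule are assumptions for illustration only.

    ```python
    # Simplified stand-in for correlation-based MV processing: a block motion
    # vector is flagged unreliable when it deviates strongly from the
    # component-wise median of its 8-neighbourhood, and is then replaced by
    # that median. The real method uses an explicit reliability classification.

    def median(vals):
        s = sorted(vals)
        n = len(s)
        return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

    def correct_motion_field(mv, threshold=2.0):
        """mv: 2-D list of (dx, dy) block motion vectors."""
        rows, cols = len(mv), len(mv[0])
        out = [row[:] for row in mv]
        for i in range(rows):
            for j in range(cols):
                neigh = [mv[i + di][j + dj]
                         for di in (-1, 0, 1) for dj in (-1, 0, 1)
                         if (di or dj)
                         and 0 <= i + di < rows and 0 <= j + dj < cols]
                mx = median([v[0] for v in neigh])
                my = median([v[1] for v in neigh])
                dx, dy = mv[i][j]
                if abs(dx - mx) + abs(dy - my) > threshold:  # unreliable
                    out[i][j] = (mx, my)
        return out

    field = [[(1, 0)] * 3 for _ in range(3)]
    field[1][1] = (9, -7)            # outlier vector in a smooth field
    fixed = correct_motion_field(field)
    ```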

  17. Research progress and hotspot analysis of spatial interpolation

    Science.gov (United States)

    Jia, Li-juan; Zheng, Xin-qi; Miao, Jin-li

    2018-02-01

    In this paper, the literature on spatial interpolation published between 1982 and 2017 and indexed in the Web of Science core database is used as the data source, and a visualization analysis is carried out on the co-country network, co-category network, co-citation network, and keyword co-occurrence network. It is found that spatial interpolation research has experienced three stages: slow development, steady development, and rapid development. Eleven clustering groups interact with each other, covering the main threads of spatial interpolation theory, the practical application and case studies of spatial interpolation, and research on the accuracy and efficiency of spatial interpolation. Finding the optimal spatial interpolation method is the frontier and hot spot of the research. Spatial interpolation research has formed a theoretical basis and a research-system framework; it is strongly interdisciplinary and is widely used in many fields.

  18. Digital integrated control of a Mach 2.5 mixed-compression supersonic inlet and an augmented mixed-flow turbofan engine

    Science.gov (United States)

    Batterton, P. G.; Arpasi, D. J.; Baumbick, R. J.

    1974-01-01

    A digitally implemented integrated inlet-engine control system was designed and tested on a mixed-compression, axisymmetric, Mach 2.5, supersonic inlet with 45 percent internal supersonic area contraction and a TF30-P-3 augmented turbofan engine. The control matched engine airflow to available inlet airflow. By monitoring inlet terminal shock position and overboard bypass door command, the control adjusted engine speed so that in steady state, the shock would be at the desired location and the overboard bypass doors would be closed. During engine-induced transients, such as augmentor light-off and cutoff, the inlet operating point was momentarily changed to a more supercritical point to minimize unstarts. The digital control also provided automatic inlet restart. A variable inlet throat bleed control, based on throat Mach number, provided additional inlet stability margin.

  19. Mixed Waste Focus Area Mercury Working Group: An integrated approach to mercury waste treatment and disposal

    International Nuclear Information System (INIS)

    Conley, T.B.; Morris, M.I.; Osborne-Lee, I.W.

    1998-03-01

    In May 1996, the US Department of Energy (DOE) Mixed Waste Focus Area (MWFA) initiated the Mercury Working Group (HgWG). The HgWG was established to address and resolve the issues associated with mercury contaminated mixed wastes. During the MWFA's initial technical baseline development process, three of the top four technology deficiencies identified were related to the need for amalgamation, stabilization, and separation removal technologies for the treatment of mercury and mercury contaminated mixed waste. The HgWG is assisting the MWFA in soliciting, identifying, initiating, and managing efforts to address these areas. The focus of the HgWG is to better establish the mercury related treatment technologies at the DOE sites, refine the MWFA technical baseline as it relates to mercury treatment, and make recommendations to the MWFA on how to most effectively address these needs. Based on the scope and magnitude of the mercury mixed waste problem, as defined by HgWG, solicitations and contract awards have been made to the private sector to demonstrate both the amalgamation and stabilization processes using actual mixed wastes. Development efforts are currently being funded that will address DOE's needs for separation removal processes. This paper discusses the technology selection process, development activities, and the accomplishments of the HgWG to date through these various activities

  20. Plasma simulation with the Differential Algebraic Cubic Interpolated Propagation scheme

    Energy Technology Data Exchange (ETDEWEB)

    Utsumi, Takayuki [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1998-03-01

    A computer code based on the Differential Algebraic Cubic Interpolated Propagation scheme has been developed for the numerical solution of the Boltzmann equation for a one-dimensional plasma with immobile ions. The scheme advects the distribution function and its first derivatives in the phase space for one time step by using a numerical integration method for ordinary differential equations, and reconstructs the profile in phase space by using a cubic polynomial within a grid cell. The method gives stable and accurate results, and is efficient. It is successfully applied to a number of equations: the Vlasov equation, the Boltzmann equation with the Fokker-Planck or the Bhatnagar-Gross-Krook (BGK) collision term, and the relativistic Vlasov equation. The method can be generalized in a straightforward way to treat cases such as problems with nonperiodic boundary conditions and higher dimensional problems. (author)
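    A minimal 1-D sketch of the underlying CIP idea (advecting a profile f together with its derivative g using an upwind cubic) might look as follows, assuming constant positive velocity, periodic boundaries, and the standard CIP coefficient formulas; the paper's differential-algebraic variant is more general.

    ```python
    # Minimal 1-D CIP (Cubic Interpolated Propagation) advection step for
    # constant velocity u > 0: both f and its spatial derivative g are
    # advected, with a cubic reconstructed in the upwind cell from the
    # values and derivatives at its two end nodes.

    def cip_step(f, g, u, dt, dx):
        n = len(f)
        fn, gn = f[:], g[:]
        xi = -u * dt                  # departure-point offset from node i
        D = -dx                       # signed upwind cell extent (u > 0)
        for i in range(n):
            iup = (i - 1) % n         # periodic boundary, upwind neighbour
            a = (g[i] + g[iup]) / D**2 + 2.0 * (f[i] - f[iup]) / D**3
            b = 3.0 * (f[iup] - f[i]) / D**2 - (2.0 * g[i] + g[iup]) / D
            fn[i] = ((a * xi + b) * xi + g[i]) * xi + f[i]
            gn[i] = (3.0 * a * xi + 2.0 * b) * xi + g[i]
        return fn, gn

    # A linear profile is advected exactly by the cubic reconstruction.
    dx, u, dt = 1.0, 0.4, 0.5
    f = [0.1 * i for i in range(8)]
    g = [0.1] * 8
    fn, gn = cip_step(f, g, u, dt, dx)
    ```

    At interior node i the new value equals the old profile sampled at x_i - u dt, so for the linear data above fn[3] is 0.1 x (3 - 0.2) = 0.28, with the derivative unchanged.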

  1. Generation of nuclear data banks through interpolation

    International Nuclear Information System (INIS)

    Castillo M, J.A.

    1999-01-01

    Nuclear data bank generation is a process that requires a great amount of resources, both computational and human. Considering that at times it is necessary to create a large number of these banks, it is convenient to have a reliable tool that generates them with the fewest resources, in the least possible time, and with a very good approximation. This work presents the results obtained during the development of the INTPOLBI code, used to generate nuclear data banks by bicubic polynomial interpolation, taking the uranium and gadolinium percentages as independent variables. Two approaches were pursued, applying in both cases the finite element method with a single 16-node element to carry out the interpolation. In the first approach, the canonical basis was employed to obtain the interpolating polynomial and, subsequently, the corresponding system of linear equations, which was solved by Gaussian elimination with partial pivoting. In the second approach, the Newton basis was used to obtain the system, yielding a lower triangular matrix whose structure, through elementary operations, leads to a block-diagonal matrix with special characteristics that is easier to work with. For the validation tests, a comparison was made between the values obtained with the INTPOLBI and INTERTEG codes (the latter created at the Instituto de Investigaciones Electricas for the same purpose) and data banks created through the conventional process, that is, with the nuclear codes normally used. Finally, it is possible to conclude that the nuclear data banks generated with the INTPOLBI code constitute a very good approximation that, although it does not wholly replace the conventional process, is helpful when a large number of data banks must be created. (Author)
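    A hedged sketch of bicubic interpolation over a 16-node (4x4) element, here built from a separable Catmull-Rom basis; the node layout, basis choice, and nodal values are assumptions for illustration and may differ from what INTPOLBI actually uses.

    ```python
    # Sketch (assumed basis, not INTPOLBI's): separable Catmull-Rom bicubic
    # interpolation on a 4x4 grid of precomputed nodal values, indexed by
    # two independent variables (e.g. uranium and gadolinium percent).

    def cubic(p0, p1, p2, p3, t):
        """Catmull-Rom interpolation between p1 (t=0) and p2 (t=1)."""
        return p1 + 0.5 * t * (p2 - p0 + t * (2*p0 - 5*p1 + 4*p2 - p3
                                              + t * (3*(p1 - p2) + p3 - p0)))

    def bicubic(values, s, t):
        """values: 4x4 nodal data; (s, t) in [0, 1] within the central cell."""
        rows = [cubic(*values[i], t) for i in range(4)]
        return cubic(*rows, s)

    # Nodal values of a plane f(u, gd) = 2u + 3gd are reproduced exactly,
    # since the cubic basis reproduces linear data.
    nodes = [[2.0 * u + 3.0 * gd for gd in range(4)] for u in range(4)]
    val = bicubic(nodes, 0.5, 0.25)   # point between nodes u=1..2, gd=1..2
    ```

    Evaluating at (s, t) = (0.5, 0.25) corresponds to (u, gd) = (1.5, 1.25), so the plane gives 2(1.5) + 3(1.25) = 6.75.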

  2. Nuclear data banks generation by interpolation

    International Nuclear Information System (INIS)

    Castillo M, J. A.

    1999-01-01

    Nuclear data bank generation is a process that requires a great amount of resources, both computational and human. Considering that at times it is necessary to create a large number of these banks, it is convenient to have a reliable tool that generates them with the fewest resources, in the least possible time, and with a very good approximation. This work presents the results obtained during the development of the INTPOLBI code, used to generate nuclear data banks by bicubic polynomial interpolation, taking the uranium and gadolinia percentages as independent variables. Two approaches were pursued, applying in both cases the finite element method with a single 16-node element to carry out the interpolation. In the first approach, the canonical basis was employed to obtain the interpolating polynomial and, subsequently, the corresponding systems of linear equations, which were solved by Gaussian elimination with partial pivoting. In the second approach, the Newton basis was used to obtain the system, yielding a lower triangular matrix whose structure, through elementary operations, leads to a block-diagonal matrix with special characteristics that is easier to work with. For the validation tests, a comparison was made between the values obtained with the INTPOLBI and INTERTEG codes (the latter created at the Instituto de Investigaciones Electricas (MX) for the same purpose) and data banks created through the conventional process, that is, with the nuclear codes normally used. Finally, it is possible to conclude that the nuclear data banks generated with the INTPOLBI code constitute a very good approximation that, although it does not wholly replace the conventional process, is helpful when a large number of data banks must be created

  3. Calculation of reactivity without Lagrange interpolation

    International Nuclear Information System (INIS)

    Suescun D, D.; Figueroa J, J. H.; Rodriguez R, K. C.; Villada P, J. P.

    2015-09-01

    A new method to numerically solve the inverse equation of point kinetics without using the Lagrange interpolating polynomial is formulated; this method uses a polynomial approximation with N points based on a recurrence process for simulating different forms of nuclear power. The results show reliable accuracy. Furthermore, the method proposed here is suitable for real-time measurements of reactivity, with calculation step sizes greater than Δt = 0.3 s; owing to its precision, it can be used to implement a digital reactivity meter operating in real time. (Author)
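    For background, the classical inverse point-kinetics relation with one effective delayed-neutron group can be discretized very simply; the implicit precursor update and all parameter values below are illustrative assumptions, not the paper's recurrence-based polynomial scheme.

    ```python
    # Background sketch: inverse point kinetics with one effective delayed
    # group,  rho = beta + Lambda * (dP/dt)/P - lambda * Lambda * C/P,
    # where the precursor concentration C obeys dC/dt = beta*P/Lambda - lambda*C.
    # Implicit-Euler precursor update and parameter values are illustrative.

    BETA, LAMBDA_GEN, LAM = 0.0065, 1e-4, 0.08   # beta, Lambda, lambda (assumed)

    def reactivity_history(power, dt):
        """Absolute reactivity from a sampled power trace (starts at equilibrium)."""
        c = BETA * power[0] / (LAM * LAMBDA_GEN)   # equilibrium precursors
        rho = []
        for k in range(1, len(power)):
            p = power[k]
            dpdt = (power[k] - power[k - 1]) / dt
            c = (c + dt * BETA * p / LAMBDA_GEN) / (1.0 + dt * LAM)  # implicit Euler
            rho.append(BETA + LAMBDA_GEN * dpdt / p - LAM * LAMBDA_GEN * c / p)
        return rho

    # Sanity check: constant power must give zero reactivity.
    rho = reactivity_history([1.0] * 50, dt=0.3)
    ```

    At equilibrium C = beta P / (lambda Lambda) is a fixed point of the implicit update, so every term cancels and the computed reactivity stays at zero, which is the standard check for a digital reactivity meter.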

  4. Topics in multivariate approximation and interpolation

    CERN Document Server

    Jetter, Kurt

    2005-01-01

    This book is a collection of eleven articles, written by leading experts and dealing with special topics in Multivariate Approximation and Interpolation. The material discussed here has far-reaching applications in many areas of Applied Mathematics, such as Computer Aided Geometric Design, Mathematical Modelling, Signal and Image Processing, and Machine Learning, to mention a few. The book aims at giving comprehensive information, leading the reader from the fundamental notions and results of each field to the forefront of research. It is an ideal and up-to-date introduction for gr

  5. A Well-mixed, Polymer-based Microbioreactor with Integrated Optical Measurements

    DEFF Research Database (Denmark)

    Zhang, Z.; Szita, Nicolas; Boccazzi, P.

    2005-01-01

    . Optical transmission measurements are used for cell density. The body of the reactor is poly(methylmethacrylate) with a thin layer of poly (dimethylsiloxane) for aeration, oxygen diffuses through this gas-permeable membrane into the microbioreactor to support metabolism of bacterial cells. Mixing...... in the reactor is characterized by observation of mixing of dyes and computational fluid dynamics simulations. The oxygenation is described in terms of measured KLa values for microbioreactor, 20–75/h corresponding to increasing stirring speed 200–800 rpm. Escherichia coli cell growth in the microbioreactor...

  6. Low-frequency scaling of the standard and mixed magnetic field and Müller integral equations

    KAUST Repository

    Bogaert, Ignace

    2014-02-01

    The standard and mixed discretizations for the magnetic field integral equation (MFIE) and the Müller integral equation (MUIE) are investigated in the context of low-frequency (LF) scattering problems involving simply connected scatterers. It is proved that, at low frequencies, the frequency scaling of the nonsolenoidal part of the solution current can be incorrect for the standard discretization. In addition, it is proved that the frequency scaling obtained with the mixed discretization is correct. The reason for this problem in the standard discretization scheme is the absence of exact solenoidal currents in the rotated RWG finite element space. The adoption of the mixed discretization scheme eliminates this problem and leads to a well-conditioned system of linear equations that remains accurate at low frequencies. Numerical results confirm these theoretical predictions and also show that, when the frequency is lowered, a finer and finer mesh is required to keep the accuracy constant with the standard discretization. © 1963-2012 IEEE.

  7. Methods That Matter: Integrating Mixed Methods for More Effective Social Science Research

    Science.gov (United States)

    Hay, M. Cameron, Ed.

    2016-01-01

    To do research that really makes a difference--the authors of this book argue--social scientists need questions and methods that reflect the complexity of the world. Bringing together a consortium of voices across a variety of fields, "Methods that Matter" offers compelling and successful examples of mixed methods research that do just…

  8. Mixed Waste Focus Area integrated technical baseline report, Phase 1: Volume 1

    International Nuclear Information System (INIS)

    1996-01-01

    The Department of Energy (DOE) established the Mixed Waste Characterization, Treatment, and Disposal Focus Area (MWFA) to develop and facilitate implementation of technologies required to meet the Department's commitments for treatment of mixed low-level and transuranic wastes. The mission of the MWFA is to provide acceptable treatment systems, developed in partnership with users and with participation of stakeholders, tribal governments, and regulators, that are capable of treating DOE's mixed waste. These treatment systems include all necessary steps such as characterization, pretreatment, and disposal. To accomplish this mission, a technical baseline is being established that forms the basis for determining which technology development activities will be supported by the MWFA. The technical baseline is the prioritized list of deficiencies, and the resulting technology development activities needed to overcome these deficiencies. This document presents Phase I of the technical baseline development process, which resulted in the prioritized list of deficiencies that the MWFA will address. A summary of the data and the assumptions upon which this work was based is included, as well as information concerning the DOE Office of Environmental Management (EM) mixed waste technology development needs. The next phase in the technical baseline development process, Phase II, will result in the identification of technology development activities that will be conducted through the MWFA to resolve the identified deficiencies

  9. Novel true-motion estimation algorithm and its application to motion-compensated temporal frame interpolation.

    Science.gov (United States)

    Dikbas, Salih; Altunbasak, Yucel

    2013-08-01

    In this paper, a new low-complexity true-motion estimation (TME) algorithm is proposed for video processing applications, such as motion-compensated temporal frame interpolation (MCTFI) or motion-compensated frame rate up-conversion (MCFRUC). Regular motion estimation, which is often used in video coding, aims to find the motion vectors (MVs) that reduce the temporal redundancy, whereas TME aims to track the projected object motion as closely as possible. TME is obtained by imposing implicit and/or explicit smoothness constraints on the block-matching algorithm. To produce better-quality interpolated frames, the dense motion field at interpolation time is obtained for both forward and backward MVs; then, bidirectional motion compensation is applied by elegantly blending the two. Finally, the performance of the proposed algorithm for MCTFI is demonstrated against recently proposed methods and the smoothness-constraint optical flow employed by a professional video production suite. Experimental results show that the quality of the interpolated frames using the proposed method is better when compared with the MCFRUC techniques.
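    The bidirectional compensation step can be illustrated in one dimension: the frame at time t in (0, 1) averages the previous frame sampled along the forward vector and the next frame sampled along the backward vector. Everything here (1-D signals, a uniform motion field, nearest-sample rounding) is a simplifying assumption, not the paper's block-based scheme.

    ```python
    # 1-D illustration of bidirectional motion-compensated interpolation:
    # each output sample averages a motion-compensated sample from the
    # previous frame and one from the next frame. Periodic indexing and a
    # uniform integer motion field keep the sketch minimal.

    def interpolate_frame(prev, nxt, mv, t=0.5):
        """mv[i]: forward displacement (in samples) of pixel i from prev to nxt."""
        n = len(prev)
        out = []
        for i in range(n):
            d = mv[i]
            a = prev[(i - round(t * d)) % n]          # compensated from prev
            b = nxt[(i + round((1.0 - t) * d)) % n]   # compensated from next
            out.append(0.5 * (a + b))
        return out

    prev = [0, 0, 0, 9, 0, 0, 0, 0]
    nxt  = [0, 0, 0, 0, 0, 9, 0, 0]   # the bright sample moved +2 positions
    mv = [2] * 8
    mid = interpolate_frame(prev, nxt, mv, t=0.5)
    ```

    With a displacement of +2 between the frames, the object located at index 3 and then 5 lands at index 4 in the interpolated mid-frame, rather than ghosting at both source positions as simple frame averaging would produce.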

  10. Air Quality Assessment Using Interpolation Technique

    Directory of Open Access Journals (Sweden)

    Awkash Kumar

    2016-07-01

    Full Text Available Air pollution is increasing rapidly in almost all cities around the world due to the increase in population. Mumbai city in India is one of the mega cities where air quality is deteriorating at a very rapid rate. Air quality monitoring stations have been installed in the city to support air pollution control strategies that reduce the air pollution level. In this paper, air quality assessment has been carried out over the sample region using interpolation techniques. The Inverse Distance Weighting (IDW) technique of Geographical Information Systems (GIS) has been used to perform interpolation with the help of concentration data on air quality at three locations in Mumbai for the year 2008. Classification was done for the spatial and temporal variation in air quality levels for the Mumbai region. The seasonal and annual variations of air quality levels for SO2, NOx and SPM (Suspended Particulate Matter) are the focus of this study. Results show that SPM concentration always exceeded the permissible limit of the National Ambient Air Quality Standard. Also, seasonal levels of the pollutant SPM were low in the monsoon due to rainfall. The findings of this study will help formulate control strategies for the rational management of air pollution and can be used for many other regions.
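    The IDW estimate itself is simple to sketch: the value at an unsampled point is a weighted mean of station values with weights 1/d^p. The station coordinates, concentration values, and power parameter below are invented for illustration.

    ```python
    # Sketch of Inverse Distance Weighting as used in GIS tools: the value
    # at (x, y) is a weighted average of station values with weights 1/d^p.
    # Station data and the power p are illustrative assumptions.
    import math

    def idw(stations, p=2.0):
        """stations: list of ((x, y), value) monitoring records."""
        def estimate(x, y):
            num = den = 0.0
            for (sx, sy), v in stations:
                d = math.hypot(x - sx, y - sy)
                if d < 1e-12:
                    return v            # exactly at a station: honor the sample
                w = 1.0 / d ** p
                num += w * v
                den += w
            return num / den
        return estimate

    # Hypothetical SO2 readings at three monitoring locations.
    so2 = idw([((0.0, 0.0), 10.0), ((4.0, 0.0), 30.0), ((0.0, 4.0), 20.0)])
    at_station = so2(0.0, 0.0)
    between = so2(2.0, 0.0)
    ```

    IDW is an exact interpolator (it returns the measured value at each station) and its estimates always stay within the range of the observed values, which is why it is a common first choice for sparse monitoring networks.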

  11. Randomized interpolative decomposition of separated representations

    Science.gov (United States)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
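The matrix step the abstract reduces to, an interpolative decomposition of a randomly projected matrix, can be sketched as follows. This is a generic column-ID sketch, not the CTD-ID algorithm itself; the greedy pivoting and all names are illustrative assumptions.

```python
import numpy as np

def interpolative_decomposition(A, k, oversample=8, seed=0):
    """Randomized column ID: sketch A with a Gaussian projection, pick k columns
    by greedy pivoting on the sketch, then express all columns of A as linear
    combinations of the selected ones."""
    rng = np.random.default_rng(seed)
    sketch = rng.standard_normal((k + oversample, A.shape[0])) @ A
    S = sketch.copy()
    cols = []
    for _ in range(k):                     # greedy pivoted Gram-Schmidt on the sketch
        j = int(np.argmax((S ** 2).sum(axis=0)))
        cols.append(j)
        q = S[:, j] / np.linalg.norm(S[:, j])
        S = S - np.outer(q, q @ S)         # deflate the chosen direction
    P = np.linalg.lstsq(A[:, cols], A, rcond=None)[0]
    return cols, P                         # A is approximated by A[:, cols] @ P
```

For a matrix of exact rank k the selected columns span the column space and the reconstruction is exact up to round-off.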

  12. Size-Dictionary Interpolation for Robot's Adjustment

    Directory of Open Access Journals (Sweden)

    Morteza Daneshmand

    2015-05-01

    Full Text Available This paper describes the classification and size-dictionary interpolation of three-dimensional data obtained by a laser scanner for use in a realistic virtual fitting room. Automatic activation of the chosen mannequin robot, while several mannequin robots of different genders and sizes are simultaneously connected to the same computer, is also considered, so that the robot mimics the scanned body shapes and sizes instantly. The classification process consists of two layers, dealing with gender and size, respectively. The interpolation procedure finds the set of positions of the biologically inspired actuators that makes the activated mannequin robot resemble the shape of the scanned person's body as closely as possible. It linearly maps the distances between subsequent size templates to the corresponding sets of actuator positions and then calculates control measures that maintain the same distance proportions, where minimizing the Euclidean distance between the size-dictionary template vectors and the vector of the desired body sizes determines the mathematical description. The experimental results of implementing the proposed method on Fits.me's mannequin robots are visually illustrated, and the remaining steps towards completion of the whole realistic online fitting package are explained.
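The template-matching-and-interpolation idea can be sketched generically (this is a hypothetical illustration, not Fits.me's actual mapping; all names and the weighting rule are assumptions): pick the two size templates nearest to the desired body sizes and blend their actuator position sets in proportion to the distances.

```python
import numpy as np

def actuator_positions(size_templates, actuator_sets, desired):
    """Blend the actuator position sets of the two size-dictionary templates
    nearest (in Euclidean distance) to the desired body-size vector."""
    d = np.linalg.norm(size_templates - desired, axis=1)
    i0, i1 = np.argsort(d)[:2]            # two nearest size templates
    w = d[i1] / (d[i0] + d[i1] + 1e-12)   # weight toward the nearer template
    return w * actuator_sets[i0] + (1 - w) * actuator_sets[i1]
```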

  13. Multiresolution Motion Estimation for Low-Rate Video Frame Interpolation

    Directory of Open Access Journals (Sweden)

    Hezerul Abdul Karim

    2004-09-01

    Full Text Available Interpolation of video frames with the purpose of increasing the frame rate requires the estimation of motion in the image so as to interpolate pixels along the path of the objects. In this paper, the specific challenges of low-rate video frame interpolation are illustrated by choosing one well-performing algorithm for high-frame-rate interpolation (Castagno 1996) and applying it to low frame rates. The degradation of performance is illustrated by comparing the original algorithm, the algorithm adapted to low frame rates, and simple averaging. To overcome the particular challenges of low-frame-rate interpolation, two algorithms based on multiresolution motion estimation are developed, compared on an objective and subjective basis, and shown to provide an elegant solution to the specific challenges of low-frame-rate video interpolation.
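The block-matching motion estimation underlying such schemes can be sketched as an exhaustive SAD search. This is a generic single-resolution baseline for illustration, not the paper's multiresolution algorithm; block size and search range are arbitrary demo values.

```python
import numpy as np

def block_match(ref, cur, block=8, search=4):
    """Exhaustive block matching: for each block of `cur`, find the (dy, dx)
    within +/-search in `ref` that minimises the sum of absolute differences."""
    h, w = cur.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur_blk = cur[by:by + block, bx:bx + block].astype(int)
            best, best_mv = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue          # candidate window falls outside the frame
                    sad = np.abs(ref[y:y + block, x:x + block].astype(int) - cur_blk).sum()
                    if best is None or sad < best:
                        best, best_mv = sad, (dy, dx)
            mvs[by // block, bx // block] = best_mv
    return mvs
```

A multiresolution variant would run this search on a coarse image pyramid first and refine the coarse MVs at each finer level.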

  14. SEAMIST™ in-situ instrumentation and vapor sampling system applications in the Sandia Mixed Waste Landfill Integrated Demonstration Program

    International Nuclear Information System (INIS)

    Lowry, W.E.; Dunn, S.D.; Cremer, S.C.; Williams, C.

    1994-01-01

    The SEAMIST™ inverting-membrane deployment system has been used successfully at the Mixed Waste Landfill Integrated Demonstration (MWLID) for multipoint vapor sampling, pressure measurement, permeability measurement, sensor-integration demonstrations, and borehole lining. Several instruments were deployed inside the SEAMIST™-lined boreholes to detect metals, radionuclides, moisture, and geologic variations. The liner protected the instruments from contamination, maintained support of the uncased borehole wall, and sealed the total borehole from air circulation. The current activities have included the installation of three multipoint vapor sampling systems and sensor integration systems in 100-foot-deep vertical boreholes. A long-term pressure monitoring program has recorded barometric pressure effects at depth with relatively high spatial resolution. The SEAMIST™ system has been integrated with a variety of hydrologic and chemical sensors for in-situ measurements, demonstrating its versatility as an instrument deployment system that allows easy emplacement and removal. Standard SEAMIST™ vapor sampling systems were also integrated with state-of-the-art VOC analysis technologies (automated GC, UV laser fluorometer). The results and status of these demonstration tests are presented.

  15. Systems and methods for interpolation-based dynamic programming

    KAUST Repository

    Rockwood, Alyn

    2013-01-03

    Embodiments of systems and methods for interpolation-based dynamic programming are disclosed. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.

  17. Density fitting for derivatives of Coulomb integrals in ab initio calculations using mixed Gaussian and plane-wave basis

    Czech Academy of Sciences Publication Activity Database

    Čársky, Petr

    2009-01-01

    Vol. 109, No. 620 (2009), pp. 1237-1242 ISSN 0020-7608 R&D Projects: GA ČR GA203/07/0070; GA ČR GA202/08/0631; GA AV ČR 1ET400400413; GA AV ČR IAA100400501 Institutional research plan: CEZ:AV0Z40400503 Keywords: Derivatives of Coulomb integrals * mixed Gaussian and plane-wave basis sets * electron scattering * computer time saving Subject RIV: CF - Physical; Theoretical Chemistry Impact factor: 1.315, year: 2009

  18. Integrated assessment of chemical stressors and ecological impact in mixed land use stream systems

    DEFF Research Database (Denmark)

    Sonne, Anne Thobo

    activities, including contaminated sites. To determine potential impacts, the chemical quality of both organic (i.e. pharmaceuticals, gasoline constituents, chlorinated solvents, and pesticides) and inorganic (i.e. metals, general water chemistry and macroions) compounds was assessed in all three stream...... multiple compounds (i.e. organic and inorganic chemical stressors) and stream compartments to locate key sources and risk drivers. The approaches and findings in this thesis could truly be helpful for management and future remediation of mixed land use stream systems....... of the different stream compartments thus comprises both temporal and spatial variation. Despite the growing understanding of the complexity, approaches for a holistic risk assessment of the potential impacts in the three stream compartments of a mixed land use stream system are still missing. To investigate...

  19. Evaluating quality of patient care communication in integrated care settings: a mixed methods approach

    NARCIS (Netherlands)

    Gulmans, J.; Gulmans, J.; Vollenbroek-Hutten, Miriam Marie Rosé; van Gemert-Pijnen, Julia E.W.C.; van Harten, Willem H.

    2007-01-01

    Background. Owing to the involvement of multiple professionals from various institutions, integrated care settings are prone to suboptimal patient care communication. To assure continuity, communication gaps should be identified for targeted improvement initiatives. However, available assessment

  20. Modeling the optimal energy mix in 2030 : Impact of the integration of renewable energy sources

    OpenAIRE

    Arthur, Camu

    2016-01-01

    The European Council has recently set objectives in the matter of energy and climate policies and thus the interest in renewable energies is more than ever at stake. However, the introduction of renewable energies in an energy mix is also accelerated and altered by political targets. The two most widespread renewable technologies, photovoltaic and wind farms, have specific characteristics - decentralized, intermittency, uncertain production forecast up until a few hours ahead - that oblige to...

  1. Attempted integration of multiple species of turaco into a mixed-species aviary.

    Science.gov (United States)

    Valuska, Annie J; Leighty, Katherine A; Ferrie, Gina M; Nichols, Valerie D; Tybor, Cheryl L; Plassé, Chelle; Bettinger, Tamara L

    2013-03-01

    Mixed-species exhibits offer a variety of benefits but can be challenging to maintain due to difficulty in managing interspecific interactions. This is particularly true when little has been documented on the behavior of the species being mixed. This was the case when we attempted to house three species of turaco (family: Musophagidae) together with other species in a walk-through aviary. To learn more about the behavior of great blue turacos, violaceous turacos, and white-bellied gray go-away birds, we supplemented opportunistic keeper observations with systematic data collection on their behavior, location, distance from other birds, and visibility to visitors. Keepers reported high levels of aggression among turacos, usually initiated by a go-away bird or a violaceous turaco. Most aggression occurred during feedings or when pairs were defending nest sites. Attempts to reduce aggression by temporarily removing birds to holding areas and reintroducing them days later were ineffective. Systematic data collection revealed increased social behavior, including aggression, during breeding season in the violaceous turacos, as well as greater location fidelity. These behavioral cues may be useful in predicting breeding behavior in the future. Ultimately, we were only able to house three species of turaco together for a short time, and prohibitively high levels of conflict occurred when pairs were breeding. We conclude that mixing these three turaco species is challenging and may not be the most appropriate housing situation for them, particularly during breeding season. However, changes in turaco species composition, sex composition, or exhibit design may result in more compatible mixed-turaco species groups. © 2012 Wiley Periodicals, Inc.

  2. The mixed waste focus area mercury working group: an integrated approach for mercury treatment and disposal

    International Nuclear Information System (INIS)

    Conley, T.B.; Morris, M.I.; Holmes-Burns, H.; Petersell, J.; Schwendiman, L.

    1997-01-01

    In May 1996, the U.S. Department of Energy (DOE) Mixed Waste Focus Area (MWFA) initiated the Mercury Work Group (HgWG), which was established to address and resolve the issues associated with mercury-contaminated mixed wastes. Three of the first four technology deficiencies identified during the MWFA technical baseline development process were related to mercury amalgamation, stabilization, and separation/removal. The HgWG will assist the MWFA in soliciting, identifying, initiating, and managing all the efforts required to address these deficiencies. The focus of the HgWG is to better establish the mercury-related treatment needs at the DOE sites, refine the MWFA technical baseline as it relates to mercury treatment, and make recommendations to the MWFA on how to most effectively address these needs. The team will initially focus on the sites with the most mercury-contaminated mixed wastes, whose representatives comprise the HgWG. However, the group will also work with the sites with less inventory to maximize the effectiveness of these efforts in addressing the mercury-related needs throughout the entire complex.

  3. Distance-two interpolation for parallel algebraic multigrid

    International Nuclear Information System (INIS)

    Sterck, H de; Falgout, R D; Nolting, J W; Yang, U M

    2007-01-01

    In this paper we study the use of long-distance interpolation methods with the low-complexity coarsening algorithm PMIS. AMG performance and scalability are compared for classical as well as long-distance interpolation methods on parallel computers. It is shown that the increased interpolation accuracy largely restores the scalability of AMG convergence factors for PMIS-coarsened grids, and, in combination with complexity-reducing methods such as interpolation truncation, one obtains a class of parallel AMG methods that enjoy excellent scalability properties on large parallel computers.

  4. Comparison of Interpolation Methods as Applied to Time Synchronous Averaging

    National Research Council Canada - National Science Library

    Decker, Harry

    1999-01-01

    Several interpolation techniques were investigated to determine their effect on time synchronous averaging of gear vibration signals and also the effects on standard health monitoring diagnostic parameters...

  5. Collaboration processes and perceived effectiveness of integrated care projects in primary care: a longitudinal mixed-methods study.

    Science.gov (United States)

    Valentijn, Pim P; Ruwaard, Dirk; Vrijhoef, Hubertus J M; de Bont, Antoinette; Arends, Rosa Y; Bruijnzeels, Marc A

    2015-10-09

    Collaborative partnerships are considered an essential strategy for integrating local disjointed health and social services. Currently, little evidence is available on how integrated care arrangements between professionals and organisations are achieved through the evolution of collaboration processes over time. The first aim was to develop a typology of integrated care projects (ICPs) based on the final degree of integration as perceived by multiple stakeholders. The second aim was to study how types of integration differ in changes of collaboration processes over time and final perceived effectiveness. A longitudinal mixed-methods study design based on two data sources (surveys and interviews) was used to identify the perceived degree of integration and patterns in collaboration among 42 ICPs in primary care in The Netherlands. We used cluster analysis to identify distinct subgroups of ICPs based on the final perceived degree of integration from a professional, organisational and system perspective. With the use of ANOVAs, the subgroups were contrasted based on: 1) changes in collaboration processes over time (shared ambition, interests and mutual gains, relationship dynamics, organisational dynamics and process management) and 2) final perceived effectiveness (i.e. rated success) at the professional, organisational and system levels. The ICPs were classified into three subgroups: 'United Integration Perspectives (UIP)', 'Disunited Integration Perspectives (DIP)' and 'Professional-oriented Integration Perspectives (PIP)'. ICPs within the UIP subgroup made the strongest increase in trust-based (mutual gains and relationship dynamics) as well as control-based (organisational dynamics and process management) collaboration processes and had the highest overall effectiveness rates. On the other hand, ICPs within the DIP subgroup decreased on collaboration processes and had the lowest overall effectiveness rates. ICPs within the PIP subgroup increased in control

  6. A One System Integrated Approach to Simulant Selection for Hanford High Level Waste Mixing and Sampling Tests - 13342

    International Nuclear Information System (INIS)

    Thien, Mike G.; Barnes, Steve M.

    2013-01-01

    The Hanford Tank Operations Contractor (TOC) and the Hanford Waste Treatment and Immobilization Plant (WTP) contractor are both engaged in demonstrating mixing, sampling, and transfer system capabilities using simulated Hanford High-Level Waste (HLW) formulations. This represents one of the largest remaining technical issues with the high-level waste treatment mission at Hanford. Previous testing has focused on very specific TOC or WTP test objectives and consequently the simulants were narrowly focused on those test needs. A key attribute in the Defense Nuclear Facilities Safety Board (DNFSB) Recommendation 2010-2 is to ensure testing is performed with a simulant that represents the broad spectrum of Hanford waste. The One System Integrated Project Team is a new joint TOC and WTP organization intended to ensure technical integration of specific TOC and WTP systems and testing. A new approach to simulant definition has been mutually developed that will meet both TOC and WTP test objectives for the delivery and receipt of HLW. The process used to identify critical simulant characteristics, incorporate lessons learned from previous testing, and identify specific simulant targets that ensure TOC and WTP testing addresses the broad spectrum of Hanford waste characteristics that are important to mixing, sampling, and transfer performance are described. (authors)

  7. A One System Integrated Approach to Simulant Selection for Hanford High Level Waste Mixing and Sampling Tests - 13342

    Energy Technology Data Exchange (ETDEWEB)

    Thien, Mike G. [Washington River Protection Solutions, LLC, P.O Box 850, Richland WA, 99352 (United States); Barnes, Steve M. [Waste Treatment Plant, 2435 Stevens Center Place, Richland WA 99354 (United States)

    2013-07-01

    The Hanford Tank Operations Contractor (TOC) and the Hanford Waste Treatment and Immobilization Plant (WTP) contractor are both engaged in demonstrating mixing, sampling, and transfer system capabilities using simulated Hanford High-Level Waste (HLW) formulations. This represents one of the largest remaining technical issues with the high-level waste treatment mission at Hanford. Previous testing has focused on very specific TOC or WTP test objectives and consequently the simulants were narrowly focused on those test needs. A key attribute in the Defense Nuclear Facilities Safety Board (DNFSB) Recommendation 2010-2 is to ensure testing is performed with a simulant that represents the broad spectrum of Hanford waste. The One System Integrated Project Team is a new joint TOC and WTP organization intended to ensure technical integration of specific TOC and WTP systems and testing. A new approach to simulant definition has been mutually developed that will meet both TOC and WTP test objectives for the delivery and receipt of HLW. The process used to identify critical simulant characteristics, incorporate lessons learned from previous testing, and identify specific simulant targets that ensure TOC and WTP testing addresses the broad spectrum of Hanford waste characteristics that are important to mixing, sampling, and transfer performance are described. (authors)

  9. Shape-based grey-level image interpolation

    International Nuclear Information System (INIS)

    Keh-Shih Chuang; Chun-Yuan Chen; Ching-Kai Yeh

    1999-01-01

    The three-dimensional (3D) object data obtained from a CT scanner usually have unequal sampling frequencies in the x-, y- and z-directions. Generally, the 3D data are first interpolated between slices to obtain isotropic resolution, reconstructed, then operated on using object extraction and display algorithms. The traditional grey-level interpolation introduces a layer of intermediate substance and is not suitable for objects whose grey levels differ greatly from the surrounding background. The shape-based interpolation method transfers a pixel location to a parameter related to the object shape and the interpolation is performed on that parameter. This process is able to achieve a better interpolation but its application is limited to binary images only. In this paper, we present an improved shape-based interpolation method for grey-level images. The new method uses a polygon to approximate the object shape and performs the interpolation using polygon vertices as references. The binary images representing the shape of the object were first generated via image segmentation on the source images. The target object binary image was then created using regular shape-based interpolation. The polygon enclosing the object for each slice can be generated from the shape of that slice. We determined the relative location in the source slices of each pixel inside the target polygon using the vertices of a polygon as the reference. The target slice grey-level was interpolated from the corresponding source image pixels. The image quality of this interpolation method is better and the mean squared difference is smaller than with traditional grey-level interpolation. (author)
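The core idea of shape-based interpolation, interpolating a shape parameter between slices rather than grey levels, can be sketched with a common signed-distance variant (not the authors' polygon-vertex method). The brute-force distance computation below is only viable for the tiny demo grids.

```python
import numpy as np

def signed_distance(mask):
    """Brute-force signed Euclidean distance: positive inside the object,
    negative outside (fine only for tiny demo grids)."""
    fg = np.argwhere(mask)
    bg = np.argwhere(~mask)
    h, w = mask.shape
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                out[y, x] = np.sqrt(((bg - (y, x)) ** 2).sum(axis=1)).min()
            else:
                out[y, x] = -np.sqrt(((fg - (y, x)) ** 2).sum(axis=1)).min()
    return out

def shape_interp(mask_a, mask_b, t=0.5):
    """Interpolate an intermediate slice by blending the shape parameter
    (here a signed distance) instead of grey levels, then thresholding."""
    d = (1.0 - t) * signed_distance(mask_a) + t * signed_distance(mask_b)
    return d > 0
```

Blending the distance fields produces an intermediate object whose boundary lies between the two slice boundaries, avoiding the "layer of intermediate substance" that grey-level averaging creates.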

  10. Office Hours as You Like Them: Integrating Real-Time Chats into the Course Media Mix.

    Science.gov (United States)

    McKeage, Kim

    2001-01-01

    Reports on one professor's use of integrated synchronous electronic office hours (i.e., a "chat room") with asynchronous course conferencing (i.e., email) in a class on introductory marketing. Describes its uses, limitations, benefits, potential problems, and results from a student survey. (EV)

  11. Top-down design and verification methodology for analog mixed-signal integrated circuits

    NARCIS (Netherlands)

    Beviz, P.

    2016-01-01

    The current report contains the introduction of a novel Top-Down Design and Verification methodology for AMS integrated circuits. With the introduction of new design and verification flow, more reliable and efficient development of AMS ICs is possible. The assignment incorporated the research on the

  12. A mixed spectral-integration model for neutral mean wind flow over hills

    DEFF Research Database (Denmark)

    Corbett, Jean-Francois; Ott, Søren; Landberg, Lars

    2008-01-01

    equations are solved spectrally horizontally and by numerical integration vertically. Non-dimensional solutions are stored in look-up tables for quick re-use. Model results are compared to measurements, as well as other authors' flow models in three test cases. The model is implemented and tested in two...

  13. Implementation of integrated care for type 2 diabetes : A protocol for mixed methods research

    NARCIS (Netherlands)

    Busetto, L.; Luijkx, K.G.; Vrijhoef, H.J.M.

    2014-01-01

    Introduction: While integrated care for diabetes mellitus type 2 has achieved good results in terms of intermediate clinical and process outcomes, the evidence-based knowledge on its implementation is scarce, and insights generalisable to other settings therefore remain limited. Objective: This

  14. Interpolation from Grid Lines: Linear, Transfinite and Weighted Method

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2017-01-01

    When two sets of line scans are acquired orthogonal to each other, intensity values are known along the lines of a grid. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid l...

  15. Shape Preserving Interpolation Using C2 Rational Cubic Spline

    Directory of Open Access Journals (Sweden)

    Samsul Ariffin Abdul Karim

    2016-01-01

    Full Text Available This paper discusses the construction of a new C2 rational cubic spline interpolant with cubic numerator and quadratic denominator. The idea has been extended to shape-preserving interpolation for positive data using the constructed rational cubic spline interpolation. The rational cubic spline has three parameters αi, βi, and γi. The sufficient conditions for positivity are derived on one parameter γi, while the other two parameters αi and βi are free parameters that can be used to change the final shape of the resulting interpolating curves. This enables the user to produce many varieties of positive interpolating curves. Cubic spline interpolation with C2 continuity is not able to preserve the shape of positive data. Notably, our scheme is easy to use and does not require knot insertion, and C2 continuity can be achieved by solving tridiagonal systems of linear equations for the unknown first derivatives di, i=1,…,n-1. Comparisons with existing schemes have also been done in detail. From all presented numerical results, the new C2 rational cubic spline gives very smooth interpolating curves compared to some established rational cubic schemes. An error analysis for the case where the function to be interpolated is f(t) ∈ C3[t0, tn] is also investigated in detail.
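The tridiagonal system for the unknown knot derivatives di mentioned in the abstract can be illustrated with the standard (non-rational) C2 cubic Hermite spline. This is a generic sketch with natural end conditions, not the authors' rational scheme with shape parameters αi, βi, γi.

```python
import numpy as np

def c2_spline_derivatives(x, y):
    """Solve the tridiagonal C2-continuity system for the knot derivatives d_i
    of a cubic Hermite spline, with natural (zero second derivative) ends."""
    n = len(x) - 1
    h = np.diff(x)
    delta = np.diff(y) / h
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    A[0, 0], A[0, 1], b[0] = 2.0, 1.0, 3.0 * delta[0]
    for i in range(1, n):
        A[i, i - 1] = 1.0 / h[i - 1]
        A[i, i] = 2.0 * (1.0 / h[i - 1] + 1.0 / h[i])
        A[i, i + 1] = 1.0 / h[i]
        b[i] = 3.0 * (delta[i - 1] / h[i - 1] + delta[i] / h[i])
    A[n, n - 1], A[n, n], b[n] = 1.0, 2.0, 3.0 * delta[-1]
    return np.linalg.solve(A, b)   # a Thomas solve would exploit the tridiagonal band

def hermite_eval(x, y, d, t):
    """Evaluate the piecewise cubic Hermite interpolant at scalar t."""
    i = np.clip(np.searchsorted(x, t) - 1, 0, len(x) - 2)
    h = x[i + 1] - x[i]
    s = (t - x[i]) / h
    h00 = 2 * s**3 - 3 * s**2 + 1
    h10 = s**3 - 2 * s**2 + s
    h01 = -2 * s**3 + 3 * s**2
    h11 = s**3 - s**2
    return h00 * y[i] + h * h10 * d[i] + h01 * y[i + 1] + h * h11 * d[i + 1]
```

The rational scheme leads to a system of the same tridiagonal shape, with coefficients that additionally involve the shape parameters.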

  16. Input variable selection for interpolating high-resolution climate ...

    African Journals Online (AJOL)

    Although the primary input data of climate interpolations are usually meteorological data, other related (independent) variables are frequently incorporated in the interpolation process. One such variable is elevation, which is known to have a strong influence on climate. This research investigates the potential of 4 additional ...

  17. An efficient interpolation filter VLSI architecture for HEVC standard

    Science.gov (United States)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard, High-Efficiency Video Coding (HEVC), is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40 % of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. First, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed; it saves 19.7 % of the processing time on average with acceptable coding-quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which saves about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture supports real-time processing of 4:2:0 format 7680 × 4320@78fps video sequences.
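For reference, the half-sample case of the HEVC 8-tap luma interpolation filter can be modelled in a few lines of software (this is a plain behavioural sketch, unrelated to the proposed VLSI data path; the edge-replication padding and function name are illustrative choices).

```python
import numpy as np

# HEVC 8-tap luma half-sample interpolation filter coefficients (sum = 64).
HEVC_HALF = np.array([-1, 4, -11, 40, 40, -11, 4, -1])

def half_pel_row(samples):
    """Horizontal half-sample interpolation of a 1-D row of 8-bit luma samples.
    Borders are padded by edge replication; the result is rounded, scaled by
    1/64 and clipped back to 8 bits."""
    padded = np.pad(samples.astype(int), (3, 4), mode='edge')
    out = np.array([(HEVC_HALF * padded[i:i + 8]).sum() for i in range(len(samples))])
    return np.clip((out + 32) >> 6, 0, 255)
```

On a linear ramp the filter returns the exact midpoint values in the interior, which is a quick sanity check of the tap alignment.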

  18. Some observations on interpolating gauges and non-covariant gauges

    Indian Academy of Sciences (India)

    We discuss the viability of using interpolating gauges to define the non-covariant gauges starting from the covariant ones. We draw attention to the need for a very careful treatment of boundary condition defining term. We show that the boundary condition needed to maintain gauge-invariance as the interpolating parameter ...

  19. Convergence of trajectories in fractal interpolation of stochastic processes

    International Nuclear Information System (INIS)

    Małysz, Robert

    2006-01-01

    The notion of fractal interpolation functions (FIFs) can be applied to stochastic processes. Such a construction is especially useful for the class of α-self-similar processes with stationary increments and for the class of α-fractional Brownian motions. For these classes, convergence of the Minkowski dimension of the graphs in fractal interpolation to the Hausdorff dimension of the graph of the original process was studied in [Herburt I, Małysz R. On convergence of box dimensions of fractal interpolation stochastic processes. Demonstratio Math 2000;4:873-88], [Małysz R. A generalization of fractal interpolation stochastic processes to higher dimension. Fractals 2001;9:415-28], and [Herburt I. Box dimension of interpolations of self-similar processes with stationary increments. Probab Math Statist 2001;21:171-8]. We prove that trajectories of fractal interpolation stochastic processes converge to the trajectory of the original process. We also show that convergence of the trajectories in fractal interpolation of stochastic processes is equivalent to the convergence of trajectories in linear interpolation.
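The affine maps defining a Barnsley-type FIF through given data points can be sketched as follows. This is the generic deterministic construction; in the stochastic setting of the paper, the interpolation data come from sampling the process, but the maps have the same form. Names and the vertical scaling values are illustrative.

```python
import numpy as np

def fif_maps(x, y, d):
    """Affine maps w_i(u, v) = (a_i u + e_i, c_i u + d_i v + f_i) of the
    fractal interpolation function through the points (x_i, y_i);
    the d_i are free vertical scaling factors with |d_i| < 1."""
    n = len(x) - 1
    span = x[-1] - x[0]
    maps = []
    for i in range(1, n + 1):
        a = (x[i] - x[i - 1]) / span
        e = (x[-1] * x[i - 1] - x[0] * x[i]) / span
        c = (y[i] - y[i - 1] - d[i - 1] * (y[-1] - y[0])) / span
        f = (x[-1] * y[i - 1] - x[0] * y[i]
             - d[i - 1] * (x[-1] * y[0] - x[0] * y[-1])) / span
        maps.append((a, e, c, d[i - 1], f))
    return maps

def apply_map(m, u, v):
    a, e, c, dd, f = m
    return a * u + e, c * u + dd * v + f
```

Each map sends the whole graph onto the piece over [x_{i-1}, x_i]; in particular it maps the endpoints (x_0, y_0) and (x_n, y_n) onto (x_{i-1}, y_{i-1}) and (x_i, y_i), so the attractor interpolates the data.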

  20. Improved Interpolation Kernels for Super-resolution Algorithms

    DEFF Research Database (Denmark)

    Rasti, Pejman; Orlova, Olga; Tamberg, Gert

    2016-01-01

    Super resolution (SR) algorithms are widely used in forensics investigations to enhance the resolution of images captured by surveillance cameras. Such algorithms usually use a common interpolation algorithm to generate an initial guess for the desired high resolution (HR) image. This initial guess...... when their original interpolation kernel is replaced by the ones introduced in this work....
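A standard baseline for the interpolation kernel in this setting is the Keys cubic convolution ("bicubic") kernel; the 1-D sketch below is for illustration only and is not one of the improved kernels introduced in the paper.

```python
import numpy as np

def keys_kernel(x, a=-0.5):
    """Keys cubic convolution kernel, the classic 'bicubic' interpolation kernel."""
    x = np.abs(x)
    out = np.zeros_like(x, dtype=float)
    near = x <= 1
    mid = (x > 1) & (x < 2)
    out[near] = (a + 2) * x[near]**3 - (a + 3) * x[near]**2 + 1
    out[mid] = a * (x[mid]**3 - 5 * x[mid]**2 + 8 * x[mid] - 4)
    return out

def interp1d_cubic(samples, t, a=-0.5):
    """Interpolate samples (taken at integer positions) at fractional position t,
    using the four nearest samples weighted by the kernel."""
    i = int(np.floor(t))
    positions = np.arange(i - 1, i + 3)
    idx = np.clip(positions, 0, len(samples) - 1)   # clamp at the borders
    w = keys_kernel(t - positions, a)
    return (w * samples[idx]).sum()
```

An "improved" kernel in the sense of the paper would replace keys_kernel while keeping the same weighted-sum structure.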

  1. Integration of the effects of air quality measures in the SOLVE mix of measures

    International Nuclear Information System (INIS)

    Hesselmans, T.; Heijnis, F.

    2008-01-01

    SOLVE is the Dutch abbreviation for fast solutions for air and traffic. It is a website through which provinces and municipalities in the Netherlands can gain insight into the most effective traffic measures for improving ambient air quality. Since mid-June 2008, the effects on air quality of approximately 35 traffic measures have been included in the SOLVE mix of measures. The effects of traffic measures on emissions of particulate matter and nitrogen dioxide have been calculated. The effects are expressed as a decrease in the contribution of traffic, on a scale from A (very large decrease) to E (no decrease). The outcome depends on the location where the measure is implemented.

  2. Integrative improvement method and mixed-integer programming in system planning

    International Nuclear Information System (INIS)

    Sadegheih, A.

    2002-01-01

    In this paper, the system planning network is formulated as a mixed-integer program and solved with a genetic algorithm (GA). The cost function of this problem consists of the capital investment cost in discrete form, the cost of transmission losses, and the power generation costs. The DC load flow equations for the network are embedded in the constraints of the mathematical model to avoid the sub-optimal solutions that can arise if such constraints are enforced in an indirect way. The solution of the model gives the best line additions and also provides information regarding the optimal generation at each generation point. The method is demonstrated on the expansion of a 5 bus-bar system to 6 bus-bars.

  3. Exploring a new SU(4) symmetry of meson interpolators

    Science.gov (United States)

    Glozman, L. Ya.; Pak, M.

    2015-07-01

    In recent lattice calculations it has been discovered that mesons, upon truncation of the quasizero modes of the Dirac operator, obey a symmetry larger than the SU(2)_L × SU(2)_R × U(1)_A symmetry of the QCD Lagrangian. This symmetry has been suggested to be SU(4) ⊃ SU(2)_L × SU(2)_R × U(1)_A, which mixes not only the u- and d-quarks of a given chirality, but also the left- and right-handed components. Here it is demonstrated that bilinear q̄q interpolating fields of a given spin J ≥ 1 transform into each other according to irreducible representations of SU(4) or, in general, SU(2N_F). This fact, together with the coincidence of the correlation functions, establishes SU(4) as a symmetry of the J ≥ 1 mesons upon quasizero-mode reduction. It is shown that this symmetry is a symmetry of the confining instantaneous charge-charge interaction in QCD. Different subgroups of SU(4) as well as the SU(4) algebra are explored.

  4. Spatial interpolation of precipitation in a dense gauge network for monsoon storm events in the southwestern United States

    Science.gov (United States)

    Garcia, Matthew; Peters-Lidard, Christa D.; Goodrich, David C.

    2008-05-01

    Inaccuracy in spatially distributed precipitation fields can contribute significantly to the uncertainty of hydrological states and fluxes estimated from land surface models. This paper examines the results of selected interpolation methods for both convective and mixed/stratiform events that occurred during the North American monsoon season over a dense gauge network at the U.S. Department of Agriculture Agricultural Research Service Walnut Gulch Experimental Watershed in the southwestern United States. The spatial coefficient of variation for the precipitation field is employed as an indicator of event morphology, and a gauge clustering factor CF is formulated as a new, scale-independent measure of network organization. We consider that CF > 0 (clustering in the gauge network) will produce errors because of reduced areal representation of the precipitation field. Spatial interpolation is performed using both inverse-distance-weighted (IDW) and multiquadric-biharmonic (MQB) methods. We employ ensembles of randomly selected network subsets for the statistical evaluation of interpolation errors in comparison with the observed precipitation. The magnitude of interpolation errors and differences in accuracy between interpolation methods depend on both the density and the geometrical organization of the gauge network. Generally, MQB methods outperform IDW methods in terms of interpolation accuracy under all conditions, but it is found that the order of the IDW method is important to the results and may, under some conditions, be just as accurate as the MQB method. In almost all results it is demonstrated that the inverse-distance-squared method for spatial interpolation, commonly employed in operational analyses and for engineering assessments, is inferior to the ID-cubed method, which is also more computationally efficient than the MQB method in studies of large networks.
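    The inverse-distance weighting compared above can be sketched minimally as follows; the order of the method is just the exponent on distance (the function name `idw` and the toy gauge values are illustrative, not from the study):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_tgt, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at target points.

    `power` controls locality: power=2 is the common inverse-distance-squared
    interpolator, power=3 the ID-cubed variant the abstract finds more accurate.
    """
    # Pairwise distances, shape (targets, gauges).
    d = np.linalg.norm(xy_tgt[:, None, :] - xy_obs[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, eps) ** power
    # Normalizing makes a target coinciding with a gauge return that gauge's value.
    w /= w.sum(axis=1, keepdims=True)
    return w @ z_obs

# Toy 4-gauge network on a unit square.
gauges = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rain = np.array([10.0, 12.0, 14.0, 20.0])
center = idw(gauges, rain, np.array([[0.5, 0.5]]), power=3.0)
```

    At the center all four gauges are equidistant, so any power returns their plain mean; the powers only differ away from such symmetric points.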

  5. Scalable Intersample Interpolation Architecture for High-channel-count Beamformers

    DEFF Research Database (Denmark)

    Tomov, Borislav Gueorguiev; Nikolov, Svetoslav I; Jensen, Jørgen Arendt

    2011-01-01

    Modern ultrasound scanners utilize digital beamformers that operate on sampled and quantized echo signals. Timing precision is of the essence for achieving good focusing. The direct way to achieve it is through the use of high sampling rates, but that is not economical, so interpolation between echo samples is used. This paper presents a beamformer architecture that combines a band-pass filter-based interpolation algorithm with the dynamic delay-and-sum focusing of a digital beamformer. The reduction in the number of multiplications relative to a linear per-channel interpolation architecture and a band-pass per-channel interpolation architecture is 58% and 75%, respectively, for a 256-channel beamformer using 4-tap filters. The approach allows building high-channel-count beamformers while maintaining high image quality due to the use of sophisticated intersample interpolation.
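    The linear per-channel interpolation that serves as the comparison baseline can be sketched as a delay-and-sum loop over fractional sample delays (function name and toy data are illustrative, not the paper's architecture):

```python
import numpy as np

def das_beamform(rf, delays_samples):
    """Delay-and-sum with linear intersample interpolation.

    Each channel is read at a (generally fractional) focusing delay, the two
    neighboring samples are linearly blended, and the reads are summed.
    """
    out = 0.0
    for ch, d in enumerate(delays_samples):
        i = int(np.floor(d))          # integer part of the delay
        frac = d - i                  # fractional part, 0 <= frac < 1
        out += (1 - frac) * rf[ch, i] + frac * rf[ch, i + 1]
    return out

# 4 channels, each carrying a ramp signal: sample k has value k, so the
# interpolated read at delay d equals d and the sum equals the delay total.
rf = np.tile(np.arange(8.0), (4, 1))
val = das_beamform(rf, [1.5, 2.25, 0.75, 3.0])
```

    The multiplication count of this per-channel blend is what the band-pass filter-based architecture in the paper reduces.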

  6. Fractional Delayer Utilizing Hermite Interpolation with Caratheodory Representation

    Directory of Open Access Journals (Sweden)

    Qiang DU

    2018-04-01

    Full Text Available Fractional delay is indispensable for many sorts of circuits and signal processing applications. A fractional delay filter (FDF) utilizing Hermite interpolation with an analog differentiator is a straightforward way to delay discrete signals. This method has a low time-domain error, but a more complicated sampling module than the Shannon sampling scheme. A simplified scheme, based on Shannon sampling and utilizing Hermite interpolation with a digital differentiator, leads to a much higher time-domain error when the signal frequency approaches the Nyquist rate. In this letter, we propose a novel fractional delayer utilizing Hermite interpolation with Caratheodory representation. The samples of the differential signal are obtained by Caratheodory representation from the samples of the original signal only, so only one sampler is needed and the sampling module is simple. Simulation results for four types of signals demonstrate that the proposed method has significantly higher interpolation accuracy than Hermite interpolation with a digital differentiator.
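    A cubic Hermite fractional delayer can be sketched as below. Note the derivative samples here come from central differences rather than the letter's Caratheodory representation, so this corresponds to the digital-differentiator baseline the proposed method improves on (names are illustrative):

```python
import numpy as np

def hermite_frac_delay(x, mu):
    """Delay a discrete signal by a fraction 0 <= mu < 1 of a sample using
    cubic Hermite interpolation between consecutive samples.

    Derivative samples are approximated by central differences (np.gradient);
    output sample k estimates x(k + mu) and has length len(x) - 1.
    """
    dx = np.gradient(x)                  # digital-differentiator estimate
    x0, x1 = x[:-1], x[1:]
    d0, d1 = dx[:-1], dx[1:]
    t = mu                               # cubic Hermite basis at offset t
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * x0 + h10 * d0 + h01 * x1 + h11 * d1

n = np.arange(32)
sig = np.sin(2 * np.pi * n / 16)         # well below the Nyquist rate
delayed = hermite_frac_delay(sig, 0.25)
```

    For this low-frequency sinusoid the baseline is already accurate; the letter's point is that its error grows as the frequency approaches Nyquist.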

  7. Integrated care pilot in north west London: a mixed methods evaluation

    Directory of Open Access Journals (Sweden)

    Natasha Curry

    2013-07-01

    Full Text Available Introduction: This paper provides the results of a year-long evaluation of a large-scale integrated care pilot in North West London. The pilot aimed to integrate care across primary, acute, community, mental health and social care for people with diabetes and those over 75 years through: care planning; multidisciplinary case reviews; information sharing; and project management support. Methods: The evaluation team conducted qualitative studies of change at organisational, clinician, and patient levels (using interviews, focus groups and a survey); and quantitative analysis of change in service use and patient-level clinical outcomes (using patient-level data sets and a matched control study). Results: The pilot had successfully engaged provider organisations, created a shared strategic vision and established governance structures. However, engagement of clinicians was variable and there was no evidence to date of significant reductions in emergency admissions. There was some evidence of changes in care processes. Conclusion: Although the pilot has demonstrated the beginnings of large-scale change, it remains in the early stages and faces significant challenges as it seeks to become sustainable for the longer term. It is critical that NHS managers and clinicians have realistic expectations of what can be achieved in a relatively short period of time.

  9. Computing Diffeomorphic Paths for Large Motion Interpolation.

    Science.gov (United States)

    Seo, Dohyung; Ho, Jeffrey; Vemuri, Baba C

    2013-06-01

    In this paper, we introduce a novel framework for computing a path of diffeomorphisms between a pair of input diffeomorphisms. Direct computation of a geodesic path on the space of diffeomorphisms Diff(M) is difficult, mainly because of the infinite dimensionality of Diff(M). Our proposed framework, to some degree, bypasses this difficulty using the quotient map of Diff(M) to the quotient space Diff(M)/Diff(M)_μ, obtained by quotienting out the subgroup of volume-preserving diffeomorphisms Diff(M)_μ. This quotient space was recently identified in the mathematics literature as the unit sphere in a Hilbert space, a space with well-known geometric properties. Our framework leverages this recent result by computing the diffeomorphic path in two stages. First, we project the given diffeomorphism pair onto this sphere and then compute the geodesic path between these projected points. Second, we lift the geodesic on the sphere back to the space of diffeomorphisms, by solving a quadratic programming problem with bilinear constraints using the augmented Lagrangian technique with penalty terms. In this way, we can estimate the path of diffeomorphisms, first, staying in the space of diffeomorphisms, and second, preserving shapes/volumes in the deformed images along the path as much as possible. We have applied our framework to interpolate intermediate frames of frame-subsampled video sequences. In the reported experiments, our approach compares favorably with the popular Large Deformation Diffeomorphic Metric Mapping (LDDMM) framework.
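    The first stage reduces to a great-circle geodesic on the unit sphere, which has the familiar closed form (spherical linear interpolation). A minimal sketch, with finite-dimensional unit vectors standing in for the projected diffeomorphisms:

```python
import numpy as np

def slerp(p, q, t):
    """Great-circle geodesic between unit vectors p and q at t in [0, 1]:
    gamma(t) = [sin((1-t)*theta) p + sin(t*theta) q] / sin(theta),
    where theta is the angle between p and q."""
    p = p / np.linalg.norm(p)
    q = q / np.linalg.norm(q)
    theta = np.arccos(np.clip(p @ q, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return p                        # endpoints coincide
    return (np.sin((1 - t) * theta) * p + np.sin(t * theta) * q) / np.sin(theta)

p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])
mid = slerp(p, q, 0.5)                  # stays on the sphere for every t
```

    The hard part of the framework is the second stage, lifting this curve back to diffeomorphisms, which the sketch does not attempt.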

  10. Functions with disconnected spectrum sampling, interpolation, translates

    CERN Document Server

    Olevskii, Alexander M

    2016-01-01

    The classical sampling problem is to reconstruct entire functions with given spectrum S from their values on a discrete set L. From the geometric point of view, the possibility of such reconstruction is equivalent to determining for which sets L the exponential system with frequencies in L forms a frame in the space L^2(S). The book also treats the problem of interpolation of discrete functions by analytic ones with spectrum in S and the problem of completeness of discrete translates. The size and arithmetic structure of both the spectrum S and the discrete set L play a crucial role in these problems. After an elementary introduction, the authors give a new presentation of classical results due to Beurling, Kahane, and Landau. The main part of the book focuses on recent progress in the area, such as construction of universal sampling sets, high-dimensional and non-analytic phenomena. The reader will see how methods of harmonic and complex analysis interplay with various important concepts in different areas, ...

  11. Spatiotemporal video deinterlacing using control grid interpolation

    Science.gov (United States)

    Venkatesan, Ragav; Zwart, Christine M.; Frakes, David H.; Li, Baoxin

    2015-03-01

    With the advent of progressive format display and broadcast technologies, video deinterlacing has become an important video-processing technique. Numerous approaches exist in the literature to accomplish deinterlacing. While most earlier methods were simple linear filtering-based approaches, the emergence of faster computing technologies and even dedicated video-processing hardware in display units has allowed higher quality but also more computationally intense deinterlacing algorithms to become practical. Most modern approaches analyze motion and content in video to select different deinterlacing methods for various spatiotemporal regions. We introduce a family of deinterlacers that employs spectral residue to choose between and weight control grid interpolation based spatial and temporal deinterlacing methods. The proposed approaches perform better than the prior state-of-the-art based on peak signal-to-noise ratio, other visual quality metrics, and simple perception-based subjective evaluations conducted by human viewers. We further study the advantages of using soft and hard decision thresholds on the visual performance.
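    A minimal spatial deinterlacer of the simple linear-filtering kind the paper moves beyond can be sketched as line averaging within one field (function name and toy field are illustrative):

```python
import numpy as np

def spatial_deinterlace(field):
    """Rebuild a full frame from the even field by filling each missing
    (odd) line with the average of the known lines above and below.

    This is the simple intrafield baseline; content-adaptive methods like
    the paper's instead weight spatial and temporal estimates per region.
    """
    h, w = field.shape
    frame = np.zeros((2 * h, w))
    frame[0::2] = field                              # keep transmitted lines
    frame[1:-1:2] = 0.5 * (field[:-1] + field[1:])   # average vertical neighbors
    frame[-1] = field[-1]                            # repeat the last line
    return frame

# A vertical ramp field: 3 known lines become a 6-line frame.
field = np.array([[0.0], [2.0], [4.0]])
frame = spatial_deinterlace(field)
```

    On static content this baseline blurs vertical detail, which is exactly why motion- and content-aware selection between spatial and temporal estimates pays off.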

  12. Simulating propagation of decoupled elastic waves using low-rank approximate mixed-domain integral operators for anisotropic media

    KAUST Repository

    Cheng, Jiubing; Alkhalifah, Tariq Ali; Wu, Zedong; Zou, Peng; Wang, Chenlong

    2016-01-01

    In elastic imaging, the extrapolated vector fields are decoupled into pure wave modes, such that the imaging condition produces interpretable images. Conventionally, mode decoupling in anisotropic media is costly because the operators involved are dependent on the velocity, and thus they are not stationary. We have developed an efficient pseudospectral approach to directly extrapolate the decoupled elastic waves using low-rank approximate mixed-domain integral operators on the basis of the elastic displacement wave equation. We have applied k-space adjustment to the pseudospectral solution to allow for a relatively large extrapolation time step. The low-rank approximation was, thus, applied to the spectral operators that simultaneously extrapolate and decompose the elastic wavefields. Synthetic examples on transversely isotropic and orthorhombic models showed that our approach has the potential to efficiently and accurately simulate the propagation of the decoupled quasi-P and quasi-S modes as well as the total wavefields for elastic wave modeling, imaging, and inversion.

  14. A Mixed WLS Power System State Estimation Method Integrating a Wide-Area Measurement System and SCADA Technology

    Directory of Open Access Journals (Sweden)

    Tao Jin

    2018-02-01

    Full Text Available To address the issue that the phasor measurement units (PMUs) of a wide-area measurement system (WAMS) are not sufficient for static state estimation in most existing power systems, this paper proposes a mixed power system weighted least squares (WLS) state estimation method integrating a wide-area measurement system and supervisory control and data acquisition (SCADA) technology. The hybrid calculation model is established by incorporating phasor measurements (including the node voltage phasors and branch current phasors) and the results of the traditional state estimator in a post-processing estimator. The performance assessment is discussed through setting up mathematical models of the distribution network. Based on PMU placement optimization and bias analysis, the proposed method is shown by simulations of different cases to be accurate and reliable. Furthermore, the calculations show that this method greatly improves the accuracy and stability of the state estimation solution compared with traditional WLS state estimation.
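    The WLS machinery underneath such a hybrid estimator is the standard weighted normal equations; a minimal sketch with a toy linear measurement model (the matrix H, the weights, and the noise values are invented for illustration, and the mixing of sources is expressed purely through the weights):

```python
import numpy as np

def wls(H, z, w):
    """Weighted least squares: minimize (z - H x)^T W (z - H x), giving
    x = (H^T W H)^{-1} H^T W z.

    H stacks the measurement models of both sources; w holds per-measurement
    weights (inverse error variances), so more accurate PMU rows simply
    receive larger weights than SCADA rows.
    """
    W = np.diag(w)
    G = H.T @ W @ H                      # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)

# Two states observed by 3 "SCADA" rows and 2 "PMU" rows.
H = np.array([[1., 0.], [0., 1.], [1., 1.], [1., 0.], [0., 1.]])
z_true = H @ np.array([1.0, 2.0])
noise = np.array([0.05, -0.04, 0.03, 0.001, -0.002])   # PMU rows less noisy
z = z_true + noise
w = np.array([1., 1., 1., 100., 100.])                 # PMU rows weighted higher
x_hat = wls(H, z, w)
```

    With the PMU rows weighted 100:1, the estimate is pulled close to the low-noise phasor measurements despite the noisier SCADA rows.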

  15. Integrated and Optimized Energy-Efficient Construction Package for a Community of Production Homes in the Mixed-Humid Climate

    Energy Technology Data Exchange (ETDEWEB)

    Mallay, D. [Partnership for Home Innovation, Upper Marlboro, MD (United States); Wiehagen, J. [Partnership for Home Innovation, Upper Marlboro, MD (United States); Del Bianco, M. [Partnership for Home Innovation, Upper Marlboro, MD (United States)

    2014-10-01

    This research analyzes how a set of advanced technologies can be integrated into a durable and energy-efficient house in the mixed-humid climate while remaining affordable to homeowners. The technical solutions documented in this report are the cornerstone of the builder's entire business model, based on delivering high-performance homes on a production basis as a standard product offering to all price segments of the residential market. Home Innovation Research Labs partnered with production builder Nexus EnergyHomes (CZ 4), which plans to adopt the successful components of the energy solution package for all 55 homes in the community. The research objective was to optimize the builder's energy solution package based on energy performance and construction costs. All of the major construction features, including envelope upgrades, space conditioning system, hot water system, and solar electric system, were analyzed.

  16. The Breakthrough Series Collaborative on Service Integration: A Mixed Methods Study of a Strengths-Based Initiative

    Directory of Open Access Journals (Sweden)

    Cynthia A. Lietz

    2010-11-01

    Full Text Available Arizona’s Department of Economic Security (DES) engaged in a strengths-based initiative to increase the quality and integration of human services. Twenty teams including employees from state agencies, community leaders, and families were brought together to discuss and implement improvements to a variety of social services. A mixed methods study was conducted to explore the complex process of forming diverse teams to strengthen social services. Specifically, the research team conducted focus groups to collect qualitative data from a purposive sample of the teams to explore their experiences in greater depth. Analysis of the data led to the development of an online survey instrument that allowed all collaborative members an opportunity to participate in the study. Findings suggest that while the teams faced many challenges, a commitment to the process brought perseverance, communication, and creativity, allowing this collaborative to initiate 105 activities to bring about positive changes in social services within their communities.

  17. CMOS integrated avalanche photodiodes and frequency-mixing optical sensor front end for portable NIR spectroscopy instruments.

    Science.gov (United States)

    Yun, Ruida; Sthalekar, Chirag; Joyner, Valencia M

    2011-01-01

    This paper presents the design and measurement results of two avalanche photodiode (APD) structures and a novel frequency-mixing transimpedance amplifier (TIA), which are key building blocks towards a monolithically integrated optical sensor front end for near-infrared (NIR) spectroscopy applications. Two different APD structures are fabricated in an unmodified 0.18 μm CMOS process, one with a shallow trench isolation (STI) guard ring and the other with a P-well guard ring. The APDs are characterized in linear mode. The STI-bounded APD demonstrates better performance and exhibits 3.78 A/W responsivity at a wavelength of 690 nm and a bias voltage of 10.55 V. The frequency-mixing TIA (FM-TIA) employs a T-feedback network incorporating gate-controlled transistors for resistance modulation, enabling simultaneous down-conversion and amplification of the high-frequency modulated photodiode (PD) current. The TIA achieves 92 dBΩ conversion gain with a 0.5 V modulating voltage. The measured IIP3 is 10.6 dBm. The amplifier together with the 50 Ω output buffer draws 23 mA from a 1.8 V power supply.

  18. Mixed-Dimensionality VLSI-Type Configurable Tools for Virtual Prototyping of Biomicrofluidic Devices and Integrated Systems

    Science.gov (United States)

    Makhijani, Vinod B.; Przekwas, Andrzej J.

    2002-10-01

    This report presents results of a DARPA/MTO Composite CAD Project aimed at developing a comprehensive microsystem CAD environment, CFD-ACE+ Multiphysics, for bio- and microfluidic devices and complete microsystems. The project began in July 1998 and was a three-year team effort between CFD Research Corporation, California Institute of Technology (CalTech), University of California, Berkeley (UCB), and Tanner Research, with Mr. Don Verlee from Abbott Labs participating as a consultant on the project. The overall objective of this project was to develop, validate and demonstrate several applications of a user-configurable VLSI-type mixed-dimensionality software tool for the design of biomicrofluidic devices and integrated systems. The developed tool would provide high-fidelity 3-D multiphysics modeling capability, 1-D fluidic circuit modeling, a SPICE interface for system-level simulations, and mixed-dimensionality design. It would combine tools for layouts and process fabrication, geometric modeling, and automated grid generation, and interfaces to EDA tools (e.g. Cadence) and MCAD tools (e.g. ProE).

  19. Highly Integrated Mixed-Mode Electronics for the readout of Time Projection Chambers

    CERN Document Server

    França Santos, Hugo Miguel; Musa, Luciano

    Time Projection Chambers (TPCs) are one of the most prevalent particle trackers for high-energy physics experiments. Future planned TPCs for the International Linear Collider (ILC) and the Compact Linear Collider (CLIC) require very high spatial resolution in large gas volumes, but impose a low material budget for the end caps of the TPC cylinder. This constraint cannot be met with state-of-the-art front-end electronics because of their relatively large mass and associated water-cooling system. To reach the required material budget, highly compact and power-efficient dedicated TPC front-end electronics must be developed. This project aims at re-designing the different electronic elements with significant improvements in performance, power efficiency and versatility, and at developing an integrated circuit that merges all components of the front-end electronics. This chip targets large-volume production at low unit cost and deployment in multiple detectors. The design of ...

  20. ePRISM: A case study in multiple proxy and mixed temporal resolution integration

    Science.gov (United States)

    Robinson, Marci M.; Dowsett, Harry J.

    2010-01-01

    As part of the Pliocene Research, Interpretation and Synoptic Mapping (PRISM) Project, we present the ePRISM experiment, designed 1) to provide climate modelers with a reconstruction of an early Pliocene warm period that was warmer than the PRISM interval (~3.3 to 3.0 Ma), yet still similar in many ways to modern conditions, and 2) to provide an example of how best to integrate multiple-proxy sea surface temperature (SST) data from time series with varying degrees of temporal resolution and age control as we begin to build the next generation of PRISM, the PRISM4 reconstruction, spanning a constricted time interval. While it is possible to tie individual SST estimates to a single light (warm) oxygen isotope event, we find that the warm-peak average of SST estimates over a narrowed time interval is preferable for paleoclimate reconstruction, as it allows for the inclusion of more records from multiple paleotemperature proxies.

  1. Integration of mixed conducting membranes in an oxygen–steam biomass gasification process

    DEFF Research Database (Denmark)

    Puig Arnavat, Maria; Soprani, Stefano; Søgaard, Martin

    2013-01-01

    Oxygen–steam biomass gasification produces a high quality syngas with a high H2/CO ratio that is suitable for upgrading to liquid fuels. Such a gas is also well suited for use in conjunction with solid oxide fuel cells, giving rise to a system yielding high electrical efficiency based on biomass... distillation, especially for small to medium scale plants. This paper examines different configurations for oxygen production using MIEC membranes where the oxygen partial pressure difference is achieved by creating a vacuum on the permeate side, compressing the air on the feed side, or a combination of the two... The two configurations demonstrating the highest efficiency are then thermally integrated into an oxygen–steam biomass gasification plant. The energy demand for oxygen production and the membrane area required for a 6 MWth biomass plant are calculated for different operating conditions. Increasing...

  2. Integrated and Optimized Energy-Efficient Construction Package for a Community of Production Homes in the Mixed-Humid Climate

    Energy Technology Data Exchange (ETDEWEB)

    Mallay, D.; Wiehagen, J.; Del Bianco, M.

    2014-10-01

    Selection and integration of high performance home features are two sides of the same coin in energy-efficient sustainable construction. Many advanced technologies are available for selection, but it is the integration of these technologies into an affordable set of features that can be used on a production basis by builders that ensures whole-house performance meets expectations. This research analyzes how a set of advanced technologies can be integrated into a durable and energy-efficient house in the mixed-humid climate while remaining affordable to homeowners. The technical solutions documented in this report are the cornerstone of the builder's entire business model, based on delivering high-performance homes on a production basis as a standard product offering to all price segments of the residential market. Home Innovation Research Labs partnered with production builder Nexus EnergyHomes (CZ 4). The builder plans to adopt the successful components of the energy solution package for all 55 homes in the community. The research objective was to optimize the builder's energy solution package based on energy performance and construction costs. All of the major construction features, including envelope upgrades, space conditioning system, hot water system, and solar electric system, were analyzed. The information in this report can be used by builders and designers to evaluate options, and the integration of options, for increasing the efficiency of home designs in climate zone 4. The data also provide a point of reference for evaluating estimates of energy savings and costs for specific features.

  3. Integrated Water Gas Shift Membrane Reactors Utilizing Novel, Non Precious Metal Mixed Matrix Membrane

    Energy Technology Data Exchange (ETDEWEB)

    Ferraris, John P. [Univ. of Texas-Dallas, Richardson, TX (United States). Dept. of Chemistry

    2013-09-30

    Nanoparticles of zeolitic imidazolate frameworks and other related hybrid materials were prepared by modifying published synthesis procedures: introducing bases, changing stoichiometric ratios, or adjusting reaction conditions. These materials were stable at temperatures >300 °C and were compatible with the polymer matrices used to prepare mixed-matrix membranes (MMMs). MMMs tested at 300 °C exhibited a >30-fold increase in permeability compared to those measured at 35 °C, while maintaining H2/CO2 selectivity. Measurements at high pressure (up to 30 atm) and high temperature (up to 300 °C) resulted in an increase in gas flux across the membrane with retention of selectivity. No variations in permeability were observed at high pressures at either 35 or 300 °C. CO2-induced plasticization was not observed for Matrimid®, VTEC, and PBI polymers or their MMMs at 30 atm and 300 °C. Membrane surface modification by cross-linking with ethanol diamine resulted in an increase in H2/CO2 selectivity at 35 °C. Spectrometric analysis showed that the cross-linking was effective at temperatures <150 °C. At higher temperatures, the cross-linked membranes exhibit H2/CO2 selectivity similar to the uncross-linked polymer. Performance of the polybenzimidazole (PBI) hollow fibers prepared at Santa Fe Science and Technology (SFST, Inc.) showed increased flux and selectivity at 300 °C, comparable to a flat PBI membrane. A water-gas shift reactor has been built and is currently being optimized for testing under DOE conditions.

  4. Research of Cubic Bezier Curve NC Interpolation Signal Generator

    Directory of Open Access Journals (Sweden)

    Shijun Ji

    2014-08-01

    Full Text Available Interpolation technology is the core of the computer numerical control (CNC) system, and the precision and stability of the interpolation algorithm directly affect the machining precision and speed of the CNC system. Most existing numerical control interpolation technology can achieve only circular arc, linear, or parabolic interpolation. For NC machining of parts with complicated surfaces, the mathematical model must be established, the curve and surface outlines of the parts generated, and the generated outline then discretized into a large number of straight-line or arc segments for processing, which creates complex programs with large amounts of code and inevitably introduces approximation error. All these factors affect the machining accuracy, surface roughness, and machining efficiency. The stepless interpolation of a cubic Bezier curve controlled by an analog signal is studied in this paper. The tool motion trajectory of the Bezier curve can be planned directly in the CNC system by adjusting control points, and these data are then fed to the control motor, which completes the precise feeding of the Bezier curve. This method extends CNC trajectory control from simple lines and circular arcs to complex engineering curves, and it provides a new, economical way to machine curved-surface parts with high quality and high efficiency.
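    The curve evaluation at the heart of such a generator is de Casteljau's algorithm, which reduces a cubic Bezier to three rounds of linear interpolation; a minimal sketch (the function name and control points are illustrative, not from the paper):

```python
import numpy as np

def bezier3(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1] by
    de Casteljau's algorithm: repeatedly blend adjacent control points.

    p1 and p2 play the role of the adjustable control points used to
    plan the tool trajectory.
    """
    pts = np.array([p0, p1, p2, p3], dtype=float)
    for _ in range(3):                   # three rounds of linear interpolation
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

start, c1, c2, end = [0, 0], [1, 2], [3, 2], [4, 0]
mid = bezier3(start, c1, c2, end, 0.5)
```

    Sweeping t continuously, as an analog control signal would, traces the whole trajectory without discretizing it into line or arc segments.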

  5. [An Improved Spectral Quaternion Interpolation Method of Diffusion Tensor Imaging].

    Science.gov (United States)

    Xu, Yonghong; Gao, Shangce; Hao, Xiaofei

    2016-04-01

    Diffusion tensor imaging (DTI) is a magnetic resonance imaging technology that has developed rapidly in recent years. Diffusion tensor interpolation is a very important procedure in DTI image processing. The traditional spectral quaternion interpolation method revises the direction of the interpolated tensor and can preserve tensor anisotropy, but it does not revise the size of the tensors. The present study puts forward an improved spectral quaternion interpolation method on the basis of the traditional one. First, we decomposed the diffusion tensors, with the direction of each tensor represented by a quaternion. Then we revised the size and direction of the tensor separately according to the situation. Finally, we acquired the tensor at the interpolation point by calculating the weighted average. We compared the improved method with the spectral quaternion method and the Log-Euclidean method on both simulated and real data. The results showed that the improved method not only keeps the monotonicity of the fractional anisotropy (FA) and of the determinant of the tensors, but also preserves tensor anisotropy. In conclusion, the improved method provides an important interpolation method for diffusion tensor image processing.

  6. Introduction to special section of the Journal of Family Psychology, advances in mixed methods in family psychology: integrative and applied solutions for family science.

    Science.gov (United States)

    Weisner, Thomas S; Fiese, Barbara H

    2011-12-01

    Mixed methods in family psychology refer to the systematic integration of qualitative and quantitative techniques to represent family processes and settings. Over the past decade, significant advances have been made in study design, analytic strategies, and technological support (such as software) that allow for the integration of quantitative and qualitative methods and for making appropriate inferences from mixed methods. This special section of the Journal of Family Psychology illustrates how mixed methods may be used to advance knowledge in family science through identifying important cultural differences in family structure, beliefs, and practices, and revealing patterns of family relationships to generate new measurement paradigms and inform clinical practice. Guidance is offered to advance mixed methods research in family psychology through sound principles of peer review.

  7. Shape-based interpolation of multidimensional grey-level images

    International Nuclear Information System (INIS)

    Grevera, G.J.; Udupa, J.K.

    1996-01-01

    Shape-based interpolation as applied to binary images causes the interpolation process to be influenced by the shape of the object. It accomplishes this by first applying a distance transform to the data. This results in the creation of a grey-level data set in which the value at each point represents the minimum distance from that point to the surface of the object. (By convention, points inside the object are assigned positive values; points outside are assigned negative values.) This distance transformed data set is then interpolated using linear or higher-order interpolation and is then thresholded at a distance value of zero to produce the interpolated binary data set. In this paper, the authors describe a new method that extends shape-based interpolation to grey-level input data sets. This generalization consists of first lifting the n-dimensional (n-D) image data to represent it as a surface, or equivalently as a binary image, in an (n + 1)-dimensional [(n + 1)-D] space. The binary shape-based method is then applied to this image to create an (n + 1)-D binary interpolated image. Finally, this image is collapsed (inverse of lifting) to create the n-D interpolated grey-level data set. The authors have conducted several evaluation studies involving patient computed tomography (CT) and magnetic resonance (MR) data as well as mathematical phantoms. They all indicate that the new method produces more accurate results than commonly used grey-level linear interpolation methods, although at the cost of increased computation
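
    The binary core of the method can be sketched as follows: compute signed distance maps of two binary slices, interpolate the maps linearly, and threshold at zero. A NumPy toy with a brute-force signed distance transform, practical only for small images; the paper's lifting step for grey-level data is omitted:

```python
import numpy as np

def signed_distance(mask):
    """Signed Euclidean distance map of a binary mask (brute force, small
    images only): positive inside the object, negative outside."""
    pts = np.argwhere(np.ones_like(mask, dtype=bool)).astype(float)
    fg = np.argwhere(mask).astype(float)
    bg = np.argwhere(~mask).astype(float)
    d_to_fg = np.linalg.norm(pts[:, None, :] - fg[None, :, :], axis=-1).min(axis=1)
    d_to_bg = np.linalg.norm(pts[:, None, :] - bg[None, :, :], axis=-1).min(axis=1)
    d = np.where(mask.ravel(), d_to_bg, -d_to_fg)   # + inside, - outside
    return d.reshape(mask.shape)

def shape_based_midslice(mask_a, mask_b):
    """Binary slice halfway between two slices: average the distance maps,
    then threshold at zero."""
    return 0.5 * (signed_distance(mask_a) + signed_distance(mask_b)) > 0.0

# Two concentric discs; the interpolated slice approximates a radius-3 disc.
yy, xx = np.mgrid[:11, :11]
r = np.hypot(yy - 5, xx - 5)
mid = shape_based_midslice(r <= 2, r <= 4)
```

    Both input masks must contain foreground and background pixels for the distance maps to be defined.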

  8. Acupuncture and chiropractic care for chronic pain in an integrated health plan: a mixed methods study

    Directory of Open Access Journals (Sweden)

    DeBar Lynn L

    2011-11-01

    Background: Substantial recent research examines the efficacy of many types of complementary and alternative medicine (CAM) therapies. However, outcomes associated with the "real-world" use of CAM have been largely overlooked, despite calls for CAM therapies to be studied in the manner in which they are practiced. Americans seek CAM treatments far more often for chronic musculoskeletal pain (CMP) than for any other condition. Among CAM treatments for CMP, acupuncture and chiropractic (A/C) care are among those with the highest acceptance by physician groups and the best evidence to support their use. Further, recent alarming increases in delivery of opioid treatment and surgical interventions for chronic pain, despite their high costs, potential adverse effects, and modest efficacy, suggest the need to evaluate real-world outcomes associated with promising non-pharmacological/non-surgical CAM treatments for CMP, which are often well accepted by patients and increasingly used in the community. Methods/Design: This multi-phase, mixed-methods study will: (1) conduct a retrospective study using information from electronic medical records (EMRs) of a large HMO to identify unique clusters of patients with CMP (e.g., those with differing demographics, histories of pain condition, use of allopathic and CAM health services, and comorbidity profiles) that may be associated with different propensities for A/C utilization and/or differential outcomes associated with such care; (2) use qualitative interviews to explore allopathic providers' recommendations for A/C and patients' decisions to pursue and retain CAM care; and (3) prospectively evaluate health services/costs and broader clinical and functional outcomes associated with the receipt of A/C relative to carefully matched comparison participants receiving traditional CMP services. Sensitivity analyses will compare methods relying solely on EMR-derived data versus analyses supplementing EMR data with conventionally collected patient and clinician data.

  9. Acupuncture and chiropractic care for chronic pain in an integrated health plan: a mixed methods study.

    Science.gov (United States)

    DeBar, Lynn L; Elder, Charles; Ritenbaugh, Cheryl; Aickin, Mikel; Deyo, Rick; Meenan, Richard; Dickerson, John; Webster, Jennifer A; Jo Yarborough, Bobbi

    2011-11-25

    Substantial recent research examines the efficacy of many types of complementary and alternative medicine (CAM) therapies. However, outcomes associated with the "real-world" use of CAM have been largely overlooked, despite calls for CAM therapies to be studied in the manner in which they are practiced. Americans seek CAM treatments far more often for chronic musculoskeletal pain (CMP) than for any other condition. Among CAM treatments for CMP, acupuncture and chiropractic (A/C) care are among those with the highest acceptance by physician groups and the best evidence to support their use. Further, recent alarming increases in delivery of opioid treatment and surgical interventions for chronic pain, despite their high costs, potential adverse effects, and modest efficacy, suggest the need to evaluate real-world outcomes associated with promising non-pharmacological/non-surgical CAM treatments for CMP, which are often well accepted by patients and increasingly used in the community. This multi-phase, mixed-methods study will: (1) conduct a retrospective study using information from electronic medical records (EMRs) of a large HMO to identify unique clusters of patients with CMP (e.g., those with differing demographics, histories of pain condition, use of allopathic and CAM health services, and comorbidity profiles) that may be associated with different propensities for A/C utilization and/or differential outcomes associated with such care; (2) use qualitative interviews to explore allopathic providers' recommendations for A/C and patients' decisions to pursue and retain CAM care; and (3) prospectively evaluate health services/costs and broader clinical and functional outcomes associated with the receipt of A/C relative to carefully matched comparison participants receiving traditional CMP services. Sensitivity analyses will compare methods relying solely on EMR-derived data versus analyses supplementing EMR data with conventionally collected patient and clinician data.

  10. On Multiple Interpolation Functions of the q-Genocchi Polynomials

    Directory of Open Access Journals (Sweden)

    Jin Jeong-Hee

    2010-01-01

    Recently, many mathematicians have studied various kinds of q-analogues of Genocchi numbers and polynomials. In the work "New approach to q-Euler, Genocchi numbers and their interpolation functions" (Advanced Studies in Contemporary Mathematics, vol. 18, no. 2, pp. 105–112, 2009), Kim defined new generating functions of q-Genocchi and q-Euler polynomials and their interpolation functions. In this paper, we give another definition of the multiple Hurwitz-type q-zeta function. This function interpolates q-Genocchi polynomials at negative integers. Finally, we also give some identities related to these polynomials.

  11. Spectral interpolation - Zero fill or convolution. [image processing]

    Science.gov (United States)

    Forman, M. L.

    1977-01-01

    Zero fill, or augmentation by zeros, is a method used in conjunction with fast Fourier transforms to obtain spectral spacing at intervals closer than obtainable from the original input data set. In the present paper, an interpolation technique (interpolation by repetitive convolution) is proposed which yields values accurate enough for plotting purposes and which lie within the limits of calibration accuracies. The technique is shown to operate faster than zero fill, since fewer operations are required. The major advantages of interpolation by repetitive convolution are that efficient use of memory is possible (thus avoiding the difficulties encountered in decimation-in-time FFTs) and that it is easy to implement.
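
    For reference, the zero-fill approach itself is compact. A NumPy sketch, assuming the signal is band-limited and the Nyquist bin is empty:

```python
import numpy as np

def zero_fill_interpolate(x, factor):
    """Band-limited interpolation of a real signal by zero-padding its
    FFT spectrum (assumes no energy in the Nyquist bin)."""
    n = len(x)
    spectrum = np.fft.rfft(x)
    padded = np.zeros(n * factor // 2 + 1, dtype=complex)
    padded[: len(spectrum)] = spectrum
    # Rescale so the interpolated samples keep the original amplitude.
    return np.fft.irfft(padded, n * factor) * factor

# A 3-cycle sine sampled at 32 points, interpolated to 128 points.
n, factor = 32, 4
t = np.arange(n) / n
x = np.sin(2 * np.pi * 3 * t)
y = zero_fill_interpolate(x, factor)
```

    For a periodic, band-limited input like this sine, the zero-filled result reproduces the underlying continuous signal at the finer grid; the convolution method of the abstract trades a small amount of this accuracy for speed and memory.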

  12. Steady State Stokes Flow Interpolation for Fluid Control

    DEFF Research Database (Denmark)

    Bhatacharya, Haimasree; Nielsen, Michael Bang; Bridson, Robert

    2012-01-01

    Fluid control methods often require surface velocities interpolated throughout the interior of a shape to use the velocity as a feedback force or as a boundary condition. Prior methods for interpolation in computer graphics — velocity extrapolation in the normal direction and potential flow — suffer from a common problem. They fail to capture the rotational components of the velocity field, although extrapolation in the normal direction does consider the tangential component. We address this problem by casting the interpolation as a steady state Stokes flow. This type of flow captures...

  13. C1 Rational Quadratic Trigonometric Interpolation Spline for Data Visualization

    Directory of Open Access Journals (Sweden)

    Shengjun Liu

    2015-01-01

    A new C1 piecewise rational quadratic trigonometric spline with four local positive shape parameters in each subinterval is constructed to visualize given planar data. Constraints are derived on these free shape parameters to generate shape-preserving interpolation curves for positive and/or monotonic data sets. Two of these shape parameters are constrained while the other two can be set free to interactively control the shape of the curves. Moreover, the order of approximation of the developed interpolant is shown to be O(h³). Numerical experiments demonstrate that our method can construct shape-preserving interpolation curves efficiently.

  14. Graduates' Perceptions of Learning Affordances in Longitudinal Integrated Clerkships: A Dual-Institution, Mixed-Methods Study.

    Science.gov (United States)

    Latessa, Robyn A; Swendiman, Robert A; Parlier, Anna Beth; Galvin, Shelley L; Hirsh, David A

    2017-09-01

    The authors explored affordances that contribute to participants' successful learning in longitudinal integrated clerkships (LICs). This dual-institutional, mixed-methods study included electronic surveys and semistructured interviews of LIC graduates who completed their core clinical (third) year of medical school. These LIC graduates took part in LICs at Harvard Medical School from 2004 to 2013 and the University of North Carolina School of Medicine-Asheville campus from 2009 to 2013. The survey questions asked LIC graduates to rate components of LICs that they perceived as contributing to successful learning. A research assistant interviewed a subset of study participants about their learning experiences. The authors analyzed aggregate data quantitatively and performed a qualitative content analysis on interview data. The graduates reported multiple affordances that they perceive contributed to successful learning in their LIC. The most reported components included continuity and relationships with preceptors, patients, place, and peers, along with integration of and flexibility within the curriculum. As LIC models grow in size and number, and their structures and processes evolve, learners' perceptions of affordances may guide curriculum planning. Further research is needed to investigate to what degree and by what means these affordances support learning in LICs and other models of clinical education.

  15. The complex nature of mixed farming systems requires multidimensional actions supported by integrative research and development efforts.

    Science.gov (United States)

    González-García, E; Gourdine, J L; Alexandre, G; Archimède, H; Vaarst, M

    2012-05-01

    Mixed farming systems (MFS) have demonstrated some success by focusing on the use of integrative and holistic mechanisms, and rationally building on and using the natural and local resource base without exhausting it, while enhancing biodiversity, optimizing complementarities between crops and animal systems and finally increasing opportunities in rural livelihoods. Focusing our analysis and discussion on field experiences and empirical knowledge in the Caribbean islands, this paper discusses the opportunities for a change needed in current MFS research-development philosophy. The importance of shifting from fragile/specialized production systems to MFS under current global conditions is argued with an emphasis on the case of Small Islands Developing States (SIDS) and the Caribbean. Particular vulnerable characteristics as well as the potential and constraints of SIDS and their agricultural sectors are described, while revealing the opportunities for the 'richness' of the natural and local resources to support authentic and less dependent production system strategies. Examples are provided of the use of natural grasses, legumes, crop residues and agro-industrial by-products. We analyse the requirement for a change in research strategies and initiatives through the development of a complex but necessary multi-/inter-/trans-disciplinary teamwork spirit. We stress as essential the collaboration and active participation of local and regional actors, stakeholders and end-users in the identification of research priorities, as well as the generation, exchange and dissemination of knowledge and technology innovations, while strengthening the leadership roles in the conduct of integrative and participative research and development projects.

  16. Chiral properties of baryon interpolating fields

    International Nuclear Information System (INIS)

    Nagata, Keitaro; Hosaka, Atsushi; Dmitrasinovic, V.

    2008-01-01

    We study the chiral transformation properties of all possible local (non-derivative) interpolating field operators for baryons consisting of three quarks with two flavors, assuming good isospin symmetry. We derive and use the relations/identities among the baryon operators with identical quantum numbers that follow from the combined color, Dirac and isospin Fierz transformations. These relations reduce the number of independent baryon operators with any given spin and isospin. The Fierz identities also effectively restrict the allowed baryon chiral multiplets. It turns out that the non-derivative baryons' chiral multiplets have the same dimensionality as their Lorentz representations. For the two independent nucleon operators the only permissible chiral multiplet is the fundamental one, (1/2, 0) + (0, 1/2). For the Δ, admissible Lorentz representations are (1, 1/2) + (1/2, 1) and (3/2, 0) + (0, 3/2). In the case of the (1, 1/2) + (1/2, 1) chiral multiplet, the I(J) = 3/2(3/2) Δ field has one I(J) = 1/2(3/2) chiral partner; otherwise it has none. We also consider the Abelian (U_A(1)) chiral transformation properties of the fields and show that each baryon comes in two varieties: (1) with Abelian axial charge +3; and (2) with Abelian axial charge -1. In the case of the nucleon these are the two Ioffe fields; in the case of the Δ, the (1, 1/2) + (1/2, 1) multiplet has Abelian axial charge -1 and the (3/2, 0) + (0, 3/2) multiplet has Abelian axial charge +3. (orig.)

  17. Comparison of two fractal interpolation methods

    Science.gov (United States)

    Fu, Yang; Zheng, Zeyu; Xiao, Rui; Shi, Haibo

    2017-03-01

    As a tool for studying complex shapes and structures in nature, fractal theory plays a critical role in revealing the organizational structure of complex phenomena. Numerous fractal interpolation methods have been proposed over the past few decades, but they differ substantially in form features and statistical properties. In this study, we simulated one- and two-dimensional fractal surfaces using the midpoint displacement method and the Weierstrass-Mandelbrot fractal function method, and observed great differences between the two methods in statistical characteristics and autocorrelation features. In terms of form features, the simulations of the midpoint displacement method showed a relatively flat surface whose peaks take on different heights as the fractal dimension increases, while the simulations of the Weierstrass-Mandelbrot fractal function method showed a rough surface with dense and highly similar peaks as the fractal dimension increases. In terms of statistical properties, the peak heights from the Weierstrass-Mandelbrot simulations are greater than those of the midpoint displacement method with the same fractal dimension, and the variances are approximately two times larger. When the fractal dimension equals 1.2, 1.4, 1.6, and 1.8, the skewness is positive with the midpoint displacement method and the peaks are all convex, but for the Weierstrass-Mandelbrot fractal function method the skewness is both positive and negative, with values fluctuating in the vicinity of zero. The kurtosis is less than one with the midpoint displacement method, and generally less than that of the Weierstrass-Mandelbrot fractal function method. The autocorrelation analysis indicated that the simulation of the midpoint displacement method is aperiodic with prominent randomness, which is suitable for simulating aperiodic surfaces, while the simulation of the Weierstrass-Mandelbrot fractal function method has
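
    A one-dimensional midpoint displacement simulation of the kind compared here can be sketched as follows (parameter names are hypothetical; the roughness value plays the role of the Hurst exponent, with the displacement amplitude shrinking by 2^(-roughness) at each subdivision level):

```python
import numpy as np

def midpoint_displacement(n_levels, roughness=0.5, rng=None):
    """1-D fractal profile by recursive midpoint displacement."""
    rng = np.random.default_rng(rng)
    profile = np.array([0.0, 0.0])          # fixed endpoints
    scale = 1.0
    for _ in range(n_levels):
        # Displace each midpoint by Gaussian noise at the current scale.
        mids = 0.5 * (profile[:-1] + profile[1:]) \
               + rng.normal(0.0, scale, len(profile) - 1)
        out = np.empty(2 * len(profile) - 1)
        out[0::2] = profile                 # keep old samples
        out[1::2] = mids                    # interleave new midpoints
        profile = out
        scale *= 2.0 ** (-roughness)        # halve the amplitude per level
    return profile

surface_line = midpoint_displacement(6, roughness=0.7, rng=42)
```

    Larger roughness (higher Hurst exponent, lower fractal dimension) damps the displacements faster, giving the flatter surfaces the abstract describes for the midpoint displacement method.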

  18. New mixed finite-element methods

    International Nuclear Information System (INIS)

    Franca, L.P.

    1987-01-01

    New finite-element methods are proposed for mixed variational formulations. The methods are constructed by adding to the classical Galerkin method various least-squares-like terms. The additional terms involve integrals over element interiors and include mesh-parameter-dependent coefficients. The methods are designed to enhance stability. Consistency is achieved in the sense that exact solutions identically satisfy the variational equations. Applied to several problems, simple finite-element interpolations are rendered convergent, including convenient equal-order interpolations that are generally unstable within the Galerkin approach. The methods are subdivided into two classes according to the manner in which stability is attained: (1) methods circumventing the Babuska-Brezzi condition; (2) methods satisfying the Babuska-Brezzi condition. Convergence is established for each class of methods. Applications of the first class of methods to Stokes flow and compressible linear elasticity are presented. The second class of methods is applied to the Poisson, Timoshenko beam and incompressible elasticity problems. Numerical results demonstrate the good stability and accuracy of the methods, and confirm the error estimates.

  19. Cartographic continuum rendering based on color and texture interpolation to enhance photo-realism perception

    Science.gov (United States)

    Hoarau, Charlotte; Christophe, Sidonie

    2017-05-01

    Graphic interfaces of geoportals allow visualizing and overlaying various (visually) heterogeneous geographical data, often by image blending: vector data, maps, aerial imagery, Digital Terrain Models, etc. Map design and geo-visualization may benefit from methods and tools to hybridize, i.e. visually integrate, heterogeneous geographical data and cartographic representations. In this paper, we aim at designing continuous hybrid visualizations between ortho-imagery and symbolized vector data, in order to control a particular visual property, the photo-realism perception. The natural appearance (colors, textures) and various texture effects are used to control the photo-realism level of the visualization: color and texture interpolation blocks have been developed. We present a global design method that allows the behavior of those interpolation blocks to be manipulated on each type of geographical layer, in various ways, in order to provide various cartographic continua.

  20. Interpolation-Based Condensation Model Reduction Part 1: Frequency Window Reduction Method Application to Structural Acoustics

    National Research Council Canada - National Science Library

    Ingel, R

    1999-01-01

    ... (which require derivative information) interpolation functions as well as standard Lagrangian functions, which can be linear, quadratic or cubic, have been used to construct the interpolation windows...

  1. An Integrated Mixed Methods Research Design: Example of the Project Foreign Language Learning Strategies and Achievement: Analysis of Strategy Clusters and Sequences

    OpenAIRE

    Vlčková Kateřina

    2014-01-01

    The presentation focused on a so-called integrated mixed-methods research design, using as an example the Czech Science Foundation project No. GAP407/12/0432, "Foreign Language Learning Strategies and Achievement: Analysis of Strategy Clusters and Sequences". All main integrated parts of the mixed-methods research design were discussed: the aim, theoretical framework, research question, methods, and validity threats. The presentation focused on the so-called integrated multi-method research design on...

  2. Organisational culture and post-merger integration in an academic health centre: a mixed-methods study.

    Science.gov (United States)

    Ovseiko, Pavel V; Melham, Karen; Fowler, Jan; Buchan, Alastair M

    2015-01-22

    Around the world, the last two decades have been characterised by an increase in the numbers of mergers between healthcare providers, including some of the most prestigious university hospitals and academic health centres. However, many mergers fail to bring the anticipated benefits, and successful post-merger integration in university hospitals and academic health centres is even harder to achieve. An increasing body of literature suggests that organisational culture affects the success of post-merger integration and academic-clinical collaboration. This paper reports findings from a mixed-methods single-site study to examine 1) the perceptions of organisational culture in academic and clinical enterprises at one National Health Service (NHS) trust, and 2) the major cultural issues for its post-merger integration with another NHS trust and strategic partnership with a university. From the entire population of 72 clinician-scientists at one of the legacy NHS trusts, 38 (53%) completed a quantitative Competing Values Framework survey and 24 (33%) also provided qualitative responses. The survey was followed up by semi-structured interviews with six clinician-scientists and a group discussion including five senior managers. The cultures of two legacy NHS trusts differed and were primarily distinct from the culture of the academic enterprise. Major cultural issues were related to the relative size, influence, and history of the legacy NHS trusts, and the implications of these for respective identities, clinical services, and finances. Strategic partnership with a university served as an important ameliorating consideration in reaching trust merger. However, some aspects of university entrepreneurial culture are difficult to reconcile with the NHS service delivery model and may create tension. There are challenges in preserving a more desirable culture at one of the legacy NHS trusts, enhancing cultures in both legacy NHS trusts during their post-merger integration, and

  3. Cubic scaling algorithms for RPA correlation using interpolative separable density fitting

    Science.gov (United States)

    Lu, Jianfeng; Thicke, Kyle

    2017-12-01

    We present a new cubic scaling algorithm for the calculation of the RPA correlation energy. Our scheme splits up the dependence between the occupied and virtual orbitals in χ0 by use of Cauchy's integral formula. This introduces an additional integral to be carried out, for which we provide a geometrically convergent quadrature rule. Our scheme also uses the newly developed Interpolative Separable Density Fitting algorithm to further reduce the computational cost in a way analogous to that of the Resolution of Identity method.

  4. Rhie-Chow interpolation in strong centrifugal fields

    Science.gov (United States)

    Bogovalov, S. V.; Tronin, I. V.

    2015-10-01

    Rhie-Chow interpolation formulas are derived from the Navier-Stokes and continuity equations. These formulas are generalized to gas dynamics in strong centrifugal fields (as high as 10^6 g) occurring in gas centrifuges.

  5. Efficient Algorithms and Design for Interpolation Filters in Digital Receiver

    Directory of Open Access Journals (Sweden)

    Xiaowei Niu

    2014-05-01

    Based on polynomial functions, this paper introduces a generalized design method for interpolation filters. Polynomial-based interpolation filters can be implemented efficiently using a modified Farrow structure with an arbitrary frequency response; the filters allow many passbands and stopbands, and for each band the desired amplitude and weight can be set arbitrarily. The optimized coefficients of the interpolation filters in the time domain are obtained by minimizing a weighted mean squared error function, converted into a quadratic programming problem. The optimized coefficients in the frequency domain are obtained by minimizing the maxima (MiniMax) of the weighted mean squared error function. The degree of the polynomials and the length of the interpolation filter can be selected arbitrarily. Numerical examples verified that the proposed design method not only reduces hardware cost effectively but also guarantees excellent performance.
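
    The Farrow idea, fixed branch filters whose outputs are combined as a polynomial in the fractional delay, can be illustrated with a plain cubic Lagrange interpolator. This is a sketch of the structure only, not the optimized filters designed in the paper:

```python
import numpy as np

# Branch-filter coefficient matrix of a Farrow structure implementing
# cubic Lagrange interpolation between x[n] and x[n+1] (support x[n-1..n+2]).
# Each row holds one tap's weight polynomial in the fractional delay mu,
# highest power first.
C = np.array([
    [-1.0,  3.0, -2.0, 0.0],   # weight polynomial for x[n-1]
    [ 3.0, -6.0, -3.0, 6.0],   # weight polynomial for x[n]
    [-3.0,  3.0,  6.0, 0.0],   # weight polynomial for x[n+1]
    [ 1.0,  0.0, -1.0, 0.0],   # weight polynomial for x[n+2]
]) / 6.0

def farrow_interp(x, n, mu):
    """Signal value at fractional index n + mu, with 0 <= mu <= 1."""
    taps = x[n - 1 : n + 3]
    # Evaluate each tap's polynomial at mu (Horner's rule inside polyval);
    # only mu changes at runtime, the branch coefficients C are fixed.
    weights = np.array([np.polyval(row, mu) for row in C])
    return float(np.dot(weights, taps))

# Example: resample a cubic polynomial, which cubic Lagrange reproduces exactly.
signal = np.arange(8.0) ** 3
value = farrow_interp(signal, 3, 0.25)   # the signal at fractional index 3.25
```

    The hardware appeal of the structure is that the coefficient matrix is fixed while only the scalar mu varies per output sample.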

  6. A Meshfree Quasi-Interpolation Method for Solving Burgers’ Equation

    Directory of Open Access Journals (Sweden)

    Mingzhu Li

    2014-01-01

    The main aim of this work is to consider a meshfree algorithm for solving Burgers' equation with quartic B-spline quasi-interpolation. Quasi-interpolation is very useful in approximation theory and its applications, since it yields solutions directly without the need to solve any linear system of equations and overcomes the ill-conditioning problem that results from using the B-spline as a global interpolant. The numerical scheme is presented, using the derivative of the quasi-interpolation to approximate the spatial derivative of the dependent variable and a low-order forward difference to approximate its time derivative. Compared to other numerical methods, the main advantages of our scheme are higher accuracy and lower computational complexity. Meanwhile, the algorithm is very simple and easy to implement, and the numerical experiments show that it is feasible and valid.

  7. [Multimodal medical image registration using cubic spline interpolation method].

    Science.gov (United States)

    He, Yuanlie; Tian, Lianfang; Chen, Ping; Wang, Lifei; Ye, Guangchun; Mao, Zongyuan

    2007-12-01

    Based on the characteristics of PET-CT multimodal image series, a novel image registration and fusion method is proposed, in which the cubic spline interpolation method is applied to interpolate the PET-CT image series, registration is then carried out using a mutual information algorithm, and finally an improved principal component analysis method is used for the fusion of PET-CT multimodal images to enhance the visual effect of the PET image; satisfactory registration and fusion results are thus obtained. The cubic spline interpolation method is used in reconstruction to restore the information missed between image slices, which compensates for the shortcomings of previous registration methods, improves the accuracy of the registration, and makes the fused multimodal images more similar to the real image. Finally, the cubic spline interpolation method has been successfully applied in developing a 3D-CRT (3D Conformal Radiation Therapy) system.

  8. Interpolating and sampling sequences in finite Riemann surfaces

    OpenAIRE

    Ortega-Cerda, Joaquim

    2007-01-01

    We provide a description of the interpolating and sampling sequences on a space of holomorphic functions on a finite Riemann surface, where a uniform growth restriction is imposed on the holomorphic functions.

  9. Illumination estimation via thin-plate spline interpolation.

    Science.gov (United States)

    Shi, Lilong; Xiong, Weihua; Funt, Brian

    2011-05-01

    Thin-plate spline interpolation is used to interpolate the chromaticity of the color of the incident scene illumination across a training set of images. Given the image of a scene under unknown illumination, the chromaticity of the scene illumination can be found from the interpolated function. The resulting illumination-estimation method can be used to provide color constancy under changing illumination conditions and automatic white balancing for digital cameras. A thin-plate spline interpolates over a nonuniformly sampled input space, which in this case is a training set of image thumbnails and associated illumination chromaticities. To reduce the size of the training set, incremental k-medians clustering is applied. Tests on real images demonstrate that the thin-plate spline method can estimate the color of the incident illumination quite accurately, and the proposed training-set pruning significantly decreases the computation.
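
    The thin-plate spline machinery itself is compact: a radial kernel plus an affine part, with weights found by solving one linear system. A NumPy toy using the r² log r² kernel; the paper's thumbnail features and k-medians pruning are omitted, and the training set below is invented for illustration:

```python
import numpy as np

def tps_kernel(r2):
    # U(r) = r^2 log(r^2), taking the limit U(0) = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        u = r2 * np.log(r2)
    return np.where(r2 > 0.0, u, 0.0)

def tps_fit(pts, vals):
    """Solve for the weights of a 2-D thin-plate spline through (pts, vals)."""
    n = len(pts)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
    P = np.hstack([np.ones((n, 1)), pts])       # affine part: 1, x, y
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = tps_kernel(d2)
    A[:n, n:] = P
    A[n:, :n] = P.T
    rhs = np.concatenate([vals, np.zeros(3)])
    coef = np.linalg.solve(A, rhs)
    return coef[:n], coef[n:]                   # kernel weights, affine coeffs

def tps_eval(pts, w, a, q):
    """Evaluate the fitted spline at query point q."""
    d2 = ((q[None, :] - pts) ** 2).sum(axis=-1)
    return float(tps_kernel(d2) @ w + a[0] + a[1] * q[0] + a[2] * q[1])

# Hypothetical training set: five 2-D image features -> illuminant chromaticity.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.2]])
vals = np.array([0.1, 0.4, 0.3, 0.5, 0.2])
w, a = tps_fit(pts, vals)
```

    The side conditions (the zero block and P^T rows) make the kernel weights orthogonal to affine functions, which is what gives the spline its minimum-bending-energy property.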

  10. Fast image interpolation for motion estimation using graphics hardware

    Science.gov (United States)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

    Motion estimation and compensation is the key to high quality video coding. Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding. However sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms. Interpolation can often be a bottleneck in these applications, especially in motion estimation, due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
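
    Bilinear interpolation, the usual choice for half-pel accuracy in block matching, is what such graphics hardware accelerates as a texture fetch. A NumPy reference version for comparison (function name is an assumption, not from the paper):

```python
import numpy as np

def bilinear(img, y, x):
    """Sample image intensity at sub-pixel location (y, x): a weighted
    average of the four surrounding pixels."""
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    dy, dx = y - y0, x - x0
    return ((1 - dy) * (1 - dx) * img[y0, x0]
            + (1 - dy) * dx * img[y0, x0 + 1]
            + dy * (1 - dx) * img[y0 + 1, x0]
            + dy * dx * img[y0 + 1, x0 + 1])

# Half-pel sample inside a small test frame.
frame = np.arange(16.0).reshape(4, 4)
sample = bilinear(frame, 1.5, 1.5)
```

    In a full-search matcher this function would be called for every pixel of every candidate block at every half-pel offset, which is exactly the per-pixel cost the paper moves to the GPU.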

  11. 3D Medical Image Interpolation Based on Parametric Cubic Convolution

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In the display, manipulation and analysis of biomedical image data, the data usually need to be converted to an isotropic discretization through interpolation; cubic convolution interpolation is widely used because of its good trade-off between computational cost and accuracy. In this paper, we present a unified framework for 3D medical image interpolation based on cubic convolution and formulate in detail six methods with different sharpness control parameters. Furthermore, we give an objective comparison of these methods using data sets with different slice spacings. Each slice in these data sets is estimated by each interpolation method and compared with the original slice using three measures: mean-squared difference, number of sites of disagreement, and largest difference. Based on the experimental results, we conclude with a recommendation for interpolating 3D medical images under different situations.
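The parametric kernel behind cubic convolution (Keys' kernel, where the free parameter `a` controls sharpness and `a = -0.5` gives the Catmull-Rom spline) can be written down directly; this 1-D sketch illustrates the family of methods the abstract compares, not the paper's specific 3D variants.

```python
import numpy as np

def cubic_kernel(s, a=-0.5):
    """Parametric cubic convolution kernel; `a` is the sharpness control
    parameter (a = -0.5 is the Catmull-Rom choice)."""
    s = np.abs(np.asarray(s, dtype=float))
    out = np.zeros_like(s)
    m1 = s <= 1
    m2 = (s > 1) & (s < 2)
    out[m1] = (a + 2) * s[m1]**3 - (a + 3) * s[m1]**2 + 1
    out[m2] = a * s[m2]**3 - 5*a * s[m2]**2 + 8*a * s[m2] - 4*a
    return out

def interp1d_cubic(samples, x, a=-0.5):
    """Interpolate uniformly spaced `samples` at fractional position `x`
    using the 4-tap cubic convolution kernel."""
    i = int(np.floor(x))
    acc = 0.0
    for k in range(i - 1, i + 3):                        # 4-tap support
        v = samples[int(np.clip(k, 0, len(samples) - 1))]  # clamp at borders
        acc += v * cubic_kernel([x - k], a)[0]
    return acc

samples = np.array([0.0, 1.0, 2.0, 3.0])
print(interp1d_cubic(samples, 1.5))  # linear data is reproduced: 1.5
```

Applying such a 1-D kernel separably along each axis, with the slice axis resampled to the in-plane spacing, yields the isotropic discretization described above.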

  12. Interpolation and sampling in spaces of analytic functions

    CERN Document Server

    Seip, Kristian

    2004-01-01

    The book is about understanding the geometry of interpolating and sampling sequences in classical spaces of analytic functions. The subject can be viewed as arising from three classical topics: Nevanlinna-Pick interpolation, Carleson's interpolation theorem for H^\\infty, and the sampling theorem, also known as the Whittaker-Kotelnikov-Shannon theorem. The book aims at clarifying how certain basic properties of the space at hand are reflected in the geometry of interpolating and sampling sequences. Key words for the geometric descriptions are Carleson measures, Beurling densities, the Nyquist rate, and the Helson-Szegő condition. The book is based on six lectures given by the author at the University of Michigan. This is reflected in the exposition, which is a blend of informal explanations with technical details. The book is essentially self-contained. There is an underlying assumption that the reader has a basic knowledge of complex and functional analysis. Beyond that, the reader should have some familiari...

  13. Energy-Driven Image Interpolation Using Gaussian Process Regression

    Directory of Open Access Journals (Sweden)

    Lingling Zi

    2012-01-01

    Full Text Available Image interpolation, as a method of obtaining a high-resolution image from the corresponding low-resolution image, is a classical problem in image processing. In this paper, we propose a novel energy-driven interpolation algorithm employing Gaussian process regression. In our algorithm, each interpolated pixel is predicted by a combination of two information sources: first is a statistical model adopted to mine underlying information, and second is an energy computation technique used to acquire information on pixel properties. We further demonstrate that our algorithm can not only achieve image interpolation, but also reduce noise in the original image. Our experiments show that the proposed algorithm can achieve encouraging performance in terms of image visualization and quantitative measures.
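The statistical component of such an approach can be sketched with a minimal Gaussian process regressor in plain NumPy; the RBF kernel, length scale and 1-D "scanline" data below are illustrative assumptions, not the paper's model.

```python
import numpy as np

def gp_predict(X, y, Xs, length=1.0, noise=1e-6):
    """Gaussian process regression with an RBF kernel: predict values at
    query positions `Xs` from observations (X, y)."""
    def k(A, B):
        d2 = (A[:, None] - B[None, :]) ** 2
        return np.exp(-0.5 * d2 / length**2)
    K = k(X, X) + noise * np.eye(len(X))   # jitter keeps the solve stable
    return k(Xs, X) @ np.linalg.solve(K, y)

# Low-resolution scanline of pixel intensities; predict half-pixel positions.
x_lo = np.arange(0.0, 8.0)
y_lo = np.sin(x_lo)
x_hi = np.arange(0.0, 7.01, 0.5)
y_hi = gp_predict(x_lo, y_lo, x_hi)
print(np.max(np.abs(y_hi[::2] - y_lo)))  # near zero at the observed pixels
```

In an image setting the same predictor runs per interpolated pixel over a local neighbourhood, and the posterior smoothing is what gives the noise-reduction effect noted in the abstract.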

  14. Spatial interpolation of point velocities in stream cross-section

    Directory of Open Access Journals (Sweden)

    Hasníková Eliška

    2015-03-01

    Full Text Available The most frequently used instrument for measuring the velocity distribution in the cross-section of small rivers is the propeller-type current meter. Measurements with this instrument yield only a small set of point data. Spatial interpolation of the measured data should produce a dense velocity profile, which is not available from the measurement itself. This paper describes the preparation of the interpolation models.

  15. The Convergence Acceleration of Two-Dimensional Fourier Interpolation

    Directory of Open Access Journals (Sweden)

    Anry Nersessian

    2008-07-01

    Full Text Available The convergence acceleration of two-dimensional trigonometric interpolation for smooth functions on a uniform mesh is considered. Together with theoretical estimates, some numerical results are presented and discussed that reveal the potential of this method for application in image processing. Experiments show that the suggested algorithm accelerates conventional Fourier interpolation even on sparse meshes, which can lead to efficient image compression/decompression algorithms and to applications in image zooming.
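Conventional two-dimensional Fourier (trigonometric) interpolation, the baseline being accelerated here, amounts to zero-padding the spectrum; a minimal sketch:

```python
import numpy as np

def fourier_upsample(img, factor):
    """Upsample a 2-D periodic signal by zero-padding its Fourier spectrum
    (conventional trigonometric interpolation on a uniform mesh)."""
    n, m = img.shape
    F = np.fft.fftshift(np.fft.fft2(img))
    P = np.zeros((n * factor, m * factor), dtype=complex)
    r0, c0 = (n * factor - n) // 2, (m * factor - m) // 2
    P[r0:r0+n, c0:c0+m] = F                      # embed spectrum in larger grid
    return np.real(np.fft.ifft2(np.fft.ifftshift(P))) * factor**2

# A band-limited test image is reproduced exactly at the original grid points.
x = np.arange(16)
img = np.sin(2*np.pi*x[:, None]/16) * np.cos(2*np.pi*x[None, :]/16)
up = fourier_upsample(img, 2)
print(np.max(np.abs(up[::2, ::2] - img)))  # near machine precision
```

For non-periodic or less smooth functions this plain scheme converges slowly at the boundaries, which is the behaviour the paper's acceleration targets.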

  16. Survey: interpolation methods for whole slide image processing.

    Science.gov (United States)

    Roszkowiak, L; Korzynska, A; Zak, J; Pijanowska, D; Swiderska-Chadaj, Z; Markiewicz, T

    2017-02-01

    Evaluating whole slide images of histological and cytological samples is used in pathology for diagnostics, grading and prognosis. It is often necessary to rescale whole slide images of a very large size. Image resizing is one of the most common applications of interpolation. We collect the advantages and drawbacks of nine interpolation methods, and as a result of our analysis, we try to select one interpolation method as the preferred solution. To compare the performance of interpolation methods, test images were scaled and then rescaled to the original size using the same algorithm. The modified image was compared to the original image in various aspects. The time needed for calculations and results of quantification performance on modified images were also compared. For evaluation purposes, we used four general test images and 12 specialized biological immunohistochemically stained tissue sample images. The purpose of this survey is to determine which method of interpolation is the best to resize whole slide images, so they can be further processed using quantification methods. As a result, the interpolation method has to be selected depending on the task involving whole slide images. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.
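The scale-then-rescale evaluation protocol can be sketched with `scipy.ndimage.zoom`; the synthetic test image and the specific spline orders are illustrative assumptions, not the survey's nine methods.

```python
import numpy as np
from scipy.ndimage import zoom

def rescale_error(img, order):
    """Downscale then upscale with the same interpolation order and measure
    the mean-squared difference to the original image."""
    small = zoom(img, 0.5, order=order)
    back = zoom(small, np.array(img.shape) / np.array(small.shape), order=order)
    return np.mean((back - img) ** 2)

# Smooth synthetic stand-in for a tissue image.
y, x = np.mgrid[0:64, 0:64]
img = np.exp(-((x - 32)**2 + (y - 32)**2) / 200.0)

# order 0 = nearest neighbour, 1 = linear, 3 = cubic spline.
errs = {order: rescale_error(img, order) for order in (0, 1, 3)}
print(errs)
```

On smooth content the cubic round-trip error should come out well below nearest-neighbour, mirroring the kind of per-method comparison the survey tabulates.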

  17. Comparing interpolation schemes in dynamic receive ultrasound beamforming

    DEFF Research Database (Denmark)

    Kortbek, Jacob; Andresen, Henrik; Nikolov, Svetoslav

    2005-01-01

    In medical ultrasound, interpolation schemes are often applied in receive focusing for reconstruction of image points. This paper investigates the performance of various interpolation schemes by means of ultrasound simulations of point scatterers in Field II. The investigation includes conventional...... B-mode imaging and synthetic aperture (SA) imaging using a 192-element, 7 MHz linear array transducer with λ pitch as simulation model. The evaluation consists primarily of calculations of the side lobe to main lobe ratio, SLMLR, and the noise power of the interpolation error. When using...... conventional B-mode imaging and linear interpolation, the difference in mean SLMLR is 6.2 dB. With polynomial interpolation the ratio is in the range 6.2 dB to 0.3 dB using 2nd to 5th order polynomials, and with FIR interpolation the ratio is in the range 5.8 dB to 0.1 dB depending on the filter design...

  18. Surface interpolation with radial basis functions for medical imaging

    International Nuclear Information System (INIS)

    Carr, J.C.; Beatson, R.K.; Fright, W.R.

    1997-01-01

    Radial basis functions are presented as a practical solution to the problem of interpolating incomplete surfaces derived from three-dimensional (3-D) medical graphics. The specific application considered is the design of cranial implants for the repair of defects, usually holes, in the skull. Radial basis functions impose few restrictions on the geometry of the interpolation centers and are suited to problems where interpolation centers do not form a regular grid. However, their high computational requirements have previously limited their use to problems where the number of interpolation centers is small (<300). Recently developed fast evaluation techniques have overcome these limitations and made radial basis interpolation a practical approach for larger data sets. In this paper radial basis functions are fitted to depth-maps of the skull's surface, obtained from X-ray computed tomography (CT) data using ray-tracing techniques. They are used to smoothly interpolate the surface of the skull across defect regions. The resulting mathematical description of the skull's surface can be evaluated at any desired resolution to be rendered on a graphics workstation or to generate instructions for operating a computer numerically controlled (CNC) mill.

  19. Hermite-Hadamard Type Integral Inequalities for Functions Whose Second-Order Mixed Derivatives Are Coordinated (s,m)-P-Convex

    Directory of Open Access Journals (Sweden)

    Yu-Mei Bai

    2018-01-01

    Full Text Available We establish some new Hermite-Hadamard type integral inequalities for functions whose second-order mixed derivatives are coordinated (s,m)-P-convex. An expression form of Hermite-Hadamard type integral inequalities via the beta function and the hypergeometric function is also presented. Our results provide a significant complement to the work of Wu et al. involving the Hermite-Hadamard type inequalities for coordinated (s,m)-P-convex functions in an earlier article.
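For context, the classical Hermite-Hadamard inequality that such results generalize states that for a convex function f on [a, b]:

```latex
f\!\left(\frac{a+b}{2}\right) \;\le\; \frac{1}{b-a}\int_a^b f(x)\,dx \;\le\; \frac{f(a)+f(b)}{2}.
```

The coordinated (s,m)-P-convex versions replace convexity of f by the weaker coordinate-wise condition on a rectangle and bound the corresponding double integral.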

  20. An evaluation of the implementation of maternal obesity pathways of care: a mixed methods study with data integration.

    Directory of Open Access Journals (Sweden)

    Nicola Heslehurst

    Full Text Available Maternal obesity has multiple associated risks and requires substantial intervention. This research evaluated the implementation of maternal obesity care pathways from multiple stakeholder perspectives. A simultaneous mixed methods model with data integration was used. Three component studies were given equal priority. 1: Semi-structured qualitative interviews explored obese pregnant women's experiences of being on the pathways. 2: A quantitative and qualitative postal survey explored healthcare professionals' experiences of delivering the pathways. 3: A case note audit quantitatively assessed pathway compliance. Data were integrated using the 'following a thread' and 'convergence coding matrix' methods to search for agreement and disagreement between studies. Study 1: Four themes were identified: women's overall (positive and negative) views of the pathways; knowledge and understanding of the pathways; views on clinical and weight management advice and support; and views on the information leaflet. Key results included positive views of receiving additional clinical care, negative experiences of risk communication, and weight management support was considered a priority. Study 2: Healthcare professionals felt the pathways were worthwhile, facilitated good practice, and increased confidence. Training was consistently identified as being required. Healthcare professionals predominantly focussed on women's response to sensitive obesity communication. Study 3: There was good compliance with antenatal clinical interventions. However, there was poor compliance with public health and postnatal interventions. There were some strong areas of agreement between component studies which can inform future development of the pathways. However, disagreement between studies included a lack of shared priorities between healthcare professionals and women, different perspectives on communication issues, and different perspectives on women's prioritisation of weight

  1. An Evaluation of the Implementation of Maternal Obesity Pathways of Care: A Mixed Methods Study with Data Integration

    Science.gov (United States)

    Heslehurst, Nicola; Dinsdale, Sarah; Sedgewick, Gillian; Simpson, Helen; Sen, Seema; Summerbell, Carolyn Dawn; Rankin, Judith

    2015-01-01

    Objectives Maternal obesity has multiple associated risks and requires substantial intervention. This research evaluated the implementation of maternal obesity care pathways from multiple stakeholder perspectives. Study Design A simultaneous mixed methods model with data integration was used. Three component studies were given equal priority. 1: Semi-structured qualitative interviews explored obese pregnant women’s experiences of being on the pathways. 2: A quantitative and qualitative postal survey explored healthcare professionals’ experiences of delivering the pathways. 3: A case note audit quantitatively assessed pathway compliance. Data were integrated using the ‘following a thread’ and ‘convergence coding matrix’ methods to search for agreement and disagreement between studies. Results Study 1: Four themes were identified: women’s overall (positive and negative) views of the pathways; knowledge and understanding of the pathways; views on clinical and weight management advice and support; and views on the information leaflet. Key results included positive views of receiving additional clinical care, negative experiences of risk communication, and weight management support was considered a priority. Study 2: Healthcare professionals felt the pathways were worthwhile, facilitated good practice, and increased confidence. Training was consistently identified as being required. Healthcare professionals predominantly focussed on women’s response to sensitive obesity communication. Study 3: There was good compliance with antenatal clinical interventions. However, there was poor compliance with public health and postnatal interventions. There were some strong areas of agreement between component studies which can inform future development of the pathways. However, disagreement between studies included a lack of shared priorities between healthcare professionals and women, different perspectives on communication issues, and different perspectives on women

  2. Solution of a Problem Linear Plane Elasticity with Mixed Boundary Conditions by the Method of Boundary Integrals

    Directory of Open Access Journals (Sweden)

    Nahed S. Hussein

    2014-01-01

    Full Text Available A numerical boundary integral scheme is proposed for the solution of the system of field equations of plane elasticity. The stresses are prescribed on one-half of the circle, while the displacements are given on the other half. The considered problem with mixed boundary conditions in the circle is replaced by two problems with homogeneous boundary conditions, one of each type, having a common solution. The equations are reduced to a system of boundary integral equations, which is then discretized in the usual way, and the problem at this stage is reduced to the solution of a rectangular linear system of algebraic equations. The unknowns in this system of equations are the boundary values of four harmonic functions which define the full elastic solution and the unknown boundary values of stresses or displacements on proper parts of the boundary. On the basis of the obtained results, it is inferred that a stress component has a singularity at each of the two separation points, thought to be of logarithmic type. The results are discussed and boundary plots are given. We have also calculated the unknown functions in the bulk directly from the given boundary conditions using the boundary collocation method. The obtained results in the bulk are discussed and three-dimensional plots are given. A tentative form for the singular solution is proposed and the corresponding singular stresses and displacements are plotted in the bulk. The form of the singular tangential stress is seen to be compatible with the boundary values obtained earlier. The efficiency of the used numerical schemes is discussed.

  3. 5-D interpolation with wave-front attributes

    Science.gov (United States)

    Xie, Yujiang; Gajewski, Dirk

    2017-11-01

    Most 5-D interpolation and regularization techniques reconstruct the missing data in the frequency domain by using mathematical transforms. An alternative type of interpolation methods uses wave-front attributes, that is, quantities with a specific physical meaning like the angle of emergence and wave-front curvatures. These attributes encode structural information on subsurface features like the dip and strike of a reflector. The wave-front attributes work on a 5-D data space (e.g. common-midpoint coordinates in x and y, offset, azimuth and time), leading to a 5-D interpolation technique. Since the process is based on stacking, a pre-stack data enhancement is achieved alongside the interpolation, improving the signal-to-noise ratio (S/N) of interpolated and recorded traces. The wave-front attributes are determined in a data-driven fashion, for example, with the Common Reflection Surface (CRS) method. As one of the wave-front-attribute-based interpolation techniques, the 3-D partial CRS method was proposed to enhance the quality of 3-D pre-stack data with low S/N. Two potential problems remained unsolved in past work on 3-D partial stacks. For high-quality wave-front attributes, we suggest a global optimization strategy instead of the pragmatic search approach used so far. In previous works, the interpolation of 3-D data was performed along a specific azimuth, which is acceptable for narrow-azimuth acquisition but does not exploit the potential of wide-, rich- or full-azimuth acquisitions. In this work the conventional 3-D partial CRS method is improved to address these two problems; we call the result wave-front-attribute-based 5-D interpolation (5-D WABI). Data examples demonstrate the improved performance of the 5-D WABI method when compared with the conventional 3-D partial CRS approach. A comparison of the rank-reduction-based 5-D seismic interpolation technique with the proposed 5-D WABI method is given.
The comparison reveals that

  4. Integrating Mixed Method Data in Psychological Research: Combining Q Methodology and Questionnaires in a Study Investigating Cultural and Psychological Influences on Adolescent Sexual Behavior

    Science.gov (United States)

    Franz, Anke; Worrell, Marcia; Vögele, Claus

    2013-01-01

    In recent years, combining quantitative and qualitative research methods in the same study has become increasingly acceptable in both applied and academic psychological research. However, a difficulty for many mixed methods researchers is how to integrate findings consistently. The value of using a coherent framework throughout the research…

  5. The Impact of Student Teaching Experience on Pre-Service Teachers' Readiness for Technology Integration: A Mixed Methods Study with Growth Curve Modeling

    Science.gov (United States)

    Sun, Yan; Strobel, Johannes; Newby, Timothy J.

    2017-01-01

    Adopting a two-phase explanatory sequential mixed methods research design, the current study examined the impact of student teaching experiences on pre-service teachers' readiness for technology integration. In phase-1 of quantitative investigation, 2-level growth curve models were fitted using online repeated measures survey data collected from…

  6. Undergraduate physiotherapy students' competencies, attitudes and perceptions after integrated educational pathways in evidence-based practice: a mixed methods study.

    Science.gov (United States)

    Bozzolan, M; Simoni, G; Balboni, M; Fiorini, F; Bombardi, S; Bertin, N; Da Roit, M

    2014-11-01

    This mixed methods study aimed to explore perceptions/attitudes, to evaluate knowledge/skills, and to investigate clinical behaviours of undergraduate physiotherapy students exposed to a composite education curriculum on evidence-based practice (EBP). Students' knowledge and skills were assessed before and after integrated learning activities, using the Adapted Fresno test, whereas their behaviour in EBP was evaluated by examining their internship documentation. Students' perceptions and attitudes were explored through four focus groups. Sixty-two students agreed to participate in the study. The within group mean differences (A-Fresno test) were 34.2 (95% CI 24.4 to 43.9) in the first year and 35.1 (95% CI 23.2 to 47.1) in the second year; no statistically significant change was observed in the third year. Seventy-six percent of the second year and 88% of the third year students reached the pass score. Internship documentation gave evidence of PICOs and database searches (95-100%), critical appraisal of internal validity (25-75%) but not of external validity (5-15%). The correct application of these items ranged from 30 to 100%. Qualitative analysis of the focus groups indicated students valued EBP, but perceived many barriers, with clinicians being both an obstacle and a model. Key elements for changing students' behaviours seem to be the internship environment and the possibility of continuous practice and feedback.

  7. Greenhouse gas emissions control in integrated municipal solid waste management through mixed integer bilevel decision-making

    Energy Technology Data Exchange (ETDEWEB)

    He, Li, E-mail: li.he@iseis.org [MOE Key Laboratory of Regional Energy Systems Optimization, S and C Academy of Energy and Environmental Research, North China Electric Power University, Beijing 102206 (China); Huang, G.H.; Lu, Hongwei [MOE Key Laboratory of Regional Energy Systems Optimization, S and C Academy of Energy and Environmental Research, North China Electric Power University, Beijing 102206 (China)

    2011-10-15

    Highlights: • We used bilevel analysis to treat two objectives at different levels. • The model can identify allocation schemes for waste flows. • The model can support waste timing, sizing, and siting for facility expansions. • The model can estimate minimized total management cost and GHG emissions. - Abstract: Recent studies indicated that municipal solid waste (MSW) is a major contributor to global warming due to extensive emissions of greenhouse gases (GHGs). However, most of them focused on investigating impacts of MSW on GHG emission amounts. This study presents two mixed integer bilevel decision-making models for integrated municipal solid waste management and GHG emissions control: MGU-MCL and MCU-MGL. The MGU-MCL model represents a top-down decision process, with the environmental sectors at the national level dominating the upper-level objective and the waste management sectors at the municipal level providing the lower-level objective. The MCU-MGL model implies a bottom-up decision process where the municipality plays the leading role. Results from the models indicate that: the top-down decisions would reduce metric tonne carbon emissions (MTCEs) by about 59% yet increase the total management cost by about 8%; the bottom-up decisions would reduce MTCE emissions by about 13% but increase the total management cost only very slightly; on-site monitoring and downscaled laboratory experiments are still required for reducing uncertainty in the GHG emission rate from the landfill facility.
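The top-down (leader-follower) structure can be illustrated with a deliberately tiny brute-force bilevel model; all facilities, costs and emission factors below are hypothetical toy numbers, not the paper's data.

```python
# Toy bilevel sketch: the leader (environmental sector) chooses which
# facility to expand to minimise GHG, anticipating that the follower
# (municipality) will then allocate waste at minimum cost.
cost = {"landfill": 1.0, "compost": 2.0}       # $/tonne (hypothetical)
ghg = {"landfill": 0.9, "compost": 0.2}        # tCO2e/tonne (hypothetical)
capacity_base = {"landfill": 50, "compost": 20}
waste = 60                                     # tonnes to allocate

def follower(capacity):
    """Lower level: minimise cost subject to capacities (brute force)."""
    best = None
    for to_lf in range(waste + 1):
        to_cp = waste - to_lf
        if to_lf <= capacity["landfill"] and to_cp <= capacity["compost"]:
            c = to_lf * cost["landfill"] + to_cp * cost["compost"]
            if best is None or c < best[0]:
                best = (c, {"landfill": to_lf, "compost": to_cp})
    return best

# Upper level: enumerate the leader's expansion decisions (+20 t each).
results = {}
for expand in ("landfill", "compost"):
    cap = dict(capacity_base)
    cap[expand] += 20
    c, alloc = follower(cap)
    results[expand] = (sum(alloc[f] * ghg[f] for f in alloc), c)  # (GHG, cost)
print(results)
```

Here expanding composting lowers emissions at a higher management cost, the same cost-versus-emissions trade-off the MGU-MCL results exhibit; the real models replace the enumeration with mixed-integer bilevel programming.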

  8. Patient Involvement in Geriatric Care – Results and Experiences from a Mixed Models Design Study within Project INTEGRATE

    Directory of Open Access Journals (Sweden)

    Joern Kiselev

    2018-02-01

    Full Text Available Introduction: Patient involvement is a core component of an integrated care approach. While the benefits and prerequisites of patient involvement have been described in general and additionally for some target populations, little is known about the views and experiences of older people regarding this matter. Methods: A study with a mixed-methods design was conducted to gain a better understanding of patient involvement in geriatric care. A questionnaire on shared decision-making was administered within a group of older adults in Germany. Additionally, 7 focus groups with health professionals and geriatric patients in Germany and Estonia were held to deepen the insights from the questionnaire and to discuss experiences and barriers of patient involvement. Results: Older people without an actual medical problem expressed a significantly higher desire to participate in shared decisions than those requiring actual medical care. No significant differences could be found for the desire to be informed as part of the care process. No correlation between patients’ desire and experiences of shared decision-making could be observed. In the focus groups, patients demanded a comprehensive and understandable information and education process while the health professionals’ view was very task-specific. This conflict led to a loss of trust by the patients. Conclusions: There is a gap between patients’ and health professionals’ views on patient involvement in older people. The involvement process should therefore be comprehensive and should take into account different levels of health literacy.

  9. Improved Coarray Interpolation Algorithms with Additional Orthogonal Constraint for Cyclostationary Signals

    Directory of Open Access Journals (Sweden)

    Jinyang Song

    2018-01-01

    Full Text Available Many modulated signals exhibit a cyclostationarity property, which can be exploited in direction-of-arrival (DOA) estimation to effectively eliminate interference and noise. In this paper, our aim is to integrate the cyclostationarity with the spatial domain and enable the algorithm to estimate more sources than sensors. However, DOA estimation with a sparse array is performed in the coarray domain, and the holes within the coarray limit the usage of the complete coarray information. In order to use the complete coarray information to increase the degrees-of-freedom (DOFs), sparsity-aware-based methods and the difference coarray interpolation methods have been proposed. In this paper, the coarray interpolation technique is further explored with cyclostationary signals. Besides the difference coarray model and its corresponding Toeplitz completion formulation, we build up a sum coarray model and formulate a Hankel completion problem. In order to further improve the performance of the structured matrix completion, we define the spatial spectrum sampling operations and the derivative (conjugate) correlation subspaces, which can be exploited to construct orthogonal constraints for the autocorrelation vectors in the coarray interpolation problem. Prior knowledge of the source interval can also be incorporated into the problem. Simulation results demonstrate that the additional constraints contribute to a remarkable performance improvement.
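The "holes" that motivate coarray interpolation are easy to exhibit numerically; the sensor positions below are an illustrative sparse array, not one from the paper.

```python
import numpy as np

def diff_coarray(sensors):
    """Difference coarray of a sparse array: the set {n_i - n_j}."""
    s = np.asarray(sensors)
    return np.unique((s[:, None] - s[None, :]).ravel())

# A sparse linear array (sensor positions in units of half-wavelength)
# whose difference coarray contains holes, which is what limits the usable
# contiguous segment and motivates the matrix-completion step.
sensors = [0, 1, 2, 7, 11]
d = diff_coarray(sensors)
full = range(int(d.min()), int(d.max()) + 1)
holes = sorted(set(full) - {int(v) for v in d})
print(holes)  # -> [-8, -3, 3, 8]
```

Toeplitz (difference coarray) or Hankel (sum coarray) completion then fills exactly these missing lags, recovering the full virtual aperture for DOA estimation.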

  10. Homography Propagation and Optimization for Wide-Baseline Street Image Interpolation.

    Science.gov (United States)

    Nie, Yongwei; Zhang, Zhensong; Sun, Hanqiu; Su, Tan; Li, Guiqing

    2017-10-01

    Wide-baseline street image interpolation is useful but very challenging. Existing approaches either rely on heavyweight 3D reconstruction or computationally intensive deep networks. We present a lightweight and efficient method which uses simple homography computing and refining operators to estimate piecewise smooth homographies between input views. To achieve the goal, we show how to combine homography fitting and homography propagation together based on reliable and unreliable superpixel discrimination. Such a combination, other than using homography fitting only, dramatically increases the accuracy and robustness of the estimated homographies. Then, we integrate the concepts of homography and mesh warping, and propose a novel homography-constrained warping formulation which enforces smoothness between neighboring homographies by utilizing the first-order continuity of the warped mesh. This further eliminates small artifacts of overlapping, stretching, etc. The proposed method is lightweight and flexible, allows wide-baseline interpolation. It improves the state of the art and demonstrates that homography computation suffices for interpolation. Experiments on city and rural datasets validate the efficiency and effectiveness of our method.
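The homography-fitting building block can be sketched with the standard direct linear transform (DLT); this generic least-squares estimator stands in for the paper's per-superpixel fitting, and the point correspondences are synthetic.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography H from point correspondences with the DLT,
    so that dst ~ H @ src in homogeneous coordinates (SVD least squares)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u*x, u*y, u])
        A.append([0, 0, 0, -x, -y, -1, v*x, v*y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)       # null-space vector = homography entries
    return H / H[2, 2]

def apply_h(H, pts):
    """Apply a homography to an array of 2-D points."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:]

# Recover a known homography from four point correspondences.
H_true = np.array([[1.1, 0.02, 5.0], [0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
dst = apply_h(H_true, src)
H_est = fit_homography(src, dst)
print(np.max(np.abs(apply_h(H_est, src) - dst)))  # near zero
```

In the paper's pipeline, such fits are accepted only on reliable superpixels and propagated to unreliable ones, then jointly smoothed through the homography-constrained mesh warp.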

  11. Integrated Sources of Polarization Entangled Photon Pair States via Spontaneous Four-Wave Mixing in AlGaAs Waveguides

    Science.gov (United States)

    Kultavewuti, Pisek

    Polarization-entangled photon pair states (PESs) are indispensable in several quantum protocols that should be implemented in an integrated photonic circuit for realizing a practical quantum technology. Preparing such states in integrated waveguides is in fact a challenge due to polarization mode dispersion. Unlike other conventional ways that are plagued with complications in fabrication or in state generation, in this thesis, the scheme based on parallel spontaneous four-wave mixing processes of two polarization waveguide modes is thoroughly studied in theory and experimentation for the polarization entanglement generation. The scheme in fact needs the modal dispersion, contradictory to the general perception, as revealed by a full quantum mechanical framework. The proper modal dispersion balances the effects of temporal walk-off and state factorizability. The study also shows that the popular standard platform such as a silicon-on-insulator wafer is far from suitable to implement the proposed simple generation technique. Proven by the quantum state tomography, the technique produces a highly-entangled state with a maximum concurrence of 0.97 ± 0.01 from AlGaAs waveguides. In addition, the devices directly generated Bell states with an observed fidelity of 0.92 ± 0.01 without any post-generation compensating steps. Novel suspended device structures, including their components, are then investigated numerically and experimentally characterized in pursuit of finding the geometry with the optimal dispersion property. The 700 nm × 1100 nm suspended rectangular waveguide is identified as the best geometry with a predicted maximum concurrence of 0.976 and a generation bandwidth of 3.3 THz. The suspended waveguide fabrication procedure adds about 15 dB/cm and 10 dB/cm of propagation loss to the TE and TM mode respectively, on top of the loss in corresponding full-cladding waveguides. 
Bridges, which structurally support the suspended waveguides, are optimized using

  12. Study on Scattered Data Points Interpolation Method Based on Multi-line Structured Light

    International Nuclear Information System (INIS)

    Fan, J Y; Wang, F G; W, Y; Zhang, Y L

    2006-01-01

    Aiming at the range image obtained through multi-line structured light, a regional interpolation method is put forward in this paper. This method divides interpolation into two parts according to the memory format of the scattered data: interpolation of the data on the stripes, and interpolation of the data between the stripes. A trend interpolation method is applied to the data on the stripes, and a Gauss wavelet interpolation method is applied to the data between the stripes. Experiments show the regional interpolation method to be feasible and practical, and that it also improves speed and precision.

  13. Topological Design for Acoustic-Structure Interaction Problems with a Mixed Finite Element Method

    DEFF Research Database (Denmark)

    Yoon, Gil Ho; Jensen, Jakob Søndergaard; Sigmund, Ole

    2006-01-01

    to subdomain interfaces evolving during the optimization process. In this paper, we propose to use a mixed finite element formulation with displacements and pressure as primary variables (u/p formulation) which eliminates the need for explicit boundary representation. In order to describe the Helmholtz...... equation and the linear elasticity equation, the mass density as well as the shear and bulk moduli are interpolated with the design variables. In this formulation, the coupled interface boundary conditions are automatically satisfied without having to compute surface coupling integrals. Two dimensional...

  14. A FAST MORPHING-BASED INTERPOLATION FOR MEDICAL IMAGES: APPLICATION TO CONFORMAL RADIOTHERAPY

    Directory of Open Access Journals (Sweden)

    Hussein Atoui

    2011-05-01

    Full Text Available A method is presented for fast interpolation between medical images. The method is intended for both slice and projective interpolation. It allows offline interpolation between neighboring slices in tomographic data. Spatial correspondence between adjacent images is established using a block matching algorithm. Interpolation of image intensities is then carried out by morphing between the images. The morphing-based method is compared to standard linear interpolation, block-matching-based interpolation and registration-based interpolation in 3D tomographic data sets. Results show that the proposed method achieved performance similar to registration-based interpolation, and significantly outperforms both linear and block-matching-based interpolation. This method is applied in the context of conformal radiotherapy for online projective interpolation between Digitally Reconstructed Radiographs (DRRs).

  15. A new stellar spectrum interpolation algorithm and its application to Yunnan-III evolutionary population synthesis models

    Science.gov (United States)

    Cheng, Liantao; Zhang, Fenghui; Kang, Xiaoyu; Wang, Lang

    2018-05-01

    In evolutionary population synthesis (EPS) models, we need to convert stellar evolutionary parameters into spectra via interpolation in a stellar spectral library. For theoretical stellar spectral libraries, the spectrum grid is homogeneous on the effective-temperature and gravity plane for a given metallicity. It is relatively easy to derive stellar spectra. For empirical stellar spectral libraries, stellar parameters are irregularly distributed and the interpolation algorithm is relatively complicated. In those EPS models that use empirical stellar spectral libraries, different algorithms are used and the codes are often not released. Moreover, these algorithms are often complicated. In this work, based on a radial basis function (RBF) network, we present a new spectrum interpolation algorithm and its code. Compared with the other interpolation algorithms that are used in EPS models, it can be easily understood and is highly efficient in terms of computation. The code is written in MATLAB scripts and can be used on any computer system. Using it, we can obtain the interpolated spectra from a library or a combination of libraries. We apply this algorithm to several stellar spectral libraries (such as MILES, ELODIE-3.1 and STELIB-3.2) and give the integrated spectral energy distributions (ISEDs) of stellar populations (with ages from 1 Myr to 14 Gyr) by combining them with Yunnan-III isochrones. Our results show that the differences caused by the adoption of different EPS model components are less than 0.2 dex. All data about the stellar population ISEDs in this work and the RBF spectrum interpolation code can be obtained by request from the first author or downloaded from http://www1.ynao.ac.cn/~zhangfh.

  16. Blend Shape Interpolation and FACS for Realistic Avatar

    Science.gov (United States)

    Alkawaz, Mohammed Hazim; Mohamad, Dzulkifli; Basori, Ahmad Hoirul; Saba, Tanzila

    2015-03-01

    The quest of developing realistic facial animation is ever-growing. The emergence of sophisticated algorithms, new graphical user interfaces, laser scans and advanced 3D tools has imparted further impetus towards the rapid advancement of complex virtual human facial models. Face-to-face communication being the most natural way of human interaction, facial animation systems have become more attractive in the information technology era for sundry applications. The production of computer-animated movies using synthetic actors is still a challenging issue. A facial expression carries the signature of happiness, sadness, anger, cheerfulness, etc. The mood of a particular person in the midst of a large group can immediately be identified via very subtle changes in facial expressions. Facial expressions, being a very complex as well as important nonverbal communication channel, are tricky to synthesize realistically using computer graphics. Computer synthesis of practical facial expressions must deal with the geometric representation of the human face and the control of the facial animation. We developed a new approach by integrating blend shape interpolation (BSI) and the facial action coding system (FACS) to create a realistic and expressive computer facial animation design. The BSI is used to generate the natural face while the FACS is employed to reflect the exact facial muscle movements for four basic natural emotional expressions, namely anger, happiness, sadness and fear, with high fidelity. The results in perceiving realistic facial expressions for virtual human emotions based on facial skin color and texture may contribute towards the development of virtual reality and game environments of computer-aided graphics animation systems.
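    A minimal sketch of the linear blend-shape interpolation idea behind BSI, using a toy 2D triangle mesh and invented target shapes (the record's actual meshes and weights are not given):

```python
import numpy as np

# Blend shape interpolation combines a neutral mesh with weighted offsets of
# target expression meshes: V = V_neutral + sum_i w_i * (V_target_i - V_neutral)
neutral = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])    # toy 2D "face" mesh
happy   = np.array([[0.0, 0.1], [1.0, 0.1], [0.5, 1.2]])    # invented target shape 1
sad     = np.array([[0.0, -0.1], [1.0, -0.1], [0.5, 0.8]])  # invented target shape 2

def blend(neutral, targets, weights):
    """Linear blend shape interpolation over a list of target meshes."""
    out = neutral.copy()
    for w, t in zip(weights, targets):
        out += w * (t - neutral)
    return out

# Weight 1.0 on "happy" reproduces the happy mesh exactly; fractional weights mix.
full_happy = blend(neutral, [happy, sad], [1.0, 0.0])
half_mix = blend(neutral, [happy, sad], [0.5, 0.5])
```

In a FACS-driven system the weights would be driven by action-unit activations rather than set by hand.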

  17. Sparse representation based image interpolation with nonlocal autoregressive modeling.

    Science.gov (United States)

    Dong, Weisheng; Zhang, Lei; Lukac, Rastislav; Shi, Guangming

    2013-04-01

    Sparse representation is proven to be a promising approach to image super-resolution, where the low-resolution (LR) image is usually modeled as the down-sampled version of its high-resolution (HR) counterpart after blurring. When the blurring kernel is the Dirac delta function, i.e., the LR image is directly down-sampled from its HR counterpart without blurring, the super-resolution problem becomes an image interpolation problem. In such cases, however, the conventional sparse representation models (SRM) become less effective, because the data fidelity term fails to constrain the image local structures. In natural images, fortunately, many nonlocal similar patches to a given patch could provide nonlocal constraint to the local structure. In this paper, we incorporate the image nonlocal self-similarity into SRM for image interpolation. More specifically, a nonlocal autoregressive model (NARM) is proposed and taken as the data fidelity term in SRM. We show that the NARM-induced sampling matrix is less coherent with the representation dictionary, and consequently makes SRM more effective for image interpolation. Our extensive experimental results demonstrate that the proposed NARM-based image interpolation method can effectively reconstruct the edge structures and suppress the jaggy/ringing artifacts, achieving the best image interpolation results so far in terms of PSNR as well as perceptual quality metrics such as SSIM and FSIM.

  18. Reducing Interpolation Artifacts for Mutual Information Based Image Registration

    Science.gov (United States)

    Soleimani, H.; Khosravifard, M.A.

    2011-01-01

    Medical image registration methods which use mutual information as a similarity measure have been improved in recent decades. Mutual information is a basic concept of information theory which indicates the dependency of two random variables (or two images). In order to evaluate the mutual information of two images, their joint probability distribution is required. Several interpolation methods, such as Partial Volume (PV) and bilinear, are used to estimate the joint probability distribution. Both methods yield some artifacts in the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function; it is instead due to the number of pixels that participate in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method which uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results on the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
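    The joint-histogram route to mutual information discussed above can be sketched as follows; the simple per-pixel binning here stands in for the PV/bilinear schemes, whose refinements concern how each sample is spread over histogram bins (images and bin count are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
img_a = rng.random((64, 64))
img_b = 0.8 * img_a + 0.2 * rng.random((64, 64))  # correlated "second image"

def mutual_information(a, b, bins=16):
    """MI estimated from a joint intensity histogram of two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                       # joint probability
    px = pxy.sum(axis=1, keepdims=True)             # marginals
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                    # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

mi_ab = mutual_information(img_a, img_b)            # high: images are dependent
mi_self = mutual_information(img_a, img_a)          # upper bound: H(A)
```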

  19. Integrated complex care coordination for children with medical complexity: A mixed-methods evaluation of tertiary care-community collaboration

    Directory of Open Access Journals (Sweden)

    Cohen Eyal

    2012-10-01

    Full Text Available Abstract Background Primary care medical homes may improve health outcomes for children with special healthcare needs (CSHCN) by improving care coordination. However, community-based primary care practices may be challenged to deliver comprehensive care coordination to complex subsets of CSHCN such as children with medical complexity (CMC). Linking a tertiary care center with the community may achieve cost-effective and high-quality care for CMC. The objective of this study was to evaluate the outcomes of community-based complex care clinics integrated with a tertiary care center. Methods A before- and after-intervention study design with mixed (quantitative/qualitative) methods was utilized. Clinics at two community hospitals distant from tertiary care were staffed by local community pediatricians with the tertiary care center nurse practitioner and linked with primary care providers. Eighty-one children with underlying chronic conditions, fragility, requirement for high-intensity care and/or technology assistance, and involvement of multiple providers participated. Main outcome measures included health care utilization and expenditures, parent reports of parent- and child-quality of life [QOL: SF-36®, CPCHILD©, PedsQL™], and family-centered care (MPOC-20®). Comparisons were made in equal (up to 1 year) pre- and post-periods, supplemented by qualitative perspectives of families and pediatricians. Results Total health care system costs decreased from a median (IQR) of $244 (981) per patient per month (PPPM) pre-enrolment to $131 (355) PPPM post-enrolment (p=.007), driven primarily by fewer inpatient days in the tertiary care center (p=.006). Parents reported decreased out-of-pocket expenses. Improvements were noted in two CPCHILD© domains [Health Standardization Section (p=.04); Comfort and Emotions (p=.03)], while the total CPCHILD© score decreased between baseline and 1 year (p=.003). Parents and providers reported the ability to receive care close to home as a key benefit. Conclusions Complex

  20. Interpolant tree automata and their application in Horn clause verification

    DEFF Research Database (Denmark)

    Kafle, Bishoksan; Gallagher, John Patrick

    2016-01-01

    This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been previously applied separately, but are combined in a new way in this paper... clause verification problems indicates that the combination of interpolant tree automata with abstract interpretation gives some increase in the power of the verification tool, while sometimes incurring a performance overhead.

  1. Interpolation of vector fields from human cardiac DT-MRI

    International Nuclear Information System (INIS)

    Yang, F; Zhu, Y M; Rapacchi, S; Robini, M; Croisille, P; Luo, J H

    2011-01-01

    There has recently been increased interest in developing tensor data processing methods for the new medical imaging modality referred to as diffusion tensor magnetic resonance imaging (DT-MRI). This paper proposes a method for interpolating the primary vector fields from human cardiac DT-MRI, with the particularity of achieving interpolation and denoising simultaneously. The method consists of localizing the noise-corrupted vectors using the local statistical properties of vector fields, removing the noise-corrupted vectors and reconstructing them by using the thin plate spline (TPS) model, and finally applying global TPS interpolation to increase the resolution in the spatial domain. Experiments on 17 human hearts show that the proposed method allows us to obtain higher resolution while reducing noise, preserving details and improving direction coherence (DC) of vector fields as well as fiber tracking. Moreover, the proposed method perfectly reconstructs azimuth and elevation angle maps.
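    A hedged sketch of thin plate spline interpolation of a scattered vector field, in the spirit of the TPS step described above, using SciPy's `RBFInterpolator` on an invented 2D toy field rather than cardiac DT-MRI data:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Toy stand-in for a primary-vector field: scattered 2D points, each carrying
# a 2D vector (the record interpolates 3D cardiac fields; the idea is the same).
rng = np.random.default_rng(1)
pts = rng.random((100, 2))
vec = np.stack([np.cos(2 * np.pi * pts[:, 0]),
                np.sin(2 * np.pi * pts[:, 1])], axis=1)

# Thin plate spline interpolation of the vector field, then evaluation on a
# denser regular grid to increase spatial resolution.
tps = RBFInterpolator(pts, vec, kernel='thin_plate_spline')
gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
dense = tps(np.column_stack([gx.ravel(), gy.ravel()]))

# TPS (without smoothing) is exact: it reproduces the data at the sample points.
recon = tps(pts)
```

The record's denoising step would first remove outlier vectors before fitting; here the fit is done on the raw samples.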

  2. Inoculating against eyewitness suggestibility via interpolated verbatim vs. gist testing.

    Science.gov (United States)

    Pansky, Ainat; Tenenboim, Einat

    2011-01-01

    In real-life situations, eyewitnesses often have control over the level of generality in which they choose to report event information. In the present study, we adopted an early-intervention approach to investigate to what extent eyewitness memory may be inoculated against suggestibility, following two different levels of interpolated reporting: verbatim and gist. After viewing a target event, participants responded to interpolated questions that required reporting of target details at either the verbatim or the gist level. After 48 hr, both groups of participants were misled about half of the target details and were finally tested for verbatim memory of all the details. The findings were consistent with our predictions: Whereas verbatim testing was successful in completely inoculating against suggestibility, gist testing did not reduce it whatsoever. These findings are particularly interesting in light of the comparable testing effects found for these two modes of interpolated testing.

  3. Interpolation-free scanning and sampling scheme for tomographic reconstructions

    International Nuclear Information System (INIS)

    Donohue, K.D.; Saniie, J.

    1987-01-01

    In this paper a sampling scheme is developed for computer tomography (CT) systems that eliminates the need for interpolation. A set of projection angles along with their corresponding sampling rates are derived from the geometry of the Cartesian grid such that no interpolation is required to calculate the final image points for the display grid. A discussion is presented on the choice of an optimal set of projection angles that will maintain a resolution comparable to a sampling scheme of regular measurement geometry, while minimizing the computational load. The interpolation-free scanning and sampling (IFSS) scheme developed here is compared to a typical sampling scheme of regular measurement geometry through a computer simulation

  4. Image interpolation used in three-dimensional range data compression.

    Science.gov (United States)

    Zhang, Shaoze; Zhang, Jianqi; Huang, Xi; Liu, Delian

    2016-05-20

    Advances in the field of three-dimensional (3D) scanning have made the acquisition of 3D range data easier and easier. However, with the large size of 3D range data comes the challenge of storing and transmitting it. To address this challenge, this paper presents a framework to further compress 3D range data using image interpolation. We first use a virtual fringe-projection system to store 3D range data as images, and then apply the interpolation algorithm to the images to reduce their resolution to further reduce the data size. When the 3D range data are needed, the low-resolution image is scaled up to its original resolution by applying the interpolation algorithm, and then the scaled-up image is decoded and the 3D range data are recovered according to the decoded result. Experimental results show that the proposed method could further reduce the data size while maintaining a low rate of error.
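    The store-small, interpolate-up-on-demand idea can be sketched with a generic cubic spline resampler; the 2x factor, the toy depth map, and `scipy.ndimage.zoom` are illustrative stand-ins for the paper's actual encoding pipeline:

```python
import numpy as np
from scipy import ndimage

# Smooth toy "range image" standing in for encoded 3D range data.
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
depth = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

small = ndimage.zoom(depth, 0.5, order=3)     # downsample to shrink the data
restored = ndimage.zoom(small, 2.0, order=3)  # scale back up when data are needed

err = np.abs(restored - depth).mean()         # interpolation round-trip error
```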

  5. Importance of interpolation and coincidence errors in data fusion

    Science.gov (United States)

    Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana

    2018-02-01

    The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on number of degrees of freedom and errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.

  6. Hermite interpolant multiscaling functions for numerical solution of the convection diffusion equations

    Directory of Open Access Journals (Sweden)

    Elmira Ashpazzadeh

    2018-04-01

    Full Text Available A numerical technique based on the Hermite interpolant multiscaling functions is presented for the solution of convection-diffusion equations. The operational matrices of derivative, integration and product are presented for multiscaling functions and are utilized to reduce the solution of the linear convection-diffusion equation to the solution of algebraic equations. Because of the sparsity of these matrices, this method is computationally very attractive and reduces the CPU time and computer memory. Illustrative examples are included to demonstrate the validity and applicability of the new technique.

  7. Impact of rain gauge quality control and interpolation on streamflow simulation: an application to the Warwick catchment, Australia

    Science.gov (United States)

    Liu, Shulun; Li, Yuan; Pauwels, Valentijn R. N.; Walker, Jeffrey P.

    2017-12-01

    Rain gauges are widely used to obtain temporally continuous point rainfall records, which are then interpolated into spatially continuous data to force hydrological models. However, rainfall measurements and the interpolation procedure are subject to various uncertainties, which can be reduced by applying quality control and selecting appropriate spatial interpolation approaches. Consequently, the integrated impact of rainfall quality control and interpolation on streamflow simulation has attracted increased attention but has not been fully addressed. This study applies a quality control procedure to the hourly rainfall measurements obtained in the Warwick catchment in eastern Australia. The grid-based daily precipitation from the Australian Water Availability Project was used as a reference. The Pearson correlation coefficient between the daily accumulation of gauged rainfall and the reference data was used to eliminate gauges with significant quality issues. The unrealistic outliers were censored based on a comparison between gauged rainfall and the reference. Four interpolation methods, including the inverse distance weighting (IDW), nearest neighbors (NN), linear spline (LN), and ordinary Kriging (OK), were implemented. The four methods were first assessed through a cross-validation using the quality-controlled rainfall data. The impacts of the quality control and interpolation on streamflow simulation were then evaluated through a semi-distributed hydrological model. The results showed that the Nash–Sutcliffe model efficiency coefficient (NSE) and Bias of the streamflow simulations were significantly improved after quality control. In the cross-validation, the IDW and OK methods produced good interpolated rainfall fields, while the NN led to the worst result. In terms of the impact on hydrological prediction, the IDW led to the streamflow predictions most consistent with the observations, according to the validation at five streamflow-gauged locations. The OK method
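    Of the four interpolators compared above, IDW is the simplest to state; a self-contained sketch with invented gauge coordinates and rainfall values:

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0):
    """Inverse distance weighting: each query value is a weighted average of
    the observations, with weights 1/d**power (exact at gauge locations)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
    out = np.empty(len(xy_query))
    for i, di in enumerate(d):
        if di.min() == 0:                    # query coincides with a gauge
            out[i] = z_obs[di.argmin()]
        else:
            w = 1.0 / di ** power
            out[i] = (w * z_obs).sum() / w.sum()
    return out

# Toy "rain gauges" (coordinates and rainfall amounts are made up)
gauges = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rain = np.array([2.0, 4.0, 6.0, 8.0])
est = idw(gauges, rain, np.array([[0.5, 0.5], [0.0, 0.0]]))
```

At the centre point all four gauges are equidistant, so the estimate is their plain average; at a gauge location IDW returns the gauge value itself.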

  8. An adaptive interpolation scheme for molecular potential energy surfaces

    Science.gov (United States)

    Kowalewski, Markus; Larsson, Elisabeth; Heryudono, Alfa

    2016-08-01

    The calculation of potential energy surfaces for quantum dynamics can be a time consuming task, especially when a high level of theory for the electronic structure calculation is required. We propose an adaptive interpolation algorithm based on polyharmonic splines combined with a partition of unity approach. The adaptive node refinement allows the number of sample points to be greatly reduced by employing a local error estimate. The algorithm and its scaling behavior are evaluated for a model function in 2, 3, and 4 dimensions. The developed algorithm allows for a more rapid and reliable interpolation of a potential energy surface within a given accuracy compared to the non-adaptive version.

  9. Estimating monthly temperature using point based interpolation techniques

    Science.gov (United States)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point based interpolation to estimate the value of temperature at unallocated meteorology stations in Peninsular Malaysia, using data of year 2010 collected from the Malaysian Meteorology Department. Two point based interpolation methods, Inverse Distance Weighted (IDW) and Radial Basis Function (RBF), are considered. The accuracy of the methods is evaluated using the Root Mean Square Error (RMSE). The results show that RBF with the thin plate spline model is suitable to be used as a temperature estimator for the months of January and December, while RBF with the multiquadric model is suitable to estimate the temperature for the rest of the months.
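    A sketch of RBF interpolation with the multiquadric kernel mentioned above, reduced to 1D with invented station positions and temperatures; the real study works with 2D station coordinates:

```python
import numpy as np

def rbf_fit_predict(x_obs, y_obs, x_new, eps=1.0):
    """Multiquadric RBF interpolation: solve for weights so the interpolant
    passes exactly through the observations, then evaluate at new points."""
    phi = lambda r: np.sqrt(r ** 2 + eps ** 2)           # multiquadric kernel
    A = phi(np.abs(x_obs[:, None] - x_obs[None, :]))     # interpolation matrix
    w = np.linalg.solve(A, y_obs)                        # RBF weights
    B = phi(np.abs(x_new[:, None] - x_obs[None, :]))
    return B @ w

# Toy monthly-mean temperatures at 1D "station positions" (values invented)
stations = np.array([0.0, 1.0, 2.5, 4.0])
temp = np.array([26.0, 27.5, 28.0, 26.5])
recon = rbf_fit_predict(stations, temp, stations)        # exact at stations
mid = rbf_fit_predict(stations, temp, np.array([1.75]))  # between stations
```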

  10. Multi-dimensional cubic interpolation for ICF hydrodynamics simulation

    International Nuclear Information System (INIS)

    Aoki, Takayuki; Yabe, Takashi.

    1991-04-01

    A new interpolation method is proposed to solve the multi-dimensional hyperbolic equations which appear in describing the hydrodynamics of inertial confinement fusion (ICF) implosion. The advection phase of the cubic-interpolated pseudo-particle (CIP) scheme is greatly improved by assuming the continuity of the second and third spatial derivatives in addition to the physical value and the first derivative. These derivatives are derived from the given physical equation. In order to evaluate the new method, Zalesak's example is tested, and good results are successfully obtained. (author)
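    The baseline CIP profile rests on two-point cubic Hermite interpolation, which matches the value and first derivative at both cell edges; a sketch of that building block on a single interval, with sin(x) as the test function:

```python
import numpy as np

def hermite(x0, x1, f0, f1, d0, d1, x):
    """Cubic Hermite interpolation on [x0, x1] from endpoint values (f0, f1)
    and first derivatives (d0, d1) -- the in-cell profile CIP advects."""
    h = x1 - x0
    t = (x - x0) / h
    h00 = 2 * t**3 - 3 * t**2 + 1          # Hermite basis functions
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * f0 + h10 * h * d0 + h01 * f1 + h11 * h * d1

# Interpolate sin(x) on [0, pi/2] from endpoint values and derivatives only.
xs = np.linspace(0.0, np.pi / 2, 50)
approx = hermite(0.0, np.pi / 2, 0.0, 1.0, 1.0, 0.0, xs)
max_err = np.abs(approx - np.sin(xs)).max()
```

The classical error bound h^4/384 · max|f''''| gives about 0.016 for this interval, and the record's extension adds continuity of second and third derivatives on top of this profile.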

  11. Oversampling of digitized images. [effects on interpolation in signal processing

    Science.gov (United States)

    Fischel, D.

    1976-01-01

    Oversampling is defined as sampling with a device whose characteristic width is greater than the interval between samples. This paper shows why oversampling should be avoided and discusses the limitations in data processing if circumstances dictate that oversampling cannot be circumvented. Principally, oversampling should not be used to provide interpolating data points. Rather, the time spent oversampling should be used to obtain more signal with less relative error, and the Sampling Theorem should be employed to provide any desired interpolated values. The concepts are applicable to single-element and multielement detectors.
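    The Sampling Theorem route the record recommends is Whittaker-Shannon (sinc) reconstruction; a sketch on an invented band-limited tone, evaluated away from the record edges where the truncated sinc sum is accurate:

```python
import numpy as np

def sinc_interp(samples, t_samples, t_new):
    """Whittaker-Shannon reconstruction of uniformly sampled data:
    sum of samples weighted by sinc kernels centred on the sample times."""
    T = t_samples[1] - t_samples[0]
    return np.array([np.sum(samples * np.sinc((t - t_samples) / T))
                     for t in t_new])

# A band-limited toy signal sampled well above its Nyquist rate
t = np.arange(0, 1, 1 / 64)              # 64 Hz sampling
sig = np.sin(2 * np.pi * 5 * t)          # 5 Hz tone
t_fine = np.arange(0.2, 0.8, 1 / 512)    # interpolate away from record edges
recon = sinc_interp(sig, t, t_fine)
err = np.abs(recon - np.sin(2 * np.pi * 5 * t_fine)).max()
```

The residual error here comes only from truncating the (in principle infinite) sinc sum at the record boundaries.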

  12. Scientific data interpolation with low dimensional manifold model

    Science.gov (United States)

    Zhu, Wei; Wang, Bao; Barnard, Richard; Hauck, Cory D.; Jenko, Frank; Osher, Stanley

    2018-01-01

    We propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace-Beltrami operator in the Euler-Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  13. Implementing fuzzy polynomial interpolation (FPI) and fuzzy linear regression (LFR)

    Directory of Open Access Journals (Sweden)

    Maria Cristina Floreno

    1996-05-01

    Full Text Available This paper presents some preliminary results arising within a general framework concerning the development of software tools for fuzzy arithmetic. The program is in a preliminary stage. What has already been implemented consists of a set of routines for elementary operations, optimized function evaluation, interpolation and regression. Some of these have been applied to real problems. This paper describes a prototype of a library in C++ for polynomial interpolation of fuzzifying functions, a set of routines in FORTRAN for fuzzy linear regression and a program with a graphical user interface allowing the use of such routines.

  14. Scientific data interpolation with low dimensional manifold model

    International Nuclear Information System (INIS)

    Zhu, Wei; Wang, Bao; Barnard, Richard C.; Hauck, Cory D.

    2017-01-01

    Here, we propose to apply a low dimensional manifold model to scientific data interpolation from regular and irregular samplings with a significant amount of missing information. The low dimensionality of the patch manifold for general scientific data sets has been used as a regularizer in a variational formulation. The problem is solved via alternating minimization with respect to the manifold and the data set, and the Laplace–Beltrami operator in the Euler–Lagrange equation is discretized using the weighted graph Laplacian. Various scientific data sets from different fields of study are used to illustrate the performance of the proposed algorithm on data compression and interpolation from both regular and irregular samplings.

  15. A Digital Mixed Methods Research Design: Integrating Multimodal Analysis with Data Mining and Information Visualization for Big Data Analytics

    Science.gov (United States)

    O'Halloran, Kay L.; Tan, Sabine; Pham, Duc-Son; Bateman, John; Vande Moere, Andrew

    2018-01-01

    This article demonstrates how a digital environment offers new opportunities for transforming qualitative data into quantitative data in order to use data mining and information visualization for mixed methods research. The digital approach to mixed methods research is illustrated by a framework which combines qualitative methods of multimodal…

  16. The COBAIN (COntact Binary Atmospheres with INterpolation) Code for Radiative Transfer

    Science.gov (United States)

    Kochoska, Angela; Prša, Andrej; Horvat, Martin

    2018-01-01

    Standard binary star modeling codes make use of pre-existing solutions of the radiative transfer equation in stellar atmospheres. The various model atmospheres available today are consistently computed for single stars, under different assumptions - plane-parallel or spherical atmosphere approximation, local thermodynamical equilibrium (LTE) or non-LTE (NLTE), etc. However, they are nonetheless being applied to contact binary atmospheres by populating the surface corresponding to each component separately and neglecting any mixing that would typically occur at the contact boundary. In addition, single stellar atmosphere models do not take into account irradiance from a companion star, which can pose a serious problem when modeling close binaries. 1D atmosphere models are also solved under the assumption of an atmosphere in hydrodynamical equilibrium, which is not necessarily the case for contact atmospheres, as the potentially different densities and temperatures can give rise to flows that play a key role in the heat and radiation transfer. To resolve the issue of erroneous modeling of contact binary atmospheres using single star atmosphere tables, we have developed a generalized radiative transfer code for computation of the normal emergent intensity of a stellar surface, given its geometry and internal structure. The code uses a regular mesh of equipotential surfaces in a discrete set of spherical coordinates, which are then used to interpolate the values of the structural quantities (density, temperature, opacity) at any given point inside the mesh. The radiative transfer equation is numerically integrated in a set of directions spanning the unit sphere around each point and iterated until the intensity values for all directions and all mesh points converge within a given tolerance. 
We have found that this approach, albeit computationally expensive, is the only one that can reproduce the intensity distribution of the non-symmetric contact binary atmosphere and

  17. Image interpolation allows accurate quantitative bone morphometry in registered micro-computed tomography scans.

    Science.gov (United States)

    Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph

    2014-04-01

    Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation, which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to quantify the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent of the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.
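    The effect of interpolator order during registration resampling can be illustrated with `scipy.ndimage.shift`; the image, sub-pixel shift, and error metric below are invented for illustration, not from the study:

```python
import numpy as np
from scipy import ndimage

# Analytic, periodic toy image so the ground truth at shifted positions is known.
y, x = np.mgrid[0:64, 0:64]
f = lambda yy, xx: np.sin(2 * np.pi * xx / 32) * np.cos(2 * np.pi * yy / 32)
img = f(y, x)

dy, dx = 0.3, 0.3
truth = f(y + dy, x + dx)                 # image sampled at sub-pixel shifted positions

def shift_err(order):
    """Resample img at the shifted positions with the given spline order
    (order=0 nearest neighbour, order=3 cubic B-spline) and compare to truth."""
    est = ndimage.shift(img, (-dy, -dx), order=order, mode='grid-wrap')
    return np.abs(est - truth)[4:-4, 4:-4].mean()   # ignore the border

err_nn = shift_err(0)
err_bsp = shift_err(3)
```

For this smooth image the cubic B-spline resampling is orders of magnitude closer to the analytic truth than nearest-neighbour, consistent with BSP showing the lowest interpolation error in the record.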

  18. Study on the relationship between stress intensity factor and J integral for mixed mode crack with arbitrary inclination based on SBFEM

    International Nuclear Information System (INIS)

    Zhu, C L; Li, J B; Lin, G; Zhong, H

    2010-01-01

    The J integral and the stress intensity factor (SIF) K are both important research objects of fracture mechanics, and are often employed to establish criteria for crack propagation. The relationship between them has always been a research hotspot. In this paper, the SIF can be obtained conveniently by the scaled boundary finite element method (SBFEM), owing to the fact that an analytical solution can be obtained along the radial direction for stress singularity problems. The J integral can then be solved analytically using the formulae between J and K for a mixed mode crack with arbitrary inclination in elastic materials. Moreover, the J integral values obtained by this method are more accurate and convenient than those computed directly from its definition. Factors that affect the accuracy of the SIF and J integral, such as the distance between the crack and the outer boundary, the size of the discretized elements and the partition of the domain into super-elements, are examined.
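    For a plane-strain mixed mode I/II crack in an elastic material, the J-K formulae referred to above reduce to J = (K_I^2 + K_II^2)/E' with E' = E/(1 - nu^2); a worked example with generic steel-like values (not taken from the paper):

```python
# Plane-strain relation between mixed-mode stress intensity factors and the
# J integral. The material constants and SIF values below are illustrative.
E = 210e9        # Young's modulus, Pa
nu = 0.3         # Poisson's ratio
K_I = 30e6       # mode-I SIF, Pa*sqrt(m)
K_II = 10e6      # mode-II SIF, Pa*sqrt(m)

E_prime = E / (1 - nu ** 2)            # plane strain (use E' = E for plane stress)
J = (K_I ** 2 + K_II ** 2) / E_prime   # energy release rate, J/m^2
```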

  19. A Grid Synchronization PLL Method Based on Mixed Second- and Third-Order Generalized Integrator for DC-Offset Elimination and Frequency Adaptability

    DEFF Research Database (Denmark)

    Zhang, Chunjiang; Zhao, Xiaojun; Wang, Xiaohuan

    2018-01-01

    The second order generalized integrator (SOGI) has been widely used to implement grid synchronization for grid-connected inverters, and from grid voltages it is able to extract the fundamental components with an output of two orthogonal sinusoidal signals. However, if there is a dc offset existing in the grid voltages, the general SOGI's performance suffers from its generated dc effect in the lagging sine signal at the output. Therefore, in this paper, a mixed second- and third-order generalized integrator (MSTOGI) is proposed to eliminate this effect caused by the dc offset of grid voltages...
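    A minimal discrete SOGI sketch (forward Euler; the gain k and the dc offset value are illustrative) showing the problem the MSTOGI addresses: the quadrature output passes the input dc offset with gain k, while the in-phase output rejects it:

```python
import numpy as np

# SOGI quadrature signal generator state equations:
#   dv'/dt  = w * (k*(v - v') - qv')
#   dqv'/dt = w * v'
f, fs, k = 50.0, 10000.0, 1.414
w, dt = 2 * np.pi * f, 1 / fs
t = np.arange(0, 0.4, dt)
v_in = np.sin(w * t) + 0.1          # grid voltage with a 0.1 pu dc offset

vp = qv = 0.0
vp_log, qv_log = [], []
for v in v_in:                      # forward-Euler integration
    dvp = w * (k * (v - vp) - qv)
    dqv = w * vp
    vp += dvp * dt
    qv += dqv * dt
    vp_log.append(vp)
    qv_log.append(qv)

# Average over the last two fundamental cycles (steady state): the in-phase
# output v' has ~zero dc, the quadrature output qv' carries dc gain ~k.
n = int(2 * fs / f)
dc_vp = np.mean(vp_log[-n:])
dc_qv = np.mean(qv_log[-n:])
```

At dc the SOGI transfer functions give D(0) = 0 for v' but Q(0) = k for qv', which is exactly the lagging-signal dc effect described in the record.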

  20. INFLUENCE OF RIVER BED ELEVATION SURVEY CONFIGURATIONS AND INTERPOLATION METHODS ON THE ACCURACY OF LIDAR DTM-BASED RIVER FLOW SIMULATIONS

    Directory of Open Access Journals (Sweden)

    J. R. Santillan

    2016-09-01

    Full Text Available In this paper, we investigated how survey configuration and the type of interpolation method affect the accuracy of river flow simulations that use a LIDAR DTM integrated with an interpolated river bed as the main source of topographic information. Aside from determining the accuracy of the individually-generated river bed topographies, we also assessed the overall accuracy of the river flow simulations in terms of maximum flood depth and extent. Four survey configurations consisting of river bed elevation data points arranged as cross-section (XS), zig-zag (ZZ), river banks-centerline (RBCL), and river banks-centerline-zig-zag (RBCLZZ), and two interpolation methods (Inverse Distance-Weighted and Ordinary Kriging) were considered. Major results show that the choice of survey configuration, rather than the interpolation method, has a significant effect on the accuracy of interpolated river bed surfaces, and subsequently on the accuracy of river flow simulations. The RMSEs of the interpolated surfaces and the model results vary from one configuration to another, and depend on how evenly each configuration collects river bed elevation data points. The large RMSEs for the RBCL configuration and the low RMSEs for the XS configuration confirm that as the data points become evenly spaced and cover more portions of the river, the resulting interpolated surface and the river flow simulation where it was used also become more accurate. The XS configuration with Ordinary Kriging (OK) as the interpolation method provided the best river bed interpolation and river flow simulation results. The RBCL configuration, regardless of the interpolation algorithm used, resulted in the least accurate river bed surfaces and simulation results. Based on the accuracy analysis, collecting river bed data points in the XS configuration and applying the OK method to interpolate the river bed topography are the best choices for producing satisfactory river flow simulation outputs.

  1. Mixed Waste Focus Area Working Group: An Integrated Approach to Mercury Waste Treatment and Disposal. Revision 1

    International Nuclear Information System (INIS)

    Morris, M.I.; Conley, T.B.; Osborne-Lee, I.W.

    1997-01-01

    In May 1996, the U.S. Department of Energy (DOE) Mixed Waste Focus Area (MWFA) initiated the Mercury Work Group (HgWG). The HgWG was established to address and resolve the issues associated with mercury-contaminated mixed wastes (MWs). During the initial technical baseline development process of the MWFA, three of the top four technology deficiencies identified were related to (1) amalgamation, (2) stabilization, and (3) separation and removal for the treatment of mercury and mercury-contaminated mixed waste (MW). The HgWG is assisting the MWFA in soliciting, identifying, initiating, and managing efforts to address these needs

  2. Levels of reconstruction as complementarity in mixed methods research: a social theory-based conceptual framework for integrating qualitative and quantitative research.

    Science.gov (United States)

    Carroll, Linda J; Rothe, J Peter

    2010-09-01

    Like other areas of health research, there has been increasing use of qualitative methods to study public health problems such as injuries and injury prevention. Likewise, the integration of qualitative and quantitative research (mixed-methods) is beginning to assume a more prominent role in public health studies. Using mixed-methods has great potential for gaining a broad and comprehensive understanding of injuries and their prevention. However, qualitative and quantitative research methods are based on two inherently different paradigms, and their integration requires a conceptual framework that permits the unity of these two methods. We present a theory-driven framework for viewing qualitative and quantitative research, which enables us to integrate them in a conceptually sound and useful manner. This framework has its foundation within the philosophical concept of complementarity, as espoused in the physical and social sciences, and draws on Bergson's metaphysical work on the 'ways of knowing'. Through understanding how data are constructed and reconstructed, and the different levels of meaning that can be ascribed to qualitative and quantitative findings, we can use a mixed-methods approach to gain a conceptually sound, holistic knowledge about injury phenomena that will enhance our development of relevant and successful interventions.

  3. Biased motion vector interpolation for reduced video artifacts.

    NARCIS (Netherlands)

    2011-01-01

    In a video processing system where motion vectors are estimated for a subset of the blocks of data forming a video frame, and motion vectors are interpolated for the remainder of the blocks of the frame, a method includes determining, for at least one block of the current frame for which a

  4. A Note on Interpolation of Stable Processes | Nassiuma | Journal of ...

    African Journals Online (AJOL)

    Interpolation procedures tailored for gaussian processes may not be applied to infinite variance stable processes. Alternative techniques suitable for a limited set of stable case with index α∈(1,2] were initially studied by Pourahmadi (1984) for harmonizable processes. This was later extended to the ARMA stable process ...

  5. Analysis of Spatial Interpolation in the Material-Point Method

    DEFF Research Database (Denmark)

    Andersen, Søren; Andersen, Lars

    2010-01-01

    are obtained using quadratic elements. It is shown that for more complex problems, the use of partially negative shape functions is inconsistent with the material-point method in its current form, necessitating other types of interpolation such as cubic splines in order to obtain smoother representations...

  6. Hybrid vehicle optimal control : Linear interpolation and singular control

    NARCIS (Netherlands)

    Delprat, S.; Hofman, T.

    2015-01-01

    Hybrid vehicle energy management can be formulated as an optimal control problem. Considering that the fuel consumption is often computed using linear interpolation over lookup table data, a rigorous analysis of the necessary conditions provided by the Pontryagin Minimum Principle is conducted. For

  7. Fast interpolation for Global Positioning System (GPS) satellite orbits

    OpenAIRE

    Clynch, James R.; Sagovac, Christopher Patrick; Danielson, D. A. (Donald A.); Neta, Beny

    1995-01-01

    In this report, we discuss and compare several methods for polynomial interpolation of Global Positioning System ephemeris data. We show that the use of difference tables is more efficient than the method currently in use to construct and evaluate the Lagrange polynomials.
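
    The efficiency claim can be illustrated with a small sketch (not the report's code): a Newton divided-difference table is built once per data window, after which each evaluation is a cheap Horner-like recurrence, whereas naive Lagrange evaluation rebuilds every basis polynomial per query point.

```python
def divided_differences(xs, ys):
    """Build the Newton divided-difference coefficients in place."""
    n = len(xs)
    coef = list(ys)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef  # coef[k] is the divided difference f[x0, ..., xk]

def newton_eval(xs, coef, x):
    """Evaluate the Newton-form interpolant at x (Horner-like nesting)."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

# Interpolate y = x**2 from three samples; a quadratic is reproduced exactly.
xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 4.0]
coef = divided_differences(xs, ys)
print(newton_eval(xs, coef, 1.5))  # → 2.25
```

    For an ephemeris application one would slide this window along the tabulated satellite positions; the table build is reused for every epoch queried inside the window.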

  8. Interpolation in computing science : the semantics of modularization

    NARCIS (Netherlands)

    Renardel de Lavalette, Gerard R.

    2008-01-01

    The Interpolation Theorem, first formulated and proved by W. Craig fifty years ago for predicate logic, has been extended to many other logical frameworks and is being applied in several areas of computer science. We give a short overview, and focus on the theory of software systems and modules. An

  9. Parallel optimization of IDW interpolation algorithm on multicore platform

    Science.gov (United States)

    Guan, Xuefeng; Wu, Huayi

    2009-10-01

    Due to increasing power consumption, heat dissipation, and other physical issues, the architecture of the central processing unit (CPU) has been turning to multicore rapidly in recent years. A multicore processor packages multiple processor cores in the same chip, which not only offers increased performance, but also presents significant challenges to application developers. In fact, most current GIS algorithms were implemented serially and cannot exploit the parallelism potential of such multicore platforms. In this paper, we choose the Inverse Distance Weighted spatial interpolation algorithm (IDW) as an example to study how to optimize serial GIS algorithms on a multicore platform in order to maximize performance speedup. With the help of OpenMP, threading is introduced to split and share the whole interpolation work among processor cores. After parallel optimization, the execution time of the interpolation algorithm is greatly reduced and good performance speedup is achieved. For example, the performance speedup on an Intel Xeon 5310 is 1.943 with 2 execution threads and 3.695 with 4 execution threads, respectively. A comparison of pre-optimization and post-optimization output shows that parallel optimization does not affect the final interpolation result.
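
    The IDW kernel being parallelized can be sketched as follows (a minimal serial version, not the paper's implementation; the paper splits the outer loop over grid points across OpenMP threads, which is safe because each output cell is computed independently):

```python
def idw(points, values, query, power=2.0):
    """Inverse Distance Weighted estimate at one query location.
    points: list of (x, y) sample coordinates; values: sample values."""
    num = den = 0.0
    for (px, py), v in zip(points, values):
        d2 = (px - query[0]) ** 2 + (py - query[1]) ** 2
        if d2 == 0.0:
            return v  # query coincides with a sample point
        w = d2 ** (-power / 2.0)  # weight = 1 / distance**power
        num += w * v
        den += w
    return num / den

# Four corner samples; the centre estimate is their mean by symmetry.
pts = [(0, 0), (1, 0), (0, 1), (1, 1)]
vals = [1.0, 2.0, 3.0, 4.0]
print(idw(pts, vals, (0.5, 0.5)))  # → 2.5
```

    Because each call touches only read-only inputs, the loop over all grid cells is embarrassingly parallel, which is exactly the structure OpenMP's worksharing exploits.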

  10. LIP: The Livermore Interpolation Package, Version 1.6

    Energy Technology Data Exchange (ETDEWEB)

    Fritsch, F. N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-04

    This report describes LIP, the Livermore Interpolation Package. LIP was totally rewritten from the package described in [1]. In particular, the independent variables are now referred to as x and y, since it is a general-purpose package that need not be restricted to equation of state data, which uses variables ρ (density) and T (temperature).

  11. Interpolation decoding method with variable parameters for fractal image compression

    International Nuclear Information System (INIS)

    He Chuanjiang; Li Gaoping; Shen Xiaona

    2007-01-01

    The interpolation fractal decoding method, which is introduced by [He C, Yang SX, Huang X. Progressive decoding method for fractal image compression. IEE Proc Vis Image Signal Process 2004;3:207-13], involves generating the decoded image progressively by means of an interpolation iterative procedure with a constant parameter. It is well known that the majority of image details are added in the first steps of iteration in conventional fractal decoding; hence the constant parameter for the interpolation decoding method must be set to a smaller value in order to achieve better progressive decoding. However, it then requires an extremely large number of iterations to converge. It is thus reasonable for some applications to slow down the iterative process in the first stages of decoding and then to accelerate it afterwards (e.g., at some iteration as we need). To achieve this goal, this paper proposes an interpolation decoding scheme with variable (iteration-dependent) parameters and proves the convergence of the decoding process mathematically. Experimental results demonstrate that the proposed scheme achieves the above-mentioned goal

  12. Functional Commutant Lifting and Interpolation on Generalized Analytic Polyhedra

    Czech Academy of Sciences Publication Activity Database

    Ambrozie, Calin-Grigore

    2008-01-01

    Roč. 34, č. 2 (2008), s. 519-543 ISSN 0362-1588 R&D Projects: GA ČR(CZ) GA201/06/0128 Institutional research plan: CEZ:AV0Z10190503 Keywords : intertwining lifting * interpolation * analytic functions Subject RIV: BA - General Mathematics Impact factor: 0.327, year: 2008

  13. Interpolation solution of the single-impurity Anderson model

    International Nuclear Information System (INIS)

    Kuzemsky, A.L.

    1990-10-01

    The dynamical properties of the single-impurity Anderson model (SIAM) are studied using a novel Irreducible Green's Function (IGF) method. A new solution for the one-particle GF, interpolating between the strong and weak correlation limits, is obtained. The unified concept of relevant mean-field renormalizations is indispensable for the strong correlation limit. (author). 21 refs

  14. Interpolant Tree Automata and their Application in Horn Clause Verification

    Directory of Open Access Journals (Sweden)

    Bishoksan Kafle

    2016-07-01

    Full Text Available This paper investigates the combination of abstract interpretation over the domain of convex polyhedra with interpolant tree automata, in an abstraction-refinement scheme for Horn clause verification. These techniques have been previously applied separately, but are combined in a new way in this paper. The role of an interpolant tree automaton is to provide a generalisation of a spurious counterexample during refinement, capturing a possibly infinite set of spurious counterexample traces. In our approach these traces are then eliminated using a transformation of the Horn clauses. We compare this approach with two other methods; one of them uses interpolant tree automata in an algorithm for trace abstraction and refinement, while the other uses abstract interpretation over the domain of convex polyhedra without the generalisation step. Evaluation of the results of experiments on a number of Horn clause verification problems indicates that the combination of interpolant tree automaton with abstract interpretation gives some increase in the power of the verification tool, while sometimes incurring a performance overhead.

  15. Two-dimensional interpolation with experimental data smoothing

    International Nuclear Information System (INIS)

    Trejbal, Z.

    1989-01-01

    A method of two-dimensional interpolation with smoothing of statistically deflected points is developed for processing of magnetic field measurements at the U-120M cyclotron. Mathematical statement of the initial requirements and the final result of the relevant algebraic transformations are given. 3 refs

  16. Data interpolation for vibration diagnostics using two-variable correlations

    International Nuclear Information System (INIS)

    Branagan, L.

    1991-01-01

    This paper reports that effective machinery vibration diagnostics require a clear differentiation between normal vibration changes caused by plant process conditions and those caused by degradation. The normal relationship between vibration and a process parameter can be quantified by developing the appropriate correlation. The differences in data acquisition requirements between dynamic signals (vibration spectra) and static signals (pressure, temperature, etc.) result in asynchronous data acquisition; the development of any correlation must then be based on some form of interpolated data. This interpolation can reproduce or distort the original measured quantity depending on the characteristics of the data and the interpolation technique. Relevant data characteristics, such as acquisition times, collection cycle times, compression method, storage rate, and the slew rate of the measured variable, are dependent both on the data handling and on the measured variable. Linear and staircase interpolation, along with the use of clustering and filtering, provide the necessary options to develop accurate correlations. The examples illustrate the appropriate application of these options
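
    The two interpolation options named above can be sketched for a slowly sampled static signal being resampled at a vibration-spectrum timestamp (a hedged illustration, not the paper's code; function names are illustrative):

```python
import bisect

def staircase(ts, vs, t):
    """Zero-order hold: carry the last recorded value forward."""
    i = bisect.bisect_right(ts, t) - 1
    return vs[max(i, 0)]

def linear(ts, vs, t):
    """Linear interpolation between the two bracketing samples."""
    i = bisect.bisect_right(ts, t) - 1
    if i < 0:
        return vs[0]
    if i >= len(ts) - 1:
        return vs[-1]
    frac = (t - ts[i]) / (ts[i + 1] - ts[i])
    return vs[i] + frac * (vs[i + 1] - vs[i])

# A process parameter logged every 10 s, queried at a vibration timestamp.
ts = [0.0, 10.0, 20.0]
vs = [100.0, 110.0, 90.0]
print(staircase(ts, vs, 14.0))  # → 110.0
print(linear(ts, vs, 14.0))     # → 102.0
```

    Staircase interpolation matches compressed, report-by-exception storage (the value genuinely held constant between stored changes), while linear interpolation suits smoothly varying signals; choosing the wrong one distorts the correlation, as the abstract notes.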

  17. Recent developments in free-viewpoint interpolation for 3DTV

    NARCIS (Netherlands)

    Zinger, S.; Do, Q.L.; With, de P.H.N.

    2012-01-01

    Current development of 3D technologies brings 3DTV within reach for the customers. We discuss in this article the recent advancements in free-viewpoint interpolation for 3D video. This technology is still a research topic and many efforts are dedicated to creation, evaluation and improvement of new

  18. A temporal interpolation approach for dynamic reconstruction in perfusion CT

    International Nuclear Information System (INIS)

    Montes, Pau; Lauritsch, Guenter

    2007-01-01

    This article presents a dynamic CT reconstruction algorithm for objects with time dependent attenuation coefficient. Projection data acquired over several rotations are interpreted as samples of a continuous signal. Based on this idea, a temporal interpolation approach is proposed which provides the maximum temporal resolution for a given rotational speed of the CT scanner. Interpolation is performed using polynomial splines. The algorithm can be adapted to slow signals, reducing the amount of data acquired and the computational cost. A theoretical analysis of the approximations made by the algorithm is provided. In simulation studies, the temporal interpolation approach is compared with three other dynamic reconstruction algorithms based on linear regression, linear interpolation, and generalized Parker weighting. The presented algorithm exhibits the highest temporal resolution for a given sampling interval. Hence, our approach needs less input data to achieve a certain quality in the reconstruction than the other algorithms discussed or, equivalently, less x-ray exposure and computational complexity. The proposed algorithm additionally allows the possibility of using slow rotating scanners for perfusion imaging purposes

  19. Twitch interpolation technique in testing of maximal muscle strength

    DEFF Research Database (Denmark)

    Bülow, P M; Nørregaard, J; Danneskiold-Samsøe, B

    1993-01-01

    The aim was to study the methodological aspects of the muscle twitch interpolation technique in estimating the maximal force of contraction in the quadriceps muscle utilizing commercial muscle testing equipment. Six healthy subjects participated in seven sets of experiments testing the effects...

  20. Limiting reiteration for real interpolation with slowly varying functions

    Czech Academy of Sciences Publication Activity Database

    Gogatishvili, Amiran; Opic, Bohumír; Trebels, W.

    2005-01-01

    Roč. 278, 1-2 (2005), s. 86-107 ISSN 0025-584X R&D Projects: GA ČR(CZ) GA201/01/0333 Institutional research plan: CEZ:AV0Z10190503 Keywords : real interpolation * K-functional * limiting reiteration Subject RIV: BA - General Mathematics Impact factor: 0.465, year: 2005

  1. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    Science.gov (United States)

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…

  2. Blind Authentication Using Periodic Properties of Interpolation

    Czech Academy of Sciences Publication Activity Database

    Mahdian, Babak; Saic, Stanislav

    2008-01-01

    Roč. 3, č. 3 (2008), s. 529-538 ISSN 1556-6013 R&D Projects: GA ČR GA102/08/0470 Institutional research plan: CEZ:AV0Z10750506 Keywords : image forensics * digital forgery * image tampering * interpolation detection * resampling detection Subject RIV: IN - Informatics, Computer Science Impact factor: 2.230, year: 2008

  3. Interpolation Inequalities and Spectral Estimates for Magnetic Operators

    Science.gov (United States)

    Dolbeault, Jean; Esteban, Maria J.; Laptev, Ari; Loss, Michael

    2018-05-01

    We prove magnetic interpolation inequalities and Keller-Lieb-Thirring estimates for the principal eigenvalue of magnetic Schrödinger operators. We establish explicit upper and lower bounds for the best constants and show by numerical methods that our theoretical estimates are accurate.

  4. Research on Electronic Transformer Data Synchronization Based on Interpolation Methods and Their Error Analysis

    Directory of Open Access Journals (Sweden)

    Pang Fubin

    2015-09-01

    Full Text Available In this paper the origin problem of data synchronization is analyzed first, and then three common interpolation methods are introduced to solve the problem. Allowing for the most general situation, the paper divides the interpolation error into harmonic and transient interpolation error components, and the error expression of each method is derived and analyzed. Besides, the interpolation errors of linear, quadratic and cubic methods are computed at different sampling rates, harmonic orders and transient components. Further, the interpolation accuracy and calculation amount of each method are compared. The research results provide theoretical guidance for selecting the interpolation method in the data synchronization application of electronic transformer.
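
    The dependence of interpolation error on sampling rate can be illustrated numerically (a sketch under the assumption of a pure fundamental, not the paper's error expressions): for linear interpolation the maximum error falls roughly quadratically as the sampling rate rises.

```python
import math

def max_linear_interp_error(samples_per_cycle):
    """Maximum error when linearly interpolating one cycle of a unit
    sine sampled at the given rate, measured on a dense test grid."""
    n = samples_per_cycle
    ts = [k / n for k in range(n + 1)]
    vs = [math.sin(2 * math.pi * t) for t in ts]
    err = 0.0
    for j in range(2000):
        t = j / 2000
        i = min(int(t * n), n - 1)
        frac = (t - ts[i]) * n  # position inside the sample interval
        approx = vs[i] + frac * (vs[i + 1] - vs[i])
        err = max(err, abs(approx - math.sin(2 * math.pi * t)))
    return err

# Doubling the sampling rate cuts the linear-interpolation error by
# roughly a factor of four (second-order behaviour).
e16, e32 = max_linear_interp_error(16), max_linear_interp_error(32)
print(e16 / e32)  # roughly 4
```

    The same experiment with quadratic or cubic interpolants shows faster error decay, which is the trade-off against computation that the abstract weighs.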

  5. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    Science.gov (United States)

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

    Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirmed that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before estimating the amount of precipitation separately on wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.

  6. Understanding case mix across three paediatric services: could integration of primary and secondary general paediatrics alter walk-in emergency attendances?

    Science.gov (United States)

    Steele, Lloyd; Coote, Nicky; Klaber, Robert; Watson, Mando; Coren, Michael

    2018-05-04

    To understand the case mix of three different paediatric services, reasons for using an acute paediatric service in a region of developing integrated care and where acute attendances could alternatively have been managed. Mixed methods service evaluation, including retrospective review of referrals to general paediatric outpatients (n=534) and a virtual integrated service (email advice line) (n=474), as well as a prospective survey of paediatric ambulatory unit (PAU) attendees (n=95) and review by a paediatric consultant/registrar to decide where these cases could alternatively have been managed. The case mix of outpatient referrals and the email advice line was similar, but the case mix for PAU was more acute. The most common parental reasons for attending PAU were referral by a community health professional (27.2%), not being able to get a general practitioner (GP) appointment when desired (21.7%), wanting to avoid accident and emergency (17.4%) and wanting specialist paediatric input (14.1%). More than half of PAU presentations were deemed most appropriate for community management by a GP or midwife. The proportion of cases suitable for community management varied by the reason for attendance, being highest for parents reporting not being able to get a GP appointment (85%) and lowest for those referred by community health professionals (29%). One in two attendances to acute paediatric services could have been managed in the community. Integration of paediatric services could help address parental reasons for attending acute services, as well as facilitating the community management of chronic conditions. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  7. A comparison of interpolation methods on the basis of data obtained from a bathymetric survey of Lake Vrana, Croatia

    Science.gov (United States)

    Šiljeg, A.; Lozić, S.; Šiljeg, S.

    2015-08-01

    The bathymetric survey of Lake Vrana included a wide range of activities that were performed in several different stages, in accordance with the standards set by the International Hydrographic Organization. The survey was conducted using an integrated measuring system which consisted of three main parts: a HydroStar 4300 single-beam sonar and two GPS devices, an Ashtech ProMark 500 base and a Thales Z-Max® rover. A total of 12 851 points were gathered. In order to find continuous surfaces necessary for analysing the morphology of the bed of Lake Vrana, it was necessary to approximate values in certain areas that were not directly measured, by using an appropriate interpolation method. The main aims of this research were as follows: (a) to compare the efficiency of 14 different interpolation methods and discover the most appropriate interpolators for the development of a raster model; (b) to calculate the surface area and volume of Lake Vrana, and (c) to compare the differences in calculations between separate raster models. The best deterministic method of interpolation was multiquadric RBF (radial basis function), and the best geostatistical method was ordinary cokriging. The root mean square error in both methods measured less than 0.3 m. The quality of the interpolation methods was analysed in two phases. The first phase used only points gathered by bathymetric measurement, while the second phase also included points gathered by photogrammetric restitution. The first bathymetric map of Lake Vrana in Croatia was produced, as well as scenarios of minimum and maximum water levels. The calculation also included the percentage of flooded areas and cadastre plots in the case of a 2 m increase in the water level. The research presented new scientific and methodological data related to the bathymetric features, surface area and volume of Lake Vrana.

  8. Mixed Waste Focus Area mercury contamination product line: An integrated approach to mercury waste treatment and disposal

    International Nuclear Information System (INIS)

    Hulet, G.A.; Conley, T.B.; Morris, M.I.

    1998-01-01

    The US Department of Energy (DOE) Mixed Waste Focus Area (MWFA) is tasked with ensuring that solutions are available for the mixed waste treatment problems of the DOE complex. During the MWFA's initial technical baseline development process, three of the top four technology deficiencies identified were related to the need for amalgamation, stabilization, and separation/removal technologies for the treatment of mercury and mercury-contaminated mixed waste. The focus area grouped mercury-waste-treatment activities into the mercury contamination product line under which development, demonstration, and deployment efforts are coordinated to provide tested technologies to meet the site needs. The Mercury Working Group (HgWG), a selected group of representatives from DOE sites with significant mercury waste inventories, is assisting the MWFA in soliciting, identifying, initiating, and managing efforts to address these areas. Based on the scope and magnitude of the mercury mixed waste problem, as defined by HgWG, solicitations and contract awards have been made to the private sector to demonstrate amalgamation and stabilization processes using actual mixed wastes. Development efforts are currently being funded under the product line that will address DOE's needs for separation/removal processes. This paper discusses the technology selection process, development activities, and the accomplishments of the MWFA to date through these various activities

  9. Building Input Adaptive Parallel Applications: A Case Study of Sparse Grid Interpolation

    KAUST Repository

    Murarasu, Alin; Weidendorfer, Josef

    2012-01-01

    bring a substantial contribution to the speedup. By identifying common patterns in the input data, we propose new algorithms for sparse grid interpolation that accelerate the state-of-the-art non-specialized version. Sparse grid interpolation

  10. DATASPACE - A PROGRAM FOR THE LOGARITHMIC INTERPOLATION OF TEST DATA

    Science.gov (United States)

    Ledbetter, F. E.

    1994-01-01

    Scientists and engineers work with the reduction, analysis, and manipulation of data. In many instances, the recorded data must meet certain requirements before standard numerical techniques may be used to interpret it. For example, the analysis of a linear viscoelastic material requires knowledge of one of two time-dependent properties, the stress relaxation modulus E(t) or the creep compliance D(t), one of which may be derived from the other by a numerical method if the recorded data points are evenly spaced or increasingly spaced with respect to the time coordinate. The problem is that most laboratory data are variably spaced, making the use of numerical techniques difficult. To ease this difficulty in the case of stress relaxation data analysis, NASA scientists developed DATASPACE (A Program for the Logarithmic Interpolation of Test Data), to establish a logarithmically increasing time interval in the relaxation data. The program is generally applicable to any situation in which a data set needs increasingly spaced abscissa values. DATASPACE first takes the logarithm of the abscissa values, then uses a cubic spline interpolation routine (which minimizes interpolation error) to create an evenly spaced array from the log values. This array is returned from the log abscissa domain to the abscissa domain and written to an output file for further manipulation. As a result of the interpolation in the log abscissa domain, the data is increasingly spaced. In the case of stress relaxation data, the array is closely spaced at short times and widely spaced at long times, thus avoiding the distortion inherent in evenly spaced time coordinates. The interpolation routine gives results which compare favorably with the recorded data. The experimental data curve is retained and the interpolated points reflect the desired spacing. DATASPACE is written in FORTRAN 77 for IBM PC compatibles with a math co-processor running MS-DOS and Apple Macintosh computers running MacOS. With
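
    The log-then-respace procedure described above can be sketched as follows. This is a simplified illustration, not DATASPACE itself: DATASPACE uses a cubic spline in the log domain, whereas this sketch uses linear interpolation there to stay self-contained; the function name is illustrative.

```python
import math

def log_respace(ts, ys, n_out):
    """Resample (ts, ys) so the output abscissas are evenly spaced in
    log10(t), i.e. increasingly spaced in t. Interpolates linearly in
    the log domain (DATASPACE itself uses a cubic spline here)."""
    logs = [math.log10(t) for t in ts]
    lo, hi = logs[0], logs[-1]
    out_t, out_y = [], []
    for k in range(n_out):
        lx = lo + (hi - lo) * k / (n_out - 1)  # even in log space
        # locate the bracketing input samples
        i = max(j for j in range(len(logs)) if logs[j] <= lx)
        i = min(i, len(logs) - 2)
        frac = (lx - logs[i]) / (logs[i + 1] - logs[i])
        out_t.append(10 ** lx)  # back to the abscissa domain
        out_y.append(ys[i] + frac * (ys[i + 1] - ys[i]))
    return out_t, out_y

# Variably spaced relaxation-like data, respaced logarithmically:
# output times are closely spaced at short times, wide at long times.
ts = [0.1, 0.3, 1.0, 4.0, 10.0, 100.0]
ys = [9.0, 8.0, 6.5, 5.0, 4.2, 3.0]
t2, y2 = log_respace(ts, ys, 5)
print([round(t, 3) for t in t2])  # → [0.1, 0.562, 3.162, 17.783, 100.0]
```

    Note how the output times form a geometric progression, which is exactly the "increasingly spaced abscissa" condition the numerical interconversion of E(t) and D(t) requires.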

  11. Effect of interpolation on parameters extracted from seating interface pressure arrays

    OpenAIRE

    Michael Wininger, PhD; Barbara Crane, PhD, PT

    2015-01-01

    Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pre...

  12. Radial basis function interpolation of unstructured, three-dimensional, volumetric particle tracking velocimetry data

    International Nuclear Information System (INIS)

    Casa, L D C; Krueger, P S

    2013-01-01

    Unstructured three-dimensional fluid velocity data were interpolated using Gaussian radial basis function (RBF) interpolation. Data were generated to imitate the spatial resolution and experimental uncertainty of a typical implementation of defocusing digital particle image velocimetry. The velocity field associated with a steadily rotating infinite plate was simulated to provide a bounded, fully three-dimensional analytical solution of the Navier–Stokes equations, allowing for robust analysis of the interpolation accuracy. The spatial resolution of the data (i.e. particle density) and the number of RBFs were varied in order to assess the requirements for accurate interpolation. Interpolation constraints, including boundary conditions and continuity, were included in the error metric used for the least-squares minimization that determines the interpolation parameters to explore methods for improving RBF interpolation results. Even spacing and logarithmic spacing of RBF locations were also investigated. Interpolation accuracy was assessed using the velocity field, divergence of the velocity field, and viscous torque on the rotating boundary. The results suggest that for the present implementation, RBF spacing of 0.28 times the boundary layer thickness is sufficient for accurate interpolation, though theoretical error analysis suggests that improved RBF positioning may yield more accurate results. All RBF interpolation results were compared to standard Gaussian weighting and Taylor expansion interpolation methods. Results showed that RBF interpolation improves interpolation results compared to the Taylor expansion method by 60% to 90% based on the average squared velocity error and provides comparable velocity results to Gaussian weighted interpolation in terms of velocity error. RMS accuracy of the flow field divergence was one to two orders of magnitude better for the RBF interpolation compared to the other two methods. RBF interpolation that was applied to
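
    The core of Gaussian RBF interpolation with least-squares-determined parameters can be sketched in one dimension. The paper treats fully three-dimensional velocity fields and adds constraints such as continuity to the error metric; this minimal illustration uses assumed function names and an assumed shape parameter.

```python
import numpy as np

def gaussian_rbf_fit(centers, eps, x_data, f_data):
    """Least-squares weights for the expansion f(x) ~ sum_j w_j exp(-(eps*r_j)^2)."""
    r = np.abs(x_data[:, None] - centers[None, :])
    A = np.exp(-(eps * r) ** 2)                      # Gaussian basis matrix
    w, *_ = np.linalg.lstsq(A, f_data, rcond=None)   # least-squares minimization
    return w

def gaussian_rbf_eval(centers, eps, w, x):
    r = np.abs(np.atleast_1d(x)[:, None] - centers[None, :])
    return np.exp(-(eps * r) ** 2) @ w

# 1D stand-in for scattered PTV samples: fit sin(x) from 40 random locations
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0.0, 2.0 * np.pi, 40))
f = np.sin(x)
centers = np.linspace(0.0, 2.0 * np.pi, 12)          # evenly spaced RBF locations
w = gaussian_rbf_fit(centers, 0.8, x, f)
err = np.max(np.abs(gaussian_rbf_eval(centers, 0.8, w, x) - f))
```

    Using fewer basis functions than data points and solving in the least-squares sense mirrors the paper's setup, where the number and spacing of RBFs are tuning parameters.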

  13. Dependable Digitally-Assisted Mixed-Signal IPs Based on Integrated Self-Test & Self-Calibration

    NARCIS (Netherlands)

    Kerkhoff, Hans G.; Wan, J.

    2010-01-01

    Heterogeneous SoC devices, including sensors, analogue and mixed-signal front-end circuits and the availability of massive digital processing capability, are being increasingly used in safety-critical applications like in the automotive, medical, and the security arena. Already a significant amount

  14. Efficient GPU-based texture interpolation using uniform B-splines

    NARCIS (Netherlands)

    Ruijters, D.; Haar Romenij, ter B.M.; Suetens, P.

    2008-01-01

    This article presents uniform B-spline interpolation, completely contained on the graphics processing unit (GPU). This implies that the CPU does not need to compute any lookup tables or B-spline basis functions. The cubic interpolation can be decomposed into several linear interpolations [Sigg and
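
    The cubic-to-linear decomposition the abstract cites can be sketched on the CPU; a GPU implementation replaces linear_fetch with a hardware texture fetch, which is what makes the method fast. The 1D setting and all names here are illustrative.

```python
def bspline_weights(t):
    """Uniform cubic B-spline basis weights for fractional offset t in [0, 1)."""
    a = 1.0 - t
    w0 = a * a * a / 6.0
    w1 = (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0
    w2 = (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0
    w3 = t ** 3 / 6.0
    return w0, w1, w2, w3

def linear_fetch(c, x):
    """Linearly interpolated lookup -- the operation a GPU texture unit performs."""
    i = int(x)
    f = x - i
    return (1 - f) * c[i] + f * c[i + 1]

def cubic_bspline(c, x):
    """Cubic B-spline filtering decomposed into two linear fetches."""
    i = int(x)
    t = x - i
    w0, w1, w2, w3 = bspline_weights(t)
    g0, g1 = w0 + w1, w2 + w3                # grouped weights
    # each group of two neighbouring taps becomes one off-center linear fetch
    return g0 * linear_fetch(c, i - 1 + w1 / g0) + g1 * linear_fetch(c, i + 1 + w3 / g1)

c = [0.0, 1.0, 2.0, 3.0, 4.0]                # B-spline coefficients (a linear ramp)
y = cubic_bspline(c, 2.25)                   # linear precision: reproduces 2.25
```

    Because the four-tap cubic weighting collapses into two linear lookups, the GPU evaluates the cubic at roughly the cost of two hardware-filtered fetches, with no CPU-side lookup tables.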

  15. A parameterization of observer-based controllers: Bumpless transfer by covariance interpolation

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Komareji, Mohammad

    2009-01-01

    This paper presents an algorithm to interpolate between two observer-based controllers for a linear multivariable system such that the closed loop system remains stable throughout the interpolation. The method interpolates between the inverse Lyapunov functions for the two original state feedback...

  16. Dynamic Stability Analysis Using High-Order Interpolation

    Directory of Open Access Journals (Sweden)

    Juarez-Toledo C.

    2012-10-01

    Full Text Available A non-linear model with robust precision for transient stability analysis in multimachine power systems is proposed. The proposed formulation uses Lagrange interpolation and Newton's divided differences. The High-Order Interpolation technique developed can be used for evaluation of the critical conditions of the dynamic system. The technique is applied to a 5-area, 45-machine model of the Mexican interconnected system. As a particular case, this paper shows the application of the High-Order procedure for identifying the slow-frequency mode for a critical contingency. Numerical examples illustrate the method and demonstrate the ability of the High-Order technique to isolate and extract temporal modal behavior.
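
    Newton's divided differences, one of the two interpolation schemes the formulation uses, can be sketched as follows. The paper applies the technique to power system trajectories; here it is shown on a toy cubic, with illustrative names.

```python
def divided_differences(xs, ys):
    """Newton divided-difference coefficients c[k] = f[x0, ..., xk] (in place)."""
    coef = list(ys)
    n = len(xs)
    for j in range(1, n):
        for i in range(n - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(xs, coef, x):
    """Evaluate the Newton form with Horner's nested scheme."""
    result = coef[-1]
    for k in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[k]) + coef[k]
    return result

xs = [0.0, 1.0, 2.0, 3.0]
ys = [v ** 3 - 2 * v for v in xs]      # a cubic is reproduced exactly by 4 points
p = newton_eval(xs, divided_differences(xs, ys), 1.5)   # -> 0.375
```

    The Newton form is convenient for trajectory data because adding a new sample point only appends one coefficient instead of recomputing the whole polynomial.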

  17. LINTAB, Linear Interpolable Tables from any Continuous Variable Function

    International Nuclear Information System (INIS)

    1988-01-01

    1 - Description of program or function: LINTAB is designed to construct linearly interpolable tables from any function. The program will start from any function of a single continuous variable... FUNKY(X). By user input the function can be defined, (1) Over 1 to 100 X ranges. (2) Within each X range the function is defined by 0 to 50 constants. (3) At boundaries between X ranges the function may be continuous or discontinuous (depending on the constants used to define the function within each X range). 2 - Method of solution: LINTAB will construct a table of X and Y values where the tabulated (X,Y) pairs will be exactly equal to the function (Y=FUNKY(X)) and linear interpolation between the tabulated pairs will be within any user specified fractional uncertainty of the function for all values of X within the requested X range
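
    The idea behind LINTAB can be sketched by recursive interval halving. Note that this simplified version only checks the tolerance at interval midpoints, whereas LINTAB guarantees the user-specified fractional uncertainty for all values of X in the requested range; names are illustrative.

```python
import math

def linearize(f, a, b, tol):
    """Tabulate (x, f(x)) so that linear interpolation between neighbouring
    entries stays within a fractional tolerance of f at interval midpoints."""
    pts = [(a, f(a))]

    def refine(x0, y0, x1, y1):
        xm = 0.5 * (x0 + x1)
        ym = f(xm)
        ylin = 0.5 * (y0 + y1)               # linear interpolation at the midpoint
        if abs(ylin - ym) > tol * abs(ym) and (x1 - x0) > 1e-9:
            refine(x0, y0, xm, ym)           # refine left half
            pts.append((xm, ym))
            refine(xm, ym, x1, y1)           # refine right half

    refine(a, f(a), b, f(b))
    pts.append((b, f(b)))
    return pts

# Tabulate exp(x) on [0, 2] to 1% linear-interpolation accuracy
table = linearize(math.exp, 0.0, 2.0, 0.01)  # yields 9 (x, y) pairs
```

    Where the function curves strongly the table densifies automatically, which is exactly what makes the output safely linearly interpolable.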

  18. Single image interpolation via adaptive nonlocal sparsity-based modeling.

    Science.gov (United States)

    Romano, Yaniv; Protter, Matan; Elad, Michael

    2014-07-01

    Single image interpolation is a central and extensively studied problem in image processing. A common approach toward the treatment of this problem in recent years is to divide the given image into overlapping patches and process each of them based on a model for natural image patches. Adaptive sparse representation modeling is one such promising image prior, which has been shown to be powerful in filling in missing pixels in an image. Another force that such algorithms may use is the self-similarity that exists within natural images. Processing groups of related patches together exploits their correspondence, often leading to improved results. In this paper, we propose a novel image interpolation method which combines these two forces: nonlocal self-similarity and sparse representation modeling. The proposed method is contrasted with competitive and related algorithms, and demonstrated to achieve state-of-the-art results.

  19. Interpolation strategies for reducing IFOV artifacts in microgrid polarimeter imagery.

    Science.gov (United States)

    Ratliff, Bradley M; LaCasse, Charles F; Tyo, J Scott

    2009-05-25

    Microgrid polarimeters are composed of an array of micro-polarizing elements overlaid upon an FPA sensor. In the past decade systems have been designed and built in all regions of the optical spectrum. These systems have rugged, compact designs and the ability to obtain a complete set of polarimetric measurements during a single image capture. However, these systems acquire the polarization measurements through spatial modulation and each measurement has a varying instantaneous field-of-view (IFOV). When these measurements are combined to estimate the polarization images, strong edge artifacts are present that severely degrade the estimated polarization imagery. These artifacts can be reduced when interpolation strategies are first applied to the intensity data prior to Stokes vector estimation. Here we formally study IFOV error and the performance of several bilinear interpolation strategies used for reducing it.
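
    The bilinear kernel underlying the interpolation strategies studied here can be sketched as follows. This is the textbook bilinear weighting with hypothetical names; the paper evaluates several microgrid-specific variants applied to the intensity data of each polarizer channel before Stokes vector estimation.

```python
def bilinear(img, r, c):
    """Bilinearly interpolate image img (list of rows) at fractional (r, c)."""
    r0, c0 = int(r), int(c)
    r1 = min(r0 + 1, len(img) - 1)
    c1 = min(c0 + 1, len(img[0]) - 1)
    dr, dc = r - r0, c - c0
    top = img[r0][c0] * (1 - dc) + img[r0][c1] * dc    # blend along columns
    bot = img[r1][c0] * (1 - dc) + img[r1][c1] * dc
    return top * (1 - dr) + bot * dr                   # blend along rows

# One polarizer orientation only samples every other pixel; interpolating the
# channel to the skipped locations aligns the four channels to a common IFOV.
I0 = [[1.0, 2.0], [3.0, 4.0]]
val = bilinear(I0, 0.5, 0.5)   # -> 2.5
```

    Estimating each channel at a common grid before differencing is what suppresses the IFOV edge artifacts described above.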

  20. Bi-local baryon interpolating fields with two flavors

    Energy Technology Data Exchange (ETDEWEB)

    Dmitrasinovic, V. [Belgrade University, Institute of Physics, Pregrevica 118, Zemun, P.O. Box 57, Beograd (RS); Chen, Hua-Xing [Institutos de Investigacion de Paterna, Departamento de Fisica Teorica and IFIC, Centro Mixto Universidad de Valencia-CSIC, Valencia (Spain); Peking University, Department of Physics and State Key Laboratory of Nuclear Physics and Technology, Beijing (China)

    2011-02-15

    We construct bi-local interpolating field operators for baryons consisting of three quarks with two flavors, assuming good isospin symmetry. We use the restrictions following from the Pauli principle to derive relations/identities among the baryon operators with identical quantum numbers. Such relations that follow from the combined spatial, Dirac, color, and isospin Fierz transformations may be called the (total/complete) Fierz identities. These relations reduce the number of independent baryon operators with any given spin and isospin. We also study the Abelian and non-Abelian chiral transformation properties of these fields and place them into baryon chiral multiplets. Thus we derive the independent baryon interpolating fields with given values of spin (Lorentz group representation), chiral symmetry (U{sub L}(2) x U{sub R}(2) group representation) and isospin appropriate for the first angular excited states of the nucleon. (orig.)

  1. Kriging for interpolation of sparse and irregularly distributed geologic data

    Energy Technology Data Exchange (ETDEWEB)

    Campbell, K.

    1986-12-31

    For many geologic problems, subsurface observations are available only from a small number of irregularly distributed locations, for example from a handful of drill holes in the region of interest. These observations will be interpolated one way or another, for example by hand-drawn stratigraphic cross-sections, by trend-fitting techniques, or by simple averaging which ignores spatial correlation. In this paper we consider an interpolation technique for such situations which provides, in addition to point estimates, the error estimates that are lacking from other ad hoc methods. The proposed estimator is like a kriging estimator in form, but because direct estimation of the spatial covariance function is not possible, the parameters of the estimator are selected by cross-validation. Its use in estimating subsurface stratigraphy at a candidate site for a geologic waste repository provides an example.
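
    An ordinary kriging estimate can be sketched as follows, assuming a fixed exponential covariance model for illustration; the paper instead selects the estimator's parameters by cross-validation precisely because the covariance cannot be estimated directly from sparse data. All names are illustrative.

```python
import numpy as np

def ordinary_kriging(xs, zs, x0, cov):
    """Ordinary kriging estimate and variance at x0.

    Solves the augmented system that enforces sum(w) = 1 (unbiasedness)
    via a Lagrange multiplier mu.
    """
    n = len(xs)
    K = np.ones((n + 1, n + 1))
    K[n, n] = 0.0
    for i in range(n):
        for j in range(n):
            K[i, j] = cov(xs[i], xs[j])      # data-data covariances
    rhs = np.ones(n + 1)
    rhs[:n] = [cov(x, x0) for x in xs]       # data-target covariances
    sol = np.linalg.solve(K, rhs)
    w, mu = sol[:n], sol[n]
    est = float(w @ zs)                      # point estimate
    var = float(cov(x0, x0) - w @ rhs[:n] - mu)   # kriging (error) variance
    return est, var

# Assumed exponential covariance; in practice sill/range come from cross-validation
cov = lambda a, b: np.exp(-abs(a - b) / 2.0)
xs = np.array([0.0, 1.0, 3.0])               # e.g. drill-hole locations
zs = np.array([1.0, 2.0, 4.0])               # observed values
est, var = ordinary_kriging(xs, zs, 1.0, cov)   # exact at a data point
```

    The kriging variance is the per-point error estimate that distinguishes this approach from hand-drawn sections or trend fitting.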

  2. Numerical Solution of the Advection Equation with the Radial Point Interpolation Method and Discontinuous Galerkin Time Integration

    Directory of Open Access Journals (Sweden)

    Kresno Wikan Sadono

    2016-12-01

    Full Text Available Differential equations are widely used to describe a variety of phenomena in science and engineering, and many complex real-world problems can be modeled with differential equations and solved by numerical methods. One class of numerical methods, the meshfree or meshless methods, has developed recently; these methods require no element generation over the domain. This research combines a meshless method, the radial point interpolation method (RPIM), with the discontinuous Galerkin method (DGM) for time integration; the combined method is called RPIM-DGM. RPIM-DGM is applied to the advection equation in one dimension. The RPIM uses the multiquadric (MQ) basis function, and the time integration is derived for both linear-DGM and quadratic-DGM. Simulation results show that the method agrees well with the analytical solution, and that the numerical results become more accurate as the number of nodes increases and the time increment decreases. The results also show that, for a given time increment and number of nodes, numerical integration with quadratic-DGM improves accuracy compared with linear-DGM. [Title: Numerical solution of advection equation with radial basis interpolation method and discontinuous Galerkin method for time integration]

  3. The modal surface interpolation method for damage localization

    Science.gov (United States)

    Pina Limongelli, Maria

    2017-05-01

    The Interpolation Method (IM) has previously been proposed and successfully applied for damage localization in plate-like structures. The method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. The IM can be applied to any type of structure provided the ODSs are estimated accurately in both the original and the damaged configurations. When this is not the case, for example when the structure is subjected to unknown inputs or the structural responses are strongly corrupted by noise, both false and missing alarms occur when the IM is applied to localize a concentrated damage. In order to overcome these drawbacks, a modification of the method is investigated herein. An ODS is the deformed shape of a structure subjected to a harmonic excitation: at resonances the ODSs are dominated by the relevant mode shapes. The effect of noise at resonance is usually lower than at other frequencies, hence the relevant ODSs are estimated with higher reliability. Several methods have been proposed to reliably estimate mode shapes in the case of unknown input. These two circumstances can be exploited to improve the reliability of the IM. In order to reduce or eliminate the drawbacks related to estimating the ODSs from noisy signals, this paper investigates a modified version of the method based on a damage feature calculated from the interpolation error of the mode shapes only, rather than of all the operational shapes in the significant frequency range. The comparison between the results of the IM in its current version (with the interpolation error calculated by summing the contributions of all the operational shapes) and in the newly proposed version (with the estimation of the interpolation error limited to the mode shapes) is reported.

  4. Reconstruction of reflectance data using an interpolation technique.

    Science.gov (United States)

    Abed, Farhad Moghareh; Amirshahi, Seyed Hossein; Abed, Mohammad Reza Moghareh

    2009-03-01

    A linear interpolation method is applied for reconstruction of reflectance spectra of Munsell as well as ColorChecker SG color chips from the corresponding colorimetric values under a given set of viewing conditions. Hence, different types of lookup tables (LUTs) have been created to connect the colorimetric and spectrophotometric data as the source and destination spaces in this approach. To optimize the algorithm, different color spaces and light sources have been used to build different types of LUTs. The effects of the applied color datasets as well as the employed color spaces are investigated. Results of recovery are evaluated by the mean and the maximum color difference values under other sets of standard light sources. The mean and the maximum values of root mean square (RMS) error between the reconstructed and the actual spectra are also calculated. Since the speed of reflectance reconstruction is a key point in the LUT algorithm, the processing time spent for interpolation of spectral data has also been measured for each model. Finally, the performance of the suggested interpolation technique is compared with that of the common principal component analysis (PCA) method. According to the results, using the CIEXYZ tristimulus values as a source space shows priority over the CIELAB color space. Besides, the colorimetric position of a desired sample is a key point that indicates the success of the approach. In fact, because of the nature of the interpolation technique, the colorimetric position of the desired samples should be located inside the color gamut of the available samples in the dataset. The resultant spectra reconstructed by this technique show considerable improvement in terms of RMS error between the actual and the reconstructed reflectance spectra, as well as CIELAB color differences under the other light sources, in comparison with those obtained from the standard PCA technique.

  5. Direct Trajectory Interpolation on the Surface using an Open CNC

    OpenAIRE

    Beudaert , Xavier; Lavernhe , Sylvain; Tournier , Christophe

    2014-01-01

    Free-form surfaces are used for many industrial applications, from aeronautical parts to molds or biomedical implants. In the common machining process, computer-aided manufacturing (CAM) software generates approximated tool paths because of the limitation induced by the input tool path format of the industrial CNC. Then, during the tool path interpolation, marks on finished surfaces can appear, induced by non-smooth feedrate planning. Managing the geometry of the tool p...

  6. Image interpolation via graph-based Bayesian label propagation.

    Science.gov (United States)

    Xianming Liu; Debin Zhao; Jiantao Zhou; Wen Gao; Huifang Sun

    2014-03-01

    In this paper, we propose a novel image interpolation algorithm via graph-based Bayesian label propagation. The basic idea is to first create a graph with known and unknown pixels as vertices and with edge weights encoding the similarity between vertices, then the problem of interpolation converts to how to effectively propagate the label information from known points to unknown ones. This process can be posed as a Bayesian inference, in which we try to combine the principles of local adaptation and global consistency to obtain accurate and robust estimation. Specially, our algorithm first constructs a set of local interpolation models, which predict the intensity labels of all image samples, and a loss term will be minimized to keep the predicted labels of the available low-resolution (LR) samples sufficiently close to the original ones. Then, all of the losses evaluated in local neighborhoods are accumulated together to measure the global consistency on all samples. Moreover, a graph-Laplacian-based manifold regularization term is incorporated to penalize the global smoothness of intensity labels, such smoothing can alleviate the insufficient training of the local models and make them more robust. Finally, we construct a unified objective function to combine together the global loss of the locally linear regression, square error of prediction bias on the available LR samples, and the manifold regularization term. It can be solved with a closed-form solution as a convex optimization problem. Experimental results demonstrate that the proposed method achieves competitive performance with the state-of-the-art image interpolation algorithms.

  7. Strip interpolation in silicon and germanium strip detectors

    International Nuclear Information System (INIS)

    Wulf, E. A.; Phlips, B. F.; Johnson, W. N.; Kurfess, J. D.; Lister, C. J.; Kondev, F.; Physics; Naval Research Lab.

    2004-01-01

    The position resolution of double-sided strip detectors is limited by the strip pitch and a reduction in strip pitch necessitates more electronics. Improved position resolution would improve the imaging capabilities of Compton telescopes and PET detectors. Digitizing the preamplifier waveform yields more information than can be extracted with regular shaping electronics. In addition to the energy, depth of interaction, and which strip was hit, the digitized preamplifier signals can locate the interaction position to less than the strip pitch of the detector by looking at induced signals in neighboring strips. This allows the position of the interaction to be interpolated in three dimensions and improve the imaging capabilities of the system. In a 2 mm thick silicon strip detector with a strip pitch of 0.891 mm, strip interpolation located the interaction of 356 keV gamma rays to 0.3 mm FWHM. In a 2 cm thick germanium detector with a strip pitch of 5 mm, strip interpolation of 356 keV gamma rays yielded a position resolution of 1.5 mm FWHM
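
    The sub-pitch interpolation from induced signals on neighbouring strips can be illustrated with a simple signal-weighted centroid. The actual analysis works on digitized preamplifier waveforms; the function name and numbers here are illustrative, with the 0.891 mm pitch taken from the abstract.

```python
def strip_centroid(signals, pitch=0.891):
    """Estimate the interaction position (in mm) from the pulse heights
    induced on a strip and its neighbours, as a signal-weighted centroid.

    `signals` maps strip index -> pulse height.
    """
    total = sum(signals.values())
    centroid = sum(i * s for i, s in signals.items()) / total   # in strip units
    return centroid * pitch                                     # convert to mm

# A hit mostly under strip 12, with induced signal leaking onto strip 13,
# lands between the two strip centres rather than snapping to the pitch grid.
x = strip_centroid({11: 0.0, 12: 8.0, 13: 2.0})
```

    Because the centroid uses the neighbouring induced signals, the position estimate is no longer quantized to the strip pitch, which is what improves the imaging resolution.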

  8. Importance of interpolation and coincidence errors in data fusion

    Directory of Open Access Journals (Sweden)

    S. Ceccherini

    2018-02-01

    Full Text Available The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on the number of degrees of freedom and on the errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.

  9. Interpolation of daily rainfall using spatiotemporal models and clustering

    KAUST Repository

    Militino, A. F.

    2014-06-11

    Accumulated daily rainfall in non-observed locations on a particular day is frequently required as input to decision-making tools in precision agriculture or for hydrological or meteorological studies. Various solutions and estimation procedures have been proposed in the literature depending on the auxiliary information and the availability of data, but most such solutions are oriented to interpolating spatial data without incorporating temporal dependence. When data are available in space and time, spatiotemporal models usually provide better solutions. Here, we analyse the performance of three spatiotemporal models fitted to the whole sampled set and to clusters within the sampled set. The data consists of daily observations collected from 87 manual rainfall gauges from 1990 to 2010 in Navarre, Spain. The accuracy and precision of the interpolated data are compared with real data from 33 automated rainfall gauges in the same region, but placed in different locations than the manual rainfall gauges. Root mean squared error by months and by year are also provided. To illustrate these models, we also map interpolated daily precipitations and standard errors on a 1km2 grid in the whole region. © 2014 Royal Meteorological Society.

  10. Interpolation of daily rainfall using spatiotemporal models and clustering

    KAUST Repository

    Militino, A. F.; Ugarte, M. D.; Goicoa, T.; Genton, Marc G.

    2014-01-01

    Accumulated daily rainfall in non-observed locations on a particular day is frequently required as input to decision-making tools in precision agriculture or for hydrological or meteorological studies. Various solutions and estimation procedures have been proposed in the literature depending on the auxiliary information and the availability of data, but most such solutions are oriented to interpolating spatial data without incorporating temporal dependence. When data are available in space and time, spatiotemporal models usually provide better solutions. Here, we analyse the performance of three spatiotemporal models fitted to the whole sampled set and to clusters within the sampled set. The data consists of daily observations collected from 87 manual rainfall gauges from 1990 to 2010 in Navarre, Spain. The accuracy and precision of the interpolated data are compared with real data from 33 automated rainfall gauges in the same region, but placed in different locations than the manual rainfall gauges. Root mean squared error by months and by year are also provided. To illustrate these models, we also map interpolated daily precipitations and standard errors on a 1km2 grid in the whole region. © 2014 Royal Meteorological Society.

  11. Global sensitivity analysis using sparse grid interpolation and polynomial chaos

    International Nuclear Information System (INIS)

    Buzzard, Gregery T.

    2012-01-01

    Sparse grid interpolation is widely used to provide good approximations to smooth functions in high dimensions based on relatively few function evaluations. By using an efficient conversion from the interpolating polynomial provided by evaluations on a sparse grid to a representation in terms of orthogonal polynomials (gPC representation), we show how to use these relatively few function evaluations to estimate several types of sensitivity coefficients and to provide estimates on local minima and maxima. First, we provide a good estimate of the variance-based sensitivity coefficients of Sobol' (1990) [1] and then use the gradient of the gPC representation to give good approximations to the derivative-based sensitivity coefficients described by Kucherenko and Sobol' (2009) [2]. Finally, we use the package HOM4PS-2.0 given in Lee et al. (2008) [3] to determine the critical points of the interpolating polynomial and use these to determine the local minima and maxima of this polynomial. - Highlights: ► Efficient estimation of variance-based sensitivity coefficients. ► Efficient estimation of derivative-based sensitivity coefficients. ► Use of homotopy methods for approximation of local maxima and minima.

  12. Adaptive Residual Interpolation for Color and Multispectral Image Demosaicking.

    Science.gov (United States)

    Monno, Yusuke; Kiku, Daisuke; Tanaka, Masayuki; Okutomi, Masatoshi

    2017-12-01

    Color image demosaicking for the Bayer color filter array is an essential image processing operation for acquiring high-quality color images. Recently, residual interpolation (RI)-based algorithms have demonstrated superior demosaicking performance over conventional color difference interpolation-based algorithms. In this paper, we propose adaptive residual interpolation (ARI) that improves existing RI-based algorithms by adaptively combining two RI-based algorithms and selecting a suitable iteration number at each pixel. These are performed based on a unified criterion that evaluates the validity of an RI-based algorithm. Experimental comparisons using standard color image datasets demonstrate that ARI can improve existing RI-based algorithms by more than 0.6 dB in the color peak signal-to-noise ratio and can outperform state-of-the-art algorithms based on training images. We further extend ARI for a multispectral filter array, in which more than three spectral bands are arrayed, and demonstrate that ARI can achieve state-of-the-art performance also for the task of multispectral image demosaicking.

  13. Stereo matching and view interpolation based on image domain triangulation.

    Science.gov (United States)

    Fickel, Guilherme Pinto; Jung, Claudio R; Malzbender, Tom; Samadani, Ramin; Culbertson, Bruce

    2013-09-01

    This paper presents a new approach for stereo matching and view interpolation problems based on triangular tessellations suitable for a linear array of rectified cameras. The domain of the reference image is initially partitioned into triangular regions using edge and scale information, aiming to place vertices along image edges and increase the number of triangles in textured regions. A region-based matching algorithm is then used to find an initial disparity for each triangle, and a refinement stage is applied to change the disparity at the vertices of the triangles, generating a piecewise linear disparity map. A simple post-processing procedure is applied to connect triangles with similar disparities generating a full 3D mesh related to each camera (view), which are used to generate new synthesized views along the linear camera array. With the proposed framework, view interpolation reduces to the trivial task of rendering polygonal meshes, which can be done very fast, particularly when GPUs are employed. Furthermore, the generated views are hole-free, unlike most point-based view interpolation schemes that require some kind of post-processing procedures to fill holes.

  14. Interpolation between Airy and Poisson statistics for unitary chiral non-Hermitian random matrix ensembles

    International Nuclear Information System (INIS)

    Akemann, G.; Bender, M.

    2010-01-01

    We consider a family of chiral non-Hermitian Gaussian random matrices in the unitarily invariant symmetry class. The eigenvalue distribution in this model is expressed in terms of Laguerre polynomials in the complex plane. These are orthogonal with respect to a non-Gaussian weight including a modified Bessel function of the second kind, and we give an elementary proof for this. In the large n limit, the eigenvalue statistics at the spectral edge close to the real axis are described by the same family of kernels interpolating between Airy and Poisson that was recently found by one of the authors for the elliptic Ginibre ensemble. We conclude that this scaling limit is universal, appearing for two different non-Hermitian random matrix ensembles with unitary symmetry. As a second result we give an equivalent form for the interpolating Airy kernel in terms of a single real integral, similar to representations for the asymptotic kernel in the bulk and at the hard edge of the spectrum. This makes its structure as a one-parameter deformation of the Airy kernel more transparent.

  15. A General 2D Meshless Interpolating Boundary Node Method Based on the Parameter Space

    Directory of Open Access Journals (Sweden)

    Hongyin Yang

    2017-01-01

    Full Text Available The presented study proposed an improved interpolating boundary node method (IIBNM) for 2D potential problems. The improved interpolating moving least-squares (IIMLS) method was applied to construct the shape functions, which possess the delta-function property, so that boundary conditions could be imposed directly. In addition, any weight function used in the moving least-squares (MLS) method is also applicable in the IIMLS method. Boundary cells are required in the computation of the boundary integrals, and additional discretization error cannot be avoided if traditional cells are used to approximate the geometry. The present study applied parametric cells, created in the parameter space, to preserve the exact geometry regardless of the number of cells. Only the number of nodes on the boundary is required as additional information for boundary node construction. Most importantly, the IIMLS method can be applied in the parameter space to construct shape functions without the requirement of additional computations for the curve length.

  16. Joint seismic data denoising and interpolation with double-sparsity dictionary learning

    Science.gov (United States)

    Zhu, Lingchen; Liu, Entao; McClellan, James H.

    2017-08-01

    Seismic data quality is vital to geophysical applications, so that methods of data recovery, including denoising and interpolation, are common initial steps in the seismic data processing flow. We present a method to perform simultaneous interpolation and denoising, which is based on double-sparsity dictionary learning. This extends previous work that was for denoising only. The original double-sparsity dictionary learning algorithm is modified to track the traces with missing data by defining a masking operator that is integrated into the sparse representation of the dictionary. A weighted low-rank approximation algorithm is adopted to handle the dictionary updating as a sparse recovery optimization problem constrained by the masking operator. Compared to traditional sparse transforms with fixed dictionaries that lack the ability to adapt to complex data structures, the double-sparsity dictionary learning method learns the signal adaptively from selected patches of the corrupted seismic data, while preserving compact forward and inverse transform operators. Numerical experiments on synthetic seismic data indicate that this new method preserves more subtle features in the data set without introducing pseudo-Gibbs artifacts when compared to other directional multi-scale transform methods such as curvelets.
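
    The role of the masking operator can be illustrated with a fixed cosine dictionary standing in for the learned double-sparsity dictionary: the mask selects the dictionary rows corresponding to observed traces, and the fit is performed against those rows only, so the synthesis step fills in the missing traces. The paper learns the dictionary adaptively and enforces sparsity; this sketch does neither, and all names are illustrative.

```python
import numpy as np

def masked_lstsq_interp(y, mask, n_basis):
    """Fill missing samples by fitting a small cosine dictionary to the
    observed samples only (the mask selects the dictionary rows)."""
    n = len(y)
    t = np.arange(n)
    D = np.cos(np.pi * np.outer(t + 0.5, np.arange(n_basis)) / n)  # fixed dictionary
    coeffs, *_ = np.linalg.lstsq(D[mask], y[mask], rcond=None)     # fit observed rows only
    return D @ coeffs                                              # synthesize all samples

n = 64
t = np.arange(n)
clean = np.cos(np.pi * (t + 0.5) * 6 / n)   # a signal the dictionary can represent
mask = np.ones(n, dtype=bool)
mask[::4] = False                           # knock out every fourth trace
rec = masked_lstsq_interp(clean, mask, 16)
err = np.max(np.abs(rec - clean))           # the missing traces are restored
```

    Replacing the fixed dictionary with one learned from patches of the corrupted data, under a sparsity constraint, is what the double-sparsity method adds on top of this masking mechanism.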

  17. Comparison of different interpolation operators including nonlinear subdivision schemes in the simulation of particle trajectories

    International Nuclear Information System (INIS)

    Bensiali, Bouchra; Bodi, Kowsik; Ciraolo, Guido; Ghendrih, Philippe; Liandrat, Jacques

    2013-01-01

    In this work, we compare different interpolation operators in the context of particle tracking, with an emphasis on situations involving velocity fields with steep gradients. Since, in this case, most classical methods give rise to the Gibbs phenomenon (generation of oscillations near discontinuities), we present new methods for particle tracking based on subdivision schemes, especially on the Piecewise Parabolic Harmonic (PPH) scheme, which has shown its advantage in image processing in the presence of strong contrasts. First, an analytic univariate case with a discontinuous velocity field is considered in order to highlight the effect of the Gibbs phenomenon on trajectory calculation. Theoretical results are provided. Then, we show, regardless of the interpolation method, the need to use a conservative approach when integrating a conservative problem with a velocity field deriving from a potential. Finally, the PPH scheme is applied in a more realistic case of a time-dependent potential encountered in the edge turbulence of magnetically confined plasmas, to compare the propagation of density structures (turbulence bursts) with the dynamics of test particles. This study highlights the difference between particle transport and density transport in turbulent fields.

  18. Interpolating a consumption variable for scaling and generalizing potential population pressure on urbanizing natural areas

    Science.gov (United States)

    Varanka, Dalia; Jiang, Bin; Yao, Xiaobai

    2010-01-01

    Measures of population pressure, referring in general to the stress placed upon the environment by human consumption of resources, are imperative for environmental sustainability studies and management. Development based on resource consumption is the predominant factor of population pressure. This paper presents a spatial model of population pressure by linking consumption associated with regional urbanism and ecosystem services. Maps representing the relative geographic degree and extent of natural resource consumption, and the degree and extent of impacts on surrounding areas, are new, and this research represents the theoretical work toward that goal. With development, such maps offer a visualization tool for planners of various services and amenities for people, and for conservation planning by ecologists. Urbanization is commonly generalized by census numbers or impervious surface area. The potential geographical extent of urbanism encompasses the environmental resources of the surrounding region that sustain cities. This extent is interpolated using kriging of a variable based on population wealth data from the U.S. Census Bureau. When overlaid with land-use/land-cover data, the results indicate that the greatest estimates of population pressure fall within mixed forest areas. Mixed forest areas result from the spread of cedar woods in previously disturbed areas where further disturbance is then suppressed. Low-density areas, such as suburbanized land and abandoned farmland, are characteristic of mixed forest areas.
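As a minimal sketch of the interpolation step, ordinary kriging predicts a value at an unsampled location as a weighted sum of scattered samples, with the weights obtained from a variogram system; the Gaussian variogram and the sample data below are invented stand-ins for the census-based variable.

```python
import numpy as np

def gamma(h, sill=1.0, corr_range=2.0):
    """Gaussian semivariogram without a nugget, so gamma(0) = 0."""
    return sill * (1.0 - np.exp(-(h / corr_range) ** 2))

def ordinary_kriging(pts, vals, query):
    """Predict the value at `query` as a weighted sum of the scattered samples."""
    n = len(pts)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = gamma(d)
    A[n, :n] = A[:n, n] = 1.0           # Lagrange row/column: weights sum to 1
    A[n, n] = 0.0
    b = np.append(gamma(np.linalg.norm(pts - query, axis=1)), 1.0)
    w = np.linalg.solve(A, b)[:n]       # kriging weights
    return float(w @ vals)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0, 4.0])
center = ordinary_kriging(pts, vals, np.array([0.5, 0.5]))   # symmetric: mean of samples
```

With no nugget effect the predictor is exact at the data points, and at the centre of this symmetric configuration the four weights are equal.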

  19. An energy integrated, multi-microgrid, MILP (mixed-integer linear programming) approach for residential distributed energy system planning – A South Australian case-study

    International Nuclear Information System (INIS)

    Wouters, Carmen; Fraga, Eric S.; James, Adrian M.

    2015-01-01

    The integration of distributed generation units and microgrids in the current grid infrastructure requires an efficient and cost effective local energy system design. A mixed-integer linear programming model is presented to identify such optimal design. The electricity as well as the space heating and cooling demands of a small residential neighbourhood are satisfied through the consideration and combined use of distributed generation technologies, thermal units and energy storage with an optional interconnection with the central grid. Moreover, energy integration is allowed in the form of both optimised pipeline networks and microgrid operation. The objective is to minimise the total annualised cost of the system to meet its yearly energy demand. The model integrates the operational characteristics and constraints of the different technologies for several scenarios in a South Australian setting and is implemented in GAMS. The impact of energy integration is analysed, leading to the identification of key components for residential energy systems. Additionally, a multi-microgrid concept is introduced to allow for local clustering of households within neighbourhoods. The robustness of the model is shown through sensitivity analysis, up-scaling and an effort to address the variability of solar irradiation. - Highlights: • Distributed energy system planning is employed on a small residential scale. • Full energy integration is employed based on microgrid operation and tri-generation. • An MILP for local clustering of households in multi-microgrids is developed. • Micro combined heat and power units are key components for residential microgrids
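The core structure of such a model, binary build decisions plus a demand-coverage constraint under a cost objective, can be illustrated with a toy enumeration; a real model would use a MILP solver such as the GAMS formulation mentioned above, and the units, capacities, and costs here are invented.

```python
from itertools import product

# Hypothetical candidate units for one neighbourhood: (name, capacity kW, annualised cost)
units = [("micro-CHP", 5.0, 10.0), ("PV", 3.0, 5.0), ("heat pump", 4.0, 6.0)]
peak_demand = 7.0  # kW, invented

best = None
for build in product([0, 1], repeat=len(units)):          # binary build decisions
    cap = sum(b * u[1] for b, u in zip(build, units))
    cost = sum(b * u[2] for b, u in zip(build, units))
    if cap >= peak_demand and (best is None or cost < best[0]):
        best = (cost, build)

# best = (annualised cost, build vector) of the cheapest feasible portfolio
```

Exhaustive enumeration is only viable for a handful of binary variables; the point is the shape of the problem (binary investment, linear constraints, linear cost), which is exactly what a MILP solver handles at scale.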

  20. A Comparison and Integration of MiSeq and MinION Platforms for Sequencing Single Source and Mixed Mitochondrial Genomes.

    Directory of Open Access Journals (Sweden)

    Michael R Lindberg

    Full Text Available Single source and multiple donor (mixed samples of human mitochondrial DNA were analyzed and compared using the MinION and the MiSeq platforms. A generalized variant detection strategy was employed to provide a cursory framework for evaluating the reliability and accuracy of mitochondrial sequences produced by the MinION. The feasibility of long-read phasing was investigated to establish its efficacy in quantitatively distinguishing and deconvolving individuals in a mixture. Finally, a proof-of-concept was demonstrated by integrating both platforms in a hybrid assembly that leverages solely mixture data to accurately reconstruct full mitochondrial genomes.

  1. Two-loop master integrals for the mixed EW-QCD virtual corrections to Drell-Yan scattering

    Energy Technology Data Exchange (ETDEWEB)

    Bonciani, Roberto ["La Sapienza" Univ., Rome (Italy). Dipt. di Fisica; INFN Sezione Roma (Italy); Di Vita, Stefano [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Mastrolia, Pierpaolo [Max-Planck-Institut fuer Physik, Muenchen (Germany); Padova Univ. (Italy). Dipt. di Fisica e Astronomia; INFN Sezione di Padova (Italy); Schubert, Ulrich [Max-Planck-Institut fuer Physik, Muenchen (Germany)

    2016-04-15

    We present the calculation of the master integrals needed for the two-loop QCD × EW corrections to q + q̄ → l⁻ + l⁺ and q + q̄′ → l⁻ + ν̄, for massless external particles. We treat W and Z bosons as degenerate in mass. We identify three types of diagrams, according to the presence of massive internal lines: the no-mass type, the one-mass type, and the two-mass type, where all massive propagators, when occurring, contain the same mass value. We find a basis of 49 master integrals and evaluate them with the method of the differential equations. The Magnus exponential is employed to choose a set of master integrals that obeys a canonical system of differential equations. Boundary conditions are found either by matching the solutions onto simpler integrals in special kinematic configurations, or by requiring the regularity of the solution at pseudo-thresholds. The canonical master integrals are finally given as Taylor series around d=4 space-time dimensions, up to order four, with coefficients given in terms of iterated integrals, respectively up to weight four.

  2. The bases for the use of interpolation in helical computed tomography: an explanation for radiologists

    International Nuclear Information System (INIS)

    Garcia-Santos, J. M.; Cejudo, J.

    2002-01-01

    In contrast to conventional computed tomography (CT), helical CT requires the application of interpolators to achieve image reconstruction, because the projections processed by the computer are not situated in the same plane. Since the introduction of helical CT, a number of interpolators have been designed in an attempt to keep the thickness of the reconstructed section as close as possible to the thickness of the X-ray beam. The purpose of this article is to discuss the function of these interpolators, stressing the advantages and considering the possible drawbacks of higher-order curved interpolators with respect to standard linear interpolators. (Author) 7 refs
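The simplest such interpolator, 360° linear interpolation, can be sketched as follows: for each projection angle, the value at the reconstruction plane is a linear weighting of the two same-angle measurements taken one gantry rotation apart. This is a schematic illustration of the principle, not any vendor's algorithm.

```python
def li_360(z_s, z_a, p_a, z_b, p_b):
    """Linearly interpolate a projection value at slice position z_s from the
    same-angle projections measured at table positions z_a < z_b, which in a
    helical scan lie one full gantry rotation apart."""
    w = (z_s - z_a) / (z_b - z_a)      # fractional distance of the slice plane
    return (1.0 - w) * p_a + w * p_b

# Slice plane halfway between two measured positions: the simple average
p = li_360(1.5, 1.0, 10.0, 2.0, 20.0)
```

Higher-order (curved) interpolators replace this two-point weighting with more projections and a smoother kernel, which is exactly the trade-off the article discusses.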

  3. A 45 ps time digitizer with a two-phase clock and dual-edge two-stage interpolation in a field programmable gate array device

    Science.gov (United States)

    Szplet, R.; Kalisz, J.; Jachna, Z.

    2009-02-01

    We present a time digitizer having 45 ps resolution, integrated in a field programmable gate array (FPGA) device. The time interval measurement is based on the two-stage interpolation method. A dual-edge two-phase interpolator is driven by the on-chip synthesized 250 MHz clock with precise phase adjustment. An improved dual-edge double synchronizer was developed to control the main counter. The nonlinearity of the digitizer's transfer characteristic is identified and utilized by the dedicated hardware code processor for the on-the-fly correction of the output data. Application of the presented ideas has resulted in a measurement uncertainty of the digitizer below 70 ps RMS over time intervals ranging from 0 to 1 s. The use of two-stage interpolation and a fast FIFO memory has allowed us to obtain a maximum measurement rate of five million measurements per second.
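The arithmetic behind counter-plus-interpolator time measurement can be sketched as follows: a coarse counter counts whole clock periods, and fine interpolators measure the fractional offsets of the start and stop edges within a period. The numbers are invented for illustration.

```python
def measured_interval(n_clock, t_clk, t_start, t_stop):
    """Interpolating time measurement: T = N*Tclk + t1 - t2, where N is the
    coarse count of whole clock periods and t1/t2 are the fine interpolator
    readings at the start and stop edges."""
    return n_clock * t_clk + t_start - t_stop

# 250 MHz clock -> 4 ns period; the interpolators resolve fractions of a period
T = measured_interval(25, 4e-9, 1.3e-9, 0.8e-9)   # 100 ns coarse + 0.5 ns fine
```

The fine terms are what the two-stage interpolator provides; the coarse counter alone would quantize the result to whole 4 ns periods.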

  4. A 45 ps time digitizer with a two-phase clock and dual-edge two-stage interpolation in a field programmable gate array device

    International Nuclear Information System (INIS)

    Szplet, R; Kalisz, J; Jachna, Z

    2009-01-01

    We present a time digitizer having 45 ps resolution, integrated in a field programmable gate array (FPGA) device. The time interval measurement is based on the two-stage interpolation method. A dual-edge two-phase interpolator is driven by the on-chip synthesized 250 MHz clock with precise phase adjustment. An improved dual-edge double synchronizer was developed to control the main counter. The nonlinearity of the digitizer's transfer characteristic is identified and utilized by the dedicated hardware code processor for the on-the-fly correction of the output data. Application of the presented ideas has resulted in a measurement uncertainty of the digitizer below 70 ps RMS over time intervals ranging from 0 to 1 s. The use of two-stage interpolation and a fast FIFO memory has allowed us to obtain a maximum measurement rate of five million measurements per second.

  5. Low power and high accuracy spike sorting microprocessor with on-line interpolation and re-alignment in 90 nm CMOS process.

    Science.gov (United States)

    Chen, Tung-Chien; Ma, Tsung-Chuan; Chen, Yun-Yu; Chen, Liang-Gee

    2012-01-01

    Accurate spike sorting is an important issue for neuroscientific and neuroprosthetic applications. The sorting of spikes depends on the features extracted from the neural waveforms, and a better sorting performance usually comes with a higher sampling rate (SR). However, for long-duration experiments on freely moving subjects, miniaturized and wireless neural recording ICs are the current trend, and a compromise on sorting accuracy is usually made by adopting a lower SR for lower power consumption. In this paper, we implement an on-chip spike sorting processor with integrated interpolation hardware in order to improve the performance in terms of power versus accuracy. According to the fabrication results in a 90 nm process, if interpolation is appropriately performed during spike sorting, a system operated at an SR of 12.5 k samples per second (sps) can outperform one without interpolation at 25 ksps in both accuracy and power.

  6. Development and operation of an integrated sampling probe and gas analyzer for turbulent mixing studies in complex supersonic flows

    Science.gov (United States)

    Wiswall, John D.

    For many aerospace applications, mixing enhancement between co-flowing streams has been identified as a critical and enabling technology. Due to short fuel residence times in scramjet combustors, combustion is limited by the molecular mixing of hydrogen (fuel) and air. Determining the mixedness of fuel and air in these complex supersonic flowfields is critical to the advancement of novel injection schemes currently being developed at UTA in collaboration with NASA Langley and intended to be used on a future two-stage to orbit (~Mach 16) hypersonic air-breathing vehicle for space access. Expanding on previous work, an instrument has been designed, fabricated, and tested in order to measure mean concentrations of injected helium (a passive scalar used instead of hazardous hydrogen) and to quantitatively characterize the nature of the high-frequency concentration fluctuations encountered in the compressible, turbulent, and high-speed (up to Mach 3.5) complex flows associated with the new supersonic injection schemes. This important high-frequency data is not yet attainable when employing other techniques such as Laser Induced Fluorescence, Filtered Rayleigh Scattering or mass spectroscopy in the same complex supersonic flows. The probe operates by exploiting the difference between the thermodynamic properties of two species through independent massflow measurements and calibration. The probe samples isokinetically from the flowfield's area of interest and the helium concentration may be uniquely determined by hot-film anemometry and internally measured stagnation conditions. The final design has a diameter of 0.25" and is only 2.22" long. The overall accuracy of the probe is 3% in molar fraction of helium. The frequency response of mean concentration measurements is estimated at 103 Hz, while high-frequency hot-film measurements were conducted at 60 kHz. 
Additionally, the work presents an analysis of the probe's internal mixing effects and the effects of the spatial

  7. Study on the algorithm for Newton-Raphson iteration interpolation of NURBS curve and simulation

    Science.gov (United States)

    Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng

    2017-04-01

    The Newton-Raphson iterative interpolation method for NURBS curves suffers from several problems: long interpolation times, complicated calculations, and a step error along the NURBS curve that is not easily controlled. This paper presents a study, with simulation, of an algorithm for Newton-Raphson iterative interpolation of NURBS curves, in which Newton-Raphson iteration is used to calculate the interpolation points (xi, yi, zi). Simulation results show that the algorithm is correct, that it satisfies the requirements of NURBS curve interpolation, and that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems.
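The Newton-Raphson step can be sketched for any parametric curve C(u); here a cubic Bézier segment stands in for a NURBS curve, and the iteration solves for the parameter at which the chord from the previous interpolation point reaches the commanded step length. Control points, starting guess, and step size are invented.

```python
import numpy as np

# A cubic Bezier segment stands in for the NURBS curve (control points invented)
P = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, -1.0], [3.0, 0.0]])

def C(u):
    """Point on the curve at parameter u in [0, 1]."""
    b = np.array([(1 - u)**3, 3*u*(1 - u)**2, 3*u**2*(1 - u), u**3])
    return b @ P

def dC(u):
    """Derivative of the curve with respect to u."""
    b = np.array([-3*(1 - u)**2,
                  3*(1 - u)**2 - 6*u*(1 - u),
                  6*u*(1 - u) - 3*u**2,
                  3*u**2])
    return b @ P

def next_u(u_prev, step, iters=8):
    """Newton-Raphson: solve |C(u) - C(u_prev)| = step for the next parameter."""
    u, p0 = u_prev + 1e-3, C(u_prev)      # start slightly ahead of u_prev
    for _ in range(iters):
        d = C(u) - p0
        f = np.linalg.norm(d) - step
        fp = (d @ dC(u)) / max(np.linalg.norm(d), 1e-12)   # derivative of |d| w.r.t. u
        u -= f / fp
    return u

u1 = next_u(0.0, 0.05)   # parameter of the next interpolation point
```

Each CNC interpolation period would repeat this solve to advance along the curve by one (chord-length) step; the quadratic convergence of Newton's method keeps the per-step iteration count small.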

  8. A space-time mixed galerkin marching-on-in-time scheme for the time-domain combined field integral equation

    KAUST Repository

    Beghein, Yves

    2013-03-01

    The time domain combined field integral equation (TD-CFIE), which is constructed from a weighted sum of the time domain electric and magnetic field integral equations (TD-EFIE and TD-MFIE) for analyzing transient scattering from closed perfect electrically conducting bodies, is free from spurious resonances. The standard marching-on-in-time technique for discretizing the TD-CFIE uses Galerkin and collocation schemes in space and time, respectively. Unfortunately, the standard scheme is theoretically not well understood: stability and convergence have been proven for only one class of space-time Galerkin discretizations. Moreover, existing discretization schemes are nonconforming, i.e., the TD-MFIE contribution is tested with divergence conforming functions instead of curl conforming functions. We therefore introduce a novel space-time mixed Galerkin discretization for the TD-CFIE. A family of temporal basis and testing functions with arbitrary order is introduced. It is explained how the corresponding interactions can be computed efficiently by existing collocation-in-time codes. The spatial mixed discretization is made fully conforming and consistent by leveraging both Rao-Wilton-Glisson and Buffa-Christiansen basis functions and by applying the appropriate bi-orthogonalization procedures. The combination of both techniques is essential when high accuracy over a broad frequency band is required. © 2012 IEEE.

  9. Preliminary analysis in support to the experimental activities on the mixing process in the pressurizer of a small modular reactor integrated primary system

    Energy Technology Data Exchange (ETDEWEB)

    Nascimento, Samira R.V.; Lira, Carlos A.B.O.; Bezerra, Jair L.; Silva, Mario A.B.; Silva, Willdauany C.F., E-mail: samiraruana@gmail.com [Universidade Federal de Pernambuco (DEN/UFPE), Recife, PE (Brazil). Departamento de Energia Nuclear; Lapa, Celso M.F., E-mail: lapa@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Lima, Fernando R.A., E-mail: falima@crcn.gov.br [Centro Regional de Ciencias Nucleares (CRCN/CNEN-NE), Recife, PE (Brazil); Otero, Maria E.M.; Hernandez, Carlos R.G., E-mail: mmontesi@instec.cu [Department of Nuclear Engineering, InSTEC/CUBA, Higher Institute of Technology and Applied Science, La Habana (Cuba)

    2015-07-01

    Nowadays, there is a renewed interest in the development of advanced/innovative small and medium sized modular reactors (SMRs). The SMRs are variants of the Generation IV systems and usually have attractive characteristics of simplicity and enhanced safety, and require limited financial resources. The concept of the integrated primary system reactor (IPSR) is characterized by the inclusion of the entire primary system within a single pressure vessel, including the steam generator and pressurizer. Because the pressurizer is located in the top of the reactor vessel, this configuration involves changes in operating techniques, and it is necessary to investigate the boron mixing. The present work represents a contribution to the design of an experimental facility planned to provide data relevant to the mixing phenomena in the pressurizer of a compact modular reactor, in particular to evaluate the boron concentration in the surge orifices. In-surge and out-surge flows are simulated in a facility scaled 1:200 with respect to one quarter of the pressurizer. The facility behavior is studied with one inlet and one outlet in the test section, representing one in-surge and one out-surge of the pressurizer of a small modular reactor integrated primary system. (author)

  10. "It was the whole picture" a mixed methods study of successful components in an integrated wellness service in North East England.

    Science.gov (United States)

    Cheetham, M; Van der Graaf, P; Khazaeli, B; Gibson, E; Wiseman, A; Rushmer, R

    2018-03-22

    A growing number of Local Authorities (LAs) have introduced integrated wellness services as part of efforts to deliver cost effective, preventive services that address the social determinants of health. This study examined which elements of an integrated wellness service in the north east of England were effective in improving health and wellbeing (HWB). The study used a mixed-methods approach. In-depth semi-structured interviews (IVs) were conducted with integrated wellness service users (n = 25) and focus groups (FGs) with group based service users (n = 14) and non-service users (n = 23) to gather the views of stakeholders. Findings are presented here alongside analysis of routine monitoring data. The different data were compared to examine what each data source revealed about the effectiveness of the service. Findings suggest that integrated wellness services work by addressing the social determinants of health and respond to multiple complex health and social concerns rather than single issues. The paper identifies examples of 'active ingredients' at the heart of the programme, such as sustained relationships, peer support and confidence building, as well as the activities through which changes take place, such as sports and leisure opportunities which in turn encourage social interaction. Wider wellbeing outcomes, including reduced social isolation and increased self-efficacy are also reported. Practical and motivational support helped build community capacity by encouraging community groups to access funding, helped navigate bureaucratic systems, and promoted understanding of marginalised communities. Fully integrated wellness services could support progression opportunities through volunteering and mentoring. An integrated wellness service that offers a holistic approach was valued by service users and allowed them to address complex issues simultaneously. Few of the reported health gains were captured in routine data. Quantitative and

  11. Marketing Mix Formulation for Higher Education: An Integrated Analysis Employing Analytic Hierarchy Process, Cluster Analysis and Correspondence Analysis

    Science.gov (United States)

    Ho, Hsuan-Fu; Hung, Chia-Chi

    2008-01-01

    Purpose: The purpose of this paper is to examine how a graduate institute at National Chiayi University (NCYU), by using a model that integrates analytic hierarchy process, cluster analysis and correspondence analysis, can develop effective marketing strategies. Design/methodology/approach: This is primarily a quantitative study aimed at…

  12. A Mixed Learning Approach to Integrating Digital Signal Processing Laboratory Exercises into a Non-Lab Junior Year DSP Course

    Science.gov (United States)

    McPheron, Benjamin D.; Thangaraj, Charles V.; Thomas, Charles R.

    2017-01-01

    Laboratory courses can be difficult to fit into an engineering program at a liberal arts-focused university, which requires students to be exposed to appropriate breadth, as well as sufficient depth in their engineering education. One possible solution to this issue is to integrate laboratory exercises with lecture in a "studio" format,…

  13. Integration

    DEFF Research Database (Denmark)

    Emerek, Ruth

    2004-01-01

    The contribution discusses the different conceptions of integration in Denmark, and what may be understood by successful integration.

  14. Impact of dust and smoke mixing on column-integrated aerosol properties from observations during a severe wildfire episode over Valencia (Spain).

    Science.gov (United States)

    Gómez-Amo, J L; Estellés, V; Marcos, C; Segura, S; Esteve, A R; Pedrós, R; Utrillas, M P; Martínez-Lozano, J A

    2017-12-01

    The most destructive wildfire experienced in Spain since 2004 occurred close to Valencia in summer 2012. A total of 48,500 ha were affected by two wildfires, which were mostly active during 29-30 June. The fresh smoke plume was detected at the Burjassot measurement station simultaneously with a severe dust episode. We propose an empirical method to evaluate the dust and smoke mixing and its impact on the microphysical and optical properties. For this, we combine direct-sun measurements from a Cimel CE-318 sun-photometer with an inversion methodology and Mie theory to derive the column-integrated size distribution, single scattering albedo (SSA) and asymmetry parameter (g). The mixing of dust and smoke greatly increased the aerosol load and modified the background aerosol properties. Mineral dust increased the aerosol optical depth (AOD) up to 1, while the smoke plume caused an extreme AOD peak of 8. The size distribution of the mixture was bimodal, with fine and coarse modes dominated by the smoke particles and mineral dust, respectively. The SSA and g for the dust-smoke mixture show a marked sensitivity to the smoke mixing ratio, mainly at longer wavelengths. Mineral dust and smoke share a similar SSA at 440 nm (~0.90), but with opposite spectral dependencies. A small dust contribution to the total AOD substantially affects the SSA of the mixture: the SSA at 1020 nm increases from 0.87 to 0.95. This leads to a different spectral behaviour of SSA, which changes from positive (smoke plume) to negative (dust), depending on the dust and smoke mixing ratio. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. An improved local radial point interpolation method for transient heat conduction analysis

    Science.gov (United States)

    Wang, Feng; Lin, Gao; Zheng, Bao-Jing; Hu, Zhi-Qiang

    2013-06-01

    The smoothing thin plate spline (STPS) interpolation using the penalty function method according to the optimization theory is presented to deal with transient heat conduction problems. The smoothness conditions of the shape functions and their derivatives can be satisfied so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of the transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM), as the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented in this paper to demonstrate the applicability and accuracy of the present approach compared with the traditional thin plate spline (TPS) radial basis functions.
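Classical (non-smoothing) thin plate spline interpolation, which the STPS builds on, can be sketched as a single linear solve: the kernel r² log r plus an affine term gives exact interpolation of scattered 2D data. The sample points below are invented.

```python
import numpy as np

def tps_phi(r):
    """Thin plate spline kernel phi(r) = r^2 log r, with phi(0) = 0."""
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(r > 0.0, r**2 * np.log(r), 0.0)

def tps_fit(pts, vals):
    """Solve for kernel weights w and affine coefficients a (exact interpolation)."""
    n = len(pts)
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    P = np.hstack([np.ones((n, 1)), pts])               # affine part: 1, x, y
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = tps_phi(r)
    A[:n, n:] = P                                       # affine block
    A[n:, :n] = P.T                                     # orthogonality constraints
    sol = np.linalg.solve(A, np.concatenate([vals, np.zeros(3)]))
    return sol[:n], sol[n:]

def tps_eval(pts, w, a, q):
    return float(w @ tps_phi(np.linalg.norm(pts - q, axis=1))
                 + a[0] + a[1] * q[0] + a[2] * q[1])

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.3, 0.7]])
vals = 1.0 + 2.0 * pts[:, 0] + 3.0 * pts[:, 1]          # samples of a linear field
w, a = tps_fit(pts, vals)
```

The affine term makes the interpolant reproduce linear fields exactly (the kernel weights vanish); the smoothing variant in the paper relaxes the exact-interpolation constraint with a penalty.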

  16. An improved local radial point interpolation method for transient heat conduction analysis

    International Nuclear Information System (INIS)

    Wang Feng; Lin Gao; Hu Zhi-Qiang; Zheng Bao-Jing

    2013-01-01

    The smoothing thin plate spline (STPS) interpolation using the penalty function method according to the optimization theory is presented to deal with transient heat conduction problems. The smoothness conditions of the shape functions and their derivatives can be satisfied so that distortions hardly occur. Local weak forms are developed using the weighted residual method locally from the partial differential equations of the transient heat conduction. Here the Heaviside step function is used as the test function in each sub-domain to avoid the need for a domain integral. Essential boundary conditions can be implemented as in the finite element method (FEM), as the shape functions possess the Kronecker delta property. The traditional two-point difference method is selected for the time discretization scheme. Three selected numerical examples are presented in this paper to demonstrate the applicability and accuracy of the present approach compared with the traditional thin plate spline (TPS) radial basis functions.

  17. Optimal interpolation method for intercomparison of atmospheric measurements.

    Science.gov (United States)

    Ridolfi, Marco; Ceccherini, Simone; Carli, Bruno

    2006-04-01

    Intercomparison of atmospheric measurements is often a difficult task because of the different spatial response functions of the experiments considered. We propose a new method for comparison of two atmospheric profiles characterized by averaging kernels with different vertical resolutions. The method minimizes the smoothing error induced by the differences in the averaging kernels by exploiting an optimal interpolation rule to map one profile into the retrieval grid of the other. Compared with the techniques published so far, this method permits one to retain the vertical resolution of the less-resolved profile involved in the intercomparison.

  18. Advantage of Fast Fourier Interpolation for laser modeling

    International Nuclear Information System (INIS)

    Epatko, I.V.; Serov, R.V.

    2006-01-01

    The abilities of a new algorithm, the 2-dimensional Fast Fourier Interpolation (FFI) with magnification factor (zoom) 2^n, whose purpose is to improve the spatial resolution when necessary, are analyzed in detail. The FFI procedure is useful when the diaphragm/aperture size is less than half of the current simulation scale. The computation noise due to the FFI procedure is less than 10^-6. The additional time for FFI is approximately equal to one Fast Fourier Transform execution time. For some applications using the FFI procedure, the execution time decreases by a factor of 10^4 compared with other laser simulation codes. (authors)
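One-dimensional FFT interpolation with zoom factor 2 can be sketched by zero-padding the spectrum; this is a generic illustration of the principle, not the authors' 2D FFI code.

```python
import numpy as np

def fft_interp(x, zoom):
    """Band-limited interpolation of a real signal by spectral zero-padding.
    (A signal with energy exactly at the Nyquist bin would need that bin
    split specially; the test signal below stays safely below Nyquist.)"""
    n = len(x)
    # irfft to a longer output zero-pads the spectrum; rescale for the length change
    return np.fft.irfft(np.fft.rfft(x), zoom * n) * zoom

n = 16
x = np.cos(2 * np.pi * 3 * np.arange(n) / n)   # band-limited test signal
y = fft_interp(x, 2)                           # twice the sampling density
```

Every second sample of the interpolated signal coincides with the original samples, which is the defining property of this interpolation.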

  19. Rate of convergence of Bernstein quasi-interpolants

    International Nuclear Information System (INIS)

    Diallo, A.T.

    1995-09-01

    We show that if f ∈ C[0,1] and B_n^(2r-1) f (integer r ≥ 1) is the Bernstein quasi-interpolant defined by Sablonnière, then ||B_n^(2r-1) f − f||_C[0,1] ≤ ω_φ^(2r)(f, 1/√n), where ω_φ^(2r) is the Ditzian-Totik modulus of smoothness with φ(x) = √(x(1−x)), x ∈ [0,1]. (author). 6 refs
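For orientation, the classical Bernstein operator underlying these quasi-interpolants is easy to evaluate directly. It reproduces linear functions exactly, and for the quadratic monomial B_n(t²)(x) = x² + x(1−x)/n, facts the sketch below checks numerically.

```python
from math import comb

def bernstein(f, n, x):
    """Classical Bernstein operator: B_n f(x) = sum_k f(k/n) C(n,k) x^k (1-x)^(n-k)."""
    return sum(f(k / n) * comb(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1))

lin = bernstein(lambda t: 2 * t + 1, 10, 0.3)   # linear functions are reproduced
quad = bernstein(lambda t: t * t, 10, 0.5)      # x^2 + x(1-x)/n = 0.25 + 0.025
```

The quasi-interpolants B_n^(2r-1) of the record are corrections of this operator that raise the approximation order, which is what the quoted Ditzian-Totik estimate quantifies.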

  20. Data mining techniques in sensor networks summarization, interpolation and surveillance

    CERN Document Server

    Appice, Annalisa; Fumarola, Fabio; Malerba, Donato

    2013-01-01

    Sensor networks comprise a number of sensors installed across a spatially distributed network; the sensors gather information and periodically feed a central server with the measured data. The server monitors the data, issues possible alarms and computes fast aggregates. As data analysis requests may concern both present and past data, the server is forced to store the entire stream. But the limited storage capacity of a server may reduce the amount of data stored on the disk. One solution is to compute summaries of the data as it arrives, and to use these summaries to interpolate the real data.

  1. Hörmander spaces, interpolation, and elliptic problems

    CERN Document Server

    Mikhailets, Vladimir A; Malyshev, Peter V

    2014-01-01

    The monograph gives a detailed exposition of the theory of general elliptic operators (scalar and matrix) and elliptic boundary value problems in Hilbert scales of Hörmander function spaces. This theory was constructed by the authors in a number of papers published in 2005-2009. It is distinguished by a systematic use of the method of interpolation with a functional parameter of abstract Hilbert spaces and Sobolev inner product spaces. This method, the theory and their applications are expounded for the first time in the monographic literature. The monograph is written in detail and in a

  2. Acceleration of Meshfree Radial Point Interpolation Method on Graphics Hardware

    International Nuclear Information System (INIS)

    Nakata, Susumu

    2008-01-01

    This article describes a parallel computational technique to accelerate radial point interpolation method (RPIM)-based meshfree method using graphics hardware. RPIM is one of the meshfree partial differential equation solvers that do not require the mesh structure of the analysis targets. In this paper, a technique for accelerating RPIM using graphics hardware is presented. In the method, the computation process is divided into small processes suitable for processing on the parallel architecture of the graphics hardware in a single instruction multiple data manner.

  3. Calibration method of microgrid polarimeters with image interpolation.

    Science.gov (United States)

    Chen, Zhenyue; Wang, Xia; Liang, Rongguang

    2015-02-10

    Microgrid polarimeters have large advantages over conventional polarimeters because of their snapshot nature and because they have no moving parts. However, they also suffer from several error sources, such as fixed pattern noise (FPN), photon response nonuniformity (PRNU), pixel cross talk, and instantaneous field-of-view (IFOV) error. A characterization method is proposed to improve the measurement accuracy in the visible waveband. We first calibrate the camera with uniform illumination so that the response of the sensor is uniform over the entire field of view without IFOV error. Then a spline interpolation method is implemented to minimize IFOV error. Experimental results show the proposed method can effectively minimize the FPN and PRNU.

  4. Cardinal Basis Piecewise Hermite Interpolation on Fuzzy Data

    Directory of Open Access Journals (Sweden)

    H. Vosoughi

    2016-01-01

    Full Text Available A numerical method, with an explicit construction, for the interpolation of fuzzy data by the widely used fuzzy-valued piecewise Hermite polynomials is introduced here via the extension principle; in the general case it is based on the cardinal basis functions, which satisfy a vanishing property on successive intervals. We provide the numerical method in full detail, using linear-space notions for the calculations. To illustrate the method with computational examples, we take recourse to three prime cases: linear, cubic, and quintic.
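The cardinal Hermite construction interpolates both values and derivatives at the knots. For crisp (non-fuzzy) data, a minimal numpy sketch of piecewise cubic Hermite interpolation with the four cardinal basis functions might look as follows; the fuzzy-valued extension via the extension principle is not shown, and the function name is our own.

```python
import numpy as np

def hermite_interp(x, y, dy, xq):
    """Piecewise cubic Hermite interpolation.

    x, y, dy: knots (ascending), values and derivatives at the knots.
    xq: query points. Uses the four cardinal Hermite basis functions,
    which satisfy the vanishing property at the ends of each interval.
    """
    x, y, dy, xq = map(np.asarray, (x, y, dy, xq))
    i = np.clip(np.searchsorted(x, xq) - 1, 0, len(x) - 2)  # interval index
    h = x[i + 1] - x[i]
    t = (xq - x[i]) / h                      # local coordinate in [0, 1]
    h00 = 2 * t**3 - 3 * t**2 + 1            # cardinal basis: value at left knot
    h10 = t**3 - 2 * t**2 + t                # derivative at left knot
    h01 = -2 * t**3 + 3 * t**2               # value at right knot
    h11 = t**3 - t**2                        # derivative at right knot
    return h00 * y[i] + h10 * h * dy[i] + h01 * y[i + 1] + h11 * h * dy[i + 1]
```

With exact derivatives supplied, the sketch reproduces a cubic such as x³ exactly on every interval.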

  5. New extended interpolating operators for hadron correlation functions

    Energy Technology Data Exchange (ETDEWEB)

    Scardino, Francesco; Papinutto, Mauro [Roma 'Sapienza' Univ. (Italy). Dipt. di Fisica; INFN, Sezione di Roma (Italy); Schaefer, Stefan [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC

    2016-12-22

    New extended interpolating operators made of quenched three dimensional fermions are introduced in the context of lattice QCD. The mass of the 3D fermions can be tuned in a controlled way to find a better overlap of the extended operators with the states of interest. The extended operators have good renormalisation properties and are easy to control when taking the continuum limit. Moreover the short distance behaviour of the two point functions built from these operators is greatly improved. The operators have been numerically implemented and a comparison to point sources and Jacobi smeared sources has been performed on the new CLS configurations.

  6. New extended interpolating operators for hadron correlation functions

    International Nuclear Information System (INIS)

    Scardino, Francesco; Papinutto, Mauro; Schaefer, Stefan

    2016-01-01

    New extended interpolating operators made of quenched three dimensional fermions are introduced in the context of lattice QCD. The mass of the 3D fermions can be tuned in a controlled way to find a better overlap of the extended operators with the states of interest. The extended operators have good renormalisation properties and are easy to control when taking the continuum limit. Moreover the short distance behaviour of the two point functions built from these operators is greatly improved. The operators have been numerically implemented and a comparison to point sources and Jacobi smeared sources has been performed on the new CLS configurations.

  7. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.

    Science.gov (United States)

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2013-08-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.
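As a concrete illustration of the coordinates analyzed here, below is a short numpy sketch of Floater's mean value coordinates for a point inside a convex polygon; the weight formula is the standard one from the literature, the helper name is ours, and no claim is made that this matches the authors' implementation. The resulting weights sum to one and reproduce the evaluation point linearly.

```python
import numpy as np

def mean_value_coords(poly, x):
    """Mean value coordinates of a point x strictly inside a convex polygon.

    poly: (n, 2) array of vertices in order. Returns weights lam with
    lam.sum() == 1 and lam @ poly == x (linear precision).
    """
    poly = np.asarray(poly, float)
    d = poly - np.asarray(x, float)            # spokes from x to each vertex
    r = np.linalg.norm(d, axis=1)
    # angle of each spoke, then the angle alpha_i between consecutive spokes
    theta = np.arctan2(d[:, 1], d[:, 0])
    alpha = np.diff(np.append(theta, theta[0]))
    alpha = (alpha + np.pi) % (2 * np.pi) - np.pi   # wrap to (-pi, pi)
    t = np.tan(alpha / 2.0)
    w = (np.roll(t, 1) + t) / r                # Floater's weight formula
    return w / w.sum()
```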

  8. Geometries and interpolations for symmetric positive definite matrices

    DEFF Research Database (Denmark)

    Feragen, Aasa; Fuster, Andrea

    2017-01-01

    … In light of the simulation results, we discuss the mathematical and qualitative properties of these new metrics in comparison with the classical ones. Finally, we explore the nonlinear variation of properties such as shape and scale throughout principal geodesics in different metrics, which affects the visualization of scale and shape variation in tensorial data. With the paper, we release a software package with Matlab scripts for computing the interpolations and statistics used for the experiments (code is available at https://sites.google.com/site/aasaferagen/home/software).

  9. Trends in Continuity and Interpolation for Computer Graphics.

    Science.gov (United States)

    Gonzalez Garcia, Francisco

    2015-01-01

    In every computer-graphics-oriented application today, it is common practice to texture 3D models as a way to obtain realistic materials. As part of this process, mesh texturing, deformation, and visualization are all key parts of the computer graphics field. This PhD dissertation was completed in the context of these three important and related fields. It presents techniques that improve on existing state-of-the-art approaches to continuity and interpolation in texture space (texturing), object space (deformation), and screen space (rendering).

  10. Gas spectroscopy with integrated frequency monitoring through self-mixing in a terahertz quantum-cascade laser.

    Science.gov (United States)

    Chhantyal-Pun, Rabi; Valavanis, Alexander; Keeley, James T; Rubino, Pierluigi; Kundu, Iman; Han, Yingjun; Dean, Paul; Li, Lianhe; Davies, A Giles; Linfield, Edmund H

    2018-05-15

    We demonstrate a gas spectroscopy technique, using self-mixing in a 3.4 terahertz quantum-cascade laser (QCL). All previous QCL spectroscopy techniques have required additional terahertz instrumentation (detectors, mixers, or spectrometers) for system pre-calibration or spectral analysis. By contrast, our system self-calibrates the laser frequency (i.e., with no external instrumentation) to a precision of 630 MHz (0.02%) by analyzing QCL voltage perturbations in response to optical feedback within a 0-800 mm round-trip delay line. We demonstrate methanol spectroscopy by introducing a gas cell into the feedback path and show that a limiting absorption coefficient of ∼1×10⁻⁴ cm⁻¹ is resolvable.

  11. Effect of interpolation on parameters extracted from seating interface pressure arrays.

    Science.gov (United States)

    Wininger, Michael; Crane, Barbara

    2014-01-01

    Interpolation is a common data processing step in the study of interface pressure data collected at the wheelchair seating interface. However, there has been no focused study on the effect of interpolation on features extracted from these pressure maps, nor on whether these parameters are sensitive to the manner in which the interpolation is implemented. Here, two different interpolation paradigms, bilinear versus bicubic spline, are tested for their influence on parameters extracted from pressure array data and compared against a conventional low-pass filtering operation. Additionally, the effects of tandem filtering and interpolation, as well as of the interpolation degree (interpolating to 2, 4, and 8 times the sampling density), were analyzed. The following recommendations are made regarding approaches that minimize distortion of features extracted from the pressure maps: (1) filter prior to interpolation (strong effect); (2) use cubic rather than linear interpolation (slight effect); and (3) interpolation to 2, 4, or 8 times density makes only a nominal difference (negligible effect). We invite other investigators to perform similar benchmark analyses on their own data in the interest of establishing a community consensus of best practices in pressure array data processing.
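The first recommendation, filter before interpolating, can be sketched with plain numpy; the helper names are hypothetical and the 3×3 moving average stands in for whatever low-pass filter a given study uses.

```python
import numpy as np

def smooth3(img):
    """3x3 moving-average low-pass filter (edges handled by replicate padding)."""
    img = np.asarray(img, float)
    p = np.pad(img, 1, mode='edge')
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def bilinear_upsample(img, factor):
    """Bilinear interpolation of a 2-D array to `factor` times its density."""
    img = np.asarray(img, float)
    rows = np.linspace(0, img.shape[0] - 1, factor * (img.shape[0] - 1) + 1)
    cols = np.linspace(0, img.shape[1] - 1, factor * (img.shape[1] - 1) + 1)
    r0 = np.floor(rows).astype(int).clip(max=img.shape[0] - 2)
    c0 = np.floor(cols).astype(int).clip(max=img.shape[1] - 2)
    fr = (rows - r0)[:, None]                # fractional row offsets
    fc = (cols - c0)[None, :]                # fractional column offsets
    a = img[np.ix_(r0, c0)]
    b = img[np.ix_(r0, c0 + 1)]
    c = img[np.ix_(r0 + 1, c0)]
    d = img[np.ix_(r0 + 1, c0 + 1)]
    return ((1 - fr) * (1 - fc) * a + (1 - fr) * fc * b
            + fr * (1 - fc) * c + fr * fc * d)
```

Filtering first, then interpolating, is then simply `bilinear_upsample(smooth3(pressure), 2)` rather than the reverse order.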

  12. Comparison of the common spatial interpolation methods used to analyze potentially toxic elements surrounding mining regions.

    Science.gov (United States)

    Ding, Qian; Wang, Yong; Zhuang, Dafang

    2018-04-15

    The appropriate spatial interpolation methods must be selected to analyze the spatial distributions of Potentially Toxic Elements (PTEs), which is a precondition for evaluating PTE pollution. The accuracy and effects of different spatial interpolation methods, which include inverse distance weighting interpolation (IDW) (power = 1, 2, 3), radial basis function interpolation (RBF) (basis function: thin-plate spline (TPS), spline with tension (ST), completely regularized spline (CRS), multiquadric (MQ) and inverse multiquadric (IMQ)) and ordinary kriging interpolation (OK) (semivariogram model: spherical, exponential, Gaussian and linear), were compared using 166 unevenly distributed soil PTE samples (As, Pb, Cu and Zn) in the Suxian District, Chenzhou City, Hunan Province as the study subject. The reasons for the accuracy differences of the interpolation methods and the uncertainties of the interpolation results are discussed, several suggestions for improving the interpolation accuracy are proposed, and the direction of pollution control is determined. The results of this study are as follows: (i) RBF-ST and OK (exponential) are the optimal interpolation methods for As and Cu, and the optimal interpolation method for Pb and Zn is RBF-IMQ. (ii) The interpolation uncertainty is positively correlated with the PTE concentration, and higher uncertainties are primarily distributed around mines, which is related to the strong spatial variability of PTE concentrations caused by human interference. (iii) The interpolation accuracy can be improved by increasing the sample size around the mines, introducing auxiliary variables in the case of incomplete sampling, and adopting the partition prediction method. (iv) It is necessary to strengthen the prevention and control of As and Pb pollution, particularly in the central and northern areas. The results of this study can provide an effective reference for the optimization of interpolation methods and parameters for
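Of the methods compared, IDW is the simplest to state: a prediction is a distance-weighted average of the samples. A minimal numpy sketch, with a hypothetical function name and the power exponent exposed as in the comparison above, might read:

```python
import numpy as np

def idw(xy_samples, values, xy_query, power=2):
    """Inverse distance weighting interpolation.

    xy_samples: (n, 2) sample locations; values: (n,) measured concentrations;
    xy_query: (m, 2) prediction locations; power: the IDW exponent (1, 2 or 3).
    """
    s = np.asarray(xy_samples, float)
    v = np.asarray(values, float)
    q = np.asarray(xy_query, float)
    d = np.linalg.norm(q[:, None, :] - s[None, :, :], axis=2)  # (m, n) distances
    out = np.empty(len(q))
    for i, di in enumerate(d):
        hit = di < 1e-12
        if hit.any():                      # query coincides with a sample point
            out[i] = v[hit][0]
        else:
            w = 1.0 / di**power            # weights decay with distance
            out[i] = (w * v).sum() / w.sum()
    return out
```

IDW is an exact interpolator: it returns the sample value at a sample location, and predictions always stay within the range of the data.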

  13. A mixed integer linear programming model for integrating thermodynamic cycles for waste heat exploitation in process sites

    International Nuclear Information System (INIS)

    Oluleye, Gbemi; Smith, Robin

    2016-01-01

    Highlights: • MILP model developed for integration of waste heat recovery technologies in process sites. • Five thermodynamic cycles considered for exploitation of industrial waste heat. • Temperature and quantity of multiple waste heat sources considered. • Interactions with the site utility system considered. • Industrial case study presented to illustrate application of the proposed methodology. - Abstract: Thermodynamic cycles such as organic Rankine cycles, absorption chillers, absorption heat pumps, absorption heat transformers, and mechanical heat pumps are able to utilize wasted thermal energy in process sites for the generation of electrical power, chilling and heat at a higher temperature. In this work, a novel systematic framework is presented for optimal integration of these technologies in process sites. The framework is also used to assess the best design approach for integrating waste heat recovery technologies in process sites, i.e. stand-alone integration or a systems-oriented integration. The developed framework allows for: (1) selection of one or more waste heat sources (taking into account the temperatures and thermal energy content), (2) selection of one or more technology options and working fluids, (3) selection of end-uses of recovered energy, (4) exploitation of interactions with the existing site utility system and (5) exploration of the potential for heat recovery via heat exchange. The methodology is applied to an industrial case study. Results indicate a systems-oriented design approach reduces waste heat by 24%, fuel consumption by 54% and CO₂ emissions by 53% with a 2-year payback, while a stand-alone design approach reduces waste heat by 12%, fuel consumption by 29% and CO₂ emissions by 20.5% with a 4-year payback. Therefore, benefits from waste heat utilization increase when interactions between the existing site utility system and the waste heat recovery technologies are explored simultaneously. The case study also shows
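The selection structure of such a model can be illustrated without a MILP solver: with a handful of binary install-or-not decisions, exhaustive search reproduces what the integer program would choose. The technologies, recovered-heat figures and costs below are invented for illustration and are not the case-study data or the authors' formulation.

```python
from itertools import product

# Hypothetical candidates: (name, heat recovered [MW], capital cost [M$])
techs = [("ORC", 4.0, 6.0), ("absorption chiller", 2.5, 3.0),
         ("heat pump", 3.0, 4.5), ("heat transformer", 1.5, 2.0)]
budget = 9.0  # available capital [M$]

# One binary decision per technology; the MILP objective (maximize recovered
# heat subject to the capital budget) is solved here by exhaustive search.
feasible = (sel for sel in product([0, 1], repeat=len(techs))
            if sum(s * t[2] for s, t in zip(sel, techs)) <= budget)
best = max(feasible, key=lambda sel: sum(s * t[1] for s, t in zip(sel, techs)))
chosen = [t[0] for s, t in zip(best, techs) if s]
```

A real site model would add continuous variables (heat flows, temperatures) and utility-system interactions, which is what makes the full problem a MILP rather than a pure knapsack.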

  14. [Integrity].

    Science.gov (United States)

    Gómez Rodríguez, Rafael Ángel

    2014-01-01

    To say that someone possesses integrity is to claim that that person's responses to specific situations are almost predictable, and that he or she can judge prudently and act correctly. There is a close interrelationship between integrity and autonomy, and autonomy rests on the deeper moral claim of all humans to integrity of the person. Integrity has two senses of significance for medical ethics: the first refers to the integrity of the person in its bodily, psychosocial and intellectual elements; in the second sense, integrity is a virtue. Another facet of integrity of the person is the integrity of the values we cherish and espouse. The physician must be a person of integrity if the integrity of the patient is to be safeguarded. Autonomy has reduced violations in the past, but the character and virtues of the physician are the ultimate safeguard of the patient's autonomy. A very important field in medicine is scientific research. It is the character of the investigator that determines the moral quality of research. The problem arises when legitimate self-interest is replaced by selfishness, particularly when human subjects are involved. The final safeguard of the moral quality of research is the character and conscience of the investigator. Teaching must be relevant in the scientific field, but the most effective way to teach virtue ethics is through the example of a respected scientist.

  15. Using ‘snapshot’ measurements of CH4 fluxes from an ombrotrophic peatland to estimate annual budgets: interpolation versus modelling

    Directory of Open Access Journals (Sweden)

    S.M. Green

    2017-03-01

    Full Text Available Flux-chamber measurements of greenhouse gas exchanges between the soil and the atmosphere represent a snapshot of the conditions on a particular site and need to be combined or used in some way to provide integrated fluxes for the longer time periods that are often of interest. In contrast to carbon dioxide (CO2, most studies that have estimated the time-integrated flux of CH4 on ombrotrophic peatlands have not used models. Typically, linear interpolation is used to estimate CH4 fluxes during the time periods between flux-chamber measurements. CH4 fluxes generally show a rise followed by a fall through the growing season that may be captured reasonably well by interpolation, provided there are sufficiently frequent measurements. However, day-to-day and week-to-week variability is also often evident in CH4 flux data, and will not necessarily be properly represented by interpolation. Using flux chamber data from a UK blanket peatland, we compared annualised CH4 fluxes estimated by interpolation with those estimated using linear models and found that the former tended to be higher than the latter. We consider the implications of these results for the calculation of the radiative forcing effect of ombrotrophic peatlands.
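The interpolation approach described above amounts to joining the snapshot fluxes with straight lines and integrating over the period of interest. A minimal numpy sketch, with invented visit days and flux values rather than the campaign data:

```python
import numpy as np

# Hypothetical day-of-year of each chamber visit and the measured CH4 flux
# (mg CH4 m^-2 d^-1), showing the rise-and-fall through the growing season.
days = np.array([100, 130, 160, 190, 220, 250, 280])
flux = np.array([0.5, 1.2, 2.8, 3.5, 2.9, 1.4, 0.6])

# Linear interpolation between visits gives a flux for every day...
daily = np.interp(np.arange(days[0], days[-1] + 1), days, flux)
# ...and summing the daily values integrates the flux over the window.
period_flux = daily.sum()  # mg CH4 m^-2 over days 100-280
```

Because linear interpolation cannot exceed the measured extremes, day-to-day variability between visits is smoothed away, which is one reason interpolated and modelled annual budgets can differ.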

  16. THE EFFECT OF STIMULUS ANTICIPATION ON THE INTERPOLATED TWITCH TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Duane C. Button

    2008-12-01

    Full Text Available The objective of this study was to investigate the effect of expected and unexpected interpolated stimuli (IT) during a maximum voluntary contraction on quadriceps force output and activation. Two groups of male subjects, either inexperienced (MI: no prior experience with IT tests) or experienced (ME: previously experienced 10 or more series of IT tests), received an expected or unexpected IT while performing quadriceps isometric maximal voluntary contractions (MVCs). Measurements included MVC force, quadriceps and hamstrings electromyographic (EMG) activity, and quadriceps inactivation as measured by the interpolated twitch technique (ITT). When performing MVCs with the expectation of an IT, the knowledge or lack of knowledge of an impending IT occurring during a contraction did not result in significant overall differences in force, ITT inactivation, or quadriceps or hamstrings EMG activity. However, the expectation of an IT significantly (p < 0.0001) reduced MVC force (9.5%) and quadriceps EMG activity (14.9%) when compared to performing MVCs with prior knowledge that stimulation would not occur. While ME exhibited non-significant decreases when expecting an IT during a MVC, MI force and EMG activity significantly decreased 12.4% and 20.9% respectively. Overall, ME had significantly (p < 0.0001) higher force (14.5%) and less ITT inactivation (10.4%) than MI. The expectation of the noxious stimuli may account for the significant decrements in force and activation during the ITT.

  17. Flip-avoiding interpolating surface registration for skull reconstruction.

    Science.gov (United States)

    Xie, Shudong; Leow, Wee Kheng; Lee, Hanjing; Lim, Thiam Chye

    2018-03-30

    Skull reconstruction is an important and challenging task in craniofacial surgery planning, forensic investigation and anthropological studies. Existing methods typically reconstruct approximating surfaces that regard corresponding points on the target skull as soft constraints, thus incurring non-zero error even for non-defective parts and high overall reconstruction error. This paper proposes a novel geometric reconstruction method that non-rigidly registers an interpolating reference surface that regards corresponding target points as hard constraints, thus achieving low reconstruction error. To overcome the shortcoming of interpolating a surface, a flip-avoiding method is used to detect and exclude conflicting hard constraints that would otherwise cause surface patches to flip and self-intersect. Comprehensive test results show that our method is more accurate and robust than existing skull reconstruction methods. By incorporating symmetry constraints, it can produce more symmetric and normal results than other methods in reconstructing defective skulls with a large number of defects. It is robust against severe outliers such as radiation artifacts in computed tomography due to dental implants. In addition, test results also show that our method outperforms thin-plate spline for model resampling, which enables the active shape model to yield more accurate reconstruction results. As the reconstruction accuracy of defective parts varies with the use of different reference models, we also study the implication of reference model selection for skull reconstruction. Copyright © 2018 John Wiley & Sons, Ltd.

  18. Optimal Interpolation scheme to generate reference crop evapotranspiration

    Science.gov (United States)

    Tomas-Burguera, Miquel; Beguería, Santiago; Vicente-Serrano, Sergio; Maneta, Marco

    2018-05-01

    We used an Optimal Interpolation (OI) scheme to generate a reference crop evapotranspiration (ETo) grid, forcing meteorological variables, and their respective error variance in the Iberian Peninsula for the period 1989-2011. To perform the OI we used observational data from the Spanish Meteorological Agency (AEMET) and outputs from a physically-based climate model. We used five OI schemes to generate grids of the five observed climate variables necessary to compute ETo using the FAO-recommended form of the Penman-Monteith equation (FAO-PM). The granularity of the resulting grids is less sensitive to variations in the density and distribution of the observational network than that of grids generated by other interpolation methods. This is because our implementation of the OI method uses a physically-based climate model as prior background information about the spatial distribution of the climatic variables, which is critical for under-observed regions. This provides temporal consistency in the spatial variability of the climatic fields. We also show that increases in the density and improvements in the distribution of the observational network substantially reduce the uncertainty of the climatic and ETo estimates. Finally, a sensitivity analysis of observational uncertainties and network densification suggests the existence of a trade-off between quantity and quality of observations.
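At a single grid point, the OI update blends the model background with the observation in proportion to their error variances. The scalar sketch below conveys the idea; the full scheme uses spatial error covariances, which are omitted here, and the function name is ours.

```python
def oi_analysis(background, obs, var_b, var_o):
    """Optimal Interpolation update at a single grid point (illustrative sketch).

    background: prior value from the physically-based climate model,
    obs: the co-located station observation,
    var_b, var_o: background and observation error variances.
    Returns the analysis value and its reduced error variance.
    """
    k = var_b / (var_b + var_o)          # optimal weight (scalar Kalman gain)
    analysis = background + k * (obs - background)
    var_a = (1.0 - k) * var_b            # analysis error variance
    return analysis, var_a
```

Note the analysis variance is always smaller than the background variance, which is why densifying the observation network reduces the uncertainty of the gridded estimates.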

  19. A New Interpolation Approach for Linearly Constrained Convex Optimization

    KAUST Repository

    Espinoza, Francisco

    2012-08-01

    In this thesis we propose a new class of linearly constrained convex optimization methods based on the use of a generalization of Shepard's interpolation formula. We prove properties of the surface such as the interpolation property at the boundary of the feasible region and the convergence of the gradient to the null space of the constraints at the boundary. We explore several descent techniques such as steepest descent, two quasi-Newton methods and Newton's method. Moreover, we implement in the Matlab language several versions of the method, particularly for the case of quadratic programming with bounded variables. Finally, we carry out performance tests against Matlab Optimization Toolbox methods for convex optimization and implementations of the standard log-barrier and active-set methods. We conclude that the steepest descent technique seems to be the best choice so far for our method and that it is competitive with other standard methods both in performance and empirical growth order.

  20. 3D Interpolation Method for CT Images of the Lung

    Directory of Open Access Journals (Sweden)

    Noriaki Asada

    2003-06-01

    Full Text Available A 3-D image can be reconstructed from numerous CT images of the lung. The procedure reconstructs a solid from multiple cross-section images, which are collected during pulsation of the heart. Thus the motion of the heart is a special factor that must be taken into consideration during reconstruction. The lung, as an elastic body, exhibits a repeating transformation synchronized to the beating of the heart. There are discontinuities among neighboring CT images due to the beating of the heart if no special techniques are used in taking the CT images. The 3-D heart image is reconstructed from numerous CT images in which both the heart and the lung appear. Although the outline shape of the reconstructed 3-D heart is quite unnatural, the envelope of this unnatural 3-D heart is fit to the shape of the standard heart. The envelopes of the lung in the CT images are calculated after the section images of the best-fitting standard heart are located at the same positions as the CT images. Thus the CT images are geometrically transformed to the optimal CT images best fitting the standard heart. Since correct transformation of images is required, an area-oriented interpolation method that we propose is used for interpolation of the transformed images. An attempt to reconstruct a 3-D lung image by a series of such operations without discontinuity is shown. Additionally, the same geometrical transformation method applied to the original projection images is proposed as a more advanced method.

  1. Interpolation methods for creating a scatter radiation exposure map

    Energy Technology Data Exchange (ETDEWEB)

    Gonçalves, Elicardo A. de S., E-mail: elicardo.goncalves@ifrj.edu.br [Instituto Federal do Rio de Janeiro (IFRJ), Paracambi, RJ (Brazil); Gomes, Celio S.; Lopes, Ricardo T. [Coordenacao de Pos-Graduacao e Pesquisa de Engenharia (PEN/COPPE/UFRJ), Rio de Janeiro, RJ (Brazil). Programa de Engenharia Nuclear; Oliveira, Luis F. de; Anjos, Marcelino J. dos; Oliveira, Davi F. [Universidade do Estado do Rio de Janeiro (UFRJ), RJ (Brazil). Instituto de Física

    2017-07-01

    A well-known way to better understand radiation scattering during radiography is to map the exposure over the space around the source and sample. This map is made by measuring the exposure at regularly spaced points, i.e., measurement locations are chosen by adding regular steps from a starting point along the x, y and z axes, or along radial and angular coordinates. However, it is not always possible to maintain the regularity of the steps throughout the entire space, and there may be regions of difficult access where the regularity of the steps is impaired. This work uses interpolation techniques that work with irregular steps and compares their results and their limits. Interpolation was first performed in angular coordinates and tested with some points missing. Delaunay tessellation interpolation was then performed on the same data for comparison. Computational and graphic treatments were done with the GNU Octave software and its image-processing package. Real data were acquired from a bunker where a 6 MeV betatron can be used to produce radiation scattering. (author)

  2. Interpolation on the manifold of K component GMMs.

    Science.gov (United States)

    Kim, Hyunwoo J; Adluru, Nagesh; Banerjee, Monami; Vemuri, Baba C; Singh, Vikas

    2015-12-01

    Probability density functions (PDFs) are fundamental objects in mathematics with numerous applications in computer vision, machine learning and medical imaging. The feasibility of basic operations such as computing the distance between two PDFs and estimating a mean of a set of PDFs is a direct function of the representation we choose to work with. In this paper, we study the Gaussian mixture model (GMM) representation of the PDFs, motivated by its numerous attractive features: (1) GMMs are arguably more interpretable than, say, square-root parameterizations; (2) the model complexity can be explicitly controlled by the number of components; and (3) they are already widely used in many applications. The main contributions of this paper are numerical algorithms to enable basic operations on such objects that strictly respect their underlying geometry. For instance, when operating with a set of K component GMMs, a first order expectation is that the result of simple operations like interpolation and averaging should provide an object that is also a K component GMM. The literature provides very little guidance on enforcing such requirements systematically. It turns out that these tasks are important internal modules for analysis and processing of a field of ensemble average propagators (EAPs), common in diffusion weighted magnetic resonance imaging. We provide proof-of-principle experiments showing how the proposed algorithms for interpolation can facilitate statistical analysis of such data, essential to many neuroimaging studies. Separately, we also derive interesting connections of our algorithm with functional spaces of Gaussians, which may be of independent interest.

  3. MAGIC: A Tool for Combining, Interpolating, and Processing Magnetograms

    Science.gov (United States)

    Allred, Joel

    2012-01-01

    Transients in the solar coronal magnetic field are ultimately the source of space weather. Models which seek to track the evolution of the coronal field require magnetogram images to be used as boundary conditions. These magnetograms are obtained by numerous instruments with different cadences and resolutions. A tool is required which allows modelers to find all available data and use them to craft accurate and physically consistent boundary conditions for their models. We have developed a software tool, MAGIC (MAGnetogram Interpolation and Composition), to perform exactly this function. MAGIC can manage the acquisition of magnetogram data, cast it into a source-independent format, and then perform the necessary spatial and temporal interpolation to provide magnetic field values as requested onto model-defined grids. MAGIC has the ability to patch magnetograms from different sources together, providing a more complete picture of the Sun's field than is possible from single magnetograms. In doing this, care must be taken so as not to introduce nonphysical current densities along the seam between magnetograms. We have designed a method which minimizes these spurious current densities. MAGIC also includes a number of post-processing tools which can provide additional information to models. For example, MAGIC includes an interface to the DAVE4VM tool which derives surface flow velocities from the time evolution of the surface magnetic field. MAGIC has been developed as an application of the KAMELEON data formatting toolkit which has been developed by the CCMC.

  4. Image re-sampling detection through a novel interpolation kernel.

    Science.gov (United States)

    Hilal, Alaa

    2018-06-01

    Image re-sampling involved in re-size and rotation transformations is an essential building block of typical digital image alterations. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the most frequent interpolation kernels used in digital image re-sampling applications. Secondly, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process involves minimizing an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.
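The role of an interpolation kernel in re-sampling can be sketched with the classic triangle (linear) kernel, one of the standard kernels a parametric model of this kind would imitate; the helper names are ours and this is not the paper's five-parameter kernel.

```python
import numpy as np

def triangle_kernel(t):
    """Linear-interpolation (triangle) kernel: 1-|t| for |t|<1, else 0."""
    t = np.abs(np.asarray(t, float))
    return np.where(t < 1.0, 1.0 - t, 0.0)

def resample(signal, factor, kernel=triangle_kernel):
    """Re-sample a 1-D signal to `factor` times its rate with a given kernel.

    Each output sample is a kernel-weighted combination of input samples;
    it is exactly these periodic inter-sample correlations that
    re-sampling detectors look for.
    """
    signal = np.asarray(signal, float)
    n_out = int((len(signal) - 1) * factor) + 1
    pos = np.arange(n_out) / factor          # output positions in input units
    idx = np.arange(len(signal))
    w = kernel(pos[:, None] - idx[None, :])  # kernel weight of each input sample
    return (w * signal).sum(axis=1)
```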

  5. Interpolation methods for creating a scatter radiation exposure map

    International Nuclear Information System (INIS)

    Gonçalves, Elicardo A. de S.; Gomes, Celio S.; Lopes, Ricardo T.; Oliveira, Luis F. de; Anjos, Marcelino J. dos; Oliveira, Davi F.

    2017-01-01

    A well-known way to better understand radiation scattering during radiography is to map the exposure over the space around the source and sample. This map is made by measuring the exposure at regularly spaced points, i.e., measurement locations are chosen by adding regular steps from a starting point along the x, y and z axes, or along radial and angular coordinates. However, it is not always possible to maintain the regularity of the steps throughout the entire space, and there may be regions of difficult access where the regularity of the steps is impaired. This work uses interpolation techniques that work with irregular steps and compares their results and their limits. Interpolation was first performed in angular coordinates and tested with some points missing. Delaunay tessellation interpolation was then performed on the same data for comparison. Computational and graphic treatments were done with the GNU Octave software and its image-processing package. Real data were acquired from a bunker where a 6 MeV betatron can be used to produce radiation scattering. (author)

  6. Motion compensated frame interpolation with a symmetric optical flow constraint

    DEFF Research Database (Denmark)

    Rakêt, Lars Lau; Roholm, Lars; Bruhn, Andrés

    2012-01-01

    We consider the problem of interpolating frames in an image sequence. For this purpose accurate motion estimation can be very helpful. We propose to move the motion estimation from the surrounding frames directly to the unknown frame by parametrizing the optical flow objective function such that it is symmetric in the two surrounding frames. The proposed reparametrization is generic and can be applied to almost every existing algorithm. In this paper we illustrate its advantages by considering the classic TV-L1 optical flow algorithm as a prototype. We demonstrate that this widely used method can produce results that are competitive with current state-of-the-art methods. Finally we show that the scheme can be implemented on graphics hardware such that it becomes possible to double the frame rate of 640 × 480 video footage at 30 fps, i.e. to perform frame doubling in realtime.
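The idea of estimating the unknown middle frame from its two neighbours can be caricatured with a single constant integer flow; the actual method estimates a dense sub-pixel TV-L1 flow field and warps accordingly, which this toy sketch omits.

```python
import numpy as np

def interpolate_middle(f0, f1, flow):
    """Toy motion-compensated interpolation of the frame halfway between
    f0 and f1, given one constant integer flow (dy, dx) from f0 to f1
    (even flow components assumed so the half-step is integral)."""
    dy, dx = flow
    a = np.roll(f0, (dy // 2, dx // 2), axis=(0, 1))    # warp f0 half forward
    b = np.roll(f1, (-dy // 2, -dx // 2), axis=(0, 1))  # warp f1 half backward
    return 0.5 * (a + b)                                # blend the two warps
```

A moving feature lands at its halfway position in the interpolated frame, instead of producing the ghosted double image that plain frame averaging gives.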

  7. Anisotropic interpolation theorems of Musielak-Orlicz type

    Directory of Open Access Journals (Sweden)

    Jinxia Li

    2016-10-01

    Full Text Available Abstract Anisotropy is a common attribute of Nature, which shows different characterizations in different directions of all or part of the physical or chemical properties of an object. In mathematics, the anisotropic property can be expressed by a fairly general discrete group of dilations $\{A^{k}: k\in\mathbb{Z}\}$, where $A$ is a real $n\times n$ matrix with all its eigenvalues $\lambda$ satisfying $|\lambda|>1$. Let $\varphi: \mathbb{R}^{n}\times[0,\infty)\to[0,\infty)$ be an anisotropic Musielak-Orlicz function such that $\varphi(x,\cdot)$ is an Orlicz function and $\varphi(\cdot,t)$ is a Muckenhoupt $A_{\infty}(A)$ weight. The aim of this article is to obtain two anisotropic interpolation theorems of Musielak-Orlicz type, which are weighted anisotropic extensions of the Marcinkiewicz interpolation theorems. The above results are new even for the isotropic weighted settings.

  8. Linear, Transfinite and Weighted Method for Interpolation from Grid Lines Applied to OCT Images

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2018-01-01

    Values are known along the lines of a square grid, but are unknown inside each square. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid lines: linear, transfinite and weighted. The linear method does not preserve … and the stability of the linear method further away. An important parameter influencing the performance of the interpolation methods is the upsampling rate. We perform an extensive evaluation of the three interpolation methods across a range of upsampling rates. Our statistical analysis shows significant differences in the performance of the three methods. We find that the transfinite interpolation works well for small upsampling rates and the proposed weighted interpolation method performs very well for all upsampling rates typically used in practice. On the basis of these findings we propose an approach for combining two OCT …
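
    Transfinite interpolation over a single grid square can be sketched as a Coons patch: the interior is blended from the four known edges. The edge functions below are hypothetical closed forms standing in for sampled grid-line intensities, not the paper's data:

```python
# Coons-patch (transfinite) interpolation for one grid square: blend the two
# "ruled" interpolants built from opposite edges, then subtract the bilinear
# corner term so the boundary values are reproduced exactly.
def coons(u, v, top, bottom, left, right):
    """top(u), bottom(u), left(v), right(v) give boundary values on [0, 1];
    the corners must be consistent, e.g. top(0) == left(1)."""
    lu = (1 - v) * bottom(u) + v * top(u)          # blend bottom/top edges
    lv = (1 - u) * left(v) + u * right(v)          # blend left/right edges
    b = ((1 - u) * (1 - v) * bottom(0) + u * (1 - v) * bottom(1)
         + (1 - u) * v * top(0) + u * v * top(1))  # bilinear corner term
    return lu + lv - b

# edges of f(u, v) = u + v, so the patch must reproduce it exactly
val = coons(0.3, 0.6,
            top=lambda u: u + 1.0, bottom=lambda u: u,
            left=lambda v: v, right=lambda v: 1.0 + v)
print(round(val, 6))  # 0.9
```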

  9. Integrated synoptic surveys of the hydrodynamics and water-quality distributions in two Lake Michigan rivermouth mixing zones using an autonomous underwater vehicle and a manned boat

    Science.gov (United States)

    Jackson, P. Ryan; Reneau, Paul C.

    2014-01-01

    The U.S. Geological Survey (USGS), in cooperation with the National Monitoring Network for U.S. Coastal Waters and Tributaries, launched a pilot project in 2010 to determine the value of integrated synoptic surveys of rivermouths using autonomous underwater vehicle technology in response to a call for rivermouth research, which includes study domains that envelop both the fluvial and lacustrine boundaries of the rivermouth mixing zone. The pilot project was implemented at two Lake Michigan rivermouths with largely different scales, hydrodynamics, and settings, but employing primarily the same survey techniques and methods. The Milwaukee River Estuary Area of Concern (AOC) survey included measurements in the lower 2 to 3 miles of the Milwaukee, Menomonee, and Kinnickinnic Rivers and inner and outer Milwaukee Harbor. This estuary is situated in downtown Milwaukee, Wisconsin, and is the most populated basin that flows directly into Lake Michigan. In contrast, the Manitowoc rivermouth has a relatively small harbor separating the rivermouth from Lake Michigan, and the Manitowoc River Watershed is primarily agricultural. Both the Milwaukee and Manitowoc rivermouths are unregulated and allow free exchange of water with Lake Michigan. This pilot study of the Milwaukee River Estuary and Manitowoc rivermouth using an autonomous underwater vehicle (AUV) paired with a manned survey boat resulted in high spatial and temporal resolution datasets of basic water-quality parameter distributions and hydrodynamics. The AUV performed well in these environments and was found primarily well-suited for harbor and nearshore surveys of three-dimensional water-quality distributions. Both case studies revealed that the use of a manned boat equipped with an acoustic Doppler current profiler (ADCP) and multiparameter sonde (and an optional flow-through water-quality sampling system) was the best option for riverine surveys. 

  10. Simulating propagation of decomposed elastic waves using low-rank approximate mixed-domain integral operators for heterogeneous transversely isotropic media

    KAUST Repository

    Cheng, Jiubing

    2014-08-05

    In elastic imaging, the extrapolated vector fields are decomposed into pure wave modes, such that the imaging condition produces interpretable images, which characterize reflectivity of different reflection types. Conventionally, wavefield decomposition in anisotropic media is costly because the operators involved depend on the velocity and are thus not stationary. In this abstract, we propose an efficient approach to directly extrapolate the decomposed elastic waves using low-rank approximate mixed space/wavenumber-domain integral operators for heterogeneous transversely isotropic (TI) media. The low-rank approximation is thus applied to the pseudo-spectral extrapolation and decomposition at the same time. The pseudo-spectral implementation also allows for relatively large time steps in which the low-rank approximation is applied. Synthetic examples show that it can yield dispersion-free extrapolation of the decomposed quasi-P (qP) and quasi-SV (qSV) modes, which can be used for imaging, as well as the total elastic wavefields.

  11. Study of Ni Metallization in Macroporous Si Using Wet Chemistry for Radio Frequency Cross-Talk Isolation in Mixed Signal Integrated Circuits.

    Science.gov (United States)

    Zhang, Xi; Xu, Chengkun; Chong, Kyuchul; Tu, King-Ning; Xie, Ya-Hong

    2011-05-25

    A highly conductive moat or Faraday cage of through-the-wafer thickness in a Si substrate was proposed to be effective in shielding electromagnetic interference, thereby reducing radio frequency (RF) cross-talk in high performance mixed signal integrated circuits. Such a structure was realized by metallization of selected ultra-high-aspect-ratio macroporous regions that were electrochemically etched in p− Si substrates. The metallization was conducted by means of wet chemistry in an alkaline aqueous solution containing Ni2+ without reducing agent. It is found that at elevated temperature during immersion, Ni2+ was rapidly reduced and deposited into the macroporous Si, and a conformal metallization of the macropore sidewalls was obtained such that the entire porous Si framework was converted to Ni. A conductive moat was thereby incorporated into the p− Si substrate. The experimentally measured reduction of cross-talk in this structure is 5~18 dB at frequencies up to 35 GHz.

  13. An application of gain-scheduled control using state-space interpolation to hydroactive gas bearings

    DEFF Research Database (Denmark)

    Theisen, Lukas Roy Svane; Camino, Juan F.; Niemann, Hans Henrik

    2016-01-01

    … with a gain-scheduling strategy using state-space interpolation, which avoids both the performance loss and the increase of controller order associated with the Youla parametrisation. The proposed state-space interpolation for gain-scheduling is applied to mass imbalance rejection for a controllable gas bearing scheduled in two parameters. Comparisons against the Youla-based scheduling demonstrate the superiority of the state-space interpolation.

  14. Convergence acceleration of quasi-periodic and quasi-periodic-rational interpolations by polynomial corrections

    OpenAIRE

    Lusine Poghosyan

    2014-01-01

    The paper considers convergence acceleration of the quasi-periodic and the quasi-periodic-rational interpolations by application of polynomial corrections. We investigate convergence of the resultant quasi-periodic-polynomial and quasi-periodic-rational-polynomial interpolations and derive exact constants of the main terms of asymptotic errors in the regions away from the endpoints. Results of numerical experiments clarify behavior of the corresponding interpolations for moderate number of in...

  15. A grey-forecasting interval-parameter mixed-integer programming approach for integrated electric-environmental management–A case study of Beijing

    International Nuclear Information System (INIS)

    Wang, Xingwei; Cai, Yanpeng; Chen, Jiajun; Dai, Chao

    2013-01-01

    In this study, a GFIPMIP (grey-forecasting interval-parameter mixed-integer programming) approach was developed for supporting IEEM (integrated electric-environmental management) in Beijing. It was an attempt to incorporate an energy-forecasting model within a general modeling framework at the municipal level. The developed GFIPMIP model can not only forecast electric demands, but also reflect dynamic, interactive, and uncertain characteristics of the IEEM system in Beijing. Moreover, it can address issues regarding power supply, and emission reduction of atmospheric pollutants and GHG (greenhouse gas). Optimal solutions were obtained related to power generation patterns and facility capacity expansion schemes under a series of system constraints. Two scenarios were analyzed based on multiple environmental policies. The results were useful for helping decision makers identify desired management strategies to guarantee the city's power supply and mitigate emissions of GHG and atmospheric pollutants. The results also suggested that the developed GFIPMIP model be applicable to similar engineering problems. - Highlights: • A grey-forecasting interval-parameter mixed integer programming (GFIPMIP) approach was developed. • It could reflect dynamic, interactive, and uncertain characteristics of an IEEM system. • The developed GFIPMIP approach was used for supporting IEEM system planning in Beijing. • Two scenarios were established based on different environmental policies and management targets. • Optimal schemes for power generation, energy supply, and environmental protection were identified

  16. Recent advance in high manufacturing readiness level and high temperature CMOS mixed-signal integrated circuits on silicon carbide

    Science.gov (United States)

    Weng, M. H.; Clark, D. T.; Wright, S. N.; Gordon, D. L.; Duncan, M. A.; Kirkham, S. J.; Idris, M. I.; Chan, H. K.; Young, R. A. R.; Ramsay, E. P.; Wright, N. G.; Horsfall, A. B.

    2017-05-01

    A high manufacturing readiness level silicon carbide (SiC) CMOS technology is presented. The unique process flow enables the monolithic integration of pMOS and nMOS transistors with passive circuit elements capable of operation at temperatures of 300 °C and beyond. Critical to this functionality is the behaviour of the gate dielectric and data for high temperature capacitance-voltage measurements are reported for SiO2/4H-SiC (n and p type) MOS structures. In addition, a summary of the long term reliability for a range of structures including contact chains to both n-type and p-type SiC, as well as simple logic circuits is presented, showing function after 2000 h at 300 °C. Circuit data is also presented for the performance of digital logic devices, a 4 to 1 analogue multiplexer and a configurable timer operating over a wide temperature range. A high temperature micro-oven system has been utilised to enable the high temperature testing and stressing of units assembled in ceramic dual in line packages, including a high temperature small form-factor SiC based bridge leg power module prototype, operated for over 1000 h at 300 °C. The data presented show that SiC CMOS is a key enabling technology in high temperature integrated circuit design. In particular it provides the ability to realise sensor interface circuits capable of operating above 300 °C, accommodate shifts in key parameters enabling deployment in applications including automotive, aerospace and deep well drilling.

  17. Interpolation in Time Series: An Introductive Overview of Existing Methods, Their Performance Criteria and Uncertainty Assessment

    Directory of Open Access Journals (Sweden)

    Mathieu Lepot

    2017-10-01

    Full Text Available A thorough review has been performed of interpolation methods to fill gaps in time series, of efficiency criteria, and of uncertainty quantification. On the one hand, numerous methods are available: interpolation, regression, autoregressive and machine learning methods, etc. On the other hand, there are many methods and criteria to estimate the efficiency of these methods, but uncertainties on the interpolated values are rarely calculated. Furthermore, even when they are estimated according to standard methods, the prediction uncertainty is not taken into account: a discussion is thus presented on the uncertainty estimation of interpolated/extrapolated data. Finally, some suggestions for further research and a new method are proposed.
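
    The simplest of the reviewed families, linear interpolation across a gap, can be sketched in a few lines. The series is invented, and no uncertainty estimate is attached to the filled values, which is precisely the shortcoming the review discusses:

```python
# Fill internal runs of missing (None) samples in a regularly sampled
# series by linear interpolation between the nearest known neighbours.
def fill_gaps_linear(series):
    out = list(series)
    i = 0
    while i < len(out):
        if out[i] is None:
            j = i
            while j < len(out) and out[j] is None:
                j += 1                       # find the end of the gap
            if i == 0 or j == len(out):
                raise ValueError("cannot interpolate a boundary gap")
            left, right = out[i - 1], out[j]
            for k in range(i, j):
                t = (k - (i - 1)) / (j - (i - 1))
                out[k] = left + t * (right - left)
            i = j
        else:
            i += 1
    return out

print(fill_gaps_linear([1.0, None, None, 4.0]))  # [1.0, 2.0, 3.0, 4.0]
```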

  18. [Research on fast implementation method of image Gaussian RBF interpolation based on CUDA].

    Science.gov (United States)

    Chen, Hao; Yu, Haizhong

    2014-04-01

    Image interpolation is often required during medical image processing and analysis. Although interpolation based on the Gaussian radial basis function (GRBF) has high precision, its long calculation time still limits its application in the field of image interpolation. To overcome this problem, a method of two-dimensional and three-dimensional medical image GRBF interpolation based on the compute unified device architecture (CUDA) is proposed in this paper. According to the single instruction multiple threads (SIMT) execution model of CUDA, various optimizing measures such as coalesced access and shared memory are adopted in this study. To eliminate the edge distortion of image interpolation, a natural suture algorithm is utilized in overlapping regions while adopting a data-space strategy of separating 2D images into blocks or dividing 3D images into sub-volumes. While keeping a high interpolation precision, the 2D and 3D medical image GRBF interpolation achieved great acceleration in each basic computing step. The experiments showed that the operative efficiency of image GRBF interpolation on the CUDA platform was obviously improved compared with CPU calculation. The present method is a useful reference for applications of image interpolation.
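
    Stripped of the CUDA optimizations, GRBF interpolation amounts to solving a small linear system for kernel weights and then evaluating the kernel expansion. A pure-Python 1-D sketch with invented samples (the kernel width s is arbitrary; the paper's images and parameters are not reproduced):

```python
# Gaussian RBF interpolation: find weights w such that
# sum_j w_j * exp(-(x_i - x_j)^2 / (2 s^2)) == y_i at every sample x_i.
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def grbf_fit(xs, ys, s):
    A = [[math.exp(-(xi - xj) ** 2 / (2 * s * s)) for xj in xs] for xi in xs]
    return solve(A, ys)

def grbf_eval(x, xs, w, s):
    return sum(wj * math.exp(-(x - xj) ** 2 / (2 * s * s)) for wj, xj in zip(w, xs))

xs, ys = [0.0, 1.0, 2.0], [0.0, 1.0, 0.0]   # made-up samples
w = grbf_fit(xs, ys, s=0.7)
print(round(grbf_eval(1.0, xs, w, 0.7), 6))  # reproduces the sample: 1.0
```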

  19. Compressive Parameter Estimation for Sparse Translation-Invariant Signals Using Polar Interpolation

    DEFF Research Database (Denmark)

    Fyhn, Karsten; Duarte, Marco F.; Jensen, Søren Holdt

    2015-01-01

    We propose new compressive parameter estimation algorithms that make use of polar interpolation to improve the estimator precision. Our work extends previous approaches involving polar interpolation for compressive parameter estimation in two aspects: (i) we extend the formulation from real non… to attain good estimation precision and keep the computational complexity low. Our numerical experiments show that the proposed algorithms outperform existing approaches that either leverage polynomial interpolation or are based on a conversion to a frequency-estimation problem followed by a super… interpolation increases the estimation precision.

  20. Time Reversal Reconstruction Algorithm Based on PSO Optimized SVM Interpolation for Photoacoustic Imaging

    Directory of Open Access Journals (Sweden)

    Mingjian Sun

    2015-01-01

    Full Text Available Photoacoustic imaging is an innovative imaging technique to image biomedical tissues. The time reversal reconstruction algorithm in which a numerical model of the acoustic forward problem is run backwards in time is widely used. In the paper, a time reversal reconstruction algorithm based on particle swarm optimization (PSO optimized support vector machine (SVM interpolation method is proposed for photoacoustics imaging. Numerical results show that the reconstructed images of the proposed algorithm are more accurate than those of the nearest neighbor interpolation, linear interpolation, and cubic convolution interpolation based time reversal algorithm, which can provide higher imaging quality by using significantly fewer measurement positions or scanning times.

  1. ANGELO-LAMBDA, Covariance matrix interpolation and mathematical verification

    International Nuclear Information System (INIS)

    Kodeli, Ivo

    2007-01-01

    1 - Description of program or function: The codes ANGELO-2.3 and LAMBDA-2.3 are used for the interpolation of cross section covariance data from the original to a user-defined energy group structure, and for the mathematical tests of the matrices, respectively. The LAMBDA-2.3 code calculates the eigenvalues of the matrices (either the original or the converted) and classifies them accordingly into positive and negative matrices. This verification is strongly recommended before using any covariance matrices. These versions of the two codes are extended versions of the previous codes available in the package NEA-1264 - ZZ-VITAMIN-J/COVA. They were specifically developed for the purposes of the OECD LWR UAM benchmark, in particular for the processing of the ZZ-SCALE5.1/COVA-44G cross section covariance matrix library retrieved from the SCALE-5.1 package. Either the original SCALE-5.1 libraries or the libraries separated into several files by nuclide can (in principle) be processed by the ANGELO/LAMBDA codes, but the use of the one-nuclide data is strongly recommended. Due to large deviations of the correlation matrix terms from unity observed in some SCALE-5.1 covariance matrices, the previously more severe acceptance condition in the ANGELO-2.3 code was relaxed. In case the correlation coefficients exceed 1.0, only a warning message is issued, and the coefficients are replaced by 1.0. 2 - Methods: ANGELO-2.3 interpolates the covariance matrices to a union grid using flat weighting. The LAMBDA-2.3 code includes the mathematical routines to calculate the eigenvalues of the covariance matrices. 3 - Restrictions on the complexity of the problem: The algorithm used in ANGELO is relatively simple, therefore interpolations involving energy group structures that are very different from the original (e.g. a large difference in the number of energy groups) may not be accurate. In particular, in the case of the MT=1018 data (fission spectra covariances) the algorithm may not be …
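
    The two checks described, clipping out-of-range correlation coefficients with a warning (as in ANGELO-2.3) and classifying matrices by their eigenvalues (as in LAMBDA-2.3), can be illustrated on a 2 × 2 correlation matrix; the matrix values are invented and this is not the codes' actual implementation:

```python
# Clip correlation coefficients whose magnitude exceeds 1.0 (warning only),
# then test positive semi-definiteness via the eigenvalues of the matrix.
import math

def clip_correlations(C):
    out = []
    for row in C:
        new = []
        for v in row:
            if abs(v) > 1.0:
                print("warning: correlation %.3f clipped to +/-1" % v)
                v = math.copysign(1.0, v)
            new.append(v)
        out.append(new)
    return out

def eig2(C):
    """Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, d]]."""
    a, b, d = C[0][0], C[0][1], C[1][1]
    m = 0.5 * (a + d)
    r = math.sqrt(((a - d) * 0.5) ** 2 + b * b)
    return m - r, m + r

C = clip_correlations([[1.0, 1.2], [1.2, 1.0]])
lo, hi = eig2(C)
print(lo >= 0.0)   # after clipping to 1.0 the matrix is PSD: True
```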

  2. Surface filling-in and contour interpolation contribute independently to Kanizsa figure formation.

    Science.gov (United States)

    Chen, Siyi; Glasauer, Stefan; Müller, Hermann J; Conci, Markus

    2018-04-30

    To explore mechanisms of object integration, the present experiments examined how completion of illusory contours and surfaces modulates the sensitivity of localizing a target probe. Observers had to judge whether a briefly presented dot probe was located inside or outside the region demarcated by inducer elements that grouped to form variants of an illusory, Kanizsa-type figure. From the resulting psychometric functions, we determined observers' discrimination thresholds as a sensitivity measure. Experiment 1 showed that sensitivity was systematically modulated by the amount of surface and contour completion afforded by a given configuration. Experiments 2 and 3 presented stimulus variants that induced an (occluded) object without clearly defined bounding contours, which gave rise to a relative sensitivity increase for surface variations on their own. Experiments 4 and 5 were performed to rule out that these performance modulations were simply attributable to variable distances between critical local inducers or to costs in processing an interrupted contour. Collectively, the findings provide evidence for a dissociation between surface and contour processing, supporting a model of object integration in which completion is instantiated by feedforward processing that independently renders surface filling-in and contour interpolation and a feedback loop that integrates these outputs into a complete whole. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  3. Health care providers' perceived barriers to and need for the implementation of a national integrated health care standard on childhood obesity in the Netherlands - a mixed methods approach.

    Science.gov (United States)

    Schalkwijk, Annemarie A H; Nijpels, Giel; Bot, Sandra D M; Elders, Petra J M

    2016-03-08

    In 2010, a national integrated health care standard for (childhood) obesity was published and disseminated in the Netherlands. The aim of this study is to gain insight into the needs of health care providers and the barriers they face in terms of implementing this integrated health care standard. A mixed-methods approach was applied using focus groups, semi-structured, face-to-face interviews and an e-mail-based internet survey. The study's participants included: general practitioners (GPs) (focus groups); health care providers in different professions (face-to-face interviews) and health care providers, including GPs; youth health care workers; pediatricians; dieticians; psychologists and physiotherapists (survey). First, the transcripts from the focus groups were analyzed thematically. The themes identified in this process were then used to analyze the interviews. The results of the analysis of the qualitative data were used to construct the statements used in the e-mail-based internet survey. Responses to items were measured on a 5-point Likert scale and were categorized into three outcomes: 'agree' or 'important' (response categories 1 and 2), 'disagree' or 'not important'. Twenty-seven of the GPs that were invited (51 %) participated in four focus groups. Seven of the nine health care professionals that were invited (78 %) participated in the interviews and 222 questionnaires (17 %) were returned and included in the analysis. The following key barriers were identified with regard to the implementation of the integrated health care standard: reluctance to raise the subject; perceived lack of motivation and knowledge on the part of the parents; previous negative experiences with lifestyle programs; financial constraints and the lack of a structured multidisciplinary approach. The main needs identified were: increased knowledge and awareness on the part of both health care providers and parents/children; a social map of effective intervention; structural

  4. Diabat Interpolation for Polymorph Free-Energy Differences.

    Science.gov (United States)

    Kamat, Kartik; Peters, Baron

    2017-02-02

    Existing methods to compute free-energy differences between polymorphs use harmonic approximations, advanced non-Boltzmann bias sampling techniques, and/or multistage free-energy perturbations. This work demonstrates how Bennett's diabat interpolation method ( J. Comput. Phys. 1976, 22, 245 ) can be combined with energy gaps from lattice-switch Monte Carlo techniques ( Phys. Rev. E 2000, 61, 906 ) to swiftly estimate polymorph free-energy differences. The new method requires only two unbiased molecular dynamics simulations, one for each polymorph. To illustrate the new method, we compute the free-energy difference between face-centered cubic and body-centered cubic polymorphs for a Gaussian core solid. We discuss the justification for parabolic models of the free-energy diabats and similarities to methods that have been used in studies of electron transfer.

  5. Basis set approach in the constrained interpolation profile method

    International Nuclear Information System (INIS)

    Utsumi, T.; Koga, J.; Yabe, T.; Ogata, Y.; Matsunaga, E.; Aoki, T.; Sekine, M.

    2003-07-01

    We propose a simple polynomial basis-set that is easily extendable to any desired higher-order accuracy. This method is based on the Constrained Interpolation Profile (CIP) method and the profile is chosen so that the subgrid scale solution approaches the real solution by the constraints from the spatial derivative of the original equation. Thus the solution even on the subgrid scale becomes consistent with the master equation. By increasing the order of the polynomial, this solution quickly converges. 3rd and 5th order polynomials are tested on the one-dimensional Schroedinger equation and are proved to give solutions a few orders of magnitude higher in accuracy than conventional methods for lower-lying eigenstates. (author)

  6. Interpolation method by whole body computed tomography, Artronix 1120

    International Nuclear Information System (INIS)

    Fujii, Kyoichi; Koga, Issei; Tokunaga, Mitsuo

    1981-01-01

    Reconstruction of whole body CT images by an interpolation method was investigated with rapid scanning. An Artronix 1120 with a fixed collimator was used to obtain CT images every 5 mm. The X-ray source was circularly movable so as to keep the beam perpendicular to the detector. A length of 150 mm was scanned in about 15 min with a slice width of 5 mm. The images were reproduced every 7.5 mm, which could be reduced to every 1.5 mm when necessary. Out of 420 inspections of the chest, abdomen, and pelvis, 5 representative cases for which this method was valuable are described: fibrous histiocytoma of the upper mediastinum, left adrenal adenoma, left ureter fibroma, recurrence of colon cancer in the pelvis, and abscess around the rectum. This method improved the image quality of lesions in the vicinity of the ureters, main artery, and rectum. The time required and the exposure dose were reduced to 50% by this method. (Nakanishi, T.)

  7. Estimating Frequency by Interpolation Using Least Squares Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Changwei Ma

    2015-01-01

    Full Text Available Discrete Fourier transform- (DFT- based maximum likelihood (ML algorithm is an important part of single sinusoid frequency estimation. As signal to noise ratio (SNR increases and is above the threshold value, it will lie very close to Cramer-Rao lower bound (CRLB, which is dependent on the number of DFT points. However, its mean square error (MSE performance is directly proportional to its calculation cost. As a modified version of support vector regression (SVR, least squares SVR (LS-SVR can not only still keep excellent capabilities for generalizing and fitting but also exhibit lower computational complexity. In this paper, therefore, LS-SVR is employed to interpolate on Fourier coefficients of received signals and attain high frequency estimation accuracy. Our results show that the proposed algorithm can make a good compromise between calculation cost and MSE performance under the assumption that the sample size, number of DFT points, and resampling points are already known.
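
    For context, the classical baseline that interpolation-based estimators such as the LS-SVR approach refine is three-point parabolic interpolation around the DFT magnitude peak. This is not the paper's algorithm, only the standard starting point; the test signal is invented:

```python
# Single-tone frequency estimation: locate the DFT magnitude peak, then
# apply three-point parabolic interpolation for a fractional-bin estimate.
import cmath, math

def dft_peak_freq(x, fs):
    n = len(x)
    X = [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
         for k in range(n // 2)]
    mags = [abs(v) for v in X]
    k = max(range(1, len(mags) - 1), key=lambda i: mags[i])
    a, b, c = mags[k - 1], mags[k], mags[k + 1]
    delta = 0.5 * (a - c) / (a - 2 * b + c)   # parabola vertex offset in bins
    return (k + delta) * fs / n

fs, f0, n = 1000.0, 123.4, 64
x = [math.sin(2 * math.pi * f0 * t / fs) for t in range(n)]
print(round(dft_peak_freq(x, fs), 1))  # within a few Hz of the true 123.4 Hz
```

    The residual bias of the parabolic fit is exactly the kind of error that more elaborate interpolators, like the LS-SVR scheme above, aim to reduce.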

  8. Finite element analysis of rotating beams physics based interpolation

    CERN Document Server

    Ganguli, Ranjan

    2017-01-01

    This book addresses the solution of rotating beam free-vibration problems using the finite element method. It provides an introduction to the governing equation of a rotating beam, before outlining the solution procedures using Rayleigh-Ritz, Galerkin and finite element methods. The possibility of improving the convergence of finite element methods through a judicious selection of interpolation functions, which are closer to the problem physics, is also addressed. The book offers a valuable guide for students and researchers working on rotating beam problems – important engineering structures used in helicopter rotors, wind turbines, gas turbines, steam turbines and propellers – and their applications. It can also be used as a textbook for specialized graduate and professional courses on advanced applications of finite element analysis.

  9. Spatial Interpolation of Historical Seasonal Rainfall Indices over Peninsular Malaysia

    Directory of Open Access Journals (Sweden)

    Hassan Zulkarnain

    2018-01-01

    Full Text Available The inconsistency in inter-seasonal rainfall due to climate change will cause a different pattern in the rainfall characteristics and distribution. Peninsular Malaysia is no exception to this inconsistency, which results in extreme events such as floods and water scarcity. This study evaluates the seasonal patterns in rainfall indices such as the total amount of rainfall, the frequency of wet days, rainfall intensity, extreme frequency, and extreme intensity in Peninsular Malaysia. 40 years (1975-2015) of data records have been interpolated using the Inverse Distance Weighted method. The results show that the formation of rainfall characteristics is significant during the Northeast monsoon (NEM), as compared to the Southwest monsoon (SWM). Also, there is high rainfall intensity and frequency related to extremes over the eastern coasts of the Peninsula during the NEM season.
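
    Inverse Distance Weighting as used above can be sketched directly; the station coordinates and rainfall totals are invented, and the power parameter defaults to the commonly used value of 2:

```python
# Inverse Distance Weighted (IDW) interpolation: each known station
# contributes with weight 1 / d^power, where d is its distance to p.
def idw(p, stations, values, power=2.0):
    num = den = 0.0
    for (x, y), v in zip(stations, values):
        d2 = (p[0] - x) ** 2 + (p[1] - y) ** 2
        if d2 == 0.0:
            return v                  # exactly at a station
        w = d2 ** (-power / 2.0)
        num += w * v
        den += w
    return num / den

stations = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]   # gauge locations (invented)
rain_mm = [2400.0, 1800.0, 2100.0]                # annual totals (invented)
print(round(idw((0.5, 0.5), stations, rain_mm), 1))  # equidistant: 2100.0
```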

  10. Improving the Method of River Discharge Calculation Using Cubic Spline Interpolation

    Directory of Open Access Journals (Sweden)

    Budi I. Setiawan

    2007-09-01

    Full Text Available This paper presents an improved method for measuring river discharge using a cubic spline interpolation function. The function is used to describe, as a continuous curve, the river cross-section profile formed from measurements of distance and river depth. With this new method, the cross-sectional area and wetted perimeter of the river are computed more easily, quickly and accurately. Likewise, the inverse function is available via the Newton-Raphson method, which simplifies the computation of area and perimeter when the water level is known. The new method can directly compute the river discharge using the Manning formula and produce a rating curve. This paper presents one example, a discharge measurement of the Rudeng River, Aceh. The river is about 120 m wide and 7 m deep, had a discharge of 41.3 m3/s at the time of measurement, and its rating curve follows the formula Q = 0.1649 × H^2.884, where Q is the discharge (m3/s) and H is the water height above the river bed (m).
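
    Only the rating curve Q = 0.1649 × H^2.884 below comes from the abstract; the Newton-Raphson inversion is a generic sketch of the inverse-function step it mentions, with an arbitrary starting guess and tolerance:

```python
# Rating curve and its Newton-Raphson inverse (stage from discharge).
def q_of_h(h):
    """Rating curve from the abstract: discharge (m3/s) from stage H (m)."""
    return 0.1649 * h ** 2.884

def h_of_q(q, h0=1.0, tol=1e-10):
    """Invert the rating curve by Newton-Raphson iteration."""
    h = h0
    for _ in range(100):
        f = q_of_h(h) - q
        df = 0.1649 * 2.884 * h ** 1.884   # dQ/dH
        step = f / df
        h -= step
        if abs(step) < tol:
            break
    return h

# stage that reproduces the measured discharge of 41.3 m3/s
print(round(h_of_q(41.3), 2))
```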

  11. An algorithm for centerline extraction using natural neighbour interpolation

    DEFF Research Database (Denmark)

    Mioc, Darka; Antón Castro, Francesc/François; Dharmaraj, Girija

    2004-01-01

    …, especially due to the lack of explicit topology in commercial GIS systems. Indeed, each map update might require batch processing of the whole map. Currently, commercial GIS do not offer completely automatic raster/vector conversion even for simple scanned black-and-white maps. Various commercial raster… they need user-defined tolerance settings, which causes difficulties in the extraction of complex spatial features, for example road junctions, curved or irregular lines and complex intersections of linear features. The approach we use here is based on image-processing filtering techniques to extract… to the improvement of data capture and conversion in GIS and to develop a software toolkit for automated raster/vector conversion. The approach is based on computing the skeleton from Voronoi diagrams using natural neighbour interpolation. In this paper we present the algorithm for skeleton extraction from scanned…

  12. Spatial Interpolation of Historical Seasonal Rainfall Indices over Peninsular Malaysia

    Science.gov (United States)

    Hassan, Zulkarnain; Haidir, Ahmad; Saad, Farah Naemah Mohd; Ayob, Afizah; Rahim, Mustaqqim Abdul; Ghazaly, Zuhayr Md.

    2018-03-01

    The inconsistency in inter-seasonal rainfall due to climate change will cause a different pattern in rainfall characteristics and distribution. Peninsular Malaysia is no exception, and this inconsistency results in extreme events such as floods and water scarcity. This study evaluates seasonal patterns in rainfall indices such as the total amount of rainfall, the frequency of wet days, rainfall intensity, extreme frequency, and extreme intensity in Peninsular Malaysia. Forty years (1975-2015) of data records have been interpolated using the Inverse Distance Weighted method. The results show that the formation of rainfall characteristics is significant during the Northeast monsoon (NEM), as compared with the Southwest monsoon (SWM). There is also high rainfall intensity and extreme-related frequency over the eastern coast of the Peninsula during the NEM season.
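
    Inverse Distance Weighted interpolation, the method used for the rainfall indices above, is straightforward to sketch. The station coordinates and rainfall totals below are invented placeholders, not Malaysian data.

    ```python
    import numpy as np

    def idw(xy_obs, values, xy_query, power=2.0, eps=1e-12):
        """Shepard's Inverse Distance Weighted interpolation: each prediction
        is an average of the observations weighted by 1/distance**power."""
        xy_obs = np.asarray(xy_obs, float)
        values = np.asarray(values, float)
        out = np.empty(len(xy_query))
        for i, q in enumerate(np.asarray(xy_query, float)):
            d = np.linalg.norm(xy_obs - q, axis=1)
            if d.min() < eps:                 # query coincides with a station
                out[i] = values[d.argmin()]
                continue
            w = 1.0 / d ** power
            out[i] = np.sum(w * values) / np.sum(w)
        return out

    # Hypothetical seasonal rainfall totals (mm) at four stations (km grid).
    stations = [(0, 0), (10, 0), (0, 10), (10, 10)]
    rain = [2400.0, 2600.0, 2200.0, 2800.0]
    print(idw(stations, rain, [(5, 5), (1, 1), (9, 9)]))
    ```

    At the centre point (5, 5) all stations are equidistant, so IDW returns their plain mean (2500 mm); IDW estimates always stay within the range of the observed values.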

  13. A fast and accurate dihedral interpolation loop subdivision scheme

    Science.gov (United States)

    Shi, Zhuo; An, Yalei; Wang, Zhongshuai; Yu, Ke; Zhong, Si; Lan, Rushi; Luo, Xiaonan

    2018-04-01

    In this paper, we propose a fast and accurate dihedral interpolation Loop subdivision scheme for subdivision surfaces based on triangular meshes. In order to solve the problem of surface shrinkage, we keep the limit condition unchanged, which is important. Extraordinary vertices are handled using modified Butterfly rules. Subdivision schemes are computationally costly as the number of faces grows exponentially at higher levels of subdivision. To address this problem, our approach is to use local surface information to adaptively refine the model. This is achieved simply by changing the threshold value of the dihedral angle parameter, i.e., the angle between the normals of a triangular face and its adjacent faces. We then demonstrate the effectiveness of the proposed method for various 3D graphic triangular meshes, and extensive experimental results show that it can match or exceed the expected results at lower computational cost.
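
    The refinement criterion described above — compare the dihedral angle between a face normal and its neighbours' normals against a threshold — can be sketched as follows. The 10° threshold and the triangles are arbitrary illustrative choices, not values from the paper.

    ```python
    import numpy as np

    def face_normal(v0, v1, v2):
        """Unit normal of a triangle given its three vertices."""
        n = np.cross(v1 - v0, v2 - v0)
        return n / np.linalg.norm(n)

    def dihedral_deg(n1, n2):
        """Angle in degrees between two unit face normals."""
        return np.degrees(np.arccos(np.clip(np.dot(n1, n2), -1.0, 1.0)))

    def needs_refinement(tri, neighbours, threshold_deg=10.0):
        """Adaptive criterion: refine only where the surface bends sharply."""
        n0 = face_normal(*tri)
        return any(dihedral_deg(n0, face_normal(*nb)) > threshold_deg
                   for nb in neighbours)

    tri  = [np.array(p, float) for p in [(0, 0, 0), (1, 0, 0), (0, 1, 0)]]
    flat = [np.array(p, float) for p in [(1, 0, 0), (1, 1, 0), (0, 1, 0)]]
    fold = [np.array(p, float) for p in [(1, 0, 0), (1, 1, 1), (0, 1, 0)]]
    print(needs_refinement(tri, [flat]), needs_refinement(tri, [fold]))  # False True
    ```

    A coplanar neighbour gives a 0° dihedral angle (no refinement), while the folded neighbour tilts the normal by roughly 55° and triggers refinement.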

  14. Differential maps, difference maps, interpolated maps, and long term prediction

    International Nuclear Information System (INIS)

    Talman, R.

    1988-06-01

    Mapping techniques may be thought attractive for the long-term prediction of motion in accelerators, especially because a simple map can approximately represent an arbitrarily complicated lattice. The intention of this paper is to develop prejudices as to the validity of such methods by applying them to a simple, exactly solvable example. It is shown that a numerical interpolation map, such as can be generated in the accelerator tracking program TEAPOT, predicts the evolution more accurately than an analytically derived differential map of the same order. Even so, in the presence of ''appreciable'' nonlinearity, it is shown to be impractical to achieve ''accurate'' prediction beyond some hundreds of cycles of oscillation. This suggests that the value of nonlinear maps is restricted to the parameterization of only the ''leading'' deviation from linearity. 41 refs., 6 figs
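
    The core observation — an interpolated map tracks well for a while, but nonlinearity amplifies any small map error until long-term prediction fails — can be reproduced with a one-dimensional toy. The chaotic logistic map here merely stands in for a nonlinear one-turn lattice map; it is not the example from the paper.

    ```python
    import numpy as np

    def f(x, r=3.7):
        """Exact 'one-turn' map (a chaotic logistic map as a stand-in)."""
        return r * x * (1.0 - x)

    # Tabulate f on a grid and track with linear interpolation, mimicking a
    # numerically interpolated map.
    grid = np.linspace(0.0, 1.0, 1001)
    table = f(grid)

    x_true = x_map = 0.2
    diverged_at = None
    for n in range(1, 1001):
        x_true = f(x_true)
        x_map = np.interp(x_map, grid, table)
        if diverged_at is None and abs(x_true - x_map) > 0.1:
            diverged_at = n
    print("trajectories disagree by >0.1 after", diverged_at, "iterations")
    ```

    The per-step interpolation error is below 1e-6, yet exponential error growth makes the two trajectories macroscopically different within a few dozen iterations, echoing the paper's conclusion that accurate prediction beyond some hundreds of cycles is impractical.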

  15. Hybrid kriging methods for interpolating sparse river bathymetry point data

    Directory of Open Access Journals (Sweden)

    Pedro Velloso Gomes Batista

    Full Text Available ABSTRACT Terrain models that represent riverbed topography are used for analyzing geomorphologic changes, calculating water storage capacity, and making hydrologic simulations. These models are generated by interpolating bathymetry points. River bathymetry is usually surveyed through cross-sections, which may lead to a sparse sampling pattern. Hybrid kriging methods, such as regression kriging (RK and co-kriging (CK employ the correlation with auxiliary predictors, as well as inter-variable correlation, to improve the predictions of the target variable. In this study, we use the orthogonal distance of a (x, y point to the river centerline as a covariate for RK and CK. Given that riverbed elevation variability is abrupt transversely to the flow direction, it is expected that the greater the Euclidean distance of a point to the thalweg, the greater the bed elevation will be. The aim of this study was to evaluate whether the use of the proposed covariate improves the spatial prediction of riverbed topography. To assess this premise, we perform an external validation. Transversal cross-sections are used to make the spatial predictions, and the point data surveyed between sections are used for testing. We compare the results from CK and RK to the ones obtained from ordinary kriging (OK. The validation indicates that RK yields the lowest RMSE among the interpolators. RK predictions represent the thalweg between cross-sections, whereas the other methods under-predict the river thalweg depth. Therefore, we conclude that RK provides a simple approach for enhancing the quality of the spatial prediction from sparse bathymetry data.
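
    A stripped-down version of the regression-kriging idea — model the trend with the distance-to-centerline covariate, then krige the residuals — can be sketched on synthetic data. All choices here (centerline at y = 0, an elevation slope of 0.15 m per metre of distance, an exponential covariance with assumed sill and range, simple kriging of residuals rather than the full RK system) are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def centerline_dist(pts):
        """Orthogonal distance to the centerline, assumed here to be y = 0."""
        return np.abs(pts[:, 1])

    # Synthetic survey: bed elevation rises with distance from the centerline,
    # plus noise standing in for local bed features.
    n = 60
    pts = np.column_stack([rng.uniform(0, 200, n), rng.uniform(-30, 30, n)])
    z = 0.15 * centerline_dist(pts) + rng.normal(0.0, 0.3, n)

    # Step 1: trend by least squares on the covariate.
    A = np.column_stack([np.ones(n), centerline_dist(pts)])
    beta, *_ = np.linalg.lstsq(A, z, rcond=None)
    resid = z - A @ beta

    # Step 2: simple kriging of the residuals (exponential covariance).
    def cov(h, sill=0.09, range_=25.0):
        return sill * np.exp(-h / range_)

    K = cov(np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1))
    K += 1e-6 * np.eye(n)                      # small nugget for stability

    def rk_predict(q):
        """Regression-kriging prediction at query points q (m x 2 array)."""
        trend = beta[0] + beta[1] * centerline_dist(q)
        k = cov(np.linalg.norm(q[:, None, :] - pts[None, :, :], axis=-1))
        return trend + k @ np.linalg.solve(K, resid)
    ```

    In-sample, the kriged residual term pulls predictions back to the surveyed elevations, which is what lets RK represent the thalweg between cross-sections where a trend-only model would smooth it away.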

  16. Improving the accuracy of livestock distribution estimates through spatial interpolation.

    Science.gov (United States)

    Bryssinckx, Ward; Ducheyne, Els; Muhwezi, Bernard; Godfrey, Sunday; Mintiens, Koen; Leirs, Herwig; Hendrickx, Guy

    2012-11-01

    Animal distribution maps serve many purposes, such as estimating the transmission risk of zoonotic pathogens to both animals and humans. The reliability and usability of such maps depend strongly on the quality of the input data. However, decisions on how to perform livestock surveys are often based on previous work without considering possible consequences. A better understanding of the impact of different sample designs and processing steps on the accuracy of livestock distribution estimates was acquired through iterative experiments using detailed survey data. The importance of sample size, sample design, and aggregation is demonstrated, and spatial interpolation is presented as a potential way to improve cattle number estimates. As expected, an increasing sample size increased the precision of cattle number estimates, but these improvements were mainly seen when the initial sample size was relatively low (e.g. a median relative error decrease of 0.04% per sampled parish for sample sizes below 500 parishes). For higher sample sizes, the added value of further increasing the number of samples declined rapidly (e.g. a median relative error decrease of 0.01% per sampled parish for sample sizes above 500 parishes). When a two-stage stratified sample design was applied to yield more evenly distributed samples, accuracy levels were higher for low sample densities and stabilised at lower sample sizes compared with one-stage stratified sampling. Aggregating the resulting cattle number estimates yielded significantly more accurate results because under- and over-estimates average out (e.g. when aggregating cattle number estimates from subcounty to district level). When spatial interpolation was used to fill in missing values in non-sampled areas, accuracy improved remarkably, especially for low sample sizes and spatially evenly distributed samples (e.g. P < 0.001 for a sample of 170 parishes using one-stage stratified sampling and aggregation at district level).
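
    The diminishing-returns behaviour of sample size can be reproduced with a quick Monte Carlo sketch. The "parish" cattle counts below are synthetic (log-normal), so the error magnitudes are illustrative only, not the study's figures.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic cattle counts for 1000 "parishes" (heavy-tailed, like herds).
    parishes = rng.lognormal(mean=6.0, sigma=1.0, size=1000)
    true_total = parishes.sum()

    def median_rel_error(sample_size, trials=200):
        """Median relative error of the estimated total over repeated simple
        random samples of a given size."""
        errs = []
        for _ in range(trials):
            s = rng.choice(parishes, size=sample_size, replace=False)
            est = s.mean() * len(parishes)   # scale the sample mean up
            errs.append(abs(est - true_total) / true_total)
        return float(np.median(errs))

    for size in (50, 200, 500, 800):
        print(size, round(median_rel_error(size), 4))
    ```

    The error falls steeply up to a few hundred sampled parishes and then flattens, the same qualitative pattern as the reported 0.04% vs 0.01% per-parish improvements below and above 500 parishes.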

  17. Integrating seawater desalination and wastewater reclamation forward osmosis process using thin-film composite mixed matrix membrane with functionalized carbon nanotube blended polyethersulfone support layer.

    Science.gov (United States)

    Choi, Hyeon-Gyu; Son, Moon; Choi, Heechul

    2017-10-01

    A thin-film composite mixed matrix membrane (TFC MMM) with functionalized carbon nanotubes (fCNT) blended into a polyethersulfone (PES) support layer was synthesized via interfacial polymerization and phase inversion. This membrane was tested for the first time in a lab-scale forward osmosis (FO) process integrating seawater desalination and wastewater reclamation. The water flux of the TFC MMM was 72% higher than that of the TFC membrane owing to enhanced hydrophilicity. Although the TFC MMM showed lower water flux than a commercial TFC membrane, it exhibited enhanced reverse salt flux selectivity (RSFS), a measure of membrane permselectivity, compared with the TFC membrane (15% higher) and the commercial TFC membrane (4% higher). In an effluent organic matter (EfOM) fouling test, the TFC MMM showed 16% less normalized flux decline than the TFC membrane and 8% less than the commercial TFC membrane, owing to the fCNT enhancing the repulsive foulant-membrane interaction through a more negatively charged membrane surface. After 10 min of physical cleaning, the TFC MMM recovered a higher normalized flux than the TFC membrane (6%) and the commercial TFC membrane (4%), which was also supported by visualized characterization of the fouling layer. This study presents the first application of a TFC MMM to an integrated seawater desalination and wastewater reclamation FO process. It can be concluded that EfOM fouling of the TFC MMM was suppressed by the repulsive foulant-membrane interaction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. A method to generate fully multi-scale optimal interpolation by combining efficient single process analyses, illustrated by a DINEOF analysis spiced with a local optimal interpolation

    Directory of Open Access Journals (Sweden)

    J.-M. Beckers

    2014-10-01

    Full Text Available We present a method in which the optimal interpolation of multi-scale processes can be expanded into a succession of simpler interpolations. First, we prove how the optimal analysis of a superposition of two processes can be obtained by different mathematical formulations involving iterations and analysis focusing on a single process. From the different mathematical equivalent formulations, we then select the most efficient ones by analyzing the behavior of the different possibilities in a simple and well-controlled test case. The clear guidelines deduced from this experiment are then applied to a real situation in which we combine large-scale analysis of hourly Spinning Enhanced Visible and Infrared Imager (SEVIRI satellite images using data interpolating empirical orthogonal functions (DINEOF with a local optimal interpolation using a Gaussian covariance. It is shown that the optimal combination indeed provides the best reconstruction and can therefore be exploited to extract the maximum amount of useful information from the original data.
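
    The succession-of-simpler-analyses idea can be illustrated in one dimension: a smooth large-scale fit (standing in for the DINEOF step) followed by a local optimal interpolation of its residuals with a short-range Gaussian covariance. The synthetic field and all scales and variances are assumptions made for this sketch.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Truth = large-scale wave + a small-scale local feature.
    x = np.linspace(0.0, 100.0, 400)
    truth = np.sin(2 * np.pi * x / 100) + 0.5 * np.exp(-((x - 30.0) ** 2) / 8.0)

    idx = rng.choice(len(x), 60, replace=False)
    xo, yo = x[idx], truth[idx] + rng.normal(0.0, 0.05, 60)

    # Step 1: large-scale analysis (low-order polynomial as a stand-in).
    coef = np.polyfit(xo, yo, 4)
    large = np.polyval(coef, x)
    resid_o = yo - np.polyval(coef, xo)

    # Step 2: local optimal interpolation of the residuals.
    L, sig2, noise = 4.0, 0.05, 0.05 ** 2   # length scale, signal var, obs var
    def gcov(a, b):
        return sig2 * np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * L ** 2))

    B = gcov(xo, xo) + noise * np.eye(len(xo))
    combined = large + gcov(x, xo) @ np.linalg.solve(B, resid_o)

    rmse_large = np.sqrt(np.mean((large - truth) ** 2))
    rmse_comb = np.sqrt(np.mean((combined - truth) ** 2))
    print(round(rmse_large, 4), "->", round(rmse_comb, 4))
    ```

    The combined analysis recovers the narrow feature that the large-scale step smooths over, which is the point of splitting a multi-scale optimal interpolation into a succession of single-process analyses.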

  19. Can a polynomial interpolation improve on the Kaplan-Yorke dimension?

    International Nuclear Information System (INIS)

    Richter, Hendrik

    2008-01-01

    The Kaplan-Yorke dimension can be derived using a linear interpolation between an h-dimensional Lyapunov exponent λ^(h) > 0 and an (h+1)-dimensional Lyapunov exponent λ^(h+1) < 0. In this Letter, we use a polynomial interpolation to obtain generalized Lyapunov dimensions and study the relationships among them for higher-dimensional systems
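
    For reference, the linear-interpolation case that the Letter generalizes can be computed directly from an ordered Lyapunov spectrum: D = j + S_j / |λ_{j+1}|, where j is the largest index at which the cumulative sum S_j of exponents is still non-negative. The Lorenz exponents used in the demo are the commonly quoted approximate values.

    ```python
    import numpy as np

    def kaplan_yorke(spectrum):
        """Kaplan-Yorke (Lyapunov) dimension via linear interpolation between
        the cumulative exponent sums S_j >= 0 and S_{j+1} < 0."""
        lam = np.sort(np.asarray(spectrum, float))[::-1]   # descending order
        csum = np.cumsum(lam)
        nonneg = np.nonzero(csum >= 0.0)[0]
        if len(nonneg) == 0:                 # fully contracting: dimension 0
            return 0.0
        j = nonneg[-1]
        if j == len(lam) - 1:                # sum never goes negative
            return float(len(lam))
        return (j + 1) + csum[j] / abs(lam[j + 1])

    # Lorenz attractor spectrum (classic parameters), approximately:
    print(round(kaplan_yorke([0.906, 0.0, -14.572]), 3))   # 2.062
    ```

    The polynomial-interpolation dimensions proposed in the Letter reduce to this value in the linear case.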

  20. Kriging interpolation in seismic attribute space applied to the South Arne Field, North Sea

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Mosegaard, Klaus; Schiøtt, Christian

    2010-01-01

    Seismic attributes can be used to guide interpolation in-between and extrapolation away from well log locations using, for example, linear regression, neural networks, and kriging. Kriging-based estimation methods (and most other types of interpolation/extrapolation techniques) are intimately linked...