WorldWideScience

Sample records for point-spread function models

  1. Plasmon point spread functions: How do we model plasmon-mediated emission processes?

    Science.gov (United States)

    Willets, Katherine A.

    2014-02-01

    A major challenge with studying plasmon-mediated emission events is the small size of plasmonic nanoparticles relative to the wavelength of light. Objects smaller than roughly half the wavelength of light will appear as diffraction-limited spots in far-field optical images, presenting a significant experimental challenge for studying plasmonic processes on the nanoscale. Super-resolution imaging has recently been applied to plasmonic nanosystems and allows plasmon-mediated emission to be resolved on the order of ˜5 nm. In super-resolution imaging, a diffraction-limited spot is fit to some model function in order to calculate the position of the emission centroid, which represents the location of the emitter. However, the accuracy of the centroid position strongly depends on how well the fitting function describes the data. This Perspective discusses the commonly used two-dimensional Gaussian fitting function applied to super-resolution imaging of plasmon-mediated emission, then introduces an alternative model based on dipole point spread functions. The two fitting models are compared and contrasted for super-resolution imaging of nanoparticle scattering/luminescence, surface-enhanced Raman scattering, and surface-enhanced fluorescence.
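
    A minimal sketch (not from the Perspective itself) of the commonly used two-dimensional Gaussian fitting step described above, applied to a simulated diffraction-limited spot; the symmetric Gaussian form, pixel grid, and all numbers are illustrative assumptions, and the dipole-PSF alternative discussed in the article is not shown.

        # Sketch: estimate an emission centroid by fitting a symmetric 2D Gaussian
        # to a simulated diffraction-limited spot (all values illustrative).
        import numpy as np
        from scipy.optimize import curve_fit

        def gauss2d(coords, x0, y0, sigma, amplitude, offset):
            x, y = coords
            return (offset + amplitude *
                    np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * sigma**2))).ravel()

        # Simulated 15x15 pixel spot with shot noise.
        x, y = np.meshgrid(np.arange(15), np.arange(15))
        truth = (7.3, 6.8, 1.6, 800.0, 20.0)           # x0, y0, sigma, amplitude, offset
        spot = np.random.poisson(gauss2d((x, y), *truth).reshape(15, 15))

        p0 = (7.0, 7.0, 2.0, spot.max(), spot.min())   # rough initial guess
        popt, pcov = curve_fit(gauss2d, (x, y), spot.ravel(), p0=p0)
        print("fitted centroid:", popt[0], popt[1])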

  2. Point spread function modeling and image restoration for cone-beam CT

    International Nuclear Information System (INIS)

    Zhang Hua; Shi Yikai; Huang Kuidong; Xu Zhe

    2015-01-01

    X-ray cone-beam computed tomography (CT) offers high efficiency and precision and is widely used in medical imaging and industrial non-destructive testing, but inherent imaging degradation reduces the quality of CT images. To address projection image degradation and restoration in cone-beam CT, a point spread function (PSF) modeling method is proposed first. A general PSF model of cone-beam CT is established; based on it, the PSF under arbitrary scanning conditions can be calculated directly for projection image restoration without additional measurements, which greatly improves the practical convenience of cone-beam CT. Secondly, a projection image restoration algorithm based on pre-filtering and pre-segmentation is proposed, which makes the edge contours in projection and slice images clearer after restoration while keeping the noise at a level equivalent to that of the original images. Finally, experiments verified the feasibility and effectiveness of the proposed methods. (authors)

  3. Fluorescence microscopy point spread function model accounting for aberrations due to refractive index variability within a specimen.

    Science.gov (United States)

    Ghosh, Sreya; Preza, Chrysanthe

    2015-07-01

    A three-dimensional (3-D) point spread function (PSF) model for wide-field fluorescence microscopy, suitable for imaging samples with variable refractive index (RI) in multilayered media, is presented. This PSF model is a key component for accurate 3-D image restoration of thick biological samples, such as lung tissue. Microscope- and specimen-derived parameters are combined with a rigorous vectorial formulation to obtain a new PSF model that accounts for additional aberrations due to specimen RI variability. Experimental evaluation and verification of the PSF model was accomplished using images from 175-nm fluorescent beads in a controlled test sample. Fundamental experimental validation of the advantage of using improved PSFs in depth-variant restoration was accomplished by restoring experimental data from beads (6  μm in diameter) mounted in a sample with RI variation. In the investigated study, improvement in restoration accuracy in the range of 18 to 35% was observed when PSFs from the proposed model were used over restoration using PSFs from an existing model. The new PSF model was further validated by showing that its prediction compares to an experimental PSF (determined from 175-nm beads located below a thick rat lung slice) with a 42% improved accuracy over the current PSF model prediction.

  4. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    International Nuclear Information System (INIS)

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo; Falcon, Carles; Ros, Domènec; Pavía, Javier

    2015-01-01

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  5. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    Energy Technology Data Exchange (ETDEWEB)

    Pino, Francisco [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036, Spain and Servei de Física Mèdica i Protecció Radiològica, Institut Català d’Oncologia, L’Hospitalet de Llobregat 08907 (Spain); Roé, Nuria [Unitat de Biofísica, Facultat de Medicina, Universitat de Barcelona, Barcelona 08036 (Spain); Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es [Fundación Ramón Domínguez, Complexo Hospitalario Universitario de Santiago de Compostela 15706, Spain and Grupo de Imagen Molecular, Instituto de Investigacións Sanitarias de Santiago de Compostela (IDIS), Galicia 15782 (Spain); Falcon, Carles; Ros, Domènec [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 08036, Spain and CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); Pavía, Javier [Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona 080836 (Spain); CIBER en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Barcelona 08036 (Spain); and Servei de Medicina Nuclear, Hospital Clínic, Barcelona 08036 (Spain)

    2015-02-15

    Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4–2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and recovery

  6. Point-spread function in depleted and partially depleted CCDs

    International Nuclear Information System (INIS)

    Groom, D.E.; Eberhard, P.H.; Holland, S.E.; Levi, M.E.; Palaio, N.P.; Perlmutter, S.; Stover, R.J.; Wei, M.

    1999-01-01

    The point spread function obtainable in an astronomical instrument using CCD readout is limited by a number of factors, among them the lateral diffusion of charge before it is collected in the potential wells. The authors study this problem both theoretically and experimentally, with emphasis on the thick CCDs on high-resistivity n-type substrates being developed at Lawrence Berkeley National Laboratory.

  7. Finding Exoplanets Using Point Spread Function Photometry on Kepler Data

    Science.gov (United States)

    Amaro, Rachael Christina; Scolnic, Daniel; Montet, Ben

    2018-01-01

    The Kepler Mission has been able to identify over 5,000 exoplanet candidates using mostly aperture photometry. Despite the impressive number of discoveries, a large portion of Kepler’s data set is neglected due to limitations using aperture photometry on faint sources in crowded fields. We present an alternate method that overcomes those restrictions — Point Spread Function (PSF) photometry. This powerful tool, which is already used in supernova astronomy, was used for the first time on Kepler Full Frame Images, rather than just looking at the standard light curves. We present light curves for stars in our data set and demonstrate that PSF photometry can at least get down to the same photometric precision as aperture photometry. As a check for the robustness of this method, we change small variables (stamp size, interpolation amount, and noise correction) and show that the PSF light curves maintain the same repeatability across all combinations for one of our models. We also present our progress in the next steps of this project, including the creation of a PSF model from the data itself and applying the model across the entire data set at once.

  8. Point spread functions and deconvolution of ultrasonic images.

    Science.gov (United States)

    Dalitz, Christoph; Pohle-Fröhlich, Regina; Michalk, Thorsten

    2015-03-01

    This article investigates the restoration of ultrasonic pulse-echo C-scan images by means of deconvolution with a point spread function (PSF). The deconvolution concept from linear system theory (LST) is linked to the wave equation formulation of the imaging process, and an analytic formula for the PSF of planar transducers is derived. For this analytic expression, different numerical and analytic approximation schemes for evaluating the PSF are presented. By comparing simulated images with measured C-scan images, we demonstrate that the assumptions of LST in combination with our formula for the PSF are a good model for the pulse-echo imaging process. To reconstruct the object from a C-scan image, we compare different deconvolution schemes: the Wiener filter, the ForWaRD algorithm, and the Richardson-Lucy algorithm. The best results are obtained with the Richardson-Lucy algorithm with total variation regularization. For distances greater or equal twice the near field distance, our experiments show that the numerically computed PSF can be replaced with a simple closed analytic term based on a far field approximation.
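
    A minimal sketch (assumed, not the authors' code) of plain Richardson-Lucy deconvolution with a known PSF, the scheme that gave the best results in the article; the total-variation regularization used there is omitted, and the Gaussian PSF and toy object below are illustrative stand-ins for the derived planar-transducer PSF.

        # Sketch: Richardson-Lucy deconvolution of a blurred image with a known PSF.
        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
            estimate = np.full_like(image, image.mean())
            psf_mirror = psf[::-1, ::-1]
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode='same')
                ratio = image / (blurred + eps)
                estimate *= fftconvolve(ratio, psf_mirror, mode='same')
            return estimate

        # Toy example: blur a synthetic "object" with a Gaussian PSF and restore it.
        yy, xx = np.mgrid[-7:8, -7:8]
        psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2)); psf /= psf.sum()
        obj = np.zeros((64, 64)); obj[20:24, 30:34] = 1.0; obj[45, 10] = 5.0
        blurred = fftconvolve(obj, psf, mode='same') + 1e-3
        restored = richardson_lucy(blurred, psf, n_iter=100)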

  9. Proper Analytic Point Spread Function for Lateral Modulation

    Science.gov (United States)

    Sumi, Chikayoshi; Shimizu, Kunio; Matsui, Norihiko

    2010-07-01

    For ultrasonic lateral modulation for the imaging and measurement of tissue motion, envelope shapes of the point spread function (PSF) better than that of a parabolic function are searched for among analytic functions or windows, on the basis of previously obtained knowledge of the ideal PSF shape, i.e., a large full width at half maximum and short feet. Through simulation of displacement vector measurement, better shapes are determined. As a better shape, a new window is obtained from a Tukey window by replacing its Hanning windows with power functions of order higher than two. The measurement accuracies obtained rank as follows: the new window > rectangular window > power function with a higher order > parabolic function > Akaike window.

  10. Point spread function engineering for iris recognition system design.

    Science.gov (United States)

    Ashok, Amit; Neifeld, Mark A

    2010-04-01

    Undersampling in the detector array degrades the performance of iris-recognition imaging systems. We find that an undersampling of 8 x 8 reduces the iris-recognition performance by nearly a factor of 4 (on the CASIA iris database), as measured by the false rejection ratio (FRR) metric. We employ optical point spread function (PSF) engineering via a Zernike phase mask, in conjunction with multiple subpixel-shifted image measurements (frames), to mitigate the effect of undersampling. A task-specific optimization framework is used to engineer the optical PSF and optimize the postprocessing parameters to minimize the FRR. The optimized Zernike phase enhanced lens (ZPEL) imager design with one frame yields an improvement of nearly 33% relative to a thin observation module by bounded optics (TOMBO) imager with one frame. With four frames the optimized ZPEL imager achieves an FRR equal to that of the conventional imager without undersampling. Further, the ZPEL imager design using 16 frames yields an FRR that is actually 15% lower than that obtained with the conventional imager without undersampling.
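
    A minimal sketch (assumed) of how an engineered PSF can be computed from a pupil carrying a Zernike phase mask via a Fourier transform of the pupil function; the particular Zernike term (trefoil) and its strength are illustrative and are not the optimized ZPEL design of the paper.

        # Sketch: incoherent PSF of a circular pupil with a Zernike phase mask.
        import numpy as np

        N = 256
        x = np.linspace(-1, 1, N)
        X, Y = np.meshgrid(x, x)
        rho, theta = np.hypot(X, Y), np.arctan2(Y, X)
        aperture = (rho <= 1.0).astype(float)

        # Example Zernike term: trefoil Z(3,3) = rho^3 cos(3 theta), strength in waves.
        strength_waves = 2.0
        phase = 2 * np.pi * strength_waves * rho**3 * np.cos(3 * theta)

        pupil = aperture * np.exp(1j * phase)
        psf = np.abs(np.fft.fftshift(np.fft.fft2(pupil, s=(4 * N, 4 * N))))**2
        psf /= psf.sum()   # normalized incoherent PSF on a 4x zero-padded grid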

  11. High precision wavefront control in point spread function engineering for single emitter localization

    NARCIS (Netherlands)

    Siemons, M.E.; Thorsen, R.Ø; Smith, C.S.; Stallinga, S.

    2018-01-01

    Point spread function (PSF) engineering is used in single emitter localization to measure the emitter position in 3D and possibly other parameters such as the emission color or dipole orientation as well. Advanced PSF models such as spline fits to experimental PSFs or the vectorial PSF model can

  12. Estimation Methods of the Point Spread Function Axial Position: A Comparative Computational Study

    Directory of Open Access Journals (Sweden)

    Javier Eduardo Diaz Zamboni

    2017-01-01

    The precise knowledge of the point spread function is central for any imaging system characterization. In fluorescence microscopy, point spread function (PSF) determination has become a common and obligatory task for each new experimental device, mainly due to its strong dependence on acquisition conditions. During the last decade, algorithms have been developed for the precise calculation of the PSF, which fit model parameters that describe image formation on the microscope to experimental data. In order to contribute to this subject, a comparative study of three parameter estimation methods is reported, namely: I-divergence minimization (MIDIV), maximum likelihood (ML) and non-linear least squares (LSQR). They were applied to the estimation of the point source position on the optical axis, using a physical model. The methods' performance was evaluated under different conditions and noise levels using synthetic images and considering success percentage, iteration number, computation time, accuracy and precision. The main results showed that the axial position estimation requires a high SNR to achieve an acceptable success level, and a higher one still to approach the estimation error lower bound. ML achieved a higher success percentage at lower SNR compared to MIDIV and LSQR with an intrinsic noise source. Only the ML and MIDIV methods achieved the error lower bound, but only with data belonging to the optical axis and high SNR. Extrinsic noise sources worsened the success percentage, but, for any given method, no difference was found between the noise sources.

  13. Fast and accurate three-dimensional point spread function computation for fluorescence microscopy.

    Science.gov (United States)

    Li, Jizhou; Xue, Feng; Blu, Thierry

    2017-06-01

    The point spread function (PSF) plays a fundamental role in fluorescence microscopy. A realistic and accurately calculated PSF model can significantly improve the performance in 3D deconvolution microscopy and also the localization accuracy in single-molecule microscopy. In this work, we propose a fast and accurate approximation of the Gibson-Lanni model, which has been shown to represent the PSF suitably under a variety of imaging conditions. We express the Kirchhoff's integral in this model as a linear combination of rescaled Bessel functions, thus providing an integral-free way for the calculation. The explicit approximation error in terms of parameters is given numerically. Experiments demonstrate that the proposed approach results in a significantly smaller computational time compared with current state-of-the-art techniques to achieve the same accuracy. This approach can also be extended to other microscopy PSF models.
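
    A minimal sketch (assumed) of the basic idea behind the fast approximation described above: a numerically evaluated scalar diffraction integral is fitted with a small linear combination of rescaled Bessel functions J0. The aberration term, basis scaling, and grid are illustrative; the actual Gibson-Lanni parameters and error control of the paper are not reproduced.

        # Sketch: approximate a radial diffraction integral by a sum of rescaled J0's.
        import numpy as np
        from scipy.special import j0

        rho = np.linspace(0.0, 1.0, 2000)          # normalized pupil radius
        r = np.linspace(0.0, 20.0, 400)            # radial image coordinate (a.u.)
        defocus_phase = np.exp(1j * 8.0 * rho**2)  # illustrative aberration/defocus term

        # Reference integral A(r) = integral of P(rho) J0(r*rho) rho d(rho).
        integrand = defocus_phase[None, :] * j0(np.outer(r, rho)) * rho[None, :]
        reference = np.trapz(integrand, x=rho, axis=1)

        # Least-squares fit with M rescaled Bessel functions J0(s_m * r).
        scales = np.linspace(0.05, 1.0, 12)
        basis = j0(np.outer(r, scales))
        coef_re, *_ = np.linalg.lstsq(basis, reference.real, rcond=None)
        coef_im, *_ = np.linalg.lstsq(basis, reference.imag, rcond=None)
        approx = basis @ (coef_re + 1j * coef_im)
        print("max abs error:", np.abs(approx - reference).max())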

  14. Scattering and the Point Spread Function of the New Generation Space Telescope

    Science.gov (United States)

    Schreur, Julian J.

    1996-01-01

    Preliminary design work on the New Generation Space Telescope (NGST) is currently under way. This telescope is envisioned as a lightweight, deployable Cassegrain reflector with an aperture of 8 meters and an effective focal length of 80 meters. It is to be folded into a small-diameter package for launch by an Atlas booster, and unfolded in orbit. The primary is to consist of an octagon with a hole at the center, and with eight segments arranged in a flower petal configuration about the octagon. The corners of the petal-shaped segments are to be trimmed so that the package will fit atop the Atlas booster. This mirror, along with its secondary, will focus the light from a point source into an image which is spread from a point by diffraction effects, figure errors, and scattering of light from the surface. The distribution of light in the image of a point source is called a point spread function (PSF). The obstruction of the incident light by the secondary mirror and its support structure, the trimmed corners of the petals, and the grooves between the segments all cause the diffraction pattern characterizing an ideal point spread function to be changed, with the trimmed corners causing the rings of the Airy pattern to become broken up, and the linear grooves causing diffraction spikes running radially away from the central spot, or Airy disk. Any figure errors the mirror segments may have, or any errors in aligning the petals with the central octagon, will also spread the light out from the ideal point spread function. A point spread function for a mirror the size of the NGST and an incident wavelength of 900 nm is considered. Most of the light is confined in a circle with a diameter of 0.05 arc seconds. The ring pattern ranges in intensity from 10^-2 near the center to 10^-6 near the edge of the plotted field, and can be clearly discerned in a log plot of the intensity. The total fraction of the light scattered from this point spread function is called

  15. In-flight calibration of the Swift XRT Point Spread Function

    International Nuclear Information System (INIS)

    Moretti, A.; Campana, S.; Chincarini, G.; Covino, S.; Romano, P.; Tagliaferri, G.; Capalbi, M.; Giommi, P.; Perri, M.; Cusumano, G.; La Parola, V.; Mangano, V.; Mineo, T.

    2006-01-01

    The Swift X-ray Telescope (XRT) is designed to make astrometric, spectroscopic and photometric observations of the X-ray emission from Gamma-ray bursts and their afterglows, in the energy band 0.2-10 keV. Here we report the results of the analysis of Swift XRT Point Spread Function (PSF) as measured in the first four months of the mission during the instrument calibration phase. The analysis includes the study of the PSF of different point-like sources both on-axis and off-axis with different spectral properties. We compare the in-flight data with the expectations from the on-ground calibration. On the basis of the calibration data we built an analytical model to reproduce the PSF as a function of the energy and the source position within the detector which can be applied in the PSF correction calculation for any extraction region geometry. All the results of this study are implemented in the standard public software
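
    A minimal sketch of what an analytical radial PSF model of this kind can look like. The record does not state the functional form; a King profile is a common analytical choice for X-ray telescope PSFs and is used here purely as an assumption with illustrative parameters, not the released Swift XRT calibration (whose parameters and their energy/position dependence live in the CALDB).

        # Sketch: King-profile radial PSF and its encircled energy fraction,
        # the quantity typically needed for PSF/aperture corrections.
        import numpy as np

        def king_psf(r_arcsec, r_core, beta):
            """Radial surface-brightness profile, un-normalized (assumed form)."""
            return (1.0 + (r_arcsec / r_core)**2) ** (-beta)

        def encircled_energy(radius, r_core, beta, n=4000):
            """Approximate fraction of PSF counts inside 'radius' (arcsec)."""
            r = np.linspace(0.0, radius, n)
            inner = king_psf(r, r_core, beta) * 2.0 * np.pi * r
            r_tot = np.linspace(0.0, 50.0 * r_core, 20 * n)
            total = king_psf(r_tot, r_core, beta) * 2.0 * np.pi * r_tot
            return np.trapz(inner, r) / np.trapz(total, r_tot)

        print(encircled_energy(20.0, r_core=5.0, beta=1.5))   # illustrative numbers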

  16. In-flight calibration of the Hitomi Soft X-ray Spectrometer. (2) Point spread function

    Science.gov (United States)

    Maeda, Yoshitomo; Sato, Toshiki; Hayashi, Takayuki; Iizuka, Ryo; Angelini, Lorella; Asai, Ryota; Furuzawa, Akihiro; Kelley, Richard; Koyama, Shu; Kurashima, Sho; Ishida, Manabu; Mori, Hideyuki; Nakaniwa, Nozomi; Okajima, Takashi; Serlemitsos, Peter J.; Tsujimoto, Masahiro; Yaqoob, Tahir

    2018-03-01

    We present results of inflight calibration of the point spread function of the Soft X-ray Telescope that focuses X-rays onto the pixel array of the Soft X-ray Spectrometer system. We make a full array image of a point-like source by extracting a pulsed component of the Crab nebula emission. Within the limited statistics afforded by an exposure time of only 6.9 ks and limited knowledge of the systematic uncertainties, we find that the raytracing model of 1.2 arcmin half-power diameter is consistent with an image of the observed event distributions across pixels. The ratio between the Crab pulsar image and the raytracing shows scatter from pixel to pixel that is 40% or less in all except one pixel. The pixel-to-pixel ratio has a spread of 20%, on average, for the 15 edge pixels, with an averaged statistical error of 17% (1 σ). In the central 16 pixels, the corresponding ratio is 15% with an error of 6%.

  17. The point-spread function measure of resolution for the 3-D electrical resistivity experiment

    Science.gov (United States)

    Oldenborger, Greg A.; Routh, Partha S.

    2009-02-01

    The solution appraisal component of the inverse problem involves investigation of the relationship between our estimated model and the actual model. However, full appraisal is difficult for large 3-D problems such as electrical resistivity tomography (ERT). We tackle the appraisal problem for 3-D ERT via the point-spread functions (PSFs) of the linearized resolution matrix. The PSFs represent the impulse response of the inverse solution and quantify our parameter-specific resolving capability. We implement an iterative least-squares solution of the PSF for the ERT experiment, using on-the-fly calculation of the sensitivity via an adjoint integral equation with stored Green's functions and subgrid reduction. For a synthetic example, analysis of individual PSFs demonstrates the truly 3-D character of the resolution. The PSFs for the ERT experiment are Gaussian-like in shape, with directional asymmetry and significant off-diagonal features. Computation of attributes representative of the blurring and localization of the PSF reveals significant spatial dependence of the resolution, with some correlation to the electrode infrastructure. Application to a time-lapse ground-water monitoring experiment demonstrates the utility of the PSF for assessing feature discrimination, predicting artefacts and identifying model dependence of resolution. For a judicious selection of model parameters, we analyse the PSFs and their attributes to quantify the case-specific localized resolving capability and its variability over regions of interest. We observe approximate interborehole resolving capability of less than 1-1.5 m in the vertical direction and less than 1-2.5 m in the horizontal direction. Resolving capability deteriorates significantly outside the electrode infrastructure.
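
    A minimal sketch (assumed) of computing one PSF, i.e. one column of the linearized resolution matrix, for a small regularized least-squares problem. The ERT sensitivities, adjoint machinery, and subgrid reduction of the paper are replaced by a random toy Jacobian; the spread of the resulting column around the chosen parameter quantifies blurring.

        # Sketch: PSF of model parameter k for a Tikhonov-regularized inverse problem,
        # R = (J^T J + beta I)^-1 J^T J, column k = R e_k.
        import numpy as np

        rng = np.random.default_rng(0)
        n_data, n_model = 120, 300
        J = rng.standard_normal((n_data, n_model))   # toy Jacobian (sensitivity matrix)
        beta = 10.0                                  # regularization weight

        k = 150
        rhs = J.T @ J[:, k]                          # J^T J e_k
        psf_k = np.linalg.solve(J.T @ J + beta * np.eye(n_model), rhs)

        print("self-resolution R[k,k]:", psf_k[k])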

  18. The point spread function of the human head and its implications for transcranial current stimulation

    International Nuclear Information System (INIS)

    Dmochowski, Jacek P; Bikson, Marom; Parra, Lucas C

    2012-01-01

    Rational development of transcranial current stimulation (tCS) requires solving the ‘forward problem’: the computation of the electric field distribution in the head resulting from the application of scalp currents. Derivation of forward models has represented a major effort in brain stimulation research, with model complexity ranging from spherical shells to individualized head models based on magnetic resonance imagery. Despite such effort, an easily accessible benchmark head model is greatly needed when individualized modeling is either undesired (to observe general population trends as opposed to individual differences) or unfeasible. Here, we derive a closed-form linear system which relates the applied current to the induced electric potential. It is shown that in the spherical harmonic (Fourier) domain, a simple scalar multiplication relates the current density on the scalp to the electric potential in the brain. Equivalently, the current density in the head follows as the spherical convolution between the scalp current distribution and the point spread function of the head, which we derive. Thus, if one knows the spherical harmonic representation of the scalp current (i.e. the electrode locations and current intensity to be employed), one can easily compute the resulting electric field at any point inside the head. Conversely, one may also readily determine the scalp current distribution required to generate an arbitrary electric field in the brain (the ‘backward problem’ in tCS). We demonstrate the simplicity and utility of the model with a series of characteristic curves which sweep across a variety of stimulation parameters: electrode size, depth of stimulation, head size and anode–cathode separation. Finally, theoretically optimal montages for targeting an infinitesimal point in the brain are shown. (paper)
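
    A minimal sketch (assumed) of the spherical-harmonic "scalar multiplication" picture described above: expand the scalp current in spherical harmonics, multiply each degree by a transfer coefficient, and resynthesize. The per-degree coefficients h_l of the real head model are not reproduced here; the low-pass h_l below is a hypothetical placeholder used only to illustrate the forward computation.

        # Sketch: forward evaluation V(theta, phi) = sum_{l,m} h_l * I_lm * Y_lm.
        import numpy as np
        from scipy.special import sph_harm

        L_max = 20
        h = 1.0 / (1.0 + np.arange(L_max + 1))**2     # placeholder transfer function

        def forward_potential(current_coeffs, theta, phi):
            """theta: polar angle, phi: azimuth (a real field would need
            conjugate-symmetric coefficients; skipped in this sketch)."""
            v = np.zeros_like(theta, dtype=complex)
            for l in range(L_max + 1):
                for m in range(-l, l + 1):
                    v += h[l] * current_coeffs[(l, m)] * sph_harm(m, l, phi, theta)
            return v.real

        # Example: a scalp current pattern with only a few nonzero harmonics.
        coeffs = {(l, m): 0.0 for l in range(L_max + 1) for m in range(-l, l + 1)}
        coeffs[(3, 2)] = 1.0
        coeffs[(5, 0)] = 0.5
        theta = np.array([0.3, 1.0]); phi = np.array([0.0, 2.0])
        print(forward_potential(coeffs, theta, phi))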

  19. High precision wavefront control in point spread function engineering for single emitter localization

    Science.gov (United States)

    Siemons, M.; Hulleman, C. N.; Thorsen, R. Ø.; Smith, C. S.; Stallinga, S.

    2018-04-01

    Point Spread Function (PSF) engineering is used in single emitter localization to measure the emitter position in 3D and possibly other parameters such as the emission color or dipole orientation as well. Advanced PSF models such as spline fits to experimental PSFs or the vectorial PSF model can be used in the corresponding localization algorithms in order to model the intricate spot shape and deformations correctly. The complexity of the optical architecture and fit model makes PSF engineering approaches particularly sensitive to optical aberrations. Here, we present a calibration and alignment protocol for fluorescence microscopes equipped with a spatial light modulator (SLM) with the goal of establishing a wavefront error well below the diffraction limit for optimum application of complex engineered PSFs. We achieve high-precision wavefront control, to a level below 20 mλ wavefront aberration over a 30 minute time window after the calibration procedure, using a separate light path for calibrating the pixel-to-pixel variations of the SLM, and alignment of the SLM with respect to the optical axis and Fourier plane within 3 μm (x/y) and 100 μm (z) error. Aberrations are retrieved from a fit of the vectorial PSF model to a bead z-stack and compensated with a residual wavefront error comparable to the error of the SLM calibration step. This well-calibrated and corrected setup makes it possible to create complex '3D+λ' PSFs that fit very well to the vectorial PSF model. Proof-of-principle bead experiments show precisions below 10 nm in x, y, and λ, and below 20 nm in z over an axial range of 1 μm with 2000 signal photons and 12 background photons.

  20. Optimization of hybrid imaging systems based on maximization of kurtosis of the restored point spread function

    DEFF Research Database (Denmark)

    Demenikov, Mads

    2011-01-01

    I propose a novel yet simple, no-reference, objective image quality measure based on the kurtosis of the restored point spread function. Using this measure, I optimize several phase masks for extended depth of field in hybrid imaging systems and obtain results that are identical to optimization results based on full-reference image measures of restored images. In comparison with full-reference measures, the kurtosis measure is fast to compute and requires no images, noise distributions, or alignment of restored images, but only the signal-to-noise ratio. © 2011 Optical Society of America.
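
    A minimal sketch (assumed) of the no-reference measure described above: the kurtosis of the net PSF after Wiener restoration, which needs only the system OTF and a signal-to-noise ratio. The Gaussian OTF below is a stand-in for an actual hybrid-imaging phase-mask OTF.

        # Sketch: kurtosis of the restored PSF as an image-free quality measure.
        import numpy as np
        from scipy.stats import kurtosis

        N, snr = 128, 100.0
        fx = np.fft.fftfreq(N)
        FX, FY = np.meshgrid(fx, fx)
        H = np.exp(-(FX**2 + FY**2) / (2 * 0.05**2))      # placeholder system OTF

        wiener = np.conj(H) / (np.abs(H)**2 + 1.0 / snr)  # Wiener restoration filter
        restored_otf = wiener * H                         # OTF of blur + restoration
        restored_psf = np.real(np.fft.fftshift(np.fft.ifft2(restored_otf)))

        quality = kurtosis(restored_psf.ravel(), fisher=False)
        print("kurtosis of restored PSF:", quality)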

  1. On soft clipping of Zernike moments for deblurring and enhancement of optical point spread functions

    Science.gov (United States)

    Becherer, Nico; Jödicke, Hanna; Schlosser, Gregor; Hesser, Jürgen; Zeilfelder, Frank; Männer, Reinhard

    2006-02-01

    Blur and noise originating from the physical imaging processes degrade the microscope data. Accurate deblurring techniques require, however, an accurate estimation of the underlying point-spread function (PSF). A good representation of PSFs can be achieved by Zernike polynomials, since they offer a compact representation in which low-order coefficients represent typical aberrations of optical wavefronts while noise is represented in higher-order coefficients. A quantitative description of the noise distribution (Gaussian) over the Zernike moments of various orders is given, which is the basis for the new soft clipping approach for denoising of PSFs. Instead of discarding moments beyond a certain order, those Zernike moments that are more sensitive to noise are dampened according to the measured distribution and the present noise model. Further, a new scheme to combine experimental and theoretical PSFs in Zernike space is presented. According to our experimental reconstructions, using the new improved PSF the correlation between reconstructed and original volume is raised by 15% in average cases and by up to 85% for thin fibre structures, compared with reconstructions using a non-improved PSF. Finally, we demonstrate the advantages of our approach on 3D images from confocal microscopes by generating visually improved volumes. Additionally, we present a method to render the reconstructed results using a new volume rendering method that is almost artifact-free. The new approach is based on a shear-warp technique, wavelet data encoding techniques and a recent approach to approximate the gray value distribution by a super-spline model.

  2. Derivation of the point spread function for zero-crossing-demodulated position-sensitive detectors

    International Nuclear Information System (INIS)

    Nowlin, C.H.

    1976-07-01

    This work is a mathematical derivation of a high-quality approximation to the point spread function for position-sensitive detectors (PSDs) that use pulse-shape modulation and crossover-time demodulation. The approximation is determined as a general function of the input signals to the crossover detectors so as to enable later determination of optimum position-decoding filters for PSDs. This work is precisely applicable to PSDs that use either RC or LC transmission line encoders. The effects of random variables, such as charge collection time, in the encoding process are included. In addition, this work presents a new, rigorous method for the determination of upper and lower bounds for conditional crossover-time distribution functions (closely related to first-passage-time distribution functions) for arbitrary signals and arbitrary noise covariance functions

  3. 4Pi microscopy deconvolution with a variable point-spread function.

    Science.gov (United States)

    Baddeley, David; Carl, Christian; Cremer, Christoph

    2006-09-20

    To remove the axial sidelobes from 4Pi images, deconvolution forms an integral part of 4Pi microscopy. As a result of its high axial resolution, the 4Pi point spread function (PSF) is particularly susceptible to imperfect optical conditions within the sample. This is typically observed as a shift in the position of the maxima under the PSF envelope. A significantly varying phase shift renders deconvolution procedures based on a spatially invariant PSF essentially useless. We present a technique for computing the forward transformation in the case of a varying phase at a computational expense of the same order of magnitude as that of the shift invariant case, a method for the estimation of PSF phase from an acquired image, and a deconvolution procedure built on these techniques.

  4. Measurement of the point spread function of a pixelated detector array

    Energy Technology Data Exchange (ETDEWEB)

    Ritzer, Christian; Hallen, Patrick; Schug, David; Schulz, Volkmar [Department of Physics of Molecular Imaging Systems, Institute for Experimental Molecular Imaging, RWTH Aachen University, Aachen (Germany)

    2015-05-18

    In order to further understand the PET/MRI scanner of our group, we measured the point spread function of a preclinical scintillation crystal array with a pitch of 1 mm and a total size of 30 mm × 30 mm × 12 mm. It is coupled via a lightguide to a dSiPM from Philips Digital Photon Counting, used on the TEK-setup. Crystal identification is done with a centre of gravity algorithm and the whole data analysis is performed with the same processing software as for the PET insert, giving comparable results. The beam is created with a 22Na point source and a lead collimator with a 0.5 mm bore diameter. The algorithm sorted 62% of the coincidences into the correct crystal.
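
    A minimal sketch (assumed) of the centre-of-gravity step used for crystal identification: the measured light distribution on the photosensor pixels is reduced to its centre of gravity, which is then mapped to the nearest crystal in a 1 mm pitch array. Geometry and numbers are illustrative, not the TEK-setup calibration.

        # Sketch: assign an event to a crystal by the centre of gravity of pixel counts.
        import numpy as np

        pitch_mm = 1.0
        n_crystals = 30

        def crystal_index(pixel_x_mm, pixel_y_mm, photon_counts):
            """Return (ix, iy) of the crystal assigned by the centre of gravity."""
            w = photon_counts / photon_counts.sum()
            cog_x = np.sum(w * pixel_x_mm)
            cog_y = np.sum(w * pixel_y_mm)
            ix = int(np.clip(np.floor(cog_x / pitch_mm), 0, n_crystals - 1))
            iy = int(np.clip(np.floor(cog_y / pitch_mm), 0, n_crystals - 1))
            return ix, iy

        # Example event: four photosensor pixels with their centres and counts.
        px = np.array([12.0, 16.0, 12.0, 16.0])
        py = np.array([8.0, 8.0, 12.0, 12.0])
        counts = np.array([420.0, 150.0, 160.0, 70.0])
        print(crystal_index(px, py, counts))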

  5. Measurement of the point spread function of a pixelated detector array

    International Nuclear Information System (INIS)

    Ritzer, Christian; Hallen, Patrick; Schug, David; Schulz, Volkmar

    2015-01-01

    In order to further understand the PET/MRI scanner of our group, we measured the point spread function of a preclinical scintillation crystal array with a pitch of 1 mm and a total size of 30 mm × 30 mm × 12 mm. It is coupled via a lightguide to a dSiPM from Philips Digital Photon Counting, used on the TEK-setup. Crystal identification is done with a centre of gravity algorithm and the whole data analysis is performed with the same processing software as for the PET insert, giving comparable results. The beam is created with a 22Na point source and a lead collimator with a 0.5 mm bore diameter. The algorithm sorted 62% of the coincidences into the correct crystal.

  6. Synthesis of atmospheric turbulence point spread functions by sparse and redundant representations

    Science.gov (United States)

    Hunt, Bobby R.; Iler, Amber L.; Bailey, Christopher A.; Rucci, Michael A.

    2018-02-01

    Atmospheric turbulence is a fundamental problem in imaging through long slant ranges, horizontal-range paths, or uplooking astronomical cases through the atmosphere. An essential characterization of atmospheric turbulence is the point spread function (PSF). Turbulence images can be simulated to study basic questions, such as image quality and image restoration, by synthesizing PSFs of desired properties. In this paper, we report on a method to synthesize PSFs of atmospheric turbulence. The method uses recent developments in sparse and redundant representations. From a training set of measured atmospheric PSFs, we construct a dictionary of "basis functions" that characterize the atmospheric turbulence PSFs. A PSF can be synthesized from this dictionary by a properly weighted combination of dictionary elements. We disclose an algorithm to synthesize PSFs from the dictionary. The algorithm can synthesize PSFs in three orders of magnitude less computing time than conventional wave optics propagation methods. The resulting PSFs are also shown to be statistically representative of the turbulence conditions that were used to construct the dictionary.
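
    A minimal sketch (assumed) of synthesizing a signal as a sparse, properly weighted combination of dictionary atoms using a plain orthogonal-matching-pursuit loop. The random dictionary and target below are illustrative; in the paper the dictionary is built from measured atmospheric PSFs and the disclosed algorithm is not necessarily this one.

        # Sketch: sparse synthesis from a dictionary via greedy orthogonal matching pursuit.
        import numpy as np

        rng = np.random.default_rng(1)
        n_pixels, n_atoms, sparsity = 256, 60, 5

        D = rng.standard_normal((n_pixels, n_atoms))
        D /= np.linalg.norm(D, axis=0)                       # unit-norm atoms

        true_support = rng.choice(n_atoms, sparsity, replace=False)
        target_psf = D[:, true_support] @ rng.uniform(0.5, 2.0, sparsity)

        def omp(D, y, k):
            """Greedy OMP: pick k atoms, re-fit weights by least squares each step."""
            residual, support = y.copy(), []
            for _ in range(k):
                support.append(int(np.argmax(np.abs(D.T @ residual))))
                coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
                residual = y - D[:, support] @ coef
            return support, coef

        support, coef = omp(D, target_psf, sparsity)
        synthesized = D[:, support] @ coef
        print("relative error:",
              np.linalg.norm(synthesized - target_psf) / np.linalg.norm(target_psf))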

  7. Point spread function due to multiple scattering of light in the atmosphere

    International Nuclear Information System (INIS)

    Pękala, J.; Wilczyński, H.

    2013-01-01

    The atmospheric scattering of light has a significant influence on the results of optical observations of air showers. It causes attenuation of direct light from the shower, but also contributes a delayed signal to the observed light. The scattering of light therefore should be accounted for, both in simulations of air shower detection and reconstruction of observed events. In this work a Monte Carlo simulation of multiple scattering of light has been used to determine the contribution of the scattered light in observations of a point source of light. Results of the simulations and a parameterization of the angular distribution of the scattered light contribution to the observed signal (the point spread function) are presented. -- Author-Highlights: •Analysis of atmospheric scattering of light from an isotropic point source. •Different geometries and atmospheric conditions were investigated. •A parameterization of scattered light distribution has been developed. •The parameterization allows one to easily account for the light scattering in air. •The results will be useful in analyses of observations of extensive air shower

  8. Extended Nijboer-Zernike approach for the computation of optical point-spread functions.

    Science.gov (United States)

    Janssen, Augustus J E M

    2002-05-01

    New Bessel-series representations for the calculation of the diffraction integral are presented yielding the point-spread function of the optical system, as occurs in the Nijboer-Zernike theory of aberrations. In this analysis one can allow an arbitrary aberration and a defocus part. The representations are presented in full detail for the cases of coma and astigmatism. The analysis leads to stably converging results in the case of large aberration or defocus values, while the applicability of the original Nijboer-Zernike theory is limited mainly to wave-front deviations well below the value of one wavelength. Because of its intrinsic speed, the analysis is well suited to supplement or to replace numerical calculations that are currently used in the fields of (scanning) microscopy, lithography, and astronomy. In a companion paper [J. Opt. Soc. Am. A 19, 860 (2002)], physical interpretations and applications in a lithographic context are presented, a convergence analysis is given, and a comparison is made with results obtained by using a numerical package.

  9. Influence of Signal-to-Noise Ratio and Point Spread Function on Limits of Super-Resolution

    NARCIS (Netherlands)

    Pham, T.Q.; Vliet, L.J. van; Schutte, K.

    2005-01-01

    This paper presents a method to predict the limit of possible resolution enhancement given a sequence of low resolution images. Three important parameters influence the outcome of this limit: the total Point Spread Function (PSF), the Signal-to-Noise Ratio (SNR) and the number of input images.

  10. Influence of signal-to-noise ratio and point spread function on limits of super-resolution

    NARCIS (Netherlands)

    Pham, T.Q.; Van Vliet, L.; Schutte, K.

    2005-01-01

    This paper presents a method to predict the limit of possible resolution enhancement given a sequence of low resolution images. Three important parameters influence the outcome of this limit: the total Point Spread Function (PSF), the Signal-to-Noise Ratio (SNR) and the number of input images.

  11. Advancement in PET quantification using 3D-OP-OSEM point spread function reconstruction with the HRRT

    Energy Technology Data Exchange (ETDEWEB)

    Varrone, Andrea; Sjoeholm, Nils; Gulyas, Balazs; Halldin, Christer; Farde, Lars [Karolinska Hospital, Karolinska Institutet, Department of Clinical Neuroscience, Psychiatry Section and Stockholm Brain Institute, Stockholm (Sweden); Eriksson, Lars [Karolinska Hospital, Karolinska Institutet, Department of Clinical Neuroscience, Psychiatry Section and Stockholm Brain Institute, Stockholm (Sweden); Siemens Molecular Imaging, Knoxville, TN (United States); University of Stockholm, Department of Physics, Stockholm (Sweden)

    2009-10-15

    Image reconstruction including the modelling of the point spread function (PSF) is an approach improving the resolution of the PET images. This study assessed the quantitative improvements provided by the implementation of the PSF modelling in the reconstruction of the PET data using the High Resolution Research Tomograph (HRRT). Measurements were performed on the NEMA-IEC/2001 (Image Quality) phantom for image quality and on an anthropomorphic brain phantom (STEPBRAIN). PSF reconstruction was also applied to PET measurements in two cynomolgus monkeys examined with [18F]FE-PE2I (dopamine transporter) and with [11C]MNPA (D2 receptor), and in one human subject examined with [11C]raclopride (D2 receptor). PSF reconstruction increased the recovery coefficient (RC) in the NEMA phantom by 11-40% and the grey to white matter ratio in the STEPBRAIN phantom by 17%. PSF reconstruction increased binding potential (BP_ND) in the striatum and midbrain by 14 and 18% in the [18F]FE-PE2I study, and striatal BP_ND by 6 and 10% in the [11C]MNPA and [11C]raclopride studies. PSF reconstruction improved quantification by increasing the RC and thus reducing the partial volume effect. This method provides improved conditions for PET quantification in clinical studies with the HRRT system, particularly when targeting receptor populations in small brain structures. (orig.)

  12. Analysis of point source size on measurement accuracy of lateral point-spread function of confocal Raman microscopy

    Science.gov (United States)

    Fu, Shihang; Zhang, Li; Hu, Yao; Ding, Xiang

    2018-01-01

    Confocal Raman Microscopy (CRM) has matured to become one of the most powerful instruments in analytical science because of its molecular sensitivity and high spatial resolution. Compared with conventional Raman microscopy, CRM can perform three-dimensional mapping of tiny samples and has the advantage of high spatial resolution thanks to its unique pinhole. With the wide application of the instrument, there is a growing requirement for the evaluation of the imaging performance of the system. The point-spread function (PSF) is an important approach to the evaluation of the imaging capability of an optical instrument. Among a variety of measurement methods for the PSF, the point source method has been widely used because it is easy to operate and the measurement results are close to the true PSF. In the point source method, the point source size has a significant impact on the final measurement accuracy. In this paper, the influence of the point source size on the measurement accuracy of the PSF is analyzed and verified experimentally. A theoretical model of the lateral PSF for CRM is established and the effect of point source size on the full-width at half maximum of the lateral PSF is simulated. For long-term preservation and measurement convenience, a PSF measurement phantom using polydimethylsiloxane resin doped with different sizes of polystyrene microspheres is designed. The PSFs of the CRM with different sizes of microspheres are measured and the results are compared with the simulation results. The results provide a guide for measuring the PSF of the CRM.
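
    A minimal sketch (assumed, and a 1-D approximation rather than the paper's CRM model) of why the point source size matters: the measured lateral profile is the true PSF convolved with the microsphere profile, so the apparent FWHM grows with bead size. The true FWHM and bead diameters below are illustrative.

        # Sketch: apparent lateral FWHM vs. finite "point" source (bead) size.
        import numpy as np

        x = np.linspace(-3.0, 3.0, 6001)               # lateral coordinate, micrometres
        dx = x[1] - x[0]
        true_fwhm = 0.5                                # assumed true lateral PSF FWHM
        sigma = true_fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
        psf = np.exp(-x**2 / (2.0 * sigma**2))

        def measured_fwhm(bead_diameter_um):
            bead = (np.abs(x) <= bead_diameter_um / 2.0).astype(float)   # 1-D bead profile
            profile = np.convolve(psf, bead, mode='same') * dx
            half = profile.max() / 2.0
            above = x[profile >= half]
            return above.max() - above.min()

        for d in (0.1, 0.5, 1.0):
            print(f"bead {d:.1f} um -> apparent FWHM {measured_fwhm(d):.3f} um")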

  13. Image-based point spread function implementation in a fully 3D OSEM reconstruction algorithm for PET.

    Science.gov (United States)

    Rapisarda, E; Bettinardi, V; Thielemans, K; Gilardi, M C

    2010-07-21

    The interest in positron emission tomography (PET) and particularly in hybrid integrated PET/CT systems has significantly increased in the last few years due to the improved quality of the obtained images. Nevertheless, one of the most important limits of the PET imaging technique is still its poor spatial resolution due to several physical factors originating both at the emission (e.g. positron range, photon non-collinearity) and at detection levels (e.g. scatter inside the scintillating crystals, finite dimensions of the crystals and depth of interaction). To improve the spatial resolution of the images, a possible way consists of measuring the point spread function (PSF) of the system and then accounting for it inside the reconstruction algorithm. In this work, the system response of the GE Discovery STE operating in 3D mode has been characterized by acquiring (22)Na point sources in different positions of the scanner field of view. An image-based model of the PSF was then obtained by fitting asymmetric two-dimensional Gaussians on the (22)Na images reconstructed with small pixel sizes. The PSF was then incorporated, at the image level, in a three-dimensional ordered subset maximum likelihood expectation maximization (OS-MLEM) reconstruction algorithm. A qualitative and quantitative validation of the algorithm accounting for the PSF has been performed on phantom and clinical data, showing improved spatial resolution, higher contrast and lower noise compared with the corresponding images obtained using the standard OS-MLEM algorithm.
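
    A minimal sketch (assumed) of the image-based PSF idea: a blurring step is inserted on both the forward and backward sides of an MLEM-type update. An isotropic Gaussian stands in for the fitted asymmetric Gaussians, subsets are omitted, and the projection geometry is reduced to a toy system matrix so the example stays self-contained.

        # Sketch: MLEM with an image-space PSF (blur) model.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(2)
        n_pix, n_bins = 32, 400
        A = rng.uniform(0.0, 1.0, (n_bins, n_pix * n_pix)) * (50.0 / n_pix)  # toy system matrix

        truth = np.zeros((n_pix, n_pix)); truth[10:14, 18:22] = 1.0
        psf_sigma = 1.2                                                      # pixels

        def blur(img):                       # image-space PSF model B (symmetric)
            return gaussian_filter(img, psf_sigma)

        data = rng.poisson(A @ blur(truth).ravel())

        x = np.ones(n_pix * n_pix)
        sens = blur((A.T @ np.ones(n_bins)).reshape(n_pix, n_pix)).ravel()
        for _ in range(40):                  # MLEM with forward/backward blurring
            fwd = A @ blur(x.reshape(n_pix, n_pix)).ravel()
            back = blur((A.T @ (data / (fwd + 1e-12))).reshape(n_pix, n_pix)).ravel()
            x *= back / (sens + 1e-12)
        recon = x.reshape(n_pix, n_pix)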

  14. Drop size distribution measured by imaging: determination of the measurement volume by the calibration of the point spread function

    International Nuclear Information System (INIS)

    Fdida, Nicolas; Blaisot, Jean-Bernard

    2010-01-01

    Measurement of drop size distributions in a spray depends on the definition of the control volume for drop counting. For image-based techniques, this implies the definition of a depth-of-field (DOF) criterion. A sizing procedure based on an imaging model and associated with a calibration procedure is presented. Relations between image parameters and object properties are used to provide a measure of the size of the droplets, whatever the distance from the in-focus plane. A DOF criterion independent of the size of the drops and based on the determination of the width of the point spread function (PSF) is proposed. This criterion extends the measurement volume to defocused droplets and, owing to the calibration of the PSF, clearly defines the depth of the measurement volume. Calibrated opaque discs, calibrated pinholes and an optical edge are used for this calibration. A comparison of the technique with a phase Doppler particle analyser and a laser diffraction granulometer is performed on an application to an industrial spray. Good agreement is found between the techniques when particular care is given to the sampling of droplets. The determination of the measurement volume is used to determine the drop concentration in the spray and the maximum drop concentration that imaging can support.

  15. Assessment of an extended Nijboer-Zernike approach for the computation of optical point-spread functions.

    Science.gov (United States)

    Braat, Joseph; Dirksen, Peter; Janssen, Augustus J E M

    2002-05-01

    We assess the validity of an extended Nijboer-Zernike approach [J. Opt. Soc. Am. A 19, 849 (2002)], based on recently found Bessel-series representations of diffraction integrals comprising an arbitrary aberration and a defocus part, for the computation of optical point-spread functions of circular, aberrated optical systems. These new series representations yield a flexible means to compute optical point-spread functions, both accurately and efficiently, under defocus and aberration conditions that seem to cover almost all cases of practical interest. Because of the analytical nature of the formulas, there are no discretization effects limiting the accuracy, as opposed to the more commonly used numerical packages based on strictly numerical integration methods. Instead, we have an easily managed criterion, expressed in the number of terms to be included in the Bessel-series representations, guaranteeing the desired accuracy. For this reason, the analytical method can also serve as a calibration tool for the numerically based methods. The analysis is not limited to pointlike objects but can also be used for extended objects under various illumination conditions. The calculation schemes are simple and permit one to trace the relative strength of the various interfering complex-amplitude terms that contribute to the final image intensity function.

  16. Edge Artifacts in Point Spread Function-based PET Reconstruction in Relation to Object Size and Reconstruction Parameters

    Directory of Open Access Journals (Sweden)

    Yuji Tsutsui

    2017-06-01

    Objective(s): We evaluated edge artifacts in relation to phantom diameter and reconstruction parameters in point spread function (PSF)-based positron emission tomography (PET) image reconstruction. Methods: PET data were acquired from an original cone-shaped phantom filled with 18F solution (21.9 kBq/mL) for 10 min using a Biograph mCT scanner. The images were reconstructed using the baseline ordered subsets expectation maximization (OSEM) algorithm and the OSEM with PSF correction model. The reconstruction parameters included a pixel size of 1.0, 2.0, or 3.0 mm, 1-12 iterations, 24 subsets, and a full width at half maximum (FWHM) of the post-filter Gaussian filter of 1.0, 2.0, or 3.0 mm. We compared both the maximum recovery coefficient (RCmax) and the mean recovery coefficient (RCmean) in the phantom at different diameters. Results: The OSEM images had no edge artifacts, but the OSEM with PSF images had a dense edge delineating the hot phantom at diameters of 10 mm or more and a dense spot at the center at diameters of 8 mm or less. The dense edge was clearly observed on images with a small pixel size, a Gaussian filter with a small FWHM, and a high number of iterations. At a phantom diameter of 6-7 mm, the RCmax for the OSEM and OSEM with PSF images was 60% and 140%, respectively (pixel size: 1.0 mm; FWHM of the Gaussian filter: 2.0 mm; iterations: 2). The RCmean of the OSEM with PSF images did not exceed 100%. Conclusion: PSF-based image reconstruction resulted in edge artifacts, the degree of which depends on the pixel size, number of iterations, FWHM of the Gaussian filter, and object size.

  17. Relationship between line spread function (LSF), or slice sensitivity profile (SSP), and point spread function (PSF) in CT image system

    International Nuclear Information System (INIS)

    Ohkubo, Masaki; Wada, Shinichi; Kobayashi, Teiji; Lee, Yongbum; Tsai, Du-Yih

    2004-01-01

    In the CT image system, we revealed the relationship between the line spread function (LSF), or slice sensitivity profile (SSP), and the point spread function (PSF). In the system, the following equation has been reported: I(x,y) = O(x,y) ** PSF(x,y), in which I(x,y) and O(x,y) are the CT image and the object function, respectively, and ** denotes 2-dimensional convolution. In the same way, the following 3-dimensional expression applies: I'(x,y,z) = O'(x,y,z) *** PSF'(x,y,z), in which the z-axis is the direction perpendicular to the x/y-scan plane. We defined the CT image system as separable when the above two equations could be transformed into the following equations: I(x,y) = [O(x,y) * LSF_x(x)] * LSF_y(y) and I'(x,y,z) = [O'(x,y,z) * SSP(z)] ** PSF(x,y), respectively, in which LSF_x(x) and LSF_y(y) are the LSFs in the x- and y-directions. Previous reports on the LSF and SSP are considered to assume the separable system. Under the condition of the separable system, we derived the following equations: PSF(x,y) = LSF_x(x)·LSF_y(y) and PSF'(x,y,z) = PSF(x,y)·SSP(z). They were validated by computer simulations. When studies based on the 1-dimensional functions LSF and SSP are expanded to those based on the 2- or 3-dimensional PSF, the derived equations are required. (author)
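
    A minimal numerical check (assumed, using Gaussian LSFs as a stand-in for a separable CT system response) of the separability relation quoted above, PSF(x,y) = LSF_x(x)·LSF_y(y): integrating the constructed PSF along y must return LSF_x.

        # Sketch: verify PSF(x,y) = LSF_x(x) * LSF_y(y) for a separable system.
        import numpy as np

        x = np.linspace(-10, 10, 201)
        dx = x[1] - x[0]
        lsf_x = np.exp(-x**2 / (2 * 1.5**2)); lsf_x /= lsf_x.sum() * dx
        lsf_y = np.exp(-x**2 / (2 * 2.5**2)); lsf_y /= lsf_y.sum() * dx

        psf = np.outer(lsf_y, lsf_x)                 # PSF(x,y) = LSF_x(x) * LSF_y(y)

        # Integrating the PSF along y gives back LSF_x (the line spread function).
        lsf_x_from_psf = psf.sum(axis=0) * dx
        print(np.allclose(lsf_x_from_psf, lsf_x))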

  18. Influence of the corneal optical zone on the point-spread function of the human eye

    Science.gov (United States)

    Rol, Pascal O.; Parel, Jean-Marie A.

    1992-08-01

    In refractive surgery, a number of surgical techniques have been developed to correct ametropia (refractive errors) of the eye by changing the exterior shape of the cornea. Because the air-cornea interface accounts for about two thirds of the refractive power of the eye, a refractive correction can be obtained by a suitable reshaping of the cornea. Postoperatively, it is usually observed that the corneal region consists of two or more zones characterized by different optical parameters, in particular different focal distances. Under normal circumstances, only the central area of the cornea is involved in the formation of the retinal image. However, if part of the light entering the eye through peripheral portions of the cornea, with refractive properties different from the central area, can pass the pupil, an out-of-focus 'ghost' image may be overlaid on the retina, causing blur. In such a case the resolution and contrast performance of the eye that is expected from a successful operation may be reduced. This study is an attempt to quantify the vision blur as a function of the diameter of the central zone, i.e., the optical zone, which is of importance for vision.

  19. Application of Deconvolution Algorithm of Point Spread Function in Improving Image Quality: An Observer Preference Study on Chest Radiography.

    Science.gov (United States)

    Chae, Kum Ju; Goo, Jin Mo; Ahn, Su Yeon; Yoo, Jin Young; Yoon, Soon Ho

    2018-01-01

    To evaluate the preference of observers for the image quality of chest radiography using the deconvolution algorithm of the point spread function (PSF) (TRUVIEW ART algorithm, DRTECH Corp.) compared with that of original chest radiography for visualization of anatomic regions of the chest. Fifty pairs of posteroanterior chest radiographs, prospectively enrolled and collected with the standard protocol and with the additional TRUVIEW ART algorithm, were compared by four chest radiologists. This algorithm corrects scattered signals generated by a scintillator. Readers independently evaluated the visibility of 10 anatomical regions and overall image quality with a 5-point scale of preference. The significance of the differences in readers' preference was tested with a Wilcoxon signed rank test. All four readers preferred the images processed with the algorithm to those without it for all 10 anatomical regions (mean, 3.6; range, 3.2-4.0). In conclusion, the visibility of chest anatomical structures on images processed with the deconvolution algorithm of the PSF was superior to that on the original chest radiography.

  20. Measurement of the point spread function and effective area of the Solar-A Soft X-ray Telescope mirror

    Science.gov (United States)

    Lemen, J. R.; Claflin, E. S.; Brown, W. A.; Bruner, M. E.; Catura, R. C.

    1989-01-01

    A grazing incidence solar X-ray telescope, Soft X-ray Telescope (SXT), will be flown on the Solar-A satellite in 1991. Measurements have been conducted to determine the focal length, Point Spread Function (PSF), and effective area of the SXT mirror. The measurements were made with pinholes, knife edges, a CCD, and a proportional counter. The results show the 1/r character of the PSF, and indicate a half power diameter of 4.9 arcsec and an effective area of 1.33 sq cm at 13.3 A (0.93 keV). The mirror was found to provide a high contrast image with very little X-ray scattering.

  1. Effect of rotational diffusion in an orientational potential well on the point spread function of electric dipole emitters.

    Science.gov (United States)

    Stallinga, Sjoerd

    2015-02-01

    A study is presented of the point spread function (PSF) of electric dipole emitters that go through a series of absorption-emission cycles while the dipole orientation is changing due to rotational diffusion within the constraint of an orientational potential well. An analytical expression for the PSF is derived valid for arbitrary orientational potential wells in the limit of image acquisition times much larger than the rotational relaxation time. This framework is used to study the effects of the direction of incidence, polarization, and degree of coherence of the illumination. In the limit of fast rotational diffusion on the scale of the fluorescence lifetime the illumination influences only the PSF height, not its shape. In the limit of slow rotational diffusion on the scale of the fluorescence lifetime there is a significant effect on the PSF shape as well, provided the illumination is (partially) coherent. For oblique incidence, illumination asymmetries can arise in the PSF that give rise to position offsets in localization based on Gaussian spot fitting. These asymmetries persist in the limit of free diffusion in a zero orientational potential well.

  2. MeV gamma-ray observation with a well-defined point spread function based on electron tracking

    Science.gov (United States)

    Takada, A.; Tanimori, T.; Kubo, H.; Mizumoto, T.; Mizumura, Y.; Komura, S.; Kishimoto, T.; Takemura, T.; Yoshikawa, K.; Nakamasu, Y.; Matsuoka, Y.; Oda, M.; Miyamoto, S.; Sonoda, S.; Tomono, D.; Miuchi, K.; Kurosawa, S.; Sawano, T.

    2016-07-01

    The field of MeV gamma-ray astronomy has not opened up until recently owing to imaging difficulties. Compton telescopes and coded-aperture imaging cameras are used as conventional MeV gamma-ray telescopes; however, their observations are hampered by a huge background, leading to uncertainty in the point spread function (PSF). Imaging with conventional MeV gamma-ray telescopes relies on optimization algorithms such as the ML-EM method, which makes it difficult to define the correct PSF, i.e., the uncertainty of a gamma-ray image on the celestial sphere. Recently, we defined and evaluated the PSF of an electron-tracking Compton camera (ETCC) and a conventional Compton telescope, and thereby obtained an important result: the PSF strongly depends on the precision of the recoil electron direction (scatter plane deviation, SPD) and is not equal to the angular resolution measure (ARM). We are now constructing a 30 cm-cubic ETCC for a second balloon experiment, the Sub-MeV gamma ray Imaging Loaded-on-balloon Experiment (SMILE-II). The current ETCC has an effective area of 1 cm2 at 300 keV, a PSF of 10° at FWHM for 662 keV, and a large field of view of 3 sr. We will upgrade this ETCC to have an effective area of several cm2 and a PSF of 5° using a CF4-based gas. Using the upgraded ETCC, our observation plan for SMILE-II is to map the electron-positron annihilation line and the 1.8 MeV line from 26Al. In this paper, we report on the current performance of the ETCC and on our observation plan.

  3. DETERMINATION OF THE POINT-SPREAD FUNCTION FOR THE FERMI LARGE AREA TELESCOPE FROM ON-ORBIT DATA AND LIMITS ON PAIR HALOS OF ACTIVE GALACTIC NUCLEI

    Energy Technology Data Exchange (ETDEWEB)

    Ackermann, M. [Deutsches Elektronen Synchrotron DESY, D-15738 Zeuthen (Germany); Ajello, M.; Allafort, A.; Bechtol, K.; Bloom, E. D.; Borgland, A. W.; Bottacini, E.; Buehler, R. [W. W. Hansen Experimental Physics Laboratory, Kavli Institute for Particle Astrophysics and Cosmology, Department of Physics and SLAC National Accelerator Laboratory, Stanford University, Stanford, CA 94305 (United States); Asano, K. [Interactive Research Center of Science, Tokyo Institute of Technology, Meguro City, Tokyo 152-8551 (Japan); Atwood, W. B. [Santa Cruz Institute for Particle Physics, Department of Physics and Department of Astronomy and Astrophysics, University of California at Santa Cruz, Santa Cruz, CA 95064 (United States); Baldini, L.; Bellazzini, R.; Bregeon, J. [Istituto Nazionale di Fisica Nucleare, Sezione di Pisa, I-56127 Pisa (Italy); Ballet, J. [Laboratoire AIM, CEA-IRFU/CNRS/Universite Paris Diderot, Service d'Astrophysique, CEA Saclay, F-91191 Gif sur Yvette (France); Barbiellini, G. [Istituto Nazionale di Fisica Nucleare, Sezione di Trieste, I-34127 Trieste (Italy); Bastieri, D. [Istituto Nazionale di Fisica Nucleare, Sezione di Padova, I-35131 Padova (Italy); Bonamente, E. [Istituto Nazionale di Fisica Nucleare, Sezione di Perugia, I-06123 Perugia (Italy); Brandt, T. J. [CNRS, IRAP, F-31028 Toulouse cedex 4 (France); Brigida, M. [Dipartimento di Fisica 'M. Merlin' dell'Universita e del Politecnico di Bari, I-70126 Bari (Italy); Bruel, P., E-mail: mdwood@slac.stanford.edu, E-mail: mar0@uw.edu [Laboratoire Leprince-Ringuet, Ecole polytechnique, CNRS/IN2P3, F-91128 Palaiseau (France); and others

    2013-03-01

    The Large Area Telescope (LAT) on the Fermi Gamma-ray Space Telescope is a pair-conversion telescope designed to detect photons with energies from approximately 20 MeV to >300 GeV. The pre-launch response functions of the LAT were determined through extensive Monte Carlo simulations and beam tests. The point-spread function (PSF) characterizing the angular distribution of reconstructed photons as a function of energy and geometry in the detector is determined here from two years of on-orbit data by examining the distributions of γ rays from pulsars and active galactic nuclei (AGNs). Above 3 GeV, the PSF is found to be broader than the pre-launch PSF. We checked for dependence of the PSF on the class of γ-ray source and observation epoch and found none. We also investigated several possible spatial models for pair-halo emission around BL Lac AGNs. We found no evidence for a component with spatial extension larger than the PSF and set upper limits on the amplitude of halo emission in stacked images of low- and high-redshift BL Lac AGNs and the TeV blazars 1ES0229+200 and 1ES0347-121.

  4. Clinical evaluation of whole-body oncologic PET with time-of-flight and point-spread function for the hybrid PET/MR system.

    Science.gov (United States)

    Shang, Kun; Cui, Bixiao; Ma, Jie; Shuai, Dongmei; Liang, Zhigang; Jansen, Floris; Zhou, Yun; Lu, Jie; Zhao, Guoguang

    2017-08-01

    Hybrid positron emission tomography/magnetic resonance (PET/MR) imaging is a new multimodality imaging technology that can provide structural and functional information simultaneously. The aim of this study was to investigate the effects of time-of-flight (TOF) and point-spread function (PSF) modelling on small lesions observed in PET/MR images from clinical patient image sets. This study evaluated 54 small lesions in 14 patients who had undergone 18F-fluorodeoxyglucose (FDG) PET/MR. Lesions up to 30 mm in diameter were included. The PET data were reconstructed with a baseline ordered-subsets expectation-maximization (OSEM) algorithm, OSEM+PSF, OSEM+TOF and OSEM+TOF+PSF. PET image quality and small lesions were visually evaluated and scored on a 3-point scale. A quantitative analysis was then performed using the mean and maximum standardized uptake values of the small lesions (SUVmean and SUVmax). The lesions were divided into two groups according to long-axis diameter and to location, respectively, and evaluated with each reconstruction algorithm. We also evaluated the background signal by analyzing the liver SUV. OSEM+TOF+PSF provided the highest values, and OSEM+TOF or OSEM+PSF showed higher values than OSEM in both the visual assessment and the quantitative analysis. The combination of TOF and PSF increased SUVmean by 26.6% and SUVmax by 30.0%. The liver SUV was not influenced by PSF or TOF. TOF and PSF increased the SUVmean and SUVmax of small lesions in PET/MR images, potentially improving small lesion detectability. Copyright © 2017 Elsevier B.V. All rights reserved.
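
    For context, the standardized uptake values quoted above follow the usual body-weight definition of SUV. The sketch below is a minimal illustration of that definition with made-up numbers, not code or data from the study.

    ```python
    # Minimal sketch of the standard body-weight SUV definition (not study code).
    # SUV = tissue activity concentration / (injected activity / body weight),
    # with all quantities decay-corrected to the same time point.

    def suv(tissue_kbq_per_ml: float, injected_mbq: float, weight_kg: float) -> float:
        """SUV with body-weight normalization; assumes 1 g of tissue ~ 1 mL."""
        injected_kbq = injected_mbq * 1000.0
        weight_g = weight_kg * 1000.0
        return tissue_kbq_per_ml / (injected_kbq / weight_g)

    # Hypothetical example: 5.3 kBq/mL in an ROI, 300 MBq injected, 70 kg patient.
    print(round(suv(5.3, 300.0, 70.0), 2))  # ~1.24
    ```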

  5. Tilted light sheet microscopy with 3D point spread functions for single-molecule super-resolution imaging in mammalian cells

    Science.gov (United States)

    Gustavsson, Anna-Karin; Petrov, Petar N.; Lee, Maurice Y.; Shechtman, Yoav; Moerner, W. E.

    2018-02-01

    To obtain a complete picture of subcellular nanostructures, cells must be imaged with high resolution in all three dimensions (3D). Here, we present tilted light sheet microscopy with 3D point spread functions (TILT3D), an imaging platform that combines a novel, tilted light sheet illumination strategy with engineered long axial range point spread functions (PSFs) for low-background, 3D super localization of single molecules as well as 3D super-resolution imaging in thick cells. TILT3D is built upon a standard inverted microscope and has minimal custom parts. The axial positions of the single molecules are encoded in the shape of the PSF rather than in the position or thickness of the light sheet, and the light sheet can therefore be formed using simple optics. The result is flexible and user-friendly 3D super-resolution imaging with tens of nm localization precision throughout thick mammalian cells. We validated TILT3D for 3D superresolution imaging in mammalian cells by imaging mitochondria and the full nuclear lamina using the double-helix PSF for single-molecule detection and the recently developed Tetrapod PSF for fiducial bead tracking and live axial drift correction. We envision TILT3D to become an important tool not only for 3D super-resolution imaging, but also for live whole-cell single-particle and single-molecule tracking.

  6. Tilted Light Sheet Microscopy with 3D Point Spread Functions for Single-Molecule Super-Resolution Imaging in Mammalian Cells.

    Science.gov (United States)

    Gustavsson, Anna-Karin; Petrov, Petar N; Lee, Maurice Y; Shechtman, Yoav; Moerner, W E

    2018-02-01

    To obtain a complete picture of subcellular nanostructures, cells must be imaged with high resolution in all three dimensions (3D). Here, we present tilted light sheet microscopy with 3D point spread functions (TILT3D), an imaging platform that combines a novel, tilted light sheet illumination strategy with engineered long axial range point spread functions (PSFs) for low-background, 3D super localization of single molecules as well as 3D super-resolution imaging in thick cells. TILT3D is built upon a standard inverted microscope and has minimal custom parts. The axial positions of the single molecules are encoded in the shape of the PSF rather than in the position or thickness of the light sheet, and the light sheet can therefore be formed using simple optics. The result is flexible and user-friendly 3D super-resolution imaging with tens of nm localization precision throughout thick mammalian cells. We validated TILT3D for 3D super-resolution imaging in mammalian cells by imaging mitochondria and the full nuclear lamina using the double-helix PSF for single-molecule detection and the recently developed Tetrapod PSF for fiducial bead tracking and live axial drift correction. We envision TILT3D to become an important tool not only for 3D super-resolution imaging, but also for live whole-cell single-particle and single-molecule tracking.

  7. The (lack of) relation between straylight and visual acuity. Two domains of the point-spread-function

    NARCIS (Netherlands)

    van den Berg, Thomas J T P

    2017-01-01

    PURPOSE: The effect of cataract and other media opacities on functional vision is typically assessed clinically using visual acuity. In both clinical and basic research, straylight (the functional result of light scattering in the eye) is commonly measured. The purpose of the present study was to examine the relation between straylight and visual acuity.

  8. Determination of point spread function for a flat-panel X-ray imager and its application in image restoration

    International Nuclear Information System (INIS)

    Jeon, Sungchae; Cho, Gyuseong; Huh, Young; Jin, Seungoh; Park, Jongduk

    2006-01-01

    We investigate two image blur estimation methods, namely a modified Richardson-Lucy (R-L) estimator and the Wiener estimator. Based on the empirical model of the PSF, image restoration is applied to radiological images. The accuracy of the PSF estimation under Poisson noise and readout electronic noise is significantly better for the R-L estimator than for the Wiener estimator. In the image restoration using the 2-D PSF from the R-L estimator, the result shows a good improvement in the low and middle ranges of spatial frequency.
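
    A generic Richardson-Lucy iteration, the textbook form rather than the modified R-L estimator investigated in the paper, can be sketched as follows; the function and array names are hypothetical.

    ```python
    # Generic Richardson-Lucy deconvolution sketch (not the paper's modified
    # R-L estimator): iteratively refine an estimate of the true image given a
    # blurred, Poisson-noisy observation and a known PSF.
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(blurred: np.ndarray, psf: np.ndarray, n_iter: int = 30) -> np.ndarray:
        """R-L update: estimate <- estimate * (PSF^T * (blurred / (PSF * estimate)))."""
        estimate = np.full(blurred.shape, blurred.mean(), dtype=float)
        psf_flipped = psf[::-1, ::-1]
        eps = 1e-12
        for _ in range(n_iter):
            reblurred = fftconvolve(estimate, psf, mode="same")
            ratio = blurred / (reblurred + eps)
            estimate = estimate * fftconvolve(ratio, psf_flipped, mode="same")
        return estimate
    ```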

  9. The edge artifact in the point-spread function-based PET reconstruction at different sphere-to-background ratios of radioactivity.

    Science.gov (United States)

    Kidera, Daisuke; Kihara, Ken; Akamatsu, Go; Mikasa, Shohei; Taniguchi, Takafumi; Tsutsui, Yuji; Takeshita, Toshiki; Maebatake, Akira; Miwa, Kenta; Sasaki, Masayuki

    2016-02-01

    The aim of this study was to quantitatively evaluate the edge artifacts in PET images reconstructed using the point-spread function (PSF) algorithm at different sphere-to-background ratios of radioactivity (SBRs). We used a NEMA IEC body phantom consisting of six spheres with inner diameters of 37, 28, 22, 17, 13 and 10 mm. The background was filled with 18F solution with a radioactivity concentration of 2.65 kBq/mL. We prepared three sets of phantoms with SBRs of 16, 8, 4 and 2. The PET data were acquired for 20 min using a Biograph mCT scanner. The images were reconstructed with the baseline ordered subsets expectation maximization (OSEM) algorithm and with the OSEM + PSF correction model (PSF). For the image reconstruction, the number of iterations ranged from one to 10. The phantom PET image analyses comprised a visual assessment of the PET images and profiles, a contrast recovery coefficient (CRC), which is the ratio of the SBR in the images to the true SBR, and the percent change in the maximum count between the OSEM and PSF images (Δ% counts). In the PSF images, the spheres with a diameter of 17 mm or larger were surrounded by a dense edge in comparison with the OSEM images. In the spheres with a diameter of 22 mm or smaller, an overshoot appeared in the center of the spheres as a sharp peak in the PSF images at low SBRs. These edge artifacts became more evident as the SBR increased. Overestimation of the CRC was observed in the 13-mm spheres in the PSF images. In the spheres with a diameter of 17 mm or smaller, the Δ% counts increased with increasing SBR, reaching 91% in the 10-mm sphere at an SBR of 16. The edge artifacts in PET images reconstructed using the PSF algorithm increased with increasing SBR. In the small spheres, the edge artifact was observed as a sharp peak at the center of the spheres and could result in overestimation.
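
    The two phantom metrics used above, the contrast recovery coefficient and the percent change in maximum counts, reduce to simple ratios. A minimal sketch with placeholder values follows; it is not the study's analysis code.

    ```python
    # Sketch of the two phantom metrics described above (placeholder values only).

    def contrast_recovery_coefficient(measured_sbr: float, true_sbr: float) -> float:
        """CRC: ratio of the sphere-to-background ratio measured in the image
        to the true sphere-to-background ratio of the phantom."""
        return measured_sbr / true_sbr

    def delta_percent_counts(max_psf: float, max_osem: float) -> float:
        """Percent change in the maximum count between PSF and OSEM images."""
        return 100.0 * (max_psf - max_osem) / max_osem

    print(contrast_recovery_coefficient(measured_sbr=14.2, true_sbr=16.0))  # ~0.89
    print(delta_percent_counts(max_psf=19.1, max_osem=10.0))                # 91.0
    ```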

  10. Projections onto Convex Sets Super-Resolution Reconstruction Based on Point Spread Function Estimation of Low-Resolution Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Chong Fan

    2017-02-01

    Full Text Available To solve the problem of inaccurate estimation of the point spread function (PSF) of the ideal original image in traditional projection onto convex sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation from low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit visual interpretation of agricultural crops. The PSF of the high-resolution (HR) image is unknown in reality. Therefore, analysis of the relationship between the PSF of the HR image and the PSF of the LR image is important for estimating the PSF of the HR image from multiple LR images. In this study, the linear relationship between the PSFs of the HR and LR images is proven. In addition, the novel slant knife-edge method is employed, which improves the accuracy of the PSF estimation for LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image into the original POCS SR algorithm. Experimental results show that the proposed method yields reconstructed images of higher quality than those produced by the blind SR method and the bicubic interpolation method.
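
    As a hedged aside, PSF estimation from a knife edge generally proceeds by differentiating the edge-spread function to obtain the line-spread function. The sketch below illustrates that basic idea on a synthetic edge; it is not the paper's slant knife-edge algorithm, and the blur width is an assumed value.

    ```python
    # Simplified knife-edge sketch (not the paper's slanted-edge algorithm):
    # differentiate an edge-spread function (ESF) sampled across a sharp edge
    # to obtain the line-spread function (LSF), whose width characterizes the PSF.
    import numpy as np
    from scipy.special import erf

    x = np.linspace(-5.0, 5.0, 201)              # positions across the edge (pixels)
    sigma_true = 1.2                             # hypothetical Gaussian blur width
    esf = 0.5 * (1.0 + erf(x / (sigma_true * np.sqrt(2.0))))   # synthetic edge profile

    lsf = np.gradient(esf, x)                    # LSF = derivative of the ESF
    lsf = lsf / lsf.max()

    # coarse FWHM estimate: span of samples at or above half maximum
    above = np.where(lsf >= 0.5)[0]
    fwhm = x[above[-1]] - x[above[0]]
    print(f"estimated FWHM ~ {fwhm:.2f} px (true {2.355 * sigma_true:.2f} px)")
    ```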

  11. Impact of point spread function correction in standardized uptake value quantitation for positron emission tomography images. A study based on phantom experiments and clinical images

    International Nuclear Information System (INIS)

    Nakamura, Akihiro; Tanizaki, Yasuo; Takeuchi, Miho

    2014-01-01

    While point spread function (PSF)-based positron emission tomography (PET) reconstruction effectively improves the spatial resolution and image quality of PET, it may damage its quantitative properties by producing edge artifacts, or Gibbs artifacts, which appear to cause overestimation of regional radioactivity concentration. In this report, we investigated how edge artifacts produce negative effects on the quantitative properties of PET. Experiments with a National Electrical Manufacturers Association (NEMA) phantom, containing radioactive spheres of a variety of sizes and background filled with cold air or water, or radioactive solutions, showed that profiles modified by edge artifacts were reproducible regardless of background μ values, and the effects of edge artifacts increased with increasing sphere-to-background radioactivity concentration ratio (S/B ratio). Profiles were also affected by edge artifacts in complex fashion in response to variable combinations of sphere sizes and S/B ratios; and central single-peak overestimation up to 50% was occasionally noted in relatively small spheres with high S/B ratios. Effects of edge artifacts were obscured in spheres with low S/B ratios. In patient images with a variety of focal lesions, areas of higher radioactivity accumulation were generally more enhanced by edge artifacts, but the effects were variable depending on the size of and accumulation in the lesion. PET images generated using PSF-based reconstruction are therefore not appropriate for the evaluation of SUV. (author)

  12. Imaging Cajal's neuronal avalanche: how wide-field optical imaging of the point-spread advanced the understanding of neocortical structure-function relationship.

    Science.gov (United States)

    Frostig, Ron D; Chen-Bee, Cynthia H; Johnson, Brett A; Jacobs, Nathan S

    2017-07-01

    This review brings together a collection of studies that specifically use wide-field high-resolution mesoscopic level imaging techniques (intrinsic signal optical imaging; voltage-sensitive dye optical imaging) to image the cortical point spread (PS): the total spread of cortical activation comprising a large neuronal ensemble evoked by spatially restricted (point) stimulation of the sensory periphery (e.g., whisker, pure tone, point visual stimulation). The collective imaging findings, combined with supporting anatomical and electrophysiological findings, revealed some key aspects about the PS including its very large (radius of several mm) and relatively symmetrical spatial extent capable of crossing cytoarchitectural borders and trespassing into other cortical areas; its relationship with underlying evoked subthreshold activity and underlying anatomical system of long-range horizontal projections within gray matter, both also crossing borders; its contextual modulation and plasticity; the ability of its relative spatiotemporal profile to remain invariant to major changes in stimulation parameters; its potential role as a building block for integrative cortical activity; and its ubiquitous presence across various cortical areas and across mammalian species. Together, these findings advance our understanding about the neocortex at the mesoscopic level by underscoring that the cortical PS constitutes a fundamental motif of neocortical structure-function relationship.

  13. [Impact of point spread function correction in standardized uptake value quantitation for positron emission tomography images: a study based on phantom experiments and clinical images].

    Science.gov (United States)

    Nakamura, Akihiro; Tanizaki, Yasuo; Takeuchi, Miho; Ito, Shigeru; Sano, Yoshitaka; Sato, Mayumi; Kanno, Toshihiko; Okada, Hiroyuki; Torizuka, Tatsuo; Nishizawa, Sadahiko

    2014-06-01

    While point spread function (PSF)-based positron emission tomography (PET) reconstruction effectively improves the spatial resolution and image quality of PET, it may damage its quantitative properties by producing edge artifacts, or Gibbs artifacts, which appear to cause overestimation of regional radioactivity concentration. In this report, we investigated how edge artifacts produce negative effects on the quantitative properties of PET. Experiments with a National Electrical Manufacturers Association (NEMA) phantom, containing radioactive spheres of a variety of sizes and background filled with cold air or water, or radioactive solutions, showed that profiles modified by edge artifacts were reproducible regardless of background μ values, and the effects of edge artifacts increased with increasing sphere-to-background radioactivity concentration ratio (S/B ratio). Profiles were also affected by edge artifacts in complex fashion in response to variable combinations of sphere sizes and S/B ratios; and central single-peak overestimation up to 50% was occasionally noted in relatively small spheres with high S/B ratios. Effects of edge artifacts were obscured in spheres with low S/B ratios. In patient images with a variety of focal lesions, areas of higher radioactivity accumulation were generally more enhanced by edge artifacts, but the effects were variable depending on the size of and accumulation in the lesion. PET images generated using PSF-based reconstruction are therefore not appropriate for the evaluation of SUV.

  14. SU-G-IeP3-08: Image Reconstruction for Scanning Imaging System Based On Shape-Modulated Point Spreading Function

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Ruixing; Yang, LV [College of Optoelectronic Science and Engineering, National University of Defense Technology, Changsha, Hunan (China); Xu, Kele [College of Electronical Science and Engineering, National University of Defense Technology, Changsha, Hunan (China); Zhu, Li [Institute of Electrostatic and Electromagnetic Protection, Mechanical Engineering College, Shijiazhuang, Hebei (China)

    2016-06-15

    Purpose: Deconvolution is a widely used tool in image reconstruction when a linear imaging system has been blurred by an imperfect system transfer function. However, owing to the Gaussian-like shape of the point spread function (PSF), image components with coherent high frequencies are hard to restore in most previous scanning imaging systems, even when a relatively accurate PSF is available. We propose a novel method for the deconvolution of images obtained using a shape-modulated PSF. Methods: We use two different types of PSF - Gaussian shape and donut shape - to convolve the original image in order to simulate the scanning imaging process. By deconvolving the two images with the corresponding given priors, the quality of the deblurred images is compared. We then find the critical size of the donut shape, relative to the Gaussian shape, that gives similar deconvolution results. Calculation of the tight-focusing process using a radially polarized beam shows that such a donut size is achievable under the same conditions. Results: The effects of different relative sizes of the donut and Gaussian shapes are investigated. When the ratio of the full widths at half maximum (FWHM) of the donut and Gaussian shapes is set to about 1.83, similar resolution results are obtained with our deconvolution method. Decreasing the size of the donut favors the deconvolution method. A mask with both amplitude and phase modulation is used to create a donut-shaped PSF for comparison with the non-modulated Gaussian PSF. A donut smaller than our critical size is obtained. Conclusion: The donut-shaped PSF is shown to be useful and achievable in imaging and deconvolution processing, and is expected to have potential practical applications in high-resolution imaging of biological samples.
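
    To make the two PSF shapes concrete, the sketch below builds a Gaussian and a donut-shaped kernel and blurs the same test image with each. The donut parameterization, sizes and array names are assumptions for illustration, not the authors' definitions.

    ```python
    # Illustrative Gaussian and donut-shaped PSFs (parameterization assumed,
    # not taken from the abstract), used to blur the same sparse test image.
    import numpy as np
    from scipy.signal import fftconvolve

    def radial_grid(size: int) -> np.ndarray:
        y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
        return np.hypot(x, y)

    size = 33
    r = radial_grid(size)
    sigma = 2.0                                             # Gaussian PSF width (pixels)
    gaussian_psf = np.exp(-r**2 / (2 * sigma**2))
    donut_psf = r**2 * np.exp(-r**2 / (2 * (1.3 * sigma) ** 2))   # simple donut model
    gaussian_psf /= gaussian_psf.sum()
    donut_psf /= donut_psf.sum()

    image = np.zeros((128, 128))
    image[40:90:10, 40:90:10] = 1.0                         # point-like test objects
    blurred_gauss = fftconvolve(image, gaussian_psf, mode="same")
    blurred_donut = fftconvolve(image, donut_psf, mode="same")
    ```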

  15. X-ray beam-shaping via deformable mirrors: surface profile and point spread function computation for Gaussian beams using physical optics.

    Science.gov (United States)

    Spiga, D

    2018-01-01

    X-ray mirrors with high focusing performance are commonly used in different sectors of science, such as X-ray astronomy, medical imaging and synchrotron/free-electron laser beamlines. While deformations of the mirror profile may degrade the sharpness of the focus, a deliberate deformation of the mirror can be made, via piezo actuators, to endow the focus with a desired size and distribution. The resulting profile can be characterized with suitable metrology tools and correlated with the expected optical quality via a wavefront propagation code or, sometimes, predicted using geometric optics. In the latter case, and for the special class of profile deformations with monotonically increasing derivative, i.e., concave upwards, the point spread function (PSF) can even be predicted analytically. Moreover, under these assumptions, the relation can also be reversed: from the desired PSF, the required profile deformation can be computed analytically, avoiding the use of trial-and-error search codes. However, the computation has so far been limited to geometric optics, which entails some limitations: for example, mirror diffraction effects and the size of the coherent X-ray source were not considered. In this paper, the beam-shaping formalism is reviewed in the framework of physical optics, in the limit of small light wavelengths and in the case of Gaussian intensity wavefronts. Some examples of shaped profiles are also shown, aiming at turning a Gaussian intensity distribution into a top-hat one, and the shaping performance is checked by computing the at-wavelength PSF by means of the WISE code.

  16. Evaluation of spatial dependence of point spread function-based PET reconstruction using a traceable point-like 22Na source

    Directory of Open Access Journals (Sweden)

    Taisuke Murata

    2016-10-01

    Full Text Available Background: The point spread function (PSF) of positron emission tomography (PET) depends on the position across the field of view (FOV). Reconstruction based on the PSF improves spatial resolution and quantitative accuracy. The present study aimed to quantify the effects of PSF correction as a function of the position of a traceable point-like 22Na source over the FOV on two PET scanners with different detector designs. Methods: We used Discovery 600 and Discovery 710 (GE Healthcare) PET scanners and traceable point-like 22Na sources (<1 MBq) with a spherical absorber design that assures a uniform angular distribution of the emitted annihilation photons. The source was moved in three directions at intervals of 1 cm from the center towards the peripheral FOV using a three-dimensional (3D)-positioning robot, and data were acquired over a period of 2 min per point. The PET data were reconstructed by filtered back projection (FBP), ordered subset expectation maximization (OSEM), OSEM + PSF, and OSEM + PSF + time-of-flight (TOF). Full width at half maximum (FWHM) was determined according to the NEMA method, and total counts in regions of interest (ROIs) for each reconstruction were quantified. Results: The radial FWHM of FBP and OSEM increased towards the peripheral FOV, whereas PSF-based reconstruction recovered the FWHM at all points in the FOV of both scanners. The radial FWHM for PSF was 30–50 % lower than that of OSEM at the center of the FOV. The accuracy of PSF correction was independent of detector design. Quantitative values were stable across the FOV for all reconstruction methods. The effect of TOF on spatial resolution and quantitation accuracy was less noticeable. Conclusions: The traceable 22Na point-like source allowed the evaluation of spatial resolution and quantitative accuracy across the FOV for different reconstruction methods and scanners. PSF-based reconstruction reduces the dependence of the spatial resolution on the position in the FOV.
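
    A FWHM measurement from a one-dimensional point-source profile, in the spirit of the NEMA procedure (linear interpolation at the half-maximum crossings), might look like the sketch below; it is an illustrative assumption, not the scanners' analysis software.

    ```python
    # Sketch of a NEMA-style FWHM measurement from a 1-D point-source profile:
    # find the maximum and linearly interpolate the half-maximum crossings.
    import numpy as np

    def fwhm(profile: np.ndarray, pixel_mm: float) -> float:
        profile = profile.astype(float)
        half = profile.max() / 2.0
        above = np.where(profile >= half)[0]
        left, right = above[0], above[-1]
        # linear interpolation on each side of the peak
        xl = left - (profile[left] - half) / (profile[left] - profile[left - 1])
        xr = right + (profile[right] - half) / (profile[right] - profile[right + 1])
        return (xr - xl) * pixel_mm

    profile = np.exp(-0.5 * ((np.arange(101) - 50) / 4.0) ** 2)  # synthetic profile
    print(f"FWHM ~ {fwhm(profile, pixel_mm=2.0):.1f} mm")        # ~18.8 mm
    ```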

  17. A method for partial volume correction of PET-imaged tumor heterogeneity using expectation maximization with a spatially varying point spread function

    International Nuclear Information System (INIS)

    Barbee, David L; Holden, James E; Nickles, Robert J; Jeraj, Robert; Flynn, Ryan T

    2010-01-01

    Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects which may affect treatment prognosis, assessment or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV-PVC demonstrated
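
    A minimal sketch of the general idea, fitting measured Gaussian PSF widths as a smooth function of radial distance and generating a PSF kernel at an arbitrary position, is given below; the polynomial form, the width values and the function names are illustrative assumptions, not the paper's fitted model.

    ```python
    # Sketch: fit measured Gaussian PSF widths versus radial distance and
    # generate a PSF kernel at an arbitrary position (all values hypothetical).
    import numpy as np

    radii_cm = np.array([0.0, 5.0, 10.0, 15.0, 20.0])       # point-source positions
    sigma_mm = np.array([2.6, 2.8, 3.2, 3.9, 4.8])          # fitted widths (made up)
    coeffs = np.polyfit(radii_cm, sigma_mm, deg=2)          # smooth width model

    def psf_at(radius_cm: float, size: int = 21, pixel_mm: float = 1.0) -> np.ndarray:
        """Gaussian PSF kernel whose width follows the fitted radial trend."""
        sigma = np.polyval(coeffs, radius_cm) / pixel_mm
        y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
        kernel = np.exp(-(x**2 + y**2) / (2 * sigma**2))
        return kernel / kernel.sum()

    print(psf_at(12.5).shape)   # (21, 21) kernel usable in an EM-style correction
    ```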

  18. Evaluation of the Effect of Tumor Position on Standardized Uptake Value Using Time-of-Flight Reconstruction and Point Spread Function

    Directory of Open Access Journals (Sweden)

    Yasuharu Wakabayashi

    2016-01-01

    Full Text Available Objective(s): The present study was conducted to examine whether the standardized uptake value (SUV) may be affected by the spatial position of a lesion in the radial direction on positron emission tomography (PET) images, obtained via two methods based on time-of-flight (TOF) reconstruction and the point spread function (PSF). Methods: A cylinder phantom with a sphere (30 mm diameter) located in the center was used in this study. Fluorine-18 fluorodeoxyglucose (18F-FDG) concentrations of 5.3 kBq/ml and 21.2 kBq/ml were used for the background in the cylinder phantom and the central sphere, respectively. Using TOF and PSF, SUVmax and SUVmean were determined while moving the phantom in a horizontal direction (X direction) from the center of the field of view (FOV: 0 mm) to positions at 50, 100, 150 and 200 mm, respectively. Furthermore, we examined 41 patients (23 male, 18 female; mean age: 68±11.2 years) with lymph node tumors who had undergone 18F-FDG PET examinations. The distance of each lymph node from the FOV center was measured on the clinical images. Results: When the distance of a lesion from the FOV center exceeded 100 mm, the SUVmax obtained with the cylinder phantom was overestimated, while the SUVmean obtained with TOF and/or PSF was underestimated. In the clinical examinations, the average volume of interest was 8.5 cm3. Concomitant use of PSF increased SUVmax and SUVmean by 27.9% and 2.8%, respectively. However, the size of the VOI and the distance from the FOV center did not affect SUVmax or SUVmean in clinical examinations. Conclusion: The reliability of SUV quantification by TOF and/or PSF decreased when the tumor was located 100 mm or farther from the center of the FOV. In clinical examinations, if the lymph node was located within 100 mm of the center of the FOV, the SUV remained stable within a consistently increasing range with use of both TOF and PSF. We conclude that use of both TOF and PSF may be helpful.

  19. Intensity-dependent point spread image processing

    International Nuclear Information System (INIS)

    Cornsweet, T.N.; Yellott, J.I.

    1984-01-01

    There is ample anatomical, physiological and psychophysical evidence that the mammalian retina contains networks that mediate interactions among neighboring receptors, resulting in intersecting transformations between input images and their corresponding neural output patterns. The almost universally accepted view is that the principal form of interaction involves lateral inhibition, resulting in an output pattern that is the convolution of the input with a "Mexican hat" or difference-of-Gaussians spread function, having a positive center and a negative surround. A closely related process is widely applied in digital image processing, and in photography as "unsharp masking". The authors show that a simple and fundamentally different process, involving no inhibitory or subtractive terms, can also account for the physiological and psychophysical findings that have been attributed to lateral inhibition. This process also gives rise to a number of fundamental effects that occur in mammalian vision and that would be of considerable significance in robotic vision, but which cannot be explained by lateral inhibitory interaction.
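
    The conventional lateral-inhibition operation referred to above, convolution with a difference-of-Gaussians ("Mexican hat") kernel, can be sketched as follows; the filter widths and the test image are illustrative assumptions.

    ```python
    # Difference-of-Gaussians ("Mexican hat") filtering, the conventional
    # lateral-inhibition model referred to above; parameters are illustrative.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def dog_filter(image: np.ndarray, sigma_center: float = 1.0,
                   sigma_surround: float = 3.0) -> np.ndarray:
        center = gaussian_filter(image, sigma_center)      # narrow positive center
        surround = gaussian_filter(image, sigma_surround)  # broad negative surround
        return center - surround

    image = np.zeros((64, 64))
    image[:, 32:] = 1.0                                    # a simple luminance edge
    response = dog_filter(image)                           # edge-enhanced output
    ```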

  20. A Practical Point Spread Model for Ocean Waters

    National Research Council Canada - National Science Library

    Hou, Weilin; Gray, Deric; Weidemann, Alan D; Arnone, Robert A

    2008-01-01

    .... These inherent optical properties (IOP), although measured frequently due to their important applications in ocean optics, especially in remote sensing, cannot be applied to underwater imaging issues directly, since they inherently reflect the chance of the single scattering.

  1. “Hot Hand” in the National Basketball Association Point Spread Betting Market: A 34-Year Analysis

    Directory of Open Access Journals (Sweden)

    Benjamin Waggoner

    2014-11-01

    Full Text Available Several articles have looked at factors that affect the adjustment of point spreads, based on hot hands or streaks, over shorter periods of time. This study examines these effects for 34 regular seasons of the National Basketball Association (NBA). In a Seemingly Unrelated Regression model estimated using all 34 seasons, all streaks significantly impacted point spreads and the difference in actual points. When each season was estimated individually, differences emerged, particularly for winning and losing streaks of six games or more. The results indicate both the presence of momentum effects and the gambler’s fallacy.

  2. Point Spread Function of ASTRO-H Soft X-Ray Telescope (SXT)

    Science.gov (United States)

    Hayashi, Takayuki; Sato, Toshiki; Kikuchi, Naomichi; Iizuka, Ryo; Maeda, Yoshitomo; Ishida, Manabu; Kurashima, Sho; Nakaniwa, Nozomi; Okajima, Takashi; Mori, Hideyuki

    2016-01-01

    The ASTRO-H (Hitomi) satellite is equipped with two Soft X-ray Telescopes (SXTs), one of which (SXT-S) is coupled to the Soft X-ray Spectrometer (SXS) while the other (SXT-I) is coupled to the Soft X-ray Imager (SXI). Although the SXTs are lightweight (approximately 42 kg per module) and by themselves have a large on-axis effective area (EA) of approximately 450 cm² at 4.5 keV per module, their angular resolution is moderate, approximately 1.2 arcmin in half power diameter. The amount of contamination into the SXS FOV (3.05 × 3.05 arcmin²) from nearby sources was measured in the ground-based calibration at the beamline of the Institute of Space and Astronautical Science. The contamination at 4.5 keV was measured with sources offset from the SXS center by one width of the FOV in the perpendicular and diagonal directions, that is, 3 and 4.5 arcmin off-axis, respectively. The average EA of the contamination in the four directions at the 3 and 4.5 arcmin offsets was measured to be 2 and 0.6% of the on-axis EA of 412 cm² for the SXS FOV, respectively. The contamination from a source offset by two FOV widths in a diagonal direction, that is, 8.6 arcmin off-axis, was measured to be 0.1% of the on-axis EA at 4.5 keV. The contamination was also measured at 1.5 keV and 8.0 keV, which indicated that the ratio of the contamination EA to that on-axis hardly depends on the source energy. Off-axis SXT-I images from 4.5 to 27 arcmin were acquired at intervals of 4.5 arcmin for the SXI FOV of 38 × 38 arcmin². The image shrank as the off-axis angle increased. Above an off-axis angle of 13.5 arcmin, stray light appeared around the image center in the off-axis direction. As for the on-axis image, ring-shaped stray light appeared at the edge of the SXI, approximately 18 arcmin from the image center.

  3. Chandra's Ultimate Angular Resolution: Studies of the HRC-I Point Spread Function

    Science.gov (United States)

    Juda, Michael; Karovska, M.

    2010-03-01

    The Chandra High Resolution Camera (HRC) should provide an ideal imaging match to the High-Resolution Mirror Assembly (HRMA). The laboratory-measured intrinsic resolution of the HRC is 20 microns FWHM. HRC event positions are determined via a centroiding method rather than by using discrete pixels. This event position reconstruction method and any non-ideal performance of the detector electronics can introduce distortions in event locations that, when combined with spacecraft dither, produce artifacts in source images. We compare ray-traces of the HRMA response to "on-axis" observations of AR Lac and Capella as they move through their dither patterns to images produced from filtered event lists to characterize the effective intrinsic PSF of the HRC-I. A two-dimensional Gaussian, which is often used to represent the detector response, is NOT a good representation of the intrinsic PSF of the HRC-I; the actual PSF has a sharper peak and additional structure which will be discussed. This work was supported under NASA contract NAS8-03060.

  4. Detecting Near-Earth Objects Using Cross-Correlation with a Point Spread Function

    Science.gov (United States)

    2009-03-01

    impact in the Yucatan Peninsula caused the extinction of the dinosaurs in the Cretaceous Period [Fix, 1995]. Even the Moon is pockmarked by many... the atmosphere that the light traverses. For this reason, it is typically better to be at higher elevations to decrease the amount of atmosphere the... detection on average for the Rayleigh sampling with cross-correlation of a PSF than the Rayleigh sampling without cross-correlation. For this reason

  5. Comparison and Validation of Point Spread Models for Imaging in Natural Waters

    National Research Council Canada - National Science Library

    Hou, Weilin; Gray, Deric; Weidemann, Alan; Arnone, Robert

    2008-01-01

    .... This will extend the performance range as well as the information retrieval from underwater electro-optical systems, which is critical in many civilian and military applications, including target...

  6. Nonparametric Transfer Function Models

    Science.gov (United States)

    Liu, Jun M.; Chen, Rong; Yao, Qiwei

    2009-01-01

    In this paper a class of nonparametric transfer function models is proposed to model nonlinear relationships between ‘input’ and ‘output’ time series. The transfer function is smooth with unknown functional forms, and the noise is assumed to be a stationary autoregressive-moving average (ARMA) process. The nonparametric transfer function is estimated jointly with the ARMA parameters. By modeling the correlation in the noise, the transfer function can be estimated more efficiently. The parsimonious ARMA structure improves the estimation efficiency in finite samples. The asymptotic properties of the estimators are investigated. The finite-sample properties are illustrated through simulations and one empirical example. PMID:20628584

  7. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; image degradation is thus simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted into calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation times, graphic processing unit multithreading or an increased spacing of the control points is adopted, respectively, to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be implemented efficiently by this method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
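
    One of the baseline methods mentioned above, the Wiener filter, can be sketched in the frequency domain as follows; this is a generic illustration with an assumed constant noise-to-signal ratio, not the GRBF method or the paper's implementation.

    ```python
    # Basic frequency-domain Wiener deconvolution, one of the baseline methods
    # the GRBF approach is compared against (regularization value assumed).
    import numpy as np

    def wiener_deconvolve(blurred: np.ndarray, psf: np.ndarray, k: float = 1e-2) -> np.ndarray:
        """Wiener filter with a constant noise-to-signal ratio k."""
        psf_padded = np.zeros_like(blurred, dtype=float)
        psf_padded[:psf.shape[0], :psf.shape[1]] = psf
        # center the kernel at the origin so the restored image is not shifted
        psf_padded = np.roll(psf_padded, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
        H = np.fft.fft2(psf_padded)
        G = np.fft.fft2(blurred)
        F_hat = np.conj(H) * G / (np.abs(H) ** 2 + k)
        return np.real(np.fft.ifft2(F_hat))
    ```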

  8. Modelling of the over-exposed pixel area of CCD cameras caused by laser dazzling

    NARCIS (Netherlands)

    Benoist, K.W.; Schleijpen, R.M.A.

    2014-01-01

    A simple model has been developed and implemented in Matlab code to predict the over-exposed pixel area of cameras caused by laser dazzling. Inputs of this model are the laser irradiance on the front optics of the camera, the Point Spread Function (PSF) of the optics used, and the integration time of the camera.

  9. Point spread function and centroiding accuracy measurements with the JET-X mirror and MOS CCD detector of the Swift gamma ray burst explorer's X-ray telescope

    CERN Document Server

    Ambrosi, R M; Hutchinson, I B; Willingale, R; Wells, A; Short, A D T; Campana, S; Citterio, O; Tagliaferri, G; Burkert, W; Bräuninger, H

    2002-01-01

    The optical components of the Swift X-ray telescope (XRT) are already developed items. They are the flight spare X-ray mirror from the JET-X/Spectrum-X program and an MOS CCD (CCD22) of the type currently operating in orbit as part of the EPIC focal plane camera on XMM-Newton (SPIE 4140 (2000) 64). The JET-X mirrors were first calibrated at the Max Planck Institute for Extraterrestrial Physics' (MPE) Panter facility, Garching, Germany in 1996 (SPIE 2805 (1996) 56; SPIE 3114 (1997) 392). Half-energy widths of 16 arcsec at 1.5 keV were confirmed for the two flight mirrors and the flight spare. The calibration of the flight spare was repeated at Panter in July 2000 in order to establish whether any changes had occurred during the 4 yr that the mirror had been in storage at the OAB, Milan, Italy. The results reported in this paper confirm that the resolution of the JET-X mirrors has remained stable over this storage period. In an extension of this test program, the flight spare EPIC camera was installed at the fo...

  10. A new signal restoration method based on deconvolution of the Point Spread Function (PSF) for the Flat-Field Holographic Concave Grating UV spectrometer system

    Science.gov (United States)

    Dai, Honglin; Luo, Yongdao

    2013-12-01

    In recent years, with the development of the Flat-Field Holographic Concave Grating, such gratings have been adopted in all kinds of UV spectrometers. By means of a single optical surface, the Flat-Field Holographic Concave Grating can implement both dispersion and imaging, which makes the UV spectrometer system design quite compact. However, calibration of the Flat-Field Holographic Concave Grating is very difficult, and various factors make its imaging quality difficult to guarantee. The spectrum signal therefore has to be processed with signal restoration before use. Guided by the theory of signals and systems, and after a series of experiments, we found that our UV spectrometer system is a linear space-variant system. This means that the PSF of every pixel of the system, which contains thousands of pixels, has to be measured, which is obviously a large amount of computation. To deal with this problem, we propose a novel signal restoration method. This method divides the system into several linear space-invariant subsystems and then performs signal restoration with their PSFs. Our experiments show that this method is effective and inexpensive.
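
    The piecewise idea, dividing the detector into approximately space-invariant segments and restoring each with its own PSF, can be sketched as below; the one-dimensional layout, the per-segment Wiener-like restoration and all names are illustrative assumptions, not the authors' method.

    ```python
    # Sketch of piecewise space-invariant restoration for a 1-D spectrum:
    # split the detector into segments, restore each with its local PSF,
    # and reassemble.  Segmentation and PSFs are illustrative assumptions.
    import numpy as np

    def deconvolve_segment(segment: np.ndarray, psf: np.ndarray, k: float = 1e-3) -> np.ndarray:
        """Simple frequency-domain (Wiener-like) restoration of one segment."""
        n = segment.size
        H = np.fft.fft(psf, n)                   # local PSF, zero-padded to segment length
        G = np.fft.fft(segment)
        return np.real(np.fft.ifft(np.conj(H) * G / (np.abs(H) ** 2 + k)))

    def restore_spectrum(spectrum: np.ndarray, segment_psfs: list) -> np.ndarray:
        """Restore each segment with its own PSF and reassemble the spectrum."""
        segments = np.array_split(spectrum, len(segment_psfs))
        restored = [deconvolve_segment(s, p) for s, p in zip(segments, segment_psfs)]
        return np.concatenate(restored)
    ```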

  11. Ray tracing the Wigner distribution function for optical simulations

    Science.gov (United States)

    Mout, Marco; Wick, Michael; Bociort, Florian; Petschulat, Joerg; Urbach, Paul

    2018-01-01

    We study a simulation method that uses the Wigner distribution function to incorporate wave optical effects in an established framework based on geometrical optics, i.e., a ray tracing engine. We use the method to calculate point spread functions and show that it is accurate for paraxial systems but produces unphysical results in the presence of aberrations. The cause of these anomalies is explained using an analytical model.

  12. Incorporation of intraocular scattering in schematic eye models

    International Nuclear Information System (INIS)

    Navarro, R.

    1985-01-01

    Beckmann's theory of scattering from rough surfaces is applied to obtain, from the experimental veiling glare functions, a diffuser that when placed at the pupil plane would produce the same scattering halo as the ocular media. This equivalent diffuser is introduced in a schematic eye model, and its influence on the point-spread function and the modulation-transfer function of the eye is analyzed

  13. Statistical modelling with quantile functions

    CERN Document Server

    Gilchrist, Warren

    2000-01-01

    Galton used quantiles more than a hundred years ago in describing data. Tukey and Parzen used them in the 60s and 70s in describing populations. Since then, the authors of many papers, both theoretical and practical, have used various aspects of quantiles in their work. Until now, however, no one put all the ideas together to form what turns out to be a general approach to statistics. Statistical Modelling with Quantile Functions does just that. It systematically examines the entire process of statistical modelling, starting with using the quantile function to define continuous distributions. The author shows that by using this approach, it becomes possible to develop complex distributional models from simple components. A modelling kit can be developed that applies to the whole model - deterministic and stochastic components - and this kit operates by adding, multiplying, and transforming distributions rather than data. Statistical Modelling with Quantile Functions adds a new dimension to the practice of stati...

  14. Functioning with a Sticky Model.

    Science.gov (United States)

    Reys, Robert E.

    1981-01-01

    A model that can be effectively used to develop the notion of function and provide varied practice by using "real world" examples and concrete objects is covered. The use of Popsicle-sticks is featured, with some suggestions for tasks involving functions with one operation, two operations, and inverse operations covered. (MP)

  15. A deterministic width function model

    Directory of Open Access Journals (Sweden)

    C. E. Puente

    2003-01-01

    Full Text Available Use of a deterministic fractal-multifractal (FM) geometric method to model width functions of natural river networks, as derived distributions of simple multifractal measures via fractal interpolating functions, is reported. It is first demonstrated that the FM procedure may be used to simulate natural width functions, preserving their most relevant features like their overall shape and texture and their observed power-law scaling on their power spectra. It is then shown, via two natural river networks (Racoon and Brushy creeks in the United States), that the FM approach may also be used to closely approximate existing width functions.

  16. Zhang functions and various models

    CERN Document Server

    Zhang, Yunong

    2015-01-01

    This book focuses on solving different types of time-varying problems. It presents various Zhang dynamics (ZD) models by defining various Zhang functions (ZFs) in real and complex domains. It then provides theoretical analyses of such ZD models and illustrates their results. It also uses simulations to substantiate their efficacy and show the feasibility of the presented ZD approach (i.e., different ZFs leading to different ZD models), which is further applied to the repetitive motion planning (RMP) of redundant robots, showing its application potential.

  17. Cost functions of greenhouse models

    International Nuclear Information System (INIS)

    Linderoth, H.

    2000-01-01

    The benchmark is equal to the cost (D) caused by an increase in temperature since the middle of the nineteenth century (T) of nearly 2.5 deg. C. According to mainstream economists, the benchmark is 1-2% of GDP, but very different estimates can also be found. Even though there appears to be agreement among a number of economists that the benchmark is 1-2% of GDP, major differences exist when it comes to estimating D for different sectors. One of the main problems is how to estimate non-market activities. Normally, the benchmark is the best guess, but due to the possibility of catastrophic events it can be considerably smaller than the mean. Certainly, the cost function is skewed to the right. The benchmark is just one point on the cost curve. To a great extent, cost functions are alike in greenhouse models (D = α·T^λ). Cost functions are region and sector dependent in several models. In any case, both α (benchmark) and λ are rough estimates. Besides being dependent on α and λ, the marginal emission cost depends on the discount rate. In fact, because emissions have effects continuing for many years, the discount rate is clearly the most important parameter. (au)

  18. Transfer Function Identification Using Orthogonal Fourier Transform Modeling Functions

    Science.gov (United States)

    Morelli, Eugene A.

    2013-01-01

    A method for transfer function identification, including both model structure determination and parameter estimation, was developed and demonstrated. The approach uses orthogonal modeling functions generated from frequency domain data obtained by Fourier transformation of time series data. The method was applied to simulation data to identify continuous-time transfer function models and unsteady aerodynamic models. Model fit error, estimated model parameters, and the associated uncertainties were used to show the effectiveness of the method for identifying accurate transfer function models from noisy data.

  19. Function Model for Community Health Service Information

    Science.gov (United States)

    Yang, Peng; Pan, Feng; Liu, Danhong; Xu, Yongyong

    In order to construct a function model of community health service (CHS) information for the development of a CHS information management system, Integration Definition for Function Modeling (IDEF0), an IEEE standard extended from the Structured Analysis and Design Technique (SADT) and now a widely used function modeling method, was used to classify the information from top to bottom. The contents of every level of the model were described and coded. A function model for CHS information was then established, which includes 4 super-classes, 15 classes and 28 sub-classes of business function, 43 business processes and 168 business activities. This model can facilitate information management system development and workflow refinement.

  20. Functional State Modelling of Saccharomyces cerevisiae Cultivations

    Directory of Open Access Journals (Sweden)

    Iasen Hristozov

    2004-10-01

    Full Text Available The implementation of the functional state approach for modelling of yeast cultivation is considered in this paper. This concept helps in the monitoring and control of complex processes such as bioprocesses. Using the functional state modelling approach for fermentation processes aims to overcome the main disadvantages of using a global process model, namely a complex model structure and a large number of model parameters. The main advantage of functional state modelling is that the parameters of each local model can be estimated separately from the parameters of the other local models. The results achieved from batch, as well as from fed-batch, cultivations are presented.

  1. Structure functions from chiral soliton models

    International Nuclear Information System (INIS)

    Weigel, H.; Reinhardt, H.; Gamberg, L.

    1997-01-01

    We study nucleon structure functions within the bosonized Nambu-Jona-Lasinio (NJL) model where the nucleon emerges as a chiral soliton. We discuss the model predictions on the Gottfried sum rule for electron-nucleon scattering. A comparison with a low-scale parametrization shows that the model reproduces the gross features of the empirical structure functions. We also compute the leading twist contributions of the polarized structure functions g 1 and g 2 in this model. We compare the model predictions on these structure functions with data from the E143 experiment by GLAP evolving them from the scale characteristic for the NJL-model to the scale of the data

  2. Value function in economic growth model

    Science.gov (United States)

    Bagno, Alexander; Tarasyev, Alexandr A.; Tarasyev, Alexander M.

    2017-11-01

    Properties of the value function are examined in an infinite horizon optimal control problem with an unlimited integrand index appearing in the quality functional with a discount factor. Optimal control problems of such type describe solutions in models of economic growth. Necessary and sufficient conditions are derived to ensure that the value function satisfies the infinitesimal stability properties. It is proved that value function coincides with the minimax solution of the Hamilton-Jacobi equation. Description of the growth asymptotic behavior for the value function is provided for the logarithmic, power and exponential quality functionals and an example is given to illustrate construction of the value function in economic growth models.

  3. Load function modelling for light impact

    International Nuclear Information System (INIS)

    Klingmueller, O.

    1982-01-01

    For Pile Integrity Testing light weight drop hammers are used to induce stress waves. In the computational analysis of one-dimensional wave propagation a load function has to be used. Several mechanical models and corresponding load functions are discussed. It is shown that a bell-shaped function which does not correspond to a mechanical model is in best accordance with test results and does not lead to numerical disturbances in the computational results. (orig.) [de

  4. A Memristor Model with Piecewise Window Function

    Directory of Open Access Journals (Sweden)

    J. Yu

    2013-12-01

    Full Text Available In this paper, we present a memristor model with piecewise window function, which is continuously differentiable and consists of three nonlinear pieces. By introducing two parameters, the shape of this window function can be flexibly adjusted to model different types of memristors. Using this model, one can easily obtain an expression of memristance depending on charge, from which the numerical value of memristance can be readily calculated for any given charge, and eliminate the error occurring in the simulation of some existing window function models.

  5. Model wave functions for the deuteron

    International Nuclear Information System (INIS)

    Certov, A.; Mathelitsch, L.; Moravcsik, M.J.

    1987-01-01

    Model wave functions are constructed for the deuteron to facilitate the unambiguous exploration of dependencies on the percentage D state and on the small-, medium-, and large-distance parts of the deuteron wave function. The wave functions are constrained by those deuteron properties which are accurately known experimentally, and are in an analytic form which is easily integrable in expressions usually encountered in the use of such wave functions.

  6. Diagnostics for Linear Models With Functional Responses

    OpenAIRE

    Xu, Hongquan; Shen, Qing

    2005-01-01

    Linear models where the response is a function and the predictors are vectors are useful in analyzing data from designed experiments and other situations with functional observations. Residual analysis and diagnostics are considered for such models. Studentized residuals are defined and their properties are studied. Chi-square quantile-quantile plots are proposed to check the assumption of Gaussian error process and outliers. Jackknife residuals and an associated test are proposed to det...

  7. Functional Modeling of Neural-Glia Interaction

    DEFF Research Database (Denmark)

    Postnov, D.E.; Brazhe, N.A.; Sosnovtseva, Olga

    2012-01-01

    Functional modeling is an approach that focuses on the representation of the qualitative dynamics of the individual components (e.g. cells) of a system and on the structure of the interaction network.

  8. Neural modeling of prefrontal executive function

    Energy Technology Data Exchange (ETDEWEB)

    Levine, D.S. [Univ. of Texas, Arlington, TX (United States)

    1996-12-31

    Brain executive function is based in a distributed system whereby prefrontal cortex is interconnected with other cortical and subcortical loci. Executive function is divided roughly into three interacting parts: affective guidance of responses; linkage among working memory representations; and forming complex behavioral schemata. Neural network models of each of these parts are reviewed and fit into a preliminary theoretical framework.

  9. A Multivariate Approach to Functional Neuro Modeling

    DEFF Research Database (Denmark)

    Mørch, Niels J.S.

    1998-01-01

    by the application of linear and more flexible, nonlinear microscopic regression models to a real-world dataset. The dependency of model performance, as quantified by generalization error, on model flexibility and training set size is demonstrated, leading to the important realization that no uniformly optimal model......, provides the basis for a generalization theoretical framework relating model performance to model complexity and dataset size. Briefly summarized the major topics discussed in the thesis include: - An introduction of the representation of functional datasets by pairs of neuronal activity patterns...... exists. - Model visualization and interpretation techniques. The simplicity of this task for linear models contrasts the difficulties involved when dealing with nonlinear models. Finally, a visualization technique for nonlinear models is proposed. A single observation emerges from the thesis...

  10. The universal function in color dipole model

    Science.gov (United States)

    Jalilian, Z.; Boroun, G. R.

    2017-10-01

    In this work we review the color dipole model and recall the properties of saturation and geometrical scaling in this model. Our primary aim is to determine the exact universal function in terms of the introduced scaling variable at distances different from the saturation radius. Including the mass in the calculation, we compute numerically the contribution of heavy-quark production at small x to the total structure function via the ratio of universal functions, and show that geometrical scaling is established for the scaling variable introduced in this study.

  11. Prediction of Chemical Function: Model Development and ...

    Science.gov (United States)

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (HT) screening-level exposures developed under ExpoCast can be combined with HT screening (HTS) bioactivity data for the risk-based prioritization of chemicals for further evaluation. The functional role (e.g. solvent, plasticizer, fragrance) that a chemical performs can drive both the types of products in which it is found and the concentration in which it is present, and therefore impacts exposure potential. However, critical chemical use information (including functional role) is lacking for the majority of commercial chemicals for which exposure estimates are needed. A suite of machine-learning based models for classifying chemicals in terms of their likely functional roles in products based on structure was developed. This effort required collection, curation, and harmonization of publicly-available data sources of chemical functional use information from government and industry bodies. Physicochemical and structure descriptor data were generated for chemicals with function data. Machine-learning classifier models for function were then built in a cross-validated manner from the descriptor/function data using the method of random forests. The models were applied to: 1) predict chemi
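    A minimal sketch of the cross-validated random-forest classification step described above is given below, assuming scikit-learn is available. The descriptor matrix, the three functional-role labels, and all sizes are synthetic stand-ins; the curated ExpoCast descriptors and harmonized use data are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical stand-in data: rows are chemicals, columns are physicochemical
# and structure descriptors, labels are harmonized functional roles.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))                     # descriptor matrix
y = rng.choice(["solvent", "plasticizer", "fragrance"], size=500)

clf = RandomForestClassifier(n_estimators=500, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)          # cross-validated accuracy
print(scores.mean())

clf.fit(X, y)
print(clf.predict(X[:3]))                          # predicted functional roles
```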

  12. Functional model of biological neural networks.

    Science.gov (United States)

    Lo, James Ting-Ho

    2010-12-01

    A functional model of biological neural networks, called temporal hierarchical probabilistic associative memory (THPAM), is proposed in this paper. THPAM comprises functional models of dendritic trees for encoding inputs to neurons, a first type of neuron for generating spike trains, a second type of neuron for generating graded signals to modulate neurons of the first type, supervised and unsupervised Hebbian learning mechanisms for easy learning and retrieving, an arrangement of dendritic trees for maximizing generalization, hardwiring for rotation-translation-scaling invariance, and feedback connections with different delay durations for neurons to make full use of present and past information generated by neurons in the same and higher layers. These functional models and their processing operations have many functions of biological neural networks that have not been achieved by other models in the open literature and provide logically coherent answers to many long-standing neuroscientific questions. However, biological justifications of these functional models and their processing operations are required for THPAM to qualify as a macroscopic model (or low-order approximation) of biological neural networks.

  13. Mathematical modeling and visualization of functional neuroimages

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup

    This dissertation presents research results regarding mathematical modeling in the context of the analysis of functional neuroimages. Specifically, the research focuses on pattern-based analysis methods that recently have become popular analysis tools within the neuroimaging community. Such methods...... neuroimaging data sets are characterized by relatively few data observations in a high dimensional space. The process of building models in such data sets often requires strong regularization. Often, the degree of model regularization is chosen in order to maximize prediction accuracy. We focus on the relative...... be carefully selected, so that the model and its visualization enhance our ability to interpret brain function. The second part concerns interpretation of nonlinear models and procedures for extraction of ‘brain maps’ from nonlinear kernel models. We assess the performance of the sensitivity map as means...

  14. Structure functions in the chiral bag model

    International Nuclear Information System (INIS)

    Sanjose, V.; Vento, V.; Centro Mixto CSIC/Valencia Univ., Valencia

    1989-01-01

    We calculate the structure functions of an isoscalar nuclear target for deep inelastic scattering by leptons in an extended version of the chiral bag model which incorporates the q anti-q structure of the pions in the cloud. Bjorken scaling and Regge behavior are satisfied. The model calculation reproduces the low-x behavior of the data but fails to explain the medium- to large-x behavior. Evolution of the quark structure functions seems inevitable to attempt a connection between the low-energy models and the high-energy behavior of quantum chromodynamics. (orig.)

  15. Structure functions in the chiral bag model

    Energy Technology Data Exchange (ETDEWEB)

    Sanjose, V.; Vento, V.

    1989-07-13

    We calculate the structure functions of an isoscalar nuclear target for deep inelastic scattering by leptons in an extended version of the chiral bag model which incorporates the q anti-q structure of the pions in the cloud. Bjorken scaling and Regge behavior are satisfied. The model calculation reproduces the low-x behavior of the data but fails to explain the medium- to large-x behavior. Evolution of the quark structure functions seems inevitable to attempt a connection between the low-energy models and the high-energy behavior of quantum chromodynamics. (orig.).

  16. The SOS model partition function and the elliptic weight functions

    International Nuclear Information System (INIS)

    Pakuliak, S; Silantyev, A; Rubtsov, V

    2008-01-01

    We generalized a recent observation (Khoroshkin and Pakuliak 2005 Theor. Math. Phys. 145 1373) that the partition function of the six-vertex model with domain wall boundary conditions can be obtained from a calculation of projections of the product of total currents in the quantum affine algebra U_q(ŝl_2) in its current realization. A generalization is done for the elliptic current algebra (Enriquez and Felder 1998 Commun. Math. Phys. 195 651; Enriquez and Rubtsov 1997 Ann. Sci. Ecole Norm. Sup. 30 821). The projections of the product of total currents in this case are calculated explicitly and are presented as integral transforms of a product of the total currents. It is proved that the integral kernel of this transform is proportional to the partition function of the SOS model with domain wall boundary conditions.

  17. Data Acquisition for Quality Loss Function Modelling

    DEFF Research Database (Denmark)

    Pedersen, Søren Nygaard; Howard, Thomas J.

    2016-01-01

    Quality loss functions can be a valuable tool when assessing the impact of variation on product quality. Typically, the input for the quality loss function would be a measure of the varying product performance and the output would be a measure of quality. While the unit of the input is given by the product function in focus, the quality output can be measured and quantified in a number of ways. In this article a structured approach for acquiring stakeholder satisfaction data for use in quality loss function modelling is introduced.

  18. Thresholding projection estimators in functional linear models

    OpenAIRE

    Cardot, Hervé; Johannes, Jan

    2010-01-01

    We consider the problem of estimating the regression function in functional linear regression models by proposing a new type of projection estimators which combine dimension reduction and thresholding. The introduction of a threshold rule allows us to obtain consistency under broad assumptions as well as minimax rates of convergence under additional regularity hypotheses. We also consider the particular case of Sobolev spaces generated by the trigonometric basis which permits to get easily mean squ...

  19. A systemic approach for modeling soil functions

    Science.gov (United States)

    Vogel, Hans-Jörg; Bartke, Stephan; Daedlow, Katrin; Helming, Katharina; Kögel-Knabner, Ingrid; Lang, Birgit; Rabot, Eva; Russell, David; Stößel, Bastian; Weller, Ulrich; Wiesmeier, Martin; Wollschläger, Ute

    2018-03-01

    The central importance of soil for the functioning of terrestrial systems is increasingly recognized. Critically relevant for water quality, climate control, nutrient cycling and biodiversity, soil provides more functions than just the basis for agricultural production. Nowadays, soil is increasingly under pressure as a limited resource for the production of food, energy and raw materials. This has led to an increasing demand for concepts assessing soil functions so that they can be adequately considered in decision-making aimed at sustainable soil management. The various soil science disciplines have progressively developed highly sophisticated methods to explore the multitude of physical, chemical and biological processes in soil. It is not obvious, however, how the steadily improving insight into soil processes may contribute to the evaluation of soil functions. Here, we present a new systemic modeling framework that allows for a consistent coupling of reductionist yet observable indicators for soil functions with detailed process understanding. It is based on the mechanistic relationships between soil functional attributes, each explained by a network of interacting processes as derived from scientific evidence. The non-linear character of these interactions produces stability and resilience of soil with respect to functional characteristics. We anticipate that this new conceptual framework will integrate the various soil science disciplines and help identify important future research questions at the interface between disciplines. It allows the overwhelming complexity of soil systems to be adequately coped with and paves the way for steadily improving our capability to assess soil functions based on scientific understanding.

  20. Bayesian Modelling of Functional Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Røge, Rasmus

    This thesis deals with parcellation of whole-brain functional magnetic resonance imaging (fMRI) using Bayesian inference with mixture models tailored to the fMRI data. In the three included papers and manuscripts, we analyze two different approaches to modeling the fMRI signal; either we accept the prevalent strategy of standardizing the fMRI time series and model the data using directional statistics, or we model the variability in the signal across the brain and across multiple subjects. In either case, we use Bayesian nonparametric modeling to automatically learn from the fMRI data the number of functional units, i.e. parcels. We benchmark the proposed mixture models against state-of-the-art methods of brain parcellation, both probabilistic and non-probabilistic. The time series of each voxel are most often standardized using z-scoring, which projects the time series data onto a hypersphere...

  1. Mathematical modeling and visualization of functional neuroimages

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup

    This dissertation presents research results regarding mathematical modeling in the context of the analysis of functional neuroimages. Specifically, the research focuses on pattern-based analysis methods that recently have become popular within the neuroimaging community. Such methods attempt...... sets are characterized by relatively few data observations in a high dimensional space. The process of building models in such data sets often requires strong regularization. Often, the degree of model regularization is chosen in order to maximize prediction accuracy. We focus on the relative influence...... be carefully selected, so that the model and its visualization enhance our ability to interpret the brain. The second part concerns interpretation of nonlinear models and procedures for extraction of ‘brain maps’ from nonlinear kernel models. We assess the performance of the sensitivity map as means...

  2. The Goodwin model: behind the Hill function.

    Directory of Open Access Journals (Sweden)

    Didier Gonze

    Full Text Available The Goodwin model is a 3-variable model demonstrating the emergence of oscillations in a delayed negative feedback-based system at the molecular level. This prototypical model and its variants have been commonly used to model circadian and other genetic oscillators in biology. The only source of non-linearity in this model is a Hill function, characterizing the repression process. It was mathematically shown that to obtain limit-cycle oscillations, the Hill coefficient must be larger than 8, a value often considered unrealistic. It is indeed difficult to explain such a high coefficient with simple cooperative dynamics. We present here molecular models of the standard Goodwin model, based on single or multisite phosphorylation/dephosphorylation processes of a transcription factor, which have been previously shown to generate switch-like responses. We show that when the phosphorylation/dephosphorylation processes are fast enough, the limit-cycle obtained with a multisite phosphorylation-based mechanism is in very good quantitative agreement with the oscillations observed in the Goodwin model. Conditions in which the detailed mechanism is well approximated by the Goodwin model are given. A variant of the Goodwin model which displays sharp thresholds and relaxation oscillations is also explained by a double phosphorylation/dephosphorylation-based mechanism through a bistable behavior. These results not only provide rational support for the Goodwin model but also highlight the crucial role of the speed of post-translational processes, whose response curves are usually established at steady state, in biochemical oscillators.
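    For reference, a minimal simulation of the standard three-variable Goodwin oscillator with Hill repression is sketched below. The rate constants are illustrative only; the Hill coefficient is set above the n > 8 threshold mentioned in the abstract so that limit-cycle oscillations can appear.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Classic 3-variable Goodwin oscillator: X (mRNA), Y (protein), Z (repressor).
# The only nonlinearity is the Hill repression term; sustained oscillations
# require a Hill coefficient n > 8. Parameter values are illustrative only.
def goodwin(t, u, n=10, K=1.0, k1=1.0, k2=0.1, k3=1.0, k4=0.1, k5=1.0, k6=0.1):
    X, Y, Z = u
    dX = k1 * K**n / (K**n + Z**n) - k2 * X   # transcription repressed by Z
    dY = k3 * X - k4 * Y                      # translation
    dZ = k5 * Y - k6 * Z                      # production of the repressor
    return [dX, dY, dZ]

sol = solve_ivp(goodwin, (0.0, 400.0), [0.1, 0.2, 2.5], max_step=0.1)
X, Y, Z = sol.y
print(X.min(), X.max())   # a wide range indicates sustained oscillations
```

    Replacing the Hill term by a detailed multisite phosphorylation/dephosphorylation module, as the paper does, would add further state variables but leave the overall feedback structure of this sketch unchanged.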

  3. Correlation functions of two-matrix models

    International Nuclear Information System (INIS)

    Bonora, L.; Xiong, C.S.

    1993-11-01

    We show how to calculate correlation functions of two matrix models without any approximation technique (except for genus expansion). In particular we do not use any continuum limit technique. This allows us to find many solutions which are invisible to the latter technique. To reach our goal we make full use of the integrable hierarchies and their reductions which were shown in previous papers to naturally appear in multi-matrix models. The second ingredient we use, even though to a lesser extent, are the W-constraints. In fact an explicit solution of the relevant hierarchy, satisfying the W-constraints (string equation), underlies the explicit calculation of the correlation functions. The correlation functions we compute lend themselves to a possible interpretation in terms of topological field theories. (orig.)

  4. Symmetries and modelling functions for diffusion processes

    International Nuclear Information System (INIS)

    Nikitin, A G; Spichak, S V; Vedula, Yu S; Naumovets, A G

    2009-01-01

    A constructive approach to the theory of diffusion processes is proposed, which is based on application of both symmetry analysis and the method of modelling functions. An algorithm for construction of the modelling functions is suggested. This algorithm is based on the error function expansion (ERFEX) of experimental concentration profiles. The high-accuracy analytical description of the profiles provided by ERFEX approximation allows a convenient extraction of the concentration dependence of diffusivity from experimental data and prediction of the diffusion process. Our analysis is exemplified by its employment in experimental results obtained for surface diffusion of lithium on the molybdenum (1 1 2) surface precovered with dysprosium. The ERFEX approximation can be directly extended to many other diffusion systems.
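    A minimal sketch of an ERFEX-style fit is shown below: a synthetic concentration profile is approximated by a short expansion in error functions via nonlinear least squares. The two-term expansion, the profile shape, and all parameter values are assumptions for illustration; the actual ERFEX algorithm and the Li/Mo(112) data are not reproduced.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

# Two-term error-function expansion of a concentration profile c(x):
# c(x) = sum_i a_i * 0.5 * (1 - erf((x - x0) / w_i))
def erfex(x, a1, w1, a2, w2, x0):
    return 0.5 * (a1 * (1.0 - erf((x - x0) / w1))
                  + a2 * (1.0 - erf((x - x0) / w2)))

# Synthetic noisy profile standing in for a measured concentration profile
x = np.linspace(-5.0, 5.0, 201)
rng = np.random.default_rng(1)
c_obs = 0.5 * (1.0 - erf(x / 1.3)) + 0.01 * rng.normal(size=x.size)

popt, _ = curve_fit(erfex, x, c_obs, p0=[0.5, 0.8, 0.5, 2.0, 0.0])
print(popt)                                   # analytic description of the profile
print(np.abs(erfex(x, *popt) - c_obs).max())  # residual of the approximation
```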

  5. Hazard identification based on plant functional modelling

    International Nuclear Information System (INIS)

    Rasmussen, B.; Whetton, C.

    1993-10-01

    A major objective of the present work is to provide means for representing a process plant as a socio-technical system, so as to allow hazard identification at a high level. The method includes technical, human and organisational aspects and is intended to be used for plant level hazard identification so as to identify critical areas and the need for further analysis using existing methods. The first part of the method is the preparation of a plant functional model where a set of plant functions link together hardware, software, operations, work organisation and other safety related aspects of the plant. The basic principle of the functional modelling is that any aspect of the plant can be represented by an object (in the sense that this term is used in computer science) based upon an Intent (or goal); associated with each Intent are Methods, by which the Intent is realized, and Constraints, which limit the Intent. The Methods and Constraints can themselves be treated as objects and decomposed into lower-level Intents (hence the procedure is known as functional decomposition) so giving rise to a hierarchical, object-oriented structure. The plant level hazard identification is carried out on the plant functional model using the Concept Hazard Analysis method. In this, the user will be supported by checklists and keywords and the analysis is structured by pre-defined worksheets. The preparation of the plant functional model and the performance of the hazard identification can be carried out manually or with computer support. (au) (4 tabs., 10 ills., 7 refs.)

  6. Maximum entropy models of ecosystem functioning

    International Nuclear Information System (INIS)

    Bertram, Jason

    2014-01-01

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example

  7. Maximum entropy models of ecosystem functioning

    Energy Technology Data Exchange (ETDEWEB)

    Bertram, Jason, E-mail: jason.bertram@anu.edu.au [Research School of Biology, The Australian National University, Canberra ACT 0200 (Australia)

    2014-12-05

    Using organism-level traits to deduce community-level relationships is a fundamental problem in theoretical ecology. This problem parallels the physical one of using particle properties to deduce macroscopic thermodynamic laws, which was successfully achieved with the development of statistical physics. Drawing on this parallel, theoretical ecologists from Lotka onwards have attempted to construct statistical mechanistic theories of ecosystem functioning. Jaynes’ broader interpretation of statistical mechanics, which hinges on the entropy maximisation algorithm (MaxEnt), is of central importance here because the classical foundations of statistical physics do not have clear ecological analogues (e.g. phase space, dynamical invariants). However, models based on the information theoretic interpretation of MaxEnt are difficult to interpret ecologically. Here I give a broad discussion of statistical mechanical models of ecosystem functioning and the application of MaxEnt in these models. Emphasising the sample frequency interpretation of MaxEnt, I show that MaxEnt can be used to construct models of ecosystem functioning which are statistical mechanical in the traditional sense using a savanna plant ecology model as an example.

  8. Blurred image restoration using knife-edge function and optimal window Wiener filtering

    Science.gov (United States)

    Zhou, Shudao; Yan, Wei

    2018-01-01

    Motion blur in images is usually modeled as the convolution of a point spread function (PSF) and the original image represented as pixel intensities. The knife-edge function can be used to model various types of motion-blurs, and hence it allows for the construction of a PSF and accurate estimation of the degradation function without knowledge of the specific degradation model. This paper addresses the problem of image restoration using a knife-edge function and optimal window Wiener filtering. In the proposed method, we first calculate the motion-blur parameters and construct the optimal window. Then, we use the detected knife-edge function to obtain the system degradation function. Finally, we perform Wiener filtering to obtain the restored image. Experiments show that the restored image has improved resolution and contrast parameters with clear details and no discernible ringing effects. PMID:29377950
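    A minimal sketch of the frequency-domain Wiener restoration step is given below, assuming the motion-blur PSF is already known; a simple horizontal box kernel stands in for the knife-edge-derived PSF, and the optimal-window construction of the paper is omitted.

```python
import numpy as np

# Frequency-domain Wiener restoration for linear motion blur. The PSF is
# assumed known here (horizontal box blur of length `length`).
def motion_psf(shape, length=9):
    psf = np.zeros(shape)
    psf[0, :length] = 1.0 / length          # horizontal box blur
    return psf

def wiener(blurred, psf, k=0.01):
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + k)   # Wiener filter, k ~ noise-to-signal
    return np.real(np.fft.ifft2(W * G))

rng = np.random.default_rng(0)
img = rng.random((128, 128))                # stand-in for the original image
psf = motion_psf(img.shape)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
blurred += 0.01 * rng.normal(size=img.shape)

restored = wiener(blurred, psf)
print(np.mean((restored - img) ** 2))       # restoration error
```

    The constant k plays the role of the noise-to-signal power ratio; in practice it is estimated from the data or tuned, which is where window choices such as the one proposed in the paper matter.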

  9. AN APPLICATION OF FUNCTIONAL MULTIVARIATE REGRESSION MODEL TO MULTICLASS CLASSIFICATION

    OpenAIRE

    Krzyśko, Mirosław; Smaga, Łukasz

    2017-01-01

    In this paper, the scale response functional multivariate regression model is considered. By using the basis functions representation of functional predictors and regression coefficients, this model is rewritten as a multivariate regression model. This representation of the functional multivariate regression model is used for multiclass classification for multivariate functional data. Computational experiments performed on real labelled data sets demonstrate the effectiveness of the proposed ...

  10. Multivariate Heteroscedasticity Models for Functional Brain Connectivity

    Directory of Open Access Journals (Sweden)

    Christof Seiler

    2017-12-01

    Full Text Available Functional brain connectivity is the co-occurrence of brain activity in different areas during resting and while doing tasks. The data of interest are multivariate timeseries measured simultaneously across brain parcels using resting-state fMRI (rfMRI. We analyze functional connectivity using two heteroscedasticity models. Our first model is low-dimensional and scales linearly in the number of brain parcels. Our second model scales quadratically. We apply both models to data from the Human Connectome Project (HCP comparing connectivity between short and conventional sleepers. We find stronger functional connectivity in short than conventional sleepers in brain areas consistent with previous findings. This might be due to subjects falling asleep in the scanner. Consequently, we recommend the inclusion of average sleep duration as a covariate to remove unwanted variation in rfMRI studies. A power analysis using the HCP data shows that a sample size of 40 detects 50% of the connectivity at a false discovery rate of 20%. We provide implementations using R and the probabilistic programming language Stan.

  11. A Generic Modeling Process to Support Functional Fault Model Development

    Science.gov (United States)

    Maul, William A.; Hemminger, Joseph A.; Oostdyk, Rebecca; Bis, Rachael A.

    2016-01-01

    Functional fault models (FFMs) are qualitative representations of a system's failure space that are used to provide a diagnostic of the modeled system. An FFM simulates the failure effect propagation paths within a system between failure modes and observation points. These models contain a significant amount of information about the system including the design, operation and off nominal behavior. The development and verification of the models can be costly in both time and resources. In addition, models depicting similar components can be distinct, both in appearance and function, when created individually, because there are numerous ways of representing the failure space within each component. Generic application of FFMs has the advantages of software code reuse: reduction of time and resources in both development and verification, and a standard set of component models from which future system models can be generated with common appearance and diagnostic performance. This paper outlines the motivation to develop a generic modeling process for FFMs at the component level and the effort to implement that process through modeling conventions and a software tool. The implementation of this generic modeling process within a fault isolation demonstration for NASA's Advanced Ground System Maintenance (AGSM) Integrated Health Management (IHM) project is presented and the impact discussed.

  12. Parisi function for two spin glass models

    International Nuclear Information System (INIS)

    Sibani, P.; Hertz, J.A.

    1984-01-01

    The probability distribution function P(q) for the overlap of pairs of metastable states and the associated Parisi order function q(x) are calculated exactly at zero temperature for two simple models. The first is a chain in which each spin interacts randomly with the sum of all the spins between it and one end of the chain; the second is an infinite-range limit of a spin glass version of Dyson's hierarchical model. Both have nontrivial overlap distributions: In the first case the problem reduces to a variable-step-length random walk problem, leading to q(x)=sin(πx). In the second model P(q) can be calculated by a simple recursion relation which generates devil's staircase structure in q(x). If the fraction p of antiferromagnetic bonds is less than 1/√2, the staircase is complete and the fractal dimensionality of the complement of the domain where q(x) is flat is log 2/log(1/p²). In both models the space of metastable states can be described in terms of Cayley trees, which however have a different physical interpretation than in the S.K. model. (orig.)

  13. Functional Security Model: Managers Engineers Working Together

    Science.gov (United States)

    Guillen, Edward Paul; Quintero, Rulfo

    2008-05-01

    Information security has a wide variety of solutions including security policies, network architectures and technological applications. They are usually designed and implemented by security architects, but because of their complexity these solutions are difficult for company managers to understand, and it is the managers who ultimately fund the security project. The main goal of the functional security model is to achieve a solid security platform, reliable and understandable across the whole company, without setting aside the rigor of the recommendations and compliance with the law in a single frame. This paper shows a general scheme of the model with the use of important standards and tries to give an integrated solution.

  14. Sivers function in constituent quark models

    CERN Document Server

    Scopetta, S.; Fratini, F.; Vento, V.

    2008-01-01

    A formalism to evaluate the Sivers function, developed for calculations in constituent quark models, is applied to the Isgur-Karl model. A non-vanishing Sivers asymmetry, with opposite signs for the u and d flavor, is found; the Burkardt sum rule is fulfilled up to 2 %. Nuclear effects in the extraction of neutron single spin asymmetries in semi-inclusive deep inelastic scattering off 3He are also evaluated. In the kinematics of JLab, it is found that the nuclear effects described by an Impulse Approximation approach are under control.

  15. A general phenomenological model for work function

    Science.gov (United States)

    Brodie, I.; Chou, S. H.; Yuan, H.

    2014-07-01

    A general phenomenological model is presented for obtaining the zero Kelvin work function of any crystal facet of metals and semiconductors, both clean and covered with a monolayer of electropositive atoms. It utilizes the known physical structure of the crystal and the Fermi energy of the two-dimensional electron gas assumed to form on the surface. A key parameter is the number of electrons donated to the surface electron gas per surface lattice site or adsorbed atom, which is taken to be an integer. Initially this is found by trial and later justified by examining the state of the valence electrons of the relevant atoms. In the case of adsorbed monolayers of electropositive atoms a satisfactory justification could not always be found, particularly for cesium, but a trial value always predicted work functions close to the experimental values. The model can also predict the variation of work function with temperature for clean crystal facets. The model is applied to various crystal faces of tungsten, aluminium, silver, and select metal oxides, and most demonstrate good fits compared to available experimental values.

  16. Mathematical Models of Cardiac Pacemaking Function

    Science.gov (United States)

    Li, Pan; Lines, Glenn T.; Maleckar, Mary M.; Tveito, Aslak

    2013-10-01

    Over the past half century, there has been intense and fruitful interaction between experimental and computational investigations of cardiac function. This interaction has, for example, led to deep understanding of cardiac excitation-contraction coupling; how it works, as well as how it fails. However, many lines of inquiry remain unresolved, among them the initiation of each heartbeat. The sinoatrial node, a cluster of specialized pacemaking cells in the right atrium of the heart, spontaneously generates an electro-chemical wave that spreads through the atria and through the cardiac conduction system to the ventricles, initiating the contraction of cardiac muscle essential for pumping blood to the body. Despite the fundamental importance of this primary pacemaker, this process is still not fully understood, and ionic mechanisms underlying cardiac pacemaking function are currently under heated debate. Several mathematical models of sinoatrial node cell membrane electrophysiology have been constructed as based on different experimental data sets and hypotheses. As could be expected, these differing models offer diverse predictions about cardiac pacemaking activities. This paper aims to present the current state of debate over the origins of the pacemaking function of the sinoatrial node. Here, we will specifically review the state-of-the-art of cardiac pacemaker modeling, with a special emphasis on current discrepancies, limitations, and future challenges.

  17. Mathematical Models of Cardiac Pacemaking Function

    Directory of Open Access Journals (Sweden)

    Pan eLi

    2013-10-01

    Full Text Available Over the past half century, there has been intense and fruitful interaction between experimental and computational investigations of cardiac function. This interaction has, for example, led to deep understanding of cardiac excitation-contraction coupling; how it works, as well as how it fails. However, many lines of inquiry remain unresolved, among them the initiation of each heartbeat. The sinoatrial node, a cluster of specialized pacemaking cells in the right atrium of the heart, spontaneously generates an electro-chemical wave that spreads through the atria and through the cardiac conduction system to the ventricles, initiating the contraction of cardiac muscle essential for pumping blood to the body. Despite the fundamental importance of this primary pacemaker, this process is still not fully understood, and ionic mechanisms underlying cardiac pacemaking function are currently under heated debate. Several mathematical models of sinoatrial node cell membrane electrophysiology have been constructed as based on different experimental data sets and hypotheses. As could be expected, these differing models offer diverse predictions about cardiac pacemaking activities. This paper aims to present the current state of debate over the origins of the pacemaking function of the sinoatrial node. Here, we will specifically review the state-of-the-art of cardiac pacemaker modeling, with a special emphasis on current discrepancies, limitations, and future challenges.

  18. Modeling of sintering of functionally gradated materials

    International Nuclear Information System (INIS)

    Gasik, M.; Zhang, B.

    2001-01-01

    The functionally gradated materials (FGMs) are distinguished from isotropic materials by gradients of composition, phase distribution, porosity, and related properties. For FGMs made by powder metallurgy, sintering control is one of the most important factors. In this study, the sintering process of FGMs is modeled and simulated on a computer. A new modeling approach was used to formulate the equation systems and the model for sintering of gradated hard metals, coupled with heat transfer and grain growth. A FEM module was developed to simulate FGM sintering under conventional, microwave and hybrid conditions, and to calculate density, stress and temperature distributions. The behavior of gradated WC-Co hardmetal plate and cone specimens was simulated for various conditions, such as mean particle size, green density distribution and cobalt gradation parameter. The results show that the deformation behavior and stress history of graded powder compacts during heating, sintering and cooling can be predicted for optimization of the sintering process. (author)

  19. Functionalized anatomical models for EM-neuron Interaction modeling

    Science.gov (United States)

    Neufeld, Esra; Cassará, Antonino Mario; Montanaro, Hazael; Kuster, Niels; Kainz, Wolfgang

    2016-06-01

    The understanding of interactions between electromagnetic (EM) fields and nerves is crucial in contexts ranging from therapeutic neurostimulation to low frequency EM exposure safety. To properly consider the impact of in vivo induced field inhomogeneity on non-linear neuronal dynamics, coupled EM-neuronal dynamics modeling is required. For that purpose, novel functionalized computable human phantoms have been developed. Their implementation and the systematic verification of the integrated anisotropic quasi-static EM solver and neuronal dynamics modeling functionality, based on the method of manufactured solutions and numerical reference data, is described. Electric and magnetic stimulation of the ulnar and sciatic nerve were modeled to help in understanding a range of controversial issues related to the magnitude and optimal determination of strength-duration (SD) time constants. The results indicate the importance of considering the stimulation-specific inhomogeneous field distributions (especially at tissue interfaces), realistic models of non-linear neuronal dynamics, very short pulses, and suitable SD extrapolation models. These results and the functionalized computable phantom will influence and support the development of safe and effective neuroprosthetic devices and novel electroceuticals. Furthermore, they will assist the evaluation of existing low frequency exposure standards for the entire population under all exposure conditions.

  20. Mass functions from the excursion set model

    Science.gov (United States)

    Hiotelis, Nicos; Del Popolo, Antonino

    2017-11-01

    Aims: We aim to study the stochastic evolution of the smoothed overdensity δ at scale S of the form δ(S) = ∫_0^S K(S,u) dW(u), where K is a kernel and dW is the usual Wiener process. Methods: For a Gaussian density field, smoothed by the top-hat filter in real space, we used a simple kernel that gives the correct correlation between scales. A Monte Carlo procedure was used to construct random walks and to calculate first crossing distributions and consequently mass functions for a constant barrier. Results: We show that the evolution considered here improves the agreement with the results of N-body simulations relative to analytical approximations which have been proposed for the same problem by other authors. In fact, we show that an evolution which is fully consistent with the ideas of the excursion set model describes accurately the mass function of dark matter haloes for values of ν ≤ 1 and underestimates the number of larger haloes. Finally, we show that a constant threshold of collapse, lower than usually used, is able to produce a mass function which approximates the results of N-body simulations for a variety of redshifts and for a wide range of masses. Conclusions: A mass function in good agreement with N-body simulations can be obtained analytically using a lower than usual constant collapse threshold.
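    The Monte Carlo construction of first-crossing distributions can be sketched in a few lines for the Markovian (sharp k-space filter) limit, where the walk has independent Wiener increments; the correlated-step kernel K(S,u) studied in the paper is not implemented here. The result is compared with the analytic Press-Schechter first-crossing distribution for a constant barrier.

```python
import numpy as np

# Monte Carlo first-crossing distribution for a constant barrier delta_c,
# Markovian (sharp k-space filter) limit only: independent Wiener increments.
rng = np.random.default_rng(0)
n_walks, n_steps, S_max, delta_c = 5000, 1000, 10.0, 1.686
dS = S_max / n_steps

steps = rng.normal(0.0, np.sqrt(dS), size=(n_walks, n_steps))
walks = np.cumsum(steps, axis=1)                    # delta(S) trajectories

crossed = walks >= delta_c
up = crossed.any(axis=1)
S_first = (np.argmax(crossed, axis=1)[up] + 1) * dS # S of first up-crossing

# Histogram of first crossings vs. the analytic (Press-Schechter) result
hist, edges = np.histogram(S_first, bins=50, range=(0.0, S_max), density=True)
S = 0.5 * (edges[:-1] + edges[1:])                  # bin centres
f_mc = hist * up.mean()                             # renormalize to all walks
f_ps = delta_c / np.sqrt(2.0 * np.pi * S**3) * np.exp(-delta_c**2 / (2.0 * S))
print(np.abs(f_mc - f_ps).max())
```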

  1. Electricity price forecasting through transfer function models

    International Nuclear Information System (INIS)

    Nogales, F.J.; Conejo, A.J.

    2006-01-01

    Forecasting electricity prices in present-day competitive electricity markets is a must for both producers and consumers because both need price estimates to develop their respective market bidding strategies. This paper proposes a transfer function model to predict electricity prices based on both past electricity prices and demands, and discusses the rationale for building it. The importance of electricity demand information is assessed. Appropriate metrics to appraise prediction quality are identified and used. Realistic and extensive simulations based on data from the PJM Interconnection for year 2003 are conducted. The proposed model is compared with naive and other techniques. Journal of the Operational Research Society (2006) 57, 350-356. doi:10.1057/palgrave.jors.2601995; published online 18 May 2005. (author)
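    As a rough illustration of the idea, the sketch below regresses a synthetic hourly price series on its own lags and on lagged demand by ordinary least squares, a simple stand-in for a transfer function model. It is not the ARIMA-type transfer function identified and fitted to the 2003 PJM data in the paper.

```python
import numpy as np

# Transfer-function-style sketch: price explained by its own lags and by
# lagged demand, estimated by ordinary least squares on synthetic data.
rng = np.random.default_rng(0)
T = 500
demand = 100 + 10 * np.sin(2 * np.pi * np.arange(T) / 24) + rng.normal(0, 1, T)
price = np.zeros(T)
for t in range(2, T):
    price[t] = (0.6 * price[t - 1] + 0.2 * price[t - 2]
                + 0.3 * demand[t - 1] + rng.normal(0, 1))

p_lags, d_lags = 2, 1
idx = np.arange(max(p_lags, d_lags), T)
X = np.column_stack([np.ones(idx.size)]
                    + [price[idx - k] for k in range(1, p_lags + 1)]
                    + [demand[idx - k] for k in range(1, d_lags + 1)])
y = price[idx]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

one_step = X @ coef                            # in-sample one-step-ahead forecast
print(np.sqrt(np.mean((one_step - y) ** 2)))   # RMSE as a prediction-quality metric
```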

  2. Signed distance function implicit geologic modeling

    Directory of Open Access Journals (Sweden)

    Roberto Mentzingen Rolo

    Full Text Available Prior to every geostatistical estimation or simulation study there is a need for delimiting the geologic domains of the deposit, which is traditionally done manually by a geomodeler in a laborious, time-consuming and subjective process. For this reason, novel techniques referred to as implicit modelling have appeared. These techniques provide algorithms that replace the manual digitization process of the traditional methods by some form of automatic procedure. This paper covers a few well-established implicit methods currently available, with special attention to the signed distance function methodology. A case study based on a real dataset was performed and its applicability discussed. Although it did not replace an experienced geomodeler, the method proved capable of creating semi-automatic geological models from the sampling data, especially in the early stages of exploration.
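    A minimal two-dimensional illustration of the signed distance function idea is sketched below: sample points carry signed distances to a domain boundary (here a circle standing in for real drillhole contacts), the values are interpolated onto a grid, and the zero level set is taken as the implicit geologic contact. Plain linear interpolation replaces the geostatistical estimation used in practice.

```python
import numpy as np
from scipy.interpolate import griddata

# Signed-distance implicit modelling sketch: positive values inside the
# domain, negative outside; the zero level set is the implicit boundary.
rng = np.random.default_rng(0)
pts = rng.uniform(-2.0, 2.0, size=(300, 2))          # sample locations
sdf = 1.0 - np.hypot(pts[:, 0], pts[:, 1])           # signed distance to a unit circle

gx, gy = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
grid_sdf = griddata(pts, sdf, (gx, gy), method='linear')
grid_sdf = np.nan_to_num(grid_sdf, nan=-1.0)         # outside the data hull

inside = grid_sdf > 0.0                              # modelled geologic domain
print(inside.mean())   # fraction of grid cells assigned to the domain
```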

  3. Medicare capitation model, functional status, and multiple comorbidities: model accuracy

    Science.gov (United States)

    Noyes, Katia; Liu, Hangsheng; Temkin-Greener, Helena

    2012-01-01

    Objective: This study examined financial implications of CMS-Hierarchical Condition Categories (HCC) risk-adjustment model on Medicare payments for individuals with comorbid chronic conditions. Study Design: The study used 1992-2000 data from the Medicare Current Beneficiary Survey and corresponding Medicare claims. The pairs of comorbidities were formed based on the prior evidence about possible synergy between these conditions and activities of daily living (ADL) deficiencies and included heart disease and cancer, lung disease and cancer, stroke and hypertension, stroke and arthritis, congestive heart failure (CHF) and osteoporosis, diabetes and coronary artery disease, CHF and dementia. Methods: For each beneficiary, we calculated the actual Medicare cost ratio as the ratio of the individual’s annualized costs to the mean annual Medicare cost of all people in the study. The actual Medicare cost ratios, by ADLs, were compared to the HCC ratios under the CMS-HCC payment model. Using multivariate regression models, we tested whether having the identified pairs of comorbidities affects the accuracy of CMS-HCC model predictions. Results: The CMS-HCC model underpredicted Medicare capitation payments for patients with hypertension, lung disease, congestive heart failure and dementia. The difference between the actual costs and predicted payments was partially explained by beneficiary functional status and less than optimal adjustment for these chronic conditions. Conclusions: Information about beneficiary functional status should be incorporated in reimbursement models since underpaying providers for caring for population with multiple comorbidities may provide severe disincentives for managed care plans to enroll such individuals and to appropriately manage their complex and costly conditions. PMID:18837646

  4. Fully automated calculation of image-derived input function in simultaneous PET/MRI in a sheep model

    International Nuclear Information System (INIS)

    Jochimsen, Thies H.; Zeisig, Vilia; Schulz, Jessica; Werner, Peter; Patt, Marianne; Patt, Jörg; Dreyer, Antje Y.; Boltze, Johannes; Barthel, Henryk; Sabri, Osama; Sattler, Bernhard

    2016-01-01

    Obtaining the arterial input function (AIF) from image data in dynamic positron emission tomography (PET) examinations is a non-invasive alternative to arterial blood sampling. In simultaneous PET/magnetic resonance imaging (PET/MRI), high-resolution MRI angiographies can be used to define major arteries for correction of partial-volume effects (PVE) and point spread function (PSF) response in the PET data. The present study describes a fully automated method to obtain the image-derived input function (IDIF) in PET/MRI. Results are compared to those obtained by arterial blood sampling. To segment the trunk of the major arteries in the neck, a high-resolution time-of-flight MRI angiography was postprocessed by a vessel-enhancement filter based on the inertia tensor. Together with the measured PSF of the PET subsystem, the arterial mask was used for geometrical deconvolution, yielding the time-resolved activity concentration averaged over a major artery. The method was compared to manual arterial blood sampling at the hind leg of 21 sheep (animal stroke model) during measurement of blood flow with O15-water. Absolute quantification of activity concentration was compared after bolus passage during steady state, i.e., between 2.5- and 5-min post injection. Cerebral blood flow (CBF) values from blood sampling and IDIF were also compared. The cross-calibration factor obtained by comparing activity concentrations in blood samples and IDIF during steady state is 0.98 ± 0.10. In all examinations, the IDIF provided a much earlier and sharper bolus peak than in the time course of activity concentration obtained by arterial blood sampling. CBF using the IDIF was 22 % higher than CBF obtained by using the AIF yielded by blood sampling. The small deviation between arterial blood sampling and IDIF during steady state indicates that correction of PVE and PSF is possible with the method presented. The differences in bolus dynamics and, hence, CBF values can be explained by the

  5. Fully automated calculation of image-derived input function in simultaneous PET/MRI in a sheep model

    Energy Technology Data Exchange (ETDEWEB)

    Jochimsen, Thies H.; Zeisig, Vilia [Department of Nuclear Medicine, Leipzig University Hospital, Liebigstr. 18, Leipzig (Germany); Schulz, Jessica [Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, Leipzig, D-04103 (Germany); Werner, Peter; Patt, Marianne; Patt, Jörg [Department of Nuclear Medicine, Leipzig University Hospital, Liebigstr. 18, Leipzig (Germany); Dreyer, Antje Y. [Fraunhofer Institute of Cell Therapy and Immunology, Perlickstr. 1, Leipzig, D-04103 (Germany); Translational Centre for Regenerative Medicine, University Leipzig, Philipp-Rosenthal-Str. 55, Leipzig, D-04103 (Germany); Boltze, Johannes [Fraunhofer Institute of Cell Therapy and Immunology, Perlickstr. 1, Leipzig, D-04103 (Germany); Translational Centre for Regenerative Medicine, University Leipzig, Philipp-Rosenthal-Str. 55, Leipzig, D-04103 (Germany); Fraunhofer Research Institution of Marine Biotechnology and Institute for Medical and Marine Biotechnology, University of Lübeck, Lübeck (Germany); Barthel, Henryk; Sabri, Osama; Sattler, Bernhard [Department of Nuclear Medicine, Leipzig University Hospital, Liebigstr. 18, Leipzig (Germany)

    2016-02-13

    Obtaining the arterial input function (AIF) from image data in dynamic positron emission tomography (PET) examinations is a non-invasive alternative to arterial blood sampling. In simultaneous PET/magnetic resonance imaging (PET/MRI), high-resolution MRI angiographies can be used to define major arteries for correction of partial-volume effects (PVE) and point spread function (PSF) response in the PET data. The present study describes a fully automated method to obtain the image-derived input function (IDIF) in PET/MRI. Results are compared to those obtained by arterial blood sampling. To segment the trunk of the major arteries in the neck, a high-resolution time-of-flight MRI angiography was postprocessed by a vessel-enhancement filter based on the inertia tensor. Together with the measured PSF of the PET subsystem, the arterial mask was used for geometrical deconvolution, yielding the time-resolved activity concentration averaged over a major artery. The method was compared to manual arterial blood sampling at the hind leg of 21 sheep (animal stroke model) during measurement of blood flow with O15-water. Absolute quantification of activity concentration was compared after bolus passage during steady state, i.e., between 2.5- and 5-min post injection. Cerebral blood flow (CBF) values from blood sampling and IDIF were also compared. The cross-calibration factor obtained by comparing activity concentrations in blood samples and IDIF during steady state is 0.98 ± 0.10. In all examinations, the IDIF provided a much earlier and sharper bolus peak than in the time course of activity concentration obtained by arterial blood sampling. CBF using the IDIF was 22 % higher than CBF obtained by using the AIF yielded by blood sampling. The small deviation between arterial blood sampling and IDIF during steady state indicates that correction of PVE and PSF is possible with the method presented. The differences in bolus dynamics and, hence, CBF values can be explained by the

  6. Correlation Functions in Holographic Minimal Models

    CERN Document Server

    Papadodimas, Kyriakos

    2012-01-01

    We compute exact three and four point functions in the W_N minimal models that were recently conjectured to be dual to a higher spin theory in AdS_3. The boundary theory has a large number of light operators that are not only invisible in the bulk but grow exponentially with N even at small conformal dimensions. Nevertheless, we provide evidence that this theory can be understood in a 1/N expansion since our correlators look like free-field correlators corrected by a power series in 1/N. However, on examining these corrections we find that the four point function of the two bulk scalar fields is corrected at leading order in 1/N through the contribution of one of the additional light operators in an OPE channel. This suggests that, to correctly reproduce even tree-level correlators on the boundary, the bulk theory needs to be modified by the inclusion of additional fields. As a technical by-product of our analysis, we describe two separate methods -- including a Coulomb gas type free-field formalism -- that ...

  7. Methods for deconvolving sparse positive delta function series

    International Nuclear Information System (INIS)

    Trussell, H.J.; Schwalbe, L.A.

    1981-01-01

    Sparse delta function series occur as data in many chemical analyses and seismic methods. These original data are often sufficiently degraded by the recording instrument response that the individual delta function peaks are difficult to distinguish and measure. A method, which has been used to measure these peaks, is to fit a parameterized model by a nonlinear least-squares fitting algorithm. The deconvolution approaches described have the advantage of not requiring a parameterized point spread function, nor do they expect a fixed number of peaks. Two new methods are presented. The maximum power technique is reviewed. A maximum a posteriori technique is introduced. Results on both simulated and real data by the two methods are presented. The characteristics of the data can determine which method gives superior results. 5 figures

  8. Functional Validation of Heteromeric Kainate Receptor Models.

    Science.gov (United States)

    Paramo, Teresa; Brown, Patricia M G E; Musgaard, Maria; Bowie, Derek; Biggin, Philip C

    2017-11-21

    Kainate receptors require the presence of external ions for gating. Most work thus far has been performed on homomeric GluK2 but, in vivo, kainate receptors are likely heterotetramers. Agonists bind to the ligand-binding domain (LBD) which is arranged as a dimer of dimers as exemplified in homomeric structures, but no high-resolution structure currently exists of heteromeric kainate receptors. In a full-length heterotetramer, the LBDs could potentially be arranged either as a GluK2 homomer alongside a GluK5 homomer or as two GluK2/K5 heterodimers. We have constructed models of the LBD dimers based on the GluK2 LBD crystal structures and investigated their stability with molecular dynamics simulations. We have then used the models to make predictions about the functional behavior of the full-length GluK2/K5 receptor, which we confirmed via electrophysiological recordings. A key prediction and observation is that lithium ions bind to the dimer interface of GluK2/K5 heteromers and slow their desensitization. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  9. Numerical model of the influence function of deformable mirrors based on Bessel Fourier orthogonal functions

    International Nuclear Information System (INIS)

    Li Shun; Zhang Sijiong

    2014-01-01

    A numerical model is presented to simulate the influence function of deformable mirror actuators. The numerical model is formed by Bessel Fourier orthogonal functions, which are constituted of Bessel orthogonal functions and a Fourier basis. A detailed comparison is presented between the new Bessel Fourier model, the Zernike model, the Gaussian influence function and the modified Gaussian influence function. Numerical experiments indicate that the new numerical model is easy to use and more accurate compared with other numerical models. The new numerical model can be used for describing deformable mirror performances and numerical simulations of adaptive optics systems. (research papers)
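    To illustrate the kind of basis involved, the sketch below fits a sampled actuator response on the unit disc with Bessel Fourier modes J_m(k_mn r)·{cos mθ, sin mθ} by least squares. The synthetic Gaussian bump, the mode counts, and the basis ordering are assumptions; the paper's exact formulation and deformable-mirror data are not reproduced.

```python
import numpy as np
from scipy.special import jv, jn_zeros

# Bessel-Fourier basis on the unit disc: radial Bessel functions J_m(k_mn r)
# (k_mn = zeros of J_m) combined with the azimuthal Fourier terms.
def bessel_fourier_basis(r, t, m_max=3, n_max=4):
    cols = []
    for m in range(m_max + 1):
        for k in jn_zeros(m, n_max):          # radial wavenumbers (roots of J_m)
            rad = jv(m, k * r)
            cols.append(rad * np.cos(m * t))
            if m > 0:
                cols.append(rad * np.sin(m * t))
    return np.column_stack(cols)

# Sample points on the unit disc and a synthetic off-centre influence function
rng = np.random.default_rng(0)
r = np.sqrt(rng.random(2000))
t = 2 * np.pi * rng.random(2000)
x, y = r * np.cos(t), r * np.sin(t)
influence = np.exp(-((x - 0.3) ** 2 + y ** 2) / 0.05)

B = bessel_fourier_basis(r, t)
coef, *_ = np.linalg.lstsq(B, influence, rcond=None)
print(np.sqrt(np.mean((B @ coef - influence) ** 2)))   # RMS fit residual
```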

  10. Control functions in nonseparable simultaneous equations models

    OpenAIRE

    Blundell, R.; Matzkin, R. L.

    2014-01-01

    The control function approach (Heckman and Robb (1985)) in a system of linear simultaneous equations provides a convenient procedure to estimate one of the functions in the system using reduced form residuals from the other functions as additional regressors. The conditions on the structural system under which this procedure can be used in nonlinear and nonparametric simultaneous equations has thus far been unknown. In this paper, we define a new property of functions called control function ...

  11. Electron beam lithographic modeling assisted by artificial intelligence technology

    Science.gov (United States)

    Nakayamada, Noriaki; Nishimura, Rieko; Miura, Satoru; Nomura, Haruyuki; Kamikubo, Takashi

    2017-07-01

    We propose a new concept of tuning a point-spread function (a "kernel" function) in the modeling of electron beam lithography using the machine learning scheme. Normally in work on artificial intelligence, researchers focus on the output results from a neural network, such as the success ratio in image recognition or improved production yield, etc. In this work, we put more focus on the weights connecting the nodes in a convolutional neural network, which are naturally the fractions of a point-spread function, and take out those weighted fractions after learning to be utilized as a tuned kernel. Proof-of-concept of the kernel tuning has been demonstrated using the examples of proximity effect correction with a 2-layer network, and charging effect correction with a 3-layer network. This type of new tuning method can be beneficial in giving researchers more insight to come up with a better model, yet it might be too early to deploy it to production to give better critical dimension (CD) and positional accuracy almost instantly.

  12. Function modeling: improved raster analysis through delayed reading and function raster datasets

    Science.gov (United States)

    John S. Hogland; Nathaniel M. Anderson; J .Greg Jones

    2013-01-01

    Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space, often limiting the types of analyses that can be performed. To address this issue, we have developed Function Modeling. Function Modeling is a new modeling framework that streamlines the...

  13. Simplifying numerical ray tracing for two-dimensional non circularly symmetric models of the human eye.

    Science.gov (United States)

    Jesus, Danilo A; Iskander, D Robert

    2015-12-01

    Ray tracing is a powerful technique to understand the light behavior through an intricate optical system such as that of a human eye. The prediction of visual acuity can be achieved through characteristics of an optical system such as the geometrical point spread function. In general, its precision depends on the number of discrete rays and the accurate surface representation of each eye's components. Recently, a method that simplifies calculation of the geometrical point spread function has been proposed for circularly symmetric systems [Appl. Opt.53, 4784 (2014)]. An extension of this method to 2D noncircularly symmetric systems is proposed. In this method, a two-dimensional ray tracing procedure for an arbitrary number of surfaces and arbitrary surface shapes has been developed where surfaces, rays, and refractive indices are all represented in functional forms being approximated by Chebyshev polynomials. The Liou and Brennan anatomically accurate eye model has been adapted and used for evaluating the method. Further, real measurements of the anterior corneal surface of normal, astigmatic, and keratoconic eyes were substituted for the first surface in the model. The results have shown that performing ray tracing, utilizing the two-dimensional Chebyshev function approximation, is possible for noncircularly symmetric models, and that such calculation can be performed with a newly created Chebfun toolbox.
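
    One building block of such a scheme can be sketched in a few lines of Python: approximate a 2D refracting surface by a Chebyshev series, take its derivative to get the surface normal at the ray intersection height, and apply the vector form of Snell's law. The spherical test surface, the refractive indices, and the single ray below are illustrative assumptions and do not reproduce the Liou and Brennan eye model or the paper's Chebfun implementation.

        import numpy as np
        from numpy.polynomial import chebyshev as C

        # Approximate a surface profile z = f(y) (a spherical cap here) by a
        # Chebyshev series on y in [-4, 4] mm; a measured corneal profile could
        # be substituted in the same way.
        R_curv = 7.8                                   # radius of curvature, mm
        y = np.linspace(-4.0, 4.0, 200)
        sag = R_curv - np.sqrt(R_curv ** 2 - y ** 2)
        coef = C.chebfit(y, sag, deg=10)

        # Refract one ray travelling in +z at height y0 (air n1 into medium n2).
        n1, n2, y0 = 1.0, 1.376, 2.0
        slope = C.chebval(y0, C.chebder(coef))         # df/dy at the hit height
        n_surf = np.array([slope, -1.0])               # normal pointing back toward the ray
        n_surf /= np.linalg.norm(n_surf)
        d_in = np.array([0.0, 1.0])                    # incoming unit direction (y, z)

        # Vector form of Snell's law.
        eta = n1 / n2
        cos_i = -np.dot(d_in, n_surf)
        cos_t = np.sqrt(1.0 - eta ** 2 * (1.0 - cos_i ** 2))
        d_out = eta * d_in + (eta * cos_i - cos_t) * n_surf
        print("refracted direction (y, z):", np.round(d_out, 4))

    Repeating this refraction surface by surface for a fan of rays, and collecting the ray intersections in the image plane, is what ultimately yields the geometrical point spread function.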

  14. Mirror neurons: functions, mechanisms and models.

    Science.gov (United States)

    Oztop, Erhan; Kawato, Mitsuo; Arbib, Michael A

    2013-04-12

    Mirror neurons for manipulation fire both when the animal manipulates an object in a specific way and when it sees another animal (or the experimenter) perform an action that is more or less similar. Such neurons were originally found in macaque monkeys, in the ventral premotor cortex, area F5 and later also in the inferior parietal lobule. Recent neuroimaging data indicate that the adult human brain is endowed with a "mirror neuron system," putatively containing mirror neurons and other neurons, for matching the observation and execution of actions. Mirror neurons may serve action recognition in monkeys as well as humans, whereas their putative role in imitation and language may be realized in human but not in monkey. This article shows the important role of computational models in providing sufficient and causal explanations for the observed phenomena involving mirror systems and the learning processes which form them, and underlines the need for additional circuitry to lift up the monkey mirror neuron circuit to sustain the posited cognitive functions attributed to the human mirror neuron system. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  15. On the conversion of functional models : Bridging differences between functional taxonomies in the modeling of user actions

    NARCIS (Netherlands)

    Van Eck, D.

    2009-01-01

    In this paper, I discuss a methodology for the conversion of functional models between functional taxonomies developed by Kitamura et al. (2007) and Ookubo et al. (2007). They apply their methodology to the conversion of functional models described in terms of the Functional Basis taxonomy into

  16. School Teams up for SSP Functional Models

    Science.gov (United States)

    Pignolet, G.; Lallemand, R.; Celeste, A.; von Muldau, H.

    2002-01-01

    Space Solar Power systems appear increasingly as one of the major solutions to the upcoming global energy crisis, by collecting solar energy in space where it is easiest, and sending it by microwave beam to the surface of the planet, where the need for controlled energy is located. While fully operational systems are still decades away, the need for major development efforts is with us now. Yet, for many decision-makers and for most of the public, SSP often still sounds like science fiction. Six functional demonstration systems, based on the Japanese SPS-2000 concept, have been built as a result of a cooperation between France and Japan, and they are currently used extensively, in Japan, in Europe and in North America, for executive presentations as well as for public exhibitions. There is demand for more models, both for science museums and for use by energy-dedicated groups, and a senior high school in La Reunion, France, has picked up the challenge to make the production of such models an integrated practical school project for pre-college students. In December 2001, the administration and the teachers of the school evaluated the feasibility of the project and eventually took the go decision for the school year 2002-2003, when for education purposes a temporary "school business company" will be incorporated with the goal to study and manufacture a limited series of professional quality SSP demonstration models, and to sell them worldwide to institutions and advocacy groups concerned with energy problems and with the environment. The different sections of the school will act as the different services of an integrated business: based on the current existing models, the electronic section will redesign the energy management system and the microwave projector module, while the mechanical section of the school will adapt and re-conceive the whole packaging of the demonstrator. The French and foreign language sections will write up a technical manual for

  17. Modelling the dependability in Network Function Virtualisation

    OpenAIRE

    Lin, Wenqi

    2017-01-01

    Network Function Virtualisation has been proposed to give telecom service providers (TSPs) more possibilities and flexibility to provision services with better load optimization, energy utilization and dynamic scaling. Network functions are decoupled from the underlying dedicated hardware into software instances that run on commercial off-the-shelf servers. However, the development is still at an early stage and the dependability concerns raised by the virtualization of the network functions are touched ...

  18. Variance Function Partially Linear Single-Index Models.

    Science.gov (United States)

    Lian, Heng; Liang, Hua; Carroll, Raymond J

    2015-01-01

    We consider heteroscedastic regression models where the mean function is a partially linear single index model and the variance function depends upon a generalized partially linear single index model. We do not insist that the variance function depend only upon the mean function, as happens in the classical generalized partially linear single index model. We develop efficient and practical estimation methods for the variance function and for the mean function. Asymptotic theory for the parametric and nonparametric parts of the model is developed. Simulations illustrate the results. An empirical example involving ozone levels is used to further illustrate the results, and is shown to be a case where the variance function does not depend upon the mean function.

  19. Ray tracing the Wigner distribution function for optical simulations

    NARCIS (Netherlands)

    Mout, B.M.; Wick, Michael; Bociort, F.; Petschulat, Joerg; Urbach, Paul

    2018-01-01

    We study a simulation method that uses the Wigner distribution function to incorporate wave optical effects in an established framework based on geometrical optics, i.e., a ray tracing engine. We use the method to calculate point spread functions and show that it is accurate for paraxial systems

  20. Error of the slanted edge method for measuring the modulation transfer function of imaging systems.

    Science.gov (United States)

    Xie, Xufen; Fan, Hongda; Wang, Hongyuan; Wang, Zebin; Zou, Nianyu

    2018-03-01

    The slanted edge method is a basic approach for measuring the modulation transfer function (MTF) of imaging systems; however, its measurement accuracy is limited in practice. The theoretical analysis of the slanted edge MTF measurement method performed in this paper reveals that inappropriate edge angles and random noise reduce this accuracy. The error caused by edge angles is analyzed using sampling and reconstruction theory. Furthermore, an error model combining noise and edge angles is proposed. We verify the analyses and the model with respect to (i) the edge angle, (ii) a statistical analysis of the measurement error, (iii) the full width at half-maximum of a point spread function, and (iv) the error model. The experimental results verify the theoretical findings. This research can serve as a reference for applications of the slanted edge MTF measurement method.
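
    A stripped-down Python sketch of the pipeline that the slanted edge method builds on is shown below: blur an ideal edge with a known Gaussian PSF, differentiate the edge spread function to obtain the line spread function, and take the normalised Fourier magnitude as the MTF. The slant-and-project binning step, the noise level, and the Gaussian blur are simplified assumptions; the sketch only shows where sampling and noise enter, it does not reproduce the paper's error model.

        import numpy as np
        from scipy.special import erf

        # Synthetic edge spread function (ESF): ideal step blurred by a Gaussian
        # PSF of standard deviation sigma, plus a little measurement noise.
        n, dx, sigma = 512, 1.0, 2.0                 # samples, pixel pitch, blur (px)
        x = (np.arange(n) - n / 2) * dx
        esf = 0.5 * (1.0 + erf(x / (np.sqrt(2.0) * sigma)))
        esf += np.random.default_rng(0).normal(0.0, 1e-3, n)

        # ESF -> LSF by numerical differentiation, windowed to limit noise gain.
        lsf = np.gradient(esf, dx) * np.hanning(n)

        # LSF -> MTF: Fourier magnitude, normalised at zero frequency.
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]
        freq = np.fft.rfftfreq(n, d=dx)              # cycles per pixel

        # For a Gaussian PSF the analytic MTF is exp(-2 * (pi * f * sigma)^2);
        # compare the frequency where the MTF drops to 0.5 (MTF50).
        f50_measured = freq[np.argmin(np.abs(mtf - 0.5))]
        f50_analytic = np.sqrt(np.log(2.0) / 2.0) / (np.pi * sigma)
        print(f"MTF50: measured ~{f50_measured:.3f}, analytic {f50_analytic:.3f} cyc/px")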

  1. Factorisations for partition functions of random Hermitian matrix models

    International Nuclear Information System (INIS)

    Jackson, D.M.; Visentin, T.I.

    1996-01-01

    The partition function Z_N for Hermitian-complex matrix models can be expressed as an explicit integral over R^N, where N is a positive integer. Such an integral also occurs in connection with random surfaces and models of two dimensional quantum gravity. We show that Z_N can be expressed as the product of two partition functions, evaluated at translated arguments, for another model, giving an explicit connection between the two models. We also give an alternative computation of the partition function for the φ^4-model. The approach is an algebraic one and holds for the functions regarded as formal power series in the appropriate ring. (orig.)

  2. Exact 2-point function in Hermitian matrix model

    International Nuclear Information System (INIS)

    Morozov, A.; Shakirov, Sh.

    2009-01-01

    J. Harer and D. Zagier have found a strikingly simple generating function [1,2] for exact (all-genera) 1-point correlators in the Gaussian Hermitian matrix model. In this paper we generalize their result to 2-point correlators, using Toda integrability of the model. Remarkably, this exact 2-point correlation function turns out to be an elementary function - arctangent. Relation to the standard 2-point resolvents is pointed out. Some attempts of generalization to 3-point and higher functions are described.

  3. On Support Functions for the Development of MFM Models

    DEFF Research Database (Denmark)

    Heussen, Kai; Lind, Morten

    2012-01-01

    A modeling environment and methodology are necessary to ensure quality and reusability of models in any domain. For MFM in particular, as a tool for modeling complex systems, awareness has been increasing for this need. Introducing the context of modeling support functions, this paper provides a review of MFM applications, and contextualizes the model development with respect to process design and operation knowledge. Developing a perspective for an environment for MFM-oriented model- and application-development, a tool-chain is outlined and relevant software functions are discussed. With a perspective on MFM-modeling for existing processes and automation design, modeling stages and corresponding formal model properties are identified. Finally, practically feasible support functions and model-checks to support the model-development are suggested.

  4. Modelling Strategies for Functional Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Madsen, Kristoffer Hougaard

    2009-01-01

    and generalisations to higher order arrays are considered. Additionally, an application of the natural conjugate prior for supervised learning in the general linear model to efficiently incorporate prior information for supervised analysis is presented. Further extensions include methods to model nuisance effects ... in fMRI data, thereby suppressing noise for both supervised and unsupervised analysis techniques....

  5. Functional Decomposition of Modeling and Simulation Terrain Database Generation Process

    National Research Council Canada - National Science Library

    Yakich, Valerie R; Lashlee, J. D

    2008-01-01

    .... This report documents the conceptual procedure as implemented by Lockheed Martin Simulation, Training, and Support and decomposes terrain database construction using the Integration Definition for Function Modeling (IDEF...

  6. Mechanical modeling of skeletal muscle functioning

    NARCIS (Netherlands)

    van der Linden, B.J.J.J.

    1998-01-01

    Movement of the body or of body segments requires the combined effort of the central nervous system and the musculoskeletal system. This thesis deals with the mechanical functioning of skeletal muscle. That muscles come in a large variety of geometries suggests the existence of a relation between muscle

  7. Partially linear varying coefficient models stratified by a functional covariate

    KAUST Repository

    Maity, Arnab; Huang, Jianhua Z.

    2012-01-01

    We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric

  8. Model predictive control using fuzzy decision functions

    NARCIS (Netherlands)

    Kaymak, U.; Costa Sousa, da J.M.

    2001-01-01

    Fuzzy predictive control integrates conventional model predictive control with techniques from fuzzy multicriteria decision making, translating the goals and the constraints to predictive control in a transparent way. The information regarding the (fuzzy) goals and the (fuzzy) constraints of the

  9. SOFTWARE DESIGN MODELLING WITH FUNCTIONAL PETRI NETS

    African Journals Online (AJOL)

    Dr Obe

    the system, which can be described as a set of conditions. ... FPN Software prototype proposed for the conventional programming construct: if-then-else ... mathematical modeling tool allowing for ... methods and techniques of software design.

  10. Building functional networks of spiking model neurons.

    Science.gov (United States)

    Abbott, L F; DePasquale, Brian; Memmesheimer, Raoul-Martin

    2016-03-01

    Most of the networks used by computer scientists and many of those studied by modelers in neuroscience represent unit activities as continuous variables. Neurons, however, communicate primarily through discontinuous spiking. We review methods for transferring our ability to construct interesting networks that perform relevant tasks from the artificial continuous domain to more realistic spiking network models. These methods raise a number of issues that warrant further theoretical and experimental study.

  11. The characteristic function of rough Heston models

    OpenAIRE

    Euch, Omar El; Rosenbaum, Mathieu

    2016-01-01

    It has been recently shown that rough volatility models, where the volatility is driven by a fractional Brownian motion with small Hurst parameter, provide very relevant dynamics in order to reproduce the behavior of both historical and implied volatilities. However, due to the non-Markovian nature of the fractional Brownian motion, they raise new issues when it comes to derivatives pricing. Using an original link between nearly unstable Hawkes processes and fractional volatility models, we c...

  12. Measurement of Function Post Hip Fracture: Testing a Comprehensive Measurement Model of Physical Function.

    Science.gov (United States)

    Resnick, Barbara; Gruber-Baldini, Ann L; Hicks, Gregory; Ostir, Glen; Klinedinst, N Jennifer; Orwig, Denise; Magaziner, Jay

    2016-07-01

    Measurement of physical function post hip fracture has been conceptualized using multiple different measures. This study tested a comprehensive measurement model of physical function. This was a descriptive secondary data analysis including 168 men and 171 women post hip fracture. Using structural equation modeling, a measurement model of physical function which included grip strength, activities of daily living, instrumental activities of daily living, and performance was tested for fit at 2 and 12 months post hip fracture, and among male and female participants. Validity of the measurement model of physical function was evaluated based on how well the model explained physical activity, exercise, and social activities post hip fracture. The measurement model of physical function fit the data. The amount of variance the model or individual factors of the model explained varied depending on the activity. Decisions about the ideal way in which to measure physical function should be based on outcomes considered and participants. The measurement model of physical function is a reliable and valid method to comprehensively measure physical function across the hip fracture recovery trajectory. © 2015 Association of Rehabilitation Nurses.

  13. Applications of model theory to functional analysis

    CERN Document Server

    Iovino, Jose

    2014-01-01

    During the last two decades, methods that originated within mathematical logic, particularly set theory and model theory, have exhibited powerful applications to Banach space theory. This volume constitutes the first self-contained introduction to techniques of model theory in Banach space theory. The area of research has grown rapidly since this monograph's first appearance, but much of this material is still not readily available elsewhere. For instance, this volume offers a unified presentation of Krivine's theorem and the Krivine-Maurey theorem on stable Banach spaces, with emphasis on the

  14. Diet models with linear goal programming: impact of achievement functions.

    Science.gov (United States)

    Gerdessen, J C; de Vries, J H M

    2015-11-01

    Diet models based on goal programming (GP) are valuable tools in designing diets that comply with nutritional, palatability and cost constraints. Results derived from GP models are usually very sensitive to the type of achievement function that is chosen. This paper aims to provide a methodological insight into several achievement functions. It describes the extended GP (EGP) achievement function, which enables the decision maker to use either a MinSum achievement function (which minimizes the sum of the unwanted deviations) or a MinMax achievement function (which minimizes the largest unwanted deviation), or a compromise between both. An additional advantage of EGP models is that from one set of data and weights multiple solutions can be obtained. We use small numerical examples to illustrate the 'mechanics' of achievement functions. Then, the EGP achievement function is demonstrated on a diet problem with 144 foods, 19 nutrients and several types of palatability constraints, in which the nutritional constraints are modeled with fuzzy sets. Choice of achievement function affects the results of diet models. MinSum achievement functions can give rise to solutions that are sensitive to weight changes, and that pile all unwanted deviations on a limited number of nutritional constraints. MinMax achievement functions spread the unwanted deviations as evenly as possible, but may create many (small) deviations. EGP comprises both types of achievement functions, as well as compromises between them. It can thus, from one data set, find a range of solutions with various properties.
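
    The contrast between the two achievement functions can be reproduced on a toy problem with a few lines of Python (scipy). The three foods, two nutrient targets, budget and weights below are invented for illustration and have nothing to do with the paper's 144-food data set; the point is only that MinSum minimises the weighted sum of unwanted deviations while MinMax minimises the largest one.

        import numpy as np
        from scipy.optimize import linprog

        # Toy goal-programming diet: 3 foods, 2 nutrient targets, 1 cost budget.
        A = np.array([[2.0, 1.0, 0.5],     # nutrient 1 per unit of each food
                      [0.5, 1.5, 2.0]])    # nutrient 2 per unit of each food
        goal = np.array([10.0, 8.0])       # nutrient targets
        cost = np.array([1.0, 2.0, 1.5])   # cost per unit of food
        budget = 6.0
        w = np.array([1.0, 1.0])           # weights on unwanted (under-)deviations

        # Variables: x (3 foods) and d (2 under-achievement deviations), all >= 0.
        # Soft goal constraints: A x + d >= goal  ->  -A x - d <= -goal
        A_goal = np.hstack([-A, -np.eye(2)])
        b_goal = -goal
        A_cost = np.hstack([cost, np.zeros(2)])[None, :]
        b_cost = [budget]

        # MinSum: minimise the weighted sum of deviations.
        res_sum = linprog(c=np.r_[np.zeros(3), w],
                          A_ub=np.vstack([A_goal, A_cost]),
                          b_ub=np.r_[b_goal, b_cost],
                          bounds=[(0, None)] * 5)

        # MinMax: add z, minimise z subject to w_i * d_i <= z.
        A_max = np.hstack([np.zeros((2, 3)), np.diag(w), -np.ones((2, 1))])
        res_max = linprog(c=np.r_[np.zeros(5), 1.0],
                          A_ub=np.vstack([np.hstack([A_goal, np.zeros((2, 1))]),
                                          np.hstack([A_cost, np.zeros((1, 1))]),
                                          A_max]),
                          b_ub=np.r_[b_goal, b_cost, np.zeros(2)],
                          bounds=[(0, None)] * 6)

        print("MinSum deviations:", np.round(res_sum.x[3:], 3))
        print("MinMax deviations:", np.round(res_max.x[3:5], 3))

    On this toy instance the MinSum solution concentrates the entire shortfall on one nutrient while the MinMax solution splits it evenly between the two, which is exactly the trade-off described above.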

  15. Refined functional relations for the elliptic SOS model

    Energy Technology Data Exchange (ETDEWEB)

    Galleas, W., E-mail: w.galleas@uu.nl [ARC Centre of Excellence for the Mathematics and Statistics of Complex Systems, University of Melbourne, VIC 3010 (Australia)

    2013-02-21

    In this work we refine the method presented in Galleas (2012) [1] and obtain a novel kind of functional equation determining the partition function of the elliptic SOS model with domain wall boundaries. This functional relation arises from the dynamical Yang-Baxter relation and its solution is given in terms of multiple contour integrals.

  16. Refined functional relations for the elliptic SOS model

    International Nuclear Information System (INIS)

    Galleas, W.

    2013-01-01

    In this work we refine the method presented in Galleas (2012) [1] and obtain a novel kind of functional equation determining the partition function of the elliptic SOS model with domain wall boundaries. This functional relation arises from the dynamical Yang–Baxter relation and its solution is given in terms of multiple contour integrals.

  17. Density functional theory and multiscale materials modeling

    Indian Academy of Sciences (India)

    One of the vital ingredients in the theoretical tools useful in materials modeling at all the length scales of interest is the concept of density. In the microscopic length scale, it is the electron density that has played a major role in providing a deeper understanding of chemical binding in atoms, molecules and solids.

  18. Quark fragmentation function and the nonlinear chiral quark model

    International Nuclear Information System (INIS)

    Zhu, Z.K.

    1993-01-01

    The scaling law of the fragmentation function has been proved in this paper. With that, we show that the low-P_T quark fragmentation function can be studied as low energy physics in the light-cone coordinate frame. We therefore use the nonlinear chiral quark model, which is able to describe the low energy physics under the scale Λ_CSB, to study such a function. Meanwhile the formalism for studying the quark fragmentation function has been established. The nonlinear chiral quark model is quantized on the light-front. We then use old-fashioned perturbation theory to study the quark fragmentation function. Our first order result for such a function is in agreement with the phenomenological model study of the e+e− jet. The probability for u,d pair formation in the e+e− jet from our calculation is also in agreement with the phenomenological model results

  19. Local and Global Function Model of the Liver

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Hesheng, E-mail: hesheng@umich.edu [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan (United States); Feng, Mary [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan (United States); Jackson, Andrew [Department of Medical Physics, Memorial Sloan-Kettering Cancer Center, New York, New York (United States); Ten Haken, Randall K.; Lawrence, Theodore S. [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan (United States); Cao, Yue [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan (United States); Department of Radiology, University of Michigan, Ann Arbor, Michigan (United States); Department of Biomedical Engineering, University of Michigan, Ann Arbor, Michigan (United States)

    2016-01-01

    Purpose: To develop a local and global function model in the liver based on regional and organ function measurements to support individualized adaptive radiation therapy (RT). Methods and Materials: A local and global model for liver function was developed to include both functional volume and the effect of functional variation of subunits. Adopting the assumption of parallel architecture in the liver, the global function was composed of a sum of local function probabilities of subunits, varying between 0 and 1. The model was fit to 59 datasets of liver regional and organ function measures from 23 patients obtained before, during, and 1 month after RT. The local function probabilities of subunits were modeled by a sigmoid function in relating to MRI-derived portal venous perfusion values. The global function was fitted to a logarithm of an indocyanine green retention rate at 15 minutes (an overall liver function measure). Cross-validation was performed by leave-m-out tests. The model was further evaluated by fitting to the data divided according to whether the patients had hepatocellular carcinoma (HCC) or not. Results: The liver function model showed that (1) a perfusion value of 68.6 mL/(100 g · min) yielded a local function probability of 0.5; (2) the probability reached 0.9 at a perfusion value of 98 mL/(100 g · min); and (3) at a probability of 0.03 [corresponding perfusion of 38 mL/(100 g · min)] or lower, the contribution to global function was lost. Cross-validations showed that the model parameters were stable. The model fitted to the data from the patients with HCC indicated that the same amount of portal venous perfusion was translated into less local function probability than in the patients with non-HCC tumors. Conclusions: The developed liver function model could provide a means to better assess individual and regional dose-responses of hepatic functions, and provide guidance for individualized treatment planning of RT.
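
    The shape of the reported dose-response can be sketched in Python by calibrating a two-parameter logistic curve to the two quoted operating points (local function probability 0.5 at a perfusion of 68.6 and 0.9 at 98 mL/(100 g · min)). The logistic form, the calibration from only two points, and the synthetic perfusion map used for the global sum are assumptions for illustration; they will not reproduce every value reported for the fitted model.

        import numpy as np

        # Logistic local-function curve p(f) = 1 / (1 + exp(-(f - f50) / s)),
        # calibrated to the two operating points quoted in the abstract:
        #   p(68.6) = 0.5 and p(98) = 0.9  [perfusion f in mL/(100 g . min)].
        f50 = 68.6
        s = (98.0 - f50) / np.log(0.9 / 0.1)     # solves p(98) = 0.9

        def local_function_probability(perfusion):
            return 1.0 / (1.0 + np.exp(-(perfusion - f50) / s))

        for f in (68.6, 98.0):
            print(f"perfusion {f:5.1f} -> local function probability "
                  f"{local_function_probability(f):.3f}")

        # Global function of a synthetic liver: sum of local probabilities over
        # sub-units, here 1000 voxels with made-up perfusion values.
        rng = np.random.default_rng(1)
        perf_map = rng.normal(75.0, 20.0, size=1000).clip(min=0.0)
        print("global function (arbitrary units):",
              round(local_function_probability(perf_map).sum(), 1))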

  20. Modelling of functional systems of managerial accounting

    Directory of Open Access Journals (Sweden)

    O.V. Fomina

    2017-12-01

    Full Text Available The modern stage of managerial accounting development takes place under the powerful influence of managerial innovations. The article aims at developing an integration model of budgeting and the system of balanced indices within managerial accounting that will increase the relevance of managerial decision making at different levels of management. As a result of the study, the author proposes a highly pragmatic integration model of budgeting and the system of balanced indices in managerial accounting, realized through the development of a system for gathering, consolidating, analysing, and interpreting financial and non-financial information. It increases the relevance of managerial decision making on the basis of coordination and the effective, purpose-oriented use of both the strategic and operative resources of an enterprise. The effective integration of the system components makes it possible to distribute limited resources rationally, taking into account prospective purposes and strategic initiatives, to carry

  1. Functional summary statistics for the Johnson-Mehl model

    DEFF Research Database (Denmark)

    Møller, Jesper; Ghorbani, Mohammad

    The Johnson-Mehl germination-growth model is a spatio-temporal point process model which, among other things, has been used for the description of neurotransmitter datasets. However, for such datasets parametric Johnson-Mehl models fitted by maximum likelihood have not yet been evaluated by means of functional summary statistics. This paper therefore invents four functional summary statistics adapted to the Johnson-Mehl model, with two of them based on the second-order properties and the other two on the nuclei-boundary distances for the associated Johnson-Mehl tessellation. The functional summary statistics' theoretical properties are investigated, non-parametric estimators are suggested, and their usefulness for model checking is examined in a simulation study. The functional summary statistics are also used for checking fitted parametric Johnson-Mehl models for a neurotransmitter dataset.

  2. Modelling of multidimensional quantum systems by the numerical functional integration

    International Nuclear Information System (INIS)

    Lobanov, Yu.Yu.; Zhidkov, E.P.

    1990-01-01

    The use of numerical functional integration for the description of multidimensional systems in quantum and statistical physics is considered. For multiple functional integrals with respect to Gaussian measures on complete separable metric spaces, new approximation formulas are constructed that are exact on a class of polynomial functionals of a given total degree. The use of the formulas is demonstrated on the example of computing the Green function and the ground state energy in the multidimensional Calogero model. 15 refs.; 2 tabs

  3. Modeling a space-based quantum link that includes an adaptive optics system

    Science.gov (United States)

    Duchane, Alexander W.; Hodson, Douglas D.; Mailloux, Logan O.

    2017-10-01

    Quantum Key Distribution uses optical pulses to generate shared random bit strings between two locations. If a high percentage of the optical pulses are comprised of single photons, then the statistical nature of light and information theory can be used to generate secure shared random bit strings which can then be converted to keys for encryption systems. When these keys are incorporated along with symmetric encryption techniques such as a one-time pad, then this method of key generation and encryption is resistant to future advances in quantum computing which will significantly degrade the effectiveness of current asymmetric key sharing techniques. This research first reviews the transition of Quantum Key Distribution free-space experiments from the laboratory environment to field experiments, and finally, ongoing space experiments. Next, a propagation model for an optical pulse from low-earth orbit to ground and the effects of turbulence on the transmitted optical pulse is described. An Adaptive Optics system is modeled to correct for the aberrations caused by the atmosphere. The long-term point spread function of the completed low-earth orbit to ground optical system is explored in the results section. Finally, the impact of this optical system and its point spread function on an overall quantum key distribution system as well as the future work necessary to show this impact is described.

  4. Functional Modelling for Fault Diagnosis and its application for NPP

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2014-01-01

    The paper presents functional modelling and its application for diagnosis in nuclear power plants. Functional modelling is defined and its relevance for coping with the complexity of diagnosis in large scale systems like nuclear plants is explained. The diagnosis task is analyzed and it is demonstrated ... operating modes. The FBR example illustrates how the modeling development effort can be managed by proper strategies including decomposition and reuse.

  5. Modeling dynamic functional connectivity using a wishart mixture model

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Madsen, Kristoffer Hougaard; Schmidt, Mikkel Nørgaard

    2017-01-01

    framework provides model selection by quantifying models generalization to new data. We use this to quantify the number of states within a prespecified window length. We further propose a heuristic procedure for choosing the window length based on contrasting for each window length the predictive...... together whereas short windows are more unstable and influenced by noise and we find that our heuristic correctly identifies an adequate level of complexity. On single subject resting state fMRI data we find that dynamic models generally outperform static models and using the proposed heuristic points...

  6. Kaon fragmentation function from NJL-jet model

    International Nuclear Information System (INIS)

    Matevosyan, Hrayr H.; Thomas, Anthony W.; Bentz, Wolfgang

    2010-01-01

    The NJL-jet model provides a sound framework for calculating the fragmentation functions in an effective chiral quark theory, where the momentum and isospin sum rules are satisfied without the introduction of ad hoc parameters [1]. Earlier studies of the pion fragmentation functions using the Nambu-Jona-Lasinio (NJL) model within this framework showed good qualitative agreement with the empirical parameterizations. Here we extend the NJL-jet model by including the strange quark. The corrections to the pion fragmentation function and corresponding kaon fragmentation functions are calculated using the elementary quark to quark-meson fragmentation functions from NJL. The results for the kaon fragmentation function exhibit a qualitative agreement with the empirical parameterizations, while the unfavored strange quark fragmentation to pions is shown to be of the same order of magnitude as the unfavored light quark's. The results of these studies are expected to provide important guidance for the analysis of a large variety of semi-inclusive data.

  7. Functional Characterization of a Porcine Emphysema Model

    DEFF Research Database (Denmark)

    Bruun, Camilla Sichlau; Jensen, Louise Kruse; Leifsson, Páll Skuli

    2013-01-01

    Lung emphysema is a central feature of chronic obstructive pulmonary disease (COPD), a frequent human disease worldwide. Cigarette smoking is the major cause of COPD, but genetic predisposition seems to be an important factor. Mutations in surfactant protein genes have been linked to COPD phenotypes in humans. Also, the catalytic activities of metalloproteinases (MMPs) are central in the pathogenesis of emphysema/COPD. Especially MMP9, but also MMP2, MMP7, and MMP12 seem to be involved in human emphysema. MMP12−/− mice are protected from smoke-induced emphysema. ITGB6−/− mice spontaneously develop age-related lung emphysema due to lack of ITGB6-TGF-β1 regulation of the MMP12 expression. A mutated pig phenotype characterized by age-related lung emphysema and resembling the ITGB6−/− mouse has been described previously. To investigate the emphysema pathogenesis in this pig model, we examined

  8. Predictive assessment of models for dynamic functional connectivity

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Schmidt, Mikkel Nørgaard; Madsen, Kristoffer Hougaard

    2018-01-01

    In neuroimaging, it has become evident that models of dynamic functional connectivity (dFC), which characterize how intrinsic brain organization changes over time, can provide a more detailed representation of brain function than traditional static analyses. Many dFC models in the literature represent functional brain networks as a meta-stable process with a discrete number of states; however, there is a lack of consensus on how to perform model selection and learn the number of states, as well as a lack of understanding of how different modeling assumptions influence the estimated state dynamics. To address these issues, we consider a predictive likelihood approach to model assessment, where models are evaluated based on their predictive performance on held-out test data. Examining several prominent models of dFC (in their probabilistic formulations) we demonstrate our framework

  9. Affinity functions for modeling glass dissolution rates

    Energy Technology Data Exchange (ETDEWEB)

    Bourcier, W.L. [Lawrence Livermore National Lab., CA (United States)

    1997-07-01

    Glass dissolution rates decrease dramatically as glasses approach "saturation" with respect to the leachate solution. Most repository sites are chosen where water fluxes are minimal, and therefore the waste glass is most likely to dissolve under conditions close to "saturation". The key term in the rate expression used to predict glass dissolution rates close to "saturation" is the affinity term, which accounts for saturation effects on dissolution rates. Interpretations of recent experimental data on the dissolution behaviour of silicate glasses and silicate minerals indicate the following: 1) simple affinity control does not explain the observed dissolution rate for silicate minerals or glasses; 2) dissolution rates can be significantly modified by dissolved cations even under conditions far from saturation where the affinity term is near unity; 3) the effects of dissolved species such as Al and Si on the dissolution rate vary with pH, temperature, and saturation state; and 4) as temperature is increased, the effect of both pH and temperature on glass and mineral dissolution rates decreases, which strongly suggests a switch in rate control from surface reaction-based to diffusion control. Borosilicate glass dissolution models need to be upgraded to account for these recent experimental observations. (A.C.)
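
    For reference, the simple affinity-controlled rate law that the abstract argues is insufficient is commonly written as rate = k+ · (1 − Q/K), with Q the ion activity product of the leachate and K an equilibrium constant; the short Python sketch below evaluates it with arbitrary illustrative parameters. The observations listed above (cation, pH and temperature effects) are precisely what this bare form leaves out.

        import numpy as np

        # Simple TST-style affinity rate law often used for glass dissolution:
        #   rate = k_plus * (1 - Q/K)
        # Q: ion activity product of the leachate, K: "saturation" constant.
        # All values below are arbitrary and purely illustrative.
        k_plus = 1.0e-9                       # forward rate constant, mol m^-2 s^-1
        K = 10 ** -2.7                        # illustrative equilibrium constant

        Q = np.logspace(-6, np.log10(K), 8)   # leachate evolving toward saturation
        rate = k_plus * (1.0 - Q / K)

        for q, r in zip(Q, rate):
            print(f"Q/K = {q / K:8.2e}   rate = {r:10.3e} mol m^-2 s^-1")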

  10. Deep inelastic structure functions in the chiral bag model

    International Nuclear Information System (INIS)

    Sanjose, V.; Vento, V.; Centro Mixto CSIC/Valencia Univ., Valencia

    1989-01-01

    We calculate the structure functions for deep inelastic scattering on baryons in the cavity approximation to the chiral bag model. The behavior of these structure functions is analyzed in the Bjorken limit. We conclude that scaling is satisfied, but not Regge behavior. A trivial extension as a parton model can be achieved by introducing the structure function for the pion in a convolution picture. In this extended version of the model not only scaling but also Regge behavior is satisfied. Conclusions are drawn from the comparison of our results with experimental data. (orig.)

  11. Deep inelastic structure functions in the chiral bag model

    Energy Technology Data Exchange (ETDEWEB)

    Sanjose, V. (Valencia Univ. (Spain). Dept. de Didactica de las Ciencias Experimentales); Vento, V. (Valencia Univ. (Spain). Dept. de Fisica Teorica; Centro Mixto CSIC/Valencia Univ., Valencia (Spain). Inst. de Fisica Corpuscular)

    1989-10-02

    We calculate the structure functions for deep inelastic scattering on baryons in the cavity approximation to the chiral bag model. The behavior of these structure functions is analyzed in the Bjorken limit. We conclude that scaling is satisfied, but not Regge behavior. A trivial extension as a parton model can be achieved by introducing the structure function for the pion in a convolution picture. In this extended version of the model not only scaling but also Regge behavior is satisfied. Conclusions are drawn from the comparison of our results with experimental data. (orig.).

  12. A DSM-based framework for integrated function modelling

    DEFF Research Database (Denmark)

    Eisenbart, Boris; Gericke, Kilian; Blessing, Lucienne T. M.

    2017-01-01

    The article proposes an integrated function modelling framework, which specifically aims at relating the different function modelling perspectives prominently addressed in different disciplines. It uses interlinked matrices based on the concept of DSM and MDM in order to facilitate cross-disciplinary modelling and analysis of the functionality of a system. The article further presents the application of the framework based on a product example. Finally, an empirical study in industry is presented. Therein, feedback on the potential of the proposed framework to support interdisciplinary design practice as well as on areas of further

  13. Composite spectral functions for solving Volterra's population model

    International Nuclear Information System (INIS)

    Ramezani, M.; Razzaghi, M.; Dehghan, M.

    2007-01-01

    An approximate method for solving Volterra's population model for the population growth of a species in a closed system is proposed. Volterra's model is a nonlinear integro-differential equation, where the integral term represents the effect of toxins. The approach is based upon composite spectral function approximations. The properties of composite spectral functions consisting of a few terms of orthogonal functions are presented and are utilized to reduce the solution of Volterra's model to the solution of a system of algebraic equations. The method is easy to implement and yields very accurate results
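
    For comparison with such spectral solutions, a reference solution of the model can be generated in a few lines of Python by rewriting the integral term as an auxiliary ODE and using a standard adaptive integrator. The non-dimensional form and the parameter values below are assumptions (the common form kappa·u' = u − u² − u·∫u used in the literature on this model), and this is of course not the composite spectral method itself.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Volterra's population model in its common non-dimensional form (assumed):
        #   kappa * du/dt = u - u**2 - u * integral_0^t u(s) ds,   u(0) = u0,
        # where the integral term models the accumulating toxin in the closed system.
        # With v(t) = integral_0^t u ds it becomes a plain ODE system.
        kappa, u0 = 0.1, 0.1

        def rhs(t, y):
            u, v = y
            return [(u - u ** 2 - u * v) / kappa, u]

        sol = solve_ivp(rhs, (0.0, 5.0), [u0, 0.0], rtol=1e-9, atol=1e-11,
                        dense_output=True)
        u_dense = sol.sol(np.linspace(0.0, 5.0, 2001))[0]

        # Closed-form peak reported in the literature for this form of the model.
        u_max_analytic = 1.0 + kappa * np.log(kappa / (1.0 + kappa - u0))
        print("numerical peak population :", round(u_dense.max(), 6))
        print("closed-form peak          :", round(u_max_analytic, 6))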

  14. Distinguishing Differential Testlet Functioning from Differential Bundle Functioning Using the Multilevel Measurement Model

    Science.gov (United States)

    Beretvas, S. Natasha; Walker, Cindy M.

    2012-01-01

    This study extends the multilevel measurement model to handle testlet-based dependencies. A flexible two-level testlet response model (the MMMT-2 model) for dichotomous items is introduced that permits assessment of differential testlet functioning (DTLF). A distinction is made between this study's conceptualization of DTLF and that of…

  15. BioModels: Content, Features, Functionality, and Use

    Science.gov (United States)

    Juty, N; Ali, R; Glont, M; Keating, S; Rodriguez, N; Swat, MJ; Wimalaratne, SM; Hermjakob, H; Le Novère, N; Laibe, C; Chelliah, V

    2015-01-01

    BioModels is a reference repository hosting mathematical models that describe the dynamic interactions of biological components at various scales. The resource provides access to over 1,200 models described in literature and over 140,000 models automatically generated from pathway resources. Most model components are cross-linked with external resources to facilitate interoperability. A large proportion of models are manually curated to ensure reproducibility of simulation results. This tutorial presents BioModels' content, features, functionality, and usage. PMID:26225232

  16. Multinomial-exponential reliability function: a software reliability model

    International Nuclear Information System (INIS)

    Saiz de Bustamante, Amalio; Saiz de Bustamante, Barbara

    2003-01-01

    The multinomial-exponential reliability function (MERF) was developed during a detailed study of the software failure/correction processes. Later on, MERF was approximated by a much simpler exponential reliability function (EARF), which keeps most of MERF's mathematical properties, so the two functions together make up a single reliability model. The reliability model MERF/EARF considers the software failure process as a non-homogeneous Poisson process (NHPP), and the repair (correction) process as a multinomial distribution. The model supposes that both processes are statistically independent. The paper discusses the model's theoretical basis, its mathematical properties and its application to software reliability. Nevertheless, applications of the model to the inspection and maintenance of physical systems are also foreseen. The paper includes a complete numerical example of the model's application to a software reliability analysis

  17. Computational Models for Calcium-Mediated Astrocyte Functions

    Directory of Open Access Journals (Sweden)

    Tiina Manninen

    2018-04-01

    Full Text Available The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro, but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop

  18. Computational Models for Calcium-Mediated Astrocyte Functions.

    Science.gov (United States)

    Manninen, Tiina; Havela, Riikka; Linne, Marja-Leena

    2018-01-01

    The computational neuroscience field has heavily concentrated on the modeling of neuronal functions, largely ignoring other brain cells, including one type of glial cell, the astrocytes. Despite the short history of modeling astrocytic functions, we were delighted about the hundreds of models developed so far to study the role of astrocytes, most often in calcium dynamics, synchronization, information transfer, and plasticity in vitro , but also in vascular events, hyperexcitability, and homeostasis. Our goal here is to present the state-of-the-art in computational modeling of astrocytes in order to facilitate better understanding of the functions and dynamics of astrocytes in the brain. Due to the large number of models, we concentrated on a hundred models that include biophysical descriptions for calcium signaling and dynamics in astrocytes. We categorized the models into four groups: single astrocyte models, astrocyte network models, neuron-astrocyte synapse models, and neuron-astrocyte network models to ease their use in future modeling projects. We characterized the models based on which earlier models were used for building the models and which type of biological entities were described in the astrocyte models. Features of the models were compared and contrasted so that similarities and differences were more readily apparent. We discovered that most of the models were basically generated from a small set of previously published models with small variations. However, neither citations to all the previous models with similar core structure nor explanations of what was built on top of the previous models were provided, which made it possible, in some cases, to have the same models published several times without an explicit intention to make new predictions about the roles of astrocytes in brain functions. Furthermore, only a few of the models are available online which makes it difficult to reproduce the simulation results and further develop the models. Thus

  19. Two point function for a simple general relativistic quantum model

    OpenAIRE

    Colosi, Daniele

    2007-01-01

    We study the quantum theory of a simple general relativistic quantum model of two coupled harmonic oscillators and compute the two-point function following a proposal first introduced in the context of loop quantum gravity.

  20. Functional dynamic factor models with application to yield curve forecasting

    KAUST Repository

    Hays, Spencer; Shen, Haipeng; Huang, Jianhua Z.

    2012-01-01

    resulted in a trade-off between goodness of fit and consistency with economic theory. To address this, herein we propose a novel formulation which connects the dynamic factor model (DFM) framework with concepts from functional data analysis: a DFM

  1. Commonsense Psychology and the Functional Requirements of Cognitive Models

    National Research Council Canada - National Science Library

    Gordon, Andrew S

    2005-01-01

    In this paper we argue that previous models of cognitive abilities (e.g. memory, analogy) have been constructed to satisfy functional requirements of implicit commonsense psychological theories held by researchers and nonresearchers alike...

  2. Software Design Modelling with Functional Petri Nets | Bakpo ...

    African Journals Online (AJOL)

    Software Design Modelling with Functional Petri Nets. ... of structured programs and a FPN Software prototype proposed for the conventional programming construct: if-then-else statement. ...

  3. Importance of predictor variables for models of chemical function

    Data.gov (United States)

    U.S. Environmental Protection Agency — Importance of random forest predictors for all classification models of chemical function. This dataset is associated with the following publication: Isaacs, K., M....

  4. Generalized Functional Linear Models With Semiparametric Single-Index Interactions

    KAUST Repository

    Li, Yehua

    2010-06-01

    We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two step estimation procedure based on local estimating equations, and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.

  5. Generalized Functional Linear Models With Semiparametric Single-Index Interactions

    KAUST Repository

    Li, Yehua; Wang, Naisyin; Carroll, Raymond J.

    2010-01-01

    We introduce a new class of functional generalized linear models, where the response is a scalar and some of the covariates are functional. We assume that the response depends on multiple covariates, a finite number of latent features in the functional predictor, and interaction between the two. To achieve parsimony, the interaction between the multiple covariates and the functional predictor is modeled semiparametrically with a single-index structure. We propose a two step estimation procedure based on local estimating equations, and investigate two situations: (a) when the basis functions are pre-determined, e.g., Fourier or wavelet basis functions and the functional features of interest are known; and (b) when the basis functions are data driven, such as with functional principal components. Asymptotic properties are developed. Notably, we show that when the functional features are data driven, the parameter estimates have an increased asymptotic variance, due to the estimation error of the basis functions. Our methods are illustrated with a simulation study and applied to an empirical data set, where a previously unknown interaction is detected. Technical proofs of our theoretical results are provided in the online supplemental materials.

  6. FUNCTIONAL MODELLING FOR FAULT DIAGNOSIS AND ITS APPLICATION FOR NPP

    Directory of Open Access Journals (Sweden)

    MORTEN LIND

    2014-12-01

    Full Text Available The paper presents functional modelling and its application for diagnosis in nuclear power plants. Functional modelling is defined and its relevance for coping with the complexity of diagnosis in large scale systems like nuclear plants is explained. The diagnosis task is analyzed and it is demonstrated that the levels of abstraction in models for diagnosis must reflect plant knowledge about goals and functions which is represented in functional modelling. Multilevel flow modelling (MFM), which is a method for functional modelling, is introduced briefly and illustrated with a cooling system example. The use of MFM for reasoning about causes and consequences is explained in detail and demonstrated using the reasoning tool, the MFMSuite. MFM applications in nuclear power systems are described by two examples: a PWR and an FBR reactor. The PWR example shows how MFM can be used to model and reason about operating modes. The FBR example illustrates how the modelling development effort can be managed by proper strategies including decomposition and reuse.

  7. Functional Modelling for fault diagnosis and its application for NPP

    Energy Technology Data Exchange (ETDEWEB)

    Lind, Morten; Zhang, Xin Xin [Dept. of Electrical Engineering, Technical University of Denmark, Kongens Lyngby (Denmark)

    2014-12-15

    The paper presents functional modelling and its application for diagnosis in nuclear power plants. Functional modelling is defined and its relevance for coping with the complexity of diagnosis in large scale systems like nuclear plants is explained. The diagnosis task is analyzed and it is demonstrated that the levels of abstraction in models for diagnosis must reflect plant knowledge about goals and functions which is represented in functional modelling. Multilevel flow modelling (MFM), which is a method for functional modelling, is introduced briefly and illustrated with a cooling system example. The use of MFM for reasoning about causes and consequences is explained in detail and demonstrated using the reasoning tool, the MFMSuite. MFM applications in nuclear power systems are described by two examples: a PWR and an FBR reactor. The PWR example shows how MFM can be used to model and reason about operating modes. The FBR example illustrates how the modelling development effort can be managed by proper strategies including decomposition and reuse.

  8. Functional Modelling for fault diagnosis and its application for NPP

    International Nuclear Information System (INIS)

    Lind, Morten; Zhang, Xin Xin

    2014-01-01

    The paper presents functional modelling and its application for diagnosis in nuclear power plants. Functional modelling is defined and its relevance for coping with the complexity of diagnosis in large scale systems like nuclear plants is explained. The diagnosis task is analyzed and it is demonstrated that the levels of abstraction in models for diagnosis must reflect plant knowledge about goals and functions which is represented in functional modelling. Multilevel flow modelling (MFM), which is a method for functional modelling, is introduced briefly and illustrated with a cooling system example. The use of MFM for reasoning about causes and consequences is explained in detail and demonstrated using the reasoning tool, the MFMSuite. MFM applications in nuclear power systems are described by two examples: a PWR and an FBR reactor. The PWR example shows how MFM can be used to model and reason about operating modes. The FBR example illustrates how the modelling development effort can be managed by proper strategies including decomposition and reuse.

  9. Global sensitivity analysis of computer models with functional inputs

    International Nuclear Information System (INIS)

    Iooss, Bertrand; Ribatet, Mathieu

    2009-01-01

    Global sensitivity analysis is used to quantify the influence of uncertain model inputs on the response variability of a numerical model. The common quantitative methods are appropriate for computer codes having scalar model inputs. This paper aims at illustrating different variance-based sensitivity analysis techniques, based on the so-called Sobol' indices, when some model inputs are functional, such as stochastic processes or random spatial fields. In this work, we focus on computer codes with large CPU times, which need a preliminary metamodeling step before performing the sensitivity analysis. We propose the use of the joint modeling approach, i.e., modeling simultaneously the mean and the dispersion of the code outputs using two interlinked generalized linear models (GLMs) or generalized additive models (GAMs). The 'mean model' allows one to estimate the sensitivity indices of each scalar model input, while the 'dispersion model' allows one to derive the total sensitivity index of the functional model inputs. The proposed approach is compared to some classical sensitivity analysis methodologies on an analytical function. Lastly, the new methodology is applied to an industrial computer code that simulates the nuclear fuel irradiation.

  10. Functional linear models for association analysis of quantitative traits.

    Science.gov (United States)

    Fan, Ruzong; Wang, Yifan; Mills, James L; Wilson, Alexander F; Bailey-Wilson, Joan E; Xiong, Momiao

    2013-11-01

    Functional linear models are developed in this paper for testing associations between quantitative traits and genetic variants, which can be rare variants, common variants, or a combination of the two. By treating multiple genetic variants of an individual in a human population as a realization of a stochastic process, the genome of an individual in a chromosome region is a continuum of sequence data rather than discrete observations. The genome of an individual is viewed as a stochastic function that contains both linkage and linkage disequilibrium (LD) information of the genetic markers. By using techniques of functional data analysis, both fixed and mixed effect functional linear models are built to test the association between quantitative traits and genetic variants adjusting for covariates. After extensive simulation analysis, it is shown that the F-distributed tests of the proposed fixed effect functional linear models have higher power than the sequence kernel association test (SKAT) and its optimal unified test (SKAT-O) for three scenarios in most cases: (1) the causal variants are all rare, (2) the causal variants are both rare and common, and (3) the causal variants are common. The superior performance of the fixed effect functional linear models is most likely due to their optimal utilization of both genetic linkage and LD information of multiple genetic variants in a genome and similarity among different individuals, while SKAT and SKAT-O only model the similarities and pairwise LD but do not model linkage and higher-order LD information sufficiently. In addition, the proposed fixed effect models generate accurate type I error rates in simulation studies. We also show that the functional kernel score tests of the proposed mixed effect functional linear models are preferable in candidate gene analysis and small sample problems. The methods are applied to analyze three biochemical traits in data from the Trinity Students Study. © 2013 WILEY
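    As a generic illustration of the form such models take (the notation here is a common textbook form, not necessarily the authors' exact specification), the fixed effect functional linear model for a quantitative trait can be written as

        \[
          y_i \;=\; \alpha_0 + X_i^{\top}\alpha + \int_a^b G_i(t)\,\beta(t)\,dt + \varepsilon_i,
          \qquad
          \beta(t) \;=\; \sum_{k=1}^{K} b_k\,\phi_k(t),
        \]

    where $y_i$ is the trait value of individual $i$, $X_i$ are covariates, $G_i(t)$ is the genotype function over the chromosome region, and expanding the genetic effect $\beta(t)$ in $K$ basis functions $\phi_k$ reduces the association test to an F-test on the coefficients $b_1,\dots,b_K$.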

  11. A nonextensive statistical model for the nucleon structure function

    International Nuclear Information System (INIS)

    Trevisan, Luis A.; Mirez, Carlos

    2013-01-01

    We studied an application of nonextensive thermodynamics to describe the structure function of the nucleon, in a model where the usual Fermi-Dirac and Bose-Einstein energy distributions were replaced by the equivalent functions of q-statistics. The parameters of the model are given by an effective temperature T, the q parameter (from Tsallis statistics), and two chemical potentials given by the corresponding up (u) and down (d) quark normalization in the nucleon.
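    For orientation, one commonly used q-deformed replacement for the Fermi-Dirac occupation in such models is (shown in a generic form; the paper's exact parametrization may differ)

        \[
          n_q(E) \;=\; \frac{1}{\big[\,1 + (q-1)\,(E-\mu)/T\,\big]^{1/(q-1)} + 1},
        \]

    which recovers the usual Fermi-Dirac distribution $n(E) = 1/(e^{(E-\mu)/T}+1)$ in the limit $q \to 1$; separate chemical potentials $\mu_u$ and $\mu_d$ then fix the u- and d-quark normalizations in the nucleon.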

  12. The Potts model and flows. 1. The pair correlation function

    International Nuclear Information System (INIS)

    Essam, J.W.; Tsallis, C.

    1985-01-01

    It is shown that the partition function for the lambda-state Potts model with pair-interactions is related to the expected number of integer mod-lambda flows in a percolation model. The relation is generalised to the pair correlation function. The resulting high temperature expansion coefficients are shown to be the flow polynomials of graph theory. An observation of Tsallis and Levy concerning the equivalent transmissivity of a cluster is also proved. (Author)

  13. Longitudinal mixed-effects models for latent cognitive function

    NARCIS (Netherlands)

    van den Hout, Ardo; Fox, Gerardus J.A.; Muniz-Terrera, Graciela

    2015-01-01

    A mixed-effects regression model with a bent-cable change-point predictor is formulated to describe potential decline of cognitive function over time in the older population. For the individual trajectories, cognitive function is considered to be a latent variable measured through an item response

  14. Correlation function of four spins in the percolation model

    Directory of Open Access Journals (Sweden)

    Vladimir S. Dotsenko

    2016-10-01

    It is known that the four-point functions define the actual fusion rules of a particular model. In this respect, we find that fusion of two spins, of dimension Δσ=5/96, produces a new channel in the 4-point function, which is due to the operator with dimension Δ=5/8.

  15. Improved Wave-vessel Transfer Functions by Uncertainty Modelling

    DEFF Research Database (Denmark)

    Nielsen, Ulrik Dam; Fønss Bach, Kasper; Iseki, Toshio

    2016-01-01

    This paper deals with uncertainty modelling of wave-vessel transfer functions used to calculate or predict wave-induced responses of a ship in a seaway. Although transfer functions, in theory, can be calculated to exactly reflect the behaviour of the ship when exposed to waves, uncertainty in inp...

  16. Exploitation of geoinformatics at modelling of functional effects of forest functions

    International Nuclear Information System (INIS)

    Sitko, R.

    2005-01-01

    From the point of view of spatial modelling, geoinformatics has wide application to the ecological functions of the forest, because these functions depend directly on the natural conditions of the site. The modelling application was realised on the territory of TANAP (Tatras National Park), West Tatras, in the Liptovske Kopy area. The territory covers about 4,900 hectares and its forests serve, first of all, significant ecological functions, namely soil protection from erosion, water management, and the anti-avalanche function. Among the environmental functions, they also play a recreational role and serve nature protection. The anti-avalanche and anti-erosion functions of the forest are evaluated in this presentation.

  17. Features of Functioning the Integrated Building Thermal Model

    Directory of Open Access Journals (Sweden)

    Morozov Maxim N.

    2017-01-01

    A model of the building heating system, consisting of the energy source, a distributed automatic control system, the elements of an individual heating unit and the heating system, is designed. The Simulink application of the Matlab mathematical package is selected as the platform for the model. The specialized Simscape libraries, together with the wide range of Matlab mathematical tools, allow the “acausal” modeling concept to be applied. Implementing a “physical” representation of the object model improved the accuracy of the models. The principle of operation and the features of the functioning of the thermal model are described. Investigations of the building cooling dynamics were carried out.

  18. Ab initio derivation of model energy density functionals

    International Nuclear Information System (INIS)

    Dobaczewski, Jacek

    2016-01-01

    I propose a simple and manageable method that allows for deriving coupling constants of model energy density functionals (EDFs) directly from ab initio calculations performed for finite fermion systems. A proof-of-principle application allows for linking properties of finite nuclei, determined by using the nuclear nonlocal Gogny functional, to the coupling constants of the quasilocal Skyrme functional. The method does not rely on properties of infinite fermion systems but on the ab initio calculations in finite systems. It also allows for quantifying merits of different model EDFs in describing the ab initio results. (letter)

  19. Joint Modelling of Structural and Functional Brain Networks

    DEFF Research Database (Denmark)

    Andersen, Kasper Winther; Herlau, Tue; Mørup, Morten

    -parametric Bayesian network model which allows for joint modelling and integration of multiple networks. We demonstrate the model’s ability to detect vertices that share structure across networks jointly in functional MRI (fMRI) and diffusion MRI (dMRI) data. Using two fMRI and dMRI scans per subject, we establish...

  20. Constructing rule-based models using the belief functions framework

    NARCIS (Netherlands)

    Almeida, R.J.; Denoeux, T.; Kaymak, U.; Greco, S.; Bouchon-Meunier, B.; Coletti, G.; Fedrizzi, M.; Matarazzo, B.; Yager, R.R.

    2012-01-01

    We study a new approach to regression analysis. We propose a new rule-based regression model using the theoretical framework of belief functions. For this purpose we use the recently proposed Evidential c-means (ECM) to derive rule-based models solely from data. ECM allocates, for each

  1. PSA Model Improvement Using Maintenance Rule Function Mapping

    Energy Technology Data Exchange (ETDEWEB)

    Seo, Mi Ro [KHNP-CRI, Nuclear Safety Laboratory, Daejeon (Korea, Republic of)]

    2011-10-15

    The Maintenance Rule (MR) program, in nature, is a performance-based program. Therefore, the risk information derived from the Probabilistic Safety Assessment model is introduced into the MR program during the Safety Significance determination and Performance Criteria selection processes. However, this process also facilitates the determination of the vulnerabilities in currently utilized PSA models and offers means of improving them. To find vulnerabilities in an existing PSA model, an initial review determines whether the safety-related MR functions are included in the PSA model. Because safety-related MR functions are related to accident prevention and mitigation, it is generally necessary for them to be included in the PSA model. In the process of determining the safety significance of each function, quantitative risk importance levels are determined through a process known as PSA model basic event mapping to MR functions. During this process, it is common for some inadequate and overlooked models to be uncovered. In this paper, the PSA model and the MR program of Wolsong Unit 1 were used as references.

  2. NJL-jet model for quark fragmentation functions

    International Nuclear Information System (INIS)

    Ito, T.; Bentz, W.; Cloeet, I. C.; Thomas, A. W.; Yazaki, K.

    2009-01-01

    A description of fragmentation functions which satisfy the momentum and isospin sum rules is presented in an effective quark theory. Concentrating on the pion fragmentation function, we first explain why the elementary (lowest order) fragmentation process q→qπ is completely inadequate to describe the empirical data, although the crossed process π→qq describes the quark distribution functions in the pion reasonably well. Taking into account cascadelike processes in a generalized jet-model approach, we then show that the momentum and isospin sum rules can be satisfied naturally, without the introduction of ad hoc parameters. We present results for the Nambu-Jona-Lasinio (NJL) model in the invariant mass regularization scheme and compare them with the empirical parametrizations. We argue that the NJL-jet model, developed herein, provides a useful framework with which to calculate the fragmentation functions in an effective chiral quark theory.

  3. Apply Functional Modelling to Consequence Analysis in Supervision Systems

    DEFF Research Database (Denmark)

    Zhang, Xinxin; Lind, Morten; Gola, Giulio

    2013-01-01

    This paper will first present the purpose and goals of applying functional modelling approach to consequence analysis by adopting Multilevel Flow Modelling (MFM). MFM Models describe a complex system in multiple abstraction levels in both means-end dimension and whole-part dimension. It contains...... consequence analysis to practical or online applications in supervision systems. It will also suggest a multiagent solution as the integration architecture for developing tools to facilitate the utilization results of functional consequence analysis. Finally a prototype of the multiagent reasoning system...... causal relations between functions and goals. A rule base system can be developed to trace the causal relations and perform consequence propagations. This paper will illustrate how to use MFM for consequence reasoning by using rule base technology and describe the challenges for integrating functional...

  4. Density Functional Theory and Materials Modeling at Atomistic Length Scales

    Directory of Open Access Journals (Sweden)

    Swapan K. Ghosh

    2002-04-01

    We discuss the basic concepts of density functional theory (DFT) as applied to materials modeling in the microscopic, mesoscopic and macroscopic length scales. The picture that emerges is that of a single unified framework for the study of both quantum and classical systems. While for quantum DFT the central equation is a one-particle Schrodinger-like Kohn-Sham equation, classical DFT consists of Boltzmann-type distributions, both corresponding to a system of noninteracting particles in the field of a density-dependent effective potential, the exact functional form of which is unknown. One therefore approximates the exchange-correlation potential for quantum systems and the excess free energy density functional or the direct correlation functions for classical systems. Illustrative applications of quantum DFT to microscopic modeling of molecular interaction and of classical DFT to mesoscopic modeling of soft condensed matter systems are highlighted.
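    The one-particle equation referred to above is the Kohn-Sham equation, which in atomic units takes the standard form

        \[
          \Big[ -\tfrac{1}{2}\nabla^2 + v_{\mathrm{ext}}(\mathbf r)
          + \int \frac{n(\mathbf r')}{|\mathbf r - \mathbf r'|}\, d\mathbf r'
          + v_{\mathrm{xc}}(\mathbf r) \Big]\,\varphi_i(\mathbf r)
          \;=\; \varepsilon_i\,\varphi_i(\mathbf r),
          \qquad
          n(\mathbf r) = \sum_i^{\mathrm{occ}} |\varphi_i(\mathbf r)|^2,
        \]

    where $v_{\mathrm{xc}} = \delta E_{\mathrm{xc}}[n]/\delta n$ is the exchange-correlation potential whose exact functional form is unknown and must be approximated, as the abstract notes.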

  5. Evaluation-Function-based Model-free Adaptive Fuzzy Control

    Directory of Open Access Journals (Sweden)

    Agus Naba

    2016-12-01

    Designs of adaptive fuzzy controllers (AFC) are commonly based on the Lyapunov approach, which requires a known model of the controlled plant. They need to consider a Lyapunov function candidate as an evaluation function to be minimized. In this study these drawbacks were handled by designing a model-free adaptive fuzzy controller (MFAFC) using an approximate evaluation function defined in terms of the current state, the next state, and the control action. MFAFC considers the approximate evaluation function as an evaluative control performance measure similar to the state-action value function in reinforcement learning. The simulation results of applying MFAFC to the inverted pendulum benchmark verified the proposed scheme’s efficacy.

  6. A Utility-Based Model for Determining Functional Requirement Target Values (Model Penentuan Nilai Target Functional Requirement Berbasis Utilitas)

    Directory of Open Access Journals (Sweden)

    Cucuk Nur Rosyidi

    2012-01-01

    In a product design and development process, a designer faces the problem of deciding functional requirement (FR) target values. That decision is made under risk since it is conducted in the early design phase using incomplete information. A utility function can be used to reflect the decision maker's attitude towards risk in making such a decision. In this research, we develop a utility-based model to determine FR target values using a quadratic utility function and information from Quality Function Deployment (QFD). A pencil design is used as a numerical example, using a quadratic utility function for each FR. The model can be applied for balancing customer and designer interests in determining FR target values.

  7. Modelling the Impact of Soil Management on Soil Functions

    Science.gov (United States)

    Vogel, H. J.; Weller, U.; Rabot, E.; Stößel, B.; Lang, B.; Wiesmeier, M.; Urbanski, L.; Wollschläger, U.

    2017-12-01

    Due to increasing soil loss and an increasing demand for food and energy there is enormous pressure on soils as the central resource for agricultural production. Besides the importance of soils for biomass production there are other essential soil functions, i.e. filter and buffer for water, carbon sequestration, provision and recycling of nutrients, and habitat for biological activity. All these functions have a direct feedback to biogeochemical cycles and climate. To render agricultural production efficient and sustainable we need to develop model tools that are capable of quantitatively predicting the impact of a multitude of management measures on these soil functions. These functions are considered as emergent properties produced by soils as complex systems. The major challenge is to handle the multitude of physical, chemical and biological processes interacting in a non-linear manner. A large number of validated models for specific soil processes are available. However, it is not possible to simulate soil functions by coupling all the relevant processes at the detailed (i.e. molecular) level where they are well understood. A new systems perspective is required to evaluate the ensemble of soil functions and their sensitivity to external forcing. Another challenge is that soils are spatially heterogeneous systems by nature. Soil processes are highly dependent on the local soil properties and, hence, any model to predict soil functions needs to account for the site-specific conditions. For upscaling towards regional scales the spatial distribution of functional soil types needs to be taken into account. We propose a new systemic model approach based on a thorough analysis of the interactions between physical, chemical and biological processes considering their site-specific characteristics. It is demonstrated for the example of soil compaction and the recovery of soil structure, water capacity and carbon stocks as a result of plant growth and biological

  8. String beta function equations from c=1 matrix model

    CERN Document Server

    Dhar, A; Wadia, S R; Dhar, Avinash; Mandal, Gautam; Wadia, Spenta R

    1995-01-01

    We derive the sigma-model tachyon beta-function equation of 2-dimensional string theory, in the background of flat space and linear dilaton, working entirely within the c=1 matrix model. The tachyon beta-function equation is satisfied by a nonlocal and nonlinear combination of the (massless) scalar field of the matrix model. We discuss the possibility of describing the 'discrete states' as well as other possible gravitational and higher tensor backgrounds of 2-dimensional string theory within the c=1 matrix model. We also comment on the realization of the W-infinity symmetry of the matrix model in the string theory. The present work reinforces the viewpoint that a nonlocal (and nonlinear) transform is required to extract the space-time physics of 2-dimensional string theory from the c=1 matrix model.

  9. Partially linear varying coefficient models stratified by a functional covariate

    KAUST Repository

    Maity, Arnab

    2012-10-01

    We consider the problem of estimation in semiparametric varying coefficient models where the covariate modifying the varying coefficients is functional and is modeled nonparametrically. We develop a kernel-based estimator of the nonparametric component and a profiling estimator of the parametric component of the model and derive their asymptotic properties. Specifically, we show the consistency of the nonparametric functional estimates and derive the asymptotic expansion of the estimates of the parametric component. We illustrate the performance of our methodology using a simulation study and a real data application.

  10. Zeros of the partition function for some generalized Ising models

    International Nuclear Information System (INIS)

    Dunlop, F.

    1981-01-01

    The author considers generalized Ising models with two- and four-body interactions in a complex external field h such that Re h ≥ |Im h| + C, where C is an explicit function of the interaction parameters. The partition function Z(h) is then shown to satisfy |Z(h)| ≥ Z(C), so that the pressure is analytic in h inside the given region. The method is applied to specific examples: the gauge invariant Ising model, and the Widom-Rowlinson model on the lattice. (Auth.)

  11. Functional State Modelling of Cultivation Processes: Dissolved Oxygen Limitation State

    Directory of Open Access Journals (Sweden)

    Olympia Roeva

    2015-04-01

    A new functional state, namely the dissolved oxygen limitation state, is presented in this study for fed-batch cultivation processes of both the bacterium Escherichia coli and the yeast Saccharomyces cerevisiae. The functional state modelling approach is applied to cultivation processes in order to overcome the main disadvantages of using a global process model, namely a complex model structure and a large number of model parameters. Along with the newly introduced dissolved oxygen limitation state, a second acetate production state and a first acetate production state are recognized during the fed-batch cultivation of E. coli, while a mixed oxidative state and a first ethanol production state are recognized during the fed-batch cultivation of S. cerevisiae. For all the functional states mentioned above, both structural and parameter identification is performed here based on experimental data of E. coli and S. cerevisiae fed-batch cultivations.

  12. Assessment of tropospheric delay mapping function models in Egypt: Using PTD database model

    Science.gov (United States)

    Abdelfatah, M. A.; Mousa, Ashraf E.; El-Fiky, Gamal S.

    2018-06-01

    For space geodetic measurements, estimates of tropospheric delays are highly correlated with site coordinates and receiver clock biases. Thus, it is important to use the most accurate models for the tropospheric delay to reduce errors in the estimates of the other parameters. Both the zenith delay value and the mapping function should be assigned correctly to reduce such errors. Several mapping function models can treat the troposphere slant delay. The recent models have not been evaluated for the local climate conditions of Egypt, and an assessment of these models is needed to choose the most suitable one. The goal of this paper is to test which global mapping functions provide the highest consistency with precise troposphere delay (PTD) mapping functions. The PTD model is derived from radiosonde data using ray tracing and is considered in this paper as the true value. The PTD mapping functions were compared with three recent total mapping function models and another three separate dry and wet mapping function models. The results of the research indicate that the models are very close up to a zenith angle of 80°. The Saastamoinen and 1/cos z models lag behind in accuracy. The Niell model is better than the VMF model, and the model of Black and Eisner is a good model. The results also indicate that the geometric range error has an insignificant effect on the slant delay and that the azimuth anti-symmetric fluctuation is about 1%.
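    For context, the simplest mapping function mentioned above is $1/\cos z = 1/\sin\varepsilon$, while the Niell and VMF families use the normalized continued-fraction form (written generically here; the coefficient values differ between models and between the dry and wet components)

        \[
          m(\varepsilon) \;=\;
          \frac{1 + \dfrac{a}{1 + \dfrac{b}{1 + c}}}
               {\sin\varepsilon + \dfrac{a}{\sin\varepsilon + \dfrac{b}{\sin\varepsilon + c}}},
        \]

    where $\varepsilon$ is the elevation angle ($z = 90^\circ - \varepsilon$), so the slant delay is the zenith delay multiplied by $m(\varepsilon)$.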

  13. Embedded systems development from functional models to implementations

    CERN Document Server

    Zeng, Haibo; Natale, Marco; Marwedel, Peter

    2014-01-01

    This book offers readers broad coverage of techniques to model, verify and validate the behavior and performance of complex distributed embedded systems.  The authors attempt to bridge the gap between the three disciplines of model-based design, real-time analysis and model-driven development, for a better understanding of the ways in which new development flows can be constructed, going from system-level modeling to the correct and predictable generation of a distributed implementation, leveraging current and future research results.     Describes integration of heterogeneous models; Discusses synthesis of task model implementations and code implementations; Compares model-based design vs. model-driven approaches; Explains how to enforce correctness by construction in the functional and time domains; Includes optimization techniques for control performance.

  14. Modeling Functional Neuroanatomy for an Anatomy Information System

    Science.gov (United States)

    Niggemann, Jörg M.; Gebert, Andreas; Schulz, Stefan

    2008-01-01

    Objective: Existing neuroanatomical ontologies, databases and information systems, such as the Foundational Model of Anatomy (FMA), represent outgoing connections from brain structures, but cannot represent the “internal wiring” of structures and as such, cannot distinguish between different independent connections from the same structure. Thus, a fundamental aspect of Neuroanatomy, the functional pathways and functional systems of the brain such as the pupillary light reflex system, is not adequately represented. This article identifies underlying anatomical objects which are the source of independent connections (collections of neurons) and uses these as basic building blocks to construct a model of functional neuroanatomy and its functional pathways. Design: The basic representational elements of the model are unnamed groups of neurons or groups of neuron segments. These groups, their relations to each other, and the relations to the objects of macroscopic anatomy are defined. The resulting model can be incorporated into the FMA. Measurements: The capabilities of the presented model are compared to the FMA and the Brain Architecture Management System (BAMS). Results: Internal wiring as well as functional pathways can correctly be represented and tracked. Conclusion: This model bridges the gap between representations of single neurons and their parts on the one hand and representations of spatial brain structures and areas on the other hand. It is capable of drawing correct inferences on pathways in a nervous system. The object and relation definitions are related to the Open Biomedical Ontology effort and its relation ontology, so that this model can be further developed into an ontology of neuronal functional systems. PMID:18579841

  15. Quark fragmentation functions in NJL-jet model

    Science.gov (United States)

    Bentz, Wolfgang; Matevosyan, Hrayr; Thomas, Anthony

    2014-09-01

    We report on our studies of quark fragmentation functions in the Nambu-Jona-Lasinio (NJL) - jet model. The results of Monte-Carlo simulations for the fragmentation functions to mesons and nucleons, as well as to pion and kaon pairs (dihadron fragmentation functions) are presented. The important role of intermediate vector meson resonances for those semi-inclusive deep inelastic production processes is emphasized. Our studies are very relevant for the extraction of transverse momentum dependent quark distribution functions from measured scattering cross sections. Supported by Grant in Aid for Scientific Research, Japanese Ministry of Education, Culture, Sports, Science and Technology, Project No. 20168769.
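    The momentum sum rule referred to above takes the standard form (quoted here for reference; the companion isospin sum rule relates the favoured and unfavoured fragmentation functions)

        \[
          \sum_h \int_0^1 dz\; z\, D_q^h(z) \;=\; 1,
        \]

    i.e. the fragmenting quark transfers all of its light-cone momentum to the emitted hadrons; it is the cascade-like emission in the NJL-jet picture that allows this constraint to be satisfied without ad hoc parameters.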

  16. Functionally unidimensional item response models for multivariate binary data

    DEFF Research Database (Denmark)

    Ip, Edward; Molenberghs, Geert; Chen, Shyh-Huei

    2013-01-01

    The problem of fitting unidimensional item response models to potentially multidimensional data has been extensively studied. The focus of this article is on response data that have a strong dimension but also contain minor nuisance dimensions. Fitting a unidimensional model to such multidimensional data is believed to result in ability estimates that represent a combination of the major and minor dimensions. We conjecture that the underlying dimension for the fitted unidimensional model, which we call the functional dimension, represents a nonlinear projection. In this article we investigate ... tool. An example regarding a construct of desire for physical competency is used to illustrate the functional unidimensional approach....

  17. Towards refactoring the Molecular Function Ontology with a UML profile for function modeling.

    Science.gov (United States)

    Burek, Patryk; Loebe, Frank; Herre, Heinrich

    2017-10-04

    Gene Ontology (GO) is the largest resource for cataloging gene products. This resource grows steadily and, naturally, this growth raises issues regarding the structure of the ontology. Moreover, modeling and refactoring large ontologies such as GO is generally far from being simple, as a whole as well as when focusing on certain aspects or fragments. It seems that human-friendly graphical modeling languages such as the Unified Modeling Language (UML) could be helpful in connection with these tasks. We investigate the use of UML for making the structural organization of the Molecular Function Ontology (MFO), a sub-ontology of GO, more explicit. More precisely, we present a UML dialect, called the Function Modeling Language (FueL), which is suited for capturing functions in an ontologically founded way. FueL is equipped, among other features, with language elements that arise from studying patterns of subsumption between functions. We show how to use this UML dialect for capturing the structure of molecular functions. Furthermore, we propose and discuss some refactoring options concerning fragments of MFO. FueL enables the systematic, graphical representation of functions and their interrelations, including making information explicit that is currently either implicit in MFO or is mainly captured in textual descriptions. Moreover, the considered subsumption patterns lend themselves to the methodical analysis of refactoring options with respect to MFO. On this basis we argue that the approach can increase the comprehensibility of the structure of MFO for humans and can support communication, for example, during revision and further development.

  18. Driver steering model for closed-loop steering function analysis

    Science.gov (United States)

    Bolia, Pratiksh; Weiskircher, Thomas; Müller, Steffen

    2014-05-01

    In this paper, a two-level preview driver steering control model for use in numerical vehicle dynamics simulation is introduced. The proposed model is composed of cascaded control loops: the outer loop is the path-following layer based on a potential field framework, while the inner loop tries to capture the driver's physical behaviour. The proposed driver model allows easy implementation of different driving situations to simulate a wide range of different driver types, moods and vehicle types. The expediency of the proposed driver model is shown with the help of a driver steering assist (DSA) function integrated with a conventional series-production system (an electric power steering system with a rack-assist servo unit). With the help of the DSA function, the driver is prevented from oversaturating the front tyre forces and losing stability and controllability during cornering. The simulation results show different driver reactions caused by changes in the parameters or properties of the proposed driver model when the DSA function is activated. Thus, the proposed driver model is useful for evaluating advanced driver steering and vehicle stability assist functions in the early stage of vehicle dynamics handling and stability evaluation.

  19. Spatial and functional modeling of carnivore and insectivore molariform teeth.

    Science.gov (United States)

    Evans, Alistair R; Sanson, Gordon D

    2006-06-01

    The interaction between the two main competing geometric determinants of teeth (the geometry of function and the geometry of occlusion) were investigated through the construction of three-dimensional spatial models of several mammalian tooth forms (carnassial, insectivore premolar, zalambdodont, dilambdodont, and tribosphenic). These models aim to emulate the shape and function of mammalian teeth. The geometric principles of occlusion relating to single- and double-crested teeth are reviewed. Function was considered using engineering principles that relate tooth shape to function. Substantial similarity between the models and mammalian teeth were achieved. Differences between the two indicate the influence of tooth strength, geometric relations between upper and lower teeth (including the presence of the protocone), and wear on tooth morphology. The concept of "autocclusion" is expanded to include any morphological features that ensure proper alignment of cusps on the same tooth and other teeth in the tooth row. It is concluded that the tooth forms examined are auto-aligning, and do not require additional morphological guides for correct alignment. The model of therian molars constructed by Crompton and Sita-Lumsden ([1970] Nature 227:197-199) is reconstructed in 3D space to show that their hypothesis of crest geometry is erroneous, and that their model is a special case of a more general class of models. (c) 2004 Wiley-Liss, Inc.

  20. Correlation functions of the Heisenberg-Mattis model in one dimension

    International Nuclear Information System (INIS)

    Azeeem, W.

    1991-01-01

    The technique of real-space renormalization has been applied to the dynamics of the Heisenberg-Mattis model, which represents a random magnetic system with competing ferromagnetic and antiferromagnetic interactions. The renormalization technique, which has been in use for calculating the density of states, is extended to calculate the dynamical response function from momentum- and energy-dependent Green's functions. Our numerical results on the density of states and the structure function of the one-dimensional Heisenberg-Mattis model are in good agreement with computer simulation results. The numerical scheme worked out in this thesis has the advantage that it can also provide a complete map of the momentum and energy dependence of the structure function. (author)

  1. A unified wall function for compressible turbulence modelling

    Science.gov (United States)

    Ong, K. C.; Chan, A.

    2018-05-01

    Turbulence modelling near the wall often requires a high mesh density clustered around the wall, with the first cells adjacent to the wall placed in the viscous sublayer. As a result, the numerical stability is constrained by the smallest cell size, which requires a high computational overhead. In the present study, a unified wall function is developed which is valid for the viscous sublayer, the buffer sublayer and the inertial sublayer, and which includes the effects of compressibility, heat transfer and pressure gradient. The resulting wall function applies to compressible turbulence modelling for both isothermal and adiabatic wall boundary conditions with a non-zero pressure gradient. Two simple wall function algorithms are implemented for practical computation of isothermal and adiabatic wall boundary conditions. The numerical results show that the wall function evaluates the wall shear stress and the turbulent quantities of the wall-adjacent cells over a wide range of non-dimensional wall distances and relaxes the requirements on the number and size of cells.
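    For reference, the classical two-layer behaviour that any unified wall function must reproduce is $u^+ = y^+$ in the viscous sublayer and $u^+ = (1/\kappa)\ln y^+ + B$ in the inertial sublayer. One widely used single-expression blend, Reichardt's profile, is shown below purely as an example of the idea; it is not the compressible formulation developed in the paper:

        \[
          u^+ \;=\; \frac{1}{\kappa}\ln\!\big(1 + \kappa y^+\big)
                  + 7.8\left(1 - e^{-y^+/11} - \frac{y^+}{11}\, e^{-y^+/3}\right),
          \qquad \kappa \approx 0.41,
        \]

    which reduces to $y^+$ for small $y^+$ and approaches the log law far from the wall, so a single expression covers all three sublayers.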

  2. Functional form diagnostics for Cox's proportional hazards model.

    Science.gov (United States)

    León, Larry F; Tsai, Chih-Ling

    2004-03-01

    We propose a new type of residual and an easily computed functional form test for the Cox proportional hazards model. The proposed test is a modification of the omnibus test for testing the overall fit of a parametric regression model, developed by Stute, González Manteiga, and Presedo Quindimil (1998, Journal of the American Statistical Association 93, 141-149), and is based on what we call censoring consistent residuals. In addition, we develop residual plots that can be used to identify the correct functional forms of covariates. We compare our test with the functional form test of Lin, Wei, and Ying (1993, Biometrika 80, 557-572) in a simulation study. The practical application of the proposed residuals and functional form test is illustrated using both a simulated data set and a real data set.
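    For readers who want the object being tested, the Cox model with covariate functional form $f(\cdot)$ reads, in standard notation (this is background, not the authors' residual definition),

        \[
          \lambda(t \mid X) \;=\; \lambda_0(t)\, \exp\{\beta\, f(X)\},
        \]

    and functional form diagnostics ask whether the assumed $f$ (often the identity) is adequate, typically by smoothing or cumulating residuals, such as the martingale residuals $\hat M_i = \delta_i - \hat\Lambda_0(T_i)\exp\{\hat\beta X_i\}$, against the covariate.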

  3. Conserved Functional Motifs and Homology Modeling to Predict Hidden Moonlighting Functional Sites

    KAUST Repository

    Wong, Aloysius Tze; Gehring, Christoph A; Irving, Helen R.

    2015-01-01

    Moonlighting functional centers within proteins can provide them with hitherto unrecognized functions. Here, we review how hidden moonlighting functional centers, which we define as binding sites that have catalytic activity or regulate protein function in a novel manner, can be identified using targeted bioinformatic searches. Functional motifs used in such searches include amino acid residues that are conserved across species and many of which have been assigned functional roles based on experimental evidence. Molecules that were identified in this manner seeking cyclic mononucleotide cyclases in plants are used as examples. The strength of this computational approach is enhanced when good homology models can be developed to test the functionality of the predicted centers in silico, which, in turn, increases confidence in the ability of the identified candidates to perform the predicted functions. Computational characterization of moonlighting functional centers is not diagnostic for catalysis but serves as a rapid screening method, and highlights testable targets from a potentially large pool of candidates for subsequent in vitro and in vivo experiments required to confirm the functionality of the predicted moonlighting centers.

  4. Conserved Functional Motifs and Homology Modeling to Predict Hidden Moonlighting Functional Sites

    KAUST Repository

    Wong, Aloysius Tze

    2015-06-09

    Moonlighting functional centers within proteins can provide them with hitherto unrecognized functions. Here, we review how hidden moonlighting functional centers, which we define as binding sites that have catalytic activity or regulate protein function in a novel manner, can be identified using targeted bioinformatic searches. Functional motifs used in such searches include amino acid residues that are conserved across species and many of which have been assigned functional roles based on experimental evidence. Molecules that were identified in this manner seeking cyclic mononucleotide cyclases in plants are used as examples. The strength of this computational approach is enhanced when good homology models can be developed to test the functionality of the predicted centers in silico, which, in turn, increases confidence in the ability of the identified candidates to perform the predicted functions. Computational characterization of moonlighting functional centers is not diagnostic for catalysis but serves as a rapid screening method, and highlights testable targets from a potentially large pool of candidates for subsequent in vitro and in vivo experiments required to confirm the functionality of the predicted moonlighting centers.

  5. Bread dough rheology: Computing with a damage function model

    Science.gov (United States)

    Tanner, Roger I.; Qi, Fuzhong; Dai, Shaocong

    2015-01-01

    We describe an improved damage function model for bread dough rheology. The model has relatively few parameters, all of which can easily be found from simple experiments. Small deformations in the linear region are described by a gel-like power-law memory function. A set of large non-reversing deformations - stress relaxation after a step of shear, steady shearing and elongation beginning from rest, and biaxial stretching - is used to test the model. With the introduction of a revised strain measure which includes a Mooney-Rivlin term, all of these motions can be well described by the damage function described in previous papers. For reversing step strains, larger-amplitude oscillatory shearing and recoil, reasonable predictions have been found. The numerical methods used are discussed and we give some examples.

  6. Modeling the microstructure of surface by applying BRDF function

    Science.gov (United States)

    Plachta, Kamil

    2017-06-01

    The paper presents the modeling of surface microstructure using the bidirectional reflectance distribution function (BRDF). This function contains full information about the reflectance properties of flat surfaces - it is possible to determine the share of the specular, directional and diffuse components in the reflected luminous flux. The software is based on the author's algorithm, which uses selected elements of these function models and allows the share of each component to be determined. Based on the obtained data, the surface microstructure of each material can be modeled, which allows the properties of these materials to be determined. The concentrator directs the reflected solar radiation onto the photovoltaic surface, increasing at the same time the value of the incident luminous flux. The paper presents an analysis of selected materials that can be used to construct the solar concentrator system. The use of the concentrator increases the power output of the photovoltaic system by up to 17% compared to the standard solution.
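    As background, the BRDF is defined as the ratio of reflected radiance to incident irradiance (standard definition; the decomposition weights below are only an illustrative way of expressing the shares discussed in the abstract):

        \[
          f_r(\omega_i, \omega_o) \;=\; \frac{dL_o(\omega_o)}{L_i(\omega_i)\cos\theta_i\, d\omega_i},
          \qquad
          f_r \;\approx\; \frac{k_d}{\pi} + k_s\, f_{\mathrm{spec}}(\omega_i,\omega_o) + k_g\, f_{\mathrm{dir}}(\omega_i,\omega_o),
        \]

    so fitting a measured BRDF yields the shares $k_d$, $k_s$ and $k_g$ of the diffuse, specular and directional components of the reflected flux.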

  7. Stability of cylindrical plasma in the Bessel function model

    International Nuclear Information System (INIS)

    Yamagishi, T.; Gimblett, C.G.

    1988-01-01

    The stability of free boundary ideal and tearing modes in a cylindrical plasma is studied by examining the discontinuity (Δ') of the helical flux function given by the force-free Bessel function model at the singular surface. The m = 0 and m = 1 free boundary tearing modes become strongly unstable when the singular surface is just inside the plasma boundary, for a wide range of longitudinal wave numbers. (author)

  8. Four point functions in the SL(2,R) WZW model

    Energy Technology Data Exchange (ETDEWEB)

    Minces, Pablo [Instituto de Astronomia y Fisica del Espacio (IAFE), C.C. 67 Suc. 28, 1428 Buenos Aires (Argentina)]. E-mail: minces@iafe.uba.ar; Nunez, Carmen [Instituto de Astronomia y Fisica del Espacio (IAFE), C.C. 67 Suc. 28, 1428 Buenos Aires (Argentina) and Physics Department, University of Buenos Aires, Ciudad Universitaria, Pab. I, 1428 Buenos Aires (Argentina)]. E-mail: carmen@iafe.uba.ar

    2007-04-19

    We consider winding conserving four point functions in the SL(2,R) WZW model for states in arbitrary spectral flow sectors. We compute the leading order contribution to the expansion of the amplitudes in powers of the cross ratio of the four points on the worldsheet, both in the m- and x-basis, with at least one state in the spectral flow image of the highest weight discrete representation. We also perform certain consistency checks on the winding conserving three point functions.

  9. Four point functions in the SL(2,R) WZW model

    International Nuclear Information System (INIS)

    Minces, Pablo; Nunez, Carmen

    2007-01-01

    We consider winding conserving four point functions in the SL(2,R) WZW model for states in arbitrary spectral flow sectors. We compute the leading order contribution to the expansion of the amplitudes in powers of the cross ratio of the four points on the worldsheet, both in the m- and x-basis, with at least one state in the spectral flow image of the highest weight discrete representation. We also perform certain consistency checks on the winding conserving three point functions.

  10. Applying Functional Modeling for Accident Management of Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Lind, Morten; Zhang Xinxin [Harbin Engineering University, Harbin (China)]

    2014-08-15

    The paper investigates applications of functional modelling for accident management in complex industrial plants, with special reference to nuclear power production. Main applications for information sharing among decision makers and for decision support are identified. An overview of Multilevel Flow Modeling is given, a detailed presentation of the foundational means-end concepts is presented, and the conditions for their proper use in modelling accidents are identified. It is shown that Multilevel Flow Modeling can be used for modelling and reasoning about design basis accidents. Its possible role for information sharing and decision support in accidents beyond the design basis is also indicated. A modelling example demonstrating the application of Multilevel Flow Modelling and reasoning for a PWR LOCA is presented.

  11. Using special functions to model the propagation of airborne diseases

    Science.gov (United States)

    Bolaños, Daniela

    2014-06-01

    Some special functions of mathematical physics are used to obtain a mathematical model of the propagation of airborne diseases. In particular we study the propagation of tuberculosis in closed rooms and we model the propagation using the error function and the Bessel function. In the model, infected individuals emit pathogens into the environment and these infect other individuals who absorb them. The evolution in time of the concentration of pathogens in the environment is computed in terms of error functions. The evolution in time of the number of susceptible individuals is expressed by a differential equation that contains the error function, and it is solved numerically for different parametric simulations. The evolution in time of the number of infected individuals is plotted for each numerical simulation. On the other hand, the spatial distribution of the pathogen around the source of infection is represented by the Bessel function K0. The spatial and temporal distribution of the number of infected individuals is computed and plotted for some numerical simulations. All computations were made using computer algebra software, specifically Maple. It is expected that the analytical results we obtained will allow the design of treatment rooms and ventilation systems that reduce the risk of spread of tuberculosis.
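    The special functions involved arise from standard diffusion solutions; two generic examples of the kind referred to (not necessarily the paper's exact boundary conditions) are

        \[
          C(x,t) \;=\; C_0\, \operatorname{erfc}\!\left(\frac{x}{2\sqrt{Dt}}\right),
          \qquad
          C_{\mathrm{ss}}(r) \;\propto\; K_0\!\left(r\sqrt{\lambda/D}\right),
        \]

    the first giving the time evolution of the pathogen concentration at distance $x$ from a plane source switched on at $t = 0$, the second the steady-state radial profile around a point source when pathogens are removed (by ventilation or die-off) at rate $\lambda$, with $D$ the diffusion coefficient.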

  12. Enabling Cross-Discipline Collaboration Via a Functional Data Model

    Science.gov (United States)

    Lindholm, D. M.; Wilson, A.; Baltzer, T.

    2016-12-01

    Many research disciplines have very specialized data models that are used to express the detailed semantics that are meaningful to that community and easily utilized by their data analysis tools. While invaluable to members of that community, such expressive data structures and metadata are of little value to potential collaborators from other scientific disciplines. Many data interoperability efforts focus on the difficult task of computationally mapping concepts from one domain to another to facilitate discovery and use of data. Although these efforts are important and promising, we have found that a great deal of discovery and dataset understanding still happens at the level of less formal, personal communication. However, a significant barrier to inter-disciplinary data sharing that remains is one of data access. Scientists and data analysts continue to spend inordinate amounts of time simply trying to get data into their analysis tools. Providing data in a standard file format is often not sufficient since data can be structured in many ways. Adhering to more explicit community standards for data structure and metadata does little to help those in other communities. The Functional Data Model specializes the Relational Data Model (used by many database systems) by defining relations as functions between independent (domain) and dependent (codomain) variables. Given that arrays of data in many scientific data formats generally represent functionally related parameters (e.g. temperature as a function of space and time), the Functional Data Model is quite relevant for these datasets as well. The LaTiS software framework implements the Functional Data Model and provides a mechanism to expose an existing data source as a LaTiS dataset. LaTiS datasets can be manipulated using a Functional Algebra and output in any number of formats. LASP has successfully used the Functional Data Model and its implementation in the LaTiS software framework to bridge the gap between
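    To make the functional view concrete, a toy Python sketch follows. It is not the LaTiS API; the class and method names are invented for illustration, and it only shows the core idea of treating a dataset as a function from a domain (time) to a codomain (measured parameters), with one simple 'functional algebra' operation.

        from dataclasses import dataclass
        from typing import Callable, Dict, List, Tuple

        @dataclass
        class FunctionalDataset:
            # A dataset is a function: domain value -> tuple of dependent values.
            domain: List[float]                      # e.g. time stamps
            mapping: Dict[float, Tuple[float, ...]]  # time -> (temperature, irradiance, ...)

            def __call__(self, t: float) -> Tuple[float, ...]:
                return self.mapping[t]

            def select(self, keep: Callable[[float], bool]) -> "FunctionalDataset":
                # A minimal algebra operation: restrict the domain (a time subset).
                kept = [t for t in self.domain if keep(t)]
                return FunctionalDataset(kept, {t: self.mapping[t] for t in kept})

        # Temperature and irradiance as a function of time.
        ds = FunctionalDataset(
            domain=[0.0, 1.0, 2.0, 3.0],
            mapping={0.0: (281.2, 130.0), 1.0: (282.0, 250.0),
                     2.0: (283.1, 410.0), 3.0: (282.7, 380.0)},
        )
        print(ds.select(lambda t: t >= 2.0)(2.0))  # (283.1, 410.0)

    The same functional description applies whether the underlying source is a database table, a spreadsheet or a binary file, which is what lets a framework of this kind expose heterogeneous sources through one interface.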

  13. Optimal hemodynamic response model for functional near-infrared spectroscopy

    Directory of Open Access Journals (Sweden)

    Muhammad Ahmad Kamran

    2015-06-01

    Functional near-infrared spectroscopy (fNIRS) is an emerging non-invasive brain imaging technique that measures brain activity by means of near-infrared light of 650-950 nm wavelengths. The cortical hemodynamic response (HR) differs in attributes at different brain regions and on repetition of trials, even if the experimental paradigm is kept exactly the same. Therefore, an HR model that can estimate such variations in the response is the objective of this research. The canonical hemodynamic response function (cHRF) is modeled by using two Gamma functions with six unknown parameters. The HRF model is supposed to be a linear combination of the HRF, baseline and physiological noises (the amplitudes and frequencies of the physiological noises are supposed to be unknown). An objective function is developed as the square of the residuals with constraints on twelve free parameters. The formulated problem is solved by using an iterative optimization algorithm to estimate the unknown parameters in the model. Inter-subject variations in the HRF and physiological noises have been estimated for better cortical functional maps. The accuracy of the algorithm has been verified using ten real and fifteen simulated data sets. Ten healthy subjects participated in the experiment and their HRFs for finger-tapping tasks have been estimated and analyzed. The statistical significance of the estimated activity strength parameters has been verified by employing statistical analysis, i.e., t-value > t_critical and p-value < 0.05.
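    The two-Gamma parametrization referred to above is commonly written as follows (a generic form with six free parameters; the paper's treatment of baseline and physiological noise is separate):

        \[
          h(t) \;=\; A\left[
            \frac{t^{\alpha_1 - 1}\,\beta_1^{\alpha_1}\, e^{-\beta_1 t}}{\Gamma(\alpha_1)}
            \;-\; c\,
            \frac{t^{\alpha_2 - 1}\,\beta_2^{\alpha_2}\, e^{-\beta_2 t}}{\Gamma(\alpha_2)}
          \right],
        \]

    where the first Gamma function models the main peak, the second the post-stimulus undershoot, and $(A, \alpha_1, \beta_1, \alpha_2, \beta_2, c)$ are the six parameters to be estimated from the data.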

  14. Using Lambert W function and error function to model phase change on microfluidics

    Science.gov (United States)

    Bermudez Garcia, Anderson

    2014-05-01

    Solidification and melting on microfluidics are modeled and solved using the Lambert W function and error functions. The models are formulated using the heat diffusion equation. The generic case posed is the melting of a slab with time-dependent surface temperature, with a micro- or nanofluid liquid phase. Initially the solid slab is at the melting temperature. One face of the slab is then held at a temperature greater than the melting point and varying in time. The Lambert W function and the error function are applied via Maple to obtain the analytic solution: the evolution of the liquid-solid interface front is computed analytically and the corresponding melting time of the slab is determined. The analytical results are expected to be useful for food engineering, cooking engineering, pharmaceutical engineering, nano-engineering and biomedical engineering.
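    For the baseline case of a constant surface temperature (the classical Neumann solution, which the time-dependent case generalizes), the melt front position and its growth constant are

        \[
          X(t) \;=\; 2\lambda\sqrt{\alpha t},
          \qquad
          \lambda\, e^{\lambda^2} \operatorname{erf}(\lambda) \;=\; \frac{\mathrm{Ste}}{\sqrt{\pi}},
          \qquad
          \mathrm{Ste} \;=\; \frac{c_p\,(T_s - T_m)}{L},
        \]

    where $\alpha$ is the thermal diffusivity of the liquid phase, $T_s$ the imposed surface temperature, $T_m$ the melting temperature and $L$ the latent heat; the transcendental equation for $\lambda$ is where the error function enters, and other boundary conditions lead to equations solvable with the Lambert W function.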

  15. Functional dynamic factor models with application to yield curve forecasting

    KAUST Repository

    Hays, Spencer

    2012-09-01

    Accurate forecasting of zero coupon bond yields for a continuum of maturities is paramount to bond portfolio management and derivative security pricing. Yet a universal model for yield curve forecasting has been elusive, and prior attempts often resulted in a trade-off between goodness of fit and consistency with economic theory. To address this, herein we propose a novel formulation which connects the dynamic factor model (DFM) framework with concepts from functional data analysis: a DFM with functional factor loading curves. This results in a model capable of forecasting functional time series. Further, in the yield curve context we show that the model retains economic interpretation. Model estimation is achieved through an expectation-maximization algorithm, where the time series parameters and factor loading curves are simultaneously estimated in a single step. Efficient computing is implemented and a data-driven smoothing parameter is nicely incorporated. We show that our model performs very well on forecasting actual yield data compared with existing approaches, especially in regard to profit-based assessment for an innovative trading exercise. We further illustrate the viability of our model to applications outside of yield forecasting.
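    In generic notation (not necessarily the authors' exact specification), a dynamic factor model with functional factor loadings for the yield curve takes the form

        \[
          y_t(\tau) \;=\; \sum_{k=1}^{K} f_k(\tau)\,\beta_{t,k} + \varepsilon_t(\tau),
          \qquad
          \boldsymbol\beta_t \;=\; A\,\boldsymbol\beta_{t-1} + \boldsymbol\eta_t,
        \]

    where $y_t(\tau)$ is the yield at maturity $\tau$ on day $t$, the smooth loading curves $f_k(\tau)$ play the role of the functional factor loadings, and forecasting the factor vector $\boldsymbol\beta_{t+h}$ produces a forecast of the entire curve.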

  16. Collapse of the wave function models, ontology, origin, and implications

    CERN Document Server

    2018-01-01

    This is the first single volume about the collapse theories of quantum mechanics, which is becoming a very active field of research in both physics and philosophy. In standard quantum mechanics, it is postulated that when the wave function of a quantum system is measured, it no longer follows the Schrödinger equation, but instantaneously and randomly collapses to one of the wave functions that correspond to definite measurement results. However, why and how a definite measurement result appears is unknown. A promising solution to this problem are collapse theories in which the collapse of the wave function is spontaneous and dynamical. Chapters written by distinguished physicists and philosophers of physics discuss the origin and implications of wave-function collapse, the controversies around collapse models and their ontologies, and new arguments for the reality of wave function collapse. This is an invaluable resource for students and researchers interested in the philosophy of physics and foundations of ...

  17. DEFINE: A Service-Oriented Dynamically Enabling Function Model

    Directory of Open Access Journals (Sweden)

    Tan Wei-Yi

    2017-01-01

    In this paper, we introduce an innovative Dynamically Enable Function In Network Equipment (DEFINE to allow tenant get the network service quickly. First, DEFINE decouples an application into different functional components, and connects these function components in a reconfigurable method. Second, DEFINE provides a programmable interface to the third party, who can develop their own processing modules according to their own needs. To verify the effectiveness of this model, we set up an evaluating network with a FPGA-based OpenFlow switch prototype, and deployed several applications on it. Our results show that DEFINE has excellent flexibility and performance.

  18. Green function simulation of Hamiltonian lattice models with stochastic reconfiguration

    International Nuclear Information System (INIS)

    Beccaria, M.

    2000-01-01

    We apply a recently proposed Green function Monte Carlo procedure to the study of Hamiltonian lattice gauge theories. This class of algorithms computes quantum vacuum expectation values by averaging over a set of suitably weighted random walkers. By means of a procedure called stochastic reconfiguration, the long-standing problem of keeping the walker population fixed without a priori knowledge of the ground state is completely solved. In the U(1)_2 model, which we choose as our theoretical laboratory, we evaluate the mean plaquette and the vacuum energy per plaquette. We find good agreement with previous works using model-dependent guiding functions for the random walkers. (orig.)
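    A minimal Python sketch of the reconfiguration step is given below; the function and variable names are invented, and it only illustrates the generic idea of resampling a fixed-size walker population in proportion to its weights, not the specific scheme or observables of the paper.

        import numpy as np

        def stochastic_reconfiguration(configs, weights, rng):
            # Resample the walkers in proportion to their weights, then give every
            # surviving walker the average weight so the population size stays fixed
            # and the overall normalization is preserved.
            n = len(weights)
            p = np.asarray(weights, dtype=float)
            p = p / p.sum()
            idx = rng.choice(n, size=n, p=p)   # heavy walkers are duplicated, light ones tend to vanish
            return configs[idx].copy(), np.full(n, np.mean(weights))

        # Usage with dummy data: 100 scalar "configurations" and random weights.
        rng = np.random.default_rng(0)
        configs = rng.normal(size=(100, 1))
        weights = rng.exponential(size=100)
        configs, weights = stochastic_reconfiguration(configs, weights, rng)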

  19. Hypnosis as a model of functional neurologic disorders.

    Science.gov (United States)

    Deeley, Q

    2016-01-01

    In the 19th century it was recognized that neurologic symptoms could be caused by "morbid ideation" as well as organic lesions. The subsequent observation that hysteric (now called "functional") symptoms could be produced and removed by hypnotic suggestion led Charcot to hypothesize that suggestion mediated the effects of ideas on hysteric symptoms through as yet unknown effects on brain activity. The advent of neuroimaging 100 years later revealed strikingly similar neural correlates in experiments matching functional symptoms with clinical analogs created by suggestion. Integrative models of suggested and functional symptoms regard these alterations in brain function as the endpoint of a broader set of changes in information processing due to suggestion. These accounts consider that suggestions alter experience by mobilizing representations from memory systems, and altering causal attributions, during preconscious processing which alters the content of what is provided to our highly edited subjective version of the world. Hypnosis as a model for functional symptoms draws attention to how radical alterations in experience and behavior can conform to the content of mental representations through effects on cognition and brain function. Experimental study of functional symptoms and their suggested counterparts in hypnosis reveals the distinct and shared processes through which this can occur. © 2016 Elsevier B.V. All rights reserved.

  20. Function and Innervation of the Locus Ceruleus in a Macaque Model of Functional Hypothalamic Amenorrhea

    OpenAIRE

    Bethea, Cynthia L; Kim, Aaron; Cameron, Judy L

    2012-01-01

    A body of knowledge implicates an increase in output from the locus ceruleus (LC) during stress. We questioned the innervation and function of the LC in our macaque model of Functional Hypothalamic Amenorrhea, also known as Stress-Induced Amenorrhea. Cohorts of macaques were initially characterized as highly stress resilient (HSR) or stress-sensitive (SS) based upon the presence or absence of ovulation during a protocol involving 2 menstrual cycles with psychosocial and metabolic stress. Afte...

  1. Descriptions and models of safety functions - a prestudy

    International Nuclear Information System (INIS)

    Harms-Ringdahl, L.

    1999-09-01

    A study has been made focusing on different theories and applications concerning 'safety functions' and 'barriers'. In this report, a safety function is defined as a technical or organisational function with the aim of reducing the probability and/or consequences associated with a hazard. The study contains a limited review of practice and theories related to safety, with a focus on applications from nuclear and industrial safety, and is based on a literature review and interviews. A summary has been made of definitions and terminology, which shows a large variation: e.g., 'barrier' can have a precise physical and technical meaning, or it can include human, technical and organisational elements. Only a few theoretical models describing safety functions have been found. One section of the report summarises problems related to safety issues and procedures; they concern errors in procedure design and user compliance. A proposal for describing and structuring safety functions has been made. Dimensions in a description could be degree of abstraction, systems level, the different parts of the function, etc. A model for safety functions has been proposed, which includes the division of a safety function into a number of connected 'safety function elements'. One conclusion is that there is a potential for improving theories and tools for safety work and procedures. Safety function could be a useful concept in such a development, and its advantages and disadvantages are discussed. If further work is done, it is recommended that it combine theoretical analysis and case studies

  2. Structure, function, and behaviour of computational models in systems biology.

    Science.gov (United States)

    Knüpfer, Christian; Beckstein, Clemens; Dittrich, Peter; Le Novère, Nicolas

    2013-05-31

    Systems Biology develops computational models in order to understand biological phenomena. The increasing number and complexity of such "bio-models" necessitate computer support for the overall modelling task. Computer-aided modelling has to be based on a formal semantic description of bio-models. But, even if computational bio-models themselves are represented precisely in terms of mathematical expressions their full meaning is not yet formally specified and only described in natural language. We present a conceptual framework - the meaning facets - which can be used to rigorously specify the semantics of bio-models. A bio-model has a dual interpretation: On the one hand it is a mathematical expression which can be used in computational simulations (intrinsic meaning). On the other hand the model is related to the biological reality (extrinsic meaning). We show that in both cases this interpretation should be performed from three perspectives: the meaning of the model's components (structure), the meaning of the model's intended use (function), and the meaning of the model's dynamics (behaviour). In order to demonstrate the strengths of the meaning facets framework we apply it to two semantically related models of the cell cycle. Thereby, we make use of existing approaches for computer representation of bio-models as much as possible and sketch the missing pieces. The meaning facets framework provides a systematic in-depth approach to the semantics of bio-models. It can serve two important purposes: First, it specifies and structures the information which biologists have to take into account if they build, use and exchange models. Secondly, because it can be formalised, the framework is a solid foundation for any sort of computer support in bio-modelling. The proposed conceptual framework establishes a new methodology for modelling in Systems Biology and constitutes a basis for computer-aided collaborative research.

  3. Functional results-oriented healthcare leadership: a novel leadership model.

    Science.gov (United States)

    Al-Touby, Salem Said

    2012-03-01

    This article modifies the traditional functional leadership model to accommodate contemporary needs in healthcare leadership based on two findings. First, the article argues that ideal healthcare leadership should emphasize the outcomes of patient care more than the processes and structures used to deliver such care; and second, that leadership must strive to attain effectiveness of care provision and not merely target the attractive option of efficient operations. Based on these premises, the paper reviews the traditional Functional Leadership Model and the three elements that define the type of leadership an organization has, namely the tasks, the individuals, and the team. The article argues that concentrating on any one of these elements is not ideal and proposes adding a new element to the model to construct a novel Functional Results-Oriented healthcare leadership model. The recommended Functional Results-Oriented leadership model places the results element on top of the other three elements so that every effort in healthcare leadership is directed towards attaining excellent patient outcomes.

  4. Cohesive fracture model for functionally graded fiber reinforced concrete

    International Nuclear Information System (INIS)

    Park, Kyoungsoo; Paulino, Glaucio H.; Roesler, Jeffery

    2010-01-01

    A simple, effective, and practical constitutive model for cohesive fracture of fiber reinforced concrete is proposed by differentiating the aggregate bridging zone and the fiber bridging zone. The aggregate bridging zone is related to the total fracture energy of plain concrete, while the fiber bridging zone is associated with the difference between the total fracture energy of fiber reinforced concrete and the total fracture energy of plain concrete. The cohesive fracture model is defined by experimental fracture parameters, which are obtained through three-point bending and split tensile tests. As expected, the model describes fracture behavior of plain concrete beams. In addition, it predicts the fracture behavior of either fiber reinforced concrete beams or a combination of plain and fiber reinforced concrete functionally layered in a single beam specimen. The validated model is also applied to investigate continuously, functionally graded fiber reinforced concrete composites.

  5. Regional differences in prediction models of lung function in Germany

    Directory of Open Access Journals (Sweden)

    Schäper Christoph

    2010-04-01

    Background Little is known about the influencing potential of specific characteristics on lung function in different populations. The aim of this analysis was to determine whether lung function determinants differ between subpopulations within Germany and whether prediction equations developed for one subpopulation are also adequate for another subpopulation. Methods Within three studies (KORA C, SHIP-I, ECRHS-I) in different areas of Germany, 4059 adults performed lung function tests. The available data consisted of forced expiratory volume in one second (FEV1), forced vital capacity (FVC) and peak expiratory flow rate (PEF). For each study, multivariate regression models were developed to predict lung function, and Bland-Altman plots were established to evaluate the agreement between predicted and measured values. Results The final regression equations for FEV1 and FVC showed adjusted r-square values between 0.65 and 0.75, and for PEF they were between 0.46 and 0.61. In all studies gender, age, height and pack-years were significant determinants, each with a similar effect size. Regarding other predictors there were some, although not statistically significant, differences between the studies. Bland-Altman plots indicated that the regression models for each individual study adequately predict medium (i.e. normal, but not extremely high or low) lung function values in the whole study population. Conclusions Simple models with gender, age and height explain a substantial part of lung function variance, whereas further determinants add less than 5% to the total explained r-squared, at least for FEV1 and FVC. Thus, for different adult subpopulations of Germany one simple model for each lung function measure is still sufficient.
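
    As a concrete illustration of the kind of prediction equation described above (lung function regressed on gender, age, height and pack-years), the following sketch fits an ordinary least-squares model to synthetic data. The data, coefficients and variable coding are invented for illustration only and are not the study's estimates.

```python
# Minimal sketch (not the study's actual model): an OLS prediction equation of the
# form described in the abstract, FEV1 ~ gender + age + height + pack-years,
# fitted to synthetic data purely to illustrate the model structure.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
gender = rng.integers(0, 2, n)          # 0 = female, 1 = male (hypothetical coding)
age = rng.uniform(25, 75, n)            # years
height = rng.uniform(150, 195, n)       # cm
pack_years = rng.exponential(10, n)

# Synthetic FEV1 (litres) with plausible signs: higher for males/taller, lower with age/smoking
fev1 = 0.4*gender - 0.03*age + 0.035*height - 0.01*pack_years - 1.5 + rng.normal(0, 0.4, n)

X = sm.add_constant(np.column_stack([gender, age, height, pack_years]))
model = sm.OLS(fev1, X).fit()
print(model.params)        # prediction-equation coefficients
print(model.rsquared_adj)  # adjusted r-square, as reported per study in the abstract
```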

  6. Lyapunov functions for a dengue disease transmission model

    International Nuclear Information System (INIS)

    Tewa, Jean Jules; Dimi, Jean Luc; Bowong, Samuel

    2009-01-01

    In this paper, we study a model for the dynamics of dengue fever when only one type of virus is present. For this model, Lyapunov functions are used to show that when the basic reproduction ratio is less than or equal to one, the disease-free equilibrium is globally asymptotically stable, and when it is greater than one there is an endemic equilibrium which is also globally asymptotically stable.

  7. Lyapunov functions for a dengue disease transmission model

    Energy Technology Data Exchange (ETDEWEB)

    Tewa, Jean Jules [Department of Mathematics, Faculty of Science, University of Yaounde I, P.O. Box 812, Yaounde (Cameroon)], E-mail: tewa@univ-metz.fr; Dimi, Jean Luc [Department of Mathematics, Faculty of Science, University Marien Ngouabi, P.O. Box 69, Brazzaville (Congo, The Democratic Republic of the)], E-mail: jldimi@yahoo.fr; Bowong, Samuel [Department of Mathematics and Computer Science, Faculty of Science, University of Douala, P.O. Box 24157, Douala (Cameroon)], E-mail: samuelbowong@yahoo.fr

    2009-01-30

    In this paper, we study a model for the dynamics of dengue fever when only one type of virus is present. For this model, Lyapunov functions are used to show that when the basic reproduction ratio is less than or equal to one, the disease-free equilibrium is globally asymptotically stable, and when it is greater than one there is an endemic equilibrium which is also globally asymptotically stable.

  8. Towards aspect-oriented functional--structural plant modelling.

    Science.gov (United States)

    Cieslak, Mikolaj; Seleznyova, Alla N; Prusinkiewicz, Przemyslaw; Hanan, Jim

    2011-10-01

    Functional-structural plant models (FSPMs) are used to integrate knowledge and test hypotheses of plant behaviour, and to aid in the development of decision support systems. A significant amount of effort is being put into providing a sound methodology for building them. Standard techniques, such as procedural or object-oriented programming, are not suited for clearly separating aspects of plant function that criss-cross between different components of plant structure, which makes it difficult to reuse and share their implementations. The aim of this paper is to present an aspect-oriented programming approach that helps to overcome this difficulty. The L-system-based plant modelling language L+C was used to develop an aspect-oriented approach to plant modelling based on multi-modules. Each element of the plant structure was represented by a sequence of L-system modules (rather than a single module), with each module representing an aspect of the element's function. Separate sets of productions were used for modelling each aspect, with context-sensitive rules facilitated by local lists of modules to consider/ignore. Aspect weaving or communication between aspects was made possible through the use of pseudo-L-systems, where the strict-predecessor of a production rule was specified as a multi-module. The new approach was used to integrate previously modelled aspects of carbon dynamics, apical dominance and biomechanics with a model of a developing kiwifruit shoot. These aspects were specified independently and their implementation was based on source code provided by the original authors without major changes. This new aspect-oriented approach to plant modelling is well suited for studying complex phenomena in plant science, because it can be used to integrate separate models of individual aspects of plant development and function, both previously constructed and new, into clearly organized, comprehensive FSPMs. In a future work, this approach could be further

  9. Regulator Loss Functions and Hierarchical Modeling for Safety Decision Making.

    Science.gov (United States)

    Hatfield, Laura A; Baugh, Christine M; Azzone, Vanessa; Normand, Sharon-Lise T

    2017-07-01

    Regulators must act to protect the public when evidence indicates safety problems with medical devices. This requires complex tradeoffs among risks and benefits, which conventional safety surveillance methods do not incorporate. To combine explicit regulator loss functions with statistical evidence on medical device safety signals to improve decision making. In the Hospital Cost and Utilization Project National Inpatient Sample, we select pediatric inpatient admissions and identify adverse medical device events (AMDEs). We fit hierarchical Bayesian models to the annual hospital-level AMDE rates, accounting for patient and hospital characteristics. These models produce expected AMDE rates (a safety target), against which we compare the observed rates in a test year to compute a safety signal. We specify a set of loss functions that quantify the costs and benefits of each action as a function of the safety signal. We integrate the loss functions over the posterior distribution of the safety signal to obtain the posterior (Bayes) risk; the preferred action has the smallest Bayes risk. Using simulation and an analysis of AMDE data, we compare our minimum-risk decisions to a conventional Z score approach for classifying safety signals. The 2 rules produced different actions for nearly half of hospitals (45%). In the simulation, decisions that minimize Bayes risk outperform Z score-based decisions, even when the loss functions or hierarchical models are misspecified. Our method is sensitive to the choice of loss functions; eliciting quantitative inputs to the loss functions from regulators is challenging. A decision-theoretic approach to acting on safety signals is potentially promising but requires careful specification of loss functions in consultation with subject matter experts.
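
    The decision rule sketched below illustrates the core idea of integrating loss functions over the posterior of a safety signal and choosing the action with the smallest posterior (Bayes) risk. The posterior, the two actions and the loss values are hypothetical placeholders, not the quantities elicited in the study.

```python
# Illustrative sketch of the decision rule described above: integrate regulator loss
# functions over the posterior of a safety signal and choose the action with the
# smallest posterior (Bayes) risk. The loss functions and posterior here are
# hypothetical placeholders, not those used in the study.
import numpy as np

rng = np.random.default_rng(1)
# Posterior draws of the safety signal (observed minus expected AMDE rate);
# in the paper these come from a hierarchical Bayesian model.
signal_draws = rng.normal(loc=0.8, scale=0.5, size=10_000)

def loss(action, s):
    """Hypothetical loss: cost of acting when the signal is small vs. ignoring a large signal."""
    if action == "act":
        return np.where(s > 0, 1.0, 5.0)     # acting is cheap if a real problem exists
    return np.where(s > 0, 10.0 * s, 0.0)    # ignoring a real signal is costly

bayes_risk = {a: loss(a, signal_draws).mean() for a in ("act", "do_nothing")}
best_action = min(bayes_risk, key=bayes_risk.get)
print(bayes_risk, "->", best_action)
```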

  10. Bifurcations in a discrete time model composed of Beverton-Holt function and Ricker function.

    Science.gov (United States)

    Shang, Jin; Li, Bingtuan; Barnard, Michael R

    2015-05-01

    We provide rigorous analysis for a discrete-time model composed of the Ricker function and Beverton-Holt function. This model was proposed by Lewis and Li [Bull. Math. Biol. 74 (2012) 2383-2402] in the study of a population in which reproduction occurs at a discrete instant of time whereas death and competition take place continuously during the season. We show analytically that there exists a period-doubling bifurcation curve in the model. The bifurcation curve divides the parameter space into the region of stability and the region of instability. We demonstrate through numerical bifurcation diagrams that the regions of periodic cycles are intermixed with the regions of chaos. We also study the global stability of the model. Copyright © 2015 Elsevier Inc. All rights reserved.
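
    The following sketch iterates one plausible composition of the Ricker and Beverton-Holt maps and records its attractor across a range of growth parameters, illustrating the period-doubling route discussed above. The exact composition and parameter values used by Lewis and Li may differ; this is only an illustration of the qualitative dynamics.

```python
# A hedged sketch of a discrete-time map composed of Ricker and Beverton-Holt pieces,
# iterated across growth rates to expose period doubling; the exact composition and
# parameterization in Lewis & Li's model may differ from this illustration.
import numpy as np
import matplotlib.pyplot as plt

def ricker(x, r):            # within-season reproduction at a discrete instant
    return x * np.exp(r * (1.0 - x))

def beverton_holt(x, a, b):  # continuous death/competition summarized over the season
    return a * x / (1.0 + b * x)

def step(x, r, a=1.0, b=0.5):
    return beverton_holt(ricker(x, r), a, b)

rs = np.linspace(1.5, 6.0, 600)
plt.figure(figsize=(6, 4))
for r in rs:
    x = 0.5
    for _ in range(500):          # discard transient
        x = step(x, r)
    orbit = [x := step(x, r) for _ in range(100)]   # record the attractor
    plt.plot([r] * len(orbit), orbit, ",k", alpha=0.3)
plt.xlabel("Ricker growth parameter r")
plt.ylabel("attractor of the composed map")
plt.title("Period doubling in a Beverton-Holt ∘ Ricker map (illustrative)")
plt.show()
```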

  11. Hierarchical functional model for automobile development; Jidosha kaihatsu no tame no kaisogata kino model

    Energy Technology Data Exchange (ETDEWEB)

    Sumida, S [U-shin Ltd., Tokyo (Japan); Nagamatsu, M; Maruyama, K [Hokkaido Institute of Technology, Sapporo (Japan); Hiramatsu, S [Mazda Motor Corp., Hiroshima (Japan)

    1997-10-01

    A new approach to modeling is put forward in order to compose the virtual prototype which is indispensable for fully computer-integrated concurrent development of automobile products. A basic concept of the hierarchical functional model is proposed as the concrete form of this new modeling technology. This model is used mainly for explaining and simulating the functions and efficiencies of both the parts and the total automobile product. All engineers who engage in the design and development of automobiles can collaborate with one another using this model. Some application examples are shown, and the usefulness of this model is demonstrated. 5 refs., 5 figs.

  12. Comparing Transformation Possibilities of Topological Functioning Model and BPMN in the Context of Model Driven Architecture

    Directory of Open Access Journals (Sweden)

    Solomencevs Artūrs

    2016-05-01

    The approach called “Topological Functioning Model for Software Engineering” (TFM4SE) applies the Topological Functioning Model (TFM) for modelling the business system in the context of Model Driven Architecture. TFM is a mathematically formal computation independent model (CIM). TFM4SE is compared to an approach that uses BPMN as a CIM. The comparison focuses on CIM modelling and on transformation to the UML Sequence diagram on the platform independent (PIM) level. The results show the advantages and drawbacks the formalism of TFM brings into the development.

  13. The functional neuroanatomy of bipolar disorder: a consensus model

    Science.gov (United States)

    Strakowski, Stephen M; Adler, Caleb M; Almeida, Jorge; Altshuler, Lori L; Blumberg, Hilary P; Chang, Kiki D; DelBello, Melissa P; Frangou, Sophia; McIntosh, Andrew; Phillips, Mary L; Sussman, Jessika E; Townsend, Jennifer D

    2013-01-01

    Objectives Functional neuroimaging methods have proliferated in recent years, such that functional magnetic resonance imaging, in particular, is now widely used to study bipolar disorder. However, discrepant findings are common. A workgroup was organized by the Department of Psychiatry, University of Cincinnati (Cincinnati, OH, USA) to develop a consensus functional neuroanatomic model of bipolar I disorder based upon the participants’ work as well as that of others. Methods Representatives from several leading bipolar disorder neuroimaging groups were organized to present an overview of their areas of expertise as well as focused reviews of existing data. The workgroup then developed a consensus model of the functional neuroanatomy of bipolar disorder based upon these data. Results Among the participants, a general consensus emerged that bipolar I disorder arises from abnormalities in the structure and function of key emotional control networks in the human brain. Namely, disruption in early development (e.g., white matter connectivity, prefrontal pruning) within brain networks that modulate emotional behavior leads to decreased connectivity among ventral prefrontal networks and limbic brain regions, especially amygdala. This developmental failure to establish healthy ventral prefrontal–limbic modulation underlies the onset of mania and ultimately, with progressive changes throughout these networks over time and with affective episodes, a bipolar course of illness. Conclusions This model provides a potential substrate to guide future investigations and areas needing additional focus are identified. PMID:22631617

  14. Mass corrections to Green functions in instanton vacuum model

    International Nuclear Information System (INIS)

    Esaibegyan, S.V.; Tamaryan, S.N.

    1987-01-01

    The first nonvanishing mass corrections to the effective Green functions are calculated in the model of an instanton-based vacuum consisting of a superposition of instanton-anti-instanton fluctuations. The meson current correlators are calculated taking these corrections into account; the mass spectrum of the pseudoscalar octet as well as the value of the kaon axial constant are found. 7 refs

  15. How to Maximize the Likelihood Function for a DSGE Model

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller

    This paper extends two optimization routines to deal with objective functions for DSGE models. The optimization routines are i) a version of Simulated Annealing developed by Corana, Marchesi & Ridella (1987), and ii) the evolutionary algorithm CMA-ES developed by Hansen, Müller & Koumoutsakos (2003...
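
    A bare-bones sketch of derivative-free maximization in the spirit of the routines discussed above is given below. It is a plain simulated-annealing loop applied to a toy objective standing in for a DSGE likelihood; it is neither the Corana et al. variant nor CMA-ES.

```python
# A bare-bones simulated-annealing maximizer for a black-box (log-)likelihood,
# included only to illustrate the kind of derivative-free search the paper adapts;
# it is not the Corana et al. algorithm nor CMA-ES, and the objective is a toy stand-in.
import numpy as np

def simulated_annealing(objective, x0, n_iter=20_000, step=0.5, t0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    fx = objective(x)
    best_x, best_f = x.copy(), fx
    for k in range(n_iter):
        temp = t0 * (1.0 - k / n_iter) + 1e-8                    # linear cooling schedule
        cand = x + rng.normal(scale=step, size=x.shape)
        fc = objective(cand)
        if fc > fx or rng.random() < np.exp((fc - fx) / temp):   # accept uphill, sometimes downhill
            x, fx = cand, fc
            if fx > best_f:
                best_x, best_f = x.copy(), fx
    return best_x, best_f

# Toy multimodal "log-likelihood" standing in for a DSGE objective
toy = lambda th: -np.sum((th - 1.0) ** 2) + 0.3 * np.sum(np.cos(5 * th))
print(simulated_annealing(toy, x0=np.zeros(2)))
```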

  16. A review of function modeling : Approaches and applications

    NARCIS (Netherlands)

    Erden, M.S.; Komoto, H.; Van Beek, T.J.; D'Amelio, V.; Echavarria, E.; Tomiyama, T.

    2008-01-01

    This work is aimed at establishing a common frame and understanding of function modeling (FM) for our ongoing research activities. A comparative review of the literature is performed to grasp the various FM approaches with their commonalities and differences. The relations of FM with the research

  17. Gene Discovery and Functional Analyses in the Model Plant Arabidopsis

    DEFF Research Database (Denmark)

    Feng, Cai-ping; Mundy, J.

    2006-01-01

    The present mini-review describes newer methods and strategies, including transposon and T-DNA insertions, TILLING, Deleteagene, and RNA interference, to functionally analyze genes of interest in the model plant Arabidopsis. The relative advantages and disadvantages of the systems are also discus...

  18. From dynamics to structure and function of model biomolecular systems

    NARCIS (Netherlands)

    Fontaine-Vive-Curtaz, F.

    2007-01-01

    The purpose of this thesis was to extend recent works on structure and dynamics of hydrogen bonded crystals to model biomolecular systems and biological processes. The tools that we have used are neutron scattering (NS) and density functional theory (DFT) and force field (FF) based simulation

  19. Density-correlation functions in Calogero-Sutherland models

    International Nuclear Information System (INIS)

    Minahan, J.A.; Polychronakos, A.P.

    1994-01-01

    Using arguments from two-dimensional Yang-Mills theory and the collective coordinate formulation of the Calogero-Sutherland model, we conjecture the dynamical density-correlation function for coupling l and 1/l, where l is an integer. We present overwhelming evidence that the conjecture is indeed correct

  20. Density correlation functions in Calogero-Sutherland models

    CERN Document Server

    Minahan, Joseph A.; Joseph A Minahan; Alexios P Polychronakos

    1994-01-01

    Using arguments from two dimensional Yang-Mills theory and the collective coordinate formulation of the Calogero-Sutherland model, we conjecture the dynamical density correlation function for coupling l and 1/l, where l is an integer. We present overwhelming evidence that the conjecture is indeed correct.

  1. A Multi-Level Model of Moral Functioning Revisited

    Science.gov (United States)

    Reed, Don Collins

    2009-01-01

    The model of moral functioning scaffolded in the 2008 "JME" Special Issue is here revisited in response to three papers criticising that volume. As guest editor of that Special Issue I have formulated the main body of this response, concerning the dynamic systems approach to moral development, the problem of moral relativism and the role of…

  2. Describing a Strongly Correlated Model System with Density Functional Theory.

    Science.gov (United States)

    Kong, Jing; Proynov, Emil; Yu, Jianguo; Pachter, Ruth

    2017-07-06

    The linear chain of hydrogen atoms, a basic prototype for the transition from a metal to a Mott insulator, is studied with a recent density functional theory model functional for nondynamic and strong correlation. The computed cohesive energy curve for the transition agrees well with accurate literature results. The variation of the electronic structure in this transition is characterized with a density functional descriptor that yields the atomic population of effectively localized electrons. These new methods are also applied to the study of the Peierls dimerization of the stretched even-spaced Mott insulator to a chain of H₂ molecules, a different insulator. The transitions among the two insulating states and the metallic state of the hydrogen chain system are depicted in a semiquantitative phase diagram. Overall, we demonstrate the capability of studying strongly correlated materials with a mean-field model at the fundamental level, in contrast to the general pessimism about the feasibility of doing so.

  3. Development on electromagnetic impedance function modeling and its estimation

    Energy Technology Data Exchange (ETDEWEB)

    Sutarno, D., E-mail: Sutarno@fi.itb.ac.id [Earth Physics and Complex System Division Faculty of Mathematics and Natural Sciences Institut Teknologi Bandung (Indonesia)

    2015-09-30

    Today, electromagnetic methods such as magnetotellurics (MT) and controlled-source audio MT (CSAMT) are used in a broad variety of applications. Their usefulness in poor seismic areas and their negligible environmental impact are integral parts of effective exploration at minimum cost. As exploration has been forced into more difficult areas, the importance of MT and CSAMT, in conjunction with other techniques, has tended to grow continuously. However, there are important and difficult problems remaining to be solved concerning our ability to collect, process and interpret MT as well as CSAMT data in complex 3D structural environments. This talk aims at reviewing and discussing recent developments in MT and CSAMT impedance function modeling, as well as some improvements in the estimation procedures for the corresponding impedance functions. In MT impedance modeling, research efforts focus on developing numerical methods for computing the impedance functions of three-dimensional (3-D) earth resistivity models. For that reason, 3-D finite-element numerical modeling of the impedances is developed based on the edge element method. In the CSAMT case, the efforts were focused on accommodating the non-plane-wave problem in the corresponding impedance functions. Concerning estimation of MT and CSAMT impedance functions, research focused on improving the quality of the estimates. To that end, a non-linear regression approach based on robust M-estimators and the Hilbert transform operating on the causal transfer functions was used to deal with outliers (abnormal data) which are frequently superimposed on normal ambient MT and CSAMT noise fields. As validated, the proposed MT impedance modeling method gives acceptable results for standard three-dimensional resistivity models, whilst the full-solution modeling that accommodates the non-plane-wave effect for CSAMT impedances is applied for all measurement zones, including near-, transition

  4. Universality of correlation functions in random matrix models of QCD

    International Nuclear Information System (INIS)

    Jackson, A.D.; Sener, M.K.; Verbaarschot, J.J.M.

    1997-01-01

    We demonstrate the universality of the spectral correlation functions of a QCD inspired random matrix model that consists of a random part having the chiral structure of the QCD Dirac operator and a deterministic part which describes a schematic temperature dependence. We calculate the correlation functions analytically using the technique of Itzykson-Zuber integrals for arbitrary complex supermatrices. An alternative exact calculation for arbitrary matrix size is given for the special case of zero temperature, and we reproduce the well-known Laguerre kernel. At finite temperature, the microscopic limit of the correlation functions are calculated in the saddle-point approximation. The main result of this paper is that the microscopic universality of correlation functions is maintained even though unitary invariance is broken by the addition of a deterministic matrix to the ensemble. (orig.)

  5. Bessel functions in mass action modeling of memories and remembrances

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Walter J. [Department of Molecular and Cell Biology, University of California, Berkeley, CA 94720-3206 (United States); Capolupo, Antonio [Dipartimento di Fisica, E.R. Caianiello Universitá di Salerno, and INFN Gruppo collegato di Salerno, Fisciano 84084 (Italy); Kozma, Robert [Department of Mathematics, Memphis University, Memphis, TN 38152 (United States); Olivares del Campo, Andrés [The Blackett Laboratory, Imperial College London, Prince Consort Road, London SW7 2BZ (United Kingdom); Vitiello, Giuseppe, E-mail: vitiello@sa.infn.it [Dipartimento di Fisica, E.R. Caianiello Universitá di Salerno, and INFN Gruppo collegato di Salerno, Fisciano 84084 (Italy)

    2015-10-02

    Data from experimental observations of a class of neurological processes (Freeman K-sets) present functional distributions reproducing Bessel-function behavior. We model such processes with couples of damped/amplified oscillators which provide a time-dependent representation of the Bessel equation. The root loci of poles and zeros conform to solutions of K-sets. Some light is shed on the problem of filling the gap between cellular-level dynamics and brain functional activity. Breakdown of time-reversal symmetry is related to the thermodynamic features of the cortex. This provides a possible mechanism for deducing the lifetime of recorded memory. - Highlights: • We consider data from observations of impulse responses of the cortex to electric shocks. • These data are fitted by Bessel functions, which may be represented by couples of damped/amplified oscillators. • We study the data by using couples of damped/amplified oscillators. • We discuss the lifetime and other properties of the considered brain processes.

  6. Optimal hemodynamic response model for functional near-infrared spectroscopy.

    Science.gov (United States)

    Kamran, Muhammad A; Jeong, Myung Yung; Mannan, Malik M N

    2015-01-01

    Functional near-infrared spectroscopy (fNIRS) is an emerging non-invasive brain imaging technique that measures brain activity by means of near-infrared light of 650-950 nm wavelengths. The cortical hemodynamic response (HR) differs in its attributes across brain regions and across repetitions of trials, even if the experimental paradigm is kept exactly the same. Therefore, an HR model that can estimate such variations in the response is the objective of this research. The canonical hemodynamic response function (cHRF) is modeled by two Gamma functions with six unknown parameters (four of them modelling the shape, and the other two the scale and baseline, respectively). The measured signal is modeled as a linear combination of the HRF, a baseline, and physiological noises (whose amplitudes and frequencies are also unknown). An objective function is formulated as the square of the residuals, with constraints on the 12 free parameters. The problem is solved with an iterative optimization algorithm that estimates the unknown parameters in the model. Inter-subject variations in the HRF and physiological noises have been estimated to obtain better cortical functional maps. The accuracy of the algorithm has been verified using 10 real and 15 simulated data sets. Ten healthy subjects participated in the experiment and their HRFs for finger-tapping tasks have been estimated and analyzed. The statistical significance of the estimated activity-strength parameters has been verified by statistical analysis (i.e., t-value > t critical and p-value < 0.05).
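
    The double-gamma cHRF mentioned above can be written down compactly; the sketch below uses commonly quoted default shape parameters (assumed values, not the ones estimated in the paper) plus the scale and baseline terms.

```python
# Sketch of a canonical double-gamma HRF of the kind the abstract describes
# (two Gamma functions; four shape parameters plus scale and baseline). The
# default parameter values below are commonly used SPM-style choices and are
# assumptions, not the values estimated in the paper.
import numpy as np
from scipy.stats import gamma

def canonical_hrf(t, a1=6.0, b1=1.0, a2=16.0, b2=1.0, ratio=1/6, scale=1.0, baseline=0.0):
    """Two-Gamma HRF: a peak minus a scaled undershoot, plus scale and baseline."""
    h = gamma.pdf(t, a1, scale=1.0 / b1) - ratio * gamma.pdf(t, a2, scale=1.0 / b2)
    return scale * h + baseline

t = np.arange(0, 30, 0.1)          # seconds
hrf = canonical_hrf(t)
print(t[np.argmax(hrf)])           # time-to-peak, roughly 5 s with these defaults
```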

  7. Production functions for climate policy modeling. An empirical analysis

    International Nuclear Information System (INIS)

    Van der Werf, Edwin

    2008-01-01

    Quantitative models for climate policy modeling differ in the production structure used and in the sizes of the elasticities of substitution. The empirical foundation for both is generally lacking. This paper estimates the parameters of 2-level CES production functions with capital, labour and energy as inputs, and is the first to systematically compare all nesting structures. Using industry-level data from 12 OECD countries, we find that the nesting structure where capital and labour are combined first, fits the data best, but for most countries and industries we cannot reject that all three inputs can be put into one single nest. These two nesting structures are used by most climate models. However, while several climate policy models use a Cobb-Douglas function for (part of the) production function, we reject elasticities equal to one, in favour of considerably smaller values. Finally we find evidence for factor-specific technological change. With lower elasticities and with factor-specific technological change, some climate policy models may find a bigger effect of endogenous technological change on mitigating the costs of climate policy. (author)
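
    A two-level CES production function with the (KL)E nesting that the paper finds to fit best can be sketched as follows; the share parameters and substitution elasticities are arbitrary placeholders rather than the estimated values.

```python
# Illustrative two-level CES production function with capital (K), labour (L) and
# energy (E), nesting K and L first — the structure the paper finds fits the data best.
# Parameter values are arbitrary placeholders, not the estimated elasticities.
import numpy as np

def ces(x1, x2, share, sigma):
    """Standard two-input CES aggregate with substitution elasticity sigma (sigma != 1)."""
    rho = (sigma - 1.0) / sigma
    return (share * x1**rho + (1.0 - share) * x2**rho) ** (1.0 / rho)

def two_level_ces_kl_e(K, L, E, share_outer=0.7, share_inner=0.6,
                       sigma_inner=0.8, sigma_outer=0.5, A=1.0):
    """(KL)E nesting: K and L are combined first, the composite is then nested with E."""
    KL = ces(K, L, share_inner, sigma_inner)
    return A * ces(KL, E, share_outer, sigma_outer)

print(two_level_ces_kl_e(K=10.0, L=20.0, E=5.0))
```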

  8. Rationalisation of distribution functions for models of nanoparticle magnetism

    International Nuclear Information System (INIS)

    El-Hilo, M.; Chantrell, R.W.

    2012-01-01

    A formalism is presented which reconciles the use of different distribution functions of particle diameter in analytical models of the magnetic properties of nanoparticle systems. For the lognormal distribution, a transformation is derived which shows that a distribution of volume fraction transforms into a lognormal distribution of particle number, albeit with a modified median diameter. This transformation resolves an apparent discrepancy reported in Tournus and Tamion [Journal of Magnetism and Magnetic Materials 323 (2011) 1118]. - Highlights: ► We resolve a problem resulting from a misunderstanding of the nature of dispersion functions in models of nanoparticle magnetism. ► The derived transformation between distributions will be of benefit in comparing models and experimental results.
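
    The lognormal transformation described above can be checked numerically: weighting a lognormal number distribution by the particle volume (proportional to D cubed) yields another lognormal with the same width and a median shifted by a factor exp(3 sigma squared). The sketch below, with an invented sigma and median, verifies this generic property; it is not the paper's derivation.

```python
# Numerical check of the kind of transformation the abstract describes: weighting a
# lognormal number distribution of diameter D by D**3 gives another lognormal with
# the same log-width sigma but a log-median shifted by 3*sigma**2.
import numpy as np

sigma = 0.35
d_med_number = 8.0                       # nm, median of the number-weighted distribution (invented)
d = np.linspace(0.5, 60.0, 20000)

def lognorm_pdf(d, median, sigma):
    return np.exp(-((np.log(d) - np.log(median)) ** 2) / (2 * sigma**2)) / (d * sigma * np.sqrt(2 * np.pi))

number_pdf = lognorm_pdf(d, d_med_number, sigma)
volume_pdf = d**3 * number_pdf
volume_pdf /= np.trapz(volume_pdf, d)    # normalize the volume-fraction distribution

# Predicted median of the volume-weighted distribution:
d_med_volume_pred = d_med_number * np.exp(3 * sigma**2)
# Empirical median from the numerically transformed pdf:
cdf = np.cumsum(volume_pdf) * (d[1] - d[0])
d_med_volume_num = d[np.searchsorted(cdf, 0.5)]
print(d_med_volume_pred, d_med_volume_num)   # the two values should agree closely
```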

  9. Parton distribution functions with QED corrections in the valon model

    Science.gov (United States)

    Mottaghizadeh, Marzieh; Taghavi Shahri, Fatemeh; Eslami, Parvin

    2017-10-01

    The parton distribution functions (PDFs) with QED corrections are obtained by solving the QCD ⊗QED DGLAP evolution equations in the framework of the "valon" model at the next-to-leading-order QCD and the leading-order QED approximations. Our results for the PDFs with QED corrections in this phenomenological model are in good agreement with the newly related CT14QED global fits code [Phys. Rev. D 93, 114015 (2016), 10.1103/PhysRevD.93.114015] and APFEL (NNPDF2.3QED) program [Comput. Phys. Commun. 185, 1647 (2014), 10.1016/j.cpc.2014.03.007] in a wide range of x =[10-5,1 ] and Q2=[0.283 ,108] GeV2 . The model calculations agree rather well with those codes. In the latter, we proposed a new method for studying the symmetry breaking of the sea quark distribution functions inside the proton.

  10. Modeling of nanoscale liquid mixture transport by density functional hydrodynamics

    Science.gov (United States)

    Dinariev, Oleg Yu.; Evseev, Nikolay V.

    2017-06-01

    Modeling of multiphase compositional hydrodynamics at nanoscale is performed by means of density functional hydrodynamics (DFH). DFH is the method based on density functional theory and continuum mechanics. This method has been developed by the authors over 20 years and used for modeling in various multiphase hydrodynamic applications. In this paper, DFH was further extended to encompass phenomena inherent in liquids at nanoscale. The new DFH extension is based on the introduction of external potentials for chemical components. These potentials are localized in the vicinity of solid surfaces and take account of the van der Waals forces. A set of numerical examples, including disjoining pressure, film precursors, anomalous rheology, liquid in contact with heterogeneous surface, capillary condensation, and forward and reverse osmosis, is presented to demonstrate modeling capabilities.

  11. Functional Somatic Syndromes: Emerging Biomedical Models and Traditional Chinese Medicine

    Directory of Open Access Journals (Sweden)

    Steven Tan

    2004-01-01

    The so-called functional somatic syndromes comprise a group of disorders that are primarily symptom-based, multisystemic in presentation and probably involve alterations in mind-brain-body interactions. The emerging neurobiological models of allostasis/allostatic load and of the emotional motor system show striking similarities with concepts used by Traditional Chinese Medicine (TCM) to understand the functional somatic disorders and their underlying pathogenesis. These models incorporate a macroscopic perspective, accounting for the toll of acute and chronic traumas, physical and emotional stressors and the complex interactions between the mind, brain and body. The convergence of these biomedical models with the ancient paradigm of TCM may provide a new insight into scientifically verifiable diagnostic and therapeutic approaches for these common disorders.

  12. Future of Plant Functional Types in Terrestrial Biosphere Models

    Science.gov (United States)

    Wullschleger, S. D.; Euskirchen, E. S.; Iversen, C. M.; Rogers, A.; Serbin, S.

    2015-12-01

    Earth system models describe the physical, chemical, and biological processes that govern our global climate. While it is difficult to single out one component as being more important than another in these sophisticated models, terrestrial vegetation is a critical player in the biogeochemical and biophysical dynamics of the Earth system. There is much debate, however, as to how plant diversity and function should be represented in these models. Plant functional types (PFTs) have been adopted by modelers to represent broad groupings of plant species that share similar characteristics (e.g. growth form) and roles (e.g. photosynthetic pathway) in ecosystem function. In this review the PFT concept is traced from its origin in the early 1800s to its current use in regional and global dynamic vegetation models (DVMs). Special attention is given to the representation and parameterization of PFTs and to validation and benchmarking of predicted patterns of vegetation distribution in high-latitude ecosystems. These ecosystems are sensitive to changing climate and thus provide a useful test case for model-based simulations of past, current, and future distribution of vegetation. Models that incorporate the PFT concept predict many of the emerging patterns of vegetation change in tundra and boreal forests, given known processes of tree mortality, treeline migration, and shrub expansion. However, representation of above- and especially belowground traits for specific PFTs continues to be problematic. Potential solutions include developing trait databases and replacing fixed parameters for PFTs with formulations based on trait co-variance and empirical trait-environment relationships. Surprisingly, despite being important to land-atmosphere interactions of carbon, water, and energy, PFTs such as moss and lichen are largely absent from DVMs. Close collaboration among those involved in modelling with the disciplines of taxonomy, biogeography, ecology, and remote sensing will be

  13. Functional Dual Adaptive Control with Recursive Gaussian Process Model

    International Nuclear Information System (INIS)

    Prüher, Jakub; Král, Ladislav

    2015-01-01

    The paper deals with dual adaptive control problem, where the functional uncertainties in the system description are modelled by a non-parametric Gaussian process regression model. Current approaches to adaptive control based on Gaussian process models are severely limited in their practical applicability, because the model is re-adjusted using all the currently available data, which keeps growing with every time step. We propose the use of recursive Gaussian process regression algorithm for significant reduction in computational requirements, thus bringing the Gaussian process-based adaptive controllers closer to their practical applicability. In this work, we design a bi-criterial dual controller based on recursive Gaussian process model for discrete-time stochastic dynamic systems given in an affine-in-control form. Using Monte Carlo simulations, we show that the proposed controller achieves comparable performance with the full Gaussian process-based controller in terms of control quality while keeping the computational demands bounded. (paper)
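
    To make the computational issue concrete, the sketch below bounds the cost of online Gaussian-process regression by refitting on a fixed-size window of recent data. This is a naive stand-in used only to illustrate why bounded-cost updates matter; it is not the recursive Gaussian process algorithm proposed in the paper.

```python
# A naive way to keep Gaussian-process regression cost bounded in an online setting:
# refit on a fixed-size window of the most recent observations. This only illustrates
# the computational issue the paper addresses; it is NOT the recursive GP update
# proposed there.
import numpy as np

def rbf_kernel(A, B, length=1.0, var=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return var * np.exp(-0.5 * d2 / length**2)

def gp_predict(X, y, Xs, noise=1e-2):
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs)
    mean = Ks.T @ np.linalg.solve(K, y)
    var = rbf_kernel(Xs, Xs).diagonal() - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mean, var

rng = np.random.default_rng(2)
window = 50                              # bound on stored data -> bounded O(window^3) cost
X_buf, y_buf = [], []
for t in range(200):                     # streaming observations of an unknown function
    x = np.array([t * 0.05])
    y = np.sin(x[0]) + 0.1 * rng.normal()
    X_buf.append(x)
    y_buf.append(y)
    X_buf, y_buf = X_buf[-window:], y_buf[-window:]

mean, var = gp_predict(np.array(X_buf), np.array(y_buf), np.array([[5.0]]))
print(mean, var)
```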

  14. Development of an Upper Extremity Function Measurement Model.

    Science.gov (United States)

    Hong, Ickpyo; Simpson, Annie N; Li, Chih-Ying; Velozo, Craig A

    This study demonstrated the development of a measurement model for gross upper-extremity function (GUE). The dependent variable was the Rasch calibration of the 27 ICF-GUE test items. The predictors were object weight, lifting distance from the floor, carrying, and lifting. Multiple regression was used to investigate the contribution that each independent variable makes to the model with 203 outpatients. Object weight and lifting distance were the only statistically and clinically significant independent variables in the model, accounting for 83% of the variance. The model indicates that, with each one-pound increase in object weight, item challenge increases by 0.16. The measurement model for the ICF-GUE can thus be explained by object weight and distance lifted from the floor.

  15. Advanced Mirror & Modelling Technology Development

    Science.gov (United States)

    Effinger, Michael; Stahl, H. Philip; Abplanalp, Laura; Maffett, Steven; Egerman, Robert; Eng, Ron; Arnold, William; Mosier, Gary; Blaurock, Carl

    2014-01-01

    The 2020 Decadal technology survey is starting in 2018. Technology on the shelf at that time will help guide selection of future low-risk and low-cost missions. The Advanced Mirror Technology Development (AMTD) team has identified development priorities based on science goals and engineering requirements for Ultraviolet Optical near-Infrared (UVOIR) missions in order to contribute to the selection process. One key development identified was lightweight mirror fabrication and testing. A monolithic, stacked, deep core mirror was fused and replicated twice to achieve the desired radius of curvature. It was subsequently successfully polished and tested. A recently awarded second phase to the AMTD project will develop larger mirrors to demonstrate the lateral scaling of the deep core mirror technology. Another key development was rapid modeling for the mirror. One model focused on generating optical and structural model results in minutes instead of months. Many variables could be accounted for regarding the core, face plate and back structure details. A portion of a spacecraft model was also developed. The spacecraft model incorporated direct integration to transform optical path difference to Point Spread Function (PSF) and PSF to modulation transfer function (MTF). The second phase of the project will take the results of the rapid mirror modeler and integrate them into the rapid spacecraft modeler.
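
    The OPD-to-PSF-to-MTF chain mentioned above can be sketched with discrete Fourier transforms: a pupil with a phase error derived from the optical path difference is transformed to the point spread function, which is transformed again to obtain the modulation transfer function. The grid size, wavelength and toy OPD map below are illustrative assumptions.

```python
# Sketch of the direct OPD -> PSF -> MTF chain: build a circular pupil with a phase
# error from the optical path difference, Fourier-transform it to get the point
# spread function, then Fourier-transform the PSF to get the modulation transfer
# function. Grid size, wavelength, and the OPD map are arbitrary illustrative values.
import numpy as np

n = 256
wavelength = 633e-9                                   # m, illustrative
y, x = np.mgrid[-1:1:1j*n, -1:1:1j*n]
r = np.hypot(x, y)
pupil = (r <= 1.0).astype(float)

opd = 50e-9 * (2 * r**2 - 1) * pupil                  # toy defocus-like OPD map (m)
phase = 2 * np.pi * opd / wavelength
field = pupil * np.exp(1j * phase)

psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2  # point spread function
psf /= psf.sum()

mtf = np.abs(np.fft.fftshift(np.fft.fft2(psf)))       # MTF = |Fourier transform of the PSF|
mtf /= mtf.max()
print(psf.max(), mtf.shape)
```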

  16. Functional requirements of a mathematical model of the heart.

    Science.gov (United States)

    Palladino, Joseph L; Noordergraaf, Abraham

    2009-01-01

    Functional descriptions of the heart, especially the left ventricle, are often based on the measured variables pressure and ventricular outflow, embodied as a time-varying elastance. The fundamental difficulty of describing the mechanical properties of the heart with a time-varying elastance function that is set a priori is described. As an alternative, a new functional model of the heart is presented, which characterizes the ventricle's contractile state with parameters, rather than variables. Each chamber is treated as a pressure generator that is time and volume dependent. The heart's complex dynamics develop from a single equation based on the formation and relaxation of crossbridge bonds. This equation permits the calculation of ventricular elastance via E_v = ∂p_v/∂V_v. This heart model is defined independently of load properties, and ventricular elastance is dynamic and reflects changing numbers of crossbridge bonds. In this paper, the functionality of this new heart model is presented via computed work loops that demonstrate the Frank-Starling mechanism and the effects of preload, the effects of afterload, inotropic changes, and varied heart rate, as well as the interdependence of these effects. Results suggest the origin of the equivalent of Hill's force-velocity relation in the ventricle.

  17. Functional-derivative study of the Hubbard model. III. Fully renormalized Green's function

    International Nuclear Information System (INIS)

    Arai, T.; Cohen, M.H.

    1980-01-01

    The functional-derivative method of calculating the Green's function developed earlier for the Hubbard model is generalized and used to obtain a fully renormalized solution. Higher-order functional derivatives operating on the basic Green's functions, G and Γ, are all evaluated explicitly, thus making the solution applicable to the narrow-band region as well as the wide-band region. Correction terms Φ are generated from functional derivatives of equal-time Green's functions of the type δⁿ/δεⁿ, etc., with n ≥ 2. It is found that the Φ's are, in fact, renormalization factors involved in the self-energy Σ and that the structure of the Φ's resembles that of Σ and contains the same renormalization factors Φ. The renormalization factors Φ are shown to satisfy a set of equations and can be evaluated self-consistently. In the presence of the Φ's, all difficulties found in the previous results (papers I and II) are removed, and the energy spectrum ω can now be evaluated for all occupations n. The Schwinger relation is the only basic relation used in generating this fully self-consistent Green's function, and the Baym-Kadanoff continuity condition is automatically satisfied

  18. Correlation functions of the Ising model and the eight-vertex model

    International Nuclear Information System (INIS)

    Ko, L.F.

    1986-01-01

    Calculations for the two-point correlation functions in the scaling limit for two statistical models are presented. In Part I, the Ising model with a linear defect is studied for T < T_c and T > T_c. The transfer matrix method of Onsager and Kaufman is used. The energy-density correlation is given by functions related to the modified Bessel functions. The dispersion expansions for the spin-spin correlation functions are derived. The dominant behavior for large separations at T ≠ T_c is extracted. It is shown that these expansions lead to systems of Fredholm integral equations. In Part II, the electric correlation function of the eight-vertex model for T < T_c is studied. The eight-vertex model decouples into two independent Ising models when the four-spin coupling vanishes. To first order in the four-spin coupling, the electric correlation function is related to a three-point function of the Ising model. This relation is systematically investigated and the full dispersion expansion (to first order in the four-spin coupling) is obtained. The result is a new kind of structure which, unlike those of many solvable models, is apparently not expressible in terms of linear integral equations

  19. The Schroedinger functional for Gross-Neveu models

    International Nuclear Information System (INIS)

    Leder, B.

    2007-01-01

    Gross-Neveu type models with a finite number of fermion flavours are studied on a two-dimensional Euclidean space-time lattice. The models are asymptotically free and are invariant under a chiral symmetry. These similarities to QCD make them perfect benchmark systems for fermion actions used in large scale lattice QCD computations. The Schroedinger functional for the Gross-Neveu models is defined for both Wilson and Ginsparg-Wilson fermions, and shown to be renormalisable in 1-loop lattice perturbation theory. In two dimensions, the four-fermion interactions of the Gross-Neveu models have dimensionless coupling constants. The symmetry properties of the four-fermion interaction terms and the relations among them are discussed. For Wilson fermions chiral symmetry is explicitly broken and additional terms must be included in the action. Chiral symmetry is restored up to cut-off effects by tuning the bare mass and one of the couplings. The critical mass and the symmetry-restoring coupling are computed to second order in lattice perturbation theory. This result is used in the 1-loop computation of the renormalised couplings and the associated beta-functions. The renormalised couplings are defined in terms of suitable boundary-to-boundary correlation functions. In the computation the known first-order coefficients of the beta-functions are reproduced. One of the couplings is found to have a vanishing beta-function. The calculation is repeated for the recently proposed Schroedinger functional with exact chiral symmetry, i.e. Ginsparg-Wilson fermions. The renormalisation pattern is found to be the same as in the Wilson case. Using the regularisation-dependent finite part of the renormalised couplings, the ratio of the Lambda-parameters is computed. (orig.)

  20. A Review of Modeling Pedagogies: Pedagogical Functions, Discursive Acts, and Technology in Modeling Instruction

    Science.gov (United States)

    Campbell, Todd; Oh, Phil Seok; Maughn, Milo; Kiriazis, Nick; Zuwallack, Rebecca

    2015-01-01

    The current review examined modeling literature in top science education journals to better understand the pedagogical functions of modeling instruction reported over the last decade. Additionally, the review sought to understand the extent to which different modeling pedagogies were employed, the discursive acts that were identified as important,…

  1. Spin-density functional for exchange anisotropic Heisenberg model

    International Nuclear Information System (INIS)

    Prata, G.N.; Penteado, P.H.; Souza, F.C.; Libero, Valter L.

    2009-01-01

    Ground-state energies for antiferromagnetic Heisenberg models with exchange anisotropy are estimated by means of a local-spin approximation made in the context of the density functional theory. Correlation energy is obtained using the non-linear spin-wave theory for homogeneous systems from which the spin functional is built. Although applicable to chains of any size, the results are shown for small number of sites, to exhibit finite-size effects and allow comparison with exact-numerical data from direct diagonalization of small chains.

  2. Medical Writing Competency Model - Section 1: Functions, Tasks, and Activities.

    Science.gov (United States)

    Clemow, David B; Wagner, Bertil; Marshallsay, Christopher; Benau, Dan; L'Heureux, Darryl; Brown, David H; Dasgupta, Devjani Ghosh; Girten, Eileen; Hubbard, Frank; Gawrylewski, Helle-Mai; Ebina, Hiroko; Stoltenborg, Janet; York, J P; Green, Kim; Wood, Linda Fossati; Toth, Lisa; Mihm, Michael; Katz, Nancy R; Vasconcelos, Nina-Maria; Sakiyama, Norihisa; Whitsell, Robin; Gopalakrishnan, Shobha; Bairnsfather, Susan; Wanderer, Tatyana; Schindler, Thomas M; Mikyas, Yeshi; Aoyama, Yumiko

    2018-01-01

    This article provides Section 1 of the 2017 Edition 2 Medical Writing Competency Model that describes the core work functions and associated tasks and activities related to professional medical writing within the life sciences industry. The functions in the Model are scientific communication strategy; document preparation, development, and finalization; document project management; document template, standard, format, and style development and maintenance; outsourcing, alliance partner, and client management; knowledge, skill, ability, and behavior development and sharing; and process improvement. The full Model also includes Section 2, which covers the knowledge, skills, abilities, and behaviors needed for medical writers to be effective in their roles; Section 2 is presented in a companion article. Regulatory, publication, and other scientific writing as well as management of writing activities are covered. The Model was developed to aid medical writers and managers within the life sciences industry regarding medical writing hiring, training, expectation and goal setting, performance evaluation, career development, retention, and role value sharing to cross-functional partners.

  3. Analysis of a Heroin Epidemic Model with Saturated Treatment Function

    Directory of Open Access Journals (Sweden)

    Isaac Mwangi Wangari

    2017-01-01

    A mathematical model is developed that examines how heroin addiction spreads in society. The model is formulated to take into account the treatment of heroin users by incorporating a realistic functional form that “saturates”, representing the limited availability of treatment. Bifurcation analysis reveals that the model has an intrinsic backward bifurcation whenever the saturation parameter is larger than a fixed threshold. We are particularly interested in studying the model’s global stability. In the absence of backward bifurcations, Lyapunov functions can often be found and used to prove global stability. However, in the presence of backward bifurcations, such Lyapunov functions may not exist or may be difficult to construct. We make use of the geometric approach to global stability to derive a condition that ensures that the system is globally asymptotically stable. Numerical simulations are also presented to give a more complete representation of the model dynamics. Sensitivity analysis performed by Latin hypercube sampling (LHS) suggests that the effective contact rate in the population, the relapse rate of heroin users undergoing treatment, and the extent of saturation of heroin users are mechanisms fuelling heroin epidemic proliferation.
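
    A minimal compartmental sketch with a saturated treatment term of the form bU/(1 + aU) is given below to make the "limited availability of treatment" idea concrete. The compartments, parameter values and remaining terms are placeholders and do not reproduce the paper's exact system.

```python
# Illustrative ODE system with a saturated treatment term T(U) = b*U/(1 + a*U), in the
# spirit of the heroin-epidemic model discussed above. The compartments, parameter
# values, and functional details are placeholders, not the paper's exact model.
import numpy as np
from scipy.integrate import solve_ivp

def heroin_model(t, y, beta=0.4, b=0.3, a=2.0, relapse=0.1, mu=0.02, quit_rate=0.05):
    S, U, R = y                                   # susceptibles, heroin users, users in treatment
    N = S + U + R
    new_users = beta * S * U / N                  # effective-contact recruitment
    treatment = b * U / (1.0 + a * U)             # saturated (limited-capacity) treatment
    dS = mu * N - new_users - mu * S + quit_rate * U
    dU = new_users + relapse * R - treatment - (mu + quit_rate) * U
    dR = treatment - relapse * R - mu * R
    return [dS, dU, dR]

sol = solve_ivp(heroin_model, (0, 200), [0.95, 0.05, 0.0])
print(sol.y[:, -1])                               # long-run compartment sizes
```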

  4. Functional Mixed Effects Model for Small Area Estimation.

    Science.gov (United States)

    Maiti, Tapabrata; Sinha, Samiran; Zhong, Ping-Shou

    2016-09-01

    Functional data analysis has become an important area of research due to its ability of handling high dimensional and complex data structures. However, the development is limited in the context of linear mixed effect models, and in particular, for small area estimation. The linear mixed effect models are the backbone of small area estimation. In this article, we consider area level data, and fit a varying coefficient linear mixed effect model where the varying coefficients are semi-parametrically modeled via B-splines. We propose a method of estimating the fixed effect parameters and consider prediction of random effects that can be implemented using a standard software. For measuring prediction uncertainties, we derive an analytical expression for the mean squared errors, and propose a method of estimating the mean squared errors. The procedure is illustrated via a real data example, and operating characteristics of the method are judged using finite sample simulation studies.

  5. Model of bidirectional reflectance distribution function for metallic materials

    International Nuclear Information System (INIS)

    Wang Kai; Zhu Jing-Ping; Liu Hong; Hou Xun

    2016-01-01

    Based on the three-component assumption that the reflection is divided into specular reflection, directional diffuse reflection, and ideal diffuse reflection, a bidirectional reflectance distribution function (BRDF) model of metallic materials is presented. Compared with the two-component assumption that the reflection is composed of specular reflection and diffuse reflection, the three-component assumption divides the diffuse reflection into directional diffuse and ideal diffuse reflection. This model effectively resolves the problem that constant diffuse reflection leads to considerable error for metallic materials. Simulation and measurement results validate that this three-component BRDF model can improve the modeling accuracy significantly and describe the reflection properties in the hemisphere space precisely for the metallic materials. (paper)

  6. Model of bidirectional reflectance distribution function for metallic materials

    Science.gov (United States)

    Wang, Kai; Zhu, Jing-Ping; Liu, Hong; Hou, Xun

    2016-09-01

    Based on the three-component assumption that the reflection is divided into specular reflection, directional diffuse reflection, and ideal diffuse reflection, a bidirectional reflectance distribution function (BRDF) model of metallic materials is presented. Compared with the two-component assumption that the reflection is composed of specular reflection and diffuse reflection, the three-component assumption divides the diffuse reflection into directional diffuse and ideal diffuse reflection. This model effectively resolves the problem that constant diffuse reflection leads to considerable error for metallic materials. Simulation and measurement results validate that this three-component BRDF model can improve the modeling accuracy significantly and describe the reflection properties in the hemisphere space precisely for the metallic materials.
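
    Both records above describe the same three-component decomposition; the sketch below assembles a generic specular lobe, a broader directional-diffuse lobe and a Lambertian term into a single BRDF. The lobe shapes, weights and in-plane geometry are assumptions for illustration, not the paper's actual functional forms for metallic surfaces.

```python
# Hedged sketch of a three-component BRDF of the kind described above: a specular lobe,
# a broader directional-diffuse lobe, and an ideal (Lambertian) diffuse term. The lobe
# shapes and weights are generic placeholders, not the paper's model for metals.
import numpy as np

def three_component_brdf(theta_i, theta_r, k_s=0.6, k_dd=0.3, k_d=0.1,
                         m_spec=0.05, m_dd=0.4):
    """Angles in radians, in-plane geometry only, for illustration."""
    # Specular lobe: narrow Gaussian around the mirror direction (theta_r = theta_i)
    specular = k_s * np.exp(-((theta_r - theta_i) ** 2) / (2 * m_spec**2))
    # Directional diffuse: broader lobe, still centred on the mirror direction
    directional_diffuse = k_dd * np.exp(-((theta_r - theta_i) ** 2) / (2 * m_dd**2))
    # Ideal diffuse: constant (Lambertian) term
    ideal_diffuse = k_d / np.pi
    return specular + directional_diffuse + ideal_diffuse

angles = np.radians(np.linspace(-80, 80, 161))
print(three_component_brdf(np.radians(30.0), angles).max())
```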

  7. A Tensor Statistical Model for Quantifying Dynamic Functional Connectivity.

    Science.gov (United States)

    Zhu, Yingying; Zhu, Xiaofeng; Kim, Minjeong; Yan, Jin; Wu, Guorong

    2017-06-01

    Functional connectivity (FC) has been widely investigated in many imaging-based neuroscience and clinical studies. Since the functional Magnetic Resonance Imaging (fMRI) signal is just an indirect reflection of brain activity, it is difficult to accurately quantify the FC strength based on signal correlation alone. To address this limitation, we propose a learning-based tensor model to derive high sensitivity and specificity connectome biomarkers at the individual level from resting-state fMRI images. First, we propose a learning-based approach to estimate the intrinsic functional connectivity. In addition to the low level region-to-region signal correlation, latent module-to-module connection is also estimated and used to provide high level heuristics for measuring connectivity strength. Furthermore, a sparsity constraint is employed to automatically remove spurious connections, thus alleviating the issue of searching for an optimal threshold. Second, we integrate our learning-based approach with the sliding-window technique to further reveal the dynamics of functional connectivity. Specifically, we stack the functional connectivity matrices within each sliding window and form a 3D tensor whose third dimension denotes time. Then we obtain dynamic functional connectivity (dFC) for each individual subject by simultaneously estimating the within-sliding-window functional connectivity and characterizing the across-sliding-window temporal dynamics. Third, in order to enhance the robustness of the connectome patterns extracted from dFC, we extend the individual-based 3D tensors to a population-based 4D tensor (with the fourth dimension standing for the training subjects) and learn the statistics of connectome patterns via 4D tensor analysis. Since our 4D tensor model jointly (1) optimizes dFC for each training subject and (2) captures the principal connectome patterns, our statistical model gains more statistical power in representing new subjects than current state-of-the-art methods.
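
    The sliding-window stacking step can be sketched in a few lines; window length, stride and the synthetic time series below are illustrative assumptions, and the learning-based connectivity estimation and 4D tensor analysis of the paper are not reproduced.

```python
import numpy as np

# Stack window-wise correlation matrices into a regions x regions x windows
# tensor (the 3D dFC tensor described above). Window length, stride and the
# synthetic "fMRI" time series are illustrative assumptions.
def dynamic_fc(ts, win_len=30, stride=5):
    n_time, n_regions = ts.shape
    starts = range(0, n_time - win_len + 1, stride)
    return np.stack([np.corrcoef(ts[s:s + win_len].T) for s in starts], axis=2)

rng = np.random.default_rng(0)
ts = rng.standard_normal((200, 10))   # 200 time points, 10 regions
tensor = dynamic_fc(ts)
print(tensor.shape)                   # (10, 10, 35)
```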

  8. Simple model for low-frequency guitar function

    DEFF Research Database (Denmark)

    Christensen, Ove; Vistisen, Bo B.

    1980-01-01

    The frequency response of sound pressure and top plate mobility is studied around the first two resonances of the guitar. These resonances are shown to result from a coupling between the fundamental top plate mode and the Helmholtz resonance of the cavity. A simple model is proposed for low-frequency guitar function. The model predicts frequency responses of sound pressure and top plate mobility which are in close quantitative agreement with experimental responses. The absolute sound pressure level and mobility level are predicted to within a few decibels, and the equivalent piston area...
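
    The two-resonance behaviour of such a coupled plate-cavity system can be illustrated with two coupled damped oscillators; the frequencies, damping and coupling strength below are illustrative assumptions rather than fitted guitar parameters.

```python
import numpy as np
from scipy.signal import find_peaks

# Two coupled damped oscillators (top plate + Helmholtz cavity resonance):
# the coupling splits the response into two peaks, as in the low-frequency
# guitar model. All parameter values are illustrative assumptions.
f = np.linspace(50, 400, 2000)
w = 2 * np.pi * f
w_plate, w_cav = 2 * np.pi * 180.0, 2 * np.pi * 120.0   # uncoupled resonances
g_plate, g_cav = 30.0, 20.0                             # damping rates
coupling = 0.5 * w_plate * w_cav                        # coupling strength

Dp = w_plate**2 - w**2 + 1j * g_plate * w
Dc = w_cav**2 - w**2 + 1j * g_cav * w
response = np.abs(Dc / (Dp * Dc - coupling**2))         # plate-like response

peaks, _ = find_peaks(response)
print("coupled resonances near (Hz):", np.round(f[peaks], 1))
```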

  9. Control architecture of power systems: Modeling of purpose and function

    DEFF Research Database (Denmark)

    Heussen, Kai; Saleem, Arshad; Lind, Morten

    2009-01-01

    Many new technologies with novel control capabilities have been developed in the context of “smart grid” research. However, often it is not clear how these capabilities should best be integrated in the overall system operation. New operation paradigms change the traditional control architecture of power systems and it is necessary to identify requirements and functions. How does new control architecture fit with the old architecture? How can power system functions be specified independent of technology? What is the purpose of control in power systems? In this paper, a method suitable for semantically consistent modeling of control architecture is presented. The method, called Multilevel Flow Modeling (MFM), is applied to the case of system balancing. It was found that MFM is capable of capturing implicit control knowledge, which is otherwise difficult to formalize. The method has possible...

  10. The Use of Modeling Approach for Teaching Exponential Functions

    Science.gov (United States)

    Nunes, L. F.; Prates, D. B.; da Silva, J. M.

    2017-12-01

    This work presents a discussion related to the teaching and learning of mathematical content related to the study of exponential functions in a group of freshman students enrolled in the first semester of the Science and Technology Bachelor’s (STB) of the Federal University of Jequitinhonha and Mucuri Valleys (UFVJM). As a contextualization tool strongly mentioned in the literature, the modelling approach was used as a teaching tool to contextualize the teaching-learning process of exponential functions for these students. In this sense, some simple models elaborated with the GeoGebra software were used and, to obtain a qualitative evaluation of the investigation and its results, Didactic Engineering was used as the research methodology. As a consequence of this detailed research, some interesting details about the teaching and learning process were observed, discussed and described.

  11. Using computational models to relate structural and functional brain connectivity

    Czech Academy of Sciences Publication Activity Database

    Hlinka, Jaroslav; Coombes, S.

    2012-01-01

    Vol. 36, No. 2 (2012), pp. 2137-2145 ISSN 0953-816X R&D Projects: GA MŠk 7E08027 EU Projects: European Commission(XE) 200728 - BRAINSYNC Institutional research plan: CEZ:AV0Z10300504 Keywords: brain disease * computational modelling * functional connectivity * graph theory * structural connectivity Subject RIV: FH - Neurology Impact factor: 3.753, year: 2012

  12. A review of function modeling: Approaches and applications

    OpenAIRE

    Erden, M.S.; Komoto, H.; Van Beek, T.J.; D'Amelio, V.; Echavarria, E.; Tomiyama, T.

    2008-01-01

    This work is aimed at establishing a common frame and understanding of function modeling (FM) for our ongoing research activities. A comparative review of the literature is performed to grasp the various FM approaches with their commonalities and differences. The relations of FM with the research fields of artificial intelligence, design theory, and maintenance are discussed. In this discussion the goals are to highlight the features of various classical approaches in relation to FM, to delin...

  13. Correlation functions and Schwinger-Dyson equations for Penner's model

    International Nuclear Information System (INIS)

    Chair, N.; Panda, S.

    1991-05-01

    The free energy of Penner's model exhibits a logarithmic singularity in the continuum limit. We show, however, that the one- and two-point correlators of the usual loop operators do not exhibit a logarithmic singularity. The continuum Schwinger-Dyson equations involving these correlation functions are derived and it is found that, within the space of the corresponding couplings, the resulting constraints obey a Virasoro algebra. The puncture operator having the correct (logarithmic) scaling behaviour is identified. (author). 13 refs

  14. Model Complexities of Shallow Networks Representing Highly Varying Functions

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra; Sanguineti, M.

    2016-01-01

    Vol. 171, 1 January (2016), pp. 598-604 ISSN 0925-2312 R&D Projects: GA MŠk(CZ) LD13002 Grant - others: grant for Visiting Professors(IT) GNAMPA-INdAM Institutional support: RVO:67985807 Keywords: shallow networks * model complexity * highly varying functions * Chernoff bound * perceptrons * Gaussian kernel units Subject RIV: IN - Informatics, Computer Science Impact factor: 3.317, year: 2016

  15. Linking density functional and mode coupling models for supercooled liquids

    Energy Technology Data Exchange (ETDEWEB)

    Premkumar, Leishangthem; Bidhoodi, Neeta; Das, Shankar P. [School of Physical Sciences, Jawaharlal Nehru University, New Delhi 110067 (India)

    2016-03-28

    We compare predictions from two familiar models of the metastable supercooled liquid, respectively constructed with thermodynamic and dynamic approaches. In the so-called density functional theory the free energy F[ρ] of the liquid is a functional of the inhomogeneous density ρ(r). The metastable state is identified as a local minimum of F[ρ]. The sharp density profile characterizing ρ(r) is identified as a single particle oscillator, whose frequency is obtained from the parameters of the optimum density function. On the other hand, a dynamic approach to supercooled liquids is taken in the mode coupling theory (MCT), which predicts a sharp ergodicity-non-ergodicity transition at a critical density. The single particle dynamics in the non-ergodic state, treated approximately, represents a propagating mode whose characteristic frequency is computed from the corresponding memory function of the MCT. The mass localization parameters in the above two models (treated in their simplest forms) are obtained, respectively, in terms of the corresponding natural frequencies, and are shown to have comparable magnitudes.

  16. Linking density functional and mode coupling models for supercooled liquids.

    Science.gov (United States)

    Premkumar, Leishangthem; Bidhoodi, Neeta; Das, Shankar P

    2016-03-28

    We compare predictions from two familiar models of the metastable supercooled liquid, respectively constructed with thermodynamic and dynamic approaches. In the so-called density functional theory the free energy F[ρ] of the liquid is a functional of the inhomogeneous density ρ(r). The metastable state is identified as a local minimum of F[ρ]. The sharp density profile characterizing ρ(r) is identified as a single particle oscillator, whose frequency is obtained from the parameters of the optimum density function. On the other hand, a dynamic approach to supercooled liquids is taken in the mode coupling theory (MCT), which predicts a sharp ergodicity-non-ergodicity transition at a critical density. The single particle dynamics in the non-ergodic state, treated approximately, represents a propagating mode whose characteristic frequency is computed from the corresponding memory function of the MCT. The mass localization parameters in the above two models (treated in their simplest forms) are obtained, respectively, in terms of the corresponding natural frequencies, and are shown to have comparable magnitudes.

  17. Transposons As Tools for Functional Genomics in Vertebrate Models.

    Science.gov (United States)

    Kawakami, Koichi; Largaespada, David A; Ivics, Zoltán

    2017-11-01

    Genetic tools and mutagenesis strategies based on transposable elements are currently under development with a vision to link primary DNA sequence information to gene functions in vertebrate models. By virtue of their inherent capacity to insert into DNA, transposons can be developed into powerful tools for chromosomal manipulations. Transposon-based forward mutagenesis screens have numerous advantages including high throughput, easy identification of mutated alleles, and providing insight into genetic networks and pathways based on phenotypes. For example, the Sleeping Beauty transposon has become highly instrumental to induce tumors in experimental animals in a tissue-specific manner with the aim of uncovering the genetic basis of diverse cancers. Here, we describe a battery of mutagenic cassettes that can be applied in conjunction with transposon vectors to mutagenize genes, and highlight versatile experimental strategies for the generation of engineered chromosomes for loss-of-function as well as gain-of-function mutagenesis for functional gene annotation in vertebrate models, including zebrafish, mice, and rats. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Modeling Marine Electromagnetic Survey with Radial Basis Function Networks

    Directory of Open Access Journals (Sweden)

    Agus Arif

    2014-11-01

    Full Text Available A marine electromagnetic survey is an engineering endeavour to discover the location and dimension of a hydrocarbon layer under an ocean floor. In this kind of survey, an array of electric and magnetic receivers is located on the sea floor and records the scattered, refracted and reflected electromagnetic waves, which have been transmitted by an electric dipole antenna towed by a vessel. The data recorded in the receivers must be processed and further analysed to estimate the hydrocarbon location and dimension. To conduct those analyses successfully, a radial basis function (RBF) network could be employed as a forward model of the input-output relationship of the data from a marine electromagnetic survey. This type of neural network works based on distances between its inputs and predetermined centres of some basis functions. A previous study had modelled the same marine electromagnetic survey using another type of neural network, a multilayer perceptron (MLP) network. By comparing their validation and training performances (mean-squared errors and correlation coefficients), it is concluded that, in this case, the MLP network is comparatively better than the RBF network [1]. [1] This manuscript is an extended version of our previous paper, entitled Radial Basis Function Networks for Modeling Marine Electromagnetic Survey, which had been presented at the 2011 International Conference on Electrical Engineering and Informatics, 17-19 July 2011, Bandung, Indonesia.
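
    A minimal Gaussian RBF regression sketch is shown below; the synthetic input-output data, the number of centres and the kernel width are assumptions for illustration and do not reproduce the survey data of the paper.

```python
import numpy as np

# Minimal RBF network: Gaussian basis functions at fixed centres, output
# weights fitted by linear least squares. Data, centres and width are
# illustrative assumptions.
def rbf_design(X, centres, width):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))                 # two survey-like inputs
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] + 0.05 * rng.standard_normal(200)

centres = X[rng.choice(len(X), 20, replace=False)]    # 20 centres from the data
Phi = rbf_design(X, centres, width=0.3)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)           # output-layer weights

print("training MSE:", float(np.mean((y - Phi @ w) ** 2)))
```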

  19. Monopoly models with time-varying demand function

    Science.gov (United States)

    Cavalli, Fausto; Naimzada, Ahmad

    2018-05-01

    We study a family of monopoly models for markets characterized by time-varying demand functions, in which a boundedly rational agent chooses output levels on the basis of a gradient adjustment mechanism. After presenting the model for a generic framework, we analytically study the case of cyclically alternating demand functions. We show that both the perturbation size and the agent's reactivity to profitability variation signals can have counterintuitive roles on the resulting period-2 cycles and on their stability. In particular, increasing the perturbation size can have both a destabilizing and a stabilizing effect on the resulting dynamics. Moreover, in contrast with the case of time-constant demand functions, the agent's reactivity is not just destabilizing, but can improve stability, too. This means that a less cautious behavior can provide better performance, both with respect to stability and to achieved profits. We show that, even if the decision mechanism is very simple and not always able to provide the optimal production decisions, the achieved profits are very close to the optimal ones. Finally, we show that, in agreement with the existing empirical literature, the price series obtained by simulating the proposed model exhibit a significant deviation from normality and large volatility, in particular when the underlying deterministic dynamics become unstable and complex.
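
    The gradient adjustment mechanism with a cyclically alternating demand intercept can be sketched as follows; the linear demand, the constant marginal cost and the parameter values are illustrative assumptions.

```python
import numpy as np

# Gradient-adjustment output dynamics q_{t+1} = q_t + alpha*q_t*dprofit/dq
# under a period-2 alternating linear inverse demand p = a_t - b*q and
# constant marginal cost c. All functional forms/parameters are illustrative.
def simulate(q0=1.0, alpha=0.1, a=(10.0, 8.0), b=1.0, c=2.0, T=200):
    q, path = q0, []
    for t in range(T):
        a_t = a[t % 2]                                   # alternating demand
        marginal_profit = a_t - 2.0 * b * q - c
        q = max(q + alpha * q * marginal_profit, 1e-9)   # gradient adjustment
        path.append(q)
    return np.array(path)

path = simulate()
print("last four outputs:", np.round(path[-4:], 3))   # settles onto a period-2 cycle
```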

  20. Calibration of two complex ecosystem models with different likelihood functions

    Science.gov (United States)

    Hidy, Dóra; Haszpra, László; Pintér, Krisztina; Nagy, Zoltán; Barcza, Zoltán

    2014-05-01

    The biosphere is a sensitive carbon reservoir. Terrestrial ecosystems were approximately carbon neutral during the past centuries, but they became net carbon sinks due to climate-change-induced environmental change and the associated CO2 fertilization effect of the atmosphere. Model studies and measurements indicate that the biospheric carbon sink can saturate in the future due to ongoing climate change, which can act as a positive feedback. Robustness of carbon cycle models is a key issue when trying to choose the appropriate model for decision support. The input parameters of the process-based models are decisive regarding the model output. At the same time there are several input parameters for which accurate values are hard to obtain directly from experiments or no local measurements are available. Due to the uncertainty associated with the unknown model parameters, significant bias can be experienced if the model is used to simulate the carbon and nitrogen cycle components of different ecosystems. In order to improve model performance, the unknown model parameters have to be estimated. We developed a multi-objective, two-step calibration method based on a Bayesian approach in order to estimate the unknown parameters of the PaSim and Biome-BGC models. Biome-BGC and PaSim are widely used biogeochemical models that simulate the storage and flux of water, carbon, and nitrogen between the ecosystem and the atmosphere, and within the components of the terrestrial ecosystems (in this research the developed version of Biome-BGC is used, which is referred to as BBGC MuSo). Both models were calibrated regardless of the simulated processes and the type of model parameters. The calibration procedure is based on the comparison of measured data with simulated results via calculating a likelihood function (degree of goodness-of-fit between simulated and measured data). In our research, different likelihood function formulations were used in order to examine the effect of the different model

  1. Modeling phytoplankton community in reservoirs. A comparison between taxonomic and functional groups-based models.

    Science.gov (United States)

    Di Maggio, Jimena; Fernández, Carolina; Parodi, Elisa R; Diaz, M Soledad; Estrada, Vanina

    2016-01-01

    In this paper we address the formulation of two mechanistic water quality models that differ in the way the phytoplankton community is described. We carry out parameter estimation subject to differential-algebraic constraints and validation for each model, and a comparison of the models' performance. The first approach aggregates phytoplankton species based on their phylogenetic characteristics (Taxonomic group model) and the second one, on their morpho-functional properties following Reynolds' classification (Functional group model). The latter approach takes into account tolerance and sensitivity to environmental conditions. The constrained parameter estimation problems are formulated within an equation-oriented framework, with a maximum likelihood objective function. The study site is Paso de las Piedras Reservoir (Argentina), which supplies drinking water for a population of 450,000. Numerical results show that phytoplankton morpho-functional groups more closely represent each species' growth requirements within the group. Each model's performance is quantitatively assessed by three diagnostic measures. Parameter estimation results for seasonal dynamics of the phytoplankton community and main biogeochemical variables for a one-year time horizon are presented and compared for both models, showing the functional group model's enhanced performance. Finally, we explore increasing nutrient loading scenarios and predict their effect on phytoplankton dynamics throughout a one-year time horizon. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. The stochastic resonance for the incidence function model of metapopulation

    Science.gov (United States)

    Li, Jiang-Cheng; Dong, Zhi-Wei; Zhou, Ruo-Wei; Li, Yun-Xian; Qian, Zhen-Wei

    2017-06-01

    A stochastic model with endogenous and exogenous periodicities is proposed in this paper on the basis of metapopulation dynamics to model crop yield losses due to pests and diseases. The rationale is that crop yield losses occur because the physiology of the growing crop is negatively affected by pests and diseases in a dynamic way over time as the crop both grows and develops. Metapopulation dynamics can thus be used to model the resultant crop yield losses. The stochastic metapopulation process is described using the Simplified Incidence Function model (IFM). Compared to the original IFMs, endogenous and exogenous periodicities are considered in the proposed model to handle the cyclical patterns observed in pest infestations, disease epidemics, and exogenous affecting factors such as temperature and rainfall. Agricultural loss data in China are used to fit the proposed model. Experimental results demonstrate that: (1) the model with endogenous and exogenous periodicities is a better fit; (2) when the internal system fluctuations and external environmental fluctuations are negatively correlated, EIL or the cost of loss is monotonically increasing; when the internal system fluctuations and external environmental fluctuations are positively correlated, an outbreak of pests and diseases might occur; (3) if the internal system fluctuations and external environmental fluctuations are positively correlated, an optimal patch size can be identified which will greatly weaken the effects of external environmental influence and hence inhibit pest infestations and disease epidemics.
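
    A stochastic simulation sketch of a simplified incidence function model with an added periodic forcing is given below; the connectivity and extinction formulas and all parameter values are illustrative assumptions, not the fitted model of the paper.

```python
import numpy as np

# Simplified Incidence Function Model: colonization probability rises with
# connectivity, extinction probability falls with patch area, and a periodic
# ("seasonal") factor forces extinction risk. Parameters are illustrative.
rng = np.random.default_rng(2)
n_patch, T = 30, 200
area = rng.uniform(0.5, 3.0, n_patch)
xy = rng.uniform(0, 10, (n_patch, 2))
dist = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)

occupied = rng.random(n_patch) < 0.5
alpha, e0, x_exp, y_half = 1.0, 0.2, 1.0, 2.0
occupancy = []
for t in range(T):
    season = 1.0 + 0.5 * np.sin(2 * np.pi * t / 20)             # exogenous cycle
    S = (np.exp(-alpha * dist) * (occupied * area)).sum(axis=1)  # connectivity
    C = S**2 / (S**2 + y_half**2)                                # colonization prob.
    E = np.minimum(e0 * season / area**x_exp, 1.0)               # extinction prob.
    occupied = np.where(occupied, rng.random(n_patch) > E,
                        rng.random(n_patch) < C)
    occupancy.append(occupied.mean())

print("mean occupancy, last 20 steps:", round(float(np.mean(occupancy[-20:])), 2))
```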

  3. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

    The goal of this work is to validate the component functions of model output between physical observations and the computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of model output, and the conditional expectations reflect partial information of the model output. Therefore, the model validation of conditional expectations reveals the discrepancy between the partial information of the computational model output and that of the observations. Then a calibration of the conditional expectations is carried out to reduce the value of the model validation metric. After that, the model validation metric of the model output is recalculated with the calibrated model parameters, and the result shows that a reduction of the discrepancy in the conditional expectations can help decrease the difference in model output. At last, several examples are employed to demonstrate the rationality and necessity of the methodology in the case of both a single validation site and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship of conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • Validation and calibration process are applied at single site and multiple sites. • Validation and calibration process show a superiority over existing methods

  4. A single model procedure for tank calibration function estimation

    International Nuclear Information System (INIS)

    York, J.C.; Liebetrau, A.M.

    1995-01-01

    Reliable tank calibrations are a vital component of any measurement control and accountability program for bulk materials in a nuclear reprocessing facility. Tank volume calibration functions used in nuclear materials safeguards and accountability programs are typically constructed from several segments, each of which is estimated independently. Ideally, the segments correspond to structural features in the tank. In this paper the authors use an extension of the Thomas-Liebetrau model to estimate the entire calibration function in a single step. This procedure automatically takes significant run-to-run differences into account and yields an estimate of the entire calibration function in one operation. As with other procedures, the first step is to define suitable calibration segments. Next, a polynomial of low degree is specified for each segment. In contrast with the conventional practice of constructing a separate model for each segment, this information is used to set up the design matrix for a single model that encompasses all of the calibration data. Estimation of the model parameters is then done using conventional statistical methods. The method described here has several advantages over traditional methods. First, modeled run-to-run differences can be taken into account automatically at the estimation step. Second, no interpolation is required between successive segments. Third, variance estimates are based on all the data, rather than that from a single segment, with the result that discontinuities in confidence intervals at segment boundaries are eliminated. Fourth, the restrictive assumption of the Thomas-Liebetrau method, that the measured volumes be the same for all runs, is not required. Finally, the proposed methods are readily implemented using standard statistical procedures and widely-used software packages
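
    The single-model idea, i.e., estimating all calibration segments from one design matrix in one least-squares step, can be sketched as follows; the breakpoint, the segment polynomials (linear here) and the synthetic level/volume data are illustrative assumptions, and run-to-run effects are omitted.

```python
import numpy as np

# One design matrix covering two calibration segments, fitted in a single
# least-squares step. Breakpoint, polynomial degrees and synthetic data are
# illustrative assumptions.
rng = np.random.default_rng(3)
level = np.sort(rng.uniform(0, 100, 120))
true_volume = np.where(level < 40, 2.0 * level, 80.0 + 3.5 * (level - 40.0))
volume = true_volume + rng.normal(0.0, 1.0, level.size)

knot = 40.0
in_seg1 = level < knot
X = np.zeros((level.size, 4))
X[in_seg1, 0] = 1.0                       # segment-1 intercept
X[in_seg1, 1] = level[in_seg1]            # segment-1 slope
X[~in_seg1, 2] = 1.0                      # segment-2 intercept
X[~in_seg1, 3] = level[~in_seg1] - knot   # segment-2 slope (about the knot)

beta, *_ = np.linalg.lstsq(X, volume, rcond=None)
print(np.round(beta, 2))   # approximately [0, 2, 80, 3.5]
```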

  5. Adaptive filters and internal models: multilevel description of cerebellar function.

    Science.gov (United States)

    Porrill, John; Dean, Paul; Anderson, Sean R

    2013-11-01

    Cerebellar function is increasingly discussed in terms of engineering schemes for motor control and signal processing that involve internal models. To address the relation between the cerebellum and internal models, we adopt the chip metaphor that has been used to represent the combination of a homogeneous cerebellar cortical microcircuit with individual microzones having unique external connections. This metaphor indicates that identifying the function of a particular cerebellar chip requires knowledge of both the general microcircuit algorithm and the chip's individual connections. Here we use a popular candidate algorithm as embodied in the adaptive filter, which learns to decorrelate its inputs from a reference ('teaching', 'error') signal. This algorithm is computationally powerful enough to be used in a very wide variety of engineering applications. However, the crucial issue is whether the external connectivity required by such applications can be implemented biologically. We argue that some applications appear to be in principle biologically implausible: these include the Smith predictor and Kalman filter (for state estimation), and the feedback-error-learning scheme for adaptive inverse control. However, even for plausible schemes, such as forward models for noise cancellation and novelty-detection, and the recurrent architecture for adaptive inverse control, there is unlikely to be a simple mapping between microzone function and internal model structure. This initial analysis suggests that cerebellar involvement in particular behaviours is therefore unlikely to have a neat classification into categories such as 'forward model'. It is more likely that cerebellar microzones learn a task-specific adaptive-filter operation which combines a number of signal-processing roles. Copyright © 2012 Elsevier Ltd. All rights reserved.
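
    As an illustration of the adaptive-filter algorithm mentioned above, a minimal least-mean-squares (LMS) filter that learns to predict, and hence decorrelate, a reference signal from its inputs is sketched below; the signals, filter length and learning rate are assumptions for illustration.

```python
import numpy as np

# Minimal LMS adaptive filter: the weights learn to predict the reference
# ("teaching"/"error") signal from a tapped delay line of the input, i.e. to
# decorrelate input and reference. Signals and parameters are illustrative.
rng = np.random.default_rng(4)
n, taps, mu = 5000, 8, 0.01
x = rng.standard_normal(n)                                       # input signal
h_true = np.array([0.6, -0.3, 0.2, 0.1, 0.0, 0.05, 0.0, 0.0])
d = np.convolve(x, h_true)[:n] + 0.05 * rng.standard_normal(n)   # reference

w = np.zeros(taps)
for t in range(taps, n):
    u = x[t - taps + 1:t + 1][::-1]   # most recent samples first
    e = d[t] - w @ u                  # residual (decorrelation error)
    w += mu * e * u                   # LMS weight update

print(np.round(w, 2))                 # approaches h_true
```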

  6. Confronting species distribution model predictions with species functional traits.

    Science.gov (United States)

    Wittmann, Marion E; Barnes, Matthew A; Jerde, Christopher L; Jones, Lisa A; Lodge, David M

    2016-02-01

    Species distribution models are valuable tools in studies of biogeography, ecology, and climate change and have been used to inform conservation and ecosystem management. However, species distribution models typically incorporate only climatic variables and species presence data. Model development or validation rarely considers functional components of species traits or other types of biological data. We implemented a species distribution model (Maxent) to predict global climate habitat suitability for Grass Carp (Ctenopharyngodon idella). We then tested the relationship between the degree of climate habitat suitability predicted by Maxent and the individual growth rates of both wild (N = 17) and stocked (N = 51) Grass Carp populations using correlation analysis. The Grass Carp Maxent model accurately reflected the global occurrence data (AUC = 0.904). Observations of Grass Carp growth rate covered six continents and ranged from 0.19 to 20.1 g day−1. Species distribution model predictions were correlated (r = 0.5, 95% CI (0.03, 0.79)) with observed growth rates for wild Grass Carp populations but were not correlated (r = -0.26, 95% CI (-0.5, 0.012)) with stocked populations. Further, a review of the literature indicates that the few studies for other species that have previously assessed the relationship between the degree of predicted climate habitat suitability and species functional traits have also discovered significant relationships. Thus, species distribution models may provide inferences beyond just where a species may occur, providing a useful tool to understand the linkage between species distributions and underlying biological mechanisms.

  7. Theoretical limit of spatial resolution in diffuse optical tomography using a perturbation model

    International Nuclear Information System (INIS)

    Konovalov, A B; Vlasov, V V

    2014-01-01

    We have assessed the limit of spatial resolution of time-domain diffuse optical tomography (DOT) based on a perturbation reconstruction model. From the viewpoint of the structure reconstruction accuracy, three different approaches to solving the inverse DOT problem are compared. The first approach involves reconstruction of diffuse tomograms from straight lines, the second – from average curvilinear trajectories of photons and the third – from total banana-shaped distributions of photon trajectories. In order to obtain estimates of resolution, we have derived analytical expressions for the point spread function and modulation transfer function, as well as performed a numerical experiment on reconstruction of rectangular scattering objects with circular absorbing inhomogeneities. It is shown that in passing from reconstruction from straight lines to reconstruction using distributions of photon trajectories we can improve resolution by almost an order of magnitude and exceed the accuracy of reconstruction of multi-step algorithms used in DOT. (optical tomography)
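
    The link between the point spread function and the modulation transfer function used in such assessments can be sketched numerically: the MTF is the normalized magnitude of the Fourier transform of the PSF. The Gaussian PSF and its width below are illustrative assumptions.

```python
import numpy as np

# MTF as the normalized magnitude of the Fourier transform of a 1-D PSF.
# The Gaussian PSF and its width are illustrative assumptions.
x = np.linspace(-20.0, 20.0, 401)          # mm
sigma = 3.0                                # effective PSF width, mm
psf = np.exp(-x**2 / (2.0 * sigma**2))
psf /= psf.sum()

mtf = np.abs(np.fft.rfft(psf))
mtf /= mtf[0]
freqs = np.fft.rfftfreq(x.size, d=x[1] - x[0])   # cycles/mm

# spatial frequency where the modulation drops below 10%
print(float(freqs[np.argmax(mtf < 0.1)]))
```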

  8. Quantitative and Functional Requirements for Bioluminescent Cancer Models.

    Science.gov (United States)

    Feys, Lynn; Descamps, Benedicte; Vanhove, Christian; Vermeulen, Stefan; Vandesompele, J O; Vanderheyden, Katrien; Messens, Kathy; Bracke, Marc; De Wever, Olivier

    2016-01-01

    Bioluminescent cancer models are widely used but detailed quantification of the luciferase signal and functional comparison with a non-transfected control cell line are generally lacking. In the present study, we provide quantitative and functional tests for luciferase-transfected cells. We quantified the luciferase expression in BLM and HCT8/E11 transfected cancer cells, and examined the effect of long-term luciferin exposure. The present study also investigated functional differences between parental and transfected cancer cells. Our results showed that quantification of different single-cell-derived populations is superior with droplet digital polymerase chain reaction. Quantification of luciferase protein level and luciferase bioluminescent activity is only useful when there is a significant difference in copy number. Continuous exposure of cell cultures to luciferin leads to inhibitory effects on mitochondrial activity, cell growth and bioluminescence. These inhibitory effects correlate with luciferase copy number. Cell culture and mouse xenograft assays showed no significant functional differences between luciferase-transfected and parental cells. Luciferase-transfected cells should be validated by quantitative and functional assays before starting large-scale experiments. Copyright © 2016 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  9. Gaussian copula as a likelihood function for environmental models

    Science.gov (United States)

    Wani, O.; Espadas, G.; Cecinati, F.; Rieckermann, J.

    2017-12-01

    Parameter estimation of environmental models always comes with uncertainty. To formally quantify this parametric uncertainty, a likelihood function needs to be formulated, which is defined as the probability of observations given fixed values of the parameter set. A likelihood function allows us to infer parameter values from observations using Bayes' theorem. The challenge is to formulate a likelihood function that reliably describes the error generating processes which lead to the observed monitoring data, such as rainfall and runoff. If the likelihood function is not representative of the error statistics, the parameter inference will give biased parameter values. Several uncertainty estimation methods that are currently being used employ Gaussian processes as a likelihood function, because of their favourable analytical properties. A Box-Cox transformation is suggested to deal with non-symmetric and heteroscedastic errors, e.g. for flow data, which are typically more uncertain in high flows than in periods with low flows. The problem with transformations is that the results are conditional on hyper-parameters, for which it is difficult to formulate the analyst's belief a priori. In an attempt to address this problem, in this research work we suggest learning the nature of the error distribution from the errors made by the model in "past" forecasts. We use a Gaussian copula to generate semiparametric error distributions. 1) We show that this copula can then be used as a likelihood function to infer parameters, breaking away from the practice of using multivariate normal distributions. Based on the results from a didactical example of predicting rainfall runoff, 2) we demonstrate that the copula captures the predictive uncertainty of the model. 3) Finally, we find that the properties of autocorrelation and heteroscedasticity of errors are captured well by the copula, eliminating the need to use transforms. In summary, our findings suggest that copulas are an
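
    A minimal sketch of using a Gaussian copula over rank-transformed residuals is shown below; the empirical-CDF marginals, the AR(1)-type correlation structure and the synthetic residuals are illustrative assumptions rather than the method of the abstract.

```python
import numpy as np
from scipy import stats

# Gaussian-copula log-density for a residual series: marginals handled by the
# empirical CDF (ranks), dependence by a Gaussian copula with an AR(1)-style
# correlation matrix. Residuals and rho are illustrative assumptions.
def gaussian_copula_loglik(residuals, rho):
    n = residuals.size
    u = (stats.rankdata(residuals) - 0.5) / n          # probability integral transform
    z = stats.norm.ppf(u)
    lags = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
    R = rho ** lags                                    # AR(1)-type correlation
    _, logdet = np.linalg.slogdet(R)
    quad = z @ (np.linalg.inv(R) - np.eye(n)) @ z
    return -0.5 * (logdet + quad)

rng = np.random.default_rng(5)
residuals = rng.standard_normal(200)
print(round(gaussian_copula_loglik(residuals, rho=0.5), 2))
```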

  10. Colour-independent partition functions in coloured vertex models

    Energy Technology Data Exchange (ETDEWEB)

    Foda, O., E-mail: omar.foda@unimelb.edu.au [Dept. of Mathematics and Statistics, University of Melbourne, Parkville, VIC 3010 (Australia); Wheeler, M., E-mail: mwheeler@lpthe.jussieu.fr [Laboratoire de Physique Théorique et Hautes Energies, CNRS UMR 7589 (France); Université Pierre et Marie Curie – Paris 6, 4 place Jussieu, 75252 Paris cedex 05 (France)

    2013-06-11

    We study lattice configurations related to S_n, the scalar product of an off-shell state and an on-shell state in rational A_n integrable vertex models, n∈{1,2}. The lattice lines are colourless and oriented. The state variables are n conserved colours that flow along the line orientations, but do not necessarily cover every bond in the lattice. Choosing boundary conditions such that the positions where the colours flow into the lattice are fixed, and where they flow out are summed over, we show that the partition functions of these configurations, with these boundary conditions, are n-independent. Our results extend to trigonometric A_n models, and to all n. This n-independence explains, in vertex-model terms, results from recent studies of S_2 (Caetano and Vieira, 2012, [1], Wheeler, (arXiv:1204.2089), [2]). Namely, 1. S_2, which depends on two sets of Bethe roots, b_1 and b_2, and cannot (as far as we know) be expressed in single determinant form, degenerates in the limit b_1→∞, and/or b_2→∞, into a product of determinants, 2. Each of the latter determinants is an A_1 vertex-model partition function.

  11. Colour-independent partition functions in coloured vertex models

    International Nuclear Information System (INIS)

    Foda, O.; Wheeler, M.

    2013-01-01

    We study lattice configurations related to S_n, the scalar product of an off-shell state and an on-shell state in rational A_n integrable vertex models, n∈{1,2}. The lattice lines are colourless and oriented. The state variables are n conserved colours that flow along the line orientations, but do not necessarily cover every bond in the lattice. Choosing boundary conditions such that the positions where the colours flow into the lattice are fixed, and where they flow out are summed over, we show that the partition functions of these configurations, with these boundary conditions, are n-independent. Our results extend to trigonometric A_n models, and to all n. This n-independence explains, in vertex-model terms, results from recent studies of S_2 (Caetano and Vieira, 2012, [1], Wheeler, (arXiv:1204.2089), [2]). Namely, 1. S_2, which depends on two sets of Bethe roots, b_1 and b_2, and cannot (as far as we know) be expressed in single determinant form, degenerates in the limit b_1→∞, and/or b_2→∞, into a product of determinants, 2. Each of the latter determinants is an A_1 vertex-model partition function.

  12. Model parameters for representative wetland plant functional groups

    Science.gov (United States)

    Williams, Amber S.; Kiniry, James R.; Mushet, David M.; Smith, Loren M.; McMurry, Scott T.; Attebury, Kelly; Lang, Megan; McCarty, Gregory W.; Shaffer, Jill A.; Effland, William R.; Johnson, Mari-Vaughn V.

    2017-01-01

    Wetlands provide a wide variety of ecosystem services including water quality remediation, biodiversity refugia, groundwater recharge, and floodwater storage. Realistic estimation of ecosystem service benefits associated with wetlands requires reasonable simulation of the hydrology of each site and realistic simulation of the upland and wetland plant growth cycles. Objectives of this study were to quantify leaf area index (LAI), light extinction coefficient (k), and plant nitrogen (N), phosphorus (P), and potassium (K) concentrations in natural stands of representative plant species for some major plant functional groups in the United States. Functional groups in this study were based on these parameters and plant growth types to enable process-based modeling. We collected data at four locations representing some of the main wetland regions of the United States. At each site, we collected on-the-ground measurements of fraction of light intercepted, LAI, and dry matter within the 2013–2015 growing seasons. Maximum LAI and k variables showed noticeable variations among sites and years, while overall averages and functional group averages give useful estimates for multisite simulation modeling. Variation within each species gives an indication of what can be expected in such natural ecosystems. For P and K, the concentrations from highest to lowest were spikerush (Eleocharis macrostachya), reed canary grass (Phalaris arundinacea), smartweed (Polygonum spp.), cattail (Typha spp.), and hardstem bulrush (Schoenoplectus acutus). Spikerush had the highest N concentration, followed by smartweed, bulrush, reed canary grass, and then cattail. These parameters will be useful for the actual wetland species measured and for the wetland plant functional groups they represent. These parameters and the associated process-based models offer promise as valuable tools for evaluating environmental benefits of wetlands and for evaluating impacts of various agronomic practices in

  13. Plant lessons: exploring ABCB functionality through structural modeling

    Directory of Open Access Journals (Sweden)

    Aurélien Bailly

    2012-01-01

    Full Text Available In contrast to mammalian ABCB1 proteins, narrow substrate specificity has been extensively documented for plant orthologs shown to catalyze the transport of the plant hormone, auxin. Using the crystal structures of the multidrug exporters Sav1866 and MmABCB1 as templates, we have developed structural models of plant ABCB proteins with a common architecture. Comparisons of these structures identified kingdom-specific candidate substrate-binding regions within the translocation chamber formed by the transmembrane domains of ABCBs from the model plant Arabidopsis. These results suggest an early evolutionary divergence of plant and mammalian ABCBs. Validation of these models becomes a priority for efforts to elucidate ABCB function and manipulate this class of transporters to enhance plant productivity and quality.

  14. Model of coupling with core in the Green function method

    International Nuclear Information System (INIS)

    Kamerdzhiev, S.P.; Tselyaev, V.I.

    1983-01-01

    Models of coupling with the core in the Green function method are considered; these generalize the conventional random-phase (chaotic-phase) method by taking into account configurations more complex than one-particle-one-hole (1p1h) configurations. Odd nuclei are studied only to the extent that the problem of the odd nucleus is solved in terms of the even-even nucleus. A microscopic model is considered for taking into account retardation effects in the mass operator M = M(ε), which corresponds to accounting for the influence of these effects only on the change of quasiparticle behaviour in a magic nucleus as compared with the behaviour described by the pure core model. The change results in fragmentation of single-particle levels, which is the main effect, and in the necessity to use a new basis, corresponding to the bare quasiparticles, instead of the shell-model one. When the formulas are derived, no concrete form of the mass operator M(ε) is used.

  15. Zebrafish models for the functional genomics of neurogenetic disorders.

    Science.gov (United States)

    Kabashi, Edor; Brustein, Edna; Champagne, Nathalie; Drapeau, Pierre

    2011-03-01

    In this review, we consider recent work using zebrafish to validate and study the functional consequences of mutations of human genes implicated in a broad range of degenerative and developmental disorders of the brain and spinal cord. Also we present technical considerations for those wishing to study their own genes of interest by taking advantage of this easily manipulated and clinically relevant model organism. Zebrafish permit mutational analyses of genetic function (gain or loss of function) and the rapid validation of human variants as pathological mutations. In particular, neural degeneration can be characterized at genetic, cellular, functional, and behavioral levels. Zebrafish have been used to knock down or express mutations in zebrafish homologs of human genes and to directly express human genes bearing mutations related to neurodegenerative disorders such as spinal muscular atrophy, ataxia, hereditary spastic paraplegia, amyotrophic lateral sclerosis (ALS), epilepsy, Huntington's disease, Parkinson's disease, fronto-temporal dementia, and Alzheimer's disease. More recently, we have been using zebrafish to validate mutations of synaptic genes discovered by large-scale genomic approaches in developmental disorders such as autism, schizophrenia, and non-syndromic mental retardation. Advances in zebrafish genetics such as multigenic analyses and chemical genetics now offer a unique potential for disease research. Thus, zebrafish hold much promise for advancing the functional genomics of human diseases, the understanding of the genetics and cell biology of degenerative and developmental disorders, and the discovery of therapeutics. This article is part of a Special Issue entitled Zebrafish Models of Neurological Diseases. Copyright © 2010 Elsevier B.V. All rights reserved.

  16. Preequilibrium decay models and the quantum Green function method

    International Nuclear Information System (INIS)

    Zhivopistsev, F.A.; Rzhevskij, E.S.; Gosudarstvennyj Komitet po Ispol'zovaniyu Atomnoj Ehnergii SSSR, Moscow. Inst. Teoreticheskoj i Ehksperimental'noj Fiziki)

    1977-01-01

    The mechanism of nuclear processes and preequilibrium decay involving complex particles is expounded on the basis of the Green function formalism without weak-interaction assumptions. The Green function method is generalized to a general nuclear reaction: A+α → B+β+γ+...+ρ, where A is the target nucleus, α is a complex particle in the initial state, B is the final nucleus, and β, γ, ..., ρ are nuclear fragments in the final state. The relationship between the generalized Green function and the S_fi-matrix is established. The resultant equations account for: 1) direct and quasi-direct processes responsible for the angular distribution asymmetry of the preequilibrium component; 2) the appearance of terms corresponding to the excitation of complex states of the final nucleus; and 3) the relationship between the preequilibrium decay model and the general models of nuclear reaction theories (Lippmann-Schwinger formalism). The formulation of preequilibrium emission via the S(T) matrix makes it possible to account successively for all the differential terms important for an investigation of the angular distribution asymmetry of emitted particles.

  17. Two-point functions in a holographic Kondo model

    Science.gov (United States)

    Erdmenger, Johanna; Hoyos, Carlos; O'Bannon, Andy; Papadimitriou, Ioannis; Probst, Jonas; Wu, Jackson M. S.

    2017-03-01

    We develop the formalism of holographic renormalization to compute two-point functions in a holographic Kondo model. The model describes a (0 + 1)-dimensional impurity spin of a gauged SU(N) interacting with a (1 + 1)-dimensional, large-N, strongly-coupled Conformal Field Theory (CFT). We describe the impurity using Abrikosov pseudo-fermions, and define an SU(N)-invariant scalar operator O built from a pseudo-fermion and a CFT fermion. At large N the Kondo interaction is of the form O†O, which is marginally relevant, and generates a Renormalization Group (RG) flow at the impurity. A second-order mean-field phase transition occurs in which O condenses below a critical temperature, leading to the Kondo effect, including screening of the impurity. Via holography, the phase transition is dual to holographic superconductivity in (1 + 1)-dimensional Anti-de Sitter space. At all temperatures, spectral functions of O exhibit a Fano resonance, characteristic of a continuum of states interacting with an isolated resonance. In contrast to Fano resonances observed for example in quantum dots, our continuum and resonance arise from a (0 + 1)-dimensional UV fixed point and RG flow, respectively. In the low-temperature phase, the resonance comes from a pole in the Green's function of the form −i⟨O⟩², which is characteristic of a Kondo resonance.

  18. Two-point functions in a holographic Kondo model

    Energy Technology Data Exchange (ETDEWEB)

    Erdmenger, Johanna [Institut für Theoretische Physik und Astrophysik, Julius-Maximilians-Universität Würzburg,Am Hubland, D-97074 Würzburg (Germany); Max-Planck-Institut für Physik (Werner-Heisenberg-Institut),Föhringer Ring 6, D-80805 Munich (Germany); Hoyos, Carlos [Department of Physics, Universidad de Oviedo, Avda. Calvo Sotelo 18, 33007, Oviedo (Spain); O’Bannon, Andy [STAG Research Centre, Physics and Astronomy, University of Southampton,Highfield, Southampton SO17 1BJ (United Kingdom); Papadimitriou, Ioannis [SISSA and INFN - Sezione di Trieste, Via Bonomea 265, I 34136 Trieste (Italy); Probst, Jonas [Rudolf Peierls Centre for Theoretical Physics, University of Oxford,1 Keble Road, Oxford OX1 3NP (United Kingdom); Wu, Jackson M.S. [Department of Physics and Astronomy, University of Alabama, Tuscaloosa, AL 35487 (United States)

    2017-03-07

    We develop the formalism of holographic renormalization to compute two-point functions in a holographic Kondo model. The model describes a (0+1)-dimensional impurity spin of a gauged SU(N) interacting with a (1+1)-dimensional, large-N, strongly-coupled Conformal Field Theory (CFT). We describe the impurity using Abrikosov pseudo-fermions, and define an SU(N)-invariant scalar operator O built from a pseudo-fermion and a CFT fermion. At large N the Kondo interaction is of the form O†O, which is marginally relevant, and generates a Renormalization Group (RG) flow at the impurity. A second-order mean-field phase transition occurs in which O condenses below a critical temperature, leading to the Kondo effect, including screening of the impurity. Via holography, the phase transition is dual to holographic superconductivity in (1+1)-dimensional Anti-de Sitter space. At all temperatures, spectral functions of O exhibit a Fano resonance, characteristic of a continuum of states interacting with an isolated resonance. In contrast to Fano resonances observed for example in quantum dots, our continuum and resonance arise from a (0+1)-dimensional UV fixed point and RG flow, respectively. In the low-temperature phase, the resonance comes from a pole in the Green’s function of the form −i⟨O⟩², which is characteristic of a Kondo resonance.

  19. A method of PSF generation for 3D brightfield deconvolution.

    Science.gov (United States)

    Tadrous, P J

    2010-02-01

    This paper addresses the problem of 3D deconvolution of through-focus widefield microscope datasets (Z-stacks). One of the most difficult stages in brightfield deconvolution is finding the point spread function. A theoretically calculated point spread function (called a 'synthetic PSF' in this paper) requires foreknowledge of many system parameters and still gives only approximate results. A point spread function measured from a sub-resolution bead suffers from low signal-to-noise ratio, compounded in the brightfield setting (by contrast to fluorescence) by absorptive, refractive and dispersal effects. This paper describes a method of point spread function estimation based on measurements of a Z-stack through a thin sample. This Z-stack is deconvolved by an idealized point spread function derived from the same Z-stack to yield a point spread function of high signal-to-noise ratio that is also inherently tailored to the imaging system. The theory is validated by a practical experiment comparing the non-blind 3D deconvolution of the yeast Saccharomyces cerevisiae with the point spread function generated using the method presented in this paper (called the 'extracted PSF') to a synthetic point spread function. Restoration of both high- and low-contrast brightfield structures is achieved with fewer artefacts using the extracted point spread function obtained with this method. Furthermore, the deconvolution progresses further (more iterations are allowed before the error function reaches its nadir) with the extracted point spread function than with the synthetic point spread function, indicating that the extracted point spread function is a better fit to the brightfield deconvolution model.
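
    To make the deconvolution step concrete, a minimal 1-D Richardson-Lucy iteration is sketched below; this is not the paper's extraction procedure itself, and the test object, PSF and iteration count are illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

# Minimal 1-D Richardson-Lucy deconvolution: given an observed profile and a
# PSF, iteratively estimate the underlying object. Object, PSF and iteration
# count are illustrative assumptions.
def richardson_lucy(observed, psf, iters=50):
    estimate = np.full_like(observed, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(iters):
        blurred = fftconvolve(estimate, psf, mode="same") + 1e-12
        estimate = estimate * fftconvolve(observed / blurred, psf_mirror, mode="same")
    return estimate

x = np.linspace(-10, 10, 201)
psf = np.exp(-x**2 / 2.0)
psf /= psf.sum()
obj = (np.abs(x) < 1).astype(float)                 # thin bar "object"
observed = fftconvolve(obj, psf, mode="same")

restored = richardson_lucy(observed, psf)
print("peak recovery ratio:", round(float(restored.max() / observed.max()), 2))
```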

  20. Stress field models from Maxwell stress functions: southern California

    Science.gov (United States)

    Bird, Peter

    2017-08-01

    The lithospheric stress field is formally divided into three components: a standard pressure which is a function of elevation (only), a topographic stress anomaly (3-D tensor field) and a tectonic stress anomaly (3-D tensor field). The boundary between topographic and tectonic stress anomalies is somewhat arbitrary, and here is based on the modeling tools available. The topographic stress anomaly is computed by numerical convolution of density anomalies with three tensor Green's functions provided by Boussinesq, Cerruti and Mindlin. By assuming either a seismically estimated or isostatic Moho depth, and by using Poisson ratio of either 0.25 or 0.5, I obtain four alternative topographic stress models. The tectonic stress field, which satisfies the homogeneous quasi-static momentum equation, is obtained from particular second derivatives of Maxwell vector potential fields which are weighted sums of basis functions representing constant tectonic stress components, linearly varying tectonic stress components and tectonic stress components that vary harmonically in one, two and three dimensions. Boundary conditions include zero traction due to tectonic stress anomaly at sea level, and zero traction due to the total stress anomaly on model boundaries at depths within the asthenosphere. The total stress anomaly is fit by least squares to both World Stress Map data and to a previous faulted-lithosphere, realistic-rheology dynamic model of the region computed with finite-element program Shells. No conflict is seen between the two target data sets, and the best-fitting model (using an isostatic Moho and Poisson ratio 0.5) gives minimum directional misfits relative to both targets. Constraints of computer memory, execution time and ill-conditioning of the linear system (which requires damping) limit harmonically varying tectonic stress to no more than six cycles along each axis of the model. The primary limitation on close fitting is that the Shells model predicts very sharp

  1. AMFESYS: Modelling and diagnosis functions for operations support

    Science.gov (United States)

    Wheadon, J.

    1993-01-01

    Packetized telemetry, combined with low station coverage for close-earth satellites, may introduce new problems in presenting to the operator a clear picture of what the spacecraft is doing. A recent ESOC study has gone some way to show, by means of a practical demonstration, how the use of subsystem models combined with artificial intelligence techniques, within a real-time spacecraft control system (SCS), can help to overcome these problems. A spin-off from using these techniques can be an improvement in the reliability of the telemetry (TM) limit-checking function, as well as the telecommand verification function, of the SCS. This paper describes the problem and how it was addressed, including an overview of the 'AMF Expert System' prototype, and proposes further work which needs to be done to prove the concept. The Automatic Mirror Furnace (AMF) is part of the payload of the European Retrievable Carrier (EURECA) spacecraft, which was launched in July 1992.

  2. Kaon quark distribution functions in the chiral constituent quark model

    Science.gov (United States)

    Watanabe, Akira; Sawada, Takahiro; Kao, Chung Wen

    2018-04-01

    We investigate the valence u and s̄ quark distribution functions of the K+ meson, v_K^(u)(x, Q²) and v_K^(s̄)(x, Q²), in the framework of the chiral constituent quark model. We judiciously choose the bare distributions at the initial scale to generate the dressed distributions at the higher scale, considering the meson cloud effects and the QCD evolution, which agree with the phenomenologically satisfactory valence quark distribution of the pion and the experimental data of the ratio v_K^(u)(x, Q²)/v_π^(u)(x, Q²). We show how the meson cloud effects affect the bare distribution functions in detail. We find that a smaller SU(3) flavor symmetry breaking effect is observed, compared with results of the preceding studies based on other approaches.

  3. The negotiated equilibrium model of spinal cord function.

    Science.gov (United States)

    Wolpaw, Jonathan R

    2018-04-16

    The belief that the spinal cord is hardwired is no longer tenable. Like the rest of the CNS, the spinal cord changes during growth and aging, when new motor behaviours are acquired, and in response to trauma and disease. This paper describes a new model of spinal cord function that reconciles its recently appreciated plasticity with its long recognized reliability as the final common pathway for behaviour. According to this model, the substrate of each motor behaviour comprises brain and spinal plasticity: the plasticity in the brain induces and maintains the plasticity in the spinal cord. Each time a behaviour occurs, the spinal cord provides the brain with performance information that guides changes in the substrate of the behaviour. All the behaviours in the repertoire undergo this process concurrently; each repeatedly induces plasticity to preserve its key features despite the plasticity induced by other behaviours. The aggregate process is a negotiation among the behaviours: they negotiate the properties of the spinal neurons and synapses that they all use. The ongoing negotiation maintains the spinal cord in an equilibrium - a negotiated equilibrium - that serves all the behaviours. This new model of spinal cord function is supported by laboratory and clinical data, makes predictions borne out by experiment, and underlies a new approach to restoring function to people with neuromuscular disorders. Further studies are needed to test its generality, to determine whether it may apply to other CNS areas such as the cerebral cortex, and to develop its therapeutic implications. This article is protected by copyright. All rights reserved.

  4. The singular multiparticle correlation function and the α-model

    International Nuclear Information System (INIS)

    Bozek, P.; Ploszajczak, M.

    1991-01-01

    The comparison is made between the two descriptions of multiparticle correlations using either the α-model or the scale-invariant distribution functions. The case of the strong and weak intermittency is discussed. These two descriptions show similar results for both the scaled factorial moments and the scaled factorial correlators. It is shown that the dimensional projection does not alter this similarity and moreover, it explains an experimentally observed difference between the slopes of factorial moments and factorial correlators. (author) 8 refs.; 3 figs

  5. Functional networks inference from rule-based machine learning models.

    Science.gov (United States)

    Lazzarini, Nicola; Widera, Paweł; Williamson, Stuart; Heer, Rakesh; Krasnogor, Natalio; Bacardit, Jaume

    2016-01-01

    Functional networks play an important role in the analysis of biological processes and systems. The inference of these networks from high-throughput (-omics) data is an area of intense research. So far, the similarity-based inference paradigm (e.g. gene co-expression) has been the most popular approach. It assumes a functional relationship between genes which are expressed at similar levels across different samples. An alternative to this paradigm is the inference of relationships from the structure of machine learning models. These models are able to capture complex relationships between variables that are often different from, or complementary to, those found by similarity-based methods. We propose a protocol to infer functional networks from machine learning models, called FuNeL. It assumes that genes used together within a rule-based machine learning model to classify the samples might also be functionally related at a biological level. The protocol is first tested on synthetic datasets and then evaluated on a test suite of 8 real-world datasets related to human cancer. The networks inferred from the real-world data are compared against gene co-expression networks of equal size, generated with 3 different methods. The comparison is performed from two different points of view. We analyse the enriched biological terms in the set of network nodes and the relationships between known disease-associated genes in the context of the network topology. The comparison confirms both the biological relevance and the complementary character of the knowledge captured by the FuNeL networks in relation to similarity-based methods and demonstrates its potential to identify known disease associations as core elements of the network. Finally, using a prostate cancer dataset as a case study, we confirm that the biological knowledge captured by our method is relevant to the disease and consistent with the specialised literature and with an independent dataset not used in the inference process. The
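
    For readers who want a concrete picture of the rule-to-network step described above, the sketch below builds an undirected co-occurrence network from a handful of classification rules; the rule list, gene names, and the use of networkx are illustrative assumptions, not the actual FuNeL implementation.

```python
from itertools import combinations
import networkx as nx

# Hypothetical rules extracted from a rule-based classifier: each rule is
# represented here only by the set of genes whose expression it tests.
rules = [
    {"GENE_A", "GENE_B", "GENE_C"},
    {"GENE_B", "GENE_D"},
    {"GENE_A", "GENE_B"},
]

# Build an undirected network: genes used together in a rule are taken to be
# functionally related; edge weights count how many rules pair them.
g = nx.Graph()
for rule in rules:
    for u, v in combinations(sorted(rule), 2):
        if g.has_edge(u, v):
            g[u][v]["weight"] += 1
        else:
            g.add_edge(u, v, weight=1)

# Rank gene pairs by how often they co-occur across rules.
for u, v, data in sorted(g.edges(data=True), key=lambda e: -e[2]["weight"]):
    print(u, v, data["weight"])
```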

  6. Analyzing availability using transfer function models and cross spectral analysis

    International Nuclear Information System (INIS)

    Singpurwalla, N.D.

    1980-01-01

    The paper shows how the methods of multivariate time series analysis can be used in a novel way to investigate the interrelationships between a series of operating (running) times and a series of maintenance (down) times of a complex system. Specifically, the techniques of cross spectral analysis are used to help obtain a Box-Jenkins type transfer function model for the running times and the down times of a nuclear reactor. A knowledge of the interrelationships between the running times and the down times is useful for an evaluation of maintenance policies, for replacement policy decisions, and for evaluating the availability and the readiness of complex systems
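
    As a rough illustration of the cross-spectral quantities referred to above, the sketch below estimates the cross-spectral density and coherence between two equally spaced series; the synthetic running-time and down-time series and the SciPy routines chosen are assumptions, and a full Box-Jenkins transfer function fit is not attempted.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)

# Hypothetical, equally spaced series: operating (running) times and the
# subsequent maintenance (down) times of a system.
n = 256
running = rng.gamma(shape=2.0, scale=100.0, size=n)
down = 0.3 * np.roll(running, 1) + rng.gamma(shape=1.5, scale=20.0, size=n)

# Cross-spectral density and coherence indicate at which "frequencies"
# (cycles per observation) the two series are related, and with what lag
# (from the phase of the cross-spectrum).
f, pxy = signal.csd(running, down, fs=1.0, nperseg=64)
_, coh = signal.coherence(running, down, fs=1.0, nperseg=64)
phase = np.angle(pxy)

for fi, ci, ph in list(zip(f, coh, phase))[:5]:
    print(f"freq={fi:.3f}  coherence={ci:.2f}  phase={ph:+.2f} rad")
```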

  7. On a Modeling of Online User Behavior Using Function Representation

    Directory of Open Access Journals (Sweden)

    Pavel Pesout

    2012-01-01

    Full Text Available Understanding the online user system requirements has become very crucial for online services providers. The existence of many users and services leads to different users’ needs. The objective of the presented work is to explore algorithms for optimizing the providers’ supply by proposing a new way to represent user requirements as continuous functions of time. We address the problems of predicting system requirements and of reducing model complexity by creating typical user behavior profiles.

  8. Bidirectional Texture Function Modeling: State of the Art Survey

    Czech Academy of Sciences Publication Activity Database

    Filip, Jiří; Haindl, Michal

    2009-01-01

    Roč. 31, č. 11 (2009), s. 1921-1940 ISSN 0162-8828 R&D Projects: GA MŠk 1M0572; GA ČR GA102/08/0593; GA AV ČR 1ET400750407 Grant - others:EC Marie Curie(BE) 41358; GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords : BTF * surface texture * 3D texture Subject RIV: BD - Theory of Information Impact factor: 4.378, year: 2009 http://library.utia.cas.cz/separaty/2009/RO/filip-bidirectional texture function modeling state of the art survey.pdf

  9. Approximate models for the analysis of laser velocimetry correlation functions

    International Nuclear Information System (INIS)

    Robinson, D.P.

    1981-01-01

    Velocity distributions in the subchannels of an eleven pin test section representing a slice through a Fast Reactor sub-assembly were measured with a dual beam laser velocimeter system using a Malvern K 7023 digital photon correlator for signal processing. Two techniques were used for data reduction of the correlation function to obtain velocity and turbulence values. Whilst both techniques were in excellent agreement on the velocity, marked discrepancies were apparent in the turbulence levels. As a consequence of this the turbulence data were not reported. Subsequent investigation has shown that the approximate technique used as the basis of Malvern's Data Processor 7023V is restricted in its range of application. In this note alternative approximate models are described and evaluated. The objective of this investigation was to develop an approximate model which could be used for on-line determination of the turbulence level. (author)

  10. Functional renormalization group study of the Anderson–Holstein model

    International Nuclear Information System (INIS)

    Laakso, M A; Kennes, D M; Jakobs, S G; Meden, V

    2014-01-01

    We present a comprehensive study of the spectral and transport properties in the Anderson–Holstein model both in and out of equilibrium using the functional renormalization group (fRG). We show how the previously established machinery of Matsubara and Keldysh fRG can be extended to include the local phonon mode. Based on the analysis of spectral properties in equilibrium we identify different regimes depending on the strength of the electron–phonon interaction and the frequency of the phonon mode. We supplement these considerations with analytical results from the Kondo model. We also calculate the nonlinear differential conductance through the Anderson–Holstein quantum dot and find clear signatures of the presence of the phonon mode. (paper)

  11. Models for predicting objective function weights in prostate cancer IMRT

    International Nuclear Information System (INIS)

    Boutilier, Justin J.; Lee, Taewoo; Craig, Tim; Sharpe, Michael B.; Chan, Timothy C. Y.

    2015-01-01

    Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics. Conclusions: The authors demonstrated that the KNN and MLR

  12. Models for predicting objective function weights in prostate cancer IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Boutilier, Justin J., E-mail: j.boutilier@mail.utoronto.ca; Lee, Taewoo [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, Ontario M5S 3G8 (Canada); Craig, Tim [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, 610 University of Avenue, Toronto, Ontario M5T 2M9, Canada and Department of Radiation Oncology, University of Toronto, 148 - 150 College Street, Toronto, Ontario M5S 3S2 (Canada); Sharpe, Michael B. [Radiation Medicine Program, UHN Princess Margaret Cancer Centre, 610 University of Avenue, Toronto, Ontario M5T 2M9 (Canada); Department of Radiation Oncology, University of Toronto, 148 - 150 College Street, Toronto, Ontario M5S 3S2 (Canada); Techna Institute for the Advancement of Technology for Health, 124 - 100 College Street, Toronto, Ontario M5G 1P5 (Canada); Chan, Timothy C. Y. [Department of Mechanical and Industrial Engineering, University of Toronto, 5 King’s College Road, Toronto, Ontario M5S 3G8, Canada and Techna Institute for the Advancement of Technology for Health, 124 - 100 College Street, Toronto, Ontario M5G 1P5 (Canada)

    2015-04-15

    Purpose: To develop and evaluate the clinical applicability of advanced machine learning models that simultaneously predict multiple optimization objective function weights from patient geometry for intensity-modulated radiation therapy of prostate cancer. Methods: A previously developed inverse optimization method was applied retrospectively to determine optimal objective function weights for 315 treated patients. The authors used an overlap volume ratio (OV) of bladder and rectum for different PTV expansions and overlap volume histogram slopes (OVSR and OVSB for the rectum and bladder, respectively) as explanatory variables that quantify patient geometry. Using the optimal weights as ground truth, the authors trained and applied three prediction models: logistic regression (LR), multinomial logistic regression (MLR), and weighted K-nearest neighbor (KNN). The population average of the optimal objective function weights was also calculated. Results: The OV at 0.4 cm and OVSR at 0.1 cm features were found to be the most predictive of the weights. The authors observed comparable performance (i.e., no statistically significant difference) between LR, MLR, and KNN methodologies, with LR appearing to perform the best. All three machine learning models outperformed the population average by a statistically significant amount over a range of clinical metrics including bladder/rectum V53Gy, bladder/rectum V70Gy, and dose to the bladder, rectum, CTV, and PTV. When comparing the weights directly, the LR model predicted bladder and rectum weights that had, on average, a 73% and 74% relative improvement over the population average weights, respectively. The treatment plans resulting from the LR weights had, on average, a rectum V70Gy that was 35% closer to the clinical plan and a bladder V70Gy that was 29% closer, compared to the population average weights. Similar results were observed for all other clinical metrics. Conclusions: The authors demonstrated that the KNN and MLR
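
    The prediction step described in this abstract can be pictured with the minimal sketch below, in which geometric features (an overlap volume ratio and an overlap-volume-histogram slope) are mapped to discretized objective-function weight classes with logistic regression and weighted KNN; the synthetic features, labels, and use of scikit-learn are assumptions, and the inverse-optimization step that produces the ground-truth weights is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Hypothetical patient-geometry features: [OV at 0.4 cm, OVSR at 0.1 cm].
X = rng.uniform(0.0, 1.0, size=(300, 2))
# Hypothetical discretized objective-function weight class (low/medium/high),
# loosely tied to the geometry purely for the sake of the example.
y = np.digitize(0.7 * X[:, 0] + 0.3 * X[:, 1], bins=[0.33, 0.66])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

lr = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
knn = KNeighborsClassifier(n_neighbors=5, weights="distance").fit(X_tr, y_tr)

print("LR accuracy :", round(lr.score(X_te, y_te), 3))
print("KNN accuracy:", round(knn.score(X_te, y_te), 3))
```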

  13. Globally COnstrained Local Function Approximation via Hierarchical Modelling, a Framework for System Modelling under Partial Information

    DEFF Research Database (Denmark)

    Øjelund, Henrik; Sadegh, Payman

    2000-01-01

    Local function approximations concern fitting low order models to weighted data in neighbourhoods of the points where the approximations are desired. Despite their generality and convenience of use, local models typically suffer, among others, from difficulties arising in physical interpretation ... be obtained. This paper presents a new approach for system modelling under partial (global) information (or the so-called Gray-box modelling) that seeks to preserve the benefits of the global as well as local methodologies within a unified framework. While the proposed technique relies on local approximations ... simultaneously with the (local estimates of) function values. The approach is applied to modelling of a linear time variant dynamic system under a prior linear time invariant structure, where local regression fails as a result of high dimensionality.

  14. A marked correlation function for constraining modified gravity models

    Science.gov (United States)

    White, Martin

    2016-11-01

    Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a `generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.

  15. A marked correlation function for constraining modified gravity models

    Energy Technology Data Exchange (ETDEWEB)

    White, Martin, E-mail: mwhite@berkeley.edu [Department of Physics, University of California, Berkeley, CA 94720 (United States)

    2016-11-01

    Future large scale structure surveys will provide increasingly tight constraints on our cosmological model. These surveys will report results on the distance scale and growth rate of perturbations through measurements of Baryon Acoustic Oscillations and Redshift-Space Distortions. It is interesting to ask: what further analyses should become routine, so as to test as-yet-unknown models of cosmic acceleration? Models which aim to explain the accelerated expansion rate of the Universe by modifications to General Relativity often invoke screening mechanisms which can imprint a non-standard density dependence on their predictions. This suggests density-dependent clustering as a 'generic' constraint. This paper argues that a density-marked correlation function provides a density-dependent statistic which is easy to compute and report and requires minimal additional infrastructure beyond what is routinely available to such survey analyses. We give one realization of this idea and study it using low order perturbation theory. We encourage groups developing modified gravity theories to see whether such statistics provide discriminatory power for their models.
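
    To make the proposed statistic concrete, the following sketch computes a density-marked correlation function for a toy 2D point set: each point carries a mark derived from its local density, and M(r) compares the mean mark product of pairs at separation r with the squared mean mark. The mark transform, bin edges, and toy data are assumptions; a survey analysis would use far more careful estimators.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "galaxy" positions in a 2D unit box.
pos = rng.uniform(0.0, 1.0, size=(1000, 2))

# Local density proxy: number of neighbours within r_dens of each point.
r_dens = 0.05
d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
np.fill_diagonal(d, np.inf)
density = (d < r_dens).sum(axis=1)

# Density-dependent mark, e.g. down-weighting points in dense regions
# (one possible transform; the paper considers a family of such marks).
rho_star, p = density.mean(), 1.0
mark = ((rho_star + 1.0) / (rho_star + density)) ** p

# Marked correlation function M(r) = <m_i * m_j>_{pairs at separation r} / <m>^2.
bins = np.linspace(0.02, 0.2, 10)
mbar2 = mark.mean() ** 2
iu = np.triu_indices(len(pos), k=1)
pair_sep = d[iu]
pair_mm = (mark[:, None] * mark[None, :])[iu]
for lo, hi in zip(bins[:-1], bins[1:]):
    sel = (pair_sep >= lo) & (pair_sep < hi)
    if sel.any():
        print(f"r in [{lo:.2f}, {hi:.2f}):  M(r) = {pair_mm[sel].mean() / mbar2:.3f}")
```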

  16. Functional overview of the Production Planning Model (ProdMod)

    International Nuclear Information System (INIS)

    Gregory, M.V.; Paul, P.K.

    1995-09-01

    The Production Planning Model (ProdMod) has been developed by SRTC for use by High Level Waste Program Management and High Level Waste Engineering as a fast running, integrated, comprehensive model of the entire SRS high level waste (HLW) complex. ProdMod can simulate the response of the HLW complex from its current state to the end of tank clean-up or to any intermediate point. The present document describes the initial release of ProdMod at the end of FY95: a model version that contains all the significant elements from the High-level Waste System Plan Revision 5 and is capable of running the simulation all the way to the postulated completion of waste removal. The scenario represented by this release simulates approximately 70 years of operation of the HLW complex (out to FY2065). This initial release of ProdMod will serve as the immediate starting point for the modeling of the High-Level Waste System Plan Revision 6. Thus ProdMod is expected to be in a state of continuous change and improvement. The initial goal has been to generate a simulation of the processes of interest, with the emphasis on mass and volume balances tracked throughout the HLW complex. That has been accomplished. Future development will add a set of cost equations to the process equations and extend the model for use as a linear programming (optimization) application. The goal of this later phase will be to free the ProdMod user to some extent from the need to set up detailed simulation scenarios: the model will automatically make operational choices which minimize or maximize a given objective function. Appendix A contains the source code.

  17. Informing soil models using pedotransfer functions: challenges and perspectives

    Science.gov (United States)

    Pachepsky, Yakov; Romano, Nunzio

    2015-04-01

    Pedotransfer functions (PTFs) are empirical relationships between parameters of soil models and more easily obtainable data on soil properties. PTFs have become an indispensable tool in modeling soil processes. As alternative methods to direct measurements, they bridge the data we have and data we need by using soil survey and monitoring data to enable modeling for real-world applications. Pedotransfer is extensively used in soil models addressing the most pressing environmental issues. The following is an attempt to provoke a discussion by listing current issues that are faced by PTF development. 1. As more intricate biogeochemical processes are being modeled, development of PTFs for parameters of those processes becomes essential. 2. Since the equations to express PTF relationships are essentially unknown, there has been a trend to employ highly nonlinear equations, e.g. neural networks, which in theory are flexible enough to simulate any dependence. This, however, comes with the penalty of large number of coefficients that are difficult to estimate reliably. A preliminary classification applied to PTF inputs and PTF development for each of the resulting groups may provide simple, transparent, and more reliable pedotransfer equations. 3. The multiplicity of models, i.e. presence of several models producing the same output variables, is commonly found in soil modeling, and is a typical feature in the PTF research field. However, PTF intercomparisons are lagging behind PTF development. This is aggravated by the fact that coefficients of PTF based on machine-learning methods are usually not reported. 4. The existence of PTFs is the result of some soil processes. Using models of those processes to generate PTFs, and more general, developing physics-based PTFs remains to be explored. 5. Estimating the variability of soil model parameters becomes increasingly important, as the newer modeling technologies such as data assimilation, ensemble modeling, and model

  18. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.

    2010-06-01

    The authors consider the analysis of hierarchical longitudinal functional data based upon a functional principal components approach. In contrast to standard frequentist approaches to selecting the number of principal components, the authors do model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order to overcome this, the authors show how to apply Stochastic Approximation Monte Carlo (SAMC) to this problem, a method that has the potential to explore the entire space and does not become trapped in local extrema. The combination of reversible jump methods and SAMC in hierarchical longitudinal functional data is simplified by a polar coordinate representation of the principal components. The approach is easy to implement and does well in simulated data in determining the distribution of the number of principal components, and in terms of its frequentist estimation properties. Empirical applications are also presented.

  19. Minimal models on Riemann surfaces: The partition functions

    International Nuclear Information System (INIS)

    Foda, O.

    1990-01-01

    The Coulomb gas representation of the A_n series of c = 1 − 6/[m(m+1)], m ≥ 3, minimal models is extended to compact Riemann surfaces of genus g > 1. An integral representation of the partition functions, for any m and g, is obtained as the difference of two gaussian correlation functions of a background charge, (background charge on the sphere) × (1−g), and screening charges integrated over the surface. The coupling constants × (compactification radius)² of the gaussian expressions are, as on the torus, m(m+1) and m/(m+1). The partition functions obtained are modular invariant, have the correct conformal anomaly and - restricting the propagation of states to a single handle - one can verify explicitly the decoupling of the null states. On the other hand, they are given in terms of coupled surface integrals, and it remains to show how they degenerate consistently to those on lower-genus surfaces. In this work, this is clear only at the lattice level, where no screening charges appear. (orig.)

  20. Minimal models on Riemann surfaces: The partition functions

    Energy Technology Data Exchange (ETDEWEB)

    Foda, O. (Katholieke Univ. Nijmegen (Netherlands). Inst. voor Theoretische Fysica)

    1990-06-04

    The Coulomb gas representation of the A_n series of c = 1 − 6/(m(m+1)), m ≥ 3, minimal models is extended to compact Riemann surfaces of genus g > 1. An integral representation of the partition functions, for any m and g, is obtained as the difference of two gaussian correlation functions of a background charge, (background charge on the sphere) × (1−g), and screening charges integrated over the surface. The coupling constants × (compactification radius)² of the gaussian expressions are, as on the torus, m(m+1) and m/(m+1). The partition functions obtained are modular invariant, have the correct conformal anomaly and - restricting the propagation of states to a single handle - one can verify explicitly the decoupling of the null states. On the other hand, they are given in terms of coupled surface integrals, and it remains to show how they degenerate consistently to those on lower-genus surfaces. In this work, this is clear only at the lattice level, where no screening charges appear. (orig.).

  1. Computer Modeling of the Earliest Cellular Structures and Functions

    Science.gov (United States)

    Pohorille, Andrew; Chipot, Christophe; Schweighofer, Karl

    2000-01-01

    In the absence of an extinct or extant record of protocells (the earliest ancestors of contemporary cells), the most direct way to test our understanding of the origin of cellular life is to construct laboratory models of protocells. Such efforts are currently underway in the NASA Astrobiology Program. They are accompanied by computational studies aimed at explaining self-organization of simple molecules into ordered structures and developing designs for molecules that perform proto-cellular functions. Many of these functions, such as import of nutrients, capture and storage of energy, and response to changes in the environment, are carried out by proteins bound to membranes. The simulations address (a) how peptides organize into ordered structures at water-membrane interfaces and insert into membranes, (b) how these peptides aggregate to form membrane-spanning structures (e.g. channels), and (c) by what mechanisms such aggregates perform essential proto-cellular functions, such as the transport of protons across cell walls, a key step in cellular bioenergetics. The simulations were performed using the molecular dynamics method, in which Newton's equations of motion for each atom in the system are solved iteratively. The problems of interest required simulations on multi-nanosecond time scales, which corresponded to 10⁶-10⁸ time steps.

  2. An Evolutionary Game Theory Model of Spontaneous Brain Functioning.

    Science.gov (United States)

    Madeo, Dario; Talarico, Agostino; Pascual-Leone, Alvaro; Mocenni, Chiara; Santarnecchi, Emiliano

    2017-11-22

    Our brain is a complex system of interconnected regions spontaneously organized into distinct networks. The integration of information between and within these networks is a continuous process that can be observed even when the brain is at rest, i.e. not engaged in any particular task. Moreover, such spontaneous dynamics show predictive value for individual cognitive profiles and constitute a potential marker in neurological and psychiatric conditions, making their understanding of fundamental importance in modern neuroscience. Here we present a theoretical and mathematical model based on an extension of evolutionary game theory on networks (EGN), able to capture the brain's interregional dynamics by balancing emulative and non-emulative attitudes among brain regions. This results in the net behavior of nodes composing resting-state networks identified using functional magnetic resonance imaging (fMRI), determining their moment-to-moment level of activation and inhibition as expressed by positive and negative shifts in the BOLD fMRI signal. By spontaneously generating low-frequency oscillatory behaviors, the EGN model is able to mimic functional connectivity dynamics, approximate fMRI time series on the basis of an initial subset of available data, as well as simulate the impact of network lesions and provide evidence of compensation mechanisms across networks. Results suggest evolutionary game theory on networks as a new potential framework for the understanding of human brain network dynamics.
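
    The sketch below gives a minimal flavour of evolutionary game dynamics on a network, the mathematical core that the abstract builds on: each node updates a mixed strategy through a replicator-type rule driven by payoffs accumulated against its neighbours. The small graph, payoff matrix, and update rule are illustrative assumptions, not the EGN equations used by the authors.

```python
import numpy as np

rng = np.random.default_rng(3)

# Small undirected "connectome": adjacency matrix of five regions.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)

# Two strategies per node (say "activate" / "inhibit") and a payoff matrix.
B = np.array([[1.0, 0.2],
              [0.8, 1.0]])

n, s = A.shape[0], B.shape[0]
x = rng.dirichlet(np.ones(s), size=n)   # mixed strategy of each node

dt, steps = 0.01, 2000
for _ in range(steps):
    for v in range(n):
        # Payoff vector of node v against its neighbours' mixed strategies.
        f = sum(A[v, w] * (B @ x[w]) for w in range(n))
        phi = x[v] @ f                   # average payoff of node v
        x[v] += dt * x[v] * (f - phi)    # replicator-type update
        x[v] = np.clip(x[v], 1e-9, None)
        x[v] /= x[v].sum()

print(np.round(x, 3))  # long-run strategy mix per region
```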

  3. Microscopic models for hadronic form factors and vertex functions

    International Nuclear Information System (INIS)

    Santhanam, I.; Bhatnagar, S.; Mitra, A.N.

    1990-01-01

    We review the status of nucleon (N) and few-nucleon form factors (f.f.'s) from the viewpoint of a gradual unfolding of successively inner degrees of freedom (d.o.f.) with increase in q². To this end we focus attention on the problem of a microscopic formulation of hadronic vertex functions (v.f.) from the point of view of their key role in understanding the physics of a large variety of few-hadron reactions on the one hand, and their practical usefulness in articulating the internal dynamics of hadron and few-hadron systems on the other hand. The criterion of an integrated view from low-energy spectroscopy to high-q² amplitudes is employed to emphasize the desirability of formulations in terms of relativistic dynamical equations based on Lorentz and gauge invariance in preference to phenomenological models, which often require additional assumptions beyond their original premises to extend their applicability domains. In this respect, the practical possibilities of the Bethe-Salpeter equation (BSE) in articulating the necessary dynamical ingredients are emphasized on a two-tier basis, the basis constants (3) being pre-determined from the mass spectral data (1st stage) in preparation for the construction of the hadron-quark vertex functions (2nd stage). An explicit construction is outlined for meson-quark and baryon-quark vertex functions as well as of meson-nucleon vertex functions in a stepwise fashion. The role of the latter as basic parameter-free ingredients is discussed for possible use in the more serious treatment in the current literature of quark-meson level (α) and meson-isobar (β) d.o.f. in 2-N and 3-N form factor studies. Since most of these studies are characterized by the use of RGM techniques at the six-quark level, a comparative discussion is also given of several contemporary RGM based models. Finally, the concrete prospects for employing such hadron-quark vertex functions for evaluating pp-bar annihilation amplitudes are briefly indicated

  4. Density Functional Theory Modeling of Ferrihydrite Nanoparticle Adsorption Behavior

    Science.gov (United States)

    Kubicki, J.

    2016-12-01

    Ferrihydrite is a critical substrate for adsorption of oxyanion species in the environment1. The nanoparticulate nature of ferrihydrite is inherent to its formation, and hence it has been called a "nano-mineral"2. The nano-scale size and unusual composition of ferrihydrite has made structural determination of this phase problematic. Michel et al.3 have proposed an atomic structure for ferrihydrite, but this model has been controversial4,5. Recent work has shown that the Michel et al.3 model structure may be reasonably accurate despite some deficiencies6-8. An alternative model has been proposed by Manceau9. This work utilizes density functional theory (DFT) calculations to model both the structure of ferrihydrite nanoparticles based on the Michel et al. 3 model as refined in Hiemstra8 and the modified akdalaite model of Manceau9. Adsorption energies of carbonate, phosphate, sulfate, chromate, arsenite and arsenate are calculated. Periodic projector-augmented planewave calculations were performed with the Vienna Ab-initio Simulation Package (VASP10) on an approximately 1.7 nm diameter Michel nanoparticle (Fe38O112H110) and on a 2 nm Manceau nanoparticle (Fe38O95H76). After energy minimization of the surface H and O atoms, the model will be used to assess the possible configurations of adsorbed oxyanions on the model nanoparticles. Brown G.E. Jr. and Calas G. (2012) Geochemical Perspectives, 1, 483-742. Hochella M.F. and Madden A.S. (2005) Elements, 1, 199-203. Michel, F.M., Ehm, L., Antao, S.M., Lee, P.L., Chupas, P.J., Liu, G., Strongin, D.R., Schoonen, M.A.A., Phillips, B.L., and Parise, J.B., 2007, Science, 316, 1726-1729. Rancourt, D.G., and Meunier, J.F., 2008, American Mineralogist, 93, 1412-1417. Manceau, A., 2011, American Mineralogist, 96, 521-533. Maillot, F., Morin, G., Wang, Y., Bonnin, D., Ildefonse, P., Chaneac, C., Calas, G., 2011, Geochimica et Cosmochimica Acta, 75, 2708-2720. Pinney, N., Kubicki, J.D., Middlemiss, D.S., Grey, C.P., and Morgan, D

  5. Applying fuzzy analytic network process in quality function deployment model

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Afsharkazemi

    2012-08-01

    Full Text Available In this paper, we propose an empirical study of QFD implementation when fuzzy numbers are used to handle the uncertainty associated with different components of the proposed model. We implement a fuzzy analytic network process to find the relative importance of various criteria, and using fuzzy numbers we calculate the relative importance of these factors. The proposed model uses a fuzzy matrix and the house of quality to study product development in QFD, as well as the second phase, i.e. part deployment. In most previous research, the primary focus when implementing quality function deployment has been only on customer requirements (CRs), while other criteria such as production costs and manufacturing costs were disregarded. The results of using the fuzzy analytic network process based on the QFD model in the Daroupat packaging company to develop PVDC show that the most important indexes are being waterproof, resistant pill packages, and production cost. In addition, the PVDC coating is the most important index from the company experts’ point of view.

  6. Modeling fire occurrence as a function of landscape

    Science.gov (United States)

    Loboda, T. V.; Carroll, M.; DiMiceli, C.

    2011-12-01

    Wildland fire is a prominent component of ecosystem functioning worldwide. Nearly all ecosystems experience the impact of naturally occurring or anthropogenically driven fire. Here, we present a spatially explicit and regionally parameterized Fire Occurrence Model (FOM) aimed at developing fire occurrence estimates at landscape and regional scales. The model provides spatially explicit scenarios of fire occurrence based on the available records from fire management agencies, satellite observations, and auxiliary geospatial data sets. Fire occurrence is modeled as a function of the risk of ignition, potential fire behavior, and fire weather using internal regression tree-driven algorithms and empirically established, regionally derived relationships between fire occurrence, fire behavior, and fire weather. The FOM presents a flexible modeling structure with a set of internal globally available default geospatial independent and dependent variables. However, the flexible modeling environment adapts to ingest a variable number, resolution, and content of inputs provided by the user to supplement or replace the default parameters to improve the model's predictive capability. A Southern California FOM instance (SC FOM) was developed using satellite assessments of fire activity from a suite of Landsat and Moderate Resolution Imaging Spectroradiometer (MODIS) satellite data, Monitoring Trends in Burn Severity fire perimeters, and auxiliary geospatial information including land use and ownership, utilities, transportation routes, and the Remote Automated Weather Station data records. The model was parameterized based on satellite data acquired between 2001 and 2009 and fire management fire perimeters available prior to 2009. SC FOM predictive capabilities were assessed using observed fire occurrence available from the MODIS active fire product during 2010. The results show that SC FOM provides a realistic estimate of fire occurrence at the landscape level: the fraction of
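
    As a toy analogue of the regression-tree-driven step mentioned above, the sketch below trains a decision tree on synthetic landscape descriptors to score fire occurrence; the feature names, labels, and use of scikit-learn are assumptions, and the actual FOM ingests far richer geospatial inputs.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Hypothetical per-pixel landscape descriptors:
# [distance to nearest road (km), fuel dryness index, terrain ruggedness].
X = np.column_stack([
    rng.exponential(2.0, 5000),
    rng.uniform(0.0, 1.0, 5000),
    rng.uniform(0.0, 10.0, 5000),
])
# Synthetic "burned during the following season" label, loosely tied to the features.
logit = -2.0 + 2.4 * X[:, 1] - 0.3 * X[:, 0]
y = rng.random(5000) < 1.0 / (1.0 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=4, min_samples_leaf=50).fit(X_tr, y_tr)

print("held-out accuracy   :", round(tree.score(X_te, y_te), 3))
print("feature importances :", np.round(tree.feature_importances_, 3))
```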

  7. Statistical Modelling of Resonant Cross Section Structure in URR, Model of the Characteristic Function

    International Nuclear Information System (INIS)

    Koyumdjieva, N.

    2006-01-01

    A statistical model for the resonant cross section structure in the Unresolved Resonance Region has been developed in the framework of the R-matrix formalism in the Reich-Moore approach, with effective accounting of the fluctuations of the resonance parameters. The model uses only the average resonance parameters and can be effectively applied to analyses of cross section functionals averaged over many resonances. Those are cross section moments, and transmission and self-indication functions measured through a thick sample. In this statistical model the resonant cross section structure is accepted to be periodic and the R-matrix is a function of ε = E/D with period 0 ≤ ε ≤ N: R_nc(ε) = (π/2)·√(S_n·S_c)·(1/N)·Σ_{i=1..N} β_in·β_ic·cot[π(ε_i − ε − i·S_i)/N]. Here S_n, S_c and S_i are, respectively, the neutron strength function, the strength function for the fission or inelastic channel, and the strength function for radiative capture; N is the number of resonances (ε_i, β_i), which obey Porter-Thomas and Wigner statistics. The simple case of this statistical model concerns the resonant cross section structure for non-fissile nuclei below the threshold for inelastic scattering - the model of the characteristic function implemented in the HARFOR program. In the above model some improvements of the calculation of the phases and logarithmic derivatives of the neutron channels have been made. In the parameterization we use the free parameter R_l^∞, which accounts for the influence of distant resonances. The above scheme for statistical modelling of the resonant cross section structure has been applied to the evaluation of experimental data for the total, capture and inelastic cross sections of 232Th in the URR (4-150) keV, and also to the transmission and self-indication functions in (4-175) keV. A set of evaluated average resonance parameters has been obtained. The evaluated average resonance parameters in the URR are consistent with those in the Resolved Resonance Region (CRP for the Th-U cycle, Vienna, 2006).
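
    Under the reading of the R-matrix expression given above, the sketch below evaluates one random realization of R_nc(ε); the parameter values are arbitrary and the sampling details (Gaussian Porter-Thomas amplitudes, uniform level positions, and the treatment of the radiative term i·S_i as an imaginary shift) are assumptions made only to show the structure of the formula.

```python
import numpy as np

rng = np.random.default_rng(5)

# Arbitrary average resonance parameters (illustrative values only).
S_n, S_c, S_i = 1e-4, 2e-4, 5e-4   # neutron, reaction-channel and capture strength functions
N = 50                              # number of resonances in one period

# Porter-Thomas widths correspond to Gaussian amplitudes beta; the reduced
# level positions eps_i are drawn uniformly here (a proper treatment would
# use Wigner-distributed spacings).
beta_n = rng.standard_normal(N)
beta_c = rng.standard_normal(N)
eps_i = np.sort(rng.uniform(0.0, N, N))

def R_nc(eps):
    """One realization of R_nc(eps) = (pi/2) sqrt(S_n S_c) (1/N)
    sum_i beta_in beta_ic cot[pi (eps_i - eps - i S_i) / N]."""
    z = np.pi * (eps_i - eps - 1j * S_i) / N
    cot = np.cos(z) / np.sin(z)
    return (np.pi / 2.0) * np.sqrt(S_n * S_c) * np.mean(beta_n * beta_c * cot)

for eps in (5.0, 10.0, 25.0):
    val = R_nc(eps)
    print(f"eps = {eps:5.1f}   Re R_nc = {val.real:+.3e}   Im R_nc = {val.imag:+.3e}")
```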

  8. Estimating Stochastic Volatility Models using Prediction-based Estimating Functions

    DEFF Research Database (Denmark)

    Lunde, Asger; Brix, Anne Floor

    In this paper prediction-based estimating functions (PBEFs), introduced in Sørensen (2000), are reviewed and PBEFs for the Heston (1993) stochastic volatility model are derived. The finite sample performance of the PBEF based estimator is investigated in a Monte Carlo study, and compared to the performance of the GMM estimator based on conditional moments of integrated volatility from Bollerslev and Zhou (2002). The case where the observed log-price process is contaminated by i.i.d. market microstructure (MMS) noise is also investigated. First, the impact of MMS noise on the parameter estimates from ... to correctly account for the noise are investigated. Our Monte Carlo study shows that the estimator based on PBEFs outperforms the GMM estimator, both in the setting with and without MMS noise. Finally, an empirical application investigates the possible challenges and general performance of applying the PBEF ...

  9. Functional validation of candidate genes detected by genomic feature models

    DEFF Research Database (Denmark)

    Rohde, Palle Duun; Østergaard, Solveig; Kristensen, Torsten Nygaard

    2018-01-01

    Understanding the genetic underpinnings of complex traits requires knowledge of the genetic variants that contribute to phenotypic variability. Reliable statistical approaches are needed to obtain such knowledge. In genome-wide association studies, variants are tested for association with trait ... to investigate locomotor activity, and applied genomic feature prediction models to identify gene ontology (GO) categories predictive of this phenotype. Next, we applied the covariance association test to partition the genomic variance of the predictive GO terms to the genes within these terms. We then functionally assessed whether the identified candidate genes affected locomotor activity by reducing gene expression using RNA interference. In five of the seven candidate genes tested, reduced gene expression altered the phenotype. The ranking of genes within the predictive GO term was highly correlated ...

  10. Dynamic density functional theory of solid tumor growth: Preliminary models

    Directory of Open Access Journals (Sweden)

    Arnaud Chauviere

    2012-03-01

    Full Text Available Cancer is a disease that can be seen as a complex system whose dynamics and growth result from nonlinear processes coupled across wide ranges of spatio-temporal scales. The current mathematical modeling literature addresses issues at various scales, but the development of theoretical methodologies capable of bridging gaps across scales needs further study. We present a new theoretical framework based on Dynamic Density Functional Theory (DDFT), extended for the first time to the dynamics of living tissues by accounting for cell density correlations, different cell types, phenotypes and cell birth/death processes, in order to provide a biophysically consistent description of processes across the scales. We present an application of this approach to tumor growth.
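
    For orientation, the sketch below integrates a one-dimensional dynamic-density-functional-type equation, ∂ρ/∂t = ∂x[ρ ∂x(δF/δρ)], with an ideal-gas free energy plus an external potential, using explicit finite differences; the free-energy choice, grid, and parameters are assumptions, and none of the biological terms from the paper (cell birth/death, multiple phenotypes) are included.

```python
import numpy as np

# 1D grid and an initial cell-density bump.
L, nx = 10.0, 200
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
rho = 0.1 + 0.5 * np.exp(-((x - L / 2.0) ** 2))

# Ideal-gas-like free energy plus a confining external potential V(x):
# delta F / delta rho = log(rho) + V(x)   (k_B T = 1, an illustrative choice).
V = 0.05 * (x - L / 2.0) ** 2

dt, steps = 1e-4, 5000
for _ in range(steps):
    mu = np.log(rho) + V            # local chemical potential
    grad_mu = np.gradient(mu, dx)
    flux = -rho * grad_mu           # deterministic DDFT flux: -rho * grad(dF/drho)
    rho = rho - dt * np.gradient(flux, dx)
    rho = np.clip(rho, 1e-8, None)  # keep the density positive

print("total mass ~", round(float(np.sum(rho) * dx), 4),
      "  peak density:", round(float(rho.max()), 4))
```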

  11. Type-2 fuzzy elliptic membership functions for modeling uncertainty

    DEFF Research Database (Denmark)

    Kayacan, Erdal; Sarabakha, Andriy; Coupland, Simon

    2018-01-01

    Whereas type-1 and type-2 membership functions (MFs) are the core of any fuzzy logic system, there are no performance criteria available to evaluate the goodness or correctness of the fuzzy MFs. In this paper, we present an extensive analysis of the capability of type-2 elliptic fuzzy MFs ... in modeling uncertainty. Having decoupled parameters for their support and width, elliptic MFs are unique amongst existing type-2 fuzzy MFs. In this investigation, the uncertainty distribution along the elliptic MF support is studied, and a detailed analysis is given to compare and contrast its performance ... advantages mentioned above, elliptic MFs have comparable prediction results when compared to Gaussian and triangular MFs. Finally, in order to test the performance of a fuzzy logic controller with elliptic interval type-2 MFs, extensive real-time experiments are conducted for the 3D trajectory tracking problem ...

  12. Stand diameter distribution modelling and prediction based on Richards function.

    Directory of Open Access Journals (Sweden)

    Ai-guo Duan

    Full Text Available The objective of this study was to introduce the application of the Richards equation to modelling and prediction of stand diameter distributions. The long-term repeated measurement data sets, consisting of 309 diameter frequency distributions from Chinese fir (Cunninghamia lanceolata) plantations in southern China, were used. Also, 150 stands were used as fitting data, and the other 159 stands were used for testing. The nonlinear regression method (NRM) or maximum likelihood estimation method (MLEM) was applied to estimate the parameters of the models, and the parameter prediction method (PPM) and parameter recovery method (PRM) were used to predict the diameter distributions of unknown stands. Four main conclusions were obtained: (1) the R distribution presented a more accurate simulation than the three-parameter Weibull function; (2) the parameters p, q and r of the R distribution proved to be its scale, location and shape parameters, and have a deep relationship with stand characteristics, which means the parameters of the R distribution have a good theoretical interpretation; (3) the ordinate of the inflection point of the R distribution has a significant relationship with its skewness and kurtosis, and the fitted main distribution range for the cumulative diameter distribution of Chinese fir plantations was 0.4∼0.6; (4) the goodness-of-fit test showed that diameter distributions of unknown stands can be well estimated by applying the R distribution based on PRM, or on the combination of PPM and PRM, under the condition that only the quadratic mean DBH or additionally the stand age are known, and the non-rejection rates were near 80%, which is higher than the 72.33% non-rejection rate of the three-parameter Weibull function based on the combination of PPM and PRM.
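
    To illustrate the parameter-fitting side of the approach, the sketch below fits a Richards-type cumulative curve to synthetic cumulative diameter frequencies by nonlinear least squares; the functional form shown and the toy data are assumptions, since the paper's exact three-parameter R distribution is not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

def richards_cdf(d, p, q, r):
    """A Richards-type cumulative curve with scale p, location q and shape r."""
    return (1.0 + np.exp(-(d - q) / p)) ** (-r)

# Synthetic cumulative diameter distribution for one stand (DBH in cm).
dbh = np.linspace(6.0, 30.0, 13)
true = richards_cdf(dbh, 2.5, 16.0, 1.3)
rng = np.random.default_rng(6)
obs = np.clip(true + rng.normal(0.0, 0.02, dbh.size), 0.0, 1.0)

# Nonlinear regression (the NRM route in the abstract's terminology).
popt, _ = curve_fit(richards_cdf, dbh, obs, p0=[2.0, 15.0, 1.0])
print("fitted p, q, r:", np.round(popt, 3))
```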

  13. Twist operator correlation functions in O(n) loop models

    International Nuclear Information System (INIS)

    Simmons, Jacob J H; Cardy, John

    2009-01-01

    Using conformal field theoretic methods we calculate correlation functions of geometric observables in the loop representation of the O(n) model at the critical point. We focus on correlation functions containing twist operators, combining these with anchored loops, boundaries with SLE processes and with double SLE processes. We focus further upon n = 0, representing self-avoiding loops, which corresponds to a logarithmic conformal field theory (LCFT) with c = 0. In this limit the twist operator plays the role of a 0-weight indicator operator, which we verify by comparison with known examples. Using the additional conditions imposed by the twist operator null states, we derive a new explicit result for the probabilities that an SLE_{8/3} winds in various ways about two points in the upper half-plane, e.g. that the SLE passes to the left of both points. The collection of c = 0 logarithmic CFT operators that we use in deriving the winding probabilities is novel, highlighting a potential incompatibility caused by the presence of two distinct logarithmic partners to the stress tensor within the theory. We argue that both partners do appear in the theory, one in the bulk and one on the boundary and that the incompatibility is resolved by restrictive bulk-boundary fusion rules

  14. Development and Validation of a Predictive Model for Functional Outcome After Stroke Rehabilitation: The Maugeri Model.

    Science.gov (United States)

    Scrutinio, Domenico; Lanzillo, Bernardo; Guida, Pietro; Mastropasqua, Filippo; Monitillo, Vincenzo; Pusineri, Monica; Formica, Roberto; Russo, Giovanna; Guarnaschelli, Caterina; Ferretti, Chiara; Calabrese, Gianluigi

    2017-12-01

    Prediction of outcome after stroke rehabilitation may help clinicians in decision-making and planning rehabilitation care. We developed and validated a predictive tool to estimate the probability of achieving improvement in physical functioning (model 1) and a level of independence requiring no more than supervision (model 2) after stroke rehabilitation. The models were derived from 717 patients admitted for stroke rehabilitation. We used multivariable logistic regression analysis to build each model. Then, each model was prospectively validated in 875 patients. Model 1 included age, time from stroke occurrence to rehabilitation admission, admission motor and cognitive Functional Independence Measure scores, and neglect. Model 2 included age, male gender, time since stroke onset, and admission motor and cognitive Functional Independence Measure score. Both models demonstrated excellent discrimination. In the derivation cohort, the area under the curve was 0.883 (95% confidence intervals, 0.858-0.910) for model 1 and 0.913 (95% confidence intervals, 0.884-0.942) for model 2. The Hosmer-Lemeshow χ² was 4.12 (P=0.249) and 1.20 (P=0.754), respectively. In the validation cohort, the area under the curve was 0.866 (95% confidence intervals, 0.840-0.892) for model 1 and 0.850 (95% confidence intervals, 0.815-0.885) for model 2. The Hosmer-Lemeshow χ² was 8.86 (P=0.115) and 34.50 (P=0.001), respectively. Both improvement in physical functioning (hazard ratios, 0.43; 0.25-0.71; P=0.001) and a level of independence requiring no more than supervision (hazard ratios, 0.32; 0.14-0.68; P=0.004) were independently associated with improved 4-year survival. A calculator is freely available for download at https://goo.gl/fEAp81. This study provides researchers and clinicians with an easy-to-use, accurate, and validated predictive tool for potential application in rehabilitation research and stroke management. © 2017 American Heart Association, Inc.
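
    The modelling recipe described above (multivariable logistic regression with discrimination assessed by the area under the ROC curve) is sketched below on synthetic data; the predictor set, coefficients, and cohort are assumptions, and calibration testing (Hosmer-Lemeshow) is omitted.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 700

# Hypothetical predictors: age, days from stroke onset to admission,
# admission motor FIM score, admission cognitive FIM score.
X = np.column_stack([
    rng.normal(70.0, 10.0, n),
    rng.exponential(20.0, n),
    rng.uniform(13.0, 91.0, n),
    rng.uniform(5.0, 35.0, n),
])
# Synthetic "improved physical functioning" outcome, loosely tied to the FIM scores.
logit = -4.0 + 0.05 * X[:, 2] + 0.04 * X[:, 3] - 0.03 * (X[:, 0] - 70.0)
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print("apparent AUC:", round(auc, 3))
```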

  15. Function of dynamic models in systems biology: linking structure to behaviour.

    Science.gov (United States)

    Knüpfer, Christian; Beckstein, Clemens

    2013-10-08

    Dynamic models in Systems Biology are used in computational simulation experiments for addressing biological questions. The complexity of the modelled biological systems and the growing number and size of the models calls for computer support for modelling and simulation in Systems Biology. This computer support has to be based on formal representations of relevant knowledge fragments. In this paper we describe different functional aspects of dynamic models. This description is conceptually embedded in our "meaning facets" framework which systematises the interpretation of dynamic models in structural, functional and behavioural facets. Here we focus on how function links the structure and the behaviour of a model. Models play a specific role (teleological function) in the scientific process of finding explanations for dynamic phenomena. In order to fulfil this role a model has to be used in simulation experiments (pragmatical function). A simulation experiment always refers to a specific situation and a state of the model and the modelled system (conditional function). We claim that the function of dynamic models refers to both the simulation experiment executed by software (intrinsic function) and the biological experiment which produces the phenomena under investigation (extrinsic function). We use the presented conceptual framework for the function of dynamic models to review formal accounts for functional aspects of models in Systems Biology, such as checklists, ontologies, and formal languages. Furthermore, we identify missing formal accounts for some of the functional aspects. In order to fill one of these gaps we propose an ontology for the teleological function of models. We have thoroughly analysed the role and use of models in Systems Biology. The resulting conceptual framework for the function of models is an important first step towards a comprehensive formal representation of the functional knowledge involved in the modelling and simulation process

  16. Statistical Method to Overcome Overfitting Issue in Rational Function Models

    Science.gov (United States)

    Alizadeh Moghaddam, S. H.; Mokhtarzade, M.; Alizadeh Naeini, A.; Alizadeh Moghaddam, S. A.

    2017-09-01

    Rational function models (RFMs) are known as one of the most appealing models which are extensively applied in geometric correction of satellite images and map production. Overfitting is a common issue, in the case of terrain-dependent RFMs, that degrades the accuracy of RFM-derived geospatial products. This issue, resulting from the high number of RFM parameters, leads to ill-posedness of the RFMs. To tackle this problem, in this study, a fast and robust statistical approach is proposed and compared to the Tikhonov regularization (TR) method, a frequently used solution to RFM overfitting. In the proposed method, a statistical test, namely a significance test, is applied to search for the RFM parameters that are resistant against the overfitting issue. The performance of the proposed method was evaluated for two real data sets of Cartosat-1 satellite images. The obtained results demonstrate the efficiency of the proposed method in terms of the achievable level of accuracy. This technique, indeed, shows an improvement of 50-80% over TR.
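
    The Tikhonov-regularized baseline that the proposed significance-test approach is compared with can be written in a few lines; in the sketch below the design matrix is a generic polynomial stand-in for the RFM's rational terms, and the toy observations and regularization weight are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy ground-control observations: an image coordinate as an (unknown)
# low-order function of normalized ground coordinates, plus noise.
n = 40
X, Y = rng.uniform(-1.0, 1.0, n), rng.uniform(-1.0, 1.0, n)
obs = 0.8 * X - 0.3 * Y + 0.1 * X * Y + rng.normal(0.0, 0.01, n)

# Deliberately over-parameterized design matrix (a stand-in for RFM polynomial terms).
A = np.column_stack([X**i * Y**j for i in range(4) for j in range(4)])

# Ordinary least squares vs. the Tikhonov (ridge) solution (A^T A + a I)^-1 A^T b.
coef_ls, *_ = np.linalg.lstsq(A, obs, rcond=None)
alpha = 1e-2
coef_tr = np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ obs)

print("max |coef|, least squares:", round(float(np.abs(coef_ls).max()), 3))
print("max |coef|, Tikhonov     :", round(float(np.abs(coef_tr).max()), 3))
```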

  17. Functional renormalization for antiferromagnetism and superconductivity in the Hubbard model

    Energy Technology Data Exchange (ETDEWEB)

    Friederich, Simon

    2010-12-08

    Despite its apparent simplicity, the two-dimensional Hubbard model for locally interacting fermions on a square lattice is widely considered as a promising approach for the understanding of Cooper pair formation in the quasi two-dimensional high-T_c cuprate materials. In the present work this model is investigated by means of the functional renormalization group, based on an exact flow equation for the effective average action. In addition to the fermionic degrees of freedom of the Hubbard Hamiltonian, bosonic fields are introduced which correspond to the different possible collective orders of the system, for example magnetism and superconductivity. The interactions between bosons and fermions are determined by means of the method of 'rebosonization' (or 'flowing bosonization'), which can be described as a continuous, scale-dependent Hubbard-Stratonovich transformation. This method allows an efficient parameterization of the momentum-dependent effective two-particle interaction between fermions (four-point vertex), and it makes it possible to follow the flow of the running couplings into the regimes exhibiting spontaneous symmetry breaking, where bosonic fluctuations determine the types of order which are present on large length scales. Numerical results for the phase diagram are presented, which include the mutual influence of different, competing types of order. (orig.)

  18. Functional renormalization for antiferromagnetism and superconductivity in the Hubbard model

    International Nuclear Information System (INIS)

    Friederich, Simon

    2010-01-01

    Despite its apparent simplicity, the two-dimensional Hubbard model for locally interacting fermions on a square lattice is widely considered as a promising approach for the understanding of Cooper pair formation in the quasi two-dimensional high-T_c cuprate materials. In the present work this model is investigated by means of the functional renormalization group, based on an exact flow equation for the effective average action. In addition to the fermionic degrees of freedom of the Hubbard Hamiltonian, bosonic fields are introduced which correspond to the different possible collective orders of the system, for example magnetism and superconductivity. The interactions between bosons and fermions are determined by means of the method of 'rebosonization' (or 'flowing bosonization'), which can be described as a continuous, scale-dependent Hubbard-Stratonovich transformation. This method allows an efficient parameterization of the momentum-dependent effective two-particle interaction between fermions (four-point vertex), and it makes it possible to follow the flow of the running couplings into the regimes exhibiting spontaneous symmetry breaking, where bosonic fluctuations determine the types of order which are present on large length scales. Numerical results for the phase diagram are presented, which include the mutual influence of different, competing types of order. (orig.)

  19. Understanding Service Composition with Non-functional Properties Using Declarative Model-to-model Transformations

    Directory of Open Access Journals (Sweden)

    Max Mäuhlhäuser

    2011-01-01

    Full Text Available Developing applications comprising service composition is a complex task. Therefore, to lower the skill barrier for developers, it is important to describe the problem at hand on an abstract level and not to focus on implementation details. This can be done using declarative programming, which allows describing only the result of the problem (which is what the developer wants) rather than the description of the implementation. We therefore use purely declarative model-to-model transformations written in a universal model transformation language which is capable of handling even non-functional properties using optimization and mathematical programming. This makes it easier for the developer to understand and describe service composition and non-functional properties.

  20. Economic modelling of energy services: Rectifying misspecified energy demand functions

    International Nuclear Information System (INIS)

    Hunt, Lester C.; Ryan, David L.

    2015-01-01

    estimation of an aggregate energy demand function for the UK with data over the period 1960–2011. - Highlights: • Introduces explicit modelling of demands for energy services • Derives estimable energy demand equations from energy service demands • Demonstrates the implicit misspecification with typical energy demand equations • Empirical implementation using aggregate and individual energy source data • Illustrative empirical example using UK data and energy efficiency modelling

  1. Estimating FIA plot characteristics using NAIP imagery, function modeling, and the RMRS raster utility coding library

    Science.gov (United States)

    John S. Hogland; Nathaniel M. Anderson

    2015-01-01

    Raster modeling is an integral component of spatial analysis. However, conventional raster modeling techniques can require a substantial amount of processing time and storage space, often limiting the types of analyses that can be performed. To address this issue, we have developed Function Modeling. Function Modeling is a new modeling framework that streamlines the...

  2. Annotation and retrieval system of CAD models based on functional semantics

    Science.gov (United States)

    Wang, Zhansong; Tian, Ling; Duan, Wenrui

    2014-11-01

    CAD model retrieval based on functional semantics is more significant than content-based 3D model retrieval during the mechanical conceptual design phase. However, relevant research is still not fully discussed. Therefore, a functional semantic-based CAD model annotation and retrieval method is proposed to support mechanical conceptual design and design reuse, inspire designer creativity through existing CAD models, shorten design cycle, and reduce costs. Firstly, the CAD model functional semantic ontology is constructed to formally represent the functional semantics of CAD models and describe the mechanical conceptual design space comprehensively and consistently. Secondly, an approach to represent CAD models as attributed adjacency graphs (AAG) is proposed. In this method, the geometry and topology data are extracted from STEP models. On the basis of AAG, the functional semantics of CAD models are annotated semi-automatically by matching CAD models that contain the partial features of which functional semantics have been annotated manually, thereby constructing a CAD Model Repository that supports model retrieval based on functional semantics. Thirdly, a CAD model retrieval algorithm that supports multi-function extended retrieval is proposed to explore more potential creative design knowledge in the semantic level. Finally, a prototype system, called Functional Semantic-based CAD Model Annotation and Retrieval System (FSMARS), is implemented. A case demonstrates that FSMARS can successfully obtain multiple potential CAD models that conform to the desired function. The proposed research addresses actual needs and presents a new way to acquire CAD models in the mechanical conceptual design phase.

  3. Function and innervation of the locus ceruleus in a macaque model of Functional Hypothalamic Amenorrhea.

    Science.gov (United States)

    Bethea, Cynthia L; Kim, Aaron; Cameron, Judy L

    2013-02-01

    A body of knowledge implicates an increase in output from the locus ceruleus (LC) during stress. We questioned the innervation and function of the LC in our macaque model of Functional Hypothalamic Amenorrhea, also known as Stress-Induced Amenorrhea. Cohorts of macaques were initially characterized as highly stress resilient (HSR) or stress-sensitive (SS) based upon the presence or absence of ovulation during a protocol involving 2 menstrual cycles with psychosocial and metabolic stress. Afterwards, the animals were rested until normal menstrual cycles resumed and then euthanized on day 5 of a new menstrual cycle [a] in the absence of further stress; or [b] after 5 days of resumed psychosocial and metabolic stress. In this study, parameters of the LC were examined in HSR and SS animals in the presence and absence of stress (2×2 block design) using ICC and image analysis. Tyrosine hydroxylase (TH) is the rate-limiting enzyme for the synthesis of catecholamines; and the TH level was used to assess by inference, NE output. The pixel area of TH-positive dendrites extending outside the medial border of the LC was significantly increased by stress to a similar degree in both HSR and SS animals (p<0.0001). There is a significant CRF innervation of the LC. The positive pixel area of CRF boutons, lateral to the LC, was higher in SS than HSR animals in the absence of stress. Five days of moderate stress significantly increased the CRF-positive bouton pixel area in the HSR group (p<0.02), but not in the SS group. There is also a significant serotonin innervation of the LC. A marked increase in medial serotonin dendrite swelling and beading was observed in the SS+stress group, which may be a consequence of excitotoxicity. The dendrite beading interfered with analysis of axonal boutons. However, at one anatomical level, the serotonin-positive bouton area was obtained between the LC and the superior cerebellar peduncle. Serotonin-positive bouton pixel area was significantly

  4. Fitting the Probability Distribution Functions to Model Particulate Matter Concentrations

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh.I.

    2017-01-01

    The main objective of this study is to identify the best probability distribution and plotting position formula for modeling the concentrations of Total Suspended Particles (TSP) as well as Particulate Matter with an aerodynamic diameter < 10 μm (PM₁₀). The best distribution provides the estimated probabilities of exceeding the threshold limit given by the Egyptian Air Quality Limit Value (EAQLV), and the number of exceedance days is also estimated. The EAQLV standard limits for TSP and PM₁₀ concentrations are 24-h averages of 230 μg/m³ and 70 μg/m³, respectively. Five frequency distribution functions combined with seven plotting-position formulae (empirical cumulative distribution functions) are compared to fit the daily average TSP and PM₁₀ concentrations for the year 2014 in Ain Sokhna city. The Quantile-Quantile plot (Q-Q plot) is used as a method for assessing how closely a data set fits a particular distribution. A proper probability distribution that represents the TSP and PM₁₀ data has been chosen based on the statistical performance indicator values. The results show that the Hosking and Wallis plotting position combined with the Frechet distribution gave the best fit for the TSP and PM₁₀ concentrations, followed by the Burr distribution with the same plotting position. The exceedance probability and the number of days over the EAQLV are predicted using the Frechet distribution. In 2014, the exceedance probability and number of days for TSP concentrations are 0.052 and 19 days, respectively. Furthermore, the PM₁₀ concentration is found to exceed the threshold limit on 174 days
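
    To make the exceedance calculation concrete, the sketch below fits a Fréchet distribution (scipy's invweibull) to a year of daily PM₁₀ concentrations and estimates the exceedance probability and the expected number of days above the 70 μg/m³ limit. The synthetic data and the use of maximum-likelihood fitting (rather than the plotting-position comparison performed in the study) are assumptions made for illustration.

```python
# Sketch: fit a Fréchet distribution to daily PM10 data and estimate exceedances.
# Synthetic data and maximum-likelihood fitting are illustrative assumptions;
# the study compares several distributions and plotting-position formulae instead.
from scipy import stats

pm10 = stats.invweibull.rvs(c=3.0, scale=40.0, size=365, random_state=0)  # fake daily PM10 (ug/m3)
limit = 70.0  # EAQLV for PM10, 24-h average (ug/m3)

# scipy's invweibull is the Fréchet distribution; fit by maximum likelihood with loc fixed at 0
c, loc, scale = stats.invweibull.fit(pm10, floc=0.0)

p_exceed = stats.invweibull.sf(limit, c, loc=loc, scale=scale)  # P(PM10 > limit)
expected_days = p_exceed * 365.0

print(f"exceedance probability = {p_exceed:.3f}, expected exceedance days = {expected_days:.0f}")
```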

  5. Structural equation modeling of motor impairment, gross motor function, and the functional outcome in children with cerebral palsy.

    Science.gov (United States)

    Park, Eun-Young; Kim, Won-Ho

    2013-05-01

    Physical therapy intervention for children with cerebral palsy (CP) is focused on reducing neurological impairments, improving strength, and preventing the development of secondary impairments in order to improve functional outcomes. However, the relationship between motor impairments and functional outcome has not been definitively established. This study confirmed the construct of motor impairment and performed structural equation modeling (SEM) between motor impairment, gross motor function, and functional outcomes regarding activities of daily living in children with CP. 98 children (59 boys, 39 girls) with CP participated in this cross-sectional study. Mean age was 11 y 5 mo (SD 1 y 9 mo). The Manual Muscle Test (MMT), the Modified Ashworth Scale (MAS), range of motion (ROM) measurement, and the selective motor control (SMC) scale were used to assess motor impairments. Gross motor function and functional outcomes were measured using the Gross Motor Function Measure (GMFM) and the Functional Skills domain of the Pediatric Evaluation of Disability Inventory (PEDI), respectively. The measurement of motor impairment consisted of strength, spasticity, ROM, and SMC. The construct of motor impairment was confirmed through an examination of a measurement model. The proposed SEM model showed good fit indices. Motor impairment affected gross motor function (β=-0.869). Gross motor function and motor impairment affected functional outcomes directly (β=0.890) and indirectly (β=-0.773), respectively. We confirmed that the construct of motor impairment consists of strength, spasticity, ROM, and SMC, and it was identified through measurement model analysis. Functional outcomes are best predicted by gross motor function, and motor impairments have indirect effects on functional outcomes. Copyright © 2013 Elsevier Ltd. All rights reserved.
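
    Read schematically, the model above combines a measurement model (motor impairment as a latent construct indicated by strength, spasticity, ROM and SMC) with a structural model in which gross motor function mediates the effect of motor impairment on functional outcome. The LaTeX sketch below is an illustrative rendering of that structure; the symbols and the linear specification are assumptions, not the authors' notation.

```latex
% Measurement model: the latent construct MI (motor impairment) is indicated by
% MMT (strength), MAS (spasticity), ROM and SMC.
\mathrm{MMT} = \lambda_1 \mathrm{MI} + \epsilon_1, \quad
\mathrm{MAS} = \lambda_2 \mathrm{MI} + \epsilon_2, \quad
\mathrm{ROM} = \lambda_3 \mathrm{MI} + \epsilon_3, \quad
\mathrm{SMC} = \lambda_4 \mathrm{MI} + \epsilon_4
% Structural model: gross motor function (GMF, via the GMFM) mediates the effect of MI
% on functional outcome (FO, via the PEDI Functional Skills domain).
\mathrm{GMF} = \beta_1 \mathrm{MI} + \zeta_1, \qquad
\mathrm{FO} = \beta_2 \mathrm{GMF} + \zeta_2
```

    With the reported estimates ($\beta_1 \approx -0.869$, $\beta_2 \approx 0.890$), the indirect effect of motor impairment on functional outcome is $\beta_1\beta_2 \approx -0.773$, matching the value quoted in the abstract.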

  6. Two-point boundary correlation functions of dense loop models

    Directory of Open Access Journals (Sweden)

    Alexi Morin-Duchesne, Jesper Lykke Jacobsen

    2018-06-01

    Full Text Available We investigate six types of two-point boundary correlation functions in the dense loop model. These are defined as ratios $Z/Z^0$ of partition functions on the $m\times n$ square lattice, with the boundary condition for $Z$ depending on two points $x$ and $y$. We consider: the insertion of an isolated defect (a) and a pair of defects (b) in a Dirichlet boundary condition, the transition (c) between Dirichlet and Neumann boundary conditions, and the connectivity of clusters (d), loops (e) and boundary segments (f) in a Neumann boundary condition. For the model of critical dense polymers, corresponding to a vanishing loop weight ($\beta = 0$), we find determinant and pfaffian expressions for these correlators. We extract the conformal weights of the underlying conformal fields and find $\Delta = -\frac18$, $0$, $-\frac3{32}$, $\frac38$, $1$, $\tfrac \theta \pi (1+\tfrac{2\theta}\pi)$, where $\theta$ encodes the weight of one class of loops for the correlator of type f. These results are obtained by analysing the asymptotics of the exact expressions, and by using the Cardy-Peschel formula in the case where $x$ and $y$ are set to the corners. For type b, we find a $\log|x-y|$ dependence from the asymptotics, and a $\ln (\ln n)$ term in the corner free energy. This is consistent with the interpretation of the boundary condition of type b as the insertion of a logarithmic field belonging to a rank two Jordan cell. For the other values of $\beta = 2 \cos \lambda$, we use the hypothesis of conformal invariance to predict the conformal weights and find $\Delta = \Delta_{1,2}$, $\Delta_{1,3}$, $\Delta_{0,\frac12}$, $\Delta_{1,0}$, $\Delta_{1,-1}$ and $\Delta_{\frac{2\theta}\lambda+1,\frac{2\theta}\lambda+1}$, extending the results of critical dense polymers. With the results for type f, we reproduce a Coulomb gas prediction for the valence bond entanglement entropy of Jacobsen and Saleur.

  7. Applying Quality Function Deployment Model in Burn Unit Service Improvement.

    Science.gov (United States)

    Keshtkaran, Ali; Hashemi, Neda; Kharazmi, Erfan; Abbasi, Mehdi

    2016-01-01

    Quality function deployment (QFD) is one of the most effective quality design tools. This study applies the QFD technique to improve the quality of burn unit services in Ghotbedin Hospital in Shiraz, Iran. First, the patients' expectations of burn unit services and their priorities were determined through the Delphi method. Thereafter, burn unit service specifications were determined, also through the Delphi method. Further, the relationships between the patients' expectations and service specifications, as well as the relationships among service specifications, were determined through an expert group's opinion. Last, the final importance scores of the service specifications were calculated through the simple additive weighting method. The findings show that burn unit patients have 40 expectations in six different areas. These expectations fall into 16 priority levels. Burn units also have 45 service specifications in six different areas. There are four-level relationships between the patients' expectations and service specifications and four-level relationships between service specifications. The most important burn unit service specifications have been identified in this study. The QFD model developed in the study can serve as a general guideline for QFD planners and executives.
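
    The final scoring step described above is a simple additive weighting (SAW) computation: each service specification's importance is the weighted sum of the expectation priorities and the expectation-specification relationship strengths. The sketch below uses made-up weights and a made-up relationship matrix purely to illustrate the arithmetic; it is not data from the study.

```python
# Sketch of simple additive weighting (SAW) for a QFD "house of quality".
# The expectation weights and relationship strengths below are illustrative only.
import numpy as np

# priority weights of 3 patient expectations (normalised)
expectation_weights = np.array([0.5, 0.3, 0.2])

# relationship matrix: rows = expectations, columns = 4 service specifications,
# entries on the usual 0/1/3/9 QFD scale
relationships = np.array([
    [9, 3, 0, 1],
    [1, 9, 3, 0],
    [0, 1, 9, 3],
])

# final importance score of each service specification
scores = expectation_weights @ relationships
ranking = np.argsort(scores)[::-1]

for spec in ranking:
    print(f"specification {spec}: importance = {scores[spec]:.2f}")
```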

  8. Drosophila Cancer Models Identify Functional Differences between Ret Fusions.

    Science.gov (United States)

    Levinson, Sarah; Cagan, Ross L

    2016-09-13

    We generated and compared Drosophila models of RET fusions CCDC6-RET and NCOA4-RET. Both RET fusions directed cells to migrate, delaminate, and undergo EMT, and both resulted in lethality when broadly expressed. In all phenotypes examined, NCOA4-RET was more severe than CCDC6-RET, mirroring their effects on patients. A functional screen against the Drosophila kinome and a library of cancer drugs found that CCDC6-RET and NCOA4-RET acted through different signaling networks and displayed distinct drug sensitivities. Combining data from the kinome and drug screens identified the WEE1 inhibitor AZD1775 plus the multi-kinase inhibitor sorafenib as a synergistic drug combination that is specific for NCOA4-RET. Our work emphasizes the importance of identifying and tailoring a patient's treatment to their specific RET fusion isoform and identifies a multi-targeted therapy that may prove effective against tumors containing the NCOA4-RET fusion. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  9. Function-centered modeling of engineering systems using the goal tree-success tree technique and functional primitives

    International Nuclear Information System (INIS)

    Modarres, Mohammad; Cheon, Se Woo

    1999-01-01

    Most of the complex systems are formed through some hierarchical evolution. Therefore, those systems can be best described through hierarchical frameworks. This paper describes some fundamental attributes of complex physical systems and several hierarchies such as functional, behavioral, goal/condition, and event hierarchies, then presents a function-centered approach to system modeling. Based on the function-centered concept, this paper describes the joint goal tree-success tree (GTST) and the master logic diagram (MLD) as a framework for developing models of complex physical systems. A function-based lexicon for classifying the most common elements of engineering systems for use in the GTST-MLD framework has been proposed. The classification is based on the physical conservation laws that govern the engineering systems. Functional descriptions based on conservation laws provide a simple and rich vocabulary for modeling complex engineering systems

  10. Cost damping and functional form in transport models

    DEFF Research Database (Denmark)

    Rich, Jeppe; Mabit, Stefan Lindhard

    2016-01-01

    out to be an important guidance as the damping rate largely dictates which link functions are appropriate for the data. Thirdly, inspired by the Box–Cox function, we propose alternative linear-in-parameter link functions, some of which are based on interpolation of approximate Box–Cox end points...
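
    For reference, the Box–Cox function that motivates the link functions discussed above is the standard transform (textbook definition, not a formula quoted from the paper):

```latex
f(x;\lambda) =
\begin{cases}
\dfrac{x^{\lambda}-1}{\lambda}, & \lambda \neq 0,\\[6pt]
\ln x, & \lambda = 0,
\end{cases}
```

    Linear-in-parameter alternatives can then be built by interpolating between end points of this family, for example between $\ln x$ (the $\lambda \to 0$ limit) and $x$ itself ($\lambda = 1$).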

  11. A modelling framework to simulate foliar fungal epidemics using functional-structural plant models.

    Science.gov (United States)

    Garin, Guillaume; Fournier, Christian; Andrieu, Bruno; Houlès, Vianney; Robert, Corinne; Pradal, Christophe

    2014-09-01

    Sustainable agriculture requires the identification of new, environmentally responsible strategies of crop protection. Modelling of pathosystems can allow a better understanding of the major interactions inside these dynamic systems and may lead to innovative protection strategies. In particular, functional-structural plant models (FSPMs) have been identified as a means to optimize the use of architecture-related traits. A current limitation lies in the inherent complexity of this type of modelling, and thus the purpose of this paper is to provide a framework to both extend and simplify the modelling of pathosystems using FSPMs. Different entities and interactions occurring in pathosystems were formalized in a conceptual model. A framework based on these concepts was then implemented within the open-source OpenAlea modelling platform, using the platform's general strategy of modelling plant-environment interactions and extending it to handle plant interactions with pathogens. New developments include a generic data structure for representing lesions and dispersal units, and a series of generic protocols to communicate with objects representing the canopy and its microenvironment in the OpenAlea platform. Another development is the addition of a library of elementary models involved in pathosystem modelling. Several plant and physical models are already available in OpenAlea and can be combined in models of pathosystems using this framework approach. Two contrasting pathosystems are implemented using the framework and illustrate its generic utility. Simulations demonstrate the framework's ability to simulate multiscaled interactions within pathosystems, and also show that models are modular components within the framework and can be extended. This is illustrated by testing the impact of canopy architectural traits on fungal dispersal. This study provides a framework for modelling a large number of pathosystems using FSPMs. This structure can accommodate both

  12. EVALUATION OF RATIONAL FUNCTION MODEL FOR GEOMETRIC MODELING OF CHANG'E-1 CCD IMAGES

    Directory of Open Access Journals (Sweden)

    Y. Liu

    2012-08-01

    Full Text Available The Rational Function Model (RFM) is a generic geometric model that has been widely used in the geometric processing of high-resolution earth-observation satellite images, due to its generality and excellent capability of fitting complex rigorous sensor models. In this paper, the feasibility and precision of the RFM for geometric modeling of China's Chang'E-1 (CE-1) lunar orbiter images is presented. The RFM parameters of the forward-, nadir- and backward-looking CE-1 images are generated through a least-squares solution using virtual control points derived from the rigorous sensor model. The precision of the RFM is evaluated by comparison with the rigorous sensor model in both image space and object space. Experimental results using nine images from three orbits show that the RFM can precisely fit the rigorous sensor model of CE-1 CCD images, with RMS residual errors at the 1/100-pixel level in image space and less than 5 meters in object space. This indicates that it is feasible to use the RFM to describe the imaging geometry of CE-1 CCD images and the spacecraft position and orientation. The RFM will enable planetary data centers to have the option of supplying RFM parameters of orbital images while keeping the original orbit trajectory data confidential.
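
    For context, a rational function model expresses normalised image coordinates as ratios of polynomials (typically cubic) in normalised object-space coordinates; the generic form below is the standard RFM/RPC formulation rather than notation taken from the paper:

```latex
r = \frac{P_1(X, Y, Z)}{P_2(X, Y, Z)}, \qquad
c = \frac{P_3(X, Y, Z)}{P_4(X, Y, Z)}, \qquad
P_k(X, Y, Z) = \sum_{i+j+l \le 3} a_{k,ijl}\, X^{i} Y^{j} Z^{l},
```

    where $(r, c)$ are the normalised image row and column, $(X, Y, Z)$ are normalised object-space coordinates (e.g. latitude, longitude and height), and the coefficients $a_{k,ijl}$ are the rational polynomial coefficients, estimated here from virtual control points generated with the rigorous sensor model.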

  13. Applying Functional Modeling for Accident Management of Nuclear Power Plant

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2014-01-01

    Modeling is given and a detailed presentation of the foundational means-end concepts is presented and the conditions for proper use in modelling accidents are identified. It is shown that Multilevel Flow Modeling can be used for modelling and reasoning about design basis accidents. Its possible role...... for information sharing and decision support in accidents beyond design basis is also indicated. A modelling example demonstrating the application of Multilevel Flow Modelling and reasoning for a PWR LOCA is presented...

  14. Applying Functional Modeling for Accident Management of Nuclear Power Plant

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2014-01-01

    Modeling is given and a detailed presentation of the foundational means-end concepts is presented and the conditions for proper use in modelling accidents are identified. It is shown that Multilevel Flow Modeling can be used for modelling and reasoning about design basis accidents. Its possible role...... for information sharing and decision support in accidents beyond design basis is also indicated. A modelling example demonstrating the application of Multilevel Flow Modelling and reasoning for a PWR LOCA is presented....

  15. Software to model AXAF image quality

    Science.gov (United States)

    Ahmad, Anees

    1993-01-01

    This draft final report describes the work performed under this delivery order from May 1992 through June 1993. The purpose of this contract was to enhance and develop an integrated optical performance modeling software for complex x-ray optical systems such as AXAF. The GRAZTRACE program developed by the MSFC Optical Systems Branch for modeling VETA-I was used as the starting baseline program. The original program was a large single file program and, therefore, could not be modified very efficiently. The original source code has been reorganized, and a 'Make Utility' has been written to update the original program. The new version of the source code consists of 36 small source files to make it easier for the code developer to manage and modify the program. A user library has also been built and a 'Makelib' utility has been furnished to update the library. With the user library, the users can easily access the GRAZTRACE source files and build a custom library. A user manual for the new version of GRAZTRACE has been compiled. The plotting capability for the 3-D point spread functions and contour plots has been provided in the GRAZTRACE using the graphics package DISPLAY. The Graphics emulator over the network has been set up for programming the graphics routine. The point spread function and the contour plot routines have also been modified to display the plot centroid, and to allow the user to specify the plot range, and the viewing angle options. A Command Mode version of GRAZTRACE has also been developed. More than 60 commands have been implemented in a Code-V like format. The functions covered in this version include data manipulation, performance evaluation, and inquiry and setting of internal parameters. The user manual for these commands has been formatted as in Code-V, showing the command syntax, synopsis, and options. An interactive on-line help system for the command mode has also been accomplished to allow the user to find valid commands, command syntax

  16. Collins fragmentation function for pions and kaons in a spectator model

    Energy Technology Data Exchange (ETDEWEB)

    Bacchetta, A. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Gamberg, L.P. [Penn State Univ., Berks, PA (United States). Dept. of Physics; Goldstein, G.R. [Tufts Univ., Medford, MA (United States). Dept. of Physics and Astronomy; Mukherjee, A. [Indian Institute of Technology Bombay, Mumbai (India). Physics Dept.

    2007-07-15

    We calculate the Collins fragmentation function in the framework of a spectator model with pseudoscalar pion-quark coupling and a Gaussian form factor at the vertex. We determine the model parameters by fitting the unpolarized fragmentation function for pions and kaons. We show that the Collins function for the pions in this model is in reasonable agreement with recent parametrizations obtained by fits of the available data. In addition, we compute for the first time the Collins function for the kaons. (orig.)

  17. Modelling the joint distribution of competing risks survival times using copula functions

    OpenAIRE

    Kaishev, V. K.; Haberman, S.; Dimitrova, D. S.

    2005-01-01

    The problem of modelling the joint distribution of survival times in a competing risks model using copula functions is considered. In order to evaluate this joint distribution and the related overall survival function, a system of non-linear differential equations is solved, which relates the crude and net survival functions of the modelled competing risks through the copula. A similar approach to modelling dependent multiple decrements was applied by Carriere (1994) who used a Gaussian cop...
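
    The core construction can be stated compactly: a copula $C$ couples the net (marginal) survival functions of the individual risks into a joint survival function, from which the overall survival function follows. The expression below is the generic copula representation of dependent survival times, not the specific system of differential equations solved in the paper:

```latex
S(t_1, \dots, t_k) = \Pr(T_1 > t_1, \dots, T_k > t_k)
                   = C\bigl(S_1(t_1), \dots, S_k(t_k)\bigr),
\qquad
S_{\mathrm{overall}}(t) = C\bigl(S_1(t), \dots, S_k(t)\bigr).
```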

  18. Probiotics and novel digestion models for functional food ingredients

    Energy Technology Data Exchange (ETDEWEB)

    Pal, Karoly; Kiss, Attila [Eszterhazy Karoly College, Eger (Hungary). EgerFood Regional Knowledge Center (EgerFood-RKC); Szarvas, Jozsef [Eszterhazy Karoly College, Eger (Hungary). Department of Biochemistry and Molecular Biology; Naar, Zoltan [Eszterhazy Karoly College, Eger (Hungary). Department of Botany

    2009-07-01

    Complete text of publication follows. A number of factors compromise the health of modern people: a stressful lifestyle, unbalanced nourishment, excessive consumption of refined foods on a large scale, and the intake of various chemical agents into the human body. These factors harm, directly or indirectly, intestinal activity, which forms a considerable part of the immune system, including the production of essential substances that have beneficial effects on the human body. The role of the so-called prebiotics (e.g. inulin, various oligosaccharides, raffinose, resistant starch etc.) is to prevent and reduce damage to the useful microbes, which are termed probiotics. These substances selectively facilitate the propagation of probiotic bacteria (e.g. Bifidobacterium bifidum, Bifidobacterium longum, Enterococcus faecium, Lactobacillus acidophilus), and therefore increase the rate of synthesis of vitamin B and of beneficial short-chain fatty acids, improve the absorption of minerals, decrease the levels of cholesterol, triglycerides, insulin, glucose, ammonia and uric acid, and improve the functioning of the immune system. The majority of the research results on prebiotics are based on clinical dietary and animal experiments. In contrast to this, we simulated the process of digestion and the effect of prebiotics on probiotic and non-probiotic bacteria selected by us in an artificial digestion model, the Atlas Potassium reactor system. The instrument enabled the control of pH, temperature, dosage of digestion enzymes and juices (saliva, gastric juice, bile and duodenal juice) and anaerobic conditions in the course of the experiment. In our experiments we investigated different bakery products and biscuits containing various prebiotic ingredients, e.g. inulin and other fructo-oligosaccharides. In the digestion model the different bakery products and biscuits passed through the simulated oral cavity (pH=6.8), stomach (pH=2-3) and intestine (pH=6.5-7) and might be modified in the

  19. Optimizing an objective function under a bivariate probability model

    NARCIS (Netherlands)

    X. Brusset; N.M. Temme (Nico)

    2007-01-01

    The motivation of this paper is to obtain an analytical closed form of a quadratic objective function arising from a stochastic decision process with bivariate exponential probability distribution functions that may be dependent. This method is applicable when results need to be

  20. Predictive model for functional consequences of oral cavity tumour resections

    NARCIS (Netherlands)

    van Alphen, M.J.A.; Hageman, T.A.G.; Hageman, Tijmen Antoon Geert; Smeele, L.E.; Balm, Alfonsus Jacobus Maria; Balm, A.J.M.; van der Heijden, Ferdinand; Lemke, H.U.

    2013-01-01

    The prediction of functional consequences after treatment of large oral cavity tumours is mainly based on the size and location of the tumour. However, patient specific factors play an important role in the functional outcome, making the current predictions unreliable and subjective. An objective

  1. Mittag-Leffler function for discrete fractional modelling

    Directory of Open Access Journals (Sweden)

    Guo-Cheng Wu

    2016-01-01

    Full Text Available From the difference equations on discrete time scales, this paper numerically investigates a discrete fractional difference equation in the Caputo delta sense which has an explicit solution in the form of the discrete Mittag-Leffler function. The exact numerical values of the solutions are given in comparison with the truncated Mittag-Leffler function.
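
    For orientation, the classical one-parameter Mittag-Leffler function, of which the discrete (delta) Mittag-Leffler function used here is the time-scale analogue, is defined by the series (standard definition, not taken from the paper):

```latex
E_{\alpha}(z) = \sum_{k=0}^{\infty} \frac{z^{k}}{\Gamma(\alpha k + 1)}, \qquad \alpha > 0,
```

    which reduces to the exponential function $e^{z}$ for $\alpha = 1$.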

  2. Testing the Conditional Mean Function of Autoregressive Conditional Duration Models

    DEFF Research Database (Denmark)

    Hautsch, Nikolaus

    This paper proposes a dynamic proportional hazard (PH) model with non-specified baseline hazard for the modelling of autoregressive duration processes. A categorization of the durations allows us to reformulate the PH model as an ordered response model based on extreme value distributed errors... ...be subject to censoring structures. In an empirical study based on financial transaction data we present an application of the model to estimate conditional asset price change probabilities. Evaluating the forecasting properties of the model, it is shown that the proposed approach is a promising competitor...

  3. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    Energy Technology Data Exchange (ETDEWEB)

    Thienpont, Benedicte; Barata, Carlos [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Raldúa, Demetrio, E-mail: drpqam@cid.csic.es [Department of Environmental Chemistry, Institute of Environmental Assessment and Water Research (IDAEA, CSIC), Jordi Girona, 18-26, 08034 Barcelona (Spain); Maladies Rares: Génétique et Métabolisme (MRGM), University of Bordeaux, EA 4576, F-33400 Talence (France)

    2013-06-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free-T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting the thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for the potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing the thyroid hormone synthesis. The present study used the intrafollicular T4-content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] was well predicted by a concentration addition concept (CA) model, whereas the response addition concept (RA) model predicted better the effect of dissimilarly acting binary mixtures of TGFDs [TPO-inhibitors and sodium-iodide symporter (NIS)-inhibitors]. However, the CA model provided better prediction of joint effects than RA in five out of the six tested mixtures. The exception was the mixture MMI (TPO-inhibitor)-KClO₄ (NIS-inhibitor) dosed at a fixed ratio of EC₁₀, which provided similar CA and RA predictions, and hence it was difficult to reach any conclusive result. These results support the phenomenological similarity criterion stating that the concept of concentration addition could be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to predict better the effect of

  4. Modeling mixtures of thyroid gland function disruptors in a vertebrate alternative model, the zebrafish eleutheroembryo

    International Nuclear Information System (INIS)

    Thienpont, Benedicte; Barata, Carlos; Raldúa, Demetrio

    2013-01-01

    Maternal thyroxine (T4) plays an essential role in fetal brain development, and even mild and transitory deficits in free-T4 in pregnant women can produce irreversible neurological effects in their offspring. Women of childbearing age are daily exposed to mixtures of chemicals disrupting the thyroid gland function (TGFDs) through the diet, drinking water, air and pharmaceuticals, which has raised the highest concern for the potential additive or synergic effects on the development of mild hypothyroxinemia during early pregnancy. Recently we demonstrated that zebrafish eleutheroembryos provide a suitable alternative model for screening chemicals impairing the thyroid hormone synthesis. The present study used the intrafollicular T4-content (IT4C) of zebrafish eleutheroembryos as an integrative endpoint for testing the hypotheses that the effect of mixtures of TGFDs with a similar mode of action [inhibition of thyroid peroxidase (TPO)] was well predicted by a concentration addition concept (CA) model, whereas the response addition concept (RA) model predicted better the effect of dissimilarly acting binary mixtures of TGFDs [TPO-inhibitors and sodium-iodide symporter (NIS)-inhibitors]. However, the CA model provided better prediction of joint effects than RA in five out of the six tested mixtures. The exception was the mixture MMI (TPO-inhibitor)-KClO₄ (NIS-inhibitor) dosed at a fixed ratio of EC₁₀, which provided similar CA and RA predictions, and hence it was difficult to reach any conclusive result. These results support the phenomenological similarity criterion stating that the concept of concentration addition could be extended to mixture constituents having common apical endpoints or common adverse outcomes. - Highlights: • Potential synergic or additive effect of mixtures of chemicals on thyroid function. • Zebrafish as alternative model for testing the effect of mixtures of goitrogens. • Concentration addition seems to predict better the effect of mixtures of

  5. Function-specific and Enhanced Brain Structural Connectivity Mapping via Joint Modeling of Diffusion and Functional MRI.

    Science.gov (United States)

    Chu, Shu-Hsien; Parhi, Keshab K; Lenglet, Christophe

    2018-03-16

    A joint structural-functional brain network model is presented, which enables the discovery of function-specific brain circuits, and recovers structural connections that are under-estimated by diffusion MRI (dMRI). Incorporating information from functional MRI (fMRI) into diffusion MRI to estimate brain circuits is a challenging task. Usually, seed regions for tractography are selected from fMRI activation maps to extract the white matter pathways of interest. The proposed method jointly analyzes whole brain dMRI and fMRI data, allowing the estimation of complete function-specific structural networks instead of interactively investigating the connectivity of individual cortical/sub-cortical areas. Additionally, tractography techniques are prone to limitations, which can result in erroneous pathways. The proposed framework explicitly models the interactions between structural and functional connectivity measures thereby improving anatomical circuit estimation. Results on Human Connectome Project (HCP) data demonstrate the benefits of the approach by successfully identifying function-specific anatomical circuits, such as the language and resting-state networks. In contrast to correlation-based or independent component analysis (ICA) functional connectivity mapping, detailed anatomical connectivity patterns are revealed for each functional module. Results on a phantom (Fibercup) also indicate improvements in structural connectivity mapping by rejecting false-positive connections with insufficient support from fMRI, and enhancing under-estimated connectivity with strong functional correlation.

  6. Modeling the fundamental characteristics and processes of the spacecraft functioning

    Science.gov (United States)

    Bazhenov, V. I.; Osin, M. I.; Zakharov, Y. V.

    1986-01-01

    The fundamental aspects of modeling of spacecraft characteristics by using computing means are considered. Particular attention is devoted to the design studies, the description of physical appearance of the spacecraft, and simulated modeling of spacecraft systems. The fundamental questions of organizing the on-the-ground spacecraft testing and the methods of mathematical modeling were presented.

  7. Mathematical modeling of damage function when attacking file server

    Science.gov (United States)

    Kozlov, V. G.; Skrypnikov, A. V.; Chernyshova, E. V.; Mogutnov, R. V.; Levushkin, D. M.

    2018-05-01

    The development of information technologies in Russia and the prospects for their further improvement reveal a stable trend of expansion both of the functions of the corresponding automated information systems (AIS) and of the spheres of their application. At the same time, threats to information processes in an AIS are multiplying, which in turn stimulates the development of adequate means and systems for ensuring AIS information security and of methods for assessing their protection. It is necessary to assess the ability of the system to continue its normal functioning under permanent destructive influences and to resist them, to adapt its functioning algorithms to new conditions, and to organize functional restoration or to ensure functioning under gradual degradation, ideally without losing the most significant “critical” information functions. The analysis and evaluation of reliability therefore need to be transformed into the analysis and evaluation of survivability. Survivability can be considered as the ability of the information system to preserve and restore the performance of its basic functions, in a given volume and for a given time, in the case of a change in the system structure and/or algorithms and in the conditions of its functioning due to adverse effects. One of the system survivability indicators is the reserve of survivability (S-survivability), that is, the critical number of defects reduced by one. The authors consider a defect as the unit of measurement of damage to the information system caused by an adverse impact. If U denotes the critical number of defects, then S = U - 1 is the index of S-survivability. The article derives an analytical formula for the damage and risk function.

  8. Overlap integrals of model wave functions of 4He and 3He,3H nuclei

    International Nuclear Information System (INIS)

    Voloshin, N.I.; Levshin, E.B.; Fursa, A.D.

    1990-01-01

    Overlap integrals of the wave functions of the ⁴He nucleus and of the ³He and ³H nuclei are calculated. Two types of model wave functions are used to describe the structure of the nuclei. In the second case, the wave function is taken as a product of one-particle functions of the Gaussian type

  9. Scattering function for a model of interacting surfaces

    International Nuclear Information System (INIS)

    Colangelo, P.; Gonnella, G.; Maritan, A.

    1993-01-01

    The two-point correlation function of an ensemble of interacting closed self-avoiding surfaces on a cubic lattice is analyzed in the disordered phase, which corresponds to the paramagnetic region in a related spin formulation. Mean-field theory and Monte Carlo simulations predict the existence of a disorder line which corresponds to a transition from an exponential decay to an oscillatory damped behavior of the two-point correlation function. The relevance of the results for the description of amphiphilic systems in a microemulsion phase is discussed. The scattering function is also calculated for a bicontinuous phase coexisting with the paramagnetic phase

  10. Validation of a functional model for integration of safety into process system design

    DEFF Research Database (Denmark)

    Wu, J.; Lind, M.; Zhang, X.

    2015-01-01

    with the process system functionalities as required for the intended safety applications. To provide the scientific rigor and facilitate the acceptance of qualitative modelling, this contribution focuses on developing a scientifically based validation method for functional models. The Multilevel Flow Modeling (MFM...

  11. Assessing elders using the functional health pattern assessment model.

    Science.gov (United States)

    Beyea, S; Matzo, M

    1989-01-01

    The impact of older Americans on the health care system requires we increase our students' awareness of their unique needs. The authors discuss strategies to develop skills using Gordon's Functional Health Patterns Assessment for assessing older clients.

  12. Green function of the model two-centre quantum-mechanical problem

    International Nuclear Information System (INIS)

    Khoma, M.V.; Lazur, V.Yu.

    2002-01-01

    Expansions of the Green function for the Simmons molecular potential (SMP) in terms of spheroidal functions are constructed. Solutions of the degenerate hypergeometric equation are used as the basis function system when expanding the regular and irregular model spheroidal functions into series. Rather simple three-term recurrence relations are obtained for the coefficients of these expansions. Much attention is given to different asymptotic representations, as well as to Sturmian expansions, of the Green function of the two-centre SMP wave functions. In all cases considered, the Green function is reduced to a form similar to Hostler's representation of the Coulomb Green function

  13. Functional inverted Wishart for Bayesian multivariate spatial modeling with application to regional climatology model data.

    Science.gov (United States)

    Duan, L L; Szczesniak, R D; Wang, X

    2017-11-01

    Modern environmental and climatological studies produce multiple outcomes at high spatial resolutions. Multivariate spatial modeling is an established means to quantify cross-correlation among outcomes. However, existing models typically suffer from poor computational efficiency and lack the flexibility to simultaneously estimate auto- and cross-covariance structures. In this article, we undertake a novel construction of covariance by utilizing spectral convolution and by imposing an inverted Wishart prior on the cross-correlation structure. The cross-correlation structure with this functional inverted Wishart prior flexibly accommodates not only positive but also weak or negative associations among outcomes while preserving spatial resolution. Furthermore, the proposed model is computationally efficient and produces easily interpretable results, including the individual autocovariances and full cross-correlation matrices, as well as a partial cross-correlation matrix reflecting the outcome correlation after excluding the effects caused by spatial convolution. The model is examined using simulated data sets under different scenarios. It is also applied to the data from the North American Regional Climate Change Assessment Program, examining long-term associations between surface outcomes for air temperature, pressure, humidity, and radiation, on the land area of the North American West Coast. Results and predictive performance are compared with findings from approaches using convolution only or coregionalization.

  14. Functional inverted Wishart for Bayesian multivariate spatial modeling with application to regional climatology model data

    Science.gov (United States)

    Duan, L. L.; Szczesniak, R. D.; Wang, X.

    2018-01-01

    Modern environmental and climatological studies produce multiple outcomes at high spatial resolutions. Multivariate spatial modeling is an established means to quantify cross-correlation among outcomes. However, existing models typically suffer from poor computational efficiency and lack the flexibility to simultaneously estimate auto- and cross-covariance structures. In this article, we undertake a novel construction of covariance by utilizing spectral convolution and by imposing an inverted Wishart prior on the cross-correlation structure. The cross-correlation structure with this functional inverted Wishart prior flexibly accommodates not only positive but also weak or negative associations among outcomes while preserving spatial resolution. Furthermore, the proposed model is computationally efficient and produces easily interpretable results, including the individual autocovariances and full cross-correlation matrices, as well as a partial cross-correlation matrix reflecting the outcome correlation after excluding the effects caused by spatial convolution. The model is examined using simulated data sets under different scenarios. It is also applied to the data from the North American Regional Climate Change Assessment Program, examining long-term associations between surface outcomes for air temperature, pressure, humidity, and radiation, on the land area of the North American West Coast. Results and predictive performance are compared with findings from approaches using convolution only or coregionalization. PMID:29576735

  15. Commonsense Psychology and the Functional Requirements of Cognitive Models

    National Research Council Canada - National Science Library

    Gordon, Andrew S

    2005-01-01

    .... Rather than working to avoid the influence of commonsense psychology in cognitive modeling research, we propose to capitalize on progress in developing formal theories of commonsense psychology...

  16. Discrete two-sex models of population dynamics: On modelling the mating function

    Science.gov (United States)

    Bessa-Gomes, Carmen; Legendre, Stéphane; Clobert, Jean

    2010-09-01

    Although sexual reproduction has long been a central subject of theoretical ecology, until recently its consequences for population dynamics were largely overlooked. This is now changing, and many studies have addressed this issue, showing that when the mating system is taken into account, the population dynamics depends on the relative abundance of males and females, and is non-linear. Moreover, sexual reproduction increases the extinction risk, namely due to the Allee effect. Nevertheless, different studies have identified diverse potential consequences, depending on the choice of mating function. In this study, we investigate the consequences of three alternative mating functions that are frequently used in discrete population models: the minimum; the harmonic mean; and the modified harmonic mean. We consider their consequences at three levels: on the probability that females will breed; on the presence and intensity of the Allee effect; and on the extinction risk. When we consider the harmonic mean, the number of times the individuals of the least abundant sex mate exceeds their mating potential, which implies that with variable sex-ratios the potential reproductive rate is no longer under the modeller's control. Consequently, the female breeding probability exceeds 1 whenever the sex-ratio is male-biased, which constitutes an obvious problem. The use of the harmonic mean is thus only justified if we think that this parameter should be re-defined in order to represent the females' breeding rate and the fact that females may reproduce more than once per breeding season. This phenomenon buffers the Allee effect, and reduces the extinction risk. However, when we consider birth-pulse populations, such a phenomenon is implausible because the number of times females can reproduce per birth season is limited. In general, the minimum or modified harmonic mean mating functions seem to be more suitable for assessing the impact of mating systems on population dynamics.
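
    The three mating functions compared in the study have simple closed forms; the sketch below implements common textbook versions (the exact parameterisation of the modified harmonic mean varies between authors, so the cap used here is an assumption for illustration) and reproduces the inconsistency noted above: with a male-biased sex ratio, the plain harmonic mean can exceed the number of available females.

```python
# Common discrete two-sex mating functions: number of matings as a function of the
# numbers of females (f) and males (m). The "modified" harmonic mean shown here simply
# caps matings at the availability of each sex; other parameterisations exist.
def mating_minimum(f: float, m: float) -> float:
    return min(f, m)

def mating_harmonic_mean(f: float, m: float) -> float:
    return 2.0 * f * m / (f + m) if (f + m) > 0 else 0.0

def mating_modified_harmonic_mean(f: float, m: float) -> float:
    return min(mating_harmonic_mean(f, m), f, m)

# With a strongly male-biased sex ratio the plain harmonic mean exceeds the number of
# females (18 matings for 10 females), the problem discussed in the abstract.
f, m = 10.0, 90.0
print(mating_minimum(f, m), mating_harmonic_mean(f, m), mating_modified_harmonic_mean(f, m))
```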

  17. Nonparametric modeling of dynamic functional connectivity in fmri data

    DEFF Research Database (Denmark)

    Nielsen, Søren Føns Vind; Madsen, Kristoffer H.; Røge, Rasmus

    2015-01-01

    dynamic changes. The existing approaches modeling dynamic connectivity have primarily been based on time-windowing the data and k-means clustering. We propose a nonparametric generative model for dynamic FC in fMRI that does not rely on specifying window lengths and number of dynamic states. Rooted...

  18. Modelling graphene quantum dot functionalization via ethylene-dinitrobenzoyl

    International Nuclear Information System (INIS)

    Noori, Keian; Giustino, Feliciano; Hübener, Hannes; Kymakis, Emmanuel

    2016-01-01

    Ethylene-dinitrobenzoyl (EDNB) linked to graphene oxide has been shown to improve the performance of graphene/polymer organic photovoltaics. Its binding conformation on graphene, however, is not yet clear, nor have its effects on work function and optical absorption been explored more generally for graphene quantum dots. In this report, we clarify the linkage of EDNB to GQDs from first principles and show that the binding of the molecule increases the work function of graphene, while simultaneously modifying its absorption in the ultraviolet region.

  19. Modelling graphene quantum dot functionalization via ethylene-dinitrobenzoyl

    Energy Technology Data Exchange (ETDEWEB)

    Noori, Keian; Giustino, Feliciano [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Hübener, Hannes [Department of Materials, University of Oxford, Parks Road, Oxford OX1 3PH (United Kingdom); Nano-Bio Spectroscopy Group and European Theoretical Spectroscopy Facility (ETSF), Universidad del País Vasco CFM CSIC-UPV/EHU-MPC & DIPC, Av. Tolosa 72, 20018 San Sebastián (Spain); Kymakis, Emmanuel [Center of Materials Technology and Photonics & Electrical Engineering Department, Technological Educational Institute (TEI) of Crete, Heraklion, 71004 Crete (Greece)

    2016-03-21

    Ethylene-dinitrobenzoyl (EDNB) linked to graphene oxide has been shown to improve the performance of graphene/polymer organic photovoltaics. Its binding conformation on graphene, however, is not yet clear, nor have its effects on work function and optical absorption been explored more generally for graphene quantum dots. In this report, we clarify the linkage of EDNB to GQDs from first principles and show that the binding of the molecule increases the work function of graphene, while simultaneously modifying its absorption in the ultraviolet region.

  20. Bioinorganic Chemistry Modeled with the TPSSh Density Functional

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta

    2008-01-01

    functionals such as B3LYP and BP86. TPSSh gives a slope of 0.99 upon linear fitting to experimental bond energies, whereas B3LYP and BP86, representing 20% and 0% exact exchange, respectively, give linear fits with slopes of 0.91 and 1.07. Thus, TPSSh eliminates the large systematic component of the error...... functional provides energies approximately halfway between nonhybrids BP86 and TPSS, and 20% exact exchange hybrid B3LYP: Thus, a linear correlation between the amount of exact exchange and the numeric value of the reaction energy is observed in all these cases. For these reasons, TPSSh stands out as a most...

  1. Calculations of higher twist distribution functions in the MIT bag model

    International Nuclear Information System (INIS)

    Signal, A.I.

    1997-01-01

    We calculate all twist-2, -3 and -4 parton distribution functions involving two quark correlations using the wave function of the MIT bag model. The distributions are evolved up to experimental scales and combined to give the various nucleon structure functions. Comparisons with recent experimental data on higher twist structure functions at moderate values of Q² give good agreement with the calculated structure functions. (orig.)

  2. Design, Fabrication, Characterization and Modeling of Integrated Functional Materials

    Science.gov (United States)

    2015-12-01

    activities is expected to lead to new devices/ systems /composite materials useful for the USAMRMC. 15. SUBJECT TERMS Functional materials, integrated...fabrication, nanobiotechnology, multifunctional, dimensional integration, nanocomposites, sensor technology, thermoelectrics, solar cells, photovoltaics ...loop measured in the presence of an AC field, and can be increased by tuning several parameters, such as the nanoparticles’ size , saturation

  3. A Model for Teaching Literary Analysis Using Systemic Functional Grammar

    Science.gov (United States)

    McCrocklin, Shannon; Slater, Tammy

    2017-01-01

    This article introduces an approach that middle-school teachers can follow to help their students carry out linguistic-based literary analyses. As an example, it draws on Systemic Functional Grammar (SFG) to show how J.K. Rowling used language to characterize Hermione as an intelligent female in "Harry Potter and the Deathly Hallows."…

  4. Modelling functional and structural impact of non-synonymous ...

    African Journals Online (AJOL)

    ... that could affect protein function and structure. Further wet-lab confirmatory analysis in a pathological association study involving a larger population of goats is required at the DQA1 locus. This would lay a sound foundation for breeding disease-resistant individuals in the future. Keywords: Goats, in silico, mutants, protein, ...

  5. Structure and Function of a Nonruminant Gut: A Porcine Model

    DEFF Research Database (Denmark)

    Tajima, Kiyoshi; Aminov, Rustam

    2015-01-01

    In many aspects, the anatomical, physiological, and microbial diversity features of the ruminant gut are different from those of monogastric animals. Thus, the main aim of this chapter is to give a comparative overview of the structure and function of the gastrointestinal tract of a nonruminant...

  6. Modelling of Functional States during Saccharomyces cerevisiae Fed-batch Cultivation

    Directory of Open Access Journals (Sweden)

    Stoyan Tzonkov

    2005-04-01

    Full Text Available An implementation of the functional state approach for modelling yeast fed-batch cultivation is presented in this paper. Using the functional state modelling approach aims to overcome the main disadvantage of using a global process model, namely a complex model structure and a large number of model parameters, which complicate model simulation and parameter estimation. The approach also has computational advantages, such as the possibility of using the estimated values from the previous state as starting values for the estimation of the parameters of a new state. The functional state modelling approach is applied here to fed-batch cultivation of Saccharomyces cerevisiae. Four functional states are recognised, and parameter estimation of the local models is presented as well.

  7. The relationship between structural and functional connectivity: graph theoretical analysis of an EEG neural mass model

    NARCIS (Netherlands)

    Ponten, S.C.; Daffertshofer, A.; Hillebrand, A.; Stam, C.J.

    2010-01-01

    We investigated the relationship between structural network properties and both synchronization strength and functional characteristics in a combined neural mass and graph theoretical model of the electroencephalogram (EEG). Thirty-two neural mass models (NMMs), each representing the lump activity

  8. Evaluation of Preservation Planning within OAIS, based on the Planets Functional Model

    OpenAIRE

    Sierman, Barbara; Wheatley, Paul

    2010-01-01

    This report gives an overview of the Planets Functional Model and relates it to the Planets deliverables. It also gives a set of recommendations for the OAIS model. The Report was part of the European FP6 Project Planets

  9. Constructing Markov State Models to elucidate the functional conformational changes of complex biomolecules

    KAUST Repository

    Wang, Wei; Cao, Siqin; Zhu, Lizhe; Huang, Xuhui

    2017-01-01

    bioengineering applications and rational drug design. Constructing Markov State Models (MSMs) based on large-scale molecular dynamics simulations has emerged as a powerful approach to model functional conformational changes of the biomolecular system

  10. Diffusion Forecasting Model with Basis Functions from QR-Decomposition

    Science.gov (United States)

    Harlim, John; Yang, Haizhao

    2017-12-01

    Diffusion forecasting is a nonparametric approach that provably solves the Fokker-Planck PDE corresponding to an Itô diffusion without knowing the underlying equation. The key idea of this method is to approximate the solution of the Fokker-Planck equation with a discrete representation of the shift (Koopman) operator on a set of basis functions generated via the diffusion maps algorithm. While the choice of these basis functions is provably optimal under appropriate conditions, computing these basis functions is quite expensive since it requires the eigendecomposition of an N × N diffusion matrix, where N denotes the data size and could be very large. For large-scale forecasting problems, only a few leading eigenvectors are computationally achievable. To overcome this computational bottleneck, a new set of basis functions constructed by orthonormalizing selected columns of the diffusion matrix and its leading eigenvectors is proposed. This computation can be carried out efficiently via the unpivoted Householder QR factorization. The efficiency and effectiveness of the proposed algorithm will be shown in both deterministically chaotic and stochastic dynamical systems; in the former case, the superiority of the proposed basis functions over eigenvectors alone is significant, while in the latter case forecasting accuracy is improved relative to using only a small number of eigenvectors. Supporting arguments will be provided on three- and six-dimensional chaotic ODEs, a three-dimensional SDE that mimics turbulent systems, and also on the two spatial modes associated with the boreal winter Madden-Julian Oscillation obtained from applying the Nonlinear Laplacian Spectral Analysis on the measured Outgoing Longwave Radiation.
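
    The basis construction described above can be sketched in a few lines of NumPy: a small number of leading eigenvectors of the diffusion matrix is augmented with selected columns of the matrix itself, and the combined set is orthonormalised with an unpivoted (Householder) QR factorization. The matrix, the number of eigenvectors, and the column-selection rule below are illustrative assumptions, not the authors' choices.

```python
# Sketch: orthonormal basis from a few leading eigenvectors of a diffusion matrix
# plus selected columns of the matrix itself, via unpivoted (Householder) QR.
import numpy as np

rng = np.random.default_rng(0)
N = 500
A = rng.random((N, N))
P = A / A.sum(axis=1, keepdims=True)          # stand-in for a row-stochastic diffusion matrix

# a few leading eigenvectors (the expensive part, kept deliberately small)
eigvals, eigvecs = np.linalg.eig(P)
order = np.argsort(-np.abs(eigvals))
leading = np.real(eigvecs[:, order[:10]])

# cheap extra directions: every 25th column of the diffusion matrix (20 columns)
selected_cols = P[:, ::25]

# orthonormalise the combined set; np.linalg.qr uses Householder reflections
Q, _ = np.linalg.qr(np.hstack([leading, selected_cols]))
print(Q.shape)                                # (500, 30): 30 orthonormal basis vectors
```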

  11. A universal Model-R Coupler to facilitate the use of R functions for model calibration and analysis

    Science.gov (United States)

    Wu, Yiping; Liu, Shuguang; Yan, Wende

    2014-01-01

    Mathematical models are useful in various fields of science and engineering. However, it is a challenge to make a model utilize the open and growing functions (e.g., model inversion) on the R platform due to the requirement of accessing and revising the model's source code. To overcome this barrier, we developed a universal tool that aims to convert a model developed in any computer language to an R function using the template and instruction concept of the Parameter ESTimation program (PEST) and the operational structure of the R-Soil and Water Assessment Tool (R-SWAT). The developed tool (Model-R Coupler) is promising because users of any model can connect an external algorithm (written in R) with their model to implement various model behavior analyses (e.g., parameter optimization, sensitivity and uncertainty analysis, performance evaluation, and visualization) without accessing or modifying the model's source code.

  12. Longitudinal functional principal component modelling via Stochastic Approximation Monte Carlo

    KAUST Repository

    Martinez, Josue G.; Liang, Faming; Zhou, Lan; Carroll, Raymond J.

    2010-01-01

    model averaging using a Bayesian formulation. A relatively straightforward reversible jump Markov Chain Monte Carlo formulation has poor mixing properties and in simulated data often becomes trapped at the wrong number of principal components. In order

  13. The Ecosystem Functions Model: A Tool for Restoration Planning

    National Research Council Canada - National Science Library

    Hickey, John T; Dunn, Chris N

    2004-01-01

    .... Army Corps of Engineers Hydrologic Engineering Center (HEC) is developing the EFM and envisions environmental planners, biologists, and engineers using the model to help determine whether proposed alternatives (e.g...

  14. Prediction of Chemical Function: Model Development and Application

    Science.gov (United States)

    The United States Environmental Protection Agency’s Exposure Forecaster (ExpoCast) project is developing both statistical and mechanism-based computational models for predicting exposures to thousands of chemicals, including those in consumer products. The high-throughput (...

  15. Functional Testing Protocols for Commercial Building Efficiency Baseline Modeling Software

    Energy Technology Data Exchange (ETDEWEB)

    Jump, David; Price, Phillip N.; Granderson, Jessica; Sohn, Michael

    2013-09-06

    This document describes procedures for testing and validating proprietary baseline energy modeling software accuracy in predicting energy use over the period of interest, such as a month or a year. The procedures are designed according to the methodology used for public domain baselining software in another LBNL report that was (like the present report) prepared for Pacific Gas and Electric Company: "Commercial Building Energy Baseline Modeling Software: Performance Metrics and Method Testing with Open Source Models and Implications for Proprietary Software Testing Protocols" (referred to here as the "Model Analysis Report"). The test procedure focuses on the quality of the software's predictions rather than on the specific algorithms used to predict energy use. In this way the software vendor is not required to divulge or share proprietary information about how their software works, while enabling stakeholders to assess its performance.
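
    Prediction-quality checks of this kind usually reduce to goodness-of-fit statistics on a held-out period; the sketch below computes two metrics commonly used for baseline models (NMBE and CV(RMSE)) as an illustration, without claiming these are the exact metrics or acceptance thresholds of the LBNL protocol.

      import numpy as np

      def baseline_prediction_metrics(measured, predicted):
          """Compare predicted vs. measured energy use over the test period."""
          measured = np.asarray(measured, dtype=float)
          predicted = np.asarray(predicted, dtype=float)
          resid = measured - predicted
          nmbe = resid.sum() / measured.sum() * 100.0                       # normalized mean bias error, %
          cv_rmse = np.sqrt(np.mean(resid ** 2)) / measured.mean() * 100.0  # CV(RMSE), %
          return {"NMBE_%": nmbe, "CV_RMSE_%": cv_rmse}

      # Example: monthly energy use (kWh) for a held-out year.
      print(baseline_prediction_metrics(
          measured=[310, 295, 280, 260, 300, 340, 390, 385, 330, 300, 290, 320],
          predicted=[305, 300, 275, 268, 310, 335, 380, 390, 325, 295, 300, 315]))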

  16. Consistent Evolution of Software Artifacts and Non-Functional Models

    Science.gov (United States)

    2014-11-14

    Subject terms: Model-Driven Engineering (MDE), Software Performance Engineering (SPE), Change Propagation, Performance Antipatterns. Contact: Università degli Studi dell'Aquila, Via Vetoio, 67100 L'Aquila, Italy; vittorio.cortellessa@univaq.it; http://www.di.univaq.it/cortelle/

  17. Modelling of Multi Input Transfer Function for Rainfall Forecasting in Batu City

    OpenAIRE

    Priska Arindya Purnama

    2017-01-01

    The aim of this research is to model and forecast the rainfall in Batu City using a multi-input transfer function model based on air temperature, humidity, wind speed and cloud. A transfer function model is a multivariate time series model which consists of an output series (Yt) that is expected to be affected by an input series (Xt) and other inputs grouped into a noise series (Nt). The multi-input transfer function model obtained is (b1,s1,r1) (b2,s2,r2) (b3,s3,r3) (b4,s4,r4)(pn,qn) = (0,0,0)...
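
    A regression-with-ARMA-errors fit captures the spirit of such a multi-input transfer function model (an output series driven by several input series plus a noise series); the sketch below uses statsmodels' SARIMAX with the four meteorological series as exogenous inputs and purely synthetic data, and does not reproduce the specific (b,s,r) orders reported in the study.

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      rng = np.random.default_rng(1)
      n = 120  # e.g. ten years of monthly observations (synthetic placeholder values)
      exog = pd.DataFrame({
          "temperature": 25 + 2 * rng.standard_normal(n),
          "humidity": 80 + 5 * rng.standard_normal(n),
          "wind_speed": 3 + rng.standard_normal(n),
          "cloud": 60 + 10 * rng.standard_normal(n),
      })
      rainfall = (0.8 * exog["humidity"] - 1.5 * exog["temperature"]
                  + 5 * rng.standard_normal(n))

      # Regression with ARMA(1,1) noise: a simplified stand-in for the transfer
      # function model with inputs X1..X4 and a noise series Nt.
      fit = SARIMAX(rainfall, exog=exog, order=(1, 0, 1)).fit(disp=False)
      print(fit.params)
      future_exog = exog.tail(6).to_numpy()            # placeholder "future" inputs
      print(fit.forecast(steps=6, exog=future_exog))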

  18. Modeling of Tilting-Pad Journal Bearings with Transfer Functions

    Directory of Open Access Journals (Sweden)

    J. A. Vázquez

    2001-01-01

    Full Text Available Tilting-pad journal bearings are widely used to promote stability in modern rotating machinery. However, the dynamics associated with pad motion alters this stabilizing capacity depending on the operating speed of the machine and the bearing geometric parameters, particularly the bearing preload. In modeling the dynamics of the entire rotor-bearing system, the rotor is augmented with a model of the bearings. This model may explicitly include the pad degrees of freedom or may implicitly include them by using dynamic matrix reduction methods. The dynamic reduction models may be represented as a set of polynomials in the eigenvalues of the system used to determine stability. All tilting-pad bearings can then be represented by a fixed size matrix with polynomial elements interacting with the rotor. This paper presents a procedure to calculate the coefficients of polynomials for implicit bearing models. The order of the polynomials changes to reflect the number of pads in the bearings. This results in a very compact and computationally efficient method for fully including the dynamics of tilting-pad bearings or other multiple degrees of freedom components that interact with rotors. The fixed size of the dynamic reduction matrices permits the method to be easily incorporated into rotor dynamic stability codes. A recursive algorithm is developed and presented for calculating the coefficients of the polynomials. The method is applied to stability calculations for a model of a typical industrial compressor.

  19. Correspondence between Traditional Models of Functional Analysis and a Functional Analysis of Manding Behavior

    Science.gov (United States)

    LaRue, Robert H.; Sloman, Kimberly N.; Weiss, Mary Jane; Delmolino, Lara; Hansford, Amy; Szalony, Jill; Madigan, Ryan; Lambright, Nathan M.

    2011-01-01

    Functional analysis procedures have been effectively used to determine the maintaining variables for challenging behavior and subsequently develop effective interventions. However, fear of evoking dangerous topographies of maladaptive behavior and concerns for reinforcing infrequent maladaptive behavior present challenges for people working in…

  20. A Model-Based Approach to Constructing Music Similarity Functions

    Science.gov (United States)

    West, Kris; Lamere, Paul

    2006-12-01

    Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.
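
    One way to read the projection into a label-defined intermediate space is to use a classifier's per-genre posterior probabilities as the intermediate coordinates and measure distances there; the scikit-learn sketch below, with synthetic features and labels, is an interpretation of that strategy rather than the authors' exact system.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from scipy.spatial.distance import euclidean

      rng = np.random.default_rng(4)
      n_tracks, n_features, n_genres = 300, 20, 5
      X = rng.standard_normal((n_tracks, n_features))   # spectral/rhythmic features (synthetic)
      y = rng.integers(0, n_genres, n_tracks)           # genre labels (synthetic)

      clf = LogisticRegression(max_iter=1000).fit(X, y)

      def similarity_distance(track_a, track_b):
          """Distance in the 'cultural' space of genre posteriors rather than raw features."""
          p = clf.predict_proba(np.vstack([track_a, track_b]))
          return euclidean(p[0], p[1])

      print(similarity_distance(X[0], X[1]))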

  1. Exactly solvable model for the time response function of RPCs

    International Nuclear Information System (INIS)

    Mangiarotti, A.; Fonte, P.; Gobbi, A.

    2004-01-01

    The fluctuation theory for the growth of several avalanches is briefly summarized and extended to include the case of electronegative gas mixtures. Based on such a physical picture, the intrinsic time response function of an RPC can be calculated in closed form and its average and rms extracted from series representations. The corresponding timing resolution, expressed in units of 1/((α-η)vd), is a universal function of the mean number of 'effective' clusters n0 reduced by electron attachment: n0(1-η/α). A comparison to a few selected good-quality experimental data sets is attempted for the timing resolution of both 1-gap and 4-gap RPCs, finding reasonable agreement.

  2. Source function for tritium transport models in the Pacific

    International Nuclear Information System (INIS)

    Fine, R.A.; Ostlund, H.G.

    1977-01-01

    An empirically fitted function describes surface Pacific Ocean tritium concentrations as varying exponentially with latitude; the r.m.s. misfit to observations is 18%. The oceanic tritium concentration maximum in the North Pacific, which resulted from nuclear weapons testing, lagged the rain data by two to three years, occurring in 1965-66. Tritium-salinity correlations are consistent with climatology. Tritium-longitude correlations are consistent with surface water circulation.
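
    An exponential-in-latitude surface source function of the general kind described can be fitted with a standard nonlinear least-squares routine; the functional form, the synthetic observations, and the coefficients below are illustrative assumptions, not values from this study.

      import numpy as np
      from scipy.optimize import curve_fit

      def tritium_surface(lat_deg, a, b, c):
          """Assumed form: surface concentration varying exponentially with latitude."""
          return a * np.exp(b * lat_deg) + c

      lat = np.linspace(-40, 50, 19)                    # sampling latitudes (deg N)
      rng = np.random.default_rng(3)
      obs = tritium_surface(lat, 1.5, 0.05, 0.5) * (1 + 0.15 * rng.standard_normal(lat.size))

      popt, _ = curve_fit(tritium_surface, lat, obs, p0=(1.0, 0.03, 0.0))
      resid = obs - tritium_surface(lat, *popt)
      rms_percent = 100 * np.sqrt(np.mean((resid / obs) ** 2))
      print(popt, f"r.m.s. misfit = {rms_percent:.0f}%")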

  3. Buckled graphene: A model study based on density functional theory

    KAUST Repository

    Khan, Yasser

    2010-09-01

    We make use of ab initio calculations within density functional theory to investigate the influence of buckling on the electronic structure of single layer graphene. Our systematic study addresses a wide range of bond length and bond angle variations in order to obtain insights into the energy scale associated with the formation of ripples in a graphene sheet. © 2010 Elsevier B.V. All rights reserved.

  4. Buckled graphene: A model study based on density functional theory

    KAUST Repository

    Khan, Yasser; Mukaddam, Mohsin Ahmed; Schwingenschlögl, Udo

    2010-01-01

    We make use of ab initio calculations within density functional theory to investigate the influence of buckling on the electronic structure of single layer graphene. Our systematic study addresses a wide range of bond length and bond angle variations in order to obtain insights into the energy scale associated with the formation of ripples in a graphene sheet. © 2010 Elsevier B.V. All rights reserved.

  5. Evaluation of Analytical Modeling Functions for the Phonation Onset Process

    Directory of Open Access Journals (Sweden)

    Simon Petermann

    2016-01-01

    Full Text Available The human voice originates from oscillations of the vocal folds in the larynx. The duration of the voice onset (VO), called the voice onset time (VOT), is currently under investigation as a clinical indicator for correct laryngeal functionality. Different analytical approaches for computing the VOT based on endoscopic imaging were compared to determine the most reliable method to quantify automatically the transient vocal fold oscillations during VO. Transnasal endoscopic imaging in combination with a high-speed camera (8000 fps) was applied to visualize the phonation onset process. Two different definitions of VO interval were investigated. Six analytical functions were tested that approximate the envelope of the filtered or unfiltered glottal area waveform (GAW) during phonation onset. A total of 126 recordings from nine healthy males and 210 recordings from 15 healthy females were evaluated. Three criteria were analyzed to determine the most appropriate computation approach: (1) reliability of the fit function for a correct approximation of VO; (2) consistency represented by the standard deviation of VOT; and (3) accuracy of the approximation of VO. The results suggest the computation of VOT by a fourth-order polynomial approximation in the interval between 32.2 and 67.8% of the saturation amplitude of the filtered GAW.
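
    The recommended computation (fit a fourth-order polynomial to the filtered GAW envelope and read off the time between 32.2% and 67.8% of its saturation amplitude) can be sketched as follows; the synthetic envelope and the simple threshold-crossing search are illustrative assumptions.

      import numpy as np

      fps = 8000.0
      t = np.arange(0, 0.25, 1.0 / fps)                     # 250 ms of phonation onset
      rng = np.random.default_rng(2)
      envelope = 1.0 / (1.0 + np.exp(-(t - 0.10) / 0.015))  # synthetic GAW envelope (a.u.)
      envelope += 0.01 * rng.standard_normal(t.size)

      coeffs = np.polyfit(t, envelope, deg=4)               # fourth-order polynomial approximation
      fit = np.polyval(coeffs, t)

      saturation = fit.max()
      lo, hi = 0.322 * saturation, 0.678 * saturation
      t_lo = t[np.argmax(fit >= lo)]                        # first crossing of the 32.2% level
      t_hi = t[np.argmax(fit >= hi)]                        # first crossing of the 67.8% level
      print(f"estimated VOT = {(t_hi - t_lo) * 1e3:.1f} ms")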

  6. A Tiered Control Plane Model for Service Function Chaining Isolation

    Directory of Open Access Journals (Sweden)

    Håkon Gunleifsen

    2018-06-01

    Full Text Available This article presents an architecture for encryption automation in interconnected Network Function Virtualization (NFV) domains. Current NFV implementations are designed for deployment within trusted domains, where overlay networks with static trusted links are utilized for enabling network security. Nevertheless, within a Service Function Chain (SFC), Virtual Network Function (VNF) flows cannot be isolated and end-to-end encrypted because each VNF requires direct access to the overall SFC data-flow. This restricts both end-users and Service Providers from enabling end-to-end security and from extending VNF isolation within the SFC data traffic. Encrypting data flows on a per-flow basis results in an extensive number of secure tunnels, which cannot scale efficiently with manual configuration. Additionally, creating secure data plane tunnels between NFV providers requires secure exchange of key parameters and the establishment of an east–west control plane protocol. In this article, we present an architecture focusing on these two problems, investigating how overlay networks can be created, isolated, and secured dynamically. Accordingly, we propose an architecture for automated establishment of encrypted tunnels in NFV, which introduces a novel, tiered east–west communication channel between network controllers in a multi-domain environment.

  7. A functional model for characterizing long-distance movement behaviour

    Science.gov (United States)

    Buderman, Frances E.; Hooten, Mevin B.; Ivan, Jacob S.; Shenk, Tanya M.

    2016-01-01

    Advancements in wildlife telemetry techniques have made it possible to collect large data sets of highly accurate animal locations at a fine temporal resolution. These data sets have prompted the development of a number of statistical methodologies for modelling animal movement. Telemetry data sets are often collected for purposes other than fine-scale movement analysis. These data sets may differ substantially from those that are collected with technologies suitable for fine-scale movement modelling and may consist of locations that are irregular in time, are temporally coarse or have large measurement error. These data sets are time-consuming and costly to collect but may still provide valuable information about movement behaviour. We developed a Bayesian movement model that accounts for error from multiple data sources as well as movement behaviour at different temporal scales. The Bayesian framework allows us to calculate derived quantities that describe temporally varying movement behaviour, such as residence time, speed and persistence in direction. The model is flexible, easy to implement and computationally efficient. We apply this model to data from Colorado Canada lynx (Lynx canadensis) and use derived quantities to identify changes in movement behaviour.

  8. A Simple Beta-Function Model for Soil-Water Repellency as a Function of Water and Organic Carbon Contents

    DEFF Research Database (Denmark)

    Karunarathna, Anurudda Kumara; Kawamoto, Ken; Møldrup, Per

    2010-01-01

    Soil-water content (θ) and soil organic carbon (SOC) are key factors controlling the occurrence and magnitude of soil-water repellency (WR). Although expressions have recently been proposed to describe the nonlinear variation of WR with θ, the inclusion of easily measurable parameters in predictive...... conditions for 19 soils were used to test the model. The beta function successfully reproduced all the measured soil-water repellency characteristic, α(θ), curves. Significant correlations were found between model parameters and SOC content (1%-14%). The model was independently tested against data...

  9. Computer-controlled mechanical lung model for application in pulmonary function studies

    NARCIS (Netherlands)

    A.F.M. Verbraak (Anton); J.E.W. Beneken; J.M. Bogaard (Jan); A. Versprille (Adrian)

    1995-01-01

    A computer controlled mechanical lung model has been developed for testing lung function equipment, validation of computer programs and simulation of impaired pulmonary mechanics. The construction, function and some applications are described. The physical model is constructed from two

  10. A functional integral approach without slave bosons to the Anderson model

    International Nuclear Information System (INIS)

    Nguyen Ngoc Thuan; Nguyen Toan Thang; Coqblin, B.; Bhattacharjee, A.; Hoang Anh Tuan.

    1994-06-01

    We develop, for the Periodic Anderson Model (PAM), the functional integral technique without slave bosons that was suggested by Sarker for treating the Hubbard Model. This technique allows us to obtain an analytical expression for the Green functions containing the U-dependence that is omitted in the formalism with slave bosons. (author). 9 refs

  11. An exactly solvable model of an oscillator with nonlinear coupling and zeros of Bessel functions

    Science.gov (United States)

    Dodonov, V. V.; Klimov, A. B.

    1993-01-01

    We consider an oscillator model with nonpolynomial interaction. The model admits exact solutions in two respects: the energy eigenvalues are obtained in terms of zeros of Bessel functions, considered as functions of their continuous index, and the corresponding eigenstates are obtained in terms of Lommel polynomials.

  12. About the functions of the Wigner distribution for the q-deformed harmonic oscillator model

    International Nuclear Information System (INIS)

    Atakishiev, N.M.; Nagiev, S.M.; Djafarov, E.I.; Imanov, R.M.

    2005-01-01

    Full text: A q-deformed model of the linear harmonic oscillator in the Wigner phase-space is studied. An explicit expression is derived for the Wigner probability distribution function, as well as for the Wigner distribution function of thermodynamic equilibrium for this model

  13. A Classroom Note on: Modeling Functions with the TI-83/84 Calculator

    Science.gov (United States)

    Lubowsky, Jack

    2011-01-01

    In Pre-Calculus courses, students are taught the composition and combination of functions to model physical applications. However, when combining two or more functions into a single more complicated one, students may lose sight of the physical picture which they are attempting to model. A block diagram, or flow chart, in which each block…

  14. Food supply and demand, a simulation model of the functional response of grazing ruminants

    NARCIS (Netherlands)

    Smallegange, I.M.; Brunsting, A.M.H.

    2002-01-01

    A dynamic model of the functional response is a first prerequisite to be able to bridge the gap between local feeding ecology and grazing rules that pertain to larger scales. A mechanistic model is presented that simulates the functional response, growth and grazing time of ruminants. It is based on

  15. Group-ICA model order highlights patterns of functional brain connectivity

    Directory of Open Access Journals (Sweden)

    Ahmed eAbou Elseoud

    2011-06-01

    Full Text Available Resting-state networks (RSNs) can be reliably and reproducibly detected using independent component analysis (ICA) at both individual subject and group levels. Altering ICA dimensionality (model order estimation) can have a significant impact on the spatial characteristics of the RSNs as well as their parcellation into sub-networks. Recent evidence from several neuroimaging studies suggests that the human brain has a modular hierarchical organization which resembles the hierarchy depicted by different ICA model orders. We hypothesized that functional connectivity between-group differences measured with ICA might be affected by model order selection. We investigated differences in functional connectivity using so-called dual regression as a function of ICA model order in a group of unmedicated seasonal affective disorder (SAD) patients compared to normal healthy controls. The results showed that the detected disease-related differences in functional connectivity alter as a function of ICA model order. The volume of between-group differences altered significantly as a function of ICA model order, reaching a maximum at model order 70 (which seems to be an optimal point that conveys the largest between-group difference), then stabilized afterwards. Our results show that fine-grained RSNs enable better detection of detailed disease-related functional connectivity changes. However, high model orders show an increased risk of false positives that needs to be overcome. Our findings suggest that multilevel ICA exploration of functional connectivity enables optimization of sensitivity to brain disorders.

  16. Range walk error correction and modeling on Pseudo-random photon counting system

    Science.gov (United States)

    Shen, Shanshan; Chen, Qian; He, Weiji

    2017-08-01

    Signal-to-noise ratio and depth accuracy are modeled for a pseudo-random ranging system with two random processes. The theoretical results developed herein capture the effects of code length and signal energy fluctuation and are shown to agree with Monte Carlo simulation measurements. First, the SNR is developed as a function of the code length. Using Geiger-mode avalanche photodiodes (GMAPDs), a longer code length is shown to reduce the noise effect and improve the SNR. Second, the Cramer-Rao lower bound on range accuracy is derived to show that a longer code length can bring better range accuracy. Combining the SNR model and the CRLB model, it follows that the range accuracy can be improved by increasing the code length to reduce the noise-induced error. Third, the Cramer-Rao lower bound on range accuracy is shown to converge to the previously published theories, and the Gauss range walk model is introduced into the range accuracy analysis. Experimental tests also converge to the boundary model presented in this paper. It has been shown that the depth error caused by the fluctuation of the number of detected photon counts in the laser echo pulse leads to a depth drift of the Time Point Spread Function (TPSF). Finally, a numerical fitting function is used to determine the relationship between the depth error and the photon counting ratio. The depth error due to different echo energies is calibrated so that the corrected depth accuracy is improved to 1 cm.

  17. Household time allocation model based on a group utility function

    NARCIS (Netherlands)

    Zhang, J.; Borgers, A.W.J.; Timmermans, H.J.P.

    2002-01-01

    Existing activity-based models typically assume an individual decision-making process. In household decision-making, however, interaction exists among household members and their activities during the allocation of the members' limited time. This paper, therefore, attempts to develop a new household

  18. A Functional Model for Counseling Parents of Gifted Students.

    Science.gov (United States)

    Dettmann, David F.; Colangelo, Nicholas

    1980-01-01

    The authors present a model of parent-school involvement in furthering the educational development of gifted students. The disadvantages and advantages of three counseling approaches are pointed out--parent centered approach, school centered approach, and the partnership approach. (SBH)

  19. Modelling functional and structural impact of non-synonymous ...

    African Journals Online (AJOL)

    Dr. Abdulmojeed

    2017-02-03

    Based on the PANTHER, PROVEAN and PolyPhen-2 algorithms, there ... ProSA returns a z-score that indicates overall model quality based ... atomic-style geometrical figures into the computational protocol. Bioinformatics ...

  20. Lyapunov functions for the fixed points of the Lorenz model

    International Nuclear Information System (INIS)

    Bakasov, A.A.; Govorkov, B.B. Jr.

    1992-11-01

    We have shown how explicit Lyapunov functions can be constructed in the framework of a regular procedure suggested and completed by Lyapunov a century ago (the "method of critical cases"). The method completely covers all of the practically encountered subtle cases of stability study for ordinary differential equations in which linear stability analysis fails. These subtle cases, "the critical cases" according to Lyapunov, include both bifurcations of solutions and solutions of systems with symmetry. Properly specialized, and genuinely powerful in the case of ODEs, this method of Lyapunov is formulated in simple language and should attract wide interest from the physics audience. The method leads inevitably to the construction of an explicit Lyapunov function, automatically takes the Fredholm alternative into account, and avoids infinite-step calculations. An easy and apparent physical interpretation of the Lyapunov function as a potential or as a time-dependent entropy provides more detail about the local dynamics of the system at non-equilibrium phase transition points. Another advantage is that this method of Lyapunov consists of a set of very detailed explicit prescriptions which make it easy to implement the method on a symbolic processor. In this work the Lyapunov theory for critical cases is applied to the real Lorenz equations, and it is shown, in particular, that increasing σ at the Hopf bifurcation point suppresses the contribution of one of the variables to the destabilization of the system. The relation of the method to contemporary methods and its place among them are clearly and extensively discussed. Thanks to the appendices, the paper is self-contained and does not require the reader to consult results published only in Russian. (author). 38 refs
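
    As a point of reference for the critical-case analysis, the fixed points of the Lorenz system and their ordinary linear stability can be checked numerically as below; the parameter values are the classical ones, and the sketch covers only the linear analysis that the Lyapunov construction is designed to go beyond.

      import numpy as np

      sigma, rho, beta = 10.0, 28.0, 8.0 / 3.0

      def lorenz_jacobian(x, y, z):
          """Jacobian of (sigma(y-x), x(rho-z)-y, xy-beta z) at the point (x, y, z)."""
          return np.array([[-sigma, sigma, 0.0],
                           [rho - z, -1.0, -x],
                           [y, x, -beta]])

      # The origin plus the non-trivial fixed points C+/- (they exist for rho > 1).
      r = np.sqrt(beta * (rho - 1.0))
      fixed_points = [(0.0, 0.0, 0.0), (r, r, rho - 1.0), (-r, -r, rho - 1.0)]

      for fp in fixed_points:
          eig = np.linalg.eigvals(lorenz_jacobian(*fp))
          print(fp, np.round(eig, 3))   # eigenvalues with positive real part signal instability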

  1. Neutron strength functions: the link between resolved resonances and the optical model

    International Nuclear Information System (INIS)

    Moldauer, P.A.

    1980-01-01

    Neutron strength functions and scattering radii are useful as energy and channel radius independent parameters that characterize neutron scattering resonances and provide a connection between R-matrix resonance analysis and the optical model. The choice of R-matrix channel radii is discussed, as are limitations on the accuracies of strength functions. New definitions of the p-wave strength function and scattering radius are proposed. For light nuclei, where strength functions display optical model energy variations over the resolved resonances, a doubly reduced partial neutron width is introduced for more meaningful statistical analyses of widths. The systematic behavior of strength functions and scattering radii is discussed

  2. QTL Analysis and Functional Genomics of Animal Model

    DEFF Research Database (Denmark)

    Farajzadeh, Leila

    In recent years, the use of functional genomics and next-generation sequencing technologies has increased the probability of success in studies of complex properties. The integration of large data sets from association studies, DNA resequencing, gene expression profiles and phenotypic data, for example, has enabled scientists to examine more complex interactions in connection with studies of properties and diseases. In her PhD project, Leila Farajzadeh integrated different organisational levels in biology, including genotype, phenotype, association studies, transcription profiles and genetic...

  3. Functional validation of candidate genes detected by genomic feature models

    DEFF Research Database (Denmark)

    Rohde, Palle Duun; Østergaard, Solveig; Kristensen, Torsten Nygaard

    2018-01-01

    Understanding the genetic underpinnings of complex traits requires knowledge of the genetic variants that contribute to phenotypic variability. Reliable statistical approaches are needed to obtain such knowledge. In genome-wide association studies, variants are tested for association with trait... We then functionally assessed whether the identified candidate genes affected locomotor activity by reducing gene expression using RNA interference. In five of the seven candidate genes tested, reduced gene expression altered the phenotype. The ranking of genes within the predictive GO term was highly correlated...

  4. Spectral functions for the flat plasma sheet model

    International Nuclear Information System (INIS)

    Pirozhenko, I G

    2006-01-01

    The present work is based on Bordag M et al 2005 (J. Phys. A: Math. Gen. 38 11027) where the spectral analysis of the electromagnetic field on the background of an infinitely thin flat plasma layer is carried out. The solutions to Maxwell equations with the appropriate matching conditions at the plasma layer are derived and the spectrum of electromagnetic oscillations is determined. The spectral zeta function and the integrated heat kernel are constructed for different branches of the spectrum in an explicit form. The asymptotic expansion of the integrated heat kernel at small values of the evolution parameter is derived. The local heat kernels are considered also

  5. Form factors and structure functions of hadrons in parton model

    International Nuclear Information System (INIS)

    Volkonskij, N.Yu.

    1979-01-01

    The hadron charge form factors and their relation to the deep-inelastic lepton-production structure functions in the regions of asymptotically high and small momentum transfer Q2 are studied. The nucleon and pion charge radii are calculated. The results of calculations are in good agreement with the experimental data. The K- and D-meson charge radii are estimated. In the region of asymptotically high Q2 the possibility of Drell-Yan-West relation violation is analyzed. It is shown that for pseudoscalar mesons this relation is violated. The relation between the proton and neutron form factor asymptotics is obtained

  6. Why are you telling me that? A conceptual model of the social function of autobiographical memory.

    Science.gov (United States)

    Alea, Nicole; Bluck, Susan

    2003-03-01

    In an effort to stimulate and guide empirical work within a functional framework, this paper provides a conceptual model of the social functions of autobiographical memory (AM) across the lifespan. The model delineates the processes and variables involved when AMs are shared to serve social functions. Components of the model include: lifespan contextual influences, the qualitative characteristics of memory (emotionality and level of detail recalled), the speaker's characteristics (age, gender, and personality), the familiarity and similarity of the listener to the speaker, the level of responsiveness during the memory-sharing process, and the nature of the social relationship in which the memory sharing occurs (valence and length of the relationship). These components are shown to influence the type of social function served and/or, the extent to which social functions are served. Directions for future empirical work to substantiate the model and hypotheses derived from the model are provided.

  7. A Model-Based Approach to Constructing Music Similarity Functions

    Directory of Open Access Journals (Sweden)

    Lamere Paul

    2007-01-01

    Full Text Available Several authors have presented systems that estimate the audio similarity of two pieces of music through the calculation of a distance metric, such as the Euclidean distance, between spectral features calculated from the audio, related to the timbre or pitch of the signal. These features can be augmented with other, temporally or rhythmically based features such as zero-crossing rates, beat histograms, or fluctuation patterns to form a more well-rounded music similarity function. It is our contention that perceptual or cultural labels, such as the genre, style, or emotion of the music, are also very important features in the perception of music. These labels help to define complex regions of similarity within the available feature spaces. We demonstrate a machine-learning-based approach to the construction of a similarity metric, which uses this contextual information to project the calculated features into an intermediate space where a music similarity function that incorporates some of the cultural information may be calculated.

  8. Modeling charged defects inside density functional theory band gaps

    International Nuclear Information System (INIS)

    Schultz, Peter A.; Edwards, Arthur H.

    2014-01-01

    Density functional theory (DFT) has emerged as an important tool to probe microscopic behavior in materials. The fundamental band gap defines the energy scale for charge transition energy levels of point defects in ionic and covalent materials. The eigenvalue gap between occupied and unoccupied states in conventional DFT, the Kohn–Sham gap, is often half or less of the experimental band gap, seemingly precluding quantitative studies of charged defects. Applying explicit and rigorous control of charge boundary conditions in supercells, we find that calculations of defect energy levels derived from total energy differences give accurate predictions of charge transition energy levels in Si and GaAs, unhampered by a band gap problem. The GaAs system provides a good theoretical laboratory for investigating band gap effects in defect level calculations: depending on the functional and pseudopotential, the Kohn–Sham gap can be as large as 1.1 eV or as small as 0.1 eV. We find that the effective defect band gap, the computed range in defect levels, is mostly insensitive to the Kohn–Sham gap, demonstrating it is often possible to use conventional DFT for quantitative studies of defect chemistry governing interesting materials behavior in semiconductors and oxides despite a band gap problem

  9. Nucleon deep-inelastic structure functions in a quark model with factorizability assumptions

    International Nuclear Information System (INIS)

    Linkevich, A.D.; Skachkov, N.B.

    1979-01-01

    A formula for the structure functions of deep-inelastic electron scattering on the nucleon is derived, using a dynamic model of factorizing quark amplitudes. It is found that, at large values of the kinematic variable x, the structure functions decrease as the squared momentum transfer Q2 increases, while at certain values of x an increase of the structure functions is found. Comparison with experimental data shows good agreement of the model with experiment

  10. A statistical model of structure functions and quantum chromodynamics

    International Nuclear Information System (INIS)

    Mac, E.; Ugaz, E.; Universidad Nacional de Ingenieria, Lima

    1989-01-01

    We consider a model for the x-dependence of the quark distributions in the proton. Within the context of simple statistical assumptions, we obtain the parton densities in the infinite momentum frame. In a second step, lowest order QCD corrections are incorporated into these distributions. Crude, but reasonable, agreement with experiment is found for the F2, valence, and q, anti-q distributions for x ≳ 0.2. (orig.)

  11. Improving the Functional Diagnostic Process using Dynamic Master Logic Diagram (DMLD) Modeling Strategy

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2008-01-01

    In recent years, state-based functional diagnostic systems have gained growing attention among model-based diagnostic systems. They have been used to diagnose new faults of complex systems. On the other hand, a main point raised against them is their subjectivity and the inability to reuse the knowledge gathered from one engineer by others. Different methods have been suggested to solve these problems. Along the same lines, the suggested functional diagnostic system introduces the use of the Dynamic Master Logic Diagram (DMLD) modeling strategy for functional diagnosis. DMLD has proven its power as a good modeling strategy: it can model the functions of the system's components in terms of a set of primitives defined for the application domain. The suggested system uses the DMLD technique to model the elementary functions of the system according to the defined primitives of its domain, so the modeling process is relatively invariant from one modeler to another. The functions defined can also be reused by other users in the domain for solving different problems, and the approach can deal with complex systems in a flexible manner. Thus, the proposed system can improve the performance of state-based functional diagnostic systems and can be applied to a wide range of complex systems. It has been applied to a fluid system as a case study of real-time systems and has proved its success as a powerful practical state-based functional diagnostic system

  12. Alternative Functional In Vitro Models of Human Intestinal Epithelia

    Directory of Open Access Journals (Sweden)

    Amanda L Kauffman

    2013-07-01

    Full Text Available Physiologically relevant sources of absorptive intestinal epithelial cells are crucial for human drug transport studies. Human adenocarcinoma-derived intestinal cell lines, such as Caco-2, offer conveniences of easy culture maintenance and scalability, but do not fully recapitulate in vivo intestinal phenotypes. Additional sources of renewable, physiologically relevant human intestinal cells would provide a much needed tool for drug discovery and intestinal physiology. We sought to evaluate and compare two alternative sources of human intestinal cells, commercially available primary human intestinal epithelial cells (hInEpCs) and induced pluripotent stem cell (iPSC)-derived intestinal cells, to Caco-2, for use in in vitro transwell monolayer intestinal transport assays. To achieve this for iPSC-derived cells, our previously described 3-dimensional intestinal organogenesis method was adapted to transwell differentiation. Intestinal cells were assessed by marker expression through immunocytochemical and mRNA expression analyses, by monolayer integrity through Transepithelial Electrical Resistance (TEER) measurements and molecule permeability, and by functionality, taking advantage of the well-characterized intestinal transport mechanisms. In most cases, marker expression for primary hInEpCs and iPSC-derived cells appeared to be as good as or better than Caco-2. Furthermore, transwell monolayers exhibited high TEER with low permeability. Primary hInEpCs showed molecule efflux indicative of P-glycoprotein transport. Primary hInEpCs and iPSC-derived cells also showed neonatal Fc receptor-dependent binding of immunoglobulin G variants. Primary hInEpCs and iPSC-derived intestinal cells exhibit expected marker expression and demonstrate basic functional monolayer formation, similar to or better than Caco-2. These cells could offer an alternative source of human intestinal cells for understanding normal intestinal epithelial physiology and drug transport.

  13. Finite-element modeling of the human neurocranium under functional anatomical aspects.

    Science.gov (United States)

    Mall, G; Hubig, M; Koebke, J; Steinbuch, R

    1997-08-01

    Due to its functional significance the human skull plays an important role in biomechanical research. The present work describes a new Finite-Element model of the human neurocranium. The dry skull of a middle-aged woman served as a pattern. The model was developed using only the preprocessor (Mentat) of a commercial FE-system (Marc). Unlike that of other FE models of the human skull mentioned in the literature, the geometry in this model was designed according to functional anatomical findings. Functionally important morphological structures representing loci minoris resistentiae, especially the foramina and fissures of the skull base, were included in the model. The results of two linear static loadcase analyses in the region of the skull base underline the importance of modeling from the functional anatomical point of view.

  14. Elucidation of spin echo small angle neutron scattering correlation functions through model studies.

    Science.gov (United States)

    Shew, Chwen-Yang; Chen, Wei-Ren

    2012-02-14

    Several single-modal Debye correlation functions that approximate part of the overall Debye correlation function of liquids are closely examined for elucidating their behavior in the corresponding spin echo small angle neutron scattering (SESANS) correlation functions. We find that the maximum length scale of a Debye correlation function is identical to that of its SESANS correlation function. For discrete Debye correlation functions, the peak of the SESANS correlation function emerges at their first discrete point, whereas for continuous Debye correlation functions with greater width, the peak position shifts to a greater value. In both cases, the intensity and shape of the peak of the SESANS correlation function are determined by the width of the Debye correlation functions. Furthermore, we mimic the intramolecular and intermolecular Debye correlation functions of liquids composed of interacting particles based on a simple model to elucidate their competition in the SESANS correlation function. Our calculations show that the first local minimum of a SESANS correlation function can be negative or positive. By adjusting the spatial distribution of the intermolecular Debye function in the model, the calculated SESANS spectra exhibit a profile consistent with that of hard-sphere and sticky-hard-sphere liquids predicted by more sophisticated liquid state theory and computer simulation. © 2012 American Institute of Physics

  15. Root structural and functional dynamics in terrestrial biosphere models--evaluation and recommendations.

    Science.gov (United States)

    Warren, Jeffrey M; Hanson, Paul J; Iversen, Colleen M; Kumar, Jitendra; Walker, Anthony P; Wullschleger, Stan D

    2015-01-01

    There is a wide breadth of root function within ecosystems that should be considered when modeling the terrestrial biosphere. Root structure and function are closely associated with control of plant water and nutrient uptake from the soil, plant carbon (C) assimilation, partitioning and release to the soils, and control of biogeochemical cycles through interactions within the rhizosphere. Root function is extremely dynamic and dependent on internal plant signals, root traits and morphology, and the physical, chemical and biotic soil environment. While plant roots have significant structural and functional plasticity to changing environmental conditions, their dynamics are noticeably absent from the land component of process-based Earth system models used to simulate global biogeochemical cycling. Their dynamic representation in large-scale models should improve model veracity. Here, we describe current root inclusion in models across scales, ranging from mechanistic processes of single roots to parameterized root processes operating at the landscape scale. With this foundation we discuss how existing and future root functional knowledge, new data compilation efforts, and novel modeling platforms can be leveraged to enhance root functionality in large-scale terrestrial biosphere models by improving parameterization within models, and introducing new components such as dynamic root distribution and root functional traits linked to resource extraction. No claim to original US Government works. New Phytologist © 2014 New Phytologist Trust.

  16. Extreme Compression and Modeling of Bidirectional Texture Function

    Czech Academy of Sciences Publication Activity Database

    Haindl, Michal; Filip, Jiří

    2007-01-01

    Vol. 29, No. 10 (2007), pp. 1859-1865 ISSN 0162-8828 R&D Projects: GA AV ČR 1ET400750407; GA MŠk 1M0572; GA AV ČR IAA2075302 EU Projects: European Commission(XE) 507752 - MUSCLE Grant - others: GA MŠk(CZ) 2C06019 Institutional research plan: CEZ:AV0Z10750506 Keywords: Rough texture * 3D texture * BTF * texture synthesis * texture modeling * data compression Subject RIV: BD - Theory of Information Impact factor: 3.579, year: 2007 http://doi.ieeecomputersociety.org/10.1109/TPAMI.2007.1139

  17. A dynamic model of functioning of a bank

    Science.gov (United States)

    Malafeyev, Oleg; Awasthi, Achal; Zaitseva, Irina; Rezenkov, Denis; Bogdanova, Svetlana

    2018-04-01

    In this paper, we analyze dynamic programming as a novel approach to solving the problem of maximizing the profits of a bank. The mathematical model of the problem and a description of the bank's operation are given in this paper. The problem is then approached using the method of dynamic programming, which ensures that the solutions obtained are globally optimal and numerically stable. The optimization process is set up as a discrete multi-stage decision process and solved with the help of dynamic programming.

  18. Can Microbial Ecology and Mycorrhizal Functioning Inform Climate Change Models?

    Energy Technology Data Exchange (ETDEWEB)

    Hofmockel, Kirsten; Hobbie, Erik

    2017-07-31

    Our funded research focused on soil organic matter dynamics and plant-microbe interactions by examining the role of belowground processes and mechanisms across scales, including decomposition of organic molecules, microbial interactions, and plant-microbe interactions associated with a changing climate. Research foci included mycorrhizal mediated priming of soil carbon turnover, organic N use and depolymerization by free-living microbes and mycorrhizal fungi, and the use of isotopes as additional constraints for improved modeling of belowground processes. This work complemented the DOE’s mandate to understand both the consequences of atmospheric and climatic change for key ecosystems and the feedbacks on C cycling.

  19. The Sandia MEMS Passive Shock Sensor : FY08 testing for functionality, model validation, and technology readiness.

    Energy Technology Data Exchange (ETDEWEB)

    Walraven, Jeremy Allen; Blecke, Jill; Baker, Michael Sean; Clemens, Rebecca C.; Mitchell, John Anthony; Brake, Matthew Robert; Epp, David S.; Wittwer, Jonathan W.

    2008-10-01

    This report summarizes the functional, model validation, and technology readiness testing of the Sandia MEMS Passive Shock Sensor in FY08. Functional testing of a large number of revision 4 parts showed robust and consistent performance. Model validation testing helped tune the models to match data well and identified several areas for future investigation related to high frequency sensitivity and thermal effects. Finally, technology readiness testing demonstrated the integrated elements of the sensor under realistic environments.

  20. Optical Imaging and Radiometric Modeling and Simulation

    Science.gov (United States)

    Ha, Kong Q.; Fitzmaurice, Michael W.; Moiser, Gary E.; Howard, Joseph M.; Le, Chi M.

    2010-01-01

    OPTOOL software is a general-purpose optical systems analysis tool that was developed to offer a solution to problems associated with computational programs written for the James Webb Space Telescope optical system. It integrates existing routines into coherent processes, and provides a structure with reusable capabilities that allow additional processes to be quickly developed and integrated. It has an extensive graphical user interface, which makes the tool more intuitive and friendly. OPTOOL is implemented using MATLAB with a Fourier optics-based approach for point spread function (PSF) calculations. It features parametric and Monte Carlo simulation capabilities, and uses a direct integration calculation to permit high spatial sampling of the PSF. Exit pupil optical path difference (OPD) maps can be generated using combinations of Zernike polynomials or shaped power spectral densities. The graphical user interface allows rapid creation of arbitrary pupil geometries, and entry of all other modeling parameters to support basic imaging and radiometric analyses. OPTOOL provides the capability to generate wavefront-error (WFE) maps for arbitrary grid sizes. These maps are 2D arrays containing digital sampled versions of functions ranging from Zernike polynomials to combination of sinusoidal wave functions in 2D, to functions generated from a spatial frequency power spectral distribution (PSD). It also can generate optical transfer functions (OTFs), which are incorporated into the PSF calculation. The user can specify radiometrics for the target and sky background, and key performance parameters for the instrument's focal plane array (FPA). This radiometric and detector model setup is fairly extensive, and includes parameters such as zodiacal background, thermal emission noise, read noise, and dark current. The setup also includes target spectral energy distribution as a function of wavelength for polychromatic sources, detector pixel size, and the FPA's charge
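
    The Fourier-optics core of such a PSF calculation (a pupil function combined with an OPD map and propagated with a Fourier transform) fits in a few lines; the circular pupil, the defocus-like OPD, and the sampling below are illustrative choices, not OPTOOL's actual parameters.

      import numpy as np

      n = 512
      x = np.linspace(-1, 1, n)
      X, Y = np.meshgrid(x, x)
      R2 = X**2 + Y**2

      pupil = (R2 <= 1.0).astype(float)          # unobstructed circular aperture
      opd_waves = 0.1 * (2 * R2 - 1) * pupil     # defocus-like wavefront error, in waves

      field = pupil * np.exp(2j * np.pi * opd_waves)
      psf = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field))))**2
      psf /= psf.sum()                           # normalized point spread function

      otf = np.fft.fft2(np.fft.ifftshift(psf))   # OTF as the Fourier transform of the PSF
      print(psf.max(), abs(otf[0, 0]))           # otf[0, 0] equals 1 after normalization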

  1. Calculating kaon fragmentation functions from the Nambu-Jona-Lasinio jet model

    International Nuclear Information System (INIS)

    Matevosyan, Hrayr H.; Thomas, Anthony W.; Bentz, Wolfgang

    2011-01-01

    The Nambu-Jona-Lasinio (NJL)-jet model provides a sound framework for calculating the fragmentation functions in an effective chiral quark theory, where the momentum and isospin sum rules are satisfied without the introduction of ad hoc parameters. Earlier studies of the pion fragmentation functions using the NJL model within this framework showed qualitative agreement with the empirical parametrizations. Here we extend the NJL-jet model by including the strange quark. The corrections to the pion fragmentation functions and corresponding kaon fragmentation functions are calculated using the elementary quark to quark-meson fragmentation functions from NJL. The results for the kaon fragmentation functions exhibit a qualitative agreement with the empirical parametrizations, while the unfavored strange quark fragmentation to pions is shown to be of the same order of magnitude as the unfavored light quark. The results of these studies are expected to provide important guidance for the analysis of a large variety of semi-inclusive data.

  2. Diversity-interaction modeling: estimating contributions of species identities and interactions to ecosystem function

    DEFF Research Database (Denmark)

    Kirwan, L; Connolly, J; Finn, J A

    2009-01-01

    We develop a modeling framework that estimates the effects of species identity and diversity on ecosystem function and permits prediction of the diversity-function relationship across different types of community composition. Rather than just measure an overall effect of diversity, we separately ... to the roles of evenness, functional groups, and functional redundancy. These more parsimonious descriptions can be especially useful in identifying general diversity-function relationships in communities with large numbers of species. We provide an example of the application of the modeling framework ... These models describe community-level performance and thus do not require separate measurement of the performance of individual species. This flexible modeling approach can be tailored to test many hypotheses in biodiversity research and can suggest the interaction mechanisms that may be acting ...

  3. Gluon field strength correlation functions within a constrained instanton model

    International Nuclear Information System (INIS)

    Dorokhov, A.E.; Esaibegyan, S.V.; Maximov, A.E.; Mikhailov, S.V.

    2000-01-01

    We suggest a constrained instanton (CI) solution in the physical QCD vacuum which is described by large-scale vacuum field fluctuations. This solution decays exponentially at large distances. It is stable only if the interaction of the instanton with the background vacuum field is small and additional constraints are introduced. The CI solution is explicitly constructed in the ansatz form, and the two-point vacuum correlator of the gluon field strengths is calculated in the framework of the effective instanton vacuum model. At small distances the results are qualitatively similar to the single instanton case; in particular, the D1 invariant structure is small, which is in agreement with the lattice calculations. (orig.)

  4. STRUCTURAL AND FUNCTIONAL MODEL OF CLOUD ORIENTED LEARNING ENVIRONMENT FOR BACHELORS OF INFORMATICS TRAINING

    Directory of Open Access Journals (Sweden)

    Tetiana A. Vakaliuk

    2017-06-01

    Full Text Available The article summarizes the essence of the category "model". The main types of models used in educational research (structural, functional, and structural-functional) are presented, together with the basic requirements for building these types of models. The national experience in building models and designing cloud-based learning environments of educational institutions (both higher and secondary) is analyzed. A structural and functional model of a cloud-based learning environment for bachelors of informatics training is presented. We also describe each component of the cloud-based learning environment model for bachelors of informatics training: target, managerial, organizational, content and methodical, communication, technological and productive. It is concluded that the COLE should address all major tasks that relate to higher education institutions.

  5. Reduced Rank Mixed Effects Models for Spatially Correlated Hierarchical Functional Data

    KAUST Repository

    Zhou, Lan

    2010-03-01

    Hierarchical functional data are widely seen in complex studies where sub-units are nested within units, which in turn are nested within treatment groups. We propose a general framework of functional mixed effects model for such data: within unit and within sub-unit variations are modeled through two separate sets of principal components; the sub-unit level functions are allowed to be correlated. Penalized splines are used to model both the mean functions and the principal components functions, where roughness penalties are used to regularize the spline fit. An EM algorithm is developed to fit the model, while the specific covariance structure of the model is utilized for computational efficiency to avoid storage and inversion of large matrices. Our dimension reduction with principal components provides an effective solution to the difficult tasks of modeling the covariance kernel of a random function and modeling the correlation between functions. The proposed methodology is illustrated using simulations and an empirical data set from a colon carcinogenesis study. Supplemental materials are available online.

  6. Testing the efficacy of downscaling in species distribution modelling: a comparison between MaxEnt and Favourability Function models

    Directory of Open Access Journals (Sweden)

    Olivero, J.

    2016-03-01

    Full Text Available Statistical downscaling is used to improve the knowledge of spatial distributions from broad-scale to fine-scale maps with higher potential for conservation planning. We assessed the effectiveness of downscaling in two commonly used species distribution models: Maximum Entropy (MaxEnt) and the Favourability Function (FF). We used atlas data (10 x 10 km) of the fire salamander Salamandra salamandra distribution in southern Spain to derive models at a 1 x 1 km resolution. Downscaled models were assessed using an independent dataset of the species' distribution at 1 x 1 km. The Favourability model showed better downscaling performance than the MaxEnt model, and the models that were based on linear combinations of environmental variables performed better than models allowing higher flexibility. The Favourability model minimized model overfitting compared to the MaxEnt model.

  7. Testing the efficacy of downscaling in species distribution modelling: a comparison between MaxEnt and Favourability Function models

    Energy Technology Data Exchange (ETDEWEB)

    Olivero, J.; Toxopeus, A.G.; Skidmore, A.K.; Real, R.

    2016-07-01

    Statistical downscaling is used to improve the knowledge of spatial distributions from broad–scale to fine–scale maps with higher potential for conservation planning. We assessed the effectiveness of downscaling in two commonly used species distribution models: Maximum Entropy (MaxEnt) and the Favourability Function (FF). We used atlas data (10 x 10 km) of the fire salamander Salamandra salamandra distribution in southern Spain to derive models at a 1 x 1 km resolution. Downscaled models were assessed using an independent dataset of the species’ distribution at 1 x 1 km. The Favourability model showed better downscaling performance than the MaxEnt model, and the models that were based on linear combinations of environmental variables performed better than models allowing higher flexibility. The Favourability model minimized model overfitting compared to the MaxEnt model. (Author)
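
    The Favourability Function referred to here is commonly written (following Real et al.) as F = (P/(1-P)) / (n1/n0 + P/(1-P)), which removes the effect of sample prevalence from a logistic-regression probability P; the small sketch below assumes that formulation and uses invented prevalence figures.

      import numpy as np

      def favourability(p, n_presences, n_absences):
          """Prevalence-corrected favourability from logistic-regression probabilities p.

          Assumes the formulation F = (P/(1-P)) / (n1/n0 + P/(1-P)).
          """
          p = np.asarray(p, dtype=float)
          odds = p / (1.0 - p)
          return odds / (n_presences / n_absences + odds)

      # Example: model probabilities from a hypothetical survey with 120 presences and 480 absences.
      print(favourability([0.1, 0.2, 0.5, 0.8], n_presences=120, n_absences=480))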

  8. End to end distribution functions for a class of polymer models

    International Nuclear Information System (INIS)

    Khandekar, D.C.; Wiegel, F.W.

    1988-01-01

    The two-point end-to-end distribution functions for a class of polymer models have been obtained within the first cumulant approximation. The trial distribution function for this purpose is chosen to correspond to a general non-local quadratic functional. An exact expression for the trial distribution function is obtained. It is pointed out that these trial distribution functions themselves can be used to study certain aspects of the configurational behaviours of polymers. These distribution functions are also used to obtain the averaged mean square size ⟨R2⟩ of a polymer characterized by the non-local quadratic potential energy functional. Finally, we derive an analytic expression for ⟨R2⟩ of a polyelectrolyte model and show that for a long polymer a weak electrostatic interaction does not change the behaviour of ⟨R2⟩ from that of a free polymer. (author). 16 refs

  9. Infinite Relational Modeling of Functional Connectivity in Resting State fMRI

    DEFF Research Database (Denmark)

    Mørup, Morten; Madsen, Kristoffer H.; Dogonowski, Anne Marie

    2010-01-01

    Functional magnetic resonance imaging (fMRI) can be applied to study the functional connectivity of the neural elements which form complex networks at a whole brain level. Most analyses of functional resting state networks (RSN) have been based on the analysis of correlation between the temporal dynamics of various regions of the brain. While these models can identify coherently behaving groups in terms of correlation, they give little insight into how these groups interact. In this paper we take a different view on the analysis of functional resting state networks. Starting from the definition of resting state as functional coherent groups, we search for functional units of the brain that communicate with other parts of the brain in a coherent manner as measured by mutual information. We use the infinite relational model (IRM) to quantify functional coherent groups of resting state networks...

  10. Secure and Resilient Functional Modeling for Navy Cyber-Physical Systems

    Science.gov (United States)

    2017-05-24

    ... control systems, it was determined that this project will employ the model of a Ship Chilled Water Distribution System as a central use case. (Siemens Corporation Corporate Technology; FY17 Quarter 1 technical progress report; approved for public release, distribution unlimited.)

  11. Completely X-symmetric S-matrices corresponding to theta functions and models of statistical mechanics

    International Nuclear Information System (INIS)

    Chudnovsky, D.V.; Chudnovsky, G.V.

    1981-01-01

    We consider general expressions of factorized S-matrices with Abelian symmetry expressed in terms of theta-functions. These expressions arise from representations of the Heisenberg group. New examples of factorized S-matrices lead to a large class of completely integrable models of statistical mechanics which generalize the XYZ-model and the eight-vertex model. (orig.)

  12. A Linear Programming Model to Optimize Various Objective Functions of a Foundation Type State Support Program.

    Science.gov (United States)

    Matzke, Orville R.

    The purpose of this study was to formulate a linear programming model to simulate a foundation type support program and to apply this model to a state support program for the public elementary and secondary school districts in the State of Iowa. The model was successful in producing optimal solutions to five objective functions proposed for…

  13. A Note on the Item Information Function of the Four-Parameter Logistic Model

    Science.gov (United States)

    Magis, David

    2013-01-01

    This article focuses on the four-parameter logistic (4PL) model as an extension of the usual three-parameter logistic (3PL) model with an upper asymptote possibly different from 1. For a given item with fixed item parameters, Lord derived the value of the latent ability level that maximizes the item information function under the 3PL model. The…

  14. Magnetic induction pneumography: a planar coil system for continuous monitoring of lung function via contactless measurements

    Directory of Open Access Journals (Sweden)

    Doga Gursoy

    2010-11-01

    Full Text Available Continuous monitoring of lung function is of particular interest for mechanically ventilated patients during critical care. Recent studies have shown that magnetic induction measurements with single coils provide signals which are correlated with the lung dynamics; this idea is extended here by using a 5 by 5 planar coil matrix for data acquisition in order to image the regional thoracic conductivity changes. The coil matrix can easily be mounted onto the patient bed, and thus the problems faced in methods that use contacting sensors can readily be eliminated and patient comfort can be improved. In the proposed technique, the data are acquired by successively exciting each coil in order to induce an eddy-current density within the dorsal tissues and measuring the corresponding response magnetic field strength with the remaining coils. The recorded set of data is then used to reconstruct the internal conductivity distribution by means of algorithms that minimize the residual norm between the estimated and measured data. To investigate the feasibility of the technique, the sensitivity maps and the point spread functions at different locations and depths were studied. To simulate a realistic scenario, a chest model was generated by segmenting the tissue boundaries from NMR images. Reconstructions of the ventilation distribution and of the development of an edematous lung injury are presented. The imaging artifacts caused by either incorrect positioning of the patient or expansion of the chest wall due to breathing are illustrated by simulations.
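    The reconstruction step described above (minimizing the residual norm between estimated and measured data) can be sketched as a regularized linear least-squares problem. The sensitivity matrix, voxel count, and choice of Tikhonov regularization below are illustrative assumptions rather than the authors' exact algorithm.

        import numpy as np

        def reconstruct(J, d, alpha=1e-3):
            """Tikhonov-regularized least squares: argmin_s ||J s - d||^2 + alpha ||s||^2."""
            n = J.shape[1]
            return np.linalg.solve(J.T @ J + alpha * np.eye(n), J.T @ d)

        # Toy example: J maps voxel conductivity changes to coil measurements (sizes assumed).
        rng = np.random.default_rng(1)
        J = rng.normal(size=(25 * 24, 400))        # 25 exciting coils x 24 receivers, 400 voxels
        s_true = np.zeros(400)
        s_true[180:220] = 1.0                      # a localized conductivity change
        d = J @ s_true + 0.01 * rng.normal(size=J.shape[0])
        s_hat = reconstruct(J, d)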

  15. General atomistic approach for modeling metal-semiconductor interfaces using density functional theory and nonequilibrium Green's function

    DEFF Research Database (Denmark)

    Stradi, Daniele; Martinez, Umberto; Blom, Anders

    2016-01-01

    Metal-semiconductor contacts are a pillar of modern semiconductor technology. Historically, their microscopic understanding has been hampered by the inability of traditional analytical and numerical methods to fully capture the complex physics governing their operating principles. Here we introduce an atomistic approach based on density functional theory and nonequilibrium Green's function, which includes all the relevant ingredients required to model realistic metal-semiconductor interfaces and allows for a direct comparison between theory and experiments via I-Vbias curve simulations. We apply ... interfaces as it neglects electron tunneling, and that finite-size atomistic models have problems in describing these interfaces in the presence of doping due to a poor representation of space-charge effects. Conversely, the present method deals effectively with both issues, thus representing a valid...

  16. Employee subjective well-being and physiological functioning: An integrative model.

    Science.gov (United States)

    Kuykendall, Lauren; Tay, Louis

    2015-01-01

    Research shows that worker subjective well-being influences physiological functioning, an early signal of poor health outcomes. While several theoretical perspectives provide insights on this relationship, the literature lacks an integrative framework explaining it. We develop a conceptual model explaining the link between subjective well-being and physiological functioning in the context of work. Integrating positive psychology and occupational stress perspectives, our model explains the relationship between subjective well-being and physiological functioning as a result of the direct influence of subjective well-being on physiological functioning and of their common relationships with work stress and personal resources, both of which are influenced by job conditions.

  17. A FUNCTIONAL MODEL OF COMPUTER-ORIENTED LEARNING ENVIRONMENT OF A POST-DEGREE PEDAGOGICAL EDUCATION

    Directory of Open Access Journals (Sweden)

    Kateryna R. Kolos

    2014-06-01

    Full Text Available The study substantiates the need for a systematic study of the functioning of a computer-oriented learning environment for post-degree pedagogical education; it formulates a definition of the term “functional model of a computer-oriented learning environment of post-degree pedagogical education”; and it builds such a functional model in accordance with the functions of business, information and communication technology, academic and administrative staff, and the peculiarities of training courses for teachers.

  18. Functional integral and effective Hamiltonian t-J-V model of strongly correlated electron system

    International Nuclear Information System (INIS)

    Belinicher, V.I.; Chertkov, M.V.

    1990-09-01

    The functional integral representation for the generating functional of the t-J-V model is obtained. In the case close to half filling, this functional integral representation reduces the conventional Hamiltonian of the t-J-V model to the Hamiltonian of a system containing holes and spins 1/2 at each lattice site. This effective Hamiltonian coincides with the one obtained by one of the authors by a different method. This Hamiltonian and its dynamical variables can be used for the description of different magnetic phases of the t-J-V model. (author). 16 refs

  19. The distance-decay function of geographical gravity model: Power law or exponential law?

    International Nuclear Information System (INIS)

    Chen, Yanguang

    2015-01-01

    Highlights: •The distance-decay exponent of the gravity model is a fractal dimension. •Entropy maximization accounts for the gravity model based on power law decay. •Allometric scaling relations relate gravity models with spatial interaction models. •The four-parameter gravity models have dual mathematical expressions. •The inverse power law is the most probable distance-decay function. -- Abstract: The distance-decay function of the geographical gravity model is originally an inverse power law, which suggests a scaling process in spatial interaction. However, the distance exponent of the model cannot be reasonably explained with the ideas from Euclidean geometry. This results in a dimension dilemma in geographical analysis. Consequently, a negative exponential function was used to replace the inverse power function to serve for a distance-decay function. But a new puzzle arose that the exponential-based gravity model goes against the first law of geography. This paper is devoted for solving these kinds of problems by mathematical reasoning and empirical analysis. New findings are as follows. First, the distance exponent of the gravity model is demonstrated to be a fractal dimension using the geometric measure relation. Second, the similarities and differences between the gravity models and spatial interaction models are revealed using allometric relations. Third, a four-parameter gravity model possesses a symmetrical expression, and we need dual gravity models to describe spatial flows. The observational data of China's cities and regions (29 elements indicative of 841 data points) in 2010 are employed to verify the theoretical inferences. A conclusion can be reached that the geographical gravity model based on power-law decay is more suitable for analyzing large, complex, and scale-free regional and urban systems. This study lends further support to the suggestion that the underlying rationale of fractal structure is entropy maximization. Moreover
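    The two competing distance-decay forms discussed above can be written side by side in standard gravity-model notation (chosen here for illustration):

        T_{ij} = K\,\frac{P_i P_j}{r_{ij}^{\,b}} \quad\text{(inverse power law)},
        \qquad
        T_{ij} = K\,P_i P_j\, e^{-r_{ij}/r_0} \quad\text{(negative exponential)},

    where T_{ij} is the interaction flow between places i and j, P_i and P_j are their sizes, r_{ij} is the distance between them, and K, b, r_0 are parameters. The paper's argument is that the exponent b of the power-law form is a fractal dimension, which makes that form the more suitable one for scale-free urban and regional systems.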

  20. Modeling multivariate time series on manifolds with skew radial basis functions.

    Science.gov (United States)

    Jamshidi, Arta A; Kirby, Michael J

    2011-01-01

    We present an approach for constructing nonlinear empirical mappings from high-dimensional domains to multivariate ranges. We employ radial basis functions and skew radial basis functions for constructing a model using data that are potentially scattered or sparse. The algorithm progresses iteratively, adding a new function at each step to refine the model. The placement of the functions is driven by a statistical hypothesis test that accounts for correlation in the multivariate range variables. The test is applied on training and validation data and reveals nonstatistical or geometric structure when it fails. At each step, the added function is fit to data contained in a spatiotemporally defined local region to determine the parameters, in particular the scale of the local model. The scale of the function is determined by the zero crossings of the autocorrelation function of the residuals. The model parameters and the number of basis functions are determined automatically from the given data, and there is no need to initialize any ad hoc parameters save for the selection of the skew radial basis functions. Compactly supported skew radial basis functions are employed to improve model accuracy, order, and convergence properties. The extension of the algorithm to higher-dimensional ranges produces reduced-order models by exploiting the existence of correlation in the range variable data. Structure is tested not just in a single time series but between all pairs of time series. We illustrate the new methodologies using several illustrative problems, including modeling data on manifolds and the prediction of chaotic time series.
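    A minimal sketch of the kind of radial basis function expansion the abstract builds on is given below. The Gaussian bases, fixed centres, and plain least-squares fit are simplifying assumptions; the paper's skew radial basis functions, residual-autocorrelation scale selection, and correlation-aware placement test are only indicated in the comments.

        import numpy as np

        def gaussian_rbf(X, c, s):
            # Isotropic Gaussian radial basis function centred at c with scale s.
            return np.exp(-np.sum((X - c) ** 2, axis=1) / (2 * s ** 2))

        def fit_rbf_model(X, Y, centers, scales):
            # Least-squares fit of an RBF expansion to a multivariate range Y.
            # (The paper instead adds one basis at a time, uses skew RBFs, and places and scales
            # them via a hypothesis test on the residuals, which is not reproduced here.)
            Phi = np.column_stack([gaussian_rbf(X, c, s) for c, s in zip(centers, scales)])
            W, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
            return W, Phi

        # Toy usage: map a 2-D domain to a 2-D range.
        rng = np.random.default_rng(2)
        X = rng.uniform(-1, 1, size=(200, 2))
        Y = np.column_stack([np.sin(3 * X[:, 0]), np.cos(3 * X[:, 1])]) + 0.05 * rng.normal(size=(200, 2))
        centers = X[rng.choice(200, size=10, replace=False)]
        W, Phi = fit_rbf_model(X, Y, centers, scales=[0.5] * 10)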

  1. A more general model for testing measurement invariance and differential item functioning.

    Science.gov (United States)

    Bauer, Daniel J

    2017-09-01

    The evaluation of measurement invariance is an important step in establishing the validity and comparability of measurements across individuals. Most commonly, measurement invariance has been examined using one of two primary latent variable modeling approaches: the multiple groups model or the multiple-indicator multiple-cause (MIMIC) model. Both approaches offer opportunities to detect differential item functioning within multi-item scales, and thereby to test measurement invariance, but both approaches also have significant limitations. The multiple groups model allows one to examine the invariance of all model parameters, but only across levels of a single categorical individual difference variable (e.g., ethnicity). In contrast, the MIMIC model permits both categorical and continuous individual difference variables (e.g., sex and age) but permits only a subset of the model parameters to vary as a function of these characteristics. The current article argues that moderated nonlinear factor analysis (MNLFA) constitutes an alternative, more flexible model for evaluating measurement invariance and differential item functioning. We show that the MNLFA subsumes and combines the strengths of the multiple groups and MIMIC models, allowing for a full and simultaneous assessment of measurement invariance and differential item functioning across multiple categorical and/or continuous individual difference variables. The relationships between the MNLFA model and the multiple groups and MIMIC models are shown mathematically and via an empirical demonstration. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
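    Schematically, a moderated nonlinear factor analysis of the kind described lets both the item parameters and the factor distribution depend on observed covariates (the linear and log-linear link functions below are a common choice, not necessarily the article's exact specification):

        y_{ij} = \nu_j(\mathbf{x}_i) + \lambda_j(\mathbf{x}_i)\,\eta_i + \varepsilon_{ij},
        \qquad
        \nu_j(\mathbf{x}_i) = \nu_{0j} + \boldsymbol{\nu}_{1j}^{\top}\mathbf{x}_i,
        \qquad
        \lambda_j(\mathbf{x}_i) = \lambda_{0j} + \boldsymbol{\lambda}_{1j}^{\top}\mathbf{x}_i,

        \eta_i \sim \mathcal{N}\!\bigl(\alpha_0 + \boldsymbol{\alpha}_1^{\top}\mathbf{x}_i,\;
        \psi_0 \exp(\boldsymbol{\beta}^{\top}\mathbf{x}_i)\bigr),

    where x_i collects categorical and/or continuous individual difference variables (e.g., sex and age). Differential item functioning corresponds to nonzero covariate effects on the item intercepts \nu_j or loadings \lambda_j, while measurement invariance holds when those effects are absent.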

  2. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    Science.gov (United States)

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. The construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data based modeling approach.
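    The properties claimed for the Bézier-Bernstein basis (nonnegativity and partition of unity, so that the basis functions can be read as fuzzy membership functions) are easy to verify numerically. The sketch below is a generic univariate Bernstein basis of degree n, not the paper's full neurofuzzy construction.

        import numpy as np
        from math import comb

        def bernstein_basis(x, n):
            # Univariate Bernstein polynomial basis of degree n on [0, 1]:
            # B_{i,n}(x) = C(n, i) * x**i * (1 - x)**(n - i), for i = 0..n.
            x = np.asarray(x, dtype=float)
            return np.column_stack([comb(n, i) * x**i * (1 - x)**(n - i) for i in range(n + 1)])

        x = np.linspace(0.0, 1.0, 101)
        B = bernstein_basis(x, 4)
        assert np.all(B >= 0.0)                     # nonnegativity
        assert np.allclose(B.sum(axis=1), 1.0)      # partition of unity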

  3. Cross-polarization microwave radar return at severe wind conditions: laboratory model and geophysical model function.

    Science.gov (United States)

    Troitskaya, Yuliya; Abramov, Victor; Ermoshkin, Alexey; Zuikova, Emma; Kazakov, Vassily; Sergeev, Daniil; Kandaurov, Alexandr

    2014-05-01

    Satellite remote sensing is one of the main techniques for monitoring severe weather conditions over the ocean. The principal difficulty of the existing algorithms for retrieving wind, which are based on the dependence of the microwave backscattering cross-section on wind speed (the Geophysical Model Function, GMF), is its saturation at winds exceeding 25 - 30 m/s. Recently, analysis of dual- and quad-polarization C-band radar return measured from the satellite Radarsat-2 suggested that the cross-polarized radar return has much higher sensitivity to the wind speed than co-polarized backscattering [1] and retains sensitivity to wind speed at hurricane conditions [2]. Since complete collocation of these data was not possible and the time difference between flight legs and SAR image acquisition was up to 3 hours, these two sets of data were compared in [2] only statistically. The main purpose of this paper is investigation of the functional dependence of the cross-polarized radar cross-section on the wind speed in a laboratory experiment. Since the cross-polarized radar return is formed by scattering at small-scale structures of the air-sea interface (short-crested waves, foam, sprays, etc.), which are well reproduced in laboratory conditions, an approach based on a laboratory experiment on radar scattering of microwaves at the water surface under hurricane wind is feasible. The experiments were performed in the wind-wave flume located on top of the Large Thermostratified Tank of the Institute of Applied Physics; the flume has a straight working section of 10 m and an operating cross section of 0.40 x 0.40 sq. m, and the axis velocity can be varied from 5 to 25 m/s. Microwave measurements were carried out by a coherent Doppler X-band (3.2 cm) scatterometer with consecutive reception of linear polarizations. The experiments confirmed the higher sensitivity of the cross-polarized radar return to the wind speed. Simultaneously, parameters of the air flow in the turbulent boundary layer

  4. Identification of hidden failures in control systems: a functional modelling approach

    International Nuclear Information System (INIS)

    Jalashgar, A.; Modarres, M.

    1996-01-01

    This paper presents a model which encompasses knowledge about a process control system's functionalities in a function-oriented failure analysis task. The technique, called Hybrid MFM-GTST, mainly utilizes two different function-oriented methods (MFM and GTST) to identify all functions of the system components, and hence possible sources of hidden failures in process control systems. Hidden failures refer to incipient failures within the system that in the long term may lead to loss of major functions. The features of the method are described and demonstrated using an example of a process control system

  5. COPEWELL: A Conceptual Framework and System Dynamics Model for Predicting Community Functioning and Resilience After Disasters.

    Science.gov (United States)

    Links, Jonathan M; Schwartz, Brian S; Lin, Sen; Kanarek, Norma; Mitrani-Reiser, Judith; Sell, Tara Kirk; Watson, Crystal R; Ward, Doug; Slemp, Cathy; Burhans, Robert; Gill, Kimberly; Igusa, Tak; Zhao, Xilei; Aguirre, Benigno; Trainor, Joseph; Nigg, Joanne; Inglesby, Thomas; Carbone, Eric; Kendra, James M

    2018-02-01

    Policy-makers and practitioners have a need to assess community resilience in disasters. Prior efforts conflated resilience with community functioning, combined resistance and recovery (the components of resilience), and relied on a static model for what is inherently a dynamic process. We sought to develop linked conceptual and computational models of community functioning and resilience after a disaster. We developed a system dynamics computational model that predicts community functioning after a disaster. The computational model outputted the time course of community functioning before, during, and after a disaster, which was used to calculate resistance, recovery, and resilience for all US counties. The conceptual model explicitly separated resilience from community functioning and identified all key components for each, which were translated into a system dynamics computational model with connections and feedbacks. The components were represented by publicly available measures at the county level. Baseline community functioning, resistance, recovery, and resilience evidenced a range of values and geographic clustering, consistent with hypotheses based on the disaster literature. The work is transparent, motivates ongoing refinements, and identifies areas for improved measurements. After validation, such a model can be used to identify effective investments to enhance community resilience. (Disaster Med Public Health Preparedness. 2018;12:127-137).
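    To make the resistance/recovery/resilience decomposition concrete, the sketch below computes simple summary metrics from a community-functioning time course. The specific definitions and the toy curve are assumptions for illustration; they are not the COPEWELL model's actual equations or data.

        import numpy as np

        def resilience_metrics(t, f, t_event):
            # One plausible operationalization (assumed here):
            #   resistance = 1 - (maximum drop in functioning after the event) / baseline
            #   recovery   = fraction of the lost functioning regained by the end of the window
            #   resilience = time-averaged post-event functioning relative to baseline
            baseline = f[t < t_event].mean()
            post = f[t >= t_event]
            drop = baseline - post.min()
            resistance = 1.0 - drop / baseline
            recovery = (post[-1] - post.min()) / drop if drop > 0 else 1.0
            resilience = post.mean() / baseline
            return resistance, recovery, resilience

        # Toy community-functioning curve with a disaster at month 6.
        t = np.linspace(0.0, 24.0, 241)
        f = 0.8 - 0.3 * np.exp(-((t - 6.0) / 3.0) ** 2) * (t >= 6.0)
        print(resilience_metrics(t, f, t_event=6.0))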

  6. Developing Students’ Reflections about the Function and Status of Mathematical Modeling in Different Scientific Practices

    DEFF Research Database (Denmark)

    Kjeldsen, Tinne Hoff; Blomhøj, Morten

    2013-01-01

    Mathematical models and mathematical modeling play different roles in the different areas and problems in which they are used. The function and status of mathematical modeling and models in the different areas depend on the scientific practice as well as the underlying philosophical and theoretical position held by the modeler(s) and the practitioners in the extra-mathematical domain. For students to experience the significance of different scientific practices and cultures for the function and status of mathematical modeling in other sciences, students need to be placed in didactical situations where such differences are exposed and made into explicit objects of their reflections. It can be difficult to create such situations in the teaching of contemporary science in which modeling is part of the culture. In this paper we show how history can serve as a means for students to be engaged...

  7. Experience gained with the application of the MODIS diffusion model compared with the ATMOS Gauss-function-based model

    International Nuclear Information System (INIS)

    Mueller, A.

    1985-01-01

    The advantage of the Gauss-function-based models undoubtedly lies in their proven sets of propagation parameters and empirical stack plume rise formulas, and in their easy adaptability and handling. However, grid models based on the trace matter transport equation are more convincing in their fundamental principle. Grid models of the MODIS type are to attain practical applicability comparable to that of Gauss models by developing techniques that allow the vertical self-movement of the plumes to be taken into account in grid models and that secure improved determination of the diffusion coefficients. (orig./PW)

  8. Means-End based Functional Modeling for Intelligent Control: Modeling and Experiments with an Industrial Heat Pump System

    DEFF Research Database (Denmark)

    Saleem, Arshad

    2007-01-01

    The purpose of this paper is to present a Multilevel Flow Model (MFM) of an industrial heat pump system and its use for diagnostic reasoning. MFM is a functional modeling language supporting an explicit means-ends intelligent control strategy for large industrial process plants. The model is used in several diagnostic experiments analyzing different fault scenarios. The model and the results of the experiments are explained, and it is shown how MFM-based intelligent modeling and automated reasoning can improve the fault diagnosis process significantly.

  9. Functional Freedom: A Psychological Model of Freedom in Decision-Making.

    Science.gov (United States)

    Lau, Stephan; Hiemisch, Anette

    2017-07-05

    The freedom of a decision is not yet sufficiently described as a psychological variable. We present a model of functional decision freedom that aims to fill that role. The model conceptualizes functional freedom as a capacity of people that varies depending on certain conditions of a decision episode. It denotes an inner capability to consciously shape complex decisions according to one's own values and needs. Functional freedom depends on three compensatory dimensions: it is greatest when the decision-maker is highly rational, when the structure of the decision is highly underdetermined, and when the decision process is strongly based on conscious thought and reflection. We outline possible research questions, argue for the psychological benefits of functional decision freedom, and explicate the model's implications for current knowledge and research. In conclusion, we show that functional freedom is a scientific variable, permitting an additional psychological foothold in research on freedom, and that it is compatible with a deterministic worldview.

  10. An alternative approach for modeling strength differential effect in sheet metals with symmetric yield functions

    Science.gov (United States)

    Kurukuri, Srihari; Worswick, Michael J.

    2013-12-01

    An alternative approach is proposed to utilize symmetric yield functions for modeling the tension-compression asymmetry commonly observed in hcp materials. In this work, the strength differential (SD) effect is modeled by choosing separate symmetric plane stress yield functions (for example, Barlat Yld 2000-2d) for tension, i.e., in the first quadrant of principal stress space, and for compression, i.e., in the third quadrant of principal stress space. In the second and fourth quadrants, the yield locus is constructed by adopting interpolating functions between the uniaxial tensile and compressive stress states. In this work, different interpolating functions are chosen and the predictive capability of each approach is discussed. The main advantage of the proposed approach is that the yield locus parameters are deterministic and relatively easy to identify when compared to the Cazacu family of yield functions commonly used for modeling the SD effect observed in hcp materials.

  11. Dynamics of a Fractional Order HIV Infection Model with Specific Functional Response and Cure Rate

    Directory of Open Access Journals (Sweden)

    Adnane Boukhouima

    2017-01-01

    Full Text Available We propose a fractional order model in this paper to describe the dynamics of human immunodeficiency virus (HIV infection. In the model, the infection transmission process is modeled by a specific functional response. First, we show that the model is mathematically and biologically well posed. Second, the local and global stabilities of the equilibria are investigated. Finally, some numerical simulations are presented in order to illustrate our theoretical results.
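    A representative fractional-order HIV model with a general ("specific") incidence function and a cure term can be written in the Caputo sense as follows; the incidence shown is a common choice in this literature, and the parameterization may differ from the paper's:

        D^{\alpha} T = \lambda - d\,T - \frac{\beta T V}{1 + \alpha_1 T + \alpha_2 V + \alpha_3 T V} + \rho I,
        \qquad
        D^{\alpha} I = \frac{\beta T V}{1 + \alpha_1 T + \alpha_2 V + \alpha_3 T V} - (a + \rho)\,I,
        \qquad
        D^{\alpha} V = k\,I - \mu\,V, \qquad 0 < \alpha \le 1,

    where T, I, and V denote uninfected cells, infected cells, and free virus, \rho is the cure rate returning infected cells to the uninfected class, and the local and global stability of the equilibria is typically organized around the basic reproduction number R_0.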

  12. Functional models for commutative systems of linear operators and de Branges spaces on a Riemann surface

    International Nuclear Information System (INIS)

    Zolotarev, Vladimir A

    2009-01-01

    Functional models are constructed for commutative systems {A₁, A₂} of bounded linear non-self-adjoint operators which do not contain dissipative operators (which means that ξ₁A₁ + ξ₂A₂ is not a dissipative operator for any ξ₁, ξ₂ ∈ ℝ). A significant role is played here by the de Branges transform and the function classes occurring in this context. Classes of commutative systems of operators {A₁, A₂} for which such a construction is possible are distinguished. Realizations of functional models in special spaces of meromorphic functions on Riemann surfaces are found, which lead to reasonable analogues of de Branges spaces on these Riemann surfaces. It turns out that the functions E(p) and Ẽ(p) determining the order of growth in de Branges spaces on Riemann surfaces coincide with the well-known Baker-Akhiezer functions. Bibliography: 11 titles.

  13. Stability analysis of polynomial fuzzy models via polynomial fuzzy Lyapunov functions

    OpenAIRE

    Bernal Reza, Miguel Ángel; Sala, Antonio; JAADARI, ABDELHAFIDH; Guerra, Thierry-Marie

    2011-01-01

    In this paper, the stability of continuous-time polynomial fuzzy models by means of a polynomial generalization of fuzzy Lyapunov functions is studied. Fuzzy Lyapunov functions have been fruitfully used in the literature for local analysis of Takagi-Sugeno models, a particular class of the polynomial fuzzy ones. Based on a recent Taylor-series approach which allows a polynomial fuzzy model to exactly represent a nonlinear model in a compact set of the state space, it is shown that a refinemen...

  14. Resolving Microzooplankton Functional Groups In A Size-Structured Planktonic Model

    Science.gov (United States)

    Taniguchi, D.; Dutkiewicz, S.; Follows, M. J.; Jahn, O.; Menden-Deuer, S.

    2016-02-01

    Microzooplankton are important marine grazers, often consuming a large fraction of primary productivity. They consist of a great diversity of organisms with different behaviors, characteristics, and rates. This functional diversity, and its consequences, are not currently reflected in large-scale ocean ecological simulations. How should these organisms be represented, and what are the implications for their biogeography? We develop a size-structured, trait-based model to characterize a diversity of microzooplankton functional groups. We compile and examine size-based laboratory data on the traits, revealing some patterns with size and functional group that we interpret with mechanistic theory. Fitting the model to the data provides parameterizations of key rates and properties, which we employ in a numerical ocean model. The diversity of grazing preference, rates, and trophic strategies enables the coexistence of different functional groups of micro-grazers under various environmental conditions, and the model produces testable predictions of the biogeography.

  15. Covariant two-particle wave functions for model quasipotentials admitting exact solutions

    International Nuclear Information System (INIS)

    Kapshaj, V.N.; Skachkov, N.B.

    1983-01-01

    Two formulations of quasipotential equations in the relativistic configurational representation are considered for the wave function of the internal motion of the bound system of two relativistic particles. Exact solutions of these equations are found for some model quasipotentials

  16. Covariant two-particle wave functions for model quasipotential allowing exact solutions

    International Nuclear Information System (INIS)

    Kapshaj, V.N.; Skachkov, N.B.

    1982-01-01

    Two formulations of quasipotential equations in the relativistic configurational representation are considered for the wave function of relative motion of a bound state of two relativistic particles. Exact solutions of these equations are found for some model quasipotentials

  17. A mathematical model of functioning of channels of distribution of freight traffic

    OpenAIRE

    Shramenko, N.

    2006-01-01

    A mathematical model of the functioning of freight traffic distribution channels in international transportation is developed, which makes it possible to optimize the technological parameters of individual links and to minimize expenses along the logistics chain.

  18. Functional modelling for integration of human-software-hardware in complex physical systems

    International Nuclear Information System (INIS)

    Modarres, M.

    1996-01-01

    A framework describing the properties of complex physical systems composed of human-software-hardware interactions in terms of their functions is described. It is argued that such a framework is domain-general, so that functional primitives provide a language that is more general than most other modeling methods, such as mathematical simulation. The characteristics and types of functional models are described. Examples of uses of the framework in modeling physical systems composed of human-software-hardware (hereafter referred to simply as physical systems) are presented. It is concluded that a function-centered model of a physical system provides a capability for generating a high-level simulation of the system for intelligent diagnostic, control, or other similar applications

  19. Modeling of edge effect in subaperture tool influence functions of computer controlled optical surfacing.

    Science.gov (United States)

    Wan, Songlin; Zhang, Xiangchao; He, Xiaoying; Xu, Min

    2016-12-20

    Computer controlled optical surfacing requires an accurate tool influence function (TIF) for reliable path planning and deterministic fabrication. Near the edge of the workpiece, the TIF has a nonlinear removal behavior, which causes a severe edge-roll phenomenon. In the present paper, a new edge pressure model is developed based on finite element analysis results. The model is represented as the product of a basic pressure function and a correcting function. The basic pressure distribution is calculated according to the surface shape of the polishing pad, and the correcting function is used to compensate for the errors caused by the edge effect. Practical experimental results demonstrate that the new model can accurately predict the edge TIFs with different overhang ratios. The relative error of the new edge model can be reduced to 15%.
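    The structure of the model, a basic pressure distribution multiplied by a correcting function near the workpiece edge, can be sketched as follows. Both functional forms and all parameter values here are illustrative assumptions, not the calibrated finite-element-based model from the paper.

        import numpy as np

        def edge_tif_pressure(r, R_tool, overhang, k=8.0):
            # Sketch of the edge pressure model described above (functional forms assumed):
            #   total pressure = basic pressure (from pad shape) x correcting function (edge compensation)
            # r: radial position under the tool, R_tool: tool radius, overhang: overhang ratio.
            p_basic = np.sqrt(np.clip(1.0 - (r / R_tool) ** 2, 0.0, None))   # assumed spherical-pad profile
            edge_distance = (1.0 - overhang) * R_tool - r                    # signed distance to the edge
            correction = 1.0 / (1.0 + np.exp(-k * edge_distance / R_tool))   # smooth roll-off past the edge
            return p_basic * correction

        r = np.linspace(0.0, 1.0, 101)
        p = edge_tif_pressure(r, R_tool=1.0, overhang=0.3)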

  20. An orbital-overlap model for minimal work functions of cesiated metal surfaces

    International Nuclear Information System (INIS)

    Chou, Sharon H; Bargatin, Igor; Howe, Roger T; Voss, Johannes; Vojvodic, Aleksandra; Abild-Pedersen, Frank

    2012-01-01

    We introduce a model for the effect of cesium adsorbates on the work function of transition metal surfaces. The model builds on the classical point-dipole equation by adding exponential terms that characterize the degree of orbital overlap between the 6s states of neighboring cesium adsorbates and its effect on the strength and orientation of electric dipoles along the adsorbate-substrate interface. The new model improves upon earlier models in terms of agreement with the work function-coverage curves obtained via first-principles calculations based on density functional theory. All the cesiated metal surfaces have optimal coverages between 0.6 and 0.8 monolayers, in accordance with experimental data. Of all the cesiated metal surfaces that we have considered, tungsten has the lowest minimum work function, also in accordance with experiments.
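    For context, the classical point-dipole baseline that the proposed model extends is the Helmholtz equation with Topping depolarization; the paper's additional exponential orbital-overlap terms are not reproduced here, and the depolarization term is written in one common convention with \alpha the adsorbate polarizability volume:

        \phi(\theta) = \phi_{\mathrm{metal}} - \Delta\phi(\theta),
        \qquad
        \Delta\phi(\theta) = \frac{e\,\mu_0\,\sigma(\theta)}
        {\varepsilon_0\bigl[\,1 + 9\,\alpha\,\sigma(\theta)^{3/2}\bigr]},
        \qquad
        \sigma(\theta) = \theta\,\sigma_{\mathrm{ML}},

    where \theta is the cesium coverage in monolayers, \sigma_{\mathrm{ML}} is the adsorbate density at one monolayer, and \mu_0 is the dipole moment of an isolated adsorbate. The depolarizing denominator is what produces a minimum in \phi(\theta) at sub-monolayer coverage, consistent with the optimal coverages of 0.6 to 0.8 monolayers reported above.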