WorldWideScience

Sample records for linearly mixed images

  1. Image quality optimization and evaluation of linearly mixed images in dual-source, dual-energy CT

    International Nuclear Information System (INIS)

    Yu Lifeng; Primak, Andrew N.; Liu Xin; McCollough, Cynthia H.

    2009-01-01

    In dual-source dual-energy CT, the images reconstructed from the low- and high-energy scans (typically at 80 and 140 kV, respectively) can be mixed together to provide a single set of non-material-specific images for the purpose of routine diagnostic interpretation. Unlike the material-specific information that may be obtained from the dual-energy scan data, the mixed images are created to provide the interpreting physician with a single set of images whose appearance is similar to that of single-energy images acquired at the same total radiation dose. In this work, the authors used a phantom study to evaluate the image quality of linearly mixed images in comparison to single-energy CT images, assuming the same total radiation dose and taking into account the effect of patient size and the dose partitioning between the low- and high-energy scans. The authors first developed a method to optimize the quality of the linearly mixed images such that the single-energy image quality was compared to the best-case image quality of the dual-energy mixed images. Compared to 80 kV single-energy images for the same radiation dose, the iodine CNR in dual-energy mixed images was worse for smaller phantom sizes. However, similar noise and similar or improved iodine CNR relative to 120 kV images could be achieved for dual-energy mixed images using the same total radiation dose over a wide range of patient sizes (up to 45 cm lateral thorax dimension). Thus, for adult CT practices, which primarily use 120 kV scanning, the use of dual-energy CT for the purpose of material-specific imaging can also produce a set of non-material-specific images for routine diagnostic interpretation that are of similar or improved quality relative to single-energy 120 kV scans.
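    To make the linear mixing concrete: the mixed image is a weighted sum of the low- and high-energy reconstructions, and the weight can be chosen to maximize iodine CNR. The following R sketch illustrates the idea only; the grid search and all numbers are placeholder assumptions, not the optimization or values used in the paper.

      # Linearly mix an 80 kV and a 140 kV image with weight w
      mix_images <- function(img80, img140, w) w * img80 + (1 - w) * img140

      # Toy grid search for the CNR-optimal weight, assuming independent noise in
      # the two images; c80/c140 are iodine contrasts, sd80/sd140 are noise levels
      best_weight <- function(c80, c140, sd80, sd140, w = seq(0, 1, by = 0.01)) {
        cnr <- (w * c80 + (1 - w) * c140) / sqrt(w^2 * sd80^2 + (1 - w)^2 * sd140^2)
        w[which.max(cnr)]
      }

      best_weight(c80 = 300, c140 = 150, sd80 = 25, sd140 = 15)  # placeholder values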

  2. Para-mixed linear spaces

    Directory of Open Access Journals (Sweden)

    Crasmareanu Mircea

    2017-12-01

    We consider the paracomplex version of the notion of mixed linear spaces introduced by M. Jurchescu in [4], obtained by replacing the complex unit i with the paracomplex unit j, j^2 = 1. The linear algebra of these spaces is studied with a special view towards their morphisms.

  3. Generalized, Linear, and Mixed Models

    CERN Document Server

    McCulloch, Charles E; Neuhaus, John M

    2011-01-01

    An accessible and self-contained introduction to statistical models, now in a modernized new edition. Generalized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects. A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed models...

  4. Linear and Generalized Linear Mixed Models and Their Applications

    CERN Document Server

    Jiang, Jiming

    2007-01-01

    This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods for the analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it includes recently developed methods, such as mixed model diagnostics, mixed model selection, and the jackknife method in the context of mixed models. The book is aimed at students, researchers, and other practitioners who are interested...

  5. Linear mixed models in sensometrics

    DEFF Research Database (Denmark)

    Kuznetsova, Alexandra

    Today's companies and researchers gather large amounts of data of different kinds. In consumer studies the objective of the data collection is to better understand consumer acceptance of products. In such studies a number of persons (generally not trained) are selected in order to score products...... Standard statistical software packages can be used for some of the purposes...... The two open-source R packages lmerTest and SensMixed implement and support the methodological developments in the research papers as well as the ANOVA modelling part of the Consumer...... An open-source software tool, ConsumerCheck, was developed in this project and is now available for everyone. This will represent a major step forward concerning this important problem in modern consumer-driven product development, and should improve the quality of decision making in Danish as well as international food companies and other companies using the same methods.

  6. Linear mixed models for longitudinal data

    CERN Document Server

    Molenberghs, Geert

    2000-01-01

    This paperback edition is a reprint of the 2000 edition. This book provides a comprehensive treatment of linear mixed models for continuous longitudinal data. Next to model formulation, this edition puts major emphasis on exploratory data analysis for all aspects of the model, such as the marginal model, subject-specific profiles, and residual covariance structure. Further, model diagnostics and missing data receive extensive treatment. Sensitivity analysis for incomplete data is given a prominent place. Several variations to the conventional linear mixed model are discussed (a heterogeneity model, conditional linear mixed models). This book will be of interest to applied statisticians and biomedical researchers in industry, public health organizations, contract research organizations, and academia. The book is explanatory rather than mathematically rigorous. Most analyses were done with the MIXED procedure of the SAS software package, and many of its features are clearly elucidated. However, some other commerc...

  7. Multivariate generalized linear mixed models using R

    CERN Document Server

    Berridge, Damon Mark

    2011-01-01

    Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A unified framework for a broad class of models: the authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...

  8. Statistical Tests for Mixed Linear Models

    CERN Document Server

    Khuri, André I; Sinha, Bimal K

    2011-01-01

    An advanced discussion of linear models with mixed or random effects. In recent years a breakthrough has occurred in our ability to draw inferences from exact and optimum tests of variance component models, generating much research activity that relies on linear models with mixed and random effects. This volume covers the most important research of the past decade as well as the latest developments in hypothesis testing. It compiles all currently available results in the area of exact and optimum tests for variance component models and offers the only comprehensive treatment for these models a

  9. Estimate the time varying brain receptor occupancy in PET imaging experiments using non-linear fixed and mixed effect modeling approach

    International Nuclear Information System (INIS)

    Zamuner, Stefano; Gomeni, Roberto; Bye, Alan

    2002-01-01

    Positron-Emission Tomography (PET) is an imaging technology currently used in drug development as a non-invasive measure of drug distribution and interaction with the biochemical target system. The level of receptor occupancy achieved by a compound can be estimated by comparing time-activity measurements in an experiment done using the tracer alone with the activity measured when the tracer is given following administration of unlabelled compound. The effective use of this surrogate marker as an enabling tool for drug development requires the definition of a model linking the brain receptor occupancy with the fluctuation of plasma concentrations. However, the predictive performance of such a model is strongly related to the precision of the estimate of receptor occupancy evaluated in PET scans collected at different times following drug treatment. Several methods have been proposed for the analysis and quantification of the ligand-receptor interactions investigated from PET data. The aim of the present study is to evaluate alternative parameter estimation strategies based on the use of non-linear mixed effect models, which allow intra- and inter-subject variability in the time-activity data to be accounted for, along with covariates potentially explaining this variability. A comparison of the different modeling approaches is presented using real data. The results of this comparison indicate that the mixed effect approach, with a primary model partitioning the variance in terms of Inter-Individual Variability (IIV) and Inter-Occasion Variability (IOV) and a second stage model relating the changes in binding potential to the dose of unlabelled drug, is definitely the preferred approach

  10. Linear Algebra and Image Processing

    Science.gov (United States)

    Allali, Mohamed

    2010-01-01

    We use digital image processing (DIP) technology to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach of using technology to link linear algebra to DIP is interesting and unexpected to students as well as to many faculty. (Contains 2 tables and 11 figures.)

  11. Linear mixing model applied to AVHRR LAC data

    Science.gov (United States)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

    A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55 - 3.93 microns channel was extracted and used with the two reflective channels 0.58 - 0.68 microns and 0.725 - 1.1 microns to run a Constrained Least Squares model to generate vegetation, soil, and shade fraction images for an area in the western region of Brazil. The Landsat Thematic Mapper data covering the Emas National Park region were used for estimating the spectral response of the mixture components and for evaluating the mixing model results. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse resolution data for global studies.
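    The core computation in these unmixing records is a constrained least-squares fit of each pixel spectrum to endmember fractions. The R sketch below solves the sum-to-one constrained problem via its KKT linear system; it is purely illustrative (synthetic endmembers, and nonnegativity handled only by clipping rather than by a full constrained solver).

      # Fractions f minimizing ||E f - x||^2 subject to sum(f) = 1,
      # for endmember matrix E (bands x components) and pixel spectrum x
      unmix_pixel <- function(E, x) {
        p <- ncol(E)
        K <- rbind(cbind(2 * crossprod(E), rep(1, p)),
                   c(rep(1, p), 0))
        rhs <- c(2 * crossprod(E, x), 1)
        f <- solve(K, rhs)[1:p]
        pmax(f, 0) / sum(pmax(f, 0))   # crude nonnegativity fix-up
      }

      # Synthetic example: vegetation, soil and shade endmembers in 3 bands
      E <- cbind(veg = c(0.05, 0.45, 0.30), soil = c(0.20, 0.30, 0.25), shade = c(0.02, 0.03, 0.02))
      x <- 0.6 * E[, "veg"] + 0.3 * E[, "soil"] + 0.1 * E[, "shade"] + rnorm(3, sd = 0.005)
      unmix_pixel(E, x)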

  12. Linear mixing model applied to coarse resolution satellite data

    Science.gov (United States)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1992-01-01

    A linear mixing model typically applied to high resolution data such as Airborne Visible/Infrared Imaging Spectrometer, Thematic Mapper, and Multispectral Scanner System data is applied to the NOAA Advanced Very High Resolution Radiometer coarse resolution satellite data. The reflective portion extracted from the middle IR channel 3 (3.55 - 3.93 microns) is used with channels 1 (0.58 - 0.68 microns) and 2 (0.725 - 1.1 microns) to run the Constrained Least Squares model to generate fraction images for an area in the west central region of Brazil. The derived fraction images are compared with an unsupervised classification and with the fraction images derived from Landsat TM data acquired on the same day. In addition, the relationship between these fraction images and the well-known NDVI images is presented. The results show the great potential of applying the unmixing techniques to coarse resolution data for global studies.

  13. Non-linear Ultrasound Imaging

    DEFF Research Database (Denmark)

    Du, Yigang

    ...3% relative to the measurement from a 1 inch diameter transducer. A preliminary study of harmonic imaging using synthetic aperture sequential beamforming (SASB) has been demonstrated. A wire phantom underwater measurement is made with an experimental synthetic aperture real-time ultrasound scanner (SARUS) ...... with a linear array transducer. The second harmonic imaging is obtained by a pulse inversion technique. The received data are beamformed by the SASB using a Beamformation Toolbox. In the measurements the lateral resolution at -6 dB is improved by 66% compared to the conventional imaging algorithm. There is also...... a 35% improvement in the lateral resolution at -6 dB compared with the sole harmonic imaging and a 46% improvement compared with merely using the SASB....

  14. A Note on the Identifiability of Generalized Linear Mixed Models

    DEFF Research Database (Denmark)

    Labouriau, Rodrigo

    2014-01-01

    I present here a simple proof that, under general regularity conditions, the standard parametrization of the generalized linear mixed model is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first and second order moments and some general mild regularity conditions, and, therefore, is extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with dispersion parameter are identifiable when equipped with the standard parametrization...

  15. Actuarial statistics with generalized linear mixed models

    NARCIS (Netherlands)

    Antonio, K.; Beirlant, J.

    2007-01-01

    Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics

  16. From linear to generalized linear mixed models: A case study in repeated measures

    Science.gov (United States)

    Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...

  17. Linear Methods for Image Interpolation

    OpenAIRE

    Pascal Getreuer

    2011-01-01

    We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.

  18. A property of assignment type mixed integer linear programming problems

    NARCIS (Netherlands)

    Benders, J.F.; van Nunen, J.A.E.E.

    1982-01-01

    In this paper we prove that rather tight upper bounds can be given for the number of non-unique assignments that are achieved after solving the linear programming relaxation of some types of mixed integer linear assignment problems. Since in these cases the number of split assignments is

  19. Linear Methods for Image Interpolation

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-09-01

    We discuss linear methods for interpolation, including nearest neighbor, bilinear, bicubic, splines, and sinc interpolation. We focus on separable interpolation, so most of what is said applies to one-dimensional interpolation as well as N-dimensional separable interpolation.
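    As a concrete illustration of the separable case, the following R sketch implements bilinear interpolation of a grayscale image stored as a matrix (illustrative only; nearest-neighbor, bicubic, spline and sinc kernels would follow the same sampling pattern).

      # Bilinear sample of matrix img at continuous coordinates (row y, column x)
      bilinear_at <- function(img, x, y) {
        x0 <- floor(x); x1 <- min(x0 + 1, ncol(img))
        y0 <- floor(y); y1 <- min(y0 + 1, nrow(img))
        dx <- x - x0; dy <- y - y0
        (1 - dy) * ((1 - dx) * img[y0, x0] + dx * img[y0, x1]) +
             dy  * ((1 - dx) * img[y1, x0] + dx * img[y1, x1])
      }

      # Upsample a 4x4 image to 8x8 by sampling at fractional coordinates
      img <- matrix(runif(16), 4, 4)
      xs <- seq(1, 4, length.out = 8); ys <- seq(1, 4, length.out = 8)
      up <- outer(seq_along(ys), seq_along(xs),
                  Vectorize(function(i, j) bilinear_at(img, xs[j], ys[i])))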

  20. Best linear decoding of random mask images

    International Nuclear Information System (INIS)

    Woods, J.W.; Ekstrom, M.P.; Palmieri, T.M.; Twogood, R.E.

    1975-01-01

    In 1968 Dicke proposed coded imaging of x and γ rays via random pinholes. Since then, many authors have agreed with him that this technique can offer significant image improvement. A best linear decoding of the coded image is presented, and its superiority over the conventional matched filter decoding is shown. Experimental results in the visible light region are presented. (U.S.)

  1. Evaluation of a Linear Mixing Model to Retrieve Soil and Vegetation Temperatures of Land Targets

    International Nuclear Information System (INIS)

    Yang, Jinxin; Jia, Li; Cui, Yaokui; Zhou, Jie; Menenti, Massimo

    2014-01-01

    A simple linear mixing model of a heterogeneous soil-vegetation system, and the retrieval of component temperatures from directional remote sensing measurements by inverting this model, is evaluated in this paper using observations from a thermal camera. The thermal camera was used to obtain multi-angular TIR (Thermal Infra-Red) images over vegetable and orchard canopies. A whole thermal camera image was treated as a pixel of a satellite image to evaluate the model with the two-component system, i.e. soil and vegetation. The evaluation included two parts: evaluation of the linear mixing model and evaluation of the inversion of the model to retrieve component temperatures. For evaluation of the linear mixing model, the RMSE between the observed and modelled brightness temperatures is 0.2 K, which indicates that the linear mixing model works well under most conditions. For evaluation of the model inversion, the RMSE between the model-retrieved and the observed vegetation temperatures is 1.6 K; correspondingly, the RMSE between the observed and retrieved soil temperatures is 2.0 K. According to the evaluation of the sensitivity of retrieved component temperatures to fractional cover, the linear mixing model gives better retrieval accuracy for both soil and vegetation temperatures under intermediate fractional cover conditions
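    In a common form of such a two-component model (stated here as a hedged reconstruction, since the paper's exact formulation may differ), the directional channel radiance is a linear combination of the component radiances weighted by the vegetation fraction seen at view angle theta:

      \[
        L(\theta) \;=\; f_v(\theta)\, B\!\left(T_v\right) \;+\; \bigl(1 - f_v(\theta)\bigr)\, B\!\left(T_s\right)
      \]

    where B is the channel (Planck-weighted) radiance function, T_v and T_s are the vegetation and soil component temperatures, and f_v(theta) is the directional fractional vegetation cover; observations at two or more view angles allow the pair (T_v, T_s) to be retrieved by inverting the system.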

  2. Application of Hierarchical Linear Models/Linear Mixed-Effects Models in School Effectiveness Research

    Science.gov (United States)

    Ker, H. W.

    2014-01-01

    Multilevel data are very common in educational research. Hierarchical linear models/linear mixed-effects models (HLMs/LMEs) are often utilized to analyze multilevel data nowadays. This paper discusses the problems of utilizing ordinary regressions for modeling multilevel educational data, and compares the data analytic results from three regression…

  3. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2006-01-01

    Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo

  4. Malvar-He-Cutler Linear Image Demosaicking

    Directory of Open Access Journals (Sweden)

    Pascal Getreuer

    2011-08-01

    Image demosaicking (or demosaicing) is the interpolation problem of estimating complete color information for an image that has been captured through a color filter array (CFA), particularly on the Bayer pattern. In this paper we review a simple linear method using 5 x 5 filters, proposed by Malvar, He, and Cutler in 2004, that shows surprisingly good results.

  5. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  6. A mixed integer linear program for an integrated fishery | Hasan ...

    African Journals Online (AJOL)

    ... and labour allocation of quota-based integrated fisheries. We demonstrate the workability of our model with a numerical example and sensitivity analysis based on data obtained from one of the major fisheries in New Zealand. Keywords: mixed integer linear program, fishing, trawler scheduling, processing, quotas

  7. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    Science.gov (United States)

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  8. Model Selection with the Linear Mixed Model for Longitudinal Data

    Science.gov (United States)

    Ryoo, Ji Hoon

    2011-01-01

    Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…

  9. Resolving Mixed Algal Species in Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Mehrube Mehrubeoglu

    2013-12-01

    We investigated a lab-based hyperspectral imaging system's response to pure (single) and mixed (two) algal cultures containing known algae types and volumetric combinations in order to characterize the system's performance. The spectral response to volumetric changes in single algal cultures and in combinations of algal mixtures with known ratios was tested. Constrained linear spectral unmixing was applied to extract the algal content of the mixtures based on the abundances that produced the lowest root mean square (RMS) error. Percent prediction error was computed as the difference between the actual percent volumetric content and the abundances at minimum RMS error. The best prediction errors were computed as 0.4%, 0.4% and 6.3% for the mixed spectra from three independent experiments. The worst prediction errors were found to be 5.6%, 5.4% and 13.4% for the same order of experiments. Additionally, Beer-Lambert's law was utilized to relate transmittance to different volumes of pure algal suspensions, demonstrating linear logarithmic trends for optical property measurements.

  10. Image denoising using non linear diffusion tensors

    International Nuclear Information System (INIS)

    Benzarti, F.; Amiri, H.

    2011-01-01

    Image denoising is an important pre-processing step for many image analysis and computer vision systems. It refers to the task of recovering a good estimate of the true image from a degraded observation without altering useful structures in the image such as discontinuities and edges. In this paper, we propose a new approach for image denoising based on the combination of two non linear diffusion tensors. One allows diffusion along the orientation of greatest coherence, while the other allows diffusion along orthogonal directions. The idea is to track the local geometry of the degraded image accurately and to apply anisotropic diffusion mainly along the preferred structure direction. To illustrate the effective performance of our model, we present some experimental results on test and real photographic color images.

  11. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap datasets are, respectively, examples of binomial and count data modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov Chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...

  12. Estimation and Inference for Very Large Linear Mixed Effects Models

    OpenAIRE

    Gao, K.; Owen, A. B.

    2016-01-01

    Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are N observations. Such problems arise in any setting where the underlying factors satisfy a many to many relationship (instead of a nested one) and in electronic commerce applications, the N can be quite large. Methods that do not account for the correlation structure can...

  13. Systematic analysis of the impact of mixing locality on Mixing-DAC linearity for multicarrier GSM

    NARCIS (Netherlands)

    Bechthum, E.; Radulov, G.I.; Briaire, J.; Geelen, G.; Roermund, van A.H.M.

    2012-01-01

    In an RF transmitter, the function of the mixer and the DAC can be combined in a single block: the Mixing-DAC. For the generation of multicarrier GSM signals in a basestation, high dynamic linearity is required, i.e. SFDR > 85 dBc, at high output signal frequency, i.e. f_out ≈ 4 GHz. This represents a

  14. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2014-01-01

    Highly recommended by JASA, Technometrics, and other journals, the first edition of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Second Edition continues to lead readers step by step through the process of fitting LMMs. This second edition covers additional topics on the application of LMMs that are valuable for data analysts in all fields. It also updates the case studies using the latest versions of the software procedures and provides up-to-date information on the options and features of the software procedures available for fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM.New to the Second Edition A new chapter on models with crossed random effects that uses a case study to illustrate software procedures capable of fitting these models Power analysis methods for longitudinal and clustered study designs, including software options for power analyses and suggest...

  15. Linear models for sound from supersonic reacting mixing layers

    Science.gov (United States)

    Chary, P. Shivakanth; Samanta, Arnab

    2016-12-01

    We perform a linearized reduced-order modeling of the aeroacoustic sound sources in supersonic reacting mixing layers to explore their sensitivities to some of the flow parameters in radiating sound. Specifically, we investigate the role of outer modes as the effective flow compressibility is raised, when some of these are expected to dominate over the traditional Kelvin-Helmholtz (K-H)-type central mode. Although the outer modes are known to be of lesser importance in the near-field mixing, how these radiate to the far-field is uncertain, and this is our focus. Keeping the flow compressibility fixed, the outer modes are realized via biasing the respective mean densities of the fast (oxidizer) or slow (fuel) side. Here the mean flows are laminar solutions of two-dimensional compressible boundary layers with an imposed composite (turbulent) spreading rate, which we show to significantly alter the growth of instability waves by saturating them earlier, as in nonlinear calculations, achieved here via solving the linear parabolized stability equations. As the flow parameters are varied, instability of the slow modes is shown to be more sensitive to heat release, potentially exceeding equivalent central modes, as these modes yield relatively compact sound sources with lesser spreading of the mixing layer, when compared to the corresponding fast modes. In contrast, the radiated sound seems to be relatively unaffected when the mixture equivalence ratio is varied, except for a lean mixture which is shown to yield a pronounced effect on the slow mode radiation by reducing its modal growth.

  16. Medical Image Denoising Using Mixed Transforms

    Directory of Open Access Journals (Sweden)

    Jaleel Sadoon Jameel

    2018-02-01

    In this paper, a mixed transform method is proposed based on a combination of the wavelet transform (WT) and the multiwavelet transform (MWT) in order to denoise medical images. The proposed method consists of WT and MWT in cascade form to enhance the denoising performance of image processing. Practically, the first step is to add noise to Magnetic Resonance (MRI) or Computed Tomography (CT) images for the sake of testing. The noisy image is processed by WT to achieve four sub-bands, and each sub-band is treated individually using MWT before the soft/hard denoising stage. Simulation results show that the peak signal to noise ratio (PSNR) is improved significantly and the characteristic features are well preserved by employing the mixed transform of WT and MWT, due to their capability of separating noise signals from image signals. Moreover, the corresponding mean square error (MSE) is decreased accordingly compared to other available methods.

  17. Spatial generalised linear mixed models based on distances.

    Science.gov (United States)

    Melo, Oscar O; Mateu, Jorge; Melo, Carlos E

    2016-10-01

    Risk models derived from environmental data have been widely shown to be effective in delineating geographical areas of risk because they are intuitively easy to understand. We present a new method based on distances, which allows the modelling of continuous and non-continuous random variables through distance-based spatial generalised linear mixed models. The parameters are estimated using Markov chain Monte Carlo maximum likelihood, which is a feasible and useful technique. The proposed method depends on a detrending step built from continuous or categorical explanatory variables, or a mixture of them, by using an appropriate Euclidean distance. The method is illustrated through the analysis of the variation in the prevalence of Loa loa among a sample of village residents in Cameroon, where the explanatory variables included elevation, together with the maximum normalised-difference vegetation index and the standard deviation of the normalised-difference vegetation index calculated from repeated satellite scans over time.

  18. Linear mixing model applied to coarse spatial resolution data from multispectral satellite sensors

    Science.gov (United States)

    Holben, Brent N.; Shimabukuro, Yosio E.

    1993-01-01

    A linear mixing model was applied to coarse spatial resolution data from the NOAA Advanced Very High Resolution Radiometer. The reflective component of the 3.55-3.95 micron channel was used with the two reflective channels 0.58-0.68 micron and 0.725-1.1 micron to run a constrained least squares model to generate fraction images for an area in the west central region of Brazil. The fraction images were compared with an unsupervised classification derived from Landsat TM data acquired on the same day. The relationship between the fraction images and normalized difference vegetation index images shows the potential of the unmixing techniques when using coarse spatial resolution data for global studies.

  19. Solving large mixed linear models using preconditioned conjugate gradient iteration.

    Science.gov (United States)

    Strandén, I; Lidauer, M

    1999-12-01

    Continuous evaluation of dairy cattle with a random regression test-day model requires a fast solving method and algorithm. A new computing technique feasible in Jacobi and conjugate gradient based iterative methods using iteration on data is presented. In the new computing technique, the calculations in the multiplication of a vector by a matrix were reordered into three steps instead of the commonly used two steps. The three-step method was implemented in a general mixed linear model program that used preconditioned conjugate gradient iteration. Performance of this program in comparison to other general solving programs was assessed via estimation of breeding values using univariate, multivariate, and random regression test-day models. Central processing unit time per iteration with the new three-step technique was, at best, one-third of that needed with the old technique. Performance was best with the test-day model, which was the largest and most complex model used. The new program did well in comparison to other general software. Programs keeping the mixed model equations in random access memory required at least 20% and 435% more time to solve the univariate and multivariate animal models, respectively. Computations of the second-best iteration-on-data program took approximately three and five times longer for the animal and test-day models, respectively, than did the new program. Good performance was due to fast computing time per iteration and quick convergence to the final solutions. Use of preconditioned conjugate gradient based methods in solving large breeding value problems is supported by our findings.
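    The solver at the heart of this record is a preconditioned conjugate gradient (PCG) iteration for the mixed model equations C x = b. The R sketch below shows a generic Jacobi-preconditioned CG on a small dense system; it illustrates the iteration only, not the authors' iteration-on-data or three-step matrix-vector implementation.

      pcg <- function(C, b, tol = 1e-8, maxit = 1000) {
        x <- rep(0, length(b))
        r <- b - C %*% x
        Minv <- 1 / diag(C)                 # diagonal (Jacobi) preconditioner
        z <- Minv * r
        p <- z
        for (it in seq_len(maxit)) {
          Cp <- C %*% p
          alpha <- sum(r * z) / sum(p * Cp)
          x <- x + alpha * p
          r_new <- r - alpha * Cp
          if (sqrt(sum(r_new^2)) < tol) break
          z_new <- Minv * r_new
          beta <- sum(r_new * z_new) / sum(r * z)
          p <- z_new + beta * p
          r <- r_new; z <- z_new
        }
        drop(x)
      }

      # Small symmetric positive-definite test system
      A <- crossprod(matrix(rnorm(25), 5, 5)) + diag(5)
      b <- rnorm(5)
      pcg(A, b)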

  20. Learning oncogenetic networks by reducing to mixed integer linear programming.

    Science.gov (United States)

    Shahrabi Farahani, Hossein; Lagergren, Jens

    2013-01-01

    Cancer can be a result of accumulation of different types of genetic mutations such as copy number aberrations. The data from tumors are cross-sectional and do not contain the temporal order of the genetic events. Finding the order in which the genetic events have occurred and the progression pathways is of vital importance in understanding the disease. In order to model cancer progression, we propose Progression Networks, a special case of Bayesian networks, that are tailored to model disease progression. Progression networks have similarities with Conjunctive Bayesian Networks (CBNs) [1], a variation of Bayesian networks also proposed for modeling disease progression. We also describe a learning algorithm for learning Bayesian networks in general and progression networks in particular. We reduce the hard problem of learning the Bayesian and progression networks to Mixed Integer Linear Programming (MILP). MILP is a Non-deterministic Polynomial-time complete (NP-complete) problem for which very good heuristics exist. We tested our algorithm on synthetic and real cytogenetic data from renal cell carcinoma. We also compared our learned progression networks with the networks proposed in earlier publications. The software is available on the website https://bitbucket.org/farahani/diprog.

  1. Generalized linear longitudinal mixed models with linear covariance structure and multiplicative random effects

    DEFF Research Database (Denmark)

    Holst, René; Jørgensen, Bent

    2015-01-01

    The paper proposes a versatile class of multiplicative generalized linear longitudinal mixed models (GLLMM) with additive dispersion components, based on explicit modelling of the covariance structure. The class incorporates a longitudinal structure into the random effects models and retains a marginal as well as a conditional interpretation. The estimation procedure is based on a computationally efficient quasi-score method for the regression parameters combined with a REML-like bias-corrected Pearson estimating function for the dispersion and correlation parameters. This avoids the multidimensional integral of the conventional GLMM likelihood and allows an extension of the robust empirical sandwich estimator for use with both association and regression parameters. The method is applied to a set of otolith data, used for age determination of fish.

  2. Estimating a graphical intra-class correlation coefficient (GICC) using multivariate probit-linear mixed models.

    Science.gov (United States)

    Yue, Chen; Chen, Shaojie; Sair, Haris I; Airan, Raag; Caffo, Brian S

    2015-09-01

    Data reproducibility is a critical issue in all scientific experiments. In this manuscript, the problem of quantifying the reproducibility of graphical measurements is considered. The image intra-class correlation coefficient (I2C2) is generalized and the graphical intra-class correlation coefficient (GICC) is proposed for this purpose. The concept for GICC is based on multivariate probit-linear mixed effect models. A Markov chain Monte Carlo EM (MCMC-EM) algorithm is used for estimating the GICC. Simulation results with varied settings are presented and our method is applied to the KIRBY21 test-retest dataset.

  3. Linear mixed-effects modeling approach to FMRI group analysis.

    Science.gov (United States)

    Chen, Gang; Saad, Ziad S; Britton, Jennifer C; Pine, Daniel S; Cox, Robert W

    2013-06-01

    Conventional group analysis is usually performed with a Student-type t-test, regression, or standard AN(C)OVA in which the variance-covariance matrix is presumed to have a simple structure. Some correction approaches are adopted when assumptions about the covariance structure are violated. However, as experiments are designed with different degrees of sophistication, these traditional methods can become cumbersome, or even unable to handle the situation at hand. For example, most current FMRI software packages have difficulty analyzing the following scenarios at the group level: (1) taking within-subject variability into account when there are effect estimates from multiple runs or sessions; (2) continuous explanatory variables (covariates) modeling in the presence of a within-subject (repeated measures) factor, multiple subject-grouping (between-subjects) factors, or a mixture of both; (3) subject-specific adjustments in covariate modeling; (4) group analysis with estimation of the hemodynamic response (HDR) function by multiple basis functions; (5) various cases of missing data in longitudinal studies; and (6) group studies involving family members or twins. Here we present a linear mixed-effects modeling (LME) methodology that extends the conventional group analysis approach to analyze many complicated cases, including the six prototypes delineated above, whose analyses would otherwise be either difficult or infeasible under traditional frameworks such as AN(C)OVA and the general linear model (GLM). In addition, the strength of the LME framework lies in its flexibility to model and estimate the variance-covariance structures for both random effects and residuals. The intraclass correlation (ICC) values can be easily obtained with an LME model with crossed random effects, even in the presence of confounding fixed effects. The simulations of one prototypical scenario indicate that the LME modeling strikes a balance between the control for false positives and the sensitivity
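    One of the points above, obtaining an ICC from an LME with crossed random effects, can be sketched with lme4 in R as follows; the simulated data, variable names, and the simple variance-ratio ICC are illustrative assumptions, not the methodology or software implementation described in the record.

      library(lme4)

      set.seed(1)
      d <- expand.grid(subject = factor(1:20), session = factor(1:4))
      d$y <- rnorm(20)[as.integer(d$subject)] +
             rnorm(4, sd = 0.3)[as.integer(d$session)] +
             rnorm(nrow(d), sd = 0.5)

      m  <- lmer(y ~ 1 + (1 | subject) + (1 | session), data = d)
      vc <- as.data.frame(VarCorr(m))                     # variance components
      icc <- vc$vcov[vc$grp == "subject"] / sum(vc$vcov)  # subject variance / total
      icc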

  4. Non-linear mixed-effects pharmacokinetic/pharmacodynamic modelling in NLME using differential equations

    DEFF Research Database (Denmark)

    Tornøe, Christoffer Wenzel; Agersø, Henrik; Madsen, Henrik

    2004-01-01

    The standard software for non-linear mixed-effect analysis of pharmacokinetic/pharmacodynamic (PK/PD) data is NONMEM, while the non-linear mixed-effects package NLME is an alternative as long as the models are fairly simple. We present the nlmeODE package, which combines the ordinary differential equation (ODE) solver package odesolve and the non-linear mixed effects package NLME, thereby enabling the analysis of complicated systems of ODEs by non-linear mixed-effects modelling. The pharmacokinetics of the anti-asthmatic drug theophylline is used to illustrate the applicability of the nlme...

  5. Effect of Image Linearization on Normalized Compression Distance

    Science.gov (United States)

    Mortensen, Jonathan; Wu, Jia Jie; Furst, Jacob; Rogers, John; Raicu, Daniela

    Normalized Information Distance, based on Kolmogorov complexity, is an emerging metric for image similarity. It is approximated by the Normalized Compression Distance (NCD) which generates the relative distance between two strings by using standard compression algorithms to compare linear strings of information. This relative distance quantifies the degree of similarity between the two objects. NCD has been shown to measure similarity effectively on information which is already a string: genomic string comparisons have created accurate phylogeny trees and NCD has also been used to classify music. Currently, to find a similarity measure using NCD for images, the images must first be linearized into a string, and then compared. To understand how linearization of a 2D image affects the similarity measure, we perform four types of linearization on a subset of the Corel image database and compare each for a variety of image transformations. Our experiment shows that different linearization techniques produce statistically significant differences in NCD for identical spatial transformations.
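    The NCD itself is straightforward to compute once images are linearized; the R sketch below uses gzip via memCompress and a simple row-major scan as the linearization (one possible order; the four linearizations compared in the paper are not reproduced here).

      # NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y))
      ncd <- function(x, y) {
        csize <- function(s) length(memCompress(s, type = "gzip"))
        cx <- csize(x); cy <- csize(y); cxy <- csize(c(x, y))
        (cxy - min(cx, cy)) / max(cx, cy)
      }

      linearize <- function(img) as.raw(as.integer(t(img)))  # row-major scan of an 8-bit image

      img1 <- matrix(sample(0:255, 64 * 64, replace = TRUE), 64, 64)
      img2 <- img1; img2[1:10, ] <- 255                      # slightly modified copy
      ncd(linearize(img1), linearize(img2))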

  6. Multimodal Image Alignment via Linear Mapping between Feature Modalities.

    Science.gov (United States)

    Jiang, Yanyun; Zheng, Yuanjie; Hou, Sujuan; Chang, Yuchou; Gee, James

    2017-01-01

    We propose a novel landmark matching based method for aligning multimodal images, which is accomplished uniquely by resolving a linear mapping between different feature modalities. This linear mapping results in a new measurement on similarity of images captured from different modalities. In addition, our method simultaneously solves this linear mapping and the landmark correspondences by minimizing a convex quadratic function. Our method can estimate complex image relationship between different modalities and nonlinear nonrigid spatial transformations even in the presence of heavy noise, as shown in our experiments carried out by using a variety of image modalities.

  7. Analysis of baseline, average, and longitudinally measured blood pressure data using linear mixed models.

    Science.gov (United States)

    Hossain, Ahmed; Beyene, Joseph

    2014-01-01

    This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random intercept linear mixed models with mean measures outcome, and (c) random intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant that is associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism among the methods, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.

  8. Rectification of aerial images using piecewise linear transformation

    International Nuclear Information System (INIS)

    Liew, L H; Lee, B Y; Wang, Y C; Cheah, W S

    2014-01-01

    Aerial images are widely used in various activities by providing visual records. This type of remotely sensed image is helpful in generating digital maps, managing ecology, monitoring crop growth and surveying regions. Such images can provide insight into areas of interest that have lower altitude, particularly in regions where optical satellite imaging is prevented by cloudiness. Aerial images captured using non-metric cameras contain real details of the scene as well as unexpected distortions. Distortions affect the actual length, direction and shape of objects in the images. There are many sources that can cause distortions, such as the lens, Earth curvature, topographic relief and the attitude of the aircraft carrying the camera. These distortions occur differently, collectively and irregularly across the entire image. Image rectification is an essential image pre-processing step to eliminate or at least reduce the effect of distortions. In this paper, a non-parametric approach with piecewise linear transformation is investigated for rectifying distorted aerial images. The non-parametric approach requires a set of corresponding control points obtained from a reference image and a distorted image. The corresponding control points are then used with piecewise linear transformation as the geometric transformation. Piecewise linear transformation divides the image into regions by triangulation. Different linear transformations are employed separately for the triangular regions instead of using a single transformation as the rectification model for the entire image. The result of rectification is evaluated using total root mean square error (RMSE). Experiments show that piecewise linear transformation can assist in overcoming the limitation of using a global transformation to rectify images

  9. Mixed-Integer Conic Linear Programming: Challenges and Perspectives

    Science.gov (United States)

    2013-10-01

    The novel DCCs for MISOCO may be used in branch-and-cut algorithms when solving MISOCO problems. The experimental software CICLO was developed to perform limited, but rigorous, computational experiments. The CICLO solver utilizes the continuous SOCO solvers MOSEK, CPLEX or SeDuMi, and builds on the open... submitted Fall 2013. Software: 1. CICLO: Integer conic linear optimization package. Authors: J.C. Góez, T.K. Ralphs, Y. Fu, and T. Terlaky

  10. Delta-tilde interpretation of standard linear mixed model results

    DEFF Research Database (Denmark)

    Brockhoff, Per Bruun; Amorim, Isabel de Sousa; Kuznetsova, Alexandra

    2016-01-01

    ...effects relative to the residual error and to choose the proper effect size measure. For multi-attribute bar plots of F-statistics this amounts, in balanced settings, to a simple transformation of the bar heights so that they depict what can be seen as approximately the average pairwise...... data set and compared to actual d-prime calculations based on Thurstonian regression modeling through the ordinal package. For more challenging cases we offer a generic "plug-in" implementation of a version of the method as part of the R-package SensMixed. We discuss and clarify the bias mechanisms...

  11. lmerTest Package: Tests in Linear Mixed Effects Models

    DEFF Research Database (Denmark)

    Kuznetsova, Alexandra; Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2017-01-01

    One of the frequent questions by users of the mixed model function lmer of the lme4 package has been: How can I get p values for the F and t tests for objects returned by lmer? The lmerTest package extends the 'lmerMod' class of the lme4 package by overloading the anova and summary functions, providing p values for tests of fixed effects. We have implemented Satterthwaite's method for approximating degrees of freedom for the t and F tests. We have also implemented the construction of Type I - III ANOVA tables. Furthermore, one may also obtain the summary as well as the anova table using...
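    A minimal usage sketch in R (the data and variable names are hypothetical; the package behaviour shown is as described in the abstract):

      library(lmerTest)  # masks lmer() so that summary()/anova() report p values

      set.seed(1)
      d <- data.frame(assessor = factor(rep(1:10, each = 6)),
                      product  = factor(rep(1:3, times = 20)),
                      score    = rnorm(60))

      m <- lmer(score ~ product + (1 | assessor), data = d)
      summary(m)  # t tests with Satterthwaite degrees of freedom
      anova(m)    # ANOVA table with Satterthwaite df by default
      # anova(m, ddf = "Kenward-Roger")  # alternative df approximation (needs pbkrtest)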

  12. A Linear Mixed-Effects Model of Wireless Spectrum Occupancy

    Directory of Open Access Journals (Sweden)

    Pagadarai Srikanth

    2010-01-01

    We provide regression analysis-based statistical models to explain the usage of wireless spectrum across four mid-size US cities in four frequency bands. Specifically, the variations in spectrum occupancy across space, time, and frequency are investigated and compared between different sites within the city as well as with other cities. By applying the mixed-effects models, several conclusions are drawn that give the occupancy percentage and the ON time duration of the licensed signal transmission as a function of several predictor variables.

  13. Generalized linear mixed models modern concepts, methods and applications

    CERN Document Server

    Stroup, Walter W

    2012-01-01

    PART I: The Big Picture. Modeling Basics: What Is a Model?; Two Model Forms: Model Equation and Probability Distribution; Types of Model Effects; Writing Models in Matrix Form; Summary: Essential Elements for a Complete Statement of the Model. Design Matters: Introductory Ideas for Translating Design and Objectives into Models; Describing "Data Architecture" to Facilitate Model Specification; From Plot Plan to Linear Predictor. Distribution Matters: More Complex Example: Multiple Factors with Different Units of Replication. Setting the Stage: Goals for Inference with Models: Overview; Basic Tools of Inference; Issue I: Data...

  14. Subpixel Mapping of Hyperspectral Image Based on Linear Subpixel Feature Detection and Object Optimization

    Science.gov (United States)

    Liu, Zhaoxin; Zhao, Liaoying; Li, Xiaorun; Chen, Shuhan

    2018-04-01

    Owing to the limitation of the spatial resolution of the imaging sensor and the variability of ground surfaces, mixed pixels are widespread in hyperspectral imagery. Traditional subpixel mapping algorithms treat all mixed pixels as boundary-mixed pixels while ignoring the existence of linear subpixels. To address this problem, this paper proposes a new subpixel mapping method based on linear subpixel feature detection and object optimization. Firstly, the fraction value of each class is obtained by spectral unmixing. Secondly, the linear subpixel features are pre-determined based on the hyperspectral characteristics and the linear subpixel feature; the remaining mixed pixels are detected based on maximum linearization index analysis. The classes of linear subpixels are determined by using a template matching method. Finally, the whole subpixel mapping results are iteratively optimized by a binary particle swarm optimization algorithm. The performance of the proposed subpixel mapping method is evaluated via experiments based on simulated and real hyperspectral data sets. The experimental results demonstrate that the proposed method can improve the accuracy of subpixel mapping.

  15. Evaluating significance in linear mixed-effects models in R.

    Science.gov (United States)

    Luke, Steven G

    2017-08-01

    Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.

  16. Mixed integer linear programming for maximum-parsimony phylogeny inference.

    Science.gov (United States)

    Sridhar, Srinath; Lam, Fumei; Blelloch, Guy E; Ravi, R; Schwartz, Russell

    2008-01-01

    Reconstruction of phylogenetic trees is a fundamental problem in computational biology. While excellent heuristic methods are available for many variants of this problem, new advances in phylogeny inference will be required if we are to be able to continue to make effective use of the rapidly growing stores of variation data now being gathered. In this paper, we present two integer linear programming (ILP) formulations to find the most parsimonious phylogenetic tree from a set of binary variation data. One method uses a flow-based formulation that can produce exponential numbers of variables and constraints in the worst case. The method has, however, proven extremely efficient in practice on datasets that are well beyond the reach of the available provably efficient methods, solving several large mtDNA and Y-chromosome instances within a few seconds and giving provably optimal results in times competitive with fast heuristics that cannot guarantee optimality. An alternative formulation establishes that the problem can be solved with a polynomial-sized ILP. We further present a web server developed based on the exponential-sized ILP that performs fast maximum parsimony inferences and serves as a front end to a database of precomputed phylogenies spanning the human genome.

  17. The Solution Set Characterization and Error Bound for the Extended Mixed Linear Complementarity Problem

    Directory of Open Access Journals (Sweden)

    Hongchun Sun

    2012-01-01

    For the extended mixed linear complementarity problem (EMLCP), we first present the characterization of the solution set for the EMLCP. Based on this, its global error bound is also established under milder conditions. The results obtained in this paper can be taken as an extension of the classical linear complementarity problems.

  18. Axial displacement of external and internal implant-abutment connection evaluated by linear mixed model analysis.

    Science.gov (United States)

    Seol, Hyon-Woo; Heo, Seong-Joo; Koak, Jai-Young; Kim, Seong-Kyun; Kim, Shin-Koo

    2015-01-01

    To analyze the axial displacement of external and internal implant-abutment connections after cyclic loading. Three groups were prepared: external abutments (Ext group), an internal tapered one-piece-type abutment (Int-1 group), and an internal tapered two-piece-type abutment (Int-2 group). Cyclic loading was applied to the implant-abutment assemblies at 150 N with a frequency of 3 Hz. The amount of axial displacement, the Periotest values (PTVs), and the removal torque values (RTVs) were measured. Both a repeated measures analysis of variance and pattern analysis based on the linear mixed model were used for statistical analysis. Scanning electron microscopy (SEM) was used to evaluate the surface of the implant-abutment connection. The mean axial displacements after 1,000,000 cycles were 0.6 μm in the Ext group, 3.7 μm in the Int-1 group, and 9.0 μm in the Int-2 group. Pattern analysis revealed a breakpoint at 171 cycles. The Ext group showed no declining pattern, and the Int-1 group showed no declining pattern after the breakpoint (171 cycles). However, the Int-2 group experienced continuous axial displacement. After cyclic loading, the PTV decreased in the Int-2 group, and the RTV decreased in all groups. SEM imaging revealed surface wear in all groups. Axial displacement and surface wear occurred in all groups. The PTVs remained stable, but the RTVs decreased after cyclic loading. Based on linear mixed model analysis, the Ext and Int-1 groups' axial displacements plateaued after little cyclic loading. The Int-2 group's rate of axial displacement slowed after 100,000 cycles.

  19. Comparison of linear, mixed integer and non-linear programming methods in energy system dispatch modelling

    DEFF Research Database (Denmark)

    Ommen, Torben Schmidt; Markussen, Wiebke Brix; Elmegaard, Brian

    2014-01-01

    In the paper, three frequently used operation optimisation methods are examined with respect to their impact on operation management of the combined utility technologies for electric power and DH (district heating) of eastern Denmark. The investigation focusses on individual plant operation...... differences and differences between the solution found by each optimisation method. One of the investigated approaches utilises LP (linear programming) for optimisation, one uses LP with binary operation constraints, while the third approach uses NLP (non-linear programming). The LP model is used...... as a benchmark, as this type is frequently used, and has the lowest amount of constraints of the three. A comparison of the optimised operation of a number of units shows significant differences between the three methods. Compared to the reference, the use of binary integer variables, increases operation...

  20. Linear-constraint wavefront control for exoplanet coronagraphic imaging systems

    Science.gov (United States)

    Sun, He; Eldorado Riggs, A. J.; Kasdin, N. Jeremy; Vanderbei, Robert J.; Groff, Tyler Dean

    2017-01-01

    A coronagraph is a leading technology for achieving high-contrast imaging of exoplanets in a space telescope. It uses a system of several masks to modify the diffraction and achieve extremely high contrast in the image plane around target stars. However, coronagraphic imaging systems are very sensitive to optical aberrations, so wavefront correction using deformable mirrors (DMs) is necessary to avoid contrast degradation in the image plane. Electric field conjugation (EFC) and stroke minimization (SM) are two primary high-contrast wavefront controllers explored in the past decade. EFC minimizes the average contrast in the search areas while regularizing the strength of the control inputs. Stroke minimization calculates the minimum DM commands under the constraint that a target average contrast is achieved. Recently in the High Contrast Imaging Lab at Princeton University (HCIL), a new linear-constraint wavefront controller based on stroke minimization was developed and demonstrated using numerical simulation. Instead of only constraining the average contrast over the entire search area, the new controller constrains the electric field of each single pixel using linear programming, which could lead to significant increases in the speed of the wavefront correction and also create more uniform dark holes. As a follow-up to this work, another linear-constraint controller modified from EFC is demonstrated theoretically and numerically, and the lab verification of the linear-constraint controllers is reported. Based on the simulation and lab results, the pros and cons of linear-constraint controllers are carefully compared with EFC and stroke minimization.

  1. Optimized multiple linear mappings for single image super-resolution

    Science.gov (United States)

    Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo

    2017-12-01

    Learning piecewise linear regression has been recognized as an effective way for example learning-based single image super-resolution (SR) in the literature. In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for the regression functions by using the metric of the reconstruction errors. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of the m-nearest neighbors in the training set. Thorough experimental results carried out on six publicly available datasets demonstrate that the proposed SR method can yield high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.
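    A minimal sketch of the general idea behind such multiple-linear-mapping approaches (not the authors' implementation): cluster low-resolution patch features, fit one least-squares linear map per cluster, and then reassign each patch to the map with the lowest reconstruction error, in an EM-like loop. The synthetic data, cluster count and iteration count below are illustrative assumptions.

```python
# EM-like refinement of multiple linear mappings (illustrative sketch only).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 36))   # low-resolution patch features (assumed 6x6)
Y = rng.normal(size=(5000, 81))   # corresponding high-resolution patches (9x9)

n_clusters, n_iters = 8, 5
labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
Xb = np.hstack([X, np.ones((len(X), 1))])          # add a bias column

for _ in range(n_iters):
    # "M-step": least-squares linear map per cluster
    maps = []
    for k in range(n_clusters):
        idx = labels == k
        if not idx.any():                          # keep a zero map for an empty cluster
            maps.append(np.zeros((Xb.shape[1], Y.shape[1])))
            continue
        W, *_ = np.linalg.lstsq(Xb[idx], Y[idx], rcond=None)
        maps.append(W)
    # "E-step": reassign each patch to the map with the smallest reconstruction error
    errors = np.stack([((Xb @ W - Y) ** 2).sum(axis=1) for W in maps], axis=1)
    labels = errors.argmin(axis=1)
```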

  2. An R2 statistic for fixed effects in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Muller, Keith E; Wolfinger, Russell D; Qaqish, Bahjat F; Schabenberger, Oliver

    2008-12-20

    Statisticians most often use the linear mixed model to analyze Gaussian longitudinal data. The value and familiarity of the R² statistic in the linear univariate model naturally creates great interest in extending it to the linear mixed model. We define and describe how to compute a model R² statistic for the linear mixed model by using only a single model. The proposed R² statistic measures multivariate association between the repeated outcomes and the fixed effects in the linear mixed model. The R² statistic arises as a one-to-one function of an appropriate F statistic for testing all fixed effects (except typically the intercept) in a full model. The statistic compares the full model with a null model with all fixed effects deleted (except typically the intercept) while retaining exactly the same covariance structure. Furthermore, the R² statistic leads immediately to a natural definition of a partial R² statistic. A mixed model in which ethnicity gives a very small p-value as a longitudinal predictor of blood pressure (BP) compellingly illustrates the value of the statistic. In sharp contrast to the extreme p-value, a very small R², a measure of statistical and scientific importance, indicates that ethnicity has an almost negligible association with the repeated BP outcomes for the study.

  3. Valid statistical approaches for analyzing sholl data: Mixed effects versus simple linear models.

    Science.gov (United States)

    Wilson, Machelle D; Sethi, Sunjay; Lein, Pamela J; Keil, Kimberly P

    2017-03-01

    The Sholl technique is widely used to quantify dendritic morphology. Data from such studies, which typically sample multiple neurons per animal, are often analyzed using simple linear models. However, simple linear models fail to account for intra-class correlation that occurs with clustered data, which can lead to faulty inferences. Mixed effects models account for intra-class correlation that occurs with clustered data; thus, these models more accurately estimate the standard deviation of the parameter estimate, which produces more accurate p-values. While mixed models are not new, their use in neuroscience has lagged behind their use in other disciplines. A review of the published literature illustrates common mistakes in analyses of Sholl data. Analysis of Sholl data collected from Golgi-stained pyramidal neurons in the hippocampus of male and female mice using both simple linear and mixed effects models demonstrates that the p-values and standard deviations obtained using the simple linear models are biased downwards and lead to erroneous rejection of the null hypothesis in some analyses. The mixed effects approach more accurately models the true variability in the data set, which leads to correct inference. Mixed effects models avoid faulty inference in Sholl analysis of data sampled from multiple neurons per animal by accounting for intra-class correlation. Given the widespread practice in neuroscience of obtaining multiple measurements per subject, there is a critical need to apply mixed effects models more widely. Copyright © 2017 Elsevier B.V. All rights reserved.
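    The contrast described above can be reproduced with standard tools. The sketch below (simulated data, not the authors'; statsmodels assumed available) fits the same animal-clustered outcome with ordinary least squares and with a linear mixed model that adds a random intercept per animal; the OLS standard error for the group effect is typically too small.

```python
# Simple linear model vs. linear mixed model on clustered (Sholl-like) data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
animals = np.repeat(np.arange(10), 8)            # 10 animals, 8 neurons sampled each
group = np.repeat([0, 1], 40)                    # treatment assigned at the animal level
animal_effect = rng.normal(0, 2.0, 10)[animals]  # intra-class correlation within animal
y = 20 + 1.5 * group + animal_effect + rng.normal(0, 1.0, 80)
df = pd.DataFrame({"y": y, "group": group, "animal": animals})

ols_fit = smf.ols("y ~ group", data=df).fit()
lmm_fit = smf.mixedlm("y ~ group", data=df, groups=df["animal"]).fit()
print(ols_fit.bse["group"], lmm_fit.bse["group"])  # the OLS standard error is biased downwards
```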

  4. Human visual modeling and image deconvolution by linear filtering

    International Nuclear Information System (INIS)

    Larminat, P. de; Barba, D.; Gerber, R.; Ronsin, J.

    1978-01-01

    The problem is the numerical restoration of images degraded by passage through a known, spatially invariant linear system and by the addition of stationary noise. We propose an improvement of the Wiener filter to allow the restoration of such images. This improvement reduces the main drawbacks of the classical Wiener filter: the voluminous data processing, and the failure to account for the characteristics of human vision that condition how the observer perceives the restored image. In the first part, we describe the structure of the visual detection system and a method for modelling it. In the second part we explain a restoration method by Wiener filtering that takes the visual properties into account and that can be adapted to the local properties of the image. Results obtained on TV images and scintigrams (images acquired with a gamma camera) are then discussed. [fr]
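    A minimal frequency-domain Wiener deconvolution sketch is given below for orientation; the paper's actual contribution, weighting the filter by a model of human vision and adapting it locally, is not reproduced. The constant K stands in for the noise-to-signal power ratio and the 3x3 box blur is an illustrative assumption.

```python
# Classical Wiener deconvolution for a known, spatially invariant PSF plus noise.
import numpy as np

def wiener_deconvolve(blurred, psf, K=0.01):
    H = np.fft.fft2(psf, s=blurred.shape)           # optical transfer function
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G   # Wiener estimate in the Fourier domain
    return np.real(np.fft.ifft2(F_hat))

rng = np.random.default_rng(0)
img = rng.random((64, 64))
psf = np.ones((3, 3)) / 9.0                         # toy degradation: 3x3 box blur
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf, s=img.shape)))
restored = wiener_deconvolve(blurred + 0.01 * rng.normal(size=img.shape), psf)
```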

  5. Linearized least-square imaging of internally scattered data

    KAUST Repository

    Aldawood, Ali; Hoteit, Ibrahim; Turkiyyah, George M.; Zuberi, M. A H; Alkhalifah, Tariq Ali

    2014-01-01

    Internal multiples deteriorate the quality of the migrated image obtained conventionally by imaging single scattering energy. However, imaging internal multiples properly has the potential to enhance the migrated image because they illuminate zones in the subsurface that are poorly illuminated by single-scattering energy, such as nearly vertical faults. Standard migration of these multiples provides subsurface reflectivity distributions with low spatial resolution and migration artifacts due to the limited recording aperture, coarse source and receiver sampling, and the band-limited nature of the source wavelet. Hence, we apply a linearized least-square inversion scheme to mitigate the effect of the migration artifacts, enhance the spatial resolution, and provide more accurate amplitude information when imaging internal multiples. Application to synthetic data demonstrated the effectiveness of the proposed inversion in imaging a reflector that is poorly illuminated by single-scattering energy. The least-square inversion of double-scattered data helped delineate that reflector with a minimal acquisition fingerprint.

  6. Imaging of common bile duct by linear endoscopic ultrasound

    Institute of Scientific and Technical Information of China (English)

    Malay; Sharma; Amit; Pathak; Abid; Shoukat; Chittapuram; Srinivasan; Rameshbabu; Akash; Ajmera; Zeeshn; Ahamad; Wani; Praveer; Rai

    2015-01-01

    Imaging of the common bile duct (CBD) can be done by many techniques. Endoscopic retrograde cholangiopancreatography is considered the gold standard for imaging of the CBD. A standard technique for imaging of the CBD by endoscopic ultrasound (EUS) has not been specifically described. The available descriptions mention different stations of imaging from the stomach and duodenum. The CBD lies closest to the duodenum, and the choice of imaging may be restricted to the duodenum for many operators. Generally most operators prefer multi-station imaging during EUS, and the choice of the initial station varies from operator to operator. Detailed evaluation of the CBD is frequently the main focus of imaging during EUS, and in such situations multi-station imaging with a high-resolution ultrasound scanner may provide useful information. Examination of the CBD is one of the primary indications for performing an EUS, and it can be done from five stations: (1) the fundus of the stomach; (2) the body of the stomach; (3) the duodenal bulb; (4) the descending duodenum; and (5) the antrum. The entire CBD can be imaged from the liver window by following the upper third of the CBD downward, and from the pancreatic window by following the lower third of the CBD upward. This article aims to simplify the techniques of imaging the CBD by linear EUS.

  7. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    Science.gov (United States)

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in the literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left-hand side, right-hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models, and applications to a refinery production planning problem and a batch process scheduling problem are presented. PMID:21935263
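    As a toy illustration of one of the simplest cases discussed in this record, the sketch below builds the robust counterpart of a single less-than constraint under a pure interval (box) uncertainty set. With non-negative variables, the worst case simply shifts each coefficient to its upper bound, so the robust counterpart remains a linear program; the data are made up.

```python
# Robust counterpart of a_nom @ x <= b under interval (box) coefficient uncertainty.
import numpy as np
from scipy.optimize import linprog

c = np.array([-3.0, -2.0])        # maximize 3*x1 + 2*x2, written as a minimization
a_nom = np.array([[2.0, 1.0]])    # nominal constraint coefficients
delta = np.array([[0.2, 0.1]])    # interval half-widths of the coefficients
b = np.array([10.0])

nominal = linprog(c, A_ub=a_nom, b_ub=b, bounds=[(0, None)] * 2)
robust = linprog(c, A_ub=a_nom + delta, b_ub=b, bounds=[(0, None)] * 2)
print(nominal.x, robust.x)        # the robust solution is the more conservative one
```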

  8. A Multiphase Non-Linear Mixed Effects Model: An Application to Spirometry after Lung Transplantation

    Science.gov (United States)

    Rajeswaran, Jeevanantham; Blackstone, Eugene H.

    2014-01-01

    In medical sciences, we often encounter longitudinal temporal relationships that are non-linear in nature. The influence of risk factors may also change across longitudinal follow-up. A system of multiphase non-linear mixed effects model is presented to model temporal patterns of longitudinal continuous measurements, with temporal decomposition to identify the phases and risk factors within each phase. Application of this model is illustrated using spirometry data after lung transplantation using readily available statistical software. This application illustrates the usefulness of our flexible model when dealing with complex non-linear patterns and time varying coefficients. PMID:24919830

  9. Speed Sensorless mixed sensitivity linear parameter variant H_inf control of the induction motor

    NARCIS (Netherlands)

    Toth, R.; Fodor, D.

    2004-01-01

    The paper shows the design of a robust control structure for the speed sensorless vector control of the IM, based on the mixed sensitivity (MS) linear parameter variant (LPV) H∞ control theory. The controller makes possible the direct control of the flux and speed of the motor with torque adaptation.

  10. Markov and semi-Markov switching linear mixed models used to identify forest tree growth components.

    Science.gov (United States)

    Chaubert-Pereira, Florence; Guédon, Yann; Lavergne, Christian; Trottier, Catherine

    2010-09-01

    Tree growth is assumed to be mainly the result of three components: (i) an endogenous component assumed to be structured as a succession of roughly stationary phases separated by marked change points that are asynchronous among individuals, (ii) a time-varying environmental component assumed to take the form of synchronous fluctuations among individuals, and (iii) an individual component corresponding mainly to the local environment of each tree. To identify and characterize these three components, we propose to use semi-Markov switching linear mixed models, i.e., models that combine linear mixed models in a semi-Markovian manner. The underlying semi-Markov chain represents the succession of growth phases and their lengths (endogenous component), whereas the linear mixed models attached to each state of the underlying semi-Markov chain represent, in the corresponding growth phase, both the influence of time-varying climatic covariates (environmental component) as fixed effects, and interindividual heterogeneity (individual component) as random effects. In this article, we address the estimation of Markov and semi-Markov switching linear mixed models in a general framework. We propose a Monte Carlo expectation-maximization-like algorithm whose iterations decompose into three steps: (i) sampling of state sequences given random effects, (ii) prediction of random effects given state sequences, and (iii) maximization. The proposed statistical modeling approach is illustrated by the analysis of successive annual shoots along Corsican pine trunks influenced by climatic covariates. © 2009, The International Biometric Society.

  11. Evaluation of a Linear Mixing Model to Retrieve Soil and Vegetation Temperatures of Land Targets

    NARCIS (Netherlands)

    Yang, J.; Jia, L.; Cui, Y.; Zhou, J.; Menenti, M.

    2014-01-01

    A simple linear mixing model of a heterogeneous soil-vegetation system and the retrieval of component temperatures from directional remote sensing measurements by inverting this model are evaluated in this paper using observations by a thermal camera. The thermal camera was used to obtain multi-angular TIR

  12. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    Science.gov (United States)

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.

  13. Optimising the selection of food items for food frequency questionnaires using Mixed Integer Linear Programming

    NARCIS (Netherlands)

    Lemmen-Gerdessen, van J.C.; Souverein, O.W.; Veer, van 't P.; Vries, de J.H.M.

    2015-01-01

    Objective To support the selection of food items for FFQs in such a way that the amount of information on all relevant nutrients is maximised while the food list is as short as possible. Design Selection of the most informative food items to be included in FFQs was modelled as a Mixed Integer Linear Programming problem.

  14. Bayesian prediction of spatial count data using generalized linear mixed models

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge

    2002-01-01

    Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, ...

  15. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Science.gov (United States)

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  16. Mixed-Integer-Linear-Programming-Based Energy Management System for Hybrid PV-Wind-Battery Microgrids

    DEFF Research Database (Denmark)

    Hernández, Adriana Carolina Luna; Aldana, Nelson Leonardo Diaz; Graells, Moises

    2017-01-01

    -side strategy, defined as a general mixed-integer linear programming problem that takes into account two stages for proper charging of the storage units. This model is considered as a deterministic problem that aims to minimize operating costs and promote self-consumption based on 24-hour-ahead forecast data...

  17. Visual, Algebraic and Mixed Strategies in Visually Presented Linear Programming Problems.

    Science.gov (United States)

    Shama, Gilli; Dreyfus, Tommy

    1994-01-01

    Identified and classified solution strategies of (n=49) 10th-grade students who were presented with linear programming problems in a predominantly visual setting in the form of a computerized game. Visual strategies were developed more frequently than either algebraic or mixed strategies. Appendix includes questionnaires. (Contains 11 references.)…

  18. A Second-Order Conditionally Linear Mixed Effects Model with Observed and Latent Variable Covariates

    Science.gov (United States)

    Harring, Jeffrey R.; Kohli, Nidhi; Silverman, Rebecca D.; Speece, Deborah L.

    2012-01-01

    A conditionally linear mixed effects model is an appropriate framework for investigating nonlinear change in a continuous latent variable that is repeatedly measured over time. The efficacy of the model is that it allows parameters that enter the specified nonlinear time-response function to be stochastic, whereas those parameters that enter in a…

  19. Robust linear registration of CT images using random regression forests

    Science.gov (United States)

    Konukoglu, Ender; Criminisi, Antonio; Pathak, Sayan; Robertson, Duncan; White, Steve; Haynor, David; Siddiqui, Khan

    2011-03-01

    Global linear registration is a necessary first step for many different tasks in medical image analysis. Comparing longitudinal studies [1], cross-modality fusion [2], and many other applications depend heavily on the success of the automatic registration. The robustness and efficiency of this step is crucial as it affects all subsequent operations. Most common techniques cast the linear registration problem as the minimization of a global energy function based on the image intensities. Although these algorithms have proved useful, their robustness in fully automated scenarios is still an open question. In fact, the optimization step often gets caught in local minima yielding unsatisfactory results. Recent algorithms constrain the space of registration parameters by exploiting implicit or explicit organ segmentations, thus increasing robustness [4,5]. In this work we propose a novel robust algorithm for automatic global linear image registration. Our method uses random regression forests to estimate posterior probability distributions for the locations of anatomical structures - represented as axis-aligned bounding boxes [6]. These posterior distributions are later integrated in a global linear registration algorithm. The biggest advantage of our algorithm is that it does not require pre-defined segmentations or regions. Yet it yields robust registration results. We compare the robustness of our algorithm with that of the state-of-the-art Elastix toolbox [7]. Validation is performed via 1464 pair-wise registrations in a database of very diverse 3D CT images. We show that our method decreases the "failure" rate of the global linear registration from 12.5% (Elastix) to only 1.9%.

  20. A novel mixed-synchronization phenomenon in coupled Chua's circuits via non-fragile linear control

    International Nuclear Information System (INIS)

    Wang Jun-Wei; Ma Qing-Hua; Zeng Li

    2011-01-01

    Dynamical variables of coupled nonlinear oscillators can exhibit different synchronization patterns depending on the designed coupling scheme. In this paper, a non-fragile linear feedback control strategy with multiplicative controller gain uncertainties is proposed for realizing the mixed-synchronization of Chua's circuits connected in a drive-response configuration. In particular, in the mixed-synchronization regime, different state variables of the response system can evolve into complete synchronization, anti-synchronization and even amplitude death simultaneously with the drive variables for an appropriate choice of scaling matrix. Using Lyapunov stability theory, we derive some sufficient criteria for achieving global mixed-synchronization. It is shown that the desired non-fragile state feedback controller can be constructed by solving a set of linear matrix inequalities (LMIs). Numerical simulations are also provided to demonstrate the effectiveness of the proposed control approach. (general)

  1. Linearized inversion frameworks toward high-resolution seismic imaging

    KAUST Repository

    Aldawood, Ali

    2016-09-01

    internally multiply scattered seismic waves to obtain highly resolved images delineating vertical faults that are otherwise not easily imaged by primaries. Seismic interferometry is conventionally based on the cross-correlation and convolution of seismic traces to transform seismic data from one acquisition geometry to another. The conventional interferometric transformation yields virtual data that suffer from low temporal resolution, wavelet distortion, and correlation/convolution artifacts. I therefore incorporate a least-squares datuming technique to interferometrically transform vertical-seismic-profile surface-related multiples to surface-seismic-profile primaries. This yields redatumed data with high temporal resolution and fewer artifacts, which are subsequently imaged to obtain highly resolved subsurface images. Tests on synthetic examples demonstrate the efficiency of the proposed techniques, yielding highly resolved migrated sections compared with images obtained by imaging conventionally redatumed data. I further advance the recently developed cost-effective Generalized Interferometric Multiple Imaging procedure, which aims to image not only first-order but also higher-order multiples. I formulate this procedure as a linearized inversion framework and solve it as a least-squares problem. Tests of the least-squares Generalized Interferometric Multiple Imaging framework on synthetic datasets demonstrate that it can provide highly resolved migrated images and delineate vertical fault planes compared with the standard procedure. The results support the assertion that this linearized inversion framework can illuminate subsurface zones that are mainly illuminated by internally scattered energy.

  2. Scene matching based on non-linear pre-processing on reference image and sensed image

    Institute of Scientific and Technical Information of China (English)

    Zhong Sheng; Zhang Tianxu; Sang Nong

    2005-01-01

    To solve the heterogeneous image scene matching problem, a non-linear pre-processing method applied to the original images before intensity-based correlation is proposed. The results show that the probability of a correct match is raised greatly. Especially for low-S/N image pairs, the effect is more remarkable.

  3. Non-linear imaging condition to image fractures as non-welded interfaces

    NARCIS (Netherlands)

    Minato, S.; Ghose, R.

    2014-01-01

    Hydraulic properties of a fractured reservoir are often controlled by large fractures. In order to seismically detect and characterize them, a high-resolution imaging method is necessary. We apply a non-linear imaging condition to image fractures, considered as non-welded interfaces. We derive the

  4. The linearized inversion of the generalized interferometric multiple imaging

    KAUST Repository

    Aldawood, Ali

    2016-09-06

    The generalized interferometric multiple imaging (GIMI) procedure can be used to image duplex waves and other higher-order internal multiples. Imaging duplex waves could help illuminate subsurface zones that are not easily illuminated by primaries, such as vertical and nearly vertical fault planes, and salt flanks. To image first-order internal multiples, the GIMI framework consists of three datuming steps, followed by applying the zero-lag cross-correlation imaging condition. However, the standard GIMI procedure yields migrated images that suffer from low spatial resolution, migration artifacts, and cross-talk noise. To alleviate these problems, we propose a least-squares GIMI framework in which we formulate the first two steps as a linearized inversion problem when imaging first-order internal multiples. Tests on synthetic datasets demonstrate the ability to localize subsurface scatterers in their true positions, and to delineate a vertical fault plane using the proposed method. We also demonstrate the robustness of the proposed framework when imaging the scatterers or the vertical fault plane with erroneous migration velocities.

  5. Linear and Weakly Nonlinear Instability of Shallow Mixing Layers with Variable Friction

    Directory of Open Access Journals (Sweden)

    Irina Eglite

    2018-01-01

    Full Text Available Linear and weakly nonlinear instability of shallow mixing layers is analysed in the present paper. It is assumed that the resistance force varies in the transverse direction. The linear stability problem is solved numerically using a collocation method. It is shown that an increase in the ratio of the friction coefficient in the main channel to that in the floodplain has a stabilizing influence on the flow. The amplitude evolution equation for the most unstable mode (the complex Ginzburg–Landau equation) is derived from the shallow water equations under the rigid-lid assumption. Results of numerical calculations are presented.

  6. Linear dose dependence of ion beam mixing of metals on Si

    International Nuclear Information System (INIS)

    Poker, D.B.; Appleton, B.R.

    1985-01-01

    These experiments were conducted to determine the dose dependences of ion beam mixing of various metal-silicon couples. V/Si and Cr/Si were included because these couples were previously suspected of exhibiting a linear dose dependence. Pd/Si was chosen because it had been reported as exhibiting only the square root dependence. Samples were cut from wafers of (100) n-type Si. The samples were cleaned in organic solvents, etched in hydrofluoric acid, and rinsed with methanol before mounting in an oil-free vacuum system for thin-film deposition. Films of Au, V, Cr, or Pd were evaporated onto the Si samples with a nominal deposition rate of 10 Å/s. The thicknesses were large compared with those usually used to measure ion beam mixing and were used to ensure that conditions of unlimited supply were met. Samples were mixed with Si ions ranging in energy from 300 to 375 keV, chosen to produce ion ranges that significantly exceeded the metal film depth. Si was used as the mixing ion to prevent impurity doping of the Si substrate and to exclude a background signal from the Rutherford backscattering (RBS) spectra. Samples were mixed at room temperature, with the exception of the Au/Si samples, which were mixed at liquid nitrogen temperature. The samples were alternately mixed and analyzed in situ without exposure to atmosphere between mixing doses. The compositional distributions after mixing were measured using RBS of 2.5-MeV ⁴He atoms.

  7. Mixed integer linear programming model for dynamic supplier selection problem considering discounts

    Directory of Open Access Journals (Sweden)

    Adi Wicaksono Purnawan

    2018-01-01

    Full Text Available Supplier selection is one of the most important elements in supply chain management. This function involves the evaluation of many factors such as material costs, transportation costs, quality, delays, supplier capacity, storage capacity and others. Each of these factors varies with time; therefore, the supplier identified for one period is not necessarily the same one that should supply the same product in the next period. A mixed integer linear programming (MILP) model was therefore developed to address the dynamic supplier selection problem (DSSP). In this paper, the MILP model is built to solve the lot-sizing problem with multiple suppliers, multiple periods, multiple products and quantity discounts. The buyer has to decide which products will be supplied by which suppliers in which periods, taking discounts into account. The MILP model is validated with randomly generated data and solved with Lingo 16.
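    A much-reduced, hypothetical version of such a supplier-selection MILP (one product, one period, no discounts; data invented for illustration) can be written with SciPy's MILP interface as follows: binary variables open a supplier and incur its fixed cost, continuous variables carry the purchased quantities, and linking constraints tie quantities to opened suppliers.

```python
# Toy supplier-selection MILP: minimize purchase plus fixed ordering costs.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

unit_cost = np.array([4.0, 5.0, 3.5])      # price per unit for each supplier
fixed_cost = np.array([50.0, 30.0, 80.0])  # fixed ordering cost if a supplier is used
capacity = np.array([60.0, 80.0, 40.0])
demand = 100.0

# decision vector z = [x1, x2, x3, y1, y2, y3]: quantities x, binary selections y
c = np.concatenate([unit_cost, fixed_cost])
integrality = np.array([0, 0, 0, 1, 1, 1])
bounds = Bounds(lb=np.zeros(6), ub=np.concatenate([capacity, np.ones(3)]))

A_demand = np.array([[1.0, 1.0, 1.0, 0.0, 0.0, 0.0]])   # total quantity >= demand
A_link = np.hstack([np.eye(3), -np.diag(capacity)])      # x_i <= capacity_i * y_i
constraints = [LinearConstraint(A_demand, demand, np.inf),
               LinearConstraint(A_link, -np.inf, 0.0)]

res = milp(c=c, constraints=constraints, integrality=integrality, bounds=bounds)
print(res.x, res.fun)
```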

  8. Mixed problems for linear symmetric hyperbolic systems with characteristic boundary conditions

    International Nuclear Information System (INIS)

    Secchi, P.

    1994-01-01

    We consider the initial-boundary value problem for symmetric hyperbolic systems with a characteristic boundary of constant multiplicity. In the linear case we give some results about the existence of regular solutions in suitable function spaces which take into account the loss of regularity in the normal direction to the characteristic boundary. We also consider the equations of ideal magneto-hydrodynamics under perfectly conducting wall boundary conditions and give some results about the solvability of such a mixed problem. (author). 16 refs

  9. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
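    The data "explosion" underlying this equivalence is easy to sketch. Below, simulated survival times are split into one row per baseline-hazard piece and a plain Poisson GLM with a log-exposure offset is fitted; the log-normal frailty itself would be added as a normal random intercept per cluster in mixed-model software (for instance via the %PCFrailty macro described above) and is omitted here to keep the sketch short. All data and cut points are illustrative.

```python
# Piecewise-exponential representation: one row per subject and hazard piece.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200
x = rng.binomial(1, 0.5, n)                      # a binary covariate
time = rng.exponential(scale=np.exp(-0.5 * x))   # survival times depending on x
event = (time < 2.0).astype(int)                 # administrative censoring at t = 2
time = np.minimum(time, 2.0)

cuts = np.array([0.0, 0.5, 1.0, 2.0])            # three pieces for the baseline hazard
rows = []
for i in range(n):
    for j in range(len(cuts) - 1):
        if time[i] <= cuts[j]:
            break
        exposure = min(time[i], cuts[j + 1]) - cuts[j]
        died = int(event[i] == 1 and time[i] <= cuts[j + 1])
        rows.append({"piece": j, "x": x[i], "exposure": exposure, "d": died})
df = pd.DataFrame(rows)

# Poisson likelihood with piece-specific intercepts and a log(exposure) offset.
X = pd.get_dummies(df["piece"], prefix="piece", dtype=float).join(df["x"])
fit = sm.GLM(df["d"], X, family=sm.families.Poisson(),
             offset=np.log(df["exposure"])).fit()
print(fit.params)
```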

  10. Linear mixed-effects models for central statistical monitoring of multicenter clinical trials

    OpenAIRE

    Desmet, L.; Venet, D.; Doffagne, E.; Timmermans, C.; BURZYKOWSKI, Tomasz; LEGRAND, Catherine; BUYSE, Marc

    2014-01-01

    Multicenter studies are widely used to meet accrual targets in clinical trials. Clinical data monitoring is required to ensure the quality and validity of the data gathered across centers. One approach to this end is central statistical monitoring, which aims at detecting atypical patterns in the data by means of statistical methods. In this context, we consider the simple case of a continuous variable, and we propose a detection procedure based on a linear mixed-effects model to detect locat...

  11. A Mixed Integer Linear Programming Model for the North Atlantic Aircraft Trajectory Planning

    OpenAIRE

    Sbihi , Mohammed; Rodionova , Olga; Delahaye , Daniel; Mongeau , Marcel

    2015-01-01

    International audience; This paper discusses the trajectory planning problem for flights in the North Atlantic oceanic airspace (NAT). We develop a mathematical optimization framework in view of better utilizing available capacity by re-routing aircraft. The model is constructed by discretizing the problem parameters. A mixed integer linear program (MILP) is proposed. Based on the MILP, a heuristic to solve real-size instances is also introduced.

  12. An SDP Approach for Multiperiod Mixed 0–1 Linear Programming Models with Stochastic Dominance Constraints for Risk Management

    DEFF Research Database (Denmark)

    Escudero, Laureano F.; Monge, Juan Francisco; Morales, Dolores Romero

    2015-01-01

    In this paper we consider multiperiod mixed 0–1 linear programming models under uncertainty. We propose a risk averse strategy using stochastic dominance constraints (SDC) induced by mixed-integer linear recourse as the risk measure. The SDC strategy extends the existing literature to the multist...

  13. Light Scattering Study of Mixed Micelles Made from Elastin-Like Polypeptide Linear Chains and Trimers

    Science.gov (United States)

    Terrano, Daniel; Tsuper, Ilona; Maraschky, Adam; Holland, Nolan; Streletzky, Kiril

    Temperature-sensitive nanoparticles were generated from a construct (H20F) of three chains of elastin-like polypeptides (ELP) linked to a negatively charged foldon domain. This ELP system was mixed at different ratios with linear chains of ELP (H40L), which lack the foldon domain. The mixed system is soluble at room temperature and at a transition temperature (Tt) forms swollen micelles with the hydrophobic linear chains hidden inside. This system was studied using depolarized dynamic light scattering (DDLS) and static light scattering (SLS) to determine the size, shape, and internal structure of the mixed micelles. The mixed micelles at equal parts of H20F and H40L show a constant apparent hydrodynamic radius of 40-45 nm over the concentration window from 25:25 to 60:60 μM (1:1 ratio). At a fixed 50 μM concentration of H20F, varying the H40L concentration from 5 to 80 μM resulted in a linear growth in the hydrodynamic radius from about 11 to about 62 nm, along with a 1000-fold increase in the VH signal. A possible simple model explaining the growth of the swollen micelles is considered. Lastly, the VH signal can indicate elongation of the particle geometry or could possibly result from anisotropic properties of the micelle core. SLS was used to study the molecular weight and the radius of gyration of the micelles to help identify the structure and morphology of the mixed micelles and the tangible cause of the VH signal.

  14. Modelling subject-specific childhood growth using linear mixed-effect models with cubic regression splines.

    Science.gov (United States)

    Grajeda, Laura M; Ivanescu, Andrada; Saito, Mayuko; Crainiceanu, Ciprian; Jaganath, Devan; Gilman, Robert H; Crabtree, Jean E; Kelleher, Dermott; Cabrera, Lilia; Cama, Vitaliano; Checkley, William

    2016-01-01

    Childhood growth is a cornerstone of pediatric research. Statistical models need to consider individual trajectories to adequately describe growth outcomes. Specifically, well-defined longitudinal models are essential to characterize both population and subject-specific growth. Linear mixed-effect models with cubic regression splines can account for the nonlinearity of growth curves and provide reasonable estimators of population and subject-specific growth, velocity and acceleration. We provide a stepwise approach that builds from simple to complex models and accounts for the intrinsic complexity of the data. We start with standard cubic spline regression models and build up to a model that includes subject-specific random intercepts and slopes and residual autocorrelation. We then compare cubic regression splines with linear piecewise splines, and with varying numbers and positions of knots. Statistical code is provided to ensure reproducibility and improve dissemination of methods. The models are applied to longitudinal height measurements in a cohort of 215 Peruvian children followed from birth until their fourth year of life. Unexplained variability, as measured by the variance of the regression model, was reduced from 7.34 when using ordinary least squares to 0.81 when using linear mixed-effect models with random slopes and a first-order continuous autoregressive error term. There was substantial heterogeneity in both the intercepts and the slopes across children. Serial correlation was well captured by the first-order continuous autoregressive error term, as evidenced by the variogram of the residuals and by a lack of association among residuals. The final model provides a parametric linear regression equation for both estimation and prediction of population- and individual-level growth in height. We show that cubic regression splines are superior to linear regression splines for the case of a small number of knots in both estimation and prediction with the full linear mixed effect model (AIC 19,352 vs. 19…
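    A hedged sketch of the modelling strategy described in this record, using simulated data rather than the Peruvian cohort: a linear mixed-effects model whose fixed effects use a cubic B-spline basis for age and whose random effects are a child-specific intercept and slope. The continuous-time autoregressive error term used in the paper is not included; knot number, the simulated growth curve and the software choice (statsmodels/patsy) are assumptions for illustration only.

```python
# Linear mixed-effects model with a cubic spline basis for age (simulated data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_children, n_visits = 50, 12
child = np.repeat(np.arange(n_children), n_visits)
age = np.tile(np.linspace(0.1, 4.0, n_visits), n_children)
child_intercept = rng.normal(0, 2.0, n_children)[child]
child_slope = rng.normal(0, 0.5, n_children)[child]
height = (50 + 18 * np.log1p(age) + child_intercept + child_slope * age
          + rng.normal(0, 1.0, n_children * n_visits))
df = pd.DataFrame({"height": height, "age": age, "child": child})

fit = smf.mixedlm("height ~ bs(age, df=4)",          # cubic B-spline fixed effects
                  data=df, groups=df["child"],
                  re_formula="~age").fit()           # random intercept and slope per child
print(fit.summary())
```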

  15. Detecting treatment-subgroup interactions in clustered data with generalized linear mixed-effects model trees.

    Science.gov (United States)

    Fokkema, M; Smits, N; Zeileis, A; Hothorn, T; Kelderman, H

    2017-10-25

    Identification of subgroups of patients for whom treatment A is more effective than treatment B, and vice versa, is of key importance to the development of personalized medicine. Tree-based algorithms are helpful tools for the detection of such interactions, but none of the available algorithms allow for taking into account clustered or nested dataset structures, which are particularly common in psychological research. Therefore, we propose the generalized linear mixed-effects model tree (GLMM tree) algorithm, which allows for the detection of treatment-subgroup interactions, while accounting for the clustered structure of a dataset. The algorithm uses model-based recursive partitioning to detect treatment-subgroup interactions, and a GLMM to estimate the random-effects parameters. In a simulation study, GLMM trees show higher accuracy in recovering treatment-subgroup interactions, higher predictive accuracy, and lower type II error rates than linear-model-based recursive partitioning and mixed-effects regression trees. Also, GLMM trees show somewhat higher predictive accuracy than linear mixed-effects models with pre-specified interaction effects, on average. We illustrate the application of GLMM trees on an individual patient-level data meta-analysis on treatments for depression. We conclude that GLMM trees are a promising exploratory tool for the detection of treatment-subgroup interactions in clustered datasets.

  16. Mixed H∞ and passive control for linear switched systems via hybrid control approach

    Science.gov (United States)

    Zheng, Qunxian; Ling, Youzhu; Wei, Lisheng; Zhang, Hongbin

    2018-03-01

    This paper investigates the mixed H∞ and passive control problem for linear switched systems based on a hybrid control strategy. To solve this problem, first, a new performance index is proposed. This performance index can be viewed as the mixed weighted H∞ and passivity performance. Then, the hybrid controllers are used to stabilise the switched systems. The hybrid controllers consist of dynamic output-feedback controllers for every subsystem and state updating controllers at the switching instants. The design of the state updating controllers not only depends on the pre-switching subsystem and the post-switching subsystem, but also depends on the measurable output signal. The hybrid controllers proposed in this paper can include some existing ones as special cases. Combining the multiple Lyapunov functions approach with the average dwell time technique, new sufficient conditions are obtained. Under the new conditions, the closed-loop linear switched systems are globally uniformly asymptotically stable with a mixed H∞ and passivity performance index. Moreover, the desired hybrid controllers can be constructed by solving a set of linear matrix inequalities. Finally, a numerical example and a practical example are given.

  17. A novel highly parallel algorithm for linearly unmixing hyperspectral images

    Science.gov (United States)

    Guerra, Raúl; López, Sebastián.; Callico, Gustavo M.; López, Jose F.; Sarmiento, Roberto

    2014-10-01

    Endmember extraction and abundance calculation represent critical steps within the process of linearly unmixing a given hyperspectral image, for two main reasons. The first is the need to compute a set of accurate endmembers in order to further obtain confident abundance maps. The second is the huge number of operations involved in these time-consuming processes. This work proposes an algorithm that estimates the endmembers of a hyperspectral image under analysis and their abundances at the same time. The main advantages of this algorithm are its high degree of parallelization and the mathematical simplicity of the operations involved. The algorithm estimates the endmembers as virtual pixels. In particular, the proposed algorithm performs gradient descent to iteratively refine the endmembers and the abundances, reducing the mean square error in accordance with the linear mixing model. Some mathematical restrictions must be added so that the method converges to a unique and realistic solution; given the nature of the algorithm, these restrictions can be easily implemented. The results obtained with synthetic images demonstrate the good behavior of the proposed algorithm. Moreover, the results obtained with the well-known Cuprite dataset also corroborate the benefits of our proposal.
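    The core abundance-update idea can be sketched in a few lines of NumPy: under the linear mixing model Y ≈ EA, take gradient-descent steps on the abundances and project back onto the non-negativity and sum-to-one constraints after each step. Endmember refinement, which the algorithm in this record performs jointly, and its parallel implementation are omitted; data, step size and iteration count are illustrative.

```python
# Projected gradient descent for abundance estimation under the linear mixing model.
import numpy as np

rng = np.random.default_rng(4)
bands, p, n = 50, 3, 1000
E = np.abs(rng.normal(size=(bands, p)))           # endmember spectra (bands x p)
A_true = rng.dirichlet(np.ones(p), size=n).T      # true abundances (p x n)
Y = E @ A_true + 0.01 * rng.normal(size=(bands, n))

A = np.full((p, n), 1.0 / p)                      # start from uniform abundances
step = 1.0 / np.linalg.norm(E.T @ E, 2)           # step size from the Lipschitz constant
for _ in range(500):
    A -= step * (E.T @ (E @ A - Y))               # gradient of 0.5 * ||Y - E A||^2
    A = np.clip(A, 0.0, None)                     # non-negativity constraint
    A /= A.sum(axis=0, keepdims=True) + 1e-12     # simple sum-to-one renormalisation

print(np.mean((A - A_true) ** 2))                 # small error on this synthetic example
```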

  18. Robust linearized image reconstruction for multifrequency EIT of the breast.

    Science.gov (United States)

    Boverman, Gregory; Kao, Tzu-Jen; Kulkarni, Rujuta; Kim, Bong Seok; Isaacson, David; Saulnier, Gary J; Newell, Jonathan C

    2008-10-01

    Electrical impedance tomography (EIT) is a developing imaging modality that is beginning to show promise for detecting and characterizing tumors in the breast. At Rensselaer Polytechnic Institute, we have developed a combined EIT-tomosynthesis system that allows for the coregistered and simultaneous analysis of the breast using EIT and X-ray imaging. A significant challenge in EIT is the design of computationally efficient image reconstruction algorithms which are robust to various forms of model mismatch. Specifically, we have implemented a scaling procedure that is robust to the presence of a thin highly-resistive layer of skin at the boundary of the breast and we have developed an algorithm to detect and exclude from the image reconstruction electrodes that are in poor contact with the breast. In our initial clinical studies, it has been difficult to ensure that all electrodes make adequate contact with the breast, and thus procedures for the use of data sets containing poorly contacting electrodes are particularly important. We also present a novel, efficient method to compute the Jacobian matrix for our linearized image reconstruction algorithm by reducing the computation of the sensitivity for each voxel to a quadratic form. Initial clinical results are presented, showing the potential of our algorithms to detect and localize breast tumors.

  19. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    Directory of Open Access Journals (Sweden)

    Daniel T. L. Shek

    2011-01-01

    Full Text Available Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  20. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  1. An Improved Piecewise Linear Chaotic Map Based Image Encryption Algorithm

    Directory of Open Access Journals (Sweden)

    Yuping Hu

    2014-01-01

    Full Text Available An image encryption algorithm based on an improved piecewise linear chaotic map (MPWLCM) model is proposed. The algorithm uses the MPWLCM to permute and diffuse the plain image simultaneously. Owing to the sensitivity to initial key values, the system parameters, and the ergodicity of the chaotic system, two pseudorandom sequences are designed and used in the permutation and diffusion processes. Pixels are processed not in index order but alternately from the beginning and the end, and cipher feedback is introduced in the diffusion process. Test results and security analysis show that the scheme not only achieves good encryption results but also has a key space large enough to resist brute-force attacks.
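    For orientation, the sketch below shows only the permutation stage of such a scheme with a standard (unmodified) piecewise linear chaotic map: the secret key (x0, p) seeds a chaotic sequence whose sort order scrambles the pixel positions. The paper's modified map, alternating scan order and cipher-feedback diffusion are not reproduced; all key values are illustrative.

```python
# Pixel permutation driven by a standard piecewise linear chaotic map (PWLCM).
import numpy as np

def pwlcm_sequence(x0, p, length):
    """Iterate the PWLCM with control parameter p in (0, 0.5) and state x in (0, 1)."""
    x, out = x0, np.empty(length)
    for i in range(length):
        if x >= 0.5:
            x = 1.0 - x                       # exploit the map's symmetry about 0.5
        x = x / p if x < p else (x - p) / (0.5 - p)
        out[i] = x
    return out

rng = np.random.default_rng(5)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
keystream = pwlcm_sequence(x0=0.3456, p=0.278, length=img.size)   # (x0, p) acts as the key
perm = np.argsort(keystream)                       # chaotic permutation of pixel indices
cipher = img.flatten()[perm].reshape(img.shape)    # permuted (confused) image

recovered = np.empty(img.size, dtype=np.uint8)
recovered[perm] = cipher.flatten()                 # inverse permutation restores the image
assert np.array_equal(recovered.reshape(img.shape), img)
```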

  2. Non-linear Imaging using an Experimental Synthetic Aperture Real Time Ultrasound Scanner

    DEFF Research Database (Denmark)

    Rasmussen, Joachim; Du, Yigang; Jensen, Jørgen Arendt

    2011-01-01

    This paper presents the first non-linear B-mode image of a wire phantom using pulse inversion attained via an experimental synthetic aperture real-time ultrasound scanner (SARUS). The purpose of this study is to implement and validate non-linear imaging on SARUS for the further development of new...... non-linear techniques. This study presents non-linear and linear B-mode images attained via SARUS and an existing ultrasound system as well as a Field II simulation. The non-linear image shows an improved spatial resolution and lower full width half max and -20 dB resolution values compared to linear...

  3. Linear matrix inequality approach for synchronization control of fuzzy cellular neural networks with mixed time delays

    International Nuclear Information System (INIS)

    Balasubramaniam, P.; Kalpana, M.; Rakkiyappan, R.

    2012-01-01

    Fuzzy cellular neural networks (FCNNs) are special kinds of cellular neural networks (CNNs). Each cell in an FCNN contains fuzzy operating abilities. The entire network is governed by cellular computing laws. The design of FCNNs is based on fuzzy local rules. In this paper, a linear matrix inequality (LMI) approach for synchronization control of FCNNs with mixed delays is investigated. Mixed delays include discrete time-varying delays and unbounded distributed delays. A dynamic control scheme is proposed to achieve the synchronization between a drive network and a response network. By constructing a Lyapunov-Krasovskii functional which contains a triple-integral term and using the free-weighting matrices method, an improved delay-dependent stability criterion is derived in terms of LMIs. The controller can be easily obtained by solving the derived LMIs. A numerical example and its simulations are presented to illustrate the effectiveness of the proposed method. (interdisciplinary physics and related areas of science and technology)

  4. Mixed Gaussian-Impulse Noise Image Restoration Via Total Variation

    Science.gov (United States)

    2012-05-01

    Several Total Variation (TV) regularization methods have recently been proposed to address denoising under mixed Gaussian and impulse noise.

  5. Skew-t partially linear mixed-effects models for AIDS clinical studies.

    Science.gov (United States)

    Lu, Tao

    2016-01-01

    We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, commonly assumed symmetric distributions for model errors are substituted by asymmetric distribution to account for skewness. Further, informative missing data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset and comparisons with alternative models are performed.

  6. Mixed Integer Linear Programming model for Crude Palm Oil Supply Chain Planning

    Science.gov (United States)

    Sembiring, Pasukat; Mawengkang, Herman; Sadyadharma, Hendaru; Bu'ulolo, F.; Fajriana

    2018-01-01

    The production process of crude palm oil (CPO) can be defined as the milling of the raw material, fresh fruit bunches (FFB), into the end product, palm oil. The process usually goes through a series of steps producing and consuming intermediate products. The CPO milling industry considered in this paper does not have an oil palm plantation of its own; the FFB are therefore supplied by several public oil palm plantations. Due to the limited availability of FFB, it is necessary to choose which plantations are the most appropriate. This paper proposes a mixed integer linear programming model for the integrated supply chain problem, which includes waste processing. The mathematical programming model is solved using a neighborhood search approach.

  7. An overview of solution methods for multi-objective mixed integer linear programming programs

    DEFF Research Database (Denmark)

    Andersen, Kim Allan; Stidsen, Thomas Riis

    Multiple objective mixed integer linear programming (MOMIP) problems are notoriously hard to solve to optimality, i.e. to find the complete set of non-dominated solutions. We will give an overview of existing methods. Among those are interactive methods, the two phases method and enumeration...... methods. In particular we will discuss the existing branch and bound approaches for solving multiple objective integer programming problems. Despite the fact that branch and bound methods have been applied successfully to integer programming problems with one criterion, only a few attempts have been made...
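    As a concrete illustration of what "the complete set of non-dominated solutions" means, the sketch below enumerates the non-dominated points of a tiny bi-objective 0-1 knapsack-type problem with an epsilon-constraint scalarization (a standard textbook device, not necessarily one of the branch-and-bound methods surveyed in this record). All data are invented.

```python
# Epsilon-constraint enumeration of non-dominated points for a bi-objective 0-1 MILP.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

profit = np.array([6.0, 5.0, 8.0, 9.0])   # objective 1: maximize total profit
risk = np.array([3.0, 2.0, 6.0, 7.0])     # objective 2: minimize total risk
weight = np.array([2.0, 3.0, 4.0, 5.0])
budget = 8.0

bounds = Bounds(np.zeros(4), np.ones(4))
integrality = np.ones(4)                   # all variables binary
pareto, eps = [], np.inf
while True:
    cons = [LinearConstraint(weight, -np.inf, budget),
            LinearConstraint(risk, -np.inf, eps)]
    res = milp(c=-profit, constraints=cons, integrality=integrality, bounds=bounds)
    if not res.success:
        break
    x = np.round(res.x)
    pareto.append((float(profit @ x), float(risk @ x)))
    eps = risk @ x - 1e-6                  # demand strictly lower risk in the next solve
print(pareto)                              # approximate non-dominated frontier
```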

  8. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    Science.gov (United States)

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.

  9. Mixed Higher Order Variational Model for Image Recovery

    Directory of Open Access Journals (Sweden)

    Pengfei Liu

    2014-01-01

    A novel mixed higher order regularizer involving the first- and second-degree image derivatives is proposed in this paper. Using spectral decomposition, we reformulate the new regularizer as a weighted L1-L2 mixed norm of image derivatives. Due to this equivalent formulation, an efficient fast projected gradient algorithm combined with monotone fast iterative shrinkage thresholding, called FPG-MFISTA, is designed to solve the resulting variational image recovery problems under a majorization-minimization framework. Finally, we demonstrate the effectiveness of the proposed regularization scheme by experimental comparisons with the total variation (TV) scheme, the nonlocal TV scheme, and current second-degree methods. Specifically, the proposed approach achieves better results than related state-of-the-art methods in terms of peak signal-to-noise ratio (PSNR) and restoration quality.

  10. Quadratic temporal finite element method for linear elastic structural dynamics based on mixed convolved action

    International Nuclear Information System (INIS)

    Kim, Jin Kyu; Kim, Dong Keon

    2016-01-01

    A common approach for dynamic analysis in current practice is based on a discrete time-integration scheme. This approach can be largely attributed to the absence of a true variational framework for initial value problems. To resolve this problem, a new stationary variational principle was recently established for single-degree-of-freedom oscillating systems using mixed variables, fractional derivatives and convolutions of convolutions. In this mixed convolved action, all the governing differential equations and initial conditions are recovered from the stationarity of a single functional action. Thus, the entire description of linear elastic dynamical systems is encapsulated. For its practical application to structural dynamics, this variational formalism is systemically extended to linear elastic multi-degree-of-freedom systems in this study, and a corresponding weak form is numerically implemented via a quadratic temporal finite element method. The developed numerical method is symplectic and unconditionally stable with respect to a time step for the underlying conservative system. For the forced-damped vibration, a three-story shear building is used as an example to investigate the performance of the developed numerical method, which provides accurate results with good convergence characteristics.

  11. Stability Criterion of Linear Stochastic Systems Subject to Mixed H2/Passivity Performance

    Directory of Open Access Journals (Sweden)

    Cheung-Chieh Ku

    2015-01-01

    The H2 control scheme and passivity theory are applied to investigate the stability criterion of a continuous-time linear stochastic system subject to mixed performance. Based on the stochastic differential equation, the stochastic behaviors can be described as multiplicative noise terms. For the considered system, the H2 control scheme is applied to deal with the problem of minimizing output energy, and the asymptotic stability of the system can be guaranteed under the desired initial conditions. Besides, the passivity theory is employed to constrain the effect of external disturbance on the system. Moreover, the Itô formula and a Lyapunov function are used to derive the sufficient conditions, which are converted into linear matrix inequality (LMI) form for applying a convex optimization algorithm. By solving the sufficient conditions, a state feedback controller can be established such that the asymptotic stability and mixed performance of the system are achieved in the mean square. Finally, a synchronous generator system is used to verify the effectiveness and applicability of the proposed design method.

  12. Spatial variability in floodplain sedimentation: the use of generalized linear mixed-effects models

    Directory of Open Access Journals (Sweden)

    A. Cabezas

    2010-08-01

    Sediment, total organic carbon (TOC) and total nitrogen (TN) accumulation during one overbank flood (1.15 y return interval) were examined at one reach of the Middle Ebro River (NE Spain) for elucidating spatial patterns. To achieve this goal, four areas with different geomorphological features located within the study reach were examined by using artificial grass mats. Within each area, 1 m² study plots consisting of three pseudo-replicates were placed in a semi-regular grid oriented perpendicular to the main channel. TOC, TN and particle-size composition of deposited sediments were examined and accumulation rates estimated. Generalized linear mixed-effects models were used to analyze sedimentation patterns in order to handle clustered sampling units, site-specific effects and spatial autocorrelation between observations. Our results confirm the importance of channel-floodplain morphology and site micro-topography in explaining sediment, TOC and TN deposition patterns, although other factors such as vegetation pattern should be included in further studies to explain small-scale variability. Generalized linear mixed-effects models provide a good framework to deal with the high spatial heterogeneity of this phenomenon at different spatial scales, and should be further investigated in order to explore their validity when examining the importance of factors such as flood magnitude or suspended sediment concentration.

  13. Partially linear mixed-effects joint models for skewed and missing longitudinal competing risks outcomes.

    Science.gov (United States)

    Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong

    2017-12-18

    Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skewed longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions by an asymmetric distribution for model errors. To deal with missingness, we employ an informative missing data model. Joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazards model for the competing risks process and the missing data process are developed. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we apply them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.

  14. Spoofing cyber attack detection in probe-based traffic monitoring systems using mixed integer linear programming

    KAUST Repository

    Canepa, Edward S.

    2013-01-01

    Traffic sensing systems rely more and more on user generated (insecure) data, which can pose a security risk whenever the data is used for traffic flow control. In this article, we propose a new formulation for detecting malicious data injection in traffic flow monitoring systems by using the underlying traffic flow model. The state of traffic is modeled by the Lighthill-Whitham-Richards traffic flow model, which is a first order scalar conservation law with concave flux function. Given a set of traffic flow data, we show that the constraints resulting from this partial differential equation are mixed integer linear inequalities for some decision variable. We use this fact to pose the problem of detecting spoofing cyber-attacks in probe-based traffic flow information systems as a mixed integer linear feasibility problem. The resulting framework can be used to detect spoofing attacks in real time, or to evaluate the worst-case effects of an attack offline. A numerical implementation is performed on a cyber-attack scenario involving experimental data from the Mobile Century experiment and the Mobile Millennium system currently operational in Northern California. © 2013 IEEE.

  15. Spoofing cyber attack detection in probe-based traffic monitoring systems using mixed integer linear programming

    KAUST Repository

    Canepa, Edward S.

    2013-09-01

    Traffic sensing systems rely more and more on user generated (insecure) data, which can pose a security risk whenever the data is used for traffic flow control. In this article, we propose a new formulation for detecting malicious data injection in traffic flow monitoring systems by using the underlying traffic flow model. The state of traffic is modeled by the Lighthill-Whitham-Richards traffic flow model, which is a first order scalar conservation law with concave flux function. Given a set of traffic flow data generated by multiple sensors of different types, we show that the constraints resulting from this partial differential equation are mixed integer linear inequalities for a specific decision variable. We use this fact to pose the problem of detecting spoofing cyber attacks in probe-based traffic flow information systems as a mixed integer linear feasibility problem. The resulting framework can be used to detect spoofing attacks in real time, or to evaluate the worst-case effects of an attack offline. A numerical implementation is performed on a cyber attack scenario involving experimental data from the Mobile Century experiment and the Mobile Millennium system currently operational in Northern California. © American Institute of Mathematical Sciences.

  16. Optimal placement of capacitors in a radial network using conic and mixed integer linear programming

    Energy Technology Data Exchange (ETDEWEB)

    Jabr, R.A. [Electrical, Computer and Communication Engineering Department, Notre Dame University, P.O. Box: 72, Zouk Mikhael, Zouk Mosbeh (Lebanon)

    2008-06-15

    This paper considers the problem of optimally placing fixed and switched type capacitors in a radial distribution network. The aim of this problem is to minimize the costs associated with capacitor banks, peak power, and energy losses whilst satisfying a pre-specified set of physical and technical constraints. The proposed solution is obtained using a two-phase approach. In phase-I, the problem is formulated as a conic program in which all nodes are candidates for placement of capacitor banks whose sizes are considered as continuous variables. A global solution of the phase-I problem is obtained using an interior-point based conic programming solver. Phase-II seeks a practical optimal solution by considering capacitor sizes as discrete variables. The problem in this phase is formulated as a mixed integer linear program based on minimizing the L1-norm of deviations from the phase-I state variable values. The solution to the phase-II problem is obtained using a mixed integer linear programming solver. The proposed method is validated via extensive comparisons with previously published results. (author)

  17. Quadratic temporal finite element method for linear elastic structural dynamics based on mixed convolved action

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jin Kyu [School of Architecture and Architectural Engineering, Hanyang University, Ansan (Korea, Republic of); Kim, Dong Keon [Dept. of Architectural Engineering, Dong A University, Busan (Korea, Republic of)

    2016-09-15

    A common approach for dynamic analysis in current practice is based on a discrete time-integration scheme. This approach can be largely attributed to the absence of a true variational framework for initial value problems. To resolve this problem, a new stationary variational principle was recently established for single-degree-of-freedom oscillating systems using mixed variables, fractional derivatives and convolutions of convolutions. In this mixed convolved action, all the governing differential equations and initial conditions are recovered from the stationarity of a single functional action. Thus, the entire description of linear elastic dynamical systems is encapsulated. For its practical application to structural dynamics, this variational formalism is systemically extended to linear elastic multi-degree-of-freedom systems in this study, and a corresponding weak form is numerically implemented via a quadratic temporal finite element method. The developed numerical method is symplectic and unconditionally stable with respect to a time step for the underlying conservative system. For the forced-damped vibration, a three-story shear building is used as an example to investigate the performance of the developed numerical method, which provides accurate results with good convergence characteristics.

  18. Linear analysis of rotationally invariant, radially variant tomographic imaging systems

    International Nuclear Information System (INIS)

    Huesmann, R.H.

    1990-01-01

    This paper describes a method to analyze the linear imaging characteristics of rotationally invariant, radially variant tomographic imaging systems using singular value decomposition (SVD). When the projection measurements from such a system are assumed to be samples from independent and identically distributed multi-normal random variables, the best estimate of the emission intensity is given by the unweighted least squares estimator. The noise amplification of this estimator is inversely proportional to the singular values of the normal matrix used to model projection and backprojection. After choosing an acceptable noise amplification, the new method can determine the number of parameters and hence the number of pixels that should be estimated from data acquired from an existing system with a fixed number of angles and projection bins. Conversely, for the design of a new system, the number of angles and projection bins necessary for a given number of pixels and noise amplification can be determined. In general, computing the SVD of the projection normal matrix has cubic computational complexity. However, the projection normal matrix for this class of rotationally invariant, radially variant systems has a block circulant form. A fast parallel algorithm to compute the SVD of this block circulant matrix makes the singular value analysis practical by asymptotically reducing the computational complexity of the method by a multiplicative factor equal to the number of angles squared.

  19. A Linear Criterion to sort Color Components in Images

    Directory of Open Access Journals (Sweden)

    Leonardo Barriga Rodriguez

    2017-01-01

    Color and its representation play a basic role in the image analysis process. Several methods can benefit from a correct representation of the wavelength variations used to capture scenes with a camera. A wide variety of color spaces and representations is found in the specialized literature. Each one is useful in particular circumstances, and some may offer redundant color information (for instance, all RGB components are highly correlated). This work deals with the task of identifying and sorting which components from several color representations offer the most information about the scene. The approach is based on analyzing linear dependencies among the color components, through a new sorting algorithm based on entropy. The proposal is tested on several outdoor/indoor scenes with different lighting conditions. Repeatability and stability are tested in order to guarantee its use in various image analysis applications. Finally, the results of this work have been used to enhance an external algorithm that compensates for random camera vibrations.

  20. Model and measurements of linear mixing in thermal IR ground leaving radiance spectra

    Science.gov (United States)

    Balick, Lee; Clodius, William; Jeffery, Christopher; Theiler, James; McCabe, Matthew; Gillespie, Alan; Mushkin, Amit; Danilina, Iryna

    2007-10-01

    Hyperspectral thermal IR remote sensing is an effective tool for the detection and identification of gas plumes and solid materials. Virtually all remotely sensed thermal IR pixels are mixtures of different materials and temperatures. As sensors improve and hyperspectral thermal IR remote sensing becomes more quantitative, the concept of homogeneous pixels becomes inadequate. The contributions of the constituents to the pixel spectral ground leaving radiance are weighted by their spectral emissivities and their temperature, or more correctly, temperature distributions, because real pixels are rarely thermally homogeneous. Planck's Law defines a relationship between temperature and radiance that is strongly wavelength dependent, even for blackbodies. Spectral ground leaving radiance (GLR) from mixed pixels is temperature and wavelength dependent and the relationship between observed radiance spectra from mixed pixels and library emissivity spectra of mixtures of 'pure' materials is indirect. A simple model of linear mixing of subpixel radiance as a function of material type, the temperature distribution of each material and the abundance of the material within a pixel is presented. The model indicates that, qualitatively and given normal environmental temperature variability, spectral features remain observable in mixtures as long as the material occupies more than roughly 10% of the pixel. Field measurements of known targets made on the ground and by an airborne sensor are presented here and serve as a reality check on the model. Target spectral GLR from mixtures as a function of temperature distribution and abundance within the pixel at day and night are presented and compare well qualitatively with model output.
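
    The mixing arithmetic described above can be made concrete with a small numerical sketch (my own illustration, not the authors' model code): a pixel's ground-leaving radiance is composed as an abundance-weighted sum of emissivity-scaled Planck radiances, one term per material and temperature. The materials, abundances, emissivities and temperatures below are invented for illustration.

    ```python
    # Linear mixing of sub-pixel thermal IR radiance:
    #   L_pixel(lambda) = sum_k a_k * eps_k(lambda) * B(lambda, T_k),
    # with B the Planck spectral radiance. All inputs are made-up examples.
    import numpy as np

    H = 6.626e-34   # Planck constant, J s
    C = 2.998e8     # speed of light, m/s
    KB = 1.381e-23  # Boltzmann constant, J/K

    def planck_radiance(wavelength_m, temp_k):
        """Spectral radiance of a blackbody, W sr^-1 m^-3."""
        return (2 * H * C**2 / wavelength_m**5 /
                (np.exp(H * C / (wavelength_m * KB * temp_k)) - 1.0))

    wavelengths = np.linspace(8e-6, 12e-6, 50)        # 8-12 micron window
    abundances = np.array([0.7, 0.3])                 # fractional cover in the pixel
    temperatures = np.array([300.0, 310.0])           # K, one per material
    emissivities = np.array([0.95 * np.ones_like(wavelengths),               # flat spectrum
                             0.90 + 0.05 * np.sin(2e6 * wavelengths)])       # spectrally varying

    mixed = sum(a * e * planck_radiance(wavelengths, t)
                for a, e, t in zip(abundances, emissivities, temperatures))
    print(mixed[:5])
    ```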

  1. Imaging of intracranial neuronal and mixed neuronal-glial tumours

    International Nuclear Information System (INIS)

    Cui Shimin; Qin Jinxi; Zhang Leili; Liu Meili; Jin Song; Yan Shixin; Liu Li; Dai Weiying; Li Tao; Gao Man

    2001-01-01

    Objective: To investigate the characteristic clinical, imaging, and pathologic findings of intracranial neuronal and mixed neuronal-glial tumours. Methods: The imaging findings of surgically and pathologically proven intracranial neuronal and mixed neuronal-glial tumours in 14 cases (7 male and 7 female, ranging in age from 6 to 56 years; mean age 33.8 years) were retrospectively analyzed. Results: Eight gangliogliomas were located in the frontal lobe (4 cases), temporal lobe (1 case), fronto-temporal lobe (2 cases), and pons (1 case). They appeared as iso- or low density on CT, iso- or low signal intensity on T1WI, and high signal intensity on T2WI on MR imaging. Two central neurocytomas were located in the supratentorial ventricles. Four desmoplastic gangliogliomas were seen as cystic masses, appearing as low signal intensity on T1WI and high signal intensity on T2WI. Conclusion: Intracranial neuronal and mixed neuronal-glial tumours show characteristic imaging features. Combined with the clinical history, a presumptive preoperative diagnosis can be made using CT or MR.

  2. Progress towards monochromatic imaging of mix at the NIF

    Science.gov (United States)

    Kyrala, G. A.; Murphy, T. J.; Bradley, P. A.; Krashenninnikova, N. S.; Tregillis, I. L.; Obrey, K.; Shah, R. C.; Hakel, P.; Kline, J. L.; Grim, G. P.; Schmitt, M. J.; Kanzleiter, R. J.; Regan, S. P.; Barrios, M. A.

    2013-10-01

    Mix of non-hydrogenic (Z > 1) material into the hydrogenic (D and T) ICF capsule fuel degrades implosion performance. The amount of degradation depends on the degree and the spatial distribution of mix. Experiments are underway at NIF to quantify the mix of shell material into fuel using directly driven capsules. CH or CD shells with various dopants, implanted at different depths in the shell, are being used to change the amount of dopant mix. Spatially and spectrally resolved emission from the ionized dopants will be used to generate spatially and temporally dependent density and temperature maps of the ionized dopants that are mixed and heated in the core plasma. This information will be used to validate different mix models. This talk will describe the search for the appropriate dopant that gave a radiation spectrum that could be used to record images with the MMI diagnostic. This work is supported by US DOE/NNSA, performed at LANL, operated by LANS LLC under contract DE-AC52-06NA25396.

  3. Mixed Linear/Square-Root Encoded Single Slope Ramp Provides a Fast, Low Noise Analog to Digital Converter with Very High Linearity for Focal Plane Arrays

    Science.gov (United States)

    Wrigley, Christopher James (Inventor); Hancock, Bruce R. (Inventor); Newton, Kenneth W. (Inventor); Cunningham, Thomas J. (Inventor)

    2014-01-01

    An analog-to-digital converter (ADC) converts pixel voltages from a CMOS image into a digital output. A voltage ramp generator generates a voltage ramp that has a linear first portion and a non-linear second portion. A digital output generator generates a digital output based on the voltage ramp, the pixel voltages, and comparator output from an array of comparators that compare the voltage ramp to the pixel voltages. A return lookup table linearizes the digital output values.
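
    A rough numerical sketch of the encoding idea follows (an illustration under assumed voltages and step counts, not the patented circuit): the ramp is linear up to a knee and square-root shaped above it, the comparator stop index is the raw code, and a return lookup table maps that index back to a linear scale.

    ```python
    # Single-slope ADC with a ramp that is linear below a knee and square-root
    # shaped above it; a return lookup table recovers a linear code. Illustrative only.
    import numpy as np

    n_steps = 4096
    v_knee, v_max = 0.25, 1.0
    frac = np.arange(n_steps) / (n_steps - 1)

    # First portion linear, second portion square-root compressed in voltage.
    ramp = np.where(frac < 0.5,
                    v_knee * (frac / 0.5),
                    v_knee + (v_max - v_knee) * np.sqrt(np.clip((frac - 0.5) / 0.5, 0.0, None)))

    def convert(pixel_voltage):
        """Return the first ramp step whose voltage exceeds the pixel voltage."""
        return int(np.searchsorted(ramp, pixel_voltage))

    # Return lookup table: ramp step -> linearized digital code (0..4095 over v_max).
    lut = np.round(ramp / v_max * (n_steps - 1)).astype(int)

    for v in (0.1, 0.3, 0.9):
        code = convert(v)
        print(f"V={v:4.2f}  raw step={code:4d}  linearized={lut[code]:4d}")
    ```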

  4. Effect of correlation on covariate selection in linear and nonlinear mixed effect models.

    Science.gov (United States)

    Bonate, Peter L

    2017-01-01

    The effect of correlation among covariates on covariate selection was examined with linear and nonlinear mixed effect models. Demographic covariates were extracted from the National Health and Nutrition Examination Survey III database. Concentration-time profiles were Monte Carlo simulated where only one covariate affected apparent oral clearance (CL/F). A series of univariate covariate population pharmacokinetic models was fit to the data and compared with the reduced model without the covariate. The "best" covariate was identified using either the likelihood ratio test statistic or AIC. Weight and body surface area (calculated using the Gehan and George equation, 1970) were highly correlated (r = 0.98). Body surface area was often selected as a better covariate than weight, sometimes as high as 1 in 5 times, when weight was the covariate used in the data generating mechanism. In a second simulation, parent drug concentration and three metabolites were simulated from a thorough QT study and used as covariates in a series of univariate linear mixed effects models of ddQTc interval prolongation. The covariate with the largest significant LRT statistic was deemed the "best" predictor. When the metabolite was formation-rate limited and only parent concentrations affected ddQTc intervals, the metabolite was chosen as a better predictor as often as 1 in 5 times, depending on the slope of the relationship between parent concentrations and ddQTc intervals. A correlated covariate can be chosen as being a better predictor than another covariate in a linear or nonlinear population analysis by sheer correlation. These results explain why, for the same drug, different covariates may be identified in different analyses. Copyright © 2016 John Wiley & Sons, Ltd.
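
    The selection phenomenon is easy to reproduce in a stripped-down form; the sketch below uses ordinary least squares in place of the mixed-effects models, with invented simulation settings, and counts how often the highly correlated but causally irrelevant covariate wins on AIC.

    ```python
    # Toy Monte Carlo of the covariate-selection problem: the truth uses x1, but a
    # highly correlated x2 is sometimes picked because it fits the simulated data
    # slightly better. Plain OLS stands in for the full mixed-effects models.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n_subjects, n_sims = 100, 500
    wrong_pick = 0
    for _ in range(n_sims):
        x1 = rng.normal(0, 1, n_subjects)
        x2 = 0.98 * x1 + np.sqrt(1 - 0.98**2) * rng.normal(0, 1, n_subjects)  # r ~ 0.98
        y = 1.0 + 0.5 * x1 + rng.normal(0, 1, n_subjects)   # only x1 is in the truth
        aic1 = sm.OLS(y, sm.add_constant(x1)).fit().aic
        aic2 = sm.OLS(y, sm.add_constant(x2)).fit().aic
        wrong_pick += aic2 < aic1
    print(f"x2 (the wrong covariate) selected in {wrong_pick / n_sims:.1%} of runs")
    ```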

  5. Non-Linear Impact of the Marketing Mix on Brand Sales Performance

    Directory of Open Access Journals (Sweden)

    Rafael Barreiros Porto

    2015-01-01

    The pattern of impact that marketing activities exert on sales has not been well established in the literature. Many studies adopt restrictive linear perspectives, disregarding the empirical evidence. This work investigated the non-linear impact of the marketing mix on sales volume, on the number of consumers, and on purchases per consumer. A longitudinal panel study of brands and concurrent consumers was carried out. We analyzed 121 brands over 13 months, with 793 purchases per month made by consumers, using three generalized estimating equations. The results indicate that the marketing mix, especially branding and pricing, strongly impacts all dependent variables in a non-linear fashion, with good parameter fits. The joint effect generates economies of scale for brands, while, for each consumer, the joint effect gradually stimulates the purchase of larger quantities. The research demonstrates eight impact patterns of the marketing mix on the investigated indicators, with changes in their order and weight for brands and consumers.

  6. Linear and Non-Linear Optical Imaging of Cancer Cells with Silicon Nanoparticles

    Science.gov (United States)

    Tolstik, Elen; Osminkina, Liubov A.; Akimov, Denis; Gongalsky, Maksim B.; Kudryavtsev, Andrew A.; Timoshenko, Victor Yu.; Heintzmann, Rainer; Sivakov, Vladimir; Popp, Jürgen

    2016-01-01

    New approaches for visualisation of silicon nanoparticles (SiNPs) in cancer cells are realised by means of the linear and nonlinear optics in vitro. Aqueous colloidal solutions of SiNPs with sizes of about 10–40 nm obtained by ultrasound grinding of silicon nanowires were introduced into breast cancer cells (MCF-7 cell line). Further, the time-varying nanoparticles enclosed in cell structures were visualised by high-resolution structured illumination microscopy (HR-SIM) and micro-Raman spectroscopy. Additionally, the nonlinear optical methods of two-photon excited fluorescence (TPEF) and coherent anti-Stokes Raman scattering (CARS) with infrared laser excitation were applied to study the localisation of SiNPs in cells. Advantages of the nonlinear methods, such as rapid imaging, which prevents cells from overheating and larger penetration depth compared to the single-photon excited HR-SIM, are discussed. The obtained results reveal new perspectives of the multimodal visualisation and precise detection of the uptake of biodegradable non-toxic SiNPs by cancer cells and they are discussed in view of future applications for the optical diagnostics of cancer tumours. PMID:27626408

  7. Increasing Linear Dynamic Range of a CMOS Image Sensor

    Science.gov (United States)

    Pain, Bedabrata

    2007-01-01

    A generic design and a corresponding operating sequence have been developed for increasing the linear-response dynamic range of a complementary metal oxide/semiconductor (CMOS) image sensor. The design provides for linear calibrated dual-gain pixels that operate at high gain at a low signal level and at low gain at a signal level above a preset threshold. Unlike most prior designs for increasing dynamic range of an image sensor, this design does not entail any increase in noise (including fixed-pattern noise), decrease in responsivity or linearity, or degradation of photometric calibration. The figure is a simplified schematic diagram showing the circuit of one pixel and pertinent parts of its column readout circuitry. The conventional part of the pixel circuit includes a photodiode having a small capacitance, CD. The unconventional part includes an additional larger capacitance, CL, that can be connected to the photodiode via a transfer gate controlled in part by a latch. In the high-gain mode, the signal labeled TSR in the figure is held low through the latch, which also helps to adapt the gain on a pixel-by-pixel basis. Light must be coupled to the pixel through a microlens or by back illumination in order to obtain a high effective fill factor; this is necessary to ensure high quantum efficiency, a loss of which would minimize the efficacy of the dynamic- range-enhancement scheme. Once the level of illumination of the pixel exceeds the threshold, TSR is turned on, causing the transfer gate to conduct, thereby adding CL to the pixel capacitance. The added capacitance reduces the conversion gain, and increases the pixel electron-handling capacity, thereby providing an extension of the dynamic range. By use of an array of comparators also at the bottom of the column, photocharge voltages on sampling capacitors in each column are compared with a reference voltage to determine whether it is necessary to switch from the high-gain to the low-gain mode. Depending upon

  8. Short communication: Alteration of priors for random effects in Gaussian linear mixed model

    DEFF Research Database (Denmark)

    Vandenplas, Jérémie; Christensen, Ole Fredslund; Gengler, Nicholas

    2014-01-01

    Several applications (e.g., multiple-trait predictions of lactation yields, and Bayesian approaches integrating external information into genetic evaluations) need to alter both the mean and (co)variance of the prior distributions of random effects and, to our knowledge, most software packages available in the animal breeding community do not permit such alterations. Therefore, the aim of this study was to propose a method to alter both the mean and (co)variance of the prior multivariate normal distributions of random effects of linear mixed models while using currently available software packages. The proposed method was tested on simulated examples with 3 different software packages available in animal breeding. The examples showed the possibility of the proposed method to alter both the mean and (co)variance of the prior distributions with currently available software packages through the use of an extended data file and a user-supplied (co)variance matrix.

  9. A Mixed Integer Linear Programming Approach to Electrical Stimulation Optimization Problems.

    Science.gov (United States)

    Abouelseoud, Gehan; Abouelseoud, Yasmine; Shoukry, Amin; Ismail, Nour; Mekky, Jaidaa

    2018-02-01

    Electrical stimulation optimization is a challenging problem. Even when a single region is targeted for excitation, the problem remains a constrained multi-objective optimization problem. The constrained nature of the problem results from safety concerns while its multi-objectives originate from the requirement that non-targeted regions should remain unaffected. In this paper, we propose a mixed integer linear programming formulation that can successfully address the challenges facing this problem. Moreover, the proposed framework can conclusively check the feasibility of the stimulation goals. This helps researchers to avoid wasting time trying to achieve goals that are impossible under a chosen stimulation setup. The superiority of the proposed framework over alternative methods is demonstrated through simulation examples.
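
    A toy version of such a formulation is sketched below using the open-source PuLP modeller (the transfer coefficients, current limits and safety bounds are invented): binary variables switch electrodes on or off, continuous variables carry the currents, non-target regions are capped, and an infeasible set of goals is reported as such by the solver status.

    ```python
    # Toy MILP in the spirit described above: choose at most K active electrodes
    # (binary on/off) and their currents so the target region reaches a goal while
    # non-target regions stay below a limit. All numbers are made up.
    import numpy as np
    from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpStatus

    rng = np.random.default_rng(1)
    n_elec, K = 8, 3
    a_target = rng.uniform(0.2, 1.0, n_elec)        # field at target per unit current
    A_non = rng.uniform(0.0, 0.4, (4, n_elec))      # field at 4 non-target regions
    I_MAX, GOAL, LIMIT = 2.0, 1.5, 0.8

    prob = LpProblem("stimulation", LpMinimize)
    cur = [LpVariable(f"I_{i}", lowBound=0, upBound=I_MAX) for i in range(n_elec)]
    on = [LpVariable(f"z_{i}", cat="Binary") for i in range(n_elec)]
    prob += lpSum(cur)                                    # minimize total injected current
    for i in range(n_elec):
        prob += cur[i] <= I_MAX * on[i]                   # current only through active electrodes
    prob += lpSum(on) <= K                                # at most K electrodes
    prob += lpSum(float(a_target[i]) * cur[i] for i in range(n_elec)) >= GOAL
    for row in A_non:
        prob += lpSum(float(row[i]) * cur[i] for i in range(n_elec)) <= LIMIT
    prob.solve()
    print(LpStatus[prob.status], [v.value() for v in cur])
    ```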

  10. Warped linear mixed models for the genetic analysis of transformed phenotypes.

    Science.gov (United States)

    Fusi, Nicolo; Lippert, Christoph; Lawrence, Neil D; Stegle, Oliver

    2014-09-19

    Linear mixed models (LMMs) are a powerful and established tool for studying genotype-phenotype relationships. A limitation of the LMM is that the model assumes Gaussian distributed residuals, a requirement that rarely holds in practice. Violations of this assumption can lead to false conclusions and loss in power. To mitigate this problem, it is common practice to pre-process the phenotypic values to make them as Gaussian as possible, for instance by applying logarithmic or other nonlinear transformations. Unfortunately, different phenotypes require different transformations, and choosing an appropriate transformation is challenging and subjective. Here we present an extension of the LMM that estimates an optimal transformation from the observed data. In simulations and applications to real data from human, mouse and yeast, we show that using transformations inferred by our model increases power in genome-wide association studies and increases the accuracy of heritability estimation and phenotype prediction.
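
    The paper infers the warping function jointly with the mixed model; a much simpler relative of that idea, shown below as a sketch on synthetic data, estimates a Box-Cox transformation parameter by maximum likelihood with SciPy and checks how much the skewness drops.

    ```python
    # Estimate a Box-Cox warp from a skewed synthetic "phenotype" and compare
    # skewness before and after. This is a simplified stand-in for the joint
    # transformation/LMM inference described in the abstract.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    phenotype = np.exp(rng.normal(0.0, 0.8, size=1000))   # log-normal, clearly non-Gaussian

    transformed, lam = stats.boxcox(phenotype)            # ML estimate of the warp parameter
    print(f"estimated Box-Cox lambda = {lam:.3f} (near 0 means a log transform)")
    print("skewness before:", round(float(stats.skew(phenotype)), 2),
          " after:", round(float(stats.skew(transformed)), 2))
    ```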

  11. Non Linear Analyses for the Evaluation of Seismic Behavior of Mixed R.C.-Masonry Structures

    International Nuclear Information System (INIS)

    Liberatore, Laura; Tocci, Cesare; Masiani, Renato

    2008-01-01

    In this work the seismic behavior of masonry buildings with a mixed structural system, consisting of perimeter masonry walls and internal r.c. frames, is studied by means of nonlinear static (pushover) analyses. Several aspects, such as the distribution of the seismic action between masonry and r.c. elements, the local and global behavior of the structure, the failure of the connections and the attainment of the ultimate strength of the whole structure, are examined. The influence of some parameters, such as the masonry compressive and tensile strength, on the structural behavior is investigated. The numerical analyses are also repeated on a building in which the internal r.c. frames are replaced with masonry walls.

  12. A widefield fluorescence microscope with a linear image sensor for image cytometry of biospecimens: Considerations for image quality optimization

    Energy Technology Data Exchange (ETDEWEB)

    Hutcheson, Joshua A.; Majid, Aneeka A.; Powless, Amy J.; Muldoon, Timothy J., E-mail: tmuldoon@uark.edu [Department of Biomedical Engineering, University of Arkansas, 120 Engineering Hall, Fayetteville, Arkansas 72701 (United States)

    2015-09-15

    Linear image sensors have been widely used in numerous research and industry applications to provide continuous imaging of moving objects. Here, we present a widefield fluorescence microscope with a linear image sensor used to image translating objects for image cytometry. First, a calibration curve was characterized for a custom microfluidic chamber over a span of volumetric pump rates. Image data were also acquired using 15 μm fluorescent polystyrene spheres on a slide with a motorized translation stage in order to match linear translation speed with line exposure periods to preserve the image aspect ratio. Aspect ratios were then calculated after imaging to ensure quality control of image data. Fluorescent beads were imaged in suspension flowing through the microfluidics chamber being pumped by a mechanical syringe pump at 16 μl min⁻¹ with a line exposure period of 150 μs. The line period was selected to acquire images of fluorescent beads with a 40 dB signal-to-background ratio. A motorized translation stage was then used to transport conventional glass slides of stained cellular biospecimens. Whole blood collected from healthy volunteers was stained with 0.02% (w/v) proflavine hemisulfate and imaged to highlight leukocyte morphology with a 1.56 mm × 1.28 mm field of view (1540 ms total acquisition time). Oral squamous cells were also collected from healthy volunteers and stained with 0.01% (w/v) proflavine hemisulfate to demonstrate quantifiable subcellular features and an average nuclear to cytoplasmic ratio of 0.03 (n = 75), with a resolution of 0.31 μm pixel⁻¹.

  13. An Efficient Test for Gene-Environment Interaction in Generalized Linear Mixed Models with Family Data.

    Science.gov (United States)

    Mazo Lopera, Mauricio A; Coombes, Brandon J; de Andrade, Mariza

    2017-09-27

    Gene-environment (GE) interaction has important implications in the etiology of complex diseases that are caused by a combination of genetic factors and environmental variables. Several authors have developed GE analysis in the context of independent subjects or longitudinal data using a gene-set. In this paper, we propose to analyze GE interaction for discrete and continuous phenotypes in family studies by incorporating the relatedness among the relatives for each family into a generalized linear mixed model (GLMM) and by using a gene-based variance component test. In addition, we deal with collinearity problems arising from linkage disequilibrium among single nucleotide polymorphisms (SNPs) by considering their coefficients as random effects under the null model estimation. We show that the best linear unbiased predictor (BLUP) of such random effects in the GLMM is equivalent to the ridge regression estimator. This equivalence provides a simple method to estimate the ridge penalty parameter in comparison to other computationally-demanding estimation approaches based on cross-validation schemes. We evaluated the proposed test using simulation studies and applied it to real data from the Baependi Heart Study consisting of 76 families. Using our approach, we identified an interaction between BMI and the Peroxisome Proliferator Activated Receptor Gamma (PPARG) gene associated with diabetes.
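
    The BLUP/ridge equivalence invoked above can be checked numerically in a stripped-down setting (no fixed effects or kinship matrix, synthetic genotypes): for y = Zb + e with b ~ N(0, sigma_b^2 I), the BLUP of b equals the ridge estimate with penalty lambda = sigma_e^2 / sigma_b^2.

    ```python
    # Numerical check of the BLUP = ridge equivalence in a plain linear model.
    import numpy as np

    rng = np.random.default_rng(3)
    n, p = 200, 50
    Z = rng.integers(0, 3, size=(n, p)).astype(float)    # SNP-like 0/1/2 codes
    sigma_b, sigma_e = 0.2, 1.0
    b = rng.normal(0, sigma_b, p)
    y = Z @ b + rng.normal(0, sigma_e, n)

    lam = sigma_e**2 / sigma_b**2
    ridge = np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)

    # BLUP written with the marginal covariance of y
    V = sigma_b**2 * Z @ Z.T + sigma_e**2 * np.eye(n)
    blup = sigma_b**2 * Z.T @ np.linalg.solve(V, y)

    print("max |ridge - BLUP| =", np.max(np.abs(ridge - blup)))   # ~1e-12
    ```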

  14. Linear Optimization Techniques for Product-Mix of Paints Production in Nigeria

    Directory of Open Access Journals (Sweden)

    Sulaimon Olanrewaju Adebiyi

    2014-02-01

    Many paint producers in Nigeria do not operate a flexible production process, which is important for managing the use of resources for effective, optimal production. These goals can be achieved through the application of optimization models to resource allocation and utilisation. This research focuses on linear optimization for product-mix decisions, identifying which products to produce and in what quantities, in Nigerian paint production for better profit and optimum firm performance. The computational experiments in this research contain data and information on unit item costs, unit contribution margins, maximum resource capacities, individual product absorption rates and other constraints particular to each of the five products produced in the company used as a case study. In the data analysis, a linear programming model was employed with the aid of the LINDO 11 software. The results showed that only two of the five products under consideration are profitable. They also revealed the extent to which the company needs to reduce the costs incurred on the three other products before they become profitable to produce.
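
    A generic product-mix LP of the kind described can be written in a few lines; the sketch below uses invented margins and capacities, and the open-source PuLP/CBC stack in place of LINDO.

    ```python
    # Product-mix LP: maximize total contribution margin subject to shared
    # resource capacities. All figures are invented for illustration.
    from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpStatus, value

    products = ["gloss", "matte", "emulsion", "primer", "texture"]
    margin = {"gloss": 900, "matte": 700, "emulsion": 650, "primer": 400, "texture": 550}   # per batch
    resin_use = {"gloss": 5, "matte": 4, "emulsion": 3, "primer": 2, "texture": 4}          # units per batch
    mixer_hours = {"gloss": 2, "matte": 2, "emulsion": 1, "primer": 1, "texture": 3}
    RESIN_CAP, MIXER_CAP = 400, 180

    prob = LpProblem("paint_product_mix", LpMaximize)
    x = {p: LpVariable(f"batches_{p}", lowBound=0) for p in products}
    prob += lpSum(margin[p] * x[p] for p in products)
    prob += lpSum(resin_use[p] * x[p] for p in products) <= RESIN_CAP
    prob += lpSum(mixer_hours[p] * x[p] for p in products) <= MIXER_CAP
    prob.solve()
    print(LpStatus[prob.status], {p: x[p].value() for p in products}, value(prob.objective))
    ```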

  15. Mixed models, linear dependency, and identification in age-period-cohort models.

    Science.gov (United States)

    O'Brien, Robert M

    2017-07-20

    This paper examines the identification problem in age-period-cohort models that use either linear or categorically coded ages, periods, and cohorts or combinations of these parameterizations. These models are not identified using the traditional fixed effect regression model approach because of a linear dependency between the ages, periods, and cohorts. However, these models can be identified if the researcher introduces a single just identifying constraint on the model coefficients. The problem with such constraints is that the results can differ substantially depending on the constraint chosen. Somewhat surprisingly, age-period-cohort models that specify one or more of ages and/or periods and/or cohorts as random effects are identified. This is the case without introducing an additional constraint. I label this identification as statistical model identification and show how statistical model identification comes about in mixed models and why which effects are treated as fixed and which are treated as random can substantially change the estimates of the age, period, and cohort effects. Copyright © 2017 John Wiley & Sons, Ltd.
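
    The linear dependency at the heart of the identification problem is simply cohort = period − age, so a fixed-effects design matrix containing all three linear terms loses a rank. A tiny numerical check with made-up data:

    ```python
    # The APC identification problem in one line of algebra: cohort = period - age,
    # so the three linear terms are perfectly collinear.
    import numpy as np

    rng = np.random.default_rng(0)
    age = rng.integers(20, 80, size=500).astype(float)
    period = rng.integers(1980, 2020, size=500).astype(float)
    cohort = period - age                      # exact linear dependency

    X = np.column_stack([np.ones(500), age, period, cohort])
    print("columns:", X.shape[1], " rank:", np.linalg.matrix_rank(X))   # rank 3 < 4
    ```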

  16. Robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming.

    Science.gov (United States)

    Baran, Richard; Northen, Trent R

    2013-10-15

    Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe an automated procedure for the robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints and both the spectra interpretation and chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M + H]+ or [M - H]-) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.

  17. Spillways Scheduling for Flood Control of Three Gorges Reservoir Using Mixed Integer Linear Programming Model

    Directory of Open Access Journals (Sweden)

    Maoyuan Feng

    2014-01-01

    This study proposes a mixed integer linear programming (MILP) model to optimize the spillway scheduling for reservoir flood control. Unlike the conventional reservoir operation model, the proposed MILP model specifies the spillway status (including the number of spillways to be opened and the degree to which each spillway is opened) instead of the reservoir release, since the release is actually controlled by using the spillways. A piecewise linear approximation is used to formulate the relationship between the reservoir storage and the water release for a spillway, which should be opened/closed with a status depicted by a binary variable. The control order and symmetry rules of spillways are described and incorporated into the constraints for meeting the practical demand. Thus, a MILP model is set up to minimize the maximum reservoir storage. The General Algebraic Modeling System (GAMS) and IBM ILOG CPLEX Optimization Studio (CPLEX) software are used to find the optimal solution for the proposed MILP model. China's Three Gorges Reservoir, which has 80 spillways of five types, is selected as the case study. It is shown that the proposed model decreases the flood risk compared with the conventional operation and makes the operation more practical by specifying the spillway status directly.

  18. Identification of hydrometeor mixtures in polarimetric radar measurements and their linear de-mixing

    Science.gov (United States)

    Besic, Nikola; Ventura, Jordi Figueras i.; Grazioli, Jacopo; Gabella, Marco; Germann, Urs; Berne, Alexis

    2017-04-01

    The issue of hydrometeor mixtures affects radar sampling volumes without a clear dominant hydrometeor type. Containing a number of different hydrometeor types which significantly contribute to the polarimetric variables, these volumes are likely to occur in the vicinity of the melting layer and mainly at large distances from a given radar. Motivated by potential benefits for both quantitative and qualitative applications of dual-pol radar, we propose a method for the identification of hydrometeor mixtures and their subsequent linear de-mixing. This method is intrinsically related to our recently proposed semi-supervised approach for hydrometeor classification. The mentioned classification approach [1] performs labeling of radar sampling volumes by using as a criterion the Euclidean distance with respect to five-dimensional centroids, depicting nine hydrometeor classes. The positions of the centroids in the space formed by four radar moments and one external parameter (phase indicator) are derived through a technique of k-medoids clustering, applied on a selected representative set of radar observations, and coupled with statistical testing which introduces the assumed microphysical properties of the different hydrometeor types. Aside from a hydrometeor type label, each radar sampling volume is characterized by an entropy estimate, indicating the uncertainty of the classification. Here, we revisit the concept of entropy presented in [1], in order to emphasize its presumed potential for the identification of hydrometeor mixtures. The calculation of entropy is based on the estimate of the probability (p_i) that the observation corresponds to the hydrometeor type i (i = 1, ..., 9). The probability is derived from the Euclidean distance (d_i) of the observation to the centroid characterizing the hydrometeor type i. The parametrization of the d → p transform is conducted in a controlled environment, using synthetic polarimetric radar datasets. It ensures balanced
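
    The entropy idea can be sketched numerically as follows; the actual d → p transform is parametrized on synthetic radar data as described above, so a simple inverse-distance normalization is used here purely as a stand-in.

    ```python
    # Turn distances to the nine class centroids into pseudo-probabilities and
    # compute a normalized Shannon entropy as a mixture indicator.
    import numpy as np

    def mixing_entropy(distances):
        """Normalized Shannon entropy (0 = pure class, 1 = maximal mixing)."""
        d = np.asarray(distances, dtype=float)
        p = 1.0 / (d + 1e-9)          # stand-in d -> p transform (inverse distance)
        p /= p.sum()
        h = -np.sum(p * np.log(p))
        return h / np.log(len(p))

    print(mixing_entropy([0.2, 3.0, 3.5, 4.0, 4.2, 5.0, 5.5, 6.0, 7.0]))  # dominated by one class
    print(mixing_entropy([1.0, 1.1, 1.2, 3.0, 3.2, 3.5, 4.0, 4.5, 5.0]))  # likely a mixture
    ```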

  19. Separation method of heavy-ion particle image from gamma-ray mixed images using an imaging plate

    CERN Document Server

    Yamadera, A; Ohuchi, H; Nakamura, T; Fukumura, A

    1999-01-01

    We have developed a method for separating alpha-ray and gamma-ray images using an imaging plate (IP). The IP from which the first image had been read out by an image reader was annealed at 50 °C for 2 h in a drying oven, and a second image was then read out. It was found that the annealing ratio, k, defined as the ratio of the photo-stimulated luminescence (PSL) density at the first measurement to that at the second measurement, differs for alpha rays and gamma rays. By subtracting the second image multiplied by the factor k from the first image, the alpha-ray image was separated from the mixed alpha- and gamma-ray image. The method was applied to identify images of high-energy helium, carbon and neon particles using the heavy-ion medical accelerator HIMAC. (author)
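
    The subtraction step amounts to per-pixel linear unmixing of a two-component image; the sketch below reproduces it with invented annealing ratios and synthetic alpha and gamma images (the actual k values depend on the phosphor and the annealing conditions).

    ```python
    # Two-component unmixing implied by the annealing-ratio idea: with component
    # ratios k_alpha and k_gamma (PSL before / after annealing), subtracting
    # k_gamma times the second readout removes the gamma part up to a known scale.
    import numpy as np

    k_alpha, k_gamma = 4.0, 1.3                      # hypothetical annealing ratios
    rng = np.random.default_rng(0)
    alpha_true = rng.uniform(0, 50, (64, 64))
    gamma_true = rng.uniform(0, 200, (64, 64))

    first = alpha_true + gamma_true                        # first readout (mixed image)
    second = alpha_true / k_alpha + gamma_true / k_gamma   # readout after annealing

    alpha_sep = (first - k_gamma * second) / (1.0 - k_gamma / k_alpha)
    print("max reconstruction error:", np.max(np.abs(alpha_sep - alpha_true)))
    ```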

  20. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation

    International Nuclear Information System (INIS)

    Zhao, Zhanqi; Möller, Knut; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich

    2014-01-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton–Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in all examined indices among the four reconstruction algorithms (p > 0.2, Kruskal–Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that were validated for images from one reconstruction algorithm are also valid for other reconstruction algorithms. (paper)

  1. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    Science.gov (United States)

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images respectively: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in all examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that were validated for images from one reconstruction algorithm are also valid for other reconstruction algorithms.
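
    Two of the indices named above can be computed directly from a tidal image; the sketch below does so for a synthetic image, taking the center of gravity as the impedance-weighted centroid along one axis and the global inhomogeneity (GI) index as the summed absolute deviation from the median tidal impedance change over lung pixels divided by their sum, a commonly used definition.

    ```python
    # Center of gravity and global inhomogeneity index on a synthetic tidal EIT image.
    import numpy as np

    rng = np.random.default_rng(0)
    tidal = np.zeros((32, 32))
    tidal[8:24, 4:28] = rng.uniform(0.5, 1.5, (16, 24))    # crude "lung" region

    rows, cols = np.indices(tidal.shape)
    cog_row = (rows * tidal).sum() / tidal.sum()            # center of gravity (row direction)

    lung = tidal[tidal > 0]                                 # lung pixels only
    gi = np.sum(np.abs(lung - np.median(lung))) / np.sum(lung)
    print(f"center of gravity (row) = {cog_row:.2f}, GI index = {gi:.3f}")
    ```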

  2. Phase mixing of transverse oscillations in the linear and nonlinear regimes for IFR relativistic electron beam propagation

    International Nuclear Information System (INIS)

    Shokair, I.R.

    1991-01-01

    Phase mixing of transverse oscillations changes the nature of the ion hose instability from an absolute to a convective instability. The stronger the phase mixing, the faster an electron beam reaches equilibrium with the guiding ion channel. This is important for long distance propagation of relativistic electron beams, where it is desired that transverse oscillations phase mix within a few betatron wavelengths of injection and subsequently an equilibrium is reached with no further beam emittance growth. In the linear regime phase mixing is well understood and results in asymptotic decay of transverse oscillations as 1/Z² for a Gaussian beam and channel system, Z being the axial distance measured in betatron wavelengths. In the nonlinear regime (which is the likely mode of propagation for long-pulse beams) results of the spread mass model indicate that phase mixing is considerably weaker than in the linear regime. In this paper we consider this problem of phase mixing in the nonlinear regime. Results of the spread mass model will be shown along with a simple analysis of phase mixing for multiple oscillator models. Particle simulations also indicate that phase mixing is weaker in the nonlinear regime than in the linear regime. These results will also be shown. 3 refs., 4 figs.

  3. Response Surface Method and Linear Programming in the development of mixed nectar of acceptability high and minimum cost

    Directory of Open Access Journals (Sweden)

    Enrique López Calderón

    2012-06-01

    The aim of this study was to develop a mixed nectar with high acceptability and low cost. To obtain the mixed nectar, different amounts of passion fruit, sweet pepino and sucrose were considered, completing 100% with water, following a two-stage design: screening (using a 2^3 design with 4 center points) and optimization (using a 2^2 design with 2×2 star points and 4 center points); these stages allowed exploring a formulation with high acceptability. The Linear Programming technique was then used to minimize the cost of the high-acceptability nectar. As a result, a mixed nectar with optimal acceptability (a score of 7) was obtained when the formulation contained between 9 and 14% passion fruit, 4 and 5% sucrose, 73.5% sweet pepino juice and water to complete 100%. Linear Programming made it possible to reduce the cost of the mixed nectar with optimal acceptability to S/. 174 for a production of 1000 L/day.

  4. Fourier imaging of non-linear structure formation

    International Nuclear Information System (INIS)

    Brandbyge, Jacob; Hannestad, Steen

    2017-01-01

    We perform a Fourier space decomposition of the dynamics of non-linear cosmological structure formation in ΛCDM models. From N-body simulations involving only cold dark matter we calculate 3-dimensional non-linear density, velocity divergence and vorticity Fourier realizations, and use these to calculate the fully non-linear mode coupling integrals in the corresponding fluid equations. Our approach allows for a reconstruction of the amount of mode coupling between any two wavenumbers as a function of redshift. With our Fourier decomposition method we identify the transfer of power from larger to smaller scales, the stable clustering regime, the scale where vorticity becomes important, and the suppression of the non-linear divergence power spectrum as compared to linear theory. Our results can be used to improve and calibrate semi-analytical structure formation models.

  5. Fourier imaging of non-linear structure formation

    Energy Technology Data Exchange (ETDEWEB)

    Brandbyge, Jacob; Hannestad, Steen, E-mail: jacobb@phys.au.dk, E-mail: sth@phys.au.dk [Department of Physics and Astronomy, University of Aarhus, Ny Munkegade 120, DK-8000 Aarhus C (Denmark)

    2017-04-01

    We perform a Fourier space decomposition of the dynamics of non-linear cosmological structure formation in ΛCDM models. From N-body simulations involving only cold dark matter we calculate 3-dimensional non-linear density, velocity divergence and vorticity Fourier realizations, and use these to calculate the fully non-linear mode coupling integrals in the corresponding fluid equations. Our approach allows for a reconstruction of the amount of mode coupling between any two wavenumbers as a function of redshift. With our Fourier decomposition method we identify the transfer of power from larger to smaller scales, the stable clustering regime, the scale where vorticity becomes important, and the suppression of the non-linear divergence power spectrum as compared to linear theory. Our results can be used to improve and calibrate semi-analytical structure formation models.

  6. Linearized inversion frameworks toward high-resolution seismic imaging

    KAUST Repository

    Aldawood, Ali

    2016-01-01

    installed along the earth's surface or down boreholes. Seismic imaging is a powerful tool to map this reflected and scattered energy back to its subsurface scattering or reflection points. Seismic imaging is conventionally based on the single

  7. Mixed linear-nonlinear fault slip inversion: Bayesian inference of model, weighting, and smoothing parameters

    Science.gov (United States)

    Fukuda, J.; Johnson, K. M.

    2009-12-01

    Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is not strong a priori information on the geometry of the fault that produced the earthquake and the data is not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through combination of both analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress

  8. Optimal Airport Surface Traffic Planning Using Mixed-Integer Linear Programming

    Directory of Open Access Journals (Sweden)

    P. C. Roling

    2008-01-01

    Full Text Available We describe an ongoing research effort pertaining to the development of a surface traffic automation system that will help controllers to better coordinate surface traffic movements related to arrival and departure traffic. More specifically, we describe the concept for a taxi-planning support tool that aims to optimize the routing and scheduling of airport surface traffic in such a way as to deconflict the taxi plans while optimizing delay, total taxi-time, or some other airport efficiency metric. Certain input parameters related to resource demand, such as the expected landing times and the expected pushback times, are rather difficult to predict accurately. Due to uncertainty in the input data driving the taxi-planning process, the taxi-planning tool is designed such that it produces solutions that are robust to uncertainty. The taxi-planning concept presented herein, which is based on mixed-integer linear programming, is designed such that it is able to adapt to perturbations in these input conditions, as well as to account for failure in the actual execution of surface trajectories. The capabilities of the tool are illustrated in a simple hypothetical airport.

  9. Optimising the selection of food items for FFQs using Mixed Integer Linear Programming.

    Science.gov (United States)

    Gerdessen, Johanna C; Souverein, Olga W; van 't Veer, Pieter; de Vries, Jeanne Hm

    2015-01-01

    To support the selection of food items for FFQs in such a way that the amount of information on all relevant nutrients is maximised while the food list is as short as possible. Selection of the most informative food items to be included in FFQs was modelled as a Mixed Integer Linear Programming (MILP) model. The methodology was demonstrated for an FFQ with interest in energy, total protein, total fat, saturated fat, monounsaturated fat, polyunsaturated fat, total carbohydrates, mono- and disaccharides, dietary fibre and potassium. The food lists generated by the MILP model have good performance in terms of length, coverage and R² (explained variance) of all nutrients. MILP-generated food lists were 32-40% shorter than a benchmark food list, whereas their quality in terms of R² was similar to that of the benchmark. The results suggest that the MILP model makes the selection process faster, more standardised and transparent, and is especially helpful in coping with multiple nutrients. The complexity of the method does not increase with increasing number of nutrients. The generated food lists appear either shorter or provide more information than a food list generated without the MILP model.
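
    As a toy illustration of the kind of MILP formulation this record describes (not the authors' actual model), the sketch below selects the shortest food list that still covers at least 80% of each nutrient's total contribution, using scipy.optimize.milp; the contribution matrix, the 80% threshold and the data are invented.

```python
# Illustrative MILP sketch: shortest food list covering 80% of every nutrient.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

rng = np.random.default_rng(1)
n_nutrients, n_items = 10, 40
contrib = rng.random((n_nutrients, n_items))        # nutrient contribution of each food item

c = np.ones(n_items)                                # objective: minimise number of selected items
required = 0.8 * contrib.sum(axis=1)                # per-nutrient coverage requirement

# contrib @ x >= required  (no upper bound)
coverage = LinearConstraint(contrib, lb=required, ub=np.inf)

res = milp(c,
           constraints=[coverage],
           integrality=np.ones(n_items),            # all variables integer ...
           bounds=Bounds(0, 1))                     # ... and bounded to be binary

selected = np.flatnonzero(res.x > 0.5)
print(f"status={res.status}, food list length = {selected.size}")
```

    Binary decision variables are obtained here by declaring every variable integer and bounding it between 0 and 1.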

  10. Spatiotemporal chaos in mixed linear-nonlinear two-dimensional coupled logistic map lattice

    Science.gov (United States)

    Zhang, Ying-Qian; He, Yi; Wang, Xing-Yuan

    2018-01-01

    We investigate a new spatiotemporal dynamics in which the spatial coupling connections of a two-dimensional coupled map lattice (2DCML) mix different degrees of nonlinear chaotic maps. The coupling methods include linear neighborhood coupling and nonlinear chaotic-map coupling of lattices, and the original 2DCML system is only a special case of the proposed system. Criteria such as Kolmogorov-Sinai entropy density and universality, bifurcation diagrams, space-amplitude plots and snapshot pattern diagrams are provided in order to investigate the chaotic behaviors of the proposed system. Furthermore, we also investigate the parameter ranges over which the proposed system retains these features, in comparison with the 2DCML system and the MLNCML system. Theoretical analysis and computer simulation indicate that the proposed system exhibits a higher percentage of lattices in chaotic behavior for most parameters, fewer periodic windows in bifurcation diagrams and a larger range of parameters yielding chaotic behavior, which makes it more suitable for cryptography.
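
    For orientation, the sketch below iterates the plain two-dimensional coupled logistic map lattice with linear nearest-neighbour coupling, i.e. the special case this record refers to; the mixed linear-nonlinear coupling of the proposed system is not reproduced, and the lattice size, coupling strength and logistic parameter are illustrative.

```python
# Plain 2DCML sketch: logistic local dynamics, linear nearest-neighbour coupling.
import numpy as np

def logistic(x, mu=3.99):
    return mu * x * (1.0 - x)

def step(lattice, eps=0.3, mu=3.99):
    """One synchronous update of the 2D coupled logistic map lattice."""
    f = logistic(lattice, mu)
    neighbours = (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                  np.roll(f, 1, 1) + np.roll(f, -1, 1))
    return (1.0 - eps) * f + (eps / 4.0) * neighbours   # periodic boundaries

rng = np.random.default_rng(2)
lattice = rng.random((100, 100))
for _ in range(500):                                    # iterate towards the attractor
    lattice = step(lattice)
```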

  11. A turbulent mixing Reynolds stress model fitted to match linear interaction analysis predictions

    International Nuclear Information System (INIS)

    Griffond, J; Soulard, O; Souffland, D

    2010-01-01

    To predict the evolution of turbulent mixing zones developing in shock tube experiments with different gases, a turbulence model must be able to reliably evaluate the production due to the shock-turbulence interaction. In the limit of homogeneous weak turbulence, 'linear interaction analysis' (LIA) can be applied. This theory relies on Kovasznay's decomposition and allows the computation of waves transmitted or produced at the shock front. With assumptions about the composition of the upstream turbulent mixture, one can connect the second-order moments downstream from the shock front to those upstream through a transfer matrix, depending on shock strength. The purpose of this work is to provide a turbulence model that matches LIA results for the shock-turbulent mixture interaction. Reynolds stress models (RSMs) with additional equations for the density-velocity correlation and the density variance are considered here. The turbulent states upstream and downstream from the shock front calculated with these models can also be related through a transfer matrix, provided that the numerical implementation is based on a pseudo-pressure formulation. Then, the RSM should be modified in such a way that its transfer matrix matches the LIA one. Using the pseudo-pressure to introduce ad hoc production terms, we are able to obtain a close agreement between LIA and RSM matrices for any shock strength and thus improve the capabilities of the RSM.

  12. A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.

    Science.gov (United States)

    Röhl, Annika; Bockmayr, Alexander

    2017-01-03

    Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. The minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks in case there exist several ones. This allows identifying common reactions that are present in all subnetworks, and reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice. Thus it may become necessary to reduce the size of the network while keeping important functionalities. We propose a MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but even all minimum subnetworks satisfying the required properties.

  13. Mixed Integer Linear Programming based machine learning approach identifies regulators of telomerase in yeast.

    Science.gov (United States)

    Poos, Alexandra M; Maicher, André; Dieckmann, Anna K; Oswald, Marcus; Eils, Roland; Kupiec, Martin; Luke, Brian; König, Rainer

    2016-06-02

    Understanding telomere length maintenance mechanisms is central in cancer biology as their dysregulation is one of the hallmarks for immortalization of cancer cells. Important for this well-balanced control is the transcriptional regulation of the telomerase genes. We integrated Mixed Integer Linear Programming models into a comparative machine learning based approach to identify regulatory interactions that best explain the discrepancy of telomerase transcript levels in yeast mutants with deleted regulators showing aberrant telomere length, when compared to mutants with normal telomere length. We uncover novel regulators of telomerase expression, several of which affect histone levels or modifications. In particular, our results point to the transcription factors Sum1, Hst1 and Srb2 as being important for the regulation of EST1 transcription, and we validated the effect of Sum1 experimentally. We compiled our machine learning method leading to a user friendly package for R which can straightforwardly be applied to similar problems integrating gene regulator binding information and expression profiles of samples of e.g. different phenotypes, diseases or treatments. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. Application of mixed-integer linear programming in a car seats assembling process

    Directory of Open Access Journals (Sweden)

    Jorge Iván Perez Rave

    2011-12-01

    Full Text Available In this paper, a decision problem involving a car parts manufacturing company is modeled in order to prepare the company for an increase in demand. Mixed-integer linear programming was used with the following decision variables: creating a second shift, purchasing additional equipment, determining the required work force, and other alternatives involving new manners of work distribution that make it possible to separate certain operations from some workplaces and integrate them into others to minimize production costs. The model was solved using GAMS. The solution consisted of programming 19 workers under a configuration that merges two workplaces and separates some operations from some workplaces. The solution did not involve purchasing additional machinery or creating a second shift. As a result, the manufacturing paradigms that had been valid in the company for over 14 years were broken. This study allowed the company to increase its productivity and obtain significant savings. It also shows the benefits of joint work between academia and companies, and provides useful information for professors, students and engineers regarding production and continuous improvement.

  15. Automatic Design of Synthetic Gene Circuits through Mixed Integer Non-linear Programming

    Science.gov (United States)

    Huynh, Linh; Kececioglu, John; Köppe, Matthias; Tagkopoulos, Ilias

    2012-01-01

    Automatic design of synthetic gene circuits poses a significant challenge to synthetic biology, primarily due to the complexity of biological systems, and the lack of rigorous optimization methods that can cope with the combinatorial explosion as the number of biological parts increases. Current optimization methods for synthetic gene design rely on heuristic algorithms that are usually not deterministic, deliver sub-optimal solutions, and provide no guarantees on convergence or error bounds. Here, we introduce an optimization framework for the problem of part selection in synthetic gene circuits that is based on mixed integer non-linear programming (MINLP), which is a deterministic method that finds the globally optimal solution and guarantees convergence in finite time. Given a synthetic gene circuit, a library of characterized parts, and user-defined constraints, our method can find the optimal selection of parts that satisfies the constraints and best approximates the objective function given by the user. We evaluated the proposed method in the design of three synthetic circuits (a toggle switch, a transcriptional cascade, and a band detector), with both experimentally constructed and synthetic promoter libraries. Scalability and robustness analysis shows that the proposed framework scales well with the library size and the solution space. The work described here is a step towards a unifying, realistic framework for the automated design of biological circuits. PMID:22536398

  16. Spatial generalized linear mixed models of electric power outages due to hurricanes and ice storms

    International Nuclear Information System (INIS)

    Liu Haibin; Davidson, Rachel A.; Apanasovich, Tatiyana V.

    2008-01-01

    This paper presents new statistical models that predict the number of hurricane- and ice storm-related electric power outages likely to occur in each 3 kmx3 km grid cell in a region. The models are based on a large database of recent outages experienced by three major East Coast power companies in six hurricanes and eight ice storms. A spatial generalized linear mixed modeling (GLMM) approach was used in which spatial correlation is incorporated through random effects. Models were fitted using a composite likelihood approach and the covariance matrix was estimated empirically. A simulation study was conducted to test the model estimation procedure, and model training, validation, and testing were done to select the best models and assess their predictive power. The final hurricane model includes number of protective devices, maximum gust wind speed, hurricane indicator, and company indicator covariates. The final ice storm model includes number of protective devices, ice thickness, and ice storm indicator covariates. The models should be useful for power companies as they plan for future storms. The statistical modeling approach offers a new way to assess the reliability of electric power and other infrastructure systems in extreme events

  17. A multiple objective mixed integer linear programming model for power generation expansion planning

    Energy Technology Data Exchange (ETDEWEB)

    Antunes, C. Henggeler; Martins, A. Gomes [INESC-Coimbra, Coimbra (Portugal); Universidade de Coimbra, Dept. de Engenharia Electrotecnica, Coimbra (Portugal); Brito, Isabel Sofia [Instituto Politecnico de Beja, Escola Superior de Tecnologia e Gestao, Beja (Portugal)

    2004-03-01

    Power generation expansion planning inherently involves multiple, conflicting and incommensurate objectives. Therefore, mathematical models become more realistic if distinct evaluation aspects, such as cost and environmental concerns, are explicitly considered as objective functions rather than being encompassed by a single economic indicator. With the aid of multiple objective models, decision makers may grasp the conflicting nature and the trade-offs among the different objectives in order to select satisfactory compromise solutions. This paper presents a multiple objective mixed integer linear programming model for power generation expansion planning that allows the consideration of modular expansion capacity values of supply-side options. This characteristic of the model avoids the well-known problem associated with continuous capacity values that usually have to be discretized in a post-processing phase without feedback on the nature and importance of the changes in the attributes of the obtained solutions. Demand-side management (DSM) is also considered an option in the planning process, assuming there is a sufficiently large portion of the market under franchise conditions. As DSM full costs are accounted in the model, including lost revenues, it is possible to perform an evaluation of the rate impact in order to further inform the decision process (Author)

  18. Further Improvements to Linear Mixed Models for Genome-Wide Association Studies

    Science.gov (United States)

    Widmer, Christian; Lippert, Christoph; Weissbrod, Omer; Fusi, Nicolo; Kadie, Carl; Davidson, Robert; Listgarten, Jennifer; Heckerman, David

    2014-11-01

    We examine improvements to the linear mixed model (LMM) that better correct for population structure and family relatedness in genome-wide association studies (GWAS). LMMs rely on the estimation of a genetic similarity matrix (GSM), which encodes the pairwise similarity between every two individuals in a cohort. These similarities are estimated from single nucleotide polymorphisms (SNPs) or other genetic variants. Traditionally, all available SNPs are used to estimate the GSM. In empirical studies across a wide range of synthetic and real data, we find that modifications to this approach improve GWAS performance as measured by type I error control and power. Specifically, when only population structure is present, a GSM constructed from SNPs that well predict the phenotype in combination with principal components as covariates controls type I error and yields more power than the traditional LMM. In any setting, with or without population structure or family relatedness, a GSM consisting of a mixture of two component GSMs, one constructed from all SNPs and another constructed from SNPs that well predict the phenotype again controls type I error and yields more power than the traditional LMM. Software implementing these improvements and the experimental comparisons are available at http://microsoft.com/science.
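
    A rough sketch of the two ingredients highlighted in this record, under simplified assumptions: a genetic similarity matrix (GSM) computed from column-standardised SNPs, and a two-component mixture of a GSM built from all SNPs with one built from phenotype-predictive SNPs. The genotype data, the "predictive" subset and the mixing weight are all invented.

```python
# Sketch of GSM construction and a two-component GSM mixture.
import numpy as np

rng = np.random.default_rng(3)
n_individuals, n_snps = 200, 5000
genotypes = rng.integers(0, 3, size=(n_individuals, n_snps)).astype(float)  # 0/1/2 allele counts

def gsm(g):
    """Genetic similarity matrix from column-standardised genotypes."""
    z = (g - g.mean(axis=0)) / g.std(axis=0)
    return z @ z.T / g.shape[1]

k_all = gsm(genotypes)
predictive = rng.choice(n_snps, size=500, replace=False)   # stand-in for SNPs that predict the phenotype
k_pred = gsm(genotypes[:, predictive])

w = 0.5                                                    # mixing weight, a tuning parameter
k_mixture = w * k_all + (1.0 - w) * k_pred                 # two-component GSM used in the LMM
```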

  19. Optical colour image watermarking based on phase-truncated linear canonical transform and image decomposition

    Science.gov (United States)

    Su, Yonggang; Tang, Chen; Li, Biyuan; Lei, Zhenkun

    2018-05-01

    This paper presents a novel optical colour image watermarking scheme based on phase-truncated linear canonical transform (PT-LCT) and image decomposition (ID). In this proposed scheme, a PT-LCT-based asymmetric cryptography is designed to encode the colour watermark into a noise-like pattern, and an ID-based multilevel embedding method is constructed to embed the encoded colour watermark into a colour host image. The PT-LCT-based asymmetric cryptography, which can be optically implemented by double random phase encoding with a quadratic phase system, can provide a higher security to resist various common cryptographic attacks. And the ID-based multilevel embedding method, which can be digitally implemented by a computer, can make the information of the colour watermark disperse better in the colour host image. The proposed colour image watermarking scheme possesses high security and can achieve a higher robustness while preserving the watermark’s invisibility. The good performance of the proposed scheme has been demonstrated by extensive experiments and comparison with other relevant schemes.

  20. Longitudinal mathematics development of students with learning disabilities and students without disabilities: a comparison of linear, quadratic, and piecewise linear mixed effects models.

    Science.gov (United States)

    Kohli, Nidhi; Sullivan, Amanda L; Sadeh, Shanna; Zopluoglu, Cengiz

    2015-04-01

    Effective instructional planning and intervening rely heavily on accurate understanding of students' growth, but relatively few researchers have examined mathematics achievement trajectories, particularly for students with special needs. We applied linear, quadratic, and piecewise linear mixed-effects models to identify the best-fitting model for mathematics development over elementary and middle school and to ascertain differences in growth trajectories of children with learning disabilities relative to their typically developing peers. The analytic sample of 2150 students was drawn from the Early Childhood Longitudinal Study - Kindergarten Cohort, a nationally representative sample of United States children who entered kindergarten in 1998. We first modeled students' mathematics growth via multiple mixed-effects models to determine the best fitting model of 9-year growth and then compared the trajectories of students with and without learning disabilities. Results indicate that the piecewise linear mixed-effects model captured best the functional form of students' mathematics trajectories. In addition, there were substantial achievement gaps between students with learning disabilities and students with no disabilities, and their trajectories differed such that students without disabilities progressed at a higher rate than their peers who had learning disabilities. The results underscore the need for further research to understand how to appropriately model students' mathematics trajectories and the need for attention to mathematics achievement gaps in policy. Copyright © 2015 Society for the Study of School Psychology. Published by Elsevier Ltd. All rights reserved.
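
    The piecewise linear mixed-effects model described in this record can be written as a linear mixed model with a "hinge" term that activates after a knot. The hypothetical sketch below fits such a model with statsmodels (the study itself is not tied to this software); the column names, knot location and simulated data are assumptions.

```python
# Piecewise linear mixed-effects growth model: random intercept and slope per student.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
students, waves, knot = 300, 7, 3.0
rows = []
for sid in range(students):
    b0, b1 = rng.normal(50, 8), rng.normal(6, 1.5)          # student-specific intercept and slope
    for t in range(waves):
        post = max(t - knot, 0.0)                           # slope change after the knot
        score = b0 + b1 * t - 2.0 * post + rng.normal(0, 3)
        rows.append({"id": sid, "time": t, "time_post": post, "math": score})
data = pd.DataFrame(rows)

model = smf.mixedlm("math ~ time + time_post", data,
                    groups=data["id"], re_formula="~time")  # random intercept + slope
fit = model.fit()
print(fit.summary())
```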

  1. Hybrid simulation using mixed reality for interventional ultrasound imaging training.

    Science.gov (United States)

    Freschi, C; Parrini, S; Dinelli, N; Ferrari, M; Ferrari, V

    2015-07-01

    Ultrasound (US) imaging offers advantages over other imaging modalities and has become the most widespread modality for many diagnostic and interventional procedures. However, traditional 2D US requires a long training period, especially to learn how to manipulate the probe. A hybrid interactive system based on mixed reality was designed, implemented and tested for hand-eye coordination training in diagnostic and interventional US. A hybrid simulator was developed integrating a physical US phantom and a software application with a 3D virtual scene. In this scene, a 3D model of the probe with its corresponding scan plane is coherently displayed with a 3D representation of the phantom's internal structures. An evaluation study of the diagnostic module was performed by recruiting thirty-six novices and four experts. The performance of novices using the hybrid simulator (HG) was compared with that of novices using the physical simulator (PG). After the training session, each novice was required to visualize a particular target structure. The four experts completed a 5-point Likert scale questionnaire. Seventy-eight percent of the HG novices successfully visualized the target structure, whereas only 45% of the PG novices reached this goal. The mean scores from the questionnaires were 5.00 for usefulness, 4.25 for ease of use, 4.75 for 3D perception, and 3.25 for phantom realism. The hybrid US training simulator provides ease of use and is effective as a hand-eye coordination teaching tool. Mixed reality can improve US probe manipulation training.

  2. The linearized inversion of the generalized interferometric multiple imaging

    KAUST Repository

    Aldawood, Ali; Hoteit, Ibrahim; Alkhalifah, Tariq Ali

    2016-01-01

    such as vertical and nearly vertical fault planes, and salt flanks. To image first-order internal multiple, the GIMI framework consists of three datuming steps, followed by applying the zero-lag cross-correlation imaging condition. However, the standard GIMI

  3. Multivariate mixed linear model analysis of longitudinal data: an information-rich statistical technique for analyzing disease resistance data

    Science.gov (United States)

    The mixed linear model (MLM) is currently among the most advanced and flexible statistical modeling techniques and its use in tackling problems in plant pathology has begun surfacing in the literature. The longitudinal MLM is a multivariate extension that handles repeatedly measured data, such as r...

  4. Deliberate practice predicts performance throughout time in adolescent chess players and dropouts: A linear mixed models analysis.

    NARCIS (Netherlands)

    de Bruin, A.B.H.; Smits, N.; Rikers, R.M.J.P.; Schmidt, H.G.

    2008-01-01

    In this study, the longitudinal relation between deliberate practice and performance in chess was examined using a linear mixed models analysis. The practice activities and performance ratings of young elite chess players, who were either in, or had dropped out of the Dutch national chess training,

  5. glmmTMB balances speed and flexibility among packages for Zero-inflated Generalized Linear Mixed Modeling

    DEFF Research Database (Denmark)

    Brooks, Mollie Elizabeth; Kristensen, Kasper; van Benthem, Koen J.

    2017-01-01

    Count data can be analyzed using generalized linear mixed models when observations are correlated in ways that require random effects. However, count data are often zero-inflated, containing more zeros than would be expected from the typical error distributions. We present a new package, glmm...

  6. Node-Splitting Generalized Linear Mixed Models for Evaluation of Inconsistency in Network Meta-Analysis.

    Science.gov (United States)

    Yu-Kang, Tu

    2016-12-01

    Network meta-analysis for multiple treatment comparisons has been a major development in evidence synthesis methodology. The validity of a network meta-analysis, however, can be threatened by inconsistency in evidence within the network. One particular issue of inconsistency is how to directly evaluate the inconsistency between direct and indirect evidence with regard to the effects difference between two treatments. A Bayesian node-splitting model was first proposed and a similar frequentist side-splitting model has been put forward recently. Yet, assigning the inconsistency parameter to one or the other of the two treatments or splitting the parameter symmetrically between the two treatments can yield different results when multi-arm trials are involved in the evaluation. We aimed to show that a side-splitting model can be viewed as a special case of design-by-treatment interaction model, and different parameterizations correspond to different design-by-treatment interactions. We demonstrated how to evaluate the side-splitting model using the arm-based generalized linear mixed model, and an example data set was used to compare results from the arm-based models with those from the contrast-based models. The three parameterizations of side-splitting make slightly different assumptions: the symmetrical method assumes that both treatments in a treatment contrast contribute to inconsistency between direct and indirect evidence, whereas the other two parameterizations assume that only one of the two treatments contributes to this inconsistency. With this understanding in mind, meta-analysts can then make a choice about how to implement the side-splitting method for their analysis. Copyright © 2016 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  7. lme4qtl: linear mixed models with flexible covariance structure for genetic studies of related individuals.

    Science.gov (United States)

    Ziyatdinov, Andrey; Vázquez-Santiago, Miquel; Brunel, Helena; Martinez-Perez, Angel; Aschard, Hugues; Soria, Jose Manuel

    2018-02-27

    Quantitative trait locus (QTL) mapping in genetic data often involves analysis of correlated observations, which need to be accounted for to avoid false association signals. This is commonly performed by modeling such correlations as random effects in linear mixed models (LMMs). The R package lme4 is a well-established tool that implements major LMM features using sparse matrix methods; however, it is not fully adapted for QTL mapping association and linkage studies. In particular, two LMM features are lacking in the base version of lme4: the definition of random effects by custom covariance matrices; and parameter constraints, which are essential in advanced QTL models. Apart from applications in linkage studies of related individuals, such functionalities are of high interest for association studies in situations where multiple covariance matrices need to be modeled, a scenario not covered by many genome-wide association study (GWAS) software. To address the aforementioned limitations, we developed a new R package lme4qtl as an extension of lme4. First, lme4qtl contributes new models for genetic studies within a single tool integrated with lme4 and its companion packages. Second, lme4qtl offers a flexible framework for scenarios with multiple levels of relatedness and becomes efficient when covariance matrices are sparse. We showed the value of our package using real family-based data in the Genetic Analysis of Idiopathic Thrombophilia 2 (GAIT2) project. Our software lme4qtl enables QTL mapping models with a versatile structure of random effects and efficient computation for sparse covariances. lme4qtl is available at https://github.com/variani/lme4qtl .

  8. Non-linear optical imaging – Introduction and pharmaceutical applications

    NARCIS (Netherlands)

    Fussell, A.L.; Isomaki, Antti; Strachan, Clare J.

    2013-01-01

    Nonlinear optical imaging is an emerging technology with much potential in pharmaceutical analysis. The technique encompasses a range of optical phenomena, including coherent anti-Stokes Raman scattering (CARS), second harmonic generation (SHG), and two-photon excited fluorescence (TPEF). The

  9. Monte Carlo simulation of OLS and linear mixed model inference of phenotypic effects on gene expression.

    Science.gov (United States)

    Walker, Jeffrey A

    2016-01-01

    downward biased standard errors and inflated coefficients. The Monte Carlo simulation of error rates shows highly inflated Type I error from the GLS test and slightly inflated Type I error from the GEE test. By contrast, Type I error for all OLS tests are at the nominal level. The permutation F -tests have ∼1.9X the power of the other OLS tests. This increased power comes at a cost of high sign error (∼10%) if tested on small effects. The apparently replicated pattern of well-being effects on gene expression is most parsimoniously explained as "correlated noise" due to the geometry of multiple regression. The GLS for fixed effects with correlated error, or any linear mixed model for estimating fixed effects in designs with many repeated measures or outcomes, should be used cautiously because of the inflated Type I and M error. By contrast, all OLS tests perform well, and the permutation F -tests have superior performance, including moderate power for very small effects.
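
    A generic sketch of the kind of error-rate check discussed in this record: a Monte Carlo estimate of the empirical Type I error of the OLS slope t-test when the null hypothesis is true. The sample size, replicate count and alpha are arbitrary, and the GLS/GEE comparisons of the study are not reproduced.

```python
# Monte Carlo check of OLS Type I error under the null hypothesis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n, n_sim, alpha = 40, 5000, 0.05
rejections = 0
for _ in range(n_sim):
    x = rng.normal(size=n)
    y = rng.normal(size=n)                      # independent of x: the null hypothesis holds
    slope, intercept, r, p_value, se = stats.linregress(x, y)
    rejections += p_value < alpha

print(f"empirical Type I error: {rejections / n_sim:.3f} (nominal {alpha})")
```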

  10. Monte Carlo simulation of OLS and linear mixed model inference of phenotypic effects on gene expression

    Directory of Open Access Journals (Sweden)

    Jeffrey A. Walker

    2016-10-01

    distributions suggest that the GLS results in downward biased standard errors and inflated coefficients. The Monte Carlo simulation of error rates shows highly inflated Type I error from the GLS test and slightly inflated Type I error from the GEE test. By contrast, Type I error for all OLS tests are at the nominal level. The permutation F-tests have ∼1.9X the power of the other OLS tests. This increased power comes at a cost of high sign error (∼10% if tested on small effects. Discussion The apparently replicated pattern of well-being effects on gene expression is most parsimoniously explained as “correlated noise” due to the geometry of multiple regression. The GLS for fixed effects with correlated error, or any linear mixed model for estimating fixed effects in designs with many repeated measures or outcomes, should be used cautiously because of the inflated Type I and M error. By contrast, all OLS tests perform well, and the permutation F-tests have superior performance, including moderate power for very small effects.

  11. Modelling lactation curve for milk fat to protein ratio in Iranian buffaloes (Bubalus bubalis) using non-linear mixed models.

    Science.gov (United States)

    Hossein-Zadeh, Navid Ghavi

    2016-08-01

    The aim of this study was to compare seven non-linear mathematical models (Brody, Wood, Dhanoa, Sikka, Nelder, Rook and Dijkstra) to examine their efficiency in describing the lactation curves for milk fat to protein ratio (FPR) in Iranian buffaloes. Data were 43 818 test-day records for FPR from the first three lactations of Iranian buffaloes which were collected on 523 dairy herds in the period from 1996 to 2012 by the Animal Breeding Center of Iran. Each model was fitted to monthly FPR records of buffaloes using the non-linear mixed model procedure (PROC NLMIXED) in SAS and the parameters were estimated. The models were tested for goodness of fit using Akaike's information criterion (AIC), Bayesian information criterion (BIC) and log maximum likelihood (-2 Log L). The Nelder and Sikka mixed models provided the best fit of lactation curve for FPR in the first and second lactations of Iranian buffaloes, respectively. However, Wood, Dhanoa and Sikka mixed models provided the best fit of lactation curve for FPR in the third parity buffaloes. Evaluation of first, second and third lactation features showed that all models, except for Dijkstra model in the third lactation, under-predicted test time at which daily FPR was minimum. On the other hand, minimum FPR was over-predicted by all equations. Evaluation of the different models used in this study indicated that non-linear mixed models were sufficient for fitting test-day FPR records of Iranian buffaloes.
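
    As a simplified illustration of one curve named in this record, the sketch below fits Wood's incomplete gamma function to synthetic monthly test-day data with scipy's curve_fit. This is a plain non-linear least-squares fit, not the non-linear mixed model (PROC NLMIXED) of the study, and the data and starting values are invented.

```python
# Fitting the Wood lactation curve y = a * t**b * exp(-c*t) to synthetic data.
import numpy as np
from scipy.optimize import curve_fit

def wood(t, a, b, c):
    """Wood lactation curve: y = a * t**b * exp(-c*t)."""
    return a * np.power(t, b) * np.exp(-c * t)

months = np.arange(1, 11, dtype=float)                     # test-day months 1..10
rng = np.random.default_rng(6)
fpr = wood(months, 1.2, -0.15, 0.02) + rng.normal(0, 0.02, months.size)  # noisy fat:protein ratio

params, cov = curve_fit(wood, months, fpr, p0=(1.0, -0.1, 0.05))
a, b, c = params
print(f"a={a:.3f}, b={b:.3f}, c={c:.3f}")
```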

  12. Research on geometric rectification of the Large FOV Linear Array Whiskbroom Image

    Science.gov (United States)

    Liu, Dia; Liu, Hui-tong; Dong, Hao; Liu, Xiao-bo

    2015-08-01

    To solve the geometric distortion problem of large-FOV linear array whiskbroom images, a multi-center central projection collinearity equation model was established that accounts for the whiskbroom and linear CCD imaging characteristics, and the principle of the distortion was analyzed. Based on a rectification method using POS data, we introduced the angular position sensor data of the servo system and restored the geometric imaging process exactly. An indirect rectification scheme aimed at linear array imaging with a best-scanline searching method was adopted, and the matrices for calculating the exterior orientation elements were redesigned. We improved two iterative algorithms for this device and compared and analyzed them. Rectification of images from an airborne imaging experiment showed the desired effect.

  13. System and method for generating 3D images of non-linear properties of rock formation using surface seismic or surface to borehole seismic or both

    Science.gov (United States)

    Vu, Cung Khac; Nihei, Kurt Toshimi; Johnson, Paul A.; Guyer, Robert A.; Ten Cate, James A.; Le Bas, Pierre-Yves; Larmat, Carene S.

    2016-06-07

    A system and method of characterizing properties of a medium from a non-linear interaction include generating, by first and second acoustic sources disposed on a surface of the medium on a first line, first and second acoustic waves. The first and second acoustic sources are controllable such that trajectories of the first and second acoustic waves intersect in a mixing zone within the medium. The method further includes receiving, by a receiver positioned in a plane containing the first and second acoustic sources, a third acoustic wave generated by a non-linear mixing process from the first and second acoustic waves in the mixing zone; and creating a first two-dimensional image of non-linear properties or a first ratio of compressional velocity and shear velocity, or both, of the medium in a first plane generally perpendicular to the surface and containing the first line, based on the received third acoustic wave.

  14. Sparse PDF maps for non-linear multi-resolution image operations

    KAUST Repository

    Hadwiger, Markus; Sicat, Ronell Barrera; Beyer, Johanna; Krü ger, Jens J.; Mö ller, Torsten

    2012-01-01

    feasible for gigapixel images, while enabling direct evaluation of a variety of non-linear operators from the same representation. We illustrate this versatility for antialiased color mapping, O(n) local Laplacian filters, smoothed local histogram filters

  15. Automated, non-linear registration between 3-dimensional brain map and medical head image

    International Nuclear Information System (INIS)

    Mizuta, Shinobu; Urayama, Shin-ichi; Zoroofi, R.A.; Uyama, Chikao

    1998-01-01

    In this paper, we propose an automated, non-linear registration method between a 3-dimensional medical head image and a brain map in order to efficiently extract the regions of interest. In our method, the input 3-dimensional image is registered to a reference image extracted from a brain map. The problems to be solved are an automated, non-linear image matching procedure and a cost function that represents the similarity between the two images. Non-linear matching is carried out by dividing the input image into connected partial regions, transforming the partial regions while preserving connectivity among adjacent regions, evaluating the image similarity between the transformed regions of the input image and the corresponding regions of the reference image, and iteratively searching for the optimal transformation of the partial regions. In order to measure the voxelwise similarity of multi-modal images, a cost function based on mutual information is introduced. Experiments using MR images demonstrated the effectiveness of the proposed method. (author)
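
    The mutual-information cost function mentioned in this record can be estimated from a joint grey-level histogram. The minimal sketch below does exactly that for two images; the bin count and test images are illustrative, and the surrounding registration loop (region subdivision and transformation search) is not shown.

```python
# Mutual information I(A;B) estimated from a joint histogram of two images.
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two images from a joint grey-level histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nonzero = pxy > 0
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

rng = np.random.default_rng(7)
reference = rng.random((128, 128))
shifted = np.roll(reference, 5, axis=0) + rng.normal(0, 0.05, (128, 128))
print(f"MI = {mutual_information(reference, shifted):.3f}")
```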

  16. A D-vine copula-based model for repeated measurements extending linear mixed models with homogeneous correlation structure.

    Science.gov (United States)

    Killiches, Matthias; Czado, Claudia

    2018-03-22

    We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum-likelihood a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.

  17. Analysis of 24-Hour Ambulatory Blood Pressure Monitoring Data using Orthonormal Polynomials in the Linear Mixed Model

    OpenAIRE

    Edwards, Lloyd J.; Simpson, Sean L.

    2010-01-01

    The use of 24-hour ambulatory blood pressure monitoring (ABPM) in clinical practice and observational epidemiological studies has grown considerably in the past 25 years. ABPM is a very effective technique for assessing biological, environmental, and drug effects on blood pressure. In order to enhance the effectiveness of ABPM for clinical and observational research studies via analytical and graphical results, developing alternative data analysis approaches are important. The linear mixed mo...

  18. Linear mixed-effects models for within-participant psychology experiments: an introductory tutorial and free, graphical user interface (LMMgui).

    Science.gov (United States)

    Magezi, David A

    2015-01-01

    Linear mixed-effects models (LMMs) are increasingly being used for data analysis in cognitive neuroscience and experimental psychology, where within-participant designs are common. The current article provides an introductory review of the use of LMMs for within-participant data analysis and describes a free, simple, graphical user interface (LMMgui). LMMgui uses the package lme4 (Bates et al., 2014a,b) in the statistical environment R (R Core Team).

  19. Photogrammetric Processing of Planetary Linear Pushbroom Images Based on Approximate Orthophotos

    Science.gov (United States)

    Geng, X.; Xu, Q.; Xing, S.; Hou, Y. F.; Lan, C. Z.; Zhang, J. J.

    2018-04-01

    Efficiently producing planetary mapping products from orbital remote sensing images remains a challenging task. Photogrammetric processing of planetary stereo images suffers from several difficulties, such as the lack of ground control information and of informative features; among these, image matching is the most difficult step in planetary photogrammetry. This paper designs a photogrammetric processing framework for planetary remote sensing images based on approximate orthophotos. Both tie-point extraction for bundle adjustment and dense image matching for generating a digital terrain model (DTM) are performed on approximate orthophotos. Since most planetary remote sensing images are acquired by linear scanner cameras, we mainly deal with linear pushbroom images. In order to improve the computational efficiency of orthophoto generation and coordinate transformation, a fast back-projection algorithm for linear pushbroom images is introduced. Moreover, an iteratively refined DTM and orthophoto scheme was adopted in the DTM generation process, which helps to reduce the search space of image matching and improve the matching accuracy of conjugate points. With the advantages of approximate orthophotos, the matching results of planetary remote sensing images can be greatly improved. We tested the proposed approach with Mars Express (MEX) High Resolution Stereo Camera (HRSC) and Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) images. The preliminary experimental results demonstrate the feasibility of the proposed approach.

  20. Linear C32H66 hydrocarbon in the mixed state with C10H22 ...

    Indian Academy of Sciences (India)

    Unknown

    S R Research Laboratory for Studies in Crystallization Phenomena, 10-1-96, ... mixed state with certain shorter chain length homologues (SMOLLENCs), estimated ... Methods. Five hydrocarbons of even carbon numbers, C10, C12, C14, C16 ...

  1. Chaos-based partial image encryption scheme based on linear fractional and lifting wavelet transforms

    Science.gov (United States)

    Belazi, Akram; Abd El-Latif, Ahmed A.; Diaconu, Adrian-Viorel; Rhouma, Rhouma; Belghith, Safya

    2017-01-01

    In this paper, a new chaos-based partial image encryption scheme based on Substitution-boxes (S-box) constructed by chaotic system and Linear Fractional Transform (LFT) is proposed. It encrypts only the requisite parts of the sensitive information in Lifting-Wavelet Transform (LWT) frequency domain based on hybrid of chaotic maps and a new S-box. In the proposed encryption scheme, the characteristics of confusion and diffusion are accomplished in three phases: block permutation, substitution, and diffusion. Then, we used dynamic keys instead of fixed keys used in other approaches, to control the encryption process and make any attack impossible. The new S-box was constructed by mixing of chaotic map and LFT to insure the high confidentiality in the inner encryption of the proposed approach. In addition, the hybrid compound of S-box and chaotic systems strengthened the whole encryption performance and enlarged the key space required to resist the brute force attacks. Extensive experiments were conducted to evaluate the security and efficiency of the proposed approach. In comparison with previous schemes, the proposed cryptosystem scheme showed high performances and great potential for prominent prevalence in cryptographic applications.

  2. An A Posteriori Error Analysis of Mixed Finite Element Galerkin Approximations to Second Order Linear Parabolic Problems

    KAUST Repository

    Memon, Sajid; Nataraj, Neela; Pani, Amiya Kumar

    2012-01-01

    In this article, a posteriori error estimates are derived for mixed finite element Galerkin approximations to second order linear parabolic initial and boundary value problems. Using mixed elliptic reconstructions, a posteriori error estimates in L∞(L2)- and L2(L2)-norms for the solution as well as its flux are proved for the semidiscrete scheme. Finally, based on a backward Euler method, a completely discrete scheme is analyzed and a posteriori error bounds are derived, which improves upon earlier results on a posteriori estimates of mixed finite element approximations to parabolic problems. Results of numerical experiments verifying the efficiency of the estimators have also been provided. © 2012 Society for Industrial and Applied Mathematics.

  3. Generating synthetic wave climates for coastal modelling: a linear mixed modelling approach

    Science.gov (United States)

    Thomas, C.; Lark, R. M.

    2013-12-01

    Numerical coastline morphological evolution models require wave climate properties to drive morphological change through time. Wave climate properties (typically wave height, period and direction) may be temporally fixed, culled from real wave buoy data, or allowed to vary in some way defined by a Gaussian or other pdf. However, to examine sensitivity of coastline morphologies to wave climate change, it seems desirable to be able to modify wave climate time series from a current to some new state along a trajectory, but in a way consistent with, or initially conditioned by, the properties of existing data, or to generate fully synthetic data sets with realistic time series properties. For example, mean or significant wave height time series may have underlying periodicities, as revealed in numerous analyses of wave data. Our motivation is to develop a simple methodology to generate synthetic wave climate time series that can change in some stochastic way through time. We wish to use such time series in a coastline evolution model to test sensitivities of coastal landforms to changes in wave climate over decadal and centennial scales. We have worked initially on time series of significant wave height, based on data from a Waverider III buoy located off the coast of Yorkshire, England. The statistical framework for the simulation is the linear mixed model. The target variable, perhaps after transformation (Box-Cox), is modelled as a multivariate Gaussian, the mean modelled as a function of a fixed effect, and two random components, one of which is independently and identically distributed (iid) and the second of which is temporally correlated. The model was fitted to the data by likelihood methods. We considered the option of a periodic mean, the period either fixed (e.g. at 12 months) or estimated from the data. We considered two possible correlation structures for the second random effect. In one the correlation decays exponentially with time. In the second
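
    In the spirit of this record, and under clearly invented parameter values, the sketch below simulates a synthetic significant-wave-height series as the sum of a 12-month periodic mean, an exponentially correlated Gaussian component and an independent Gaussian component; no fit to the Waverider buoy data is implied.

```python
# Synthetic wave-height series: periodic mean + exponentially correlated noise + iid noise.
import numpy as np

rng = np.random.default_rng(8)
n_months = 240                                   # 20 years of monthly values
t = np.arange(n_months)

mean = 1.8 + 0.6 * np.cos(2 * np.pi * t / 12.0)  # periodic fixed effect (metres)

# Exponentially correlated component: cov(i, j) = s2 * exp(-|i - j| / range_)
s2, range_ = 0.15, 6.0
cov = s2 * np.exp(-np.abs(t[:, None] - t[None, :]) / range_)
correlated = rng.multivariate_normal(np.zeros(n_months), cov)

iid = rng.normal(0.0, 0.1, n_months)             # independent "nugget" component

hs = mean + correlated + iid                     # synthetic significant wave height
```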

  4. The separation-combination method of linear structures in remote sensing image interpretation and its application

    International Nuclear Information System (INIS)

    Liu Linqin

    1991-01-01

    The separation-combination method, a new kind of analysis method for linear structures in remote sensing image interpretation, is introduced, taking northwestern Fujian as the example, and its practical application is examined. Practice shows that the results not only reflect the intensities of linear structures in all directions at different locations, but also contribute to the zonation of linear structures and display their spatial distribution laws. Based on analyses of linear structures, the method can provide more remote sensing information for studies of regional mineralization laws and for guiding ore prospecting in combination with mineralization

  5. Measurement of activity distribution using photostimulable phosphor imaging plates in decommissioned 10 MV medical linear accelerator.

    Science.gov (United States)

    Fujibuchi, Toshioh; Yonai, Shunsuke; Yoshida, Masahiro; Sakae, Takeji; Watanabe, Hiroshi; Abe, Yoshihisa; Itami, Jun

    2014-08-01

    Photonuclear reactions generate neutrons in the head of the linear accelerator. Therefore, some parts of the linear accelerator can become activated. Such activated materials must be handled as radioactive waste. The authors attempted to investigate the distribution of induced radioactivity using photostimulable phosphor imaging plates. Autoradiographs were produced from some parts of the linear accelerator (the target, upper jaw, multileaf collimator and shielding). The levels of induced radioactivity were confirmed to be non-uniform within each part from the autoradiographs. The method was a simple and highly sensitive approach to evaluating the relative degree of activation of the linear accelerators, so that appropriate materials management procedures can be carried out.

  6. Calibration of EFOSC2 Broadband Linear Imaging Polarimetry

    Science.gov (United States)

    Wiersema, K.; Higgins, A. B.; Covino, S.; Starling, R. L. C.

    2018-03-01

    The European Southern Observatory Faint Object Spectrograph and Camera v2 is one of the workhorse instruments on ESO's New Technology Telescope, and is one of the most popular instruments at La Silla observatory. It is mounted at a Nasmyth focus, and therefore exhibits strong, wavelength and pointing-direction-dependent instrumental polarisation. In this document, we describe our efforts to calibrate the broadband imaging polarimetry mode, and provide a calibration for broadband B, V, and R filters to a level that satisfies most use cases (i.e. polarimetric calibration uncertainty 0.1%). We make our calibration codes public. This calibration effort can be used to enhance the yield of future polarimetric programmes with the European Southern Observatory Faint Object Spectrograph and Camera v2, by allowing good calibration with a greatly reduced number of standard star observations. Similarly, our calibration model can be combined with archival calibration observations to post-process data taken in past years, to form the European Southern Observatory Faint Object Spectrograph and Camera v2 legacy archive with substantial scientific potential.

  7. Acceleration of the direct reconstruction of linear parametric images using nested algorithms

    International Nuclear Information System (INIS)

    Wang Guobao; Qi Jinyi

    2010-01-01

    Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.

  8. Multi-disease analysis of maternal antibody decay using non-linear mixed models accounting for censoring.

    Science.gov (United States)

    Goeyvaerts, Nele; Leuridan, Elke; Faes, Christel; Van Damme, Pierre; Hens, Niel

    2015-09-10

    Biomedical studies often generate repeated measures of multiple outcomes on a set of subjects. It may be of interest to develop a biologically intuitive model for the joint evolution of these outcomes while assessing inter-subject heterogeneity. Even though it is common for biological processes to entail non-linear relationships, examples of multivariate non-linear mixed models (MNMMs) are still fairly rare. We contribute to this area by jointly analyzing the maternal antibody decay for measles, mumps, rubella, and varicella, allowing for a different non-linear decay model for each infectious disease. We present a general modeling framework to analyze multivariate non-linear longitudinal profiles subject to censoring, by combining multivariate random effects, non-linear growth and Tobit regression. We explore the hypothesis of a common infant-specific mechanism underlying maternal immunity using a pairwise correlated random-effects approach and evaluating different correlation matrix structures. The implied marginal correlation between maternal antibody levels is estimated using simulations. The mean duration of passive immunity was less than 4 months for all diseases with substantial heterogeneity between infants. The maternal antibody levels against rubella and varicella were found to be positively correlated, while little to no correlation could be inferred for the other disease pairs. For some pairs, computational issues occurred with increasing correlation matrix complexity, which underlines the importance of further developing estimation methods for MNMMs. Copyright © 2015 John Wiley & Sons, Ltd.

  9. Linear mixed-effects models to describe individual tree crown width for China-fir in Fujian Province, southeast China.

    Science.gov (United States)

    Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu

    2015-01-01

    A multiple linear model was developed for individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. An Ordinary Linear Least Squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random effects combinations for the LME models were determined by the Akaike's information criterion, the Bayesian information criterion and the -2logarithm likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function and the constant plus power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). Then, the LME model was compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R2). For individual tree crown width models, the one level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.

  10. Sparse PDF maps for non-linear multi-resolution image operations

    KAUST Repository

    Hadwiger, Markus

    2012-11-01

    We introduce a new type of multi-resolution image pyramid for high-resolution images called sparse pdf maps (sPDF-maps). Each pyramid level consists of a sparse encoding of continuous probability density functions (pdfs) of pixel neighborhoods in the original image. The encoded pdfs enable the accurate computation of non-linear image operations directly in any pyramid level with proper pre-filtering for anti-aliasing, without accessing higher or lower resolutions. The sparsity of sPDF-maps makes them feasible for gigapixel images, while enabling direct evaluation of a variety of non-linear operators from the same representation. We illustrate this versatility for antialiased color mapping, O(n) local Laplacian filters, smoothed local histogram filters (e.g., median or mode filters), and bilateral filters. © 2012 ACM.

  11. Fast photoacoustic imaging system based on 320-element linear transducer array

    International Nuclear Information System (INIS)

    Yin Bangzheng; Xing Da; Wang Yi; Zeng Yaguang; Tan Yi; Chen Qun

    2004-01-01

    A fast photoacoustic (PA) imaging system, based on a 320-transducer linear array, was developed and tested on a tissue phantom. To reconstruct a test tomographic image, 64 time-domain PA signals were acquired from a tissue phantom with embedded light-absorption targets. A signal acquisition was accomplished by utilizing 11 phase-controlled sub-arrays, each consisting of four transducers. The results show that the system can rapidly map the optical absorption of a tissue phantom and effectively detect the embedded light-absorbing target. By utilizing the multi-element linear transducer array and phase-controlled imaging algorithm, we thus can acquire PA tomography more efficiently, compared to other existing technology and algorithms. The methodology and equipment thus provide a rapid and reliable approach to PA imaging that may have potential applications in noninvasive imaging and clinic diagnosis

  12. Implementation of non-linear filters for iterative penalized maximum likelihood image reconstruction

    International Nuclear Information System (INIS)

    Liang, Z.; Gilland, D.; Jaszczak, R.; Coleman, R.

    1990-01-01

    In this paper, the authors report on the implementation of six edge-preserving, noise-smoothing, non-linear filters applied in image space for iterative penalized maximum-likelihood (ML) SPECT image reconstruction. The non-linear smoothing filters implemented were the median filter, the E6 filter, the sigma filter, the edge-line filter, the gradient-inverse filter, and the 3-point edge filter with gradient-inverse weight. A 3 x 3 window was used for all these filters. The best image obtained, judged by viewing profiles through the image in terms of noise smoothing, edge sharpening, and contrast, was the one smoothed with the 3-point edge filter. The computation time for the smoothing was less than 1% of one iteration, and the memory space for the smoothing was negligible. These images were compared with the results obtained using Bayesian analysis.
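
    A minimal sketch of the image-space smoothing step described above is given below, assuming SciPy is available; the reconstruction update itself is left as a placeholder, and only the 3 x 3 median filter (one of the six filters) is shown.

        # Hedged sketch: interleave an edge-preserving 3x3 median smoothing step
        # with iterative reconstruction updates; update_step is a placeholder for
        # one penalized ML/EM iteration.
        import numpy as np
        from scipy.ndimage import median_filter

        def smoothed_iterations(image, update_step, n_iter=10):
            for _ in range(n_iter):
                image = update_step(image)            # one reconstruction update
                image = median_filter(image, size=3)  # 3x3 median smoothing
            return image

        demo = smoothed_iterations(np.ones((64, 64)), lambda im: im, n_iter=2)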

  13. Accuracy of Linear Measurements in Stitched Versus Non-Stitched Cone Beam Computed Tomography Images

    International Nuclear Information System (INIS)

    Srimawong, P.; Krisanachinda, A.; Chindasombatjaroen, J.

    2012-01-01

    Cone beam computed tomography (CBCT) images are useful in clinical dentistry, and accurate linear measurements are necessary for treatment planning. Therefore, the accuracy of linear measurements on CBCT images needs to be verified. A current program, called the stitching program in the Kodak 9000C 3D system, automatically combines up to three localized volumes to construct larger images with small voxel size. The purpose of this study was to assess the accuracy of linear measurements from stitched and non-stitched CBCT images in comparison to direct measurements. This study was performed on 10 human dry mandibles. Gutta-percha rods were used to mark reference points for obtaining 10 vertical and horizontal distances. Direct measurements with a digital caliper served as the gold standard. All distances on CBCT images obtained with and without the stitching program were measured and compared with the direct measurements. The intraclass correlation coefficients (ICC) were calculated. The ICC of direct measurements were 0.998 to 1.000. The intraobserver ICC of both non-stitched and stitched CBCT images were 1.000, indicating strong agreement for measurements made by a single observer. The intermethod ICC between direct measurements vs non-stitched CBCT images and direct measurements vs stitched CBCT images ranged from 0.972 to 1.000 and 0.967 to 0.998, respectively. There were no statistically significant differences between direct measurements and stitched or non-stitched CBCT images (P > 0.05). The results showed that linear measurements on non-stitched and stitched CBCT images were highly accurate, with no statistical difference compared to direct measurements. The ICC values in non-stitched and stitched CBCT images and direct measurements of vertical distances were slightly higher than those of horizontal distances, indicating that measurements in the vertical orientation were more accurate than those in the horizontal orientation; however, the differences were not statistically significant. Stitching

  14. Linearized image reconstruction method for ultrasound modulated electrical impedance tomography based on power density distribution

    International Nuclear Information System (INIS)

    Song, Xizi; Xu, Yanbin; Dong, Feng

    2017-01-01

    Electrical resistance tomography (ERT) is a promising measurement technique with important industrial and clinical applications. However, with limited effective measurements, it suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Recently, there has been an increasing research interest in hybrid imaging techniques, utilizing couplings of physical modalities, because these techniques obtain much more effective measurement information and promise high resolution. Ultrasound modulated electrical impedance tomography (UMEIT) is one of the newly developed hybrid imaging techniques, which combines the electric and acoustic modalities. A linearized image reconstruction method based on power density is proposed for UMEIT. The interior data, the power density distribution, are adopted to reconstruct the conductivity distribution with the proposed image reconstruction method. At the same time, the Jacobian matrix, which relates the change in power density to the change in conductivity, is employed to convert the nonlinear problem into a linear one. The analytic formulation of this Jacobian matrix is derived and its effectiveness is verified. In addition, different excitation patterns are tested and analyzed, and opposite excitation provides the best performance with the proposed method. Multiple power density distributions are also combined to implement image reconstruction. Finally, image reconstruction is implemented with the linear back-projection (LBP) algorithm. Compared with ERT, with the proposed image reconstruction method, UMEIT can produce reconstructed images with higher quality and better quantitative evaluation results. (paper)
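
    A minimal numerical sketch of such a one-step linearized reconstruction is shown below; J stands for the derived Jacobian mapping conductivity perturbations to power-density perturbations, and both the regularization value and the back-projection normalization are illustrative assumptions, not the paper's algorithm.

        # Hedged sketch: one-step linearized reconstruction from interior
        # power-density data using a precomputed Jacobian J.
        import numpy as np

        def regularized_solve(J, delta_p, alpha=1e-3):
            # Tikhonov-regularized least squares for  J @ d_sigma ~ delta_p
            A = J.T @ J + alpha * np.eye(J.shape[1])
            return np.linalg.solve(A, J.T @ delta_p)

        def linear_back_projection(J, delta_p):
            # Simple LBP-style estimate: transpose operator, column-normalized
            norm = np.abs(J).sum(axis=0) + 1e-12
            return (J.T @ delta_p) / norm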

  15. HIGH-RESOLUTION LINEAR POLARIMETRIC IMAGING FOR THE EVENT HORIZON TELESCOPE

    Energy Technology Data Exchange (ETDEWEB)

    Chael, Andrew A.; Johnson, Michael D.; Narayan, Ramesh; Doeleman, Sheperd S. [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Wardle, John F. C. [Brandeis University, Physics Department, Waltham, MA 02454 (United States); Bouman, Katherine L., E-mail: achael@cfa.harvard.edu [Massachusetts Institute of Technology, Computer Science and Artificial Intelligence Laboratory, 32 Vassar Street, Cambridge, MA 02139 (United States)

    2016-09-20

    Images of the linear polarizations of synchrotron radiation around active galactic nuclei (AGNs) highlight their projected magnetic field lines and provide key data for understanding the physics of accretion and outflow from supermassive black holes. The highest-resolution polarimetric images of AGNs are produced with Very Long Baseline Interferometry (VLBI). Because VLBI incompletely samples the Fourier transform of the source image, any image reconstruction that fills in unmeasured spatial frequencies will not be unique and reconstruction algorithms are required. In this paper, we explore some extensions of the Maximum Entropy Method (MEM) to linear polarimetric VLBI imaging. In contrast to previous work, our polarimetric MEM algorithm combines a Stokes I imager that only uses bispectrum measurements that are immune to atmospheric phase corruption, with a joint Stokes Q and U imager that operates on robust polarimetric ratios. We demonstrate the effectiveness of our technique on 7 and 3 mm wavelength quasar observations from the VLBA and simulated 1.3 mm Event Horizon Telescope observations of Sgr A* and M87. Consistent with past studies, we find that polarimetric MEM can produce superior resolution compared to the standard CLEAN algorithm, when imaging smooth and compact source distributions. As an imaging framework, MEM is highly adaptable, allowing a range of constraints on polarization structure. Polarimetric MEM is thus an attractive choice for image reconstruction with the EHT.

  16. A Bayesian Framework for Generalized Linear Mixed Modeling Identifies New Candidate Loci for Late-Onset Alzheimer's Disease.

    Science.gov (United States)

    Wang, Xulong; Philip, Vivek M; Ananda, Guruprasad; White, Charles C; Malhotra, Ankit; Michalski, Paul J; Karuturi, Krishna R Murthy; Chintalapudi, Sumana R; Acklin, Casey; Sasner, Michael; Bennett, David A; De Jager, Philip L; Howell, Gareth R; Carter, Gregory W

    2018-03-05

    Recent technical and methodological advances have greatly enhanced genome-wide association studies (GWAS). The advent of low-cost whole-genome sequencing facilitates high-resolution variant identification, and the development of linear mixed models (LMM) allows improved identification of putatively causal variants. While essential for correcting false positive associations due to sample relatedness and population stratification, LMMs have commonly been restricted to quantitative variables. However, phenotypic traits in association studies are often categorical, coded as binary case-control or ordered variables describing disease stages. To address these issues, we have devised a method for genomic association studies that implements a generalized linear mixed model (GLMM) in a Bayesian framework, called Bayes-GLMM. Bayes-GLMM has four major features: (1) support of categorical, binary and quantitative variables; (2) cohesive integration of previous GWAS results for related traits; (3) correction for sample relatedness by mixed modeling; and (4) model estimation by both Markov chain Monte Carlo (MCMC) sampling and maximum likelihood estimation. We applied Bayes-GLMM to the whole-genome sequencing cohort of the Alzheimer's Disease Sequencing Project (ADSP). This study contains 570 individuals from 111 families, each with Alzheimer's disease diagnosed at one of four confidence levels. With Bayes-GLMM we identified four variants in three loci significantly associated with Alzheimer's disease. Two variants, rs140233081 and rs149372995, lie between PRKAR1B and PDGFA. The encoded proteins are localized to the glial-vascular unit, and PDGFA transcript levels are associated with AD-related neuropathology. In summary, this work provides implementation of a flexible, generalized mixed model approach in a Bayesian framework for association studies. Copyright © 2018, Genetics.

  17. An Improved Search Approach for Solving Non-Convex Mixed-Integer Non Linear Programming Problems

    Science.gov (United States)

    Sitopu, Joni Wilson; Mawengkang, Herman; Syafitri Lubis, Riri

    2018-01-01

    The nonlinear mathematical programming problem addressed in this paper has a structure characterized by a subset of variables restricted to assume discrete values, which are linear and separable from the continuous variables. The strategy of releasing nonbasic variables from their bounds, combined with the “active constraint” method, has been developed. This strategy is used to force the appropriate non-integer basic variables to move to their neighbourhood integer points. Successful implementation of these algorithms was achieved on various test problems.

  18. Simultaneous inference for multilevel linear mixed models - with an application to a large-scale school meal study

    DEFF Research Database (Denmark)

    Ritz, Christian; Laursen, Rikke Pilmann; Damsgaard, Camilla Trab

    2017-01-01

    of a school meal programme. We propose a novel and versatile framework for simultaneous inference on parameters estimated from linear mixed models that were fitted separately for several outcomes from the same study, but did not necessarily contain the same fixed or random effects. By combining asymptotic...... sizes of practical relevance we studied simultaneous coverage through simulation, which showed that the approach achieved acceptable coverage probabilities even for small sample sizes (10 clusters) and for 2–16 outcomes. The approach also compared favourably with a joint modelling approach. We also...

  19. A simulation-based goodness-of-fit test for random effects in generalized linear mixed models

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus

    2006-01-01

    The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal...... distribution of the simulated random effects coincides with the assumed random effects distribution. In practice, the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution...

  20. A Mixed-Integer Linear Programming approach to wind farm layout and inter-array cable routing

    DEFF Research Database (Denmark)

    Fischetti, Martina; Leth, John-Josef; Borchersen, Anders Bech

    2015-01-01

    A Mixed-Integer Linear Programming (MILP) approach is proposed to optimize the turbine allocation and inter-array offshore cable routing. The two problems are considered with a two steps strategy, solving the layout problem first and then the cable problem. We give an introduction to both problems...... and present the MILP models we developed to solve them. To deal with interference in the onshore cases, we propose an adaptation of the standard Jensen’s model, suitable for 3D cases. A simple Stochastic Programming variant of our model allows us to consider different wind scenarios in the optimization...

  1. A simulation-based goodness-of-fit test for random effects in generalized linear mixed models

    DEFF Research Database (Denmark)

    Waagepetersen, Rasmus Plenge

    The goodness-of-fit of the distribution of random effects in a generalized linear mixed model is assessed using a conditional simulation of the random effects conditional on the observations. Provided that the specified joint model for random effects and observations is correct, the marginal...... distribution of the simulated random effects coincides with the assumed random effects distribution. In practice the specified model depends on some unknown parameter which is replaced by an estimate. We obtain a correction for this by deriving the asymptotic distribution of the empirical distribution function...

  2. Thermodynamics versus Kinetics Dichotomy in the Linear Self-Assembly of Mixed Nanoblocks.

    Science.gov (United States)

    Ruiz, L; Keten, S

    2014-06-05

    We report classical and replica exchange molecular dynamics simulations that establish the mechanisms underpinning the growth kinetics of a binary mix of nanorings that form striped nanotubes via self-assembly. A step-growth coalescence model captures the growth process of the nanotubes, which suggests that high aspect ratio nanostructures can grow by obeying the universal laws of self-similar coarsening, contrary to systems that grow through nucleation and elongation. Notably, striped patterns do not depend on specific growth mechanisms, but are governed by tempering conditions that control the likelihood of depropagation and fragmentation.

  3. GREIT: a unified approach to 2D linear EIT reconstruction of lung images.

    Science.gov (United States)

    Adler, Andy; Arnold, John H; Bayford, Richard; Borsic, Andrea; Brown, Brian; Dixon, Paul; Faes, Theo J C; Frerichs, Inéz; Gagnon, Hervé; Gärber, Yvo; Grychtol, Bartłomiej; Hahn, Günter; Lionheart, William R B; Malik, Anjum; Patterson, Robert P; Stocks, Janet; Tizzard, Andrew; Weiler, Norbert; Wolf, Gerhard K

    2009-06-01

    Electrical impedance tomography (EIT) is an attractive method for clinically monitoring patients during mechanical ventilation, because it can provide a non-invasive continuous image of pulmonary impedance which indicates the distribution of ventilation. However, most clinical and physiological research in lung EIT is done using older and proprietary algorithms; this is an obstacle to interpretation of EIT images because the reconstructed images are not well characterized. To address this issue, we develop a consensus linear reconstruction algorithm for lung EIT, called GREIT (Graz consensus Reconstruction algorithm for EIT). This paper describes the unified approach to linear image reconstruction developed for GREIT. The framework for the linear reconstruction algorithm consists of (1) detailed finite element models of a representative adult and neonatal thorax, (2) consensus on the performance figures of merit for EIT image reconstruction and (3) a systematic approach to optimize a linear reconstruction matrix to desired performance measures. Consensus figures of merit, in order of importance, are (a) uniform amplitude response, (b) small and uniform position error, (c) small ringing artefacts, (d) uniform resolution, (e) limited shape deformation and (f) high resolution. Such figures of merit must be attained while maintaining small noise amplification and small sensitivity to electrode and boundary movement. This approach represents the consensus of a large and representative group of experts in EIT algorithm design and clinical applications for pulmonary monitoring. All software and data to implement and test the algorithm have been made available under an open source license which allows free research and commercial use.

  4. GREIT: a unified approach to 2D linear EIT reconstruction of lung images

    International Nuclear Information System (INIS)

    Adler, Andy; Arnold, John H; Bayford, Richard; Tizzard, Andrew; Borsic, Andrea; Brown, Brian; Dixon, Paul; Faes, Theo J C; Frerichs, Inéz; Weiler, Norbert; Gagnon, Hervé; Gärber, Yvo; Grychtol, Bartłomiej; Hahn, Günter; Lionheart, William R B; Malik, Anjum; Patterson, Robert P; Stocks, Janet; Wolf, Gerhard K

    2009-01-01

    Electrical impedance tomography (EIT) is an attractive method for clinically monitoring patients during mechanical ventilation, because it can provide a non-invasive continuous image of pulmonary impedance which indicates the distribution of ventilation. However, most clinical and physiological research in lung EIT is done using older and proprietary algorithms; this is an obstacle to interpretation of EIT images because the reconstructed images are not well characterized. To address this issue, we develop a consensus linear reconstruction algorithm for lung EIT, called GREIT (Graz consensus Reconstruction algorithm for EIT). This paper describes the unified approach to linear image reconstruction developed for GREIT. The framework for the linear reconstruction algorithm consists of (1) detailed finite element models of a representative adult and neonatal thorax, (2) consensus on the performance figures of merit for EIT image reconstruction and (3) a systematic approach to optimize a linear reconstruction matrix to desired performance measures. Consensus figures of merit, in order of importance, are (a) uniform amplitude response, (b) small and uniform position error, (c) small ringing artefacts, (d) uniform resolution, (e) limited shape deformation and (f) high resolution. Such figures of merit must be attained while maintaining small noise amplification and small sensitivity to electrode and boundary movement. This approach represents the consensus of a large and representative group of experts in EIT algorithm design and clinical applications for pulmonary monitoring. All software and data to implement and test the algorithm have been made available under an open source license which allows free research and commercial use

  5. COLOR IMAGE RETRIEVAL BASED ON FEATURE FUSION THROUGH MULTIPLE LINEAR REGRESSION ANALYSIS

    Directory of Open Access Journals (Sweden)

    K. Seetharaman

    2015-08-01

    Full Text Available This paper proposes a novel technique based on feature fusion using multiple linear regression analysis, with the least-squares estimation method employed to estimate the parameters. The given input query image is segmented into various regions according to the structure of the image. The color and texture features are extracted from each region of the query image, and the features are fused together using the multiple linear regression model. The estimated parameters of the model, which is built on these features, form a vector called the feature vector. The Canberra distance measure is adopted to compare the feature vectors of the query and target images. The F-measure is applied to evaluate the performance of the proposed technique. The results show that the proposed technique is comparable to other existing techniques.
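
    A compact sketch of the two ingredients described above, least-squares parameter estimation as the fused feature vector and the Canberra distance for matching, is given below; the feature layout is hypothetical and much simpler than the region-based scheme in the paper.

        # Hedged sketch: fit a multiple linear regression per region and use the
        # estimated coefficients as the fused feature vector; compare vectors with
        # the Canberra distance.
        import numpy as np

        def fused_feature_vector(features, response):
            # features: (n_pixels, k) colour/texture measurements for one region
            # response: (n_pixels,) modelled quantity; both are placeholders
            X = np.column_stack([np.ones(len(response)), features])
            beta, *_ = np.linalg.lstsq(X, response, rcond=None)
            return beta

        def canberra(u, v):
            denom = np.abs(u) + np.abs(v)
            denom[denom == 0] = 1.0
            return float(np.sum(np.abs(u - v) / denom))

        rng = np.random.default_rng(0)
        f1 = fused_feature_vector(rng.random((100, 4)), rng.random(100))
        f2 = fused_feature_vector(rng.random((100, 4)), rng.random(100))
        print("Canberra distance:", canberra(f1, f2))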

  6. Blind phase retrieval for aberrated linear shift-invariant imaging systems

    International Nuclear Information System (INIS)

    Yu, Rotha P; Paganin, David M

    2010-01-01

    We develop a means to reconstruct an input complex coherent scalar wavefield, given a through focal series (TFS) of three intensity images output from a two-dimensional (2D) linear shift-invariant optical imaging system with unknown aberrations. This blind phase retrieval technique unites two methods, namely (i) TFS phase retrieval and (ii) iterative blind deconvolution. The efficacy of our blind phase retrieval procedure has been demonstrated using simulated data, for a variety of Poisson noise levels.

  7. Exact solutions to robust control problems involving scalar hyperbolic conservation laws using Mixed Integer Linear Programming

    KAUST Repository

    Li, Yanning

    2013-10-01

    This article presents a new robust control framework for transportation problems in which the state is modeled by a first order scalar conservation law. Using an equivalent formulation based on a Hamilton-Jacobi equation, we pose the problem of controlling the state of the system on a network link, using boundary flow control, as a Linear Program. Unlike many previously investigated transportation control schemes, this method yields a globally optimal solution and is capable of handling shocks (i.e. discontinuities in the state of the system). We also demonstrate that the same framework can handle robust control problems, in which the uncontrollable components of the initial and boundary conditions are encoded in intervals on the right hand side of inequalities in the linear program. The lower bound of the interval which defines the smallest feasible solution set is used to solve the robust LP (or MILP if the objective function depends on boolean variables). Since this framework leverages the intrinsic properties of the Hamilton-Jacobi equation used to model the state of the system, it is extremely fast. Several examples are given to demonstrate the performance of the robust control solution and the trade-off between the robustness and the optimality. © 2013 IEEE.

  8. Exact solutions to robust control problems involving scalar hyperbolic conservation laws using Mixed Integer Linear Programming

    KAUST Repository

    Li, Yanning; Canepa, Edward S.; Claudel, Christian G.

    2013-01-01

    This article presents a new robust control framework for transportation problems in which the state is modeled by a first order scalar conservation law. Using an equivalent formulation based on a Hamilton-Jacobi equation, we pose the problem of controlling the state of the system on a network link, using boundary flow control, as a Linear Program. Unlike many previously investigated transportation control schemes, this method yields a globally optimal solution and is capable of handling shocks (i.e. discontinuities in the state of the system). We also demonstrate that the same framework can handle robust control problems, in which the uncontrollable components of the initial and boundary conditions are encoded in intervals on the right hand side of inequalities in the linear program. The lower bound of the interval which defines the smallest feasible solution set is used to solve the robust LP (or MILP if the objective function depends on boolean variables). Since this framework leverages the intrinsic properties of the Hamilton-Jacobi equation used to model the state of the system, it is extremely fast. Several examples are given to demonstrate the performance of the robust control solution and the trade-off between the robustness and the optimality. © 2013 IEEE.

  9. Multispectral code excited linear prediction coding and its application in magnetic resonance images.

    Science.gov (United States)

    Hu, J H; Wang, Y; Cahill, P T

    1997-01-01

    This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D microblocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographic Experts Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.

  10. Analysis of correlated count data using generalised linear mixed models exemplified by field data on aggressive behaviour of boars

    Directory of Open Access Journals (Sweden)

    N. Mielenz

    2015-01-01

    Full Text Available Population-averaged and subject-specific models are available to evaluate count data when repeated observations per subject are present. The latter are also known in the literature as generalised linear mixed models (GLMM). In GLMMs, repeated measures are taken into account explicitly through random animal effects in the linear predictor. In this paper the relevant GLMMs are presented based on a conditional Poisson or negative binomial distribution of the response variable for given random animal effects. Equations for the repeatability of count data are derived assuming a normal distribution and a logarithmic gamma distribution for the random animal effects. Using count data on aggressive behaviour events of pigs (barrows, sows and boars) in mixed-sex housing, we demonstrate the use of the Poisson »log-gamma intercept«, the Poisson »normal intercept« and the »normal intercept« model with negative binomial distribution. Since not all count data can definitely be seen as Poisson or negative-binomially distributed, questions of model selection and model checking are examined. Based on the example, we also interpret the least squares means, estimated on the link as well as the response scale. Options provided by the SAS procedure NLMIXED for estimating model parameters and for estimating marginal expected values are presented.
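
    The abstract uses SAS PROC NLMIXED; purely as an illustrative analogue, the sketch below fits a Poisson GLMM with a random intercept per animal using the Bayesian mixed-GLM interface in Python's statsmodels, with hypothetical column names (events, sex, animal) and data file.

        # Hedged sketch: Poisson GLMM with a random animal intercept; the dataset
        # and the column names events, sex, animal are hypothetical.
        import pandas as pd
        from statsmodels.genmod.bayes_mixed_glm import PoissonBayesMixedGLM

        counts = pd.read_csv("aggression_counts.csv")
        model = PoissonBayesMixedGLM.from_formula(
            "events ~ sex",               # fixed effects
            {"animal": "0 + C(animal)"},  # random intercept per animal
            counts)
        result = model.fit_vb()           # variational Bayes estimation
        print(result.summary())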

  11. Some consequences of assuming simple patterns for the treatment effect over time in a linear mixed model.

    Science.gov (United States)

    Bamia, Christina; White, Ian R; Kenward, Michael G

    2013-07-10

    Linear mixed models are often used for the analysis of data from clinical trials with repeated quantitative outcomes. This paper considers linear mixed models where a particular form is assumed for the treatment effect, in particular constant over time or proportional to time. For simplicity, we assume no baseline covariates and complete post-baseline measures, and we model arbitrary mean responses for the control group at each time. For the variance-covariance matrix, we consider an unstructured model, a random intercepts model and a random intercepts and slopes model. We show that the treatment effect estimator can be expressed as a weighted average of the observed time-specific treatment effects, with weights depending on the covariance structure and the magnitude of the estimated variance components. For an assumed constant treatment effect, under the random intercepts model, all weights are equal, but in the random intercepts and slopes and the unstructured models, we show that some weights can be negative: thus, the estimated treatment effect can be negative, even if all time-specific treatment effects are positive. Our results suggest that particular models for the treatment effect combined with particular covariance structures may result in estimated treatment effects of unexpected magnitude and/or direction. Methods are illustrated using a Parkinson's disease trial. Copyright © 2012 John Wiley & Sons, Ltd.

  12. Bayesian quantile regression-based partially linear mixed-effects joint models for longitudinal data with multiple features.

    Science.gov (United States)

    Zhang, Hanze; Huang, Yangxin; Wang, Wei; Chen, Henian; Langland-Orban, Barbara

    2017-01-01

    In longitudinal AIDS studies, it is of interest to investigate the relationship between HIV viral load and CD4 cell counts, as well as the complicated time effect. Most common models used to analyze such complex longitudinal data are based on mean regression, which fails to provide efficient estimates due to outliers and/or heavy tails. Quantile regression-based partially linear mixed-effects models, a special case of semiparametric models enjoying benefits of both parametric and nonparametric models, have the flexibility to monitor the viral dynamics nonparametrically and detect the varying CD4 effects parametrically at different quantiles of viral load. Meanwhile, it is critical to consider various data features of repeated measurements, including left-censoring due to a limit of detection, covariate measurement error, and asymmetric distribution. In this research, we first establish a Bayesian joint model that accounts for all these data features simultaneously in the framework of quantile regression-based partially linear mixed-effects models. The proposed models are applied to analyze the Multicenter AIDS Cohort Study (MACS) data. Simulation studies are also conducted to assess the performance of the proposed methods under different scenarios.

  13. Linear GPR Imaging Based on Electromagnetic Plane-Wave Spectra and Diffraction Tomography

    DEFF Research Database (Denmark)

    Meincke, Peter

    2004-01-01

    Two linear diffraction-tomography based inversion schemes, referred to as the Fourier transform method (FTM) and the far-field method (FFM), are derived for 3-dimensional fixed-offset GPR imaging of buried objects. The FTM and FFM are obtained by using different asymptotic approximations...

  14. Linear array implementation of the EM algorithm for PET image reconstruction

    International Nuclear Information System (INIS)

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1995-01-01

    The PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution backprojection algorithms. However, PET image reconstruction based on the EM algorithm is computationally burdensome for today's single-processor systems. In addition, a large memory is required for the storage of the image, projection data, and the probability matrix. Since the computations are easily divided into tasks executable in parallel, multiprocessor configurations are the ideal choice for fast execution of the EM algorithms. In this study, the authors attempt to overcome these two problems by parallelizing the EM algorithm on a multiprocessor system. The parallel EM algorithm has been implemented on a linear array topology using commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs). The performance of the EM algorithm on a 386/387 machine, an IBM 6000 RISC workstation, and the linear array system is discussed and compared. The results show that the computational speed of a linear array using 8 DSP chips as PEs executing the EM image reconstruction algorithm is about 15.5 times better than that of the IBM 6000 RISC workstation. The novelty of the scheme is its simplicity. The linear array topology is expandable to a larger number of PEs. The architecture is not dependent on the DSP chip chosen, and the substitution of the latest DSP chip is straightforward and could yield better speed performance.
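
    For orientation, the core computation being parallelized is the ML-EM update; a minimal dense-matrix sketch is shown below, where A is the system (probability) matrix, y the measured projection data and lam the current image estimate, all as illustrative placeholders.

        # Hedged sketch of one ML-EM update step for emission tomography.
        import numpy as np

        def mlem_update(lam, A, y, eps=1e-12):
            forward = A @ lam                       # forward projection
            ratio = y / np.maximum(forward, eps)    # measured / estimated projections
            sens = np.maximum(A.sum(axis=0), eps)   # sensitivity image (column sums)
            return lam * (A.T @ ratio) / sens       # multiplicative EM update

        A = np.random.rand(128, 64)                 # toy system matrix
        y = A @ np.random.rand(64)                  # toy projections
        lam = np.ones(64)
        for _ in range(20):
            lam = mlem_update(lam, A, y)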

  15. Single image super-resolution using locally adaptive multiple linear regression.

    Science.gov (United States)

    Yu, Soohwan; Kang, Wonseok; Ko, Seungyong; Paik, Joonki

    2015-12-01

    This paper presents a regularized superresolution (SR) reconstruction method using locally adaptive multiple linear regression to overcome the limitation of spatial resolution of digital images. In order to make the SR problem better-posed, the proposed method incorporates the locally adaptive multiple linear regression into the regularization process as a local prior. The local regularization prior assumes that the target high-resolution (HR) pixel is generated by a linear combination of similar pixels in differently scaled patches and optimum weight parameters. In addition, we adapt a modified version of the nonlocal means filter as a smoothness prior to utilize the patch redundancy. Experimental results show that the proposed algorithm better restores HR images than existing state-of-the-art methods in the sense of the most objective measures in the literature.

  16. Pengaruh Marketing Mix Dan Brand Image Terhadap Customer Satisfaction Dan Customer Loyalty Di Restoran X Surabaya

    OpenAIRE

    Wijaya, Kevin; Sutejo, Andrew Robby Darmawan

    2017-01-01

    The purpose of this study was to determine the influence of the Marketing Mix and Brand Image employed by restaurant X Surabaya on customer satisfaction and customer loyalty. The study used a quantitative approach, with a sample obtained from questionnaires completed by 100 consumers who had dined at restaurant X. Using the Partial Least Squares method of analysis, the results show that the marketing mix has a significant positive influence on customer satisf...

  17. A mixed integer linear programming model applied in barge planning for Omya

    Directory of Open Access Journals (Sweden)

    David Bredström

    2015-12-01

    Full Text Available This article presents a mathematical model for barge transport planning on the river Rhine, which is part of a decision support system (DSS) recently taken into use by the Swiss company Omya. The system is operated by Omya’s regional office in Cologne, Germany, responsible for distribution planning at the regional distribution center (RDC) in Moerdijk, the Netherlands. The distribution planning is a vital part of the supply chain management of Omya’s production of Norwegian high-quality calcium carbonate slurry, supplied to European paper manufacturers. The DSS operates within a vendor managed inventory (VMI) setting, where the customer inventories are monitored by Omya, who decides upon the refilling days and the quantities delivered by barges. The barge planning problem falls into the category of inventory routing problems (IRP) and is further characterized by multiple products, a heterogeneous fleet with availability restrictions (the fleet is owned by a third party), vehicle compartments, dependency of barge capacity on water level, multiple customer visits, bounded customer inventories and a rolling planning horizon. There are additional modelling details which had to be considered to make it possible to employ the model in practice at a sufficient level of detail. To the best of our knowledge, we have not been able to find similar models covering all these aspects in barge planning. This article presents the developed mixed-integer programming model and discusses practical experience with its solution. Briefly, it also puts the model into the context of the entire business case of value chain optimization at Omya.

  18. Thin Cloud Detection Method by Linear Combination Model of Cloud Image

    Science.gov (United States)

    Liu, L.; Li, J.; Wang, Y.; Xiao, Y.; Zhang, W.; Zhang, S.

    2018-04-01

    The existing cloud detection methods in photogrammetry often extract image features directly from remote sensing images and then use them to classify image regions as cloud or non-cloud. When the cloud is thin and small, however, these methods become inaccurate. In this paper, a linear combination model of cloud images is proposed; using this model, the underlying surface information of remote sensing images can be removed, so the cloud detection result becomes more accurate. The automatic cloud detection program first uses the linear combination model to separate the cloud information from the surface information in transparent cloud images, and then uses different image features to recognize the cloud parts. In consideration of computational efficiency, an AdaBoost classifier was introduced to combine the different features into a cloud classifier. The AdaBoost classifier can select the most effective features from many candidate features, so the calculation time is greatly reduced. Finally, we selected a cloud detection method based on a tree structure and a multiple-feature detection method using an SVM classifier for comparison with the proposed method; the experimental data show that the proposed cloud detection program has high accuracy and fast calculation speed.
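
    As a toy illustration of the feature-combination step, the sketch below trains an AdaBoost classifier on a placeholder feature matrix, assuming scikit-learn; the features and labels are synthetic stand-ins for the per-image cloud features described above.

        # Hedged sketch: combine several image features with AdaBoost to label
        # cloud vs. non-cloud samples; the data here are synthetic placeholders.
        import numpy as np
        from sklearn.ensemble import AdaBoostClassifier

        rng = np.random.default_rng(0)
        X = rng.random((1000, 6))                      # 6 candidate features per sample
        y = (X[:, 0] + X[:, 3] > 1.0).astype(int)      # placeholder cloud labels

        clf = AdaBoostClassifier(n_estimators=50)      # boosting picks useful features
        clf.fit(X, y)
        print("training accuracy:", clf.score(X, y))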

  19. Focal spot motion of linear accelerators and its effect on portal image analysis

    International Nuclear Information System (INIS)

    Sonke, Jan-Jakob; Brand, Bob; Herk, Marcel van

    2003-01-01

    The focal spot of a linear accelerator is often considered to have a fully stable position. In practice, however, the beam control loop of a linear accelerator needs to stabilize after the beam is turned on. As a result, some motion of the focal spot might occur during the start-up phase of irradiation. When acquiring portal images, this motion will affect the projected position of anatomy and field edges, especially when low exposures are used. In this paper, the motion of the focal spot and the effect of this motion on portal image analysis are quantified. A slightly tilted narrow slit phantom was placed at the isocenter of several linear accelerators and images were acquired (3.5 frames per second) by means of an amorphous silicon flat panel imager positioned ∼0.7 m below the isocenter. The motion of the focal spot was determined by converting the tilted slit images to subpixel accurate line spread functions. The error in portal image analysis due to focal spot motion was estimated by a subtraction of the relative displacement of the projected slit from the relative displacement of the field edges. It was found that the motion of the focal spot depends on the control system and design of the accelerator. The shift of the focal spot at the start of irradiation ranges between 0.05-0.7 mm in the gun-target (GT) direction. In the left-right (AB) direction the shift is generally smaller. The resulting error in portal image analysis due to focal spot motion ranges between 0.05-1.1 mm for a dose corresponding to two monitor units (MUs). For 20 MUs, the effect of the focal spot motion reduces to 0.01-0.3 mm. The error in portal image analysis due to focal spot motion can be reduced by reducing the applied dose rate

  20. INVESTIGATION OF SECONDARY MIXED RADIATION FIELD AROUND A MEDICAL LINEAR ACCELERATOR.

    Science.gov (United States)

    Tulik, Piotr; Tulik, Monika; Maciak, Maciej; Golnik, Natalia; Kabat, Damian; Byrski, Tomasz; Lesiak, Jan

    2017-09-29

    The aim of this study is to investigate the secondary mixed radiation field around a linac, as the first part of an overall assessment of the out-of-field contribution of neutron dose for new advanced radiation dose delivery techniques. All measurements were performed around a Varian Clinac 2300 C/D accelerator at the Maria Sklodowska-Curie Memorial Cancer Center and Institute of Oncology, Krakow Branch. Recombination chambers REM-2 and GW2 were used to determine the recombination index of radiation quality Q4 (as an estimate of the quality factor Q), to measure the total tissue dose Dt and to calculate the gamma and neutron components of Dt. Estimation of Dt and Q4 allowed the ambient dose equivalent H*(10) per monitor unit (MU) to be calculated. Measurements around the linac were performed at the height of the middle of the linac's head (three positions) and at the height of the linac's isocentre (five positions). Estimation of the secondary radiation level was carried out for seven different configurations of the upper and lower jaw positions, with the multileaf collimator set open or closed in each position. The study includes the use of two photon beam modes: 6 and 18 MV. The spatial distribution of the ambient dose equivalent H*(10) per MU at the height of the linac's head and at the standard couch height for patients during routine treatment, as well as the relative contribution of gamma and neutron secondary radiation inside the treatment room, were evaluated. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. An MCMC method for the evaluation of the Fisher information matrix for non-linear mixed effect models.

    Science.gov (United States)

    Riviere, Marie-Karelle; Ueckert, Sebastian; Mentré, France

    2016-10-01

    Non-linear mixed effect models (NLMEMs) are widely used for the analysis of longitudinal data. To design these studies, optimal design based on the expected Fisher information matrix (FIM) can be used instead of performing time-consuming clinical trial simulations. In recent years, estimation algorithms for NLMEMs have transitioned from linearization toward more exact higher-order methods. Optimal design, on the other hand, has mainly relied on first-order (FO) linearization to calculate the FIM. Although efficient in general, FO cannot be applied to complex non-linear models and can only be applied with difficulty in studies with discrete data. We propose an approach to evaluate the expected FIM in NLMEMs for both discrete and continuous outcomes. We used Markov Chain Monte Carlo (MCMC) to integrate the derivatives of the log-likelihood over the random effects, and Monte Carlo to evaluate its expectation w.r.t. the observations. Our method was implemented in R using Stan, which efficiently draws MCMC samples and calculates partial derivatives of the log-likelihood. Evaluated on several examples, our approach showed good performance with relative standard errors (RSEs) close to those obtained by simulations. We studied the influence of the number of MC and MCMC samples and computed the uncertainty of the FIM evaluation. We also compared our approach to Adaptive Gaussian Quadrature, Laplace approximation, and FO. Our method is available in the R package MIXFIM and can be used to evaluate the FIM, its determinant with confidence intervals (CIs), and RSEs with CIs. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  2. Post irradiation examinations of uranium-plutonium mixed carbide fuels irradiated at low linear power rate

    International Nuclear Information System (INIS)

    Maeda, Atsushi; Sasayama, Tatsuo; Iwai, Takashi; Aizawa, Sakuei; Ohwada, Isao; Aizawa, Masao; Ohmichi, Toshihiko; Handa, Muneo

    1988-11-01

    Two pins containing uranium-plutonium carbide fuels of different stoichiometry, i.e. (U,Pu)C1.0 and (U,Pu)C1.1, were assembled into a capsule, ICF-37H, and irradiated in JRR-2 up to 1.0 at% burnup at a linear heat rate of 420 W/cm. After being cooled for about one year, the irradiated capsule was transferred to the Reactor Fuel Examination Facility, where non-destructive examinations of the fuel pins were carried out in the β-γ cells and destructive examinations in two α-γ inert gas atmosphere cells. The fission gas release rates were low, 0.44% from the (U,Pu)C1.0 fuel pin and 0.09% from the (U,Pu)C1.1 fuel pin, which is reasonable because of the low central temperature of the fuel pellets, about 1000 deg C; the release is estimated to be governed mainly by recoil and knock-out mechanisms. Volume swelling of the fuels was observed to be in the range of 1.3 ∼ 1.6%, typical for carbide fuels below 1000 deg C. The respective open porosities of the (U,Pu)C1.0 and (U,Pu)C1.1 fuels were 1.3% and 0.45%, in accordance with the fission gas release behavior. Metallographic observation of the radial sections of the pellets showed an increase of pore size and crystal grain size in the center and middle regions of the (U,Pu)C1.0 pellets. The chemical interaction between fuel pellets and claddings in the carbide fuels is the penetration of carbon from the fuels into the stainless steel cladding tubes. The depth of the corrosion layer on the inner side of the cladding tubes ranged from 10 ∼ 15 μm in the (U,Pu)C1.0 fuel and 15 ∼ 25 μm in the (U,Pu)C1.1 fuel, which correlates with the carbon potential of the fuels, possibly affecting the amount of carbon penetration. (author)

  3. Solving a mixed-integer linear programming model for a multi-skilled project scheduling problem by simulated annealing

    Directory of Open Access Journals (Sweden)

    H Kazemipoor

    2012-04-01

    Full Text Available A multi-skilled project scheduling problem (MSPSP) has been generally presented to schedule a project with staff members as resources. Each activity in the project network requires different skills, and the staff members have different skills as well. This makes the MSPSP a special type of multi-mode resource-constrained project scheduling problem (MM-RCPSP) with a huge number of modes. Given the importance of this issue, a mixed-integer linear programming model for the MSPSP is presented in this paper. Due to the complexity of the problem, a meta-heuristic algorithm is proposed in order to find near-optimal solutions. To validate the performance of the algorithm, results are compared against exact solutions obtained with the LINGO solver. The results are promising and show that optimal or near-optimal solutions are derived for small instances and good solutions for larger instances in reasonable time.
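
    A generic sketch of the simulated-annealing search loop used to obtain near-optimal schedules is given below; objective() and neighbour() are placeholders for the MSPSP-specific makespan evaluation and schedule perturbation, and the cooling parameters are illustrative.

        # Hedged sketch: simulated annealing for a minimization problem such as
        # makespan; objective() and neighbour() are problem-specific placeholders.
        import math
        import random

        def simulated_annealing(initial, objective, neighbour,
                                t0=100.0, cooling=0.95, iters=1000):
            current = best = initial
            t = t0
            for _ in range(iters):
                candidate = neighbour(current)
                delta = objective(candidate) - objective(current)
                if delta < 0 or random.random() < math.exp(-delta / t):
                    current = candidate
                    if objective(current) < objective(best):
                        best = current
                t *= cooling            # geometric cooling schedule
            return best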

  4. A Mixed Integer Linear Programming Model for the Design of Remanufacturing Closed–loop Supply Chain Network

    Directory of Open Access Journals (Sweden)

    Mbarek Elbounjimi

    2015-11-01

    Full Text Available Closed-loop supply chain network design is a critical issue due to its impact on both the economic and the environmental performance of the supply chain. In this paper, we address the problem of designing a multi-echelon, multi-product and capacitated closed-loop supply chain network. First, a mixed-integer linear programming formulation is developed to maximize the total profit. The main contribution of the proposed model is addressing two economic viability issues of closed-loop supply chains. The first issue is ensuring that retailers collect a sufficient quantity of end-of-life products in return for an acquisition price. The second issue is exploiting the benefits of co-locating forward and reverse facilities. The presented model is solved with LINGO for some test problems. Computational results and a sensitivity analysis are presented to show the performance of the proposed model.
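
    Purely as a toy illustration of a profit-maximizing MILP of this flavour (facility-opening binaries plus continuous flows), the sketch below uses the pulp package; all data values, variable names and the two-site network are invented and far simpler than the model in the paper.

        # Hedged toy sketch: open facilities (binary) and route flow (continuous)
        # to maximize profit; invented data, assuming the pulp package is installed.
        import pulp

        sites = ["A", "B"]
        open_cost = {"A": 50, "B": 70}
        capacity = {"A": 60, "B": 90}
        unit_profit, total_returns = 3, 80

        prob = pulp.LpProblem("toy_closed_loop_design", pulp.LpMaximize)
        opened = pulp.LpVariable.dicts("open", sites, cat="Binary")
        flow = pulp.LpVariable.dicts("flow", sites, lowBound=0)

        prob += pulp.lpSum(unit_profit * flow[s] - open_cost[s] * opened[s] for s in sites)
        for s in sites:
            prob += flow[s] <= capacity[s] * opened[s]   # flow only through open sites
        prob += pulp.lpSum(flow[s] for s in sites) <= total_returns

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        print({s: (opened[s].value(), flow[s].value()) for s in sites})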

  5. Measuring the individual benefit of a medical or behavioral treatment using generalized linear mixed-effects models.

    Science.gov (United States)

    Diaz, Francisco J

    2016-10-15

    We propose statistical definitions of the individual benefit of a medical or behavioral treatment and of the severity of a chronic illness. These definitions are used to develop a graphical method that can be used by statisticians and clinicians in the data analysis of clinical trials from the perspective of personalized medicine. The method focuses on assessing and comparing individual effects of treatments rather than average effects and can be used with continuous and discrete responses, including dichotomous and count responses. The method is based on new developments in generalized linear mixed-effects models, which are introduced in this article. To illustrate, analyses of data from the Sequenced Treatment Alternatives to Relieve Depression clinical trial of sequences of treatments for depression and data from a clinical trial of respiratory treatments are presented. The estimation of individual benefits is also explained. Copyright © 2016 John Wiley & Sons, Ltd.

  6. Non-linear mixing in coupled photonic crystal nanobeam cavities due to cross-coupling opto-mechanical mechanisms

    Energy Technology Data Exchange (ETDEWEB)

    Ramos, Daniel, E-mail: daniel.ramos@csic.es; Frank, Ian W.; Deotare, Parag B.; Bulu, Irfan; Lončar, Marko [School of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts 02138 (United States)

    2014-11-03

    We investigate the coupling between mechanical and optical modes supported by coupled, freestanding, photonic crystal nanobeam cavities. We show that localized cavity modes for a given gap between the nanobeams provide weak optomechanical coupling with out-of-plane mechanical modes. However, we show that the coupling can be significantly increased, by more than an order of magnitude for the symmetric mechanical mode, due to optical resonances that arise from the interaction of the localized cavity modes with standing waves formed by reflection from the substrate. Finally, amplification of motion for the symmetric mode has been observed and attributed to the strong optomechanical interaction of our hybrid system. The amplitude of these self-sustained oscillations is large enough to put the system into a non-linear oscillation regime, where a mixing between the mechanical modes is experimentally observed and theoretically explained.

  7. Non-linear mixed effects modeling - from methodology and software development to driving implementation in drug development science.

    Science.gov (United States)

    Pillai, Goonaseelan Colin; Mentré, France; Steimer, Jean-Louis

    2005-04-01

    Few scientific contributions have made significant impact unless there was a champion who had the vision to see the potential for its use in seemingly disparate areas, and who then drove active implementation. In this paper, we present a historical summary of the development of non-linear mixed effects (NLME) modeling up to the more recent extensions of this statistical methodology. The paper places strong emphasis on the pivotal role played by Lewis B. Sheiner (1940-2004), who used this statistical methodology to elucidate solutions to real problems identified in clinical practice and in medical research, and on how he drove implementation of the proposed solutions. A succinct overview of the evolution of the NLME modeling methodology is presented, as well as ideas on how its expansion helped to provide guidance for a more scientific view of (model-based) drug development that reduces empiricism in favor of critical quantitative thinking and decision making.

  8. Rationally encapsulated gold nanorods improving both linear and nonlinear photoacoustic imaging contrast in vivo.

    Science.gov (United States)

    Gao, Fei; Bai, Linyi; Liu, Siyu; Zhang, Ruochong; Zhang, Jingtao; Feng, Xiaohua; Zheng, Yuanjin; Zhao, Yanli

    2017-01-07

    Photoacoustic tomography has emerged as a promising non-invasive imaging technique that integrates the merits of high optical contrast with high ultrasound resolution in deep scattering medium. Unfortunately, the blood background in vivo seriously impedes the quality of imaging due to its comparable optical absorption with contrast agents, especially in conventional linear photoacoustic imaging modality. In this study, we demonstrated that two hybrids consisting of gold nanorods (Au NRs) and zinc tetra(4-pyridyl)porphyrin (ZnTPP) exhibited a synergetic effect in improving optical absorption, conversion efficiency from light to heat, and thermoelastic expansion, leading to a notable enhancement in both linear (four times greater) and nonlinear (more than six times) photoacoustic signals as compared with conventional Au NRs. Subsequently, we carefully investigated the interesting factors that may influence photoacoustic signal amplification, suggesting that the coating of ZnTPP on Au NRs could result in the reduction of gold interfacial thermal conductance with a solvent, so that the heat is more confined within the nanoparticle clusters for a significant enhancement of local temperature. Hence, both the linear and nonlinear photoacoustic signals are enhanced on account of better thermal confinement. The present work not only shows that ZnTPP coated Au NRs could serve as excellent photoacoustic nanoamplifiers, but also brings a perspective for photoacoustic image-guided therapy.

  9. Improved measurement linearity and precision for AMCW time-of-flight range imaging cameras.

    Science.gov (United States)

    Payne, Andrew D; Dorrington, Adrian A; Cree, Michael J; Carnegie, Dale A

    2010-08-10

    Time-of-flight range imaging systems utilizing the amplitude modulated continuous wave (AMCW) technique often suffer from measurement nonlinearity due to the presence of aliased harmonics within the amplitude modulation signals. Typically a calibration is performed to correct these errors. We demonstrate an alternative phase encoding approach that attenuates the harmonics during the sampling process, thereby improving measurement linearity in the raw measurements. This mitigates the need to measure the system's response or calibrate for environmental changes. In conjunction with improved linearity, we demonstrate that measurement precision can also be increased by reducing the duty cycle of the amplitude modulated illumination source (while maintaining overall illumination power).
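
    For context, the sketch below shows the standard four-bucket AMCW phase-to-range calculation that such cameras build on; the harmonic-cancelling phase-encoding scheme of the paper itself is not reproduced, and the variable names are illustrative.

        # Hedged sketch: conventional four-bucket AMCW range estimate from the
        # sampled correlation values a0..a3 at 0, 90, 180 and 270 degrees.
        import numpy as np

        def amcw_range(a0, a1, a2, a3, mod_freq_hz):
            c = 299_792_458.0                              # speed of light, m/s
            phase = np.mod(np.arctan2(a3 - a1, a0 - a2), 2 * np.pi)
            return c * phase / (4 * np.pi * mod_freq_hz)   # range within ambiguity interval

        print(amcw_range(1.0, 0.2, 0.1, 0.9, 30e6))        # toy values, 30 MHz modulation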

  10. Geomorphic domains and linear features on Landsat images, Circle Quadrangle, Alaska

    Science.gov (United States)

    Simpson, S.L.

    1984-01-01

    A remote sensing study using Landsat images was undertaken as part of the Alaska Mineral Resource Assessment Program (AMRAP). Geomorphic domains A and B, identified on enhanced Landsat images, divide the Circle quadrangle south of the Tintina fault zone into two regional areas having major differences in surface characteristics. Domain A is a roughly rectangular, northeast-trending area of relatively low relief and simple, widely spaced drainages, except where igneous rocks are exposed. In contrast, domain B, which bounds two sides of domain A, is more intricately dissected, showing abrupt changes in slope and relatively high relief. The northwestern part of geomorphic domain A includes a previously mapped tectonostratigraphic terrane. The southeastern boundary of domain A occurs entirely within the adjoining tectonostratigraphic terrane. The sharp geomorphic contrast along the southeastern boundary of domain A and the existence of known faults along this boundary suggest that the southeastern part of domain A may be a subdivision of the adjoining terrane. Detailed field studies would be necessary to determine the characteristics of the subdivision. Domain B appears to be divisible into large areas of different geomorphic terrains by east-northeast-trending curvilinear lines drawn on Landsat images. Segments of two of these lines correlate with parts of boundaries of mapped tectonostratigraphic terranes. On Landsat images, prominent north-trending lineaments together with the curvilinear lines form a large-scale regional pattern that is transected by mapped north-northeast-trending high-angle faults. The lineaments indicate possible lithologic variations and/or structural boundaries. A statistical strike-frequency analysis of the linear features data for the Circle quadrangle shows that northeast-trending linear features predominate throughout, and that most northwest-trending linear features are found south of the Tintina fault zone. A major trend interval of N.64-72E. in the linear

  11. Single Image Super-Resolution Using Global Regression Based on Multiple Local Linear Mappings.

    Science.gov (United States)

    Choi, Jae-Seok; Kim, Munchurl

    2017-03-01

    Super-resolution (SR) has become more vital because of its capability to generate high-quality ultra-high definition (UHD) high-resolution (HR) images from low-resolution (LR) input images. Conventional SR methods entail high computational complexity, which makes them difficult to implement for up-scaling of full-high-definition input images into UHD-resolution images. Nevertheless, our previous super-interpolation (SI) method showed a good compromise between peak signal-to-noise ratio (PSNR) performance and computational complexity. However, since SI only utilizes simple linear mappings, it may fail to precisely reconstruct HR patches with complex texture. In this paper, we present a novel SR method, which inherits the large-to-small patch conversion scheme from SI but uses global regression based on local linear mappings (GLM). Thus, our new SR method is called GLM-SI. In GLM-SI, each LR input patch is divided into 25 overlapped subpatches. Next, based on the local properties of these subpatches, 25 different local linear mappings are applied to the current LR input patch to generate 25 HR patch candidates, which are then regressed into one final HR patch using a global regressor. The local linear mappings are learned cluster-wise in our off-line training phase. The main contribution of this paper is as follows: previously, linear-mapping-based conventional SR methods, including SI, used only one simple yet coarse linear mapping per patch to reconstruct its HR version. In contrast, for each LR input patch, our GLM-SI is the first to apply a combination of multiple local linear mappings, where each local linear mapping is found according to local properties of the current LR patch. Therefore, it can better approximate nonlinear LR-to-HR mappings for HR patches with complex texture. Experiment results show that the proposed GLM-SI method outperforms most of the state-of-the-art methods, and shows comparable PSNR performance with much lower

  12. Efficient multiple-trait association and estimation of genetic correlation using the matrix-variate linear mixed model.

    Science.gov (United States)

    Furlotte, Nicholas A; Eskin, Eleazar

    2015-05-01

    Multiple-trait association mapping, in which multiple traits are used simultaneously in the identification of genetic variants affecting those traits, has recently attracted interest. One class of approaches for this problem builds on classical variance component methodology, utilizing a multitrait version of a linear mixed model. These approaches both increase power and provide insights into the genetic architecture of multiple traits. In particular, it is possible to estimate the genetic correlation, which is a measure of the portion of the total correlation between traits that is due to additive genetic effects. Unfortunately, the practical utility of these methods is limited since they are computationally intractable for large sample sizes. In this article, we introduce a reformulation of the multiple-trait association mapping approach by defining the matrix-variate linear mixed model. Our approach reduces the computational time necessary to perform maximum-likelihood inference in a multiple-trait model by utilizing a data transformation. By utilizing a well-studied human cohort, we show that our approach provides more than a 10-fold speedup, making multiple-trait association feasible in a large population cohort on the genome-wide scale. We take advantage of the efficiency of our approach to analyze gene expression data. By decomposing gene coexpression into a genetic and environmental component, we show that our method provides fundamental insights into the nature of coexpressed genes. An implementation of this method is available at http://genetics.cs.ucla.edu/mvLMM. Copyright © 2015 by the Genetics Society of America.
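
    For orientation, a common way to write such a multiple-trait (matrix-variate) linear mixed model, with Y the n x t matrix of t traits on n individuals, X the covariates, K the kinship matrix, and V_g, V_e the t x t genetic and environmental covariance matrices, is

      \mathrm{vec}(Y) \sim \mathcal{N}\!\big(\mathrm{vec}(XB),\; V_g \otimes K + V_e \otimes I_n\big),
      \qquad
      r_{g,ij} = \frac{V_{g,ij}}{\sqrt{V_{g,ii}\,V_{g,jj}}},

    where r_{g,ij} is the genetic correlation between traits i and j. The exact parameterization and the data transformation used for the speedup in the paper may differ; this is only the generic form for reference.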

  13. Generalized Linear Mixed Model Analysis of Urban-Rural Differences in Social and Behavioral Factors for Colorectal Cancer Screening

    Science.gov (United States)

    Wang, Ke-Sheng; Liu, Xuefeng; Ategbole, Muyiwa; Xie, Xin; Liu, Ying; Xu, Chun; Xie, Changchun; Sha, Zhanxin

    2017-09-27

    Objective: Screening for colorectal cancer (CRC) can reduce disease incidence, morbidity, and mortality. However, few studies have investigated the urban-rural differences in social and behavioral factors influencing CRC screening. The objective of the study was to investigate the potential factors across urban-rural groups on the usage of CRC screening. Methods: A total of 38,505 adults (aged ≥40 years) were selected from the 2009 California Health Interview Survey (CHIS) data - the latest CHIS data on CRC screening. The weighted generalized linear mixed model (WGLIMM) was used to deal with these hierarchically structured data. Weighted simple and multiple mixed logistic regression analyses in SAS ver. 9.4 were used to obtain the odds ratios (ORs) and their 95% confidence intervals (CIs). Results: The overall prevalence of CRC screening was 48.1%, while the prevalences in the four residence groups - urban, second city, suburban, and town/rural - were 45.8%, 46.9%, 53.7% and 50.1%, respectively. The results of the WGLIMM analysis showed that there was a residence effect (p < 0.05). Regression analysis revealed that age, race, marital status, education level, employment status, binge drinking, and smoking status were associated with CRC screening (p < 0.05). Stratified by residence regions, age and poverty level showed associations with CRC screening in all four residence groups. Education level was positively associated with CRC screening in the second city and suburban groups. Infrequent binge drinking was associated with CRC screening in the urban and suburban groups, while current smoking was a protective factor in the urban and town/rural groups. Conclusions: Mixed models are useful to deal with clustered survey data. Social factors and behavioral factors (binge drinking and smoking) were associated with CRC screening, and the associations were affected by living areas such as urban and rural regions.
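
    As a point of reference, a random-intercept logistic GLMM of the kind described (adults i nested in residence/design clusters j), with survey weights entering a weighted pseudo-likelihood, can be written as

      \operatorname{logit}\Pr(y_{ij}=1 \mid u_j) = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j,
      \qquad u_j \sim \mathcal{N}(0,\sigma_u^2),

    where y_{ij} indicates CRC screening use and x_{ij} collects the social and behavioral covariates. The precise random-effects structure used in the WGLIMM analysis is not spelled out in the abstract, so this is only a generic sketch.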

  14. Using empirical Bayes predictors from generalized linear mixed models to test and visualize associations among longitudinal outcomes.

    Science.gov (United States)

    Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O

    2018-01-01

    Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and hence, plotable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM, provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus if computable, scatterplots of the conditionally independent empirical Bayes

  15. Comparing a single case to a control group - Applying linear mixed effects models to repeated measures data.

    Science.gov (United States)

    Huber, Stefan; Klein, Elise; Moeller, Korbinian; Willmes, Klaus

    2015-10-01

    In neuropsychological research, single cases are often compared with a small control sample. Crawford and colleagues developed inferential methods (i.e., the modified t-test) for such a research design. In the present article, we suggest an extension of the methods of Crawford and colleagues employing linear mixed models (LMM). We first show that a t-test for the significance of a dummy-coded predictor variable in a linear regression is equivalent to the modified t-test of Crawford and colleagues. As an extension to this idea, we then generalized the modified t-test to repeated measures data by using LMMs to compare the performance difference in two conditions observed in a single participant to that of a small control group. The performance of LMMs regarding Type I error rates and statistical power was tested based on Monte-Carlo simulations. We found that, starting with about 15-20 participants in the control sample, Type I error rates were close to the nominal Type I error rate using the Satterthwaite approximation for the degrees of freedom. Moreover, statistical power was acceptable. Therefore, we conclude that LMMs can be applied successfully to statistically evaluate performance differences between a single-case and a control sample. Copyright © 2015 Elsevier Ltd. All rights reserved.
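
    For context, the modified t-test commonly attributed to Crawford and colleagues compares a single case's score x* with a control sample of size n, mean \bar{x}_c and standard deviation s_c:

      t = \frac{x^{*} - \bar{x}_c}{s_c\sqrt{(n+1)/n}}, \qquad df = n-1.

    The article's contribution is to recover this test as the t-test on a dummy-coded predictor in a regression, and then to extend it to repeated-measures data by testing the case-by-condition effect in a linear mixed model; the formula above is the standard single-observation version and is included here only for orientation.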

  16. Suppression of chaos at slow variables by rapidly mixing fast dynamics through linear energy-preserving coupling

    Science.gov (United States)

    Abramov, R. V.

    2011-12-01

    Chaotic multiscale dynamical systems are common in many areas of science, one of the examples being the interaction of the low-frequency dynamics in the atmosphere with the fast turbulent weather dynamics. One of the key questions about chaotic multiscale systems is how the fast dynamics affects chaos at the slow variables, and, therefore, impacts uncertainty and predictability of the slow dynamics. Here we demonstrate that the linear slow-fast coupling with the total energy conservation property promotes the suppression of chaos at the slow variables through the rapid mixing at the fast variables, both theoretically and through numerical simulations. A suitable mathematical framework is developed, connecting the slow dynamics on the tangent subspaces to the infinite-time linear response of the mean state to a constant external forcing at the fast variables. Additionally, it is shown that the uncoupled dynamics for the slow variables may remain chaotic while the complete multiscale system loses chaos and becomes completely predictable at the slow variables through increasing chaos and turbulence at the fast variables. This result contradicts the common sense intuition, where, naturally, one would think that coupling a slow weakly chaotic system with another much faster and much stronger chaotic system would result in general increase of chaos at the slow variables.

  17. Fourier-based linear systems description of free-breathing pulmonary magnetic resonance imaging

    Science.gov (United States)

    Capaldi, D. P. I.; Svenningsen, S.; Cunningham, I. A.; Parraga, G.

    2015-03-01

    Fourier-decomposition of free-breathing pulmonary magnetic resonance imaging (FDMRI) was recently piloted as a way to provide rapid quantitative pulmonary maps of ventilation and perfusion without the use of exogenous contrast agents. This method exploits fast pulmonary MRI acquisition of free-breathing proton (1H) pulmonary images and non-rigid registration to compensate for changes in position and shape of the thorax associated with breathing. In this way, ventilation imaging using conventional MRI systems can be undertaken, but there has been no systematic evaluation of fundamental image quality measurements based on linear systems theory. We investigated the performance of free-breathing pulmonary ventilation imaging using a Fourier-based linear system description of each operation required to generate FDMRI ventilation maps. Twelve subjects with chronic obstructive pulmonary disease (COPD) or bronchiectasis underwent pulmonary function tests and MRI. Non-rigid registration was used to co-register the temporal series of pulmonary images. Pulmonary voxel intensities were aligned along a time axis and discrete Fourier transforms were performed on the periodic signal intensity pattern to generate frequency spectra. We determined the signal-to-noise ratio (SNR) of the FDMRI ventilation maps using a conventional approach (SNRC) and using the Fourier-based description (SNRF). Mean SNR was 4.7 ± 1.3 for subjects with bronchiectasis and 3.4 ± 1.8 for COPD subjects (p > .05). SNRF was significantly different from SNRC (p < .01). SNRF was approximately 50% of SNRC, suggesting that the linear system model well-estimates the current approach.
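
    A minimal sketch of the Fourier-decomposition step described above is given below: after non-rigid registration, each voxel's intensity-versus-time signal is transformed and the spectral amplitude near the breathing frequency is taken as a ventilation-weighted value. The frame spacing, respiratory band and peak-picking rule are illustrative assumptions, not the authors' exact pipeline.

      # Sketch (Python/NumPy) of a Fourier-decomposition ventilation map.
      import numpy as np

      def ventilation_map(series, dt, f_resp_band=(0.15, 0.5)):
          """series: (t, y, x) registered free-breathing images; dt: frame spacing in seconds."""
          t = series.shape[0]
          spectrum = np.abs(np.fft.rfft(series - series.mean(axis=0), axis=0))  # (f, y, x)
          freqs = np.fft.rfftfreq(t, d=dt)
          band = (freqs >= f_resp_band[0]) & (freqs <= f_resp_band[1])
          return spectrum[band].max(axis=0)      # peak respiratory-frequency amplitude per voxel

      demo = np.random.rand(120, 8, 8)           # stand-in for a registered image series
      print(ventilation_map(demo, dt=0.3).shape) # (8, 8)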

  18. Marketing Mix and Tourism Destination Image: The Study of Destination Bled, Slovenia

    Directory of Open Access Journals (Sweden)

    Binter Urška

    2016-12-01

    Background and Purpose: The aim of the research was to find out how business partners from the field of tourism assess the dimensions of the image of Bled and the marketing mix used to promote Bled. Further, we were interested in evaluating the influence of image and marketing mix on the scope of sales measured in overnight stays. The following dimensions of image were explored: perceived uniqueness of the image as a whole, perceived uniqueness of attractions and experiences, perceived quality of the environment (cleanliness), and perceived feeling of safety, as well as the dimensions of the marketing mix: perceived quality of products (accommodation, culinary offer, transfers, etc.), perceived price of services, perceived manner of sales for the promotion of Bled, perception of promotional channels, perception of residents (politeness, friendliness, multicultural and religious openness, etc.), and a positive experience of visiting Slovenia.

  19. An improved triangulation laser rangefinder using a custom CMOS HDR linear image sensor

    Science.gov (United States)

    Liscombe, Michael

    3-D triangulation laser rangefinders are used in many modern applications, from terrain mapping to biometric identification. Although a wide variety of designs have been proposed, laser speckle noise still provides a fundamental limitation on range accuracy. This work proposes a new triangulation laser rangefinder designed specifically to mitigate the effects of laser speckle noise. The proposed rangefinder uses a precision linear translator to laterally reposition the imaging system (e.g., image sensor and imaging lens). For a given spatial location of the laser spot, capturing N spatially uncorrelated laser spot profiles is shown to improve range accuracy by a factor of √N. This technique has many advantages over past speckle-reduction technologies, such as a fixed system cost and form factor, and the ability to virtually eliminate laser speckle noise. These advantages are made possible through spatial diversity and come at the cost of increased acquisition time. The rangefinder makes use of the ICFYKWG1 linear image sensor, a custom CMOS sensor developed at the Vision Sensor Laboratory (York University). Tests are performed on the image sensor's innovative high dynamic range technology to determine its effects on range accuracy. As expected, experimental results have shown that the sensor provides a trade-off between dynamic range and range accuracy.

  20. Multi-linear sparse reconstruction for SAR imaging based on higher-order SVD

    Science.gov (United States)

    Gao, Yu-Fei; Gui, Guan; Cong, Xun-Chao; Yang, Yue; Zou, Yan-Bin; Wan, Qun

    2017-12-01

    This paper focuses on spotlight synthetic aperture radar (SAR) imaging of point scattering targets based on tensor modeling. In real-world scenarios, scatterers are usually distributed in a block sparse pattern. This distribution feature has scarcely been exploited in previous studies of SAR imaging. Our work takes advantage of this structural property of the target scene by constructing a multi-linear sparse reconstruction algorithm for SAR imaging, in which multi-linear block sparsity is introduced into the higher-order singular value decomposition (SVD) together with a dictionary construction procedure. Simulation experiments with ideal point targets show the robustness of the proposed algorithm to the noise and sidelobe disturbances that always degrade the imaging quality of conventional methods. The computational resource requirements are further investigated in this paper. The complexity analysis shows that the present method is superior in resource consumption to the classic matching pursuit method. Imaging results for practical measured data also demonstrate the effectiveness of the algorithm developed in this paper.
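
    To make the tensor machinery concrete, the sketch below computes a plain higher-order SVD (Tucker-style) of a 3-way array with NumPy: one SVD per mode unfolding, followed by projection onto the factor matrices to obtain the core tensor. The block-sparse SAR reconstruction and dictionary construction described in the paper are not reproduced here; this only illustrates the decomposition the method builds on.

      # Compact higher-order SVD sketch (Python/NumPy).
      import numpy as np

      def unfold(T, mode):
          return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

      def hosvd(T):
          # One factor matrix per mode from the SVD of each unfolding.
          U = [np.linalg.svd(unfold(T, m), full_matrices=False)[0] for m in range(T.ndim)]
          core = T
          for m, Um in enumerate(U):               # project each mode onto its factor matrix
              core = np.moveaxis(np.tensordot(Um.T, np.moveaxis(core, m, 0), axes=1), 0, m)
          return core, U

      T = np.random.rand(8, 9, 10)
      core, U = hosvd(T)
      recon = core
      for m, Um in enumerate(U):                   # multiply back along every mode
          recon = np.moveaxis(np.tensordot(Um, np.moveaxis(recon, m, 0), axes=1), 0, m)
      print(np.allclose(recon, T))                 # True: full-rank HOSVD reconstructs the tensor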

  1. Two-dimensional Fast ESPRIT Algorithm for Linear Array SAR Imaging

    Directory of Open Access Journals (Sweden)

    Zhao Yi-chao

    2015-10-01

    The linear array Synthetic Aperture Radar (SAR) system is a popular research tool, because it can realize three-dimensional imaging. However, owing to limitations of the aircraft platform and actual conditions, resolution improvement is difficult in the cross-track and along-track directions. In this study, a two-dimensional fast Estimation of Signal Parameters via Rotational Invariance Techniques (ESPRIT) algorithm for linear array SAR imaging is proposed to overcome these limitations. This approach combines the Gerschgorin disks method and the ESPRIT algorithm to estimate the positions of scatterers in the cross-track and along-track directions. Moreover, the reflectivity of scatterers is obtained by a modified pairing method based on "region growing", replacing the least-squares method. The simulation results demonstrate the applicability of the algorithm with high resolution, quick calculation, and good real-time response.
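
    The core of ESPRIT is the rotational invariance of the signal subspace; the minimal one-dimensional sketch below recovers the frequencies of a sum of complex exponentials with it. The paper's two-dimensional fast variant, the Gerschgorin-disk estimation of the number of scatterers and the region-growing pairing step are considerably more involved and are not reproduced here.

      # Minimal 1-D ESPRIT sketch (Python/NumPy).
      import numpy as np

      rng = np.random.default_rng(0)
      true_f = np.array([0.12, 0.31])                      # normalized frequencies to recover
      n, m, k = 200, 40, len(true_f)                       # samples, subarray length, model order
      x = sum(np.exp(2j * np.pi * f * np.arange(n)) for f in true_f)
      x += 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

      # Hankel "snapshot" matrix and its signal subspace.
      H = np.array([x[i:i + m] for i in range(n - m)]).T   # (m, n-m)
      Us = np.linalg.svd(H, full_matrices=False)[0][:, :k]

      # Rotational invariance: Us[1:] ≈ Us[:-1] @ Phi, eigenvalues of Phi are exp(j*2*pi*f).
      Phi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]
      f_hat = np.sort(np.angle(np.linalg.eigvals(Phi)) / (2 * np.pi))
      print(np.round(f_hat, 3))                            # ≈ [0.12, 0.31]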

  2. The linear attenuation coefficients as features of multiple energy CT image classification

    International Nuclear Information System (INIS)

    Homem, M.R.P.; Mascarenhas, N.D.A.; Cruvinel, P.E.

    2000-01-01

    We present in this paper an analysis of the linear attenuation coefficients as useful features of single and multiple energy CT images with the use of statistical pattern classification tools. We analyzed four CT images through two pointwise classifiers (the first classifier is based on the maximum-likelihood criterion and the second classifier is based on the k-means clustering algorithm) and one contextual Bayesian classifier (ICM algorithm - Iterated Conditional Modes) using an a priori Potts-Strauss model. A feature extraction procedure using the Jeffries-Matusita (J-M) distance and the Karhunen-Loeve transformation was also performed. Both the classification and the feature selection procedures were found to be in agreement with the predicted discrimination given by the separation of the linear attenuation coefficient curves for different materials

  3. Linear, Transfinite and Weighted Method for Interpolation from Grid Lines Applied to OCT Images

    DEFF Research Database (Denmark)

    Lindberg, Anne-Sofie Wessel; Jørgensen, Thomas Martini; Dahl, Vedrana Andersen

    2018-01-01

    of a square grid, but are unknown inside each square. To view these values as an image, intensities need to be interpolated at regularly spaced pixel positions. In this paper we evaluate three methods for interpolation from grid lines: linear, transfinite and weighted. The linear method does not preserve...... and the stability of the linear method further away. An important parameter influencing the performance of the interpolation methods is the upsampling rate. We perform an extensive evaluation of the three interpolation methods across a range of upsampling rates. Our statistical analysis shows significant difference...... in the performance of the three methods. We find that the transfinite interpolation works well for small upsampling rates and the proposed weighted interpolation method performs very well for all upsampling rates typically used in practice. On the basis of these findings we propose an approach for combining two OCT...

  4. Craniospinal treatment with IMRT multi-isocentric and image-guided linear accelerator based on Gantry

    International Nuclear Information System (INIS)

    Sanz Beltran, M.; Caballero Perea, B.; Rodriguez Rodriguez, C.; Arminio Diaz, E.; Lopez Fernandez, A.; Gomez Fervienza, J. R.; Crespo Diez, P.; Cantarero Valenzuela, N.; Alvarez Sanchez, M.; Martin Martin, G.

    2011-01-01

    The objective is to deliver craniospinal treatment with a gantry-based linear accelerator equipped with an MLC, a carbon-fibre table and image-guidance capability. The great treatment length (patient 1.80 m in height) posed a major difficulty, since the longitudinal travel of the table was insufficient to adequately cover the PTV; in addition, the metallic screws fastening the head of the table extender had to be kept clear to prevent further incidents.

  5. Resolution limits of migration and linearized waveform inversion images in a lossy medium

    KAUST Repository

    Schuster, Gerard T.; Dutta, Gaurav; Li, Jing

    2017-01-01

    The horizontal- and vertical-resolution limits Δx_lossy and Δz_lossy of post-stack migration and linearized waveform inversion images are derived for lossy data in the far-field approximation. Unlike the horizontal resolution limit Δx ∝ λz/L in a lossless medium, which linearly worsens in depth z, Δx_lossy ∝ z²/(QL) worsens quadratically with depth for a medium with small Q values. Here, Q is the quality factor, λ is the effective wavelength, L is the recording aperture, and loss in the resolution formulae is accounted for by replacing λ with z/Q. In contrast, the lossy vertical-resolution limit Δz_lossy only worsens linearly in depth compared to Δz ∝ λ for a lossless medium. For both the causal and acausal Q models, the resolution limits are linearly proportional to 1/Q for small Q. These theoretical predictions are validated with migration images computed from lossy data.
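
    Collecting the limits quoted above, and applying the stated substitution λ → z/Q to the vertical limit (the explicit Δz_lossy form below is inferred from that substitution, not quoted verbatim from the abstract):

      \Delta x \propto \frac{\lambda z}{L}, \qquad
      \Delta x_{\mathrm{lossy}} \propto \frac{z^{2}}{QL}, \qquad
      \Delta z \propto \lambda, \qquad
      \Delta z_{\mathrm{lossy}} \propto \frac{z}{Q}.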

  6. Resolution limits of migration and linearized waveform inversion images in a lossy medium

    KAUST Repository

    Schuster, Gerard T.

    2017-03-10

    The horizontal- and vertical-resolution limits Δx_lossy and Δz_lossy of post-stack migration and linearized waveform inversion images are derived for lossy data in the far-field approximation. Unlike the horizontal resolution limit Δx ∝ λz/L in a lossless medium, which linearly worsens in depth z, Δx_lossy ∝ z²/(QL) worsens quadratically with depth for a medium with small Q values. Here, Q is the quality factor, λ is the effective wavelength, L is the recording aperture, and loss in the resolution formulae is accounted for by replacing λ with z/Q. In contrast, the lossy vertical-resolution limit Δz_lossy only worsens linearly in depth compared to Δz ∝ λ for a lossless medium. For both the causal and acausal Q models, the resolution limits are linearly proportional to 1/Q for small Q. These theoretical predictions are validated with migration images computed from lossy data.

  7. The effect of the observer vantage point on perceived distortions in linear perspective images.

    Science.gov (United States)

    Todorović, Dejan

    2009-01-01

    Some features of linear perspective images may look distorted. Such distortions appear in two drawings by Jan Vredeman de Vries involving perceived elliptical, instead of circular, pillars and tilted, instead of upright, columns. Distortions may be due to factors intrinsic to the images, such as violations of the so-called Perkins's laws, or factors extrinsic to them, such as observing the images from positions different from their center of projection. When the correct projection centers for the two drawings were reconstructed, it was found that they were very close to the images and, therefore, practically unattainable in normal observation. In two experiments, enlarged versions of images were used as stimuli, making the positions of the projection centers attainable for observers. When observed from the correct positions, the perceived distortions disappeared or were greatly diminished. Distortions perceived from other positions were smaller than would be predicted by geometrical analyses, possibly due to flatness cues in the images. The results are relevant for the practical purposes of creating faithful impressions of 3-D spaces using 2-D images.

  8. Fast optical measurements and imaging of flow mixing

    DEFF Research Database (Denmark)

    Clausen, Sønnik; Fateev, Alexander; Nielsen, Karsten Lindorff

    The project is focused on fast time-resolved infrared measurements of gas temperature and fast IR imaging of flames in various combustion environments. An infrared spectrometer system was developed in the project for fast infrared spectral measurements on an industrial scale using IR fibre-optics, … engine and visualisation of gas flow behaviour in the cylinder. Fast time- and spectrally resolved measurements in the 1.5-5.1 μm spectral range give information about flame characteristics such as gas and particle temperatures, eddies and turbulent gas mixing. Time-resolved gas composition in that spectral range (H2O, CH4, CO2, CO) is one of the key parameters…

  9. Mixed

    Directory of Open Access Journals (Sweden)

    Pau Baya

    2011-05-01

    Remenat (Catalan for "mixed"), "revoltillo" ("scrambled") in Spanish, is a dish which, in Catalunya, consists of a beaten egg cooked with vegetables or other ingredients, normally prawns or asparagus. It is delicious. Scrambled refers to the action of mixing the beaten egg with other ingredients in a pan, normally using a wooden spoon. Thought is frequently an amalgam of past ideas put through a spinner and rhythmically shaken around like a cocktail until a uniform and dense paste is made. This malleable product, rather like a cake mixture, can be deformed by pulling it out, rolling it around, adapting its shape to the commands of one's hands or the tool which is being used on it. In the piece Mixed, the contortion of the wood seeks to reproduce the plasticity of this slow, heavy movement. Each piece lays itself on the next piece consecutively, like a tongue of incandescent lava slowly advancing with unstoppable inertia.

  10. Improved CT-detection of acute bowel ischemia using frequency selective non-linear image blending.

    Science.gov (United States)

    Schneeweiss, Sven; Esser, Michael; Thaiss, Wolfgang; Boesmueller, Hans; Ditt, Hendrik; Nikolau, Konstantin; Horger, Marius

    2017-07-01

    Computed tomography (CT), as a fast and reliable diagnostic technique, is the imaging modality of choice for acute bowel ischemia. However, diagnosis is often difficult, mainly due to low attenuation differences between ischemic and perfused segments. To compare the diagnostic efficacy of a new post-processing tool based on frequency selective non-linear blending with that of conventional linear contrast-enhanced CT (CECT) image blending for the detection of bowel ischemia. Twenty-seven consecutive patients (19 women; mean age = 73.7 years, age range = 50-94 years) with acute bowel ischemia were scanned using multidetector CT (120 kV; 100-200 mAs). Pre-contrast and portal venous scans (65-70 s delay) were acquired. All patients underwent surgery for acute bowel ischemia, and intraoperative diagnosis as well as histologic evaluation of explanted bowel segments was considered the "gold standard." First, two radiologists read the conventional CECT images in which linear blending was adapted for optimal contrast, and second (three weeks later) the frequency selective non-linear blending (F-NLB) image. Attenuation values were compared, both in the involved and non-involved bowel segments, creating ratios between unenhanced and CECT images. The mean attenuation difference between ischemic and non-ischemic wall in the portal venous scan was 69.54 HU (reader 2 = 69.01 HU) higher for F-NLB compared with conventional CECT. Also, the attenuation ratio between contrast-enhanced and pre-contrast CT data for the non-ischemic walls showed significantly higher values for the F-NLB image (CECT: reader 1 = 2.11 (reader 2 = 3.36), F-NLB: reader 1 = 4.46 (reader 2 = 4.98)). Sensitivity in detecting ischemic areas increased significantly for both readers using F-NLB (CECT: reader 1/2 = 53%/65% versus F-NLB: reader 1/2 = 62%/75%). Frequency selective non-linear blending improves detection of bowel ischemia compared with conventional CECT by increasing

  11. Prenatal magnetic resonance imaging: brain normal linear biometric values below 24 gestational weeks

    International Nuclear Information System (INIS)

    Parazzini, C.; Righini, A.; Triulzi, F.; Rustico, M.; Consonni, D.

    2008-01-01

    Prenatal magnetic resonance (MR) imaging is currently used to measure quantitative data concerning brain structural development. At present, morphometric MR imaging studies have been focused mostly on the third trimester of gestational age. However, in many countries, because of legal restriction on abortion timing, the majority of MR imaging fetal examination has to be carried out during the last part of the second trimester of pregnancy (i.e., before the 24th week of gestation). Accurate and reliable normative data of the brain between 20 and 24 weeks of gestation is not available. This report provides easy and practical parametric support to assess those normative data. From a database of 1,200 fetal MR imaging studies, we retrospectively selected 84 studies of the brain of fetuses aged 20-24 weeks of gestation that resulted normal on clinical and radiological follow-up. Fetuses with proved or suspected infections, twin pregnancy, and fetuses of mothers affected by pathology that might have influenced fetal growth were excluded. Linear biometrical measurements of the main cerebral structures were obtained by three experienced pediatric neuroradiologists. A substantial interobserver agreement for each measurements was reached, and normative data with median, maximum, and minimum value were obtained for brain structures. The knowledge of a range of normality and interindividual variability of linear biometrical values for the developing brain between 20th and 24th weeks of gestation may be valuable in assessing normal brain development in clinical settings. (orig.)

  12. Prenatal magnetic resonance imaging: brain normal linear biometric values below 24 gestational weeks

    Energy Technology Data Exchange (ETDEWEB)

    Parazzini, C.; Righini, A.; Triulzi, F. [Children's Hospital "V. Buzzi", Department of Radiology and Neuroradiology, Milan (Italy)]; Rustico, M. [Children's Hospital "V. Buzzi", Department of Obstetrics and Gynecology, Milan (Italy)]; Consonni, D. [Fondazione IRCCS Ospedale Maggiore Policlinico, Unit of Epidemiology, Milan (Italy)]

    2008-10-15

    Prenatal magnetic resonance (MR) imaging is currently used to measure quantitative data concerning brain structural development. At present, morphometric MR imaging studies have been focused mostly on the third trimester of gestational age. However, in many countries, because of legal restriction on abortion timing, the majority of MR imaging fetal examination has to be carried out during the last part of the second trimester of pregnancy (i.e., before the 24th week of gestation). Accurate and reliable normative data of the brain between 20 and 24 weeks of gestation is not available. This report provides easy and practical parametric support to assess those normative data. From a database of 1,200 fetal MR imaging studies, we retrospectively selected 84 studies of the brain of fetuses aged 20-24 weeks of gestation that resulted normal on clinical and radiological follow-up. Fetuses with proved or suspected infections, twin pregnancy, and fetuses of mothers affected by pathology that might have influenced fetal growth were excluded. Linear biometrical measurements of the main cerebral structures were obtained by three experienced pediatric neuroradiologists. A substantial interobserver agreement for each measurements was reached, and normative data with median, maximum, and minimum value were obtained for brain structures. The knowledge of a range of normality and interindividual variability of linear biometrical values for the developing brain between 20th and 24th weeks of gestation may be valuable in assessing normal brain development in clinical settings. (orig.)

  13. The sharing of radiological images by professional mixed martial arts fighters on social media.

    Science.gov (United States)

    Rahmani, George; Joyce, Cormac W; McCarthy, Peter

    2017-06-01

    Mixed martial arts is a sport that has recently enjoyed a significant increase in popularity. This rise in popularity has catapulted many of these "cage fighters" into stardom, and many regularly use social media to reach out to their fans. An interesting result of this interaction on social media is that athletes are sharing images of their radiological examinations when they sustain an injury. To review instances where mixed martial arts fighters shared images of their radiological examinations on social media and in what context they were shared. An Internet search was performed using the Google search engine. Search terms included "MMA," "mixed martial arts," "injury," "scan," "X-ray," "fracture," and "break." Articles which discussed injuries to MMA fighters were examined, and those in which the fighters themselves shared a radiological image of their injury on social media were identified. During our search, we identified 20 MMA fighters who had shared radiological images of their injuries on social media. There were 15 different types of injury, with a fracture of the mid-shaft of the ulna being the most common. The most popular social media platform was Twitter. The most common imaging modality was X-ray (71%). The majority of injuries were sustained during competition (81%), and 35% of these fights resulted in a win for the fighter. Professional mixed martial artists are sharing radiological images of their injuries on social media. This may be in an attempt to connect with fans and raise their profile among other fighters.

  14. Linear-fitting-based similarity coefficient map for tissue dissimilarity analysis in -w magnetic resonance imaging

    International Nuclear Information System (INIS)

    Yu Shao-De; Wu Shi-Bin; Xie Yao-Qin; Wang Hao-Yu; Wei Xin-Hua; Chen Xin; Pan Wan-Long; Hu Jiani

    2015-01-01

    Similarity coefficient mapping (SCM) aims to improve the morphological evaluation of weighted magnetic resonance imaging. However, how to interpret the generated SCM map is still an open question. Moreover, is it possible to extract tissue dissimilarity information based on the theory behind SCM? The primary purpose of this paper is to address these two questions. First, the theory of SCM was interpreted from the perspective of linear fitting. Then, a term was embedded for tissue dissimilarity information. Finally, our method was validated with sixteen human brain image series from multi-echo acquisitions. The generated maps were investigated in terms of signal-to-noise ratio (SNR) and perceived visual quality, and then interpreted from intra- and inter-tissue intensity. Experimental results show that both the perceptibility of anatomical structures and the tissue contrast are improved. More importantly, tissue similarity or dissimilarity can be quantified and cross-validated from pixel intensity analysis. This method benefits image enhancement, tissue classification, malformation detection and morphological evaluation. (paper)
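
    A rough sketch of the per-pixel linear-fitting idea is shown below: each pixel's multi-echo signal is compared against a reference signal and the normalized fit (here a correlation coefficient) becomes the map value. The exact similarity and dissimilarity terms defined in the paper may differ; this is only meant to illustrate the mechanism.

      # Sketch (Python/NumPy) of a similarity-coefficient map via linear fitting.
      import numpy as np

      def similarity_map(echoes, ref_signal):
          """echoes: (n_echoes, y, x) image series; ref_signal: (n_echoes,) from a reference ROI."""
          n, h, w = echoes.shape
          X = echoes.reshape(n, -1)                        # (n_echoes, pixels)
          Xc = X - X.mean(axis=0)
          rc = ref_signal - ref_signal.mean()
          num = rc @ Xc
          den = np.linalg.norm(rc) * np.linalg.norm(Xc, axis=0) + 1e-12
          return (num / den).reshape(h, w)                 # correlation of each pixel with the reference

      echoes = np.random.rand(8, 16, 16)
      print(similarity_map(echoes, echoes[:, 8, 8]).shape)  # (16, 16)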

  15. Image-guided linear accelerator-based spinal radiosurgery for hemangioblastoma.

    Science.gov (United States)

    Selch, Michael T; Tenn, Steve; Agazaryan, Nzhde; Lee, Steve P; Gorgulho, Alessandra; De Salles, Antonio A F

    2012-01-01

    To retrospectively review the efficacy and safety of image-guided linear accelerator-based radiosurgery for spinal hemangioblastomas. Between August 2004 and September 2010, nine patients with 20 hemangioblastomas underwent spinal radiosurgery. Five patients had von Hippel-Lindau disease. Four patients had multiple tumors. Ten tumors were located in the thoracic spine, eight in the cervical spine, and two in the lumbar spine. Tumor volume varied from 0.08 to 14.4 cc (median 0.72 cc). Maximum tumor dimension varied from 2.5 to 24 mm (median 10.5 mm). Radiosurgery was performed with a dedicated 6 MV linear accelerator equipped with a micro-multileaf collimator. Median peripheral tumor dose and prescription isodose were 12 Gy and 90%, respectively. Image guidance was performed by optical tracking of infrared reflectors, fusion of oblique radiographs with dynamically reconstructed digital radiographs, and automatic patient positioning. Follow-up varied from 14 to 86 months (median 51 months). Kaplan-Meier-estimated 4-year overall and solid tumor local control rates were 90% and 95%, respectively. One tumor progressed 12 months after treatment and a new cyst developed 10 months after treatment in another tumor. There has been no clinical or imaging evidence for spinal cord injury. Results of this limited experience indicate linear accelerator-based radiosurgery is safe and effective for spinal cord hemangioblastomas. Longer follow-up is necessary to confirm the durability of tumor control, but these initial results imply linear accelerator-based radiosurgery may represent a therapeutic alternative to surgery for selected patients with spinal hemangioblastomas.

  16. Predicting the multi-domain progression of Parkinson's disease: a Bayesian multivariate generalized linear mixed-effect model.

    Science.gov (United States)

    Wang, Ming; Li, Zheng; Lee, Eun Young; Lewis, Mechelle M; Zhang, Lijun; Sterling, Nicholas W; Wagner, Daymond; Eslinger, Paul; Du, Guangwei; Huang, Xuemei

    2017-09-25

    It is challenging for current statistical models to predict clinical progression of Parkinson's disease (PD) because of the involvement of multi-domains and longitudinal data. Past univariate longitudinal or multivariate analyses from cross-sectional trials have limited power to predict individual outcomes or a single moment. The multivariate generalized linear mixed-effect model (GLMM) under the Bayesian framework was proposed to study multi-domain longitudinal outcomes obtained at baseline, 18-, and 36-month. The outcomes included motor, non-motor, and postural instability scores from the MDS-UPDRS, and demographic and standardized clinical data were utilized as covariates. The dynamic prediction was performed for both internal and external subjects using the samples from the posterior distributions of the parameter estimates and random effects, and also the predictive accuracy was evaluated based on the root of mean square error (RMSE), absolute bias (AB) and the area under the receiver operating characteristic (ROC) curve. First, our prediction model identified clinical data that were differentially associated with motor, non-motor, and postural stability scores. Second, the predictive accuracy of our model for the training data was assessed, and improved prediction was gained in particularly for non-motor (RMSE and AB: 2.89 and 2.20) compared to univariate analysis (RMSE and AB: 3.04 and 2.35). Third, the individual-level predictions of longitudinal trajectories for the testing data were performed, with ~80% observed values falling within the 95% credible intervals. Multivariate general mixed models hold promise to predict clinical progression of individual outcomes in PD. The data was obtained from Dr. Xuemei Huang's NIH grant R01 NS060722 , part of NINDS PD Biomarker Program (PDBP). All data was entered within 24 h of collection to the Data Management Repository (DMR), which is publically available ( https://pdbp.ninds.nih.gov/data-management ).

  17. A mixed integer linear programming approach for optimal DER portfolio, sizing, and placement in multi-energy microgrids

    International Nuclear Information System (INIS)

    Mashayekh, Salman; Stadler, Michael; Cardoso, Gonçalo; Heleno, Miguel

    2017-01-01

    Highlights: • This paper presents a MILP model for optimal design of multi-energy microgrids. • Our microgrid design includes optimal technology portfolio, placement, and operation. • Our model includes microgrid electrical power flow and heat transfer equations. • The case study shows advantages of our model over aggregate single-node approaches. • The case study shows the accuracy of the integrated linearized power flow model. - Abstract: Optimal microgrid design is a challenging problem, especially for multi-energy microgrids with electricity, heating, and cooling loads as well as sources, and multiple energy carriers. To address this problem, this paper presents an optimization model formulated as a mixed-integer linear program, which determines the optimal technology portfolio, the optimal technology placement, and the associated optimal dispatch, in a microgrid with multiple energy types. The developed model uses a multi-node modeling approach (as opposed to an aggregate single-node approach) that includes electrical power flow and heat flow equations, and hence, offers the ability to perform optimal siting considering physical and operational constraints of electrical and heating/cooling networks. The new model is founded on the existing optimization model DER-CAM, a state-of-the-art decision support tool for microgrid planning and design. The results of a case study that compares single-node vs. multi-node optimal design for an example microgrid show the importance of multi-node modeling. It has been shown that single-node approaches are not only incapable of optimal DER placement, but may also result in sub-optimal DER portfolio, as well as underestimation of investment costs.
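
    For a flavour of the formulation, the toy mixed-integer linear program below picks a DER portfolio (build or not) and sizes it to meet a single-period electric load at minimum cost. The technology names, numbers and single-node energy balance are invented for illustration; the DER-CAM-based model in the paper is far richer (multi-node power flow, heat transfer, multiple energy carriers and time steps).

      # Toy DER portfolio/sizing MILP (Python/PuLP); data are illustrative only.
      import pulp

      load_kw = 120.0
      techs = {            # (annualized fixed cost if built, cost per kW capacity, max size in kW)
          "pv":  (2000.0,  90.0, 100.0),
          "chp": (5000.0, 140.0, 200.0),
      }
      grid_price_per_kw = 300.0   # cost of covering residual load from the grid

      m = pulp.LpProblem("der_portfolio", pulp.LpMinimize)
      build = {t: pulp.LpVariable(f"build_{t}", cat="Binary") for t in techs}
      size = {t: pulp.LpVariable(f"size_{t}", lowBound=0) for t in techs}
      grid = pulp.LpVariable("grid_import", lowBound=0)

      for t, (_, _, cap) in techs.items():
          m += size[t] <= cap * build[t]               # capacity allowed only if the technology is built
      m += pulp.lpSum(size.values()) + grid >= load_kw  # meet the load

      # Objective: annualized investment plus grid cost.
      m += pulp.lpSum(fc * build[t] + vc * size[t] for t, (fc, vc, _) in techs.items()) \
           + grid_price_per_kw * grid

      m.solve(pulp.PULP_CBC_CMD(msg=False))
      print({t: (build[t].value(), size[t].value()) for t in techs}, grid.value())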

  18. AUTOMATED ANALYSIS OF QUANTITATIVE IMAGE DATA USING ISOMORPHIC FUNCTIONAL MIXED MODELS, WITH APPLICATION TO PROTEOMICS DATA.

    Science.gov (United States)

    Morris, Jeffrey S; Baladandayuthapani, Veerabhadran; Herrick, Richard C; Sanna, Pietro; Gutstein, Howard

    2011-01-01

    Image data are increasingly encountered and are of growing importance in many areas of science. Much of these data are quantitative image data, which are characterized by intensities that represent some measurement of interest in the scanned images. The data typically consist of multiple images on the same domain and the goal of the research is to combine the quantitative information across images to make inference about populations or interventions. In this paper, we present a unified analysis framework for the analysis of quantitative image data using a Bayesian functional mixed model approach. This framework is flexible enough to handle complex, irregular images with many local features, and can model the simultaneous effects of multiple factors on the image intensities and account for the correlation between images induced by the design. We introduce a general isomorphic modeling approach to fitting the functional mixed model, of which the wavelet-based functional mixed model is one special case. With suitable modeling choices, this approach leads to efficient calculations and can result in flexible modeling and adaptive smoothing of the salient features in the data. The proposed method has the following advantages: it can be run automatically, it produces inferential plots indicating which regions of the image are associated with each factor, it simultaneously considers the practical and statistical significance of findings, and it controls the false discovery rate. Although the method we present is general and can be applied to quantitative image data from any application, in this paper we focus on image-based proteomic data. We apply our method to an animal study investigating the effects of opiate addiction on the brain proteome. Our image-based functional mixed model approach finds results that are missed with conventional spot-based analysis approaches. In particular, we find that the significant regions of the image identified by the proposed method

  19. A FPGA implementation for linearly unmixing a hyperspectral image using OpenCL

    Science.gov (United States)

    Guerra, Raúl; López, Sebastián.; Sarmiento, Roberto

    2017-10-01

    Hyperspectral imaging systems provide images in which single pixels have information from across the electromagnetic spectrum of the scene under analysis. These systems divide the spectrum into many contiguous channels, which may even lie outside the visible part of the spectrum. The main advantage of hyperspectral imaging technology is that certain objects leave unique fingerprints in the electromagnetic spectrum, known as spectral signatures, which make it possible to distinguish between different materials that may look the same in a traditional RGB image. Accordingly, the most important hyperspectral imaging applications are related to distinguishing or identifying materials in a particular scene. In hyperspectral imaging applications under real-time constraints, the huge amount of information provided by the hyperspectral sensors has to be rapidly processed and analysed. For such purposes, parallel hardware devices, such as Field Programmable Gate Arrays (FPGAs), are typically used. However, developing hardware applications typically requires expertise in the specific targeted device, as well as in the tools and methodologies which can be used to perform the implementation of the desired algorithms in the specific device. In this scenario, the Open Computing Language (OpenCL) emerges as a very interesting solution in which a single high-level synthesis design language can be used to efficiently develop applications in multiple and different hardware devices. In this work, the Fast Algorithm for Linearly Unmixing Hyperspectral Images (FUN) has been implemented on a Bitware Stratix V Altera FPGA using OpenCL. The obtained results demonstrate the suitability of OpenCL as a viable design methodology for quickly creating efficient FPGA designs for real-time hyperspectral imaging applications.
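
    For readers new to linear unmixing, the small CPU-side sketch below models a pixel spectrum as a non-negative combination of endmember spectra and recovers the abundances with non-negative least squares. It is a generic unmixing example with random stand-in endmembers, not the FUN algorithm or its OpenCL/FPGA implementation.

      # Generic linear unmixing sketch (Python/SciPy).
      import numpy as np
      from scipy.optimize import nnls

      bands, n_endmembers = 224, 4
      E = np.abs(np.random.rand(bands, n_endmembers))        # endmember signatures (assumed known)
      true_a = np.array([0.6, 0.3, 0.1, 0.0])
      pixel = E @ true_a + 0.01 * np.random.rand(bands)      # synthetic mixed pixel

      a, _ = nnls(E, pixel)                                  # non-negativity-constrained abundances
      a = a / a.sum()                                        # optional sum-to-one normalization
      print(np.round(a, 3))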

  20. Deliberate practice predicts performance over time in adolescent chess players and drop-outs: a linear mixed models analysis.

    Science.gov (United States)

    de Bruin, Anique B H; Smits, Niels; Rikers, Remy M J P; Schmidt, Henk G

    2008-11-01

    In this study, the longitudinal relation between deliberate practice and performance in chess was examined using a linear mixed models analysis. The practice activities and performance ratings of young elite chess players, who were either in, or had dropped out of the Dutch national chess training, were analysed since they had started playing chess seriously. The results revealed that deliberate practice (i.e. serious chess study alone and serious chess play) strongly contributed to chess performance. The influence of deliberate practice was not only observable in current performance, but also over chess players' careers. Moreover, although the drop-outs' chess ratings developed more slowly over time, both the persistent and drop-out chess players benefited to the same extent from investments in deliberate practice. Finally, the effect of gender on chess performance proved to be much smaller than the effect of deliberate practice. This study provides longitudinal support for the monotonic benefits assumption of deliberate practice, by showing that over chess players' careers, deliberate practice has a significant effect on performance, and to the same extent for chess players of different ultimate performance levels. The results of this study are not in line with critique raised against the deliberate practice theory that the factors deliberate practice and talent could be confounded.

  1. Generalized linear mixed model for binary outcomes when covariates are subject to measurement errors and detection limits.

    Science.gov (United States)

    Xie, Xianhong; Xue, Xiaonan; Strickler, Howard D

    2018-01-15

    Longitudinal measurement of biomarkers is important in determining risk factors for binary endpoints such as infection or disease. However, biomarkers are subject to measurement error, and some are also subject to left-censoring due to a lower limit of detection. Statistical methods to address these issues are few. We herein propose a generalized linear mixed model and estimate the model parameters using the Monte Carlo Newton-Raphson (MCNR) method. Inferences regarding the parameters are made by applying Louis's method and the delta method. Simulation studies were conducted to compare the proposed MCNR method with existing methods including the maximum likelihood (ML) method and the ad hoc approach of replacing the left-censored values with half of the detection limit (HDL). The results showed that the performance of the MCNR method is superior to ML and HDL with respect to the empirical standard error, as well as the coverage probability for the 95% confidence interval. The HDL method uses an incorrect imputation method, and the computation is constrained by the number of quadrature points; while the ML method also suffers from the constraint on the number of quadrature points, the MCNR method does not have this limitation and approximates the likelihood function better than the other methods. The improvement of the MCNR method is further illustrated with real-world data from a longitudinal study of local cervicovaginal HIV viral load and its effects on oncogenic HPV detection in HIV-positive women. Copyright © 2017 John Wiley & Sons, Ltd.

  2. A guide to developing resource selection functions from telemetry data using generalized estimating equations and generalized linear mixed models

    Directory of Open Access Journals (Sweden)

    Nicola Koper

    2012-03-01

    Full Text Available Resource selection functions (RSF are often developed using satellite (ARGOS or Global Positioning System (GPS telemetry datasets, which provide a large amount of highly correlated data. We discuss and compare the use of generalized linear mixed-effects models (GLMM and generalized estimating equations (GEE for using this type of data to develop RSFs. GLMMs directly model differences among caribou, while GEEs depend on an adjustment of the standard error to compensate for correlation of data points within individuals. Empirical standard errors, rather than model-based standard errors, must be used with either GLMMs or GEEs when developing RSFs. There are several important differences between these approaches; in particular, GLMMs are best for producing parameter estimates that predict how management might influence individuals, while GEEs are best for predicting how management might influence populations. As the interpretation, value, and statistical significance of both types of parameter estimates differ, it is important that users select the appropriate analytical method. We also outline the use of k-fold cross validation to assess fit of these models. Both GLMMs and GEEs hold promise for developing RSFs as long as they are used appropriately.
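
    As a hedged illustration of the GEE side of this comparison, the sketch below fits a use-availability logistic model with telemetry points clustered by animal and an exchangeable working correlation; GEE reports empirical (robust) standard errors by default, in line with the recommendation above. The data frame is synthetic, and a GLMM alternative with a random intercept per animal could be fit with, for example, statsmodels' BinomialBayesMixedGLM or R's lme4 (not shown).

      # GEE resource-selection sketch (Python/statsmodels) on synthetic data.
      import numpy as np, pandas as pd, statsmodels.api as sm

      rng = np.random.default_rng(1)
      n_animals, n_pts = 12, 200
      df = pd.DataFrame({
          "animal": np.repeat(np.arange(n_animals), n_pts),
          "elev":   rng.standard_normal(n_animals * n_pts),
          "cover":  rng.standard_normal(n_animals * n_pts),
      })
      logit = -0.2 + 0.8 * df.elev - 0.5 * df.cover \
              + np.repeat(rng.normal(0, 0.3, n_animals), n_pts)   # animal-level heterogeneity
      df["used"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      gee = sm.GEE.from_formula("used ~ elev + cover", groups="animal", data=df,
                                family=sm.families.Binomial(),
                                cov_struct=sm.cov_struct.Exchangeable()).fit()
      print(gee.params)      # population-averaged selection coefficients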

  3. Log-gamma linear-mixed effects models for multiple outcomes with application to a longitudinal glaucoma study

    Science.gov (United States)

    Zhang, Peng; Luo, Dandan; Li, Pengfei; Sharpsten, Lucie; Medeiros, Felipe A.

    2015-01-01

    Glaucoma is a progressive disease due to damage in the optic nerve with associated functional losses. Although the relationship between structural and functional progression in glaucoma is well established, there is disagreement on how this association evolves over time. In addressing this issue, we propose a new class of non-Gaussian linear-mixed models to estimate the correlations among subject-specific effects in multivariate longitudinal studies with a skewed distribution of random effects, to be used in a study of glaucoma. This class provides an efficient estimation of subject-specific effects by modeling the skewed random effects through the log-gamma distribution. It also provides more reliable estimates of the correlations between the random effects. To validate the log-gamma assumption against the usual normality assumption of the random effects, we propose a lack-of-fit test using the profile likelihood function of the shape parameter. We apply this method to data from a prospective observation study, the Diagnostic Innovations in Glaucoma Study, to present a statistically significant association between structural and functional change rates that leads to a better understanding of the progression of glaucoma over time. PMID:26075565

  4. Minimising negative externalities cost using 0-1 mixed integer linear programming model in e-commerce environment

    Directory of Open Access Journals (Sweden)

    Akyene Tetteh

    2017-04-01

    Background: Although the Internet boosts business profitability, without certain activities, like efficient transportation and scheduling, products ordered via the Internet may reach their destination very late. The environmental problems (vehicle part disposal, carbon monoxide [CO], nitrogen oxides [NOx] and hydrocarbons [HC]) associated with transportation are mostly not accounted for by industries. Objectives: The main objective of this article is to minimise the negative externality cost in e-commerce environments. Method: A 0-1 mixed integer linear programming (0-1 MILP) model was used to model the problem statement. The result was further analysed using the externality percentage impact factor (EPIF). Results: The simulation results suggest that (1) the mode of ordering refined petroleum products does not impact the cost of distribution, (2) an increase in private cost is directly proportional to the externality cost, (3) externality cost is largely controlled by the government and the number of vehicles used in the distribution, and is in no way influenced by the mode of request (i.e. Internet or otherwise), and (4) externality cost may be reduced by using a more eco-friendly fuel system.

  5. Real time image synthesis on a SIMD linear array processor: algorithms and architectures

    International Nuclear Information System (INIS)

    Letellier, Laurent

    1993-01-01

    Nowadays, image synthesis has become a widely used technique. The impressive computing power required for real time applications necessitates the use of parallel architectures. In this context, we evaluate an SIMD linear parallel architecture, SYMPATI2, dedicated to image processing. The objective of this study is to propose a cost-effective graphics accelerator relying on SYMPATI2's modular and programmable structure. The parallelization of basic image synthesis algorithms on SYMPATI2 enables us to determine its limits in this application field. These limits lead us to evaluate a new structure with a fast intercommunication network between processors, but processors have to support the message consistency, which brings about a strong decrease in performance. To solve this problem, we suggest a simple network whose access priorities are represented by tokens. The simulations of this new architecture indicate that the SIMD mode causes a drastic cut in parallelism. To cope with this drawback, we propose a context switching procedure which reduces the SIMD rigidity and increases the parallelism rate significantly. Then, the graphics accelerator we propose is compared with existing graphics workstations. This comparison indicates that our structure, which is able to accelerate both image synthesis and image processing, is competitive and well-suited for multimedia applications. (author) [fr

  6. Prenatal Brain MR Imaging: Reference Linear Biometric Centiles between 20 and 24 Gestational Weeks.

    Science.gov (United States)

    Conte, G; Milani, S; Palumbo, G; Talenti, G; Boito, S; Rustico, M; Triulzi, F; Righini, A; Izzo, G; Doneda, C; Zolin, A; Parazzini, C

    2018-05-01

    Evaluation of biometry is a fundamental step in prenatal brain MR imaging. While different studies have reported reference centiles for MR imaging biometric data of fetuses in the late second and third trimesters of gestation, no one has reported them in fetuses in the early second trimester. We report centiles of normal MR imaging linear biometric data of a large cohort of fetal brains within 24 weeks of gestation. From the data bases of 2 referral centers of fetal medicine, accounting for 3850 examinations, we retrospectively collected 169 prenatal brain MR imaging examinations of singleton pregnancies, between 20 and 24 weeks of gestational age, with normal brain anatomy at MR imaging and normal postnatal neurologic development. To trace the reference centiles, we used the CG-LMS method. Reference biometric centiles for the developing structures of the cerebrum, cerebellum, brain stem, and theca were obtained. The overall interassessor agreement was adequate for all measurements. Reference biometric centiles of the brain structures in fetuses between 20 and 24 weeks of gestational age may be a reliable tool in assessing fetal brain development. © 2018 by American Journal of Neuroradiology.

  7. Frequency Selective Non-Linear Blending to Improve Image Quality in Liver CT.

    Science.gov (United States)

    Bongers, M N; Bier, G; Kloth, C; Schabel, C; Fritz, J; Nikolaou, K; Horger, M

    2016-12-01

    Purpose: To evaluate the effects of a new frequency selective non-linear blending (NLB) algorithm on the contrast resolution of liver CT with low intravascular concentration of iodine contrast. Materials and Methods: Our local ethics committee approved this retrospective study. The informed consent requirement was waived. CT exams of 25 patients (60 % female, mean age: 65 ± 16 years of age) with late phase CT scans of the liver were included as a model for poor intrahepatic vascular contrast enhancement. Optimal post-processing settings to enhance the contrast of hepatic vessels were determined. Outcome variables included signal-to-noise (SNR) and contrast-to-noise ratios (CNR) of hepatic vessels and SNR of liver parenchyma of standard and post-processed images. Image quality was quantified by two independent readers using Likert scales. Results: The post-processing settings for the visualization of hepatic vasculature were optimal at a center of 115HU, delta of 25HU, and slope of 5. Image noise was statistically indifferent between standard and post-processed images. The CNR between the hepatic vasculature (HV) and liver parenchyma could be significantly increased for liver veins (CNR Standard 1.62 ± 1.10, CNR NLB 3.6 ± 2.94, p = 0.0002) and portal veins (CNR Standard 1.31 ± 0.85, CNR NLB 2.42 ± 3.03, p = 0.046). The SNR of liver parenchyma was significantly higher on post-processed images (SNR NLB 11.26 ± 3.16, SNR Standard 8.85 ± 2.27, p = 0.008). The overall image quality and depiction of HV were significantly higher on post-processed images (NLB DHV: 4 [3 - 4.75], Standard DHV: 2 [1.3 - 2.5], p < 0.0001; NLB IQ: 4 [4 - 4], Standard IQ: 2 [2 - 3], p < 0.0001). Conclusion: The use of a frequency selective non-linear blending algorithm increases the contrast resolution of liver CT and can improve the visibility of the hepatic vasculature in the setting of a low contrast ratio between vessels and the parenchyma. Key Points: • Using the new frequency selective non-linear blending algorithm is feasible in contrast

  8. System and method to create three-dimensional images of non-linear acoustic properties in a region remote from a borehole

    Science.gov (United States)

    Vu, Cung; Nihei, Kurt T.; Schmitt, Denis P.; Skelt, Christopher; Johnson, Paul A.; Guyer, Robert; TenCate, James A.; Le Bas, Pierre-Yves

    2013-01-01

    In some aspects of the disclosure, a method for creating three-dimensional images of non-linear properties and the compressional to shear velocity ratio in a region remote from a borehole using a conveyed logging tool is disclosed. In some aspects, the method includes arranging a first source in the borehole and generating a steered beam of elastic energy at a first frequency; arranging a second source in the borehole and generating a steerable beam of elastic energy at a second frequency, such that the steerable beam at the first frequency and the steerable beam at the second frequency intercept at a location away from the borehole; receiving at the borehole by a sensor a third elastic wave, created by a three wave mixing process, with a frequency equal to a difference between the first and second frequencies and a direction of propagation towards the borehole; determining a location of a three wave mixing region based on the arrangement of the first and second sources and on properties of the third wave signal; and creating three-dimensional images of the non-linear properties using data recorded by repeating the generating, receiving and determining at a plurality of azimuths, inclinations and longitudinal locations within the borehole. The method is additionally used to generate three dimensional images of the ratio of compressional to shear acoustic velocity of the same volume surrounding the borehole.

  9. Designing Non-linear Frequency Modulated Signals For Medical Ultrasound Imaging

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jensen, Jørgen Arendt

    2006-01-01

    In this paper a new method for designing non-linear frequency modulated (NLFM) waveforms for ultrasound imaging is proposed. The objective is to control the amplitude spectrum of the designed waveform and still keep a constant transmit amplitude, so that the transmitted energy is maximized....... The signal-to-noise-ratio can in this way be optimized. The waveform design is based on least squares optimization. A desired amplitude spectrum is chosen, after which the phase spectrum is chosen so that the instantaneous frequency takes on the form of a third order polynomial. The finite energy waveform...
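
    As a rough illustration of a constant-amplitude waveform whose instantaneous frequency follows a third-order polynomial, a short Python sketch is given below. The sweep limits and the cubic shaping parameter alpha are invented placeholders; the paper designs the phase spectrum by least-squares fitting to a desired amplitude spectrum rather than by prescribing the polynomial directly.

        import numpy as np

        def nlfm_waveform(f0, f1, duration, fs, alpha=0.3):
            # Instantaneous frequency follows a cubic law from f0 to f1 over
            # `duration` seconds; `alpha` weights the cubic term (a made-up
            # knob for this sketch, not a parameter from the paper).
            t = np.arange(0.0, duration, 1.0 / fs)
            tau = t / duration                                  # normalised time in [0, 1)
            f_inst = f0 + (f1 - f0) * ((1.0 - alpha) * tau + alpha * tau ** 3)
            phase = 2.0 * np.pi * np.cumsum(f_inst) / fs        # numerical integral of f(t)
            return np.cos(phase)                                # constant transmit amplitude

        # e.g. a 20 us pulse sweeping 2-8 MHz, sampled at 100 MHz
        s = nlfm_waveform(2e6, 8e6, 20e-6, 100e6)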

  10. High energy X-ray photon counting imaging using linear accelerator and silicon strip detectors

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Y., E-mail: cycjty@sophie.q.t.u-tokyo.ac.jp [Department of Bioengineering, the University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan); Shimazoe, K.; Yan, X. [Department of Nuclear Engineering and Management, the University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan); Ueda, O.; Ishikura, T. [Fuji Electric Co., Ltd., Fuji, Hino, Tokyo 191-8502 (Japan); Fujiwara, T. [National Institute of Advanced Industrial Science and Technology, 1-1-1 Umezono, Tsukuba, Ibaraki 305-8568 (Japan); Uesaka, M.; Ohno, M. [Nuclear Professional School, the University of Tokyo, 2-22 Shirakata-shirane, Tokai, Ibaraki 319-1188 (Japan); Tomita, H. [Department of Quantum Engineering, Nagoya University, Furo, Chikusa, Nagoya 464-8603 (Japan); Yoshihara, Y. [Department of Nuclear Engineering and Management, the University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan); Takahashi, H. [Department of Bioengineering, the University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan); Department of Nuclear Engineering and Management, the University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656 (Japan)

    2016-09-11

    A photon counting imaging detector system for high energy X-rays is developed for on-site non-destructive testing of thick objects. One-dimensional silicon strip (1 mm pitch) detectors are stacked to form a two-dimensional edge-on module. Each detector is connected to a 48-channel application specific integrated circuit (ASIC). The threshold-triggered events are recorded by a field programmable gate array based counter in each channel. The detector prototype is tested using 950 kV linear accelerator X-rays. The fast CR shaper (300 ns pulse width) of the ASIC makes it possible to deal with the high instant count rate during the 2 μs beam pulse. The preliminary imaging results of several metal and concrete samples are demonstrated.

  11. High energy X-ray photon counting imaging using linear accelerator and silicon strip detectors

    International Nuclear Information System (INIS)

    Tian, Y.; Shimazoe, K.; Yan, X.; Ueda, O.; Ishikura, T.; Fujiwara, T.; Uesaka, M.; Ohno, M.; Tomita, H.; Yoshihara, Y.; Takahashi, H.

    2016-01-01

    A photon counting imaging detector system for high energy X-rays is developed for on-site non-destructive testing of thick objects. One-dimensional silicon strip (1 mm pitch) detectors are stacked to form a two-dimensional edge-on module. Each detector is connected to a 48-channel application specific integrated circuit (ASIC). The threshold-triggered events are recorded by a field programmable gate array based counter in each channel. The detector prototype is tested using 950 kV linear accelerator X-rays. The fast CR shaper (300 ns pulse width) of the ASIC makes it possible to deal with the high instant count rate during the 2 μs beam pulse. The preliminary imaging results of several metal and concrete samples are demonstrated.

  12. Frequency selective non-linear blending to improve image quality in liver CT

    International Nuclear Information System (INIS)

    Bongers, M.N.; Bier, G.; Kloth, C.; Schabel, C.; Nikolaou, K.; Horger, M.; Fritz, J.

    2016-01-01

    To evaluate the effects of a new frequency selective non-linear blending (NLB) algorithm on the contrast resolution of liver CT with low intravascular concentration of iodine contrast. Our local ethics committee approved this retrospective study. The informed consent requirement was waived. CT exams of 25 patients (60% female, mean age: 65±16 years of age) with late phase CT scans of the liver were included as a model for poor intrahepatic vascular contrast enhancement. Optimal post-processing settings to enhance the contrast of hepatic vessels were determined. Outcome variables included signal-to-noise (SNR) and contrast-to-noise ratios (CNR) of hepatic vessels and SNR of liver parenchyma of standard and post-processed images. Image quality was quantified by two independent readers using Likert scales. The post-processing settings for the visualization of hepatic vasculature were optimal at a center of 115HU, delta of 25HU, and slope of 5. Image noise was statistically indifferent between standard and post-processed images. The CNR between the hepatic vasculature (HV) and liver parenchyma could be significantly increased for liver veins (CNR Standard 1.62±1.10, CNR NLB 3.6±2.94, p=0.0002) and portal veins (CNR Standard 1.31±0.85, CNR NLB 2.42±3.03, p=0.046). The SNR of liver parenchyma was significantly higher on post-processed images (SNR NLB 11.26±3.16, SNR Standard 8.85±2.27, p=0.008). The overall image quality and depiction of HV were significantly higher on post-processed images (NLB DHV: 4 [3-4.75], Standard DHV: 2 [1.3-2.5], p<0.0001; NLB IQ: 4 [4-4], Standard IQ: 2 [2-3], p<0.0001). The use of a frequency selective non-linear blending algorithm increases the contrast resolution of liver CT and can improve the visibility of the hepatic vasculature in the setting of a low contrast ratio between vessels and the parenchyma.
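
    The center/delta/slope parametrisation above can be illustrated with a simple sigmoid-weighted blend. The Python sketch below mixes a base series with an enhanced series using a bump-shaped weight around the chosen center; it is a generic illustration only, not the exact frequency-selective algorithm evaluated in these records.

        import numpy as np

        def nonlinear_blend(base_img, enhance_img, center=115.0, delta=25.0, slope=5.0):
            # Voxels of base_img near `center` (within roughly +/- delta HU) draw
            # mostly on enhance_img; elsewhere the output stays close to base_img.
            rise = 1.0 / (1.0 + np.exp(-slope * (base_img - (center - delta)) / delta))
            fall = 1.0 / (1.0 + np.exp(-slope * (base_img - (center + delta)) / delta))
            weight = rise - fall                      # bump-shaped weight around `center`
            return weight * enhance_img + (1.0 - weight) * base_img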

  13. Frequency selective non-linear blending to improve image quality in liver CT

    Energy Technology Data Exchange (ETDEWEB)

    Bongers, M.N.; Bier, G.; Kloth, C.; Schabel, C.; Nikolaou, K.; Horger, M. [University Hospital of Tuebingen (Germany). Dept. of Diagnostic and Interventional Radiology; Fritz, J. [Johns Hopkins University School of Medicine, Baltimore, MD (United States). Russell H. Morgan Dept. of Radiology and Radiological Science

    2016-12-15

    To evaluate the effects of a new frequency selective non-linear blending (NLB) algorithm on the contrast resolution of liver CT with low intravascular concentration of iodine contrast. Our local ethics committee approved this retrospective study. The informed consent requirement was waived. CT exams of 25 patients (60% female, mean age: 65±16 years of age) with late phase CT scans of the liver were included as a model for poor intrahepatic vascular contrast enhancement. Optimal post-processing settings to enhance the contrast of hepatic vessels were determined. Outcome variables included signal-to-noise (SNR) and contrast-to-noise ratios (CNR) of hepatic vessels and SNR of liver parenchyma of standard and post-processed images. Image quality was quantified by two independent readers using Likert scales. The post-processing settings for the visualization of hepatic vasculature were optimal at a center of 115HU, delta of 25HU, and slope of 5. Image noise was statistically indifferent between standard and post-processed images. The CNR between the hepatic vasculature (HV) and liver parenchyma could be significantly increased for liver veins (CNR{sub Standard} 1.62±1.10, CNR{sub NLB} 3.6±2.94, p=0.0002) and portal veins (CNR{sub Standard} 1.31±0.85, CNR{sub NLB} 2.42±3.03, p=0.046). The SNR of liver parenchyma was significantly higher on post-processed images (SNR{sub NLB} 11.26±3.16, SNR{sub Standard} 8.85±2.27, p=0.008). The overall image quality and depiction of HV were significantly higher on post-processed images (NLB{sub DHV}: 4 [3-4.75], Standard{sub DHV}: 2 [1.3-2.5], p<0.0001; NLB{sub IQ}: 4 [4-4], Standard{sub IQ}: 2 [2-3], p<0.0001). The use of a frequency selective non-linear blending algorithm increases the contrast resolution of liver CT and can improve the visibility of the hepatic vasculature in the setting of a low contrast ratio between vessels and the parenchyma.

  14. Effect of mandibular plane angle on image dimensions in linear tomography

    Directory of Open Access Journals (Sweden)

    Bashizadeh Fakhar H

    2011-02-01

    Full Text Available Background and Aims: Accurate bone measurements are essential for determining the optimal size and length of proposed implants. The radiologist should be aware of the effects of head position on image dimensions in each imaging technique. The purpose of this study was to evaluate the effect of mandibular plane angle on image dimensions in linear tomography. Materials and Methods: In this in vitro study, the vertical dimensions of linear tomograms taken from 3 dry mandibles in different posteroanterior or mediolateral tilts were compared with the actual dimensions. In order to evaluate the effects of head position in linear tomography, 16 series of images were taken while the mandibular plane was tilted by 5, 10, 15 and 20 degrees in anterior, posterior, medial, or lateral angulations, as well as a series of standard images without any tilt of the mandible. Vertical distances between the alveolar crest and the superior border of the inferior alveolar canal were measured in the posterior mandible, and vertical distances between the alveolar crest and the inferior rim were measured in the anterior mandible, at 12 sites on the tomograms. Each bone was then sectioned through the places marked with a radiopaque object. The radiographic values were compared with the actual measurements. Repeated measures ANOVA was used to analyze the data. Results: The findings of this study showed that there was a statistically significant difference between the standard position and a 15º posteroanterior tilt (P<0.001). There was also a statistically significant difference between the standard position and a 10º lateral tilt (P<0.008), a 15º tilt (P<0.001), and a 20º upward tilt (P<0.001). In the standard mandibular position with no tilt, the mean exact error was the same in all regions (0.22±0.19 mm) except the premolar region, in which the mean exact error was calculated as 0.44±0.19 mm. The largest mean exact error among the various posteroanterior tilts was seen with a 20º downward tilt in the canine region (1±0.88 mm

  15. Stimulated Emission Computed Tomography (NSECT) images enhancement using a linear filter in the frequency domain

    Energy Technology Data Exchange (ETDEWEB)

    Viana, Rodrigo S.S.; Tardelli, Tiago C.; Yoriyaz, Helio, E-mail: hyoriyaz@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Jackowski, Marcel P., E-mail: mjack@ime.usp.b [University of Sao Paulo (USP), SP (Brazil). Dept. of Computer Science

    2011-07-01

    In recent years, a new technique for in vivo spectrographic imaging of stable isotopes was presented as Neutron Stimulated Emission Computed Tomography (NSECT). In this technique, a fast neutron beam stimulates stable nuclei in a sample, which emit characteristic gamma radiation. The photon energy is unique and is used to identify the emitting nuclei. The emitted gamma energy spectra can be used for reconstruction of the target tissue image and for determination of the tissue elemental composition. Due to the stochastic nature of the photon emission process in irradiated tissue, one of the most suitable algorithms for tomographic reconstruction is the Expectation-Maximization (E-M) algorithm, since its formulation simultaneously considers the probabilities of photon emission and detection. However, a disadvantage of this algorithm is the introduction of noise in the reconstructed image as the number of iterations increases. This increase can be caused either by features of the algorithm itself or by the low sampling rate of projections used for tomographic reconstruction. In this work, a linear filter in the frequency domain was used in order to improve the quality of the reconstructed images. (author)
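
    A linear frequency-domain filter of the kind mentioned above can be sketched in a few lines of Python. The Gaussian transfer function and the cutoff value are assumptions made for illustration; the record does not state which linear kernel was actually used.

        import numpy as np

        def gaussian_lowpass(image, cutoff=0.15):
            # Multiply the 2-D spectrum by a Gaussian transfer function and
            # transform back; `cutoff` is expressed in normalised spatial frequency.
            ny, nx = image.shape
            fy = np.fft.fftfreq(ny)[:, None]
            fx = np.fft.fftfreq(nx)[None, :]
            h = np.exp(-(fx ** 2 + fy ** 2) / (2.0 * cutoff ** 2))
            return np.real(np.fft.ifft2(np.fft.fft2(image) * h))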

  16. Stimulated Emission Computed Tomography (NSECT) images enhancement using a linear filter in the frequency domain

    International Nuclear Information System (INIS)

    Viana, Rodrigo S.S.; Tardelli, Tiago C.; Yoriyaz, Helio; Jackowski, Marcel P.

    2011-01-01

    In recent years, a new technique for in vivo spectrographic imaging of stable isotopes was presented as Neutron Stimulated Emission Computed Tomography (NSECT). In this technique, a fast neutron beam stimulates stable nuclei in a sample, which emit characteristic gamma radiation. The photon energy is unique and is used to identify the emitting nuclei. The emitted gamma energy spectra can be used for reconstruction of the target tissue image and for determination of the tissue elemental composition. Due to the stochastic nature of the photon emission process in irradiated tissue, one of the most suitable algorithms for tomographic reconstruction is the Expectation-Maximization (E-M) algorithm, since its formulation simultaneously considers the probabilities of photon emission and detection. However, a disadvantage of this algorithm is the introduction of noise in the reconstructed image as the number of iterations increases. This increase can be caused either by features of the algorithm itself or by the low sampling rate of projections used for tomographic reconstruction. In this work, a linear filter in the frequency domain was used in order to improve the quality of the reconstructed images. (author)

  17. QR code-based non-linear image encryption using Shearlet transform and spiral phase transform

    Science.gov (United States)

    Kumar, Ravi; Bhaduri, Basanta; Hennelly, Bryan

    2018-02-01

    In this paper, we propose a new quick response (QR) code-based non-linear technique for image encryption using Shearlet transform (ST) and spiral phase transform. The input image is first converted into a QR code and then scrambled using the Arnold transform. The scrambled image is then decomposed into five coefficients using the ST and the first Shearlet coefficient, C1, is interchanged with a security key before performing the inverse ST. The output after inverse ST is then modulated with a random phase mask and further spiral phase transformed to get the final encrypted image. The first coefficient, C1, is used as a private key for decryption. The sensitivity of the security keys is analysed in terms of correlation coefficient and peak signal-to-noise ratio. The robustness of the scheme is also checked against various attacks such as noise, occlusion and special attacks. Numerical simulation results are shown in support of the proposed technique and an optoelectronic set-up for encryption is also proposed.
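
    The Arnold-transform scrambling step can be illustrated with the short Python sketch below; the QR encoding, Shearlet decomposition, random phase mask and spiral phase transform stages are not reproduced here.

        import numpy as np

        def arnold_scramble(img, iterations=1):
            # Arnold (cat) map on a square N x N image: (x, y) -> (x + y, x + 2y) mod N.
            n = img.shape[0]
            assert img.shape[0] == img.shape[1], "Arnold map needs a square image"
            out = img.copy()
            x, y = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
            for _ in range(iterations):
                scrambled = np.empty_like(out)
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
                out = scrambled
            return out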

  18. Localized irradiation of mouse legs using an image-guided robotic linear accelerator.

    Science.gov (United States)

    Kufeld, Markus; Escobar, Helena; Marg, Andreas; Pasemann, Diana; Budach, Volker; Spuler, Simone

    2017-04-01

    Small animal models are useful for investigating the potential of human satellite cells in muscle regeneration. To suppress the inherent regeneration ability of the tibialis muscle of mice before transplantation of human muscle fibers, a localized irradiation of the mouse leg should be conducted. We analyzed the feasibility of an image-guided robotic irradiation procedure, a routine treatment method in radiation oncology, for the focal irradiation of mouse legs. After conducting a planning computed tomography (CT) scan of one mouse in its customized mold, a three-dimensional dose plan was calculated using a dedicated planning workstation. 18 Gy were applied to the right anterior tibial muscle of 4 healthy mice and 12 mice with immune defects under general anesthesia using an image-guided robotic linear accelerator (LINAC). The mice were fixed in a customized acrylic mold with attached fiducial markers for image-guided tracking. All 16 mice could be irradiated as planned without signs of acute radiation toxicity or anesthesiological side effects. The animals survived until sacrifice after 8, 21 and 49 days as planned. The procedure was straightforward and the irradiation process took 5 minutes to apply the dose of 18 Gy. Localized irradiation of mouse legs using a robotic LINAC could be conducted as planned. It is a feasible procedure without recognizable side effects. Image guidance offers precise dose delivery and preserves adjacent body parts and tissues.

  19. Real-time terahertz imaging through self-mixing in a quantum-cascade laser

    Energy Technology Data Exchange (ETDEWEB)

    Wienold, M., E-mail: martin.wienold@dlr.de; Rothbart, N.; Hübers, H.-W. [Institute of Optical Sensor Systems, German Aerospace Center (DLR), Rutherfordstr. 2, 12489 Berlin (Germany); Department of Physics, Humboldt-Universität zu Berlin, Newtonstr. 15, 12489 Berlin (Germany); Hagelschuer, T. [Institute of Optical Sensor Systems, German Aerospace Center (DLR), Rutherfordstr. 2, 12489 Berlin (Germany); Schrottke, L.; Biermann, K.; Grahn, H. T. [Paul-Drude-Institut für Festkörperelektronik, Leibniz-Institut im Forschungsverbund Berlin e. V., Hausvogteiplatz 5-7, 10117 Berlin (Germany)

    2016-07-04

    We report on a fast self-mixing approach for real-time, coherent terahertz imaging based on a quantum-cascade laser and a scanning mirror. Due to a fast deflection of the terahertz beam, images with frame rates up to several Hz are obtained, eventually limited by the mechanical inertia of the employed scanning mirror. A phase modulation technique allows for the separation of the amplitude and phase information without the necessity of parameter fitting routines. We further demonstrate the potential for transmission imaging.

  20. System and method for investigating sub-surface features and 3D imaging of non-linear property, compressional velocity VP, shear velocity VS and velocity ratio VP/VS of a rock formation

    Science.gov (United States)

    Vu, Cung Khac; Skelt, Christopher; Nihei, Kurt; Johnson, Paul A.; Guyer, Robert; Ten Cate, James A.; Le Bas, Pierre-Yves; Larmat, Carene S.

    2015-06-02

    A system and a method for generating a three-dimensional image of a rock formation, compressional velocity VP, shear velocity VS and velocity ratio VP/VS of a rock formation are provided. A first acoustic signal includes a first plurality of pulses. A second acoustic signal from a second source includes a second plurality of pulses. A detected signal returning to the borehole includes a signal generated by a non-linear mixing process from the first and second acoustic signals in a non-linear mixing zone within an intersection volume. The received signal is processed to extract the signal over noise and/or over signals resulting from linear interaction, and the three-dimensional image is generated.

  1. Nested generalized linear mixed model with ordinal response: Simulation and application on poverty data in Java Island

    Science.gov (United States)

    Widyaningsih, Yekti; Saefuddin, Asep; Notodiputro, Khairil A.; Wigena, Aji H.

    2012-05-01

    The objective of this research is to build a nested generalized linear mixed model using an ordinal response variable with some covariates. There are three main tasks in this paper, i.e. the parameter estimation procedure, a simulation study, and the implementation of the model on real data. In the parameter estimation part, the concepts of threshold, nested random effect, and the computational algorithm are described. The simulation data are built for 3 conditions to study the effect of different parameter values of the random effect distributions. The last task is the implementation of the model on data about poverty in 9 districts of Java Island. The districts are Kuningan, Karawang, and Majalengka, chosen randomly in West Java; Temanggung, Boyolali, and Cilacap from Central Java; and Blitar, Ngawi, and Jember from East Java. The covariates in this model are province, number of bad nutrition cases, number of farmer families, and number of health personnel. In this modeling, all covariates are grouped on an ordinal scale. The unit of observation in this research is the sub-district (kecamatan), nested in district (kabupaten), and districts are nested in province. For the simulation results, ARB (Absolute Relative Bias) and RRMSE (Relative Root Mean Square Error) are used. They show that the province (prov) parameters have the highest bias but the most stable RRMSE in all conditions. The simulation design needs to be improved by adding other conditions, such as a higher correlation between covariates. Furthermore, in the model implementation on the data, only the number of farmer families and the number of health personnel contribute significantly to the level of poverty in Central Java and East Java provinces, and only district 2 (Karawang) of province 1 (West Java) has a random effect different from the others. The source of the data is PODES (Potensi Desa) 2008 from BPS (Badan Pusat Statistik).

  2. Yield response of winter wheat cultivars to environments modeled by different variance-covariance structures in linear mixed models

    Energy Technology Data Exchange (ETDEWEB)

    Studnicki, M.; Mądry, W.; Noras, K.; Wójcik-Gront, E.; Gacek, E.

    2016-11-01

    The main objectives of multi-environmental trials (METs) are to assess cultivar adaptation patterns under different environmental conditions and to investigate genotype by environment (G×E) interactions. Linear mixed models (LMMs) with more complex variance-covariance structures have become recognized and widely used for analyzing METs data. Best practice in METs analysis is to carry out a comparison of competing models with different variance-covariance structures. Improperly chosen variance-covariance structures may lead to biased estimation of means, resulting in incorrect conclusions. In this work we focused on the adaptive response of cultivars to the environments, modeled by LMMs with different variance-covariance structures. We identified possible limitations of inference when using an inadequate variance-covariance structure. In the presented study we used the dataset on grain yield for 63 winter wheat cultivars, evaluated across 18 locations, during three growing seasons (2008/2009-2010/2011) from the Polish Post-registration Variety Testing System. For the evaluation of variance-covariance structures and the description of cultivars' adaptation to environments, we calculated adjusted means for the combination of cultivar and location in models with different variance-covariance structures. We concluded that in order to fully describe cultivars' adaptive patterns, modelers should use the unrestricted variance-covariance structure. The restricted compound symmetry structure may interfere with proper interpretation of cultivars' adaptive patterns. We found that the factor-analytic structure is also a good tool to describe cultivars' reaction to environments, and it can be successfully used for METs data after determining the optimal number of components for each dataset. (Author)

  3. Solution of a Problem Linear Plane Elasticity with Mixed Boundary Conditions by the Method of Boundary Integrals

    Directory of Open Access Journals (Sweden)

    Nahed S. Hussein

    2014-01-01

    Full Text Available A numerical boundary integral scheme is proposed for the solution to the system of field equations of plane elasticity. The stresses are prescribed on one-half of the circle, while the displacements are given on the other half. The considered problem with mixed boundary conditions in the circle is replaced by two problems with homogeneous boundary conditions, one of each type, having a common solution. The equations are reduced to a system of boundary integral equations, which is then discretized in the usual way, and the problem at this stage is reduced to the solution to a rectangular linear system of algebraic equations. The unknowns in this system of equations are the boundary values of four harmonic functions which define the full elastic solution and the unknown boundary values of stresses or displacements on proper parts of the boundary. On the basis of the obtained results, it is inferred that a stress component has a singularity at each of the two separation points, thought to be of logarithmic type. The results are discussed and boundary plots are given. We have also calculated the unknown functions in the bulk directly from the given boundary conditions using the boundary collocation method. The obtained results in the bulk are discussed and three-dimensional plots are given. A tentative form for the singular solution is proposed and the corresponding singular stresses and displacements are plotted in the bulk. The form of the singular tangential stress is seen to be compatible with the boundary values obtained earlier. The efficiency of the used numerical schemes is discussed.

  4. COMSAT: Residue contact prediction of transmembrane proteins based on support vector machines and mixed integer linear programming.

    Science.gov (United States)

    Zhang, Huiling; Huang, Qingsheng; Bei, Zhendong; Wei, Yanjie; Floudas, Christodoulos A

    2016-03-01

    In this article, we present COMSAT, a hybrid framework for residue contact prediction of transmembrane (TM) proteins, integrating a support vector machine (SVM) method and a mixed integer linear programming (MILP) method. COMSAT consists of two modules: COMSAT_SVM, which is trained mainly on position-specific scoring matrix features, and COMSAT_MILP, which is an ab initio method based on optimization models. Contacts predicted by the SVM model are ranked by SVM confidence scores, and a threshold is trained to improve the reliability of the predicted contacts. For TM proteins with no contacts above the threshold, COMSAT_MILP is used. The proposed hybrid contact prediction scheme was tested on two independent TM protein sets based on the contact definition of 14 Å between Cα-Cα atoms. First, using a rigorous leave-one-protein-out cross validation on the training set of 90 TM proteins, an accuracy of 66.8%, a coverage of 12.3%, a specificity of 99.3% and a Matthews' correlation coefficient (MCC) of 0.184 were obtained for residue pairs that are at least six amino acids apart. Second, when tested on a test set of 87 TM proteins, the proposed method showed a prediction accuracy of 64.5%, a coverage of 5.3%, a specificity of 99.4% and an MCC of 0.106. COMSAT shows satisfactory results when compared with 12 other state-of-the-art predictors, and is more robust in terms of prediction accuracy as the length and complexity of TM proteins increase. COMSAT is freely accessible at http://hpcc.siat.ac.cn/COMSAT/. © 2016 Wiley Periodicals, Inc.
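
    The SVM stage with a confidence threshold can be sketched as follows; the feature vectors, labels and threshold value are random placeholders standing in for the PSSM-derived features and the trained cut-off used by COMSAT.

        import numpy as np
        from sklearn.svm import SVC

        X_train = np.random.rand(500, 40)              # placeholder residue-pair features
        y_train = np.random.randint(0, 2, 500)         # 1 = contact, 0 = non-contact
        clf = SVC(kernel="rbf").fit(X_train, y_train)

        scores = clf.decision_function(np.random.rand(10, 40))
        threshold = 0.5                                # placeholder confidence threshold
        predicted_contacts = scores > threshold        # fall back to the MILP module if none pass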

  5. Group-Level EEG-Processing Pipeline for Flexible Single Trial-Based Analyses Including Linear Mixed Models.

    Science.gov (United States)

    Frömer, Romy; Maier, Martin; Abdel Rahman, Rasha

    2018-01-01

    Here we present an application of an EEG processing pipeline customizing EEGLAB and FieldTrip functions, specifically optimized to flexibly analyze EEG data based on single trial information. The key component of our approach is to create a comprehensive 3-D EEG data structure including all trials and all participants maintaining the original order of recording. This allows straightforward access to subsets of the data based on any information available in a behavioral data structure matched with the EEG data (experimental conditions, but also performance indicators, such as accuracy or RTs of single trials). In the present study we exploit this structure to compute linear mixed models (LMMs, using lmer in R) including random intercepts and slopes for items. This information can easily be read out from the matched behavioral data, whereas it might not be accessible in traditional ERP approaches without substantial effort. We further provide easily adaptable scripts for performing cluster-based permutation tests (as implemented in FieldTrip), as a more robust alternative to traditional omnibus ANOVAs. Our approach is particularly advantageous for data with parametric within-subject covariates (e.g., performance) and/or multiple complex stimuli (such as words, faces or objects) that vary in features affecting cognitive processes and ERPs (such as word frequency, salience or familiarity), which are sometimes hard to control experimentally or might themselves constitute variables of interest. The present dataset was recorded from 40 participants who performed a visual search task on previously unfamiliar objects, presented either visually intact or blurred. MATLAB as well as R scripts are provided that can be adapted to different datasets.
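
    The study fits its LMMs with lmer in R; as a rough Python analogue (not the authors' code), a by-item random-intercept-and-slope model on single-trial amplitudes could look like the sketch below, where the file name and column names are hypothetical.

        import pandas as pd
        import statsmodels.formula.api as smf

        # hypothetical export of the matched single-trial behavioral/EEG structure
        trials = pd.read_csv("single_trial_amplitudes.csv")
        model = smf.mixedlm("amplitude ~ condition * rt", trials,
                            groups=trials["item"],
                            re_formula="~condition")   # random intercept and slope per item
        result = model.fit()
        print(result.summary())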

  6. Transformation of Summary Statistics from Linear Mixed Model Association on All-or-None Traits to Odds Ratio.

    Science.gov (United States)

    Lloyd-Jones, Luke R; Robinson, Matthew R; Yang, Jian; Visscher, Peter M

    2018-04-01

    Genome-wide association studies (GWAS) have identified thousands of loci that are robustly associated with complex diseases. The use of linear mixed model (LMM) methodology for GWAS is becoming more prevalent due to its ability to control for population structure and cryptic relatedness and to increase power. The odds ratio (OR) is a common measure of the association of a disease with an exposure (e.g., a genetic variant) and is readily available from logistic regression. However, when the LMM is applied to all-or-none traits it provides estimates of genetic effects on the observed 0-1 scale, a different scale to that in logistic regression. This limits the comparability of results across studies, for example in a meta-analysis, and makes the interpretation of the magnitude of an effect from an LMM GWAS difficult. In this study, we derived transformations from the genetic effects estimated under the LMM to the OR that only rely on summary statistics. To test the proposed transformations, we used real genotypes from two large, publicly available data sets to simulate all-or-none phenotypes for a set of scenarios that differ in underlying model, disease prevalence, and heritability. Furthermore, we applied these transformations to GWAS summary statistics for type 2 diabetes generated from 108,042 individuals in the UK Biobank. In both simulation and real-data application, we observed very high concordance between the transformed OR from the LMM and either the simulated truth or estimates from logistic regression. The transformations derived and validated in this study improve the comparability of results from prospective and already performed LMM GWAS on complex diseases by providing a reliable transformation to a common comparative scale for the genetic effects. Copyright © 2018 by the Genetics Society of America.

  7. Analisa Pengaruh Brand Identity Terhadap Pembentukan Brand Image Dengan Promotion Mix Dan Brand Awareness Sebagai Variabel Intervening Pada Merek Speedo

    OpenAIRE

    Lunardi, Jeconiah

    2015-01-01

    This study was conducted to determine the extent to which the brand identity variable affects the formation of brand image for the Speedo brand. The study uses promotion mix and brand awareness as connecting variables. The main focus of the research is how significant the impact of brand identity on brand image is, both directly and through the intervening variables promotion mix and brand awareness, for the Speedo brand. The research is quantitative, with questionnaires distributed to 150 pela...

  8. Submillisecond mixing in a continuous-flow, microfluidic mixer utilizing mid-infrared hyperspectral imaging detection.

    Science.gov (United States)

    Kise, Drew P; Magana, Donny; Reddish, Michael J; Dyer, R Brian

    2014-02-07

    We report a continuous-flow, microfluidic mixer utilizing mid-infrared hyperspectral imaging detection, with an experimentally determined, submillisecond mixing time. The simple and robust mixer design has the microfluidic channels cut through a polymer spacer that is sandwiched between two IR transparent windows. The mixer hydrodynamically focuses the sample stream with two side flow channels, squeezing it into a thin jet and initiating mixing through diffusion and advection. The detection system generates a mid-infrared hyperspectral absorbance image of the microfluidic sample stream. Calibration of the hyperspectral image yields the mid-IR absorbance spectrum of the sample versus time. A mixing time of 269 μs was measured for a pD jump from 3.2 to above 4.5 in a D2O sample solution of adenosine monophosphate (AMP), which acts as an infrared pD indicator. The mixer was further characterized by comparing experimental results with a simulation of the mixing of an H2O sample stream with a D2O sheath flow, showing good agreement between the two. The IR microfluidic mixer eliminates the need for fluorescence labeling of proteins with bulky, interfering dyes, because it uses the intrinsic IR absorbance of the molecules of interest, and the structural specificity of IR spectroscopy to follow specific chemical changes such as the protonation state of AMP.

  9. Linear and non-linear enhancement for sun glint reduction in advanced very high resolution radiometer (AVHRR) image

    International Nuclear Information System (INIS)

    Roslan, N; Reba, M N M; Askari, M; Halim, M K A

    2014-01-01

    Cloud detection over water surfaces is difficult due to the sun glint effect. Mixed pixels between the two features may lead to inaccurate cloud classification. This problem generally occurs because of the low contrast between the glint and the cloud: both features have almost the same reflectance at visible wavelengths. The piecewise contrast stretch technique preserves the reflectance of the cloud. The band-ratio result was further processed with Sobel edge detection to provide better cloud feature detection. The study achieved an accuracy of about 77.5% in cloud pixel detection
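
    A minimal sketch of the band-ratio plus Sobel step, assuming two co-registered AVHRR bands held as NumPy arrays (the band choice and the threshold rule are placeholders, not the study's settings):

        import numpy as np
        from scipy import ndimage

        def cloud_edges(band_a, band_b, eps=1e-6):
            ratio = band_a / (band_b + eps)               # simple band ratio
            gx = ndimage.sobel(ratio, axis=1)             # horizontal gradient
            gy = ndimage.sobel(ratio, axis=0)             # vertical gradient
            grad = np.hypot(gx, gy)                       # gradient magnitude
            return grad > grad.mean() + 2.0 * grad.std()  # crude edge mask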

  10. Static Hyperspectral Fluorescence Imaging of Viscous Materials Based on a Linear Variable Filter Spectrometer

    Directory of Open Access Journals (Sweden)

    Alexander W. Koch

    2013-09-01

    Full Text Available This paper presents a low-cost hyperspectral measurement setup in a new application based on fluorescence detection in the visible (Vis) wavelength range. The aim of the setup is to take hyperspectral fluorescence images of viscous materials. Based on these images, fluorescent and non-fluorescent impurities in the viscous materials can be detected. For the illumination of the measurement object, a narrow-band high-power light-emitting diode (LED) with a center wavelength of 370 nm was used. The low-cost acquisition unit for the imaging consists of a linear variable filter (LVF) and a complementary metal oxide semiconductor (CMOS) 2D sensor array. The translucent wavelength range of the LVF is from 400 nm to 700 nm. For the confirmation of the concept, static measurements of fluorescent viscous materials with a non-fluorescent impurity have been performed and analyzed. With the presented setup, measurement surfaces in the micrometer range can be provided. The measurable minimum particle size of the impurities is in the nanometer range. The recording rate for the measurements depends on the exposure time of the used CMOS 2D sensor array and has been found to be in the microsecond range.

  11. Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery

    Science.gov (United States)

    Fuerst, Bernhard; Tateno, Keisuke; Johnson, Alex; Fotouhi, Javad; Osgood, Greg; Tombari, Federico; Navab, Nassir

    2017-01-01

    Orthopaedic surgeons are still following the decades old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surface and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools occluded by hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion. PMID:29184659

  12. A mixed integer linear programming model for operational planning of a biodiesel supply chain network from used cooking oil

    Science.gov (United States)

    Jonrinaldi, Hadiguna, Rika Ampuh; Salastino, Rades

    2017-11-01

    Environmental consciousness has received much attention nowadays. It is not only about how to recycle, remanufacture or reuse end-of-life products, but also about how to optimize the operations of the reverse system. Previous research has proposed a design for a reverse supply chain network for biodiesel from used cooking oil. However, that research focused on the design of the supply chain strategy, not on the operations of the supply chain: it only decided how to structure the supply chain over the next few years and how the process of each stage would be conducted in general. It did not consider the operational policies to be followed by the companies in the supply chain. Companies need a policy for each stage of the supply chain operations so as to produce the optimal supply chain system, including how to use all the resources that have been designed in order to achieve the objectives of the supply chain system. Therefore, this paper proposes a model to optimize the operational planning of a biodiesel supply chain network from used cooking oil. A mixed integer linear programme is developed to model the operational planning of the biodiesel supply chain in order to minimize its total operational cost. Based on the implementation of the developed model, over a seven-day operational planning horizon the total operational cost of the biodiesel supply chain is lower than that obtained in the previous research by about 2,743,470.00, or 0.186%. Production costs contributed 74.6% of the total operational cost and the cost of purchasing the used cooking oil contributed 24.1%. The system should therefore pay particular attention to these two aspects, as changes in their values will significantly affect the total operational cost of the supply chain.
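
    To make the modelling approach concrete, the toy 0-1 MILP below (written with the PuLP library) couples binary vehicle-dispatch decisions with continuous shipment volumes. All coefficients are invented placeholders and the model is far smaller than the operational planning model of the paper.

        import pulp

        points, days = range(3), range(2)
        supply = [[40, 35], [25, 30], [50, 45]]          # litres collectable per point and day
        cost_per_litre, vehicle_cost, capacity = 1.2, 100.0, 60.0

        prob = pulp.LpProblem("uco_collection", pulp.LpMinimize)
        ship = pulp.LpVariable.dicts("ship", (points, days), lowBound=0)
        use = pulp.LpVariable.dicts("use_vehicle", (points, days), cat="Binary")

        # objective: purchase/transport cost plus fixed vehicle cost
        prob += pulp.lpSum(cost_per_litre * ship[i][t] + vehicle_cost * use[i][t]
                           for i in points for t in days)
        for i in points:
            for t in days:
                prob += ship[i][t] <= supply[i][t]           # cannot ship more than collected
                prob += ship[i][t] <= capacity * use[i][t]   # shipping requires a dispatched vehicle
        prob += pulp.lpSum(ship[i][t] for i in points for t in days) >= 150  # plant demand

        prob.solve()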

  13. Multiscale imaging and characterization of the effect of mixing temperature on asphalt concrete containing recycled components.

    Science.gov (United States)

    Cavalli, M C; Griffa, M; Bressi, S; Partl, M N; Tebaldi, G; Poulikakos, L D

    2016-10-01

    When producing asphalt concrete mixture with high amounts of reclaimed asphalt pavement (RAP), the mixing temperature plays a significant role in the resulting spatial distribution of the components as well as on the quality of the resulting mixture, in terms of workability during mixing and compaction as well as in service mechanical properties. Asphalt concrete containing 50% RAP was investigated at mixing temperatures of 140, 160 and 180°C, using a multiscale approach. At the microscale, using energy dispersive X-ray spectroscopy the RAP binder film thickness was visualized and measured. It was shown that at higher mixing temperatures this film thickness was reduced. The reduction in film thickness can be attributed to the loss of volatiles as well as the mixing of RAP binder with virgin binder at higher temperatures. X-ray computer tomography was used to characterize statistically the distribution of the RAP and virgin aggregates geometric features: volume, width and shape anisotropy. In addition using X-ray computer tomography, the packing and spatial distribution of the RAP and virgin aggregates was characterized using the nearest neighbour metric. It was shown that mixing temperature may have a positive effect on the spatial distribution of the aggregates but did not affect the packing. The study shows a tendency for the RAP aggregates to be more likely distributed in clusters at lower mixing temperatures. At higher temperatures, they were more homogeneously distributed. This indicates a higher degree of blending both at microscale (binder film) and macroscale (spatial distribution) between RAP and virgin aggregates as a result of increasing mixing temperatures and the ability to quantify this using various imaging techniques. © 2016 The Authors Journal of Microscopy © 2016 Royal Microscopical Society.

  14. Non-linear imaging techniques visualize the lipid profile of C. elegans

    Science.gov (United States)

    Mari, Meropi; Petanidou, Barbara; Palikaras, Konstantinos; Fotakis, Costas; Tavernarakis, Nektarios; Filippidis, George

    2015-07-01

    The non-linear techniques Second and Third Harmonic Generation (SHG, THG) have been employed simultaneously to record three-dimensional (3D) images and localize the lipid content of the muscular areas (ectopic fat) of Caenorhabditis elegans (C. elegans). Simultaneously, Two-Photon Fluorescence (TPEF) was used initially to localize the lipids stained with Nile Red, but also to confirm the potential of THG to image lipids successfully. In addition, GFP labelling of the somatic muscles proves the initial suggestion of the existence of ectopic fat on the muscles and provides complementary information to the SHG imaging of the pharynx. Ectopic fat may be related to a complex of pathological conditions including type-2 diabetes, hypertension and cardiovascular diseases. The elucidation of the molecular path leading to the development of metabolic syndrome is a vital issue with high biological significance and necessitates accurate methods capable of monitoring lipid storage distribution and dynamics in vivo. THG microscopy was employed as a quantitative tool to monitor the lipid accumulation in non-adipose tissues in the pharyngeal muscles of 12 unstained specimens, while the SHG imaging revealed the anatomical structure of the muscles. The ectopic fat accumulation on the pharyngeal muscles increases in wild type (N2) C. elegans between 1 and 9 days of adulthood. This suggests a correlation of ectopic fat accumulation with aging. Our results can provide new evidence relating the deposition of ectopic fat to aging, but also validate SHG and THG microscopy modalities as new, non-invasive tools capable of selectively localizing and quantifying lipid accumulation and distribution.

  15. Scanning Electron Microscope Calibration Using a Multi-Image Non-Linear Minimization Process

    Science.gov (United States)

    Cui, Le; Marchand, Éric

    2015-04-01

    A scanning electron microscope (SEM) calibration approach based on a non-linear minimization procedure is presented in this article. Part of this article was published at the IEEE International Conference on Robotics and Automation (ICRA), 2014. Both the intrinsic and extrinsic parameter estimations are achieved simultaneously by minimizing the registration error. The proposed approach considers multiple images of a multi-scale calibration pattern viewed from different positions and orientations. Since the projection geometry of the scanning electron microscope differs from that of a classical optical sensor, the perspective projection model and the parallel projection model are considered and compared, together with distortion models. Experiments are realized by varying the position and the orientation of a multi-scale chessboard calibration pattern from 300× to 10,000×. The experimental results show the efficiency and the accuracy of this approach.
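
    A bare-bones version of the minimisation idea, using SciPy's least_squares on a deliberately simplified parallel-projection model; the four-parameter model and the synthetic data below are assumptions for illustration, not the paper's full intrinsic/extrinsic parameterisation.

        import numpy as np
        from scipy.optimize import least_squares

        def residuals(params, pts_3d, pts_2d):
            # registration error of a parallel projection with magnification m,
            # in-plane rotation theta and image offsets (tx, ty)
            m, theta, tx, ty = params
            c, s = np.cos(theta), np.sin(theta)
            proj_x = m * (c * pts_3d[:, 0] - s * pts_3d[:, 1]) + tx
            proj_y = m * (s * pts_3d[:, 0] + c * pts_3d[:, 1]) + ty
            return np.concatenate([proj_x - pts_2d[:, 0], proj_y - pts_2d[:, 1]])

        pts_3d = np.random.rand(20, 3)                   # placeholder pattern points
        pts_2d = 500.0 * pts_3d[:, :2] + 5.0             # synthetic detections (m=500, tx=ty=5)
        fit = least_squares(residuals, x0=[400.0, 0.0, 0.0, 0.0], args=(pts_3d, pts_2d))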

  16. General rigid motion correction for computed tomography imaging based on locally linear embedding

    Science.gov (United States)

    Chen, Mianyi; He, Peng; Feng, Peng; Liu, Baodong; Yang, Qingsong; Wei, Biao; Wang, Ge

    2018-02-01

    Patient motion can degrade the quality of computed tomography images, which are typically acquired in cone-beam geometry. Rigid patient motion is characterized by six geometric parameters and is more challenging to correct than in fan-beam geometry. We extend our previous rigid patient motion correction method, based on the principle of locally linear embedding (LLE), from fan-beam to cone-beam geometry and accelerate the computational procedure with the graphics processing unit (GPU)-based all scale tomographic reconstruction Antwerp toolbox. The major merit of our method is that we need neither fiducial markers nor motion-tracking devices. The numerical and experimental studies show that the LLE-based patient motion correction is capable of calibrating the six parameters of the patient motion simultaneously, reducing patient motion artifacts significantly.
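
    As a minimal illustration of the embedding step, each projection can be treated as one high-dimensional sample and mapped to a low-dimensional manifold whose coordinates vary smoothly with the motion parameters. The sinogram below is a random placeholder, and the geometry-estimation and reconstruction stages of the method are not shown.

        import numpy as np
        from sklearn.manifold import LocallyLinearEmbedding

        projections = np.random.rand(360, 256 * 256)     # placeholder: one flattened projection per view
        lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
        embedded = lle.fit_transform(projections)        # 360 x 2 manifold coordinates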

  17. Self-image of the Patients with Head and Neck Cancer: A Mixed Method Research

    Science.gov (United States)

    Nayak, Shalini G; Pai, Mamatha Shivananda; George, Linu Sara

    2016-01-01

    Aim: The aim of the study was to assess the self-image of the patients with head and neck cancers (HNCs) by using a mixed method research. Subjects and Methods: A mixed method approach and triangulation design was used with the aim of assessing the self-image of the patients with HNCs. Data was gathered by using self-administered self-image scale and structured interview. Nested sampling technique was adopted. Sample size for quantitative approach was 54 and data saturation was achieved with seven subjects for qualitative approach. Institutional Ethical Committee clearance was obtained. Results: The results of the study showed that 30 (56%) subjects had positive self-image and 24 (44%) had negative self-image. There was a moderate positive correlation between body image and integrity (r = 0.430, P = 0.001), weak positive correlation between body image and self-esteem (r = 0.270, P = 0.049), and no correlation between self-esteem and integrity (r = 0.203, P = 0.141). The participants also scored maximum (24/24) in the areas of body image and self-esteem. Similar findings were also observed in the phenomenological approach. The themes evolved were immaterial of outer appearance and desire of good health to all. Conclusion: The illness is long-term and impacts the individual 24 h a day. Understanding patients’ self-concept and living experiences of patients with HNC is important for the health care professionals to improve the care. PMID:27559264

  18. Self-image of the patients with head and neck cancer: A mixed method research

    Directory of Open Access Journals (Sweden)

    Shalini G Nayak

    2016-01-01

    Full Text Available Aim: The aim of the study was to assess the self-image of the patients with head and neck cancers (HNCs) by using a mixed method research. Subjects and Methods: A mixed method approach and triangulation design was used with the aim of assessing the self-image of the patients with HNCs. Data was gathered by using self-administered self-image scale and structured interview. Nested sampling technique was adopted. Sample size for quantitative approach was 54 and data saturation was achieved with seven subjects for qualitative approach. Institutional Ethical Committee clearance was obtained. Results: The results of the study showed that 30 (56%) subjects had positive self-image and 24 (44%) had negative self-image. There was a moderate positive correlation between body image and integrity (r = 0.430, P = 0.001), weak positive correlation between body image and self-esteem (r = 0.270, P = 0.049), and no correlation between self-esteem and integrity (r = 0.203, P = 0.141). The participants also scored maximum (24/24) in the areas of body image and self-esteem. Similar findings were also observed in the phenomenological approach. The themes evolved were "immaterial of outer appearance" and "desire of good health to all." Conclusion: The illness is long-term and impacts the individual 24 h a day. Understanding patients' self-concept and living experiences of patients with HNC is important for the health care professionals to improve the care.

  19. Spatial filtering self-velocimeter for vehicle application using a CMOS linear image sensor

    Science.gov (United States)

    He, Xin; Zhou, Jian; Nie, Xiaoming; Long, Xingwu

    2015-03-01

    The idea of using a spatial filtering velocimeter (SFV) to measure the velocity of a vehicle for an inertial navigation system is put forward. The presented SFV is based on a CMOS linear image sensor with a high-speed data rate, large pixel size, and built-in timing generator. These advantages make the image sensor suitable to measure vehicle velocity. The power spectrum of the output signal is obtained by fast Fourier transform and is corrected by a frequency spectrum correction algorithm. This velocimeter was used to measure the velocity of a conveyor belt driven by a rotary table and the measurement uncertainty is ˜0.54%. Furthermore, it was also installed on a vehicle together with a laser Doppler velocimeter (LDV) to measure self-velocity. The measurement result of the designed SFV is compared with that of the LDV. It is shown that the measurement result of the SFV is coincident with that of the LDV. Therefore, the designed SFV is suitable for a vehicle self-contained inertial navigation system.
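
    The core frequency-to-velocity conversion of a spatial filtering velocimeter can be sketched as below: the dominant frequency f of the filtered signal relates to velocity through the grating (pixel) pitch p and optical magnification M as v = f p / M. The pitch, magnification and the parabolic peak interpolation are generic assumptions, not the exact correction algorithm of the paper.

        import numpy as np

        def sfv_velocity(signal, fs, pitch=1.0e-3, magnification=1.0):
            spec = np.abs(np.fft.rfft(signal * np.hanning(signal.size)))
            k = int(np.argmax(spec[1:]) + 1)             # strongest bin, skipping DC
            a, b, c = spec[k - 1], spec[k], spec[k + 1]  # three-point parabolic interpolation
            delta = 0.5 * (a - c) / (a - 2.0 * b + c)
            f_peak = (k + delta) * fs / signal.size      # corrected peak frequency
            return f_peak * pitch / magnification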

  20. Diffuse Optical Tomography for Brain Imaging: Continuous Wave Instrumentation and Linear Analysis Methods

    Science.gov (United States)

    Giacometti, Paolo; Diamond, Solomon G.

    Diffuse optical tomography (DOT) is a functional brain imaging technique that measures cerebral blood oxygenation and blood volume changes. This technique is particularly useful in human neuroimaging measurements because of the coupling between neural and hemodynamic activity in the brain. DOT is a multichannel imaging extension of near-infrared spectroscopy (NIRS). NIRS uses laser sources and light detectors on the scalp to obtain noninvasive hemodynamic measurements from spectroscopic analysis of the remitted light. This review explains how NIRS data analysis is performed using a combination of the modified Beer-Lambert law (MBLL) and the diffusion approximation to the radiative transport equation (RTE). Laser diodes, photodiode detectors, and optical terminals that contact the scalp are the main components in most NIRS systems. Placing multiple sources and detectors over the surface of the scalp allows for tomographic reconstructions that extend the individual measurements of NIRS into DOT. Mathematically arranging the DOT measurements into a linear system of equations that can be inverted provides a way to obtain tomographic reconstructions of hemodynamics in the brain.
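
    Per channel, the MBLL step reduces to a small linear system: dOD(lambda) = (eps_HbO2 * dHbO2 + eps_HbR * dHbR) * L * DPF. The sketch below solves it for two wavelengths; the extinction coefficients, pathlength factor and optical-density changes are placeholder numbers for illustration, not tabulated values.

        import numpy as np

        eps = np.array([[0.30, 2.10],    # placeholder extinction coefficients at wavelength 1: [HbO2, HbR]
                        [1.00, 0.78]])   # placeholder extinction coefficients at wavelength 2
        L, dpf = 3.0, 6.0                # source-detector distance (cm) and differential pathlength factor
        d_od = np.array([0.012, -0.004]) # measured optical-density changes at the two wavelengths

        d_conc = np.linalg.solve(eps * L * dpf, d_od)    # [dHbO2, dHbR] concentration changes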

  1. Locally Linear Embedding of Local Orthogonal Least Squares Images for Face Recognition

    Science.gov (United States)

    Hafizhelmi Kamaru Zaman, Fadhlan

    2018-03-01

    Dimensionality reduction is very important in face recognition since it ensures that high-dimensional data can be mapped to a lower dimensional space without losing salient and integral facial information. Locally Linear Embedding (LLE) has previously been used to serve this purpose; however, the process of acquiring LLE features requires high computation and resources. To overcome this limitation, we propose that a locally-applied Local Orthogonal Least Squares (LOLS) model be used for initial feature extraction before the application of LLE. By constructing the least squares regression under orthogonal constraints, we can preserve more discriminant information in the local subspace of facial features while reducing the overall features into a more compact form that we call LOLS images. LLE can then be applied to the LOLS images to map their representation into a global coordinate system of much lower dimensionality. Several experiments carried out using publicly available face datasets such as AR, ORL, YaleB, and FERET under the Single Sample Per Person (SSPP) constraint demonstrate that our proposed method can reduce the time required to compute LLE features while delivering better accuracy than either LLE or OLS alone. Comparison against several other feature extraction methods and more recent feature-learning methods such as state-of-the-art Convolutional Neural Networks (CNNs) also reveals the superiority of the proposed method under the SSPP constraint.

  2. Improved linearity using harmonic error rejection in a full-field range imaging system

    Science.gov (United States)

    Payne, Andrew D.; Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.

    2008-02-01

    Full field range imaging cameras are used to simultaneously measure the distance for every pixel in a given scene using an intensity modulated illumination source and a gain modulated receiver array. The light is reflected from an object in the scene, and the modulation envelope experiences a phase shift proportional to the target distance. Ideally the waveforms are sinusoidal, allowing the phase, and hence object range, to be determined from four measurements using an arctangent function. In practice these waveforms are often not perfectly sinusoidal, and in some cases square waveforms are instead used to simplify the electronic drive requirements. The waveforms therefore commonly contain odd harmonics which contribute a nonlinear error to the phase determination, and therefore an error in the range measurement. We have developed a unique sampling method to cancel the effect of these harmonics, with the results showing an order of magnitude improvement in the measurement linearity without the need for calibration or lookup tables, while the acquisition time remains unchanged. The technique can be applied to existing range imaging systems without having to change or modify the complex illumination or sensor systems, instead only requiring a change to the signal generation and timing electronics.
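
    The four-measurement arctangent step mentioned above is the standard four-bucket estimator; a minimal sketch is given below. The sample model and modulation frequency are illustrative, and the harmonic-rejection sampling scheme that is the paper's contribution is not reproduced.

```python
import numpy as np

C = 299_792_458.0          # speed of light, m/s

def range_from_four_samples(a0, a1, a2, a3, f_mod):
    """Minimal sketch of the standard four-step (four-bucket) estimator:
    the modulation-envelope phase is recovered with an arctangent from four
    samples taken 90 degrees apart, then converted to distance."""
    phase = np.arctan2(a3 - a1, a0 - a2) % (2 * np.pi)
    return C * phase / (4 * np.pi * f_mod)

# Example: samples of an ideal sinusoid with a 60-degree phase shift
# at a 20 MHz modulation frequency.
true_phase = np.deg2rad(60.0)
samples = [np.cos(true_phase + k * np.pi / 2) for k in range(4)]
print(range_from_four_samples(*samples, f_mod=20e6))   # ~1.25 m
```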

  3. Characterization and recognition of mixed emotional expressions in thermal face image

    Science.gov (United States)

    Saha, Priya; Bhattacharjee, Debotosh; De, Barin K.; Nasipuri, Mita

    2016-05-01

    Facial expressions in infrared imaging have been introduced to solve the problem of illumination, which is an integral constituent of visual imagery. The paper investigates facial skin temperature distribution over mixed thermal facial expressions in our created face database, where six are basic expressions and the remaining 12 are mixtures of those basic expressions. Temperature analysis has been performed on three facial regions of interest (ROIs): periorbital, supraorbital and mouth. Temperature variability of the ROIs in different expressions has been measured using statistical parameters. The temperature variation measurements in the ROIs of a particular expression form a vector, which is later used in recognition of mixed facial expressions. Investigations show that facial features in mixed facial expressions can be characterized by positive-emotion-induced and negative-emotion-induced facial features. The supraorbital region is useful for differentiating basic expressions from mixed expressions. Analysis and interpretation of mixed expressions have been conducted with the help of box-and-whisker plots. A facial region containing a mixture of two expressions generally induces less temperature change than the corresponding region in a basic expression.

  4. Imaging ultrasonic dispersive guided wave energy in long bones using linear radon transform.

    Science.gov (United States)

    Tran, Tho N H T; Nguyen, Kim-Cuong T; Sacchi, Mauricio D; Le, Lawrence H

    2014-11-01

    Multichannel analysis of dispersive ultrasonic energy requires a reliable mapping of the data from the time-distance (t-x) domain to the frequency-wavenumber (f-k) or frequency-phase velocity (f-c) domain. The mapping is usually performed with the classic 2-D Fourier transform (FT) with a subsequent substitution and interpolation via c = 2πf/k. The extracted dispersion trajectories of the guided modes lack the resolution in the transformed plane to discriminate wave modes. The resolving power associated with the FT is closely linked to the aperture of the recorded data. Here, we present a linear Radon transform (RT) to image the dispersive energies of the recorded ultrasound wave fields. The RT is posed as an inverse problem, which allows implementation of a regularization strategy to enhance the focusing power. We choose a Cauchy regularization for the high-resolution RT. Three forms of the Radon transform (adjoint, damped least-squares, and high-resolution) are described and compared with respect to robustness using simulated and cervine bone data. The RT also depends on the data aperture, but not as severely as does the FT. With the RT, the resolution of the dispersion panel could be improved by up to around 300% over that of the FT. Among the Radon solutions, the high-resolution RT delineated the guided wave energy with much better imaging resolution (at least 110%) than the other two forms. The Radon operator can also accommodate unevenly spaced records. The results of the study suggest that the high-resolution RT is a valuable imaging tool for extracting dispersive guided wave energies under limited aperture. Copyright © 2014 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
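
    A minimal sketch of the damped least-squares variant described above is given below: for each frequency a linear-moveout operator over trial slownesses is built and inverted with Tikhonov damping. The geometry, units, and damping value are invented, and the Cauchy-regularized high-resolution solver is not reproduced.

```python
import numpy as np

def damped_ls_radon(data_fx, freqs, offsets, slowness, mu=0.1):
    """Minimal damped least-squares linear Radon transform, solved frequency
    by frequency.  data_fx: (n_freq, n_offset) complex spectra of the traces;
    returns an (n_freq, n_slowness) dispersion panel."""
    panel = np.zeros((len(freqs), len(slowness)), dtype=complex)
    for i, f in enumerate(freqs):
        # Forward operator: each trial slowness p maps to a linear moveout
        # exp(-i * 2*pi * f * p * x) across the receiver offsets x.
        L = np.exp(-2j * np.pi * f * np.outer(offsets, slowness))
        lhs = L.conj().T @ L + mu * np.eye(len(slowness))
        panel[i] = np.linalg.solve(lhs, L.conj().T @ data_fx[i])
    return panel

# Tiny usage example with synthetic numbers (assumed units: m, s/m, Hz).
offsets = np.linspace(0.0, 0.06, 16)                 # receiver positions
slowness = np.linspace(2e-4, 1e-3, 50)               # trial slownesses 1/c
freqs = np.linspace(1e5, 1e6, 64)                    # frequency axis
data_fx = np.zeros((len(freqs), len(offsets)), dtype=complex)
print(damped_ls_radon(data_fx, freqs, offsets, slowness).shape)   # (64, 50)
```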

  5. Linear single-step image reconstruction in the presence of nonscattering regions

    Science.gov (United States)

    Dehghani, H.; Delpy, D. T.

    2002-06-01

    There is growing interest in the use of near-infrared spectroscopy for the noninvasive determination of the oxygenation level within biological tissue. Stemming from this application, there has been further research in using this technique for obtaining tomographic images of the neonatal head, with the view of determining the level of oxygenated and deoxygenated blood within the brain. Because of computational complexity, methods used for numerical modeling of photon transfer within tissue have usually been limited to the diffusion approximation of the Boltzmann transport equation. The diffusion approximation, however, is not valid in regions of low scatter, such as the cerebrospinal fluid. Methods have been proposed for dealing with nonscattering regions within diffusing materials through the use of a radiosity-diffusion model. Currently, this new model assumes prior knowledge of the void region; therefore it is instructive to examine the errors introduced in applying a simple diffusion-based reconstruction scheme in cases where a nonscattering region exists. We present reconstructed images, using linear algorithms, of models that contain a nonscattering region within a diffusing material. The forward data are calculated by using the radiosity-diffusion model, and the inverse problem is solved by using either the radiosity-diffusion model or the diffusion-only model. When using data from a model containing a clear layer and reconstructing with the correct model, one can reconstruct the anomaly, but the qualitative accuracy and the position of the reconstructed anomaly depend on the size and the position of the clear regions. If the inverse model has no information about the clear regions (i.e., it is a purely diffusing model), an anomaly can be reconstructed, but the resulting image has very poor qualitative accuracy and poor localization of the anomaly. The errors in quantitative and localization accuracies depend on the size and location of the clear regions.

  6. From diets to foods: using linear programming to formulate a nutritious, minimum-cost porridge mix for children aged 1 to 2 years.

    Science.gov (United States)

    De Carvalho, Irene Stuart Torrié; Granfeldt, Yvonne; Dejmek, Petr; Håkansson, Andreas

    2015-03-01

    Linear programming has been used extensively as a tool for nutritional recommendations. Extending the methodology to food formulation presents new challenges, since not all combinations of nutritious ingredients will produce an acceptable food. Furthermore, it would help in implementation and in ensuring the feasibility of the suggested recommendations. To extend the previously used linear programming methodology from diet optimization to food formulation using consistency constraints. In addition, to exemplify usability using the case of a porridge mix formulation for emergency situations in rural Mozambique. The linear programming method was extended with a consistency constraint based on previously published empirical studies on swelling of starch in soft porridges. The new method was exemplified using the formulation of a nutritious, minimum-cost porridge mix for children aged 1 to 2 years for use as a complete relief food, based primarily on local ingredients, in rural Mozambique. A nutritious porridge fulfilling the consistency constraints was found; however, the minimum cost was unfeasible with local ingredients only. This illustrates the challenges in formulating nutritious yet economically feasible foods from local ingredients. The high cost was caused by the high cost of mineral-rich foods. A nutritious, low-cost porridge that fulfills the consistency constraints was obtained by including supplements of zinc and calcium salts as ingredients. The optimizations were successful in fulfilling all constraints and provided a feasible porridge, showing that the extended constrained linear programming methodology provides a systematic tool for designing nutritious foods.
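
    The structure of the optimization described above (minimize ingredient cost subject to nutrient lower bounds plus a consistency-related upper bound) can be expressed directly as a linear program. The sketch below uses SciPy; every ingredient name, price, nutrient value, and the starch limit standing in for the consistency constraint are invented placeholders, not data from the study.

```python
import numpy as np
from scipy.optimize import linprog

# Minimal sketch of cost-minimizing food formulation with a consistency-type
# constraint.  All numbers below are invented placeholders.
ingredients = ["maize flour", "bean flour", "oil", "sugar"]
cost = np.array([0.5, 1.2, 2.0, 0.8])          # price per 100 g

# Rows: energy (kcal), protein (g), starch (g) per 100 g of ingredient.
energy  = np.array([360, 340, 880, 400])
protein = np.array([8, 22, 0, 0])
starch  = np.array([70, 45, 0, 0])

# linprog handles only <= constraints, so lower bounds are negated.
A_ub = np.vstack([-energy, -protein, starch])
b_ub = np.array([-900, -20, 120])              # >=900 kcal, >=20 g protein, <=120 g starch
A_eq = np.ones((1, 4))                          # total weight fixed at 250 g (2.5 x 100 g)
b_eq = np.array([2.5])

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4, method="highs")
print(dict(zip(ingredients, res.x)) if res.success else res.message)
```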

  7. Neuromyelitis optica with linear enhancement of corpus callosum in brain magnetic resonance imaging with contrast: a case report.

    Science.gov (United States)

    Sahraian, Mohammad Ali; Moghadasi, Abdorreza Naser; Owji, Mahsa; Naghshineh, Hoda; Minagar, Alireza

    2015-06-10

    Neuromyelitis optica is a demyelinating disease of the central nervous system with various patterns of brain lesions. Corpus callosum may be involved in both multiple sclerosis and neuromyelitis optica. Previous case reports have demonstrated that callosal lesions in neuromyelitis optica are usually large and edematous and have a heterogeneous intensity showing a "marbled pattern" in the acute phase. Their size and intensity may reduce with time or disappear in the chronic stages. In this report, we describe a case of a 25-year-old Caucasian man with neuromyelitis optica who presented clinically with optic neuritis and myelitis. His brain magnetic resonance imaging demonstrated linear enhancement of the corpus callosum. Brain images with contrast agent added also showed linear ependymal layer enhancement of the lateral ventricles, which has been reported in this disease previously. Linear enhancement of corpus callosum in magnetic resonance imaging with contrast agent could help in diagnosing neuromyelitis optica and differentiating it from other demyelinating disease, especially multiple sclerosis.

  8. Optimization of the time series NDVI-rainfall relationship using linear mixed-effects modeling for the anti-desertification area in the Beijing and Tianjin sandstorm source region

    Science.gov (United States)

    Wang, Jin; Sun, Tao; Fu, Anmin; Xu, Hao; Wang, Xinjie

    2018-05-01

    Degradation in drylands is a critically important global issue that threatens ecosystems and the environment in many ways. Researchers have tried to use remote sensing data and meteorological data to perform residual trend analysis and identify human-induced vegetation changes. However, complex interactions between vegetation and climate, soil units and topography have not yet been considered. Data used in the study included annual accumulated Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m normalized difference vegetation index (NDVI) from 2002 to 2013, accumulated rainfall from September to August, a digital elevation model (DEM) and soil units. This paper presents linear mixed-effects (LME) modeling methods for the NDVI-rainfall relationship. We developed linear mixed-effects models that considered the random effects of sample points nested in soil units for nested two-level modeling, and single-level modeling of soil units and sample points, respectively. Additionally, three functions, including the exponential function (exp), the power function (power), and the constant plus power function (CPP), were tested to remove heterogeneity, and an additional three correlation structures, including the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)] and the compound symmetry structure (CS), were used to address the spatiotemporal correlations. It was concluded that the nested two-level model considering both heteroscedasticity with CPP and spatiotemporal correlation with ARMA(1,1) showed the best performance (AMR = 0.1881, RMSE = 0.2576, adj-R² = 0.9593). Variations between soil units and sample points that may have an effect on the NDVI-rainfall relationship should be included in model structures, and linear mixed-effects modeling achieves this in an effective and accurate way.
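
    A minimal sketch of the random-effects structure described above (sample points nested in soil units, random intercept and rainfall slope) is shown below with statsmodels. The column names and data are synthetic, and the CPP variance function and ARMA(1,1) correlation structure used in the study are not available in statsmodels MixedLM (R's nlme would be needed for those parts).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data; real MODIS NDVI and rainfall series would go here.
rng = np.random.default_rng(1)
n = 600
df = pd.DataFrame({
    "ndvi": rng.uniform(0.1, 0.8, n),
    "rainfall": rng.uniform(100, 600, n),
    "soil_unit": rng.choice(["A", "B", "C"], n),
    "point": rng.integers(0, 20, n),
})

model = smf.mixedlm(
    "ndvi ~ rainfall", df,
    groups="soil_unit",                       # upper level: soil units
    re_formula="~rainfall",                   # random intercept and slope per soil unit
    vc_formula={"point": "0 + C(point)"},     # sample points nested within soil units
)
result = model.fit()
print(result.summary())
```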

  9. Mixing and transport during pharmaceutical twin-screw wet granulation: experimental analysis via chemical imaging.

    Science.gov (United States)

    Kumar, Ashish; Vercruysse, Jurgen; Toiviainen, Maunu; Panouillot, Pierre-Emmanuel; Juuti, Mikko; Vanhoorne, Valérie; Vervaet, Chris; Remon, Jean Paul; Gernaey, Krist V; De Beer, Thomas; Nopens, Ingmar

    2014-07-01

    Twin-screw granulation is a promising continuous alternative for traditional batch high shear wet granulation (HSWG). The extent of HSWG in a twin screw granulator (TSG) is greatly governed by the residence time of the granulation materials in the TSG and degree of mixing. In order to determine the residence time distribution (RTD) and mixing in TSG, mostly visual observation and particle tracking methods are used, which are either inaccurate and difficult for short RTD, or provide an RTD only for a finite number of preferential tracer paths. In this study, near infrared chemical imaging, which is more accurate and provides a complete RTD, was used. The impact of changes in material throughput (10-17 kg/h), screw speed (500-900 rpm), number of kneading discs (2-12) and stagger angle (30-90°) on the RTD and axial mixing of the material was characterised. The experimental RTD curves were used to calculate the mean residence time, mean centred variance and the Péclet number to determine the axial mixing and predominance of convective over dispersive transport. The results showed that screw speed is the most influential parameter in terms of RTD and axial mixing in the TSG and established a significant interaction between screw design parameters (number and stagger angle of kneading discs) and the process parameters (material throughput and number of kneading discs). The results of the study will allow the development and validation of a transport model capable of predicting the RTD and macro-mixing in the TSG. These can later be coupled with a population balance model in order to predict granulation yields in a TSG more accurately. Copyright © 2014 Elsevier B.V. All rights reserved.
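
    The residence-time statistics named above (mean residence time, mean-centred variance, Péclet number) can be computed from a measured tracer curve as in the sketch below. The closed-closed axial-dispersion relation is used to back out the Péclet number, and the synthetic tracer curve is purely illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def rtd_summary(t, c):
    """Minimal sketch: mean residence time, mean-centred variance and the
    Peclet number from a measured tracer concentration curve c(t).  Pe is
    obtained from the closed-closed axial-dispersion relation
    sigma_theta^2 = 2/Pe - (2/Pe^2) * (1 - exp(-Pe))."""
    e = c / np.trapz(c, t)                      # normalise to an RTD, E(t)
    t_mean = np.trapz(t * e, t)                 # mean residence time
    var = np.trapz((t - t_mean) ** 2 * e, t)    # mean-centred variance
    var_theta = var / t_mean ** 2               # dimensionless variance
    pe = brentq(lambda p: 2 / p - 2 / p**2 * (1 - np.exp(-p)) - var_theta,
                1e-3, 1e6)
    return t_mean, var, pe

# Example with a synthetic, roughly log-normal tracer curve (arbitrary units).
t = np.linspace(0.1, 120, 500)
c = np.exp(-((np.log(t) - np.log(30.0)) ** 2) / (2 * 0.3 ** 2))
print(rtd_summary(t, c))
```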

  10. Local Genealogies in a Linear Mixed Model for Genome-wide Association Mapping in Complex Pedigreed Populations

    DEFF Research Database (Denmark)

    Sahana, Goutam; Mailund, Thomas; Lund, Mogens Sandø

    2011-01-01

    Introduction: The state-of-the-art for dealing with multiple levels of relationship among the samples in genome-wide association studies (GWAS) is unified mixed model analysis (MMA). This approach is very flexible, can be applied to both family-based and population-based samples, and can be extended to incorporate other effects in a straightforward and rigorous fashion. Here, we present a complementary approach, called ‘GENMIX (genealogy based mixed model)’, which combines advantages from two powerful GWAS methods: genealogy-based haplotype grouping and MMA. Subjects and Methods: We validated...

  11. The effect of Moidal non-linear blending function for dual-energy CT on CT image quality

    International Nuclear Information System (INIS)

    Zhang Fan; Yang Li

    2011-01-01

    Objective: To compare linear and non-linear blending functions for dual-energy CT and to evaluate their effect on CT image quality. Methods: The model was made of a piece of fresh pork liver inserted with 5 syringes containing various concentrations of iodine solution (16.3, 26.4, 48.7, 74.6 and 112.3 HU). Linear blending images were automatically reformatted after the model was scanned in dual-energy mode. Non-linear blending images were reformatted using the optimal-contrast software on the Syngo workstation. Images were divided into 3 groups: the linear blending group, the non-linear blending group and the 120 kV group. Contrast-to-noise ratios (CNR) were measured and calculated in the 3 groups, and the figure of merit (FOM) values were compared between groups using one-way ANOVA. Twenty patients scanned in dual-energy mode were randomly selected, and the SNR of their liver, renal cortex, spleen, pancreas and abdominal aorta was measured. The independent-sample t test was used to compare the signal-to-noise ratio (SNR) between the linear and non-linear blending groups. Two readers' agreement scores and a single-blind method were used to investigate the conspicuity difference between the linear and non-linear blending groups. Results: For the models of different CT values, the FOM values in the non-linear blending group were 20.65±8.18, 11.40±4.25, 1.60±0.82, 2.40±1.13 and 45.49±17.86. In the 74.6 HU and 112.3 HU models, the differences in FOM values among the three groups were statistically significant (P<0.05); the values were 0.30±0.06 and 14.43±4.59 for the linear blending group, and 0.22±0.05 and 15.31±5.16 for the 120 kV group, with the non-linear blending group showing a better FOM. The SNR of the renal cortex and abdominal aorta was 19.2±5.1 and 36.5±13.9 for the non-linear blending group, versus 12.4±3.8 and 22.6±7.0 for the linear blending group. There were statistically
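
    A common way to implement this kind of non-linear blending is a sigmoidal weighting of the low- and high-kV images, as sketched below. The centre and width of the sigmoid are illustrative placeholders rather than the settings of the Syngo optimal-contrast tool.

```python
import numpy as np

def nonlinear_blend(low_kv, high_kv, center_hu=150.0, width_hu=100.0):
    """Minimal sketch of sigmoidal non-linear blending of dual-energy CT
    images: high-attenuation (iodine-rich) voxels are taken mostly from the
    low-kV image, the rest mostly from the high-kV image.  Both inputs are
    HU arrays of identical shape; the sigmoid parameters are illustrative."""
    weight = 1.0 / (1.0 + np.exp(-(low_kv - center_hu) / width_hu))
    return weight * low_kv + (1.0 - weight) * high_kv

low = np.array([[-50.0, 80.0], [200.0, 400.0]])    # 80 kV image (HU)
high = np.array([[-45.0, 60.0], [150.0, 280.0]])   # 140 kV image (HU)
print(nonlinear_blend(low, high))
```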

  12. Development of a shortleaf pine individual-tree growth equation using non-linear mixed modeling techniques

    Science.gov (United States)

    Chakra B. Budhathoki; Thomas B. Lynch; James M. Guldin

    2010-01-01

    Nonlinear mixed-modeling methods were used to estimate parameters in an individual-tree basal area growth model for shortleaf pine (Pinus echinata Mill.). Shortleaf pine individual-tree growth data were available from over 200 permanently established 0.2-acre fixed-radius plots located in naturally-occurring even-aged shortleaf pine forests on the...

  13. Direct measurement of the image displacement instability in a linear induction accelerator

    Science.gov (United States)

    Burris-Mog, T. J.; Ekdahl, C. A.; Moir, D. C.

    2017-06-01

    The image displacement instability (IDI) has been measured on the 20 MeV Axis I of the dual axis radiographic hydrodynamic test facility and compared to theory. A 0.23 kA electron beam was accelerated across 64 gaps in a low solenoid focusing field, and the position of the beam centroid was measured to 34.3 meters downstream from the cathode. One beam dynamics code was used to model the IDI from first principles, while another code characterized the effects of the resistive wall instability and the beam break-up (BBU) instability. Although the BBU instability was not found to influence the IDI, it appears that the IDI influences the BBU. Because the BBU theory does not fully account for the dependence on beam position for coupling to cavity transverse magnetic modes, the effect of the IDI is missing from the BBU theory. This becomes of particular concern to users of linear induction accelerators operating in or near low magnetic guide-field tunes.

  14. Direct measurement of the image displacement instability in a linear induction accelerator

    Directory of Open Access Journals (Sweden)

    T. J. Burris-Mog

    2017-06-01

    Full Text Available The image displacement instability (IDI) has been measured on the 20 MeV Axis I of the dual axis radiographic hydrodynamic test facility and compared to theory. A 0.23 kA electron beam was accelerated across 64 gaps in a low solenoid focusing field, and the position of the beam centroid was measured to 34.3 meters downstream from the cathode. One beam dynamics code was used to model the IDI from first principles, while another code characterized the effects of the resistive wall instability and the beam break-up (BBU) instability. Although the BBU instability was not found to influence the IDI, it appears that the IDI influences the BBU. Because the BBU theory does not fully account for the dependence on beam position for coupling to cavity transverse magnetic modes, the effect of the IDI is missing from the BBU theory. This becomes of particular concern to users of linear induction accelerators operating in or near low magnetic guide-field tunes.

  15. Predicting respiratory motion signals for image-guided radiotherapy using multi-step linear methods (MULIN)

    International Nuclear Information System (INIS)

    Ernst, Floris; Schweikard, Achim

    2008-01-01

    Forecasting of respiration motion in image-guided radiotherapy requires algorithms that can accurately and efficiently predict target location. Improved methods for respiratory motion forecasting were developed and tested. MULIN, a new family of prediction algorithms based on linear expansions of the prediction error, was developed and tested. Computer-generated data with a prediction horizon of 150 ms was used for testing in simulation experiments. MULIN was compared to Least Mean Squares-based predictors (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS) and a multi-frequency Extended Kalman Filter (EKF) approach. The in vivo performance of the algorithms was tested on data sets of patients who underwent radiotherapy. The new MULIN methods are highly competitive, outperforming the LMS and the EKF prediction algorithms in real-world settings and performing similarly to optimized nLMS and wLMS prediction algorithms. On simulated, periodic data the MULIN algorithms are outperformed only by the EKF approach due to its inherent advantage in predicting periodic signals. In the presence of noise, the MULIN methods significantly outperform all other algorithms. The MULIN family of algorithms is a feasible tool for the prediction of respiratory motion, performing as well as or better than conventional algorithms while requiring significantly lower computational complexity. The MULIN algorithms are of special importance wherever high-speed prediction is required. (orig.)
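
    The record compares MULIN against LMS-family predictors; since the MULIN update itself is not spelled out here, the sketch below shows a plain normalized LMS (nLMS) predictor of the kind used as a baseline. The filter order, step size, sampling rate, and breathing-like test signal are all illustrative.

```python
import numpy as np

def nlms_predict(signal, horizon, order=8, mu=0.5, eps=1e-6):
    """Minimal sketch of a normalized LMS predictor (a baseline the MULIN
    family is compared against, not MULIN itself).  Predicts
    signal[k + horizon] from the `order` most recent samples."""
    w = np.zeros(order)
    predictions = np.full_like(signal, np.nan)
    for k in range(order, len(signal)):
        x_now = signal[k - order:k][::-1]            # most recent sample first
        if k + horizon < len(signal):
            predictions[k + horizon] = w @ x_now     # prediction issued at time k
        # Adapt using the most recent fully available pair: the regressor
        # ending at time k - horizon and its now-observed target signal[k].
        if k - horizon >= order:
            x_old = signal[k - horizon - order:k - horizon][::-1]
            err = signal[k] - w @ x_old
            w += mu * err * x_old / (x_old @ x_old + eps)
    return predictions

# Example: predict a noisy breathing-like trace 150 ms (6 samples at 40 Hz) ahead.
t = np.arange(0, 60, 0.025)
trace = np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.default_rng(2).standard_normal(t.size)
pred = nlms_predict(trace, horizon=6)
print(np.sqrt(np.nanmean((pred - trace) ** 2)))      # prediction RMSE
```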

  16. Improved application of independent component analysis to functional magnetic resonance imaging study via linear projection techniques.

    Science.gov (United States)

    Long, Zhiying; Chen, Kewei; Wu, Xia; Reiman, Eric; Peng, Danling; Yao, Li

    2009-02-01

    Spatial independent component analysis (sICA) has been widely used to analyze functional magnetic resonance imaging (fMRI) data. The well-accepted implicit assumption is the spatial statistical independence of the intrinsic sources identified by sICA, which makes sICA difficult to apply to data in which interdependent sources and confounding factors exist. This interdependency can arise, for instance, in fMRI studies investigating two tasks in a single session. In this study, we introduced a linear projection approach and considered its use as a tool to separate task-related components from two-task fMRI data. The robustness and feasibility of the method are substantiated through simulations on computer-generated data and real resting-state fMRI data. Both simulated and real two-task fMRI experiments demonstrated that sICA in combination with the projection method succeeded in separating spatially dependent components and had better detection power than a purely model-based method when estimating activation induced by each task as well as by both tasks.
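
    The sketch below illustrates the general idea: project the fMRI time series onto the orthogonal complement of one task regressor before running spatial ICA, so components related to the remaining task can be estimated with less interference. The design and data are synthetic, and this generic residual-forming projection is not necessarily the exact operator used in the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic two-task data: time x voxels, with overlapping block designs.
rng = np.random.default_rng(3)
n_time, n_vox = 120, 500
task_a = (np.arange(n_time) // 10 % 2).astype(float)       # block design, task A
task_b = np.roll(task_a, 5)                                 # overlapping task B
data = (np.outer(task_a, rng.standard_normal(n_vox))
        + np.outer(task_b, rng.standard_normal(n_vox))
        + 0.5 * rng.standard_normal((n_time, n_vox)))

X = task_a[:, None]                                          # regressor to remove
P = np.eye(n_time) - X @ np.linalg.pinv(X)                   # projection onto its complement
residual = P @ data

ica = FastICA(n_components=5, random_state=0, max_iter=1000)
sources = ica.fit_transform(residual.T)                      # spatial ICA: voxels x components
print(sources.shape)                                          # (500, 5)
```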

  17. Predicting respiratory motion signals for image-guided radiotherapy using multi-step linear methods (MULIN)

    Energy Technology Data Exchange (ETDEWEB)

    Ernst, Floris; Schweikard, Achim [University of Luebeck, Institute for Robotics and Cognitive Systems, Luebeck (Germany)

    2008-06-15

    Forecasting of respiration motion in image-guided radiotherapy requires algorithms that can accurately and efficiently predict target location. Improved methods for respiratory motion forecasting were developed and tested. MULIN, a new family of prediction algorithms based on linear expansions of the prediction error, was developed and tested. Computer-generated data with a prediction horizon of 150 ms was used for testing in simulation experiments. MULIN was compared to Least Mean Squares-based predictors (LMS; normalized LMS, nLMS; wavelet-based multiscale autoregression, wLMS) and a multi-frequency Extended Kalman Filter (EKF) approach. The in vivo performance of the algorithms was tested on data sets of patients who underwent radiotherapy. The new MULIN methods are highly competitive, outperforming the LMS and the EKF prediction algorithms in real-world settings and performing similarly to optimized nLMS and wLMS prediction algorithms. On simulated, periodic data the MULIN algorithms are outperformed only by the EKF approach due to its inherent advantage in predicting periodic signals. In the presence of noise, the MULIN methods significantly outperform all other algorithms. The MULIN family of algorithms is a feasible tool for the prediction of respiratory motion, performing as well as or better than conventional algorithms while requiring significantly lower computational complexity. The MULIN algorithms are of special importance wherever high-speed prediction is required. (orig.)

  18. High-dynamic-range coherent diffractive imaging: ptychography using the mixed-mode pixel array detector

    Energy Technology Data Exchange (ETDEWEB)

    Giewekemeyer, Klaus, E-mail: klaus.giewekemeyer@xfel.eu [European XFEL GmbH, Hamburg (Germany); Philipp, Hugh T. [Cornell University, Ithaca, NY (United States); Wilke, Robin N. [Georg-August-Universität Göttingen, Göttingen (Germany); Aquila, Andrew [European XFEL GmbH, Hamburg (Germany); Osterhoff, Markus [Georg-August-Universität Göttingen, Göttingen (Germany); Tate, Mark W.; Shanks, Katherine S. [Cornell University, Ithaca, NY (United States); Zozulya, Alexey V. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Salditt, Tim [Georg-August-Universität Göttingen, Göttingen (Germany); Gruner, Sol M. [Cornell University, Ithaca, NY (United States); Cornell University, Ithaca, NY (United States); Kavli Institute of Cornell for Nanoscience, Ithaca, NY (United States); Mancuso, Adrian P. [European XFEL GmbH, Hamburg (Germany)

    2014-08-07

    The advantages of a novel wide dynamic range hard X-ray detector are demonstrated for (ptychographic) coherent X-ray diffractive imaging. Coherent (X-ray) diffractive imaging (CDI) is an increasingly popular form of X-ray microscopy, mainly due to its potential to produce high-resolution images and the lack of an objective lens between the sample and its corresponding imaging detector. One challenge, however, is that very high dynamic range diffraction data must be collected to produce both quantitative and high-resolution images. In this work, hard X-ray ptychographic coherent diffractive imaging has been performed at the P10 beamline of the PETRA III synchrotron to demonstrate the potential of a very wide dynamic range imaging X-ray detector (the Mixed-Mode Pixel Array Detector, or MM-PAD). The detector is capable of single photon detection, detecting fluxes exceeding 1 × 10^8 8-keV photons pixel^-1 s^-1, and framing at 1 kHz. A ptychographic reconstruction was performed using a peak focal intensity on the order of 1 × 10^10 photons µm^-2 s^-1 within an area of approximately 325 nm × 603 nm. This was done without need of a beam stop and with a very modest attenuation, while 'still' images of the empty beam far-field intensity were recorded without any attenuation. The treatment of the detector frames and CDI methodology for reconstruction of non-sensitive detector regions, partially also extending the active detector area, are described.

  19. An update on modeling dose-response relationships: Accounting for correlated data structure and heterogeneous error variance in linear and nonlinear mixed models.

    Science.gov (United States)

    Gonçalves, M A D; Bello, N M; Dritz, S S; Tokach, M D; DeRouchey, J M; Woodworth, J C; Goodband, R D

    2016-05-01

    Advanced methods for dose-response assessments are used to estimate the minimum concentrations of a nutrient that maximizes a given outcome of interest, thereby determining nutritional requirements for optimal performance. Contrary to standard modeling assumptions, experimental data often present a design structure that includes correlations between observations (i.e., blocking, nesting, etc.) as well as heterogeneity of error variances; either can mislead inference if disregarded. Our objective is to demonstrate practical implementation of linear and nonlinear mixed models for dose-response relationships accounting for correlated data structure and heterogeneous error variances. To illustrate, we modeled data from a randomized complete block design study to evaluate the standardized ileal digestible (SID) Trp:Lys ratio dose-response on G:F of nursery pigs. A base linear mixed model was fitted to explore the functional form of G:F relative to Trp:Lys ratios and assess model assumptions. Next, we fitted 3 competing dose-response mixed models to G:F, namely a quadratic polynomial (QP) model, a broken-line linear (BLL) ascending model, and a broken-line quadratic (BLQ) ascending model, all of which included heteroskedastic specifications, as dictated by the base model. The GLIMMIX procedure of SAS (version 9.4) was used to fit the base and QP models and the NLMIXED procedure was used to fit the BLL and BLQ models. We further illustrated the use of a grid search of initial parameter values to facilitate convergence and parameter estimation in nonlinear mixed models. Fit between competing dose-response models was compared using a maximum likelihood-based Bayesian information criterion (BIC). The QP, BLL, and BLQ models fitted on G:F of nursery pigs yielded BIC values of 353.7, 343.4, and 345.2, respectively, thus indicating a better fit of the BLL model. The BLL breakpoint estimate of the SID Trp:Lys ratio was 16.5% (95% confidence interval [16.1, 17.0]). Problems with
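
    The broken-line linear (BLL) ascending model selected above can be prototyped as a simple fixed-effects fit, as sketched below; the random block effects, heteroskedastic error specification, and SAS grid search used in the study are not reproduced, and the data points are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def broken_line_ascending(x, plateau, slope, breakpoint):
    """Broken-line linear (linear-plateau) ascending model: the response
    rises with the given slope up to the breakpoint and is flat afterwards."""
    return plateau - slope * np.maximum(breakpoint - x, 0.0)

# Invented example data: G:F of nursery pigs vs SID Trp:Lys ratio (%).
ratio = np.array([14.5, 15.0, 15.5, 16.0, 16.5, 17.0, 17.5, 18.0])
gf    = np.array([0.61, 0.63, 0.65, 0.67, 0.68, 0.685, 0.686, 0.684])

p0 = [0.68, 0.02, 16.5]                      # starting values (a grid search is used in the paper)
params, cov = curve_fit(broken_line_ascending, ratio, gf, p0=p0)
print(dict(zip(["plateau", "slope", "breakpoint"], params)))
```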

  20. Multiplicative mixing of object identity and image attributes in single inferior temporal neurons.

    Science.gov (United States)

    Ratan Murty, N Apurva; Arun, S P

    2018-04-03

    Object recognition is challenging because the same object can produce vastly different images, mixing signals related to its identity with signals due to its image attributes, such as size, position, rotation, etc. Previous studies have shown that both signals are present in high-level visual areas, but precisely how they are combined has remained unclear. One possibility is that neurons might encode identity and attribute signals multiplicatively so that each can be efficiently decoded without interference from the other. Here, we show that, in high-level visual cortex, responses of single neurons can be explained better as a product rather than a sum of tuning for object identity and tuning for image attributes. This subtle effect in single neurons produced substantially better population decoding of object identity and image attributes in the neural population as a whole. This property was absent both in low-level vision models and in deep neural networks. It was also unique to invariances: when tested with two-part objects, neural responses were explained better as a sum than as a product of part tuning. Taken together, our results indicate that signals requiring separate decoding, such as object identity and image attributes, are combined multiplicatively in IT neurons, whereas signals that require integration (such as parts in an object) are combined additively. Copyright © 2018 the Author(s). Published by PNAS.

  1. CT and MR imaging features in phosphaturic mesenchymal tumor-mixed connective tissue: A case report.

    Science.gov (United States)

    Shi, Zhenshan; Deng, Yiqiong; Li, Xiumei; Li, Yueming; Cao, Dairong; Coossa, Vikash Sahadeo

    2018-04-01

    Phosphaturic mesenchymal tumor-mixed connective tissue (PMT-MCT) is rare and usually benign and slow-growing. The majority of these tumors are associated with sporadic tumor-induced osteomalacia (TIO) or rickets, affect middle-aged individuals, and are located in the extremities. Previous imaging studies often focused on seeking the causative tumors of TIO rather than on the radiological features of these tumors, especially magnetic resonance imaging (MRI) features. PMT-MCT remains a largely misdiagnosed, ignored or unknown entity for most radiologists and clinicians. In the present case report, a review of the known literature on PMT-MCT was conducted and the CT and MRI findings from three patient cases were described for diagnosing the small subcutaneous tumor. Typical MRI appearances of PMT-MCT were isointensity relative to the muscles on T1-weighted imaging and marked hyperintensity on T2-weighted imaging containing variable flow voids, with markedly heterogeneous/homogeneous enhancement on post-contrast T1-weighted fat-suppression imaging. Short tau inversion recovery was demonstrated to be the optimal sequence for localizing the tumor.

  2. Multi-dimensional Imaging and Characterization of Convective Mixing in a Porous Media

    Science.gov (United States)

    Liyanage, R.; Pini, R.; Crawshaw, J.; Krevor, S. C.

    2017-12-01

    The dissolution of CO2 into reservoir brines is one of the key trapping mechanisms during CO2 sequestration in deep saline aquifers. The dissolution at the CO2-brine interface induces a buoyant instability in the aqueous phase following a local brine density increase in the range of 0.1-1% depending on pressure, temperature, and salinity. As a result the CO2-saturated brine mixes with fresh brine to form characteristic finger-like patterns. This downward flow pushes fresh brine to the CO2-brine interface and further enhances dissolution. This phenomenon is referred to as convective mixing. A study has been undertaken to investigate convective mixing in a 3D opaque porous medium. A novel protocol is presented using X-ray Computed Tomography (X-ray CT) to image the evolution of convective mixing over time. Results are presented for experiments carried out at ambient conditions using a spherical bowl (diameter of 20 cm) packed with glass beads (diameter, 0.5 mm). Surrogate fluids are used that provide good x-ray contrast whilst maintaining a maximum density differential comparable to the one observed in a supercritical CO2-brine system (about 10 kg/m3). We use a mixture of methanol and ethylene glycol (MEG) at three different ratios (and doped with KI) and brine. We perform two repeats for each fluid pair, and during a typical experiment scans are taken at regular time intervals for up to 10 hours. 3D images of the bowl are reconstructed (fig. 1) with (2x2x2) mm3 voxels. The experiments are classified by Rayleigh number covering the range Ra = 5,000-25,000. As expected, higher Ra leads to earlier development of the instability, with the plume moving faster towards the bottom of the bowl. The computed dissolution flux supports these visual observations and confirms that dissolution-enhanced mixing produces fluxes that are significantly larger than in the corresponding purely diffusive scenario. While quantitative agreement is observed from repeated experiments, we note that

  3. Scaling of Thermal Images at Different Spatial Resolution: The Mixed Pixel Problem

    Directory of Open Access Journals (Sweden)

    Hamlyn G. Jones

    2014-07-01

    Full Text Available The consequences of changes in spatial resolution for application of thermal imagery in plant phenotyping in the field are discussed. Where image pixels are significantly smaller than the objects of interest (e.g., leaves), accurate estimates of leaf temperature are possible, but when pixels reach the same scale or larger than the objects of interest, the observed temperatures become significantly biased by the background temperature as a result of the presence of mixed pixels. Approaches to the estimation of the true leaf temperature that apply both at the whole-pixel level and at the sub-pixel level are reviewed and discussed.

  4. Double-Stage Delay Multiply and Sum Beamforming Algorithm: Application to Linear-Array Photoacoustic Imaging

    OpenAIRE

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-01-01

    Photoacoustic imaging (PAI) is an emerging medical imaging modality capable of providing the high spatial resolution of ultrasound (US) imaging and the high contrast of optical imaging. Delay-and-Sum (DAS) is the most common beamforming algorithm in PAI. However, using the DAS beamformer leads to low-resolution images and a considerable contribution of off-axis signals. A new paradigm, namely Delay-Multiply-and-Sum (DMAS), which was originally used as a reconstruction algorithm in confocal microwave imaging...
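
    The DMAS combination rule pairs every two (already delayed) channel signals, multiplies them, applies a signed square root, and sums the result; a minimal sketch of that combine step, as described in the standard DMAS literature, is below. Delay calculation, band-pass filtering, and the double-stage variant proposed in the record are not included, and the test data are random.

```python
import numpy as np

def dmas_combine(delayed):
    """Minimal sketch of the Delay-Multiply-and-Sum combination step, applied
    to channel signals that have already been delayed to the focal point
    (the delay calculation itself is omitted)."""
    n_channels = delayed.shape[0]
    out = np.zeros(delayed.shape[1])
    for i in range(n_channels - 1):
        for j in range(i + 1, n_channels):
            prod = delayed[i] * delayed[j]                 # pairwise product
            out += np.sign(prod) * np.sqrt(np.abs(prod))   # signed square root
    return out

channels = np.random.default_rng(5).standard_normal((16, 1024))  # 16 delayed RF lines
print(dmas_combine(channels).shape)                               # (1024,)
```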

  5. The body project 4 all: A pilot randomized controlled trial of a mixed-gender dissonance-based body image program.

    Science.gov (United States)

    Kilpela, Lisa Smith; Blomquist, Kerstin; Verzijl, Christina; Wilfred, Salomé; Beyl, Robbie; Becker, Carolyn Black

    2016-06-01

    The Body Project is a cognitive dissonance-based body image improvement program with ample research support among female samples. More recently, researchers have highlighted the extent of male body dissatisfaction and disordered eating behaviors; however, boys/men have not been included in the majority of body image improvement programs. This study aims to explore the efficacy of a mixed-gender Body Project compared with the historically female-only body image intervention program. Participants included male and female college students (N = 185) across two sites. We randomly assigned women to a mixed-gender modification of the two-session, peer-led Body Project (MG), the two-session, peer-led, female-only (FO) Body Project, or a waitlist control (WL), and men to either MG or WL. Participants completed self-report measures assessing negative affect, appearance-ideal internalization, body satisfaction, and eating disorder pathology at baseline, post-test, and at 2- and 6-month follow-up. Linear mixed-effects modeling was used to estimate the change from baseline over time for each dependent variable across conditions. For women, results were mixed regarding post-intervention improvement compared with WL, and were largely non-significant compared with WL at 6-month follow-up. Alternatively, results indicated that men in MG consistently improved compared with WL through 6-month follow-up on all measures except negative affect and appearance-ideal internalization. Results differed markedly between female and male samples, and were more promising for men than for women. Various explanations are provided, and further research is warranted prior to drawing firm conclusions regarding mixed-gender programming of the Body Project. © 2016 Wiley Periodicals, Inc. (Int J Eat Disord 2016; 49:591-602).

  6. Non-linear thermal engineering, chaotic advection and mixing; Thermique non-lineaire, melange et advection chaotique

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1999-12-31

    This conference day was jointly organized by the 'university group of thermal engineering (GUT)' and the French association of thermal engineers. This book of proceedings contains 7 papers entitled: 'energy spectra of a passive scalar undergoing advection by a chaotic flow'; 'analysis of chaotic behaviours: from topological characterization to modeling'; 'temperature homogeneity by Lagrangian chaos in a direct current flow heat exchanger: numerical approach'; 'thermal instabilities in a mixed convection phenomenon: nonlinear dynamics'; 'experimental characterization study of the 3-D Lagrangian chaos by thermal analogy'; 'influence of coherent structures on the mixing of a passive scalar'; 'evaluation of the performance index of a chaotic advection effect heat exchanger for a wide range of Reynolds numbers'. (J.S.)

  7. Non-linear thermal engineering, chaotic advection and mixing; Thermique non-lineaire, melange et advection chaotique

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-31

    This conference day was jointly organized by the 'university group of thermal engineering (GUT)' and the French association of thermal engineers. This book of proceedings contains 7 papers entitled: 'energy spectra of a passive scalar undergoing advection by a chaotic flow'; 'analysis of chaotic behaviours: from topological characterization to modeling'; 'temperature homogeneity by Lagrangian chaos in a direct current flow heat exchanger: numerical approach'; 'thermal instabilities in a mixed convection phenomenon: nonlinear dynamics'; 'experimental characterization study of the 3-D Lagrangian chaos by thermal analogy'; 'influence of coherent structures on the mixing of a passive scalar'; 'evaluation of the performance index of a chaotic advection effect heat exchanger for a wide range of Reynolds numbers'. (J.S.)

  8. Exact spectrum of non-linear chirp scaling and its application in geosynchronous synthetic aperture radar imaging

    Directory of Open Access Journals (Sweden)

    Chen Qi

    2013-07-01

    Full Text Available Non-linear chirp scaling (NLCS) is a feasible method to deal with the time-variant frequency modulation (FM) rate problem in synthetic aperture radar (SAR) imaging. However, approximations in the derivation of the NLCS spectrum lead to performance decline in some cases. Presented is the exact spectrum of the NLCS function. A simulation with a geosynchronous synthetic aperture radar (GEO-SAR) configuration is implemented. The results show that using the presented spectrum can significantly improve imaging performance, and the NLCS algorithm is suitable for GEO-SAR imaging after modification.

  9. Comparison of height-diameter models based on geographically weighted regressions and linear mixed modelling applied to large scale forest inventory data

    Energy Technology Data Exchange (ETDEWEB)

    Quirós Segovia, M.; Condés Ruiz, S.; Drápela, K.

    2016-07-01

    Aim of the study: The main objective of this study was to test Geographically Weighted Regression (GWR) for developing height-diameter curves for forests on a large scale and to compare it with Linear Mixed Models (LMM). Area of study: Monospecific stands of Pinus halepensis Mill. located in the region of Murcia (Southeast Spain). Materials and Methods: The dataset consisted of 230 sample plots (2582 trees) from the Third Spanish National Forest Inventory (SNFI) randomly split into training data (152 plots) and validation data (78 plots). Two different methodologies were used for modelling local (Petterson) and generalized height-diameter relationships (Cañadas I): GWR, with different bandwidths, and linear mixed models. Finally, the quality of the estimated models was compared through statistical analysis. Main results: In general, both LMM and GWR provide better prediction capability when applied to a generalized height-diameter function than when applied to a local one, with R2 values increasing from around 0.6 to 0.7 in the model validation. Bias and RMSE were also lower for the generalized function. However, error analysis showed that there were no large differences between these two methodologies, evidencing that GWR provides results which are as good as the more frequently used LMM methodology, at least when no additional measurements are available for calibration. Research highlights: GWR is a type of spatial analysis for exploring spatially heterogeneous processes. GWR can model spatial variation in the tree height-diameter relationship and its regression quality is comparable to LMM. The advantage of GWR over LMM is the possibility of determining the spatial location of every parameter without additional measurements. Abbreviations: GWR (Geographically Weighted Regression); LMM (Linear Mixed Model); SNFI (Spanish National Forest Inventory). (Author)

  10. Combined mixed approach algorithm for in-line phase-contrast x-ray imaging

    International Nuclear Information System (INIS)

    De Caro, Liberato; Scattarella, Francesco; Giannini, Cinzia; Tangaro, Sabina; Rigon, Luigi; Longo, Renata; Bellotti, Roberto

    2010-01-01

    Purpose: In the past decade, phase-contrast imaging (PCI) has been applied to study different kinds of tissues and human body parts, with an increased improvement of the image quality with respect to simple absorption radiography. A technique closely related to PCI is phase-retrieval imaging (PRI). Indeed, PCI is an imaging modality thought to enhance the total contrast of the images through the phase shift introduced by the object (human body part); PRI is a mathematical technique to extract the quantitative phase-shift map from PCI. A new phase-retrieval algorithm for the in-line phase-contrast x-ray imaging is here proposed. Methods: The proposed algorithm is based on a mixed transfer-function and transport-of-intensity approach (MA) and it requires, at most, an initial approximate estimate of the average phase shift introduced by the object as prior knowledge. The accuracy in the initial estimate determines the convergence speed of the algorithm. The proposed algorithm retrieves both the object phase and its complex conjugate in a combined MA (CMA). Results: Although slightly less computationally effective with respect to other mixed-approach algorithms, as two phases have to be retrieved, the results obtained by the CMA on simulated data have shown that the obtained reconstructed phase maps are characterized by particularly low normalized mean square errors. The authors have also tested the CMA on noisy experimental phase-contrast data obtained by a suitable weakly absorbing sample consisting of a grid of submillimetric nylon fibers as well as on a strongly absorbing object made of a 0.03 mm thick lead x-ray resolution star pattern. The CMA has shown a good efficiency in recovering phase information, also in presence of noisy data, characterized by peak-to-peak signal-to-noise ratios down to a few dBs, showing the possibility to enhance with phase radiography the signal-to-noise ratio for features in the submillimetric scale with respect to the attenuation

  11. Third-order-accurate numerical methods for efficient, large time-step solutions of mixed linear and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Cobb, J.W.

    1995-02-01

    There is an increasing need for more accurate numerical methods for large-scale nonlinear magneto-fluid turbulence calculations. These methods should not only increase the current state of the art in terms of accuracy, but should also continue to optimize other desired properties such as simplicity, minimized computation, minimized memory requirements, and robust stability. This includes the ability to stably solve stiff problems with long time-steps. This work discusses a general methodology for deriving higher-order numerical methods. It also discusses how the selection of various choices can affect the desired properties. The explicit discussion focuses on third-order Runge-Kutta methods, including general solutions and five examples. The study investigates the linear numerical analysis of these methods, including their accuracy, general stability, and stiff stability. Additional appendices discuss linear multistep methods, discuss directions for further work, and exhibit numerical analysis results for some other commonly used lower-order methods.
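
    For concreteness, the sketch below implements one generic textbook member of the class of schemes the report analyses (Kutta's classical third-order Runge-Kutta method); it is not one of the specific methods derived in the report, and the test problem is arbitrary.

```python
import numpy as np

def rk3_step(f, t, y, h):
    """One step of Kutta's classical third-order Runge-Kutta method for
    y' = f(t, y); a generic example, not the report's derived schemes."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + h, y - h * k1 + 2.0 * h * k2)
    return y + h / 6.0 * (k1 + 4.0 * k2 + k3)

# Example: integrate y' = -y, y(0) = 1 to t = 1 and compare with exp(-1).
y, t, h = 1.0, 0.0, 0.1
for _ in range(10):
    y = rk3_step(lambda t, y: -y, t, y, h)
    t += h
print(y, np.exp(-1.0))
```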

  12. Diffusion tractography imaging-guided frameless linear accelerator stereotactic radiosurgical thalamotomy for tremor: case report.

    Science.gov (United States)

    Kim, Won; Sharim, Justin; Tenn, Stephen; Kaprealian, Tania; Bordelon, Yvette; Agazaryan, Nzhde; Pouratian, Nader

    2018-01-01

    Essential tremor and Parkinson's disease-associated tremor are extremely prevalent within the field of movement disorders. The ventral intermediate (VIM) nucleus of the thalamus has been commonly used as both a neuromodulatory and neuroablative target for the treatment of these forms of tremor. With both deep brain stimulation and Gamma Knife radiosurgery, there is an abundance of literature regarding the surgical planning, targeting, and outcomes of these methodologies. To date, there have been no reports of frameless, linear accelerator (LINAC)-based thalomotomies for tremor. The authors report the case of a patient with tremor-dominant Parkinson's disease, with poor tremor improvement with medication, who was offered LINAC-based thalamotomy. High-resolution 0.9-mm isotropic MR images were obtained, and simulation was performed via CT with 1.5-mm contiguous slices. The VIM thalamic nucleus was determined using diffusion tensor imaging (DTI)-based segmentation on FSL using probabilistic tractography. The supplemental motor and premotor areas were the cortical target masks. The authors centered their isocenter within the region of the DTI-determined target and treated the patient with 140 Gy in a single fraction. The DTI-determined target had coordinates of 14.2 mm lateral and 8.36 mm anterior to the posterior commissure (PC), and 3 mm superior to the anterior commissure (AC)-PC line, which differed by 3.30 mm from the original target determined by anatomical considerations (15.5 mm lateral and 7 mm anterior to the PC, and 0 mm superior to the AC-PC line). There was faint radiographic evidence of lesioning at the 3-month follow-up within the target zone, which continued to consolidate on subsequent scans. The patient experienced continued right upper-extremity resting tremor improvement starting at 10 months until it was completely resolved at 22 months of follow-up. Frameless LINAC-based thalamotomy guided by DTI-based thalamic segmentation is a feasible method

  13. Suppression of chaos at slow variables by rapidly mixing fast dynamics through linear energy-preserving coupling

    OpenAIRE

    Abramov, Rafail V.

    2011-01-01

    Chaotic multiscale dynamical systems are common in many areas of science, one of the examples being the interaction of the low-frequency dynamics in the atmosphere with the fast turbulent weather dynamics. One of the key questions about chaotic multiscale systems is how the fast dynamics affects chaos at the slow variables, and, therefore, impacts uncertainty and predictability of the slow dynamics. Here we demonstrate that the linear slow-fast coupling with the total energy conservation prop...

  14. Modeling Learning in Doubly Multilevel Binary Longitudinal Data Using Generalized Linear Mixed Models: An Application to Measuring and Explaining Word Learning.

    Science.gov (United States)

    Cho, Sun-Joo; Goodwin, Amanda P

    2016-04-01

    When word learning is supported by instruction in experimental studies for adolescents, word knowledge outcomes tend to be collected from complex data structure, such as multiple aspects of word knowledge, multilevel reader data, multilevel item data, longitudinal design, and multiple groups. This study illustrates how generalized linear mixed models can be used to measure and explain word learning for data having such complexity. Results from this application provide deeper understanding of word knowledge than could be attained from simpler models and show that word knowledge is multidimensional and depends on word characteristics and instructional contexts.

  15. Image analysis of food particles can discriminate deficient mastication of mixed foodstuffs simulating daily meal.

    Science.gov (United States)

    Sugimoto, K; Hashimoto, Y; Fukuike, C; Kodama, N; Minagi, S

    2014-03-01

    Because food texture is regarded as an important factor for smooth deglutition, identification of objective parameters that could provide a basis for food texture selection for elderly or dysphagic patients is of great importance. We aimed to develop an objective evaluation method of mastication using a mixed test food comprising foodstuffs, simulating daily dietary life. The particle size distribution (>2 mm in diameter) in a bolus was analysed using a digital image under dark-field illumination. Ten female participants (mean age ± s.d., 27·6 ± 2·6 years) masticated a mixed test food comprising prescribed amounts of rice, sausage, hard omelette, raw cabbage and raw cucumber with 100%, 75%, 50% and 25% of the number of their masticatory strokes. A single set of coefficient thresholds of 0·10 for the homogeneity index and 1·62 for the particle size index showed excellent discrimination of deficient masticatory conditions with high sensitivity (0·90) and specificity (0·77). Based on the results of this study, normal mastication was discriminated from deficient masticatory conditions using a large particle analysis of mixed foodstuffs, thus showing the possibility of future application of this method for objective decision-making regarding the properties of meals served to dysphagic patients. © 2014 John Wiley & Sons Ltd.

  16. Linear mixed-effects models to describe length-weight relationships for yellow croaker (Larimichthys polyactis) along the north coast of China.

    Science.gov (United States)

    Ma, Qiuyun; Jiao, Yan; Ren, Yiping

    2017-01-01

    In this study, length-weight relationships and relative condition factors were analyzed for Yellow Croaker (Larimichthys polyactis) along the north coast of China. Data covered six regions from north to south: Yellow River Estuary, Coastal Waters of Northern Shandong, Jiaozhou Bay, Coastal Waters of Qingdao, Haizhou Bay, and South Yellow Sea. In total 3,275 individuals were collected during six years (2008, 2011-2015). One generalized linear model, two simple linear models and nine linear mixed-effects models that applied the effects from regions and/or years to the coefficient a and/or the exponent b were studied and compared. Among these twelve models, the linear mixed-effects model with random effects from both regions and years fit the data best, with the lowest Akaike information criterion value and mean absolute error. In this model, the estimated a was 0.0192, with 95% confidence interval 0.0178~0.0308, and the estimated exponent b was 2.917 with 95% confidence interval 2.731~2.945. Estimates for a and b with random effects in the intercept and coefficient from region and year ranged from 0.013 to 0.023 and from 2.835 to 3.017, respectively. Both regions and years had effects on the parameters a and b, while the effects from years were shown to be much larger than those from regions. Except for the Coastal Waters of Northern Shandong, a decreased from north to south. Condition factors relative to reference years of 1960, 1986, 2005, 2007, 2008~2009 and 2010 revealed that the body shape of Yellow Croaker has become thinner in recent years. Furthermore, relative condition factors varied among months, years, regions and lengths. The values of a and the relative condition factors decreased as environmental pollution worsened; therefore, length-weight relationships could serve as an indicator of environmental quality. Results from this study provide a basic description of the current condition of Yellow Croaker along the north coast of China.
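
    A minimal sketch of the kind of model described above is shown below: the length-weight relation W = a L^b fitted on the log scale with a random intercept and slope by year using statsmodels. The region-level effects, model comparison, and the real data are not reproduced; all numbers are synthetic.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data for log(W) = log(a) + b*log(L) with year effects.
rng = np.random.default_rng(4)
n = 400
length = rng.uniform(8, 25, n)                               # body length, cm
year = rng.choice([2008, 2011, 2012, 2013, 2014, 2015], n)
weight = 0.019 * length ** 2.92 * np.exp(rng.normal(0, 0.1, n))

df = pd.DataFrame({"logW": np.log(weight), "logL": np.log(length), "year": year})
fit = smf.mixedlm("logW ~ logL", df, groups="year", re_formula="~logL").fit()
print(np.exp(fit.params["Intercept"]), fit.params["logL"])    # back-transformed a, and b
```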

  17. Optical imaging through turbid media with a degenerate four wave mixing correlation time gate

    International Nuclear Information System (INIS)

    Sappey, A.D.

    1994-01-01

    A novel method for detection of ballistic light and rejection of unwanted diffusive light to image structures inside highly scattering media is demonstrated. Degenerate four wave mixing (DFWM) of a doubled YAG laser in Rhodamine 6G is used to provide an ultrafast correlation time gate to discriminate against light that has undergone multiple scattering and therefore lost memory of the structures inside the scattering medium. We present preliminary results that determine the nature of the DFWM grating, confirm the coherence time of the laser, prove the phase-conjugate nature of the signal beam, and determine the dependence of the signal (reflectivity) on dye concentration and laser intensity. Finally, we have obtained images of a test cross-hair pattern through highly turbid suspensions of whole milk in water that are opaque to the naked eye. These imaging experiments demonstrate the utility of DFWM for imaging through turbid media. Based on our results, the use of DFWM as an ultrafast time gate for the detection of ballistic light in optical mammography appears to hold great promise for improving the current state of the art

  18. Use of non-linear mixed-effects modelling and regression analysis to predict the number of somatic coliphages by plaque enumeration after 3 hours of incubation.

    Science.gov (United States)

    Mendez, Javier; Monleon-Getino, Antonio; Jofre, Juan; Lucena, Francisco

    2017-10-01

    The present study aimed to establish the kinetics of the appearance of coliphage plaques using the double agar layer titration technique to evaluate the feasibility of using traditional coliphage plaque forming unit (PFU) enumeration as a rapid quantification method. Repeated measurements of the appearance of plaques of coliphages titrated according to ISO 10705-2 at different times were analysed using non-linear mixed-effects regression to determine the most suitable model of their appearance kinetics. Although this model is adequate, to simplify its applicability two linear models were developed to predict the numbers of coliphages reliably, using the PFU counts determined according to the ISO method after only 3 hours of incubation. One linear model, for cases where the number of plaques detected after 3 hours was between 4 and 26 PFU, had a linear fit of (1.48 × Counts_3h + 1.97); the other, for values >26 PFU, had a fit of (1.18 × Counts_3h + 2.95). If the number of plaques detected after 3 hours was fewer than 4 PFU, we recommend incubation for (18 ± 3) hours. The study indicates that the traditional coliphage plating technique has a reasonable potential to provide results in a single working day without the need to invest in additional laboratory equipment.
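
    The two reported linear fits translate directly into a small prediction rule; the helper below applies them to a 3-hour plaque count and falls back to the standard overnight incubation when the early count is too low. The coefficients are taken from the abstract; the function name and the below-4-PFU fallback branch are assumptions for illustration.

```python
from typing import Optional

def predict_overnight_pfu(counts_3h: int) -> Optional[float]:
    """Predict the standard (overnight) PFU count from a 3-hour count.

    Coefficients are the two linear fits reported above; the low-count
    fallback mirrors the recommendation to incubate for (18 +/- 3) hours.
    """
    if counts_3h < 4:
        return None          # too few early plaques; use full incubation
    if counts_3h <= 26:
        return 1.48 * counts_3h + 1.97
    return 1.18 * counts_3h + 2.95

for c in (3, 10, 40):
    print(c, "->", predict_overnight_pfu(c))
```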

  19. GRAPHICS-IMAGE MIXED METHOD FOR LARGE-SCALE BUILDINGS RENDERING

    Directory of Open Access Journals (Sweden)

    Y. Zhou

    2018-05-01

    Full Text Available Urban 3D model data are huge and unstructured; LOD and out-of-core algorithms are usually used to reduce the amount of data drawn in each frame and so improve rendering efficiency. When the scene is large enough, even complex optimization algorithms struggle to achieve good results. Building on these traditional approaches, a novel idea was developed: we propose a graphics and image mixed method for large-scale building rendering. First, the view field is divided into several regions; the graphics-image mixed method then renders the scene both to the screen and to an FBO, and the FBO is blended with the screen. The algorithm is tested on the huge CityGML model data of the urban areas of New York, which contain 188195 public building models, and compared with the Cesium platform. The experimental results show that the system runs smoothly and confirm that the algorithm can roam more massive building scenes under the same hardware conditions and can render the scene without visual loss.

  20. Frequency Mixing Magnetic Detection Scanner for Imaging Magnetic Particles in Planar Samples.

    Science.gov (United States)

    Hong, Hyobong; Lim, Eul-Gyoon; Jeong, Jae-Chan; Chang, Jiho; Shin, Sung-Woong; Krause, Hans-Joachim

    2016-06-09

    The setup of a planar Frequency Mixing Magnetic Detection (p-FMMD) scanner for performing Magnetic Particles Imaging (MPI) of flat samples is presented. It consists of two magnetic measurement heads on both sides of the sample mounted on the legs of a u-shaped support. The sample is locally exposed to a magnetic excitation field consisting of two distinct frequencies, a stronger component at about 77 kHz and a weaker field at 61 Hz. The nonlinear magnetization characteristics of superparamagnetic particles give rise to the generation of intermodulation products. A selected sum-frequency component of the high and low frequency magnetic field incident on the magnetically nonlinear particles is recorded by a demodulation electronics. In contrast to a conventional MPI scanner, p-FMMD does not require the application of a strong magnetic field to the whole sample because mixing of the two frequencies occurs locally. Thus, the lateral dimensions of the sample are just limited by the scanning range and the supports. However, the sample height determines the spatial resolution. In the current setup it is limited to 2 mm. As examples, we present two 20 mm × 25 mm p-FMMD images acquired from samples with 1 µm diameter maghemite particles in silanol matrix and with 50 nm magnetite particles in aminosilane matrix. The results show that the novel MPI scanner can be applied for analysis of thin biological samples and for medical diagnostic purposes.
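
    A minimal numerical illustration of the frequency-mixing principle: driving a saturating (tanh-like) magnetization curve with two tones near 77 kHz and 61 Hz produces intermodulation products such as f1 + 2·f2, which is the kind of sum-frequency component the instrument demodulates. The tanh response and all amplitudes are assumptions for illustration, not the device's calibration.

```python
# Two-tone drive through a nonlinear magnetization curve -> intermodulation.
import numpy as np

fs = 2_000_000                     # sample rate (Hz)
t = np.arange(0, 0.5, 1 / fs)
f1, f2 = 77_000.0, 61.0
drive = 0.3 * np.sin(2 * np.pi * f1 * t) + 1.5 * np.sin(2 * np.pi * f2 * t)

m = np.tanh(drive)                 # saturating, superparamagnetic-like response

spectrum = np.abs(np.fft.rfft(m)) / len(m)
freqs = np.fft.rfftfreq(len(m), 1 / fs)

def amplitude_at(f_target):
    # amplitude of the spectral bin closest to the requested frequency
    return spectrum[np.argmin(np.abs(freqs - f_target))]

for label, f in [("f1", f1), ("f1+2*f2", f1 + 2 * f2), ("f1-2*f2", f1 - 2 * f2)]:
    print(f"{label:>8}: {amplitude_at(f):.3e}")
```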

  1. Visualizing multifactorial and multi-attribute effect sizes in linear mixed models with a view towards sensometrics

    DEFF Research Database (Denmark)

    and straightforward idea is to interpret effects relative to the residual error and to choose the proper effect size measure. For multi-attribute bar plots of F-statistics this amounts, in balanced settings, to a simple transformation of the bar heights to get them transformed into depicting what can be seen...... on a multifactorial sensory profile data set and compared to actual d-prime calculations based on ordinal regression modelling through the ordinal package. A generic ``plug-in'' implementation of the method is given in the SensMixed package, which again depends on the lmerTest package. We discuss and clarify the bias...

  2. Deep linear autoencoder and patch clustering-based unified one-dimensional coding of image and video

    Science.gov (United States)

    Li, Honggui

    2017-09-01

    This paper proposes a unified one-dimensional (1-D) coding framework of image and video, which depends on deep learning neural network and image patch clustering. First, an improved K-means clustering algorithm for image patches is employed to obtain the compact inputs of deep artificial neural network. Second, for the purpose of best reconstructing original image patches, deep linear autoencoder (DLA), a linear version of the classical deep nonlinear autoencoder, is introduced to achieve the 1-D representation of image blocks. Under the circumstances of 1-D representation, DLA is capable of attaining zero reconstruction error, which is impossible for the classical nonlinear dimensionality reduction methods. Third, a unified 1-D coding infrastructure for image, intraframe, interframe, multiview video, three-dimensional (3-D) video, and multiview 3-D video is built by incorporating different categories of videos into the inputs of patch clustering algorithm. Finally, it is shown in the results of simulation experiments that the proposed methods can simultaneously gain higher compression ratio and peak signal-to-noise ratio than those of the state-of-the-art methods in the situation of low bitrate transmission.
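
    The sketch below illustrates the two main ingredients named above, K-means clustering of image patches followed by a purely linear encoder/decoder, here realised as a truncated-SVD (PCA-style) projection per cluster. This is an assumption-laden stand-in; the paper's DLA architecture and its exact 1-D coding pipeline are not reproduced.

```python
# Patch clustering + linear encode/decode per cluster (illustrative only).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
patches = rng.random((5000, 64))            # 5000 flattened 8x8 patches

kmeans = KMeans(n_clusters=16, n_init=10, random_state=0).fit(patches)

def linear_codec(x, k=8):
    """Encode/decode a patch cluster with a rank-k linear map (PCA-style)."""
    mean = x.mean(axis=0)
    u, s, vt = np.linalg.svd(x - mean, full_matrices=False)
    basis = vt[:k]                          # k principal directions
    codes = (x - mean) @ basis.T            # linear encoder
    recon = codes @ basis + mean            # linear decoder
    return codes, recon

for c in range(3):
    members = patches[kmeans.labels_ == c]
    codes, recon = linear_codec(members)
    err = np.mean((members - recon) ** 2)
    print(f"cluster {c}: {len(members)} patches, reconstruction MSE {err:.4f}")
```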

  3. Focal spot motion of linear accelerators and its effect on portal image analysis

    NARCIS (Netherlands)

    Sonke, Jan-Jakob; Brand, Bob; van Herk, Marcel

    2003-01-01

    The focal spot of a linear accelerator is often considered to have a fully stable position. In practice, however, the beam control loop of a linear accelerator needs to stabilize after the beam is turned on. As a result, some motion of the focal spot might occur during the start-up phase of

  4. Low-sensitivity, low-bounce, high-linearity current-controlled oscillator suitable for single-supply mixed-mode instrumentation system.

    Science.gov (United States)

    Hwang, Yuh-Shyan; Kung, Che-Min; Lin, Ho-Cheng; Chen, Jiann-Jong

    2009-02-01

    A low-sensitivity, low-bounce, high-linearity current-controlled oscillator (CCO) suitable for a single-supply mixed-mode instrumentation system is designed and proposed in this paper. The designed CCO can be operated at low voltage (2 V). The power bounce and ground bounce generated by this CCO are less than 7 mVpp when the power-line parasitic inductance is increased to 100 nH to demonstrate the effect of power bounce and ground bounce. The power supply noise caused by the proposed CCO is less than 0.35% in reference to the 2 V supply voltage. The average conversion ratio KCCO is equal to 123.5 GHz/A. The linearity of the conversion ratio is high and its tolerance is within +/-1.2%. The sensitivity of the proposed CCO is nearly independent of the power supply voltage and is lower than that of a conventional current-starved oscillator. The performance of the proposed CCO has been compared with the current-starved oscillator. It is shown that the proposed CCO is suitable for single-supply mixed-mode instrumentation systems.
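
    A quick arithmetic check of the reported conversion ratio, assuming an illustrative control current of 100 µA (the current value is not from the paper):

```python
# Worked example: output frequency from the reported conversion ratio.
k_cco = 123.5e9          # Hz per ampere (reported average conversion ratio)
i_ctrl = 100e-6          # amperes (illustrative control current)
f_nominal = k_cco * i_ctrl
tol = 0.012              # reported +/-1.2% linearity tolerance
print(f"nominal: {f_nominal / 1e6:.2f} MHz, "
      f"range: {f_nominal * (1 - tol) / 1e6:.2f}-{f_nominal * (1 + tol) / 1e6:.2f} MHz")
```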

  5. Commissioning and quality assurance of the x-ray volume imaging system of an image-guided radiotherapy capable linear accelerator

    International Nuclear Information System (INIS)

    Muralidhar, K.R.; Narayana Murthy, P.; Kumar, Rajneesh

    2008-01-01

    An Image-Guided Radiotherapy-capable linear accelerator (Elekta Synergy) was installed at our hospital, which is equipped with a kV x-ray volume imaging (XVI) system and electronic portal imaging device (iViewGT). The objective of this presentation is to describe the results of commissioning measurements carried out on the XVI facility to verify the manufacturer's specifications and also to evolve a QA schedule which can be used to test its performance routinely. The QA program consists of a series of tests (safety features, geometric accuracy, and image quality). These tests were found to be useful to assess the performance of the XVI system and also proved that XVI system is very suitable for image-guided high-precision radiation therapy. (author)

  6. Commissioning and quality assurance of the X-ray volume Imaging system of an image-guided radiotherapy capable linear accelerator

    Directory of Open Access Journals (Sweden)

    Muralidhar K

    2008-01-01

    Full Text Available An Image-Guided Radiotherapy-capable linear accelerator (Elekta Synergy) was installed at our hospital, which is equipped with a kV x-ray volume imaging (XVI) system and electronic portal imaging device (iViewGT). The objective of this presentation is to describe the results of commissioning measurements carried out on the XVI facility to verify the manufacturer's specifications and also to evolve a QA schedule which can be used to test its performance routinely. The QA program consists of a series of tests (safety features, geometric accuracy, and image quality). These tests were found to be useful to assess the performance of the XVI system and also proved that XVI system is very suitable for image-guided high-precision radiation therapy.

  7. Local genealogies in a linear mixed model for genome-wide association mapping in complex pedigreed populations.

    Directory of Open Access Journals (Sweden)

    Goutam Sahana

    Full Text Available INTRODUCTION: The state-of-the-art for dealing with multiple levels of relationship among the samples in genome-wide association studies (GWAS) is unified mixed model analysis (MMA). This approach is very flexible, can be applied to both family-based and population-based samples, and can be extended to incorporate other effects in a straightforward and rigorous fashion. Here, we present a complementary approach, called 'GENMIX' (genealogy-based mixed model), which combines advantages from two powerful GWAS methods: genealogy-based haplotype grouping and MMA. SUBJECTS AND METHODS: We validated GENMIX using genotyping data of Danish Jersey cattle and simulated phenotypes and compared it to the MMA. We simulated scenarios for three levels of heritability (0.21, 0.34, and 0.64), seven levels of MAF (0.05, 0.10, 0.15, 0.20, 0.25, 0.35, and 0.45) and five levels of QTL effect (0.1, 0.2, 0.5, 0.7 and 1.0 in phenotypic standard deviation units). Each of these 105 possible combinations (3 h² × 7 MAF × 5 effects) of scenarios was replicated 25 times. RESULTS: GENMIX provides a better ranking of markers close to the causative locus' location. GENMIX outperformed MMA when the QTL effect was small and the MAF at the QTL was low. In scenarios where MAF was high or the QTL affecting the trait had a large effect, both GENMIX and MMA performed similarly. CONCLUSION: In discovery studies, where high-ranking markers are identified and later examined in validation studies, we therefore expect GENMIX to enrich candidates brought to follow-up studies with true positives over false positives more than the MMA would.

  8. Cone-beam CT image contrast and attenuation-map linearity improvement (CALI) for brain stereotactic radiosurgery procedures

    Science.gov (United States)

    Hashemi, Sayed Masoud; Lee, Young; Eriksson, Markus; Nordström, Håkan; Mainprize, James; Grouza, Vladimir; Huynh, Christopher; Sahgal, Arjun; Song, William Y.; Ruschin, Mark

    2017-03-01

    A Contrast and Attenuation-map (CT-number) Linearity Improvement (CALI) framework is proposed for cone-beam CT (CBCT) images used for brain stereotactic radiosurgery (SRS). The proposed framework is used together with our high spatial resolution iterative reconstruction algorithm and is tailored for the Leksell Gamma Knife ICON (Elekta, Stockholm, Sweden). The incorporated CBCT system in ICON facilitates frameless SRS planning and treatment delivery. The ICON employs a half-cone geometry to accommodate the existing treatment couch. This geometry increases the amount of artifacts and together with other physical imperfections causes image inhomogeneity and contrast reduction. Our proposed framework includes a preprocessing step, involving a shading and beam-hardening artifact correction, and a post-processing step to correct the dome/capping artifact caused by the spatial variations in x-ray energy generated by bowtie-filter. Our shading correction algorithm relies solely on the acquired projection images (i.e. no prior information required) and utilizes filtered-back-projection (FBP) reconstructed images to generate a segmented bone and soft-tissue map. Ideal projections are estimated from the segmented images and a smoothed version of the difference between the ideal and measured projections is used in correction. The proposed beam-hardening and dome artifact corrections are segmentation free. The CALI was tested on CatPhan, as well as patient images acquired on the ICON system. The resulting clinical brain images show substantial improvements in soft contrast visibility, revealing structures such as ventricles and lesions which were otherwise un-detectable in FBP-reconstructed images. The linearity of the reconstructed attenuation-map was also improved, resulting in more accurate CT#.

  9. Stereo Imaging Velocimetry of Mixing Driven by Buoyancy Induced Flow Fields

    Science.gov (United States)

    Duval, W. M. B.; Jacqmin, D.; Bomani, B. M.; Alexander, I. J.; Kassemi, M.; Batur, C.; Tryggvason, B. V.; Lyubimov, D. V.; Lyubimova, T. P.

    2000-01-01

    Mixing of two fluids generated by steady and particularly g-jitter acceleration is fundamental towards the understanding of transport phenomena in a microgravity environment. We propose to carry out flight and ground-based experiments to quantify flow fields due to g-jitter type of accelerations using Stereo Imaging Velocimetry (SIV), and measure the concentration field using laser fluorescence. The understanding of the effects of g-jitter on transport phenomena is of great practical interest to the microgravity community and impacts the design of experiments for the Space Shuttle as well as the International Space Station. The aim of our proposed research is to provide quantitative data to the community on the effects of g-jitter on flow fields due to mixing induced by buoyancy forces. The fundamental phenomenon of mixing occurs in a broad range of materials processing encompassing the growth of opto-electronic materials and semiconductors, (by directional freezing and physical vapor transport), to solution and protein crystal growth. In materials processing of these systems, crystal homogeneity, which is affected by the solutal field distribution, is one of the major issues. The understanding of fluid mixing driven by buoyancy forces, besides its importance as a topic in fundamental science, can contribute towards the understanding of how solutal fields behave under various body forces. The body forces of interest are steady acceleration and g-jitter acceleration as in a Space Shuttle environment or the International Space Station. Since control of the body force is important, the flight experiment will be carried out on a tunable microgravity vibration isolation mount, which will permit us to precisely input the desired forcing function to simulate a range of body forces. To that end, we propose to design a flight experiment that can only be carried out under microgravity conditions to fully exploit the effects of various body forces on fluid mixing. Recent

  10. Intraobserver and interobserver reproducibility in linear measurements on axial images obtained by cone-beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    De Silva, Nathalia Cristine; Junqueira, Jose Luiz Cinta; Panzarella, Francine Keuhi; Raitz, Ricardo [Sao Leopoldo Mandic Research Center, Dept. of Oral Radiology, College of Dentistry, Sao Paulo (Brazil); Brriviera, Mauricio [Dept. of Oral Radiology, College of Dentistry, Catholic University of Brasilia, Sao Paulo (Brazil)

    2017-03-15

    This study was performed to investigate the intra- and inter-observer variability in linear measurements with axial images obtained by PreXion (PreXion Inc., San Mateo, USA) and i-CAT (Imaging Sciences International, Xoran Technologies Inc., Hatfield, USA) CBCT scanners, with different voxel sizes. A cylindrical object made from nylon with radiopaque markers (phantom) was scanned by i-CAT and PreXion 3D devices. For each axial image, measurements were taken twice in the horizontal (distance A-B) and vertical (distance C-D) directions, randomly, with a one-week interval between measurements, by four oral radiologists with five years or more experience in the use of these measuring tools. All of the obtained linear measurements had lower values than those of the phantom. The statistical analysis showed high intra- and inter-observer reliability (p=0.297). Compared to the real measurements, the measurements obtained using the i-CAT device and PreXion tomography, on average, revealed absolute errors ranging from 0.22 to 0.59 mm and from 0.23 to 0.63 mm, respectively. It can be concluded that both scanners are accurate, although the linear measurements are underestimations, with no significant differences between the evaluators.

  11. A new perspective on sexual mixing among men who have sex with men by body image.

    Science.gov (United States)

    Leung, Ka-Kit; Wong, Horas T H; Naftalin, Claire M; Lee, Shui Shan

    2014-01-01

    "Casual sex" is seldom as non-selective and random as it may sound. During each sexual encounter, people consciously and unconsciously seek their casual sex partners according to different attributes. Influential to a sexual network, research focusing on quantifying the effects of physical appearance on sexual network has been sparse. We evaluated the application of Log odds score (LOD) to assess the mixing patterns of 326 men who have sex with men (MSM) in Hong Kong in their networking of casual sex partners by Body Image Type (BIT). This involved an analysis of 1,196 respondents-casual sex partner pairs. Seven BITs were used in the study: Bear, Chubby, Slender, Lean toned, Muscular, Average and Other. A hierarchical pattern was observed in the preference of MSM for casual sex partners by the latter's BIT. Overall, Muscular men were most preferred, followed by Lean toned while the least preferred was Slender, as illustrated by LOD going down along the hierarchy in the same direction. Marked avoidance was found between men who self-identified as Chubby and men of Other body type (within-group-LOD: 1.25-2.89; between-group-LOD: man who self-identified as Average for casual sex. We have demonstrated the possibility of adopting a mathematical prototype to investigate the influence of BIT in a sexual network of MSM. Construction of matrix based on culture-specific BIT and cross-cultural comparisons would generate new knowledge on the mixing behaviors of MSM.

  12. Linear image reconstruction for a diffuse optical mammography system in a noncompressed geometry using scattering fluid

    International Nuclear Information System (INIS)

    Nielsen, Tim; Brendel, Bernhard; Ziegler, Ronny; Beek, Michiel van; Uhlemann, Falk; Bontus, Claas; Koehler, Thomas

    2009-01-01

    Diffuse optical tomography (DOT) is a potential new imaging modality to detect or monitor breast lesions. Recently, Philips developed a new DOT system capable of transmission and fluorescence imaging, where the investigated breast is hanging freely into the measurement cup containing scattering fluid. We present a fast and robust image reconstruction algorithm that is used for the transmission measurements. The algorithm is based on the Rytov approximation. We show that this algorithm can be used over a wide range of tissue optical properties if the reconstruction is adapted to each patient. We use estimates of the breast shape and average tissue optical properties to initialize the reconstruction, which improves the image quality significantly. We demonstrate the capability of the measurement system and reconstruction to image breast lesions by clinical examples
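
    In the same spirit as the linear, Rytov-based reconstruction described above, the sketch below solves a generic linear forward model y = A x by Tikhonov-regularised least squares. The sensitivity matrix, problem sizes and regularisation weight are placeholders, not the actual system model.

```python
# Generic linear image reconstruction with Tikhonov regularisation.
import numpy as np

rng = np.random.default_rng(2)
n_meas, n_vox = 200, 400
A = rng.normal(size=(n_meas, n_vox))        # stand-in sensitivity matrix
x_true = np.zeros(n_vox)
x_true[150:160] = 1.0                       # a small "lesion" perturbation
y = A @ x_true + 0.01 * rng.normal(size=n_meas)

lam = 1.0                                   # regularisation strength
x_rec = np.linalg.solve(A.T @ A + lam * np.eye(n_vox), A.T @ y)

print("peak of reconstruction at voxel", int(np.argmax(x_rec)))
```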

  13. Clinical Experiences With Onboard Imager KV Images for Linear Accelerator-Based Stereotactic Radiosurgery and Radiotherapy Setup

    International Nuclear Information System (INIS)

    Hong, Linda X.; Chen, Chin C.; Garg, Madhur; Yaparpalvi, Ravindra; Mah, Dennis

    2009-01-01

    Purpose: To report our clinical experiences with on-board imager (OBI) kV image verification for cranial stereotactic radiosurgery (SRS) and radiotherapy (SRT) treatments. Methods and Materials: Between January 2007 and May 2008, 42 patients (57 lesions) were treated with SRS with head frame immobilization and 13 patients (14 lesions) were treated with SRT with face mask immobilization at our institution. No margin was added to the gross tumor for SRS patients, and a 3-mm three-dimensional margin was added to the gross tumor to create the planning target volume for SRT patients. After localizing the patient with stereotactic target positioner (TaPo), orthogonal kV images using OBI were taken and fused to planning digital reconstructed radiographs. Suggested couch shifts in vertical, longitudinal, and lateral directions were recorded. kV images were also taken immediately after treatment for 21 SRS patients and on a weekly basis for 6 SRT patients to assess any intrafraction changes. Results: For SRS patients, 57 pretreatment kV images were evaluated and the suggested shifts were all within 1 mm in any direction (i.e., within the accuracy of image fusion). For SRT patients, the suggested shifts were out of the 3-mm tolerance for 31 of 309 setups. Intrafraction motions were detected in 3 SRT patients. Conclusions: kV imaging provided a useful tool for SRS or SRT setups. For SRS setup with head frame, it provides radiographic confirmation of localization using the stereotactic target positioner. For SRT with mask, a 3-mm margin is adequate and feasible for routine setup when TaPo is combined with kV imaging

  14. A mixed integer linear programming model to reconstruct phylogenies from single nucleotide polymorphism haplotypes under the maximum parsimony criterion

    Science.gov (United States)

    2013-01-01

    that these constraints can often lead to significant reductions in the gap between the optimal solution and its non-integral linear programming bound relative to the prior art as well as often substantially faster processing of moderately hard problem instances. Conclusion We provide an indication of the conditions under which such an optimal enumeration approach is likely to be feasible, suggesting that these strategies are usable for relatively large numbers of taxa, although with stricter limits on numbers of variable sites. The work thus provides methodology suitable for provably optimal solution of some harder instances that resist all prior approaches. PMID:23343437

  15. Scanning differential polarization microscope: Its use to image linear and circular differential scattering

    International Nuclear Information System (INIS)

    Mickols, W.; Maestre, M.F.

    1988-01-01

    A differential polarization microscope that couples the sensitivity of single-beam measurement of circular dichroism and circular differential scattering with the simultaneous measurement of linear dichroism and linear differential scattering has been developed. The microscope uses a scanning microscope stage and single-point illumination to give the very shallow depth of field found in confocal microscopy. This microscope can operate in the confocal mode as well as in the near confocal condition that can allow one to program the coherence and spatial resolution of the microscope. This microscope has been used to study the change in the structure of chromatin during the development of sperm in Drosophila

  16. Using a generalized linear mixed model approach to explore the role of age, motor proficiency, and cognitive styles in children's reach estimation accuracy.

    Science.gov (United States)

    Caçola, Priscila M; Pant, Mohan D

    2014-10-01

    The purpose was to use a multi-level statistical technique to analyze how children's age, motor proficiency, and cognitive styles interact to affect accuracy on reach estimation tasks via Motor Imagery and Visual Imagery. Results from the Generalized Linear Mixed Model analysis (GLMM) indicated that only the 7-year-old age group had significant random intercepts for both tasks. Motor proficiency predicted accuracy in reach tasks, and cognitive styles (object scale) predicted accuracy in the motor imagery task. GLMM analysis is suitable to explore age and other parameters of development. In this case, it allowed an assessment of motor proficiency interacting with age to shape how children represent, plan, and act on the environment.

  17. The Capability Portfolio Analysis Tool (CPAT): A Mixed Integer Linear Programming Formulation for Fleet Modernization Analysis (Version 2.0.2).

    Energy Technology Data Exchange (ETDEWEB)

    Waddell, Lucas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Muldoon, Frank [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Henry, Stephen Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hoffman, Matthew John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zwerneman, April Marie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Backlund, Peter [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melander, Darryl J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lawton, Craig R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rice, Roy Eugene [Teledyne Brown Engineering, Huntsville, AL (United States)

    2017-09-01

    In order to effectively plan the management and modernization of their large and diverse fleets of vehicles, Program Executive Office Ground Combat Systems (PEO GCS) and Program Executive Office Combat Support and Combat Service Support (PEO CS&CSS) commissioned the development of a large-scale portfolio planning optimization tool. This software, the Capability Portfolio Analysis Tool (CPAT), creates a detailed schedule that optimally prioritizes the modernization or replacement of vehicles within the fleet - respecting numerous business rules associated with fleet structure, budgets, industrial base, research and testing, etc., while maximizing overall fleet performance through time. This paper contains a thorough documentation of the terminology, parameters, variables, and constraints that comprise the fleet management mixed integer linear programming (MILP) mathematical formulation. This paper, which is an update to the original CPAT formulation document published in 2015 (SAND2015-3487), covers the formulation of important new CPAT features.
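
    For readers unfamiliar with this class of model, the toy mixed integer program below schedules vehicle upgrades against annual budgets using the PuLP library. Every number, set and rule in it is invented for illustration; CPAT's actual formulation is far larger and richer.

```python
# Toy fleet-modernisation MILP: maximise performance subject to budgets.
import pulp

years = [2025, 2026, 2027]
variants = {"A": {"cost": 2.0, "perf": 3.0},   # cost in $M, perf in arbitrary units
            "B": {"cost": 5.0, "perf": 9.0}}
budget = {2025: 20.0, 2026: 15.0, 2027: 10.0}
fleet_size = {"A": 12, "B": 6}                 # upper bound on total upgrades

prob = pulp.LpProblem("fleet_modernization", pulp.LpMaximize)
x = pulp.LpVariable.dicts("upgrades", (variants, years), lowBound=0, cat="Integer")

# Objective: total performance gained across all upgrades.
prob += pulp.lpSum(variants[v]["perf"] * x[v][y] for v in variants for y in years)
# Annual budget constraints.
for y in years:
    prob += pulp.lpSum(variants[v]["cost"] * x[v][y] for v in variants) <= budget[y]
# Cannot upgrade more vehicles than exist in each variant.
for v in variants:
    prob += pulp.lpSum(x[v][y] for y in years) <= fleet_size[v]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for v in variants:
    print(v, [int(x[v][y].value()) for y in years])
```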

  18. X-ray Digital Linear Tomosynthesis Imaging for Artificial Pulmonary Nodule Detection

    Directory of Open Access Journals (Sweden)

    Tsutomu Gomi

    2011-01-01

    Full Text Available The purpose of this paper is to identify indications for volumetric X-ray digital linear tomosynthesis (DLT) with single- and dual-energy subtraction techniques for artificial pulmonary nodule detection and compare X-ray DLT, X-ray digital radiography, and computed tomography.

  19. CCD linear image sensor ILX511 arrangement for a technical spectrometer

    Czech Academy of Sciences Publication Activity Database

    Bartoněk, L.; Keprt, Jiří; Vlček, Martin

    2003-01-01

    Roč. 33, 2-3 (2003), s. 548-553 ISSN 0078-5466 Institutional research plan: CEZ:AV0Z1010921 Keywords : CCD linear sensor ILX511 * enhanced parallel port (EPP able IEEE1284) * A/D converter AD9280 Subject RIV: BH - Optics, Masers, Lasers Impact factor: 0.221, year: 2003

  20. Subspace in Linear Algebra: Investigating Students' Concept Images and Interactions with the Formal Definition

    Science.gov (United States)

    Wawro, Megan; Sweeney, George F.; Rabin, Jeffrey M.

    2011-01-01

    This paper reports on a study investigating students' ways of conceptualizing key ideas in linear algebra, with the particular results presented here focusing on student interactions with the notion of subspace. In interviews conducted with eight undergraduates, we found students' initial descriptions of subspace often varied substantially from…

  1. Application of Linear Mixed-Effects Models in Human Neuroscience Research: A Comparison with Pearson Correlation in Two Auditory Electrophysiology Studies.

    Science.gov (United States)

    Koerner, Tess K; Zhang, Yang

    2017-02-27

    Neurophysiological studies are often designed to examine relationships between measures from different testing conditions, time points, or analysis techniques within the same group of participants. Appropriate statistical techniques that can take into account repeated measures and multivariate predictor variables are integral and essential to successful data analysis and interpretation. This work implements and compares conventional Pearson correlations and linear mixed-effects (LME) regression models using data from two recently published auditory electrophysiology studies. For the specific research questions in both studies, the Pearson correlation test is inappropriate for determining strengths between the behavioral responses for speech-in-noise recognition and the multiple neurophysiological measures as the neural responses across listening conditions were simply treated as independent measures. In contrast, the LME models allow a systematic approach to incorporate both fixed-effect and random-effect terms to deal with the categorical grouping factor of listening conditions, between-subject baseline differences in the multiple measures, and the correlational structure among the predictor variables. Together, the comparative data demonstrate the advantages as well as the necessity to apply mixed-effects models to properly account for the built-in relationships among the multiple predictor variables, which has important implications for proper statistical modeling and interpretation of human behavior in terms of neural correlates and biomarkers.
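
    The contrast between the two analyses can be sketched in a few lines: a naive Pearson correlation that treats repeated observations as independent, versus a mixed-effects model with a random intercept per subject. The simulated data, effect sizes and column names below are illustrative only.

```python
# Pearson correlation vs. linear mixed-effects model on repeated measures.
import numpy as np
import pandas as pd
from scipy.stats import pearsonr
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
subjects, conditions = 20, 4
subj_offset = rng.normal(0, 2.0, subjects)            # between-subject baseline
rows = []
for s in range(subjects):
    for c in range(conditions):
        neural = rng.normal(0, 1.0)
        behav = 0.5 * neural + subj_offset[s] + rng.normal(0, 0.5)
        rows.append({"subject": s, "condition": c,
                     "neural": neural, "behavioral": behav})
df = pd.DataFrame(rows)

# Naive analysis: every row treated as an independent observation.
r, p = pearsonr(df["neural"], df["behavioral"])
print(f"Pearson r = {r:.2f} (p = {p:.3g})")

# Mixed-effects analysis: random intercept per subject.
lme = smf.mixedlm("behavioral ~ neural", df, groups=df["subject"]).fit()
print(f"LME fixed-effect slope = {lme.params['neural']:.2f}")
```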

  2. Double-Stage Delay Multiply and Sum Beamforming Algorithm: Application to Linear-Array Photoacoustic Imaging.

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-01-01

    Photoacoustic imaging (PAI) is an emerging medical imaging modality capable of providing high spatial resolution of Ultrasound (US) imaging and high contrast of optical imaging. Delay-and-Sum (DAS) is the most common beamforming algorithm in PAI. However, using DAS beamformer leads to low resolution images and considerable contribution of off-axis signals. A new paradigm namely delay-multiply-and-sum (DMAS), which was originally used as a reconstruction algorithm in confocal microwave imaging, was introduced to overcome the challenges in DAS. DMAS was used in PAI systems and it was shown that this algorithm results in resolution improvement and sidelobe degrading. However, DMAS is still sensitive to high levels of noise, and resolution improvement is not satisfying. Here, we propose a novel algorithm based on DAS algebra inside DMAS formula expansion, double stage DMAS (DS-DMAS), which improves the image resolution and levels of sidelobe, and is much less sensitive to high level of noise compared to DMAS. The performance of DS-DMAS algorithm is evaluated numerically and experimentally. The resulted images are evaluated qualitatively and quantitatively using established quality metrics including signal-to-noise ratio (SNR), full-width-half-maximum (FWHM) and contrast ratio (CR). It is shown that DS-DMAS outperforms DAS and DMAS at the expense of higher computational load. DS-DMAS reduces the lateral valley for about 15 dB and improves the SNR and FWHM better than 13% and 30%, respectively. Moreover, the levels of sidelobe are reduced for about 10 dB in comparison with those in DMAS.
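
    A hedged sketch of the two baseline beamformers discussed above, applied to already time-delayed channel data: DAS simply sums the channels, while DMAS combines signed, square-rooted pairwise products. The delay calculation and the full DS-DMAS expansion are omitted, and the test signal is synthetic.

```python
# DAS versus DMAS combining of pre-delayed channel signals.
import numpy as np

def das(delayed):                      # delayed: (channels, samples)
    return delayed.sum(axis=0)

def dmas(delayed):
    m, n = delayed.shape
    out = np.zeros(n)
    for i in range(m - 1):
        for j in range(i + 1, m):
            prod = delayed[i] * delayed[j]
            out += np.sign(prod) * np.sqrt(np.abs(prod))   # signed sqrt pairing
    return out

rng = np.random.default_rng(4)
signal = np.sin(2 * np.pi * 5e6 * np.arange(256) / 40e6)   # common 5 MHz echo
channels = signal + 0.3 * rng.normal(size=(64, 256))        # 64 noisy channels

print("DAS  peak:", np.abs(das(channels)).max())
print("DMAS peak:", np.abs(dmas(channels)).max())
```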

  3. A comparative study of generalized linear mixed modelling and artificial neural network approach for the joint modelling of survival and incidence of Dengue patients in Sri Lanka

    Science.gov (United States)

    Hapugoda, J. C.; Sooriyarachchi, M. R.

    2017-09-01

    Survival time of patients with a disease and the incidence of that particular disease (count) are frequently observed in medical studies with data of a clustered nature. In many cases, though, the survival times and the count can be correlated in such a way that rarely occurring diseases could have shorter survival times, or vice versa. Due to this fact, joint modelling of these two variables will provide more interesting and certainly improved results than modelling them separately. The authors have previously proposed a methodology using Generalized Linear Mixed Models (GLMM), joining the discrete time hazard model with the Poisson regression model, to jointly model survival and count. As the Artificial Neural Network (ANN) has become a powerful computational tool to model complex non-linear systems, it was proposed to develop a new joint model of survival and count of Dengue patients of Sri Lanka using that approach. Thus, the objective of this study is to develop a model using the ANN approach and compare the results with the previously developed GLMM model. As the response variables are continuous in nature, the Generalized Regression Neural Network (GRNN) approach was adopted to model the data. To compare the model fit, measures such as root mean square error (RMSE), absolute mean error (AME) and the correlation coefficient (R) were used. The measures indicate that the GRNN model fits the data better than the GLMM model.
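
    The comparison metrics named above are straightforward to compute for any observed/predicted pair; the helper below is a generic sketch with placeholder numbers, not the study's data.

```python
# RMSE, absolute mean error (AME) and correlation coefficient (R).
import numpy as np

def fit_metrics(observed, predicted):
    observed, predicted = np.asarray(observed), np.asarray(predicted)
    err = predicted - observed
    rmse = np.sqrt(np.mean(err ** 2))
    ame = np.mean(np.abs(err))
    r = np.corrcoef(observed, predicted)[0, 1]
    return rmse, ame, r

obs = [5.0, 7.5, 9.0, 12.0, 15.5]       # placeholder observed values
pred = [5.4, 7.1, 9.6, 11.2, 15.9]      # placeholder model predictions
print("RMSE=%.3f  AME=%.3f  R=%.3f" % fit_metrics(obs, pred))
```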

  4. A new perspective on sexual mixing among men who have sex with men by body image.

    Directory of Open Access Journals (Sweden)

    Ka-Kit Leung

    Full Text Available "Casual sex" is seldom as non-selective and random as it may sound. During each sexual encounter, people consciously and unconsciously seek their casual sex partners according to different attributes. Influential to a sexual network, research focusing on quantifying the effects of physical appearance on sexual network has been sparse.We evaluated the application of Log odds score (LOD to assess the mixing patterns of 326 men who have sex with men (MSM in Hong Kong in their networking of casual sex partners by Body Image Type (BIT. This involved an analysis of 1,196 respondents-casual sex partner pairs. Seven BITs were used in the study: Bear, Chubby, Slender, Lean toned, Muscular, Average and Other.A hierarchical pattern was observed in the preference of MSM for casual sex partners by the latter's BIT. Overall, Muscular men were most preferred, followed by Lean toned while the least preferred was Slender, as illustrated by LOD going down along the hierarchy in the same direction. Marked avoidance was found between men who self-identified as Chubby and men of Other body type (within-group-LOD: 1.25-2.89; between-group-LOD: <-1. None of the respondents reported to have networked a man who self-identified as Average for casual sex.We have demonstrated the possibility of adopting a mathematical prototype to investigate the influence of BIT in a sexual network of MSM. Construction of matrix based on culture-specific BIT and cross-cultural comparisons would generate new knowledge on the mixing behaviors of MSM.

  5. Isometries and binary images of linear block codes over ℤ4 + uℤ4 and ℤ8 + uℤ8

    Science.gov (United States)

    Sison, Virgilio; Remillion, Monica

    2017-10-01

    Let F_2 be the binary field and ℤ_{2^r} the residue class ring of integers modulo 2^r, where r is a positive integer. For the finite 16-element commutative local Frobenius non-chain ring ℤ4 + uℤ4, where u is nilpotent of index 2, two weight functions are considered, namely the Lee weight and the homogeneous weight. With the appropriate application of these weights, isometric maps from ℤ4 + uℤ4 to the binary spaces F_2^4 and F_2^8, respectively, are established via the composition of other weight-based isometries. The classical Hamming weight is used on the binary space. The resulting isometries are then applied to linear block codes over ℤ4 + uℤ4 whose images are binary codes of predicted length, which may or may not be linear. Certain lower and upper bounds on the minimum distances of the binary images are also derived in terms of the parameters of the ℤ4 + uℤ4 codes. Several new codes and their images are constructed as illustrative examples. An analogous procedure is performed successfully on the ring ℤ8 + uℤ8, where u^2 = 0, which is a commutative local Frobenius non-chain ring of order 64. It turns out that the method is possible in general for the class of rings ℤ_{2^r} + uℤ_{2^r}, where u^2 = 0, for any positive integer r, using the generalized Gray map from ℤ_{2^r} to F_2^{2^{r-1}}.
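
    As a concrete anchor for the weight-preserving maps discussed above, the snippet below works through the classical Gray map on ℤ4 (0→00, 1→01, 2→11, 3→10), which sends Lee weight to Hamming weight; the extensions to ℤ4 + uℤ4 and ℤ_{2^r} + uℤ_{2^r} are not reproduced here.

```python
# Classical Gray map on Z4 as a Lee-weight-to-Hamming-weight isometry.
GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}

def lee_weight(a):
    """Lee weight of an element of Z4."""
    return min(a % 4, 4 - a % 4)

def gray_image(codeword):
    """Binary image of a Z4 codeword under the Gray map (length doubles)."""
    return tuple(bit for a in codeword for bit in GRAY[a % 4])

cw = (1, 2, 3, 0)
img = gray_image(cw)
print("Z4 word:", cw, "Lee weight:", sum(lee_weight(a) for a in cw))
print("binary image:", img, "Hamming weight:", sum(img))
```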

  6. A linear programming approach to reconstructing subcellular structures from confocal images for automated generation of representative 3D cellular models.

    Science.gov (United States)

    Wood, Scott T; Dean, Brian C; Dean, Delphine

    2013-04-01

    This paper presents a novel computer vision algorithm to analyze 3D stacks of confocal images of fluorescently stained single cells. The goal of the algorithm is to create representative in silico model structures that can be imported into finite element analysis software for mechanical characterization. Segmentation of cell and nucleus boundaries is accomplished via standard thresholding methods. Using novel linear programming methods, a representative actin stress fiber network is generated by computing a linear superposition of fibers having minimum discrepancy compared with an experimental 3D confocal image. Qualitative validation is performed through analysis of seven 3D confocal image stacks of adherent vascular smooth muscle cells (VSMCs) grown in 2D culture. The presented method is able to automatically generate 3D geometries of the cell's boundary, nucleus, and representative F-actin network based on standard cell microscopy data. These geometries can be used for direct importation and implementation in structural finite element models for analysis of the mechanics of a single cell to potentially speed discoveries in the fields of regenerative medicine, mechanobiology, and drug discovery. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Linear information retrieval method in X-ray grating-based phase contrast imaging and its interchangeability with tomographic reconstruction

    Science.gov (United States)

    Wu, Z.; Gao, K.; Wang, Z. L.; Shao, Q. G.; Hu, R. F.; Wei, C. X.; Zan, G. B.; Wali, F.; Luo, R. H.; Zhu, P. P.; Tian, Y. C.

    2017-06-01

    In X-ray grating-based phase contrast imaging, information retrieval is necessary for quantitative research, especially for phase tomography. However, numerous and repetitive processes have to be performed for tomographic reconstruction. In this paper, we report a novel information retrieval method, which enables retrieving phase and absorption information by means of a linear combination of two mutually conjugate images. Thanks to the distributive law of the multiplication as well as the commutative law and associative law of the addition, the information retrieval can be performed after tomographic reconstruction, thus simplifying the information retrieval procedure dramatically. The theoretical model of this method is established in both parallel beam geometry for Talbot interferometer and fan beam geometry for Talbot-Lau interferometer. Numerical experiments are also performed to confirm the feasibility and validity of the proposed method. In addition, we discuss its possibility in cone beam geometry and its advantages compared with other methods. Moreover, this method can also be employed in other differential phase contrast imaging methods, such as diffraction enhanced imaging, non-interferometric imaging, and edge illumination.

  8. New concept on an integrated interior magnetic resonance imaging and medical linear accelerator system for radiation therapy.

    Science.gov (United States)

    Jia, Xun; Tian, Zhen; Xi, Yan; Jiang, Steve B; Wang, Ge

    2017-01-01

    Image guidance plays a critical role in radiotherapy. Currently, cone-beam computed tomography (CBCT) is routinely used in clinics for this purpose. While this modality can provide an attenuation image for therapeutic planning, low soft-tissue contrast affects the delineation of anatomical and pathological features. Efforts have recently been devoted to several MRI linear accelerator (LINAC) projects that lead to the successful combination of a full diagnostic MRI scanner with a radiotherapy machine. We present a new concept for the development of the MRI-LINAC system. Instead of combining a full MRI scanner with the LINAC platform, we propose using an interior MRI (iMRI) approach to image a specific region of interest (RoI) containing the radiation treatment target. While the conventional CBCT component still delivers a global image of the patient's anatomy, the iMRI offers local imaging of high soft-tissue contrast for tumor delineation. We describe a top-level system design for the integration of an iMRI component into an existing LINAC platform. We performed numerical analyses of the magnetic field for the iMRI to show potentially acceptable field properties in a spherical RoI with a diameter of 15 cm. This field could be shielded to a sufficiently low level around the LINAC region to avoid electromagnetic interference. Furthermore, we investigate the dosimetric impacts of this integration on the radiotherapy beam.

  9. A linear systems description of the CO2 laser based tangential imaging system

    International Nuclear Information System (INIS)

    Wright, E.L.J.; Nazikian, R.

    1995-01-01

    We demonstrate that the phase variation produced by the projection of the density fluctuations onto a laser beam aligned tangent to the magnetic field lines in a toroidal plasma is in fact a convolution of the density fluctuation profile in the tangency plane with a shift-invariant point spread function. Thus a spatial filter can be used to invert the corresponding transfer function to produce an undistorted image of the plasma density fluctuations at the tangency plane. Numerical simulations demonstrate that a spatial filter consisting of a simple and versatile step-function form of a Zernike phase mirror will recover a reasonably accurate image of the fluctuations
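
    The inversion idea, undoing a convolution with a known shift-invariant point spread function by a Fourier-domain filter, can be sketched in one dimension as below. The Gaussian PSF and the Wiener-style regularisation constant are assumptions for illustration, not the diagnostic's actual transfer function.

```python
# Blur with a shift-invariant PSF, then invert it with a regularised filter.
import numpy as np

n = 256
x = np.linspace(-1, 1, n)
truth = np.exp(-((x - 0.2) / 0.05) ** 2) + 0.5 * np.exp(-((x + 0.3) / 0.08) ** 2)

psf = np.exp(-(x / 0.1) ** 2)
psf /= psf.sum()                               # normalised, centred PSF

# Circular convolution of the true profile with the PSF.
blurred = np.real(np.fft.ifft(np.fft.fft(truth) * np.fft.fft(np.fft.ifftshift(psf))))

# Invert the transfer function with a small regularisation term.
H = np.fft.fft(np.fft.ifftshift(psf))
eps = 1e-3
recovered = np.real(np.fft.ifft(np.fft.fft(blurred) * np.conj(H) / (np.abs(H) ** 2 + eps)))

print("max |blurred - truth|  :", np.max(np.abs(blurred - truth)))
print("max |recovered - truth|:", np.max(np.abs(recovered - truth)))
```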

  10. Piecewise linear approximation: application to control rod step counting in a nuclear reactor core and image contours characterization

    International Nuclear Information System (INIS)

    Kaoutar, M.

    1986-09-01

    After a survey of the main algorithms for piecewise linear approximation, a new method is suggested. It consists of two stages: a sequential detection stage and an optimization stage, which derives from the general dynamic clustering principle. It is applied to control rod step counting in a nuclear reactor core and to the characterization of image contours. Another version of the method is also presented; its originality comes from allowing the number of line segments to vary during the iterations. A comparative study against existing methods attests to the efficiency and reliability of the proposed method [fr]
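
    A generic sequential piecewise-linear approximation can be sketched as follows: extend the current segment until the worst point-to-chord deviation exceeds a tolerance, then start a new segment. This is an illustrative stand-in, not the report's two-stage detection/optimization algorithm.

```python
# Sequential piecewise-linear segmentation of a 1-D signal.
import numpy as np

def piecewise_linear(y, tol=0.05):
    """Return indices where a new linear segment starts."""
    breakpoints = [0]
    start = 0
    for end in range(2, len(y)):
        xs = np.arange(start, end + 1)
        chord = y[start] + (y[end] - y[start]) * (xs - start) / (end - start)
        if np.max(np.abs(y[start:end + 1] - chord)) > tol:
            breakpoints.append(end - 1)      # close segment before the violation
            start = end - 1
    breakpoints.append(len(y) - 1)
    return breakpoints

t = np.linspace(0, 1, 200)
signal = np.where(t < 0.5, t, 1.0 - t)       # a triangle wave: two segments
print("segment breakpoints:", piecewise_linear(signal, tol=0.01))
```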

  11. Periodic quality control of a linear accelerator using electronic portal imaging

    International Nuclear Information System (INIS)

    Planes Meseguer, D.; Dorado Rodriguez, M. P.; Esposito, R. D.

    2011-01-01

    In this paper we present our solution for performing the monthly periodic quality control (CP) of the geometric-mechanical parameters and the multileaf collimator (MLC) of a linear accelerator, using the electronic portal imaging (EPI) system. We have developed specific programs created with free software. The monitoring results are automatically stored on our web server, along with other information generated in our service.

  12. Digital simulation of staining in histopathology multispectral images: enhancement and linear transformation of spectral transmittance.

    Science.gov (United States)

    Bautista, Pinky A; Yagi, Yukako

    2012-05-01

    Hematoxylin and eosin (H&E) stain is currently the most popular for routine histopathology staining. Special and/or immuno-histochemical (IHC) staining is often requested to further corroborate the initial diagnosis on H&E stained tissue sections. Digital simulation of staining (or digital staining) can be a very valuable tool to produce the desired stained images from the H&E stained tissue sections instantaneously. We present an approach to digital staining of histopathology multispectral images by combining the effects of spectral enhancement and spectral transformation. Spectral enhancement is accomplished by shifting the N-band original spectrum of the multispectral pixel with the weighted difference between the pixel's original and estimated spectrum; the spectrum is estimated using M transformed to the spectral configuration associated to its reaction to a specific stain by utilizing an N × N transformation matrix, which is derived through application of least mean squares method to the enhanced and target spectral transmittance samples of the different tissue components found in the image. Results of our experiments on the digital conversion of an H&E stained multispectral image to its Masson's trichrome stained equivalent show the viability of the method.

  13. Analysis and Segmentation of Face Images using Point Annotations and Linear Subspace Techniques

    DEFF Research Database (Denmark)

    Stegmann, Mikkel Bille

    2002-01-01

    This report provides an analysis of 37 annotated frontal face images. All results presented have been obtained using our freely available Active Appearance Model (AAM) implementation. To ensure the reproducibility of the presented experiments, the data set has also been made available. As such...

  14. Characterization of high density SiPM non-linearity and energy resolution for prompt gamma imaging applications

    Science.gov (United States)

    Regazzoni, V.; Acerbi, F.; Cozzi, G.; Ferri, A.; Fiorini, C.; Paternoster, G.; Piemonte, C.; Rucatti, D.; Zappalà, G.; Zorzi, N.; Gola, A.

    2017-07-01

    Fondazione Bruno Kessler (FBK) (Trento, Italy) has recently introduced High Density (HD) and Ultra High-Density (UHD) SiPMs, featuring very small micro-cell pitch. The high cell density is a very important factor to improve the linearity of the SiPM in high-dynamic-range applications, such as the scintillation light readout in high-energy gamma-ray spectroscopy and in prompt gamma imaging for proton therapy. The energy resolution at high energies is a trade-off between the excess noise factor caused by the non-linearity of the SiPM and the photon detection efficiency of the detector. To study these effects, we developed a new setup that simulates the LYSO light emission in response to gamma photons up to 30 MeV, using a pulsed light source. We measured the non-linearity and energy resolution vs. energy of the FBK RGB-HD and RGB-UHD SiPM technologies. We considered five different cell sizes, ranging from 10 μm up to 25 μm. With the UHD technology we were able to observe a remarkable reduction of the SiPM non-linearity, less than 5% at 5 MeV with 10 μm cells, which should be compared to a non-linearity of 50% with 25 μm-cell HD-SiPMs. With the same setup, we also measured the different components of the energy resolution (intrinsic, statistical, detector and electronic noise) vs. cell size, over-voltage and energy and we separated the different sources of excess noise factor.
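
    The trade-off described above is often summarised with the standard finite-cell saturation model N_fired = N_cells·(1 − exp(−N_pe/N_cells)); the snippet below evaluates it for two illustrative cell counts standing in for coarse- and fine-pitch devices. All numbers are assumptions, not FBK's measured values.

```python
# Finite-cell saturation model of SiPM non-linearity (illustrative numbers).
import numpy as np

def fired_cells(n_pe, n_cells):
    return n_cells * (1.0 - np.exp(-n_pe / n_cells))

def nonlinearity(n_pe, n_cells):
    return 1.0 - fired_cells(n_pe, n_cells) / n_pe

n_pe = 10_000.0                          # photoelectrons in a bright pulse (assumed)
for pitch_um, n_cells in [(25, 14_400), (10, 90_000)]:   # rough stand-in cell counts
    print(f"{pitch_um} um cells: non-linearity = {nonlinearity(n_pe, n_cells):.1%}")
```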

  15. Clinically Practical Approach for Screening of Low Muscularity Using Electronic Linear Measures on Computed Tomography Images in Critically Ill Patients.

    Science.gov (United States)

    Avrutin, Egor; Moisey, Lesley L; Zhang, Roselyn; Khattab, Jenna; Todd, Emma; Premji, Tahira; Kozar, Rosemary; Heyland, Daren K; Mourtzakis, Marina

    2017-12-06

    Computed tomography (CT) scans performed during routine hospital care offer the opportunity to quantify skeletal muscle and predict mortality and morbidity in intensive care unit (ICU) patients. Existing methods of muscle cross-sectional area (CSA) quantification require specialized software, training, and time commitment that may not be feasible in a clinical setting. In this article, we explore a new screening method to identify patients with low muscle mass. We analyzed 145 scans of elderly ICU patients (≥65 years old) using a combination of measures obtained with a digital ruler, commonly found on hospital radiological software. The psoas and paraspinal muscle groups at the level of the third lumbar vertebra (L3) were evaluated by using 2 linear measures each and compared with an established method of CT image analysis of total muscle CSA in the L3 region. There was a strong association between linear measures of psoas and paraspinal muscle groups and total L3 muscle CSA (R 2 = 0.745, P < 0.001). Linear measures, age, and sex were included as covariates in a multiple logistic regression to predict those with low muscle mass; receiver operating characteristic (ROC) area under the curve (AUC) of the combined psoas and paraspinal linear index model was 0.920. Intraclass correlation coefficients (ICCs) were used to evaluate intrarater and interrater reliability, resulting in scores of 0.979 (95% CI: 0.940-0.992) and 0.937 (95% CI: 0.828-0.978), respectively. A digital ruler can reliably predict L3 muscle CSA, and these linear measures may be used to identify critically ill patients with low muscularity who are at risk for worse clinical outcomes. © 2017 American Society for Parenteral and Enteral Nutrition.
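
    The screening analysis described above, linear measures plus age and sex feeding a logistic regression evaluated by ROC AUC, can be sketched as below on simulated data; all feature names, coefficients and sample values are placeholders.

```python
# Logistic-regression screening model evaluated by ROC AUC (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(5)
n = 145
X = np.column_stack([
    rng.normal(30, 5, n),      # psoas linear index (mm), placeholder
    rng.normal(45, 6, n),      # paraspinal linear index (mm), placeholder
    rng.normal(75, 6, n),      # age (years)
    rng.integers(0, 2, n),     # sex (0/1)
])
# "True" low-muscularity label driven mainly by the two linear indices.
logit = -0.15 * (X[:, 0] - 30) - 0.1 * (X[:, 1] - 45) + 0.3 * X[:, 3]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print(f"in-sample ROC AUC: {auc:.3f}")
```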

  16. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    Science.gov (United States)

    Downie, John D.; Goodman, Joseph W.

    1989-10-01

    The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.

  17. Addressing Phase Errors in Fat-Water Imaging Using a Mixed Magnitude/Complex Fitting Method

    Science.gov (United States)

    Hernando, D.; Hines, C. D. G.; Yu, H.; Reeder, S.B.

    2012-01-01

    Accurate, noninvasive measurements of liver fat content are needed for the early diagnosis and quantitative staging of nonalcoholic fatty liver disease. Chemical shift-based fat quantification methods acquire images at multiple echo times using a multiecho spoiled gradient echo sequence, and provide fat fraction measurements through postprocessing. However, phase errors, such as those caused by eddy currents, can adversely affect fat quantification. These phase errors are typically most significant at the first echo of the echo train, and introduce bias in complex-based fat quantification techniques. These errors can be overcome using a magnitude-based technique (where the phase of all echoes is discarded), but at the cost of significantly degraded signal-to-noise ratio, particularly for certain choices of echo time combinations. In this work, we develop a reconstruction method that overcomes these phase errors without the signal-to-noise ratio penalty incurred by magnitude fitting. This method discards the phase of the first echo (which is often corrupted) while maintaining the phase of the remaining echoes (where phase is unaltered). We test the proposed method on 104 patient liver datasets (from 52 patients, each scanned twice), where the fat fraction measurements are compared to coregistered spectroscopy measurements. We demonstrate that mixed fitting is able to provide accurate fat fraction measurements with high signal-to-noise ratio and low bias over a wide choice of echo combinations. PMID:21713978

  18. Live cell linear dichroism imaging reveals extensive membrane ruffling within the docking structure of natural killer cell immune synapses

    DEFF Research Database (Denmark)

    Benninger, Richard K P; Vanherberghen, Bruno; Young, Stephen

    2009-01-01

    We have applied fluorescence imaging of two-photon linear dichroism to measure the subresolution organization of the cell membrane during formation of the activating (cytolytic) natural killer (NK) cell immune synapse (IS). This approach revealed that the NK cell plasma membrane is convoluted...... into ruffles at the periphery, but not in the center of a mature cytolytic NK cell IS. Time-lapse imaging showed that the membrane ruffles formed at the initial point of contact between NK cells and target cells and then spread radially across the intercellular contact as the size of the IS increased, becoming...... absent from the center of the mature synapse. Understanding the role of such extensive membrane ruffling in the assembly of cytolytic synapses is an intriguing new goal....

  19. Single shot trajectory design for region-specific imaging using linear and nonlinear magnetic encoding fields.

    Science.gov (United States)

    Layton, Kelvin J; Gallichan, Daniel; Testud, Frederik; Cocosco, Chris A; Welz, Anna M; Barmet, Christoph; Pruessmann, Klaas P; Hennig, Jürgen; Zaitsev, Maxim

    2013-09-01

    It has recently been demonstrated that nonlinear encoding fields result in a spatially varying resolution. This work develops an automated procedure to design single-shot trajectories that create a local resolution improvement in a region of interest. The technique is based on the design of optimized local k-space trajectories and can be applied to arbitrary hardware configurations that employ any number of linear and nonlinear encoding fields. The trajectories designed in this work are tested with the currently available hardware setup consisting of three standard linear gradients and two quadrupolar encoding fields generated from a custom-built gradient insert. A field camera is used to measure the actual encoding trajectories up to third-order terms, enabling accurate reconstructions of these demanding single-shot trajectories, although the eddy current and concomitant field terms of the gradient insert have not been completely characterized. The local resolution improvement is demonstrated in phantom and in vivo experiments. Copyright © 2012 Wiley Periodicals, Inc.

  20. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    Science.gov (United States)

    Downie, John D.

    1990-01-01

    A ground-based adaptive optics imaging telescope system attempts to improve image quality by detecting and correcting for atmospherically induced wavefront aberrations. The required control computations during each cycle will take a finite amount of time. Longer time delays result in larger values of residual wavefront error variance since the atmosphere continues to change during that time. Thus an optical processor may be well-suited for this task. This paper presents a study of the accuracy requirements in a general optical processor that will make it competitive with, or superior to, a conventional digital computer for the adaptive optics application. An optimization of the adaptive optics correction algorithm with respect to an optical processor's degree of accuracy is also briefly discussed.

  1. Comparison of linear measurements and analyses taken from plaster models and three-dimensional images.

    Science.gov (United States)

    Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins

    2014-11-01

    Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and the reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). With the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic(®), Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif). For the digital images, the measurement tools used were those from the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). The majority of the measurements, obtained using the caliper and O3d were identical, and both were significantly different from those obtained using the MS. Intra-examiner agreement was lowest when using the MS. The results demonstrated that the accuracy and reproducibility of the tooth measurements and analyses from the plaster models using the caliper and from the digital models using O3d software were identical.
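
    The analysis above relies on the Dahlberg formula for the method error between paired measurements. A minimal sketch of that computation is given below; the sample values are invented purely for illustration.

      # Minimal sketch of Dahlberg's method error for paired measurements
      # (e.g., caliper vs. digital model), using the usual formula
      # sqrt(sum(d^2) / (2n)) for n pairs of duplicate measurements.
      import numpy as np

      def dahlberg_error(x1, x2):
          d = np.asarray(x1, float) - np.asarray(x2, float)
          return np.sqrt(np.sum(d ** 2) / (2 * len(d)))

      # Example: the same ten tooth widths (mm) measured by two methods.
      caliper = [8.1, 7.9, 10.2, 6.5, 7.0, 8.4, 9.1, 6.8, 7.7, 10.0]
      digital = [8.0, 8.0, 10.1, 6.6, 7.1, 8.3, 9.2, 6.7, 7.8, 9.9]
      print(dahlberg_error(caliper, digital))   # ~0.07 mm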

  2. A Dynamic Range Enhanced Readout Technique with a Two-Step TDC for High Speed Linear CMOS Image Sensors

    Directory of Open Access Journals (Sweden)

    Zhiyuan Gao

    2015-11-01

    Full Text Available This paper presents a dynamic range (DR enhanced readout technique with a two-step time-to-digital converter (TDC for high speed linear CMOS image sensors. A multi-capacitor and self-regulated capacitive trans-impedance amplifier (CTIA structure is employed to extend the dynamic range. The gain of the CTIA is auto adjusted by switching different capacitors to the integration node asynchronously according to the output voltage. A column-parallel ADC based on a two-step TDC is utilized to improve the conversion rate. The conversion is divided into coarse phase and fine phase. An error calibration scheme is also proposed to correct quantization errors caused by propagation delay skew within −Tclk~+Tclk. A linear CMOS image sensor pixel array is designed in the 0.13 μm CMOS process to verify this DR-enhanced high speed readout technique. The post simulation results indicate that the dynamic range of readout circuit is 99.02 dB and the ADC achieves 60.22 dB SNDR and 9.71 bit ENOB at a conversion rate of 2 MS/s after calibration, with 14.04 dB and 2.4 bit improvement, compared with SNDR and ENOB of that without calibration.

  3. The Role of the Promotional Mix in Forming the Brand Image of City Car Products in the City of Pekanbaru (A Case Study of the Honda Brio)

    OpenAIRE

    Musfar, Tengku Firli; Nursanti, Aida; Karlina, Monina Selfi

    2014-01-01

    The purpose of this research is to examine the role of the promotional mix variables, consisting of advertising, personal selling, sales promotion, public relations, and direct marketing, in forming the brand image of the Honda Brio car in Pekanbaru. The sample was selected using non-probability sampling. The selected sample consists of consumers who use the Honda Brio car, specifically those who live in Pekanbaru city, have been involved in Honda Brio's promotional mix activity, and the consumer that consid...

  4. Fast Depiction Invariant Visual Similarity for Content Based Image Retrieval Based on Data-driven Visual Similarity using Linear Discriminant Analysis

    Science.gov (United States)

    Wihardi, Y.; Setiawan, W.; Nugraha, E.

    2018-01-01

    In this research we build a CBIR system (CBIRS) based on a learned distance/similarity function using Linear Discriminant Analysis (LDA) and Histogram of Oriented Gradients (HoG) features. Our method is invariant to the depiction of an image, covering image-to-image, sketch-to-image, and painting-to-image similarity. LDA can decrease execution time compared to the state-of-the-art method, but it still needs improvement in terms of accuracy. The inaccuracy in our experiments arises because we did not perform a sliding-window search and because of the low number of negative samples drawn from natural-world images.
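
    As a rough illustration of the pipeline described above (HoG features, an LDA-learned discriminant space, and distance-based ranking), the sketch below uses scikit-image and scikit-learn. The parameter choices and helper names are illustrative rather than the authors' configuration, and grayscale input images of a common size are assumed.

      # Sketch of depiction-invariant retrieval: HoG features projected with LDA,
      # then ranked by Euclidean distance in the discriminant space.
      import numpy as np
      from skimage.feature import hog
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      def hog_feature(img):
          # img: 2D grayscale array; all images assumed to share the same size
          return hog(img, orientations=9, pixels_per_cell=(8, 8),
                     cells_per_block=(2, 2), feature_vector=True)

      def fit_lda(train_imgs, labels):
          X = np.stack([hog_feature(im) for im in train_imgs])
          lda = LinearDiscriminantAnalysis(n_components=min(len(set(labels)) - 1, 50))
          lda.fit(X, labels)
          return lda

      def rank_database(lda, query_img, db_imgs):
          q = lda.transform(hog_feature(query_img)[None, :])
          D = lda.transform(np.stack([hog_feature(im) for im in db_imgs]))
          dists = np.linalg.norm(D - q, axis=1)
          return np.argsort(dists)        # database indices, most similar first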

  5. The first clinical implementation of real-time image-guided adaptive radiotherapy using a standard linear accelerator.

    Science.gov (United States)

    Keall, Paul J; Nguyen, Doan Trang; O'Brien, Ricky; Caillet, Vincent; Hewson, Emily; Poulsen, Per Rugaard; Bromley, Regina; Bell, Linda; Eade, Thomas; Kneebone, Andrew; Martin, Jarad; Booth, Jeremy T

    2018-04-01

    Until now, real-time image guided adaptive radiation therapy (IGART) has been the domain of dedicated cancer radiotherapy systems. The purpose of this study was to clinically implement and investigate real-time IGART using a standard linear accelerator. We developed and implemented two real-time technologies for standard linear accelerators: (1) Kilovoltage Intrafraction Monitoring (KIM) that finds the target and (2) multileaf collimator (MLC) tracking that aligns the radiation beam to the target. Eight prostate SABR patients were treated with this real-time IGART technology. The feasibility, geometric accuracy and the dosimetric fidelity were measured. Thirty-nine out of forty fractions with real-time IGART were successful (95% confidence interval 87-100%). The geometric accuracy of the KIM system was -0.1 ± 0.4, 0.2 ± 0.2 and -0.1 ± 0.6 mm in the LR, SI and AP directions, respectively. The dose reconstruction showed that real-time IGART more closely reproduced the planned dose than that without IGART. For the largest motion fraction, with real-time IGART 100% of the CTV received the prescribed dose; without real-time IGART only 95% of the CTV would have received the prescribed dose. The clinical implementation of real-time image-guided adaptive radiotherapy on a standard linear accelerator using KIM and MLC tracking is feasible. This achievement paves the way for real-time IGART to be a mainstream treatment option. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Short linear shadows connecting pulmonary segmental arteries to oblique fissures in volumetric thin-section CT images: comparing CT, micro-CT and histopathology

    International Nuclear Information System (INIS)

    Guan, Chun-Shuang; Ma, Da-Qing; Chen, Jiang-Hong; Chen, Bu-Dong; Cui, Dun; Zhang, Yan-Song; Liu, Wei-Hua

    2016-01-01

    To retrospectively evaluate short linear shadows connecting pulmonary segmental arteries to oblique fissures in thin-section CT images and determine their anatomical basis. CT scanning was performed on 108 patients and 11 lung specimens with no lung diseases around the oblique fissures or hila. Two radiologists evaluated the images. The parameters included the length and thickness of the short linear shadows, pulmonary segmental artery variations, and traction on the interlobar fissures. The short linear shadows were not related to sex, age, or smoking history. The lengths of the short linear shadows were generally within 10 mm. The thicknesses of the short linear shadows ranged from 1 to 2 mm. Of the patients, 26.9 % showed pulmonary segmental artery variations; 66.7 % of the short linear shadows exerted traction on the oblique fissures. In three-dimensional images, the short linear shadows appeared as arc planes, with one side edge connected to the oblique fissure and the other connected to a pulmonary segmental artery. On the tissue slices, the short linear shadow exhibited a band structure composed of connective tissues, small blood vessels, and small lymphatic vessels. Short linear shadows are a type of normal intrapulmonary membrane and can maintain the integrity of the oblique fissures and hilar structure. (orig.)

  7. Determination of renewable energy yield from mixed waste material from the use of novel image analysis methods.

    Science.gov (United States)

    Wagland, S T; Dudley, R; Naftaly, M; Longhurst, P J

    2013-11-01

    Two novel techniques are presented in this study which together aim to provide a system able to determine the renewable energy potential of mixed waste materials. An image analysis tool was applied to two waste samples prepared using known quantities of source-segregated recyclable materials. The technique was used to determine the composition of the wastes, where through the use of waste component properties the biogenic content of the samples was calculated. The percentage renewable energy determined by image analysis for each sample was accurate to within 5% of the actual values calculated. Microwave-based multiple-point imaging (AutoHarvest) was used to demonstrate the ability of such a technique to determine the moisture content of mixed samples. This proof-of-concept experiment was shown to produce moisture measurement accurate to within 10%. Overall, the image analysis tool was able to determine the renewable energy potential of the mixed samples, and the AutoHarvest should enable the net calorific value calculations through the provision of moisture content measurements. The proposed system is suitable for combustion facilities, and enables the operator to understand the renewable energy potential of the waste prior to combustion. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Spectral survey of helium lines in a linear plasma device for use in HELIOS imaging

    Science.gov (United States)

    Ray, H. B.; Biewer, T. M.; Fehling, D. T.; Isler, R. C.; Unterberg, E. A.

    2016-11-01

    Fast visible cameras and a filterscope are used to examine the visible light emission from Oak Ridge National Laboratory's Proto-MPEX. The filterscope has been configured to perform helium line ratio measurements using emission lines at 667.9, 728.1, and 706.5 nm. The measured lines should be mathematically inverted and the ratios compared to a collisional radiative model (CRM) to determine Te and ne. Increasing the number of measurement chords through the plasma improves the inversion calculation and subsequent Te and ne localization. For the filterscope, one spatial chord measurement requires three photomultiplier tubes (PMTs) connected to pellicle beam splitters. Multiple, fast visible cameras with narrowband filters are an alternate technique for performing these measurements with superior spatial resolution. Each camera contains millions of pixels; each pixel is analogous to one filterscope PMT. The data can then be inverted and the ratios compared to the CRM to determine 2-dimensional "images" of Te and ne in the plasma. An assessment is made in this paper of the candidate He I emission lines for an imaging technique.

  9. Variable Selection in Heterogeneous Datasets: A Truncated-rank Sparse Linear Mixed Model with Applications to Genome-wide Association Studies.

    Science.gov (United States)

    Wang, Haohan; Aragam, Bryon; Xing, Eric P

    2018-04-26

    A fundamental and important challenge in modern datasets of ever-increasing dimensionality is variable selection, which has taken on renewed interest recently due to the growth of biological and medical datasets with complex, non-i.i.d. structures. Naïvely applying classical variable selection methods such as the Lasso to such datasets may lead to a large number of false discoveries. Motivated by genome-wide association studies in genetics, we study the problem of variable selection for datasets arising from multiple subpopulations, when this underlying population structure is unknown to the researcher. We propose a unified framework for sparse variable selection that adaptively corrects for population structure via a low-rank linear mixed model. Most importantly, the proposed method does not require prior knowledge of sample structure in the data and adaptively selects a covariance structure of the correct complexity. Through extensive experiments, we illustrate the effectiveness of this framework over existing methods. Further, we test our method on three different genomic datasets from plants, mice, and humans, and discuss the knowledge we discover with our method. Copyright © 2018. Published by Elsevier Inc.
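
    The record does not give the algorithmic details of the truncated-rank model, but the general trick behind LMM-corrected sparse selection can be sketched as follows: whiten the phenotype and genotypes with the covariance implied by a kinship matrix, then run an ordinary Lasso on the rotated data. The variance components are assumed known in this sketch, whereas a real implementation (including the paper's) would estimate them and the covariance structure from the data.

      # Illustrative sketch of LMM-corrected sparse variable selection (not the
      # paper's truncated-rank algorithm): whiten y and X with the covariance
      # sigma_g^2 * K + sigma_e^2 * I implied by a kinship matrix K, then run a
      # Lasso on the rotated data.  Variance components are assumed known here.
      import numpy as np
      from sklearn.linear_model import Lasso

      def kinship(G):                       # G: n x p matrix of centred genotypes
          return G @ G.T / G.shape[1]

      def whiten(y, X, K, sigma_g2, sigma_e2):
          cov = sigma_g2 * K + sigma_e2 * np.eye(len(y))
          w, V = np.linalg.eigh(cov)
          W = V @ np.diag(w ** -0.5) @ V.T  # cov^{-1/2}
          return W @ y, W @ X

      def lmm_lasso(y, X, G, sigma_g2=0.5, sigma_e2=0.5, alpha=0.01):
          y_t, X_t = whiten(y, X, kinship(G), sigma_g2, sigma_e2)
          return Lasso(alpha=alpha).fit(X_t, y_t).coef_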

  10. Experimental Effects and Individual Differences in Linear Mixed Models: Estimating the Relationship between Spatial, Object, and Attraction Effects in Visual Attention

    Science.gov (United States)

    Kliegl, Reinhold; Wei, Ping; Dambacher, Michael; Yan, Ming; Zhou, Xiaolin

    2011-01-01

    Linear mixed models (LMMs) provide a still underused methodological perspective on combining experimental and individual-differences research. Here we illustrate this approach with two-rectangle cueing in visual attention (Egly et al., 1994). We replicated previous experimental cue-validity effects relating to a spatial shift of attention within an object (spatial effect), to attention switch between objects (object effect), and to the attraction of attention toward the display centroid (attraction effect), also taking into account the design-inherent imbalance of valid and other trials. We simultaneously estimated variance/covariance components of subject-related random effects for these spatial, object, and attraction effects in addition to their mean reaction times (RTs). The spatial effect showed a strong positive correlation with mean RT and a strong negative correlation with the attraction effect. The analysis of individual differences suggests that slow subjects engage attention more strongly at the cued location than fast subjects. We compare this joint LMM analysis of experimental effects and associated subject-related variances and correlations with two frequently used alternative statistical procedures. PMID:21833292
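
    A rough Python analogue of such an LMM, with by-subject random intercepts and slopes for the spatial, object, and attraction effects, can be written with statsmodels as below. The column names are placeholders for a long-format, trial-level data frame, not the authors' actual variables, and the original analysis was not necessarily carried out with this package.

      # Sketch of an LMM with by-subject random intercepts and slopes for the
      # spatial, object, and attraction effects, using statsmodels.
      import pandas as pd
      import statsmodels.formula.api as smf

      def fit_cueing_lmm(df: pd.DataFrame):
          # df columns (placeholders): rt, spatial, object_, attraction, subject
          model = smf.mixedlm("rt ~ spatial + object_ + attraction",
                              data=df,
                              groups=df["subject"],
                              re_formula="~ spatial + object_ + attraction")
          return model.fit(reml=True)

      # result = fit_cueing_lmm(trials)
      # print(result.summary())   # fixed effects plus random-effect (co)variances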

  11. A harmonic approximation of intramolecular vibrations in a mixed quantum-classical methodology: Linear absorbance of a dissolved Pheophorbid-a molecule as an example

    International Nuclear Information System (INIS)

    Megow, Joerg; Kulesza, Alexander; Qu Zhengwang; Ronneberg, Thomas; Bonacic-Koutecky, Vlasta; May, Volkhard

    2010-01-01

    Graphical abstract: Structure of a single Pheo (green: C-atoms, blue: N-atoms, red; O-atoms, light grey: H-atoms). - Abstract: Linear absorption spectra of a single Pheophorbid-a molecule (Pheo) dissolved in ethanol are calculated in a mixed quantum-classical approach. In this computational scheme the absorbance is mainly determined by the time-dependent fluctuations of the energy gap between the Pheo ground and excited electronic state. The actual magnitude of the energy gap is caused by the electrostatic solvent solute coupling as well as by contributions due to intra Pheo vibrations. For the latter a new approach is proposed which is based on precalculated potential energy surfaces (PES) described in a harmonic approximation. To get the respective nuclear equilibrium configurations and Hessian matrices of the two involved electronic states we carried out the necessary electronic structure calculations in a DFT-framework. Since the Pheo changes its spatial orientation in the course of a MD run, the nuclear equilibrium configurations change their spatial position, too. Introducing a particular averaging procedure, these configurations are determined from the actual MD trajectories. The usability of the approach is underlined by a perfect reproduction of experimental data. This also demonstrates that our proposed method is suitable for the description of more complex systems in future investigations.

  12. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    Science.gov (United States)

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-03-13

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixed measurement error partially yielded better results compared with the use of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of the ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
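
    The contrast between the two error types can be illustrated with a toy simulation: classical error attenuates a regression slope by roughly var(x)/(var(x) + var(u)), whereas Berkson error of the same size leaves the slope unbiased (though it inflates the residual variance). The sketch below is only a schematic illustration of that point, not the bias-analysis machinery developed in the paper.

      # Toy simulation of the two error types: classical error (observed = true +
      # noise) attenuates the slope; Berkson error (true = assigned + noise) does not.
      import numpy as np

      rng = np.random.default_rng(0)
      n, beta, sd_u = 100_000, 1.0, 1.0

      x_true = rng.normal(0.0, 1.0, n)
      y = beta * x_true + rng.normal(0.0, 0.5, n)
      x_classical = x_true + rng.normal(0.0, sd_u, n)     # measured with error

      x_assigned = rng.normal(0.0, 1.0, n)                # e.g. fixed-site value
      y_berkson = beta * (x_assigned + rng.normal(0.0, sd_u, n)) \
                  + rng.normal(0.0, 0.5, n)

      slope = lambda x, y: np.polyfit(x, y, 1)[0]
      print(slope(x_classical, y))          # ~0.5  (attenuated)
      print(slope(x_assigned, y_berkson))   # ~1.0  (unbiased)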

  13. The coefficient of determination R2 and intra-class correlation coefficient from generalized linear mixed-effects models revisited and expanded.

    Science.gov (United States)

    Nakagawa, Shinichi; Johnson, Paul C D; Schielzeth, Holger

    2017-09-01

    The coefficient of determination R2 quantifies the proportion of variance explained by a statistical model and is an important summary statistic of biological interest. However, estimating R2 for generalized linear mixed models (GLMMs) remains challenging. We have previously introduced a version of R2 that we called R2GLMM for Poisson and binomial GLMMs, but not for other distributional families. Similarly, we earlier discussed how to estimate intra-class correlation coefficients (ICCs) using Poisson and binomial GLMMs. In this paper, we generalize our methods to all other non-Gaussian distributions, in particular to negative binomial and gamma distributions that are commonly used for modelling biological data. While expanding our approach, we highlight two useful concepts for biologists, Jensen's inequality and the delta method, both of which help us in understanding the properties of GLMMs. Jensen's inequality has important implications for biologically meaningful interpretation of GLMMs, whereas the delta method allows a general derivation of variance associated with non-Gaussian distributions. We also discuss some special considerations for binomial GLMMs with binary or proportion data. We illustrate the implementation of our extension by worked examples from the field of ecology and evolution in the R environment. However, our method can be used across disciplines and regardless of statistical environments. © 2017 The Author(s).
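
    In the simplest (Gaussian) case, the marginal and conditional R2 described above reduce to simple ratios of variance components. The sketch below shows only that reduction; the family-specific observation-level variances the paper derives for non-Gaussian GLMMs (via the delta method and related approximations) are not included.

      # Sketch of marginal and conditional R^2 from variance components in the
      # simplest (Gaussian LMM) case: var_fixed from the linear predictor,
      # var_random summed over random effects, var_resid the residual variance.
      import numpy as np

      def r2_glmm(linear_predictor, var_random, var_resid):
          var_fixed = np.var(linear_predictor)
          total = var_fixed + var_random + var_resid
          r2_marginal = var_fixed / total                   # fixed effects only
          r2_conditional = (var_fixed + var_random) / total # fixed + random effects
          return r2_marginal, r2_conditional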

  14. Advantage of make-to-stock strategy based on linear mixed-effect model: a comparison with regression, autoregressive, time series, and exponential smoothing models

    Directory of Open Access Journals (Sweden)

    Yu-Pin Liao

    2017-11-01

    Full Text Available In the past few decades, demand forecasting has become relatively difficult due to rapid changes in the global environment. This research illustrates the use of the make-to-stock (MTS) production strategy in order to explain how forecasting plays an essential role in business management. The linear mixed-effect (LME) model has been extensively developed and is widely applied in various fields. However, no study has used the LME model for business forecasting. We suggest that the LME model be used as a tool for prediction and to overcome environmental complexity. The data analysis is based on real data from an international display company, where the company needs accurate demand forecasting before adopting an MTS strategy. The forecasting result from the LME model is compared to commonly used approaches, including the regression model, autoregressive model, time series model, and exponential smoothing model, with the results revealing that the prediction performance of the LME model is more stable than that of the other methods. Furthermore, product types in the data are regarded as a random effect in the LME model, hence the demands of all types can be predicted simultaneously using a single LME model. In contrast, some approaches require splitting the data into different type categories and then predicting the demand of each type by establishing a separate model for each. This feature also demonstrates the practicability of the LME model in real business operations.

  15. An energy integrated, multi-microgrid, MILP (mixed-integer linear programming) approach for residential distributed energy system planning – A South Australian case-study

    International Nuclear Information System (INIS)

    Wouters, Carmen; Fraga, Eric S.; James, Adrian M.

    2015-01-01

    The integration of distributed generation units and microgrids in the current grid infrastructure requires an efficient and cost effective local energy system design. A mixed-integer linear programming model is presented to identify such optimal design. The electricity as well as the space heating and cooling demands of a small residential neighbourhood are satisfied through the consideration and combined use of distributed generation technologies, thermal units and energy storage with an optional interconnection with the central grid. Moreover, energy integration is allowed in the form of both optimised pipeline networks and microgrid operation. The objective is to minimise the total annualised cost of the system to meet its yearly energy demand. The model integrates the operational characteristics and constraints of the different technologies for several scenarios in a South Australian setting and is implemented in GAMS. The impact of energy integration is analysed, leading to the identification of key components for residential energy systems. Additionally, a multi-microgrid concept is introduced to allow for local clustering of households within neighbourhoods. The robustness of the model is shown through sensitivity analysis, up-scaling and an effort to address the variability of solar irradiation. - Highlights: • Distributed energy system planning is employed on a small residential scale. • Full energy integration is employed based on microgrid operation and tri-generation. • An MILP for local clustering of households in multi-microgrids is developed. • Micro combined heat and power units are key components for residential microgrids
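
    The record describes an MILP formulated in GAMS; a much smaller sketch of the same kind of decision problem (binary installation choices plus dispatch to meet a demand profile at minimum annualised cost) can be written in Python with PuLP, as below. All technology data are invented purely for illustration and the formulation is far simpler than the paper's multi-microgrid, energy-integrated model.

      # Minimal MILP sketch: which units to install, and how to dispatch them,
      # to meet an electricity demand profile at minimum annualised cost.
      from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

      demand = [3.0, 2.5, 4.0, 5.5]          # kWh per period (illustrative)
      techs = {                              # capacity, annualised fixed cost, variable cost/kWh
          "chp":  dict(cap=4.0,  fixed=800.0, var=0.10),
          "pv":   dict(cap=2.0,  fixed=300.0, var=0.00),
          "grid": dict(cap=99.0, fixed=0.0,   var=0.30),
      }

      prob = LpProblem("residential_des", LpMinimize)
      build = {k: LpVariable(f"build_{k}", cat=LpBinary) for k in techs}
      gen = {(k, t): LpVariable(f"gen_{k}_{t}", lowBound=0)
             for k in techs for t in range(len(demand))}

      # Objective: annualised fixed costs plus operating costs
      prob += lpSum(techs[k]["fixed"] * build[k] for k in techs) + \
              lpSum(techs[k]["var"] * gen[k, t] for k in techs for t in range(len(demand)))

      for t, d in enumerate(demand):
          prob += lpSum(gen[k, t] for k in techs) >= d            # meet demand
      for k in techs:
          for t in range(len(demand)):
              prob += gen[k, t] <= techs[k]["cap"] * build[k]     # dispatch only if built

      prob.solve()
      print({k: build[k].value() for k in techs})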

  16. WE-DE-BRA-08: A Linear Accelerator Target Allowing Rapid Switching Between Treatment and High-Contrast Imaging Modes

    Energy Technology Data Exchange (ETDEWEB)

    Yewondwossen, M; Robar, J; Parsons, D [Dalhousie University, Halifax, NS (Canada)

    2016-06-15

    Purpose: During radiotherapy treatment, lung tumors can display substantial respiratory motion. This motion usually necessitates enlarged treatment margins to provide full tumour coverage. Unfortunately, these margins limit the dose that can be prescribed for tumour control and cause complications to normal tissue. Options for real-time methods of direct detection of tumour position, and particularly those that obviate the need for inserted fiducial markers, are limited. We propose a method of tumor tracking without implanted fiducial markers using a novel fast switching-target that toggles between a FFF copper/tungsten therapy mode and a FFF low-Z target mode for imaging. In this work we demonstrate proof-of-concept of this new technology. Methods: The prototype includes two targets: i) a FFF copper/tungsten target equivalent to that in the Varian 2100 EX 6 MV, and ii) a low-Z (carbon) target with a thickness of 110% of continuous slowing down approximation range (CSDA) at 7 MeV. The two targets can be exchanged with a custom made linear slide and motor-driven actuator. The usefulness of the switching-target concept is demonstrated through experimental BEV Planar images acquired with continual treatment and imaging at a user-defined period. Results: The prototype switching-target demonstrates that two recent advances in linac technology (FFF target for therapy and low-Z target) can be combined with synergy. The switching-target approach offers the capacity for rapid switching between treatment and high-contrast imaging modes, allowing intrafractional tracking, as demonstrated in this work with dynamic breathing phantom. By using a single beam-line, the design is streamlined and may obviate the need for an auxiliary imaging system (e.g., kV OBI.) Conclusion: This switching-target approach is a feasible combination of two current advances in linac technology (FFF target for therapy and a FFF low-Z target) allowing new options in on-line IGRT.

  17. Imaging linear and circular polarization features in leaves with complete Mueller matrix polarimetry.

    Science.gov (United States)

    Patty, C H Lucas; Luo, David A; Snik, Frans; Ariese, Freek; Buma, Wybren Jan; Ten Kate, Inge Loes; van Spanning, Rob J M; Sparks, William B; Germer, Thomas A; Garab, Győző; Kudenov, Michael W

    2018-06-01

    Spectropolarimetry of intact plant leaves makes it possible to probe the molecular architecture of vegetation photosynthesis in a non-invasive and non-destructive way and, as such, can offer a wealth of physiological information. In addition to the molecular signals due to the photosynthetic machinery, the cell structure and its arrangement within a leaf can create and modify polarization signals. Using Mueller matrix polarimetry with rotating retarder modulation, we have visualized spatial variations in polarization in transmission around the chlorophyll a absorbance band from 650 nm to 710 nm. We show linear and circular polarization measurements of maple leaves and cultivated maize leaves and discuss the corresponding Mueller matrices and the Mueller matrix decompositions, which show distinct features in diattenuation, polarizance, retardance and depolarization. Importantly, while normal leaf tissue shows a typical split signal with both a negative and a positive peak in the induced fractional circular polarization and circular dichroism, the signals close to the veins only display a negative band. The results are similar to the negative band as reported earlier for single macrodomains. We discuss the possible role of the chloroplast orientation around the veins as a cause of this phenomenon. Systematic artefacts are ruled out as three independent measurements by different instruments gave similar results. These results provide better insight into circular polarization measurements on whole leaves and options for vegetation remote sensing using circular polarization. Copyright © 2018 The Author(s). Published by Elsevier B.V. All rights reserved.

  18. Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Fowler, Michael James [Clarkson Univ., Potsdam, NY (United States)

    2014-04-25

    In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy

  19. An adaptive image sparse reconstruction method combined with nonlocal similarity and cosparsity for mixed Gaussian-Poisson noise removal

    Science.gov (United States)

    Chen, Yong-fei; Gao, Hong-xia; Wu, Zi-ling; Kang, Hui

    2018-01-01

    Compressed sensing (CS) has achieved great success in removing a single type of noise. However, it cannot efficiently restore images contaminated with mixed noise. This paper introduces nonlocal similarity and cosparsity, inspired by compressed sensing, to overcome the difficulties in mixed noise removal: nonlocal similarity exploits the signal sparsity across similar patches, and cosparsity assumes that the signal is sparse after a possibly redundant transform. Meanwhile, an adaptive scheme is designed to keep the balance between mixed noise removal and detail preservation based on local variance. Finally, IRLSM and RACoSaMP are adopted to solve the objective function. Experimental results demonstrate that the proposed method is superior to conventional CS methods, such as K-SVD, and to the state-of-the-art nonlocally centralized sparse representation (NCSR) method, in terms of both visual results and quantitative measures.

  20. Feasibility study on using imaging plates to estimate thermal neutron fluence in neutron-gamma mixed fields

    International Nuclear Information System (INIS)

    Fujibuchi, T.; Tanabe, Y.; Sakae, T.; Terunuma, T.; Isobe, T.; Kawamura, H.; Yasuoka, K.; Matsumoto, T.; Harano, H.; Nishiyama, J.; Masuda, A.; Nohtomi, A.

    2011-01-01

    In current radiotherapy, neutrons are produced in a photonuclear reaction when incident photon energy is higher than the threshold. In the present study, a method of discriminating the neutron component was investigated using an imaging plate (IP) in the neutron-gamma-ray mixed field. Two types of IP were used: a conventional IP for beta- and gamma rays, and an IP doped with Gd for detecting neutrons. IPs were irradiated in the mixed field, and the photo-stimulated luminescence (PSL) intensity of the thermal neutron component was discriminated using an expression proposed herein. The PSL intensity of the thermal neutron component was proportional to thermal neutron fluence. When additional irradiation of photons was added to constant neutron irradiation, the PSL intensity of the thermal neutron component was not affected. The uncertainty of PSL intensities was approximately 11.4 %. This method provides a simple and effective means of discriminating the neutron component in a mixed field. (authors)

  1. Rapid Automatic Lighting Control of a Mixed Light Source for Image Acquisition using Derivative Optimum Search Methods

    Directory of Open Access Journals (Sweden)

    Kim HyungTae

    2015-01-01

    Full Text Available Automatic lighting (auto-lighting) is a function that maximizes the image quality of a vision inspection system by adjusting the light intensity and color. In most inspection systems, a single-color light source is used, and an equal-step search is employed to determine the maximum image quality. However, when a mixed light source is used, the number of iterations becomes large, and therefore a rapid search method must be applied to reduce their number. Derivative optimum search methods follow the tangential direction of a function and are usually faster than other methods. In this study, multi-dimensional forms of derivative optimum search methods are applied to obtain the maximum image quality for a mixed light source. The auto-lighting algorithms were derived from the steepest descent and conjugate gradient methods, which take N driving-voltage inputs and produce one image-quality output. Experiments in which the proposed algorithm was applied to semiconductor patterns showed that a reduced number of iterations is required to determine the locally maximized image quality.
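
    The steepest-descent idea described above (here an ascent, since image quality is maximized) can be sketched as a finite-difference gradient search over the N driving voltages. In the sketch below, capture_quality is a stand-in for "set the voltages, grab an image, compute a sharpness or contrast score"; the step size, bounds, and toy quality function are illustrative only.

      # Sketch of derivative-based auto-lighting: steepest ascent on an
      # image-quality score J(v) over the N light-source driving voltages.
      import numpy as np

      def gradient(J, v, h=0.05):
          g = np.zeros_like(v)
          for i in range(len(v)):
              e = np.zeros_like(v); e[i] = h
              g[i] = (J(v + e) - J(v - e)) / (2 * h)   # central difference
          return g

      def steepest_ascent(J, v0, step=0.2, iters=30, v_min=0.0, v_max=5.0):
          v = np.asarray(v0, float)
          for _ in range(iters):
              v = np.clip(v + step * gradient(J, v), v_min, v_max)
          return v

      # Example with a toy quality function peaking at (2.0, 3.0, 1.5) volts:
      capture_quality = lambda v: -np.sum((v - np.array([2.0, 3.0, 1.5])) ** 2)
      print(steepest_ascent(capture_quality, [1.0, 1.0, 1.0]))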

  2. The Effect of Marketing Mix Efforts and Corporate Image on Brand Equity in the IT Software Sector

    OpenAIRE

    Saghar Rafiei; Manijeh Haghighi Nasab; Hamidreza Yazdani

    2013-01-01

    The aim of this research was to prioritize the most important factors that influence brand equity in the software industry. In this regard, the relationships between marketing-mix effort (channel performance, value-oriented price, promotion, and after sales service), corporate image, three dimensions of brand equity (brand awareness with associations, perceived quality, and brand loyalty), and market performance have been investigated by the structural equation modeling. A random sampling met...

  3. Spectral survey of helium lines in a linear plasma device for use in HELIOS imaging

    Energy Technology Data Exchange (ETDEWEB)

    Ray, H. B., E-mail: rayhb@ornl.gov [University of Tennessee, Knoxville, Tennessee 37996 (United States); Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Biewer, T. M.; Fehling, D. T.; Isler, R. C.; Unterberg, E. A. [Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States)

    2016-11-15

    Fast visible cameras and a filterscope are used to examine the visible light emission from Oak Ridge National Laboratory's Proto-MPEX. The filterscope has been configured to perform helium line ratio measurements using emission lines at 667.9, 728.1, and 706.5 nm. The measured lines should be mathematically inverted and the ratios compared to a collisional radiative model (CRM) to determine Te and ne. Increasing the number of measurement chords through the plasma improves the inversion calculation and subsequent Te and ne localization. For the filterscope, one spatial chord measurement requires three photomultiplier tubes (PMTs) connected to pellicle beam splitters. Multiple, fast visible cameras with narrowband filters are an alternate technique for performing these measurements with superior spatial resolution. Each camera contains millions of pixels; each pixel is analogous to one filterscope PMT. The data can then be inverted and the ratios compared to the CRM to determine 2-dimensional "images" of Te and ne in the plasma. An assessment is made in this paper of the candidate He I emission lines for an imaging technique.

  4. Reducing 4DCBCT imaging time and dose: the first implementation of variable gantry speed 4DCBCT on a linear accelerator.

    Science.gov (United States)

    O'Brien, Ricky T; Stankovic, Uros; Sonke, Jan-Jakob; Keall, Paul J

    2017-06-07

    Four dimensional cone beam computed tomography (4DCBCT) uses a constant gantry speed and imaging frequency that are independent of the patient's breathing rate. Using a technique called respiratory motion guided 4DCBCT (RMG-4DCBCT), we have previously demonstrated that by varying the gantry speed and imaging frequency, in response to changes in the patient's real-time respiratory signal, the imaging dose can be reduced by 50-70%. RMG-4DCBCT optimally computes a patient-specific gantry trajectory to eliminate the streaking artefacts and projection clustering that are inherent in 4DCBCT imaging. The gantry trajectory is continuously updated as projection data are acquired and the patient's breathing changes. The aim of this study was to realise RMG-4DCBCT for the first time on a linear accelerator. To change the gantry speed in real time, a potentiometer under microcontroller control was used to adjust the current supplied to an Elekta Synergy's gantry motor. A real-time feedback loop was developed on the microcontroller to modulate the gantry speed and projection acquisition in response to the real-time respiratory signal so that either 40 (RMG-4DCBCT40) or 60 (RMG-4DCBCT60) uniformly spaced projections were acquired in 10 phase bins. Images of the CIRS dynamic Thorax phantom were acquired with sinusoidal breathing periods ranging from 2 s to 8 s together with two breathing traces from lung cancer patients. Image quality was assessed using the contrast-to-noise ratio (CNR) and edge response width (ERW). For the average patient, with a 3.8 s breathing period, the imaging time and image dose were reduced by 37% and 70%, respectively. Across all respiratory rates, RMG-4DCBCT40 had a CNR in the range of 6.5 to 7.5, and RMG-4DCBCT60 had a CNR between 8.7 and 9.7, indicating that RMG-4DCBCT allows consistent and controllable CNR. In comparison, the CNR for conventional 4DCBCT drops from 20.4 to 6.2 as the breathing rate increases from 2 s to 8 s. With RMG-4DCBCT

  5. Dispersion of the linear and nonlinear optical susceptibilities of the CuAl(S1−xSex)2 mixed chalcopyrite compounds

    International Nuclear Information System (INIS)

    Reshak, A. H.; Brik, M. G.; Auluck, S.

    2014-01-01

    Based on the electronic band structure, we have calculated the dispersion of the linear and nonlinear optical susceptibilities for the mixed CuAl(S1−xSex)2 chalcopyrite compounds with x = 0.0, 0.25, 0.5, 0.75, and 1.0. Calculations are performed within the Perdew-Burke-Ernzerhof generalized gradient approximation. The investigated compounds possess direct band gaps of about 2.2 eV (CuAlS2), 1.9 eV (CuAl(S0.75Se0.25)2), 1.7 eV (CuAl(S0.5Se0.5)2), 1.5 eV (CuAl(S0.25Se0.75)2), and 1.4 eV (CuAlSe2), values which make them optically active for optoelectronic and photovoltaic applications. These results confirm that substituting S by Se causes a significant reduction of the band gap. The dispersion of the optical functions ε2xx(ω), ε2yy(ω), and ε2zz(ω) was calculated and discussed in detail. To demonstrate the effect of substituting S by Se on the complex second-order nonlinear optical susceptibility tensors, we performed detailed calculations of these tensors, which show that the neat parent compounds CuAlS2 and CuAlSe2 exhibit |χ123(2)(−2ω;ω;ω)| as the dominant component, while the mixed alloys exhibit |χ111(2)(−2ω;ω;ω)| as the dominant component. The features of the |χ123(2)(−2ω;ω;ω)| and |χ111(2)(−2ω;ω;ω)| spectra were analyzed on the basis of the absorptive part of the corresponding dielectric function ε2(ω) as a function of both ω/2 and ω.

  6. Linear associations between clinically assessed upper motor neuron disease and diffusion tensor imaging metrics in amyotrophic lateral sclerosis.

    Science.gov (United States)

    Woo, John H; Wang, Sumei; Melhem, Elias R; Gee, James C; Cucchiara, Andrew; McCluskey, Leo; Elman, Lauren

    2014-01-01

    To assess the relationship between clinically assessed Upper Motor Neuron (UMN) disease in Amyotrophic Lateral Sclerosis (ALS) and local diffusion alterations measured in the brain corticospinal tract (CST) by a tractography-driven template-space region-of-interest (ROI) analysis of Diffusion Tensor Imaging (DTI). This cross-sectional study included 34 patients with ALS, on whom DTI was performed. Clinical measures were separately obtained including the Penn UMN Score, a summary metric based upon standard clinical methods. After normalizing all DTI data to a population-specific template, tractography was performed to determine a region-of-interest (ROI) outlining the CST, in which average Mean Diffusivity (MD) and Fractional Anisotropy (FA) were estimated. Linear regression analyses were used to investigate associations of DTI metrics (MD, FA) with clinical measures (Penn UMN Score, ALSFRS-R, duration-of-disease), along with age, sex, handedness, and El Escorial category as covariates. For MD, the regression model was significant (p = 0.02), and the only significant predictors were the Penn UMN Score (p = 0.005) and age (p = 0.03). The FA regression model was also significant (p = 0.02); the only significant predictor was the Penn UMN Score (p = 0.003). Measured by the template-space ROI method, both MD and FA were linearly associated with the Penn UMN Score, supporting the hypothesis that DTI alterations reflect UMN pathology as assessed by the clinical examination.

  7. Income and health-related quality of life among prostate cancer patients over a one-year period after radical prostatectomy: a linear mixed model analysis.

    Science.gov (United States)

    Klein, Jens; Lüdecke, Daniel; Hofreuter-Gätgens, Kerstin; Fisch, Margit; Graefen, Markus; von dem Knesebeck, Olaf

    2017-09-01

    To examine income-related disparities in health-related quality of life (HRQOL) over a one-year period after surgery (radical prostatectomy) and their contributory factors in a longitudinal perspective. Evidence of associations between income and HRQOL among patients with prostate cancer (PCa) is sparse and their explanations remain unclear. 246 male patients from two German hospitals filled out a questionnaire at the time of acute treatment and again 6 and 12 months later. Age, partnership status, baseline disease and treatment factors, physical and psychological comorbidities, as well as treatment factors and adverse effects at follow-up were additionally included in the analyses to explain potential disparities. HRQOL was assessed with the EORTC (European Organisation for Research and Treatment of Cancer) QLQ-C30 core questionnaire and the prostate-specific QLQ-PR25. A linear mixed model for repeated measures was calculated. The fixed effects showed highly significant income-related inequalities for the majority of HRQOL scales. Less affluent PCa patients reported lower HRQOL in terms of global quality of life, all functional scales and urinary symptoms. After introducing relevant covariates, some associations became non-significant (physical, cognitive and sexual function), while others only showed reduced estimates (global quality of life, urinary symptoms, role, emotional and social function). In particular, mental disorders/psychological comorbidity played a relevant role in the explanation of income-related disparities. One year after surgery, income-related disparities in various dimensions of HRQOL persist. With respect to economically disadvantaged PCa patients, the findings emphasize the importance of continuous psychosocial screening and tailored interventions, of patient empowerment, and of improved access to supportive care.

  8. Normative values for selected linear indices of the intracranial fluid spaces based on CT images of the head in children

    International Nuclear Information System (INIS)

    Wilk, R.; Syc, B.; Bajor, G.; Kluczewska, E.

    2011-01-01

    Currently, a few imaging methods are used in CNS diagnostics: computed tomography (CT), magnetic resonance imaging (MRI), and ultrasonography (USG). The ventricular system changes its dimensions with the child's development. Linear indices commonly used in the diagnostics of hydrocephalus do not consider developmental changes of the intracranial fluid spaces. The aim of our work was to identify reference values for selected linear indices in specific age groups. Material/Methods: The material included 507 CT examinations of the head in children of different ages and both sexes. There were 381 CT examinations considered normal, and these were used to establish the reference values. They were compared with 126 CTs from the observational zone (3rd-10th and 90th-97th percentiles). The children were divided into the following 7 age groups: 0-12 months, > 12-36 months, > 3-6 years, > 6-9 years, > 9-12 years, > 12-15 years, > 15-18 years. For every group, the 10th, 25th, 50th, 75th and 90th percentiles were calculated. The range between the 10th and the 90th percentile was taken as the norm. Results: Reference values for the particular indices were: Huckman Number from 3.3 to 5.0 cm with a correlation coefficient with age of 0.34; Evans' Index from 0.218 to 0.312 with a correlation coefficient of -0.12; Bifrontal Index from 0.265 to 0.380 with a correlation coefficient of 0.18; Bicaudate / Frontal Index from 0.212 to 0.524 with a correlation coefficient of -0.33; Bicaudate Index from 0.059 to 0.152 with a correlation coefficient of -0.26; Bicaudate / Temporal Index from 0.051 to 0.138 with a correlation coefficient of 0.32; Schiersmann's Index from 3.545 to 6.038 with a correlation coefficient of 0.42. Conclusions: The intracerebral CSF spaces increased in a non-uniform manner with age. All indices established on the basis of linear parameters were relatively higher in younger children than in the older ones. In proportion to the cranial size, the intracranial fluid spaces

  9. Quantitative Imaging of Turbulent Mixing Dynamics in High-Pressure Fuel Injection to Enable Predictive Simulations of Engine Combustion

    Energy Technology Data Exchange (ETDEWEB)

    Frank, Jonathan H. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Reacting Flows Dept.; Pickett, Lyle M. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Engine Combustion Dept.; Bisson, Scott E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Remote Sensing and Energetic Materials Dept.; Patterson, Brian D. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). combustion Chemistry Dept.; Ruggles, Adam J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Reacting Flows Dept.; Skeen, Scott A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Engine Combustion Dept.; Manin, Julien Luc [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Engine Combustion Dept.; Huang, Erxiong [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Reacting Flows Dept.; Cicone, Dave J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Engine Combustion Dept.; Sphicas, Panos [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Engine Combustion Dept.

    2015-09-01

    In this LDRD project, we developed a capability for quantitative high-speed imaging measurements of high-pressure fuel injection dynamics to advance understanding of turbulent mixing in transcritical flows, ignition, and flame stabilization mechanisms, and to provide essential validation data for developing predictive tools for engine combustion simulations. Advanced, fuel-efficient engine technologies rely on fuel injection into a high-pressure, high-temperature environment for mixture preparation and combustion. However, the dynamics of fuel injection are not well understood and pose significant experimental and modeling challenges. To address the need for quantitative high-speed measurements, we developed a Nd:YAG laser that provides a 5 ms burst of pulses at 100 kHz on a robust mobile platform. Using this laser, we demonstrated spatially and temporally resolved Rayleigh scattering imaging and particle image velocimetry measurements of turbulent mixing in high-pressure gas-phase flows and vaporizing sprays. Quantitative interpretation of high-pressure measurements was advanced by reducing and correcting interferences and imaging artifacts.

  10. Commissioning and early experience with a new-generation low-energy linear accelerator with advanced delivery and imaging functionalities

    Directory of Open Access Journals (Sweden)

    Fogliata Antonella

    2011-09-01

    Full Text Available Abstract Background: A new-generation low-energy linear accelerator (UNIQUE) was introduced in the clinical arena during 2009 by Varian Medical Systems. The world's first UNIQUE was installed at the Oncology Institute of Southern Switzerland and put into clinical operation in June 2010. The aim of the present contribution was to report experience from its commissioning and the first year of clinical operation. Methods: Commissioning data, beam characteristics and their modeling in the treatment planning system were summarized. The imaging system of UNIQUE includes a 2D-2D matching capability, and tests were performed to assess the system repositioning capability. Finally, since the system is capable of delivering volumetric modulated arc therapy with RapidArc, a summary of the tests performed for this modality to assess its performance in preclinical settings and during clinical usage was included. Results: The virtual diameter of the isocenter was measured as less than 0.2 mm. The observed accuracy of isocenter determination and repositioning for 2D-2D matching procedures in image guidance was […]. Conclusions: The results of the commissioning tests and of the first period of clinical operation met specifications with good margins relative to tolerances. UNIQUE was put into operation for all delivery techniques; in particular, as shown by the pre-treatment quality assurance results, it enabled accurate and safe delivery of RapidArc plans.

  11. Targeting Accuracy of Image-Guided Radiosurgery for Intracranial Lesions: A Comparison Across Multiple Linear Accelerator Platforms.

    Science.gov (United States)

    Huang, Yimei; Zhao, Bo; Chetty, Indrin J; Brown, Stephen; Gordon, James; Wen, Ning

    2016-04-01

    To evaluate the overall positioning accuracy of image-guided intracranial radiosurgery across multiple linear accelerator platforms. A computed tomography scan with a slice thickness of 1.0 mm was acquired of an anthropomorphic head phantom in a BrainLAB U-frame mask. The phantom was embedded with three 5-mm diameter tungsten ball bearings, simulating a central, a left, and an anterior cranial lesion. The ball bearings were positioned to the radiation isocenter under ExacTrac X-ray or cone-beam computed tomography image guidance on 3 Linacs: (1) ExacTrac X-ray localization on a Novalis Tx; (2) cone-beam computed tomography localization on the Novalis Tx; (3) cone-beam computed tomography localization on a TrueBeam; and (4) cone-beam computed tomography localization on an Edge. Each ball bearing was positioned 5 times to the radiation isocenter with different initial setup errors following the 4 image guidance procedures on the 3 Linacs, and the mean (µ) and one standard deviation (σ) of the residual error were compared. Averaged over all 3 ball bearing locations, the vector length of the residual setup error in mm (µ ± σ) was 0.6 ± 0.2, 1.0 ± 0.5, 0.2 ± 0.1, and 0.3 ± 0.1 on ExacTrac X-ray localization on a Novalis Tx, cone-beam computed tomography localization on the Novalis Tx, cone-beam computed tomography localization on a TrueBeam, and cone-beam computed tomography localization on an Edge, with their range in mm being 0.4 to 1.1, 0.4 to 1.9, 0.1 to 0.5, and 0.2 to 0.6, respectively. The congruence between imaging and radiation isocenters in mm was 0.6 ± 0.1, 0.7 ± 0.1, 0.3 ± 0.1, and 0.2 ± 0.1, for the 4 systems, respectively. Targeting accuracy comparable to frame-based stereotactic radiosurgery can be achieved with image-guided intracranial stereotactic radiosurgery treatment. © The Author(s) 2015.

  12. Stability of Mixed Preparations Consisting of Commercial Moisturizing Creams with an Ointment Base Investigated by Magnetic Resonance Imaging.

    Science.gov (United States)

    Onuki, Yoshinori; Funatani, Chiaki; Yamamoto, Yoshihisa; Fukami, Toshiro; Koide, Tatsuo; Hayashi, Yoshihiro; Takayama, Kozo

    2017-01-01

    A moisturizing cream mixed with a steroid ointment is frequently prescribed to patients suffering from atopic dermatitis. However, there is a concern that the mixing operation causes destabilization. The present study was performed to investigate the stability of such preparations closely using magnetic resonance imaging (MRI). As sample preparations, five commercial moisturizing creams that are popular in Japan were mixed with an ointment base, white petrolatum, at a volume ratio of 1 : 1. The mixed preparations were stored at 60°C to accelerate the destabilization processes. Subsequently, the phase separations induced by the storage test were monitored using MRI. Using advanced MR technologies including spin-spin relaxation time (T2) mapping and MR spectroscopy, we successfully characterized the phase-separation behavior of the test samples. For most samples, phase separations developed through the bleeding of liquid oil components. From a sample consisting of an oil-in-water-type cream, Urepearl Cream 10%, a distinct phase-separation mode was observed, which was initiated by the aqueous component separating from the bottom part of the sample. The resultant phase separation was the most distinct among the test samples. To investigate the phase separation quantitatively and objectively, we conducted a histogram analysis on the acquired T2 maps. The water-in-oil type creams were found to be much more stable after mixing with the ointment base than the oil-in-water type creams. This finding strongly supported the validity of the mixing operation traditionally conducted in pharmacies.

  13. The Impact of a Health IT Changeover on Medical Imaging Department Work Processes and Turnaround Times: A mixed method study.

    Science.gov (United States)

    Georgiou, A; Prgomet, M; Lymer, S; Hordern, A; Ridley, L; Westbrook, J

    2015-01-01

    To assess the impact of introducing a new Picture Archiving and Communication System (PACS) and Radiology Information System (RIS) on: (i) Medical Imaging work processes; and (ii) turnaround times (TATs) for x-ray and CT scan orders initiated in the Emergency Department (ED). We employed a mixed method study design comprising: (i) semi-structured interviews with Medical Imaging Department staff; and (ii) retrospectively extracted ED data before (March/April 2010) and after (March/April 2011 and 2012) the introduction of a new PACS/RIS. TATs were calculated as: processing TAT (median time from image ordering to examination) and reporting TAT (median time from examination to final report). Reporting TAT for x-rays decreased significantly after the introduction of the new PACS/RIS, from a median of 76 hours to 38 hours per order. Medical Imaging staff reported that the changeover to the new PACS/RIS led to gains in efficiency, particularly regarding the accessibility of images and patient-related information. Nevertheless, assimilation of the new PACS/RIS with existing Departmental work processes was considered inadequate and in some instances unsafe. Issues highlighted related to the synchronization of work tasks (e.g., porter arrangements) and the material set-up of the workplace (e.g., the number and location of computers). The introduction of new health IT can be a "double-edged sword", providing improved efficiency but at the same time introducing potential hazards affecting the effectiveness of the Medical Imaging Department.

  14. On a separating method for mixed-modes crack growth in wood material using image analysis

    Science.gov (United States)

    Moutou Pitti, R.; Dubois, F.; Pop, O.

    2010-06-01

    Due to the complex anatomy of wood and the loading orientation, timber elements are subjected to mixed-mode fracture. Under these conditions, the crack-tip advance is characterized by mixed-mode kinematics. In order to characterize the fracture process as a function of the loading orientation, a new mixed-mode crack growth timber specimen is proposed. In the present paper, the design process and the experimental validation of this specimen are presented. Using the experimental results, the energy release rate is calculated for several modes, with the calculation separating the contribution of each fracture mode. The design of the specimen is based on an analytical approach and numerical simulation by the finite element method. A particular feature of the specimen is the stability of crack propagation under force control.

  15. Model, analysis, and evaluation of the effects of analog VLSI arithmetic on linear subspace-based image recognition.

    Science.gov (United States)

    Carvajal, Gonzalo; Figueroa, Miguel

    2014-07-01

    Typical image recognition systems operate in two stages: feature extraction to reduce the dimensionality of the input space, and classification based on the extracted features. Analog Very Large Scale Integration (VLSI) is an attractive technology to achieve compact and low-power implementations of these computationally intensive tasks for portable embedded devices. However, device mismatch limits the resolution of the circuits fabricated with this technology. Traditional layout techniques to reduce the mismatch aim to increase the resolution at the transistor level, without considering the intended application. Relating mismatch parameters to specific effects in the application level would allow designers to apply focalized mismatch compensation techniques according to predefined performance/cost tradeoffs. This paper models, analyzes, and evaluates the effects of mismatched analog arithmetic in both feature extraction and classification circuits. For the feature extraction, we propose analog adaptive linear combiners with on-chip learning for both Least Mean Square (LMS) and Generalized Hebbian Algorithm (GHA). Using mathematical abstractions of analog circuits, we identify mismatch parameters that are naturally compensated during the learning process, and propose cost-effective guidelines to reduce the effect of the rest. For the classification, we derive analog models for the circuits necessary to implement Nearest Neighbor (NN) approach and Radial Basis Function (RBF) networks, and use them to emulate analog classifiers with standard databases of face and hand-writing digits. Formal analysis and experiments show how we can exploit adaptive structures and properties of the input space to compensate the effects of device mismatch at the application level, thus reducing the design overhead of traditional layout techniques. Results are also directly extensible to multiple application domains using linear subspace methods. Copyright © 2014 Elsevier Ltd. All rights
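    For readers unfamiliar with the adaptive linear combiners mentioned above, the following sketch shows the textbook LMS update used for on-chip feature extraction. It is an idealized floating-point version that ignores the device-mismatch effects the paper models; the learning rate, dimensions and the default random-projection targets are arbitrary illustrative choices.

```python
import numpy as np

def lms_feature_extractor(X, n_features=16, lr=1e-3, epochs=10, targets=None):
    """Train a bank of adaptive linear combiners with the LMS rule.

    X: (n_samples, n_inputs) flattened input images.
    targets: (n_samples, n_features) desired outputs; defaults here to a
    random linear projection purely for illustration (assumption).
    """
    rng = np.random.default_rng(0)
    n_inputs = X.shape[1]
    W = rng.normal(scale=0.01, size=(n_inputs, n_features))
    if targets is None:
        targets = X @ rng.normal(size=(n_inputs, n_features))
    for _ in range(epochs):
        for x, d in zip(X, targets):
            y = x @ W                 # combiner output
            e = d - y                 # error signal
            W += lr * np.outer(x, e)  # LMS weight update
    return W
```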

  16. Mixing and transport during pharmaceutical twin-screw wet granulation: Experimental analysis via chemical imaging

    DEFF Research Database (Denmark)

    Kumar, Ashish; Vercruysse, Jurgen; Toiviainen, Maunu

    2014-01-01

    to calculate the mean residence time, mean centred variance and the Péclet number to determine the axial mixing and predominance of convective over dispersive transport. The results showed that screw speed is the most influential parameter in terms of RTD and axial mixing in the TSG and established...... a significant interaction between screw design parameters (number and stagger angle of kneading discs) and the process parameters (material throughput and number of kneading discs). The results of the study will allow the development and validation of a transport model capable of predicting the RTD and macro...
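    The axial-mixing metrics named in this (truncated) record, i.e. the mean residence time, the mean centred variance and the Péclet number, can be obtained from a measured tracer response curve. The sketch below is illustrative only: it uses the large-Péclet axial-dispersion approximation Pe ≈ 2/σθ², which is not necessarily the model used in the cited study.

```python
import numpy as np

def rtd_metrics(t, c):
    """Mean residence time, centred variance and an approximate Peclet number
    from a tracer response curve c(t), e.g. an NIR tracer concentration signal.
    """
    e = c / np.trapz(c, t)                 # normalized RTD, E(t)
    mrt = np.trapz(t * e, t)               # mean residence time
    var = np.trapz((t - mrt) ** 2 * e, t)  # mean centred variance
    sigma_theta2 = var / mrt ** 2          # dimensionless variance
    peclet = 2.0 / sigma_theta2            # large-Pe approximation (assumption)
    return mrt, var, peclet
```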

  17. Determination of mean rainfall from the Special Sensor Microwave/Imager (SSM/I) using a mixed lognormal distribution

    Science.gov (United States)

    Berg, Wesley; Chase, Robert

    1992-01-01

    Global estimates of monthly, seasonal, and annual oceanic rainfall are computed for a period of one year using data from the Special Sensor Microwave/Imager (SSM/I). Instantaneous rainfall estimates are derived from brightness temperature values obtained from the satellite data using the Hughes D-matrix algorithm. The instantaneous rainfall estimates are stored in 1 deg square bins over the global oceans for each month. A mixed probability distribution, combining a lognormal distribution describing the positive rainfall values with a spike at zero describing the observations indicating no rainfall, is used to compute mean values. The resulting data for the period of interest are fitted to a lognormal distribution by using a maximum-likelihood method. Mean values are computed for the mixed distribution, and qualitative comparisons with published historical results, as well as quantitative comparisons with corresponding in situ raingage data, are performed.
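    The mean of such a mixed (spike-at-zero plus lognormal) distribution has a closed form: if p is the fraction of observations with nonzero rain and the positive values are lognormal with parameters mu and sigma, the mean rain rate is p·exp(mu + sigma²/2). A minimal sketch of that computation follows; any bias corrections used in the original processing are not reproduced here.

```python
import numpy as np

def mixed_lognormal_mean(rain_rates):
    """Mean rainfall from a mixed (spike-at-zero + lognormal) distribution.

    rain_rates: 1-D array of instantaneous rain-rate retrievals for one
    1-degree bin and one month, including zero values.
    """
    rain_rates = np.asarray(rain_rates, dtype=float)
    positive = rain_rates[rain_rates > 0]
    p = positive.size / rain_rates.size      # probability of rain
    if positive.size == 0:
        return 0.0
    logs = np.log(positive)
    mu, sigma2 = logs.mean(), logs.var()     # maximum-likelihood estimates
    return p * np.exp(mu + 0.5 * sigma2)     # E[R] of the mixed distribution
```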

  18. Parkinson Symptoms and Health Related Quality of Life as Predictors of Costs: A Longitudinal Observational Study with Linear Mixed Model Analysis.

    Directory of Open Access Journals (Sweden)

    Pablo Martinez-Martín

    Full Text Available To estimate the magnitude in which Parkinson's disease (PD) symptoms and health-related quality of life (HRQoL) determined PD costs over a 4-year period. Data collected during 3 months of each year, for 4 years, from the ELEP study included sociodemographic, clinical and resource-use information. Costs were calculated yearly, as mean 3-month costs/patient, and updated to Spanish €, 2012. Linear mixed models were fitted to analyze total, direct and indirect costs based on symptoms and HRQoL. One hundred and seventy-four patients were included. Mean (SD) age: 63 (11) years; mean (SD) disease duration: 8 (6) years. Ninety-three percent were HY I, II or III (mild or moderate disease). Forty-nine percent remained in the same stage during the study period. Clinical evaluation and HRQoL scales showed relatively slight changes over time, demonstrating a stable group overall. Mean (SD) PD total costs increased by 92.5%, from € 2,082.17 (€ 2,889.86) in year 1 to € 4,008.6 (€ 7,757.35) in year 4. Total, direct and indirect costs increased by 45.96%, 35.63% and 69.69%, respectively, for mild disease, whereas they increased by 166.52% for total, 55.68% for direct and 347.85% for indirect costs in patients with moderate PD. For severe patients, costs remained almost the same throughout the study. For each additional point on the SCOPA-Motor scale, total costs increased by € 75.72 (p = 0.0174); for each additional point on SCOPA-Motor and SCOPA-COG, direct costs increased by € 49.21 (p = 0.0094) and € 44.81 (p = 0.0404), respectively; and for each extra point on the pain scale, indirect costs increased by € 16.31 (p = 0.0228). PD is an expensive disease in Spain. Disease progression and severity, as well as motor and cognitive dysfunctions, are major drivers of cost increments. Therapeutic measures aimed at controlling progression and symptoms could help contain disease expenses.
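    The longitudinal analysis described above is a standard use case for a linear mixed model with a random effect per patient. The sketch below shows how such a model could be fitted with statsmodels; the variable names (total_cost, scopa_motor, scopa_cog, pain, year, patient_id) are assumptions, not the actual ELEP variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

def fit_cost_model(df: pd.DataFrame):
    """Longitudinal linear mixed model for 3-month costs (illustrative)."""
    model = smf.mixedlm(
        "total_cost ~ scopa_motor + scopa_cog + pain + year",
        data=df,
        groups=df["patient_id"],   # random intercept per patient
        re_formula="~year",        # random slope for time (optional)
    )
    return model.fit()

# result = fit_cost_model(elep_df); print(result.summary())
```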

  19. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps

    Energy Technology Data Exchange (ETDEWEB)

    Ureba, A. [Dpto. Fisiología Médica y Biofísica. Facultad de Medicina, Universidad de Sevilla, E-41009 Sevilla (Spain); Salguero, F. J. [Nederlands Kanker Instituut, Antoni van Leeuwenhoek Ziekenhuis, 1066 CX Ámsterdam, The Nederlands (Netherlands); Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Leal, A., E-mail: alplaza@us.es [Dpto. Fisiología Médica y Biofísica, Facultad de Medicina, Universidad de Sevilla, E-41009 Sevilla (Spain); Miras, H. [Servicio de Radiofísica, Hospital Universitario Virgen Macarena, E-41009 Sevilla (Spain); Linares, R.; Perucha, M. [Servicio de Radiofísica, Hospital Infanta Luisa, E-41010 Sevilla (Spain)

    2014-08-15

    Purpose: The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in efficient times for clinical practice. Methods: The planning system (called CARMEN) is a full MC-TPS, controlled through aMATLAB interface, which is based on the sequencing of a novel map, called “biophysical” map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map which is sequenced to provide direct apertures that will later be weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the found structures, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine them with different weights during the optimization process. Results: Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast

  20. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps

    International Nuclear Information System (INIS)

    Ureba, A.; Salguero, F. J.; Barbeiro, A. R.; Jimenez-Ortega, E.; Baeza, J. A.; Leal, A.; Miras, H.; Linares, R.; Perucha, M.

    2014-01-01

    Purpose: The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in efficient times for clinical practice. Methods: The planning system (called CARMEN) is a full MC-TPS, controlled through aMATLAB interface, which is based on the sequencing of a novel map, called “biophysical” map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map which is sequenced to provide direct apertures that will later be weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the found structures, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine them with different weights during the optimization process. Results: Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast

  1. MCTP system model based on linear programming optimization of apertures obtained from sequencing patient image data maps.

    Science.gov (United States)

    Ureba, A; Salguero, F J; Barbeiro, A R; Jimenez-Ortega, E; Baeza, J A; Miras, H; Linares, R; Perucha, M; Leal, A

    2014-08-01

    The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in efficient times for clinical practice. The planning system (called CARMEN) is a full MC-TPS, controlled through aMATLAB interface, which is based on the sequencing of a novel map, called "biophysical" map, which is generated from enhanced image data of patients to achieve a set of segments actually deliverable. In order to reduce the required computation time, the conventional fluence map has been replaced by the biophysical map which is sequenced to provide direct apertures that will later be weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the found structures, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine them with different weights during the optimization process. Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: A head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose-escalation; a partial breast irradiation case (Case II) solved
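    The weighting step described in this record (direct apertures weighted by an optimization algorithm based on linear programming) can be illustrated with a toy formulation: given a precomputed aperture dose matrix (voxels × apertures), find nonnegative weights that keep target voxels above the prescription and organ-at-risk voxels below a tolerance while minimizing the total target underdose. This is only a schematic stand-in for the CARMEN optimizer, with invented matrix and limit names.

```python
import numpy as np
from scipy.optimize import linprog

def weight_apertures(D_target, D_oar, d_presc, d_max):
    """Toy LP for aperture weights w >= 0.

    minimize  sum(s)                        (total target underdose)
    s.t.      D_target @ w + s >= d_presc   (target coverage, slack s >= 0)
              D_oar    @ w     <= d_max     (OAR tolerance)
    """
    n_t, n_a = D_target.shape
    c = np.concatenate([np.zeros(n_a), np.ones(n_t)])   # objective: slacks only
    A_ub = np.block([
        [-D_target, -np.eye(n_t)],                       # -(D w) - s <= -d_presc
        [D_oar, np.zeros((D_oar.shape[0], n_t))],        # D_oar w <= d_max
    ])
    b_ub = np.concatenate([-d_presc, d_max])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n_a + n_t), method="highs")
    return res.x[:n_a]
```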

  2. Mixed-integer evolution strategies for parameter optimization and their applications to medical image analysis

    NARCIS (Netherlands)

    Li, Rui

    2009-01-01

    The target of this work is to extend the canonical Evolution Strategies (ES) from traditional real-valued parameter optimization domain to mixed-integer parameter optimization domain. This is necessary because there exist numerous practical optimization problems from industry in which the set of

  3. Cross-sectional imaging of large and dense materials by high energy X-ray CT using linear accelerator

    International Nuclear Information System (INIS)

    Kanamori, Takahiro; Kamata, Shouji; Ito, Shinichi.

    1989-01-01

    A prototype high energy X-ray CT (computed tomography) system has been developed which employs a linear accelerator as the X-ray source (max. photon energy: 12 MeV). One problem encountered in development of this CT system was to reduce the scattered photons from adjacent detectors, i.e. crosstalk, due to high energy X-rays. This crosstalk was reduced to 2% by means of detector shields using tungsten spacers. Spatial resolution was not affected by such small crosstalk as confirmed by numerical simulations. A second problem was to reduce the scattered photons from the test object. This was done using collimators. A third concern was to realize a wide dynamic range data processing which would allow applications to large and dense objects. This problem was solved by using a sample and hold data acquisition method to reduce the dark current of the photo detectors. The dynamic range of this system was experimentally confirmed over 60 dB. It was demonstrated that slits (width: 2 mm) in an iron object (diameter: 25 cm) could be imaged by this prototype CT system. (author)

  4. Chandra Snapshot Spectral Imaging of Comets C/2001 Q4 (NEAT) and C/2002 T7 (LINEAR)

    Science.gov (United States)

    Lisse, Carey

    2003-09-01

    The highly favorable perigee passage of the very bright comets C/2001 Q4 (NEAT) and C/2002 T7 (LINEAR) in late May 2004 provides an opportunity to study cometary x-ray emission in conjunction with the new CHIPS spectroscopic mission. In 10 ksec of on-target time for each comet, ACIS-S will obtain snapshot images of the comets in the heart of the CHIPS 0.05–0.150 keV spectroscopic monitoring period in late-May 2004. The combined observations have the potential of directly detecting for the first time the ultra-soft emission due to Mg, S, Si, and Fe predicted by McCammon et al. (2002) from soft x-ray background measurements and by Kharchenko et al. (2000, 2003) from models of solar wind minor ion charge exchange emission. New work by Wegmann, Dennerl, and Lisse (2004) allows a determination of the neutral gas production rate from the spatial scale of the emission, and an independent determination of the solar wind minor ion flux density using the x-ray surface brightness.

  5. Apertureless near-field terahertz imaging using the self-mixing effect in a quantum cascade laser

    Energy Technology Data Exchange (ETDEWEB)

    Dean, Paul, E-mail: p.dean@leeds.ac.uk; Keeley, James; Kundu, Iman; Li, Lianhe; Linfield, Edmund H.; Giles Davies, A. [School of Electronic and Electrical Engineering, University of Leeds, Leeds LS2 9JT (United Kingdom); Mitrofanov, Oleg [Department of Electronic and Electrical Engineering, University College London, Torrington Place, London WC1E 7JE (United Kingdom)

    2016-02-29

    We report two-dimensional apertureless near-field terahertz (THz) imaging using a quantum cascade laser (QCL) source and a scattering probe. A near-field enhancement of the scattered field amplitude is observed for small tip-sample separations, allowing image resolutions of ∼1 μm (∼λ/100) and ∼7 μm to be achieved along orthogonal directions on the sample surface. This represents the highest resolution demonstrated to date with a THz QCL. By employing a detection scheme based on self-mixing interferometry, our approach offers experimental simplicity by removing the need for an external detector and also provides sensitivity to the phase of the reinjected field.

  6. Image processing algorithm design and implementation for real-time autonomous inspection of mixed waste

    International Nuclear Information System (INIS)

    Schalkoff, R.J.; Shaaban, K.M.; Carver, A.E.

    1996-01-01

    The ARIES #1 (Autonomous Robotic Inspection Experimental System) vision system is used to acquire drum surface images under controlled conditions and subsequently perform autonomous visual inspection leading to a classification as 'acceptable' or 'suspect'. Specific topics described include vision system design methodology, algorithmic structure, hardware processing structure, and image acquisition hardware. Most of these capabilities were demonstrated at the ARIES Phase II Demo held on Nov. 30, 1995. Finally, Phase III efforts are briefly addressed.

  7. A novel image fusion algorithm based on 2D scale-mixing complex wavelet transform and Bayesian MAP estimation for multimodal medical images

    Directory of Open Access Journals (Sweden)

    Abdallah Bengueddoudj

    2017-05-01

    Full Text Available In this paper, we propose a new image fusion algorithm based on two-dimensional Scale-Mixing Complex Wavelet Transform (2D-SMCWT. The fusion of the detail 2D-SMCWT coefficients is performed via a Bayesian Maximum a Posteriori (MAP approach by considering a trivariate statistical model for the local neighboring of 2D-SMCWT coefficients. For the approximation coefficients, a new fusion rule based on the Principal Component Analysis (PCA is applied. We conduct several experiments using three different groups of multimodal medical images to evaluate the performance of the proposed method. The obtained results prove the superiority of the proposed method over the state of the art fusion methods in terms of visual quality and several commonly used metrics. Robustness of the proposed method is further tested against different types of noise. The plots of fusion metrics establish the accuracy of the proposed fusion method.
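    The PCA-based fusion rule for the approximation coefficients mentioned above amounts to weighting the two approximation subbands by the components of the principal eigenvector of their 2×2 covariance matrix. A minimal sketch follows (the detail-coefficient MAP fusion of the paper is not reproduced here).

```python
import numpy as np

def pca_fusion_weights(a1, a2):
    """Weights for fusing two approximation subbands via PCA.

    Returns the components of the principal eigenvector of the 2x2 covariance
    matrix of the subbands, normalized to sum to 1.
    """
    data = np.vstack([a1.ravel(), a2.ravel()])
    cov = np.cov(data)
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = np.abs(eigvecs[:, np.argmax(eigvals)])
    w = v / v.sum()
    return w  # fused approximation = w[0] * a1 + w[1] * a2
```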

  8. Ideal-observer detectability in photon-counting differential phase-contrast imaging using a linear-systems approach

    International Nuclear Information System (INIS)

    Fredenberg, Erik; Danielsson, Mats; Stayman, J. Webster; Siewerdsen, Jeffrey H.; Åslund, Magnus

    2012-01-01

    Purpose: To provide a cascaded-systems framework based on the noise-power spectrum (NPS), modulation transfer function (MTF), and noise-equivalent number of quanta (NEQ) for quantitative evaluation of differential phase-contrast imaging (Talbot interferometry) in relation to conventional absorption contrast under equal-dose, equal-geometry, and, to some extent, equal-photon-economy constraints. The focus is a geometry for photon-counting mammography. Methods: Phase-contrast imaging is a promising technology that may emerge as an alternative or adjunct to conventional absorption contrast. In particular, phase contrast may increase the signal-difference-to-noise ratio compared to absorption contrast because the difference in phase shift between soft-tissue structures is often substantially larger than the absorption difference. We have developed a comprehensive cascaded-systems framework to investigate Talbot interferometry, which is a technique for differential phase-contrast imaging. Analytical expressions for the MTF and NPS were derived to calculate the NEQ and a task-specific ideal-observer detectability index under assumptions of linearity and shift invariance. Talbot interferometry was compared to absorption contrast at equal dose, and using either a plane wave or a spherical wave in a conceivable mammography geometry. The impact of source size and spectrum bandwidth was included in the framework, and the trade-off with photon economy was investigated in some detail. Wave-propagation simulations were used to verify the analytical expressions and to generate example images. Results: Talbot interferometry inherently detects the differential of the phase, which led to a maximum in NEQ at high spatial frequencies, whereas the absorption-contrast NEQ decreased monotonically with frequency. Further, phase contrast detects differences in density rather than atomic number, and the optimal imaging energy was found to be a factor of 1.7 higher than for absorption
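    The task-specific ideal-observer detectability index used in this kind of cascaded-systems framework is typically computed as a frequency integral of the task function weighted by MTF²/NPS. A minimal numerical sketch is given below for a radially symmetric 1-D frequency grid; the specific task function and normalization are illustrative assumptions, not the paper's expressions.

```python
import numpy as np

def detectability_index(f, mtf, nps, task, contrast=1.0):
    """Ideal-observer d' from cascaded-systems quantities on a frequency grid f.

    d'^2 = integral of C^2 * |W_task(f)|^2 * MTF(f)^2 / NPS(f) * 2*pi*f df
    (radially symmetric 2-D task; illustrative normalization).
    """
    integrand = (contrast ** 2) * (task ** 2) * (mtf ** 2) / nps * 2 * np.pi * f
    return np.sqrt(np.trapz(integrand, f))

# Example task: a Gaussian designer target of radius r0 (hypothetical)
# f = np.linspace(0.01, 10, 500); task = np.exp(-(np.pi * f * r0) ** 2)
```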

  9. Ideal-observer detectability in photon-counting differential phase-contrast imaging using a linear-systems approach

    Energy Technology Data Exchange (ETDEWEB)

    Fredenberg, Erik; Danielsson, Mats; Stayman, J. Webster; Siewerdsen, Jeffrey H.; Aslund, Magnus [Research and Development, Philips Women' s Healthcare, Smidesvaegen 5, SE-171 41 Solna, Sweden and Department of Physics, Royal Institute of Technology, AlbaNova, SE-106 91 Stockholm (Sweden); Department of Physics, Royal Institute of Technology, AlbaNova, SE-106 91 Stockholm (Sweden); Department of Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland 21205 (United States); Department of Biomedical Engineering and Department of Radiology, Johns Hopkins University, Baltimore, Maryland 21205 (United States); Research and Development, Philips Women' s Healthcare, Smidesvaegen 5, SE-171 41 Solna (Sweden)

    2012-09-15

    Purpose: To provide a cascaded-systems framework based on the noise-power spectrum (NPS), modulation transfer function (MTF), and noise-equivalent number of quanta (NEQ) for quantitative evaluation of differential phase-contrast imaging (Talbot interferometry) in relation to conventional absorption contrast under equal-dose, equal-geometry, and, to some extent, equal-photon-economy constraints. The focus is a geometry for photon-counting mammography. Methods: Phase-contrast imaging is a promising technology that may emerge as an alternative or adjunct to conventional absorption contrast. In particular, phase contrast may increase the signal-difference-to-noise ratio compared to absorption contrast because the difference in phase shift between soft-tissue structures is often substantially larger than the absorption difference. We have developed a comprehensive cascaded-systems framework to investigate Talbot interferometry, which is a technique for differential phase-contrast imaging. Analytical expressions for the MTF and NPS were derived to calculate the NEQ and a task-specific ideal-observer detectability index under assumptions of linearity and shift invariance. Talbot interferometry was compared to absorption contrast at equal dose, and using either a plane wave or a spherical wave in a conceivable mammography geometry. The impact of source size and spectrum bandwidth was included in the framework, and the trade-off with photon economy was investigated in some detail. Wave-propagation simulations were used to verify the analytical expressions and to generate example images. Results: Talbot interferometry inherently detects the differential of the phase, which led to a maximum in NEQ at high spatial frequencies, whereas the absorption-contrast NEQ decreased monotonically with frequency. Further, phase contrast detects differences in density rather than atomic number, and the optimal imaging energy was found to be a factor of 1.7 higher than for absorption

  10. Visualization and understanding of the granulation liquid mixing and distribution during continuous twin screw granulation using NIR chemical imaging.

    Science.gov (United States)

    Vercruysse, Jurgen; Toiviainen, Maunu; Fonteyne, Margot; Helkimo, Niko; Ketolainen, Jarkko; Juuti, Mikko; Delaet, Urbain; Van Assche, Ivo; Remon, Jean Paul; Vervaet, Chris; De Beer, Thomas

    2014-04-01

    Over the last decade, there has been increased interest in the application of twin screw granulation as a continuous wet granulation technique for pharmaceutical drug formulations. However, the mixing of granulation liquid and powder material during the short residence time inside the screw chamber and the atypical particle size distribution (PSD) of granules produced by twin screw granulation is not yet fully understood. Therefore, this study aims at visualizing the granulation liquid mixing and distribution during continuous twin screw granulation using NIR chemical imaging. In first instance, the residence time of material inside the barrel was investigated as function of screw speed and moisture content followed by the visualization of the granulation liquid distribution as function of different formulation and process parameters (liquid feed rate, liquid addition method, screw configuration, moisture content and barrel filling degree). The link between moisture uniformity and granule size distributions was also studied. For residence time analysis, increased screw speed and lower moisture content resulted to a shorter mean residence time and narrower residence time distribution. Besides, the distribution of granulation liquid was more homogenous at higher moisture content and with more kneading zones on the granulator screws. After optimization of the screw configuration, a two-level full factorial experimental design was performed to evaluate the influence of moisture content, screw speed and powder feed rate on the mixing efficiency of the powder and liquid phase. From these results, it was concluded that only increasing the moisture content significantly improved the granulation liquid distribution. This study demonstrates that NIR chemical imaging is a fast and adequate measurement tool for allowing process visualization and hence for providing better process understanding of a continuous twin screw granulation system. Copyright © 2013 Elsevier B.V. All

  11. Mixed reality orthognathic surgical simulation by entity model manipulation and 3D-image display

    Science.gov (United States)

    Shimonagayoshi, Tatsunari; Aoki, Yoshimitsu; Fushima, Kenji; Kobayashi, Masaru

    2005-12-01

    In orthognathic surgery, 3D surgical planning that considers the anteroposterior balance and symmetry of the jawbone, as well as the dental occlusion, is essential. In this study, a support system for orthognathic surgery has been developed to visualize changes in the mandible and the occlusal condition and to determine the optimum position in mandibular osteotomy. By integrating a physical (entity) tooth model, which is manipulated by hand to determine the optimum occlusal position, with 3D-CT skeletal images (the 3D image display portion) that are simultaneously displayed in real time, it becomes possible to determine a mandibular position and posture that improves both the skeletal morphology and the occlusal condition. The realistic operation of the entity model combined with the virtual 3D image display enabled the construction of a surgical simulation system based on mixed/augmented reality.

  12. Local hyperspectral data multisharpening based on linear/linear-quadratic nonnegative matrix factorization by integrating lidar data

    Science.gov (United States)

    Benhalouche, Fatima Zohra; Karoui, Moussa Sofiane; Deville, Yannick; Ouamri, Abdelaziz

    2015-10-01

    In this paper, a new Spectral-Unmixing-based approach, using Nonnegative Matrix Factorization (NMF), is proposed to locally multi-sharpen hyperspectral data by integrating a Digital Surface Model (DSM) obtained from LIDAR data. In this new approach, the nature of the local mixing model is detected by using the local variance of the object elevations. The hyper/multispectral images are explored using small zones. In each zone, the variance of the object elevations is calculated from the DSM data in this zone. This variance is compared to a threshold value and the adequate linear/linearquadratic spectral unmixing technique is used in the considered zone to independently unmix hyperspectral and multispectral data, using an adequate linear/linear-quadratic NMF-based approach. The obtained spectral and spatial information thus respectively extracted from the hyper/multispectral images are then recombined in the considered zone, according to the selected mixing model. Experiments based on synthetic hyper/multispectral data are carried out to evaluate the performance of the proposed multi-sharpening approach and literature linear/linear-quadratic approaches used on the whole hyper/multispectral data. In these experiments, real DSM data are used to generate synthetic data containing linear and linear-quadratic mixed pixel zones. The DSM data are also used for locally detecting the nature of the mixing model in the proposed approach. Globally, the proposed approach yields good spatial and spectral fidelities for the multi-sharpened data and significantly outperforms the used literature methods.
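    The local model selection described above, i.e. choosing between a linear and a linear-quadratic mixing model from the variance of the DSM elevations in each zone, can be sketched as follows. The NMF call shown covers only the linear case; the linear-quadratic branch is merely indicated, and the threshold, zone shape and endmember count are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

def unmix_zone(hyperspectral_zone, dsm_zone, var_threshold, n_endmembers=4):
    """Select the local mixing model from DSM variance, then unmix with NMF.

    hyperspectral_zone: (n_pixels, n_bands) reflectances of one small zone.
    dsm_zone: elevation values of the same zone.
    """
    model_is_linear = np.var(dsm_zone) < var_threshold  # flat zone -> linear mixing
    nmf = NMF(n_components=n_endmembers, init="nndsvda", max_iter=500)
    abundances = nmf.fit_transform(hyperspectral_zone)   # (n_pixels, n_endmembers)
    endmembers = nmf.components_                         # (n_endmembers, n_bands)
    if not model_is_linear:
        # Linear-quadratic zone: in the full method, bilinear terms (products of
        # endmember spectra) are appended and a dedicated LQ-NMF update is used;
        # that extension is omitted in this sketch.
        pass
    return abundances, endmembers, model_is_linear
```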

  13. Learning pathology using collaborative vs. individual annotation of whole slide images: a mixed methods trial.

    Science.gov (United States)

    Sahota, Michael; Leung, Betty; Dowdell, Stephanie; Velan, Gary M

    2016-12-12

    Students in biomedical disciplines require understanding of normal and abnormal microscopic appearances of human tissues (histology and histopathology). For this purpose, practical classes in these disciplines typically use virtual microscopy, viewing digitised whole slide images in web browsers. To enhance engagement, tools have been developed to enable individual or collaborative annotation of whole slide images within web browsers. To date, there have been no studies that have critically compared the impact on learning of individual and collaborative annotations on whole slide images. Junior and senior students engaged in Pathology practical classes within Medical Science and Medicine programs participated in cross-over trials of individual and collaborative annotation activities. Students' understanding of microscopic morphology was compared using timed online quizzes, while students' perceptions of learning were evaluated using an online questionnaire. For senior medical students, collaborative annotation of whole slide images was superior for understanding key microscopic features when compared to individual annotation; whilst being at least equivalent to individual annotation for junior medical science students. Across cohorts, students agreed that the annotation activities provided a user-friendly learning environment that met their flexible learning needs, improved efficiency, provided useful feedback, and helped them to set learning priorities. Importantly, these activities were also perceived to enhance motivation and improve understanding. Collaborative annotation improves understanding of microscopic morphology for students with sufficient background understanding of the discipline. These findings have implications for the deployment of annotation activities in biomedical curricula, and potentially for postgraduate training in Anatomical Pathology.

  14. A mixed-scale dense convolutional neural network for image analysis

    NARCIS (Netherlands)

    D.M. Pelt (Daniël); J.A. Sethian (James)

    2016-01-01

    textabstractDeep convolutional neural networks have been successfully applied to many image-processing problems in recent works. Popular network architectures often add additional operations and connections to the standard architecture to enable training deeper networks. To achieve accurate results

  15. High Signal Intensity in the Dentate Nucleus and Globus Pallidus on Unenhanced T1-Weighted MR Images: Comparison between Gadobutrol and Linear Gadolinium-Based Contrast Agents.

    Science.gov (United States)

    Moser, F G; Watterson, C T; Weiss, S; Austin, M; Mirocha, J; Prasad, R; Wang, J

    2018-02-01

    In view of the recent observations that gadolinium deposits in brain tissue after intravenous injection, our aim of this study was to compare signal changes in the globus pallidus and dentate nucleus on unenhanced T1-weighted MR images in patients receiving serial doses of gadobutrol, a macrocyclic gadolinium-based contrast agent, with those seen in patients receiving linear gadolinium-based contrast agents. This was a retrospective analysis of on-site patients with brain tumors. Fifty-nine patients received only gadobutrol, and 60 patients received only linear gadolinium-based contrast agents. Linear gadolinium-based contrast agents included gadoversetamide, gadobenate dimeglumine, and gadodiamide. T1 signal intensity in the globus pallidus, dentate nucleus, and pons was measured on the precontrast portions of patients' first and seventh brain MRIs. Ratios of signal intensity comparing the globus pallidus with the pons (globus pallidus/pons) and dentate nucleus with the pons (dentate nucleus/pons) were calculated. Changes in the above signal intensity ratios were compared within the gadobutrol and linear agent groups, as well as between groups. The dentate nucleus/pons signal ratio increased in the linear gadolinium-based contrast agent group ( t = 4.215, P linear gadolinium-based contrast agent group ( t = 2.931, P linear gadolinium-based contrast agents. © 2018 by American Journal of Neuroradiology.

  16. Analysis of the imaging performance in indirect digital mammography detectors by linear systems and signal detection models

    International Nuclear Information System (INIS)

    Liaparinos, P.; Kalyvas, N.; Kandarakis, I.; Cavouras, D.

    2013-01-01

    Purpose: The purpose of this study was to provide an analysis of imaging performance in digital mammography, using indirect detector instrumentation, by combining Linear Cascaded Systems (LCS) theory and Signal Detection Theory (SDT). Observer performance was assessed by examining frequently employed detectors, consisting of phosphor-based X-ray converters (granular Gd2O2S:Tb and structured CsI:Tl), coupled with the recently introduced complementary metal-oxide-semiconductor (CMOS) sensor. By applying combinations of various irradiation conditions (target-filter and exposure levels at 28 kV) to the imaging detectors, our study aimed to find the optimum system set-up for digital mammography. For this purpose, the signal-to-noise transfer properties of the medical imaging detectors were examined for breast carcinoma detectability. Methods: An analytical model was applied to calculate X-ray interactions within software breast phantoms and detector media. Modeling involved: (a) three X-ray spectra used in digital mammography: 28 kV Mo/Mo (Mo: 0.030 mm), 28 kV Rh/Rh (Rh: 0.025 mm) and 28 kV W/Rh (Rh: 0.060 mm), at different entrance surface air kerma (ESAK) levels of 3 mGy and 5 mGy, (b) a 5 cm thick Perspex software phantom incorporating a small Ca lesion of varying size (0.1–1 cm), and (c) two 200 μm thick phosphor-based X-ray converters (Gd2O2S:Tb, CsI:Tl), coupled to a CMOS-based detector of 22.5 μm pixel size. Results: The best (lowest) contrast threshold (CT) values were obtained with the combination of: (i) W/Rh target-filter, (ii) 5 mGy (ESAK), and (iii) the CsI:Tl-CMOS detector. For a lesion diameter of 0.5 cm, the CT was found to be improved, in comparison with the other anode/filter combinations, by approximately 42% relative to Rh/Rh and 55% relative to Mo/Mo for the small carcinoma (0.1 cm), and by approximately 50% relative to Rh/Rh and 125% relative to Mo/Mo for the large carcinoma (1 cm), considering the 5 mGy X-ray beam. With decreasing lesion diameter and thickness, a limiting CT (100%) occurred for size

  17. Mixed-spectrum generation mechanism analysis of dispersive hyperspectral imaging for improving environmental monitoring of coastal waters

    Science.gov (United States)

    Xie, Feng; Xiao, Gonghai; Qi, Hongxing; Shu, Rong; Wang, Jianyu; Xue, Yongqi

    2010-11-01

    At present, most of the coastal zone in China belongs to Case II waters, with a large volume of shallow waters. Though the theory and practice of ocean water color remote sensing have improved considerably, many problems remain, mainly as follows: (a) to date there is no dedicated sensor for remote sensing of thermal pollution in coastal waters; (b) although many scholars have developed water quality parameter retrieval models for the open ocean, there is still a large gap to practical application in turbid coastal waters. The task is much more difficult due to the presence of high concentrations of suspended sediments and dissolved organic material, which overwhelm the spectral signal of the sea water. Hyperspectral remote sensing allows a sensor on a moving platform to gather emitted radiation from the Earth's surface, which opens a way to a better analysis and understanding of coastal waters. The Operative Modular Imaging Spectrometer (OMIS) is a representative imaging spectrometer developed by the Chinese Academy of Sciences. OMIS collects reflected and emitted radiation from the ground through an RC telescope, with a mirror scanning across track while the aircraft motion provides the along-track direction. In this paper, we explore the use of OMIS as the airborne sensor for thermal pollution monitoring in coastal waters, on the basis of an analysis of the mixed spectra arising from the image correction process for geometric distortion. An airborne experiment was conducted in the winter of 2009 on the coast of the East Sea in China.

  18. Bremsstrahlung-Based Imaging and Assays of Radioactive, Mixed and Hazardous Waste

    Science.gov (United States)

    Kwofie, J.; Wells, D. P.; Selim, F. A.; Harmon, F.; Duttagupta, S. P.; Jones, J. L.; White, T.; Roney, T.

    2003-08-01

    A new nondestructive accelerator based x-ray fluorescence (AXRF) approach has been developed to identify heavy metals in large-volume samples. Such samples are an important part of the process and waste streams of U.S Department of Energy sites, as well as other industries such as mining and milling. Distributions of heavy metal impurities in these process and waste samples can range from homogeneous to highly inhomogeneous, and non-destructive assays and imaging that can address both are urgently needed. Our approach is based on using high-energy, pulsed bremsstrahlung beams (3-6.5 MeV) from small electron accelerators to produce K-shell atomic fluorescence x-rays. In addition we exploit pair-production, Compton scattering and x-ray transmission measurements from these beams to probe locations of high density and high atomic number. The excellent penetrability of these beams allows assays and images for soil-like samples at least 15 g/cm2 thick, with elemental impurities of atomic number greater than approximately 50. Fluorescence yield of a variety of targets was measured as a function of impurity atomic number, impurity homogeneity, and sample thickness. We report on actual and potential detection limits of heavy metal impurities in a soil matrix for a variety of samples, and on the potential for imaging, using AXRF and these related probes.

  19. Mathematical modeling of the crack growth in linear elastic isotropic materials by conventional fracture mechanics approaches and by molecular dynamics method: crack propagation direction angle under mixed mode loading

    Science.gov (United States)

    Stepanova, Larisa; Bronnikov, Sergej

    2018-03-01

    The crack growth direction angles in an isotropic linear elastic plane with a central crack under mixed-mode loading conditions are found for the full range of the mixity parameter. Two fracture criteria of traditional linear fracture mechanics (the maximum tangential stress and minimum strain energy density criteria) are used. Atomistic simulations of the central crack growth process in an infinite plane medium under mixed-mode loading are performed using the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), a classical molecular dynamics code. The inter-atomic potential used in this investigation is the Embedded Atom Method (EAM) potential. Plane specimens with an initial central crack were subjected to mixed-mode loading. The simulation cell contains 400,000 atoms. The crack propagation direction angles under different values of the mixity parameter, over a wide range of values from pure tensile loading to pure shear loading and over a wide range of temperatures (from 0.1 K to 800 K), are obtained and analyzed. It is shown that the crack propagation direction angles obtained by the molecular dynamics method coincide with the crack propagation direction angles given by the multi-parameter fracture criteria based on the strain energy density and the multi-parameter description of the crack-tip fields.
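    As a concrete reference for the first of the two classical criteria named above, the maximum tangential stress (MTS) criterion gives the kink angle in closed form from the mode I and mode II stress intensity factors. The sketch below uses the standard K_I/K_II form rather than the authors' mixity-parameter parameterization.

```python
import numpy as np

def mts_kink_angle(k1, k2):
    """Crack propagation angle (radians) from the maximum tangential stress
    criterion: 2*K_II*t^2 - K_I*t - K_II = 0 with t = tan(theta/2).

    For K_II -> 0 the angle tends to 0 (pure mode I, straight growth).
    """
    if np.isclose(k2, 0.0):
        return 0.0
    t = (k1 - np.sqrt(k1 ** 2 + 8.0 * k2 ** 2)) / (4.0 * k2)
    return 2.0 * np.arctan(t)

# Pure shear (K_I = 0) gives the classical MTS result of about -70.5 degrees:
# np.degrees(mts_kink_angle(0.0, 1.0))  # ~ -70.5
```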

  20. A mixed-order nonlinear diffusion compressed sensing MR image reconstruction.

    Science.gov (United States)

    Joy, Ajin; Paul, Joseph Suresh

    2018-03-07

    The aim is to avoid the formation of staircase artifacts in nonlinear diffusion-based MR image reconstruction without compromising computational speed. Whereas second-order diffusion encourages the evolution of pixel neighborhoods toward uniform intensities, fourth-order diffusion considers a smooth region to be not necessarily a region of uniform intensity but possibly a planar region. Therefore, a controlled application of the fourth-order diffusivity function is used to encourage second-order diffusion to reconstruct the smooth regions of the image as planes rather than groups of blocks, while not being strong enough to introduce the undesirable speckle effect. The proposed method is compared with second- and fourth-order nonlinear diffusion reconstruction, total variation (TV), total generalized variation, and higher degree TV using in vivo data sets for different undersampling levels, with application to dictionary learning-based reconstruction. It is observed that the proposed technique preserves sharp boundaries in the image while preventing the formation of staircase artifacts in regions of smoothly varying pixel intensities. It also shows reduced error measures compared with second-order nonlinear diffusion reconstruction or TV and converges faster than TV-based methods. Because nonlinear diffusion is known to be an effective alternative to TV for edge-preserving reconstruction, the crucial aspect of staircase artifact removal is addressed. Reconstruction is found to be stable for the experimentally determined range of the fourth-order regularization parameter, and therefore does not require a parameter search. Hence, the computational simplicity of second-order diffusion is retained. © 2018 International Society for Magnetic Resonance in Medicine.
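    To make the mixing of the two diffusion orders concrete, the sketch below shows one explicit update step that combines a Perona-Malik-like second-order term with a You-Kaveh-like fourth-order term acting on the Laplacian. It is only an illustrative scheme under assumed discretizations and parameter names; the data-fidelity (k-space consistency) terms of the compressed-sensing reconstruction are omitted.

```python
import numpy as np
from scipy.ndimage import laplace

def mixed_order_diffusion_step(u, dt=0.1, k2=0.02, k4=0.02, w4=0.5):
    """One explicit update mixing second- and fourth-order nonlinear diffusion.

    The second-order term smooths toward piecewise-constant regions; the
    fourth-order term (weight w4) favours piecewise-planar regions and thereby
    counteracts staircasing.
    """
    # Second-order (Perona-Malik-like) diffusion
    gy, gx = np.gradient(u)
    g2 = 1.0 / (1.0 + (gx ** 2 + gy ** 2) / k2 ** 2)   # edge-stopping function
    div2 = np.gradient(g2 * gx, axis=1) + np.gradient(g2 * gy, axis=0)

    # Fourth-order (You-Kaveh-like) diffusion acting on the Laplacian
    lap = laplace(u)
    g4 = 1.0 / (1.0 + (lap / k4) ** 2)
    div4 = laplace(g4 * lap)

    return u + dt * (div2 - w4 * div4)
```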

  1. Stereoscopic particle image velocimetry investigations of the mixed convection exchange flow through a horizontal vent

    Science.gov (United States)

    Varrall, Kevin; Pretrel, Hugues; Vaux, Samuel; Vauquelin, Olivier

    2017-10-01

    The exchange flow through a horizontal vent linking two compartments (one above the other) is studied experimentally. This exchange is governed both by the natural buoyancy effect, due to the temperature difference between the fluids in the two compartments, and by the effect of forced mechanical ventilation applied in the lower compartment. Such a configuration leads to uni- or bidirectional flows through the vent. In the experiments, buoyancy is induced in the lower compartment by an electrical resistor. The forced ventilation is applied in exhaust or supply mode, and three different values of the vent area are considered. To estimate both velocity fields and flow rates at the vent, measurements are taken at thermal steady state, flush with the vent in the upper compartment, using stereoscopic particle image velocimetry (SPIV), which is original for this kind of flow. The SPIV measurements allow the area occupied by the upward and downward flows to be determined.

  2. Mixing of multi-modal images for conformal radiotherapy: application to patient repositioning

    International Nuclear Information System (INIS)

    Vassal, P.

    1998-01-01

    This study presents a patient repositioning procedure based on a comparative evaluation between the actual position of the patient and the theoretically expected position; the offset between the two gives the set-up error in translation and rotation. A correction is calculated and applied to the treatment environment (position and orientation of the patient, position and orientation of the irradiation source). The control system makes it possible to determine the precise position of the tumor volume. Echography is used to determine the position of an organ. The surface sensor is used to localize tumors of the face or of the brain with respect to the face; the acquisition and processing of the information take only a few minutes. The X-ray imager is used for the localization of bony structures. (N.C.)

  3. Comparison of radiation absorbed dose in target organs in maxillofacial imaging with panoramic, conventional linear tomography, cone beam computed tomography and computed tomography

    Directory of Open Access Journals (Sweden)

    Panjnoush M.

    2009-12-01

    Full Text Available "nBackground and Aim: The objective of this study was to measure and compare the tissue absorbed dose in thyroid gland, salivary glands, eye and skin in maxillofacial imaging with panoramic, conventional linear tomography, cone beam computed tomography (CBCT and computed tomography (CT."nMaterials and Methods: Thermoluminescent dosimeters (TLD were implanted in 14 sites of RANDO phantom to measure average tissue absorbed dose in thyroid gland, parotid glands, submandibular glands, sublingual gland, lenses and buccal skin. The Promax (PLANMECA, Helsinki, Finland unit was selected for Panoramic, conventional linear tomography and cone beam computed tomography examinations and spiral Hispeed/Fxi (General Electric,USA was selected for CT examination. The average tissue absorbed doses were used for the calculation of the equivalent and effective doses in each organ."nResults: The average absorbed dose for Panoramic ranged from 0.038 mGY (Buccal skin to 0.308 mGY (submandibular gland, linear tomography ranged from 0.048 mGY (Lens to 0.510 mGY (submandibular gland,CBCT ranged from 0.322 mGY (thyroid glad to 1.144 mGY (Parotid gland and in CT ranged from 2.495 mGY (sublingual gland to 3.424 mGY (submandibular gland. Total effective dose in CBCT is 5 times greater than Panoramic and 4 times greater than linear tomography, and in CT, 30 and 22 times greater than Panoramic and linear tomography, respectively. Total effective dose in CT is 6 times greater than CBCT."nConclusion: For obtaining 3-dimensional (3D information in maxillofacial region, CBCT delivers the lower dose than CT, and should be preferred over a medical CT imaging. Furthermore, during maxillofacial imaging, salivary glands receive the highest dose of radiation.

  4. Pulse height non-linearity in LaBr3:Ce crystal for gamma ray spectrometry and imaging

    International Nuclear Information System (INIS)

    Pani, R.; Cinti, M.N.; Pellegrini, R.; Bennati, P.; Ridolfi, S.; Scafe, R.; Orsolini Cencelli, V.; De Notaristefani, F.; Fabbri, A.; Navarria, F.L.; Lanconelli, N.; Moschini, G.; Boccaccio, P.

    2011-01-01

    In this paper the response, in terms of pulse height linearity, of two Hamamatsu photomultipliers is investigated when coupled to a LaBr3:Ce scintillation crystal. The two photodetectors have high quantum efficiency: 30% for the R6231-01 and 42% for the R7600-200 tube. The substantial difference is in the dynode structure, linear focused and metal channel for the R6231 and R7600, respectively. In this work, in order to verify the non-linearity effects on the pulse height distribution, due principally to the high and fast light output of the LaBr3:Ce scintillator, we propose a 'peak by peak' procedure to calibrate the pulse height distribution. Utilizing a specific fragmentation of the calibration curve into subsets, the calculated energy values are very similar for both PMTs. This result confirmed the potential of the procedure to highlight the non-linearity effects on the pulse height distribution.

  5. Short wave infrared hyperspectral imaging for recovered post-consumer single and mixed polymers characterization

    Science.gov (United States)

    Bonifazi, Giuseppe; Palmieri, Roberta; Serranti, Silvia

    2015-03-01

    Post-consumer plastics from packing and packaging represent about 60% of the total plastic waste (i.e. 23 million tons) produced in Europe. The EU Directive (2014/12/EC) sets a target that 60%, by weight, of packaging waste has to be recovered or thermally valorized. When recovered, the same directive establishes that packaging waste has to be recycled in a percentage ranging between 55% (minimum) and 60% (maximum). Failure to meet these rules can mean that large quantities of end-of-life plastic products, specifically those utilized for packaging, are disposed of, with a strong environmental impact. The application of recycling strategies aimed at polymer recovery represents an opportunity to reduce: i) the use of non-renewable raw materials (i.e. oil), ii) carbon dioxide emissions and iii) the amount of plastic waste disposed of. The aim of this work was to perform a full characterization of different end-of-life polymer-based products, consisting not only of single polymers but also of mixtures, in order to achieve their identification for quality control and/or certification assessment. The study specifically addressed the characterization of the different recovered products resulting from a recycling plant where classical processing flow-sheets, based on milling, classification and separation, are applied. To reach this goal, an innovative sensing technique, based on the utilization of a HyperSpectral Imaging (HSI) device working in the SWIR region (1000-2500 nm), was investigated. Following this strategy, recovered single polymers and/or mixed polymers were correctly recognized. The main advantage of the proposed approach is the possibility of performing "on-line" analyses, that is, directly on the different material flow streams resulting from processing, without any physical sampling or classical laboratory "off-line" determination.

  6. Modeling and simulation of protein elution in linear pH and salt gradients on weak, strong and mixed cation exchange resins applying an extended Donnan ion exchange model.

    Science.gov (United States)

    Wittkopp, Felix; Peeck, Lars; Hafner, Mathias; Frech, Christian

    2018-04-13

    Process development and characterization based on mathematic modeling provides several advantages and has been applied more frequently over the last few years. In this work, a Donnan equilibrium ion exchange (DIX) model is applied for modelling and simulation of ion exchange chromatography of a monoclonal antibody in linear chromatography. Four different cation exchange resin prototypes consisting of weak, strong and mixed ligands are characterized using pH and salt gradient elution experiments applying the extended DIX model. The modelling results are compared with the results using a classic stoichiometric displacement model. The Donnan equilibrium model is able to describe all four prototype resins while the stoichiometric displacement model fails for the weak and mixed weak/strong ligands. Finally, in silico chromatogram simulations of pH and pH/salt dual gradients are performed to verify the results and to show the consistency of the developed model. Copyright © 2018 Elsevier B.V. All rights reserved.

  7. Correction of Non-Linear Propagation Artifact in Contrast-Enhanced Ultrasound Imaging of Carotid Arteries: Methods and in Vitro Evaluation.

    Science.gov (United States)

    Yildiz, Yesna O; Eckersley, Robert J; Senior, Roxy; Lim, Adrian K P; Cosgrove, David; Tang, Meng-Xing

    2015-07-01

    Non-linear propagation of ultrasound creates artifacts in contrast-enhanced ultrasound images that significantly affect both qualitative and quantitative assessments of tissue perfusion. This article describes the development and evaluation of a new algorithm to correct for this artifact. The correction is a post-processing method that estimates and removes non-linear artifact in the contrast-specific image using the simultaneously acquired B-mode image data. The method is evaluated on carotid artery flow phantoms with large and small vessels containing microbubbles of various concentrations at different acoustic pressures. The algorithm significantly reduces non-linear artifacts while maintaining the contrast signal from bubbles to increase the contrast-to-tissue ratio by up to 11 dB. Contrast signal from a small vessel 600 μm in diameter buried in tissue artifacts before correction was recovered after the correction. Copyright © 2015 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.

  8. Fast optical measurements and imaging of flow mixing: Fast optical measurements and imaging of temperature in combined fossil fuel and biomass/waste systems

    Energy Technology Data Exchange (ETDEWEB)

    Clausen, Soennik; Fateev, A.; Lindorff Nielsen, K.; Evseev, V.

    2012-02-15

    The project is focused on fast time-resolved infrared measurements of gas temperature and fast IR imaging of flames in various combustion environments. An infrared spectrometer system was developed in the project for fast infrared spectral measurements on an industrial scale using IR fibre optics. Fast time- and spectrally-resolved measurements in the 1.5-5.1 μm spectral range give information about flame characteristics such as gas and particle temperatures, eddies and turbulent gas mixing. Time-resolved gas composition in that spectral range (H2O, CH4, CO2, CO), which is one of the key parameters in combustion enhancement, can also be obtained. The infrared camera was also used together with special endoscope optics for fast thermal imaging of a coal-straw flame in an industrial boiler. The obtained time-resolved infrared images provided useful information for the diagnostics of the flame and the fuel distribution. The applicability of the system for gas leak detection is also demonstrated. The infrared spectrometer system, with minor developments, was applied for fast time-resolved exhaust gas temperature measurements performed simultaneously at the three optical ports of the exhaust duct of a marine Diesel engine, and for visualisation of gas flow behaviour in the cylinder. (Author)

  9. Novel mixed ligand technetium complexes as 5-HT1A receptor imaging agents

    International Nuclear Information System (INIS)

    Leon, A.; Rey, A.; Mallo, L.; Pirmettis, I.; Papadopoulos, M.; Leon, E.; Pagano, M.; Manta, E.; Incerti, M.; Raptopoulou, C.; Terzis, A.; Chiotellis, E.

    2002-01-01

    The synthesis, characterization and biological evaluation of two novel 3 + 1 mixed ligand 99m Tc-complexes, bearing the 1-(2-methoxyphenylpiperazine) moiety, a fragment of the true 5-HT 1A antagonist WAY 100635, is reported. Complexes at tracer level 99m TcO[(CH 3 CH 2 ) 2 NCH 2 CH 2 N(CH 2 CH 2 S) 2 ][o-CH 3 OC 6 H 4 N(CH 2 CH 2 ) 2 NCH 2 CH 2 S], 99m Tc-1, and 99m TcO[((CH 3 ) 2 CH) 2 NCH 2 CH 2 N(CH 2 CH 2 S) 2 ][o-CH 3 OC 6 H 4 N (CH 2 CH 2 ) 2 NCH 2 CH 2 S], 99m Tc-2, were prepared using 99m Tc-glucoheptonate as precursor. For structural characterization, the analogous oxorhenium complexes, Re-1 and Re-2, were prepared by ligand exchange reaction using ReOCl 3 (PPh 3 ) 2 as precursor, and characterized by elemental analysis and spectroscopic methods. Complex Re-1 was further characterized by crystallographic analysis. Labeling was performed with high yield (>85%) and radiochemical purity (>90%) using very low ligand concentration. The structure of 99m Tc complexes was established by comparative HPLC using the well-characterized oxorhenium analogues as references. In vitro binding assays demonstrated the affinity of these complexes for 5-HT 1A receptors (IC 50 : 67 and 45 nM for Re-1 and Re-2 respectively). Biological studies in mice showed the ability of 99m Tc-1 and 99m Tc-2 complexes to cross the intact blood-brain barrier (1.4 and 0.9% dose/g, respectively at 1 min post-inj.). The distribution of these complexes in various regions in rat brain is inhomogeneous. The highest ratio between areas reach and poor in 5-HT 1A receptors was calculated for complex Tc-1 at 60 min p.i. (hippocampus/cerebellum = 1.7).

  10. Novel mixed ligand technetium complexes as 5-HT1A receptor imaging agents.

    Science.gov (United States)

    León, A; Rey, A; Mallo, L; Pirmettis, I; Papadopoulos, M; León, E; Pagano, M; Manta, E; Incerti, M; Raptopoulou, C; Terzis, A; Chiotellis, E

    2002-02-01

    The synthesis, characterization and biological evaluation of two novel 3 + 1 mixed ligand 99mTc-complexes, bearing the 1-(2-methoxyphenylpiperazine) moiety, a fragment of the true 5-HT1A antagonist WAY 100635, are reported. Complexes at tracer level 99mTcO[(CH3CH2)2NCH2CH2N(CH2CH2S)2][o-CH3OC6H4N(CH2CH2)2NCH2CH2S], 99mTc-1, and 99mTcO[((CH3)2CH)2NCH2CH2N(CH2CH2S)2][o-CH3OC6H4N (CH2CH2)2NCH2CH2S], 99mTc-2, were prepared using 99mTc-glucoheptonate as precursor. For structural characterization, the analogous oxorhenium complexes, Re-1 and Re-2, were prepared by ligand exchange reaction using ReOCl3(PPh3)2 as precursor, and characterized by elemental analysis and spectroscopic methods. Complex Re-1 was further characterized by crystallographic analysis. Labeling was performed with high yield (>85%) and radiochemical purity (>90%) using very low ligand concentration. The structure of 99mTc complexes was established by comparative HPLC using the well-characterized oxorhenium analogues as references. In vitro binding assays demonstrated the affinity of these complexes for 5-HT1A receptors (IC50 : 67 and 45 nM for Re-1 and Re-2 respectively). Biological studies in mice showed the ability of 99mTc-1 and 99mTc-2 complexes to cross the intact blood-brain barrier (1.4 and 0.9% dose/g, respectively at 1 min post-inj.). The distribution of these complexes in various regions in rat brain is inhomogeneous. The highest ratio between areas rich and poor in 5-HT1A receptors was calculated for complex Tc-1 at 60 min p.i. (hippocampus/cerebellum = 1.7).

  11. Novel mixed ligand technetium complexes as 5-HT{sub 1A} receptor imaging agents

    Energy Technology Data Exchange (ETDEWEB)

    Leon, A.; Rey, A. E-mail: arey@bilbo.edu.uy; Mallo, L.; Pirmettis, I.; Papadopoulos, M.; Leon, E.; Pagano, M.; Manta, E.; Incerti, M.; Raptopoulou, C.; Terzis, A.; Chiotellis, E

    2002-02-01

    The synthesis, characterization and biological evaluation of two novel 3 + 1 mixed ligand {sup 99m}Tc-complexes, bearing the 1-(2-methoxyphenylpiperazine) moiety, a fragment of the true 5-HT{sub 1A} antagonist WAY 100635, are reported. Complexes at tracer level {sup 99m}TcO[(CH{sub 3}CH{sub 2}){sub 2}NCH{sub 2}CH{sub 2}N(CH{sub 2}CH{sub 2}S){sub 2}][o-CH{sub 3}OC{sub 6}H{sub 4}N(CH{sub 2}CH{sub 2}){sub 2}NCH{sub 2}CH{sub 2}S], {sup 99m}Tc-1, and {sup 99m}TcO[((CH{sub 3}){sub 2}CH){sub 2}NCH{sub 2}CH{sub 2}N(CH{sub 2}CH{sub 2}S){sub 2}][o-CH{sub 3}OC{sub 6}H{sub 4}N (CH{sub 2}CH{sub 2}){sub 2}NCH{sub 2}CH{sub 2}S], {sup 99m}Tc-2, were prepared using {sup 99m}Tc-glucoheptonate as precursor. For structural characterization, the analogous oxorhenium complexes, Re-1 and Re-2, were prepared by ligand exchange reaction using ReOCl{sub 3}(PPh{sub 3}){sub 2} as precursor, and characterized by elemental analysis and spectroscopic methods. Complex Re-1 was further characterized by crystallographic analysis. Labeling was performed with high yield (>85%) and radiochemical purity (>90%) using very low ligand concentration. The structure of {sup 99m}Tc complexes was established by comparative HPLC using the well-characterized oxorhenium analogues as references. In vitro binding assays demonstrated the affinity of these complexes for 5-HT{sub 1A} receptors (IC{sub 50} : 67 and 45 nM for Re-1 and Re-2 respectively). Biological studies in mice showed the ability of {sup 99m}Tc-1 and {sup 99m}Tc-2 complexes to cross the intact blood-brain barrier (1.4 and 0.9% dose/g, respectively at 1 min post-inj.). The distribution of these complexes in various regions in rat brain is inhomogeneous. The highest ratio between areas rich and poor in 5-HT{sub 1A} receptors was calculated for complex Tc-1 at 60 min p.i. (hippocampus/cerebellum = 1.7).

  12. Mixed selection. Effects of body images, dietary restraint, and persuasive messages on females' orientations towards chocolate.

    Science.gov (United States)

    Durkin, Kevin; Hendry, Alana; Stritzke, Werner G K

    2013-01-01

    Many women experience ambivalent reactions to chocolate: craving it but also wary of its impact on weight and health. Chocolate advertisements often use thin ideal models and previous research indicates that this exacerbates ambivalence. This experiment compared attitudes to, and consumption of, chocolate following exposure to images containing thin or overweight models together with written messages that were either positive or negative about eating chocolate. Participants (all female) were categorised as either low- or high-restraint. Approach, avoidance and guilt motives towards chocolate were measured and the participants had an opportunity to consume chocolate. Exposure to thin ideal models led to higher approach motives and this effect was most marked among the high restraint participants. Avoidance and guilt scores did not vary as a function of model size or message, but there were clear differences between the restraint groups, with the high restraint participants scoring substantially higher than low restraint participants on both of these measures. When the participants were provided with an opportunity to eat some chocolate, those with high restraint who had been exposed to the thin models consumed the most. Copyright © 2012 Elsevier Ltd. All rights reserved.

  13. Angular-resolution and material-characterization measurements for a dual-particle imaging system with mixed-oxide fuel

    Energy Technology Data Exchange (ETDEWEB)

    Poitrasson-Rivière, Alexis, E-mail: alexispr@umich.edu [Department of Nuclear Engineering & Radiological Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); Polack, J. Kyle; Hamel, Michael C.; Klemm, Dietrich D.; Ito, Kai; McSpaden, Alexander T.; Flaska, Marek; Clarke, Shaun D.; Pozzi, Sara A. [Department of Nuclear Engineering & Radiological Sciences, University of Michigan, Ann Arbor, MI 48109 (United States); Tomanin, Alice [Lainsa-Italia S.R.L., Via E. Fermi 2749, 21027 Ispra, VA (Italy); Peerani, Paolo [European Commission, Joint Research Centre, Institute for Transuranium Elements, 21027 Ispra, VA (Italy)

    2015-10-11

    A dual-particle imaging (DPI) system, capable of simultaneously imaging fast neutrons and gamma rays, has been operated in the presence of mixed-oxide (MOX) fuel to assess the system's angular resolution and material-characterization capabilities. The detection principle is based on the scattering physics of neutrons (elastic scattering) and gamma rays (Compton scattering) in organic and inorganic scintillators. The detection system is designed as a combination of a two-plane Compton camera and a neutron-scatter camera. The front plane consists of EJ-309 liquid scintillators and the back plane consists of interleaved EJ-309 and NaI(Tl) scintillators. MCNPX-PoliMi was used to optimize the geometry of the system and the resulting prototype was built and tested using a Cf-252 source as an SNM surrogate. A software package was developed to acquire and process data in real time. The software was used for a measurement campaign to assess the angular resolution of the imaging system with MOX samples. Measurements of two MOX canisters of similar isotopics and intensity were performed for 6 different canister separations (from 5° to 30°, corresponding to distances of 21 cm and 131 cm, respectively). The measurements yielded a minimum separation of 20° at 2.5 m (86-cm separation) required to see 2 separate hot spots. Additionally, the results displayed good agreement with MCNPX-PoliMi simulations. These results indicate an angular resolution between 15° and 20°, given the 5° step. Coupled with its large field of view, and its capability to differentiate between spontaneous fission and (α,n) sources, the DPI system shows its potential for nuclear-nonproliferation applications.

  14. LINEAR MIXED MODEL TO DESCRIBE THE BASAL AREA INCREMENT FOR INDIVIDUAL CEDRO (Cedrela odorata L.) TREES IN OCCIDENTAL AMAZON, BRAZIL

    Directory of Open Access Journals (Sweden)

    Thiago Augusto da Cunha

    2013-01-01

    Full Text Available Reliable growth data from trees are important for establishing rational forest management. Tree characteristics, such as size, crown architecture and competition indices, have been used to describe increment mathematically and efficiently when associated with it. However, the precise role of these effects in growth modeling for tropical trees needs further study. Here the basal area increment (BAI) of individual Cedrela odorata trees sampled in the Amazon forest is reconstructed to develop a growth model using potential predictors: (1) classical tree size; (2) morphometric data; (3) competition; and (4) social position, including liana loads. Despite the large variation in tree size and growth, these kinds of predictor variables described the BAI well at the individual-tree level. The fitted mixed model achieved a high efficiency (R² = 92.7%) and predicted 3-year BAI over bark for Cedrela odorata trees ranging from 10 to 110 cm in diameter at breast height. Tree height, stem slenderness and crown form had a strong influence in the BAI growth model, explaining most of the growth variance (partial R² = 87.2%). Competition variables had a negative influence on BAI and explained about 7% of the total variation. The introduction of a random parameter into the regression model (mixed-model procedure) gave a better fit to the observed data and produced more realistic predictions than the fixed-effects model.
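
    As a minimal illustration of the kind of model described in this record, the sketch below fits a linear mixed model for basal area increment with a random intercept per tree using statsmodels. The data and the column names (tree_id, dbh, competition, bai) are synthetic placeholders, not the predictors or measurements used in the study.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Synthetic stand-in data: repeated basal-area-increment observations per tree.
    rng = np.random.default_rng(1)
    n_trees, n_obs = 40, 5
    tree_id = np.repeat(np.arange(n_trees), n_obs)
    dbh = rng.uniform(10, 110, n_trees).repeat(n_obs)      # diameter at breast height (cm)
    competition = rng.uniform(0, 1, tree_id.size)          # hypothetical competition index
    tree_effect = rng.normal(0, 2, n_trees)[tree_id]       # tree-level random intercept
    bai = 0.5 * dbh - 4.0 * competition + tree_effect + rng.normal(0, 1, tree_id.size)
    df = pd.DataFrame({"tree_id": tree_id, "dbh": dbh, "competition": competition, "bai": bai})

    # Fixed effects for size and competition, random intercept grouped by tree.
    model = smf.mixedlm("bai ~ dbh + competition", data=df, groups=df["tree_id"])
    print(model.fit().summary())
    ```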

  15. Definition of Linear Color Models in the RGB Vector Color Space to Detect Red Peaches in Orchard Images Taken under Natural Illumination

    Directory of Open Access Journals (Sweden)

    Jordi Palacín

    2012-06-01

    Full Text Available This work proposes the detection of red peaches in orchard images based on the definition of different linear color models in the RGB vector color space. The classification and segmentation of the pixels of the image are then performed by comparing the color distance from each pixel to the different previously defined linear color models. The methodology proposed has been tested with images obtained in a real orchard under natural light. The peach variety in the orchard was the paraguayo (Prunus persica var. platycarpa) peach with red skin. The segmentation results showed that the area of the red peaches in the images was detected with an average error of 11.6%; 19.7% in the case of bright illumination; 8.2% in the case of low illumination; 8.6% for occlusion up to 33%; 12.2% in the case of occlusion between 34 and 66%; and 23% for occlusion above 66%. Finally, a methodology was proposed to estimate the diameter of the fruits based on an ellipsoidal fitting. A first diameter was obtained by using all the contour pixels and a second diameter was obtained by rejecting some pixels of the contour. This approach enables a rough estimate of the fruit occlusion percentage range by comparing the two diameter estimates.
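
    A minimal sketch of the core classification step described above: computing the distance from each RGB pixel to a linear color model (a line through the origin of the RGB space) and assigning the pixel to the closest model. The reference colors, the random stand-in image and the labels are illustrative assumptions, not the models or data used in the paper.

    ```python
    import numpy as np

    def distance_to_color_line(pixels, direction):
        """Distance of each RGB pixel to the line through the origin spanned by `direction`."""
        d = direction / np.linalg.norm(direction)
        proj = pixels @ d                       # scalar projection onto the line
        closest = np.outer(proj, d)             # closest point on the line for each pixel
        return np.linalg.norm(pixels - closest, axis=1)

    # Hypothetical linear color models: one for red peach skin, one for green foliage.
    models = {"peach": np.array([180.0, 40.0, 50.0]),
              "foliage": np.array([60.0, 120.0, 50.0])}

    img = np.random.randint(0, 256, (480, 640, 3)).astype(float)    # stand-in orchard image
    pix = img.reshape(-1, 3)
    dists = np.stack([distance_to_color_line(pix, m) for m in models.values()], axis=1)
    labels = dists.argmin(axis=1).reshape(img.shape[:2])            # 0 = peach-like, 1 = foliage-like
    ```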

  16. On the determination of CP-even and CP-odd components of a mixed CP Higgs boson at e+e- linear colliders

    International Nuclear Information System (INIS)

    Dova, Maria Teresa; Ferrari, Sergio

    2005-01-01

    We present a method to investigate the CP quantum numbers of the Higgs boson in the process e+e- -> Zφ at a future e+e- linear collider (LC), where φ, a generic Higgs boson, is a mixture of CP-even and CP-odd states. The procedure consists of a comparison of the data with predictions obtained from Monte Carlo simulations corresponding to the production of scalar and pseudoscalar Higgs bosons and the interference term, which constitutes a distinctive signal of CP violation. We present estimates of the sensitivity of the method from Monte Carlo studies using hypothetical data samples with a full LC detector simulation, taking into account the background signals.

  17. Principal component analysis and linear discriminant analysis of multi-spectral autofluorescence imaging data for differentiating basal cell carcinoma and healthy skin

    Science.gov (United States)

    Chernomyrdin, Nikita V.; Zaytsev, Kirill I.; Lesnichaya, Anastasiya D.; Kudrin, Konstantin G.; Cherkasova, Olga P.; Kurlov, Vladimir N.; Shikunova, Irina A.; Perchik, Alexei V.; Yurchenko, Stanislav O.; Reshetov, Igor V.

    2016-09-01

    In the present paper, the ability to differentiate basal cell carcinoma (BCC) and healthy skin by combining multi-spectral autofluorescence imaging, principal component analysis (PCA), and linear discriminant analysis (LDA) has been demonstrated. For this purpose, the experimental setup, which includes excitation and detection branches, has been assembled. The excitation branch utilizes a mercury arc lamp equipped with a 365-nm narrow-linewidth excitation filter, a beam homogenizer, and a mechanical chopper. The detection branch employs a set of bandpass filters with the central wavelength of spectral transparency of λ = 400, 450, 500, and 550 nm, and a digital camera. The setup has been used to study three samples of freshly excised BCC. PCA and LDA have been implemented to analyze the data of multi-spectral fluorescence imaging. The observed results of this pilot study highlight the advantages of the proposed imaging technique for skin cancer diagnosis.
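
    The PCA-plus-LDA pipeline mentioned above can be sketched in a few lines with scikit-learn; the synthetic feature matrix below (one row per pixel or region, one column per spectral band) and the class labels are placeholders, not the autofluorescence data from the study.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline

    # Synthetic stand-in: 4 spectral bands (e.g. 400/450/500/550 nm) and binary labels
    # (0 = healthy skin, 1 = BCC) loosely coupled to two of the bands.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))
    y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)

    # Dimensionality reduction with PCA followed by a linear discriminant classifier.
    clf = make_pipeline(PCA(n_components=3), LinearDiscriminantAnalysis())
    clf.fit(X, y)
    print("training accuracy:", clf.score(X, y))
    ```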

  18. A GENERALIZED NON-LINEAR METHOD FOR DISTORTION CORRECTION AND TOP-DOWN VIEW CONVERSION OF FISH EYE IMAGES

    Directory of Open Access Journals (Sweden)

    Vivek Singh Bawa

    2017-06-01

    Full Text Available Advanced driver assistance systems (ADAS) have been developed to automate and modify vehicles for safety and a better driving experience. Among all computer vision modules in ADAS, 360-degree surround view generation of the immediate surroundings of the vehicle is very important, due to applications in on-road traffic assistance, parking assistance, etc. This paper presents a novel algorithm for fast and computationally efficient transformation of input fisheye images into the required top-down view. This paper also presents a generalized framework for generating the top-down view of images captured by cameras with fish-eye lenses mounted on vehicles, irrespective of pitch or tilt angle. The proposed approach comprises two major steps, viz. correcting the fish-eye lens images to rectilinear images, and generating the top-view perspective of the corrected images. The images captured by the fish-eye lens possess barrel distortion, for which a nonlinear and non-iterative method is used. Thereafter, homography is used to obtain the top-down view of the corrected images. This paper also aims to reconstruct the vehicle's surroundings with a wider, distortion-free field of view and a camera-perspective-independent top-down view, at minimal computational cost, which is essential given the limited computing power available on vehicles.
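
    A generic sketch of the two steps described above (fisheye distortion correction followed by a homography to a top-down view) using OpenCV; the intrinsics, distortion coefficients, ground-plane point correspondences and the blank stand-in frame are all placeholder assumptions, and the paper's own non-iterative correction method is not reproduced here.

    ```python
    import cv2
    import numpy as np

    frame = np.zeros((720, 1280, 3), np.uint8)        # stand-in for a captured fisheye frame
    K = np.array([[300.0, 0.0, 640.0],
                  [0.0, 300.0, 360.0],
                  [0.0, 0.0, 1.0]])                   # placeholder camera intrinsics
    D = np.array([0.1, -0.05, 0.0, 0.0])              # placeholder fisheye distortion coefficients

    # Step 1: correct the fisheye (barrel) distortion to obtain a rectilinear image.
    rectilinear = cv2.fisheye.undistortImage(frame, K, D, Knew=K)

    # Step 2: warp to a bird's-eye view with a homography estimated from four ground-plane
    # points and their desired positions in the top-down image (placeholder coordinates).
    src = np.float32([[500, 700], [780, 700], [900, 400], [380, 400]])
    dst = np.float32([[400, 720], [880, 720], [880, 0], [400, 0]])
    H = cv2.getPerspectiveTransform(src, dst)
    top_view = cv2.warpPerspective(rectilinear, H, (1280, 720))
    ```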

  19. Online Kidney Position Verification Using Non-Contrast Radiographs on a Linear Accelerator with on Board KV X-Ray Imaging Capability

    International Nuclear Information System (INIS)

    Willis, David J.; Kron, Tomas; Hubbard, Patricia; Haworth, Annette; Wheeler, Greg; Duchesne, Gillian M.

    2009-01-01

    The kidneys are dose-limiting organs in abdominal radiotherapy. Kilovoltage (kV) radiographs can be acquired using on-board imager (OBI)-equipped linear accelerators with better soft tissue contrast and lower radiation doses than conventional portal imaging. A feasibility study was conducted to test the suitability of anterior-posterior (AP) non-contrast kV radiographs acquired at treatment time for online kidney position verification. Anthropomorphic phantoms were used to evaluate image quality and radiation dose. Institutional Review Board approval was given for a pilot study that enrolled 5 adults and 5 children. Customized digitally reconstructed radiographs (DRRs) were generated to provide a priori information on kidney shape and position. Radiotherapy treatment staff performed online evaluation of kidney visibility on OBI radiographs. Kidney dose measured in a pediatric anthropomorphic phantom was 0.1 cGy for kV imaging and 1.7 cGy for MV imaging. Kidneys were rated as well visualized in 60% of patients (90% confidence interval, 34-81%). The likelihood of visualization appears to be influenced by the relative AP separation of the abdomen and kidneys, the axial profile of the kidneys, and their relative contrast with surrounding structures. Online verification of kidney position using AP non-contrast kV radiographs on an OBI-equipped linear accelerator appears feasible for patients with suitable abdominal anatomy. Kidney position information provided is limited to 2-dimensional 'snapshots,' but this is adequate in some clinical situations and potentially advantageous in respiratory-correlated treatments. Successful clinical implementation requires customized partial DRRs, appropriate imaging parameters, and credentialing of treatment staff.

  20. In vitro metabolism of a linear furanocoumarin (8-methoxypsoralen, xanthotoxin) by mixed-function oxidases of larvae of black swallowtail butterfly and fall armyworm

    International Nuclear Information System (INIS)

    Bull, D.L.

    1986-01-01

    Studies were made of the comparative in vitro metabolism of ( 14 C)xanthotoxin and ( 14 C)aldrin by homogenate preparations of midguts and bodies (carcass minus digestive tract and head) of last-stage larvae of the black swallowtail butterfly (Papilio polyxenes Fabr.) and the fall armyworm (Spodoptera frugiperda (J. E. Smith)). The two substrates were metabolized by 10,000g supernatant microsomal preparations from both species. Evidence gained through the use of a specific inhibitor and cofactor indicated that mixed-function microsomal oxidases were major factors in the metabolism and that the specific activity of this enzyme system was considerably higher in midgut preparations from P. polyxenes than in similar preparations from S. frugiperda. Aldrin was metabolized 3-4 times faster by P. polyxenes, and xanthotoxin 6-6.5 times faster.

  1. MO-FG-BRC-02: Low-Z Switching Linear Accelerator Targets: New Options for Image Guidance and Dose Enhancement in Radiotherapy

    International Nuclear Information System (INIS)

    Robar, J.

    2016-01-01

    Experimental research in medical physics has expanded the limits of our knowledge and provided novel imaging and therapy technologies for patients around the world. However, experimental efforts are challenging due to constraints in funding, space, time and other forms of institutional support. In this joint ESTRO-AAPM symposium, four exciting experimental projects from four different countries are highlighted. Each project is focused on a different aspect of radiation therapy. From the USA, we will hear about a new linear accelerator concept for more compact and efficient therapy devices. From Canada, we will learn about novel linear accelerator target design and the implications for imaging and therapy. From France, we will discover a mature translational effort to incorporate theranostic nanoparticles in MR-guided radiation therapy. From Germany, we will find out about a novel in-treatment imaging modality for particle therapy. These examples of high impact, experimental medical physics research are representative of the diversity of such efforts that are on-going around the globe. J. Robar: Research is supported through collaboration with Varian Medical Systems and Brainlab AG. D. Westerly: This work is supported by the Department of Radiation Oncology at the University of Colorado School of Medicine. COI: NONE. K. Parodi: Part of the presented work is supported by the DFG (German Research Foundation) Cluster of Excellence MAP (Munich-Centre for Advanced Photonics) and has been carried out in collaboration with IBA.

  2. The Utility of Digital Linear Tomosynthesis Imaging of Total Hip Joint Arthroplasty with Suspicion of Loosening: A Prospective Study in 40 Patients

    Science.gov (United States)

    Göthlin, Jan H.

    2013-01-01

    Aim. The clinical utility of digital linear tomosynthesis in musculoskeletal applications has been validated in only a few reports. Technical performance and utility in hip prosthesis imaging have been discussed in technical reports, but no clinical evaluation has been reported. The purpose of the current study was to assess the added clinical utility of digital linear tomosynthesis compared to radiography in loosening of total hip joint arthroplasty. Materials and Methods. In a prospective study, radiography and digital tomosynthesis were performed in 40 consecutive patients with total hip arthroplasty referred for suspect prosthesis loosening. Tomosynthesis images were compared to anterior-posterior (AP) and cross-table lateral radiographs regarding demarcation and extent of demineralization and osteolysis. Further noted were skeletal fractures, cement fractures, fragmentation, and artifacts interfering with the diagnosis. Results. Tomosynthesis was superior to radiography with sharper delineation of demineralization and osteolysis in the AP projection. A limitation was the inability to generate lateral tomosynthesis images, with inferior assessment of the area anterior and posterior to the acetabular cup compared to cross-table radiographs. Artifacts interfering with diagnosis were found in one hip. Conclusion. Tomosynthesis improved evaluation of total hip arthroplasty in the AP projection but was limited by the lack of lateral projections. PMID:24078921

  3. The Utility of Digital Linear Tomosynthesis Imaging of Total Hip Joint Arthroplasty with Suspicion of Loosening: A Prospective Study in 40 Patients

    Directory of Open Access Journals (Sweden)

    Jan H. Göthlin

    2013-01-01

    Full Text Available Aim. The clinical utility of digital linear tomosynthesis in musculoskeletal applications has been validated in only a few reports. Technical performance and utility in hip prosthesis imaging have been discussed in technical reports, but no clinical evaluation has been reported. The purpose of the current study was to assess the added clinical utility of digital linear tomosynthesis compared to radiography in loosening of total hip joint arthroplasty. Materials and Methods. In a prospective study, radiography and digital tomosynthesis were performed in 40 consecutive patients with total hip arthroplasty referred for suspect prosthesis loosening. Tomosynthesis images were compared to anterior-posterior (AP) and cross-table lateral radiographs regarding demarcation and extent of demineralization and osteolysis. Further noted were skeletal fractures, cement fractures, fragmentation, and artifacts interfering with the diagnosis. Results. Tomosynthesis was superior to radiography with sharper delineation of demineralization and osteolysis in the AP projection. A limitation was the inability to generate lateral tomosynthesis images, with inferior assessment of the area anterior and posterior to the acetabular cup compared to cross-table radiographs. Artifacts interfering with diagnosis were found in one hip. Conclusion. Tomosynthesis improved evaluation of total hip arthroplasty in the AP projection but was limited by the lack of lateral projections.

  4. MO-FG-BRC-02: Low-Z Switching Linear Accelerator Targets: New Options for Image Guidance and Dose Enhancement in Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Robar, J. [Capital District Health Authority (Canada)

    2016-06-15

    Experimental research in medical physics has expanded the limits of our knowledge and provided novel imaging and therapy technologies for patients around the world. However, experimental efforts are challenging due to constraints in funding, space, time and other forms of institutional support. In this joint ESTRO-AAPM symposium, four exciting experimental projects from four different countries are highlighted. Each project is focused on a different aspect of radiation therapy. From the USA, we will hear about a new linear accelerator concept for more compact and efficient therapy devices. From Canada, we will learn about novel linear accelerator target design and the implications for imaging and therapy. From France, we will discover a mature translational effort to incorporate theranostic nanoparticles in MR-guided radiation therapy. From Germany, we will find out about a novel in-treatment imaging modality for particle therapy. These examples of high impact, experimental medical physics research are representative of the diversity of such efforts that are on-going around the globe. J. Robar: Research is supported through collaboration with Varian Medical Systems and Brainlab AG. D. Westerly: This work is supported by the Department of Radiation Oncology at the University of Colorado School of Medicine. COI: NONE. K. Parodi: Part of the presented work is supported by the DFG (German Research Foundation) Cluster of Excellence MAP (Munich-Centre for Advanced Photonics) and has been carried out in collaboration with IBA.

  5. Advanced femtosecond lasers enable new developments in non-linear imaging and functional studies in neuroscience, biology and medical applications (Conference Presentation)

    Science.gov (United States)

    Arrigoni, Marco; McCoy, Darryl

    2016-03-01

    In the last few years, multiphoton excitation microscopy has evolved from a tool for imaging cellular structures in living animals deeper than other high-resolution techniques into an instrument for monitoring functionality and even stimulating or inhibiting inter-cellular signalling. This paradigm shift has been enabled primarily by the development of genetically encoded probes such as genetically encoded Ca indicators (GECIs) and opsins for optogenetic inhibition and stimulation. These developments will hopefully enable an understanding of how local networks of hundreds or thousands of neurons operate in response to actual tasks or induced stimuli. Imaging, monitoring signals and activating neurons, all on a millisecond time scale, requires new laser tools providing a combination of wavelengths, higher powers and operating regimes different from the ones traditionally used for classic multiphoton imaging. The other key development in multiphoton techniques relates to potential diagnostic and clinical applications, where non-linear imaging could provide an all-optical, marker-free replacement for H&E techniques and even intra-operative guidance for procedures such as cancer surgery. These developments will eventually drive the development of specialized laser sources where compact size, ease of use, beam delivery and cost are primary concerns. In this talk we will discuss recent laser product developments targeting the various applications of multiphoton imaging, as fiber lasers and other new types of lasers gradually gain popularity and their own space, side by side with or as an alternative to conventional titanium-sapphire femtosecond lasers.

  6. Attenuated total internal reflection infrared microspectroscopic imaging using a large-radius germanium internal reflection element and a linear array detector.

    Science.gov (United States)

    Patterson, Brian M; Havrilla, George J

    2006-11-01

    The number of techniques and instruments available for Fourier transform infrared (FT-IR) microspectroscopic imaging has grown significantly over the past few years. Attenuated total internal reflectance (ATR) FT-IR microspectroscopy reduces sample preparation time and has simplified the analysis of many difficult samples. FT-IR imaging has become a powerful analytical tool using either a focal plane array or a linear array detector, especially when coupled with a chemometric analysis package. The field of view of the ATR-IR microspectroscopic imaging area can be greatly increased from 300 x 300 microm to 2500 x 2500 microm using a larger internal reflection element of 12.5 mm radius instead of the typical 1.5 mm radius. This gives an area increase of 70x before aberrant effects become too great. Parameters evaluated include the change in penetration depth as a function of beam displacement, measurements of the active area, magnification factor, and change in spatial resolution over the imaging area. Drawbacks such as large file size will also be discussed. This technique has been successfully applied to the FT-IR imaging of polydimethylsiloxane foam cross-sections, latent human fingerprints, and a model inorganic mixture, which demonstrates the usefulness of the method for pharmaceuticals.

  7. New Satellite Estimates of Mixed-Phase Cloud Properties: A Synergistic Approach for Application to Global Satellite Imager Data

    Science.gov (United States)

    Smith, W. L., Jr.; Spangenberg, D.; Fleeger, C.; Sun-Mack, S.; Chen, Y.; Minnis, P.

    2016-12-01

    Determining accurate cloud properties horizontally and vertically over a full range of time and space scales is currently next to impossible using data from either active or passive remote sensors or from modeling systems. Passive satellite imagers provide horizontal and temporal resolution of clouds, but little direct information on vertical structure. Active sensors provide vertical resolution but limited spatial and temporal coverage. Cloud models embedded in NWP can produce realistic clouds but often not at the right time or location. Thus, empirical techniques that integrate information from multiple observing and modeling systems are needed to more accurately characterize clouds and their impacts. Such a strategy is employed here in a new cloud water content profiling technique developed for application to satellite imager cloud retrievals based on VIS, IR and NIR radiances. Parameterizations are developed to relate imager retrievals of cloud top phase, optical depth, effective radius and temperature to ice and liquid water content profiles. The vertical structure information contained in the parameterizations is characterized climatologically from cloud model analyses, aircraft observations, ground-based remote sensing data, and from CloudSat and CALIPSO. Thus, realistic cloud-type dependent vertical structure information (including guidance on cloud phase partitioning) circumvents poor assumptions regarding vertical homogeneity that plague current passive satellite retrievals. This paper addresses mixed phase cloud conditions for clouds with glaciated tops including those associated with convection and mid-latitude storm systems. Novel outcomes of our approach include (1) simultaneous retrievals of ice and liquid water content and path, which are validated with active sensor, microwave and in-situ data, and yield improved global cloud climatologies, and (2) new estimates of super-cooled LWC, which are demonstrated in aviation safety applications and

  8. Topics in computational linear optimization

    DEFF Research Database (Denmark)

    Hultberg, Tim Helge

    2000-01-01

    Linear optimization has been an active area of research ever since the pioneering work of G. Dantzig more than 50 years ago. This research has produced a long sequence of practical as well as theoretical improvements of the solution techniques available for solving linear optimization problems...... of high quality solvers and the use of algebraic modelling systems to handle the communication between the modeller and the solver. This dissertation features four topics in computational linear optimization: A) automatic reformulation of mixed 0/1 linear programs, B) direct solution of sparse unsymmetric...... systems of linear equations, C) reduction of linear programs and D) integration of algebraic modelling of linear optimization problems in C++. Each of these topics is treated in a separate paper included in this dissertation. The efficiency of solving mixed 0-1 linear programs by linear programming based...
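
    A minimal, self-contained example of the kind of mixed 0/1 linear program treated in topic A), using SciPy's milp solver (available from SciPy 1.9 onwards); the objective and constraint numbers are arbitrary and unrelated to the dissertation.

    ```python
    import numpy as np
    from scipy.optimize import Bounds, LinearConstraint, milp

    # Maximize 5*x0 + 4*x1 + 3*x2 subject to 2*x0 + 3*x1 + x2 <= 5,
    # with x0, x1 binary and x2 continuous in [0, 2].
    c = -np.array([5.0, 4.0, 3.0])                    # milp minimizes, so negate the objective
    constraints = LinearConstraint([[2.0, 3.0, 1.0]], -np.inf, 5.0)
    integrality = np.array([1, 1, 0])                 # 1 = integer-valued, 0 = continuous
    bounds = Bounds([0, 0, 0], [1, 1, 2])

    res = milp(c=c, constraints=constraints, integrality=integrality, bounds=bounds)
    print("solution:", res.x, "objective:", -res.fun)  # expected optimum: x = (1, 0, 2), value 11
    ```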

  9. European mixed forests

    DEFF Research Database (Denmark)

    Bravo-Oviedo, Andres; Pretzsch, Hans; Ammer, Christian

    2014-01-01

    Aim of study: We aim at (i) developing a reference definition of mixed forests in order to harmonize comparative research in mixed forests and (ii) review the research perspectives in mixed forests. Area of study: The definition is developed in Europe but can be tested worldwide. Material...... and Methods: Review of existent definitions of mixed forests based and literature review encompassing dynamics, management and economic valuation of mixed forests. Main results: A mixed forest is defined as a forest unit, excluding linear formations, where at least two tree species coexist at any...... density in mixed forests, (iii) conversion of monocultures to mixed-species forest and (iv) economic valuation of ecosystem services provided by mixed forests. Research highlights: The definition is considered a high-level one which encompasses previous attempts to define mixed forests. Current fields...

  10. Spatio-temporal encoding using narrow-band linear frequency modulated signals in synthetic aperture ultrasound imaging

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jensen, Jørgen Arendt

    2005-01-01

    In this paper a method for spatio-temporal encoding is presented for synthetic transmit aperture ultrasound imaging (STA). The purpose is to excite several transmitters at the same time in order to transmit more acoustic energy in every single transmission. When increasing the transmitted acousti...

  11. Linear algebra

    CERN Document Server

    Shilov, Georgi E

    1977-01-01

    Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.

  12. Evaluating the effectiveness of mixed-integer linear programming for day-ahead hydro-thermal self-scheduling considering price uncertainty and forced outage rate

    International Nuclear Information System (INIS)

    Esmaeily, Ali; Ahmadi, Abdollah; Raeisi, Fatima; Ahmadi, Mohammad Reza; Esmaeel Nezhad, Ali; Janghorbani, Mohammadreza

    2017-01-01

    A new optimization framework based on a MILP model is introduced in the paper for the problem of stochastic self-scheduling of hydrothermal units, known as the HTSS problem, implemented in a joint energy and reserve electricity market with a day-ahead mechanism. The proposed MILP framework includes some practical constraints, such as the cost due to the valve-loading effect, the limit due to DRR and also multi-POZs, which have been less investigated in electricity market models. For greater accuracy, multiple performance curves are also used to model the hydro generating units. The problem is formulated using a model based on a stochastic optimization technique, with the objective of maximizing the expected profit using the MILP technique. The suggested stochastic self-scheduling model employs the price forecast error in order to take into account the uncertainty due to price. In addition, LMCS is combined with a roulette wheel mechanism to generate the scenarios for the non-spinning reserve price, the spinning reserve price and the energy price at each hour of the scheduling horizon. Finally, the IEEE 118-bus power system is used to demonstrate the performance and efficiency of the suggested technique. - Highlights: • Characterizing the uncertainties of price and FOR of units. • Replacing the fixed ramping rate constraints with dynamic ones. • Proposing a linearized model for the valve-point effects of thermal units. • Taking into consideration the multi-POZs relating to the thermal units. • Taking into consideration the multi-performance curves of hydroelectric units.

  13. Why care about linear hair growth rates (LHGR)? a study using in vivo imaging and computer assisted image analysis after manual processing (CAIAMP) in unaffected male controls and men with male pattern hair loss (MPHL).

    Science.gov (United States)

    Van Neste, Dominique

    2014-01-01

    The words "hair growth" frequently encompass many aspects other than just growth. Report on a validation method for precise non-invasive measurement of thickness together with linear hair growth rates of individual hair fibres. To verify the possible correlation between thickness and linear growth rate of scalp hair in male pattern hair loss as compared with healthy male controls. To document the process of validation of hair growth measurement from in vivo image capturing and manual processing, followed by computer assisted image analysis. We analysed 179 paired images obtained with the contrast-enhanced-phototrichogram method with exogen collection (CE-PTG-EC) in 13 healthy male controls and in 87 men with male pattern hair loss (MPHL). There was a global positive correlation between thickness and growth rate (ANOVA; phairs from controls. Finally, the growth rate recorded in the more severe patterns was significantly (ANOVA; P ≤ 0.001) reduced compared with equally thick hair from less severely affected MPHL or controls subjects. Reduced growth rate, together with thinning and shortening of the anagen phase duration in MPHL might contribute together to the global impression of decreased hair volume on the top of the head. Amongst other structural and functional parameters characterizing hair follicle regression, linear hair growth rate warrants further investigation, as it may be relevant in terms of self-perception of hair coverage, quantitative diagnosis and prognostic factor of the therapeutic response.

  14. Beam generation and planar imaging at energies below 2.40 MeV with carbon and aluminum linear accelerator targets.

    Science.gov (United States)

    Parsons, David; Robar, James L

    2012-07-01

    Recent work has demonstrated improvement of image quality with low-Z linear accelerator targets and energies as low as 3.5 MV. In this paper, the authors lower the incident electron beam energy between 1.90 and 2.35 MeV and assess the improvement of megavoltage planar image quality with the use of carbon and aluminum linear accelerator targets. The bending magnet shunt current was adjusted in a Varian linear accelerator to allow selection of mean electron energy between 1.90 and 2.35 MeV. Linac set points were altered to increase beam current to allow experimental imaging in a practical time frame. Electron energy was determined through comparison of measured and Monte Carlo modeled depth dose curves. Planar image CNR and spatial resolution measurements were performed to quantify the improvement of image quality. Magnitudes of improvement are explained with reference to Monte Carlo generated energy spectra. After modifications to the linac, beam current was increased by a factor greater than four and incident electron energy was determined to have an adjustable range from 1.90 MeV to 2.35 MeV. CNR of cortical bone was increased by a factor ranging from 6.2 to 7.4 and 3.7 to 4.3 for thin and thick phantoms, respectively, compared to a 6 MV therapeutic beam for both aluminum and carbon targets. Spatial resolution was degraded slightly, with a relative change of 3% and 10% at 0.20 lp∕mm and 0.40 lp∕mm, respectively, when reducing energy from 2.35 to 1.90 MV. The percentage of diagnostic x-rays for the beams examined here, ranges from 46% to 54%. It is possible to produce a large fraction of diagnostic energy x-rays by lowering the beam energy below 2.35 MV. By lowering the beam energy to 1.90 MV or 2.35 MV, CNR improves by factors ranging from 3.7 to 7.4 compared to a 6 MV therapy beam, with only a slight degradation of spatial resolution when lowering the energy from 2.35 MV to 1.90 MV.

  15. Genetic parameters of linear conformation type traits and their relationship with milk yield throughout lactation in mixed-breed dairy goats.

    Science.gov (United States)

    McLaren, A; Mucha, S; Mrode, R; Coffey, M; Conington, J

    2016-07-01

    Conformation traits are of interest to many dairy goat breeders not only as descriptive traits in their own right, but also because of their influence on production, longevity, and profitability. If these traits are to be considered for inclusion in future dairy goat breeding programs, relationships between them and production traits such as milk yield must be considered. With the increased use of regression models to estimate genetic parameters, an opportunity now exists to investigate correlations between conformation traits and milk yield throughout lactation in more detail. The aims of this study were therefore to (1) estimate genetic parameters for conformation traits in a population of crossbred dairy goats, (2) estimate correlations between all conformation traits, and (3) assess the relationship between conformation traits and milk yield throughout lactation. No information on milk composition was available. Data were collected from goats based on 2 commercial goat farms during August and September in 2013 and 2014. Ten conformation traits, relating to udder, teat, leg, and feet characteristics, were scored on a linear scale (1-9). The overall data set comprised data available for 4,229 goats, all in their first lactation. The population of goats used in the study was created using random crossings between 3 breeds: British Alpine, Saanen, and Toggenburg. In each generation, the best performing animals were selected for breeding, leading to the formation of a synthetic breed. The pedigree file used in the analyses contained sire and dam information for a total of 30,139 individuals. The models fitted relevant fixed and random effects. Heritability estimates for the conformation traits were low to moderate, ranging from 0.02 to 0.38. A range of positive and negative phenotypic and genetic correlations between the traits were observed, with the highest correlations found between udder depth and udder attachment (0.78), teat angle and teat placement (0

  16. Linear-Array Photoacoustic Imaging Using Minimum Variance-Based Delay Multiply and Sum Adaptive Beamforming Algorithm

    OpenAIRE

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2017-01-01

    In photoacoustic imaging (PA), the Delay-and-Sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, namely Delay-Multiply-and-Sum (DMAS), was introduced, which has lower sidelobes compared to DAS. To improve the resolution of DMAS, a novel beamformer is introduced using Minimum Variance (MV) adaptive beamforming combined with DMAS, so-called Minimum Variance-Based D...
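
    A minimal sketch of the plain DMAS combination step described above (not the Minimum-Variance-based variant this record proposes), assuming the channel data have already been delay-aligned for each imaging point; the random input is a stand-in for real RF data.

    ```python
    import numpy as np

    def dmas(delayed):
        """Delay-Multiply-and-Sum over channels.

        `delayed` has shape (n_channels, n_samples) and is assumed to be already
        delay-aligned; each output sample combines all distinct channel pairs.
        """
        n_channels = delayed.shape[0]
        out = np.zeros(delayed.shape[1])
        for i in range(n_channels):
            for j in range(i + 1, n_channels):
                prod = delayed[i] * delayed[j]
                # signed square root keeps the result in the original signal dimension
                out += np.sign(prod) * np.sqrt(np.abs(prod))
        return out

    rf = np.random.randn(16, 1024)     # stand-in for delay-aligned channel data
    beamformed_line = dmas(rf)
    ```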

  17. Non-linear mixed-effects modeling for photosynthetic response of Rosa hybrida L. under elevated CO2 in greenhouses - short communication

    DEFF Research Database (Denmark)

    Ozturk, I.; Ottosen, C.O.; Ritz, Christian

    2011-01-01

    Photosynthetic response to light was measured on the leaves of two cultivars of Rosa hybrida L. (Escimo and Mercedes) in the greenhouse to obtain light-response curves and their parameters. The aim was to use a model to simulate leaf photosynthetic carbon gain with respect to environmental conditions. Leaf gas exchanges were measured at 11 light intensities from 0 to 1,400 µmol/m2s, at 800 ppm CO2, 25°C, and 65 ± 5% relative humidity. In order to describe the data corresponding to different measurement dates, the non-linear mixed-effects regression analysis was used. The model successfully ... efficiency. The results suggested an acclimation response, as carbon assimilation rates and stomatal conductance at each measurement date were higher for Escimo than for Mercedes. Differences in photosynthesis rates were attributed to the adaptive capacity of the cultivars to light conditions at a specific ...
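
    As a simplified illustration of fitting a light-response curve, the sketch below fits a non-rectangular hyperbola (a common light-response form, assumed here; the record does not state the exact model) to synthetic data by non-linear least squares. The full analysis in the record used non-linear mixed-effects regression, which additionally places random effects across measurement dates.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def light_response(ppfd, phi, pmax, theta, rd):
        """Non-rectangular hyperbola light-response curve (assumed functional form)."""
        b = phi * ppfd + pmax
        return (b - np.sqrt(b**2 - 4 * theta * phi * ppfd * pmax)) / (2 * theta) - rd

    # Synthetic stand-in data: PPFD (umol/m2/s) vs. net assimilation (umol CO2/m2/s).
    ppfd = np.array([0, 50, 100, 200, 400, 600, 800, 1000, 1200, 1400], dtype=float)
    assim = light_response(ppfd, 0.06, 22.0, 0.7, 1.5) + np.random.normal(0, 0.3, ppfd.size)

    popt, _ = curve_fit(light_response, ppfd, assim, p0=[0.05, 20.0, 0.8, 1.0],
                        bounds=([0, 0, 0.01, 0], [1, 50, 1, 10]))
    print(dict(zip(["phi", "pmax", "theta", "rd"], popt)))
    ```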

  18. Technical Note: Evaluation of the systematic accuracy of a frameless, multiple image modality guided, linear accelerator based stereotactic radiosurgery system

    Energy Technology Data Exchange (ETDEWEB)

    Wen, N., E-mail: nwen1@hfhs.org; Snyder, K. C.; Qin, Y.; Li, H.; Siddiqui, M. S.; Chetty, I. J. [Department of Radiation Oncology, Henry Ford Health System, 2799 West Brand Boulevard, Detroit, Michigan 48202 (United States); Scheib, S. G.; Schmelzer, P. [Varian Medical System, Täfernstrasse 7, Dättwil AG 5405 (Switzerland)

    2016-05-15

    Purpose: To evaluate the total systematic accuracy of a frameless, image guided stereotactic radiosurgery system. Methods: The localization accuracy and intermodality difference was determined by delivering radiation to an end-to-end prototype phantom, in which the targets were localized using optical surface monitoring system (OSMS), electromagnetic beacon-based tracking (Calypso®), cone-beam CT, “snap-shot” planar x-ray imaging, and a robotic couch. Six IMRT plans with jaw tracking and a flattening filter free beam were used to study the dosimetric accuracy for intracranial and spinal stereotactic radiosurgery treatment. Results: End-to-end localization accuracy of the system evaluated with the end-to-end phantom was 0.5 ± 0.2 mm with a maximum deviation of 0.9 mm over 90 measurements (including jaw, MLC, and cone measurements for both auto and manual fusion) for single isocenter, single target treatment, 0.6 ± 0.4 mm for multitarget treatment with shared isocenter. Residual setup errors were within 0.1 mm for OSMS, and 0.3 mm for Calypso. Dosimetric evaluation based on absolute film dosimetry showed greater than 90% pass rate for all cases using a gamma criteria of 3%/1 mm. Conclusions: The authors’ experience demonstrates that the localization accuracy of the frameless image-guided system is comparable to robotic or invasive frame based radiosurgery systems.

  19. Estimation of aortic valve leaflets from 3D CT images using local shape dictionaries and linear coding

    Science.gov (United States)

    Liang, Liang; Martin, Caitlin; Wang, Qian; Sun, Wei; Duncan, James

    2016-03-01

    Aortic valve (AV) disease is a significant cause of morbidity and mortality. The preferred treatment modality for severe AV disease is surgical resection and replacement of the native valve with either a mechanical or tissue prosthetic. In order to develop effective and long-lasting treatment methods, computational analyses, e.g., structural finite element (FE) and computational fluid dynamic simulations, are very effective for studying valve biomechanics. These computational analyses are based on mesh models of the aortic valve, which are usually constructed from 3D CT images though many hours of manual annotation, and therefore an automatic valve shape reconstruction method is desired. In this paper, we present a method for estimating the aortic valve shape from 3D cardiac CT images, which is represented by triangle meshes. We propose a pipeline for aortic valve shape estimation which includes novel algorithms for building local shape dictionaries and for building landmark detectors and curve detectors using local shape dictionaries. The method is evaluated on real patient image dataset using a leave-one-out approach and achieves an average accuracy of 0.69 mm. The work will facilitate automatic patient-specific computational modeling of the aortic valve.

  20. Technical Note: Evaluation of the systematic accuracy of a frameless, multiple image modality guided, linear accelerator based stereotactic radiosurgery system

    International Nuclear Information System (INIS)

    Wen, N.; Snyder, K. C.; Qin, Y.; Li, H.; Siddiqui, M. S.; Chetty, I. J.; Scheib, S. G.; Schmelzer, P.

    2016-01-01

    Purpose: To evaluate the total systematic accuracy of a frameless, image guided stereotactic radiosurgery system. Methods: The localization accuracy and intermodality difference was determined by delivering radiation to an end-to-end prototype phantom, in which the targets were localized using optical surface monitoring system (OSMS), electromagnetic beacon-based tracking (Calypso®), cone-beam CT, “snap-shot” planar x-ray imaging, and a robotic couch. Six IMRT plans with jaw tracking and a flattening filter free beam were used to study the dosimetric accuracy for intracranial and spinal stereotactic radiosurgery treatment. Results: End-to-end localization accuracy of the system evaluated with the end-to-end phantom was 0.5 ± 0.2 mm with a maximum deviation of 0.9 mm over 90 measurements (including jaw, MLC, and cone measurements for both auto and manual fusion) for single isocenter, single target treatment, 0.6 ± 0.4 mm for multitarget treatment with shared isocenter. Residual setup errors were within 0.1 mm for OSMS, and 0.3 mm for Calypso. Dosimetric evaluation based on absolute film dosimetry showed greater than 90% pass rate for all cases using a gamma criteria of 3%/1 mm. Conclusions: The authors’ experience demonstrates that the localization accuracy of the frameless image-guided system is comparable to robotic or invasive frame based radiosurgery systems.

  1. Imaging of Caenorhabditis elegans samples and sub-cellular localization of new generation photosensitizers for photodynamic therapy, using non-linear microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Filippidis, G [Institute of Electronic Structure and Laser, Foundation of Research and Technology-Hellas, PO Box 1527, 71110 Heraklion (Greece); Kouloumentas, C [Institute of Electronic Structure and Laser, Foundation of Research and Technology-Hellas, PO Box 1527, 71110 Heraklion (Greece); Kapsokalyvas, D [Institute of Electronic Structure and Laser, Foundation of Research and Technology-Hellas, PO Box 1527, 71110 Heraklion (Greece); Voglis, G [Institute of Molecular Biology and Biotechnology, Foundation of Research and Technology, Heraklion 71110, Crete (Greece); Tavernarakis, N [Institute of Molecular Biology and Biotechnology, Foundation of Research and Technology, Heraklion 71110, Crete (Greece); Papazoglou, T G [Institute of Electronic Structure and Laser, Foundation of Research and Technology-Hellas, PO Box 1527, 71110 Heraklion (Greece)

    2005-08-07

    Two-photon excitation fluorescence (TPEF) and second-harmonic generation (SHG) are relatively new promising tools for the imaging and mapping of biological structures and processes at the microscopic level. The combination of the two image-contrast modes in a single instrument can provide unique and complementary information concerning the structure and the function of tissues and individual cells. The extended application of this novel, innovative technique by the biological community is limited due to the high price of commercial multiphoton microscopes. In this study, a compact, inexpensive and reliable setup utilizing femtosecond pulses for excitation was developed for the TPEF and SHG imaging of biological samples. Specific cell types of the nematode Caenorhabditis elegans were imaged. Detection of the endogenous structural proteins of the worm, which are responsible for observation of SHG signals, was achieved. Additionally, the binding of different photosensitizers in the HL-60 cell line was investigated, using non-linear microscopy. The sub-cellular localization of photosensitizers of a new generation, very promising for photodynamic therapy (PDT) (Hypericum perforatum L. extracts) was achieved. The sub-cellular localization of these novel photosensitizers was linked with their photodynamic action during PDT, and the possible mechanisms for cell killing have been elucidated.

  2. Imaging of urokinase-type plasminogen activator receptor expression using a 64Cu-labeled linear peptide antagonist by microPET

    DEFF Research Database (Denmark)

    Li, Zi-Bo; Niu, Gang; Wang, Hui

    2008-01-01

    for positron emission tomography (PET) imaging. A linear, high-affinity uPAR-binding peptide antagonist AE105 was conjugated with 1,4,7,10-tetraazadodecane-N,N',N'',N'''-tetraacetic acid (DOTA) and labeled with (64)Cu for microPET imaging of mice bearing U87MG human glioblastoma (uPAR positive) and MDA-MB-435 human breast cancer (uPAR negative). RESULTS: Surface plasmon resonance measurements show that AE105 with DOTA conjugated at the alpha-amino group (DOTA-AE105) has high affinity toward uPAR. microPET imaging reveals a rapid and high accumulation of (64)Cu-DOTA-AE105 in uPAR-positive U87MG tumors (10.8 +/- 1.5%ID/g at 4.5 hours, n = 3) but not in uPAR-negative MDA-MB-435 tumors (1.2 +/- 0.6%ID/g at 4.5 hours, n = 3). Specificity of this peptide-based imaging of uPAR was validated by further control experiments. First, a nonbinding variant of AE105 carrying a single amino acid replacement (Trp

  3. SU-F-P-36: Automation of Linear Accelerator Star Shot Measurement with Advanced XML Scripting and Electronic Portal Imaging Device

    International Nuclear Information System (INIS)

    Nguyen, N; Knutson, N; Schmidt, M; Price, M

    2016-01-01

    Purpose: To verify a method used to automatically acquire jaw, MLC, collimator and couch star shots for a Varian TrueBeam linear accelerator utilizing Developer Mode and an Electronic Portal Imaging Device (EPID). Methods: An XML script was written to automate motion of the jaws, MLC, collimator and couch in TrueBeam Developer Mode (TBDM) to acquire star shot measurements. The XML script also dictates MV imaging parameters to facilitate automatic acquisition and recording of integrated EPID images. Since couch star shot measurements cannot be acquired using a combination of EPID and jaw/MLC collimation alone due to a fixed imager geometry, a method utilizing a 5mm wide steel ruler placed on the table and centered within a 15×15cm2 open field to produce a surrogate of the narrow field aperture was investigated. Four individual star shot measurements (X jaw, Y jaw, MLC and couch) were obtained using our proposed as well as traditional film-based method. Integrated EPID images and scanned measurement films were analyzed and compared. Results: Star shot (X jaw, Y jaw, MLC and couch) measurements were obtained in a single 5 minute delivery using the TBDM XML script method compared to 60 minutes for equivalent traditional film measurements. Analysis of the images and films demonstrated comparable isocentricity results, agreeing within 0.3mm of each other. Conclusion: The presented automatic approach of acquiring star shot measurements using TBDM and EPID has proven to be more efficient than the traditional film approach with equivalent results.

  4. Distributed transition-edge sensors for linearized position response in a phonon-mediated X-ray imaging spectrometer

    Science.gov (United States)

    Cabrera, Blas; Brink, Paul L.; Leman, Steven W.; Castle, Joseph P.; Tomada, Astrid; Young, Betty A.; Martínez-Galarce, Dennis S.; Stern, Robert A.; Deiker, Steve; Irwin, Kent D.

    2004-03-01

    For future solar X-ray satellite missions, we are developing a phonon-mediated macro-pixel composed of a Ge crystal absorber with four superconducting transition-edge sensors (TES) distributed on the backside. The X-rays are absorbed on the opposite side and the energy is converted into phonons, which are absorbed into the four TES sensors. By connecting together parallel elements into four channels, fractional total energy absorbed between two of the sensors provides x-position information and the other two provide y-position information. We determine the optimal distribution for the TES sub-elements to obtain linear position information while minimizing the degradation of energy resolution.
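
    The fractional-energy position estimate described above can be written compactly; the simple linear scaling by a nominal half-width below is an illustrative assumption, since a real detector needs a measured mapping from energy partition to position.

    ```python
    def position_from_energies(e_left, e_right, e_bottom, e_top, half_width=1.0):
        """Estimate (x, y) from the energies collected by four distributed TES channels.

        Opposing channel pairs give the fractional energy difference along each axis;
        `half_width` is a hypothetical scale factor, not a calibrated detector constant.
        """
        x = half_width * (e_right - e_left) / (e_right + e_left)
        y = half_width * (e_top - e_bottom) / (e_top + e_bottom)
        return x, y

    # Example: more energy on the right-hand channels -> positive x, centred in y.
    print(position_from_energies(0.8, 1.2, 1.0, 1.0))
    ```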

  5. Linear Dispersion Relation and Depth Sensitivity to Swell Parameters: Application to Synthetic Aperture Radar Imaging and Bathymetry

    Directory of Open Access Journals (Sweden)

    Valentina Boccia

    2015-01-01

    Full Text Available Long gravity waves, or swell, dominating the sea surface are known to be very useful for estimating seabed morphology in coastal areas. The paper reviews the main phenomena related to swell wave propagation that allow seabed morphology to be sensed. The linear dispersion relation is analysed and an error budget model is developed to assess the achievable depth accuracy when Synthetic Aperture Radar (SAR) data are used. The relevant issues and potentials of swell-based bathymetry by SAR are identified and discussed. This technique is of particular interest for characteristic regions of the Mediterranean Sea, such as gulfs and relatively closed areas, where traditional SAR-based bathymetric techniques, relying on strong tidal currents, are of limited practical utility.
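
    The linear dispersion relation for surface gravity waves, ω² = g·k·tanh(k·h), is what links the swell wavelength and period observable in SAR images to the local depth h. The sketch below inverts it for h; the wavelength and period values are purely illustrative.

    ```python
    import numpy as np

    G = 9.81  # gravitational acceleration, m/s^2

    def depth_from_swell(wavelength, period):
        """Invert the linear dispersion relation omega^2 = g*k*tanh(k*h) for the depth h."""
        k = 2 * np.pi / wavelength
        omega = 2 * np.pi / period
        ratio = omega**2 / (G * k)       # equals tanh(k*h); must be < 1 for a finite depth
        if ratio >= 1.0:
            return np.inf                # deep-water limit: depth no longer constrains the swell
        return np.arctanh(ratio) / k

    # Illustrative numbers: a 120 m swell wavelength with a 10 s period.
    print(f"estimated depth: {depth_from_swell(120.0, 10.0):.1f} m")
    ```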

  6. #fitspo on Instagram: A mixed-methods approach using Netlytic and photo analysis, uncovering the online discussion and author/image characteristics.

    Science.gov (United States)

    Santarossa, Sara; Coyne, Paige; Lisinski, Carly; Woodruff, Sarah J

    2016-11-01

    The #fitspo 'tag' is a recent trend on Instagram, which is used on posts to motivate others towards a healthy lifestyle through exercise/eating habits. This study used a mixed-methods approach consisting of text and network analysis via the Netlytic program (N = 10,000 #fitspo posts) and content analysis of #fitspo images (N = 122) to examine author and image characteristics. Results suggest that #fitspo posts may motivate through appearance-mediated themes, as the largest content categories (based on the associated text) were 'feeling good' and 'appearance'. Furthermore, #fitspo posts may create peer influence/support, as personal (as opposed to non-personal) accounts were associated with higher popularity of images (i.e. number of likes/followers). Finally, most images contained posed individuals with some degree of objectification.

  7. A 1,000 Frames/s Programmable Vision Chip with Variable Resolution and Row-Pixel-Mixed Parallel Image Processors

    Directory of Open Access Journals (Sweden)

    Nanjian Wu

    2009-07-01

    A programmable vision chip with variable resolution and row-pixel-mixed parallel image processors is presented. The chip consists of a CMOS sensor array with row-parallel 6-bit algorithmic ADCs, row-parallel gray-scale image processors, a pixel-parallel SIMD Processing Element (PE) array, and an instruction controller. The resolution of the image in the chip is variable: high resolution for a focused area and low resolution for the general view. It implements gray-scale and binary mathematical morphology algorithms in series to carry out low-level and mid-level image processing and sends out features of the image for various applications. It can perform image processing at over 1,000 frames/s (fps). A prototype chip with 64 × 64 pixel resolution and 6-bit gray scale was fabricated in a 0.18 μm standard CMOS process. The chip area is 1.5 mm × 3.5 mm. Each pixel measures 9.5 μm × 9.5 μm and each processing element 23 μm × 29 μm. The experimental results demonstrate that the chip can perform low-level and mid-level image processing and can be applied in real-time vision applications, such as high-speed target tracking.
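    For readers unfamiliar with the binary mathematical morphology operations mentioned above, the sketch below shows 3×3 erosion and dilation on a toy binary image. It is a plain NumPy illustration of the operations themselves, not of the chip's SIMD instruction set or processing-element architecture.

```python
import numpy as np

def binary_erode(img, size=3):
    """3x3 binary erosion: a pixel stays 1 only if its whole neighbourhood is 1."""
    pad = size // 2
    padded = np.pad(img, pad, constant_values=0)
    out = np.ones_like(img)
    for dy in range(size):
        for dx in range(size):
            out &= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def binary_dilate(img, size=3):
    """3x3 binary dilation: a pixel becomes 1 if any neighbour is 1."""
    pad = size // 2
    padded = np.pad(img, pad, constant_values=0)
    out = np.zeros_like(img)
    for dy in range(size):
        for dx in range(size):
            out |= padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

# Toy 8x8 binary image containing a 4x4 blob
img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 1
print(binary_erode(img).sum(), binary_dilate(img).sum())  # 4 36
```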

  8. Linear iterative near-field phase retrieval (LIPR) for dual-energy x-ray imaging and material discrimination.

    Science.gov (United States)

    Li, Heyang Thomas; Kingston, Andrew M; Myers, Glenn R; Beeching, Levi; Sheppard, Adrian P

    2018-01-01

    Near-field x-ray refraction (phase) contrast is unavoidable in many lab-based micro-CT imaging systems. Quantitative analysis of x-ray refraction (a.k.a. phase retrieval) is in general an under-constrained problem. Regularizing assumptions may not hold true for interesting samples; popular single-material methods are inappropriate for heterogeneous samples, leading to undesired blurring and/or over-sharpening. In this paper, we constrain and solve the phase-retrieval problem for heterogeneous objects, using the Alvarez-Macovski model for x-ray attenuation. Under this assumption we neglect Rayleigh scattering and pair production, considering only Compton scattering and the photoelectric effect. We formulate and test the resulting method to extract the material properties of density and atomic number from single-distance, dual-energy imaging of both strongly and weakly attenuating multi-material objects with polychromatic x-ray spectra. Simulation and experimental data are used to compare our proposed method with the Paganin single-material phase-retrieval algorithm, and an innovative interpretation of the data-constrained modeling phase-retrieval technique.
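    The Alvarez-Macovski model referred to above writes the attenuation coefficient as a photoelectric term plus a Compton (Klein-Nishina) term, mu(E) ≈ a_pe E^-3 + a_C f_KN(E), so measurements at two energies constrain the two coefficients. The sketch below solves that 2×2 system at two assumed monochromatic effective energies; it is a simplified illustration (the attenuation values and energies are made up) and does not include the polychromatic spectra or the near-field phase retrieval handled in the paper.

```python
import numpy as np

def klein_nishina(E_keV):
    """Klein-Nishina energy dependence of the total Compton cross-section (per electron)."""
    a = E_keV / 511.0  # photon energy in units of the electron rest energy
    t1 = (1 + a) / a ** 2 * (2 * (1 + a) / (1 + 2 * a) - np.log(1 + 2 * a) / a)
    t2 = np.log(1 + 2 * a) / (2 * a)
    t3 = -(1 + 3 * a) / (1 + 2 * a) ** 2
    return t1 + t2 + t3

def alvarez_macovski_coefficients(mu_low, mu_high, E_low=40.0, E_high=80.0):
    """Solve mu(E) = a_pe * E**-3 + a_c * f_KN(E) at two (assumed monochromatic)
    effective energies for the photoelectric and Compton coefficients."""
    A = np.array([[E_low ** -3, klein_nishina(E_low)],
                  [E_high ** -3, klein_nishina(E_high)]])
    return np.linalg.solve(A, np.array([mu_low, mu_high]))

# Illustrative attenuation values at 40 and 80 keV (not measured data)
a_pe, a_c = alvarez_macovski_coefficients(mu_low=0.6, mu_high=0.25)
print(a_pe, a_c)
```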

  9. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm.

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, namely delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvements in full-width-at-half-maximum of about 96%, 94%, and 45% and in signal-to-noise ratio of about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at the depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with other beamformers. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
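    For reference, the sketch below contrasts plain DAS with the standard DMAS combining rule (signed square root of pairwise products of the time-aligned channel samples) for a single image point. It is a minimal per-pixel illustration only and does not include the MV weighting or the MVB-DMAS expansion proposed in the paper.

```python
import numpy as np

def das(delayed):
    """Delay-and-sum: simple sum of the already time-aligned channel samples."""
    return np.sum(delayed)

def dmas(delayed):
    """Delay-multiply-and-sum: signed square root of pairwise products of the
    time-aligned samples, summed over all distinct channel pairs."""
    out = 0.0
    m = len(delayed)
    for i in range(m - 1):
        for j in range(i + 1, m):
            prod = delayed[i] * delayed[j]
            out += np.sign(prod) * np.sqrt(abs(prod))
    return out

# Toy example: 8 time-aligned channel samples for one image point
rng = np.random.default_rng(0)
samples = 1.0 + 0.2 * rng.standard_normal(8)   # coherent signal plus noise
print(das(samples), dmas(samples))
```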

  10. Linear-array photoacoustic imaging using minimum variance-based delay multiply and sum adaptive beamforming algorithm

    Science.gov (United States)

    Mozaffarzadeh, Moein; Mahloojifar, Ali; Orooji, Mahdi; Kratkiewicz, Karl; Adabi, Saba; Nasiriavanaki, Mohammadreza

    2018-02-01

    In photoacoustic imaging, the delay-and-sum (DAS) beamformer is a common beamforming algorithm with a simple implementation. However, it results in poor resolution and high sidelobes. To address these challenges, a new algorithm, namely delay-multiply-and-sum (DMAS), was introduced, having lower sidelobes compared to DAS. To improve the resolution of DMAS, a beamformer is introduced using minimum variance (MV) adaptive beamforming combined with DMAS, so-called minimum variance-based DMAS (MVB-DMAS). It is shown that expanding the DMAS equation results in multiple terms representing a DAS algebra. It is proposed to use the MV adaptive beamformer instead of the existing DAS. MVB-DMAS is evaluated numerically and experimentally. In particular, at the depth of 45 mm, MVB-DMAS results in about 31, 18, and 8 dB sidelobe reduction compared to DAS, MV, and DMAS, respectively. The quantitative results of the simulations show that MVB-DMAS leads to improvements in full-width-at-half-maximum of about 96%, 94%, and 45% and in signal-to-noise ratio of about 89%, 15%, and 35% compared to DAS, DMAS, and MV, respectively. In particular, at the depth of 33 mm in the experimental images, MVB-DMAS results in about 20 dB sidelobe reduction in comparison with other beamformers.

  11. [Comparison of film-screen combinations with contrast detail diagram and interactive image analysis. 2: Linear assessment of grey scale ranges with interactive image analysis].

    Science.gov (United States)

    Stamm, G; Eichbaum, G; Hagemann, G

    1997-09-01

    The following three screen-film combinations were compared: a) a combination of anticrossover film and UV-light emitting screens, b) a combination of blue-light emitting screens and film, and c) a conventional green-fluorescing screen-film combination. Radiographs of a specially designed plexiglass phantom (0.2 x 0.2 x 0.12 m3) with bar patterns of lead, plaster, and air were obtained using the following parameters: 12-pulse generator, 0.6 mm focus size, 4.7 mm aluminum pre-filter, a grid with 40 lines/cm (12:1), and a focus-detector distance of 1.15 m. Image analysis was performed using an IBAS system and a Zeiss Kontron computer. Display conditions were the following: display distance 0.12 m, a Vario film objective 35/70 (Zeiss), a video camera tube with a PbO photocathode, 625 lines (Siemens Heimann), and an IBAS image matrix of 512 x 512 pixels with a resolution of 7 lines/mm; the projected matrix area was 5000 microns2. Grey scale ranges were measured on a line perpendicular to the grouped bar patterns. The difference between the maximum and minimum density values served as the signal. The spatial resolution of the detector system was determined as the point at which the signal value was three times higher than the standard deviation of the means of multiple density measurements. The results showed considerable advantages of the two new screen-film combinations compared to the conventional screen-film combination. This result contradicted the findings of the purely visual assessment of thresholds (Part I), which had found no differences. The authors concluded that (automatic) interactive image analysis algorithms serve as an objective measure and are specifically advantageous when small differences in image quality are to be evaluated.
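    A minimal sketch of the detectability criterion described above, where the signal is the maximum-minus-minimum density along a profile across a bar-pattern group and the group counts as resolved when the signal exceeds three times the noise standard deviation, is given below; the profile values are purely illustrative.

```python
import numpy as np

def profile_signal(profile):
    """Signal for one bar-pattern group: max minus min density along the profile line."""
    return float(np.max(profile) - np.min(profile))

def resolved(profile, noise_std, factor=3.0):
    """Detectability criterion used here: the group counts as resolved if the
    signal exceeds `factor` times the noise standard deviation."""
    return profile_signal(profile) > factor * noise_std

# Toy profile: a weak bar pattern riding on noise (illustrative values only)
x = np.linspace(0, 4 * np.pi, 200)
profile = 0.05 * (np.sin(x) > 0) + np.random.default_rng(1).normal(0, 0.01, 200)
print(resolved(profile, noise_std=0.01))
```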

  12. Hybrid Spectral Unmixing: Using Artificial Neural Networks for Linear/Non-Linear Switching

    Directory of Open Access Journals (Sweden)

    Asmau M. Ahmed

    2017-07-01

    Spectral unmixing is a key process in identifying the spectral signatures of materials and quantifying their spatial distribution over an image. The linear model is expected to provide acceptable results when two assumptions are satisfied: (1) the mixing process occurs at the macroscopic level and (2) photons interact with a single material before reaching the sensor. However, these assumptions do not always hold and more complex nonlinear models are required. This study proposes a new hybrid method for switching between linear and nonlinear spectral unmixing of hyperspectral data based on artificial neural networks. The neural network was trained with parameters computed within a window around the pixel under consideration. These parameters represent the diversity of the neighboring pixels and are based on the Spectral Angular Distance, covariance, and a non-linearity parameter. The endmembers were extracted using Vertex Component Analysis while the abundances were estimated using the method identified by the neural network (Vertex Component Analysis, Fully Constrained Least Squares, Polynomial Post-Nonlinear Mixing Model or Generalized Bilinear Model). Results show that the hybrid method performs better than each of the individual techniques, with high overall accuracy, while the abundance estimation error is significantly lower than that obtained using the individual methods. Experiments on both a synthetic dataset and real hyperspectral images demonstrated that the proposed hybrid switching method is efficient for solving spectral unmixing of hyperspectral images as compared to the individual algorithms.
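    Under the linear model referred to above, a pixel spectrum is y ≈ Ma + n with non-negative abundances a that sum to one. The sketch below unmixes a single pixel using non-negative least squares followed by renormalization, as a crude stand-in for the sum-to-one constraint of FCLS; the endmember matrix and noise level are synthetic and purely illustrative.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(endmembers, pixel):
    """Approximate linear unmixing of one pixel.

    endmembers: (bands, n_endmembers) matrix of endmember spectra (e.g. from VCA)
    pixel:      (bands,) observed spectrum
    Non-negativity is enforced by NNLS; the abundances are then renormalized to
    sum to one as a crude stand-in for the full sum-to-one constraint of FCLS.
    """
    abundances, _ = nnls(endmembers, pixel)
    s = abundances.sum()
    return abundances / s if s > 0 else abundances

# Toy example with 3 synthetic endmembers over 6 bands
rng = np.random.default_rng(0)
M = rng.uniform(0.1, 1.0, size=(6, 3))
true_a = np.array([0.6, 0.3, 0.1])
y = M @ true_a + rng.normal(0, 0.005, size=6)   # linear mixture plus noise
print(unmix_pixel(M, y))
```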

  13. Low-Income, African American and American Indian Children's Viewpoints on Body Image Assessment Tools and Body Satisfaction: A Mixed Methods Study.

    Science.gov (United States)

    Heidelberger, Lindsay; Smith, Chery

    2018-03-03

    Objectives: Pediatric obesity is complicated by many factors, including psychological issues such as body dissatisfaction. Body image assessment tools are used with children to measure their acceptance of their body shape or image. Limited research has been conducted with African American and American Indian children to understand their opinions on the assessment tools created. This study investigated (a) children's perceptions of body image and (b) differences between two body image instruments among low-income, multi-ethnic children. Methods: This study uses mixed methodology, including focus groups (qualitative) and body image assessment instruments (quantitative). Fifty-one children participated (25 girls, 26 boys); 53% of children identified as African American and 47% as American Indian. The average age was 10.4 years. Open coding methods were used to identify themes from the focus group data. SPSS was used for quantitative analysis. Results: Children preferred the Figure Rating Scale (FRS/silhouette) instrument over the Children's Body Image Scale (CBIS/photo) because its body parts and facial features were more detailed. Children formed their body image perception with influence from their parents and the media. Children verbalized that they have experienced negative consequences related to poor body image, including disordered eating habits, depression, and bullying. Healthy-weight children are also aware of the weight-related bullying that obese and overweight children face. Conclusions for Practice: Children prefer that the images on a body image assessment tool have detailed facial features and are clothed. Further research into body image assessment tools for use with African American and American Indian children is needed.

  14. Imaging the complex geometry of a magma reservoir using FEM-based linear inverse modeling of InSAR data: application to Rabaul Caldera, Papua New Guinea

    Science.gov (United States)

    Ronchin, Erika; Masterlark, Timothy; Dawson, John; Saunders, Steve; Martì Molist, Joan

    2017-06-01

    We test an innovative inversion scheme using Green's functions from an array of pressure sources embedded in finite-element method (FEM) models to image, without assuming an a priori geometry, the composite and complex shape of a volcano deformation source. We invert interferometric synthetic aperture radar (InSAR) data to estimate the pressurization and shape of the magma reservoir of Rabaul caldera, Papua New Guinea. The results image the extended shallow magmatic system responsible for a broad and long-term subsidence of the caldera between February 2007 and December 2010. Elastic FEM solutions are integrated into the regularized linear inversion of InSAR data of volcano surface displacements in order to obtain a 3-D image of the source of deformation. The Green's function matrix is constructed from a library of forward line-of-sight displacement solutions for a grid of cubic elementary deformation sources. Each source is sequentially generated by removing the corresponding cubic elements from a common meshed domain and simulating the injection of a fluid mass flux into the cavity, which results in a pressurization and volumetric change of the fluid-filled cavity. The use of a single mesh for the generation of all FEM models avoids the computationally expensive processes of non-linear inversion and of remeshing a variable-geometry domain. Without assuming an a priori source geometry other than the configuration of the 3-D grid that generates the library of Green's functions, the geodetic data dictate the geometry of the magma reservoir as a 3-D distribution of pressure (or flux of magma) within the source array. The inversion of InSAR data of Rabaul caldera shows a distribution of interconnected sources forming an amorphous, shallow magmatic system elongated under two opposite sides of the caldera. The marginal areas at the sides of the imaged magmatic system are the possible feeding reservoirs of the ongoing Tavurvur volcano eruption of andesitic products on the
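    As a toy illustration of the regularized linear inversion step described above, the sketch below solves d = Gm for a vector of elementary source strengths m using a zeroth-order Tikhonov (damping) term. It assumes a generic damped least-squares formulation with a synthetic Green's function matrix; the paper's actual regularization, and any positivity or smoothing constraints, are not reproduced.

```python
import numpy as np

def tikhonov_inversion(G, d, lam):
    """Damped (zeroth-order Tikhonov) least-squares solution of d = G m.

    G:   (n_data, n_sources) line-of-sight Green's function matrix,
         one column per elementary pressure source
    d:   (n_data,) observed LOS displacements
    lam: regularization weight trading data misfit against model norm
    """
    n = G.shape[1]
    lhs = G.T @ G + lam ** 2 * np.eye(n)
    return np.linalg.solve(lhs, G.T @ d)

# Toy problem: 50 synthetic data points, 10 elementary sources
rng = np.random.default_rng(0)
G = rng.normal(size=(50, 10))
m_true = np.zeros(10)
m_true[3], m_true[4] = 1.0, 0.5          # two pressurized elements
d = G @ m_true + rng.normal(0, 0.05, size=50)
print(np.round(tikhonov_inversion(G, d, lam=0.5), 2))
```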

  15. Energy response of imaging plates to radiation beams from standard beta sources, ortho-voltage and cobalt-60 units and linear accelerators

    Science.gov (United States)

    Gonzalez, Albin Leonel

    The response of commercial imaging plates used for diagnostic computed radiography to different types of radiation beams has been investigated in this work. Imaging plates are designed with a phosphor layer in which, after irradiation, information is stored in the form of photostimulable luminescence (PSL) centers. Initial measurements of the dose distribution of a radioactive stent with the imaging plates showed results similar to those obtained with radiochromic films, but with a much shorter exposure time due to their higher sensitivity. In order to investigate their response further, the imaging plates were irradiated with calibrated beams from standard beta sources, orthovoltage and Co-60 units, and a therapy linear accelerator. Initially it was found that the energy to create the storage centers (generation efficiency) was the same when irradiated with the three standard beta sources (225 keV to 2.28 MeV). For the rest of the calibrated beams, an in-house reader system was built in order to perform the bleaching of the plates with a He-Ne laser (632.8 nm) and to measure the absolute number of emitted PSL photons (storage centers produced). Bleaching curves were then obtained for different exposure times for each beam. From the graph of the calculated area under the bleaching curves (total number of storage centers) versus the absorbed dose to the phosphor layer, it was possible to calculate the energy to create the storage centers (generation efficiency) for photon and electron beams. The dose to the phosphor layer was calculated for the electron beams following a scaling procedure, while for the photon beams Monte Carlo simulations were performed. For the photon beams, the measured generation efficiency of 126 eV ± 8% per PSL storage center coincides with measurements using a different approach (~148 eV) by previous investigators. The generation efficiency for the electron beams was 807 eV ± 3%; no reference was found in the