Scaled Sparse Linear Regression
Sun, Tingni
2011-01-01
Scaled sparse linear regression jointly estimates the regression coefficients and noise level in a linear model. It chooses an equilibrium with a sparse regression method by iteratively estimating the noise level via the mean residual squares and scaling the penalty in proportion to the estimated noise level. The iterative algorithm costs nearly nothing beyond the computation of a path of the sparse regression estimator for penalty levels above a threshold. For the scaled Lasso, the algorithm is a gradient descent in a convex minimization of a penalized joint loss function for the regression coefficients and noise level. Under mild regularity conditions, we prove that the method yields simultaneously an estimator for the noise level and an estimated coefficient vector in the Lasso path satisfying certain oracle inequalities for the estimation of the noise level, prediction, and the estimation of regression coefficients. These oracle inequalities provide sufficient conditions for the consistency and asymptotic...
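The alternating scheme described in the abstract, re-estimating the noise level from the mean residual square and then re-solving the sparse regression with the penalty scaled in proportion to that estimate, can be sketched as follows. This is an illustrative reimplementation, not the authors' code; the plain coordinate-descent Lasso solver and the penalty constant `lam0` are assumptions for the example.

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent Lasso: minimize (1/2n)*||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]              # residual excluding feature j
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return b

def scaled_lasso(X, y, lam0, n_outer=20):
    """Alternate noise-level and coefficient estimation until a fixed point."""
    sigma = y.std()                                      # crude initial noise guess
    for _ in range(n_outer):
        b = lasso_cd(X, y, lam0 * sigma)                 # penalty scaled by noise level
        sigma_new = np.sqrt(((y - X @ b) ** 2).mean())   # mean residual square
        if abs(sigma_new - sigma) < 1e-8:
            break
        sigma = sigma_new
    return b, sigma
```

With `lam0 = sqrt(2*log(p)/n)`, a commonly used universal penalty level, the fixed point yields both a coefficient vector on the Lasso path and an accompanying noise-level estimate.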
Volumetric Light-field Encryption at the Microscopic Scale
Li, Haoyu; Guo, Changliang; Muniraj, Inbarasan; Schroeder, Bryce C.; Sheridan, John T.; Jia, Shu
2017-01-01
We report a light-field based method that allows the optical encryption of three-dimensional (3D) volumetric information at the microscopic scale in a single 2D light-field image. The system consists of a microlens array and an array of random phase/amplitude masks. The method utilizes a wave optics model to account for the dominant diffraction effect at this new scale, and the system point-spread function (PSF) serves as the key for encryption and decryption. We successfully developed and demonstrated a deconvolution algorithm to retrieve both spatially multiplexed discrete data and continuous volumetric data from 2D light-field images. Showing that the method is practical for data transmission and storage, we obtained a faithful reconstruction of the 3D volumetric information from a digital copy of the encrypted light-field image. The method represents a new level of optical encryption, paving the way for broad industrial and biomedical applications in processing and securing 3D data at the microscopic scale.
Three-dimensional linear and volumetric analysis of maxillary sinus pneumatization
Directory of Open Access Journals (Sweden)
Reham M. Hamdy
2014-05-01
Considering the anatomical variability of the maxillary sinus, its intimate relation to the maxillary posterior teeth, and the implications that pneumatization may have, three-dimensional assessment of maxillary sinus pneumatization is highly useful. The aim of this study is to analyze the maxillary sinus dimensions both linearly and volumetrically using cone beam computed tomography (CBCT) to assess maxillary sinus pneumatization. Retrospective analysis of 30 maxillary sinuses belonging to 15 patients' CBCT scans was performed. Linear and volumetric measurements were conducted and statistically analyzed. The maximum craniocaudal extension of the maxillary sinus was located around the 2nd molar in 93% of the sinuses, while the maximum mediolateral and anteroposterior extensions of the maxillary sinus were located at the level of the root of the zygomatic complex in 90% of sinuses. There was a high correlation between the linear measurements of the right and left sides, where the anteroposterior extension of the sinus at the level of the nasal floor had the largest correlation (0.89). There was also a high correlation between the Simplant-derived and geometrically derived maxillary sinus volumes for both right and left sides (0.98 and 0.96, respectively). The relations of the sinus floor can be accurately assessed on the different orthogonal images obtained through a 3D CBCT scan. The geometric method offered a much cheaper, easier, and less sophisticated substitute; therefore, with the availability of software, 3D volumetric measurements are more readily facilitated.
Brassey, Charlotte A; Maidment, Susannah C R; Barrett, Paul M
2015-03-01
Body mass is a key biological variable, but difficult to assess from fossils. Various techniques exist for estimating body mass from skeletal parameters, but few studies have compared outputs from different methods. Here, we apply several mass estimation methods to an exceptionally complete skeleton of the dinosaur Stegosaurus. Applying a volumetric convex-hulling technique to a digital model of Stegosaurus, we estimate a mass of 1560 kg (95% prediction interval 1082-2256 kg) for this individual. By contrast, bivariate equations based on limb dimensions predict values between 2355 and 3751 kg and require implausible amounts of soft tissue and/or high body densities. When corrected for ontogenetic scaling, however, volumetric and linear equations are brought into close agreement. Our results raise concerns regarding the application of predictive equations to extinct taxa with no living analogues in terms of overall morphology and highlight the sensitivity of bivariate predictive equations to the ontogenetic status of the specimen. We emphasize the significance of rare, complete fossil skeletons in validating widely applied mass estimation equations based on incomplete skeletal material and stress the importance of accurately determining specimen age prior to further analyses.
Relating Linear and Volumetric Variables Through Body Scanning to Improve Human Interfaces in Space
Margerum, Sarah E.; Ferrer, Mike A.; Young, Karen S.; Rajulu, Sudhakar
2010-01-01
Designing space suits and vehicles for the diverse human population presents unique challenges for the methods of traditional anthropometry. Space suits are bulky, allow the operator to shift position within the suit, and inhibit the ability to identify body landmarks. Limited suit sizing options also cause variability in fit and performance between similarly sized individuals. Space vehicles are restrictive in volume, in terms of both fit and the ability to collect data. NASA's Anthropometric and Biomechanics Facility (ABF) has utilized 3D scanning to shift from traditional linear anthropometry and to explore and examine volumetric capabilities to provide anthropometric solutions for design. Overall, the key goals are to improve human-system performance and develop new processes to aid in the design and evaluation of space systems. Four case studies are presented that illustrate the shift from purely linear analyses to an augmented volumetric toolset to predict and analyze the human within the space suit and vehicle. The first case study involves the calculation of maximal head volume to estimate total free volume in the helmet for proper air exchange. Traditional linear measurements resulted in an inaccurate representation of the head shape, yet limited data exist for the determination of a large head volume. Steps were first taken to identify and classify a maximum head volume, and the resulting comparisons to the estimate are presented in this paper. This study illustrates the gap between linear components of anthropometry and the need for overall volume metrics in order to provide solutions. A second case study examines the overlay of the space suit scans and components onto scanned individuals to quantify fit and clearance to aid in sizing the suit to the individual. Restrictions in space suit size availability present unique challenges to optimally fitting the individual within a limited sizing range while maintaining performance. Quantification of the clearance and
Institute of Scientific and Technical Information of China (English)
李方; 叶佩青; 张辉
2016-01-01
Permanent magnet tubular linear motors (TLMs) arranged in multiple rows and multiple columns, used in a radiotherapy machine, were studied. Due to severe volumetric and thermal constraints, the TLMs were at high risk of overheating. To predict the performance of the TLMs accurately, a multi-physics analysis approach was proposed. Specifically, it considered the coupling effects among the electromagnetic and thermal models of the TLMs, as well as the fluid model of the surrounding air. To reduce computation cost, both the electromagnetic and the thermal models were based on lumped-parameter methods. Only a minimal set of numerical computations (computational fluid dynamics, CFD) was performed to model the complex fluid behavior. With the proposed approach, both steady-state and transient temperature distributions, thermal rating and permissible load can be predicted. The validity of this approach is verified through experiment.
Taguas, Encarnación; Nadal-Romero, Estela; Ayuso, José L.; Casalí, Javier; Cid, Patricio; Dafonte, Jorge; Duarte, Antonio C.; Giménez, Rafael; Giráldez, Juan V.; Gómez-Macpherson, Helena; Gómez, José A.; González-Hidalgo, J. Carlos; Lucía, Ana; Mateos, Luciano; Rodríguez-Blanco, M. Luz; Schnabel, Susanne; Serrano-Muela, M. Pilar; Lana-Renault, Noemí; Mercedes Taboada-Castro, M.; Taboada-Castro, M. Teresa
2016-04-01
Analysis of storm rainfall-runoff data is essential to improve our understanding of catchment hydrology and to validate models supporting hydrological planning. In a context of climate change, statistical and process-based models are helpful to explore different scenarios, which might be represented by simple parameters such as the volumetric runoff coefficient. In this work, rainfall-runoff event datasets collected at 17 rural catchments in the Iberian Peninsula were studied. The objectives were: i) to describe hydrological patterns/variability of the rainfall-runoff relation; ii) to explore different methodologies to quantify representative volumetric runoff coefficients. Firstly, the criteria used to define an event were examined in order to standardize the analysis. Linear regression adjustments and statistics of the rainfall-runoff relations were examined to identify possible common patterns. In addition, a principal component analysis was applied to evaluate the variability among catchments based on their physical attributes. Secondly, runoff coefficients at the event temporal scale were calculated following different methods: the median, the mean, Hawkins' graphic method (Hawkins, 1993), reference values for engineering projects from Prevert (TRAGSA, 1994), and the ratio of cumulated runoff to cumulated precipitation over the events that generated runoff (Rcum) were compared. Finally, the relations between the most representative volumetric runoff coefficients and the physical features of the catchments were explored using multiple linear regressions. The mean volumetric runoff coefficient in the studied catchments was 0.18, whereas the median was 0.15, both with variation coefficients greater than 100%. In 6 catchments, rainfall-runoff linear adjustments presented coefficients of determination greater than 0.60 (p < 0.05), reflecting hydrological response differences among the catchments. REFERENCES: Hawkins, R. H. (1993). Asymptotic determination of runoff curve numbers from data. J. Irrig. Drain. Eng.
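As a toy illustration of the event-scale coefficient definitions compared in the study (mean, median, and Rcum), the event values below are invented for the example:

```python
import numpy as np

# Invented event series: rainfall P and runoff Q per event, both in mm
P = np.array([12.0, 30.0, 8.0, 55.0, 20.0, 41.0])
Q = np.array([ 1.1,  6.3, 0.4, 16.0,  2.8,  9.9])

rc_event  = Q / P                  # per-event volumetric runoff coefficients
rc_mean   = rc_event.mean()        # mean of the event coefficients
rc_median = np.median(rc_event)    # median of the event coefficients
rc_cum    = Q.sum() / P.sum()      # Rcum: cumulated runoff over cumulated rainfall
```

Note that Rcum weights large events more heavily, which is why it can differ substantially from the median when variation coefficients exceed 100%, as reported for these catchments.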
Optimization of element length for imaging small volumetric reflectors with linear ultrasonic arrays
Barber, T. S.; Wilcox, P. D.; Nixon, A. D.
2016-02-01
A 3D ultrasonic simulation study is presented, aimed at understanding the effect of element length for imaging small volumetric flaws with linear arrays in ultrasonically noisy materials. The geometry of a linear array can be described by the width, pitch and total number of the elements, along with the length perpendicular to the imaging plane. This paper is concerned with the latter parameter, which tends to be ignored in array optimization studies and is often chosen arbitrarily for industrial array inspections. A 3D analytical model based on imaging a point target is described, validated and used to make calculations of relative Signal-to-Noise Ratio (SNR) as a function of element length. SNR is found to be highly sensitive to element length, with a 12 dB variation observed over the length range investigated. It is then demonstrated that the optimal length can be predicted directly from the Point Spread Function (PSF) of the imaging system as well as the natural focal point of the array element from 2D beam profiles perpendicular to the imaging plane. This result suggests that the optimal length for any imaging position can be predicted without the need for a full 3D model and is independent of element pitch and the number of elements. Array element design guidelines are then described with respect to wavelength and extensions of these results are discussed for application to realistically-sized defects and coarse-grained materials.
Volumetric scale-up of smouldering remediation of contaminated materials.
Switzer, Christine; Pironi, Paolo; Gerhard, Jason I; Rein, Guillermo; Torero, Jose L
2014-03-15
Smouldering remediation is a process that has been introduced recently to address non-aqueous phase liquid (NAPL) contamination in soils and other porous media. Previous work demonstrated this process to be highly effective across a wide range of contaminants and soil conditions at the bench scale. In this work, a suite of 12 experiments explored the effectiveness of the process as operating scale was increased 1000-fold from the bench (0.003 m³) to intermediate (0.3 m³) and pilot field scale (3 m³) with coal tar and petrochemical NAPLs. As scale increased, remediation efficiency of 97-99.95% was maintained. Smouldering propagation velocities of 0.6-14 × 10⁻⁵ m/s at Darcy air fluxes of 1.54-9.15 cm/s were consistent with observations in previous bench studies, as was the dependence on air flux. The pilot field-scale experiments demonstrated the robustness of the process despite heterogeneities, localised operation, controllability through airflow supply, and the importance of a minimum air flux for self-sustainability. Experiments at the intermediate scale established a minimum-observed, not minimum-possible, initial concentration of 12,000 mg/kg in mixed oil waste, providing support for the expectation that lower thresholds for self-sustaining smouldering decreased with increasing scale. Once the threshold was exceeded, basic process characteristics of average peak temperature, destructive efficiency, and treatment velocity were relatively independent of scale.
Schultz, R.A.; Soliva, R.; Fossen, H.; Okubo, C.H.; Reeves, D.M.
2008-01-01
Displacement-length data from faults, joints, veins, igneous dikes, shear deformation bands, and compaction bands define two groups. The first group, having a power-law scaling relation with a slope of n = 1 and therefore a linear dependence of maximum displacement on discontinuity length (Dmax = γL), comprises faults and shear (non-compactional or non-dilational) deformation bands. These shearing-mode structures, having shearing strains that predominate over volumetric strains across them, grow under conditions of constant driving stress, with the magnitude of near-tip stress on the same order as the rock's yield strength in shear. The second group, having a power-law scaling relation with a slope of n = 0.5 and therefore a dependence of maximum displacement on the square root of discontinuity length (Dmax = αL^0.5), comprises joints, veins, igneous dikes, cataclastic deformation bands, and compaction bands. These opening- and closing-mode structures grow under conditions of constant fracture toughness, implying significant amplification of near-tip stress within a zone of small-scale yielding at the discontinuity tip. Volumetric changes accommodated by grain fragmentation, and thus control of propagation by the rock's fracture toughness, are associated with scaling of predominantly dilational and compactional structures with an exponent of n = 0.5.
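The two scaling regimes described above can be recovered from displacement-length data by a least-squares fit in log-log space. A minimal sketch with synthetic data (the prefactors 0.01 and 0.02 are arbitrary, chosen only to generate example points):

```python
import numpy as np

def fit_scaling_exponent(L, Dmax):
    """Least-squares fit of log(Dmax) = n*log(L) + log(c); returns (n, c)."""
    n, logc = np.polyfit(np.log(L), np.log(Dmax), 1)
    return n, np.exp(logc)

# Synthetic fault-like data: Dmax proportional to L (expect n = 1)
L = np.logspace(0, 4, 50)
faults = 0.01 * L
# Synthetic joint-like data: Dmax proportional to sqrt(L) (expect n = 0.5)
joints = 0.02 * np.sqrt(L)

n_fault, c_fault = fit_scaling_exponent(L, faults)   # ~ (1.0, 0.01)
n_joint, c_joint = fit_scaling_exponent(L, joints)   # ~ (0.5, 0.02)
```

On real data the exponent separates the shearing-mode population (n near 1) from the opening- and closing-mode population (n near 0.5).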
Lion, Alexander; Mittermeier, Christoph; Johlitz, Michael
2017-09-01
A novel approach to represent the glass transition is proposed. It is based on a physically motivated extension of the linear viscoelastic Poynting-Thomson model. In addition to a temperature-dependent damping element and two linear springs, two thermal strain elements are introduced. In order to take the process dependence of the specific heat into account and to model its characteristic behaviour below and above the glass transition, the Helmholtz free energy contains an additional contribution which depends on the temperature history and on the current temperature. The model describes the process-dependent volumetric and caloric behaviour of glass-forming materials, and defines a functional relationship between pressure, volumetric strain, and temperature. If a model for the isochoric part of the material behaviour is already available, for example a model of finite viscoelasticity, the caloric and volumetric behaviour can be represented with the current approach. The proposed model allows computing the isobaric and isochoric heat capacities in closed form. The difference c_p -c_v is process-dependent and tends towards the classical expression in the glassy and equilibrium ranges. Simulations and theoretical studies demonstrate the physical significance of the model.
A SCALED CENTRAL PATH FOR LINEAR PROGRAMMING
Institute of Scientific and Technical Information of China (English)
Ya-xiang Yuan
2001-01-01
Interior point methods are very efficient methods for solving large scale linear programming problems. The central path plays a very important role in interior point methods. In this paper we propose a new central path, which scales the variables. Thus it has the advantage of forcing the path to have roughly the same distance from each active constraint boundary near the solution.
Gomez-Garcia, Fabrisio; Santiago, Sergio; Luque, Salvador; Romero, Manuel; Gonzalez-Aguilar, Jose
2016-05-01
This paper describes a new modular laboratory-scale experimental facility that was designed to conduct detailed aerothermal characterizations of volumetric absorbers for use in concentrating solar power plants. Absorbers are generally considered to be the element with the highest potential for efficiency gains in solar thermal energy systems. The configuration of volumetric absorbers enables concentrated solar radiation to penetrate deep into their solid structure, where it is progressively absorbed, prior to being transferred by convection to a working fluid flowing through the structure. Current design trends towards higher absorber outlet temperatures have led to the use of complex intricate geometries in novel ceramic and metallic elements to maximize the temperature deep inside the structure (thus reducing thermal emission losses at the front surface and increasing efficiency). Although numerical models simulate the conjugate heat transfer mechanisms along volumetric absorbers, they lack, in many cases, the accuracy that is required for precise aerothermal validations. The present work aims to aid this objective by the design, development, commissioning and operation of a new experimental facility which consists of a 7 kWe (1.2 kWth) high flux solar simulator, a radiation homogenizer, inlet and outlet collector modules and a working section that can accommodate volumetric absorbers up to 80 mm × 80 mm in cross-sectional area. Experimental measurements conducted in the facility include absorber solid temperature distributions along its depth, inlet and outlet air temperatures, air mass flow rate and pressure drop, incident radiative heat flux, and overall thermal efficiency. In addition, two windows allow for the direct visualization of the front and rear absorber surfaces, thus enabling full-coverage surface temperature measurements by thermal imaging cameras. This paper presents the results from the aerothermal characterization of a siliconized silicon
Preface: Introductory Remarks: Linear Scaling Methods
Bowler, D. R.; Fattebert, J.-L.; Gillan, M. J.; Haynes, P. D.; Skylaris, C.-K.
2008-07-01
It has been just over twenty years since the publication of the seminal paper on molecular dynamics with ab initio methods by Car and Parrinello [1], and the contribution of density functional theory (DFT) and the related techniques to physics, chemistry, materials science, earth science and biochemistry has been huge. Nevertheless, significant improvements are still being made to the performance of these standard techniques; recent work suggests that speed improvements of one or even two orders of magnitude are possible [2]. One of the areas where major progress has long been expected is in O(N), or linear scaling, DFT, in which the computer effort is proportional to the number of atoms. Linear scaling DFT methods have been in development for over ten years [3] but we are now in an exciting period where more and more research groups are working on these methods. Naturally there is a strong and continuing effort to improve the efficiency of the methods and to make them more robust. But there is also a growing ambition to apply them to challenging real-life problems. This special issue contains papers submitted following the CECAM Workshop 'Linear-scaling ab initio calculations: applications and future directions', held in Lyon from 3-6 September 2007. A noteworthy feature of the workshop is that it included a significant number of presentations involving real applications of O(N) methods, as well as work to extend O(N) methods into areas of greater accuracy (correlated wavefunction methods, quantum Monte Carlo, TDDFT) and large scale computer architectures. As well as explicitly linear scaling methods, the conference included presentations on techniques designed to accelerate and improve the efficiency of standard (that is non-linear-scaling) methods; this highlights the important question of crossover—that is, at what size of system does it become more efficient to use a linear-scaling method? As well as fundamental algorithmic questions, this brings up
Linear scaling algorithms: Progress and promise
Energy Technology Data Exchange (ETDEWEB)
Stechel, E.B.
1996-08-01
The goal of this laboratory-directed research and development (LDRD) project was to develop a new and efficient electronic structure algorithm that would scale linearly with system size. Since the start of the program this field has received much attention in the literature as well as in terms of focused symposia and at least one dedicated international workshop. The major success of this program is the development of a unique algorithm for minimization of the density functional energy which replaces the diagonalization of the Kohn-Sham hamiltonian with block diagonalization into explicit occupied and partially occupied (in metals) subspaces and an implicit unoccupied subspace. The progress reported here represents an important step toward the simultaneous goals of linear scaling, controlled accuracy, efficiency and transferability. The method is specifically designed to deal with localized, non-orthogonal basis sets to maximize transferability and state by state iteration to minimize any charge-sloshing instabilities and accelerate convergence. The computational demands of the algorithm do scale as the particle number, permitting applications to problems involving many inequivalent atoms. Our targeted goal is at least 10,000 inequivalent atoms on a teraflop computer. This report describes our algorithm, some proof-of-principle examples and a state of the field at the conclusion of this LDRD.
Large-scale volumetric pressure from tomographic PTV with HFSB tracers
Schneiders, Jan F. G.; Caridi, Giuseppe C. A.; Sciacchitano, Andrea; Scarano, Fulvio
2016-11-01
The instantaneous volumetric pressure in the near-wake of a truncated cylinder is measured by use of tomographic particle tracking velocimetry (PTV) using helium-filled soap bubbles (HFSB) as tracers. The measurement volume is several orders of magnitude larger than that reported in tomographic experiments dealing with pressure from particle image velocimetry (PIV). The near-wake of a truncated cylinder installed on a flat plate (Re_D = 3.5 × 10⁴) features both wall-bounded turbulence and large-scale unsteady flow separation. The instantaneous pressure is calculated from the time-resolved 3D velocity distribution by invoking the momentum equation. The experiments are conducted simultaneously with surface pressure measurements intended for validation of the technique. The study shows that time-averaged pressure and root-mean-squared pressure fluctuations can be accurately measured both in the fluid domain and at the solid surface by large-scale tomographic PTV with HFSB as tracers, with significant reduction in manufacturing complexity for the wind-tunnel model and circumventing the need to install pressure taps or transducers. The measurement over a large volume eases the extension toward the free-stream regime, providing a reliable boundary condition for the solution of the Poisson equation for pressure. The work demonstrates, in the case of the flow past a truncated cylinder, the use of HFSB tracer particles for pressure measurement in air flows in a measurement volume that is two orders of magnitude larger than that of conventional tomographic PIV.
Scaling Laws for $e^+ e^-$ Linear Colliders
Delahaye, J P; Raubenheimer, T O; Wilson, Ian H
1999-01-01
Design studies of a future TeV e+e- Linear Collider (TLC) are presently being made by five major laboratories within the framework of a world-wide collaboration. A figure of merit is defined which enables an objective comparison of these different designs. This figure of merit is shown to depend only on a small number of parameters. General scaling laws for the main beam parameters and linac parameters are derived and prove to be very effective when used as guidelines to optimize the linear collider design. By adopting appropriate parameters for beam stability, the figure of merit becomes nearly independent of accelerating gradient and RF frequency of the accelerating structures. In spite of the strong dependence of the wake-fields with frequency, the single bunch emittance preservation during acceleration along the linac is also shown to be independent of the RF frequency when using equivalent trajectory correction schemes. In this situation, beam acceleration using high frequency structures becomes very adv...
Directory of Open Access Journals (Sweden)
Gaigals G.
2016-04-01
The focus of the present research is to investigate possibilities of volumetric defect detection in thin-film coatings on glass substrates by means of high-definition imaging with no complex optical systems, such as lenses, and to determine the development and construction feasibility of a defectoscope employing the investigated methods. Numerical simulations were used to test the proposed methods. Three theoretical models providing various degrees of accuracy and feasibility were studied.
Algebraic Framework for Linear and Morphological Scale-Spaces
Heijmans, H.J.A.M.; van den Boomgaard, R.
2002-01-01
This paper proposes a general algebraic construction technique for image scale-spaces. The basic idea is to first downscale the image by some factor using an invertible scaling, then apply an image operator (linear or morphological) at a unit scale, and finally resize the image to its original scale.
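A minimal 1D sketch of the three-step construction (downscale, apply a unit-scale operator, resize back), here using a flat morphological dilation as the unit-scale operator. The nearest-neighbour decimation and replication below are simplifications of the invertible scalings treated in the paper:

```python
import numpy as np

def dilate_unit(f):
    """Flat morphological dilation with a 3-sample window (unit scale)."""
    padded = np.pad(f, 1, mode="edge")
    return np.maximum(np.maximum(padded[:-2], padded[1:-1]), padded[2:])

def scale_space_op(f, s):
    """Downscale by integer factor s, dilate at unit scale, resize back."""
    coarse = f[::s]                        # simple decimation (stand-in for the scaling)
    coarse = dilate_unit(coarse)           # image operator applied at unit scale
    return np.repeat(coarse, s)[: len(f)]  # resize to the original support
```

Increasing `s` yields a coarser level of the scale-space from the same fixed unit-scale operator, which is the essence of the algebraic construction.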
Scaling behavior of linear polymers in disordered media
Janssen, Hans-Karl; Stenull, Olaf
2006-01-01
Folklore has it that the universal scaling properties of linear polymers in disordered media are well described by the statistics of self-avoiding walks (SAWs) on percolation clusters and their critical exponents.
Hopewell Furnace NHS Small Scale Features (Linear Features)
National Park Service, Department of the Interior — This shapefile represents the linear small scale features found at Hopewell Furnace National Historic Site based on the Cultural Landscape Report completed in...
Analysis of linear trade models and relation to scale economies.
Gomory, R E; Baumol, W J
1997-09-01
We discuss linear Ricardo models with a range of parameters. We show that the exact boundary of the region of equilibria of these models is obtained by solving a simple integer programming problem. We show that there is also an exact correspondence between many of the equilibria resulting from families of linear models and the multiple equilibria of economies of scale models.
Non-linear Frequency Scaling Algorithm for FMCW SAR Data
Meta, A.; Hoogeboom, P.; Ligthart, L.P.
2006-01-01
This paper presents a novel approach for processing data acquired with Frequency Modulated Continuous Wave (FMCW) dechirp-on-receive systems by using a non-linear frequency scaling algorithm. The range frequency non-linearity correction, the Doppler shift induced by the continuous motion and the ran
Lach, Adeline; Boulahya, Faïza; André, Laurent; Lassin, Arnault; Azaroual, Mohamed; Serin, Jean-Paul; Cézac, Pierre
2016-07-01
The thermal and volumetric properties of complex aqueous solutions are described according to the Pitzer equation, explicitly taking into account the speciation in the aqueous solutions. The thermal properties are the apparent relative molar enthalpy (Lϕ) and the apparent molar heat capacity (Cp,ϕ). The volumetric property is the apparent molar volume (Vϕ). Equations describing these properties are obtained from the temperature or pressure derivatives of the excess Gibbs energy and make it possible to calculate the dilution enthalpy (∆HD), the heat capacity (cp) and the density (ρ) of aqueous solutions up to high concentrations. Their implementation in PHREEQC V.3 (Parkhurst and Appelo, 2013) is described and has led to a new numerical tool, called PhreeSCALE. It was tested first, using a set of parameters (specific interaction parameters and standard properties) from the literature for two binary systems (Na2SO4-H2O and MgSO4-H2O), for the quaternary K-Na-Cl-SO4 system (heat capacity only) and for the Na-K-Ca-Mg-Cl-SO4-HCO3 system (density only). The results obtained with PhreeSCALE are in agreement with the literature data when the same standard solution heat capacity (Cp0) and volume (V0) values are used. For further applications of this improved computation tool, these standard solution properties were calculated independently, using the Helgeson-Kirkham-Flowers (HKF) equations. By using this kind of approach, most of the Pitzer interaction parameters coming from literature become obsolete since they are not coherent with the standard properties calculated according to the HKF formalism. Consequently a new set of interaction parameters must be determined. This approach was successfully applied to the Na2SO4-H2O and MgSO4-H2O binary systems, providing a new set of optimized interaction parameters, consistent with the standard solution properties derived from the HKF equations.
Linear Scaling Real Time TDDFT in the CONQUEST Code
O'Rourke, Conn
2014-01-01
The real time formulation of Time Dependent Density Functional Theory (RT-TDDFT) is implemented in the linear scaling density functional theory code CONQUEST. By propagating the density matrix, as opposed to the Kohn-Sham orbitals, it is possible to reduce the computational workload. Imposing a cut-off on the density matrix, the effort can be made to scale linearly with the size of the system under study. Propagation of the reduced density matrix in this manner provides direct access to the optical response of very large systems, which would otherwise be impractical to obtain using the standard formulations of TDDFT. We discuss our implementation and present several benchmark tests illustrating the validity of the method and the factors affecting its accuracy. Finally, we illustrate the effect of density matrix truncation on the optical response and show that the computational load scales linearly with the system size.
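The linear scaling rests on the near-sightedness of the density matrix: its elements decay with distance, so a real-space cut-off leaves O(N) stored elements. A minimal sketch of this counting argument (a toy banded model, not the CONQUEST implementation):

```python
def truncated_nonzeros(n_sites, r_cut=6):
    """Count stored elements of a truncated density matrix on a 1-D chain:
    element P_ij is kept only when |i - j| <= r_cut, mimicking a
    real-space cut-off radius on the density matrix."""
    count = 0
    for i in range(n_sites):
        j_lo = max(0, i - r_cut)
        j_hi = min(n_sites - 1, i + r_cut)
        count += j_hi - j_lo + 1   # only the in-range band is stored
    return count

# Stored elements grow linearly with system size, roughly (2*r_cut + 1) per site.
for n in (100, 200, 400):
    print(n, truncated_nonzeros(n))
```

Doubling the chain length adds a fixed number of elements per added site, which is the essence of O(N) memory and, with sparse matrix algebra, O(N) work.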
Vos, J.M.C.; Vincent, L.F.
2011-01-01
Volumetric water control (VWC) is widely seen as a means to increase productivity through flexible scheduling and user incentives to apply just enough water. However, the technical and social requirements for VWC are poorly understood. Also, many experts assert that VWC in large-scale open canals
Polarization properties of linearly polarized parabolic scaling Bessel beams
Guo, Mengwen; Zhao, Daomu
2016-10-01
The intensity profiles for the dominant polarization, cross polarization, and longitudinal components of modified parabolic scaling Bessel beams with linear polarization are investigated theoretically. The transverse intensity distributions of the three electric components are intimately connected to the topological charge. In particular, the intensity patterns of the cross polarization and longitudinal components near the apodization plane reflect the sign of the topological charge.
Suppressing Linear Power on Dwarf Galaxy Halo Scales
White, M; White, Martin; Croft, Rupert A.C.
2000-01-01
Recently it has been suggested that the dearth of small halos around the Milky Way arises due to a modification of the primordial power spectrum of fluctuations from inflation. Such modifications would be expected to alter the formation of structure from bottom-up to top-down on scales near where the short-scale power has been suppressed. Using cosmological simulations we study the effects of such a modification of the initial power spectrum. While the halo multiplicity function depends primarily on the linear theory power spectrum, most other probes of power are more sensitive to the non-linear power spectrum. Collapse of large-scale structures as they go non-linear regenerates a ``tail'' in the power spectrum, masking small-scale modifications to the primordial power spectrum except at very high-z. Even the small-scale (k>2h/Mpc) clustering of the Ly-alpha forest is affected by this process, so that CDM models with sufficient power suppression to reduce the number of 10^10 Msun halos by a factor of about 5 ...
Estimating WISC-IV indexes: proration versus linear scaling.
Glass, Laura A; Ryan, Joseph J; Bartels, Jared M; Morris, Jeri
2008-10-01
This investigation compared proration and linear scaling for estimating Wechsler Intelligence Scale for Children-Fourth Edition (WISC-IV) verbal comprehension (VCI) and perceptual reasoning (PRI) composites from all relevant two-subtest combinations. Using 57 primary school students and 41 clinical referrals, actual VCI and PRI scores were highly correlated with estimated index scores based on proration and linear scaling (all rs ≥ .90). In the school sample, significant mean score differences between the actual and estimated composites were found in two comparisons; however, differences between mean scores were less than three points. No significant differences emerged in the clinical sample. Results indicate that any of the two-subtest combinations produced reasonably accurate estimates of actual indexes. There was no advantage of one computational method over the other. Copyright 2008 Wiley Periodicals, Inc.
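The two estimation methods can be sketched as follows. This is an illustrative Python sketch in the standard Wechsler metrics (subtest mean 10, SD 3; composite mean 100, SD 15); the inter-subtest correlation used for linear scaling is a hypothetical value, not taken from the WISC-IV norms:

```python
import math

SUBTEST_MEAN, SUBTEST_SD = 10.0, 3.0   # Wechsler scaled-score metric
INDEX_MEAN, INDEX_SD = 100.0, 15.0     # Wechsler composite metric

def prorated_sum(scores, n_total=3):
    """Proration: scale the sum of the available subtests up to the full
    battery; the prorated sum would then be converted to an index via the
    test's norm tables (not reproduced here)."""
    return round(sum(scores) * n_total / len(scores))

def linear_scaled_index(scores, r=0.6):
    """Linear scaling: z-transform the short-form sum and re-express it in
    the composite metric.  The correlation r between subtests is a
    hypothetical illustration value."""
    k = len(scores)
    mean_sum = k * SUBTEST_MEAN
    var_sum = k * SUBTEST_SD ** 2 + k * (k - 1) * r * SUBTEST_SD ** 2
    z = (sum(scores) - mean_sum) / math.sqrt(var_sum)
    return round(INDEX_MEAN + INDEX_SD * z)

print(prorated_sum([12, 14]))         # prorated three-subtest sum: 39
print(linear_scaled_index([12, 14]))  # index estimate on the 100/15 metric
```

Proration assumes the missing subtest behaves like the average of the observed ones; linear scaling instead rescales the short-form distribution directly, which is why the two can disagree slightly.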
Whyms, B.J.; Vorperian, H.K.; Gentry, L.R.; Schimek, E.M.; Bersu, E.T.; Chung, M.K.
2013-01-01
Objectives: This study investigates the effect of scanning parameters on the accuracy of measurements from three-dimensional multi-detector computed tomography (3D-CT) mandible renderings. A broader range of acceptable parameters can increase the availability of CT studies for retrospective analysis. Study Design: Three human mandibles and a phantom object were scanned using 18 combinations of slice thickness, field of view, and reconstruction algorithm, and three different threshold-based segmentations. Measurements of 3D-CT models and specimens were compared. Results: Linear and angular measurements were accurate, irrespective of scanner parameters or rendering technique. Volume measurements were accurate with a slice thickness of 1.25 mm, but not 2.5 mm. Surface area measurements were consistently inflated. Conclusions: Linear, angular and volumetric measurements of mandible 3D-CT models can be confidently obtained from a range of parameters and rendering techniques. Slice thickness is the primary factor affecting volume measurements. These findings should also apply to 3D rendering using cone-beam CT. PMID:23601224
Ban, Sungbea; Cho, Nam Hyun; Ryu, Yongjae; Jung, Sunwoo; Vavilin, Andrey; Min, Eunjung; Jung, Woonggyu
2016-04-01
Optical projection tomography (OPT) is a new optical imaging method for visualizing small biological specimens in three dimensions. The most important advantage of OPT is that it fills the gap between MRI and confocal microscopy for specimens in the 1-10 mm range. Thus, it has mainly been used for whole-mount small animals and developmental studies since this imaging modality was developed. The ability of OPT to deliver anatomical and functional information on relatively large tissue in 3D has made it a promising platform in biomedical research. Recently, the potential of OPT has extended to the cellular scale. Even though there is increasing demand for a better understanding of cellular dynamics, only a few studies visualizing cellular structure, shape, size and functional morphology over tissue have been carried out with existing OPT systems due to their limited field of view. In this study, we develop a novel optical imaging system for 3D cellular imaging: OPT integrated with a dynamic focusing technique. Our tomographic setup has great potential for identifying cell characteristics in tissue because it can provide selective contrast on a dynamic focal plane, allowing for fluorescence as well as absorption. While the dominant contrast mechanism of optical imaging techniques uses fluorescence to detect certain targets only, the newly developed OPT system offers considerable advantages over currently available methods when imaging cellular molecular dynamics by permitting contrast variation. By achieving multi-contrast, this new imaging system is expected to play an important role in delivering better cytological information to pathologists.
Scaling Linear Algebra Kernels using Remote Memory Access
Energy Technology Data Exchange (ETDEWEB)
Krishnan, Manoj Kumar; Lewis, Robert R.; Vishnu, Abhinav
2010-09-13
This paper describes the scalability of linear algebra kernels based on a remote memory access approach. The current approach differs from other linear algebra algorithms by the explicit use of shared memory and remote memory access (RMA) communication rather than message passing. It is suitable for clusters and scalable shared memory systems. The experimental results on large scale systems (Linux-Infiniband cluster, Cray XT) demonstrate consistent performance advantages over the ScaLAPACK suite, the leading implementation of parallel linear algebra algorithms used today. For example, on a Cray XT4 for a matrix size of 102400, our RMA-based matrix multiplication achieved over 55 teraflops while ScaLAPACK's pdgemm measured close to 42 teraflops on 10000 processes.
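The algorithmic pattern behind such RMA-based multiplication is a block decomposition in which each process fetches the operand blocks it needs with one-sided get operations instead of exchanging messages. A serial sketch of the block decomposition itself (plain Python; the one-sided communication is only indicated in comments, not implemented):

```python
def block_matmul(a, b, n, bs):
    """Multiply two n x n matrices (lists of lists) block by block with
    block size bs.  In an RMA implementation, each process owns a set of
    C blocks and 'gets' the required A and B blocks from remote memory."""
    c = [[0.0] * n for _ in range(n)]
    for ib in range(0, n, bs):
        for jb in range(0, n, bs):
            for kb in range(0, n, bs):   # one-sided get of A(ib,kb), B(kb,jb)
                for i in range(ib, min(ib + bs, n)):
                    for j in range(jb, min(jb + bs, n)):
                        s = 0.0
                        for k in range(kb, min(kb + bs, n)):
                            s += a[i][k] * b[k][j]
                        c[i][j] += s
    return c

a = [[float((i * 4 + j) % 7) for j in range(4)] for i in range(4)]
b = [[float((i * 4 + j) % 5) for j in range(4)] for i in range(4)]
c = block_matmul(a, b, 4, 2)
```

Because each C block depends only on one row of A blocks and one column of B blocks, the owning process can overlap its gets with computation, which is where the RMA approach gains over message passing.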
Digital deblurring based on linear-scale differential analysis
Bezzubik, Vitali; Belashenkov, Nikolai; Vdovin, Gleb V.
2014-09-01
A novel method of sharpness improvement is proposed for digital images. This method is realized via linear multi-scale analysis of the source image and subsequent synthesis of the restored image. The analysis comprises the computation of intensity gradient values using special filters that provide simultaneous edge detection and noise filtering. Restoration of image sharpness is achieved by simple subtraction of a discrete recovery function from the blurred image. Said recovery function is calculated as a sum of several normalized gradient responses found by linear multi-scale analysis, using spatial transposition of the gradient response values relative to the points of zero-crossing of the first derivatives of the gradients. The proposed method restores the sharpness of edges in a digital image without an additional spatial noise filtering operation or a priori knowledge of the blur kernel.
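The core idea, subtracting a gradient-derived recovery function from the blurred signal, can be shown in one dimension. This is a simplified, unsharp-masking-style sketch using a second difference as the recovery function, not the paper's multi-scale filter bank:

```python
def sharpen_1d(signal, amount=0.5):
    """Subtract a scaled discrete Laplacian (second difference) from the
    signal.  The Laplacian stands in for the gradient-based recovery
    function: edges get steeper while flat regions are left unchanged."""
    n = len(signal)
    out = list(signal)                      # endpoints are kept as-is
    for i in range(1, n - 1):
        lap = signal[i - 1] - 2.0 * signal[i] + signal[i + 1]
        out[i] = signal[i] - amount * lap
    return out

# A blurred step edge: the transition spans several samples.
blurred_edge = [0.0, 0.0, 0.1, 0.3, 0.7, 0.9, 1.0, 1.0]
sharp = sharpen_1d(blurred_edge)
```

After the subtraction, the central transition of the edge is steeper than in the input, which is exactly the sharpening effect; the paper's method additionally shapes the recovery function across scales so that noise is not amplified.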
Polarization properties of linearly polarized parabolic scaling Bessel beams
Energy Technology Data Exchange (ETDEWEB)
Guo, Mengwen; Zhao, Daomu, E-mail: zhaodaomu@yahoo.com
2016-10-07
The intensity profiles for the dominant polarization, cross polarization, and longitudinal components of modified parabolic scaling Bessel beams with linear polarization are investigated theoretically. The transverse intensity distributions of the three electric components are intimately connected to the topological charge. In particular, the intensity patterns of the cross polarization and longitudinal components near the apodization plane reflect the sign of the topological charge. - Highlights: • We investigated the polarization properties of modified parabolic scaling Bessel beams with linear polarization. • We studied the evolution of transverse intensity profiles for the three components of these beams. • The intensity patterns of the cross polarization and longitudinal components can reflect the sign of the topological charge.
Linear Scaling Density Functional Calculations with Gaussian Orbitals
Scuseria, Gustavo E.
1999-01-01
Recent advances in linear scaling algorithms that circumvent the computational bottlenecks of large-scale electronic structure simulations make it possible to carry out density functional calculations with Gaussian orbitals on molecules containing more than 1000 atoms and 15000 basis functions using current workstations and personal computers. This paper discusses the recent theoretical developments that have led to these advances and demonstrates in a series of benchmark calculations the present capabilities of state-of-the-art computational quantum chemistry programs for the prediction of molecular structure and properties.
Energy Technology Data Exchange (ETDEWEB)
Goncalves, Leandro R.; Santos, Gabriela R.; Menegussi, Gisela; Silva, Marco A.; Passaro, Anderson M.; Rodrigues, Laura N., E-mail: leandrorg11@hotmail.com [Instituto do Cancer do Estado de Sao Paulo (ICESP), Sao Paulo, SP (Brazil)
2013-08-15
Radiotherapy techniques like VMAT allow complex dose distributions, modulating the beam intensity within the irradiation field through the handling of multi-leaf collimators, variations in dose rate, and different gantry rotation speeds and collimator angles, allowing greater conformation of the dose to the tumor volume and a lower dose to healthy tissues. To ensure proper dose delivery, the linear accelerator must be able to monitor and perform all the variations in these parameters simultaneously. In this work, dosimetric tests from the literature that aim to commission, implement and ensure the quality of VMAT treatments were performed at the Institute of Cancer of Sao Paulo State (ICESP). From the results obtained, a quality control program for the linear accelerator studied was established. The linearity and stability of the monitoring ionization chamber response, leaf positioning accuracy, and beam flatness and symmetry for VMAT irradiations were evaluated. The results obtained are in agreement with the literature. It can be concluded that the accelerator studied is able to satisfactorily control the variation of all parameters necessary to perform VMAT treatments. (author)
Graph-based linear scaling electronic structure theory
Niklasson, Anders M N; Negre, Christian F A; Cawkwell, Marc J; Swart, Pieter J; Mohd-Yusof, Jamal; Germann, Timothy C; Wall, Michael E; Bock, Nicolas; Djidjev, Hristo
2016-01-01
We show how graph theory can be combined with quantum theory to calculate the electronic structure of large complex systems. The graph formalism is general and applicable to a broad range of electronic structure methods and materials, including challenging systems such as biomolecules. The methodology combines well-controlled accuracy, low computational cost, and natural low-communication parallelism. This combination addresses substantial shortcomings of linear scaling electronic structure theory, in particular with respect to quantum-based molecular dynamics simulations.
Generation of primordial magnetic fields on linear overdensity scales.
Naoz, Smadar; Narayan, Ramesh
2013-08-02
Magnetic fields appear to be present in all galaxies and galaxy clusters. Recent measurements indicate that a weak magnetic field may be present even in the smooth low density intergalactic medium. One explanation for these observations is that a seed magnetic field was generated by some unknown mechanism early in the life of the Universe, and was later amplified by various dynamos in nonlinear objects like galaxies and clusters. We show that a primordial magnetic field is expected to be generated in the early Universe on purely linear scales through vorticity induced by scale-dependent temperature fluctuations, or equivalently, a spatially varying speed of sound of the gas. Residual free electrons left over after recombination tap into this vorticity to generate magnetic field via the Biermann battery process. Although the battery operates even in the absence of any relative velocity between dark matter and gas at the time of recombination, the presence of such a relative velocity modifies the predicted spatial power spectrum of the magnetic field. At redshifts of order a few tens, we estimate a root mean square field strength of order 10^(-25)-10^(-24) G on comoving scales ~10 kpc. This field, which is generated purely from linear perturbations, is expected to be amplified significantly after reionization, and to be further boosted by dynamo processes during nonlinear structure formation.
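For reference, the Biermann battery source term in the induction equation has the standard textbook form (up to sign convention; this is the generic expression, not the paper's full derivation):

```latex
\frac{\partial \mathbf{B}}{\partial t}
  = \nabla \times (\mathbf{v} \times \mathbf{B})
  + \frac{c\,\nabla n_e \times \nabla p_e}{e\,n_e^{2}}
  = \nabla \times (\mathbf{v} \times \mathbf{B})
  + \frac{c\,k_B}{e}\,\frac{\nabla n_e \times \nabla T_e}{n_e},
```

where the second equality uses $p_e = n_e k_B T_e$. The field is sourced only where the electron density and temperature gradients are misaligned, which is precisely what the scale-dependent temperature fluctuations described above supply.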
Planning under uncertainty solving large-scale stochastic linear programs
Energy Technology Data Exchange (ETDEWEB)
Infanger, G. (Stanford Univ., CA (United States), Dept. of Operations Research; Technische Univ., Vienna (Austria), Inst. fuer Energiewirtschaft)
1992-12-01
For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results of large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
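The flavour of sampling-based two-stage stochastic programming can be conveyed with a toy problem, the newsvendor, solved by sample average approximation over Monte Carlo scenarios. This illustrates scenario sampling only, not the paper's Benders decomposition or importance sampling; the costs and demand distribution are made up:

```python
import random

def saa_newsvendor(cost=1.0, price=2.5, n_scenarios=2000, seed=7):
    """First stage: choose an order quantity q, paying cost*q.
    Second stage (recourse): in each sampled demand scenario, sell
    min(q, demand) at `price`.  Minimize the sample-average total cost.
    The optimum approaches the (price-cost)/price = 0.6 quantile of demand."""
    rng = random.Random(seed)
    demands = [rng.uniform(0.0, 100.0) for _ in range(n_scenarios)]

    def avg_cost(q):
        revenue = sum(price * min(q, d) for d in demands) / n_scenarios
        return cost * q - revenue

    return min(range(101), key=avg_cost)   # search integer orders 0..100

q_star = saa_newsvendor()
print("order quantity:", q_star)           # close to 60 for uniform demand
```

A real two-stage stochastic linear program replaces the brute-force search with an LP over first-stage variables plus per-scenario recourse variables, and the decomposition methods in the paper exploit that block structure instead of enumerating.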
Design techniques for large scale linear measurement systems
Energy Technology Data Exchange (ETDEWEB)
Candy, J.V.
1979-03-01
Techniques to design measurement schemes for systems modeled by large scale linear time invariant systems, i.e., physical systems modeled by a large number (> 5) of ordinary differential equations, are described. The techniques are based on transforming the physical system model to a coordinate system facilitating the design and then transforming back to the original coordinates. An example of a three-stage, four-species, extraction column used in the reprocessing of spent nuclear fuel elements is presented. The basic ideas are briefly discussed in the case of noisy measurements. An example using a plutonium nitrate storage vessel (reprocessing) with measurement uncertainty is also presented.
Computational alanine scanning with linear scaling semiempirical quantum mechanical methods.
Diller, David J; Humblet, Christine; Zhang, Xiaohua; Westerhoff, Lance M
2010-08-01
Alanine scanning is a powerful experimental tool for understanding the key interactions in protein-protein interfaces. Linear scaling semiempirical quantum mechanical calculations are now sufficiently fast and robust to allow meaningful calculations on large systems such as proteins, RNA and DNA. In particular, they have proven useful in understanding protein-ligand interactions. Here we ask the question: can these linear scaling quantum mechanical methods developed for protein-ligand scoring be useful for computational alanine scanning? To answer this question, we assembled 15 protein-protein complexes with available crystal structures and sufficient alanine scanning data. In all, the data set contains ΔΔGs for 400 single-point alanine mutations of these 15 complexes. We show that with only one adjusted parameter the quantum mechanics-based methods outperform both buried accessible surface area and a potential of mean force and compare favorably to a variety of published empirical methods. Finally, we closely examined the outliers in the data set and discuss some of the challenges that arise from this examination.
The clustering of dark matter haloes: scale-dependent bias on quasi-linear scales
Jose, Charles; Lacey, Cedric G.; Baugh, Carlton M.
2016-11-01
We investigate the spatial clustering of dark matter haloes, collapsing from 1σ-4σ fluctuations, in the redshift range 0-5 using N-body simulations. The halo bias of high redshift haloes (z ≥ 2) is found to be strongly nonlinear and scale dependent on quasi-linear scales that are larger than their virial radii (0.5-10 Mpc h-1). However, at lower redshifts, the scale dependence of nonlinear bias is weaker and is of the order of a few per cent on quasi-linear scales at z ˜ 0. We find that the redshift evolution of the scale-dependent bias of dark matter haloes can be expressed as a function of four physical parameters: the peak height of haloes, the nonlinear matter correlation function at the scale of interest, an effective power-law index of the rms linear density fluctuations and the matter density of the universe at the given redshift. This suggests that the scale dependence of halo bias is not a universal function of the dark matter power spectrum, which is commonly assumed. We provide a fitting function for the scale-dependent halo bias as a function of these four parameters. Our fit reproduces the simulation results to an accuracy of better than 4 per cent over the redshift range 0 ≤ z ≤ 5. We also extend our model by expressing the nonlinear bias as a function of the linear matter correlation function. It is important to incorporate our results into the clustering models of dark matter haloes at any redshift, including those hosting early generations of stars and galaxies before reionization.
Linear Estimation of Location and Scale Parameters Using Partial Maxima
Papadatos, Nickos
2010-01-01
Consider an i.i.d. sample X^*_1,X^*_2,...,X^*_n from a location-scale family, and assume that the only available observations consist of the partial maxima (or minima) sequence, X^*_{1:1},X^*_{2:2},...,X^*_{n:n}, where X^*_{j:j}=max{X^*_1,...,X^*_j}. This kind of truncation appears in several circumstances, including best performances in athletics events. In the case of partial maxima, the form of the BLUEs (best linear unbiased estimators) is quite similar to the form of the well-known Lloyd's (1952, Least-squares estimation of location and scale parameters using order statistics, Biometrika, vol. 39, pp. 88-95) BLUEs, based on (the sufficient sample of) order statistics, but, in contrast to the classical case, their consistency is no longer obvious. The present paper is mainly concerned with the scale parameter, showing that the variance of the partial maxima BLUE is at most of order O(1/log n), for a wide class of distributions.
Order reduction of large-scale linear oscillatory system models
Energy Technology Data Exchange (ETDEWEB)
Trudnowksi, D.J. (Pacific Northwest Lab., Richland, WA (United States))
1994-02-01
Eigen analysis and signal analysis techniques of deriving representations of power system oscillatory dynamics result in very high-order linear models. In order to apply many modern control design methods, the models must be reduced to a more manageable order while preserving essential characteristics. Presented in this paper is a model reduction method well suited for large-scale power systems. The method searches for the optimal subset of the high-order model that best represents the system. An Akaike information criterion is used to define the optimal reduced model. The method is first presented, and then examples of applying it to Prony analysis and eigenanalysis models of power systems are given.
Linear scaling calculation of band edge states and doped semiconductors.
Xiang, H J; Yang, Jinlong; Hou, J G; Zhu, Qingshi
2007-06-28
Linear scaling methods provide the total energy but no energy levels or canonical wave functions. From the density matrix computed through density matrix purification methods, we propose an order-N [O(N)] method for calculating both the energies and wave functions of band edge states, which are important for optical properties and chemical reactions. In addition, we also develop an O(N) algorithm to deal with doped semiconductors, based on the O(N) method for band edge state calculation. We illustrate the O(N) behavior of the new method by applying it to boron nitride (BN) nanotubes and BN nanotubes with an adsorbed hydrogen atom. The band gaps of various BN nanotubes are investigated systematically, and the acceptor levels of BN nanotubes with an isolated adsorbed H atom are computed. Our methods are simple, robust, and especially suited for application in self-consistent field electronic structure theory.
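Density matrix purification, in its simplest (McWeeny) form, is the iteration P ← 3P² − 2P³, which drives the eigenvalues of a trial density matrix toward 0 or 1. A minimal dense 2×2 illustration in plain Python (production O(N) codes apply the same step to sparse, truncated matrices; the starting matrix below is a hypothetical guess with eigenvalues between 0 and 1):

```python
def matmul2(a, b):
    """2x2 matrix product for lists of lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mcweeny_step(p):
    """One McWeeny purification step: P <- 3 P^2 - 2 P^3."""
    p2 = matmul2(p, p)
    p3 = matmul2(p2, p)
    return [[3 * p2[i][j] - 2 * p3[i][j] for j in range(2)] for i in range(2)]

# Hypothetical trial matrix: symmetric, trace 1, eigenvalues strictly
# inside (0, 1), one of them above 1/2 (one occupied state).
p = [[0.65, 0.15], [0.15, 0.35]]
for _ in range(8):
    p = mcweeny_step(p)
# p is now (numerically) idempotent: P^2 = P, trace = number of electrons.
```

Eigenvalues above 1/2 flow to 1 and those below flow to 0 under x → 3x² − 2x³, so the converged P projects onto the occupied subspace, and its trace equals the occupation count without ever diagonalizing.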
Scaling laws for e+/e- linear colliders
Delahaye, J P; Raubenheimer, T O; Wilson, Ian H
1999-01-01
Design studies of a future TeV e+e- Linear Collider (TLC) are presently being made by five major laboratories within the framework of a world-wide collaboration. A figure of merit is defined which enables an objective comparison of these different designs. This figure of merit is shown to depend only on a small number of parameters. General scaling laws for the main beam parameters and linac parameters are derived and prove to be very effective when used as guidelines to optimize the linear collider design. By adopting appropriate parameters for beam stability, the figure of merit becomes nearly independent of accelerating gradient and RF frequency of the accelerating structures. In spite of the strong dependence of the wake fields on frequency, the single-bunch emittance blow-up during acceleration along the linac is also shown to be independent of the RF frequency when using equivalent trajectory correction schemes. In this situation, beam acceleration using high-frequency structures becomes ve...
Scaling Laws for Normal Conducting $e^{\\pm}$ Linear Colliders
Delahaye, J P; Raubenheimer, T O; Wilson, Ian H
1998-01-01
Design studies of a future TeV e± Linear Collider (TLC) are presently being made by five major laboratories within the framework of a world-wide collaboration. A figure of merit is defined which enables an objective comparison of these different designs. This figure of merit is shown to depend only on a small number of parameters. General scaling laws for the main beam parameters and linac parameters are derived and prove to be very effective when used as guidelines to optimize the linear collider design. By adopting appropriate parameters for beam stability, the figure of merit becomes nearly independent of accelerating gradient and RF frequency of the accelerating structures. In spite of the strong dependence of the wake-fields on frequency, the single bunch emittance preservation during acceleration along the linac is also shown to be independent of the RF frequency when using equivalent trajectory correction schemes. In this situation, beam acceleration using high frequency structures becomes very ...
Parameter Scaling in Non-Linear Microwave Tomography
DEFF Research Database (Denmark)
Jensen, Peter Damsgaard; Rubæk, Tonny; Talcoth, Oskar;
2012-01-01
Non-linear microwave tomographic imaging of the breast is a challenging computational problem. The breast is heterogeneous and contains several high-contrast and lossy regions, resulting in large differences in the measured signal levels. This implies that special care must be taken when the imaging problem is formulated. Under such conditions, microwave imaging systems will most often be considerably more sensitive to changes in the electromagnetic properties in certain regions of the breast. The result is that the parameters might not be reconstructed correctly in the less sensitive regions... introduced as a measure of the sensitivity. The scaling of the parameters is shown to improve performance of the microwave imaging system when applied to reconstruction of images from 2-D simulated data and measurement data.
Barber, A J; Valageas, P; Barber, Andrew J.; Munshi, Dipak; Valageas, Patrick
2004-01-01
Weak lensing convergence can be used directly to map and probe the dark mass distribution in the universe. Building on earlier studies, we recall how the statistics of the convergence field are related to the statistics of the underlying mass distribution, in particular to the many-body density correlations. We describe two model-independent approximations which provide two simple methods to compute the probability distribution function, pdf, of the convergence. We apply one of these to the case where the density field can be described by a log-normal pdf. Next, we discuss two hierarchical models for the high-order correlations which allow one to perform exact calculations and evaluate the previous approximations in such specific cases. Finally, we apply these methods to a very simple model for the evolution of the density field from linear to highly non-linear scales. Comparisons with the results obtained from numerical simulations, obtained from a number of different realizations, show excellent agreement w...
Institute of Scientific and Technical Information of China (English)
李莲明; 李治平; 车艳
2011-01-01
When reservoir rock deforms as formation pressure declines, quantifying the non-linear elastic volumetric strain of the rock is difficult. Based on the power relationship between the elastic modulus of non-linearly deforming rock and the effective pressure, this paper establishes theoretical expressions relating rock volumetric strain to effective pressure under both surface experimental conditions and formation conditions, proposes a new "Trial Calculation & Iteration" method for studying the non-linear elastic volumetric strain of rock quantitatively, calculates the a and b values of the rock's non-linear elastic deformation constants from experimental data relating the volumetric strain to the effective pressure, and forecasts the non-linear elastic volumetric strain quantitatively. Application of the method indicates that the relative errors between the predicted and experimental values of the non-linear rock volumetric strain, the rock porosity under surface experimental conditions, and the rock porosity under declining formation pressure are no more than 7.39%, 0.80% and 3.92%, respectively, with good consistency, and that it is possible to convert experimental data obtained under surface conditions to reservoir conditions. The method provides an effective way to calculate the non-linear elastic volumetric strain of rock quantitatively.
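The power relationship invoked in this abstract can be made concrete. Assuming a bulk modulus of the form K(p) = a·p^b in effective pressure p (the form and the fitted constants a, b are a sketch consistent with the abstract, not quoted from the paper), the volumetric strain follows by integrating the compressibility:

```latex
% Sketch: volumetric strain from a power-law modulus K(p) = a p^b,
% with a, b the experimentally fitted deformation constants and b \neq 1.
\varepsilon_v \;=\; \int_{p_0}^{p} \frac{\mathrm{d}p'}{K(p')}
             \;=\; \frac{1}{a}\int_{p_0}^{p} p'^{-b}\,\mathrm{d}p'
             \;=\; \frac{p^{\,1-b} - p_0^{\,1-b}}{a\,(1-b)}
```

Fitting a and b to measured (strain, pressure) pairs and inverting a relation of this kind is presumably what the "Trial Calculation & Iteration" procedure automates; converting surface data to formation conditions then amounts to evaluating the same expression at the two pressure states.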
Scaling Behaviour of Diffusion Limited Aggregation with Linear Seed
Institute of Scientific and Technical Information of China (English)
TANG Qiang; TIAN Ju-Ping; YAO Kai-Lun
2006-01-01
We present a computer model of diffusion limited aggregation with a linear seed. Clusters with varying linear seed lengths are simulated, and their pattern structure, fractal dimension and multifractal spectrum are obtained. The simulation results show that the linear seed length has little effect on the pattern structure of the aggregation clusters when it is comparatively short. As the length increases, the linear seed has a stronger effect on the pattern structure, while the dimension Df decreases. When the linear seed is long, the corresponding pattern structure is cross-like: the longer the linear seed, the more pronounced the cross-like structure, with more particles clustering at the two ends of the linear seed and along the direction perpendicular to its centre. Furthermore, the multifractal spectrum curve becomes lower and the range of singularity strengths narrower. The longer the linear seed, the less irregular and nonuniform the pattern becomes.
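The simulation described in this abstract is straightforward to reproduce in outline. The sketch below is a generic on-lattice DLA with a horizontal linear seed, not the authors' code; the seed length, walker count and launch radius are illustrative choices:

```python
import math
import random

def dla_linear_seed(seed_len=5, n_particles=40, launch_radius=15, rng=None):
    """On-lattice diffusion limited aggregation grown from a linear seed.

    A horizontal line of `seed_len` sites centred at the origin seeds the
    cluster. Walkers are released on a circle of radius `launch_radius`,
    perform a simple random walk, stick on first contact with the cluster,
    and are relaunched if they wander past twice the launch radius.
    """
    rng = rng or random.Random(1)
    cluster = {(0, y) for y in range(-(seed_len // 2), seed_len - seed_len // 2)}
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    escape2 = (2 * launch_radius) ** 2
    while len(cluster) < seed_len + n_particles:
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x = int(launch_radius * math.cos(theta))
        y = int(launch_radius * math.sin(theta))
        while True:
            dx, dy = rng.choice(moves)
            x, y = x + dx, y + dy
            if x * x + y * y > escape2:      # escaped: relaunch the walker
                break
            if any((x + mx, y + my) in cluster for mx, my in moves):
                cluster.add((x, y))          # first contact: stick
                break
    return cluster
```

Measuring the fractal dimension or multifractal spectrum of the resulting cluster, as the paper does, would be a separate box-counting step on the returned site set.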
Daubechies wavelets for linear scaling density functional theory
Energy Technology Data Exchange (ETDEWEB)
Mohr, Stephan [Institut für Physik, Universität Basel, Klingelbergstr. 82, 4056 Basel (Switzerland); Univ. Grenoble Alpes, INAC-SP2M, F-38000 Grenoble, France and CEA, INAC-SP2M, F-38000 Grenoble (France); Ratcliff, Laura E.; Genovese, Luigi; Caliste, Damien; Deutsch, Thierry [Univ. Grenoble Alpes, INAC-SP2M, F-38000 Grenoble, France and CEA, INAC-SP2M, F-38000 Grenoble (France); Boulanger, Paul [Univ. Grenoble Alpes, INAC-SP2M, F-38000 Grenoble, France and CEA, INAC-SP2M, F-38000 Grenoble (France); Institut Néel, CNRS and Université Joseph Fourier, B.P. 166, 38042 Grenoble Cedex 09 (France); Goedecker, Stefan [Institut für Physik, Universität Basel, Klingelbergstr. 82, 4056 Basel (Switzerland)
2014-05-28
We demonstrate that Daubechies wavelets can be used to construct a minimal set of optimized localized adaptively contracted basis functions in which the Kohn-Sham orbitals can be represented with an arbitrarily high, controllable precision. Ground state energies and the forces acting on the ions can be calculated in this basis with the same accuracy as if they were calculated directly in a Daubechies wavelets basis, provided that the amplitude of these adaptively contracted basis functions is sufficiently small on the surface of the localization region, which is guaranteed by the optimization procedure described in this work. This approach reduces the computational costs of density functional theory calculations, and can be combined with sparse matrix algebra to obtain linear scaling with respect to the number of electrons in the system. Calculations on systems of 10 000 atoms or more thus become feasible in a systematic basis set with moderate computational resources. Further computational savings can be achieved by exploiting the similarity of the adaptively contracted basis functions for closely related environments, e.g., in geometry optimizations or combined calculations of neutral and charged systems.
Linearly Scaling 3D Fragment Method for Large-Scale Electronic Structure Calculations
Energy Technology Data Exchange (ETDEWEB)
Wang, Lin-Wang; Lee, Byounghak; Shan, Hongzhang; Zhao, Zhengji; Meza, Juan; Strohmaier, Erich; Bailey, David H.
2008-07-01
We present a new linearly scaling three-dimensional fragment (LS3DF) method for large scale ab initio electronic structure calculations. LS3DF is based on a divide-and-conquer approach, which incorporates a novel patching scheme that effectively cancels out the artificial boundary effects due to the subdivision of the system. As a consequence, the LS3DF program yields essentially the same results as direct density functional theory (DFT) calculations. The fragments of the LS3DF algorithm can be calculated separately with different groups of processors. This leads to almost perfect parallelization on tens of thousands of processors. After code optimization, we were able to achieve 35.1 Tflop/s, which is 39% of the theoretical speed on 17,280 Cray XT4 processor cores. Our 13,824-atom ZnTeO alloy calculation runs 400 times faster than a direct DFT calculation, even presuming that the direct DFT calculation can scale well up to 17,280 processor cores. These results demonstrate the applicability of the LS3DF method to material simulations, the advantage of using linearly scaling algorithms over conventional O(N³) methods, and the potential for petascale computation using the LS3DF method.
Linear-scaling computation of excited states in time-domain
Institute of Scientific and Technical Information of China (English)
YAM ChiYung; CHEN GuanHua
2014-01-01
The applicability of quantum mechanical methods is severely limited by their poor scaling. To circumvent the problem, linear-scaling methods for quantum mechanical calculations have been developed. The physical basis of linear-scaling methods is locality in quantum mechanics: the properties or observables of a system are only weakly influenced by factors spatially far apart. Besides the substantial efforts spent on devising linear-scaling methods for the ground state, there is also growing interest in the development of linear-scaling methods for excited states. This review gives an overview of linear-scaling approaches for excited states solved in the real time domain.
A Linear Scaling Three Dimensional Fragment Method for Large Scale Electronic Structure Calculations
Energy Technology Data Exchange (ETDEWEB)
Wang, Lin-Wang; Zhao, Zhengji; Meza, Juan
2007-07-26
We present a novel linear scaling ab initio total energy electronic structure calculation method, which is simple to implement, easy to parallelize, and produces essentially the same results as the direct ab initio method, while it can be thousands of times faster. Using this method, we have studied the dipole moments of CdSe quantum dots, and found both significant bulk and surface contributions. The bulk dipole contribution cannot simply be estimated from the bulk spontaneous polarization value by a proportional volume factor. Instead, it has a geometry dependent screening effect. The dipole moment also produces a strong internal electric field which induces a strong electron-hole separation.
The origin of linear scaling Fock matrix calculation with density prescreening
Energy Technology Data Exchange (ETDEWEB)
Mitin, Alexander V., E-mail: mitin@phys.chem.msu.ru [Chemistry Department, Moscow State University, Moscow, 119991 (Russian Federation)
2015-12-31
A theorem was proven which states that the number of nonzero two-electron integrals scales linearly with the number of basis functions for large molecular systems. This makes it possible to show that the linear scaling property of Fock matrix calculation with density prescreening arises from the linear scaling of both the number of nonzero two-electron integrals and the number of leading density matrix elements. This property is reinforced by employing the density prescreening technique, and using density difference prescreening further improves the linear scaling of the Fock matrix calculation method. As a result, the linear scaling regime of the Fock matrix calculation can set in at 2000-3000 basis functions, depending on the basis function type used in the molecular calculations. It was also shown that the conventional algorithm of Fock matrix calculation from stored nonzero two-electron integrals with density prescreening possesses the linear scaling property.
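The counting argument behind this kind of theorem can be illustrated with a toy model. Prescreening rests on the Cauchy-Schwarz bound |(ij|kl)| ≤ Q_ij·Q_kl with Q_ij = (ij|ij)^(1/2); for spatially extended systems Q_ij decays rapidly with the distance between basis-function centres, so the number of significant pairs grows linearly. The sketch below (Gaussian-type decay on a 1D chain; the decay constant, spacing and threshold are illustrative assumptions, not values from the paper) counts surviving pairs:

```python
import math

def significant_pairs(n, spacing=1.0, alpha=0.5, tau=1e-8):
    """Count basis-function pairs surviving a Schwarz-type screening test.

    Model system: Gaussian-type functions on a 1D chain with unit spacing.
    The Schwarz factor is modelled as Q_ij ~ exp(-alpha * r_ij^2); pairs with
    Q_ij < tau can never contribute a significant two-electron integral.
    """
    centers = [i * spacing for i in range(n)]
    count = 0
    for i in range(n):
        for j in range(i, n):
            q = math.exp(-alpha * (centers[i] - centers[j]) ** 2)
            if q >= tau:
                count += 1
    return count
```

Because the screening radius is fixed, each centre keeps a bounded number of partners, so the pair count grows linearly in n; restricting the quadruples (ij|kl) further by the density matrix elements is what pushes the full Fock build toward linear scaling.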
Energy Technology Data Exchange (ETDEWEB)
Davies, R.R.; Williams, Guy B. [University of Cambridge, Department of Clinical Neurosciences, Cambridge (United Kingdom); Scahill, Victoria L.; Graham, Kim S. [Cardiff University, MRC Cognition and Brain Sciences Unit, Cambridge and Wales Institute of Cognitive Neuroscience, School of Psychology, Cardiff (United Kingdom); Graham, Andrew [University of Cambridge, Department of Clinical Neurosciences, Cambridge (United Kingdom); Cardiff University, MRC Cognition and Brain Sciences Unit, Cambridge and Wales Institute of Cognitive Neuroscience, School of Psychology, Cardiff (United Kingdom); Hodges, John R. [University of Cambridge, Department of Clinical Neurosciences, Cambridge (United Kingdom); Cardiff University, MRC Cognition and Brain Sciences Unit, Cambridge and Wales Institute of Cognitive Neuroscience, School of Psychology, Cardiff (United Kingdom); Prince of Wales Medical Research Institute, Cognitive Neurology, Sydney, NSW (Australia)
2009-08-15
We aimed to devise a rating method for key frontal and temporal brain regions validated against quantitative volumetric methods and applicable to a range of dementia syndromes. Four standardised coronal MR images from 36 subjects encompassing controls and cases with Alzheimer's disease (AD) and frontotemporal dementia (FTD) were used. After initial pilot studies, 15 regions produced good intra- and inter-rater reliability. We then validated the ratings against manual volumetry and voxel-based morphometry (VBM) and compared ratings across the subject groups. Validation against both manual volumetry (for both frontal and temporal lobes), and against whole brain VBM, showed good correlation with visual ratings for the majority of the brain regions. Comparison of rating scores across disease groups showed involvement of the anterior fusiform gyrus, anterior hippocampus and temporal pole in semantic dementia, while anterior cingulate and orbitofrontal regions were involved in behavioural variant FTD. This simple visual rating can be used as an alternative to highly technical methods of quantification, and may be superior when dealing with single cases or small groups.
DEFF Research Database (Denmark)
Wang, Zhaohui; Folsø, Rasmus; Bondini, Francesca;
1999-01-01
… full-scale measurements have been performed on board a 128 m monohull fast ferry. This paper deals with the results from these full-scale measurements. The primary results considered are pitch motion, midship vertical bending moment and vertical acceleration at the bow. … presents the results from the performed full-scale measurements, and compares these to results from calculations performed with three different software systems: I-SHIP, SGN80 and SHIPSTAR. SGN80 is a linear strip-theory software system in the frequency domain; I-SHIP is a more advanced system, which allows the user to compare several linear and nonlinear strip theories; and SHIPSTAR is an advanced non-linear time-domain strip-theory sea-keeping code. The calculations agree well with the measurements at Fn=0.32, whereas the agreement is less satisfactory at Fn=0.55. Various reasons for this disagreement … Previous comparisons between …
Directory of Open Access Journals (Sweden)
B Sarkar
2016-01-01
Introduction: Linear accelerator (Linac) based stereotactic radiosurgery (SRS) and stereotactic radiotherapy (SRT) using volumetric modulated arc therapy (VMAT) have been used for treating small intracranial lesions. Recent developments in Linacs, such as inbuilt micro multileaf collimators (MLC) and flattening filter free (FFF) beams, are intended to provide better dose conformity and faster delivery when using the VMAT technique. This study aimed to compare the dosimetric outcomes and monitor units (MUs) of stereotactic treatment plans for different commercially available MLC models and beam profiles. Materials and Methods: Ten patients with 12 planning target volumes (PTVs)/gross target volumes (GTVs) who received SRS/SRT treatment in our clinic using the Axesse Linac (considered the reference arm/gold standard) were included in this study. The test arms comprised plans using Elekta Agility with FFF, Elekta Agility with a flattened beam, Elekta APEX, Varian Millennium 120, Varian Millennium 120HD, and Elekta Synergy in the Monaco treatment planning system. Planning constraints and calculation grid spacing were not altered in the test plans. To objectively evaluate the efficacy of each MLC-beam model, the resultant dosimetric outcomes were subtracted from the reference arm parameters. Results: V95%, V100%, V105%, D1%, maximum dose, and mean dose of PTV/GTV showed a maximum inter-MLC-beam-model variation of 1.5% and 2% for PTV and GTV, respectively. Average PTV conformity index and heterogeneity index varied in the ranges 0.56-0.63 and 1.08-1.11, respectively. Mean dose difference (excluding Axesse) for all organs varied between 1.1 cGy and 74.8 cGy (mean dose = 6.1 cGy, standard deviation [SD] = 26.9 cGy) and between 1.7 cGy and 194.5 cGy (mean dose = 16.1 cGy, SD = 57.2 cGy) for single and multiple fractions, respectively. Conclusion: The dosimetry of VMAT-based SRS/SRT treatment plans had minimal dependence on MLC and beam model variations. All tested MLC…
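The conformity and heterogeneity indices compared in this study have several definitions in the literature, and the abstract does not specify which were used. A minimal sketch of two common choices (an RTOG-style CI and an ICRU-83-style HI), operating on voxelised dose and PTV mask arrays, is:

```python
import numpy as np

def conformity_index(dose, ptv_mask, rx):
    """RTOG-style CI: prescription isodose volume / PTV volume.
    (Index definitions vary between planning systems; this is one choice.)"""
    return float((dose >= rx).sum()) / float(ptv_mask.sum())

def heterogeneity_index(dose, ptv_mask):
    """HI as D2% / D98% inside the PTV (one ICRU-83-style definition);
    D2% is the 98th percentile of PTV dose, D98% the 2nd percentile."""
    d = dose[ptv_mask]
    return np.percentile(d, 98) / np.percentile(d, 2)
```

A perfectly conformal, perfectly homogeneous plan gives CI = HI = 1; the ranges 0.56-0.63 and 1.08-1.11 reported above quantify how far the studied plans sit from that ideal.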
J.F. Sturm; J. Zhang (Shuzhong)
1996-01-01
In this paper we introduce a primal-dual affine scaling method. The method uses a search direction obtained by minimizing the duality gap over a linearly transformed conic section. This direction coincides with none of the known primal-dual affine scaling directions (Jansen et al., 1993; Mon
Large space-time scale behavior of linearly interacting diffusions
Swart, J.M.
1999-01-01
This dissertation in mathematics is devoted to systems consisting of a countably infinite collection of diffusion processes with a linear attractive interaction. Such systems have been used in population biology as a stochastic model for the distribution of genes over a population, or for the size o
Scaling and linear response in the GOY model
Kadanoff, Leo; Lohse, Detlef; Schörghofer, Norbert
1997-01-01
The GOY model is a model for turbulence in which two conserved quantities cascade up and down a linear array of shells. When the viscosity parameter ν is small, the model has a qualitative behavior similar to the Kolmogorov theories of turbulence. Here a static solution to th…
Input-output description of linear systems with multiple time-scales
Madriz, R. S.; Sastry, S. S.
1984-01-01
It is pointed out that the study of systems evolving at multiple time-scales is simplified by studying reduced-order models of these systems valid at specific time-scales. The present investigation is concerned with an extension of results on the time-scale decomposition of autonomous systems to that of input-output systems. The results are employed to study conditions under which positive realness of a transfer function is preserved under singular perturbation. Attention is given to the perturbation theory for linear operators, the multiple time-scale structure of autonomous linear systems, the input-output description of two time-scale linear systems, the positive realness of two time-scale systems, and multiple time-scale linear systems.
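The time-scale decomposition referred to here is conventionally set up in the standard singularly perturbed form; the notation below is the textbook convention, not taken from the paper itself:

```latex
% Standard two-time-scale (singularly perturbed) linear system:
\dot{x} = A_{11}x + A_{12}z + B_{1}u, \qquad
\varepsilon\,\dot{z} = A_{21}x + A_{22}z + B_{2}u .
% Setting \varepsilon = 0 (with A_{22} invertible) eliminates the fast
% state z and yields the reduced model valid at the slow time-scale:
\dot{x}_s = \left(A_{11} - A_{12}A_{22}^{-1}A_{21}\right)x_s
          + \left(B_{1} - A_{12}A_{22}^{-1}B_{2}\right)u .
```

Whether properties such as positive realness of a transfer function survive the passage from the full system to this reduced slow model is exactly the kind of question the paper examines under singular perturbation.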
On the non-linear scale of cosmological perturbation theory
Energy Technology Data Exchange (ETDEWEB)
Blas, Diego [Theory Division, CERN, 1211 Geneva (Switzerland); Garny, Mathias; Konstandin, Thomas, E-mail: diego.blas@cern.ch, E-mail: mathias.garny@desy.de, E-mail: Thomas.Konstandin@desy.de [DESY, Notkestr. 85, 22607 Hamburg (Germany)
2013-09-01
We discuss the convergence of cosmological perturbation theory. We prove that the polynomial enhancement of the non-linear corrections expected from the effects of soft modes is absent in equal-time correlators like the power or bispectrum. We first show this at leading order by resumming the most important corrections of soft modes to an arbitrary skeleton of hard fluctuations. We derive the same result in the eikonal approximation, which also allows us to show the absence of enhancement at any order. We complement the proof by an explicit calculation of the power spectrum at two-loop order, and by further numerical checks at higher orders. Using these insights, we argue that the modification of the power spectrum from soft modes corresponds at most to logarithmic corrections at any order in perturbation theory. Finally, we discuss the asymptotic behavior in the large and small momentum regimes and identify the expansion parameter pertinent to non-linear corrections.
On the non-linear scale of cosmological perturbation theory
Energy Technology Data Exchange (ETDEWEB)
Blas, Diego [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Garny, Mathias; Konstandin, Thomas [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2013-04-15
We discuss the convergence of cosmological perturbation theory. We prove that the polynomial enhancement of the non-linear corrections expected from the effects of soft modes is absent in equal-time correlators like the power or bispectrum. We first show this at leading order by resumming the most important corrections of soft modes to an arbitrary skeleton of hard fluctuations. We derive the same result in the eikonal approximation, which also allows us to show the absence of enhancement at any order. We complement the proof by an explicit calculation of the power spectrum at two-loop order, and by further numerical checks at higher orders. Using these insights, we argue that the modification of the power spectrum from soft modes corresponds at most to logarithmic corrections. Finally, we discuss the asymptotic behavior in the large and small momentum regimes and identify the expansion parameter pertinent to non-linear corrections.
Non-linear variability in geophysics scaling and fractals
Lovejoy, S
1991-01-01
consequences of broken symmetry (here parity) are studied. In this model, turbulence is dominated by a hierarchy of helical (corkscrew) structures. The authors stress the unique features of such pseudo-scalar cascades as well as the extreme nature of the resulting (intermittent) fluctuations. Intermittent turbulent cascades were also the theme of a paper by us in which we show that universality classes exist for continuous cascades (in which an infinite number of cascade steps occur over a finite range of scales). This result is the multiplicative analogue of the familiar central limit theorem for the addition of random variables. Finally, an interesting paper by Pasmanter investigates the scaling associated with anomalous diffusion in a chaotic tidal basin model involving a small number of degrees of freedom. Although the statistical literature is replete with techniques for dealing with those random processes characterized by both exponentially decaying (non-scaling) autocorrelations and exponentially decaying…
Non-linear scaling of a musculoskeletal model of the lower limb using statistical shape models.
Nolte, Daniel; Tsang, Chui Kit; Zhang, Kai Yu; Ding, Ziyun; Kedgley, Angela E; Bull, Anthony M J
2016-10-03
Accurate muscle geometry for musculoskeletal models is important to enable accurate subject-specific simulations. Commonly, linear scaling is used to obtain individualised muscle geometry. More advanced methods include non-linear scaling using segmented bone surfaces and manual or semi-automatic digitisation of muscle paths from medical images. In this study, a new scaling method combining non-linear scaling with reconstructions of bone surfaces using statistical shape modelling is presented. Statistical Shape Models (SSMs) of the femur and tibia/fibula were used to reconstruct bone surfaces of nine subjects. Reference models were created by morphing manually digitised muscle paths to mean shapes of the SSMs using non-linear transformations, and inter-subject variability was calculated. Subject-specific models of muscle attachment and via points were created from three reference models. The accuracy was evaluated by calculating the differences between the scaled and manually digitised models. The points defining the muscle paths showed large inter-subject variability at the thigh and shank, up to 26 mm; this was found to limit the accuracy of all studied scaling methods. Errors in the subject-specific muscle point reconstructions of the thigh could be decreased by 9% to 20% by using the non-linear scaling compared to a typical linear scaling method. We conclude that the proposed non-linear scaling method is more accurate than linear scaling methods. Thus, when combined with the ability to reconstruct bone surfaces from incomplete or scattered geometry data using statistical shape models, our proposed method is an alternative to linear scaling methods.
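For contrast with the paper's SSM-based approach, the "typical linear scaling" baseline it improves upon can be sketched as a least-squares affine map fitted to corresponding bone landmarks and applied to the muscle attachment and via points (the landmark arrays here are illustrative; the paper's actual pipeline is not reproduced):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map A (3x3), t (3,) with dst ~ src @ A.T + t.
    src, dst: (n, 3) arrays of corresponding bone landmarks, n >= 4."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous coordinates
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (4, 3) stacked solution
    return M[:3].T, M[3]

def scale_points(points, A, t):
    """Apply the fitted transform to muscle attachment/via points."""
    return points @ A.T + t
```

A non-linear scaling replaces this single global affine map with a spatially varying warp (in the paper, driven by SSM-reconstructed bone surfaces), which is what recovers the 9-20% accuracy gain reported above.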
Critical scaling in hidden state inference for linear Langevin dynamics
Bravi, Barbara; Sollich, Peter
2016-01-01
We consider the problem of inferring the dynamics of unknown (i.e. hidden) nodes from a set of observed trajectories and we study analytically the average prediction error given by the Extended Plefka Expansion applied to it, as presented in [1]. We focus on a stochastic linear dynamics of continuous degrees of freedom interacting via random Gaussian couplings in the infinite network size limit. The expected error on the hidden time courses can be found as the equal-time hidden-to-hidden cova...
Volumetric Virtual Environments
Institute of Scientific and Technical Information of China (English)
HE Taosong
2000-01-01
Driven by fast development of both virtual reality and volume visualization, we discuss some critical techniques towards building a volumetric VR system, specifically the modeling, rendering, and manipulation of a volumetric scene. Techniques such as voxel-based object simplification, accelerated volume rendering, fast stereo volume rendering, and volumetric "collision detection" are introduced and improved, with the idea of demonstrating the possibilities and potential benefits of incorporating volumetric models into VR systems.
Localized density matrix minimization and linear scaling algorithms
Lai, Rongjie
2015-01-01
We propose a convex variational approach to compute localized density matrices for both zero temperature and finite temperature cases, by adding an entry-wise $\ell_1$ regularization to the free energy of the quantum system. Based on the fact that the density matrix decays exponentially away from the diagonal for insulating systems or systems at finite temperature, the proposed $\ell_1$ regularized variational method provides a natural way to approximate the original quantum system. We provide theoretical analysis of the approximation behavior and design numerical algorithms with guaranteed convergence based on Bregman iteration. More importantly, the $\ell_1$ regularized system naturally leads to localized density matrices with banded structure, which enables us to develop approximation algorithms that find the localized density matrices with computational cost depending linearly on the problem size.
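The mechanism by which entry-wise $\ell_1$ regularization produces banded density matrices is visible in its proximal operator, entrywise soft-thresholding. The sketch below illustrates that single ingredient on a matrix with exponential off-diagonal decay (the decay rate and threshold are illustrative; this is not the paper's full Bregman iteration):

```python
import numpy as np

def soft_threshold(M, mu):
    """Entrywise soft-thresholding, the proximal operator of mu * ||M||_1."""
    return np.sign(M) * np.maximum(np.abs(M) - mu, 0.0)

# A density-matrix-like model with exponential off-diagonal decay:
n = 20
i = np.arange(n)
M = np.exp(-np.abs(i[:, None] - i[None, :]).astype(float))
S = soft_threshold(M, 0.05)          # entries below 0.05 vanish exactly
band = int(np.abs(i[:, None] - i[None, :])[S != 0].max())  # half-bandwidth 2
```

Because entries beyond a fixed bandwidth are driven exactly to zero, only O(n) entries remain, which is what allows the paper's algorithms to operate at cost linear in the problem size.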
Linear scaling calculation of maximally localized Wannier functions with atomic basis set.
Xiang, H J; Li, Zhenyu; Liang, W Z; Yang, Jinlong; Hou, J G; Zhu, Qingshi
2006-06-21
We have developed a linear scaling algorithm for calculating maximally localized Wannier functions (MLWFs) using an atomic orbital basis. An O(N) ground state calculation is carried out to get the density matrix (DM). Through a projection of the DM onto atomic orbitals and a subsequent O(N) orthogonalization, we obtain initial orthogonal localized orbitals. These orbitals can be maximally localized in linear scaling by simple Jacobi sweeps. Our O(N) method is validated by applying it to a water molecule and wurtzite ZnO. The linear scaling behavior of the new method is demonstrated by computing the MLWFs of boron nitride nanotubes.
CMB all-scale blackbody distortions induced by linearizing temperature
Notari, Alessio; Quartin, Miguel
2016-08-01
Cosmic microwave background (CMB) experiments, such as WMAP and Planck, measure intensity anisotropies and build maps using a linearized formula for relating them to the temperature blackbody fluctuations. However, this procedure also generates a signal in the maps in the form of y-type distortions which is degenerate with the thermal Sunyaev Zel'dovich (tSZ) effect. These are small effects that arise at second order in the temperature fluctuations not from primordial physics but from such a limitation of the map-making procedure. They constitute a contaminant for measurements of our peculiar velocity, the tSZ and primordial y-distortions. They can nevertheless be well modeled and accounted for. We show that the distortions arise from a leakage of the CMB dipole into the y-channel which couples to all multipoles, mostly affecting the range ℓ≲400. This should be visible in Planck's y-maps with an estimated signal-to-noise ratio of about 12. We note however that such frequency-dependent terms carry no new information on the nature of the CMB dipole. This implies that the real significance of Planck's Doppler coupling measurements is actually lower than reported by the collaboration. Finally, we quantify the level of contamination in tSZ and primordial y-type distortions and show that it is above the sensitivity of proposed next-generation CMB experiments.
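The origin of the y-type term can be made explicit with the standard CMB-literature expansion (consistent with, but not copied from, the paper). Writing x = hν/k_BT and δ = ΔT/T, the photon occupation of a blackbody at temperature T(1+δ) expands to second order as:

```latex
% Second-order expansion of the blackbody occupation in \delta = \Delta T/T:
n\!\left(\tfrac{x}{1+\delta}\right)
  \simeq \frac{1}{e^{x}-1}
  + \delta\,G(x)
  + \frac{\delta^{2}}{2}\,G(x)\!\left(x\coth\tfrac{x}{2}-2\right),
\qquad G(x) \equiv \frac{x\,e^{x}}{\left(e^{x}-1\right)^{2}} .
% A linearized map-maker keeps only the \delta\,G(x) term. The quadratic
% remainder decomposes into the y-type spectral shape plus a pure
% temperature shift:
G(x)\!\left(x\coth\tfrac{x}{2}-2\right) = Y(x) + 2\,G(x),
\qquad Y(x) \equiv G(x)\!\left(x\coth\tfrac{x}{2}-4\right),
% i.e. an effective y-distortion with y = \delta^{2}/2.
```

Since the dipole dominates δ on the sky, its square is the leading term that leaks into the y-channel, which is the contamination the paper quantifies.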
CMB all-scale blackbody distortions induced by linearizing temperature
Notari, Alessio
2016-01-01
Cosmic Microwave Background (CMB) experiments, such as WMAP and Planck, measure intensity anisotropies and build maps using a linearized formula for relating them to the temperature blackbody fluctuations. However, such a procedure also generates a signal in the maps in the form of y-type distortions, degenerate with the thermal SZ (tSZ) effect. These are small effects that arise at second order in the temperature fluctuations, not from primordial physics but from this limitation of the map-making procedure. They constitute a contaminant for measurements of our peculiar velocity, the tSZ and primordial y-distortions, but they can nevertheless be well modelled and accounted for. We show that the largest distortions arise at high ℓ from a leakage of the CMB dipole into the y-channel, which couples to all multipoles but mostly affects the range ℓ ≲ 400. This should be visible in Planck's y-maps with an estimated signal-to-noise ratio of about 9. We note however that such frequency-de…
Linear-scaling and parallelizable algorithms for stochastic quantum chemistry
Booth, George H; Alavi, Ali
2013-01-01
For many decades, quantum chemical method development has been dominated by algorithms which involve increasingly complex series of tensor contractions over one-electron orbital spaces. Procedures for their derivation and implementation have evolved to require the minimum amount of logic and rely heavily on computationally efficient library-based matrix algebra and optimized paging schemes. In this regard, the recent development of exact stochastic quantum chemical algorithms to reduce computational scaling and memory overhead requires a contrasting algorithmic philosophy, but one which when implemented efficiently can often achieve higher accuracy/cost ratios with small random errors. Additionally, they can exploit the continuing trend for massive parallelization which hinders the progress of deterministic high-level quantum chemical algorithms. In the Quantum Monte Carlo community, stochastic algorithms are ubiquitous but the discrete Fock space of quantum chemical methods is often unfamiliar, and the metho...
Multiscale plant wakes, turbulence and non linear scaling flexible effects
Vila, Teresa; Redondo, Jose M.; Velasco, David
2010-05-01
We present velocity ADV measurements and flow visualization of the turbulent wakes behind plant arrays; as these are often fractal in nature, we compare the multifractal spectra and the turbulence structure behind the wakes. Both statistical measures allowing calculation of integral length scales and their profiles modified by the plant canopies [1,2], as well as intermittency and spectral behaviour, are measured [3,4]. We distinguish several momentum transfer mechanisms between the canopy and the flow: an internal one where lateral turbulent stresses are dominant, and another one just above the average plant height dominated by vertical Reynolds stresses. Visualization of flow over individual plant models shows the role of coherent vortices triggered by plant elasticity. The deformation rate of the plants and their Young's modulus may be correlated with overall plant drag and geometry. This is modified strongly in fractal canopies. Large turbulent integral scales are linked to rugosity and the scaling of the waves [5,6]. Pearlescence experiments, where local shear is visualized, and numerical simulations of fractal grids are compared following [7].
[1] Nepf, H.M. Drag, turbulence and diffusion in flow through emergent vegetation. Water Resources Res. 35(2) (1999).
[2] Ben Mahjoub, O., Redondo, J.M. and Babiano, A. Structure functions in complex flows. Flow, Turbulence and Combustion 59, 299-313.
[3] El-Hakim, O., Salama, M. Velocity distribution inside and above branched flexible roughness. ASCE Journal of Irrigation and Drainage Engineering 118(6) (1992) 914-927.
[4] Finnigan, J. Turbulence in plant canopies. Annu. Rev. Fluid Mech. 32 (2000) 519-571.
[5] Ikeda, S., Kanazawa, M. Three-dimensional organized vortices above flexible water plants. ASCE Journal of Hydraulic Engineering 122(11) (1996) 634-640.
[6] Velasco, D., Bateman, A., Redondo, J.M. and Medina, V. An open channel flow experimental and theoretical study of resistance and
Decentralised stabilising controllers for a class of large-scale linear systems
Indian Academy of Sciences (India)
B C Jha; K Patralekh; R Singh
2000-12-01
A simple method for computing decentralised stabilising controllers for a class of large-scale (interconnected) linear systems has been developed. Decentralised controls are optimal controls at subsystem level and are generated from the solution of algebraic Riccati equations for decoupled subsystems resulting from a new aggregation-decomposition technique. The method has been illustrated through a numerical example of a large-scale linear system consisting of three subsystems each of the fourth order.
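The subsystem-level optimal controls described above amount to solving one algebraic Riccati equation per decoupled subsystem. A minimal numerical sketch (the subsystem matrices and weighting matrices are invented for illustration, not taken from the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def decentralised_gains(subsystems):
    """One optimal state-feedback gain per decoupled subsystem.

    `subsystems` is a list of (A_i, B_i) pairs; each gain K_i solves the
    local LQR problem via the algebraic Riccati equation, mirroring the
    aggregation-decomposition scheme. Identity weights are an assumption.
    """
    gains = []
    for A, B in subsystems:
        n, m = A.shape[0], B.shape[1]
        Qi, Ri = np.eye(n), np.eye(m)
        P = solve_continuous_are(A, B, Qi, Ri)  # A'P + PA - PB R^-1 B'P + Q = 0
        K = np.linalg.solve(Ri, B.T @ P)        # K = R^-1 B' P
        gains.append(K)
    return gains

# Example: two decoupled second-order subsystems (illustrative matrices)
A1 = np.array([[0.0, 1.0], [0.0, -1.0]]); B1 = np.array([[0.0], [1.0]])
A2 = np.array([[0.0, 1.0], [2.0, 0.0]]);  B2 = np.array([[0.0], [1.0]])
K1, K2 = decentralised_gains([(A1, B1), (A2, B2)])
# Each closed loop A_i - B_i K_i is stable (eigenvalues in the left half-plane)
for A, B, K in [(A1, B1, K1), (A2, B2, K2)]:
    assert np.max(np.linalg.eigvals(A - B @ K).real) < 0
```

Stability of each local closed loop follows from standard LQR theory; stability of the interconnected system is what the paper's aggregation-decomposition analysis establishes.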
Institute of Scientific and Technical Information of China (English)
Ichitaro Yamazaki; Zhaojun Bai; Wenbin Chen; Richard Scalettar
2009-01-01
We study preconditioning techniques used in conjunction with the conjugate gradient method for solving multi-length-scale symmetric positive definite linear systems originating from the quantum Monte Carlo simulation of electron interaction of correlated materials. Existing preconditioning techniques are not designed to be adaptive to varying numerical properties of the multi-length-scale systems. In this paper, we propose a hybrid incomplete Cholesky (HIC) preconditioner and demonstrate its adaptivity to the multi-length-scale systems. In addition, we propose an extension of the compressed sparse column with row access (CSCR) sparse matrix storage format to efficiently accommodate the data access pattern to compute the HIC preconditioner. We show that for moderately correlated materials, the HIC preconditioner achieves the optimal linear scaling of the simulation. The development of a linear-scaling preconditioner for strongly correlated materials remains an open topic.
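The HIC preconditioner itself is not available in standard libraries, but the preconditioned conjugate gradient structure it plugs into can be sketched with SciPy's incomplete LU factorization standing in for the incomplete Cholesky factor, on a generic SPD test matrix rather than the quantum Monte Carlo matrices:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# SPD test system: 2D Laplacian, a stand-in for the multi-length-scale
# matrices from the simulation (illustrative assumption, not the paper's data).
n = 20
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = sp.csc_matrix(sp.kron(sp.eye(n), T) + sp.kron(T, sp.eye(n)))
b = np.ones(A.shape[0])

ilu = spla.spilu(A, drop_tol=1e-4)            # incomplete factorization of A
M = spla.LinearOperator(A.shape, ilu.solve)   # preconditioner M ~ A^-1
x, info = spla.cg(A, b, M=M, maxiter=500)     # preconditioned conjugate gradient
residual = np.linalg.norm(A @ x - b)
```

The adaptive part of the HIC method, choosing the factorization per length scale, is the paper's contribution and is not reproduced here.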
Directory of Open Access Journals (Sweden)
Thais Maria Freire FERNANDES
2015-02-01
OBJECTIVE: The purpose of this study was to determine the accuracy and reliability of two methods of measurement of linear distances (2D multiplanar and 3D reconstruction) obtained from cone-beam computed tomography (CBCT) with different voxel sizes. MATERIAL AND METHODS: Ten dry human mandibles were scanned at voxel sizes of 0.2 and 0.4 mm. Craniometric anatomical landmarks were identified twice by two independent operators on the multiplanar reconstructed and on volume rendering images generated by the software Dolphin®. Subsequently, physical measurements were performed using a digital caliper. Analysis of variance (ANOVA), the intraclass correlation coefficient (ICC) and Bland-Altman analysis were used for evaluating accuracy and reliability (p<0.05). RESULTS: Excellent intraobserver reliability and good to high interobserver reliability values were found for linear measurements from CBCT 3D and multiplanar images. Measurements performed on multiplanar reconstructed images were more accurate than measurements in volume rendering when compared with the gold standard. No statistically significant difference was found between voxel protocols, independently of the measurement method. CONCLUSIONS: Linear measurements on multiplanar images at 0.2 and 0.4 mm voxel sizes are reliable and accurate when compared with direct caliper measurements. Caution should be taken with the volume rendering measurements, because they were reliable but not accurate for all variables. An increased voxel resolution did not result in greater accuracy of mandible measurements and would potentially entail increased patient radiation exposure.
2013-01-01
This book consists of twenty seven chapters, which can be divided into three large categories: articles with the focus on the mathematical treatment of non-linear problems, including the methodologies, algorithms and properties of analytical and numerical solutions to particular non-linear problems; theoretical and computational studies dedicated to the physics and chemistry of non-linear micro-and nano-scale systems, including molecular clusters, nano-particles and nano-composites; and, papers focused on non-linear processes in medico-biological systems, including mathematical models of ferments, amino acids, blood fluids and polynucleic chains.
Choi, Soo-Jung; Choi, Janggyoo; Lee, Chang Uk; Yoon, Shin Hee; Bae, Soo Kyung; Chin, Young-Won; Kim, Jinwoong; Yoon, Kee Dong
2015-06-01
This study describes the rapid separation of mulberry anthocyanins, namely cyanidin-3-glucoside and cyanidin-3-rutinoside, using high-performance countercurrent chromatography, and the establishment of a volumetric scale-up process from semi-preparative to preparative scale. To optimize the separation parameters, biphasic solvent systems composed of tert-butyl methyl ether/n-butanol/acetonitrile/0.01% trifluoroacetic acid, flow rate, sample amount and rotational speed were evaluated for the semi-preparative-scale high-performance countercurrent chromatography. The optimized semi-preparative-scale parameters (tert-butyl methyl ether/n-butanol/acetonitrile/0.01% trifluoroacetic acid, 1:3:1:5, v/v; flow rate, 4.0 mL/min; sample amount, 200-1000 mg; rotational speed, 1600 rpm) were transferred directly to a preparative scale (tert-butyl methyl ether/n-butanol/acetonitrile/0.01% trifluoroacetic acid, 1:3:1:5, v/v; flow rate, 28 mL/min; sample amount, 5.0-10.0 g; rotational speed, 1400 rpm) to achieve identical separation results for cyanidin-3-glucoside and cyanidin-3-rutinoside. The separation of mulberry anthocyanins using semi-preparative high-performance countercurrent chromatography and its volumetric scale-up to preparative scale was addressed for the first time in this report. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Nonlinear and linear timescales near kinetic scales in solar wind turbulence
Energy Technology Data Exchange (ETDEWEB)
Matthaeus, W. H.; Wan, M.; Shay, M. A. [Department of Physics and Astronomy, University of Delaware, DE 19716 (United States); Oughton, S. [Department of Mathematics, University of Waikato, Hamilton (New Zealand); Osman, K. T.; Chapman, S. C. [Centre for Fusion, Space, and Astrophysics, University of Warwick, Coventry CV4 7AL (United Kingdom); Servidio, S.; Valentini, F. [Dipartimento di Fisica, Università della Calabria, I-87036 Cosenza (Italy); Gary, S. P. [Space Sciences Institute, Boulder, CO 80301 (United States); Roytershteyn, V.; Karimabadi, H., E-mail: whm@udel.edu [Sciberquest, Inc., Del Mar, CA 92014 (United States)
2014-08-01
The application of linear kinetic treatments to plasma waves, damping, and instability requires favorable inequalities between the associated linear timescales and timescales for nonlinear (e.g., turbulence) evolution. In the solar wind these two types of timescales may be directly compared using standard Kolmogorov-style analysis and observational data. The estimated local (in scale) nonlinear magnetohydrodynamic cascade times, evaluated as relevant kinetic scales are approached, remain slower than the cyclotron period, but comparable to or faster than the typical timescales of instabilities, anisotropic waves, and wave damping. The variation with length scale of the turbulence timescales is supported by observations and simulations. On this basis the use of linear theory—which assumes constant parameters to calculate the associated kinetic rates—may be questioned. It is suggested that the product of proton gyrofrequency and nonlinear time at the ion gyroscales provides a simple measure of turbulence influence on proton kinetic behavior.
Field-based observations confirm linear scaling of sand flux with wind stress
Martin, Raleigh L
2016-01-01
Wind-driven sand transport generates atmospheric dust, forms dunes, and sculpts landscapes. However, it remains unclear how the sand flux scales with wind speed, largely because models do not agree on how particle speed changes with wind shear velocity. Here, we present comprehensive measurements from three new field sites and three published studies, showing that characteristic saltation layer heights, and thus particle speeds, remain approximately constant with shear velocity. This result implies a linear dependence of saltation flux on wind shear stress, which contrasts with the nonlinear 3/2 scaling used in most aeolian process predictions. We confirm the linear flux law with direct measurements of the stress-flux relationship occurring at each site. Models for dust generation, dune migration, and other processes driven by wind-blown sand on Earth, Mars, and several other planetary surfaces should be modified to account for linear stress-flux scaling.
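The contrast between the linear flux law supported by the field data and the classical 3/2 law can be illustrated numerically. All constants below (threshold shear velocity, prefactors, air density) are illustrative assumptions, not values from the study:

```python
import numpy as np

rho = 1.225              # air density, kg/m^3 (assumed)
u_t = 0.25               # threshold shear velocity, m/s (illustrative)
C_lin = C_32 = 5.0       # dimensionless prefactors, illustrative only

u_star = np.linspace(0.3, 0.6, 4)   # shear velocities above threshold, m/s
tau = rho * u_star**2               # wind shear stress
tau_t = rho * u_t**2                # threshold stress

# Linear law supported by the measurements: Q proportional to (tau - tau_t)
Q_lin = C_lin * (tau - tau_t)
# Classical 3/2 law: Q proportional to u_star*(tau - tau_t), i.e. ~u_star^3
Q_32 = C_32 * u_star * (tau - tau_t)

# The two laws diverge as u_star grows: their ratio scales with u_star itself
ratio = Q_32 / Q_lin
assert np.all(np.diff(ratio) > 0)   # ratio grows monotonically with u_star
```

This divergence at high shear velocity is why the choice of flux law matters for dust emission and dune migration models.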
Lin, Zi-Jing; Li, Lin; Cazzell, Mary; Liu, Hanli
2014-08-01
Diffuse optical tomography (DOT) is a variant of functional near infrared spectroscopy and has the capability of mapping or reconstructing three dimensional (3D) hemodynamic changes due to brain activity. Common methods used in DOT image analysis to define brain activation have limitations because the selection of activation period is relatively subjective. General linear model (GLM)-based analysis can overcome this limitation. In this study, we combine the atlas-guided 3D DOT image reconstruction with GLM-based analysis (i.e., voxel-wise GLM analysis) to investigate the brain activity that is associated with risk decision-making processes. Risk decision-making is an important cognitive process and thus is an essential topic in the field of neuroscience. The Balloon Analog Risk Task (BART) is a valid experimental model and has been commonly used to assess human risk-taking actions and tendencies while facing risks. We have used the BART paradigm with a blocked design to investigate brain activations in the prefrontal and frontal cortical areas during decision-making from 37 human participants (22 males and 15 females). Voxel-wise GLM analysis was performed after a human brain atlas template and a depth compensation algorithm were combined to form atlas-guided DOT images. In this work, we wish to demonstrate the excellence of using voxel-wise GLM analysis with DOT to image and study cognitive functions in response to risk decision-making. Results have shown significant hemodynamic changes in the dorsal lateral prefrontal cortex (DLPFC) during the active-choice mode and a different activation pattern between genders; these findings correlate well with published literature in functional magnetic resonance imaging (fMRI) and fNIRS studies.
The velocity shear and vorticity across redshifts and non-linear scales
Libeskind, Noam I; Gottlöber, Stefan
2013-01-01
The evolution of the large scale distribution of matter in the universe is often characterized by the density field. Here we take a complementary approach and characterize it using the cosmic velocity field, specifically the deformation of the velocity field. The deformation tensor is decomposed into its symmetric component (known as the "shear tensor") and its anti-symmetric part (the "vorticity"). Using a high resolution cosmological simulation we examine the relative orientations of the shear and the vorticity as a function of spatial scale and redshift. The shear is found to be remarkably stable with respect to the choice of scale, while the vorticity is found to decay quickly with increasing spatial scale or redshift. The vorticity emerges out of the linear regime randomly oriented with respect to the shear eigenvectors. Non-linear evolution drives the vorticity to lie within the plane defined by the eigenvector of the fastest collapse. Within that plane the vorticity first becomes aligned with the middle eigenvector an...
A study on the fabrication of main scale of linear encoder using continuous roller imprint method
Fan, Shanjin; Shi, Yongsheng; Yin, Lei; Feng, Long; Liu, Hongzhong
2013-10-01
A linear encoder, composed of a main scale and an index scale, has extensive application in the field of modern precision measurement. The main scale, as the measuring basis, is the key component of a linear encoder. In this article, continuous roller imprint technology is applied to the manufacturing of the main scale; this method can realize high-efficiency, low-cost manufacturing of ultra-long main scales. By means of the plastic deformation of a soft metal film substrate, the grating microstructure on the surface of the cylinder mold is replicated directly onto the substrate. Through high precision control of the continuous rotational motion of the mold, an ultra-long, high precision grating microstructure is obtained. This paper mainly discusses the manufacturing process of the high precision cylinder mold and the effects of roller imprint pressure and roller rotation speed on imprint replication quality. These process parameters were optimized to manufacture a high quality main scale. Finally, a reading test of a linear encoder containing a main scale made by this method was conducted to evaluate its measurement accuracy; the result demonstrated the feasibility of the continuous roller imprint method.
Linear scaling calculation of an n-type GaAs quantum dot.
Nomura, Shintaro; Iitaka, Toshiaki
2007-09-01
A linear scaling method for calculating electronic properties of large and complex systems is introduced within the local density approximation. The method is based on the Chebyshev polynomial expansion and the time-dependent method, and is tested on the calculation of the electronic structure of a model n-type GaAs quantum dot.
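The Chebyshev polynomial expansion at the heart of such linear scaling schemes can be sketched for a generic matrix function: the Hamiltonian's spectrum is mapped into [-1, 1] and the expansion is evaluated with the three-term recurrence, which needs only matrix products (matrix-vector products when the Hamiltonian is sparse). The test matrix and filter function below are invented for illustration:

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev

def cheb_matrix_function(H, f, deg=80):
    """Approximate f(H) for a symmetric matrix H via a Chebyshev expansion.

    Uses the recurrence T_{k+1} = 2 X T_k - T_{k-1} after rescaling H's
    spectrum into [-1, 1]; degree and bounds handling are illustrative.
    """
    lo, hi = np.linalg.eigvalsh(H)[[0, -1]]
    half, mid = (hi - lo) / 2, (hi + lo) / 2
    X = (H - mid * np.eye(len(H))) / half            # spectrum now in [-1, 1]
    c = Chebyshev.interpolate(lambda x: f(half * x + mid), deg).coef
    T_prev, T_curr = np.eye(len(H)), X
    F = c[0] * T_prev + c[1] * T_curr
    for k in range(2, len(c)):
        T_prev, T_curr = T_curr, 2 * X @ T_curr - T_prev
        F += c[k] * T_curr
    return F

# Check against direct diagonalization for a small random symmetric matrix
rng = np.random.default_rng(0)
H = rng.standard_normal((8, 8)); H = (H + H.T) / 2
f = lambda x: 1.0 / (1.0 + np.exp(2.0 * x))          # smooth Fermi-like filter
F_approx = cheb_matrix_function(H, f)
w, V = np.linalg.eigh(H)
F_exact = V @ np.diag(f(w)) @ V.T
```

In the paper's setting the same recurrence is applied to sparse Hamiltonians without ever diagonalizing, which is where the linear scaling comes from.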
A trust-region and affine scaling algorithm for linearly constrained optimization
Institute of Scientific and Technical Information of China (English)
陈中文; 章祥荪
2002-01-01
A new trust-region and affine scaling algorithm for linearly constrained optimization is presented in this paper. Without any nondegeneracy assumption, we prove that any limit point of the sequence generated by the new algorithm satisfies the first order necessary condition, and that there exists at least one limit point of the sequence which satisfies the second order necessary condition. Some preliminary numerical experiments are reported.
Hardy inequality on time scales and its application to half-linear dynamic equations
Directory of Open Access Journals (Sweden)
Řehák Pavel
2005-01-01
A time-scale version of the Hardy inequality is presented, which unifies and extends well-known Hardy inequalities in the continuous and in the discrete setting. An application in the oscillation theory of half-linear dynamic equations is given.
QUALITATIVE BEHAVIORS OF LINEAR TIME-INVARIANT DYNAMIC EQUATIONS ON TIME SCALES
Institute of Scientific and Technical Information of China (English)
无
2010-01-01
We investigate the type of singularity and qualitative structure of solutions to a time-invariant linear dynamic system on time scales. The results truly unify the qualitative behaviors of the system on the continuous and discrete times with any step size.
Scale of association: hierarchical linear models and the measurement of ecological systems
Sean M. McMahon; Jeffrey M. Diez
2007-01-01
A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...
Directory of Open Access Journals (Sweden)
Dongxu Ren
2016-04-01
A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect that periodically superposes the light intensity from different pitch locations in the mask to produce a consistent energy distribution at a specific wavelength, from which the accuracy of a linear scale can be improved precisely using the average pitch with different step distances. The method's theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, static positioning error, and lithography lens focal plane alignment error, which affect pitch uniformity less than in the common linear scale projection lithography splicing process. It was analyzed and confirmed that increasing the number of repeat exposures of a single stripe could improve accuracy, as could adjusting the exposure spacing to achieve a set proportion of black and white stripes. According to the experimental results, the multi-repeated photolithography method readily achieves a pitch accuracy of 43 nm at any 10 locations along 1 m, and the whole-length accuracy of the linear scale is better than 1 µm/m.
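The average homogenization effect can be demonstrated in a few lines: superposing N exposures of a sinusoidal pitch error at offsets of one N-th of the error period cancels the error exactly. The period, amplitude, and exposure count below are illustrative, not the paper's parameters:

```python
import numpy as np

P = 1000.0                    # pitch-error period, µm (illustrative)
amp = 2.0                     # sine-wave error amplitude, µm (as in the abstract)
x = np.linspace(0.0, 5 * P, 2001)
err = lambda s: amp * np.sin(2 * np.pi * s / P)   # mask pitch error

def averaged_error(x, N, step):
    """Mean residual error after N superposed exposures offset by `step`."""
    return sum(err(x + k * step) for k in range(N)) / N

# Averaging N copies shifted by step attenuates a periodic error by the
# factor |sin(N*pi*step/P) / (N*sin(pi*step/P))|; with step = P/N the sum
# of equally spaced phasors vanishes and the error cancels completely.
res = averaged_error(x, N=8, step=P / 8)
assert np.max(np.abs(res)) < 1e-9
```

This is the mechanism by which repeated exposure with chosen step distances pushes the theoretical error far below the raw mask error.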
Application of Linear Scale Space and the Spatial Color Model in Microscopy
P. van Osta; K. Verdonck; L. Bols; J. Geysen; J.M. Geusebroek; B. ter Haar Romeny
2002-01-01
Structural features and color are used in human vision to distinguish features in light microscopy. Taking these structural features and color into consideration in machine vision often enables a more robust segmentation than one based on intensity thresholding. Linear scale space theory and the spatial
Ren, Dongxu; Zhao, Huiying; Zhang, Chupeng; Yuan, Daocheng; Xi, Jianpu; Zhu, Xueliang; Ban, Xinxing; Dong, Longchao; Gu, Yawen; Jiang, Chunye
2016-04-14
A multi-repeated photolithography method for manufacturing an incremental linear scale using projection lithography is presented. The method is based on the average homogenization effect that periodically superposes the light intensity from different pitch locations in the mask to produce a consistent energy distribution at a specific wavelength, from which the accuracy of a linear scale can be improved precisely using the average pitch with different step distances. The method's theoretical error is within 0.01 µm for a periodic mask with a 2-µm sine-wave error. The intensity error models in the focal plane include the rectangular grating error on the mask, static positioning error, and lithography lens focal plane alignment error, which affect pitch uniformity less than in the common linear scale projection lithography splicing process. It was analyzed and confirmed that increasing the number of repeat exposures of a single stripe could improve accuracy, as could adjusting the exposure spacing to achieve a set proportion of black and white stripes. According to the experimental results, the multi-repeated photolithography method readily achieves a pitch accuracy of 43 nm at any 10 locations along 1 m, and the whole-length accuracy of the linear scale is better than 1 µm/m.
Nonlinear and Linear Timescales near Kinetic Scales in Solar Wind Turbulence
Matthaeus, W H; Osman, K T; Servidio, S; Wan, M; Gary, S P; Shay, M A; Valentini, F; Roytershteyn, V; Karimabadi, H; Chapman, S C
2014-01-01
The application of linear kinetic treatments to plasma waves, damping, and instability requires favorable inequalities between the associated linear timescales and timescales for nonlinear (e.g., turbulence) evolution. In the solar wind these two types of timescales may be directly compared using standard Kolmogorov-style analysis and observational data. The estimated local nonlinear magnetohydrodynamic cascade times, evaluated as relevant kinetic scales are approached, remain slower than the cyclotron period, but comparable to, or faster than, the typical timescales of instabilities, anisotropic waves, and wave damping. The variation with length scale of the turbulence timescales is supported by observations and simulations. On this basis the use of linear theory - which assumes constant parameters to calculate the associated kinetic rates - may be questioned. It is suggested that the product of proton gyrofrequency and nonlinear time at the ion gyroscales provides a simple measure of turbulence influence on...
Energy Technology Data Exchange (ETDEWEB)
Wang, Lin-Wang; Zhao, Zhengji; Meza, Juan; Wang, Lin-Wang
2008-07-11
We present a new linear scaling ab initio total energy electronic structure calculation method based on the divide-and-conquer strategy. This method is simple to implement, easy to parallelize, and produces very accurate results when compared with the direct ab initio method. The method has been tested using up to 8,000 processors, and has been used to calculate nanosystems of up to 15,000 atoms.
Werner, Hans-Joachim; Knizia, Gerald; Krause, Christine; Schwilk, Max; Dornbach, Mark
2015-02-10
We propose to construct electron correlation methods that are scalable in both molecule size and aggregated parallel computational power, in the sense that the total elapsed time of a calculation becomes nearly independent of the molecular size when the number of processors grows linearly with the molecular size. This is shown to be possible by exploiting a combination of local approximations and parallel algorithms. The concept is demonstrated with a linear scaling pair natural orbital local second-order Møller-Plesset perturbation theory (PNO-LMP2) method. In this method, both the wave function manifold and the integrals are transformed incrementally from projected atomic orbitals (PAOs) first to orbital-specific virtuals (OSVs) and finally to pair natural orbitals (PNOs), which allow for minimum domain sizes and fine-grained accuracy control using very few parameters. A parallel algorithm design is discussed, which is efficient for both small and large molecules, and numbers of processors, although true inverse-linear scaling with compute power is not yet reached in all cases. Initial applications to reactions involving large molecules reveal surprisingly large effects of dispersion energy contributions as well as large intramolecular basis set superposition errors in canonical MP2 calculations. In order to account for the dispersion effects, the usual selection of PNOs on the basis of natural occupation numbers turns out to be insufficient, and a new energy-based criterion is proposed. If explicitly correlated (F12) terms are included, fast convergence to the MP2 complete basis set (CBS) limit is achieved. For the studied reactions, the PNO-LMP2-F12 results deviate from the canonical MP2/CBS and MP2-F12 values by <1 kJ mol(-1), using triple-ζ (VTZ-F12) basis sets.
A linear scale height Chapman model supported by GNSS occultation measurements
Olivares-Pulido, G.; Hernández-Pajares, M.; Aragón-Àngel, A.; Garcia-Rigo, A.
2016-08-01
Global Navigation Satellite Systems (GNSS) radio occultations allow the vertical sounding of the Earth's atmosphere, in particular the ionosphere. The physical observables estimated with this technique permit testing of theoretical models of the electron density such as, for example, the Chapman and the Vary-Chap models. The former is characterized by a constant scale height, whereas the latter considers a more general function of the scale height with respect to height. We investigate the feasibility of a Vary-Chap model in which the scale height varies linearly with height. In order to test this hypothesis, the scale height data provided by radio occultations from a receiver on board a low Earth orbit (LEO) satellite, obtained by iterating with a local Chapman model at every point of the topside F2 layer provided by the GNSS satellite occultation, are fitted to height data by means of a linear least squares (LLS) fit. Results, based on FORMOSAT-3/COSMIC GPS occultation data inverted by means of the improved Abel transform inversion technique (which takes into account the horizontal electron content gradients), show that the scale height presents a clearer linear trend above the F2 layer peak height, hm, which is in good agreement with the expected linear temperature dependence. Moreover, the parameters of the linear fit, obtained during four representative days covering all seasons, depend significantly on local time and latitude, strongly suggesting that this approach can significantly contribute to building realistic models of the electron density directly derived from GNSS occultation data.
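The effect of a linearly varying scale height on a Chapman profile can be sketched directly from the classical Chapman layer formula. The peak density, peak height, and scale-height parameters below are illustrative values, not fitted FORMOSAT-3/COSMIC results:

```python
import numpy as np

def chapman(h, Nm, hm, H):
    """Classical alpha-Chapman electron density; H may vary with height h."""
    z = (h - hm) / H
    return Nm * np.exp(0.5 * (1.0 - z - np.exp(-z)))

h = np.linspace(250.0, 800.0, 200)   # height grid, km (topside)
Nm, hm = 1e12, 300.0                 # peak density (m^-3) and height (km), assumed

# Constant scale height vs. the linear Vary-Chap form H(h) = H0 + k*(h - hm)
Ne_const = chapman(h, Nm, hm, H=60.0)
Ne_vary = chapman(h, Nm, hm, H=60.0 + 0.15 * (h - hm))

# A scale height growing with altitude thickens the topside: the density
# decays more slowly than in the constant-H Chapman profile.
assert np.all(Ne_vary[h > hm + 50.0] >= Ne_const[h > hm + 50.0])
```

Fitting the slope and intercept of H(h) from occultation-derived scale heights is the linear least squares step described in the abstract.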
Semiparametric Analysis of Heterogeneous Data Using Varying-Scale Generalized Linear Models.
Xie, Minge; Simpson, Douglas G; Carroll, Raymond J
2008-01-01
This article describes a class of heteroscedastic generalized linear regression models in which a subset of the regression parameters are rescaled nonparametrically, and develops efficient semiparametric inferences for the parametric components of the models. Such models provide a means to adapt for heterogeneity in the data due to varying exposures, varying levels of aggregation, and so on. The class of models considered includes generalized partially linear models and nonparametrically scaled link function models as special cases. We present an algorithm to estimate the scale function nonparametrically, and obtain asymptotic distribution theory for regression parameter estimates. In particular, we establish that the asymptotic covariance of the semiparametric estimator for the parametric part of the model achieves the semiparametric lower bound. We also describe a bootstrap-based goodness-of-scale test. We illustrate the methodology with simulations, published data, and data from collaborative research on ultrasound safety.
Linear-scaling evaluation of the local energy in quantum MonteCarlo
Energy Technology Data Exchange (ETDEWEB)
Austin, Brian; Aspuru-Guzik, Alan; Salomon-Ferrer, Romelia; Lester Jr., William A.
2006-02-11
For atomic and molecular quantum Monte Carlo calculations, most of the computational effort is spent in the evaluation of the local energy. We describe a scheme for reducing the computational cost of the evaluation of the Slater determinants and correlation function for the correlated molecular orbital (CMO) ansatz. A sparse representation of the Slater determinants makes possible efficient evaluation of molecular orbitals. A modification to the scaled distance function facilitates a linear scaling implementation of the Schmidt-Moskowitz-Boys-Handy (SMBH) correlation function that preserves the efficient matrix multiplication structure of the SMBH function. For the evaluation of the local energy, these two methods lead to asymptotic linear scaling with respect to the molecule size.
Linear scaling coupled cluster and perturbation theories in the atomic orbital basis
Scuseria, Gustavo E.; Ayala, Philippe Y.
1999-11-01
We present a reformulation of the coupled cluster equations in the atomic orbital (AO) basis that leads to a linear scaling algorithm for large molecules. Neglecting excitation amplitudes in a screening process designed to achieve a target energy accuracy, we obtain an AO coupled cluster method which is competitive in terms of number of amplitudes with the traditional molecular orbital (MO) solution, even for small molecules. For large molecules, the decay properties of integrals and excitation amplitudes become evident and our AO method yields a linear scaling algorithm with respect to molecular size. We present benchmark calculations to demonstrate that our AO reformulation of the many-body electron correlation problem defeats the "exponential scaling wall" that has characterized high-level MO quantum chemistry calculations for many years.
Linear Scaling Solution of the Time-Dependent Self-Consistent-Field Equations
Directory of Open Access Journals (Sweden)
Matt Challacombe
2014-03-01
A new approach to solving the Time-Dependent Self-Consistent-Field equations is developed, based on the double quotient formulation of Tsiper (2001, J. Phys. B). Dual channel, quasi-independent non-linear optimization of these quotients is found to yield convergence rates approaching those of the best case (single channel) Tamm-Dancoff approximation. This formulation is variational with respect to matrix truncation, admitting linear scaling solution of the matrix-eigenvalue problem, which is demonstrated for bulk excitons in the polyphenylene vinylene oligomer and the (4,3) carbon nanotube segment.
Estimating WAIS-IV indexes: proration versus linear scaling in a clinical sample.
Umfleet, Laura Glass; Ryan, Joseph J; Gontkovsky, Sam T; Morris, Jeri
2012-04-01
We compared the accuracy of proration and linear scaling for estimating Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) Verbal Comprehension Index (VCI) and Perceptual Reasoning Index (PRI) composites from all possible two-subtest combinations. The purpose was to provide practice-relevant psychometric results in a clinical sample. The present investigation was an archival study that used mostly within-group comparisons. We analyzed WAIS-IV data of a clinical sample comprising 104 patients with brain damage and 37 with no known neurological impairment. In both clinical samples, actual VCI and PRI scores were highly correlated with estimated index scores based on proration and linear scaling (all rs ≥.95). In the brain-impaired sample, significant mean score differences between the actual and estimated composites were found in two comparisons, but these differences were less than three points; no other significant differences emerged. Overall, findings demonstrate that proration and linear scaling are feasible procedures when estimating actual indexes. There was no advantage of one computational method over the other. © 2012 Wiley Periodicals, Inc.
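The two estimation procedures compared above are simple arithmetic. A minimal sketch (WAIS-IV subtest scaled scores have mean 10 and SD 3; the SD values assumed for the two- and three-subtest sums below are illustrative, not the test manual's values):

```python
# Two ways to estimate a three-subtest sum of scaled scores from two subtests.

def prorate(sum2, n_given=2, n_total=3):
    """Proration: rescale the obtained sum by the ratio of subtest counts."""
    return sum2 * n_total / n_given

def linear_scale(sum2, mean2=20.0, sd2=5.5, mean3=30.0, sd3=8.0):
    """Linear scaling: map the 2-subtest sum onto the 3-subtest distribution.

    mean2/mean3 follow from the subtest mean of 10; sd2/sd3 depend on
    subtest intercorrelations and are assumed here for illustration.
    """
    return mean3 + sd3 * (sum2 - mean2) / sd2

sum2 = 24                       # e.g. two subtests scored 12 each
est_pro = prorate(sum2)         # 36.0
est_lin = linear_scale(sum2)    # close to proration under the assumed SDs
```

The estimated sum is then converted to an index score via the usual norms table; the study's finding is that both routes land close to the actual index.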
Imprint of non-linear effects on HI intensity mapping on large scales
Umeh, Obinna
2016-01-01
Intensity mapping of the HI brightness temperature provides a unique way of tracing large-scale structures of the Universe up to the largest possible scales. This is achieved by using low angular resolution radio telescopes to detect the emission line from cosmic neutral hydrogen in the post-reionization Universe. We consider how non-linear effects associated with the HI bias and redshift space distortions contribute to the clustering of cosmic neutral hydrogen on large scales. We use general relativistic perturbation theory techniques to derive, for the first time, the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result to show how mode coupling at nonlinear order, due to nonlinear bias parameters and redshift space distortions, leads to about 10% modulation of the HI power spectrum on large scales.
The fastclime Package for Linear Programming and Large-Scale Precision Matrix Estimation in R.
Pang, Haotian; Liu, Han; Vanderbei, Robert
2014-02-01
We develop an R package fastclime for solving a family of regularized linear programming (LP) problems. Our package efficiently implements the parametric simplex algorithm, which provides a scalable and sophisticated tool for solving large-scale linear programs. As an illustrative example, one use of our LP solver is to implement an important sparse precision matrix estimation method called CLIME (Constrained L1 Minimization Estimator). Compared with existing packages for this problem such as clime and flare, our package has three advantages: (1) it efficiently calculates the full piecewise-linear regularization path; (2) it provides an accurate dual certificate as stopping criterion; (3) it is completely coded in C and is highly portable. This package is designed to be useful to statisticians and machine learning researchers for solving a wide range of problems.
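fastclime itself is an R package, but the LP it solves for each column of the precision matrix can be sketched generically. CLIME estimates each column beta by minimizing ||beta||_1 subject to ||S beta - e_j||_inf <= lambda; splitting beta = u - v with u, v >= 0 gives a standard-form LP, solved here with SciPy's generic solver rather than the parametric simplex (matrix and lambda are toy values):

```python
import numpy as np
from scipy.optimize import linprog

def clime_column(S, j, lam):
    """Solve min ||beta||_1 s.t. ||S beta - e_j||_inf <= lam as an LP.

    Variables are z = [u; v] with beta = u - v, u, v >= 0, so the L1 norm
    becomes the linear objective sum(u) + sum(v).
    """
    p = S.shape[0]
    e = np.zeros(p); e[j] = 1.0
    c = np.ones(2 * p)                    # minimize sum(u) + sum(v)
    M = np.hstack([S, -S])                # S @ (u - v)
    A_ub = np.vstack([M, -M])             # encode |S(u-v) - e_j| <= lam
    b_ub = np.concatenate([lam + e, lam - e])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    return res.x[:p] - res.x[p:]

S = np.array([[1.0, 0.3], [0.3, 1.0]])    # toy sample covariance
beta0 = clime_column(S, 0, lam=0.1)
check = np.max(np.abs(S @ beta0 - np.array([1.0, 0.0])))
```

The package's advantage over this generic sketch is that the parametric simplex traces the whole regularization path in lambda at once.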
b-Bit Minwise Hashing for Large-Scale Linear SVM
Li, Ping; Konig, Christian
2011-01-01
In this paper, we propose to (seamlessly) integrate b-bit minwise hashing with linear SVM to substantially improve the training (and testing) efficiency using much smaller memory, with essentially no loss of accuracy. Theoretically, we prove that the resemblance matrix, the minwise hashing matrix, and the b-bit minwise hashing matrix are all positive definite matrices (kernels). Interestingly, our proof for the positive definiteness of the b-bit minwise hashing kernel naturally suggests a simple strategy to integrate b-bit hashing with linear SVM. Our technique is particularly useful when the data cannot fit in memory, which is an increasingly critical issue in large-scale machine learning. Our preliminary experimental results on a publicly available webspam dataset (350K samples and 16 million dimensions) verified the effectiveness of our algorithm. For example, the training time was reduced to merely a few seconds. In addition, our technique can be easily extended to many other linear and nonlinear machine...
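The core b-bit minwise hashing construction can be sketched in a few lines: for each of many hash functions, keep only the lowest b bits of the minimum hash over the token set. The hash function, token sets, and parameter choices below are illustrative (the paper studies b as small as 1):

```python
import hashlib

def _h(salt, token):
    """Deterministic 32-bit hash of a token under a given salt (illustrative)."""
    d = hashlib.blake2b(f"{salt}:{token}".encode(), digest_size=4).digest()
    return int.from_bytes(d, "big")

def minhash_signature(tokens, num_perm=128, b=2):
    """b-bit minwise hashing: keep only the lowest b bits of each min-hash."""
    mask = (1 << b) - 1
    return [min(_h(salt, t) for t in tokens) & mask
            for salt in range(num_perm)]

def matched_fraction(sig_a, sig_b):
    """Fraction of matching b-bit entries: the collision probability that
    feeds the resemblance estimator derived in the paper."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

A = {"machine", "learning", "svm", "kernel"}
B = {"machine", "learning", "svm", "hashing"}
sim_AA = matched_fraction(minhash_signature(A), minhash_signature(A))  # 1.0
# For sets with resemblance J, entries collide with probability about
# J + (1 - J)/2**b, so sim_AB concentrates near 0.7 here (J = 0.6, b = 2).
sim_AB = matched_fraction(minhash_signature(A), minhash_signature(B))
```

The paper's integration step then feeds these b-bit signatures, expanded into sparse binary vectors, directly into a linear SVM.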
Self-consistent field theory based molecular dynamics with linear system-size scaling.
Richters, Dorothee; Kühne, Thomas D
2014-04-01
We present an improved field-theoretic approach to the grand-canonical potential suitable for linear scaling molecular dynamics simulations using forces from self-consistent electronic structure calculations. It is based on an exact decomposition of the grand canonical potential for independent fermions and relies neither on the ability to localize the orbitals nor on the Hamilton operator being well-conditioned. Hence, this scheme enables highly accurate all-electron linear scaling calculations even for metallic systems. The inherent energy drift of Born-Oppenheimer molecular dynamics simulations, arising from incomplete convergence of the self-consistent field cycle, is circumvented by means of a properly modified Langevin equation. The predictive power of the present approach is illustrated using the example of liquid methane under extreme conditions.
Self-consistent field theory based molecular dynamics with linear system-size scaling
Energy Technology Data Exchange (ETDEWEB)
Richters, Dorothee [Institute of Mathematics and Center for Computational Sciences, Johannes Gutenberg University Mainz, Staudinger Weg 9, D-55128 Mainz (Germany); Kühne, Thomas D., E-mail: kuehne@uni-mainz.de [Institute of Physical Chemistry and Center for Computational Sciences, Johannes Gutenberg University Mainz, Staudinger Weg 7, D-55128 Mainz (Germany); Technical and Macromolecular Chemistry, University of Paderborn, Warburger Str. 100, D-33098 Paderborn (Germany)
2014-04-07
We present an improved field-theoretic approach to the grand-canonical potential suitable for linear scaling molecular dynamics simulations using forces from self-consistent electronic structure calculations. It is based on an exact decomposition of the grand canonical potential for independent fermions and relies neither on the ability to localize the orbitals nor on the Hamilton operator being well-conditioned. Hence, this scheme enables highly accurate all-electron linear scaling calculations even for metallic systems. The inherent energy drift of Born-Oppenheimer molecular dynamics simulations, arising from incomplete convergence of the self-consistent field cycle, is circumvented by means of a properly modified Langevin equation. The predictive power of the present approach is illustrated using the example of liquid methane under extreme conditions.
Dahl, Milo D.; Hixon, Ray; Mankbadi, Reda R.
2003-01-01
An approximate technique is presented for the prediction of the large-scale turbulent structure sound source in a supersonic jet. A linearized Euler equations code is used to solve for the flow disturbances within and near a jet with a given mean flow. Assuming a normal mode composition for the wave-like disturbances, the linear radial profiles are used in an integration of the Navier-Stokes equations. This results in a set of ordinary differential equations representing the weakly nonlinear self-interactions of the modes along with their interaction with the mean flow. Solutions are then used to correct the amplitude of the disturbances that represent the source of large-scale turbulent structure sound in the jet.
Screening methods for linear-scaling short-range hybrid calculations on CPU and GPU architectures
Beuerle, Matthias; Kussmann, Jörg; Ochsenfeld, Christian
2017-04-01
We present screening schemes that allow for efficient, linear-scaling short-range exchange calculations employing Gaussian basis sets for both CPU and GPU architectures. They are based on the LinK [C. Ochsenfeld et al., J. Chem. Phys. 109, 1663 (1998)] and PreLinK [J. Kussmann and C. Ochsenfeld, J. Chem. Phys. 138, 134114 (2013)] methods, but account for the decay introduced by the attenuated Coulomb operator in short-range hybrid density functionals. Furthermore, we discuss the implementation of short-range electron repulsion integrals on GPUs. The introduction of our screening methods allows for speedups of up to a factor 7.8 as compared to the underlying linear-scaling algorithm, while retaining full numerical control over the accuracy. With the increasing number of short-range hybrid functionals, our new schemes will allow for significant computational savings on CPU and GPU architectures.
A field-theoretic approach to linear scaling ab-initio molecular dynamics
Richters, Dorothee; Kühne, Thomas D
2012-01-01
We present a field-theoretic method suitable for linear scaling molecular dynamics simulations using forces from self-consistent electronic structure calculations. It is based on an exact decomposition of the grand canonical potential for independent fermions and relies neither on the ability to localize the orbitals nor on the Hamilton operator being well-conditioned. Hence, this scheme enables highly accurate all-electron linear scaling calculations even for metallic systems. The inherent energy drift of Born-Oppenheimer molecular dynamics simulations, arising from incomplete convergence of the self-consistent field cycle, is resolved by means of a properly modified Langevin equation. The predictive power of this approach is illustrated using the example of liquid methane under extreme conditions.
Rousselet, Bernard
2013-01-01
We consider small solutions of a vibrating mechanical system with smooth non-linearities, for which we provide an approximate solution by using a triple scale analysis; a rigorous proof of convergence of the triple scale method is included. For the forced response, a stability result is needed in order to prove convergence in a neighbourhood of a primary resonance. The amplitude of the response with respect to the forcing frequency is described, and it is related to the frequency of a free periodic vibration.
A Reduced Basis Framework: Application to large scale non-linear multi-physics problems
Directory of Open Access Journals (Sweden)
Daversin C.
2013-12-01
In this paper we present applications of the reduced basis method (RBM) to large-scale non-linear multi-physics problems. We first describe the mathematical framework in place, and in particular the Empirical Interpolation Method (EIM) to recover an affine decomposition, and then we propose an implementation using the open-source library Feel++, which provides both the reduced basis and finite element layers. Large-scale numerical examples are shown and are connected to real industrial applications arising from the High Field Resistive Magnets development at the Laboratoire National des Champs Magnétiques Intenses.
Scaling effects in a non-linear electromagnetic energy harvester for wearable sensors
Geisler, M.; Boisseau, S.; Perez, M.; Ait-Ali, I.; Perraud, S.
2016-11-01
In the field of inertial energy harvesters targeting human mechanical energy, ergonomics imposes a compromise between size reduction and electrical performance. In this paper, we study the properties of a non-linear electromagnetic generator at different scales by performing simulations based on an experimentally validated model and real recordings of human acceleration. The results show that the output power of the structure is roughly proportional to its scaling factor raised to the power of five, which indicates that this system is more relevant at lengths over a few centimetres.
Chemmangat Manakkal Cheriya, Krishnan; Ferranti, Francesco; Dhaene, Tom; Knockaert, Luc
2014-01-01
An enhanced parametric macromodelling scheme is presented for linear high-frequency systems based on the use of multiple frequency scaling coefficients and a sequential sampling algorithm to fully automate the entire modelling process. The proposed method is applied on a ring resonator bandpass filter example and compared with another state-of-the-art macromodelling method to show its improved modelling capability and reduced setup time.
Augmented Arnoldi-Tikhonov Regularization Methods for Solving Large-Scale Linear Ill-Posed Systems
Directory of Open Access Journals (Sweden)
Yiqin Lin
2013-01-01
We propose an augmented Arnoldi-Tikhonov regularization method for the solution of large-scale linear ill-posed systems. This method augments the Krylov subspace by a user-supplied low-dimensional subspace, which contains a rough approximation of the desired solution. The augmentation is implemented by a modified Arnoldi process. Some useful results are also presented. Numerical experiments illustrate that the augmented method outperforms the corresponding method without augmentation on some real-world examples.
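The plain (unaugmented) Arnoldi-Tikhonov idea that the method builds on can be sketched briefly: run k Arnoldi steps to get A V_k = V_{k+1} H, then solve the small stacked Tikhonov least-squares problem in Krylov coordinates. A hedged sketch on a toy well-conditioned system (the paper's actual contribution, the augmentation by a user-supplied subspace via a modified Arnoldi process, is omitted):

```python
import numpy as np

def arnoldi(A, b, k):
    """k steps of Arnoldi: A V_k = V_{k+1} H, with orthonormal columns V."""
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def arnoldi_tikhonov(A, b, k, lam):
    """min ||A x - b||^2 + lam^2 ||x||^2 over the Krylov subspace K_k(A, b)."""
    V, H = arnoldi(A, b, k)
    rhs = np.zeros(2 * k + 1)
    rhs[0] = np.linalg.norm(b)              # b = beta * V[:, 0]
    M = np.vstack([H, lam * np.eye(k)])     # stacked Tikhonov least squares
    y, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return V[:, :k] @ y

rng = np.random.default_rng(1)
A = 10.0 * np.eye(8) + rng.standard_normal((8, 8))
b = rng.standard_normal(8)
x = arnoldi_tikhonov(A, b, k=6, lam=1e-3)
```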
Institute of Scientific and Technical Information of China (English)
De Tong ZHU
2008-01-01
We extend the classical affine scaling interior trust region algorithm for the linear constrained smooth minimization problem to the nonsmooth case where the gradient of the objective function is only locally Lipschitzian. We propose and analyze a new affine scaling trust-region method in association with a nonmonotonic interior backtracking line search technique for solving the linear constrained LC1 optimization where the second-order derivative of the objective function is explicitly required to be locally Lipschitzian. The general trust region subproblem in the proposed algorithm is defined by minimizing an augmented affine scaling quadratic model which requires both first and second order information of the objective function subject only to an affine scaling ellipsoidal constraint in a null subspace of the augmented equality constraints. The global convergence and fast local convergence rate of the proposed algorithm are established under some reasonable conditions where twice smoothness of the objective function is not required. Applications of the algorithm to some nonsmooth optimization problems are discussed.
Wind-invariant saltation heights imply linear scaling of aeolian saltation flux with shear stress.
Martin, Raleigh L; Kok, Jasper F
2017-06-01
Wind-driven sand transport generates atmospheric dust, forms dunes, and sculpts landscapes. However, it remains unclear how the flux of particles in aeolian saltation-the wind-driven transport of sand in hopping trajectories-scales with wind speed, largely because models do not agree on how particle speeds and trajectories change with wind shear velocity. We present comprehensive measurements, from three new field sites and three published studies, showing that characteristic saltation layer heights remain approximately constant with shear velocity, in agreement with recent wind tunnel studies. These results support the assumption of constant particle speeds in recent models predicting linear scaling of saltation flux with shear stress. In contrast, our results refute widely used older models that assume that particle speed increases with shear velocity, thereby predicting nonlinear 3/2 stress-flux scaling. This conclusion is further supported by direct field measurements of saltation flux versus shear stress. Our results thus argue for adoption of linear saltation flux laws and constant saltation trajectories for modeling saltation-driven aeolian processes on Earth, Mars, and other planetary surfaces.
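The competing flux laws at issue are simple enough to state in code. A hedged sketch contrasting them (the prefactor C and the impact threshold stress tau_it are free parameters here, not values from the study):

```python
def flux_linear(tau, tau_it, C):
    """Linear law supported by the measurements: Q ∝ (tau - tau_it),
    following from saltation heights (and particle speeds) that do not
    vary with wind shear velocity."""
    return C * max(tau - tau_it, 0.0)

def flux_three_halves(tau, tau_it, C, rho_air=1.2):
    """Older 3/2 law: particle speed assumed ∝ u* = sqrt(tau/rho),
    giving Q ∝ u* (tau - tau_it)."""
    return C * (tau / rho_air) ** 0.5 * max(tau - tau_it, 0.0)

# doubling the excess stress doubles the linear flux, but more than
# doubles the 3/2 flux
q1 = flux_linear(0.2, 0.1, 1.0)
q2 = flux_linear(0.3, 0.1, 1.0)
```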
Large-Scale Structure Formation: from the first non-linear objects to massive galaxy clusters
Planelles, S; Bykov, A M
2014-01-01
The large-scale structure of the Universe formed from initially small perturbations in the cosmic density field, leading to galaxy clusters with up to 10^15 Msun at the present day. Here, we review the formation of structures in the Universe, considering the first primordial galaxies and the most massive galaxy clusters as extreme cases of structure formation where fundamental processes such as gravity, turbulence, cooling and feedback are particularly relevant. The first non-linear objects in the Universe formed in dark matter halos with 10^5-10^8 Msun at redshifts 10-30, leading to the first stars and massive black holes. At later stages, larger scales became non-linear, leading to the formation of galaxy clusters, the most massive objects in the Universe. We describe here their formation via gravitational processes, including the self-similar scaling relations, as well as the observed deviations from such self-similarity and the related non-gravitational physics (cooling, stellar feedback, AGN). While on i...
High-performance small-scale solvers for linear Model Predictive Control
DEFF Research Database (Denmark)
Frison, Gianluca; Sørensen, Hans Henrik Brandenborg; Dammann, Bernd
2014-01-01
In Model Predictive Control (MPC), an optimization problem needs to be solved at each sampling time, and this has traditionally limited the use of MPC to systems with slow dynamics. In recent years, there has been an increasing interest in the area of fast small-scale solvers for linear MPC......, with the two main research areas of explicit MPC and tailored on-line MPC. State-of-the-art solvers in this second class can outperform optimized linear-algebra libraries (BLAS) only for very small problems, and do not explicitly exploit the hardware capabilities, relying on compilers for that. This approach...... problems 2 to 8 times faster than the current state-of-the-art solver for this class of problems, and the high performance is maintained for MPC problems with up to a few hundred states.
Methods for accurate analysis of galaxy clustering on non-linear scales
Vakili, Mohammadjavad
2017-01-01
Measurements of galaxy clustering with low-redshift galaxy surveys provide a sensitive probe of cosmology and the growth of structure. Parameter inference with galaxy clustering relies on computation of likelihood functions, which requires estimation of the covariance matrix of the observables used in our analyses. Therefore, accurate estimation of the covariance matrices serves as one of the key ingredients in precise cosmological parameter inference. This requires generation of a large number of independent galaxy mock catalogs that accurately describe the statistical distribution of galaxies over a wide range of physical scales. We present a fast method based on low-resolution N-body simulations and an approximate galaxy biasing technique for generating mock catalogs. Using a reference catalog that was created using the high-resolution Big-MultiDark N-body simulation, we show that our method is able to produce catalogs that describe galaxy clustering at percent-level accuracy down to highly non-linear scales in both real space and redshift space. In most large-scale structure analyses, modeling of galaxy bias on non-linear scales is performed assuming a halo model. Clustering of dark matter halos has been shown to depend on halo properties beyond mass, such as halo concentration, a phenomenon referred to as assembly bias. Standard large-scale structure studies assume that halo mass alone is sufficient to characterize the connection between galaxies and halos. However, modeling of galaxy bias can face systematic effects if the number of galaxies is correlated with other halo properties. Using the Small MultiDark-Planck high-resolution N-body simulation and the clustering measurements of the Sloan Digital Sky Survey DR7 main galaxy sample, we investigate the extent to which the dependence of galaxy bias on halo concentration can improve our modeling of galaxy clustering.
Linear and Nonlinear Optical Properties of Micrometer-Scale Gold Nanoplates
Institute of Scientific and Technical Information of China (English)
LIU Xiao-Lan; PENG Xiao-Niu; YANG Zhong-Jian; LI Min; ZHOU Li
2011-01-01
Micrometer-scale gold nanoplates have been synthesized in high yield through a polyol process. The morphology, crystal structure and linear optical extinction of the gold nanoplates have been characterized. These gold nanoplates are single-crystalline with triangular, truncated triangular and hexagonal shapes, exhibiting strong surface plasmon resonance (SPR) extinction in the visible and near-infrared (NIR) region. The linear optical properties of the gold nanoplates are also investigated by theoretical calculations. We further investigate the nonlinear optical properties of the gold nanoplates in solution by the Z-scan technique. The nonlinear absorption (NLA) coefficient and nonlinear refraction (NLR) index are measured to be 1.18 × 10^2 cm/GW and -1.04 × 10^-3 cm^2/GW, respectively.
Investigation of the Validity of the Universal Scaling Law on Linear Chains of Silver Nanoparticles
Directory of Open Access Journals (Sweden)
Mohammed Alsawafta
2015-01-01
Due to the wide range of variation in the plasmonic characteristics of metallic nanoparticles arranged in linear arrays, the optical spectra of these arrays provide a powerful platform for spectroscopic studies and biosensing applications. Due to the coupling effect between the interacting nanoparticles, the excited resonance mode is shifted with the interparticle separation. The change in the resonance energy of the coupled mode is expressed by the fractional plasmon shift, which would normally follow a universal scaling behavior. Such a universal law has been successfully applied to a system of dimers under parallel polarization. It has been found that the plasmon shift decays exponentially with interparticle spacing. The decay length is independent of both the nanoparticle and the dielectric properties of the surrounding medium. In this paper, the discrete dipole approximation (DDA) is used to examine the validity of extending the universal scaling law to linear chains of several interacting nanoparticles embedded in various host media for both parallel and perpendicular polarizations. Our calculations reveal that the decay length of both the coupled longitudinal mode (LM) and transverse mode (TM) is strongly dependent on the refractive index of the surrounding medium, n_m. The decay constant of the LM is linearly proportional to n_m, while the corresponding constant of the TM decays exponentially with n_m. Upon changing the nanoparticle size, the change in the peak position of the LM decreases exponentially with the interparticle separation and hence obeys the universal law. The sensitivity of the coupled LM to the nanoparticle size is more pronounced at both smaller nanoparticle sizes and separations. The sensitivity of the coupled TM to the nanoparticle size, on the other hand, changes linearly with the separation, and therefore the universal law does not apply in the case of the excited TM.
Liu, Wei-Long; Jiang, Li-Lin; Wang, Yang; He, Xing; Song, Yun-Fei; Zheng, Zhi-Ren; Yang, Yan-Qiang; Zhao, Lian-Cheng
2013-08-01
Raman spectra of two typical carotenoids (beta-carotene and lutein) and some short (n = 2-5) polyenes were calculated using density functional theory. The wavenumber-linear scaling (WLS) and other frequency scaling methods were used to calibrate the calculated frequencies. It was found that the most commonly used uniform scaling (UFS) method can only calibrate several individual frequencies perfectly, and the systematic result of this method is not very good. The fitting parameters obtained by the WLS method are ν_obs/ν_calc = 0.9999 - 0.0000274 ν_calc and ν_obs/ν_calc = 0.9938 - 0.0000248 ν_calc for short polyenes and carotenoids, respectively. The calibration results of the WLS method are much better than those of the UFS method. This result suggests that the WLS method can be used for the frequency scaling of molecules as large as carotenoids. The similar fitting parameters for short polyenes and carotenoids indicate that the fitting parameters obtained by WLS for short polyenes can be used for calibrating the calculated vibrational frequencies of carotenoids. This presents a new frequency scaling method for vibrational spectroscopic analysis of carotenoids.
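The WLS calibration is just a straight-line fit of the ratio ν_obs/ν_calc against ν_calc, after which the scaled frequencies follow by multiplication. A minimal sketch (the synthetic data below merely mimics the quoted short-polyene parameters; it is not the paper's data):

```python
import numpy as np

def wls_fit(nu_calc, nu_obs):
    """Wavenumber-linear scaling: fit nu_obs/nu_calc = a + b * nu_calc."""
    b, a = np.polyfit(nu_calc, nu_obs / nu_calc, 1)
    return a, b

def wls_scale(nu_calc, a, b):
    """Apply the calibration to raw calculated frequencies."""
    return nu_calc * (a + b * nu_calc)

# synthetic check: data generated from the quoted short-polyene parameters
nu_calc = np.linspace(800.0, 1700.0, 20)
nu_obs = nu_calc * (0.9999 - 0.0000274 * nu_calc)
a, b = wls_fit(nu_calc, nu_obs)
```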
Nonmonotonic Recursive Polynomial Expansions for Linear Scaling Calculation of the Density Matrix.
Rubensson, Emanuel H
2011-05-10
As it stands, density matrix purification is a powerful tool for linear scaling electronic structure calculations. The convergence is rapid and depends only weakly on the band gap. However, as will be shown in this letter, there is room for improvements. The key is to allow for nonmonotonicity in the recursive polynomial expansion. On the basis of this idea, new purification schemes are proposed that require only half the number of matrix-matrix multiplications compared to previous schemes. The speedup is essentially independent of the location of the chemical potential and increases with decreasing band gap.
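For context, the kind of recursive polynomial expansion being improved on can be sketched with a simple trace-correcting purification (a TC2-style iteration in the spirit of Niklasson; this is an illustrative baseline, not the letter's nonmonotonic schemes, which roughly halve the number of matrix-matrix multiplications):

```python
import numpy as np

def tc2_density(H, n_occ, tol=1e-10, max_iter=200):
    """Trace-correcting recursive polynomial expansion for the density matrix:
    map the spectrum of H into [0, 1], then apply p^2 or 2p - p^2 depending on
    whether the current trace over- or under-shoots the occupation number."""
    e = np.linalg.eigvalsh(H)
    e_min, e_max = e[0], e[-1]
    P = (e_max * np.eye(len(H)) - H) / (e_max - e_min)  # occupied states -> near 1
    for _ in range(max_iter):
        P2 = P @ P
        if np.trace(P) > n_occ:
            P_new = P2               # shrinks all eigenvalues toward 0
        else:
            P_new = 2 * P - P2       # grows all eigenvalues toward 1
        if np.linalg.norm(P_new - P) < tol:
            return P_new
        P = P_new
    return P

rng = np.random.default_rng(2)
B = rng.standard_normal((8, 8))
H = 0.5 * (B + B.T)                  # toy symmetric "Hamiltonian"
P = tc2_density(H, n_occ=3)
```

Each iteration costs one matrix-matrix multiplication here; the letter's point is that allowing nonmonotonic polynomials reduces the total multiplication count further, essentially independently of where the chemical potential sits.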
Minimization of Linear Functionals Defined on| Solutions of Large-Scale Discrete Ill-Posed Problems
DEFF Research Database (Denmark)
Elden, Lars; Hansen, Per Christian; Rojas, Marielba
2003-01-01
The minimization of linear functionals defined on the solutions of discrete ill-posed problems arises, e.g., in the computation of confidence intervals for these solutions. In 1990, Elden proposed an algorithm for this minimization problem based on a parametric-programming reformulation involving...... the solution of a sequence of trust-region problems, and using matrix factorizations. In this paper, we describe MLFIP, a large-scale version of this algorithm where a limited-memory trust-region solver is used on the subproblems. We illustrate the use of our algorithm in connection with an inverse heat...
Stability Criteria for Large-Scale Linear Systems with Structured Uncertainties
Institute of Scientific and Technical Information of China (English)
Cao Dengqing
1996-01-01
The robust stability analysis for large-scale linear systems with structured time-varying uncertainties is investigated in this paper. By using scalar Lyapunov functions and the properties of M-matrices and nonnegative matrices, stability robustness measures are proposed. The robust stability criteria obtained are applied to derive an algebraic criterion which is expressed directly in terms of plant parameters and is shown to be less conservative than the existing ones. A numerical example is given to demonstrate the stability criteria obtained and to compare them with the previous ones.
Directory of Open Access Journals (Sweden)
Xiaocui Wu
2015-02-01
The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance to quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales, using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL-LUEn was slightly but not significantly better than TL-LUE at the half-hourly and daily scales, while the overall performance of both TL-LUEn and TL-LUE was significantly better (p < 0.0001) than MOD17 at the two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and between TL-LUE and MOD17, became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types, while TL-LUE outperformed MOD17 slightly for all these non-forest types at the daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by correcting the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale, while TL-LUE could be used regionally at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.
FAST SOLUTION FOR LARGE SCALE LINEAR ALGEBRAIC EQUATIONS IN FINITE ELEMENT ANALYSIS
Institute of Scientific and Technical Information of China (English)
Qi Zhaohui; Liu Yuqi; Hu Ping
2001-01-01
The computational efficiency of the numerical solution of linear algebraic equations in finite elements can be improved in two ways. One is to decrease the fill-in numbers, which are new non-zero numbers generated in the global stiffness matrix during the process of elimination. The other is to reduce the computational operations of multiplying a real number by zero. Based on the fact that the order of elimination determines how many fill-in numbers are generated, we present a new method for optimizing the numbering of nodes. This method is quite different from bandwidth optimization. Fill-in numbers can be decreased on a large scale by the use of this method. The bi-factorization method is adopted to avoid multiplying real numbers by zero. For large-scale finite element analysis, the method presented in this paper is more efficient than the traditional LDLT method.
Linear-scaling density functional theory using the projector augmented wave method
Hine, Nicholas D. M.
2017-01-01
Quantum mechanical simulation of realistic models of nanostructured systems, such as nanocrystals and crystalline interfaces, demands computational methods combining high-accuracy with low-order scaling with system size. Blöchl’s projector augmented wave (PAW) approach enables all-electron (AE) calculations with the efficiency and systematic accuracy of plane-wave pseudopotential calculations. Meanwhile, linear-scaling (LS) approaches to density functional theory (DFT) allow for simulation of thousands of atoms in feasible computational effort. This article describes an adaptation of PAW for use in the LS-DFT framework provided by the ONETEP LS-DFT package. ONETEP uses optimisation of the density matrix through in situ-optimised local orbitals rather than the direct calculation of eigenstates as in traditional PAW approaches. The method is shown to be comparably accurate to both PAW and AE approaches and to exhibit improved convergence properties compared to norm-conserving pseudopotential methods.
Volumetric composition of nanocomposites
DEFF Research Database (Denmark)
Madsen, Bo; Lilholt, Hans; Mannila, Juha
2015-01-01
Detailed characterisation of the properties of composite materials with nanoscale fibres is central for further progress in optimization of their manufacturing and properties. In the present study, a methodology for the determination and analysis of the volumetric composition of nanocomposites...... is presented, using cellulose/epoxy and aluminosilicate/polylactate nanocomposites as case materials. The buoyancy method is used for the accurate measurement of materials density. The accuracy of the method is determined to be high, allowing the measured nanocomposite densities to be reported with 5...... significant figures. The plotting of the measured nanocomposite density as a function of the nanofibre weight content is shown to be a good first approach to assessing the porosity content of the materials. The known gravimetric composition of the nanocomposites is converted into a volumetric composition...
Rapid mapping of volumetric machine errors using distance measurements
Energy Technology Data Exchange (ETDEWEB)
Krulewich, D.A.
1998-04-01
This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for the systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Expressing each parametric error as a function of position, these are combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a non-linear function dependent on the commanded location of the machine, the machine error, and the base locations. Using the error model, the non-linear equation is solved, producing a fit for the error model. Also note that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the non-linear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors such as thermally induced and load-induced errors were not considered, although the mathematical model has the ability to account for these errors. Due to the proprietary nature of the projects we are
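The fitting step, matching an error model to measured base-to-point distances, is a standard nonlinear least-squares problem. As a much-simplified sketch (a hypothetical model with only one linear scale-error coefficient per axis, not the full six-degree-of-freedom kinematic model described above):

```python
import numpy as np
from scipy.optimize import least_squares

def predicted_dist(s, base, pts):
    """Distance from one fixed base to each functional point, under a toy
    per-axis linear scale-error model: actual = commanded * (1 + s)."""
    actual = pts * (1.0 + s)
    return np.linalg.norm(actual - base, axis=1)

def fit_scale_errors(bases, pts, dists):
    """Nonlinear least-squares fit of s to LBB-style distance measurements."""
    def resid(s):
        return np.concatenate([predicted_dist(s, b, pts) - d
                               for b, d in zip(bases, dists)])
    return least_squares(resid, np.zeros(3)).x

# synthetic check: recover known scale errors from noise-free distances
rng = np.random.default_rng(3)
s_true = np.array([1e-4, -2e-4, 5e-5])
pts = rng.uniform(50.0, 500.0, (10, 3))       # commanded positions (mm)
bases = [np.array([-100.0, 0.0, 0.0]),
         np.array([0.0, -100.0, 0.0]),
         np.array([0.0, 0.0, -100.0])]
dists = [predicted_dist(s_true, b, pts) for b in bases]
s_fit = fit_scale_errors(bases, pts, dists)
```

In the paper's full formulation the unknown vector would also include the base locations themselves and the remaining parametric error terms; the least-squares machinery is the same.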
Scalable fault tolerant algorithms for linear-scaling coupled-cluster electronic structure methods.
Energy Technology Data Exchange (ETDEWEB)
Leininger, Matthew L.; Nielsen, Ida Marie B.; Janssen, Curtis L.
2004-10-01
By means of coupled-cluster theory, molecular properties can be computed with an accuracy often exceeding that of experiment. The high-degree polynomial scaling of the coupled-cluster method, however, remains a major obstacle in the accurate theoretical treatment of mainstream chemical problems, despite tremendous progress in computer architectures. Although it has long been recognized that this super-linear scaling is non-physical, the development of efficient reduced-scaling algorithms for massively parallel computers has not been realized. We here present a locally correlated, reduced-scaling, massively parallel coupled-cluster algorithm. A sparse data representation for handling distributed, sparse multidimensional arrays has been implemented along with a set of generalized contraction routines capable of handling such arrays. The parallel implementation entails a coarse-grained parallelization, reducing interprocessor communication and distributing the largest data arrays but replicating as many arrays as possible without introducing memory bottlenecks. The performance of the algorithm is illustrated by several series of runs for glycine chains using a Linux cluster with an InfiniBand interconnect.
Dual mean field search for large scale linear and quadratic knapsack problems
Banda, Juan; Velasco, Jonás; Berrones, Arturo
2017-07-01
An implementation of mean field annealing to deal with large-scale linear and non-linear binary optimization problems is given. Mean field annealing is based on the analogy between combinatorial optimization and interacting physical systems at thermal equilibrium. Specifically, a mean field approximation of the Boltzmann distribution given by a Lagrangian that encompasses the objective function and the constraints is calculated. The original discrete task is in this way transformed into a continuous variational problem. In our version of mean field annealing, no temperature parameter is used, but a good starting point in the dual space is given by a 'thermodynamic limit' argument. The method is tested on linear and quadratic knapsack problems with sizes that are considerably larger than those used in previous studies of mean field annealing. Dual mean field annealing is capable of finding high-quality solutions in running times that are orders of magnitude shorter than state-of-the-art algorithms. Moreover, as may be expected for a mean field theory, the solutions tend to be more accurate as the number of variables grows.
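The flavor of the approach, mean-field occupation probabilities combined with a search in the dual (multiplier) space, can be sketched for the linear knapsack. This is an illustrative toy with a fixed low temperature and plain bisection on the multiplier, not the paper's thermodynamic-limit starting point:

```python
import numpy as np
from scipy.special import expit   # numerically safe sigmoid

def mean_field_knapsack(c, w, W, T=0.01):
    """Maximize c.x s.t. w.x <= W, x in {0,1}^n, via mean-field probabilities
    x_i = sigmoid((c_i - lam * w_i) / T) and bisection on the dual variable lam."""
    def occupations(lam):
        return expit((c - lam * w) / T)
    lo, hi = 0.0, float(np.max(c / w)) + 1.0   # at lam = hi the load is ~0 <= W
    for _ in range(100):
        lam = 0.5 * (lo + hi)
        if occupations(lam) @ w > W:
            lo = lam                           # constraint violated: raise the price
        else:
            hi = lam
    return (occupations(hi) > 0.5).astype(int)

c = np.array([6.0, 5.0, 4.0])
w = np.array([3.0, 2.0, 1.0])
x = mean_field_knapsack(c, w, W=3.0)           # picks the two best-ratio items
```

At low temperature the mean-field probabilities sharpen into a threshold on the profit-to-weight ratio, which is why the dual search alone already yields good feasible solutions for this toy case.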
DEFF Research Database (Denmark)
Wang, Zhaohui; Folsø, Rasmus; Bondini, Francesca
1999-01-01
, full-scale measurements have been performed on board a 128 m monohull fast ferry. This paper deals with the results from these full-scale measurements. The primary results considered are pitch motion, midship vertical bending moment and vertical acceleration at the bow. Previous comparisons between...
Bringing about matrix sparsity in linear-scaling electronic structure calculations.
Rubensson, Emanuel H; Rudberg, Elias
2011-05-01
The performance of linear-scaling electronic structure calculations depends critically on matrix sparsity. This article gives an overview of different strategies for removal of small matrix elements, with emphasis on schemes that allow for rigorous control of errors. In particular, a novel scheme is proposed that has significantly smaller computational overhead compared with the Euclidean norm-based truncation scheme of Rubensson et al. (J Comput Chem 2009, 30, 974) while still achieving the desired asymptotic behavior required for linear scaling. Small matrix elements are removed while ensuring that the Euclidean norm of the error matrix stays below a desired value, so that the resulting error in the occupied subspace can be controlled. The efficiency of the new scheme is investigated in benchmark calculations for water clusters including up to 6523 water molecules. Furthermore, the foundation of matrix sparsity is investigated. This includes a study of the decay of matrix element magnitude with distance between basis function centers for different molecular systems and different methods. The studied methods include Hartree–Fock and density functional theory using both pure and hybrid functionals. The relation between band gap and decay properties of the density matrix is also discussed.
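A minimal sketch of norm-controlled truncation, assuming a dense NumPy matrix: the smallest-magnitude elements are removed while the Frobenius norm of the error matrix is kept below a threshold. Since the Euclidean (spectral) norm is bounded by the Frobenius norm, this also controls the error norm discussed above, though the scheme of Rubensson et al. is more refined than this stand-in.

```python
import numpy as np

def truncate(A, tau):
    """Zero out the smallest-magnitude elements of A while keeping the
    Frobenius norm of the error matrix below tau.  Because
    ||E||_2 <= ||E||_F, the Euclidean norm of the error (the quantity
    the truncation schemes above aim to control) is bounded as well.
    """
    flat = np.abs(A).ravel()
    order = np.argsort(flat)                  # smallest magnitudes first
    cum = np.cumsum(flat[order] ** 2)         # squared error if these are removed
    k = np.searchsorted(cum, tau ** 2, side="right")   # how many we may drop
    mask = np.ones(A.size, bool)
    mask[order[:k]] = False
    return (A.ravel() * mask).reshape(A.shape)
```

In a linear-scaling code the matrix would be blocked and sparse; the sorting step here is for clarity, not performance.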
Robust linear equation dwell time model compatible with large scale discrete surface error matrix.
Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen
2015-04-01
The linear equation dwell time model can translate the 2D convolution process of material removal during subaperture polishing into a more intuitional expression, and may provide relatively fast and reliable results. However, the accurate solution of this ill-posed equation is not so easy, and its practicability for a large scale surface error matrix is still limited. This study first solves this ill-posed equation by Tikhonov regularization and the least square QR decomposition (LSQR) method, and automatically determines an optional interval and a typical value for the damped factor of regularization, which are dependent on the peak removal rate of tool influence functions. Then, a constrained LSQR method is presented to increase the robustness of the damped factor, which can provide more consistent dwell time maps than traditional LSQR. Finally, a matrix segmentation and stitching method is used to cope with large scale surface error matrices. Using these proposed methods, the linear equation model becomes more reliable and efficient in practical engineering.
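The Tikhonov-regularized solution can be sketched in a few lines for a small dense system; practical dwell-time solvers use sparse LSQR with a damping factor, but the damped normal equations below have the same structure. The names `R`, `e`, and `damp` are illustrative, not the paper's notation.

```python
import numpy as np

def dwell_time_tikhonov(R, e, damp):
    """Solve the linearized dwell-time equation R t ~= e by Tikhonov
    regularization: minimize ||R t - e||^2 + damp^2 ||t||^2.

    R is the discretized removal matrix built from the tool influence
    function, e the surface error vector, t the dwell-time map.  This is
    a small dense stand-in for the sparse LSQR solver used in practice.
    """
    n = R.shape[1]
    t = np.linalg.solve(R.T @ R + damp**2 * np.eye(n), R.T @ e)
    return np.clip(t, 0.0, None)   # dwell times must be nonnegative
```

Increasing `damp` trades fidelity for robustness of the solution, which is exactly the role of the damped factor discussed above.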
Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul
2015-11-14
Hamiltonian Dielectric Solvent (HADES) is a recent method [S. Bauer et al., J. Chem. Phys. 140, 104103 (2014)] which enables atomistic Hamiltonian molecular dynamics (MD) simulations of peptides and proteins in dielectric solvent continua. Such simulations become rapidly impractical for large proteins, because the computational effort of HADES scales quadratically with the number N of atoms. If one tries to achieve linear scaling by applying a fast multipole method (FMM) to the computation of the HADES electrostatics, the Hamiltonian character (conservation of total energy, linear, and angular momenta) may get lost. Here, we show that the Hamiltonian character of HADES can be almost completely preserved, if the structure-adapted fast multipole method (SAMM) as recently redesigned by Lorenzen et al. [J. Chem. Theory Comput. 10, 3244-3259 (2014)] is suitably extended and is chosen as the FMM module. By this extension, the HADES/SAMM forces become exact gradients of the HADES/SAMM energy. Their translational and rotational invariance then guarantees (within the limits of numerical accuracy) the exact conservation of the linear and angular momenta. Also, the total energy is essentially conserved-up to residual algorithmic noise, which is caused by the periodically repeated SAMM interaction list updates. These updates entail very small temporal discontinuities of the force description, because the employed SAMM approximations represent deliberately balanced compromises between accuracy and efficiency. The energy-gradient corrected version of SAMM can also be applied, of course, to MD simulations of all-atom solvent-solute systems enclosed by periodic boundary conditions. However, as we demonstrate in passing, this choice does not offer any serious advantages.
Energy Technology Data Exchange (ETDEWEB)
Lorenzen, Konstantin; Mathias, Gerald; Tavan, Paul, E-mail: tavan@physik.uni-muenchen.de [Lehrstuhl für BioMolekulare Optik, Ludig–Maximilians Universität München, Oettingenstr. 67, 80538 München (Germany)
2015-11-14
Yang, Shujiang; Kertesz, Miklos
2006-12-01
The two bond-length-alternation-related backbone carbon-carbon stretching Raman-active normal modes of polyacetylene are notoriously difficult to predict theoretically. We apply our new linear/exponential scaled quantum mechanical force field scheme to tackle this problem by exponentially adjusting the decay of the coupling force constants between backbone stretches based on their separation, which extends over many neighbors. With transferable scaling parameters optimized by least-squares fitting to the experimental vibrational frequencies of short oligoenes, the scaled frequencies of trans-polyacetylene and its isotopic analogs agree very well with experiment. The linear/exponential scaling scheme is also applicable to cis-polyacetylene.
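A hedged sketch of such a linear/exponential scaling scheme: diagonal force constants receive a conventional linear scale factor, while stretch-stretch couplings are additionally damped exponentially with the distance between the coordinates. The functional form and the parameters `c_lin` and `c_exp` are illustrative assumptions, not the authors' fitted values.

```python
import numpy as np

def scale_force_constants(F, dist, c_lin, c_exp):
    """Linear/exponential SQM-style scaling sketch.

    F    : force-constant matrix in internal coordinates
    dist : matching matrix of distances between coordinate centers
    Diagonal elements get only the linear factor c_lin; off-diagonal
    couplings are additionally damped as exp(-c_exp * distance).
    """
    scaled = c_lin * F * np.exp(-c_exp * dist)
    np.fill_diagonal(scaled, c_lin * np.diag(F))   # no distance decay on the diagonal
    return scaled
```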
Institute of Scientific and Technical Information of China (English)
De-tong Zhu
2009-01-01
In this paper we extend and improve the classical affine scaling interior-point Newton method for solving nonlinear optimization subject to linear inequality constraints in the absence of the strict complementarity assumption. Introducing a computationally efficient technique and employing an identification function for the definition of the new affine scaling matrix, we propose and analyze a new affine scaling interior-point Newton method which improves the Coleman and Li affine scaling matrix in [2] for solving linear inequality constrained optimization. Local superlinear and quadratic convergence of the proposed algorithm is established under the strong second-order sufficiency condition without assuming strict complementarity of the solution.
Weeden, George S; Wang, Nien-Hwa Linda
2017-04-14
Simulated Moving Bed (SMB) systems with linear adsorption isotherms have been used for many different separations, including large-scale sugar separations. While SMBs are much more efficient than batch operations, they are not widely used for large-scale production because there are two key barriers. The methods for design, optimization, and scale-up are complex for non-ideal systems. The Speedy Standing Wave Design (SSWD) is developed here to reduce these barriers. The productivity (PR) and the solvent efficiency (F/D) are explicitly related to seven material properties and 13 design parameters. For diffusion-controlled systems, the maximum PR or F/D is controlled by two key dimensionless material properties, the selectivity (α) and the effective diffusivity ratio (η), and two key dimensionless design parameters, the ratios of step time/diffusion time and pressure-limited convection time/diffusion time. The optimum column configuration for maximum PR or F/D is controlled by the weighted diffusivity ratio (η/α²). In general, high α and low η/α² favor high PR and F/D. The productivity is proportional to the ratio of the feed concentration to the diffusion time. Small particles and high diffusivities favor high productivity, but do not affect solvent efficiency. Simple scaling rules are derived from the two key dimensionless design parameters. The separation of acetic acid from glucose in biomass hydrolysate is used as an example to show how the productivity and the solvent efficiency are affected by the key dimensionless material and design parameters. Ten design parameters are optimized for maximum PR or minimum cost in one minute on a laptop computer. If the material properties are the same for different particle sizes and the dimensionless groups are kept constant, then lab-scale testing consumes less material and can be done four times faster using particles with half the particle size.
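The quoted particle-size rule follows from keeping the step-time/diffusion-time group constant while the intraparticle diffusion time scales with the particle radius squared; a one-line sketch with illustrative names:

```python
def scaled_step_time(step_time, particle_ratio):
    """Scaling-rule sketch: the dimensionless group step_time / diffusion_time
    must stay constant, and the diffusion time scales with particle radius
    squared, so halving the particle size (particle_ratio = 0.5) allows a
    step time shorter by a factor of four, i.e. a four-times-faster run.
    """
    return step_time * particle_ratio ** 2
```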
Trace Conserving Purification for Linear Scaling [O(N)] Methods: A First Enhancement to CP2K
2014-09-01
Only title-page and figure-caption fragments of this record survived extraction: the report is ARL-CR-0746 by Jonathan Mullin (Proving Ground, MD 21005-5069, September 2014); the recovered captions describe purification scheme times in CP2K, with timings normalized to TRS4 for each band gap, and a graphical representation of the 1024-water box (Fig. 2).
The Front-End Readout as an Encoder IC for Magneto-Resistive Linear Scale Sensors
Tran, Trong-Hieu; Chao, Paul Chang-Po; Chien, Ping-Chieh
2016-01-01
This study proposes a front-end readout circuit as an encoder chip for magneto-resistance (MR) linear scales. A typical MR sensor consists of two major parts: one is its base structure, also called the magnetic scale, which is embedded with multiple grid MR electrodes, while another is an “MR reader” stage with magnets inside and moving on the rails of the base. As the stage is in motion, the magnetic interaction between the moving stage and the base causes the variation of the magneto-resistances of the grid electrodes. In this study, a front-end readout IC chip is successfully designed and realized to acquire temporally-varying resistances in electrical signals as the stage is in motions. The acquired signals are in fact sinusoids and co-sinusoids, which are further deciphered by the front-end readout circuit via newly-designed programmable gain amplifiers (PGAs) and analog-to-digital converters (ADCs). The PGA is particularly designed to amplify the signals up to full dynamic ranges and up to 1 MHz. A 12-bit successive approximation register (SAR) ADC for analog-to-digital conversion is designed with linearity performance of ±1 in the least significant bit (LSB) over the input range of 0.5–2.5 V from peak to peak. The chip was fabricated by the Taiwan Semiconductor Manufacturing Company (TSMC) 0.35-micron complementary metal oxide semiconductor (CMOS) technology for verification with a chip size of 6.61 mm2, while the power consumption is 56 mW from a 5-V power supply. The measured integral non-linearity (INL) is −0.79–0.95 LSB while the differential non-linearity (DNL) is −0.68–0.72 LSB. The effective number of bits (ENOB) of the designed ADC is validated as 10.86 for converting the input analog signal to digital counterparts. Experimental validation was conducted. A digital decoder is orchestrated to decipher the harmonic outputs from the ADC via interpolation to the position of the moving stage. It was found that the displacement measurement
Examining item-position effects in large-scale assessment using the Linear Logistic Test Model
Directory of Open Access Journals (Sweden)
CHRISTINE HOHENSINN
2008-09-01
When administering large-scale assessments, item-position effects are of particular importance because the applied test designs very often contain several test booklets with the same items presented at different positions. Establishing such position effects would be most critical: it would mean that the estimated item parameters depend not only on the items' content-related difficulties but also on their presentation positions. As a consequence, item calibration would be biased. Item-position effects can be tested by means of the linear logistic test model (LLTM). In this paper, the results of a simulation study demonstrating how the LLTM is indeed able to detect certain position effects in the framework of a large-scale assessment are presented first. Second, empirical item-position effects of a specific large-scale competence assessment in mathematics (4th-grade students) are analyzed using the LLTM. The results indicate that a small fatigue effect seems to take place. The most important consequence of this paper is that it is advisable to run pertinent simulation studies before analyzing empirical data, because, for the given example, the suggested likelihood-ratio test neither holds the nominal type-I risk nor qualifies as "robust", and furthermore occasionally shows very low power.
Acceleration in the linear non-scaling fixed-field alternating-gradient accelerator EMMA
Machida, S.; Barlow, R.; Berg, J. S.; Bliss, N.; Buckley, R. K.; Clarke, J. A.; Craddock, M. K.; D'Arcy, R.; Edgecock, R.; Garland, J. M.; Giboudot, Y.; Goudket, P.; Griffiths, S.; Hill, C.; Hill, S. F.; Hock, K. M.; Holder, D. J.; Ibison, M. G.; Jackson, F.; Jamison, S. P.; Johnstone, C.; Jones, J. K.; Jones, L. B.; Kalinin, A.; Keil, E.; Kelliher, D. J.; Kirkman, I. W.; Koscielniak, S.; Marinov, K.; Marks, N.; Martlew, B.; McIntosh, P. A.; McKenzie, J. W.; Méot, F.; Middleman, K. J.; Moss, A.; Muratori, B. D.; Orrett, J.; Owen, H. L.; Pasternak, J.; Peach, K. J.; Poole, M. W.; Rao, Y.-N.; Saveliev, Y.; Scott, D. J.; Sheehy, S. L.; Shepherd, B. J. A.; Smith, R.; Smith, S. L.; Trbojevic, D.; Tzenov, S.; Weston, T.; Wheelhouse, A.; Williams, P. H.; Wolski, A.; Yokoi, T.
2012-03-01
In a fixed-field alternating-gradient (FFAG) accelerator, eliminating pulsed magnet operation permits rapid acceleration to synchrotron energies, but with a much higher beam-pulse repetition rate. Conceived in the 1950s, FFAGs are enjoying renewed interest, fuelled by the need to rapidly accelerate unstable muons for future high-energy physics colliders. Until now a 'scaling' principle has been applied to avoid beam blow-up and loss. Removing this restriction produces a new breed of FFAG, a non-scaling variant, allowing powerful advances in machine characteristics. We report on the first non-scaling FFAG, in which orbits are compacted to within 10 mm in radius over an electron momentum range of 12-18 MeV/c. In this strictly linear-gradient FFAG, unstable beam regions are crossed, but acceleration via a novel serpentine channel is so rapid that no significant beam disruption is observed. This result has significant implications for future particle accelerators, particularly muon and high-intensity proton accelerators.
Cagle, Christopher M. (Inventor); Schlecht, Robin W. (Inventor)
2014-01-01
A flexible volumetric structure has a first spring that defines a three-dimensional volume and includes a serpentine structure elongatable and compressible along a length thereof. A second spring is coupled to at least one outboard edge region of the first spring. The second spring is a sheet-like structure capable of elongation along an in-plane dimension thereof. The second spring is oriented such that its in-plane dimension is aligned with the length of the first spring's serpentine structure.
Linear-scaling generation of potential energy surfaces using a double incremental expansion
König, Carolin
2016-01-01
We present a combination of the incremental expansion of potential energy surfaces (PESs), known as n-mode expansion, with the incremental evaluation of the electronic energy in a many-body approach. The application of semi-local coordinates in this context allows the generation of PESs in a very cost-efficient way. For this, we employ the recently introduced FALCON (Flexible Adaptation of Local COordinates of Nuclei) coordinates. By introducing an additional transformation step, concerning only a fraction of the vibrational degrees of freedom, we can achieve linear scaling of the accumulated cost of the single point calculations required in the PES generation. Numerical examples of these double incremental approaches for oligo-phenyl examples show fast convergence with respect to the maximum number of simultaneously treated fragments and only a modest error introduced by the additional transformation step. The approach, presented here, represents a major step towards the applicability of vibrational wave fun...
Mixed-Mode Oscillations in a piecewise linear system with multiple time scale coupling
Fernández-García, S.; Krupa, M.; Clément, F.
2016-10-01
In this work, we analyze a four dimensional slow-fast piecewise linear system with three time scales presenting Mixed-Mode Oscillations. The system possesses an attractive limit cycle along which oscillations of three different amplitudes and frequencies can appear, namely, small oscillations, pulses (medium amplitude) and one surge (largest amplitude). In addition to proving the existence and attractiveness of the limit cycle, we focus our attention on the canard phenomena underlying the changes in the number of small oscillations and pulses. We analyze locally the existence of secondary canards leading to the addition or subtraction of one small oscillation and describe how this change is globally compensated for or not with the addition or subtraction of one pulse.
Pavanello, Michele; Visscher, Lucas; Neugebauer, Johannes
2012-01-01
Quantum-mechanical methods that are both computationally fast and accurate are not yet available for electronic excitations having charge-transfer character. In this work, we present a significant step towards this goal for those charge-transfer excitations that take place between non-covalently bound molecules. In particular, we present a method that scales linearly with the number of non-covalently bound molecules in the system and is based on a two-pronged approach: the molecular electronic structure of broken-symmetry charge-localized states is obtained with the Frozen Density Embedding formulation of subsystem density-functional theory; subsequently, in a post-SCF calculation, the full-electron Hamiltonian and overlap matrix elements among the charge-localized states are evaluated with an algorithm which takes full advantage of the subsystem DFT density partitioning technique. The method is benchmarked against coupled-cluster calculations and achieves chemical accuracy for the systems considered...
ON DECENTRALIZED STABILIZATION OF LINEAR LARGE SCALE SYSTEMS WITH SYMMETRIC CIRCULANT STRUCTURE
Institute of Scientific and Technical Information of China (English)
金朝永; 张湘伟
2004-01-01
The decentralized stabilization of continuous and discrete linear large scale systems with symmetric circulant structure was studied, and a few sufficient conditions for the decentralized stabilization of such systems were proposed. For the continuous systems, by introducing a concept called the magnitude of the interconnected structure, a very important property is obtained: once the magnitude of the interconnected structure of the overall system is given, the decentralized stabilization of the system is fully determined by the structure of each isolated subsystem. Decentralized stabilization can therefore be achieved by appropriately designing or modifying the structure of each isolated subsystem, no matter how complicated the interconnected structure of the overall system is. An algorithm for obtaining a decentralized state feedback to stabilize the overall system is given. The discrete systems were also discussed. The results show that there is a great difference in decentralized stabilization between the continuous and discrete cases.
Ligand Discrimination in Myoglobin from Linear-Scaling DFT+U
Cole, Daniel J; Payne, Mike C
2013-01-01
Myoglobin modulates the binding of diatomic molecules to its heme group via hydrogen-bonding and steric interactions with neighboring residues, and is an important benchmark for computational studies of biomolecules. We have performed calculations on the heme binding site and a significant proportion of the protein environment (more than 1000 atoms) using linear-scaling density functional theory and the DFT+U method to correct for self-interaction errors associated with localized 3d states. We confirm both the hydrogen-bonding nature of the discrimination effect (3.6 kcal/mol) and assumptions that the relative strain energy stored in the protein is low (less than 1 kcal/mol). Our calculations significantly widen the scope for tackling problems in drug design and enzymology, especially in cases where electron localization, allostery or long-ranged polarization influence ligand binding and reaction.
Non-linear shrinkage estimation of large-scale structure covariance
Joachimi, Benjamin
2017-03-01
In many astrophysical settings, covariance matrices of large data sets have to be determined empirically from a finite number of mock realizations. The resulting noise degrades inference and precludes it completely if there are fewer realizations than data points. This work applies a recently proposed non-linear shrinkage estimator of covariance to a realistic example from large-scale structure cosmology. After optimizing its performance for the usage in likelihood expressions, the shrinkage estimator yields subdominant bias and variance comparable to that of the standard estimator with a factor of ∼50 less realizations. This is achieved without any prior information on the properties of the data or the structure of the covariance matrix, at a negligible computational cost.
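For intuition, the snippet below implements plain linear shrinkage toward a scaled identity, a simpler cousin of the non-linear estimator discussed above; it likewise yields a well-conditioned covariance when there are fewer realizations than data points. The shrinkage intensity `alpha` is a free parameter here, whereas the non-linear estimator tunes the eigenvalue correction automatically.

```python
import numpy as np

def shrink_cov(X, alpha):
    """Linear shrinkage of a sample covariance toward a scaled identity:

        S* = (1 - alpha) * S + alpha * mean(diag(S)) * I

    X has shape (n_realizations, n_data_points).  Even when S is singular
    (n_realizations < n_data_points), S* is positive definite for
    0 < alpha <= 1, so it can be inverted in a likelihood expression.
    The trace of S is preserved by construction.
    """
    S = np.cov(X, rowvar=False)
    mu = np.trace(S) / S.shape[0]          # average sample variance
    return (1 - alpha) * S + alpha * mu * np.eye(S.shape[0])
```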
Directory of Open Access Journals (Sweden)
Paula Kersten
OBJECTIVES: Pain visual analogue scales (VAS) are commonly used in clinical trials and are often treated as an interval-level scale without evidence that this is appropriate. This paper examines the internal construct validity and responsiveness of the pain VAS using Rasch analysis. METHODS: Patients (n = 221, mean age 67, 58% female) with chronic stable joint pain (hip 40%, knee 60%) of mechanical origin, waiting for joint replacement, were included. Pain was scored on seven daily VASs. Rasch analysis was used to examine fit to the Rasch model. Responsiveness (Standardized Response Means, SRMs) was examined on the raw ordinal data and the interval data generated from the Rasch analysis. RESULTS: Baseline pain VAS scores fitted the Rasch model, although 15 aberrant cases impacted on unidimensionality. There was some local dependency between items, but this did not significantly affect the person estimates of pain. Daily pain (item difficulty) was stable, suggesting that single measures can be used. Overall, the SRMs derived from ordinal data overestimated the true responsiveness by 59%. Changes over time at the lower and higher ends of the scale were represented by large jumps in interval-equivalent data points; in the middle of the scale the reverse was seen. CONCLUSIONS: The pain VAS is a valid tool for measuring pain at one point in time. However, the pain VAS does not behave linearly, and SRMs vary along the trait of pain. Consequently, Minimum Clinically Important Differences using raw data, or change scores in general, are invalid, as these will either under- or overestimate true change; raw pain VAS data should not be used as a primary outcome measure or to inform parametric-based randomised controlled trial power calculations in research studies; and Rasch analysis should be used to convert ordinal data to interval data prior to data interpretation.
Exponents of non-linear clustering in scale-free one dimensional cosmological simulations
Benhaiem, David; Sicard, François
2012-01-01
One dimensional versions of cosmological N-body simulations have been shown to share many qualitative behaviours of the three dimensional problem. They can resolve a large range of time and length scales, and admit exact numerical integration. We use such models to study how non-linear clustering depends on initial conditions and cosmology. More specifically, we consider a family of models which, like the 3D EdS model, lead for power-law initial conditions to self-similar clustering characterized in the strongly non-linear regime by power-law behaviour of the two point correlation function. We study how the corresponding exponent γ depends on the initial conditions, characterized by the exponent n of the power spectrum of initial fluctuations, and on a single parameter κ controlling the rate of expansion. The space of initial conditions/cosmology divides very clearly into two parts: (1) a region in which γ depends strongly on both n and κ and where it agrees very well with a simple general...
Preservation of local linearity by neighborhood subspace scaling for solving the pre-image problem
Institute of Scientific and Technical Information of China (English)
Sheng-kai YANG; Jian-yi MENG; Hai-bin SHEN
2014-01-01
An important issue involved in kernel methods is the pre-image problem. However, it is an ill-posed problem, as the solution is usually nonexistent or not unique. In contrast to direct methods aimed at minimizing the distance in feature space, indirect methods aimed at constructing approximate equivalent models have shown outstanding performance. In this paper, an indirect method for solving the pre-image problem is proposed. In the proposed algorithm, an inverse mapping process is constructed based on a novel framework that preserves local linearity. In this framework, a local nonlinear transformation is implicitly conducted by neighborhood subspace scaling transformation to preserve the local linearity between feature space and input space. By extending the inverse mapping process to test samples, we can obtain pre-images in input space. The proposed method is non-iterative, and can be used for any kernel functions. Experimental results based on image denoising using kernel principal component analysis (PCA) show that the proposed method outperforms the state-of-the-art methods for solving the pre-image problem.
A linear systems analysis of the yaw dynamics of a dynamically scaled insect model.
Dickson, William B; Polidoro, Peter; Tanner, Melissa M; Dickinson, Michael H
2010-09-01
Recent studies suggest that fruit flies use subtle changes to their wing motion to actively generate forces during aerial maneuvers. In addition, it has been estimated that the passive rotational damping caused by the flapping wings of an insect is around two orders of magnitude greater than that for the body alone. At present, however, the relationships between the active regulation of wing kinematics, passive damping produced by the flapping wings and the overall trajectory of the animal are still poorly understood. In this study, we use a dynamically scaled robotic model equipped with a torque feedback mechanism to study the dynamics of yaw turns in the fruit fly Drosophila melanogaster. Four plausible mechanisms for the active generation of yaw torque are examined. The mechanisms deform the wing kinematics of hovering in order to introduce asymmetry that results in the active production of yaw torque by the flapping wings. The results demonstrate that the stroke-averaged yaw torque is well approximated by a model that is linear with respect to both the yaw velocity and the magnitude of the kinematic deformations. Dynamic measurements, in which the yaw torque produced by the flapping wings was used in real-time to determine the rotation of the robot, suggest that a first-order linear model with stroke-average coefficients accurately captures the yaw dynamics of the system. Finally, an analysis of the stroke-average dynamics suggests that both damping and inertia will be important factors during rapid body saccades of a fruit fly.
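The first-order linear model described above can be sketched numerically. The inertia, damping, and torque values below are illustrative placeholders, not the fitted coefficients from the robotic experiments:

```python
# First-order linear yaw model:  I * domega/dt = N_active - C * omega,
# where C is the stroke-averaged damping from the flapping wings and
# N_active is the stroke-averaged torque from the kinematic deformation.

def simulate_yaw(I, C, N_active, omega0=0.0, dt=1e-4, t_end=2.0):
    """Forward-Euler integration of the first-order yaw equation."""
    omega = omega0
    for _ in range(int(t_end / dt)):
        domega = (N_active - C * omega) / I
        omega += domega * dt
    return omega

# The linear model predicts exponential approach to the steady-state
# yaw velocity N_active / C with time constant I / C.
omega_final = simulate_yaw(I=1.0, C=4.0, N_active=2.0)
```

With these toy values the steady state is N_active / C = 0.5, reached after a few time constants.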
Karimi, Samaneh; Abdulkhani, Ali; Tahir, Paridah Md; Dufresne, Alain
2016-10-01
Cellulosic nanofibers (NFs) from kenaf bast were used to reinforce glycerol-plasticized thermoplastic starch (TPS) matrices at varying contents (0-10 wt%). The composites were prepared by the casting/evaporation method. Raw fiber (RF)-reinforced TPS films were prepared with the same contents and conditions. The aim of the study was to investigate the effects of filler dimension and loading on the linear and non-linear mechanical performance of the fabricated materials. The results clearly demonstrated that the NF-reinforced composites had significantly greater mechanical performance than their RF-reinforced counterparts. This was attributed to the high aspect ratio and nanoscale dimensions of the reinforcing agents, as well as their compatibility with the TPS matrix, resulting in strong fiber/matrix interaction. Tensile strength and Young's modulus increased by 313% and 343%, respectively, as the NF content increased from 0 to 10 wt%. Dynamic mechanical analysis (DMA) revealed an upward trend in the glass transition temperature of amylopectin-rich domains in the composites. The largest shift, +18.5°C, was recorded for the film reinforced with 8% NF. This finding implies efficient dispersion of the nanofibers in the matrix and their ability to form a network and restrict the mobility of the system.
Weston, Joseph; Waintal, Xavier
2016-04-01
We report on a "source-sink" algorithm which allows one to calculate time-resolved physical quantities from a general nanoelectronic quantum system (described by an arbitrary time-dependent quadratic Hamiltonian) connected to infinite electrodes. Although mathematically equivalent to the nonequilibrium Green's function formalism, the approach is based on the scattering wave functions of the system. It amounts to solving a set of generalized Schrödinger equations that include an additional "source" term (coming from the time-dependent perturbation) and an absorbing "sink" term (the electrodes). The algorithm execution time scales linearly with both system size and simulation time, allowing one to simulate large systems (currently around 10^6 degrees of freedom) and/or large times (currently around 10^5 times the smallest time scale of the system). As an application we calculate the current-voltage characteristics of a Josephson junction for both short and long junctions, and recover the multiple Andreev reflection physics. We also discuss two intrinsically time-dependent situations: the relaxation time of a Josephson junction after a quench of the voltage bias, and the propagation of voltage pulses through a Josephson junction. In the case of a ballistic, long Josephson junction, we predict that a fast voltage pulse creates an oscillatory current whose frequency is controlled by the Thouless energy of the normal part. A similar effect is found for short junctions; a voltage pulse produces an oscillating current which, in the absence of electromagnetic environment, does not relax.
Beigy, Hamid; Ahmad, Ashar; Masoudi-Nejad, Ali; Fröhlich, Holger
2017-01-01
Inferring the structure of molecular networks from time series protein or gene expression data provides valuable information about the complex biological processes of the cell. Causal network structure inference has been approached using different methods in the past. Most causal network inference techniques, such as Dynamic Bayesian Networks and ordinary differential equations, are limited by their computational complexity and thus make large scale inference infeasible. This is specifically true if a Bayesian framework is applied in order to deal with the unavoidable uncertainty about the correct model. We devise a novel Bayesian network reverse engineering approach using ordinary differential equations with the ability to include non-linearity. Besides modeling arbitrary, possibly combinatorial and time dependent perturbations with unknown targets, one of our main contributions is the use of Expectation Propagation, an algorithm for approximate Bayesian inference over large scale network structures in short computation time. We further explore the possibility of integrating prior knowledge into network inference. We evaluate the proposed model on DREAM4 and DREAM8 data and find it competitive against several state-of-the-art existing network inference methods. PMID:28166542
A mixed-integer linear programming approach to the reduction of genome-scale metabolic networks.
Röhl, Annika; Bockmayr, Alexander
2017-01-03
Constraint-based analysis has become a widely used method to study metabolic networks. While some of the associated algorithms can be applied to genome-scale network reconstructions with several thousands of reactions, others are limited to small or medium-sized models. In 2015, Erdrich et al. introduced a method called NetworkReducer, which reduces large metabolic networks to smaller subnetworks, while preserving a set of biological requirements that can be specified by the user. Already in 2001, Burgard et al. developed a mixed-integer linear programming (MILP) approach for computing minimal reaction sets under a given growth requirement. Here we present an MILP approach for computing minimum subnetworks with the given properties. Minimality (with respect to the number of active reactions) is not guaranteed by NetworkReducer, while the method by Burgard et al. does not allow specifying the different biological requirements. Our procedure is about 5-10 times faster than NetworkReducer and can enumerate all minimum subnetworks when several exist. This allows identifying common reactions that are present in all subnetworks, as well as reactions appearing in alternative pathways. Applying complex analysis methods to genome-scale metabolic networks is often not possible in practice, so it may become necessary to reduce the size of the network while keeping important functionalities. We propose an MILP solution to this problem. Compared to previous work, our approach is more efficient and allows computing not only one, but all minimum subnetworks satisfying the required properties.
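The enumeration the MILP performs at genome scale can be illustrated with a brute-force toy. The network, the growth predicate, and the reaction names below are invented for illustration; real instances need a MILP solver, since brute force is hopeless at genome scale:

```python
from itertools import combinations

def minimum_subnetworks(reactions, requirement):
    """Enumerate ALL minimum-cardinality reaction subsets satisfying `requirement`.

    `requirement` is a predicate on a frozenset of active reaction names.
    Searching by increasing size guarantees minimality, and collecting every
    hit at the first feasible size yields all minimum subnetworks.
    """
    names = sorted(reactions)
    for size in range(len(names) + 1):
        found = [frozenset(c) for c in combinations(names, size)
                 if requirement(frozenset(c))]
        if found:
            return found  # all minimum subnetworks share this cardinality
    return []

# Toy network: growth needs A->B plus either B->C or B->D (alternative pathways).
def grows(active):
    return "A->B" in active and ("B->C" in active or "B->D" in active)

subnets = minimum_subnetworks({"A->B", "B->C", "B->D", "C->E"}, grows)
core = frozenset.intersection(*subnets)  # reactions present in every minimum subnetwork
```

The intersection of all minimum subnetworks exposes the "common reactions" mentioned above, while the symmetric differences expose the alternative pathways.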
An Integral-Direct Linear-Scaling Second-Order Møller-Plesset Approach.
Nagy, Péter R; Samu, Gyula; Kállay, Mihály
2016-10-11
An integral-direct, iteration-free, linear-scaling, local second-order Møller-Plesset (MP2) approach is presented, which is also useful for spin-scaled MP2 calculations as well as for the efficient evaluation of the perturbative terms of double-hybrid density functionals. The method is based on a fragmentation approximation: the correlation contributions of the individual electron pairs are evaluated in domains constructed for the corresponding localized orbitals, and the correlation energies of distant electron pairs are computed with multipole expansions. The required electron repulsion integrals are calculated directly invoking the density fitting approximation; the storage of integrals and intermediates is avoided. The approach also utilizes natural auxiliary functions to reduce the size of the auxiliary basis of the domains and thereby the operation count and memory requirement. Our test calculations show that the approach recovers 99.9% of the canonical MP2 correlation energy and reproduces reaction energies with an average (maximum) error below 1 kJ/mol (4 kJ/mol). Our benchmark calculations demonstrate that the new method enables MP2 calculations for molecules with more than 2300 atoms and 26000 basis functions on a single processor.
Energy Technology Data Exchange (ETDEWEB)
Tait, E. W.; Ratcliff, L. E.; Payne, M. C.; Haynes, P. D.; Hine, N. D. M.
2016-04-20
Experimental techniques for electron energy loss spectroscopy (EELS) combine high energy resolution with high spatial resolution. They are therefore powerful tools for investigating the local electronic structure of complex systems such as nanostructures, interfaces and even individual defects. Interpretation of experimental electron energy loss spectra is often challenging and can require theoretical modelling of candidate structures, which themselves may be large and complex, beyond the capabilities of traditional cubic-scaling density functional theory. In this work, we present functionality to compute electron energy loss spectra within the ONETEP linear-scaling density functional theory code. We first demonstrate that simulated spectra agree with those computed using conventional plane wave pseudopotential methods to a high degree of precision. The ability of ONETEP to tackle large problems is then exploited to investigate convergence of spectra with respect to supercell size. Finally, we apply the novel functionality to a study of the electron energy loss spectra of defects on the (1 0 1) surface of an anatase slab and determine concentrations of defects which might be experimentally detectable.
Finite-time scaling via linear driving: application to the two-dimensional Potts model.
Huang, Xianzhi; Gong, Shurong; Zhong, Fan; Fan, Shuangli
2010-04-01
We apply finite-time scaling to the q-state Potts model with q=3 and 4 on two-dimensional lattices to determine its critical properties. The method applies to the model a linearly varying external field that couples to one of its q states, manipulating the dynamics near criticality and driving the system out of equilibrium, which produces hysteresis; it also defines an order parameter other than the usual one, together with a nonequilibrium susceptibility, to extract coercive fields. From the finite-time scaling of the order parameter, the coercivity, and the hysteresis area and its derivative, we are able to determine systematically both static and dynamic critical exponents as well as the critical temperature. The static critical exponents obtained in general, and the magnetic exponent delta in particular, agree reasonably with the conjectured ones. The dynamic critical exponents obtained appear to confirm the proposed dynamic weak universality but are unlikely to agree with recent short-time dynamic results for q=4. Our results also suggest an alternative way to characterize the weak universality.
Segmented linear modeling of CHO fed‐batch culture and its application to large scale production
Ben Yahia, Bassem; Gourevitch, Boris; Malphettes, Laetitia
2016-01-01
We describe a systematic approach to model CHO metabolism during biopharmaceutical production across a wide range of cell culture conditions. To this end, we applied the metabolic steady state concept. We analyzed and modeled the production rates of metabolites as a function of the specific growth rate. First, the total number of metabolic steady state phases and the location of the breakpoints were determined by recursive partitioning. For this, the smoothed derivatives of the metabolic rates with respect to the growth rate were used, followed by hierarchical clustering of the obtained partition. We then applied a piecewise regression to the metabolic rates with the previously determined number of phases. This allowed identifying the growth rates at which the cells underwent a metabolic shift. The resulting model, with piecewise linear relationships between metabolic rates and the growth rate, described cellular metabolism in the fed-batch cultures well. Using the model structure and parameter values from a small-scale cell culture (2 L) training dataset, it was possible to predict metabolic rates of new fed-batch cultures using only the experimental specific growth rates. Such prediction was successful both at the laboratory scale with 2 L bioreactors and at the production scale of 2000 L. This type of modeling provides a flexible framework to set a solid foundation for metabolic flux analysis and mechanistic modeling. Biotechnol. Bioeng. 2017;114: 785-797. © 2016 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc. PMID:27869296
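The breakpoint-plus-piecewise-regression idea can be sketched as follows. This is a minimal two-segment breakpoint scan on synthetic data; the pipeline described above additionally uses recursive partitioning and hierarchical clustering to choose the number of phases:

```python
def ols(xs, ys):
    """Simple least-squares line fit; returns (slope, intercept, sse)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    sse = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    return slope, intercept, sse

def segmented_fit(xs, ys, min_pts=3):
    """Scan candidate breakpoints; return the split minimizing total SSE."""
    best = None
    for i in range(min_pts, len(xs) - min_pts + 1):
        left = ols(xs[:i], ys[:i])
        right = ols(xs[i:], ys[i:])
        total = left[2] + right[2]
        if best is None or total < best[0]:
            best = (total, xs[i], left[:2], right[:2])
    return best  # (sse, breakpoint, (m1, b1), (m2, b2))

# Synthetic "metabolic rate vs specific growth rate" data with a shift at mu = 0.5:
mus = [i / 20 for i in range(21)]                        # growth rates 0.0 .. 1.0
rates = [2 * m if m < 0.5 else 1.0 - 0.5 * (m - 0.5) for m in mus]
sse, bp, seg1, seg2 = segmented_fit(mus, rates)
```

The recovered breakpoint marks the growth rate at which the (toy) metabolic shift occurs, and the two slopes are the segment-wise linear relationships used for prediction.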
A Non-linear Scaling Algorithm Based on chirp-z Transform for Squint Mode FMCW-SAR
Directory of Open Access Journals (Sweden)
Yu Bin-bin
2012-03-01
A non-linear scaling chirp-z imaging algorithm for squint-mode Frequency Modulated Continuous Wave Synthetic Aperture Radar (FMCW-SAR) is presented to address the decline in focusing accuracy. Based on the non-linear range characteristics of the echo signal in the Doppler domain, a non-linear modulated signal is introduced to perform non-linear scaling via the chirp-z transform. The errors due to range compression and range migration correction are thereby reduced, improving the range resolution of the radar image. Compared with the original chirp-z algorithm, the proposed algorithm improves range resolution and image contrast for point targets while maintaining the same azimuth resolution.
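The chirp-z transform underlying such scaling algorithms can be written down directly. This is a definitional O(NM) sketch, not the paper's SAR processing chain; production code would use Bluestein's FFT-based factorization:

```python
import cmath

def czt(x, M, A, W):
    """Direct chirp-z transform: X[k] = sum_n x[n] * A**(-n) * W**(n*k)."""
    N = len(x)
    return [sum(x[n] * A ** (-n) * W ** (n * k) for n in range(N))
            for k in range(M)]

# With A = exp(2j*pi*m0/N) and W = exp(-2j*pi/N), the CZT "zooms" onto DFT
# bins m0, m0+1, ... -- a frequency-scaling operation of the kind used in
# chirp-z imaging. Here a pure tone at bin 3 should peak at zoom index 1.
N, m0 = 16, 2
x = [cmath.exp(2j * cmath.pi * 3 * n / N) for n in range(N)]   # tone at bin 3
A = cmath.exp(2j * cmath.pi * m0 / N)
W = cmath.exp(-2j * cmath.pi / N)
zoom = czt(x, 4, A, W)          # covers DFT bins 2, 3, 4, 5
peak = max(range(4), key=lambda k: abs(zoom[k]))
```

Choosing A and W off the unit-spacing grid is what allows the non-uniform (scaled) frequency sampling exploited by chirp-z scaling algorithms.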
Hyperspectral image classification based on volumetric texture and dimensionality reduction
Su, Hongjun; Sheng, Yehua; Du, Peijun; Chen, Chen; Liu, Kui
2015-06-01
A novel approach using volumetric texture and reduced spectral features is presented for hyperspectral image classification. In this approach, volumetric textural features are extracted by volumetric gray-level co-occurrence matrices (VGLCM). Spectral features are extracted by minimum estimated abundance covariance (MEAC) and linear prediction (LP)-based band selection, and by a semi-supervised k-means (SKM) clustering method with deletion of the worst cluster (SKMd) band-clustering algorithm. Moreover, four feature-combination schemes were designed for hyperspectral image classification using spectral and textural features. The proposed method using VGLCM outperforms the gray-level co-occurrence matrices (GLCM) method, and the experimental results indicate that combining spectral information with volumetric textural features leads to improved classification performance in hyperspectral imagery.
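VGLCM generalizes the 2D gray-level co-occurrence matrix by adding offsets in the third (spectral) dimension. For orientation, a minimal 2D GLCM with a single right-neighbor offset and a toy image looks like this (the image and level count are invented):

```python
def glcm(image, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset (default: right neighbor).

    M[i][j] counts how often gray level i occurs with gray level j at the
    given offset; texture statistics (contrast, homogeneity, ...) are then
    derived from the normalized matrix.
    """
    dr, dc = offset
    rows, cols = len(image), len(image[0])
    M = [[0] * levels for _ in range(levels)]
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                M[image[r][c]][image[r2][c2]] += 1
    return M

img = [[0, 0, 1],
       [0, 1, 1],
       [2, 2, 2]]
M = glcm(img, levels=3)
```

The volumetric variant simply iterates over 3D offsets such as (0, 0, 1) through the band dimension of the hyperspectral cube.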
Vos, J.M.C.
2002-01-01
This thesis describes the organisation and performance of two large-scale irrigation systems in the North Coast of Peru. Good water management is important in this area because water is scarce and irrigated agriculture provides a livelihood to many small and middle-sized farmers. Water in the coast of Peru is considered to be badly managed, however this study shows that performance is more optimal than critics assume. Apart from the relevance in the local water management discussion, the stud...
Directory of Open Access Journals (Sweden)
Russell D. Monds
2014-11-01
Diversification of cell size is hypothesized to have occurred through a process of evolutionary optimization, but direct demonstrations of causal relationships between cell geometry and fitness are lacking. Here, we identify a mutation from a laboratory-evolved bacterium that dramatically increases cell size through cytoskeletal perturbation and confers a large fitness advantage. We engineer a library of cytoskeletal mutants of different sizes and show that fitness scales linearly with respect to cell size over a wide physiological range. Quantification of the growth rates of single cells during the exit from stationary phase reveals that transitions between “feast-or-famine” growth regimes are a key determinant of cell-size-dependent fitness effects. We also uncover environments that suppress the fitness advantage of larger cells, indicating that cell-size-dependent fitness effects are subject to both biophysical and metabolic constraints. Together, our results highlight laboratory-based evolution as a powerful framework for studying the quantitative relationships between morphology and fitness.
Adding a visual linear scale probability to the PIOPED probability of pulmonary embolism.
Christiansen, F; Nilsson, T; Måre, K; Carlsson, A
1997-05-01
Reporting a lung scintigraphy diagnosis as a PIOPED categorical probability of pulmonary embolism offers the clinician a wide range of interpretation. Therefore the purpose of this study was to analyze the impact on lung scintigraphy reporting of adding a visual linear scale (VLS) probability assessment to the ordinary PIOPED categorical probability. The study material was a re-evaluation of lung scintigrams from a prospective study of 170 patients. All patients had been examined by lung scintigraphy and pulmonary angiography. The scintigrams were re-evaluated by 3 raters, and the probability of pulmonary embolism was estimated by the PIOPED categorization and by a VLS probability. The test was repeated after 6 months. There was no significant difference (p > 0.05) in the area under the ROC curve between the PIOPED categorization and the VLS for any of the 3 raters. Analysis of agreement among raters and for repeatability demonstrated low agreement in the mid-range of probabilities. A VLS probability estimate did not significantly improve the overall accuracy of the diagnosis compared to the categorical PIOPED probability assessment alone. From the data of our present study we cannot recommend the addition of a VLS score to the PIOPED categorization.
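The ROC comparison above rests on the area under the curve, which can be computed directly with the rank-sum (Mann-Whitney) identity. The scores and labels below are invented, not study data:

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the rank-sum identity:
    AUC = P(score_pos > score_neg) + 0.5 * P(tie)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical 0-100 visual linear scale readings against angiographic truth:
vls = [90, 80, 70, 30, 20, 10]
truth = [1, 1, 0, 1, 0, 0]
auc = roc_auc(vls, truth)
```

Comparing the AUC of the VLS readings with that of the categorical PIOPED scores is exactly the kind of test for which the study found no significant difference.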
A Steeper than Linear Disk Mass-Stellar Mass Scaling Relation
Pascucci, Ilaria; SLICK, EOS
2017-01-01
The disk mass is among the most important input parameters of planet formation models, as it determines the number and masses of the planets that can form. I will present an ALMA 887 micron survey of the disk population around objects from 2 to 0.03 Msun in the nearby 2 Myr-old Chamaeleon I star-forming region. Assuming isothermal and optically thin emission, we convert the 887 micron flux densities into dust disk masses (Mdust) and show that the Mdust-Mstar scaling relation is steeper than linear. By re-analyzing all millimeter data available for nearby regions in a self-consistent way, we find that the 1-3 Myr-old regions of Taurus, Lupus, and Chamaeleon I share the same Mdust-Mstar relation, while the 10 Myr-old Upper Sco association has an even steeper relation. Theoretical models of grain growth, drift, and fragmentation reproduce this trend and suggest that disks are in the fragmentation-limited regime. In this regime millimeter grains will be located closer in around lower-mass stars, a prediction that can be tested with deeper and higher spatial resolution ALMA observations.
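"Steeper than linear" is a statement about the slope of the relation in log-log space, which is easy to check on synthetic data. The normalization (3.0) and power-law index (1.8) below are illustrative, not the survey's fitted values:

```python
import math

def loglog_slope(masses_star, masses_dust):
    """Least-squares slope of log(Mdust) vs log(Mstar);
    a slope > 1 means the scaling relation is steeper than linear."""
    xs = [math.log10(m) for m in masses_star]
    ys = [math.log10(m) for m in masses_dust]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Hypothetical disks following Mdust = 3.0 * Mstar**1.8 (arbitrary units):
mstar = [0.03, 0.1, 0.3, 1.0, 2.0]
mdust = [3.0 * m ** 1.8 for m in mstar]
slope = loglog_slope(mstar, mdust)
```

A power law Mdust ∝ Mstar^a becomes a straight line of slope a in log-log space, so the fitted slope recovers the index directly.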
Negre, Christian F A; Mniszewski, Susan M; Cawkwell, Marc J; Bock, Nicolas; Wall, Michael E; Niklasson, Anders M N
2016-07-12
We present a reduced complexity algorithm to compute the inverse overlap factors required to solve the generalized eigenvalue problem in a quantum-based molecular dynamics (MD) simulation. Our method is based on the recursive, iterative refinement of an initial guess of Z (inverse square root of the overlap matrix S). The initial guess of Z is obtained beforehand by using either an approximate divide-and-conquer technique or dynamical methods, propagated within an extended Lagrangian dynamics from previous MD time steps. With this formulation, we achieve long-term stability and energy conservation even under the incomplete, approximate, iterative refinement of Z. Linear-scaling performance is obtained using numerically thresholded sparse matrix algebra based on the ELLPACK-R sparse matrix data format, which also enables efficient shared-memory parallelization. As we show in this article using self-consistent density-functional-based tight-binding MD, our approach is faster than conventional methods based on the diagonalization of overlap matrix S for systems as small as a few hundred atoms, substantially accelerating quantum-based simulations even for molecular structures of intermediate size. For a 4158-atom water-solvated polyalanine system, we find an average speedup factor of 122 for the computation of Z in each MD step.
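Iterative refinement of Z = S^(-1/2) can be illustrated with the classic Newton-Schulz iteration. This pure-Python sketch uses a scaled-identity initial guess, whereas the scheme described above recycles Z from previous MD steps and uses thresholded sparse algebra; the 2x2 overlap matrix is an invented example:

```python
def matmul(A, B):
    """Dense matrix product for small square matrices."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def inv_sqrt(S, iters=30):
    """Newton-Schulz iteration X <- 0.5 * X * (3I - X*S*X), converging to S^(-1/2).

    Converges for symmetric positive-definite S when the initial guess is
    scaled so every eigenvalue lambda satisfies 0 < (x0**2)*lambda < 3;
    X0 = I / trace(S) is a safe (if crude) choice.
    """
    n = len(S)
    t = sum(S[i][i] for i in range(n))   # trace >= largest eigenvalue for SPD S
    X = [[(1.0 / t if i == j else 0.0) for j in range(n)] for i in range(n)]
    for _ in range(iters):
        XSX = matmul(matmul(X, S), X)
        T = [[(3.0 if i == j else 0.0) - XSX[i][j] for j in range(n)]
             for i in range(n)]
        X = [[0.5 * v for v in row] for row in matmul(X, T)]
    return X

S = [[2.0, 0.3], [0.3, 1.5]]
Z = inv_sqrt(S)
check = matmul(matmul(Z, Z), S)   # Z * Z * S should be ~ identity
```

Because the iteration uses only matrix multiplications, it maps naturally onto the sparse matrix-matrix kernels that give the linear-scaling performance reported above.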
Fast and dynamic generation of linear octrees for geological bodies under hardware acceleration
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
In the application of 3D Geoscience Modeling, we often need to generate volumetric representations of geological bodies from their surface representations. The linear octree, as an efficient and easily operated volumetric model, is widely used in 3D Geoscience Modeling. This paper proposes an algorithm for fast, dynamic generation of linear octrees of geological bodies from their surface models under hardware acceleration. Z-buffers are used to determine the attributes of octants and voxels quickly, and a divide-and-conquer strategy is adopted. A stack structure records the subdivision, which allows linear octrees to be generated dynamically. The algorithm avoids a large-scale sorting process and bypasses compression during linear octree generation. Experimental results indicate its high efficiency in generating linear octrees for large-scale geological bodies.
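A stack-driven linear-octree build can be sketched as follows. Exact sphere/cube tests stand in for the Z-buffer attribute determination of the paper, and the sphere and depth are illustrative assumptions:

```python
def classify(origin, s, c=(0.5, 0.5, 0.5), r=0.4):
    """Exact sphere/cube attribute test: 'in', 'out', or 'mixed'."""
    # squared distances from the sphere centre to the cube's farthest/nearest points
    far = sum(max(abs(o - ci), abs(o + s - ci)) ** 2 for o, ci in zip(origin, c))
    near = sum(max(o - ci, ci - o - s, 0.0) ** 2 for o, ci in zip(origin, c))
    if far <= r * r:
        return "in"
    if near > r * r:
        return "out"
    return "mixed"

def build_linear_octree(attribute, max_level=3):
    """Divide-and-conquer subdivision of the unit cube, driven by an explicit stack.

    Leaves are stored as (locational code, level, attribute) tuples -- the
    "linear" octree representation, with no pointer tree kept in memory.
    """
    leaves = []
    stack = [(0, 0, (0.0, 0.0, 0.0), 1.0)]       # (code, level, origin, size)
    while stack:
        code, level, origin, s = stack.pop()
        attr = attribute(origin, s)
        if attr != "mixed" or level == max_level:
            leaves.append((code, level, attr))
            continue
        h = s / 2
        x, y, z = origin
        for octant in range(8):                   # push the 8 children
            dx, dy, dz = octant & 1, (octant >> 1) & 1, (octant >> 2) & 1
            stack.append((code * 8 + octant, level + 1,
                          (x + dx * h, y + dy * h, z + dz * h), h))
    return leaves

# Voxelize a sphere of radius 0.4 centred in the unit cube.
tree = build_linear_octree(classify)
coverage = sum(0.125 ** level for _, level, _ in tree)   # leaves tile the cube
```

The leaves always tile the whole domain (their volumes sum to 1), which is the invariant that makes the locational-code list a complete volumetric model.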
DEFF Research Database (Denmark)
D'Souza, Sonia; Rasmussen, John; Schwirtz, Ansgar
2012-01-01
and valuable ergonomic tool. Objective: To investigate age and gender effects on the torque-producing ability in the knee and elbow in older adults. To create strength scaled equations based on age, gender, upper/lower limb lengths and masses using multiple linear regression. To reduce the number of dependent...
On the Oscillation for Second-Order Half-Linear Neutral Delay Dynamic Equations on Time Scales
Directory of Open Access Journals (Sweden)
Quanxin Zhang
2014-01-01
We discuss oscillation criteria for second-order half-linear neutral delay dynamic equations on time scales by using the generalized Riccati transformation and the inequality technique. Under certain conditions, we establish four new oscillation criteria. Our results in this paper are new even for the cases 𝕋=ℝ and 𝕋=ℤ.
Energy Technology Data Exchange (ETDEWEB)
Riplinger, Christoph; Pinski, Peter; Becker, Ute; Neese, Frank, E-mail: frank.neese@cec.mpg.de (Max Planck Institute for Chemical Energy Conversion, Stiftstr. 34-36, D-45470 Mülheim an der Ruhr, Germany); Valeev, Edward F., E-mail: evaleev@vt.edu (Department of Chemistry, Virginia Tech, Blacksburg, Virginia 24061, United States)
2016-01-14
Domain based local pair natural orbital coupled cluster theory with single-, double-, and perturbative triple excitations (DLPNO-CCSD(T)) is a highly efficient local correlation method. It is known to be accurate and robust and can be used in a black box fashion in order to obtain coupled cluster quality total energies for large molecules with several hundred atoms. While previous implementations showed near linear scaling up to a few hundred atoms, several nonlinear scaling steps limited the applicability of the method for very large systems. In this work, these limitations are overcome and a linear scaling DLPNO-CCSD(T) method for closed shell systems is reported. The new implementation is based on the concept of sparse maps that was introduced in Part I of this series [P. Pinski, C. Riplinger, E. F. Valeev, and F. Neese, J. Chem. Phys. 143, 034108 (2015)]. Using the sparse map infrastructure, all essential computational steps (integral transformation and storage, initial guess, pair natural orbital construction, amplitude iterations, triples correction) are achieved in a linear scaling fashion. In addition, a number of additional algorithmic improvements are reported that lead to significant speedups of the method. The new, linear-scaling DLPNO-CCSD(T) implementation typically is 7 times faster than the previous implementation and consumes 4 times less disk space for large three-dimensional systems. For linear systems, the performance gains and memory savings are substantially larger. Calculations with more than 20 000 basis functions and 1000 atoms are reported in this work. In all cases, the time required for the coupled cluster step is comparable to or lower than for the preceding Hartree-Fock calculation, even if this is carried out with the efficient resolution-of-the-identity and chain-of-spheres approximations. The new implementation even reduces the error in absolute correlation energies by about a factor of two, compared to the already accurate
Estimation of scale parameters of logistic distribution by linear functions of sample quantiles
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
The large-sample estimation of the standard deviation of the logistic distribution employs the asymptotically best linear unbiased estimators based on sample quantiles. The sample quantiles are chosen as a pair with a single spacing. Finally, a table of the variances and efficiencies of the estimator for 5 ≤ n ≤ 65 is provided, and a comparison is made with other linear estimators.
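A quantile-based scale estimator can be sketched directly from the logistic quantile function. This is a minimal illustration with an arbitrary symmetric quantile pair, not the optimally spaced best linear unbiased estimator of the paper:

```python
import math
import random

def logistic_scale_from_quantiles(sample, p=0.25):
    """Estimate the logistic scale s from a symmetric pair of sample quantiles.

    For the logistic distribution, Q(p) = mu + s * ln(p / (1 - p)), so
    s = (Q(1-p) - Q(p)) / (2 * ln((1-p) / p)); the location mu cancels.
    """
    xs = sorted(sample)
    n = len(xs)
    lo = xs[int(p * (n - 1))]
    hi = xs[int((1 - p) * (n - 1))]
    return (hi - lo) / (2.0 * math.log((1 - p) / p))

# Draw a logistic sample by inverse-transform sampling (mu = 0, s = 2).
rng = random.Random(42)
s_true = 2.0
sample = [s_true * math.log(u / (1 - u))
          for u in (rng.random() for _ in range(5000))]
s_hat = logistic_scale_from_quantiles(sample)
```

The standard deviation then follows from the scale via sd = s * pi / sqrt(3); the paper's tables concern the variance and efficiency of such linear-in-quantiles estimators.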
Institute of Scientific and Technical Information of China (English)
Yunjuan WANG; Detong ZHU
2008-01-01
Based on a differentiable merit function proposed by Taji et al. in "Math. Prog. Stud., 58, 1993, 369-383", the authors propose an affine scaling interior trust region strategy via an optimal path to modify the Newton method for the strictly monotone variational inequality problem subject to linear equality and inequality constraints. By using the eigensystem decomposition and affine scaling mapping, the authors form an affine scaling optimal curvilinear path very easily in order to approximately solve the trust region subproblem. Theoretical analysis shows that the proposed algorithm is globally convergent and has a local quadratic convergence rate under some reasonable conditions.
Womack, James C.; Mardirossian, Narbe; Head-Gordon, Martin; Skylaris, Chris-Kriton
2016-11-01
Accurate and computationally efficient exchange-correlation functionals are critical to the successful application of linear-scaling density functional theory (DFT). Local and semi-local functionals of the density are naturally compatible with linear-scaling approaches, having a general form which assumes the locality of electronic interactions and which can be efficiently evaluated by numerical quadrature. Presently, the most sophisticated and flexible semi-local functionals are members of the meta-generalized-gradient approximation (meta-GGA) family, and depend upon the kinetic energy density, τ, in addition to the charge density and its gradient. In order to extend the theoretical and computational advantages of τ-dependent meta-GGA functionals to large-scale DFT calculations on thousands of atoms, we have implemented support for τ-dependent meta-GGA functionals in the ONETEP program. In this paper we lay out the theoretical innovations necessary to implement τ-dependent meta-GGA functionals within ONETEP's linear-scaling formalism. We present expressions for the gradient of the τ-dependent exchange-correlation energy, necessary for direct energy minimization. We also derive the forms of the τ-dependent exchange-correlation potential and kinetic energy density in terms of the strictly localized, self-consistently optimized orbitals used by ONETEP. To validate the numerical accuracy of our self-consistent meta-GGA implementation, we performed calculations using the B97M-V and PKZB meta-GGAs on a variety of small molecules. Using only a minimal basis set of self-consistently optimized local orbitals, we obtain energies in excellent agreement with large basis set calculations performed using other codes. Finally, to establish the linear-scaling computational cost and applicability of our approach to large-scale calculations, we present the outcome of self-consistent meta-GGA calculations on amyloid fibrils of increasing size, up to tens of thousands of atoms.
Directory of Open Access Journals (Sweden)
2007-01-01
Full Text Available Hysteresis is a rate-independent non-linearity that is expressed through thresholds, switches, and branches. Exceedance of a threshold, or the occurrence of a turning point in the input, switches the output onto a particular output branch. Rate-independent branching on a very large set of switches with non-local memory is the central concept in the new definition of hysteresis. Hysteretic loops are a special case. A self-consistent mathematical description of hydrological systems with hysteresis demands a new non-linear systems theory of adequate generality. The goal of this paper is to establish this and to show how this may be done. Two results are presented: a conceptual model for the hysteretic soil-moisture characteristic at the pedon scale and a hysteretic linear reservoir at the catchment scale. Both are based on the Preisach model. A result of particular significance is the demonstration that the independent domain model of the soil moisture characteristic due to Childs, Poulavassilis, Mualem and others, is equivalent to the Preisach hysteresis model of non-linear systems theory, a result reminiscent of the reduction of the theory of the unit hydrograph to linear systems theory in the 1950s. A significant reduction in the number of model parameters is also achieved. The new theory implies a change in modelling paradigm.
Bonnan, Matthew F
2007-09-01
Neosauropod dinosaurs were gigantic, herbivorous dinosaurs. Given that the limb skeleton is essentially a plastic, mobile framework that supports and moves the body, analysis of long bone scaling can reveal limb adaptations that supported neosauropod gigantism. Previously, analyses of linear dimensions have revealed a relatively isometric scaling pattern for the humerus and femur of neosauropods. Here, a combined scaling analysis of humerus and femur linear dimensions, cortical area, and shape across six neosauropod taxa is used to test the hypothesis that neosauropod long bones scaled isometrically and to investigate the paleobiological implications of these trends. A combination of linear regression and geometric morphometrics analyses of neosauropod humeri and femora were performed using traditional and thin-plate splines approaches. The neosauropod sample was very homogeneous, and linear analyses revealed that nearly all humerus and femur dimensions, including cortical area, scale with isometry against maximum length. Thin-plate splines analyses showed that little to no significant shape change occurs with increasing length or cortical area for the humerus or femur. Even with the exclusion of the long-limbed Brachiosaurus, the overall trends were consistently isometric. These results suggest that the mechanical advantage of limb-moving muscles and the relative range of limb movement decreased with increasing size. The isometric signal for neosauropod long bone dimensions and shape suggests these dinosaurs may have reached the upper limit of vertebrate long bone mechanics. Perhaps, like stilt-walkers, the absolutely long limbs of the largest neosauropods allowed for efficient locomotion at gigantic size with few ontogenetic changes.
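The isometry test described above amounts to regressing log-transformed bone dimensions against log maximum length and checking whether the slope (the allometric exponent) is near 1. A minimal sketch with synthetic measurements (illustrative only, not the paper's data):

```python
import numpy as np

# Isometric scaling test: regress log(dimension) on log(max length).
# A slope near 1.0 indicates isometry; >1 positive allometry, <1 negative.
# Synthetic femur data for illustration only (not the study's measurements).
rng = np.random.default_rng(0)
femur_length = np.array([90.0, 120.0, 150.0, 180.0, 210.0, 240.0])  # cm
# Mid-shaft width generated to scale isometrically (exponent 1) plus noise
width = 0.12 * femur_length ** 1.0 * rng.lognormal(0.0, 0.02, femur_length.size)

slope, intercept = np.polyfit(np.log(femur_length), np.log(width), 1)
print(f"allometric exponent: {slope:.2f}")  # near 1.0 -> consistent with isometry
```

In practice one would also report a confidence interval on the slope and test whether it excludes 1 before claiming allometry.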
Non-linear optics of nano-scale pentacene thin film
Yahia, I. S.; Alfaify, S.; Jilani, Asim; Abdel-wahab, M. Sh.; Al-Ghamdi, Attieh A.; Abutalib, M. M.; Al-Bassam, A.; El-Naggar, A. M.
2016-07-01
We present new ways to investigate the linear and non-linear optical properties of a nanostructured pentacene thin film deposited by thermal evaporation. Pentacene is a key material in organic semiconductor technology. The nanostructured nature of the thin film was confirmed by atomic force microscopy and X-ray diffraction. The wavelength-dependent transmittance and reflectance were measured to characterize the optical behavior of the pentacene thin film, and anomalous dispersion was observed near a wavelength of 800 nm. The non-linear refractive index of the deposited films was investigated. The linear optical susceptibility of the pentacene thin film was calculated, and the non-linear optical susceptibility was found to be about 6 × 10^-13 esu. The advantage of this work is the use of a spectroscopic method, rather than an expensive Z-scan setup, to determine the linear and non-linear optical response of pentacene thin films. The calculated optical behavior suggests that pentacene thin films could be used in organic-thin-film-based advanced optoelectronic devices, such as telecommunications devices.
Stable evaluation of differential operators and linear and nonlinear multi-scale filtering
Directory of Open Access Journals (Sweden)
Otmar Scherzer
1997-09-01
Full Text Available Diffusion processes create multi-scale analyses, which enable the generation of simplified pictures, where for increasing scale the image gets sketchier. In many practical applications the "scaled image" can be characterized via a variational formulation as the solution of a minimization problem involving unbounded operators. These unbounded operators can be evaluated by regularization techniques. We show that the theory of stable evaluation of unbounded operators can be applied to efficiently solve these minimization problems.
Quantitative Techniques in Volumetric Analysis
Zimmerman, John; Jacobsen, Jerrold J.
1996-12-01
Quantitative Techniques in Volumetric Analysis is a visual library of techniques used in making volumetric measurements. This 40-minute VHS videotape is designed as a resource for introducing students to proper volumetric methods and procedures. The entire tape, or relevant segments of the tape, can also be used to review procedures used in subsequent experiments that rely on the traditional art of quantitative analysis laboratory practice. The techniques included are: quantitative transfer of a solid with a weighing spoon; with a finger-held weighing bottle; with a paper-strap-held bottle; with a spatula; examples of common quantitative weighing errors; quantitative transfer of a solid from dish to beaker to volumetric flask; from dish to volumetric flask; use of a volumetric transfer pipet; a complete acid-base titration; and hand technique variations. The conventional view of contemporary quantitative chemical measurement tends to focus on instrumental systems, computers, and robotics. In this view, the analyst is relegated to placing standards and samples on a tray. A robotic arm delivers a sample to the analysis center, while a computer controls the analysis conditions and records the results. In spite of this, it is rare to find an analysis process that does not rely on some aspect of more traditional quantitative analysis techniques, such as careful dilution to the mark of a volumetric flask. Clearly, errors in a classical step will affect the quality of the final analysis. Because of this, it is still important for students to master the key elements of the traditional art of quantitative chemical analysis laboratory practice. Some aspects of chemical analysis, like careful rinsing to ensure quantitative transfer, are often an automated part of an instrumental process that must be understood by the
Teneketzis, D.; Sandell, N. R., Jr.
1976-01-01
This paper develops a hierarchically-structured, suboptimal controller for a linear stochastic system composed of fast and slow subsystems. It is proved that the controller is optimal in the limit as the separation of time scales of the subsystems becomes infinite. The methodology is illustrated by design of a controller to suppress the phugoid and short period modes of the longitudinal dynamics of the F-8 aircraft.
On the chaotic behavior of the primal-dual affine-scaling algorithm for linear optimization.
Bruin, H; Fokkink, R; Gu, G; Roos, C
2014-12-01
We study a one-parameter family of quadratic maps, which serves as a template for interior point methods. It is known that such methods can exhibit chaotic behavior, but this has been verified only for particular linear optimization problems. Our results indicate that this chaotic behavior is generic.
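The abstract does not specify which one-parameter quadratic family is used; the logistic map is the canonical example and illustrates the kind of chaotic, initial-condition-sensitive iteration at issue (illustrative sketch, not the paper's map):

```python
# Sensitivity to initial conditions in a one-parameter quadratic map.
# The logistic map x -> r*x*(1-x) is the canonical such family; the paper's
# specific family may differ, so this is illustrative only.
def logistic_orbit(x0, r=4.0, n=50):
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.200000)
b = logistic_orbit(0.200001)  # tiny perturbation of the initial condition
# At r = 4 the map is chaotic: the two orbits separate exponentially fast.
print(max(abs(x - y) for x, y in zip(a, b)))
```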
Test Facility for Volumetric Absorber
Energy Technology Data Exchange (ETDEWEB)
Ebert, M.; Dibowski, G.; Pfander, M.; Sack, J. P.; Schwarzbozl, P.; Ulmer, S.
2006-07-01
Long-time testing of volumetric absorber modules is an inevitable measure to gain the experience and reliability required for the commercialization of the open volumetric receiver technology. While solar tower test facilities are necessary for performance measurements of complete volumetric receivers, the long-term stability of individual components can be tested in less expensive test setups. For the qualification of the aging effects of operating cycles on single elements of new absorber materials and designs, a test facility was developed and constructed in the framework of the KOSMOSOL project. In order to provide the concentrated solar radiation level, the absorber test facility is integrated into a parabolic dish system at the Plataforma Solar de Almeria (PSA) in Spain. Several new designs of ceramic absorbers were developed and tested during the last months. (Author)
Nishimichi, Takahiro; Nakamichi, Masashi; Taruya, Atsushi; Yahata, Kazuhiro; Shirata, Akihito; Saito, Shun; Nomura, Hidenori; Yamamoto, Kazuhiro; Suto, Yasushi
2007-01-01
An acoustic oscillation of the primeval photon-baryon fluid around the decoupling time imprints a characteristic scale in the galaxy distribution today, known as the baryon acoustic oscillation (BAO) scale. Several on-going and/or future galaxy surveys aim at detecting and precisely determining the BAO scale so as to trace the expansion history of the universe. We consider nonlinear and redshift-space distortion effects on the shifts of the BAO scale in $k$-space using perturbation theory. The resulting shifts are indeed sensitive to different choices of the definition of the BAO scale, which needs to be kept in mind in the data analysis. We present a toy model to explain the physical behavior of the shifts. We find that the BAO scale defined as in Percival et al. (2007) indeed shows very small shifts ($\\lesssim$ 1%) relative to the prediction in {\\it linear theory} in real space. The shifts can be predicted accurately for scales where the perturbation theory is reliable.
Bairwa, Arvind Kumar; Khosa, Rakesh; Maheswaran, R.
2016-11-01
In this study, the presence of multi-scale behaviour in rainfall intensity-duration-frequency (IDF) relationships has been established using Linear Probability Weighted Moments (LPWMs) for selected stations in India. Simple, non-central moments (SMs) have seen widespread use in similar scaling studies, but these statistical attributes are known to mask the 'true' scaling pattern and consequently lead to inappropriate inferences. There is general agreement amongst researchers that conventional higher-order moments amplify extreme observations and drastically affect scaling exponents. An additional advantage of LPWMs over SMs is that they exist even when the standard moments do not. As an alternative, this study presents a comparison with results based on the robust LPWMs, which have revealed, in sharp contrast with the conventional moments, a definitive multi-scaling behaviour at all four rainfall observation stations, selected from different climatic zones. The multi-scale IDF curves derived using LPWMs show good agreement with observations, and it is accordingly concluded that LPWMs provide a more reliable tool for investigating scaling in sequences of observed rainfall corresponding to various durations.
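Probability weighted moments, on which LPWM-based analyses build, are defined as β_r = E[X·F(X)^r] and have a standard unbiased sample estimator over the ordered sample. A sketch with illustrative data (not the study's rainfall records):

```python
import numpy as np

# Unbiased sample estimator of the probability weighted moment
# beta_r = E[X * F(X)^r], computed from the order statistics:
# b_r = (1/n) * sum_i [ (i-1)(i-2)...(i-r) / ((n-1)(n-2)...(n-r)) ] * x_(i)
def sample_pwm(x, r):
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    i = np.arange(1, n + 1)          # 1-based ranks of the order statistics
    num = np.ones(n)
    den = 1.0
    for k in range(r):
        num *= (i - 1 - k)           # zero weight for ranks i <= r
        den *= (n - 1 - k)
    return np.mean(num / den * x)

data = [12.0, 7.5, 30.2, 18.4, 9.9, 25.1, 14.3, 21.0]   # illustrative only
b0, b1 = sample_pwm(data, 0), sample_pwm(data, 1)
print(b0, b1)  # b0 is the sample mean; the L-scale is 2*b1 - b0
```

L-moments (linear combinations of these β_r) exist whenever the mean exists, which is the robustness property the abstract contrasts with conventional higher-order moments.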
Imprint of non-linear effects on HI intensity mapping on large scales
Umeh, Obinna
2017-06-01
Intensity mapping of the HI brightness temperature provides a unique way of tracing large-scale structures of the Universe up to the largest possible scales. This is achieved by using low-angular-resolution radio telescopes to detect the emission line from cosmic neutral hydrogen in the post-reionization Universe. We use general relativistic perturbation theory techniques to derive, for the first time, the full expression for the HI brightness temperature up to third order in perturbation theory without making any plane-parallel approximation. We use this result and the renormalization prescription for biased tracers to study the impact of nonlinear effects on the power spectrum of the HI brightness temperature in both real and redshift space. We show how mode coupling at nonlinear order, due to nonlinear bias parameters and redshift-space distortion terms, modulates the power spectrum on large scales. This large-scale modulation may be understood as being due to the effective bias parameter and effective shot noise.
Small scale effect on linear vibration of buckled size-dependent FG nanobeams
Directory of Open Access Journals (Sweden)
Sima Ziaee
2015-06-01
The present study addresses the linear free vibration of buckled FG nano-beams. It is assumed that the material properties of FGMs are graded in the thickness direction. The partial differential equation of motion is derived based on Euler–Bernoulli beam theory, von Kármán geometric nonlinearity, and Eringen's nonlocal elasticity theory. The exact solution for the post-buckling configurations of FG nano-beams and a polynomial-based differential quadrature method are employed to study the linear behaviour of nano-beams vibrating around their post-buckling configurations. The results show the important role of the compressive axial force exerted on FG nano-beams in the nonlocal behaviour of vibrating FG nano-beams.
Non-linear and scale-invariant analysis of the Heart Rate Variability
Kalda, J; Vainu, M; Laan, M
2003-01-01
The human heart rate fluctuates in a complex and non-stationary manner. Elaborating efficient and adequate tools for the analysis of such signals has been a great challenge for researchers during the last decades. Here, an overview of the main research results in this field is given. The following questions are addressed: (a) what are the intrinsic features of the heart rate variability signal; (b) what are the most promising non-linear measures, bearing in mind clinical diagnostic and prognostic applications.
Growing Random Geometric Graph Models of Super-linear Scaling Law
Zhang, Jiang
2012-01-01
Recent research on complex systems has highlighted the so-called super-linear growth phenomenon. As the system size $P$, measured as population in cities or active users in online communities, increases, the total activity $X$, measured as GDP or the number of new patents or crimes generated by these people, also increases, but at a faster rate. This accelerating growth phenomenon can be well described by a super-linear power law $X \propto P^{\gamma}$ ($\gamma>1$). However, an explanation for this phenomenon is still lacking. In this paper, we propose a modeling framework called growing random geometric models to explain the super-linear relationship. A growing network is constructed on an abstract geometric space. A newly arriving node can survive only if it lands on an appropriate place in the space where other nodes exist; new edges are then connected to the adjacent nodes, whose number is determined by the density of existing nodes. Thus the total number of edges can grow with the number of nodes in a f...
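The super-linear exponent $\gamma$ in $X \propto P^{\gamma}$ is typically estimated by linear regression in log-log space. A minimal sketch with synthetic city data (not the paper's observations):

```python
import numpy as np

# Fit the scaling law X ∝ P^gamma by regressing log(X) on log(P);
# gamma > 1 indicates super-linear (accelerating) growth.
# Synthetic data generated with a true exponent of 1.15 for illustration.
rng = np.random.default_rng(1)
population = np.logspace(4, 7, 30)                        # 10^4 .. 10^7
activity = 0.05 * population ** 1.15 * rng.lognormal(0, 0.1, 30)

gamma, log_c = np.polyfit(np.log(population), np.log(activity), 1)
print(f"estimated scaling exponent gamma = {gamma:.2f}")  # above 1: super-linear
```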
Energy Technology Data Exchange (ETDEWEB)
Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami
2017-03-27
Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed-integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Results: Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions whose deletion disables all previously identified EMs. A subsequent LP solution subject to this reaction-deletion constraint then becomes a distinct EM. In cases where no feasible LP solution is available, the IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP reduced the time needed to compute EMs by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs.
Analytical scalings of the linear Richtmyer-Meshkov instability when a rarefaction is reflected
Cobos-Campos, F.; Wouchuk, J. G.
2017-07-01
The Richtmyer-Meshkov instability for the case of a reflected rarefaction is studied in detail following the growth of the contact surface in the linear regime and providing explicit analytical expressions for the asymptotic velocities in different physical limits. This work is a continuation of the similar problem when a shock is reflected [Phys. Rev. E 93, 053111 (2016), 10.1103/PhysRevE.93.053111]. Explicit analytical expressions for the asymptotic normal velocity of the rippled surface (δv_i∞) are shown. The known analytical solution of the perturbations growing inside the rarefaction fan is coupled to the pressure perturbations between the transmitted shock front and the rarefaction trailing edge. The surface ripple growth (ψ_i) is followed from t = 0+ up to the asymptotic stage inside the linear regime. As in the shock-reflected case, an asymptotic behavior of the form ψ_i(t) ≅ ψ_∞ + δv_i∞ t is observed, where ψ_∞ is an asymptotic ordinate to the origin. Approximate expressions for the asymptotic velocities are given for arbitrary values of the shock Mach number. The asymptotic velocity field is calculated at both sides of the contact surface. The kinetic energy content of the velocity field is explicitly calculated. It is seen that a significant part of the motion occurs inside a fluid layer very near the material surface, in good qualitative agreement with recent simulations. The important physical limits of weak and strong shocks and high and low preshock density ratio are also discussed, and exact Taylor expansions are given. The results of the linear theory are compared to simulations and experimental work [R. L. Holmes et al., J. Fluid Mech. 389, 55 (1999), 10.1017/S0022112099004838; C. Mariani et al., Phys. Rev. Lett. 100, 254503 (2008), 10.1103/PhysRevLett.100.254503]. The theoretical predictions of δv_i∞ and ψ_∞ show good agreement with the experimentally and numerically reported values.
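The asymptotic behavior described above (ripple amplitude approaching a straight line in time) suggests a simple late-time fit to extract the asymptotic ordinate and velocity from a simulated ripple history. A hedged sketch with a synthetic transient-plus-linear signal standing in for simulation output:

```python
import numpy as np

# Late-time linear fit psi(t) ≈ psi_inf + dv_inf * t to recover the
# asymptotic ordinate and velocity. The ripple history below is synthetic
# (decaying transient + linear growth), not actual simulation data.
t = np.linspace(0.0, 10.0, 200)
psi = 0.3 + 0.05 * t + 0.1 * np.exp(-2.0 * t)    # transient dies out quickly

late = t > 5.0                                    # fit only the linear regime
dv_inf, psi_inf = np.polyfit(t[late], psi[late], 1)
print(f"psi_inf ~ {psi_inf:.3f}, dv_inf ~ {dv_inf:.3f}")
```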
Integral Invariance and Non-linearity Reduction for Proliferating Vorticity Scales in Fluid Dynamics
Lam, F
2013-01-01
A vorticity theory for incompressible fluid flows in the absence of solid boundaries is proposed. Some apriori bounds are established. They are used in an interpolation theory to show the well-posedness of the vorticity Cauchy problem. A non-linear integral equation for vorticity is derived and its solution is expressed in an expansion. Interpretations of flow evolutions starting from given initial data are given and elaborated. The kinetic theory for Maxwellian molecules with cut-off is revisited in order to link microscopic properties to flow characters on the continuum.
DEFF Research Database (Denmark)
Wu, Xiaocui; Ju, Weimin; Zhou, Yanlian;
2015-01-01
two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. Model evaluation showed that the overall performance of TL...
Rapakoulia, Trisevgeni
2017-08-09
Motivation: Drug combination therapy for treatment of cancers and other multifactorial diseases has the potential of increasing the therapeutic effect, while reducing the likelihood of drug resistance. In order to reduce time and cost spent in comprehensive screens, methods are needed which can model additive effects of possible drug combinations. Results: We here show that the transcriptional response to combinatorial drug treatment at promoters, as measured by single molecule CAGE technology, is accurately described by a linear combination of the responses of the individual drugs at a genome wide scale. We also find that the same linear relationship holds for transcription at enhancer elements. We conclude that the described approach is promising for eliciting the transcriptional response to multidrug treatment at promoters and enhancers in an unbiased genome wide way, which may minimize the need for exhaustive combinatorial screens.
Dynamic analysis on generalized linear elastic body subjected to large scale rigid rotations
Institute of Scientific and Technical Information of China (English)
刘占芳; 颜世军; 符志
2013-01-01
The dynamic analysis of a generalized linear elastic body undergoing large rigid rotations is investigated. The generalized linear elastic body is described kinematically through translational and rotational deformations, and a modified constitutive relation for the rotational deformation is proposed between the couple stress and the curvature tensor. Thus, the balance equations of momentum and moment are used for the motion equations of the body. The floating frame of reference formulation is applied to the elastic body that conducts rotations about a fixed axis. The motion-deformation coupled model is developed in which three types of inertia forces along with their increments are elucidated. The finite element governing equations for the dynamic analysis of the elastic body under large rotations are subsequently formulated with the aid of the constrained variational principle. A penalty parameter is introduced, and the rotational angles at element nodes are treated as independent variables to meet the requirement of C1 continuity. The elastic body is discretized through the isoparametric element with 8 nodes and 48 degrees-of-freedom. As an example with an application of the motion-deformation coupled model, the dynamic analysis of a rotating cantilever with two spatial layouts relative to the rotational axis is numerically implemented. Dynamic frequencies of the rotating cantilever are presented at prescribed constant spin velocities. The maximal rigid rotational velocity is extended for ensuring the applicability of the linear model. A complete set of dynamical responses of the rotating cantilever in the case of spin-up maneuver is examined. It is shown that, at ultimate rigid rotational velocities below the maximal rigid rotational velocity, the stress strength may exceed the material strength tolerance even though the displacement and rotational angle responses are both convergent. The influence of the cantilever layouts on their responses and
Power System Design Compromises for Large-Scale Linear Particle Accelerators
Papastergiou, K D
2014-01-01
This paper discusses various design aspects of a 280MW Power System for the Compact Linear Collider (CLIC), a 50km long electrons-positrons accelerator, under feasibility evaluation. The key requirements are a very high accelerator availability and constant power flow from the utility grid, considering the pulsed power nature of CLIC. Firstly, the possible power network and cabling layouts are discussed along with potential difficulties on electrical fault clearance. Following, the use of active front-end converters is examined as a means to control the power flow and power quality seen by the 400kV grid. In particular a modular multilevel converter preliminary configuration is described and the compromises related to energy storage and voltage level are discussed.
Scaling functional patterns of skeletal and cardiac muscles: New non-linear elasticity approach
Kokshenev, Valery B
2009-01-01
Responding mechanically to environmental demands, muscles show a surprisingly large variety of functions. Studies of in vivo cycling muscles have classified skeletal muscles into four principal locomotor patterns: motor, brake, strut, and spring. While much effort has been devoted to the search for muscle design patterns, no fundamental concepts underlying the empirically established patterns have been revealed. In this interdisciplinary study, continuum mechanics is applied to the problem of muscle structure in relation to function. The ability of a powering muscle, treated as a homogeneous solid organ, to be tuned to efficient locomotion via its natural frequency is illuminated through the non-linear elastic muscle moduli controlled by contraction velocity. The exploration of the elastic force patterns known in solid-state physics, incorporated into activated skeletal and cardiac muscles via the mechanical similarity principle, yields an analytical rationalization for locomotor muscle patterns. Besides the explanation of the origin...
Directory of Open Access Journals (Sweden)
Chen Qi
2013-07-01
Full Text Available Non-linear chirp scaling (NLCS is a feasible method to deal with time-variant frequency modulation (FM rate problem in synthetic aperture radar (SAR imaging. However, approximations in derivation of NLCS spectrum lead to performance decline in some cases. Presented is the exact spectrum of the NLCS function. Simulation with a geosynchronous synthetic aperture radar (GEO-SAR configuration is implemented. The results show that using the presented spectrum can significantly improve imaging performance, and the NLCS algorithm is suitable for GEO-SAR imaging after modification.
Linear perturbation theory for tidal streams and the small-scale CDM power spectrum
Bovy, Jo; Erkal, Denis; Sanders, Jason L.
2017-04-01
Tidal streams in the Milky Way are sensitive probes of the population of low-mass dark matter subhaloes predicted in cold dark matter (CDM) simulations. We present a new calculus for computing the effect of subhalo fly-bys on cold streams based on the action-angle representation of streams. The heart of this calculus is a line-of-parallel-angle approach that calculates the perturbed distribution function of a stream segment by undoing the effect of all relevant impacts. This approach allows one to compute the perturbed stream density and track in any coordinate system in minutes for realizations of the subhalo distribution down to 10^5 M⊙, accounting for the stream's internal dispersion and overlapping impacts. We study the statistical properties of density and track fluctuations with large suites of simulations of the effect of subhalo fly-bys. The one-dimensional density and track power spectra along the stream trace the subhalo mass function, with higher mass subhaloes producing power only on large scales, while lower mass subhaloes cause structure on smaller scales. We also find significant density and track bispectra that are observationally accessible. We further demonstrate that different projections of the track all reflect the same pattern of perturbations, facilitating their observational measurement. We apply this formalism to data for the Pal 5 stream and make a first rigorous determination of 10^{+11}_{-6} dark matter subhaloes with masses between 10^{6.5} and 10^9 M⊙ within 20 kpc from the Galactic centre [corresponding to 1.4^{+1.6}_{-0.9} times the number predicted by CDM-only simulations or to fsub(r measurements of the subhalo mass function down to 10^5 M⊙, thus definitively testing whether dark matter is clumpy on the smallest scales relevant for galaxy formation.
Meso-scale aeolian transport of beach sediment via dune blowout pathways within a linear foredune
O'Keeffe, Nicholas; Delgado-Fernandez, Irene; Jackson, Derek; Aplin, Paul; Marston, Christopher
2016-04-01
The evolution of coastal foredunes is largely controlled by sediment exchanges between the geomorphic sub-units of the nearshore, beach, foredune, and dune field. Although blowouts are widely recognised as efficient sediment transport pathways, both event-scale and meso-scale quantification of their role in transferring beach sediments landwards is limited. Foredunes characterised by multiple blowouts may be more susceptible to coastline retreat through the enhanced landwards transport of beach or foredune sediments. To date, a key constraint for investigations of such scenarios has been the absence of accurate blowout sediment transport records. Here we use the Sefton coast in north-west England as a study area, where an unprecedented temporal coverage of LIDAR data is available between 1999 and 2015. Additionally, an extensive set of aerial photography also exists, dating back to 1945, allowing comparison of blowout frequency and magnitude together with the alongshore limits of coastline retreat. Digital terrain models are derived for each year for which LIDAR data are available. Informed by LIDAR-based topography and areas of bare sand (from the aerial photos), terrain models have been created containing individual blowouts. Differentials in 'z' values between the terrain models of each available year have identified topographic change and total levels of transport. Preliminary results have confirmed the importance of blowouts in transporting beach or foredune sediment landwards and thus potentially promoting coastline retreat. Repetition of these processes across a larger number of blowout topographies will allow better identification of individual blowouts for event-scale field investigations to examine the spatial and temporal variability of beach sediment transport via blowout routes.
Quantifying feedforward control: a linear scaling model for fingertip forces and object weight
Lu, Ying; Bilaloglu, Seda; Aluru, Viswanath
2015-01-01
The ability to predict the optimal fingertip forces according to object properties before the object is lifted is known as feedforward control, and it is thought to occur due to the formation of internal representations of the object's properties. The control of fingertip forces to objects of different weights has been studied extensively by using a custom-made grip device instrumented with force sensors. Feedforward control is measured by the rate of change of the vertical (load) force before the object is lifted. However, the precise relationship between the rate of change of load force and object weight and how it varies across healthy individuals in a population is not clearly understood. Using sets of 10 different weights, we have shown that there is a log-linear relationship between the fingertip load force rates and weight among neurologically intact individuals. We found that after one practice lift, as the weight increased, the peak load force rate (PLFR) increased by a fixed percentage, and this proportionality was common among the healthy subjects. However, at any given weight, the level of PLFR varied across individuals and was related to the efficiency of the muscles involved in lifting the object, in this case the wrist and finger extensor muscles. These results quantify feedforward control during grasp and lift among healthy individuals and provide new benchmarks to interpret data from neurologically impaired populations as well as a means to assess the effect of interventions on restoration of feedforward control and its relationship to muscular control. PMID:25878151
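One common reading of the log-linear relationship above is PLFR = a + b·ln(weight), fitted by ordinary least squares. A hedged sketch with synthetic data (the model form and all values are illustrative assumptions, not the study's measurements):

```python
import numpy as np

# Hedged sketch: fit PLFR = a + b*ln(weight), one reading of "log-linear".
# Weights in grams and PLFR values are synthetic, for illustration only.
rng = np.random.default_rng(2)
weights = np.linspace(100, 1000, 10)              # 10 object weights (g)
plfr = 5.0 + 12.0 * np.log(weights) * (1 + rng.normal(0, 0.01, 10))

b, a = np.polyfit(np.log(weights), plfr, 1)       # slope b, intercept a
print(f"PLFR ~ {a:.1f} + {b:.1f} * ln(weight)")
```

Under this model, each fixed multiplicative increase in weight raises PLFR by a fixed amount b·ln(ratio), matching the "fixed percentage" proportionality the abstract describes only qualitatively.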
A Steeper than Linear Disk Mass-Stellar Mass Scaling Relation
Pascucci, I; Herczeg, G J; Long, F; Manara, C F; Hendler, N; Mulders, G D; Krijt, S; Ciesla, F; Henning, Th; Mohanty, S; Drabek-Maunder, E; Apai, D; Szucs, L; Sacco, G; Olofsson, J
2016-01-01
The disk mass is among the most important input parameters for every planet formation model to determine the number and masses of the planets that can form. We present an ALMA 887 micron survey of the disk population around objects from 2 to 0.03 Msun in the nearby 2 Myr-old Chamaeleon I star-forming region. We detect thermal dust emission from 66 out of 93 disks, spatially resolve 34 of them, and identify two disks with large dust cavities of about 45 AU in radius. Assuming isothermal and optically thin emission, we convert the 887 micron flux densities into dust disk masses, hereafter Mdust. We find that the Mdust-Mstar relation is steeper than linear with power law indices 1.3-1.9, where the range reflects two extremes of the possible relation between the average dust temperature and stellar luminosity. By re-analyzing all millimeter data available for nearby regions in a self-consistent way, we show that the 1-3 Myr-old regions of Taurus, Lupus, and Chamaeleon I share the same Mdust-Mstar relation, while the 10...
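A steeper-than-linear scaling of this kind is read off as a power-law index above 1 in log-log space. The sketch below generates synthetic masses with an index of 1.6 (inside the 1.3-1.9 range quoted above) and recovers it by linear regression; it is an illustration of the procedure, not the survey's measurement.

```python
import numpy as np

rng = np.random.default_rng(7)
mstar = rng.uniform(0.03, 2.0, size=60)                 # stellar masses (Msun)
mdust = 10.0 * mstar**1.6 * rng.lognormal(0, 0.2, 60)   # synthetic dust masses

# The power-law index is the slope of log(Mdust) vs. log(Mstar)
index, log_norm = np.polyfit(np.log10(mstar), np.log10(mdust), 1)
print(index > 1.0)   # steeper than linear → True
```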
Predicting groundwater redox status on a regional scale using linear discriminant analysis
Close, M. E.; Abraham, P.; Humphries, B.; Lilburne, L.; Cuthill, T.; Wilson, S.
2016-08-01
Reducing conditions are necessary for denitrification, thus groundwater redox status can be used to identify subsurface zones where potentially significant nitrate reduction can occur. Groundwater chemistry in two contrasting regions of New Zealand was classified with respect to redox status and related to mappable factors, such as geology, topography and soil characteristics, using discriminant analysis. Redox assignment was carried out for water sampled from 568 and 2223 wells in the Waikato and Canterbury regions, respectively. For the Waikato region, 64% of wells sampled indicated oxic conditions in the water; 18% indicated reduced conditions and 18% had attributes indicating both reducing and oxic conditions, termed "mixed". In Canterbury, 84% of wells indicated oxic conditions; 10% were mixed; and only 5% indicated reduced conditions. The analysis was performed over three well depth ranges, the deepest extending beyond 100 m. For both regions, the percentage of oxidised groundwater decreased with increasing well depth. Linear discriminant analysis was used to develop models to differentiate between the three redox states. Models were derived for each depth and region using 67% of the data, and then validated on the remaining 33%. The average agreement between predicted and measured redox status was 63% and 70% for the Waikato and Canterbury regions, respectively. The models were incorporated into GIS and the prediction of redox status was extended over each whole region, excluding mountainous land. This knowledge improves spatial prediction of reduced groundwater zones and therefore, when combined with groundwater flow paths, improves estimates of denitrification.
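The workflow above (three redox classes, a 67/33 train/validation split, linear discriminant analysis) can be sketched as follows. The two features stand in for the mappable factors (geology, topography, soil); the class structure and all numbers are synthetic, not the study's data.

```python
# Minimal sketch of the described workflow: fit an LDA classifier on 67%
# of wells and report validation agreement on the remaining 33%.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Three redox classes: 0 = oxic, 1 = mixed, 2 = reduced
means = np.array([[0.0, 0.0], [2.0, 2.0], [4.0, 0.5]])
X = np.vstack([rng.normal(m, 1.0, size=(200, 2)) for m in means])
y = np.repeat([0, 1, 2], 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=1)
lda = LinearDiscriminantAnalysis().fit(X_tr, y_tr)
accuracy = lda.score(X_te, y_te)
print(f"validation agreement: {accuracy:.0%}")
```

The validation accuracy here plays the role of the 63-70% predicted-vs-measured agreement reported in the abstract.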
Star Formation On Sub-kpc Scale Triggered By Non-linear Processes In Nearby Spiral Galaxies
Momose, Rieko; Kennicutt, Robert C; Egusa, Fumi; Calzetti, Daniela; Liu, Guilin; Meyer, Jennifer Donovan; Okumura, Sachiko K; Scoville, Nick Z; Sawada, Tsuyoshi; Kuno, Nario
2013-01-01
We report a super-linear correlation for the star formation law based on new CO($J$=1-0) data from the CARMA and NOBEYAMA Nearby-galaxies (CANON) CO survey. The sample includes 10 nearby spiral galaxies, in which structures at sub-kpc scales are spatially resolved. Combined with the star formation rate surface density traced by H$\\alpha$ and 24 $\\mu$m images, CO($J$=1-0) data provide a super-linear slope of $N$ = 1.3. The slope becomes even steeper ($N$ = 1.8) when the diffuse stellar and dust background emission is subtracted from the H$\\alpha$ and 24 $\\mu$m images. In contrast to the recent results with CO($J$=2-1) that found a constant star formation efficiency (SFE) in many spiral galaxies, these results suggest that the SFE is not independent of environment, but increases with molecular gas surface density. We suggest that the excitation of CO($J$=2-1) is likely enhanced in the regions with higher star formation and does not linearly trace the molecular gas mass. In addition, the diffuse emission contami...
Strength and reversibility of stereotypes for a rotary control with linear scales.
Chan, Alan H S; Chan, W H
2008-02-01
Using real mechanical controls, this experiment studied strength and reversibility of direction-of-motion stereotypes and response times for a rotary control with horizontal and vertical scales. Thirty-eight engineering undergraduates (34 men and 4 women) ages 23 to 47 years (M=29.8, SD=7.7) took part in the experiment voluntarily. The effects of instruction of change of pointer position and control plane on movement compatibility were analyzed with precise quantitative measures of strength and a reversibility index of stereotype. Comparisons of the strength and reversibility values of these two configurations with those of rotary control-circular display, rotary control-digital counter, four-way lever-circular display, and four-way lever-digital counter were made. The results of this study provided significant implications for the industrial design of control panels for improved human performance.
Twork, Sabine; Wiesmeth, Susanne; Spindler, Milena; Wirtz, Markus; Schipper, Sabine; Pöhlau, Dieter; Klewer, Jörg; Kugler, Joachim
2010-06-07
Progression in disability, as measured by an increase in the Expanded Disability Status Scale (EDSS), is commonly used as an outcome variable in clinical trials concerning multiple sclerosis (MS). In this study, we addressed the question whether there is a linear relationship between disability status and health-related quality of life (HRQOL) in MS. 7305 MS patients were sent a questionnaire containing a German version of the "Multiple Sclerosis Quality of Life (MSQOL)-54" and an assessment of self-reported disability status analogous to the EDSS. 3157 patients participated in the study. Patients were allocated to three groups according to disability status. Regarding the physical health composite and the mental health composite, as well as most MSQOL-54 subscales, the differences between EDSS 4.5-6.5 and EDSS ≥ 7 were clearly smaller than the differences between the least disabled group and EDSS 4.5-6.5. These results indicate a non-linear relationship between disability status and HRQOL in MS. The EDSS does not seem to be interval scaled, as is commonly assumed. Consequently, absolute increase in EDSS does not seem to be a suitable outcome variable in MS studies.
Ma, Qianli; Werner, Hans-Joachim
2015-11-10
We present an efficient explicitly correlated pair natural orbital local second-order Møller-Plesset perturbation theory (PNO-LMP2-F12) method. The method is an extension of our previously reported PNO-LMP2 approach [Werner et al., J. Chem. Theory Comput. 2015, 11, 484]. Near-linear scaling with the size of the molecule is achieved by using domain approximations on both virtual and occupied orbitals, local density fitting (DF), and local resolution of the identity (RI), and by exploiting the sparsity of the local molecular orbitals (LMOs) as well as of the projected atomic orbitals (PAOs). All large data structures used in the method are stored in distributed memory using Global Arrays (GAs) to achieve near inverse-linear scaling with the number of processing cores, provided that the GAs can be efficiently and independently accessed from all cores. The effect of the various domain approximations is tested for a wide range of chemical reactions. The PNO-LMP2-F12 reaction energies deviate from the canonical DF-MP2-F12 results by ≤1 kJ mol(-1) using triple-ζ (VTZ-F12) basis sets and are close to the complete basis set limits. PNO-LMP2-F12 calculations on molecules of chemical interest involving a few thousand basis functions can be performed within an hour or less using a few nodes of a small computer cluster.
Dziedzic, J; Hill, Q; Skylaris, C-K
2013-12-07
We present a method for the calculation of four-centre two-electron repulsion integrals in terms of localised non-orthogonal generalised Wannier functions (NGWFs). Our method has been implemented in the ONETEP program and is used to compute the Hartree-Fock exchange energy component of Hartree-Fock and Density Functional Theory (DFT) calculations with hybrid exchange-correlation functionals. As the NGWFs are optimised in situ in terms of a systematically improvable basis set which is equivalent to plane waves, it is possible to achieve large basis set accuracy in routine calculations. The spatial localisation of the NGWFs allows us to exploit the exponential decay of the density matrix in systems with a band gap in order to compute the exchange energy with a computational effort that increases linearly with the number of atoms. We describe the implementation of this approach in the ONETEP program for linear-scaling first principles quantum mechanical calculations. We present extensive numerical validation of all the steps in our method. Furthermore, we find excellent agreement in energies and structures for a wide variety of molecules when comparing with other codes. We use our method to perform calculations with the B3LYP exchange-correlation functional for models of myoglobin systems bound with O2 and CO ligands and confirm that the same qualitative behaviour is obtained as when the same myoglobin models are studied with the DFT+U approach which is also available in ONETEP. Finally, we confirm the linear-scaling capability of our method by performing calculations on polyethylene and polyacetylene chains of increasing length.
Song, Hyun-Seob; Goldberg, Noam; Mahajan, Ashutosh; Ramkrishna, Doraiswami
2017-08-01
Elementary (flux) modes (EMs) have served as a valuable tool for investigating structural and functional properties of metabolic networks. Identification of the full set of EMs in genome-scale networks remains challenging due to the combinatorial explosion of EMs in complex networks. Often, however, only a small subset of relevant EMs needs to be known, for which optimization-based sequential computation is a useful alternative. Most of the currently available methods along this line are based on the iterative use of mixed integer linear programming (MILP), the effectiveness of which deteriorates significantly as the number of iterations builds up. To alleviate the computational burden associated with the MILP implementation, we here present a novel optimization algorithm termed alternate integer linear programming (AILP). Our algorithm was designed to iteratively solve a pair of integer programming (IP) and linear programming (LP) problems to compute EMs in a sequential manner. In each step, the IP identifies a minimal subset of reactions, the deletion of which disables all previously identified EMs. A subsequent LP solution subject to this reaction-deletion constraint then becomes a distinct EM. In cases where no feasible LP solution is available, the IP-derived reaction deletion sets represent minimal cut sets (MCSs). Despite the additional computation of MCSs, AILP achieved significant time reduction in computing EMs, by orders of magnitude. The proposed AILP algorithm not only offers a computational advantage in the EM analysis of genome-scale networks, but also improves the understanding of the linkage between EMs and MCSs. The software is implemented in Matlab and is provided as supplementary information. Contact: hyunseob.song@pnnl.gov. Supplementary data are available at Bioinformatics online.
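The LP half of the alternation can be illustrated on a toy network: given a reaction-deletion set (which the IP step would supply), solve an LP for a steady-state flux vector, whose support is a candidate elementary mode. This is a hypothetical illustration, not the authors' Matlab implementation; the network and reaction names are invented.

```python
# Toy LP step: find v >= 0 with S v = 0 and sum(v) = 1, with the
# IP-chosen reactions deleted (fixed to zero flux).
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix for one metabolite A and three reactions:
# R1: -> A,  R2: A ->,  R3: A ->
S = np.array([[1.0, -1.0, -1.0]])

def lp_flux_mode(deleted):
    """Return a normalized steady-state flux vector, or None if infeasible."""
    n = S.shape[1]
    bounds = [(0.0, 0.0) if j in deleted else (0.0, None) for j in range(n)]
    A_eq = np.vstack([S, np.ones((1, n))])          # S v = 0 and sum(v) = 1
    b_eq = np.append(np.zeros(S.shape[0]), 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x if res.success else None

mode = lp_flux_mode(deleted={2})   # delete R3: forces the R1-R2 mode
print(mode)                        # support is {R1, R2}
```

When no feasible vector exists for a deletion set, that set plays the role of a minimal cut set in the scheme above.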
Linear perturbation theory for tidal streams and the small-scale CDM power spectrum
Bovy, Jo; Sanders, Jason L
2016-01-01
Tidal streams in the Milky Way are sensitive probes of the population of dark-matter subhalos predicted in cold-dark-matter (CDM) simulations. We present a new calculus for computing the effect of subhalo fly-bys on cold tidal streams based on the action-angle representation of streams. The heart of this calculus is a line-of-parallel-angle approach that calculates the perturbed distribution function of a given stream segment by undoing the effect of all impacts. This approach allows one to compute the perturbed stream density and track in any coordinate system in minutes for realizations of the subhalo distribution down to 10^5 Msun, accounting for the stream's internal dispersion and overlapping impacts. We study the properties of density and track fluctuations with suites of simulations. The one-dimensional density and track power spectra along the stream trace the subhalo mass function, with higher-mass subhalos producing power only on large scales, while lower mass subhalos cause structure on smaller sca...
Tkalcic, Hrvoje; Dreger, Douglas S.; Foulger, Gillian R.; Julian, Bruce R.
2009-01-01
A volcanic earthquake with Mw 5.6 occurred beneath the Bárdarbunga caldera in Iceland on 29 September 1996. This earthquake is one of a decade-long sequence of events at Bárdarbunga with non-double-couple mechanisms in the Global Centroid Moment Tensor catalog. Fortunately, it was recorded well by the regional-scale Iceland Hotspot Project seismic experiment. We investigated the event with a complete moment tensor inversion method using regional long-period seismic waveforms and a composite structural model. The moment tensor inversion using data from stations of the Iceland Hotspot Project yields a non-double-couple solution with a 67% vertically oriented compensated linear vector dipole component, a 32% double-couple component, and a statistically insignificant (2%) volumetric (isotropic) contraction. This indicates the absence of a net volumetric component, which is puzzling in the case of a large volcanic earthquake that apparently is not explained by shear slip on a planar fault. A possible volcanic mechanism that can produce an earthquake without a volumetric component involves two offset sources with similar but opposite volume changes. We show that although such a model cannot be ruled out, the circumstances under which it could happen are rare.
An efficient and near linear scaling pair natural orbital based local coupled cluster method
Riplinger, Christoph; Neese, Frank
2013-01-01
In previous publications, it was shown that an efficient local coupled cluster method with single and double excitations can be based on the concept of pair natural orbitals (PNOs) [F. Neese, A. Hansen, and D. G. Liakos, J. Chem. Phys. 131, 064103 (2009), 10.1063/1.3173827]. The resulting local pair natural orbital coupled-cluster singles doubles (LPNO-CCSD) method has since been proven to be highly reliable and efficient. For large molecules, the number of amplitudes to be determined is reduced by a factor of 10^5-10^6 relative to a canonical CCSD calculation on the same system with the same basis set. In the original method, the PNOs were expanded in the set of canonical virtual orbitals and single excitations were not truncated. This led to a number of fifth-order scaling steps that eventually rendered the method computationally expensive for large molecules (e.g., >100 atoms). In the present work, these limitations are overcome by a complete redesign of the LPNO-CCSD method. The new method is based on the combination of the concepts of PNOs and projected atomic orbitals (PAOs). Thus, each PNO is expanded in a set of PAOs that in turn belong to a given electron-pair-specific domain. In this way, it is possible to fully exploit locality while maintaining the extremely high compactness of the original LPNO-CCSD wavefunction. No terms are dropped from the CCSD equations and domains are chosen conservatively. The correlation energy loss due to the domains remains small, even in calculations with more than 8800 basis functions and more than 450 atoms. In all larger test calculations done so far, the LPNO-CCSD step took less time than the preceding Hartree-Fock calculation, provided no approximations were introduced in the latter. Thus, based on the present development, reliable CCSD calculations on large molecules with unprecedented efficiency and accuracy are realized.
Volumetric Three-Dimensional Display Systems
Blundell, Barry G.; Schwarz, Adam J.
2000-03-01
A comprehensive study of approaches to three-dimensional visualization by volumetric display systems This groundbreaking volume provides an unbiased and in-depth discussion on a broad range of volumetric three-dimensional display systems. It examines the history, development, design, and future of these displays, and considers their potential for application to key areas in which visualization plays a major role. Drawing substantially on material that was previously unpublished or available only in patent form, the authors establish the first comprehensive technical and mathematical formalization of the field, and examine a number of different volumetric architectures. System level design strategies are presented, from which proposals for the next generation of high-definition predictable volumetric systems are developed. To ensure that researchers will benefit from work already completed, they provide: * Descriptions of several recent volumetric display systems prepared from material supplied by the teams that created them * An abstract volumetric display system design paradigm * An historical summary of 90 years of development in volumetric display system technology * An assessment of the strengths and weaknesses of many of the systems proposed to date * A unified presentation of the underlying principles of volumetric display systems * A comprehensive bibliography Beautifully supplemented with 17 color plates that illustrate volumetric images and prototype displays, Volumetric Three-Dimensional Display Systems is an indispensable resource for professionals in imaging systems development, scientific visualization, medical imaging, computer graphics, aerospace, military planning, and CAD/CAE.
Hine, N D M; Haynes, P D; Skylaris, C K
2011-01-01
We present a comparison of methods for treating the electrostatic interactions of finite, isolated systems within periodic boundary conditions (PBCs), within Density Functional Theory (DFT), with particular emphasis on linear-scaling (LS) DFT. Often, PBCs are not physically realistic but are an unavoidable consequence of the choice of basis set and the efficacy of using Fourier transforms to compute the Hartree potential. In such cases the effects of PBCs on the calculations need to be avoided, so that the results obtained represent the open rather than the periodic boundary. The very large systems encountered in LS-DFT make the demands of the supercell approximation for isolated systems more difficult to manage, and we show cases where the open boundary (infinite cell) result cannot be obtained from extrapolation of calculations from periodic cells of increasing size. We discuss, implement and test three very different approaches for overcoming or circumventing the effects of PBCs: truncation of the Coulomb ...
AN APPLICATION OF DOUBLE-SCALE METHOD TO THE STUDY OF NON-LINEAR DISSIPATIVE WAVES IN JEFFREYS MEDIA
Directory of Open Access Journals (Sweden)
Adelina Georgescu
2011-07-01
In previous papers we sketched out the general use of the double-scale method for nonlinear hyperbolic partial differential equations (PDEs) in order to study asymptotic waves, and as an example the model governing the motion of a rheological medium (Maxwell medium) with one mechanical internal variable was studied. In this paper the double-scale method is applied to investigate non-linear dissipative waves in viscoanelastic media without memory of order one (Jeffreys media), which were studied by one of the authors (L. R.) in a more classical way. For these media the equations of motion include second-order derivative terms multiplied by a very small parameter. We give a physical interpretation of the new (fast) variable, related to the surfaces across which the solutions or/and some of their derivatives vary steeply. The paper concludes with a one-dimensional application containing original results.
Chesterman, Frédérique; Manssens, Hannah; Morel, Céline; Serrell, Guillaume; Piepers, Bastian; Kimpe, Tom
2017-03-01
Medical displays for primary diagnosis are calibrated to the DICOM GSDF, but there is no accepted standard today that describes how display systems for medical modalities involving color should be calibrated. Recently the Color Standard Display Function (CSDF), a calibration using the CIEDE2000 color difference metric to make a display as perceptually linear as possible, has been proposed. In this work we present the results of a first observer study, set up to investigate the interpretation accuracy of a rainbow color scale when a medical display is calibrated to CSDF versus DICOM GSDF, and a second observer study, set up to investigate the detectability of color differences when a medical display is calibrated to CSDF, DICOM GSDF and sRGB. The results of the first study indicate that the error when interpreting a rainbow color scale is lower for CSDF than for DICOM GSDF, with a statistically significant difference (Mann-Whitney U test) for eight out of twelve observers. The results correspond to what is expected based on CIEDE2000 color differences between consecutive colors along the rainbow color scale for both calibrations. The results of the second study indicate a statistically significant improvement in detecting color differences when a display is calibrated to CSDF compared to DICOM GSDF, and a (non-significant) trend indicating improved detection for CSDF compared to sRGB. To our knowledge this is the first work that shows the added value of a perceptual color calibration method (CSDF) in interpreting medical color images using the rainbow color scale. Improved interpretation of the rainbow color scale may be beneficial in the area of quantitative medical imaging (e.g. PET SUV, quantitative MRI and CT, and Doppler US), where a medical specialist needs to interpret quantitative medical data based on a color scale and/or detect subtle color differences, and where improved interpretation accuracy and improved detection of color differences may contribute to a better
Del Rio Amador, Lenin; Lovejoy, Shaun
2016-04-01
Traditionally, most of the models for prediction of the atmosphere's behavior in the macroweather and climate regimes follow a deterministic approach. However, modern ensemble forecasting systems using stochastic parameterizations are in fact deterministic/stochastic hybrids that combine both elements to yield a statistical distribution of future atmospheric states. Nevertheless, the result is highly complex (both numerically and theoretically) as well as theoretically eclectic. In principle, it should be advantageous to exploit higher-level turbulence-type scaling laws. Concretely, in the case of Global Circulation Models (GCMs), due to sensitive dependence on initial conditions, there is a deterministic predictability limit of the order of 10 days. When these models are coupled with ocean, cryosphere and other process models to make long-range climate forecasts, the high-frequency "weather" is treated as a driving noise in the integration of the modelling equations. Following Hasselmann (1976), this has led to stochastic models that directly generate the noise and model the low frequencies using systems of integer-ordered linear ordinary differential equations, the best known of which are the Linear Inverse Models (LIM). For annual global-scale forecasts, they are somewhat superior to the GCMs and have been presented as a benchmark for surface temperature forecasts with horizons up to decades. A key limitation of the LIM approach is that it assumes that the temperature has only short-range (exponential) decorrelations. In contrast, an increasing body of evidence shows that - as with the models - the atmosphere respects a scale-invariance symmetry, leading to power laws with potentially enormous memories, so that LIM greatly underestimates the memory of the system. In this talk we show that, due to the relatively low macroweather intermittency, the simplest scaling models - fractional Gaussian noise - can be used for making greatly improved forecasts.
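The long-memory process named above, fractional Gaussian noise (fGn), can be sketched with an approximate spectral synthesis: fGn with Hurst exponent H has a power spectrum proportional to f^-(2H-1), which is what gives it power-law rather than exponential decorrelations. This is an illustrative approximation (random phases on a power-law amplitude spectrum), not the forecasting method presented in the talk.

```python
import numpy as np

def fgn_spectral(n, hurst, seed=0):
    """Generate n samples whose spectrum follows f**-(2*hurst - 1)."""
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, d=1.0)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-(2 * hurst - 1) / 2.0)  # amplitude = sqrt(power)
    phases = rng.uniform(0, 2 * np.pi, size=freqs.size)
    spectrum = amp * np.exp(1j * phases)
    x = np.fft.irfft(spectrum, n)
    return (x - x.mean()) / x.std()   # normalize to zero mean, unit variance

series = fgn_spectral(4096, hurst=0.9)   # H > 0.5: persistent, long memory
print(series.shape, round(series.std(), 2))
```

For H > 0.5 the low-frequency power dominates, so past averages remain informative far longer than in an exponentially decorrelated (LIM-style) process.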
Ata, Metin; Müller, Volker
2014-01-01
We present a Bayesian reconstruction algorithm to generate unbiased samples of the underlying dark matter field from galaxy redshift data. Our new contribution consists of implementing a non-Poisson likelihood including a deterministic non-linear and scale-dependent bias. In particular, we present the Hamiltonian equations of motion for the negative binomial (NB) probability distribution function. This permits us to efficiently sample the posterior distribution function of density fields given a sample of galaxies using the Hamiltonian Monte Carlo technique implemented in the Argo code. We have tested our algorithm with the Bolshoi N-body simulation, inferring the underlying dark matter density field from a subsample of the halo catalogue. Our method shows that we can draw nearly unbiased samples (compatible within 1-$\sigma$) from the posterior distribution up to scales of about k~1 h/Mpc in terms of power-spectra and cell-to-cell correlations. We find that a Poisson likelihood yields reconstructions with p...
Goldsmith, Paul F; Narayanan, Gopal; Snell, Ronald; Li, Di; Brunt, Chris
2008-01-01
We report the results of a 100 square degree survey of the Taurus Molecular Cloud region in the J = 1-0 transition of 12CO and 13CO. The image of the cloud in each velocity channel includes ~ 3 million Nyquist sampled pixels on a 20" grid. The high sensitivity and large linear dynamic range of the maps in both isotopologues reveal a very complex, highly structured cloud morphology. There are large scale correlated structures evident in 13CO emission having very fine dimensions, including filaments, cavities, and rings. The 12CO emission shows a quite different structure, with particularly complex interfaces between regions of greater and smaller column density defining the boundaries of the largest-scale cloud structures. The axes of the striations seen in the 12CO emission from relatively diffuse gas are aligned with the direction of the magnetic field. Using a column density-dependent model for the CO fractional abundance, we derive the mass of the region mapped to be 24,000 solar masses, a factor of three ...
Franklin, Erick de Moraes
2016-01-01
Granular media are frequently found in nature and in industry and their transport by a fluid flow is of great importance to human activities. One case of particular interest is the transport of sand in open-channel and river flows. In many instances, the shear stresses exerted by the fluid flow are bounded to certain limits and some grains are entrained as bed-load: a mobile layer which stays in contact with the fixed part of the granular bed. Under these conditions, an initially flat granular bed may be unstable, generating ripples and dunes such as those observed on the bed of rivers. In free-surface water flows, dunes are bedforms that scale with the flow depth, while ripples do not scale with it. This article presents a model for the formation of ripples and dunes based on the proposition that ripples are primary linear instabilities and that dunes are secondary instabilities formed from the competition between the coalescence of ripples and free surface effects. Although simple, the model is able to expl...
Song, Chao; Kwan, Mei-Po; Zhu, Jiping
2017-04-08
An increasing number of fires are occurring with the rapid development of cities, resulting in increased risk for human beings and the environment. This study compares geographically weighted regression-based models, including geographically weighted regression (GWR) and geographically and temporally weighted regression (GTWR), which integrates spatial and temporal effects and global linear regression models (LM) for modeling fire risk at the city scale. The results show that the road density and the spatial distribution of enterprises have the strongest influences on fire risk, which implies that we should focus on areas where roads and enterprises are densely clustered. In addition, locations with a large number of enterprises have fewer fire ignition records, probably because of strict management and prevention measures. A changing number of significant variables across space indicate that heterogeneity mainly exists in the northern and eastern rural and suburban areas of Hefei city, where human-related facilities or road construction are only clustered in the city sub-centers. GTWR can capture small changes in the spatiotemporal heterogeneity of the variables while GWR and LM cannot. An approach that integrates space and time enables us to better understand the dynamic changes in fire risk. Thus governments can use the results to manage fire safety at the city scale.
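The core of geographically weighted regression can be sketched in a few lines: at each target location, fit least squares with weights that decay with distance, so that coefficients are allowed to vary over space. The variable names and data below are illustrative stand-ins, not the Hefei study's variables, and the sketch omits the temporal weighting that distinguishes GTWR.

```python
# Minimal GWR sketch: Gaussian-kernel weighted least squares at one location.
import numpy as np

def gwr_coefficients(coords, X, y, target, bandwidth):
    """Local [intercept, slope] estimated with distance-decaying weights."""
    d = np.linalg.norm(coords - np.asarray(target), axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)
    Xd = np.column_stack([np.ones(len(X)), X])   # add intercept column
    W = np.diag(w)
    return np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)

rng = np.random.default_rng(42)
coords = rng.uniform(0, 10, size=(200, 2))
road_density = rng.uniform(0, 1, size=200)
# Synthetic fire risk whose sensitivity to road density grows from west to east:
y = (1.0 + 0.5 * coords[:, 0]) * road_density + rng.normal(0, 0.05, 200)

west = gwr_coefficients(coords, road_density, y, target=[1, 5], bandwidth=2.0)
east = gwr_coefficients(coords, road_density, y, target=[9, 5], bandwidth=2.0)
print(west[1] < east[1])  # the local slope differs across space
```

A global linear model (LM) would return a single slope for the whole region, which is exactly the spatial heterogeneity that GWR and GTWR are designed to expose.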
Smirnov, Sergey V; Kobtsev, Sergey M; Kukarin, Sergey V
2014-01-13
For the first time we report the results of both numerical simulation and experimental observation of second-harmonic generation as an example of non-linear frequency conversion of pulses generated by passively mode-locked fiber master oscillator in different regimes including conventional (stable) and double-scale (partially coherent and noise-like) ones. We show that non-linear frequency conversion efficiency of double-scale pulses is slightly higher than that of conventional picosecond laser pulses with the same energy and duration despite strong phase fluctuations of double-scale pulses.
A reduced volumetric expansion factor plot
Hendricks, R. C.
1979-01-01
A reduced volumetric expansion factor plot has been constructed for simple fluids which is suitable for engineering computations in heat transfer. Volumetric expansion factors have been found useful in correlating heat transfer data over a wide range of operating conditions including liquids, gases and the near critical region.
Wu, Fangqin; Liu, Wenjian; Zhang, Yong; Li, Zhendong
2011-11-08
To circumvent the cubic scaling and convergence difficulties encountered in the standard top-down localization of the global canonical molecular orbitals (CMOs), a bottom-up localization scheme is proposed based on the idea of "from fragments to molecule". That is, the global localized MOs (LMOs), both occupied and unoccupied, are to be synthesized from the primitive fragment LMOs (pFLMOs) obtained from subsystem calculations. They are orthonormal but are still well localized on the parent fragments of the pFLMOs and can hence be termed as "fragment LMOs" (FLMOs). This has been achieved by making use of two important factors. Physically, it is the transferability of the locality of the fragments that serves as the basis. Mathematically, it is the special block-diagonalization of the Kohn-Sham matrix that allows retention of the locality: The occupied-occupied and virtual-virtual diagonal blocks are only minimally modified when the occupied-virtual off-diagonal blocks are annihilated. Such a bottom-up localization scheme is applicable to systems composed of all kinds of chemical bonds. It is then shown that, by a simple prescreening of the particle-hole pairs, the FLMO-based time-dependent density functional theory (TDDFT) can achieve linear scaling with respect to the system size, with a very small prefactor. As a proof of principle, representative model systems are taken as examples to demonstrate the accuracy and efficiency of the algorithms. As both the orbital picture and integral number of electrons are retained, the FLMO-TDDFT offers a clear characterization of the nature of the excited states in line with chemical/physical intuition.
Energy Technology Data Exchange (ETDEWEB)
Li, Y.F. [Energy and Environmental Research Center, North China Electric Power University, Beijing 102206 (China); Huang, G.H., E-mail: gordon.huang@uregina.c [Environmental Systems Engineering Program, Faculty of Engineering, University of Regina, Regina, Saskatchewan, S4S 0A2 (Canada); College of Urban and Environmental Sciences, Peking University, Beijing 100871 (China); Li, Y.P. [College of Urban and Environmental Sciences, Peking University, Beijing 100871 (China); Xu, Y.; Chen, W.T. [Energy and Environmental Research Center, North China Electric Power University, Beijing 102206 (China)
2010-01-15
In this study, a multistage interval-stochastic regional-scale energy model (MIS-REM) is developed for supporting electric power system (EPS) planning under uncertainty that is based on a multistage interval-stochastic integer linear programming method. The developed MIS-REM can deal with uncertainties expressed as both probability distributions and interval values existing in energy system planning problems. Moreover, it can reflect dynamic decisions for electricity generation schemes and capacity expansions through transactions at discrete points of a multiple representative scenario set over a multistage context. It can also analyze various energy-policy scenarios that are associated with economic penalties when the regulated targets are violated. A case study is provided for demonstrating the applicability of the developed model, where renewable and non-renewable energy resources, economic concerns, and environmental requirements are integrated into a systematic optimization process. The results obtained are helpful for supporting (a) adjustment or justification of allocation patterns of regional energy resources and services, (b) formulation of local policies regarding energy consumption, economic development, and energy structure, and (c) analysis of interactions among economic cost, environmental requirement, and energy-supply security.
Directory of Open Access Journals (Sweden)
Tarek H. M. Abou-El-Enien
2015-04-01
Full Text Available This paper extends the TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) method to solve Two-Level Large Scale Linear Multiobjective Optimization Problems with Stochastic Parameters in the right-hand side of the constraints (TL-LSLMOP-SP)rhs of block angular structure. In order to obtain a compromise (satisfactory) solution to the (TL-LSLMOP-SP)rhs of block angular structure using the proposed TOPSIS method, modified formulas for the distance function from the positive ideal solution (PIS) and the distance function from the negative ideal solution (NIS) are proposed and modeled to include all the objective functions of the two levels. At each level, the dp-metric is used as the measure of "closeness", and the k-dimensional objective space is reduced to a two-dimensional objective space by a first-order compromise procedure. Membership functions from fuzzy set theory are used to represent the satisfaction level for both criteria. A single-objective programming problem is obtained by using the max-min operator for the second-order compromise operation. A decomposition algorithm for generating a compromise (satisfactory) solution through the TOPSIS approach is provided, in which the first-level decision maker (FLDM) is asked to specify the relative importance of the objectives. Finally, an illustrative numerical example is given to clarify the main results developed in the paper.
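The core TOPSIS computation described above (distances to the positive and negative ideal solutions, then a closeness ranking) can be sketched for a single level as follows. The decision matrix, weights, and criteria directions are hypothetical; the paper's two-level, block-angular, stochastic extension is not reproduced here.

```python
import numpy as np

# Minimal single-level TOPSIS sketch on made-up data.
def topsis(decision, weights, benefit):
    # decision: (alternatives x criteria); benefit[j] True if larger-is-better
    norm = decision / np.linalg.norm(decision, axis=0)     # vector normalization
    v = norm * weights                                     # weighted normalized matrix
    pis = np.where(benefit, v.max(axis=0), v.min(axis=0))  # positive ideal solution
    nis = np.where(benefit, v.min(axis=0), v.max(axis=0))  # negative ideal solution
    d_pos = np.linalg.norm(v - pis, axis=1)                # d2-metric to PIS
    d_neg = np.linalg.norm(v - nis, axis=1)                # d2-metric to NIS
    return d_neg / (d_pos + d_neg)                         # relative closeness in [0, 1]

scores = topsis(np.array([[7., 9., 9.], [8., 7., 8.], [9., 6., 8.]]),
                weights=np.array([0.3, 0.4, 0.3]),
                benefit=np.array([True, True, True]))
print(scores.argmax())  # index of the compromise alternative
```

The two-level method in the paper replaces the single closeness ranking with level-wise dp-metric reductions followed by a max-min aggregation.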
SIMULATION OF COMPOSITE NON-LINEAR MECHANICAL BEHAVIOR OF CMCS BY FEM-BASED MULTI-SCALE APPROACH
Institute of Scientific and Technical Information of China (English)
高希光; 王绍华; 宋迎东
2013-01-01
The non-linear behavior of continuous fiber reinforced C/SiC ceramic matrix composites (CMCs) under tensile loading is modeled by three-dimensional representative volume element (RVE) models of the composite. The theoretical background of the multi-scale approach solved by the finite element method (FEM) is recalled first. Then the geometric characters of three kinds of damage mechanisms, i.e. micro matrix cracks, fiber/matrix interface debonding and fiber fracture, are studied. Three kinds of RVE are proposed to model the microstructure of C/SiC with the above damage mechanisms, respectively. Matrix cracking is modeled by the critical matrix strain energy (CMSE) principle, while a maximum shear stress criterion is used for modeling fiber/matrix interface debonding. Fiber fracture is modeled by the well-known Weibull statistical theory. A numerical example of continuous fiber reinforced C/SiC composite under tensile loading is performed. The results show that the stress/strain curve predicted by the developed model agrees with experimental data.
Ihrig, Arvid Conrad; Wieferink, Jürgen; Zhang, Igor Ying; Ropo, Matti; Ren, Xinguo; Rinke, Patrick; Scheffler, Matthias; Blum, Volker
2015-09-01
A key component in calculations of exchange and correlation energies is the Coulomb operator, which requires the evaluation of two-electron integrals. For localized basis sets, these four-center integrals are most efficiently evaluated with the resolution of identity (RI) technique, which expands basis-function products in an auxiliary basis. In this work we show the practical applicability of a localized RI-variant (‘RI-LVL’), which expands products of basis functions only in the subset of those auxiliary basis functions which are located at the same atoms as the basis functions. We demonstrate the accuracy of RI-LVL for Hartree-Fock calculations, for the PBE0 hybrid density functional, as well as for RPA and MP2 perturbation theory. Molecular test sets used include the S22 set of weakly interacting molecules, the G3 test set, as well as the G2-1 and BH76 test sets, and heavy elements including titanium dioxide, copper and gold clusters. Our RI-LVL implementation paves the way for linear-scaling RI-based hybrid functional calculations for large systems and for all-electron many-body perturbation theory with significantly reduced computational and memory cost.
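The central RI idea, expanding basis-function products in an auxiliary basis, can be illustrated in one dimension: fit the product of two Gaussian basis functions in a small auxiliary Gaussian set by least squares in the overlap metric. The exponents and centers below are invented for illustration and are not the RI-LVL prescription itself.

```python
import numpy as np

# 1-D toy of the resolution-of-identity expansion: represent a product
# of two basis Gaussians as a linear combination of auxiliary Gaussians
# centered on the "same atom" (here, near the origin). Parameters are made up.
x = np.linspace(-8, 8, 4001)
dx = x[1] - x[0]
g = lambda a, x0: np.exp(-a * (x - x0) ** 2)

prod = g(0.5, -0.3) * g(0.7, 0.4)                       # basis-function product
aux = np.array([g(a, 0.05) for a in (0.3, 1.2, 4.8)])   # auxiliary basis

S = aux @ aux.T * dx          # aux-aux overlap metric
b = aux @ prod * dx           # aux-product overlaps
c = np.linalg.solve(S, b)     # RI expansion coefficients
fit = c @ aux
rel_err = np.linalg.norm(fit - prod) / np.linalg.norm(prod)
```

Restricting the auxiliary set to functions on the atoms carrying the original pair, as RI-LVL does, keeps this fit local and hence cheap for large systems.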
Kim, I. Jong; Pae, Ki Hong; Kim, Chul Min; Kim, Hyung Taek; Choi, Il Woo; Lee, Chang-Lyoul; Singhal, Himanshu; Sung, Jae Hee; Lee, Seong Ku; Lee, Hwang Woon; Nickles, Peter V.; Jeong, Tae Moon; Nam, Chang Hee
2015-12-01
Laser-driven proton/ion acceleration is a rapidly developing research field attractive for both fundamental physics and applications such as hadron therapy, radiography, inertial confinement fusion, and nuclear/particle physics. Laser-driven proton/ion beams, compared to those obtained in conventional accelerators, have outstanding features such as low emittance, small source size, ultra-short duration and huge acceleration gradient of ∼1 MeV μm-1. We report proton acceleration from ultrathin polymer targets irradiated with linearly polarized, 30-fs, 1-PW Ti:sapphire laser pulses. A maximum proton energy of 45 MeV with a broad and modulated profile was obtained when a 10-nm-thick target was irradiated at a laser intensity of 3.3 × 1020 W/cm2. The transition from slow (I1/2) to fast scaling (I) of maximum proton energy with respect to laser intensity I was observed and explained by the hybrid acceleration mechanism including target normal sheath acceleration and radiation pressure acceleration in the acceleration stage and Coulomb-explosion-assisted free expansion in the post acceleration stage.
Non-linear numerical simulations of magneto-acoustic wave propagation in small-scale flux tubes
Khomenko, E; Felipe, T
2007-01-01
We present results of non-linear 2D numerical simulations of magneto-acoustic wave propagation in the photosphere and chromosphere of small-scale flux tubes with internal structure. Waves with realistic periods of 3--5 min are studied, after applying horizontal and vertical oscillatory perturbations to the equilibrium situation. Spurious reflections of shock waves from the upper boundary are minimized thanks to a special boundary condition. This has allowed us to increase the duration of the simulations and to make it long enough to perform a statistical analysis of oscillations. The simulations show that deep horizontal motions of the flux tube generate a slow (magnetic) mode and a surface mode. These modes are efficiently transformed into a slow (acoustic) mode in the Va < Cs atmosphere. The slow (acoustic) mode propagates vertically along the field lines, forms shocks and remains always within the flux tube. It might deposit effectively the energy of the driver into the chromosphere. When the driver osc...
Poli, E.; Elliott, J. D.; Ratcliff, L. E.; Andrinopoulos, L.; Dziedzic, J.; Hine, N. D. M.; Mostofi, A. A.; Skylaris, C.-K.; Haynes, P. D.; Teobaldi, G.
2016-02-01
We report a linear-scaling density functional theory (DFT) study of the structure, wall-polarization absolute band-alignment and optical absorption of several, recently synthesized, open-ended imogolite (Imo) nanotubes (NTs), namely single-walled (SW) aluminosilicate (AlSi), SW aluminogermanate (AlGe), SW methylated aluminosilicate (AlSi-Me), and double-walled (DW) AlGe NTs. Simulations with three different semi-local and dispersion-corrected DFT-functionals reveal that the NT wall-polarization can be increased by nearly a factor of four going from SW-AlSi-Me to DW-AlGe. Absolute vacuum alignment of the NT electronic bands and comparison with those of rutile and anatase TiO2 suggest that the NTs may exhibit marked propensity to both photo-reduction and hole-scavenging. Characterization of the NTs’ band-separation and optical properties reveal the occurrence of (near-)UV inside-outside charge-transfer excitations, which may be effective for electron-hole separation and enhanced photocatalytic activity. Finally, the effects of the NTs’ wall-polarization on the absolute alignment of electron and hole acceptor states of interacting water (H2O) molecules are quantified and discussed.
Hine, Nicholas D M; Dziedzic, Jacek; Haynes, Peter D; Skylaris, Chris-Kriton
2011-11-28
We present a comparison of methods for treating the electrostatic interactions of finite, isolated systems within periodic boundary conditions (PBCs), within density functional theory (DFT), with particular emphasis on linear-scaling (LS) DFT. Often, PBCs are not physically realistic but are an unavoidable consequence of the choice of basis set and the efficacy of using Fourier transforms to compute the Hartree potential. In such cases the effects of PBCs on the calculations need to be avoided, so that the results obtained represent the open rather than the periodic boundary. The very large systems encountered in LS-DFT make the demands of the supercell approximation for isolated systems more difficult to manage, and we show cases where the open boundary (infinite cell) result cannot be obtained from extrapolation of calculations from periodic cells of increasing size. We discuss, implement, and test three very different approaches for overcoming or circumventing the effects of PBCs: truncation of the Coulomb interaction combined with padding of the simulation cell, approaches based on the minimum image convention, and the explicit use of open boundary conditions (OBCs). We have implemented these approaches in the ONETEP LS-DFT program and applied them to a range of systems, including a polar nanorod and a protein. We compare their accuracy, complexity, and rate of convergence with simulation cell size. We demonstrate that corrective approaches within PBCs can achieve the OBC result more efficiently and accurately than pure OBC approaches.
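One of the approaches compared, the minimum image convention, can be sketched generically for a cubic cell. This is the illustrative textbook construction, not the ONETEP implementation.

```python
import numpy as np

# Generic minimum image convention (MIC) for a cubic cell of side L:
# each component of a pairwise separation is wrapped to the nearest
# periodic image, so interactions use the closest copy of each particle.
def mic_displacement(r_i, r_j, L):
    d = r_i - r_j
    return d - L * np.round(d / L)   # wrap components into [-L/2, L/2)

L = 10.0
a = np.array([0.5, 9.8, 5.0])
b = np.array([9.9, 0.3, 5.0])
d = mic_displacement(a, b, L)
print(np.linalg.norm(d))  # ~0.78, not the 'bare' distance of ~13.4
```

For open-boundary results, as the abstract discusses, such wrapped interactions must be corrected or avoided so that spurious images do not contribute.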
Iqbal, Javed; Yahia, I. S.; Zahran, H. Y.; AlFaify, S.; AlBassam, A. M.; El-Naggar, A. M.
2016-12-01
2′,7′-Dichlorofluorescein (DCF) is a promising organic semiconductor material for different technological applications such as solar cells, photodiodes, and Schottky diodes. A DCF thin film on conductive (FTO) glass was prepared by a low-cost spin coating technique. The spectrophotometric data (absorbance, reflectance and transmittance) were collected in the 350-2500 nm wavelength range at normal incidence. The linear refractive index (n) and absorption index (k) were computed using the Fresnel equations. The optical band gap was evaluated, and two band gaps were found: (1) one related to the band gap of the FTO/glass substrate, equal to 3.4 eV, and (2) one related to the absorption edge of DCF, equal to 2.25 eV. Non-linear parameters such as the non-linear refractive index (n2) and the optical susceptibility χ(3) were evaluated by a spectroscopic method based on the refractive index. Both n2 and χ(3) increased rapidly with increasing wavelength, with redshifted absorption. Our work suggests a new route to using FTO glass for a new generation of optical devices and technology.
Directory of Open Access Journals (Sweden)
Thomas Philipp
2012-05-01
Full Text Available Abstract Background It is well known that the deterministic dynamics of biochemical reaction networks can be more easily studied if timescale separation conditions are invoked (the quasi-steady-state assumption). In this case the deterministic dynamics of a large network of elementary reactions are well described by the dynamics of a smaller network of effective reactions. Each of the latter represents a group of elementary reactions in the large network and has associated with it an effective macroscopic rate law. A popular method to achieve model reduction in the presence of intrinsic noise consists of using the effective macroscopic rate laws to heuristically deduce effective probabilities for the effective reactions, which then enables simulation via the stochastic simulation algorithm (SSA). The validity of this heuristic SSA method is a priori doubtful because the reaction probabilities for the SSA have only been rigorously derived from microscopic physics arguments for elementary reactions. Results We here obtain, by rigorous means and in closed form, a reduced linear Langevin equation description of the stochastic dynamics of monostable biochemical networks in conditions characterized by small intrinsic noise and timescale separation. The slow-scale linear noise approximation (ssLNA), as the new method is called, is used to calculate the intrinsic noise statistics of enzyme and gene networks. The results agree very well with SSA simulations of the non-reduced network of elementary reactions. In contrast, the conventional heuristic SSA is shown to overestimate the size of noise for Michaelis-Menten kinetics, considerably underestimate the size of noise for Hill-type kinetics, and in some cases even miss the prediction of noise-induced oscillations. Conclusions A new general method, the ssLNA, is derived and shown to correctly describe the statistics of intrinsic noise about the macroscopic concentrations under timescale separation conditions.
In Vivo Real Time Volumetric Synthetic Aperture Ultrasound Imaging
DEFF Research Database (Denmark)
Bouzari, Hamed; Rasmussen, Morten Fischer; Brandt, Andreas Hjelm
2015-01-01
Synthetic aperture (SA) imaging can be used to achieve real-time volumetric ultrasound imaging using 2-D array transducers. The sensitivity of SA imaging is improved by maximizing the acoustic output, but one must consider the limitations of an ultrasound system, both technical and biological. This paper investigates the in vivo applicability and sensitivity of volumetric SA imaging. Utilizing the transmit events to generate a set of virtual point sources, a frame rate of 25 Hz for a 90° x 90° field-of-view was achieved. Data were obtained using a 3.5 MHz 32 x 32 element 2-D phased array transducer connected to the experimental scanner (SARUS). Proper scaling is applied to the excitation signal such that intensity levels are in compliance with the U.S. Food and Drug Administration regulations for in vivo ultrasound imaging. The measured Mechanical Index and spatial-peak temporal...
Hung, Linda; Huang, Chen; Shin, Ilgyou; Ho, Gregory S.; Lignères, Vincent L.; Carter, Emily A.
2010-12-01
Orbital-free density functional theory (OFDFT) is a first-principles quantum mechanics method to find the ground-state energy of a system by variationally minimizing with respect to the electron density. No orbitals are used in the evaluation of the kinetic energy (unlike Kohn-Sham DFT), and the method scales nearly linearly with the size of the system. The PRinceton Orbital-Free Electronic Structure Software (PROFESS) uses OFDFT to model materials from the atomic scale to the mesoscale. This new version of PROFESS allows the study of larger systems with two significant changes: PROFESS is now parallelized, and the ion-electron and ion-ion terms scale quasilinearly, instead of quadratically as in PROFESS v1 (L. Hung and E.A. Carter, Chem. Phys. Lett. 475 (2009) 163). At the start of a run, PROFESS reads the various input files that describe the geometry of the system (ion positions and cell dimensions), the type of elements (defined by electron-ion pseudopotentials), the actions you want it to perform (minimize with respect to electron density and/or ion positions and/or cell lattice vectors), and the various options for the computation (such as which functionals you want it to use). Based on these inputs, PROFESS sets up a computation and performs the appropriate optimizations. Energies, forces, stresses, material geometries, and electron density configurations are some of the values that can be output throughout the optimization.
New version program summary:
Program title: PROFESS
Catalogue identifier: AEBN_v2_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBN_v2_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 68 721
No. of bytes in distributed program, including test data, etc.: 1 708 547
Distribution format: tar.gz
Programming language: Fortran 90
Computer
Surfactant enhanced volumetric sweep efficiency
Energy Technology Data Exchange (ETDEWEB)
Harwell, J.H.; Scamehorn, J.F.
1989-10-01
Surfactant-enhanced waterflooding is a novel EOR method aimed to improve the volumetric sweep efficiencies in reservoirs. The technique depends upon the ability to induce phase changes in surfactant solutions by mixing with surfactants of opposite charge or with salts of appropriate type. One surfactant or salt solution is injected into the reservoir. It is followed later by injection of another surfactant or salt solution. The sequence of injections is arranged so that the two solutions do not mix until they are into the permeable regions well away from the well bore. When they mix at this point, by design they form a precipitate or gel-like coacervate phase, plugging this permeable region, forcing flow through less permeable regions of the reservoir, improving sweep efficiency. The selectivity of the plugging process is demonstrated by achieving permeability reductions in the high permeable regions of Berea sandstone cores. Strategies were set to obtain a better control over the plug placement and the stability of plugs. A numerical simulator has been developed to investigate the potential increases in oil production of model systems. Furthermore, the hardness tolerance of anionic surfactant solutions is shown to be enhanced by addition of monovalent electrolyte or nonionic surfactants. 34 refs., 32 figs., 8 tabs.
Volumetric verification of multiaxis machine tool using laser tracker.
Aguado, Sergio; Samper, David; Santolaria, Jorge; Aguilar, Juan José
2014-01-01
This paper presents a method of volumetric verification for machine tools with linear and rotary axes using a laser tracker. Beyond a method for a particular machine, it presents a methodology that can be used on any machine type. The paper presents the schema and kinematic model of a machine with three axes of movement, two linear and one rotary, including the measurement system and the nominal rotation matrix of the rotary axis. Using this, the machine tool volumetric error is obtained, and nonlinear optimization techniques are employed to improve the accuracy of the machine tool. The verification provides a mathematical, not physical, compensation, in less time than other verification methods, by means of the indirect measurement of the geometric errors of the machine from the linear and rotary axes. The paper includes an extensive study of the appropriateness and drawbacks of the regression function employed depending on the types of movement of the axes of any machine. In the same way, strengths and weaknesses of measurement methods and optimization techniques are presented, depending on the space available to place the measurement system. These studies provide the most appropriate strategies to verify each machine tool, taking into consideration its configuration and its available work space.
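The kinematic-model ingredient of such a verification, composing linear and rotary axes as homogeneous transforms and measuring the pose error that a small axis error induces, can be sketched as follows. The axis values and the 0.01° angular error are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Compose a linear axis and a rotary axis as 4x4 homogeneous transforms,
# then compare nominal vs error-perturbed tool positions.
def trans_z(d):
    T = np.eye(4)
    T[2, 3] = d
    return T

def rot_c(theta):  # rotation about the machine's C (z) axis
    T = np.eye(4)
    T[:2, :2] = [[np.cos(theta), -np.sin(theta)],
                 [np.sin(theta),  np.cos(theta)]]
    return T

tool = np.array([50.0, 0.0, 0.0, 1.0])          # tool point, 50 mm off-axis
nominal = trans_z(100.0) @ rot_c(np.deg2rad(30.0)) @ tool
actual  = trans_z(100.0) @ rot_c(np.deg2rad(30.01)) @ tool  # 0.01 deg axis error
vol_err = np.linalg.norm(actual - nominal)      # volumetric error at this pose
print(round(vol_err, 4))  # → 0.0087
```

A 0.01° error on the rotary axis maps to roughly r·Δθ ≈ 50 mm × 1.75e-4 rad ≈ 9 μm at the tool, which is the kind of pose-dependent error the laser-tracker verification identifies.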
Andermatt, Samuel; Cha, Jinwoong; Schiffmann, Florian; VandeVondele, Joost
2016-07-12
In this work, methods for the efficient simulation of large systems embedded in a molecular environment are presented. These methods combine linear-scaling (LS) Kohn-Sham (KS) density functional theory (DFT) with subsystem (SS) DFT. LS DFT is efficient for large subsystems, while SS DFT is linear scaling with a smaller prefactor for large sets of small molecules. The combination of SS and LS, which is an embedding approach, can result in a 10-fold speedup over a pure LS simulation for large systems in aqueous solution. In addition to a ground-state Born-Oppenheimer SS+LS implementation, a time-dependent density functional theory-based Ehrenfest molecular dynamics (EMD) using density matrix propagation is presented that allows for performing nonadiabatic dynamics. Density matrix-based EMD in the SS framework is naturally linear scaling and appears suitable to study the electronic dynamics of molecules in solution. In the LS framework, linear scaling results as long as the density matrix remains sparse during time propagation. However, we generally find a less than exponential decay of the density matrix after a sufficiently long EMD run, preventing LS EMD simulations with arbitrary accuracy. The methods are tested on various systems, including spectroscopy on dyes, the electronic structure of TiO2 nanoparticles, electronic transport in carbon nanotubes, and the satellite tobacco mosaic virus in explicit solution.
Wannier-Stark ladder in the linear absorption of a random system with scale-free disorder
Diaz, E.; Dominguez-Adame, F.; Kosevich, Yu. A.; Malyshev, V.A.
2006-01-01
We study numerically the linear optical response of a quasiparticle moving on a one-dimensional disordered lattice in the presence of a linear bias. The random site potential is assumed to be long-range correlated with a power-law spectral density S(k) ~ 1/k^α, α > 0. This type of c
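A sequence with the stated power-law spectral density S(k) ~ 1/k^α is commonly generated by the Fourier filtering method: random phases with amplitudes proportional to k^(-α/2). The sketch below uses that standard construction with made-up parameters; it is not code from the paper.

```python
import numpy as np

# Fourier filtering method: spectral amplitudes ~ k^(-alpha/2) with random
# phases yield a real sequence whose spectral density S(k) ~ 1/k^alpha.
def correlated_potential(n, alpha, rng):
    k = np.fft.rfftfreq(n)[1:]                    # positive wavenumbers
    amp = k ** (-alpha / 2.0)                     # sqrt of the target spectrum
    phases = rng.uniform(0.0, 2.0 * np.pi, k.size)
    spec = np.concatenate(([0.0], amp * np.exp(1j * phases)))
    eps = np.fft.irfft(spec, n)                   # back to real space
    return (eps - eps.mean()) / eps.std()         # zero mean, unit variance

rng = np.random.default_rng(0)
eps = correlated_potential(1024, alpha=2.0, rng=rng)  # site potential, alpha > 0
```

Larger α concentrates spectral weight at small k, producing smoother, more long-range-correlated potentials, which is the regime where delocalization effects like the Wannier-Stark ladder become visible.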
Laser Based 3D Volumetric Display System
1993-03-01
Soltan, P.; Trias, J.; Robinson, W.; Dahlke, W.
Laser-generated 3D volumetric images are displayed on a rotating double helix, with the 3D displays computer controlled for group viewing with the naked eye.
Two-dimensional and volumetric airway changes after bimaxillary surgery for class III malocclusion.
Vaezi, Toraj; Zarch, Seyed Hossein Hosseini; Eshghpour, Majid; Kermani, Hamed
2017-04-01
Any change in maxilla and mandible position can alter the upper airway, and any decrease in the upper airway can cause sleep disorders. Thus, it is necessary to assess airway changes after repositioning of the maxilla and mandible during orthognathic surgery. The purpose of this study was to evaluate linear and volumetric changes in the upper airway after bimaxillary surgery to correct class III malocclusion via cone-beam computed tomography (CBCT) and to identify correlations between linear and volumetric changes. This was a prospective cohort study. CBCTs from 10 class III patients were evaluated before surgery and three months after. The Wilcoxon one-sample test was used to evaluate the differences in measurements before and after surgery. Spearman's rank correlation coefficient was used to test the correlation between linear and volumetric changes. The results show that the nasopharyngeal space increased significantly, and that this increase correlated with degree of maxillary advancement. No significant changes were found in volumes before and after surgery. A correlation was found between linear and volumetric oropharyngeal changes. Bimaxillary surgical correction of class III malocclusion did not cause statistically significant changes in the posterior airway space.
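The statistical pipeline described, a Wilcoxon test on paired before/after measurements plus a Spearman rank correlation between linear and volumetric changes, can be sketched with SciPy. The numbers below are invented for illustration and carry no clinical meaning.

```python
import numpy as np
from scipy import stats

# Hypothetical pre-/post-surgery airway measurements for 10 patients.
pre  = np.array([12.1, 10.4, 11.8,  9.9, 13.0, 10.7, 12.5, 11.1, 10.2, 12.8])
post = np.array([13.4, 11.0, 12.9, 10.1, 14.2, 11.5, 13.1, 11.9, 10.8, 13.5])

# Wilcoxon one-sample (signed-rank) test on the paired differences.
w_stat, w_p = stats.wilcoxon(post - pre)

# Spearman rank correlation between linear and volumetric changes
# (the volumetric changes here are synthetic, roughly proportional).
linear_change = post - pre
volumetric_change = 2.9 * linear_change + np.array(
    [0.1, -0.2, 0.3, 0.0, -0.1, 0.2, -0.3, 0.1, 0.0, -0.1])
rho, rho_p = stats.spearmanr(linear_change, volumetric_change)
print(round(rho, 2))
```

With only 10 paired observations, as in the study, nonparametric tests like these are the appropriate choice because normality cannot be assumed.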
Baudin, Pablo; Ettenhuber, Patrick; Reine, Simen; Kristensen, Kasper; Kjærgaard, Thomas
2016-02-01
The Resolution of the Identity second-order Møller-Plesset perturbation theory (RI-MP2) method is implemented within the linear-scaling Divide-Expand-Consolidate (DEC) framework. In a DEC calculation, the full molecular correlated calculation is replaced by a set of independent fragment calculations each using a subset of the total orbital space. The number of independent fragment calculations scales linearly with the system size, rendering the method linear-scaling and massively parallel. The DEC-RI-MP2 method can be viewed as an approximation to the DEC-MP2 method where the RI approximation is utilized in each fragment calculation. The individual fragment calculations scale with the fifth power of the fragment size for both methods. However, the DEC-RI-MP2 method has a reduced prefactor compared to DEC-MP2 and is well-suited for implementation on massively parallel supercomputers, as demonstrated by test calculations on a set of medium-sized molecules. The DEC error control ensures that the standard RI-MP2 energy can be obtained to the predefined precision. The errors associated with the RI and DEC approximations are compared, and it is shown that the DEC-RI-MP2 method can be applied to systems far beyond the ones that can be treated with a conventional RI-MP2 implementation.
Volumetric hemispheric ratio as a useful tool in personality psychology.
Montag, Christian; Schoene-Bake, Jan-Christoph; Wagner, Jan; Reuter, Martin; Markett, Sebastian; Weber, Bernd; Quesada, Carlos M
2013-02-01
The present study investigates the link between volumetric hemispheric ratios (VHRs) and personality measures in N = 267 healthy participants using the Eysenck Personality Questionnaire-Revised (EPQ-R) and the BIS/BAS scales. A robust association between extraversion and VHRs was observed for gray matter in males but not in females: higher gray matter volume in the left than in the right hemisphere was associated with higher extraversion in males. The results are discussed in the context of positive emotionality and laterality of the human brain.
Theys, Céline; Dobigeon, Nicolas; Richard, Cédric; Tourneret, Jean-Yves; Ferrari, André
2013-01-01
This paper addresses the problem of minimizing a convex cost function under non-negativity and equality constraints, with the aim of solving the linear unmixing problem encountered in hyperspectral imagery. This problem can be formulated as a linear regression problem whose regression coefficients (abundances) satisfy sum-to-one and positivity constraints. A normalized scaled gradient iterative method (NSGM) is proposed for estimating the abundances of the linear mixing model. The positivity constraint is ensured by the Karush-Kuhn-Tucker conditions, whereas the sum-to-one constraint is fulfilled by introducing normalized variables into the algorithm. Convergence is ensured by a one-dimensional search of the step size. Note that the NSGM can be applied to any convex cost function with non-negativity and flux constraints. In order to compare the NSGM with the well-known fully constrained least squares (FCLS) algorithm, the latter is reformulated in terms of a penalized function, which reveals its suboptimality. Si...
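A simplified version of abundance estimation under the two constraints (non-negativity and sum-to-one) can be sketched as a gradient iteration with clipping and renormalization on synthetic data. This is not the authors' exact NSGM or its Karush-Kuhn-Tucker treatment, only the constraint-handling idea.

```python
import numpy as np

# Toy constrained unmixing: minimize 0.5 * ||M a - y||^2 subject to
# a >= 0 and sum(a) = 1, via gradient steps + clipping + renormalization.
# Endmember matrix M and pixel y are synthetic.
def unmix(M, y, iters=5000, step=1e-3):
    a = np.full(M.shape[1], 1.0 / M.shape[1])  # feasible start on the simplex
    for _ in range(iters):
        g = M.T @ (M @ a - y)                  # gradient of the quadratic cost
        a = np.clip(a - step * g, 1e-12, None) # enforce positivity
        a /= a.sum()                           # enforce sum-to-one
    return a

rng = np.random.default_rng(1)
M = rng.random((50, 3))                        # 3 endmember spectra, 50 bands
a_true = np.array([0.6, 0.3, 0.1])
y = M @ a_true                                 # noiseless mixed pixel
a_hat = unmix(M, y)
```

The clip-and-renormalize step is a crude stand-in for the normalized variables and KKT handling of the NSGM, but it illustrates why the estimate stays on the probability simplex throughout.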
Scott, D J; Clarke, J A; Baynham, D E; Bayliss, V; Bradshaw, T; Burton, G; Brummitt, A; Carr, S; Lintern, A; Rochford, J; Taylor, O; Ivanyushenkov, Y
2011-10-21
The first demonstration of a full-scale working undulator module suitable for positron sources at future TeV-scale electron-positron linear colliders is presented. Generating sufficient positrons is an important challenge for these colliders, and using polarized e+ would enhance the machine's capabilities. In an undulator-based source, polarized positrons are generated in a metallic target via pair production initiated by circularly polarized photons produced in a helical undulator. We show how the undulator design is developed by considering impedance effects on the electron beam and by modeling and constructing short prototypes before the successful fabrication and testing of a final module.
Streicher, Jeffrey W; Cox, Christian L; Birchard, Geoffrey F
2012-04-01
Although well documented in vertebrates, correlated changes between metabolic rate and cardiovascular function of insects have rarely been described. Using the very large cockroach species Gromphadorhina portentosa, we examined oxygen consumption and heart rate across a range of body sizes and temperatures. Metabolic rate scaled positively and heart rate negatively with body size, but neither scaled linearly. The response of these two variables to temperature was similar. This correlated response to endogenous (body mass) and exogenous (temperature) variables is likely explained by a mutual dependence on similar metabolic substrate use and/or coupled regulatory pathways. The intraspecific scaling for oxygen consumption rate showed an apparent plateauing at body masses greater than about 3 g. An examination of cuticle mass across all instars revealed isometric scaling with no evidence of an ontogenetic shift towards proportionally larger cuticles. Published oxygen consumption rates of other Blattodea species were also examined and, as in our intraspecific examination of G. portentosa, the scaling relationship was found to be non-linear with a decreasing slope at larger body masses. The decreasing slope at very large body masses in both intraspecific and interspecific comparisons may have important implications for future investigations of the relationship between oxygen transport and maximum body size in insects.
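A standard way to test for the non-linear scaling reported above is to fit a quadratic in log-log space and inspect the sign of the curvature term. The data below are synthetic, with a built-in decreasing slope; they are not the cockroach measurements.

```python
import numpy as np

# Fit log(rate) = b0 + b1*log(mass) + b2*log(mass)^2; b2 < 0 indicates a
# plateauing slope at large body mass, as described in the abstract.
rng = np.random.default_rng(2)
log_m = np.linspace(-1.0, 1.5, 40)                        # log10 body mass (g)
log_v = 0.2 + 0.9 * log_m - 0.15 * log_m ** 2 \
        + rng.normal(0.0, 0.02, 40)                       # synthetic log rate

X = np.column_stack([np.ones_like(log_m), log_m, log_m ** 2])
b, *_ = np.linalg.lstsq(X, log_v, rcond=None)             # OLS fit
local_slope = b[1] + 2 * b[2] * log_m                     # d log V / d log M
print(b[2] < 0, local_slope[-1] < local_slope[0])         # → True True
```

A negative quadratic coefficient with a declining local slope is exactly the signature of the non-linear, decreasing-slope scaling the authors describe for both intra- and interspecific comparisons.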
Brewer, Paul; Berryman, Fiona; Baker, De; Pynsent, Paul; Gardner, Adrian
2013-11-01
Retrospective sequential patient series. To establish the relationship between the magnitude of the deformity in scoliosis and patients' perception of their condition, as measured with Scoliosis Research Society-22 scores. A total of 93 untreated patients with adolescent idiopathic scoliosis were included retrospectively. The Cobb angle was measured from a plain radiograph, and volumetric asymmetry was measured by ISIS2 surface topography. The association of the Scoliosis Research Society scores for function, pain, self-image, and mental health with Cobb angle and volumetric asymmetry was investigated using the Pearson correlation coefficient. Correlation of both Cobb angle and volumetric asymmetry with function and pain was weak. Correlation with self-image was higher, although still moderate (-.37 for Cobb angle and -.44 for volumetric asymmetry); both were statistically significant (Cobb angle, p = .0002; volumetric asymmetry, p = .00001). Cobb angle contributed 13.8% to the linear relationship with self-image, whereas volumetric asymmetry contributed 19.3%. For mental health, correlation was statistically significant with Cobb angle (p = .011) and volumetric asymmetry (p = .0005), but the correlation was low to moderate (-.26 and -.35, respectively). Cobb angle contributed 6.9% to the linear relationship with mental health, whereas volumetric asymmetry contributed 12.4%. Volumetric asymmetry correlates better with both mental health and self-image than Cobb angle does, but the correlation was only moderate. This study suggests that a patient's own perception of self-image and mental health is multifactorial and not completely explained by present objective measurements of the size of the deformity. This helps to explain the difficulties in any objective analysis of a problem with multifactorial perception issues. Further study is required to investigate other physical aspects of the deformity that may have a role in how patients view themselves.
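The percentage "contributions to the linear relationship" quoted above correspond to the coefficient of determination, r² × 100. A minimal sketch with invented paired observations (not the study's data):

```python
import numpy as np

# Invented observations: deformity measure vs. a questionnaire score.
x = np.array([20.0, 35.0, 48.0, 55.0, 62.0, 70.0])  # e.g. Cobb angle (deg)
y = np.array([4.2, 3.9, 3.6, 3.5, 3.1, 3.0])         # e.g. self-image score

r = np.corrcoef(x, y)[0, 1]    # Pearson correlation coefficient
contribution = 100 * r**2      # % of variance explained by the linear fit
print(round(r, 2), round(contribution, 1))
```

For the fabricated numbers above the correlation is strongly negative, mirroring the direction (though not the magnitude) of the study's reported coefficients.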
Gaosheng Li; Hongguang Zhang; Fubin Yang; Songsong Song; Ying Chang; Fei Yu; Jingfu Wang; Baofeng Yao
2016-01-01
A novel free piston expander-linear generator (FPE-LG) integrated unit was proposed to recover waste heat efficiently from vehicle engines. This integrated unit can be used in a small-scale Organic Rankine Cycle (ORC) system and can directly convert the thermodynamic energy of the working fluid into electric energy. The conceptual design of the free piston expander (FPE) was introduced and discussed. A cam plate and the corresponding valve train were used to control the inlet and outlet valve timing ...
Ishihara, Takashi; Kadoya, Toshihiko; Yamamoto, Shuichi
2007-08-24
We applied the model described in our previous paper to rapid scale-up of the ion-exchange chromatography of proteins, in which linear flow velocity, column length and gradient slope were changed. We carried out linear gradient elution experiments and obtained data for the peak salt concentration and peak width. From these data, the plate height (HETP) was calculated as a function of the mobile phase velocity, and the iso-resolution curve (the separation time-elution volume relationship for the same resolution) was calculated. The scale-up chromatography conditions were determined by the iso-resolution curve. The scale-up of the linear gradient elution from 5 mL to 100 mL and 2.5 L column sizes was performed both for the separation of beta-lactoglobulin A and beta-lactoglobulin B by anion-exchange chromatography and for the purification of a recombinant protein by cation-exchange chromatography. Resolution, recovery and purity were examined in order to verify the proposed method.
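The plate height mentioned above can be obtained from peak data via the standard Gaussian relations N = (t_R/σ)² and HETP = L/N. A minimal sketch with invented numbers (not the paper's experimental values):

```python
# Plate height (HETP) from a Gaussian elution peak.
L = 10.0      # column length (cm); invented
t_R = 120.0   # peak retention time (s); invented
sigma = 4.0   # peak standard deviation (s); invented

N = (t_R / sigma) ** 2   # plate number
HETP = L / N             # plate height (cm)
print(N, HETP)  # 900 plates
```

Plotting HETP against mobile phase velocity, as in the abstract, then characterizes the column efficiency across operating conditions.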
Measuring treatment and scale bias effects by linear regression in the analysis of OHI-S scores.
Moore, B J
1977-05-01
A linear regression model is presented for estimating unbiased treatment effects from OHI-S scores. An example is given to illustrate an analysis and to compare results of an unbiased regression estimator with those based on a biased simple difference estimator.
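A minimal sketch of the general idea of a regression (covariate-adjusted) treatment-effect estimator versus a simple difference estimator. The data and effect size below are invented, and this is not the paper's exact OHI-S model:

```python
import numpy as np

# Synthetic trial: post-treatment score depends on baseline score plus a
# treatment effect of -0.4 (all numbers invented for illustration).
rng = np.random.default_rng(0)
n = 200
baseline = rng.normal(2.0, 0.5, n)            # pre-treatment score
treat = rng.integers(0, 2, n).astype(float)   # treatment indicator
post = 0.9 * baseline - 0.4 * treat + rng.normal(0.0, 0.1, n)

# Design matrix: intercept, treatment indicator, baseline covariate.
X = np.column_stack([np.ones(n), treat, baseline])
beta, *_ = np.linalg.lstsq(X, post, rcond=None)

simple_diff = post[treat == 1].mean() - post[treat == 0].mean()
print(beta[1])  # regression-adjusted effect, close to the true -0.4
```

Adjusting for the baseline covariate removes the bias that a raw group difference can carry when baseline scores differ between groups.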
Hirt, Ulrike; Mewes, Melanie; Meyer, Burghard C.
The structure of a landscape is highly relevant for research and planning (such as fulfilling the requirements of the Water Framework Directive (WFD) and implementing comprehensive catchment planning). There is a high potential for restoration of linear landscape elements in most European landscapes. In implementing the WFD in Germany, the restoration of linear landscape elements could be a valuable measure, for example to reduce nutrient input into rivers. Despite this importance of landscape structures for water and nutrient fluxes, biodiversity and the appearance of a landscape, specific studies of the linear elements are rare for larger catchment areas. Existing studies are limited because they either use remote sensing data, which does not adequately differentiate all types of linear landscape elements, or they focus only on a specific type of linear element. To address these limitations, we developed a framework allowing comprehensive quantification of linear landscape elements for catchment areas, using publicly available biotope type data. We analysed the dependence of landscape structures on natural regions and regional soil characteristics. Three data sets (differing in biotopes, soil parameters and natural regions) were generated for the catchment area of the middle Mulde River (2700 km²) in Germany, using overlay processes in geographic information systems (GIS), followed by statistical evaluation. The linear landscape components of the total catchment area are divided into roads (55%), flowing water (21%), tree rows (14%), avenues (5%), and hedges (2%). The occurrence of these landscape components varies regionally among natural units and different soil regions. For example, mixed deciduous stands (3.5 m/ha) are far more frequent in the foothills (6 m/ha) than in the hill country (0.9 m/ha). In contrast, fruit trees are more frequent in the hill country (5.2 m/ha) than in the cooler foothills (0.5 m/ha). Some 70% of avenues, and 40% of tree rows ...
Nonequilibrium volumetric response of shocked polymers
Energy Technology Data Exchange (ETDEWEB)
Clements, B E [Los Alamos National Laboratory]
2009-01-01
Polymers are well known for their non-equilibrium deviatoric behavior. However, investigations involving both high rate shock experiments and equilibrium measured thermodynamic quantities remind us that the volumetric behavior also exhibits a non-equilibrium response. Experiments supporting the notion of a non-equilibrium volumetric behavior will be summarized. Following that discussion, a continuum-level theory is proposed that will account for both the equilibrium and non-equilibrium response. Upon finding agreement with experiment, the theory is used to study the relaxation of a shocked polymer back towards its shocked equilibrium state.
Kjærgaard, Thomas; Baudin, Pablo; Bykov, Dmytro; Eriksen, Janus Juul; Ettenhuber, Patrick; Kristensen, Kasper; Larkin, Jeff; Liakh, Dmitry; Pawłowski, Filip; Vose, Aaron; Wang, Yang Min; Jørgensen, Poul
2017-03-01
We present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide-Expand-Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide-Expand-Consolidate formalism is designed to reduce the steep computational scaling of conventional many-body methods employed in electronic structure theory to linear scaling, while providing a simple mechanism for controlling the error introduced by this approximation. Our massively parallel implementation of this general scheme has three levels of parallelism, being a hybrid of the loosely coupled task-based parallelization approach and the conventional MPI+X programming model, where X is either OpenMP or OpenACC. We demonstrate strong and weak scalability of this implementation on heterogeneous HPC systems, namely on the GPU-based Cray XK7 Titan supercomputer at the Oak Ridge National Laboratory. Using the "resolution of the identity second-order Møller-Plesset perturbation theory" (RI-MP2) as the physical model for simulating correlated electron motion, the linear-scaling DEC implementation is applied to 1-aza-adamantane-trione (AAT) supramolecular wires containing up to 40 monomers (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods.
Jin, Chao; Glawdel, Tomasz; Ren, Carolyn L.; Emelko, Monica B.
2015-12-01
Deposition of colloidal- and nano-scale particles on surfaces is critical to numerous natural and engineered environmental, health, and industrial applications ranging from drinking water treatment to semiconductor manufacturing. Nano-scale surface roughness-induced hydrodynamic impacts on particle deposition were evaluated in the absence of an energy barrier to deposition in a parallel plate system. A non-linear, non-monotonic relationship between deposition surface roughness and particle deposition flux was observed, and a critical roughness size associated with minimum deposition flux or "sag effect" was identified. This effect was more significant for nanoparticles than for larger colloids. This work provides 1) a model describing the impact of nano-scale surface roughness on particle deposition that unifies hydrodynamic forces (using the most current approaches for describing flow field profiles and hydrodynamic retardation effects) with appropriately modified expressions for DLVO interaction energies and gravity forces in one model, and 2) a foundation for further describing the impacts of more complicated scales of deposition surface roughness on particle deposition.
Maurer, Marina; Ochsenfeld, Christian
2013-05-07
An atomic-orbital (AO) based formulation for calculating nuclear magnetic resonance chemical shieldings at the second-order Møller-Plesset perturbation theory level is introduced, which provides a basis for reducing the scaling of the computational effort with the molecular size from the fifth power to linear and for a specific nucleus to sublinear. The latter sublinear scaling in the rate-determining steps becomes possible by avoiding global perturbations with respect to the magnetic field and by solving for quantities that involve the local nuclear magnetic spin perturbation instead. For avoiding the calculation of the second-order perturbed density matrix, we extend our AO-based reformulation of the Z-vector method within a density matrix-based scheme. Our pilot implementation illustrates the fast convergence with respect to the required number of Laplace points and the asymptotic scaling behavior in the rate-determining steps.
Volumetric properties of human islet amyloid polypeptide in liquid water.
Brovchenko, I; Andrews, M N; Oleinikova, A
2010-04-28
The volumetric properties of human islet amyloid polypeptide (hIAPP) in water were studied over a wide temperature range by computer simulations. The intrinsic density ρ_p and the intrinsic thermal expansion coefficient α_p of hIAPP were evaluated by taking into account the difference between the volumetric properties of hydration and bulk water. The density of hydration water, ρ_h, was found to decrease almost linearly with temperature upon heating, and its thermal expansion coefficient was found to be notably higher than that of bulk water. The peptide surface exposed to water is more hydrophobic, and its ρ_h is smaller, in the conformation with a larger number of intrapeptide hydrogen bonds. The two hIAPP peptides studied (with and without a disulfide bridge) show negative α_p, which is close to zero at 250 K and decreases to approximately -1.5 × 10^-3 K^-1 upon heating to 450 K. The analysis of various structural properties of the peptides shows a correlation between the intrinsic peptide volumes and the number of intrapeptide hydrogen bonds. The obtained negative values of α_p can be attributed to the shrinkage of the inner voids of the peptides upon heating.
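The thermal expansion coefficient discussed above is α = (1/V)(dV/dT), which can be estimated numerically from a volume-temperature series. A minimal sketch on synthetic data (an exponentially shrinking volume, not the simulation results above):

```python
import numpy as np

# Synthetic volume-temperature series with a constant expansion
# coefficient of -1e-3 per K built in.
T = np.array([250.0, 300.0, 350.0, 400.0, 450.0])   # temperature (K)
V = 1000.0 * np.exp(-1.0e-3 * (T - 250.0))          # volume (arb. units)

# alpha = (1/V) dV/dT = d(ln V)/dT, via finite differences.
alpha = np.gradient(np.log(V), T)
print(alpha[2])  # ≈ -1.0e-3 per K: negative, as reported for the peptides
```

A negative α over the whole range, as for hIAPP here, means the intrinsic volume shrinks on heating.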
De Angelis, Simone; Manzari, Paola; De Sanctis, Maria Cristina; Altieri, Francesca; Carli, Cristian; Agrosì, Giovanna
2017-09-01
The focus of this work is the analysis of rock slabs by means of the Ma_Miss breadboard instrument. Ma_Miss (Mars Multispectral Imager for Subsurface Studies; Coradini et al., 2001; De Sanctis et al., 2017) is the miniaturized imaging spectrometer onboard the ESA ExoMars 2020 mission. Here we report the results of the analysis carried out on rock slabs using the Ma_Miss breadboard (BB) (De Angelis et al., 2014, 2015) and a spectro-goniometer (SPG). The samples are three volcanic rocks (from the Aeolian Islands and Montiferru volcanoes, Italy) and two carbonate rocks (from the Central Apennines, Italy). Visible and near-infrared spectroscopic characterization was first performed on all the samples with the spectro-goniometer (SPG). Subsequently, higher-spatial-resolution spectra were acquired with the Ma_Miss BB setup in each of the areas analyzed with the SPG. We compared the spectra of the same areas of the slabs acquired with the SPG and the Ma_Miss BB. Three different analysis approaches were applied to the spectra: arithmetic averaging of the spectra, linear mixing of reflectances, and linear mixing of single scattering albedos (using the Hapke model). The comparison shows that: (i) the Ma_Miss instrument has great capabilities for the investigation of rock surfaces in high detail; a large number of different mineralogical phases can be recognized thanks to Ma_Miss's high resolution within each millimeter-sized analyzed area; (ii) the agreement with SPG spectra is excellent, especially when linear mixing is applied for the convolution of Ma_Miss BB spectra.
Process conditions and volumetric composition in composites
DEFF Research Database (Denmark)
Madsen, Bo
2013-01-01
The obtainable volumetric composition in composites is linked to the gravimetric composition, and it is influenced by the conditions of the manufacturing process. A model for the volumetric composition is presented, where the volume fractions of fibers, matrix and porosity are calculated as a function of the fiber weight fraction, and where parameters are included for the composite microstructure and the fiber assembly compaction behavior. Based on experimental data of composites manufactured with different process conditions, together with model predictions, different types of process-related ... is increased. Altogether, the model is demonstrated to be a valuable tool for a quantitative analysis of the effect of process conditions. Based on the presented findings and considerations, examples of future work are mentioned for the further improvement of the model.
Indexing Volumetric Shapes with Matching and Packing.
Koes, David Ryan; Camacho, Carlos J
2015-04-01
We describe a novel algorithm for bulk-loading an index with high-dimensional data and apply it to the problem of volumetric shape matching. Our matching and packing algorithm is a general approach for packing data according to a similarity metric. First an approximate k-nearest neighbor graph is constructed using vantage-point initialization, an improvement to previous work that decreases construction time while improving the quality of approximation. Then graph matching is iteratively performed to pack related items closely together. The end result is a dense index with good performance. We define a new query specification for shape matching that uses minimum and maximum shape constraints to explicitly specify the spatial requirements of the desired shape. This specification provides a natural language for performing volumetric shape matching and is readily supported by the geometry-based similarity search (GSS) tree, an indexing structure that maintains explicit representations of volumetric shape. We describe our implementation of a GSS tree for volumetric shape matching and provide a comprehensive evaluation of parameter sensitivity, performance, and scalability. Compared to previous bulk-loading algorithms, we find that matching and packing can construct a GSS-tree index in the same amount of time that is denser, flatter, and better performing, with an observed average performance improvement of 2X.
Volumetric 3D Display System with Static Screen
Geng, Jason
2011-01-01
Current display technology has relied on flat, 2D screens that cannot truly convey the third dimension of visual information: depth. In contrast to conventional visualization that is primarily based on 2D flat screens, the volumetric 3D display possesses a true 3D display volume and physically places each 3D voxel of the displayed 3D images at its true 3D (x,y,z) spatial position. Each voxel, analogous to a pixel in a 2D image, emits light from that position to form a real 3D image in the eyes of the viewers. Such true volumetric 3D display technology provides both physiological (accommodation, convergence, binocular disparity, and motion parallax) and psychological (image size, linear perspective, shading, brightness, etc.) depth cues to the human visual system to help in the perception of 3D objects. In a volumetric 3D display, viewers can watch the displayed 3D images from a complete 360° view without using any special eyewear. Volumetric 3D display techniques may lead to a quantum leap in information display technology and can dramatically change the ways humans interact with computers, which can lead to significant improvements in the efficiency of learning and knowledge management processes. Within a block of glass, a large number of tiny voxel dots are created using a recently available machining technique called laser subsurface engraving (LSE). LSE is able to produce tiny physical crack points (as small as 0.05 mm in diameter) at any (x,y,z) location within the cube of transparent material. The crack dots, when illuminated by a light source, scatter the light around and form visible voxels within the 3D volume. The locations of these tiny voxels are strategically determined such that each can be illuminated by a light ray from a high-resolution digital mirror device (DMD) light engine. The distribution of these voxels occupies the full display volume within the static 3D glass screen. This design eliminates any moving screen seen in previous ...
Clara, Ian P.; Huynh, Cam-Loi
2003-01-01
The Wechsler Adult Intelligence Scale-3rd Edition (WAIS-III) was released in 1997. Short forms developed for previous versions have not yet been investigated for the WAIS-III in special populations. A 4-subtest short form by A. B. Silverstein emerged as the most promising short form in an elderly sample. (Contains 49 references, 4 tables, and 2…
Using pressure and volumetric approaches to estimate CO2 storage capacity in deep saline aquifers
Thibeau, S.; Bachu, S.; Birkholzer, J.; Holloway, S.; Neele, F.P.; Zou, Q.
2014-01-01
Various approaches are used to evaluate the capacity of saline aquifers to store CO2, resulting in a wide range of capacity estimates for a given aquifer. The two approaches most used are the volumetric “open aquifer” and “closed aquifer” approaches. We present four full-scale aquifer cases, where CO2 storage capacity is evaluated both volumetrically (with “open” and/or “closed” approaches) and through flow modeling. These examples show that the “open aquifer” CO2 storage capacity estimation ...
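A minimal sketch of the kind of volumetric "open aquifer" screening estimate referred to above, M = A · h · φ · ρ_CO2 · E (a DOE/CSLF-style formula). All parameter values are illustrative, not taken from the four aquifer cases:

```python
# Volumetric "open aquifer" screening estimate of CO2 storage capacity.
A = 1.0e10       # aquifer area (m^2); illustrative
h = 50.0         # net aquifer thickness (m); illustrative
phi = 0.20       # average porosity; illustrative
rho_co2 = 700.0  # CO2 density at reservoir conditions (kg/m^3); illustrative
E = 0.02         # storage efficiency factor; illustrative

M_kg = A * h * phi * rho_co2 * E
print(M_kg / 1e9)  # capacity in megatonnes (1 Mt = 1e9 kg): 1400.0
```

The efficiency factor E is the key uncertainty; "closed aquifer" approaches instead bound capacity by the pressure rise the compartment can sustain, which is why the two approaches can diverge widely.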
An empirical correlation of the volumetric mass transfer coefficient was developed for a pilot-scale internal-loop rectangular airlift bioreactor designed for biotechnology. The empirical correlation combines classic turbulence theory, Kolmogorov's isotropic turbulence theory, with Higbie's penetration ...
Zuehlsdorff, Tim J; Payne, Mike C; Haynes, Peter D
2015-01-01
We present a solution of the full TDDFT eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspace with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate-gradients algorithm that is very memory-efficient. The algorithm is validated on a test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll (BChl) i...
Saitow, Masaaki; Becker, Ute; Riplinger, Christoph; Valeev, Edward F; Neese, Frank
2017-04-28
The Coupled-Cluster expansion, truncated after single and double excitations (CCSD), provides accurate and reliable molecular electronic wave functions and energies for many molecular systems around their equilibrium geometries. However, the high computational cost, which is well known to scale as O(N^6) with system size N, has limited its practical application to small systems consisting of not more than approximately 20-30 atoms. To overcome these limitations, low-order scaling approximations to CCSD have been intensively investigated over the past few years. In our previous work, we have shown that by combining the pair natural orbital (PNO) approach and the concept of orbital domains it is possible to achieve fully linear-scaling CC implementations (DLPNO-CCSD and DLPNO-CCSD(T)) that recover around 99.9% of the total correlation energy [C. Riplinger et al., J. Chem. Phys. 144, 024109 (2016)]. The production-level implementations of the DLPNO-CCSD and DLPNO-CCSD(T) methods were shown to be applicable to realistic systems composed of a few hundred atoms in a routine, black-box fashion on relatively modest hardware. In 2011, a reduced-scaling CCSD approach for high-spin open-shell unrestricted Hartree-Fock reference wave functions was proposed (UHF-LPNO-CCSD) [A. Hansen et al., J. Chem. Phys. 135, 214102 (2011)]. After a few years of experience with this method, a few shortcomings of UHF-LPNO-CCSD were noticed that required a redesign of the method, which is the subject of this paper. To this end, we employ the high-spin open-shell variant of the N-electron valence perturbation theory formalism to define the initial guess wave function, and consequently also the open-shell PNOs. The new PNO ansatz properly converges to the closed-shell limit since all truncations and approximations have been made in strict analogy to the closed-shell case. Furthermore, given the fact that the formalism uses a single set of orbitals, only a single PNO integral transformation is ...
Directory of Open Access Journals (Sweden)
Hiroki Yoshioka
2012-07-01
The spectral unmixing of a linear mixture model (LMM) with Normalized Difference Vegetation Index (NDVI) constraints was performed to estimate the fraction of vegetation cover (FVC) over the earth's surface in an effort to facilitate long-term surface vegetation monitoring using a set of environmental satellites. Although the integrated use of multiple sensors improves the spatial and temporal quality of the data sets, area-averaged FVC values obtained using an LMM-based algorithm suffer from systematic biases caused by differences in the spatial resolutions of the sensors, known as scaling effects. The objective of this study is to investigate the scaling effects in area-averaged FVC values using analytical approaches, by focusing on the monotonic behavior of the scaling effects as a function of the spatial resolution. The analysis was conducted based on a resolution transformation model introduced recently by the authors in the accompanying paper (Obata et al., 2012). The maximum value of the scaling effects present in FVC values was derived analytically and validated numerically. A series of derivations identified the error bounds (inherent uncertainties) of the averaged FVC values caused by the scaling effect. The results indicate a fundamental difference between the NDVI and the FVC retrieved from NDVI, which should be noted for accuracy improvement of long-term observation datasets.
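A minimal sketch of the two-endmember NDVI linear mixture model behind such FVC retrievals: NDVI = FVC·NDVI_veg + (1 − FVC)·NDVI_soil, solved for FVC. The endmember values below are assumed for illustration:

```python
import numpy as np

# Assumed endmember NDVI values for full vegetation and bare soil.
ndvi_veg, ndvi_soil = 0.86, 0.06

# Observed pixel NDVI values (invented).
ndvi = np.array([0.06, 0.26, 0.46, 0.86])

# Invert the linear mixture model for the vegetation fraction.
fvc = (ndvi - ndvi_soil) / (ndvi_veg - ndvi_soil)
fvc = np.clip(fvc, 0.0, 1.0)   # constrain to the physical range
print(fvc)  # 0 for bare soil, 1 for full cover
```

Because this inversion is linear in NDVI but NDVI itself is a non-linear function of reflectance, averaging over coarser pixels does not commute with the retrieval, which is the origin of the scaling bias the abstract analyzes.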
Kang, Bongmun; Yoon, Ho-Sung
2015-02-01
Recently, microalgae have been considered as a renewable energy source for fuel production because their production is nonseasonal and may take place on nonarable land. Despite these advantages, microalgal oil production is significantly affected by environmental factors. Furthermore, large variability remains an important problem in the measurement of algae productivity and in compositional analysis, especially of the total lipid content. Thus, there is considerable interest in accurate determination of total lipid content during the biotechnological process. For these reasons, various high-throughput technologies have been suggested for accurate measurement of the total lipids contained in microorganisms, especially oleaginous microalgae. In addition, more advanced technologies have been employed to quantify the total lipids of microalgae without pretreatment. However, these methods have difficulty measuring total lipid content in wet-form microalgae obtained from large-scale production. In the present study, thermal analysis with a two-step linear temperature program was applied to measure the heat evolved in the temperature range from 310 to 351 °C by Nostoc sp. KNUA003 obtained from large-scale cultivation. We then examined the relationship between the heat evolved in 310-351 °C (HE) and the total lipid content of wet Nostoc cells cultivated in a raceway. As a result, a linear relationship was determined between the HE value and the total lipid content of Nostoc sp. KNUA003; in particular, the linear relationship between the HE value and the total lipid content of the tested microorganism was 98%. Based on this relationship, the total lipid content converted from the heat evolved of wet Nostoc sp. KNUA003 could be used for monitoring lipid induction in large-scale cultivation. Copyright © 2014 Elsevier Inc. All rights reserved.
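A minimal sketch of the linear calibration such a relationship implies: fit lipid content against heat evolved by least squares and read off the goodness of fit. The numbers are invented (an exactly linear fake data set), not the study's measurements:

```python
import numpy as np

# Invented calibration data: heat evolved (HE) vs. total lipid content.
he = np.array([10.0, 15.0, 20.0, 25.0, 30.0])   # heat evolved (arb. units)
lipid = 0.8 * he + 2.0                           # lipid content (%), fake

slope, intercept = np.polyfit(he, lipid, 1)      # linear calibration
r2 = np.corrcoef(he, lipid)[0, 1] ** 2           # fit quality
print(round(slope, 2), round(intercept, 2), round(r2, 2))
```

Once calibrated, a measured HE value converts directly to an estimated lipid content via slope · HE + intercept, which is the monitoring use the abstract proposes.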
Terracciano, Antonio; McCrae, Robert R; Brant, Larry J; Costa, Paul T
2005-09-01
The authors examined age trends in the 5 factors and 30 facets assessed by the Revised NEO Personality Inventory in Baltimore Longitudinal Study of Aging data (N=1,944; 5,027 assessments) collected between 1989 and 2004. Consistent with cross-sectional results, hierarchical linear modeling analyses showed gradual personality changes in adulthood: a decline in Neuroticism up to age 80, stability and then decline in Extraversion, decline in Openness, increase in Agreeableness, and increase in Conscientiousness up to age 70. Some facets showed different curves from the factor they define. Birth cohort effects were modest, and there were no consistent Gender x Age interactions. Significant nonnormative changes were found for all 5 factors; they were not explained by attrition but might be due to genetic factors, disease, or life experience. Copyright (c) 2005 APA, all rights reserved.
Jilani, Asim; Abdel-wahab, M. Sh.; Zahran, H. Y.; Yahia, I. S.; Al-Ghamdi, Attieh A.
2016-09-01
Pure and Si-doped ZnO (SZO) thin films at different Si concentrations (1.9 and 2.4 wt%) were deposited on highly cleaned glass substrates by radio-frequency (RF) magnetron sputtering. The morphological and structural investigations were performed by atomic force microscopy (AFM) and X-ray diffraction (XRD). X-ray photoelectron spectroscopy was employed to study the composition and the change in the chemical state of the Si-doped ZnO thin films. Optical properties such as transmittance, energy band gap, extinction coefficient, refractive index, and dielectric loss of the pure and Si-doped ZnO thin films were calculated. The linear optical susceptibility, nonlinear refractive index, and nonlinear optical susceptibility were also studied by a spectroscopic approach rather than the conventional Z-scan method. The energy gap of the Si-doped ZnO thin films was found to increase as compared to pure ZnO thin films. The crystallinity of the ZnO thin films was affected by the Si doping. The O 1s spectra of pure and Si-doped ZnO revealed the bonding between O2- and Zn2+ ions and a reduction in the surface oxygen with Si doping. The chemical-state analysis of Si 2p showed the conversion of Si to SiOx and SiO2. An increase in the first-order linear optical susceptibility χ(1) and the third-order nonlinear optical susceptibility χ(3) was observed with Si doping. The nonlinear studies give some insight into the applications of metal oxides in nonlinear optical devices. In short, this study showed that Si doping through sputtering affects the structural, surface, and optical properties of ZnO thin films, which could be quite useful for advanced applications such as metal-oxide-based optical devices.
Beer, Matthias; Ochsenfeld, Christian
2008-06-14
A density matrix-based Laplace reformulation of coupled-perturbed self-consistent field (CPSCF) theory is presented. It allows a direct, instead of iterative, solution for the integral-independent part of the density matrix-based CPSCF (D-CPSCF) equations [J. Kussmann and C. Ochsenfeld, J. Chem. Phys. 127, 054103 (2007)]. In this way, the matrix-multiplication overhead compared to molecular orbital-based solutions is reduced to a minimum, while at the same time, the linear-scaling behavior of D-CPSCF theory is preserved. The present Laplace-based equation solver is expected to be of general applicability.
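The Laplace idea underlying such density-matrix-based reformulations can be illustrated numerically: a denominator 1/x is written as the integral of exp(−x·t) over t from 0 to infinity and approximated by a handful of quadrature points. A minimal sketch using Gauss-Laguerre nodes (illustrative only, not the paper's quadrature scheme):

```python
import numpy as np

# Approximate 1/x via the Laplace identity 1/x = \int_0^inf exp(-x t) dt.
# Gauss-Laguerre weights absorb the factor exp(-t), so the remaining
# integrand for x = 2 is exp(-(x-1) t).
t, w = np.polynomial.laguerre.laggauss(8)   # 8 Laplace points
x = 2.0
approx = float(np.sum(w * np.exp(-(x - 1.0) * t)))
print(approx)  # close to 1/x = 0.5 with only 8 points
```

The fast convergence with the number of Laplace points, noted in the abstract, is what makes replacing energy denominators by short exponential sums practical.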
Institute of Scientific and Technical Information of China (English)
Shen Yi
2013-01-01
In this paper, we propose an adaptive strategy based on linear prediction of queue length to minimize congestion in Barabási-Albert (BA) scale-free networks. This strategy uses local knowledge of traffic conditions and allows nodes to self-coordinate their probability of accepting incoming packets. We show that the strategy can remarkably delay the onset of congestion, and that systems avoiding congestion can benefit from a hierarchical organization of the nodes' accepting rates. Furthermore, as the prediction order increases, we achieve larger values of the critical load together with a smooth transition from free flow to congestion.
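The idea can be sketched as follows: each node extrapolates its queue length and throttles acceptance as the prediction approaches capacity. The accept-probability rule below is a hypothetical stand-in for the paper's rule, and a first-order least-squares fit stands in for the general p-th order predictor:

```python
import numpy as np

def predict_queue(history, order=2):
    """Extrapolate the next queue length from the last `order`+1 samples
    via a least-squares straight-line fit (a first-order predictor)."""
    h = np.asarray(history[-(order + 1):], dtype=float)
    t = np.arange(len(h))
    slope, intercept = np.polyfit(t, h, 1)   # fit q(t) ~ slope*t + intercept
    return max(slope * len(h) + intercept, 0.0)

def accept_probability(history, capacity, order=2):
    """Accept incoming packets with a probability that falls as the
    predicted queue approaches the node's capacity (hypothetical rule)."""
    q_pred = predict_queue(history, order)
    return float(np.clip(1.0 - q_pred / capacity, 0.0, 1.0))

# A node whose queue grows steadily throttles acceptance before it saturates:
p = accept_probability([2, 4, 6, 8], capacity=20)   # ~0.5
```

Only the node's own queue history and capacity appear here, matching the strategy's reliance on local traffic knowledge.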
Directory of Open Access Journals (Sweden)
M. Cargnin
2015-06-01
Single-cycle firing is currently the most widespread method used for the production of ceramic tile. Productivity is directly related to the behavior of the constituent materials of the ceramic piece during the thermal cycle. Numerical tools that allow the prediction of material behavior can be of great help in optimizing this stage. This study addressed the mathematical modeling of the temperature profile within a ceramic tile, together with the sintering kinetics, to simulate the effect of the thermal cycle on the final size. On the laboratory scale, 80 mm x 20 mm specimens with thicknesses of 2.3 mm and 7.8 mm were prepared in order to determine the kinetic constants and validate the model. The application was carried out on an industrial scale, with 450 mm x 450 mm pieces that were 8.0 mm thick. The results show that the model was capable of predicting the experimental results satisfactorily.
Giverso, Chiara; Verani, Marco; Ciarletta, Pasquale
2016-06-01
Biological experiments performed on living bacterial colonies have demonstrated the microbial capability to develop finger-like shapes and highly irregular contours, even starting from a homogeneous inoculum. In this work, we study from the continuum mechanics viewpoint the emergence of such branched morphologies in an initially circular colony expanding on the top of a Petri dish coated with agar. The bacterial colony expansion, based on either a source term, representing volumetric mitotic processes, or a nonconvective mass flux, describing chemotactic expansion, is modeled at the continuum scale. We demonstrate that the front of the colony is always linearly unstable, with dispersion curves similar to those characterizing branching instabilities. We also perform finite element simulations, which not only prove the emergence of branching, but also highlight dramatic differences between the two mechanisms of colony expansion in the nonlinear regime. Furthermore, the proposed combination of analytical and numerical analysis allowed the study of the influence of different model parameters on the selection of specific patterns. A very good agreement has been found between the resulting simulations and the typical structures observed in biological assays. Finally, this work provides a new interpretation of the emergence of branched patterns in living aggregates, depicted as the result of a complex interplay among chemical, mechanical, and size effects.
Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C
2010-09-21
We present calculations of formation energies of defects in an ionic solid (Al(2)O(3)) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.
Mahmood, Rashid; JIA, Shaofeng
2016-08-01
In this study, the linear scaling method used for the downscaling of temperature was extended from monthly scaling factors to daily scaling factors (SFs) to improve the daily variations in the corrected temperature. In the original linear scaling (OLS), mean monthly SFs are used to correct the future data, whereas mean daily SFs are used in the extended linear scaling (ELS) method. The proposed method was evaluated in the Jhelum River basin for the period 1986-2000, using the observed maximum temperature (Tmax) and minimum temperature (Tmin) of 18 climate stations and the simulated Tmax and Tmin of five global climate models (GCMs) (GFDL-ESM2G, NorESM1-ME, HadGEM2-ES, MIROC5, and CanESM2); the method was also compared with OLS to assess the improvement. Before the evaluation of ELS, the GCMs were also evaluated using their raw data against the observed data for the same period (1986-2000). Four statistical indicators, i.e., error in mean, error in standard deviation, root mean square error, and correlation coefficient, were used for the evaluation. The evaluation with the GCMs' raw data showed that GFDL-ESM2G and MIROC5 performed better than the other GCMs according to all the indicators, but still with unsatisfactory results that preclude their direct application in the basin. Nevertheless, after the correction with ELS, a noticeable improvement was observed in all the indicators except the correlation coefficient, because this method only corrects the magnitude. It was also noticed that the daily variations of the observed data were better captured by the data corrected with ELS than with OLS. Finally, the ELS method was applied for the downscaling of the five GCMs' Tmax and Tmin for the period 2041-2070 under RCP8.5 in the Jhelum basin. The results showed that the basin would face a hotter climate in the future relative to the present climate, which may result in increasing water requirements in public, industrial, and agriculture
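A minimal sketch of the extension from monthly to daily scaling factors for an additive (temperature) correction might look as follows; the function names are illustrative, and multiplicative variants used for other variables such as precipitation are omitted:

```python
import numpy as np

def daily_scaling_factors(obs, gcm_hist, doy):
    """Additive scaling factor for each day-of-year: mean observed minus
    mean simulated temperature over the reference period (ELS sketch;
    original linear scaling would group by month instead of day)."""
    sf = {}
    for d in np.unique(doy):
        mask = doy == d
        sf[d] = obs[mask].mean() - gcm_hist[mask].mean()
    return sf

def correct(gcm_future, doy_future, sf):
    """Shift each future value by the scaling factor of its day-of-year."""
    return np.array([t + sf[d] for t, d in zip(gcm_future, doy_future)])
```

Because the correction is a per-day shift of the magnitude, it leaves the rank ordering of the simulated series unchanged, which is why the correlation coefficient above is not improved.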
Non-Linear Integral Equation and excited-states scaling functions in the sine-Gordon model
Destri, C
1997-01-01
The NLIE (the non-linear integral equation equivalent to the Bethe Ansatz equations for finite size) is generalized to excited states, that is, states with holes and complex roots over the antiferromagnetic ground state. We consider the sine-Gordon/massive Thirring model (sG/mT) in a periodic box of length L using the light-cone approach, in which the sG/mT model is obtained as the continuum limit of an inhomogeneous six-vertex model. This NLIE is a useful starting point to compute the spectrum of excited states both analytically, in the large-L (perturbative) and small-L (conformal) regimes, and numerically. We derive the conformal weights of the Bethe states with holes and non-string complex roots (close and wide roots) in the UV limit. These weights agree with the Coulomb gas description, yielding a UV conformal spectrum related by duality to the IR conformal spectrum of the six-vertex model.
van Weeren, R J; Intema, H T; Rudnick, L; Bruggen, M; Hoeft, M; Oonk, J B R
2012-01-01
Some merging galaxy clusters host diffuse extended radio emission, so-called radio halos and relics. Here we present observations between 147 MHz and 4.9 GHz of a new radio-selected galaxy cluster 1RXS J0603.3+4214 (z=0.225). The cluster is also detected as an extended X-ray source in the RASS. It hosts a large bright 1.9 Mpc radio relic, an elongated ~2 Mpc radio halo, and two smaller radio relics. The large radio relic has a peculiar linear morphology. For this relic we observe a clear spectral index gradient in the direction towards the cluster center. We performed Rotation Measure (RM) Synthesis between 1.2 and 1.7 GHz. The results suggest that for the west part of the large relic some of the Faraday rotation is caused by the intracluster medium (ICM) and is not only due to galactic foregrounds. We also carried out a detailed spectral analysis of this radio relic and created radio color-color diagrams. We find (i) an injection spectral index of -0.6 to -0.7, (ii) steepening spectral index and increasing spectral curvature in the ...
Pavošević, Fabijan; Pinski, Peter; Riplinger, Christoph; Neese, Frank; Valeev, Edward F.
2016-04-01
We present a formulation of the explicitly correlated second-order Møller-Plesset (MP2-F12) energy in which all nontrivial post-mean-field steps are formulated with linear computational complexity in system size. The two key ideas are the use of pair-natural orbitals for compact representation of wave function amplitudes and the use of domain approximation to impose the block sparsity. This development utilizes the concepts for sparse representation of tensors described in the context of the domain based local pair-natural orbital-MP2 (DLPNO-MP2) method by us recently [Pinski et al., J. Chem. Phys. 143, 034108 (2015)]. Novel developments reported here include the use of domains not only for the projected atomic orbitals, but also for the complementary auxiliary basis set (CABS) used to approximate the three- and four-electron integrals of the F12 theory, and a simplification of the standard B intermediate of the F12 theory that avoids computation of four-index two-electron integrals that involve two CABS indices. For quasi-1-dimensional systems (n-alkanes), the O(N) DLPNO-MP2-F12 method becomes less expensive than the conventional O(N^5) MP2-F12 for n between 10 and 15, for double- and triple-zeta basis sets; for the largest alkane, C200H402, in the def2-TZVP basis, the observed computational complexity is ~N^1.6, largely due to the cubic cost of computing the mean-field operators. The method reproduces the canonical MP2-F12 energy with high precision: 99.9% of the canonical correlation energy is recovered with the default truncation parameters. Although its cost is significantly higher than that of the DLPNO-MP2 method, the cost increase is compensated by the great reduction of the basis set error due to explicit correlation.
Energy Technology Data Exchange (ETDEWEB)
Lipparini, Filippo, E-mail: flippari@uni-mainz.de [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris (France); Sorbonne Universités, UPMC Univ. Paris 06, UMR 7616, Laboratoire de Chimie Théorique, F-75005 Paris (France); Sorbonne Universités, UPMC Univ. Paris 06, Institut du Calcul et de la Simulation, F-75005 Paris (France); Scalmani, Giovanni; Frisch, Michael J. [Gaussian, Inc., 340 Quinnipiac St. Bldg. 40, Wallingford, Connecticut 06492 (United States); Lagardère, Louis [Sorbonne Universités, UPMC Univ. Paris 06, Institut du Calcul et de la Simulation, F-75005 Paris (France); Stamm, Benjamin [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris (France); CNRS, UMR 7598 and 7616, F-75005 Paris (France); Cancès, Eric [Université Paris-Est, CERMICS, Ecole des Ponts and INRIA, 6 and 8 avenue Blaise Pascal, 77455 Marne-la-Vallée Cedex 2 (France); Maday, Yvon [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7598, Laboratoire Jacques-Louis Lions, F-75005 Paris (France); Institut Universitaire de France, Paris, France and Division of Applied Maths, Brown University, Providence, Rhode Island 02912 (United States); Piquemal, Jean-Philip [Sorbonne Universités, UPMC Univ. Paris 06, UMR 7616, Laboratoire de Chimie Théorique, F-75005 Paris (France); CNRS, UMR 7598 and 7616, F-75005 Paris (France); Mennucci, Benedetta [Dipartimento di Chimica e Chimica Industriale, Università di Pisa, Via Risorgimento 35, 56126 Pisa (Italy)
2014-11-14
We present the general theory and implementation of the Conductor-like Screening Model according to the recently developed ddCOSMO paradigm. The various quantities needed to apply ddCOSMO at different levels of theory, including quantum mechanical descriptions, are discussed in detail, with a particular focus on how to compute the integrals needed to evaluate the ddCOSMO solvation energy and its derivatives. The overall computational cost of a ddCOSMO computation is then analyzed and decomposed in the various steps: the different relative weights of such contributions are then discussed for both ddCOSMO and the fastest available alternative discretization to the COSMO equations. Finally, the scaling of the cost of the various steps with respect to the size of the solute is analyzed and discussed, showing how ddCOSMO opens significantly new possibilities when cheap or hybrid molecular mechanics/quantum mechanics methods are used to describe the solute.
van Weeren, R. J.; Röttgering, H. J. A.; Intema, H. T.; Rudnick, L.; Brüggen, M.; Hoeft, M.; Oonk, J. B. R.
2012-10-01
Some merging galaxy clusters host diffuse extended radio emission, so-called radio halos and relics, unrelated to individual galaxies. The origin of these halos and relics is still debated, although there is compelling evidence now that they are related to cluster merger events. Here we present detailed Westerbork Synthesis Radio Telescope (WSRT) and Giant Metrewave Radio Telescope (GMRT) radio observations between 147 MHz and 4.9 GHz of a new radio-selected galaxy cluster 1RXS J0603.3+4214, for which we find a redshift of 0.225. The cluster is detected as an extended X-ray source in the ROSAT All Sky Survey with an X-ray luminosity of LX, 0.1-2.4 keV ~ 1 × 1045 erg s-1. The cluster hosts a large bright 1.9 Mpc radio relic, an elongated ~2 Mpc radio halo, and two fainter smaller radio relics. The large radio relic has a peculiar linear morphology. For this relic we observe a clear spectral index gradient from the front of the relic towards the back, in the direction towards the cluster center. Parts of this relic are highly polarized with a polarization fraction of up to 60%. We performed rotation measure (RM) synthesis between 1.2 and 1.7 GHz. The results suggest that for the west part of the large relic some of the Faraday rotation is caused by the intracluster medium and not only due to galactic foregrounds. We also carried out a detailed spectral analysis of this radio relic and created radio color-color diagrams. We find (i) an injection spectral index of -0.6 to -0.7; (ii) steepening spectral index and increasing spectral curvature in the post-shock region; and (iii) an overall power-law spectrum between 74 MHz and 4.9 GHz with α = -1.10 ± 0.02. Mixing of emission in the beam from regions with different spectral ages is probably the dominant factor that determines the shape of the radio spectra. Changes in the magnetic field, total electron content, or adiabatic gains/losses do not play a major role. A model in which particles are (re)accelerated in a
Subramanian, Aneesh C.
2012-11-01
This paper investigates the role of the linear analysis step of the ensemble Kalman filters (EnKF) in disrupting the balanced dynamics in a simple atmospheric model and compares it to a fully nonlinear particle-based filter (PF). The filters have a very similar forecast step but the analysis step of the PF solves the full Bayesian filtering problem while the EnKF analysis only applies to Gaussian distributions. The EnKF is compared to two flavors of the particle filter with different sampling strategies, the sequential importance resampling filter (SIRF) and the sequential kernel resampling filter (SKRF). The model admits a chaotic vortical mode coupled to a comparatively fast gravity wave mode. It can also be configured either to evolve on a so-called slow manifold, where the fast motion is suppressed, or such that the fast-varying variables are diagnosed from the slow-varying variables as slaved modes. Identical twin experiments show that EnKF and PF capture the variables on the slow manifold well as the dynamics is very stable. PFs, especially the SKRF, capture slaved modes better than the EnKF, implying that a full Bayesian analysis estimates the nonlinear model variables better. The PFs perform significantly better in the fully coupled nonlinear model where fast and slow variables modulate each other. This suggests that the analysis step in the PFs maintains the balance in both variables much better than the EnKF. It is also shown that increasing the ensemble size generally improves the performance of the PFs but has less impact on the EnKF after a sufficient number of members have been used.
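The Gaussian analysis step that distinguishes the EnKF from the particle filters can be sketched as a stochastic perturbed-observation update; this is a textbook form assuming a linear observation operator (the SIRF/SKRF analysis would instead reweight and resample the particles):

```python
import numpy as np

def enkf_analysis(X, y, H, R, rng):
    """Stochastic EnKF analysis step with perturbed observations.
    X: (n, N) ensemble of states; y: (m,) observation; H: (m, n) linear
    observation operator; R: (m, m) observation-error covariance.
    This linear/Gaussian update is exact only for Gaussian statistics,
    which is why it can disrupt nonlinear balanced dynamics."""
    n, N = X.shape
    Xm = X.mean(axis=1, keepdims=True)
    A = X - Xm                                     # ensemble anomalies
    P = A @ A.T / (N - 1)                          # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, N).T
    return X + K @ (Y - H @ X)                     # updated ensemble
```

A particle filter would replace the Kalman-gain shift by importance weights proportional to the observation likelihood of each member, followed by resampling, thereby solving the full Bayesian problem.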
Combined surface and volumetric occlusion shading
Schott, Matthias O.
2012-02-01
In this paper, a method for interactive direct volume rendering is proposed that computes ambient occlusion effects for visualizations that combine both volumetric and geometric primitives, specifically tube shaped geometric objects representing streamlines, magnetic field lines or DTI fiber tracts. The proposed algorithm extends the recently proposed Directional Occlusion Shading model to allow the rendering of those geometric shapes in combination with a context providing 3D volume, considering mutual occlusion between structures represented by a volume or geometry. © 2012 IEEE.
Energy Technology Data Exchange (ETDEWEB)
Zuehlsdorff, T. J., E-mail: tjz21@cam.ac.uk; Payne, M. C. [Cavendish Laboratory, J. J. Thomson Avenue, Cambridge CB3 0HE (United Kingdom); Hine, N. D. M. [Department of Physics, University of Warwick, Coventry CV4 7AL (United Kingdom); Haynes, P. D. [Department of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Department of Physics, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom); Thomas Young Centre for Theory and Simulation of Materials, Imperial College London, Exhibition Road, London SW7 2AZ (United Kingdom)
2015-11-28
We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.
Zuehlsdorff, T. J.; Hine, N. D. M.; Payne, M. C.; Haynes, P. D.
2015-11-01
We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.
MR volumetric assessment of endolymphatic hydrops
Energy Technology Data Exchange (ETDEWEB)
Guerkov, R.; Berman, A.; Jerin, C.; Krause, E. [University of Munich, Department of Otorhinolaryngology Head and Neck Surgery, Grosshadern Medical Centre, Munich (Germany); University of Munich, German Centre for Vertigo and Balance Disorders, Grosshadern Medical Centre, Marchioninistr. 15, 81377, Munich (Germany); Dietrich, O.; Flatz, W.; Ertl-Wagner, B. [University of Munich, Institute of Clinical Radiology, Grosshadern Medical Centre, Munich (Germany); Keeser, D. [University of Munich, Institute of Clinical Radiology, Grosshadern Medical Centre, Munich (Germany); University of Munich, German Centre for Vertigo and Balance Disorders, Grosshadern Medical Centre, Marchioninistr. 15, 81377, Munich (Germany); University of Munich, Department of Psychiatry and Psychotherapy, Innenstadtkliniken Medical Centre, Munich (Germany)
2014-10-16
We aimed to volumetrically quantify endolymph and perilymph spaces of the inner ear in order to establish a methodological basis for further investigations into the pathophysiology and therapeutic monitoring of Meniere's disease. Sixteen patients (eight females, aged 38-71 years) with definite unilateral Meniere's disease were included in this study. Magnetic resonance (MR) cisternography with a T2-SPACE sequence was combined with a Real reconstruction inversion recovery (Real-IR) sequence for delineation of inner ear fluid spaces. Machine learning and automated local thresholding segmentation algorithms were applied for three-dimensional (3D) reconstruction and volumetric quantification of endolymphatic hydrops. Test-retest reliability was assessed by the intraclass correlation coefficient; correlation of the cochlear endolymph volume ratio with hearing function was assessed by the Pearson correlation coefficient. Endolymph volume ratios could be reliably measured in all patients, with a mean (range) value of 15 % (2-25) for the cochlea and 28 % (12-40) for the vestibulum. Test-retest reliability was excellent, with an intraclass correlation coefficient of 0.99. Cochlear endolymphatic hydrops was significantly correlated with hearing loss (r = 0.747, p = 0.001). MR imaging after local contrast application and image processing, including machine learning and automated local thresholding, enable the volumetric quantification of endolymphatic hydrops. This allows for a quantitative assessment of the effect of therapeutic interventions on endolymphatic hydrops. (orig.)
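Once the endolymph and total fluid spaces are segmented, the reported volume ratio reduces to voxel counting; a minimal sketch (the mask names and voxel-volume parameter are illustrative, not the study's pipeline):

```python
import numpy as np

def endolymph_ratio(endo_mask, fluid_mask, voxel_volume_mm3=1.0):
    """Ratio of endolymph volume to total inner-ear fluid volume from
    binary 3D segmentation masks, as reported per cochlea/vestibulum."""
    v_endo = endo_mask.sum() * voxel_volume_mm3
    v_fluid = fluid_mask.sum() * voxel_volume_mm3
    return v_endo / v_fluid
```

The hard part of the study is producing reliable masks from the Real-IR and T2-SPACE images; the ratio itself is then a direct voxel count.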
Pehlevan, Cengiz; Hu, Tao; Chklovskii, Dmitri B
2015-07-01
Neural network models of early sensory processing typically reduce the dimensionality of streaming input data. Such networks learn the principal subspace, in the sense of principal component analysis, by adjusting synaptic weights according to activity-dependent learning rules. When derived from a principled cost function, these rules are nonlocal and hence biologically implausible. At the same time, biologically plausible local rules have been postulated rather than derived from a principled cost function. Here, to bridge this gap, we derive a biologically plausible network for subspace learning on streaming data by minimizing a principled cost function. In a departure from previous work, where cost was quantified by the representation, or reconstruction, error, we adopt a multidimensional scaling cost function for streaming data. The resulting algorithm relies only on biologically plausible Hebbian and anti-Hebbian local learning rules. In a stochastic setting, synaptic weights converge to a stationary state, which projects the input data onto the principal subspace. If the data are generated by a nonstationary distribution, the network can track the principal subspace. Thus, our result makes a step toward an algorithmic theory of neural computation.
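A compact sketch of such a network combines a Hebbian feedforward update with an anti-Hebbian lateral update; the step sizes and initialisation below are illustrative, and the derived algorithm's exact per-neuron learning rates differ:

```python
import numpy as np

def subspace_network(X, k, eta=0.01, rng=None):
    """Online principal-subspace learning with local Hebbian (feedforward W)
    and anti-Hebbian (lateral M) rules, in the spirit of the
    similarity-matching derivation. X: (T, n) stream of inputs."""
    rng = rng or np.random.default_rng(0)
    n = X.shape[1]
    W = rng.standard_normal((k, n)) * 0.1   # feedforward weights
    M = np.eye(k)                           # lateral weights
    for x in X:
        y = np.linalg.solve(M, W @ x)       # steady-state neural activity
        W += eta * (np.outer(y, x) - W)     # Hebbian: correlate output/input
        M += eta * (np.outer(y, y) - M)     # anti-Hebbian: decorrelate outputs
    return W, M
```

Both updates depend only on the pre- and postsynaptic activities of the synapse being updated, which is the biological-plausibility property the cost-function derivation is designed to deliver.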
Nercessian, Shahan C; Panetta, Karen A; Agaian, Sos S
2013-09-01
Image enhancement is a crucial pre-processing step for various image processing applications and vision systems. Many enhancement algorithms have been proposed based on different sets of criteria. However, a direct multi-scale image enhancement algorithm capable of independently and/or simultaneously providing adequate contrast enhancement, tonal rendition, dynamic range compression, and accurate edge preservation in a controlled manner has yet to be produced. In this paper, a multi-scale image enhancement algorithm based on a new parametric contrast measure is presented. The parametric contrast measure incorporates not only the luminance masking characteristic, but also the contrast masking characteristic of the human visual system. The formulation of the contrast measure can be adapted for any multi-resolution decomposition scheme in order to yield new human visual system-inspired multi-scale transforms. In this article, it is exemplified using the Laplacian pyramid, discrete wavelet transform, stationary wavelet transform, and dual-tree complex wavelet transform. Consequently, the proposed enhancement procedure is developed. The advantages of the proposed method include: 1) the integration of both the luminance and contrast masking phenomena; 2) the extension of non-linear mapping schemes to human visual system-inspired multi-scale contrast coefficients; 3) the extension of human visual system-based image enhancement approaches to the stationary and dual-tree complex wavelet transforms; 4) a direct means of adjusting overall brightness; and 5) dynamic range compression for image enhancement within a direct multi-scale enhancement framework. Experimental results demonstrate the ability of the proposed algorithm to achieve simultaneous local and global enhancements.
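The overall pipeline (decompose into multi-scale band-pass detail, remap the detail coefficients, and reconstruct) can be sketched as follows, with a plain per-level gain standing in for the paper's parametric HVS-inspired contrast mapping and an undecimated box-blur decomposition standing in for the Laplacian pyramid:

```python
import numpy as np

def blur(img):
    """3x3 box blur with edge replication (stands in for a Gaussian kernel)."""
    p = np.pad(img, 1, mode='edge')
    return sum(p[i:i + img.shape[0], j:j + img.shape[1]]
               for i in range(3) for j in range(3)) / 9.0

def enhance(img, levels=3, gain=1.5):
    """Multi-scale enhancement sketch: split the image into band-pass
    detail layers plus a residual, amplify the detail at each scale,
    and reconstruct by summing the layers back."""
    g = img.astype(float)
    bands = []
    for _ in range(levels):
        low = blur(g)
        bands.append(g - low)      # band-pass detail at this scale
        g = low
    out = g                        # coarse residual
    for band in reversed(bands):
        out = out + gain * band    # boosted detail added back
    return out
```

In the actual method, the constant `gain` is replaced by a non-linear mapping of HVS-inspired contrast coefficients, which is what allows contrast enhancement, tonal rendition, and dynamic range compression to be controlled independently.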
Modal analysis of measurements from a large-scale VIV model test of a riser in linearly sheared flow
Lie, H.; Kaasen, K. E.
2006-05-01
Large-scale model testing of a tensioned steel riser in well-defined sheared current was performed at Hanøytangen outside Bergen, Norway, in 1997. The length of the model was 90 m and its diameter was 3 cm. The aim of the present work is to examine this information and to improve the understanding of vortex-induced vibrations (VIV) for cases with a very high order of responding modes, and in particular to study whether and under which circumstances the riser motions would be single-mode or multi-mode. The measurement system consisted of 29 biaxial gauges for bending moment. The signals are processed to yield curvature and displacement, and further to identify the modes of vibration. A modal approach is used successfully, employing a combination of signal filtering and least-squares fitting of precalculated mode shapes. As part of the modal analysis, it is demonstrated that the equally spaced instrumentation limited the maximum extractable mode number to the number of instrumentation locations. This imposed a constraint on the analysis of in-line (IL) vibration, which occurs at higher frequencies and involves higher modes than cross-flow (CF) vibration. The analysis has shown that, in general, the riser response was irregular (i.e. broad-banded) and that the degree of irregularity increases with the flow speed. In some tests distinct spectral peaks could be seen, corresponding to a dominating mode. No occurrences of single-mode response (lock-in) were seen. The IL response is more broad-banded than the CF response and contains higher frequencies. The average value of the displacement r.m.s. over the length of the riser is computed to indicate the magnitude of VIV motion during one test. In the CF direction the average displacement is typically 1/4 of the diameter, almost independent of the flow speed. For the IL direction the values are in the range 0.05-0.08 of the diameter. The peak frequency taken from the spectra of the CF displacement at the riser midpoint shows approximately
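The least-squares modal fit can be illustrated on synthetic data: assuming pinned-pinned mode shapes sin(nπx/L) (the real analysis fits precalculated riser mode shapes to curvature signals), measured displacements at the sensor locations are projected onto the first few modes:

```python
import numpy as np

def modal_weights(x, w, L, n_modes):
    """Least-squares fit of displacements w measured at positions x to the
    pinned-pinned mode shapes sin(n*pi*x/L). As noted in the abstract,
    n_modes must not exceed the number of sensor locations."""
    Phi = np.column_stack([np.sin(n * np.pi * x / L)
                           for n in range(1, n_modes + 1)])
    coeffs, *_ = np.linalg.lstsq(Phi, w, rcond=None)
    return coeffs

# Synthetic check: a displacement dominated by mode 3 is recovered as such.
L = 90.0                                  # riser length, as in the test
x = np.linspace(0.0, L, 29)[1:-1]         # 27 equispaced interior "sensors"
w = 0.8 * np.sin(3 * np.pi * x / L)
c = modal_weights(x, w, L, n_modes=10)
```

With equispaced sensors the sampled mode shapes alias above a limiting mode number, which is the constraint the abstract describes for the higher-mode IL response.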
Optimization approaches to volumetric modulated arc therapy planning
Energy Technology Data Exchange (ETDEWEB)
Unkelbach, Jan, E-mail: junkelbach@mgh.harvard.edu; Bortfeld, Thomas; Craft, David [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Alber, Markus [Department of Medical Physics and Department of Radiation Oncology, Aarhus University Hospital, Aarhus C DK-8000 (Denmark); Bangert, Mark [Department of Medical Physics in Radiation Oncology, German Cancer Research Center, Heidelberg D-69120 (Germany); Bokrantz, Rasmus [RaySearch Laboratories, Stockholm SE-111 34 (Sweden); Chen, Danny [Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, Indiana 46556 (United States); Li, Ruijiang; Xing, Lei [Department of Radiation Oncology, Stanford University, Stanford, California 94305 (United States); Men, Chunhua [Department of Research, Elekta, Maryland Heights, Missouri 63043 (United States); Nill, Simeon [Joint Department of Physics at The Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, London SM2 5NG (United Kingdom); Papp, Dávid [Department of Mathematics, North Carolina State University, Raleigh, North Carolina 27695 (United States); Romeijn, Edwin [H. Milton Stewart School of Industrial and Systems Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332 (United States); Salari, Ehsan [Department of Industrial and Manufacturing Engineering, Wichita State University, Wichita, Kansas 67260 (United States)
2015-03-15
Volumetric modulated arc therapy (VMAT) has found widespread clinical application in recent years. A large number of treatment planning studies have evaluated the potential for VMAT for different disease sites based on the currently available commercial implementations of VMAT planning. In contrast, literature on the underlying mathematical optimization methods used in treatment planning is scarce. VMAT planning represents a challenging large scale optimization problem. In contrast to fluence map optimization in intensity-modulated radiotherapy planning for static beams, VMAT planning represents a nonconvex optimization problem. In this paper, the authors review the state-of-the-art in VMAT planning from an algorithmic perspective. Different approaches to VMAT optimization, including arc sequencing methods, extensions of direct aperture optimization, and direct optimization of leaf trajectories are reviewed. Their advantages and limitations are outlined and recommendations for improvements are discussed.
Institute of Scientific and Technical Information of China (English)
李根; 唐春安; 李连崇
2013-01-01
Fast solution of large-scale linear equations in finite element analysis is a classical subject in computational mechanics and a key technique in computer-aided engineering (CAE) and computer-aided manufacturing (CAM). This paper presents a high-efficiency improved symmetric successive over-relaxation (ISSOR) preconditioned conjugate gradient (PCG) method, which maintains the convergence and inherent parallelism of the original form. Ideally, the computation can be reduced by nearly 50% compared with the original algorithm, and the method is suitable for high-performance computing owing to its inherently efficient basic operations. Numerical results show that the proposed method has the best performance.
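As a point of reference for the method discussed above, the standard (non-improved) SSOR-preconditioned conjugate gradient iteration can be sketched as follows. This is a minimal dense-matrix illustration, not the paper's ISSOR variant; a production FEM solver would apply the preconditioner via sparse triangular solves rather than forming M explicitly.

```python
import numpy as np

def ssor_preconditioner(A, omega=1.0):
    """Classical SSOR preconditioner for SPD A = D + L + U:
    M = (1 / (omega*(2-omega))) * (D + omega*L) D^{-1} (D + omega*U)."""
    D = np.diag(np.diag(A))
    L = np.tril(A, -1)
    U = np.triu(A, 1)
    Dinv = np.diag(1.0 / np.diag(A))
    return (D + omega * L) @ Dinv @ (D + omega * U) / (omega * (2.0 - omega))

def pcg(A, b, M, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = np.linalg.solve(M, r)   # apply M^{-1}; triangular solves in practice
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = np.linalg.solve(M, r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

The ISSOR improvement described in the paper reorganizes these basic operations to roughly halve the work per iteration while preserving this convergence behavior.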
Energy Technology Data Exchange (ETDEWEB)
Cai, J., E-mail: caijun@ncepu.edu.c [School of Nuclear Science and Engineering, North China Electric Power University, Beijing 102206 (China); Wang, Y.D. [Key Laboratory for Anisotropy and Texture of Materials (Ministry of Education), Northeastern University, Shenyang 110004 (China); Wang, C.Y. [Department of Physics, Tsinghua University, Beijing 100084 (China)
2009-11-15
By using a linear-scaling self-consistent charge density functional tight-binding (SCC-DFTB) method and ab initio Dmol3 calculations, the energy and Young's modulus of (10, 0) single-walled carbon nanotubes (SWCNTs) are investigated as functions of tube length. It was found that the Young's modulus increases rapidly with increasing SWCNT length, then increases slowly, ultimately approaching a constant value once the length exceeds approximately 20 nm, whereas the average energy per atom shows the reverse trend with tube length. We found that the length dependence of the energy and Young's modulus stems from the changed P_y density of states of atoms in the end region of the tube. A simple formula is proposed to quantitatively explain the length-dependent energy and modulus.
Alves, R M S; Pereira, B F; Ribeiro, R G L G; Pitol, D L; Ciamarro, C M; Valim, J R T; Caetano, F H
2016-07-01
Increasing pollution levels have turned our attention to assessing the lethal and sublethal effects of toxic agents using the most informative techniques possible. We must seek non-invasive or non-lethal sampling methods that represent an attractive alternative to traditional techniques of environmental assessment in fish. Detergents are amongst the most common contaminants of water bodies, and LAS (linear alkylbenzene sulfonate) is one of the most widely used anionic surfactants on the market. Our study analyzed morphological alterations (histological and histochemical) of the scale epithelium of Prochilodus lineatus under exposure to two concentrations of LAS, 3.6 mg/L and 0.36 mg/L, for a period of 30 days, evaluated at 14, 21 and 30 days. In order to establish morphological analysis of the scale epithelium as a new non-lethal environmental assessment tool that is reliable and comparable to classic methods, the relative sensitivity of this technique was compared with a commonly used method of environmental assessment in fish, the estimation of the effects of pollutants upon branchial morphology. Two experiments were carried out, testing animals in tanks and in individual aquariums. Results of analyses on gill tissue show that exposure to 3.6 mg/L of surfactant caused severe damage, including hyperplasia, hypertrophy and fusion at 14 days, with aneurysms at 21 and 30 days, while exposure to 0.36 mg/L had lighter effects on the organ, mainly a lower incidence of fusion and hyperplasia. Additionally, scale morphology was altered severely in response to 3.6 mg/L of LAS, consistently showing increased mucous and club cell production. Epithelial thickness was the most variable parameter measured. Scale epithelium sensitivity has the potential to be a reliable environmental marker for fish species, since it has the advantage of being less invasive than traditional methods. However, more studies are required to increase the robustness of the technique before it can be
Doser, Bernd; Lambrecht, Daniel S; Kussmann, Jörg; Ochsenfeld, Christian
2009-02-14
A Laplace-transformed second-order Møller-Plesset perturbation theory (MP2) method is presented, which achieves linear scaling of the computational effort with molecular size for electronically local structures. For systems with a delocalized electronic structure, cubic or even quadratic scaling is achieved. Numerically significant contributions to the atomic orbital (AO)-MP2 energy are preselected using the multipole-based integral estimates (MBIE) introduced earlier by us [J. Chem. Phys. 123, 184102 (2005)]. Since MBIE provides rigorous upper bounds, numerical accuracy is fully controlled and the exact MP2 result is attained. While the choice of thresholds for a specific accuracy depends only weakly on the molecular system, our AO-MP2 scheme offers the possibility of incremental thresholding: for little additional computational expense, the numerical accuracy can be systematically converged. We illustrate this dependence on numerical thresholds with the calculation of intermolecular interaction energies for the S22 test set. The efficiency and accuracy of our AO-MP2 method are demonstrated for linear alkanes, stacked DNA base pairs, and carbon nanotubes: e.g., for DNA systems the crossover toward conventional MP2 schemes occurs between one and two base pairs. In this way, it is for the first time possible to compute wave-function-based correlation energies for systems containing more than 1000 atoms with 10 000 basis functions, as illustrated for a 16 base pair DNA system on a single-core computer, where no empirical restrictions are introduced and numerical accuracy is fully preserved.
Volumetric polymerization shrinkage of contemporary composite resins
Directory of Open Access Journals (Sweden)
Halim Nagem Filho
2007-10-01
Full Text Available The polymerization shrinkage of composite resins may negatively affect the clinical outcome of a restoration. Extensive research has been carried out to develop new formulations of composite resins that provide good handling characteristics and dimensional stability during polymerization. The purpose of this study was to analyze, in vitro, the magnitude of the volumetric polymerization shrinkage of 7 contemporary composite resins (Definite, Suprafill, SureFil, Filtek Z250, Fill Magic, Alert, and Solitaire) to determine whether there are differences among these materials. The tests were conducted to a precision of 0.1 mg. The volumetric shrinkage was measured by hydrostatic weighing before and after polymerization and calculated by known mathematical equations. One-way ANOVA (α = 0.05) was used to determine statistically significant differences in volumetric shrinkage among the tested composite resins. Suprafill (1.87 ± 0.01) and Definite (1.89 ± 0.01) shrank significantly less than the other composite resins. SureFil (2.01 ± 0.06), Filtek Z250 (1.99 ± 0.03), and Fill Magic (2.02 ± 0.02) presented intermediate levels of polymerization shrinkage. Alert and Solitaire presented the highest degree of polymerization shrinkage. Knowing the polymerization shrinkage rates of the commercially available composite resins, the dentist can choose between using composite resins with lower polymerization shrinkage rates or adopting technical or operational procedures to minimize the adverse effects of resin contraction during light-activation.
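The "known mathematical equations" behind hydrostatic weighing are not reproduced in the abstract; a common form, sketched below under standard Archimedes assumptions (the water density constant and all sample values are illustrative, not the study's data), converts the two weighings into a density and then into a percent volume change.

```python
RHO_WATER = 0.9975  # g/cm^3, approximate water density near room temperature (assumed)

def density_by_hydrostatic_weighing(mass_in_air, mass_in_water, rho_water=RHO_WATER):
    """Archimedes' principle: displaced volume = buoyant mass loss / water density,
    hence rho_sample = m_air * rho_water / (m_air - m_water)."""
    return mass_in_air * rho_water / (mass_in_air - mass_in_water)

def volumetric_shrinkage_percent(rho_uncured, rho_cured):
    """For a fixed mass, V = m / rho, so
    %dV = (V_before - V_after) / V_before * 100 = (1 - rho_uncured / rho_cured) * 100."""
    return (1.0 - rho_uncured / rho_cured) * 100.0
```

For example, an uncured density of 2.00 g/cm^3 rising to 2.04 g/cm^3 after light-activation corresponds to a volumetric shrinkage of about 1.96%, in the same range as the values reported above.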
A SUBDIVISION SCHEME FOR VOLUMETRIC MODELS
Institute of Scientific and Technical Information of China (English)
GhulamMustafa; LiuXuefeng
2005-01-01
In this paper, a subdivision scheme which generalizes a surface scheme from previous papers to volume meshes is designed. The scheme exhibits significant control over the shrinkage/size of volumetric models. It also has the ability to conveniently incorporate boundaries and creases into a smooth limit shape of models. The method presented here is much simpler and easier than MacCracken and Joy's. This method makes no restrictions on the local topology of meshes. In particular, it can be applied without any change to meshes of nonmanifold topology.
Volumetric composition in composites and historical data
DEFF Research Database (Denmark)
Lilholt, Hans; Madsen, Bo
2013-01-01
guidance to the optimal combination of fibre content, matrix content and porosity content, in order to achieve the best obtainable properties. Several composite materials systems have been shown to be handleable with this model. An extensive series of experimental data for the system of cellulose fibres...... and polymer (resin) was produced in 1942 – 1944, and these data have been (re-)analysed by the volumetric composition model, and the property values for density, stiffness and strength have been evaluated. Good agreement has been obtained and some further observations have been extracted from the analysis....
IMITATION OF STANDARD VOLUMETRIC ACTIVITY METAL SAMPLES
Directory of Open Access Journals (Sweden)
A. I. Zhukouski
2016-01-01
Full Text Available Due to the specific character of problems in the field of ionizing radiation spectroscopy, the development and production of standard volumetric activity metal samples (standard samples) for calibration and verification of spectrometric equipment are not only expensive, but also require highly qualified experts and unique equipment. Theoretical and experimental studies have shown that imitators, assembled as a set of alternating point gamma-ray sources and metal plates, can be used alongside standard volumetric activity metal samples for calibration of the scintillation-based detectors used in radiation control in metallurgy. Response functions, or instrumental spectra, of such spectrometers for radionuclides like 137Cs, 134Cs, 152Eu, 154Eu, 60Co, 54Mn, 232Th, 226Ra, 65Zn, 125Sb+125mTe, 106Ru+106Rh, 94Nb, 110mAg, 233U, 234U, 235U and 238U are required for calibration in a given measurement geometry. Standard samples in the form of a probe made of melted metal of a certain diameter and height are used in such measurements. However, the production of reference materials is costly and even problematic for radionuclides such as 94Nb, 125Sb+125mTe, 234U, 235U, etc. A recognized way to address this problem is Monte-Carlo simulation. Experimental and theoretical instrumental spectra obtained using standard samples and their imitators show high agreement between the experimental spectra of real samples and the theoretical spectra of their Monte-Carlo models, between the spectra of real samples and those of their imitators, and between the experimental spectra of imitators and the theoretical spectra of their Monte-Carlo models. They also demonstrate the adequacy and consistency of the approach of using a combination of metal scattering layers and reference point gamma-ray sources instead of standard volumetric activity metal samples. As for using several reference point gamma-ray sources
Magnetic volumetric hologram memory with magnetic garnet.
Nakamura, Yuichi; Takagi, Hiroyuki; Lim, Pang Boey; Inoue, Mitsuteru
2014-06-30
Holographic memory is a promising next-generation optical memory with a higher recording density and a higher transfer rate than other types of memory. In holographic memory, magnetic garnet films can serve as rewritable recording media by use of the magneto-optical effect. We have now demonstrated, for the first time, that a magnetic hologram can be recorded volumetrically in a ferromagnetic garnet film and that the signal image can be reconstructed from it. In addition, multiplexing of the magnetic hologram was also confirmed: the image could be reconstructed from a spot overlapped by other spots.
Bodryakov, V. Yu.; Bykov, A. A.
2016-05-01
The correlation between the volumetric thermal expansion coefficient β(T) and the heat capacity C(T) of aluminum is considered in detail. It is shown that a clear correlation is observed in a significantly wider temperature range, up to the melting temperature of the metal, along with the low-temperature range where it is linear. The significant deviation of the dependence β(C) from the low-temperature linear behavior is observed up to the point where the heat capacity achieves the classical Dulong-Petit limit of 3R (R is the universal gas constant).
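The near-linear β-C correlation at low temperature is usually rationalized through the Grüneisen relation (a standard thermodynamic result, not stated in the abstract; γ, B_T and V_m here are assumed material parameters):

```latex
\beta(T) \;=\; \frac{\gamma\, C_V(T)}{B_T\, V_m}
```

where γ is the Grüneisen parameter, B_T the isothermal bulk modulus and V_m the molar volume. β is proportional to C as long as the ratio γ/(B_T V_m) is nearly temperature-independent; a deviation from linearity in β(C) signals that this ratio itself varies at higher temperatures.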
Institute of Scientific and Technical Information of China (English)
Xiaoe RUAN; Huizhuo WU; Na LI; Baiwu WAN
2009-01-01
In this paper, a decentralized iterative learning control strategy is embedded into the procedure of hierarchical steady-state optimization for a class of linear large-scale industrial processes consisting of a number of subsystems. The task of the learning controller for each subsystem is to iteratively generate a sequence of upgraded control inputs to track the sequential step functional control signals, with distinct scales, that are determined by the local decision-making units in the two-layer hierarchical steady-state optimization process. The objective of the designed strategy is to consecutively improve the transient performance of the system. By means of the generalized Young inequality for the convolution integral, the convergence of the learning algorithm is analyzed in the sense of the Lebesgue-p norm. It is shown that inherent features of the system, such as its multi-dimensionality and the interaction among subsystems, may influence the convergence of the non-repetitive learning rule. Numerical simulations illustrate the effectiveness of the proposed control scheme and the validity of the conclusion.
Directory of Open Access Journals (Sweden)
Gaosheng Li
2016-04-01
Full Text Available A novel free piston expander-linear generator (FPE-LG) integrated unit was proposed to efficiently recover waste heat from vehicle engines. This integrated unit can be used in a small-scale organic Rankine cycle (ORC) system and can directly convert the thermodynamic energy of the working fluid into electric energy. The conceptual design of the free piston expander (FPE) was introduced and discussed. A cam plate and the corresponding valve train were used to control the inlet and outlet valve timing of the FPE. The working principle of the FPE-LG was proven feasible using an air test rig. The indicated efficiency of the FPE was obtained from the p-V indicator diagram. The dynamic characteristics of the in-cylinder flow field during the intake and exhaust processes of the FPE were analyzed with a computational fluid dynamics method, based on Fluent software and 3D numerical simulation models. Results show that the indicated efficiency of the FPE can reach 66.2% and the maximum electric power output of the FPE-LG can reach 22.7 W when the working frequency is 3 Hz and the intake pressure is 0.2 MPa. Two large-scale vortices form during the intake process because of the non-uniform distribution of velocity and pressure. The vortex flow converts pressure energy and kinetic energy into thermodynamic energy of the working fluid, which weakens the power capacity of the working fluid.
Ishikawa, Takashi; Totani, Tomonori; Nishimichi, Takahiro; Takahashi, Ryuichi; Yoshida, Naoki; Tonegawa, Motonari
2014-10-01
Redshift-space distortion (RSD) observed in galaxy redshift surveys is a powerful tool to test gravity theories on cosmological scales, but the systematic uncertainties must carefully be examined for future surveys with large statistics. Here we employ various analytic models of RSD and estimate the systematic errors on measurements of the structure growth-rate parameter, fσ8, induced by non-linear effects and the halo bias with respect to the dark matter distribution, by using halo catalogues from 40 realizations of 3.4 × 10^8 comoving h^-3 Mpc^3 cosmological N-body simulations. We consider hypothetical redshift surveys at redshifts z = 0.5, 1.35 and 2, and different minimum halo mass thresholds in the range of 5.0 × 10^11 to 2.0 × 10^13 h^-1 M⊙. We find that the systematic error of fσ8 is greatly reduced to the ~5 per cent level when a recently proposed analytical formula of RSD that takes into account the higher-order coupling between the density and velocity fields is adopted, with a scale-dependent parametric bias model. The dependence of the systematic error on the halo mass, the redshift and the maximum wavenumber used in the analysis is discussed. We also find that the Wilson-Hilferty transformation is useful for improving the accuracy of likelihood analysis when only a small number of modes are available in power spectrum measurements.
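The linear-theory baseline that these analytic RSD models extend is the Kaiser formula (a standard result, not specific to this paper): to leading order, the redshift-space power spectrum of biased tracers is

```latex
P^{s}(k,\mu) \;=\; \left(b + f\mu^{2}\right)^{2} P_{m}(k)
```

where μ is the cosine of the angle between the wavevector and the line of sight, b the tracer bias, f the linear growth rate and P_m(k) the real-space matter power spectrum. Surveys constrain the combination fσ8; the non-linear density-velocity coupling and scale-dependent bias studied here are corrections to this leading-order form.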
Croke, S. M.; O'Sullivan, S. P.; Gabuzda, D. C.
2010-02-01
Previous very long baseline interferometry (VLBI) observations of the nearby (z = 0.0337) active galactic nucleus (AGN) Mrk501 have revealed a complex total-intensity structure with an approximately 90° misalignment between the jet orientations on parsec and kiloparsec scales. The jet displays a `spine' of magnetic field orthogonal to the jet surrounded by a `sheath' of magnetic field aligned with the jet. Mrk501 is also one of a handful of AGN that are regularly detected at TeV energies, indicating the presence of high-energy phenomena in the core. However, multi-epoch analyses of the VLBI total-intensity structure have yielded only very modest apparent speeds for features in the VLBI jet. We investigate the total-intensity and linear-polarization structures of the parsec- to decaparsec-scale jet of Mrk501 using VLBA observations at 8.4, 5, 2.2 and 1.6 GHz. The rotation-measure distribution displays the presence of a Faraday rotation gradient across an extended stretch of the jet, providing new evidence for a helical magnetic field associated with the jet of this AGN. The position of the radio core from the base of the jet follows the law r_core(ν) ∝ ν^(-1.1 ± 0.2), consistent with the compact inner jet region being in equipartition. Hence, we estimate a magnetic field strength of ~40 mG at a distance of 1 pc.
Ozsoy, Oyku Eren; Can, Tolga
2013-01-01
Inference of the topology of signaling networks from perturbation experiments is a challenging problem. Recently, the inference problem has been formulated as a reference network editing problem, and it has been shown that finding the minimum number of edit operations on a reference network to comply with perturbation experiments is an NP-complete problem. In this paper, we propose an integer linear programming (ILP) model for reconstruction of signaling networks from RNAi data and a reference network. The ILP model guarantees the optimal solution; however, it is practical only for small signaling networks of 10-15 genes due to computational complexity. To scale to large signaling networks, we propose a divide-and-conquer heuristic, in which a given reference network is divided into smaller subnetworks that are solved separately and the solutions are merged together to form the solution for the large network. We validate our proposed approach on real and synthetic data sets, and comparison with the state of the art shows that our proposed approach scales better for large networks while attaining similar or better biological accuracy.
Directory of Open Access Journals (Sweden)
Chengbin Deng
2015-07-01
Full Text Available As an important indicator of anthropogenic impacts on the Earth's surface, large-scale urbanized areas must be mapped accurately for various science and policy applications. Although spectral mixture analysis (SMA) can provide spatial distributions and quantitative fractions for better representations of urban areas, this technique is rarely explored with 1-km resolution imagery, due mainly to the absence of image endmembers associated with the mixed pixel problem. Consequently, endmember variability, the most profound source of error in SMA, has rarely been considered with coarse resolution imagery. These issues can be acute for fractional land cover mapping because of the significant spectral variations of numerous land covers across a large study area. To solve these two problems, a hierarchically object-based SMA (HOBSMA) was developed (1) to extrapolate local endmembers for regional spectral library construction, and (2) to incorporate endmember variability into linear spectral unmixing of MODIS 1-km imagery for large-scale impervious surface abundance mapping. Results show that by integrating spatial constraints from object-based image segments and endmember extrapolation techniques into multiple endmember SMA (MESMA) of coarse resolution imagery, HOBSMA improves the discrimination between urban impervious surfaces and other land covers with well-known spectral confusions (e.g., bare soil and water), and particularly provides satisfactory representations of urban fringe areas and small settlements. HOBSMA yields promising abundance results at the km-level scale with relatively high precision and small bias, considerably outperforming the traditional simple mixing model and the aggregated MODIS land cover classification product.
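The linear unmixing step at the core of any SMA pipeline can be sketched as follows. This is the generic building block, not HOBSMA or MESMA themselves; the endmember spectra and the sum-to-one weighting trick are illustrative assumptions.

```python
import numpy as np

def unmix_fractions(pixel, endmembers, weight=1000.0):
    """Sum-to-one constrained linear unmixing by weighted least squares.

    pixel: (bands,) observed reflectance vector.
    endmembers: (bands, k) matrix whose columns are endmember spectra.
    The sum-to-one constraint is enforced approximately by appending a
    heavily weighted extra equation sum(fractions) = 1.
    """
    bands, k = endmembers.shape
    A = np.vstack([endmembers, weight * np.ones((1, k))])
    b = np.append(pixel, weight)
    fractions, *_ = np.linalg.lstsq(A, b, rcond=None)
    return fractions
```

MESMA extends this by trying multiple candidate endmember sets per pixel and keeping the best-fitting model, which is how HOBSMA accommodates endmember variability.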
Fatigue life estimation for different notched specimens based on the volumetric approach
Directory of Open Access Journals (Sweden)
Esmaeili F.
2010-06-01
Full Text Available In this paper, the effects of notch radius in different notched specimens on the values of the stress concentration factor, the notch strength reduction factor, and the fatigue life of the specimens have been studied. The material selected for this investigation is Al 2024-T3. The volumetric approach has been applied to obtain the values of the notch strength reduction factor, and the results have been compared with those obtained from the Neuber and Peterson methods. Load-controlled fatigue tests of the specimens were conducted on a 250 kN servo-hydraulic Zwick/Amsler fatigue testing machine at a frequency of 10 Hz. The fatigue lives of the specimens were also predicted based on the available smooth S-N curve of Al 2024-T3 and the notch strength reduction factors obtained from the volumetric, Neuber and Peterson methods. The values of stress and strain around the notch roots are required to predict the fatigue life of notched specimens, so the ANSYS finite element code was used and nonlinear analyses were performed to obtain the stress and strain distributions around the notches. The plastic deformation of the material was simulated using multi-linear kinematic hardening and the cyclic stress-strain relation. The work here shows that the volumetric approach does a very good job of predicting the fatigue life of the notched specimens.
Directory of Open Access Journals (Sweden)
Gildeberto S. Cardoso
2011-01-01
Full Text Available This paper presents a study of linear control systems based on exact feedback linearization and approximate feedback linearization. When exact feedback linearization is applied, a linear controller can achieve the control objectives. Approximate feedback linearization is required when a nonlinear system has a noninvolutive property; it uses a Taylor series expansion to compute a nonlinear transformation of coordinates that satisfies the involutivity conditions.
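The idea of exact feedback linearization is easiest to see in the scalar, relative-degree-one case, sketched below; the particular nonlinearity f(x) = x^2 used in the usage note is a hypothetical example, not from the paper.

```python
def feedback_linearizing_control(f, g, x, v):
    """For the scalar system x' = f(x) + g(x) u, the control law
    u = (v - f(x)) / g(x) cancels the nonlinearity exactly, giving the
    linear closed loop x' = v, provided g(x) != 0."""
    return (v - f(x)) / g(x)
```

For example, with f(x) = x^2 and g(x) = 1, choosing the new input v = -k*x turns the closed loop into the stable linear system x' = -k*x, so any linear design technique (pole placement, LQR) can then be applied to v. The Taylor-series approach mentioned above is needed precisely when no exact transformation of this kind exists.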
Blockwise conjugate gradient methods for image reconstruction in volumetric CT.
Qiu, W; Titley-Peloquin, D; Soleimani, M
2012-11-01
Cone beam computed tomography (CBCT) enables volumetric image reconstruction from 2D projection data and plays an important role in image guided radiation therapy (IGRT). Filtered back projection is still the most frequently used algorithm in applications. Algebraic methods instead discretize the scanning process (forward projection) into a system of linear equations, which must then be solved to recover images from measured projection data. The conjugate gradient (CG) algorithm and its variants can be used to solve (possibly regularized) linear systems of equations Ax = b and linear least squares problems min_x ‖b − Ax‖_2, especially when the matrix A is very large and sparse. Their applications can be found in a general CT context, but in tomography problems (e.g. CBCT reconstruction) they have not been widely used. Hence, CBCT reconstruction using the CG-type algorithm LSQR was implemented and studied in this paper. In CBCT reconstruction, the main computational challenge is that the matrix A usually is very large, and storing it in full requires an amount of memory well beyond the reach of commodity computers. Because of these memory capacity constraints, only a small fraction of the weighting matrix A is typically used, leading to a poor reconstruction. In this paper, to overcome this difficulty, the matrix A is partitioned and stored blockwise, and blockwise matrix-vector multiplications are implemented within LSQR. This implementation allows us to use the full weighting matrix A for CBCT reconstruction without further enhancing computer standards. Tikhonov regularization can also be implemented in this fashion and can produce significant improvement in the reconstructed images.
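The blockwise trick works because LSQR touches A only through the products A x (forward projection) and A^T y (back projection), so row blocks of A can be streamed one at a time. A minimal sketch of the two blockwise products, with small dense NumPy blocks standing in for the large sparse on-disk blocks of a real system:

```python
import numpy as np

def block_matvec(blocks, x):
    """Forward projection: stack the row-block products A_i @ x."""
    return np.concatenate([B @ x for B in blocks])

def block_rmatvec(blocks, y):
    """Back projection: accumulate A_i^T @ y_i block by block, so only
    one block of A needs to be resident in memory at a time."""
    n = blocks[0].shape[1]
    out = np.zeros(n)
    i = 0
    for B in blocks:
        m = B.shape[0]
        out += B.T @ y[i:i + m]
        i += m
    return out
```

Wrapping these two callbacks in an operator interface (e.g. SciPy's `LinearOperator`) lets a standard LSQR implementation run against the full weighting matrix without ever forming it in memory.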
Disentangling volumetric and hydrational properties of proteins.
Voloshin, Vladimir P; Medvedev, Nikolai N; Smolin, Nikolai; Geiger, Alfons; Winter, Roland
2015-02-05
We used molecular dynamics simulations of a typical monomeric protein, SNase, in combination with Voronoi-Delaunay tessellation to study and analyze the temperature dependence of the apparent volume, Vapp, of the solute. We show that the void volume, VB, created in the boundary region between solute and solvent, determines the temperature dependence of Vapp to a major extent. The less pronounced but still significant temperature dependence of the molecular volume of the solute, VM, is essentially the result of the expansivity of its internal voids, as the van der Waals contribution to VM is practically independent of temperature. Results for polypeptides of different chemical nature feature a similar temperature behavior, suggesting that the boundary/hydration contribution seems to be a universal part of the temperature dependence of Vapp. The results presented here shed new light on the discussion surrounding the physical basis for understanding and decomposing the volumetric properties of proteins and biomolecules in general.
All Photons Imaging Through Volumetric Scattering
Satat, Guy; Heshmat, Barmak; Raviv, Dan; Raskar, Ramesh
2016-01-01
Imaging through thick highly scattering media (sample thickness ≫ mean free path) can realize broad applications in biomedical and industrial imaging as well as remote sensing. Here we propose a computational "All Photons Imaging" (API) framework that utilizes time-resolved measurement for imaging through thick volumetric scattering by using both early arriving (non-scattered) and diffused photons. As opposed to other methods which aim to lock on to specific photons (coherent, ballistic, acoustically modulated, etc.), this framework aims to use all of the optical signal. Compared to conventional early photon measurements for imaging through a 15 mm tissue phantom, our method shows a twofold improvement in spatial resolution (4 dB increase in peak SNR). This all-optical, calibration-free framework enables widefield imaging through thick turbid media, and opens new avenues in non-invasive testing, analysis, and diagnosis. PMID:27683065
Energy Technology Data Exchange (ETDEWEB)
Wiedemann, H.
1981-11-01
Since no linear colliders have been built yet, it is difficult to know at what energy the linear cost scaling of linear colliders drops below the quadratic scaling of storage rings. There is, however, no doubt that a linear collider facility for a center-of-mass energy above, say, 500 GeV is significantly cheaper than an equivalent storage ring. In order to make the linear collider principle feasible at very high energies, a number of problems have to be solved. The problems are of two kinds: one related to the feasibility of the principle, the other associated with minimizing the cost of constructing and operating such a facility. This lecture series describes the problems and possible solutions. Since the real test of a principle requires the construction of a prototype, in the last chapter I describe the SLC project at the Stanford Linear Accelerator Center.
Shilov, Georgi E
1977-01-01
Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.
Directory of Open Access Journals (Sweden)
Agarwalla Arun
2001-01-01
Full Text Available Linear psoriasis, inflammatory linear verrucous epidermal naevus (ILVEN), lichen striatus, linear lichen planus and invasion of epidermal naevi by psoriasis have clinical and histopathological overlap. We report two young male patients with true linear psoriasis, proved histopathologically, without classical lesions elsewhere. Seasonal variation and a good response to topical antipsoriatic treatment supported the diagnosis.
A Technique for Volumetric CSG Based on Morphology
DEFF Research Database (Denmark)
Bærentzen, Jakob Andreas; Christensen, Niels Jørgen
2001-01-01
In this paper, a new technique for volumetric CSG is presented. The technique requires the input volumes to correspond to solids which fulfill a voxelization suitability criterion. Assume the CSG operation is union. The volumetric union of two such volumes is defined in terms of the voxelization...
Volumetric microscale particle tracking velocimetry (PTV) in porous media
Guo, Tianqi; Aramideh, Soroush; Ardekani, Arezoo M.; Vlachos, Pavlos P.
2016-11-01
The steady-state flow through refractive-index-matched glass bead microchannels is measured using microscopic particle tracking velocimetry (μPTV). A novel technique is developed to volumetrically reconstruct particles from oversampled two-dimensional microscopic images of fluorescent particles. Fast oversampling of the quasi-steady-state flow field in the lateral direction is realized by a nano-positioning piezo stage synchronized with a fast CMOS camera. Experiments at different Reynolds numbers are carried out for flows through a series of both monodispersed and bidispersed glass bead microchannels with various porosities. The obtained velocity fields at pore-scale (on the order of 10 μm) are compared with direct numerical simulations (DNS) conducted in the exact same geometries reconstructed from micro-CT scans of the glass bead microchannels. The developed experimental method would serve as a new approach for exploring the flow physics at pore-scale in porous media, and also provide benchmark measurements for validation of numerical simulations.
Luo, Houding; Peng, Ming; Ye, Haoyu; Chen, Lijuan; Peng, Aihua; Tang, Minghai; Zhang, Fan; Shi, Jie
2010-07-15
This paper describes how distribution ratios were used to predict peak elution in analytical high-performance counter-current chromatography (HPCCC), in order to explore a method for the separation and purification of bioactive compounds from the roots of Menispermum dauricum. Important parameters for HPCCC separations, including the solvent system, sample concentration, sample loading volume and flow rate, were optimized on an analytical Mini-DE HPCCC instrument and then linearly scaled up to a preparative Midi-DE HPCCC instrument with nearly the same resolution and separation time. Four phenolic alkaloids were obtained for the first time by HPCCC separation with a two-phase solvent system composed of petroleum ether-ethyl acetate-ethanol-water (1:2:1:2, v/v). This process produced 131.3 mg of daurisolin, 197.1 mg of dauricine, 32.4 mg of daurinoline and 14.7 mg of dauricicoline, with purities of 97.6%, 96.4%, 97.2% and 98.3%, respectively, from 500 mg of crude extract of the roots of M. dauricum in a one-step separation. The purities of the compounds were determined by high-performance liquid chromatography (HPLC). Their structures were identified by electrospray ionization mass spectrometry (ESI-MS) and nuclear magnetic resonance (NMR).
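The peak-elution prediction and linear scale-up rest on two standard counter-current chromatography relations, sketched below; the numerical values in the test are illustrative, not the instrument volumes used in the paper.

```python
def ccc_retention_volume(v_mobile, v_stationary, k_d):
    """Classical CCC retention relation: V_R = V_m + K_D * V_s,
    where K_D is the distribution ratio of the solute between phases.
    Measuring K_D (e.g. by shake-flask partitioning) thus predicts
    where each compound elutes before any run is performed."""
    return v_mobile + k_d * v_stationary

def linear_scale_up_factor(prep_column_volume, analytical_column_volume):
    """Linear (volumetric) scale-up: sample loads and flow rates are
    multiplied by the ratio of preparative to analytical column volume,
    preserving resolution and separation time."""
    return prep_column_volume / analytical_column_volume
```

Because both relations are linear in the column volume, a separation optimized on the analytical instrument transfers to the preparative one by a single multiplicative factor, which is why the resolutions and run times above are nearly unchanged.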
Ishikawa, Takashi; Nishimichi, Takahiro; Takahashi, Ryuichi; Yoshida, Naoki; Tonegawa, Motonari
2013-01-01
Redshift space distortion (RSD) observed in galaxy redshift surveys is a powerful tool to test gravity theories on cosmological scales, but the systematic uncertainties must carefully be examined for future surveys with large statistics. Here we employ various analytic models of RSD and estimate the systematic errors on measurements of the structure growth-rate parameter, f\\sigma_8, induced by non-linear effects and the halo bias with respect to the dark matter distribution, by using halo catalogues from 40 realisations of 3.4 \\times 10^8 comoving h^{-3}Mpc^3 cosmological N-body simulations. We consider hypothetical redshift surveys at redshifts z=0.5, 1.35 and 2, and different minimum halo mass thresholds in the range of 5.0 \\times 10^{11} -- 2.0 \\times 10^{13} h^{-1} M_\\odot. We find that the systematic error of f\\sigma_8 is greatly reduced to ~4 per cent level, when a recently proposed analytical formula of RSD that takes into account the higher-order coupling between the density and velocity fields is ado...
Directory of Open Access Journals (Sweden)
Souverein Olga W
2012-04-01
Background To derive micronutrient recommendations in a scientifically sound way, it is important to obtain and analyse all published information on the association between micronutrient intake and biochemical proxies for micronutrient status using a systematic approach. Therefore, it is important to incorporate information from randomized controlled trials as well as observational studies as both of these provide information on the association. However, original research papers present their data in various ways. Methods This paper presents a methodology to obtain an estimate of the dose–response curve, assuming a bivariate normal linear model on the logarithmic scale, incorporating a range of transformations of the original reported data. Results The simulation study, conducted to validate the methodology, shows that there is no bias in the transformations. Furthermore, it is shown that when the original studies report the mean and standard deviation or the geometric mean and confidence interval the results are less variable compared to when the median with IQR or range is reported in the original study. Conclusions The presented methodology with transformations for various reported data provides a valid way to estimate the dose–response curve for micronutrient intake and status using both randomized controlled trials and observational studies.
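The transformations described above can be sketched briefly. Under a lognormal assumption, a reported natural-scale mean and SD (or a geometric mean with its confidence interval) map exactly onto the log-scale parameters the model needs; the function names and sample values below are illustrative, not taken from the original study.

```python
import math

def natural_to_log_scale(mean, sd):
    """Convert a reported natural-scale mean and SD to log-scale
    parameters, assuming the underlying variable is lognormal."""
    sigma2 = math.log(1.0 + (sd / mean) ** 2)
    mu = math.log(mean) - sigma2 / 2.0
    return mu, math.sqrt(sigma2)

def geometric_to_log_scale(gm, ci_low, ci_high, n, z=1.96):
    """Convert a reported geometric mean and 95% CI to log-scale
    mean and SD (CI assumed symmetric on the log scale)."""
    mu = math.log(gm)
    se = (math.log(ci_high) - math.log(ci_low)) / (2.0 * z)
    return mu, se * math.sqrt(n)

# Example: a hypothetical study reporting intake mean 12.0 and SD 6.0
mu, sigma = natural_to_log_scale(12.0, 6.0)
```

These relations are exact for a lognormal variable, which is why the paper finds mean/SD and geometric-mean/CI reports less variable than median/IQR reports, for which only approximate conversions exist.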
Xie, Jiazhuo; Zhang, Kun; Zhao, Qinghua; Wang, Qingguo; Xu, Jing
2016-11-01
A novel LDH intercalated with a long-chain aliphatic organic anion was synthesized on a large scale by one-pot high-energy ball milling. Linear low-density polyethylene (LLDPE)/layered double hydroxide (LDH) composite films with enhanced heat retention and improved thermal, mechanical, optical and water vapor barrier properties were fabricated by melt blending and a film-blowing process. FT-IR, XRD and SEM results show that the LDH particles were dispersed uniformly in the LLDPE composite films. In particular, the LLDPE composite film with 1% LDH exhibited the best performance among all the composite films, with a 60.36% enhancement in the water vapor barrier property and a 45.73 °C increase in the temperature of maximum mass-loss rate compared with the pure LLDPE film. Furthermore, the improved infrared absorbance (1180-914 cm-1) of the LLDPE/LDH films revealed a significant enhancement of heat retention. This study therefore supports the application of LLDPE/LDH films as agricultural films with superior heat retention.
Borbulevych, Oleg Y; Plumley, Joshua A; Martin, Roger I; Merz, Kenneth M; Westerhoff, Lance M
2014-05-01
Macromolecular crystallographic refinement relies on sometimes dubious stereochemical restraints and rudimentary energy functionals to ensure the correct geometry of the model of the macromolecule and any covalently bound ligand(s). The ligand stereochemical restraint file (CIF) requires a priori understanding of the ligand geometry within the active site, and creation of the CIF is often an error-prone process owing to the great variety of potential ligand chemistry and structure. Stereochemical restraints have been replaced with more robust functionals through the integration of the linear-scaling, semiempirical quantum-mechanics (SE-QM) program DivCon with the PHENIX X-ray refinement engine. The PHENIX/DivCon package has been thoroughly validated on a population of 50 protein-ligand Protein Data Bank (PDB) structures with a range of resolutions and chemistry. The PDB structures used for the validation were originally refined utilizing various refinement packages and were published within the past five years. PHENIX/DivCon does not utilize CIF(s), link restraints and other parameters for refinement and hence it does not make as many a priori assumptions about the model. Across the entire population, the method results in reasonable ligand geometries and low ligand strains, even when the original refinement exhibited difficulties, indicating that PHENIX/DivCon is applicable to both single-structure and high-throughput crystallography.
Energy Technology Data Exchange (ETDEWEB)
Borbulevych, Oleg Y.; Plumley, Joshua A.; Martin, Roger I. [QuantumBio Inc., 2790 West College Avenue, State College, PA 16801 (United States); Merz, Kenneth M. Jr [University of Florida, Gainesville, Florida (United States); Westerhoff, Lance M., E-mail: lance@quantumbioinc.com [QuantumBio Inc., 2790 West College Avenue, State College, PA 16801 (United States)
2014-05-01
Semiempirical quantum-chemical X-ray macromolecular refinement using the program DivCon integrated with PHENIX is described.
Azhari, Budi; Prawinnetou, Wassy; Hutama, Dewangga Adhyaksa
2017-03-01
Indonesia has several potential ocean energy resources. One of them is tidal wave energy, whose potential is about 49 GW. To convert tidal wave energy to electricity, a linear permanent magnet generator (LPMG) is considered the most suitable device. In this paper, a pico-scale tidal wave power converter was designed using a quasi-flat LPMG. The generator was intended for the southern coast of Yogyakarta, Indonesia and was expected to generate a 1 kW output. First, a quasi-flat LPMG was designed based on the expected output power and the wave characteristics at the deployment site. The design was then simulated using the finite element software FEMM. Finally, the output values were calculated and the output characteristics were analyzed. The results showed that the designed power plant was able to produce an output power of 725.78 Wp per phase, with an electrical efficiency of 64.5%. Regarding the output characteristics of the LPMG, the output power increases as the average wave height or wave period increases, and the efficiency increases as the external load resistance increases, while the output power is maximized at a load resistance of 11 Ω.
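The reported load behavior (power peaking at a particular load resistance while efficiency keeps rising) is the classic maximum-power-transfer trade-off for a generator with internal winding resistance. The sketch below illustrates it with a simple per-phase circuit model; the EMF and internal resistance values are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Hypothetical per-phase model: EMF amplitude E behind an internal
# winding resistance R_int (both values illustrative).
E = 200.0      # V, per-phase EMF
R_int = 11.0   # ohm, internal resistance

R_load = np.linspace(1.0, 40.0, 400)           # external load sweep
P_out = E**2 * R_load / (R_int + R_load)**2    # power delivered to load
eff = R_load / (R_int + R_load)                # electrical efficiency

# Power peaks where the load matches the internal resistance,
# while efficiency increases monotonically with the load.
best = R_load[np.argmax(P_out)]
```

In this simple model the power maximum falls at R_load = R_int, consistent with the paper's observation of a maximum near 11 Ω.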
Kussmann, Jörg; Ochsenfeld, Christian
2007-11-28
A density matrix-based time-dependent self-consistent field (D-TDSCF) method for the calculation of dynamic polarizabilities and first hyperpolarizabilities using the Hartree-Fock and Kohn-Sham density functional theory approaches is presented. The D-TDSCF method allows us to reduce the asymptotic scaling behavior of the computational effort from cubic to linear for systems with a nonvanishing band gap. The linear scaling is achieved by combining a density matrix-based reformulation of the TDSCF equations with linear-scaling schemes for the formation of Fock- or Kohn-Sham-type matrices. In our reformulation only potentially linear-scaling matrices enter the formulation and efficient sparse algebra routines can be employed. Furthermore, the corresponding formulas for the first hyperpolarizabilities are given in terms of zeroth- and first-order one-particle reduced density matrices according to Wigner's (2n+1) rule. The scaling behavior of our method is illustrated for first exemplary calculations with systems of up to 1011 atoms and 8899 basis functions.
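The linear scaling rests on the fact that, for systems with a band gap, Fock- or Kohn-Sham-type matrices become sparse, so storage and matrix-vector cost grow linearly with size. The toy sketch below (not the D-TDSCF implementation, just an illustration of the sparse-algebra argument) builds a banded "Fock-like" matrix whose stored-element count doubles when the system doubles.

```python
import numpy as np
from scipy import sparse

def banded_fock_like(n, bandwidth=5, seed=0):
    """Toy sparse 'Fock-like' matrix: couplings decay with distance
    and are truncated beyond a fixed bandwidth, mimicking the decay
    found in gapped systems."""
    rng = np.random.default_rng(seed)
    diags = [rng.standard_normal(n - abs(k)) * np.exp(-abs(k))
             for k in range(-bandwidth, bandwidth + 1)]
    return sparse.diags(diags, range(-bandwidth, bandwidth + 1), format="csr")

sizes = [200, 400, 800]
nnz = [banded_fock_like(n).nnz for n in sizes]
# Stored elements (and hence matvec cost) double when the system doubles:
ratios = [nnz[i + 1] / nnz[i] for i in range(2)]
```

A dense formulation would instead quadruple the element count at each doubling, which is the cubic-to-linear gap the paper exploits.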
GPU-based Scalable Volumetric Reconstruction for Multi-view Stereo
Energy Technology Data Exchange (ETDEWEB)
Kim, H; Duchaineau, M; Max, N
2011-09-21
We present a new scalable volumetric reconstruction algorithm for multi-view stereo using a graphics processing unit (GPU). It is an effectively parallelized GPU algorithm that simultaneously uses a large number of GPU threads, each of which performs voxel carving, in order to integrate depth maps with images from multiple views. Each depth map, triangulated from pair-wise semi-dense correspondences, represents a view-dependent surface of the scene. This algorithm also provides scalability for large-scale scene reconstruction in a high resolution voxel grid by utilizing streaming and parallel computation. The output is a photo-realistic 3D scene model in a volumetric or point-based representation. We demonstrate the effectiveness and the speed of our algorithm with a synthetic scene and real urban/outdoor scenes. Our method can also be integrated with existing multi-view stereo algorithms such as PMVS2 to fill holes or gaps in textureless regions.
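The per-voxel carving test that each GPU thread performs can be sketched on the CPU. The version below is a deliberately simplified stand-in: it assumes orthographic depth maps along two axes rather than the paper's calibrated multi-view projections, and carves every voxel lying in front of an observed surface.

```python
import numpy as np

def carve(depth_x, depth_y, shape, tol=0.5):
    """Minimal orthographic voxel carving: a voxel survives only if it
    is not closer to a camera than that camera's measured depth.
    depth_x[j, k]: depth looking along axis 0; depth_y[i, k]: along axis 1."""
    occ = np.ones(shape, dtype=bool)
    I, J, K = np.indices(shape)
    # Carve voxels lying in front of the observed surface in each view.
    occ &= I >= depth_x[J, K] - tol
    occ &= J >= depth_y[I, K] - tol
    return occ

# A flat surface at depth 4 in both views leaves a cuboid of voxels.
d = np.full((8, 8), 4.0)
occ = carve(d, d, (8, 8, 8))
```

In the real system each voxel is tested against triangulated, view-dependent depth maps from calibrated cameras, and the grid is streamed in chunks for scalability; the carving predicate itself is what parallelizes trivially across GPU threads.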
The effect of volumetric (3D) tactile symbols within inclusive tactile maps.
Gual, Jaume; Puyuelo, Marina; Lloveras, Joaquim
2015-05-01
Point, linear and areal elements, which are two-dimensional and of a graphic nature, are the morphological elements employed when designing tactile maps and symbols for visually impaired users. However, beyond the two-dimensional domain, there is a fourth group of elements - volumetric elements - which mapmakers do not take sufficiently into account when it comes to designing tactile maps and symbols. This study analyses the effect of including volumetric, or 3D, symbols within a tactile map. In order to do so, the researchers compared two tactile maps. One of them uses only two-dimensional elements and is produced using thermoforming, one of the most popular systems in this field, while the other includes volumetric symbols, thus highlighting the possibilities opened up by 3D printing, a new area of production. The results of the study show that including 3D symbols improves the efficiency and autonomous use of these products. Copyright © 2014 Elsevier Ltd and The Ergonomics Society. All rights reserved.
Verbal Memory Decline following DBS for Parkinson’s Disease: Structural Volumetric MRI Relationships
Geevarghese, Ruben; Lumsden, Daniel E.; Costello, Angela; Hulse, Natasha; Ayis, Salma; Samuel, Michael; Ashkan, Keyoumars
2016-01-01
Background Parkinson’s disease is a chronic degenerative movement disorder. The mainstay of treatment is medical. In certain patients Deep Brain Stimulation (DBS) may be offered. However, DBS has been associated with post-operative neuropsychology changes, especially in verbal memory. Objectives Firstly, to determine if pre-surgical thalamic and hippocampal volumes were related to verbal memory changes following DBS. Secondly, to determine if clinical factors such as age, duration of symptoms or motor severity (UPDRS Part III score) were related to verbal memory changes. Methods A consecutive group of 40 patients undergoing bilateral Subthalamic Nucleus (STN)-DBS for PD were selected. Brain MRI data was acquired, pre-processed and structural volumetric data was extracted using FSL. Verbal memory test scores for pre- and post-STN-DBS surgery were recorded. Linear regression was used to investigate the relationship between score change and structural volumetric data. Results A significant relationship was demonstrated between change in List Learning test score and thalamic (left, p = 0.02) and hippocampal (left, p = 0.02 and right p = 0.03) volumes. Duration of symptoms was also associated with List Learning score change (p = 0.02 to 0.03). Conclusion Verbal memory score changes appear to have a relationship to pre-surgical MRI structural volumetric data. The findings of this study provide a basis for further research into the use of pre-surgical MRI to counsel PD patients regarding post-surgical verbal memory changes. PMID:27557088
Sideridou, Irini D; Karabela, Maria M; Vouvoudi, Evagelia Ch
2008-08-01
This study evaluated the influence of water and ethanol sorption on the volumetric dimensional changes of resins prepared by light curing of Bis-GMA, Bis-EMA, UDMA, TEGDMA or D(3)MA. The resin specimens (15 mm diameter × 1 mm height) were immersed in water or ethanol at 37 ± 1 °C for 30 days. Volumetric changes of the specimens were obtained via accurate mass measurements using the Archimedes principle. The specimens were reconditioned by dry storage in an oven at 37 ± 1 °C until constant mass was obtained and then immersed in water or ethanol for 30 days. The volumetric changes of the specimens were determined and compared to those obtained from the first sorption. Resins showed similar volume increases during the first and second sorptions of water or ethanol. The volume increase due to water absorption is in the following order: poly-TEGDMA > poly-Bis-GMA > poly-UDMA > poly-Bis-EMA > poly-D(3)MA. In ethanol, by contrast, the order is poly-Bis-GMA > poly-UDMA > poly-TEGDMA > poly-Bis-EMA ≈ poly-D(3)MA. The volume increase was found to depend linearly on the amount of water or ethanol absorbed. When choosing monomers for the preparation of a composite resin matrix, the volume increase of the resin after immersion in water or ethanol must be taken into account. Resins of Bis-EMA and D(3)MA showed the lowest values.
A new contrast-assisted method in microcirculation volumetric flow assessment
Lu, Sheng-Yi; Chen, Yung-Sheng; Yeh, Chih-Kuang
2007-03-01
Microcirculation volumetric flow rate is a significant index in the diagnosis and treatment of diseases such as diabetes and cancer. In this study, we propose an integrated algorithm to assess microcirculation volumetric flow rate, including estimation of the blood-perfused area and the corresponding flow velocity maps, based on a high-frequency destruction/contrast-replenishment imaging technique. The perfused area indicates the blood flow regions, including capillaries, arterioles and venules. From the echo variance between the pre- and post-destruction images of ultrasonic contrast agents (UCAs), the perfused area can be estimated by a correlation-based approach. The flow velocity distribution within the perfused area can be estimated from the refilling time-intensity curves (TICs) after UCA destruction. Most studies use the rising exponential model proposed by Wei (1998) to fit the TICs. However, we found that the TIC profile closely resembles a sigmoid function in simulations and in vitro experimental results. The good fitting correlation indicates that the sigmoid model describes the destruction/contrast-replenishment phenomenon more accurately. We derived that the saddle point of the sigmoid model is proportional to blood flow velocity. A strong linear relationship (R = 0.97) between the actual flow velocities (0.4-2.1 mm/s) and the estimated saddle constants was found in M-mode and B-mode flow phantom experiments. Potential applications of this technique include high-resolution volumetric flow rate assessment in small-animal tumors and the evaluation of superficial vasculature in clinical studies.
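Fitting a sigmoid to a replenishment TIC can be sketched as follows. The synthetic curve and parameter values are illustrative; in the study the fitted inflection ("saddle") parameter is the quantity found to scale linearly with flow velocity.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_tic(t, A, k, t0):
    """Sigmoid time-intensity curve: A = plateau intensity,
    t0 = inflection ('saddle') time, k = refill rate."""
    return A / (1.0 + np.exp(-k * (t - t0)))

# Synthetic replenishment curve with measurement noise (illustrative).
rng = np.random.default_rng(1)
t = np.linspace(0, 20, 200)                       # seconds after destruction
y = sigmoid_tic(t, 1.0, 0.8, 6.0) + 0.02 * rng.standard_normal(t.size)

(A, k, t0), _ = curve_fit(sigmoid_tic, t, y, p0=(1.0, 0.5, 5.0))
# Faster flow refills the destroyed region sooner, shifting t0 earlier;
# per the study, the saddle parameter is proportional to flow velocity.
```

The same fit with a rising-exponential model would force the curve through its steepest slope at t = 0, which is why the sigmoid tracks the measured replenishment more faithfully.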
Guo, Yang; Sivalingam, Kantharuban; Valeev, Edward F.; Neese, Frank
2016-03-01
Multi-reference (MR) electronic structure methods, such as MR configuration interaction or MR perturbation theory, can provide reliable energies and properties for many molecular phenomena like bond breaking, excited states, transition states or magnetic properties of transition metal complexes and clusters. However, owing to their inherent complexity, most MR methods are still too computationally expensive for large systems. Therefore the development of more computationally attractive MR approaches is necessary to enable routine application for large-scale chemical systems. Among the state-of-the-art MR methods, second-order N-electron valence state perturbation theory (NEVPT2) is an efficient, size-consistent, and intruder-state-free method. However, there are still two important bottlenecks in practical applications of NEVPT2 to large systems: (a) the high computational cost of NEVPT2 for large molecules, even with moderate active spaces and (b) the prohibitive cost for treating large active spaces. In this work, we address problem (a) by developing a linear scaling "partially contracted" NEVPT2 method. This development uses the idea of domain-based local pair natural orbitals (DLPNOs) to form a highly efficient algorithm. As shown previously in the framework of single-reference methods, the DLPNO concept leads to an enormous reduction in computational effort while at the same time providing high accuracy (approaching 99.9% of the correlation energy), robustness, and black-box character. In the DLPNO approach, the virtual space is spanned by pair natural orbitals that are expanded in terms of projected atomic orbitals in large orbital domains, while the inactive space is spanned by localized orbitals. The active orbitals are left untouched. Our implementation features a highly efficient "electron pair prescreening" that skips the negligible inactive pairs. The surviving pairs are treated using the partially contracted NEVPT2 formalism. A detailed comparison
Gas sorption and the consequent volumetric and permeability change of coal
Lin, Wenjuan
in the injection gas, the greater the amount of total adsorption. Volumetric strain followed the same trend as the amount of adsorption with pressure and injection gas composition. Permeability showed the opposite behavior, decreasing with the increase of pressure and the percentage of CO2 in the injection gas. The experimental adsorption, volumetric strain, and permeability data were analyzed to investigate the numerical correlations between gas sorption, sorption-induced volumetric strain and permeability, and pressure and injection gas composition. The relationship between the amount of adsorption and pressure for pure gases (CO2 and N2) was readily represented by parametric isotherm models, such as the Langmuir and N-layer BET equations. Modeling efforts for multicomponent adsorption included predicting the amount of adsorption and the adsorbed-phase composition based on the extended Langmuir equations and the ideal adsorbed solution model. Activity coefficients of the components in the adsorbed phase were computed based on the real adsorbed solution model and the ABC excess Gibbs free energy model. Algorithms for modeling the CO2/N2-coal system were developed, and the constraints and strengths of each model were discussed. The experimental volumetric strain was found to be linearly proportional to the total amount of adsorption and independent of the injection gas composition. The permeability reduction could not be readily correlated by the models in the literature unless the change of other coal properties (bulk modulus, axial constrained modulus, etc.) due to gas sorption was incorporated. The sorption, volumetric strain, and permeability data collected in this study can be used for comparison by other researchers conducting similar studies. The algorithms of sorption modeling and the correlations developed in this study are readily incorporated into the simulation of enhanced coalbed methane recovery and CO2 sequestration in coalbeds. (Abstract shortened by UMI.)
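The pure-gas isotherm fitting mentioned above can be sketched with the standard Langmuir form, q = q_max·b·p / (1 + b·p). The pressure and adsorption values below are synthetic and illustrative, not measurements from the thesis.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(p, q_max, b):
    """Langmuir isotherm: adsorbed amount as a function of pressure."""
    return q_max * b * p / (1.0 + b * p)

# Synthetic pure-gas isotherm data (units and values illustrative).
p = np.array([0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])      # MPa
q = langmuir(p, 30.0, 0.4) + np.array(
    [0.2, -0.1, 0.3, -0.2, 0.1, -0.3, 0.2])             # cm3/g, with noise

(q_max, b), _ = curve_fit(langmuir, p, q, p0=(20.0, 0.1))
# Per the study, sorption-induced volumetric strain can then be taken
# as linearly proportional to the fitted amount adsorbed.
```

For mixtures, the extended Langmuir form replaces the denominator with 1 + Σᵢ bᵢpᵢ, which is the multicomponent model the study compares against the ideal adsorbed solution approach.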
Barbu, N.; Cuculeanu, V.; Stefan, S.
2016-10-01
The aim of this study is to investigate the relationship between the frequency of very warm days (TX90p) in Romania and large-scale atmospheric circulation for winter (December-February) and summer (June-August) between 1962 and 2010. To achieve this, two catalogues from COST733Action were used to derive daily circulation types. Seasonal occurrence frequencies of the circulation types were calculated and used as predictors within a multiple linear regression model (MLRM) for the estimation of winter and summer TX90p values at 85 synoptic stations covering all of Romania. A forward selection procedure was used to find adequate predictor combinations, and those combinations were tested for collinearity. The performance of the MLRMs was quantified by the explained variance. Furthermore, a leave-one-out cross-validation procedure was applied and the root-mean-squared error skill score was calculated at station level to obtain reliable evidence of MLRM robustness. From this analysis, it can be stated that the MLRM performance is higher in winter than in summer. This is due to the annual cycle of incoming insolation and to local factors such as orography and surface albedo variations. The MLRM performances exhibit distinct regional variations, with high performance in wintertime for the eastern and southern parts of the country and in summertime for the western part of the country. One can conclude that the MLRM generally captures the TX90p variability quite well and reveals the potential for statistical downscaling of TX90p values based on circulation types.
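The validation loop described above (least-squares regression, leave-one-out cross-validation, RMSE skill score against a climatological reference) can be sketched compactly. The predictor counts, coefficients, and noise level below are synthetic stand-ins, not values from the study.

```python
import numpy as np

def loocv_rmsess(X, y):
    """Fit y = X b by least squares; return the leave-one-out
    RMSE skill score relative to a climatological (mean) forecast."""
    n = X.shape[0]
    errs = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        b, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        errs[i] = y[i] - X[i] @ b
    rmse = np.sqrt(np.mean(errs**2))
    rmse_ref = np.sqrt(np.mean((y - y.mean())**2))
    return 1.0 - rmse / rmse_ref   # 1 = perfect, 0 = no skill

# Synthetic example: a TX90p-like predictand driven by two
# circulation-type frequencies plus noise (all values illustrative).
rng = np.random.default_rng(0)
n = 49                                          # seasons 1962-2010
F = rng.uniform(0, 30, size=(n, 2))             # type frequencies (%)
X = np.column_stack([np.ones(n), F])            # intercept + predictors
y = 5.0 + 0.3 * F[:, 0] - 0.2 * F[:, 1] + rng.normal(0, 1.0, n)

score = loocv_rmsess(X, y)   # positive score = skill over climatology
```

Forward selection would wrap this scoring in a loop that greedily adds whichever remaining circulation-type frequency most improves the cross-validated score.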
Xue, Hong-Tao; Boschetto, Gabriele; Krompiec, Michal; Morse, Graham E; Tang, Fu-Ling; Skylaris, Chris-Kriton
2017-02-15
In this work, the crystal properties, HOMO and LUMO energies, band gaps, density of states, as well as the optical absorption spectra of fullerene C60 and its derivative phenyl-C61-butyric-acid-methyl-ester (PCBM) co-crystallised with various solvents such as benzene, biphenyl, cyclohexane, and chlorobenzene were investigated computationally using linear-scaling density functional theory with plane waves as implemented in the ONETEP program. Such solvates are useful materials as electron acceptors for organic photovoltaic (OPV) devices. We found that the fullerene parts contained in the solvates are unstable without solvents, and the interactions between fullerene and solvent molecules in C60 and PCBM solvates make a significant contribution to the cohesive energies of solvates, indicating that solvent molecules are essential to keep C60 and PCBM solvates stable. Both the band gap (Eg) and the HOMO and LUMO states of C60 and PCBM solvates are mainly determined by the fullerene parts contained in solvates. Chlorobenzene- and ortho-dichlorobenzene-solvated PCBM are the most promising electron-accepting materials among these solvates for increasing the driving force for charge separation in OPVs due to their relatively high LUMO energies. The UV-Vis absorption spectra of solvent-free C60 and PCBM crystals in the present work are similar to those of C60 and PCBM thin films shown in the literature. Changes in the absorption spectra of C60 solvates relative to the solvent-free C60 crystal are more significant than those of PCBM solvates due to the weaker effect of solvents on the π-stacking interactions between fullerene molecules in the latter solvates. The main absorptions for all C60 and PCBM crystals are located in the ultraviolet (UV) region.
Li, Wei
2013-01-07
A linear-scaling quantum chemistry method, the generalized energy-based fragmentation (GEBF) approach, has been extended to the explicitly correlated second-order Møller-Plesset perturbation theory F12 (MP2-F12) method and our own N-layered integrated molecular orbital and molecular mechanics (ONIOM) method, in which GEBF-MP2-F12, GEBF-MP2, and conventional density functional tight-binding methods can be used for different layers. The long-range interactions in dilute methanol aqueous solutions are then studied by computing the binding energies between a methanol molecule and water molecules in gas-phase and condensed-phase methanol-water clusters of various sizes, taken from classical molecular dynamics (MD) snapshots. By comparing with the results of force field methods, including SPC, TIP3P, PCFF, and AMOEBA09, the GEBF-MP2-F12 and GEBF-ONIOM methods are shown to be powerful and efficient for studying long-range interactions at a high level. With the GEBF-ONIOM(MP2-F12:MP2) and GEBF-ONIOM(MP2-F12:MP2:cDFTB) methods, the diameters of the largest nanoscale clusters under study are about 2.4 nm (747 atoms and 10 209 basis functions with the aug-cc-pVDZ basis set) and 4 nm (3351 atoms), respectively, which are almost impossible to treat with the conventional MP2 or MP2-F12 method. Thus, the GEBF-F12 and GEBF-ONIOM methods are expected to be a practical tool for studying nanoscale clusters in the condensed phase, providing an alternative benchmark for ab initio and density functional theory studies, and for developing new force fields in combination with classical MD simulations.
Soil volumetric water content measurements using TDR technique
Directory of Open Access Journals (Sweden)
S. Vincenzi
1996-06-01
A physical model to measure hydrological and thermal parameters in soils will be set up. The vertical profiles of volumetric water content, matric potential and temperature will be monitored in different soils. The volumetric soil water content is measured by means of the Time Domain Reflectometry (TDR) technique. The result of a test to determine experimentally the reproducibility of the volumetric water content measurements is reported, together with the methodology and the results of the analysis of the TDR waveforms. The analysis is based on the calculation of the travel time of the TDR signal in the waveguide embedded in the soil.
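The travel-time analysis converts to volumetric water content in two steps: the two-way travel time along the probe gives the apparent dielectric constant, Ka = (c·t / 2L)², and an empirical calibration such as Topp et al. (1980) maps Ka to water content. The probe length and travel time below are illustrative, and the Topp polynomial is a standard calibration for mineral soils rather than the site-specific one this study would use.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def apparent_permittivity(travel_time_s, probe_length_m):
    """Apparent dielectric constant from the two-way TDR travel time."""
    return (C * travel_time_s / (2.0 * probe_length_m)) ** 2

def topp_vwc(ka):
    """Topp et al. (1980) empirical calibration: volumetric water
    content (m3/m3) from apparent permittivity, for mineral soils."""
    return -5.3e-2 + 2.92e-2 * ka - 5.5e-4 * ka**2 + 4.3e-6 * ka**3

# Example: 0.3 m probe, 9 ns two-way travel time (a fairly wet soil)
ka = apparent_permittivity(9e-9, 0.3)
theta = topp_vwc(ka)
```

Because water's permittivity (~80) dwarfs that of dry soil (~3-5) and air (1), Ka responds strongly to moisture, which is what makes the travel-time measurement reproducible and sensitive.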
Iterative reconstruction of volumetric particle distribution
Wieneke, Bernhard
2013-02-01
For tracking the motion of illuminated particles in space and time, several volumetric flow measurement techniques are available, such as 3D particle tracking velocimetry (3D-PTV), which records images from typically three to four viewing directions. For higher seeding densities and the same experimental setup, tomographic PIV (Tomo-PIV) reconstructs voxel intensities using an iterative tomographic reconstruction algorithm (e.g. the multiplicative algebraic reconstruction technique, MART), followed by cross-correlation of sub-volumes to compute instantaneous 3D flow fields on a regular grid. A novel hybrid algorithm is proposed here that, similar to MART, iteratively reconstructs 3D particle locations by comparing the recorded images with the projections calculated from the particle distribution in the volume. But like 3D-PTV, particles are represented by 3D positions instead of voxel-based intensity blobs as in MART. Detailed knowledge of the optical transfer function and the particle image shape is mandatory, and these may differ for different positions in the volume and for each camera. Using synthetic data, it is shown that this method is capable of reconstructing densely seeded flows up to about 0.05 ppp with accuracy similar to Tomo-PIV. Finally, the method is validated with experimental data.
Rapidly-steered single-element ultrasound for real-time volumetric imaging and guidance
Stauber, Mark; Western, Craig; Solek, Roman; Salisbury, Kenneth; Hristov, Dmitre; Schlosser, Jeffrey
2016-03-01
Volumetric ultrasound (US) imaging has the potential to provide real-time anatomical imaging with high soft-tissue contrast in a variety of diagnostic and therapeutic guidance applications. However, existing volumetric US machines utilize "wobbling" linear phased array or matrix phased array transducers which are costly to manufacture and necessitate bulky external processing units. To drastically reduce cost, improve portability, and reduce footprint, we propose a rapidly-steered single-element volumetric US imaging system. In this paper we explore the feasibility of this system with a proof-of-concept single-element volumetric US imaging device. The device uses a multi-directional raster-scan technique to generate a series of two-dimensional (2D) slices that are reconstructed into three-dimensional (3D) volumes. At 15 cm depth, 90° lateral field of view (FOV), and 20° elevation FOV, the device produced 20-slice volumes at a rate of 0.8 Hz. Imaging performance was evaluated using a US phantom. Spatial resolution was 2.0 mm, 4.7 mm, and 5.0 mm in the axial, lateral, and elevational directions at 7.5 cm. Relative motion of phantom targets was automatically tracked within US volumes with a mean error of -0.3 ± 0.3 mm, -0.3 ± 0.3 mm, and -0.1 ± 0.5 mm in the axial, lateral, and elevational directions, respectively. The device exhibited a mean spatial distortion error of 0.3 ± 0.9 mm, 0.4 ± 0.7 mm, and -0.3 ± 1.9 mm in the axial, lateral, and elevational directions. With a production cost near $1000, the performance characteristics of the proposed system make it an ideal candidate for diagnostic and image-guided therapy applications where form factor and low cost are paramount.
Constrained reverse diffusion for thick slice interpolation of 3D volumetric MRI images.
Neubert, Aleš; Salvado, Olivier; Acosta, Oscar; Bourgeat, Pierrick; Fripp, Jurgen
2012-03-01
Due to physical limitations inherent in magnetic resonance imaging scanners, three dimensional volumetric scans are often acquired with anisotropic voxel resolution. We investigate several interpolation approaches to reduce the anisotropy and present a novel approach - constrained reverse diffusion for thick slice interpolation. This technique was compared to common methods: linear and cubic B-Spline interpolation and a technique based on non-rigid registration of neighboring slices. The methods were evaluated on artificial MR phantoms and real MR scans of human brain. The constrained reverse diffusion approach delivered promising results and provides an alternative for thick slice interpolation, especially for higher anisotropy factors.
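The baseline comparators used in the study (linear and cubic B-spline interpolation along the thick-slice axis) can be reproduced directly with standard tools; the constrained reverse diffusion method itself is novel and is not sketched here. Array sizes and the anisotropy factor below are illustrative.

```python
import numpy as np
from scipy import ndimage

# Anisotropic volume: e.g. 1 x 1 x 4 mm voxels (thick slices along z).
rng = np.random.default_rng(0)
vol = rng.random((64, 64, 16))

factor = 4  # bring the z sampling up to the in-plane resolution
linear = ndimage.zoom(vol, (1, 1, factor), order=1)  # linear interpolation
cubic = ndimage.zoom(vol, (1, 1, factor), order=3)   # cubic B-spline
```

Both baselines blur across slice boundaries, which is the artifact the constrained reverse diffusion approach is designed to reduce, particularly at higher anisotropy factors.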
Young, Tony; Xing, Aitang; Vial, Philp; Thwaites, David; Holloway, Lois; Arumugam, Sankar
2015-01-01
In this paper the sensitivity of an electronic portal imaging device (EPID) to introduced volumetric modulated arc therapy (VMAT) treatment errors was studied using the collapsed arc method. Two clinical head-and-neck (H&N) and prostate treatment plans had gantry-dependent dose and MLC errors introduced. These plans were then delivered to the EPID of an Elekta Synergy linear accelerator and compared to the original treatment planning system collapsed arc dose matrix. With the collapsed arc technique the EPID was able to detect MLC errors as small as 2 mm and dose errors as small as 3%, depending on the treatment plan complexity and the gamma tolerance used.
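The gamma tolerance mentioned above refers to the standard gamma-index comparison between a measured and a reference dose distribution. The sketch below is a simplified one-dimensional global gamma calculation (a 2D dose-matrix comparison as in the collapsed arc method works the same way, point by point); the profiles and tolerances are illustrative.

```python
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dta=2.0, dd=0.03):
    """Simplified 1D global gamma index: dta is the distance-to-agreement
    in mm, dd the dose tolerance as a fraction of the reference maximum.
    A point passes when its gamma value is <= 1."""
    d_max = dose_ref.max()
    g = np.empty(len(x))
    for i in range(len(x)):
        dist2 = ((x - x[i]) / dta) ** 2
        diff2 = ((dose_eval - dose_ref[i]) / (dd * d_max)) ** 2
        g[i] = np.sqrt(np.min(dist2 + diff2))
    return g

x = np.arange(0.0, 100.0, 1.0)                       # position, mm
ref = np.exp(-((x - 50.0) / 20.0) ** 2)              # reference profile
meas = ref * 1.01                                     # a 1% dose error
passing = np.mean(gamma_1d(x, ref, meas) <= 1.0)      # pass fraction
```

Tightening dta and dd raises the sensitivity to small MLC and dose errors at the cost of more false failures, which is the trade-off the study probes.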
Antes, Sebastian; Welsch, Melanie; Kiefer, Michael; Gläser, Mareike; Körner, Heiko; Eymann, Regina
2013-01-01
Magnetic resonance imaging and cranial ultrasound are the most frequently implemented imaging methods for investigating the infantile hydrocephalic brain. A general and reliable measurement index that can be equally applied in both imaging methods to assess the degree of ventricular dilatation is currently not available. For this purpose, a new parameter called the frontal and temporal horn ratio, determinable in coronal slices of the brain, was developed and evaluated in a comparative volumetric retrospective study: statistical analyses of 118 MRIs of 46 different shunt-treated pediatric patients revealed a good linear correlation between the new index and the actual ventricular volume.
DEFF Research Database (Denmark)
Troen, Ib; Bechmann, Andreas; Kelly, Mark C.
2014-01-01
Using the Wind Atlas methodology to predict the average wind speed at one location from measured climatological wind frequency distributions at another nearby location, we analyse the relative prediction errors using a linearized flow model (IBZ) and a more physically correct fully non-linear 3D...
Characterizing volumetric deformation behavior of naturally occurring bituminous sand materials
CSIR Research Space (South Africa)
Anochie-Boateng, Joseph
2009-05-01
Full Text Available A newly proposed hydrostatic compression test procedure is described. The test procedure applies field loading conditions of off-road construction and mining equipment to closely simulate the volumetric deformation and stiffness behaviour of oil sand materials. Based...
Designing remote web-based mechanical-volumetric flow meter ...
African Journals Online (AJOL)
... remote web-based mechanical-volumetric flow meter reading systems based on ... damage and also provides the ability to control and manage consumption. ... existing infrastructure of the telecommunications is used in data transmission.
Haoyi Wu; Sum Wai Chiang; Cheng Yang; Ziyin Lin; Jingping Liu; Kyoung-Sik Moon; Feiyu Kang; Bo Li; Ching Ping Wong
2015-01-01
Electrically small antennas (ESAs) are becoming one of the key components in compact wireless devices for telecommunications, defence, and aerospace systems, especially spherical designs, whose geometric layout more closely approaches Chu's limit and thus yields significant bandwidth improvements relative to their linear and planar counterparts. Yet broad application of volumetric ESAs is still hindered because low-cost fabrication has remained a tremendous challenge. Here we ...
Increasing the volumetric efficiency of Diesel engines by intake pipes
List, Hans
1933-01-01
Development of a method for calculating the volumetric efficiency of piston engines with intake pipes. Application of this method to the scavenging pumps of two-stroke-cycle engines with crankcase scavenging and to four-stroke-cycle engines. The utility of the method is demonstrated by volumetric-efficiency tests of the two-stroke-cycle engines with crankcase scavenging. Its practical application to the calculation of intake pipes is illustrated by example.
Serial volumetric registration of pulmonary CT studies
Silva, José Silvestre; Silva, Augusto; Sousa Santos, Beatriz
2008-03-01
Detailed morphological analysis of pulmonary structures and tissue, provided by modern CT scanners, is of utmost importance in oncological applications for diagnosis, treatment, and follow-up. A patient may go through several tomographic studies over a period of time, originating volumetric sets of image data that must be appropriately registered in order to track suspicious radiological findings. The structures or regions of interest may change their position or shape in CT exams acquired at different moments, due to postural, physiologic, or pathologic changes; the exams should therefore be registered before any follow-up information can be extracted. Postural mismatching over time is practically impossible to avoid and is particularly evident when imaging is performed at the limiting spatial resolution. In this paper, we propose a method for intra-patient registration of pulmonary CT studies to assist in the management of oncological pathology. Our method takes advantage of prior segmentation work. In the first step, pulmonary segmentation is performed and the trachea and main bronchi are identified. The registration method then proceeds with a longitudinal alignment based on morphological features of the lungs, such as the position of the carina, the pulmonary areas, the centers of mass, and the pulmonary trans-axial principal axis. The final step corresponds to the trans-axial registration of the corresponding pulmonary masked regions. This is accomplished by a pairwise sectional registration process driven by an iterative search of the affine transformation parameters leading to optimal similarity metrics. Results with several cases of intra-patient, intra-modality registration, up to 7 time points, show that this method provides the accurate registration needed for quantitative tracking of lesions and the development of image fusion strategies that may effectively assist the follow-up process.
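The core of the pairwise registration step, an iterative search of transformation parameters maximizing a similarity metric, can be sketched for the translation-only case. This is a simplification of the affine search described above: the blob image, normalized cross-correlation metric, and Powell optimizer are illustrative choices, not the paper's:

```python
import numpy as np
from scipy import ndimage, optimize

def ncc(a, b):
    # normalized cross-correlation, one possible similarity metric
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def register_translation(fixed, moving):
    # derivative-free search of the 2D shift maximizing NCC
    cost = lambda p: -ncc(fixed, ndimage.shift(moving, p, order=1, mode="nearest"))
    return optimize.minimize(cost, x0=[0.0, 0.0], method="Powell").x

# Synthetic "slice": a smooth blob, displaced by a known amount.
yy, xx = np.mgrid[0:64, 0:64]
fixed = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 6.0 ** 2))
moving = ndimage.shift(fixed, (3.0, -2.0), order=1, mode="nearest")

p = register_translation(fixed, moving)  # approximately (-3, 2)
```

A full affine search adds rotation, scale, and shear parameters to the cost function but follows the same optimize-a-similarity-metric pattern.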
Volumetric optoacoustic monitoring of endovenous laser treatments
Fehm, Thomas F.; Deán-Ben, Xosé L.; Schaur, Peter; Sroka, Ronald; Razansky, Daniel
2016-03-01
Chronic venous insufficiency (CVI) is one of the most common medical conditions, with reported prevalence estimates as high as 30% in the adult population. Although conservative management with compression therapy may improve the symptoms associated with CVI, healing often demands invasive procedures. Besides established surgical methods like vein stripping or bypassing, endovenous laser therapy (ELT) emerged as a promising novel treatment option during the last 15 years, offering multiple advantages such as less pain and faster recovery. Much of the treatment success depends on monitoring of the treatment progression using clinical imaging modalities such as Doppler ultrasound. The latter, however, do not provide the contrast, spatial resolution, and three-dimensional imaging capacity necessary for accurate online lesion assessment during treatment. As a consequence, the incidence of recanalization, lack of vessel occlusion, and collateral damage remains highly variable among patients. In this study, we examined the capacity of volumetric optoacoustic tomography (VOT) for real-time monitoring of ELT using an ex-vivo ox foot model. ELT was performed on subcutaneous veins while optoacoustic signals were acquired and reconstructed in real-time at a spatial resolution on the order of 200 μm. VOT images showed spatio-temporal maps of the lesion progression, characteristics of the vessel wall, and the position of the ablation fiber's tip during the pull-back. It was also possible to correlate the images with the temperature elevation measured in the area adjacent to the ablation spot. We conclude that VOT is a promising tool for providing online feedback during endovenous laser therapy.
Treatment planning for volumetric modulated arc therapy
Energy Technology Data Exchange (ETDEWEB)
Bedford, James L. [Joint Department of Physics, Institute of Cancer Research and The Royal Marsden NHS Foundation Trust, Downs Road, Sutton, Surrey SM2 5PT (United Kingdom)
2009-11-15
Purpose: Volumetric modulated arc therapy (VMAT) is a specific type of intensity-modulated radiation therapy (IMRT) in which the gantry speed, multileaf collimator (MLC) leaf position, and dose rate vary continuously during delivery. A treatment planning system for VMAT is presented. Methods: Arc control points are created uniformly throughout one or more arcs. An iterative least-squares algorithm is used to generate a fluence profile at every control point. The control points are then grouped and all of the control points in a given group are used to approximate the fluence profiles. A direct-aperture optimization is then used to improve the solution, taking into account the allowed range of leaf motion of the MLC. Dose is calculated using a fast convolution algorithm and the motion between control points is approximated by 100 interpolated dose calculation points. The method has been applied to five cases, consisting of lung, rectum, prostate and seminal vesicles, prostate and pelvic lymph nodes, and head and neck. The resulting plans have been compared with segmental (step-and-shoot) IMRT and delivered and verified on an Elekta Synergy to ensure practicality. Results: For the lung, prostate and seminal vesicles, and rectum cases, VMAT provides a plan of similar quality to segmental IMRT but with faster delivery by up to a factor of 4. For the prostate and pelvic nodes and head-and-neck cases, the critical structure doses are reduced with VMAT, both of these cases having a longer delivery time than IMRT. The plans in general verify successfully, although the agreement between planned and measured doses is not very close for the more complex cases, particularly the head-and-neck case. Conclusions: Depending upon the emphasis in the treatment planning, VMAT provides treatment plans which are higher in quality and/or faster to deliver than IMRT. The scheme described has been successfully introduced into clinical use.
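The fluence-generation step above is an iterative least-squares fit of beamlet weights to a desired dose. A toy version of that idea, with the physical constraint of nonnegative weights, can be written with nonnegative least squares; the matrix sizes and the use of `scipy.optimize.nnls` are illustrative stand-ins, not the paper's algorithm:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Hypothetical dose-influence matrix: dose per unit weight delivered by
# 10 beamlets to 40 voxels.
D = rng.random((40, 10))
w_true = rng.random(10)   # nonnegative "true" beamlet weights
d = D @ w_true            # an achievable prescription dose

# Least-squares fluence solve with the constraint w >= 0.
w, residual = nnls(D, d)
```

Because the prescription here is achievable by construction, the residual is essentially zero; real planning adds conflicting target/organ objectives, and the direct-aperture step then reshapes the solution into deliverable MLC segments.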
Visualization and volumetric structures from MR images of the brain
Energy Technology Data Exchange (ETDEWEB)
Parvin, B.; Johnston, W.; Robertson, D.
1994-03-01
Pinta is a system for segmentation and visualization of anatomical structures obtained from serial sections reconstructed from magnetic resonance imaging. The system approaches the segmentation problem by assigning each volumetric region to an anatomical structure. This is accomplished by satisfying constraints at the pixel level, slice level, and volumetric level. Each slice is represented by an attributed graph, where nodes correspond to regions and links correspond to the relations between regions. These regions are obtained by grouping pixels based on similarity and proximity. The slice level attributed graphs are then coerced to form a volumetric attributed graph, where volumetric consistency can be verified. The main novelty of our approach is in the use of the volumetric graph to ensure consistency from symbolic representations obtained from individual slices. In this fashion, the system allows errors to be made at the slice level, yet removes them when the volumetric consistency cannot be verified. Once the segmentation is complete, the 3D surfaces of the brain can be constructed and visualized.
Soft bilateral filtering volumetric shadows using cube shadow maps
Ali, Hatam H.; Sunar, Mohd Shahrizal; Kolivand, Hoshang
2017-01-01
Volumetric shadows often increase the realism of rendered scenes in computer graphics. Typical volumetric shadow techniques do not provide a smooth transition effect in real-time while preserving crisp boundaries. This research presents a new technique for generating high quality volumetric shadows by sampling and interpolation. Contrary to the conventional ray marching method, which is computationally expensive, the proposed technique adopts downsampling in calculating ray marching. Furthermore, light scattering is computed in a High Dynamic Range buffer to generate tone mapping. Bilateral interpolation is used along view rays to smooth the transition of volumetric shadows while preserving edges. In addition, the technique applies a cube shadow map to create multiple shadows. The contribution of this technique is reducing the number of sample points in evaluating light scattering and then introducing bilateral interpolation to improve volumetric shadows, removing inherent deficiencies of shadow maps. The technique produces soft, high-quality volumetric shadows with good performance, which shows its potential for interactive applications. PMID:28632740
Yan, Jun; Wang, Qian; Wei, Tong; Jiang, Lili; Zhang, Milin; Jing, Xiaoyan; Fan, Zhuangjun
2014-05-27
We demonstrated the fabrication of functionalized graphene nanosheets via low temperature (300 °C) treatment of graphite oxide with a slow heating rate, using Mg(OH)2 nanosheets as a template. Because of its dented sheets with high surface area, a certain amount of oxygen-containing groups, and low pore volume, the as-obtained graphene delivers ultrahigh gravimetric and volumetric capacitances of 456 F g(-1) and 470 F cm(-3), almost 3.7 times and 3.3 times higher than hydrazine-reduced graphene, respectively. Notably, the obtained volumetric capacitance is the highest value so far reported for carbon materials in aqueous electrolytes. More importantly, the assembled supercapacitor exhibits an ultrahigh volumetric energy density of 27.2 Wh L(-1), which is among the highest values for carbon materials in aqueous electrolytes, as well as excellent cycling stability, retaining 134% of its initial capacitance after 10,000 cycles. Therefore, the present work holds great promise for future design and large-scale production of high performance graphene electrodes for portable energy storage devices.
Brain stem and cerebellum volumetric analysis of Machado Joseph disease patients
Directory of Open Access Journals (Sweden)
S T Camargos
2011-01-01
Full Text Available Machado-Joseph disease, or spinocerebellar ataxia type 3 (MJD/SCA3), is the most frequent late-onset spinocerebellar ataxia and results from a CAG repeat expansion in the ataxin-3 gene. Previous studies have found correlation between atrophy of the cerebellum and brainstem with age and CAG repeats, although no such correlation has been found with disease duration and clinical manifestations. In this study we test the hypothesis that atrophy of the cerebellum and brainstem in MJD/SCA3 is related to clinical severity, disease duration, and CAG repeat length, as well as to other variables such as age and ICARS (International Cooperative Ataxia Rating Scale). Whole brain high resolution MRI and volumetric measurement with cranial volume normalization were obtained from 15 MJD/SCA3 patients and 15 normal, age- and sex-matched controls. We applied ICARS and compared the score with volumes, CAG number, disease duration, and age. We found significant correlation of both brainstem and cerebellar atrophy with CAG repeat length, age, disease duration, and degree of disability. The Spearman rank correlation was stronger with volumetric reduction of the cerebellum than with the brainstem. Our data allow us to conclude that volumetric analysis might reveal progressive degeneration after disease onset, which in turn is linked to both age and number of CAG repeat expansions in SCA3.
Yang, William C; Minkler, Daniel F; Kshirsagar, Rashmi; Ryll, Thomas; Huang, Yao-Ming
2016-01-10
Biomanufacturing factories of the future are transitioning from large, single-product facilities toward smaller, multi-product, flexible facilities. Flexible capacity allows companies to adapt to ever-changing pipeline and market demands. Concentrated fed-batch (CFB) cell culture enables flexible manufacturing capacity with limited volumetric capacity; it intensifies cell culture titers such that the output of a smaller facility can rival that of a larger facility. We tested this hypothesis at bench scale by developing a feeding strategy for CFB and applying it to two cell lines. CFB improved cell line A output by 105% and cell line B output by 70% compared to traditional fed-batch (TFB) processes. CFB did not greatly change cell line A product quality, but it improved cell line B charge heterogeneity, suggesting that CFB has both process and product quality benefits. We projected CFB output gains in the context of a 2000-L small-scale facility, but the output was lower than that of a 15,000-L large-scale TFB facility. CFB's high cell mass also complicated operations, eroded volumetric productivity, and showed that our current processes require significant improvements in specific productivity in order to realize their full potential and savings in manufacturing. Thus, improving specific productivity can resolve CFB's cost, scale-up, and operability challenges.
The report documents a series of seminars at Rome Air Development Center with content equivalent to an intense course in Linear Systems. Material is slanted toward the practicing engineer and introduces some of the fundamental concepts and techniques for analyzing linear systems. Techniques for
Directory of Open Access Journals (Sweden)
Dhanya eParameshwaran
2012-09-01
Full Text Available Many theories of neural network function assume linear summation. This is in apparent conflict with several known forms of nonlinearity in real neurons. Furthermore, key network properties depend on the summation parameters, which are themselves subject to modulation and plasticity in real neurons. We tested summation responses as measured by spiking activity in small groups of CA1 pyramidal neurons using permutations of inputs delivered on an electrode array. We used calcium dye recordings as a readout of the summed spiking response of cell assemblies in the network. Each group consisted of 2-10 cells, and the calcium signal from each cell correlated with individual action potentials. We find that the responses of these small cell groups sum linearly, despite previously reported dendritic nonlinearities and the thresholded responses of individual cells. This linear summation persisted when input strengths were reduced. Blockage of inhibition shifted responses up towards saturation, but did not alter the slope of the linear region of summation. Long-term potentiation of synapses in the slice also preserved the linear fit, with an increase in absolute response. However, in this case the summation gain decreased, suggesting a homeostatic process for preserving overall network excitability. Overall, our results suggest that cell groups in the CA3-CA1 network robustly follow a consistent set of linear summation and gain-control rules, notwithstanding the intrinsic nonlinearities of individual neurons. Cell-group responses remain linear, with well-defined transformations following inhibitory modulation and plasticity. Our measures of these transformations provide useful parameters to apply to neural network analyses involving modulation and plasticity.
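The linearity test described above amounts to regressing the measured group response against the sum of the individual input responses and checking for unit slope; a minimal synthetic version (all numbers invented, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200

# Hypothetical responses evoked by each of 3 inputs delivered alone.
individual = rng.random((n_trials, 3))
predicted_sum = individual.sum(axis=1)

# "Measured" group response to the combined input: a perfectly linear
# summation plus small measurement noise.
measured = predicted_sum + rng.normal(0.0, 0.05, n_trials)

# Unit slope and near-zero intercept indicate linear summation;
# a changed slope with a preserved linear fit indicates a gain change.
slope, intercept = np.polyfit(predicted_sum, measured, 1)
```

In the study's terms, inhibition blockade would shift the intercept (offset toward saturation) while LTP would scale the response, each leaving the fit linear.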
Yang, Liu; Xiao-Jing, Yu; Jian-Ming, Ma; Yi-Wen, Guan; Jiang, Li; Qiang, Li; Sa, Yang
2017-06-01
A volumetric ablation model for EPDM (ethylene-propylene-diene monomer) is established in this paper. The model considers the complex physicochemical process in the porous structure of a char layer. An ablation physics model based on the porous structure of a char layer and a model of heterogeneous volumetric ablation char-layer physics are then built. In the model, porosity is used to describe the porous structure of the char layer. Gas diffusion and chemical reactions are introduced throughout the porous structure. Through detailed formation analysis, the causes of the compact or loose structure in the char layer and the chemical vapor deposition (CVD) reaction between pyrolysis gas and the char-layer skeleton are introduced. The Arrhenius formula is adopted to determine the methods for calculating the carbon deposition rate C, which is the consumption rate caused by thermochemical reactions in the char layer, and the porosity evolution. A critical porosity value is used as the criterion for char-layer porous-structure failure under gas flow and particle erosion. This critical value is obtained by fitting experimental parameters and the surface porosity of the char layer. Linear ablation and mass ablation rates are computed with the critical porosity value. The calculated linear and mass ablation rates generally coincide with experimental results, suggesting that the ablation analysis proposed in this paper accurately reflects practical situations and that the physics and mathematics models built are accurate and reasonable.
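The Arrhenius-type rate and the porosity bookkeeping described above can be sketched as follows; the pre-exponential factor, activation energy, conversion coefficient, and critical porosity are invented placeholders, not the paper's fitted values:

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def deposition_rate(A, Ea, T):
    # Arrhenius form: C = A * exp(-Ea / (R * T))
    return A * math.exp(-Ea / (R * T))

def step_porosity(phi, rate, dt, alpha=1e-6):
    # Illustrative porosity update: CVD carbon deposition fills pores,
    # so porosity decreases in proportion to the deposition rate.
    return max(phi - alpha * rate * dt, 0.0)

PHI_CRITICAL = 0.8  # placeholder failure threshold for the char skeleton

# The rate rises steeply with char-layer temperature:
k_1500 = deposition_rate(A=1.0e5, Ea=120e3, T=1500.0)
k_2500 = deposition_rate(A=1.0e5, Ea=120e3, T=2500.0)
```

Comparing the evolved porosity against `PHI_CRITICAL` is the failure criterion the abstract describes for structure loss under gas flow and particle erosion.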
DEFF Research Database (Denmark)
Darula, Radoslav; Sorokin, Sergey
2013-01-01
An electro-magneto-mechanical system combines three physical domains: a mechanical structure, a magnetic field, and an electric circuit. The interaction between these domains is analysed for a structure with two degrees of freedom (translational and rotational) and two electrical circuits. Each electrical circuit is described by a differential equation of the 1st order, which is considered to contribute to the coupled system by 0.5 DOF. The electrical and mechanical systems are coupled via a magnetic circuit, which is inherently non-linear due to the non-linear nature of the electro-magnetic force. To study the non-linear behaviour of the coupled problem analytically, the classical multiple scale method is applied. The response at each mode in resonant as well as in sub-harmonic excitation conditions is analysed in the cases of internal resonance and internal parametric resonance.
Stoll, R R
1968-01-01
Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be both useful to students of mathematics and those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understand
Reconstruction of volumetric ultrasound panorama based on improved 3D SIFT.
Ni, Dong; Chui, Yim Pan; Qu, Yingge; Yang, Xuan; Qin, Jing; Wong, Tien-Tsin; Ho, Simon S H; Heng, Pheng Ann
2009-10-01
Registration of ultrasound volumes is a key issue for the reconstruction of a volumetric ultrasound panorama. In this paper, we propose an improved three-dimensional (3D) scale invariant feature transform (SIFT) algorithm to globally register ultrasound volumes acquired from a dedicated ultrasound probe, where local deformations are corrected by a block-based warping algorithm. The original SIFT algorithm is extended to 3D and improved by combining the SIFT detector with the Rohr3D detector to extract complementary features and by applying the diffusion distance algorithm for robust feature comparison. Extensive experiments have been performed on both phantom and clinical data sets to demonstrate the effectiveness and robustness of our approach.
Improved volumetric imaging in tomosynthesis using combined multiaxial sweeps.
Gersh, Jacob A; Wiant, David B; Best, Ryan C M; Bennett, Marcus C; Munley, Michael T; King, June D; McKee, Mahta M; Baydush, Alan H
2010-09-03
This study explores the volumetric reconstruction fidelity attainable using tomosynthesis with a kV imaging system which has a unique ability to rotate isocentrically and with multiple degrees of mechanical freedom. More specifically, we seek to investigate volumetric reconstructions by combining multiple limited-angle rotational image acquisition sweeps. By comparing these reconstructed images with those of a CBCT reconstruction, we can gauge the volumetric fidelity of the reconstructions. In surgical situations, the described tomosynthesis-based system could provide high-quality volumetric imaging without requiring patient motion, even with rotational limitations present. Projections were acquired using the Digital Integrated Brachytherapy Unit, or IBU-D. A phantom was used which contained several spherical objects of varying contrast. Using image projections acquired during isocentric sweeps around the phantom, reconstructions were performed by filtered backprojection. For each image acquisition sweep configuration, a contrasting sphere is analyzed using two metrics and compared to a gold standard CBCT reconstruction. Since the intersection of a reconstructed sphere and an imaging plane is ideally a circle with an eccentricity of zero, the first metric presented compares the effective eccentricity of intersections of reconstructed volumes and imaging planes. As another metric of volumetric reconstruction fidelity, the volume of one of the contrasting spheres was determined using manual contouring. By comparing these manually delineated volumes with a CBCT reconstruction, we can gauge the volumetric fidelity of reconstructions. The configuration which yielded the highest overall volumetric reconstruction fidelity, as determined by effective eccentricities and volumetric contouring, consisted of two orthogonally-offset 60° L-arm sweeps and a single C-arm sweep which shared a pivot point with one of the L-arm sweeps. When compared to a similar configuration that
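The first metric, the effective eccentricity of a reconstructed sphere's cross-section, can be estimated from second-order moments of the binary cross-section; this is a generic moments-based sketch, not necessarily the paper's exact procedure:

```python
import numpy as np

def effective_eccentricity(mask):
    """Eccentricity of the ellipse fitted (via second-order moments) to a
    binary cross-section; 0 for a perfect circle, approaching 1 as it
    elongates."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([ys, xs]).astype(float)
    evals = np.sort(np.linalg.eigvalsh(np.cov(pts)))  # [minor, major] variances
    return float(np.sqrt(1.0 - evals[0] / evals[1]))

# Synthetic cross-sections on a 101x101 grid:
yy, xx = np.mgrid[0:101, 0:101]
circle = (yy - 50) ** 2 + (xx - 50) ** 2 <= 30 ** 2
ellipse = ((yy - 50) / 20.0) ** 2 + ((xx - 50) / 40.0) ** 2 <= 1.0
```

An ideal reconstruction yields circular cross-sections (eccentricity near 0), while limited-angle artifacts smear the sphere along poorly sampled directions and raise the metric.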
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
Solow, Daniel
2014-01-01
This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.
Berberian, Sterling K
2014-01-01
Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.
Liesen, Jörg
2015-01-01
This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc...
Aspects of volumetric efficiency measurement for reciprocating engines
Directory of Open Access Journals (Sweden)
Pešić Radivoje B.
2013-01-01
Full Text Available The volumetric efficiency significantly influences engine output. Both the design and the dimensions of the intake and exhaust systems have a large impact on volumetric efficiency. Experimental equipment for measuring airflow through the engine, which is placed in the intake system, may affect the results of measurements and distort the real picture of the impact of individual structural factors. This paper deals with the problems of experimental determination of intake airflow using orifice plates and the influence of orifice plate diameter on the results of the measurements. The problems of airflow measurements through a multi-process Otto/Diesel engine were analyzed. An original method for determining volumetric efficiency was developed based on in-cylinder pressure measurement during motored operation, and appropriate calibration of the experimental procedure was performed. Good correlation was found between the results of the original method for determining volumetric efficiency and the results of a theoretical model used in research on the influence of intake pipe length on volumetric efficiency. [Acknowledgments. The paper is the result of research within the project TR 35041 financed by the Ministry of Science and Technological Development of the Republic of Serbia.]
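For context, the quantity being measured relates the air mass actually inducted to the mass that would fill the displacement at ambient density each cycle; a minimal calculation with invented figures (not values from the paper):

```python
def volumetric_efficiency(m_dot, rho_air, V_d, rpm):
    """eta_v = m_dot / (rho_air * V_d * cycles_per_s). A four-stroke
    engine completes one intake stroke every two revolutions."""
    cycles_per_s = (rpm / 60.0) / 2.0
    return m_dot / (rho_air * V_d * cycles_per_s)

# Invented example: a 2.0 L engine at 3000 rpm ingesting 0.05 kg/s of air
# at an ambient density of 1.2 kg/m^3.
eta = volumetric_efficiency(m_dot=0.05, rho_air=1.2, V_d=2.0e-3, rpm=3000)
```

This is why intrusive airflow meters are problematic: any added intake restriction lowers `m_dot`, and hence the measured efficiency, relative to the unobstructed engine.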
A high volume, high throughput volumetric sorption analyzer
Soo, Y. C.; Beckner, M.; Romanos, J.; Wexler, C.; Pfeifer, P.; Buckley, P.; Clement, J.
2011-03-01
In this talk we will present an overview of our new Hydrogen Test Fixture (HTF), constructed by the Midwest Research Institute for The Alliance for Collaborative Research in Alternative Fuel Technology, to test activated carbon monoliths for hydrogen gas storage. The HTF is an automated, computer-controlled volumetric instrument for rapid screening and manipulation of monoliths under an inert atmosphere (to exclude degradation of carbon from exposure to oxygen). The HTF allows us to measure a large quantity of sample (up to 500 g) in a 0.5 l test tank, making our results less sensitive to sample inhomogeneity. The HTF can measure isotherms at pressures ranging from 1 to 300 bar at room temperature. For comparison, other volumetric instruments such as Hiden Isochema's HTP-1 Volumetric Analyser can only measure carbon samples up to 150 mg at pressures up to 200 bar. Work supported by the US DOD Contract # N00164-08-C-GS37.
Volumetric (3D) compressive sensing spectral domain optical coherence tomography.
Xu, Daguang; Huang, Yong; Kang, Jin U
2014-11-01
In this work, we propose a novel three-dimensional compressive sensing (CS) approach for spectral domain optical coherence tomography (SD OCT) volumetric image acquisition and reconstruction. Instead of taking a spectral volume whose size is the same as that of the volumetric image, our method uses a subset of the original spectral volume that is under-sampled in all three dimensions, which reduces the amount of spectral measurements to less than 20% of that required by the Shannon/Nyquist theory. The 3D image is recovered from the under-sampled spectral data dimension-by-dimension using the proposed three-step CS reconstruction strategy. Experimental results show that our method can significantly reduce the sampling rate required for a volumetric SD OCT image while preserving the image quality.
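The under-sampled recovery idea can be illustrated in one dimension with iterative soft-thresholding (ISTA) on a synthetic sparse signal. This generic CS sketch stands in for the paper's three-step volumetric strategy; the matrix sizes, sparsity level, and regularization weight are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 80, 200, 5                       # 40% "sampling rate", 5-sparse signal

A = rng.normal(size=(m, n)) / np.sqrt(m)   # random sensing matrix
x = np.zeros(n)
x[rng.choice(n, size=k, replace=False)] = rng.choice([-1.0, 1.0], size=k)
y = A @ x                                  # under-sampled measurements (m << n)

# ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
lam = 0.01
x_hat = np.zeros(n)
for _ in range(1000):
    g = x_hat + A.T @ (y - A @ x_hat) / L                      # gradient step
    x_hat = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

rel_err = np.linalg.norm(x_hat - x) / np.linalg.norm(x)
```

With far fewer measurements than unknowns, the sparsity prior still recovers the signal to small relative error, which is the principle that lets the OCT spectral volume be sampled below the Nyquist requirement.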
Volumetric intake flow measurements of an IC engine using magnetic resonance velocimetry
Freudenhammer, Daniel; Baum, Elias; Peterson, Brian; Böhm, Benjamin; Jung, Bernd; Grundmann, Sven
2014-05-01
Magnetic resonance velocimetry (MRV) measurements are performed in a 1:1 scale model of a single-cylinder optical engine to investigate the volumetric flow within the intake and cylinder geometry during flow induction. The model is a steady flow water analogue of the optical IC-engine with a fixed valve lift of mm to simulate the induction flow at crank-angle bTDC. This setup resembles a steady flow engine test bench configuration. MRV measurements are validated with phase-averaged particle image velocimetry (PIV) measurements performed within the symmetry plane of the optical engine. Differences in experimental operating parameters between MRV and PIV measurements are well addressed. Comparison of MRV and PIV measurements is demonstrated using normalized mean velocity component profiles and showed excellent agreement in the upper portion of the cylinder chamber (i.e., mm). MRV measurements are further used to analyze the ensemble average volumetric flow within the 3D engine domain. Measurements are used to describe the 3D overflow and underflow behavior as the annular flow enters the cylinder chamber. Flow features such as the annular jet-like flows extending into the cylinder, their influence on large-scale in-cylinder flow motion, as well as flow recirculation zones are identified in 3D space. Inlet flow velocities are analyzed around the entire valve curtain perimeter to quantify percent mass flow rate entering the cylinder. Recirculation zones associated with the underflow are shown to reduce local mass flow rates up to 50 %. Recirculation zones are further analyzed in 3D space within the intake manifold and cylinder chamber. It is suggested that such recirculation zones can have large implications on cylinder charge filling and variations of the in-cylinder flow pattern. MRV is revealed to be an important diagnostic tool used to understand the volumetric induction flow within engine geometries and is potentially suited to evaluate flow changes due to intake
Multiple sparse volumetric priors for distributed EEG source reconstruction.
Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan
2014-10-15
We revisit the multiple sparse priors (MSP) algorithm implemented in the statistical parametric mapping software (SPM) for distributed EEG source reconstruction (Friston et al., 2008). In the present implementation, multiple cortical patches are introduced as source priors based on a dipole source space restricted to a cortical surface mesh. In this note, we present a technique to construct volumetric cortical regions to introduce as source priors by restricting the dipole source space to a segmented gray matter layer and using a region-growing approach. This extension makes it possible to reconstruct brain structures besides the cortical surface and facilitates the use of more realistic volumetric head models with more layers, such as cerebrospinal fluid (CSF), compared to the standard 3-layered scalp-skull-brain head models. We illustrated the technique with ERP data and anatomical MR images in 12 subjects. Based on the segmented gray matter of each subject, cortical regions were created and introduced as source priors for MSP inversion assuming two types of head models: the standard 3-layered scalp-skull-brain models and extended 4-layered models including CSF. We compared these models with the current implementation by assessing the free energy corresponding to each of the reconstructions using Bayesian model selection for group studies. Strong evidence was found in favor of the volumetric MSP approach compared to the MSP approach based on cortical patches for both types of head models. Overall, the strongest evidence was found in favor of the volumetric MSP reconstructions based on the extended head models including CSF. These results were verified by comparing the reconstructed activity. The use of volumetric cortical regions as source priors is a useful complement to the present implementation, as it allows more complex head models and volumetric source priors to be introduced in future studies.
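The region-growing construction described above can be sketched as a simple flood fill over a binary gray-matter mask. The grid size, seed location, and 6-connectivity used here are illustrative assumptions, not details from the paper:

```python
import numpy as np
from collections import deque

# Sketch: starting from a seed voxel inside a binary gray-matter mask,
# collect the 6-connected voxels to form one volumetric cortical region
# usable as a source prior.
def grow_region(mask, seed):
    region = np.zeros_like(mask, dtype=bool)
    queue = deque([seed])
    while queue:
        v = queue.popleft()
        if region[v] or not mask[v]:
            continue
        region[v] = True
        x, y, z = v
        for dx, dy, dz in [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]:
            n = (x + dx, y + dy, z + dz)
            if all(0 <= n[i] < mask.shape[i] for i in range(3)):
                queue.append(n)
    return region

mask = np.zeros((5, 5, 5), dtype=bool)
mask[1:4, 1:4, 1:4] = True          # a 3x3x3 "gray matter" blob
mask[4, 4, 4] = True                # disconnected voxel, not reached
region = grow_region(mask, (2, 2, 2))
print(region.sum())                 # 27 voxels in the grown region
```

Any voxel not 6-connected to the seed's component stays outside the region, which is what lets the approach carve a segmented gray-matter layer into separate priors.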
Hiroki Yoshioka; Kenta Obata; Tomoaki Miura
2012-01-01
The spectral unmixing of a linear mixture model (LMM) with Normalized Difference Vegetation Index (NDVI) constraints was performed to estimate the fraction of vegetation cover (FVC) over the earth’s surface in an effort to facilitate long-term surface vegetation monitoring using a set of environmental satellites. Although the integrated use of multiple sensors improves the spatial and temporal quality of the data sets, area-averaged FVC values obtained using an LMM-based algorithm suffer from...
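As a minimal sketch of an NDVI-constrained linear mixture model, the classic two-endmember (dimidiate pixel) form estimates FVC from a single NDVI value; the endmember NDVI values below are hypothetical placeholders, not numbers from the study:

```python
import numpy as np

# Two-endmember linear mixture model (LMM) sketch: a pixel's NDVI is a
# weighted sum of vegetation and soil endmember NDVI values, with the
# two fractions constrained to sum to one.
def unmix_fvc(ndvi_pixel, ndvi_veg=0.8, ndvi_soil=0.1):
    """Estimate fraction of vegetation cover (FVC) from one NDVI value."""
    fvc = (ndvi_pixel - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return float(np.clip(fvc, 0.0, 1.0))  # enforce 0 <= FVC <= 1
```

With these placeholder endmembers, a pixel NDVI of 0.45 unmixes to an FVC of 0.5; pixels outside the endmember range clip to 0 or 1.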
Volumetric measurements of a spatially growing dust acoustic wave
Williams, Jeremiah D.
2012-11-01
In this study, tomographic particle image velocimetry (tomo-PIV) techniques are used to make volumetric measurements of the dust acoustic wave (DAW) in a weakly coupled dusty plasma system in an argon, dc glow discharge plasma. These tomo-PIV measurements provide the first instantaneous volumetric measurement of a naturally occurring propagating DAW. These measurements reveal, over the measured volume, that the observed wave mode propagates in all three spatial dimensions and exhibits the same spatial growth rate and wavelength in each spatial direction.
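A spatial growth rate of the kind reported above can, in principle, be extracted from a measured wave envelope by a log-linear fit, since a growing wave has amplitude A(x) ≈ A0·exp(kᵢx). The profile below is synthetic and purely illustrative:

```python
import numpy as np

# Sketch: estimate the spatial growth rate k_i of a wave from an
# amplitude profile A(x) ~ A0 * exp(k_i * x) via least squares on log A.
def spatial_growth_rate(x, amplitude):
    k_i, _log_a0 = np.polyfit(x, np.log(amplitude), 1)
    return k_i

x = np.linspace(0.0, 10.0, 50)      # synthetic positions (e.g. mm)
amp = 0.2 * np.exp(0.3 * x)         # synthetic growing wave envelope
print(spatial_growth_rate(x, amp))  # recovers ~0.3
```

Repeating the fit along each of the three measured directions is one way to check the paper's finding that growth rate and wavelength are the same in each direction.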
Volumetric measurements of a spatially growing dust acoustic wave
Energy Technology Data Exchange (ETDEWEB)
Williams, Jeremiah D. [Physics Department, Wittenberg University, Springfield, Ohio 45504 (United States)
2012-11-15
Volumetric Pricing of Agricultural Water Supplies: A Case Study
Griffin, Ronald C.; Perry, Gregory M.
1985-07-01
Models of water consumption by rice producers are conceptualized and then estimated using cross-sectional time series data obtained from 16 Texas canal operators for the years 1977-1982. Two alternative econometric models demonstrate that both volumetric and flat rate water charges are strongly and inversely related to agricultural water consumption. Nonprice conservation incentives accompanying flat rates are hypothesized to explain the negative correlation of flat rate charges and water consumption. Application of these results suggests that water supply organizations in the sample population converting to volumetric pricing will generally reduce water consumption.
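The inverse price-consumption relationship reported above is the kind of effect a log-log demand regression estimates; the functional form, prices, and elasticity below are illustrative stand-ins, not the study's econometric models or data:

```python
import numpy as np

# Sketch: estimate the price elasticity of water demand with OLS on
# log-transformed data, log(Q) = a + b*log(P); b < 0 corresponds to the
# inverse relationship between water charges and consumption.
rng = np.random.default_rng(0)
price = rng.uniform(5.0, 50.0, 200)           # synthetic prices
elasticity_true = -0.4                        # assumed elasticity
quantity = 10.0 * price ** elasticity_true    # noise-free synthetic demand

X = np.column_stack([np.ones_like(price), np.log(price)])
coef, *_ = np.linalg.lstsq(X, np.log(quantity), rcond=None)
print(coef[1])   # recovers ~ -0.4
```

On real panel data one would add fixed effects for canal operator and year, but the slope coefficient plays the same role: its sign and magnitude quantify how volumetric pricing curbs consumption.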
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...
Sahai, Vivek
2013-01-01
Beginning with the basic concepts of vector spaces such as linear independence, basis and dimension, quotient space, linear transformation and duality with an exposition of the theory of linear operators on a finite dimensional vector space, this book includes the concept of eigenvalues and eigenvectors, diagonalization, triangulation and Jordan and rational canonical forms. Inner product spaces which cover finite dimensional spectral theory and an elementary theory of bilinear forms are also discussed. This new edition of the book incorporates the rich feedback of its readers. We have added new subject matter in the text to make the book more comprehensive. Many new examples have been discussed to illustrate the text. More exercises have been included. We have taken care to arrange the exercises in increasing order of difficulty. There is now a new section of hints for almost all exercises, except those which are straightforward, to enhance their importance for individual study and for classroom use.
Edwards, Harold M
1995-01-01
In his new undergraduate textbook, Harold M. Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject.
Allenby, Reg
1995-01-01
As the basis of equations (and therefore problem-solving), linear algebra is the most widely taught sub-division of pure mathematics. Dr Allenby has used his experience of teaching linear algebra to write a lively book on the subject that includes historical information about the founders of the subject as well as giving a basic introduction to the mathematics undergraduate. The whole text has been written in a connected way with ideas introduced as they occur naturally. As with the other books in the series, there are many worked examples. Solutions to the exercises are available online.
Isolated linear blaschkoid psoriasis.
Nasimi, M; Abedini, R; Azizpour, A; Nikoo, A
2016-10-01
Linear psoriasis (LPs) is considered a rare clinical presentation of psoriasis, which is characterized by linear erythematous and scaly lesions along the lines of Blaschko. We report the case of a 20-year-old man who presented with asymptomatic linear and S-shaped erythematous, scaly plaques on the right side of his trunk. The plaques were arranged along the lines of Blaschko with a sharp demarcation at the midline. Histological examination of a skin biopsy confirmed the diagnosis of psoriasis. Topical calcipotriol and betamethasone dipropionate ointments were prescribed for 2 months. A good clinical improvement was achieved, with reduction in lesion thickness and scaling. In patients with linear erythematous and scaly plaques along the lines of Blaschko, the diagnosis of LPs should be kept in mind, especially in patients with asymptomatic lesions of late onset. © 2016 British Association of Dermatologists.
A Technique for Generating Volumetric Cine MRI (VC-MRI)
Harris, Wendy; Ren, Lei; Cai, Jing; Zhang, You; Chang, Zheng; Yin, Fang-Fang
2016-01-01
Purpose: To develop a technique to generate on-board volumetric cine MRI (VC-MRI) using patient prior images, motion modeling, and on-board 2D cine MRI. Methods: One phase of a 4D-MRI acquired during patient simulation is used as the patient prior images. Three major respiratory deformation patterns of the patient are extracted from the 4D-MRI based on principal component analysis. The on-board VC-MRI at any instant is considered a deformation of the prior MRI. The deformation field is represented as a linear combination of the 3 major deformation patterns, and the coefficients of the deformation patterns are solved by the data fidelity constraint using the acquired on-board single 2D cine MRI. The method was evaluated using both XCAT simulation of lung cancer patients and MRI data from four real liver cancer patients. The accuracy of the estimated VC-MRI was quantitatively evaluated using volume percent difference (VPD), center-of-mass shift (COMS), and target tracking errors. Effects of acquisition orientation, region-of-interest (ROI) selection, patient breathing pattern change, and noise on the estimation accuracy were also evaluated. Results: Image subtraction of the ground truth with the estimated on-board VC-MRI shows fewer differences than image subtraction of the ground truth with the prior image. Agreement between profiles in the estimated and ground-truth VC-MRI was achieved with less than 6% error for both XCAT and patient data. Among all XCAT scenarios, the VPD between ground-truth and estimated lesion volumes was on average 8.43 ± 1.52% and the COMS was on average 0.93 ± 0.58 mm across all time steps for estimation based on the ROI region in the sagittal cine images. Matching to the ROI in the sagittal view achieved better accuracy when there was substantial breathing pattern change. The technique was robust against noise levels up to SNR = 20. For patient data, average tracking errors were less than 2 mm in all directions for all patients. Conclusions: Preliminary studies demonstrated the
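Under a simplifying linearization (assumed here for illustration; the paper works with deformation fields and deformable image warping), the coefficient-solving step reduces to a least-squares fit of the three mode weights against the single observed 2D cine slice:

```python
import numpy as np

# Sketch of the VC-MRI estimation step under a linearized model: the
# on-board volume is prior + sum_k w_k * D_k, where D_k are three
# principal deformation modes (here: hypothetical intensity-change
# volumes), and the weights w_k are fitted so that one extracted 2-D
# slice matches the acquired on-board cine slice (data fidelity).
rng = np.random.default_rng(1)
nx, ny, nz = 8, 8, 8
prior = rng.normal(size=(nx, ny, nz))
modes = rng.normal(size=(3, nx, ny, nz))      # 3 PCA modes (synthetic)
w_true = np.array([0.5, -0.2, 0.1])

truth = prior + np.tensordot(w_true, modes, axes=1)
slice_obs = truth[:, :, nz // 2]              # the single on-board 2-D slice

# Least-squares fit of the 3 weights using only the observed slice
A = modes[:, :, :, nz // 2].reshape(3, -1).T
b = (slice_obs - prior[:, :, nz // 2]).ravel()
w_est, *_ = np.linalg.lstsq(A, b, rcond=None)
print(w_est)    # ~ [0.5, -0.2, 0.1]
```

The point the abstract makes survives the simplification: because only three coefficients are unknown, a single 2D slice carries enough data-fidelity information to pin down the whole volume.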
A Technique for Generating Volumetric Cine-Magnetic Resonance Imaging.
Harris, Wendy; Ren, Lei; Cai, Jing; Zhang, You; Chang, Zheng; Yin, Fang-Fang
2016-06-01
The purpose of this study was to develop a technique to generate on-board volumetric cine-magnetic resonance imaging (VC-MRI) using patient prior images, motion modeling, and on-board 2-dimensional cine MRI. One phase of a 4-dimensional MRI acquired during patient simulation is used as patient prior images. Three major respiratory deformation patterns of the patient are extracted from 4-dimensional MRI based on principal-component analysis. The on-board VC-MRI at any instant is considered as a deformation of the prior MRI. The deformation field is represented as a linear combination of the 3 major deformation patterns. The coefficients of the deformation patterns are solved by the data fidelity constraint using the acquired on-board single 2-dimensional cine MRI. The method was evaluated using both digital extended-cardiac torso (XCAT) simulation of lung cancer patients and MRI data from 4 real liver cancer patients. The accuracy of the estimated VC-MRI was quantitatively evaluated using volume-percent-difference (VPD), center-of-mass-shift (COMS), and target tracking errors. Effects of acquisition orientation, region-of-interest (ROI) selection, patient breathing pattern change, and noise on the estimation accuracy were also evaluated. Image subtraction of ground-truth with estimated on-board VC-MRI shows fewer differences than image subtraction of ground-truth with prior image. Agreement between normalized profiles in the estimated and ground-truth VC-MRI was achieved with less than 6% error for both XCAT and patient data. Among all XCAT scenarios, the VPD between ground-truth and estimated lesion volumes was, on average, 8.43 ± 1.52% and the COMS was, on average, 0.93 ± 0.58 mm across all time steps for estimation based on the ROI region in the sagittal cine images. Matching to ROI in the sagittal view achieved better accuracy when there was substantial breathing pattern change. The technique was robust against noise levels up to SNR = 20. For
Improving plan quality for prostate volumetric-modulated arc therapy.
Wright, Katrina; Ferrari-Anderson, Janet; Barry, Tamara; Bernard, Anne; Brown, Elizabeth; Lehman, Margot; Pryor, David
2017-08-04
We critically evaluated the quality and consistency of volumetric-modulated arc therapy (VMAT) prostate planning at a single institution to quantify objective measures for plan quality and establish clear guidelines for plan evaluation and quality assurance. A retrospective analysis was conducted on 34 plans generated on the Pinnacle(3) version 9.4 and 9.8 treatment planning system to deliver 78 Gy in 39 fractions to the prostate only using VMAT. Data were collected on contoured structure volumes, overlaps and expansions, planning target volume (PTV) and organs at risk volumes and relationship, dose volume histogram, plan conformity, plan homogeneity, low-dose wash, and beam parameters. Standard descriptive statistics were used to describe the data. Despite a standardized planning protocol, we found variability was present in all steps of the planning process. Deviations from protocol contours by radiation oncologists and radiation therapists occurred in 12% and 50% of cases, respectively, and the number of optimization parameters ranged from 12 to 27 (median 17). This contributed to conflicts within the optimization process reflected by the mean composite objective value of 0.07 (range 0.01 to 0.44). Methods used to control low-intermediate dose wash were inconsistent. At the PTV rectum interface, the dose-gradient distance from the 74.1 Gy to 40 Gy isodose ranged from 0.6 cm to 2.0 cm (median 1.0 cm). Increasing collimator angle was associated with a decrease in monitor units and a single full 6 MV arc was sufficient for the majority of plans. A significant relationship was found between clinical target volume-rectum distance and rectal tolerances achieved. A linear relationship was determined between the PTV volume and volume of 40 Gy isodose. Objective values and composite objective values were useful in determining plan quality. Anatomic geometry and overlap of structures has a measurable impact on the plan quality achieved for prostate patients
Gondim Teixeira, Pedro Augusto; Cendre, Romain; Hossu, Gabriela; Leplat, Christophe; Felblinger, Jacques; Blum, Alain; Braun, Marc
2017-02-01
Assess the use of a volumetric simulation tool for the evaluation of radiology resident MR and CT interpretation skills. Forty-three participants were evaluated with software allowing the visualisation of multiple volumetric image series. There were 7 medical students, 28 residents and 8 senior radiologists among the participants. Residents were divided into two sub-groups (novice and advanced). The test was composed of 15 exercises on general radiology and lasted 45 min. Participants answered a questionnaire on their experience with the test using a 5-point Likert scale. This study was approved by the dean of the medical school and did not require ethics committee approval. The reliability of the test was good, with a Cronbach alpha value of 0.9. Test scores were significantly different in all sub-groups studied (p radiological practice (3.9 ± 0.9 on a 5-point scale) and was better than the conventional evaluation methods (4.6 ± 0.5 on a 5-point scale). This software provides a high-quality evaluation tool for the assessment of interpretation skills in radiology residents. • This tool allows volumetric image analysis of MR and CT studies. • A high-reliability test could be created with this tool. • Test scores were strongly associated with the examinee's expertise level. • Examinees positively evaluated the authenticity and usability of this tool.
DEFF Research Database (Denmark)
Kjærgaard, Thomas; Baudin, Pablo; Bykov, Dmytro
2017-01-01
We present a scalable cross-platform hybrid MPI/OpenMP/OpenACC implementation of the Divide–Expand–Consolidate (DEC) formalism with portable performance on heterogeneous HPC architectures. The Divide–Expand–Consolidate formalism is designed to reduce the steep computational scaling of conventiona...... (2440 atoms, 6800 correlated electrons, 24 440 basis functions and 91 280 auxiliary functions). This represents the largest molecular system treated at the MP2 level of theory, demonstrating an efficient removal of the scaling wall pertinent to conventional quantum many-body methods....
Institute of Scientific and Technical Information of China (English)
Ouyang Zhihua; Elsworth Derek; Sheng Jianlong
2005-01-01
The undrained change in pore fluid pressure that accompanies dike intrusion may be conveniently represented as a moving volumetric dislocation. The concept of a dilation center was developed to represent the field of undrained pressure change in a saturated linear elastic medium. Since instantaneous pore fluid pressures can develop at a considerable distance from the dislocation, monitoring the rate of pressure generation and subsequent pressure dissipation in a fully coupled manner enables certain characteristics of the resulting dislocation to be defined. The principal focus of this study is the application of dislocation-based methods to analyze the behavior of the fluid pressure response induced by intrusive dislocations in a semi-infinite space, such as dike intrusion, hydraulic fracturing and piezometer insertion. Partially drained pore pressures result from the isothermal introduction of volumetric moving pencil-like dislocations, described as analogs to moving point dislocations within a semi-infinite saturated elastic medium. To represent behavior within the halfspace, an image dislocation is positioned under the moving coordinate frame fixed to the front of the primary moving dislocation, to yield an approximate solution for pore pressure under constant fluid pressure conditions. Induced pore pressures are concisely described under a minimum set of dimensionless parameter groupings representing propagation velocity and relative geometry. Charts defining induced pore fluid pressure at a static measuring point provide a meaningful tool for determining unknown parameters in data reduction. Two intrusive events at Krafla, Iceland are examined using type-curve matching techniques. Predicted parameters agree favorably with field data.
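The image-dislocation construction can be sketched with a toy kernel: superposing a primary source and a mirror-image sink across the free surface makes the surface boundary condition hold identically. The 1/r kernel below is a hypothetical stand-in for the paper's coupled poroelastic solution:

```python
import numpy as np

# Sketch of the image method: a primary dislocation at depth zs and an
# image dislocation of opposite sign at -zs enforce the constant-pressure
# (p = 0) condition on the halfspace surface z = 0 by construction.
def pore_pressure(x, y, z, zs=1.0, strength=1.0):
    r1 = np.sqrt(x**2 + y**2 + (z - zs) ** 2)   # distance to primary
    r2 = np.sqrt(x**2 + y**2 + (z + zs) ** 2)   # distance to image
    return strength * (1.0 / r1 - 1.0 / r2)     # hypothetical 1/r kernel

print(pore_pressure(0.3, 0.4, 0.0))   # 0 on the surface by construction
```

For any point on z = 0 the two distances coincide, so the superposition cancels exactly, while points at depth feel the net pressure of the primary dislocation.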
Integral transform solution of natural convection in a square cavity with volumetric heat generation
Directory of Open Access Journals (Sweden)
C. An
2013-12-01
The generalized integral transform technique (GITT) is employed to obtain a hybrid numerical-analytical solution of natural convection in a cavity with volumetric heat generation. The hybrid nature of this approach allows for the establishment of benchmark results in the solution of non-linear partial differential equation systems, including the coupled set of heat and fluid flow equations that govern the steady natural convection problem under consideration. Through performing the GITT, the resulting transformed ODE system is then numerically solved by making use of the subroutine DBVPFD from the IMSL Library. Therefore, numerical results under user prescribed accuracy are obtained for different values of Rayleigh numbers, and the convergence behavior of the proposed eigenfunction expansions is illustrated. Critical comparisons against solutions produced by ANSYS CFX 12.0 are then conducted, which demonstrate excellent agreement. Several sets of reference results for natural convection with volumetric heat generation in a bi-dimensional square cavity are also provided for future verification of numerical results obtained by other researchers.
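A minimal 1-D analogue of the eigenfunction-expansion machinery behind the GITT: steady conduction with uniform volumetric heat generation, where the truncated series can be checked against the known exact solution. This toy problem is ours for illustration, not the coupled cavity problem solved in the paper:

```python
import numpy as np

# Steady 1-D conduction with uniform volumetric generation,
#   -T''(x) = q,  T(0) = T(1) = 0,
# expanded in the eigenfunctions sin(n*pi*x). The exact solution is
# T(x) = q*x*(1 - x)/2, so the truncation error is directly checkable.
def T_series(x, q=1.0, n_terms=200):
    n = np.arange(1, n_terms + 1)
    # integral transform of the constant source over [0, 1]
    src = q * (1 - (-1.0) ** n) / (n * np.pi)
    coeff = src / (n * np.pi) ** 2          # invert the eigenvalue
    return 2.0 * np.sum(coeff * np.sin(np.outer(x, n) * np.pi), axis=1)

x = np.array([0.25, 0.5, 0.75])
exact = x * (1 - x) / 2
print(np.max(np.abs(T_series(x) - exact)))  # small truncation error
```

The GITT applies the same transform-solve-invert pattern to the nonlinear convection equations, with the transformed ODE system handled numerically (DBVPFD) instead of in closed form.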
Directory of Open Access Journals (Sweden)
Julie Vercelloni
Recently, attempts to improve decision making in species management have focussed on uncertainties associated with modelling temporal fluctuations in populations. Reducing model uncertainty is challenging; while larger samples improve estimation of species trajectories and reduce statistical errors, they typically amplify variability in observed trajectories. In particular, traditional modelling approaches aimed at estimating population trajectories usually do not account well for nonlinearities and uncertainties associated with multi-scale observations characteristic of large spatio-temporal surveys. We present a Bayesian semi-parametric hierarchical model for simultaneously quantifying uncertainties associated with model structure and parameters, and scale-specific variability over time. We estimate uncertainty across a four-tiered spatial hierarchy of coral cover from the Great Barrier Reef. Coral variability is well described; however, our results show that, in the absence of additional model specifications, conclusions regarding coral trajectories become highly uncertain when considering multiple reefs, suggesting that management should focus more at the scale of individual reefs. The approach presented facilitates the description and estimation of population trajectories and associated uncertainties when variability cannot be attributed to specific causes and origins. We argue that our model can unlock value contained in large-scale datasets, provide guidance for understanding sources of uncertainty, and support better informed decision making.
Vercelloni, Julie; Caley, M Julian; Kayal, Mohsen; Low-Choy, Samantha; Mengersen, Kerrie
2014-01-01
Space-Time Transfinite Interpolation of Volumetric Material Properties.
Sanchez, Mathieu; Fryazinov, Oleg; Adzhiev, Valery; Comninos, Peter; Pasko, Alexander
2015-02-01
The paper presents a novel technique based on an extension of the general mathematical method of transfinite interpolation to solve an actual problem in the area of heterogeneous volume modelling. It deals with time-dependent changes to volumetric material properties (material density, colour, and others) as a transformation of the volumetric material distributions in space-time accompanying geometric shape transformations such as metamorphosis. The main idea is to represent the geometry of both objects by scalar fields with distance properties, to establish in a higher-dimensional space a time gap during which the geometric transformation takes place, and to use these scalar fields to apply the new space-time transfinite interpolation to volumetric material attributes within this time gap. The proposed solution is analytical in nature, does not require heavy numerical computations, and can be used in real-time applications. Applications of this technique also include texturing and displacement mapping of time-variant surfaces, and parametric design of volumetric microstructures.
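Stripped of the distance-field machinery, the core idea of interpolating an attribute across a time gap can be sketched as below. The paper's transfinite scheme replaces this purely temporal weight with one built from the two objects' distance fields; the linear blend here is a deliberately simplified stand-in:

```python
# Simplified sketch: during a time gap [t1, t2] in which the shape
# metamorphoses, a volumetric attribute at a point is interpolated
# between the source attribute a1 and target attribute a2. The
# space-time transfinite scheme generalises the time weight s using the
# objects' scalar distance fields; here s depends on time only.
def blend_attribute(a1, a2, t, t1=0.0, t2=1.0):
    s = min(max((t - t1) / (t2 - t1), 0.0), 1.0)  # clamp into the gap
    return (1.0 - s) * a1 + s * a2

print(blend_attribute(0.0, 10.0, 0.5))  # 5.0, midway through the gap
```

Because the blend is a closed-form expression evaluated per point, it shares the analytical, computation-light character the abstract claims for the full method.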
In Vivo Real Time Volumetric Synthetic Aperture Ultrasound Imaging
DEFF Research Database (Denmark)
Bouzari, Hamed; Rasmussen, Morten Fischer; Brandt, Andreas Hjelm;
2015-01-01
Synthetic aperture (SA) imaging can be used to achieve real-time volumetric ultrasound imaging using 2-D array transducers. The sensitivity of SA imaging is improved by maximizing the acoustic output, but one must consider the limitations of an ultrasound system, both technical and biological...
Automatic segmentation of pulmonary segments from volumetric chest CT scans.
Rikxoort, E.M. van; Hoop, B. de; Vorst, S. van de; Prokop, M.; Ginneken, B. van
2009-01-01
Automated extraction of pulmonary anatomy provides a foundation for computerized analysis of computed tomography (CT) scans of the chest. A completely automatic method is presented to segment the lungs, lobes and pulmonary segments from volumetric CT chest scans. The method starts with lung segmenta
Volumetric T-spline Construction Using Boolean Operations
2013-07-01
Volumetric motion quantification by 3D tissue phase mapped CMR
Directory of Open Access Journals (Sweden)
Lutz Anja
2012-10-01
Background: The objective of this study was the quantification of myocardial motion from 3D tissue phase mapped (TPM) CMR. Recent work on myocardial motion quantification by TPM has focussed on multi-slice 2D acquisitions, thus excluding motion information from large regions of the left ventricle. Volumetric motion assessment appears an important next step towards the understanding of volumetric myocardial motion and hence may further improve diagnosis and treatment in patients with myocardial motion abnormalities. Methods: Volumetric motion quantification of the complete left ventricle was performed in 12 healthy volunteers and two patients applying a black-blood 3D TPM sequence. The resulting motion field was analysed for motion pattern differences between apical and basal locations as well as for asynchronous motion patterns between different myocardial segments in one or more slices. Motion quantification included velocity, torsion, rotation angle and strain-derived parameters. Results: All investigated motion quantification parameters could be calculated from the 3D TPM data. Parameters quantifying hypokinetic or asynchronous motion demonstrated differences between motion-impaired and healthy myocardium. Conclusions: 3D TPM enables the gapless volumetric quantification of motion abnormalities of the left ventricle, which can be applied in future applications as additional information to provide a more detailed analysis of left ventricular function.
Video-rate volumetric optical coherence tomography-based microangiography
Baran, Utku; Wei, Wei; Xu, Jingjiang; Qi, Xiaoli; Davis, Wyatt O.; Wang, Ruikang K.
2016-04-01
Video-rate volumetric optical coherence tomography (vOCT) is relatively young in the field of OCT imaging but has great potential in biomedical applications. Due to the recent development of MHz-range swept laser sources, vOCT has started to gain attention in the community. Here, we report the first in vivo video-rate volumetric OCT-based microangiography (vOMAG) system, built by integrating an 18-kHz resonant microelectromechanical system (MEMS) mirror with a 1.6-MHz FDML swept source operating at ~1.3 μm wavelength. Because the MEMS scanner can offer an effective B-frame rate of 36 kHz, we are able to engineer vOMAG with a video rate up to 25 Hz. This system was utilized for real-time volumetric in vivo visualization of cerebral microvasculature in mice. Moreover, we monitored the blood perfusion dynamics during stimulation within the mouse ear in vivo. We also discuss this system's limitations. Prospective MEMS-enabled OCT probes with real-time volumetric functional imaging capability can have a significant impact on endoscopic imaging and image-guided surgery applications.
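The reported rates are mutually consistent: a 36-kHz effective B-frame rate at a 25-Hz volume rate implies about 1440 B-frames per volume. The per-volume frame count is our inference, not a figure stated in the abstract:

```python
# Consistency check on the reported scanner rates: volume rate equals
# B-frame rate divided by the number of B-frames in each volume.
b_frame_rate_hz = 36_000     # effective B-frame rate from the MEMS mirror
volume_rate_hz = 25          # reported video rate
frames_per_volume = b_frame_rate_hz // volume_rate_hz
print(frames_per_volume)     # 1440
```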
Arutyunov, V. S.; Shmelev, V. M.; Shapovalova, O. V.; Rakhmetov, A. N.; Strekova, L. N.
2013-03-01
A new type of syngas generator, based on the partial conversion of natural gas (methane) or heavier hydrocarbons in volumetric permeable matrix burners under conditions of locked infrared (IR) radiation, is suggested as a highly productive, adaptable, and relatively simple route to syngas and hydrogen production for various small-scale applications, including enhancing the performance characteristics of power engines.
Karloff, Howard
1991-01-01
To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...
Analysis of Changing Swarm Rate using Volumetric Strain
Kumazawa, T.; Ogata, Y.; Kimura, K.; Maeda, K.; Kobayashi, A.
2015-12-01
Near the eastern coast of the Izu peninsula lies an active submarine volcanic region of Japan, where magma intrusions have been observed many times. Forecasting earthquake swarm activity and eruptions is a serious concern, particularly in the nearby hot spring resort areas. It is well known that the temporal durations of the swarm activities correlate with early volumetric strain changes at an observation station about 20 km away. The Earthquake Research Committee (2010) therefore investigated several empirical statistical relations to predict the size of swarm activity. Here we examined the background seismicity rate changes during these swarm periods using the non-stationary ETAS model (Kumazawa and Ogata, 2013, 2014) and found the following. The modified volumetric strain data, with the effects of earth tides, precipitation, and coseismic jumps removed, show significantly higher cross-correlations with the estimated background rates of the ETAS model than with the swarm rate changes. Specifically, the background seismicity rate synchronizes more clearly with the strain change, with lags of around half a day. These relations suggest an enhanced prediction of earthquakes in this region using volumetric strain measurements. Hence we propose an extended ETAS model in which the background rate is modulated by the volumetric strain data. We also found that the response function to the strain data can be well approximated by exponential functions with the same decay rate, but with intercepts inversely proportional to the distance between the volumetric strain-meter and the onset location of the swarm. Numerical results from the proposed model show consistent outcomes for the various major swarms in this region.
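The strain-modulated background rate in such an extended ETAS model can be sketched as a discrete convolution of the strain record with an exponential response kernel. This is a minimal illustration only; the function and parameter names are hypothetical, and the actual model estimation follows Kumazawa and Ogata:

```python
import numpy as np

def strain_modulated_background(mu0, strain, dt, amplitude, decay):
    """Background rate mu(t) = mu0 + (g * strain)(t), with an exponential
    response kernel g(u) = amplitude * exp(-decay * u), discretized on a
    uniform grid of spacing dt (illustrative sketch)."""
    t = np.arange(len(strain)) * dt
    kernel = amplitude * np.exp(-decay * t)
    # Causal convolution of the (tide/precipitation-corrected) strain record.
    return mu0 + np.convolve(strain, kernel)[: len(strain)] * dt
```

With a unit impulse of strain, the output simply reproduces the kernel shifted up by the baseline rate mu0.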
Chengbin Deng
2015-01-01
As urbanized areas are an important indicator of anthropogenic impacts on the Earth's surface, it is of great necessity to map them accurately at large scales for various science and policy applications. Although spectral mixture analysis (SMA) can provide spatial distribution and quantitative fractions for better representations of urban areas, this technique is rarely explored with 1-km resolution imagery. This is due mainly to the absence of image endmembers associated with the mixed pixel problem. Con...
Evaluation of feature-based 3-d registration of probabilistic volumetric scenes
Restrepo, Maria I.; Ulusoy, Ali O.; Mundy, Joseph L.
2014-12-01
Automatic estimation of the world surfaces from aerial images has seen much attention and progress in recent years. Among current modeling technologies, probabilistic volumetric models (PVMs) have evolved as an alternative representation that can learn geometry and appearance in a dense and probabilistic manner. Recent progress, in terms of storage and speed, achieved in the area of volumetric modeling, opens the opportunity to develop new frameworks that make use of the PVM to pursue the ultimate goal of creating an entire map of the earth, where one can reason about the semantics and dynamics of the 3-d world. Aligning 3-d models collected at different time-instances constitutes an important step for successful fusion of large spatio-temporal information. This paper evaluates how effectively probabilistic volumetric models can be aligned using robust feature-matching techniques, while considering different scenarios that reflect the kind of variability observed across aerial video collections from different time instances. More precisely, this work investigates variability in terms of discretization, resolution and sampling density, errors in the camera orientation, and changes in illumination and geographic characteristics. All results are given for large-scale, outdoor sites. In order to facilitate the comparison of the registration performance of PVMs to that of other 3-d reconstruction techniques, the registration pipeline is also carried out using Patch-based Multi-View Stereo (PMVS) algorithm. Registration performance is similar for scenes that have favorable geometry and the appearance characteristics necessary for high quality reconstruction. In scenes containing trees, such as a park, or many buildings, such as a city center, registration performance is significantly more accurate when using the PVM.
Cerebrospinal fluid volumetric MRI mapping as a simple measurement for evaluating brain atrophy.
De Vis, J B; Zwanenburg, J J; van der Kleij, L A; Spijkerman, J M; Biessels, G J; Hendrikse, J; Petersen, E T
2016-05-01
To assess whether volumetric cerebrospinal fluid (CSF) MRI can be used as a surrogate for brain atrophy assessment and to evaluate how the T2 of the CSF relates to brain atrophy. Twenty-eight subjects [mean age 64 (sd 2) years] were included; T1-weighted and CSF MRI were performed. The first echo data of the CSF MRI sequence was used to obtain intracranial volume, CSF partial volume was measured voxel-wise to obtain CSF volume (VCSF), and the T2 of CSF (T2,CSF) was calculated. The correlation between VCSF/T2,CSF and brain atrophy scores [global cortical atrophy (GCA) and medial temporal lobe atrophy (MTA)] was evaluated. Relative total, peripheral subarachnoidal, and ventricular VCSF increased significantly with increased scores on the GCA and MTA (R = 0.83, 0.78 and 0.78 and R = 0.72, 0.62 and 0.86). Total, peripheral subarachnoidal, and ventricular T2 of the CSF increased significantly with higher scores on the GCA and MTA (R = 0.72, 0.70 and 0.49 and R = 0.60, 0.57 and 0.41). A fast, fully automated CSF MRI volumetric sequence is an alternative to qualitative atrophy scales. The T2 of the CSF is related to brain atrophy and could thus be a marker of neurodegenerative disease. • A 1:11 min CSF MRI volumetric sequence can evaluate brain atrophy. • CSF MRI provides accurate atrophy assessment without partial volume effects. • CSF MRI data can be processed quickly without user interaction. • The measured T2 of the CSF is related to brain atrophy.
Datta, D P
2003-01-01
A new class of finitely differentiable scale-free solutions to the simplest class of ordinary differential equations is presented. Consequently, the real number set gets replaced by an extended physical set, each element of which is endowed with an equivalence class of infinitesimally separated neighbours in the form of random fluctuations. We show how a sense of time and evolution is intrinsically defined by the infinite continued fraction of the golden mean irrational number (√5 − 1)/2, which plays a key role in this extended SL(2,R) formalism of calculus, analogous to El Naschie's theory of the E^(∞) spacetime manifold. Time may thereby undergo random inversions generating well-defined random scales, thus allowing a dynamical system to evolve self-similarly over the set of multiple scales. The late-time stochastic fluctuations of a dynamical system enjoy the generic 1/f spectrum. A universal form of the related probability density is also derived. We prove that the golden mea...
DEFF Research Database (Denmark)
Hashemi, Fariborz; Tahir, Paridah Md; Madsen, Bo
2015-01-01
In the present study, six different combinations of pultruded hybrid kenaf/glass composites were fabricated. The number of kenaf and glass rovings was specifically selected to ensure constant local fiber volume fractions in the composites. The volumetric composition of the composites was determined...... by using a gravimetrically based method. Optical microscopy was used to determine the location of voids. The short-beam test method was used to determine the interlaminar shear strength of the composites, and the failure mode was observed. It was found that the void volume fraction of the composites...... was increased as a function of the kenaf fiber volume fraction. A linear relationship with high correlation (R2=0.95) was established between the two volume fractions. Three types of voids were observed in the core region of the composites (lumen voids, interface voids and impregnation voids). The failure...
Papp, Dávid
2013-01-01
We propose a novel optimization model for volumetric modulated arc therapy (VMAT) planning that directly optimizes deliverable leaf trajectories in the treatment plan optimization problem, and eliminates the need for a separate arc-sequencing step. In this model, a 360-degree arc is divided into a given number of arc segments in which the leaves move unidirectionally. This facilitates an algorithm that determines the optimal piecewise linear leaf trajectories for each arc segment, which are deliverable in a given treatment time. Multi-leaf collimator (MLC) constraints, including maximum leaf speed and interdigitation, are accounted for explicitly. The algorithm is customized to allow for VMAT delivery using constant gantry speed and dose rate; however, it generalizes to variable gantry speed if beneficial. We demonstrate the method for three different tumor sites: a head-and-neck case, a prostate case, and a paraspinal case. For that purpose, we first obtain a reference plan for intensity modulated...
Energy Technology Data Exchange (ETDEWEB)
Tenhover, M.; Biernacki, J. [Carborundum Co., Niagara Falls, NY (United States); Schatz, K.; Ko, F. [Advanced Product Development, Inc., Bristol, PA (United States)
1995-08-01
In order to exploit the superior thermomechanical properties of the VLS fibril, the feasibility of scaled-up production of the SiC fibril is demonstrated in this study. Through time series study and computer simulation, the parameters affecting the growth process and properties of the fibrils were examined. To facilitate translation of the superior mechanical properties into higher level preform structures, conventional and unconventional processing methods were evaluated. As revealed by scanning electron microscopic examination and X-ray diffractometry, high level alignment of the fibrils was achieved by the wet-laid process.
Directory of Open Access Journals (Sweden)
Haoyi Wu
Electrically small antennas (ESAs) are becoming one of the key components in compact wireless devices for telecommunications, defence, and aerospace systems, especially spherical ones, whose geometric layout more closely approaches Chu's limit, yielding significant bandwidth improvements relative to linear and planar counterparts. Yet broad application of volumetric ESAs is still hindered because low-cost fabrication has remained a tremendous challenge. Here we report a state-of-the-art technology to transfer electrically conductive composites (ECCs) from a planar mould to a volumetric thermoplastic substrate using pad-printing technology without pattern distortion, benefiting from the excellent properties of the ECCs as well as the printing-calibration method that we developed. The antenna samples prepared in this way meet the stringent requirements of an ESA (ka is as low as 0.32 and the antenna efficiency is as high as 57%), suggesting that volumetric electronic components, i.e. antennas, can be produced in such a simple, green, and cost-effective way. This work can be of interest for the development of green and high-performance wireless communication devices.
Wu, Haoyi; Chiang, Sum Wai; Yang, Cheng; Lin, Ziyin; Liu, Jingping; Moon, Kyoung-Sik; Kang, Feiyu; Li, Bo; Wong, Ching Ping
2015-01-01
Electrically small antennas (ESAs) are becoming one of the key components in compact wireless devices for telecommunications, defence, and aerospace systems, especially spherical ones, whose geometric layout more closely approaches Chu's limit, yielding significant bandwidth improvements relative to linear and planar counterparts. Yet broad application of volumetric ESAs is still hindered because low-cost fabrication has remained a tremendous challenge. Here we report a state-of-the-art technology to transfer electrically conductive composites (ECCs) from a planar mould to a volumetric thermoplastic substrate using pad-printing technology without pattern distortion, benefiting from the excellent properties of the ECCs as well as the printing-calibration method that we developed. The antenna samples prepared in this way meet the stringent requirements of an ESA (ka is as low as 0.32 and the antenna efficiency is as high as 57%), suggesting that volumetric electronic components, i.e. antennas, can be produced in such a simple, green, and cost-effective way. This work can be of interest for the development of green and high-performance wireless communication devices.
Bagheri, M.; Rezania, M.; Nezhad, M. M.
2015-09-01
Clayey soils tend to undergo continuous compression with time, even after excess pore pressures have substantially dissipated. The effect of time on the deformation and mechanical response of these soft soils has been the subject of numerous studies. Based on these studies, the observed time-dependent behaviour of clays is mainly related to the evolution of soil volume and strength characteristics with time, which are classified as creep and/or relaxation properties of the soil. Apart from the many empirical relationships proposed in the literature to capture the rheological behaviour of clays, a number of viscid constitutive relationships have also been developed which have more attractive theoretical attributes. A particular feature of these viscid models is that their creep parameters often have a clear physical meaning (e.g. the coefficient of secondary compression, Cα). Sometimes with these models, a parameter referred to as the initial/reference volumetric strain rate has also been alluded to as a model parameter. However, unlike Cα, its determination and its variation with stress level are not properly documented in the literature. In an attempt to better understand this parameter, this paper presents an experimental investigation of the reference volumetric strain rate in reconstituted clay specimens. A long-term triaxial creep test, at different shear stress levels and different strain rates, was performed on a clay specimen, whereby the volumetric strain rate was measured. The obtained results indicated the stress-level dependency and non-linear variation of the reference volumetric strain rate with time.
Bourlès, Henri
2013-01-01
Linear systems have all the necessary elements (modeling, identification, analysis and control), from an educational point of view, to help us understand the discipline of automation and apply it efficiently. This book is progressive and organized in such a way that different levels of readership are possible. It is addressed both to beginners and those with a good understanding of automation wishing to enhance their knowledge on the subject. The theory is rigorously developed and illustrated by numerous examples which can be reproduced with the help of appropriate computation software. 60 exe
Cerebrospinal fluid volumetric MRI mapping as a simple measurement for evaluating brain atrophy
Energy Technology Data Exchange (ETDEWEB)
Vis, J.B. de; Zwanenburg, J.J.; Kleij, L.A. van der; Spijkerman, J.M.; Hendrikse, J. [University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Biessels, G.J. [University Medical Center Utrecht, Department of Neurology, Brain Center Rudolf Magnus, Utrecht (Netherlands); Petersen, E.T. [University Medical Center Utrecht, Department of Radiology, Utrecht (Netherlands); Hvidovre Hospital, Danish Research Centre for Magnetic Resonance, Hvidovre (Denmark)
2016-05-15
To assess whether volumetric cerebrospinal fluid (CSF) MRI can be used as a surrogate for brain atrophy assessment and to evaluate how the T2 of the CSF relates to brain atrophy. Twenty-eight subjects [mean age 64 (sd 2) years] were included; T1-weighted and CSF MRI were performed. The first echo data of the CSF MRI sequence was used to obtain intracranial volume, CSF partial volume was measured voxel-wise to obtain CSF volume (VCSF) and the T2 of CSF (T2,CSF) was calculated. The correlation between VCSF/T2,CSF and brain atrophy scores [global cortical atrophy (GCA) and medial temporal lobe atrophy (MTA)] was evaluated. Relative total, peripheral subarachnoidal, and ventricular VCSF increased significantly with increased scores on the GCA and MTA (R = 0.83, 0.78 and 0.78 and R = 0.72, 0.62 and 0.86). Total, peripheral subarachnoidal, and ventricular T2 of the CSF increased significantly with higher scores on the GCA and MTA (R = 0.72, 0.70 and 0.49 and R = 0.60, 0.57 and 0.41). A fast, fully automated CSF MRI volumetric sequence is an alternative to qualitative atrophy scales. The T2 of the CSF is related to brain atrophy and could thus be a marker of neurodegenerative disease. (orig.)
Redesigning linear algebra algorithms
Energy Technology Data Exchange (ETDEWEB)
Dongarra, J.J.
1983-01-01
Many of the standard algorithms in linear algebra as implemented in FORTRAN do not achieve maximum performance on today's large-scale vector computers. The author examines the problem and constructs alternative formulations of algorithms that do not lose the clarity of the original algorithm or sacrifice the FORTRAN portable environment, but do gain the performance attainable on these supercomputers. The resulting implementation not only performs well on vector computers but also increases performance on conventional sequential computers. 13 references.
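The kind of reformulation Dongarra describes can be illustrated with the classic matrix-vector product: the row-oriented (inner-product) form and the column-oriented (SAXPY-style) form compute the same result, but the column form streams whole contiguous columns, the access pattern vector machines favor. A sketch in Python/NumPy rather than FORTRAN; the function names are illustrative:

```python
import numpy as np

def matvec_rowwise(A, x):
    # Inner-product form: one dot product per row of A.
    m, _ = A.shape
    y = np.zeros(m)
    for i in range(m):
        y[i] = A[i, :] @ x
    return y

def matvec_columnwise(A, x):
    # SAXPY form: accumulate scaled columns of A. Each step updates the
    # whole result vector with one contiguous column, keeping long
    # vectors in the pipeline on a vector machine.
    m, n = A.shape
    y = np.zeros(m)
    for j in range(n):
        y += x[j] * A[:, j]
    return y
```

Both variants agree with the built-in product `A @ x`; only the memory access order differs.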
Pulse sequence for dynamic volumetric imaging of hyperpolarized metabolic products
Cunningham, Charles H.; Chen, Albert P.; Lustig, Michael; Hargreaves, Brian A.; Lupo, Janine; Xu, Duan; Kurhanewicz, John; Hurd, Ralph E.; Pauly, John M.; Nelson, Sarah J.; Vigneron, Daniel B.
2008-07-01
Dynamic nuclear polarization and dissolution of a 13C-labeled substrate enables the dynamic imaging of cellular metabolism. Spectroscopic information is typically acquired, making the acquisition of dynamic volumetric data a challenge. To enable rapid volumetric imaging, a spectral-spatial excitation pulse was designed to excite a single line of the carbon spectrum. With only a single resonance present in the signal, an echo-planar readout trajectory could be used to resolve spatial information, giving full volume coverage of 32 × 32 × 16 voxels every 3.5 s. This high frame rate was used to measure the different lactate dynamics in different tissues in a normal rat model and a mouse model of prostate cancer.
Nonrigid registration of volumetric images using ranked order statistics
DEFF Research Database (Denmark)
Tennakoon, Ruwan; Bab-Hadiashar, Alireza; Cao, Zhenwei
2014-01-01
Non-rigid image registration techniques using intensity based similarity measures are widely used in medical imaging applications. Due to high computational complexities of these techniques, particularly for volumetric images, finding appropriate registration methods to both reduce the computation...... burden and increase the registration accuracy has become an intensive area of research. In this paper we propose a fast and accurate non-rigid registration method for intra-modality volumetric images. Our approach exploits the information provided by an order statistics based segmentation method, to find...... the important regions for registration and use an appropriate sampling scheme to target those areas and reduce the registration computation time. A unique advantage of the proposed method is its ability to identify the point of diminishing returns and stop the registration process. Our experiments...
Volumetric characterization of delamination fields via angle longitudinal wave ultrasound
Wertz, John; Wallentine, Sarah; Welter, John; Dierken, Josiah; Aldrin, John
2017-02-01
The volumetric characterization of delaminations necessarily precedes rigorous composite damage progression modeling. Yet, inspection of composite structures for subsurface damage remains largely focused on detection, resulting in a capability gap. In response to this need, angle longitudinal wave ultrasound was employed to characterize a composite surrogate containing a simulated three-dimensional delamination field with distinct regions of occluded features (shadow regions). Simple analytical models of the specimen were developed to guide subsequent experimentation through identification of optimal scanning parameters. The ensuing experiments provided visual evidence of the complete delamination field, including indications of features within the shadow regions. The results of this study demonstrate proof-of-principle for the use of angle longitudinal wave ultrasonic inspection for volumetric characterization of three-dimensional delamination fields. Furthermore, the techniques developed herein form the foundation of succeeding efforts to characterize impact delaminations within inhomogeneous laminar materials such as polymer matrix composites.
Magnetic Resonance Image Segmentation and its Volumetric Measurement
Directory of Open Access Journals (Sweden)
Rahul R. Ambalkar
2013-02-01
Image processing techniques make it possible to extract meaningful information from medical images. Magnetic resonance (MR) imaging has been widely applied in biological research and diagnostics because of its excellent soft tissue contrast, non-invasive character, high spatial resolution, and easy slice selection at any orientation. MRI-based brain volumetry is concerned with the analysis of the volumes and shapes of the structural components of the human brain. It also provides a criterion by which we recognize the presence of degenerative diseases and characterize their rates of progression, making diagnosis and treatment easier. In this paper we propose an automated method for volumetric measurement of magnetic resonance images and use the Self-Organizing Map (SOM) clustering method for their segmentation. We used an MRI data set of 61 slices of 256×256 pixels in the DICOM standard format.
Two-dimensional random arrays for real time volumetric imaging
DEFF Research Database (Denmark)
Davidsen, Richard E.; Jensen, Jørgen Arendt; Smith, Stephen W.
1994-01-01
Two-dimensional arrays are necessary for a variety of ultrasonic imaging techniques, including elevation focusing, 2-D phase aberration correction, and real time volumetric imaging. In order to reduce system cost and complexity, sparse 2-D arrays have been considered with element geometries...... real time volumetric imaging system, which employs a wide transmit beam and receive mode parallel processing to increase image frame rate. Depth-of-field comparisons were made from simulated on-axis and off-axis beamplots at ranges from 30 to 160 mm for both coaxial and offset transmit and receive...... selected ad hoc, by algorithm, or by random process. Two random sparse array geometries and a sparse array with a Mills cross receive pattern were simulated and compared to a fully sampled aperture with the same overall dimensions. The sparse arrays were designed to the constraints of the Duke University...
COMPARISON OF VOLUMETRIC REGISTRATION ALGORITHMS FOR TENSOR-BASED MORPHOMETRY
Villalon, Julio; Joshi, Anand A.; Toga, Arthur W.; Thompson, Paul M.
2015-01-01
Nonlinear registration of brain MRI scans is often used to quantify morphological differences associated with disease or genetic factors. Recently, surface-guided fully 3D volumetric registrations have been developed that combine intensity-guided volume registrations with cortical surface constraints. In this paper, we compare one such algorithm to two popular high-dimensional volumetric registration methods: large-deformation viscous fluid registration, formulated in a Riemannian framework, and the diffeomorphic “Demons” algorithm. We performed an objective morphometric comparison, by using a large MRI dataset from 340 young adult twin subjects to examine 3D patterns of correlations in anatomical volumes. Surface-constrained volume registration gave greater effect sizes for detecting morphometric associations near the cortex, while the other two approaches gave greater effect sizes subcortically. These findings suggest novel ways to combine the advantages of multiple methods in the future. PMID:26925198
Volumetric 3D display using a DLP projection engine
Geng, Jason
2012-03-01
In this article, we describe a volumetric 3D display system based on a high-speed DLP™ (Digital Light Processing) projection engine. Existing two-dimensional (2D) flat screen displays often lead to ambiguity and confusion in high-dimensional data/graphics presentation due to the lack of true depth cues. Even with the help of powerful 3D rendering software, three-dimensional (3D) objects displayed on a 2D flat screen may still fail to provide spatial relationships or depth information correctly and effectively. Essentially, 2D displays have to rely upon the capability of the human brain to piece together a 3D representation from 2D images. Despite the impressive mental capability of the human visual system, its visual perception is not reliable if certain depth cues are missing. In contrast, the volumetric 3D display technologies discussed in this article are capable of displaying 3D volumetric images in true 3D space. Each "voxel" in a 3D image (analogous to a pixel in a 2D image) is located physically at the spatial position where it is supposed to be, and emits light from that position in all directions to form a real 3D image in 3D space. Such a volumetric 3D display provides both physiological and psychological depth cues to the human visual system to truthfully perceive 3D objects. It yields a realistic spatial representation of 3D objects and simplifies our understanding of the complexity of 3D objects and the spatial relationships among them.
Using surface heave to estimate reservoir volumetric strain
Energy Technology Data Exchange (ETDEWEB)
Nanayakkara, A.S.; Wong, R.C.K. [Calgary Univ., AB (Canada)
2008-07-01
This paper presented a newly developed numerical tool for estimating reservoir volumetric strain distribution using surface vertical displacements and solving an inverse problem. Waterflooding, steam injection, carbon dioxide sequestration and aquifer storage recovery are among the subsurface injection operations that are responsible for reservoir dilations which propagate to the surrounding formations and extend to the surface resulting in surface heaves. Global positioning systems and surface tiltmeters are often used to measure the characteristics of these surface heaves and to derive valuable information regarding reservoir deformation and flow characteristics. In this study, Tikhonov regularization techniques were adopted to solve the ill-posed inversion problem commonly found in standard inversion techniques such as Gaussian elimination and least squares methods. Reservoir permeability was then estimated by inverting the volumetric strain distribution. Results of the newly developed numerical tool were compared with results from fully-coupled finite element simulation of fluid injection problems. The reservoir volumetric strain distribution was successfully estimated along with an approximate value for reservoir permeability.
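Zeroth-order Tikhonov regularization of a linear heave-to-strain problem can be sketched as follows. The kernel G mapping reservoir volumetric strain to surface uplift is problem-specific, so this minimal synthetic example is an assumption for illustration, not the paper's implementation:

```python
import numpy as np

def tikhonov_solve(G, d, lam):
    """Stable solution of the ill-posed linear system d = G m:
    minimize ||G m - d||^2 + lam^2 ||m||^2, via the normal equations."""
    n = G.shape[1]
    return np.linalg.solve(G.T @ G + lam**2 * np.eye(n), G.T @ d)

# Synthetic example: a Gaussian smoothing kernel (stand-in for the
# strain-to-heave influence matrix) and a known strain distribution.
rng = np.random.default_rng(0)
G = np.exp(-0.5 * (np.subtract.outer(np.arange(50), np.arange(50)) / 5.0) ** 2)
m_true = np.zeros(50)
m_true[20:30] = 1.0                       # dilating reservoir zone
d = G @ m_true + 1e-3 * rng.standard_normal(50)  # noisy "surface heave" data
m_est = tikhonov_solve(G, d, lam=0.1)
```

The regularization parameter lam trades data fit against solution norm; without it, methods like Gaussian elimination or plain least squares amplify the measurement noise, which is the ill-posedness the paper addresses.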
Eltrass, A.; Mahmoudian, A.; Scales, W. A.; de Larquier, S.; Ruohoniemi, J. M.; Baker, J. B. H.; Greenwald, R. A.; Erickson, P. J.
2014-06-01
Previous joint measurements by the Millstone Hill incoherent scatter radar and the Super Dual Auroral Radar Network (SuperDARN) HF radar located at Wallops Island, Virginia, have identified the presence of opposed meridional electron density and temperature gradients in the region of decameter-scale electron density irregularities that have been proposed to be responsible for low-velocity Sub-Auroral Ionospheric Scatter observed by SuperDARN radars. The temperature gradient instability (TGI) and the gradient drift instability (GDI) have been extended into the kinetic regime appropriate for SuperDARN radar frequencies and investigated as the causes of these irregularities. A time series for the growth rate of both TGI and GDI has been developed for midlatitude ionospheric irregularities observed by SuperDARN (Greenwald et al., 2006). The time series is computed for both perpendicular and meridional density and temperature gradients. This growth rate comparison shows that the TGI is the most likely generation mechanism for the irregularities observed during the experiment and that the GDI is expected to play a relatively minor role in irregularity generation.
Turkdogan-Aydinol, F Ilter; Yetilmezsoy, Kaan
2010-10-15
A MIMO (multiple-input, multiple-output) fuzzy-logic-based model was developed to predict biogas and methane production rates in a pilot-scale 90-L mesophilic up-flow anaerobic sludge blanket (UASB) reactor treating molasses wastewater. Five input variables, namely the volumetric organic loading rate (OLR), the volumetric total chemical oxygen demand (TCOD) removal rate (R(V)), influent alkalinity, influent pH, and effluent pH, were fuzzified using an artificial-intelligence-based approach. Trapezoidal membership functions with eight levels were constructed for the fuzzy subsets, and a Mamdani-type fuzzy inference system was used to implement a total of 134 rules in the IF-THEN format. The product (prod) and the centre of gravity (COG, centroid) methods were employed as the inference operator and the defuzzification method, respectively. The fuzzy-logic predictions were compared with the outputs of two exponential non-linear regression models derived in this study. The UASB reactor showed remarkable performance in the treatment of molasses wastewater, with an average TCOD removal efficiency of 93 (+/-3)% and an average volumetric TCOD removal rate of 6.87 (+/-3.93) kg TCOD(removed)/m(3)-day. The findings of this study clearly indicated that, compared to the non-linear regression models, the proposed MIMO fuzzy-logic-based model produced smaller deviations and exhibited superior predictive performance in forecasting both biogas and methane production rates, with satisfactory determination coefficients over 0.98.
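The core of such a Mamdani-type system, trapezoidal membership, product (prod) implication, and centre-of-gravity (COG) defuzzification, can be sketched in a few lines. This is a generic single-output illustration; the actual 134-rule base, universes, and membership levels come from the study itself:

```python
import numpy as np

def trapmf(x, a, b, c, d):
    # Trapezoidal membership: rises on [a, b], flat at 1 on [b, c], falls on [c, d].
    return np.clip(np.minimum((x - a) / (b - a), (d - x) / (d - c)), 0.0, 1.0)

def mamdani_cog(rule_strengths, out_universe, out_sets):
    """Scale each rule's consequent set by its firing strength (prod
    implication), aggregate by max, and defuzzify by centre of gravity
    on the discretized output universe."""
    agg = np.zeros_like(out_universe)
    for w, (a, b, c, d) in zip(rule_strengths, out_sets):
        agg = np.maximum(agg, w * trapmf(out_universe, a, b, c, d))
    return float((agg * out_universe).sum() / agg.sum())
```

For a single fully fired rule with a symmetric trapezoid, the COG lands at the trapezoid's centre, a quick sanity check on the defuzzifier.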
Ultra-fast treatment plan optimization for volumetric modulated arc therapy (VMAT)
Men, Chunhua; Jia, Xun; Jiang, Steve B
2010-01-01
Purpose: To develop a novel aperture-based algorithm for volumetric modulated arc therapy (VMAT) treatment plan optimization with high quality and high efficiency. Methods: The VMAT optimization problem is formulated as a large-scale convex programming problem solved by a column generation approach. We consider a cost function consisting of two terms: the first enforces a desired dose distribution, while the second guarantees a smooth dose rate variation between successive gantry angles. At each iteration of the column generation method, a subproblem is first solved to generate one more deliverable MLC aperture, the one that potentially decreases the cost function most effectively. A subsequent master problem is then solved to determine the dose rates at all currently generated apertures by minimizing the cost function. Iterating this algorithm yields a set of deliverable apertures, as well as dose rates, at all gantry angles. Results: The algorithm was preliminarily tested on five prostate and five head-a...
Cerebrospinal fluid volumetric MRI mapping as a simple measurement for evaluating brain atrophy
DEFF Research Database (Denmark)
De Vis, J B; Zwanenburg, J J; van der Kleij, L A;
2016-01-01
) and medial temporal lobe atrophy (MTA)] was evaluated. RESULTS: Relative total, peripheral subarachnoidal, and ventricular VCSF increased significantly with increased scores on the GCA and MTA (R = 0.83, 0.78 and 0.78 and R = 0.72, 0.62 and 0.86). Total, peripheral subarachnoidal, and ventricular T2...... of the CSF increased significantly with higher scores on the GCA and MTA (R = 0.72, 0.70 and 0.49 and R = 0.60, 0.57 and 0.41). CONCLUSION: A fast, fully automated CSF MRI volumetric sequence is an alternative to qualitative atrophy scales. The T2 of the CSF is related to brain atrophy and could thus...
Johnson, E. D.; Cowen, E. A.
2016-03-01
Current methods employed by the United States Geological Survey (USGS) to measure river discharge are manpower intensive, expensive, and, during high flow events, require field personnel to work in dangerous conditions. Indirect methods of estimating river discharge, which involve the use of extrapolated rating curves, can result in gross error during high flow conditions due to extrapolation error and/or bathymetric change. Our goal is to develop a remote method of monitoring volumetric discharge that reduces costs at the same or better accuracy than current methods, while minimizing risk to field technicians. We report the results of Large-Scale Particle Image Velocimetry (LSPIV) and Acoustic Doppler Velocimetry (ADV) measurements conducted in a wide open channel under a range of flow conditions, i.e., channel aspect ratio (B/H = 6.6-31.9), Reynolds number (ReH = 4,950-73,800), and Froude number (Fr = 0.04-0.46). Experiments were carried out for two different channel cross sections (rectangular and asymmetric compound) and two bathymetric roughness conditions (smooth glass and rough gravel bed). The results show that the mean surface velocity normalized by the depth-averaged velocity (the velocity index) decreases with increasing δ*/H, where δ* is the boundary layer displacement thickness, and that the integral length scales, L11,1 and L22,1, calculated on the free surface vary predictably with the local flow depth. Remote determination of local depth-averaged velocity and flow depth over a channel cross section yields an estimate of volumetric discharge.
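The final step (combining remotely sensed surface velocities with flow depths via a velocity index) reduces to a simple cross-section integration. The station data below are invented, and α = 0.85 is the conventional default velocity index; the study's point is that α actually varies with δ*/H:

```python
import numpy as np

def discharge_from_surface_velocity(u_surf, depth, dx, alpha=0.85):
    """Velocity-index discharge estimate: depth-averaged velocity is taken
    as alpha * surface velocity at each station across the channel, then
    integrated over the cross-sectional area (station spacing dx)."""
    u_bar = alpha * np.asarray(u_surf, dtype=float)    # depth-averaged velocity
    return float(np.sum(u_bar * np.asarray(depth, dtype=float) * dx))  # m^3/s

# Hypothetical cross-section: 10 stations spaced 0.5 m apart
u_surface = [0.2, 0.4, 0.5, 0.6, 0.6, 0.6, 0.6, 0.5, 0.4, 0.2]  # m/s (e.g. from LSPIV)
depths    = [0.3, 0.5, 0.7, 0.8, 0.9, 0.9, 0.8, 0.7, 0.5, 0.3]  # m (e.g. from sonar)
q = discharge_from_surface_velocity(u_surface, depths, dx=0.5)
print(q)  # total discharge in m^3/s
```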
Energy Technology Data Exchange (ETDEWEB)
Azcona Armendariz, J. D.; Li, R.; Xing, L.
2015-07-01
To develop a strategy for tracking tumours on MV images acquired with a flat-panel imager, and to apply it to the characterization of tumour motion and to dose reconstruction. The research was conducted using a Varian TrueBeam linear accelerator equipped with a megavoltage imaging system, with images of patients with prostate cancer treated with volumetric modulated arc therapy. (Author)
LINEAR SYSTEMS AND LINEAR INTERPOLATION I
Institute of Scientific and Technical Information of China (English)
丁立峰
2001-01-01
The linear interpolation of linear systems on a family of linear systems is introduced and discussed. Some results and examples on singly generated systems on a finite-dimensional vector space are given.
Volumetric CT-images improve testing of radiological image interpretation skills
Energy Technology Data Exchange (ETDEWEB)
Ravesloot, Cécile J., E-mail: C.J.Ravesloot@umcutrecht.nl [Radiology Department at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, Room E01.132 (Netherlands); Schaaf, Marieke F. van der, E-mail: M.F.vanderSchaaf@uu.nl [Department of Pedagogical and Educational Sciences at Utrecht University, Heidelberglaan 1, 3584 CS Utrecht (Netherlands); Schaik, Jan P.J. van, E-mail: J.P.J.vanSchaik@umcutrecht.nl [Radiology Department at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, Room E01.132 (Netherlands); Cate, Olle Th.J. ten, E-mail: T.J.tenCate@umcutrecht.nl [Center for Research and Development of Education at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht (Netherlands); Gijp, Anouk van der, E-mail: A.vanderGijp-2@umcutrecht.nl [Radiology Department at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht, Room E01.132 (Netherlands); Mol, Christian P., E-mail: C.Mol@umcutrecht.nl [Image Sciences Institute at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht (Netherlands); Vincken, Koen L., E-mail: K.Vincken@umcutrecht.nl [Image Sciences Institute at University Medical Center Utrecht, Heidelberglaan 100, 3508 GA Utrecht (Netherlands)
2015-05-15
Rationale and objectives: Current radiology practice increasingly involves interpretation of volumetric data sets. In contrast, most radiology tests still contain only 2D images. We introduced a new testing tool that allows for stack viewing of volumetric images in our undergraduate radiology program. We hypothesized that tests with volumetric CT-images enhance test quality, in comparison with traditional completely 2D image-based tests, because they might better reflect the skills required for clinical practice. Materials and methods: Two groups of medical students (n = 139; n = 143), trained with 2D and volumetric CT-images, took a digital radiology test in two versions (A and B), each containing both 2D and volumetric CT-image questions. In a questionnaire, they were asked to comment on the representativeness for clinical practice, difficulty and user-friendliness of the test questions and testing program. Students’ test scores and reliabilities, measured with Cronbach's alpha, were compared between the 2D and volumetric CT-image tests. Results: Estimated reliabilities (Cronbach's alphas) were higher for volumetric CT-image scores (version A: .51 and version B: .54) than for 2D CT-image scores (version A: .24 and version B: .37). Participants found volumetric CT-image tests more representative of clinical practice, and considered them to be less difficult than 2D CT-image questions. However, in one version (A), volumetric CT-image scores (M 80.9, SD 14.8) were significantly lower than 2D CT-image scores (M 88.4, SD 10.4) (p < .001). The volumetric CT-image testing program was considered user-friendly. Conclusion: This study shows that volumetric image questions can be successfully integrated in students’ radiology testing. The results suggest that the inclusion of volumetric CT-images might improve the quality of radiology tests by positively impacting perceived representativeness for clinical practice and increasing reliability of the test.
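Cronbach's alpha, the reliability measure used above, is straightforward to compute from an item-score matrix. A minimal sketch with invented scores for six students on four questions:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_students x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()     # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)      # variance of students' totals
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical right/wrong scores for 6 students on a 4-question image test
scores = [[1, 1, 1, 0],
          [1, 0, 1, 1],
          [0, 0, 1, 0],
          [1, 1, 1, 1],
          [0, 0, 0, 0],
          [1, 1, 0, 1]]
print(round(cronbach_alpha(scores), 3))  # 0.667 for this toy matrix
```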
Personalized heterogeneous deformable model for fast volumetric registration.
Si, Weixin; Liao, Xiangyun; Wang, Qiong; Heng, Pheng Ann
2017-02-20
Biomechanical deformable volumetric registration can help improve the safety of surgical interventions by ensuring that operations are extremely precise. However, this technique has been limited by the accuracy and computational efficiency of patient-specific modeling. This study presents a tissue-tissue coupling strategy based on a penalty method to model the heterogeneous behavior of a deformable body, and estimates the personalized tissue-tissue coupling parameters in a data-driven way. Moreover, considering that the computational efficiency of a biomechanical model depends strongly on its mechanical resolution, a practical coarse-to-fine scheme is proposed to increase runtime efficiency. In particular, a detail enrichment database is built offline to represent the mapping between the deformation results of a high-resolution hexahedral mesh extracted from the raw medical data and a newly constructed low-resolution hexahedral mesh. At runtime, the mechanical behavior of the human organ under interaction is simulated with the low-resolution hexahedral mesh, and the microstructures are then synthesized from the detail enrichment database. The proposed method is validated by volumetric registration in abdominal phantom compression experiments. Our personalized heterogeneous deformable model describes the coupling effects between the different tissues of the phantom well. Compared with a high-resolution heterogeneous deformable model, the low-resolution model with our detail enrichment database is 9.4× faster, with an average target registration error of 3.42 mm, demonstrating better volumetric registration performance than the state of the art. Our framework balances precision and efficiency well, and has great potential for adoption in practical augmented-reality image-guided robotic systems.
Volumetric measurements of pulmonary nodules: variability in automated analysis tools
Juluru, Krishna; Kim, Woojin; Boonn, William; King, Tara; Siddiqui, Khan; Siegel, Eliot
2007-03-01
Over the past decade, several computerized tools have been developed for detection of lung nodules and for providing volumetric analysis. Incidentally detected lung nodules have traditionally been followed over time by measurements of their axial dimensions on CT scans to ensure stability or document progression. A recently published article by the Fleischner Society offers guidelines on the management of incidentally detected nodules based on size criteria. For this reason, differences in measurements obtained by automated tools from various vendors may have significant implications for management, yet the degree of variability in these measurements is not well understood. The goal of this study is to quantify the differences in nodule maximum diameter and volume among different automated analysis software. Using a dataset of lung scans obtained with both "ultra-low" and conventional doses, we identified a subset of nodules in each of five size-based categories. Using automated analysis tools provided by three different vendors, we obtained size and volumetric measurements on these nodules, and compared these data using descriptive statistics as well as ANOVA and t-test analyses. Results showed significant differences in nodule maximum diameter measurements among the various automated lung nodule analysis tools but no significant differences in nodule volume measurements. These data suggest that when using automated commercial software, volume measurements may be a more reliable marker of tumor progression than maximum diameter. The data also suggest that volumetric nodule measurements may be relatively reproducible among various commercial workstations, in contrast to the variability documented when performing human mark-ups, as is seen in the LIDC (Lung Imaging Database Consortium) study.
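The comparison described (measurements of the same nodules by different vendors' tools, tested with t-tests) can be sketched as follows; all numbers are invented to mirror the reported pattern, with diameters differing systematically while volumes agree within noise:

```python
from scipy import stats

# Paired measurements of the same 8 nodules by two hypothetical vendors' tools
diam_a = [5.1, 6.3, 7.8, 9.2, 10.4, 11.0, 8.5, 6.9]    # mm, tool A
diam_b = [6.6, 7.9, 9.1, 10.8, 11.9, 12.4, 10.1, 8.3]  # mm, tool B (~+1.5 mm bias)
vol_a = [70, 131, 249, 407, 589, 697, 322, 172]        # mm^3, tool A
vol_b = [72, 128, 252, 404, 593, 699, 318, 175]        # mm^3, tool B (no bias)

t_d, p_d = stats.ttest_rel(diam_a, diam_b)   # paired t-test on diameters
t_v, p_v = stats.ttest_rel(vol_a, vol_b)     # paired t-test on volumes
print(p_d < 0.05, p_v < 0.05)                # True False: diameters differ, volumes agree
```

A paired design is used because each tool measures the same physical nodules, so nodule-to-nodule size variation should not mask the between-tool bias.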
AN ATTRIBUTION OF CAVITATION RESONANCE: VOLUMETRIC OSCILLATIONS OF CLOUD
Institute of Scientific and Technical Information of China (English)
ZUO Zhi-gang; LI Sheng-cai; LIU Shu-hong; LI Shuang; CHEN Hui
2009-01-01
In order to further verify the proposed theory of cavitation resonance, and to extend the investigation to the microscopic level, a series of studies is being carried out on the Warwick venturi. Analysis of the oscillation characteristics of the cavitation resonance has conclusively verified the macro-mechanism proposed through the authors' previous studies on other cavitating flows. Initial observations using a high-speed photographic approach have revealed a new attribution of cavitation resonance: the volumetric oscillation of the cavitation cloud is associated with the resonance, as a collective behaviour of the bubbles in the cloud.
Synthesis of Volumetric Ring Antenna Array for Terrestrial Coverage Pattern
Directory of Open Access Journals (Sweden)
Alberto Reyna
2014-01-01
This paper presents a synthesis of a volumetric ring antenna array for a terrestrial coverage pattern. The synthesis considers the spacing among the rings in the X-Y plane, the positions of the rings in the X-Z plane, and uniform and concentric excitations. The optimization is carried out with particle swarm optimization. Compared with previous designs, this geometry provides accurate coverage for satellite applications with a maximum reduction of the antenna hardware as well as a reduced side lobe level.
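Particle swarm optimization, the optimizer used above, can be sketched generically. The cost function below is a toy sphere function standing in for the paper's coverage-pattern/side-lobe cost, and the hyperparameters are conventional defaults, not the authors' settings:

```python
import numpy as np

def pso(cost, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Minimal particle swarm optimization: each particle is pulled toward
    its personal best and the swarm's global best.  In the paper the
    decision vector would hold ring spacings/positions and excitations."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                          # particle velocities
    pbest = x.copy()
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pbest_cost
        pbest[better], pbest_cost[better] = x[better], c[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest, float(pbest_cost.min())

# Toy stand-in cost: sphere function, optimum at the origin
best, best_cost = pso(lambda p: float(np.sum(p**2)), dim=4)
print(best_cost < 1e-2)  # the swarm converges near the optimum
```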
Estimation of volumetric breast density for breast cancer risk prediction
Pawluczyk, Olga; Yaffe, Martin J.; Boyd, Norman F.; Jong, Roberta A.
2000-04-01
Mammographic density (MD) has been shown to be a strong risk predictor for breast cancer. Compared to subjective assessment by a radiologist, computer-aided analysis of digitized mammograms provides a quantitative and more reproducible method for assessing breast density. However, the current methods of estimating breast density based on the area of bright signal in a mammogram do not reflect the true, volumetric quantity of dense tissue in the breast. A computerized method to estimate the amount of radiographically dense tissue in the overall volume of the breast has been developed to provide an automatic, user-independent tool for breast cancer risk assessment. The procedure for volumetric density estimation consists of first correcting the image for inhomogeneity, then performing a volume density calculation. First, optical sensitometry is used to convert all images to the logarithm of relative exposure (LRE), in order to simplify the image correction operations. The field non-uniformity correction, which takes into account heel effect, inverse square law, path obliquity and intrinsic field and grid non-uniformity is obtained by imaging a spherical section PMMA phantom. The processed LRE image of the phantom is then used as a correction offset for actual mammograms. From information about the thickness and placement of the breast, as well as the parameters of a breast-like calibration step wedge placed in the mammogram, MD of the breast is calculated. Post-processing and a simple calibration phantom enable user-independent, reliable and repeatable volumetric estimation of density in breast-equivalent phantoms. Initial results obtained on known density phantoms show the estimation to vary less than 5% in MD from the actual value. This can be compared to estimated mammographic density differences of 30% between the true and non-corrected values. Since a more simplistic breast density measurement based on the projected area has been shown to be a strong indicator
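The reason an LRE image of a uniform phantom can serve directly as a "correction offset" is that multiplicative field non-uniformities (heel effect, inverse square law, etc.) become additive in log-relative-exposure space. A toy sketch with an invented across-field gradient:

```python
import numpy as np

cols = 6
heel = np.linspace(0.0, 0.25, cols)          # invented heel-effect gradient (LRE units)
scene_lre = 1.8 + np.tile(heel, (4, 1))      # LRE image of a uniform test object
phantom_lre = 0.6 + np.tile(heel, (4, 1))    # LRE image of the uniform PMMA phantom
corrected = scene_lre - phantom_lre          # the shared gradient cancels exactly
print(bool(np.allclose(corrected, 1.2)))     # True: field is uniform after correction
```

In the real pipeline the corrected image is then converted to density via the calibration step wedge; that step is omitted here.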
Floating volumetric image formation using a dihedral corner reflector array device.
Miyazaki, Daisuke; Hirano, Noboru; Maeda, Yuki; Yamamoto, Siori; Mukai, Takaaki; Maekawa, Satoshi
2013-01-01
A volumetric display system using an optical imaging device consisting of numerous dihedral corner reflectors placed perpendicular to the surface of a metal plate is proposed. Image formation by the dihedral corner reflector array (DCRA) is free from distortion and has no focal length. In the proposed volumetric display system, a two-dimensional real image is moved by a mirror scanner to scan a three-dimensional (3D) space. Cross-sectional images of a 3D object are displayed in accordance with the position of the image plane, and a volumetric image is observed as a stack of these cross-sectional images. The use of the DCRA yields a compact system configuration and volumetric real-image generation with very low distortion. An experimental volumetric display system including a DCRA, a galvanometer mirror, and a digital micro-mirror device was constructed to verify the proposed method. A volumetric image consisting of 1024×768×400 voxels was formed by the experimental system.
Energy Technology Data Exchange (ETDEWEB)
Gondim Teixeira, Pedro Augusto; Leplat, Christophe [CHRU-Nancy Hopital Central, Service d' Imagerie Guilloz, Nancy (France); Universite de Lorraine, IADI U947, Nancy (France); Cendre, Romain [INSERM, CIC-IT 1433, Nancy (France); Hossu, Gabriela; Felblinger, Jacques [Universite de Lorraine, IADI U947, Nancy (France); INSERM, CIC-IT 1433, Nancy (France); Blum, Alain [CHRU-Nancy Hopital Central, Service d' Imagerie Guilloz, Nancy (France); Braun, Marc [CHRU-Nancy Hopital Central, Service de Neuroradiologie, Nancy (France)
2017-02-15
To assess the use of a volumetric simulation tool for the evaluation of radiology residents' MR and CT interpretation skills. Forty-three participants were evaluated with software allowing the visualisation of multiple volumetric image series: 7 medical students, 28 residents and 8 senior radiologists. Residents were divided into two sub-groups (novice and advanced). The test was composed of 15 exercises on general radiology and lasted 45 min. Participants answered a questionnaire on their experience with the test using a 5-point Likert scale. This study was approved by the dean of the medical school and did not require ethics committee approval. The reliability of the test was good, with a Cronbach alpha value of 0.9. Test scores were significantly different in all sub-groups studied (p < 0.0225). The relation between test scores and the year of residency was logarithmic (R{sup 2} = 0.974). Participants agreed that the test reflected their radiological practice (3.9 ± 0.9 on a 5-point scale) and was better than conventional evaluation methods (4.6 ± 0.5 on a 5-point scale). This software provides a high-quality evaluation tool for the assessment of interpretation skills in radiology residents. (orig.)
Volumetric display containing multiple two-dimensional color motion pictures
Hirayama, R.; Shiraki, A.; Nakayama, H.; Kakue, T.; Shimobaba, T.; Ito, T.
2014-06-01
We have developed an algorithm which can record multiple two-dimensional (2-D) gradated projection patterns in a single three-dimensional (3-D) object. Each recorded pattern has its own projection direction and can only be seen from that direction. The proposed algorithm has two important features: the number of recorded patterns is theoretically infinite, and no meaningful pattern can be seen outside of the projection directions. In this paper, we extend the algorithm to record multiple 2-D projection patterns in color. There are two common ways of color mixing: additive and subtractive. Additive color mixing, used to mix light, is based on RGB primaries; subtractive color mixing, used to mix inks, is based on CMY primaries. We devised two coloring methods, one based on additive mixing and one on subtractive mixing, performed numerical simulations of both, and confirmed their effectiveness. We also fabricated two types of volumetric display and applied the proposed algorithm to them. One is a cubic display constructed from light-emitting diodes (LEDs) in an 8×8×8 array, whose lighting patterns are controlled by a microcomputer board. The other is made of a 7×7 array of threads, each illuminated by a projector connected to a PC. With these implementations, we succeeded in recording multiple 2-D color motion pictures in the volumetric displays. Our algorithm can be applied to digital signage, media art and so forth.
Volumetric three-dimensional display system with rasterization hardware
Favalora, Gregg E.; Dorval, Rick K.; Hall, Deirdre M.; Giovinco, Michael; Napoli, Joshua
2001-06-01
An 8-color multiplanar volumetric display is being developed by Actuality Systems, Inc. It will be capable of utilizing an image volume greater than 90 million voxels, which we believe is the greatest utilizable voxel set of any volumetric display constructed to date. The display is designed to be used for molecular visualization, mechanical CAD, e-commerce, entertainment, and medical imaging. As such, it contains a new graphics processing architecture, novel high-performance line- drawing algorithms, and an API similar to a current standard. Three-dimensional imagery is created by projecting a series of 2-D bitmaps ('image slices') onto a diffuse screen that rotates at 600 rpm. Persistence of vision fuses the slices into a volume-filling 3-D image. A modified three-panel Texas Instruments projector provides slices at approximately 4 kHz, resulting in 8-color 3-D imagery comprised of roughly 200 radially-disposed slices which are updated at 20 Hz. Each slice has a resolution of 768 by 768 pixels, subtending 10 inches. An unusual off-axis projection scheme incorporating tilted rotating optics is used to maintain good focus across the projection screen. The display electronics includes a custom rasterization architecture which converts the user's 3- D geometry data into image slices, as well as 6 Gbits of DDR SDRAM graphics memory.
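The quoted ">90 million voxels" follows directly from the stated slice geometry; a quick arithmetic check:

```python
# Roughly 200 radially-disposed slices, each 768 x 768 pixels
slices, pixels = 200, 768
voxels = pixels * pixels * slices
print(f"{voxels:,}")  # 117,964,800 -> consistent with '>90 million voxels'
```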
Myocardial kinematics based on tagged MRI from volumetric NURBS models
Tustison, Nicholas J.; Amini, Amir A.
2004-04-01
We present current research in which left ventricular deformation is estimated from tagged cardiac magnetic resonance imaging using volumetric deformable models constructed from nonuniform rational B-splines (NURBS). From a set of short and long axis images at end-diastole, the initial NURBS model is constructed by fitting two surfaces with the same parameterization to the set of epicardial and endocardial contours from which a volumetric model is created. Using normal displacements of the three sets of orthogonal tag planes as well as displacements of both tag line and contour/tag line intersection points, one can solve for the optimal homogeneous coordinates, in a least squares sense, of the control points of the NURBS model at a later time point using quadratic programming. After fitting to all time points of data, lofting the NURBS model at each time point creates a comprehensive 4-D NURBS model. From this model, we can extract 3-D myocardial displacement fields and corresponding strain maps, which are local measures of non-rigid deformation.
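The core fitting step, solving for control-point displacements that best explain observed tag displacements in a least-squares sense, can be sketched in one dimension. The bump basis, sample sites, and data below are all invented stand-ins; the paper fits NURBS control points with quadratic programming rather than the plain least squares used here:

```python
import numpy as np

def bump_basis(t, centers, width):
    """Simple compactly-supported bump functions as a stand-in for
    NURBS basis functions evaluated at the sample sites t."""
    u = np.abs(t[:, None] - centers[None, :]) / width
    return np.clip(1.0 - u, 0.0, 1.0) ** 2

t = np.linspace(0.0, 1.0, 60)                 # tag-point sample sites
centers = np.linspace(0.0, 1.0, 8)            # control-point locations
B = bump_basis(t, centers, width=0.25)        # (n_sites x n_controls) basis matrix

true_q = np.sin(2 * np.pi * centers)          # hidden control-point displacements
d = B @ true_q + 0.01 * np.random.default_rng(0).normal(size=t.size)  # noisy tag data

q_hat, *_ = np.linalg.lstsq(B, d, rcond=None) # least-squares control-point fit
resid = float(np.linalg.norm(B @ q_hat - d))
print(resid < 0.2)                            # True: fit explains the tag data
```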
Volumetric breast density affects performance of digital screening mammography.
Wanders, Johanna O P; Holland, Katharina; Veldhuis, Wouter B; Mann, Ritse M; Pijnappel, Ruud M; Peeters, Petra H M; van Gils, Carla H; Karssemeijer, Nico
2017-02-01
To determine to what extent automatically measured volumetric mammographic density influences screening performance when using digital mammography (DM). We collected a consecutive series of 111,898 DM examinations (2003-2011) from one screening unit of the Dutch biennial screening program (age 50-75 years). Volumetric mammographic density was automatically assessed using Volpara. We determined screening performance measures for four density categories comparable to the American College of Radiology (ACR) breast density categories. Of all the examinations, 21.6% were categorized as density category 1 ('almost entirely fatty') and 41.5, 28.9, and 8.0% as categories 2-4 ('extremely dense'), respectively. We identified 667 screen-detected and 234 interval cancers. Interval cancer rates were 0.7, 1.9, 2.9, and 4.4‰ and false positive rates were 11.2, 15.1, 18.2, and 23.8‰ for categories 1-4, respectively (both p-trend < 0.001). Screening sensitivity, the proportion of screen-detected cancers among screen-detected plus interval cancers, decreased across the density categories: 85.7, 77.6, 69.5, and 61.0% for categories 1-4, respectively (p-trend < 0.001). Volumetric mammographic density, automatically measured on digital mammograms, impacts screening performance measures along the same patterns as established with ACR breast density categories. Since measuring breast density fully automatically has much higher reproducibility than visual assessment, this automatic method could help with implementing density-based supplemental screening.
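The overall program sensitivity implied by the reported counts can be checked directly, taking screen-detected cancers as a fraction of screen-detected plus interval cancers:

```python
# Overall program sensitivity from the counts reported in the abstract
screen_detected = 667
interval_cancers = 234
sensitivity = screen_detected / (screen_detected + interval_cancers)
print(f"{sensitivity:.1%}")  # 74.0% across all density categories combined
```

The per-category values (85.7% down to 61.0%) bracket this overall figure, as expected.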
The Volumetric Rate of Superluminous Supernovae at z~1
Prajs, S; Smith, M; Levan, A; Karpenka, N V; Edwards, T D P; Walker, C R; Wolf, W M; Balland, C; Carlberg, R; Howell, A; Lidman, C; Pain, R; Pritchet, C; Ruhlmann-Kleider, V
2016-01-01
We present a measurement of the volumetric rate of superluminous supernovae (SLSNe) at z~1, measured using archival data from the first four years of the Canada-France-Hawaii Telescope Supernova Legacy Survey (SNLS). We develop a method for the photometric classification of SLSNe to construct our sample. Our sample includes two previously spectroscopically-identified objects, and a further new candidate selected using our classification technique. We use the point-source recovery efficiencies from Perrett et al. (2010) and a Monte Carlo approach to calculate the rate based on our SLSN sample. We find that the three identified SLSNe from SNLS give a rate of 91 (+76/-36) SNe/Yr/Gpc^3 at a volume-weighted redshift of z=1.13. This is equivalent to 2.2 (+1.8/-0.9) x10^-4 of the volumetric core collapse supernova rate at the same redshift. When combined with other rate measurements from the literature, we show that the rate of SLSNe increases with redshift in a manner consistent with that of the cosmic star formati...
Linear Algebra and Smarandache Linear Algebra
Vasantha, Kandasamy
2003-01-01
The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, it also aims to bridge the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and ve...
Automated volumetric breast density derived by shape and appearance modeling
Malkov, Serghei; Kerlikowske, Karla; Shepherd, John
2014-03-01
The image shape and texture (appearance) estimation designed for facial recognition is a novel and promising approach for application in breast imaging. The purpose of this study was to apply a shape and appearance model to automatically estimate percent breast fibroglandular volume (%FGV) using digital mammograms. We built a shape and appearance model using 2000 full-field digital mammograms from the San Francisco Mammography Registry with known %FGV measured by a single-energy absorptiometry method. An affine transformation was used to remove rotation, translation and scale. Principal Component Analysis (PCA) was applied to extract significant and uncorrelated components. To build the appearance model, we transformed the breast images into the mean texture image by piecewise-linear image transformation, and with PCA the image pixel grey-scale values were converted into a reduced set of shape and texture features. Stepwise regression with forward selection and backward elimination was used to estimate the outcome %FGV from the shape and appearance features and other system parameters. The shape and appearance scores were found to correlate moderately with breast %FGV, dense tissue volume, actual breast volume, body mass index (BMI) and age. The highest Pearson correlation coefficient, 0.77, was between the first shape PCA component and actual breast volume. Stepwise regression with ten-fold cross-validation to predict %FGV from shape and appearance variables and other system parameters generated a model with r² = 0.8. In conclusion, the shape and appearance model demonstrated excellent feasibility for extracting variables useful for automatic %FGV estimation. Further exploration and testing of this approach is warranted.
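The statistical core of the pipeline (PCA on aligned pixel data followed by regression of %FGV on the component scores) can be sketched with synthetic data. All arrays below are invented; the real model also includes shape alignment and stepwise variable selection:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 50
latent = rng.normal(size=(n, 3))                  # hidden appearance factors
images = latent @ rng.normal(size=(3, p)) + 0.1 * rng.normal(size=(n, p))
fgv = 30 + 5 * latent[:, 0] - 3 * latent[:, 1] + rng.normal(0, 1, n)  # synthetic %FGV

# PCA via SVD of the mean-centred pixel matrix
X = images - images.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
scores = X @ Vt[:3].T                             # first 3 uncorrelated components

# Least-squares regression of %FGV on the PCA scores
A = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(A, fgv, rcond=None)
pred = A @ coef
r2 = 1 - np.sum((fgv - pred) ** 2) / np.sum((fgv - fgv.mean()) ** 2)
print(r2 > 0.8)  # True: the low-dimensional scores predict the synthetic %FGV well
```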
Zuo, Wenhua; Wang, Chong; Li, Yuanyuan; Liu, Jinping
2015-01-01
Hybrid supercapacitor (HSC), which typically consists of a Li-ion battery electrode and an electric double-layer supercapacitor electrode, has been extensively investigated for large-scale applications such as hybrid electric vehicles, etc. Its application potential for thin-film downsized energy storage systems that always prefer high volumetric energy/power densities, however, has not yet been explored. Herein, as a case study, we develop an entirely binder-free HSC by using multiwalled carbon nanotube (MWCNT) network film as the cathode and Li4Ti5O12 (LTO) nanowire array as the anode and study the volumetric energy storage capability. Both the electrode materials are grown directly on carbon cloth current collector, ensuring robust mechanical/electrical contacts and flexibility. Our 3 V HSC device exhibits maximum volumetric energy density of ~4.38 mWh cm-3, much superior to those of previous supercapacitors based on thin-film electrodes fabricated directly on carbon cloth and even comparable to the commercial thin-film lithium battery. It also has volumetric power densities comparable to that of the commercial 5.5 V/100 mF supercapacitor (can be operated within 3 s) and has excellent cycling stability (~92% retention after 3000 cycles). The concept of utilizing binder-free electrodes to construct HSC for thin-film energy storage may be readily extended to other HSC electrode systems.
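For context on the reported ~4.38 mWh cm-3, the energy stored in the commercial 5.5 V / 100 mF supercapacitor used as a comparison point can be estimated from E = ½CV²; the device volume below is an assumed illustrative value, not a figure from the paper:

```python
# Back-of-envelope volumetric energy of a 5.5 V / 100 mF supercapacitor
capacitance_f = 0.100                          # 100 mF
voltage_v = 5.5
energy_j = 0.5 * capacitance_f * voltage_v**2  # E = 1/2 C V^2 = 1.5125 J
energy_mwh = energy_j / 3.6                    # 1 mWh = 3.6 J
volume_cm3 = 1.0                               # ASSUMED device volume, illustrative only
print(round(energy_mwh / volume_cm3, 3), "mWh/cm^3")  # 0.42 mWh/cm^3
```

Under this assumption the HSC's ~4.38 mWh cm-3 is roughly an order of magnitude higher, consistent with the abstract's framing.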
De Long, A J; Greenberg, N; Keaney, C
1986-12-01
An influence of spatial scale on temporal processing has been described in humans (De Long, 1981). The hypothesis that a similar relationship exists in reptiles was tested by placing twelve lizards in volumetrically constant but large-scale or small-scale "home" environments and alternately exposing them to large and small scale novel environments in a counterbalanced design. Behavioral measures included latencies and frequencies for four types of behavior associated with behavioral arousal and exploration and for duration of behavioral states. Results indicate (1) behavioral latencies are significantly reduced in small-scale novel environments and (2) as predicted, the ratio of latencies in large-scale divided by small-scale novel environments is essentially identical to the ratio of the scales of the environments themselves. Linear regression analyses relating latencies to the ratio yield results remarkably similar to those previously reported for temporal experience and spatial scale in human subjects. This research suggests that an experiential temporal-spatial relativity may be phylogenetically primitive.
Feature-based Alignment of Volumetric Multi-modal Images
Toews, Matthew; Zöllei, Lilla; Wells, William M.
2014-01-01
This paper proposes a method for aligning image volumes acquired from different imaging modalities (e.g. MR, CT) based on 3D scale-invariant image features. A novel method for encoding invariant feature geometry and appearance is developed, based on the assumption of locally linear intensity relationships, providing a solution to poor repeatability of feature detection in different image modalities. The encoding method is incorporated into a probabilistic feature-based model for multi-modal image alignment. The model parameters are estimated via a group-wise alignment algorithm, that iteratively alternates between estimating a feature-based model from feature data, then realigning feature data to the model, converging to a stable alignment solution with few pre-processing or pre-alignment requirements. The resulting model can be used to align multi-modal image data with the benefits of invariant feature correspondence: globally optimal solutions, high efficiency and low memory usage. The method is tested on the difficult RIRE data set of CT, T1, T2, PD and MP-RAGE brain images of subjects exhibiting significant inter-subject variability due to pathology. PMID:24683955
Directory of Open Access Journals (Sweden)
A Zareei
2017-05-01
Full Text Available Introduction Spiral conveyors effectively carry solid masses as free or partly free flowing materials. They provide good throughput and are a practical solution to transport problems owing to their simple structure, high efficiency and low maintenance costs. This study investigates the performance characteristics of conveyors as a function of auger diameter, rotational speed and handling inclination angle. Performance was characterized by volumetric efficiency. In other words, the purpose of this study was to obtain a suitable model for the change in volumetric efficiency of a steep auger transferring agricultural products. Three auger diameters, five rotational speeds and three slope angles were used to investigate the effects of these parameters on the volumetric efficiency of the auger. The method used is novel in this area, and the results show that ANFIS models perform much better than common statistical models. Materials and Methods The experiments were conducted in the Department of Mechanical Engineering of Agricultural Machinery at Urmia University. The SAYOS cultivar of wheat was used; this cultivar has hard seeds, and its moisture content was 12% (wet basis). Before testing, all foreign material, such as stones, dust, plant residues and green seeds, was separated from the wheat. The bulk density of the wheat was 790 kg m-3. The auger shaft of the spiral conveyor received its rotational force through a belt and electric motor, and its rotation transferred the product to the output. In this study, three conveyors with diameters of 13, 17.5, and 22.5 cm, five rotational speeds of 100, 200, 300, 400, and 500 rpm and three handling angles of 10, 20, and 30º were tested. The adaptive neuro-fuzzy inference system (ANFIS) is a combination of fuzzy systems and artificial neural networks, and so has the benefits of both.
This system is useful to solve the complex non-linear
Tate, David F; Wade, Benjamin S C; Velez, Carmen S; Drennon, Ann Marie; Bolzenius, Jacob; Gutman, Boris A; Thompson, Paul M; Lewis, Jeffrey D; Wilde, Elisabeth A; Bigler, Erin D; Shenton, Martha E; Ritter, John L; York, Gerald E
2016-10-01
Mild traumatic brain injury (mTBI) is a significant health concern. The majority of those who sustain mTBI recover, although ~20% continue to experience symptoms that can interfere with quality of life. Accordingly, there is a critical need to improve diagnosis, prognostic accuracy, and monitoring (recovery trajectory over time) of mTBI. Volumetric magnetic resonance imaging (MRI) has been successfully utilized to examine TBI. One promising improvement over standard volumetric approaches is to analyze high-dimensional shape characteristics of brain structures. In this study, subcortical shape and volume in 76 Service Members with mTBI were compared to 59 Service Members with orthopedic injury (OI) and 17 with post-traumatic stress disorder (PTSD) only. FreeSurfer was used to quantify structures from T1-weighted 3 T MRI data. Radial distance (RD) and Jacobian determinant (JD) were defined vertex-wise on parametric mesh representations of subcortical structures. Linear regression was used to model associations between morphometry (volume and shape), TBI status, and time since injury (TSI), correcting for age, sex, intracranial volume, and level of education. Volumetric data were not significantly different between the groups. JD was significantly increased in the accumbens and caudate and significantly reduced in the thalamus of mTBI participants. Additional significant associations were noted between RD of the amygdala and TSI. Positive trend-level associations between TSI and the amygdala and accumbens were observed, while a negative association was observed for the third ventricle. Our findings may aid in the initial diagnosis of mTBI, provide biological targets for functional examination, and elucidate regions that may continue remodeling after injury.
Ke, Xinyou; Alexander, J Iwan D; Savinell, Robert F
2016-01-01
In this work, a two-dimensional mathematical model is developed to study the flow patterns and volumetric flow penetrations in the flow channel over the porous electrode layered system in a vanadium flow battery with a serpentine flow field design. The flow distributions at the interface between the flow channel and the porous electrode are examined. It is found that the non-linear pressure distributions can distinguish the interface flow distributions under the ideal plug flow and ideal parabolic flow inlet boundary conditions. However, the volumetric flow penetration within the porous electrode beneath the flow channel, obtained by integrating the interface flow velocity, is identical under both ideal plug flow and ideal parabolic flow inlet boundary conditions. The volumetric flow penetrations under the advection effects of the flow channel and landing/rib are estimated. The maximum current density achieved in the flow battery can be predicted based on the 100% amount of electrolyte flow reactant ...
Energy Technology Data Exchange (ETDEWEB)
Li Guang; Arora, Naveen C; Xie Huchen; Ning, Holly; Citrin, Deborah; Kaushal, Aradhana; Zach, Leor; Camphausen, Kevin; Miller, Robert W [Radiation Oncology Branch, National Cancer Institute, National Institutes of Health, Bethesda, MD 20892 (United States); Lu Wei; Low, Daniel [Department of Radiation Oncology, Washington University School of Medicine, St Louis, MO 63110 (United States)], E-mail: ligeorge@mail.nih.gov
2009-04-07
An external respiratory surrogate that not only highly correlates with but also quantitatively predicts internal tidal volume should be useful in guiding four-dimensional computed tomography (4DCT), as well as 4D radiation therapy (4DRT). A volumetric surrogate should have advantages over external fiducial point(s) for monitoring respiration-induced motion of the torso, which deforms in synchronization with a patient-specific breathing pattern. This study establishes a linear relationship between the external torso volume change (TVC) and lung air volume change (AVC) by validating a proposed volume conservation hypothesis (TVC = AVC) throughout the respiratory cycle using 4DCT and spirometry. Fourteen patients' torso 4DCT images and corresponding spirometric tidal volumes were acquired to examine this hypothesis. The 4DCT images were acquired using dual surrogates in cine mode and amplitude-based binning in 12 respiratory stages, minimizing residual motion artifacts. Torso and lung volumes were calculated using threshold-based segmentation algorithms and volume changes were calculated relative to the full-exhalation stage. The TVC and AVC, as functions of respiratory stages, were compared, showing a high correlation (r = 0.992 ± 0.005, p < 0.0001) as well as a linear relationship (slope = 1.027 ± 0.061, R² = 0.980) without phase shift. The AVC was also compared to the spirometric tidal volumes, showing a similar linearity (slope = 1.030 ± 0.092, R² = 0.947). In contrast, the thoracic and abdominal heights measured from 4DCT showed relatively low correlation (0.28 ± 0.44 and 0.82 ± 0.30, respectively) and location-dependent phase shifts. This novel approach establishes the foundation for developing an external volumetric respiratory surrogate.
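The core TVC = AVC comparison reduces to a correlation coefficient and a slope fit. This is a minimal sketch using hypothetical volume changes over 12 respiratory stages, not the patients' data; the no-intercept slope mirrors the hypothesis that the two volume changes are equal.

```python
# Sketch of the TVC-AVC comparison: Pearson r plus a through-origin slope.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

def slope_through_origin(xs, ys):
    """Least-squares slope of y = b*x (no intercept), matching TVC = AVC."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Hypothetical volume changes (cm^3) relative to full exhalation,
# across 12 respiratory stages.
tvc = [0, 80, 190, 310, 420, 500, 520, 450, 340, 220, 110, 30]
avc = [0, 85, 195, 300, 430, 495, 530, 445, 335, 225, 105, 25]

r = pearson_r(tvc, avc)
b = slope_through_origin(tvc, avc)
```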
Quantitative volumetric Raman imaging of three dimensional cell cultures
Kallepitis, Charalambos; Bergholt, Mads S.; Mazo, Manuel M.; Leonardo, Vincent; Skaalure, Stacey C.; Maynard, Stephanie A.; Stevens, Molly M.
2017-03-01
The ability to simultaneously image multiple biomolecules in biologically relevant three-dimensional (3D) cell culture environments would contribute greatly to the understanding of complex cellular mechanisms and cell-material interactions. Here, we present a computational framework for label-free quantitative volumetric Raman imaging (qVRI). We apply qVRI to a selection of biological systems: human pluripotent stem cells with their cardiac derivatives, monocytes and monocyte-derived macrophages in conventional cell culture systems and mesenchymal stem cells inside biomimetic hydrogels that supplied a 3D cell culture environment. We demonstrate visualization and quantification of fine details in cell shape, cytoplasm, nucleus, lipid bodies and cytoskeletal structures in 3D with unprecedented biomolecular specificity for vibrational microspectroscopy.
Volumetric properties of water/AOT/isooctane microemulsions.
Du, Changfei; He, Wei; Yin, Tianxiang; Shen, Weiguo
2014-12-23
The densities of AOT/isooctane micelles and water/AOT/isooctane microemulsions with molar ratios R of water to AOT of 2, 8, 10, 12, 16, 18, 20, 25, 30, and 40 were measured at 303.15 K. The apparent specific volumes of AOT and the quasi-component water/AOT at various concentrations were calculated and used to estimate the volumetric properties of AOT and water in the droplets and in the continuous oil phase, to discuss the interaction between the droplets, and to determine the critical micelle concentration and the critical microemulsion concentrations. A thermodynamic model was proposed to analyze the stability boundary of the microemulsion droplets, which confirms that the maximum value of R for stable AOT/water/isooctane microemulsion droplets is about 65.
In-line hologram segmentation for volumetric samples.
Orzó, László; Göröcs, Zoltán; Fehér, András; Tőkés, Szabolcs
2013-01-01
We propose a fast, noniterative method to segment an in-line hologram of a volumetric sample into in-line subholograms according to its constituent objects. In contrast to phase retrieval or twin-image elimination algorithms, we neither aim nor need to reconstruct the complex wave field of all the objects, which would be a more complex task; we only provide, quickly, a good estimate of the contribution of each object to the original hologram. The introduced hologram segmentation algorithm exploits the special inner structure of in-line holograms and requires only the estimated supports and reconstruction distances of the corresponding objects as parameters. The performance of the proposed method is demonstrated and analyzed experimentally on both synthetic and measured holograms. We discuss how the proposed algorithm can be efficiently applied to object reconstruction and phase retrieval tasks.
Three-Dimensional Volumetric Restoration by Structural Fat Grafting
Clauser, Luigi C.; Consorti, Giuseppe; Elia, Giovanni; Galié, Manlio; Tieghi, Riccardo
2013-01-01
The use of adipose tissue transfer for the correction of maxillofacial defects was reported for the first time at the end of the 19th century. Structural fat grafting (SFG) was introduced as a way to improve facial esthetics and in recent years has evolved into applications in craniomaxillofacial reconstructive surgery. Several techniques have been proposed for harvesting and grafting the fat. However, owing to the damage to many adipocytes during these maneuvers, the results have not been satisfactory and have required several fat injection procedures for small corrections. The authors (L.C.) review the application of SFG in the management of volumetric deficits in the craniomaxillofacial region in patients treated with long-term follow-up. PMID:24624259
Semi-automatic volumetrics system to parcellate ROI on neocortex
Tan, Ou; Ichimiya, Tetsuya; Yasuno, Fumihiko; Suhara, Tetsuya
2002-05-01
A template-based, semi-automatic volumetrics system, BrainVol, is built to divide any given patient brain into neocortical and subcortical regions. The standard regions are given as standard ROIs drawn on a standard brain volume. After normalization between the standard MR image and the patient MR image, the subcortical ROI boundaries are refined based on gray matter. The neocortical ROIs are refined using sulcus information that is semi-automatically marked on the patient brain. The segmentation is then applied to the 4D PET image of the same patient, via co-registration between MR and PET, to calculate the TAC (time activity curve).
Out-of-core clustering of volumetric datasets
Institute of Scientific and Technical Information of China (English)
GRANBERG Carl J.; LI Ling
2006-01-01
In this paper we present a novel method for dividing and clustering large out-of-core volumetric scalar datasets. This work is based on the Ordered Cluster Binary Tree (OCBT) structure created using a top-down or divisive clustering method. The OCBT structure allows fast and efficient sub-volume queries to be made in combination with level-of-detail (LOD) queries of the tree. The initial partitioning of the large out-of-core dataset is done using non-axis-aligned planes calculated via Principal Component Analysis (PCA). A hybrid OCBT structure is also proposed, in which an in-core cluster binary tree is combined with a large out-of-core file.
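The PCA-based splitting step described above can be sketched as follows. This is a generic illustration of dividing a point set along its principal axis, not the OCBT implementation itself; `principal_axis` uses simple power iteration rather than a full eigensolver.

```python
# Top-down split: find the principal axis of a 3D point set via power
# iteration on its covariance matrix, then partition points by the plane
# through the centroid normal to that axis.

def principal_axis(points, iters=200):
    n = len(points)
    c = [sum(p[i] for p in points) / n for i in range(3)]
    # 3x3 covariance matrix of the centered points
    cov = [[sum((p[i] - c[i]) * (p[j] - c[j]) for p in points) / n
            for j in range(3)] for i in range(3)]
    v = [1.0, 0.7, 0.3]  # arbitrary non-degenerate start vector
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(3)) for i in range(3)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return c, v

def split(points):
    """Partition points by the sign of their offset along the principal axis."""
    c, v = principal_axis(points)
    left = [p for p in points
            if sum((p[i] - c[i]) * v[i] for i in range(3)) < 0]
    right = [p for p in points
             if sum((p[i] - c[i]) * v[i] for i in range(3)) >= 0]
    return left, right

pts = [(float(x), 0.1 * x, 0.0) for x in range(-5, 6)]  # points along a line
left, right = split(pts)
```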
Volumetric Survey Speed: A Figure of Merit for Transient Surveys
Bellm, Eric C
2016-01-01
Time-domain surveys can exchange sky coverage for revisit frequency, complicating the comparison of their relative capabilities. By using different revisit intervals, a specific camera may execute surveys optimized for discovery of different classes of transient objects. We propose a new figure of merit, the instantaneous volumetric survey speed, for evaluating transient surveys. This metric defines the trade between cadence interval and snapshot survey volume and so provides a natural means of comparing survey capability. The related metric of areal survey speed imposes a constraint on the range of possible revisit times: we show that many modern time-domain surveys are limited by the amount of fresh sky available each night. We introduce the concept of "spectroscopic accessibility" and discuss its importance for transient science goals requiring followup observing. We present an extension of the control time algorithm for cases where multiple consecutive detections are required. Finally, we explore how surv...
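A simplified, Euclidean version of such a snapshot-volume figure of merit can be sketched as below. The function names and example numbers are illustrative assumptions; the paper's actual metric would account for cosmology and per-class light curves.

```python
# Hedged sketch of an instantaneous volumetric survey speed: the distance
# limit follows from the survey's limiting magnitude and the transient's
# absolute magnitude via the distance modulus, m - M = 5*log10(d / 10 pc).
import math

def limiting_distance_pc(m_lim, M_abs):
    """Distance (pc) at which a source of absolute magnitude M_abs
    reaches apparent magnitude m_lim."""
    return 10.0 ** ((m_lim - M_abs + 5.0) / 5.0)

def volumetric_survey_speed(m_lim, M_abs, fov_deg2, t_exposure_s):
    """Snapshot volume per unit time (Mpc^3 / s) for one pointing."""
    d_mpc = limiting_distance_pc(m_lim, M_abs) / 1e6
    sky_fraction = fov_deg2 / 41253.0  # whole sky in square degrees
    v_snapshot = (4.0 / 3.0) * math.pi * d_mpc ** 3 * sky_fraction
    return v_snapshot / t_exposure_s

# e.g. a hypothetical 47 deg^2 camera, 30 s exposures, m_lim = 20.5,
# and an SN Ia-like transient with M = -19
speed = volumetric_survey_speed(20.5, -19.0, 47.0, 30.0)
```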
Volumetric optical coherence microscopy enabled by aberrated optics (Conference Presentation)
Mulligan, Jeffrey A.; Liu, Siyang; Adie, Steven G.
2017-02-01
Optical coherence microscopy (OCM) is an interferometric imaging technique that enables high resolution, non-invasive imaging of 3D cell cultures and biological tissues. Volumetric imaging with OCM suffers a trade-off between high transverse resolution and poor depth-of-field resulting from defocus, optical aberrations, and reduced signal collection away from the focal plane. While defocus and aberrations can be compensated with computational methods such as interferometric synthetic aperture microscopy (ISAM) or computational adaptive optics (CAO), reduced signal collection must be physically addressed through optical hardware. Axial scanning of the focus is one approach, but comes at the cost of longer acquisition times, larger datasets, and greater image reconstruction times. Given the capabilities of CAO to compensate for general phase aberrations, we present an alternative method to address the signal collection problem without axial scanning by using intentionally aberrated optical hardware. We demonstrate the use of an astigmatic spectral domain (SD-)OCM imaging system to enable single-acquisition volumetric OCM in 3D cell culture over an extended depth range, compared to a non-aberrated SD-OCM system. The transverse resolution of the non-aberrated and astigmatic imaging systems after application of CAO were 2 um and 2.2 um, respectively. The depth-range of effective signal collection about the nominal focal plane was increased from 100 um in the non-aberrated system to over 300 um in the astigmatic system, extending the range over which useful data may be acquired in a single OCM dataset. We anticipate that this method will enable high-throughput cellular-resolution imaging of dynamic biological systems over extended volumes.
Marginal Space Deep Learning: Efficient Architecture for Volumetric Image Parsing.
Ghesu, Florin C; Krubasik, Edward; Georgescu, Bogdan; Singh, Vivek; Yefeng Zheng; Hornegger, Joachim; Comaniciu, Dorin
2016-05-01
Robust and fast solutions for anatomical object detection and segmentation support the entire clinical workflow from diagnosis, patient stratification, therapy planning, intervention and follow-up. Current state-of-the-art techniques for parsing volumetric medical image data are typically based on machine learning methods that exploit large annotated image databases. Two main challenges need to be addressed: the efficiency of scanning high-dimensional parametric spaces and the need for representative image features, which currently require significant manual engineering effort. We propose a pipeline for object detection and segmentation in the context of volumetric image parsing, solving a two-step learning problem: anatomical pose estimation and boundary delineation. For this task we introduce Marginal Space Deep Learning (MSDL), a novel framework exploiting both the strengths of efficient object parametrization in hierarchical marginal spaces and the automated feature design of Deep Learning (DL) network architectures. In the 3D context, the application of deep learning systems is limited by the very high complexity of the parametrization. More specifically, 9 parameters are necessary to describe a restricted affine transformation in 3D, resulting in a prohibitive number of billions of scanning hypotheses. The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality. To further increase computational efficiency and robustness, in our system we learn sparse adaptive data sampling patterns that automatically capture the structure of the input. Given the object localization, we propose a DL-based active shape model to estimate the non-rigid object boundary. Experimental results are presented on the aortic valve in ultrasound using an extensive dataset of 2891 volumes from 869 patients, showing significant improvements of up to 45
The volumetric rate of superluminous supernovae at z ˜ 1
Prajs, S.; Sullivan, M.; Smith, M.; Levan, A.; Karpenka, N. V.; Edwards, T. D. P.; Walker, C. R.; Wolf, W. M.; Balland, C.; Carlberg, R.; Howell, D. A.; Lidman, C.; Pain, R.; Pritchet, C.; Ruhlmann-Kleider, V.
2017-01-01
We present a measurement of the volumetric rate of superluminous supernovae (SLSNe) at z ˜ 1.0, measured using archival data from the first four years of the Canada-France-Hawaii Telescope Supernova Legacy Survey (SNLS). We develop a method for the photometric classification of SLSNe to construct our sample. Our sample includes two previously spectroscopically identified objects, and a further new candidate selected using our classification technique. We use the point-source recovery efficiencies from Perrett et al. and a Monte Carlo approach to calculate the rate based on our SLSN sample. We find that the three identified SLSNe from SNLS give a rate of 91^{+76}_{-36} SNe yr^{-1} Gpc^{-3} at a volume-weighted redshift of z = 1.13. This is equivalent to 2.2^{+1.8}_{-0.9} × 10^{-4} of the volumetric core-collapse supernova rate at the same redshift. When combined with other rate measurements from the literature, we show that the rate of SLSNe increases with redshift in a manner consistent with that of the cosmic star formation history. We also estimate the rate of ultra-long gamma-ray bursts based on the events discovered by the Swift satellite, and show that it is comparable to the rate of SLSNe, providing further evidence of a possible connection between these two classes of events. We also examine the host galaxies of the SLSNe discovered in SNLS, and find them to be consistent with the stellar-mass distribution of other published samples of SLSNe.
Volumetric analysis of corticocancellous bones using CT data
Energy Technology Data Exchange (ETDEWEB)
Krappinger, Dietmar; Linde, Astrid von; Rosenberger, Ralf; Blauth, Michael [Medical University Innsbruck, Department of Trauma Surgery and Sports Medicine, Innsbruck (Austria); Glodny, Bernhard; Niederwanger, Christian [Medical University Innsbruck, Department of Radiology I, Innsbruck (Austria)
2012-05-15
To present a method for an automated volumetric analysis of corticocancellous bones such as the superior pubic ramus using CT data and to assess the reliability of this method. Computed tomography scans of a consecutive series of 250 patients were analyzed. A Hounsfield unit (HU) thresholding-based reconstruction technique ("Vessel Tracking," GE Healthcare) was used. A contiguous space of cancellous bone with similar HU values between the starting and end points was automatically identified as the region of interest. The identification was based upon the density gradient to the adjacent cortical bone. The starting point was defined as the middle of the parasymphyseal corticocancellous transition zone on the axial slice showing the parasymphyseal superior pubic ramus in its maximum anteroposterior width. The end point was defined as the middle of the periarticular corticocancellous transition zone on the axial slice showing the quadrilateral plate as a thin cortical plate. The following parameters were automatically obtained on both sides: length of the center line, volume of the superior pubic ramus between the starting point and end point, minimum, maximum and mean diameter perpendicular to the center line, and mean cross-sectional area perpendicular to the center line. An automated analysis without manual adjustments was successful in 207 patients (82.8%). The center line showed a significantly greater length in female patients (67.6 mm vs 65.0 mm). The volume was greater in male patients (21.8 cm³ vs 19.4 cm³). The intersite reliability was high, with a mean difference between the left and right sides of between 0.1% (cross-sectional area) and 2.3% (volume). The method presented allows for an automated volumetric analysis of a corticocancellous bone using CT data. The method is intended to provide preoperative information for the use of intramedullary devices in fracture fixation and percutaneous cement augmentation techniques.
Directory of Open Access Journals (Sweden)
Qiang Cheng
2013-01-01
Full Text Available Traditional approaches to error modeling and analysis of machine tools rarely consider the probabilistic characteristics of the geometric and volumetric errors systematically. However, the individual geometric errors measured at different points are variable and stochastic, and therefore the resultant volumetric error is also stochastic and uncertain. In order to address the stochastic character of the volumetric error for multiaxis machine tools, a new probabilistic analysis mathematical model of volumetric error is proposed in this paper. According to multibody system theory, a mean-value analysis model for the volumetric error is established with consideration of the geometric errors. The probability characteristics of the geometric errors are obtained by statistical analysis of the measured sample data. Based on probability statistics and stochastic process theory, the variance analysis model of the volumetric error is established in matrix form, which avoids the complex mathematical operations of direct differentiation. A four-axis horizontal machining center is selected as an illustrative example. The analysis results reveal the stochastic characteristics of the volumetric error and are also helpful for making full use of the best workspace, reducing the random uncertainty of the volumetric error and improving the machining accuracy.
Volumetric and two-dimensional image interpretation show different cognitive processes in learners
van der Gijp, Anouk; Ravesloot, C.J.; van der Schaaf, Marieke F; van der Schaaf, Irene C; Huige, Josephine C B M; Vincken, Koen L; Ten Cate, Olle Th J; van Schaik, JPJ
2015-01-01
RATIONALE AND OBJECTIVES: In current practice, radiologists interpret digital images, including a substantial amount of volumetric images. We hypothesized that interpretation of a stack of a volumetric data set demands different skills than interpretation of two-dimensional (2D) cross-sectional images.
Directory of Open Access Journals (Sweden)
Daniëlle van der Waal
Full Text Available The objective of this study is to compare different methods for measuring breast density, both visual assessments and automated volumetric density, in a breast cancer screening setting. These measures could potentially be implemented in future screening programmes, in the context of personalised screening or screening evaluation. Digital mammographic exams (N = 992) of women participating in the Dutch breast cancer screening programme (age 50-75 y) in 2013 were included. Breast density was measured in three different ways: BI-RADS density (5th edition) and volumetric density with two commercially available automated software programs (Quantra and Volpara). BI-RADS density (ordinal scale) was assessed by three radiologists. Quantra (v1.3) and Volpara (v1.5.0) provide continuous estimates. Different comparison methods were used, including Bland-Altman plots and correlation coefficients (e.g., the intraclass correlation coefficient [ICC]). Based on the BI-RADS classification, 40.8% of the women had 'heterogeneously or extremely dense' breasts. The median volumetric percent density was 12.1% (IQR: 9.6-16.5) for Quantra, which was higher than the Volpara estimate (median 6.6%, IQR: 4.4-10.9). The mean difference between Quantra and Volpara was 5.19% (95% CI: 5.04-5.34; ICC: 0.64). There was a clear increase in volumetric percent dense volume as BI-RADS density increased. The highest accuracy for predicting the presence of BI-RADS c+d (heterogeneously or extremely dense) was observed with a cut-off value of 8.0% for Volpara and 13.8% for Quantra. Although there was no perfect agreement, there appeared to be a strong association between all three measures. Both volumetric density measures seem to be usable in breast cancer screening programmes, provided that the required data flow can be realized.
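The Bland-Altman comparison used in such studies amounts to the mean paired difference plus limits of agreement. This is a minimal sketch with hypothetical paired density values, not the study data:

```python
# Bland-Altman agreement between two density measures (e.g. two
# hypothetical automated readings of the same mammograms).

def bland_altman(a, b):
    """Return (mean difference, lower/upper 95% limits of agreement)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd = (sum((d - mean_d) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return mean_d, mean_d - 1.96 * sd, mean_d + 1.96 * sd

# Hypothetical percent-density readings for six exams.
measure_a = [12.1, 10.4, 16.5, 9.6, 14.0, 11.2]
measure_b = [6.6, 5.1, 10.9, 4.4, 8.7, 6.0]
mean_diff, lo, hi = bland_altman(measure_a, measure_b)
```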
Siripatana, Chairat; Thongpan, Hathaikarn; Promraksa, Arwut
2017-03-01
This article explores a volumetric approach to formulating differential equations for a class of engineering flow problems involving component transfer within or between two phases. In contrast to the conventional formulation, which is based on linear velocities, this work proposes a slightly different approach based on volumetric flow rate, which is essentially constant in many industrial processes. In effect, many multi-dimensional flow problems found industrially can be simplified into multi-component or multi-phase, but one-dimensional, flow problems. The formulation is largely generic, covering counter-current, concurrent or batch, fixed and fluidized bed arrangements. It is also intended for start-up, shut-down, control and steady-state simulation. Since many realistic industrial operations are dynamic, with velocity and porosity varying with position, analytical solutions are rare and limited to only very simple cases. Thus we also provide a numerical solution using the Crank-Nicolson finite difference scheme. This solution is inherently stable, as tested against a few cases published in the literature. However, it is anticipated that, for unconfined flow or non-constant flow rates, the traditional formulation should be applied.
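As a generic illustration of the numerical scheme mentioned above (not the authors' transfer equations), a Crank-Nicolson step for 1D diffusion u_t = D·u_xx with fixed boundaries can be written with a tridiagonal (Thomas) solve:

```python
# One Crank-Nicolson time step for 1D diffusion with Dirichlet boundaries.

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a: sub-, b: main, c: super-diagonal."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def crank_nicolson_step(u, D, dx, dt):
    r = D * dt / (2.0 * dx * dx)
    n = len(u)
    a = [-r] * n
    b = [1.0 + 2.0 * r] * n
    c = [-r] * n
    # Fixed (Dirichlet) boundary rows: u stays at its boundary value.
    a[0] = a[-1] = 0.0
    c[0] = c[-1] = 0.0
    b[0] = b[-1] = 1.0
    d = [0.0] * n
    d[0], d[-1] = u[0], u[-1]
    for i in range(1, n - 1):
        # Explicit half of the CN average.
        d[i] = r * u[i - 1] + (1.0 - 2.0 * r) * u[i] + r * u[i + 1]
    return thomas(a, b, c, d)

u0 = [0.0] * 21
u0[10] = 1.0  # initial concentration spike in the middle
u1 = crank_nicolson_step(u0, D=1.0, dx=0.1, dt=0.005)
```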
Energy Technology Data Exchange (ETDEWEB)
Ezzati, Ali [Albert Einstein College of Medicine of Yeshiva University, Saul B. Korey Department of Neurology, Bronx, NY (United States); Montefiore Medical Center, Department of Neurology, Bronx, NY (United States); Katz, Mindy J. [Albert Einstein College of Medicine of Yeshiva University, Saul B. Korey Department of Neurology, Bronx, NY (United States); Lipton, Michael L. [Albert Einstein College of Medicine of Yeshiva University, The Gruss Magnetic Resonance Research Center and Departments of Radiology, Psychiatry and Behavioral Sciences and the Dominick P. Purpura Department of Neuroscience, Bronx, NY (United States); Montefiore Medical Center, The Department of Radiology, Bronx, NY (United States); Lipton, Richard B. [Albert Einstein College of Medicine of Yeshiva University, Saul B. Korey Department of Neurology, Bronx, NY (United States); Albert Einstein College of Medicine of Yeshiva University, Department of Epidemiology and Population Health, Bronx, NY (United States); Verghese, Joe [Albert Einstein College of Medicine of Yeshiva University, Saul B. Korey Department of Neurology, Bronx, NY (United States); Albert Einstein College of Medicine, Division of Cognitive and Motor Aging, Bronx, NY (United States)
2015-08-15
While cortical processes play an important role in controlling locomotion, the underlying structural brain changes associated with slowing of gait in aging are not yet fully established. Our study aimed to examine the relationship between cortical gray matter volume (GM), white matter volume (WM), ventricular volume (VV), hippocampal and hippocampal subfield volumes, and gait velocity in older adults free of dementia. Gait and cognitive performance were tested in 112 community-residing adults, age 70 years and over, participating in the Einstein Aging Study. Gait velocity (cm/s) was obtained using an instrumented walkway. Volumetric MRI measures were estimated using FreeSurfer software. We examined the cross-sectional relationships of GM, WM, VV, and hippocampal total and subfield volumes with gait velocity using linear regression models. In complementary models, the effect of memory performance on the relationship between gait velocity and regional volumes was evaluated. Slower gait velocity was associated with smaller cortical GM and total hippocampal volumes. There was no association between gait velocity and WM or VV. Among hippocampal subfields, only smaller presubiculum volume was significantly associated with decreased gait velocity. Adding memory performance to the models attenuated the association between gait velocity and all volumetric measures. Our findings indicate that total GM and hippocampal volumes, as well as specific hippocampal subfield volumes, are inversely associated with locomotor function. These associations are probably affected by the cognitive status of the study population. (orig.)
Non-linear canonical correlation
van der Burg, Eeke; de Leeuw, Jan
1983-01-01
Non-linear canonical correlation analysis is a method for canonical correlation analysis with optimal scaling features. The method fits many kinds of discrete data. The different parameters are solved for in an alternating least squares way and the corresponding program is called CANALS. An
High Volumetric Energy Density Hybrid Supercapacitors Based on Reduced Graphene Oxide Scrolls.
Rani, Janardhanan R; Thangavel, Ranjith; Oh, Se-I; Woo, Jeong Min; Chandra Das, Nayan; Kim, So-Yeon; Lee, Yun-Sung; Jang, Jae-Hyung
2017-07-12
The low volumetric energy density of reduced graphene oxide (rGO)-based electrodes limits their application in commercial electrochemical energy storage devices, which require high energy storage capacity in small volumes. The volumetric energy density of rGO-based electrode materials is low because of their low packing density. A supercapacitor with enhanced packing density and high volumetric energy density is fabricated using doped rGO scrolls (GFNSs) as the electrode material. The restacking of rGO sheets is successfully controlled by synthesizing the doped scroll structures while increasing the packing density. The fabricated cell exhibits an ultrahigh volumetric energy density of 49.66 Wh/L with excellent cycling stability (>10 000 cycles). This unique design strategy for the electrode material has significant potential for future supercapacitors with high volumetric energy densities.
Soldea, Octavian; Elber, Gershon; Rivlin, Ehud
2006-02-01
This paper presents a method to globally segment volumetric images into regions that contain convex or concave (elliptic) iso-surfaces, planar or cylindrical (parabolic) iso-surfaces, and volumetric regions with saddle-like (hyperbolic) iso-surfaces, regardless of the value of the iso-surface level. The proposed scheme relies on a novel approach to globally compute, bound, and analyze the Gaussian and mean curvatures of an entire volumetric data set, using a trivariate B-spline volumetric representation. This scheme derives a new differential scalar field for a given volumetric scalar field, and could easily be adapted to other differential properties. Moreover, this scheme can serve as the basis for more precise and accurate segmentation of data sets targeting the identification of primitive parts. Since the proposed scheme employs piecewise continuous functions, it is precise and insensitive to aliasing.
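Since the scheme classifies regions by the Gaussian and mean curvatures of all iso-surfaces of a scalar field, it helps to recall how these follow from the field's gradient and Hessian. The sketch below evaluates the standard implicit-surface curvature formulas with finite differences; this is only an approximation of the paper's approach, which computes and bounds these quantities symbolically from the trivariate B-spline representation:

```python
import numpy as np

def iso_surface_curvatures(F, spacing=1.0):
    """Gaussian (K) and mean (H) curvature of the iso-surfaces of a
    sampled scalar field F, at every voxel, for whichever iso-level
    passes through that voxel. Uses the standard implicit-surface
    formulas (K from the adjugate of the Hessian, H from the Hessian
    itself); the sign of H depends on the gradient orientation.
    """
    Fx, Fy, Fz = np.gradient(F, spacing)
    Fxx, Fxy, Fxz = np.gradient(Fx, spacing)
    _,   Fyy, Fyz = np.gradient(Fy, spacing)
    _,   _,   Fzz = np.gradient(Fz, spacing)

    g2 = Fx**2 + Fy**2 + Fz**2  # squared gradient magnitude
    with np.errstate(divide='ignore', invalid='ignore'):
        # K = grad . adj(Hess) . grad / |grad|^4
        K = (Fx*Fx*(Fyy*Fzz - Fyz*Fyz)
             + Fy*Fy*(Fxx*Fzz - Fxz*Fxz)
             + Fz*Fz*(Fxx*Fyy - Fxy*Fxy)
             + 2*Fx*Fy*(Fxz*Fyz - Fxy*Fzz)
             + 2*Fy*Fz*(Fxy*Fxz - Fyz*Fxx)
             + 2*Fx*Fz*(Fxy*Fyz - Fxz*Fyy)) / g2**2
        # H = (grad . Hess . grad - |grad|^2 trace(Hess)) / (2 |grad|^3)
        H = (Fx*Fx*Fxx + Fy*Fy*Fyy + Fz*Fz*Fzz
             + 2*(Fx*Fy*Fxy + Fy*Fz*Fyz + Fx*Fz*Fxz)
             - g2*(Fxx + Fyy + Fzz)) / (2 * g2**1.5)
    return K, H
```

On the field F = x² + y² + z², whose iso-surfaces are spheres, the voxel at radius r should report K = 1/r² and |H| = 1/r.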
Energy Technology Data Exchange (ETDEWEB)
Garcia-Casals, X. [Universidad Pontificia Comillas-ICAI, Madrid (Spain). Dept. de Fluidos y Calor; Ajona, J.I. [Departamento de Energia Solar, Viessemann, Poligono Industrial San Marcos, Getafe (Spain)
1999-07-01
Recently, much theoretical and experimental work has been conducted on volumetric receivers. However, little attention has been paid to the possibility of using different selectivity mechanisms to minimize radiative thermal losses, which are the dominant losses at high operating temperature. In this paper we present a duct volumetric receiver model and its results, which allow the evaluation of different selectivity strategies: conventional ε/α, geometry, frontal absorption, and diffuse/specular reflection. We propose a new concept of selective volumetric receiver based on solar-specular/infrared-diffuse radiative behaviour and evaluate its potential for efficiency improvement. Recent work on volumetric receivers based on simplified models concluded that the duct volumetric receiver is inherently unstable when working with high solar flux. We did not find any unstable receiver behaviour even at very high solar fluxes, and conclude that a substantial potential for efficiency improvement exists if selectivity mechanisms are properly combined.
Enhanced volumetric visualization for real time 4D intraoperative ophthalmic swept-source OCT.
Viehland, Christian; Keller, Brenton; Carrasco-Zevallos, Oscar M; Nankivil, Derek; Shen, Liangbo; Mangalesh, Shwetha; Viet, Du Tran; Kuo, Anthony N; Toth, Cynthia A; Izatt, Joseph A
2016-05-01
Current-generation software for rendering volumetric OCT data sets based on ray casting results in volume visualizations with indistinct tissue features and sub-optimal depth perception. Recent developments in hand-held and microscope-integrated intrasurgical OCT designed for real-time volumetric imaging motivate development of rendering algorithms which are both visually appealing and fast enough to support real time rendering, potentially from multiple viewpoints for stereoscopic visualization. We report on an enhanced, real time, integrated volumetric rendering pipeline which incorporates high performance volumetric median and Gaussian filtering, boundary and feature enhancement, depth encoding, and lighting into a ray casting volume rendering model. We demonstrate this improved model implemented on graphics processing unit (GPU) hardware for real-time volumetric rendering of OCT data during tissue phantom and live human surgical imaging. We show that this rendering produces enhanced 3D visualizations of pathology and intraoperative maneuvers compared to standard ray casting.
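At the core of any ray-casting renderer like the one described is a front-to-back compositing loop per ray. Here is a minimal sketch of that accumulation step; the transfer function and early-termination threshold are illustrative assumptions, and the paper's pipeline additionally wraps filtering, boundary/feature enhancement, depth encoding, and lighting around it:

```python
def composite_ray(samples, transfer, step=1.0):
    """Front-to-back alpha compositing along a single ray
    (emission-absorption model). Returns accumulated color and opacity.
    """
    color, alpha = 0.0, 0.0
    for s in samples:
        c, a = transfer(s)                # sample -> (color, opacity)
        a = 1.0 - (1.0 - a) ** step       # opacity correction for step size
        color += (1.0 - alpha) * a * c    # accumulate premultiplied color
        alpha += (1.0 - alpha) * a
        if alpha > 0.99:                  # early ray termination
            break
    return color, alpha
```

Early ray termination is one of the standard optimizations that makes GPU ray casting fast enough for the real-time use described: once a ray is nearly opaque, deeper samples cannot contribute visibly.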
Energy Technology Data Exchange (ETDEWEB)
Rhodes, L.A. [Academic Unit of Medical Physics, University of Leeds and Leeds General Infirmary, Leeds (United Kingdom)]. E-mail: lar@medphysics.leeds.ac.uk; Keenan, A.-M. [Academic Unit of Musculoskeletal Disease, University of Leeds and Leeds General Infirmary, Leeds (United Kingdom); Grainger, A.J. [Department of Radiology, Leeds General Infirmary, Leeds (United Kingdom); Emery, P. [Academic Unit of Musculoskeletal Disease, University of Leeds and Leeds General Infirmary, Leeds (United Kingdom); McGonagle, D. [Academic Unit of Musculoskeletal Disease, University of Leeds and Leeds General Infirmary, Leeds (United Kingdom); Calderdale Royal Hospital, Salterhebble, Halifax (United Kingdom); Conaghan, P.G. [Academic Unit of Musculoskeletal Disease, University of Leeds and Leeds General Infirmary, Leeds (United Kingdom)
2005-12-15
AIM: To assess whether simple, limited section analysis can replace detailed volumetric assessment of synovitis in patients with osteoarthritis (OA) of the knee using contrast-enhanced magnetic resonance imaging (MRI). MATERIALS AND METHODS: Thirty-five patients with clinical and radiographic OA of the knee were assessed for synovitis using gadolinium-enhanced MRI. The volume of enhancing synovium was quantitatively assessed at four anatomical sites (the medial and lateral parapatellar recesses, the intercondylar notch and the suprapatellar pouch) by summing the volumes of synovitis in consecutive sections. Four different combinations of section analysis were evaluated for their ability to predict total synovial volume. RESULTS: A total of 114 intra-articular sites were assessed. Simple linear regression demonstrated that the best predictor of total synovial volume was the analysis containing the inferior, mid and superior sections of each intra-articular site, which explained between 40% and 80% of the variance in total synovial volume (r²=0.396, p<0.001 for the notch; r²=0.818, p<0.001 for the medial parapatellar recess). CONCLUSIONS: The results suggest that a three-section analysis on axial post-gadolinium sequences provides a simple surrogate measure of synovial volume in OA knees.
Feasibility study of volumetric modulated arc therapy with constant dose rate for endometrial cancer
Energy Technology Data Exchange (ETDEWEB)
Yang, Ruijie [Department of Radiation Oncology, Peking University Third Hospital, Beijing (China); Wang, Junjie, E-mail: junjiewang47@yahoo.com [Department of Radiation Oncology, Peking University Third Hospital, Beijing (China); Xu, Feng [Department of Biomedical Engineering, Peking University Third Hospital, Beijing (China); Li, Hua [Department of Obstetrics and Gynecology, Peking University Third Hospital, Beijing (China); Zhang, Xile [Department of Radiation Oncology, Peking University Third Hospital, Beijing (China)
2013-10-01
To investigate the feasibility, efficiency, and delivery accuracy of volumetric modulated arc therapy with constant dose rate (VMAT-CDR) for whole-pelvic radiotherapy (WPRT) of endometrial cancer. Nine-field intensity-modulated radiotherapy (IMRT), VMAT with variable dose rate (VMAT-VDR), and VMAT-CDR plans were created for 9 patients with endometrial cancer undergoing WPRT. The dose distributions of the planning target volume (PTV), organs at risk (OARs), and normal tissue (NT) were compared. The monitor units (MUs) and treatment delivery time were also evaluated. For each VMAT-CDR plan, a dry run was performed to assess the dosimetric accuracy with a MatriXX array from IBA. Compared with IMRT, the VMAT-CDR plans delivered a slightly greater V20 to the bowel, bladder, pelvic bone, and NT, but significantly decreased the dose to the high-dose region of the rectum and pelvic bone. The MUs decreased from 1105 with IMRT to 628 with VMAT-CDR. The delivery time also decreased from 9.5 to 3.2 minutes. The average gamma pass rate was 95.6% at the 3%/3 mm criterion with MatriXX pretreatment verification for the 9 patients. VMAT-CDR can achieve comparable plan quality with significantly shorter delivery time and fewer MUs than IMRT for patients with endometrial cancer undergoing WPRT. It can be delivered accurately and is an alternative to IMRT on linear accelerators without VDR capability.
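The 95.6% figure comes from a gamma analysis at the 3%/3 mm criterion. A simplified one-dimensional version of that comparison can be sketched as follows (the clinical analysis is done in 2D on the MatriXX measurement; function name and defaults here are illustrative):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, positions, dd=0.03, dta=3.0):
    """Global 1D gamma analysis. `dd` is the dose-difference criterion
    as a fraction of the reference maximum; `dta` is the
    distance-to-agreement criterion in mm. An evaluated point passes
    if its gamma index (minimum combined dose/distance measure over
    the reference curve) is <= 1.
    """
    norm = dd * dose_ref.max()
    gammas = []
    for xe, de in zip(positions, dose_eval):
        g2 = ((dose_ref - de) / norm) ** 2 + ((positions - xe) / dta) ** 2
        gammas.append(np.sqrt(g2.min()))
    return 100.0 * np.mean(np.array(gammas) <= 1.0)
```

An identical pair of dose profiles passes everywhere; a 50% over-dosed copy fails except where a nearby reference point happens to agree within both criteria.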
Directory of Open Access Journals (Sweden)
Guishan Fu
2014-01-01
Purpose: To explore the dosimetric effects of flattening filter-free (FFF) beams in volumetric modulated arc therapy (VMAT) of nasopharyngeal carcinoma via a retrospective planning study. Materials and Methods: A linear accelerator (LINAC) was prepared to operate in FFF mode, and the beam data were collected and used to build a model in the treatment planning system (TPS). For 10 nasopharyngeal carcinoma (NPC) cases, VMAT plans with FFF beams and with normal flattened (FF) beams were designed. Differences in plan quality and delivery efficiency between FFF-VMAT and FF-VMAT plans were analyzed using two-tailed paired t-tests. Results: Removal of the flattening filter increased the dose rate. The average beam-on time (BOT) of the FFF-VMAT plans was decreased by 24.2%. Differences in target dose coverage between plans with flattened and unflattened beams were statistically insignificant. For dose to normal organs, up to a 4.9% decrease in V35 of the parotid gland and a 4.5% decrease in average normal tissue (NT) dose were observed. Conclusions: The TPS used in our study was able to handle FFF beams. FFF beams tend to improve normal tissue sparing while achieving a similar target dose distribution. The decrease in BOT in NPC cases is valuable for patient comfort.
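The plan comparisons above rely on two-tailed paired t-tests across the same 10 cases. A minimal sketch with hypothetical data (the numbers below are invented for illustration, not the study's measurements):

```python
import numpy as np

def paired_t(x, y):
    """Two-tailed paired t statistic for matched samples."""
    d = np.asarray(x, float) - np.asarray(y, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

# Hypothetical beam-on times (s) for the same 10 plans (invented data):
ff  = [120, 118, 125, 130, 122, 119, 128, 124, 121, 126]
fff = [ 92,  90,  96,  99,  93,  91,  98,  95,  92,  97]
t = paired_t(ff, fff)
# For df = 9, the two-tailed 5% critical value is 2.262; |t| above
# that rejects the hypothesis of equal mean BOT.
```

Pairing on the same patient removes between-case variability, which is why the test is appropriate for comparing two plans per case.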
Volumetric and two-dimensional image interpretation show different cognitive processes in learners.
van der Gijp, Anouk; Ravesloot, Cécile J; van der Schaaf, Marieke F; van der Schaaf, Irene C; Huige, Josephine C B M; Vincken, Koen L; Ten Cate, Olle Th J; van Schaik, Jan P J
2015-05-01
In current practice, radiologists interpret digital images, including a substantial amount of volumetric images. We hypothesized that interpretation of a stack of volumetric data demands different skills than interpretation of two-dimensional (2D) cross-sectional images. This study aimed to investigate and compare the knowledge and skills used for interpretation of volumetric versus 2D images. Twenty radiology clerks were asked to think out loud while reading four or five volumetric computed tomography (CT) images in stack mode and four or five 2D CT images. Cases were presented in a digital testing program allowing stack viewing of volumetric data sets and changing of views and window settings. Thoughts verbalized by the participants were registered and coded by a framework of knowledge and skills concerning three components: perception, analysis, and synthesis. The components were subdivided into 16 discrete knowledge and skill elements. A within-subject analysis was performed to compare cognitive processes during volumetric image readings versus 2D cross-sectional image readings. Most utterances contained knowledge and skills concerning perception (46%). A smaller part involved synthesis (31%) and analysis (23%). More utterances regarded perception in volumetric image interpretation than in 2D image interpretation (median 48% vs 35%; z = -3.9). Cognitive processes in volumetric and 2D cross-sectional image interpretation differ substantially. Volumetric image interpretation draws predominantly on perceptual processes, whereas 2D image interpretation is mainly characterized by synthesis. The results encourage the use of volumetric images for teaching and testing perceptual skills.
Hodgetts, David; Seers, Thomas
2015-04-01
Fault systems are important structural elements within many petroleum reservoirs, acting as potential conduits, baffles or barriers to hydrocarbon migration. Large, seismic-scale faults often serve as reservoir bounding seals, forming structural traps which have proved to be prolific plays in many petroleum provinces. Though inconspicuous within most seismic datasets, smaller subsidiary faults, commonly within the damage zones of parent structures, may also play an important role. These smaller faults typically form narrow, tabular low-permeability zones which serve to compartmentalize the reservoir, negatively impacting upon hydrocarbon recovery. Though considerable improvements have been made in the visualization of reservoir-scale fault systems with the advent of 3D seismic surveys, the occlusion of smaller scale faults in such datasets is a source of significant uncertainty during prospect evaluation. The limited capacity of conventional subsurface datasets to probe the spatial distribution of these smaller scale faults has given rise to a large number of outcrop based studies, allowing their intensity, connectivity and size distributions to be explored in detail. Whilst these studies have yielded an improved theoretical understanding of the style and distribution of sub-seismic scale faults, the ability to transform observations from outcrop to quantities that are relatable to reservoir volumes remains elusive. These issues arise from the fact that outcrops essentially offer a pseudo-3D window into the rock volume, making the extrapolation of surficial fault properties such as areal density (fracture length per unit area: P21) to equivalent volumetric measures (i.e. fracture area per unit volume: P32) applicable to fracture modelling extremely challenging. Here, we demonstrate an approach which harnesses advances in the extraction of 3D trace maps from surface reconstructions using calibrated image sequences, in combination with a novel semi
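For context, the classical stereological route from areal to volumetric intensity, which outcrop-based approaches like this one seek to improve upon, assumes isotropically oriented fractures (the standard relation L_A = (π/4)·S_V between trace length per area and surface area per volume). A sketch under that assumption:

```python
import numpy as np

def p21(trace_lengths, area):
    """Areal intensity P21: total fracture trace length per unit area
    of the sampling plane (e.g. an outcrop face)."""
    return np.sum(trace_lengths) / area

def p32_isotropic(p21_value):
    """Volumetric intensity P32 (fracture area per unit volume) from
    P21, assuming isotropically oriented fractures: the classical
    stereological relation L_A = (pi/4) * S_V gives P32 = (4/pi) * P21.
    """
    return (4.0 / np.pi) * p21_value
```

The conversion factor changes whenever the fracture set has a preferred orientation relative to the sampling plane, which is precisely why pseudo-3D outcrop windows make the P21-to-P32 step so uncertain.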
Third SIAM conference on applied linear algebra and short course on linear algebra in statistics
Energy Technology Data Exchange (ETDEWEB)
1988-01-01
This report contains abstracts on the following themes: Large Scale Computing and Numerical Methods; Inverse Eigenvalue Problems; Qualitative and Combinatorial Analysis of Matrices; Linear Systems and Control; Parallel Matrix Computations; Signal Processing; Optimization; Multivariate Statistics; Core Linear Algebra; and Iterative Methods for Solving Linear Systems.
Byron, S.
1985-03-01
The low-pressure gas-filled thyratron is scalable in the long dimension. Internally the tube is formed as a tetrode, with an auxiliary grid placed between the cathode and the control grid. A dc or pulsed power source drives the auxiliary grid, both to ensure uniform cathode emission and to provide a grid-cathode plasma prior to commutation. The high-voltage holdoff structure consists of the anode, the control grid and its electrostatic shielding baffles, and a main quartz insulator. A small gas flow supply and exhaust system eliminates the need for a hydrogen reservoir and permits other gases, such as helium, to be used. The thyratron provides a low-inductance, high-current, long-lifetime switch configuration useful for switch-on applications involving large-scale lasers and other similar loads distributed in a linear geometry.
Linearly constrained minimax optimization
DEFF Research Database (Denmark)
Madsen, Kaj; Schjær-Jacobsen, Hans
1978-01-01
We present an algorithm for nonlinear minimax optimization subject to linear equality and inequality constraints which requires first-order partial derivatives. The algorithm is based on successive linear approximations to the functions defining the problem. The resulting linear subproblems...
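To illustrate the kind of subproblem involved, consider the one-dimensional, box-constrained analogue: minimizing the upper envelope of a few linear functions over an interval. The sketch below is a toy stand-in (the paper's algorithm handles many variables and general linear constraints, solving each linearized subproblem as a linear program); it exploits the fact that the minimum of a convex piecewise-linear envelope lies at a kink or a bound:

```python
import itertools

import numpy as np

def minimax_linear_1d(slopes, intercepts, lo, hi):
    """Minimize max_i (a_i * x + b_i) for x in [lo, hi].

    The upper envelope of linear functions is piecewise-linear and
    convex, so the minimum lies at a kink (intersection of two lines)
    or at a bound; enumerating those candidates suffices.
    """
    a = np.asarray(slopes, float)
    b = np.asarray(intercepts, float)
    candidates = [lo, hi]
    for i, j in itertools.combinations(range(len(a)), 2):
        if a[i] != a[j]:
            x = (b[j] - b[i]) / (a[i] - a[j])
            if lo <= x <= hi:
                candidates.append(x)
    envelope = lambda x: float(np.max(a * x + b))
    return min(candidates, key=envelope)
```

For example, the envelope of x and -x is |x|, minimized at 0; in higher dimensions this enumeration is replaced by an LP over the epigraph variable t with constraints a_i·x + b_i <= t.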