Inverse problems in linear transport theory
Dressler, K.
1988-01-01
Inverse problems for a class of linear kinetic equations are investigated. The aim is to identify the scattering kernel of a transport equation (corresponding to the structure of a background medium) by observing the 'albedo' part of the solution operator for the corresponding direct initial boundary value problem. This amounts to obtaining information on an integral operator in an integrodifferential equation through an overdetermined boundary value problem. We first derive a constructive method for solving direct halfspace problems and prove a new factorization theorem for the solutions. Using this result we investigate stationary inverse problems with respect to well-posedness (e.g. reduce them to classical ill-posed problems, such as integral equations of the first kind). In the time-dependent case we show that a quite general inverse problem is well posed and solve it constructively. (orig.)
An Entropic Estimator for Linear Inverse Problems
Amos Golan
2012-05-01
In this paper we examine an information-theoretic method for solving noisy linear inverse estimation problems which encompasses under a single framework a whole class of estimation methods. Under this framework, the prior information about the unknown parameters (when such information exists) and constraints on the parameters can be incorporated in the statement of the problem. The method builds on the basics of the maximum entropy principle and consists of transforming the original problem into the estimation of a probability density on an appropriate space naturally associated with the statement of the problem. This estimation method is generic in the sense that it provides a framework for analyzing non-normal models, it is easy to implement, and it is suitable for all types of inverse problems, such as small-sample, ill-conditioned, or noisy-data problems. First-order approximations, large-sample properties, and convergence in distribution are developed as well. Analytical examples, and statistics for model comparisons and evaluations that are inherent to this method, are discussed and complemented with explicit examples.
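The dual form of such a maximum-entropy estimator can be sketched in a few lines. The support points, the single moment constraint, and the step size below are illustrative assumptions, not the paper's specification:

```python
import numpy as np

# Sketch of a maximum-entropy estimator for a linear inverse problem:
# recover a probability vector p on known support points z from a moment
# constraint A p = y by gradient descent on the dual objective
# g(lam) = log sum_i exp(-(A^T lam)_i) + lam . y, whose gradient is y - A p.
z = np.array([0.0, 1.0, 2.0])      # support points (assumed for illustration)
A = z.reshape(1, -1)               # moment (constraint) matrix, here 1 x 3
y = np.array([1.3])                # observed moment

lam = np.zeros(1)                  # dual variable (Lagrange multiplier)
for _ in range(5000):
    w = np.exp(-A.T @ lam)         # unnormalized Gibbs weights
    p = w / w.sum()                # primal solution implied by lam
    lam -= 0.5 * (y - A @ p)       # descent step on the dual

print(p)        # maximum-entropy distribution matching the moment
print(A @ p)    # reproduces y up to numerical tolerance
```

At the optimum the recovered distribution is the Gibbs density tilted just enough to match the observed moment; with no constraint it would fall back to the uniform (maximum-entropy) distribution.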
Microlocal analysis of a seismic linearized inverse problem
Stolk, C.C.
1999-01-01
The seismic inverse problem is to determine the wavespeed c(x) in the interior of a medium from measurements at the boundary. In this paper we analyze the linearized inverse problem in general acoustic media. The problem is to find a left inverse of the linearized forward map F, or, equivalently, to find the…
Linearity in Process Languages
Nygaard, Mikkel; Winskel, Glynn
2002-01-01
The meaning and mathematical consequences of linearity (managing without a presumed ability to copy) are studied for a path-based model of processes which is also a model of affine-linear logic. This connection yields an affine-linear language for processes, automatically respecting open-map bisimulation, in which a range of process operations can be expressed. An operational semantics is provided for the tensor fragment of the language. Different ways to make assemblies of processes lead to different choices of exponential, some of which respect bisimulation.
Inverse photon-photon processes
Carimalo, C.; Crozon, M.; Kesler, P.; Parisi, J.
1981-12-01
We here consider inverse photon-photon processes, i.e. AB → γγX (where A, B are hadrons, in particular protons or antiprotons), at high energies. As regards the production of a γγ continuum, we show that, under specific conditions, the study of such processes might provide some information on the subprocess gg → γγ, involving a quark box. It is also suggested to use those processes in order to systematically look for heavy C = + structures (quarkonium states, gluonia, etc.) showing up in the γγ channel. Inverse photon-photon processes might thus become a new and fertile area of investigation in high-energy physics, provided the difficult problem of discriminating between direct photons and indirect ones can be handled in a satisfactory way.
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
LinvPy : a Python package for linear inverse problems
Beaud, Guillaume François Paul
2016-01-01
The goal of this project is to make a Python package including the tau-estimator algorithm to solve linear inverse problems. The package must be distributed, well documented, easy to use and easy to extend for future developers.
Point source reconstruction principle of linear inverse problems
Terazono, Yasushi; Matani, Ayumu; Fujimaki, Norio; Murata, Tsutomu
2010-01-01
Exact point source reconstruction for underdetermined linear inverse problems with a block-wise structure was studied. In a block-wise problem, elements of a source vector are partitioned into blocks. Accordingly, a leadfield matrix, which represents the forward observation process, is also partitioned into blocks. A point source is a source having only one nonzero block. An example of such a problem is current distribution estimation in electroencephalography and magnetoencephalography, where a source vector represents a vector field and a point source represents a single current dipole. In this study, the block-wise norm, a block-wise extension of the l_p-norm, was defined as the family of cost functions of the inverse method. The main result is that a set of three conditions was found to be necessary and sufficient for block-wise norm minimization to ensure exact point source reconstruction for any leadfield matrix that admits such reconstruction. The block-wise norm that satisfies the conditions is the sum of the costs of all the observations of source blocks, or in other words, the block-wisely extended leadfield-weighted l_1-norm. Additional results are that minimization of such a norm always provides block-wisely sparse solutions and that its solutions form cones in source space.
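A hedged numerical sketch of the block-sparse recovery setting described above: the paper characterizes which norms guarantee exact reconstruction, while the proximal-gradient (group soft-thresholding) solver, matrix sizes, and regularization weight below are assumptions chosen only to illustrate block-wise l1 minimization on an underdetermined system.

```python
import numpy as np

# Recover a source vector with a single nonzero block ("point source") from
# underdetermined observations b = A x, by minimizing a least-squares data
# fit plus a block-wise l1 penalty, via proximal gradient descent.
rng = np.random.default_rng(0)
m, n, bs = 8, 12, 3                        # 4 blocks of size 3, 8 observations
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[6:9] = 1.0                          # the "point source": block index 2
b = A @ x_true

t = 0.9 / np.linalg.norm(A, 2) ** 2        # step size from the spectral norm
lam = 0.05
x = np.zeros(n)
for _ in range(2000):
    z = x - t * A.T @ (A @ x - b)          # gradient step on the data fit
    for g in range(n // bs):               # block-wise soft thresholding
        blk = z[g * bs:(g + 1) * bs]
        nrm = np.linalg.norm(blk)
        scale = max(0.0, 1.0 - t * lam / nrm) if nrm > 0 else 0.0
        x[g * bs:(g + 1) * bs] = scale * blk

norms = [np.linalg.norm(x[g * bs:(g + 1) * bs]) for g in range(n // bs)]
print(np.argmax(norms))                    # index of the dominant block
```

With a generic leadfield and enough observations, the nonzero block is localized correctly, consistent with the block-wise sparsity property stated in the abstract.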
Alkhalifah, Tariq Ali
2012-09-25
Traveltime inversion focuses on the geometrical features of the waveform (traveltimes), which are generally smooth, and thus tends to provide averaged (smoothed) information about the model. On the other hand, general waveform inversion uses additional elements of the wavefield, including amplitudes, to extract higher-resolution information, but this comes at the cost of introducing non-linearity to the inversion operator, complicating the convergence process. We use unwrapped phase-based objective functions in waveform inversion as a link between the two general types of inversions in a domain in which such contributions to the inversion process can be easily identified and controlled. The instantaneous traveltime is a measure of the average traveltime of the energy in a trace as a function of frequency. It unwraps the phase of wavefields, yielding far less non-linearity in the objective function than that experienced with conventional wavefields, yet it still holds most of the critical wavefield information in its frequency dependency. However, it suffers from non-linearity introduced by the model (or reflectivity), as reflections from independent events in our model interact with each other. Unwrapping the phase of such a model can mitigate this non-linearity as well. Specifically, a simple modification to the inverted domain (or model) can reduce the effect of the model-induced non-linearity and, thus, make the inversion more convergent. Simple numerical examples demonstrate these assertions.
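The core idea behind unwrapped-phase objectives can be illustrated in a few lines: for a pulse delayed by a fixed number of samples, the unwrapped spectral phase is linear in frequency, and its slope encodes the traveltime. The signal and setup below are illustrative assumptions, not the authors' experiments.

```python
import numpy as np

# A pulse delayed by n0 samples has spectrum exp(-2*pi*i*k*n0/N): its phase,
# once unwrapped across frequency bins, is a straight line whose slope
# recovers the delay (a frequency-domain traveltime measure).
N, n0 = 64, 5
x = np.zeros(N)
x[n0] = 1.0                        # impulse arriving at sample n0

X = np.fft.fft(x)
phase = np.unwrap(np.angle(X))     # unwrap the phase across frequency bins
slope = np.polyfit(np.arange(N), phase, 1)[0]
delay = -slope * N / (2 * np.pi)   # recover the delay from the phase slope

print(delay)                       # ~5.0
```

The wrapped phase would jump by 2π and make the same fit meaningless, which is the non-linearity the unwrapping removes.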
Alkhalifah, Tariq Ali; Choi, Yun Seok
2012-01-01
Traveltime inversion focuses on the geometrical features of the waveform (traveltimes), which are generally smooth, and thus tends to provide averaged (smoothed) information about the model. On the other hand, general waveform inversion uses additional elements of the wavefield, including amplitudes, to extract higher-resolution information, but this comes at the cost of introducing non-linearity to the inversion operator, complicating the convergence process. We use unwrapped phase-based objective functions in waveform inversion as a link between the two general types of inversions in a domain in which such contributions to the inversion process can be easily identified and controlled. The instantaneous traveltime is a measure of the average traveltime of the energy in a trace as a function of frequency. It unwraps the phase of wavefields, yielding far less non-linearity in the objective function than that experienced with conventional wavefields, yet it still holds most of the critical wavefield information in its frequency dependency. However, it suffers from non-linearity introduced by the model (or reflectivity), as reflections from independent events in our model interact with each other. Unwrapping the phase of such a model can mitigate this non-linearity as well. Specifically, a simple modification to the inverted domain (or model) can reduce the effect of the model-induced non-linearity and, thus, make the inversion more convergent. Simple numerical examples demonstrate these assertions.
Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel
El-Gebeily, M.; Yushau, B.
2008-01-01
In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equations, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…
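The spreadsheet workflow described above (MINVERSE/MMULT formulas in MS Excel) has a direct analogue in any linear-algebra library; a minimal NumPy sketch of the same two routes to a solution:

```python
import numpy as np

# Solve A x = b two ways: directly (preferred numerically), and via an
# explicit inverse, mirroring Excel's MMULT(MINVERSE(A), b) recipe.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)          # direct solve, no inverse formed
A_inv = np.linalg.inv(A)           # the MINVERSE analogue
x_via_inv = A_inv @ b              # the MMULT(MINVERSE(A), b) analogue

print(x)                           # [0.8 1.4]
```

Both routes agree here, though forming the explicit inverse is less stable and more costly for large systems, which is why `solve` is the idiomatic choice.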
The possibilities of linearized inversion of internally scattered seismic data
Aldawood, Ali; Alkhalifah, Tariq Ali; Hoteit, Ibrahim; Zuberi, Mohammad; Turkiyyah, George
2014-01-01
Least-squares migration is an iterative linearized inversion scheme that tends to suppress migration artifacts and enhance the spatial resolution of the migrated image. However, standard least-squares migration, based on imaging single-scattering energy, may not be able to enhance events that are mainly illuminated by internal multiples, such as vertical and nearly vertical faults. To alleviate this problem, we propose a linearized inversion framework to migrate internally multiply scattered energy. We applied this least-squares migration of internal multiples to image a vertical fault. Tests on synthetic data demonstrate the ability of the proposed method to resolve a vertical fault plane that is poorly resolved by least-squares imaging using primaries only. We also demonstrate the robustness of the proposed scheme in the presence of white Gaussian random observational noise and in the case of imaging the fault plane using inaccurate migration velocities.
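A generic numerical sketch of the linearized-inversion idea that least-squares migration instantiates: iteratively fit a model by applying a linear forward operator and its adjoint to the data residual. The random operator below is a stand-in assumption, not an actual migration kernel.

```python
import numpy as np

# Gradient descent on ||L m - d||^2: each iteration "demigrates" the current
# model (L m), forms the residual, and "migrates" it back (L^T), which is the
# structure of iterative least-squares migration.
rng = np.random.default_rng(1)
L = rng.standard_normal((20, 6))       # stand-in linear forward operator
m_true = rng.standard_normal(6)
d = L @ m_true                         # noiseless synthetic data

m = np.zeros(6)
step = 1.0 / np.linalg.norm(L, 2) ** 2
for _ in range(3000):
    m = m + step * L.T @ (d - L @ m)   # adjoint applied to the data residual

print(np.linalg.norm(L @ m - d))       # residual shrinks toward zero
```

A single application of the adjoint (one migration) gives a blurred image; the iterations progressively deblur it, which is the resolution enhancement the abstract refers to.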
The possibilities of linearized inversion of internally scattered seismic data
Aldawood, Ali
2014-08-05
Least-squares migration is an iterative linearized inversion scheme that tends to suppress migration artifacts and enhance the spatial resolution of the migrated image. However, standard least-squares migration, based on imaging single-scattering energy, may not be able to enhance events that are mainly illuminated by internal multiples, such as vertical and nearly vertical faults. To alleviate this problem, we propose a linearized inversion framework to migrate internally multiply scattered energy. We applied this least-squares migration of internal multiples to image a vertical fault. Tests on synthetic data demonstrate the ability of the proposed method to resolve a vertical fault plane that is poorly resolved by least-squares imaging using primaries only. We also demonstrate the robustness of the proposed scheme in the presence of white Gaussian random observational noise and in the case of imaging the fault plane using inaccurate migration velocities.
A Sparse Approximate Inverse Preconditioner for Nonsymmetric Linear Systems
Benzi, M.; Tůma, Miroslav
1998-01-01
Roč. 19, č. 3 (1998), s. 968-994 ISSN 1064-8275 R&D Projects: GA ČR GA201/93/0067; GA AV ČR IAA230401 Keywords: large sparse systems * iterative methods * preconditioning * approximate inverse * sparse linear systems * sparse matrices * incomplete factorizations * conjugate gradient-type methods Subject RIV: BA - General Mathematics Impact factor: 1.378, year: 1998
Lebrun, D.
1997-05-22
The aim of the dissertation is the linearized inversion of multicomponent seismic data for 3D elastic horizontally stratified media, using the Born approximation. A Jacobian matrix is constructed; it will be used to model seismic data from elastic parameters. The inversion technique, relying on singular value decomposition (SVD) of the Jacobian matrix, is described. Next, the resolution of the inverted elastic parameters is quantitatively studied. A first use of the technique is shown in the framework of an evaluation of a sea-bottom acquisition (synthetic data). Finally, a real data set acquired with a conventional marine technique is inverted. (author) 70 refs.
On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction
Crop, F; Thierens, H; Rompaye, B Van; Paelinck, L; Vakaet, L; Wagter, C De
2008-01-01
The purpose of this study was both to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiographic (Gafchromic) film. Sometimes an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve, with the dose as a function of OD (inverse regression), or sometimes with OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because the heteroscedasticity of the data is not taken into account. This could lead to erroneous results originating from the calibration process itself and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method to create calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte-Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry.
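The OLS-versus-WLS contrast drawn above can be sketched on synthetic calibration data whose spread grows with dose (heteroscedasticity). The linear dose-OD model, noise law, and dose range are illustrative assumptions, not the film data of the study.

```python
import numpy as np

# Calibration data with dose-dependent noise: OLS treats all points equally,
# while WLS weights each residual by 1/sigma, as the correct model requires.
rng = np.random.default_rng(42)
dose = np.linspace(10.0, 400.0, 40)           # cGy
sigma = 0.004 * (0.5 + 0.02 * dose)           # OD noise sd grows with dose
od = 0.004 * dose + rng.normal(0.0, 1.0, 40) * sigma

ols_slope = np.polyfit(dose, od, 1)[0]               # ignores heteroscedasticity
wls_slope = np.polyfit(dose, od, 1, w=1.0 / sigma)[0]  # weights residuals by 1/sigma

print(ols_slope, wls_slope)                   # both near the true 0.004
```

On a single synthetic draw both estimates are close to the truth; the practical difference, as the abstract reports, is that WLS gives correctly calibrated prediction uncertainty and smaller bias over repeated calibrations.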
Treating experimental data of inverse kinetic method by unitary linear regression analysis
Zhao Yusen; Chen Xiaoliang
2009-01-01
The theory of treating experimental data from the inverse kinetic method by unitary linear regression analysis is described. Not only the reactivity but also the effective neutron source intensity can be calculated by this method. A computer code was compiled based on the inverse kinetic method and unitary linear regression analysis. Data from the zero-power facility BFS-1 in Russia were processed and the results were compared. The results show that the reactivity and the effective neutron source intensity can be obtained correctly by treating experimental data of the inverse kinetic method using unitary linear regression analysis, and that the precision of the reactivity measurement is improved. The central element efficiency can be calculated by using the reactivity. The results also show that the effect on the reactivity measurement caused by an external neutron source should be considered when the reactor power is low and the intensity of the external neutron source is strong. (authors)
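A hedged sketch of the regression step described above: once the inverse-kinetics balance is arranged in a form linear in the unknown reactivity ρ and effective source intensity S (schematically y = ρ·u + S·v), both follow from a single least-squares fit. The signals u, v, y below are synthetic placeholders, not the BFS-1 data or the authors' exact formulation.

```python
import numpy as np

# Fit two physical parameters at once from a linear-in-parameters model:
# y = rho * u + S * v, solved by ordinary least squares.
rng = np.random.default_rng(7)
n = 200
u = rng.uniform(0.5, 2.0, n)        # regressor multiplying the reactivity
v = np.ones(n)                      # regressor multiplying the source term
rho_true, S_true = -0.35, 1.2e3
y = rho_true * u + S_true * v + rng.normal(0.0, 0.05, n)

X = np.column_stack([u, v])
(rho_est, S_est), *_ = np.linalg.lstsq(X, y, rcond=None)
print(rho_est, S_est)               # close to -0.35 and 1200
```

Fitting both parameters jointly is what lets the method separate the source contribution from the reactivity, which a pointwise inverse-kinetics evaluation cannot do.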
Linearized versus non-linear inverse methods for seismic localization of underground sources
Oh, Geok Lian; Jacobsen, Finn
2013-01-01
The problem of localization of underground sources from seismic measurements detected by several geophones located on the ground surface is addressed. Two main approaches to the solution of the problem are considered: a beamforming approach that is derived from the linearized inversion problem, a...
Linear Algebra and Image Processing
Allali, Mohamed
2010-01-01
We use the computing technology of digital image processing (DIP) to enhance the teaching of linear algebra so as to make the course more visual and interesting. Certainly, this visual approach of using technology to link linear algebra to DIP is interesting and unexpected to both students and faculty. (Contains 2 tables and 11 figures.)
Inverse osmotic process for radioactive laundry waste
Ebara, K; Takahashi, S; Sugimoto, Y; Yusa, H; Hyakutake, H
1977-01-07
Purpose: To effectively recover the processing amount reduced in a continuous treatment. Method: Laundry waste containing radioactive substances discharged from a nuclear power plant is processed in an inverse osmotic process while adding starch digesting enzymes such as amylase and takadiastase, as well as soft spherical bodies such as sponge balls of a particle diameter capable of flowing in the flow of the liquid wastes along the inverse osmotic membrane pipe and having such a softness and roundness as not to damage the inverse osmotic membrane. This process can remove the floating materials such as thread dusts or hairs deposited on the membrane surface by the action of the soft elastic balls and remove paste or the like through decomposition by the digesting enzymes. Consequently, effective recovery can be attained for the reduced processing amount.
Inverse osmotic process for radioactive laundry waste
Ebara, Katsuya; Takahashi, Sankichi; Sugimoto, Yoshikazu; Yusa, Hideo; Hyakutake, Hiroshi.
1977-01-01
Purpose: To effectively recover the processing amount reduced in a continuous treatment. Method: Laundry waste containing radioactive substances discharged from a nuclear power plant is processed in an inverse osmotic process while adding starch digesting enzymes such as amylase and takadiastase, as well as soft spherical bodies such as sponge balls of a particle diameter capable of flowing in the flow of the liquid wastes along the inverse osmotic membrane pipe and having such a softness and roundness as not to damage the inverse osmotic membrane. This process can remove the floating materials such as thread dusts or hairs deposited on the membrane surface by the action of the soft elastic balls and remove paste or the like through decomposition by the digesting enzymes. Consequently, effective recovery can be attained for the reduced processing amount. (Furukawa, Y.)
Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems
Benzi, M. [Universita di Bologna (Italy)]; Tuma, M. [Inst. of Computer Sciences, Prague (Czech Republic)]
1996-12-31
A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.
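A minimal dense sketch of the sparse-approximate-inverse idea behind such preconditioners (the sparsity pattern, the column-by-column Frobenius-norm minimization, and the plain Richardson iteration below are illustrative choices, not the authors' factorized algorithm):

```python
import numpy as np

# Build M ~ A^{-1} column by column: each column m_j minimizes
# ||A m_j - e_j||_2 subject to m_j living on the sparsity pattern of A's
# column j. M then accelerates a simple preconditioned iteration for Ax = b.
n = 6
A = np.diag(np.full(n, 4.0)) + np.diag(np.full(n - 1, -1.0), 1) \
    + np.diag(np.full(n - 1, -1.0), -1)          # diagonally dominant test matrix

M = np.zeros((n, n))
for j in range(n):
    J = np.nonzero(A[:, j])[0]                   # allowed nonzero positions
    e = np.zeros(n)
    e[j] = 1.0
    mj, *_ = np.linalg.lstsq(A[:, J], e, rcond=None)
    M[J, j] = mj

b = np.ones(n)
x = np.zeros(n)
for _ in range(60):
    x = x + M @ (b - A @ x)                      # preconditioned Richardson

print(np.linalg.norm(b - A @ x))                 # converges quickly
```

In practice M would be applied inside a Krylov method such as GMRES or Bi-CGSTAB; the point of the construction is that applying M is just a sparse matrix-vector product, which parallelizes well.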
Probabilistic inversion for chicken processing lines
Cooke, Roger M.; Nauta, Maarten; Havelaar, Arie H.; Fels, Ine van der
2006-01-01
We discuss an application of probabilistic inversion techniques to a model of campylobacter transmission in chicken processing lines. Such techniques are indicated when we wish to quantify a model which is new and perhaps unfamiliar to the expert community. In this case there are no measurements for estimating model parameters, and experts are typically unable to give a considered judgment. In such cases, experts are asked to quantify their uncertainty regarding variables which can be predicted by the model. The experts' distributions (after combination) are then pulled back onto the parameter space of the model, a process termed 'probabilistic inversion'. This study illustrates two such techniques, iterative proportional fitting (IPF) and PARmeter fitting for uncertain models (PARFUM). In addition, we illustrate how expert judgement on predicted observable quantities in combination with probabilistic inversion may be used for model validation and/or model criticism
Linearized inversion frameworks toward high-resolution seismic imaging
Aldawood, Ali
2016-09-01
…internally multiply scattered seismic waves to obtain highly resolved images delineating vertical faults that are otherwise not easily imaged by primaries. Seismic interferometry is conventionally based on the cross-correlation and convolution of seismic traces to transform seismic data from one acquisition geometry to another. The conventional interferometric transformation yields virtual data that suffer from low temporal resolution, wavelet distortion, and correlation/convolution artifacts. I therefore incorporate a least-squares datuming technique to interferometrically transform vertical-seismic-profile surface-related multiples to surface-seismic-profile primaries. This yields redatumed data with high temporal resolution and fewer artifacts, which are subsequently imaged to obtain highly resolved subsurface images. Tests on synthetic examples demonstrate the efficiency of the proposed techniques, yielding highly resolved migrated sections compared with images obtained by imaging conventionally redatumed data. I further advance the recently developed cost-effective Generalized Interferometric Multiple Imaging procedure, which aims to image not only first-order but also higher-order multiples. I formulate this procedure as a linearized inversion framework and solve it as a least-squares problem. Tests of the least-squares Generalized Interferometric Multiple Imaging framework on synthetic datasets demonstrate that it can provide highly resolved migrated images and delineate vertical fault planes compared with the standard procedure. The results support the assertion that this linearized inversion framework can illuminate subsurface zones that are mainly illuminated by internally scattered energy.
The linearized inversion of the generalized interferometric multiple imaging
Aldawood, Ali
2016-09-06
The generalized interferometric multiple imaging (GIMI) procedure can be used to image duplex waves and other higher-order internal multiples. Imaging duplex waves could help illuminate subsurface zones that are not easily illuminated by primaries, such as vertical and nearly vertical fault planes, and salt flanks. To image first-order internal multiples, the GIMI framework consists of three datuming steps, followed by applying the zero-lag cross-correlation imaging condition. However, the standard GIMI procedure yields migrated images that suffer from low spatial resolution, migration artifacts, and cross-talk noise. To alleviate these problems, we propose a least-squares GIMI framework in which we formulate the first two steps as a linearized inversion problem when imaging first-order internal multiples. Tests on synthetic datasets demonstrate the ability to localize subsurface scatterers in their true positions, and to delineate a vertical fault plane using the proposed method. We also demonstrate the robustness of the proposed framework when imaging the scatterers or the vertical fault plane with erroneous migration velocities.
Xin-Jia Meng
2015-01-01
Multidisciplinary reliability is an important part of reliability-based multidisciplinary design optimization (RBMDO). However, it usually involves a considerable amount of computation. The purpose of this paper is to improve the computational efficiency of multidisciplinary inverse reliability analysis. A multidisciplinary inverse reliability analysis method based on collaborative optimization with a combination of linear approximations (CLA-CO) is proposed in this paper. In the proposed method, the multidisciplinary reliability assessment problem is first transformed into a problem of most probable failure point (MPP) search of inverse reliability, and then the process of searching for the MPP of multidisciplinary inverse reliability is performed within the framework of CLA-CO. This method improves the MPP searching process through two elements. One is treating the discipline analyses as equality constraints in the subsystem optimization, and the other is using linear approximations corresponding to subsystem responses as the replacement of the consistency equality constraint in system optimization. With these two elements, the proposed method realizes the parallel analysis of each discipline, and it also has a higher computational efficiency. Additionally, there are no difficulties in applying the proposed method to problems with non-normally distributed variables. One mathematical test problem and an electronic packaging problem are used to demonstrate the effectiveness of the proposed method.
Periodic linear differential stochastic processes
Kwakernaak, H.
1975-01-01
Periodic linear differential processes are defined and their properties are analyzed. Equivalent representations are discussed, and the solutions of related optimal estimation problems are given. An extension is presented of Kailath and Geesey’s [1] results concerning the innovations representation
Brown, Malcolm
2009-01-01
Inversions are fascinating phenomena. They are reversals of the normal or expected order. They occur across a wide variety of contexts. What do inversions have to do with learning spaces? The author suggests that they are a useful metaphor for the process that is unfolding in higher education with respect to education. On the basis of…
Multiple scattering processes: inverse and direct
Kagiwada, H.H.; Kalaba, R.; Ueno, S.
1975-01-01
The purpose of the work is to formulate inverse problems in radiative transfer, to introduce the functions b and h as parameters of internal intensity in homogeneous slabs, and to derive initial value problems to replace the more traditional boundary value problems and integral equations of multiple scattering with high computational efficiency. The discussion covers multiple scattering processes in a one-dimensional medium; isotropic scattering in homogeneous slabs illuminated by parallel rays of radiation; the theory of functions b and h in homogeneous slabs illuminated by isotropic sources of radiation either at the top or at the bottom; inverse and direct problems of multiple scattering in slabs including internal sources; multiple scattering in inhomogeneous media, with particular reference to inverse problems for estimation of layers and total thickness of inhomogeneous slabs and to multiple scattering problems with Lambert's law and specular reflectors underlying slabs; and anisotropic scattering with reduction of the number of relevant arguments through axially symmetric fields and expansion in Legendre functions. Gaussian quadrature data for a seven point formula, a FORTRAN program for computing the functions b and h, and tables of these functions supplement the text
An investigation on the solutions for the linear inverse problem in gamma ray tomography
Araujo, Bruna G.M.; Dantas, Carlos C.; Santos, Valdemir A. dos; Finkler, Christine L.L.; Oliveira, Eric F. de; Melo, Silvio B.; Santos, M. Graca dos
2009-01-01
In this paper the results obtained in single-beam gamma ray tomography are investigated according to the direct problem formulation and the applied solution for the linear system of equations. For image reconstruction, algebraic computational algorithms are used. The sparse under- and over-determined linear systems of equations were analyzed. Built-in functions of the Matlab software were applied and optimal solutions were investigated. Experimentally, a section of the tube is scanned from various positions and at different angles. The solution, to find the vector of coefficients μ from the vector of measured p values through inversion of the W matrix, constitutes an inverse problem. An industrial tomography process requires a numerical solution of the system of equations. The definition of an inverse problem according to Hadamard is considered, as well as the requirement of a well-posed problem to find stable solutions. The formulation of the basis function and the computational algorithm to structure the weight matrix W were analyzed. For a full-rank W matrix the obtained solution is unique, as expected. Total Least Squares was implemented, whose theory and computational algorithm give adequate treatment for the problems due to non-unique solutions of the system of equations. Stability of the solution was investigated by means of a regularization technique, and the comparison shows that it improves the results. An optimal solution as a function of the image quality, computation time and minimum residuals was quantified. The corresponding reconstructed images are shown in 3D graphics in order to compare with the solution. (author)
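The stabilizing effect of regularization mentioned above can be sketched on a deliberately ill-conditioned system. The Hilbert matrix below stands in for a tomography weight matrix W, and the Tikhonov form and noise level are illustrative assumptions:

```python
import numpy as np

# For an ill-conditioned system W mu = p, a plain solve amplifies the
# measurement noise, while Tikhonov regularization,
#   min ||W mu - p||^2 + alpha^2 ||mu||^2,
# solved via its normal equations, stays stable.
rng = np.random.default_rng(3)
n = 8
W = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
mu_true = np.ones(n)
p = W @ mu_true + rng.normal(0.0, 1e-4, n)     # noisy path measurements

mu_naive = np.linalg.solve(W, p)               # noise is blown up
alpha = 1e-3
mu_reg = np.linalg.solve(W.T @ W + alpha**2 * np.eye(n), W.T @ p)

print(np.linalg.norm(mu_naive - mu_true), np.linalg.norm(mu_reg - mu_true))
```

The regularization parameter trades bias against noise amplification; in practice it is chosen from the data (e.g. by the discrepancy principle or an L-curve), as the quality/residual trade-off quantified in the paper suggests.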
Neutron inverse kinetics via Gaussian Processes
Picca, Paolo; Furfaro, Roberto
2012-01-01
Highlights: ► A novel technique for the interpretation of experiments in ADSs is presented. ► The technique is based on Bayesian regression, implemented via Gaussian Processes. ► GPs overcome the limits of classical methods based on the PK approximation. ► Results compare GP and ANN performance, underlining similarities and differences. - Abstract: The paper introduces the application of Gaussian Processes (GPs) to determine the subcriticality level in accelerator-driven systems (ADSs) through the interpretation of pulsed experiment data. ADSs have peculiar kinetic properties due to their special core design. For this reason, classical inversion techniques based on point kinetics (PK) generally fail to generate an accurate estimate of reactor subcriticality. Similarly to Artificial Neural Networks (ANNs), Gaussian Processes can be successfully trained to learn the underlying inverse neutron kinetic model and, as such, they are not limited by the model choice. Importantly, GPs are strongly rooted in Bayes' theorem, which makes them a powerful tool for statistical inference. Here, GPs have been designed and trained on a set of kinetics models (e.g. point kinetics and multi-point kinetics) for homogeneous and heterogeneous settings. The results presented in the paper show that GPs are very efficient and accurate in predicting the reactivity for ADS-like systems. The variance computed via GPs may provide an indication on how to generate additional data as a function of the desired accuracy.
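A from-scratch sketch of the regression machinery GPs provide. The RBF kernel, hyperparameters, and sine test function are illustrative assumptions; the paper trains on neutron-kinetics models rather than a toy function.

```python
import numpy as np

# Gaussian-process regression in its simplest form: "training" is one linear
# solve against the kernel matrix; prediction is a kernel-weighted sum.
def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)

x_train = np.linspace(0.0, 1.0, 12)
y_train = np.sin(2 * np.pi * x_train)
noise = 1e-6                                    # jitter / noise variance

K = rbf(x_train, x_train) + noise * np.eye(12)
alpha = np.linalg.solve(K, y_train)             # training step

x_test = np.array([0.13, 0.61])
mean = rbf(x_test, x_train) @ alpha             # posterior mean prediction
print(mean, np.sin(2 * np.pi * x_test))
```

The posterior variance (omitted here for brevity) follows from the same kernel matrices, and is exactly the quantity the abstract proposes to use for deciding where additional data are needed.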
Linear parallel processing machines I
Von Kunze, M
1984-01-01
As is well known, non-context-free grammars for generating formal languages happen to be of a certain intrinsic computational power that presents serious difficulties to efficient parsing algorithms as well as to the development of an algebraic theory of context-sensitive languages. In this paper a framework is given for the investigation of the computational power of formal grammars, in order to start a thorough analysis of grammars consisting of derivation rules of the form aB → A_1 … A_n b_1 … b_m. These grammars may be thought of as automata by means of parallel processing, if one considers the variables as operators acting on the terminals while reading them right-to-left. This kind of automata and their 2-dimensional programming language prove to be useful by allowing a concise linear-time algorithm for integer multiplication. Linear parallel processing machines (LP-machines), which are, in their general form, equivalent to Turing machines, include finite automata and pushdown automata (with states encoded) as special cases. Bounded LP-machines yield deterministic accepting automata for nondeterministic context-free languages, and they define an interesting class of context-sensitive languages. A characterization of this class in terms of generating grammars is established by using derivation trees with crossings as a helpful tool. From the algebraic point of view, deterministic LP-machines are effectively represented semigroups with distinguished subsets. Concerning the dualism between generating and accepting devices of formal languages within the algebraic setting, the concept of accepting automata turns out to reduce essentially to embeddability in an effectively represented extension monoid, even in the classical cases.
Spatial Processes in Linear Ordering
von Hecker, Ulrich; Klauer, Karl Christoph; Wolf, Lukas; Fazilat-Pour, Masoud
2016-01-01
Memory performance in linear order reasoning tasks (A > B, B > C, C > D, etc.) shows quicker and more accurate responses to queries on wider (AD) than narrower (AB) pairs on a hypothetical linear mental model (A -- B -- C -- D). While indicative of an analogue representation, research so far has not provided positive evidence for spatial…
Alvarez-Estrada, R.F.
1979-01-01
A comprehensive review of the inverse scattering solution of certain non-linear evolution equations of physical interest in one space dimension is presented. We explain in some detail the interrelated techniques which allow one to linearize exactly the following equations: (1) the Korteweg–de Vries equation; (2) the non-linear Schrödinger equation; (3) the modified Korteweg–de Vries equation; (4) the sine-Gordon equation. We concentrate on discussing the pairs of linear operators which accomplish such an exact linearization and on the solution of the associated initial value problem. The application of the method to other non-linear evolution equations is reviewed very briefly.
Inverse Boundary Value Problem for Non-linear Hyperbolic Partial Differential Equations
Nakamura, Gen; Vashisth, Manmohan
2017-01-01
In this article we are concerned with an inverse boundary value problem for a non-linear wave equation of divergence form with space dimension $n \geq 3$. This non-linear wave equation has a trivial solution, i.e. the zero solution. By linearizing this equation at the trivial solution, we obtain the usual linear isotropic wave equation with speed $\sqrt{\gamma(x)}$ at each point $x$ in a given spatial domain. For any small solution $u=u(t,x)$ of this non-linear equation, we have the linear isotr...
Continuity and general perturbation of the Drazin inverse for closed linear operators
N. Castro González
2002-01-01
We study perturbations and continuity of the Drazin inverse of a closed linear operator A and obtain explicit error estimates in terms of the gap between closed operators and the gap between ranges and nullspaces of operators. The results are used to derive a theorem on the continuity of the Drazin inverse for closed operators and to describe the asymptotic behavior of operator semigroups.
Two-Dimensional Linear Inversion of GPR Data with a Shifting Zoom along the Observation Line
Raffaele Persico
2017-09-01
Linear inverse scattering problems can be solved by regularized inversion of a matrix, whose calculation and inversion may require significant computing resources, in particular a significant amount of RAM. This effort depends on the extent of the investigation domain: when the domain becomes electrically large, a large amount of data must be gathered and a large number of unknowns must be sought, which leads in turn to the inversion of excessively large matrices. Here, we consider the problem of a ground-penetrating radar (GPR) survey in two-dimensional (2D) geometry, with antennas at an electrically short distance from the soil. In particular, we present a strategy for affording the inversion of large investigation domains, based on a shifting zoom procedure. The proposed strategy was successfully validated on experimental radar data.
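The regularized matrix inversion step this abstract refers to can be sketched with Tikhonov regularization on a small, deliberately ill-conditioned smoothing operator. The operator, true model, and regularization weight below are all invented for the sketch; they stand in for the discretized scattering operator, not for the paper's actual formulation.

```python
import numpy as np

# Hypothetical discretized linear operator: a Gaussian smoothing kernel,
# which is mildly ill-conditioned like many scattering operators.
n = 60
idx = np.arange(n)
A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 3.0) ** 2)

rng = np.random.default_rng(0)
x_true = np.zeros(n)
x_true[15], x_true[40] = 1.0, -0.5       # two point scatterers (toy)
y = A @ x_true + 1e-6 * rng.standard_normal(n)   # noisy data

# Tikhonov-regularized inversion: x = (A^T A + lam I)^{-1} A^T y
lam = 1e-6
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

A shifting-zoom strategy would apply this same solve repeatedly over overlapping sub-domains instead of one large matrix, which is the memory saving the abstract describes.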
A Projected Non-linear Conjugate Gradient Method for Interactive Inverse Kinematics
Engell-Nørregård, Morten; Erleben, Kenny
2009-01-01
Inverse kinematics is the problem of posing an articulated figure to reach a desired goal, without regard to inertia and forces. Joint limits are modeled as bounds on individual degrees of freedom, leading to a box-constrained optimization problem. We present a projected non-linear conjugate gradient optimization method suitable for box-constrained inverse kinematics problems. We show an application to inverse kinematics positioning of a human figure. Performance is measured and compared to a traditional Jacobian transpose method. Visual quality of the developed method...
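The projection idea at the heart of this method can be sketched with plain projected gradient descent (a simplification: the paper uses conjugate gradient directions, not steepest descent). The quadratic objective and joint limits below are toy assumptions.

```python
import numpy as np

def project(x, lo, hi):
    """Clamp joint parameters to their box limits."""
    return np.clip(x, lo, hi)

def projected_gradient(grad, x0, lo, hi, step=0.1, iters=200):
    """Minimise an objective by gradient steps, each followed by projection."""
    x = project(np.asarray(x0, float), lo, hi)
    for _ in range(iters):
        x = project(x - step * grad(x), lo, hi)
    return x

# toy objective: quadratic distance to a target pose lying outside the joint limits
target = np.array([2.0, -1.5])
grad = lambda x: x - target                     # gradient of 0.5*||x - target||^2
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
x_star = projected_gradient(grad, np.zeros(2), lo, hi)
```

Because the target is unreachable, the iterates settle on the nearest feasible pose on the boundary of the box, which is exactly the behavior joint limits should induce.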
Food processing with linear accelerators
Wilmer, M.E.
1987-01-01
The application of irradiation techniques to the preservation of foods is reviewed. The utility of the process for several important food groups is discussed in the light of work being done in a number of institutions. Recent findings in food chemistry are used to illustrate some of the potential advantages in using high power accelerators in food processing. Energy and dosage estimates are presented for several cases to illustrate the accelerator requirements and to shed light on the economics of the process
On The Structure of The Inverse of a Linear Constant Multivariable ...
On The Structure of The Inverse of a Linear Constant Multivariable System. ... It is shown that the use of this representation has certain advantages in the design of multivariable feedback systems. Typical examples are considered to indicate the corresponding applications. Keywords: stability functions, multivariable ...
Inverse chaos synchronization in linearly and nonlinearly coupled systems with multiple time-delays
Shahverdiev, E.M.; Hashimov, R.H.; Nuriev, R.A.; Hashimova, L.H.; Huseynova, E.M.; Shore, K.A.
2005-04-01
We report on inverse chaos synchronization between two unidirectionally linearly and nonlinearly coupled chaotic systems with multiple time-delays and find the existence and stability conditions for different synchronization regimes. We also study the effect of parameter mismatches on synchronization regimes. The method is tested on the famous Ikeda model. Numerical simulations fully support the analytical approach. (author)
Oh, Geok Lian
properties such as the elastic wave speeds and soil densities. One processing method is to cast the estimation problem as an inverse problem to solve for the unknown material parameters. The forward models for the seismic signals used in the literature include ray tracing methods that consider only...... density values of the discretized ground medium, which leads to time-consuming computations and unstable behaviour of the inversion process. In addition, the geophysics inverse problem is generally ill-posed due to a non-exact forward model that introduces errors. The Bayesian inversion method through...... the first arrivals of the reflected compressional P-waves from the subsurface structures, or 3D elastic wave models that model all the seismic wave components. The ray tracing forward model formulation is linear, whereas the full 3D elastic wave model leads to a nonlinear inversion problem. In this Ph...
Fukuda, J.; Johnson, K. M.
2009-12-01
Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is no strong a priori information on the geometry of the fault that produced the earthquake, and the data are not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through a combination of analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress
Sparse contrast-source inversion using linear-shrinkage-enhanced inexact Newton method
Desmal, Abdulla
2014-07-01
A contrast-source inversion scheme is proposed for microwave imaging of domains with sparse content. The scheme uses inexact Newton and linear shrinkage methods to account for the nonlinearity and ill-posedness of the electromagnetic inverse scattering problem, respectively. Thresholded shrinkage iterations are accelerated using a preconditioning technique. Additionally, during Newton iterations, the weight of the penalty term is reduced consistently with the quadratic convergence of the Newton method to increase accuracy and efficiency. Numerical results demonstrate the applicability of the proposed method.
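The "linear shrinkage" step in this scheme is the soft-thresholding operator. As a hedged, simplified sketch, plain ISTA (iterative shrinkage-thresholding) on a linear sparse-recovery problem shows the operator at work; the paper wraps this idea inside inexact Newton iterations for the nonlinear scattering operator, which is not reproduced here. The matrix, sparsity pattern, and penalty weight are invented for illustration.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||x||_1: shrink each entry toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, iters=1000):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - (A.T @ (A @ x - y)) / L, lam / L)
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))         # underdetermined toy operator
x_true = np.zeros(100)
x_true[[3, 30, 70]] = [1.0, -2.0, 1.5]     # sparse "contrast source"
y = A @ x_true
x_hat = ista(A, y, lam=0.01)
```

Even with fewer measurements than unknowns, the l1 penalty recovers the sparse support, which is the property the abstract exploits for domains with sparse content.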
Inverse kinematics of a dual linear actuator pitch/roll heliostat
Freeman, Joshua; Shankar, Balakrishnan; Sundaram, Ganesh
2017-06-01
This work presents a simple, computationally efficient inverse kinematics solution for a pitch/roll heliostat using two linear actuators. The heliostat design and kinematics have been developed, modeled and tested using computer simulation software. A physical heliostat prototype was fabricated to validate the theoretical computations and data. Pitch/roll heliostats have numerous advantages including reduced cost potential and reduced space requirements, with a primary disadvantage being the significantly more complicated kinematics, which are solved here. Novel methods are applied to simplify the inverse kinematics problem which could be applied to other similar problems.
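A hedged sketch of what a pitch/roll inverse kinematics solution looks like: given a desired mirror normal, recover the two angles in closed form. The rotation convention (pitch about y, then roll about x) and both function names are assumptions for illustration, not the paper's actual kinematics, which must additionally map angles to linear actuator lengths.

```python
import numpy as np

def heliostat_angles(normal):
    """Invert n = Rx(roll) @ Ry(pitch) @ [0, 0, 1] for pitch and roll.
    Convention (an assumption): pitch about y first, then roll about x."""
    nx, ny, nz = normal / np.linalg.norm(normal)
    pitch = np.arcsin(np.clip(nx, -1.0, 1.0))
    roll = np.arctan2(-ny, nz)
    return pitch, roll

def forward(pitch, roll):
    """Mirror normal produced by the pitch/roll pair under the same convention."""
    return np.array([np.sin(pitch),
                     -np.cos(pitch) * np.sin(roll),
                     np.cos(pitch) * np.cos(roll)])

n = np.array([0.3, 0.2, 0.9])
n /= np.linalg.norm(n)
p, r = heliostat_angles(n)
```

Verifying the inverse by running it through the forward model, as below, is a cheap sanity check before mapping the angles onto actuator strokes.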
Bayesian Travel Time Inversion adopting Gaussian Process Regression
Mauerberger, S.; Holschneider, M.
2017-12-01
A major application in seismology is the determination of seismic velocity models. Travel time measurements put an integral constraint on the velocity between source and receiver. We provide insight into travel time inversion from a correlation-based Bayesian point of view. To this end, the concept of Gaussian process regression is adopted to estimate a velocity model. The non-linear travel time integral is approximated by a first-order Taylor expansion. A heuristic covariance describes correlations amongst observations and the a priori model. This approach enables us to assess a proxy of the Bayesian posterior distribution at ordinary computational cost: neither multi-dimensional numerical integration nor excessive sampling is necessary. Instead of stacking the data, we suggest building the posterior distribution progressively. Incorporating only a single evidence at a time accounts for the deficit of linearization. As a result, the most probable model is given by the posterior mean, whereas uncertainties are described by the posterior covariance. As a proof of concept, a synthetic, purely 1D model is addressed: a single source accompanied by multiple receivers is considered on top of a model comprising a discontinuity. We consider travel times of both phases - direct and reflected wave - corrupted by noise. The regions left and right of the interface are assumed independent, with the squared exponential kernel serving as covariance.
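The progressive build-up of the posterior described above can be illustrated for a linear Gaussian model: conditioning on one observation at a time reproduces the batch posterior exactly when the noise is independent. The toy linearized operator G, prior, and noise level are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 5, 3
prior_mean = np.zeros(n)
prior_cov = np.eye(n)
G = rng.standard_normal((m, n))              # hypothetical linearized travel-time operator
noise = 0.1
d = G @ rng.standard_normal(n) + noise * rng.standard_normal(m)

def condition(mean, cov, g, obs, s2):
    """Condition a Gaussian on one scalar observation obs = g @ x + noise."""
    k = cov @ g / (g @ cov @ g + s2)         # scalar denominator: no matrix inverse
    return mean + k * (obs - g @ mean), cov - np.outer(k, g @ cov)

# progressive build-up: incorporate one evidence at a time
mean, cov = prior_mean, prior_cov
for i in range(m):
    mean, cov = condition(mean, cov, G[i], d[i], noise ** 2)

# batch posterior for comparison
S = G @ prior_cov @ G.T + noise ** 2 * np.eye(m)
batch = prior_mean + prior_cov @ G.T @ np.linalg.solve(S, d - G @ prior_mean)
```

In the paper's non-linear setting the operator is re-linearized between updates, so the sequential and batch answers differ; the sketch shows only the exact linear-Gaussian baseline.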
Li, Guo; Xia, Jun; Li, Lei; Wang, Lidai; Wang, Lihong V.
2015-03-01
Linear transducer arrays are readily available for ultrasonic detection in photoacoustic computed tomography. They offer low cost, hand-held convenience, and conventional ultrasonic imaging. However, the elevational resolution of linear transducer arrays, which is usually determined by the weak focus of the cylindrical acoustic lens, is about one order of magnitude worse than the in-plane axial and lateral spatial resolutions. Therefore, conventional linear scanning along the elevational direction cannot provide high-quality three-dimensional photoacoustic images due to the anisotropic spatial resolutions. Here we propose an innovative method to achieve isotropic resolutions for three-dimensional photoacoustic images through combined linear and rotational scanning. In each scan step, we first scan the linear transducer array elevationally, then rotate it about its center in small steps and scan again, until 180 degrees have been covered. To reconstruct isotropic three-dimensional images from the multi-directional scanning dataset, we use the standard inverse Radon transform originating from X-ray CT. We acquired a three-dimensional microsphere phantom image through the inverse Radon transform method and compared it with a single-elevational-scan three-dimensional image. The comparison shows that our method improves the elevational resolution by up to one order of magnitude, approaching the in-plane lateral-direction resolution. In vivo rat images were also acquired.
Linearizing control of continuous anaerobic fermentation processes
Babary, J.P. [Centre National d'Etudes Spatiales (CNES), 31 - Toulouse (France). Laboratoire d'Analyse et d'Architecture des Systemes; Simeonov, I. [Institute of Microbiology, Bulgarian Academy of Sciences (Bulgaria); Ljubenova, V. [Institute of Control and System Research, BAS (Country unknown/Code not available); Dochain, D. [Universite Catholique de Louvain (UCL), Louvain-la-Neuve (Belgium)
1997-09-01
Biotechnological processes (BTPs) involve living organisms. In anaerobic fermentation (the biogas production process), organic matter is mineralized by microorganisms into biogas (methane and carbon dioxide) in the absence of oxygen. The biogas is an additional energy source. Generally this process is carried out as a continuous BTP. It is widely used in practice and has been confirmed as a promising method for solving energy and ecological problems in agriculture and industry. Because of the very restrictive on-line information, the control of this process in continuous mode is often reduced to control of the biogas production rate, or of the concentration of the polluting organic matter (de-pollution control), at a desired value in the presence of perturbations. Investigations show that classical linear controllers perform well only in the linear zone of the strongly non-linear input-output characteristics. More sophisticated robust and variable-structure (VSC) controllers have been studied, but due to the strongly non-linear dynamics of the process the performance of the closed-loop system may degrade in this case. The aim of this paper is to investigate different linearizing algorithms for the control of a continuous non-linear methane fermentation process, using the dilution rate as the control action and taking into account some practical implementation aspects. (authors) 8 refs.
Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H
2016-05-01
The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously the animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate, first, the abundance and, second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.
On the internal stability of non-linear dynamic inversion: application to flight control
Alam, M.; Čelikovský, Sergej
2017-01-01
Roč. 11, č. 12 (2017), s. 1849-1861 ISSN 1751-8644 R&D Projects: GA ČR(CZ) GA17-04682S Institutional support: RVO:67985556 Keywords : flight control * non-linear dynamic inversion * stability Subject RIV: BC - Control Systems Theory OBOR OECD: Automation and control systems Impact factor: 2.536, year: 2016 http://library.utia.cas.cz/separaty/2017/TR/celikovsky-0476150.pdf
Frequency-domain full-waveform inversion with non-linear descent directions
Geng, Yu; Pan, Wenyong; Innanen, Kristopher A.
2018-05-01
Full-waveform inversion (FWI) is a highly non-linear inverse problem, normally solved iteratively, with each iteration involving an update constructed through linear operations on the residuals. Incorporating a flexible degree of non-linearity within each update may have important consequences for convergence rates, determination of low model wavenumbers and discrimination of parameters. We examine one approach for doing so, wherein higher order scattering terms are included within the sensitivity kernel during the construction of the descent direction, adjusting it away from that of the standard Gauss-Newton approach. These scattering terms are naturally admitted when we construct the sensitivity kernel by varying not the current but the to-be-updated model at each iteration. Linear and/or non-linear inverse scattering methodologies allow these additional sensitivity contributions to be computed from the current data residuals within any given update. We show that in the presence of pre-critical reflection data, the error in a second-order non-linear update to a background of s_0 is, in our scheme, proportional to at most (Δs/s_0)^3 in the actual parameter jump Δs causing the reflection. In contrast, the error in a standard Gauss-Newton FWI update is proportional to (Δs/s_0)^2. For numerical implementation of more complex cases, we introduce a non-linear frequency-domain scheme, with an inner and an outer loop. A perturbation is determined from the data residuals within the inner loop, and a descent direction based on the resulting non-linear sensitivity kernel is computed in the outer loop. We examine the response of this non-linear FWI using acoustic single-parameter synthetics derived from the Marmousi model. The inverted results vary depending on data frequency ranges and initial models, but we conclude that the non-linear FWI has the capability to generate high-resolution model estimates in both shallow and deep regions, and to converge rapidly, relative to a
Soft-sensing Modeling Based on MLS-SVM Inversion for L-lysine Fermentation Processes
Bo Wang
2015-06-01
A modeling approach based on multiple-output-variable least squares support vector machine (MLS-SVM) inversion is presented, combining inverse system theory and support vector machine theory. Firstly, a dynamic system model is developed based on the material balance relations of a fed-batch fermentation process; with this model it is analyzed whether an inverse system exists, and characteristic information of the fermentation process is introduced to set up an extended inversion model. Secondly, an initial extended inversion model is developed off-line by use of the fitting capacity of the MLS-SVM; on-line correction is made by use of a differential evolution (DE) algorithm on the basis of deviation information. Finally, a combined pseudo-linear system is formed by means of a serial connection of the corrected extended inversion model behind the L-lysine fermentation process; thereby crucial biochemical parameters of the fermentation process can be predicted on-line. The simulation experiment shows that this soft-sensing modeling method features very high prediction precision and can predict crucial biochemical parameters of the L-lysine fermentation process very well.
The tempered stable process with infinitely divisible inverse subordinators
Wyłomańska, Agnieszka
2013-01-01
In the last decade processes driven by inverse subordinators have become extremely popular. They have been used in many different applications, especially for data with observable constant time periods. However, the classical model, i.e. the subordinated Brownian motion, can be inappropriate for the description of observed phenomena that exhibit behavior not adequate for Gaussian systems. Therefore, in this paper we extend the classical approach and replace the Brownian motion by the tempered stable process. Moreover, as an extension of the classical model, we analyze the general class of inverse subordinators. We examine the main properties of the tempered stable process driven by inverse subordinators from the infinitely divisible class of distributions. We show the fractional Fokker–Planck equation of the examined process and the asymptotic behavior of the mean square displacement for two cases of subordinators. Additionally, we examine how an external force can influence the examined characteristics. (paper)
The Fractional Poisson Process and the Inverse Stable Subordinator
Meerschaert, Mark; Nane, Erkan; Vellaisamy, P.
2011-01-01
The fractional Poisson process is a renewal process with Mittag-Leffler waiting times. Its distributions solve a time-fractional analogue of the Kolmogorov forward equation for a Poisson process. This paper shows that a traditional Poisson process, with the time variable replaced by an independent inverse stable subordinator, is also a fractional Poisson process. This result unifies the two main approaches in the stochastic theory of time-fractional diffusion equations. The equivalence extend...
Guliyev, Namig J.
2008-01-01
Inverse problems of recovering the coefficients of Sturm–Liouville problems with the eigenvalue parameter linearly contained in one of the boundary conditions are studied: 1) from the sequences of eigenvalues and norming constants; 2) from two spectra. Necessary and sufficient conditions for the solvability of these inverse problems are obtained.
Inverse reasoning processes in obsessive-compulsive disorder.
Wong, Shiu F; Grisham, Jessica R
2017-04-01
The inference-based approach (IBA) is one cognitive model that aims to explain the aetiology and maintenance of obsessive-compulsive disorder (OCD). The model proposes that certain reasoning processes lead an individual with OCD to confuse an imagined possibility with an actual probability, a state termed inferential confusion. One such reasoning process is inverse reasoning, in which hypothetical causes form the basis of conclusions about reality. Although previous research has found associations between a self-report measure of inferential confusion and OCD symptoms, evidence of a specific association between inverse reasoning and OCD symptoms is lacking. In the present study, we developed a task-based measure of inverse reasoning in order to investigate whether performance on this task is associated with OCD symptoms in an online sample. The results provide some evidence for the IBA assertion: greater endorsement of inverse reasoning was significantly associated with OCD symptoms, even when controlling for general distress and OCD-related beliefs. Future research is needed to replicate this result in a clinical sample and to investigate a potential causal role for inverse reasoning in OCD. Copyright © 2016 Elsevier Ltd. All rights reserved.
Regression tools for CO2 inversions: application of a shrinkage estimator to process attribution
Shaby, Benjamin A.; Field, Christopher B.
2006-01-01
In this study we perform an atmospheric inversion based on a shrinkage estimator. This method is used to estimate surface fluxes of CO2, first partitioned according to constituent geographic regions, and then according to constituent processes that are responsible for the total flux. Our approach differs from previous approaches in two important ways. The first is that the technique of linear Bayesian inversion is recast as a regression problem. Seen as such, standard regression tools are employed to analyse and reduce errors in the resultant estimates. A shrinkage estimator, which combines standard ridge regression with the linear 'Bayesian inversion' model, is introduced. This method introduces additional bias into the model with the aim of reducing variance such that errors are decreased overall. Compared with standard linear Bayesian inversion, the ridge technique seems to reduce both flux estimation errors and prediction errors. The second divergence from previous studies is that instead of dividing the world into geographically distinct regions and estimating the CO2 flux in each region, the flux space is divided conceptually into processes that contribute to the total global flux. Formulating the problem in this manner adds to the interpretability of the resultant estimates and attempts to shed light on the problem of attributing sources and sinks to their underlying mechanisms.
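The ridge-regression shrinkage at the core of this estimator can be sketched in closed form. The toy "process basis" design matrix and regularization weight below are assumptions for illustration; the defining property shown is that the ridge estimate has a smaller norm than the ordinary least-squares estimate, which is the bias-for-variance trade the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(3)

# hypothetical design: columns = process basis functions, rows = observations
H = rng.standard_normal((60, 8))
beta_true = rng.standard_normal(8)
y = H @ beta_true + 0.5 * rng.standard_normal(60)

def ridge(H, y, lam):
    """Shrinkage estimate: beta = (H^T H + lam I)^{-1} H^T y (lam=0 gives OLS)."""
    p = H.shape[1]
    return np.linalg.solve(H.T @ H + lam * np.eye(p), H.T @ y)

beta_ols = ridge(H, y, 0.0)
beta_ridge = ridge(H, y, 10.0)
```

In the Bayesian-inversion reading, `lam` plays the role of the prior precision on the fluxes, so the same code doubles as a linear Bayesian MAP estimate.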
Inverse scattering and GPR data processing: an Introduction
Persico, Raffaele
2014-05-01
This abstract is meant to propose a brief overview of the book "Introduction to Ground Penetrating Radar: Inverse scattering and data processing", published by Wiley Press (ISBN 9781118305003). The reason why I propose this contribution is that, in spite of the large relevant literature, to the best of my knowledge it is not very common to find a text entirely devoted to the physical-mathematical aspects (part of them, of course) of GPR data processing. Also because of this, a sort of gap between GPR practice and the underlying theory has probably been created, and indeed we can meet practitioners convinced that the quality of the achieved results can be improved indefinitely by making the spatial step of the data narrower, or that it is desirable to have extremely directive antennas because this would improve the resolution. In order to provide a work able to address these and other aspects, and hopefully to contribute to correcting these imprecise beliefs, a treatment from the beginning has been proposed: a sequential, relatively plain, and as much as possible self-consistent treatment starting from Maxwell's equations and reaching the most commonly exploited migration formulas and linear inversion algorithms, within both 2D and 3D frameworks. This follows the didactic aim of giving the reader an insight into what can reasonably be achieved, and what should reasonably be done in the field and during the processing phase, in order to achieve satisfying results. In particular, the reader will be made aware not only of the mathematical passages but also of the involved approximations, the needed assumptions and the physical limits of the final algorithms. The results have also been backed up with numerical exercises and with some experimental tests, all conceived on purpose for this text, and some questions with the
Resolution limits of migration and linearized waveform inversion images in a lossy medium
Schuster, Gerard T.; Dutta, Gaurav; Li, Jing
2017-01-01
The vertical- and horizontal-resolution limits Δz_lossy and Δx_lossy of post-stack migration and linearized waveform inversion images are derived for lossy data in the far-field approximation. Unlike the horizontal resolution limit Δx ∝ λz/L in a lossless medium, which worsens linearly with depth z, Δx_lossy ∝ z²/(QL) worsens quadratically with depth for a medium with small Q values. Here, Q is the quality factor, λ is the effective wavelength, L is the recording aperture, and loss is accounted for in the resolution formulae by replacing λ with z/Q. In contrast, the lossy vertical-resolution limit Δz_lossy only worsens linearly with depth, compared to Δz ∝ λ for a lossless medium. For both the causal and acausal Q models, the resolution limits are proportional to 1/Q for small Q. These theoretical predictions are validated with migration images computed from lossy data.
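The stated scaling laws are simple enough to check numerically. The sketch below evaluates the lossless and lossy horizontal limits (with proportionality constants set to 1, and all parameter values invented) and confirms that doubling the depth doubles the lossless limit but quadruples the lossy one.

```python
# Resolution-limit scaling check; constants of proportionality set to 1.
lam, L, Q = 30.0, 1000.0, 20.0   # wavelength (m), aperture (m), quality factor

def dx_lossless(z):
    """Horizontal limit in a lossless medium: λ z / L (linear in depth)."""
    return lam * z / L

def dx_lossy(z):
    """Lossy horizontal limit: z^2 / (Q L), i.e. λ replaced by z/Q (quadratic)."""
    return z * z / (Q * L)

r_lossless = dx_lossless(2000.0) / dx_lossless(1000.0)   # expect 2x
r_lossy = dx_lossy(2000.0) / dx_lossy(1000.0)            # expect 4x
```

The quadratic-versus-linear contrast is the practical takeaway: in strongly attenuating media, lateral resolution deteriorates much faster with depth than the familiar lossless rule of thumb suggests.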
Inverse opal photonic crystal of chalcogenide glass by solution processing.
Kohoutek, Tomas; Orava, Jiri; Sawada, Tsutomu; Fudouzi, Hiroshi
2011-01-15
Chalcogenide opal and inverse opal photonic crystals were successfully fabricated by a low-cost, low-temperature solution-based process, which is well developed in polymer film processing. Highly ordered silica colloidal crystal films were successfully infilled with a nano-colloidal solution of the high-refractive-index As30S70 chalcogenide glass by using a spin-coating method. The silica/As-S opal film was etched in HF acid to dissolve the silica opal template and fabricate the inverse opal As-S photonic crystal. Both the infilled silica/As-S opal film (Δn ~ 0.84 near λ=770 nm) and the inverse opal As-S photonic structure (Δn ~ 1.26 near λ=660 nm) had significantly enhanced reflectivity values and wider photonic bandgaps in comparison with the silica opal film template (Δn ~ 0.434 near λ=600 nm). The key aspects of opal film preparation by spin-coating of a nano-colloidal chalcogenide glass solution are discussed. The solution-fabricated "inorganic polymer" opal and inverse opal structures exceed the photonic properties of silica or any organic polymer opal film. The fabricated photonic structures are proposed for designing novel flexible colloidal crystal laser devices, photonic waveguides and chemical sensors. Copyright © 2010 Elsevier Inc. All rights reserved.
Linearity of bulk-controlled inverter ring VCO in weak and strong inversion
Wismar, Ulrik Sørensen; Wisland, D.; Andreani, Pietro
2007-01-01
In this paper, the linearity of frequency modulation in voltage-controlled inverter ring oscillators for non-feedback sigma-delta converter applications is studied. The linearity is studied through theoretical models of the oscillator operating at supply voltages above and below the threshold voltage......, process variations and temperature variations have also been simulated to indicate the advantages of having the soft rail bias transistor in the VCO....
The Inverse System Method Applied to the Derivation of Power System Non—linear Control Laws
Donghai Li; Xuezhi Jiang; et al.
1997-01-01
The differential geometric method has been applied effectively to a series of power system non-linear control problems. However, a set of differential equations must be solved to obtain the required diffeomorphic transformation, so the derivation of control laws is very complicated. In fact, because of the specific structure of power system models, the required diffeomorphic transformation may be obtained directly, making it unnecessary to solve a set of differential equations. In addition, the inverse system method is in reality equivalent to the differential geometric method and is not limited to affine nonlinear systems; its physical meaning can be seen directly, and its deduction requires only algebraic operations and differentiation, so control laws can be obtained easily and applied conveniently in engineering. The authors of this paper take steam valving control of a power system as a typical case study. It is demonstrated that the control law deduced by the inverse system method is the same as the one obtained by the differential geometric method. This conclusion simplifies the derivation of control laws for steam valving, excitation, converters and static var compensators by the differential geometric method, and may suit similar control problems in other areas.
Kuchment, Peter
2015-05-10
© 2015, Springer Basel. In the previous paper (Kuchment and Steinhauer in Inverse Probl 28(8):084007, 2012), the authors introduced a simple procedure that allows one to detect whether and explain why internal information arising in several novel coupled physics (hybrid) imaging modalities could turn extremely unstable techniques, such as optical tomography or electrical impedance tomography, into stable, good-resolution procedures. It was shown that in all cases of interest, the Fréchet derivative of the forward mapping is a pseudo-differential operator with an explicitly computable principal symbol. If one can set up the imaging procedure in such a way that the symbol is elliptic, this would indicate that the problem was stabilized. In the cases when the symbol is not elliptic, the technique suggests how to change the procedure (e.g., by adding extra measurements) to achieve ellipticity. In this article, we consider the situation arising in acousto-optical tomography (also called ultrasound modulated optical tomography), where the internal data available involves the Green's function, and thus depends globally on the unknown parameter(s) of the equation and its solution. It is shown that the technique of (Kuchment and Steinhauer in Inverse Probl 28(8):084007, 2012) can be successfully adapted to this situation as well. A significant part of the article is devoted to results on generic uniqueness for the linearized problem in a variety of situations, including those arising in acousto-electric and quantitative photoacoustic tomography.
A tutorial on inverse problems for anomalous diffusion processes
Jin, Bangti; Rundell, William
2015-01-01
Over the last two decades, anomalous diffusion processes, in which the mean square variance grows slower or faster than in a Gaussian process, have found many applications. At a macroscopic level, these processes are adequately described by fractional differential equations, which involve fractional derivatives in time and/or space. The fractional derivatives describe either a history mechanism or long-range interactions of particle motions at a microscopic level. The new physics can change the behavior of the forward problems dramatically. For example, the solution operator of the time fractional diffusion equation has only a limited smoothing property, whereas the solution of the space fractional diffusion equation may contain a weak singularity. Naturally one expects that the new physics will impact related inverse problems in terms of uniqueness, stability, and degree of ill-posedness. The last aspect is especially important from a practical point of view, i.e., for stably reconstructing the quantities of interest. In this paper, we employ formal analytic and numerical tools, especially the two-parameter Mittag-Leffler function and the singular value decomposition, to examine the degree of ill-posedness of several 'classical' inverse problems for fractional differential equations involving a Djrbashian–Caputo fractional derivative in either time or space, which represent the fractional analogues of those for classical integral-order differential equations. We discuss four inverse problems each: backward fractional diffusion, the sideways problem, the inverse source problem and the inverse potential problem for time fractional diffusion; and the inverse Sturm–Liouville problem, the Cauchy problem, backward fractional diffusion and the sideways problem for space fractional diffusion. It is found that, contrary to wide belief, the influence of anomalous diffusion on the degree of ill-posedness is not definitive: it can either significantly improve or worsen the conditioning
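A minimal sketch of the two-parameter Mittag-Leffler function mentioned in the entry above, via its power series (the function and default truncation are ours, and the series is adequate only for moderate |z|):

```python
import math

def mittag_leffler(z, alpha, beta, n_terms=80):
    """Two-parameter Mittag-Leffler function E_{alpha,beta}(z),
    summed term by term from its defining power series."""
    return sum(z ** k / math.gamma(alpha * k + beta) for k in range(n_terms))
```

The special cases E_{1,1}(z) = exp(z) and E_{2,1}(z) = cosh(sqrt(z)) make the routine easy to sanity-check against elementary functions.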
Zha, Yuanyuan; Yeh, Tian-Chyi J.; Illman, Walter A.; Zeng, Wenzhi; Zhang, Yonggen; Sun, Fangqiang; Shi, Liangsheng
2018-03-01
Hydraulic tomography (HT) is a recently developed technology for characterizing high-resolution, site-specific heterogeneity using hydraulic data (nd) from a series of cross-hole pumping tests. To properly account for subsurface heterogeneity and to flexibly incorporate additional information, geostatistical inverse models, which permit a large number of spatially correlated unknowns (ny), are frequently used to interpret the collected data. However, the memory storage requirements for the covariance of the unknowns (ny × ny) in these models are prodigious for large-scale 3-D problems. Moreover, the sensitivity evaluation is often computationally intensive using the traditional difference method (ny forward runs). Although employing the adjoint method can reduce the cost to nd forward runs, the adjoint model requires intrusive coding effort. In order to resolve these issues, this paper presents a Reduced-Order Successive Linear Estimator (ROSLE) for analyzing HT data. This new estimator approximates the covariance of the unknowns using a Karhunen-Loeve Expansion (KLE) truncated at order nkl, and it calculates the directional sensitivities (in the directions of the nkl eigenvectors) to form the covariance and cross-covariance used in the Successive Linear Estimator (SLE). In addition, the covariance of the unknowns is updated at every iteration by updating the eigenvalues and eigenfunctions. The computational advantages of the proposed algorithm are demonstrated through numerical experiments and a 3-D transient HT analysis of data from a highly heterogeneous field site.
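The covariance reduction described in the entry above can be sketched with a truncated Karhunen-Loeve expansion of an assumed exponential covariance on a 1-D grid (grid size, correlation length and truncation order are illustrative only):

```python
import numpy as np

n, corr_len, nkl = 200, 0.2, 20
x = np.linspace(0.0, 1.0, n)
# exponential covariance model (assumed)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_len)

# eigen-decomposition; keep the nkl leading modes
w, V = np.linalg.eigh(C)
order = np.argsort(w)[::-1]
w, V = w[order], V[:, order]
C_kle = (V[:, :nkl] * w[:nkl]) @ V[:, :nkl].T

# storage drops from n*n to roughly n*nkl numbers,
# at a small relative approximation error
rel_err = np.linalg.norm(C - C_kle) / np.linalg.norm(C)
```

Because the eigenvalues of a smooth covariance decay quickly, a small nkl captures most of the variability, which is what makes the reduced-order estimator affordable in 3-D.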
Post-processing through linear regression
van Schaeybroeck, B.; Vannitsem, S.
2011-03-01
Various post-processing techniques are compared for both deterministic and ensemble forecasts, all based on linear regression between forecast data and observations. In order to evaluate the quality of the regression methods, three criteria are proposed, related to the effective correction of forecast error, the optimal variability of the corrected forecast, and multicollinearity. The regression schemes under consideration include the ordinary least-squares (OLS) method, a new time-dependent Tikhonov regularization (TDTR) method, the total least-squares method, a new geometric-mean regression (GM), a recently introduced error-in-variables (EVMOS) method and, finally, a "best member" OLS method. The advantages and drawbacks of each method are clarified. These techniques are applied in the context of the Lorenz '63 system, whose model version is affected by both initial condition and model errors. For short forecast lead times, the number and choice of predictors play an important role. Contrary to the other techniques, GM degrades when the number of predictors increases. At intermediate lead times, linear regression is unable to provide corrections to the forecast and can sometimes degrade the performance (GM and the best-member OLS with noise). At long lead times the regression schemes (EVMOS, TDTR) which yield the correct variability and the largest correlation between ensemble error and spread should be preferred.
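A minimal sketch of two of the regression families compared in the entry above, ordinary least squares and a (time-independent) Tikhonov-regularized variant, on synthetic forecast/observation pairs (all numbers assumed):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_pred = 200, 5
X = rng.normal(size=(n_samples, n_pred))          # predictors (forecasts)
beta = np.array([1.0, -0.5, 0.3, 0.0, 0.2])       # "true" correction weights
y = X @ beta + 0.1 * rng.normal(size=n_samples)   # observations

# ordinary least squares (OLS)
b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Tikhonov (ridge) regularization damps the weights when predictors
# are collinear; lam = 0 recovers OLS
lam = 1e-2
b_tik = np.linalg.solve(X.T @ X + lam * np.eye(n_pred), X.T @ y)
```

With well-conditioned predictors the two solutions nearly coincide; the regularization only matters, as the entry notes, when multicollinearity among predictors inflates the OLS weights.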
Planktonic food webs revisited: Reanalysis of results from the linear inverse approach
Hlaili, Asma Sakka; Niquil, Nathalie; Legendre, Louis
2014-01-01
Identification of the trophic pathway that dominates a given planktonic assemblage is generally based on the distribution of biomasses among food-web compartments, or better, the flows of materials or energy among compartments. These flows are obtained by field observations and a posteriori analyses, including the linear inverse approach. In the present study, we re-analysed carbon flows obtained by inverse analysis at 32 stations in the global ocean and one large lake. Our results do not support two "classical" views of plankton ecology, i.e. that the herbivorous food web is dominated by mesozooplankton grazing on large phytoplankton, and the microbial food web is based on microzooplankton significantly consuming bacteria; our results suggest instead that phytoplankton are generally grazed by microzooplankton, of which they are the main food source. Furthermore, we identified the "phyto-microbial food web", where microzooplankton largely feed on phytoplankton, in addition to the already known "poly-microbial food web", where microzooplankton consume more or less equally various types of food. These unexpected results led to a (re)definition of the conceptual models corresponding to the four trophic pathways we found to exist in plankton, i.e. the herbivorous, multivorous, and two types of microbial food web. We illustrated the conceptual trophic pathways using carbon flows that were actually observed at representative stations. The latter can be calibrated to correspond to any field situation. Our study also provides researchers and managers with operational criteria for identifying the dominant trophic pathway in a planktonic assemblage, these criteria being based on the values of two carbon ratios that could be calculated from flow values that are relatively easy to estimate in the field.
Jiang, Yi; Li, Guoyang; Qian, Lin-Xue; Liang, Si; Destrade, Michel; Cao, Yanping
2015-10-01
We use the supersonic shear wave imaging (SSI) technique to measure not only the linear but also the nonlinear elastic properties of brain matter. Here, we tested six porcine brains ex vivo and measured the velocities of the plane shear waves induced by acoustic radiation force at different states of pre-deformation when the ultrasonic probe is pushed into the soft tissue. We relied on an inverse method based on the theory governing the propagation of small-amplitude acoustic waves in deformed solids to interpret the experimental data. We found that, depending on the subjects, the resulting initial shear modulus varies from 1.8 to 3.2 kPa, the stiffening parameter of the hyperelastic Demiray-Fung model from 0.13 to 0.73, and the third- and fourth-order constants of weakly nonlinear elasticity from −1.3 to −20.6 kPa and from 3.1 to 8.7 kPa, respectively. A paired t-test performed on the experimental results of the left and right lobes of the brain shows no significant difference. These values are in line with those reported in the literature on brain tissue, indicating that the SSI method, combined with the inverse analysis, is an efficient and powerful tool for the mechanical characterization of brain tissue, which is of great importance for computer simulation of traumatic brain injury and virtual neurosurgery.
Park, J. J.
2017-12-01
Sheared Layers in the Continental Crust: Nonlinear and Linearized Inversion for Ps Receiver Functions. Jeffrey Park, Yale University. The interpretation of seismic receiver functions (RFs) in terms of isotropic and anisotropic layered structure can be complex. The relationship between structure and body-wave scattering is nonlinear. The anisotropy can involve more parameters than the observations can readily constrain. Finally, reflectivity-predicted layer reverberations are often not prominent in data, so that nonlinear waveform inversion can search in vain to match ghost signals. Multiple-taper correlation (MTC) receiver functions have uncertainties in the frequency domain that follow Gaussian statistics [Park and Levin, 2016a], so grid-searches for the best-fitting collections of interfaces can be performed rapidly to minimize weighted misfit variance. Tests for layer reverberations can be performed in the frequency domain without reflectivity calculations, allowing flexible modelling of weak, but nonzero, reverberations. Park and Levin [2016b] linearized the hybridization of P and S body waves in an anisotropic layer to predict first-order Ps conversion amplitudes at crust and mantle interfaces. In an anisotropic layer, the P wave acquires small SV and SH components. To ensure continuity of displacement and traction at the top and bottom boundaries of the layer, shear waves are generated. Assuming hexagonal symmetry with an arbitrary symmetry axis, theory confirms the empirical stacking trick of phase-shifting transverse RFs by 90 degrees in back-azimuth [Shiomi and Park, 2008; Schulte-Pelkum and Mahan, 2014] to enhance 2-lobed and 4-lobed harmonic variation. Ps scattering is generated by sharp interfaces, so that RFs resemble the first derivative of the model. MTC RFs in the frequency domain can be manipulated to obtain a first-order reconstruction of the layered anisotropy, under the above modeling constraints and neglecting reverberations. Examples from long
Wave Characteristics of Temperature Inversion Process of Nighttime Radiation
1983-12-09
By: Zhou Mingyu and Zhang Yi. English pages: 8. Source: Kexue Tongbao, 1982, pp. 156...
On a finite moment perturbation of linear functionals and the inverse Szegö transformation
Edinson Fuentes
2016-05-01
Full Text Available Given a sequence of moments $\{c_{n}\}_{n\in\mathbb{Z}}$ associated with a Hermitian linear functional $\mathcal{L}$ defined on the space of Laurent polynomials, we study a new functional $\mathcal{L}_{\Omega}$ which is a perturbation of $\mathcal{L}$ in which a finite number of moments are perturbed. Necessary and sufficient conditions are given for the regularity of $\mathcal{L}_{\Omega}$, and a connection formula between the corresponding families of orthogonal polynomials is obtained. On the other hand, assuming $\mathcal{L}_{\Omega}$ is positive definite, the perturbation is analyzed through the inverse Szegö transformation.
Surface waves tomography and non-linear inversion in the southeast Carpathians
Raykova, R.B.; Panza, G.F.
2005-11-01
A set of shear-wave velocity models of the lithosphere-asthenosphere system in the southeast Carpathians is determined by the non-linear inversion of surface-wave group velocity data obtained from a tomographic analysis. The local dispersion curves are assembled for the period range 7 s - 150 s, combining regional group velocity measurements and published global Rayleigh wave dispersion data. The lithosphere-asthenosphere velocity structure is reliably reconstructed to depths of about 250 km. The thickness of the lithosphere in the region varies from about 120 km to 250 km, and the depth of the asthenosphere between 150 km and 250 km. Mantle seismicity concentrates where the high-velocity lid is detected just below the Moho. The obtained results are in agreement with recent seismic refraction, receiver function, and travel-time P-wave tomography investigations in the region. The similarity among the results obtained from different kinds of structural investigations (including the present work) highlights some new features of the lithosphere-asthenosphere system in the southeast Carpathians, such as the relatively thin crust under the Transylvania basin and the Vrancea zone. (author)
Inverse Analysis to Formability Design in a Deep Drawing Process
Buranathiti, Thaweepat; Cao, Jian
The deep drawing process is an important process that adds value to flat sheet metal in many industries. An important concern in the design of a deep drawing process is generally formability. This paper aims to present the connection between formability and inverse analysis (IA), a systematic means of determining an optimal blank configuration for a deep drawing process. In this paper, IA is presented and explored using a commercial finite element software package. A number of numerical studies on the effect of blank configuration on the quality of a part produced by a deep drawing process were conducted and analyzed. The quality of the drawing processes is numerically analyzed using an explicit incremental nonlinear finite element code. The minimum distance between the elemental principal strains and the strain-based forming limit curve (FLC) is defined as the tearing margin, the key performance index (KPI) indicating the quality of the part. The initial blank configuration is shown to play a highly important role in the quality of the product of the deep drawing process. In addition, it is observed that if a blank configuration does not deviate greatly from the one obtained from IA, the blank can still yield a good product. The strain history around the bottom fillet of the part is also examined. The paper concludes that IA is an important part of the design methodology for deep drawing processes.
Caiyan Qin
2017-12-01
Full Text Available Due to its simple mechanical structure and high motion stability, the H-shaped platform has been increasingly widely used in precision measuring, numerical control machining and semiconductor packaging equipment, etc. The H-shaped platform is normally driven by multiple (three) permanent magnet synchronous linear motors. The main challenges for H-shaped platform control include synchronous control between the two linear motors in the Y direction as well as the total positioning error of the platform mover, a combination of position deviations in the X and Y directions. To deal with the above challenges, this paper proposes a control strategy based on the inverse system method through state feedback and dynamic decoupling of the thrust force. First, mechanical dynamics equations are deduced through an analysis of system coupling based on the platform structure. Second, the mathematical model of the linear motors and the relevant coordinate transformation between dq-axis currents and ABC-phase currents are analyzed. Third, after the main concept of the inverse system method is explained, the inverse system model of the platform control system is designed after defining the relevant system variables. The inverse system model compensates the original nonlinear coupled system into a pseudo-linear decoupled system, to which typical linear control methods, like PID, can be applied. The simulation model of the control system is built in MATLAB/Simulink, and the simulation results show that the designed control system has both small synchronous deviation and small total trajectory tracking error. Furthermore, the control program has been run on an NI controller in both fixed-loop-time and free-loop-time modes, and the test results show that the average loop computation time needed is rather small, which makes it suitable for real industrial applications. Overall, it proves that the proposed new control strategy can be used in
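The inverse-system idea in the entry above, compensating the nonlinearity so that a plain linear controller can close the loop, can be sketched on a scalar toy plant (all numbers assumed; the paper treats a coupled three-motor platform):

```python
# toy nonlinear plant: x' = -x**3 + u
# the inverse system chooses u = v + x**3, turning the compensated
# plant into the pure integrator x' = v, which a P controller handles
dt, steps = 1e-3, 5000
x, x_ref, kp = 0.0, 1.0, 5.0
for _ in range(steps):
    v = kp * (x_ref - x)     # linear outer-loop control law
    u = v + x ** 3           # inverse-system compensation
    x += dt * (-x ** 3 + u)  # plant update (explicit Euler)
```

After compensation the closed loop behaves exactly like the linear system x' = kp (x_ref − x), so x settles to the reference with time constant 1/kp.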
Linear GPR inversion for lossy soil and a planar air-soil interface
Meincke, Peter
2001-01-01
A three-dimensional inversion scheme for fixed-offset ground penetrating radar (GPR) is derived that takes into account the loss in the soil and the planar air-soil interface. The forward model of this inversion scheme is based upon the first Born approximation and the dyadic Green function...
Ranaivo Nomenjanahary, F.; Rakoto, H.; Ratsimbazafy, J.B.
1994-08-01
This paper is concerned with resistivity sounding measurements performed at a single site (vertical sounding) or at several sites (profiles) within a bounded area. The objective is to present accurate information about the study area and to estimate the likelihood of the produced quantitative models. The achievement of this objective obviously requires quite relevant data and processing methods. It also requires interpretation methods which take into account the probable effect of a heterogeneous structure. Faced with such difficulties, the interpretation of resistivity sounding data inevitably involves the use of inversion methods. We suggest starting the interpretation in a simple situation (1-D approximation), and using the rough but correct model obtained as an a-priori model for any more refined interpretation. In this respect, special attention should be paid to the inverse problem applied to resistivity sounding data. This inverse problem is nonlinear, despite the linearity inherent in the functional response used to describe the physical experiment. Two different approaches are used to build an approximate but higher-dimensional inversion of geoelectrical data: the linear approach and the Bayesian statistical approach. Some illustrations of their application to resistivity sounding data acquired at Tritrivakely volcanic lake (single site) and in the Mahitsy area (several sites) are given. (author). 28 refs, 7 figs
High performance GPU processing for inversion using uniform grid searches
Venetis, Ioannis E.; Saltogianni, Vasso; Stiros, Stathis; Gallopoulos, Efstratios
2017-04-01
Many geophysical problems are described by redundant, highly non-linear systems of ordinary equations with constant terms deriving from measurements and hence representing stochastic variables. Solution (inversion) of such problems is based on numerical optimization methods relying on Monte Carlo sampling or on exhaustive searches in cases of two or even three "free" unknown variables. Recently the TOPological INVersion (TOPINV) algorithm, a grid-search-based technique in R^n space, has been proposed. TOPINV is not based on the minimization of a certain cost function and involves only forward computations, hence avoiding computational errors. The basic concept is to transform observation equations into inequalities on the basis of an optimization parameter k and of their standard errors, and through repeated "scans" of n-dimensional search grids for decreasing values of k to identify the optimal clusters of gridpoints which satisfy the observation inequalities and by definition contain the "true" solution. Stochastic optimal solutions and their variance-covariance matrices are then computed as first and second statistical moments. Such exhaustive uniform searches produce an excessive computational load and are extremely time consuming on common CPU-based computers. An alternative is to use a computing platform based on a GPU, which nowadays is affordable to the research community and provides much higher computing performance. Implementing TOPINV in the CUDA programming language allows the investigation of the attained speedup in execution time on such a high-performance platform. Based on synthetic data we compared the execution time required for two typical geophysical problems, modeling magma sources and seismic faults, described with up to 18 unknown variables, on both CPU/FORTRAN and GPU/CUDA platforms. The same problems for several different sizes of search grids (up to 10^12 gridpoints) and numbers of unknown variables were solved on
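The inequality-based grid scan described in the entry above can be sketched on a two-parameter toy problem (a straight-line model stands in for the nonlinear geophysical one; all numbers, including the single fixed k, are assumed):

```python
import numpy as np

# synthetic observations from y = a*x + b with Gaussian noise
rng = np.random.default_rng(1)
a_true, b_true, sigma = 2.0, -1.0, 0.05
x = np.linspace(0.0, 1.0, 20)
y_obs = a_true * x + b_true + sigma * rng.normal(size=x.size)

# uniform search grid over the two unknowns
A, B = np.meshgrid(np.linspace(0.0, 4.0, 401),
                   np.linspace(-3.0, 1.0, 401), indexing="ij")

# each observation equation becomes the inequality |y_i - f(a,b)| <= k*sigma;
# gridpoints satisfying all of them form the accepted cluster
k = 4.0
ok = np.ones(A.shape, dtype=bool)
for xi, yi in zip(x, y_obs):
    ok &= np.abs(yi - (A * xi + B)) <= k * sigma

# first statistical moment of the cluster = stochastic optimal solution
a_est, b_est = A[ok].mean(), B[ok].mean()
```

Every gridpoint is evaluated independently against the same inequalities, which is exactly the embarrassingly parallel structure that maps well onto a GPU.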
Retrieval of collision kernels from the change of droplet size distributions with linear inversion
Onishi, Ryo; Takahashi, Keiko [Earth Simulator Center, Japan Agency for Marine-Earth Science and Technology, 3173-25 Showa-machi, Kanazawa-ku, Yokohama Kanagawa 236-0001 (Japan); Matsuda, Keigo; Kurose, Ryoichi; Komori, Satoru [Department of Mechanical Engineering and Science, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 (Japan)], E-mail: onishi.ryo@jamstec.go.jp, E-mail: matsuda.keigo@t03.mbox.media.kyoto-u.ac.jp, E-mail: takahasi@jamstec.go.jp, E-mail: kurose@mech.kyoto-u.ac.jp, E-mail: komori@mech.kyoto-u.ac.jp
2008-12-15
We have developed a new simple inversion scheme for retrieving collision kernels from the change of droplet size distribution due to collision growth. Three-dimensional direct numerical simulations (DNS) of steady isotropic turbulence with colliding droplets are carried out in order to investigate the validity of the developed inversion scheme. In the DNS, air turbulence is calculated using a quasi-spectral method; droplet motions are tracked in a Lagrangian manner. The initial droplet size distribution is set to be equivalent to that obtained in a wind tunnel experiment. Collision kernels retrieved by the developed inversion scheme are compared to those obtained by the DNS. The comparison shows that the collision kernels can be retrieved within 15% error. This verifies the feasibility of retrieving collision kernels using the present inversion scheme.
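The linearity exploited by such a retrieval can be sketched with a three-bin coagulation toy model: for a known size distribution, the rate of change of each bin is linear in the kernel entries, so the kernel follows from a least-squares solve (this setup is ours and far simpler than the paper's):

```python
import numpy as np

# bins 1, 2, 3; collisions 1+1 -> 2 (kernel K11) and 1+2 -> 3 (kernel K12)
n1, n2 = 100.0, 40.0                  # number densities (assumed units)
K_true = np.array([2.0e-3, 5.0e-3])   # [K11, K12]

# design matrix: dn/dt of each bin is linear in [K11, K12]
G = np.array([[-n1 * n1,      -n1 * n2],   # dn1/dt (1+1 removes two monomers)
              [0.5 * n1 * n1, -n1 * n2],   # dn2/dt
              [0.0,            n1 * n2]])  # dn3/dt
dndt_obs = G @ K_true                 # "observed" tendencies (noise-free here)

# linear inversion: recover the kernel entries by least squares
K_est, *_ = np.linalg.lstsq(G, dndt_obs, rcond=None)
```

With noisy size-distribution data the same solve becomes a least-squares fit, and the retrieval error grows with the noise, consistent with the roughly 15% error the entry reports.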
Effective and accurate processing and inversion of airborne electromagnetic data
Auken, Esben; Christiansen, Anders Vest; Andersen, Kristoffer Rønne
Airborne electromagnetic (AEM) data are used throughout the world for mapping of mineral targets and groundwater resources. The development of technology and inversion algorithms has been tremendous over the last decade, and the results from these surveys are high-resolution images of the subsurface....... In this keynote talk, we discuss an effective inversion algorithm, which is subject both to intense research and development and to production use. This is the well-known Laterally Constrained Inversion (LCI) and Spatially Constrained Inversion (SCI) algorithm. The same algorithm is also used in a voxel setup (3D model......) and for sheet inversions. An integral part of these different model discretizations is an accurate modelling of the system transfer function and of auxiliary parameters like flight altitude, bird pitch, etc....
Design of Linear-Quadratic-Regulator for a CSTR process
Meghna, P. R.; Saranya, V.; Jaganatha Pandian, B.
2017-11-01
This paper aims at creating a Linear Quadratic Regulator (LQR) for a Continuous Stirred Tank Reactor (CSTR). A CSTR is a common process used in chemical industries. It is a highly non-linear system. Therefore, in order to create the gain feedback controller, the model is linearized. The controller is designed for the linearized model and the concentration and volume of the liquid in the reactor are kept at a constant value as required.
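A minimal LQR sketch in the spirit of the entry above, for a discrete-time linearization (the state-space numbers are illustrative, not the paper's CSTR model), solving the Riccati equation by fixed-point iteration:

```python
import numpy as np

# discrete-time linearized plant x[k+1] = A x[k] + B u[k] (assumed numbers)
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)            # state weighting
R = np.array([[1.0]])    # input weighting

# value iteration on the discrete algebraic Riccati equation
P = Q.copy()
for _ in range(500):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # optimal gain, u = -K x
```

The resulting state feedback u = −K x holds the linearized states, and hence the reactor concentration and volume, at their setpoint, with the Q/R weights trading regulation error against control effort.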
A note on inverses of non-decreasing Lévy processes
J.A. Ferreira
2001-01-01
textabstractWe show that, apart from deterministic processes, compound Poisson processes with exponential jumps are the only (shifted) non-decreasing Lévy processes whose inverses are also (shifted) non-decreasing Lévy processes.
An inverse method for non linear ablative thermics with experimentation of automatic differentiation
Alestra, S [Simulation Information Technology and Systems Engineering, EADS IW Toulouse (France); Collinet, J [Re-entry Systems and Technologies, EADS ASTRIUM ST, Les Mureaux (France); Dubois, F [Professor of Applied Mathematics, Conservatoire National des Arts et Metiers Paris (France)], E-mail: stephane.alestra@eads.net, E-mail: jean.collinet@astrium.eads.net, E-mail: fdubois@cnam.fr
2008-11-01
The Thermal Protection System is a key element for atmospheric re-entry missions of aerospace vehicles. The high level of heat fluxes encountered in such missions has a direct effect on the mass balance of the heat shield. Consequently, the identification of heat fluxes is of great industrial interest but is available in flight only through indirect methods based on temperature measurements. This paper is concerned with inverse analyses of highly evolutive heat fluxes. An inverse problem is used to estimate transient surface heat fluxes (convection coefficient), for degradable thermal material (ablation and pyrolysis), using time-domain temperature measurements on the thermal protection. The inverse problem is formulated as a minimization problem involving an objective functional, through an optimization loop. An optimal control formulation (Lagrangian, adjoint and gradient steepest-descent method combined with quasi-Newton computations) is then developed and applied using Monopyro, a transient one-dimensional thermal model with one moving boundary (the ablative surface) that has been developed over many years by ASTRIUM-ST. To compute the adjoint and gradient quantities numerically, for the inverse problem in the heat convection coefficient, we have used both analytical manual differentiation and an Automatic Differentiation (AD) engine, Tapenade, developed at INRIA Sophia-Antipolis by the TROPICS team. Several validation test cases, using synthetic temperature measurements, are carried out by applying the results of the inverse method with the minimization algorithm. Accurate identification results on high-flux test cases, and good agreement for the reconstructed temperatures, are obtained, both without and with ablation and pyrolysis, even from poor initial guesses for the fluxes. First encouraging results with an automatic differentiation procedure are also presented in this paper.
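The identification loop can be sketched on a drastically simplified stand-in: a lumped cooling model with one unknown convection coefficient, fitted to synthetic "measurements". Since the least-squares misfit is unimodal in this scalar case, we replace the adjoint-gradient machinery with a plain ternary search (everything here is assumed, nothing is from Monopyro):

```python
import numpy as np

T_inf, T0, dt, nsteps = 300.0, 800.0, 0.1, 100

def simulate(h):
    """Explicit-Euler lumped cooling model T' = -h (T - T_inf)."""
    T = np.empty(nsteps + 1)
    T[0] = T0
    for k in range(nsteps):
        T[k + 1] = T[k] - dt * h * (T[k] - T_inf)
    return T

h_true = 0.5
data = simulate(h_true)            # synthetic temperature measurements

def misfit(h):
    return 0.5 * np.sum((simulate(h) - data) ** 2)

# the misfit is unimodal in h, so a ternary search suffices here
lo, hi = 0.0, 2.0
for _ in range(60):
    m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
    if misfit(m1) < misfit(m2):
        hi = m2
    else:
        lo = m1
h_est = 0.5 * (lo + hi)
```

For a time-varying flux the unknown becomes a vector and scalar bracketing no longer works, which is why the paper resorts to adjoint gradients and automatic differentiation.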
Inverse mathematical modelling and identification in metal powder compaction process
Gakwaya, A.; Hrairi, M.; Guillot, M.
2000-01-01
An online assessment of the quality of advanced integrated computer-aided manufacturing systems requires knowledge of accurate and reliable non-linear constitutive material behavior. This paper is concerned with material parameter identification based on experimental data, for which a non-uniform distribution of stresses and deformation within the volume of the specimen is considered. Both geometric and material non-linearities, as well as interfacial frictional contact, are taken into account during the simulation. Within the framework of finite deformation theory, a multisurface multiplicative plasticity model for the metal powder compaction process is presented. The model involves several parameters that are not always activated by a single state variable, even though they may be technologically important in assessing final product quality and manufacturing performance. The resulting expressions are presented in a spatial setting, and a gradient-based descent method utilizing the modified Levenberg-Marquardt scheme is used to minimize a least-squares functional, so as to obtain the best agreement between the relevant experimental and simulated data in a specified energy norm. The identification of a subset of material parameters of the cap model for stainless steel powder compaction is performed. The obtained parameters are validated through simulation of an industrial part manufacturing case. Very good agreement between the simulated and measured final densities is obtained, demonstrating the practical usefulness of the proposed approach. (author)
The effect of dendrimer charge inversion in complexes with linear polyelectrolytes
Lyulin, S.V.; Lyulin, A.V.; Darinskii, A.A.; Emri, I.
2005-01-01
The structure of complexes formed by charged dendrimers and oppositely charged linear chains, with a total charge at least equal to that of the dendrimer, was studied by computer simulation using the Brownian dynamics method. The freely jointed, free-draining model of the dendrimer and the linear chain
Friedrich, R.; Drewelow, W.
1978-01-01
An algorithm is described that is based on decomposing the Laplace transform into partial fractions, which are then inverse-transformed separately. The sum of the resulting partial functions is the desired time function. Numerical problems caused by the form of the equation system are largely limited by appropriate normalization using an auxiliary parameter. The practical limits of the program's applicability are reached when the degree of the denominator of the Laplace transform is seven to eight.
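The procedure described above can be sketched in a few lines. This is a hedged illustration assuming a rational transform with simple poles (repeated poles would need extra `t^k e^{pt}` terms): SciPy's `residue` performs the partial-fraction decomposition, and each term `r/(s - p)` inverse-transforms to `r*exp(p*t)`.

```python
import numpy as np
from scipy.signal import residue

# Rational Laplace transform F(s) = num(s)/den(s); here F(s) = 1/(s^2 + 3s + 2)
num = [1.0]
den = [1.0, 3.0, 2.0]  # poles at s = -1, -2

# Partial-fraction decomposition: F(s) = sum_i r_i / (s - p_i)
r, p, k = residue(num, den)

def f(t):
    """Inverse transform: each simple pole contributes r_i * exp(p_i * t)."""
    return np.real(sum(ri * np.exp(pi * t) for ri, pi in zip(r, p)))

# Analytic inverse of this F(s) is exp(-t) - exp(-2t)
print(abs(f(1.0) - (np.exp(-1.0) - np.exp(-2.0))) < 1e-9)  # True
```

The normalization step mentioned in the abstract addresses the numerical conditioning of the decomposition, which this small example does not exercise.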
Sakurai, K; Shima, H [OYO Corp., Tokyo (Japan)
1996-10-01
This paper proposes a modeling method for one-dimensional complex resistivity using the linear filter technique, which has been extended here to complex resistivity. In addition, a numerical test of inversion was conducted using the modeling results, to discuss the measured frequency band. The linear filter technique is a method by which the theoretical potential can be calculated for stratified structures, and it is widely used for one-dimensional analysis in dc electrical exploration. The modeling can be carried out using only values of complex resistivity, without using values of potential. In this study, a bipolar method was employed as the electrode configuration. The numerical test of one-dimensional complex resistivity inversion was conducted using the formulated modeling. A three-layered structure was used as the numerical model. A multi-layer structure with a thickness of 5 m was analyzed on the basis of the apparent complex resistivity calculated from the model. The numerical test showed that both the chargeability and the time constant agreed well with those of the original model. A trade-off was observed between the chargeability and the time constant at the stage of convergence. 3 refs., 9 figs., 1 tab.
Amplitudes for multiphoton quantum processes in linear optics
Urías, Jesús
2011-01-01
The prominent role that linear optical networks have acquired in the engineering of photon states calls for physically intuitive and automatic methods to compute the probability amplitudes of the multiphoton quantum processes occurring in linear optics. A version of Wick's theorem for the expectation value, on any vector state, of products of linear operators in general is proved. We use it to extract the combinatorics of any multiphoton quantum process in linear optics. The result is presented as a concise rule for writing down directly explicit formulae for the probability amplitude of any multiphoton process in linear optics. The rule achieves a considerable simplification and provides intuitive physical insight into quantum multiphoton processes. The methodology is applied to the generation of high-photon-number entangled states by interferometrically mixing coherent light with spontaneously down-converted light.
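For context, a standard combinatorial rule of this kind (complementary background, not the paper's Wick-theorem rule itself) expresses single-photon-per-mode transition amplitudes through matrix permanents of the network's unitary. The sketch below illustrates it on the Hong-Ou-Mandel effect at a 50:50 beam splitter.

```python
import numpy as np
from itertools import permutations

def permanent(M):
    """Matrix permanent by direct expansion (fine for small matrices)."""
    n = M.shape[0]
    return sum(np.prod([M[i, s[i]] for i in range(n)]) for s in permutations(range(n)))

# 50:50 beam splitter unitary
U = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)

# Amplitude for one photon in each input mode to exit with one photon in each
# output mode is per(U) -- the Hong-Ou-Mandel coincidence amplitude
amp = permanent(U)
print(abs(amp) < 1e-12)  # True: the coincidence amplitude vanishes
```

The vanishing coincidence amplitude is the well-known photon-bunching signature; the paper's rule generalises such bookkeeping to arbitrary multiphoton processes.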
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Fowler, Michael James [Clarkson Univ., Potsdam, NY (United States)
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy
Parker, Peter A.; Vining, G. Geoffrey; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.
2010-01-01
The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
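The two calibration strategies compared in the paper can be sketched on synthetic data (all numbers below are illustrative, not from the paper): classical calibration regresses the instrument readings on the standards and then inverts the fitted line, while reverse regression fits the standards on the readings and uses the fit directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration experiment: known standards x, readings y = a + b*x + noise
x_std = np.linspace(1.0, 10.0, 20)
y_obs = 2.0 + 0.5 * x_std + rng.normal(0.0, 0.05, x_std.size)

# Classical approach: regress readings on standards, then invert the fitted line
b_f, a_f = np.polyfit(x_std, y_obs, 1)
def classical(y_new):
    return (y_new - a_f) / b_f

# Reverse regression: regress standards on readings, use the fit directly
b_r, a_r = np.polyfit(y_obs, x_std, 1)
def reverse(y_new):
    return a_r + b_r * y_new

y_new = 2.0 + 0.5 * 7.0  # a reading corresponding to a true value of 7
print(round(classical(y_new), 2), round(reverse(y_new), 2))  # both near 7.0
```

With well-behaved data the two estimates nearly coincide; the paper's point is that reverse regression trades the awkward inversion step for a violation of the usual regression assumptions (the regressor is now the noisy quantity).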
Practical Implementations of Advanced Process Control for Linear Systems
Knudsen, Jørgen K. H.; Huusom, Jakob Kjøbsted; Jørgensen, John Bagterp
2013-01-01
This paper describes some practical problems encountered when implementing Advanced Process Control (APC) schemes on linear processes. The implemented APC controllers discussed are LQR, Riccati MPC and Condensed MPC controllers, illustrated by simulation of the Four Tank Process and a lineari… on pilot plant equipment at the Department of Chemical Engineering, DTU Lyngby.
Modelling and Inverse-Modelling: Experiences with O.D.E. Linear Systems in Engineering Courses
Martinez-Luaces, Victor
2009-01-01
In engineering careers courses, differential equations are widely used to solve problems concerned with modelling. In particular, ordinary differential equations (O.D.E.) linear systems appear regularly in Chemical Engineering, Food Technology Engineering and Environmental Engineering courses, due to the usefulness in modelling chemical kinetics,…
Introduction to ground penetrating radar inverse scattering and data processing
Persico, Raffaele
2014-01-01
This book presents a comprehensive treatment of ground penetrating radar using both forward and inverse scattering mathematical techniques. Use of field data instead of laboratory data enables readers to envision real-life underground imaging; a full color insert further clarifies understanding. Along with considering the practical problem of achieving interpretable underground images, this book also features significant coverage of the problem's mathematical background. This twofold approach provides a resource that will appeal both to application oriented geologists and testing specialists,
Geodynamic inversion to constrain the non-linear rheology of the lithosphere
Baumann, T. S.; Kaus, Boris J. P.
2015-08-01
One of the main methods to determine the strength of the lithosphere is to estimate its effective elastic thickness. This method assumes that the lithosphere is a thin elastic plate that floats on the mantle, and uses both topography and gravity anomalies to estimate the plate thickness. Whereas this seems to work well for oceanic plates, it has given controversial results in continental collision zones. For most of these locations, additional geophysical data sets such as receiver functions and seismic tomography exist that constrain the geometry of the lithosphere, and they often show that it is rather complex. Yet, lithospheric geometry by itself is insufficient to understand the dynamics of the lithosphere, as this also requires knowledge of its rheology. Laboratory experiments suggest that rocks deform in a viscous manner if temperatures are high and stresses low, or in a plastic/brittle manner if the yield stress is exceeded. Yet, the experimental results show significant variability between various rock types, and there are large uncertainties in extrapolating laboratory values to nature, which leaves room for speculation. An independent method is thus required to better understand the rheology and dynamics of the lithosphere in collision zones. The goal of this paper is to discuss such an approach. Our method relies on performing numerical thermomechanical forward models of the present-day lithosphere with an initial geometry constructed from geophysical data sets. We employ experimentally determined creep laws for the various parts of the lithosphere, but assume that the parameters of these creep laws, as well as the temperature structure of the lithosphere, are uncertain. This is used as a priori information to formulate a Bayesian inverse problem that employs topography, gravity, and horizontal and vertical surface velocities to invert for the unknown material parameters and temperature structure. In order to test the general methodology
Optimal linear filtering of Poisson process with dead time
Glukhova, E.V.
1993-01-01
The paper presents a derivation of an integral equation defining the impulse response of optimal linear filtering for evaluating the intensity of a fluctuating Poisson process, with allowance for the dead time of the transducers.
Linear signal processing using silicon micro-ring resonators
Peucheret, Christophe; Ding, Yunhong; Ou, Haiyan
2012-01-01
We review our recent achievements on the use of silicon micro-ring resonators for linear optical signal processing applications, including modulation format conversion, phase-to-intensity modulation conversion and waveform shaping.
Inverse estimation of multiple muscle activations based on linear logistic regression.
Sekiya, Masashi; Tsuji, Toshiaki
2017-07-01
This study deals with a technique to estimate muscle activity from movement data using a statistical model. Linear regression (LR) models and artificial neural networks (ANN) are known statistical models for such use. Although an ANN has high estimation capability, in clinical applications the limited amount of data often leads to performance deterioration. On the other hand, the LR model has limited generalization performance. We therefore propose a muscle activity estimation method that improves generalization performance through the use of a linear logistic regression model. The proposed method was compared with the LR model and an ANN in a verification experiment with 7 participants. As a result, the proposed method showed better generalization performance than the conventional methods in various tasks.
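A minimal sketch of the underlying estimator, on hypothetical data rather than the authors' experimental setup: a linear logistic model mapping movement features to a binary muscle-activity label, fitted by plain gradient descent on the logistic likelihood.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup (not the paper's data): movement features X and a binary
# label y indicating whether a muscle is active at each time sample
n, d = 5000, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.5, -2.0, 0.5])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X @ w_true))))

def fit_logistic(X, y, lr=0.5, steps=3000):
    """Maximum-likelihood fit of a linear logistic model by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

w_hat = fit_logistic(X, y)
print(np.round(w_hat, 1))  # close to w_true
```

The convexity of the logistic likelihood is what gives the method its stable generalization behaviour on small data sets, which is the paper's motivation for preferring it over an ANN in clinical settings.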
Fitting the two-compartment model in DCE-MRI by linear inversion.
Flouri, Dimitra; Lesnic, Daniel; Sourbron, Steven P
2016-09-01
Model fitting of dynamic contrast-enhanced MRI (DCE-MRI) data with nonlinear least squares (NLLS) methods is slow and may be biased by the choice of initial values. The aim of this study was to develop and evaluate a linear least squares (LLS) method to fit the two-compartment exchange and two-compartment filtration models. A second-order linear differential equation for the measured concentrations was derived, in which the model parameters act as coefficients. Simulations of normal and pathological data were performed to determine calculation time, accuracy and precision under different noise levels and temporal resolutions. Performance of the LLS was evaluated by comparison against the NLLS. The LLS method is about 200 times faster, which reduces the calculation time for a 256 × 256 MR slice from 9 min to 3 s. For ideal data with low noise and high temporal resolution, the LLS and NLLS were equally accurate and precise. The LLS was more accurate and precise than the NLLS at low temporal resolution, but less accurate at high noise levels. The data show that the LLS leads to a significant reduction in calculation times and more reliable results at low noise levels. At higher noise levels the LLS becomes exceedingly inaccurate compared to the NLLS, but this may be improved using a suitable weighting strategy. Magn Reson Med 76:998-1006, 2016. © 2015 Wiley Periodicals, Inc.
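The core idea, that the model parameters appear as coefficients of a linear differential equation and can therefore be recovered by linear least squares, can be sketched on a toy signal (a hedged illustration with a generic second-order ODE, not the paper's two-compartment model):

```python
import numpy as np

# Toy signal obeying y'' + a*y' + b*y = 0 with (a, b) = (4, 3)
t = np.linspace(0.0, 5.0, 2001)
dt = t[1] - t[0]
y = np.exp(-t) + np.exp(-3.0 * t)

# Finite-difference derivatives; trim the edges where one-sided stencils are used
y1 = np.gradient(y, dt)
y2 = np.gradient(y1, dt)
s = slice(2, -2)

# The parameters appear linearly as coefficients: y'' = -a*y' - b*y
A = np.column_stack([-y1[s], -y[s]])
coef, *_ = np.linalg.lstsq(A, y2[s], rcond=None)
print(np.round(coef, 2))  # close to [4, 3]
```

A single `lstsq` call replaces an iterative NLLS search, which is where the roughly 200-fold speedup reported in the abstract comes from; in practice the differentiation step makes the estimate noise-sensitive, matching the paper's observation about high noise levels.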
Ruggeri, Fabrizio
2016-05-12
In this work we develop a Bayesian setting to infer unknown parameters in initial-boundary value problems related to linear parabolic partial differential equations. We realistically assume that the boundary data are noisy, for a given prescribed initial condition. We show how to derive the joint likelihood function for the forward problem, given some measurements of the solution field subject to Gaussian noise. Given Gaussian priors for the time-dependent Dirichlet boundary values, we analytically marginalize the joint likelihood using the linearity of the equation. Our hierarchical Bayesian approach is fully implemented in an example that involves the heat equation. In this example, the thermal diffusivity is the unknown parameter. We assume that the thermal diffusivity parameter can be modeled a priori through a lognormal random variable or by means of a space-dependent stationary lognormal random field. Synthetic data are used to test the inference. We exploit the behavior of the non-normalized log posterior distribution of the thermal diffusivity. Then, we use the Laplace method to obtain an approximated Gaussian posterior and therefore avoid costly Markov Chain Monte Carlo computations. Expected information gains and predictive posterior densities for observable quantities are numerically estimated using Laplace approximation for different experimental setups.
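The Laplace step described above can be sketched on a deliberately simplified stand-in problem (scalar parameter with a lognormal prior and direct Gaussian observations, rather than a parabolic PDE): find the MAP point, estimate the curvature of the negative log posterior there, and report a Gaussian approximation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)

# Toy stand-in for the paper's setup: unknown diffusivity-like parameter kappa
# with a lognormal prior, observed through y_i = kappa + Gaussian noise
kappa_true, sigma = 2.0, 0.3
y = kappa_true + rng.normal(0.0, sigma, size=50)

def neg_log_post(k):
    if k <= 0:
        return np.inf
    log_prior = -0.5 * np.log(k) ** 2 - np.log(k)        # lognormal(0, 1) prior
    log_lik = -0.5 * np.sum((y - k) ** 2) / sigma**2
    return -(log_prior + log_lik)

# Laplace method: Gaussian centred at the MAP with variance 1/curvature
k_map = minimize_scalar(neg_log_post, bounds=(0.1, 10.0), method="bounded").x
h = 1e-4
curv = (neg_log_post(k_map + h) - 2 * neg_log_post(k_map) + neg_log_post(k_map - h)) / h**2
k_std = 1.0 / np.sqrt(curv)
print(round(k_map, 2), round(k_std, 3))
```

The attraction, as in the paper, is that a single optimisation plus one curvature evaluation replaces a full MCMC run; the approximation is only trustworthy when the posterior is close to unimodal and Gaussian.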
High-Order Sparse Linear Predictors for Audio Processing
Giacobello, Daniele; van Waterschoot, Toon; Christensen, Mads Græsbøll
2010-01-01
Linear prediction has generally failed to make a breakthrough in audio processing, as it has done in speech processing. This is mostly due to its poor modeling performance, since an audio signal is usually an ensemble of different sources. Nevertheless, linear prediction comes with a whole set of interesting features that make the idea of using it in audio processing not far-fetched, e.g., the strong ability to model the spectral peaks that play a dominant role in perception. In this paper, we provide some preliminary conjectures and experiments on the use of high-order sparse linear predictors in audio processing. These predictors, successfully implemented in modeling the short-term and long-term redundancies present in speech signals, will be used to model tonal audio signals, both monophonic and polyphonic. We will show how the sparse predictors are able to efficiently model the different…
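A common way to obtain high-order sparse predictors, used here as a hedged illustration rather than the authors' exact formulation, is l1-regularised linear prediction solved by iterative soft thresholding (ISTA); on a synthetic signal with lag-1 and lag-40 redundancy, the sparse fit concentrates on exactly those lags.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic signal with short-term (lag 1) and long-term (lag 40) redundancy:
# x[n] = 0.5*x[n-1] + 0.4*x[n-40] + e[n]
N, P = 4000, 50
x = np.zeros(N)
e = rng.normal(0.0, 0.1, N)
for n in range(N):
    x[n] = e[n] + 0.5 * x[n - 1] + (0.4 * x[n - 40] if n >= 40 else 0.0)

# Order-P linear prediction problem y ~ X a (column k holds x delayed by k samples)
y = x[P:]
X = np.column_stack([x[P - k:N - k] for k in range(1, P + 1)])

# ISTA for min ||y - X a||^2 + lam*||a||_1, one common sparse-LP formulation
lam = 5.0
step = 1.0 / np.linalg.norm(X, 2) ** 2
a = np.zeros(P)
for _ in range(500):
    g = a - step * (X.T @ (X @ a - y))
    a = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)

top_lags = sorted(int(i) for i in np.argsort(-np.abs(a))[:2] + 1)
print(top_lags)  # the sparse predictor concentrates on lags 1 and 40
```

The lag-1 coefficient captures the short-term (formant-like) redundancy and the lag-40 coefficient the long-term (pitch-like) redundancy, which is the structure the abstract proposes to exploit for tonal audio.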
State Space Reduction of Linear Processes using Control Flow Reconstruction
van de Pol, Jan Cornelis; Timmer, Mark
2009-01-01
We present a new method for fighting the state space explosion of process algebraic specifications, by performing static analysis on an intermediate format: linear process equations (LPEs). Our method consists of two steps: (1) we reconstruct the LPE's control flow, detecting control flow parameters
Murray L. Ireland
2015-06-01
Multirotor is the umbrella term for the family of unmanned aircraft which includes the quadrotor, hexarotor and other vertical take-off and landing (VTOL) aircraft that employ multiple main rotors for lift and control. Development and testing of novel multirotor designs has been aided by the proliferation of 3D printing and inexpensive flight controllers and components. Different multirotor configurations exhibit specific strengths, while presenting unique challenges with regard to design and control. This article highlights the primary differences between three multirotor platforms: a quadrotor, a fully-actuated hexarotor and an octorotor. Each platform is modelled and then controlled using non-linear dynamic inversion. The differences in dynamics, control and performance are then discussed.
Global seismic inversion as the next standard step in the processing sequence
Maver, Kim G.; Hansen, Lars S.; Jepsen, Anne-Marie; Rasmussen, Klaus B.
1998-12-31
Seismic inversion of post-stack seismic data has until recently been regarded as a reservoir-oriented method, since standard inversion techniques rely on extensive well control and a detailed user-derived input model. Most seismic inversion techniques further require a stable wavelet. As a consequence, seismic inversion is mainly utilised in mature areas, focusing on specific zones only after the seismic data have been interpreted and are well understood. By using an advanced 3-D global technique, seismic inversion is presented as the next standard step in the processing sequence. The technique is robust towards noise within the seismic data, utilises a time-variant wavelet, and derives a low-frequency model utilising the stacking velocities and only limited well control. 4 figs.
Ungan, F.; Yesilgul, U.; Kasapoglu, E.; Sari, H.; Sökmen, I.
2012-01-01
In the present work, we have theoretically investigated the effects of applied electric and magnetic fields on the linear and nonlinear optical properties of a GaAs/Al{sub x}Ga{sub 1-x}As inverse parabolic quantum well for different Al concentrations at the well center. The Al concentration at the barriers was always x{sub max} = 0.3. The energy levels and wave functions are calculated within the effective mass approximation and the envelope function approach. Analytical expressions for the optical properties are obtained using the compact density-matrix approach. The linear, third-order nonlinear and total absorption and refractive index changes, depending on the Al concentration at the well center, are investigated as a function of the incident photon energy for different values of the applied electric and magnetic fields. The results show that the applied electric and magnetic fields have a great effect on these optical quantities. - Highlights: ► The x{sub c} concentration has a great effect on the optical characteristics of these structures. ► The electric and magnetic fields have a great effect on the optical properties of these structures. ► The total absorption coefficients increase as the electric and magnetic fields increase. ► The refractive index changes decrease as the electric and magnetic fields increase.
Ronchin, Erika; Masterlark, Timothy; Dawson, John; Saunders, Steve; Martì Molist, Joan
2017-06-01
We test an innovative inversion scheme using Green's functions from an array of pressure sources embedded in finite-element method (FEM) models to image, without assuming an a-priori geometry, the composite and complex shape of a volcano deformation source. We invert interferometric synthetic aperture radar (InSAR) data to estimate the pressurization and shape of the magma reservoir of Rabaul caldera, Papua New Guinea. The results image the extended shallow magmatic system responsible for a broad and long-term subsidence of the caldera between 2007 February and 2010 December. Elastic FEM solutions are integrated into the regularized linear inversion of InSAR data of volcano surface displacements in order to obtain a 3-D image of the source of deformation. The Green's function matrix is constructed from a library of forward line-of-sight displacement solutions for a grid of cubic elementary deformation sources. Each source is sequentially generated by removing the corresponding cubic elements from a common meshed domain and simulating the injection of a fluid mass flux into the cavity, which results in a pressurization and volumetric change of the fluid-filled cavity. The use of a single mesh for the generation of all FEM models avoids the computationally expensive process of non-linear inversion and remeshing a variable geometry domain. Without assuming an a-priori source geometry other than the configuration of the 3-D grid that generates the library of Green's functions, the geodetic data dictate the geometry of the magma reservoir as a 3-D distribution of pressure (or flux of magma) within the source array. The inversion of InSAR data of Rabaul caldera shows a distribution of interconnected sources forming an amorphous, shallow magmatic system elongated under two opposite sides of the caldera. The marginal areas at the sides of the imaged magmatic system are the possible feeding reservoirs of the ongoing Tavurvur volcano eruption of andesitic products on the
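The linear-inversion step at the heart of this scheme, a regularised least-squares solve against a Green's-function matrix, can be sketched with a toy matrix (a random `G` standing in for the FEM-derived library, and zeroth-order Tikhonov damping as one simple regularisation choice):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in: displacement data d = G m, where column j of G is the surface
# response to a unit pressurisation of elementary source j
n_data, n_src = 120, 30
G = rng.normal(size=(n_data, n_src))
m_true = np.zeros(n_src)
m_true[10:14] = 1.0                      # a compact pressurised region
d = G @ m_true + rng.normal(0.0, 0.05, n_data)

# Damped least squares: m = argmin ||G m - d||^2 + lam * ||m||^2
lam = 1.0
m_hat = np.linalg.solve(G.T @ G + lam * np.eye(n_src), G.T @ d)

print(np.allclose(m_hat, m_true, atol=0.2))  # True: the pressurised region is recovered
```

Because the source geometry is encoded in the distribution of recovered pressures `m_hat` rather than in the forward operator, no a priori reservoir shape is needed, which is the key advantage the abstract emphasises over non-linear geometry inversion.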
Relating Reasoning Methodologies in Linear Logic and Process Algebra
Yuxin Deng
2012-11-01
We show that the proof-theoretic notion of logical preorder coincides with the process-theoretic notion of contextual preorder for a CCS-like calculus obtained from the formula-as-process interpretation of a fragment of linear logic. The argument makes use of other standard notions in process algebra, namely a labeled transition system and a coinductively defined simulation relation. This result establishes a connection between an approach to reasoning about process specifications and a method for reasoning about logic specifications.
Liu, Yishan; Han, Ping [School of Biological Sciences, The University of Hong Kong, Pokfulam Road, Hong Kong (China); Li, Xiao-yan; Shih, Kaimin [Department of Civil Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong (China); Gu, Ji-Dong, E-mail: jdgu@hkucc.hku.hk [School of Biological Sciences, The University of Hong Kong, Pokfulam Road, Hong Kong (China); The Swire Institute of Marine Science, The University of Hong Kong, Shek O, Cape d' Aguilar, Hong Kong (China)
2011-09-15
Highlights: {yields} We isolated a Xanthobacter flavus strain PA1 utilizing the racemic 2-PBA and the single enantiomers as the sole source of carbon and energy. {yields} Both (R) and (S) forms of enantiomers can be degraded in a sequential manner in which the (S) disappeared before the (R) form. {yields} The biochemical degradation pathway involves an initial oxidation of the alkyl side chain before aromatic ring cleavage. - Abstract: Microbial degradation of the chiral 2-phenylbutyric acid (2-PBA), a metabolite of surfactant linear alkylbenzene sulfonates (LAS), was investigated using both racemic and enantiomer-pure compounds together with quantitative stereoselective analyses. A pure culture of bacteria, identified as Xanthobacter flavus strain PA1 isolated from the mangrove sediment of Hong Kong Mai Po Nature Reserve, was able to utilize the racemic 2-PBA as well as the single enantiomers as the sole source of carbon and energy. In the presence of the racemic compounds, X. flavus PA1 degraded both (R) and (S) forms of enantiomers to completion in a sequential manner in which the (S) enantiomer disappeared much faster than the (R) enantiomer. When the single pure enantiomer was supplied as the sole substrate, a unidirectional chiral inversion involving (S) enantiomer to (R) enantiomer was evident. No major difference was observed in the degradation intermediates with either of the individual enantiomers when used as the growth substrate. Two major degradation intermediates were detected and identified as 3-hydroxy-2-phenylbutanoic acid and 4-methyl-3-phenyloxetan-2-one, using a combination of liquid chromatography-mass spectrometry (LC-MS), and {sup 1}H and {sup 13}C nuclear magnetic resonance (NMR) spectroscopy. The biochemical degradation pathway follows an initial oxidation of the alkyl side chain before aromatic ring cleavage. This study reveals new evidence for enantiomeric inversion catalyzed by pure culture of environmental bacteria and emphasizes the
Short-memory linear processes and econometric applications
Mynbaev, Kairat T
2011-01-01
This book serves as a comprehensive source of asymptotic results for econometric models with deterministic exogenous regressors. Such regressors include linear (more generally, piece-wise polynomial) trends, seasonally oscillating functions, and slowly varying functions including logarithmic trends, as well as some specifications of spatial matrices in the theory of spatial models. The book begins with central limit theorems (CLTs) for weighted sums of short memory linear processes. This part contains the analysis of certain operators in Lp spaces and their employment in the derivation of CLTs
Han, Y.; Misra, S.
2018-04-01
Multi-frequency measurements of a dispersive electromagnetic (EM) property, such as electrical conductivity, dielectric permittivity or magnetic permeability, are commonly analyzed for purposes of material characterization. Such an analysis requires inversion of the multi-frequency measurement based on a specific relaxation model, such as the Cole-Cole model or Pelton's model. We develop a unified inversion scheme that can be coupled to various types of relaxation models to independently process multi-frequency measurements of varied EM properties for improved EM-based geomaterial characterization. The proposed inversion scheme is first tested on a few synthetic cases, in which different relaxation models are coupled into the inversion scheme and then applied to multi-frequency complex conductivity, complex resistivity, complex permittivity and complex impedance measurements. The method estimates up to seven relaxation-model parameters, exhibiting convergence and accuracy for random initializations of the relaxation-model parameters within up to 3 orders of magnitude around the true parameter values. The proposed inversion method implements a bounded Levenberg algorithm with tuned initial values of the damping parameter and its iterative adjustment factor, which are fixed in all the cases shown in this paper, irrespective of the type of measured EM property and the type of relaxation model. Notably, a jump-out step and a jump-back-in step are implemented as automated methods in the inversion scheme to prevent the inversion from becoming trapped around local minima and to honor the physical bounds of the model parameters. The proposed inversion scheme can easily be used to process various types of EM measurements without major changes to the scheme.
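A hedged sketch of this kind of multi-frequency relaxation-model inversion, using Pelton's Cole-Cole resistivity model and SciPy's bounded trust-region least squares in place of the authors' bounded Levenberg scheme (and without their jump-out/jump-back-in steps):

```python
import numpy as np
from scipy.optimize import least_squares

def pelton(params, w):
    """Pelton Cole-Cole complex resistivity: rho0, chargeability m, tau, exponent c."""
    rho0, m, tau, c = params
    return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * w * tau) ** c)))

w = 2 * np.pi * np.logspace(-2, 4, 30)           # measurement frequencies (rad/s)
p_true = np.array([100.0, 0.5, 0.01, 0.6])
data = pelton(p_true, w)                         # noiseless synthetic measurements

def residuals(p):
    r = pelton(p, w) - data
    return np.concatenate([r.real, r.imag])      # real residual vector for the solver

# Bounded trust-region least squares, started away from the truth to mimic
# the robustness tests described in the abstract
p0 = np.array([50.0, 0.3, 0.1, 0.5])
fit = least_squares(residuals, p0,
                    bounds=([1.0, 0.0, 1e-5, 0.1], [1e4, 1.0, 10.0, 1.0]),
                    x_scale="jac")
print(np.allclose(fit.x, p_true, rtol=1e-2))
```

Stacking real and imaginary parts into one residual vector is the standard trick for fitting complex-valued spectra with a real-valued solver, and the bounds play the role of the physical parameter limits the abstract mentions.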
The measurement problem on classical diffusion process: inverse method on stochastic processes
Bigerelle, M.; Iost, A.
2004-01-01
In a large number of diffusive systems, measurements are processed to calculate material parameters such as diffusion coefficients, or to verify the accuracy of mathematical models. However, the precision of the parameter determination, or of the model's relevance, depends on the location of the measurement itself. The aim of this paper is first to analyse, for a one-dimensional system, the precision of the measurement in relation to its location using an inverse problem algorithm, and secondly to examine the physical meaning of the results. Statistical mechanics considerations show that, beyond a time-distance criterion, measurement becomes uncertain whatever the initial conditions. The criterion proves that this chaotic mode is related to the production of anti-entropy at a mesoscopic scale, in violation of quantum theory about measurement
A non-linear model of economic production processes
Ponzi, A.; Yasutomi, A.; Kaneko, K.
2003-06-01
We present a new two-phase model of economic production processes which is a non-linear dynamical version of von Neumann's neoclassical model of production, including a market price-setting phase as well as a production phase. The rate of an economic production process is observed, for the first time, to depend on the minimum of its input supplies. This creates highly non-linear supply and demand dynamics. By numerical simulation, production networks are shown to become unstable when the ratio of different products to total processes increases. This provides some insight into the observed stability of competitive capitalist economies in comparison to monopolistic economies. Capitalist economies are also shown to have low unemployment.
Inhibition of the anaerobic digestion process by linear alkylbenzene sulfonates
Gavala, Hariklia N.; Ahring, Birgitte Kiær
2002-01-01
Linear Alkylbenzene Sulfonates (LAS) are the most widely used synthetic anionic surfactants. They are anthropogenic, toxic compounds and are found in the primary sludge generated in municipal wastewater treatment plants. Primary sludge is usually stabilized anaerobically, and therefore it is important to investigate the effect of these xenobiotic compounds on an anaerobic environment. The inhibitory effect of LAS on the acetogenic and methanogenic steps of the anaerobic digestion process was studied. LAS inhibit both acetogenesis from propionate…
Induction linear accelerators for commercial photon irradiation processing
Matthews, S.M.
1989-01-01
A number of proposed irradiation processes require bulk rather than surface exposure to intense ionizing radiation. Typical examples are irradiation of food packaged in pallet-size containers, processing of sewage sludge for recycling as landfill and fertilizer, sterilization of prepackaged medical disposables, treatment of municipal water supplies for pathogen reduction, etc. Volumetric processing of dense, bulky products with ionizing radiation requires high-energy photon sources, because electrons are not penetrating enough to provide uniform bulk dose deposition in thick, dense samples. Induction Linear Accelerator (ILA) technology developed at the Lawrence Livermore National Laboratory promises to play a key role in providing solutions to this problem, as discussed in this paper.
NON-LINEAR FINITE ELEMENT MODELING OF DEEP DRAWING PROCESS
Hasan YILDIZ
2004-03-01
Full Text Available The deep drawing process is one of the main procedures used in different branches of industry. Finding numerical solutions for the mechanical behaviour of this process saves time and money. For die surfaces with complex geometries, it is hard to determine the effects of the sheet-metal-forming parameters. Among these are wrinkling, tearing, the flow of the thin sheet metal in the die, and thickness change. The most difficult, however, is the determination of material properties during plastic deformation. In this study, the effects of all these parameters are analyzed before the dies are produced. The explicit non-linear finite element method is chosen for the analysis. The numerical results obtained for non-linear material and contact models are compared with experiments, and good agreement between the numerical and experimental results is obtained. The results for the models are given in detail.
On process capability and system availability analysis of the inverse Rayleigh distribution
Sajid Ali
2015-04-01
Full Text Available In this article, process capability and system availability analyses are discussed for the inverse Rayleigh lifetime distribution. A Bayesian approach with a conjugate gamma prior is adopted for the analysis. Different types of loss functions are considered to find Bayes estimates of the process capability and system availability. A simulation study is conducted to compare the different loss functions.
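The conjugacy the abstract relies on can be sketched: for the inverse Rayleigh density f(x; λ) = (2λ/x³)exp(−λ/x²), a gamma(a, b) prior on λ yields a gamma(a + n, b + Σ 1/xᵢ²) posterior, so the Bayes estimate under squared-error loss is the posterior mean. The prior values below are illustrative assumptions:

```python
import numpy as np

def sample_inverse_rayleigh(lam, n, rng):
    """If X ~ IR(lam) then 1/X^2 ~ Exp(rate=lam), so invert an exponential draw."""
    y = rng.exponential(scale=1.0 / lam, size=n)
    return 1.0 / np.sqrt(y)

def bayes_estimate(x, a=1.0, b=1.0):
    """Posterior mean of lam under a conjugate gamma(a, b) prior (squared-error loss)."""
    n = x.size
    s = np.sum(1.0 / x ** 2)        # sufficient statistic of the IR likelihood
    return (a + n) / (b + s)
```

Bayes estimates under the other loss functions the article considers would be different functionals of the same gamma posterior.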
Pulsed laser deposition of the lysozyme protein: an unexpected “Inverse MAPLE” process
Schou, Jørgen; Matei, Andreea; Constantinescu, Catalin
2012-01-01
Films of organic materials are commonly deposited by laser-assisted methods such as MAPLE (matrix-assisted pulsed laser evaporation), where a few percent of the film material in the target is protected by a light-absorbing volatile matrix. Another possibility is to irradiate the dry organic … the ejection and deposition of lysozyme. This can be called an "inverse MAPLE" process, since the ratio of "matrix" to film material in the target is 10:90, the inverse of the typical MAPLE process, where the film material is dissolved in the matrix down to several wt.%. Lysozyme is a well-known protein…
Linear circuits, systems and signal processing: theory and application
Byrnes, C.I.; Saeks, R.E.; Martin, C.F.
1988-01-01
In part because of its universal role as a first approximation of more complicated behaviour, and in part because of the depth and breadth of its principal paradigms, the study of linear systems continues to play a central role in control theory and its applications. Beyond the more traditional applications to aerospace and electronics, application areas such as econometrics, finance, and speech and signal processing have contributed to a renaissance in areas such as realization theory and classical automatic feedback control. Thus, the last few years have witnessed a remarkable research effort expended in understanding both new algorithms and new paradigms for the modeling and realization of linear processes, and in the analysis and design of robust control strategies. The papers in this volume reflect these trends in both the theory and applications of linear systems and were selected from the invited and contributed papers presented at the 8th International Symposium on the Mathematical Theory of Networks and Systems, held in Phoenix on June 15-19, 1987.
Huhn, Stefan; Peeling, Derek; Burkart, Maximilian
2017-10-01
With the availability of die face design tools and incremental solver technologies that provide detailed forming feasibility results in a timely fashion, the value of inverse solver technologies, and the resulting process improvements during the product development of stamped parts, is often underestimated. This paper presents some applications of inverse technologies currently used in the automotive industry to streamline the product development process and greatly increase the quality of the developed process and the resulting product. The first focus is on the so-called target strain technology. Application examples show how inverse forming analysis can be applied to support the process engineer during the development of a die face geometry for Class `A' panels. The drawing process is greatly affected by the die face design, and the process designer has to ensure that the resulting drawn panel meets specific requirements regarding surface quality and a minimum strain distribution to ensure dent resistance. The target strain technology provides almost immediate feedback to the process engineer during the die face design process on whether a specific change of the die face design will help achieve these requirements or be counterproductive. The paper further shows how an optimization of the material flow can be achieved through a newly developed technology called Sculptured Die Face (SDF). The die face generation in SDF is better suited for use in optimization loops than any conventional die face design technology based on cross-section design. A second focus of this paper is the use of inverse solver technologies for secondary forming operations. The paper shows how inverse technology can be used to accurately and quickly develop trim lines on simple as well as on complex support geometries.
Inverse magnetostrictive characteristics of Fe-Co composite materials using gas-nitriding process
Nakajima, Kenya; Yang, Zhenjun; Narita, Fumio
2018-03-01
The inverse magnetostrictive response, known as the Villari effect, of magnetostrictive materials is a change in magnetization due to an applied stress. It is commonly used for sensor applications. This work deals with the inverse magnetostrictive characteristics of Fe-Co bimetal plates that were subjected to a gas-nitriding process. Gas-nitriding was performed on the bimetal plates for 30 min at 853 K as a surface heat treatment, and the specimens were cooled to room temperature after the nitriding treatment was completed. Three-point bending tests were performed on the plates under a magnetic field, and the changes in the magnetic induction of the plates due to the applied load are discussed. The effect of the nitriding treatment on the inverse magnetostrictive characteristics, magnetostrictive susceptibility, and magnetic hysteresis loop was examined. Our work represents an important step forward in the development of magnetostrictive sensor materials.
Ali Mohammad-Djafari
2015-06-01
Full Text Available The main content of this review article is first to review the main inference tools using Bayes' rule, the maximum entropy principle (MEP), information theory, relative entropy and the Kullback–Leibler (KL) divergence, and Fisher information and its corresponding geometries. For each of these tools, the precise context of their use is described. The second part of the paper is focused on the ways these tools have been used in data, signal and image processing and in the inverse problems which arise in different physical sciences and engineering applications. A few examples of the applications are described: entropy in independent component analysis (ICA) and in blind source separation, Fisher information in data model selection, different maximum entropy-based methods in time series spectral estimation and in linear inverse problems and, finally, Bayesian inference for general inverse problems. Some original material concerning approximate Bayesian computation (ABC) and, in particular, the variational Bayesian approximation (VBA) method is also presented. VBA is used to propose an alternative Bayesian computational tool to the classical Markov chain Monte Carlo (MCMC) methods. We will also see that VBA encompasses joint maximum a posteriori (MAP) estimation, as well as the different expectation-maximization (EM) algorithms, as particular cases.
Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong
2018-03-01
Full-waveform inversion (FWI) reconstructs subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) mitigates the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of a supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that, as long as the difference between the initial and true models permits a sparse representation, ℓ₁-norm-constrained model updates suppress subsampling-related artifacts. However, the spectral projected gradient ℓ₁ (SPGℓ₁) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI), because of the efficiency and simplicity of LB in the framework of ℓ₁-norm-constrained optimization and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ₁-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ₂-norm problem, in terms of crosstalk elimination and high fidelity. The simpler LB method performs comparably and even superiorly to the complicated SPGℓ₁ method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
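The linearized Bregman iteration the authors adopt is, in its basic form, only two lines per step: a gradient update on an auxiliary variable followed by soft thresholding. A toy compressive-sensing sketch, with a random matrix standing in for the wave-equation Jacobian and all sizes and parameters chosen for illustration:

```python
import numpy as np

def soft_threshold(v, lam):
    """Component-wise shrinkage, the proximal map of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def linearized_bregman(A, b, lam, n_iter=5000):
    """Linearized Bregman iteration for sparse solutions of A x = b."""
    tau = 1.0 / np.linalg.norm(A, 2) ** 2   # step size from the spectral norm
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        v += tau * (A.T @ (b - A @ x))      # gradient step on the auxiliary variable
        x = soft_threshold(v, lam)          # shrinkage keeps the update sparse
    return x
```

In the SPFWI setting, `A` would be the (encoded) Born/Jacobian operator for a supershot, `b` the data residual, and `x` the model update in a sparsifying transform domain.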
Use of the inverse temperature profile in microwave processing of advanced ceramics
Binner, J.G.P.; Al-Dawery, I.A.; Aneziris, C.; Cross, T.E.
1992-01-01
Attempts are being made to exploit the inverse temperature profile which can be developed with microwave heating for the processing of certain advanced ceramics. This paper discusses the results obtained to date during the microwave sintering of YBCO high-T_c superconductors and the microwave reaction bonding of silicon nitride.
A Bayesian optimal design for degradation tests based on the inverse Gaussian process
Peng, Weiwen; Liu, Yu; Li, Yan Feng; Zhu, Shun Peng; Huang, Hong Zhong [University of Electronic Science and Technology of China, Chengdu (China)
2014-10-15
The inverse Gaussian process has recently been introduced as an attractive and flexible stochastic process for degradation modeling, and has been demonstrated as a valuable complement to models developed on the basis of the Wiener and gamma processes. We investigate the optimal design of degradation tests on the basis of the inverse Gaussian process. In addition to an optimal design with pre-estimated planning values of the model parameters, we also address the issue of uncertainty in the planning values by using a Bayesian method. An average pre-posterior variance of reliability is used as the optimization criterion. A trade-off between sample size and number of degradation observations is investigated in the degradation test planning. The effects of priors on the optimal designs and on the value of prior information are also investigated and quantified. The degradation test planning of a GaAs laser device is performed to demonstrate the proposed method.
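The inverse Gaussian process Y(t) has independent increments ΔY ~ IG(μΔt, λΔt²), giving E[Y(t)] = μt and Var[Y(t)] = μ³t/λ, which is what makes it attractive for monotone degradation data. A minimal simulation sketch (the parameter values are illustrative, not those of the GaAs laser study):

```python
import numpy as np

def simulate_ig_paths(mu, lam, t_max, n_steps, n_paths, rng):
    """Monotone degradation paths of an inverse Gaussian process.

    Increments over an interval dt are IG-distributed with mean mu*dt and
    shape lam*dt**2; numpy's `wald` sampler takes (mean, scale=shape)."""
    dt = t_max / n_steps
    inc = rng.wald(mu * dt, lam * dt ** 2, size=(n_paths, n_steps))
    return np.cumsum(inc, axis=1)   # Y(t) at t = dt, 2*dt, ..., t_max
```

A degradation-test design would choose n_paths (sample size) and n_steps (observations per unit) to minimize a pre-posterior criterion such as the one the paper proposes.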
Instantaneous Switching Processes in Quasi-Linear Circuits
Rositsa Angelova
2004-01-01
Full Text Available The paper considers instantaneous processes in electrical circuits produced by stepwise changes of the capacitance of a capacitor and the inductance of an inductor, and by the switching on and off of the circuit. In order to determine the set of electrical circuits for which it is possible to explicitly obtain the values of the currents and voltages at the end of the instantaneous process, a classification of networks with nonlinear elements is introduced. The instantaneous switching process at the moment t0 is approximated, as T->t0, by a sequence of processes in the interval [t0, T]. For quasi-linear inductive and capacitive circuits, we present the type of system satisfied by the currents and voltages, the charges, as well as the fluxes in the interval [t0, T]. From this system, after passage to the limit T->t0, we obtain formulas for the values of the circuit variables at the end of the instantaneous process. The obtained results are applied to the analysis of particular processes.
Induction-linear accelerators for food processing with ionizing radiation
Lagunas-Solar, M.C.
1985-01-01
Electron accelerators with sufficient beam power and reliability of operation will be required for large-scale radiation processing of food. Electron beams can be converted to the more penetrating bremsstrahlung radiation (X-rays), although at a great expense in useful X-ray power due to the small conversion efficiency. Recent advances in the technology of pulse-power accelerators indicate that Linear Induction Electron Accelerators (LIEA) are capable of sufficiently high beam current and pulse repetition rate, while delivering ultra-short pulses of high voltage. The application of LIEA systems in food irradiation offers the potential for high product output and compact, modular-type systems readily adaptable to food processing facilities. (orig.)
Can complex cellular processes be governed by simple linear rules?
Selvarajoo, Kumar; Tomita, Masaru; Tsuchiya, Masa
2009-02-01
Complex living systems show remarkably well-orchestrated, self-organized, robust, and stable behavior under a wide range of perturbations. However, despite the recent generation of high-throughput experimental datasets, basic cellular processes such as division, differentiation, and apoptosis still remain elusive. One of the key reasons is the lack of understanding of the governing principles of complex living systems. Here, we review the success of perturbation-response approaches, in which, without requiring detailed in vivo physiological parameters, the analysis of temporal concentration or activation responses unravels biological network features such as causal relationships between reactant species, regulatory motifs, etc. Our review shows that simple linear rules govern the response behavior of biological networks in an ensemble of cells. It remains puzzling why such simplicity holds in a complex heterogeneous environment. Provided physical reasons can be found for these phenomena, major advances in the understanding of basic cellular processes could be achieved.
Murata, M; Uchida, T; Yang, Y; Lezhava, A; Kinashi, H
2011-04-01
We have comprehensively analyzed the linear chromosomes of Streptomyces griseus mutants constructed and kept in our laboratory. During this study, macrorestriction analysis of AseI and DraI fragments of mutant 402-2 suggested a large chromosomal inversion. The junctions of the chromosomal inversion were cloned and sequenced and compared with the corresponding target sequences in the parent strain 2247. Consequently, a transposon-mediated mechanism was revealed. Namely, a transposon originally located at the left target site was replicatively transposed to the right target site in an inverted orientation, which generated a second copy and at the same time caused a 2.5-Mb chromosomal inversion. The transposon involved, named TnSGR, was grouped into a new subfamily of the resolvase-encoding Tn3-family transposons based on its gene organization. Finally, the terminal diversity of S. griseus chromosomes is discussed by comparing the sequences of strains 2247 and IFO13350.
Mössbauer spectra linearity improvement by sine velocity waveform followed by linearization process
Kohout, Pavel; Frank, Tomas; Pechousek, Jiri; Kouril, Lukas
2018-05-01
This note reports the development of a new method for linearizing Mössbauer spectra recorded with a sinusoidal drive velocity signal. Mössbauer spectrum linearity is a critical parameter determining spectrometer accuracy. Measuring spectra with a sinusoidal velocity axis and subsequently linearizing them increases the linearity of the spectra over a wider frequency range of the drive signal, as harmonic movement is generally natural for velocity transducers. The obtained data demonstrate that linearized sine spectra have lower nonlinearity and line-width parameters in comparison with those measured using a traditional triangular velocity signal.
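The linearization step itself can be sketched: channels accumulated uniformly in time are non-uniform in velocity when the drive is sinusoidal, and resampling onto a uniform velocity grid recovers a linear axis. The synthetic single-line spectrum below is an illustrative stand-in for measured data, not the note's procedure in detail:

```python
import numpy as np

n_ch = 512
v_max = 10.0                                   # mm/s, drive amplitude (assumed)
phase = np.linspace(-np.pi / 2, np.pi / 2, n_ch)
v_sine = v_max * np.sin(phase)                 # channel velocities: uniform in time,
                                               # crowded near +/- v_max in velocity

# synthetic absorption line at +2.0 mm/s, recorded on the nonuniform sine axis
spectrum = 1.0 - 0.3 * np.exp(-((v_sine - 2.0) / 0.5) ** 2)

# linearization: resample the spectrum onto a uniform velocity grid
v_lin = np.linspace(-v_max, v_max, n_ch)
spec_lin = np.interp(v_lin, v_sine, spectrum)
```

`np.interp` requires the sample abscissae to be increasing, which the half-period sine sweep guarantees; real data would also need the two half-periods folded together.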
High-Dimensional Quantum Information Processing with Linear Optics
Fitzpatrick, Casey A.
Quantum information processing (QIP) is an interdisciplinary field concerned with the development of computers and information processing systems that utilize quantum mechanical properties of nature to carry out their function. QIP systems have become vastly more practical since the turn of the century. Today, QIP applications span imaging, cryptographic security, computation, and simulation (quantum systems that mimic other quantum systems). Many important strategies improve quantum versions of classical information system hardware, such as single-photon detectors and quantum repeaters. Another, more abstract, strategy engineers high-dimensional quantum state spaces, so that each successful event carries more information than traditional two-level systems allow. Photonic states in particular bring the added advantages of weak environmental coupling and data transmission near the speed of light, allowing for simpler control and lower system design complexity. In this dissertation, numerous novel, scalable designs for practical high-dimensional linear-optical QIP systems are presented. First, a correlated-photon imaging scheme using orbital angular momentum (OAM) states to detect rotational symmetries in objects, and to build images from those interactions, is reported. Then, a statistical detection method using chains of OAM superpositions distributed according to the Fibonacci sequence is established and expanded upon. It is shown that the approach gives rise to schemes for sorting, detecting, and generating the recursively defined high-dimensional states on which some quantum cryptographic protocols depend. Finally, an ongoing study based on a generalization of the standard optical multiport for applications in quantum computation and simulation is reported upon. The architecture allows photons to reverse momentum inside the device. This in turn enables realistic implementation of controllable linear-optical scattering vertices for
Faggiani Dias, D.; Subramanian, A. C.; Zanna, L.; Miller, A. J.
2017-12-01
Sea surface temperature (SST) in the Pacific sector is well known to vary on time scales from seasonal to decadal, and the ability to predict these SST fluctuations has many societal and economic benefits. We therefore use a suite of statistical linear inverse models (LIMs) to understand the remote and local SST variability that influences SST predictions over the North Pacific region, and to further improve our understanding of how the long observed SST record can better guide multi-model ensemble forecasts. Observed monthly SST anomalies in the Pacific sector (between 15°S and 60°N) are used to construct different regional LIMs for seasonal to decadal prediction. The forecast skills of the LIMs are compared to those of two operational forecast systems in the North American Multi-Model Ensemble (NMME), revealing that the LIM has better skill in the Northeastern Pacific than the NMME models. The LIM is also found to have forecast skill for SST in the Tropical Pacific comparable to the NMME models. This skill, however, is highly dependent on the initialization month, with forecasts initialized during the summer having better skill than those initialized during the winter. The forecast skill of the LIM is also influenced by the verification period used, likely due to the changing character of El Niño in the 20th century. The North Pacific seems to be a source of predictability for the Tropics on seasonal to interannual time scales, while the Tropics act to worsen the forecast skill in the North Pacific. The data were also bandpassed into seasonal, interannual and decadal time scales to identify relationships between time scales using the structure of the propagator matrix. For the decadal component, this coupling occurs the other way around: the Tropics seem to be a source of predictability for the Extratropics, but the Extratropics do not improve the predictability for the Tropics. These results indicate the importance of temporal
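A linear inverse model is built by estimating the propagator G(τ) = C(τ)C(0)⁻¹ from the lag-covariance statistics of the anomaly field; forecasts are then x(t+τ) ≈ G(τ)x(t). A minimal two-variable sketch on synthetic data (the dynamics matrix and noise level are illustrative assumptions, not SST statistics):

```python
import numpy as np

rng = np.random.default_rng(0)
M = np.array([[0.9, 0.2],
              [0.0, 0.7]])                 # true (stable) lag-1 propagator
n = 50000
x = np.zeros((n, 2))
for t in range(n - 1):                     # stochastically forced linear dynamics
    x[t + 1] = M @ x[t] + 0.5 * rng.standard_normal(2)

X0, X1 = x[:-1], x[1:]
C0 = X0.T @ X0 / (n - 1)                   # zero-lag covariance C(0)
C1 = X1.T @ X0 / (n - 1)                   # lag-1 covariance C(tau)
G = C1 @ np.linalg.inv(C0)                 # LIM propagator estimate
forecast = X0 @ G.T                        # one-step forecasts x(t+1) ~ G x(t)
```

In practice the anomalies are first projected onto a truncated EOF basis, and G is diagnosed at one training lag and powered up for longer leads.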
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, A.
2016-01-01
Vol. 9, No. 11 (2016), pp. 4297-4311, ISSN 1991-959X. R&D Projects: GA MŠk(CZ) 7F14287. Institutional support: RVO:67985556. Keywords: linear inverse problem * Bayesian regularization * source-term determination * variational Bayes method. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 3.458, year: 2016. http://library.utia.cas.cz/separaty/2016/AS/tichy-0466029.pdf
Linear and Nonlinear MHD Wave Processes in Plasmas. Final Report
Tataronis, J. A.
2004-01-01
This program treats theoretically low-frequency linear and nonlinear wave processes in magnetized plasmas. A primary objective has been to evaluate the effectiveness of MHD waves in heating plasma and driving current in toroidal configurations. The research covers the following topics: (1) the existence and properties of the MHD continua in plasma equilibria without spatial symmetry; (2) low-frequency nonresonant current drive and nonlinear Alfven wave effects; and (3) nonlinear electron acceleration by rf and random plasma waves. Results have contributed to the fundamental knowledge base of MHD activity in symmetric and asymmetric toroidal plasmas. Among the accomplishments of this research effort, the following are highlighted. The MHD continuum mode singularities were identified in toroidal geometry. A third-order ordinary differential equation was derived that governs nonlinear current drive in the singular layers of the Alfven continuum modes in axisymmetric toroidal geometry; bounded solutions of this ODE imply a net average current parallel to the toroidal equilibrium magnetic field. A new unstable continuum of the linearized MHD equations was discovered in axially periodic circular plasma cylinders with shear and incompressibility. This continuum, which we named the ''accumulation continuum'' and which is related to ballooning modes, arises as discrete unstable eigenfrequencies accumulate on the imaginary frequency axis in the limit of large mode numbers. Techniques were developed to control nonlinear electron acceleration through the action of multiple coherent and random plasma waves. Two important elements of this program are student participation and student training in plasma theory.
Linear response in the nonequilibrium zero range process
Maes, Christian; Salazar, Alberto
2014-01-01
We explore a number of explicit response formulae for the boundary-driven zero range process under changes in the exit and entrance rates. In such a nonequilibrium regime, kinetic (and not only thermodynamic) aspects make a difference in the response. Apart from a number of formal approaches, we illustrate a general decomposition of the linear response into entropic and frenetic contributions, the latter being realized from changes in the dynamical activity at the boundaries. In particular, in this way one obtains nonlinear modifications to the Green–Kubo relation. We end with some general remarks about situations where the nonequilibrium response remains given by the (equilibrium) Kubo formula, such as for the density profile in the boundary-driven Lorentz gas.
Small-scale quantum information processing with linear optics
Bergou, J.A.; Steinberg, A.M.; Mohseni, M.
2005-01-01
Full text: Photons are ideal systems for carrying quantum information. Although performing large-scale quantum computation on optical systems is extremely demanding, non-scalable linear-optics quantum information processing may prove essential as part of quantum communication networks; in addition, efficient (scalable) linear-optical quantum computation proposals rely on the same optical elements. Here, by constructing multirail optical networks, we experimentally study two central problems in quantum information science, namely optimal discrimination between nonorthogonal quantum states, and controlling decoherence in quantum systems. Quantum mechanics forbids deterministic discrimination between nonorthogonal states. This is one of the central features of quantum cryptography, which leads to secure communications. Quantum state discrimination is an important primitive in quantum information processing, since it determines the limitations of a potential eavesdropper, and it has applications in quantum cloning and entanglement concentration. In this work, we experimentally implement generalized measurements in an optical system and demonstrate the first optimal unambiguous discrimination between three non-orthogonal states, with a success rate of 55%, to be compared with the 25% maximum achievable using projective measurements. Furthermore, we present the first realization of unambiguous discrimination between a pure state and a nonorthogonal mixed state. In a separate experiment, we demonstrate how decoherence-free subspaces (DFSs) may be incorporated into a prototype optical quantum algorithm. Specifically, we present an optical realization of the two-qubit Deutsch-Jozsa algorithm in the presence of random noise. By introducing localized turbulent airflow we produce collective optical dephasing, leading to large error rates, and demonstrate that using DFS encoding, the error rate in the presence of decoherence can be reduced from 35% to essentially its pre
Numerical simulation of linear friction welding (LFW) processes
Fratini, L.; La Spisa, D.
2011-05-01
Solid state welding processes are becoming increasingly important due to a large number of advantages related to joining "unweldable" materials, in particular lightweight alloys. Linear friction welding (LFW) has been used successfully to bond non-axisymmetric components of a range of materials, including titanium alloys, steels, aluminum alloys, nickel, copper, and also dissimilar material combinations. The technique is useful in investigating joint quality and in reducing the costs of components and parts in the aeronautic and automotive industries. LFW welds parts through the relative reciprocating motion of two components under an axial force. In this process the heat source is the frictional work decaying into heat, which determines a local softening of the material and proper bonding conditions due to both the temperature increase and the local pressure of the two edges to be welded. This paper presents a comparative test between a two-dimensional (plane strain) numerical model and a three-dimensional model of an LFW process on AISI 1045 steel specimens. It must be observed that the 3D model assures a faithful simulation of the actual three-dimensional material flow, although the two-dimensional simulation's computational times are very short, a few hours instead of several for the 3D model. The obtained results were compared with experimental values found in the scientific literature.
Massively Parallel Geostatistical Inversion of Coupled Processes in Heterogeneous Porous Media
Ngo, A.; Schwede, R. L.; Li, W.; Bastian, P.; Ippisch, O.; Cirpka, O. A.
2012-04-01
The quasi-linear geostatistical approach is an inversion scheme that can be used to estimate the spatial distribution of a heterogeneous hydraulic conductivity field. The estimated parameter field is considered to be a random variable that varies continuously in space, meets the measurements of dependent quantities (such as the hydraulic head, the concentration of a transported solute, or its arrival time), and shows the required spatial correlation (described by certain variogram models). This is a method of conditioning a parameter field to observations. Upon discretization, it results in as many parameters as elements of the computational grid. For a full three-dimensional representation of the heterogeneous subsurface, the resolutions of the model domain achievable on a serial computer (up to one million parameters) are hardly sufficient. The forward problems to be solved within the inversion procedure consist of the elliptic steady-state groundwater flow equation and the formally elliptic but nearly hyperbolic steady-state advection-dominated solute transport equation in a heterogeneous porous medium. Both equations are discretized by Finite Element Methods (FEM) using fully scalable domain decomposition techniques. Whereas standard conforming FEM is sufficient for the flow equation, for the advection-dominated transport equation, which raises well-known numerical difficulties at sharp fronts or boundary layers, we use the streamline diffusion approach. The arising linear systems are solved using efficient iterative solvers with an AMG (algebraic multigrid) preconditioner. During each iteration step of the inversion scheme one needs to solve a multitude of forward and adjoint problems in order to calculate the sensitivities of each measurement and the related cross-covariance matrix of the unknown parameters and the observations. In order to reduce interprocess communication and to improve the scalability of the code on larger clusters
Non-linear processes in the Earth atmosphere boundary layer
Grunskaya, Lubov; Valery, Isakevich; Dmitry, Rubay
2013-04-01
This work concerns the study of electromagnetic fields in the Earth-ionosphere resonator, in particular the connection between tidal processes of geophysical and astrophysical origin and the Earth's electromagnetic fields. Because of the non-linearity of the Earth-ionosphere resonator, the lunar and astrophysical tides appear in the Earth's electromagnetic fields in polyharmonic form. Such non-linear processes cannot be detected by classical spectral analysis; to extract the tidal components from the electromagnetic fields, a method based on the eigenvectors of the covariance matrix is therefore used. Experimental investigations of electromagnetic fields in the atmospheric boundary layer are carried out at spatially separated stations: the Vladimir State University test ground, the Main Geophysical Observatory (St. Petersburg), the Kamchatka peninsula, and Lake Baikal. In 2012, the multichannel synchronous monitoring system of electric and geomagnetic fields continued to operate at the spaced stations: the VSU physical experimental proving ground; the station of the Institute of Solar-Terrestrial Physics of the Russian Academy of Sciences (RAS) at Lake Baikal; the station of the Institute of Volcanology and Seismology of RAS in Paratunka; and the station in Obninsk, operated by the scientific and production association "Typhoon". Such investigations became possible after the development of a method for decomposing the experimental electromagnetic-field signal into non-correlated components. The analysis of the eigenvectors of the time-series covariance matrix was used to expose the influence of the lunar tides on Ez. The method decomposes an experimental signal into non-correlated periodicities and is effective precisely in the situation where the energy contribution of the possible lunar-tide influence on the electromagnetic fields is small. Software components implementing the method have been developed.
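The covariance-eigenvector decomposition described above can be sketched as follows. This is an illustrative toy in the spirit of singular spectrum analysis; the window length, tide period and noise level are made-up values, not parameters from the abstract.

```python
import numpy as np

def eigen_components(signal, window):
    """Decompose a 1-D signal via the eigenvectors of its lagged-covariance
    matrix: eigenvectors with the largest eigenvalues carry the coherent
    (e.g. tidal) periodicities, the remainder carry incoherent noise."""
    n = len(signal) - window + 1
    X = np.array([signal[i:i + window] for i in range(n)])  # trajectory matrix
    C = np.cov(X, rowvar=False)                  # window x window covariance
    w, V = np.linalg.eigh(C)                     # eigenvalues in ascending order
    order = np.argsort(w)[::-1]                  # re-sort by descending energy
    return w[order], V[:, order], X

# A weak tide-like line buried in much stronger noise:
rng = np.random.default_rng(0)
t = np.arange(2000)
signal = 0.1 * np.sin(2 * np.pi * t / 24.8) + rng.normal(0.0, 1.0, t.size)
w, V, X = eigen_components(signal, window=50)
# The leading eigenpairs concentrate the periodic energy even though the
# tide's energetic contribution to the raw signal is small.
```

Projecting the trajectory matrix onto the leading eigenvectors (`X @ V[:, :2]`) then yields the non-correlated periodic components.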
Mixing and evaporation processes in an inverse estuary inferred from δ2H and δ18O
Corlis, Nicholas J.; Herbert Veeh, H.; Dighton, John C.; Herczeg, Andrew L.
2003-05-01
We have measured δ2H and δ18O in Spencer Gulf, South Australia, an inverse estuary with a salinity gradient from 36‰ near its entrance to about 45‰ at its head. We show that a simple evaporation model of seawater under ambient conditions, aided by its long residence time in Spencer Gulf, can account for the major features of the non-linear distribution pattern of δ2H with respect to salinity, at least in the restricted part of the gulf. In the more exposed part of the gulf, the δ/S pattern appears to be governed primarily by mixing processes between inflowing shelf water and outflowing high salinity gulf water. These data provide direct support for the oceanographic model of Spencer Gulf previously proposed by other workers. Although the observed δ/S relationship here is non-linear and hence in notable contrast to the linear δ/S relationship in the Red Sea, the slopes of δ2H vs. δ18O are comparable, indicating that the isotopic enrichments in both marginal seas are governed by similar climatic conditions with evaporation exceeding precipitation.
Airborne gamma-ray spectrometry data processing using 1.5D inversion.
Druker, Eugene
2017-10-01
Standard processing of Airborne Gamma-Ray Spectrometry data generally gives good results when the measurement conditions are almost constant within several footprint area sizes, with the possible exception of flight height variations in a small range. In practice, deviations, such as large or abrupt changes of flight height and/or rugged terrain are not so rare and lead to certain problems. This article proposes a different approach where the solutions of inverse problems are used for data processing. The approach is quite natural in the processing of field data measured along the flight lines: it explicitly takes into account 1.5D survey models and flight parameters - from topography to sources distribution on the surface. Also, it clearly demonstrates that the inverse problem of the Airborne Gamma-Ray Spectrometry does not have a unique solution. This feature can be used in accordance with the underlying geological problem since various formulations of inverse problems can lead to various geological solutions. The use of the approach is illustrated by several examples given for flight lines and survey areas. This approach can be particularly useful in situations where geological, geophysical and/or geographic survey conditions are far from the standard assumptions. Copyright © 2017 Elsevier Ltd. All rights reserved.
Simone eFavelle
2012-12-01
Upright faces are thought to be processed holistically. However, the range of views within which holistic processing occurs is unknown. Recent research by McKone (2008) suggests that holistic processing occurs for all yaw-rotated face views (i.e. full-face through to profile). Here we examined whether holistic processing occurs for pitch-, as well as yaw-, rotated face views. In this face recognition experiment: (i) participants made same/different judgments about two sequentially presented faces (either both upright or both inverted); (ii) the test face was pitch/yaw rotated by between 0° and 75° from the encoding face (always a full-face view). Our logic was as follows: if a particular pitch/yaw-rotated face view is being processed holistically when upright, then this processing should be disrupted by inversion. Consistent with previous research, significant face inversion effects (FIEs) were found for all yaw-rotated views. However, while FIEs were found for pitch rotations up to 45°, none were observed for 75° pitch rotations (rotated either above or below the full face). We conclude that holistic processing does not occur for all views of upright faces (e.g., not for uncommon pitch-rotated views), only for those that can be matched to a generic global representation of a face.
Numerical modeling of axi-symmetrical cold forging process by ``Pseudo Inverse Approach''
Halouani, A.; Li, Y. M.; Abbes, B.; Guo, Y. Q.
2011-05-01
The incremental approach is widely used for forging process modeling; it gives good strain and stress estimates, but it is time consuming. A fast Inverse Approach (IA) has been developed for axi-symmetric cold forging modeling [1-2]. This approach exploits to the maximum the knowledge of the final part's shape, and the assumptions of proportional loading and simplified tool actions make the IA simulation very fast. The IA has proved very useful for tool design and optimization because of its rapidity and good strain estimation. However, the assumptions mentioned above cannot provide good stress estimates because the loading history is neglected. A new approach called the "Pseudo Inverse Approach" (PIA) was proposed by Batoz, Guo et al. [3] for sheet forming modeling; it keeps the IA's advantages but gives good stress estimates by taking the loading history into consideration. Our aim in this paper is to adapt the PIA to cold forging modeling. The main developments in the PIA are summarized as follows: a few intermediate configurations are generated for the given tool positions to account for the deformation history; the strain increment is calculated by the inverse method between the previous and current configurations; and an incremental plastic-integration algorithm is used in the PIA instead of the total constitutive law used in the IA. An example is used to show the effectiveness and limitations of the PIA for cold forging process modeling.
Larin, S.V.; Lyulin, S.V.; Lyulin, A.V.; Darinskii, A.A.
2009-01-01
Complexes of fully ionized third-generation dendrimers with oppositely charged linear polyelectrolyte chains are studied by the Brownian dynamics method. A freely jointed model of a dendrimer and a linear chain is used. Electrostatic interactions are considered within the Debye-Hückel approximation
A linear process-algebraic format for probabilistic systems with data (extended version)
Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette; Timmer, Mark
2010-01-01
This paper presents a novel linear process-algebraic format for probabilistic automata. The key ingredient is a symbolic transformation of probabilistic process algebra terms that incorporate data into this linear format while preserving strong probabilistic bisimulation. This generalises similar
A linear process-algebraic format for probabilistic systems with data
Katoen, Joost P.; van de Pol, Jan Cornelis; Stoelinga, Mariëlle Ida Antoinette; Timmer, Mark; Gomes, L.; Khomenko, V.; Fernandes, J.M.
This paper presents a novel linear process algebraic format for probabilistic automata. The key ingredient is a symbolic transformation of probabilistic process algebra terms that incorporate data into this linear format while preserving strong probabilistic bisimulation. This generalises similar
Tonellot, Th.L.
2000-03-24
In this thesis, we propose a method which takes into account a priori information (geological, diagraphic and stratigraphic knowledge) in linearized pre-stack seismic data inversion. The approach is based on a formalism in which the a priori information is incorporated in an a priori model of elastic parameters - density, P and S impedances - and a model covariance operator which describes the uncertainties in the model. The first part of the thesis is dedicated to the study of this covariance operator and of the norm associated with its inverse. We have generalized the exponential covariance operator in order to describe the uncertainties in the a priori model's elastic parameters and their correlations at each location. We give the analytical expression of the inverse of the covariance operator in 1-D, 2-D and 3-D, and we discretize the associated norm with a finite element method. The second part is dedicated to synthetic and real examples. In a preliminary step, we developed a pre-stack data well-calibration method which allows the estimation of the source signal. The impact of different a priori information is then demonstrated on synthetic and real data. (author)
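A minimal sketch of a 1-D exponential covariance matrix and the prior-model norm associated with its inverse, as used in this kind of Bayesian-regularized inversion. The grid, standard deviation and correlation length below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def exponential_covariance(x, sigma, ell):
    """1-D exponential covariance matrix C_ij = sigma^2 * exp(-|x_i - x_j| / ell).
    sigma is the a priori standard deviation, ell the correlation length."""
    d = np.abs(x[:, None] - x[None, :])
    return sigma**2 * np.exp(-d / ell)

def prior_norm(m, m_prior, C):
    """Norm associated with the inverse covariance operator:
    (m - m_prior)^T C^{-1} (m - m_prior)."""
    r = m - m_prior
    return float(r @ np.linalg.solve(C, r))

x = np.linspace(0.0, 100.0, 101)           # depth axis, arbitrary units
C = exponential_covariance(x, sigma=0.05, ell=10.0)
m_prior = np.ones_like(x)                  # normalized a priori impedance model
m = m_prior + 0.02 * np.sin(x / 5.0)       # candidate model to be penalized
penalty = prior_norm(m, m_prior, C)
```

In the inversion, this penalty is added to the data-misfit term, so that models far from the a priori model (relative to the stated uncertainties and correlations) are discouraged.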
Dinh Nho Hao; Nguyen Trung Thanh; Sahli, Hichem
2008-01-01
In this paper we consider, by a variational method, a multi-dimensional inverse heat conduction problem with time-dependent coefficients in a box, a problem well known to be severely ill-posed. The gradient of the functional to be minimized is obtained with the aid of an adjoint problem, and the conjugate gradient method with a stopping rule is then applied to this ill-posed optimization problem. To enhance the stability and the accuracy of the numerical solution we apply this scheme to the discretized inverse problem rather than to the continuous one. The difficulties with the large dimensions of the discretized problems are overcome by a splitting method which only requires the solution of easy-to-solve one-dimensional problems. The numerical results provided by our method are very good, and the techniques seem to be very promising.
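The conjugate-gradient-with-stopping-rule strategy can be sketched on a generic discretized ill-posed problem. The sketch below uses CGLS (conjugate gradients on the normal equations) with a discrepancy-principle stopping rule; the test operator and noise level are arbitrary illustrations, and the authors' splitting into one-dimensional subproblems is not reproduced here.

```python
import numpy as np

def cgls(A, b, delta, max_iter=200):
    """Conjugate gradients on the normal equations A^T A x = A^T b,
    stopped by the discrepancy principle: iterate only while ||Ax - b||
    exceeds the noise level delta (early stopping acts as regularization)."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(max_iter):
        if np.linalg.norm(r) <= delta:        # stopping rule
            break
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

# Mildly ill-conditioned test problem (a discretized integration operator):
rng = np.random.default_rng(1)
n = 50
A = np.tril(np.ones((n, n))) / n
x_true = np.sin(np.linspace(0, np.pi, n))
noise = 1e-3 * rng.normal(size=n)
b = A @ x_true + noise
x_est = cgls(A, b, delta=np.linalg.norm(noise))
```

Stopping when the residual matches the noise level avoids fitting the noise, which is exactly what makes iterating to full convergence harmful on ill-posed problems.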
Impaired configural body processing in anorexia nervosa: evidence from the body inversion effect.
Urgesi, Cosimo; Fornasari, Livia; Canalaz, Francesca; Perini, Laura; Cremaschi, Silvana; Faleschini, Laura; Thyrion, Erica Zappoli; Zuliani, Martina; Balestrieri, Matteo; Fabbro, Franco; Brambilla, Paolo
2014-11-01
Patients with anorexia nervosa (AN) suffer from severe disturbances of body perception. It is unclear, however, whether such disturbances are linked to specific alterations in the processing of body configurations with respect to the local processing of body part details. Here, we compared a consecutive sample of 12 AN patients with a group of 12 age-, gender- and education-matched controls using an inversion effect paradigm requiring the visual discrimination of upright and inverted pictures of whole bodies, faces and objects. The AN patients presented selective deficits in the discrimination of upright body stimuli, which requires configural processing. Conversely, patients and controls showed comparable abilities in the discrimination of inverted bodies, which involves only detail-based processing, and in the discrimination of both upright and inverted faces and objects. Importantly, the body inversion effect negatively correlated with the persistence scores on the Temperament and Character Inventory, which evaluates an increased tendency to convert a signal of punishment into a signal of reinforcement. These results suggest that the deficits of configural processing in AN patients may be associated with their obsessive worries about body appearance and with the excessive attention to details that characterizes their general perceptual style. © 2013 The British Psychological Society.
Hikosaka Kenji
2012-11-01
Background: Mitochondrial (mt) genomes vary considerably in size, structure and gene content. The mt genomes of the phylum Apicomplexa, which includes important human pathogens such as the malaria parasite Plasmodium, also show marked diversity of structure. Plasmodium has a concatenated linear mt genome of the smallest size (6 kb); Babesia and Theileria have a linear monomeric mt genome (6.5 kb to 8.2 kb) with terminal inverted repeats; Eimeria, which is distantly related to Plasmodium and Babesia/Theileria, possesses a mt genome (6.2 kb) with a concatemeric form similar to that of Plasmodium; Cryptosporidium, the earliest branching lineage within the phylum Apicomplexa, has no mt genome. We are interested in the evolutionary origin of the linear mt genomes of Babesia/Theileria, and have investigated mt genome structures in members of the archaeopiroplasmid lineage, which branched off earlier than Babesia/Theileria. Results: The complete mt genomes of the archaeopiroplasmid parasites Babesia microti and Babesia rodhaini were sequenced. The mt genomes of B. microti (11.1 kb) and B. rodhaini (6.9 kb) possess two pairs of unique inverted repeats, IR-A and IR-B. Flip-flop inversions between the two IR-As and between the two IR-Bs appear to generate four distinct genome structures that are present at an equimolar ratio. An individual parasite contained multiple mt genome structures, with 20 copies and 2-3 copies per haploid nuclear genome in B. microti and B. rodhaini, respectively. Conclusion: We found a novel linear monomeric mt genome structure of B. microti and B. rodhaini equipped with a dual flip-flop inversion system, by which four distinct genome structures are readily generated. To our knowledge, this study is the first to report the presence of two pairs of distinct IR sequences within a monomeric linear mt genome. The present finding provides insight into the further understanding of the evolution of mt genome structure.
Inverse problems of geophysics
Yanovskaya, T.B.
2003-07-01
This report gives an overview and the mathematical formulation of geophysical inverse problems. General principles of statistical estimation are explained. The maximum likelihood and least-squares fit methods, the Backus-Gilbert method and general approaches for solving inverse problems are discussed. General formulations of linearized inverse problems, singular value decomposition and properties of pseudo-inverse solutions are given.
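The pseudo-inverse solution via singular value decomposition mentioned in the report can be illustrated with a small truncated-SVD sketch; the matrix, data vector and truncation level are arbitrary examples.

```python
import numpy as np

def tsvd_solution(G, d, k):
    """Truncated-SVD pseudo-inverse solution of the linearized problem
    G m = d, keeping only the k largest singular values (discarding the
    small ones regularizes the inversion)."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ (s_inv * (U.T @ d))

# Underdetermined toy problem: 2 data, 3 model parameters.
G = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
d = np.array([2.0, 3.0])
m = tsvd_solution(G, d, k=2)   # k = rank(G): the minimum-norm pseudo-inverse solution
```

With `k` equal to the rank, this reproduces the Moore-Penrose pseudo-inverse solution; choosing a smaller `k` trades data fit for stability, which is the essence of regularizing an ill-posed linear inverse problem.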
Liu Guanghui [Department of Physics, College of Physics and Electronic Engineering, Guangzhou University, Guangzhou 510006 (China); Guo Kangxian, E-mail: axguo@sohu.com [Department of Physics, College of Physics and Electronic Engineering, Guangzhou University, Guangzhou 510006 (China); Wang Chao [Institute of Public Administration, Guangzhou University, Guangzhou 510006 (China)
2012-06-15
The linear and nonlinear optical absorption in a disk-shaped quantum dot (DSQD) with parabolic potential plus an inverse squared potential in the presence of a static magnetic field are theoretically investigated within the framework of the compact-density-matrix approach and iterative method. The energy levels and the wave functions of an electron in the DSQD are obtained by using the effective mass approximation. Numerical calculations are presented for typical GaAs/AlAs DSQD. It is found that the optical absorption coefficients are strongly affected not only by a static magnetic field, but also by the strength of external field, the confinement frequency and the incident optical intensity.
Spurr, Robert; Stamnes, Knut; Eide, Hans; Li Wei; Zhang Kexin; Stamnes, Jakob
2007-01-01
In this paper and the sequel, we investigate the application of classic inverse methods based on iterative least-squares cost-function minimization to the simultaneous retrieval of aerosol and ocean properties from visible and near infrared spectral radiance measurements such as those from the SeaWiFS and MODIS instruments. Radiance measurements at the satellite are simulated directly using an accurate coupled atmosphere-ocean-discrete-ordinate radiative transfer (CAO-DISORT) code as the main component of the forward model. For this kind of cost-function inverse problem, we require the forward model to generate weighting functions (radiance partial derivatives) with respect to the aerosol and marine properties to be retrieved, and to other model parameters which are sources of error in the retrievals. In this paper, we report on the linearization of the CAO-DISORT model. This linearization provides a complete analytic differentiation of the coupled-media radiative transfer theory, and it allows the model to generate analytic weighting functions for any atmospheric or marine parameter. For high solar zenith angles, we give an implementation of the pseudo-spherical (P-S) approach to solar beam attenuation in the atmosphere in the linearized model. We summarize a number of performance enhancements such as the use of an exact single-scattering calculation to improve accuracy. We derive inherent optical property inputs for the linearized CAO-DISORT code for a simple 2-parameter bio-optical model for the marine environment coupled to a 2-parameter bimodal atmospheric aerosol medium
Foo, Mathias; Kim, Jongrae; Sawlekar, Rucha; Bates, Declan G
2017-04-06
Feedback control is widely used in chemical engineering to improve the performance and robustness of chemical processes. Feedback controllers require a 'subtractor' that is able to compute the error between the process output and the reference signal. In the case of embedded biomolecular control circuits, subtractors designed using standard chemical reaction network theory can only realise one-sided subtraction, rendering standard controller design approaches inadequate. Here, we show how a biomolecular controller that allows tracking of required changes in the outputs of enzymatic reaction processes can be designed and implemented within the framework of chemical reaction network theory. The controller architecture employs an inversion-based feedforward controller that compensates for the limitations of the one-sided subtractor that generates the error signals for a feedback controller. The proposed approach requires significantly fewer chemical reactions to implement than alternative designs, and should have wide applicability throughout the fields of synthetic biology and biological engineering.
Namatame, Hirofumi; Taniguchi, Masaki
1994-01-01
Photoelectron spectroscopy is regarded as a most powerful technique because it can probe the occupied electronic states almost completely. Inverse photoelectron spectroscopy, in turn, measures the unoccupied electronic states by using the inverse process of photoelectron spectroscopy; in principle, experiments similar to photoelectron spectroscopy become feasible. The experimental technology for inverse photoelectron spectroscopy has been developed energetically by many research groups so far. At present, work is being carried out on improving the resolution of inverse photoelectron spectroscopy and on developing inverse photoelectron spectrometers with tunable photon energy, but no inverse photoelectron spectrometer for the vacuum ultraviolet region is commercially available. In this report, the principle of inverse photoelectron spectroscopy and the present state of the spectrometers are described, and the direction of future development is explored. As experimental equipment, electron guns, photon detectors and so on are explained. As examples of experiments, the inverse photoelectron spectroscopy of semimagnetic semiconductors and resonance inverse photoelectron spectroscopy are reported. (K.I.)
Neural Generalized Predictive Control of a non-linear Process
Sørensen, Paul Haase; Nørgård, Peter Magnus; Ravn, Ole
1998-01-01
The use of neural networks in non-linear control is made difficult by the fact that stability and robustness are not guaranteed and that the implementation in real time is non-trivial. In this paper we introduce a predictive controller based on a neural network model which has promising stability qualities. The controller is a non-linear version of the well-known generalized predictive controller developed in linear control theory. It involves the minimization of a cost function which in the present case has to be done numerically. We therefore develop the necessary numerical algorithms in substantial detail and discuss the implementation difficulties. The neural generalized predictive controller is tested on a pneumatic servo system.
Estimation of G-renewal process parameters as an ill-posed inverse problem
Krivtsov, V.; Yevkin, O.
2013-01-01
Statistical estimation of G-renewal process parameters is an important estimation problem, which has been considered by many authors. We view this problem from the standpoint of a mathematically ill-posed, inverse problem (the solution is not unique and/or is sensitive to statistical error) and propose a regularization approach specifically suited to the G-renewal process. Regardless of the estimation method, the respective objective function usually involves parameters of the underlying life-time distribution and simultaneously the restoration parameter. In this paper, we propose to regularize the problem by decoupling the estimation of the aforementioned parameters. Using a simulation study, we show that the resulting estimation/extrapolation accuracy of the proposed method is considerably higher than that of the existing methods
Lehikoinen, A.; Huttunen, J.M.J.; Finsterle, S.; Kowalsky, M.B.; Kaipio, J.P.
2009-08-01
We propose an approach for imaging the dynamics of complex hydrological processes. The evolution of electrically conductive fluids in porous media is imaged using time-lapse electrical resistance tomography. The related dynamic inversion problem is solved using Bayesian filtering techniques, that is, it is formulated as a sequential state estimation problem in which the target is an evolving posterior probability density of the system state. The dynamical inversion framework is based on the state space representation of the system, which involves the construction of a stochastic evolution model and an observation model. The observation model used in this paper consists of the complete electrode model for ERT, with Archie's law relating saturations to electrical conductivity. The evolution model is an approximate model for simulating flow through partially saturated porous media. Unavoidable modeling and approximation errors in both the observation and evolution models are considered by computing approximate statistics for these errors. These models are then included in the construction of the posterior probability density of the estimated system state. This approximation error method allows the use of approximate - and therefore computationally efficient - observation and evolution models in the Bayesian filtering. We consider a synthetic example and show that the incorporation of an explicit model for the model uncertainties in the state space representation can yield better estimates than a frame-by-frame imaging approach.
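Archie's law, used here to relate saturations to electrical conductivity in the ERT observation model, can be written as a one-line function; the cementation and saturation exponents below are typical textbook values, not those of the paper.

```python
import numpy as np

def archie_conductivity(S_w, phi, sigma_w, m=2.0, n=2.0):
    """Archie's law: bulk electrical conductivity of a partially saturated
    porous medium, sigma = sigma_w * phi**m * S_w**n, where sigma_w is the
    pore-fluid conductivity, phi the porosity, S_w the water saturation,
    m the cementation exponent and n the saturation exponent."""
    return sigma_w * phi**m * S_w**n

# Saturation field from one step of the flow (evolution) model, mapped to
# the conductivity field needed by the ERT forward (observation) model:
S_w = np.linspace(0.2, 1.0, 5)
sigma = archie_conductivity(S_w, phi=0.3, sigma_w=0.05)
```

In the state-space formulation, this mapping sits inside the observation model: the filter's state (saturation) is converted to conductivity before the complete electrode model predicts the ERT measurements.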
A linear time layout algorithm for business process models
Gschwind, T.; Pinggera, J.; Zugal, S.; Reijers, H.A.; Weber, B.
2014-01-01
The layout of a business process model influences how easily it can beunderstood. Existing layout features in process modeling tools often rely on graph representations, but do not take the specific properties of business process models into account. In this paper, we propose an algorithm that is
P.D.Gujrati
2002-01-01
Theoretical evidence is presented in this review that architectural aspects can play an important role, not only in the bulk but also in confined geometries, by using our recursive lattice theory, which is equally applicable to fixed architectures (regularly branched polymers, stars, dendrimers, brushes, linear chains, etc.) and variable architectures, i.e. randomly branched structures. Linear chains possess an inversion symmetry (IS) of a magnetic system (see text), whose presence or absence determines the bulk phase diagram. Fixed architectures possess the IS and yield a standard bulk phase diagram in which there exists a theta point at which two critical lines C and C' meet and the second virial coefficient A2 vanishes. The critical line C appears only for infinitely large polymers, and an order parameter is identified for this criticality. The critical line C' exists for polymers of all sizes and represents phase separation criticality. Variable architectures, which do not possess the IS, give rise to a topologically different phase diagram with no theta point in general. In confined regions next to surfaces, it is not the IS but branching and monodispersity that become important. We show that branching plays no important role for polydisperse systems, but becomes important for monodisperse systems. Stars and linear chains behave differently near a surface.
Hernandez, J.A.; Siqueiros, J.; Juarez-Romero, D. [Centro de Investigacion en Ingenieria y Ciencias Aplicadas, Universidad Autonoma del Estado de Morelos (UAEM), Av. Universidad No. 1001, Col. Chamilpa, Cuernavaca, Morelos C.P. 62209 (Mexico); Bassam, A. [Posgrado en Ingenieria y Ciencias Aplicadas, Universidad Autonoma del Estado de Morelos (UAEM), Av. Universidad No. 1001, Col. Chamilpa, Cuernavaca, Morelos C.P. 62209 (Mexico)
2009-04-15
Artificial neural network inverse (ANNi) is applied to calculate the optimal operating conditions on the coefficient of performance (COP) for a water purification process integrated with an absorption heat transformer with energy recycling. An artificial neural network (ANN) model is developed to predict the COP, which was increased with energy recycling. This ANN model takes into account the input and output temperatures of each of the four components (absorber, generator, evaporator, and condenser), as well as two pressures and the LiBr + H2O concentrations. For the network, a feedforward architecture with one hidden layer, the Levenberg-Marquardt learning algorithm, a hyperbolic tangent sigmoid transfer function and a linear transfer function were used. The best fit to the training data set was obtained with three neurons in the hidden layer. On the validation data set, simulations and experimental test data were in good agreement (R > 0.99). This ANN model can be used to predict the COP when the input variables (operating conditions) are well known. However, to control the COP of the system, we developed a strategy to estimate the optimal input variables when a COP is required from the ANNi. An optimization method (the Nelder-Mead simplex method) is used to fit the unknown input variable resulting from the ANNi. This methodology can be applied to control the performance of the system on-line. (author)
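The ANNi step, inverting a trained forward model to find the operating condition that yields a required COP via the Nelder-Mead simplex, can be sketched as follows. The `cop_model` function is a smooth hypothetical stand-in for the trained ANN, since the real network and its data are not available; only the inversion pattern is illustrated.

```python
import numpy as np
from scipy.optimize import minimize

def cop_model(T_gen):
    """Hypothetical stand-in for the trained ANN forward model, mapping one
    operating condition (a generator temperature, say) to a predicted COP.
    The real model maps all component temperatures, pressures and the LiBr
    concentration to the COP."""
    return 0.3 + 0.002 * T_gen - 5e-6 * (T_gen - 80.0) ** 2

def invert_for_input(cop_target, x0=70.0):
    """ANNi step: search for the operating condition whose predicted COP
    matches the target, using Nelder-Mead on the squared mismatch."""
    res = minimize(lambda x: (cop_model(x[0]) - cop_target) ** 2,
                   x0=[x0], method='Nelder-Mead',
                   options={'xatol': 1e-6, 'fatol': 1e-12})
    return res.x[0]

T = invert_for_input(cop_target=0.45)   # operating condition giving COP ~ 0.45
```

Because the forward model is cheap to evaluate, a derivative-free simplex search like this can run fast enough for on-line control, which is the rationale given in the abstract.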
The tropopause inversion layer in baroclinic life-cycle experiments: the role of diabatic processes
D. Kunkel
2016-01-01
Recent studies on the formation of a quasi-permanent layer of enhanced static stability above the thermal tropopause revealed the contributions of dynamical and radiative processes. Dry dynamics leads to the evolution of a tropopause inversion layer (TIL) which is, however, too weak compared to observations, and thus diabatic contributions are required. In this study we aim to assess the importance of diabatic processes for the understanding of TIL formation at midlatitudes. The non-hydrostatic model COSMO (COnsortium for Small-scale MOdelling) is applied in an idealized midlatitude channel configuration to simulate baroclinic life cycles. The effect of individual diabatic processes related to humidity, radiation, and turbulence is studied first, to estimate the contribution of each of these processes to TIL formation in addition to dry dynamics. In a second step these processes are included stepwise in the model to increase the complexity and, finally, to estimate the relative importance of each process. The results suggest that including turbulence leads to a weaker TIL than in a dry reference simulation. In contrast, the TIL evolves more strongly when radiation is included, but the temporal evolution is still comparable to the reference. Using various cloud schemes in the model shows that latent heat release and the consecutive increase in vertical motion foster an earlier and stronger appearance of the TIL than in all other life cycles. Furthermore, updrafts moisten the upper troposphere and as such increase the radiative effect of water vapor. This process becomes particularly relevant for maintaining the TIL during later stages of the life cycles. Increased convergence of the vertical wind induced by updrafts and by propagating inertia-gravity waves, which potentially dissipate, further contributes to the enhanced stability of the lower stratosphere. Finally, the radiative feedback of ice clouds reaching up to the tropopause is identified to
Saloranta, Tuomo M; Andersen, Tom; Naes, Kristoffer
2006-01-01
Rate constant bioaccumulation models are applied to simulate the flow of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the coastal marine food web of Frierfjorden, a contaminated fjord in southern Norway. We apply two different ways to parameterize the rate constants in the model, together with global sensitivity analysis of the models using the Extended Fourier Amplitude Sensitivity Test (Extended FAST) method and results from general linear system theory, in order to obtain a more thorough insight into the system's behavior and into the flow pathways of the PCDD/Fs. We calibrate our models against observed body concentrations of PCDD/Fs in the food web of Frierfjorden. Differences between the predictions from the two models (using the same forcing and parameter values) are of the same magnitude as their individual deviations from observations, and the models can be said to perform about equally well in our case. Sensitivity analysis indicates that the success or failure of the models in predicting the PCDD/F concentrations in the food web organisms highly depends on the adequate estimation of the truly dissolved concentrations in water and sediment pore water. We discuss the pros and cons of such models in understanding and estimating the present and future concentrations and bioaccumulation of persistent organic pollutants in aquatic food webs.
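The rate-constant idea described above can be sketched minimally as a single compartment with uptake proportional to the truly dissolved water concentration and first-order elimination. All parameter values below are illustrative, not those of the Frierfjorden PCDD/F models:

```python
# Minimal sketch of a one-compartment rate-constant bioaccumulation model;
# parameter values are made up for illustration.

def simulate_body_burden(c_water, k_uptake, k_elim, t_end, dt=0.01):
    """Forward-Euler integration of dC/dt = k_uptake*c_water - k_elim*C."""
    c = 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        c += dt * (k_uptake * c_water - k_elim * c)
    return c

# At steady state C* = (k_uptake/k_elim) * c_water, i.e. a bioconcentration
# factor times the dissolved concentration, which is why the abstract
# stresses adequate estimation of the truly dissolved concentrations.
c_star = simulate_body_burden(c_water=2.0, k_uptake=0.5, k_elim=0.1, t_end=200.0)
```

With these illustrative numbers the simulated body burden converges to the analytic steady state (0.5/0.1) × 2.0 = 10.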
R. Barbiero
2007-05-01
Full Text Available Model Output Statistics (MOS) refers to a method of post-processing the direct outputs of numerical weather prediction (NWP) models in order to reduce the biases introduced by a coarse horizontal resolution. This technique is especially useful in orographically complex regions, where large differences can be found between the NWP elevation model and the true orography. This study carries out a comparison of linear and non-linear MOS methods, aimed at the prediction of minimum temperatures in a fruit-growing region of the Italian Alps, based on the output of two different NWPs (ECMWF T511–L60 and LAMI-3). Temperature, of course, is a particularly important NWP output; among other roles it drives the local frost forecast, which is of great interest to agriculture. The mechanisms of cold air drainage, a distinctive aspect of mountain environments, are often unsatisfactorily captured by global circulation models. The simplest post-processing technique applied in this work was a correction for the mean bias, assessed at individual model grid points. We also implemented a multivariate linear regression on the output at the grid points surrounding the target area, and two non-linear models based on machine learning techniques: Neural Networks and Random Forest (RF). We compare the performance of all these techniques on four different NWP data sets. Downscaling the temperatures clearly improved the temperature forecasts with respect to the raw NWP output, and also with respect to the basic mean bias correction. Multivariate methods generally yielded better results, but the advantage of using non-linear algorithms was small if not negligible. RF, the best-performing method, was implemented on ECMWF prognostic output at 06:00 UTC over the 9 grid points surrounding the target area. Mean absolute errors in the prediction of 2 m temperature at 06:00 UTC were approximately 1.2°C, close to the natural variability inside the area itself.
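The two simplest MOS variants described above can be sketched on synthetic data: (a) a mean-bias correction at one grid point and (b) a multivariate linear regression on the 9 surrounding grid points. The data below are entirely made up; only the post-processing structure mirrors the study:

```python
import numpy as np

# Hedged MOS sketch on synthetic data (not the study's data).
rng = np.random.default_rng(0)
nwp = rng.normal(size=(200, 9))               # raw NWP output at 9 grid points
truth = nwp @ rng.normal(size=9) + 1.5        # "true" station temperature
obs = truth + 0.1 * rng.normal(size=200)      # observed minimum temperatures

# (a) mean-bias correction against the nearest grid point
bias = np.mean(nwp[:, 0] - obs)
t_corrected = nwp[:, 0] - bias

# (b) multivariate linear regression on all 9 surrounding points + intercept
A = np.hstack([nwp, np.ones((200, 1))])
coef, *_ = np.linalg.lstsq(A, obs, rcond=None)
mae = np.mean(np.abs(A @ coef - obs))         # in-sample mean absolute error
```

The regression variant uses all surrounding grid points jointly, which is why it can outperform the single-point bias correction when the nearest grid point alone is a poor proxy for the station.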
On-line validation of linear process models using generalized likelihood ratios
Tylee, J.L.
1981-12-01
A real-time method for testing the validity of linear models of nonlinear processes is described and evaluated. Using generalized likelihood ratios, the model dynamics are continually monitored to see if the process has moved far enough away from the nominal linear model operating point to justify generation of a new linear model. The method is demonstrated using a seventh-order model of a natural circulation steam generator.
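A hedged, simplified version of the GLR monitoring idea: watch the innovations of a nominal linear model for a mean shift of unknown size in white Gaussian noise of known variance. The statistic below is the maximized log-likelihood ratio for that standard hypothesis pair; a large value signals that the process has left the nominal operating point and a new linear model is warranted. All names and values are illustrative:

```python
import numpy as np

def glr_statistic(innovations, sigma):
    """GLR for a mean shift of unknown size in N(0, sigma^2) innovations."""
    n = len(innovations)
    shift_mle = float(np.mean(innovations))      # MLE of the unknown shift
    return n * shift_mle**2 / (2.0 * sigma**2)   # maximized log-likelihood ratio

rng = np.random.default_rng(1)
g_nominal = glr_statistic(rng.normal(0.0, 1.0, size=500), sigma=1.0)  # model valid
g_drifted = glr_statistic(rng.normal(0.5, 1.0, size=500), sigma=1.0)  # point moved
```

Comparing the statistic to a threshold chosen for a desired false-alarm rate gives the real-time validity test.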
Diagnostic checking in linear processes with infinite variance
Krämer, Walter; Runde, Ralf
1998-01-01
We consider empirical autocorrelations of residuals from infinite variance autoregressive processes. Unlike the finite-variance case, it emerges that the limiting distribution, after suitable normalization, is not always more concentrated around zero when residuals rather than true innovations are employed.
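The finite-variance counterpart of the setting above can be illustrated directly: fit an AR(1) by least squares and compute the empirical lag-1 autocorrelation of the residuals, which should be near zero when the model is adequate. (The infinite-variance case discussed in the abstract would replace the Gaussian innovations with stable ones, which changes the limiting behaviour.) Data and parameters below are illustrative:

```python
import numpy as np

# Simulate an AR(1) process with Gaussian innovations.
rng = np.random.default_rng(2)
n, phi = 2000, 0.6
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()

# Least-squares estimate of phi and the resulting residuals.
phi_hat = float(np.sum(x[1:] * x[:-1]) / np.sum(x[:-1] ** 2))
resid = x[1:] - phi_hat * x[:-1]

def empirical_acf(e, lag):
    """Empirical autocorrelation at the given lag."""
    e = e - e.mean()
    return float(np.sum(e[lag:] * e[:-lag]) / np.sum(e * e))

rho1 = empirical_acf(resid, 1)
```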
Young, A.T.; Moreno, D.K.; Marsters, R.G.
1981-01-01
Homogeneous, low-density plastic foams for ICF targets have been prepared by thermally induced phase inversion processes. Uniform, open-cell foams have been obtained by the rapid freezing of water solutions of modified cellulose polymers, with densities in the range of 5 mg/cm³ to 0.7 mg/cm³ and respective average cell sizes of 2 to 40 micrometers. In addition, low-density, microcellular foams have been prepared from the hydrocarbon polymer poly(4-methyl-1-pentene) via a similar phase inversion process using homogeneous solutions in organic solvents. These foams have densities from 2 to 5 mg/cm³ and average cell sizes of 20 micrometers. The physical-chemical aspects of the thermally induced phase inversion process are presented
Supply Chain Management: from Linear Interactions to Networked Processes
Doina FOTACHE
2006-01-01
Full Text Available Supply Chain Management is a distinctive product, with a tremendous impact on the software applications market. SCM applications are back-end solutions intended to link suppliers, manufacturers, distributors and resellers in a production and distribution network, which allows the enterprise to track and consolidate the flows of materials and data through the process of manufacturing and distribution of goods/services. The advent of the Web as a major means of conducting business transactions and business-to-business communications, coupled with evolving web-based supply chain management (SCM) technology, has resulted in a transition period from "linear" supply chain models to "networked" supply chain models. The technologies to enable dynamic process changes and real-time interactions between extended supply chain partners are emerging and being deployed at an accelerated pace.
Inverse Source Data-Processing Strategies for Radio-Frequency Localization in Indoor Environments
Gianluca Gennarelli
2017-10-01
Full Text Available Indoor positioning of mobile devices plays a key role in many aspects of our daily life. These include real-time people tracking and monitoring, activity recognition, emergency detection, navigation, and numerous location-based services. Although many wireless technologies and data-processing algorithms have been developed in recent years, indoor positioning is still the subject of intensive research. This paper deals with active radio-frequency (RF) source localization in indoor scenarios. The localization task is carried out at the physical layer thanks to receiving sensor arrays which are deployed on the border of the surveillance region to record the signal emitted by the source. The localization problem is formulated as an imaging one by taking advantage of the inverse source approach. Different measurement configurations and data-processing/fusion strategies are examined to investigate their effectiveness in terms of localization accuracy under both line-of-sight (LOS) and non-line-of-sight (NLOS) conditions. Numerical results based on full-wave synthetic data are reported to support the analysis.
Bonemei, Rob; Costantino, Andrea I; Battistel, Ilenia; Rivolta, Davide
2018-05-01
Faces and bodies are more difficult to perceive when presented inverted than when presented upright (i.e., the stimulus inversion effect), an effect that has been attributed to the disruption of holistic processing. The features that can trigger holistic processing in faces and bodies, however, still remain elusive. In this study, using a sequential matching task, we tested whether stimulus inversion affects various categories of visual stimuli: faces, faceless heads, faceless heads in body context, headless bodies naked, whole bodies naked, headless bodies clothed, and whole bodies clothed. Both accuracy and inversion efficiency score results show inversion effects for all categories except clothed bodies (with and without heads). In addition, the magnitude of the inversion effect for faces, naked bodies, and faceless heads was similar. Our findings demonstrate that the perception of faces, faceless heads, and naked bodies relies on holistic processing. Clothed bodies (with and without heads), on the other hand, may trigger clothes-sensitive rather than body-sensitive perceptual mechanisms. © 2017 The British Psychological Society.
Coupled Inverse Fluidized Bed Bioreactor with Advanced Oxidation Processes for Treatment of Vinasse
Karla E. Campos Díaz
2017-11-01
Full Text Available Vinasse is the wastewater generated from ethanol distillation; it is characterized by high levels of organic and inorganic matter, high exit temperature, dissolved salts and low pH. In this work the treatment of undiluted vinasse was achieved using sequentially-coupled biological and advanced oxidation processes. The initial characterization of the vinasse showed a high Chemical Oxygen Demand (COD; 32 kg m⁻³), high Total Organic Carbon (TOC; 24.5 kg m⁻³) and low pH (2.5). The first stage of the biological treatment of the vinasse was carried out in an inverse fluidized bed bioreactor with a microbial consortium using polypropylene as support material. The fluidized bed bioreactor was kept at a constant temperature (37 ± 1 °C) and pH (6.0 ± 0.5) for 90 days. After the biological process, the vinasse was continuously fed to the photoreactor using a peristaltic pump. 2.8 × 10⁻³ kg of FeSO₄·7H₂O were added to the vinasse and allowed to dissolve in the dark for five minutes; after this time, 15.3 m³ of hydrogen peroxide (H₂O₂, 30% w/w) were added, and subsequently, the UV radiation was allowed to reach the photoreactor to treat the effluent for 3600 s at pH = 3. Results showed that the maximum organic matter removal achieved with the biological process, measured as COD, was 80% after 90 days. Additionally, 88% COD removal was achieved using the photo-assisted Fenton oxidation. After the sequentially-coupled processes the COD reached a value as low as 0.194 kg m⁻³, corresponding to over 99% COD removal as well as complete TOC removal.
A quantum analogy for the linear thermodynamics of irreversible processes
Ibanez-Mengual, J.A.; Tejerina-Garcia, A.F.
1981-01-01
In this paper, a model for the transport through a liquid junction of two solutions of the same components, based on quantum-mechanical considerations, is established. A small energy difference, compared with the molecules' energy, between the molecules on the two sides of the junction is assumed to exist. The liquid junction is treated as a potential barrier, and the material flow is obtained from the transmission coefficient of the barrier when the energy difference is caused by a temperature gradient, a concentration gradient, or both gradients acting together. In all cases, equations formally identical to those of the thermodynamics of irreversible processes are obtained. In the last case, the heat flow is also determined. (author)
Guida, M.; Pulcini, G.
2013-01-01
This paper proposes the family of non-stationary inverse Gamma processes for modeling state-dependent deterioration processes with nonlinear trend. The proposed family of processes, which is based on the assumption that the "inverse" time process is Gamma, is mathematically more tractable than previously proposed state-dependent processes, because, unlike the previous models, the inverse Gamma process is a time-continuous and state-continuous model and does not require discretization of time and state. The conditional distribution of the deterioration growth over a generic time interval, the conditional distribution of the residual life and the residual reliability of the unit, given the current state, are provided. Point and interval estimation of the parameters which index the proposed process, as well as of several quantities of interest, are also discussed. Finally, the proposed model is applied to the wear process of the liners of some Diesel engines, which was previously analyzed and proved to be a purely state-dependent process. The comparison of the inferential results obtained under the competitor models shows the ability of the inverse Gamma process to adequately model the observed state-dependent wear process.
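The building block of the model above can be sketched by simulating a plain Gamma process: increments over disjoint intervals are independent Gamma-distributed variables, so sample paths are non-decreasing, which is the natural monotonicity requirement of a deterioration process. Parameters below are illustrative; the paper's model additionally makes the "inverse" time process Gamma to obtain a state-dependent, nonlinear trend:

```python
import numpy as np

rng = np.random.default_rng(3)

def gamma_process_path(t_grid, shape_rate, scale):
    """Cumulative path of a Gamma process sampled on t_grid."""
    # Independent Gamma increments with shape proportional to interval length.
    increments = rng.gamma(shape_rate * np.diff(t_grid), scale)
    return np.concatenate([[0.0], np.cumsum(increments)])

t = np.linspace(0.0, 10.0, 101)
wear = gamma_process_path(t, shape_rate=2.0, scale=0.5)  # mean wear equals t here
```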
A Single Software For Processing, Inversion, And Presentation Of Aem Data Of Different Systems
Auken, Esben; Christiansen, Anders Vest; Viezzoli, Andrea
2009-01-01
modeling and Spatially Constrained Inversion (SCI) for quasi-3-D inversion. The Workbench implements a user-friendly interface to these algorithms, enabling non-geophysicists to carry out inversion of complicated airborne data sets without having in-depth knowledge of how the algorithm actually works. Just… to manage data and settings. The benefits of using a database compared to flat ASCII column files should not be underestimated. Firstly, user-handled input/output is nearly eliminated, thus minimizing the chance of human errors. Secondly, data are stored in a well-described and documented format which…
Mei, Gang; Xu, Liangliang; Xu, Nengxiong
2017-09-01
This paper focuses on designing and implementing parallel adaptive inverse distance weighting (AIDW) interpolation algorithms by using the graphics processing unit (GPU). The AIDW is an improved version of the standard IDW, which can adaptively determine the power parameter according to the data points' spatial distribution pattern and achieve more accurate predictions than those predicted by IDW. In this paper, we first present two versions of the GPU-accelerated AIDW, i.e. the naive version without profiting from the shared memory and the tiled version taking advantage of the shared memory. We also implement the naive version and the tiled version using two data layouts, structure of arrays and array of aligned structures, on both single and double precision. We then evaluate the performance of parallel AIDW by comparing it with its corresponding serial algorithm on three different machines equipped with the GPUs GT730M, M5000 and K40c. The experimental results indicate that: (i) there is no significant difference in the computational efficiency when different data layouts are employed; (ii) the tiled version is always slightly faster than the naive version; and (iii) on single precision the achieved speed-up can be up to 763 (on the GPU M5000), while on double precision the obtained highest speed-up is 197 (on the GPU K40c). To benefit the community, all source code and testing data related to the presented parallel AIDW algorithm are publicly available.
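A serial reference sketch of plain IDW, the baseline the paper improves on, is easy to give. The AIDW of the paper additionally adapts the power parameter to the local spatial distribution of the data points, and the GPU versions parallelize the loop over prediction points; both refinements are omitted here for brevity, and all data are illustrative:

```python
import numpy as np

def idw(known_xy, known_z, query_xy, power=2.0, eps=1e-12):
    """Standard inverse distance weighting interpolation (serial sketch)."""
    preds = []
    for q in query_xy:
        d = np.linalg.norm(known_xy - q, axis=1)
        if np.min(d) < eps:                  # query coincides with a data point
            preds.append(known_z[np.argmin(d)])
            continue
        w = d ** (-power)                    # inverse distance weights
        preds.append(np.sum(w * known_z) / np.sum(w))
    return np.array(preds)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
vals = np.array([1.0, 2.0, 3.0])
z = idw(pts, vals, np.array([[0.0, 0.0], [0.5, 0.5]]))
```

The second query point is equidistant from all three data points, so the prediction there is simply their mean.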
Linear Processing Design of Amplify-and-Forward Relays for Maximizing the System Throughput
Qiang Wang
2018-01-01
Full Text Available In this paper, firstly, we study the linear processing of amplify-and-forward (AF) relays for the multiple relays, multiple users scenario. We regard all relays as one special "relay", and then the subcarrier pairing, relay selection and channel assignment can be seen as a linear processing of the special "relay". Under fixed power allocation, the linear processing of AF relays can be regarded as a permutation matrix. Employing the partitioned matrix, we propose an optimal linear processing design for AF relays to find the optimal permutation matrix based on the sorting of the received SNR over the subcarriers from BS to relays and from relays to users, respectively. Then, we prove the optimality of the proposed linear processing scheme. Through the proposed linear processing scheme, we can obtain the optimal subcarrier pairing, relay selection and channel assignment under given power allocation in polynomial time. Finally, we propose an iterative algorithm based on the proposed linear processing scheme and the Lagrange dual domain method to jointly solve the optimization problem involving subcarrier pairing, relay selection, channel assignment and power allocation. Simulation results illustrate that the proposed algorithm achieves excellent performance.
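The sorting idea above can be illustrated with made-up SNR values: pair the k-th strongest BS-to-relay subcarrier with the k-th strongest relay-to-user subcarrier. That assignment is exactly a permutation matrix applied to the second-hop ordering, and for the standard two-hop AF end-to-end SNR it is never worse than an arbitrary (here: identity) pairing:

```python
import numpy as np

rng = np.random.default_rng(4)
snr1 = rng.exponential(size=8)          # illustrative BS -> relay subcarrier SNRs
snr2 = rng.exponential(size=8)          # illustrative relay -> user subcarrier SNRs

def e2e_snr(a, b):
    """Standard two-hop amplify-and-forward end-to-end SNR."""
    return a * b / (a + b + 1.0)

# Identity pairing vs. sorted (strongest-with-strongest) pairing.
identity_total = float(np.sum(e2e_snr(snr1, snr2)))
sorted_total = float(np.sum(e2e_snr(np.sort(snr1), np.sort(snr2))))
```

The sorted pairing is optimal here because the two-hop SNR is supermodular in its two arguments, so a rearrangement argument favors matching sorted sequences.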
Effects of connection of electrical and mechanical potentials in inverse osmosis processes
Cortes, Farid; Chejne, Farid; Chejne, David; Velez, Fredy; Londono, Carlos
2009-01-01
A theoretical dissertation and experimental assays of the irreversible phenomena applied to electro-kinetics and inverse osmosis are presented. Experimental assays were made on simple equipment to evidence the occurrence of connected irreversible phenomena between electric current flow and global mass flow. The coupling of these two phenomena allowed us to draw conclusions about the possibility of reducing the operating costs of inverse osmosis equipment by increasing the saline solution flow by between 12% and 20%.
Full Waveform Inversion Using Oriented Time Migration Method
Zhang, Zhendong
2016-01-01
Full waveform inversion (FWI) for reflection events is limited by its linearized update requirements, given by a process equivalent to migration. Unless the background velocity model is reasonably accurate, the resulting gradient can have
2013-01-01
This book consists of twenty-seven chapters, which can be divided into three large categories: articles focused on the mathematical treatment of non-linear problems, including the methodologies, algorithms and properties of analytical and numerical solutions to particular non-linear problems; theoretical and computational studies dedicated to the physics and chemistry of non-linear micro- and nano-scale systems, including molecular clusters, nano-particles and nano-composites; and papers focused on non-linear processes in medico-biological systems, including mathematical models of ferments, amino acids, blood fluids and polynucleic chains.
O. Tichý
2016-11-01
Full Text Available Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX), where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
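A toy version of the linear inverse problem above, y = M x + noise with M playing the role of the SRS matrix, can be solved by Tikhonov-regularized least squares. The regularization weight `lam` below is precisely the kind of manually set tuning parameter that the variational Bayes formulation instead estimates from the measurements. All data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.normal(size=(40, 20))                    # stand-in for the SRS matrix
x_true = np.maximum(rng.normal(size=20), 0.0)    # releases are non-negative
y = M @ x_true + 0.01 * rng.normal(size=40)      # noisy observations

lam = 0.1                                        # hand-tuned regularization weight
# Tikhonov-regularized least squares: (M'M + lam*I) x = M'y
x_hat = np.linalg.solve(M.T @ M + lam * np.eye(20), M.T @ y)
rel_err = float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```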
Azimuthal asymmetry in processes of nonlinear QED for linearly polarized photon
Bajer, V.N.; Mil'shtejn, A.I.
1994-01-01
Cross sections of nonlinear QED processes (photon-photon scattering, photon splitting in a Coulomb field, and Delbrueck scattering) are considered for a linearly polarized initial photon. The cross sections have sizeable azimuthal asymmetry. 15 refs.; 3 figs.
Inversion factor in the comparative analysis of dynamical processes in radioecology
Zarubin, O.; Zarubina, N. [Institute for Nuclear Research of National Academy of Science of Ukraine (Ukraine)]
2014-07-01
We have studied levels of specific activity of radionuclides in fish and fungi of the Kiev region of Ukraine from 1986 to 2013, including the 30-km Chernobyl Nuclear Power Plant (ChNPP) exclusion zone after the accident. The dynamics of the radionuclide specific activity was analyzed over this period for 10 species of freshwater fish of different trophic levels and 7 species of higher fungi. Repeated measurements of the specific activity of radionuclides in fish were carried out on the Kanevskoe reservoir and the cooling pond of the ChNPP, and in fungi on 6 testing areas situated within a range of 2 to 150 km from the ChNPP. The main attention was given to the accumulation of ¹³⁷Cs. We have established that the dynamics of specific activity of ¹³⁷Cs in different fish species of the same reservoir is not identical. The dynamics of specific activity of ¹³⁷Cs in various fungi species of the same testing area is also not identical, and the dynamics at the various dry-land and water testing areas also varies. The authors suggest an inversion factor to be used for comparison of the dynamics of specific activity of ¹³⁷Cs, which in biota is a nonlinear process: K_inv = A_0 / A_t, where A_0 stands for the specific activity of the radionuclide at time 0 and A_t for the specific activity of the radionuclide at time t. Therefore, K_inv reflects the ratio (inversion) of the initial specific activity of the radionuclide to its value as a function of time, where K_inv < 1 corresponds to an increase in the radionuclide's specific activity and K_inv > 1 to its decrease. For example, over 1987-1996 the K_inv of ¹³⁷Cs in the fish Rutilus rutilus was 0.57 in the Kanevskoe reservoir and 13.33 in the cooling pond of the ChNPP; for Blicca bjoerkna the values were 0.95 and 29.61, respectively. Over 1987-2011 the K_inv of ¹³⁷Cs for R. rutilus in the Kanevskoe reservoir
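The inversion factor defined above is a one-line computation; the values below are illustrative, not measured Cs-137 activities:

```python
# Direct transcription of the inversion factor K_inv = A_0 / A_t used above
# to compare the (nonlinear) dynamics of specific activity across species
# and sites. Input values here are illustrative.

def inversion_factor(a_start, a_now):
    """a_start: specific activity at time 0; a_now: specific activity at time t."""
    return a_start / a_now

k_halved = inversion_factor(100.0, 50.0)    # activity halved over the period
k_doubled = inversion_factor(100.0, 200.0)  # activity doubled over the period
```

So a value above 1 marks a decline in specific activity relative to the starting point, and a value below 1 marks growth.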
Chemically Patterned Inverse Opal Created by a Selective Photolysis Modification Process.
Tian, Tian; Gao, Ning; Gu, Chen; Li, Jian; Wang, Hui; Lan, Yue; Yin, Xianpeng; Li, Guangtao
2015-09-02
Anisotropic photonic crystal materials have long been pursued for their broad applications. A novel method for creating chemically patterned inverse opals is proposed here. The patterning technique is based on selective photolysis of a photolabile polymer together with postmodification on released amine groups. The patterning method allows regioselective modification within an inverse opal structure, taking advantage of selective chemical reaction. Moreover, combined with the unique signal self-reporting feature of the photonic crystal, the fabricated structure is capable of various applications, including gradient photonic bandgap and dynamic chemical patterns. The proposed method provides the ability to extend the structural and chemical complexity of the photonic crystal, as well as its potential applications.
Population inversion of two atoms under the phase decoherence in the multiphoton process
Zhang Dongxia; Sa Chuerfu; Mu Qier
2011-01-01
By means of quantum theory, the population inversion of two atoms in a system of two two-level atoms coupled to a binomial optical field is investigated in the presence of phase decoherence in the multiphoton Tavis-Cummings model. The influences of the phase decoherence coefficient, the parameter η of the binomial optical field, the maximum photon number and the number of the transitional photons on the properties of the population inversion of the two atoms are discussed. The results show that phase decoherence reduces the oscillation amplitude of the population inversion of the two atoms and destroys the atomic quantum characteristics. By changing the number of the transitional photons, the evolution cycle and intensity of the population inversion of the two atoms can be changed. The phenomena of collapse and revival disappear as the photon number increases. When the binomial optical state changes from a coherent state to a Fock state, the oscillation frequency of the atomic population reduces gradually and the phenomena of collapse and revival vanish gradually. (authors)
Liu, Yi; Zhang, He; Liu, Siwei; Lin, Fuchang
2018-05-01
The J-A (Jiles-Atherton) model is widely used to describe the magnetization characteristics of magnetic cores in a low-frequency alternating field. However, this model is deficient in the quantitative analysis of the eddy current loss and residual loss in a high-frequency magnetic field. Based on the decomposition of magnetization intensity, an inverse J-A model is established which uses magnetic flux density B as an input variable. Static and dynamic core losses under high frequency excitation are separated based on the inverse J-A model. Optimized parameters of the inverse J-A model are obtained based on particle swarm optimization. The platform for the pulsed magnetization characteristic test is designed and constructed. The hysteresis curves of ferrite and Fe-based nanocrystalline cores at high magnetization rates are measured. The simulated and measured hysteresis curves are presented and compared. It is found that the inverse J-A model can be used to describe the magnetization characteristics at high magnetization rates and to separate the static loss and dynamic loss accurately.
Petit, Jean-Pierre; D'Agostini, G.
2015-03-01
We reconsider the classical Schwarzschild solution in the context of a Janus cosmological model. We show that the central singularity can be eliminated through a simple coordinate change and that the subsequent transit from one fold to the other is accompanied by mass inversion. In such a scenario, matter swallowed by black holes could be ejected as invisible negative mass and dispersed in space.
Milutinović, A.; Lazarević, Z.; Jovalekić, Č.; Kuryliszyn-Kudelska, I.; Romčević, M.; Kostić, S.; Romčević, N.
2013-01-01
Highlights: • Nano powder of ZnFe₂O₄ prepared by a soft mechanochemical route after 18 h of milling. • Phase formation controlled by XRD, Raman spectroscopy and magnetic measurements. • Size, strain and cation inversion degree determined by Rietveld refinement. • We were able to estimate the degree of inversion at most 0.348 and 0.4. • Obtained extremely high values of saturation magnetization at T = 4.5 K. - Abstract: Two zinc ferrite nanoparticle materials were prepared by the same method, soft mechanochemical synthesis, but starting from different powder mixtures: (1) Zn(OH)₂/α-Fe₂O₃ and (2) Zn(OH)₂/Fe(OH)₃. In both cases a single-phase system was obtained after 18 h of milling. The progress of the synthesis was monitored by X-ray diffractometry (XRD), Raman spectroscopy, TEM and magnetic measurements. Analysis of the XRD patterns by Rietveld refinement allowed determination of the cation inversion degree for both obtained single-phase ZnFe₂O₄ samples. The sample obtained from mixture (1) has a cation inversion degree of 0.3482 and the sample obtained from mixture (2) of 0.400. Magnetization measurements confirmed that the degrees of inversion were well estimated. Comparison with published data shows that the method of synthesis used gives nano powder samples with extremely high values of saturation magnetization: 78.3 emu g⁻¹ for sample (1) and 91.5 emu g⁻¹ for sample (2) at T = 4.5 K.
Granita; Bahar, A.
2015-01-01
This paper discusses the linear birth and death with immigration and emigration (BIDE) process and its stochastic differential equation (SDE) model. The forward Kolmogorov equation of the continuous-time Markov chain (CTMC), with a central-difference approximation, was used to find the Fokker-Planck equation corresponding to a diffusion process having the stochastic differential equation of the BIDE process. The exact solution, mean and variance function of the BIDE process were found.
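The underlying jump process whose diffusion approximation the paper derives can be sketched with a Gillespie-style simulation. Rates below are illustrative: per capita birth rate `lam` and death rate `mu`, constant immigration rate `nu`, and per capita emigration rate `delta`:

```python
import random

def simulate_bide(n0, lam, mu, nu, delta, t_end, seed=0):
    """Gillespie simulation of a linear BIDE chain (illustrative rates)."""
    rng = random.Random(seed)
    n, t = n0, 0.0
    while True:
        up = lam * n + nu            # birth or immigration event rate
        down = (mu + delta) * n      # death or emigration event rate
        total = up + down
        if total == 0.0:
            return n
        dt = rng.expovariate(total)  # exponential waiting time to next event
        if t + dt > t_end:
            return n                 # stop at the horizon
        t += dt
        n += 1 if rng.random() < up / total else -1

n_final = simulate_bide(n0=10, lam=0.2, mu=0.3, nu=1.0, delta=0.1, t_end=50.0)
```

Because the down-rate vanishes at n = 0, the population can never go negative, matching the state space of the CTMC.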
Fast Algorithms for High-Order Sparse Linear Prediction with Applications to Speech Processing
Jensen, Tobias Lindstrøm; Giacobello, Daniele; van Waterschoot, Toon
2016-01-01
In speech processing applications, imposing sparsity constraints on high-order linear prediction coefficients and prediction residuals has proven successful in overcoming some of the limitations of conventional linear predictive modeling. However, this modeling scheme, named sparse linear prediction… problem with lower accuracy than in previous work. In the experimental analysis, we clearly show that a solution with lower accuracy can achieve approximately the same performance as a high-accuracy solution both objectively, in terms of prediction gain, as well as with perceptually relevant measures, when… evaluated in a speech reconstruction application…
Oh, Geok Lian; Brunskog, Jonas
2014-01-01
Techniques have been studied for the localization of an underground source with seismic interrogation signals. Much of the work has involved defining either a P-wave acoustic model or a dispersive surface wave model to the received signal and applying the time-delay processing technique and frequ...... that for field data, inversion for localization is most advantageous when the forward model completely describe all the elastic wave components as is the case of the FDTD 3D elastic model....
State Estimation for Linear Systems Driven Simultaneously by Wiener and Poisson Processes.
1978-12-01
The state estimation problem of linear stochastic systems driven simultaneously by Wiener and Poisson processes is considered, especially the case...where the incident intensities of the Poisson processes are low and the system is observed in an additive white Gaussian noise. The minimum mean squared
Frank, T D
2005-01-01
Stationary distributions of processes are derived that involve a time delay and are defined by a linear stochastic neutral delay differential equation. The distributions are Gaussian distributions. The variances of the Gaussian distributions are either monotonically increasing or decreasing functions of the time delays. The variances become infinite when fixed points of corresponding deterministic processes become unstable. (letter to the editor)
Strong practical stability and stabilization of uncertain discrete linear repetitive processes
Dabkowski, Pavel; Galkowski, K.; Bachelier, O.; Rogers, E.; Kummert, A.; Lam, J.
2013-01-01
Vol. 20, No. 2 (2013), pp. 220-233 ISSN 1070-5325 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10750506 Institutional support: RVO:67985556 Keywords: strong practical stability * stabilization * uncertain discrete linear repetitive processes * linear matrix inequality Subject RIV: BC - Control Systems Theory Impact factor: 1.424, year: 2013 http://onlinelibrary.wiley.com/doi/10.1002/nla.812/abstract
Generalized inverses theory and computations
Wang, Guorong; Qiao, Sanzheng
2018-01-01
This book begins with the fundamentals of the generalized inverses, then moves to more advanced topics. It presents a theoretical study of the generalization of Cramer's rule, determinant representations of the generalized inverses, reverse order law of the generalized inverses of a matrix product, structures of the generalized inverses of structured matrices, parallel computation of the generalized inverses, perturbation analysis of the generalized inverses, an algorithmic study of the computational methods for the full-rank factorization of a generalized inverse, generalized singular value decomposition, imbedding method, finite method, generalized inverses of polynomial matrices, and generalized inverses of linear operators. This book is intended for researchers, postdocs, and graduate students in the area of the generalized inverses with an undergraduate-level understanding of linear algebra.
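As a concrete example of one of the fundamentals covered, the Moore-Penrose inverse can be computed from the singular value decomposition. The sketch below mirrors the standard construction A⁺ = V Σ⁺ Uᵀ; it is an illustration of the textbook definition, not code from the book:

```python
import numpy as np

def pinv_svd(A, tol=1e-12):
    """Moore-Penrose pseudoinverse via the SVD: A = U S V^T  =>  A+ = V S+ U^T,
    where S+ inverts singular values above a relative tolerance and zeros the rest."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    cutoff = tol * (s.max() if s.size else 0.0)
    # invert only the numerically nonzero singular values
    s_inv = np.where(s > cutoff, 1.0 / np.where(s > 0, s, 1.0), 0.0)
    return Vt.T @ (s_inv[:, None] * U.T)
```

The result satisfies the four Penrose conditions (e.g. A A⁺ A = A), which is the defining property generalized throughout the book.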
Liu, Long; Liu, Wei
2018-04-01
A forward modeling and inversion algorithm is adopted to determine the water injection plan for an oilfield water injection network. The main idea of the algorithm is as follows: first, the oilfield water injection network is calculated inversely and the demand flow of the pumping station is obtained. Then, a forward modeling calculation is carried out to judge whether all water injection wells meet the injection allocation requirements. If all wells meet the requirements, the calculation stops; otherwise, the demanded injection allocation flow rate of the wells that do not meet the requirements is reduced by a certain step size, and the next iteration is started. The algorithm does not need to be embedded in the overall water injection network system algorithm, so it is easy to realize, and the iterative method is well suited to computer programming. Experimental results show that the algorithm is fast and accurate.
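A minimal stand-in for the iterative loop described above might look as follows. All names are hypothetical, and the single `capacity` bound is a toy placeholder for the paper's full forward network model:

```python
def allocate_injection(demands, capacity, step=0.05, max_iter=10000):
    """Toy sketch of the described loop: while the forward check fails,
    reduce the largest demanded flow by one step size and re-check."""
    flows = list(demands)
    for _ in range(max_iter):
        # forward check (placeholder): feasible if total demand fits capacity
        if sum(flows) <= capacity:
            return flows
        # reduce the demand of the well that is hardest to satisfy
        i = max(range(len(flows)), key=lambda k: flows[k])
        flows[i] = max(flows[i] - step, 0.0)
    return flows
```

In the real setting the feasibility test would be a hydraulic forward simulation of the network rather than a capacity sum.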
Scargle, Jeffrey D.
1990-01-01
While chaos arises only in nonlinear systems, standard linear time series models are nevertheless useful for analyzing data from chaotic processes. This paper introduces such a model, the chaotic moving average. This time-domain model is based on the theorem that any chaotic process can be represented as the convolution of a linear filter with an uncorrelated process called the chaotic innovation. A technique, minimum phase-volume deconvolution, is introduced to estimate the filter and innovation. The algorithm measures the quality of a model using the volume covered by the phase-portrait of the innovation process. Experiments on synthetic data demonstrate that the algorithm accurately recovers the parameters of simple chaotic processes. Though tailored for chaos, the algorithm can detect both chaos and randomness, distinguish them from each other, and separate them if both are present. It can also recover nonminimum-delay pulse shapes in non-Gaussian processes, both random and chaotic.
A linear dynamic model for rotor-spun composite yarn spinning process
Yang, R H; Wang, S Y
2008-01-01
A linear dynamic model is established for the stable rotor-spun composite yarn spinning process. Approximate oscillating frequencies in the vertical and horizontal directions are obtained. By suitable choice of certain processing parameters, the mixture construction after the convergent point can be optimally matched. The presented study is expected to provide a general pathway to understand the motion of the rotor-spun composite yarn spinning process.
Kaulakys, B.; Alaburda, M.; Ruseckas, J.
2016-05-01
A well-known fact in the financial markets is the so-called ‘inverse cubic law’ of the cumulative distributions of the long-range memory fluctuations of market indicators such as a number of events of trades, trading volume and the logarithmic price change. We propose the nonlinear stochastic differential equation (SDE) giving both the power-law behavior of the power spectral density and the long-range dependent inverse cubic law of the cumulative distribution. This is achieved using the suggestion that when the market evolves from calm to violent behavior there is a decrease of the delay time of multiplicative feedback of the system in comparison to the driving noise correlation time. This results in a transition from the Itô to the Stratonovich sense of the SDE and yields a long-range memory process.
MINIMUM ENTROPY DECONVOLUTION OF ONE-AND MULTI-DIMENSIONAL NON-GAUSSIAN LINEAR RANDOM PROCESSES
程乾生
1990-01-01
The minimum entropy deconvolution is considered as one of the methods for decomposing non-Gaussian linear processes. The concept of peakedness of a system response sequence is presented and its properties are studied. With the aid of the peakedness, the convergence theory of the minimum entropy deconvolution is established. The problem of the minimum entropy deconvolution of multi-dimensional non-Gaussian linear random processes is first investigated and the corresponding theory is given. In addition, the relation between the minimum entropy deconvolution and parameter method is discussed.
Single-machine common/slack due window assignment problems with linear decreasing processing times
Zhang, Xingong; Lin, Win-Chin; Wu, Wen-Hsiang; Wu, Chin-Chia
2017-08-01
This paper studies linear non-increasing processing times and the common/slack due window assignment problems on a single machine, where the actual processing time of a job is a linear non-increasing function of its starting time. The aim is to minimize the sum of the earliness cost, tardiness cost, due window location and due window size. Some optimality results are discussed for the common/slack due window assignment problems and two O(n log n) time algorithms are presented to solve the two problems. Finally, two examples are provided to illustrate the correctness of the corresponding algorithms.
Regularization and Bayesian methods for inverse problems in signal and image processing
Giovannelli , Jean-François
2015-01-01
The focus of this book is on "ill-posed inverse problems". These problems cannot be solved only on the basis of observed data. The building of solutions involves the recognition of other pieces of a priori information. These solutions are then specific to the pieces of information taken into account. Clarifying and taking these pieces of information into account is necessary for grasping the domain of validity and the field of application for the solutions built. For too long, the interest in these problems has remained very limited in the signal-image community. However, the community has si
Inverse optimal design of the radiant heating in materials processing and manufacturing
Fedorov, A. G.; Lee, K. H.; Viskanta, R.
1998-12-01
Combined convective, conductive, and radiative heat transfer is analyzed during heating of a continuously moving load in the industrial radiant oven. A transient, quasi-three-dimensional model of heat transfer between a continuous load of parts moving inside an oven on a conveyor belt at a constant speed and an array of radiant heaters/burners placed inside the furnace enclosure is developed. The model accounts for radiative exchange between the heaters and the load, heat conduction in the load, and convective heat transfer between the moving load and oven environment. The thermal model developed has been used to construct a general framework for an inverse optimal design of an industrial oven as an example. In particular, the procedure based on the Levenberg-Marquardt nonlinear least squares optimization algorithm has been developed to obtain the optimal temperatures of the heaters/burners that need to be specified to achieve a prescribed temperature distribution of the surface of a load. The results of calculations for several sample cases are reported to illustrate the capabilities of the procedure developed for the optimal inverse design of an industrial radiant oven.
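The Levenberg-Marquardt iteration at the core of such an inverse design can be sketched on a toy linear forward model. The matrix `A`, standing in for the radiative exchange between heaters and load, and the function names are illustrative assumptions, not the paper's model:

```python
import numpy as np

def levenberg_marquardt(residual, jac, x0, lam0=1e-3, tol=1e-10, max_iter=100):
    """Minimal Levenberg-Marquardt loop minimizing 0.5 * ||r(x)||^2."""
    x = np.asarray(x0, dtype=float)
    lam = lam0
    for _ in range(max_iter):
        r = residual(x)
        J = jac(x)
        g = J.T @ r
        H = J.T @ J
        # damped Gauss-Newton step
        step = np.linalg.solve(H + lam * np.eye(len(x)), -g)
        if np.linalg.norm(step) < tol:
            break
        if np.sum(residual(x + step) ** 2) < np.sum(r ** 2):
            x = x + step   # accept: move toward Gauss-Newton
            lam *= 0.5
        else:
            lam *= 2.0     # reject: move toward gradient descent
    return x
```

Here `x` plays the role of the heater settings and `residual` the mismatch between the predicted and prescribed load-surface temperatures.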
Joel Sereno
2010-01-01
Inverse kinematics is the process of converting a Cartesian point in space into a set of joint angles to more efficiently move the end effector of a robot to a desired orientation. This project investigates the inverse kinematics of a robotic hand with fingers under various scenarios. Assuming the parameters of a provided robot, a general equation for the end effector point was calculated and used to plot the region of space that it can reach. Further, the benefits obtained from the addition of a prismatic joint versus an extra variable angle joint were considered. The results confirmed that having more movable parts, such as prismatic points and changing angles, increases the effective reach of a robotic hand.
Abma, Tineke A.; Cook, Tina; Rämgård, Margaretha; Kleba, Elisabeth; Harris, Janet; Wallerstein, Nina
2017-01-01
Social impact, defined as an effect on society, culture, quality of life, community services, or public policy beyond academia, is widely considered as a relevant requirement for scientific research, especially in the field of health care. Traditionally, in health research, the process of knowledge transfer is rather linear and one-sided and has…
Scene matching based on non-linear pre-processing on reference image and sensed image
Zhong Sheng; Zhang Tianxu; Sang Nong
2005-01-01
To solve the heterogeneous image scene matching problem, a non-linear pre-processing method applied to the original images before intensity-based correlation is proposed. The results show that the probability of correct matching is greatly increased, and the effect is especially remarkable for low-S/N image pairs.
Discounted semi-Markov decision processes : linear programming and policy iteration
Wessels, J.; van Nunen, J.A.E.E.
1975-01-01
For semi-Markov decision processes with discounted rewards we derive the well known results regarding the structure of optimal strategies (nonrandomized, stationary Markov strategies) and the standard algorithms (linear programming, policy iteration). Our analysis is completely based on a primal
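For the ordinary discrete-time special case (unit transition times), the policy iteration algorithm mentioned above can be sketched as follows; the semi-Markov generalization in the paper additionally weights the discounting by the random transition times:

```python
import numpy as np

def policy_iteration(P, R, beta=0.9):
    """Policy iteration for a discounted MDP.
    P[a] is the transition matrix under action a; R[a] the reward vector."""
    n_actions, n_states = len(P), P[0].shape[0]
    policy = np.zeros(n_states, dtype=int)
    while True:
        # policy evaluation: solve (I - beta * P_pi) v = r_pi
        P_pi = np.array([P[policy[s]][s] for s in range(n_states)])
        r_pi = np.array([R[policy[s]][s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - beta * P_pi, r_pi)
        # policy improvement: greedy with respect to the Q-values
        Q = np.array([R[a] + beta * P[a] @ v for a in range(n_actions)])
        new_policy = Q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v   # stationary, nonrandomized optimal strategy
        policy = new_policy
```

The returned strategy is stationary and nonrandomized, matching the structural result stated in the abstract.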
Discounted semi-Markov decision processes : linear programming and policy iteration
Wessels, J.; van Nunen, J.A.E.E.
1974-01-01
For semi-Markov decision processes with discounted rewards we derive the well known results regarding the structure of optimal strategies (nonrandomized, stationary Markov strategies) and the standard algorithms (linear programming, policy iteration). Our analysis is completely based on a primal
Linear all-optical signal processing using silicon micro-ring resonators
Ding, Yunhong; Ou, Haiyan; Xu, Jing
2016-01-01
Silicon micro-ring resonators (MRRs) are compact and versatile devices whose periodic frequency response can be exploited for a wide range of applications. In this paper, we review our recent work on linear all-optical signal processing applications using silicon MRRs as passive filters. We focus...
Donkin, C.; Brown, S.; Heathcote, A.; Wagenmakers, E.-J.
2011-01-01
Quantitative models for response time and accuracy are increasingly used as tools to draw conclusions about psychological processes. Here we investigate the extent to which these substantive conclusions depend on whether researchers use the Ratcliff diffusion model or the Linear Ballistic
Effect of Process Parameters on Friction Model in Computer Simulation of Linear Friction Welding
A. Yamileva
2014-07-01
The friction model is an important part of a numerical model of linear friction welding; its choice determines the accuracy of the results. Existing models employ the classical Amontons-Coulomb law, in which the friction coefficient is either constant or linearly dependent on a single parameter. Determining the friction coefficient is a time-consuming process that requires many experiments, so the feasibility of identifying a more complex dependence should be assessed by analyzing the effect of the approximating friction law on the simulation results.
Acceleration of Linear Finite-Difference Poisson-Boltzmann Methods on Graphics Processing Units.
Qi, Ruxi; Botello-Smith, Wesley M; Luo, Ray
2017-07-11
Electrostatic interactions play crucial roles in biophysical processes such as protein folding and molecular recognition. Poisson-Boltzmann equation (PBE)-based models have emerged as a widely used approach for modeling these important processes. Though great efforts have been put into developing efficient PBE numerical models, challenges still remain due to the high dimensionality of typical biomolecular systems. In this study, we implemented and analyzed commonly used linear PBE solvers for the ever-improving graphics processing units (GPU) for biomolecular simulations, including both standard and preconditioned conjugate gradient (CG) solvers with several alternative preconditioners. Our implementation utilizes the standard Nvidia CUDA libraries cuSPARSE, cuBLAS, and CUSP. Extensive tests show that good numerical accuracy can be achieved given that single precision is often used for numerical applications on GPU platforms. The optimal GPU performance was observed with the Jacobi-preconditioned CG solver, with a significant speedup over the standard CG solver on CPU in our diversified test cases. Our analysis further shows that different matrix storage formats also considerably affect the efficiency of different linear PBE solvers on GPU, with the diagonal format best suited for our standard finite-difference linear systems. Further efficiency may be possible with matrix-free operations and integrated grid stencil setup specifically tailored for the banded matrices in PBE-specific linear systems.
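A minimal CPU sketch of the Jacobi-preconditioned CG solver discussed above (the paper's implementation runs on GPU via cuSPARSE/cuBLAS/CUSP; this dense NumPy version only illustrates the algorithm):

```python
import numpy as np

def jacobi_pcg(A, b, tol=1e-10, max_iter=1000):
    """Conjugate gradient with a Jacobi (diagonal) preconditioner for SPD A."""
    M_inv = 1.0 / np.diag(A)          # Jacobi preconditioner: M = diag(A)
    x = np.zeros_like(b)
    r = b - A @ x                     # initial residual
    z = M_inv * r                     # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p     # update search direction
        rz = rz_new
    return x
```

For the banded finite-difference PBE systems in the paper, `A` would be stored in a sparse diagonal format rather than as a dense array.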
J.E., Podgorski; Auken, Esben; Schamper, Cyril Noel Clarence
2013-01-01
Helicopter time-domain electromagnetic (HTEM) surveying has historically been used for mineral exploration, but over the past decade it has started to be used in environmental assessments and geologic and hydrologic mapping. Such surveying is a cost-effective means of rapidly acquiring densely......%-23%, and the artificial lineations were practically eliminated. Our processing and inversion strategy is entirely general, such that with minor system-specific modifications it could be applied to any HTEM data set, including those recorded many years ago. © 2013 Society of Exploration Geophysicists....
Effects of noise, nonlinear processing, and linear filtering on perceived music quality.
Arehart, Kathryn H; Kates, James M; Anderson, Melinda C
2011-03-01
The purpose of this study was to determine the relative impact of different forms of hearing aid signal processing on quality ratings of music. Music quality was assessed using a rating scale for three types of music: orchestral classical music, jazz instrumental, and a female vocalist. The music stimuli were subjected to a wide range of simulated hearing aid processing conditions including (1) noise and nonlinear processing, (2) linear filtering, and (3) combinations of noise, nonlinear, and linear filtering. Quality ratings were measured in a group of 19 listeners with normal hearing and a group of 15 listeners with sensorineural hearing impairment. Quality ratings in both groups were generally comparable, were reliable across test sessions, were impacted more by noise and nonlinear signal processing than by linear filtering, and were significantly affected by the genre of music. The average quality ratings for music were reasonably well predicted by the hearing aid speech quality index (HASQI), but additional work is needed to optimize the index to the wide range of music genres and processing conditions included in this study.
Chaaba, Ali; Aboussaleh, Mohamed; Bousshine, Lahbib; Boudaia, El Hassan
2011-01-01
Limit analysis approaches are widely used to analyze metalworking processes; however, they apply only to perfectly plastic materials and, more recently, to isotropic hardening ones, excluding any kind of kinematic hardening. In the present work, using the Implicit Standard Materials concept, the sequential limit analysis approach and the finite element method, our objective is to extend the application of limit analysis to include linear and non-linear kinematic strain hardening. Because this plastic flow rule is non-associative, the Implicit Standard Materials concept is adopted as the framework for non-standard plasticity modeling. The sequential limit analysis procedure treats plastic behavior with non-linear kinematic strain hardening as a succession of perfectly plastic behaviors, with the yield surfaces and geometry updated after each limit analysis sequence. A standard kinematic finite element method, together with a regularization approach, is used to simulate two large-compression (cold forging) cases under plane strain and axisymmetric conditions.
Melo, Ingrid Sofia Vieira de; Costa, Clara Andrezza Crisóstomo Bezerra; Santos, João Victor Laurindo Dos; Santos, Aldenir Feitosa Dos; Florêncio, Telma Maria de Menezes Toledo; Bueno, Nassib Bezerra
2017-01-01
The consumption of ultra-processed foods may be associated with the development of chronic diseases, both in adults and in children/adolescents. This consumption is growing worldwide, especially in low and middle-income countries. Nevertheless, its magnitude in small, poor cities from the countryside is not well characterized, especially in adolescents. This study aimed to assess the consumption of minimally processed, processed and ultra-processed foods by adolescents from a poor Brazilian city and to determine if it was associated with excess weight, high waist circumference and high blood pressure. Cross-sectional study, conducted at a public federal school that offers technical education together with high school, located in the city of Murici. Adolescents of both sexes and aged between 14-19 years old were included. Anthropometric characteristics (weight, height, waist circumference), blood pressure, and dietary intake data were assessed. Associations were calculated using Poisson regression models, adjusted by sex and age. In total, 249 adolescents were included, 55.8% of whom were girls, with a mean age of 16 years. The consumption of minimally processed foods was inversely associated with excess weight (Adjusted Prevalence Ratio: 0.61, 95% Confidence Interval: [0.39-0.96], P = 0.03). Although the consumption of ultra-processed foods was not associated with excess weight, high blood pressure and high waist circumference, 46.2% of the sample reported eating these products more than weekly. Consumption of minimally processed food is inversely associated with excess weight in adolescents. Investments in nutritional education aimed at preventing the chronic diseases associated with the consumption of these foods are necessary.
Luiz Augusto da Cruz Meleiro
2005-06-01
In this work a MIMO non-linear predictive controller was developed for an extractive alcoholic fermentation process. The internal model of the controller was represented by two MISO Functional Link Networks (FLNs), identified using simulated data generated from a deterministic mathematical model whose kinetic parameters were determined experimentally. The FLN structure offers fast training and guaranteed convergence as advantages, since the estimation of the weights is a linear optimization problem. Besides, the elimination of non-significant weights generates parsimonious models, which allows for fast execution in an MPC-based algorithm. The proposed algorithm showed good potential for the identification and control of non-linear processes.
Morán-Ramírez, J.; Ledesma-Ruiz, R.; Mahlknecht, J.; Ramos-Leal, J.A.
2016-01-01
In order to understand and mitigate the deterioration of water quality in the aquifer system underlying the Guadalajara metropolitan area, an investigation was performed developing geochemical evolution models for assessment of groundwater chemical processes. The models helped not only to conceptualize the groundwater geochemistry, but also to evaluate the relative influence of anthropogenic inputs and natural sources of salinity to the groundwater. Mixing processes, ion exchange, water–rock interactions and nitrate pollution and denitrification were identified and confirmed using mass-balance models constrained by information on hydrogeology, groundwater chemistry, lithology and stability of geochemical phases. The water–rock interactions in the volcanic setting produced a dominant Na−HCO_3 water type, followed by Na−Mg−Ca−HCO_3 and Na−Ca−HCO_3. For geochemical evolution modeling, flow sections were selected representing recharge and non-recharge processes and a variety of mixing conditions. Recharge processes are dominated by dissolution of soil CO_2 gas, calcite, gypsum, albite and biotite, and Ca/Na exchange. Non-recharge processes show that the production of carbonic acid and Ca/Na exchange are decreasing, while other minerals such as halite and amorphous SiO_2 are precipitated. The origins of nitrate pollution in groundwater are fertilizers applied on rural plots and wastewater and waste disposal in the urban area. This investigation may help water authorities to adequately address and manage groundwater contamination. - Highlights: • Inverse geochemical modeling was used to study the processes occurring in a volcanic aquifer. • Three flow sections were selected to apply inverse hydrogeochemical modeling. • Three main groundwater flows were identified: a local, an intermediate and a regional flow. • The models show that in the study area groundwater is mixed with local recharge. • In the south, the aquifer has thermal influence.
Sushma Santapuri
2016-10-01
A unified thermodynamic framework for the characterization of functional materials is developed. This framework encompasses linear reversible and irreversible processes with thermal, electrical, magnetic, and/or mechanical effects coupled. The comprehensive framework combines the principles of classical equilibrium and non-equilibrium thermodynamics with electrodynamics of continua in the infinitesimal strain regime. In the first part of this paper, linear Thermo-Electro-Magneto-Mechanical (TEMM) quasistatic processes are characterized. Thermodynamic stability conditions are further imposed on the linear constitutive model and restrictions on the corresponding material constants are derived. The framework is then extended to irreversible transport phenomena including thermoelectric, thermomagnetic and the state-of-the-art spintronic and spin caloritronic effects. Using Onsager's reciprocity relationships and the dissipation inequality, restrictions on the kinetic coefficients corresponding to charge, heat and spin transport processes are derived. All the constitutive models are accompanied by multiphysics interaction diagrams that highlight the various processes that can be characterized using this framework. Keywords: Applied mathematics, Materials science, Thermodynamics
Deuterium Liner and Multiparameter Investigation of the Inverse Z-Pinch Formation Process
Bystritskii, Vyach M; Grebenyuk, V M; Parzhitsky, S S; Penkov, F M; Stolupin, V A; Boznyak, J; Gula, E; Dudkin, G N; Nechaev, B A; Padalko, V M; Mesyats, G A; Ratakhin, N A; Sorokin, S A
2001-01-01
A description of the methods and results of measurements of the ion energy distribution of a deuterium liner accelerated in the inverse Z-pinch configuration is presented; in this configuration the liner plasma is accelerated radially outward from a small initial radius. Knowledge of the experimental deuteron energy distribution is crucially important for the correct interpretation of results on the dd-reaction at ultralow collision energies using the liner plasma. Experiments were performed at HCEI (Tomsk, Russia) on a nanosecond pulsed high-current generator (I = 950 kA, pulse duration τ = 80 ns). The hollow deuterium liner, 20 mm long, was accelerated from an initial radius of ~15 mm to 45 mm. The liner characteristics were measured by means of light detectors (detecting the H_α and H_β deuterium lines) and magnetic B-dot probes placed at various radii of the expanding liner. In addition, the intensity of the neutron radiation from the reaction d + d → ³He + n was ...
Angle-domain inverse scattering migration/inversion in isotropic media
Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan
2018-07-01
The classical seismic asymptotic inversion can be transformed into a problem of inversion of the generalized Radon transform (GRT). In such methods, the combined parameters are linearly attached to the scattered wave-field by the Born approximation and recovered by applying an inverse GRT operator to the scattered wave-field data. A typical GRT-style true-amplitude inversion procedure contains an amplitude compensation process after the weighted migration, via division by an illumination-associated matrix whose elements are integrals over scattering angles. To some extent it is intuitive to perform the generalized linear inversion and the inversion of the GRT together in this process for direct inversion. However, carrying out such an operation is imprecise when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally eliminates the external integral term related to illumination in the conventional case. We solve the linearized integral equation for combined parameters at different fixed scattering angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. Then we solve the over-determined problem for each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method can obtain more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate its effectiveness and practicability.
Unkelbach, Jan; Oelfke, Uwe
2005-01-01
We investigate an off-line strategy to incorporate inter-fraction organ movements in IMRT treatment planning. Nowadays, imaging modalities located in the treatment room allow for several CT scans of a patient during the course of treatment. These multiple CT scans can be used to estimate a probability distribution of possible patient geometries. This probability distribution can subsequently be used to calculate the expectation value of the delivered dose distribution. In order to incorporate organ movements into the treatment planning process, it was suggested that inverse planning could be based on that probability distribution of patient geometries instead of a single snapshot. However, it was shown that a straightforward optimization of the expectation value of the dose may be insufficient since the expected dose distribution is subject to several uncertainties: first, the probability distribution has to be estimated from only a few images; second, the distribution is only sparsely sampled over the treatment course due to the finite number of fractions. In order to obtain a robust treatment plan, these uncertainties should be considered and minimized in the inverse planning process. In the current paper, we calculate a 3D variance distribution in addition to the expectation value of the dose distribution, and the two are simultaneously optimized. The variance is used as a surrogate to quantify the risks associated with a treatment plan. The feasibility of this approach is demonstrated for clinical data of prostate patients. Different scenarios of dose expectation values and corresponding variances are discussed
Yamada, M.; Mangeney, A.; Moretti, L.; Matsushi, Y.
2014-12-01
Understanding physical parameters, such as frictional coefficients, velocity change, and dynamic history, is an important issue for assessing and managing the risks posed by deep-seated catastrophic landslides. Previously, landslide motion has been inferred qualitatively from topographic changes caused by the event, and occasionally from eyewitness reports. However, these conventional approaches are unable to evaluate source processes and dynamic parameters. In this study, we use broadband seismic recordings to trace the dynamic process of the deep-seated Akatani landslide that occurred on the Kii Peninsula, Japan, which is one of the best recorded large slope failures. Based on the previous results of waveform inversions and precise topographic surveys done before and after the event, we applied numerical simulations using the SHALTOP numerical model (Mangeney et al., 2007). This model describes homogeneous continuous granular flows on a 3D topography based on a depth-averaged thin-layer approximation. We assume a Coulomb friction law with a constant friction coefficient, i.e., the friction is independent of the sliding velocity. We varied the friction coefficients in the simulation so that the resulting force acting on the surface agrees with the single force estimated from the seismic waveform inversion. The figure shows the force history of the east-west components after band-pass filtering between 10-100 seconds. The force history of the simulation with frictional coefficient 0.27 (thin red line) agrees best with the result of the seismic waveform inversion (thick gray line). Although the amplitude is slightly different, the phases are coherent for the main three pulses. This is evidence that the point-source approximation works reasonably well for this particular event. The friction coefficient during the sliding was estimated to be 0.38 based on the seismic waveform inversion performed by the previous study and on the sliding block model (Yamada et al., 2013
Observations of linear and nonlinear processes in the foreshock wave evolution
Y. Narita
2007-07-01
Full Text Available Waves in the foreshock region are studied on the basis of the hypothesis that a linear process first excites the waves and wave-wave nonlinearities then scatter the energy of the primary waves into a number of daughter waves. To examine this wave evolution scenario, the dispersion relations, the wave number spectra of the magnetic field energy, and the dimensionless cross helicity are determined from the observations made by the four Cluster spacecraft. The results confirm that the linear process is the ion/ion right-hand resonant instability, but the wave-wave interactions are not clearly identified. We discuss various reasons why the test for the wave-wave nonlinearities fails, and conclude that higher order statistics would provide direct evidence for the wave coupling phenomena.
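The dimensionless cross helicity used in this kind of analysis can be computed directly from paired velocity and magnetic-field fluctuations. The sketch below uses synthetic data, not Cluster measurements, and assumes the usual normalization with the magnetic fluctuation expressed in Alfvén units; it illustrates that a purely Alfvénic fluctuation yields a cross helicity of ±1:

```python
import math

def cross_helicity(dv, db):
    """sigma_c = 2<dv.db> / (<|dv|^2> + <|db|^2>), computed over samples
    of 3-component fluctuation vectors dv, db (db in Alfven units)."""
    dot = sum(v[0]*b[0] + v[1]*b[1] + v[2]*b[2] for v, b in zip(dv, db))
    ev = sum(v[0]**2 + v[1]**2 + v[2]**2 for v in dv)
    eb = sum(b[0]**2 + b[1]**2 + b[2]**2 for b in db)
    return 2.0 * dot / (ev + eb)

# A pure Alfven wave has dv = +db or dv = -db depending on its
# propagation direction along the background field.
t = [0.01 * k for k in range(1000)]
dv = [(math.cos(x), math.sin(x), 0.0) for x in t]
db_plus = [(vx, vy, vz) for vx, vy, vz in dv]
db_minus = [(-vx, -vy, -vz) for vx, vy, vz in dv]
print(cross_helicity(dv, db_plus))   # 1.0
print(cross_helicity(dv, db_minus))  # -1.0
```

Intermediate values between -1 and +1 then quantify the energy balance between the two propagation senses, which is the diagnostic used in the abstract.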
Hadronic cross-sections in two photon processes at a future linear collider
Godbole, Rohini M.; Roeck, Albert de; Grau, Agnes; Pancheri, Giulia
2003-01-01
In this note we address the issue of measurability of the hadronic cross-sections at a future photon collider as well as for the two-photon processes at a future high energy linear e+e- collider. We extend, to higher energy, our previous estimates of the accuracy with which the γγ cross-section needs to be measured, in order to distinguish between different theoretical models of the energy dependence of the total cross-sections. We show that the necessary precision to discriminate among these models is indeed possible at future linear colliders in the Photon Collider option. Further we note that even in the e+e- option a measurement of the hadron production cross-section via γγ processes, with an accuracy necessary to allow discrimination between different theoretical models, should be possible. We also comment briefly on the implications of these predictions for hadronic backgrounds at the future TeV energy e+e- collider CLIC. (author)
Mover Position Detection for PMTLM Based on Linear Hall Sensors through EKF Processing.
Yan, Leyang; Zhang, Hui; Ye, Peiqing
2017-04-06
Accurate mover position is vital for a permanent magnet tubular linear motor (PMTLM) control system. In this paper, two linear Hall sensors are utilized to detect the mover position. However, Hall sensor signals contain third-order harmonics, creating errors in mover position detection. To filter out the third-order harmonics, a signal processing method based on the extended Kalman filter (EKF) is presented. The limitation of the conventional processing method is first analyzed, and then the EKF is adopted to detect the mover position. In the EKF model, the amplitude of the fundamental component and the percentage of the harmonic component are taken as state variables, and they can be estimated based solely on the measured sensor signals. The harmonic component can then be calculated and eliminated. The proposed method has the advantages of faster convergence, better stability and higher accuracy. Finally, experimental results validate the effectiveness and superiority of the proposed method.
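The paper's EKF is tied to the actual sensor geometry; as a rough, hypothetical sketch of the same idea, the filter below estimates the fundamental amplitude and the third-harmonic fraction of a synthetic Hall-like signal. The electrical angle is assumed known here (in the real system it must be decoded jointly with the harmonics), and all tuning values are illustrative:

```python
import math

def ekf_harmonic(z, theta, a0=0.5, p0=0.0, r=1e-4, q=1e-8):
    """Estimate fundamental amplitude A and third-harmonic fraction p in
    z_k = A*sin(theta_k) + p*A*sin(3*theta_k) with a 2-state EKF.
    The electrical angle theta_k is assumed known in this sketch."""
    A, p = a0, p0
    P = [[1.0, 0.0], [0.0, 1.0]]              # state covariance
    for zk, th in zip(z, theta):
        P[0][0] += q; P[1][1] += q            # predict (constant state)
        s1, s3 = math.sin(th), math.sin(3.0 * th)
        h = A * s1 + p * A * s3               # predicted measurement
        H = [s1 + p * s3, A * s3]             # Jacobian dh/d[A, p]
        PHt = [P[0][0] * H[0] + P[0][1] * H[1],
               P[1][0] * H[0] + P[1][1] * H[1]]
        S = H[0] * PHt[0] + H[1] * PHt[1] + r
        K = [PHt[0] / S, PHt[1] / S]          # Kalman gain
        A += K[0] * (zk - h)
        p += K[1] * (zk - h)
        P = [[P[0][0] - K[0] * PHt[0], P[0][1] - K[0] * PHt[1]],
             [P[1][0] - K[1] * PHt[0], P[1][1] - K[1] * PHt[1]]]
    return A, p

# Synthetic signal: unit fundamental with a 10% third harmonic.
theta = [0.02 * k for k in range(2000)]
z = [math.sin(t) + 0.10 * math.sin(3.0 * t) for t in theta]
A, p = ekf_harmonic(z, theta)
# A converges near 1.0 and p near 0.10; the estimated harmonic
# p*A*sin(3*theta) can then be subtracted before position decoding.
```

The bilinear measurement model (the product p*A) is what makes an extended, rather than plain, Kalman filter necessary.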
Processing for maximizing the level of crystallinity in linear aromatic polyimides
St.clair, Terry L. (Inventor)
1991-01-01
The process of the present invention includes first treating a polyamide acid (such as LARC-TPI polyamide acid) in an amide-containing solvent (such as N-methyl pyrrolidone) with an aprotic organic base (such as triethylamine), followed by dehydrating with an organic dehydrating agent (such as acetic anhydride). The level of crystallinity in the linear aromatic polyimide so produced is maximized without any degradation in the molecular weight thereof.
Nonparametric adaptive estimation of linear functionals for low frequency observed Lévy processes
Kappus, Johanna
2012-01-01
For a Lévy process X having finite variation on compact sets and finite first moments, µ(dx) = xν(dx) is a finite signed measure which completely describes the jump dynamics. We construct kernel estimators for linear functionals of µ and provide rates of convergence under regularity assumptions. Moreover, we consider adaptive estimation via model selection and propose a new strategy for the data-driven choice of the smoothing parameter.
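The paper's kernel estimators are more refined, but the simplest linear functional of µ, its total mass µ(R) = ∫ x ν(dx), can already be estimated from low-frequency increments, since E[X_Δ] = Δ·µ(R) for a driftless finite-variation process. A minimal sketch with an assumed compound Poisson model (rate and jump law chosen for illustration):

```python
import random

random.seed(7)

def compound_poisson_increments(lam, jump_mean, delta, n):
    """Increments over time step delta of a compound Poisson process with
    jump rate lam and exponentially distributed jumps of mean jump_mean."""
    out = []
    for _ in range(n):
        k = 0
        t = random.expovariate(lam)          # arrival times via exp gaps
        while t < delta:
            k += 1
            t += random.expovariate(lam)
        out.append(sum(random.expovariate(1.0 / jump_mean) for _ in range(k)))
    return out

lam, jump_mean, delta, n = 2.0, 1.5, 0.1, 20000
incs = compound_poisson_increments(lam, jump_mean, delta, n)
# Here mu(R) = lam * E[jump] = 3.0, and the sample mean of the
# increments divided by delta is a consistent estimator of it.
est = sum(incs) / (n * delta)
# est is close to 3.0 for this sample size
```

Estimating ∫ f dµ for non-constant f, with rates of convergence, is exactly where the kernel machinery of the paper takes over.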
2014-04-11
An Arbitrary Lagrangian-Eulerian finite-element analysis is combined with a thermo-mechanical material model for Carpenter Custom 465 precipitation-hardened martensitic stainless steel to develop a linear friction welding (LFW) process model for this material.
Photogrammetric Processing of Planetary Linear Pushbroom Images Based on Approximate Orthophotos
Geng, X.; Xu, Q.; Xing, S.; Hou, Y. F.; Lan, C. Z.; Zhang, J. J.
2018-04-01
Efficiently producing planetary mapping products from orbital remote sensing images remains a challenging task. Photogrammetric processing of planetary stereo images suffers from several disadvantages, such as a lack of ground control information and informative features. Among these, image matching is the most difficult job in planetary photogrammetry. This paper designs a photogrammetric processing framework for planetary remote sensing images based on approximate orthophotos. Both tie point extraction for bundle adjustment and dense image matching for generating a digital terrain model (DTM) are performed on approximate orthophotos. Since most planetary remote sensing images are acquired by linear scanner cameras, we mainly deal with linear pushbroom images. In order to improve the computational efficiency of orthophoto generation and coordinate transformation, a fast back-projection algorithm for linear pushbroom images is introduced. Moreover, an iteratively refined DTM and orthophoto scheme is adopted in the DTM generation process, which helps to reduce the search space of image matching and improve the matching accuracy of conjugate points. With the advantages of approximate orthophotos, the matching results of planetary remote sensing images can be greatly improved. We tested the proposed approach with Mars Express (MEX) High Resolution Stereo Camera (HRSC) and Lunar Reconnaissance Orbiter (LRO) Narrow Angle Camera (NAC) images. The preliminary experimental results demonstrate the feasibility of the proposed approach.
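Back-projection for a pushbroom sensor differs from the frame-camera case because each image row is exposed at its own time. The toy sketch below assumes an idealized nadir-looking camera translating at constant speed along the X axis (all orbit/camera numbers are invented, and it ignores the orientation refinement a real algorithm must handle): the row follows from solving for the time at which the scan plane passes the ground point, and the column from a cross-track pinhole projection.

```python
def backproject(X, Y, Z, x0=0.0, v=7000.0, H=400000.0,
                dt=1e-3, f_pix=20000.0):
    """Toy pushbroom back-projection: ground point (X, Y, Z) -> (row, col).
    x0: sensor position at time 0, v: along-track speed [m/s],
    H: orbit height [m], dt: line exposure interval [s],
    f_pix: focal length expressed in pixels."""
    t = (X - x0) / v                 # time when the scan plane passes X
    row = t / dt                     # one image row per exposure interval
    col = f_pix * Y / (H - Z)        # cross-track pinhole projection
    return row, col

row, col = backproject(X=14000.0, Y=500.0, Z=0.0)
# row = 2000.0, col = 25.0 for these assumed orbit/camera numbers
```

In the paper's setting this mapping must be evaluated for every DTM cell, which is why a fast back-projection algorithm matters.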
N. Jaya
2008-10-01
Full Text Available In this work, a design and implementation of a conventional PI controller, a single-region fuzzy logic controller, a two-region fuzzy logic controller and a Globally Linearized Controller (GLC) for a two-capacity interacting nonlinear process is carried out. The performance of this process using the single-region FLC, two-region FLC and GLC is compared with the performance of the conventional PI controller about an operating point of 50%. It has been observed that the GLC and the two-region FLC provide better performance. Further, this procedure is also validated by real-time experimentation using dSPACE.
Sparse Inverse Gaussian Process Regression with Application to Climate Network Discovery
National Aeronautics and Space Administration — Regression problems on massive data sets are ubiquitous in many application domains including the Internet, earth and space sciences, and finances. Gaussian Process...
Foo, Mathias; Kim, Jongrae; Sawlekar, Rucha; Bates, Declan G.
2017-01-01
Feedback control is widely used in chemical engineering to improve the performance and robustness of chemical processes. Feedback controllers require a ‘subtractor’ that is able to compute the error between the process output and the reference signal. In the case of embedded biomolecular control circuits, subtractors designed using standard chemical reaction network theory can only realise one-sided subtraction, rendering standard controller design approaches inadequate. Here, we show how a b...
Generalization of the Wide-Sense Markov Concept to a Widely Linear Processing
Espinosa-Pulido, Juan Antonio; Navarro-Moreno, Jesús; Fernández-Alcalá, Rosa María; Ruiz-Molina, Juan Carlos; Oya-Lechuga, Antonia; Ruiz-Fuentes, Nuria
2014-01-01
In this paper we show that the classical definition and the associated characterizations of wide-sense Markov (WSM) signals are not valid for improper complex signals. For that, we propose an extension of the concept of WSM to a widely linear (WL) setting and study new characterizations. Specifically, we introduce a new class of signals, called widely linear Markov (WLM) signals, and we analyze some of their properties based either on second-order properties or on state-space models from a WL processing standpoint. The study is performed in both the forward and backward directions of time. Thus, we provide forward and backward Markovian representations for WLM signals. Finally, different recursive estimation algorithms are obtained for these models.
Measurements of translation, rotation and strain: new approaches to seismic processing and inversion
Bernauer, M.; Fichtner, A.; Igel, H.
2012-01-01
We propose a novel approach to seismic tomography based on the joint processing of translation, strain and rotation measurements. Our concept is based on the apparent S and P velocities, defined as the ratios of displacement velocity and rotation amplitude, and displacement velocity and
Burkhard, N.R.
1979-01-01
The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
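The "stabilized linear inverse theory" step at the heart of a code like INVERT can be illustrated with damped least squares (Tikhonov regularization), i.e. solving (GᵀG + αI)m = Gᵀd. The kernel matrix and numbers below are hypothetical, not taken from the actual gravity inversion code:

```python
def solve(Amat, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(Amat, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def tikhonov(G, d, alpha):
    """Stabilized (damped) least squares: (G^T G + alpha I) m = G^T d."""
    m_rows, n = len(G), len(G[0])
    GtG = [[sum(G[i][a] * G[i][b] for i in range(m_rows))
            + (alpha if a == b else 0.0) for b in range(n)] for a in range(n)]
    Gtd = [sum(G[i][a] * d[i] for i in range(m_rows)) for a in range(n)]
    return solve(GtG, Gtd)

# Toy 'gravity' kernel: each datum is a smoothed average of model cells.
G = [[1.0, 0.5, 0.2],
     [0.5, 1.0, 0.5],
     [0.2, 0.5, 1.0],
     [0.1, 0.2, 0.5]]
m_true = [1.0, -2.0, 0.5]
d = [sum(G[i][j] * m_true[j] for j in range(3)) for i in range(4)]
m = tikhonov(G, d, alpha=1e-6)
# With clean data and small damping, m recovers m_true closely.
```

The damping parameter α trades resolution against stability, which is what makes an iterative TREND/INVERT-style scheme converge on noisy Bouguer data.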
Zhang, Zhendong; Alkhalifah, Tariq Ali
2017-01-01
Full waveform inversion for reflection events is limited by its linearized update requirements given by a process equivalent to migration. Unless the background velocity model is reasonably accurate, the resulting gradient can have an inaccurate
Inverse Processing of Undefined Complex Shape Parts from Structural High Alloyed Tool Steel
Katarina Monkova
2014-02-01
Full Text Available The paper deals with the process of 3D digitization as a tool for increasing the production efficiency of complex shaped parts. It utilizes the concept of reverse engineering and the new STEP-NC model of NC program generation for the production of templates for winding the stator coils of electric motors for household appliances. The manual production of the prototype was substituted by manufacturing with NC machines. A 3D scanner was used for data digitizing, the CAD/CAM system Pro/Engineer was used for NC program generation, and 3D measuring equipment was used for verification of the newly produced parts. The company estimated that the implementation of the STEP-NC standard into the production process alone made it possible to read the 3D geometry of the product without problems, and helped the workshop to shorten the time needed for part production by about 30%.
Słania J.
2014-10-01
Full Text Available The article presents the process of production of coated electrodes and their welding properties. The factors affecting the welding properties and the currently applied methods of assessment are given. The methodology of the testing, based on measuring and recording instantaneous values of the welding current and welding arc voltage, is discussed. An algorithm for the creation of the reference database of the expert system is shown, aiding the assessment of coated electrode welding properties. The stability of the voltage-current characteristics is discussed. Statistical factors of the instantaneous values of welding current and welding arc voltage waveforms used for determining welding process stability are presented. The welding properties of the coated electrodes are compared. The article presents the results of linear regression as well as the impact of the independent variables on the welding process performance. Finally the conclusions drawn from the research are given.
Dynamic actuation of a novel laser-processed NiTi linear actuator
Pequegnat, A; Daly, M; Wang, J; Zhou, Y; Khan, M I
2012-01-01
A novel laser processing technique, capable of locally modifying the shape memory effect, was applied to enhance the functionality of a NiTi linear actuator. By altering local transformation temperatures, an additional memory was imparted into a monolithic NiTi wire to enable dynamic actuation via controlled resistive heating. Characterizations of the actuator load, displacement and cyclic properties were conducted using a custom-built spring-biased test set-up. Monotonic tensile testing was also implemented to characterize the deformation behaviour of the martensite phase. Observed differences in the deformation behaviour of laser-processed material were found to affect the magnitude of the active strain. Furthermore, residual strain during cyclic actuation testing was found to stabilize after 150 cycles while the recoverable strain remained constant. This laser-processed actuator will allow for the realization of new applications and improved control methods for shape memory alloys. (paper)
Varadarajan, Divya; Haldar, Justin P
2017-11-01
The data measured in diffusion MRI can be modeled as the Fourier transform of the Ensemble Average Propagator (EAP), a probability distribution that summarizes the molecular diffusion behavior of the spins within each voxel. This Fourier relationship is potentially advantageous because of the extensive theory that has been developed to characterize the sampling requirements, accuracy, and stability of linear Fourier reconstruction methods. However, existing diffusion MRI data sampling and signal estimation methods have largely been developed and tuned without the benefit of such theory, instead relying on approximations, intuition, and extensive empirical evaluation. This paper aims to address this discrepancy by introducing a novel theoretical signal processing framework for diffusion MRI. The new framework can be used to characterize arbitrary linear diffusion estimation methods with arbitrary q-space sampling, and can be used to theoretically evaluate and compare the accuracy, resolution, and noise-resilience of different data acquisition and parameter estimation techniques. The framework is based on the EAP, and makes very limited modeling assumptions. As a result, the approach can even provide new insight into the behavior of model-based linear diffusion estimation methods in contexts where the modeling assumptions are inaccurate. The practical usefulness of the proposed framework is illustrated using both simulated and real diffusion MRI data in applications such as choosing between different parameter estimation methods and choosing between different q-space sampling schemes. Copyright © 2017 Elsevier Inc. All rights reserved.
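The Fourier relationship the framework builds on can be demonstrated in one dimension: for an assumed Gaussian EAP with standard deviation σ, the signal is E(q) = exp(-2π²q²σ²), and a plain linear (Riemann-sum) inverse Fourier reconstruction from sampled q-space recovers the propagator. The sampling step and range below are illustrative choices, not from the paper:

```python
import math, cmath

sigma = 1.0
dq, Q = 0.01, 1.0
qs = [k * dq for k in range(-int(Q / dq), int(Q / dq) + 1)]
# Analytic q-space signal of a zero-mean Gaussian EAP
E = [math.exp(-2.0 * math.pi ** 2 * q * q * sigma ** 2) for q in qs]

def eap(x):
    """Linear Fourier reconstruction P(x) ~ sum_k E(q_k) e^{i 2 pi q_k x} dq."""
    s = sum(Ek * cmath.exp(2j * math.pi * q * x) for Ek, q in zip(E, qs))
    return (s * dq).real

p0_true = 1.0 / (math.sqrt(2.0 * math.pi) * sigma)   # true P(0) ~ 0.3989
# With this q-range and step, eap(x) matches the true Gaussian closely.
```

Sampling theory then tells you how the q-space extent and spacing control resolution and aliasing of the reconstructed EAP, which is the kind of question the proposed framework answers for arbitrary linear estimators.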
Linear Mathematical Model for Seam Tracking with an Arc Sensor in P-GMAW Processes.
Liu, Wenji; Li, Liangyu; Hong, Ying; Yue, Jianfeng
2017-03-14
Arc sensors have been used in seam tracking and widely studied since the 1980s, and commercial arc sensing products for T- and V-shaped grooves have been developed. However, it is difficult to use these arc sensors in narrow gap welding because the arc stability and sensing accuracy are not satisfactory. Pulsed gas metal arc welding (P-GMAW) has been successfully applied in narrow gap welding and all-position welding processes, so it is worthwhile to research P-GMAW arc sensing technology. In this paper, we derive a linear mathematical P-GMAW model for arc sensing, and the assumptions of the model are verified through experiments and finite element methods. Finally, the linear characteristics of the mathematical model were investigated. In torch height changing experiments, uphill experiments, and groove angle changing experiments the P-GMAW arc signals all satisfied the linear rules. In addition, the faster the welding speed, the higher the arc signal sensitivity; the smaller the groove angle, the greater the arc sensitivity. The arc signal variation rate needs to be modified according to the welding power, groove angle, and weaving or rotation speed.
Rate-Independent Processes with Linear Growth Energies and Time-Dependent Boundary Conditions
Kružík, Martin; Zimmer, J.
2012-01-01
Roč. 5, č. 3 (2012), s. 591-604 ISSN 1937-1632 R&D Projects: GA AV ČR IAA100750802 Grant - others: GA ČR(CZ) GAP201/10/0357 Institutional research plan: CEZ:AV0Z10750506 Keywords: concentrations * oscillations * time-dependent boundary conditions * rate-independent evolution Subject RIV: BA - General Mathematics http://library.utia.cas.cz/separaty/2011/MTR/kruzik-rate-independent processes with linear growth energies and time - dependent boundary conditions.pdf
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Institut de Mathématiques de Bordeaux, INRIA Bordeaux Sud Ouest, Team: CQFD, and IMB (France); Prieto-Rumeau, T., E-mail: tprieto@ccia.uned.es [UNED, Department of Statistics and Operations Research (Spain)
2016-08-15
We consider a discrete-time constrained discounted Markov decision process (MDP) with Borel state and action spaces, compact action sets, and lower semi-continuous cost functions. We introduce a set of hypotheses related to a positive weight function which allow us to consider cost functions that might not be bounded below by a constant, and which imply the solvability of the linear programming formulation of the constrained MDP. In particular, we establish the existence of a constrained optimal stationary policy. Our results are illustrated with an application to a fishery management problem.
Goretzki, Nora; Inbar, Nimrod; Siebert, Christian; Möller, Peter; Rosenthal, Eliyahu; Schneider, Michael; Magri, Fabien
2015-04-01
Salty and thermal springs exist along the lakeshore of the Sea of Galilee, which covers most of the Tiberias Basin (TB) in the northern Jordan-Dead Sea Transform, Israel/Jordan. As it is the only freshwater reservoir of the entire area, it is important to study the salinisation processes that pollute the lake. Simulations of thermohaline flow along a 35 km NW-SE profile show that meteoric and relic brines are flushed by the regional flow from the surrounding heights and by thermally induced groundwater flow within the faults (Magri et al., 2015). Several trial-and-error model runs were necessary to calibrate the hydraulic conductivity of both the faults and the major aquifers in order to fit temperature logs and spring salinity. It turned out that the hydraulic conductivity of the faults ranges between 30 and 140 m/yr whereas the hydraulic conductivity of the Upper Cenomanian aquifer is as high as 200 m/yr. However, large-scale transport processes also depend on other physical parameters, such as thermal conductivity, porosity and the fluid thermal expansion coefficient, which are hardly known. Here, inverse problems (IP) are solved along the NW-SE profile to better constrain the physical parameters (a) hydraulic conductivity, (b) thermal conductivity and (c) thermal expansion coefficient. The PEST code (Doherty, 2010) is applied via the graphical interface FePEST in FEFLOW (Diersch, 2014). The results show that both the thermal and hydraulic conductivities are consistent with the values determined by the trial-and-error calibration. Besides being an automatic approach that speeds up the calibration process, the IP approach covers a wide range of parameter values, providing additional solutions not found with the trial-and-error method. Our study shows that geothermal systems like the TB are more comprehensively understood when inverse models are applied to constrain coupled fluid flow processes over large spatial scales. References Diersch, H.-J.G., 2014. FEFLOW Finite
A non-linear decision making process for public involvement in environmental management activities
Harper, M.R.; Kastenberg, W.
1995-01-01
The international industrial and governmental institutions involved in radioactive waste management and environmental remediation are now entering a new era in which they must significantly expand public involvement. Thus the decision making processes formerly utilized to direct and guide these institutions must now be shifted to take into consideration the needs of many more stakeholders than ever before. To meet this challenge, they now have the job of developing and creating a new set of accurate, sufficient and continuous self-regulating and self-correcting information pathways between themselves and the many divergent stakeholder groups in order to establish sustainable, trusting and respectful relationships. In this paper the authors introduce a new set of non-linear, practical and effective strategies for interaction. These self-regulating strategies provide timely feedback to a system, establishing trust and creating a viable vehicle for staying open and responsive to the needs out of which change and balanced adaptation can continually emerge for all stakeholders. The authors present a decision making process for public involvement which is congruent with the non-linear ideas of holographic and fractal relationships -- the mutual influence between related parts of the whole and the self-symmetry of systems at every level of complexity
Linear and nonlinear post-processing of numerically forecasted surface temperature
M. Casaioli
2003-01-01
Full Text Available In this paper we test different approaches to the statistical post-processing of gridded numerical surface air temperatures (provided by the European Centre for Medium-Range Weather Forecasts) onto the temperature measured at surface weather stations located in the Italian region of Puglia. We consider simple post-processing techniques, like correction for altitude, linear regression from different input parameters and Kalman filtering, as well as a neural network training procedure, stabilised (i.e. driven into the absolute minimum of the error function over the learning set) by means of a Simulated Annealing method. A comparative analysis of the results shows that the performance with neural networks is the best. It is encouraging for systematic use in meteorological forecast-analysis service operations.
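The linear-regression variant of such post-processing reduces to ordinary least squares. A minimal sketch with synthetic, assumed forecast/station pairs (not the paper's data), where the station runs systematically colder than the grid value:

```python
def fit_line(x, y):
    """Ordinary least squares fit y = a + b*x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Synthetic case: station is 1.3 C colder with a slight scale error.
forecast = [5.0, 8.0, 12.0, 15.0, 20.0, 24.0]
station = [0.95 * t - 1.3 for t in forecast]
a, b = fit_line(forecast, station)
corrected = [a + b * t for t in forecast]
# a ~ -1.3 and b ~ 0.95, so the corrected series matches the station.
```

Kalman filtering generalizes this by letting the coefficients a and b drift in time, which is why it is listed alongside plain regression in the abstract.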
Goodman, Roe W
2016-01-01
This textbook for undergraduate mathematics, science, and engineering students introduces the theory and applications of discrete Fourier and wavelet transforms using elementary linear algebra, without assuming prior knowledge of signal processing or advanced analysis. It explains how to use the Fourier matrix to extract frequency information from a digital signal and how to use circulant matrices to emphasize selected frequency ranges. It introduces discrete wavelet transforms for digital signals through the lifting method and illustrates through examples and computer explorations how these transforms are used in signal and image processing. Then the general theory of discrete wavelet transforms is developed via the matrix algebra of two-channel filter banks. Finally, wavelet transforms for analog signals are constructed based on the filter bank results already presented, and the mathematical framework of multiresolution analysis is examined.
Impulsive Control for Continuous-Time Markov Decision Processes: A Linear Programming Approach
Dufour, F., E-mail: dufour@math.u-bordeaux1.fr [Bordeaux INP, IMB, UMR CNRS 5251 (France); Piunovskiy, A. B., E-mail: piunov@liv.ac.uk [University of Liverpool, Department of Mathematical Sciences (United Kingdom)
2016-08-15
In this paper, we investigate an optimization problem for continuous-time Markov decision processes with both impulsive and continuous controls. We consider the so-called constrained problem where the objective of the controller is to minimize a total expected discounted optimality criterion associated with a cost rate function while keeping other performance criteria of the same form, but associated with different cost rate functions, below some given bounds. Our model allows multiple impulses at the same time moment. The main objective of this work is to study the associated linear program defined on a space of measures including the occupation measures of the controlled process and to provide sufficient conditions to ensure the existence of an optimal control.
The Application of Linear and Nonlinear Water Tanks Case Study in Teaching of Process Control
Li, Xiangshun; Li, Zhiang
2018-02-01
In traditional process control teaching, the importance of passing on knowledge is emphasized while the development of students' creative and practical abilities is ignored. Traditional teaching methods are not very helpful for training good engineers. Case teaching is a very useful way to improve students' innovative and practical abilities. In traditional case teaching, knowledge points are taught separately, based on different examples or no examples, so it is very hard to build up the whole knowledge structure. Even when all the knowledge is learned, how to use it to solve engineering problems remains challenging for students. In this paper, linear and nonlinear tanks are taken as illustrative examples involving several knowledge points of process control. The application of each knowledge point is discussed in detail and simulated. We believe this case-based study will be helpful for students.
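As a flavor of the kind of worked case the authors advocate, the sketch below simulates PI level control of a single nonlinear tank, A·dh/dt = q_in - c·√h, with explicit Euler integration. All numbers (tank geometry, gains, setpoint) are assumptions for illustration, not taken from the paper:

```python
import math

A, c = 1.0, 1.0            # tank area, outflow coefficient
setpoint = 1.0             # desired level; steady inflow is c*sqrt(1) = 1
kp, ki = 2.0, 0.5          # PI gains (hand-tuned for this toy plant)
dt, steps = 0.01, 8000     # Euler step and horizon (80 s)
h, integral = 0.2, 0.0     # initial level, integrator state
for _ in range(steps):
    err = setpoint - h
    integral += err * dt
    q_in = max(0.0, kp * err + ki * integral)   # pump cannot run backwards
    h += dt * (q_in - c * math.sqrt(max(h, 0.0))) / A
# After 80 s the level has settled at the setpoint (h close to 1.0).
```

The nonlinearity sits in the √h outflow term: linearizing it around an operating level recovers the linear-tank case, which is exactly the contrast the two illustrative examples are meant to teach.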
Linear and quadratic models of point process systems: contributions of patterned input to output.
Lindsay, K A; Rosenberg, J R
2012-08-01
In the 1880s Volterra characterised a nonlinear system using a functional series connecting continuous input and continuous output. Norbert Wiener, in the 1940s, circumvented problems associated with the application of Volterra series to physical problems by deriving from it a new series of terms that are mutually uncorrelated with respect to Gaussian processes. Subsequently, Brillinger, in the 1970s, introduced a point-process analogue of Volterra's series connecting point-process inputs to the instantaneous rate of point-process output. We derive here a new series from this analogue in which the terms are mutually uncorrelated with respect to Poisson processes. This new series expresses how patterned input in a spike train, represented by third-order cross-cumulants, is converted into the instantaneous rate of an output point process. Given experimental records of suitable duration, the contribution of arbitrary patterned input to an output process can, in principle, be determined. Solutions for linear and quadratic point-process models with one and two inputs and a single output are investigated. Our theoretical results are applied to isolated muscle spindle data in which the spike trains from the primary and secondary endings of the same muscle spindle are recorded in response to stimulation of one and then two static fusimotor axons in the absence and presence of a random length change imposed on the parent muscle. For a fixed mean rate of input spikes, the analysis of the experimental data makes explicit which patterns of two input spikes contribute to an output spike. Copyright © 2012 Elsevier Ltd. All rights reserved.
Trimming and procrastination as inversion techniques
Backus, George E.
1996-12-01
By examining the processes of truncating and approximating the model space (trimming it), and by committing to neither the objectivist nor the subjectivist interpretation of probability (procrastinating), we construct a formal scheme for solving linear and non-linear geophysical inverse problems. The necessary prior information about the correct model xE can be either a collection of inequalities or a probability measure describing where xE was likely to be in the model space X before the data vector y0 was measured. The results of the inversion are (1) a vector z0 that estimates some numerical properties zE of xE; (2) an estimate of the error δz = z0 - zE. As y0 is finite dimensional, so is z0, and hence in principle inversion cannot describe all of xE. The error δz is studied under successively more specialized assumptions about the inverse problem, culminating in a complete analysis of the linear inverse problem with a prior quadratic bound on xE. Our formalism appears to encompass and provide error estimates for many of the inversion schemes current in geomagnetism, and would be equally applicable in geodesy and seismology if adequate prior information were available there. As an idealized example we study the magnetic field at the core-mantle boundary, using satellite measurements of field elements at sites assumed to be almost uniformly distributed on a single spherical surface. Magnetospheric currents are neglected and the crustal field is idealized as a random process with rotationally invariant statistics. We find that an appropriate data compression diagonalizes the variance matrix of the crustal signal and permits an analytic trimming of the idealized problem.
Mosegaard, Klaus
2012-01-01
For non-linear inverse problems, the mathematical structure of the mapping from model parameters to data is usually unknown or partly unknown. Absence of information about the mathematical structure of this function prevents us from presenting an analytical solution, so our solution depends on our......-heuristics are inefficient for large-scale, non-linear inverse problems, and that the 'no-free-lunch' theorem holds. We discuss typical objections to the relevance of this theorem. A consequence of the no-free-lunch theorem is that algorithms adapted to the mathematical structure of the problem perform more efficiently than...... pure meta-heuristics. We study problem-adapted inversion algorithms that exploit the knowledge of the smoothness of the misfit function of the problem. Optimal sampling strategies exist for such problems, but many of these problems remain hard. © 2012 Springer-Verlag....
NONLINEAR REFLECTION PROCESS OF LINEARLY POLARIZED, BROADBAND ALFVÉN WAVES IN THE FAST SOLAR WIND
Shoda, M.; Yokoyama, T., E-mail: shoda@eps.s.u-tokyo.ac.jp [Department of Earth and Planetary Science, The University of Tokyo, Bunkyo-ku, Tokyo 113-0033 (Japan)
2016-04-01
Using one-dimensional numerical simulations, we study the elementary process of Alfvén wave reflection in a uniform medium, including nonlinear effects. In the linear regime, Alfvén wave reflection is triggered only by the inhomogeneity of the medium, whereas in the nonlinear regime, it can occur via nonlinear wave–wave interactions. Such nonlinear reflection (backscattering) is typified by decay instability. In most studies of decay instabilities, the initial condition has been a circularly polarized Alfvén wave. In this study we consider a linearly polarized Alfvén wave, which drives density fluctuations by its magnetic pressure force. For generality, we also assume a broadband wave with a red-noise spectrum. In the data analysis, we decompose the fluctuations into characteristic variables using local eigenvectors, thus revealing the behaviors of the individual modes. Different from the circular-polarization case, we find that the wave steepening produces a new energy channel from the parent Alfvén wave to the backscattered one. Such nonlinear reflection explains the observed increasing energy ratio of the sunward to the anti-sunward Alfvénic fluctuations in the solar wind with distance against the dynamical alignment effect.
Inversion assuming weak scattering
Xenaki, Angeliki; Gerstoft, Peter; Mosegaard, Klaus
2013-01-01
due to the complex nature of the field. A method based on linear inversion is employed to infer information about the statistical properties of the scattering field from the obtained cross-spectral matrix. A synthetic example based on an active high-frequency sonar demonstrates that the proposed...
Imitation learning of Non-Linear Point-to-Point Robot Motions using Dirichlet Processes
Krüger, Volker; Tikhanoff, Vadim; Natale, Lorenzo
2012-01-01
In this paper we discuss the use of the infinite Gaussian mixture model and Dirichlet processes for learning robot movements from demonstrations. The starting point of this work is an earlier paper in which the authors learn a non-linear dynamic robot movement model from a small number of observations....... The model in that work is learned using a classical finite Gaussian mixture model (FGMM) where the Gaussian mixtures are appropriately constrained. The problem with this approach is that one needs to make a good guess for how many mixtures the FGMM should use. In this work, we generalize this approach...... our algorithm on the same data that was used in [5], where the authors use motion capture devices to record the demonstrations. As further validation we test our approach on novel data acquired on our iCub in a different demonstration scenario in which the robot is physically driven by the human...
A new formulation of the linear sampling method: spatial resolution and post-processing
Piana, M; Aramini, R; Brignone, M; Coyle, J
2008-01-01
A new formulation of the linear sampling method is described, which requires the regularized solution of a single functional equation set in a direct sum of L² spaces. This new approach presents the following notable advantages: it is computationally more effective than the traditional implementation, since time consuming samplings of the Tikhonov minimum problem and of the generalized discrepancy equation are avoided; it allows a quantitative estimate of the spatial resolution achievable by the method; it facilitates a post-processing procedure for the optimal selection of the scatterer profile by means of edge detection techniques. The formulation is described in a two-dimensional framework and in the case of obstacle scattering, although generalizations to three dimensions and penetrable inhomogeneities are straightforward.
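The regularized solution step at the core of such methods is Tikhonov regularization of an ill-conditioned linear system. A minimal sketch, with a hypothetical Vandermonde matrix standing in for the discretized operator:

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    """Regularized solution of A x = b: min ||A x - b||^2 + alpha ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Hypothetical ill-conditioned operator standing in for the discretized functional equation.
A = np.vander(np.linspace(0.0, 1.0, 20), 8, increasing=True)
x_true = np.ones(8)
b = A @ x_true
x = tikhonov_solve(A, b, alpha=1e-8)
```

The regularization parameter alpha trades data fidelity against stability; in the linear sampling method it is chosen via the generalized discrepancy principle rather than fixed a priori.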
Technical training seminar: Data Converters and Linear Products for Signal Processing and Control
Davide Vitè
2006-01-01
Monday 23 January 2006 TECHNICAL TRAINING SEMINAR from 14:00 to 17:30, Training Centre Auditorium (bldg. 503) Data Converters and Linear Products for Signal Processing and Control Marco Corsi, William Bright, Olrik Maier, Andrea Huder / TEXAS INSTRUMENTS (US, D, CH) Texas Instruments will present recent technology advances in design and manufacturing of A/D and D/A converters, and of operational amplifiers. 14:00 - 15:30 HIGH SPEED - Technology and the new process BiCom3: High speed ADCs, DACs, operational amplifiers 15:30 - 15:45 coffee 15:45 - 17:15 HIGH PRECISION - Technology and the new process HPA07: High precision ADCs, DACs, operational amplifiers questions, discussion Industrial partners: Robert Medioni, François Caloz Spoerle Electronic, CH-1440 Montagny (VD), Switzerland Phone: + 41 24 447 01 37, email: RMedioni@spoerle.com, http://www.spoerle.com Language: English. Free seminar (no registration). Organiser: Davide Vitè / HR-PMD-ATT / 75141 For more information, visit the Te...
Wang, Z.; Kato, T.; Wang, Y.
2015-12-01
The spatiotemporal fault slip history of the 2008 Iwate-Miyagi Nairiku earthquake, Japan, is obtained by the joint inversion of 1-Hz GPS waveforms and near-field strong motion records. 1-Hz GPS data from GEONET are processed with GAMIT/GLOBK and then low-pass filtered at 0.05 Hz. The ground-surface strong motion records from K-NET and KiK-net stations are band-pass filtered in the range 0.05–0.3 Hz and integrated once to obtain velocity. The joint inversion exploits a broader frequency band of near-field ground motions, which provides excellent constraints on both the detailed slip history and the slip distribution. A fully Bayesian inversion method is used to simultaneously and objectively determine the rupture model, the unknown relative weighting of the multiple data sets, and the unknown smoothing hyperparameters. The preferred rupture model is stable for different choices of velocity structure model and station distribution, with a maximum slip of ~8.0 m and a seismic moment of 2.9 × 10¹⁹ Nm (Mw 6.9). Compared with the single inversion of strong motion records, the cumulative slip distribution of the joint inversion is sparser, with two slip asperities. One asperity, common to both inversions, extends from the hypocenter southeastward to the surface rupture; the other, resolved only in the joint inversion thanks to the 1-Hz GPS waveforms, appears in the deep part of the fault where very few aftershocks occur. The differential moment rate function of the joint and single inversions clearly indicates that rich high-frequency waves are radiated in the first three seconds but few low-frequency waves.
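The strong-motion pre-processing described above (band-pass filtering followed by a single integration to velocity) can be sketched as follows; the sampling rate and the synthetic record are assumptions for illustration:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def accel_to_velocity(acc, fs, f_lo=0.05, f_hi=0.3):
    """Band-pass an acceleration record (0.05-0.3 Hz), then integrate once to velocity."""
    b, a = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs)
    acc_f = filtfilt(b, a, acc)           # zero-phase filtering
    # single trapezoidal integration to velocity
    return np.concatenate([[0.0], np.cumsum((acc_f[1:] + acc_f[:-1]) / 2.0) / fs])

fs = 100.0                                # assumed 100 Hz sampling rate
t = np.arange(0.0, 60.0, 1.0 / fs)
acc = np.sin(2 * np.pi * 0.1 * t)         # synthetic 0.1 Hz record inside the pass band
vel = accel_to_velocity(acc, fs)
```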
Mamdani-Fuzzy Modeling Approach for Quality Prediction of Non-Linear Laser Lathing Process
Sivaraos; Khalim, A. Z.; Salleh, M. S.; Sivakumar, D.; Kadirgama, K.
2018-03-01
Lathing is a process for fashioning stock materials into desired cylindrical shapes, usually performed on a traditional lathe machine. However, recent rapid advancements in engineering materials and precision demands pose a great challenge to the traditional method. The main drawback of the conventional lathe is its mechanical contact, which leads to undesirable tool wear, a heat-affected zone, and poor finishing and dimensional accuracy, especially taper quality, when machining stock with a high length-to-diameter ratio. Therefore, a novel approach has been devised to investigate transforming a 2D flatbed CO2 laser cutting machine into a 3D laser lathing capability as an alternative solution. Three significant design parameters were selected for this experiment, namely cutting speed, spinning speed, and depth of cut. A total of 24 experiments were performed in eight (8) sequential runs, each replicated three (3) times. The experimental results were then used to establish a Mamdani-Fuzzy predictive model, which yields an accuracy of more than 95%. Thus, the proposed Mamdani-Fuzzy modelling approach is found to be very suitable and practical for quality prediction of the non-linear laser lathing process for cylindrical stocks of 10 mm diameter.
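A Mamdani fuzzy predictor of this kind can be sketched with triangular membership functions, min-max inference and centroid defuzzification; the two rules and all numeric ranges below are illustrative, not the paper's actual rule base:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani_taper(cut_speed, spin_speed):
    """Two-rule Mamdani predictor of a normalised taper-quality index (illustrative)."""
    y = np.linspace(0.0, 1.0, 201)                     # output universe: 0 = no taper
    # Rule 1: slow cutting AND fast spinning -> low taper (good quality)
    w1 = min(tri(cut_speed, 0, 20, 60), tri(spin_speed, 40, 80, 120))
    # Rule 2: fast cutting AND slow spinning -> high taper (poor quality)
    w2 = min(tri(cut_speed, 40, 80, 120), tri(spin_speed, 0, 20, 60))
    agg = np.maximum(np.minimum(w1, tri(y, 0.0, 0.2, 0.5)),
                     np.minimum(w2, tri(y, 0.5, 0.8, 1.0)))
    return float((y * agg).sum() / agg.sum())          # centroid defuzzification

taper = mamdani_taper(10, 90)   # slow cut, fast spin -> low taper index
```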
Orhan Dengiz
2018-01-01
Full Text Available Land evaluation analysis is a prerequisite to achieving optimum utilization of the available land resources. Lack of knowledge of the best combination of factors that suit production has contributed to low yields. The aim of this study was to determine the most suitable areas for agricultural use. To determine the land suitability classes of the study area, a multi-criteria approach was used, combining the linear combination technique and the analytic hierarchy process, taking into consideration land and soil physico-chemical characteristics such as slope, texture, depth, drainage, stoniness, erosion, pH, EC, CaCO3 and organic matter. These data and the land mapping units were taken from a detailed digital soil map at 1:5,000 scale. In addition, GIS software was used to produce the land suitability map of the study area. This study was carried out at Mahmudiye, Karaamca, Yazılı, Çiçeközü, Orhaniye and Akbıyık villages in the Yenişehir district of Bursa province. The total study area is 7059 ha, of which 6890 ha are used for irrigated agriculture, dry farming and pasture, while 169 ha are used for non-agricultural purposes such as settlement, roads and water bodies. The average annual temperature and precipitation of the study area are 16.1 °C and 1039.5 mm, respectively. After determination of the land suitability distribution classes, it was found that 15.0% of the study area is highly (S1) or moderately (S2) suitable, while 85% is marginally suitable or unsuitable (S3 and N). The results of the linear combination technique were also compared with other approaches such as the Land Use Capability Classification and the Suitability Class for Agricultural Use methods.
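The weighted linear combination with analytic-hierarchy-process weights can be sketched as follows; the pairwise-comparison matrix, the three criteria and the class cut-offs are hypothetical illustrations, not the study's values:

```python
import numpy as np

# Hypothetical 3x3 pairwise-comparison matrix for three criteria (e.g. slope, depth, pH).
pairwise = np.array([[1.0, 3.0, 5.0],
                     [1.0 / 3.0, 1.0, 3.0],
                     [1.0 / 5.0, 1.0 / 3.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(pairwise)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                        # AHP weights from the principal eigenvector

def suitability(scores, weights=w):
    """Weighted linear combination of normalised (0-1) criterion scores, then classing."""
    s = float(np.dot(weights, scores))
    for cut, label in [(0.75, "S1"), (0.5, "S2"), (0.25, "S3")]:
        if s >= cut:
            return s, label
    return s, "N"

score, cls = suitability([0.9, 0.8, 0.7])
```

In a GIS workflow the same weighted sum is evaluated per mapping unit (or per raster cell) to produce the suitability map.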
Divya eVohora
2012-10-01
Full Text Available Histamine H3 receptor antagonists/inverse agonists possess potential to treat diverse disease states of the central nervous system (CNS). Cognitive dysfunction and motor impairments are the hallmark of multifarious neurodegenerative and/or psychiatric disorders. This review presents the various neurobiological/neurochemical evidence available so far on H3 receptor inverse agonists/antagonists in the pathophysiology of Alzheimer's disease (AD), attention-deficit hyperactivity disorder (ADHD), schizophrenia and drug abuse, each of which is accompanied by deficits in some aspects of cognitive and/or motor function. Whether H3 receptor inverse agonism modulates the neurochemical basis underlying the disease condition or affects only the cognitive/motor component of the disease process is discussed, with the aim of providing a rationale for their use in diverse disease states that are interlinked and are accompanied by some common motor, cognitive and attentional deficits.
Ingram, WT
2012-01-01
Inverse limits provide a powerful tool for constructing complicated spaces from simple ones. They also turn the study of a dynamical system consisting of a space and a self-map into a study of a (likely more complicated) space and a self-homeomorphism. In four chapters, along with an appendix containing background material, the authors develop the theory of inverse limits. The book begins with an introduction through inverse limits on [0,1] before moving to a general treatment of the subject. Special topics in continuum theory complete the book. Although it is not a book on dynamics, the influen
Javier Hernádez Benítez
2012-12-01
Full Text Available In the design phase of a three-phase bubble column reactor (CBT), the distribution of solids within the reactor is required. This distribution satisfies a second-order ordinary differential equation (ODE) with boundary conditions, developed by D. R. Cova [2] and later by D. N. Smith and J. A. Ruether [8]. Some elements of this equation are given by correlations that depend on certain parameters which are unknown but can be obtained from experimental data. The methodology used to determine these parameters is the piecewise-linear underestimation developed by O. L. Mangasarian, J. B. Rosen and M. E. Thompson.
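A boundary value problem of this general type can be solved numerically before any parameter fitting; the sketch below assumes a simple constant-coefficient sedimentation-dispersion form with hypothetical values of the dispersion and settling parameters:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Hypothetical constant coefficients: axial dispersion E and settling velocity v.
E, v = 0.05, 0.2

def ode(z, y):
    # y[0] = solids concentration c, y[1] = c'; the ODE is E c'' + v c' = 0
    return np.vstack([y[1], -(v / E) * y[1]])

def bc(ya, yb):
    # boundary conditions (assumed): c(0) = 1 at the bottom, zero solids flux at the top
    return np.array([ya[0] - 1.0, E * yb[1] + v * yb[0]])

z = np.linspace(0.0, 1.0, 11)
sol = solve_bvp(ode, bc, z, np.ones((2, z.size)))
# analytic check for this toy case: the solution is c(z) = exp(-(v/E) z)
```

Parameter estimation then amounts to adjusting E and v (or the correlations behind them) until the computed profile matches the experimental data.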
Pajewski, Lara; Giannopoulos, Antonios; Sesnic, Silvestar; Randazzo, Andrea; Lambot, Sébastien; Benedetto, Francesco; Economou, Nikos
2017-04-01
opportunity of testing and validating, against reliable data, their electromagnetic-modelling, inversion, imaging and processing algorithms. One of the most interesting datasets comes from the IFSTTAR Geophysical Test Site in Nantes (France): this is an open-air laboratory including a large and deep area filled with various materials arranged in horizontal compacted slices, separated by vertical interfaces and made water-tight at the surface; several objects such as pipes, polystyrene hollows, boulders and masonry are embedded in the field. Data were collected using nine different GPR systems at frequencies ranging from 200 MHz to 1 GHz. Moreover, some sections of this test site were modelled using gprMax and the commercial software CST Microwave Studio. Hence, both experimental and synthetic data are available. Further interesting datasets were collected on roads, bridges, concrete cells, columns and more. (v) WG3 contributed to the TU1208 Education Pack, an open educational package conceived to teach GPR in University courses. (vi) WG3 was very active in offering training activities. The following courses were successfully organised: Training School (TS) "Microwave Imaging and Diagnostics" (in cooperation with the European School of Antennas; 1st edition: Madonna di Campiglio, Italy, March 2014; 2nd edition: Taormina, Italy, October 2016); TS "Numerical modelling of Ground Penetrating Radar using gprMax" (Thessaloniki, Greece, November 2015); TS "Electromagnetic Modelling Techniques for Ground Penetrating Radar" (Split, Croatia, November 2016). Moreover, WG3 organized a workshop on "Electromagnetic modelling with the Finite-Difference Time-Domain technique" (Nantes, France, February 2014) and a workshop on "Electromagnetic modelling and inversion techniques for GPR" (Davos, Switzerland, April 2016) within the 2016 European Conference on Antennas and Propagation (EuCAP).
Acknowledgement: The Authors are deeply grateful to COST (European COoperation in Science and
Ronchin, Erika; Masterlark, Timothy; Dawson, John; Saunders, Steve; Martí Molist, Joan
2015-04-01
In this study, we present a method to fully integrate a family of finite element models (FEMs) into the regularized linear inversion of InSAR data collected at Rabaul caldera (PNG) between February 2007 and December 2010. During this period the caldera experienced a long-term steady subsidence that characterized surface movement both inside the caldera and outside, on its western side. The inversion is based on an array of FEM sources in the sense that the Green's function matrix is a library of forward numerical displacement solutions generated by the sources of an array common to all FEMs. Each entry of the library is the LOS surface displacement generated by injecting a unity mass of fluid, of known density and bulk modulus, into a different source cavity of the array for each FEM. By using FEMs, we are taking advantage of their capability of including topography and heterogeneous distribution of elastic material properties. All FEMs of the family share the same mesh, in which only one source is activated at a time by removing the corresponding elements and applying the unity fluid flux. The domain therefore only needs to be discretized once. This precludes remeshing for each activated source, thus reducing computational requirements, often a downside of FEM-based inversions. Without imposing an a priori source, the method allows us to identify, from a least-squares standpoint, a complex distribution of fluid flux (or change in pressure) with a 3D free geometry within the source array, as dictated by the data. The results of applying the proposed inversion to Rabaul InSAR data show a shallow magmatic system under the caldera made of two interconnected lobes located at the two opposite sides of the caldera. These lobes could be consistent with feeding reservoirs of the ongoing Tavuvur volcano eruption of andesitic products, on the eastern side, and of the past Vulcan volcano eruptions of more evolved materials, on the western side. The interconnection and
Pearson, Jeremy [Department of Chemical Engineering and Materials Science - University of California Irvine, 916 Engineering Tower, Irvine, CA, 92697 (United States); Miller, George [Department of Chemistry- University of California Irvine, 2046D PS II, Irvine, CA, 92697 (United States); Nilsson, Mikael [Department of Chemical Engineering and Materials Science - University of California Irvine, 916 Engineering Tower, Irvine, CA, 92697 (United States)
2013-07-01
Treatment of used nuclear fuel through solvent extraction separation processes is hindered by radiolytic damage from radioactive isotopes present in used fuel. The nature of the damage caused by the radiation may depend on the radiation type, whether it be low linear energy transfer (LET) such as gamma radiation or high LET such as alpha radiation. Used nuclear fuel contains beta/gamma emitting isotopes but also a significant amount of transuranics, which are generally alpha emitters. Studying the respective effects on matter of both of these types of radiation will allow for accurate prediction and modeling of process performance losses with respect to dose. Current studies show that alpha radiation has milder effects than gamma radiation. This is important to know because it means that solvent extraction solutions exposed to alpha radiation may last longer than expected and need less repair and replacement. These models are important for creating robust, predictable, and economical processes that have strong potential for mainstream adoption on the commercial level. The effects of gamma radiation on solvent extraction ligands have been more extensively studied than the effects of alpha radiation. This is due to the inherent difficulty in producing a sufficient and confluent dose of alpha particles within a sample without leaving the sample contaminated with long-lived radioactive isotopes. Helium ion beam and radioactive isotope sources have been studied in the literature. We have developed a method for studying the effects of high-LET radiation in situ via ¹⁰B activation and the high-LET particles that result from the subsequent ¹⁰B(n,α)⁷Li reaction. Our model for dose involves solving a partial differential equation representing absorption by ¹⁰B of an isotropic field of neutrons penetrating a sample. This method has been applied to organic solutions of TBP and CMPO, two ligands common in TRU solvent extraction treatment processes. Rates
Support Minimized Inversion of Acoustic and Elastic Wave Scattering
Safaeinili, Ali
Inversion of limited data is common in many areas of NDE such as X-ray Computed Tomography (CT), Ultrasonic and eddy current flaw characterization and imaging. In many applications, it is common to have a bias toward a solution with minimum (L^2)^2 norm without any physical justification. When it is a priori known that objects are compact as, say, with cracks and voids, by choosing "Minimum Support" functional instead of the minimum (L^2)^2 norm, an image can be obtained that is equally in agreement with the available data, while it is more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency in imaging process. The results indicate that by using minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of the pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without the knowledge of the zero-of-time. The main drawback to this type of inversion is its computer intensiveness. In order to make this type of constrained inversion available for common use, work
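The minimum-support functional can be written with a smooth stabiliser that approximately counts the nonzero cells of the model rather than penalising their magnitude; a minimal sketch on a toy identity operator:

```python
import numpy as np

BETA = 1e-4   # focusing parameter of the smooth support measure

def support_measure(x, beta=BETA):
    """Smooth count of 'active' cells: each term -> 1 where |x| >> sqrt(beta), else -> 0."""
    return np.sum(x ** 2 / (x ** 2 + beta))

def objective(x, A, b, alpha=1e-2):
    """Data misfit plus the minimum-support stabiliser."""
    return np.sum((A @ x - b) ** 2) + alpha * support_measure(x)

# Toy example: a compact flaw versus a smeared one on an identity 'forward operator'.
A = np.eye(8)
x_compact = np.array([0.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0])
x_smeared = np.full(8, 0.25)
b = A @ x_compact
s_compact = support_measure(x_compact)   # close to 1 active cell
s_smeared = support_measure(x_smeared)   # close to 8 active cells
```

Minimising such an objective favours compact reconstructions (voids, cracks) over smeared ones, which is the bias the report argues for on physical grounds.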
Support minimized inversion of acoustic and elastic wave scattering
Safaeinili, A.
1994-01-01
This report discusses the following topics on support minimized inversion of acoustic and elastic wave scattering: Minimum support inversion; forward modelling of elastodynamic wave scattering; minimum support linearized acoustic inversion; support minimized nonlinear acoustic inversion without absolute phase; and support minimized nonlinear elastic inversion
Inverse scale space decomposition
Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane
2018-01-01
We investigate the inverse scale space flow as a decomposition method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex and even and positively one-homogeneous regularisation functionals, can decompose data represented...... by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums...... of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range...
Realization of beam polarization at the linear collider and its application to EW processes
Franco-Sollova, F.
2006-07-15
The use of beam polarization at the future ILC e⁺e⁻ linear collider will benefit the physics program significantly. This thesis explores three aspects of beam polarization: the application of beam polarization to the study of electroweak processes, the precise measurement of the beam polarization, and finally, the production of polarized positrons at a test beam experiment. In the first part of the thesis the importance of beam polarization at the future ILC is exhibited: the benefits of employing transverse beam polarization (in both beams) for the measurement of triple gauge boson couplings (TGCs) in the W-pair production process are studied. The sensitivity to anomalous TGC values is compared for the cases of transverse and longitudinal beam polarization at a center of mass energy of 500 GeV. Due to the suppressed contribution of the t-channel ν exchange, the sensitivity is higher for longitudinal polarization. For some physics analyses the usual polarimetry techniques do not provide the required accuracy for the measurement of the beam polarization (around 0.25% with Compton polarimetry). The second part of the thesis deals with a complementary method to measure the beam polarization employing physics data acquired with two polarization modes. The process of single-W production is chosen due to its high cross section. The expected precision for 500 fb⁻¹ and W→μν decays only is ΔP(e⁻)/P(e⁻) = 0.26% and ΔP(e⁺)/P(e⁺) = 0.33%, which can be further improved by employing additional W-decay channels. The first results of an attempt to produce polarized positrons at the E-166 experiment are shown in the last part of the thesis. The E-166 experiment, located at the Final Focus Test Beam at SLAC's LINAC, employs a helical undulator to induce the emission of circularly polarized gamma rays by the beam electrons. These gamma rays are converted into longitudinally polarized electron
Primary processes in radiation chemistry. LET (Linear Energy Transfer) effect in water radiolysis
Trupin-Wasselin, V.
2000-01-01
The effect of ionizing radiation on aqueous solutions leads to water ionization and then to the formation of radical species and molecular products (e⁻aq, H•, OH•, H₂O₂, H₂). It has been shown that the stopping power, characterized by the LET (Linear Energy Transfer) value, differs with the nature of the ionizing radiation. Few data are currently available for high-LET radiations such as protons and high-energy heavy ions. These particles have been used to better understand the primary processes in radiation chemistry. The yield of a chemical dosimeter (the Fricke dosimeter) and that of hydrogen peroxide have been determined for different LET values. The effect of the dose rate on the Fricke dosimeter yield and on the H₂O₂ yield has also been studied. When the dose rate increases, an increase of the molecular product yields is observed. At very high dose rate, this yield decreases on account of the attack of the molecular products by radicals. The H₂O₂ yield in alkaline medium decreases when the pH reaches 12. This decrease can be explained by a slowing down of the H₂O₂ formation rate in alkaline medium. The superoxide radical has also been studied in this work. A new detection method, time-resolved chemiluminescence, has been developed for this radical. This technique is more sensitive than absorption spectroscopy. Experiments with heavy ions have made it possible to determine the O₂•⁻ yield directly in the irradiation cell. The experimental results have been compared with those obtained with a Monte Carlo simulation code. (O.M.)
Nucleation process and dynamic inversion of the Mw 6.9 Valparaíso 2017 earthquake in Central Chile
Ruiz, S.; Aden-Antoniow, F.; Baez, J. C., Sr.; Otarola, C., Sr.; Potin, B.; DelCampo, F., Sr.; Poli, P.; Flores, C.; Satriano, C.; Felipe, L., Sr.; Madariaga, R. I.
2017-12-01
The Valparaíso 2017 sequence occurred in the Central Chile mega-thrust, an active zone where the last mega-earthquake occurred in 1730. Intense seismicity occurred two days before the Mw 6.9 main shock. A slow trenchward movement observed at the coastal GPS antennas accompanied the foreshock seismicity. Following the Mw 6.9 earthquake, the seismicity migrated 30 km to the south-east. This sequence was well recorded by multi-parametric stations composed of GPS, broad-band and strong motion instruments. We built a seismic catalogue with 2329 events associated with the Valparaíso sequence, with a magnitude of completeness of Ml 2.8. We located all the seismicity using a new 3D velocity model obtained for the Valparaíso zone, computed moment tensors for events with magnitude larger than Ml 3.5, and studied the presence of repeating earthquakes. The main shock is studied by performing a dynamic inversion using the strong motion records and an elliptical-patch approach to characterize the rupture process. During the two-day nucleation stage, we observe a compact zone of repeater events. In the meantime, a westward GPS movement was recorded at the coastal GPS stations. The aseismic moment estimated from GPS is larger than the cumulative moment of the foreshocks, suggesting the presence of a slow slip event, which potentially triggered the Mw 6.9 main shock. The Mw 6.9 earthquake is associated with the rupture of an elliptical asperity with semi-axes of 10 km and 5 km, a sub-shear rupture, stress drop of 11.71 MPa, yield stress of 17.21 MPa, slip weakening of 0.65 m and kappa value of 1.70. This sequence occurred close to, and shares some characteristics with, the 1985 Valparaíso Mw 8.0 earthquake. The rupture of this asperity could further stress the highly locked Central Chile zone, where a mega-thrust earthquake like that of 1730 is expected.
Linearized Programming of Memristors for Artificial Neuro-Sensor Signal Processing.
Yang, Changju; Kim, Hyongsuk
2016-08-19
A linearized programming method for memristor-based neural weights is proposed. The memristor is known as an ideal element with which to implement a neural synapse, owing to its embedded functions of analog memory and analog multiplication. Its resistance variation under a voltage input is generally a nonlinear function of time. Linearizing the memristance variation with respect to time is very important for the ease of memristor programming. In this paper, a method utilizing an anti-serial architecture for linear programming is proposed. The anti-serial architecture is composed of two memristors with opposite polarities. It linearizes the variation of memristance through the complementary actions of the two memristors. For programming a memristor, an additional memristor with opposite polarity is employed. The linearization effect of weight programming in an anti-serial architecture is investigated, and a memristor bridge synapse, built with two sets of anti-serial memristor architecture, is taken as an application example of the proposed method. Simulations are performed with memristors of both the linear drift model and a nonlinear model.
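The complementary action can be checked in a quick simulation of the linear ion-drift model: because the two opposite-polarity devices keep the total resistance of the pair constant, the current (and hence the rate of memristance change) stays constant, so the programmed weight varies linearly in time. All parameter values below are illustrative:

```python
import numpy as np

# Linear ion-drift memristor model; all parameter values are illustrative.
RON, ROFF, D, MU = 100.0, 16e3, 10e-9, 1e-14

def memristance(w):
    """Memristance as a function of the doped-region width w in [0, D]."""
    return RON * w / D + ROFF * (1.0 - w / D)

def simulate_pair(volt, t_end, dt=1e-6):
    """Constant voltage across two anti-serial (opposite-polarity) memristors:
    the same current grows w1 while shrinking w2, keeping the total resistance
    (hence the current) constant, so the weight M1 - M2 changes linearly in time."""
    w1, w2 = 0.5 * D, 0.5 * D
    weight = []
    for _ in range(int(t_end / dt)):
        i = volt / (memristance(w1) + memristance(w2))
        dw = MU * RON / D * i * dt
        w1 = np.clip(w1 + dw, 0.0, D)
        w2 = np.clip(w2 - dw, 0.0, D)
        weight.append(memristance(w1) - memristance(w2))
    return np.array(weight)

dm = simulate_pair(volt=1.0, t_end=5e-3)
```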
Kinjo, Ken; Uchibe, Eiji; Doya, Kenji
2013-01-01
Linearly solvable Markov Decision Process (LMDP) is a class of optimal control problem in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009b). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space, or an eigenfunction problem in a continuous state space, using knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and size of a battery in the robot's camera view and two neck joint angles. The action is the velocities of the two wheels, while the neck joints were controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
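The exponential transformation at the heart of the LMDP can be sketched for a tiny discrete problem: with desirability z(s) = exp(-v(s)), the Bellman equation becomes the linear eigenvalue problem z = QPz (average-cost formulation), and the optimal policy reweights the passive dynamics by z. The state costs and passive dynamics below are made up for illustration:

```python
import numpy as np

# Tiny 4-state LMDP; state costs q and passive dynamics P are made up for illustration.
q = np.array([1.0, 0.5, 0.5, 0.0])
P = np.array([[0.50, 0.50, 0.00, 0.00],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.00, 0.00, 0.50, 0.50]])

# Exponential transformation: with desirability z = exp(-v), solve z = QPz, Q = diag(e^-q).
QP = np.diag(np.exp(-q)) @ P
eigvals, eigvecs = np.linalg.eig(QP)
z = np.abs(np.real(eigvecs[:, np.argmax(np.real(eigvals))]))   # Perron eigenvector
z = z / z.max()
v = -np.log(z)                     # value function, up to an additive constant

# Optimal controlled dynamics: u*(s'|s) proportional to P(s'|s) z(s').
u = P * z[None, :]
u = u / u.sum(axis=1, keepdims=True)
```

The controlled transition matrix u biases the passive dynamics toward high-desirability (low-value) states, here the zero-cost state.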
Rosenwald, J.-C.
2008-01-01
The lecture addressed the following topics: Optimizing radiotherapy dose distribution; IMRT contributes to optimization of energy deposition; Inverse vs direct planning; Main steps of IMRT; Background of inverse planning; General principle of inverse planning; The 3 main components of IMRT inverse planning; The simplest cost function (deviation from prescribed dose); The driving variable : the beamlet intensity; Minimizing a 'cost function' (or 'objective function') - the walker (or skier) analogy; Application to IMRT optimization (the gradient method); The gradient method - discussion; The simulated annealing method; The optimization criteria - discussion; Hard and soft constraints; Dose volume constraints; Typical user interface for definition of optimization criteria; Biological constraints (Equivalent Uniform Dose); The result of the optimization process; Semi-automatic solutions for IMRT; Generalisation of the optimization problem; Driving and driven variables used in RT optimization; Towards multi-criteria optimization; and Conclusions for the optimization phase. (P.A.)
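The "simplest cost function" and "gradient method" items above can be sketched together: quadratic deviation from the prescribed dose, minimized over nonnegative beamlet intensities (the driving variables) by projected gradient descent. The dose-deposition matrix is a random stand-in:

```python
import numpy as np

def optimize_beamlets(A, d_presc, iters=500):
    """Projected gradient descent on the cost sum (A w - d_presc)^2,
    keeping beamlet intensities w nonnegative (the driving variables)."""
    lr = 0.9 / (2.0 * np.linalg.norm(A, 2) ** 2)   # stable step from the spectral norm
    w = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = 2.0 * A.T @ (A @ w - d_presc)
        w = np.maximum(w - lr * grad, 0.0)         # intensities cannot be negative
    return w

# Hypothetical 6-voxel / 4-beamlet dose-deposition matrix.
rng = np.random.default_rng(1)
A = rng.uniform(0.0, 1.0, size=(6, 4))
d_presc = A @ np.array([1.0, 0.5, 2.0, 0.0])       # a prescription that is attainable
w = optimize_beamlets(A, d_presc)
```

Real inverse planning adds the dose-volume and biological constraints listed above as extra penalty terms or hard bounds in the same optimization loop.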
Adaptive regularization of noisy linear inverse problems
Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue
2006-01-01
In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: T......: The expectation of the regularization function, i.e., takes the same value in the posterior and prior distribution. We present three examples: two simulations, and application in fMRI neuroimaging....
Ken eKinjo
2013-04-01
Full Text Available Linearly solvable Markov Decision Processes (LMDPs) are a class of optimal control problems in which the Bellman equation can be converted into a linear equation by an exponential transformation of the state value function (Todorov, 2009). In an LMDP, the optimal value function and the corresponding control policy are obtained by solving an eigenvalue problem in a discrete state space, or an eigenfunction problem in a continuous state space, using knowledge of the system dynamics and the action, state, and terminal cost functions. In this study, we evaluate the effectiveness of the LMDP framework in real robot control, in which the dynamics of the body and the environment have to be learned from experience. We first perform a simulation study of a pole swing-up task to evaluate the effect of the accuracy of the learned dynamics model on the derived action policy. The result shows that a crude linear approximation of the non-linear dynamics can still allow solution of the task, albeit with a higher total cost. We then perform real robot experiments of a battery-catching task using our Spring Dog mobile robot platform. The state is given by the position and size of a battery in the camera view and two neck joint angles. The action is the velocities of the two wheels, while the neck joints are controlled by a visual servo controller. We test linear and bilinear dynamic models in tasks with quadratic and Gaussian state cost functions. In the quadratic cost task, the LMDP controller derived from a learned linear dynamics model performed equivalently to the optimal linear quadratic regulator (LQR). In the non-quadratic task, the LMDP controller with a linear dynamics model showed the best performance. The results demonstrate the usefulness of the LMDP framework in real robot control even when simple linear models are used for dynamics learning.
Inverse and Ill-posed Problems Theory and Applications
Kabanikhin, S I
2011-01-01
The text demonstrates methods for proving the existence (if at all) and for finding solutions of inverse and ill-posed problems in linear algebra, integral and operator equations, integral geometry, spectral inverse problems, and inverse scattering problems. Comprehensive background material is given for linear ill-posed problems and for coefficient inverse problems for hyperbolic, parabolic, and elliptic equations. Many examples of inverse problems from physics, geophysics, biology, medicine, and other areas of application of mathematics are included.
Chattopadhyay, Anirban; Khondekar, Mofazzal Hossain; Bhattacharjee, Anup Kumar
2017-09-01
In this paper, an initiative has been taken to search for periodicities in the linear speed of Coronal Mass Ejections (CMEs) in solar cycle 23. Double exponential smoothing and the Discrete Wavelet Transform are used for detrending and filtering of the CME linear speed time series. To choose the appropriate statistical methodology, the Smoothed Pseudo Wigner-Ville Distribution (SPWVD) has been used beforehand to confirm the non-stationarity of the time series. Time-frequency representation tools, the Hilbert-Huang Transform and Empirical Mode Decomposition, have been implemented to unearth the underlying periodicities in the non-stationary time series of the CME linear speed. Of all the periodicities exceeding the 95% confidence level, the relevant periodicities have been segregated using an integral peak detection algorithm. The periodicities observed are of low scale, ranging from 2-159 days, with relevant periods of 4, 10, 11, 12, 13.7, 14.5 and 21.6 days. These short-range periodicities indicate that the probable origin of the CMEs lies in the active longitudes and the magnetic flux network of the Sun. The results also hint at probable mutual influence and causality with other solar activities (such as solar radio emission, Ap index, and solar wind speed), owing to the similitude between their periods and the CME linear speed periods. The periodicities of 4 days and 10 days indicate the possible existence of Rossby-type (planetary) waves in the Sun.
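The double exponential (Holt's) smoothing used for detrending can be sketched as follows; the smoothing constants and the toy series are illustrative choices, not the values used in the paper.

```python
def double_exponential_smoothing(x, alpha=0.3, beta=0.1):
    """Holt's double exponential smoothing: track level and trend, return
    the smoothed (trend-following) estimate of the series."""
    level, trend = x[0], x[1] - x[0]   # simple initialization (assumed)
    smoothed = [level]
    for value in x[1:]:
        prev_level = level
        level = alpha * value + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        smoothed.append(level)
    return smoothed

# Detrended residual that would feed the subsequent EMD/spectral analysis:
series = [float(i) for i in range(50)]            # a pure linear trend
resid = [v - s for v, s in zip(series, double_exponential_smoothing(series))]
```

On a pure linear trend the smoother tracks the series exactly, so the residual vanishes; on real CME speed data the residual would retain the short-period oscillations of interest.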
Design and Implementation of a linear-phase equalizer in digital audio signal processing
Slump, Cornelis H.; van Asma, C.G.M.; Barels, J.K.P.; Barels, J.K.P.; Brunink, W.J.A; Drenth, F.B.; Pol, J.V.; Schouten, D.S.; Samsom, M.M.; Samsom, M.M.; Herrmann, O.E.
1992-01-01
This contribution presents the four phases of a project aiming at the realization in VLSI of a digital audio equalizer with a linear phase characteristic. The first step includes the identification of the system requirements, based on experience and (psycho-acoustical) literature. Secondly, the
Moraes, N.A.; Paulo, J.B.A.; Medeiros, G.S. [Universidade Federal do Rio Grande do Norte (UFRN), Natal, RN (Brazil). Dept. de Engenharia Quimica], e-mail: norberto@eq.ufrn.br
2011-04-15
The prototype of a device on a semi-industrial scale to treat wastewaters from the oil industry has been widely studied as a viable alternative to conventional equipment. The device, called the Mixer-Settler based on Phase Inversion (MSPI), uses the phase inversion method as its operating principle. Using experimental planning (a 2{sup 4} factorial design with four repetitions at the central point), the influence of the main variables on the oil/water separation process was determined for waters containing between 30 and 100 mg of oil per liter of water. The following variables were evaluated: specific throughput, organic/aqueous phase ratio, agitation in the mixing chamber, and coconut oil concentration. The response variable was the oil/water separation efficiency. The results show that the separation efficiency of the device is a function of the effective throughput and the organic/aqueous phase ratio. (author)
Subhash, P V; Madhavan, S; Chaturvedi, S
2008-01-01
Two-dimensional (2D) magneto-hydrodynamic (MHD) liner-on-plasma computations have been performed to study the growth of instabilities in a magnetized target fusion system involving the cylindrical compression of an inverse Z-pinch target plasma by a metallic liner. The growth of modes in the plasma can be divided into two phases. During the first phase, the plasma continues to be Kadomtsev stable. The dominant mode in the liner instability is imposed upon the plasma in the form of a growing perturbation. This mode further transfers part of its energy to its harmonics. During the second phase, however, non-uniform implosion of the liner leads to axial variations in plasma quantities near the liner-plasma interface, such that certain regions of the plasma locally violate the Kadomtsev criteria. Further growth of the plasma modes is then due to plasma instability. The above numerical study has been complemented with a linear stability analysis for the plasma, the boundary conditions for this analysis being obtained from the liner-on-plasma simulation. The stability of axisymmetric modes in the first phase is found to satisfy the Kadomtsev condition. A stability analysis for m ≥ 1 modes, using equilibrium profiles from the 2D MHD study, shows that their growth rates can exceed those for m = 0 by as much as an order of magnitude.
Study of resolution and linearity in LaBr3: Ce scintillator through digital-pulse processing
Abhinav Kumar; Mishra, Gaurav; Ramachandran, K.
2014-01-01
The advent of digital pulse processing has led to a paradigm shift in pulse processing techniques, replacing the analog electronics processing chain with equivalent algorithms acting on pulse profiles digitized at high sampling rates. In this paper, we have carried out offline digital pulse processing of Cerium-doped Lanthanum bromide scintillator (LaBr3:Ce) detector pulses, acquired using a CAEN V1742 VME digitizer module. Algorithms have been written to approximate the functioning of a peak-sensing analog-to-digital converter (ADC) and a charge-to-digital converter (QDC). The energy dependence of the resolution and the energy linearity of the LaBr3:Ce scintillator detector have been studied using the aforesaid algorithms.
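A minimal sketch of how peak-sensing ADC and QDC behaviour can be emulated on digitized pulse profiles; the baseline window, integration gate, and synthetic pulse shape below are assumptions for illustration, not the authors' algorithms.

```python
import numpy as np

def baseline(pulse, n_pre=20):
    """Estimate the baseline from the pre-trigger samples."""
    return pulse[:n_pre].mean()

def peak_adc(pulse, n_pre=20):
    """Emulate a peak-sensing ADC: maximum excursion above baseline."""
    return np.max(pulse - baseline(pulse, n_pre))

def qdc(pulse, gate, n_pre=20):
    """Emulate a QDC: integrate the baseline-subtracted pulse over a gate."""
    return np.sum(pulse[gate] - baseline(pulse, n_pre))

# Synthetic digitized pulse: flat baseline plus an exponential-decay signal.
t = np.arange(200)
pulse = 50.0 + np.where(t >= 40, 100.0 * np.exp(-(t - 40) / 15.0), 0.0)
amp = peak_adc(pulse)                 # energy estimate from pulse height
charge = qdc(pulse, slice(30, 150))   # energy estimate from integrated charge
```

Histogramming `amp` or `charge` over many pulses would then give the spectra from which resolution and linearity are extracted.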
Morin, R. H.
2004-05-01
It is intuitive to think of hydraulic conductivity K as varying directly and monotonically with porosity P in porous media. However, laboratory studies and field observations have documented a possible inverse relationship between these two parameters in unconsolidated deposits under certain grain-size distributions and packing arrangements. This was confirmed at two sites in sand-and-gravel aquifers on Cape Cod, Massachusetts, where sets of geophysical well logs were used to examine the interdependence of several aquifer properties. Along with K and P, the resistivity R and the natural-gamma activity G of the surrounding sediments were measured as a function of depth. Qualitative examination of field results from the first site was useful in locating a contaminant plume and inferred an inverse relation between K and P; this was substantiated by a rigorous multivariate analysis of log data collected from the second site where K and P were determined to respond in a bipolar manner among the four independent variables. Along with this result come some implications regarding our conceptual understanding of contaminant transport processes in the shallow subsurface. According to Darcy's law, the interstitial fluid velocity V is proportional to the ratio K/P and, consequently, a general inverse K-P relationship implies that values of V can extend over a much wider range than conventionally assumed. This situation introduces a pronounced flow stratification within these granular deposits that can result in large values of longitudinal dispersivity; faster velocities occur in already fast zones and slower velocities in already slow zones. An inverse K-P relationship presents a new perspective on the physical processes associated with groundwater flow and transport. Although the results of this study apply strictly to the Cape Cod aquifers, they may merit a re-evaluation of modeling approaches undertaken at other locations having similar geologic environments.
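The widening of the velocity range under an inverse K-P relationship follows directly from the seepage-velocity form of Darcy's law, V = (K/P)·i. A small numeric sketch with assumed, purely hypothetical K, P, and gradient values:

```python
# Interstitial (seepage) velocity under Darcy's law: V = (K / P) * i.
gradient = 0.005  # hydraulic gradient i (assumed value)

def seepage_velocity(K, P, i=gradient):
    """K: hydraulic conductivity (m/s), P: porosity (dimensionless)."""
    return K / P * i

# Direct K-P relation: the K and P contrasts partially offset each other.
v_fast_direct = seepage_velocity(K=1e-3, P=0.35)
v_slow_direct = seepage_velocity(K=1e-4, P=0.25)

# Inverse K-P relation: high K pairs with low P, stretching the velocity range.
v_fast_inverse = seepage_velocity(K=1e-3, P=0.25)
v_slow_inverse = seepage_velocity(K=1e-4, P=0.35)

ratio_direct = v_fast_direct / v_slow_direct
ratio_inverse = v_fast_inverse / v_slow_inverse
```

With these illustrative numbers the fast-to-slow velocity ratio grows from about 7 to 14 when the K-P relation flips from direct to inverse, the flow stratification effect the abstract describes.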
Yue, H.; Simons, M.; Jiang, J.; Fielding, E. J.; Owen, S. E.; Moore, A. W.; Riel, B. V.; Polet, J.; Duputel, Z.; Samsonov, S. V.; Avouac, J. P.
2015-12-01
The April 2015 Gorkha, Nepal (Mw 7.8) earthquake ruptured the front of the Himalayan thrust belt, causing more than 9,000 fatalities. 17 days after the main event, a large aftershock (Mw 7.2) ruptured down-dip and east of the main rupture area. To investigate the kinematic rupture process of this earthquake sequence, we explored linear and non-linear inversion techniques using a variety of datasets including teleseismic, high-rate and conventional GPS, InSAR interferograms and pixel offsets. InSAR interferograms from the ALOS-2, RADARSAT-2 and Sentinel-1a satellites are used in the joint inversion. The main event is characterized by unilateral rupture extending approximately 70 km along strike to the southeast and 40 km in the dip direction. The rupture velocity is well resolved to lie between 2.8 and 3.0 km/s, which is consistent with back-projection results. An emergent initial phase is observed in teleseismic body wave records, consistent with a narrow area of rupture initiation near the hypocenter. The rupture mode of the main event is pulse-like. The aftershock ruptured down-dip to the northeast of the main event rupture area. The aftershock rupture area is compact and contained within 40 km of its hypocenter. In contrast to the main event, teleseismic body wave records of the aftershock suggest an abrupt initial phase, consistent with a crack-like rupture mode. The locations of most of the aftershocks (small and large) surround the rupture area of the main shock with little, if any, spatial overlap.
Desesquelles, P.
1997-01-01
Computer Monte Carlo simulations occupy an increasingly important place between theory and experiment. This paper introduces a global protocol for the comparison of model simulations with experimental results. The correlated distributions of the model parameters are determined using an original recursive inversion procedure. Multivariate analysis techniques are used in order to optimally synthesize the experimental information with a minimum number of variables. This protocol is relevant in all fields of physics dealing with event generators and multi-parametric experiments. (authors)
Algebraic properties of generalized inverses
Cvetković‐Ilić, Dragana S
2017-01-01
This book addresses selected topics in the theory of generalized inverses. Following a discussion of the “reverse order law” problem and certain problems involving completions of operator matrices, it subsequently presents a specific approach to solving the problem of the reverse order law for {1} -generalized inverses. Particular emphasis is placed on the existence of Drazin invertible completions of an upper triangular operator matrix; on the invertibility and different types of generalized invertibility of a linear combination of operators on Hilbert spaces and Banach algebra elements; on the problem of finding representations of the Drazin inverse of a 2x2 block matrix; and on selected additive results and algebraic properties for the Drazin inverse. In addition to the clarity of its content, the book discusses the relevant open problems for each topic discussed. Comments on the latest references on generalized inverses are also included. Accordingly, the book will be useful for graduate students, Ph...
Pajewski, Lara; Giannopoulos, Antonis; van der Kruk, Jan
2015-04-01
This work aims at presenting the ongoing research activities carried out in Working Group 3 (WG3) 'EM methods for near-field scattering problems by buried structures; data processing techniques' of the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar' (www.GPRadar.eu). The principal goal of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. WG3 is structured in four Projects. Project 3.1 deals with 'Electromagnetic modelling for GPR applications.' Project 3.2 is concerned with 'Inversion and imaging techniques for GPR applications.' The topic of Project 3.3 is the 'Development of intrinsic models for describing near-field antenna effects, including antenna-medium coupling, for improved radar data processing using full-wave inversion.' Project 3.4 focuses on 'Advanced GPR data-processing algorithms.' Electromagnetic modeling tools that are being developed and improved include the Finite-Difference Time-Domain (FDTD) technique and the spectral domain Cylindrical-Wave Approach (CWA). One of the well-known freeware and versatile FDTD simulators is GprMax that enables an improved realistic representation of the soil/material hosting the sought structures and of the GPR antennas. Here, input/output tools are being developed to ease the definition of scenarios and the visualisation of numerical results. The CWA expresses the field scattered by subsurface two-dimensional targets with arbitrary cross-section as a sum of cylindrical waves. In this way, the interaction is taken into account of multiple scattered fields within the medium hosting the sought targets. Recently, the method has been extended to deal with through-the-wall scenarios. One of the
Bayesian seismic AVO inversion
Buland, Arild
2002-07-01
A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
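For a linear forward model with Gaussian prior and Gaussian noise, the posterior has the explicit expectation and covariance the abstract refers to. The sketch below is a generic linear-Gaussian posterior, not the author's AVO parameterization (which builds G from the convolutional model and the linearized Zoeppritz approximation); the toy operator and values are hypothetical.

```python
import numpy as np

def gaussian_linear_posterior(G, d, m0, Cm, Ce):
    """Posterior N(m_post, C_post) for d = G m + e, e ~ N(0, Ce), m ~ N(m0, Cm)."""
    Cm_inv = np.linalg.inv(Cm)
    Ce_inv = np.linalg.inv(Ce)
    C_post = np.linalg.inv(G.T @ Ce_inv @ G + Cm_inv)   # posterior covariance
    m_post = C_post @ (G.T @ Ce_inv @ d + Cm_inv @ m0)  # posterior expectation
    return m_post, C_post

# Two-parameter toy problem (stand-ins for elastic parameter contrasts).
G = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0], [2.0, -1.0]])
m_true = np.array([1.0, -2.0])
d = G @ m_true                      # noise-free synthetic data
m0 = np.zeros(2)
Cm = np.eye(2)                      # prior covariance
Ce = 1e-6 * np.eye(4)               # near-zero noise covariance
m_post, C_post = gaussian_linear_posterior(G, d, m0, Cm, Ce)
```

As the noise approaches zero the posterior mean recovers the true parameters and the posterior covariance collapses, mirroring the near-perfect retrieval reported for the synthetic tests.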
Jens G. Balchen
1984-10-01
Full Text Available The problem of systematic derivation of a quasi-dynamic optimal control strategy for a non-linear dynamic process based upon a non-quadratic objective function is investigated. The well-known LQG control algorithm does not lead to an optimal solution when the process disturbances have non-zero mean. The relationships between the proposed control algorithm and LQG control are presented. The problem of how to constrain process variables by means of 'penalty' terms in the objective function is dealt with separately.
Condorelli, Rosalia
2015-01-01
Using Census of India data from 1901 to 2011 and national and international reports on women's condition in India, beginning with sex ratio trends according to regional distribution up to female infanticides, sex-selective abortions and dowry deaths, this study examines the sociological aspects of the gender imbalance in modern contemporary India. The persistence of gender inequality in India proves that new values and structures do not necessarily lead to the disappearance of older forms; they can co-exist with mutual adaptations and reinforcements. Data analysis suggests that these unexpected combinations are not comprehensible in light of a linear concept of social change, which is founded, in turn, on a concept of social systems as linear interaction systems that relate to environmental perturbations according to proportional cause-and-effect relationships. From this perspective, behavioral attitudes and interaction relationships should be less and less regulated by traditional values and practices as exposure to modernizing influences increases, and progressive decreases should be found in rates of social indicators of gender inequality such as dowry deaths (the inverse should be found in sex ratio trends). However, the data do not confirm these trends. This finding leads us to emphasize a new theoretical and methodological approach to the study of social systems, namely the conception of social systems as complex adaptive systems and the consequent emergentist, non-linear conception of social change processes. Within the framework of the emergentist theory of social change it is possible to understand the lasting strength of the patriarchal tradition and its problematic consequences in modern contemporary India.
Full Waveform Inversion Using Oriented Time Migration Method
Zhang, Zhendong
2016-04-12
Full waveform inversion (FWI) for reflection events is limited by its linearized update requirements, given by a process equivalent to migration. Unless the background velocity model is reasonably accurate, the resulting gradient can have an inaccurate update direction, leading the inversion to converge to what we refer to as local minima of the objective function. In this thesis, I first look into the subject of full model wavenumber to analyze the root of local minima and suggest possible ways to avoid this problem. I then analyze the possibility of recovering the corresponding wavenumber components through existing inversion and migration algorithms. Migration can be taken as a generalized inversion method which mainly retrieves the high-wavenumber part of the model. The conventional impedance inversion method gives a mapping relationship between the migration image (high wavenumber) and model parameters (full wavenumber) and thus provides a possible cascaded inversion strategy to retrieve the full wavenumber components from seismic data. In the proposed approach, considering a mild lateral variation in the model, I find an analytical Fréchet derivative corresponding to the new objective function, and the gradient is given by the oriented time-domain imaging method, which is independent of the background velocity. Specifically, I apply oriented time-domain imaging (which depends on the reflection slope instead of a background velocity) to the data residual to obtain the geometrical features of the velocity perturbation. Assuming that density is constant, the conventional 1D impedance inversion method is also applicable for 2D or 3D velocity inversion within the process of FWI. This method is not only capable of inverting for velocity, but also of retrieving anisotropic parameters, relying on linearized representations of the reflection response. To eliminate the cross-talk artifacts between different parameters, I
Ruiz Egea, E.; Sanchez Carrascal, M.; Torres Pozas, S.; Monja Ray, P. de la; Perez Molina, J. L.; Madan Rodriguez, C.; Luque Japon, L.; Morera Molina, A.; Hernandez Perez, A.; Barquero Bravo, Y.; Morengo Pedagna, I.; Oliva Gordillo, M. C.; Martin Olivar, R.
2011-01-01
In order to determine the origin of the high dose in the bunker of a clinical-use linear accelerator, an attempt was made to measure its spatial dependence from the isocenter to the entrance gateway. These dose measurements were performed with an ionization chamber at different locations inside the bunker after an irradiation of 400 Monitor Units, verifying the dose rate per minute for an hour and accumulating the dose received during that period of time.
Point-source inversion techniques
Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.
1982-11-01
A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
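With the Green's functions fixed, moment-tensor inversion is linear in the six independent tensor components, d = G m, and a generalized (least-squares) inverse recovers them. The sketch below uses a synthetic Green's-function matrix and hypothetical moment-tensor values; it illustrates the linear step only, not the time-function or depth estimation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic Green's-function matrix: waveform samples x 6 moment-tensor
# components (a real G comes from Earth-model Green's functions).
G = rng.normal(size=(120, 6))
m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.0, 0.2])   # hypothetical tensor
d = G @ m_true + 0.01 * rng.normal(size=120)          # noisy synthetic data

# Generalized (least-squares) inversion for the moment-tensor components.
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
residual = np.linalg.norm(d - G @ m_est)
```

For sparse data sets the same machinery applies with fewer rows in G; the conditioning of G then controls how well the mechanism is constrained, which is what the grid-testing procedure probes.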
Xiang, Zhaowei; Yin, Ming; Dong, Guanhua; Mei, Xiaoqin; Yin, Guofu
2018-06-01
A finite element model considering volume shrinkage with the powder-to-dense process of the powder layer in selective laser melting (SLM) is established. A comparison between models that do and do not consider volume shrinkage or the powder-to-dense process is carried out. Further, a parametric analysis of laser power and scan speed is conducted and the reliability of linear energy density as a design parameter is investigated. The results show that the established model is an effective method with better accuracy with respect to the temperature distribution and the length and depth of the molten pool. The maximum temperature is more sensitive to laser power than to scan speed. The maximum heating and cooling rates increase with increasing scan speed at constant laser power, and with increasing laser power at constant scan speed. The simulation and experimental results reveal that linear energy density is not always reliable as a design parameter in SLM.
Kimura, W.D.
1993-01-01
The final report describes work performed to investigate inverse Cherenkov acceleration (ICA) as a promising method for laser particle acceleration. In particular, an improved configuration of ICA is being tested in an experiment presently underway at the Accelerator Test Facility (ATF). In the experiment, the high-peak-power (∼10 GW) linearly polarized ATF CO2 laser beam is converted to a radially polarized beam. This beam is focused with an axicon at the Cherenkov angle onto the ATF 50-MeV e-beam inside a hydrogen gas cell, where the gas acts as the phase-matching medium of the interaction. An energy gain of ∼12 MeV is predicted assuming a delivered laser peak power of 5 GW. The experiment is divided into two phases. The Phase I experiments, which were completed in the spring of 1992, were conducted before the ATF e-beam was available and involved several successful tests of the optical systems. The Phase II experiments are with the e-beam and laser beam, and are still in progress. The ATF demonstrated delivery of the e-beam to the experiment in December 1992. A preliminary 'debugging' run with the e-beam and laser beam occurred in May 1993. This revealed the need for some experimental modifications, which have been implemented. The second run is tentatively scheduled for October or November 1993. In parallel with the experimental efforts, there has been ongoing theoretical work to support the experiment and investigate improvements and/or offshoots. One exciting offshoot has been theoretical work showing that free-space laser acceleration of electrons is possible using a radially polarized, axicon-focused laser beam, but without any phase-matching gas. The Monte Carlo code used to model the ICA process has been upgraded and expanded to handle different types of laser beam input profiles.
Zhaowei Xiang
2018-06-01
Full Text Available A finite element model considering volume shrinkage with the powder-to-dense process of the powder layer in selective laser melting (SLM) is established. A comparison between models that do and do not consider volume shrinkage or the powder-to-dense process is carried out. Further, a parametric analysis of laser power and scan speed is conducted and the reliability of linear energy density as a design parameter is investigated. The results show that the established model is an effective method with better accuracy with respect to the temperature distribution and the length and depth of the molten pool. The maximum temperature is more sensitive to laser power than to scan speed. The maximum heating and cooling rates increase with increasing scan speed at constant laser power, and with increasing laser power at constant scan speed. The simulation and experimental results reveal that linear energy density is not always reliable as a design parameter in SLM. Keywords: Selective laser melting, Volume shrinkage, Powder-to-dense process, Numerical modeling, Thermal analysis, Linear energy density
E. D. Resende
2007-09-01
Full Text Available The freezing process is considered as a propagation problem and mathematically classified as an "initial value problem." The mathematical formulation involves a complex situation of heat transfer with simultaneous changes of phase and abrupt variation in thermal properties. The objective of the present work is to solve the non-linear heat transfer equation for food freezing processes using orthogonal collocation on finite elements. This technique has not yet been applied to freezing processes and represents an alternative numerical approach in this area. The results obtained confirmed the good capability of the numerical method, which allows the simulation of the freezing process in approximately one minute of computer time, qualifying its application in a mathematical optimising procedure. The influence of the latent heat released during the crystallisation phenomena was identified by the significant increase in heat load in the early stages of the freezing process.
Bleier, W.
1983-01-01
The polarization of the photons in the elementary processes of electron-nucleus and electron-electron bremsstrahlung was measured. Electrons with an energy of 300 keV were scattered by copper, gold and carbon targets. The polarization in the different processes was measured by using different coincidence methods. (BEF)
A linear program for optimal configurable business processes deployment into cloud federation
Rekik, M.; Boukadi, K.; Assy, N.; Gaaloul, W.; Ben-Abdallah, H.; Zhang, J.; Miller, J.A.; Xu, X.
2016-01-01
A configurable process model is a generic model from which an enterprise can derive and execute process variants that meet its specific needs and contexts. With the advent of cloud computing and its economic pay-per-use model, enterprises are increasingly outsourcing partially or totally their
A State-Space Approach to Optimal Level-Crossing Prediction for Linear Gaussian Processes
Martin, Rodney Alexander
2009-01-01
In many complex engineered systems, the ability to give an alarm prior to impending critical events is of great importance. These critical events may have varying degrees of severity, and in fact they may occur during normal system operation. In this article, we investigate approximations to theoretically optimal methods of designing alarm systems for the prediction of level-crossings by a zero-mean stationary linear dynamic system driven by Gaussian noise. An optimal alarm system is designed to elicit the fewest false alarms for a fixed detection probability. This work introduces the use of Kalman filtering in tandem with the optimal level-crossing problem. It is shown that there is a negligible loss in overall accuracy when using approximations to the theoretically optimal predictor, at the advantage of greatly reduced computational complexity.
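Given a Gaussian forecast N(mu, sigma^2) from a linear predictor such as a Kalman filter, the level-crossing alarm decision reduces to thresholding an exceedance probability. A minimal sketch follows; the numeric threshold here is an illustrative choice, whereas an optimal alarm system would set it to meet a fixed detection-probability constraint.

```python
import math

def exceedance_probability(mu, sigma, level):
    """P(X > level) for the Gaussian prediction X ~ N(mu, sigma^2)."""
    return 0.5 * math.erfc((level - mu) / (sigma * math.sqrt(2.0)))

def alarm(mu, sigma, level, p_threshold):
    """Raise an alarm when the predicted crossing probability is high enough."""
    return exceedance_probability(mu, sigma, level) >= p_threshold

# Example: a forecast well above the level with small uncertainty triggers
# the alarm; a forecast far below it does not.
triggered = alarm(mu=2.0, sigma=0.5, level=1.0, p_threshold=0.9)
quiet = alarm(mu=0.0, sigma=1.0, level=3.0, p_threshold=0.5)
```

Sweeping `p_threshold` traces out the trade-off between false alarms and missed detections that the optimal alarm design formalizes.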
Scanning Electron Microscope Calibration Using a Multi-Image Non-Linear Minimization Process
Cui, Le; Marchand, Éric
2015-04-01
A scanning electron microscope (SEM) calibration approach based on a non-linear minimization procedure is presented in this article. A part of this article was published at the IEEE International Conference on Robotics and Automation (ICRA), 2014. Both the intrinsic and extrinsic parameter estimations are achieved simultaneously by minimizing the registration error. The proposed approach considers multiple images of a multi-scale calibration pattern viewed from different positions and orientations. Since the projection geometry of the scanning electron microscope is different from that of a classical optical sensor, the perspective projection model and the parallel projection model are considered and compared, together with distortion models. Experiments are carried out by varying the position and orientation of a multi-scale chessboard calibration pattern from 300× to 10,000×. The experimental results show the efficiency and accuracy of this approach.
Østergaard, Jacob; Kramer, Mark A.; Eden, Uri T.
2018-01-01
… are separately applied; understanding the relationships between these modeling approaches remains an area of active research. In this letter, we examine this relationship using simulation. To do so, we first generate spike train data from a well-known dynamical model, the Izhikevich neuron, with a noisy input current. We then fit these spike train data with a statistical model (a generalized linear model, GLM, with multiplicative influences of past spiking). For different levels of noise, we show how the GLM captures both the deterministic features of the Izhikevich neuron and the variability driven by the noise. We conclude that the GLM captures essential features of the simulated spike trains, but for near-deterministic spike trains, goodness-of-fit analyses reveal that the model does not fit very well in a statistical sense; the essential random part of the GLM is not captured.
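A minimal, hypothetical version of the GLM-with-spike-history idea (not the letter's actual Izhikevich simulation or fitted model): Bernoulli spikes whose log-odds depend on the previous bin's spike, fitted by gradient ascent on the log-likelihood. All coefficients and sample sizes are made up for illustration.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Simulate Bernoulli spikes whose log-odds depend on the previous bin:
# logit P(spike_t) = b0 + b1 * spike_{t-1}; b1 < 0 mimics refractoriness.
random.seed(1)
b0_true, b1_true = -2.0, -3.0
spikes, prev = [], 0
for _ in range(5000):
    s = 1 if random.random() < sigmoid(b0_true + b1_true * prev) else 0
    spikes.append(s)
    prev = s

def loglik(b0, b1):
    ll, prev = 0.0, 0
    for s in spikes:
        p = sigmoid(b0 + b1 * prev)
        ll += math.log(p if s else 1.0 - p)
        prev = s
    return ll

# Fit the GLM by (normalized) gradient ascent on the log-likelihood
b0, b1, lr, n = 0.0, 0.0, 2.0, len(spikes)
ll_start = loglik(b0, b1)
for _ in range(400):
    g0, g1, prev = 0.0, 0.0, 0
    for s in spikes:
        r = s - sigmoid(b0 + b1 * prev)   # observed minus predicted spike
        g0, g1, prev = g0 + r, g1 + r * prev, s
    b0, b1 = b0 + lr * g0 / n, b1 + lr * g1 / n
ll_end = loglik(b0, b1)
print(b0, b1, ll_start, ll_end)
```

The fitted history coefficient comes out negative, recovering the simulated refractory effect.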
Post-processing with linear optics for improving the quality of single-photon sources
Berry, Dominic W; Scheel, Stefan; Myers, Casey R; Sanders, Barry C; Knight, Peter L; Laflamme, Raymond
2004-01-01
Triggered single-photon sources produce the vacuum state with non-negligible probability, but produce a much smaller multiphoton component. It is therefore reasonable to approximate the output of these photon sources as a mixture of the vacuum and single-photon states. We show that it is impossible to increase the probability for a single photon using linear optics and photodetection on fewer than four modes. This impossibility is due to the incoherence of the inputs; if the inputs were pure-state superpositions, it would be possible to obtain a perfect single-photon output. In the more general case, a chain of beam splitters can be used to increase the probability for a single photon, but at the expense of adding an additional multiphoton component. This improvement is robust against detector inefficiencies, but is degraded by distinguishable photons, dark counts or multiphoton components in the input
Low-impedance internal linear inductive antenna for large-area flat panel display plasma processing
Kim, K.N.; Jung, S.J.; Lee, Y.J.; Yeom, G.Y.; Lee, S.H.; Lee, J.K.
2005-01-01
An internal-type linear inductive antenna, that is, a double-comb-type antenna, was developed for a large-area plasma source having the size of 1020 mm × 830 mm, and high density plasmas on the order of 2.3×10¹¹ cm⁻³ were obtained with 15 mTorr Ar at 5000 W of inductive power with good plasma stability. This is higher than that for the conventional serpentine-type antenna, possibly due to the low impedance, resulting in high efficiency of power transfer for the double-comb antenna type. In addition, due to the remarkable reduction of the antenna length, a plasma uniformity of less than 8% was obtained within the substrate area of 880 mm × 660 mm at 5000 W without having a standing-wave effect
Exact closed-form expression for the inverse moments of one-sided correlated Gram matrices
Elkhalil, Khalil
2016-08-15
In this paper, we derive a closed-form expression for the inverse moments of one sided-correlated random Gram matrices. Such a question is mainly motivated by applications in signal processing and wireless communications for which evaluating this quantity is a question of major interest. This is for instance the case of the best linear unbiased estimator, in which the average estimation error corresponds to the first inverse moment of a random Gram matrix.
Exact closed-form expression for the inverse moments of one-sided correlated Gram matrices
Elkhalil, Khalil; Kammoun, Abla; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim
2016-01-01
In this paper, we derive a closed-form expression for the inverse moments of one sided-correlated random Gram matrices. Such a question is mainly motivated by applications in signal processing and wireless communications for which evaluating this quantity is a question of major interest. This is for instance the case of the best linear unbiased estimator, in which the average estimation error corresponds to the first inverse moment of a random Gram matrix.
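For intuition, the uncorrelated (identity-covariance) special case of the first inverse moment has a classical closed form: for an n×p real Gaussian H with n > p + 1, E[tr((HᵀH)⁻¹)] = p/(n − p − 1). The following Monte Carlo sketch (assuming NumPy is available; this is not the paper's one-sided-correlated result) checks that identity numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, trials = 10, 3, 4000

# Monte Carlo estimate of the first inverse moment E[tr((H^T H)^{-1})]
acc = 0.0
for _ in range(trials):
    H = rng.standard_normal((n, p))
    acc += np.trace(np.linalg.inv(H.T @ H))
estimate = acc / trials

exact = p / (n - p - 1)  # classical identity-covariance (real Wishart) result
print(estimate, exact)
```

The correlated case treated in the paper replaces this simple ratio with a more involved closed-form expression.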
Nonlinear fluctuation-induced rate equations for linear birth-death processes
Honkonen, J.
2008-01-01
The Fock-space approach to the solution of master equations for one-step Markov processes is reconsidered. It is shown that in birth-death processes with an absorbing state at the bottom of the occupation-number spectrum and occupation-number independent annihilation probability, occupation-number fluctuations give rise to rate equations drastically different from the polynomial form typical of birth-death processes. The fluctuation-induced rate equations with the characteristic exponential terms are derived for Mikhailov's ecological model and Lanchester's model of modern warfare
Nonlinear fluctuation-induced rate equations for linear birth-death processes
Honkonen, J.
2008-05-01
The Fock-space approach to the solution of master equations for one-step Markov processes is reconsidered. It is shown that in birth-death processes with an absorbing state at the bottom of the occupation-number spectrum and occupation-number independent annihilation probability, occupation-number fluctuations give rise to rate equations drastically different from the polynomial form typical of birth-death processes. The fluctuation-induced rate equations with the characteristic exponential terms are derived for Mikhailov’s ecological model and Lanchester’s model of modern warfare.
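A one-step birth-death chain with an absorbing state at the bottom of the occupation-number spectrum, as discussed in the two records above, can be simulated directly. This is a generic Gillespie sketch with illustrative rates, not Mikhailov's or Lanchester's models:

```python
import random

def gillespie_birth_death(n0, birth, death, t_max, rng):
    """Linear birth-death process: n -> n+1 at rate birth*n, n -> n-1 at rate
    death*n; n = 0 is absorbing. Returns the state at time t_max."""
    n, t = n0, 0.0
    while n > 0:
        total = (birth + death) * n
        t += rng.expovariate(total)          # time to the next event
        if t > t_max:
            break
        n += 1 if rng.random() < birth / (birth + death) else -1
    return n

rng = random.Random(42)
# With death > birth, absorption at 0 (extinction) is certain; estimate
# the probability of extinction by t = 50 starting from 5 individuals.
extinct = sum(gillespie_birth_death(5, 0.9, 1.1, 50.0, rng) == 0
              for _ in range(2000))
p_ext = extinct / 2000
print(p_ext)
```

Averages over such trajectories are what the fluctuation-corrected rate equations aim to reproduce without simulation.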
Yuxi Miao
2016-08-01
Full Text Available The free-piston gasoline engine linear generator (FPGLG) is a new kind of power plant consisting of free-piston gasoline engines and a linear generator. Due to the elimination of the crankshaft mechanism, the piston motion process and the combustion heat release process affect each other significantly. In this paper, the combustion characteristics during the stable generating process of a FPGLG were presented using a numerical iteration method, which coupled a zero-dimensional piston dynamic model and a three-dimensional scavenging model with the combustion process simulation. The results indicated that, compared to the conventional engine (CE), the heat release process of the FPGLG lasted longer with a lower peak heat release rate. The indicated thermal efficiency of the engine was lower because less heat was released around the piston top dead centre (TDC). Very minimal difference was observed in the ignition delay duration between the FPGLG and the CE, while the post-combustion period of the FPGLG was significantly longer than that of the CE. Meanwhile, the FPGLG was found to operate more moderately due to a lower peak in-cylinder gas pressure and a lower pressure rise rate. The potential advantage of the FPGLG in lower NOx emissions was also demonstrated by the simulation results presented in this paper.
Digital signals processing using non-linear orthogonal transformation in frequency domain
Ivanichenko E.V.
2017-12-01
Full Text Available The rapid progress of computer technology in recent decades has led to the wide introduction of digital information processing methods in practically all fields of scientific research. Among the various applications of computing, one of the most important places is occupied by digital signal processing (DSP) systems, which are used in remote data processing, navigation of aerospace and marine objects, communications, radiophysics, digital optics and a number of other applications. DSP is a dynamically developing area that covers both hardware and software tools. Related areas are information theory, in particular the theory of optimal signal reception, and pattern recognition theory. In the first case the main problem is signal extraction against a background of noise and interference of different physical natures; in the second it is automatic recognition, i.e. classification and identification of signals. In digital signal processing, by a signal we mean its mathematical description, i.e. a certain real function containing information on the state or behavior of a physical system under an event, which can be defined on a continuous or discrete space of time variation or spatial coordinates. In the broad sense, a DSP system is a complex of algorithms, hardware and software. As a rule, such systems contain specialized technical means for preliminary (primary) signal processing and special technical means for secondary processing of signals. The preprocessing means are designed to process the original signals, observed in the general case against a background of random noise and interference of different physical natures and represented in the form of discrete digital samples, for the purpose of detection and selection of the useful signal and estimation of the characteristics of the detected signal. A new method of digital signal processing in the frequency
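As a concrete frequency-domain building block (generic DSP background, not the article's non-linear orthogonal transformation, whose description is truncated above), here is a direct DFT together with a Parseval check, the orthogonality property that frequency-domain processing relies on:

```python
import cmath
import math

def dft(x):
    """Direct O(N^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * math.pi * j * k / n) for k in range(n))
            for j in range(n)]

# A short real test signal: the sum of two sinusoids at bins 5 and 12
n = 64
x = [math.sin(2 * math.pi * 5 * k / n) + 0.5 * math.sin(2 * math.pi * 12 * k / n)
     for k in range(n)]
X = dft(x)

# Parseval's relation: sum |x|^2 == (1/N) sum |X|^2 (orthogonality of the basis)
lhs = sum(abs(v) ** 2 for v in x)
rhs = sum(abs(v) ** 2 for v in X) / n
print(lhs, rhs)
```

The dominant spectral line appears at bin 5, matching the stronger sinusoid in the test signal.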
Application of mixed-integer linear programming in a car seats assembling process
Jorge Iván Perez Rave
2011-12-01
Full Text Available In this paper, a decision problem involving a car parts manufacturing company is modeled in order to prepare the company for an increase in demand. Mixed-integer linear programming was used with the following decision variables: creating a second shift, purchasing additional equipment, determining the required work force, and other alternatives involving new manners of work distribution that make it possible to separate certain operations from some workplaces and integrate them into others to minimize production costs. The model was solved using GAMS. The solution consisted of programming 19 workers under a configuration that merges two workplaces and separates some operations from some workplaces. The solution did not involve purchasing additional machinery or creating a second shift. As a result, the manufacturing paradigms that had been valid in the company for over 14 years were broken. This study allowed the company to increase its productivity and obtain significant savings. It also shows the benefits of joint work between academia and companies, and provides useful information for professors, students and engineers regarding production and continuous improvement.
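The structure of such a decision model can be shown on a toy instance (all numbers hypothetical, solved by exhaustive search rather than GAMS): binary variables for the second shift and the machine purchase, an integer work-force variable, and a capacity constraint.

```python
from itertools import product

# Hypothetical data: base capacity 100 units/day; each worker adds 10;
# a second shift adds 60 and an extra machine adds 40. Demand is 260 units/day.
demand = 260
best = None
for shift2, machine, workers in product([0, 1], [0, 1], range(0, 25)):
    capacity = 100 + 10 * workers + 60 * shift2 + 40 * machine
    if capacity < demand:
        continue  # infeasible: demand not met
    cost = 80 * workers + 500 * shift2 + 700 * machine  # daily cost
    if best is None or cost < best[0]:
        best = (cost, shift2, machine, workers)

cost, shift2, machine, workers = best
print(cost, shift2, machine, workers)
```

With these made-up costs the optimum happens to use neither the second shift nor the extra machine, mirroring the qualitative outcome reported in the abstract; a real instance of this size would be handed to a MILP solver such as GAMS.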
Feasible Application Area Study for Linear Laser Cutting in Paper Making Processes
Happonen, A.; Stepanov, A.; Piili, H.
Traditional industry sectors, like the paper making industry, tend to stay with well-known technology rather than moving towards promising but still quite new technical solutions and applications. This study analyses the feasibility of laser cutting in large-scale industrial paper making processes. The aim was to reveal development and process related challenges and improvement potential in paper making processes by utilizing laser technology. This study has been carried out because there still seem to be only a few large-scale industrial laser processing applications in paper converting processes worldwide, even at the beginning of the 2010's. As a consequence of this small-scale use of lasers in the paper material manufacturing industry, there is a shortage of widely available published research articles and measurement data (e.g. actual achieved cut speeds with high quality cut edges, set-up times and so on). It was concluded that laser cutting has strong potential in industrial applications for the paper making industries. This potential includes quality improvements and a competitive advantage for paper machine manufacturers and industry. The innovations also add potential for developing new paper products. An example of such products is paper with printed intelligence, which could be a new business opportunity for paper industries all around the world.
Seyyed Ghoreishi
2017-09-01
Full Text Available Objective(s): In this work, paclitaxel (PX), a promising anticancer drug, was loaded in basil seed mucilage (BSM) aerogels by implementation of supercritical carbon dioxide (SC-CO2) technology. Then, the effects of operating conditions on the PX mean particle size (MPS), particle size distribution (PSD) and drug loading efficiency (DLE) were studied. Methods: The SC-CO2 process employed in this research is a combination of the phase inversion technique and the gas antisolvent (GAS) process. The effects of DMSO/water ratio (4 and 6 v/v), pressure (10-20 MPa), CO2 addition rate (1-3 mL/min) and ethanol concentration (5-10%) on MPS, PSD and DLE were studied. Scanning electron microscopy (SEM) and a Zetasizer were used for particle analysis. DLE was investigated by utilizing high-performance liquid chromatography (HPLC). Results: Nanoparticles of paclitaxel (MPS of 82-131 nm, depending on process variables) with narrow PSD were successfully loaded in BSM aerogel with DLE of 28-52%. Experimental results indicated that higher DMSO/water ratio, ethanol concentration, pressure and CO2 addition rate reduced MPS and DLE. Conclusions: A modified semi-batch SC-CO2 process based on the combination of the gas antisolvent process and phase inversion methods, using DMSO as co-solvent and ethanol as a secondary solvent, was developed for the loading of an anticancer drug, PX, in ocimum basilicum mucilage aerogel. The experimental results determined that the mean particle size, particle size distribution and drug loading efficiency can be controlled by the operating conditions.
Radiation processing of inhomogeneous objects at the 300 MeV electron linear accelerator
Demeshko, O.A.; Kochetov, S.S.; Makhnenko, L.A.; Melnitsky, I.V.; Shopen, O.A.
2009-01-01
Comparison is made between the calculated and experimental doses absorbed by complex density-inhomogeneous objects during their radiation processing. The passage of fast electrons through the object and the depth-dose formation have been simulated by the Monte Carlo technique with the use of the licensed program package PENELOPE. The calculated and experimental data are found to agree to within ∼30 %. Preliminary simulation of the irradiation of an object at given conditions provides the necessary information when developing the methods for a particular group of objects. This is of particular importance when performing bilateral irradiation, where an insignificant density variance of different objects may lead to appreciable errors of dose determination in the symmetry plane of the object.
Probabilistic inversion in priority setting of emerging zoonoses.
Kurowicka, D.; Bucura, C.; Cooke, R.; Havelaar, A.H.
2010-01-01
This article presents a methodology for applying probabilistic inversion in combination with expert judgment to a priority setting problem. Experts rank scenarios according to severity. A linear multi-criteria analysis model underlying the expert preferences is posited. Using probabilistic inversion, a
Bounds for the probability distribution function of the linear ACD process
Fernandes, Marcelo
2003-01-01
This paper derives both lower and upper bounds for the probability distribution function of stationary ACD(p, q) processes. For the purpose of illustration, I specialize the results to the main parent distributions in duration analysis. Simulations show that the lower bound is much tighter than the upper bound.
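The linear ACD(1,1) recursion itself is straightforward to simulate (an illustrative sketch, not the paper's distributional bounds): psi_i = omega + alpha*x_{i-1} + beta*psi_{i-1} and x_i = psi_i * eps_i with i.i.d. unit-exponential innovations, so the stationary mean duration is omega/(1 - alpha - beta).

```python
import random

random.seed(7)
omega, alpha, beta = 0.1, 0.2, 0.7   # illustrative ACD(1,1) parameters
n = 100000

psi, x = omega / (1 - alpha - beta), 1.0  # start at the stationary mean
xs = []
for _ in range(n):
    psi = omega + alpha * x + beta * psi   # conditional expected duration
    x = psi * random.expovariate(1.0)      # observed duration
    xs.append(x)

mean = sum(xs) / n
print(mean, omega / (1 - alpha - beta))   # sample mean vs stationary mean
```

The empirical distribution function of such simulated durations is the object the paper's lower and upper bounds bracket.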
Statistical perspectives on inverse problems
Andersen, Kim Emil
Inverse problems arise in many scientific disciplines and pertain to situations where inference is to be made about a particular phenomenon from indirect measurements. A typical example, arising in diffusion tomography, is the inverse boundary value problem for non-invasive reconstruction of the interior of an object from electrical boundary measurements. One part of this thesis concerns statistical approaches for solving, possibly non-linear, inverse problems. Thus inverse problems are recast in a form suitable for statistical inference. In particular, a Bayesian approach for regularisation … problem is given in terms of probability distributions. Posterior inference is obtained by Markov chain Monte Carlo methods, and new, powerful simulation techniques based on e.g. coupled Markov chains and simulated tempering are developed to improve the computational efficiency of the overall simulation …
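The Bayesian recasting mentioned above can be illustrated on the smallest possible linear inverse problem (made-up data, and plain random-walk Metropolis rather than the coupled chains or simulated tempering developed in the thesis):

```python
import math
import random

random.seed(3)
# Forward model y = a*x + noise; infer the slope a from noisy indirect data.
a_true, sigma = 2.0, 0.5
xs = [i / 10.0 for i in range(1, 21)]
ys = [a_true * x + random.gauss(0.0, sigma) for x in xs]

def log_post(a):
    """Gaussian prior a ~ N(0, 10^2) plus Gaussian likelihood (up to a constant)."""
    lp = -a * a / (2.0 * 10.0 ** 2)
    for x, y in zip(xs, ys):
        lp -= (y - a * x) ** 2 / (2.0 * sigma ** 2)
    return lp

# Random-walk Metropolis sampling of the posterior
a, samples = 0.0, []
for i in range(20000):
    prop = a + random.gauss(0.0, 0.3)
    if math.log(random.random()) < log_post(prop) - log_post(a):
        a = prop
    if i >= 5000:                 # discard burn-in
        samples.append(a)

post_mean = sum(samples) / len(samples)
print(post_mean)
```

The posterior mean recovers the true slope, and the spread of the samples quantifies the uncertainty that a point estimate would hide.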
Schenini, L.; Beslier, M. O.; Sage, F.; Badji, R.; Galibert, P. Y.; Lepretre, A.; Dessa, J. X.; Aidi, C.; Watremez, L.
2014-12-01
Recent studies on the Algerian and the North-Ligurian margins in the Western Mediterranean have evidenced inversion-related superficial structures, such as folds and asymmetric sedimentary perched basins whose geometry hints at deep compressive structures dipping towards the continent. Deep seismic imaging of these margins is difficult due to steep slope and superficial multiples, and, in the Mediterranean context, to the highly diffractive Messinian evaporitic series in the basin. During the Algerian-French SPIRAL survey (2009, R/V Atalante), 2D marine multi-channel seismic (MCS) reflection data were collected along the Algerian Margin using a 4.5 km, 360 channel digital streamer and a 3040 cu. in. air-gun array. An advanced processing workflow has been laid out using Geocluster CGG software, which includes noise attenuation, 2D SRME multiple attenuation, surface consistent deconvolution, Kirchhoff pre-stack time migration. This processing produces satisfactory seismic images of the whole sedimentary cover, and of southward dipping reflectors in the acoustic basement along the central part of the margin offshore Great Kabylia, that are interpreted as inversion-related blind thrusts as part of flat-ramp systems. We applied this successful processing workflow to old 2D marine MCS data acquired on the North-Ligurian Margin (Malis survey, 1995, R/V Le Nadir), using a 2.5 km, 96 channel streamer and a 1140 cu. in. air-gun array. Particular attention was paid to multiple attenuation in adapting our workflow. The resulting reprocessed seismic images, interpreted with a coincident velocity model obtained by wide-angle data tomography, provide (1) enhanced imaging of the sedimentary cover down to the top of the acoustic basement, including the base of the Messinian evaporites and the sub-salt Miocene series, which appear to be tectonized as far as in the mid-basin, and (2) new evidence of deep crustal structures in the margin which the initial processing had failed to
Pelle, L.
2003-12-01
The removal of multiple reflections remains a real problem in seismic imaging. Many preprocessing methods have been developed to attenuate multiples in seismic data but none of them is satisfactory in 3D. The objective of this thesis is to develop a new method to remove multiples, extensible to 3D. Contrary to the existing methods, our approach is not a preprocessing step: we directly include multiple removal in the imaging process by means of a simultaneous inversion of primaries and multiples. We then propose to improve the standard linearized inversion so as to make it insensitive to the presence of multiples in the data. We exploit kinematic differences between primaries and multiples. We propose to pick in the data the kinematics of the multiples we want to remove. The wave field is decomposed into primaries and multiples. Primaries are modeled by the Ray+Born operator from perturbations of the logarithm of impedance, given the velocity field. Multiples are modeled by the Transport operator from an initial trace, given the picking. The inverse problem simultaneously fits primaries and multiples to the data. To solve this problem with two unknowns, we take advantage of the isometric nature of the Transport operator, which allows to drastically reduce the CPU time: this simultaneous inversion is thus almost as fast as the standard linearized inversion. This gain of time opens the way to different applications of multiple removal and, in particular, allows the straightforward 3D extension to be foreseen. (author)
Matthew J Simpson
Full Text Available Many processes during embryonic development involve transport and reaction of molecules, or transport and proliferation of cells, within growing tissues. Mathematical models of such processes usually take the form of a reaction-diffusion partial differential equation (PDE) on a growing domain. Previous analyses of such models have mainly involved solving the PDEs numerically. Here, we present a framework for calculating the exact solution of a linear reaction-diffusion PDE on a growing domain. We derive an exact solution for a general class of one-dimensional linear reaction-diffusion processes on 0 < x < L(t), where L(t) is the length of the growing domain.
Simpson, Matthew J
2015-01-01
Many processes during embryonic development involve transport and reaction of molecules, or transport and proliferation of cells, within growing tissues. Mathematical models of such processes usually take the form of a reaction-diffusion partial differential equation (PDE) on a growing domain. Previous analyses of such models have mainly involved solving the PDEs numerically. Here, we present a framework for calculating the exact solution of a linear reaction-diffusion PDE on a growing domain. We derive an exact solution for a general class of one-dimensional linear reaction-diffusion processes on 0 < x < L(t). Comparing the exact solutions with numerical approximations confirms the veracity of the method. Furthermore, our examples illustrate a delicate interplay between: (i) the rate at which the domain elongates, (ii) the diffusivity associated with the spreading density profile, (iii) the reaction rate, and (iv) the initial condition. Altering the balance between these four features leads to different outcomes in terms of whether an initial profile, located near x = 0, eventually overcomes the domain growth and colonizes the entire length of the domain by reaching the boundary where x = L(t).
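The flavor of such exact-versus-numerical comparisons can be reproduced in the simplest fixed-domain special case (a sketch only; the paper's solutions live on the growing domain 0 < x < L(t)): for u_t = D u_xx - k u on 0 < x < L with zero boundary values, u(x, t) = exp(-(k + D pi^2 / L^2) t) sin(pi x / L) is exact, and an explicit finite-difference scheme should reproduce it.

```python
import math

D, k, L = 0.1, 0.05, 1.0        # diffusivity, reaction rate, fixed domain length
nx, dx = 51, 1.0 / 50
dt = 0.4 * dx * dx / D          # stable explicit step (dt <= dx^2 / (2D))
steps = 500
t_end = steps * dt

# Initial condition = exact solution at t = 0 (a single sine mode)
u = [math.sin(math.pi * i * dx / L) for i in range(nx)]
for _ in range(steps):
    new = [0.0] * nx            # boundary values stay at zero
    for i in range(1, nx - 1):
        lap = (u[i - 1] - 2 * u[i] + u[i + 1]) / (dx * dx)
        new[i] = u[i] + dt * (D * lap - k * u[i])
    u = new

decay = math.exp(-(k + D * math.pi ** 2 / L ** 2) * t_end)
err = max(abs(u[i] - decay * math.sin(math.pi * i * dx / L)) for i in range(nx))
print(err)
```

On a growing domain the same comparison requires first mapping the PDE to fixed coordinates, which is exactly where the paper's exact solutions come in.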
Rao, H.M.; Ghaffari, B.; Yuan, W.; Jordon, J.B.; Badarinarayan, H.
2016-01-01
The microstructure and lap-shear behaviors of friction stir linear welded wrought Al alloy AA6022-T4 to cast Mg alloy AM60B joints were examined. A process window was developed to initially identify the potential process conditions. Multitudes of welds were produced by varying the tool rotation rate and tool traverse speed. Welds produced at 1500 revolutions per minute (rpm) tool rotation rate and either 50 mm/min or 75 mm/min tool traverse speed displayed the highest quasi-static failure load of ~3.3 kN per 30 mm wide lap-shear specimens. Analysis of cross sections of untested coupons indicated that the welds made at these optimum welding parameters had negligible microvoids and displayed a favorable weld geometry for the cold lap and hook features at the faying surface, compared to welds produced using other process parameters. Cross sections of the tested coupons indicated that the dominant crack initiated on the advancing side and progressed through the weld nugget, which consists of intermetallic compounds (IMC). This study demonstrates the feasibility of welding wrought Al and cast Mg alloy via friction stir linear welding with promising lap-shear strength results.
Rao, H.M. [Research & Development Division, Hitachi America Ltd., Farmington Hills, MI 48335 (United States); Ghaffari, B. [Research and Advanced Engineering, Ford Motor Company, Dearborn, MI 48121 (United States); Yuan, W., E-mail: wei.yuan@hitachi-automotive.us [Research & Development Division, Hitachi America Ltd., Farmington Hills, MI 48335 (United States); Jordon, J.B. [Department of Mechanical Engineering, The University of Alabama, Tuscaloosa, AL 35487 (United States); Badarinarayan, H. [Research & Development Division, Hitachi America Ltd., Farmington Hills, MI 48335 (United States)
2016-01-10
The microstructure and lap-shear behaviors of friction stir linear welded wrought Al alloy AA6022-T4 to cast Mg alloy AM60B joints were examined. A process window was developed to initially identify the potential process conditions. Multitudes of welds were produced by varying the tool rotation rate and tool traverse speed. Welds produced at 1500 revolutions per minute (rpm) tool rotation rate and either 50 mm/min or 75 mm/min tool traverse speed displayed the highest quasi-static failure load of ~3.3 kN per 30 mm wide lap-shear specimens. Analysis of cross sections of untested coupons indicated that the welds made at these optimum welding parameters had negligible microvoids and displayed a favorable weld geometry for the cold lap and hook features at the faying surface, compared to welds produced using other process parameters. Cross sections of the tested coupons indicated that the dominant crack initiated on the advancing side and progressed through the weld nugget, which consists of intermetallic compounds (IMC). This study demonstrates the feasibility of welding wrought Al and cast Mg alloy via friction stir linear welding with promising lap-shear strength results.
Inverse Faraday Effect Revisited
Mendonça, J. T.; Ali, S.; Davies, J. R.
2010-11-01
The inverse Faraday effect is usually associated with circularly polarized laser beams. However, it was recently shown that it can also occur for linearly polarized radiation [1]. The quasi-static axial magnetic field generated by a laser beam propagating in a plasma can be calculated by considering both the spin and the orbital angular momenta of the laser pulse. A net spin is present when the radiation is circularly polarized, and a net orbital angular momentum is present if there is any deviation from perfect rotational symmetry. This orbital angular momentum has recently been discussed in the plasma context [2], and can give an additional contribution to the axial magnetic field, thus enhancing or reducing the inverse Faraday effect. As a result, this effect that is usually attributed to circular polarization can also be excited by linearly polarized radiation, if the incident laser propagates in a Laguerre-Gauss mode carrying a finite amount of orbital angular momentum. [1] S. Ali, J. R. Davies and J. T. Mendonça, Phys. Rev. Lett. 105, 035001 (2010). [2] J. T. Mendonça, B. Thidé, and H. Then, Phys. Rev. Lett. 102, 185005 (2009).
Zheng, Ao; Wang, Mingfeng; Yu, Xiangwei; Zhang, Wenbo
2018-03-01
On 2016 November 13, an Mw 7.8 earthquake occurred in the northeast of the South Island of New Zealand near Kaikoura. The earthquake caused severe damages and great impacts on local nature and society. Referring to the tectonic environment and defined active faults, the field investigation and geodetic evidence reveal that at least 12 fault sections ruptured in the earthquake, and the focal mechanism is one of the most complicated in historical earthquakes. On account of the complexity of the source rupture, we propose a multisegment fault model based on the distribution of surface ruptures and active tectonics. We derive the source rupture process of the earthquake using the kinematic waveform inversion method with the multisegment fault model from strong-motion data of 21 stations (0.05-0.35 Hz). The inversion result suggests the rupture initiates in the epicentral area near the Humps fault, and then propagates northeastward along several faults, until the offshore Needles fault. The Mw 7.8 event is a mixture of right-lateral strike and reverse slip, and the maximum slip is approximately 19 m. The synthetic waveforms reproduce the characteristics of the observed ones well. In addition, we synthesize the coseismic offsets distribution of the ruptured region from the slips of upper subfaults in the fault model, which is roughly consistent with the surface breaks observed in the field survey.
Mar'yanov, B.M.; Shumar, S.V.; Gavrilenko, M.A.
1994-01-01
A method for the computer processing of the curves of potentiometric differential titration using precipitation reactions is developed. The method is based on transformation of the titration curve into a multiphase regression line, whose parameters determine the equivalence points and the solubility products of the formed precipitates. The computational algorithm is tested using experimental curves for the titration of solutions containing Hg(II) and Cd(II) with a solution of sodium diethyldithiocarbamate. The random errors (RSD) for the titration of 1×10⁻⁴ M solutions are in the range of 3-6%. 7 refs.; 2 figs.; 1 tab
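A stripped-down version of the breakpoint idea (hypothetical noise-free data; the actual method also recovers solubility products): fit a two-segment linear regression by scanning candidate breakpoints and keeping the split with least total squared error, the breakpoint playing the role of the equivalence point.

```python
def fit_line(pts):
    """Least-squares intercept/slope and residual sum of squares."""
    n = len(pts)
    sx = sum(x for x, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts); sxy = sum(x * y for x, y in pts)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    rss = sum((y - a - b * x) ** 2 for x, y in pts)
    return a, b, rss

# Synthetic "titration curve": the slope changes at x = 6 (equivalence point)
pts = [(x / 2.0, 0.5 * (x / 2.0) if x / 2.0 <= 6 else 3.0 + 4.0 * (x / 2.0 - 6))
       for x in range(0, 25)]

best = None
for i in range(3, len(pts) - 3):       # scan candidate breakpoints
    rss = fit_line(pts[:i])[2] + fit_line(pts[i:])[2]
    if best is None or rss < best[1]:
        best = (pts[i][0], rss)
print(best[0])   # estimated equivalence point
```

The real method generalizes this to several phases at once, one breakpoint per equivalence point.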
Variability in surface inversion characteristics over India in winter ...
inversion depth at most of the other stations show that shallow and moderate inversions occur more frequently than deep … processed and several checks were applied to ensure homogeneity … simply inversions) is defined as the layer from …
Finite-dimensional linear algebra
Gockenbach, Mark S
2010-01-01
Some Problems Posed on Vector Spaces: Linear equations; Best approximation; Diagonalization; Summary. Fields and Vector Spaces: Fields; Vector spaces; Subspaces; Linear combinations and spanning sets; Linear independence; Basis and dimension; Properties of bases; Polynomial interpolation and the Lagrange basis; Continuous piecewise polynomial functions. Linear Operators: Linear operators; More properties of linear operators; Isomorphic vector spaces; Linear operator equations; Existence and uniqueness of solutions; The fundamental theorem; Inverse operators; Gaussian elimination; Newton's method; Linear ordinary differential eq
Multi input single output model predictive control of non-linear bio-polymerization process
Arumugasamy, Senthil Kumar; Ahmad, Z. [School of Chemical Engineering, Univerisiti Sains Malaysia, Engineering Campus, Seri Ampangan,14300 Nibong Tebal, Seberang Perai Selatan, Pulau Pinang (Malaysia)
2015-05-15
This paper focuses on Multi Input Single Output (MISO) Model Predictive Control of a bio-polymerization process, in which a mechanistic model is developed and linked with a feedforward neural network model to obtain a hybrid model (Mechanistic-FANN) of lipase-catalyzed ring-opening polymerization of ε-caprolactone (ε-CL) for poly(ε-caprolactone) production. In this research a state space model was used, in which the inputs to the model were the reactor temperatures and reactor impeller speeds and the outputs were the molecular weight of the polymer (M_n) and the polymer polydispersity index. The state space model for the MISO system was created using the System Identification Toolbox of Matlab™ and is used in the MISO MPC. Model predictive control (MPC) has been applied to predict, and consequently control, the molecular weight of the biopolymer. The results show that the MPC is able to track the reference trajectory and give optimum movement of the manipulated variables.
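A minimal single-input MPC loop for a scalar linear model (illustrative only, not the hybrid Mechanistic-FANN model above): at each step choose the input minimizing a one-step tracking-plus-effort cost, which for the plant x+ = a*x + b*u and cost (x+ - r)^2 + lam*u^2 has the closed-form minimizer u* = b*(r - a*x)/(b^2 + lam).

```python
a, b = 0.95, 0.5          # scalar linear plant: x+ = a*x + b*u
lam, r = 0.01, 1.0        # input-effort weight and reference (set-point)

x, traj = 0.0, []
for _ in range(60):
    u = b * (r - a * x) / (b * b + lam)   # minimizer of (a*x + b*u - r)^2 + lam*u^2
    x = a * x + b * u                     # apply the input, advance the plant
    traj.append(x)
print(traj[0], traj[-1])
```

Full MPC extends this to a multi-step horizon with constraints, re-solving the optimization at every sample, which is what enables tracking the molecular-weight reference in the paper.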
Porto da Silva, Edson
Digital signal processing (DSP) has become one of the main enabling technologies for the physical layer of coherent optical communication networks. The DSP subsystems are used to implement several functionalities in the digital domain, from synchronization to channel equalization. Flexibility … (I) nonlinearity compensation, (II) spectral shaping, and (III) adaptive equalization. For (I), original contributions are presented to the study of nonlinearity compensation (NLC) with digital backpropagation (DBP). Numerical and experimental performance investigations are shown for different application scenarios. Concerning (II), it is demonstrated how optical and electrical (digital) pulse shaping can be allied to improve the spectral confinement of a particular class of optical time-division multiplexing (OTDM) signals that can be used as a building block for fast signaling single-carrier transceivers …
PROCESS SIMULATION OF BENZENE SEPARATION COLUMN OF LINEAR ALKYL BENZENE (LAB) PLANT
Zaid A. AbdelRahman
2013-05-01
Full Text Available The CHEMCAD process simulator was used for the analysis of an existing benzene separation column in the LAB plant (Arab Detergent Company, Beiji, Iraq). Simulated column performance curves were constructed. The variables considered in this study are the thermodynamic model option, top and bottom temperatures, feed temperature, feed composition and reflux ratio. Simulated column profiles for the temperature, vapor and liquid flow rates and compositions were also constructed. Four different thermodynamic model options (SRK, TSRK, PR and ESSO) were used, affecting the results within a 1-25% variation for most cases. For the benzene column (32 real stages, feed stage 14), the simulated results show that for bottom temperatures above 200 °C the weight fractions of the top components, except benzene, increase sharply, whereas the benzene top weight fraction decreases sharply. Feed temperatures above 180 °C show the same trends. The column profiles remain fairly constant from tray 3 (immediately below the condenser) to tray 10 (immediately above the feed) and from tray 15 (immediately below the feed) to tray 25 (immediately above the reboiler). Simulation of the benzene separation column in the LAB production plant using the CHEMCAD simulator confirms the real plant operation data. The study gives evidence of a successful simulation with CHEMCAD.
Linear electron accelerators for medicine and radiation processing developed in Beijing, China
Benguang, G.
1981-01-01
Because of the wide applications in radiotherapy, sterilization, industrial radiography, irradiation processing, etc., the authors started to develop their own machines in this field in 1974. The first linac made in Beijing is a medical one, Model BJ-10. It was completed in 1977, installed at the Beijing Municipal Tumor Institute, and has been used in treatment for 3 years. The parameters of this radiotherapy equipment are determined by the requirements of treating deep and superficial tumors. In the subsystems of this medical linac, the advanced techniques developed since the appearance of the world's first medical linac, such as the isocentric gantry system, are adopted as much as possible. The second machine is an industrial linac, Model BF-5, whose manufacture was finished in 1977; it was installed at the Beijing Irradiation Experiment Center. The BF-5 is the successor of the BJ-10 in various techniques. A series of irradiation experiments have been carried out on this machine. The authors are now developing new linacs to meet the demand for cancer therapy, industrial radiography and other applications in their country
Data acquisition and processing software for linear PSD based neutron diffractometers
Pande, S.S.; Borkar, S.P.; Ghodgaonkar, M.D.
2003-01-01
As part of the data acquisition system for various single- and multi-PSD diffractometers, software was developed to acquire the data and support the requirements of diffraction experiments. The software consists of a front-end Windows 98 application on a PC and a transputer program on the MPSD card. The front-end application provides the entire user interface required for data acquisition, control, presentation and system setup. Data are acquired and the diffraction spectra are generated in the transputer program, which also implements all the required hardware control. The two programs communicate using a device driver named VTRANSPD. The software plays a vital role in customizing and integrating the data acquisition system for various diffractometer setups. The experiments are also effectively automated in the software, which has helped in making the best use of available beam time. These and other features of the data acquisition and processing software are presented here. This software is being used along with the data acquisition system at a few single-PSD and multi-PSD diffractometers. (author)
Andréa Cristina Fermiano Fidelis
2017-03-01
Full Text Available The growth of health structures and their complexity have led Clinical Engineering professionals to carry out studies to develop and implement health technology management programs. In this way, professionals in this area, integrated with health system teams, have contributed to making feasible the use of technologies that offer greater security, functionality and reliability. In the radiotherapy area, the increase in the incidence of new cases of cancer, together with the contingency of financial resources for health and the high cost and complexity of the equipment, motivates studies for its adequate management. This research aimed to identify the technologies applied in radiotherapy treatment, in particular the linear accelerator, as well as the concepts of innovation, innovation in services, innovation in processes and the competitiveness acquired with the aid of innovation. The method used in the research has a qualitative approach, with an exploratory and descriptive objective, with semi-structured and open questions, and involved bibliographic research on the topics of innovation and the linear accelerator, document analysis, a visit to a High Complexity Oncology Unit, and interviews at the General Hospital of Caxias do Sul South, presenting, finally, the impacts on the hospital and the community after the arrival of the linear accelerator. The results showed that there was incremental process and product innovation in the services offered by the hospital.
Dao-ming, Lu
2018-05-01
The negativity of the Wigner function (WF) is one of the important signatures of the non-classical properties of a light field. Therefore, it is of great significance to study the evolution of the WF in dissipative processes. The evolution formula of the WF in the laser process under the action of a linear resonance force is given by virtue of the thermo-entangled state representation and the technique of integration within an ordered product of operators. As applications, the evolution of the WF of a thermal field and that of a single-photon-added coherent state are discussed. The results show that the WF of the thermal field maintains its original character. On the other hand, the negative region size and the depth of negativity of the WF of the single-photon-added coherent state decrease until they vanish with dissipation. This shows that the non-classical property of the single-photon-added coherent state is weakened, until it disappears, as the dissipation time increases.
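For reference, the standard definitions behind these statements can be written out explicitly; the Gaussian thermal-state form below is textbook material, not the evolution formula derived in the paper:

```latex
% Wigner function of a density operator \rho (displaced-parity form),
% with D(\alpha) the displacement operator:
W(\alpha) = \frac{2}{\pi}\,\mathrm{Tr}\!\left[\rho\, D(\alpha)\,(-1)^{a^{\dagger}a}\, D^{\dagger}(\alpha)\right].
% For a thermal field with mean photon number \bar{n} it is an
% everywhere-positive Gaussian, so dissipation cannot create negativity:
W_{\mathrm{th}}(\alpha) = \frac{2}{\pi\,(2\bar{n}+1)}\,
  \exp\!\left(-\frac{2|\alpha|^{2}}{2\bar{n}+1}\right).
```

Since the thermal-state WF is positive everywhere, it remains Gaussian under photon loss, consistent with the claim that it "maintains its original character", while any initial negativity (as in the single-photon-added coherent state) can only shrink.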
Zhang, Zhendong
2017-07-11
Full waveform inversion for reflection events is limited by its linearized update requirements, given by a process equivalent to migration. Unless the background velocity model is reasonably accurate, the resulting gradient can have an inaccurate update direction, leading the inversion to converge to what we refer to as local minima of the objective function. In our approach, we consider mild lateral variation in the model and thus use a gradient given by the oriented time-domain imaging method. Specifically, we apply oriented time-domain imaging to the data residual to obtain the geometrical features of the velocity perturbation. After updating the model in the time domain, we convert the perturbation from time to depth using the average velocity. Considering density to be constant, we can expand the conventional 1D impedance inversion method to 2D or 3D velocity inversion within the process of full waveform inversion. This method is not only capable of inverting for velocity, but it is also capable of retrieving anisotropic parameters, relying on linearized representations of the reflection response. To eliminate the cross-talk artifacts between different parameters, we utilize what we consider to be an optimal parameterization for this step. To do so, we extend the prestack time-domain migration image in the incident-angle dimension to incorporate the angular dependence needed by the multiparameter inversion. For simple models, this approach provides an efficient and stable way to do full waveform inversion or modified seismic inversion and makes the anisotropic inversion more practicable. The proposed method still needs kinematically accurate initial models, since it only recovers the high-wavenumber part, as the conventional full waveform inversion method does. Results on synthetic data for isotropic and anisotropic cases illustrate the benefits and limitations of this method.
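The time-to-depth conversion step can be sketched as follows. The mapping z = v_avg * t / 2 for two-way traveltime is a deliberate simplification, and the function and variable names are made up for illustration:

```python
import numpy as np

def time_to_depth(perturbation_t, t_axis, v_avg):
    """Map a time-domain perturbation onto a depth axis using a single
    average velocity (z = v_avg * t / 2 for two-way time). Hypothetical
    helper; the thesis' actual stretch may use a laterally varying v."""
    z_axis = v_avg * t_axis / 2.0
    return z_axis, perturbation_t  # same samples, now indexed by depth

t = np.linspace(0.0, 2.0, 5)                 # two-way time (s)
dp_t = np.array([0.0, 0.1, 0.3, 0.1, 0.0])   # toy velocity perturbation
z, dp_z = time_to_depth(dp_t, t, v_avg=2000.0)  # v_avg in m/s
```

With v_avg = 2000 m/s, a 2 s two-way time maps to 2000 m depth, so the perturbation image is simply re-indexed rather than re-migrated.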
Analytical Derivation of the Inverse Moments of One-Sided Correlated Gram Matrices With Applications
Elkhalil, Khalil
2016-02-03
This paper addresses the development of analytical tools for the computation of the inverse moments of random Gram matrices with one side correlation. Such a question is mainly driven by applications in signal processing and wireless communications wherein such matrices naturally arise. In particular, we derive closed-form expressions for the inverse moments and show that the obtained results can help approximate several performance metrics such as the average estimation error corresponding to the Best Linear Unbiased Estimator (BLUE) and the Linear Minimum Mean Square Error (LMMSE) estimator or also other loss functions used to measure the accuracy of covariance matrix estimates.
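As a quick numerical illustration of what such inverse moments look like, the classical white (uncorrelated) special case E[tr((H^H H)^{-1})] = k/(n-k) for an n-by-k standard complex Gaussian H — a textbook result, not the one-sided correlated expressions derived in the paper — can be checked by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, trials = 20, 4, 3000
acc = 0.0
for _ in range(trials):
    # n x k matrix with i.i.d. CN(0, 1) entries
    H = (rng.standard_normal((n, k)) + 1j * rng.standard_normal((n, k))) / np.sqrt(2)
    W = H.conj().T @ H                       # k x k complex Wishart Gram matrix
    acc += np.trace(np.linalg.inv(W)).real   # first inverse moment sample
est = acc / trials
# Theory for the white case: k / (n - k) = 4 / 16 = 0.25
```

The same first inverse moment drives the BLUE/LMMSE error metrics mentioned in the abstract, which is why closed forms for it are useful.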
Steinhauer, L.C.; Romea, R.D.; Kimura, W.D.
1997-01-01
A new method for laser acceleration is proposed based upon the inverse process of transition radiation. The laser beam intersects an electron beam traveling between two thin foils. The principle of this acceleration method is explored in terms of its classical and quantum bases and its inverse process. A closely related concept based on the inverse of diffraction radiation is also presented; this concept has the significant advantage that apertures are used to allow free passage of the electron beam. These concepts can produce net acceleration because they do not satisfy the conditions under which the Lawson-Woodward theorem applies (no net acceleration in an unbounded vacuum). Finally, practical aspects such as damage limits of the optics are used to find an optimized set of parameters. For reasonable assumptions, an acceleration gradient of 200 MeV/m requiring a laser power of less than 1 GW is projected. An interesting approach to multi-staging the acceleration sections is also presented. copyright 1997 American Institute of Physics
FOGWELL, T.W.; LAST, G.V.
2003-01-01
The estimation of the flux of contaminants through the vadose zone to the groundwater under varying geologic, hydrologic, and chemical conditions is key to making technically credible and sound decisions regarding soil site characterization and remediation, single-shell tank retrieval, and waste site closures (DOE 2000). One of the principal needs identified in the science and technology roadmap (DOE 2000) is to improve the conceptual and numerical models that describe the location of contaminants today and to provide the basis for forecasting future movement of contaminants on both site-specific and site-wide scales. The State of Knowledge (DOE 1999) and Preliminary Concepts documents describe the importance of geochemical processes in the transport of contaminants through the vadose zone. These processes have been identified in the international list of Features, Events, and Processes (FEPs) (NEA 2000) and included in the list of FEPs currently being developed for Hanford Site assessments (Soler et al. 2001). The current vision for Hanford site-wide cumulative risk assessments as performed using the System Assessment Capability (SAC) is to represent contaminant adsorption using the linear isotherm (empirical distribution coefficient, K_d) sorption model. Integration Project Expert Panel (PEP) comments indicate that work is required to adequately justify the applicability of the linear sorption model, and to identify and defend the range of K_d values that are adopted for assessments. The work plans developed for the Science and Technology (S and T) efforts, SAC, and the Core Projects must answer directly the question ''Is there a scientific basis for the application of the linear sorption isotherm model to the complex wastes of the Hanford Site?'' This paper is intended to address these issues. The reason that well-documented justification is required for using the linear sorption (K_d) model is that this approach is strictly empirical and is often
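For context, the linear (K_d) isotherm enters transport calculations through the standard retardation factor R = 1 + ρ_b K_d / θ. This is a textbook relation, not something specific to the Hanford work plans, and the parameter values below are illustrative:

```python
def retardation_factor(kd_ml_per_g, bulk_density_g_per_cm3, porosity):
    """Linear-isotherm retardation: R = 1 + rho_b * K_d / theta.
    Units: K_d in mL/g (= cm^3/g), bulk density in g/cm^3,
    porosity dimensionless, so R is dimensionless."""
    return 1.0 + bulk_density_g_per_cm3 * kd_ml_per_g / porosity

# Illustrative sediment: K_d = 1 mL/g, rho_b = 1.5 g/cm^3, theta = 0.3
R = retardation_factor(kd_ml_per_g=1.0, bulk_density_g_per_cm3=1.5, porosity=0.3)
```

A contaminant with these parameters moves 6 times more slowly than the pore water, which shows why the defensible range of K_d values dominates flux forecasts.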
Revil, Andre [Colorado School of Mines, Golden, CO (United States)
2013-01-15
Understanding the influence of coupled biological, chemical, and hydrological processes on subsurface contaminant behavior at multiple scales is a prerequisite for developing effective remedial approaches, whether they are active remediation or natural attenuation strategies. To develop this understanding, methods are needed that can measure critical components of the natural system in real time. The self-potential method corresponds to the passive measurement of the distribution of the electrical potential at the surface of the Earth or in boreholes. This method is very complementary to other geophysical methods such as DC resistivity and induced polarization. In this report, we summarize research efforts to advance the theory of low-frequency geoelectrical methods and their applications to the contaminant plumes in the vicinity of the former S-3 settling basins at Oak Ridge, TN.
Inverse M-matrices and ultrametric matrices
Dellacherie, Claude; San Martin, Jaime
2014-01-01
The study of M-matrices, their inverses and discrete potential theory is now a well-established part of linear algebra and the theory of Markov chains. The main focus of this monograph is the so-called inverse M-matrix problem, which asks for a characterization of nonnegative matrices whose inverses are M-matrices. We present an answer in terms of discrete potential theory based on the Choquet-Deny Theorem. A distinguished subclass of inverse M-matrices is ultrametric matrices, which are important in applications such as taxonomy. Ultrametricity is revealed to be a relevant concept in linear algebra and discrete potential theory because of its relation with trees in graph theory and mean expected value matrices in probability theory. Remarkable properties of Hadamard functions and products for the class of inverse M-matrices are developed and probabilistic insights are provided throughout the monograph.
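The central phenomenon can be checked numerically on a toy matrix. By the Martínez-Michon-San Martín theorem, a symmetric strictly ultrametric matrix is nonsingular and its inverse is an M-matrix (positive diagonal, nonpositive off-diagonal entries); the 3x3 example below is made up to satisfy the hypotheses:

```python
import numpy as np

# Symmetric strictly ultrametric: u_ij >= min(u_ik, u_kj) for all i, j, k,
# and each diagonal entry strictly dominates its row, u_ii > max_{j!=i} u_ij.
U = np.array([[3.0, 1.0, 1.0],
              [1.0, 3.0, 2.0],
              [1.0, 2.0, 3.0]])
Uinv = np.linalg.inv(U)
offdiag = Uinv[~np.eye(3, dtype=bool)]
# The theorem predicts Uinv is an M-matrix (in fact a strictly
# diagonally dominant Stieltjes matrix): offdiag <= 0, diagonal > 0.
```

The ultrametric pattern (rows 2 and 3 "closer" to each other than to row 1) mirrors the tree structure mentioned in the abstract: U is the mean-expected-value matrix of a two-leaf cluster hanging off a root.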
Retrieving rupture history using waveform inversions in time sequence
Yi, L.; Xu, C.; Zhang, X.
2017-12-01
The rupture history of large earthquakes is generally reconstructed by waveform inversion using seismological waveform records. In the waveform inversion, based on the superposition principle, the rupture process is linearly parameterized. After discretizing the fault plane into sub-faults, the local source time function of each sub-fault is usually parameterized using the multi-time-window method, e.g., mutually overlapping triangular functions. The forward waveform of each sub-fault is then synthesized by convolving the source time function with its Green's function. According to the superposition principle, the forward waveforms generated from the fault plane are summed into the recorded waveforms after aligning the arrival times. The slip history is then retrieved with the waveform inversion method after superposing all forward waveforms for each corresponding seismological waveform record. Apart from the isolation of the forward waveforms generated from each sub-fault, we also recognize that these waveforms are gradually and sequentially superimposed in the recorded waveforms. Thus we propose the idea that the rupture model may be separable into sequential rupture times. According to the constrained-waveform-length method emphasized in our previous work, the length of the inverted waveforms used in the waveform inversion is objectively constrained by the rupture velocity and rise time. One essential prior condition is the predetermined fault plane that limits the duration of the rupture, which means the waveform inversion is restricted to a pre-set rupture duration. Therefore, we propose a strategy to invert for the rupture process sequentially, using progressively shifted rupture times as the rupture front expands across the fault plane. We have designed a synthetic inversion test of the feasibility of the method. Our test result shows the promise of this idea, which requires further investigation.
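The linear parameterization described above (forward waveform = sum over sub-faults of the convolution of each source time function with its Green's function) can be sketched as follows; the triangular STF and delayed-impulse Green's function are toy stand-ins, not data from the study:

```python
import numpy as np

def synthesize(stfs, greens):
    """Linear forward model: u(t) = sum_k conv(STF_k, G_k).
    All sub-fault series are assumed sampled at the same rate."""
    n = len(stfs[0]) + len(greens[0]) - 1
    u = np.zeros(n)
    for s, g in zip(stfs, greens):
        u += np.convolve(s, g)
    return u

stf = np.array([0.0, 0.5, 1.0, 0.5, 0.0])  # triangular source time function
green = np.zeros(8)
green[2] = 1.0                             # impulse Green's function, 2-sample delay
u = synthesize([stf], [green])             # the STF appears delayed by 2 samples
```

Because the forward operator is linear in the STF samples, inversion reduces to a (possibly constrained) least-squares problem, and restricting the inverted waveform length corresponds to truncating rows of this convolution system.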
Testing earthquake source inversion methodologies
Page, Morgan T.
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, the assumed fault geometry and velocity structure, and the chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.
Masuda, Y; Misztal, I; Legarra, A; Tsuruta, S; Lourenco, D A L; Fragomeni, B O; Aguilar, I
2017-01-01
This paper evaluates an efficient implementation to multiply the inverse of a numerator relationship matrix for genotyped animals () by a vector (). The computation is required for solving mixed model equations in single-step genomic BLUP (ssGBLUP) with the preconditioned conjugate gradient (PCG). The inverse can be decomposed into sparse matrices that are blocks of the sparse inverse of a numerator relationship matrix () including genotyped animals and their ancestors. The elements of were rapidly calculated with the Henderson's rule and stored as sparse matrices in memory. Implementation of was by a series of sparse matrix-vector multiplications. Diagonal elements of , which were required as preconditioners in PCG, were approximated with a Monte Carlo method using 1,000 samples. The efficient implementation of was compared with explicit inversion of with 3 data sets including about 15,000, 81,000, and 570,000 genotyped animals selected from populations with 213,000, 8.2 million, and 10.7 million pedigree animals, respectively. The explicit inversion required 1.8 GB, 49 GB, and 2,415 GB (estimated) of memory, respectively, and 42 s, 56 min, and 13.5 d (estimated), respectively, for the computations. The efficient implementation required <1 MB, 2.9 GB, and 2.3 GB of memory, respectively, and <1 sec, 3 min, and 5 min, respectively, for setting up. Only <1 sec was required for the multiplication in each PCG iteration for any data sets. When the equations in ssGBLUP are solved with the PCG algorithm, is no longer a limiting factor in the computations.
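The block decomposition behind this implementation can be sketched in dense linear algebra. If the genotyped animals occupy the trailing block of a relationship matrix A, the standard block-inverse identity gives inv(A22) @ v = A^22 v - A^21 solve(A^11, A^12 v), where A^ij are blocks of A^-1 (sparse via Henderson's rule in the real code). The matrix below is a made-up SPD stand-in, and the dense solves replace the paper's sparse ones:

```python
import numpy as np

rng = np.random.default_rng(1)
n, ng = 8, 3                        # total animals; genotyped = last ng
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)         # stand-in SPD "relationship" matrix
Ainv = np.linalg.inv(A)             # sparse in ssGBLUP (Henderson's rule)
A11, A12 = Ainv[:-ng, :-ng], Ainv[:-ng, -ng:]
A21, A22 = Ainv[-ng:, :-ng], Ainv[-ng:, -ng:]

v = rng.standard_normal(ng)
# inv(A[genotyped, genotyped]) @ v without ever forming that inverse:
result = A22 @ v - A21 @ np.linalg.solve(A11, A12 @ v)
direct = np.linalg.solve(A[-ng:, -ng:], v)   # reference answer
```

Since only sparse matrix-vector products and one sparse solve per PCG iteration are needed, memory stays near the size of the sparse blocks rather than the dense inverse, which is the gap the paper quantifies (megabytes versus terabytes).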
Optimization for nonlinear inverse problem
Boyadzhiev, G.; Brandmayr, E.; Pinat, T.; Panza, G.F.
2007-06-01
The nonlinear inversion of geophysical data in general does not yield a unique solution, but a single model representing the investigated field is preferred for an easy geological interpretation of the observations. The analyzed region is constituted by a number of sub-regions where the multi-valued nonlinear inversion is applied, which leads to a multi-valued solution. Therefore, combining the values of the solution in each sub-region, many acceptable models are obtained for the entire region, and this complicates the geological interpretation of geophysical investigations. This paper presents new methodologies capable of selecting, among all acceptable models, one model that satisfies different criteria of smoothness in the explored space of solutions. In this work we focus on the nonlinear inversion of surface-wave dispersion curves, which gives structural models of shear-wave velocity versus depth, but the basic concepts have general validity. (author)
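One simple instance of such a smoothness criterion (illustrative only; the paper's criteria and model space are richer) is to pick, among all acceptable shear-wave velocity profiles, the one minimizing a second-difference roughness norm:

```python
import numpy as np

def roughness(model):
    """Sum of squared second differences: a curvature-style
    smoothness measure over a layered velocity profile."""
    return float(np.sum(np.diff(model, n=2) ** 2))

def smoothest(models):
    """Select the acceptable model with minimal roughness.
    'models' is the set produced by the multi-valued inversion."""
    return min(models, key=roughness)

smooth = np.linspace(2.5, 4.5, 10)               # Vs (km/s), gradual increase
rough = smooth + 0.3 * (-1.0) ** np.arange(10)   # oscillating alternative
best = smoothest([rough, smooth])                # picks the gradual profile
```

Any monotone combination of such criteria can be swapped into `key=`, which is how a single preferred model can be extracted from the many sub-region combinations without changing the inversion itself.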
Shin, Boo Young; Han, Do Hung
2014-01-01
The aim of this study was to compatibilize an immiscible polyamide 6 (PA6)/linear low-density polyethylene (LLDPE) blend by using an electron-beam-initiated mediation process. Glycidyl methacrylate (GMA) was chosen as a mediator for cross-copolymerization at the interface between PA6 and LLDPE. The exposure process was carried out to initiate cross-copolymerization, mediated by GMA, at the interface between PA6 and LLDPE. The PA6/LLDPE/GMA mixture was prepared using a twin-screw extruder and then exposed to electron-beam radiation at various doses at room temperature. To investigate the results of this compatibilization strategy, the morphological and mechanical properties of the blend were analyzed. The morphology study revealed that the diameters of the dispersed particles decreased and the interfacial adhesion increased with irradiation dose. The elongation at break of the blends increased significantly with increasing irradiation dose up to 100 kGy, while the tensile strength and the modulus increased nonlinearly with increasing irradiation dose. The reaction mechanisms of the mediation process with the GMA mediator at the interface between PA6 and LLDPE were estimated. - Highlights: • PA6/LLDPE blend was compatibilized by the electron-beam-initiated mediation process. • Interfacial adhesion was significantly enhanced by the radiation-initiated cross-copolymerization. • The elongation at break of the blend irradiated at 100 kGy was 4 times higher than that of PA6. • GMA as a mediator played a key role in the electron-beam-initiated mediation process
Dolinsky, Margaret
2006-02-01
This paper discusses a potential methodology for creating perceptual shifts in virtual reality (VR) environments. A perceptual shift is a cognitive recognition of having experienced something extra-marginal, on the boundaries of normal awareness, outside of conditioned attenuation. Definitions of perceptual shifts demonstrate a historical tradition of wonder at devices, and various categories of sensory and optical illusions are analyzed. Neuroscience and cognitive science attempt to explain perceptual shifts through biological and perceptual mechanisms. This paper explores perspective, illusion and projection to situate an artistic process in terms of perceptual shifts. Most VR environments rely on a single perceptual shift, while enormous potential for perceptual shifts in VR remains. Examples of artwork and VR environments develop and present this idea.
P. Y. Rogov
2015-09-01
Full Text Available The paper deals with a mathematical model of the linear and nonlinear processes occurring during the propagation of femtosecond laser pulses in the vitreous of the human eye. Computational modeling methods are applied to solve the nonlinear spectral equation describing the dynamics of two-dimensional TE-polarized radiation in a homogeneous isotropic medium with cubic fast-response nonlinearity, without using the slowly varying envelope approximation. Media close to the optical parameters of the eye were used for the simulation. The model of femtosecond radiation propagation takes into account the dynamics of dispersion broadening of pulses in time and the occurrence of self-focusing near the retina when passing through the vitreous body of the eye. The dependence of the pulse duration on the retina on the duration of the input pulse has been revealed, and the values of power density at which self-focusing occurs have been found. It is shown that the main mechanism of radiation damage with the use of a titanium-sapphire laser is photoionization. The results coincide with those obtained by other scientists and are usable for creating Russian laser safety standards for femtosecond laser systems.
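The dispersion broadening mentioned above follows, for a transform-limited Gaussian pulse, the textbook relation tau(z) = tau0 * sqrt(1 + (z/L_D)^2) with dispersion length L_D = tau0^2 / |beta2|. The parameter values below are illustrative round numbers, not the paper's eye-media values:

```python
import numpy as np

def broadened_duration(tau0_fs, beta2_fs2_per_mm, z_mm):
    """Gaussian-pulse GVD broadening (textbook relation).
    tau0 in fs, beta2 in fs^2/mm, propagation distance z in mm."""
    L_D = tau0_fs ** 2 / abs(beta2_fs2_per_mm)   # dispersion length, mm
    return tau0_fs * np.sqrt(1.0 + (z_mm / L_D) ** 2)

# Illustrative: a 30 fs pulse after 45 mm in a medium with beta2 = 20 fs^2/mm,
# i.e. exactly one dispersion length, broadens by a factor sqrt(2).
tau = broadened_duration(tau0_fs=30.0, beta2_fs2_per_mm=20.0, z_mm=45.0)
```

This is why the pulse duration at the retina depends nontrivially on the input duration: shorter input pulses have shorter L_D and can arrive longer than moderately stretched ones.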
Vilaragut Llanes, J.J.; Ferro Fernandez, R.; Rodriguez Marti, M.; Ramirez, M.L.; Perez Mulas, A.; Barrientos Montero, M.; Ortiz Lopez, P.; Somoano, F.; Delgado Rodriguez, J.M.; Papadopulos, S.B.; Pereira, P.P. Jr.; Lopez Morones, R.; Larrinaga Cortinai, E.; Rivero Oliva, J.J.; Alemany, J.
2008-01-01
This paper presents the results of the Probabilistic Safety Assessment (PSA) of the radiotherapy treatment process with an Electron Linear Accelerator (LINAC) for medical uses, which was conducted in the framework of the Extrabudgetary Programme on Nuclear and Radiological Safety in Ibero-America. The PSA tools were used to evaluate occupational, public and medical exposures during treatment. The study focused on the radiological protection of patients. Equipment failure modes and human errors were evaluated for each system and treatment phase by FMEA. The aim was to obtain an exhaustive list of deviations with a reasonable probability of occurrence that might produce significant adverse outcomes. Separate event trees were constructed for each initiating-event group. Each event tree had a different structure, since the initiating events were grouped according to mitigation requirements. Fault tree models were constructed for each top event and developed down to the level of components. In addition to hardware faults, the fault trees included human errors associated with the response to accidents and human errors associated with the treatment. Each accident sequence was quantified. The combination of the initiating event and top events through a fault tree was the method used to analyse the accident sequences. After combining the appropriate models, a Boolean reduction was performed by computer software to produce sequence cut sets. Several findings concerning the treatment process were analysed, and the study proposed safety recommendations to address them. (author)
Galbraith, R.F.; Laslett, G.M.; Green, P.F.; Duddy, I.R.
1990-01-01
Spontaneous fission of uranium atoms over geological time creates a random process of linearly shaped features (fission tracks) inside an apatite crystal. The theoretical distributions associated with this process are governed by the elapsed time and temperature history, but other factors are also reflected in empirical measurements as consequences of sampling by plane section and chemical etching. These include geometrical biases leading to over-representation of long tracks, the shape and orientation of host features when sampling totally confined tracks, and 'gaps' in heavily annealed tracks. We study the estimation of geological parameters in the presence of these factors using measurements on both confined tracks and projected semi-tracks. Of particular interest is a history of sedimentation, uplift and erosion giving rise to a two-component mixture of tracks in which the parameters reflect the current temperature, the maximum temperature and the timing of uplift. A full likelihood analysis based on all measured densities, lengths and orientations is feasible, but because some geometrical biases and measurement limitations are only partly understood it seems preferable to use conditional likelihoods given numbers and orientations of confined tracks. (author)
Vilaragut Llanes, Juan Jose; Fernandez, Ruben Ferro; Ortiz Lopez, Pedro
2009-01-01
Radiation safety assessments have traditionally been based on analyzing the lessons learned from events as they become known. Although these methods are very valuable, their main limitation is that they cover only known events and do not consider other possible failures that have occurred but have not been published, which does not mean they cannot occur. Other tools analyze safety prospectively, among them Probabilistic Safety Assessment (PSA). This paper summarizes the project of the Ibero-American Forum of Radiological and Nuclear Regulatory Agencies aimed at applying PSA methods to the treatment process with a linear accelerator. Accidental exposures of both a single patient and multiple patients were defined as the unintended consequences. The FMEA methodology was used to define accident initiating events, and event tree and fault tree methods were used to identify the accident sequences that may occur. Once the frequency of occurrence of the accident sequences was quantified, importance analyses were performed to determine the most significant events from the safety point of view. We identified 158 equipment failure modes and 295 human errors that, if they occurred, would have the potential to cause the accidental exposures defined. We studied 118 accident initiating events, 120 barriers and 434 accident sequences. The accidental exposure of a single patient was 40 times more likely than that of multiple patients. 100% of the total frequency of accidental exposures of a single patient is caused by human errors. For multiple patients, 8% of the total frequency of accidental exposures may be initiated by equipment failures (computerized tomography, treatment planning system, linear accelerator) and 92% by human errors. As part of the conclusions and recommendations, the study presents the events that contribute most to reducing the risk of accidental exposure. (author)
Parameterization analysis and inversion for orthorhombic media
Masmoudi, Nabil
2018-05-01
Accounting for azimuthal anisotropy is necessary for the processing and inversion of wide-azimuth and wide-aperture seismic data, because wave speeds naturally depend on the wave propagation direction. Orthorhombic anisotropy is considered the most effective anisotropic model that approximates the azimuthal anisotropy we observe in seismic data. In the framework of full waveform inversion (FWI), the large number of parameters describing orthorhombic media exerts a considerable trade-off and increases the non-linearity of the inversion problem. Choosing a suitable parameterization for the model, and identifying which parameters in that parameterization could be well resolved, are essential to a successful inversion. In this thesis, I derive the radiation patterns for different acoustic orthorhombic parameterizations. Analyzing the angular dependence of the scattering of the parameters of different parameterizations, starting with the conventionally used notation, I assess the potential trade-off between the parameters and the resolution in describing the data and inverting for the parameters. In order to build practical inversion strategies, I suggest new parameters (called deviation parameters) for a new parameterization style in orthorhombic media. The novel parameters, denoted εd, ηd and δd, are dimensionless and represent a measure of deviation between the vertical planes in orthorhombic anisotropy. The main feature of the deviation parameters is that they keep the scattering of the vertical transversely isotropic (VTI) parameters stationary with azimuth. Using these scattering features, we can condition FWI to invert for the parameters to which the data are sensitive, at different stages, scales, and locations in the model. With this parameterization, the data are mainly sensitive to the scattering of 3 parameters (out of the six that describe an acoustic orthorhombic medium): the horizontal velocity in the x1 direction, ε1, which provides scattering mainly near
Seismic inverse scattering in the downward continuation approach
Stolk, C.C.; de Hoop, M.V.
Seismic data are commonly modeled by a linearization around a smooth background medium in combination with a high frequency approximation. The perturbation of the medium coefficient is assumed to contain the discontinuities. This leads to two inverse problems, first the linearized inverse problem
Galindo, I.; Romero, M. C.; Sánchez, N.; Morales, J. M.
2016-06-01
Risk management stakeholders in highly populated volcanic islands should be provided with the latest high-quality volcanic information. We present here the first volcanic susceptibility map of Lanzarote and the Chinijo Islands and their submarine flanks, based on updated chronostratigraphic and volcano-structural data, as well as on the geomorphological analysis of the bathymetric data of the submarine flanks. The role of the structural elements in the volcanic susceptibility analysis has been reviewed: vents have been considered since they indicate where previous eruptions took place; eruptive fissures provide information about the stress field, as they are the superficial expression of the dyke conduit; eroded dykes have been discarded since they are single non-feeder dykes intruded in deep parts of Miocene-Pliocene volcanic edifices; main faults have been taken into account only in those cases where they could modify the superficial movement of magma. The application of kernel density estimation via a linear diffusion process for the volcanic susceptibility assessment has been applied successfully to Lanzarote and could be applied to other fissure volcanic fields worldwide, since the results provide information not only about the probable area where an eruption could take place but also about the main direction of the probable volcanic fissures.
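The susceptibility-mapping step can be illustrated with a plain Gaussian kernel density estimate over vent coordinates. This is a simplified stand-in for the diffusion-based KDE used in the study, and the coordinates and bandwidth are hypothetical:

```python
import numpy as np

def vent_density(vents, grid, bandwidth):
    """Isotropic Gaussian KDE: density at each grid point from the
    (x, y) positions of past vents. vents: (m, 2), grid: (p, 2), km."""
    d2 = ((grid[:, None, :] - vents[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2)).sum(1) / (
        2.0 * np.pi * bandwidth ** 2 * len(vents))

vents = np.array([[0.0, 0.0], [1.0, 0.2], [0.8, -0.1]])  # hypothetical vents
grid = np.array([[0.5, 0.0],    # point inside the vent cluster
                 [5.0, 5.0]])   # point far from any vent
dens = vent_density(vents, grid, bandwidth=0.5)
```

The relative density values, normalized over the island grid, are what get read as eruption susceptibility; aligning an anisotropic kernel with the dominant fissure azimuth would capture the directional information the abstract mentions.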
Fan-Yun Pai
2015-11-01
Full Text Available To consistently produce high-quality products, a quality management system, such as ISO 9001:2000 or TS 16949, must be practically implemented. One core instrument of TS 16949, MSA (Measurement System Analysis), is to rank the capability of a measurement system and ensure that the quality characteristics of the product are likely to be maintained through the whole manufacturing process. It is important to reduce the risk of Type I errors (acceptable goods are misjudged as defective parts) and Type II errors (defective parts are misjudged as good parts). An ideal measuring system would have the statistical characteristic of zero error, but such a system can hardly exist. Hence, to maintain better control of the variance that might occur in the manufacturing process, MSA is necessary for better quality control. Ball screws, which are a key component in precision machines, have significant attributes with respect to positioning and transmission. Failures of lead accuracy and axial gap of a ball screw can cause negative and expensive effects on machine positioning accuracy. Consequently, a functional measurement system can yield great savings by detecting Type I and Type II errors. If the measurement system fails with respect to the specification of the product, it will likely make Type I and Type II misjudgments. Inspectors normally follow the MSA regulations for accuracy measurement, but the choice of measuring system does not merely depend on some simple indices. In this paper, we examine the stability of a measuring system by using a Monte Carlo simulation to establish the bias and linearity variance of the normal distribution and the probability density function. Further, we forecast the possible area distribution in the real case. After the simulation, the measurement capability is improved, which helps the user classify the measurement system and establish measurement regulations for better performance and monitoring of the precision of the ball screw.
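The Type I/Type II framing can be sketched with a small Monte Carlo: simulate a gauge with systematic bias and repeatability noise against specification limits and count misjudgments. All numbers below are illustrative, not data from the ball-screw study:

```python
import numpy as np

def misjudge_rates(true_vals, bias, gauge_sd, lsl, usl, rng):
    """Measured = true + bias + repeatability noise; compare the
    accept/reject decision against the true conformance status."""
    measured = true_vals + bias + rng.normal(0.0, gauge_sd, true_vals.shape)
    truly_good = (true_vals >= lsl) & (true_vals <= usl)
    judged_good = (measured >= lsl) & (measured <= usl)
    type1 = np.mean(truly_good & ~judged_good)   # good part rejected
    type2 = np.mean(~truly_good & judged_good)   # bad part accepted
    return type1, type2

rng = np.random.default_rng(7)
parts = rng.normal(10.0, 0.02, 100_000)          # hypothetical lead values (mm)
t1, t2 = misjudge_rates(parts, bias=0.005, gauge_sd=0.01,
                        lsl=9.95, usl=10.05, rng=rng)
```

Sweeping `bias` and `gauge_sd` over the ranges established by the MSA study turns the abstract's qualitative argument into cost estimates for the two error types.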
Frömer, Romy; Maier, Martin; Abdel Rahman, Rasha
2018-01-01
Here we present an application of an EEG processing pipeline customizing EEGLAB and FieldTrip functions, specifically optimized to flexibly analyze EEG data based on single-trial information. The key component of our approach is to create a comprehensive 3-D EEG data structure including all trials and all participants, maintaining the original order of recording. This allows straightforward access to subsets of the data based on any information available in a behavioral data structure matched with the EEG data (experimental conditions, but also performance indicators such as accuracy or RTs of single trials). In the present study we exploit this structure to compute linear mixed models (LMMs, using lmer in R) including random intercepts and slopes for items. This information can easily be read out from the matched behavioral data, whereas it might not be accessible in traditional ERP approaches without substantial effort. We further provide easily adaptable scripts for performing cluster-based permutation tests (as implemented in FieldTrip) as a more robust alternative to traditional omnibus ANOVAs. Our approach is particularly advantageous for data with parametric within-subject covariates (e.g., performance) and/or multiple complex stimuli (such as words, faces or objects) that vary in features affecting cognitive processes and ERPs (such as word frequency, salience or familiarity), which are sometimes hard to control experimentally or might themselves constitute variables of interest. The present dataset was recorded from 40 participants who performed a visual search task on previously unfamiliar objects, presented either visually intact or blurred. MATLAB as well as R scripts are provided that can be adapted to different datasets.
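The trials-first idea can be illustrated with plain containers: every trial keeps its recording order and carries its matched behavioral row, so subsets are selected by any covariate. The field names and toy dimensions below are assumptions for illustration, not the pipeline's actual MATLAB/EEGLAB format.

```python
import random

random.seed(0)
n_trials, n_channels, n_samples = 12, 4, 8

# One entry per trial, in original recording order, with matched behavior.
trials = [{
    "subject": 1,
    "trial_index": i,                      # preserves recording order
    "condition": "blurred" if i % 2 else "intact",
    "accuracy": random.random() > 0.2,     # matched behavioral data
    "rt": random.uniform(0.3, 1.2),
    "eeg": [[random.gauss(0, 1) for _ in range(n_samples)]
            for _ in range(n_channels)],   # channels x time
} for i in range(n_trials)]

# Select single trials by behavioral criteria, e.g. correct and fast:
fast_correct = [t for t in trials if t["accuracy"] and t["rt"] < 0.8]

# Per-condition ERP: average the selected trials sample by sample.
def erp(subset, channel):
    return [sum(t["eeg"][channel][s] for t in subset) / len(subset)
            for s in range(n_samples)]

intact = [t for t in trials if t["condition"] == "intact"]
print(len(fast_correct), len(intact), len(erp(intact, 0)))
```

The same selection logic is what makes single-trial covariates directly available to an LMM, instead of being averaged away as in a traditional ERP analysis.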
Wei, Haiqiao; Zhao, Wanhui; Zhou, Lei; Chen, Ceyuan; Shu, Gequn
2018-03-01
Large eddy simulation coupled with the linear eddy model (LEM) is employed for the simulation of n-heptane spray flames to investigate the low-temperature ignition and combustion process in a constant-volume combustion vessel under diesel-engine-relevant conditions. Parametric studies are performed to give a comprehensive understanding of the ignition processes. The non-reacting case is first carried out to validate the present model by comparing the predicted results with the experimental data from the Engine Combustion Network (ECN). Good agreement is observed in terms of liquid and vapour penetration length, as well as the mixture fraction distributions at different times and different axial locations. For the reacting cases, the flame index is introduced to distinguish between premixed and non-premixed combustion. A reaction region (RR) parameter is used to investigate the ignition and combustion characteristics, and to distinguish the different combustion stages. Results show that the two-stage combustion process can be identified in spray flames, and different ignition positions in the mixture fraction versus RR space are well described at low and high initial ambient temperatures. At an initial condition of 850 K, the first-stage ignition is initiated in the fuel-lean region, followed by reactions in fuel-rich regions. High-temperature reaction then occurs mainly at places with mixture concentration around the stoichiometric mixture fraction. At an initial temperature of 1000 K, by contrast, the first-stage ignition occurs in the fuel-rich region first and then moves towards richer regions, after which the high-temperature reactions move back to the stoichiometric mixture fraction region. For all of the initial temperatures considered, high-temperature ignition kernels are initiated in regions richer than the stoichiometric mixture fraction. By increasing the initial ambient temperature, the high-temperature ignition kernels move towards richer
A simple inversion of induced-polarization data collected in the Haenam area of Korea
Jang, Hannuree; Park, Samgyu; Kim, Hee Joon
2014-01-01
We develop a two-stage method to invert induced polarization (IP) data. First, DC resistivity data are inverted to recover a background resistivity, which is used to generate a sensitivity matrix for the IP inversion. The second stage accepts the background resistivity as the true resistivity of the medium and attempts to find a polarizability that satisfies the IP data. This is done by linearizing the equations about the background resistivity to produce a linear inverse problem that can be solved for the distribution of subsurface polarizability. Smoothness and base-model constraints are used to stabilize the IP inversion process. These regularization methods are validated by inverting both synthetic and field data obtained in the Haenam epithermal mineralized area, Korea. As a result, the IP anomaly recovered with the base-model constraint indicates that fine-grained pyrite is disseminated in a shallow zone beneath the ridge of this site, which is confirmed by core samples. (paper)
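The second, linear stage can be sketched as a smoothness-regularized least-squares solve of the normal equations (GᵀG + αWᵀW)m = Gᵀd, where G is the sensitivity matrix from the background resistivity and W a first-difference smoother. The sensitivities, weights, and damping below are toy values, not real IP physics.

```python
# Minimal sketch of a smoothness-regularized linear IP inversion step.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

G = [[1.0, 0.5, 0.2], [0.4, 1.0, 0.5], [0.1, 0.4, 1.0]]   # toy sensitivities
m_true = [0.0, 0.1, 0.0]                                   # toy polarizability
d = [sum(g * m for g, m in zip(row, m_true)) for row in G]  # synthetic IP data

W = [[-1.0, 1.0, 0.0], [0.0, -1.0, 1.0]]   # first-difference smoother
alpha = 1e-3                                # regularization weight
Gt, Wt = transpose(G), transpose(W)
A = matmul(Gt, G)
R = matmul(Wt, W)
A = [[A[i][j] + alpha * R[i][j] for j in range(3)] for i in range(3)]
rhs = [sum(Gt[i][j] * d[j] for j in range(3)) for i in range(3)]
m_est = solve(A, rhs)
print(m_est)
```

With small α the recovered model stays close to the data-fitting solution; increasing α trades data fit for smoothness, which is the stabilizing role the abstract describes.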
A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion
CUI, C.; Hou, W.
2017-12-01
Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time, phase and so on. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion easily fall into local minima. In addition, 3D inversion methods based on the acoustic approximation ignore the elastic effects of the real seismic field, making inversion harder. As a result, the accuracy of the final inversion result relies heavily on the quality of the initial model. To improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequencies, is applied. However, very low frequencies are often absent from field data, which limits this strategy; we therefore combine envelope inversion with a hybrid-domain FWI, with forward modeling in the time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques with two levels of parallelism. In the first level, the inversion tasks are decomposed and assigned to each computation node by shot number. In the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI obtains a more faithful and accurate result than conventional hybrid-domain FWI, and that the CPU/GPU heterogeneous parallel computation improves the computational speed.
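Envelope extraction, one of the per-node computation tasks listed above, can be sketched via the discrete analytic signal: zero the negative frequencies, double the positive ones, and take the magnitude. This pure-Python O(N²) DFT version is a didactic stand-in for the FFT-based routine a real code would use.

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def envelope(x):
    """|analytic signal|: zero negative frequencies, double positive ones."""
    N = len(x)  # assumed even here
    X = dft(x)
    H = [1.0] + [2.0] * (N // 2 - 1) + [1.0] + [0.0] * (N // 2 - 1)
    z = idft([Xk * h for Xk, h in zip(X, H)])
    return [abs(v) for v in z]

N = 64
carrier = [math.cos(2 * math.pi * 8 * n / N) for n in range(N)]  # pure tone
env = envelope(carrier)
print(max(abs(e - 1.0) for e in env))  # envelope of a unit-amplitude tone is ~1
```

The envelope varies much more slowly than the carrier, which is why envelope-based misfits can supply the missing ultra-low-frequency information that conventional FWI lacks.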
MARE2DEM: a 2-D inversion code for controlled-source electromagnetic and magnetotelluric data
Key, Kerry
2016-10-01
This work presents MARE2DEM, a freely available code for 2-D anisotropic inversion of magnetotelluric (MT) data and frequency-domain controlled-source electromagnetic (CSEM) data from onshore and offshore surveys. MARE2DEM parametrizes the inverse model using a grid of arbitrarily shaped polygons, where unstructured triangular or quadrilateral grids are typically used due to their ease of construction. Unstructured grids provide significantly more geometric flexibility and parameter efficiency than the structured rectangular grids commonly used by most other inversion codes. Transmitter and receiver components located on topographic slopes can be tilted parallel to the boundary so that the simulated electromagnetic fields accurately reproduce the real survey geometry. The forward solution is implemented with a goal-oriented adaptive finite-element method that automatically generates and refines unstructured triangular element grids that conform to the inversion parameter grid, ensuring accurate responses as the model conductivity changes. This dual-grid approach is significantly more efficient than the conventional use of a single grid for both the forward and inverse meshes since the more detailed finite-element meshes required for accurate responses do not increase the memory requirements of the inverse problem. Forward solutions are computed in parallel with a highly efficient scaling by partitioning the data into smaller independent modeling tasks consisting of subsets of the input frequencies, transmitters and receivers. Non-linear inversion is carried out with a new Occam inversion approach that requires fewer forward calls. Dense matrix operations are optimized for memory and parallel scalability using the ScaLAPACK parallel library. Free parameters can be bounded using a new non-linear transformation that leaves the transformed parameters nearly the same as the original parameters within the bounds, thereby reducing non-linear smoothing effects. Data
Recurrent Neural Network for Computing Outer Inverse.
Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin
2016-05-01
Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
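The flavor of such matrix-valued dynamics can be sketched with a simple gradient flow that converges to an ordinary inverse; this is an illustrative analogue under a zero initial state, not the paper's recurrent network for outer inverses with prescribed range and null space.

```python
# Euler-discretized gradient flow dX/dt = -g * A^T (A X - I), whose
# equilibrium for this well-conditioned A is X = A^{-1}.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(r) for r in zip(*A)]

A = [[2.0, 1.0], [0.0, 3.0]]
I = [[1.0, 0.0], [0.0, 1.0]]
X = [[0.0, 0.0], [0.0, 0.0]]          # zero initial state
g, dt, steps = 1.0, 0.05, 4000
At = transpose(A)
for _ in range(steps):
    R = matmul(A, X)                   # A X
    E = [[R[i][j] - I[i][j] for j in range(2)] for i in range(2)]
    G = matmul(At, E)                  # gradient A^T (A X - I)
    X = [[X[i][j] - g * dt * G[i][j] for j in range(2)] for i in range(2)]

AX = matmul(A, X)
print(AX)  # approaches the identity
```

Stability here depends on the step size relative to the spectrum of AᵀA, mirroring the spectral conditions the abstract mentions for the first network.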
Masatoshi Hasegawa
2017-10-01
Full Text Available This paper reviews the development of new high-temperature polymeric materials applicable to plastic substrates in image display devices, with a focus on our previous results. Novel solution-processable colorless polyimides (PIs) with ultra-low linear coefficients of thermal expansion (CTE) are proposed in this paper. First, the principles of the coloration of PI films are briefly discussed, including the influence of the processing conditions on the film coloration, as well as the chemical and physical factors dominating the low-CTE characteristics of the resultant PI films, to clarify the challenges in simultaneously achieving excellent optical transparency, a very high Tg, a very low CTE, and excellent film toughness. A possible approach to achieving these target properties is to use semi-cycloaliphatic PI systems consisting of linear chain structures. However, semi-cycloaliphatic PIs obtained using cycloaliphatic diamines suffer various problems during precursor polymerization, cyclodehydration (imidization), and film preparation. In particular, when using trans-1,4-cyclohexanediamine (t-CHDA) as the cycloaliphatic diamine, a serious problem emerges: salt formation in the initial stages of the precursor polymerization, which terminates the polymerization in some cases or significantly extends the reaction period. The system derived from 3,3′,4,4′-biphenyltetracarboxylic dianhydride (s-BPDA) and t-CHDA can be polymerized by a controlled heating method and leads to a PI film with relatively good properties, i.e., excellent light transmittance at 400 nm (T400 = ~80%), a high Tg (>300 °C), and a very low CTE (10 ppm·K−1). However, this PI film is somewhat brittle (the maximum elongation at break, εb max, is about 10%). On the other hand, the combination of cycloaliphatic tetracarboxylic dianhydrides and aromatic diamines does not result in salt formation. The steric structures of cycloaliphatic tetracarboxylic dianhydrides significantly influence
Advanced linear algebra for engineers with Matlab
Dianat, Sohail A
2009-01-01
Matrices, Matrix Algebra, and Elementary Matrix Operations: Basic Concepts and Notation; Matrix Algebra; Elementary Row Operations; Solution of Systems of Linear Equations; Matrix Partitions; Block Multiplication; Inner, Outer, and Kronecker Products. Determinants, Matrix Inversion, and Solutions to Systems of Linear Equations: Determinant of a Matrix; Matrix Inversion; Solution of Simultaneous Linear Equations; Applications: Circuit Analysis; Homogeneous Coordinates System; Rank, Nu
Feng, Huihua; Guo, Chendong; Jia, Boru; Zuo, Zhengxing; Guo, Yuyao; Roskilly, Tony
2016-01-01
Highlights: • The intermediate process of the free-piston linear generator is investigated for the first time. • The "gradually switching strategy" is the best strategy for the intermediate process. • Switching at the top dead center position has the least influence on the free-piston linear generator. • After the intermediate process, the operation parameter values are smaller than those before the intermediate process. - Abstract: The free-piston linear generator (FPLG) has more merits than traditional reciprocating engines (TRE) and has been under extensive investigation. Researchers have mainly investigated the starting process and the stable generating process of the FPLG, while there has not been any report on the intermediate process from engine cold start-up to stable operation. Therefore, this paper investigated the intermediate process of the FPLG in terms of switching strategy and switching position, based on simulation and test results. Results showed that when the motor force of the linear electric machine (LEM) declined gradually from 100% to 0% with an interval of 50%, and then changed to a resistance force opposing the piston velocity (generator mode), the operation parameters of the FPLG showed minimal changes. Meanwhile, the engine operated more smoothly when the LEM switched its working mode from motor to generator at the piston dead center, compared with switching at mid-stroke or at a random time. More importantly, after the intermediate process, the operation parameters of the FPLG were smaller than before the intermediate process. As a result, a gradual motor/generator switching strategy is recommended, and the LEM is suggested to switch its working mode when the piston arrives at its dead center in order to achieve smooth engine operation.
Bilinear Inverse Problems: Theory, Algorithms, and Applications
Ling, Shuyang
We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts, self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of the joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both the simple linear least squares approach and the SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. Explicit theoretical
Shilov, Georgi E
1977-01-01
Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.
Etim, E; Basili, C [Rome Univ. (Italy). Ist. di Matematica]
1978-08-21
The lagrangian in the path integral solution of the master equation of a stationary Markov process is derived by applying the Ehrenfest-type theorem of quantum mechanics and the Cauchy method of finding inverse functions. Applying this to the non-linear Fokker-Planck equation, the authors reproduce the result obtained by integrating over Fourier series coefficients and by other methods.
Ligier, Nicolas; Carter, John; Poulet, François; Langevin, Yves; Dumas, Christophe; Gourgeot, Florian
2016-04-01
Jupiter's moon Europa harbors a very young surface, dated to 10-50 Myr based on cratering rates (Zahnle et al. 1998, Pappalardo et al. 1999). This young age implies rapid surface recycling and reprocessing, partially engendered by a global salty subsurface liquid ocean that could result in tectonic activity (Schmidt et al. 2011, Kattenhorn et al. 2014) and active plumes (Roth et al. 2014). The surface of Europa should contain important clues about the composition of this subsurface briny ocean and about the potential presence of material of exobiological interest in it, thus reinforcing Europa as a major target of interest for upcoming space missions such as the ESA L-class mission JUICE. To investigate the composition of the surface of Europa, a global mapping campaign of the satellite was performed between October 2011 and January 2012 with the integral field spectrograph SINFONI on the Very Large Telescope (VLT) in Chile. The high spectral binning of this instrument (0.5 nm) is suitable for detecting any narrow mineral signature in the wavelength range 1.45-2.45 μm. The spatially resolved spectra we obtained over five epochs nearly cover the entire surface of Europa with a pixel scale of 12.5 by 25 mas (~35 by 70 km on Europa's surface), thus permitting a global-scale study. Until recently, a large majority of studies proposed only sulfate salts, along with sulfuric acid hydrate and water ice, to be present on Europa's surface. However, recent works based on Europa's surface coloration in the visible wavelength range and NIR spectral analysis support the hypothesis of the predominance of chlorine salts instead of sulfate salts (Hand & Carlson 2015, Fischer et al. 2015). Our linear spectral modeling supports this new hypothesis insofar as the use of Mg-bearing chlorine salts improved the fits regardless of the region. As expected, the distribution of sulfuric acid hydrate is correlated with the Iogenic sulfur ion implantation flux distribution (Hendrix et al
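Linear spectral modeling of this kind reduces, per pixel, to a least-squares fit of endmember spectra to the observed spectrum. The two "endmembers" below are synthetic stand-ins, not real water-ice or chloride reflectance data.

```python
# Minimal sketch of linear spectral unmixing with two endmembers,
# solved via the 2x2 normal equations in closed form.
ice  = [0.9, 0.7, 0.4, 0.6, 0.8]   # toy endmember spectra (5 wavelengths)
salt = [0.5, 0.5, 0.6, 0.7, 0.7]

a_true, b_true = 0.65, 0.35
obs = [a_true * i + b_true * s for i, s in zip(ice, salt)]  # synthetic pixel

Sii = sum(i * i for i in ice)
Sss = sum(s * s for s in salt)
Sis = sum(i * s for i, s in zip(ice, salt))
Sio = sum(i * o for i, o in zip(ice, obs))
Sso = sum(s * o for s, o in zip(salt, obs))
det = Sii * Sss - Sis * Sis
a = (Sio * Sss - Sis * Sso) / det   # recovered abundance of endmember 1
b = (Sii * Sso - Sis * Sio) / det   # recovered abundance of endmember 2
print(round(a, 3), round(b, 3))
```

A real application fits many laboratory endmembers (with grain-size variants) and compares residuals across candidate mineral sets, which is how the chlorine-salt versus sulfate-salt hypotheses are weighed.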
de Oliveira, Luciana Renata; Bazzani, Armando; Giampieri, Enrico; Castellani, Gastone C
2014-08-14
We propose a non-equilibrium thermodynamical description in terms of the Chemical Master Equation (CME) to characterize the dynamics of a chemical cycle chain reaction among m different species. These systems can be closed or open to the exchange of energy and molecules with the environment, which determines how they relax to the stationary state. Closed systems reach an equilibrium state (characterized by the detailed balance (D.B.) condition), while open systems reach a non-equilibrium steady state (NESS). The principal difference between D.B. and NESS is the presence of chemical fluxes: in the D.B. condition the fluxes are absent, while in the NESS case the chemical fluxes are necessary to maintain the state. All biological systems are characterized by their "far from equilibrium behavior," hence the NESS is a good candidate for a realistic description of the dynamical and thermodynamical properties of living organisms. In this work we consider a CME written in terms of a discrete Kolmogorov forward equation, which leads us to write the non-equilibrium chemical fluxes explicitly. For systems in NESS, we show that there is a non-conservative "external vector field" that is linearly proportional to the chemical fluxes. We also demonstrate that the modulation of these external fields does not change their stationary distributions, which ensures that we can study the same system and outline the differences in its behavior when it switches from the D.B. regime to NESS. We are interested in how the non-equilibrium fluxes influence the relaxation process toward the stationary distribution. By performing analytical and numerical analysis, our central result is that the presence of the non-equilibrium chemical fluxes reduces the characteristic relaxation time with respect to the D.B. condition. Within a biochemical and biological perspective, this result can be related to the "plasticity property" of biological systems and to their
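The detailed-balance versus NESS distinction can be illustrated on a minimal three-state cycle with forward rate p and backward rate q: p = q satisfies detailed balance (no flux), while p ≠ q gives a steady state carrying a net cycle flux. The rates below are illustrative, and the sketch only shows the flux contrast, not the paper's relaxation-time result.

```python
# Euler-integrate the master equation dp_i/dt = (inflow) - (outflow)
# for a 3-state cycle; both cases relax to the uniform stationary state.
def relax(p, q, steps=20000, dt=1e-3):
    prob = [1.0, 0.0, 0.0]                  # start concentrated on state 0
    for _ in range(steps):
        new = prob[:]
        for i in range(3):
            fwd, back = (i + 1) % 3, (i - 1) % 3
            out = (p + q) * prob[i]
            inp = p * prob[back] + q * prob[fwd]
            new[i] += dt * (inp - out)      # master equation, Euler step
        prob = new
    flux = prob[0] * p - prob[1] * q        # net probability flux 0 -> 1
    return prob, flux

db_prob, db_flux = relax(p=0.5, q=0.5)      # detailed balance
ness_prob, ness_flux = relax(p=0.9, q=0.1)  # biased cycle -> NESS
print(db_prob, db_flux)
print(ness_prob, ness_flux)
```

Both runs converge to the same stationary distribution, but only the biased cycle sustains a non-zero flux, which is exactly the feature the abstract identifies as distinguishing NESS from equilibrium.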
Hatch, Andrew G; Smith, Ralph C; De, Tathagata; Salapaka, Murti V
2005-01-01
In this paper, we illustrate the construction of inverse filters, based on homogenized energy models, which can be used to approximately linearize the piezoceramic transducer behavior for linear...
Antonov, Y.; Zhuravleva, I.; Cardinaels, R.M.; Moldenaers, P.
2017-01-01
We study thermal aggregation and disaggregation processes in complex carrageenan/lysozyme systems with a different linear charge density of the sulphated polysaccharide. To this end, we determine the temperature dependency of the turbidity and the intensity size distribution functions in complex
Deboeck, Pascal R.; Boker, Steven M.; Bergeman, C. S.
2008-01-01
Among the many methods available for modeling intraindividual time series, differential equation modeling has several advantages that make it promising for applications to psychological data. One interesting differential equation model is that of the damped linear oscillator (DLO), which can be used to model variables that have a tendency to…
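The DLO model is the second-order differential equation x'' = η·x + ζ·x', with η < 0 setting the frequency and ζ < 0 the damping. A minimal Euler-integration sketch with illustrative coefficients:

```python
# Simulate a damped linear oscillator and check that the amplitude
# of the trajectory decays over time.
eta, zeta = -4.0, -0.5          # frequency and damping parameters (assumed)
x, v = 1.0, 0.0                 # initial displacement and velocity
dt = 0.001
trace = []
for step in range(20000):       # 20 units of simulated time
    a = eta * x + zeta * v      # DLO: acceleration from position and velocity
    x, v = x + dt * v, v + dt * a
    trace.append(x)

early = max(abs(val) for val in trace[:2000])   # amplitude in the first 2 units
late = max(abs(val) for val in trace[-2000:])   # amplitude in the last 2 units
print(early, late)
```

In an intraindividual time-series application, η and ζ would be estimated from the data rather than fixed, with ζ capturing how quickly the person returns to baseline after a perturbation.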
Morrow, A; Rangaraj, D; Perez-Andujar, A; Krishnamurthy, N
2016-01-01
Purpose: This work’s objective is to determine the overlap of processes, in terms of sub-processes and time, between acceptance testing and commissioning of a conventional medical linear accelerator and to evaluate the time saved by consolidating the two processes. Method: A process map for acceptance testing of medical linear accelerators was created from vendor documentation (Varian and Elekta). Using AAPM TG-106 and in-house commissioning procedures, a process map was created for commissioning of said accelerators. The time to complete each sub-process in each process map was evaluated. Redundancies in the processes were found and the time spent on each was calculated. Results: Mechanical testing significantly overlaps between the two processes - redundant work here amounts to 9.5 hours. Many non-scanning beam dosimetry tests overlap, resulting in another 6 hours of overlap. Beam scanning overlaps somewhat - acceptance tests include evaluating PDDs and multiple profiles for only one field size, while commissioning beam scanning includes multiple field sizes and depths of profiles. This overlap results in another 6 hours of rework. Absolute dosimetry, field outputs, and end-to-end tests are not done at all in acceptance testing. Finally, all imaging tests done in acceptance are repeated in commissioning, resulting in about 8 hours of rework. The total time overlap between the two processes is about 30 hours. Conclusion: The process mapping done in this study shows that there are no tests done in acceptance testing that are not also recommended for commissioning. This results in about 30 hours of redundant work when preparing a conventional linear accelerator for clinical use. Considering these findings in the context of the 5000 linacs in the United States, consolidating acceptance testing and commissioning would have allowed for the treatment of an additional 25000 patients using no additional resources.
Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.
2012-04-01
We consider new techniques and methods for earthquake- and tsunami-related problems, particularly inverse problems for the determination of tsunami source parameters, numerical simulation of long-wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issues of database management and destruction-scenario visualization. New approaches and strategies, as well as mathematical tools and software, are shown. The long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation for tsunamis and earthquakes. Algorithms are developed for the operational determination of the origin and form of the tsunami source. The TSS system numerically simulates the tsunami and/or earthquake source and makes it possible to solve both the direct and the inverse problem. It becomes possible to apply advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors as well as optimal computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use the optimization approach to solve it and SVD analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to
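The SVD analysis of ill-posedness can be shown in miniature on a 2×2 system: the ratio of singular values quantifies the ill-conditioning, a naive solve amplifies a tiny data perturbation, and a small Tikhonov damping yields a stable quasi-solution. Matrix and data are toy values, not a tsunami model.

```python
import math

A = [[1.0, 1.0], [1.0, 1.0001]]          # nearly rank-deficient system
b = [2.0, 2.0001]                         # exact solution is (1, 1)

# Singular values of a 2x2 matrix from eigenvalues of A^T A (closed form).
g11 = A[0][0]**2 + A[1][0]**2
g22 = A[0][1]**2 + A[1][1]**2
g12 = A[0][0]*A[0][1] + A[1][0]*A[1][1]
tr, det = g11 + g22, g11 * g22 - g12**2
disc = math.sqrt(tr * tr - 4 * det)
s1 = math.sqrt((tr + disc) / 2)
s2 = math.sqrt((tr - disc) / 2)
cond = s1 / s2                            # degree of ill-posedness
print(f"singular values {s1:.4f}, {s2:.2e}; condition number {cond:.1e}")

def solve2(M, rhs, alpha=0.0):
    """Tikhonov-damped 2x2 solve: (M^T M + alpha I) x = M^T rhs."""
    h11 = M[0][0]**2 + M[1][0]**2 + alpha
    h22 = M[0][1]**2 + M[1][1]**2 + alpha
    h12 = M[0][0]*M[0][1] + M[1][0]*M[1][1]
    r1 = M[0][0]*rhs[0] + M[1][0]*rhs[1]
    r2 = M[0][1]*rhs[0] + M[1][1]*rhs[1]
    d = h11 * h22 - h12**2
    return [(r1 * h22 - h12 * r2) / d, (h11 * r2 - h12 * r1) / d]

noisy = [b[0] + 1e-4, b[1] - 1e-4]        # tiny data perturbation
naive = solve2(A, noisy)                   # wildly amplified error
quasi = solve2(A, noisy, alpha=1e-4)       # damped quasi-solution, near (1, 1)
print(naive, quasi)
```

In the real system the same diagnosis is done on the discretized wave-equation operator, and the quasi-solution is found by optimization rather than an explicit formula.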
Madsen, Line Meldgaard; Fiandaca, Gianluca; Auken, Esben; Christiansen, Anders Vest
2017-12-01
The application of time-domain induced polarization (TDIP) is increasing with advances in acquisition techniques, data processing and spectral inversion schemes. An inversion of TDIP data for the spectral Cole-Cole parameters is a non-linear problem, but by applying a 1-D Markov chain Monte Carlo (MCMC) inversion algorithm, a full non-linear uncertainty analysis of the parameters and the parameter correlations can be accessed. This is essential for understanding to what degree the spectral Cole-Cole parameters can be resolved from TDIP data. MCMC inversions of synthetic TDIP data, which show bell-shaped probability distributions with a single maximum, show that the Cole-Cole parameters can be resolved from TDIP data if an acquisition range above two decades in time is applied. Linear correlations between the Cole-Cole parameters are observed and, by decreasing the acquisition ranges, the correlations increase and become non-linear. It is further investigated how waveform and parameter values influence the resolution of the Cole-Cole parameters. A limiting factor is the value of the frequency exponent, C. As C decreases, the resolution of all the Cole-Cole parameters decreases and the results become increasingly non-linear. While the values of the time constant, τ, must be in the acquisition range to resolve the parameters well, the choice between a 50 per cent and a 100 per cent duty cycle for the current injection does not have an influence on the parameter resolution. The limits of resolution and linearity are also studied in a comparison between the MCMC and a linearized gradient-based inversion approach. The two methods are consistent for resolved models, but the linearized approach tends to underestimate the uncertainties for poorly resolved parameters due to the corresponding non-linear features. Finally, an MCMC inversion of 1-D field data verifies that spectral Cole-Cole parameters can also be resolved from TD field measurements.
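The spectral Cole-Cole model that the inversion targets can be written as a short forward function for the complex resistivity, ρ(ω) = ρ₀[1 − m(1 − 1/(1 + (iωτ)^C))]. The parameter values below are illustrative, not from the paper.

```python
# Cole-Cole complex resistivity: rho0 is the DC resistivity, m the
# chargeability, tau the time constant, c the frequency exponent.
def cole_cole(omega, rho0, m, tau, c):
    iwt = (1j * omega * tau) ** c
    return rho0 * (1 - m * (1 - 1 / (1 + iwt)))

rho0, m, tau, c = 100.0, 0.2, 0.01, 0.5   # illustrative parameters
lo = cole_cole(1e-6, rho0, m, tau, c)     # low-frequency limit -> rho0
hi = cole_cole(1e9, rho0, m, tau, c)      # high-frequency limit -> rho0*(1-m)
print(abs(lo), abs(hi))
```

The two limits show why τ must fall inside the acquisition range: the data must sample the transition between ρ₀ and ρ₀(1 − m), and a small C flattens that transition, degrading the resolution of all four parameters.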
Teaching Linear Algebra: Proceeding More Efficiently by Staying Comfortably within Z
Beaver, Scott
2015-01-01
For efficiency in a linear algebra course the instructor may wish to avoid the undue arithmetical distractions of rational arithmetic. In this paper we explore how to write fraction-free problems of various types including elimination, matrix inverses, orthogonality, and the (non-normalizing) Gram-Schmidt process.
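One standard way to build fraction-free exercises of the kind described, which we assume here for illustration (it is not necessarily the paper's construction), is to start from a unimodular integer matrix: applying integer row operations to the identity keeps the determinant at 1, so the inverse is again integral.

```python
import numpy as np

rng = np.random.default_rng(0)

def unimodular(n, ops=8):
    """Integer matrix with determinant 1, built by random integer row
    operations on the identity; its inverse is therefore integral."""
    a = np.eye(n, dtype=np.int64)
    for _ in range(ops):
        i, j = rng.choice(n, size=2, replace=False)
        a[i] += rng.integers(-2, 3) * a[j]   # row addition: det unchanged
    return a

a = unimodular(4)
inv = np.round(np.linalg.inv(a)).astype(np.int64)   # exact integer inverse
```

Students can then row-reduce `a` or invert it entirely within Z, with no rational arithmetic appearing at any step.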
Acute puerperal uterine inversion
Hussain, M.; Liaquat, N.; Noorani, K.; Bhutta, S.Z; Jabeen, T.
2004-01-01
Objective: To determine the frequency, causes, clinical presentations, management and maternal mortality associated with acute puerperal inversion of the uterus. Materials and Methods: All the patients who developed acute puerperal inversion of the uterus either in or outside the JPMC were included in the study. Patients of chronic uterine inversion were not included in the present study. Abdominal and vaginal examination was done to confirm and classify inversion into first, second or third degrees. Results: 57036 deliveries and 36 acute uterine inversions occurred during the study period, so the frequency of uterine inversion was 1 in 1584 deliveries. Mismanagement of the third stage of labour was responsible for uterine inversion in 75% of patients. The majority of the patients presented with shock, either hypovolemic (69%) or neurogenic (13%) in origin. Manual replacement of the uterus under general anaesthesia with 2% halothane was successfully done in 35 patients (97.5%). Abdominal hysterectomy was done in only one patient. There were three maternal deaths due to inversion. Conclusion: Proper education and training regarding placental delivery, diagnosis and management of uterine inversion must be imparted to maternity care providers, especially to traditional birth attendants and family physicians, to prevent this potentially life-threatening condition. (author)
Marchi, Daniel E.; Menghini, Jorge E.; Trimarco, Viviana G.
1999-01-01
The inverse co-precipitation method has been used at the laboratory level to produce uranium - gadolinium mixed oxides. The formation of a mixed phase in the precipitates has been determined as well as the occurrence of only one phase in the sintered pellets, corresponding to a gadolinium - uranium solution. Moreover, a modification in the calcination-reduction stage was introduced that allows the elimination of the fissures previously detected in the sintered pellets
Surface layer temperature inversion in the Bay of Bengal
Pankajakshan, T.; Gopalakrishna, V.V.; Muraleedharan, P.M.; Reddy, G.V.; Araligidad, N.
Hydrographic and XBT data archived in the Indian Oceanographic Data Centre (IODC) are used to understand the process of temperature inversions occurring in the Bay of Bengal. The following aspects of the inversions are addressed: i) annual...
Natanael Antonio dos Santos
2002-01-01
The goal of this work is to discuss some basic conceptual aspects of Fourier analysis as the tool underlying the perspective of multiple spatial-frequency filters or channels in the study of the visual processing of form. Some of the main psychophysical paradigms used to characterize the response of the human visual system to narrow-band spatial-frequency filters are also discussed. Linear systems analysis and these psychophysical paradigms have contributed to the theoretical development of perception and of the visual processing of form.
General inverse problems for regular variation
Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan
2014-01-01
Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...
Bayesian inversion of refraction seismic traveltime data
Ryberg, T.; Haberland, Ch
2018-03-01
We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when using the far-offset observations, are known to have experimental geometries which are very poor, highly ill-posed and far from ideal. As a consequence, the structural resolution quickly degrades with depth. Conventional inversion techniques, based on regularization, potentially suffer from the choice of appropriate inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and only local model space exploration. McMC techniques are used for exhaustive sampling of the model space without the need of prior knowledge (or assumptions) of inversion parameters, resulting in a large number of models fitting the observations. Statistical analysis of these models allows us to derive an average (reference) solution and its standard deviation, thus providing uncertainty estimates of the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not allow us to derive a reference solution and error map by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution in the posterior values due to poor ray coverage, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey from Northern Namibia and compared to conventional tomography. An inversion test
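The Metropolis-type sampling at the core of such an McMC inversion can be illustrated on a toy one-parameter traveltime problem; the forward model, noise level and proposal width below are invented for the sketch and are not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy forward problem: traveltime t = slowness * offset (single homogeneous layer).
offsets = np.linspace(1.0, 10.0, 20)
true_slowness = 0.25                                  # s/km
data = true_slowness * offsets + rng.normal(0.0, 0.01, offsets.size)
sigma = 0.01                                          # assumed data noise

def log_likelihood(s):
    r = data - s * offsets
    return -0.5 * np.sum((r / sigma) ** 2)

# Metropolis sampling of the posterior (uniform prior on [0, 1]).
s, samples = 0.5, []
ll = log_likelihood(s)
for _ in range(20000):
    prop = s + rng.normal(0.0, 0.005)                 # random-walk proposal
    if 0.0 < prop < 1.0:
        ll_prop = log_likelihood(prop)
        if np.log(rng.random()) < ll_prop - ll:       # accept/reject
            s, ll = prop, ll_prop
    samples.append(s)

post = np.array(samples[5000:])                       # discard burn-in
mean, std = post.mean(), post.std()                   # reference solution + uncertainty
```

The mean and standard deviation of the retained samples play the role of the average (reference) solution and its uncertainty estimate described in the abstract.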
Inverse logarithmic potential problem
Cherednichenko, V G
1996-01-01
The Inverse and Ill-Posed Problems Series is a series of monographs publishing postgraduate level information on inverse and ill-posed problems for an international readership of professional scientists and researchers. The series aims to publish works which involve both theory and applications in, e.g., physics, medicine, geophysics, acoustics, electrodynamics, tomography, and ecology.
Inverse Kinematics using Quaternions
Henriksen, Knud; Erleben, Kenny; Engell-Nørregård, Morten
In this project I describe the status of inverse kinematics research, with the focus firmly on the methods that solve the core problem. An overview of the different methods is presented. Three common methods used in inverse kinematics computation have been chosen as subjects for closer inspection....
Granger , Geoffroy; Restoin , Christine; Roy , Philippe; Jamier , Raphaël; Rougier , Sébastien; Duclere , Jean-René; Lecomte , André; Dauliat , Romain; Blondy , Jean-Marc
2015-01-01
This paper presents a study of original nanostructured optical fibers based on the SiO2-SnO2-(Yb3+) system. Two different processes have been developed and compared: the sol-gel chemical method associated with "inverse dip-coating" (IDC), and the "powder in tube" (PIT) process. The microstructural and optical properties of the fibers are studied according to the concentration of SnO2. X-Ray Diffraction as well as Transmission Electron Microscopy studies show t...
Full waveform inversion using envelope-based global correlation norm
Oh, Juwon
2018-01-28
Various parameterizations have been suggested to simplify inversions of first arrivals, or P-waves, in orthorhombic anisotropic media, but the number and type of retrievable parameters have not been decisively determined. We show that only six parameters can be retrieved from the dynamic linearized inversion of P-waves. These parameters are different from the six parameters needed to describe the kinematics of P-waves. Reflection-based radiation patterns from the P-P scattered waves are remapped into the spectral domain to allow for our resolution analysis based on the effective angle of illumination concept. Singular value decomposition of the spectral sensitivities from various azimuths, offset coverage scenarios, and data bandwidths allows us to quantify the resolution of different parameterizations, taking into account the signal-to-noise ratio in a given experiment. According to our singular value analysis, when the primary goal of inversion is determining the velocity of the P-waves, gradually adding anisotropy of lower orders (isotropic, vertically transversally isotropic, orthorhombic) in hierarchical parameterization is the best choice. Hierarchical parameterization reduces the tradeoff between the parameters and makes gradual introduction of lower anisotropy orders straightforward. When all the anisotropic parameters affecting P-wave propagation need to be retrieved simultaneously, the classic parameterization of orthorhombic medium with elastic stiffness matrix coefficients and density is a better choice for inversion. We provide estimates of the number and set of parameters that can be retrieved from surface seismic data in different acquisition scenarios. To set up an inversion process, the singular values determine the number of parameters that can be inverted and the resolution matrices from the parameterizations can be used to ascertain the set of parameters that can be resolved.
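The singular-value analysis described above can be sketched with a stand-in sensitivity matrix; the matrix, the near-redundant column, and the threshold tied to signal-to-noise are all illustrative assumptions, not the paper's actual spectral sensitivities:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative Jacobian: rows = observations, columns = the six model parameters.
J = rng.normal(size=(200, 6))
J[:, 5] = J[:, 4] + 1e-3 * rng.normal(size=200)   # two nearly redundant parameters

u, sv, vt = np.linalg.svd(J, full_matrices=False)

# Parameters resolvable above the noise floor: singular values above a
# threshold set (here, arbitrarily) relative to the largest one.
snr_threshold = 1e-2 * sv[0]
n_resolvable = int(np.sum(sv > snr_threshold))

# Resolution matrix R = V_k V_k^T over the kept singular vectors: it shows
# which parameter combinations the data actually constrain.
vk = vt[:n_resolvable]
R = vk.T @ vk
```

The trade-off between two parameters shows up as a tiny singular value, and the corresponding row of the resolution matrix mixes the two columns rather than isolating either.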
Parameterization analysis and inversion for orthorhombic media
Masmoudi, Nabil
2018-01-01
Accounting for azimuthal anisotropy is necessary for the processing and inversion of wide-azimuth and wide-aperture seismic data because wave speeds naturally depend on the wave propagation direction. Orthorhombic anisotropy is considered the most
Heeding the waveform inversion nonlinearity by unwrapping the model and data
Alkhalifah, Tariq Ali; Choi, Yun Seok
2012-01-01
Unlike traveltime inversion, waveform inversion provides relatively higher-resolution inverted models. This feature, however, comes at the cost of introducing complex nonlinearity to the inversion operator complicating the convergence process. We
Olea Gonzalez, Ulises
2007-08-15
time to know the analytical solution of the problem. Supposing that all the variables of the (direct) problem are known, it is possible to predict, by means of inverse methods, the assumed initial condition of the formation in the direct problem. Three inverse methods are explored: the Levenberg-Marquardt method (MLM), a Proportional-Integral (PI) control method, and one referred to as Artificial Intelligence (AI). To obtain the solution, each method uses in its objective function the shut-in, constant-depth well logs, which serve to carry out the adjustment of the formation temperature. The structure of the MLM algorithm, besides considering the initial formation condition, allowed exploration of the eigenvalues and eigenvectors of the Jacobian matrix, so that the variation of the sensitivity parameters could be identified during the convergence process in the estimation of the formation temperature; it was determined that the viscosity of the fluid, the volumetric flow and the circulation losses were the parameters that showed the greatest variation during the optimization process. According to the literature surveyed in the petroleum field, there are no publications that address parametric sensitivity analysis during an optimization process for initial formation conditions. The AI method was an excellent model for comparison purposes, since the convergence criterion was controlled by means of a function S used to model the set of temperature differences in a closed interval [Tmin, Tmax] and to approach the vicinity of the initial formation condition obtained from the MLM, considering that it gives no information on how the thermophysical variables are modified during the iterations.
The PI model was a contribution of the project, since an integral part is added to the proportional model so that it corrects any offset of the error that might occur between the desired temperature value (set point) and the value
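The Levenberg-Marquardt scheme referred to above (damped Gauss-Newton steps with an adaptive damping factor) can be sketched on a toy temperature build-up model; the model form and every parameter value here are placeholders for illustration, not the formation model of the thesis:

```python
import numpy as np

rng = np.random.default_rng(3)

# Placeholder shut-in temperature model: T(t) = Tf - dT * exp(-t / k).
def model(p, t):
    tf, dt, k = p
    return tf - dt * np.exp(-t / k)

def jacobian(p, t):
    tf, dt, k = p
    e = np.exp(-t / k)
    return np.column_stack([np.ones_like(t), -e, -dt * t * e / k**2])

t = np.linspace(0.1, 20.0, 40)
p_true = np.array([120.0, 30.0, 5.0])
data = model(p_true, t) + rng.normal(0.0, 0.05, t.size)

# Levenberg-Marquardt: solve the damped normal equations; lower the damping
# when a step reduces the misfit, raise it when the step is rejected.
p, lam = np.array([100.0, 10.0, 2.0]), 1e-2
for _ in range(100):
    r = data - model(p, t)
    J = jacobian(p, t)
    A = J.T @ J
    step = np.linalg.solve(A + lam * np.diag(np.diag(A)), J.T @ r)
    if np.sum((data - model(p + step, t)) ** 2) < np.sum(r ** 2):
        p, lam = p + step, lam * 0.3      # accept: trust the quadratic model more
    else:
        lam *= 2.0                        # reject: damp harder
```

The eigen-decomposition of the matrix `A = J.T @ J` above is exactly where the thesis's sensitivity analysis attaches: its eigenvectors identify which parameter combinations move the misfit most during convergence.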
Approximation of the inverse G-frame operator
... projection method for G-frames which works for all conditional G-Riesz frames. We also derive a method for approximation of the inverse G-frame operator which is efficient for all G-frames. We show how the inverse of the G-frame operator can be approximated as closely as we like using finite-dimensional linear algebra.
Minimal-Inversion Feedforward-And-Feedback Control System
Seraji, Homayoun
1990-01-01
Recent developments in the theory of control systems support the concept of a minimal-inversion feedforward-and-feedback control system consisting of three independently designable control subsystems. Applicable to the control of a linear, time-invariant plant.
Suwono.
1978-01-01
A linear gate providing a variable gate duration from 0.40 μs to 4 μs was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar; if the input signal is bipolar, the negative portion will be filtered out. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)
Michaud, S.
2001-11-01
This work deals with the operation and start-up of a turbulent bed reactor with Extendospheres as a support, for the anaerobic treatment of a food-process wastewater. A hydrodynamic study was carried out to characterise the liquid flow and mixing with this carrier of small size (147 μm) and low density (0.7). Phase behaviour during fluidizing gas injection can be described by a homogeneous liquid-solid pseudo-fluid whose apparent viscosity depends on the solid concentration. A biological study showed that the initial contact between cells and particles caused a physiological adaptation of microorganisms to the presence of solid after a transitory inhibition of methane production. The methane yield was shown to be an interesting parameter for monitoring bio-film formation and detachment. A low hydraulic retention time during the start-up period was decisive in reducing the lag period during carrier colonization. Robust continuous operation of the reactor was obtained using pH-controlled feeding. Gas velocity was shown to be an important parameter for controlling cell concentration, density and durability of the bio-film. (author)
A method of gravity and seismic sequential inversion and its GPU implementation
Liu, G.; Meng, X.
2011-12-01
In this abstract, we introduce a gravity and seismic sequential inversion method to invert for density and velocity together. For the gravity inversion, we use an iterative method based on a correlation imaging algorithm; for the seismic inversion, we use full waveform inversion. The link between density and velocity is an empirical formula called the Gardner equation, and for large volumes of data we use the GPU to accelerate the computation. For the gravity inversion, the correlation imaging algorithm proceeds iteratively: first we calculate the correlation image of the observed gravity anomaly, which takes values between -1 and +1, and multiply it by a small density to obtain the initial density model. We then compute a forward result with this model, calculate the correlation image of the misfit between the observed and forward data, multiply it by a small density, and add it to the current model; repeating this procedure yields the final inverted density model. For the seismic inversion, we use a method based on the linearization of the acoustic wave equation written in the frequency domain; starting from an initial velocity model, we can obtain a good velocity result. In the sequential inversion of gravity and seismic data, we need a link formula to convert between density and velocity; in our method, we use the Gardner equation. Driven by the insatiable market demand for real-time, high-definition 3D images, the programmable NVIDIA Graphics Processing Unit (GPU) has been developed as a co-processor of the CPU for high-performance computing. Compute Unified Device Architecture (CUDA) is a parallel programming model and software environment provided by NVIDIA, designed to overcome the challenge of traditional general-purpose GPU programming while maintaining a low learning curve for programmers familiar with standard programming languages such as C. In our inversion processing
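The iterative correlation-imaging loop described above can be sketched with a toy linear forward operator; the sensitivity matrix, the increment size, and the extra residual-norm damping factor are our own assumptions for the sketch (a real operator would come from the prism geometry, and the paper uses a fixed small density increment):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy linear gravity forward problem d = G @ rho with a made-up positive
# sensitivity matrix G.
n_data, n_cells = 50, 30
G = np.abs(rng.normal(size=(n_data, n_cells))) / n_cells
rho_true = np.zeros(n_cells)
rho_true[10:15] = 0.4                       # density anomaly, g/cm^3
d_obs = G @ rho_true

def correlation_image(residual):
    """Cell-wise correlation (in [-1, 1]) between a data residual and each
    cell's sensitivity column -- the imaging step described in the abstract."""
    den = np.linalg.norm(G, axis=0) * np.linalg.norm(residual)
    return (G.T @ residual) / np.where(den > 0.0, den, 1.0)

# Iterate: correlate the misfit, scale by a small density, add to the model.
# The ||r|| / ||d_obs|| factor is our addition to keep the fixed increment
# from oscillating once the residual becomes small.
drho = 0.05
rho = np.zeros(n_cells)
for _ in range(200):
    r = d_obs - G @ rho
    rho += drho * correlation_image(r) * (np.linalg.norm(r) / np.linalg.norm(d_obs))
```

Each pass is cheap (one forward model and one correlation per iteration), which is why the method maps naturally onto the GPU acceleration discussed in the abstract.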
Vretenar, M
2014-01-01
The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristic aspects of linac beam dynamics.
Analog fault diagnosis by inverse problem technique
Ahmed, Rania F.
2011-12-01
A novel algorithm for detecting soft faults in linear analog circuits based on the inverse problem concept is proposed. The proposed approach utilizes optimization techniques with the aid of sensitivity analysis. The main contribution of this work is to apply the inverse problem technique to estimate the actual parameter values of the tested circuit and so detect and diagnose a single fault in analog circuits. The validation of the algorithm is illustrated by applying it to a Sallen-Key second-order band-pass filter; the results show that the detection efficiency was 100% and that the maximum percentage error in estimating the parameter values is 0.7%. This technique can be applied to any other linear circuit, and it can also be extended to non-linear circuits. © 2011 IEEE.
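The inverse-problem idea (recover component values from the measured response, then flag deviations from nominal as soft faults) can be sketched on a first-order RC low-pass instead of the paper's Sallen-Key filter; the circuit, noise level and grid search below are stand-ins for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# First-order RC low-pass magnitude response |H(jw)| = 1 / sqrt(1 + (w*R*C)^2).
def gain(rc, w):
    return 1.0 / np.sqrt(1.0 + (w * rc) ** 2)

w = np.logspace(2, 5, 40)                  # measurement frequencies, rad/s
rc_nominal = 1.0e-3                        # nominal R*C product
rc_actual = 1.3e-3                         # soft fault: component drifted +30%
measured = gain(rc_actual, w) * (1.0 + rng.normal(0.0, 0.002, w.size))

# Inverse problem: find the parameter value that best reproduces the
# measurements (plain grid search standing in for the paper's optimizer).
candidates = rc_nominal * np.linspace(0.5, 2.0, 3001)
misfit = [np.sum((measured - gain(rc, w)) ** 2) for rc in candidates]
rc_est = candidates[int(np.argmin(misfit))]

deviation = (rc_est - rc_nominal) / rc_nominal   # large deviation flags the fault
```

A soft fault is declared when the estimated deviation exceeds the component's tolerance band, exactly the decision rule the estimated parameter values enable.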
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of a linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the PRNG's algorithm. When a PRNG has n-bit stages (registers or internal states), the necessary computational cost is smaller than O(2^n). The Berlekamp-Massey algorithm, on the other hand, needs O(N^2), where N (≈ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of linear complexity; therefore, the linear complexity is generally given as an estimate. A linearization method, by contrast, calculates from the algorithm of the PRNG, so it can determine the lower bound of linear complexity.
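As a concrete reference point for the O(N^2) Berlekamp-Massey cost mentioned above, the GF(2) version of the algorithm fits in a few lines (a textbook implementation, not the paper's linearization method):

```python
def berlekamp_massey(s):
    """Linear complexity L of a binary sequence s (Berlekamp-Massey over GF(2));
    the doubly nested loop gives the O(N^2) cost in the sequence length."""
    n = len(s)
    c, b = [0] * n, [0] * n          # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1
    for i in range(n):
        d = s[i]                     # discrepancy with the current LFSR prediction
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t = c[:]
            for j in range(n - i + m):
                c[i - m + j] ^= b[j]
            if 2 * L <= i:           # length change needed
                L, m, b = i + 1 - L, i, t
    return L

# Output of the LFSR with feedback polynomial x^3 + x + 1 (period 7)
# has linear complexity 3.
seq = [1, 0, 0]
for i in range(3, 14):
    seq.append(seq[i - 1] ^ seq[i - 3])
```

Note the dependence on the initial state discussed in the abstract: running this on a different output segment of the same PRNG can yield a different (lower) estimate, which is exactly what the linearization method avoids by working from the algorithm itself.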
Workflows for Full Waveform Inversions
Boehm, Christian; Krischer, Lion; Afanasiev, Michael; van Driel, Martin; May, Dave A.; Rietmann, Max; Fichtner, Andreas
2017-04-01
Despite many theoretical advances and the increasing availability of high-performance computing clusters, full seismic waveform inversions still face considerable challenges regarding data and workflow management. While the community has access to solvers which can harness modern heterogeneous computing architectures, the computational bottleneck has fallen to these often manpower-bounded issues that need to be overcome to facilitate further progress. Modern inversions involve huge amounts of data and require a tight integration between numerical PDE solvers, data acquisition and processing systems, nonlinear optimization libraries, and job orchestration frameworks. To this end we created a set of libraries and applications revolving around Salvus (http://salvus.io), a novel software package designed to solve large-scale full waveform inverse problems. This presentation focuses on solving passive source seismic full waveform inversions from local to global scales with Salvus. We discuss (i) design choices for the aforementioned components required for full waveform modeling and inversion, (ii) their implementation in the Salvus framework, and (iii) how it is all tied together by a usable workflow system. We combine state-of-the-art algorithms ranging from high-order finite-element solutions of the wave equation to quasi-Newton optimization algorithms using trust-region methods that can handle inexact derivatives. All is steered by an automated interactive graph-based workflow framework capable of orchestrating all necessary pieces. This naturally facilitates the creation of new Earth models and hopefully sparks new scientific insights. Additionally, and even more importantly, it enhances reproducibility and reliability of the final results.
Face inversion increases attractiveness.
Leder, Helmut; Goller, Juergen; Forster, Michael; Schlageter, Lena; Paul, Matthew A
2017-07-01
Assessing facial attractiveness is a ubiquitous, inherent, and hard-wired phenomenon in everyday interactions. As such, it is highly adapted to the default way that faces are typically processed: viewing faces in upright orientation. By inverting faces, we can disrupt this default mode and study how facial attractiveness is assessed. Faces, rotated by 90° (tilting to either side) and 180°, were rated on attractiveness and distinctiveness scales. For both orientations, we found that rotated faces were rated as more attractive and less distinctive than upright faces. Importantly, these effects were more pronounced for faces rated low in upright orientation, and smaller for highly attractive faces. In other words, the less attractive a face was, the more it gained in attractiveness by inversion or rotation. Based on these findings, we argue that facial attractiveness assessments might not rely on the presence of attractive facial characteristics, but on the absence of distinctive, unattractive characteristics. These unattractive characteristics are potentially weighed against an individual, attractive prototype in assessing facial attractiveness.
Maruthai Suresh
2010-10-01
A nonlinear process, a heat exchanger whose parameters vary with respect to the process variable, is considered. The time constant and gain of the chosen process vary as functions of temperature. The limitations of the conventional feedback controller, tuned using Ziegler-Nichols settings, for the chosen process are brought out. The servo and regulatory responses through simulation and experimentation for various magnitudes of set-point changes and load changes at various operating points, with the controller tuned only at a chosen nominal operating point, are obtained and analyzed. Regulatory responses for output load changes are studied. The efficiency of the feedforward controller and the effects of modeling error have been brought out. An IMC-based system is presented to understand clearly how variations of system parameters affect the performance of the controller. The present work illustrates the effectiveness of the feedforward and IMC controllers.
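The difficulty described above (a fixed-gain controller facing a plant whose gain and time constant drift with the operating point) can be illustrated with a discrete simulation; the plant model, its temperature dependence, and the controller gains are all invented for the sketch, not tuned values from the paper:

```python
# Discrete PI control of a first-order process whose time constant and gain
# drift mildly with the controlled temperature y (toy heat-exchanger stand-in).
def simulate(kp, ki, setpoint=50.0, dt=0.1, steps=2000):
    y, integral = 20.0, 0.0
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral          # PI control law
        # plant: parameters vary with the operating point
        tau = 5.0 + 0.02 * y
        gain = 1.0 + 0.005 * y
        y += dt * (-(y - 20.0) + gain * u) / tau   # forward-Euler step
    return y

final = simulate(kp=2.0, ki=0.5)
```

The integral term removes the steady-state offset at this operating point; re-running with a different setpoint shows the degraded transients that motivate the feedforward and IMC schemes in the paper.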
Inversions and the dynamics of eukaryotic gene order.
Huynen, M.A.; Snel, B.; Bork, P.
2001-01-01
Comparisons of the gene order in closely related genomes reveal a major role for inversions in the genome shuffling process. In contrast to prokaryotes, where the inversions are predominantly large, half of the inversions between Saccharomyces cerevisiae and Candida albicans appear to be small,
Said-Houari, Belkacem
2017-01-01
This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...
Inverse scattering problems with multi-frequencies
Bao, Gang; Li, Peijun; Lin, Junshan; Triki, Faouzi
2015-01-01
This paper is concerned with computational approaches and mathematical analysis for solving inverse scattering problems in the frequency domain. The problems arise in a diverse set of scientific areas with significant industrial, medical, and military applications. In addition to nonlinearity, there are two common difficulties associated with the inverse problems: ill-posedness and limited resolution (diffraction limit). Due to the diffraction limit, for a given frequency, only a low spatial frequency part of the desired parameter can be observed from measurements in the far field. The main idea developed here is that if the reconstruction is restricted to only the observable part, then the inversion will become stable. The challenging task is how to design stable numerical methods for solving these inverse scattering problems inspired by the diffraction limit. Recently, novel recursive linearization based algorithms have been presented in an attempt to answer the above question. These methods require multi-frequency scattering data and proceed via a continuation procedure with respect to the frequency from low to high. The objective of this paper is to give a brief review of these methods, their error estimates, and the related mathematical analysis. More attention is paid to the inverse medium and inverse source problems. Numerical experiments are included to illustrate the effectiveness of these methods. (topical review)
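The continuation-in-frequency idea, restricting the reconstruction to the currently observable low spatial frequencies and warm-starting each stage with the previous result, can be caricatured in one dimension with exact Fourier data (this is a cartoon of recursive linearization, not the paper's scattering solver):

```python
import numpy as np

# Recover a profile band by band, from low to high frequency.
n = 256
x = np.arange(n)
truth = ((x > 80) & (x < 150)).astype(float)      # unknown "medium"
spectrum = np.fft.fft(truth)

recon = np.zeros(n)
errors = []
for cutoff in (4, 16, 64, 128):
    # only spatial frequencies up to `cutoff` are observable at this stage
    mask = np.minimum(x, n - x) <= cutoff
    update = np.fft.ifft(np.where(mask, spectrum - np.fft.fft(recon), 0.0)).real
    recon = recon + update                        # add the newly observable part
    errors.append(np.linalg.norm(recon - truth))
```

Each stage only ever solves for the part of the model visible at the current frequency, which is the stabilizing restriction to the observable part discussed in the review.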
Sharp spatially constrained inversion
Vignoli, Giulio G.; Fiandaca, Gianluca G.; Christiansen, Anders Vest C A.V.C.
2013-01-01
We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted...... by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes...... inversions are compared against classical smooth results and available boreholes. With the focusing approach, the obtained blocky results agree with the underlying geology and allow for easier interpretation by the end-user....
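The minimum-gradient-support regularization mentioned above penalizes the presence of a gradient rather than its magnitude, which is what produces blocky rather than smeared layer boundaries; a minimal sketch of the stabilizer (the focusing parameter value is illustrative):

```python
import numpy as np

def mgs(model, beta=1e-2):
    """Minimum-gradient-support stabilizer: each nonzero gradient costs
    roughly 1 once it exceeds the focusing parameter beta."""
    g = np.diff(model)
    return np.sum(g ** 2 / (g ** 2 + beta ** 2))

smooth = np.linspace(0.0, 1.0, 101)                 # gradual transition
blocky = np.where(np.arange(101) < 50, 0.0, 1.0)    # one sharp interface
```

Both models span the same range, but the smooth one pays for every one of its small gradients while the blocky one pays for a single interface, so the regularized inversion prefers sharp layer boundaries.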
Inverse Higgs effect in nonlinear realizations
Ivanov, E.A.; Ogievetskij, V.I.
1975-01-01
In theories with nonlinearly realized symmetry it is possible, in a number of cases, to eliminate some of the initial Goldstone and gauge fields by setting appropriate Cartan forms equal to zero. This is called the inverse Higgs phenomenon. We give a general treatment of the inverse Higgs phenomenon for gauge and space-time symmetries and consider four instructive examples: the elimination of unessential gauge fields in chiral symmetry and in nonlinearly realized supersymmetry, and the elimination of unessential Goldstone fields in the spontaneously broken conformal and projective symmetries.
Yamanaka, Tsuyuko; Raffaelli, David; White, Piran C L
2013-01-01
Sea-level rise induced by climate change may have significant impacts on the ecosystem functions and ecosystem services provided by intertidal sediment ecosystems. Accelerated sea-level rise is expected to lead to steeper beach slopes, coarser particle sizes and increased wave exposure, with consequent impacts on intertidal ecosystems. We examined the relationships between abundance, biomass, and community metabolism of benthic fauna with beach slope, particle size and exposure, using samples across a range of conditions from three different locations in the UK, to determine the significance of sediment particle size, beach slope and wave exposure in affecting benthic fauna and ecosystem function in different ecological contexts. Our results show that abundance, biomass and oxygen consumption of intertidal macrofauna and meiofauna are affected significantly by interactions among sediment particle size, beach slope and wave exposure. For macrofauna on less sloping beaches, the effect of these physical constraints is mediated by the local context, although for meiofauna and for macrofauna on intermediate and steeper beaches, the effects of physical constraints dominate. Steeper beach slopes, coarser particle sizes and increased wave exposure generally result in decreases in abundance, biomass and oxygen consumption, but these relationships are complex and non-linear. Sea-level rise is likely to lead to changes in ecosystem structure with generally negative impacts on ecosystem functions and ecosystem services. However, the impacts of sea-level rise will also be affected by local ecological context, especially for less sloping beaches.
Hutka, Stefanie; Bidelman, Gavin M; Moreno, Sylvain
2013-12-30
There is convincing empirical evidence for bidirectional transfer between music and language, such that experience in either domain can improve mental processes required by the other. This music-language relationship has been studied using linear models (e.g., comparing mean neural activity) that conceptualize brain activity as a static entity. The linear approach limits how we can understand the brain's processing of music and language because the brain is a nonlinear system. Furthermore, there is evidence that the networks supporting music and language processing interact in a nonlinear manner. We therefore posit that the neural processing and transfer between the domains of language and music are best viewed through the lens of a nonlinear framework. Nonlinear analysis of neurophysiological activity may yield new insight into the commonalities, differences, and bidirectionality between these two cognitive domains not measurable in the local output of a cortical patch. We thus propose a novel application of brain signal variability (BSV) analysis, based on mutual information and signal entropy, to better understand the bidirectionality of music-to-language transfer in the context of a nonlinear framework. This approach will extend current methods by offering a nuanced, network-level understanding of the brain complexity involved in music-language transfer.
Rotsch, David A. [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Brossard, Tom [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Roussin, Ethan [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Quigley, Kevin [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Chemerisov, Sergey [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Gromov, Roman [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Jonah, Charles [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Hafenrichter, Lohman [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Tkac, Peter [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Krebs, John [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division; Vandegrift, George F. [Argonne National Lab. (ANL), Argonne, IL (United States). Nuclear Engineering Division
2016-10-31
Molybdenum-99, the parent of Tc-99m, can be produced from fission of U-235 in nuclear reactors and purified from fission products by the Cintichem process, later modified for low-enriched uranium (LEU) targets. The key step in this process is the precipitation of Mo with α-benzoin oxime (ABO). The stability of this complex to radiation has been examined. Molybdenum-ABO was irradiated with 3 MeV electrons produced by a Van de Graaff generator and 35 MeV electrons produced by a 50 MeV/25 kW electron linear accelerator. Dose equivalents of 1.7–31.2 kCi of Mo-99 were administered to freshly prepared Mo-ABO. Irradiated samples of Mo-ABO were processed according to the LEU Modified-Cintichem process. The Van de Graaff data indicated good radiation stability of the Mo-ABO complex up to ~15 kCi dose equivalents of Mo-99 and nearly complete destruction at doses >24 kCi Mo-99. The linear accelerator data indicate that even at a dose equivalent of 6.2 kCi of Mo-99, the sample lost ~20% of its Mo-99. The 20% loss of Mo-99 at this low dose may be attributed to thermal decomposition of the product from the heat deposited in the sample during irradiation.
Stefanie Andrea Hutka
2013-12-01
Full Text Available There is convincing empirical evidence for bidirectional transfer between music and language, such that experience in either domain can improve mental processes required by the other. This music-language relationship has been studied using linear models (e.g., comparing mean neural activity) that conceptualize brain activity as a static entity. The linear approach limits how we can understand the brain’s processing of music and language because the brain is a nonlinear system. Furthermore, there is evidence that the networks supporting music and language processing interact in a nonlinear manner. We therefore posit that the neural processing and transfer between the domains of language and music are best viewed through the lens of a nonlinear framework. Nonlinear analysis of neurophysiological activity may yield new insight into the commonalities, differences, and bidirectionality between these two cognitive domains not measurable in the local output of a cortical patch. We thus propose a novel application of brain signal variability (BSV) analysis, based on mutual information and signal entropy, to better understand the bidirectionality of music-to-language transfer in the context of a nonlinear framework. This approach will extend current methods by offering a nuanced, network-level understanding of the brain complexity involved in music-language transfer.
Rogner, H.H.
1989-01-01
The submitted sections on linear programming are extracted from 'Theorie und Technik der Planung' (1978) by W. Blaas and P. Henseler and reformulated for presentation at the Workshop. They provide a brief introduction to the theory of linear programming and to some essential aspects of the SIMPLEX solution algorithm for the purposes of economic planning. 1 fig
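The planning problems the SIMPLEX algorithm addresses can be illustrated with a tiny linear program; the product mix and resource figures below are invented for illustration, and SciPy's `linprog` (which implements simplex-family and interior-point methods) performs the optimization:

```python
from scipy.optimize import linprog

# Toy production-planning problem (all numbers invented for illustration):
# maximize profit 3*x1 + 5*x2 subject to resource limits.
# linprog minimizes, so the objective is negated.
c = [-3.0, -5.0]
A_ub = [[1.0, 0.0],   # machine A hours: x1 <= 4
        [0.0, 2.0],   # machine B hours: 2*x2 <= 12
        [3.0, 2.0]]   # labour hours:   3*x1 + 2*x2 <= 18
b_ub = [4.0, 12.0, 18.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimal plan and its profit
```

Any planning model of this shape (linear objective, linear resource constraints) is solvable the same way; only the coefficient matrices change.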
Kuznetsov, N.; Maz'ya, V.; Vainberg, B.
2002-08-01
This book gives a self-contained and up-to-date account of mathematical results in the linear theory of water waves. The study of waves has many applications, including the prediction of behavior of floating bodies (ships, submarines, tension-leg platforms etc.), the calculation of wave-making resistance in naval architecture, and the description of wave patterns over bottom topography in geophysical hydrodynamics. The first section deals with time-harmonic waves. Three linear boundary value problems serve as the approximate mathematical models for these types of water waves. The next section uses a plethora of mathematical techniques in the investigation of these three problems. The techniques used in the book include integral equations based on Green's functions, various inequalities between the kinetic and potential energy and integral identities which are indispensable for proving the uniqueness theorems. The so-called inverse procedure is applied to constructing examples of non-uniqueness, usually referred to as 'trapped modes'.
Stoll, R R
1968-01-01
Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be both useful to students of mathematics and those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understanding.
Inverse problem in hydrogeology
Carrera, Jesús; Alcolea, Andrés; Medina, Agustín; Hidalgo, Juan; Slooten, Luit J.
2005-03-01
The state of the groundwater inverse problem is synthesized. Emphasis is placed on aquifer characterization, where modelers have to deal with conceptual model uncertainty (notably spatial and temporal variability), scale dependence, many types of unknown parameters (transmissivity, recharge, boundary conditions, etc.), nonlinearity, and often low sensitivity of state variables (typically heads and concentrations) to aquifer properties. Because of these difficulties, calibration cannot be separated from the modeling process, as is sometimes done in other fields. Instead, it should be viewed as one step in the process of understanding aquifer behavior. In fact, it is shown that current parameter estimation methods do not differ from each other in essence, though they may differ in computational details. It is argued that there is ample room for improvement in groundwater inversion: development of user-friendly codes, accommodation of variability through geostatistics, incorporation of geological information and different types of data (temperature, occurrence and concentration of isotopes, age, etc.), and proper accounting of uncertainty. Despite this, even with existing codes, automatic calibration greatly facilitates the task of modeling. Therefore, it is contended that its use should become standard practice.
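Automatic calibration of the kind advocated above reduces, in its simplest form, to nonlinear least squares between simulated and observed state variables. A minimal sketch with a hypothetical one-parameter forward model (a Thiem-type steady drawdown curve; the pumping rate, radius of influence and parameter values are invented, and this is not any specific code mentioned in the review):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Hypothetical forward model: steady-state Thiem drawdown around a pumping well,
# s(r) = Q / (2*pi*T) * ln(R / r), with transmissivity T the unknown parameter.
Q, R = 0.01, 500.0                       # pumping rate [m3/s], radius of influence [m]
r_obs = np.array([5.0, 10.0, 25.0, 50.0, 100.0])   # observation-well distances [m]

def drawdown(T, r):
    return Q / (2.0 * np.pi * T) * np.log(R / r)

T_true = 1e-3                            # [m2/s], the value calibration should recover
s_obs = drawdown(T_true, r_obs) + rng.normal(0.0, 0.01, r_obs.size)  # noisy heads

# Calibrate: minimize residuals between simulated and observed drawdowns.
fit = least_squares(lambda T: drawdown(T[0], r_obs) - s_obs, x0=[5e-4])
print(fit.x[0])
```

Real codes differ mainly in the forward model (a full flow/transport simulator), regularization, and how uncertainty is propagated, not in this basic residual-minimization structure.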
Esteban Moyano, Fernando; Vasilyeva, Nadezda; Menichetti, Lorenzo
2016-04-01
Soil carbon models developed over the last couple of decades are limited in their capacity to accurately predict the magnitudes and temporal variations in observed carbon fluxes and stocks. New process-based models are now emerging that attempt to address the shortcomings of their more simple, empirical counterparts. While a spectrum of ideas and hypothetical mechanisms are finding their way into new models, the addition of only a few processes known to significantly affect soil carbon (e.g. enzymatic decomposition, adsorption, Michaelis-Menten kinetics) has shown the potential to resolve a number of previous model-data discrepancies (e.g. priming, Birch effects). Through model-data validation, such models are a means of testing hypothetical mechanisms. In addition, they can lead to new insights into what soil carbon pools are and how they respond to external drivers. In this study we develop a model of soil carbon dynamics based on enzymatic decomposition and other key features of process-based models, i.e. simulation of carbon in particulate, soluble and adsorbed states, as well as enzyme and microbial components. Here we focus on understanding how moisture affects C decomposition at different levels, either directly (e.g. by limiting diffusion) or through interactions with other components. As the medium where most reactions and transport take place, water is central in every aspect of soil C dynamics. We compare results from a number of alternative models with experimental data in order to test different processes and parameterizations. Among other observations, we try to understand: 1. typical moisture response curves and associated temporal changes, 2. moisture-temperature interactions, and 3. diffusion effects under changing C concentrations. While the model aims at being a process-based approach and at simulating fluxes at short time scales, it remains a simplified representation using the same inputs as classical soil C models, and is thus potentially
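The enzymatic-decomposition core shared by such process-based models can be sketched as a Michaelis-Menten flux scaled by a moisture modifier. The functional forms and parameter values below are illustrative assumptions, not the model developed in the study:

```python
# Illustrative enzymatic decomposition: dC/dt = -f(theta) * Vmax * E * C / (Km + C),
# a Michaelis-Menten flux scaled by a moisture modifier f(theta).
Vmax, Km, E = 0.02, 50.0, 1.0   # made-up rate, half-saturation, enzyme pool

def moisture_modifier(theta, theta_opt=0.65):
    # Simple unimodal moisture response: zero when very dry, peaking at
    # theta_opt (an assumed shape, for illustration only).
    return max(0.0, 1.0 - ((theta - theta_opt) / theta_opt) ** 2)

def simulate(C0, theta, days, dt=0.1):
    # Forward-Euler integration of the carbon pool over `days`.
    C = C0
    for _ in range(int(days / dt)):
        C -= dt * moisture_modifier(theta) * Vmax * E * C / (Km + C)
    return C

# One year of decomposition at optimal vs. dry moisture:
wet, dry = simulate(100.0, 0.65, 365.0), simulate(100.0, 0.15, 365.0)
print(wet, dry)
```

As expected, the dry run retains more carbon, since the moisture modifier throttles the decomposition flux; real models add diffusion limits and microbial/enzyme dynamics on top of this core.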
Inverse Raman effect: applications and detection techniques
Hughes, L.J. Jr.
1980-08-01
The processes underlying the inverse Raman effect are qualitatively described by comparing it to the more familiar phenomena of conventional and stimulated Raman scattering. An expression is derived for the inverse Raman absorption coefficient, and its relationship to the stimulated Raman gain is obtained. The power requirements of the two fields are examined qualitatively and quantitatively. The assumption that the inverse Raman absorption coefficient is constant over the interaction length is examined. Advantages of the technique are discussed and a brief survey of reported studies is presented.
Solow, Daniel
2014-01-01
This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.
Liesen, Jörg
2015-01-01
This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exercises.
Berberian, Sterling K
2014-01-01
Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
Ogorodnikov, I N; Isaenko, L I; Zinin, E I; Kruzhalov, A V
2000-01-01
The paper presents the results of a study of the LiB₃O₅ and Li₂B₄O₇ crystals by luminescence spectroscopy with sub-nanosecond time resolution under excitation by high-power synchrotron radiation. The common origin of the non-equilibrium processes in these crystals, as well as the observed differences in their luminescence, is discussed.
Ogorodnikov, I.N. E-mail: ogo@dpt.ustu.ru; Pustovarov, V.A.; Isaenko, L.I.; Zinin, E.I.; Kruzhalov, A.V
2000-06-21
The paper presents the results of a study of the LiB₃O₅ and Li₂B₄O₇ crystals by luminescence spectroscopy with sub-nanosecond time resolution under excitation by high-power synchrotron radiation. The common origin of the non-equilibrium processes in these crystals, as well as the observed differences in their luminescence, is discussed.
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
BOOK REVIEW: Inverse Problems. Activities for Undergraduates
Yamamoto, Masahiro
2003-06-01
This book is a valuable introduction to inverse problems. In particular, from the educational point of view, the author addresses the questions of what constitutes an inverse problem and how and why we should study them. Such an approach has been eagerly awaited for a long time. Professor Groetsch, of the University of Cincinnati, is a world-renowned specialist in inverse problems, in particular the theory of regularization. Moreover, he has made a remarkable contribution to educational activities in the field of inverse problems, which was the subject of his previous book (Groetsch C W 1993 Inverse Problems in the Mathematical Sciences (Braunschweig: Vieweg)). For this reason, he is one of the most qualified to write an introductory book on inverse problems. Without question, inverse problems are important and necessary, and they appear in many guises. So it is crucial to introduce students to exercises in inverse problems. However, there are not many introductory books which are directly accessible by students in the first two undergraduate years. As a consequence, students often encounter diverse concrete inverse problems before becoming aware of their general principles. The main purpose of this book is to present activities to allow first-year undergraduates to learn inverse theory. To my knowledge, this book is a rare attempt to do this and, in my opinion, a great success. The author emphasizes that it is very important to teach inverse theory in the early years. He writes: 'If students consider only the direct problem, they are not looking at the problem from all sides .... The habit of always looking at problems from the direct point of view is intellectually limiting ...' (page 21). The book is very carefully organized so that teachers will be able to use it as a textbook. After an introduction in chapter 1, successive chapters deal with inverse problems in precalculus, calculus, differential equations and linear algebra. In order to let one gain some insight
Ding, Hai-Yan; Li, Gai-Ru; Yu, Ying-Ge; Guo, Wei; Zhi, Ling; Li, Xin-Xia
2014-04-01
A method for on-line monitoring of the dissolution of valsartan and hydrochlorothiazide tablets, assisted by a mathematical separation model based on linear equations, was established. The UV spectra of valsartan and hydrochlorothiazide overlap completely at their respective maximum absorption wavelengths. According to the Beer-Lambert principle of absorbance additivity, the absorptivity of each compound was determined at the maximum absorption wavelengths, and the dissolution of the tablets was measured by fiber-optic dissolution testing (FODT) assisted by the linear-equation separation model and compared with an HPLC method. Results show that the two ingredients were determined simultaneously in real time in the given medium, with no significant difference between FODT and HPLC (p > 0.05). The consistent dissolution behavior indicates that the preparation process was stable across batches, with good uniformity. The dissolution curves of valsartan were faster and higher than those of hydrochlorothiazide, and the 30 min dissolutions of both compounds conformed to the US Pharmacopoeia. It was concluded that the FODT system assisted by the linear-equation separation model can determine the dissolution of valsartan and hydrochlorothiazide simultaneously and yield complete dissolution profiles that directly reflect the dissolution speed at each time point, providing a basis for establishing drug standards. Compared with the one-point HPLC method, FODT offers clear advantages for evaluating and analyzing the quality of sampled drug.
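The separation model rests on absorbance additivity: at each wavelength the measured absorbance is a linear combination of the two components' contributions, so readings at two wavelengths give a 2×2 linear system for the two concentrations. A sketch with invented absorptivity values (unit path length assumed; these are not the calibrated coefficients from the paper):

```python
import numpy as np

# Beer-Lambert additivity at two wavelengths (path length b = 1 cm assumed):
#   A(lambda_i) = eps_val(lambda_i) * c_val + eps_hct(lambda_i) * c_hct
E = np.array([[0.80, 0.15],    # wavelength 1: [eps_valsartan, eps_hctz]
              [0.20, 0.95]])   # wavelength 2 (illustrative absorptivities)
c_true = np.array([12.0, 6.5])  # concentrations [ug/mL]
A = E @ c_true                  # simulated absorbance readings

c_est = np.linalg.solve(E, A)   # recover both concentrations simultaneously
print(c_est)
```

The same solve is repeated at every sampling time to turn a pair of fiber-optic absorbance traces into two dissolution curves.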
The seismic reflection inverse problem
Symes, W W
2009-01-01
The seismic reflection method seeks to extract maps of the Earth's sedimentary crust from transient near-surface recording of echoes, stimulated by explosions or other controlled sound sources positioned near the surface. Reasonably accurate models of seismic energy propagation take the form of hyperbolic systems of partial differential equations, in which the coefficients represent the spatial distribution of various mechanical characteristics of rock (density, stiffness, etc). Thus the fundamental problem of reflection seismology is an inverse problem in partial differential equations: to find the coefficients (or at least some of their properties) of a linear hyperbolic system, given the values of a family of solutions in some part of their domains. The exploration geophysics community has developed various methods for estimating the Earth's structure from seismic data and is also well aware of the inverse point of view. This article reviews mathematical developments in this subject over the last 25 years, to show how the mathematics has both illuminated innovations of practitioners and led to new directions in practice. Two themes naturally emerge: the importance of single scattering dominance and compensation for spectral incompleteness by spatial redundancy. (topical review)
Population inversion in recombining hydrogen plasma
Furukane, Utaro; Yokota, Toshiaki; Oda, Toshiatsu.
1978-11-01
The collisional-radiative model is applied to a recombining hydrogen plasma in order to investigate the plasma condition in which the population inversion between the energy levels of hydrogen can be generated. The population inversion is expected in a plasma where the three body recombination has a large contribution to the recombining processes and the effective recombination rate is beyond a certain value for a given electron density and temperature. Calculated results are presented in figures and tables. (author)
Oluleye, Gbemi; Smith, Robin
2016-01-01
Highlights: • MILP model developed for integration of waste heat recovery technologies in process sites. • Five thermodynamic cycles considered for exploitation of industrial waste heat. • Temperature and quantity of multiple waste heat sources considered. • Interactions with the site utility system considered. • Industrial case study presented to illustrate application of the proposed methodology. - Abstract: Thermodynamic cycles such as organic Rankine cycles, absorption chillers, absorption heat pumps, absorption heat transformers, and mechanical heat pumps are able to utilize wasted thermal energy in process sites for the generation of electrical power, chilling and heat at a higher temperature. In this work, a novel systematic framework is presented for optimal integration of these technologies in process sites. The framework is also used to assess the best design approach for integrating waste heat recovery technologies in process sites, i.e. stand-alone integration or a systems-oriented integration. The developed framework allows for: (1) selection of one or more waste heat sources (taking into account the temperatures and thermal energy content), (2) selection of one or more technology options and working fluids, (3) selection of end-uses of recovered energy, (4) exploitation of interactions with the existing site utility system and (5) the potential for heat recovery via heat exchange is also explored. The methodology is applied to an industrial case study. Results indicate a systems-oriented design approach reduces waste heat by 24%; fuel consumption by 54% and CO_2 emissions by 53% with a 2 year payback, and stand-alone design approach reduces waste heat by 12%; fuel consumption by 29% and CO_2 emissions by 20.5% with a 4 year payback. Therefore, benefits from waste heat utilization increase when interactions between the existing site utility system and the waste heat recovery technologies are explored simultaneously. The case study also shows
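A technology-selection MILP of the kind the framework solves can be sketched in a few lines; the costs, savings and budget below are invented for illustration, and SciPy's generic `milp` solver (not the authors' model) does the optimization:

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy technology-selection MILP (all figures invented for illustration):
# binary x_i = 1 installs waste heat recovery technology i; maximize annual
# fuel savings subject to a capital budget.
savings = np.array([120.0, 200.0, 150.0])   # annual fuel savings per technology
capex = np.array([300.0, 600.0, 400.0])     # capital cost per technology
budget = 800.0

res = milp(
    c=-savings,                                           # milp minimizes, so negate
    constraints=LinearConstraint(capex.reshape(1, -1), ub=budget),
    integrality=np.ones(3),                               # all variables integer
    bounds=Bounds(0, 1),                                  # 0/1 selection variables
)
print(res.x, -res.fun)
```

The full model adds continuous variables for heat flows, working-fluid choices, and coupling constraints to the site utility system, but the selection logic has this same mixed-integer linear structure.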
Moreno-Camacho, Carlos A.; Montoya-Torres, Jairo R.; Vélez-Gallego, Mario C.
2018-06-01
Only a few studies in the available scientific literature address the problem of having a group of workers that do not share identical levels of productivity during the planning horizon. This study considers a workforce scheduling problem in which the actual processing time is a function of the scheduling sequence to represent the decline in workers' performance, evaluating two classical performance measures separately: makespan and maximum tardiness. Several mathematical models are compared with each other to highlight the advantages of each approach. The mathematical models are tested with randomly generated instances available from a public e-library.
Isaak, S.; Bull, S.; Pitter, M. C.; Harrison, Ian.
2011-05-01
This paper reports on the development of a SPAD device, fabricated in a UMC 0.18 μm CMOS process, and its subsequent use in an actively quenched single-photon counting imaging system. A low-doped p- guard ring (t-well layer) encircles the active area to prevent premature reverse breakdown. The array is a 16×1 parallel-output SPAD array, comprising an actively quenched SPAD circuit in each pixel, with the current value set by an external resistor RRef = 300 kΩ. The SPAD I-V response ID was found to increase slowly until VBD was reached at an excess bias voltage of Ve = 11.03 V, and then to increase rapidly due to avalanche multiplication. Digital circuitry to control the SPAD array and perform the necessary data processing was designed in VHDL and implemented on an FPGA chip. At room temperature, the dark count was found to be approximately 13 kHz for most of the 16 SPAD pixels and the dead time was estimated to be 40 ns.
Martin-Espanol, Alba; Zammit-Mangion, Andrew; Clarke, Peter J.; Flament, Thomas; Helm, Veit; King, Matt A.; Luthcke, Scott B.; Petrie, Elizabeth; Remy, Frederique; Schon, Nana;
2016-01-01
We present spatiotemporal mass balance trends for the Antarctic Ice Sheet from a statistical inversion of satellite altimetry, gravimetry, and elastic-corrected GPS data for the period 2003-2013. Our method simultaneously determines annual trends in ice dynamics, surface mass balance anomalies, and a time-invariant solution for glacio-isostatic adjustment while remaining largely independent of forward models. We establish that over the period 2003-2013, Antarctica has been losing mass at a rate of -84 +/- 22 Gt per yr, with a sustained negative mean trend of dynamic imbalance of -111 +/- 13 Gt per yr. West Antarctica is the largest contributor with -112 +/- 10 Gt per yr, mainly triggered by high thinning rates of glaciers draining into the Amundsen Sea Embayment. The Antarctic Peninsula has experienced a dramatic increase in mass loss in the last decade, with a mean rate of -28 +/- 7 Gt per yr and significantly higher values for the most recent years following the destabilization of the Southern Antarctic Peninsula around 2010. The total mass loss is partly compensated by a significant mass gain of 56 +/- 18 Gt per yr in East Antarctica due to a positive trend of surface mass balance anomalies.
Basin analysis in the Southern Tethyan margin: Facies sequences, stratal pattern and subsidence history highlight extension-to-inversion processes in the Cretaceous Panormide carbonate platform (NW Sicily)
Basilone, Luca; Sulli, Attilio
2018-01-01
In the Mediterranean, the South-Tethys paleomargin experienced polyphased tectonic episodes and paleoenvironmental perturbations during Mesozoic time. The Cretaceous shallow-water carbonate successions of the Panormide platform, outcropping in the northern edge of the Palermo Mountains (NW Sicily), were studied by integrating facies and stratal pattern with backstripping analysis to recognize the interaction between tectonics and carbonate sedimentation. The features of the Requienid limestone, including geometric configuration, facies sequence, lithological changes and the significance of the top-unconformity, highlight that at the end of the Lower Cretaceous the carbonate platform was tectonically dismembered into various rotating fault blocks. The variable trends of the subsidence curves testify to different responses, both uplift and downthrow, of the various platform blocks impacted by extensional tectonics. Physical stratigraphic and facies analysis of the Rudistid limestone highlights that during the Upper Cretaceous the previously faulted carbonate platform blocks were subjected to vertical movements in the direction opposite to the displacement produced by the extensional tectonics, indicating positive tectonic inversion. Comparisons with other sectors of the Southern Tethyan and Adria paleomargins indicate that during the Cretaceous these areas underwent the same extensional and compressional stages as the Panormide carbonate platform, suggesting a regional-scale significance, in time and kinematics, for these tectonic events.
Linearized inversion frameworks toward high-resolution seismic imaging
Aldawood, Ali
2016-01-01
installed along the earth surface or down boreholes. Seismic imaging is a powerful tool to map these reflected and scattered energy back to their subsurface scattering or reflection points. Seismic imaging is conventionally based on the single
The linearized inversion of the generalized interferometric multiple imaging
Aldawood, Ali; Hoteit, Ibrahim; Alkhalifah, Tariq Ali
2016-01-01
such as vertical and nearly vertical fault planes, and salt flanks. To image first-order internal multiple, the GIMI framework consists of three datuming steps, followed by applying the zero-lag cross-correlation imaging condition. However, the standard GIMI
Petrică Andreea-Cristina
2017-07-01
Full Text Available Modeling exchange rate volatility became an important topic for research debate starting in 1973, when many countries switched to floating exchange rate systems. In this paper, we focus on the EUR/RON exchange rate both as an economic measure, presenting the implied economic links, and as a financial investment, analyzing its movements and fluctuations through two stochastic volatility processes: the Standard Generalized Autoregressive Conditionally Heteroscedastic model (GARCH) and the Exponential Generalized Autoregressive Conditionally Heteroscedastic model (EGARCH). The objective of the conditional variance processes is to capture dependency in the return series of the EUR/RON exchange rate. On this account, analyzing exchange rates can be seen as an input for economic decisions regarding Romanian macroeconomics, the exchange rate being influenced by many factors such as interest rates, inflation, trading relationships with other countries (imports and exports), and investments (portfolio optimization, risk management, asset pricing). Therefore, we talk about the political stability and economic performance of a country, which represents a link between the two types of inputs mentioned above and influences both the macroeconomics and the investments. Based on time-varying volatility, we examine the implied volatility of daily returns of the EUR/RON exchange rate using the standard GARCH model and the asymmetric EGARCH model, whose parameters are estimated through the maximum likelihood method and whose error terms follow two distributions (Normal and Student’s t). The empirical results show that EGARCH(2,1) with asymmetric order 2 and Student’s t error term distribution performs better than all the estimated standard GARCH models (GARCH(1,1), GARCH(1,2), GARCH(2,1) and GARCH(2,2)). This conclusion is supported by the major advantage of the EGARCH model over the GARCH model, which consists in allowing good and bad news to have different impact on the
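The GARCH(1,1) recursion underlying these models can be sketched directly; the parameters below are invented rather than estimated from EUR/RON data, and a real analysis would fit them by maximum likelihood as described above (e.g. with a dedicated econometrics package):

```python
import numpy as np

rng = np.random.default_rng(42)

# GARCH(1,1): sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2
# (illustrative parameters, not estimates for the EUR/RON series)
omega, alpha, beta = 1e-5, 0.08, 0.90
n = 5000
r = np.zeros(n)
sigma2 = np.full(n, omega / (1.0 - alpha - beta))  # start at unconditional variance
for t in range(1, n):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# The sample variance should hover near the unconditional variance
# omega / (1 - alpha - beta), while sigma2 captures volatility clustering.
print(r.var(), omega / (1.0 - alpha - beta))
```

EGARCH replaces this recursion with one in log-variance, which is what lets positive and negative shocks enter asymmetrically.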
Convex blind image deconvolution with inverse filtering
Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong
2018-03-01
Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
Linearization of CIF through SOS
Nadales Agut, D.E.; Reniers, M.A.; Luttik, B.; Valencia, F.
2011-01-01
Linearization is the procedure of rewriting a process term into a linear form, which consists only of basic operators of the process language. This procedure is interesting from both a theoretical and a practical point of view. In particular, a linearization algorithm is needed for the Compositional
Inverse scattering theory foundations of tomography with diffracting wavefields
Devaney, A.J.
1987-01-01
The underlying mathematical models employed in reflection and transmission computed tomography using diffracting wavefields (called diffraction tomography) are reviewed and shown to have a rigorous basis in inverse scattering theory. In transmission diffraction tomography the underlying wave model is shown to be the Rytov approximation to the complex phase of the wavefield transmitted by the object being probed, while in reflection diffraction tomography the underlying wave model is shown to be the Born approximation to the backscattered wavefield from the object. In both cases the goal of the reconstruction process is the determination of the object's complex index of refraction as a function of position r and, possibly, the frequency ω of the probing wavefield. By use of these approximations the reconstruction problem for both transmission and reflection diffraction tomography can be cast into the simple and elegant form of linearized inverse scattering theory. Linearized inverse scattering theory is shown to lead directly to generalized projection-slice theorems for both reflection and transmission diffraction tomography that provide a simple mathematical relationship between the object's complex index of refraction (the unknown) and the data (the complex phase of the transmitted wave or the complex amplitude of the reflected wave). The conventional projection-slice theorem of X-ray CT is shown to result from the generalized projection-slice theorem for transmission diffraction tomography in the limit of vanishing wavelength (in the absence of wave effects). Fourier-based and back-projection type reconstruction algorithms are shown to be directly derivable from the generalized projection-slice theorems
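The classical projection-slice theorem that the generalized theorems reduce to in the vanishing-wavelength limit is easy to verify numerically. The NumPy sketch below is a hypothetical demonstration (not from the paper): the 1D FFT of a parallel projection equals the central slice of the object's 2D FFT.

```python
import numpy as np

# A simple 2D test object: a centered Gaussian blob on an N x N grid.
N = 64
x = np.arange(N) - N // 2
X, Y = np.meshgrid(x, x, indexing="ij")
obj = np.exp(-(X**2 + Y**2) / 50.0)

# Parallel projection along the second axis (one X-ray CT view at angle 0).
projection = obj.sum(axis=1)

# Projection-slice theorem: the 1D FFT of the projection equals the
# ky = 0 slice of the object's 2D FFT.
proj_hat = np.fft.fft(projection)
slice_hat = np.fft.fft2(obj)[:, 0]
assert np.allclose(proj_hat, slice_hat)
```

Other view angles follow by rotating the object before projecting; collecting enough slices fills Fourier space and allows inversion by interpolation and inverse FFT.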
Recursive Matrix Inverse Update On An Optical Processor
Casasent, David P.; Baranoski, Edward J.
1988-02-01
A high-accuracy optical linear algebraic processor (OLAP) using the digital multiplication by analog convolution (DMAC) algorithm is described for use in an efficient matrix inverse update algorithm with speed and accuracy advantages. The solution for the parameters in the algorithm is addressed, and the advantages of optical over digital linear algebraic processors are advanced.
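The abstract does not spell out the update algorithm itself; a standard algebraic basis for efficient recursive inverse updates is the Sherman-Morrison identity, sketched here in NumPy as an assumed illustration rather than the authors' optical (DMAC) implementation.

```python
import numpy as np

def sm_update(A_inv, u, v):
    """Rank-one update of a matrix inverse via Sherman-Morrison:
    (A + u v^T)^{-1} = A^{-1} - (A^{-1} u v^T A^{-1}) / (1 + v^T A^{-1} u).
    Costs O(n^2) instead of the O(n^3) of a fresh inversion."""
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

rng = np.random.default_rng(0)
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
u = rng.standard_normal(4)
v = rng.standard_normal(4)

A_inv = np.linalg.inv(A)
updated = sm_update(A_inv, u, v)             # O(n^2) recursive update
direct = np.linalg.inv(A + np.outer(u, v))   # O(n^3) recomputation
assert np.allclose(updated, direct)
```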
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...
Alcaraz, J.
2001-01-01
After several years of study, e+e- linear colliders in the TeV range have emerged as the major and optimal high-energy physics projects for the post-LHC era. These notes summarize the present status, from the main accelerator and detector features to their physics potential. The LHC is expected to provide first discoveries in the new energy domain, whereas an e+e- linear collider in the 500 GeV-1 TeV range will be able to complement it to an unprecedented level of precision in all possible areas: Higgs, signals beyond the SM and electroweak measurements. It is evident that the Linear Collider program will constitute a major step in the understanding of the nature of the new physics beyond the Standard Model. (Author) 22 refs
Edwards, Harold M
1995-01-01
In his new undergraduate textbook, Harold M. Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject.
Li, Qi; Tan, Kai; Wang, Dong Zhen; Zhao, Bin; Zhang, Rui; Li, Yu; Qi, Yu Jie
2018-05-01
The spatio-temporal slip distribution of the earthquake that occurred on 8 August 2017 in Jiuzhaigou, China, was estimated from teleseismic body waves and near-field Global Navigation Satellite System (GNSS) data (coseismic displacements and high-rate GPS data) based on a finite fault model. Compared with the inversion results from the teleseismic body waves alone, the near-field GNSS data can better constrain the rupture area, the maximum slip, the source time function, and the surface rupture. The results show that the maximum slip of the earthquake approaches 1.4 m, the scalar seismic moment is 8.0 × 10^18 N·m (Mw ≈ 6.5), and the centroid depth is 15 km. The slip is mainly left-lateral strike-slip, and it is initially inferred that the seismogenic fault is the south branch of the Tazang fault or an undetected fault, a NW-trending left-lateral strike-slip fault belonging to one of the tail structures at the easternmost end of the eastern Kunlun fault zone. The earthquake rupture is mainly concentrated at depths of 5-15 km, which results in the complete rupture of the seismic gap left by the previous four earthquakes with magnitudes > 6.0 in 1973 and 1976. Therefore, the possibility of a strong aftershock on the Huya fault is low. The source duration is 30 s and there are two major ruptures: the main rupture peaks in the first 10 s, about 4 s after the earthquake onset, and the second rupture peak arrives at 17 s. In addition, the Coulomb stress study shows that the epicenter of the earthquake is located in an area where the static Coulomb stress change increased because of the 12 May 2008 Mw 7.9 Wenchuan, China, earthquake. Therefore, the Wenchuan earthquake promoted the occurrence of the 8 August 2017 Jiuzhaigou earthquake.
Some results on inverse scattering
Ramm, A.G.
2008-01-01
A review of some of the author's results in the area of inverse scattering is given. The following topics are discussed: (1) Property C and applications, (2) Stable inversion of fixed-energy 3D scattering data and its error estimate, (3) Inverse scattering with 'incomplete' data, (4) Inverse scattering for inhomogeneous Schroedinger equation, (5) Krein's inverse scattering method, (6) Invertibility of the steps in Gel'fand-Levitan, Marchenko, and Krein inversion methods, (7) The Newton-Sabatier and Cox-Thompson procedures are not inversion methods, (8) Resonances: existence, location, perturbation theory, (9) Born inversion as an ill-posed problem, (10) Inverse obstacle scattering with fixed-frequency data, (11) Inverse scattering with data at a fixed energy and a fixed incident direction, (12) Creating materials with a desired refraction coefficient and wave-focusing properties. (author)
Inverse Stochastic Resonance in Cerebellar Purkinje Cells.
Anatoly Buchin
2016-08-01
Full Text Available Purkinje neurons play an important role in cerebellar computation since their axons are the only projection from the cerebellar cortex to deeper cerebellar structures. They have complex internal dynamics, which allow them to fire spontaneously, display bistability, and also to be involved in network phenomena such as high frequency oscillations and travelling waves. Purkinje cells exhibit type II excitability, which can be revealed by a discontinuity in their f-I curves. We show that this excitability mechanism allows Purkinje cells to be efficiently inhibited by noise of a particular variance, a phenomenon known as inverse stochastic resonance (ISR). While ISR has been described in theoretical models of single neurons, here we provide the first experimental evidence for this effect. We find that an adaptive exponential integrate-and-fire model fitted to the basic Purkinje cell characteristics using a modified dynamic IV method displays ISR and bistability between the resting state and a repetitive activity limit cycle. ISR allows the Purkinje cell to operate in different functional regimes: the all-or-none toggle or the linear filter mode, depending on the variance of the synaptic input. We propose that synaptic noise allows Purkinje cells to quickly switch between these functional regimes. Using mutual information analysis, we demonstrate that ISR can lead to a locally optimal information transfer between the input and output spike train of the Purkinje cell. These results provide the first experimental evidence for ISR and suggest a functional role for ISR in cerebellar information processing.
Inversion effects for faces and objects in developmental prosopagnosia
Klargaard, Solja K; Starrfelt, Randi; Gerlach, Christian
2018-01-01
The disproportionate face inversion effect (dFIE) concerns the finding that face recognition is more affected by inversion than recognition of non-face objects; an effect assumed to reflect that face recognition relies on special operations. Support for this notion comes from studies showing that face processing in developmental prosopagnosia (DP) is less affected by inversion than it is in normal subjects, and that DPs may even display face inversion superiority effects, i.e. better processing of inverted compared to upright faces. To date, however, there are no reports of direct comparisons between inversion effects for faces and objects, investigating whether the altered inversion effect in DP is specific to faces. We examined this question by comparing inversion effects for faces and cars in two otherwise identical recognition tasks in a group of DPs (N = 16) and a matched control group...
Alternating minimisation for glottal inverse filtering
Bleyer, Ismael Rodrigo; Lybeck, Lasse; Auvinen, Harri; Siltanen, Samuli; Airaksinen, Manu; Alku, Paavo
2017-01-01
A new method is proposed for solving the glottal inverse filtering (GIF) problem. The goal of GIF is to separate an acoustical speech signal into two parts: the glottal airflow excitation and the vocal tract filter. To recover such information one has to deal with a blind deconvolution problem. This ill-posed inverse problem is solved in a deterministic setting, considering unknowns on both sides of the underlying operator equation. A stable reconstruction is obtained using a double regularization strategy, alternating between fixing either the glottal source signal or the vocal tract filter. This not only splits the nonlinear and nonconvex problem into two linear and convex problems, but also allows the use of the best parameters and constraints to recover each variable at a time. This new technique, called alternating minimization glottal inverse filtering (AM-GIF), is compared with two other approaches: Markov chain Monte Carlo glottal inverse filtering (MCMC-GIF) and iterative adaptive inverse filtering (IAIF), using synthetic speech signals. The recent MCMC-GIF has good reconstruction quality but high computational cost. The state-of-the-art IAIF method is computationally fast but its accuracy deteriorates, particularly for speech signals of high fundamental frequency (F0). The results show the competitive performance of the new method: with high F0, the reconstruction quality is better than that of IAIF and close to that of MCMC-GIF, while reducing the computational complexity by two orders of magnitude. (paper)
Introduction to the mathematics of inversion in remote sensing and indirect measurements
Twomey, S
2013-01-01
Developments in Geomathematics, 3: Introduction to the Mathematics of Inversion in Remote Sensing and Indirect Measurements focuses on the application of the mathematics of inversion in remote sensing and indirect measurements, including vectors and matrices, eigenvalues and eigenvectors, and integral equations. The publication first examines simple problems involving inversion, theory of large linear systems, and physical and geometric aspects of vectors and matrices. Discussions focus on geometrical view of matrix operations, eigenvalues and eigenvectors, matrix products, inverse of a matrix, transposition and rules for product inversion, and algebraic elimination. The manuscript then tackles the algebraic and geometric aspects of functions and function space and linear inversion methods, as well as the algebraic and geometric nature of constrained linear inversion, least squares solution, approximation by sums of functions, and integral equations. The text examines information content of indirect sensing m...
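The constrained linear inversion treated in the book amounts to a Tikhonov-type solve of an ill-conditioned kernel equation; the sketch below is an assumed illustration (the kernel, profile, and regularization weight are invented), not an example from the text.

```python
import numpy as np

def constrained_inversion(K, g, gamma, H=None):
    """Constrained linear inversion in the sense of Twomey:
    minimize ||K f - g||^2 + gamma * f^T H f,
    solved by f = (K^T K + gamma H)^{-1} K^T g."""
    n = K.shape[1]
    if H is None:
        H = np.eye(n)  # identity constraint = ordinary Tikhonov damping
    return np.linalg.solve(K.T @ K + gamma * H, K.T @ g)

# An ill-conditioned smoothing kernel acting on an unknown profile.
n = 50
t = np.linspace(0.0, 1.0, n)
K = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2)   # near-singular
f_true = np.sin(2 * np.pi * t)
g = K @ f_true + 1e-3 * np.random.default_rng(1).standard_normal(n)

f_reg = constrained_inversion(K, g, gamma=1e-3)
f_naive = np.linalg.lstsq(K, g, rcond=None)[0]   # unconstrained: noise blows up
assert np.linalg.norm(f_reg - f_true) < np.linalg.norm(f_naive - f_true)
```

The comparison shows the central point of the book: without the constraint term, the small noise in g is amplified by the near-zero eigenvalues of K.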
Inverse Problems and Uncertainty Quantification
Litvinenko, Alexander
2014-01-06
In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
Inverse Problems and Uncertainty Quantification
Litvinenko, Alexander; Matthies, Hermann G.
2014-01-01
In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
Inverse problems and uncertainty quantification
Litvinenko, Alexander
2013-12-18
In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
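For the linear Bayesian update mentioned at the end of the abstract, the Gaussian-linear special case reduces to a Kalman-type formula; the sketch below uses invented numbers purely for illustration, not the Lorenz 84 setup of the paper.

```python
import numpy as np

# Linear Bayesian update of a Gaussian prior x ~ N(m, C) given a noisy
# linear observation y = H x + e, with e ~ N(0, R) (Kalman-type update).
m = np.array([1.0, 0.0])          # prior mean
C = np.array([[1.0, 0.3],
              [0.3, 2.0]])        # prior covariance
H = np.array([[1.0, 1.0]])        # observation (measurement) operator
R = np.array([[0.5]])             # measurement-noise covariance

y = np.array([2.4])               # observed data

S = H @ C @ H.T + R               # innovation covariance
K = C @ H.T @ np.linalg.inv(S)    # Kalman gain
m_post = m + K @ (y - H @ m)      # posterior mean
C_post = C - K @ H @ C            # posterior covariance

# Conditioning on data can only shrink the marginal variances.
assert np.all(np.diag(C_post) <= np.diag(C) + 1e-12)
```

The sampling-free non-linear and quadratic updates of the paper generalize this gain-times-innovation structure to higher-order polynomial maps of the observation.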
Calculation of the inverse data space via sparse inversion
Saragiotis, Christos
2011-01-01
The inverse data space provides a natural separation of primaries and surface-related multiples, as the surface multiples map onto the area around the origin while the primaries map elsewhere. However, the calculation of the inverse data is far from trivial, as theory requires infinite time and offset recording. Furthermore, regularization issues arise during inversion. We perform the inversion by minimizing the least-squares norm of the misfit function while constraining the $\ell_1$ norm of the solution, being the inverse data space. In this way a sparse inversion approach is obtained. We show results on field data with an application to surface multiple removal.
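Minimizing a least-squares misfit under an $\ell_1$ constraint on the solution is commonly handled by iterative soft-thresholding; the FISTA sketch below is a generic toy on a random matrix and a synthetic sparse model, not the seismic inverse-data-space implementation of the paper.

```python
import numpy as np

def fista_l1(A, b, lam, n_iter=2000):
    """FISTA for min_x 0.5*||A x - b||_2^2 + lam*||x||_1:
    a gradient step followed by soft-thresholding, with Nesterov momentum."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iter):
        z = y - A.T @ (A @ y - b) / L
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + (t - 1.0) / t_new * (x_new - x)
        x, t = x_new, t_new
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))       # underdetermined operator
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.5, -2.0, 1.0]   # sparse ground truth
b = A @ x_true                           # noise-free data

x_hat = fista_l1(A, b, lam=0.05)
assert np.linalg.norm(x_hat - x_true) < 0.5
```

The sparsity-promoting threshold is what concentrates the multiples near the origin of the inverse data space while leaving the rest of the solution empty.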
Blanco Rodriguez, P.; Vera Tome, F. [Natural Radioactivity Group. Universidad de Extremadura, 06071 Badajoz (Spain); Lozano, J.C. [Laboratorio de Radiactividad Ambiental. Universidad de Salamanca, 37008 Salamanca (Spain)
2014-07-01
Transfer from soil to plant is an important input of radionuclides into the food chain. Also, the mobility of radionuclides in soils is enhanced through their passage into the plant compartment. Thus, the soil-to-plant transfer of radionuclides raises the potential human dose. In radiological risk assessment models, this process is usually considered to be an equilibrium process such that the activity concentration in plants is linearly related to the soil concentration through a constant transfer factor (TF). However, the large variability presented by measured TF values leads to major uncertainties in the assessment of risks. One possible way to reduce this variability in TF values is to parametrize their determination. This paper presents correlations of TF with the major element concentrations in soils. The findings confirm the major influence of the chemical environment of a soil on the assimilation process. The variability of TF might be greatly reduced if only the labile fraction were considered. Experiments performed with plants (Helianthus annuus L.) growing in a hydroponic medium appear to confirm this suggestion, showing a linear correlation between the plant and the soil solution activity concentrations. Extracting the labile fraction of a real soil is no trivial task, however. A possible operationally definable method is to consider the water-soluble together with the exchangeable fractions of the soil. Studies performed on granitic soils showed that the labile concentration of uranium and radium strongly depended on the soil's textural characteristics. In this sense, a parametrization is proposed of the labile uranium and radium concentrations as a function of the soil's granulometric parameters. (authors)
Elastic versus acoustic inversion for marine surveys
Mora, Peter; Wu, Zedong
2018-01-01
Full Wavefield Inversion (FWI) is a powerful and elegant approach for seismic imaging that is on the way to becoming the method of choice when processing exploration or global seismic data. In the case of processing marine survey data, one may
Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C
2008-01-01
As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed in which four fMRI processing pipelines with GLM and CVA modules such as FSL.FEAT and NPAIRS.CVA were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the rank of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.
Ramig, Keith; Subramaniam, Gopal; Karimi, Sasan; Szalda, David J; Ko, Allen; Lam, Aaron; Li, Jeffrey; Coaderaj, Ani; Cavdar, Leyla; Bogdan, Lukasz; Kwon, Kitae; Greer, Edyta M
2016-04-15
A series of 2,4-disubstituted 1H-1-benzazepines, 2a-d, 4, and 6, were studied, varying both the substituents at C2 and C4 and at the nitrogen atom. The conformational inversion (ring-flip) and nitrogen-atom inversion (N-inversion) energetics were studied by variable-temperature NMR spectroscopy and computations. The steric bulk of the nitrogen-atom substituent was found to affect both the conformation of the azepine ring and the geometry around the nitrogen atom. Also affected were the Gibbs free energy barriers for the ring-flip and the N-inversion. When the nitrogen-atom substituent was alkyl, as in 2a-c, the geometry of the nitrogen atom was nearly planar and the azepine ring was highly puckered; the result was a relatively high-energy barrier to ring-flip and a low barrier to N-inversion. Conversely, when the nitrogen-atom substituent was a hydrogen atom, as in 2d, 4, and 6, the nitrogen atom was significantly pyramidalized and the azepine ring was less puckered; the result here was a relatively high energy barrier to N-inversion and a low barrier to ring-flip. In these N-unsubstituted compounds, it was found computationally that the lowest-energy stereodynamic process was ring-flip coupled with N-inversion, as N-inversion alone had a much higher energy barrier.
Gallovic, Frantisek; Cirella, Antonella; Plicka, Vladimir; Piatanesi, Alessio
2013-04-01
On 14 June 2008, 23:43 UTC, the border of the Iwate and Miyagi prefectures was hit by an Mw 7 reverse-fault crustal earthquake. The event is known to have the largest ground acceleration observed to date (~4g), which was recorded at station IWTH25. We analyze the observed strong-motion data with the objective of imaging the event rupture process and the associated uncertainties. Two different slip inversion approaches are used, the difference between the two methods lying only in the parameterization of the source model. To minimize mismodeling of the propagation effects we use a crustal model obtained by full waveform inversion of aftershock records in the frequency range 0.05-0.3 Hz. In the first method, based on a linear formulation, the parameters are samples of slip velocity functions along the (finely discretized) fault in a time window spanning the whole rupture duration. Such a source description is very general, with no prior constraint on the nucleation point, rupture velocity, or shape of the velocity function. Thus the inversion can resolve very general (unexpected) features of the rupture evolution, such as multiple rupturing, rupture-propagation reversals, etc. On the other hand, due to the relatively large number of model parameters, the inversion result is highly non-unique, with the possibility of obtaining a biased solution. The second method is a non-linear global inversion technique, where each point on the fault can slip only once, following a prescribed functional form of the source time function. We invert simultaneously for peak slip velocity, slip angle, rise time and rupture time by allowing a given range of variability for each kinematic model parameter. For this reason, unlike the linear inversion approach, the rupture process needs a smaller number of parameters to be retrieved, and is more constrained, with proper control on the allowed range of parameter values. In order to test the resolution and reliability of the
Karloff, Howard
1991-01-01
To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...
Mo, Yun-Fei [School of Physics and Microelectronics Science, Hunan University, Changsha, 410082 (China); Liu, Rang-Su, E-mail: liurangsu@sina.com [School of Physics and Microelectronics Science, Hunan University, Changsha, 410082 (China); Tian, Ze-An; Liang, Yong-Chao [School of Physics and Microelectronics Science, Hunan University, Changsha, 410082 (China); Zhang, Hai-Tao [School of Physics and Microelectronics Science, Hunan University, Changsha, 410082 (China); Department of Electronic and Communication Engineering, Changsha University, Changsha 410003 (China); Hou, Zhao-Yang [Department of Applied Physics, Chang’an University, Xi’an 710064 (China); Liu, Hai-Rong [College of Materials Science and Engineering, Hunan University, Changsha 410082 (China); Zhang, Ai-long [College of Physics and Electronics, Hunan University of Arts and Science, Changde 415000 (China); Zhou, Li-Li [Department of Information Engineering, Gannan Medical University, Ganzhou 341000 (China); Peng, Ping [College of Materials Science and Engineering, Hunan University, Changsha 410082 (China); Xie, Zhong [School of Physics and Microelectronics Science, Hunan University, Changsha, 410082 (China)
2015-05-15
An MD simulation of liquid Cu46Zr54 alloys has been performed to understand the effects of the initial melt temperature on the microstructural evolution and mechanical properties during the quenching process. By using several microstructural analysis methods, it is found that the icosahedral and defective icosahedral clusters play a key role in the microstructure transition. All the final solidification structures obtained at different initial melt temperatures are amorphous, and their structural and mechanical properties are non-linearly related to the initial melt temperature, fluctuating within a certain range. In particular, there exists a best initial melt temperature, for which the glass configuration possesses the highest packing density, the optimal elastic constants, and a smaller extent of structural softening under deformation.
Wang, Jun-Sheng; Yang, Guang-Hong
2017-07-25
This paper studies the optimal output-feedback control problem for unknown linear discrete-time systems with stochastic measurement and process noise. A dithered Bellman equation with the innovation covariance matrix is constructed via the expectation operator given in the form of a finite summation. On this basis, an output-feedback-based approximate dynamic programming method is developed, where the terms depending on the innovation covariance matrix are available with the aid of the innovation covariance matrix identified beforehand. Therefore, by iterating the Bellman equation, the resulting value function can converge to the optimal one in the presence of the aforementioned noise, and the nearly optimal control laws are delivered. To show the effectiveness and the advantages of the proposed approach, a simulation example and a velocity control experiment on a dc machine are employed.
Ai Ling Pang
2015-09-01
Full Text Available This study was conducted to evaluate the possibility of utilizing kenaf (KNF) in LLDPE/PVOH to develop a new thermoplastic composite. The effect of KNF loading on the processability and the mechanical, thermal and water absorption properties of linear low-density polyethylene/poly(vinyl alcohol)/kenaf (LLDPE/PVOH/KNF) composites was investigated. Composites with different KNF loadings (0, 10, 20, 30, and 40 phr) were prepared using a Thermo Haake Polydrive internal mixer at a temperature of 150 °C and a rotor speed of 50 rpm for 10 min. The results indicate that the stabilization torque, tensile modulus, water uptake, and thermal stability increased, while tensile strength and elongation at break decreased with increasing filler loading. The tensile fractured surfaces observed by scanning electron microscopy (SEM) supported the deterioration in tensile properties of the LLDPE/PVOH/KNF composites with increasing KNF loading.
Júnior, Décio Brandes M.F.; Oliveira, Mônica Georgia N.; Silva, Cristiano da, E-mail: deciobr@eletronuclear.gov.br, E-mail: mongeor@eletronuclear.gov.br, E-mail: cdsilva@eletronuclear.gov.br [Eletrobrás Termonuclear S.A. (ELETRONUCLEAR), Angra dos Reis, RJ (Brazil). Departamento DDD.O - Física de Reatores
2017-07-01
The goal of this work is to present the new System of Acquisition and Signal Processing for the execution of the initial criticality after refueling and of the low-power physical tests, incorporating the real-time resolution of the Inverse Point Kinetic equations (IPK). The system was developed using cRIO 9082 (compactRIO) hardware, a programmable logic controller (PLC), and National Instruments' LabVIEW programming language. The developed system enables better visualization and a monitoring interface for the neutron flux evolution during the first criticality of the cycle and the subsequent low-power physical tests, which allows the Reactor Physics Group and the reactor operators of Angra 2 to track reactivity variations faster and more accurately during physical tests. The digital reactivity meter developed reinforces the set of operational reactivity-management practices at Angra 2. (author)
Inverse problems in systems biology
Engl, Heinz W; Lu, James; Müller, Stefan; Flamm, Christoph; Schuster, Peter; Kügler, Philipp
2009-01-01
Systems biology is a new discipline built upon the premise that an understanding of how cells and organisms carry out their functions cannot be gained by looking at cellular components in isolation. Instead, consideration of the interplay between the parts of systems is indispensable for analyzing, modeling, and predicting systems' behavior. Studying biological processes under this premise, systems biology combines experimental techniques and computational methods in order to construct predictive models. Both in building and utilizing models of biological systems, inverse problems arise on several occasions, for example, (i) when experimental time series and steady state data are used to construct biochemical reaction networks, (ii) when model parameters are identified that capture underlying mechanisms or (iii) when desired qualitative behavior such as bistability or limit cycle oscillations is engineered by proper choices of parameter combinations. In this paper we review principles of the modeling process in systems biology and illustrate the ill-posedness and regularization of parameter identification problems in that context. Furthermore, we discuss the methodology of qualitative inverse problems and demonstrate how sparsity enforcing regularization allows the determination of key reaction mechanisms underlying the qualitative behavior. (topical review)
Atmospheric inverse modeling via sparse reconstruction
Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten
2017-10-01
Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
A compressive sensing approach to the calculation of the inverse data space
Khan, Babar Hasan; Saragiotis, Christos; Alkhalifah, Tariq Ali
2012-01-01
Seismic processing in the Inverse Data Space (IDS) has its advantages; for example, the task of removing multiples simply becomes muting the zero-offset and zero-time data in the inverse domain. Calculation of the Inverse Data Space by sparse inversion
Electrochemically driven emulsion inversion
Johans, Christoffer; Kontturi, Kyösti
2007-09-01
It is shown that emulsions stabilized by ionic surfactants can be inverted by controlling the electrical potential across the oil-water interface. The potential dependent partitioning of sodium dodecyl sulfate (SDS) was studied by cyclic voltammetry at the 1,2-dichlorobenzene|water interface. In the emulsion the potential control was achieved by using a potential-determining salt. The inversion of a 1,2-dichlorobenzene-in-water (O/W) emulsion stabilized by SDS was followed by conductometry as a function of added tetrapropylammonium chloride. A sudden drop in conductivity was observed, indicating the change of the continuous phase from water to 1,2-dichlorobenzene, i.e. a water-in-1,2-dichlorobenzene emulsion was formed. The inversion potential is well in accordance with that predicted by the hydrophilic-lipophilic deviation if the interfacial potential is appropriately accounted for.
Optimized nonlinear inversion of surface-wave dispersion data
Raykova, Reneta B.
2014-01-01
A new code for inversion of surface-wave dispersion data is developed to obtain Earth's crustal and upper mantle velocity structure. The author developed the Optimized Non-Linear Inversion (ONLI) software, based on Monte Carlo search. The values of S-wave velocity V_S and thickness h for a number of horizontally homogeneous layers are parameterized. P-wave velocity V_P and density ρ of the relevant layers are calculated from empirical or theoretical relations. ONLI explores the parameter space in two modes, selective and full search, and the main innovation of the software is the evaluation of tested models: theoretical dispersion curves are calculated only if a tested model satisfies specific conditions, which reduces the computation time considerably. A number of tests explored the impact of parameterization and proved the ability of the ONLI approach to deal successfully with the non-uniqueness of the inversion problem. Key words: Earth's structure, surface-wave dispersion, non-linear inversion, software
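The selective-search idea, drawing random layer models from prescribed ranges and running the expensive forward computation only for models that pass a cheap plausibility test, can be sketched roughly as follows. The dispersion forward solver is replaced here by a hypothetical closed-form stand-in, so this illustrates the search logic, not ONLI itself.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward(vs, h):
    """Stand-in for a dispersion forward solver (assumption: the real code
    computes theoretical dispersion curves; here, a smooth synthetic curve)."""
    periods = np.linspace(5, 50, 10)
    return vs[0] + (vs[1] - vs[0]) * (1 - np.exp(-periods / h[0]))

# 'Observed' data generated from a known two-layer model
vs_true, h_true = np.array([3.0, 4.5]), np.array([20.0])
d_obs = forward(vs_true, h_true)

best, best_misfit = None, np.inf
for _ in range(5000):
    vs = rng.uniform([2.0, 3.5], [4.0, 5.0])   # search ranges per layer (assumed)
    h = rng.uniform([5.0], [40.0])
    if vs[1] <= vs[0]:
        continue                               # cheap test before the costly forward call
    misfit = np.sqrt(np.mean((forward(vs, h) - d_obs) ** 2))
    if misfit < best_misfit:
        best, best_misfit = (vs, h), misfit
```

Skipping the forward computation for implausible models (here, velocity inversions with depth) is what keeps a Monte Carlo search over many layers affordable.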
An inverse heat transfer problem for optimization of the thermal ...
This paper takes a different approach towards identiﬁcation of the thermal process in machining, using inverse heat transfer problem. Inverse heat transfer method allows the closest possible experimental and analytical approximation of thermal state for a machining process. Based on a temperature measured at any point ...
Jong Won Kim
2017-01-01
Full Text Available Polyethylene is one of the most commonly used polymer materials. Even though linear low density polyethylene (LLDPE) has better mechanical properties than other kinds of polyethylene, it is not used as a textile material because of its plastic behavior, which makes it prone to breaking at the die during melt spinning. In this study, LLDPE fibers were successfully produced with a new approach using a dry-jet wet spinning and a heat drawing process. The fibers were filled with carbon nanotubes (CNTs) to improve the strength and reduce plastic deformation. The crystallinity, degree of orientation, mechanical properties (strength to yield, strength to break, elongation at break, and initial modulus), electrical conductivity, and thermal properties of the LLDPE fibers were studied. The results show that the addition of CNTs improved the tensile strength and the degree of crystallinity. The heat drawing process resulted in a significant increase in the tensile strength and in the orientation of the CNTs and polymer chains. In addition, this study demonstrates that the heat drawing process effectively decreases the plastic deformation of LLDPE.
Zhang, Da; She, Jin; Yang, Jun; Yu, Mengsun
2015-06-01
Acute hypoxia activates several autonomic mechanisms, mainly in the cardiovascular and respiratory systems. The influence of acute hypoxia on linear and nonlinear heart rate variability (HRV) has been studied, but the parameters in the process of hypoxia are still unclear. Although the changes of HRV in the frequency domain are related to autonomic responses, it is also unknown how nonlinear dynamics change with the decrease of ambient atmospheric pressure. Eight healthy male subjects were exposed to simulated altitude from sea level to 3600 m in 10 min. HRV parameters in the frequency domain were analyzed by wavelet packet transform (Daubechies 4, 4 level) followed by Hilbert transform to assess the spectral power of modified low frequency (0.0625-0.1875 Hz, LFmod), modified high frequency (0.1875-0.4375 Hz, HFmod), and the LFmod/HFmod ratio in every 1 min. Nonlinear parameters were also quantified by sample entropy (SampEn) and short-term fractal correlation exponent (α1) in the process. Hypoxia was associated with depression of both the LFmod and HFmod components, which were significantly lower than at sea level at 3600 m and 2880 m, respectively (both p nonlinear HRV parameters continuously in the process of hypoxia would be an effective way to evaluate the different regulatory mechanisms of the autonomic nervous system.
Gale, A.S.; Surlyk, Finn; Anderskouv, Kresten
2013-01-01
Evidence from regional stratigraphical patterns in Santonian−Campanian chalk is used to infer the presence of a very broad channel system (5 km across) with a depth of at least 50 m, running NNW−SSE across the eastern Isle of Wight; only the western part of the channel wall and fill is exposed. W......−Campanian chalks in the eastern Isle of Wight, involving penecontemporaneous tectonic inversion of the underlying basement structure, are rejected....
Reactivity in inverse micelles
Brochette, Pascal
1987-01-01
This research thesis reports on the use of water-in-oil micro-emulsions as a reaction support. Only the 'inverse micelles' domain of the ternary mixture (water/AOT/isooctane) has been studied. The main issues addressed are: the disturbance of the micro-emulsion in the presence of reactants, the determination of reactant distribution and the resulting kinetic theory, the effect of the interface on electron transfer reactions, and finally protein solubilization. [fr]
Ensemble Kalman methods for inverse problems
Iglesias, Marco A; Law, Kody J H; Stuart, Andrew M
2013-01-01
The ensemble Kalman filter (EnKF) was introduced by Evensen in 1994 (Evensen 1994 J. Geophys. Res. 99 10143–62) as a novel method for data assimilation: state estimation for noisily observed time-dependent problems. Since that time it has had enormous impact in many application domains because of its robustness and ease of implementation, and numerical evidence of its accuracy. In this paper we propose the application of an iterative ensemble Kalman method for the solution of a wide class of inverse problems. In this context we show that the estimate of the unknown function that we obtain with the ensemble Kalman method lies in a subspace A spanned by the initial ensemble. Hence the resulting error may be bounded above by the error found from the best approximation in this subspace. We provide numerical experiments which compare the error incurred by the ensemble Kalman method for inverse problems with the error of the best approximation in A, and with variants on traditional least-squares approaches, restricted to the subspace A. In so doing we demonstrate that the ensemble Kalman method for inverse problems provides a derivative-free optimization method with comparable accuracy to that achieved by traditional least-squares approaches. Furthermore, we also demonstrate that the accuracy is of the same order of magnitude as that achieved by the best approximation. Three examples are used to demonstrate these assertions: inversion of a compact linear operator; inversion of piezometric head to determine hydraulic conductivity in a Darcy model of groundwater flow; and inversion of Eulerian velocity measurements at positive times to determine the initial condition in an incompressible fluid. (paper)
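A minimal sketch of the iterative ensemble Kalman method for a linear inverse problem (the paper's first example class, inversion of a compact linear operator) might look as follows. The operator, ensemble size, noise level, and fixed iteration count below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear forward map G(u) = A u with a smoothing, Hilbert-like kernel
d, m = 10, 20
A = np.array([[1.0 / (i + j + 1) for j in range(m)] for i in range(d)])
u_true = np.sin(np.linspace(0, np.pi, m))
gamma = 1e-4                                      # observational noise variance
y = A @ u_true + np.sqrt(gamma) * rng.standard_normal(d)

# Initial ensemble; the EnKF estimate stays in the span of these members
J = 50
U = rng.standard_normal((m, J))
for _ in range(20):
    G = A @ U                                     # forward map of each member
    um = U.mean(axis=1, keepdims=True)
    gm = G.mean(axis=1, keepdims=True)
    Cug = (U - um) @ (G - gm).T / J               # cross-covariance (parameter, output)
    Cgg = (G - gm) @ (G - gm).T / J               # output covariance
    K = Cug @ np.linalg.inv(Cgg + gamma * np.eye(d))          # Kalman gain
    Y = y[:, None] + np.sqrt(gamma) * rng.standard_normal((d, J))  # perturbed observations
    U = U + K @ (Y - G)                           # derivative-free update

u_hat = U.mean(axis=1)
```

Note that the update uses only forward evaluations of the ensemble, never derivatives of G, which is the practical appeal the paper quantifies against least-squares baselines.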
Intersections, ideals, and inversion
Vasco, D.W.
1998-01-01
Techniques from computational algebra provide a framework for treating large classes of inverse problems. In particular, the discretization of many types of integral equations and of partial differential equations with undetermined coefficients leads to systems of polynomial equations. The structure of the solution set of such equations may be examined using algebraic techniques. For example, the existence and dimensionality of the solution set may be determined. Furthermore, it is possible to bound the total number of solutions. The approach is illustrated by a numerical application to the inverse problem associated with the Helmholtz equation. The algebraic methods are used in the inversion of a set of transverse electric (TE) mode magnetotelluric data from Antarctica. The existence of solutions is demonstrated and the number of solutions is found to be finite, bounded from above by 50. The best fitting structure is dominantly one dimensional with a low crustal resistivity of about 2 ohm-m. Such a low value is compatible with studies suggesting lower surface wave velocities than found in typical stable cratons.
Ferencik, Maros; Lisauskas, Jennifer B.; Cury, Ricardo C.; Hoffmann, Udo; Abbara, Suhny; Achenbach, Stephan; Karl, W. Clem; Brady, Thomas J.; Chan, Raymond C.
2006-01-01
Multi-detector computed tomography (MDCT) permits detection of coronary plaque. However, noise and blurring impair the accuracy and precision of plaque measurements. The aim of the study was to evaluate MDCT post-processing based on non-linear image deblurring and edge-preserving noise suppression for measurements of plaque size. Contrast-enhanced MDCT coronary angiography was performed in four subjects (mean age 55 ± 5 years, mean heart rate 54 ± 5 bpm) using a 16-slice scanner (Siemens Sensation 16, collimation 16 × 0.75 mm, gantry rotation 420 ms, tube voltage 120 kV, tube current 550 mAs, 80 mL of contrast). Intravascular ultrasound (IVUS; 40 MHz probe) was performed in one vessel in each patient and served as a reference standard. MDCT vessel cross-sectional images (1 mm thickness) were created perpendicular to the centerline and aligned with corresponding IVUS images. MDCT images were processed using a deblurring and edge-preserving noise suppression algorithm. Then, three independent blinded observers segmented lumen and outer vessel boundaries in each modality to obtain vessel cross-sectional area and wall area in the unprocessed MDCT cross-sections, post-processed MDCT cross-sections and corresponding IVUS. The wall area measurement difference for unprocessed and post-processed MDCT images relative to IVUS was 0.4 ± 3.8 mm² and -0.2 ± 2.2 mm², respectively. In conclusion, MDCT permitted accurate in vivo measurement of wall area and vessel cross-sectional area as compared to IVUS. Post-processing to reduce blurring and noise reduced the variability of wall area measurements and reduced measurement bias for both wall area and vessel cross-sectional area.
Standard and inverse bond percolation of straight rigid rods on square lattices
Ramirez, L. S.; Centres, P. M.; Ramirez-Pastor, A. J.
2018-04-01
Numerical simulations and finite-size scaling analysis have been carried out to study standard and inverse bond percolation of straight rigid rods on square lattices. In the case of standard percolation, the lattice is initially empty. Then, linear bond k-mers (sets of k linear nearest-neighbor bonds) are randomly and sequentially deposited on the lattice. The jamming coverage p_{j,k} and percolation threshold p_{c,k} are determined for a wide range of k (1 ≤ k ≤ 120). p_{j,k} and p_{c,k} exhibit a decreasing behavior with increasing k, with p_{j,k→∞} = 0.7476(1) and p_{c,k→∞} = 0.0033(9) being the limit values for large k-mer sizes. p_{j,k} is always greater than p_{c,k}, and consequently, the percolation phase transition occurs for all values of k. In the case of inverse percolation, the process starts with an initial configuration where all lattice bonds are occupied and, given that periodic boundary conditions are used, the opposite sides of the lattice are connected by nearest-neighbor occupied bonds. Then, the system is diluted by randomly removing linear bond k-mers from the lattice. The central idea here is based on finding the maximum concentration of occupied bonds (minimum concentration of empty bonds) for which connectivity disappears. This particular value of the concentration is called the inverse percolation threshold p^i_{c,k}, and determines a geometrical phase transition in the system. On the other hand, the inverse jamming coverage p^i_{j,k} is the coverage of the limit state, in which no more objects can be removed from the lattice due to the absence of linear clusters of nearest-neighbor bonds of appropriate size. It is easy to understand that p^i_{j,k} = 1 - p_{j,k}. The obtained results for p^i_{c,k} show that the inverse percolation threshold is a decreasing function of k in the range 1 ≤ k ≤ 18. For k > 18, all jammed configurations are percolating states, and consequently, there is no nonpercolating phase. In other words, the lattice remains connected even when
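The connectivity question at the heart of both percolation variants, whether a given bond configuration connects opposite sides of the lattice, is typically answered with a union-find structure. The sketch below treats only the simplest k = 1 (single-bond) case of standard percolation, for which the square-lattice bond threshold is exactly 1/2; the k-mer deposition and removal machinery of the paper is omitted.

```python
import numpy as np

class DSU:
    """Disjoint-set union (union-find) with path halving."""
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, x):
        while self.p[x] != x:
            self.p[x] = self.p[self.p[x]]
            x = self.p[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.p[ra] = rb

def percolates(L, p, rng):
    """Does an L x L square lattice with bond occupation probability p
    connect its top row to its bottom row?"""
    dsu = DSU(L * L + 2)
    top, bottom = L * L, L * L + 1      # two virtual nodes
    for i in range(L):
        for j in range(L):
            s = i * L + j
            if i == 0:
                dsu.union(s, top)
            if i == L - 1:
                dsu.union(s, bottom)
            if i + 1 < L and rng.random() < p:
                dsu.union(s, s + L)     # vertical bond
            if j + 1 < L and rng.random() < p:
                dsu.union(s, s + 1)     # horizontal bond
    return dsu.find(top) == dsu.find(bottom)

rng = np.random.default_rng(3)
L = 64
frac_low = np.mean([percolates(L, 0.3, rng) for _ in range(40)])   # below p_c = 1/2
frac_high = np.mean([percolates(L, 0.7, rng) for _ in range(40)])  # above p_c = 1/2
```

Well below and well above the threshold the spanning probability is near 0 and near 1, respectively; the finite-size scaling analysis in the paper sharpens exactly this transition for general k.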
Testing earthquake source inversion methodologies
Page, Morgan T.; Mai, Paul Martin; Schorlemmer, Danijel
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data
Contributions to Large Covariance and Inverse Covariance Matrices Estimation
Kang, Xiaoning
2016-01-01
Estimation of covariance matrix and its inverse is of great importance in multivariate statistics with broad applications such as dimension reduction, portfolio optimization, linear discriminant analysis and gene expression analysis. However, accurate estimation of covariance or inverse covariance matrices is challenging due to the positive definiteness constraint and large number of parameters, especially in the high-dimensional cases. In this thesis, I develop several approaches for estimat...
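One standard way to obtain a well-conditioned, positive-definite estimate in the high-dimensional regime the abstract mentions is linear shrinkage of the sample covariance toward a scaled identity. The sketch below uses a fixed shrinkage weight for illustration (data-driven choices such as Ledoit-Wolf exist); it is not the estimator developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(4)
p, n = 50, 30                        # more variables than samples: sample covariance is singular
X = rng.standard_normal((n, p))
S = np.cov(X, rowvar=False)          # rank-deficient sample covariance (rank <= n - 1)

alpha = 0.2                          # shrinkage weight (assumption: fixed, not data-driven)
target = np.trace(S) / p * np.eye(p) # scaled-identity shrinkage target
S_shrunk = (1 - alpha) * S + alpha * target

eig_S = np.linalg.eigvalsh(S)
eig_shrunk = np.linalg.eigvalsh(S_shrunk)
```

The shrunk estimate is strictly positive definite (its smallest eigenvalue is bounded below by a multiple of alpha), so its inverse exists, which is precisely what applications such as linear discriminant analysis and portfolio optimization require.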
Observer-dependent sign inversions of polarization singularities.
Freund, Isaac
2014-10-15
We describe observer-dependent sign inversions of the topological charges of vector field polarization singularities: C points (points of circular polarization), L points (points of linear polarization), and two virtually unknown singularities we call γ(C) and α(L) points. In all cases, the sign of the charge seen by an observer can change as she changes the direction from which she views the singularity. Analytic formulas are given for all C and all L point sign inversions.
Scattering-angle based filtering of the waveform inversion gradients
Alkhalifah, Tariq Ali
2014-01-01
Full waveform inversion (FWI) requires a hierarchical approach to maneuver the complex non-linearity associated with the problem of velocity update. In anisotropic media, the non-linearity becomes far more complex with the potential trade-off between the multiparameter description of the model. A gradient filter helps us in accessing the parts of the gradient that are suitable to combat the potential non-linearity and parameter trade-off. The filter is based on representing the gradient in the time-lag normalized domain, in which the low scattering angle of the gradient update is initially muted out in the FWI implementation, in what we may refer to as a scattering angle continuation process. The result is a long-wavelength update dominated by the transmission part of the update gradient. In this case, even 10 Hz data can produce vertically near-zero wavenumber updates suitable for a background correction of the model. Relaxing the filtering at a later stage in the FWI implementation allows for smaller scattering angles to contribute higher-resolution information to the model. The benefit of the extended-domain-based filtering of the gradient lies not only in its ability to provide low-wavenumber gradients guided by the scattering angle, but also in its potential to provide gradients free of unphysical energy that may correspond to unrealistic scattering angles.
Connection between Dirac and matrix Schroedinger inverse-scattering transforms
Jaulent, M.; Leon, J.J.P.
1978-01-01
The connection between two applications of the inverse scattering method for solving nonlinear equations is established. The inverse method associated with the massive Dirac system (D): (iσ_3 d/dx − i q_3 σ_1 − q_1 σ_2 + m σ_2) Y = εY is rediscovered from the inverse method associated with the 2 × 2 matrix Schroedinger equation (S): Y_xx + (k² − Q) Y = 0. Here Q obeys a nonlinear constraint equivalent to a linear constraint on the reflection coefficient for (S). (author)
Inverse radiative transfer problems in two-dimensional heterogeneous media
Tito, Mariella Janette Berrocal
2001-01-01
The analysis of inverse problems in participating media where emission, absorption and scattering take place has several relevant applications in engineering and medicine. Some of the techniques developed for the solution of inverse problems have as a first step the solution of the direct problem. In this work the discrete ordinates method has been used for the solution of the linearized Boltzmann equation in two dimensional cartesian geometry. The Levenberg - Marquardt method has been used for the solution of the inverse problem of internal source and absorption and scattering coefficient estimation. (author)
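The Levenberg-Marquardt step used for the parameter estimation can be sketched on a toy forward model. The exponential attenuation model and its two parameters below are stand-ins for the discrete-ordinates solution of the transport equation, not the authors' actual formulation.

```python
import numpy as np

def model(params, x):
    """Toy forward model: transmitted intensity under an absorption
    coefficient sigma and source strength q (illustrative assumptions)."""
    sigma, q = params
    return q * np.exp(-sigma * x)

def levenberg_marquardt(f, p0, x, y, n_iter=50, lam=1e-2):
    p = np.array(p0, float)
    for _ in range(n_iter):
        r = y - f(p, x)
        # Jacobian by central finite differences
        J = np.empty((x.size, p.size))
        for k in range(p.size):
            dp = np.zeros_like(p)
            dp[k] = 1e-6
            J[:, k] = (f(p + dp, x) - f(p - dp, x)) / 2e-6
        # Marquardt-damped normal equations
        A = J.T @ J + lam * np.diag(np.diag(J.T @ J))
        step = np.linalg.solve(A, J.T @ r)
        if np.sum((y - f(p + step, x)) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5   # accept: move toward Gauss-Newton
        else:
            lam *= 2.0                     # reject: move toward gradient descent
    return p

x = np.linspace(0, 5, 40)
p_true = (0.8, 2.0)
y = model(p_true, x)
p_hat = levenberg_marquardt(model, [0.3, 1.0], x, y)
```

The damping parameter interpolates between gradient descent (robust far from the solution) and Gauss-Newton (fast near it), which is why the method is popular for coefficient estimation wrapped around an expensive direct solver.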
Inverse planning and optimization: a comparison of solutions
Ringor, Michael [School of Health Sciences, Purdue University, West Lafayette, IN (United States); Papiez, Lech [Department of Radiation Oncology, Indiana University, Indianapolis, IN (United States)
1998-09-01
The basic problem in radiation therapy treatment planning is to determine an appropriate set of treatment parameters that would induce an effective dose distribution inside a patient. One can approach this task as an inverse problem, or as an optimization problem. In this presentation, we compare both approaches. The inverse problem is presented as a dose reconstruction problem similar to tomography reconstruction. We formulate the optimization problem as linear and quadratic programs. Explicit comparisons are made between the solutions obtained by inversion and those obtained by optimization for the case in which scatter and attenuation are ignored (the NS-NA approximation)
Reduction of Linear Programming to Linear Approximation
Vaserstein, Leonid N.
2006-01-01
It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
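The classical direction of this equivalence, that a Chebyshev (minimax) approximation problem is a linear program, can be checked on a tiny example. For a two-parameter fit the optimum equioscillates on some three points with alternating signs, so the underlying LP can even be solved exactly by enumerating those reference sets; the data points below are made up.

```python
import numpy as np
from itertools import combinations

# Chebyshev (minimax) line fit: minimize over (a, b) the value max_i |a + b*x_i - y_i|.
# With n = 2 parameters, the optimum equioscillates on some 3 points with
# alternating signs, so we enumerate all 3-point subsets and solve exactly.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 2.0, 5.0])

best = (np.inf, None)
for idx in combinations(range(len(x)), 3):
    for s0 in (+1.0, -1.0):
        signs = np.array([s0, -s0, s0])
        # Solve a + b*x_i + signs_i * t = y_i for (a, b, t) on the reference set
        M = np.column_stack([np.ones(3), x[list(idx)], signs])
        try:
            a, b, t = np.linalg.solve(M, y[list(idx)])
        except np.linalg.LinAlgError:
            continue
        t = abs(t)
        err = np.max(np.abs(a + b * x - y))
        # Candidate is optimal only if its global max error equals the
        # equioscillation level on the reference set
        if err <= t + 1e-12 and err < best[0]:
            best = (err, (a, b))

minimax_err, (a_opt, b_opt) = best
```

For these four points the best line is y = -2/3 + (5/3)x with minimax error 2/3, attained with alternating signs at x = 0, 2, 3; a general-purpose LP solver applied to the standard epigraph formulation returns the same optimum.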
Topological inversion for solution of geodesy-constrained geophysical problems
Saltogianni, Vasso; Stiros, Stathis
2015-04-01
Geodetic data, mostly GPS observations, permit measurement of the displacements of selected points around activated faults and volcanoes and, on the basis of geophysical models, modeling of the underlying physical processes. This requires inversion of redundant systems of highly non-linear equations with >3 unknowns; a situation analogous to the adjustment of geodetic networks. However, in geophysical problems inversion cannot be based on conventional least-squares techniques, and is based on numerical inversion techniques (a priori fixing of some variables, optimization in steps with the values of two variables at a time regarded as fixed, random search in the vicinity of approximate solutions). Still, these techniques lead to solutions trapped in local minima, to correlated estimates and to solutions with poor error control (usually sampling-based approaches). To overcome these problems, a numerical-topological, grid-search based technique in the R^N space is proposed (N being the number of unknown variables). This technique is in fact a generalization and refinement of techniques used in lighthouse positioning and in some cases of low-accuracy 2-D positioning using Wi-Fi etc. The basic concept is to assume discrete possible ranges of each variable, and from these ranges to define a grid G in the R^N space, with some of the gridpoints approximating the true solutions of the system. Each point of the hyper-grid G is then tested on whether it satisfies the observations, given their uncertainty level, and the successful grid points define a sub-space of G containing the true solutions. The optimal (minimal) space containing one or more solutions is obtained using a trial-and-error approach and a single optimization factor. From this essentially deterministic identification of the set of gridpoints satisfying the system of equations, at a following step a stochastic optimal solution is computed, corresponding to the center of gravity of this set of gridpoints. This solution corresponds to a
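The grid-search concept can be sketched for a hypothetical 2-D source-location problem: test every gridpoint against all observations at their uncertainty level, keep the feasible set, and report its centre of gravity. The forward model, uncertainties, and grid resolution below are illustrative assumptions, not the authors' geophysical models.

```python
import numpy as np

rng = np.random.default_rng(5)

def forward(p, stations):
    """Stand-in forward model: distances from a 2-D source to fixed stations."""
    return np.linalg.norm(stations - p, axis=1)

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
p_true = np.array([3.0, 7.0])
sigma = 0.1
d_obs = forward(p_true, stations) + sigma * rng.standard_normal(4)

# Grid G over the assumed parameter ranges
xs = np.linspace(0, 10, 201)
ys = np.linspace(0, 10, 201)
accepted = []
for gx in xs:
    for gy in ys:
        r = np.abs(forward(np.array([gx, gy]), stations) - d_obs)
        if np.all(r < 3 * sigma):           # within the uncertainty of every observation
            accepted.append((gx, gy))

solution = np.mean(accepted, axis=0)        # centre of gravity of the accepted set
```

Unlike a local descent, this deterministic sweep cannot get trapped in a local minimum inside the searched ranges, and the spread of the accepted set gives a direct, sampling-free picture of the solution uncertainty.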
Taschek, Marco; Egermann, Jan; Schwarz, Sabrina; Leipertz, Alfred
2005-11-01
Optimum fuel preparation and mixture formation are core issues in the development of modern direct-injection (DI) Diesel engines, as these are crucial for defining the border conditions for the subsequent combustion and pollutant formation process. The local fuel/air ratio can be seen as one of the key parameters for this optimization process, as it allows the characterization and comparison of the mixture formation quality. For the first time, to the best of our knowledge, linear Raman spectroscopy is used to detect the fuel/air ratio and its change along a line of a few millimeters directly and nonintrusively inside the combustion bowl of a DI Diesel engine. By a careful optimization of the measurement setup, the weak Raman signals could be separated successfully from disturbing interferences. A simultaneous measurement of the densities of air and fuel was possible along a line of about 10 mm length, allowing a time- and space-resolved measurement of the local fuel/air ratio. This could be performed in a nonreacting atmosphere as well as during fired operating conditions. The positioning of the measurement volume next to the interaction point of one of the spray jets with the wall of the combustion bowl allowed a near-wall analysis of the mixture formation process for a six-hole nozzle under varying injection and engine conditions. The results clearly show the influence of the nozzle geometry and preinjection on the mixing process. In contrast, modulation of the intake air temperature merely led to minor changes of the fuel concentration in the measurement volume.