Model order reduction techniques with applications in finite element analysis
Qu, Zu-Qing
2004-01-01
Despite the continued rapid advance in computing speed and memory the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...
Symmetry and partial order reduction techniques in model checking Rebeca
Jaghouri, M.M.; Sirjani, M.; Mousavi, M.R.; Movaghar, A.
2007-01-01
Rebeca is an actor-based language with formal semantics that can be used in modeling concurrent and distributed software and protocols. In this paper, we study the application of partial order and symmetry reduction techniques to model checking dynamic Rebeca models. Finding symmetry based equivalen
Manifold learning techniques and model reduction applied to dissipative PDEs
Sonday, Benjamin E; Gear, C William; Kevrekidis, Ioannis G
2010-01-01
We link nonlinear manifold learning techniques for data analysis/compression with model reduction techniques for evolution equations with time scale separation. In particular, we demonstrate a "nonlinear extension" of the POD-Galerkin approach to obtaining reduced dynamic models of dissipative evolution equations. The approach is illustrated through a reaction-diffusion PDE, and the performance of different simulators on the full and the reduced models is compared. We also discuss the relation of this nonlinear extension with the so-called "nonlinear Galerkin" methods developed in the context of Approximate Inertial Manifolds.
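For readers unfamiliar with the baseline being extended, a minimal linear POD-Galerkin sketch is shown below on a 1-D diffusion semi-discretization. The toy setup and all names are ours; this is not the paper's reaction-diffusion example or its nonlinear manifold extension.

```python
import numpy as np

# Toy full-order model: 1-D diffusion, second-difference Laplacian
n = 100
x = np.linspace(0.0, 1.0, n)
A = (-2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) * (n - 1) ** 2

# Collect snapshots of the full model (explicit Euler, stable since dt*|lambda|max < 2)
u = np.sin(np.pi * x)
dt, snaps = 1e-5, [u.copy()]
for _ in range(200):
    u = u + dt * (A @ u)
    snaps.append(u.copy())
S = np.array(snaps).T                      # snapshot matrix, n x (steps+1)

# POD basis: leading left singular vectors of the snapshot matrix
U, sv, _ = np.linalg.svd(S, full_matrices=False)
r = 3
Phi = U[:, :r]                             # orthonormal reduced basis

# Galerkin projection and reduced simulation
Ar = Phi.T @ A @ Phi
a = Phi.T @ snaps[0]
for _ in range(200):
    a = a + dt * (Ar @ a)
err = np.linalg.norm(Phi @ a - snaps[-1]) / np.linalg.norm(snaps[-1])
```

The reduced 3-state model reproduces the 100-state trajectory to within a small relative error because the diffusion dynamics are dominated by a few smooth modes.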
System identification and model reduction using modulating function techniques
Shen, Yan
1993-01-01
Weighted least squares (WLS) and adaptive weighted least squares (AWLS) algorithms are introduced for continuous-time system identification using Fourier-type modulating function techniques. Two stochastic signal models are examined using the mean square properties of the stochastic calculus: an equation error signal model with white noise residuals, and a more realistic white measurement noise signal model. The covariance matrices in each model are shown to be banded and sparse, and a joint likelihood cost function is developed which links the real and imaginary parts of the modulated quantities. The superior performance of the above algorithms is demonstrated by comparing them with the LS/MFT and the popular prediction error method (PEM) through 200 Monte Carlo simulations. A model reduction problem is formulated with the AWLS/MFT algorithm, and comparisons are made via six examples with a variety of model reduction techniques, including the well-known balanced realization method. Here the AWLS/MFT algorithm manifests higher accuracy in almost all cases, and exhibits its unique flexibility and versatility. Armed with this model reduction, the AWLS/MFT algorithm is extended to MIMO transfer function system identification problems. The impact due to the discrepancy in bandwidths and gains among subsystems is explored through five examples. Finally, as a comprehensive application, the stability derivatives of the longitudinal and lateral dynamics of an F-18 aircraft are identified using physical flight data provided by NASA. A pole-constrained SIMO and MIMO AWLS/MFT algorithm is devised and analyzed. Monte Carlo simulations illustrate its high noise-rejecting properties. Utilizing the flight data, comparisons among different MFT algorithms are tabulated and the AWLS is found to be strongly favored in almost all facets.
Frequency Weighted Model Order Reduction Technique and Error Bounds for Discrete Time Systems
Directory of Open Access Journals (Sweden)
Muhammad Imran
2014-01-01
for the whole frequency range. However, certain applications (like controller reduction) require frequency weighted approximation, which introduces the concept of using frequency weights in model reduction techniques. Limitations of some existing frequency weighted model reduction techniques include lack of stability of the reduced order models (for the two-sided weighting case) and lack of frequency response error bounds. A new frequency weighted technique for balanced model reduction of discrete time systems is proposed. The proposed technique guarantees stable reduced order models even for the case when two-sided weightings are present. An efficient technique for computing the frequency weighted Gramians is also proposed. Results are compared with other existing frequency weighted model reduction techniques for discrete time systems. Moreover, the proposed technique yields frequency response error bounds.
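As background, the unweighted discrete-time balanced truncation that frequency-weighted methods extend can be sketched via the standard square-root algorithm. The toy system below is ours; the paper's frequency-weighted Gramians and its stability-preserving modification are not reproduced.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Plain (unweighted) discrete-time balanced truncation, square-root form."""
    # Gramians: A P A' - P + B B' = 0 and A' Q A - Q + C' C = 0
    P = solve_discrete_lyapunov(A, B @ B.T)
    Q = solve_discrete_lyapunov(A.T, C.T @ C)
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)              # s = Hankel singular values
    T = Lp @ Vt.T / np.sqrt(s)             # balancing transformation
    Tinv = (U / np.sqrt(s)).T @ Lq.T
    Ar = (Tinv @ A @ T)[:r, :r]
    Br = (Tinv @ B)[:r]
    Cr = (C @ T)[:, :r]
    return Ar, Br, Cr, s

# Toy stable SISO system (diagonal A, hence controllable/observable here)
A = np.diag([0.9, 0.7, 0.5, 0.3, 0.1])
B = np.ones((5, 1))
C = np.ones((1, 5))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, 2)

# DC gains G(1) of the full and reduced models
G0 = (C @ np.linalg.solve(np.eye(5) - A, B))[0, 0]
Gr0 = (Cr @ np.linalg.solve(np.eye(2) - Ar, Br))[0, 0]
```

The classical a priori bound guarantees the frequency response error is at most twice the sum of the discarded Hankel singular values; the DC-gain error above stays within that bound.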
Comparative Studies of Clustering Techniques for Real-Time Dynamic Model Reduction
Hogan, Emilie; Halappanavar, Mahantesh; Huang, Zhenyu; Lin, Guang; Lu, Shuai; Wang, Shaobu
2015-01-01
Dynamic model reduction in power systems is necessary for improving computational efficiency. Traditional model reduction using linearized models or offline analysis is not adequate to capture power system dynamic behaviors, especially since the new mix of intermittent generation and intelligent consumption makes the power system more dynamic and non-linear. Real-time dynamic model reduction emerges as an important need. This paper explores the use of clustering techniques to analyze real-time phasor measurements to determine generator groups and representative generators for dynamic model reduction. Two clustering techniques -- graph clustering and evolutionary clustering -- are studied in this paper. Various implementations of these techniques are compared, and also compared with a previously developed Singular Value Decomposition (SVD)-based dynamic model reduction approach. The various methods exhibit different levels of accuracy when comparing the reduced model simulation against the original model. But some ...
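A toy illustration of measurement-based coherency grouping: synthetic "rotor-angle" signals are grouped by correlation with a reference machine. This simple stand-in is not the paper's graph or evolutionary clustering, and the data are synthetic rather than real phasor measurements.

```python
import numpy as np

# Synthetic measurements: two coherent groups of four generators each,
# riding on different oscillation modes plus small noise.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 5.0, 200)
slow = np.sin(2 * np.pi * 0.5 * t)          # inter-area swing mode
fast = np.sin(2 * np.pi * 1.2 * t)          # faster local mode
signals = np.vstack([
    slow + 0.05 * rng.standard_normal((4, 200)),   # coherent group A
    fast + 0.05 * rng.standard_normal((4, 200)),   # coherent group B
])

# Group machines by correlation with reference generator #0
corr = np.corrcoef(signals)                 # 8 x 8 correlation matrix
group_of_ref = corr[0] > 0.8                # True = coherent with #0
```

In a reduced model, one representative generator per coherent group then stands in for the whole group.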
Size reduction techniques for vital compliant VHDL simulation models
Rich, Marvin J.; Misra, Ashutosh
2006-08-01
A method and system select delay values from a VHDL standard delay file that correspond to an instance of a logic gate in a logic model. Then the system collects all the delay values of the selected instance and builds super generics for the rise-time and the fall-time of the selected instance. Then, the system repeats this process for every delay value in the standard delay file (310) that correspond to every instance of every logic gate in the logic model. The system then outputs a reduced size standard delay file (314) containing the super generics for every instance of every logic gate in the logic model.
Energy Technology Data Exchange (ETDEWEB)
Dautin, S.
1997-04-01
This work concerns the modeling of thermal phenomena inside buildings for the evaluation of energy exploitation costs of thermal installations and for the modeling of thermal and aeraulic transient phenomena. This thesis comprises 7 chapters dealing with: (1) the thermal phenomena inside buildings and the CLIM2000 calculation code, (2) the ETNA and GENEC experimental cells and their modeling, (3) the model reduction techniques tested (Marshall's truncation, the Michailesco aggregation method and Moore truncation) with their algorithms and their encoding in the MATRED software, (4) the application of the model reduction methods to the GENEC and ETNA cells and to a medium-size dual-zone building, (5) the modeling of meteorological influences classically applied to buildings (external temperature and solar flux), (6) the analytical expression of these modeled meteorological influences. The last chapter presents the results of these improved methods on the GENEC and ETNA cells and on a lower-inertia building. These new methods are compared to classical methods. (J.S.) 69 refs.
Ramesh, K; Nirmalkumar, A; Gurusamy, G
2010-01-01
In this paper, the design of a current controller for a two-quadrant DC motor drive is proposed with the help of a model order reduction technique. The calculation of the current controller gain, which involves some approximations in the conventional design process, is replaced by the proposed model order reduction method. The model order reduction technique proposed in this paper gives a better controller gain value for the DC motor drive. The proposed method is a mixed method, where the numerator polynomial of the reduced order model is obtained by using the stability equation method and the denominator polynomial is obtained by using an approximation technique presented in this paper. The designed controller's responses were simulated with the help of MATLAB to show the validity of the proposed method.
Application of Krylov Reduction Technique for a Machine Tool Multibody Modelling
Directory of Open Access Journals (Sweden)
M. Sulitka
2014-02-01
Full Text Available Quick calculation of machine tool dynamic response represents one of the major requirements for machine tool virtual modelling and virtual machining, aiming at simulating the machining process performance, quality, and precision of a workpiece. Enhanced time effectiveness in machine tool dynamic simulations may be achieved by employing model order reduction (MOR) techniques on the full finite element (FE) models. The paper provides a case study aimed at comparing the Krylov subspace and mode truncation techniques. The application of both reduction techniques for creating a machine tool multibody model is evaluated. The Krylov subspace reduction technique shows high quality in terms of the dynamic properties of the reduced multibody model and, at the same time, very low time demands.
Directory of Open Access Journals (Sweden)
Lubna Moin; Shahid Ali
2009-04-01
Full Text Available This research paper explores and compares different modeling and analysis techniques, and then also explores the model order reduction approach and its significance. The traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, but the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. They are also used for analyzing linear and non-linear dynamic production systems, artificial intelligence, image processing, robotics and industrial automation. This paper describes a unique technique of generating the genetic design from the tree-structured transfer function obtained from the Bond Graph. This research work combines bond graphs for model representation with genetic programming for exploring different ideas in the design space; the tree-structured transfer function results from replacing each typical bond graph element with its impedance equivalent, specifying impedance laws for the Bond Graph multiport. The tree-structured form thus obtained from the Bond Graph is applied to generate the genetic tree. Application studies will identify key issues and their importance for advancing this approach towards becoming an effective and efficient design tool for synthesizing designs for electrical systems. In the first phase, the system is modeled using the Bond Graph technique. Its system response and transfer function obtained with the conventional and Bond Graph methods are analyzed, and then an approach towards model order reduction is presented. The suggested algorithm and other known modern model order reduction techniques are applied to an 11th-order high-pass filter [1], with different approaches. The model order reduction technique developed in this paper has the least reduction errors and, secondly, the final model retains structural information. The system response and the stability analysis of the system transfer function obtained by the conventional and Bond Graph methods are compared and
A Multi-Model Reduction Technique for Optimization of Coupled Structural-Acoustic Problems
DEFF Research Database (Denmark)
Creixell Mediante, Ester; Jensen, Jakob Søndergaard; Brunskog, Jonas;
2016-01-01
Finite Element models of structural-acoustic coupled systems can become very large for complex structures with multiple connected parts. Optimization of the performance of the structure based on harmonic analysis of the system requires solving the coupled problem iteratively and for several frequencies, which can become highly time consuming. Several modal-based model reduction techniques for structure-acoustic interaction problems have been developed in the literature. The unsymmetric nature of the pressure-displacement formulation of the problem poses the question of how the reduction modal base should be formed, given that the modal vectors are not orthogonal due to the asymmetry of the system matrices. In this paper, a multi-model reduction (MMR) technique for structure-acoustic interaction problems is developed. In MMR, the reduction base is formed with the modal vectors of a family...
DEFF Research Database (Denmark)
Tahavori, Maryamsadat; Shaker, Hamid Reza
A method for model reduction of dynamical systems with second order structure is proposed in this paper. The proposed technique preserves the second order structure of the system, and also preserves the stability of the original system. The method uses the controllability and observability gramians within the time interval of interest to build the appropriate Petrov-Galerkin projection for the dynamical system. The bound on the approximation error is also derived. The numerical results are compared with the counterparts from other techniques. The results confirm...
Zimmerling, Jörn; Wei, Lei; Urbach, Paul; Remis, Rob
2016-06-01
In this paper we present a Krylov subspace model-order reduction technique for time- and frequency-domain electromagnetic wave fields in linear dispersive media. The starting point is a self-consistent first-order form of Maxwell's equations and the constitutive relation. This form is discretized on a standard staggered Yee grid, while the extension to infinity is modeled via a recently developed global complex scaling method. By applying this scaling method, the time- or frequency-domain electromagnetic wave field can be computed via a so-called stability-corrected wave function. Since this function cannot be computed directly due to the large order of the discretized Maxwell system matrix, Krylov subspace reduced-order models are constructed that approximate this wave function. We show that the system matrix exhibits a particular physics-based symmetry relation that allows us to efficiently construct the time- and frequency-domain reduced-order models via a Lanczos-type reduction algorithm. The frequency-domain models allow for frequency sweeps, meaning that a single model provides field approximations for all frequencies of interest, and dominant field modes can easily be determined as well. Numerical experiments for two- and three-dimensional configurations illustrate the performance of the proposed reduction method.
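The Lanczos-based reduction idea can be illustrated on a generic real symmetric system matrix: project onto a Krylov subspace, then evaluate the matrix function on the small tridiagonal matrix. The sketch below approximates the action of a matrix exponential as a stand-in for the time-domain wave function; the Maxwell-specific symmetry relation and stability correction are not reproduced, and the matrix is a random toy example.

```python
import numpy as np
from scipy.linalg import expm

def lanczos(A, b, k):
    """Plain symmetric Lanczos: returns basis V and tridiagonal entries."""
    n = len(b)
    V = np.zeros((n, k))
    alpha = np.zeros(k)
    beta = np.zeros(k - 1)
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        alpha[j] = V[:, j] @ w
        w = w - alpha[j] * V[:, j]
        if j > 0:
            w = w - beta[j - 1] * V[:, j - 1]
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            V[:, j + 1] = w / beta[j]
    return V, alpha, beta

# Toy symmetric positive definite "system matrix" and source vector
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T / 200.0 + np.eye(200)
b = rng.standard_normal(200)

# Reduced-order approximation of exp(-A) @ b from a 30-dimensional Krylov space
V, alpha, beta = lanczos(A, b, 30)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
approx = np.linalg.norm(b) * (V @ expm(-T)[:, 0])
exact = expm(-A) @ b
rel_err = np.linalg.norm(approx - exact) / np.linalg.norm(exact)
```

A 30-dimensional reduced model reproduces the 200-dimensional matrix-function action essentially to machine precision, which is the efficiency argument behind Krylov reduced-order modeling.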
Trimming a hazard logic tree with a new model-order-reduction technique
Porter, Keith; Field, Ned; Milner, Kevin R
2017-01-01
The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
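The tornado-diagram step can be sketched in a few lines: swing each logic-tree parameter between its extremes with the others fixed at baseline, then rank parameters by the induced output swing. The loss function and parameter names below are invented stand-ins, not UCERF3 quantities.

```python
# Hypothetical portfolio-loss model of three hazard parameters (a stand-in)
def loss_model(p):
    return 3.0 * p["slip_rate"] + 0.5 * p["gmpe"] ** 2 + 0.1 * p["mmax"]

baseline = {"slip_rate": 1.0, "gmpe": 1.0, "mmax": 8.0}
ranges = {"slip_rate": (0.5, 1.5), "gmpe": (0.7, 1.3), "mmax": (7.5, 8.5)}

# Swing of the output as each parameter alone moves low -> high
swings = {}
for name, (lo, hi) in ranges.items():
    p = dict(baseline)
    p[name] = lo
    y_lo = loss_model(p)
    p[name] = hi
    y_hi = loss_model(p)
    swings[name] = abs(y_hi - y_lo)

# Parameters sorted by swing: vary the influential ones, fix the rest
ranked = sorted(swings, key=swings.get, reverse=True)
```

Branches whose swing is negligible can be fixed at their baseline values, which is how the leaf count of a logic tree gets cut by orders of magnitude.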
Energy Technology Data Exchange (ETDEWEB)
WAGGONER, L.O.
2000-05-16
As radiation safety specialists, one of the things we are required to do is evaluate tools, equipment, materials and work practices and decide whether the use of these products or work practices will reduce radiation dose or risk to the environment. There is a tendency for many workers who work with radioactive material to accomplish radiological work the same way they have always done it rather than look for new technology or change their work practices. New technology is being developed all the time that can make radiological work easier and result in less radiation dose to the worker or reduce the possibility that contamination will be spread to the environment. As we discuss the various tools and techniques that reduce radiation dose, keep in mind that the radiological controls should be reasonable. We cannot always get the dose to zero, so we must try to accomplish the work efficiently and cost-effectively. There are times we may have to accept that there is only so much you can do. The goal is to do the smart things that protect the worker but do not hinder him while the task is being accomplished. In addition, we should not demand that large amounts of money be spent for equipment that has marginal value in order to save a few millirem. We have broken the handout into sections that should simplify the presentation. Time, distance, shielding, and source reduction are methods used to reduce dose and are covered in Part I on work execution. We then look at operational considerations, radiological design parameters, and discuss the characteristics of personnel who deal with ALARA. This handout should give you an overview of what it takes to have an effective dose reduction program.
Simulation of Moving Loads in Elastic Multibody Systems With Parametric Model Reduction Techniques
Directory of Open Access Journals (Sweden)
Fischer Michael
2014-08-01
Full Text Available In elastic multibody systems, one considers large nonlinear rigid body motion and small elastic deformations. In a rising number of applications, e.g. automotive engineering, turning and milling processes, the position of the acting forces on the elastic body varies. The necessary model order reduction to enable efficient simulations requires the determination of ansatz functions, which depend on the moving force position. For a large number of possible interaction points, the size of the reduced system would increase drastically in the classical Component Mode Synthesis framework. If many nodes are potentially loaded, or the contact area is not known a priori and only a small number of nodes is loaded simultaneously, the system is described in this contribution with a parameter-dependent force position. This enables the application of parametric model order reduction methods. Here, two techniques based on matrix interpolation are described which transform individually reduced systems and allow the interpolation of the reduced system matrices to determine reduced systems for any force position. The online-offline decomposition and the description of the force distribution onto the reduced elastic body are presented in this contribution. The proposed framework enables the efficient simulation of elastic multibody systems with moving loads because it solely depends on the size of the reduced system. Results in the frequency and time domains for the simulation of a thin-walled cylinder with a moving load illustrate the applicability of the proposed method.
Model reduction techniques for dynamics analysis of ultra-precision linear stage
Institute of Scientific and Technical Information of China (English)
Xuedong CHEN; Zhixin LI
2009-01-01
Spring-damping elements are used to simplify the internal interactions in the proposed finite element (FE) model of an ultra-precision linear stage, and its dynamic behavior is studied. The comparison of mode shapes from the eigenvalue analysis shows that the components, except the translator, can represent the system's dynamic characteristics. A reduction approach is used to simplify the system in the dynamics study, with little difference between the full and reduced models in the vibration modes and the response analysis. The experimental modal analysis proves the validity of the reduction approach, which can be generalized to the development and dynamic characterization of complex system models, saving considerable computational resources.
Surrogate-based modeling and dimension reduction techniques for multi-scale mechanics problems
Institute of Scientific and Technical Information of China (English)
Wei Shyy; Young-Chang Cho; Wenbo Du; Amit Gupta; Chien-Chou Tseng; Ann Marie Sastry
2011-01-01
Successful modeling and/or design of engineering systems often requires one to address the impact of multiple “design variables” on the prescribed outcome. There are often multiple, competing objectives based on which we assess the outcome of optimization. Since accurate, high fidelity models are typically time consuming and computationally expensive, comprehensive evaluations can be conducted only if an efficient framework is available. Furthermore, informed decisions of the model/hardware's overall performance rely on an adequate understanding of the global, not local, sensitivity of the individual design variables on the objectives. The surrogate-based approach, which involves approximating the objectives as continuous functions of design variables from limited data, offers a rational framework to reduce the number of important input variables, i.e., the dimension of a design or modeling space. In this paper, we review the fundamental issues that arise in surrogate-based analysis and optimization, highlighting concepts, methods, techniques, as well as modeling implications for mechanics problems. To aid the discussions of the issues involved, we summarize recent efforts in investigating cryogenic cavitating flows, active flow control based on dielectric barrier discharge concepts, and lithium (Li)-ion batteries. It is also stressed that many multi-scale mechanics problems can naturally benefit from the surrogate approach for “scale bridging.”
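A minimal surrogate-modeling sketch in the spirit described above: fit a cheap polynomial response surface to a small number of "expensive" model evaluations, then query the surrogate across the design space. The model and sample budget below are invented for illustration.

```python
import numpy as np

# Stand-in for a costly high-fidelity simulation of one design variable
def expensive_model(x):
    return np.sin(3.0 * x) + 0.5 * x ** 2

# Limited sampling budget: 12 expensive evaluations
x_train = np.linspace(-1.0, 1.0, 12)
y_train = expensive_model(x_train)

# Degree-6 polynomial response surface fitted by least squares
coeffs = np.polyfit(x_train, y_train, 6)
surrogate = np.poly1d(coeffs)

# Cheap dense sweep of the design space using the surrogate
x_dense = np.linspace(-1.0, 1.0, 200)
err = np.max(np.abs(surrogate(x_dense) - expensive_model(x_dense)))
```

Once fitted, the surrogate can be evaluated thousands of times for optimization or global sensitivity analysis at negligible cost, with its accuracy checked against a handful of held-out expensive runs.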
Alsmadi, Othman M K; Abo-Hammour, Zaer S
2015-01-01
A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single input single output (SISO) and multi-input multioutput (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation along with the elements of B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques where simulation results show the potential and advantages of the new approach.
Variance Reduction Techniques in Monte Carlo Methods
Kleijnen, Jack P.C.; Ridder, A.A.N.; Rubinstein, R.Y.
2010-01-01
Monte Carlo methods are simulation algorithms to estimate a numerical quantity in a statistical model of a real system. These algorithms are executed by computer programs. Variance reduction techniques (VRT) are needed, even though computer speed has been increasing dramatically, ever since the intr
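One classic VRT, antithetic variates, can be demonstrated in a few lines: pair each uniform draw U with 1 - U so the paired estimates are negatively correlated. The integrand is our own toy example (E[e^U] = e - 1); for simplicity the same n draws are reused rather than halving the sample budget.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
u = rng.random(n)

# Crude Monte Carlo draws of e^U versus antithetic pairs (e^U + e^(1-U))/2
plain = np.exp(u)
anti = 0.5 * (np.exp(u) + np.exp(1.0 - u))

est_plain, est_anti = plain.mean(), anti.mean()
var_plain, var_anti = plain.var(), anti.var()   # per-draw variances
```

Because e^U and e^(1-U) are strongly negatively correlated, the antithetic estimator's per-draw variance is far below that of crude sampling (here by well over an order of magnitude) at essentially no extra cost.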
Model Reduction via Reducibility Matrix
Institute of Scientific and Technical Information of China (English)
Musa Abdalla; Othman Alsmadi
2006-01-01
In this work, a new model reduction technique is introduced. The proposed technique is derived using the matrix reducibility concept. The eigenvalues of the reduced model are preserved; that is, the reduced model eigenvalues are a subset of the full order model eigenvalues. This preservation of the eigenvalues makes the mathematical model closer to the physical model. Finally, the outcomes of this method are fully illustrated using simulations of two numerical examples.
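Eigenvalue preservation is easiest to see in the related (and simpler) modal truncation scheme sketched below, whose reduced model keeps a chosen subset of the full model's eigenvalues exactly; the paper's reducibility-matrix construction itself is not reproduced here.

```python
import numpy as np

# Toy full-order state-space model with well-separated eigenvalues
A = np.diag([-1.0, -2.0, -10.0, -50.0])
B = np.ones((4, 1))
C = np.ones((1, 4))

# Select the two slow (dominant) modes via an eigendecomposition
w, V = np.linalg.eig(A)
idx = np.argsort(np.abs(w))[:2]
Vr = V[:, idx]                      # right eigenvectors of kept modes
W = np.linalg.inv(V)[idx, :]        # matching left eigenvectors (W @ Vr = I)

# Oblique projection onto the kept modal subspace
Ar = W @ A @ Vr
Br = W @ B
Cr = C @ Vr
# eig(Ar) is exactly {-1, -2}: a subset of the full model's eigenvalues
```

The fast modes at -10 and -50 are discarded, so the reduced model reproduces the slow physics while its spectrum remains a true subset of the original.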
Reduction technique for tire contact problems
Noor, Ahmed K.; Peters, Jeanne M.
1995-04-01
A reduction technique and a computational procedure are presented for predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of the reduction technique, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface.
Indian Academy of Sciences (India)
T S Jeyali Laseetha; R Sukanesh
2014-02-01
In this paper, we propose a biogeography based optimization (BBO) technique, with linear and sinusoidal migration models, and simplified biogeography based optimization (S-BBO), for uniformly spaced linear antenna array synthesis to maximize the reduction of side lobe level (SLL). This paper explores biogeography theory. It generalizes two migration models in BBO, namely the linear migration model and the sinusoidal migration model. The performance of SLL reduction in a ULA is investigated. Our performance study shows that, of the two, the sinusoidal migration model is the more promising candidate for optimization. In our work, the simplified BBO algorithm is also deployed. It determines an optimal set of values for the amplitude excitations of the antenna array elements that generate a radiation pattern with maximum side lobe level reduction. Our detailed investigation also shows that the sinusoidal migration model of BBO performs better than the other evolutionary algorithms discussed in this paper.
Model Reduction of Hybrid Systems
DEFF Research Database (Denmark)
Shaker, Hamid Reza
High-technological solutions of today are characterized by complex dynamical models. A lot of these models have inherent hybrid/switching structure. Hybrid/switched systems are powerful models for distributed embedded systems design where discrete controls are applied to continuous processes. However, the cost of analyzing hybrid systems and of designing controllers and implementations is very high, so that the use of these models is limited in applications where the size of the state space is large. To cope with complexity, model reduction is a powerful technique. This thesis presents methods for model reduction and stability analysis of hybrid systems; reduced-order models of switched systems are derived, and the results are used for output feedback control of switched nonlinear systems. Model reduction of piecewise affine systems is also studied in this thesis; the proposed method is based on the reduction of the linear subsystems inside the polytopes.
Energy Technology Data Exchange (ETDEWEB)
Berthomieu, Th.; Boyer, H. [Universite de la Reunion (France). Laboratoire de Genie Industriel
2004-02-01
It is possible at present to perform complex thermal studies, integrating various thermal sources and various buildings, using building energy software. It is always of interest to simplify the calculation process with numerical reduction techniques. In this paper a reduction technique based on the decomposition of a complex system into elementary components linked to each other by simple relations is presented. This reduction is implemented in the simulation code Codyrum, which can be used for research purposes or as a design aid. The results of simulation are compared with experimental results. (authors)
Pressure-Reduction Technique for Crystal Growth
Shlichta, P. J.
1981-01-01
Large crystals grown by varying pressure rather than temperature. In constant-temperature pressure-reduction process, crystal growth promoted as solubility decreases by factor of more than 10. Technique used to study crystal growth kinetics by "pressure wave" analog of conventional "thermal wave" experiments. Technique has advantages of faster response and freedom from convective interference.
Model Reduction by Manifold Boundaries
Transtrum, Mark K.; Qiu, Peng
2015-01-01
Understanding the collective behavior of complex systems from their basic components is a difficult yet fundamental problem in science. Existing model reduction techniques are either applicable under limited circumstances or produce “black boxes” disconnected from the microscopic physics. We propose a new approach by translating the model reduction problem for an arbitrary statistical model into a geometric problem of constructing a low-dimensional, submanifold approximation to a high-dimensional manifold. When models are overly complex, we use the observation that the model manifold is bounded with a hierarchy of widths and propose using the boundaries as submanifold approximations. We refer to this approach as the manifold boundary approximation method. We apply this method to several models, including a sum of exponentials, a dynamical systems model of protein signaling, and a generalized Ising model. By focusing on parameters rather than physical degrees of freedom, the approach unifies many other model reduction techniques, such as singular limits, equilibrium approximations, and the renormalization group, while expanding the domain of tractable models. The method produces a series of approximations that decrease the complexity of the model and reveal how microscopic parameters are systematically “compressed” into a few macroscopic degrees of freedom, effectively building a bridge between the microscopic and the macroscopic descriptions. PMID:25216014
Overview of MC CDMA PAPR Reduction Techniques
Sarala, B; Bhandari, B N
2012-01-01
High Peak to Average Power Ratio (PAPR) of the transmitted signal is a critical problem in multicarrier modulation (MCM) systems such as Orthogonal Frequency Division Multiplexing (OFDM) and Multi-Carrier Code Division Multiple Access (MC CDMA), due to the large number of subcarriers. High PAPR leads to reduced resolution and battery life, and it also deteriorates system performance. This paper reviews different PAPR reduction techniques, their attendant technical issues, and the criteria for selecting a PAPR reduction technique. The constraints on a PAPR reduction scheme are low power consumption, low Bit Error Rate (BER), good spectral characteristics, and low complexity/cost.
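PAPR itself is straightforward to measure. The sketch below is our own illustration of the metric, with envelope clipping as the simplest distortion-based reduction technique, not a method from the paper; the subcarrier count and clipping ratio are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

# One multicarrier symbol: N QPSK-modulated subcarriers combined by an IFFT.
N = 256
qpsk = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
x = np.fft.ifft(qpsk) * np.sqrt(N)      # scaled to unit average power

def clip_envelope(x, ratio_db=4.0):
    """Hard-limit the envelope at ratio_db above the rms level. This is the
    simplest PAPR reduction; it costs BER and out-of-band radiation."""
    a = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (ratio_db / 20.0)
    mag = np.maximum(np.abs(x), 1e-12)
    return np.where(mag > a, a * x / mag, x)

print(f"PAPR before: {papr_db(x):.1f} dB, after clipping: "
      f"{papr_db(clip_envelope(x)):.1f} dB")
```

A random 256-carrier symbol typically shows a PAPR near 10 dB; clipping caps it near the chosen ratio, which is why the distortionless techniques surveyed in the paper are preferred when BER matters.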
Digital Eye Strain Reduction Techniques: A Review
PREETHI J SEEGEHALLI
2016-01-01
Digital eye strain or computer vision syndrome (CVS) is caused when we spend a considerable amount of time staring at the digital screens of desktop computers, laptops, e-readers, tablets and mobile phones. This paper discusses the causes of visual fatigue and digital eye strain reduction techniques such as the usage of optical glass, flicker-free screens, color filtering, fuzzy-logic-based brightness adaptation, bias lighting, optimizing the monitor's color temperature and Auto...
Model reduction for circuit simulation
Hinze, Michael; Maten, E Jan W Ter
2011-01-01
Simulation based on mathematical models plays a major role in computer aided design of integrated circuits (ICs). Decreasing structure sizes, increasing packing densities and driving frequencies require the use of refined mathematical models and the inclusion of secondary, parasitic effects. This leads to very high dimensional problems which nowadays require simulation times too large for the short time-to-market demands in industry. Modern Model Order Reduction (MOR) techniques present a way out of this dilemma by providing surrogate models which keep the main characteristics of the devi...
Post-placement temperature reduction techniques
DEFF Research Database (Denmark)
Liu, Wei; Nannarelli, Alberto
2010-01-01
With technology scaled to the deep submicron era, temperature and temperature gradient have emerged as important design criteria. We propose two post-placement techniques to reduce peak temperature by intelligently allocating whitespace in the hotspots. Both methods are fully compliant with commercial technologies, and can be easily integrated with a state-of-the-art thermal-aware design flow. Experiments on a set of test circuits implemented in STM 65nm technology show that our methods achieve better peak temperature reduction than directly increasing the circuit's area.
Abstract models of transfinite reductions
DEFF Research Database (Denmark)
Bahr, Patrick
2010-01-01
We investigate transfinite reductions in abstract reduction systems. To this end, we study two abstract models for transfinite reductions: a metric model generalising the usual metric approach to infinitary term rewriting and a novel partial order model. For both models we distinguish between...
Time-Weighted Balanced Stochastic Model Reduction
DEFF Research Database (Denmark)
Tahavori, Maryamsadat; Shaker, Hamid Reza
2011-01-01
A new relative error model reduction technique for linear time invariant (LTI) systems is proposed in this paper. Both continuous and discrete time systems can be reduced within this framework. The proposed model reduction method is mainly based upon time-weighted balanced truncation and a recent...
Advances in reduction techniques for tire contact problems
Noor, Ahmed K.
1995-08-01
Some recent developments in reduction techniques, as applied to predicting the tire contact response and evaluating the sensitivity coefficients of the different response quantities, are reviewed. The sensitivity coefficients measure the sensitivity of the contact response to variations in the geometric and material parameters of the tire. The tire is modeled using a two-dimensional laminated anisotropic shell theory with the effects of variation in geometric and material parameters, transverse shear deformation, and geometric nonlinearities included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the stress resultants, the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The elemental arrays are obtained by using a modified two-field, mixed variational principle. For the application of reduction techniques, the tire finite element model is partitioned into two regions. The first region consists of the nodes that are likely to come in contact with the pavement, and the second region includes all the remaining nodes. The reduction technique is used to significantly reduce the degrees of freedom in the second region. The effectiveness of the computational procedure is demonstrated by a numerical example of the frictionless contact response of the space shuttle nose-gear tire, inflated and pressed against a rigid flat surface. Also, the research topics which have high potential for enhancing the effectiveness of reduction techniques are outlined.
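The kind of reduction described here, eliminating the degrees of freedom away from the contact region, is in its simplest static form a Schur complement of the partitioned stiffness matrix. The following is our own minimal sketch of Guyan (static) condensation on a toy spring chain, not the authors' mixed variational formulation:

```python
import numpy as np

def static_condensation(K, f, keep):
    """Guyan (static) condensation of a symmetric stiffness system K u = f:
    keep the DOFs in `keep` (e.g. the contact-region nodes) and eliminate
    the rest exactly via the Schur complement."""
    drop = np.setdiff1d(np.arange(K.shape[0]), keep)
    Kaa = K[np.ix_(keep, keep)]
    Kab = K[np.ix_(keep, drop)]
    Kbb = K[np.ix_(drop, drop)]
    K_red = Kaa - Kab @ np.linalg.solve(Kbb, Kab.T)  # uses symmetry: Kba = Kab.T
    f_red = f[keep] - Kab @ np.linalg.solve(Kbb, f[drop])
    return K_red, f_red

# Toy problem: a chain of 6 unit springs, fixed at the left end, loaded everywhere.
n = 6
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K[-1, -1] = 1.0                  # free right end
f = np.ones(n)
keep = np.array([4, 5])          # "region one": the last two nodes

K_red, f_red = static_condensation(K, f, keep)
u_full = np.linalg.solve(K, f)
u_red = np.linalg.solve(K_red, f_red)
```

For a static problem the condensed solution matches the full solution exactly on the kept DOFs; in dynamics Guyan reduction becomes an approximation, which is what motivates the more elaborate reduction methods reviewed here.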
Reduction of chemical reaction models
Frenklach, Michael
1991-01-01
An attempt is made to reconcile the different terminologies pertaining to reduction of chemical reaction models. The approaches considered include global modeling, response modeling, detailed reduction, chemical lumping, and statistical lumping. The advantages and drawbacks of each of these methods are pointed out.
A Survey of Dimension Reduction Techniques
Energy Technology Data Exchange (ETDEWEB)
Fodor, I K
2002-05-09
Advances in data collection and storage capabilities during the past decades have led to an information overload in most sciences. Researchers working in domains as diverse as engineering, astronomy, biology, remote sensing, economics, and consumer transactions face larger and larger observations and simulations on a daily basis. Such datasets, in contrast with the smaller, more traditional datasets that have been studied extensively in the past, present new challenges in data analysis. Traditional statistical methods break down partly because of the increase in the number of observations, but mostly because of the increase in the number of variables associated with each observation. The dimension of the data is the number of variables that are measured on each observation. High-dimensional datasets present many mathematical challenges as well as some opportunities, and are bound to give rise to new theoretical developments. One of the problems with high-dimensional datasets is that, in many cases, not all the measured variables are "important" for understanding the underlying phenomena of interest. While certain computationally expensive novel methods can construct predictive models with high accuracy from high-dimensional data, it is still of interest in many applications to reduce the dimension of the original data prior to any modeling of the data. In this paper, we describe several dimension reduction methods.
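As a concrete instance of the classical methods such a survey covers, principal component analysis reduces dimension by projecting onto the leading directions of variance. A small sketch on synthetic data (our own example, not from the paper):

```python
import numpy as np

def pca_reduce(X, k):
    """Project n x p data onto its k leading principal components."""
    Xc = X - X.mean(axis=0)                 # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s ** 2 / np.sum(s ** 2)     # fraction of variance per component
    return Xc @ Vt[:k].T, explained[:k]

rng = np.random.default_rng(1)
# 500 observations of 50 variables driven by only 3 latent factors + small noise
Z = rng.normal(size=(500, 3))
X = Z @ rng.normal(size=(3, 50)) + 0.01 * rng.normal(size=(500, 50))
Y, frac = pca_reduce(X, 3)
print(Y.shape)   # three components capture essentially all of the variance
```

This is the favorable case the survey's motivation describes: the 50 measured variables are redundant views of a 3-dimensional phenomenon, so the reduced representation loses almost nothing.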
Structured building model reduction toward parallel simulation
Energy Technology Data Exchange (ETDEWEB)
Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University
2013-08-26
Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.
Chemical model reduction under uncertainty
Najm, Habib
2016-01-05
We outline a strategy for chemical kinetic model reduction under uncertainty. We present highlights of our existing deterministic model reduction strategy, and describe the extension of the formulation to include parametric uncertainty in the detailed mechanism. We discuss the utility of this construction, as applied to hydrocarbon fuel-air kinetics, and the associated use of uncertainty-aware measures of error between predictions from detailed and simplified models.
Outlier Preservation by Dimensionality Reduction Techniques
Onderwater, M.
2015-01-01
Sensors are increasingly part of our daily lives: motion detection, lighting control, and energy consumption all rely on sensors. Combining this information into, for instance, simple and comprehensive graphs can be quite challenging. Dimensionality reduction is often used to address this problem, b
Infrared Imaging Data Reduction Software and Techniques
Sabbey, Chris N.; McMahon, Richard G.; Lewis, James R.; Irwin, Mike J.
2001-01-01
We describe the InfraRed Data Reduction (IRDR) software package, a small ANSI C library of fast image processing routines for automated pipeline reduction of infrared (dithered) observations. We developed the software to satisfy certain design requirements not met in existing packages (e.g., full weight map handling) and to optimize the software for large data sets (non-interactive tasks that are CPU and disk efficient). The software includes stand-alone C programs for tasks such as running sky frame subtraction with object masking, image registration and coaddition with weight maps, dither offset measurement using cross-correlation, and object mask dilation. Although we currently use the software to process data taken with CIRSI (a near-IR mosaic imager), the software is modular and concise and should be easy to adapt/reuse for other work. IRDR is available from anonymous ftp to ftp.ast.cam.ac.uk in pub/sabbey.
Discussion on variance reduction technique for shielding
Energy Technology Data Exchange (ETDEWEB)
Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
As part of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), shielding experiments on type 316 stainless steel (SS316) and on the compound system of SS316 and water have been carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In these analyses, however, enormous working time and computing time were required for determining the weight window parameters, and limitations and complications were encountered when variance reduction by the weight window method of the MCNP code was carried out. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The conditions of calculation in all cases are shown. As results, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray fluxes along the shield depth is reported. There is an optimal importance change: when the importance is increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction is achieved. (K.I.)
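The effect discussed here, how a variance reduction scheme drives down the fractional standard deviation of a deep-penetration tally, can be illustrated with a one-dimensional toy transmission problem. This is our own importance-sampling sketch, not the cell-importance setup of the FNS analysis:

```python
import numpy as np

rng = np.random.default_rng(2)
tau, n = 10.0, 1_000_000
exact = np.exp(-tau)     # P(free path > tau) in mean-free-path units

def fsd(scores):
    """Fractional standard deviation of the mean of a tally."""
    return scores.std(ddof=1) / (scores.mean() * np.sqrt(len(scores)))

# Analog Monte Carlo: sample free paths from Exp(1); almost nothing penetrates,
# so the tally is a string of zeros with a few ones and the FSD is large.
analog = (rng.exponential(1.0, n) > tau).astype(float)

# Variance reduction: bias the path-length distribution toward deep penetration
# (rate 0.2 instead of 1) and carry the likelihood-ratio weight on each history.
rate = 0.2
y = rng.exponential(1.0 / rate, n)
weight = np.exp(-y) / (rate * np.exp(-rate * y))
biased = np.where(y > tau, weight, 0.0)

print(f"analog FSD: {fsd(analog):.3f}, biased FSD: {fsd(biased):.4f}")
```

Both estimators are unbiased for e^(-tau); the biased one simply spends its histories where they matter, which is the same rationale behind weight windows and cell importances.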
Communication Analysis modelling techniques
España, Sergio; Pastor, Óscar; Ruiz, Marcela
2012-01-01
This report describes and illustrates several modelling techniques proposed by Communication Analysis; namely the Communicative Event Diagram, Message Structures and Event Specification Templates. The Communicative Event Diagram is a business process modelling technique that adopts a communicational perspective by focusing on communicative interactions when describing the organizational work practice, instead of focusing on physical activities; at this abstraction level, we refer to business activities as communicative events. Message Structures is a technique based on structured text that allows specifying the messages associated with communicative events. Event Specification Templates are a means to organise the requirements concerning a communicative event. This report can be useful to analysts and business process modellers in general, since, according to our industrial experience, it is possible to apply many Communication Analysis concepts, guidelines and criteria to other business process modelling notation...
Advanced Data Reduction Techniques for MUSE
Weilbacher, Peter M; Roth, Martin M; Boehm, Petra; Pecontal-Rousset, Arlette
2009-01-01
MUSE, a 2nd generation VLT instrument, will become the world's largest integral field spectrograph. It will be an AO assisted instrument which, in a single exposure, covers the wavelength range from 465 to 930 nm with an average resolution of 3000 over a field of view of 1'x1' with 0.2'' spatial sampling. Both the complexity and the rate of the data are a challenge for the data processing of this instrument. We will give an overview of the data processing scheme that has been designed for MUSE. Specifically, we will use only a single resampling step from the raw data to the reduced data product. This allows us to improve data quality, accurately propagate variance, and minimize spreading of artifacts and correlated noise. This approach necessitates changes to the standard way in which reduction steps like wavelength calibration and sky subtraction are carried out, but can be expanded to include combination of multiple exposures.
Belan, Marco
2013-01-01
The background of this work is the problem of reducing aerodynamic turbulent friction drag, which is an important source of energy waste in innumerable technological fields. We develop a theoretical framework aimed at predicting the behaviour of existing drag reduction techniques when used at the large values of the Reynolds number Re which are typical of applications. We focus on one recently proposed and very promising technique, which consists in creating at the wall streamwise-travelling waves of spanwise velocity. A perturbation analysis of the Navier-Stokes equations that govern the fluid motion is carried out for the simplest wall-bounded flow geometry, i.e. the plane channel flow. The streamwise base flow is perturbed by the spanwise time-varying base flow induced by the travelling waves. An asymptotic expansion is then carried out with respect to the velocity amplitude of the travelling wave. The analysis, although based on several assumptions, leads to predictions of drag reduction that agree well with the measure...
Noise reduction techniques for the restoration of musical recordings
Cappe, Olivier
Short-time spectral attenuation techniques are evaluated using a simplified model of standard noise suppression rules and elementary test signals. Signal distortions induced by the restoration process are evaluated analytically, and their audibility is addressed by use of classic psychoacoustics results. Phenomena observed experimentally in previous studies, such as the modification of timbre, the appearance of modulations, and the spreading of transients, are brought to light. Drawing on these results, a noise reduction technique intended for enhancing musical signals is described. In the first step, the noisy signal is analyzed by use of a medium-frequency-resolution short-time transform. The restoration then takes place in each subband in two different ways according to the nature of the subband signal: the processing is carried out block by block when steady signal components are detected, or locally otherwise. This approach was successfully applied to several musical recordings, yielding promising results.
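A minimal version of the short-time spectral attenuation analyzed here can be sketched as follows. This is our own numpy illustration of a generic power-subtraction gain rule with a spectral floor, not the paper's two-step restoration; frame length, hop and floor are arbitrary choices:

```python
import numpy as np

def noise_power_spectrum(noise, frame, hop, win):
    """Average periodogram of a noise-only segment."""
    frames = [noise[s:s + frame] * win
              for s in range(0, len(noise) - frame + 1, hop)]
    return np.mean([np.abs(np.fft.rfft(f)) ** 2 for f in frames], axis=0)

def stsa_denoise(x, noise_ps, frame=256, hop=128, floor=0.1):
    """Short-time spectral attenuation: per-bin power-subtraction gain,
    limited below by a spectral floor, with overlap-add resynthesis."""
    win = np.hanning(frame)
    y = np.zeros(len(x))
    norm = np.zeros(len(x))
    for s in range(0, len(x) - frame + 1, hop):
        spec = np.fft.rfft(x[s:s + frame] * win)
        p = np.abs(spec) ** 2
        gain = np.sqrt(np.maximum(1.0 - noise_ps / np.maximum(p, 1e-12),
                                  floor ** 2))
        y[s:s + frame] += np.fft.irfft(gain * spec) * win
        norm[s:s + frame] += win ** 2
    return y / np.maximum(norm, 1e-12)

rng = np.random.default_rng(3)
t = np.arange(16384)
clean = np.sin(2 * np.pi * 0.05 * t)           # a steady "musical" tone
noisy = clean + 0.5 * rng.normal(size=t.size)
noise_ps = noise_power_spectrum(0.5 * rng.normal(size=t.size),
                                256, 128, np.hanning(256))
denoised = stsa_denoise(noisy, noise_ps)
```

The spectral floor is what limits the "musical noise" modulations the paper analyzes: setting it to zero attenuates more noise but makes the residual fluctuate audibly from frame to frame.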
Model reduction of parametrized systems
Ohlberger, Mario; Patera, Anthony; Rozza, Gianluigi; Urban, Karsten
2017-01-01
The special volume offers a global guide to new concepts and approaches concerning the following topics: reduced basis methods, proper orthogonal decomposition, proper generalized decomposition, approximation theory related to model reduction, learning theory and compressed sensing, stochastic and high-dimensional problems, system-theoretic methods, nonlinear model reduction, reduction of coupled problems/multiphysics, optimization and optimal control, state estimation and control, reduced order models and domain decomposition methods, Krylov-subspace and interpolatory methods, and applications to real industrial and complex problems. The book represents the state of the art in the development of reduced order methods. It contains contributions from internationally respected experts, guaranteeing a wide range of expertise and topics. Further, it reflects an important effort, carried out over the last 12 years, to build a growing research community in this field. Though not a textbook, some of the chapters ca...
FPGA Implementation of ADPLL with Ripple Reduction Techniques
Directory of Open Access Journals (Sweden)
Manoj Kumar
2012-05-01
In this paper an FPGA implementation of an ADPLL using Verilog is presented. The ADPLL with ripple reduction techniques is also simulated and implemented on FPGA. For simulation, Xilinx ISE 10.1 CAD is used; a Virtex-5 FPGA (Field Programmable Gate Array) is used for implementation. The ADPLL performance improvement when using ripple reduction techniques is also discussed. The ADPLL is designed at a center frequency of 100 kHz. The frequency range of the ADPLL is 0 kHz to 199 kHz, but when it is implemented with ripple reduction techniques, the observed frequency range is 11 kHz to 216 kHz.
Experimental Investigation of Tunnel Discharge Ability by Using Drag Reduction Techniques
Directory of Open Access Journals (Sweden)
Ying-kui WANG
2010-06-01
Experiments in an open flume model and in spillway tunnel models were carried out using drag reduction techniques. The drag reduction experiments in the open channel model adopted two techniques: polymer addition and coating. The drag reduction effects of a polyacrylamide (PAM) solution and of a dimethyl silicone oil coating were studied in the flume model experiments, and the results were satisfactory. Experiments were then carried out in the model of a hydropower station with the second largest dam in China. In order to reduce the resistance, the spillway tunnel models were coated inside with dimethyl silicone oil. This is the first time the drag reduction technique has been applied in a large hydraulic model. The experimental results show that the coating technique can effectively increase the flood discharge capacity. The outlet velocity and the jet trajectory distance were also increased, which is beneficial to the energy dissipation of the spillway tunnel.
Treur, M.; Postma, M.
2014-01-01
Objectives: Patient-level simulation models provide increased flexibility to overcome the limitations of cohort-based approaches in health-economic analysis. However, the computational requirements of reaching convergence are a notorious barrier. The objective was to assess the impact of using quasi-mont...
Kavi, K. M.
1984-01-01
There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.
Techniques for Leakage Power Reduction in Nanoscale Circuits: A Survey
DEFF Research Database (Denmark)
Liu, Wei
This report surveys progress in the field of designing low-power, especially low-leakage, CMOS circuits in the deep submicron era. The leakage mechanisms and various recently proposed run-time leakage reduction techniques are presented. Two designs, from Cadence and Sony respectively, which represent current industrial application of these techniques, are also illustrated.
Cohomological reduction of sigma models
Energy Technology Data Exchange (ETDEWEB)
Candu, Constantin; Mitev, Vladimir; Schomerus, Volker [DESY, Hamburg (Germany). Theory Group; Creutzig, Thomas [North Carolina Univ., Chapel Hill, NC (United States). Dept. of Physics and Astronomy
2010-01-15
This article studies some features of quantum field theories with internal supersymmetry, focusing mainly on 2-dimensional non-linear sigma models which take values in a coset superspace. It is discussed how BRST operators from the target space supersymmetry algebra can be used to identify subsectors which are often simpler than the original model and may allow for an explicit computation of correlation functions. After an extensive discussion of the general reduction scheme, we present a number of interesting examples, including symmetric superspaces G/G^{Z_2} and coset superspaces of the form G/G^{Z_4}. (orig.)
Chemical model reduction under uncertainty
Malpica Galassi, Riccardo
2017-03-06
A general strategy for analysis and reduction of uncertain chemical kinetic models is presented, and its utility is illustrated in the context of ignition of hydrocarbon fuel–air mixtures. The strategy is based on a deterministic analysis and reduction method which employs computational singular perturbation analysis to generate simplified kinetic mechanisms, starting from a detailed reference mechanism. We model uncertain quantities in the reference mechanism, namely the Arrhenius rate parameters, as random variables with prescribed uncertainty factors. We propagate this uncertainty to obtain the probability of inclusion of each reaction in the simplified mechanism. We propose probabilistic error measures to compare predictions from the uncertain reference and simplified models, based on the comparison of the uncertain dynamics of the state variables, where the mixture entropy is chosen as progress variable. We employ the construction for the simplification of an uncertain mechanism in an n-butane–air mixture homogeneous ignition case, where a 176-species, 1111-reactions detailed kinetic model for the oxidation of n-butane is used with uncertainty factors assigned to each Arrhenius rate pre-exponential coefficient. This illustration is employed to highlight the utility of the construction, and the performance of a family of simplified models produced depending on chosen thresholds on importance and marginal probabilities of the reactions.
Case report macroglossia: Review and application of tongue reduction technique
Directory of Open Access Journals (Sweden)
Bilommi R. Irhamni
2015-05-01
Congenital macroglossia is an uncommon condition. The enlargement can be true enlargement, as seen in vascular malformations or muscular enlargement. It may cause significant symptoms in children such as sleep apnea, respiratory distress, drooling, difficulty in swallowing and dysarthria. Long-standing macroglossia leads to an anterior open bite deformity, mucosal changes, exposure to potential trauma, an increased incidence of upper respiratory tract infections and failure to thrive. Tongue movements, sounds and speech articulation may also be affected. It is important to achieve uniform global reduction of the enlarged tongue for functional as well as esthetic reasons. The multiple techniques advocated for tongue reduction reveal that an ideal procedure has yet to emerge. In our case report we describe a modified technique for global reduction of the tongue that preserves taste, sensation and mobility, suitable for cases of enlargement of the tongue as in muscular hypertrophy. It can be used for repeat reductions without jeopardizing the mobility and sensibility of the tongue.
Parameter Estimation, Model Reduction and Quantum Filtering
Chase, Bradley A
2009-01-01
This dissertation explores the topics of parameter estimation and model reduction in the context of quantum filtering. Chapters 2 and 3 provide a review of classical and quantum probability theory, stochastic calculus and filtering. Chapter 4 studies the problem of quantum parameter estimation and introduces the quantum particle filter as a practical computational method for parameter estimation via continuous measurement. Chapter 5 applies these techniques in magnetometry and studies the estimator's uncertainty scalings in a double-pass atomic magnetometer. Chapter 6 presents an efficient feedback controller for continuous-time quantum error correction. Chapter 7 presents an exact model of symmetric processes of collective qubit systems.
Power-reduction techniques for data-center storage systems
Bostoen, Tom; Mullender, Sape; Berbers, Yolande
2013-01-01
As data-intensive, network-based applications proliferate, the power consumed by the data-center storage subsystem surges. This survey summarizes, organizes, and integrates a decade of research on power-aware enterprise storage systems. All of the existing power-reduction techniques are classified a
MC CDMA PAPR Reduction Techniques using Discrete Transforms and Companding
Sarala, B
2011-01-01
High Peak to Average Power Ratio (PAPR) of the transmitted signal is a serious problem in multicarrier modulation systems. In this paper a new technique for reducing the PAPR of Multi-Carrier Code Division Multiple Access (MC CDMA) signals is proposed, based on combining a discrete transform, either the Discrete Cosine Transform (DCT) or the multi-resolution Discrete Wavelet Transform (DWT), with companding. It is analyzed and implemented using MATLAB. Simulation results for the reduction in PAPR and the Power Spectral Density (PSD) of MC CDMA with and without companding are compared with those of MC CDMA with DCT and companding, and with DWT and companding. The proposed technique makes use of the multi-resolution DWT in combination with companding in order to achieve a very substantial reduction in the PAPR of the MC CDMA signal.
Model Reduction of Nonlinear Fire Dynamics Models
Lattimer, Alan Martin
2016-01-01
Due to the complexity, multi-scale, and multi-physics nature of the mathematical models for fires, current numerical models require too much computational effort to be useful in design and real-time decision making, especially when dealing with fires over large domains. To reduce the computational time while retaining the complexity of the domain and physics, our research has focused on several reduced-order modeling techniques. Our contributions are improving wildland fire reduced-order mod...
Model reduction for Space Station Freedom
Williams, Trevor
1992-01-01
Model reduction is an important practical problem in the control of flexible spacecraft, and a considerable amount of work has been carried out on this topic. Two of the best known methods developed are modal truncation and internal balancing. Modal truncation is simple to implement but can give poor results when the structure possesses clustered natural frequencies, as often occurs in practice. Balancing avoids this problem but has the disadvantages of high computational cost, possible numerical sensitivity problems, and no physical interpretation for the resulting balanced 'modes'. The purpose of this work is to examine the performance of the subsystem balancing technique developed by the investigator when tested on a realistic flexible space structure, in this case a model of the Permanently Manned Configuration (PMC) of Space Station Freedom. This method retains the desirable properties of standard balancing while overcoming the three difficulties listed above. It achieves this by first decomposing the structural model into subsystems of highly correlated modes. Each subsystem is approximately uncorrelated from all others, so balancing them separately and then combining yields comparable results to balancing the entire structure directly. The operation count reduction obtained by the new technique is considerable: a factor of roughly r^2 if the system decomposes into r equal subsystems. Numerical accuracy is also improved significantly, as the matrices being operated on are of reduced dimension, and the modes of the reduced-order model now have a clear physical interpretation; they are, to first order, linear combinations of repeated-frequency modes.
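Standard internal balancing, the baseline the subsystem method builds on, can be sketched in a few lines. This is a generic textbook implementation on a toy stable system, not the investigator's subsystem algorithm; the Lyapunov equations are solved by the dense Kronecker formulation, which is only viable for small models:

```python
import numpy as np

def lyap(A, Q):
    """Solve A X + X A^T + Q = 0 (dense Kronecker formulation)."""
    n = A.shape[0]
    M = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(M, -Q.reshape(-1)).reshape(n, n)

def balanced_truncation(A, B, C, k):
    """Reduce a stable LTI system (A, B, C) to order k by internal balancing."""
    Wc = lyap(A, B @ B.T)            # controllability gramian
    Wo = lyap(A.T, C.T @ C)          # observability gramian
    Wo = (Wo + Wo.T) / 2
    L = np.linalg.cholesky((Wc + Wc.T) / 2)
    s2, U = np.linalg.eigh(L.T @ Wo @ L)
    order = np.argsort(s2)[::-1]
    s = np.sqrt(np.maximum(s2[order], 0.0))   # Hankel singular values
    U = U[:, order]
    T = (L @ U) / np.sqrt(s)                  # balancing transformation
    Ti = (np.sqrt(s)[:, None] * U.T) @ np.linalg.inv(L)
    Ab, Bb, Cb = Ti @ A @ T, Ti @ B, C @ T
    return Ab[:k, :k], Bb[:k], Cb[:, :k], s

# Toy model: two slow modes plus two fast, weakly contributing modes.
A = np.diag([-1.0, -2.0, -50.0, -60.0])
B = np.ones((4, 1))
C = np.ones((1, 4))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, k=2)

dc_full = (C @ np.linalg.solve(-A, B))[0, 0]
dc_red = (Cr @ np.linalg.solve(-Ar, Br))[0, 0]
```

Discarding the states with small Hankel singular values changes the transfer function by at most twice the sum of the discarded values, so the reduced model's DC gain stays close to the full one; the O(n^3) cost per balancing step is exactly what the subsystem decomposition described above attacks.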
Energy Technology Data Exchange (ETDEWEB)
Khawaja, Ranish Deedar Ali, E-mail: rkhawaja@mgh.harvard.edu [MGH Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Singh, Sarabjeet; Blake, Michael; Harisinghani, Mukesh; Choy, Gary; Karosmangulu, Ali; Padole, Atul; Do, Synho [MGH Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Brown, Kevin; Thompson, Richard; Morton, Thomas; Raihani, Nilgoun [CT Research and Advanced Development, Philips Healthcare, Cleveland, OH (United States); Koehler, Thomas [Philips Technologie GmbH, Innovative Technologies, Hamburg (Germany); Kalra, Mannudeep K. [MGH Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States)
2015-01-15
Highlights: • Limited abdominal CT indications can be performed at a size-specific dose estimate (SSDE) of 1.5 mGy (∼0.9 mSv) in smaller patients (BMI less than or equal to 25 kg/m²) using a knowledge-based Iterative Model Reconstruction (IMR) technique. • Evaluation of liver tumors and pathologies is unacceptable at this reduced dose with the IMR technique, especially in patients with a BMI greater than 25 kg/m². • IMR body soft tissue and routine settings perform substantially better than the IMR sharp plus setting in reduced dose CT images. • At an SSDE of 1.5 mGy, objective image noise in reduced dose IMR images is 8–56% lower than in standard dose FBP images, with the lowest image noise in IMR body-soft tissue images. - Abstract: Purpose: To assess lesion detection and image quality parameters of knowledge-based Iterative Model Reconstruction (IMR) in reduced dose (RD) abdominal CT examinations. Materials and methods: This IRB-approved prospective study included 82 abdominal CT examinations performed for 41 consecutive patients (mean age, 62 ± 12 years; F:M 28:13) who underwent a RD CT (SSDE, 1.5 mGy ± 0.4 [∼0.9 mSv] at 120 kV with 17–20 mAs/slice) immediately after their standard dose (SD) CT exam (10 mGy ± 3 [∼6 mSv] at 120 kV with automatic exposure control) on a 256-slice MDCT scanner (iCT, Philips Healthcare). SD data were reconstructed using filtered back projection (FBP). RD data were reconstructed with FBP and IMR. Four radiologists used a five-point scale (1 = image quality better than SD CT to 5 = image quality unacceptable) to assess both subjective image quality and artifacts. Lesions were first detected on RD FBP images. RD IMR and RD FBP images were then compared side-by-side to SD-FBP images in an independent, randomized and blinded fashion. Friedman's test and intraclass correlation coefficient were used for data analysis. Objective measurements included image noise and attenuation as well as noise spectral density (NSD) curves
Reduction clitoroplasty: a technique for debulking the enlarged clitoris.
Oyama, Ian A; Steinberg, Adam C; Holzberg, Adam S; Maccarone, Joseph L
2004-12-01
Clitoral reduction, especially in an adult, is a rare procedure which often leaves the glans clitoris without the capacity for tactile sensation. We present the case of a 34-year-old woman with symptomatic clitoromegaly since puberty who underwent a clitoral reduction procedure designed to preserve the neurovascular supply of the glans clitoris. The surgical technique presented here removes the corpora cavernosa of the clitoris, but conserves important neurovascular attachments. While this procedure was done on an adult, it could just as easily be performed on children or adolescents with clitoromegaly, typically the at-risk group for this condition.
Power Backoff Reduction Techniques for Generalized Multicarrier Waveforms
Directory of Open Access Journals (Sweden)
Wesołowski K
2008-01-01
Full Text Available Abstract Amplification of generalized multicarrier (GMC signals by high-power amplifiers (HPAs before transmission can result in undesirable out-of-band spectral components, necessitating power backoff, and low HPA efficiency. We evaluate variations of several peak-to-average power ratio (PAPR reduction and HPA linearization techniques which were previously proposed for OFDM signals. Our main emphasis is on their applicability to the more general class of GMC signals, including serial modulation and DFT-precoded OFDM. Required power backoff is shown to depend on the type of signal transmitted, the specific HPA nonlinearity characteristic, and the spectrum mask which is imposed to limit adjacent channel interference. PAPR reduction and HPA linearization techniques are shown to be very effective when combined.
Power Backoff Reduction Techniques for Generalized Multicarrier Waveforms
Directory of Open Access Journals (Sweden)
D. Falconer
2007-12-01
Full Text Available Amplification of generalized multicarrier (GMC signals by high-power amplifiers (HPAs before transmission can result in undesirable out-of-band spectral components, necessitating power backoff, and low HPA efficiency. We evaluate variations of several peak-to-average power ratio (PAPR reduction and HPA linearization techniques which were previously proposed for OFDM signals. Our main emphasis is on their applicability to the more general class of GMC signals, including serial modulation and DFT-precoded OFDM. Required power backoff is shown to depend on the type of signal transmitted, the specific HPA nonlinearity characteristic, and the spectrum mask which is imposed to limit adjacent channel interference. PAPR reduction and HPA linearization techniques are shown to be very effective when combined.
On the selection of dimension reduction techniques for scientific applications
Energy Technology Data Exchange (ETDEWEB)
Fan, Y J; Kamath, C
2012-02-17
Many dimension reduction methods have been proposed to discover the intrinsic, lower dimensional structure of a high-dimensional dataset. However, determining critical features in datasets that consist of a large number of features is still a challenge. In this paper, through a series of carefully designed experiments on real-world datasets, we investigate the performance of different dimension reduction techniques, ranging from feature subset selection to methods that transform the features into a lower dimensional space. We also discuss methods that calculate the intrinsic dimensionality of a dataset in order to understand the reduced dimension. Using several evaluation strategies, we show how these different methods can provide useful insights into the data. These comparisons enable us to provide guidance to a user on the selection of a technique for their dataset.
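To make the notion of "intrinsic dimensionality" concrete, a hedged sketch using PCA via SVD on synthetic data (PCA is only one of the transform methods such a survey covers; the data model and the 99% variance threshold are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic data: 3 latent factors linearly embedded in 20 observed features
latent = rng.normal(size=(500, 3))
mixing = rng.normal(size=(3, 20))
X = latent @ mixing + 0.01*rng.normal(size=(500, 20))  # small measurement noise

Xc = X - X.mean(axis=0)                      # center before PCA
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = np.cumsum(s**2) / np.sum(s**2)   # cumulative explained variance

# the intrinsic dimensionality shows up where the explained variance saturates
k = int(np.searchsorted(explained, 0.99)) + 1
print(k)  # → 3: the three latent factors
```

Feature subset selection, the other family the abstract mentions, would instead rank the original 20 columns rather than form linear combinations of them, which keeps the result interpretable in the original feature space.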
Novel optical technique for 2D graphene reduction
Tharwat, Christen; Swillam, Mohamed A.; Badr, Y.; Ahmed, Samah M.; Bishay, I. K.; Sadallah, F. A.; Elsaid, Enayat A.
2017-02-01
Engineering a low-cost graphene-based opto-electronic device is a challenging task to accomplish via a single-step fabrication process. Recently, scientists have focused on the development and use of laser-based methods for efficient reduction of graphene oxide (GO) films at low temperature. Our proposed technique utilizes a laser beam for non-thermal reduction of solution-processed GO layers on film substrates. Compared to other reduction techniques, it is a single-step, facile, time-saving, non-contact operation that is environment-friendly, patternable, and low cost, and it can be performed at room temperature in ambient atmosphere without affecting the integrity of either the physical properties or the lattice of graphene. Laser-scribed reduced graphene (LSRG) is shown to be successfully produced and selectively patterned by direct laser irradiation of graphite oxide films under ambient conditions. In addition, by varying the laser's intensity, power, and irradiation treatments, the electrical properties of LSRG can be accurately tuned over five orders of magnitude of conductivity, a feature that has proven difficult with other methods. This reliable, scalable approach is mask-free, does not require expensive chemical reducing agents, and can be performed at ambient conditions starting from aqueous graphene oxide flakes. The non-thermal nature of this method, combined with its scalability and simplicity, makes it very attractive for the manufacturing of future-generation large-volume graphene-based optoelectronics.
Mathematical modelling techniques
Aris, Rutherford
1995-01-01
"Engaging, elegantly written." - Applied Mathematical Modelling. Mathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models. The author begins with a discussion of the term "model," followed by clearly presented examples of the different types of mode
Error reduction techniques for measuring long synchrotron mirrors
Energy Technology Data Exchange (ETDEWEB)
Irick, S.
1998-07-01
Many instruments and techniques are used for measuring long mirror surfaces. A Fizeau interferometer may be used to measure mirrors much longer than the interferometer aperture size by using grazing incidence at the mirror surface and analyzing the light reflected from a flat end mirror. Advantages of this technique are data acquisition speed and use of a common instrument. Disadvantages are reduced sampling interval, uncertainty of tangential position, and sagittal/tangential aspect ratio other than unity. Also, deep aspheric surfaces cannot be measured on a Fizeau interferometer without a specially made fringe nulling holographic plate. Other scanning instruments have been developed for measuring height, slope, or curvature profiles of the surface, but lack accuracy for very long scans required for X-ray synchrotron mirrors. The Long Trace Profiler (LTP) was developed specifically for long x-ray mirror measurement, and still outperforms other instruments, especially for aspheres. Thus, this paper focuses on error reduction techniques for the LTP.
Energy Technology Data Exchange (ETDEWEB)
Khawaja, Ranish Deedar Ali; Singh, Sarabjeet; Otrakji, Alexi; Padole, Atul; Lim, Ruth; Nimkin, Katherine; Westra, Sjirk; Kalra, Mannudeep K.; Gee, Michael S. [MGH Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA (United States)
2015-07-15
Dose reduction in children undergoing CT scanning is an important priority for the radiology community and public at large. Drawbacks of radiation reduction are increased image noise and artifacts, which can affect image interpretation. Iterative reconstruction techniques have been developed to reduce noise and artifacts from reduced-dose CT examinations, although reconstruction algorithm, magnitude of dose reduction and effects on image quality vary. We review the reconstruction principles, radiation dose potential and effects on image quality of several iterative reconstruction techniques commonly used in clinical settings, including 3-D adaptive iterative dose reduction (AIDR-3D), adaptive statistical iterative reconstruction (ASIR), iDose, sinogram-affirmed iterative reconstruction (SAFIRE) and model-based iterative reconstruction (MBIR). We also discuss clinical applications of iterative reconstruction techniques in pediatric abdominal CT. (orig.)
Star formation: Submillimeter observations and data reduction techniques
Attard, Michael
2010-12-01
The process of star formation is key to astrophysics and its understanding remains a fundamental problem. The following chapters describe recent work on this subject with instrumentation at the Caltech Submillimeter Observatory. Chapter 1 provides an introduction to this thesis. Chapter 2 describes a new data reduction technique for dual-array polarimeters. This technique is meant to address a potential problem with these instruments; artificial polarization signals are introduced into the data when misalignments between the subarrays and pointing drifts are present during the data acquisition process. The correction algorithm presented is meant to treat for this problem, and has been tested using simulated and actual data. The results indicate that this approach is effective at removing up to 60% of the artificial polarization. Chapter 3 discusses an analysis of the low-mass star forming region NGC 1333 IRAS 4 involving SHARP 350 μm polarimetry and HCN J=4→3 emission spectra. The polarimetry indicates a uniform magnetic field morphology over a 20" radius from the peak continuum flux of IRAS 4A, in agreement with models of magnetically supported cloud collapse. The magnetic field morphology around IRAS 4B appears to be quite distinct however, with indications of depolarization observed towards the peak flux of this source. Inverse P-Cygni profiles are observed in the HCN J=4→3 line spectra towards IRAS 4A, providing a clear indication of infall gas motions. Taken together, the evidence gathered appears to support the scenario that IRAS 4A is a cloud core in a critical state of support against gravitational collapse. Chapter 4 covers SHARP 450 μm polarimetry obtained over the high-mass star forming region NGC 6334 I(N). The "Method 2" approach described in a recent paper by G. Novak and collaborators is applied here to combine our data with results from the Hertz and SPARO polarimeters. This is done in order to estimate the intrinsic angular dispersion
Volume reduction philosophy and techniques in use or planned
Energy Technology Data Exchange (ETDEWEB)
Row, T.H.
1984-01-01
Siting and development of nuclear waste disposal facilities is an expensive task. In the private sector, such developments face siting and licensing issues, public intervention, and technology challenges. The United States Department of Energy (DOE) faces similar challenges in the management of waste generated by the research and production facilities. Volume reduction can be used to lengthen the service life of existing facilities. A wide variety of volume reduction techniques are applied to different waste forms. Compressible waste is compacted into drums, cardboard and metal boxes, and the loaded drums are supercompacted into smaller units. Large metallic items are size-reduced and melted for recycle or sent to shallow land burial. Anaerobic digestion is a process that can reduce cellulosic and animal wastes by 80%. Incinerators of all types have been investigated for application to nuclear wastes and a number of installations operate or are constructing units for low-level and transuranic solid and liquid combustibles. Technology may help solve many of the problems in volume reduction, but the human element also has an important part in solving the puzzle. Aggressive educational campaigns at two sites have proved very successful in reducing waste generation. This overview of volume reduction is intended to transfer the current information from many DOE facilities. 44 references, 85 figures, 5 tables.
Directory of Open Access Journals (Sweden)
S. Gimeno García
2012-02-01
Full Text Available Handling complexity to the smallest detail in atmospheric radiative transfer models is unfeasible in practice. On the one hand, the properties of the interacting medium, i.e. the atmosphere and the surface, are only available at a limited spatial resolution. On the other hand, the computational cost of accurate radiation models accounting for three-dimensional heterogeneous media is prohibitive for some applications, especially for climate modeling and operational remote sensing algorithms. Hence, it is still common practice to use simplified models for atmospheric radiation applications.
Three-dimensional radiation models can deal with much more complexity than one-dimensional ones, providing a more accurate solution of the radiative transfer. One-dimensional models, in turn, introduce biases into the radiation results.
With the help of stochastic models that consider the multi-fractal nature of clouds, it is possible to scale cloud properties given at a coarse spatial resolution down to a finer resolution. Performing the radiative transfer within the spatially fine-resolved cloud fields noticeably helps to improve the radiation results.
In the framework of this paper, we aim at characterizing cloud heterogeneity effects on radiances and broadband flux densities, namely: the errors due to unresolved variability (the so-called plane parallel homogeneous, PPH, bias) and the errors due to the neglect of transversal photon displacements (the independent pixel approximation, IPA, bias). First, we study the effect of the missing cloud variability on reflectivities. We will show that the generation of subscale variability by means of stochastic methods greatly reduces or nearly eliminates the reflectivity biases. Secondly, three-dimensional broadband flux densities in the presence of realistic inhomogeneous cloud fields sampled at fine spatial resolutions are calculated and compared to their one-dimensional counterparts at coarser
Model Reduction of Switched Systems Based on Switching Generalized Gramians
DEFF Research Database (Denmark)
Shaker, Hamid Reza; Wisniewski, Rafal
2012-01-01
In this paper, a general method for model order reduction of discrete-time switched linear systems is presented. The proposed technique uses switching generalized gramians. It is shown that several classical reduction methods can be developed into the generalized gramian framework for the model r......-Galerkin projection is constructed instead of the similarity transform approach for reduction. It is proven that the proposed reduction framework preserves the stability of the original switched system. The performance of the method is illustrated by numerical examples....
Pediatric interventional radiology and dose-reduction techniques.
Johnson, Craig; Martin-Carreras, Teresa; Rabinowitz, Deborah
2014-08-01
The pediatric interventional radiology community has worked diligently in recent years through education and the use of technology to incorporate numerous dose-reduction strategies. This article seeks to describe different strategies where we can significantly lower the dose to the pediatric patient undergoing a diagnostic or therapeutic image-guided procedure and, subsequently, lower the dose several fold to the staff and ourselves in the process. These strategies start with patient selection, dose awareness and monitoring, shielding, fluoroscopic techniques, and collimation. Advanced features such as cone-beam technology, dose-reduction image processing algorithms, overlay road mapping, and volumetric cross-sectional hybrid imaging are also discussed. Copyright © 2014 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
S. Gimeno García
2012-09-01
Full Text Available Handling complexity to the smallest detail in atmospheric radiative transfer models is unfeasible in practice. On the one hand, the properties of the interacting medium, i.e., the atmosphere and the surface, are only available at a limited spatial resolution. On the other hand, the computational cost of accurate radiation models accounting for three-dimensional heterogeneous media is prohibitive for some applications, especially for climate modelling and operational remote-sensing algorithms. Hence, it is still common practice to use simplified models for atmospheric radiation applications.
Three-dimensional radiation models can deal with complex scenarios providing an accurate solution to the radiative transfer. In contrast, one-dimensional models are computationally more efficient, but introduce biases to the radiation results.
With the help of stochastic models that consider the multi-fractal nature of clouds, it is possible to scale cloud properties given at a coarse spatial resolution down to a higher resolution. Performing the radiative transfer within the cloud fields at higher spatial resolution noticeably helps to improve the radiation results.
We present a new Monte Carlo model, MoCaRT, that computes the radiative transfer in three-dimensional inhomogeneous atmospheres. The MoCaRT model is validated by comparison with the consensus results of the Intercomparison of Three-Dimensional Radiation Codes (I3RC) project.
In the framework of this paper, we aim at characterising cloud heterogeneity effects on radiances and broadband fluxes, namely: the errors due to unresolved variability (the so-called plane parallel homogeneous, PPH, bias) and the errors due to the neglect of transversal photon displacements (the independent pixel approximation, IPA, bias). First, we study the effect of the missing cloud variability on reflectivities. We will show that the generation of subscale variability by means of stochastic
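A hedged numerical sketch of the PPH bias described in this abstract: averaging a heterogeneous cloud field before applying a nonlinear reflectivity law overestimates the mean reflectivity (the two-stream-style reflectivity formula and the lognormal optical-depth distribution are illustrative assumptions, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(3)

def reflectivity(tau, g=0.85):
    # two-stream-style saturating dependence of cloud albedo on optical depth
    return tau / (tau + 2/(1 - g))

# heterogeneous cloud field: lognormal optical depths with median 10
tau = rng.lognormal(mean=np.log(10.0), sigma=0.8, size=100_000)

r_true = reflectivity(tau).mean()   # resolve the variability, then average
r_pph = reflectivity(tau.mean())    # average first (plane-parallel homogeneous)
print(r_pph > r_true)               # PPH overestimates mean reflectivity
```

Because the reflectivity law is concave in optical depth, Jensen's inequality guarantees the PPH value exceeds the true average, which is exactly the bias that stochastic subscale variability generation is meant to remove.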
Mellish, Robert W.; Coder, David W.
1988-11-01
A parametric study based on incompressible, irrotational flow theory was conducted to evaluate the effect of strut support interference on the flow field about a model. The use of suction and blowing to correct the support interference is also investigated. Two struts were considered for numerical analysis, a small-chord strut of constant cross section and a large-chord strut of varying cross section, both attached to the tip of a model submarine sail. For the present study the SSN 21 class and SSN 688 class sail geometries are utilized. To assess the level of strut interference, the pressure fields on the surface of the sail and a flat representation of the hull were evaluated as follows. The flow field was computed for the model geometry without a strut attached (baseline configuration) and the results are compared with identical calculations for the model-strut combination. The calculated results are presented graphically as contour plots of the pressure coefficient (Cp). Contour plots of delta Cp (the difference between baseline and sail-strut results) are utilized to identify regions of principal strut interference. Finally, suction and blowing were applied to minimize strut interference in areas considered important to the hull boundary layer and sail flow that would affect wake measurements.
Enamel Reduction Techniques in Orthodontics: A Literature Review
Livas, Christos; Jongsma, Albert Cornelis; Ren, Yijin
2013-01-01
Artificial abrasion of interproximal surfaces has been described for almost seventy years as an orthodontic intervention for the achievement and maintenance of an ideal treatment outcome. A variety of terms and approaches have been introduced throughout this period, reflecting growing clinician interest. Nevertheless, the widespread recognition of the enamel stripping technique was initiated by the advent of bonded orthodontic attachments and a two-article series by Sheridan in the 1980s. Since then, experimental and clinical research has focused on the investigation of instrumentation efficacy and the potential iatrogenic sequelae of interproximal stripping. This review discusses the evolution, technical aspects and trends of enamel reduction procedures as documented in the literature. PMID:24265652
Techniques for Reduction of the Parasitic Inductance of Decoupling Capacitors
Bernal, J.; Freire, M. J.
2016-05-01
The decoupling effectiveness of decoupling capacitors is mainly limited by their parasitic inductance. In this work we propose new techniques for placing surface-mount decoupling capacitors on a printed circuit board that use mutual-inductance effects between currents on adjacent capacitors to significantly reduce the high-frequency impedance seen at the input of the set of decoupling capacitors. This makes it possible to keep the impedance of the power distribution network below the target impedance with a reduced number of decoupling capacitors, thus reducing cost and, more importantly in aerospace applications, saving space on the board. The technique does not require complex prior calculations or experimental adjustments, and consequently it has no negative impact on the design time of practical circuits.
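For context, a small sketch of why parasitic inductance dominates a decoupling network at high frequency. This shows only the baseline effect of paralleling n identical capacitors (component values are assumed), not the mutual-inductance placement technique the paper proposes:

```python
import numpy as np

f = np.logspace(6, 9, 400)           # 1 MHz .. 1 GHz
w = 2*np.pi*f
C, ESL, ESR = 100e-9, 1e-9, 10e-3    # assumed: 100 nF cap, 1 nH ESL, 10 mΩ ESR

def z_branch(n):
    # n identical capacitors in parallel: C scales up, ESL and ESR scale down,
    # so the whole impedance curve scales by 1/n
    return ESR/n + 1j*(w*ESL/n - 1/(w*C*n))

# above the series self-resonance the impedance is set by the parasitic
# inductance; quadrupling the count quarters the impedance at 1 GHz
print(np.abs(z_branch(1))[-1], np.abs(z_branch(4))[-1])
```

Since brute-force paralleling costs parts and board area, techniques that lower the effective inductance per capacitor, e.g. via mutual-inductance cancellation between adjacent parts as in this paper, achieve the same target impedance with fewer components.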
Relative Error Model Reduction via Time-Weighted Balanced Stochastic Singular Perturbation
DEFF Research Database (Denmark)
Tahavori, Maryamsadat; Shaker, Hamid Reza
2012-01-01
A new mixed method for relative error model reduction of linear time invariant (LTI) systems is proposed in this paper. This order reduction technique is mainly based upon time-weighted balanced stochastic model reduction method and singular perturbation model reduction technique. Compared...... by using the concept and properties of the reciprocal systems. The results are further illustrated by two practical numerical examples: a model of CD player and a model of the atmospheric storm track....
Janssen, G. J. M.; Hulst, R. V. D.; Nennie, E.
1988-03-01
Radar cross section (RCS) measurements were performed on a square trihedral corner reflector to investigate RCS reduction techniques using camouflage materials and changes to the construction. The results are compared with an RCS modeling technique. The measurement results show that a significant RCS reduction can be achieved.
A mixed model reduction method for preserving selected physical information
Zhang, Jing; Zheng, Gangtie
2017-03-01
A new model reduction method in the frequency domain is presented. By mixedly using the model reduction techniques from both the time domain and the frequency domain, the dynamic model is condensed to selected physical coordinates, and the contribution of slave degrees of freedom is taken as a modification to the model in the form of effective modal mass of virtually constrained modes. The reduced model can preserve the physical information related to the selected physical coordinates such as physical parameters and physical space positions of corresponding structure components. For the cases of non-classical damping, the method is extended to the model reduction in the state space but still only contains the selected physical coordinates. Numerical results are presented to validate the method and show the effectiveness of the model reduction.
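A minimal sketch of classical static (Guyan) condensation onto selected physical coordinates, the time-domain ingredient behind the kind of mixed reduction described here (spring-chain example with assumed values; not the authors' frequency-domain modification with effective modal masses):

```python
import numpy as np

def guyan_reduce(K, M, master):
    """Static (Guyan) condensation of (K, M) onto the master DOFs."""
    n = K.shape[0]
    slave = [i for i in range(n) if i not in master]
    order = list(master) + slave
    Kr = K[np.ix_(order, order)]
    Mr = M[np.ix_(order, order)]
    m = len(master)
    # slave DOFs follow the masters statically: x_s = -Kss^{-1} Ksm x_m
    T = np.vstack([np.eye(m), -np.linalg.solve(Kr[m:, m:], Kr[m:, :m])])
    return T.T @ Kr @ T, T.T @ Mr @ T

# fixed-fixed spring chain; keep every other DOF as a master coordinate
n = 10
K = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.eye(n)
masters = list(range(0, n, 2))
Kg, Mg = guyan_reduce(K, M, masters)

# static response under loads applied at master DOFs is reproduced exactly
f = np.zeros(n); f[0] = 1.0
x_full = np.linalg.solve(K, f)
x_red = np.linalg.solve(Kg, f[masters])
print(np.allclose(x_red, x_full[masters]))
```

The reduced coordinates remain physical displacements at the master locations, which is the property the abstract emphasizes; the dynamic error of plain Guyan reduction (it is exact only statically) is what the slave-DOF modal-mass modification is meant to correct.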
Iterative metal artifact reduction: Evaluation and optimization of technique
Energy Technology Data Exchange (ETDEWEB)
Subhas, Naveen; Gupta, Amit; Polster, Joshua M. [Imaging Institute, Cleveland Clinic, Cleveland, OH (United States); Primak, Andrew N. [Siemens Medical Solutions USA Inc., Malvern, PA (United States); Obuchowski, Nancy A. [Quantitative Health Sciences, Cleveland Clinic, Cleveland, OH (United States); Krauss, Andreas [Siemens Healthcare, Forchheim (Germany); Iannotti, Joseph P. [Orthopaedic and Rheumatologic Institute, Cleveland Clinic, Cleveland, OH (United States)
2014-12-15
Iterative metal artifact reduction (IMAR) is a sinogram inpainting technique that incorporates high-frequency data from standard weighted filtered back projection (WFBP) reconstructions to reduce metal artifact on computed tomography (CT). This study was designed to compare the image quality of IMAR and WFBP in total shoulder arthroplasties (TSA); determine the optimal amount of WFBP high-frequency data needed for IMAR; and compare image quality of the standard 3D technique with that of a faster 2D technique. Eight patients with nine TSA underwent CT with standardized parameters: 140 kVp, 300 mAs, 0.6 mm collimation and slice thickness, and B30 kernel. WFBP, three 3D IMAR algorithms with different amounts of WFBP high-frequency data (IMARlo, lowest; IMARmod, moderate; IMARhi, highest), and one 2D IMAR algorithm were reconstructed. Differences in attenuation near hardware and away from hardware were measured and compared using repeated measures ANOVA. Five readers independently graded image quality; scores were compared using Friedman's test. Attenuation differences were smaller with all 3D IMAR techniques than with WFBP (p < 0.0063). With increasing high-frequency data, the attenuation difference increased slightly (differences not statistically significant). All readers ranked IMARmod and IMARhi more favorably than WFBP (p < 0.05), with IMARmod ranked highest for most structures. The attenuation difference was slightly higher with 2D than with 3D IMAR, with no significant reader preference for 3D over 2D. IMAR significantly decreases metal artifact compared to WFBP both objectively and subjectively in TSA. The incorporation of a moderate amount of WFBP high-frequency data and use of a 2D reconstruction technique optimize image quality and allow for relatively short reconstruction times. (orig.)
Cogging Torque Reduction Techniques for Spoke-type IPMSM
Bahrim, F. S.; Sulaiman, E.; Kumar, R.; Jusoh, L. I.
2017-08-01
The spoke-type interior permanent magnet synchronous motor (IPMSM) is gaining ground in industry due to its good flux-weakening capability and high power density. In many applications, however, the high strength of the permanent magnets causes undesirably high cogging torque that can degrade the performance of the motor. High cogging torque is significant in spoke-type IPMSMs because of the similar length and effectiveness of the magnetic air gap. The aim of this study is to analyze and compare the cogging-torque effect and performance of four common reduction techniques: skewing, notching, pole pairing, and axial pole pairing. With the aid of 3-D finite element analysis (FEA) in JMAG software, a 6S-4P spoke-type IPMSM with various rotor-PM configurations has been designed. As a result, the cogging-torque effect is reduced by up to 69.5% for the skewing technique, followed by 31.96%, 29.6%, and 17.53% for the pole pairing, axial pole pairing, and notching techniques, respectively.
Survey of semantic modeling techniques
Energy Technology Data Exchange (ETDEWEB)
Smith, C.L.
1975-07-01
The analysis of the semantics of programming languages was attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state-of-the-art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.
Data-Driven Model Order Reduction for Bayesian Inverse Problems
Cui, Tiangang
2014-01-06
One of the major challenges in using MCMC for the solution of inverse problems is the repeated evaluation of computationally expensive numerical models. We develop a data-driven projection-based model order reduction technique to reduce the computational cost of numerical PDE evaluations in this context.
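A hedged sketch of the offline/online split behind projection-based model order reduction for repeated parametrized solves, as needed inside an MCMC loop (a toy linear system stands in for the expensive PDE; the snapshot count and basis size are illustrative assumptions):

```python
import numpy as np

# "expensive" full-order model: solve A(mu) x = b for a parameter mu
n = 200
A0 = 2*np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # discrete Laplacian
A1 = np.diag(np.linspace(0.0, 1.0, n))               # parameter-dependent part
b = np.ones(n)

def full_solve(mu):
    return np.linalg.solve(A0 + mu*A1, b)            # O(n^3) per evaluation

# offline stage: snapshots at training parameters -> POD basis via SVD
snapshots = np.column_stack([full_solve(mu) for mu in np.linspace(0.1, 2.0, 10)])
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
V = U[:, :5]                                         # 5-dimensional reduced basis

def reduced_solve(mu):
    Ar = V.T @ (A0 + mu*A1) @ V                      # 5x5 instead of 200x200
    return V @ np.linalg.solve(Ar, V.T @ b)

mu = 1.234                                           # unseen parameter
err = (np.linalg.norm(full_solve(mu) - reduced_solve(mu))
       / np.linalg.norm(full_solve(mu)))
print(err)  # small relative error at an unseen parameter
```

The offline cost is paid once; each MCMC step then only assembles and solves the tiny projected system, which is where the claimed computational savings come from.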
Compendium of Practical Astronomy. Volume 1: Instrumentation and Reduction Techniques.
Augensen, H. J.; Heintz, W. D.; Roth, Günter D.
The Compendium of Practical Astronomy is a revised and enlarged English version of the fourth edition of G. Roth's famous handbook for stargazers. In three volumes 28 carefully edited articles, aimed especially at amateur astronomers and students and teachers of astronomy in high schools and colleges, cover the length and breadth of practical astronomy. Volume 1 contains information on modern instrumentation and reduction techniques, including spherical astronomy, error estimations, telescope mountings, astrophotography, and more. Volume 2 covers the planetary system, with contributions on artificial satellites, comets, the polar aurorae, and the effects of the atmosphere on observational data. Volume 3 is devoted to stellar objects, variable stars and binary stars in particular. An introduction to the astronomical literature and a comprehensive chapter on astronomy education and instructional aids make the Compendium a useful complement to any college library, in addition to its being essential reading for all practical astronomers.
Computerized data reduction techniques for nadir viewing remote sensors
Tiwari, S. N.; Gormsen, Barbara B.
1985-01-01
Computer resources have been developed for the analysis and reduction of MAPS experimental data from the OSTA-1 payload. The MAPS Research Project is concerned with the measurement of the global distribution of mid-tropospheric carbon monoxide. The measurement technique for the MAPS instrument is based on a non-dispersive gas filter radiometer operating in the nadir-viewing mode. The MAPS experiment has two passive remote sensing instruments: the prototype instrument, which is used to measure tropospheric air pollution from aircraft platforms, and the third-generation (OSTA) instrument, which is used to measure carbon monoxide in the mid and upper troposphere from space platforms. Extensive effort was also expended in support of the MAPS/OSTA-3 shuttle flight. Specific capabilities and resources developed are discussed.
Parameter estimation, model reduction and quantum filtering
Chase, Bradley A.
This thesis explores the topics of parameter estimation and model reduction in the context of quantum filtering. The last is a mathematically rigorous formulation of continuous quantum measurement, in which a stream of auxiliary quantum systems is used to infer the state of a target quantum system. Fundamental quantum uncertainties appear as noise which corrupts the probe observations and therefore must be filtered in order to extract information about the target system. This is analogous to the classical filtering problem in which techniques of inference are used to process noisy observations of a system in order to estimate its state. Given the clear similarities between the two filtering problems, I devote the beginning of this thesis to a review of classical and quantum probability theory, stochastic calculus and filtering. This allows for a mathematically rigorous and technically adroit presentation of the quantum filtering problem and solution. Given this foundation, I next consider the related problem of quantum parameter estimation, in which one seeks to infer the strength of a parameter that drives the evolution of a probe quantum system. By embedding this problem in the state estimation problem solved by the quantum filter, I present the optimal Bayesian estimator for a parameter when given continuous measurements of the probe system to which it couples. For cases when the probe takes on a finite number of values, I review a set of sufficient conditions for asymptotic convergence of the estimator. For a continuous-valued parameter, I present a computational method called quantum particle filtering for practical estimation of the parameter. Using these methods, I then study the particular problem of atomic magnetometry and review an experimental method for potentially reducing the uncertainty in the estimate of the magnetic field beyond the standard quantum limit. The technique involves double-passing a probe laser field through the atomic system, giving
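The quantum particle filter itself is beyond a short sketch, but its classical analogue, a bootstrap particle filter estimating a static parameter from noisy observations, can be illustrated as follows (all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = 0.7
T, N = 200, 1000
obs = theta_true + 0.5 * rng.standard_normal(T)   # noisy probe measurements

particles = rng.uniform(0.0, 2.0, N)              # samples from the prior over the parameter
weights = np.full(N, 1.0 / N)
for y in obs:
    # reweight each particle by the likelihood of the new observation
    lik = np.exp(-0.5 * ((y - particles) / 0.5) ** 2)
    weights *= lik
    weights /= weights.sum()
    # resample when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, N, p=weights)
        particles, weights = particles[idx], np.full(N, 1.0 / N)

estimate = np.sum(weights * particles)            # posterior-mean estimate of the parameter
```

The quantum version replaces the Gaussian likelihood with the filtering-equation update, but the reweight/resample structure is the same.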
Interpolatory Weighted-H2 Model Reduction
Anic, Branimir; Gugercin, Serkan; Antoulas, Athanasios C
2012-01-01
This paper introduces an interpolation framework for the weighted-H2 model reduction problem. We obtain a new representation of the weighted-H2 norm of SISO systems that provides new interpolatory first-order necessary conditions for an optimal reduced-order model. The H2 norm representation also provides an error expression that motivates a new weighted-H2 model reduction algorithm. Several numerical examples illustrate the effectiveness of the proposed approach.
Model Reduction for Complex Hyperbolic Networks
Himpe, Christian; Ohlberger, Mario
2013-01-01
We recently introduced the joint gramian for combined state and parameter reduction [C. Himpe and M. Ohlberger. Cross-Gramian Based Combined State and Parameter Reduction for Large-Scale Control Systems. arXiv:1302.0634, 2013], which is applied in this work to reduce a parametrized linear time-varying control system modeling a hyperbolic network. The reduction encompasses the dimension of nodes and parameters of the underlying control system. Networks with a hyperbolic structure have many app...
Relative Error Model Reduction via Time-Weighted Balanced Stochastic Singular Perturbation
DEFF Research Database (Denmark)
Tahavori, Maryamsadat; Shaker, Hamid Reza
2012-01-01
A new mixed method for relative error model reduction of linear time-invariant (LTI) systems is proposed in this paper. This order reduction technique is mainly based upon the time-weighted balanced stochastic model reduction method and the singular perturbation model reduction technique. Compared to other analogous counterparts, the proposed method is shown to provide more accurate results in terms of time-weighted norms when applied to practical examples. It is shown that important properties of the time-weighted stochastic balanced reduction technique are extended to the mixed reduction method...
Behavior Change Techniques in Popular Alcohol Reduction Apps: Content Analysis
Garnett, Claire; Brown, James; West, Robert; Michie, Susan
2015-01-01
Background Mobile phone apps have the potential to reduce excessive alcohol consumption cost-effectively. Although hundreds of alcohol-related apps are available, there is little information about the behavior change techniques (BCTs) they contain, or the extent to which they are based on evidence or theory and how this relates to their popularity and user ratings. Objective Our aim was to assess the proportion of popular alcohol-related apps available in the United Kingdom that focus on alcohol reduction, identify the BCTs they contain, and explore whether BCTs or the mention of theory or evidence is associated with app popularity and user ratings. Methods We searched the iTunes and Google Play stores with the terms “alcohol” and “drink”, and the first 800 results were classified into alcohol reduction, entertainment, or blood alcohol content measurement. Of those classified as alcohol reduction, all free apps and the top 10 paid apps were coded for BCTs and for reference to evidence or theory. Measures of popularity and user ratings were extracted. Results Of the 800 apps identified, 662 were unique. Of these, 13.7% (91/662) were classified as alcohol reduction (95% CI 11.3-16.6), 53.9% (357/662) entertainment (95% CI 50.1-57.7), 18.9% (125/662) blood alcohol content measurement (95% CI 16.1-22.0) and 13.4% (89/662) other (95% CI 11.1-16.3). The 51 free alcohol reduction apps and the top 10 paid apps contained a mean of 3.6 BCTs (SD 3.4), with approximately 12% (7/61) not including any BCTs. The BCTs used most often were “facilitate self-recording” (54%, 33/61), “provide information on consequences of excessive alcohol use and drinking cessation” (43%, 26/61), “provide feedback on performance” (41%, 25/61), “give options for additional and later support” (25%, 15/61) and “offer/direct towards appropriate written materials” (23%, 14/61). These apps also rarely included any of the 22 BCTs frequently used in other health behavior change
Color gamut reduction techniques for printing with custom inks
Chosson, Sylvain M.; Hersch, Roger D.
2001-12-01
Printing with custom inks is of interest both for artistic purposes and for printing security documents such as banknotes. However, in order to create designs with only a few custom inks, a general-purpose high-quality gamut reduction technique is needed. Most existing gamut mapping techniques map an input gamut such as the gamut of a CRT display into the gamut of an output device such as a CMYK printer. In the present contribution, we are interested in printing with up to three custom inks, which in the general case define a rather narrow color gamut compared with the gamut of standard CMYK printers. The proposed color gamut reduction techniques should work for any combination of custom inks and have a smooth and predictable behavior. When the black ink is available, the lightness levels present in the original image remain nearly identical. Original colors with hues outside the target gamut are projected onto the gray axis. Original colors with hues inside the target gamut are rendered as faithfully as possible. When the black ink is not available, we map the gray axis G into a colored curve G' connecting, in the 3D color space, the paper white and the darkest available color formed by the superposition of the 3 inks. The mapped gray axis curve G'(a) is given by the Neugebauer equations when enforcing an equal amount a of custom inks c1, c2 and c3. Original lightness values are mapped onto lightness values along that curve. After lightness mapping, hue and saturation mappings are carried out. When the target gamut does not incorporate the gray axis, we divide it into two volumes, one on the desaturated side of the mapped gray axis curve G' and the other on the saturated side of the G' curve. Colors whose hues are not part of the target color gamut are mapped to colors located on the desaturated side of the G' curve. Colors within the set of printable hues remain within the target color gamut and retain as much as possible their original hue and saturation.
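As an illustration of the Neugebauer prediction mentioned above (the primary colors below are invented; real values would be measured from printed patches), the mapped gray-axis curve G'(a) follows from enforcing equal coverage a of the three inks:

```python
import numpy as np

def neugebauer(c1, c2, c3, primaries):
    """Demichel/Neugebauer prediction for 3 inks with fractional coverages c1, c2, c3.
    primaries maps each of the 8 ink-overprint combinations to its measured color."""
    weights = {
        frozenset(): (1 - c1) * (1 - c2) * (1 - c3),   # bare paper
        frozenset({1}): c1 * (1 - c2) * (1 - c3),
        frozenset({2}): (1 - c1) * c2 * (1 - c3),
        frozenset({3}): (1 - c1) * (1 - c2) * c3,
        frozenset({1, 2}): c1 * c2 * (1 - c3),
        frozenset({1, 3}): c1 * (1 - c2) * c3,
        frozenset({2, 3}): (1 - c1) * c2 * c3,
        frozenset({1, 2, 3}): c1 * c2 * c3,            # three-ink overprint (darkest color)
    }
    return sum(w * np.asarray(primaries[k], float) for k, w in weights.items())

# invented XYZ-like primary values for illustration only
prims = {
    frozenset(): (95.0, 100.0, 108.0),                 # paper white
    frozenset({1}): (40.0, 30.0, 15.0),
    frozenset({2}): (30.0, 40.0, 20.0),
    frozenset({3}): (20.0, 25.0, 60.0),
    frozenset({1, 2}): (15.0, 18.0, 8.0),
    frozenset({1, 3}): (12.0, 10.0, 20.0),
    frozenset({2, 3}): (10.0, 14.0, 18.0),
    frozenset({1, 2, 3}): (5.0, 6.0, 7.0),
}

# mapped gray axis G'(a): equal amount a of the three custom inks
gray_curve = [neugebauer(a, a, a, prims) for a in np.linspace(0.0, 1.0, 5)]
```

The curve runs from the paper white (a = 0) to the darkest three-ink overprint (a = 1), exactly the endpoints described in the abstract.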
Development of hydrogen peroxide technique for bioburden reduction
Rohatgi, N.; Schwartz, L.; Stabekis, P.; Barengoltz, J.
In order to meet the National Aeronautics and Space Administration (NASA) Planetary Protection microbial reduction requirements for Mars in-situ life detection and sample return missions, entire planetary spacecraft (including planetary entry probes and planetary landing capsules) may have to be exposed to a qualified sterilization process. Presently, dry heat is the only NASA approved sterilization technique available for spacecraft application. However, with the increasing use of various man-made materials, highly sophisticated electronic circuit boards, and sensors in a modern spacecraft, compatibility issues may render this process unacceptable to design engineers and thus impractical to achieve terminal sterilization of the entire spacecraft. An alternative vapor phase hydrogen peroxide sterilization process, which is currently used in various industries, has been selected for further development. Strategic Technology Enterprises, Incorporated (STE), a subsidiary of STERIS Corporation, under a contract from the Jet Propulsion Laboratory (JPL) is developing systems and methodologies to decontaminate spacecraft using vaporized hydrogen peroxide (VHP) technology. The VHP technology provides an effective, rapid and low temperature means for inactivation of spores, mycobacteria, fungi, viruses and other microorganisms. The VHP application is a dry process affording excellent material compatibility with many of the components found in spacecraft such as polymers, paints and electronic systems. Furthermore, the VHP process has innocuous residuals as it decomposes to water vapor and oxygen. This paper will discuss the approach that is being used to develop this technique and will present lethality data that have been collected to establish deep vacuum VHP sterilization cycles. In addition, the application of this technique to meet planetary protection requirements will be addressed.
Radaydeh, Redha Mahmoud
2014-02-01
This paper studies generalized single-stream transmit beamforming employing receive-array co-channel interference reduction algorithms under slow and flat fading multiuser wireless systems. The impact of imperfect prediction of channel state information for the desired user's spatially uncorrelated transmit channels on the effectiveness of transmit beamforming for different interference reduction techniques is investigated. The case of an over-loaded receive array with closely-spaced elements is considered, wherein it can be configured against specified interfering sources. Both dominant interference reduction and adaptive interference reduction techniques, for statistically ordered and unordered interferers' powers, respectively, are thoroughly studied. The effect of outdated statistical ordering of the interferers' powers on the efficiency of dominant interference reduction is studied and then compared against the adaptive interference reduction. For the system models described above, new analytical formulations for the statistics of the combined signal-to-interference-plus-noise ratio are presented, from which results for conventional maximum ratio transmission and single-antenna best transmit selection can be directly deduced as limiting cases. These results are then utilized to obtain quantitative measures for various performance metrics. They are also used to compare the achieved performance of various configuration models under consideration.
Parafermionic reductions of WZW model
Energy Technology Data Exchange (ETDEWEB)
Gomes, J.F.; Zimerman, A.H. [Instituto de Fisica Teorica (IFT), Sao Paulo, SP (Brazil); Sotkov, G.M. [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil)
1998-03-01
We investigate a class of conformal non-abelian Toda models representing non-compact SL(2,R)/U(1) parafermions (PF) interacting with specific abelian Toda theories and having a global U(1) symmetry. A systematic derivation of the conserved currents, their algebras and the exact solution of these models is presented. An important property of this class of models is the affine SL(2,R)_q algebra spanned by the charges of the chiral and antichiral currents and the U(1) charge. The classical (Poisson bracket) algebras of symmetries VG_n of these models appear to be of mixed PF-WG_n type. They contain, together with the local quadratic terms specific to the W_n-algebras, nonlocal terms similar to those of the classical PF-algebra. The renormalization of the spins of the nonlocal currents is the main new feature of the quantum VA_n algebras. The quantum VA_2-algebra and its degenerate representations are studied in detail. (author) 41 refs.
Detailed reduction of reaction mechanisms for flame modeling
Wang, Hai; Frenklach, Michael
1991-01-01
A method for the reduction of detailed chemical reaction mechanisms, introduced earlier for ignition systems, was extended to laminar premixed flames. The reduction is based on testing the reaction and reaction-enthalpy rates of the 'full' reaction mechanism using a zero-dimensional model with the flame temperature profile as a constraint. The technique is demonstrated with numerical tests performed on the mechanism of methane combustion.
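A toy sketch of the screening step under a fixed temperature profile (the rates below are random stand-ins, not a real mechanism): normalize each reaction's rate at every profile point and retain reactions that exceed a threshold somewhere along the profile.

```python
import numpy as np

rng = np.random.default_rng(2)
# rates[i, j] = magnitude of reaction j's rate at profile point i (invented numbers)
rates = np.abs(rng.lognormal(sigma=3.0, size=(50, 20)))

# normalize per profile point by the fastest reaction there,
# then keep any reaction that ever matters above the threshold
norm = rates / rates.max(axis=1, keepdims=True)
important = norm.max(axis=0) > 1e-2
reduced_mechanism = np.flatnonzero(important)    # indices of the retained reactions
```

A real implementation would also screen the reaction-enthalpy rates and re-verify the reduced mechanism against the full flame solution.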
Cross-Gramian-Based Model Reduction: A Comparison
Himpe, Christian; Ohlberger, Mario
2016-01-01
As an alternative to the popular balanced truncation method, the cross Gramian matrix induces a class of balancing model reduction techniques. Besides the classical computation of the cross Gramian by a Sylvester matrix equation, an empirical cross Gramian can be computed based on simulated trajectories. This work assesses the cross Gramian and its empirical variant for state-space reduction on a procedural benchmark based on the cross Gramian itself.
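A minimal sketch of the classical computation for a small stable square system (matrices invented), using the Sylvester-equation characterization A·W_X + W_X·A + BC = 0:

```python
import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(3)
n = 6
A = -np.diag(np.arange(1.0, n + 1))      # stable full-order dynamics x' = Ax + Bu
B = rng.standard_normal((n, 1))          # single input
C = rng.standard_normal((1, n))          # single output (square system)

# classical cross Gramian: solve the Sylvester equation  A W_X + W_X A = -B C
WX = solve_sylvester(A, A, -B @ C)

# states associated with small |eigenvalues| of W_X contribute little and can be truncated
magnitudes = np.sort(np.abs(np.linalg.eigvals(WX)))[::-1]
```

The empirical variant replaces the matrix-equation solve with quadratures over simulated input-to-state and state-to-output trajectories.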
Model reduction for optimization of structural-acoustic coupling problems
DEFF Research Database (Denmark)
Creixell Mediante, Ester; Jensen, Jakob Søndergaard; Brunskog, Jonas;
2016-01-01
Fully coupled structural-acoustic models of complex systems, such as those used in the hearing aid field, may have several hundreds of thousands of nodes. When there is a strong structure-acoustic interaction, performing optimization on one part requires the complete model to be taken into account, which becomes highly time consuming since many iterations may be required. The use of model reduction techniques to speed up the computations is studied in this work. The Component Mode Synthesis (CMS) method and the Multi-Model Reduction (MMR) method are adapted for problems with structure...
Counterexample-Preserving Reduction for Symbolic Model Checking
Directory of Open Access Journals (Sweden)
Wanwei Liu
2014-01-01
The cost of LTL model checking is highly sensitive to the length of the formula under verification. We observe that, under some specific conditions, the input LTL formula can be reduced to an easier-to-handle one before model checking. In such a reduction, the two formulae need not be logically equivalent, but they share the same counterexample set w.r.t. the model. In the case that the model is symbolically represented, the condition enabling such reduction can be detected with a lightweight effort (e.g., with SAT solving). In this paper, we tentatively name this technique "counterexample-preserving reduction" (CePRe for short), and the proposed technique is evaluated by conducting comparative experiments of BDD-based model checking, bounded model checking, and property-directed-reachability (IC3) based model checking.
Improved FTIR open-path remote sensing data reduction technique
Phillips, Bill; Moyers, Rick; Lay, Lori T.
1995-05-01
Progress on the development of a nonlinear curve-fitting computer algorithm for data reduction of optical remote sensing Fourier transform spectrometer (FTS) data is presented. This new algorithm is an adaptation of an existing algorithm employed at the Arnold Engineering Development Center for the analysis of infrared plume signature and optical gas diagnostic data on rocket and turbine engine exhaust. Because it is a nonlinear model, the algorithm can be used to determine parameters not readily determined by linear methods such as classical least squares. Unlike linear methods, this procedure can simultaneously determine atmospheric gas concentrations, spectral resolution, spectral shift, and the background, or I0(omega), spectrum. Additionally, species that possess spectra strongly masked by atmospheric absorption features, such as BTX, can also be incorporated into the procedure. The basic theory behind the algorithm is presented as well as test results on FTS data and synthetic data containing benzene and toluene spectral features.
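The AEDC algorithm itself is not reproduced here; the following generic sketch (toy line shape and invented numbers) shows how nonlinear least squares can recover a gas concentration, a spectral shift, and a baseline simultaneously, which classical linear least squares cannot:

```python
import numpy as np
from scipy.optimize import curve_fit

def transmittance(w, conc, shift, base):
    # toy Beer-Lambert model with a Lorentzian absorption cross-section near w = 5
    sigma = 1.0 / (1.0 + (w - 5.0 - shift) ** 2)
    return base * np.exp(-conc * sigma)

w = np.linspace(0.0, 10.0, 400)                       # stand-in wavenumber grid
rng = np.random.default_rng(4)
data = transmittance(w, 2.0, 0.3, 1.0) + 0.01 * rng.standard_normal(w.size)

# nonlinear fit recovers concentration, shift, and baseline in one pass
popt, pcov = curve_fit(transmittance, w, data, p0=[1.0, 0.0, 0.9])
```

In the real algorithm the model would be built from line-by-line spectroscopy and the instrument line shape rather than this toy function.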
Chemical reduction technique for the synthesis of nickel nanoparticles
Directory of Open Access Journals (Sweden)
Ankur Pandey
2015-04-01
Chemical reduction was used to synthesize nickel powder using hydrazine hydrate as reducing agent, nickel chloride hexahydrate as precursor and polyvinylpyrrolidone (PVP) as capping agent in an ethylene glycol medium. Experiments were carried out with mole ratios of 13:1 and 20:1 of hydrazine to nickel chloride hexahydrate while keeping the amounts of ethylene glycol and NaOH constant. The effects of capping agent concentration and temperature were also studied. X-ray diffraction (XRD) analysis was performed and the crystal size was calculated using the Debye-Scherrer equation. The XRD peaks corresponded to those of face-centered cubic nickel crystals, in accordance with the literature. Moreover, no oxygen peaks were found in the XRD pattern, which confirms the absence of oxide formation in the nickel. Morphological studies were performed using scanning electron microscopy (SEM), and the elemental composition, determined using energy-dispersive X-ray analysis, was found to be nickel with small traces of oxygen.
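The Debye-Scherrer estimate used above is a one-liner; as a sketch with illustrative numbers (Cu Kα wavelength and a peak broadening near the fcc-Ni (111) position, not values from this paper):

```python
import numpy as np

def scherrer_size(two_theta_deg, fwhm_deg, wavelength_nm=0.15406, K=0.9):
    """Crystallite size in nm from the Debye-Scherrer equation D = K*lambda / (beta*cos(theta))."""
    theta = np.radians(two_theta_deg / 2.0)   # Bragg angle
    beta = np.radians(fwhm_deg)               # peak FWHM in radians
    return K * wavelength_nm / (beta * np.cos(theta))

# e.g. a peak near 2theta = 44.5 deg with 0.5 deg broadening (illustrative numbers)
D = scherrer_size(44.5, 0.5)
```

In practice the measured FWHM is first corrected for instrumental broadening before applying the equation.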
Model building techniques for analysis.
Energy Technology Data Exchange (ETDEWEB)
Walther, Howard P.; McDaniel, Karen Lynn; Keener, Donald; Cordova, Theresa Elena; Henry, Ronald C.; Brooks, Sean; Martin, Wilbur D.
2009-09-01
The practice of mechanical engineering for product development has evolved into a complex activity that requires a team of specialists for success. Sandia National Laboratories (SNL) has product engineers, mechanical designers, design engineers, manufacturing engineers, mechanical analysts and experimentalists, qualification engineers, and others that contribute through product realization teams to develop new mechanical hardware. The goal of SNL's Design Group is to change product development by enabling design teams to collaborate within a virtual model-based environment whereby analysis is used to guide design decisions. Computer-aided design (CAD) models using PTC's Pro/ENGINEER software tools are heavily relied upon in the product definition stage of parts and assemblies at SNL. The three-dimensional CAD solid model acts as the design solid model that is filled with all of the detailed design definition needed to manufacture the parts. Analysis is an important part of the product development process. The CAD design solid model (DSM) is the foundation for the creation of the analysis solid model (ASM). Creating an ASM from the DSM currently is a time-consuming effort; the turnaround time for results of a design needs to be decreased to have an impact on the overall product development. This effort can be decreased immensely through simple Pro/ENGINEER modeling techniques that, in essence, come down to how features are created in a part model. This document contains recommended modeling techniques that increase the efficiency of the creation of the ASM from the DSM.
Transcondylar traction as a closed reduction technique in vertically unstable pelvic ring disruption
Thaunat, M.; Laude, F.; Paillard, P.; Saillant, G.; Catonné, Y.
2007-01-01
Little information is provided in the literature describing an efficient reduction technique for pelvic ring disruption. The aim of this study is to assess the use of the transcondylar traction as a closed reduction technique for vertically unstable fracture-dislocations of the sacro-iliac joint. Twenty-four pelvic ring disruptions were treated with attempted closed reduction followed by percutaneous screw fixation. Transcondylar traction was used as a closed reduction technique. Closed reduc...
Towards reduction of Paradigm coordination models
Andova, Suzana; de Vink, Erik; 10.4204/EPTCS.60.1
2011-01-01
The coordination modelling language Paradigm addresses collaboration between components in terms of dynamic constraints. Within a Paradigm model, component dynamics are consistently specified at a detailed and a global level of abstraction. To enable automated verification of Paradigm models, a translation of Paradigm into process algebra has been defined in previous work. In this paper we investigate, guided by a client-server example, reduction of Paradigm models based on a notion of global inertness. Representation of Paradigm models as process algebraic specifications helps to establish a property-preserving equivalence relation between the original and the reduced Paradigm model. Experiments indicate that in this way larger Paradigm models can be analyzed.
Energy Reductions Using Next-Generation Remanufacturing Techniques
Energy Technology Data Exchange (ETDEWEB)
Sordelet, Daniel; Racek, Ondrej
2012-02-24
supported the Industrial Technologies Program's initiative titled 'Industrial Energy Efficiency Grand Challenge.' To contribute to this Grand Challenge, we pursued an innovative processing approach for the next generation of thermal spray coatings to capture substantial energy savings and greenhouse gas emission reductions through the remanufacturing of steel and aluminum-based components. The primary goal was to develop a new thermal spray coating process that yields significantly enhanced bond strength. To reach the goal of higher coating bond strength, a laser was coupled with a traditional twin-wire arc (TWA) spray gun to treat the component surface (i.e., heat or partially melt it) during deposition. Both ferrous and aluminum-based substrates and coating alloys were examined to determine which materials are more suitable for the laser-assisted twin-wire arc coating technique. Coating adhesion was measured by static tensile and dynamic fatigue techniques, and the results helped to guide the identification of appropriate remanufacturing opportunities that will now be viable due to the increased bond strength of the laser-assisted twin-wire arc coatings. The feasibility of the laser-assisted TWA (LATWA) process was successfully demonstrated in this current effort. Critical processing parameters were identified, and when these were properly controlled, a strong diffusion bond was developed between the substrate and the deposited coating. Consequently, bond strengths were nearly doubled over those typically obtained using conventional grit-blast TWA coatings. Note, however, that successful LATWA processing was limited to ferrous substrates coated with steel coatings (e.g., 1020 and 1080 steel). With Al-based substrates, it was not possible to avoid melting a thin layer of the substrate during spraying, and this layer re-solidified to form a band of intermetallic phases at the substrate/coating interface, which significantly diminished the coating adhesion.
Reduction techniques of workflow verification and its implementation
Institute of Scientific and Technical Information of China (English)
李沛武; 卢正鼎; 付湘林
2004-01-01
Many workflow management systems have emerged in recent years, but few of them provide any form of support for verification. This frequently results in runtime errors that need to be corrected at prohibitive cost. In Ref. [1], a few reduction rules for verifying workflow graphs are given. After analyzing these reduction rules, the overlapped reduction rule is found to be inaccurate. In this paper, improved reduction rules are presented and a matrix-based implementation algorithm is given, so that the scope of workflow verification is expanded and the efficiency of the algorithm is enhanced. The method is simple and natural, and its implementation is easy.
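As a sketch of the flavor of such rules (not the paper's matrix-based algorithm), the classic sequential-reduction rule removes a node with exactly one incoming and one outgoing edge; a graph that reduces to a single start-to-end edge is well structured:

```python
def sequential_reduce(edges, start, end):
    """Repeatedly remove intermediate nodes with exactly one predecessor and one
    successor, bridging them with a direct edge. Note: representing the graph as a
    set of edge pairs merges parallel edges, a simplification of real workflow graphs."""
    edges = set(edges)
    changed = True
    while changed:
        changed = False
        nodes = {u for u, v in edges} | {v for u, v in edges}
        for w in nodes - {start, end}:
            ins = [(u, v) for (u, v) in edges if v == w]
            outs = [(u, v) for (u, v) in edges if u == w]
            if len(ins) == 1 and len(outs) == 1:
                edges.discard(ins[0])
                edges.discard(outs[0])
                edges.add((ins[0][0], outs[0][1]))
                changed = True
                break
    return edges

# the chain A -> B -> C -> D collapses to the single edge A -> D
reduced = sequential_reduce({("A", "B"), ("B", "C"), ("C", "D")}, "A", "D")
```

Other rules (parallel, closed, overlapped) shrink split/join structures; the paper's contribution concerns correcting the overlapped rule.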
Statistical detection of structural damage based on model reduction
Institute of Scientific and Technical Information of China (English)
Tao YIN; Heung-fai LAM; Hong-ping ZHU
2009-01-01
This paper proposes a statistical method for damage detection based on the finite element (FE) model reduction technique that utilizes measured modal data with a limited number of sensors. A deterministic damage detection process is formulated based on the model reduction technique. The probabilistic process is integrated into the deterministic damage detection process using a perturbation technique, resulting in a statistical structural damage detection method. This is achieved by deriving the first- and second-order partial derivatives of the uncertain parameters, such as the elasticity of the damaged member, with respect to the measurement noise, which allows the expectation and covariance matrix of the uncertain parameters to be calculated. Besides the theoretical development, this paper reports numerical verification of the proposed method using a portal frame example and Monte Carlo simulation.
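The core of the perturbation step is first-order covariance propagation; a generic sketch (sensitivity matrix and noise level invented, not taken from the paper) is:

```python
import numpy as np

rng = np.random.default_rng(5)
# J[i, j] = first-order sensitivity of uncertain parameter i (e.g. member elasticity)
# to measurement channel j, i.e. a partial derivative evaluated at the nominal solution
J = rng.standard_normal((2, 10))
Sigma_noise = 0.01 * np.eye(10)        # covariance of the measurement noise

# first-order propagation: Cov(theta) ~= J * Sigma * J^T
Cov_theta = J @ Sigma_noise @ J.T
```

The second-order derivatives mentioned in the abstract correct the expectation (bias) of the parameters, which this first-order sketch omits.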
Desulfurization of coal by an electrochemical-reduction flotation technique
Institute of Scientific and Technical Information of China (English)
ZHAO Wei; XU Wen-juan; ZHONG Shi-teng; ZONG Zhi-min
2008-01-01
The optimum conditions for sulfur removal from coal by electrochemical reduction flotation in an aqueous NaCl solution were determined from orthogonal experiments. The effect of electrolytic conditions on the desulfurization ratio was also studied. The electrochemically reduced coal was examined by X-ray diffraction, Fourier transform infrared spectroscopy and wet chemical analysis. The results show that electrochemical reduction converts hydrophobic pyrite in Nantong coal into hydrophilic FeS and S2 and leads to an increase in the concentration of hydroxyl groups and aliphatic moieties and a corresponding decrease in carboxyl and carbonyl groups, which enhances the flotation desulfurization of the coal.
Reduced order methods for modeling and computational reduction
Rozza, Gianluigi
2014-01-01
This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics. Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...
Model reduction of linear conservative mechanical systems
Schaft, van der A.J.; Oeloff, J.E.
1990-01-01
An approach for model reduction of linear conservative or weakly damped mechanical systems is proposed. It is based on the balancing of an associated gradient system. It uses the joint knowledge of the system matrix and the input and output matrices of the Hamiltonian system. The key idea is to asso
Azimuthally Varying Noise Reduction Techniques Applied to Supersonic Jets
Heeb, Nicholas S.
An experimental investigation into the effect of azimuthal variance of chevrons, and of fluidically enhanced chevrons, applied to supersonic jets is presented. Flow-field measurements of streamwise and cross-stream particle imaging velocimetry were employed to determine the causes of the noise reduction, which was demonstrated through acoustic measurements. Results were obtained in the over- and under-expanded regimes and at the design condition, though emphasis was placed on the overexpanded regime due to practical application. Surveys of chevron geometry, number, and arrangement were undertaken in an effort to reduce noise and/or the incurred performance penalties. Penetration was found to be positively correlated with noise reduction in the overexpanded regime, and negatively correlated in underexpanded operation, due to increased effective penetration and high-frequency penalty, respectively. The effect of arrangement indicated that the beveled configuration achieved optimal abatement in the ideally and underexpanded regimes due to superior BSAN reduction. The symmetric configuration achieved optimal overexpanded noise reduction due to LSS suppression from improved vortex persistence. Increases in chevron number generally improved reduction of all noise components for lower penetration configurations. Higher penetration configurations reached levels of saturation in the four-chevron range, with the potential to introduce secondary shock structures and generate additional noise with higher numbers. Alternation of penetration generated limited benefit, with slight reduction of the high-frequency penalty caused by increased shock spacing. The combination of alternating penetration with beveled and clustered configurations achieved noise reduction comparable to the standard counterparts. Analysis of the entire data set indicated initial improvements with projected area that saturated after a given level and either plateaued or degraded with additional increases. Optimal reductions
IR Drop Analysis and Its Reduction Techniques in Deep Submicron Technology
Directory of Open Access Journals (Sweden)
Vanpreet Kaur
2015-01-01
This paper presents a detailed conceptual analysis of the IR Drop effect in deep submicron technologies and of its reduction techniques. The IR Drop in the power/ground network increases rapidly with technology scaling, affecting the timing of the design and hence the desired speed. It is shown that in present-day designs the well-known reduction techniques, such as wire sizing and decoupling capacitor insertion, may not be sufficient to limit the voltage fluctuations; hence, two further methods, a selective glitch reduction technique and IR Drop reduction through combinational circuit partitioning, are discussed, and the issues related to all the techniques are reviewed.
Index-aware model order reduction methods: applications to differential-algebraic equations
Banagaaya, N; Schilders, W H A
2016-01-01
The main aim of this book is to discuss model order reduction (MOR) methods for differential-algebraic equations (DAEs) with linear coefficients that make use of splitting techniques before applying model order reduction. The splitting produces a system of ordinary differential equations (ODE) and a system of algebraic equations, which are then reduced separately. For the reduction of the ODE system, conventional MOR methods can be used, whereas for the reduction of the algebraic systems new methods are discussed. The discussion focuses on the index-aware model order reduction method (IMOR) and its variations, methods for which the so-called index of the original model is automatically preserved after reduction.
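The splitting step described above can be sketched for the simplest case, a semi-explicit index-1 DAE with E = diag(I, 0), where the algebraic variables can be eliminated directly. The matrices below are illustrative random data, and the IMOR method itself involves index analysis beyond this sketch.

```python
# Hedged sketch: split E x' = A x + B u (E = diag(I, 0)) into an ODE part x1
# and algebraic variables x2, then eliminate x2. Illustrative matrices only.
import numpy as np

rng = np.random.default_rng(5)
n1, n2 = 6, 3
A11 = -np.eye(n1) + 0.1 * rng.standard_normal((n1, n1))
A12 = rng.standard_normal((n1, n2))
A21 = rng.standard_normal((n2, n1))
A22 = -np.eye(n2) + 0.1 * rng.standard_normal((n2, n2))   # generically invertible
B1 = rng.standard_normal((n1, 1))
B2 = rng.standard_normal((n2, 1))

# Algebraic rows: 0 = A21 x1 + A22 x2 + B2 u  =>  x2 = -A22^{-1}(A21 x1 + B2 u)
S = np.linalg.solve(A22, A21)       # A22^{-1} A21
Sb = np.linalg.solve(A22, B2)       # A22^{-1} B2

# Substituting into the differential rows leaves a pure ODE in x1,
# which conventional MOR methods can then reduce:
Aode = A11 - A12 @ S
Bode = B1 - A12 @ Sb
```

After this elimination, `Aode`/`Bode` define an ordinary state-space system to which standard MOR (e.g. balanced truncation or Krylov methods) applies.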
RADON REDUCTION TECHNIQUES FOR DETACHED HOUSES, TECHNICAL GUIDANCE (SECOND EDITION)
This document is intended for use by State officials, radon mitigation contractors, building contractors, concerned homeowners, and other persons as an aid in the selection, design, and operation of radon reduction measures for houses. It is the second edition of EPA's techn...
Radon Reduction Techniques in Schools: Interim Technical Guidance.
Environmental Protection Agency, Washington, DC.
This technical document is intended to assist school facilities maintenance personnel in the selection, design, and operation of radon reduction systems in schools. The guidance contained in this document is based largely on research conducted in 1987 and 1988 in schools located in Maryland and Virginia. Researchers from the United States…
Energy Technology Data Exchange (ETDEWEB)
Bouscaren, R. [CITEPA, Centre Interprofessionnel Technique d`Etudes de la Pollution Atmospherique, 75 - Paris (France)
1996-12-31
Separating techniques offer a large choice among various procedures for air pollution reduction in combustion plants: mechanical, electrical, filtering, hydraulic, chemical, physical, catalytic, thermal and biological processes. Much environment-friendly equipment uses such separating techniques, particularly for dust cleaning and fume desulfurization and, more recently, for the abatement of volatile organic pollutants or dioxins and furans. These processes are briefly described.
Methods in Model Order Reduction (MOR) field
Institute of Scientific and Technical Information of China (English)
刘志超
2014-01-01
Nowadays, models of systems may be quite large, even up to tens of thousands of orders. In spite of increasing computational power, direct simulation of these large-scale systems may be impractical. Thus, to meet industry requirements, analytically tractable and computationally cheap models must be designed. This is the essential task of Model Order Reduction (MOR). This article describes the basics of MOR, various ways of designing reduced-order models, and draws conclusions about existing methods. In addition, it proposes some heuristic approaches.
Compression and model reduction: A case study
Energy Technology Data Exchange (ETDEWEB)
LoFaro, T. [Washington State Univ., Pullman, WA (United States); Kopell, N. [Boston Univ., Boston, MA (United States)
1995-12-31
We discuss a method by which the dynamics of a network of coupled neurons can be captured in a one-dimensional map. The network used as an example of this technique consists of a pair of neurons, one of which is an endogenous burster and the other excitable, but not bursting in the absence of phasic input. The reduction is accomplished by decomposing the flow into fast and slow subsystems, each operating on a distinct time scale. A "map of knees" is constructed using singular perturbation techniques. A concise expression for this map is developed by introducing time coordinates to each stable branch of the slow manifold. The compression associated with the fast subsystem is used to determine the qualitative properties of the map.
A Comparative of business process modelling techniques
Tangkawarow, I. R. H. T.; Waworuntu, J.
2016-04-01
In this era, there are many business process modelling techniques. This article investigates the differences among business process modelling techniques; for each technique, the definition and the structure are explained. The paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation and how the technique works when implemented in Somerleyton Animal Park. Each technique is assessed with its advantages and disadvantages. The final conclusion recommends business process modelling techniques that are easy to use and serves as a basis for evaluating further modelling techniques.
Modelling Hydrogen Reduction and Hydrodeoxygenation of Oxygenates
Energy Technology Data Exchange (ETDEWEB)
Zhao, Y.; Xu, Q.; Cheah, S.
2013-01-01
Based on Density Functional Theory (DFT) simulations, we have studied the reduction of nickel oxide and biomass derived oxygenates (catechol, guaiacol, etc.) in hydrogen. Both the kinetic barrier and thermodynamic favorability are calculated with respect to the modeled reaction pathways. In early-stage reduction of the NiO(100) surface by hydrogen, the pull-off of the surface oxygen atom and simultaneous activation of the nearby Ni atoms coordinately dissociate the hydrogen molecules so that a water molecule can be formed, leaving an oxygen vacancy on the surface. In hydrogen reaction with oxygenates catalyzed by transition metals, hydrogenation of the aromatic carbon ring normally dominates. However, selective deoxygenation is of particular interest for practical applications such as biofuel conversion. Our modeling shows that doping of the transition metal catalysts can change the orientation of oxygenates adsorbed on metal surfaces. The correlation between the selectivity of reaction and the orientation of adsorption is discussed.
Non-IMF mandibular fracture reduction techniques: A review of the literature.
Batbayar, Enkh-Orchlon; van Minnen, Baucke; Bos, Ruud R M
2017-08-01
Intermaxillary fixation (IMF) techniques are commonly used in mandibular fracture treatment to reduce bone fragments and re-establish normal occlusion. However, non-IMF reduction techniques such as repositioning forceps may be preferable due to their quick yet adequate reduction. The purpose of this paper is to assess which non-IMF reduction techniques and reduction forceps are available for fracture reduction in the mandible. A systematic search was performed in the databases of Pubmed and EMBASE. The search was updated until February 2016 and no initial date and language preference was set. 14 articles were selected for this review, among them ten articles related to reduction forceps and four articles describing other techniques. Thus, modification and design of reduction forceps and other reduction techniques are qualitatively described. Few designs of repositioning forceps have been proposed in the literature. Quick and adequate reduction of fractures seems possible with non-IMF techniques resulting in anatomic repositioning and shorter operation time, especially in cases with good interfragmentary stability. Further development and clinical testing of reduction forceps is necessary to establish their future role in maxillofacial fracture treatment. Copyright © 2017 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
COMPARATIVE ANALYSIS OF TRAUMA REDUCTION TECHNIQUES IN LAPAROSCOPIC CHOLECYSTECTOMY
Directory of Open Access Journals (Sweden)
Anton Koychev
2017-02-01
Nowadays, there is no operation in the field of abdominal surgery that cannot be performed laparoscopically. Both surgeons and patients have at their disposal an increasing number of laparoscopic techniques for performing surgical interventions. The prevalence of laparoscopic cholecystectomy is due to its undeniable advantages over traditional open surgery, namely low invasiveness, a reduction in the frequency and severity of perioperative complications, an incomparably better cosmetic result, and much better medical, social, and economic efficiency. Single-port techniques for performing laparoscopic cholecystectomy are an acceptable alternative to the classical conventional multi-port techniques. The safety of laparoscopic cholecystectomy requires precise identification of anatomical structures, precise observance of the diagnostic and treatment protocols, and criteria for selecting patients to be treated surgically by these methods.
Interference Reduction Technique in WCDMA using Cell Resizing
Directory of Open Access Journals (Sweden)
N.Mohan
2012-07-01
In WCDMA, interference is produced by factors such as thermal noise, intra-cell traffic, traffic in adjacent cells and external traffic. In addition, an increase in the number of users in a cell increases the total interference in the network. Hence, the interference must be controlled to improve the throughput of the network. In this paper, we propose an interference reduction technique in WCDMA using a cell resizing approach. Our technique classifies the access points into three types, normal, saturated and cooperative, based on their signal-to-noise ratio (SNR). A saturated cell triggers the process of cell resizing. This process balances the number of users in each cell and thereby cancels the interference completely. We prove the efficiency of our technique through simulation results.
Boundary condition handling approaches for the model reduction of a vehicle frame
Xie, Qingxi; Zhang, Nong; Zhang, Bangji; Ji, Jinchen
2016-06-01
In order to apply model reduction techniques to improve the computational efficiency of large-scale FEM models of vehicles, this paper presents handling approaches for three widely used boundary conditions, namely fixed boundary condition (FBC), prescribed motion (PSM) and coupling (COUP). It is found that the iterated improved reduction system (IIRS) method tends to generate a better reduction approximation. The Guyan method is not sensitive to the sequence of reduction and constraint under FBC, and can thus provide flexibility in handling different boundary conditions for the same system. As for PSM, 'constraint first' is recommended no matter which reduction method is used; separate reduced models can then be coupled to form a new model with relatively few dofs. By selecting appropriate master dofs for model reduction, the coupled model based on reduced models can produce the same results as the original full model.
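The Guyan (static condensation) reduction mentioned above can be sketched on a toy spring-chain stiffness matrix. The master-dof choice and the 4-dof chain are illustrative, not the vehicle frame of the paper; for static loads applied at the masters, the Schur complement is exact.

```python
# Hedged sketch of Guyan static condensation onto a set of master dofs.
import numpy as np

def guyan(K, masters):
    """Condense stiffness matrix K onto the master dofs (static condensation)."""
    n = K.shape[0]
    slaves = [i for i in range(n) if i not in masters]
    Kmm = K[np.ix_(masters, masters)]
    Kms = K[np.ix_(masters, slaves)]
    Ksm = K[np.ix_(slaves, masters)]
    Kss = K[np.ix_(slaves, slaves)]
    # Schur complement: exact for static loads applied at the masters
    return Kmm - Kms @ np.linalg.solve(Kss, Ksm)

# 4-dof spring chain (stiffness k), already constrained at the left end (FBC)
k = 100.0
K = k * np.array([[ 2.0, -1.0,  0.0,  0.0],
                  [-1.0,  2.0, -1.0,  0.0],
                  [ 0.0, -1.0,  2.0, -1.0],
                  [ 0.0,  0.0, -1.0,  1.0]])
masters = [0, 3]
Kr = guyan(K, masters)
```

For dynamics, Guyan neglects slave inertia, which is why the paper observes that iterated schemes like IIRS approximate the full model better.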
A Retrofit Technique for Kicker Beam-Coupling Impedance Reduction
Caspers, Friedhelm; Kroyer, T; Timmins, M; Uythoven, J; Kurennoy, S
2004-01-01
The reduction of the impedance of operational ferrite kicker structures may be desirable in order to avoid rebuilding such a device. Often resistively coated ceramic plates or tubes are installed for this purpose but at the expense of available aperture. Ceramic U-shaped profiles with a resistive coating fitting between the ellipse of the beam and the rectangular kicker aperture have been used to significantly reduce the impedance of the magnet, while having a limited effect on the available physical aperture. Details of this method, constraints, measurements and simulation results as well as practical aspects are presented and discussed.
Backscattering reduction of corner reflectors using SCS technique
Ajaikumar, V.; Jose, K. A.; Aanandan, C. K.; Mohanan, P.; Nair, K. G.
1992-10-01
The paper reports the use of a simulated corrugated surface (SCS) to reduce radar cross section of dihedral corner reflectors. The focus is on 90-deg corner reflectors, since they are involved in many targets and normally show an enhancement in RCS. The backscattering cross section of the dihedral corner reflector, which is large due to the mutual perpendicularity of the two flat surfaces, is found to be greatly reduced for TE polarization. This simple method is determined to be very effective in reducing the RCS of corner reflectors for any corner angle by suitably selecting the parameters of SCS. This may find potential use in strategic RCS reduction of targets in defense and space applications.
Pectus excavatum: current imaging techniques and opportunities for dose reduction.
Sarwar, Zahir U; DeFlorio, Robert; O'Connor, Stephen C
2014-08-01
Pectus excavatum (PE) is the most common congenital chest wall deformity in children. It affects 1 in every 300-1000 live births with a male to female ratio of 5:1. Most of the patients present in their first year of life. During the teenage years, patients may have exercise intolerance and psychological strain because of their chest wall deformity. The Nuss and Ravitch procedures are established methods of surgical correction of PE. An index of severity known best as the Haller index, typically evaluated with computed tomography scan, when measuring greater than 3.2 is considered to indicate moderate or severe PE and is a prerequisite for third-party insurance reimbursement for these corrective procedures. This article reviews the clinical features of PE, the role of imaging, and the opportunities for radiation dose reduction.
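The Haller index itself is a simple ratio; a minimal sketch with hypothetical CT measurements (the function name and values are illustrative):

```python
# Hedged sketch: Haller index = inner transverse diameter / sternum-to-spine
# distance, measured on an axial CT slice. Measurements here are hypothetical.
def haller_index(inner_transverse_cm, sternum_to_spine_cm):
    """Return the Haller index from two chest measurements in cm."""
    return inner_transverse_cm / sternum_to_spine_cm

hi = haller_index(inner_transverse_cm=24.0, sternum_to_spine_cm=6.0)
meets_threshold = hi > 3.2     # the moderate/severe cutoff cited above
```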
Radar Range Sidelobe Reduction Using Adaptive Pulse Compression Technique
Li, Lihua; Coon, Michael; McLinden, Matthew
2013-01-01
Pulse compression has been widely used in radars so that low-power, long RF pulses can be transmitted, rather than a high-power short pulse. Pulse compression radars offer a number of advantages over high-power short pulsed radars, such as no need of high-power RF circuitry, no need of high-voltage electronics, compact size and light weight, better range resolution, and better reliability. However, the range sidelobes associated with pulse compression have prevented the use of this technique on spaceborne radars, since surface returns detected by range sidelobes may mask the returns from a nearby weak cloud or precipitation particles. Research on adaptive pulse compression was carried out utilizing a field-programmable gate array (FPGA) waveform generation board and a radar transceiver simulator. The results have shown significant improvements in pulse compression sidelobe performance. Microwave and millimeter-wave radars present many technological challenges for Earth and planetary science applications. The traditional tube-based radars use high-voltage power supplies/modulators and high-power RF transmitters; therefore, these radars usually have large size, heavy weight, and reliability issues for space and airborne platforms. Pulse compression technology has provided a path toward meeting many of these radar challenges. Recent advances in digital waveform generation, digital receivers, and solid-state power amplifiers have opened a new era for applying pulse compression to the development of compact and high-performance airborne and spaceborne remote sensing radars. The primary objective of this innovative effort is to develop and test a new pulse compression technique to achieve ultra-low range sidelobes so that it can be applied to spaceborne, airborne, and ground-based remote sensing radars to meet future science requirements. By using digital waveform generation, digital receiver, and solid-state power amplifier technologies, this improved pulse compression
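The basic (non-adaptive) matched-filter pulse compression and its range sidelobes can be illustrated with a linear-FM chirp. The parameters are illustrative, and the Hamming taper shown is a generic amplitude weighting for sidelobe suppression, not the adaptive technique of the paper.

```python
# Hedged sketch: LFM chirp compression; an amplitude taper on the reference
# trades a wider mainlobe for lower range sidelobes. Illustrative parameters.
import numpy as np

fs, T, Bw = 1e6, 100e-6, 200e3                    # sample rate, pulse width, bandwidth
t = np.arange(int(fs * T)) / fs
chirp = np.exp(1j * np.pi * (Bw / T) * t**2)      # linear FM pulse

def compress(rx, ref):
    """Matched-filter output magnitude (full cross-correlation)."""
    return np.abs(np.correlate(rx, ref, mode="full"))

uniform = compress(chirp, chirp)
tapered = compress(chirp, chirp * np.hamming(len(chirp)))

def peak_sidelobe_db(y, guard=15):
    """Peak sidelobe level (dB) outside a guard band around the mainlobe."""
    p = int(np.argmax(y))
    side = np.concatenate([y[:p - guard], y[p + guard + 1:]])
    return 20 * np.log10(side.max() / y.max())

psl_u = peak_sidelobe_db(uniform)
psl_t = peak_sidelobe_db(tapered)
```

With a time-bandwidth product of 20 as here, the untapered sidelobes sit near the classic -13 dB level, and the taper pushes them lower at the cost of mainlobe width.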
Fast Multiscale Reservoir Simulations using POD-DEIM Model Reduction
Ghasemi, Mohammadreza
2015-02-23
In this paper, we present a global-local model reduction for fast multiscale reservoir simulations in highly heterogeneous porous media with applications to optimization and history matching. Our proposed approach identifies a low dimensional structure of the solution space. We introduce an auxiliary variable (the velocity field) in our model reduction that allows achieving a high degree of model reduction. The latter is due to the fact that the velocity field is conservative for any low-order reduced model in our framework. In contrast, a typical global model reduction based on POD is a Galerkin finite element method and thus cannot guarantee local mass conservation. This can be observed in numerical simulations that use finite volume based approaches. The Discrete Empirical Interpolation Method (DEIM) is used to approximate the nonlinear functions of fine-grid functions in Newton iterations. This approach allows achieving a computational cost that is independent of the fine grid dimension. POD snapshots are inexpensively computed using local model reduction techniques based on the Generalized Multiscale Finite Element Method (GMsFEM), which provides (1) a hierarchical approximation of snapshot vectors, (2) adaptive computations by using coarse grids, and (3) inexpensive global POD operations in a low-dimensional space on a coarse grid. By balancing the errors of the global and local reduced-order models, our new methodology can provide an error bound in simulations. Our numerical results, utilizing a two-phase immiscible flow, show a substantial speed-up, and we compare our results to the standard POD-DEIM in a finite volume setup.
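The POD and DEIM building blocks can be sketched on synthetic snapshot data. The greedy index selection below follows the standard DEIM algorithm; the reservoir-specific machinery of the paper is not reproduced, and the snapshot matrix is an artificial low-rank construction.

```python
# Hedged sketch: POD basis from snapshots (SVD, energy criterion) plus greedy
# DEIM interpolation-point selection. Synthetic snapshots, illustrative only.
import numpy as np

n, ns = 200, 40
x = np.linspace(0.0, 1.0, n)
# Snapshot matrix with decaying modal content (mode k has amplitude e^{-k/2})
snaps = np.column_stack([np.exp(-0.5 * k) * np.sin(np.pi * k * x)
                         for k in range(1, ns + 1)])

# POD: left singular vectors ordered by energy; keep 99.99% of the energy
U, s, _ = np.linalg.svd(snaps, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1
Ur = U[:, :r]

def deim_indices(U):
    """Greedy DEIM point selection for an orthonormal basis U (columns)."""
    idx = [int(np.argmax(np.abs(U[:, 0])))]
    for l in range(1, U.shape[1]):
        c = np.linalg.solve(U[np.ix_(idx, list(range(l)))], U[idx, l])
        res = U[:, l] - U[:, :l] @ c          # residual vanishes at chosen points
        idx.append(int(np.argmax(np.abs(res))))
    return np.array(idx)

pts = deim_indices(Ur)
```

A nonlinear term is then approximated as `Ur @ solve(Ur[pts], f[pts])`, i.e. it only ever needs to be evaluated at the `r` selected points.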
Energy Technology Data Exchange (ETDEWEB)
Garcia-Pareja, S. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario 'Carlos Haya', Avda. Carlos Haya, s/n, E-29010 Malaga (Spain)], E-mail: garciapareja@gmail.com; Vilches, M. [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario 'Virgen de las Nieves', Avda. de las Fuerzas Armadas, 2, E-18014 Granada (Spain); Lallena, A.M. [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)
2007-09-21
The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators used in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and permits, in addition, investigation of the 'hot' regions of the accelerator, information which is essential for developing a source model for this therapy tool.
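Splitting and Russian roulette can be sketched as weight-based operations on a particle bank. The thresholds below are illustrative (not the ant-colony-controlled values of the paper); the key property is that both moves preserve the expected total weight, so the simulation stays unbiased.

```python
# Hedged sketch: weight-window style splitting and Russian roulette.
# Thresholds are illustrative; expected total weight is preserved.
import random

W_HIGH, W_LOW, SURVIVE_W = 2.0, 0.25, 1.0

def adjust(particles):
    """Split heavy particles; roulette light ones. Unbiased in expectation."""
    out = []
    for w in particles:
        if w > W_HIGH:                       # split into n copies of weight w/n
            n = int(w // W_HIGH) + 1
            out.extend([w / n] * n)
        elif w < W_LOW:                      # Russian roulette
            if random.random() < w / SURVIVE_W:
                out.append(SURVIVE_W)        # survivor carries weight 1
            # else: particle is killed, weight 0
        else:
            out.append(w)
    return out

random.seed(0)
bank = [5.0, 0.1, 1.0, 3.0, 0.05]
new_bank = adjust(bank)
```

Splitting spends more work in important regions at reduced weight per particle, while roulette avoids tracking many low-weight particles; together they trade particle count for variance without biasing the tally.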
Model reduction using a posteriori analysis
Whiteley, Jonathan P.
2010-05-01
Mathematical models in biology and physiology are often represented by large systems of non-linear ordinary differential equations. In many cases, an observed behaviour may be written as a linear functional of the solution of this system of equations. A technique is presented in this study for automatically identifying key terms in the system of equations that are responsible for a given linear functional of the solution. This technique is underpinned by ideas drawn from a posteriori error analysis. This concept has been used in finite element analysis to identify regions of the computational domain and components of the solution where a fine computational mesh should be used to ensure accuracy of the numerical solution. We use this concept to identify regions of the computational domain and components of the solution where accurate representation of the mathematical model is required for accuracy of the functional of interest. The technique presented is demonstrated by application to a model problem, and then to automatically deduce known results from a cell-level cardiac electrophysiology model. © 2010 Elsevier Inc.
Model Reduction of Fuzzy Logic Systems
Directory of Open Access Journals (Sweden)
Zhandong Yu
2014-01-01
This paper deals with the problem of ℒ2-ℒ∞ model reduction for continuous-time nonlinear uncertain systems. An approach to constructing a reduced-order model is presented for high-order nonlinear uncertain systems described by T-S fuzzy systems; the reduced model not only approximates the original high-order system well, with an ℒ2-ℒ∞ error performance level γ, but also translates it into a linear lower-dimensional system. The model approximation is then converted into a convex optimization problem by using a linearization procedure. Finally, a numerical example is presented to show the effectiveness of the proposed method.
Setup time reduction in PVC boots production through SMED technique
Directory of Open Access Journals (Sweden)
Amanda Herculano da Costa
2012-03-01
The competition imposed by the market requires organizations to continuously improve their processes, products and services, at lower production costs. This article addresses this issue by describing the improvements resulting from the implementation of Single Minute Exchange of Die (SMED) in the process of exchanging the mold of the PVC injection machine during boot manufacturing. The case study was conducted in a large footwear company located in the state of Paraiba. In order to find the best alternative to the problem of mold setup, SMED and a problem-resolution methodology were used, and the solution that generated the greatest productivity for the company was then implemented. Among the improvements made, we emphasize the reduction of idle time from 11.56 minutes to 5 minutes, achieved by reducing the time needed for mold adjustments through the implementation of centering guides and shims to standardize the heights of the molds.
Wagenaar, Dirk; van der Graaf, Emiel R.; van der Schaaf, Arjen; Greuter, Marcel J. W.
2015-01-01
Objectives Typical streak artifacts known as metal artifacts occur in the presence of strongly attenuating materials in computed tomography (CT). Recently, vendors have started offering metal artifact reduction (MAR) techniques. In addition, a MAR technique called the metal deletion technique (MDT)
First principle leakage current reduction technique for CMOS devices
CSIR Research Space (South Africa)
Tsague, HD
2015-12-01
...that the device is suitable for low power applications. Physical models used for simulation included Si3N4 and HfO2 as gate dielectrics with TiSix as the metal gate. From the simulation results, it was shown that HfO2 was the best dielectric material when...
Order reduction of large-scale linear oscillatory system models
Energy Technology Data Exchange (ETDEWEB)
Trudnowski, D.J. (Pacific Northwest Lab., Richland, WA (United States))
1994-02-01
Eigen analysis and signal analysis techniques of deriving representations of power system oscillatory dynamics result in very high-order linear models. In order to apply many modern control design methods, the models must be reduced to a more manageable order while preserving essential characteristics. Presented in this paper is a model reduction method well suited for large-scale power systems. The method searches for the optimal subset of the high-order model that best represents the system. An Akaike information criterion is used to define the optimal reduced model. The method is first presented, and then examples of applying it to Prony analysis and eigenanalysis models of power systems are given.
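The idea of ranking candidate reduced models with an Akaike information criterion can be sketched with autoregressive fits to a synthetic AR(2) signal. This is illustrative only: the paper's models come from Prony/eigenanalysis of power systems, and the simple `N ln(sigma^2) + 2p` form below is one common AIC variant.

```python
# Hedged sketch: choose AR model order by minimizing an AIC over candidates.
import numpy as np

rng = np.random.default_rng(2)
N = 2000
y = np.zeros(N)
e = rng.standard_normal(N)
for t in range(2, N):                        # synthetic "system": AR(2)
    y[t] = 1.5 * y[t - 1] - 0.7 * y[t - 2] + e[t]

def ar_aic(y, p):
    """Least-squares AR(p) fit; AIC = n*ln(sigma^2) + 2p."""
    X = np.column_stack([y[p - j:len(y) - j] for j in range(1, p + 1)])
    target = y[p:]
    coef, *_ = np.linalg.lstsq(X, target, rcond=None)
    sigma2 = np.mean((target - X @ coef) ** 2)
    return len(target) * np.log(sigma2) + 2 * p

orders = list(range(1, 9))
aics = [ar_aic(y, p) for p in orders]
best = orders[int(np.argmin(aics))]          # AIC-optimal reduced order
```

The penalty term `2p` is what stops the criterion from always preferring the highest-order (best-fitting) model.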
Multiscale model reduction for shale gas transport in fractured media
Akkutlu, I Y; Vasilyeva, Maria
2015-01-01
In this paper, we develop a multiscale model reduction technique that describes shale gas transport in fractured media. Due to the pore-scale heterogeneities and processes, we use upscaled models to describe the matrix. We follow our previous work \cite{aes14}, where we derived an upscaled model in the form of a generalized nonlinear diffusion model to describe the effects of kerogen. To model the interaction between the matrix and the fractures, we use the Generalized Multiscale Finite Element Method (GMsFEM). In this approach, the matrix-fracture interaction is modeled via local multiscale basis functions. We previously developed the GMsFEM and applied it to linear flows with horizontal or vertical fracture orientations on a Cartesian fine grid. In this paper, we consider arbitrary fracture orientations, use a triangular fine grid, and develop the GMsFEM for nonlinear flows. Moreover, we develop online basis function strategies to adaptively improve the convergence. The number of multiscale basis functions in each coarse region ...
Performability Modelling Tools, Evaluation Techniques and Applications
Haverkort, Boudewijn R.H.M.
1990-01-01
This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new
Energy Technology Data Exchange (ETDEWEB)
1980-04-01
A review is made of the state of the art of volume reduction techniques for low level liquid and solid radioactive wastes produced as a result of: (1) operation of commercial nuclear power plants, (2) storage of spent fuel in away-from-reactor facilities, and (3) decontamination/decommissioning of commercial nuclear power plants. The types of wastes and their chemical, physical, and radiological characteristics are identified. Methods used by industry for processing radioactive wastes are reviewed and compared to the new techniques for processing and reducing the volume of radioactive wastes. A detailed system description and report on operating experiences follow for each of the new volume reduction techniques. In addition, descriptions of volume reduction methods presently under development are provided. The Appendix records data collected during site surveys of vendor facilities and operating power plants. A Bibliography is provided for each of the various volume reduction techniques discussed in the report.
Selected Logistics Models and Techniques.
1984-09-01
ACCESS PROCEDURE: On-Line System (OLS), UNINET. RCA maintains proprietary control of this model, and the model is available only through a lease arrangement. • SPONSOR: ASD/ACCC
Relativistic formulations with Blankenbecler-Sugar reduction technique for the three-particle system
Morioka, S.; Afnan, I. R.
1981-02-01
We present a critical comparison for two types of three-dimensional covariant equations for the three-particle system obtained by the Blankenbecler-Sugar reduction technique with the Wightman-Gårding momenta and the usual Jacobi variables. We also discuss the relations between the relativistic and nonrelativistic equations in the low-energy limit. NUCLEAR REACTIONS Relativistic Faddeev equations, Blankenbecler-Sugar reduction technique, nonrelativistic limit.
The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification
Energy Technology Data Exchange (ETDEWEB)
Jason L. Wright; Milos Manic
2010-05-01
This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classifier is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classification accuracy as the number of dimensions is increased.
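A principal-component-analysis reduction, one of the three techniques compared, can be sketched via the SVD of centered data. The features below are synthetic (not the object-code features of the paper), constructed to live near a 3-dimensional subspace.

```python
# Hedged sketch: PCA by SVD; keep enough components for 95% of the variance.
import numpy as np

rng = np.random.default_rng(3)
# 300 samples in 10-D that really live near a 3-D subspace plus small noise
latent = rng.standard_normal((300, 3))
mix = rng.standard_normal((3, 10))
X = latent @ mix + 0.01 * rng.standard_normal((300, 10))

Xc = X - X.mean(axis=0)                      # center the features
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var = s**2 / np.sum(s**2)                    # variance fraction per component
k = int(np.searchsorted(np.cumsum(var), 0.95)) + 1
Z = Xc @ Vt[:k].T                            # reduced representation
```

The classifier is then trained on `Z` instead of `X`; sweeping `k` reproduces the accuracy-versus-dimension analysis the paper performs.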
Modeling Interconnect Variability Using Efficient Parametric Model Order Reduction
Li, Peng; Li, Xin; Pileggi, Lawrence T; Nassif, Sani R
2011-01-01
Assessing IC manufacturing process fluctuations and their impacts on IC interconnect performance has become unavoidable for modern DSM designs. However, the construction of parametric interconnect models is often hampered by the rapid increase in computational cost and model complexity. In this paper we present an efficient yet accurate parametric model order reduction algorithm for addressing the variability of IC interconnect performance. The efficiency of the approach lies in a novel combination of low-rank matrix approximation and multi-parameter moment matching. The complexity of the proposed parametric model order reduction is as low as that of a standard Krylov subspace method when applied to a nominal system. Under the projection-based framework, our algorithm also preserves the passivity of the resulting parametric models.
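A plain one-sided Krylov moment-matching reduction, the non-parametric core of such algorithms, can be sketched with an Arnoldi projection about s0 = 0. The stable symmetric tridiagonal system below is only a stand-in for an interconnect (RC-ladder-like) model; the paper's multi-parameter moment matching is not reproduced.

```python
# Hedged sketch: one-sided Krylov reduction; the projected model matches
# moments of G(s) = C (sI - A)^{-1} B about s0 = 0. Illustrative system.
import numpy as np

n, q = 50, 6
A = -(2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))   # stable, symmetric
B = np.zeros((n, 1)); B[0, 0] = 1.0                       # drive first node
C = np.zeros((1, n)); C[0, -1] = 1.0                      # observe last node

def arnoldi(M, b, q):
    """Orthonormal basis of span{b, Mb, ..., M^{q-1} b} (modified Gram-Schmidt)."""
    V = np.zeros((len(b), q))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(1, q):
        w = M @ V[:, j - 1]
        for i in range(j):
            w = w - (V[:, i] @ w) * V[:, i]
        V[:, j] = w / np.linalg.norm(w)
    return V

Ainv = np.linalg.inv(A)                       # in practice: a sparse LU solve
V = arnoldi(Ainv, (Ainv @ B).ravel(), q)      # Krylov space about s0 = 0
Ar, Br, Cr = V.T @ A @ V, V.T @ B, C @ V      # order-q projected model

def dc_gain(A, B, C):
    return (C @ np.linalg.solve(-A, B)).item()
```

Because A here is symmetric negative definite, the orthogonal projection also keeps the reduced model stable, echoing the passivity-preservation concern raised in the abstract.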
Model Reduction via Time-Interval Balanced Stochastic Truncation for Linear Time Invariant Systems
DEFF Research Database (Denmark)
Tahavori, Maryamsadat; Shaker, Hamid Reza
2013-01-01
In this article, a new method for model reduction of linear dynamical systems is presented. The proposed technique is from the family of gramian-based relative error model reduction methods. The method uses time-interval gramians in the reduction procedure rather than ordinary gramians and in suc...... player example. The numerical results show that the method is more accurate than ordinary balanced stochastic truncation....
RF Self-Interference reduction techniques for compact full duplex radios
Deballie, B.; van den Broek, Dirk-Jan; Lavin, C.; van Liempd, B.; Klumperink, Eric A.M.; Palacios, C.; Craninckx, J.; Nauta, Bram
2015-01-01
This paper describes three RF self-interference reduction techniques for full-duplex wireless links, which specifically target integration in compact radios. Concretely, a self-interference cancelling front-end, a dual-polarized antenna, and an electrical balance duplexer are proposed. Each techniqu
Sub-Threshold Leakage Current Reduction Techniques In VLSI Circuits -A Survey
Directory of Open Access Journals (Sweden)
V.Sri Sai Harsha
2015-09-01
There is an increasing demand for portable, battery-powered devices; this has led semiconductor manufacturers to scale down the feature size, which results in a reduction in threshold voltage and enables complex functionality on a single chip. Scaling down the feature size has little effect on dynamic power dissipation, but static power dissipation has become equal to or even greater than dynamic power dissipation. So in recent CMOS technologies static power dissipation, i.e. power dissipation due to leakage current, has become a challenging area for VLSI chip designers. In order to prolong battery life and maintain circuit reliability, leakage current reduction is the primary goal. A basic overview of techniques used for the reduction of sub-threshold leakage is given in this paper. Based on the surveyed techniques, one can choose the required and apt leakage reduction technique.
Recycling BiCG for Model Reduction
Ahuja, Kapil; Chang, Eun R; Gugercin, Serkan
2010-01-01
Science and engineering problems frequently require solving a sequence of dual linear systems. Two examples are the Iterative Rational Krylov Algorithm (IRKA) for model reduction and Quantum Monte Carlo (QMC) methods in electronic structure calculations. This paper introduces Recycling BiCG, a BiCG method that recycles two Krylov subspaces from one pair of linear systems to the next pair. We develop an augmented bi-Lanczos algorithm and a modified two-term recurrence to include recycling in the iteration. The recycle spaces are approximate left and right invariant subspaces corresponding to the eigenvalues close to the origin. These recycle spaces are found by solving a small generalized eigenvalue problem alongside the dual linear systems being solved in the sequence. We test our algorithm in two application areas. First, we solve a discretized partial differential equation of convection-diffusion type, because these are well-known model problems. Second, we use Recycling BiCG for the linear systems arising ...
A new hybrid jpeg image compression scheme using symbol reduction technique
Kumar, Bheshaj; Sinha, G R
2012-01-01
Lossy JPEG compression is a widely used compression technique. The standard JPEG technique uses three processes: mapping, which reduces interpixel redundancy; quantization, which is a lossy process; and entropy encoding, which is considered a lossless process. In this paper, a new technique is proposed that combines the JPEG algorithm with a symbol-reduction Huffman technique to achieve a higher compression ratio. The symbol reduction technique reduces the number of symbols to be encoded by combining symbols to form new composite symbols; as a result, the number of Huffman codes to be generated is also reduced. It is simple, fast, and easy to implement. The results show that the performance of the standard JPEG method can be improved by the proposed method: this hybrid approach achieves about 20% more compression than standard JPEG.
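The symbol-combination idea can be sketched in a few lines of Python. This is an illustrative toy, not the paper's exact scheme: it builds Huffman code lengths with a heap, then combines adjacent symbols into composite pairs so that only half as many coded units need to be emitted (the composite alphabet may grow; controlling that growth is what the paper's method addresses).

```python
import heapq
from collections import Counter
from itertools import count

def huffman_code_lengths(freqs):
    """Return {symbol: code length} for a Huffman code over `freqs`."""
    tie = count()  # tie-breaker so the heap never compares dicts
    heap = [(f, next(tie), {s: 0}) for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (fa + fb, next(tie), merged))
    return heap[0][2]

data = "AABABBACADACABAB"
plain = huffman_code_lengths(Counter(data))      # one code per symbol
pairs = [data[i:i + 2] for i in range(0, len(data), 2)]
reduced = huffman_code_lengths(Counter(pairs))   # one code per symbol pair
print(len(data), len(pairs))  # → 16 8: half as many coded units
```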
Zoubair, M.; El Bardouni, T.; El Gonnouni, L.; Boulaich, Y.; El Bakkari, B.; El Younoussi, C.
2012-01-01
Computation time is an important and problematic parameter in Monte Carlo simulations, as it is inversely proportional to the statistical errors; hence the idea of using variance reduction techniques. These techniques play an important role in reducing uncertainties and improving the statistical results. Several variance reduction techniques have been developed. The best known are transport cutoffs, interaction forcing, bremsstrahlung splitting and Russian roulette. The use of a phase space also appears appropriate for reducing the computing time enormously. In this work, we applied these techniques to a linear accelerator (LINAC) using the MCNPX Monte Carlo code, which offers a rich palette of variance reduction techniques. In this study we investigated the various cards related to variance reduction provided by MCNPX. The parameters found in this study are warranted to be used efficiently in the MCNPX code. Final calculations are performed in two steps linked by a phase space. Results show that, compared to direct simulations (with neither variance reduction nor a phase space), the adopted method improves the simulation efficiency by a factor greater than 700.
A reliable ground bounce noise reduction technique for nanoscale CMOS circuits
Sharma, Vijay Kumar; Pattanaik, Manisha
2015-11-01
Power gating is the most effective method to reduce standby leakage power by adding header/footer high-VTH sleep transistors between the actual and virtual power/ground rails. When a power gating circuit transitions from sleep mode to active mode, a large instantaneous charge current flows through the sleep transistors. Ground bounce noise (GBN) is the large voltage fluctuation on the real ground rail during sleep-to-active mode transitions of power gating circuits. GBN disturbs the logic states of internal nodes of circuits. A novel and reliable power gating structure is proposed in this article to reduce the problem of GBN. The proposed structure contains low-VTH transistors in place of the high-VTH footer. The proposed power gating structure not only reduces the GBN but also improves other performance metrics. A large mitigation of leakage power in both modes eliminates the need for high-VTH transistors. A comprehensive and comparative evaluation of the proposed technique is presented in this article for a chain of 5 CMOS inverters. The simulation results are compared to other well-known GBN reduction circuit techniques using the 22 nm predictive technology model (PTM) bulk CMOS models in the HSPICE tool. Robustness against process, voltage and temperature (PVT) variations is estimated through Monte Carlo simulations.
Jiménez-Delgado, Juan J; Paulano-Godino, Félix; PulidoRam-Ramírez, Rubén; Jiménez-Pérez, J Roberto
2016-05-01
The development of support systems for surgery significantly increases the likelihood of obtaining satisfactory results. In the case of fracture reduction interventions, these systems enable surgery planning, training, monitoring and assessment. They allow improvement of fracture stabilization, minimization of health risks and a reduction of surgery time. Planning a bone fracture reduction by means of a computer-assisted simulation involves several semiautomatic or automatic steps. The simulation deals with the correct positioning of osseous fragments and fixation devices for a fracture reduction. Currently, to the best of our knowledge, there are no computer-assisted methods for planning an entire fracture reduction process. This paper presents an overall scheme of the computer-based process for planning a bone fracture reduction, as described above, and details its main steps, the most commonly proposed techniques and their main shortcomings. In addition, challenges and new trends in this research field are depicted and analyzed.
Model-order reduction of biochemical reaction networks
Rao, Shodhan; Schaft, Arjan van der; Eunen, Karen van; Bakker, Barbara M.; Jayawardhana, Bayu
2013-01-01
In this paper we propose a model-order reduction method for chemical reaction networks governed by general enzyme kinetics, including the mass-action and Michaelis-Menten kinetics. The model-order reduction method is based on the Kron reduction of the weighted Laplacian matrix which describes the gr
Modeling Techniques: Theory and Practice
Directory of Open Access Journals (Sweden)
Odd A. Asbjørnsen
1985-07-01
Full Text Available A survey is given of some crucial concepts in chemical process modeling. Those are the concepts of physical unit invariance, of reaction invariance and stoichiometry, the chromatographic effect in heterogeneous systems, the conservation and balance principles, and the fundamental structures of cause and effect relationships. As an example, it is shown how the concept of reaction invariance may simplify homogeneous reactor modeling to a large extent by an orthogonal decomposition of the process variables. This allows residence time distribution function parameters to be estimated with the reaction in situ, but without any correlation between the estimated residence time distribution parameters and the estimated reaction kinetic parameters. A general word of warning is given regarding the choice of a wrong mathematical structure for models.
Thaunat, M; Laude, F; Paillard, P; Saillant, G; Catonné, Y
2008-02-01
Little information is provided in the literature describing an efficient reduction technique for pelvic ring disruption. The aim of this study is to assess the use of transcondylar traction as a closed reduction technique for vertically unstable fracture-dislocations of the sacro-iliac joint. Twenty-four pelvic ring disruptions were treated with attempted closed reduction followed by percutaneous screw fixation. Transcondylar traction was used as the closed reduction technique. Closed reduction to within 1 cm of residual displacement was obtained in all cases. No infectious, digestive, cutaneous, or vascular complications occurred. We observed secondary displacement in three patients. Correction of the vertical displacement is better achieved when performed within 8 days after the trauma. Two posterior screws and complementary anterior fixation are typically required to avoid further displacement in cases of sacral fracture. However, an open approach should be preferred both in cases of crescent iliac fracture-sacroiliac dislocation and in transforaminal fractures associated with a peripheral neurological deficit. A vertical sacral fracture should make the surgeon more wary of fixation failure and loss of reduction.
Improvement of Continuous Hydrologic Models and HMS SMA Parameters Reduction
Rezaeian Zadeh, Mehdi; Zia Hosseinipour, E.; Abghari, Hirad; Nikian, Ashkan; Shaeri Karimi, Sara; Moradzadeh Azar, Foad
2010-05-01
Hydrological models can help us predict stream flows and the associated runoff volumes of rainfall events within a watershed. There are many different reasons why we need to model the rainfall-runoff processes of a watershed. The main reason, however, is the limitation of hydrological measurement techniques and the costs of data collection at a fine scale. Generally, we are not able to measure everything we would like to know about a given hydrological system. This is particularly the case for ungauged catchments. Since the ultimate aim of prediction using models is to improve decision-making about a hydrological problem, having a robust and efficient modeling tool becomes an important factor. Among several hydrologic modeling approaches, continuous simulation gives the best predictions because it can model dry and wet conditions over a long-term period. Continuous hydrologic models, unlike event-based models, account for a watershed's soil moisture balance over a long-term period and are suitable for simulating daily, monthly, and seasonal streamflows. In this paper, we describe a soil moisture accounting (SMA) algorithm added to the hydrologic modeling system (HEC-HMS) computer program. As is well known in the hydrologic modeling community, one of the ways to improve a model's utility is to reduce its input parameters. The enhanced model developed in this study is applied to the Khosrow Shirin Watershed, located in the north-west part of Fars Province in Iran, a data-limited watershed. The HMS SMA algorithm divides the potential path of rainfall onto a watershed into five zones. The results showed that the output of the HMS SMA is insensitive to the variation of many parameters, such as soil storage and soil percolation rate. The study's objective is to remove insensitive parameters from the model input using multi-objective sensitivity analysis. Keywords: Continuous Hydrologic Modeling, HMS SMA, Multi-objective sensitivity analysis, SMA Parameters
Modelling nutrient reduction targets - model structure complexity vs. data availability
Capell, Rene; Lausten Hansen, Anne; Donnelly, Chantal; Refsgaard, Jens Christian; Arheimer, Berit
2015-04-01
In most parts of Europe, macronutrient concentrations and loads in surface water are currently affected by human land use and land management choices. Moreover, current macronutrient concentration and load levels often violate European Water Framework Directive (WFD) targets, and effective measures to reduce these levels are sought after by water managers. Identifying such effective measures in specific target catchments should consider the four key processes of release, transport, retention, and removal, and thus physical catchment characteristics such as soils and geomorphology, but also management data such as crop distribution and fertilizer application regimes. The BONUS-funded research project Soils2Sea evaluates new, differentiated regulation strategies to cost-efficiently reduce nutrient loads to the Baltic Sea based on new knowledge of nutrient transport and retention processes between soils and the coast. Within the Soils2Sea framework, we here examine the capability of two integrated hydrological and nutrient transfer models, HYPE and Mike SHE, to model runoff and nitrate flux responses in the 100 km2 Norsminde catchment, Denmark, comparing different model structures and data bases. We focus on comparing modelled nitrate reductions within and below the root zone, and evaluate model performance as a function of the available model structures (process representation within the model) and available data bases (temporal forcing data and spatial information). This model evaluation is performed to aid the development of model tools which will be used to estimate the effect of new nutrient reduction measures on the catchment to regional scale, where available data - both climate forcing and land management - typically become increasingly limited with the targeted spatial scale and may act as a bottleneck for process conceptualizations, and thus for the value of a model as a tool to provide decision support for differentiated regulation strategies.
Model checking timed automata : techniques and applications
Hendriks, Martijn.
2006-01-01
Model checking is a technique to automatically analyse systems that have been modeled in a formal language. The timed automaton framework is such a formal language. It is suitable to model many realistic problems in which time plays a central role. Examples are distributed algorithms, protocols, emb
Advanced structural equation modeling issues and techniques
Marcoulides, George A
2013-01-01
By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines, assumes an understanding of graduate level multivariate statistics, including an introduction to SEM.
Numerical filtering techniques for the reduction of noise in digital telemetry data
Helfrich-Stone, Thomas M.
Telemetry data noise is due to the marginal or complete loss of telemetry carrier signal, leading to errors in the PCM data received. Attention is presently given to several postflight numerical filtering techniques for the reduction and/or removal of noise in digital telemetry data, prior to use in automated computer data analysis. The techniques encompass manual filtering, upper/lower bound filtering, mean-plus/minus standard deviation filtering, rate-of-change filtering, multiple-measurement filtering, and multiple filters.
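One of the filters listed above, the mean-plus/minus-standard-deviation filter, can be sketched as follows (an illustrative single-pass version with hypothetical parameter names, not the paper's implementation):

```python
import statistics

def std_dev_filter(samples, k=3.0):
    """Mean-plus/minus-standard-deviation filter: samples outside
    mean +/- k*sigma are replaced by the average of their neighbours."""
    data = list(samples)
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    lo, hi = mu - k * sigma, mu + k * sigma
    for i, x in enumerate(data):
        if not lo <= x <= hi:
            left = data[i - 1] if i > 0 else data[i + 1]
            right = data[i + 1] if i < len(data) - 1 else data[i - 1]
            data[i] = (left + right) / 2
    return data

noisy = [1.0, 1.1, 0.9, 1.0, 50.0, 1.1, 1.0, 0.9]  # carrier-dropout spike
clean = std_dev_filter(noisy, k=2.0)
print(clean[4])  # spike replaced by the neighbour average
```

In practice such filters are applied iteratively, since a single large spike inflates the standard deviation and can mask smaller outliers in the first pass.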
Using Visualization Techniques in Multilayer Traffic Modeling
Bragg, Arnold
We describe visualization techniques for multilayer traffic modeling - i.e., traffic models that span several protocol layers, and traffic models of protocols that cross layers. Multilayer traffic modeling is challenging, as one must deal with disparate traffic sources; control loops; the effects of network elements such as IP routers; cross-layer protocols; asymmetries in bandwidth, session lengths, and application behaviors; and an enormous number of complex interactions among the various factors. We illustrate by using visualization techniques to identify relationships, transformations, and scaling; to smooth simulation and measurement data; to examine boundary cases, subtle effects and interactions, and outliers; to fit models; and to compare models with others that have fewer parameters. Our experience suggests that visualization techniques can provide practitioners with extraordinary insight about complex multilayer traffic effects and interactions that are common in emerging next-generation networks.
Spur Reduction Techniques for Phase-Locked Loops Exploiting A Sub-Sampling Phase Detector
Gao, X.; Klumperink, Eric A.M.; Socci, Gerard; Bohsali, Mounhir; Nauta, Bram
2010-01-01
This paper presents phase-locked loop (PLL) reference-spur reduction design techniques exploiting a sub-sampling phase detector (SSPD) (which is also referred to as a sampling phase detector). The VCO is sampled by the reference clock without using a frequency divider and an amplitude controlled
Relativistic formulations with Blankenbecler-Sugar reduction technique for the three-particle system
Energy Technology Data Exchange (ETDEWEB)
Morioka, S.; Afnan, I.R.
1981-02-01
We present a critical comparison for two types of three-dimensional covariant equations for the three-particle system obtained by the Blankenbecler-Sugar reduction technique with the Wightman-Garding momenta and the usual Jacobi variables. We also discuss the relations between the relativistic and nonrelativistic equations in the low-energy limit.
Inquiry-Based Stress Reduction Meditation Technique for Teacher Burnout: A Qualitative Study
Schnaider-Levi, Lia; Mitnik, Inbal; Zafrani, Keren; Goldman, Zehavit; Lev-Ari, Shahar
2017-01-01
An inquiry-based intervention has been found to have a positive effect on burnout and mental well-being parameters among teachers. The aim of the current study was to qualitatively evaluate the effect of the inquiry-based stress reduction (IBSR) meditation technique on the participants. Semi-structured interviews were conducted before and after…
TEM Cell Testing of Cable Noise Reduction Techniques from 2 MHz to 200 MHz -- Part 2
Bradley, Arthur T.; Evans, William C.; Reed, Joshua L.; Shimp, Samuel K., III; Fitzpatrick, Fred D.
2008-01-01
This paper presents empirical results of cable noise reduction techniques as demonstrated in a TEM cell operating with radiated fields from 2 - 200 MHz. It is the second part of a two-paper series. The first paper discussed cable types and shield connections. In this second paper, the effects of load and source resistances and chassis connections are examined. For each topic, well established theories are compared to data from a real-world physical system. Finally, recommendations for minimizing cable susceptibility (and thus cable emissions) are presented. There are numerous papers and textbooks that present theoretical analyses of cable noise reduction techniques. However, empirical data is often targeted to low frequencies (e.g. 100 MHz). Additionally, a comprehensive study showing the relative effects of various noise reduction techniques is needed. These include the use of dedicated return wires, twisted wiring, cable shielding, shield connections, changing load or source impedances, and implementing load- or source-to-chassis isolation. We have created an experimental setup that emulates a real-world electrical system, while still allowing us to independently vary a host of parameters. The goal of the experiment was to determine the relative effectiveness of various noise reduction techniques when the cable is in the presence of radiated emissions from 2 MHz to 200 MHz.
Identification of dose-reduction techniques for BWR and PWR repetitive high-dose jobs
Energy Technology Data Exchange (ETDEWEB)
Dionne, B.J.; Baum, J.W.
1984-01-01
As a result of concern about the apparent increase in collective radiation dose to workers at nuclear power plants, this project will provide information to industry in preplanning for radiation protection during maintenance operations. This study identifies Boiling Water Reactor (BWR) and Pressurized Water Reactor (PWR) repetitive jobs, and respective collective dose trends and dose reduction techniques. 3 references, 2 tables. (ACR)
Error reduction technique using covariant approximation and application to nucleon form factor
Blum, Thomas; Shintani, Eigo
2012-01-01
We demonstrate a new class of variance reduction techniques for the hadron propagator and the nucleon isovector form factor on realistic lattices with $N_f=2+1$ domain-wall fermions. All-mode averaging (AMA) is a powerful tool for effectively reducing the statistical noise of a wider variety of observables than existing techniques such as low-mode averaging (LMA). We apply this technique to hadron two-point and three-point functions, and compare it with LMA and the traditional source-shift method on the same ensembles. We observe that AMA is much more cost-effective in reducing the statistical error for these observables.
Evaluation of clipping based iterative PAPR reduction techniques for FBMC systems.
Kollár, Zsolt; Varga, Lajos; Horváth, Bálint; Bakki, Péter; Bitó, János
2014-01-01
This paper investigates filter bank multicarrier (FBMC), a multicarrier modulation technique exhibiting an extremely low adjacent channel leakage ratio (ACLR) compared to the conventional orthogonal frequency division multiplexing (OFDM) technique. The low ACLR of the transmitted FBMC signal makes it especially favorable in cognitive radio applications, where strict requirements are posed on out-of-band radiation. A large dynamic range, resulting in a high peak-to-average power ratio (PAPR), is characteristic of all sorts of multicarrier signals. The advantageous spectral properties of the high-PAPR FBMC signal are significantly degraded if nonlinearities are present in the transceiver chain: spectral regrowth may appear, causing harmful interference in the neighboring frequency bands. This paper presents novel clipping-based PAPR reduction techniques, evaluated and compared by simulations and measurements, with an emphasis on spectral aspects. The paper gives an overall comparison of PAPR reduction techniques, focusing on reducing the dynamic range of FBMC signals without increasing out-of-band radiation. An overview is presented of transmitter-oriented techniques employing baseband clipping, which can maintain the system performance at a desired bit error rate (BER).
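The baseband clipping operation that these techniques build on can be sketched in a few lines of NumPy. This is only the basic amplitude limiter, not the paper's iterative FBMC-specific schemes, which add filtering on top of it precisely because plain clipping causes the spectral regrowth the abstract warns about:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
X = np.exp(2j * np.pi * rng.random(N))  # unit-modulus subcarrier symbols
x = np.fft.ifft(X)                      # multicarrier time-domain signal

def papr_db(sig):
    """Peak-to-average power ratio in dB."""
    p = np.abs(sig) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip(sig, cr_db):
    """Limit the envelope to cr_db above the RMS level, preserving phase."""
    a_max = np.sqrt(np.mean(np.abs(sig) ** 2)) * 10 ** (cr_db / 20)
    scale = np.minimum(1.0, a_max / np.maximum(np.abs(sig), 1e-30))
    return sig * scale

x_clipped = clip(x, cr_db=3.0)
print(f"PAPR before {papr_db(x):.1f} dB, after clipping {papr_db(x_clipped):.1f} dB")
```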
Reduction of Protein Networks Models by Passivity Preserving Projection
Institute of Scientific and Technical Information of China (English)
Luca Mesin; Flavio Canavero; Lamberto Rondoni
2013-01-01
Reduction of complex protein network models is of great importance. The accuracy of a passivity-preserving algorithm (PRIMA) for model order reduction (MOR) is tested here on protein networks, introducing innovative variations of the standard PRIMA method to fit the problem at hand. The reduction method does not require solving the complete system, making it a promising tool for studying very large-scale models for which the full solution cannot be computed. The mathematical structure of the considered kinetic equations is preserved. Keeping the reduction factor constant, the approximation error is lower for larger systems.
Application of chaotic noise reduction techniques to chaotic data trained by ANN
Indian Academy of Sciences (India)
C Chandra Shekara Bhat; M R Kaimal; T R Ramamohan
2001-10-01
We propose a novel method of combining artificial neural networks (ANNs) with chaotic noise reduction techniques that captures the metric and dynamic invariants of a chaotic time series, e.g. a time series obtained by iterating the logistic map in chaotic regimes. Our results indicate that while the feedforward neural network is capable of capturing the dynamical and metric invariants of chaotic time series within an error of about 25%, ANNs along with chaotic noise reduction techniques, such as Hammel's method or the local projective method, can significantly improve these results. This further suggests that the effort on the ANN to train data corresponding to complex structures can be significantly reduced. This technique can be applied in areas like signal processing, data communication, image processing etc.
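The time series used in such experiments is easy to generate, and the dynamic invariant in question can be checked directly. A small standalone sketch (standard textbook material, not the authors' code): for the logistic map at r = 4, the Lyapunov exponent is known analytically to be ln 2, which a time average along the orbit reproduces.

```python
import math

def logistic_series(r, x0, n, discard=100):
    """Iterate x -> r*x*(1-x), discarding initial transients."""
    x = x0
    for _ in range(discard):
        x = r * x * (1 - x)
    out = []
    for _ in range(n):
        out.append(x)
        x = r * x * (1 - x)
    return out

# Fully chaotic regime r = 4: estimate the Lyapunov exponent as the
# orbit average of log|f'(x)| with f'(x) = r*(1 - 2x).
series = logistic_series(4.0, 0.123, 20_000)
lyap = sum(math.log(abs(4.0 * (1 - 2 * x))) for x in series) / len(series)
print(round(lyap, 3))  # close to ln 2 ~ 0.693
```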
Variance reduction technique in a beta radiation beam using an extrapolation chamber.
Polo, Ivón Oramas; Souza Santos, William; de Lara Antonio, Patrícia; Caldas, Linda V E
2017-10-01
This paper aims to show how the variance reduction technique "Geometry splitting/Russian roulette" improves the statistical error and reduces uncertainties in the determination of the absorbed dose rate in tissue using an extrapolation chamber for beta radiation. The results show that the use of this technique can increase the number of events in the chamber cavity, yielding simulation results that approximate the physical problem more closely. There was good agreement among the experimental measurements, the manufacturer's certificate, and the simulated absorbed dose rate values and uncertainties. The variation coefficient of the absorbed dose rate using the variance reduction technique "Geometry splitting/Russian roulette" was 2.85%.
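The weight bookkeeping behind "Geometry splitting/Russian roulette" is simple to sketch (an illustrative toy with hypothetical threshold values, not the implementation inside a Monte Carlo transport code): splitting divides a particle's weight among copies, and roulette kills low-weight particles while boosting survivors so the expected weight is unchanged.

```python
import random

def russian_roulette(weight, w_threshold=0.1, survive_p=0.5):
    """Below the weight threshold, the particle survives with probability
    survive_p and its weight is divided by survive_p, so the expected
    weight is preserved (the game is unbiased)."""
    if weight >= w_threshold:
        return weight
    return weight / survive_p if random.random() < survive_p else 0.0

def split(weight, n):
    """Geometry splitting: replace one particle by n copies of weight/n."""
    return [weight / n] * n

# Unbiasedness check: the average surviving weight equals the input weight.
random.seed(42)
w_in = 0.05
trials = 200_000
avg = sum(russian_roulette(w_in) for _ in range(trials)) / trials
print(round(avg, 3))  # close to the input weight 0.05
```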
ON THE PAPR REDUCTION IN OFDM SYSTEMS: A NOVEL ZCT PRECODING BASED SLM TECHNIQUE
Directory of Open Access Journals (Sweden)
VARUN JEOTI
2011-06-01
Full Text Available High peak-to-average power ratio (PAPR) reduction is still an important challenge in orthogonal frequency division multiplexing (OFDM) systems. In this paper, we propose a novel Zadoff-Chu matrix transform (ZCT) precoding based selected mapping (SLM) technique for PAPR reduction in OFDM systems. This technique is based on precoding the constellation symbols with the ZCT precoder after the multiplication by the phase rotation factor and before the inverse fast Fourier transform (IFFT) in SLM-based OFDM (SLM-OFDM) systems. Computer simulation results show that the proposed technique can reduce PAPR by up to 5.2 dB for N=64 (system subcarriers) and V=16 (dissimilar phase sequences) at a clip rate of 10^-3. Additionally, ZCT-based SLM-OFDM (ZCT-SLM-OFDM) systems also take advantage of frequency variations of the communication channel and can offer substantial performance gains in fading multipath channels.
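The plain SLM step underlying this technique can be sketched as follows (without the ZCT precoder, which is the paper's contribution; parameter values mirror the abstract's N=64, V=16): each candidate multiplies the subcarrier symbols by a different phase sequence, and the candidate whose time-domain waveform has the lowest PAPR is transmitted.

```python
import numpy as np

rng = np.random.default_rng(0)
N, V = 64, 16  # subcarriers and number of candidate phase sequences

def papr_db(x):
    """Peak-to-average power ratio in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Random QPSK symbols on N subcarriers.
X = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)

# SLM: rotate the symbols by V candidate phase sequences (the all-zero
# sequence reproduces the original signal) and keep the lowest-PAPR one.
phases = [np.zeros(N)] + [2 * np.pi * rng.random(N) for _ in range(V - 1)]
paprs = [papr_db(np.fft.ifft(X * np.exp(1j * ph))) for ph in phases]
print(f"original {paprs[0]:.2f} dB, best of {V} candidates {min(paprs):.2f} dB")
```

In a real system the index of the selected phase sequence must be sent as side information so the receiver can undo the rotation.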
Hashemi-Kia, M.; Toossi, M.
1990-01-01
As a result of this work, a reduction procedure has been developed which can be applied to large finite element model of airframe type structures. This procedure, which is tailored to be used with MSC/NASTRAN finite element code, is applied to the full airframe dynamic finite element model of AH-64A Attack Helicopter. The applicability of the resulting reduced model to parametric and optimization studies is examined. Through application of the design sensitivity analysis, the viability and efficiency of this reduction technique has been demonstrated in a vibration reduction study.
A Comparative Analysis of Techniques for PAPR Reduction of OFDM Signals
Directory of Open Access Journals (Sweden)
M. Janjić
2014-06-01
Full Text Available In this paper the problem of high peak-to-average power ratio (PAPR) in orthogonal frequency-division multiplexing (OFDM) signals is studied. Besides describing three techniques for PAPR reduction, selective mapping (SLM), partial transmit sequence (PTS) and interleaving, a detailed analysis of the performance of these techniques for various values of the relevant parameters (number of phase sequences, number of interleavers, number of phase factors, number of subblocks, depending on the applied technique) is carried out. Simulation of these techniques is run in Matlab software. Results are presented in the form of complementary cumulative distribution function (CCDF) curves for the PAPR of 30000 randomly generated OFDM symbols. Simulations are performed for OFDM signals with 32 and 256 subcarriers, oversampled by a factor of 4. A detailed comparison of these techniques is made based on the Matlab simulation results.
The Performance Analysis of PAPR Reduction using a novel PTS technique in OFDM-MIMO System
Directory of Open Access Journals (Sweden)
Kashish Sareen
2012-08-01
Full Text Available Orthogonal frequency division multiplexing (OFDM) is a multiplexing technique for 4G and 4.5G wireless communication. MIMO-OFDM is the latest multiplexing technique enabling high-performance 4G broadband wireless communications. One main disadvantage of MIMO-OFDM, however, is the high peak-to-average power ratio (PAPR) of the transmitter's output signal on different antennas. In this paper, we present a new partial transmit sequence (PTS) technique to reduce the PAPR problem in MIMO-OFDM systems. This new PTS technique gives better PAPR reduction gain in MIMO-OFDM compared with the original and other PTS techniques.
Efficient Symmetry Reduction and the Use of State Symmetries for Symbolic Model Checking
Directory of Open Access Journals (Sweden)
Christian Appold
2010-06-01
Full Text Available One technique to reduce the state-space explosion problem in temporal logic model checking is symmetry reduction. The combination of symmetry reduction and symbolic model checking using BDDs long suffered from the prohibitively large BDD for the orbit relation. Dynamic symmetry reduction calculates representatives of equivalence classes of states dynamically and thus avoids the construction of the orbit relation. In this paper, we present a new efficient model checking algorithm based on dynamic symmetry reduction. Our experiments show that the algorithm is very fast and allows the verification of larger systems. We additionally implemented the use of state symmetries for symbolic symmetry reduction. To our knowledge, we are the first to investigate state symmetries in combination with BDD-based symbolic model checking.
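Dynamic representative computation is easy to illustrate for fully symmetric processes, where sorting the tuple of local states yields a canonical orbit representative without ever building the orbit relation (a toy enumeration, not the BDD-based algorithm of the paper):

```python
from itertools import product

def representative(state):
    """Canonical orbit representative under full process symmetry:
    any permutation of a state maps to the same sorted tuple, so the
    orbit relation never has to be constructed explicitly."""
    return tuple(sorted(state))

local_states = ("idle", "trying", "critical")
full = set(product(local_states, repeat=3))   # all 27 global states
reduced = {representative(s) for s in full}   # one state per orbit
print(len(full), len(reduced))  # → 27 10
```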
An optimization approach to kinetic model reduction for combustion chemistry
Lebiedz, Dirk
2013-01-01
Model reduction methods are relevant when the computation time of a full convection-diffusion-reaction simulation based on detailed chemical reaction mechanisms is too large. In this article, we review a model reduction approach based on optimization of trajectories and show its applicability to realistic combustion models. Like most model reduction methods, it identifies points on a slow invariant manifold based on time-scale separation in the dynamics of the reaction system. The numerical approximation of points on the manifold is achieved by solving a semi-infinite optimization problem, where the dynamics enter the problem as constraints. The proof of existence of a solution for an arbitrarily chosen dimension of the reduced model (slow manifold) is extended to the case of realistic combustion models including thermochemistry by considering the properties of proper maps. The model reduction approach is finally applied to three models based on realistic reaction mechanisms: 1. ozone decomposition as a small t...
MODELLING OF BACTERIAL SULPHATE REDUCTION IN ANAEROBIC PONDS : KINETIC INVESTIGATIONS
Harerimana, Casimir; Vasel, Jean-Luc; Jupsin, Hugues; Ouali, Amira
2011-01-01
The aim of the study was first to develop a simple and practical model of anaerobic digestion including sulphate-reduction in anaerobic ponds. The basic microbiology of our model consists of three steps, namely, acidogenesis, methanogenesis, and sulphate reduction. This model includes multiple reaction stoichiometry and substrate utilization kinetics. The second aim was to determine some kinetic parameters associated with this model. The values of these parameters for sulfidogenic bacteria ar...
Model reduction methods for vector autoregressive processes
Brüggemann, Ralf
2004-01-01
1.1 Objective of the Study Vector autoregressive (VAR) models have become one of the dominant research tools in the analysis of macroeconomic time series during the last two decades. The great success of this modeling class started with Sims' (1980) critique of the traditional simultaneous equation models (SEM). Sims criticized the use of 'too many incredible restrictions' based on 'supposed a priori knowledge' in large scale macroeconometric models which were popular at that time. Therefore, he advocated largely unrestricted reduced form multivariate time series models, unrestricted VAR models in particular. Ever since his influential paper these models have been employed extensively to characterize the underlying dynamics in systems of time series. In particular, tools to summarize the dynamic interaction between the system variables, such as impulse response analysis or forecast error variance decompositions, have been developed over the years. The econometrics of VAR models and related quantities i...
Analysis and interpretation of dynamic FDG PET oncological studies using data reduction techniques
Directory of Open Access Journals (Sweden)
Santos Andres
2007-10-01
Background: Dynamic positron emission tomography studies produce a large amount of image data, from which clinically useful parametric information can be extracted using tracer kinetic methods. Data reduction methods can facilitate the initial interpretation and visual analysis of these large image sequences and at the same time can preserve important information and allow for basic feature characterization. Methods: We have applied principal component analysis to provide high-contrast parametric image sets of lower dimensions than the original data set, separating structures based on their kinetic characteristics. Our method has the potential to constitute an alternative quantification method, independent of any kinetic model, and is particularly useful when the retrieval of the arterial input function is complicated. In independent component analysis images, structures that have different kinetic characteristics are assigned opposite values, and are readily discriminated. Furthermore, novel similarity mapping techniques are proposed, which can summarize in a single image the temporal properties of the entire image sequence according to a reference region. Results: Using our new cubed sum coefficient similarity measure, we have shown that structures with similar time activity curves can be identified, thus facilitating the detection of lesions that are not easily discriminated using the conventional method employing standardized uptake values.
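As a hedged illustration of the data reduction step described above (principal component analysis applied along the time dimension of a dynamic image sequence), the following minimal Python sketch is not the authors' pipeline; the toy data, array shapes and component count are all illustrative:

```python
import numpy as np

def pca_parametric_images(frames, n_components=2):
    """Reduce a dynamic image sequence (T frames of H x W pixels) to a few
    parametric images via PCA over the time dimension.

    frames: array of shape (T, H, W); returns (n_components, H, W)."""
    T, H, W = frames.shape
    X = frames.reshape(T, H * W)           # each column is one pixel's time-activity curve
    X = X - X.mean(axis=0, keepdims=True)  # center each pixel curve over time
    # SVD of the T x (H*W) matrix; rows of Vt are the spatial principal components
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:n_components].reshape(n_components, H, W)

# Toy sequence: two regions with different kinetics plus noise
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 20)
frames = np.zeros((20, 8, 8))
frames[:, :4, :] = np.exp(-3 * t)[:, None, None]        # fast washout region
frames[:, 4:, :] = (1 - np.exp(-3 * t))[:, None, None]  # slow uptake region
frames += 0.01 * rng.standard_normal(frames.shape)
pcs = pca_parametric_images(frames, n_components=2)
print(pcs.shape)  # → (2, 8, 8)
```

Pixels with opposite kinetic behaviour (washout versus uptake) end up with opposite signs in the leading component, which is the discrimination effect the abstract describes for component-analysis images.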
Variance reduction techniques for 14 MeV neutron streaming problem in rectangular annular bent duct
Energy Technology Data Exchange (ETDEWEB)
Ueki, Kotaro [Ship Research Inst., Mitaka, Tokyo (Japan)
1998-03-01
Monte Carlo method is a powerful technique for solving a wide range of radiation transport problems. Its features are that it can solve the Boltzmann transport equation almost without approximation, and that the complexity of the systems to be treated rarely becomes a problem. However, a Monte Carlo calculation is always accompanied by statistical errors called variance. In shielding calculations, the standard deviation or fractional standard deviation (FSD) is used frequently. The expression of the FSD is shown. Radiation shielding problems are roughly divided into transmission through deep layers and streaming problems. In a streaming problem, the large difference in weight depending on the history of particles makes the FSD of the Monte Carlo calculation worse. The streaming experiment in the 14 MeV neutron rectangular annular bent duct, which is the typical streaming benchmark experiment carried out at OKTAVIAN of Osaka University, was analyzed by MCNP 4B, and the reduction of variance or FSD was attempted. The experimental system is shown. The analysis model for MCNP 4B, the input data and the results of the analysis are reported, and the comparison with the experimental results was examined. (K.I.)
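The abstract mentions the expression of the FSD without reproducing it. A minimal sketch of the standard definition (relative error: the standard deviation of the mean estimate divided by the mean), assuming simple equal-weight history scores; the sample values are illustrative:

```python
import math

def fractional_standard_deviation(scores):
    """Fractional standard deviation (relative error) of a Monte Carlo tally:
    the standard deviation of the mean divided by the mean estimate."""
    n = len(scores)
    mean = sum(scores) / n
    # unbiased sample variance of the individual history scores
    var = sum((x - mean) ** 2 for x in scores) / (n - 1)
    std_of_mean = math.sqrt(var / n)
    return std_of_mean / mean

scores = [1.0, 1.2, 0.8, 1.1, 0.9]
print(round(fractional_standard_deviation(scores), 4))  # → 0.0707
```

The 1/sqrt(n) factor is why variance reduction techniques matter: halving the FSD by brute force alone requires four times as many histories.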
A Method to Test Model Calibration Techniques
Energy Technology Data Exchange (ETDEWEB)
Judkoff, Ron; Polly, Ben; Neymark, Joel
2016-08-26
This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
Evaluation of Clipping Based Iterative PAPR Reduction Techniques for FBMC Systems
Directory of Open Access Journals (Sweden)
Zsolt Kollár
2014-01-01
to conventional orthogonal frequency division multiplexing (OFDM) technique. The low ACLR of the transmitted FBMC signal makes it especially favorable in cognitive radio applications, where strict requirements are posed on out-of-band radiation. A large dynamic range resulting in a high peak-to-average power ratio (PAPR) is characteristic of all sorts of multicarrier signals. The advantageous spectral properties of the high-PAPR FBMC signal are significantly degraded if nonlinearities are present in the transceiver chain. Spectral regrowth may appear, causing harmful interference in the neighboring frequency bands. This paper presents novel clipping based PAPR reduction techniques, evaluated and compared by simulations and measurements, with an emphasis on spectral aspects. The paper gives an overall comparison of PAPR reduction techniques, focusing on the reduction of the dynamic range of FBMC signals without increasing out-of-band radiation. An overview is presented on transmitter oriented techniques employing baseband clipping, which can maintain the system performance with a desired bit error rate (BER).
Srivastava, Rajeev; Gupta, JRP; Parthasarthy, Harish
2010-05-01
In this paper, a partial differential equation (PDE) based homomorphic filtering technique is proposed for speckle reduction from digitally reconstructed holographic images, based on the concepts of complex diffusion processes. For digital implementation, the proposed scheme was discretized using a finite difference scheme. Further, the performance of the proposed PDE-based technique is compared with other speckle reduction techniques such as the homomorphic anisotropic diffusion filter based on the extended concept of Perona and Malik (1990) [2], the homomorphic Wiener filter, Lee filter, Frost filter, Kuan filter, speckle reducing anisotropic diffusion (SRAD) filter and a hybrid filter in the context of digital holography. For the comparison of the various speckle reduction techniques, performance is evaluated quantitatively in terms of all parameters that justify the applicability of a scheme for a specific application. The chosen parameters are mean square error (MSE), normalized mean square error (NMSE), peak signal-to-noise ratio (PSNR), speckle index, average signal-to-noise ratio (SNR), effective number of looks (ENL), correlation parameter (CP), mean structure similarity index map (MSSIM) and execution time in seconds. For experimentation and computer simulation, MATLAB 7.0 has been used and the performance is evaluated and tested on various sample holographic images for varying amounts of speckle variance. The results obtained justify the applicability of the proposed schemes.
Research Techniques Made Simple: Skin Carcinogenesis Models: Xenotransplantation Techniques.
Mollo, Maria Rosaria; Antonini, Dario; Cirillo, Luisa; Missero, Caterina
2016-02-01
Xenotransplantation is a widely used technique to test the tumorigenic potential of human cells in vivo using immunodeficient mice. Here we describe basic technologies and recent advances in xenotransplantation applied to study squamous cell carcinomas (SCCs) of the skin. SCC cells isolated from tumors can either be cultured to generate a cell line or injected directly into mice. Several immunodeficient mouse models are available for selection based on the experimental design and the type of tumorigenicity assay. Subcutaneous injection is the most widely used technique for xenotransplantation because it involves a simple procedure allowing the use of a large number of cells, although it may not mimic the original tumor environment. SCC cell injections at the epidermal-to-dermal junction or grafting of organotypic cultures containing human stroma have also been used to more closely resemble the tumor environment. Mixing of SCC cells with cancer-associated fibroblasts can allow the study of their interaction and reciprocal influence, which can be followed in real time by intradermal ear injection using conventional fluorescent microscopy. In this article, we will review recent advances in xenotransplantation technologies applied to study behavior of SCC cells and their interaction with the tumor environment in vivo.
The problem of margin calculation and its reduction via the p-median problem model
Goldengorin, B.; Krushynskyi, D.; Kuzmenko, V.; Mastorakis, N.E.; Demiralp, M.; Mladenov; Bojkovic, Z.
2009-01-01
The paper deals with a model for calculation of the regulatory margin on brokerage accounts. The model is based on the p-Median problem (PMP) that is known to be NP-hard. We use a pseudo-Boolean representation of the PMP and propose several problem size reduction and preprocessing techniques. Our co
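For readers unfamiliar with the underlying model, here is a minimal brute-force sketch of the p-median problem itself (choose p locations minimizing total client-to-nearest-location cost); this is not the authors' pseudo-Boolean reduction or preprocessing techniques, and the distance matrix is illustrative:

```python
from itertools import combinations

def p_median_brute_force(dist, p):
    """Solve tiny p-median instances exactly: dist[i][j] is the cost of
    serving client i from candidate location j; pick p locations minimizing
    the sum over clients of the cheapest chosen location."""
    m = len(dist[0])  # number of candidate locations
    best_cost, best_set = float("inf"), None
    for S in combinations(range(m), p):
        cost = sum(min(row[j] for j in S) for row in dist)
        if cost < best_cost:
            best_cost, best_set = cost, S
    return best_cost, best_set

# 4 clients x 3 candidate locations
dist = [[0, 5, 9],
        [5, 0, 4],
        [9, 4, 0],
        [6, 2, 7]]
cost, chosen = p_median_brute_force(dist, p=2)
print(cost, chosen)  # → 6 (0, 1)
```

Enumeration is exponential in p, which is exactly why the NP-hardness mentioned above motivates problem-size reduction and preprocessing before solving realistic margin-calculation instances.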
Towards reduction of Paradigm coordination models
S. Andova; L.P.J. Groenewegen; E.P. de Vink (Erik Peter)
2011-01-01
The coordination modelling language Paradigm addresses collaboration between components in terms of dynamic constraints. Within a Paradigm model, component dynamics are consistently specified at a detailed and a global level of abstraction. To enable automated verification of Paradigm mo
Projection-based model reduction for contact problems
Balajewicz, Maciej; Farhat, Charbel
2015-01-01
Large scale finite element analysis requires model order reduction for computationally expensive applications such as optimization, parametric studies and control design. Although model reduction for nonlinear problems is an active area of research, a major hurdle is modeling and approximating contact problems. This manuscript introduces a projection-based model reduction approach for static and dynamic contact problems. In this approach, non-negative matrix factorization is utilized to optimally compress and strongly enforce positivity of contact forces in training simulation snapshots. Moreover, a greedy algorithm coupled with an error indicator is developed to efficiently construct parametrically robust low-order models. The proposed approach is successfully demonstrated for the model reduction of several two-dimensional elliptic and hyperbolic obstacle and self contact problems.
Methods for clinical evaluation of noise reduction techniques in abdominopelvic CT.
Ehman, Eric C; Yu, Lifeng; Manduca, Armando; Hara, Amy K; Shiung, Maria M; Jondal, Dayna; Lake, David S; Paden, Robert G; Blezek, Daniel J; Bruesewitz, Michael R; McCollough, Cynthia H; Hough, David M; Fletcher, Joel G
2014-01-01
Most noise reduction methods involve nonlinear processes, and objective evaluation of image quality can be challenging, since image noise cannot be fully characterized on the sole basis of the noise level at computed tomography (CT). Noise spatial correlation (or noise texture) is closely related to the detection and characterization of low-contrast objects and may be quantified by analyzing the noise power spectrum. High-contrast spatial resolution can be measured using the modulation transfer function and section sensitivity profile and is generally unaffected by noise reduction. Detectability of low-contrast lesions can be evaluated subjectively at varying dose levels using phantoms containing low-contrast objects. Clinical applications with inherent high-contrast abnormalities (eg, CT for renal calculi, CT enterography) permit larger dose reductions with denoising techniques. In low-contrast tasks such as detection of metastases in solid organs, dose reduction is substantially more limited by loss of lesion conspicuity due to loss of low-contrast spatial resolution and coarsening of noise texture. Existing noise reduction strategies for dose reduction have a substantial impact on lowering the radiation dose at CT. To preserve the diagnostic benefit of CT examination, thoughtful utilization of these strategies must be based on the inherent lesion-to-background contrast and the anatomy of interest. The authors provide an overview of existing noise reduction strategies for low-dose abdominopelvic CT, including analytic reconstruction, image and projection space denoising, and iterative reconstruction; review qualitative and quantitative tools for evaluating these strategies; and discuss the strengths and limitations of individual noise reduction methods.
PAPR Reduction Techniques in Orthogonal Frequency Division Multiplexing (OFDM: A Review
Directory of Open Access Journals (Sweden)
Ravi Mohan
2013-05-01
High Peak-to-Average Power Ratio (PAPR) is one of the major drawbacks of the Orthogonal Frequency Division Multiplexing (OFDM) transmitted signal. Many techniques have been proposed to mitigate the PAPR problem, including signal distortion techniques such as clipping, peak windowing and companding. The redundancy-based PAPR reduction techniques include selective mapping, partial transmit sequence, tone reservation, tone injection and coding. The undesired effects of the distortion techniques can be alleviated at the penalty of reduced transmission rates due to the introduction of redundancy. OFDM consists of a large number of independent subcarriers, as a result of which the amplitude of such a signal can have high peak values. Selected mapping (SLM) combined with turbo coding is one of the promising PAPR reduction techniques for OFDM: in a turbo coded orthogonal frequency-division multiplexing (TCOFDM) system, a low peak-to-average power ratio can be achieved by selective mapping.
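As a hedged illustration of the distortion-based end of the spectrum surveyed above, here is a minimal sketch of PAPR measurement and amplitude clipping for an OFDM-like signal (not the SLM or turbo-coded schemes of the review); the subcarrier count and clipping ratio are illustrative:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

def clip(x, clip_ratio_db=3.0):
    """Amplitude clipping: limit |x| to clip_ratio_db above the RMS level,
    preserving phase (a basic distortion-based PAPR reduction step)."""
    rms = np.sqrt(np.mean(np.abs(x) ** 2))
    a_max = rms * 10 ** (clip_ratio_db / 20)
    mag = np.abs(x)
    scale = np.minimum(1.0, a_max / np.maximum(mag, 1e-12))
    return x * scale

rng = np.random.default_rng(1)
# OFDM-like time signal: IFFT of random QPSK symbols on 64 subcarriers
symbols = rng.choice(np.array([1+1j, 1-1j, -1+1j, -1-1j]), size=64)
x = np.fft.ifft(symbols)
papr_orig = papr_db(x)
papr_clipped = papr_db(clip(x))
print(papr_clipped < papr_orig)  # clipping lowers the PAPR
```

The price of this simplicity is the in-band distortion and spectral regrowth that the redundancy-based methods above are designed to avoid.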
SVD-Based Optimal Filtering Technique for Noise Reduction in Hearing Aids Using Two Microphones
Directory of Open Access Journals (Sweden)
Moonen Marc
2002-01-01
We introduce a new SVD-based (singular value decomposition) strategy for noise reduction in hearing aids. This technique is evaluated for noise reduction in a behind-the-ear (BTE) hearing aid where two omnidirectional microphones are mounted in an endfire configuration. The behaviour of the SVD-based technique is compared to a two-stage adaptive beamformer for hearing aids developed by Vanden Berghe and Wouters (1998). The evaluation and comparison is done with a performance metric based on the speech intelligibility index (SII). The speech and noise signals are recorded in reverberant conditions with a signal-to-noise ratio of and the spectrum of the noise signals is similar to the spectrum of the speech signal. The SVD-based technique works without initialization or assumptions about a look direction, unlike the two-stage adaptive beamformer. Still, for different noise scenarios, the SVD-based technique performs as well as the two-stage adaptive beamformer, for a similar filter length and adaptation time for the filter coefficients. In a diffuse noise scenario, the SVD-based technique performs better than the two-stage adaptive beamformer and hence provides a more flexible and robust solution under speaker position variations and reverberant conditions.
SVD-Based Optimal Filtering Technique for Noise Reduction in Hearing Aids Using Two Microphones
Maj, Jean-Baptiste; Moonen, Marc; Wouters, Jan
2002-12-01
We introduce a new SVD-based (singular value decomposition) strategy for noise reduction in hearing aids. This technique is evaluated for noise reduction in a behind-the-ear (BTE) hearing aid where two omnidirectional microphones are mounted in an endfire configuration. The behaviour of the SVD-based technique is compared to a two-stage adaptive beamformer for hearing aids developed by Vanden Berghe and Wouters (1998). The evaluation and comparison is done with a performance metric based on the speech intelligibility index (SII). The speech and noise signals are recorded in reverberant conditions with a signal-to-noise ratio of [InlineEquation not available: see fulltext.] and the spectrum of the noise signals is similar to the spectrum of the speech signal. The SVD-based technique works without initialization or assumptions about a look direction, unlike the two-stage adaptive beamformer. Still, for different noise scenarios, the SVD-based technique performs as well as the two-stage adaptive beamformer, for a similar filter length and adaptation time for the filter coefficients. In a diffuse noise scenario, the SVD-based technique performs better than the two-stage adaptive beamformer and hence provides a more flexible and robust solution under speaker position variations and reverberant conditions.
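As a hedged, simplified stand-in for the multichannel SVD-based optimal filtering described above (not the authors' algorithm), the following sketch shows the basic principle behind truncated-SVD noise reduction on a two-channel signal; the signal model and rank are illustrative:

```python
import numpy as np

def svd_denoise(X, rank):
    """Project a data matrix onto its top singular components; when the
    signal is correlated across channels and the noise is not, truncation
    discards most of the noise energy."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 200)
# Two correlated channels, loosely mimicking two closely spaced microphones
clean = np.outer(np.sin(2 * np.pi * 5 * t), [1.0, 0.8])
noisy = clean + 0.3 * rng.standard_normal(clean.shape)
denoised = svd_denoise(noisy, rank=1)
err_noisy = np.linalg.norm(noisy - clean)
err_denoised = np.linalg.norm(denoised - clean)
print(err_denoised < err_noisy)  # rank-1 projection reduces the error
```

Like the hearing-aid technique above, this requires no initialization and no assumed look direction: the dominant subspace is estimated from the data itself.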
Layout-Driven Post-Placement Techniques for Temperature Reduction and Thermal Gradient Minimization
DEFF Research Database (Denmark)
Liu, Wei; Calimera, Andrea; Macii, Alberto;
2013-01-01
With the continuing scaling of CMOS technology, on-chip temperature and thermal-induced variations have become a major design concern. To effectively limit the high temperature in a chip equipped with a cost-effective cooling system, thermal specific approaches, besides low power techniques, are necessary at the chip design level. The high temperature in hotspots and large thermal gradients are caused by the high local power density and the nonuniform power dissipation across the chip. With the objective of reducing power density in hotspots, we propose two placement techniques that spread cells... layout. We compare the proposed methods in terms of temperature reduction, timing, and area overhead to the baseline method, which enlarges the circuit area uniformly. The experimental results showed that our methods achieve a larger reduction in both peak temperature and thermal gradient than...
Use experiences of MCNP in nuclear energy study. 2. Review of variance reduction techniques
Energy Technology Data Exchange (ETDEWEB)
Sakurai, Kiyoshi; Yamamoto, Toshihiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment] [eds.]
1998-03-01
'MCNP Use Experience' Working Group was established in 1996 under the Special Committee on Nuclear Code Evaluation. This year's main activity of the working group has been focused on the review of variance reduction techniques of Monte Carlo calculations. This working group dealt with the variance reduction techniques of (1) neutron and gamma ray transport calculation of fusion reactor system, (2) concept design of nuclear transmutation system using accelerator, (3) JMTR core calculation, (4) calculation of prompt neutron decay constant, (5) neutron and gamma ray transport calculation for exposure evaluation, (6) neutron and gamma ray transport calculation of shielding system, etc. Furthermore, this working group started an activity to compile 'Guideline of Monte Carlo Calculation' which will be a standard in the future. The appendices of this report include this 'Guideline', the use experience of MCNP 4B and examples of Monte Carlo calculations of high energy charged particles. The 11 papers are indexed individually. (J.P.N.)
A specific closed percutaneous technique for reduction of Jeffery type II lesion.
Chotel, Franck; Sailhan, Frédéric; Martin, Jean-Noël; Filipe, Georges; Pem, Rajkumar; Garnier, Emmanuelle; Berard, Jerôme
2006-09-01
Open reduction is commonly recommended in Jeffery type II fractures. Attempts to reduce these fractures percutaneously were reported as unsafe and unreliable. We revisited this technique and used a specific percutaneous reduction that turned out to be successful in two cases. Instead of lifting the radial head as described in leverage maneuver, we use a pushing-back procedure to reduce the fracture. The maneuver aims at suppressing the capitellum interposition between the head fragment and the metaphysis by reproducing the reversed trajectory of trauma. This reduction is made possible because of the posterior periosteal attachment of the radial head. A few weeks after the procedure, the two patients remained painless, recovered a complete range of motion in prono-supination and returned to sports. In these two cases, the procedure used led to a prompt recovery and provided a much better outcome than described with the classic open approach.
Dimension reduction techniques for the integrative analysis of multi-omics data.
Meng, Chen; Zeleznik, Oana A; Thallinger, Gerhard G; Kuster, Bernhard; Gholami, Amin M; Culhane, Aedín C
2016-07-01
State-of-the-art next-generation sequencing, transcriptomics, proteomics and other high-throughput 'omics' technologies enable the efficient generation of large experimental data sets. These data may yield unprecedented knowledge about molecular pathways in cells and their role in disease. Dimension reduction approaches have been widely used in exploratory analysis of single omics data sets. This review will focus on dimension reduction approaches for simultaneous exploratory analyses of multiple data sets. These methods extract the linear relationships that best explain the correlated structure across data sets, the variability both within and between variables (or observations) and may highlight data issues such as batch effects or outliers. We explore dimension reduction techniques as one of the emerging approaches for data integration, and how these can be applied to increase our understanding of biological systems in normal physiological function and disease.
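As a hedged illustration of one of the simplest integrative strategies touched on above (PCA on standardized, column-concatenated data sets; the review covers considerably more sophisticated methods), with all matrix sizes and the two-omics setup illustrative:

```python
import numpy as np

def joint_pca(datasets, n_components=2):
    """Naive integrative dimension reduction: standardize each omics matrix
    (samples x features), concatenate along the feature axis, and run PCA
    to obtain one low-dimensional embedding of the shared samples."""
    blocks = []
    for X in datasets:
        Xc = X - X.mean(axis=0)
        sd = Xc.std(axis=0)
        sd[sd == 0] = 1.0          # guard against constant features
        blocks.append(Xc / sd)
    Z = np.hstack(blocks)          # samples x (sum of feature counts)
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U[:, :n_components] * s[:n_components]  # sample scores

rng = np.random.default_rng(3)
omics1 = rng.standard_normal((10, 50))  # e.g. transcriptomics: 10 samples
omics2 = rng.standard_normal((10, 30))  # e.g. proteomics: same 10 samples
scores = joint_pca([omics1, omics2])
print(scores.shape)  # → (10, 2)
```

A known weakness of this naive concatenation, noted in the abstract's framing, is that blocks with more features or stronger batch effects dominate the shared components, which is what the dedicated multi-table methods are designed to correct.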
A Size Reduction Technique for Mobile Phone PIFA Antennas Using Lumped Inductors
DEFF Research Database (Denmark)
Thaysen, Jesper; Jakobsen, Kaj Bjarne
2005-01-01
A size reduction technique for the planar inverted-F antenna (PIFA) is presented. An 18 nH lumped inductor is used in addition to a small 0.3 cm3 PIFA. The PIFA is located on dielectric foam, 5 mm above a 40 mm × 100 mm ground plane. It is possible to reduce the center frequency (|S11|min) by 33 ...
TEM Cell Testing of Cable Noise Reduction Techniques From 2 MHz to 200 MHz - Part 1
Bradley, Arthur T.; Evans, William C.; Reed, Joshua L.; Shimp, Samuel K.; Fitzpatrick, Fred D.
2008-01-01
This paper presents empirical results of cable noise reduction techniques as demonstrated in a TEM cell operating with radiated fields from 2 - 200 MHz. It is the first part of a two-paper series. This first paper discusses cable types and shield connections. In the second paper, the effects of load and source resistances and chassis connections are examined. For each topic, well established theories are compared to data from a real-world physical system. Finally, recommendations for minimizing cable susceptibility (and thus cable emissions) are presented. There are numerous papers and textbooks that present theoretical analyses of cable noise reduction techniques. However, empirical data is often targeted to low frequencies (e.g. 100 MHz). Additionally, a comprehensive study showing the relative effects of various noise reduction techniques is needed. These include the use of dedicated return wires, twisted wiring, cable shielding, shield connections, changing load or source impedances, and implementing load- or source-to-chassis isolation. We have created an experimental setup that emulates a real-world electrical system, while still allowing us to independently vary a host of parameters. The goal of the experiment was to determine the relative effectiveness of various noise reduction techniques when the cable is in the presence of radiated emissions from 2 MHz to 200 MHz. The electronic system (Fig. 1) consisted of two Hammond shielded electrical enclosures, one containing the source resistance, and the other containing the load resistance. The boxes were mounted on a large aluminium plate acting as the chassis. Cables connecting the two boxes measured 81 cm in length and were attached to the boxes using standard D38999 military-style connectors. The test setup is shown in Fig. 2. Electromagnetic fields were created using an HP8657B signal generator, MiniCircuits ZHL-42W-SMA amplifier, and an EMCO 5103 TEM cell. Measurements were taken using an Agilent E4401B
Efficient Data Reduction Techniques for Remote Applications of a Wireless Visual Sensor Network
Directory of Open Access Journals (Sweden)
Khursheed Khursheed
2013-05-01
A Wireless Visual Sensor Network (WVSN) is formed by deploying many Visual Sensor Nodes (VSNs) in the field. After acquiring an image of the area of interest, the VSN performs local processing on it and transmits the result using an embedded wireless transceiver. Wireless data transmission consumes a great deal of energy, where energy consumption is mainly dependent on the amount of information being transmitted. The image captured by the VSN contains a huge amount of data. For certain applications, segmentation can be performed on the captured images. The amount of information in the segmented images can be reduced by applying efficient bi-level image compression methods. In this way, the communication energy consumption of each of the VSNs can be reduced. However, the data reduction capability of bi-level image compression standards is fixed and is limited by the compression algorithm used. For applications exhibiting few changes in adjacent frames, change coding can be applied for further data reduction. Detecting and compressing only the Regions of Interest (ROIs) in the change frame is another possibility for further data reduction. In a communication system where both the sender and the receiver know the employed compression standard, there is a possibility for further data reduction by not including the header information in the compressed bit stream of the sender. This paper summarizes different information reduction techniques such as image coding, change coding and ROI coding. The main contribution is the investigation of the combined effect of all these coding methods and their application to a few representative real life applications. This paper is intended to be a resource for researchers interested in techniques for information reduction in energy constrained embedded applications.
Power system coherency and model reduction
Chow, Joe H
2014-01-01
This book provides a comprehensive treatment for understanding interarea modes in large power systems and obtaining reduced-order models using the coherency concept and selective modal analysis method.
Numerical modeling techniques for flood analysis
Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.
2016-12-01
Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to determine the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and their estimation in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO-2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO-2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness in grids, were found, which can be improved through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D model for floodplains should be developed by considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the understanding of the causes and effects of flooding.
Model reduction of strong-weak neurons
Steven James Cox; Bosen Du; Danny Sorensen
2014-01-01
We consider neurons with large dendritic trees that are weakly excitable in the sense that back propagating action potentials are severely attenuated as they travel from the small, strongly excitable, spike initiation zone. In previous work we have shown that the computational size of weakly excitable cell models may be reduced by two or more orders of magnitude, and that the size of strongly excitable models may be reduced by at least one order of magnitude, without sacrificing the spatio–tem...
A Biomechanical Modeling Guided CBCT Estimation Technique.
Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing
2017-02-01
Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks.
Lian, Yanbang; Cao, Wuteng; Zhu, Shanshan; Lin, Yanghao; Liu, Dechao; Wang, Xinhua; Qiu, Jianping; Zhou, Zhiyang
2014-07-01
To evaluate the application of low-dose CT enterography with the adaptive iterative dose reduction (AIDR) technique in diagnosing Crohn's disease. Retrospective analysis was performed on 26 patients diagnosed with Crohn's disease by the multidisciplinary team in our hospital. Low-dose CT enterography with 640-slice MDCT was performed on these 26 patients using the AIDR technique. Characteristics of Crohn's disease in the CT enterography images were independently analyzed by two radiologists experienced in Crohn's disease, and the total radiation dose was calculated. The radiation dose of the 26 patients ranged from 5.58 to 12.90 [mean (9.00±2.00)] mSv, which was lower than that of a conventional scan (around 15 mSv) reported in the literature. In the CT enterography images of the 26 cases, bowel wall thickening with abnormal enhancement and lymphadenectasis were found in 25 cases, with a total of 109 segments of bowel wall thickening. Among the 25 cases with thickening, enterostenosis was found in 16 cases, stratification enhancement in 12 cases and the comb sign in 14 cases. In addition, 8 cases showed hyperdense fat on the mesenteric side, 7 cases intestinal fistula, 6 cases abdominal cavity abscess, and 3 cases anal fistula. CT enterography with the adaptive iterative dose reduction technique is an effective method to evaluate Crohn's disease at a reduced radiation dose without compromising image quality.
A HYBRID TECHNIQUE FOR PAPR REDUCTION OF OFDM USING DHT PRECODING WITH PIECEWISE LINEAR COMPANDING
Directory of Open Access Journals (Sweden)
Thammana Ajay
2016-06-01
Full Text Available Orthogonal Frequency Division Multiplexing (OFDM) is an attractive approach for wireless communication applications that require very high data rates. However, the OFDM signal suffers from a large Peak-to-Average Power Ratio (PAPR), which results in significant distortion when it passes through a nonlinear device such as a transmitter high power amplifier (HPA). Due to this high PAPR, the complexity of the HPA as well as of the DAC also increases. Many techniques are available for PAPR reduction in OFDM. Among them, companding is an attractive low-complexity technique for reducing the PAPR of the OFDM signal. Recently, a piecewise linear companding technique was proposed with the aim of minimizing companding distortion. In this paper, piecewise linear companding is combined with Discrete Hartley Transform (DHT) precoding to reduce the peak-to-average ratio of OFDM to a greater extent. Simulation results show that the proposed method obtains significant PAPR reduction while maintaining improved performance in Bit Error Rate (BER) and Power Spectral Density (PSD) compared to the piecewise linear companding method alone.
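A minimal NumPy sketch of the companding idea: the PAPR of one randomly generated OFDM symbol is measured before and after a simplified threshold-based piecewise linear compander. The threshold and slope below are illustrative choices, not the paper's optimized parameters, and the DHT precoding stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

# One OFDM symbol: 256 random QPSK subcarriers taken to the time domain.
N = 256
qpsk = (rng.choice([-1.0, 1.0], N) + 1j * rng.choice([-1.0, 1.0], N)) / np.sqrt(2)
x = np.fft.ifft(qpsk) * np.sqrt(N)          # scaled to unit average power

def pwl_compand(x, thresh=1.0, slope=0.3):
    """Compress magnitudes above `thresh` with slope < 1, keeping the phase."""
    mag = np.abs(x)
    out = np.where(mag <= thresh, mag, thresh + slope * (mag - thresh))
    return out * np.exp(1j * np.angle(x))

y = pwl_compand(x)
# papr_db(y) is lower than papr_db(x), at the cost of in-band distortion.
```

At the receiver a matching expander would invert the piecewise map; the BER and PSD effects of that distortion are what the paper's simulations quantify.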
Noise Reduction Analysis of Radar Rainfall Using Chaotic Dynamics and Filtering Techniques
Directory of Open Access Journals (Sweden)
Soojun Kim
2014-01-01
Full Text Available The aim of this study is to evaluate filtering techniques that can remove noise from a time series. For this, the Logistic series, which is chaotic, and radar rainfall series are used to evaluate a low-pass filter (LF) and a Kalman filter (KF). Noise is added to the Logistic series at several noise levels, and the noise-added series is filtered by the LF and KF for noise reduction. The evaluation of the LF and KF techniques is performed using the correlation coefficient, the standard error, the attractor, and the BDS statistic from chaos theory. The results for the Logistic series clearly show that the KF is a better tool than the LF for removing noise. We also used the radar rainfall series to evaluate the noise reduction capabilities of the LF and KF. In this case, it was difficult to distinguish which filtering technique is the better choice for noise reduction when typical statistics such as the correlation coefficient and standard error were used. However, when the attractor and the BDS statistic were used to evaluate the LF and KF, we could clearly identify that the KF is better than the LF.
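The Logistic-series experiment can be sketched in a few lines of NumPy: a chaotic series is corrupted with noise and passed through a moving-average low-pass filter and a scalar Kalman filter. The random-walk state model and the q, r values below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Chaotic Logistic series x_{n+1} = 4 x_n (1 - x_n), plus measurement noise.
n = 500
x = np.empty(n)
x[0] = 0.3
for i in range(n - 1):
    x[i + 1] = 4.0 * x[i] * (1.0 - x[i])
noisy = x + rng.normal(0.0, 0.05, n)

# Low-pass filter: a simple 5-point moving average.
lp = np.convolve(noisy, np.ones(5) / 5.0, mode="same")

def kalman_1d(z, q=1e-2, r=0.05 ** 2):
    """Scalar Kalman filter assuming a random-walk state model."""
    xhat, p, out = z[0], 1.0, np.empty_like(z)
    for i, zi in enumerate(z):
        p = p + q                        # predict
        k = p / (p + r)                  # Kalman gain
        xhat = xhat + k * (zi - xhat)    # measurement update
        p = (1.0 - k) * p
        out[i] = xhat
    return out

kf = kalman_1d(noisy)
corr = np.corrcoef(kf, x)[0, 1]          # one of the paper's comparison metrics
```

The attractor and BDS-statistic comparisons used in the study operate on the same filtered outputs `lp` and `kf`.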
Modeling Techniques for IN/Internet Interworking
Institute of Scientific and Technical Information of China (English)
[No author listed]
2000-01-01
This paper focuses on the authors' contributions to ITU-T to develop the network modeling for the support of IN/Internet interworking. Following an introduction to benchmark interworking services, the paper describes the consensus enhanced DFP architecture, which is reached based on IETF reference model and the authors' proposal. Then the proposed information flows for benchmark services are presented with new or updated flows identified. Finally a brief description is given to implementation techniques.
Adaptive deployment of model reductions for tau-leaping simulation.
Wu, Sheng; Fu, Jin; Petzold, Linda R
2015-05-28
Multiple time scales in cellular chemical reaction systems often render the tau-leaping algorithm inefficient. Various model reductions have been proposed to accelerate tau-leaping simulations. However, these are often identified and deployed manually, requiring expert knowledge. This is time-consuming and prone to error. In previous work, we proposed a methodology for automatic identification and validation of model reduction opportunities for tau-leaping simulation. Here, we show how the model reductions can be automatically and adaptively deployed during the time course of a simulation. For multiscale systems, this can result in substantial speedups.
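For context, a bare-bones (non-adaptive) tau-leaping simulation of a toy A → B → C mass-action system can be sketched as follows; the paper's actual contribution, automatically deploying reductions such as quasi-steady-state approximations during the run, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

def tau_leap(x0, c, stoich, propensities, tau, steps):
    """Fixed-step tau-leaping: fire Poisson(a_j * tau) copies of each reaction."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        a = propensities(x, c)                 # current propensities
        k = rng.poisson(a * tau)               # reaction firings in [t, t + tau)
        x = np.maximum(x + stoich.T @ k, 0.0)  # update state, clip negatives
    return x

# A -> B (rate c0 * A), then B -> C (rate c1 * B).
stoich = np.array([[-1, 1, 0],
                   [0, -1, 1]])
prop = lambda x, c: np.array([c[0] * x[0], c[1] * x[1]])

final = tau_leap([1000, 0, 0], [1.0, 0.5], stoich, prop, tau=0.01, steps=500)
# After t = 5, nearly all mass has left A and most has reached C.
```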
Multiscale model reduction for shale gas transport in fractured media
Akkutlu, I. Y.
2016-05-18
In this paper, we develop a multiscale model reduction technique that describes shale gas transport in fractured media. Due to the pore-scale heterogeneities and processes, we use upscaled models to describe the matrix. We follow our previous work (Akkutlu et al. Transp. Porous Media 107(1), 235–260, 2015), where we derived an upscaled model in the form of a generalized nonlinear diffusion model to describe the effects of kerogen. To model the interaction between the matrix and the fractures, we use the Generalized Multiscale Finite Element Method (GMsFEM) (Efendiev et al. J. Comput. Phys. 251, 116–135, 2013, 2015). In this approach, the matrix-fracture interaction is modeled via local multiscale basis functions. In Efendiev et al. (2015), we developed the GMsFEM and applied it to linear flows with horizontal or vertical fracture orientations aligned with a Cartesian fine grid. The approach in Efendiev et al. (2015) does not allow handling arbitrary fracture distributions. In this paper, we (1) consider arbitrary fracture distributions on an unstructured grid; (2) develop the GMsFEM for nonlinear flows; and (3) develop online basis function strategies to adaptively improve the convergence. The number of multiscale basis functions in each coarse region represents the degrees of freedom needed to achieve a certain error threshold. Our approach is adaptive in the sense that multiscale basis functions can be added in the regions of interest. Numerical results for a two-dimensional problem are presented to demonstrate the efficiency of the proposed approach.
Model reduction of strong-weak neurons.
Du, Bosen; Sorensen, Danny; Cox, Steven J
2014-01-01
We consider neurons with large dendritic trees that are weakly excitable in the sense that back-propagating action potentials are severely attenuated as they travel from the small, strongly excitable spike initiation zone. In previous work we have shown that the computational size of weakly excitable cell models may be reduced by two or more orders of magnitude, and that the size of strongly excitable models may be reduced by at least one order of magnitude, without sacrificing the spatio-temporal nature of the inputs (in the sense that we reproduce the cell's precise mapping of inputs to outputs). We combine the best of these two strategies via a predictor-corrector decomposition scheme and achieve a drastically reduced, highly accurate model of a caricature of the neuron responsible for collision detection in the locust.
Analysis of Leakage Reduction Techniques in Independent-Gate DG FinFET SRAM Cell
Directory of Open Access Journals (Sweden)
Vandna Sikarwar
2013-01-01
Full Text Available Scaling of devices in bulk CMOS technology leads to short-channel effects and an increase in leakage. Static random access memory (SRAM) is expected to occupy 90% of the area of an SoC. Since leakage is the major factor in the SRAM cell, it is implemented using FinFETs; double-gate FinFET devices have become a better choice for deep-submicron technologies. In this work, a 6T SRAM cell is implemented using independent-gate DG FinFETs, in which the two gates are controlled independently, providing better scalability to the SRAM cell. The cell is implemented with different leakage reduction techniques, such as the gated-Vdd technique and the multithreshold voltage technique, to reduce leakage. Power consumption in the SRAM cell is thereby reduced and performance is improved. The independent-gate FinFET SRAM cell with various leakage reduction techniques has been simulated using the Cadence Virtuoso tool in 45 nm technology.
A Predicate Based Fault Localization Technique Based On Test Case Reduction
Directory of Open Access Journals (Sweden)
Rohit Mishra
2015-08-01
Full Text Available ABSTRACT In today's world, software testing with statistical fault localization techniques is one of the most tedious, expensive and time-consuming activities. In a faulty program, program elements are contrasted through dynamic spectra that estimate the location of the fault. Coincidental correctness can have a negative impact on these techniques, because the fault can also be triggered in non-failed runs and, if so, disturb the assessment of the fault location; eliminating this confounding effect improves localization accuracy. In this paper, coincidental correctness is treated as an interference affecting the success of fault localization. We identify fault-relevant predicates from the distributional overlap of the dynamic spectra of failed and non-failed runs, and refine the candidate set by referencing the inter-class distances of the spectra to prune less suspicious candidates. We then apply a coverage-matrix-based reduction approach to reduce the test cases of the program and locate the fault within it. Finally, empirical results show that our technique outperforms previous predicate-based fault localization techniques with test case reduction.
Model reduction of systems with localized nonlinearities.
Energy Technology Data Exchange (ETDEWEB)
Segalman, Daniel Joseph
2006-03-01
An LDRD-funded approach to the development of reduced order models for systems with local nonlinearities is presented. This method is particularly useful for problems of structural dynamics, but has potential application in other fields. The key elements of this approach are (1) employment of eigenmodes of a reference linear system, and (2) incorporation of basis functions with an appropriate discontinuity at the location of the nonlinearity. Galerkin solution using the above combination of basis functions appears to capture the dynamics of the system with a small basis set. For problems involving small-amplitude dynamics, the addition of discontinuous (joint) modes appears to capture the nonlinear mechanics correctly while preserving the modal form of the predictions. For problems involving large-amplitude dynamics of realistic joint models (macro-slip), the use of appropriate joint modes along with sufficient basis eigenmodes to capture the frequencies of the system greatly enhances convergence, though the modal nature of the result is lost. Also observed is that when joint modes are used in conjunction with a small number of elastic eigenmodes in problems of macro-slip of realistic joint models, the resulting predictions are very similar to those of the full solution when seen through a low-pass filter. This has significance both in terms of greatly reducing the number of degrees of freedom of the problem and in terms of facilitating the use of much larger time steps.
H∞ Optimal Model Reduction for Singular Fast Subsystems
Institute of Scientific and Technical Information of China (English)
WANG Jing; ZHANG Qing-Ling; LIU Wan-Quan; ZHOU Yue
2005-01-01
In this paper, H∞ optimal model reduction for singular fast subsystems is investigated. First, an error system is established to measure the error magnitude between the original and reduced systems, and it is demonstrated that the new feature of model reduction for singular systems is to make the H∞ norm of the error system finite and minimal. The necessary and sufficient condition is derived for the existence of a solution to the H∞ suboptimal model reduction problem. Next, we give an exact and practicable algorithm to obtain the parameters of the reduced subsystems by applying matrix theory; the reduced system may also be impulsive. The advantages of the proposed algorithm are that it is more flexible, working in a straightforward way without much extra computation, and that the order of the reduced system is as low as possible. Finally, an illustrative example is given to demonstrate the effectiveness of the proposed model reduction approach.
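The paper's H∞ method for singular systems is specialized, but the general flavor of projection-based model reduction with an error-norm guarantee can be illustrated with standard square-root balanced truncation for a regular, stable state-space system. This is a generic textbook algorithm, not the authors' method; the example system and reduced order are illustrative.

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

def balanced_truncation(A, B, C, r):
    """Reduce stable x' = Ax + Bu, y = Cx to order r via the square-root method."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    Lc = cholesky(Wc, lower=True)
    Lo = cholesky(Wo, lower=True)
    U, hsv, Vt = svd(Lo.T @ Lc)                    # Hankel singular values
    S = np.diag(hsv[:r] ** -0.5)
    T1 = Lc @ Vt[:r].T @ S                         # right projector
    T2 = Lo @ U[:, :r] @ S                         # left projector (T2.T @ T1 = I)
    return T2.T @ A @ T1, T2.T @ B, C @ T1, hsv

# Four well-separated modes; the two fast ones carry little energy.
A = np.diag([-1.0, -2.0, -30.0, -40.0])
B = np.ones((4, 1))
C = np.ones((1, 4))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)

dc_full = (-C @ np.linalg.solve(A, B)).item()      # DC gain of full model
dc_red = (-Cr @ np.linalg.solve(Ar, Br)).item()    # DC gain of reduced model
```

The discarded Hankel singular values bound the H∞ norm of the error system by 2(σ₃ + σ₄), which is the same kind of error measure the paper minimizes.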
NP-Logic Systems and Model-Equivalence Reductions
Shen, Yuping (DOI: 10.4204/EPTCS.24.17)
2010-01-01
In this paper we investigate the existence of model-equivalence reductions between NP-logic systems, which are logic systems whose model existence problem is in NP. It is shown that among all NP-systems whose model checking problem is also in NP, existentially quantified propositional logic (∃PF) is maximal with respect to poly-time model-equivalence reduction. However, ∃PF does not seem to be a maximal NP-system in general, because there exists an NP-system whose model checking problem is D^P-complete.
Aerosol model selection and uncertainty modelling by adaptive MCMC technique
Directory of Open Access Journals (Sweden)
M. Laine
2008-12-01
Full Text Available We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and allows model selection, calculation of model posterior probabilities and model averaging in a Bayesian way.
The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension, Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature of our algorithm is that it is fully automatic and easy to use.
We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval for the GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used, and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.
Mathematical modelling of risk reduction in reinsurance
Balashov, R. B.; Kryanev, A. V.; Sliva, D. E.
2017-01-01
The paper presents a mathematical model of efficient portfolio formation in the reinsurance markets. The presented approach provides the optimal ratio between the expected value of return and the risk of yield values falling below a certain level. The uncertainty in the return values is conditioned by the use of expert evaluations and preliminary calculations, which yield expected return values and the corresponding risk levels. The proposed method allows for the implementation of computationally simple schemes and algorithms for numerical calculation of the structure of the efficient portfolios of reinsurance contracts of a given insurance company.
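The expected-return/risk trade-off described above is the classical mean-variance problem; a minimal NumPy sketch of the efficient-portfolio computation follows. The return and covariance numbers are purely hypothetical, and short positions are allowed for simplicity, which the paper's model may not permit.

```python
import numpy as np

# Hypothetical expected returns and covariance for four reinsurance contracts.
mu = np.array([0.08, 0.10, 0.12, 0.15])
Sigma = np.array([[0.04, 0.01, 0.00, 0.00],
                  [0.01, 0.09, 0.02, 0.00],
                  [0.00, 0.02, 0.16, 0.03],
                  [0.00, 0.00, 0.03, 0.25]])

def efficient_weights(mu, Sigma, target):
    """Minimum-variance weights with expected return `target`, summing to one."""
    n = len(mu)
    ones = np.ones(n)
    # KKT system for: min w' Sigma w  s.t.  w' mu = target, w' 1 = 1.
    K = np.block([[2.0 * Sigma, mu[:, None], ones[:, None]],
                  [mu[None, :], np.zeros((1, 2))],
                  [ones[None, :], np.zeros((1, 2))]])
    rhs = np.concatenate([np.zeros(n), [target, 1.0]])
    return np.linalg.solve(K, rhs)[:n]

w = efficient_weights(mu, Sigma, 0.11)   # one point on the efficient frontier
```

Sweeping `target` over a range of return levels traces out the whole frontier of expected return versus portfolio variance.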
Model Order Reduction for Electronic Circuits:
DEFF Research Database (Denmark)
Hjorth, Poul G.; Shontz, Suzanne
Electronic circuits are ubiquitous; they are used in numerous industries including: the semiconductor, communication, robotics, auto, and music industries (among many others). As products become more and more complicated, their electronic circuits also grow in size and complexity. This increased ...... in the semiconductor industry. Circuit simulation proceeds by using Maxwell’s equations to create a mathematical model of the circuit. The boundary element method is then used to discretize the equations, and the variational form of the equations are then solved on the graph network....
Development and validation of a building design waste reduction model.
Llatas, C; Osmani, M
2016-10-01
Reduction in construction waste is a pressing need in many countries. The design of building elements is considered a pivotal process to achieve waste reduction at source, which enables an informed prediction of their wastage reduction levels. However the lack of quantitative methods linking design strategies to waste reduction hinders designing out waste practice in building projects. Therefore, this paper addresses this knowledge gap through the design and validation of a Building Design Waste Reduction Strategies (Waste ReSt) model that aims to investigate the relationships between design variables and their impact on onsite waste reduction. The Waste ReSt model was validated in a real-world case study involving 20 residential buildings in Spain. The validation process comprises three stages. Firstly, design waste causes were analyzed. Secondly, design strategies were applied leading to several alternative low waste building elements. Finally, their potential source reduction levels were quantified and discussed within the context of the literature. The Waste ReSt model could serve as an instrumental tool to simulate designing out strategies in building projects. The knowledge provided by the model could help project stakeholders to better understand the correlation between the design process and waste sources and subsequently implement design practices for low-waste buildings.
Size Reduction Techniques for Large Scale Permanent Magnet Generators in Wind Turbines
Khazdozian, Helena; Hadimani, Ravi; Jiles, David
2015-03-01
Increased wind penetration is necessary to reduce U.S. dependence on fossil fuels, combat climate change and increase national energy security. The U.S. Department of Energy has recommended large scale and offshore wind turbines to achieve 20% wind electricity generation by 2030. Currently, geared doubly-fed induction generators (DFIGs) are typically employed in the drivetrain for conversion of mechanical to electrical energy. Yet, gearboxes account for the greatest downtime of wind turbines, decreasing reliability and contributing to loss of profit. Direct drive permanent magnet generators (PMGs) offer a reliable alternative to DFIGs by eliminating the gearbox. However, PMGs scale up in size and weight much more rapidly than DFIGs as rated power is increased, presenting significant challenges for large scale wind turbine application. Thus, size reduction techniques are needed for viability of PMGs in large scale wind turbines. Two size reduction techniques are presented. It is demonstrated that 25% size reduction of a 10MW PMG is possible with a high remanence theoretical permanent magnet. Additionally, the use of a Halbach cylinder in an outer rotor PMG is investigated to focus magnetic flux over the rotor surface in order to increase torque. This work was supported by the National Science Foundation under Grant No. 1069283 and a Barbara and James Palmer Endowment at Iowa State University.
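The appeal of the Halbach arrangement can be seen from the ideal dipole Halbach cylinder formula B = B_r ln(r_o / r_i), which lets the bore field exceed the magnet remanence. This is a textbook idealization; practical segmented arrays achieve somewhat less, and the radii below are illustrative.

```python
import math

def halbach_bore_field(b_r, r_outer, r_inner):
    """Ideal interior flux density (tesla) of a dipole Halbach cylinder."""
    return b_r * math.log(r_outer / r_inner)

# NdFeB remanence ~1.2 T; a 2:1 radius ratio gives B = 1.2 * ln 2, about 0.83 T,
# and ratios above e (about 2.718) push the bore field beyond the remanence.
b = halbach_bore_field(1.2, 0.20, 0.10)
```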
Directory of Open Access Journals (Sweden)
Yewon Lee
2014-11-01
Full Text Available Background: Frontal sinus fractures, particularly anterior sinus fractures, are relatively common facial fractures. Many agree on the general principles of frontal fracture management; however, the optimal methods of reduction are still controversial. In this article, we suggest a simple reduction method using a subbrow incision as a treatment for isolated anterior sinus fractures. Methods: Between March 2011 and March 2014, 13 patients with isolated frontal sinus fractures were treated by open reduction and internal fixation through a subbrow incision. The subbrow incision line was designed to be precisely at the lower margin of the brow in order to obtain an inconspicuous scar. A periosteal incision was made 3 mm above the superior orbital rim. The fracture site of the frontal bone was reduced, and bone fixation was performed using an absorbable plate and screws. Results: Contour deformities were completely restored in all patients, and all patients were satisfied with the results. Scars were barely visible in the long-term follow-up. No complications related to the procedure, such as infection, uncontrolled sinus bleeding, hematoma, paresthesia, mucocele, or posterior wall and brain injury, were observed. Conclusions: The subbrow approach allowed for an accurate reduction and internal fixation of the fractures in the anterior table of the frontal sinus by providing direct visualization of the fracture. Considering the surgical success of the reduction and the rigid fixation, patient satisfaction, and aesthetic considerations, this transcutaneous approach through a subbrow incision is concluded to be superior to the other reduction techniques used in the case of an anterior table frontal sinus fracture.
Directory of Open Access Journals (Sweden)
Senthilnayagam Kalyanasundaram
2016-10-01
Full Text Available BACKGROUND Enterococcus faecalis, a facultative anaerobic Gram-positive coccus, is involved in endodontic failures. Bacterial elimination from the infected root canal is often achieved by mechanical cleaning and shaping along with irrigants. This study compares the intracanal bacterial reduction achieved using two instrumentation techniques and irrigation regimens. METHODS 50 extracted human mandibular bicuspid teeth with single canals were decoronated at the cemento-enamel junction and pulpectomy was performed. Working lengths were determined, the apical foramina were sealed with acrylic resin, and the specimens were autoclaved at 121°C for 20 minutes. Samples were divided into six groups. Group I - hand instrumentation with 0.9% saline irrigant; Group II - hand instrumentation with 5% sodium hypochlorite irrigant; Group III - rotary instrumentation with 0.9% saline irrigant; Group IV - rotary instrumentation with 5% sodium hypochlorite irrigant; Group V - control, saline irrigation only; Group VI - samples taken immediately after sterilization. The sterilized teeth were infected with E. faecalis and incubated for one day at 37°C. Samples were collected from the canals before and after instrumentation and irrigation. The colony-forming units were then counted, transformed to log numbers and analysed statistically. RESULTS The reduction in the number of colony-forming units was statistically significant. Statistical analysis revealed bacterial reduction in the following order: GIV>GIII>GII>GI>GV. CONCLUSION Bacterial reduction is higher with greater-taper (0.06 mm/mm) instrumentation, and it is enhanced by the use of 5% sodium hypochlorite compared to 0.9% saline solution.
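The "transformed to log numbers" step is the standard log10 reduction computation on colony-forming unit counts; a tiny sketch, with CFU counts made up purely for illustration:

```python
import math

def log10_reduction(cfu_before, cfu_after):
    """Log10 reduction in colony-forming units after instrumentation/irrigation."""
    return math.log10(cfu_before) - math.log10(cfu_after)

r = log10_reduction(1.0e6, 2.0e3)   # a roughly 2.7-log reduction
```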
Yousefian Jazi, Nima
Spatial filtering and directional discrimination have been shown to be an effective pre-processing approach for noise reduction in microphone array systems. In dual-microphone hearing aids, fixed and adaptive beamforming techniques are the most common solutions for enhancing the desired speech and rejecting unwanted signals captured by the microphones. In fact, beamformers are widely utilized in systems where the spatial properties of the target source (usually in front of the listener) are assumed to be known. In this dissertation, dual-microphone coherence-based speech enhancement techniques applicable to hearing aids are proposed. All proposed algorithms operate in the frequency domain and (like traditional beamforming techniques) are purely based on the spatial properties of the desired speech source; they do not require any knowledge of noise statistics for calculating the noise reduction filter. This gives our algorithms the ability to address adverse noise conditions, such as situations where interfering talkers speak simultaneously with the target speaker. In such cases, adaptive beamformers lose their effectiveness in suppressing interference, since the noise reference channel cannot be built and updated accordingly. This difference is the main advantage of the proposed techniques over traditional adaptive beamformers. Furthermore, since the suggested algorithms are independent of noise estimation, they offer significant improvement in scenarios where the power level of the interfering sources is much higher than that of the target speech. The dissertation also shows that the premise behind the proposed algorithms can be extended to binaural hearing aids. The main purpose of the investigated techniques is to enhance the intelligibility level of speech, measured through subjective listening tests with normal hearing and cochlear implant listeners. However, the improvement in quality of the output speech achieved by the
Field Assessment Techniques for Bank Erosion Modeling
1990-11-22
Field Assessment Techniques for Bank Erosion Modeling. First Interim Report, prepared for the US Army European Research Office, USARDSG, Edison House. Includes sedimentation analysis sheets and guidelines for the use of sedimentation analysis sheets in the field, prepared for the US Army Engineer Waterways Experiment Station. (The remainder of the record is unrecoverable form-field residue listing material types, toe and mid-bank positions, and protection status entries.)
Advanced interaction techniques for medical models
Monclús, Eva
2014-01-01
Advances in medical visualization allow the analysis of anatomical structures using 3D models reconstructed from a stack of intensity-based images acquired through different techniques, with computed tomography (CT) being one of the most common modalities. A general medical volume graphics application usually includes an exploration task, which is sometimes preceded by an analysis process in which the anatomical structures of interest are first identified. ...
Memory Size Reduction Technique of SDF IFFT Architecture for OFDM-Based Applications
Jang, In-Gul; Cho, Kyung-Ju; Kim, Yong-Eun; Chung, Jin-Gyun
In this paper, to reduce the memory size requirements of IFFT for OFDM-based applications, we propose a new IFFT design technique based on a combined integer mapping of three IFFT input signals: modulated data, pilot and null signals. The proposed method focuses on reducing the size of memory cells in the first two stages of the single-path delay feedback (SDF) IFFT architectures since the first two stages require 75% of the total memory cells. By simulations of 2048-point IFFT design for cognitive radio systems, it is shown that the proposed IFFT design method achieves more than 13% reduction in gate count and 11% reduction in power consumption compared with conventional IFFT design.
Classification of ECG signals using LDA with factor analysis method as feature reduction technique.
Kaur, Manpreet; Arora, A S
2012-11-01
The analysis of ECG signal, especially the QRS complex as the most characteristic wave in ECG, is a widely accepted approach to study and to classify cardiac dysfunctions. In this paper, first wavelet coefficients calculated for QRS complex are taken as features. Next, factor analysis procedures without rotation and with orthogonal rotation (varimax, equimax and quartimax) are used for feature reduction. The procedure uses the 'Principal Component Method' to estimate component loadings. Further, classification has been done with a LDA classifier. The MIT-BIH arrhythmia database is used and five types of beats (normal, PVC, paced, LBBB and RBBB) are considered for analysis. Accuracy, sensitivity and positive predictivity are performance parameters used for comparing performance of feature reduction techniques. Results demonstrate that the equimax rotation method yields maximum average accuracy of 99.056% for unknown data sets among other used methods.
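The pipeline, principal-component feature reduction followed by an LDA classifier, can be sketched with NumPy on synthetic stand-in features. The data, dimensions and the two-class simplification are assumptions made for the sketch; the paper uses wavelet coefficients of five beat classes from MIT-BIH and also evaluates varimax, equimax and quartimax rotations, none of which are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for wavelet-coefficient features of two beat classes.
n, d = 200, 10
X0 = rng.normal(0.0, 1.0, (n, d))
X1 = rng.normal(0.8, 1.0, (n, d))
X = np.vstack([X0, X1])
y = np.repeat([0, 1], n)

def pca_reduce(X, k):
    """Principal-component feature reduction (unrotated loadings)."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def fisher_lda_accuracy(Z, y):
    """Two-class Fisher LDA with equal priors; returns training accuracy."""
    m0, m1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
    Sw = np.cov(Z[y == 0].T) + np.cov(Z[y == 1].T)   # within-class scatter
    w = np.linalg.solve(np.atleast_2d(Sw), m1 - m0)  # discriminant direction
    thresh = w @ (m0 + m1) / 2.0
    pred = (Z @ w > thresh).astype(int)
    return (pred == y).mean()

acc = fisher_lda_accuracy(pca_reduce(X, 3), y)
```

The paper's accuracy/sensitivity/positive-predictivity comparison is then a matter of repeating this with each rotation method and tabulating the metrics.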
Automatic identification of model reductions for discrete stochastic simulation
Wu, Sheng; Fu, Jin; Li, Hong; Petzold, Linda
2012-07-01
Multiple time scales in cellular chemical reaction systems present a challenge for the efficiency of stochastic simulation. Numerous model reductions have been proposed to accelerate the simulation of chemically reacting systems by exploiting time scale separation. However, these are often identified and deployed manually, requiring expert knowledge. This is time-consuming, prone to error, and opportunities for model reduction may be missed, particularly for large models. We propose an automatic model analysis algorithm using an adaptively weighted Petri net to dynamically identify opportunities for model reductions for both the stochastic simulation algorithm and tau-leaping simulation, with no requirement of expert knowledge input. Results are presented to demonstrate the utility and effectiveness of this approach.
A dynamic model reduction algorithm for atmospheric chemistry models
Santillana, Mauricio; Le Sager, Philippe; Jacob, Daniel J.; Brenner, Michael
2010-05-01
Understanding the dynamics of the chemical composition of our atmosphere is essential to address a wide range of environmental issues from air quality to climate change. Current models solve a very large and stiff system of nonlinear advection-reaction coupled partial differential equations in order to calculate the time evolution of the concentration of over a hundred chemical species. The numerical solution of this system of equations is difficult and the development of efficient and accurate techniques to achieve this has inspired research for the past four decades. In this work, we propose an adaptive method that dynamically adjusts the chemical mechanism to be solved to the local environment and we show that the use of our approach leads to accurate results and considerable computational savings. Our strategy consists of partitioning the computational domain in active and inactive regions for each chemical species at every time step. In a given grid-box, the concentration of active species is calculated using an accurate numerical scheme, whereas the concentration of inactive species is calculated using a simple and computationally inexpensive formula. We demonstrate the performance of the method by application to the GEOS-Chem global chemical transport model.
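The partitioning strategy can be caricatured for a single grid box with first-order chemistry dc/dt = P − L·c: species flagged active get a proper (here implicit Euler) integration step, while inactive ones get the cheap steady-state formula c = P/L. The activity test, threshold and rates below are illustrative assumptions, not GEOS-Chem's actual criteria.

```python
import numpy as np

def step_gridbox(c, P, L, dt, thresh=1e-3):
    """One chemistry step with per-species active/inactive partitioning."""
    c = c.copy()
    active = np.maximum(P, L * c) > thresh           # crude activity flag
    # Active species: one implicit Euler step of dc/dt = P - L*c.
    c[active] = (c[active] + dt * P[active]) / (1.0 + dt * L[active])
    # Inactive species: cheap photochemical steady-state value.
    c[~active] = P[~active] / L[~active]
    return c

c = np.array([1.0, 1e-6, 1.0])     # concentrations
P = np.array([5.0, 1e-6, 2.0])     # production rates
L = np.array([1.0, 1.0, 0.5])      # first-order loss frequencies
c_new = step_gridbox(c, P, L, dt=0.5)   # species 1 is treated as inactive
```

In the full model this partition is recomputed per grid box and per time step, so a species can be active in a polluted region and inactive elsewhere.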
Shortening treatment time in robotic radiosurgery using a novel node reduction technique
Energy Technology Data Exchange (ETDEWEB)
Water, Steven van de; Hoogeman, Mischa S.; Breedveld, Sebastiaan; Heijmen, Ben J. M. [Department of Radiation Oncology, Erasmus MC-Daniel den Hoed Cancer Center, Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands)
2011-03-15
Purpose: The fraction duration of robotic radiosurgery treatments can be reduced by generating more time-efficient treatment plans with a reduced number of node positions, beams, and monitor units (MUs). Node positions are preprogrammed locations where the robot can position the focal spot of the x-ray beam. As the time needed for the robot to travel between node positions takes up a large part of the treatment time, the aim of this study was to develop and evaluate a node reduction technique in order to reduce the treatment time per fraction for robotic radiosurgery. Methods: Node reduction was integrated into the inverse planning algorithm, developed in-house for the robotic radiosurgery modality. It involved repeated inverse optimization, each iteration excluding low-contribution node positions from the planning and resampling new candidate beams from the remaining node positions. Node reduction was performed until the exclusion of a single node position caused a constraint violation, after which the shortest treatment plan was selected retrospectively. Treatment plans were generated with and without node reduction for two lung cases of different complexity, one oropharyngeal case and one prostate case. Plan quality was assessed using the number of node positions, beams and MUs, and the estimated treatment time per fraction. All treatment plans had to fulfill all clinical dose constraints. Extra constraints were added to maintain the low-dose conformality and restrict skin doses during node reduction. Results: Node reduction resulted in 12 residual node positions, on average (reduction by 77%), at the cost of an increase in the number of beams and total MUs of 28% and 9%, respectively. Overall fraction durations (excluding patient setup) were shortened by 25% (range of 18%-40%), on average. Dose distributions changed only slightly, and dose in low-dose regions was effectively restricted by the additional constraints. Conclusions: The fraction duration of robotic
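The iterative exclude-and-reoptimize loop can be caricatured with an ordinary least-squares "planning" model: each iteration drops the lowest-weight node and refits, stopping when a dose-error constraint would be violated. The influence matrix, the 5% tolerance and the clipped-least-squares fit are stand-ins for the actual inverse planner, which handles full clinical dose constraints.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in for inverse planning: dose = D @ w, w >= 0, one weight per node.
n_vox, n_nodes = 50, 20
D = rng.uniform(0.0, 1.0, (n_vox, n_nodes))      # dose-influence matrix
target = D @ rng.uniform(0.0, 1.0, n_nodes)      # achievable prescription

def fit(D, cols):
    """Least-squares beam weights over the given nodes, crudely kept nonnegative."""
    w, *_ = np.linalg.lstsq(D[:, cols], target, rcond=None)
    w = np.clip(w, 0.0, None)
    err = np.linalg.norm(D[:, cols] @ w - target) / np.linalg.norm(target)
    return w, err

cols = list(range(n_nodes))
w, err = fit(D, cols)
while len(cols) > 1:
    trial = cols.copy()
    trial.pop(int(np.argmin(w)))                 # exclude the weakest node
    w_t, err_t = fit(D, trial)
    if err_t > 0.05:                             # dose-error "constraint"
        break                                    # one more node would violate it
    cols, w, err = trial, w_t, err_t
```

The real planner additionally resamples new candidate beams from the surviving node positions after each exclusion, which this sketch omits.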
Directory of Open Access Journals (Sweden)
Dirk Wagenaar
Full Text Available Typical streak artifacts known as metal artifacts occur in the presence of strongly attenuating materials in computed tomography (CT). Recently, vendors have started offering metal artifact reduction (MAR) techniques. In addition, a MAR technique called the metal deletion technique (MDT) is freely available and able to reduce metal artifacts using reconstructed images. Although a comparison of the MDT to other MAR techniques exists, a comparison of commercially available MAR techniques is lacking. The aim of this study was therefore to quantify the difference in effectiveness of the currently available MAR techniques of different scanners and the MDT. Three vendors were asked to use their preferential CT scanner for applying their MAR techniques. The scans were performed on a Philips Brilliance ICT 256 (S1), a GE Discovery CT 750 HD (S2) and a Siemens Somatom Definition AS Open (S3). The scans were made using an anthropomorphic head and neck phantom (Kyoto Kagaku, Japan). Three amalgam dental implants were constructed and inserted between the phantom's teeth. The average absolute error (AAE) was calculated for all reconstructions in the proximity of the amalgam implants. The commercial techniques reduced the AAE by 22.0±1.6%, 16.2±2.6% and 3.3±0.7% for S1 to S3, respectively. After applying the MDT to uncorrected scans of each scanner, the AAE was reduced by 26.1±2.3%, 27.9±1.0% and 28.8±0.5%, respectively. The difference in efficiency between the commercial techniques and the MDT was statistically significant for S2 (p=0.004) and S3 (p<0.001), but not for S1 (p=0.63). The effectiveness of MAR differs between vendors. S1 performed slightly better than S2 and both performed better than S3. Furthermore, for our phantom and outcome measure, the MDT was more effective than the commercial MAR technique on all scanners.
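The AAE figure of merit used in this study is, in essence, a mean absolute difference against an artifact-free reference, restricted to a region of interest near the implants. A minimal sketch (function name and toy values are illustrative, not taken from the study):

```python
import numpy as np

def average_absolute_error(recon, reference, roi_mask):
    """Mean absolute difference between a reconstruction and an
    artifact-free reference image, restricted to a region of interest."""
    return np.mean(np.abs(recon[roi_mask] - reference[roi_mask]))

# Toy illustration: a 4x4 "slice" with streak over/undershoot in the ROI.
reference = np.zeros((4, 4))
recon = reference.copy()
roi = np.zeros((4, 4), dtype=bool)
roi[1:3, 1:3] = True          # proximity of the implant (4 pixels)
recon[1, 1] = 40.0            # streak overshoot
recon[2, 2] = -40.0           # streak undershoot

aae = average_absolute_error(recon, reference, roi)
print(aae)  # 20.0
```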
Iterative methods used in overlap astrometric reduction techniques do not always converge
Rapaport, M.; Ducourant, C.; Colin, J.; Le Campion, J. F.
1993-04-01
In this paper we prove that the classical Gauss-Seidel type iterative methods used for the solution of the reduced normal equations occurring in overlapping reduction methods of astrometry do not always converge. We exhibit examples of divergence. We then analyze an alternative algorithm proposed by Wang (1985). We prove the consistency of this algorithm and verify that it can be convergent while the Gauss-Seidel method is divergent. We conjecture the convergence of Wang's method for the solution of astrometric problems using overlap techniques.
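The divergence phenomenon described here can be checked mechanically: a stationary Gauss-Seidel iteration converges for every starting guess if and only if the spectral radius of its iteration matrix is below one. A small sketch (the example matrices are illustrative, not the astrometric normal equations):

```python
import numpy as np

def gauss_seidel_spectral_radius(A):
    """Spectral radius of the Gauss-Seidel iteration matrix
    T = -(D + L)^{-1} U; the iteration converges iff rho(T) < 1."""
    DL = np.tril(A)                      # D + L (lower triangle incl. diagonal)
    U = np.triu(A, k=1)                  # strict upper triangle
    T = -np.linalg.solve(DL, U)
    return max(abs(np.linalg.eigvals(T)))

# Diagonally dominant system: Gauss-Seidel converges.
A_good = np.array([[4.0, 1.0], [1.0, 3.0]])
# Non-dominant system: the iteration diverges.
A_bad = np.array([[1.0, 2.0], [3.0, 4.0]])

print(gauss_seidel_spectral_radius(A_good))  # ~0.083 (< 1, convergent)
print(gauss_seidel_spectral_radius(A_bad))   # 1.5 (> 1, divergent)
```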
Millar-Blanchaer, Maxwell A; Hung, Li-Wei; Fitzgerald, Michael P; Wang, Jason J; Chilcote, Jeffrey; Graham, James R; Bruzzone, Sebastian; Kalas, Paul G
2016-01-01
The Gemini Planet Imager (GPI) has been designed for the direct detection and characterization of exoplanets and circumstellar disks. GPI is equipped with a dual channel polarimetry mode designed to take advantage of the inherently polarized light scattered off circumstellar material to further suppress the residual seeing halo left uncorrected by the adaptive optics. We explore how recent advances in data reduction techniques reduce systematics and improve the achievable contrast in polarimetry mode. In particular, we consider different flux extraction techniques when constructing datacubes from raw data, division by a polarized flat-field and a method for subtracting instrumental polarization. Using observations of unpolarized standard stars we find that GPI's instrumental polarization is consistent with being wavelength independent within our errors. In addition, we provide polarimetry contrast curves that demonstrate typical performance throughout the GPIES campaign.
Hwang, Danny P.
1999-01-01
A new turbulent skin friction reduction technology, called the microblowing technique, has been tested in supersonic flow (Mach number of 1.9) on specially designed porous plates with microholes. The skin friction was measured directly by a force balance and the boundary layer development was measured by a total pressure rake at the trailing edge of a test plate. The free stream Reynolds number was 1.0×10⁶ per meter. The turbulent skin friction coefficient ratios (Cf/Cf0) of seven porous plates are given in this report. Test results showed that the microblowing technique could reduce the turbulent skin friction in supersonic flow (by up to 90 percent below the value for a solid flat plate, which was even greater than in subsonic flow).
A COMPARATIVE STUDY OF DIMENSION REDUCTION TECHNIQUES FOR CONTENT-BASED IMAGE RETRIEVAL
Directory of Open Access Journals (Sweden)
G. Sasikala
2010-08-01
Full Text Available Efficient and effective image retrieval techniques are desired because of the explosive growth of digital images. Content-based image retrieval is a promising approach because of its automatic indexing and retrieval based on semantic features and visual appearance. This paper discusses a method for dimensionality reduction called Maximum Margin Projection (MMP). MMP aims at maximizing the margin between positive and negative samples in each local neighborhood and is designed for discovering the local manifold structure. Therefore, MMP is likely to be well suited to image retrieval systems, where nearest neighbor search is usually involved. The performance of these approaches is measured by a user evaluation. It is found that the MMP-based technique provides more functionalities and capabilities to support the features of information-seeking behavior and produces better performance in searching images.
Adaptive model reduction for nonsmooth discrete element simulation
Servin, Martin
2015-01-01
A method for adaptive model order reduction for nonsmooth discrete element simulation is developed and analysed in numerical experiments. Regions of the granular media that collectively move as rigid bodies are substituted with rigid bodies of the corresponding shape and mass distribution. The method also supports particles merging with articulated multibody systems. A model approximation error is defined and used to derive conditions for when and where to apply model reduction and refinement back into particles and smaller rigid bodies. Three methods for refinement are proposed and tested: prediction from contact events, trial solutions computed in the background and using split sensors. The computational performance can be increased by 5-50 times for model reduction levels between 70-95%.
Model reduction of nonlinear systems subject to input disturbances
Ndoye, Ibrahima
2017-07-10
The method of convex optimization is used as a tool for model reduction of a class of nonlinear systems in the presence of disturbances. It is shown that under some conditions the nonlinear disturbed system can be approximated by a reduced order nonlinear system with similar disturbance-output properties to the original plant. The proposed model reduction strategy preserves the nonlinearity and the input disturbance nature of the model. It guarantees a sufficiently small error between the outputs of the original and the reduced-order systems, and also maintains the properties of input-to-state stability. The matrices of the reduced order system are given in terms of a set of linear matrix inequalities (LMIs). The paper concludes with a demonstration of the proposed approach on model reduction of a nonlinear electronic circuit with additive disturbances.
Direct reduction of nickel catalyst with model bio-compounds
Cheng, F; Dupont, V; Twigg, MV
2017-01-01
The effects of temperature and S/C on the reduction extent and kinetics of a steam reforming NiO/α-Al₂O₃ catalyst were systematically investigated using five bio-compounds commonly produced during the fermentation, pyrolysis and gasification processes of biomass (acetic acid, ethanol, acetone, furfural and glucose). Reduction was also performed with methane and hydrogen for comparison. Kinetic modelling was applied to the NiO conversion range of 0–50% using the Hancock and Sharp method. The ...
Propensity score modelling in observational studies using dimension reduction methods.
Ghosh, Debashis
2011-07-01
Conditional independence assumptions are very important in causal inference modelling as well as in dimension reduction methodologies. These are two very strikingly different statistical literatures, and we study links between the two in this article. The concept of covariate sufficiency plays an important role, and we provide theoretical justification when dimension reduction and partial least squares methods will allow for valid causal inference to be performed. The methods are illustrated with application to a medical study and to simulated data.
Relations between two-dimensional models from dimensional reduction
Energy Technology Data Exchange (ETDEWEB)
Amaral, R.L.P.G.; Natividade, C.P. [Universidade Federal Fluminense, Niteroi, RJ (Brazil). Inst. de Fisica
1998-12-31
In this work we explore the consequences of dimensional reduction of the 3D Maxwell-Chern-Simons and some related models. A connection between topological mass generation in 3D and mass generation according to the Schwinger mechanism in 2D is obtained. Besides, a series of relationships are established by resorting to dimensional reduction and duality interpolating transformations. Nonabelian generalizations are also pointed out. (author) 10 refs.
PV O&M Cost Model and Cost Reduction
Energy Technology Data Exchange (ETDEWEB)
Walker, Andy
2017-03-15
This is a presentation on PV O&M cost model and cost reduction for the annual Photovoltaic Reliability Workshop (2017), covering estimating PV O&M costs, polynomial expansion, and implementation of Net Present Value (NPV) and reserve account in cost models.
Level of detail technique for plant models
Institute of Scientific and Technical Information of China (English)
Xiaopeng ZHANG; Qingqiong DENG; Marc JAEGER
2006-01-01
Realistic modelling and interactive rendering of forestry and landscape is a challenge in computer graphics and virtual reality. Recent developments in plant growth modelling and simulation lead to plant models faithful to botanical structure and development, representing not only the complex architecture of a real plant but also its functioning in interaction with its environment. The complex geometry and material of a large group of plants is a heavy burden even for high-performance computers, and they often overwhelm numerical calculation and graphic rendering power. Thus, software techniques are often developed to accelerate the rendering of groups of plants. In this paper, we focus on plant organs, i.e. leaves, flowers, fruits and internodes. Our approach is a simplification process of all sparse organs at the same time, i.e. Level of Detail (LOD) and multi-resolution models for plants. We explain the principle and construction of plant simplification, used to construct LOD and multi-resolution models of sparse organs and branches of big trees. These approaches benefit from basic knowledge of plant architecture, clustering tree organs according to biological structures. We illustrate the potential of our approach on several big virtual plants for geometrical compression or LOD model definition. Finally, we demonstrate the efficiency of the proposed LOD models for realistic rendering with a virtual scene composed of 184 mature trees.
Model and controller reduction of large-scale structures based on projection methods
Gildin, Eduardo
The design of low-order controllers for high-order plants is a challenging problem theoretically as well as from a computational point of view. Frequently, robust controller design techniques result in high-order controllers. It is then interesting to achieve reduced-order models and controllers while maintaining robustness properties. Controllers designed for large structures based on models obtained by finite element techniques yield large state-space dimensions. In this case, problems related to storage, accuracy and computational speed may arise. Thus, model reduction methods capable of addressing controller reduction problems are of primary importance to allow the practical applicability of advanced controller design methods for high-order systems. A challenging large-scale control problem that has emerged recently is the protection of civil structures, such as high-rise buildings and long-span bridges, from dynamic loadings such as earthquakes, high wind, heavy traffic, and deliberate attacks. Even though significant effort has been spent on the application of control theory to the design of civil structures in order to increase their safety and reliability, several challenging issues remain open problems for real-time implementation. This dissertation addresses the development of methodologies for controller reduction for real-time implementation in seismic protection of civil structures using projection methods. Three classes of schemes are analyzed for model and controller reduction: modal truncation, singular value decomposition methods and Krylov-based methods. A family of benchmark problems for structural control is used as a framework for a comparative study of model and controller reduction techniques. It is shown that classical model and controller reduction techniques, such as balanced truncation, modal truncation and moment matching by Krylov techniques, yield reduced-order controllers that do not guarantee stability of the closed-loop system, that
Noise Reduction Techniques and Scaling Effects towards Photon Counting CMOS Image Sensors
Directory of Open Access Journals (Sweden)
Assim Boukhayma
2016-04-01
Full Text Available This paper presents an overview of the read noise in CMOS image sensors (CISs) based on four-transistor (4T) pixels, column-level amplification and correlated multiple sampling. Starting from the input-referred noise analytical formula, process-level optimizations, device choices and circuit techniques at the pixel and column level of the readout chain are derived and discussed. The noise reduction techniques that can be implemented at the column and pixel level are verified by transient noise simulations, measurement and results from recently-published low noise CIS. We show how recently-reported process refinement, leading to the reduction of the sense node capacitance, can be combined with an optimal in-pixel source follower design to reach a sub-0.3 \(e^{-}_{rms}\) read noise at room temperature. This paper also discusses the impact of technology scaling on the CIS read noise. It shows how designers can take advantage of scaling and how the Metal-Oxide-Semiconductor (MOS) transistor gate leakage tunneling current appears as a challenging limitation. For this purpose, both simulation results of the gate leakage current and 1/f noise data reported from different foundries and technology nodes are used.
Directory of Open Access Journals (Sweden)
Korrakot Y. Tippayawong
2013-06-01
Full Text Available Corn is one of the major economic crops in Thailand. Corn postharvest operation involves various practices that consume a large amount of energy. Different energy conservation measures have been implemented, but logistics considerations are not normally employed. In this work, an attempt has been made to demonstrate that logistics techniques can offer a significant reduction in energy and cost. The main objective of this work is to identify and demonstrate possible approaches to improving energy efficiency and reducing operating cost for a dried corn warehouse operator. Three main problems are identified: (i) relatively high fuel consumption for the internal transfer process, (ii) low quality of dried corn, and (iii) excess expenditure on outbound transportation. Solutions are proposed and implemented using logistics operations. Improvement is achieved using plant layout and shortest path techniques, resulting in a reduction of almost 50% in energy consumption for the internal transfer process. Installation of an air distributor in the grain storage unit results in a decrease in loss due to poor-quality dried corn from 17% to 10%. Excess expenditure on dried corn distribution is reduced by 6% with application of a global positioning system.
Directory of Open Access Journals (Sweden)
Bradford S. Waddell
2016-03-01
Full Text Available Dislocation of the hip is a well-described event that occurs in conjunction with high-energy trauma or postoperatively after total hip arthroplasty. Bigelow first described closed treatment of a dislocated hip in 1870, and in the last decade many reduction techniques have been proposed. In this article, we review all described techniques for the reduction of hip dislocation while focusing on physician safety. Furthermore, we introduce a modified technique for the reduction of posterior hip dislocation that allows the physician to adhere to the back safety principles set forth by the Occupational Safety and Health Administration.
A general technique to train language models on language models
Nederhof, MJ
2005-01-01
We show that under certain conditions, a language model can be trained on the basis of a second language model. The main instance of the technique trains a finite automaton on the basis of a probabilistic context-free grammar, such that the Kullback-Leibler distance between grammar and trained auto
Incorporation of RAM techniques into simulation modeling
Energy Technology Data Exchange (ETDEWEB)
Nelson, S.C. Jr.; Haire, M.J.; Schryver, J.C.
1995-07-01
This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model that represents the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve operational performance and reduce manpower. The network simulation model used in this work is task based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment and includes failure management subnetworks. RAM information and other performance measures are collected which have impact on design requirements. Design changes are evaluated through "what if" questions, sensitivity studies, and battle scenario changes.
Energy Technology Data Exchange (ETDEWEB)
Rahkamaa-Tolonen, K.
2001-07-01
Emissions from vehicles are suppressed by catalytic conversion, i.e. total oxidation of carbon monoxide and hydrocarbons and reduction of nitrogen oxides. The ongoing demand for lower emissions requires more detailed knowledge of the catalytic reaction mechanisms and kinetics at the level of elementary steps, especially because of the mutual interactions in the complex reaction mixture. The reaction mechanisms for the abatement of nitrogen oxides (NOx) are of particular interest, since these are environmentally very harmful compounds. Transient experimental techniques can be used as a tool to understand the reaction mechanisms and to develop mathematical models allowing simulation and optimisation of the behaviour of three-way catalytic converters. In chemical kinetics, isotope-labelled reactants are frequently employed to follow reaction pathways and to determine reaction mechanisms. The kinetics and mechanisms of the catalytic reduction of nitrogen oxide (NO) by hydrogen, as well as the self-decomposition of NO and N2O, were studied over alumina-based palladium and rhodium-alumina monoliths. In addition, NO reduction with H2 and D2, isotope exchange of hydrogen atoms in water, ammonia and hydrogen with deuterium, as well as adsorption of ammonia and water on the Pd-monolith were studied with transient experiments. Transient step-response experiments, isotopic jumping techniques, steady-state isotopic-transient analysis, temperature programmed desorption (TPD) and Fourier-transform infrared spectroscopy (FT-IR) were used as experimental techniques. The catalysts were characterised by carbon monoxide chemisorption, nitrogen physisorption and X-ray photoelectron spectroscopy (XPS). Nitrogen, nitrous oxide, ammonia, and water were detected as reaction products in NO reduction by hydrogen. The transient and FT-IR experiments yielded information about the surface reaction mechanisms. The dissociation of NO on the catalyst surface is the
Cross gramian approximation with Laguerre polynomials for model order reduction
Perev, Kamen
2015-11-01
This paper considers the problem of model order reduction by approximate balanced truncation with Laguerre polynomial approximation of the system cross gramian. The cross gramian contains information both on the reachability of the system and on its observability. The main property of the cross gramian for a square symmetric stable linear system is that its square is equal to the product of the reachability and observability gramians; therefore, the absolute values of its eigenvalues are equal to the Hankel singular values. This is the reason to use the cross gramian for computing balancing transformations for model reduction. Laguerre polynomial series representations are used to approximate the cross gramian of the system at infinity. The orthogonal Laguerre polynomials possess good convergence properties and make it possible to reduce the computational complexity of the model reduction problem. Numerical experiments are performed confirming the effectiveness of the proposed method.
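The stated cross-gramian property is easy to verify numerically on a toy symmetric system (the matrices below are illustrative; the Laguerre approximation itself is not reproduced here). The cross gramian solves a Sylvester equation, the gramians solve Lyapunov equations, and a similarity transform makes the check non-trivial:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_sylvester

# A stable symmetric SISO system (A = A^T, C = B^T), pushed through a
# similarity transform T so the property is checked in non-trivial coordinates.
A0 = np.array([[-2.0, 1.0], [1.0, -3.0]])
B0 = np.array([[1.0], [2.0]])
C0 = B0.T
T = np.array([[1.0, 0.5], [0.2, 2.0]])            # any invertible T
Tinv = np.linalg.inv(T)
A, B, C = T @ A0 @ Tinv, T @ B0, C0 @ Tinv

# Gramians: A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0;
# the cross gramian X solves the Sylvester equation A X + X A + B C = 0.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)
X = solve_sylvester(A, A, -B @ C)

# For a square symmetric system X^2 = P Q, so |eig(X)| are the Hankel
# singular values sqrt(eig(P Q)).
hsv = np.sqrt(np.linalg.eigvals(P @ Q).real)
print(np.allclose(X @ X, P @ Q))                  # True
print(np.allclose(np.sort(np.abs(np.linalg.eigvals(X))), np.sort(hsv)))  # True
```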
Adaptive model reduction for nonsmooth discrete element simulation
Servin, Martin; Wang, Da
2016-03-01
A method for adaptive model order reduction for nonsmooth discrete element simulation is developed and analysed in numerical experiments. Regions of the granular media that collectively move as rigid bodies are substituted with rigid bodies of the corresponding shape and mass distribution. The method also support particles merging with articulated multibody systems. A model approximation error is defined and used to derive conditions for when and where to apply reduction and refinement back into particles and smaller rigid bodies. Three methods for refinement are proposed and tested: prediction from contact events, trial solutions computed in the background and using split sensors. The computational performance can be increased by 5-50 times for model reduction level between 70-95 %.
Order reduction for a model of marine bacteriophage evolution
Pagliarini, Silvia; Korobeinikov, Andrei
2017-02-01
A typical mechanistic model of viral evolution necessarily includes several time scales, which can differ by orders of magnitude. Such a diversity of time scales makes analysis of these models difficult, so reducing the order of a model is highly desirable. A typical approach applied to such slow-fast (or singularly perturbed) systems is the time-scale separation technique, where constructing the so-called quasi-steady-state approximation is the usual first step. While this technique is commonly applied, in some cases its straightforward application can lead to unsatisfactory results. In this paper we construct the quasi-steady-state approximation for a model of evolution of marine bacteriophages based on the Beretta-Kuang model. We show that for this particular model the quasi-steady-state approximation is able to produce only a qualitative but not a quantitative fit.
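The quasi-steady-state idea can be illustrated on a generic slow-fast toy system (not the Beretta-Kuang model): setting the fast equation to zero slaves the fast variable to the slow one, reducing the order by one at the cost of a small error after the initial transient. A sketch:

```python
import numpy as np
from scipy.integrate import solve_ivp

eps = 1e-3  # small parameter: y relaxes much faster than x evolves

def full(t, z):
    x, y = z
    return [-x * y, (x - y) / eps]     # slow x, fast y

def reduced(t, x):
    # Quasi-steady state: (x - y)/eps = 0 gives y ~ x, which reduces
    # the system to a single slow equation x' = -x^2.
    return [-x[0] * x[0]]

t_span = (0.0, 2.0)
t_eval = np.linspace(0.0, 2.0, 50)
sol_full = solve_ivp(full, t_span, [1.0, 0.0], t_eval=t_eval,
                     method="Radau", rtol=1e-8, atol=1e-10)  # stiff solver
sol_red = solve_ivp(reduced, t_span, [1.0], t_eval=t_eval,
                    rtol=1e-8, atol=1e-10)

# After the initial fast transient the reduced model tracks the full one
# to within O(eps).
err = np.max(np.abs(sol_full.y[0][5:] - sol_red.y[0][5:]))
print(err < 1e-2)  # True
```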
H∞ /H2 model reduction through dilated linear matrix inequalities
DEFF Research Database (Denmark)
Adegas, Fabiano Daher; Stoustrup, Jakob
2012-01-01
This paper presents sufficient dilated linear matrix inequality (LMI) conditions for the $H_{\infty}$ and $H_{2}$ model reduction problem. A special structure of the auxiliary (slack) variables allows the original model of order $n$ to be reduced to an order $r=n/s$ where $n,r,s in field...... not satisfactorily approximates the original system, an iterative algorithm based on dilated LMIs is proposed to significantly improve the approximation bound. The effectiveness of the method is assessed by numerical experiments. The method is also applied to the $H_2$ order reduction of a flexible wind turbine...
Improved modeling techniques for turbomachinery flow fields
Energy Technology Data Exchange (ETDEWEB)
Lakshminarayana, B.; Fagan, J.R. Jr.
1995-12-31
This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamics (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction between various blade rows and by blade-to-blade variation of flow properties. This will be accomplished in a cooperative program by Penn State University and the Allison Engine Company. The tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor.
Geometrical geodesy techniques in Goddard earth models
Lerch, F. J.
1974-01-01
The method for combining geometrical data with satellite dynamical and gravimetry data for the solution of geopotential and station location parameters is discussed. Geometrical tracking data (simultaneous events) from the global network of BC-4 stations are currently being processed in a solution that will greatly enhance the geodetic world system of stations. Previously, the stations in Goddard earth models have been derived only from dynamical tracking data. A linear regression model is formulated for combining the data, based upon the statistical technique of weighted least squares. Reduced normal equations, independent of satellite and instrumental parameters, are derived for the solution of the geodetic parameters. Exterior standards for the evaluation of the solution and for the scale of the earth's figure are discussed.
Bayesian model reduction and empirical Bayes for group (DCM) studies.
Friston, Karl J; Litvak, Vladimir; Oswal, Ashwini; Razi, Adeel; Stephan, Klaas E; van Wijk, Bernadette C M; Ziegler, Gabriel; Zeidman, Peter
2016-03-01
This technical note describes some Bayesian procedures for the analysis of group studies that use nonlinear models at the first (within-subject) level - e.g., dynamic causal models - and linear models at subsequent (between-subject) levels. Its focus is on using Bayesian model reduction to finesse the inversion of multiple models of a single dataset or a single (hierarchical or empirical Bayes) model of multiple datasets. These applications of Bayesian model reduction allow one to consider parametric random effects and make inferences about group effects very efficiently (in a few seconds). We provide the relatively straightforward theoretical background to these procedures and illustrate their application using a worked example. This example uses a simulated mismatch negativity study of schizophrenia. We illustrate the robustness of Bayesian model reduction to violations of the (commonly used) Laplace assumption in dynamic causal modelling and show how its recursive application can facilitate both classical and Bayesian inference about group differences. Finally, we consider the application of these empirical Bayesian procedures to classification and prediction.
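For Gaussian priors and posteriors, Bayesian model reduction scores a reduced prior directly from the full-model posterior, without re-inverting the model. A minimal linear-Gaussian sketch in precision form (variable names and the toy regression are illustrative, not the DCM machinery), checked against direct inference under the reduced prior, where the reduction formulas are exact:

```python
import numpy as np

def bmr_posterior(mu, P, mu0, P0, mu0_r, P0_r):
    """Bayesian model reduction for Gaussian densities (precision form):
    given the full-model prior (mu0, P0) and posterior (mu, P), return the
    posterior implied by a reduced prior (mu0_r, P0_r) without re-fitting."""
    P_r = P + P0_r - P0
    mu_r = np.linalg.solve(P_r, P @ mu + P0_r @ mu0_r - P0 @ mu0)
    return mu_r, P_r

# Linear-Gaussian check: y = H theta + noise, so BMR must reproduce the
# posterior obtained by direct inference under the reduced prior.
rng = np.random.default_rng(1)
H = rng.standard_normal((5, 2))
y = rng.standard_normal(5)
Lam = np.eye(5)                                  # observation precision

mu0, P0 = np.zeros(2), np.eye(2)                 # full (broad) prior
mu0_r, P0_r = np.zeros(2), np.diag([1.0, 1e6])   # reduced prior pins theta_2

def posterior(mu0, P0):
    P = P0 + H.T @ Lam @ H
    mu = np.linalg.solve(P, P0 @ mu0 + H.T @ Lam @ y)
    return mu, P

mu_full, P_full = posterior(mu0, P0)
mu_bmr, P_bmr = bmr_posterior(mu_full, P_full, mu0, P0, mu0_r, P0_r)
mu_dir, P_dir = posterior(mu0_r, P0_r)
print(np.allclose(mu_bmr, mu_dir), np.allclose(P_bmr, P_dir))  # True True
```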
Experimental Study of Active Techniques for Blade/Vortex Interaction Noise Reduction
Kobiki, Noboru; Murashige, Atsushi; Tsuchihashi, Akihiko; Yamakawa, Eiichi
This paper presents experimental results on the effect of Higher Harmonic Control (HHC) and an active flap on Blade/Vortex Interaction (BVI) noise. Wind tunnel tests were performed with a 1-bladed rotor system to evaluate the simplified BVI phenomenon while avoiding the complicated aerodynamic interference that is characteristically and inevitably caused by a multi-bladed rotor. Another merit of this 1-bladed rotor system is that several active techniques can be evaluated under the same conditions, installed in the same rotor system. The effects of the active techniques on BVI noise reduction were evaluated comprehensively by the sound pressure, the blade/vortex miss distance obtained by Laser Light Sheet (LLS), the blade surface pressure distribution and the tip vortex structure by Particle Image Velocimetry (PIV). A good correlation among these quantities, describing the effect of the active techniques on the BVI conditions, is obtained. The experiments show that the blade/vortex miss distance is more dominant for BVI noise than the other two BVI governing factors, blade lift and vortex strength at the moment of BVI.
Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun
2015-01-01
Here, the speckle noise in ultrasonic images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed and compared. In addition, the performances were compared in order to derive the optimal input conditions. To evaluate the speckle noise removal performance, an image fusion algorithm was applied to the ultrasound images and comparatively analyzed against the original image without the algorithm. As a result, applying DWT and filtering techniques alone caused information loss and altered noise characteristics, and did not give the most significant noise reduction performance. Conversely, an image fusion method applying SRAD-original conditions preserved the key information in the original image, and the speckle noise was removed. Based on such characteristics, the input conditions of SRAD-original had the best denoising performance with the ultrasound images. From this study, the best denoising technique proposed based on the results was confirmed to have a high potential for clinical application.
Alexander, Tiffaney Miller
2017-01-01
Research results have shown that more than half of aviation, aerospace and aeronautics mishaps and incidents are attributed to human error. As a part of quality within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and classified in order to manage human error. This presentation will provide a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factors Analysis and Classification System (HFACS) as analysis tools to identify contributing factors, their impact on human error events, and to predict the human error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of launch vehicle related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.
Waste Reduction Model (WARM) Resources for Small Businesses and Organizations
This page provides a brief overview of how EPA’s Waste Reduction Model (WARM) can be used by small businesses and organizations. The page includes a brief summary of uses of WARM for the audience and links to other resources.
Model Reduction for Nonlinear Systems by Incremental Balanced Truncation
Besselink, Bart; van de Wouw, Nathan; Scherpen, Jacquelien M. A.; Nijmeijer, Henk
2014-01-01
In this paper, the method of incremental balanced truncation is introduced as a tool for model reduction of nonlinear systems. Incremental balanced truncation provides an extension of balanced truncation for linear systems towards the nonlinear case and differs from existing nonlinear balancing tech
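For reference, the linear balanced truncation that the incremental method extends can be sketched with the square-root algorithm. This is a minimal dense NumPy sketch for small stable systems; the Kronecker-based Lyapunov solver is for illustration only and does not scale:

```python
import numpy as np

def lyap(A, Q):
    """Solve A P + P A^T + Q = 0 via the Kronecker formulation
    (adequate for the small dense systems of this sketch)."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    return np.linalg.solve(K, -Q.ravel()).reshape(n, n)

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable system
    dx/dt = A x + B u, y = C x, keeping r states."""
    P = lyap(A, B @ B.T)            # controllability Gramian
    Qo = lyap(A.T, C.T @ C)         # observability Gramian
    Lp = np.linalg.cholesky(P)
    Lq = np.linalg.cholesky(Qo)
    U, s, Vt = np.linalg.svd(Lq.T @ Lp)   # s: Hankel singular values
    Wr = Lq @ U[:, :r] / np.sqrt(s[:r])   # left projection
    Vr = Lp @ Vt[:r].T / np.sqrt(s[:r])   # right projection (Wr.T @ Vr = I)
    return Wr.T @ A @ Vr, Wr.T @ B, C @ Vr, s
```

States with small Hankel singular values contribute little to the input-output map and are the ones discarded; the incremental variant in the paper generalizes this balancing idea to nonlinear systems.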
Model Reduction by Moment Matching for Linear Switched Systems
DEFF Research Database (Denmark)
Bastug, Mert; Petreczky, Mihaly; Wisniewski, Rafal;
2014-01-01
A moment-matching method for the model reduction of linear switched systems (LSSs) is developed. The method is based upon a partial realization theory of LSSs and is similar to the Krylov subspace methods used for moment matching for linear systems. The results are illustrated by numeric...
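The Krylov-subspace flavour of moment matching referred to above can be sketched for a plain (non-switched) linear system; this is a one-sided Arnoldi projection about s = 0, not the LSS-specific construction of the paper:

```python
import numpy as np

def krylov_reduce(A, B, C, k):
    """Project (A, B, C) onto the order-k Krylov subspace
    span{A^-1 B, A^-2 B, ...}, matching the first k moments of the
    transfer function C (sI - A)^-1 B about s = 0."""
    n = A.shape[0]
    V = np.zeros((n, k))
    v = np.linalg.solve(A, B).ravel()
    for j in range(k):
        # modified Gram-Schmidt against previous basis vectors
        for i in range(j):
            v -= (V[:, i] @ v) * V[:, i]
        V[:, j] = v / np.linalg.norm(v)
        v = np.linalg.solve(A, V[:, j])   # next Krylov direction
    return V.T @ A @ V, V.T @ B, C @ V
```

Because A^-1 B lies in the projection subspace, the reduced model reproduces the zeroth moment C A^-1 B exactly, and higher moments up to order k follow from the standard partial-realization argument.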
Model reduction for controller design for infinite-dimensional systems
Opmeer, Mark Robertus
2006-01-01
The main aim of this thesis is, as the title suggests, the presentation of results on model reduction for controller design for infinite-dimensional systems. The obtained results are presented for both discrete-time systems and continuous-time systems. They are perfect generalizations of the corresp
Energy-efficient data reduction techniques for wireless seizure detection systems.
Chiang, Joyce; Ward, Rabab K
2014-01-24
The emergence of wireless sensor networks (WSNs) has motivated a paradigm shift in patient monitoring and disease control. Epilepsy management is one of the areas that could especially benefit from the use of WSN. By using miniaturized wireless electroencephalogram (EEG) sensors, it is possible to perform ambulatory EEG recording and real-time seizure detection outside clinical settings. One major consideration in using such a wireless EEG-based system is the stringent battery energy constraint at the sensor side. Different solutions to reduce the power consumption at this side are therefore highly desired. The conventional approach incurs a high power consumption, as it transmits the entire EEG signals wirelessly to an external data server (where seizure detection is carried out). This paper examines the use of data reduction techniques for reducing the amount of data that has to be transmitted and, thereby, reducing the required power consumption at the sensor side. Two data reduction approaches are examined: compressive sensing-based EEG compression and low-complexity feature extraction. Their performance is evaluated in terms of seizure detection effectiveness and power consumption. Experimental results show that by performing low-complexity feature extraction at the sensor side and transmitting only the features that are pertinent to seizure detection to the server, a considerable overall saving in power is achieved. The battery life of the system is increased by 14 times, while the same seizure detection rate as the conventional approach (95%) is maintained.
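The low-complexity feature-extraction idea, computing a handful of numbers per EEG window on the sensor instead of streaming raw samples, can be sketched as follows. The feature choices here (line length, energy, zero crossings) are common in seizure detection but are illustrative; the paper's exact feature set is not reproduced:

```python
import numpy as np

def seizure_features(window):
    """Low-complexity features of one EEG window, as might be computed
    on the sensor side so that only a short feature vector (not the
    raw signal) is transmitted to the server."""
    w = np.asarray(window, dtype=float)
    line_length = np.sum(np.abs(np.diff(w)))                     # total variation
    energy = np.sum(w ** 2)                                      # signal energy
    zero_crossings = np.sum(np.diff(np.signbit(w).astype(int)) != 0)
    return np.array([line_length, energy, zero_crossings])
```

Transmitting three floats per window instead of hundreds of raw samples is where the power saving comes from; the detector on the server then classifies the feature stream.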
A Fourier dimensionality reduction model for big data interferometric imaging
Vijay Kartik, S.; Carrillo, Rafael E.; Thiran, Jean-Philippe; Wiaux, Yves
2017-06-01
Data dimensionality reduction in radio interferometry can provide savings of computational resources for image reconstruction through reduced memory footprints and lighter computations per iteration, which is important for the scalability of imaging methods to the big data setting of the next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of the compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator and hence its sampling properties are also preserved in light of the compressed sensing theory. We show that this can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensionality reduction model ensures a fast implementation of the full measurement operator, essential for any iterative image reconstruction method. The proposed reduction also preserves the independent and identically distributed Gaussian properties of the original measurement noise. For convex optimization-based imaging algorithms, this is key to justify the use of the standard ℓ2-norm as the data fidelity term. Our simulations confirm that this dimensionality reduction approach can be leveraged by convex optimization algorithms with no loss in imaging quality relative to reconstructing the image from the complete visibility data set. Reconstruction results in simulation settings with no direction dependent effects or calibration errors show promising performance of the proposed dimensionality reduction. Further tests on real data are planned as an extension of the current work. matlab code implementing the
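The proposed embedding projects the visibility data onto the span of the left singular vectors of the measurement operator. A toy dense version is sketched below with an assumed random operator; the authors' implementation avoids the explicit SVD via the dirty-image/weighted-FFT approximation described above:

```python
import numpy as np

def reduce_visibilities(y, Phi, k):
    """Embed visibility data y into the span of the k leading left
    singular vectors of the measurement operator Phi."""
    U, s, Vt = np.linalg.svd(Phi, full_matrices=False)
    Uk = U[:, :k]
    return Uk.conj().T @ y        # reduced data vector, length k

# Toy example: Phi maps an image of N pixels to M > N visibilities.
rng = np.random.default_rng(0)
N, M = 16, 40
Phi = rng.standard_normal((M, N))
x = rng.standard_normal(N)
y = Phi @ x
y_red = reduce_visibilities(y, Phi, k=N)  # dimensionality reduced below M
```

Since the discarded directions are orthogonal to the range of Phi, noiseless data loses nothing under the embedding, and i.i.d. Gaussian noise stays i.i.d. Gaussian after the orthonormal projection.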
Model assisted qualification of NDE techniques
Ballisat, Alexander; Wilcox, Paul; Smith, Robert; Hallam, David
2017-02-01
The costly and time consuming nature of empirical trials typically performed for NDE technique qualification is a major barrier to the introduction of NDE techniques into service. The use of computational models has been proposed as a method by which the process of qualification can be accelerated. However, given the number of possible parameters present in an inspection, the number of combinations of parameter values scales as a power law and running simulations at all of these points rapidly becomes infeasible. Given that many NDE inspections result in a single-valued scalar quantity, such as a phase or amplitude, using suitable sampling and interpolation methods significantly reduces the number of simulations that have to be performed. This paper presents initial results of applying Latin Hypercube Designs and Multivariate Adaptive Regression Splines to the inspection of a fastener hole using an oblique ultrasonic shear wave inspection. It is demonstrated that an accurate mapping of the response of the inspection for the variations considered can be achieved by sampling only a small percentage of the parameter space of variations and that the required percentage decreases as the number of parameters and the number of possible sample points increases. It is then shown how the outcome of this process can be used to assess the reliability of the inspection through commonly used metrics such as probability of detection, thereby providing an alternative methodology to the current practice of performing empirical probability of detection trials.
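A Latin hypercube design of the kind used here is straightforward to generate; this is the generic unit-cube version (each parameter range would be rescaled per inspection variable):

```python
import numpy as np

def latin_hypercube(n_samples, n_params, rng=None):
    """Latin hypercube design on the unit hypercube: each parameter's
    range is split into n_samples equal strata, each stratum is sampled
    exactly once, and strata are paired across parameters by random
    permutation."""
    rng = np.random.default_rng(rng)
    u = rng.random((n_samples, n_params))        # jitter within each stratum
    out = np.empty((n_samples, n_params))
    for j in range(n_params):
        perm = rng.permutation(n_samples)
        out[:, j] = (perm + u[:, j]) / n_samples
    return out
```

The stratification guarantees good one-dimensional coverage with far fewer points than a full grid, which is what lets the surrogate model (e.g. MARS) be fitted from a small fraction of the parameter space.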
Coping with Complexity Model Reduction and Data Analysis
Gorban, Alexander N
2011-01-01
This volume contains the extended version of selected talks given at the international research workshop 'Coping with Complexity: Model Reduction and Data Analysis', Ambleside, UK, August 31 - September 4, 2009. This book is deliberately broad in scope and aims at promoting new ideas and methodological perspectives. The topics of the chapters range from theoretical analysis of complex and multiscale mathematical models to applications in e.g., fluid dynamics and chemical kinetics.
Correlation analysis of PCB and comparison of test-analysis model reduction methods
Institute of Scientific and Technical Information of China (English)
Xu Fei; Li Chuanri; Jiang Tongmin; Rong Shuanglong
2014-01-01
The validity of correlation analysis between finite element model (FEM) and modal test data is strongly affected by three factors, i.e., quality of excitation and measurement points in modal test, FEM reduction methods, and correlation check techniques. A new criterion based on modified mode participation (MMP) for choosing the best excitation point is presented. Comparison between this new criterion and mode participation (MP) criterion is made by using Case 1 with a simple printed circuit board (PCB). The result indicates that this new criterion produces better results. In Case 2, 35 measurement points are selected to perform modal test and correlation analysis while 9 are selected in Case 3. System equivalent reduction expansion process (SEREP), modal assurance criteria (MAC), coordinate modal assurance criteria (CoMAC), pseudo orthogonality check (POC) and coordinate orthogonality check (CORTHOG) are used to show the error introduced by modal test in Cases 2 and 3. Case 2 shows that additional errors which cannot be identified by using CoMAC can be found by using CORTHOG. In both Cases 2 and 3, Guyan reduction, improved reduced system (IRS) method, SEREP and Hybrid reduction are compared for accuracy and robustness. The results suggest that the quality of the reduction process is problem dependent. However, the IRS method is an improvement over the Guyan reduction, and the Hybrid reduction is an improvement over the SEREP reduction.
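Of the correlation checks listed, the modal assurance criterion (MAC) is the simplest to state; a sketch for real-valued mode shapes (columns of each matrix are modes):

```python
import numpy as np

def mac(phi_a, phi_t):
    """Modal assurance criterion matrix between analytical (phi_a) and
    test (phi_t) mode-shape matrices. Entry (i, j) near 1 indicates
    that analytical mode i and test mode j are well correlated."""
    num = np.abs(phi_a.T @ phi_t) ** 2
    den = np.outer(np.sum(phi_a * phi_a, axis=0),
                   np.sum(phi_t * phi_t, axis=0))
    return num / den
```

MAC is insensitive to mode-shape scaling, which is why the abstract pairs it with orthogonality checks (POC, CORTHOG) that bring the mass matrix back in and can expose errors MAC misses.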
Kocadal, Onur; Yucel, Mehmet; Pepe, Murad; Aksahin, Ertugrul; Aktekin, Cem Nuri
2016-12-01
Among the most important predictors of functional results of treatment of syndesmotic injuries is the accurate restoration of the syndesmotic space. The purpose of this study was to investigate the reduction performance of screw fixation and suture-button techniques using images obtained from computed tomography (CT) scans. Patients aged 65 years or younger who were treated with screw or suture-button fixation for syndesmotic injuries accompanying ankle fractures between January 2012 and March 2015 were retrospectively reviewed in our regional trauma unit. A total of 52 patients were included in the present study. Fixation was performed with syndesmotic screws in 26 patients and suture-button fixation in 26 patients. The patients were divided into 2 groups according to the fixation methods. Postoperative CT scans were used for radiologic evaluation. Four parameters (anteroposterior reduction, rotational reduction, the cross-sectional syndesmotic area, and the distal tibiofibular volumes) were taken into consideration for the radiologic assessment. Functional evaluation of patients was done using the American Orthopaedic Foot & Ankle Society (AOFAS) ankle-hindfoot scale at the final follow-up. The mean follow-up period was 16.7 ± 11.0 months, and the mean age was 44.1 ± 13.2 years. There was a statistically significant decrease in the degree of fibular rotation (P = .03) and an increase in the upper syndesmotic area (P = .006) compared with the contralateral limb in the screw fixation group. In the suture-button fixation group, there was a statistically significant increase in the lower syndesmotic area (P = .02) and distal tibiofibular volumes (P = .04) compared with the contralateral limbs. The mean AOFAS scores were 88.4 ± 9.2 and 86.1 ± 14.0 in the suture-button fixation and screw fixation group, respectively. There was no statistically significant difference in the functional ankle joint scores between the groups. Although the functional outcomes were similar, the
Directory of Open Access Journals (Sweden)
A. D. Chukalla
2015-07-01
Full Text Available Consumptive water footprint (WF) reduction in irrigated crop production is essential given the increasing competition for fresh water. This study explores the effect of three management practices on the soil water balance and plant growth, specifically on evapotranspiration (ET) and yield (Y) and thus the consumptive WF of crops (ET/Y). The management practices are: four irrigation techniques (furrow, sprinkler, drip and subsurface drip (SSD)); four irrigation strategies (full (FI), deficit (DI), supplementary (SI) and no irrigation); and three mulching practices (no mulching, organic (OML) and synthetic (SML) mulching). Various cases were considered: arid, semi-arid, sub-humid and humid environments; wet, normal and dry years; three soil types; and three crops. The AquaCrop model and the global WF accounting standard were used to relate the management practices to effects on ET, Y and WF. For each management practice, the associated green, blue and total consumptive WF were compared to the reference case (furrow irrigation, full irrigation, no mulching). The average reduction in the consumptive WF is: 8–10 % if we change from the reference to drip or SSD; 13 % when changing to OML; 17–18 % when moving to drip or SSD in combination with OML; and 28 % for drip or SSD in combination with SML. All before-mentioned reductions increase by one or a few per cent when moving from full to deficit irrigation. Reduction in overall consumptive WF always goes together with an increasing ratio of green to blue WF. The WF of growing a crop for a particular environment is smallest under DI, followed by FI, SI and rain-fed. Growing crops with sprinkler irrigation has the largest consumptive WF, followed by furrow, drip and SSD. Furrow irrigation has a smaller consumptive WF compared with sprinkler, even though the classical measure of "irrigation efficiency" for furrow is lower.
Chukalla, A. D.; Krol, M. S.; Hoekstra, A. Y.
2015-12-01
Consumptive water footprint (WF) reduction in irrigated crop production is essential given the increasing competition for freshwater. This study explores the effect of three management practices on the soil water balance and plant growth, specifically on evapotranspiration (ET) and yield (Y) and thus the consumptive WF of crops (ET / Y). The management practices are four irrigation techniques (furrow, sprinkler, drip and subsurface drip (SSD)), four irrigation strategies (full (FI), deficit (DI), supplementary (SI) and no irrigation), and three mulching practices (no mulching, organic (OML) and synthetic (SML) mulching). Various cases were considered: arid, semi-arid, sub-humid and humid environments in Israel, Spain, Italy and the UK, respectively; wet, normal and dry years; three soil types (sand, sandy loam and silty clay loam); and three crops (maize, potato and tomato). The AquaCrop model and the global WF accounting standard were used to relate the management practices to effects on ET, Y and WF. For each management practice, the associated green, blue and total consumptive WF were compared to the reference case (furrow irrigation, full irrigation, no mulching). The average reduction in the consumptive WF is 8-10 % if we change from the reference to drip or SSD, 13 % when changing to OML, 17-18 % when moving to drip or SSD in combination with OML, and 28 % for drip or SSD in combination with SML. All before-mentioned reductions increase by one or a few per cent when moving from full to deficit irrigation. Reduction in overall consumptive WF always goes together with an increasing ratio of green to blue WF. The WF of growing a crop for a particular environment is smallest under DI, followed by FI, SI and rain-fed. Growing crops with sprinkler irrigation has the largest consumptive WF, followed by furrow, drip and SSD. Furrow irrigation has a smaller consumptive WF compared with sprinkler, even though the classical measure of "irrigation efficiency" for furrow
Haider, M. Salman; Badejo, Abimbola Comfort; Shao, Godlisten N.; Imran, S. M.; Abbas, Nadir; Chai, Young Gyu; Hussain, Manwar; Kim, Hee Taik
2015-06-01
The present study demonstrates a novel and systematic application-route synthesis approach to develop a size-property relationship and control the growth of silver nanoparticles (AgNPs) embedded on reduced graphene oxide (rGO). A sequential repetitive chemical reduction technique to observe the growth of AgNPs attached to rGO was performed on a single solution of graphene oxide (GO) and silver nitrate (7 runs, R1-R7) in order to manipulate the growth and size of the AgNPs. The physical-chemical properties of the samples were examined by Raman, XPS, XRD, SEM-EDAX, and HRTEM analyses. It was confirmed that AgNPs with diameters varying from 4 nm in the first run (R1) to 50 nm in the seventh run (R7) can be obtained using this technique. A major correlation between particle size and activity was also observed. Antibacterial tests were carried out to investigate the disinfection performance of the samples on the Gram-negative bacterium Escherichia coli. The sample obtained in the third run (R3) was found to exhibit the highest antibacterial activity among the samples toward disinfection of the bacteria, owing to its superior properties. This study provides a unique and novel application route to synthesize and control the size of AgNPs embedded on graphene for various applications.
Directory of Open Access Journals (Sweden)
Shatrughna Prasad Yadav
2016-10-01
Full Text Available Orthogonal frequency division multiplexing (OFDM) is preferred for mobile communications applications due to its increased data transmission capability. On the one hand, it has the advantages of being robust to multipath fading and possessing high data-rate transmission capability; on the other hand, it suffers from a high peak-to-average power ratio (PAPR). In this paper, an FPGA implementation of the clipping and filtering method has been carried out on a Xilinx Spartan 3 protoboard XC3S400 for 4-QAM OFDM signals. The results obtained have been compared with National Instruments' (NI) AWR Visual System Simulator and Matlab software simulations. The FPGA implementation of the pre-filtering and post-clipping technique results in a PAPR of 2.2 dB, whereas the PAPR values obtained in the case of the pre-clipping and post-filtering method are 10.6, 10.7 and 10.9 dB with NI's AWR simulation, Matlab simulation and FPGA implementation, respectively. When compared with the original OFDM signal having a PAPR of 12.5 dB, the FPGA implementation of pre-filtering and post-clipping reduces the PAPR by 10.3 dB, whereas for the pre-clipping and post-filtering technique the reductions in PAPR are 1.6, 1.8 and 1.9 dB with FPGA implementation, Matlab and NI's AWR software simulations, respectively.
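The clipping-and-filtering operation being benchmarked can be sketched in a few lines. This is a floating-point NumPy model, not the fixed-point FPGA implementation, and the filter here is a crude frequency-domain mask with an assumed cutoff index:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def clip_and_filter(x, clip_ratio, keep):
    """Clip the time-domain OFDM signal to clip_ratio times its RMS
    amplitude (phase preserved), then suppress out-of-band regrowth by
    zeroing all FFT bins from index `keep` onwards."""
    a_max = clip_ratio * np.sqrt(np.mean(np.abs(x) ** 2))
    mag = np.abs(x)
    scale = np.minimum(1.0, a_max / np.maximum(mag, 1e-12))
    clipped = x * scale
    X = np.fft.fft(clipped)
    X[keep:] = 0.0
    return np.fft.ifft(X)
```

Clipping caps the peaks (lowering PAPR) but spreads energy out of band; the filtering step removes that spectral regrowth at the cost of partially restoring the peaks, which is why the ordering of the two steps matters so much in the comparison above.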
O'Neil, Harold F., Jr.; And Others
The goal of this project was to examine various anxiety reduction techniques on the state anxiety levels and performance of college students. These techniques ranged from instructional to experimental treatments and were investigated in a range of computer-based situations. The state-trait anxiety inventory developed by Spielberger, Gorsuch, and…
Improved modeling techniques for turbomachinery flow fields
Energy Technology Data Exchange (ETDEWEB)
Lakshminarayana, B. [Pennsylvania State Univ., University Park, PA (United States); Fagan, J.R. Jr. [Allison Engine Company, Indianapolis, IN (United States)
1995-10-01
This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in a high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution periodic and aperiodic components arising from various blade rows and non-deterministic (which includes random components) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery and drastically reduce the time required for the design and development cycle of turbomachinery.
A combined model reduction algorithm for controlled biochemical systems.
Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J
2017-02-13
Systems Biology continues to produce increasingly large models of complex biochemical reaction networks. In applications requiring, for example, parameter estimation, the use of agent-based modelling approaches, or real-time simulation, this growing model complexity can present a significant hurdle. Often, however, not all portions of a model are of equal interest in a given setting. In such situations methods of model reduction offer one possible approach for addressing the issue of complexity by seeking to eliminate those portions of a pathway that can be shown to have the least effect upon the properties of interest. In this paper a model reduction algorithm bringing together the complementary aspects of proper lumping and empirical balanced truncation is presented. Additional contributions include the development of a criterion for the selection of state-variable elimination via conservation analysis and use of an 'averaged' lumping inverse. This combined algorithm is highly automatable and of particular applicability in the context of 'controlled' biochemical networks. The algorithm is demonstrated here via application to two examples: an 11-dimensional model of bacterial chemotaxis in Escherichia coli and a 99-dimensional model of extracellular signal-regulated kinase (ERK) activation mediated via the epidermal growth factor (EGF) and nerve growth factor (NGF) receptor pathways. In the case of the chemotaxis model the algorithm was able to reduce the model to 2 state-variables producing a maximal relative error between the dynamics of the original and reduced models of only 2.8% whilst yielding a 26 fold speed up in simulation time. For the ERK activation model the algorithm was able to reduce the system to 7 state-variables, incurring a maximal relative error of 4.8%, and producing an approximately 10 fold speed up in the rate of simulation. Indices of controllability and observability are additionally developed and demonstrated throughout the paper. These provide
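The proper-lumping half of the combined algorithm can be sketched for a linear system: states in a group are summed into one lumped state, and the pseudo-inverse of the lumping matrix maps back. The matrix and grouping below are illustrative, and the paper additionally treats nonlinear kinetics, an 'averaged' inverse, and the balanced-truncation stage:

```python
import numpy as np

def lump(A, groups):
    """Proper lumping of a linear ODE dx/dt = A x: states listed in the
    same group are summed into one lumped state z = L x, and the
    reduced dynamics are dz/dt = (L A Lbar) z with Lbar the
    Moore-Penrose pseudo-inverse of the lumping matrix L."""
    n = A.shape[0]
    L = np.zeros((len(groups), n))
    for i, g in enumerate(groups):
        L[i, g] = 1.0
    Lbar = np.linalg.pinv(L)
    return L @ A @ Lbar
```

When the lumping is exact (L A = (L A Lbar) L, as for states with identical dynamics), the reduced model reproduces the lumped trajectories with no error; otherwise the choice of groups trades dimension against accuracy.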
Frequency-domain generalized singular perturbation method for relative error model order reduction
Institute of Scientific and Technical Information of China (English)
Hamid Reza SHAKER
2009-01-01
A new mixed method for relative error model order reduction is proposed. In the proposed method, the frequency-domain balanced stochastic truncation method is improved by applying the generalized singular perturbation method to the frequency-domain balanced system in the reduction procedure. The frequency-domain balanced stochastic truncation method, which was proposed in [15] and [17] by the author, is based on two recently developed methods, namely frequency-domain balanced truncation within a desired frequency bound and inner-outer factorization techniques. The method proposed in this paper is a carry-over of frequency-domain balanced stochastic truncation and is of interest for practical model order reduction because in this context it is shown to keep the accuracy of the approximation as high as possible without sacrificing computational efficiency and important system properties. It is shown that some important properties of the frequency-domain stochastic balanced reduction technique extend to the proposed reduction method by using the concept and properties of reciprocal systems. Numerical results show the accuracy, simplicity and flexibility of the method.
Directory of Open Access Journals (Sweden)
Wolfgang Witteveen
2014-01-01
Full Text Available The mechanical response of multilayer sheet structures, such as leaf springs or car bodies, is largely determined by the nonlinear contact and friction forces between the sheets involved. Conventional computational approaches based on classical reduction techniques or the direct finite element approach have an inefficient balance between computational time and accuracy. In the present contribution, the method of trial vector derivatives is applied and extended in order to obtain a-priori trial vectors for the model reduction which are suitable for determining the nonlinearities in the joints of the reduced system. Findings show that the result quality in terms of displacements and contact forces is comparable to the direct finite element method but the computational effort is extremely low due to the model order reduction. Two numerical studies are presented to underline the method’s accuracy and efficiency. In conclusion, this approach is discussed with respect to the existing body of literature.
Reduction of 4(5)-Methylimidazole Using Cookie Model Systems.
Jung, Min-Chul; Kim, Mina K; Lee, Kwang-Geun
2017-09-11
The objective of this study was to determine the reduction of 4(5)-methylimidazole (4-MI) under various baking conditions. For 4-MI analysis, an analytical method using gas chromatography-mass spectrometry was developed. The developed method was validated with linearity (r² > 0.999), recovery (101% to 103%, 3 levels), and precision (1.5% to 4.3%, 3 levels). Limits of detection and quantification were 18.5 and 56.0 μg/kg, respectively. This method was used to monitor the level of 4-MI in 11 commercial cookies, which ranged from 71.5 to 1254.8 μg/kg. Time and temperature were modified in the cookie model system to reduce 4-MI. The largest reduction in 4-MI (56%) was achieved by baking at 140 °C for 8 min; however, the cookies baked at this condition were not well accepted by consumers. In combination with the consumer liking test results, baking cookies at 140 °C for 16 min is optimal for 4-MI reduction (28% reduction), while it has minimal impact on consumer acceptance. A strong correlation (r² = 0.9981) was found between caramel colorant and 4-MI in the cookie model system. © 2017 Institute of Food Technologists®.
Model-based reinforcement learning with dimension reduction.
Tangkaratt, Voot; Morimoto, Jun; Sugiyama, Masashi
2016-12-01
The goal of reinforcement learning is to learn an optimal policy which controls an agent to acquire the maximum cumulative reward. The model-based reinforcement learning approach learns a transition model of the environment from data, and then derives the optimal policy using the transition model. However, learning an accurate transition model in high-dimensional environments requires a large amount of data which is difficult to obtain. To overcome this difficulty, in this paper, we propose to combine model-based reinforcement learning with the recently developed least-squares conditional entropy (LSCE) method, which simultaneously performs transition model estimation and dimension reduction. We also further extend the proposed method to imitation learning scenarios. The experimental results show that policy search combined with LSCE performs well for high-dimensional control tasks including real humanoid robot control.
FPGA-based RF interference reduction techniques for simultaneous PET-MRI.
Gebhardt, P; Wehner, J; Weissler, B; Botnar, R; Marsden, P K; Schulz, V
2016-05-07
The combination of positron emission tomography (PET) and magnetic resonance imaging (MRI) as a multi-modal imaging technique is considered very promising and powerful with regard to in vivo disease progression examination, therapy response monitoring and drug development. However, PET-MRI system design enabling simultaneous operation with unaffected intrinsic performance of both modalities is challenging. As one of the major issues, both the PET detectors and the MRI radio-frequency (RF) subsystem are exposed to electromagnetic (EM) interference, which may lead to PET and MRI signal-to-noise ratio (SNR) deteriorations. Early digitization of electronic PET signals within the MRI bore helps to preserve PET SNR, but occurs at the expense of increased amount of PET electronics inside the MRI and associated RF field emissions. This raises the likelihood of PET-related MRI interference by coupling into the MRI RF coil unwanted spurious signals considered as RF noise, as it degrades MRI SNR and results in MR image artefacts. RF shielding of PET detectors is a commonly used technique to reduce PET-related RF interferences, but can introduce eddy-current-related MRI disturbances and hinder the highest system integration. In this paper, we present RF interference reduction methods which rely on EM field coupling-decoupling principles of RF receive coils rather than suppressing emitted fields. By modifying clock frequencies and changing clock phase relations of digital circuits, the resulting RF field emission is optimised with regard to a lower field coupling into the MRI RF coil, thereby increasing the RF silence of PET detectors. Our methods are demonstrated by performing FPGA-based clock frequency and phase shifting of digital silicon photo-multipliers (dSiPMs) used in the PET modules of our MR-compatible Hyperion IID PET insert. We present simulations and magnetic-field map scans visualising the impact of altered clock phase pattern on the spatial RF field
FPGA-based RF interference reduction techniques for simultaneous PET–MRI
Gebhardt, P; Wehner, J; Weissler, B; Botnar, R; Marsden, P K; Schulz, V
2016-01-01
Abstract The combination of positron emission tomography (PET) and magnetic resonance imaging (MRI) as a multi-modal imaging technique is considered very promising and powerful with regard to in vivo disease progression examination, therapy response monitoring and drug development. However, PET–MRI system design enabling simultaneous operation with unaffected intrinsic performance of both modalities is challenging. As one of the major issues, both the PET detectors and the MRI radio-frequency (RF) subsystem are exposed to electromagnetic (EM) interference, which may lead to PET and MRI signal-to-noise ratio (SNR) deteriorations. Early digitization of electronic PET signals within the MRI bore helps to preserve PET SNR, but occurs at the expense of increased amount of PET electronics inside the MRI and associated RF field emissions. This raises the likelihood of PET-related MRI interference by coupling into the MRI RF coil unwanted spurious signals considered as RF noise, as it degrades MRI SNR and results in MR image artefacts. RF shielding of PET detectors is a commonly used technique to reduce PET-related RF interferences, but can introduce eddy-current-related MRI disturbances and hinder the highest system integration. In this paper, we present RF interference reduction methods which rely on EM field coupling–decoupling principles of RF receive coils rather than suppressing emitted fields. By modifying clock frequencies and changing clock phase relations of digital circuits, the resulting RF field emission is optimised with regard to a lower field coupling into the MRI RF coil, thereby increasing the RF silence of PET detectors. Our methods are demonstrated by performing FPGA-based clock frequency and phase shifting of digital silicon photo-multipliers (dSiPMs) used in the PET modules of our MR-compatible Hyperion IID PET insert. We present simulations and magnetic-field map scans visualising the impact of altered clock phase pattern on the spatial RF field
FPGA-based RF interference reduction techniques for simultaneous PET-MRI
Gebhardt, P.; Wehner, J.; Weissler, B.; Botnar, R.; Marsden, P. K.; Schulz, V.
2016-05-01
The combination of positron emission tomography (PET) and magnetic resonance imaging (MRI) as a multi-modal imaging technique is considered very promising and powerful with regard to in vivo disease progression examination, therapy response monitoring and drug development. However, PET-MRI system design enabling simultaneous operation with unaffected intrinsic performance of both modalities is challenging. As one of the major issues, both the PET detectors and the MRI radio-frequency (RF) subsystem are exposed to electromagnetic (EM) interference, which may lead to PET and MRI signal-to-noise ratio (SNR) deteriorations. Early digitization of electronic PET signals within the MRI bore helps to preserve PET SNR, but occurs at the expense of increased amount of PET electronics inside the MRI and associated RF field emissions. This raises the likelihood of PET-related MRI interference by coupling into the MRI RF coil unwanted spurious signals considered as RF noise, as it degrades MRI SNR and results in MR image artefacts. RF shielding of PET detectors is a commonly used technique to reduce PET-related RF interferences, but can introduce eddy-current-related MRI disturbances and hinder the highest system integration. In this paper, we present RF interference reduction methods which rely on EM field coupling-decoupling principles of RF receive coils rather than suppressing emitted fields. By modifying clock frequencies and changing clock phase relations of digital circuits, the resulting RF field emission is optimised with regard to a lower field coupling into the MRI RF coil, thereby increasing the RF silence of PET detectors. Our methods are demonstrated by performing FPGA-based clock frequency and phase shifting of digital silicon photo-multipliers (dSiPMs) used in the PET modules of our MR-compatible Hyperion II D PET insert. We present simulations and magnetic-field map scans visualising the impact of altered clock phase pattern on the spatial RF field distribution
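The clock-phase idea in the abstract above — staggering the clock phases of several digital emitters so that their RF fields couple destructively into the MRI receive coil — can be illustrated with a toy phasor model. Everything here (the emitter count, unit amplitudes, a single observation point) is invented for the sketch and is not the Hyperion PET insert implementation:

```python
import numpy as np

def coupled_amplitude(phases, amplitudes=None):
    """Magnitude of the net field phasor coupled into a single receive
    coil from several clock emitters at one harmonic (far-field sketch)."""
    phases = np.asarray(phases, dtype=float)
    if amplitudes is None:
        amplitudes = np.ones_like(phases)
    return abs(np.sum(amplitudes * np.exp(1j * phases)))

n = 8  # hypothetical number of PET detector clock domains
in_phase = coupled_amplitude(np.zeros(n))                   # all clocks aligned
spread = coupled_amplitude(2 * np.pi * np.arange(n) / n)    # evenly staggered

print(in_phase)  # fields add constructively
print(spread)    # phasors cancel at this observation point
```

With aligned clocks the emissions add coherently; spreading the phases over a full turn drives the net coupled amplitude at this point toward zero, which is the qualitative mechanism behind "increasing the RF silence" without shielding.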
Energy Technology Data Exchange (ETDEWEB)
Parain, F.; Banatre, M.; Cabillic, G.; Lesot, J.Ph. [Institut de Recherche en Informatique et Systemes Aleatoires, INRIA, 35 - Rennes (France); Higuera, T.; Issarny, V. [Institut de Recherche au Coeur de la Societe de l' Information, 78 - Le Chesnay (France)
2001-07-01
Embedded systems are evolving to offer higher computation capacity. Nevertheless, they also have to offer large energy autonomy. Power conservation techniques have been developed for a long time, but they are now becoming one of the main concerns in the design of an embedded system. This document describes several techniques developed to decrease power consumption in embedded systems, including hardware solutions as well as software and co-design solutions. One of these co-design solutions, called dynamic voltage scaling, is studied more precisely. It illustrates the impact a power conservation technique can have on a system, in particular on scheduling in real-time systems. (authors)
Second-Order Model Reduction Based on Gramians
Directory of Open Access Journals (Sweden)
Cong Teng
2012-01-01
Some new and simple Gramian-based model order reduction algorithms, namely SVD methods, are presented for second-order linear dynamical systems. Compared to existing Gramian-based algorithms, i.e. balanced truncation methods, they are competitive and more favorable for large-scale systems. Numerical examples show the validity of the algorithms. Error bounds on the error systems are discussed, and some observations are given on the structure of the Gramians of second-order linear systems.
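For context, the balanced-truncation baseline that Gramian-based methods are compared against can be sketched in a few lines for a first-order realisation (the square-root method). The matrices and reduction order below are arbitrary illustrations, not taken from the paper, which treats second-order systems:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system to order r."""
    P = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)                     # s = Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = Lp @ Vt[:r].T @ S                         # balancing projection
    W = Lq @ U[:, :r] @ S
    return W.T @ A @ T, W.T @ B, C @ T, s

# small stable test system (illustrative values only)
A = np.array([[-1.0, 0.2], [0.0, -5.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.5]])
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=1)

dc_full = (C @ np.linalg.solve(-A, B)).item()
dc_red = (Cr @ np.linalg.solve(-Ar, Br)).item()
```

The reduced model inherits stability, and the DC-gain mismatch respects the classical H-infinity error bound of twice the sum of the truncated Hankel singular values.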
Martín Furones, Angel; Anquela Julián, Ana Belén; Dimas-Pages, Alejandro; Cos-Gayón, Fernando
2017-08-01
Precise point positioning (PPP) is a well-established Global Navigation Satellite System (GNSS) technique that only requires information from the receiver (or rover) to obtain high-precision position coordinates. This is a very interesting and promising technique because it eliminates the need for a reference station near the rover receiver, or a network of reference stations, thus reducing the cost of a GNSS survey. From a computational perspective, there are two ways to solve the system of observation equations produced by static PPP: either in a single step (so-called batch adjustment) or with a sequential adjustment/filter. The results of each should be the same if both are well implemented. However, if a sequential solution is needed (that is, not only the final coordinates but also those of previous GNSS epochs), as for convergence studies, finding a batch solution becomes very time-consuming owing to the matrix inversions that accumulate with each consecutive epoch. This is not a problem for the filter solution, which uses information computed in the previous epoch to obtain the solution for the current epoch. Filter implementations, however, need extra consideration of user dynamics and parameter state variations between observation epochs, with appropriate stochastic updates of parameter variances from epoch to epoch. These filtering considerations are not needed in batch adjustment, which makes it attractive. The main objective of this research is to significantly reduce the computation time required to obtain sequential results using batch adjustment. The new method we implemented in the adjustment process led to a mean reduction in computational time of 45%.
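The batch-versus-sequential equivalence the abstract relies on is easy to see for a static parameter vector: accumulating the normal equations epoch by epoch reproduces the batch solution exactly, with no large matrix inversion per epoch. The toy numbers below are illustrative, not a PPP implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_par, n_obs = 3, 40
H = rng.normal(size=(n_obs, n_par))      # design matrix, one row per epoch
x_true = np.array([1.0, -2.0, 0.5])
y = H @ x_true + 0.01 * rng.normal(size=n_obs)

# batch solution: one normal-equation solve over all epochs at once
x_batch = np.linalg.solve(H.T @ H, H.T @ y)

# sequential (information-filter style) solution for static parameters:
# accumulate the normal equations epoch by epoch, solve once at the end
N = np.zeros((n_par, n_par))
b = np.zeros(n_par)
for h, yi in zip(H, y):
    N += np.outer(h, h)
    b += h * yi
x_seq = np.linalg.solve(N, b)

print(np.allclose(x_batch, x_seq))  # both solve the same problem
```

The cost saving discussed in the abstract comes from avoiding a full re-inversion at every intermediate epoch when the whole sequence of solutions is wanted.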
Balbuena Ortega, A; Arroyo Carrasco, M L; Méndez Otero, M M; Gayou, V L; Delgado Macuil, R; Martínez Gutiérrez, H; Iturbe Castillo, M D
2014-12-12
In this paper, the nonlinear refractive index of colloidal gold nanoparticles under continuous-wave illumination is investigated with the z-scan technique. Gold nanoparticles were synthesized using ascorbic acid as reductant, phosphates as stabilizer and cetyltrimethylammonium chloride (CTAC) as surfactant agent. The nanoparticle size was controlled with the CTAC concentration. Experiments varying the incident power and sample concentration were performed. The experimental z-scan results were fitted with three models: thermal lens, aberrant thermal lens and the nonlocal model. It is shown that the nonlocal model reproduces the observed experimental behaviour with exceptionally good agreement.
Application of variance reduction technique to nuclear transmutation system driven by accelerator
Energy Technology Data Exchange (ETDEWEB)
Sasa, Toshinobu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
In Japan, it is the basic policy to dispose of the high-level radioactive waste arising from spent nuclear fuel in stable deep strata after glass solidification. If the useful elements in the waste can be separated and utilized, resources are used effectively, and high economic efficiency and safety can be expected for disposal in strata. The Japan Atomic Energy Research Institute proposed a hybrid-type transmutation system, in which a high-intensity proton accelerator and a subcritical fast core are combined, or a nuclear reactor optimized exclusively for transmutation. The tungsten target, minor actinide nitride fuel transmutation system and the molten minor actinide chloride salt target fuel transmutation system are outlined, and conceptual figures of both systems are shown. As the method of analysis, version 2.70 of the LAHET Code System, developed by Los Alamos National Laboratory in the USA, was adopted. When carrying out the analysis of an accelerator-driven subcritical core in the energy range below 20 MeV, a variance reduction technique must be applied. (K.I.)
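Why variance reduction is mandatory for such deep-penetration transport problems can be shown with a generic importance-sampling toy: estimating an exponentially small transmission probability. The geometry and the exponential biasing below are invented for the sketch and have nothing to do with the LAHET analysis itself:

```python
import numpy as np

rng = np.random.default_rng(3)
t, n = 15.0, 100_000          # shield thickness in mean free paths
exact = np.exp(-t)            # transmission probability, about 3e-7

# analog Monte Carlo: almost no history ever penetrates -> huge variance
analog = np.mean(rng.exponential(1.0, n) > t)

# importance sampling: stretch the sampled path lengths (exponential
# biasing) and weight each history by the likelihood ratio f(x)/g(x)
b = t                                            # biased mean free path
x = rng.exponential(b, n)
w = np.exp(-x) / (np.exp(-x / b) / b)            # f(x)/g(x)
biased = np.mean(w * (x > t))

print(analog)                                    # usually exactly 0.0
print(abs(biased - exact) / exact < 0.1)         # biased estimate is close
```

The analog estimator sees essentially no penetrating histories at 15 mean free paths, while the biased estimator recovers the answer with a few percent relative error from the same sample budget.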
Yusob, Diana; Zukhi, Jihan; Aziz Tajuddin, Abd; Zainon, Rafidah
2017-05-01
The aim of this study was to evaluate the efficacy of metal artefact reduction using contrast media in computed tomography (CT) imaging. A water-based abdomen phantom of diameter 32 cm (adult body size) was fabricated from polymethyl methacrylate (PMMA). Three different contrast agents (iodine, barium and gadolinium) were filled into small PMMA tubes placed inside the water-based PMMA adult abdomen phantom, and an orthopedic metal screw was placed in each small PMMA tube. The two types of orthopedic metal screw (stainless steel and titanium alloy) were scanned separately, with single-energy CT at 120 kV and dual-energy CT at fast kV-switching between 80 kV and 140 kV. The scan modes were set automatically using the current-modulation care4Dose setting, and the scans were performed at different pitch and slice thickness. The contrast-media technique on orthopedic metal screws was optimised using pitch = 0.60 and slice thickness = 5.0 mm. The use of contrast media can reduce metal streaking artefacts on CT images, enhance the CT image surrounding the implants, and has potential use in improving diagnostic performance in patients with severe metallic artefacts. These results are valuable for imaging protocol optimisation in clinical applications.
Ahmed, Qasim Zeeshan
2013-12-18
This paper investigates and compares the performance of wireless sensor networks where sensors operate on the principles of cooperative communications. We consider a scenario where the source transmits signals to the destination with the help of L sensors. As the destination has the capacity to process only U out of these L signals, the strongest U signals are selected while the remaining (L − U) signals are suppressed. A preprocessing block similar to channel-shortening is proposed in this contribution; however, this preprocessing block employs a rank-reduction technique instead of channel-shortening. By employing this preprocessing, we are able to decrease the computational complexity of the system without affecting the bit error rate (BER) performance. Our simulations show that these schemes outperform the channel-shortening schemes in terms of computational complexity. In addition, the proposed schemes have a superior BER performance compared to channel-shortening schemes when sensors employ fixed-gain amplification. For sensors that employ variable-gain amplification, however, a tradeoff exists in terms of BER performance between the channel-shortening and the proposed schemes, which outperform the channel-shortening scheme at lower signal-to-noise ratios.
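The selection step described above — keeping the strongest U of L relayed signals and suppressing the rest — can be sketched as follows. The gains, noise level and signal length are invented for illustration, and the paper's actual rank-reduction preprocessing is not reproduced here:

```python
import numpy as np

def select_strongest(signals, U):
    """Keep the U strongest of L relayed signals (ranked by received
    power) and discard the remaining L - U branches."""
    power = np.sum(np.abs(signals) ** 2, axis=1)
    keep = np.argsort(power)[::-1][:U]
    return signals[keep], keep

rng = np.random.default_rng(1)
L, U, T = 6, 3, 100
gains = np.array([0.1, 2.0, 0.5, 1.5, 0.2, 1.0])  # hypothetical branch gains
x = rng.normal(size=T)                             # common source signal
received = gains[:, None] * x + 0.01 * rng.normal(size=(L, T))

kept, idx = select_strongest(received, U)
print(sorted(idx.tolist()))  # indices of the three strongest branches
```

Reducing the dimensionality of the combining stage from L to U branches before detection is what lowers the computational load at the destination.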
Maneuver and vibration reduction of flexible spacecraft using sliding mode/command shaping technique
Institute of Scientific and Technical Information of China (English)
HU Qing-lei; MA Guang-fu; ZHANG Wei
2006-01-01
A generalized scheme based on the sliding mode and component synthesis vibration suppression (CSVS) method has been proposed for the rotational maneuver and vibration suppression of an orbiting spacecraft with flexible appendages. The proposed control design process is twofold: design of the attitude controller followed by design of a flexible vibration attenuator. The attitude controller, using only the attitude and rate information of the flexible spacecraft (FS), is designed to serve two purposes: it forces the attitude motion onto a pre-selected sliding surface and then guides it to the state-space origin. The shaped command input controller based on the CSVS method is designed to reduce the flexible-mode vibration; it only requires information about the natural frequency and damping of the closed-loop system. This information is used to discretize the input so that minimum energy is injected via the controller into the flexible modes of the spacecraft. Additionally, to extend the CSVS method to systems with on-off actuators, pulse-width pulse-frequency (PWPF) modulation is introduced to control the thruster firing and is integrated with the CSVS method. PWPF modulation is a control method that provides pseudo-linear operation for an on-off thruster. The proposed control strategy has been implemented on an FS consisting of a hub with symmetric cantilevered flexible beam appendages that can undergo a single-axis rotation. The results demonstrate the potential of this technique to control the FS.
Variance reduction techniques for a quantitative understanding of the ΔI = 1/2 rule
Endress, Eric
2012-01-01
The role of the charm quark in the dynamics underlying the ΔI = 1/2 rule for kaon decays can be understood by studying the dependence of kaon decay amplitudes on the charm quark mass, using an effective ΔS = 1 weak Hamiltonian in which the charm is kept as an active degree of freedom. Overlap fermions are employed in order to avoid renormalization problems, as well as to allow access to the deep chiral regime. Quenched results in the GIM limit have shown that a significant part of the enhancement is purely due to low-energy QCD effects; variance reduction techniques based on low-mode averaging were instrumental in determining the relevant weak effective low-energy couplings in this case. Moving away from the GIM limit requires the computation of diagrams containing closed quark loops. We report on our progress in employing a combination of low-mode averaging and stochastic volume sources in order to control these contributions. Results showing a significant improvement in the statistical signal are pre...
Kanik, Mehmet; Aktas, Ozan; Sen, Huseyin Sener; Durgun, Engin; Bayindir, Mehmet
2014-09-23
We produced kilometer-long, endlessly parallel, spontaneously piezoelectric and thermally stable poly(vinylidene fluoride) (PVDF) micro- and nanoribbons using an iterative size reduction technique based on thermal fiber drawing. Because of the high stress and temperature used in the thermal drawing process, we obtained spontaneously polar γ-phase PVDF micro- and nanoribbons without an electrical poling process. On the basis of X-ray diffraction (XRD) analysis, we observed that the PVDF micro- and nanoribbons are thermally stable and conserve the polar γ phase even after being exposed to heat treatment above the melting point of PVDF. The phase transition mechanism is investigated and explained using ab initio calculations. We measured an average effective piezoelectric constant of -58.5 pm/V from a single PVDF nanoribbon using a piezo evaluation system along with an atomic force microscope. PVDF nanoribbons are promising structures for constructing devices such as highly efficient energy generators, large-area pressure sensors, and artificial muscle and skin, owing to their unique geometry and extended lengths, high polar-phase content, high thermal stability and high piezoelectric coefficient. We demonstrated two proof-of-principle devices for energy harvesting and sensing applications, with a 60 V open-circuit peak voltage and 10 μA peak short-circuit current output.
A Novel Power Reduction Technique for Dual-Threshold Domino Logic in Sub-65nm Technology
Directory of Open Access Journals (Sweden)
Tarun Kr. Gupta
2013-03-01
A novel dual-threshold technique is proposed and examined with input and clock signal combinations in a 65 nm dual-threshold footerless domino circuit for reduced leakage current. In this technique, a p-type and an n-type leakage-controlled transistor (LCT) are introduced between the pull-up and pull-down networks, and the gate of one is controlled by the source of the other. A high-threshold transistor is used in the input to reduce gate-oxide leakage current, which becomes dominant in nanometer technology. Simulations based on the 65 nm BSIM4 model for the proposed domino circuits show that the CLIL (clock low, input low) and CHIH (clock high, input high) states are ineffective for lowering leakage current. The CLIH (clock low, input high) state is only effective in suppressing the leakage at low and high temperatures for wide fan-in domino circuits, but for the AND gate the CHIL (clock high, input low) state is preferred to reduce the leakage current. The proposed circuit technique for AND2, OR2, OR4 and OR8 circuits reduces the active power consumption by 39.6% to 57.9% and by 32.4% to 40.3% at low and high die temperatures, respectively, when compared to standard dual-threshold voltage domino logic circuits.
Compact Models and Measurement Techniques for High-Speed Interconnects
Sharma, Rohit
2012-01-01
Compact Models and Measurement Techniques for High-Speed Interconnects provides detailed analysis of issues related to high-speed interconnects from the perspective of modeling approaches and measurement techniques. Particular focus is laid on the unified approach (variational method combined with the transverse transmission line technique) to develop efficient compact models for planar interconnects. This book will give a qualitative summary of the various reported modeling techniques and approaches and will help researchers and graduate students with deeper insights into interconnect models in particular and interconnect in general. Time domain and frequency domain measurement techniques and simulation methodology are also explained in this book.
Health gain by salt reduction in europe: a modelling study.
Hendriksen, Marieke A H; van Raaij, Joop M A; Geleijnse, Johanna M; Breda, Joao; Boshuizen, Hendriek C
2015-01-01
Excessive salt intake is associated with hypertension and cardiovascular diseases. Salt intake exceeds the World Health Organization population nutrition goal of 5 grams per day in the European region. We assessed the health impact of salt reduction in nine European countries (Finland, France, Ireland, Italy, Netherlands, Poland, Spain, Sweden and United Kingdom). Through literature research we obtained current salt intake and systolic blood pressure levels of the nine countries. The population health modeling tool DYNAMO-HIA including country-specific disease data was used to predict the changes in prevalence of ischemic heart disease and stroke for each country estimating the effect of salt reduction through its effect on blood pressure levels. A 30% salt reduction would reduce the prevalence of stroke by 6.4% in Finland to 13.5% in Poland. Ischemic heart disease would be decreased by 4.1% in Finland to 8.9% in Poland. When salt intake is reduced to the WHO population nutrient goal, it would reduce the prevalence of stroke from 10.1% in Finland to 23.1% in Poland. Ischemic heart disease would decrease by 6.6% in Finland to 15.5% in Poland. The number of postponed deaths would be 102,100 (0.9%) in France, and 191,300 (2.3%) in Poland. A reduction of salt intake to 5 grams per day is expected to substantially reduce the burden of cardiovascular disease and mortality in several European countries.
A Stochastic Multiscale Model for Microstructure Model Reduction
2011-12-19
methods. In [4, 5] the principle of maximum entropy (MaxEnt) was used to describe the microstructure topology of binary and polycrystalline materials. A ... such MaxEnt distribution is interrogated using an appropriate physical model, e.g. a crystal plasticity finite element method (CPFEM) [6] for polycrystals
Marimón, E.; Nait-Charif, Hammadi; Khan, A.; Marsden, P. A.; Diaz, O
2017-01-01
Mammography examinations are highly affected by scattered radiation, as it degrades the quality of the image and complicates the diagnosis process. Anti-scatter grids are currently used in planar mammography examinations as the standard physical scattering reduction technique. This method has been found to be inefficient, as it increases the dose delivered to the patient, does not remove all the scattered radiation and increases the price of the equipment. Alternative scattering reduction met...
Ohlberger, Mario; Smetana, Kathrin
2016-09-01
In this article we introduce a procedure that allows one to recover the potentially very good approximation properties of tensor-based model reduction procedures for the solution of partial differential equations in the presence of interfaces or strong gradients in the solution that are skewed with respect to the coordinate axes. The two key ideas are the location of the interface, either by solving a lower-dimensional partial differential equation or by using data functions, and the subsequent removal of the interface from the solution by choosing the determined interface as the lifting function of the Dirichlet boundary conditions. We demonstrate in numerical experiments for linear elliptic equations and the reduced basis-hierarchical model reduction approach that the proposed procedure locates the interface well and yields a significantly improved convergence behavior, even when only an approximation of the interface is considered.
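The two-step procedure described above — locate the interface, then subtract it as a lifting function before reduction — can be illustrated in one dimension with a polynomial fit standing in for the reduced space. The interface profile, its width and the degree-3 fit are all invented for this sketch:

```python
import numpy as np

x = np.linspace(0, 1, 400)
u = np.tanh((x - 0.37) / 0.02)      # "solution" with a sharp interior interface

# step 1: locate the interface (steepest gradient, standing in for the
# lower-dimensional PDE / data-function step of the paper)
x0 = x[np.argmax(np.gradient(u, x))]

# step 2: remove the interface with a lifting function before reduction
lift = np.tanh((x - x0) / 0.02)

def fit(f):
    """Degree-3 least-squares fit, a toy stand-in for the reduced space."""
    return np.polyval(np.polyfit(x, f, 3), x)

err_direct = np.max(np.abs(u - fit(u)))                  # reduce u directly
err_lifted = np.max(np.abs(u - (lift + fit(u - lift))))  # reduce the remainder
print(err_lifted < err_direct)
```

The smooth reduced space cannot resolve the sharp interface directly, but once the located interface is subtracted, the remainder is nearly smooth and the same low-order space approximates it far better, which is the mechanism behind the improved convergence.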
Satish, Bhava R J; Vinodkumar, Muniramaiah; Suresh, Masilamani; Seetharam, Prasad Y; Jaikumar, Krishnaraj
2014-09-01
In completely displaced pediatric distal radial fractures, achieving satisfactory reduction with closed manipulation and maintenance of reduction with casting is difficult. Although the Kapandji technique of K-wiring is widely practiced for distal radial fracture fixation in adults, it is rarely used in pediatric acute fractures. Forty-six completely displaced distal radial fractures in children 7 to 14 years old were treated with closed reduction and K-wire fixation. One or 2 intrafocal K-wires were used to lever out and reduce the distal fragment's posterior and radial translation. One or 2 extrafocal K-wires were used to augment intrafocal fixation. Postoperative immobilization was enforced for 3 to 6 weeks (with a short arm plaster of Paris cast for the first half of the time and a removable wrist splint for the second half), after which time the K-wires were removed. Patients were followed for a minimum of 4 months. Mean patient age was 9.5 years. Near-anatomical reduction was achieved easily with the intrafocal leverage technique in all fractures. Mean procedure time for K-wiring was 7 minutes. On follow-up, there was no loss of reduction; remanipulation was not performed in any case. There were no pin-related complications. All fractures healed, and full function of the wrist and forearm was achieved in every case. The Kapandji K-wire technique consistently achieves easy and near-anatomical closed reduction by a leverage reduction method in completely displaced pediatric distal radial fractures. Reduction is maintained throughout the fracture-healing period. The casting duration can be reduced without loss of reduction, and good functional results can be obtained. Copyright 2014, SLACK Incorporated.
Model Reduction in Dynamic Finite Element Analysis of Lightweight Structures
DEFF Research Database (Denmark)
Flodén, Ola; Persson, Kent; Sjöström, Anders
2012-01-01
The application of wood as a construction material when building multi-storey buildings has many advantages, e.g., light weight, sustainability and low energy consumption during the construction and lifecycle of the building. However, compared to heavy structures, it is a greater challenge to build ... The objective of the analyses presented in this paper is to evaluate methods for model reduction of detailed finite element models of floor and wall structures and to investigate the influence of reducing the number of degrees of freedom and computational cost on the dynamic response of the models in terms ... The drawback of component mode synthesis compared to modelling with structural elements is the increased computational cost, although the number of degrees of freedom is small in comparison, as a result of the large bandwidth of the system matrices.
Model reduction using the genetic algorithm and routh approximations
Institute of Scientific and Technical Information of China (English)
Anonymous
2005-01-01
A new method of model reduction combining the genetic algorithm (GA) with the Routh approximation method is presented. It is suggested that a high-order system can be approximated by a low-order model with a time delay. The denominator parameters of the reduced-order model are determined by the Routh approximation method; the numerator parameters and time delay are then identified by the GA. The reduced-order models obtained by the proposed method will always be stable if the original system is stable, and they produce a good approximation to the original system in both the frequency domain and the time domain. Two numerical examples show that the method is computationally simple and efficient.
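The recipe in the abstract — fix the reduced denominator analytically, then search numerically for the numerator and the time delay — can be mimicked with a crude grid search standing in for the GA. The full model, the reduced structure and the search ranges below are invented for illustration:

```python
import numpy as np

# Full third-order model and a first-order-plus-delay candidate
#   G_r(s) = k * exp(-L*s) / (s + 1),
# with the denominator (s + 1) held fixed (standing in for the Routh step)
# and (k, L) found by brute-force search (a crude stand-in for the GA).
w = np.logspace(-2, 2, 200)
s = 1j * w
G = 1.0 / ((s + 1) * (0.1 * s + 1) * (0.05 * s + 1))

def err(k, L):
    """Worst-case frequency-response mismatch of the candidate model."""
    Gr = k * np.exp(-L * s) / (s + 1)
    return np.max(np.abs(G - Gr))

e, k, L = min(((err(k, L), k, L)
               for k in np.linspace(0.8, 1.2, 41)
               for L in np.linspace(0.0, 0.5, 51)),
              key=lambda t: t[0])
print(round(k, 2), round(L, 2))  # fitted gain and time delay
```

The fitted delay absorbs the phase lag of the two fast poles that the first-order denominator cannot represent, which is exactly the role the time delay plays in the reduced models of the abstract.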
Institute of Scientific and Technical Information of China (English)
2016-01-01
AIM: To investigate the techniques and efficacy of peroral endoscopic reduction of dilated gastrojejunal anastomosis after bariatric surgery. METHODS: An extensive English-language literature search was conducted using PubMed, MEDLINE, Medscape and Google to identify peer-reviewed original and review articles, using the keywords "bariatric endoscopic suturing", "overstitch bariatric surgery", "endoscopic anastomotic reduction", "bariatric surgery", "gastric bypass", "obesity", "weight loss". We identified articles describing the technical feasibility, safety, efficacy and adverse outcomes of the overstitch endoscopic suturing system for transoral outlet reduction in patients with weight regain following Roux-en-Y gastric bypass (RYGB). All studies that contained material applicable to the topic were considered. Retrieved peer-reviewed original and review articles were reviewed by the authors and the data extracted using a standardized collection tool. Data were analyzed using statistical analysis as percentages of the event. RESULTS: Four original published articles which met our search criteria were pooled. The total number of cases was fifty-nine, with a mean age of 46.75 years (34-63 years). Eight of the patients included in those studies were males (13.6%) and fifty-one were females (86.4%). The mean time elapsed since the primary bypass surgery was 5.75 years. The average pre-procedure body mass index (BMI) was 38.68 (27.5-48.5). Mean body weight regained post-RYGB surgery was 13.4 kg from the post-RYGB nadir. The average pouch length at the initial upper endoscopy was 5.75 cm (2-14 cm). The pre-intervention anastomotic diameter averaged 24.85 mm (8-40 mm). Average procedure time was 74 min (50-164 min). Mean post-intervention anastomotic diameter was 8 mm (3-15 mm). Weight reduction at 3 to 4 months post revision averaged 10.1 kg. Average overall post-revision BMI was recorded at 37.7.
Modeling effective viscosity reduction behaviour of solid suspensions
Institute of Scientific and Technical Information of China (English)
Wei En-Bo; Ji Yan-Ju; Zhang Jun
2012-01-01
Under a simple shearing flow, the effective viscosity of solid suspensions can be reduced by controlling the inclusion particle size or the number of inclusion particles in a unit volume. Based on the Stokes equation, the transformation field method is used to model the reduction behaviour of the effective viscosity of solid suspensions theoretically, by enlarging the particle size at a given high concentration of particles. With a large number of samples of random cubic particles in a unit cell, our statistical results show that, at the same high concentration, the effective viscosity of solid suspensions can be reduced by increasing the particle size or reducing the number of inclusion particles in a unit volume. This work discloses the viscosity reduction mechanism of increasing particle size, which is observed experimentally.
Directory of Open Access Journals (Sweden)
Hae-Gwang Jeong
2013-01-01
Full Text Available This paper proposes a second-order harmonic reduction technique using a proportional-resonant (PR) controller for a photovoltaic (PV) power conditioning system (PCS). In a grid-connected single-phase system, inverters create a second-order harmonic at twice the fundamental frequency. This ripple component unsettles the operating point of the PV array and degrades the operation of the maximum power point tracking (MPPT) technique. The second-order harmonic component in the PV PCS is analyzed using an equivalent circuit of the DC/DC converter and the DC/AC inverter. A new feed-forward compensation technique using a PR controller for ripple reduction is proposed. The proposed algorithm is advantageous in that additional devices are not required and complex calculations are unnecessary; the method is therefore cost-effective and simple to implement. The proposed feed-forward compensation technique is verified by simulation and experimental results.
Rozhkov, Mikhail; Bobrov, Dmitry; Kitov, Ivan
2014-05-01
The Master Event technique is a powerful tool for Expert Technical Analysis within the CTBT framework as well as for real-time monitoring with the waveform cross-correlation (CC) (matched-filter) approach. The primary goal of CTBT monitoring is the detection and location of nuclear explosions; therefore, cross-correlation monitoring should be focused on finding such events. The use of physically adequate waveform templates may significantly increase the number of valid events, both natural and manmade, in the Reviewed Event Bulletin (REB) of the International Data Centre. Inadequate templates for master events may increase the number of CTBT-irrelevant events in the REB and reduce the sensitivity of the CC technique to valid events. In order to cover the entire earth, including vast aseismic territories, with CC-based nuclear test monitoring, we conducted thorough research and defined the most appropriate real and synthetic master events representing underground explosion sources. A procedure was developed for optimizing the master event template simulation and narrowing the classes of CC templates used in the detection and location process, based on principal and independent component analysis (PCA and ICA). Actual waveforms and metadata from the DTRA Verification Database were used to validate our approach. The detection and location results based on real and synthetic master events were compared. The prototype of the CC-based Global Grid monitoring system developed in the IDC during the last year was populated with different hybrid waveform templates (synthetics, synthetic components, and real components) and its performance was assessed with the world seismicity data flow, including the DPRK-2013 event. The specific features revealed in this study for the P-waves from the DPRK underground nuclear explosions (UNEs) can reduce the global detection threshold of seismic monitoring under the CTBT by 0.5 units of magnitude. This corresponds to a reduction in the test yield by a
A GIS approach to model sediment reduction susceptibility of mixed sand and gravel beaches.
Eikaas, Hans S; Hemmingsen, Maree A
2006-06-01
The morphological form of mixed sand and gravel beaches is distinct, and the process/response system and complex dynamics of these beaches are not well understood. Process response models developed for pure sand or gravel beaches cannot be directly applied to these beaches. The Canterbury Bight coastline is apparently abundantly supplied with sediments from large rivers and coastal alluvial cliffs, but a large part of this coastline is experiencing long-term erosion. Sediment budget models provide little evidence to suggest sediments are stored within this system. Current sediment budget models inadequately quantify and account for the processes responsible for the patterns of erosion and accretion of this coastline. We outline a new method to extrapolate from laboratory experiments to the field using a geographical information system approach to model sediment reduction susceptibility for the Canterbury Bight. Sediment samples from ten representative sites were tumbled in a concrete mixer for an equivalent distance of 40 km. From the textural mixture and weight loss over 40 km tumbling, we applied regression techniques to generate a predictive equation for Sediment Reduction Susceptibility (SRS). We used Inverse Distance Weighting (IDW) to extrapolate the results from fifty-five sites with data on textural sediment composition to field locations with no data along the Canterbury Bight, creating a continuous sediment reductions susceptibility surface. Isolines of regular SRS intervals were then derived from the continuous surface to create a contour map of sediment reductions susceptibility for the Canterbury Bight. Results highlighted the variability in SRS along this coastline.
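The IDW step described above is a standard distance-weighted average; a minimal sketch of it, using hypothetical site coordinates and SRS values rather than the study's field data, might look like:

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    """Inverse Distance Weighting: estimate a surface value at each
    query point as a distance-weighted average of known samples."""
    xy_known = np.asarray(xy_known, float)
    values = np.asarray(values, float)
    out = np.empty(len(xy_query))
    for i, q in enumerate(np.asarray(xy_query, float)):
        d = np.linalg.norm(xy_known - q, axis=1)
        if d.min() < eps:                # query coincides with a sample site
            out[i] = values[d.argmin()]
            continue
        w = d ** -power                  # closer samples weigh more
        out[i] = w @ values / w.sum()
    return out

# hypothetical SRS values at three sampled sites, estimated at a new site
est = idw([[0, 0], [1, 0], [0, 1]], [10.0, 20.0, 30.0], [[0.5, 0.5]])
```

Evaluating such estimates on a regular grid yields a continuous surface from which isolines can then be contoured.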
Energy Technology Data Exchange (ETDEWEB)
Shah, Chirag [Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, MI (United States); Vicini, Frank A., E-mail: fvicini@beaumont.edu [Department of Radiation Oncology, William Beaumont Hospital, Royal Oak, MI (United States)
2011-11-15
As more women survive breast cancer, long-term toxicities affecting their quality of life, such as lymphedema (LE) of the arm, gain importance. Although numerous studies have attempted to determine incidence rates, identify optimal diagnostic tests, enumerate efficacious treatment strategies and outline risk reduction guidelines for breast cancer-related lymphedema (BCRL), few groups have consistently agreed on any of these issues. As a result, standardized recommendations are still lacking. This review will summarize the latest data addressing all of these concerns in order to provide patients and health care providers with optimal, contemporary recommendations. Published incidence rates for BCRL vary substantially with a range of 2-65% based on surgical technique, axillary sampling method, radiation therapy fields treated, and the use of chemotherapy. Newer clinical assessment tools can potentially identify BCRL in patients with subclinical disease with prospective data suggesting that early diagnosis and management with noninvasive therapy can lead to excellent outcomes. Multiple therapies exist with treatments defined by the severity of BCRL present. Currently, the standard of care for BCRL in patients with significant LE is complex decongestive physiotherapy (CDP). Contemporary data also suggest that a multidisciplinary approach to the management of BCRL should begin prior to definitive treatment for breast cancer employing patient-specific surgical, radiation therapy, and chemotherapy paradigms that limit risks. Further, prospective clinical assessments before and after treatment should be employed to diagnose subclinical disease. In those patients who require aggressive locoregional management, prophylactic therapies and the use of CDP can help reduce the long-term sequelae of BCRL.
Reduction in radiation dose with reconstruction technique in the brain perfusion CT
Kim, H. J.; Lee, H. K.; Song, H.; Ju, M. S.; Dong, K. R.; Chung, W. K.; Cho, M. S.; Cho, J. H.
2011-12-01
The principal objective of this study was to verify the utility of the reconstruction imaging technique in the brain perfusion computed tomography (PCT) scan by assessing reductions in the radiation dose and analyzing the generated images. The setting used for image acquisition had a detector coverage of 40 mm, a helical thickness of 0.625 mm, a helical shuttle mode scan type and a rotation time of 0.5 s as the image parameters used for the brain PCT scan. Additionally, a phantom experiment and an animal experiment were carried out. In the phantom and animal experiments, noise was measured in the scanning with the tube voltage fixed at 80 kVp (kilovolt peak) and the level of the adaptive statistical iterative reconstruction (ASIR) was changed from 0% to 100% at 10% intervals. The standard deviation of the CT coefficient was measured three times to calculate the mean value. In the phantom and animal experiments, the absorbed dose was measured 10 times under the same conditions as the ones for noise measurement before the mean value was calculated. In the animal experiment, pencil-type and CT-dedicated ionization chambers were inserted into the central portion of pig heads for measurement. In the phantom study, as the level of the ASIR changed from 0% to 100% under identical scanning conditions, the noise value and dose were proportionally reduced. In our animal experiment, the noise value was lowest when the ASIR level was 50%, unlike in the phantom study. The dose was reduced as in the phantom study.
Directory of Open Access Journals (Sweden)
Pixner Konrad
2015-01-01
Full Text Available The grape variety Vernatsch is prone to the formation of severe reductive notes during alcoholic fermentation (AF), spoiling the fruity aroma characteristic of this variety. We investigated the impact of eight different vinification treatments on the formation of volatile sulfur compounds (VSCs) and their effect on the sensorial quality of the wines of this susceptible grape variety. Without the addition of sulfur in the form of potassium metabisulfite (K2S2O5) to the crushed grapes, wines were significantly less reductive. The clarification treatment showed promising results for the diminution of reductive notes, but might not be a feasible strategy for commercial wineries. Changing the fermentation temperature or adding air, bentonite or copper to fermenting wines increased the appearance of reductive notes. The addition of sulfur prior to AF increased reductive notes in Vernatsch wines and needs to be considered as a crucial factor in the formation of reductive notes.
Dimensional Reduction for the General Markov Model on Phylogenetic Trees.
Sumner, Jeremy G
2017-03-01
We present a method of dimensional reduction for the general Markov model of sequence evolution on a phylogenetic tree. We show that taking certain linear combinations of the associated random variables (site pattern counts) reduces the dimensionality of the model from exponential in the number of extant taxa, to quadratic in the number of taxa, while retaining the ability to statistically identify phylogenetic divergence events. A key feature is the identification of an invariant subspace which depends only bilinearly on the model parameters, in contrast to the usual multi-linear dependence in the full space. We discuss potential applications including the computation of split (edge) weights on phylogenetic trees from observed sequence data.
Verification and Uncertainty Reduction of Amchitka Underground Nuclear Testing Models
Energy Technology Data Exchange (ETDEWEB)
Ahmed Hassan; Jenny Chapman
2006-02-01
zero. The current results of no-breakthrough match this lower bound. (8) Significant uncertainty reduction is achieved for model input parameters (recharge, conductivity, and recharge-conductivity ratio) with the R/K ratio experiencing a very dramatic reduction. (9) Uncertainty in groundwater fluxes is also reduced due to the reduction of R/K uncertainty. (10) Groundwater velocities based on new data are orders of magnitude slower than the velocities produced by the 2002 model due to the higher porosity obtained from the analysis of the MT data. (11) Uncertainty reduction in radionuclide mass flux could not be assessed as the velocities are too small to produce radionuclide breakthrough within the model timeframe of 2,000 years.
Macculi, Claudio, Piro, L.; Gatti, F.; Lotti, S.; Argan, A.; Laurenza, M.; D'Andrea, M.; Torrioli, G.; Biasotti, M.; Corsini, D.; Orlando, A.; Mineo, T.; D'Ai, A.; Molendi, S.; Gastaldello, F.; Bulgarelli, A.; Fioretti, V.; Jacquey, C.; Laurent, P.
2015-09-01
We present the particle background reduction techniques aimed at increasing the X-IFU sensitivity, which is degraded by primary protons of both solar and cosmic-ray origin and by secondary electrons. The adopted solutions involve Monte Carlo simulation, using both the Geant4 toolkit to estimate the expected background in L2 orbit through the payload mass model and a ray-tracing technique to evaluate the soft-proton component focused by the optics onto the main detector, as well as the development of an active cryogenic anticoincidence detector and a passive electron shielding to meet the scientific requirements.
A MODEL AND CONTROLLER REDUCTION METHOD FOR ROBUST CONTROL DESIGN.
Energy Technology Data Exchange (ETDEWEB)
YUE,M.; SCHLUETER,R.
2003-10-20
A bifurcation subsystem based model and controller reduction approach is presented. Using this approach a robust µ-synthesis SVC control is designed for interarea oscillation and voltage control based on a small reduced order bifurcation subsystem model of the full system. The control synthesis problem is posed by structured uncertainty modeling and control configuration formulation using the bifurcation subsystem knowledge of the nature of the interarea oscillation caused by a specific uncertainty parameter. The bifurcation subsystem method plays a key role in this paper because it provides (1) a bifurcation parameter for uncertainty modeling; (2) a criterion to reduce the order of the resulting MSVC control; and (3) a low order model for a bifurcation subsystem based SVC (BMSVC) design. The use of the model of the bifurcation subsystem to produce a low order controller simplifies the control design and reduces the computational effort so significantly that the robust µ-synthesis control can be applied to large systems where the computation would otherwise make robust control design impractical. The RGA analysis and time simulation show that the reduced BMSVC control design captures the center manifold dynamics and uncertainty structure of the full system model and is capable of stabilizing the full system and achieving satisfactory control performance.
Modelling Drivers' Behaviour as a Crash Risk Reduction Process
Directory of Open Access Journals (Sweden)
Seyyed Mohammad Sadat Hoseini
2008-05-01
Full Text Available The ever more widespread use of microscopic traffic simulation in the analysis of road systems has refocused attention on submodels, including car-following and lane-changing models. In this research a microscopic model is developed which combines car-following and lane-changing models and describes driver behaviour as a crash risk reduction process. This model has been simulated by a cellular automata simulator and compared with real data. It has been shown that there is no reason to consider the model invalid for drivers' behaviour in the basic segments of freeways in Iran during uncongested conditions. Considering that uncertainty in the position of vehicles is caused by their acceleration or deceleration, a probability function is calibrated for calculating the presence probability of vehicles in their feasible cells. Multiplying the presence probability by the impact of a crash gives the crash risk of each cell. As an application of the model, it has been shown that the total crash risk increases as the difference between vehicles' brake decelerations increases.
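The abstract does not give the authors' update rules; as a point of reference, a classic cellular-automaton traffic model of the same family (the Nagel-Schreckenberg single-lane CA, without the crash-risk extension) can be sketched as:

```python
import random

def nasch_step(pos, vel, road_len, v_max=5, p_slow=0.3, rng=random.Random(0)):
    """One parallel update of the Nagel-Schreckenberg single-lane CA:
    accelerate, brake to the gap ahead, randomize, then move."""
    order = sorted(range(len(pos)), key=lambda i: pos[i])
    new_pos, new_vel = pos[:], vel[:]
    for k, i in enumerate(order):
        leader = pos[order[(k + 1) % len(order)]]
        gap = (leader - pos[i] - 1) % road_len      # empty cells ahead
        v = min(vel[i] + 1, v_max, gap)             # never enter leader's cell
        if v > 0 and rng.random() < p_slow:         # random braking
            v -= 1
        new_vel[i], new_pos[i] = v, (pos[i] + v) % road_len
    return new_pos, new_vel

pos, vel = [0, 10, 20, 30], [0, 0, 0, 0]
for _ in range(100):                                # ring road of 50 cells
    pos, vel = nasch_step(pos, vel, road_len=50)
```

Because each vehicle's speed is capped by the gap to its leader, the parallel update is collision-free by construction; a risk-based model of the kind described above would replace the deterministic braking rule with the calibrated presence-probability and crash-risk terms.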
Directory of Open Access Journals (Sweden)
Niaz Mohammad Jafari Chokan
2016-11-01
Full Text Available Anterior shoulder dislocation is the most common joint dislocation in the human body. Many methods are traditionally described for reduction of shoulder dislocation. Most of these techniques are painful to patients and may be associated with further injury. An ideal method should be easy, effective, and less painful, not associated with iatrogenic complications, and should be easy to teach and learn. Among different methods of reduction, the external rotation and Milch methods are the most popular. Both methods are found to be atraumatic, relatively painless and can be performed without anesthesia. In this article, we aimed to review the literature regarding these two methods of reduction and to compare their success rates and outcomes. We reviewed the literature to find articles related to reduction of anterior shoulder dislocations applying one of the two techniques described above. We searched PubMed and Google Scholar. In total, 46 articles were found; of these, 17 articles, which mainly focused on anterior shoulder dislocation reduction by means of the two methods above, were included in this review. The results showed that both techniques were effective, safe, relatively painless, and well tolerated with no complications, but the external rotation method was superior.
Kerfriden, P.; Goury, O.; Rabczuk, T.; Bordas, S.P.A.
2013-01-01
We propose in this paper a reduced order modelling technique based on domain partitioning for parametric problems of fracture. We show that coupling domain decomposition and projection-based model order reduction makes it possible to focus the numerical effort where it is most needed: around the zones where damage propagates. No a priori knowledge of the damage pattern is required, the extraction of the corresponding spatial regions being based solely on algebra. The efficiency of the proposed approach is demonstrated numerically with an example relevant to engineering fracture. PMID:23750055
Disc volume reduction with percutaneous nucleoplasty in an animal model.
Directory of Open Access Journals (Sweden)
Richard Kasch
Full Text Available STUDY DESIGN: We assessed volume following nucleoplasty disc decompression in lower lumbar spines from cadaveric pigs using 7.1-Tesla magnetic resonance imaging (MRI). PURPOSE: To investigate coblation-induced volume reductions as a possible mechanism underlying nucleoplasty. METHODS: We assessed volume following nucleoplastic disc decompression in pig spines using 7.1-Tesla MRI. Volumetry was performed in lumbar discs of 21 postmortem pigs. A preoperative image data set was obtained, volume was determined, and either disc decompression or placebo therapy was performed in a randomized manner. Group 1 (nucleoplasty group) was treated according to the usual nucleoplasty protocol with coblation current applied to 6 channels for 10 seconds each in an application field of 360°; in group 2 (placebo group) the same procedure was performed but without coblation current. After the procedure, a second data set was generated and volumes calculated and matched with the preoperative measurements in a blinded manner. To analyze the effectiveness of nucleoplasty, volumes between treatment and placebo groups were compared. RESULTS: The average preoperative nucleus volume was 0.994 ml (SD: 0.298 ml). In the nucleoplasty group (n = 21) volume was reduced by an average of 0.087 ml (SD: 0.110 ml) or 7.14%. In the placebo group (n = 21) volume was increased by an average of 0.075 ml (SD: 0.075 ml) or 8.94%. The average nucleoplasty-induced volume reduction was 0.162 ml (SD: 0.124 ml) or 16.08%. Volume reduction in lumbar discs was significant in favor of the nucleoplasty group (p < 0.0001). CONCLUSIONS: Our study demonstrates that nucleoplasty has a volume-reducing effect on the lumbar nucleus pulposus in an animal model. Furthermore, we show the volume reduction to be a coblation effect of nucleoplasty in porcine discs.
Problem Reduction in Online Payment System Using Hybrid Model
Singh, Sandeep Pratap; Rakesh, Nitin; Tyagi, Vipin
2011-01-01
Online auctions, shopping, electronic billing and similar applications all involve the risk of fraudulent transactions. The occurrence and detection of online fraud is one of the challenging problems in web development and online phantom transactions. Since no secure specification of online fraud exists in the research literature, the techniques to evaluate and stop it are also still under study. We provide an approach using a Hidden Markov Model (HMM) and mobile implicit authentication to determine whether the user interacting online is fraudulent or not. We propose a model based on these approaches to counter the fraud that occurs and prevent loss to the customer. Our technique is more parameterized than traditional approaches, so the chances of detecting a legitimate user as a fraud are reduced.
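One common way to apply an HMM to transaction streams is to score each user's observation sequence with the forward algorithm and flag sequences with unusually low likelihood. A generic sketch (not the paper's specific model; the states, symbols and probabilities below are hypothetical):

```python
import math

def hmm_loglik(obs, start, trans, emit):
    """Forward-algorithm log-likelihood of a discrete observation
    sequence, with per-step rescaling to avoid numerical underflow."""
    n = len(start)
    alpha = [start[s] * emit[s][obs[0]] for s in range(n)]
    loglik = math.log(sum(alpha))
    for o in obs[1:]:
        c = sum(alpha)
        alpha = [a / c for a in alpha]             # rescale
        alpha = [emit[s][o] * sum(alpha[r] * trans[r][s] for r in range(n))
                 for s in range(n)]
        loglik += math.log(sum(alpha))
    return loglik

# hypothetical 2-state model; symbols 0/1/2 = low/medium/high amount
start = [0.8, 0.2]
trans = [[0.9, 0.1], [0.2, 0.8]]
emit = [[0.7, 0.2, 0.1],    # emission probabilities in the "typical" state
        [0.1, 0.3, 0.6]]    # ... and in the "atypical" state
typical = hmm_loglik([0, 0, 1, 0], start, trans, emit)
unusual = hmm_loglik([2, 2, 2, 2], start, trans, emit)  # lower score
```

A sequence scoring far below a user's historical range would be routed to secondary checks such as the mobile implicit authentication mentioned above.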
Energy Technology Data Exchange (ETDEWEB)
Carlberg, Kevin Thomas [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Drohmann, Martin [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Tuminaro, Raymond S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Computational Mathematics; Boggs, Paul T. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Optimization and Uncertainty Estimation
2014-10-01
Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear reduced-order models (ROMs) to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties, such as energy conservation and symplectic time-evolution maps, are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity, defined as the number of Newton-like iterations performed over the course of the simulation, by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model
Directory of Open Access Journals (Sweden)
Leila Sahraoui
2013-10-01
Full Text Available An OFDM system is combined with multiple-input multiple-output (MIMO) in order to increase the diversity gain and system capacity over time-variant frequency-selective channels. However, a major drawback of the MIMO-OFDM system is that the transmitted signals on different antennas might exhibit a high peak-to-average power ratio (PAPR). In this paper, we present a PAPR reduction analysis of the space-time block-coded (STBC) MIMO-OFDM system for 4G wireless networks. Several techniques have been used to reduce the PAPR of the STBC MIMO-OFDM system: clipping and filtering, partial transmit sequence (PTS) and selected mapping (SLM). Simulation results show that clipping and filtering provides a better PAPR reduction than the other methods, and that only the SLM technique preserves the PAPR reduction at the receiving side of the signal.
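The clipping stage of clipping-and-filtering can be sketched as follows (illustrative only: a hypothetical 64-subcarrier QPSK symbol, with the out-of-band filtering stage omitted):

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def clip_signal(x, cr_db=4.0):
    """Amplitude clipping at a threshold cr_db above the RMS level;
    the subsequent low-pass filtering stage is omitted here."""
    a = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (cr_db / 20.0)
    scale = np.minimum(1.0, a / np.maximum(np.abs(x), 1e-300))
    return x * scale            # phases preserved, peaks limited to a

# hypothetical 64-subcarrier OFDM symbol with random QPSK data
rng = np.random.default_rng(0)
sym = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=64)
x = np.fft.ifft(sym)
```

Since clipping compresses only the largest samples, the PAPR of `clip_signal(x)` can never exceed that of `x`; the price, addressed by the filtering stage, is in-band distortion and out-of-band radiation.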
Formal modelling techniques in human-computer interaction
Haan, de G.; Veer, van der G.C.; Vliet, van J.C.
1991-01-01
This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in hum
Model reduction for experimental thermal characterization of a holding furnace
Loussouarn, Thomas; Maillet, Denis; Remy, Benjamin; Dan, Diane
2017-09-01
Vacuum holding induction furnaces are used for the manufacturing of turbine blades by the lost-wax foundry process. The control of solidification parameters is a key factor in the manufacturing of these parts. The definition of the structure of a reduced heat transfer model, with experimental identification through an estimation of its parameters, is required here. Internal sensor outputs, together with this model, can be used for assessing the thermal state of the furnace through an inverse approach, for better control. Here, an axisymmetric furnace and its load have been numerically modelled using FlexPDE, a finite element code. The internal induction heat source as well as the transient radiative transfer inside the furnace are calculated through this detailed model. A reduced lumped-body model has been constructed to represent the numerical furnace. The model reduction and the estimation of the parameters of the lumped body have been made using a Levenberg-Marquardt least squares minimization algorithm, using two synthetic temperature signals, with a further validation test.
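The identification step can be illustrated with a generic first-order lumped cooling model fitted by Levenberg-Marquardt; this is a sketch under invented values, not the paper's furnace model:

```python
import numpy as np
from scipy.optimize import least_squares

# hypothetical lumped cooling model: T(t) = T_inf + (T0 - T_inf) exp(-t/tau)
T0 = 900.0                                   # assumed known initial temperature

def model(theta, t):
    T_inf, tau = theta
    return T_inf + (T0 - T_inf) * np.exp(-t / tau)

# synthetic "measured" signal: true parameters (300, 2.5) plus noise
t = np.linspace(0.0, 10.0, 50)
T_meas = model((300.0, 2.5), t) + np.random.default_rng(3).normal(0.0, 1.0, t.size)

# Levenberg-Marquardt fit of (T_inf, tau) to the synthetic signal
fit = least_squares(lambda th: model(th, t) - T_meas, x0=[400.0, 1.0],
                    method="lm")
```

Fitting a reduced model to synthetic signals generated by the detailed finite element model, as done in the paper, follows the same pattern with the finite element output playing the role of `T_meas`.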
Quantitative model validation techniques: new insights
Ling, You
2012-01-01
This paper develops new insights into quantitative methods for the validation of computational model prediction. Four types of methods are investigated, namely classical and Bayesian hypothesis testing, a reliability-based method, and an area metric-based method. Traditional Bayesian hypothesis testing is extended based on interval hypotheses on distribution parameters and equality hypotheses on probability distributions, in order to validate models with deterministic/stochastic output for given inputs. Two types of validation experiments are considered - fully characterized (all the model/experimental inputs are measured and reported as point values) and partially characterized (some of the model/experimental inputs are not measured or are reported as intervals). Bayesian hypothesis testing can minimize the risk in model selection by properly choosing the model acceptance threshold, and its results can be used in model averaging to avoid Type I/II errors. It is shown that Bayesian interval hypothesis testing...
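The area metric mentioned above compares the empirical CDFs of model output and observations; a generic sketch with hypothetical samples:

```python
import bisect

def ecdf(samples):
    """Empirical CDF: F(x) = fraction of samples <= x."""
    s = sorted(samples)
    return lambda x: bisect.bisect_right(s, x) / len(s)

def area_metric(model_samples, data_samples):
    """Area between the empirical CDFs of model output and experimental
    observations; zero means the two distributions coincide.
    (Generic sketch of the area metric, not the paper's own code.)"""
    Fm, Fd = ecdf(model_samples), ecdf(data_samples)
    pts = sorted(set(model_samples) | set(data_samples))
    area = 0.0
    for a, b in zip(pts, pts[1:]):
        area += abs(Fm(a) - Fd(a)) * (b - a)   # both CDFs constant on [a, b)
    return area

# hypothetical model predictions vs. measured values of a scalar output
d = area_metric([9.8, 10.1, 10.3, 10.0], [10.2, 10.4, 10.6])
```

Unlike a hypothesis test, the metric returns a distance in the units of the output, which can then be compared against an accuracy requirement.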
Model Reduction and Coarse-Graining Approaches for Multiscale Phenomena
Gorban, Alexander N; Theodoropoulos, Constantinos; Kazantzis, Nikolaos K; Öttinger, Hans Christian
2006-01-01
Model reduction and coarse-graining are important in many areas of science and engineering. How does a system with many degrees of freedom become one with fewer? How can a reversible micro-description be adapted to the dissipative macroscopic model? These crucial questions, as well as many other related problems, are discussed in this book. Specific areas of study include dynamical systems, non-equilibrium statistical mechanics, kinetic theory, hydrodynamics and mechanics of continuous media, (bio)chemical kinetics, nonlinear dynamics, nonlinear control, nonlinear estimation, and particulate systems from various branches of engineering. The generic nature and the power of the pertinent conceptual, analytical and computational frameworks helps eliminate some of the traditional language barriers, which often unnecessarily impede scientific progress and the interaction of researchers between disciplines such as physics, chemistry, biology, applied mathematics and engineering. All contributions are authored by ex...
Residual Minimizing Model Reduction for Parameterized Nonlinear Dynamical Systems
Constantine, Paul G
2010-01-01
We present a method for approximating the solution of a parameterized, nonlinear dynamical (or static) system using an affine combination of solutions computed at other points in the input parameter space. The coefficients of the affine combination are computed with a nonlinear least squares procedure that minimizes the residual of the dynamical system. The approximation properties of this residual minimizing scheme are comparable to existing reduced basis and POD-Galerkin model reduction methods, but its implementation requires only independent evaluations of the nonlinear forcing function. We prove some interesting characteristics of the scheme including uniqueness and an interpolatory property, and we present heuristics for mitigating the effects of the ill-conditioning and reducing the overall cost of the method. We apply the method to representative numerical examples from kinetics - a three state system with one parameter controlling the stiffness - and groundwater modeling - a nonlinear parabolic PDE w...
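A minimal sketch of the residual-minimizing idea on a toy static nonlinear system (the system and parameter values are invented for illustration):

```python
import numpy as np
from scipy.optimize import least_squares

# toy parameterized static system: f(u, p) = u + p u^3 - b = 0
b = np.array([1.0, 2.0, 0.5])
f = lambda u, p: u + p * u ** 3 - b

# snapshot solutions computed at two points in parameter space
u1 = least_squares(lambda u: f(u, 0.5), np.zeros(3)).x
u2 = least_squares(lambda u: f(u, 2.0), np.zeros(3)).x

# approximate the solution at a new parameter by the affine combination
# c*u1 + (1-c)*u2 whose residual norm at p_new is smallest
p_new = 1.0
c = least_squares(lambda c: f(c[0] * u1 + (1 - c[0]) * u2, p_new), [0.5]).x[0]
u_approx = c * u1 + (1 - c) * u2
```

By construction the optimized combination has a residual no larger than either snapshot alone, and, as the abstract notes, only independent evaluations of the nonlinear function are needed.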
Kaluza-Klein Reduction of a Quadratic Curvature Model
Baskal, S
2010-01-01
The Palatini variational principle is implemented on a five-dimensional quadratic curvature gravity model, rendering two sets of equations which can be interpreted as the field equations and the stress-energy tensor. Unification of gravity with electromagnetism and the scalar dilaton field is achieved through the Kaluza-Klein dimensional reduction mechanism. The reduced curvature invariant, field equations and stress-energy tensor in four-dimensional spacetime are obtained. The structure of the interactions among the constituent fields is exhibited in detail. It is shown that the Lorentz force naturally emerges from the reduced field equations, and the equations of the standard Kaluza-Klein theory are demonstrated to be intrinsically contained in this model.
Model Order Reduction for Fluid Dynamics with Moving Solid Boundary
Gao, Haotian; Wei, Mingjun
2016-11-01
We extended the application of POD-Galerkin projection for model order reduction from the usual fixed-domain problems to more general fluid-solid systems where a moving boundary or interface is involved. The idea is similar to numerical simulation approaches using embedded forcing terms to represent boundary motion and domain change. However, such a modified approach does not remove the unsteadiness of the boundary terms, which appear as time-dependent coefficients in the new Galerkin model. These coefficients need to be pre-computed for prescribed motion, or worse, computed at each time step for non-prescribed motion. The extra computational cost becomes expensive in some cases and eventually undermines the value of using reduced-order models. One solution is to decompose the moving boundary/domain into orthogonal modes and derive another low-order model with fixed coefficients for the boundary motion. Further study shows that the most expensive integrations resulting from the unsteady motion (in both the original and the domain-decomposition approaches) have almost negligible impact on the overall dynamics. Dropping these expensive terms reduces the computational cost by at least one order of magnitude while no obvious effect on model accuracy is noticed. Supported by ARL.
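The POD basis underlying such a Galerkin projection is typically computed from a snapshot SVD; a generic sketch on a synthetic rank-2 field (not the paper's flow data):

```python
import numpy as np

def pod_modes(snapshots, r):
    """Leading r POD modes of a snapshot matrix (columns = states at
    successive times), computed via the thin SVD."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r], s

# synthetic rank-2 space-time field standing in for flow snapshots
x = np.linspace(0.0, 2.0 * np.pi, 200)
t = np.linspace(0.0, 1.0, 50)
S = np.outer(np.sin(x), np.cos(2 * np.pi * t)) \
    + 0.1 * np.outer(np.sin(2 * x), np.sin(2 * np.pi * t))
Phi, s = pod_modes(S, r=2)      # two modes capture all the energy here
```

Projecting the governing equations onto the orthonormal columns of `Phi` yields the low-order ODE system; in the moving-boundary setting described above, the boundary terms add the time-dependent coefficients discussed in the abstract.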
Xie, Yujing; Zhao, Laijun; Xue, Jian; Hu, Qingmi; Xu, Xiang; Wang, Hongbo
2016-12-15
How to effectively control severe regional air pollution has recently become a focus of global concern. The non-cooperative reduction model (NCRM) is still the main air pollution control pattern in China, but it is both ineffective and costly, because each province must independently fight air pollution. Thus, we proposed a cooperative reduction model (CRM), with the goal of maximizing the reduction in adverse health effects (AHEs) at the lowest cost by encouraging neighboring areas to jointly control air pollution. CRM has two parts: a model of optimal pollutant removal rates using two optimization objectives (maximizing the reduction in AHEs and minimizing pollutant reduction cost) while meeting the regional pollution control targets set by the central government, and a model that allocates the cooperation benefits (i.e., health improvement and cost reduction) among the participants according to their contributions using the Shapley value method. We applied CRM to the case of sulfur dioxide (SO2) reduction in the Yangtze River Delta region. Based on data from 2003 to 2013, and using mortality due to respiratory and cardiovascular diseases as the health endpoints, CRM saves 437 more lives than NCRM, amounting to 12.1% of the reduction under NCRM. CRM also reduced costs by US$65.8×10^6 compared with NCRM, which is 5.2% of the total cost of NCRM. Thus, CRM performs significantly better than NCRM. Each province obtains significant benefits from cooperation, which can motivate them to actively cooperate in the long term. A sensitivity analysis was performed to quantify the effects of parameter values on the cooperation benefits. Results showed that the CRM is not sensitive to changes in each province's pollutant carrying capacity and minimum pollutant removal capacity, but is sensitive to the maximum pollutant reduction capacity. Moreover, higher cooperation benefits will be generated when a province's maximum pollutant reduction capacity increases.
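The Shapley value allocation used in the second part of CRM can be sketched generically; the coalition savings below are invented, not the paper's figures:

```python
import math
from itertools import permutations

def shapley(players, v):
    """Exact Shapley values: each player's average marginal contribution
    over all orders in which the grand coalition can form."""
    phi = dict.fromkeys(players, 0.0)
    for order in permutations(players):
        coalition, prev = frozenset(), 0.0
        for p in order:
            coalition = coalition | {p}
            cur = v(coalition)
            phi[p] += cur - prev       # marginal contribution of p
            prev = cur
    n_fact = math.factorial(len(players))
    return {p: phi[p] / n_fact for p in phi}

# invented cooperation savings (not the paper's data) for provinces A, B, C
savings = {frozenset(): 0.0, frozenset("A"): 0.0, frozenset("B"): 0.0,
           frozenset("C"): 0.0, frozenset("AB"): 30.0, frozenset("AC"): 20.0,
           frozenset("BC"): 25.0, frozenset("ABC"): 66.0}
alloc = shapley("ABC", savings.__getitem__)   # shares sum to the joint 66.0
```

The allocation is efficient (the shares exhaust the grand coalition's value) and rewards each province in proportion to its contribution, which is what makes long-term cooperation individually attractive.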
Techniques and Simulation Models in Risk Management
Mirela GHEORGHE
2012-01-01
In the present paper, the scientific approach of the research starts from the theoretical framework of the simulation concept and then continues in the setting of practical reality, thus providing simulation models for a broad range of inherent risks specific to any organization, together with simulation of those models using the informatics instrument @Risk (Palisade). The reason behind this research lies in the need for simulation models that will allow the person in charge of decision making i...
Publishing nutrition research: a review of multivariate techniques--part 3: data reduction methods.
Gleason, Philip M; Boushey, Carol J; Harris, Jeffrey E; Zoellner, Jamie
2015-07-01
This is the ninth in a series of monographs on research design and analysis, and the third in a set of these monographs devoted to multivariate methods. The purpose of this article is to provide an overview of data reduction methods, including principal components analysis, factor analysis, reduced rank regression, and cluster analysis. In the field of nutrition, data reduction methods can be used for three general purposes: for descriptive analysis in which large sets of variables are efficiently summarized, to create variables to be used in subsequent analysis and hypothesis testing, and in questionnaire development. The article describes the situations in which these data reduction methods can be most useful, briefly describes how the underlying statistical analyses are performed, and summarizes how the results of these data reduction methods should be interpreted.
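As a minimal illustration of the first purpose above (efficiently summarizing a large set of variables), the following sketch runs a principal components analysis via the SVD. The dataset is synthetic, built so that six observed variables share two underlying dimensions:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "nutrition" data: 200 subjects, 6 intake variables driven
# by 2 latent dietary patterns plus small measurement noise.
latent = rng.normal(size=(200, 2))
loadings = np.array([[1.0, 1.0, 0.0, 2.0, -1.0, 1.0],
                     [0.0, 1.0, 1.0, -1.0, 2.0, 1.0]])
X = latent @ loadings + 0.1 * rng.normal(size=(200, 6))

# Standardize, then PCA via SVD of the centered/scaled data matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)   # proportion of variance per component
scores = Z @ Vt.T                 # component scores for each subject

print(explained[:2])  # the first two components capture most variance
```

The component scores can then serve as the summary variables for subsequent hypothesis testing, exactly the second use described in the abstract.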
Order Reduction of the Radiative Heat Transfer Model for the Simulation of Plasma Arcs
Fagiano, Lorenzo
2015-01-01
An approach to derive low-complexity models describing thermal radiation for the sake of simulating the behavior of electric arcs in switchgear systems is presented. The idea is to approximate the (high dimensional) full-order equations, modeling the propagation of the radiated intensity in space, with a model of much lower dimension, whose parameters are identified by means of nonlinear system identification techniques. The low-order model preserves the main structural aspects of the full-order one, and its parameters can be straightforwardly used in arc simulation tools based on computational fluid dynamics. In particular, the model parameters can be used together with the common approaches to resolve radiation in magnetohydrodynamic simulations, including the discrete-ordinate method, the P-N methods and photohydrodynamics. The proposed order reduction approach is able to systematically compute the partitioning of the electromagnetic spectrum in frequency bands, and the related absorption coefficients, tha...
Modeling of detective quantum efficiency considering scatter-reduction devices
Energy Technology Data Exchange (ETDEWEB)
Park, Ji Woong; Kim, Dong Woon; Kim, Ho Kyung [Pusan National University, Busan (Korea, Republic of)
2016-05-15
The reduction of the signal-to-noise ratio (SNR) caused by scattered photons cannot be restored and has therefore become a severe issue in digital mammography.1 Antiscatter grids are consequently typically used in mammography. The scatter-cleanup performance of various scatter-reduction devices, such as air gaps,2 linear (1D) or cellular (2D) grids,3,4 and slot-scanning devices,5 has been extensively investigated by many research groups. At present, a digital mammography system with slot-scanning geometry is also commercially available.6 In this study, we theoretically investigate the effect of scattered photons on the detective quantum efficiency (DQE) performance of digital mammography detectors by using the cascaded-systems analysis (CSA) approach. We show a simple DQE formalism describing digital mammography detector systems equipped with scatter-reduction devices by regarding the scattered photons as additive noise sources. The low-frequency drop (LFD) in the MTF increased with increasing PMMA thickness, and the amount of LFD indicated the corresponding scatter fraction (SF). The estimated SFs were 0.13, 0.21, and 0.29 for PMMA thicknesses of 10, 20, and 30 mm, respectively. While the solid line describing the measured MTF for PMMA of 0 mm was the result of a least-squares regression fit using Eq. (14), the other lines simply resulted from multiplying the fit result (for PMMA of 0 mm) by the (1-SF) estimated from the LFDs in the measured MTFs. The measured spectral noise-power densities over the entire frequency range did not change much with increasing scatter. On the other hand, the calculation results showed that the spectral noise-power densities increased with increasing scatter. This discrepancy may be explained by the fact that the model developed in this study does not account for changes in x-ray interaction parameters for varying spectral shapes due to beam hardening with increasing PMMA thickness.
A TECHNIQUE OF DIGITAL SURFACE MODEL GENERATION
Institute of Scientific and Technical Information of China (English)
Anonymous
2001-01-01
It is usually a time-consuming process to set up a 3D digital surface model (DSM) of an object with a complex surface in real time. On the basis of the architectural survey project "Chilin Nunnery Reconstruction", this paper investigates an easy and feasible way: on the project site, applying digital close-range photogrammetry and CAD techniques to establish the DSM for simulating ancient architectures with complex surfaces. The method has proved very effective in practice.
DEFF Research Database (Denmark)
Panduro, Toke Emil; Thorsen, Bo Jellesmark
2014-01-01
Hedonic models in environmental valuation studies have grown in terms of number of transactions and number of explanatory variables. We focus on the practical challenge of model reduction, when aiming for reliable parsimonious models, sensitive to omitted variable bias and multicollinearity. We...
Time-to-Compromise Model for Cyber Risk Reduction Estimation
Energy Technology Data Exchange (ETDEWEB)
Miles A. McQueen; Wayne F. Boyer; Mark A. Flynn; George A. Beitel
2005-09-01
We propose a new model for estimating the time to compromise a system component that is visible to an attacker. The model provides an estimate of the expected value of the time-to-compromise as a function of known and visible vulnerabilities, and attacker skill level. The time-to-compromise random process model is a composite of three subprocesses associated with attacker actions aimed at the exploitation of vulnerabilities. In a case study, the model was used to aid in a risk reduction estimate between a baseline Supervisory Control and Data Acquisition (SCADA) system and the baseline system enhanced through a specific set of control system security remedial actions. For our case study, the total number of system vulnerabilities was reduced by 86% but the dominant attack path was through a component where the number of vulnerabilities was reduced by only 42% and the time-to-compromise of that component was increased by only 13% to 30% depending on attacker skill level.
Building virtual 3D bone fragment models to control diaphyseal fracture reduction
Leloup, Thierry; Schuind, Frederic; Lasudry, Nadine; Van Ham, Philippe
1999-05-01
Most fractures of the long bones are displaced and need to be surgically reduced. External fixation is often used, but the crucial point of this technique is the control of reduction, which is performed under a brilliance amplifier (image intensifier). This system, which instantly provides an x-ray image, has many disadvantages: it implies frequent irradiation of the patient and the surgical team, the visual field is limited, the supplied images are distorted, and it only gives 2D information. Consequently, the reduction is occasionally imperfect although intraoperatively it appears acceptable. Using the pins inserted in each fragment as markers and an optical tracker, it is possible to build a virtual 3D model of each principal fragment and to follow its movement during the reduction. This system will supply a 3D image of the fracture in real time and without irradiation. The brilliance amplifier could then be replaced by such a virtual reality system, providing the surgeon with an accurate tool that facilitates reduction of the fracture. The purpose of this work is to show how to build the 3D model of each principal bone fragment.
Moving objects management models, techniques and applications
Meng, Xiaofeng; Xu, Jiajie
2014-01-01
This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.
DEFF Research Database (Denmark)
Chambon, Julie Claire Claudia; Bjerg, Poul Løgstrup; Scheutz, Charlotte;
2013-01-01
Reductive dechlorination is a major degradation pathway of chlorinated ethenes in anaerobic subsurface environments, and reactive kinetic models describing the degradation process are needed in fate and transport models of these contaminants. However, reductive dechlorination is a complex biologi...
Optimal State-Space Reduction for Pedigree Hidden Markov Models
Kirkpatrick, Bonnie
2012-01-01
To analyze whole-genome genetic data inherited in families, the likelihood is typically obtained from a Hidden Markov Model (HMM) having a state space of 2^n hidden states where n is the number of meioses or edges in the pedigree. There have been several attempts to speed up this calculation by reducing the state-space of the HMM. One of these methods has been automated in a calculation that is more efficient than the naive HMM calculation; however, that method treats a special case and the efficiency gain is available for only those rare pedigrees containing long chains of single-child lineages. The other existing state-space reduction method treats the general case, but the existing algorithm has super-exponential running time. We present three formulations of the state-space reduction problem, two dealing with groups and one with partitions. One of these problems, the maximum isometry group problem was discussed in detail by Browning and Browning. We show that for pedigrees, all three of these problems hav...
Transient stability of large scale system using efficient network reduction technique
Energy Technology Data Exchange (ETDEWEB)
Shenoy, D.L.; Belapurkar, R.K.; Raghavan, R.; Nanda, J.; Kothari, D.P.
1981-12-01
An efficient yet very simple technique incorporating Brown's axis discarding technique and optimal ordering of nodes for reducing a large scale power system has been described and its use in obtaining rapid transient stability solutions has been explained. The technique developed can also be used in short circuit analysis of a large power system when one is interested in finding out short circuit levels at a few buses in the system. 5 refs.
Institute of Scientific and Technical Information of China (English)
LIU Yan; JIANG Mao-fa; XU Li-xian; WANG De-yong
2012-01-01
To better describe and resolve the process of chromium ore smelting reduction in a converter, a coupling dynamic model was established based on the kinetic models of chromium ore dissolution and the interfacial reducing reaction between the slag and metal. When a 150 t stainless steel crude melt with 12% chromium is produced in a smelting reduction converter with no initial chromium in the metal at 1 560 °C, the results of the coupling dynamic model show that the mean reduction rate and injection rate of chromium ore are 0.091%·min^-1 and 467 kg·min^-1, respectively. The coupling dynamic model provides a reference and basis for establishing a rational processing route for practical stainless steelmaking.
Influence of model reduction on uncertainty of flood inundation predictions
Romanowicz, R. J.; Kiczko, A.; Osuch, M.
2012-04-01
Derivation of flood risk maps requires an estimation of the maximum inundation extent for a flood with an assumed probability of exceedence, e.g. a 100 or 500 year flood. The results of numerical simulations of flood wave propagation are used to overcome the lack of relevant observations. In practice, deterministic 1-D models are used for flow routing, giving a simplified image of a flood wave propagation process. The solution of a 1-D model depends on the simplifications to the model structure, the initial and boundary conditions and the estimates of model parameters which are usually identified using the inverse problem based on the available noisy observations. Therefore, there is a large uncertainty involved in the derivation of flood risk maps. In this study we examine the influence of model structure simplifications on estimates of flood extent for the urban river reach. As the study area we chose the Warsaw reach of the River Vistula, where nine bridges and several dikes are located. The aim of the study is to examine the influence of water structures on the derived model roughness parameters, with all the bridges and dikes taken into account, with a reduced number and without any water infrastructure. The results indicate that roughness parameter values of a 1-D HEC-RAS model can be adjusted for the reduction in model structure. However, the price we pay is the model robustness. Apart from a relatively simple question regarding reducing model structure, we also try to answer more fundamental questions regarding the relative importance of input, model structure simplification, parametric and rating curve uncertainty to the uncertainty of flood extent estimates. We apply pseudo-Bayesian methods of uncertainty estimation and Global Sensitivity Analysis as the main methodological tools. The results indicate that the uncertainties have a substantial influence on flood risk assessment. In the paper we present a simplified methodology allowing the influence of
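A pseudo-Bayesian weighting of Monte Carlo parameter sets, of the kind referred to above (GLUE-style), can be sketched as follows. All numbers are synthetic, and the linear stage-to-extent relation is a made-up stand-in for an actual 1-D hydraulic model:

```python
import numpy as np

rng = np.random.default_rng(7)

# Each of 5000 hypothetical parameter sets yields a simulated stage at a
# gauge and a predicted flood extent; fit to the observation gives a
# likelihood weight, and the weights give uncertainty bounds on extent.
n_sets = 5000
obs = 4.2                                          # observed stage [m]
sim = obs + rng.normal(0.0, 0.5, n_sets)           # simulated stages [m]
extent = 120 + 35 * (sim - obs) + rng.normal(0, 2, n_sets)  # extent [ha]

sigma = 0.3  # assumed observation-error scale
weights = np.exp(-0.5 * ((sim - obs) / sigma) ** 2)
weights /= weights.sum()

# Weighted 5%, 50%, 95% quantiles of the predicted flood extent.
order = np.argsort(extent)
cdf = np.cumsum(weights[order])
bounds = np.interp([0.05, 0.5, 0.95], cdf, extent[order])
print(bounds.round(1))
```

Parameter sets whose simulated stage fits the observation poorly contribute little weight, so the quantile band reflects only the behavioral simulations.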
Mattern, Jann Paul; Edwards, Christopher A.
2017-01-01
Parameter estimation is an important part of numerical modeling and often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy to implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than random, uninformed parameter search but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.
Restrictions in Model Reduction for Polymer Chain Models in Dissipative Particle Dynamics
Moreno Chaparro, Nicolas
2014-06-06
We model high molecular weight homopolymers in semidilute concentration via Dissipative Particle Dynamics (DPD). We show that in model reduction methodologies for polymers it is not enough to preserve system properties (i.e., density ρ, pressure p, temperature T, radial distribution function g(r)); preserving the characteristic shape and length scale of the polymer chain model is also necessary. In this work we apply a recently proposed DPD model-reduction methodology for linear polymers, and demonstrate why its applicability is limited to below a certain maximum polymer length and why it is not suitable for solvent coarse-graining.
DEFF Research Database (Denmark)
Panduro, Toke Emil; Thorsen, Bo Jellesmark
2014-01-01
evaluate two common model reduction approaches in an empirical case. The first relies on a principal component analysis (PCA) used to construct new orthogonal variables, which are applied in the hedonic model. The second relies on a stepwise model reduction based on the variance inflation index and Akaike's information criterion. Our empirical application focuses on estimating the implicit price of forest proximity in a Danish case area, with a dataset containing 86 relevant variables. We demonstrate that the estimated implicit price for forest proximity, while positive in all models, is clearly sensitive...
Laser speckle reduction techniques for mid-infrared microscopy and stand-off spectroscopy
Furstenberg, Robert; Kendziora, Christopher A.; Breshike, Christopher J.; Nguyen, Viet; McGill, R. Andrew
2017-05-01
Due to their high brightness, infrared (IR) lasers (such as tunable quantum cascade lasers, QCLs) are very attractive illumination sources in both stand-off spectroscopy and micro-spectroscopy. In fact, they are the enabling device for trace-level spectroscopy. However, due to their high coherence as laser beams, QCLs can cause speckle, especially when illuminating a rough surface. This is highly detrimental to the signal-to-noise ratio (SNR) of the collected spectra and can easily negate the gains from using a high brightness source. In most cases, speckle reduction is performed at the expense of optical power. In this paper, we examine several speckle reduction approaches and evaluate them for their ability to reduce speckle contrast while at the same time preserving a high optical throughput. We analyze multi-mode fibers, integrating spheres, and stationary and moving diffusers for their speckle reduction potential. Speckle contrast is measured directly by acquiring beam profiles of the illumination beam or, indirectly, by observing speckle formation from illuminating a rough surface (e.g. an Infragold® coated surface) with an IR micro-bolometer camera. We also report on a novel speckle-reducing device with increased optical throughput. We characterize speckle contrast reduction from spatial, temporal and wavelength averaging for both CW and pulsed QCLs. Examples of the effect of speckle reduction on hyperspectral images in both stand-off and microscopy configurations are given.
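Speckle contrast, the quantity these techniques aim to reduce, is simply the standard deviation of the intensity divided by its mean. A synthetic-data sketch of the textbook behavior: fully developed speckle has contrast near 1, and averaging N independent patterns (as a moving diffuser does within one exposure) reduces it by roughly 1/sqrt(N):

```python
import numpy as np

rng = np.random.default_rng(1)

def speckle_contrast(intensity):
    """Speckle contrast C = sigma_I / <I> of an intensity image."""
    return intensity.std() / intensity.mean()

# Fully developed speckle: intensity is exponentially distributed, C ~ 1.
frame = rng.exponential(scale=1.0, size=(256, 256))
print(speckle_contrast(frame))  # close to 1

# Averaging N independent speckle patterns within one exposure
# lowers the contrast by about 1/sqrt(N).
N = 16
stack = rng.exponential(scale=1.0, size=(N, 256, 256))
print(speckle_contrast(stack.mean(axis=0)))  # close to 1/sqrt(16) = 0.25
```

The same statistic applied to measured beam profiles gives the "direct" speckle-contrast measurement the abstract mentions.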
Flood Water Crossing: Laboratory Model Investigations for Water Velocity Reductions
Directory of Open Access Journals (Sweden)
Kasnon N.
2014-01-01
The occurrence of floods may negatively affect road traffic, making it difficult to keep traffic moving and damaging vehicles, which then become stuck and trigger further traffic problems. High water flow velocities occur when there are no objects capable of diffusing the water velocity on the road surface. The shape, orientation and size of the object to be placed beside the road as a diffuser are important for effective attenuation of the water flow. In order to investigate the water flow, a laboratory experiment was set up and models were constructed to study the flow velocity reduction. The velocity of water before and after passing through the diffuser objects was investigated. This paper focuses on laboratory experiments to determine the flow velocity of the water using sensors before and after passing through the two best diffuser objects chosen from a previous flow pattern experiment.
Survey of techniques for reduction of wind turbine blade trailing edge noise.
Energy Technology Data Exchange (ETDEWEB)
Barone, Matthew Franklin
2011-08-01
Aerodynamic noise from wind turbine rotors leads to constraints in both rotor design and turbine siting. The primary source of aerodynamic noise on wind turbine rotors is the interaction of turbulent boundary layers on the blades with the blade trailing edges. This report surveys concepts that have been proposed for trailing edge noise reduction, with emphasis on concepts that have been tested at either sub-scale or full-scale. These concepts include trailing edge serrations, low-noise airfoil designs, trailing edge brushes, and porous trailing edges. The demonstrated noise reductions of these concepts are cited, along with their impacts on aerodynamic performance. An assessment is made of future research opportunities in trailing edge noise reduction for wind turbine rotors.
Microwave Diffraction Techniques from Macroscopic Crystal Models
Murray, William Henry
1974-01-01
Discusses the construction of a diffractometer table and four microwave models which are built of styrofoam balls with implanted metallic reflecting spheres and designed to simulate the structures of carbon (graphite structure), sodium chloride, tin oxide, and palladium oxide. Included are samples of Bragg patterns and computer-analysis results.…
A new algorithm for degree-constrained minimum spanning tree based on the reduction technique
Institute of Scientific and Technical Information of China (English)
Aibing Ning; Liang Ma; Xiaohua Xiong
2008-01-01
The degree-constrained minimum spanning tree (DCMST) is an NP-hard problem in graph theory. It consists of finding a spanning tree whose vertices do not exceed some given maximum degrees and whose total edge length is minimal. In this paper, novel mathematical properties of the DCMST are presented which lead to a new reduction algorithm that can significantly reduce the size of the problem. An algorithm is also presented for solving the smaller DCMST problem that remains after preprocessing by the reduction algorithm.
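For orientation, a degree-constrained spanning tree can be approximated by a simple Kruskal-style greedy that skips any edge violating the degree bound. This is a generic heuristic sketch on a made-up instance, not the paper's reduction algorithm, and it can fail to find the optimum (or any spanning tree) on adversarial inputs:

```python
def dcmst_greedy(n, edges, max_deg):
    """Greedy heuristic for the degree-constrained MST: scan edges by
    weight, skipping edges that would create a cycle (union-find check)
    or push an endpoint past its degree bound."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x
    deg = [0] * n
    tree, total = [], 0.0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv and deg[u] < max_deg and deg[v] < max_deg:
            parent[ru] = rv
            deg[u] += 1; deg[v] += 1
            tree.append((u, v)); total += w
    return tree, total

# Star-shaped instance: unconstrained, vertex 0 would take degree 4;
# with max_deg=2 the greedy must route around it.
edges = [(1, 0, 1), (1, 0, 2), (1, 0, 3), (1, 0, 4),
         (2, 1, 2), (2, 2, 3), (2, 3, 4)]
tree, cost = dcmst_greedy(5, edges, max_deg=2)
print(tree, cost)
```

Reduction techniques like the paper's aim to shrink the edge set before any such search runs, which matters because exact DCMST solvers scale poorly.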
Dynamic order reduction of thin-film deposition kinetics models: A reaction factorization approach
Energy Technology Data Exchange (ETDEWEB)
Adomaitis, Raymond A., E-mail: adomaiti@umd.edu [Department of Chemical and Biomolecular Engineering, Institute for Systems Research, University of Maryland, College Park, Maryland 20742 (United States)
2016-01-15
A set of numerical tools for the analysis and dynamic dimension reduction of chemical vapor and atomic layer deposition (ALD) surface reaction models is developed in this work. The approach is based on a two-step process where in the first, the chemical species surface balance dynamic equations are factored to effectively decouple the (nonlinear) reaction rates, a process that eliminates redundant dynamic modes and that identifies conserved quantities. If successful, the second phase is implemented to factor out redundant dynamic modes when species relatively minor in concentration are omitted; if unsuccessful, the technique points to potential model structural problems. An alumina ALD process is used for an example consisting of 19 reactions and 23 surface and gas-phase species. Using the approach developed, the model is reduced by nineteen modes to a four-dimensional dynamic system without any knowledge of the reaction rate values. Results are interpreted in the context of potential model validation studies.
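The first factorization step, identifying conserved quantities and the true dynamic dimension from the species balances, can be sketched generically with the left null space of the stoichiometric matrix. This toy two-reaction mechanism stands in for the paper's 19-reaction alumina ALD model:

```python
import numpy as np

# Stoichiometric matrix S (species x reactions) for a toy mechanism
# A -> B, B -> C, so the species balances are dx/dt = S r(x).
S = np.array([[-1,  0],
              [ 1, -1],
              [ 0,  1]], dtype=float)

# Dynamic dimension = rank(S); each left-null vector y (y^T S = 0)
# gives a conserved quantity y^T x, a redundant mode to factor out.
# Note this needs no knowledge of the reaction-rate values r(x).
U, s, Vt = np.linalg.svd(S)
rank = int(np.sum(s > 1e-10))
conserved = U[:, rank:].T  # basis of the left null space

print("dynamic dimension:", rank)           # 2 of the 3 species balances
print("conserved combination:", conserved)  # proportional to [1, 1, 1]
```

Here the single conserved combination is the total site balance A+B+C, so a 3-species model reduces to a 2-dimensional dynamic system, mirroring (in miniature) the 23-species-to-4-mode reduction described above.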
Validation technique using mean and variance of kriging model
Energy Technology Data Exchange (ETDEWEB)
Kim, Ho Sung; Jung, Jae Jun; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)
2007-07-01
Rigorously validating the accuracy of a metamodel is an important research area in metamodeling techniques. A leave-k-out cross-validation technique not only requires considerable computational cost but also cannot quantitatively measure the fidelity of the metamodel. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even if the kriging model is still inaccurate. In this research, we propose a new validation technique using the average and variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique is more efficient and accurate than the cross-validation technique because it explicitly integrates the kriging model to achieve an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a trend similar to the root mean squared error, so that it can be used as a stop criterion for sequential sampling.
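The key ingredient of such a criterion is that kriging supplies both a predictive mean and a predictive variance. A minimal Gaussian-process kriging sketch (synthetic 1-D data; a unit-amplitude squared-exponential correlation is assumed, not the paper's model) illustrates both quantities:

```python
import numpy as np

def kriging(x_train, y_train, x_test, length=1.0, nugget=1e-8):
    """Simple kriging (GP) predictor: posterior mean and variance at
    the test points, under a squared-exponential correlation."""
    def k(a, b):
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(x_train, x_train) + nugget * np.eye(len(x_train))
    Ks = k(x_test, x_train)
    alpha = np.linalg.solve(K, y_train)
    mean = Ks @ alpha
    var = 1.0 - np.einsum('ij,ij->i', Ks, np.linalg.solve(K, Ks.T).T)
    return mean, var

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.sin(x)
xt = np.linspace(0.0, 3.0, 7)
mean, var = kriging(x, y, xt)
# The predictive variance collapses at the samples and grows between
# them; a mean-and-variance stopping criterion exploits exactly this.
print(np.round(var, 3))
```

A sequential sampler would keep adding points where this variance is largest and stop once the variance-based criterion falls below a tolerance.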
DEFF Research Database (Denmark)
Bahramzy, Pevand; Azzinnari, Leonardo; Jakobsen, Kaj Bjarne;
2009-01-01
This paper presents a method to improve the HAC of slide mobile phones. Although the technique is demonstrated with a given commercial phone and for the PCS band only, it could be adapted to other phones and bands as well. Simple antenna design and matching techniques can allow multi-band operation...... as well as high dimensional flexibility. In the investigated cases, the two ILPEs that are connected to the DPWB result in a reduction in excess of 45.9% for passive NF and of 40.3% for active NF at the resonant frequency of the ILPEs....
Comparative Analysis of Vehicle Make and Model Recognition Techniques
Directory of Open Access Journals (Sweden)
Faiza Ayub Syed
2014-03-01
Vehicle Make and Model Recognition (VMMR) has emerged as a significant element of vision-based systems because of its application in access control systems, traffic control and monitoring systems, security systems and surveillance systems, etc. So far a number of techniques have been developed for vehicle recognition. Each technique follows a different methodology and classification approach. The evaluation results highlight the recognition technique with the highest accuracy level. In this paper we describe the working of various vehicle make and model recognition techniques and compare these techniques on the basis of methodology, principles, classification approach, classifier and level of recognition. After comparing these factors we concluded that Locally Normalized Harris Corner Strengths (LHNS) performs best as compared to other techniques. LHNS uses Bayes and K-NN classification approaches for vehicle classification. It extracts information from the frontal view of vehicles for vehicle make and model recognition.
Maucec, M
2005-01-01
Monte Carlo simulations for nuclear logging applications are considered to be highly demanding transport problems. In this paper, the implementation of weight-window variance reduction schemes in a 'manual' fashion to improve the efficiency of calculations for a neutron logging tool is presented. Th
Reduction of Under-Determined Linear Systems by Sparse Block Matrix Technique
DEFF Research Database (Denmark)
Tarp-Johansen, Niels Jacob; Poulsen, Peter Noe; Damkilde, Lars
1996-01-01
-determined equilibrium equation restrictions in an LP-problem. A significant reduction of the computer time spent on solving the LP-problem is achieved if the equilibrium equations are reduced before going into the optimization procedure. Experience has shown that for some structures one must apply full pivoting to ensure...
Teaching the Basics: Development and Validation of a Distal Radius Reduction and Casting Model.
Seeley, Mark A; Fabricant, Peter D; Lawrence, J Todd R
2017-09-01
Approximately one-third of reduced pediatric distal radius fractures redisplace, resulting in further treatment. Two major modifiable risk factors for loss of reduction are reduction adequacy and cast quality. Closed reduction and immobilization of distal radius fractures is an Accreditation Council for Graduate Medical Education residency milestone. Teaching and assessing competency could be improved with a life-like simulation training tool. Our goal was to develop and validate a realistic distal radius fracture reduction and casting simulator as determined by (1) a questionnaire regarding the "realism" of the model and (2) the quantitative assessments of reduction time, residual angulation, and displacement. A distal radius fracture model was created with radiopaque bony segments and articulating elbows and shoulders. Simulated periosteum and internal deforming forces required proper reduction and casting techniques to achieve and maintain reduction. The forces required were estimated through an iterative process through feedback from experienced clinicians. Embedded monofilaments allowed for quantitative assessment of residual displacement and angulation through the use of fluoroscopy. Subjects were asked to perform closed reduction and apply a long arm fiberglass cast. Primary performance variables assessed included reduction time, residual angulation, and displacement. Secondary performance variables consisted of number of fluoroscopic images, casting time, and cast index (defined as the ratio of the internal width of the forearm cast in the sagittal plane to the internal width in the coronal plane at the fracture site). Subject grading was performed by two blinded reviewers. Interrater reliability was nearly perfect across all measurements (intraclass correlation coefficient range, 0.94-0.99), thus disagreements in measurements were handled by averaging the assessed values. After completion the participants answered a Likert-based questionnaire regarding the
Isma'eel, Hussain A; Sakr, George E; Almedawar, Mohamad M; Fathallah, Jihan; Garabedian, Torkom; Eddine, Savo Bou Zein; Nasreddine, Lara; Elhajj, Imad H
2015-06-01
High dietary salt intake is directly linked to hypertension and cardiovascular diseases (CVDs). Predicting behaviors regarding salt intake habits is vital to guide interventions and increase their effectiveness. We aim to compare the accuracy of an artificial neural network (ANN) based tool that predicts behavior from key knowledge questions along with clinical data in a high cardiovascular risk cohort relative to the least-squares models (LSM) method. We collected knowledge, attitude and behavior data on 115 patients. A behavior score was calculated to classify patients' behavior towards reducing salt intake. The accuracy comparison between ANN and regression analysis was calculated using the bootstrap technique with 200 iterations. Starting from a 69-item questionnaire, a reduced model was developed that included eight knowledge items found to result in the highest accuracy of 62% CI (58-67%). The best prediction accuracy in the full and reduced models was attained by ANN at 66% and 62%, respectively, compared to full and reduced LSM at 40% and 34%, respectively. The average relative increase in accuracy, over all iterations, in the full and reduced models is 82% and 102%, respectively. Using ANN modeling, we can predict salt reduction behaviors with 66% accuracy. The statistical model has been implemented in an online calculator and can be used in clinics to estimate the patient's behavior. This will help implementation in future research to further prove the clinical utility of this tool in guiding therapeutic salt reduction interventions in high cardiovascular risk individuals.
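The bootstrap accuracy comparison described above can be sketched as follows. The patient labels and the two models' error rates are simulated stand-ins (115 patients, ~66% vs ~40% correct), not the study's data:

```python
import numpy as np

rng = np.random.default_rng(42)

def bootstrap_accuracy(y_true, y_pred, iterations=200):
    """Bootstrap the classification accuracy: resample (truth, prediction)
    pairs with replacement and collect each resample's accuracy."""
    n = len(y_true)
    accs = np.empty(iterations)
    for i in range(iterations):
        idx = rng.integers(0, n, size=n)
        accs[i] = np.mean(y_true[idx] == y_pred[idx])
    return accs.mean(), np.percentile(accs, [2.5, 97.5])

# Simulated predictions for 115 patients from two hypothetical models.
y_true = rng.integers(0, 2, size=115)
ann_pred = np.where(rng.random(115) < 0.66, y_true, 1 - y_true)  # ~66% correct
lsm_pred = np.where(rng.random(115) < 0.40, y_true, 1 - y_true)  # ~40% correct

results = {}
for name, pred in [("ANN", ann_pred), ("LSM", lsm_pred)]:
    mean_acc, ci = bootstrap_accuracy(y_true, pred)
    results[name] = mean_acc
    print(f"{name}: mean accuracy {mean_acc:.2f}, 95% CI {ci.round(2)}")
```

Comparing the two bootstrap distributions, rather than single accuracy figures, is what lets the authors attach confidence intervals to the ANN-versus-LSM gap.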
Parada, Stephen A; Makani, Amun; Stadecker, Monica J; Warner, Jon J P
2015-10-01
Proximal humerus fractures are common injuries that can require operative treatment. Different operative techniques are available, but the hallmark of fixation for 3- and 4-part fractures is a locking-plate-and-screw construct. Despite advances in this technology, obtaining anatomical reduction and fracture union can be difficult, and complications (eg, need for revision) are not uncommon. These issues can be addressed by augmenting the fixation with an endosteally placed fibular allograft. Although biomechanical and clinical results have been good, the technique can lead to difficulties in future revision to arthroplasty, a common consequence of failed open reduction and internal fixation. The technique described, an alternative to placing a long endosteal bone graft, uses a trapezoidal, individually sized pedestal of allograft femoral head to facilitate the reduction and healing of the humeral head and tuberosity fragments in a displaced 3- or 4-part fracture of the proximal humerus. It can be easily incorporated with any plate-and-screw construct and does not necessitate placing more than 1 cm of bone into the humeral intramedullary canal, limiting the negative effects on any future revision to arthroplasty.
Directory of Open Access Journals (Sweden)
Hyeong-Ho Park, Xin Zhang, Seon-Yong Hwang, Sang Hyun Jung, Semin Kang, Hyun-Beom Shin, Ho Kwan Kang, Hyung-Ho Park, Ross H Hill and Chul Ki Ko
2012-01-01
Full Text Available We present a simple size reduction technique for fabricating 400 nm zinc oxide (ZnO) architectures using a silicon master containing only microscale architectures. In this approach, the overall fabrication, from the master to the molds and the final ZnO architectures, features cost-effective UV photolithography instead of electron beam lithography or deep-UV photolithography. A photosensitive Zn-containing sol–gel precursor was used to imprint architectures by direct UV-assisted nanoimprint lithography (UV-NIL). The resulting Zn-containing architectures were then converted to ZnO architectures with reduced feature sizes by thermal annealing at 400 °C for 1 h. The imprinted and annealed ZnO architectures were also used as new masters for the size reduction technique. ZnO pillars of 400 nm diameter were obtained from a silicon master with pillars of 1000 nm diameter by simply repeating the size reduction technique. The photosensitivity and contrast of the Zn-containing precursor were measured as 6.5 J cm⁻² and 16.5, respectively. Interesting complex ZnO patterns, with both microscale pillars and nanoscale holes, were demonstrated by the combination of dose-controlled UV exposure and a two-step UV-NIL.
Energy Technology Data Exchange (ETDEWEB)
Brown, R.A.; Lips, H.; Kuby, W.C.
1989-03-01
The report describes the results of a design and experimental program to develop a post-combustion NOx control technique for gas-fired I.C. engines and gas turbines as applied to cogeneration. Emissions and performance data of both rich-burn and lean-burn engines were used to develop a conceptual reburner design to be placed between an engine and a waste heat boiler. This reburner design was then modeled for testing in a 100,000 Btu/hr subscale test facility. Parametric testing achieved 50 percent NOx reduction at a fuel fraction of 30 percent for rich-burn and mid-O2 range engine exhausts. Lean-burn NOx reductions were limited to 35 percent at the same fuel fraction. With the addition of a NiO catalyst in the rich zone, NOx reductions of up to 90 percent were achieved in the subscale testing. A full-scale system was designed, fabricated, and tested on a 150 kW Caterpillar engine. NOx reductions of 40 to 50 percent were achieved without a catalyst; reductions of up to 75 percent were achieved with a NiO catalyst.
Metamaterials modelling, fabrication and characterisation techniques
DEFF Research Database (Denmark)
Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei
Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited but epsilon-near-zero and sub-unitary refraction index are also...... parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of metamaterials research field that we deal with at our department. From the modelling part, various approaches for determining the value of the refractive index...
Metamaterials modelling, fabrication, and characterisation techniques
DEFF Research Database (Denmark)
Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei
2012-01-01
Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited but epsilon-near-zero and sub-unitary refraction index are also...... parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of metamaterials research field that we deal with at our department. From the modelling part, we will present our approach for determining the field enhancement in slits...
New Approach to Low-Power & Leakage Current Reduction Technique for CMOS Circuit Design
Directory of Open Access Journals (Sweden)
Sujata Prajapati
2014-02-01
Full Text Available Leakage power dissipation has become a major portion of total power consumption in integrated devices and is expected to grow exponentially in the next decade as per the International Technology Roadmap for Semiconductors (ITRS). This directly affects battery-operated devices, which have long idle times. Scaling down the threshold voltage has tremendously increased the subthreshold leakage current, making static power dissipation very high. Several techniques have been proposed to overcome this high leakage power dissipation. A comprehensive survey and analysis of various leakage power minimization techniques is presented in this paper. Of the available techniques, eight are considered for the analysis, namely Multi-Threshold CMOS (MTCMOS), Super Cut-off CMOS (SCCMOS), Forced Transistor Stacking (FTS), Sleepy Stack (SS), Sleepy Keeper (SK), Dual Stack (DS), and LECTOR. From the results, it is observed that the LECTOR technique produces lower power dissipation than the other techniques due to the ability of power gating.
Vehicle Lightweighting: Mass Reduction Spectrum Analysis and Process Cost Modeling
Energy Technology Data Exchange (ETDEWEB)
Mascarin, Anthony [IBIS Associates, Inc., Waltham, MA (United States); Hannibal, Ted [IBIS Associates, Inc., Waltham, MA (United States); Raghunathan, Anand [Energetics Inc., Columbia, MD (United States); Ivanic, Ziga [Energetics Inc., Columbia, MD (United States); Clark, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2016-03-01
The U.S. Department of Energy’s Vehicle Technologies Office, Materials area, commissioned a study to model and assess the manufacturing economics of alternative design and production strategies for a series of lightweight vehicle concepts. The first two phases of this effort examined combinations of strategies aimed at achieving targets of 40% and 45% mass reduction relative to a standard North American midsize passenger sedan at an effective cost of $3.42 per pound (lb) saved. These results were reported in the Idaho National Laboratory report INL/EXT-14-33863, Vehicle Lightweighting: 40% and 45% Weight Savings Analysis: Technical Cost Modeling for Vehicle Lightweighting, published in March 2015. The data for these strategies were drawn from many sources, including Lotus Engineering Limited and FEV, Inc. lightweighting studies, the U.S. Department of Energy-funded Vehma International of America, Inc./Ford Motor Company Multi-Material Lightweight Prototype Vehicle Demonstration Project, the Aluminum Association Transportation Group, many United States Council for Automotive Research/United States Automotive Materials Partnership LLC lightweight materials programs, and IBIS Associates, Inc.'s decades of experience in automotive lightweighting and materials substitution analyses.
Teleparallel Conformal Invariant Models induced by Kaluza-Klein Reduction
Geng, Chao-Qiang
2016-01-01
We study the extensions of teleparallism in the Kaluza-Klein (KK) scenario by writing the analogous form to the torsion scalar $T_{\\text{NGR}}$ in terms of the corresponding antisymmetric tensors, given by $T_{\\text{NGR}} = a\\,T_{ijk} \\, T^{ijk} + b\\,T_{ijk} \\,T^{kji} + c\\,T^{j}{}_{ji} \\, T^{k}{}_{k}{}^{i}$, in the four-dimensional New General Relativity (NGR) with arbitrary coefficients $a$, $b$ and $c$. After the KK dimensional reduction, the Lagrangian in the Einstein-frame can be realized by taking $2a+b+c=0$ with the ghost-free condition $c\\leq0$ for the one-parameter family of teleparallelism. We demonstrate that the conformal invariant gravity models can be constructed by the requirement of $2a+b+4c=0$ or $2a+b=0$. In particular, this conformal gravity is described on the Weyl-Cartan geometry $Y_4$ with the ghost-free condition $c>0$. We also consider the weak field approximation and discuss the non-minimal coupled term of the scalar current and torsion vector. For the conformal invariant models with $...
Models and Techniques for Proving Data Structure Lower Bounds
DEFF Research Database (Denmark)
Larsen, Kasper Green
In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in both the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear space static data structures...... for range reporting problems in the pointer machine and the I/O-model. With this technique, we tighten the gap between the known upper bound and lower bound for the most fundamental range reporting problem, orthogonal range reporting.
Directory of Open Access Journals (Sweden)
Shereena V. B
2015-04-01
Full Text Available The aim of this paper is to present a comparative study of two linear dimension reduction methods, namely PCA (Principal Component Analysis) and LDA (Linear Discriminant Analysis). The main idea of PCA is to transform the high dimensional input space onto the feature space where the maximal variance is displayed. The feature selection in traditional LDA is obtained by maximizing the difference between classes and minimizing the distance within classes. PCA finds the axes with maximum variance for the whole data set, whereas LDA tries to find the axes for best class separability. The proposed method is experimented over a general image database using Matlab. The performance of these systems has been evaluated by Precision and Recall measures. Experimental results show that the PCA based dimension reduction method gives better performance, in terms of higher precision and recall values, with lower computational complexity than the LDA based method.
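The variance-maximizing projection at the heart of PCA can be sketched in a few lines of numpy. This is an illustrative sketch, not the paper's Matlab implementation, and LDA is omitted for brevity:

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto the k directions of maximal variance."""
    Xc = X - X.mean(axis=0)
    # principal axes = right singular vectors of the centred data matrix
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                     # n_samples x k component scores

rng = np.random.default_rng(1)
# toy "image features": 100 samples in 20-D with variance concentrated in 3 directions
X = rng.normal(size=(100, 3)) @ rng.normal(size=(3, 20)) \
    + 0.01 * rng.normal(size=(100, 20))
Z = pca(X, 3)
print(Z.shape)   # (100, 3)
```

The component scores are mutually uncorrelated by construction, which is the property exploited when the reduced features feed a retrieval or classification stage.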
Sarhadi, Ali; Burn, Donald H.; Yang, Ge; Ghodsi, Ali
2017-02-01
One of the main challenges in climate change studies is accurate projection of the global warming impacts on the probabilistic behaviour of hydro-climate processes. Due to the complexity of climate-associated processes, identification of predictor variables from high dimensional atmospheric variables is considered a key factor for improvement of climate change projections in statistical downscaling approaches. For this purpose, the present paper adopts a new approach of supervised dimensionality reduction, which is called "Supervised Principal Component Analysis (Supervised PCA)", to regression-based statistical downscaling. This method is a generalization of PCA, extracting a sequence of principal components of atmospheric variables, which have maximal dependence on the response hydro-climate variable. To capture the nonlinear variability between hydro-climatic response variables and predictors, a kernelized version of Supervised PCA is also applied for nonlinear dimensionality reduction. The effectiveness of the Supervised PCA methods in comparison with some state-of-the-art algorithms for dimensionality reduction is evaluated in relation to the statistical downscaling process of precipitation in a specific site using two soft computing nonlinear machine learning methods, Support Vector Regression and Relevance Vector Machine. The results demonstrate a significant improvement with the Supervised PCA methods in terms of performance accuracy.
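A minimal sketch of the Supervised PCA idea, in the HSIC-style formulation of Barshan et al.: rather than maximizing variance alone, it extracts the directions of the predictors with maximal dependence on the response. The data and dimensions below are synthetic, not the downscaling data used in the study:

```python
import numpy as np

def supervised_pca(X, y, k):
    """Supervised PCA: directions of X with maximal (HSIC-style) dependence
    on the response y.  X: n x d predictor matrix, y: length-n response."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    L = np.outer(y, y)                         # linear kernel on the response
    Q = X.T @ H @ L @ H @ X                    # d x d dependence matrix
    w, V = np.linalg.eigh(Q)
    U = V[:, np.argsort(w)[::-1][:k]]          # top-k eigenvectors
    return X @ U                               # n x k projected predictors

# toy "downscaling" set-up: only the first atmospheric variable drives y
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X[:, 0] + 0.1 * rng.normal(size=200)
Z = supervised_pca(X, y, 2)
print(abs(np.corrcoef(Z[:, 0], y)[0, 1]))   # close to 1
```

Replacing the linear kernel `L` with a nonlinear kernel on `y` gives the kernelized variant mentioned in the abstract.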
Energy Technology Data Exchange (ETDEWEB)
Chang, D.H.; Hiss, S.; Borggrefe, J.; Bunck, A.C.; Maintz, D.; Hackenbroch, M. [Cologne University Hospital (Germany). Dept. of Radiology; Mueller, D. [Clinical Science Philips Healthcare GmbH, Munich (Germany). Clinical Science; Hellmich, M. [Cologne University Hospital (Germany). Inst. of Medical Statistics, Informatics and Epidemiology
2015-10-15
To compare the radiation doses and image qualities of computed tomography (CT)-guided interventions using a standard-dose CT (SDCT) protocol with filtered back projection and a low-dose CT (LDCT) protocol with both filtered back projection and iterative reconstruction. Image quality and radiation doses (dose-length product and CT dose index) were retrospectively reviewed for 130 patients who underwent CT-guided lung interventions. SDCT at 120 kVp and automatic mA modulation and LDCT at 100 kVp and a fixed exposure were each performed for 65 patients. Image quality was objectively evaluated as the contrast-to-noise ratio and subjectively by two radiologists for noise impression, sharpness, artifacts and diagnostic acceptability on a four-point scale. The groups did not significantly differ in terms of diagnostic acceptability and complication rate. LDCT yielded a median 68.6 % reduction in the radiation dose relative to SDCT. In the LDCT group, iterative reconstruction was superior to filtered back projection in terms of noise reduction and subjective image quality. The groups did not differ in terms of beam hardening artifacts. LDCT was feasible for all procedures and yielded a more than two-thirds reduction in radiation exposure while maintaining overall diagnostic acceptability, safety and precision. The iterative reconstruction algorithm is preferable according to the objective and subjective image quality analyses.
Energy Technology Data Exchange (ETDEWEB)
Sampson, Andrew; Le, Yi; Williamson, Jeffrey F. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)]
2012-02-15
Purpose: To demonstrate the potential of correlated sampling Monte Carlo (CMC) simulation to improve the calculation efficiency for permanent seed brachytherapy (PSB) implants without loss of accuracy. Methods: CMC was implemented within an in-house MC code family (PTRAN) and used to compute 3D dose distributions for two patient cases: a clinical PSB postimplant prostate CT imaging study and a simulated post-lumpectomy breast PSB implant planned on a screening dedicated breast cone-beam CT patient exam. CMC tallies the dose difference, ΔD, between highly correlated histories in homogeneous and heterogeneous geometries. The heterogeneous geometry histories were derived from photon collisions sampled in a geometrically identical but purely homogeneous medium geometry, by altering their particle weights to correct for bias. The prostate case consisted of 78 Model-6711 ¹²⁵I seeds. The breast case consisted of 87 Model-200 ¹⁰³Pd seeds embedded around a simulated lumpectomy cavity. Systematic and random errors in CMC were unfolded using low-uncertainty uncorrelated MC (UMC) as the benchmark. CMC efficiency gains, relative to UMC, were computed for all voxels, and the mean was classified in regions that received minimum doses greater than 20%, 50%, and 90% of D₉₀, as well as for various anatomical regions. Results: Systematic errors in CMC relative to UMC were less than 0.6% for 99% of the voxels and 0.04% for 100% of the voxels for the prostate and breast cases, respectively. For a 1 x 1 x 1 mm³ dose grid, efficiency gains were realized in all structures, with 38.1- and 59.8-fold average gains within the prostate and breast clinical target volumes (CTVs), respectively. Greater than 99% of the voxels within the prostate and breast CTVs experienced an efficiency gain. Additionally, it was shown that efficiency losses were confined to low dose regions while the largest gains were located where little difference exists between the homogeneous and
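The variance-reduction principle behind correlated sampling can be illustrated with a toy one-dimensional example: when the homogeneous and heterogeneous estimates share the same random histories, their difference has far lower variance than with independent histories. The attenuation profiles are invented stand-ins for the two geometries and have no relation to the PTRAN code:

```python
import numpy as np

rng = np.random.default_rng(2)

# two "geometries": slightly different attenuation profiles on [0, 1]
f_hom = lambda x: np.exp(-1.0 * x)
f_het = lambda x: np.exp(-1.1 * x)   # perturbed medium

def delta_estimates(n, correlated, reps=400):
    """Repeated MC estimates of the dose difference integral."""
    out = np.empty(reps)
    for r in range(reps):
        x1 = rng.random(n)
        x2 = x1 if correlated else rng.random(n)   # shared histories <-> CMC
        out[r] = np.mean(f_het(x1)) - np.mean(f_hom(x2))
    return out

n = 1000
var_unc = delta_estimates(n, correlated=False).var()
var_cor = delta_estimates(n, correlated=True).var()
print(f"efficiency gain ~ {var_unc / var_cor:.0f}x")
```

Because the two integrands are nearly identical, the correlated difference estimator cancels most of the shared noise, which is exactly the source of the large voxel-wise efficiency gains reported above.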
Berdanier, Reid A.; Key, Nicole L.
2016-03-01
The single slanted hot-wire technique has been used extensively as a method for measuring three velocity components in turbomachinery applications. The cross-flow orientation of probes with respect to the mean flow in rotating machinery results in detrimental prong interference effects when using multi-wire probes. As a result, the single slanted hot-wire technique is often preferred. Typical data reduction techniques solve a set of nonlinear equations determined by curve fits to calibration data. A new method is proposed which utilizes a look-up table method applied to a simulated triple-wire sensor with application to turbomachinery environments having subsonic, incompressible flows. Specific discussion regarding corrections for temperature and density changes present in a multistage compressor application is included, and additional consideration is given to the experimental error which accompanies each data reduction process. Hot-wire data collected from a three-stage research compressor with two rotor tip clearances are used to compare the look-up table technique with the traditional nonlinear equation method. The look-up table approach yields velocity errors of less than 5 % for test conditions deviating by more than 20 °C from calibration conditions (on par with the nonlinear solver method), while requiring less than 10 % of the computational processing time.
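A toy version of the look-up table inversion is shown below. The wire response law and its coefficients are hypothetical, chosen only to mimic a King's-law-like calibration; the paper's actual simulated triple-wire reduction is more involved:

```python
import numpy as np

def wire_voltage(U, theta):
    """Hypothetical slanted-wire response: King's-law-style bridge voltage
    as a function of velocity U (m/s) and flow angle theta (rad)."""
    Ueff = U * np.sqrt(np.cos(theta) ** 2 + 0.04 * np.sin(theta) ** 2)
    return np.sqrt(1.2 + 0.9 * Ueff ** 0.45)

# Build the look-up table once over the calibration ranges.
Us = np.linspace(5.0, 60.0, 400)
thetas = np.radians(np.linspace(-30.0, 30.0, 200))
Ug, Tg = np.meshgrid(Us, thetas, indexing="ij")
table = wire_voltage(Ug, Tg)

def lookup(E_measured, theta_known):
    """Invert voltage -> velocity by nearest table entry (no nonlinear solve)."""
    j = np.abs(thetas - theta_known).argmin()
    i = np.abs(table[:, j] - E_measured).argmin()
    return Us[i]

E = wire_voltage(25.0, np.radians(10.0))
print(lookup(E, np.radians(10.0)))   # ~25 m/s
```

The table is built once from calibration data, so each measurement reduces to an index search rather than an iterative nonlinear solve, which is where the order-of-magnitude savings in processing time comes from.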
A novel model reduction method based on balanced truncation
Institute of Scientific and Technical Information of China (English)
[No author listed]
2009-01-01
The main goal of this paper is to construct an efficient reduced-order model (ROM) for unsteady aerodynamic force modeling. Balanced truncation (BT) is presented to address the problem. For the conventional BT method, it is necessary to compute exact controllability and observability grammians. Although it is relatively straightforward to compute these matrices in a control setting where the system order is moderate, the technique does not extend easily to high order systems. In response to the challenge, the snapshots-BT (S-BT) method is introduced for high order system ROM construction. The idea of the S-BT method is that snapshots of the primary and dual systems approximate the controllability and observability matrices in the frequency domain. The method has been demonstrated for 3 high order systems: (1) unsteady motion of a two-dimensional airfoil in response to gust, (2) the AGARD 445.6 wing aeroelastic system, and (3) the BACT (benchmark active control technology) standard aeroservoelastic system. All the results indicate that the S-BT based ROM is efficient and accurate enough to provide a powerful tool for unsteady aerodynamic force modeling.
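The conventional BT step that S-BT approximates for high-order systems, computing exact Gramians and balancing them, can be sketched with scipy. This is a generic square-root BT on a random stable system, not the paper's aeroelastic models:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system (A, B, C)."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)     # controllability Gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)   # observability Gramian
    Lc = cholesky((Wc + Wc.T) / 2, lower=True)      # symmetrise numerically
    Lo = cholesky((Wo + Wo.T) / 2, lower=True)
    U, s, Vt = svd(Lo.T @ Lc)                       # s = Hankel singular values
    S = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S                           # balancing projections
    Ti = S @ U[:, :r].T @ Lo.T
    return Ti @ A @ T, Ti @ B, C @ T, s

rng = np.random.default_rng(0)
n = 10
A = rng.normal(size=(n, n))
A -= (np.abs(np.linalg.eigvals(A).real).max() + 1.0) * np.eye(n)  # make stable
B = rng.normal(size=(n, 1))
C = rng.normal(size=(1, n))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, 4)
dc_full = (C @ np.linalg.solve(-A, B)).item()
dc_red = (Cr @ np.linalg.solve(-Ar, Br)).item()
print(abs(dc_full - dc_red), 2 * hsv[4:].sum())   # error within the BT bound
```

The two Lyapunov solves are exactly the O(n³) bottleneck that makes this impractical for large n; S-BT sidesteps them by approximating the Gramian factors with frequency-domain snapshots.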
Technique Based on Image Pyramid and Bayes Rule for Noise Reduction in Unsupervised Change Detection
Institute of Scientific and Technical Information of China (English)
LI Zhi-qiang; HUO Hong; FANG Tao; ZHU Ju-lian; GE Wei-li
2009-01-01
In this paper, a technique based on image pyramids and Bayes' rule for reducing noise effects in unsupervised change detection is proposed. By using a Gaussian pyramid to process the two multitemporal images respectively, two image pyramids are constructed. The difference pyramid images are obtained by point-by-point subtraction between the same-level images of the two image pyramids. By resizing all difference pyramid images to the size of the original multitemporal image and then applying a product operator among them, a map similar to the difference image is obtained. The difference image is generated by point-by-point subtraction between the two multitemporal images directly. Finally, Bayes' rule is used to distinguish the changed pixels. Both synthetic and real data sets are used to evaluate the performance of the proposed technique. Experimental results show that the map from the proposed technique is more robust to noise than the difference image.
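The pyramid-product construction described above can be sketched with numpy. This is a simplified stand-in: a 3x3 binomial blur for the Gaussian pyramid and nearest-neighbour resizing, with the final Bayes decision step omitted:

```python
import numpy as np

def blur_downsample(img):
    """One Gaussian-pyramid level: separable binomial blur, then 2x decimation."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 0, img)
    img = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return img[::2, ::2]

def upsample_to(img, shape):
    """Nearest-neighbour resize back to the original size (crude stand-in)."""
    ry = (np.arange(shape[0]) * img.shape[0]) // shape[0]
    rx = (np.arange(shape[1]) * img.shape[1]) // shape[1]
    return img[np.ix_(ry, rx)]

def pyramid_difference_map(t1, t2, levels=3):
    """Product of per-level difference images, as in the abstract."""
    prod = np.ones_like(t1)
    a, b = t1, t2
    for _ in range(levels):
        prod = prod * np.abs(upsample_to(a - b, t1.shape))
        a, b = blur_downsample(a), blur_downsample(b)
    return prod

# toy demo: an 8x8 changed block vs an isolated noise pixel
t1 = np.zeros((32, 32))
t2 = t1.copy()
t2[8:16, 8:16] = 1.0      # genuine change
t2[28, 4] = 1.0           # single-pixel noise
m = pyramid_difference_map(t1, t2)
print(m[12, 12] / m[28, 4])   # the change region dominates the noise pixel
```

Isolated noise survives only the finest level, so the cross-level product suppresses it while coherent changes persist at every scale.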
New Tone Reservation Technique for Peak to Average Power Ratio Reduction
Wilharm, Joachim; Rohling, Hermann
2014-09-01
In Orthogonal Frequency Division Multiplexing (OFDM), the transmit signals have a highly fluctuating, non-constant envelope, which is a technical challenge for the High Power Amplifier (HPA). Without any signal processing procedures, the amplitude peaks of the transmit signal will be clipped by the HPA, resulting in out-of-band radiation and in bit error rate (BER) performance degradation. The classical Tone Reservation (TR) technique calculates a correction signal in an iterative way to reduce the amplitude peaks. However, this step leads to a high computational complexity. Therefore, in this paper an alternative TR technique is proposed. In this case a predefined signal pattern is shifted to any peak position inside the transmit signal, thereby reducing all amplitude peaks. This new procedure is able to outperform the classical TR technique and has a much lower computational complexity.
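The PAPR that both TR variants try to reduce is straightforward to compute for an OFDM symbol. This is a minimal numpy sketch; the subcarrier count and modulation are arbitrary choices, and the correction-signal step itself is not shown:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

rng = np.random.default_rng(4)
N = 256                                                # subcarriers
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)   # QPSK symbols
x = np.fft.ifft(X) * np.sqrt(N)                        # OFDM time-domain signal
print(f"PAPR = {papr_db(x):.1f} dB")
```

In TR, a subset of subcarriers carries no data and is instead modulated with a correction signal chosen to cancel the largest peaks of `x`; the proposed method shifts one precomputed cancellation pattern to each peak position instead of iterating.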
On Projection-Based Model Reduction of Biochemical Networks Part I: The Deterministic Case
Sootla, Aivar; Anderson, James
2014-01-01
This paper addresses the problem of model reduction for dynamical system models that describe biochemical reaction networks. Inherent in such models are properties such as stability, positivity and network structure. Ideally these properties should be preserved by model reduction procedures, although traditional projection based approaches struggle to do this. We propose a projection based model reduction algorithm which uses generalised block diagonal Gramians to preserve structure and posit...
Adaptive rational block Arnoldi methods for model reductions in large-scale MIMO dynamical systems
Directory of Open Access Journals (Sweden)
Khalide Jbilou
2016-04-01
Full Text Available In recent years, great interest has been shown in Krylov subspace techniques applied to model order reduction of large-scale dynamical systems. Special interest has been devoted to single-input single-output (SISO) systems by using moment matching techniques based on Arnoldi or Lanczos algorithms. In this paper, we consider multiple-input multiple-output (MIMO) dynamical systems and introduce the rational block Arnoldi process to design low order dynamical systems that are close in some sense to the original MIMO dynamical system. Rational Krylov subspace methods are based on the choice of suitable shifts that are selected a priori or adaptively. In this paper, we propose an adaptive selection of those shifts and show the efficiency of this approach in our numerical tests. We also give some new block Arnoldi-like relations that are used to propose an upper bound for the norm of the error on the transfer function.
Efficient Analysis of Structures with Rotatable Elements Using Model Order Reduction
Directory of Open Access Journals (Sweden)
G. Fotyga
2016-04-01
Full Text Available This paper presents a novel full-wave technique which allows for a fast 3D finite element analysis of waveguide structures containing rotatable tuning elements of arbitrary shapes. Rotation of these elements changes the resonant frequencies of the structure, which can be used in the tuning process to obtain the S-characteristics desired for the device. For fast computation of the response as the tuning elements are rotated, the 3D finite element method is supported by multilevel model-order reduction, orthogonal projection at the boundaries of macromodels, and an operation called macromodel cloning. All the time-consuming steps are performed only once in the preparatory stage. In the tuning stage, only small parts of the domain are updated, by means of a special meshing technique. In effect, the tuning process is performed extremely rapidly. The results of the numerical experiments confirm the efficiency and validity of the proposed method.
Lei, Jinzhi; Yvinec, Romain; Zhuge, Changjing
2012-01-01
This paper considers adiabatic reduction in a model of stochastic gene expression with bursting transcription. We prove that an adiabatic reduction can be performed in a stochastic slow/fast system with a jump Markov process. In the gene expression model, the production of mRNA (the fast variable) is assumed to follow a compound Poisson process (the phenomenon called bursting in molecular biology) and the production of protein (the slow variable) is linear as a function of mRNA. When the dynamics of mRNA is assumed to be faster than the protein dynamics (due to an mRNA degradation rate larger than that of the protein), we prove that, with the appropriate scaling, the bursting phenomenon can be transmitted to the slow variable. We show that the reduced equation is either a stochastic differential equation with a jump Markov process or a deterministic ordinary differential equation, depending on the scaling that is appropriate. These results are significant because adiabatic reduction techniques seem to have not been de...
Russell, J A; Holmes, E M; Keller, D J; Vargas, J H
1981-09-01
During the 1977-78, 1978-79, and 1979-80 ski seasons, 76 acute anterior shoulder dislocations were treated by one of three orthopedic surgeons in the Rutland, Vermont, Hospital emergency room: 68 (89.4%) were reduced on the first attempt using the Milch technique of abduction and external rotation. Four (5.2%) required general anesthesia, and four were reduced using meperidine hydrochloride (Demerol, Winthrop) and diazepam (Valium, Roche) and a traction-countertraction technique. Of the 68 shoulders reduced with the Milch technique, 47 (69.1%) required no analgesics or muscle relaxants. There were no complications attributable to the technique itself. Males were injured more frequently than females, in a 4.4:1 ratio. Left shoulder injuries were as common as right. Recurrent dislocations occurred more frequently in younger individuals. Fractures of the greater tuberosity were associated injuries in five (6.6%) of all dislocations. These all occurred in individuals older than 39 years and were coincident with primary dislocations.
Prediction of survival with alternative modeling techniques using pseudo values
T. van der Ploeg (Tjeerd); F.R. Datema (Frank); R.J. Baatenburg de Jong (Robert Jan); E.W. Steyerberg (Ewout)
2014-01-01
Background: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo
Laboratory procedures and data reduction techniques to determine rheologic properties of mass flows
Holmes, R.R.; Huizinga, R.J.; Brown, S.M.; Jobson, H.E.
1993-01-01
Determining the rheologic properties of coarse-grained mass flows is an important step in mathematically simulating potential inundation zones. Using the vertically rotating flume designed and built by the U.S. Geological Survey, laboratory procedures and subsequent data reduction have been developed to estimate shear stresses and strain rates of various flow materials. Although direct measurement of shear stress and strain rate currently (1992) is not possible in the vertically rotating flume, methods were derived to estimate these values from measurements of flow geometry, surface velocity, and flume velocity.
Speckle reduction technique for embeddable MEMS-laser pico-projector
Abelé, N.; Le Gros, C.; Masson, J.; Kilcher, L.
2014-03-01
MEMS-scanning laser projectors have seen tremendous performance improvements in the past year, with demonstrated devices showing the very good size, energy efficiency and image quality that theory predicts. The last challenge yet to be solved is speckle reduction, which is the main bottleneck for the adoption of this technology. The paper presents an innovative design to reduce speckle contrast without degrading any other features and benefits of the projection system. The proposed despeckling solution is a single, non-movable part with a volume of less than 0.1 cc and a thickness of 4 mm.
Use of surgical techniques in the rat pancreas transplantation model
National Research Council Canada - National Science Library
Ma, Yi; Guo, Zhi-Yong
2008-01-01
... (also called type 1 diabetes). With the improvement of microsurgical techniques, pancreas transplantation in rats has been the major model for physiological and immunological experimental studies in the past 20 years...
Energy Technology Data Exchange (ETDEWEB)
Jeong, K; Kuo, H; Ritter, J; Shen, J; Basavatia, A; Yaparpalvi, R; Kalnicki, S [Montefiore Medical Center, Bronx, NY (United States); Tome, W [Montefiore Medical Center, ALBERT EINSTEIN COLLEGE OF MEDICINE, Bronx, NY (United States)
2015-06-15
Purpose: To evaluate the feasibility of using a metal artifact reduction technique to deplete metal artifacts and its application in improving dose calculation in external radiation therapy planning. Methods: A CIRS electron density phantom was scanned with and without steel drill bits placed in some plug holes. Metal artifact reduction software with the Metal Deletion Technique (MDT) was used to remove metal artifacts from the image scanned with metal. Hounsfield units of electron density plugs from the artifact-free reference image and the MDT-processed images were compared. To test the dose calculation improvement with the MDT-processed images, a clinically approved head and neck plan with manual dental artifact correction was tested. Patient images were exported and processed with MDT, and the plan was recalculated on the new MDT image without manual correction. Dose profiles near the metal artifacts were compared. Results: The MDT used in this study effectively reduced the metal artifacts caused by beam hardening and scatter. The windmill artifact around the metal drill was greatly improved, with a smooth rounded view. Differences in the mean HU of each density plug between the reference and MDT images were less than 10 HU in most of the plugs. Dose differences between the original plan and the MDT images were minimal. Conclusion: Most metal artifact reduction methods were developed for diagnostic improvement purposes, hence Hounsfield unit accuracy was not rigorously tested before. In our test, MDT effectively eliminated metal artifacts with good HU reproducibility. However, it can introduce new mild artifacts, so the MDT images should be checked against the original images.
Shanahan, M C
2017-08-01
The purpose of this study was to compare radiation dose measurements generated using a virtual radiography simulation with experimental dosimeter measurements for two radiation dose reduction techniques in digital radiography. Entrance Surface Dose (ESD) measurements were generated for an antero-posterior lumbar spine radiograph experimentally using NanoDOT™ single point dosimeters for two radiographic systems (systems 1 and 2) and using Projection VR™, a virtual radiography simulation (system 3). Two dose reduction methods were tested: application of the 15% kVp rule, or simplified 10 kVp rule, and the exposure maintenance formula. The 15% or 10 kVp rules use a specified increase in kVp and halving of the mAs to reduce patient ESD. The exposure maintenance formula uses an increase in source-to-image distance (SID) to reduce ESD. Increasing kVp from 75 to 96 kVp, with the concomitant decrease in mAs, resulted in percent ESD reductions of 59.5% (4.02-1.63 mGy), 60.8% (3.55-1.39 mGy), and 60.3% (6.65-2.64 mGy) for experimental systems 1 and 2 and the virtual simulation (system 3), respectively. Increasing the SID (with the appropriate increase in mAs) from 100 to 140 cm reduced ESD by 22.3%, 18.8%, and 23.5% for experimental systems 1 and 2 and the virtual simulation (system 3), respectively. Percent dose reduction measurements were similar between the experimental and virtual measurement systems investigated. For the dose reduction practices tested, Projection VR™ provides a realistic alternative to direct dosimetry for estimating percent dose reduction. Copyright © 2017 The College of Radiographers. Published by Elsevier Ltd. All rights reserved.
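The reported percentages follow directly from the quoted ESD values and the inverse-square law; a quick check, using values taken from the abstract:

```python
# Checking the reported dose-reduction percentages (values from the abstract).
def pct_reduction(before, after):
    """Percent reduction in entrance surface dose."""
    return 100.0 * (before - after) / before

# 15% kVp rule on system 1: ESD 4.02 mGy -> 1.63 mGy
print(round(pct_reduction(4.02, 1.63), 1))   # 59.5

# exposure maintenance: raising SID from 100 cm to 140 cm requires the mAs
# to be scaled by the inverse-square factor (140/100)^2
print(round((140.0 / 100.0) ** 2, 2))        # 1.96, i.e. ~double the mAs
```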
Directory of Open Access Journals (Sweden)
Hironori Ochi
2017-01-01
The positioning of the patient on the fracture table is critical for successful reduction and operative fixation of intertrochanteric hip fractures. However, this manipulation is challenging in patients who have undergone leg amputations. A 97-year-old man presented to the emergency department with right hip pain following a mechanical fall. He had undergone a below-knee amputation of his right leg after a traffic accident at age 19 and used a below-knee patellar-tendon-bearing prosthesis for mobility. Radiographs of his pelvis revealed a displaced intertrochanteric fracture of the right femur. The patient was positioned on a fracture table as in the standard procedure. The method of inverting the traction boot to accommodate the flexed knee and stump, described by Al-Harthy, could be used to provide traction and rotational control. Internal fixation was performed using a short femoral nail. Postoperatively, the patient could walk with full weight bearing using a prosthesis on his affected limb. Inverting the traction boot to accommodate the flexed knee and stump can be used safely and effectively to achieve and maintain fracture reduction during fixation of intertrochanteric fractures in patients with a below-knee amputated limb.
A bat-inspired technique for clutter reduction in radar sounder systems
Carrer, L.; Bruzzone, L.
2016-10-01
Radar sounders are valuable instruments for subsurface investigation and are widely employed for the study of planetary bodies around the solar system. Because of their wide antenna beam pattern, off-nadir surface reflections (i.e., clutter) of the transmitted signal can compete with echoes coming from the subsurface, thus masking them. Different strategies have been adopted for clutter mitigation; however, none of them has proved to be the final solution to this specific problem. Bats are well known for their ability to discriminate between a prey and unwanted clutter (e.g., foliage) by effectively employing their sonar. According to recent studies, big brown bats can discriminate clutter by transmitting two different carrier frequencies. Most interestingly, there are many striking analogies between the characteristics of the bat sonar and those of a radar sounder. Among the most important, they share the same nadir acquisition geometry and transmitted signal type (i.e., linear frequency modulation). In this paper, we explore the feasibility of exploiting frequency diversity for clutter discrimination in radar sounding by mimicking the bats' unique signal processing strategies. Accordingly, we propose a frequency-diversity clutter reduction method based on specific mathematical conditions that, if verified, allow the clutter and the subsurface signal to be disambiguated. These analytic conditions depend on factors such as the difference in central carrier frequencies, surface roughness, and subsurface material properties. The method's performance has been evaluated by different simulations of meaningful acquisition scenarios, which confirm its clutter reduction effectiveness.
Energy Technology Data Exchange (ETDEWEB)
Willemink, Martin J.; Leiner, Tim; Jong, Pim A. de; Nievelstein, Rutger A.J.; Schilham, Arnold M.R. [Utrecht University Medical Center, Department of Radiology, P.O. Box 85500, Utrecht (Netherlands); Heer, Linda M. de [Cardiothoracic Surgery, Utrecht (Netherlands); Budde, Ricardo P.J. [Utrecht University Medical Center, Department of Radiology, P.O. Box 85500, Utrecht (Netherlands); Gelre Hospital, Department of Radiology, Apeldoorn (Netherlands)
2013-06-15
To present the results of a systematic literature search aimed at determining to what extent radiation dose can be reduced with iterative reconstruction (IR) for cardiopulmonary and body imaging with computed tomography (CT) in the clinical setting, to assess the effects of IR on image quality compared with filtered back-projection (FBP), and to provide recommendations for future research on IR. We searched Medline and Embase from January 2006 to January 2012 and included original research papers concerning IR for CT. The systematic search yielded 380 articles; forty-nine relevant studies were included. These studies concerned the chest (n = 26), abdomen (n = 16), both chest and abdomen (n = 1), head (n = 4), spine (n = 1), and no specific area (n = 1). IR reduced noise and artefacts, and it improved subjective and objective image quality compared to FBP at the same dose. Conversely, low-dose IR and normal-dose FBP showed similar noise, artefacts, and subjective and objective image quality. Reported dose reductions ranged from 23 to 76 % compared to locally used default FBP settings. However, IR has not yet been investigated for ultra-low-dose acquisitions with clinical diagnosis and accuracy as endpoints. Benefits of IR include improved subjective and objective image quality as well as radiation dose reduction while preserving image quality. Future studies need to address the value of IR in ultra-low-dose CT with clinically relevant endpoints. (orig.)
THE IMPROVEMENT OF THE COMPUTATIONAL PERFORMANCE OF THE ZONAL MODEL POMA USING PARALLEL TECHNIQUES
Directory of Open Access Journals (Sweden)
Yao Yu
2014-01-01
The zonal modeling approach is a simplified computational method used to predict temperature distribution, energy use, and indoor airflow thermal behavior in multi-zone buildings. Although this approach is known to use fewer computer resources than CFD models, the computational time is still an issue, especially when buildings are characterized by complicated geometry and indoor layout of furnishings. Therefore, applying a new computing technique to current zonal models in order to reduce the computational time is a promising way to further improve model performance and promote the wide application of zonal models. Parallel computing techniques provide a way to accomplish these purposes. Unlike the serial computations commonly used in current zonal models, these parallel techniques decompose the serial program into several discrete instructions that can be executed simultaneously on different processors/threads. As a result, the computational time of the parallelized program can be significantly reduced compared to that of the traditional serial program. In this article, a parallel computing technique, Open Multi-Processing (OpenMP), is applied to the zonal model Pressurized zOnal Model with the Air diffuser (POMA) in order to improve the model's computational performance, including the reduction of computational time and the investigation of the model's scalability.
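The parallel decomposition described above can be sketched in a few lines. The toy model below is not POMA's physics: each zone's update is just the average of its neighbours, and an OpenMP-style "parallel for" is imitated with a thread pool. The zone layout and boundary temperatures are invented for illustration; the point is that a Jacobi-style sweep reads only the old field, so all zone updates are independent and parallelizable.

```python
from concurrent.futures import ThreadPoolExecutor

def update_zone(i, temps, neighbours):
    # New zone temperature = average of neighbouring zones (toy balance,
    # standing in for the zonal mass/energy equations).
    nbrs = neighbours[i]
    return sum(temps[j] for j in nbrs) / len(nbrs)

def parallel_sweep(temps, neighbours, pool):
    # Every zone reads the *old* field, so the updates are independent
    # and can run concurrently -- the property an OpenMP "parallel for"
    # exploits in the compiled zonal code.
    return list(pool.map(lambda i: update_zone(i, temps, neighbours),
                         range(len(temps))))

# Four interior zones in a row between a 15 C and a 25 C boundary zone.
temps = [15.0, 20.0, 20.0, 20.0, 20.0, 25.0]
neighbours = {0: [0], 5: [5],               # boundary zones stay fixed
              1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5]}

with ThreadPoolExecutor(max_workers=4) as pool:
    for _ in range(200):
        temps = parallel_sweep(temps, neighbours, pool)

print([round(t, 2) for t in temps])  # settles to a linear 15..25 profile
```

In CPython the threads share one interpreter lock, so the speedup claim only materializes in a compiled implementation (as with OpenMP in C/Fortran); the sketch shows the decomposition, not the timing.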
Electron energy loss spectroscopy techniques for the study of microbial chromium(VI) reduction
Daulton, Tyrone L.; Little, Brenda J.; Lowe, Kristine; Jones-Meehan, Joanne
2002-01-01
Electron energy loss spectroscopy (EELS) techniques were used to determine oxidation state, at high spatial resolution, of chromium associated with the metal-reducing bacteria, Shewanella oneidensis, in anaerobic cultures containing Cr(VI)O4(2-). These techniques were applied to fixed cells examined in thin section by conventional transmission electron microscopy (TEM) as well as unfixed, hydrated bacteria examined by environmental cell (EC)-TEM. Two distinct populations of bacteria were observed by TEM: bacteria exhibiting low image contrast and bacteria exhibiting high contrast in their cell membrane (or boundary) structure which was often encrusted with high-contrast precipitates. Measurements by EELS demonstrated that cell boundaries became saturated with low concentrations of Cr and the precipitates encrusting bacterial cells contained a reduced form of Cr in oxidation state + 3 or lower.
Oxygen reduction kinetics on mixed conducting SOFC model cathodes
Energy Technology Data Exchange (ETDEWEB)
Baumann, F.S.
2006-07-01
The kinetics of the oxygen reduction reaction at the surface of mixed conducting solid oxide fuel cell (SOFC) cathodes is one of the main factors limiting the performance of these promising systems. For "realistic" porous electrodes, however, it is usually very difficult to separate the influence of different resistive processes. Therefore, a suitable, geometrically well-defined model system was used in this work to enable an unambiguous distinction of individual electrochemical processes by means of impedance spectroscopy. The electrochemical measurements were performed on dense thin-film microelectrodes of mixed conducting perovskite-type materials, prepared by PLD and photolithography. The first part of the thesis consists of an extensive impedance spectroscopic investigation of La0.6Sr0.4Co0.8Fe0.2O3 (LSCF) microelectrodes. An equivalent circuit was identified that describes the electrochemical properties of the model electrodes appropriately and enables an unambiguous interpretation of the measured impedance spectra. Hence, the dependencies of individual electrochemical processes, such as the surface exchange reaction, on a wide range of experimental parameters, including temperature, dc bias, and oxygen partial pressure, could be studied. As a result, a comprehensive set of experimental data has been obtained that was previously not available for a mixed conducting model system. In the course of the experiments on the dc bias dependence of the electrochemical processes, a new and surprising effect was discovered: a short but strong dc polarisation of an LSCF microelectrode at high temperature drastically improves its electrochemical performance with respect to the oxygen reduction reaction. The electrochemical resistance associated with the oxygen surface exchange reaction, initially the dominant contribution to the total electrode resistance, can be reduced by two orders of magnitude.
Virtual 3d City Modeling: Techniques and Applications
Singh, S. P.; Jain, K.; Mandla, V. R.
2013-08-01
A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to urban areas. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing the graphic representation of buildings and other objects in 2.5D or 3D. Three main geomatics approaches are generally used for virtual 3D city model generation: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; and in the third method, many researchers use terrestrial images with close-range photogrammetry, DSMs, and texture mapping. We start this paper with an introduction to the various geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on automation (automatic, semi-automatic, and manual methods), and another based on data input techniques (photogrammetry and laser techniques). After a detailed study, we give the conclusions of this research, together with a short justification and analysis and the present trends in 3D city modeling. This paper gives an overview of the techniques related to the generation of virtual 3D city models using geomatics techniques and the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern geomatics techniques plays a major role in creating a virtual 3D city model. Each technique and method has some advantages and some drawbacks. The point cloud model is a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3
SVM and ANN Based Classification of Plant Diseases Using Feature Reduction Technique
Pujari, Jagadeesh D.; Rajesh Yakkundimath; Abdulmunaf. Syedhusain. Byadgi
2016-01-01
Computers have been used for mechanization and automation in different applications of agriculture/horticulture. Critical decisions on agricultural yield and plant protection are supported by expert systems (decision support systems) using computer vision techniques. One of the areas considered in the present work is the processing of images of plant diseases affecting agriculture/horticulture crops. The first symptoms of plant disease have to be correctly detected, identi...
Directory of Open Access Journals (Sweden)
Ravikiran Nandiraju
2016-07-01
BACKGROUND The tibia is the most commonly fractured long bone of the body. Distal metaphyseal fractures, with or without involvement of the articular surface, comprise 5-7% of these injuries. Encouraging results have been reported for open reduction and internal fixation (plate osteosynthesis) and for closed manual reduction with osteosynthesis using minimally invasive percutaneous locking plates for lower-third tibial fractures. The locking compression plate provides the advantages of anatomic reduction, stable fixation, preservation of blood supply, prevention of joint stiffness, and less soft tissue injury. METHODS AND MATERIALS This prospective study included 40 patients with distal tibia fractures, aged between 18 and 65 years, presenting to the Department of Orthopaedics at Osmania Medical College and Osmania General Hospital. These patients were treated with locking compression plates. RESULTS Patients were evaluated using the AOFAS hindfoot scale (100 points): Excellent - 26 (65%), Good - 12 (30%), Fair - 2 (5%), comparable to other studies. CONCLUSION Reduction and internal fixation of distal tibial fractures using a locking compression plate placed medially by open and MIPO techniques is an acceptable form of treatment for lower-third tibia fractures, including those involving the articular surface, with or without comminution.
Moreno Chaparro, Nicolas
2015-06-30
We introduce a framework for model reduction of polymer chain models for dissipative particle dynamics (DPD) simulations, where the properties governing the phase equilibria, such as the characteristic size of the chain, compressibility, density, and temperature, are preserved. The proposed methodology reduces the number of degrees of freedom required in traditional DPD representations to model equilibrium properties of systems with complex molecules (e.g., linear polymers). Based on geometrical considerations, we explicitly account for the correlation between beads in fine-grained DPD models and consistently represent the effect of these correlations in a reduced model, in a practical and simple fashion via power laws and the consistent scaling of the simulation parameters. In order to satisfy the geometrical constraints in the reduced model, we introduce bond-angle potentials that account for the changes in the chain free energy after the model reduction. Following this coarse-graining process, we represent high molecular weight DPD chains (i.e., ≥200 beads per chain) with a significant reduction in the number of particles required (i.e., ≥20 times fewer than the original system). We show that our methodology has potential applications in modeling systems of high molecular weight molecules at large scales, such as diblock copolymers and DNA.
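The power-law rescaling idea can be illustrated with a toy calculation. The sketch below is a hedged stand-in for the paper's scheme: it assumes an ideal-chain scaling R ~ b * N**nu with nu = 0.5, and the function names and parameter values are invented. It lumps k fine-grained beads into one coarse bead while choosing the coarse bond length so the chain's characteristic size is preserved.

```python
# Coarse-grain a chain of n_beads into n_beads // k beads, preserving
# the chain size R = b * N**nu (illustrative ideal-chain assumption).

def coarse_grain(n_beads, bond_length, k, nu=0.5):
    n_coarse = n_beads // k
    # Preserve R:  b_c * n_c**nu = b * n**nu  =>  b_c = b * k**nu
    bond_coarse = bond_length * k ** nu
    return n_coarse, bond_coarse

def chain_size(n, b, nu=0.5):
    return b * n ** nu

n, b = 200, 1.0                    # a >=200-bead fine-grained chain
nc, bc = coarse_grain(n, b, k=20)  # 20x fewer particles
print(nc, round(bc, 3))
print(round(chain_size(n, b), 3), round(chain_size(nc, bc), 3))  # sizes match
```

In the actual method the exponents, the non-bonded parameters, and the added bond-angle potentials are all calibrated so that compressibility and free energy are also preserved; the sketch only shows the size constraint.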
Temporal moments revisited: Why there is no better way for physically based model reduction in time
Leube, P. C.; Nowak, W.; Schneider, G.
2012-11-01
Many hydro(geo)logical problems are highly complex in space and time, coupled with scale issues, variability, and uncertainty. Especially time-dependent models often consume enormous computational resources, but model reduction techniques can alleviate this problem. Temporal moments (TM) offer an approach to reduce the time demands of transient hydro(geo)logical simulations. TM reduce transient governing equations to steady state and directly simulate the temporal characteristics of the system, if the equations are linear and coefficients are time independent. This is achieved by an integral transform, projecting the dynamic system response onto monomials in time. In comparison to classical approaches of model reduction that involve orthogonal base functions, however, the monomials for TM are nonorthogonal, which might impair the quality and efficiency of model reduction. Thus, we raise the question of whether there are more suitable temporal base functions than the monomials that lead to TM. In this work, we will derive theoretically that there is only a limited class of temporal base functions that can reduce hydro(geo)logical models. By comparing those to TM we conclude that, in terms of gained efficiency versus maintained accuracy, TM are the best possible choice. While our theoretical results hold for all systems of linear partial or ordinary differential equations (PDEs, ODEs) with any order of space and time derivatives, we illustrate our study with an example of pumping tests in a confined aquifer. For that case, we demonstrate that two (four) TM are sufficient to represent more than 80% (90%) of the dynamic behavior, and that the information content strictly increases with increasing TM order.
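The core mechanism, projecting a transient solution onto monomials in time so the governing equation becomes steady, can be shown on a scalar surrogate. The sketch below uses dc/dt = -a*c, c(0) = c0 (a stand-in for the linear transient PDE; a and c0 are arbitrary illustrative values): integrating t^k against the ODE by parts gives the algebraic recursion a*m_k = k*m_{k-1}, so the temporal moments come out with no time stepping at all.

```python
import math

a, c0 = 2.0, 3.0   # decay rate and initial condition (illustrative)

def moment_from_recursion(k):
    # TM reduction: a*m_0 = c0, then a*m_k = k*m_{k-1} -- a "steady" problem.
    m = c0 / a
    for j in range(1, k + 1):
        m = j * m / a
    return m

def moment_from_transient(k, dt=1e-4, t_max=20.0):
    # Brute-force quadrature of m_k = integral_0^inf t^k c(t) dt,
    # using the known transient solution c(t) = c0 * exp(-a*t).
    total, t = 0.0, 0.0
    while t < t_max:
        total += t ** k * c0 * math.exp(-a * t) * dt
        t += dt
    return total

for k in range(3):
    print(k, round(moment_from_recursion(k), 4),
             round(moment_from_transient(k), 4))
```

The recursion costs a handful of multiplications where the transient route costs a full time integration, which is the efficiency gain the paper quantifies for PDE systems.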
Kal, Subhadeep; Mohanty, Nihar; Farrell, Richard A.; Franke, Elliott; Raley, Angelique; Thibaut, Sophie; Pereira, Cheryl; Pillai, Karthik; Ko, Akiteru; Mosden, Aelan; Biolsi, Peter
2017-04-01
Scaling beyond the 7nm technology node demands significant control over variability, down to a few angstroms, in order to achieve reasonable yield. For example, to meet current scaling targets it is highly desirable to achieve sub-30nm-pitch line/space features at the back end of line (BEOL) or front end of line (FEOL), and uniform, precise contact/hole patterning at the middle of line (MOL). One of the quintessential requirements for such precise and possibly self-aligned patterning strategies is superior etch selectivity between the target films while other masks/films are exposed. The need for high etch selectivity becomes more evident in unit process development at MOL and BEOL, where film density choices are lower (compared to FEOL film choices) due to the lower temperature budget. Low etch selectivity with conventional plasma and wet chemical etch techniques causes significant gouging (unintended etching of the etch stop layer, as shown in Fig. 1), high line edge roughness (LER)/line width roughness (LWR), non-uniformity, etc. In certain circumstances this may lead to added downstream process stochastics. Furthermore, conventional plasma etches may also have the added disadvantage of plasma VUV damage and corner rounding (Fig. 1). Finally, the above-mentioned factors can potentially compromise edge placement error (EPE) and/or yield. Therefore a process flow enabled with extremely selective etches, inherent to film properties and/or etch chemistries, is a significant advantage. To improve etch selectivity for certain etch steps during a process flow, we implement alternative, highly selective, plasma-free techniques in conjunction with conventional plasma etches (Fig. 2). In this article, we present our plasma-free, chemical gas-phase etch technique using chemistries that have high selectivity towards a spectrum of films owing to the reaction mechanism (as shown in Fig. 1). Gas-phase etches also help eliminate plasma damage to the
Prediction of survival with alternative modeling techniques using pseudo values.
Directory of Open Access Journals (Sweden)
Tjeerd van der Ploeg
BACKGROUND: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo values enable statistically appropriate analyses of survival outcomes when used in seven alternative modeling techniques. METHODS: In this case study, we analyzed survival of 1282 Dutch patients with newly diagnosed Head and Neck Squamous Cell Carcinoma (HNSCC) with conventional Kaplan-Meier and Cox regression analysis. We subsequently calculated pseudo values to reflect the individual survival patterns. We used these pseudo values to compare recursive partitioning (RPART), neural nets (NNET), logistic regression (LR), general linear models (GLM), and three variants of support vector machines (SVM) with respect to dichotomous 60-month survival, and continuous pseudo values at 60 months or estimated survival time. We used the area under the ROC curve (AUC) and the root of the mean squared error (RMSE) to compare the performance of these models using bootstrap validation. RESULTS: Of a total of 1282 patients, 986 patients died during a median follow-up of 66 months (60-month survival: 52% [95% CI: 50%-55%]). The LR model had the highest optimism-corrected AUC (0.791) to predict 60-month survival, followed by the SVM model with a linear kernel (AUC 0.787). The GLM model had the smallest optimism-corrected RMSE when continuous pseudo values were considered for 60-month survival or the estimated survival time, followed by SVM models with a linear kernel. The estimated importance of predictors varied substantially by the specific aspect of survival studied and modeling technique used. CONCLUSIONS: The use of pseudo values makes it readily possible to apply alternative modeling techniques to survival problems, to compare their performance and to search further for promising
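The pseudo-value construction is simple enough to sketch directly. For survival at a horizon tau, the jackknife pseudo-observation for subject i is pseudo_i = n*S(tau) - (n-1)*S_{-i}(tau), where S is the Kaplan-Meier estimate and S_{-i} leaves subject i out. The toy data below are invented; with real data these per-patient values replace the censored outcome in any regression technique.

```python
def km_survival(times, events, tau):
    """Kaplan-Meier S(tau) for (time, event) pairs; event=1 means death."""
    s = 1.0
    for t in sorted(set(t for t, e in zip(times, events) if e == 1)):
        if t > tau:
            break
        at_risk = sum(1 for ti in times if ti >= t)
        deaths = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        s *= 1 - deaths / at_risk
    return s

def pseudo_values(times, events, tau):
    n = len(times)
    s_full = km_survival(times, events, tau)
    out = []
    for i in range(n):   # leave-one-out (jackknife) estimates
        t_i = times[:i] + times[i + 1:]
        e_i = events[:i] + events[i + 1:]
        out.append(n * s_full - (n - 1) * km_survival(t_i, e_i, tau))
    return out

times  = [2, 3, 3, 5, 8, 9, 12, 15]   # follow-up in months (toy data)
events = [1, 1, 0, 1, 0, 1, 0, 1]     # 1 = died, 0 = censored
pv = pseudo_values(times, events, tau=10)
print([round(v, 3) for v in pv])
```

Without censoring each pseudo-value collapses to the plain 0/1 survival indicator; censoring spreads them into non-integer values that still have the right expectation, which is what makes them usable as ordinary regression outcomes.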
Ross, Adrianne; Catanzariti, Alan R; Mendicino, Robert W
2011-01-01
Management of a dislocated ankle fracture can be challenging because of instability of the ankle mortise, a compromised soft tissue envelope, and the potential neurovascular compromise. Every effort should be made to quickly and efficiently relocate the disrupted ankle joint. Within the emergency department setting, narcotics and benzodiazepines can be used to sedate the patient before attempting closed reduction. The combination of narcotics and benzodiazepines provides relief of pain and muscle guarding; however, it conveys a risk of seizure as well as respiratory arrest. An alternative to conscious sedation is the hematoma block, or an intra-articular local anesthetic injection in the ankle joint and the associated fracture hematoma. The hematoma block offers a comparable amount of analgesia to conscious sedation without the additional cardiovascular risk, hospital cost, and procedure time.
Directory of Open Access Journals (Sweden)
S. Ebanezar Pravin
2016-11-01
The main objective of this paper is to control the speed of an induction motor by using a seven-level diode-clamped multilevel inverter and to improve the quality of the sinusoidal output voltage with reduced harmonics. The presented scheme for the diode-clamped multilevel inverter is sine-carrier pulse width modulation control. Open-loop speed control can be achieved using the V/f method, implemented by changing the supply voltage and frequency applied to the three-phase induction motor at a constant ratio. The presented system is an effective replacement for the conventional method, which has high switching losses that result in poor drive performance. The simulation results portray effective control of the motor speed and enhanced drive performance through reduction in total harmonic distortion (THD). The effectiveness of the system is verified through simulation using the PSIM 6.1 Simulink package.
Model order reduction of linear time invariant systems
Directory of Open Access Journals (Sweden)
Lj. Radić-Weissenfeld
2008-05-01
This paper addresses issues related to the order reduction of systems with multiple input/output ports. The order reduction is divided into two steps. The first step is a standard order reduction method based on the multipoint approximation of system matrices by applying Krylov subspaces. The second step is based on the rejection of the weak part of the system. To recognise the weak system part, Lyapunov equations are used. Thus, this paper introduces efficient solutions of the Lyapunov equations for port-to-port subsystems.
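The second step, rejecting the weak part of the system via Lyapunov equations, can be shown on a toy model. The sketch below (invented values, and a diagonal A so the Lyapunov equation has a closed form; the paper's Krylov step and general solvers are not reproduced) builds the controllability Gramian of x' = A x + B u, y = C x and drops states whose Gramian entry is negligible.

```python
# Toy diagonal state-space model: A = diag(-a_i), B = b, C = c.
# For diagonal A the Lyapunov equation A P + P A' + B B' = 0 has the
# closed-form solution P_ii = b_i^2 / (2 a_i); small P_ii marks a
# weakly reachable ("weak") state that contributes little energy.

a = [1.0, 2.0, 50.0, 80.0]      # decay rates (stable poles at -a_i)
b = [1.0, 0.8, 0.05, 0.01]      # input couplings
c = [1.0, 1.0, 1.0, 1.0]        # output couplings

gramian = [bi ** 2 / (2 * ai) for ai, bi in zip(a, b)]
print([round(g, 6) for g in gramian])

tol = 1e-4                       # keep states above this energy level
keep = [i for i, g in enumerate(gramian) if g > tol]
print("kept states:", keep)

def dc_gain(idx):
    # Transfer function at s = 0: sum of c_i * b_i / a_i over the states.
    return sum(c[i] * b[i] / a[i] for i in idx)

print(round(dc_gain(range(4)), 4), round(dc_gain(keep), 4))  # nearly equal
```

The truncated two-state model reproduces the DC gain of the full model to three decimals, which is the sense in which the rejected part is "weak"; for non-diagonal systems the same test uses a numerical Lyapunov solve instead of the closed form.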
Circuit oriented electromagnetic modeling using the PEEC techniques
Ruehli, Albert; Jiang, Lijun
2017-01-01
This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. The book begins with an introduction to circuit analysis techniques, laws, and frequency- and time-domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.
Jiang, Xin; Fan, Shuai; Cai, Bin; Fang, Zhong-Yi; Xu, Li-Li; Liu, Li-Kun
2016-10-01
This study aimed to evaluate the short-term efficacy of the mandibular manipulation technique combined with exercise therapy and splint treatment for acute anterior TMJ disc displacement without reduction (ADDW), and to assess the TMJ disc-condyle relationship by magnetic resonance imaging (MRI). Forty-four patients (37 females, 7 males) were diagnosed with acute ADDW, confirmed by MRI. All patients underwent mandibular manipulation combined with exercise therapy, including jaw movement exercise, stabilization exercise, and disc reposition exercise, and splint treatment. The anterior repositioning splint was worn only at night during sleep, while the mandible was kept in the rest position during the day. The treatment was continued for 2 weeks. The baseline and endpoint outcome measures were maximum active mouth opening and visual analogue scale (VAS) score of TMJ pain. Follow-up MRI was performed 1-3 months after treatment. The SPSS 17.0 software package was used for statistical analysis. Two weeks after treatment, the patients' maximum active mouth opening increased from (22.6±6.1) mm to (43.9±3.3) mm, and the VAS pain score decreased from 3.6±1.5 to 0.7±0.25. After treatment of 4.6±4.7 weeks on average, 20 patients (46%) displayed a normal disc-condyle relationship, 16 patients (36%) had displacement with reduction, and 8 patients (18%) had displacement without reduction on MRI. Mandibular manipulation combined with exercise therapy and splint treatment appears useful in the treatment of acute anterior TMJ disc displacement without reduction and can help restore the complete anatomic disc-condyle relationship.
SU-E-I-77: A Noise Reduction Technique for Energy-Resolved Photon-Counting Detectors
Energy Technology Data Exchange (ETDEWEB)
Lam Ng, A; Ding, H; Cho, H; Molloi, S [University of California, Irvine, CA (United States)
2014-06-01
Purpose: Finding the optimal energy threshold setting for an energy-resolved photon-counting detector has an important impact on the maximization of the contrast-to-noise ratio (CNR). We introduce a noise reduction method to enhance CNR by reducing the noise in each energy bin without altering the average gray levels in the projection and image domains. Methods: We simulated a four-bin energy-resolved photon-counting detector based on Si with a 10 mm depth of interaction. The TASMIP algorithm was used to simulate a spectrum of 65 kVp with a 2.7 mm Al filter. A 13 mm PMMA phantom with hydroxyapatite and iodine at different concentrations (100, 200 and 300 mg/ml for HA, and 2, 4, and 8 mg/ml for iodine) was used. Projection-based and image-based energy weighting methods were used to generate weighted images. A reference low-noise image was used for noise reduction purposes. A Gaussian-like weighting function, which computes the similarity between pixels of interest, was calculated from the reference image and applied on a pixel-by-pixel basis to the noisy images. Results: CNR improvement compared to the different methods (charge-integrated, photon-counting, and energy-weighting) and after noise reduction was highly task-dependent. The CNR improvements with respect to the charge-integrated CNR for hydroxyapatite and iodine were 1.8 and 1.5, respectively. In each of the energy bins, the noise was reduced by approximately a factor of two without altering the respective average gray levels. Conclusion: The proposed noise reduction technique for energy-resolved photon-counting detectors can significantly reduce image noise. This technique can be used as a complement to current energy-weighting methods in CNR optimization.
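The reference-guided weighting idea can be sketched in one dimension. The code below is a hedged illustration, not the authors' implementation: the Gaussian similarity weights are computed on a low-noise reference signal and then applied to the noisy signal (a cross-bilateral-style filter), with the kernel width, geometry, and signal shape all invented.

```python
import math, random, statistics

random.seed(0)
n = 200
reference = [0.0 if i < n // 2 else 10.0 for i in range(n)]   # clean edge
noisy = [r + random.gauss(0, 1.0) for r in reference]         # one noisy "energy bin"

def denoise(noisy, reference, radius=5, sigma=1.0):
    out = []
    for i in range(len(noisy)):
        wsum = vsum = 0.0
        for j in range(max(0, i - radius), min(len(noisy), i + radius + 1)):
            d = reference[i] - reference[j]
            w = math.exp(-(d * d) / (2 * sigma * sigma))  # Gaussian similarity
            wsum += w
            vsum += w * noisy[j]
        out.append(vsum / wsum)                            # weighted average
    return out

smooth = denoise(noisy, reference)
resid_before = statistics.pstdev(x - r for x, r in zip(noisy, reference))
resid_after = statistics.pstdev(x - r for x, r in zip(smooth, reference))
print(round(resid_before, 2), round(resid_after, 2))  # noise std drops markedly
```

Because pixels on opposite sides of the reference edge get near-zero weight, the averaging suppresses noise without blurring the edge, and the weighted mean leaves average gray levels essentially unchanged, mirroring the claim in the abstract.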
Using data mining techniques for building fusion models
Zhang, Zhongfei; Salerno, John J.; Regan, Maureen A.; Cutler, Debra A.
2003-03-01
Over the past decade many techniques have been developed that attempt to predict possible events through the use of given models or patterns of activity. These techniques work quite well provided one has a model or a valid representation of activity. In reality, however, the majority of the time this is not the case. Models that do exist were in many cases hand-crafted, required many man-hours to develop, and are very brittle in the dynamic world in which we live. Data mining techniques have shown some promise in providing a set of solutions. In this paper we provide the details of our motivation, theory, and the techniques we have developed, as well as the results of a set of experiments.
A Novel Technique for Glitch and Leakage Power Reduction in CMOS VLSI Circuits
Directory of Open Access Journals (Sweden)
Pushpa Saini
2012-10-01
Leakage power has become a serious concern in nanometer CMOS technologies. Dynamic and leakage power are both main contributors to total power consumption. In the past, dynamic power dominated the total power dissipation of CMOS devices. However, with the continuing trend of technology scaling, leakage power is becoming a main contributor to power consumption. In this paper, a technique is proposed that simultaneously reduces both glitch and leakage power. The results are simulated in Microwind 3.1 in 90 nm and 250 nm technologies at room temperature.
Stability Analysis and H∞ Model Reduction for Switched Discrete-Time Time-Delay Systems
Directory of Open Access Journals (Sweden)
Zheng-Fan Liu
2014-01-01
This paper is concerned with the problem of exponential stability and H∞ model reduction of a class of switched discrete-time systems with state time-varying delay. Some subsystems can be unstable. Based on the average dwell time technique and the Lyapunov-Krasovskii functional (LKF) approach, sufficient conditions for exponential stability with H∞ performance of such systems are derived in terms of linear matrix inequalities (LMIs). For high-order systems, sufficient conditions for the existence of a reduced-order model are derived in terms of LMIs. Moreover, the error system is guaranteed to be exponentially stable and an H∞ error performance is guaranteed. Numerical examples are also given to demonstrate the effectiveness and reduced conservatism of the obtained results.
Model reduction algorithms for optimal control and importance sampling of diffusions
Hartmann, Carsten; Schütte, Christof; Zhang, Wei
2016-08-01
We propose numerical algorithms for solving optimal control and importance sampling problems based on simplified models. The algorithms combine model reduction techniques for multiscale diffusions and stochastic optimization tools, with the aim of reducing the original, possibly high-dimensional problem to a lower dimensional representation of the dynamics, in which only a few relevant degrees of freedom are controlled or biased. Specifically, we study situations in which either a reaction coordinate onto which the dynamics can be projected is known, or situations in which the dynamics shows strongly localized behavior in the small noise regime. No explicit assumptions about small parameters or scale separation have to be made. We illustrate the approach with simple, but paradigmatic numerical examples.
Principles of operation and data reduction techniques for the loft drag disc turbine transducer
Energy Technology Data Exchange (ETDEWEB)
Silverman, S.
1977-09-01
An analysis of the single- and two-phase flow data applicable to the loss-of-fluid test (LOFT) is presented for the LOFT drag turbine transducer. Analytical models which were employed to correlate the experimental data are presented.
Seltzer, S. M.
1974-01-01
Some means of combining both computer simulation and analytical techniques are indicated in order to mutually enhance their efficiency as design tools and to motivate those involved in engineering design to consider using such combinations. While the idea is not new, heavy reliance on computers often seems to overshadow the potential utility of analytical tools. Although the example used is drawn from the area of dynamics and control, the principles espoused are applicable to other fields. In the example the parameter plane stability analysis technique is described briefly and extended beyond that reported in the literature to increase its utility (through a simple set of recursive formulas) and its applicability (through the portrayal of the effect of varying the sampling period of the computer). The numerical values that were rapidly selected by analysis were found to be correct for the hybrid computer simulation for which they were needed. This obviated the need for cut-and-try methods to choose the numerical values, thereby saving both time and computer utilization.
Scintillation reduction using multi-beam propagating technique in atmospheric WOCDMA system
Institute of Scientific and Technical Information of China (English)
Yaqin Zhao; Danli Xu; Xin Zhong
2011-01-01
Wireless optical code division multiple access (WOCDMA) combines code division multiple access (CDMA) with wireless optical communications. It not only retains the advantages of CDMA technology in radio frequency (RF) communication, but also offers huge bandwidth, a simple network protocol, random access, and other desirable characteristics. We propose employing multi-beam propagation to mitigate the influence of atmospheric scintillation on the WOCDMA system and then derive the bit error rate (BER) formulas of systems in weak and strong scintillation, respectively. According to simulation results, multi-beam propagation improves system performance considerably compared with the single-beam propagating technique; moreover, the more beams used, the better the performance. When the received optical power is -30 dBm, the BER of the system employing four beams is 5 dB and 1 dB lower than that of the single-beam technique in weak and strong scintillation, respectively.
Driving the Model to Its Limit: Profile Likelihood Based Model Reduction.
Maiwald, Tim; Hass, Helge; Steiert, Bernhard; Vanlier, Joep; Engesser, Raphael; Raue, Andreas; Kipkeew, Friederike; Bock, Hans H; Kaschek, Daniel; Kreutz, Clemens; Timmer, Jens
2016-01-01
In systems biology, one of the major tasks is to tailor model complexity to information content of the data. A useful model should describe the data and produce well-determined parameter estimates and predictions. Too small of a model will not be able to describe the data whereas a model which is too large tends to overfit measurement errors and does not provide precise predictions. Typically, the model is modified and tuned to fit the data, which often results in an oversized model. To restore the balance between model complexity and available measurements, either new data has to be gathered or the model has to be reduced. In this manuscript, we present a data-based method for reducing non-linear models. The profile likelihood is utilised to assess parameter identifiability and designate likely candidates for reduction. Parameter dependencies are analysed along profiles, providing context-dependent suggestions for the type of reduction. We discriminate four distinct scenarios, each associated with a specific model reduction strategy. Iterating the presented procedure eventually results in an identifiable model, which is capable of generating precise and testable predictions. Source code for all toy examples is provided within the freely available, open-source modelling environment Data2Dynamics based on MATLAB available at http://www.data2dynamics.org/, as well as the R packages dMod/cOde available at https://github.com/dkaschek/. Moreover, the concept is generally applicable and can readily be used with any software capable of calculating the profile likelihood.
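The profile likelihood idea can be illustrated on a toy model: fix one parameter on a grid and re-optimize the remaining ones at each grid point. A hedged sketch (synthetic exponential-decay data with hypothetical values; not the Data2Dynamics implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4, 40)
y = 2.0 * np.exp(-0.7 * t) + 0.05 * rng.normal(size=t.size)  # synthetic data

def profile_ssr(b):
    # For fixed decay rate b, the optimal amplitude a has a closed form
    # (linear least squares), so "re-optimizing the rest" is one line here.
    basis = np.exp(-b * t)
    a = (y @ basis) / (basis @ basis)
    return np.sum((y - a * basis) ** 2)

# The profile: sum of squared residuals as a function of b alone.
bs = np.linspace(0.2, 1.5, 200)
ssr = np.array([profile_ssr(b) for b in bs])
b_hat = bs[ssr.argmin()]
print(round(b_hat, 2))  # should be close to the true value 0.7
```

A flat profile would flag b as non-identifiable and hence a reduction candidate, which is the paper's criterion.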
Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca
2014-03-01
Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 83617 No. of bytes in distributed program, including test data, etc.: 1038160 Distribution format: tar.gz Programming language: C++. Computer: Tested on several PCs and on Mac. Operating system: Linux, Mac OS X, Windows (native and cygwin). RAM: It is dependent on the input data but usually between 1 and 10 MB. Classification: 2.5, 21.1. External routines: XrayLib (https://github.com/tschoonj/xraylib/wiki) Nature of problem: Simulation of a wide range of X-ray imaging and spectroscopy experiments using different types of sources and detectors. Solution method: XRMC is a versatile program that is useful for the simulation of a wide range of X-ray imaging and spectroscopy experiments. It enables the simulation of monochromatic and polychromatic X-ray sources, with unpolarised or partially/completely polarised radiation. Single-element detectors as well as two-dimensional pixel detectors can be used in the simulations, with several acquisition options. In the current version of the program, the sample is modelled by combining convex three-dimensional objects demarcated by quadric surfaces, such as planes, ellipsoids and cylinders. The Monte Carlo approach makes XRMC able to accurately simulate X-ray photon transport and interactions with matter up to any order of interaction. The differential cross-sections and all other quantities related to the interaction processes (photoelectric absorption, fluorescence emission, elastic and inelastic scattering) are computed using the xraylib software library, which is currently the most complete and up-to-date software library for X-ray parameters. The use of variance reduction techniques makes XRMC able to reduce the simulation time by several orders of magnitude compared to other general-purpose Monte Carlo simulation programs. Running time: It is dependent on the complexity of the simulation. 
For the examples
Millett, Peter J; Braun, Sepp
2009-01-01
Arthroscopic treatment of bony Bankart lesions can be challenging. We present a new easy and reproducible technique for arthroscopic reduction and suture anchor fixation of bony Bankart fragments. A suture anchor is placed medially to the fracture on the glenoid neck, and its sutures are passed around the bony fragment through the soft tissue including the inferior glenohumeral ligament complex. The sutures of this anchor are loaded in a second anchor that is placed on the glenoid face. This creates a nontilting 2-point fixation that compresses the fragment into its bed. By use of the standard technique, additional suture anchors are used superiorly and inferiorly to the bony Bankart piece to repair the labrum and shift the joint capsule. We call this the "bony Bankart bridge" procedure.
Time Hierarchies and Model Reduction in Canonical Non-linear Models
Löwe, Hannes; Kremling, Andreas; Marin-Sanguino, Alberto
2016-01-01
The time-scale hierarchies of a very general class of models in differential equations are analyzed. Classical methods for model reduction and time-scale analysis have been adapted to this formalism and a complementary method is proposed. A unified theoretical treatment shows how the structure of the system can be much better understood by inspection of two sets of singular values: one related to the stoichiometric structure of the system and another to its kinetics. The methods are exemplified first through a toy model, then a large synthetic network and finally with numeric simulations of three classical benchmark models of real biological systems. PMID:27708665
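The stoichiometric set of singular values mentioned above can be probed directly: the SVD of a stoichiometric matrix exposes rank deficiency and hence conservation laws. A minimal illustration on a hypothetical two-species network (not one of the paper's benchmarks):

```python
import numpy as np

# Toy stoichiometric matrix N (rows: species A, B; columns: reactions)
# for the reversible interconversion A <-> B; the total A + B is conserved.
N = np.array([[-1,  1],
              [ 1, -1]])

U, s, Vt = np.linalg.svd(N)
rank = int(np.sum(s > 1e-10))
print(s)     # one nonzero singular value
print(rank)  # rank 1 < 2 species: one conserved moiety (left null vector [1, 1])
```

In the paper's framework these structural singular values are then combined with a kinetic set to locate time-scale separations.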
On a Graphical Technique for Evaluating Some Rational Expectations Models
DEFF Research Database (Denmark)
Johansen, Søren; Swensen, Anders R.
2011-01-01
In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectation models and give a general definition of spread...
Matrix eigenvalue model: Feynman graph technique for all genera
Energy Technology Data Exchange (ETDEWEB)
Chekhov, Leonid [Steklov Mathematical Institute, ITEP and Laboratoire Poncelet, Moscow (Russian Federation); Eynard, Bertrand [SPhT, CEA, Saclay (France)
2006-12-15
We present the diagrammatic technique for calculating the free energy of the matrix eigenvalue model (the model with an arbitrary power β of the Vandermonde determinant) to all orders of the 1/N expansion in the case where the limiting eigenvalue distribution spans an arbitrary (but fixed) number of disjoint intervals (curves)
Energy Technology Data Exchange (ETDEWEB)
Xiao Yamping; Holappa, L. [Helsinki Univ. of Technology, Otaniemi (Finland). Lab. of Metallurgy
1996-12-31
This article summarizes the research work on thermodynamics of chromium slags and kinetic modelling of chromite reduction. The thermodynamic properties of FeCr slag systems were calculated with the regular solution model. The effects of the CaO/MgO ratio, the Al₂O₃ content, and the slag basicity on the activities of chromium oxides and the oxidation state of chromium were examined. The calculated results were compared with experimental data from the literature. In the kinetic modelling of chromite reduction, the reduction possibilities and tendencies of the chromite constituents with CO were analysed on the basis of thermodynamic calculations. Two reaction models, a structural grain model and a multi-layer reaction model, were constructed and applied to simulate chromite pellet reduction and chromite lumpy ore reduction, respectively. The calculated reduction rates were compared with the experimental measurements and the reaction mechanisms were discussed. (orig.) SULA 2 Research Programme; 4 refs.
Alanis Pena, Antonio Alejandro
Major commercial electricity generation is done by burning fossil fuels, out of which coal-fired power plants produce a substantial quantity of electricity worldwide. The United States has large reserves of coal, and it is cheaply available, making it a good choice for the generation of electricity on a large scale. However, one major problem associated with using coal for combustion is that it produces a group of pollutants known as nitrogen oxides (NOx). NOx are strong oxidizers and contribute to ozone formation and respiratory illness. The Environmental Protection Agency (EPA) regulates the quantity of NOx emitted to the atmosphere in the United States. One technique coal-fired power plants use to reduce NOx emissions is Selective Catalytic Reduction (SCR). SCR uses layers of catalyst that need to be added or changed to maintain the required performance. Power plants do add or change catalyst layers during temporary shutdowns, but it is expensive. Moreover, many companies do not have only one power plant; instead they may have a fleet of coal-fired power plants. A fleet of power plants can use EPA cap and trade programs to keep outlet NOx emissions below the allowances for the fleet. For that reason, the main aim of this research is to develop an SCR management mathematical optimization method that, given a set of scheduled outages for a fleet of power plants, minimizes the total cost of the entire fleet while also keeping outlet NOx below the desired target for the entire fleet. We use a multicommodity network flow problem (MCFP) that creates edges representing all the SCR catalyst layers for each plant. This MCFP is relaxed because it does not consider the average daily NOx constraint, and it is solved by a binary integer program. After that, we add the average daily NOx constraint to the model with a schedule elimination constraint (MCFPwSEC). The MCFPwSEC eliminates, one by one, the solutions that do not satisfy the average daily
A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models
Giunta, Anthony A.; Watson, Layne T.
1998-01-01
Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
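The contrast between a least-squares quadratic and an interpolating surrogate can be reproduced in a few lines. The sketch below substitutes piecewise-linear interpolation for kriging purely for brevity, on an assumed 1-D multimodal test function (not the paper's test problems):

```python
import numpy as np

f = lambda x: np.sin(3 * x) + 0.5 * x          # response with multiple local extrema
x_train = np.linspace(0, 4, 9)
y_train = f(x_train)

# Quadratic polynomial by least squares: low flexibility, as discussed above
coeffs = np.polyfit(x_train, y_train, deg=2)

x_test = np.linspace(0, 4, 101)
y_poly = np.polyval(coeffs, x_test)
# Interpolating surrogate (a stand-in for kriging, not the paper's exact model)
y_interp = np.interp(x_test, x_train, y_train)

err_poly = np.max(np.abs(y_poly - f(x_test)))
err_interp = np.max(np.abs(y_interp - f(x_test)))
print(err_poly > err_interp)  # the interpolant tracks the extrema better here
```

Kriging adds a statistical correlation model on top of interpolation, which is where the paper's extra cost and flexibility come from.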
Directory of Open Access Journals (Sweden)
Lauren Ehrlichman
2017-03-01
Background: While various radiographic parameters and the application of manual/gravity stress have been proposed to elucidate instability in Weber B fibula fractures, the prognostic capability of these modalities remains unclear. Determination of anatomic positioning of the mortise is paramount. We propose a novel view, the Gravity Reduction View (GRV), which helps elucidate non-anatomic positioning and reducibility of the mortise. Methods: The patient is positioned in lateral decubitus with the injured leg elevated on a holder and the fibula directed superiorly. The x-ray cassette is placed posterior to the heel, with the beam angled at 15˚ of internal rotation to obtain a mortise view. Our proposed treatment algorithm is based upon the measurement of the medial clear space (MCS) on the GRV versus the static mortise view (in comparison to the superior clear space (SCS)) and on reducibility of the MCS. A retrospective review of patients evaluated using the GRV was performed. Results: 26 patients with Weber B fibula fractures were managed according to this treatment algorithm. Mean age was 50.57 years (range: 18-81, SD = 19). 17 patients underwent operative treatment and 9 patients were initially treated nonoperatively; 2 patients demonstrated late displacement and were treated surgically. At a mean follow-up of 26 weeks, all patients had a final MCS that was less than the SCS (final mean MCS 2.86 mm vs. mean SCS 3.32 mm), indicating effectiveness of the treatment algorithm. Conclusion: The GRV is a novel radiographic view in which deltoid competency, reducibility, and initial positioning of the mortise are assessed by comparing a static mortise view with the appearance of the mortise on the GRV. We have developed a treatment algorithm based on the GRV and have found it useful in guiding treatment and successful at achieving anatomic mortise alignment.
Dissipativity preserving model reduction by retention of trajectories of minimal dissipation
Trentelman, Harry L.; Ha Binh Minh, [No Value; Rapisarda, Paolo
2009-01-01
We present a method for model reduction based on ideas from the behavioral theory of dissipative systems, in which the reduced order model is required to reproduce a subset of the set of trajectories of minimal dissipation of the original system. The passivity-preserving model reduction method of An
3D Modeling Techniques for Print and Digital Media
Stephens, Megan Ashley
In developing my thesis, I looked to gain skills using ZBrush to create 3D models, 3D scanning, and 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models from other vertebrates. I successfully learned ZBrush and 3D scanning, and successfully printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for the project, but may be of use for later projects. I was able to 3D print using two different techniques as well.
Norman, M.; Sundvor, I.; Denby, B. R.; Johansson, C.; Gustafsson, M.; Blomqvist, G.; Janhäll, S.
2016-06-01
Road dust emissions in Nordic countries still remain a significant contributor to PM10 concentrations mainly due to the use of studded tyres. A number of measures have been introduced in these countries in order to reduce road dust emissions. These include speed reductions, reductions in studded tyre use, dust binding and road cleaning. Implementation of such measures can be costly and some confidence in the impact of the measures is required to weigh the costs against the benefits. Modelling tools are thus required that can predict the impact of these measures. In this paper the NORTRIP road dust emission model is used to simulate real world abatement measures that have been carried out in Oslo and Stockholm. In Oslo both vehicle speed and studded tyre share reductions occurred over a period from 2004 to 2006 on a major arterial road, RV4. In Stockholm a studded tyre ban on Hornsgatan in 2010 saw a significant reduction in studded tyre share together with a reduction in traffic volume. The model is found to correctly simulate the impact of these measures on the PM10 concentrations when compared to available kerbside measurement data. Importantly meteorology can have a significant impact on the concentrations through both surface and dispersion conditions. The first year after the implementation of the speed reduction on RV4 was much drier than the previous year, resulting in higher mean concentrations than expected. The following year was much wetter with significant rain and snow fall leading to wet or frozen road surfaces for 83% of the four month study period. This significantly reduced the net PM10 concentrations, by 58%, compared to the expected values if meteorological conditions had been similar to the previous years. In the years following the studded tyre ban on Hornsgatan road wear production through studded tyres decreased by 72%, due to a combination of reduced traffic volume and reduced studded tyre share. However, after accounting for exhaust
Segmentation and Dimension Reduction: Exploratory and Model-Based Approaches
J.M. van Rosmalen (Joost)
2009-01-01
textabstractRepresenting the information in a data set in a concise way is an important part of data analysis. A variety of multivariate statistical techniques have been developed for this purpose, such as k-means clustering and principal components analysis. These techniques are often based on the
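Principal components analysis, one of the dimension-reduction techniques named above, can be sketched via the SVD of centred data (synthetic data, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
# 200 samples in 5 dimensions, with almost all variance in two latent directions
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.01 * rng.normal(size=(200, 5))

Xc = X - X.mean(axis=0)                      # centre the data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = (s ** 2) / np.sum(s ** 2)        # variance share per component
Z = Xc @ Vt[:2].T                            # project onto top two components

print(Z.shape)                    # (200, 2): a concise representation
print(explained[:2].sum() > 0.99) # two components capture almost all variance
```

Segmentation methods such as k-means then operate on the low-dimensional scores Z rather than the raw data.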
A finite element parametric modeling technique of aircraft wing structures
Institute of Scientific and Technical Information of China (English)
Tang Jiapeng; Xi Ping; Zhang Baoyuan; Hu Bifu
2013-01-01
A finite element parametric modeling method of aircraft wing structures is proposed in this paper because of the time-consuming characteristics of finite element analysis pre-processing. The main research is positioned during the preliminary design phase of aircraft structures. A knowledge-driven system of fast finite element modeling is built. Based on this method, employing a template parametric technique, knowledge including design methods, rules, and expert experience in the process of modeling is encapsulated and a finite element model is established automatically, which greatly improves the speed, accuracy, and standardization degree of modeling. Skeleton model, geometric mesh model, and finite element model including finite element mesh and property data are established on parametric description and automatic update. The outcomes of the research show that the method resolves a series of problems of parameter association and model update in the process of finite element modeling, which establishes a key technical basis for finite element parametric analysis and optimization design.
Chengliang Dai; Bailey, Christopher
2015-01-01
This paper presents a time-domain based lossless data reduction technique called Log2 Sub-band encoding, which is designed for reducing the size of data recorded on a wireless electroencephalogram (EEG) recorder. A data reduction unit can help to save power in the wireless transceiver and the storage medium, since it allows lower data transmission and read/write rates and thus extends the battery life of the device. Our compression ratio (CR) results show that Log2 Sub-band encoding is comparable and even superior to Huffman coding, a well-known entropy encoding method, whilst requiring minimal hardware resources, and it can also be used to extract features from EEG to achieve seizure detection during the compression process. The power consumption when compressing the EEG data is presented to evaluate the system's overall improvement in power performance, and our results indicate that a noticeable power saving can be achieved with our technique. The possibility of applying this method to other biomedical signals is also noted.
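The specifics of Log2 Sub-band encoding are not reproduced here; the sketch below only illustrates the generic time-domain premise that differencing a correlated signal shrinks the bits required (a synthetic random walk stands in for EEG):

```python
import numpy as np

rng = np.random.default_rng(2)
# Slowly varying integer signal: successive samples are strongly correlated
signal = np.cumsum(rng.integers(-3, 4, size=1000)).astype(np.int64)

def bits_needed(x):
    # Bits to hold each value in sign-magnitude form (one extra sign bit each)
    return int(np.sum(np.ceil(np.log2(np.abs(x) + 1)) + 1))

raw_bits = bits_needed(signal)
delta_bits = bits_needed(np.diff(signal)) + 64  # store the first sample verbatim

print(delta_bits < raw_bits)  # deltas are small, so the encoded stream shrinks
```

A real encoder (Huffman, or the paper's sub-band scheme) adds an entropy-aware code on top of this redundancy, but the size reduction originates in the differencing step.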
A Fourier dimensionality reduction model for big data interferometric imaging
Kartik, S Vijay; Thiran, Jean-Philippe; Wiaux, Yves
2016-01-01
Data dimensionality reduction in radio interferometry can provide critical savings of computational resources for image reconstruction, which is of paramount importance for the scalability of imaging methods to the big data setting of the next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of the compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator and hence its sampling properties are also preserved in light of the compressed sensing theory. We show that this can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensi...
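The reduction described (a weighted, subsampled Fourier transform applied to the dirty image) can be caricatured in one dimension. The mode-selection rule below is a toy choice for illustration, not the paper's singular-vector embedding:

```python
import numpy as np

rng = np.random.default_rng(3)
dirty_image = rng.normal(size=256)      # stand-in for the gridded dirty image

# Fourier-transform the image and keep a small subset of coefficients as the
# reduced data vector (here: the 32 largest-magnitude modes; a toy criterion).
coeffs = np.fft.rfft(dirty_image)
keep = np.argsort(np.abs(coeffs))[-32:]
reduced = coeffs[keep]

print(reduced.size, dirty_image.size)   # 32 coefficients instead of 256 samples
```

In the paper the subset and weights come from the measurement operator's left singular vectors, which is what preserves its null space and sampling properties.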
SVM and ANN Based Classification of Plant Diseases Using Feature Reduction Technique
Directory of Open Access Journals (Sweden)
Jagadeesh D. Pujari
2016-06-01
Computers have been used for mechanization and automation in different applications of agriculture/horticulture. Critical decisions on agricultural yield and plant protection are supported by expert systems (decision support systems) developed using computer vision techniques. One of the areas considered in the present work is the processing of images of plant diseases affecting agriculture/horticulture crops. The first symptoms of plant disease have to be correctly detected, identified, and quantified in the initial stages. Color and texture features have been used to work with the sample images of plant diseases. Algorithms for extraction of color and texture features have been developed, which are in turn used to train support vector machine (SVM) and artificial neural network (ANN) classifiers. The study presents a reduced-feature-set-based approach for recognition and classification of images of plant diseases. The results reveal that the SVM classifier is more suitable for identification and classification of plant diseases affecting agriculture/horticulture crops.
Fault current reduction by SFCL in a distribution system with PV using fuzzy logic technique
Mounika, M.; Lingareddy, P.
2017-07-01
In the modern power system, electric power is used so widely that faults and disturbances occur frequently, causing high short-circuit currents. These high fault currents produce large mechanical forces and overheat equipment. If large equipment is used in the power system, it requires an extensive protection scheme against severe fault conditions. Maintaining electrical power system reliability is important, but faults cannot be eliminated entirely, so the practical alternative is to minimize fault currents. For this purpose, the superconducting fault current limiter (SFCL) with a fuzzy logic technique is well suited to reducing severe fault current levels. In this paper, we simulate unsymmetrical and symmetrical faults with a fuzzy-based superconducting fault current limiter. Our analysis shows that the fuzzy-logic-based SFCL quickly reduces the fault current to a lower value.
Instanton-based techniques for analysis and reduction of error floor of LDPC codes
Energy Technology Data Exchange (ETDEWEB)
Chertkov, Michael [Los Alamos National Laboratory; Chilappagari, Shashi K [Los Alamos National Laboratory; Stepanov, Mikhail G [Los Alamos National Laboratory; Vasic, Bane [SENIOR MEMBER, IEEE
2008-01-01
We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and variety of sub-optimal decoders. Specifically, we consider: iterative belief propagation (BP) decoders, Gallager type decoders, and linear programming (LP) decoders performing over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instanton of the same code but different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to the decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.
Successful Gastric Volvulus Reduction and Gastropexy Using a Dual Endoscope Technique
Directory of Open Access Journals (Sweden)
Laith H. Jamil
2014-01-01
Gastric volvulus is a life-threatening condition characterized by an abnormal rotation of the stomach around an axis. Although the first-line treatment of this disorder is surgical, we report here a case of gastric volvulus that was endoscopically managed using a novel strategy. An 83-year-old female with a history of pancreatic cancer, status post pylorus-preserving Whipple procedure, presented with a cecal volvulus requiring right hemicolectomy. Postoperative imaging included a CT scan and upper GI series that showed a gastric volvulus with the antrum located above the diaphragm. An upper endoscope was advanced through the pylorus into the duodenum and left in this position to keep the stomach under the diaphragm. A second pediatric endoscope was advanced alongside and used to complete percutaneous endoscopic gastrostomy (PEG) placement for anterior gastropexy. The patient's volvulus resolved and there were no complications. From our review of the literature, the dual endoscope technique employed here has not been previously described. Patients who are poor surgical candidates or those who do not require emergent surgery can possibly benefit the most from similar minimally invasive endoscopic procedures as described here.
Manufacturing Enhancement through Reduction of Cycle Time using Different Lean Techniques
Suganthini Rekha, R.; Periyasamy, P.; Nallusamy, S.
2017-08-01
In recent manufacturing systems, the most important parameters in a production line are work in process, takt time, and line balancing. In this article, lean tools and techniques were implemented to reduce the cycle time. The aim is to enhance the productivity of the water pump pipe by identifying the bottleneck stations and non-value-added activities. From the initial time study the bottleneck processes were identified, and the necessary expanding processes were identified for each bottleneck process. Subsequently, improvement actions were established and implemented using different lean tools such as value stream mapping, 5S, and line balancing. The current-state value stream map was developed to describe the existing status and to identify the various problem areas. 5S was used to reduce the process cycle time and unnecessary movements of man and material. The improvement activities were implemented with the required suggestions, and the future-state value stream map was developed. From the results it was concluded that the total cycle time was reduced by about 290.41 seconds and the capacity to meet customer demand increased by about 760 units.
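Takt time, one of the parameters named above, follows from a standard lean formula. The numbers below are illustrative, not the study's data:

```python
# Takt time = available production time / customer demand.
available_seconds = 8 * 60 * 60 - 2 * 15 * 60  # one shift minus two 15-min breaks
daily_demand = 450                             # units per day (hypothetical)

takt = available_seconds / daily_demand        # seconds available per unit
print(round(takt, 1))  # 60.0
```

Stations whose measured cycle time exceeds this takt time are the bottlenecks that the value stream mapping and 5S work target.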
A Method to Test Model Calibration Techniques: Preprint
Energy Technology Data Exchange (ETDEWEB)
Judkoff, Ron; Polly, Ben; Neymark, Joel
2016-09-01
This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
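The surrogate-data test can be sketched end-to-end with a deliberately trivial "building model". Everything below (the linear temperature-response model, its parameters) is hypothetical and does not come from BPI-2400 or ASHRAE-14:

```python
import numpy as np

rng = np.random.default_rng(4)
temps = np.array([2., 5., 10., 15., 20., 25.])   # monthly mean temperatures
true_base, true_slope = 300.0, -8.0              # "true" building parameters

# Step 1: the simulator generates surrogate utility-bill data with noise
bills = true_base + true_slope * temps + rng.normal(0, 5, temps.size)

# Step 2: the "calibration technique" under test fits the surrogate bills
slope, base = np.polyfit(temps, bills, 1)

# Step 3: compare known vs predicted retrofit savings (figure of merit 1);
# the hypothetical retrofit halves the temperature response.
true_savings = np.sum((true_slope - true_slope * 0.5) * temps)
pred_savings = np.sum((slope - slope * 0.5) * temps)
print(abs(pred_savings - true_savings) / abs(true_savings) < 0.2)
```

Because the "true" parameters are known by construction, closure on them and goodness of fit (the other two figures of merit) can be checked the same way.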
Dietary Impact of Adding Potassium Chloride to Foods as a Sodium Reduction Technique
Directory of Open Access Journals (Sweden)
Leo van Buren
2016-04-01
Potassium chloride is a leading reformulation technology for reducing sodium in food products. As, globally, sodium intake exceeds guidelines, this technology is beneficial; however, its potential impact on potassium intake is unknown. Therefore, a modeling study was conducted using Dutch National Food Survey data to examine the dietary impact of reformulation (n = 2106). Product-specific sodium criteria, to enable a maximum daily sodium chloride intake of 5 grams/day, were applied to all foods consumed in the survey. The impact of replacing 20%, 50% and 100% of the sodium chloride in each product with potassium chloride was modeled. At baseline, median potassium intake was 3334 mg/day. An increase in the median intake of potassium of 453 mg/day was seen when a 20% replacement was applied, 674 mg/day with a 50% replacement scenario and 733 mg/day with a 100% replacement scenario. Reformulation had the largest impact on bread, processed fruit and vegetables, snacks and processed meat. Replacement of sodium chloride by potassium chloride, particularly in key contributing product groups, would result in better compliance with potassium intake guidelines (3510 mg/day). Moreover, it could be considered safe for the general adult population, as intake remains compliant with EFSA guidelines. Based on the current modeling, potassium chloride presents as a valuable, safe replacer for sodium chloride in food products.
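The replacement arithmetic can be checked on the back of an envelope: swapping NaCl for KCl mole-for-mole delivers potassium in proportion to the molar masses. The sketch below is an upper bound that assumes the entire 5 g/day salt intake is reformulated, which is why it exceeds the survey-based 674 mg/day figure for the 50% scenario:

```python
# Molar masses (g/mol) from standard tables
M_NA, M_K, M_CL = 22.99, 39.10, 35.45

def potassium_from_replacement(nacl_grams, fraction):
    """Potassium (mg) delivered when `fraction` of NaCl is swapped
    mole-for-mole for KCl."""
    moles_replaced = fraction * nacl_grams / (M_NA + M_CL)
    return moles_replaced * M_K * 1000.0

# Upper bound for replacing 50% of a 5 g/day salt intake:
print(round(potassium_from_replacement(5.0, 0.5)))  # extra mg of potassium per day
```

The survey-based figure is lower because only part of dietary sodium sits in reformulatable products, which is exactly what the food-survey modeling captures.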
An Empirical Study of Smoothing Techniques for Language Modeling
Chen, S F; Chen, Stanley F.; Goodman, Joshua T.
1996-01-01
We present an extensive empirical comparison of several smoothing techniques in the domain of language modeling, including those described by Jelinek and Mercer (1980), Katz (1987), and Church and Gale (1991). We investigate for the first time how factors such as training data size, corpus (e.g., Brown versus Wall Street Journal), and n-gram order (bigram versus trigram) affect the relative performance of these methods, which we measure through the cross-entropy of test data. In addition, we introduce two novel smoothing techniques, one a variation of Jelinek-Mercer smoothing and one a very simple linear interpolation technique, both of which outperform existing methods.
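A minimal sketch of the interpolation idea behind Jelinek-Mercer smoothing, one of the techniques compared in the paper. The corpus and the fixed interpolation weight are toy values; real implementations tune the weights on held-out data:

```python
from collections import Counter

# Toy corpus; the actual experiments used corpora such as Brown and WSJ.
corpus = "the cat sat on the mat the cat ate".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
N = len(corpus)
lam = 0.7  # weight on the bigram MLE (tuned on held-out data in practice)

def p_interp(word, prev):
    """Jelinek-Mercer: interpolate the bigram MLE with the unigram distribution."""
    p_uni = unigrams[word] / N
    p_bi = bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0
    return lam * p_bi + (1.0 - lam) * p_uni

vocab = set(corpus)
total = sum(p_interp(w, "the") for w in vocab)  # should sum to 1 over the vocab
```

The interpolation keeps unseen bigrams from receiving zero probability while the conditional distribution still sums to one.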
Esch, Tobias; Duckstein, Jorg; Welke, Justus; Braun, Vittoria
2007-11-01
Stress can affect health. There is a growing need for the evaluation and application of professional stress management options, i.e., stress reduction. Mind/body medicine serves this goal, e.g., by integrating self-care techniques into medicine and health care. Tai Chi (TC) can be classified as such a mind/body technique, potentially reducing stress and affecting physical as well as mental health parameters, which, however, has to be examined further. We conducted a prospective, longitudinal pilot study over 18 weeks for the evaluation of subjective and objective clinical effects of a Yang-style TC intervention in young adults (beginners) by measuring physiological (blood pressure, heart rate, saliva cortisol) and psychological (SF-36, perceived stress, significant events) parameters, i.e., direct or indirect indicators of stress and stress reduction, in a non-randomised, non-controlled, yet non-selected cohort (n = 21) by pre-to-post comparison and in follow-up. SF-36 values were also compared with the age-adjusted norm population, serving as an external control. Additionally, we measured diurnal cortisol profiles in a cross-sectional sub-study (n = 2+2, pre-to-post), providing an internal random control sub-sample. Only nine participants completed all measurements. Even so, we found significant (p < 0.05) stress reduction. A significant decrease in perceived mental stress (post) proved even highly significant (p < 0.01), while physical stress perception declined to a much lesser degree. Significant improvements were also detected for the SF-36 dimensions general health perception, social functioning, vitality, and mental health/psychological well-being. Thus, the summarized mental health measures all clearly improved, pointing towards a predominantly psychological impact of TC. Subjective health increased and stress decreased (objectively and subjectively) during TC practice. Future studies should confirm this observation by rigorous methodology and by further combining physical and psychological measurements.
Micro-bubble Drag Reduction on a High Speed Vessel Model
Institute of Scientific and Technical Information of China (English)
Yanuar; Gunawan; Sunaryo; A. Jamaluddin
2012-01-01
Ship hull form of the underwater area strongly influences the resistance of the ship. The major factor in ship resistance is skin friction resistance. Bulbous bows, polymer paint, water-repellent paint (highly water-repellent walls), air injection, and specific roughness have been used by researchers in attempts to obtain resistance reduction and operation efficiency of ships. Micro-bubble injection is a promising technique for lowering frictional resistance. The injected air bubbles are supposed to somehow modify the energy inside the turbulent boundary layer and thereby lower the skin friction. The purpose of this study was to identify the effect of injected micro-bubbles on a navy fast patrol boat (FPB) 57 m type model with the following main dimensions: L = 2 450 mm, B = 400 mm, and T = 190 mm. The influence of the location of micro-bubble injection and bubble velocity was also investigated. The ship model was pulled by an electric motor whose speed could be varied and adjusted. The ship model resistance was precisely measured by a load cell transducer. Comparison of ship resistance with and without micro-bubble injection was shown on a graph as a function of the drag coefficient and Froude number. It was shown that micro-bubble injection behind the mid-ship is the best location to achieve the most effective drag reduction, and the drag reduction caused by the micro-bubbles can reach 6%-9%.
Optimization using surrogate models - by the space mapping technique
DEFF Research Database (Denmark)
Søndergaard, Jacob
2003-01-01
Approximation abilities of the space mapping surrogate are compared with those of a Taylor model of the expensive model. The space mapping surrogate has a lower approximation error for long steps. For short steps, however, the Taylor model of the expensive model is best, due to exact interpolation at the model origin. Five algorithms for space mapping optimization are presented and the numerical performance is evaluated. Three … conditions are satisfied. So hybrid methods, combining the space mapping technique with classical optimization methods, should be used if convergence to high accuracy is wanted.
Noise Reduction Technique for Images using Radial Basis Function Neural Networks
Directory of Open Access Journals (Sweden)
Sander Ali Khowaja
2014-07-01
This paper presents an NN (Neural Network)-based model for reducing the noise from images. This is an RBF (Radial Basis Function) network which is used to reduce the effect of noise and blurring from the captured images. The proposed network calculates the MSE (Mean Square Error) and PSNR (Peak Signal-to-Noise Ratio) of the noisy images. The proposed network has also been successfully applied to medical images. The performance of the trained RBF network has been compared with that of an MLP (Multilayer Perceptron) network, and it has been demonstrated that the performance of the RBF network is better than that of the MLP network.
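The two quality metrics the abstract reports can be sketched directly for an 8-bit image; the test arrays below are invented for illustration:

```python
import numpy as np

def mse(clean, noisy):
    """Mean squared error between two images."""
    return float(np.mean((clean.astype(float) - noisy.astype(float)) ** 2))

def psnr(clean, noisy, max_val=255.0):
    """Peak signal-to-noise ratio in dB for a max_val-scaled image."""
    m = mse(clean, noisy)
    return float("inf") if m == 0 else 10.0 * np.log10(max_val ** 2 / m)

clean = np.full((8, 8), 128, dtype=np.uint8)
noisy = clean.copy()
noisy[::2, ::2] = 138   # a fixed +10 error on one quarter of the pixels
```

With 16 of 64 pixels off by 10, the MSE is 25 and the PSNR about 34.2 dB; a denoiser is judged by how much it raises the PSNR of its output against the clean reference.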
Health Gain by Salt Reduction in Europe: A Modelling Study
Hendriksen, M.A.H.; Raaij, van J.M.A.; Geleijnse, J.M.; Breda, J.; Boshuizen, H.C.
2015-01-01
Excessive salt intake is associated with hypertension and cardiovascular diseases. Salt intake exceeds the World Health Organization population nutrition goal of 5 grams per day in the European region. We assessed the health impact of salt reduction in nine European countries (Finland, France, Ireland, Italy, Netherlands, Poland, Spain, Sweden and United Kingdom).
Optimal Hankel Norm Model Reduction by Truncation of Trajectories
Roorda, B.; Weiland, S.
2000-01-01
We show how optimal Hankel-norm approximations of dynamical systems allow for a straightforward interpretation in terms of system trajectories. It is shown that for discrete time single-input systems optimal reductions are obtained by cutting 'balanced trajectories', i.e., by disconnecting the past
Order reduction and efficient implementation of nonlinear nonlocal cochlear response models.
Filo, Maurice; Karameh, Fadi; Awad, Mariette
2016-12-01
The cochlea is an indispensable preliminary processing stage in auditory perception that employs mechanical frequency-tuning and electrical transduction of incoming sound waves. Cochlear mechanical responses are shown to exhibit active nonlinear spatiotemporal response dynamics (e.g., otoacoustic emission). To model such phenomena, it is often necessary to incorporate cochlear fluid-membrane interactions. This results in both excessively high-order model formulations and computationally intensive solutions that limit their practical use in simulating the model and analyzing its response even for simple single-tone inputs. In order to address these limitations, the current work employs a control-theoretic framework to reformulate a nonlinear two-dimensional cochlear model into discrete state space models that are of considerably lower order (factor of 8) and are computationally much simpler (factor of 25). It is shown that the reformulated models enjoy sparse matrix structures which permit efficient numerical manipulations. Furthermore, the spatially discretized models are linearized and simplified using balanced transformation techniques to result in lower-order (nonlinear) realizations derived from the dominant Hankel singular values of the system dynamics. Accuracy and efficiency of the reduced-order reformulations are demonstrated under the response to two fixed tones, sweeping tones and, more generally, a brief speech signal. The corresponding responses are compared to those produced by the original model in both frequency and spatiotemporal domains. Although carried out on a specific instance of cochlear models, the introduced framework of control-theoretic model reduction could be applied to a wide class of models that address the micro- and macro-mechanical properties of the cochlea.
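The balanced-transformation step mentioned in the abstract rests on Hankel singular values. A small self-contained sketch (with an arbitrary stable system, not the cochlear model) computes them from the controllability and observability Gramians, using a Kronecker-product Lyapunov solve suitable for small systems:

```python
import numpy as np

def lyap(A, Q):
    """Solve A X + X A^T + Q = 0 by Kronecker vectorization (small systems only)."""
    n = A.shape[0]
    K = np.kron(A, np.eye(n)) + np.kron(np.eye(n), A)
    return np.linalg.solve(K, -Q.flatten()).reshape(n, n)

# Arbitrary stable example system (all eigenvalues of A in the left half-plane).
A = np.array([[-1.0, 0.5, 0.0],
              [0.0, -2.0, 1.0],
              [0.0, 0.0, -5.0]])
B = np.array([[1.0], [0.5], [0.2]])
C = np.array([[1.0, 0.0, 1.0]])

P = lyap(A, B @ B.T)      # controllability Gramian
Q = lyap(A.T, C.T @ C)    # observability Gramian

# Hankel singular values: square roots of the eigenvalues of P*Q, descending.
hsv = np.sort(np.sqrt(np.abs(np.linalg.eigvals(P @ Q))))[::-1]
```

States corresponding to small Hankel singular values contribute little to the input-output map, which is what licenses the low-order realizations described in the abstract.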
Balakrishnan, Suhrid; Roy, Amit; Ierapetritou, Marianthi G.; Flach, Gregory P.; Georgopoulos, Panos G.
2005-06-01
This work presents a comparative assessment of efficient uncertainty modeling techniques, including Stochastic Response Surface Method (SRSM) and High Dimensional Model Representation (HDMR). This assessment considers improvement achieved with respect to conventional techniques of modeling uncertainty (Monte Carlo). Given that traditional methods for characterizing uncertainty are very computationally demanding, when they are applied in conjunction with complex environmental fate and transport models, this study aims to assess how accurately these efficient (and hence viable) techniques for uncertainty propagation can capture complex model output uncertainty. As a part of this effort, the efficacy of HDMR, which has primarily been used in the past as a model reduction tool, is also demonstrated for uncertainty analysis. The application chosen to highlight the accuracy of these new techniques is the steady state analysis of the groundwater flow in the Savannah River Site General Separations Area (GSA) using the subsurface Flow And Contaminant Transport (FACT) code. Uncertain inputs included three-dimensional hydraulic conductivity fields, and a two-dimensional recharge rate field. The output variables under consideration were the simulated stream baseflows and hydraulic head values. Results show that the uncertainty analysis outcomes obtained using SRSM and HDMR are practically indistinguishable from those obtained using the conventional Monte Carlo method, while requiring orders of magnitude fewer model simulations.
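The contrast between brute-force Monte Carlo and a cheap surrogate can be illustrated on a toy one-dimensional problem. The model function below is an assumed stand-in, not the FACT groundwater code, and the plain polynomial fit is only loosely analogous to SRSM:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    """Stand-in for an expensive simulator."""
    return np.exp(0.3 * x) + 0.1 * x**2

# Conventional Monte Carlo: many expensive model runs.
x_mc = rng.normal(0.0, 1.0, 20000)
mc_mean = model(x_mc).mean()

# Surrogate: fit a cubic to just nine model runs, then sample the cheap polynomial.
x_train = np.linspace(-3.0, 3.0, 9)
coeffs = np.polyfit(x_train, model(x_train), deg=3)
surrogate_mean = np.polyval(coeffs, x_mc).mean()
```

The surrogate needs orders of magnitude fewer expensive evaluations yet reproduces the output statistic closely, mirroring the paper's finding that SRSM and HDMR results are practically indistinguishable from Monte Carlo.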
Adiabatic reduction of a model of stochastic gene expression with jump Markov process.
Yvinec, Romain; Zhuge, Changjing; Lei, Jinzhi; Mackey, Michael C
2014-04-01
This paper considers adiabatic reduction in a model of stochastic gene expression with bursting transcription considered as a jump Markov process. In this model, the process of gene expression with auto-regulation is described by fast/slow dynamics. The production of mRNA is assumed to follow a compound Poisson process occurring at a rate depending on protein levels (the phenomena called bursting in molecular biology) and the production of protein is a linear function of mRNA numbers. When the dynamics of mRNA is assumed to be a fast process (due to faster mRNA degradation than that of protein) we prove that, with appropriate scalings in the burst rate, jump size or translational rate, the bursting phenomena can be transmitted to the slow variable. We show that, depending on the scaling, the reduced equation is either a stochastic differential equation with a jump Poisson process or a deterministic ordinary differential equation. These results are significant because adiabatic reduction techniques seem to have not been rigorously justified for a stochastic differential system containing a jump Markov process. We expect that the results can be generalized to adiabatic methods in more general stochastic hybrid systems.
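A hedged simulation sketch of the reduced bursting description: protein arrives in compound-Poisson bursts (geometric sizes here) and decays exponentially in between. All rates are invented; the stationary mean should sit near burst_rate × mean_burst / decay:

```python
import numpy as np

rng = np.random.default_rng(42)
burst_rate, mean_burst, decay = 1.0, 5.0, 0.1   # invented rates
T = 5000.0
t, p, area = 0.0, 0.0, 0.0

while t < T:
    dt = rng.exponential(1.0 / burst_rate)           # waiting time to next burst
    area += p * (1.0 - np.exp(-decay * dt)) / decay  # exact integral over the decay segment
    p *= np.exp(-decay * dt)                         # deterministic decay between bursts
    p += rng.geometric(1.0 / mean_burst)             # burst size: geometric, mean 5
    t += dt

time_avg = area / t   # time-averaged protein level, expected near 50 here
```

This is the kind of jump-Markov limit the adiabatic reduction produces when mRNA is treated as the fast variable; the deterministic-ODE limit would instead replace the jumps by their mean influx.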
Drag reduction of a car model by linear genetic programming control
Li, Ruiying; Noack, Bernd R.; Cordier, Laurent; Borée, Jacques; Harambat, Fabien
2017-08-01
We investigate open- and closed-loop active control for aerodynamic drag reduction of a car model. Turbulent flow around a blunt-edged Ahmed body is examined at Re_H ≈ 3×10^5 based on body height. The actuation is performed with pulsed jets at all trailing edges (multiple inputs) combined with a Coanda deflection surface. The flow is monitored with 16 pressure sensors distributed at the rear side (multiple outputs). We apply a recently developed model-free control strategy building on genetic programming in Dracopoulos and Kent (Neural Comput Appl 6:214-228, 1997) and Gautier et al. (J Fluid Mech 770:424-441, 2015). The optimized control laws comprise periodic forcing, multi-frequency forcing and sensor-based feedback, including also time-history information feedback and combinations thereof. The key enabler is linear genetic programming (LGP) as a powerful regression technique for optimizing the multiple-input multiple-output control laws. The proposed LGP control can select the best open- or closed-loop control in an unsupervised manner. Approximately 33% base pressure recovery associated with 22% drag reduction is achieved in all considered classes of control laws. Intriguingly, the feedback actuation emulates periodic high-frequency forcing. In addition, the control automatically identified the only sensor which listens to high-frequency flow components with a good signal-to-noise ratio. Our control strategy is, in principle, applicable to all experiments with multiple actuators and sensors.
Choi, Sung Eun; Brandeau, Margaret L; Basu, Sanjay
2016-01-01
The National Salt Reduction Initiative, in which food producers agree to lower sodium to levels deemed feasible for different foods, is expected to significantly reduce sodium intake if expanded to a large sector of food manufacturers. Given recent data on the relationship between sodium intake, hypertension, and associated cardiovascular disease at a population level, we sought to examine risks and benefits of the program. To estimate the impact of further expanding the initiative on hypertension, myocardial infarction (MI) and stroke incidence, and related mortality, given food consumption patterns across the United States, we developed and validated a stochastic microsimulation model of hypertension, MI, and stroke morbidity and mortality, using data from food producers on sodium reduction among foods, linked to 24-hour dietary recalls, blood pressure, and cardiovascular histories from the National Health and Nutrition Examination Survey. Expansion of the initiative to ensure all restaurants and manufacturers reach agreed-upon sodium targets would be expected to avert from 0.9 to 3.0 MIs (a 1.6%-5.4% reduction) and 0.5 to 2.8 strokes (a 1.1%-6.2% reduction) per 10,000 Americans per year over the next decade, after incorporating consumption patterns and variations in the effect of sodium reduction on blood pressure among different demographic groups. Even high levels of consumer addition of table salt or substitution among food categories would be unlikely to neutralize this benefit. However, if recent epidemiological associations between very low sodium and increased mortality are causal, then older women may be at risk of increased mortality from excessively low sodium intake. An expanded National Salt Reduction Initiative is likely to significantly reduce hypertension and hypertension-related cardiovascular morbidity but may be accompanied by potential risks to older women. © The Author(s) 2015.
Automatic Black-Box Model Order Reduction using Radial Basis Functions
Energy Technology Data Exchange (ETDEWEB)
Stephanson, M B; Lee, J F; White, D A
2011-07-15
Finite element methods have long made use of model order reduction (MOR), particularly in the context of fast frequency sweeps. In this paper, we discuss a black-box MOR technique, applicable to many solution methods and not restricted only to spectral responses. We also discuss automated methods for generating a reduced order model that meets a given error tolerance. Numerical examples demonstrate the effectiveness and wide applicability of the method. With the advent of improved computing hardware and numerous fast solution techniques, the field of computational electromagnetics has progressed rapidly in terms of the size and complexity of problems that can be solved. Numerous applications, however, require the solution of a problem for many different configurations, including optimization, parameter exploration, and uncertainty quantification, where the parameters that may be changed include frequency, material properties, geometric dimensions, etc. In such cases, thousands of solutions may be needed, so solve times of even a few minutes can be burdensome. Model order reduction (MOR) may alleviate this difficulty by creating a small model that can be evaluated quickly. Many MOR techniques have been applied to electromagnetic problems over the past few decades, particularly in the context of fast frequency sweeps. Recent works have extended these methods to allow more than one parameter and to allow the parameters to represent material and geometric properties. There are still limitations with these methods, however. First, they almost always assume that the finite element method is used to solve the problem, so that the system matrix is a known function of the parameters. Second, although some authors have presented adaptive methods (e.g., [2]), the order of the model is often determined before the MOR process begins, with little insight about what order is actually needed to reach the desired accuracy. Finally, it is not clear how to efficiently extend most…
Jarosch, H. S.
1982-01-01
A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.
Inventory Reduction Using Business Process Reengineering and Simulation Modeling.
1996-12-01
…center is analyzed using simulation modeling and business process reengineering (BPR) concepts. The two simulation models were designed and evaluated by… Business process reengineering and simulation modeling offer powerful tools to aid the manager in reducing cycle time and inventory levels.
Team mental models: techniques, methods, and analytic approaches.
Langan-Fox, J; Code, S; Langfield-Smith, K
2000-01-01
Effective team functioning requires the existence of a shared or team mental model among members of a team. However, the best method for measuring team mental models is unclear. Methods reported vary in terms of how mental model content is elicited and analyzed or represented. We review the strengths and weaknesses of various methods that have been used to elicit, represent, and analyze individual and team mental models and provide recommendations for method selection and development. We describe the nature of mental models and review techniques that have been used to elicit and represent them. We focus on a case study on selecting a method to examine team mental models in industry. The processes involved in the selection and development of an appropriate method for eliciting, representing, and analyzing team mental models are described. The criteria for method selection were (a) applicability to the problem under investigation; (b) practical considerations - suitability for collecting data from the targeted research sample; and (c) theoretical rationale - the assumption that associative networks in memory are a basis for the development of mental models. We provide an evaluation of the method matched to the research problem and make recommendations for future research. The practical applications of this research include the provision of a technique for analyzing team mental models in organizations, the development of methods and processes for eliciting a mental model from research participants in their normal work environment, and a survey of available methodologies for mental model research.
Stanko, Z.; Boyce, S. E.; Yeh, W. W. G.
2015-12-01
Model reduction techniques using proper orthogonal decomposition (POD) have been very effective in applications to confined groundwater flow models. These techniques consist of performing a projection of the solution of the full model onto a reduced basis. POD combined with the snapshot approach has been successfully applied to highly discretized linear models. In many cases, the reduced model is orders of magnitude smaller than the full model and runs 1,000 times faster. For nonlinear models, such as the unconfined groundwater flow, direct application of POD requires additional calls to the full model to generate additional snapshots. This is time consuming and increases the dimension of the reduced model. The discrete empirical interpolation method (DEIM) is a technique that avoids the additional full model calls and captures the dynamics of the nonlinear term while reducing the dimensions. Here, POD and DEIM are combined to reduce both the nonlinear unconfined groundwater flow and solute transport equations. To prove the concept, simple one-dimensional models are created for MODFLOW and MT3DMS separately. The dual approach is then tested on a density-dependent flow and transport simulation using the LMT package developed for MODFLOW. For each iteration of the nonlinear flow solver and the transport solver, the respective reduced models are solved instead. Numerical experiments show that significant reduction is obtainable before errors become too large. This method is well suited for a coastal aquifer seawater intrusion scenario, where nonlinearities only exist in small subregions of the model domain. A fine discretization can be utilized and POD will effectively eliminate unnecessary parameterization by projecting the full model system matrix onto a subspace with fewer column dimensions. DEIM can then reduce the row dimension of the original system by using only those state variable nodes with the most influence. This combined approach allows for full
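The POD-by-snapshots procedure described above can be sketched in a few lines: simulate a toy 1-D diffusion problem, stack the snapshots, take an SVD, and check how well a two-mode basis reconstructs the snapshot set. The grid and step sizes are invented, and the toy solve stands in for MODFLOW/MT3DMS:

```python
import numpy as np

n, dt, steps = 100, 1e-5, 200
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
u = np.sin(np.pi * x) + 0.3 * np.sin(3 * np.pi * x)   # initial condition

snapshots = []
for _ in range(steps):
    # Explicit finite-difference step of the 1-D heat equation.
    u = u + dt * (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2
    u[0] = u[-1] = 0.0                                # Dirichlet boundaries
    snapshots.append(u.copy())

S = np.column_stack(snapshots)        # n x steps snapshot matrix
U, s, _ = np.linalg.svd(S, full_matrices=False)
basis = U[:, :2]                      # keep two POD modes
S_hat = basis @ (basis.T @ S)         # project snapshots onto the reduced basis
rel_err = np.linalg.norm(S - S_hat) / np.linalg.norm(S)
```

Because this toy dynamics is spanned by two sine modes, two POD vectors reconstruct the snapshots almost exactly; in a real nonlinear groundwater model DEIM would additionally be needed to evaluate the nonlinear term cheaply.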
Gosses, Moritz; Moore, Catherine; Wöhling, Thomas
2016-04-01
The complexity of many groundwater-surface water models often results in long model run times even on today's computer systems. This becomes even more problematic in combination with the necessity of (many) repeated model runs for parameter estimation and later model purposes like predictive uncertainty analysis or monitoring network optimization. Model complexity reduction is a promising approach to reduce the computational effort of physically-based models. Its impact on the conservation of uncertainty as determined by the (more) complex model is not well known, though. A potential under-estimation of predictive uncertainty has, however, a significant impact on model applications such as model-based monitoring network optimization. Can we use model reduction techniques to significantly reduce run times of highly complex groundwater models and yet estimate accurate uncertainty levels? Our planned research project hopes to assess this question and apply model reduction to non-linear groundwater systems. Several encouraging model simplification methods have been developed in recent years. To analyze their respective performance, we will choose three different model reduction methods and apply them to both synthetic and real-world test cases to benchmark their computational efficiency and prediction accuracy. The three methods for benchmarking will be proper orthogonal decomposition (POD) (following Siade et al. 2010), the eigenmodel method (Sahuquillo et al. 1983) and inversion-based upscaling (Doherty and Christensen, 2011). In a further step, efficient model reduction methods for application to non-linear groundwater-surface water systems will be developed and applied to monitoring network optimization. In a first step we present here one variant of the implementation and benchmarking of the POD method. POD reduces model complexity by working in a subspace of the model matrices resulting from spatial discretization with the same significant eigenvalue spectrum
Health Gain by Salt Reduction in Europe: A Modelling Study
Hendriksen, Marieke A. H.; van Raaij, Joop M.A.; Geleijnse, Johanna M.; Joao Breda; Boshuizen, Hendriek C.
2015-01-01
Excessive salt intake is associated with hypertension and cardiovascular diseases. Salt intake exceeds the World Health Organization population nutrition goal of 5 grams per day in the European region. We assessed the health impact of salt reduction in nine European countries (Finland, France, Ireland, Italy, Netherlands, Poland, Spain, Sweden and United Kingdom). Through literature research we obtained current salt intake and systolic blood pressure levels of the nine countries. The populati...
Navier slip model of drag reduction by Leidenfrost vapour layers
Berry, Joseph D; Vakarelski, Ivan U.; Chan, Derek Y. C.; Thoroddsen, Sigurdur T.
2016-01-01
Recent experiments found that a hot solid sphere that is able to sustain a stable Leidenfrost vapour layer in a liquid exhibits significant drag reduction during free fall. The variation of the drag coefficient with Reynolds number shows substantial deviation from the characteristic drag crisis behavior at high Reynolds numbers. Results obtained with liquids of different viscosities show that the onset of the drag crisis depends on the viscosity ratio of the vapour to the liquid. The key feature o...
Reduction of large-scale numerical ground water flow models
Vermeulen, P.T.M.; Heemink, A.W.; Testroet, C.B.M.
2002-01-01
Numerical models are often used for simulating ground water flow. Written in state space form, the dimension of these models is of the order of the number of model cells and can be very high (> million). As a result, these models are computationally very demanding, especially if many different scena
Selection of productivity improvement techniques via mathematical modeling
Directory of Open Access Journals (Sweden)
Mahassan M. Khater
2011-07-01
This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers four-stage cycle productivity, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through a change in equipment, and the model can be easily applied to both manufacturing and service industries.
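The selection problem can be illustrated by brute force on a tiny invented instance; the gains, costs, and budget are hypothetical, and a real fifty-four-technique model would use a MIP solver rather than enumeration:

```python
from itertools import combinations

techniques = {            # name: (productivity gain, cost) -- invented numbers
    "equipment upgrade": (8.0, 5.0),
    "operator training": (4.0, 2.0),
    "layout redesign":   (6.0, 4.0),
    "preventive maint.": (3.0, 1.0),
}
budget = 7.0

# Enumerate all subsets: pick the one maximizing total (linear) gain
# subject to the budget constraint -- the 0/1 structure of the MIP.
best_gain, best_set = 0.0, ()
names = list(techniques)
for r in range(len(names) + 1):
    for subset in combinations(names, r):
        cost = sum(techniques[t][1] for t in subset)
        gain = sum(techniques[t][0] for t in subset)
        if cost <= budget and gain > best_gain:
            best_gain, best_set = gain, subset
```

Note that the cheapest high-gain single technique is not necessarily in the optimum; the linear-additivity assumption mirrors the paper's model.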
Concerning the Feasibility of Example-driven Modelling Techniques
Thorne, Simon; Ball, David; Lawson, Zoe Frances
2008-01-01
We report on a series of experiments concerning the feasibility of example-driven modelling. The main aim was to establish experimentally, within an academic environment, the relationship between error and task complexity using (a) traditional spreadsheet modelling, and (b) example-driven techniques. We report on the experimental design, sampling, research methods and the tasks set for both control and treatment groups. Analysis of the completed tasks allows comparison of several...
Advanced Phase noise modeling techniques of nonlinear microwave devices
Prigent, M.; J. C. Nallatamby; R. Quere
2004-01-01
In this paper we present a coherent set of tools allowing an accurate and predictive design of low phase noise oscillators. Advanced phase noise modelling techniques in nonlinear microwave devices must be supported by a proven combination of the following: electrical modeling of low-frequency noise of semiconductor devices, oriented to circuit CAD (the local noise sources will be either cyclostationary noise sources or quasistationary noise sources); theoretic...
Modeling and design techniques for RF power amplifiers
Raghavan, Arvind; Laskar, Joy
2008-01-01
The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, and design more accurate compact device models, with faster extraction routines, to create cost effective and reliable circuits.
Validation of Models : Statistical Techniques and Data Availability
Kleijnen, J.P.C.
1999-01-01
This paper shows which statistical techniques can be used to validate simulation models, depending on which real-life data are available. Concerning this availability three situations are distinguished (i) no data, (ii) only output data, and (iii) both input and output data. In case (i) - no real
Techniques and tools for efficiently modeling multiprocessor systems
Carpenter, T.; Yalamanchili, S.
1990-01-01
System-level tools and methodologies associated with an integrated approach to the development of multiprocessor systems are examined. Tools for capturing initial program structure, automated program partitioning, automated resource allocation, and high-level modeling of the combined application and resource are discussed. The primary language focus of the current implementation is Ada, although the techniques should be appropriate for other programming paradigms.
Using of Structural Equation Modeling Techniques in Cognitive Levels Validation
Directory of Open Access Journals (Sweden)
Natalija Curkovic
2012-10-01
When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom's taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive levels validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.
AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS
Energy Technology Data Exchange (ETDEWEB)
Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.
2016-10-01
The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results in a much shorter time (microseconds instead of hours/days).
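The surrogate-model idea described in this abstract can be sketched in a few lines: sample an expensive simulation code at a handful of points, fit a cheap approximation, and evaluate the approximation instead of the code. The function and sampling choices below are purely illustrative assumptions, not the RISMC tools themselves.

```python
import numpy as np

def expensive_simulation(x):
    """Stand-in for a costly physics code (hypothetical toy response)."""
    return np.sin(3 * x) + 0.5 * x

# Run the expensive code at a small number of training points only.
x_train = np.linspace(0.0, 2.0, 15)
y_train = np.array([expensive_simulation(x) for x in x_train])

# Fit a cheap polynomial surrogate to the sampled responses.
coeffs = np.polyfit(x_train, y_train, deg=6)
surrogate = np.poly1d(coeffs)

# Evaluate the surrogate at many query points (microseconds each,
# versus hours/days for a full simulation run).
x_query = np.linspace(0.0, 2.0, 1000)
y_approx = surrogate(x_query)
max_err = np.max(np.abs(y_approx - expensive_simulation(x_query)))
```

In a real RISMC-style workflow the surrogate would be multi-dimensional (e.g. Gaussian processes or response surfaces over many uncertain inputs), but the trade-off is the same: a few expensive runs buy many cheap evaluations.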
Comparing modelling techniques for analysing urban pluvial flooding.
van Dijk, E; van der Meulen, J; Kluck, J; Straatman, J H M
2014-01-01
Short peak rainfall intensities cause sewer systems to overflow leading to flooding of streets and houses. Due to climate change and densification of urban areas, this is expected to occur more often in the future. Hence, next to their minor (i.e. sewer) system, municipalities have to analyse their major (i.e. surface) system in order to anticipate urban flooding during extreme rainfall. Urban flood modelling techniques are powerful tools in both public and internal communications and transparently support design processes. To provide more insight into the (im)possibilities of different urban flood modelling techniques, simulation results have been compared for an extreme rainfall event. The results show that, although modelling software is tending to evolve towards coupled one-dimensional (1D)-two-dimensional (2D) simulation models, surface flow models, using an accurate digital elevation model, prove to be an easy and fast alternative to identify vulnerable locations in hilly and flat areas. In areas at the transition between hilly and flat, however, coupled 1D-2D simulation models give better results since catchments of major and minor systems can differ strongly in these areas. During the decision making process, surface flow models can provide a first insight that can be complemented with complex simulation models for critical locations.
Directory of Open Access Journals (Sweden)
Meenakshi
2016-07-01
MIMO-OFDM is an attractive interface for next-generation WLAN, WMAN, 4G and 5G mobile cellular systems. However, the performance of MIMO-OFDM systems is affected by the Peak to Average Power Ratio (PAPR), the main disadvantage associated with MIMO-OFDM systems. So far, many techniques have been proposed to reduce the PAPR, but high PAPR in MIMO-OFDM systems is still a demanding area and an open issue. In this paper, a hybrid VLM precoded SLM scheme using clipping and filtering is proposed to reduce PAPR in MIMO-OFDM systems. It has been observed that the proposed scheme achieves a significant gain in PAPR reduction without increasing the system complexity or affecting the error performance of the system.
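The PAPR metric and the clipping component of schemes like the one above can be illustrated directly: generate a time-domain OFDM symbol, measure its peak-to-average power ratio, and limit the envelope to a threshold. The subcarrier count and clipping ratio below are assumed values for illustration; the hybrid VLM/SLM precoding itself is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# Random QPSK symbols on 256 subcarriers -> time-domain symbol via IFFT.
n_sub = 256
bits = rng.integers(0, 2, size=(2, n_sub))
symbols = ((2 * bits[0] - 1) + 1j * (2 * bits[1] - 1)) / np.sqrt(2)
time_signal = np.fft.ifft(symbols) * np.sqrt(n_sub)  # unit average power

papr_before = papr_db(time_signal)

# Amplitude clipping: cap the envelope at `clip_ratio` times the RMS level.
clip_ratio = 1.5
rms = np.sqrt(np.mean(np.abs(time_signal) ** 2))
threshold = clip_ratio * rms
mag = np.abs(time_signal)
clipped = np.where(mag > threshold, threshold * time_signal / mag, time_signal)

papr_after = papr_db(clipped)
```

Clipping alone distorts the spectrum, which is why practical schemes pair it with filtering and with probabilistic methods such as SLM that lower PAPR before any distortion is introduced.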
Kar, Ayan; Low, Ke-Bin; Oye, Michael; Stroscio, Michael A; Dutta, Mitra; Nicholls, Alan; Meyyappan, M
2011-12-01
ZnO nanowire nucleation mechanism and initial stages of nanowire growth using the carbothermal reduction technique are studied confirming the involvement of the catalyst at the tip in the growth process. Role of the Au catalyst is further confirmed when the tapering observed in the nanowires can be explained by the change in the shape of the catalyst causing a variation of the contact area at the liquid-solid interface of the nanowires. The rate of decrease in nanowire diameter with length on the average is found to be 0.36 nm/s and this rate is larger near the base. Variation in the ZnO nanowire diameter with length is further explained on the basis of the rate at which Zn atoms are supplied as well as the droplet stability at the high flow rates and temperature. Further, saw-tooth faceting is noticed in tapered nanowires, and the formation is analyzed crystallographically.
Procedures for Geometric Data Reduction in Solid Log Modelling
Luis G. Occeña; Wenzhen Chen; Daniel L. Schmoldt
1995-01-01
One of the difficulties in solid log modelling is working with huge data sets, such as those that come from computed axial tomographic imaging. Algorithmic procedures are described in this paper that have successfully reduced data without sacrificing modelling integrity.
Marimón, Elena; Nait-Charif, Hammadi; Khan, Asmar; Marsden, Philip A.; Diaz, Oliver
2017-03-01
X-ray mammography examinations are strongly affected by scattered radiation, as it degrades image quality and complicates the diagnosis process. Anti-scatter grids are currently used in planar mammography examinations as the standard physical scatter reduction technique. This method has been found to be inefficient, as it increases the dose delivered to the patient, does not remove all the scattered radiation, and increases the price of the equipment. Alternative scatter reduction methods, based on post-processing algorithms, are being investigated as substitutes for anti-scatter grids. Methods such as convolution-based scatter estimation have lately become attractive as they are quicker and more flexible than pure Monte Carlo (MC) simulations. In this study we make use of this specific method, which is based on the premise that the scatter in the system is spatially diffuse and can therefore be approximated by a two-dimensional low-pass convolution filter of the primary image. This algorithm uses the narrow pencil beam method to obtain the scatter kernel used to convolve an image acquired without an anti-scatter grid. The results obtained show an image quality comparable, in the worst case, to the grid image in terms of uniformity and contrast-to-noise ratio. Further improvement is expected when using clinically representative phantoms.
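The premise of convolution-based scatter estimation is simple to demonstrate numerically: because scatter is spatially diffuse, a low-pass-filtered copy of the image approximates the scatter field, which can then be subtracted. The separable box kernel, scatter fraction, and toy phantom below are illustrative assumptions; the study above derives its kernel from narrow pencil beam measurements instead.

```python
import numpy as np

def low_pass(image, width):
    """Separable box blur standing in for a diffuse 2-D scatter kernel."""
    kernel = np.ones(width) / width
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, image)
    return np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, blurred)

rng = np.random.default_rng(1)

# Toy "measured" image: sharp primary detail plus diffuse scatter background.
primary = rng.uniform(0.5, 1.0, size=(64, 64))
primary[20:30, 20:30] += 2.0                    # a high-contrast feature
scatter_fraction = 0.3                          # assumed scatter-to-primary ratio
measured = primary + scatter_fraction * low_pass(primary, 15)

# Scatter estimate: low-pass convolution of the grid-less measured image.
scatter_estimate = scatter_fraction * low_pass(measured, 15)
corrected = measured - scatter_estimate

err_corrected = np.mean(np.abs(corrected - primary))
err_measured = np.mean(np.abs(measured - primary))
```

Even though the filter is applied to the measured image (primary plus scatter) rather than the unknown primary, the residual error after subtraction is second order in the scatter fraction, which is why this approximation works well in practice.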
ALC: automated reduction of rule-based models
Directory of Open Access Journals (Sweden)
Gilles Ernst
2008-10-01
Background: Combinatorial complexity is a challenging problem for the modeling of cellular signal transduction, since the association of a few proteins can give rise to an enormous number of feasible protein complexes. The layer-based approach is an approximative, but accurate, method for the mathematical modeling of signaling systems with inherent combinatorial complexity. The number of variables in the simulation equations is highly reduced and the resulting dynamic models show a pronounced modularity. Layer-based modeling allows for the modeling of systems not accessible previously. Results: ALC (Automated Layer Construction) is a computer program that greatly simplifies the building of reduced modular models according to the layer-based approach. The model is defined using a simple but powerful rule-based syntax that supports the concepts of modularity and macrostates. ALC performs consistency checks on the model definition and provides the model output in different formats (C MEX, MATLAB, Mathematica and SBML) as ready-to-run simulation files. ALC also provides additional documentation files that simplify the publication or presentation of the models. The tool can be used offline or via a form on the ALC website. Conclusion: ALC allows for a simple rule-based generation of layer-based reduced models. The model files are given in different formats as ready-to-run simulation files.
Separable Watermarking Technique Using the Biological Color Model
Directory of Open Access Journals (Sweden)
David Nino
2009-01-01
Problem statement: The issue of having robust and fragile watermarking is still a main focus for various researchers worldwide. Performance of a watermarking technique depends on how complex it is, as well as how feasible it is to implement. These issues are tested using various kinds of attacks, including geometric and transformation attacks. Watermarking techniques for color images are more challenging than for gray images in terms of complexity and information handling. In this study, we focused on the implementation of a watermarking technique for color images using the biological model. Approach: We proposed a novel method for watermarking using the spatial and the Discrete Cosine Transform (DCT) domains. The proposed method dealt with colored images in the biological color model, Hue, Saturation and Intensity (HSI). The technique was implemented and tested against various colored images, including standard ones such as the pepper image. The experiments were done using various attacks such as cropping, transformation and geometry. Results: The method showed high accuracy in data retrieval, and the technique is fragile against geometric attacks. Conclusion: Watermark security was increased by using the Hadamard transform matrix. The watermarks used were meaningful and of varying sizes and details.
Directory of Open Access Journals (Sweden)
Ashish Kumari
2012-01-01
An Extended Finite State Machine (EFSM) uses a formal description language to model the requirement specification of a system. System models are frequently changed because of specification changes, and these changes can be reflected by changing the model represented as a finite state machine. Selective test generation techniques are used to test the modified parts of the model; however, the resulting regression test suites may still be very large. In this paper, we discuss a method for test suite reduction based on the requirement specification, used for testing the main system after modifications in the requirements and implementation. The extended finite state machine uses a state transition diagram to represent the requirement specification, showing how the system changes states and which actions and variables are used during each transition. Data dependencies and control dependencies are then identified among the transitions of the state transition diagram. From these dependencies the affecting and affected portions of the system introduced by the modification can be found. The main condition is: "If two test cases generate the same affecting and affected pattern, it is enough to implement only one test case rather than two." Using this approach, the size of the original test suite can be substantially reduced.
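The reduction rule quoted in this abstract amounts to deduplicating test cases by their (affecting, affected) dependency pattern. The sketch below assumes each test case can be mapped to such a pattern; the test-case names and patterns are hypothetical, not taken from the paper.

```python
def reduce_test_suite(test_cases, pattern_of):
    """Keep one representative test case per (affecting, affected) pattern.

    `pattern_of` maps a test case to the pair of transition sets it
    exercises; per the rule above, two cases with the same pattern
    are redundant and only one needs to be kept.
    """
    kept, seen = [], set()
    for case in test_cases:
        pattern = pattern_of(case)
        if pattern not in seen:
            seen.add(pattern)
            kept.append(case)
    return kept

# Hypothetical patterns: pairs of affecting / affected transition sets.
patterns = {
    "t1": (frozenset({"a"}), frozenset({"x"})),
    "t2": (frozenset({"a"}), frozenset({"x"})),   # same pattern as t1
    "t3": (frozenset({"b"}), frozenset({"y"})),
}
reduced = reduce_test_suite(["t1", "t2", "t3"], patterns.__getitem__)
# reduced == ["t1", "t3"]
```

The hard part in practice is computing the patterns themselves, via data- and control-dependency analysis over the state transition diagram; the deduplication step is then trivial.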
Deker, H.
1971-01-01
The West German tracking stations are equipped with ballistic cameras. Plate measurement and plate reduction must therefore follow photogrammetric methods. Approximately 100 star positions and 200 satellite positions are measured on each plate. The mathematical model for spatial rotation of the bundle of rays is extended by including terms for distortion and internal orientation of the camera as well as by providing terms for refraction which are computed for the measured coordinates of the star positions on the plate. From the measuring accuracy of the plate coordinates it follows that the timing accuracy for the exposures has to be about one millisecond, in order to obtain a homogeneous system.
Impact of Domain Modeling Techniques on the Quality of Domain Model: An Experiment
Directory of Open Access Journals (Sweden)
Hiqmat Nisa
2016-10-01
The Unified Modeling Language (UML) is widely used to analyze and design different software development artifacts in object-oriented development. The domain model is a significant artifact that models the problem domain and visually represents real-world objects and the relationships among them. It facilitates the comprehension process by identifying the vocabulary and key concepts of the business world. The category list technique identifies concepts and associations with the help of pre-defined categories, which are important to business information systems, whereas the noun phrasing technique performs a grammatical analysis of the use case description to recognize concepts and associations. Both of these techniques are used for the construction of the domain model; however, no empirical evidence exists that evaluates the quality of the resultant domain model constructed via these two basic techniques. A controlled experiment was performed to investigate the impact of the category list and noun phrasing techniques on the quality of the domain model. The constructed domain model is evaluated for completeness, correctness, and the effort required for its design. The obtained results show that the category list technique is better than the noun phrasing technique for the identification of concepts, as it avoids generating unnecessary elements, i.e. extra concepts, associations and attributes, in the domain model. The noun phrasing technique produces a comprehensive domain model and requires less effort as compared to the category list technique. There is no statistically significant difference between the two techniques in the case of correctness.
Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling
Hojnicki, Jeffrey S.; Rusick, Jeffrey J.
2005-01-01
Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability, because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
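The abstract does not detail the probabilistic techniques used, but the general idea of replacing a single-valued deterministic result with a distribution can be sketched with plain Monte Carlo uncertainty propagation. The toy capability model, input distributions, and parameter names below are entirely hypothetical and are not the SPACE model or NASA's actual method.

```python
import numpy as np

rng = np.random.default_rng(42)

def power_capability(solar_eff, battery_health, load_kw):
    """Toy power-capability model (hypothetical, illustrative only)."""
    generated = 100.0 * solar_eff * battery_health  # kW available
    return generated - load_kw                      # power margin, kW

n = 100_000
# Represent input uncertainties as probability distributions.
solar_eff = rng.normal(0.90, 0.02, n)
battery_health = rng.uniform(0.85, 1.00, n)
load_kw = rng.normal(60.0, 3.0, n)

# Propagate the samples through the model to get an output distribution.
margin = power_capability(solar_eff, battery_health, load_kw)

mean_margin = margin.mean()
p_shortfall = np.mean(margin < 0.0)  # probability load exceeds capability
```

A deterministic analysis would report only a single margin at nominal inputs; the probabilistic result additionally quantifies how likely a shortfall is, which is the kind of information the deterministic ISS analyses could not provide. (Methods like fast probability integration achieve this with far fewer model evaluations than brute-force sampling.)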
Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study
Energy Technology Data Exchange (ETDEWEB)
Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Palmer, Kevin [Teck Resources Limited (Canada); Deutsch, Clayton V.; Szymanski, Jozef [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Etsell, Thomas H. [University of Alberta, Department of Chemical and Materials Engineering (Canada)
2016-06-15
High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scale, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and are rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.
Drag reduction by linear viscosity model in turbulent channel flow of polymer solution
Institute of Scientific and Technical Information of China (English)
吴桂芬; 李昌烽; 黄东升; 赵作广; 冯晓东; 王瑞
2008-01-01
A further numerical study was performed of the theory that drag reduction in turbulence is related to a viscosity profile growing linearly with the distance from the wall. The constant viscosity in the Navier-Stokes equations was replaced using this viscosity model. Some drag reduction characteristics were shown in comparison with Virk's phenomenology. The mean velocity and Reynolds stress profiles are consistent with the experimental and direct numerical simulation results. A drag reduction level of 45% was obtained. It is reasonable for this linear viscosity model to explain the mechanism of turbulence drag reduction in some aspects.
POD/DEIM Nonlinear model order reduction of an ADI implicit shallow water equations model
Stefanescu, Razvan
2012-01-01
In the present paper we consider a 2-D shallow-water equations (SWE) model on a $\beta$-plane solved using an alternating direction fully implicit (ADI) finite-difference scheme on a rectangular domain. The scheme was shown to be unconditionally stable for the linearized equations. The discretization yields a number of nonlinear systems of algebraic equations. We then use a proper orthogonal decomposition (POD) to reduce the dimension of the SWE model. Due to the model nonlinearities, the computational complexity of the reduced model still depends on the number of variables of the full shallow-water equations model. By employing the discrete empirical interpolation method (DEIM) we reduce the computational complexity of the reduced order model, which otherwise still depends on the nonlinear full-dimension model, and regain the full model reduction expected from POD. To emphasize the CPU gain in performance due to the use of POD/DEIM, we also propose testing an explicit Euler finite difference scheme (EE) as an a...
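The POD step described above can be sketched compactly: collect full-model states as columns of a snapshot matrix, take its SVD, and keep the leading left singular vectors as a reduced basis. The toy two-mode "full model" below is an illustrative assumption, not the shallow-water solver; DEIM, which additionally approximates the nonlinear terms, is not shown.

```python
import numpy as np

rng = np.random.default_rng(7)

# Snapshot matrix: columns are full-model states at successive times.
# Toy full model: two dominant spatial modes plus small noise.
n_space, n_time = 200, 50
x = np.linspace(0, 1, n_space)
t = np.linspace(0, 1, n_time)
snapshots = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
             + 0.5 * np.outer(np.sin(2 * np.pi * x), np.sin(4 * np.pi * t))
             + 1e-4 * rng.standard_normal((n_space, n_time)))

# POD: the left singular vectors of the snapshot matrix are the basis modes.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
k = 2                      # retain the two dominant modes
basis = U[:, :k]           # reduced basis, shape (n_space, k)

# Galerkin-style projection onto the basis and reconstruction.
reduced = basis.T @ snapshots          # reduced coordinates, shape (k, n_time)
reconstructed = basis @ reduced

rel_err = (np.linalg.norm(reconstructed - snapshots)
           / np.linalg.norm(snapshots))
```

The catch the paper addresses is that in a nonlinear model the reduced equations still require evaluating the full-dimensional nonlinearity at every step; DEIM fixes this by interpolating the nonlinear term at a few selected spatial points.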