Model order reduction techniques with applications in finite element analysis
Qu, Zu-Qing
2004-01-01
Despite the continued rapid advance in computing speed and memory, the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods, focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...
Frequency Weighted Model Order Reduction Technique and Error Bounds for Discrete Time Systems
Directory of Open Access Journals (Sweden)
Muhammad Imran
2014-01-01
for whole frequency range. However, certain applications (like controller reduction) require frequency weighted approximation, which introduces the concept of using frequency weights in model reduction techniques. Limitations of some existing frequency weighted model reduction techniques include the lack of stability of reduced order models (for the two sided weighting case) and the lack of frequency response error bounds. A new frequency weighted technique for balanced model reduction for discrete time systems is proposed. The proposed technique guarantees stable reduced order models even when two sided weightings are present. An efficient technique for computing the frequency weighted Gramians is also proposed. Results are compared with those of other existing frequency weighted model reduction techniques for discrete time systems. Moreover, the proposed technique yields frequency response error bounds.
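The balanced-truncation machinery that such frequency-weighted methods build on can be sketched compactly. The following is a minimal, unweighted discrete-time balanced truncation in Python with NumPy/SciPy; it is not the authors' weighted algorithm, and the system matrices and orders are illustrative stand-ins:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, cholesky, svd

def balanced_truncation_discrete(A, B, C, r):
    """Reduce a stable discrete-time system (A, B, C) to order r by
    (unweighted) balanced truncation using the square-root method."""
    # Gramians: P = A P A^T + B B^T,  Q = A^T Q A + C^T C
    P = solve_discrete_lyapunov(A, B @ B.T)
    Q = solve_discrete_lyapunov(A.T, C.T @ C)
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    # Hankel singular values from the SVD of the Cholesky-factor product
    U, s, Vt = svd(Lq.T @ Lp)
    Sr = np.diag(s[:r] ** -0.5)
    T = Lp @ Vt[:r].T @ Sr          # right projection matrix
    L = Sr @ U[:, :r].T @ Lq.T      # left projection matrix (L @ T = I)
    return L @ A @ T, L @ B, C @ T, s

# Illustrative use: reduce a random stable 6th-order SISO system to order 2
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
A *= 0.9 / max(abs(np.linalg.eigvals(A)))   # force spectral radius 0.9
B = rng.standard_normal((6, 1))
C = rng.standard_normal((1, 6))
Ar, Br, Cr, hsv = balanced_truncation_discrete(A, B, C, r=2)
```

Balanced truncation preserves stability of the reduced model; the weighted variants discussed in the abstract replace the Gramians P and Q with frequency-weighted counterparts.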
Size reduction techniques for vital compliant VHDL simulation models
Rich, Marvin J.; Misra, Ashutosh
2006-08-01
A method and system select delay values from a VHDL standard delay file that correspond to an instance of a logic gate in a logic model. The system then collects all the delay values of the selected instance and builds super generics for the rise-time and the fall-time of that instance. The system repeats this process for every delay value in the standard delay file (310) that corresponds to every instance of every logic gate in the logic model. The system then outputs a reduced-size standard delay file (314) containing the super generics for every instance of every logic gate in the logic model.
Energy Technology Data Exchange (ETDEWEB)
Dautin, S.
1997-04-01
This work concerns the modeling of thermal phenomena inside buildings for the evaluation of the energy exploitation costs of thermal installations and for the modeling of thermal and aeraulic transient phenomena. This thesis comprises 7 chapters dealing with: (1) the thermal phenomena inside buildings and the CLIM2000 calculation code, (2) the ETNA and GENEC experimental cells and their modeling, (3) the model reduction techniques tested (Marshall's truncation, the Michailesco aggregation method and Moore truncation) with their algorithms and their encoding in the MATRED software, (4) the application of the model reduction methods to the GENEC and ETNA cells and to a medium-size dual-zone building, (5) the modeling of meteorological influences classically applied to buildings (external temperature and solar flux), (6) the analytical expression of these modeled meteorological influences. The last chapter presents the results of these improved methods on the GENEC and ETNA cells and on a lower-inertia building. These new methods are compared to classical methods. (J.S.) 69 refs.
Directory of Open Access Journals (Sweden)
Lubna Moin
2009-04-01
This research paper explores and compares different modeling and analysis techniques, and then examines the model order reduction approach and its significance. Traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, but the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. They are also used for analyzing linear and nonlinear dynamic production systems, artificial intelligence, image processing, robotics and industrial automation. This paper describes a technique for generating a genetic design from the tree-structured transfer function obtained from a Bond Graph. The work combines Bond Graphs for model representation with Genetic Programming for exploring different ideas in the design space; the tree-structured transfer function results from replacing typical Bond Graph elements with their impedance equivalents, specifying impedance laws for the Bond Graph multiport. The tree-structured form thus obtained from the Bond Graph is applied to generate the genetic tree. Application studies will identify key issues and their importance for advancing this approach towards an effective and efficient design tool for synthesizing designs for electrical systems. In the first phase, the system is modeled using the Bond Graph technique. Its system response and transfer function obtained with the conventional and the Bond Graph methods are analyzed, and then an approach towards model order reduction is pursued. The suggested algorithm and other known modern model order reduction techniques are applied to an 11th-order high-pass filter [1] with different approaches. The model order reduction technique developed in this paper has the least reduction error and, secondly, the final model retains structural information. The system response and the stability analysis of the system transfer function obtained by the conventional and by the Bond Graph method are compared and
A Multi-Model Reduction Technique for Optimization of Coupled Structural-Acoustic Problems
DEFF Research Database (Denmark)
Creixell Mediante, Ester; Jensen, Jakob Søndergaard; Brunskog, Jonas
2016-01-01
Finite Element models of structural-acoustic coupled systems can become very large for complex structures with multiple connected parts. Optimization of the performance of the structure based on harmonic analysis of the system requires solving the coupled problem iteratively and for several...... frequencies, which can become highly time consuming. Several modal-based model reduction techniques for structure-acoustic interaction problems have been developed in the literature. The unsymmetric nature of the pressure-displacement formulation of the problem poses the question of how the reduction modal...... base should be formed, given that the modal vectors are not orthogonal due to the asymmetry of the system matrices. In this paper, a multi-model reduction (MMR) technique for structure-acoustic interaction problems is developed. In MMR, the reduction base is formed with the modal vectors of a family...
Flexible multibody simulation of automotive systems with non-modal model reduction techniques
Shiiba, Taichi; Fehr, Jörg; Eberhard, Peter
2012-12-01
The stiffness of the body structure of an automobile has a strong relationship with its noise, vibration, and harshness (NVH) characteristics. In this paper, the effect of the stiffness of the body structure upon ride quality is discussed with flexible multibody dynamics. In flexible multibody simulation, the local elastic deformation of the vehicle has traditionally been described with modal shape functions. Recently, linear model reduction techniques from system dynamics and mathematics have come into focus as a way to find more sophisticated elastic shape functions. In this work, the NVH-relevant states of a racing kart are simulated, where the elastic shape functions are calculated with modern model reduction techniques like moment matching by projection on Krylov subspaces, singular value decomposition-based reduction techniques, and combinations of those. The whole elastic multibody vehicle model consisting of tyres, steering, axle, etc. is considered, and an excitation with vibration characteristics in a wide frequency range is evaluated in this paper. The accuracy and the calculation performance of those modern model reduction techniques are investigated, including a comparison with the modal reduction approach.
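Moment matching by projection on Krylov subspaces, as mentioned above, can be illustrated with a small sketch. This is a one-sided Arnoldi projection for a generic first-order state-space model, assuming a dense system small enough to invert directly; it is not the authors' vehicle model:

```python
import numpy as np

def krylov_reduce(A, B, C, r, s0=0.0):
    """One-sided moment matching: Galerkin projection of (A, B, C) onto
    K_r(M, M b) with M = (A - s0 I)^(-1), matching the first r moments
    of the transfer function at the expansion point s0."""
    n = A.shape[0]
    M = np.linalg.inv(A - s0 * np.eye(n))   # dense shift-invert (sketch only)
    V = np.zeros((n, r))
    v = M @ B.ravel()
    V[:, 0] = v / np.linalg.norm(v)
    for k in range(1, r):                   # Arnoldi with Gram-Schmidt
        w = M @ V[:, k - 1]
        w -= V[:, :k] @ (V[:, :k].T @ w)
        V[:, k] = w / np.linalg.norm(w)
    return V.T @ A @ V, V.T @ B, C @ V

# Illustrative use: a stable diagonal system, reduced from order 6 to 2
A = -np.diag([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
B = np.ones((6, 1))
C = np.ones((1, 6))
Ar, Br, Cr = krylov_reduce(A, B, C, r=2)

# The DC gain H(0) = -C A^(-1) B is matched by construction at s0 = 0
H_full = (C @ np.linalg.solve(-A, B)).item()
H_red = (Cr @ np.linalg.solve(-Ar, Br)).item()
```

With an orthonormal Krylov basis of dimension r, the reduced model matches the first r moments of the transfer function at the expansion point, which is what makes the approach attractive for frequency-band-accurate elastic shape functions.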
New model reduction technique for a class of parabolic partial differential equations
Vajta, Miklos
1991-01-01
A model reduction (or lumping) technique for a class of parabolic-type partial differential equations is given, and its application is discussed. The frequency response of the temperature distribution in any multilayer solid is developed and given by a matrix expression. The distributed transfer
DEFF Research Database (Denmark)
Tahavori, Maryamsadat; Shaker, Hamid Reza
A method for model reduction of dynamical systems with the second order structure is proposed in this paper. The proposed technique preserves the second order structure of the system, and also preserves the stability of the original systems. The method uses the controllability and observability...... gramians within the time interval to build the appropriate Petrov-Galerkin projection for dynamical systems within the time interval of interest. The bound on approximation error is also derived. The numerical results are compared with the counterparts from other techniques. The results confirm......
Soldner, Dominic; Brands, Benjamin; Zabihyan, Reza; Steinmann, Paul; Mergheim, Julia
2017-10-01
Computing the macroscopic material response of a continuum body commonly involves the formulation of a phenomenological constitutive model. However, the response is mainly influenced by the heterogeneous microstructure. Computational homogenisation can be used to determine the constitutive behaviour on the macro-scale by solving a boundary value problem at the micro-scale for every so-called macroscopic material point within a nested solution scheme. Hence, this procedure requires the repeated solution of similar microscopic boundary value problems. To reduce the computational cost, model order reduction techniques can be applied. An important aspect thereby is the robustness of the obtained reduced model. Within this study reduced-order modelling (ROM) for the geometrically nonlinear case using hyperelastic materials is applied for the boundary value problem on the micro-scale. This involves the Proper Orthogonal Decomposition (POD) for the primary unknown and hyper-reduction methods for the arising nonlinearity. Therein three methods for hyper-reduction, differing in how the nonlinearity is approximated and the subsequent projection, are compared in terms of accuracy and robustness. Introducing interpolation or Gappy-POD based approximations may not preserve the symmetry of the system tangent, rendering the widely used Galerkin projection sub-optimal. Hence, a different projection related to a Gauss-Newton scheme (Gauss-Newton with Approximated Tensors- GNAT) is favoured to obtain an optimal projection and a robust reduced model.
Trimming a hazard logic tree with a new model-order-reduction technique
Porter, Keith; Field, Edward; Milner, Kevin R
2017-01-01
The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
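The tornado-diagram idea, varying one parameter at a time about a baseline and ranking parameters by output swing, can be sketched as follows. The loss model, parameter names and values here are hypothetical stand-ins, not UCERF3 branches:

```python
def tornado_swings(model, baseline, alternatives):
    """One-at-a-time sensitivity: swing each parameter over its
    alternative values, holding the rest at baseline, and rank the
    parameters by the resulting output swing (largest first)."""
    swings = {}
    for name, values in alternatives.items():
        outs = [model({**baseline, name: v}) for v in values]
        swings[name] = max(outs) - min(outs)
    return sorted(swings.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical toy loss model: 'b' dominates, 'c' is irrelevant
model = lambda x: 10 * x['b'] + 2 * x['a'] + 0 * x['c']
baseline = {'a': 1.0, 'b': 1.0, 'c': 1.0}
alternatives = {'a': [0.5, 1.5], 'b': [0.5, 1.5], 'c': [0.0, 2.0]}
ranking = tornado_swings(model, baseline, alternatives)
# Parameters with negligible swing (here 'c') are candidates for fixing
```

Branches whose swing is negligible can be fixed at a representative value, which is how the logic tree is trimmed from 57,600 leaves toward a far smaller reduced-order model.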
A parametric model order reduction technique for poroelastic finite element models.
Lappano, Ettore; Polanz, Markus; Desmet, Wim; Mundo, Domenico
2017-10-01
This research presents a parametric model order reduction approach for vibro-acoustic problems in the frequency domain of systems containing poroelastic materials (PEM). The method is applied to the Finite Element (FE) discretization of the weak u-p integral formulation based on the Biot-Allard theory and makes use of reduced basis (RB) methods typically employed for parametric problems. The parametric reduction is obtained rewriting the Biot-Allard FE equations for poroelastic materials using an affine representation of the frequency (therefore allowing for RB methods) and projecting the frequency-dependent PEM system on a global reduced order basis generated with the proper orthogonal decomposition instead of standard modal approaches. This has proven to be better suited to describe the nonlinear frequency dependence and the strong coupling introduced by damping. The methodology presented is tested on two three-dimensional systems: in the first experiment, the surface impedance of a PEM layer sample is calculated and compared with results of the literature; in the second, the reduced order model of a multilayer system coupled to an air cavity is assessed and the results are compared to those of the reference FE model.
International Nuclear Information System (INIS)
WAGGONER, L.O.
2000-01-01
As radiation safety specialists, one of the things we are required to do is evaluate tools, equipment, materials and work practices and decide whether the use of these products or work practices will reduce radiation dose or risk to the environment. There is a tendency for many workers who work with radioactive material to accomplish radiological work the same way they have always done it rather than look for new technology or change their work practices. New technology is being developed all the time that can make radiological work easier and result in less radiation dose to the worker, or reduce the possibility that contamination will be spread to the environment. As we discuss the various tools and techniques that reduce radiation dose, keep in mind that the radiological controls should be reasonable. We cannot always get the dose to zero, so we must try to accomplish the work efficiently and cost-effectively. There are times we may have to accept that there is only so much you can do. The goal is to do the smart things that protect the worker but do not hinder him while the task is being accomplished. In addition, we should not demand that large amounts of money be spent for equipment that has marginal value in order to save a few millirem. We have broken the handout into sections that should simplify the presentation. Time, distance, shielding, and source reduction are methods used to reduce dose and are covered in Part I on work execution. We then look at operational considerations and radiological design parameters, and discuss the characteristics of personnel who deal with ALARA. This handout should give you an overview of what it takes to have an effective dose reduction program.
Simulation of Moving Loads in Elastic Multibody Systems With Parametric Model Reduction Techniques
Directory of Open Access Journals (Sweden)
Fischer Michael
2014-08-01
In elastic multibody systems, one considers large nonlinear rigid body motion and small elastic deformations. In a rising number of applications, e.g. automotive engineering or turning and milling processes, the position of the acting forces on the elastic body varies. The model order reduction necessary to enable efficient simulations requires the determination of ansatz functions, which depend on the moving force position. For a large number of possible interaction points, the size of the reduced system would increase drastically in the classical Component Mode Synthesis framework. If many nodes are potentially loaded, or the contact area is not known a priori and only a small number of nodes is loaded simultaneously, the system is described in this contribution with a parameter-dependent force position. This enables the application of parametric model order reduction methods. Here, two techniques based on matrix interpolation are described which transform individually reduced systems and allow the interpolation of the reduced system matrices to determine reduced systems for any force position. The online-offline decomposition and the description of the force distribution onto the reduced elastic body are presented in this contribution. The proposed framework enables the efficient simulation of elastic multibody systems with moving loads because it solely depends on the size of the reduced system. Results in the frequency and time domain for the simulation of a thin-walled cylinder with a moving load illustrate the applicability of the proposed method.
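The matrix-interpolation idea can be sketched for the simplest possible case: a system matrix that depends affinely on the force-position parameter and a single shared reduction basis, so entry-wise interpolation of the reduced matrices is exact. Real methods must first transform the individually reduced systems to compatible coordinates; all matrices below are random stand-ins:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 20, 4
A0 = rng.standard_normal((n, n))     # position-independent part
A1 = rng.standard_normal((n, n))     # part scaled by the force position p
V, _ = np.linalg.qr(rng.standard_normal((n, r)))   # shared reduction basis

def reduced_A(p):
    """Project the parameter-dependent system matrix A(p) = A0 + p*A1."""
    return V.T @ (A0 + p * A1) @ V

# Offline stage: store reduced matrices at two sampled force positions
Ar0 = reduced_A(0.0)
Ar1 = reduced_A(1.0)

# Online stage: entry-wise interpolation for an intermediate position
p = 0.3
Ar_interp = (1 - p) * Ar0 + p * Ar1
```

Because only the r-by-r reduced matrices are touched online, the simulation cost is independent of the full model size, which is the point made in the abstract.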
DEFF Research Database (Denmark)
Creixell Mediante, Ester; Jensen, Jakob Søndergaard; Naets, Frank
2018-01-01
performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during...... the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis......-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system....
Noor, A. K.; Andersen, C. M.; Tanner, J. A.
1984-01-01
An effective computational strategy is presented for the large-rotation, nonlinear axisymmetric analysis of shells of revolution. The three key elements of the computational strategy are: (1) use of mixed finite-element models with discontinuous stress resultants at the element interfaces; (2) substantial reduction in the total number of degrees of freedom through the use of a multiple-parameter reduction technique; and (3) reduction in the size of the analysis model through the decomposition of asymmetric loads into symmetric and antisymmetric components coupled with the use of the multiple-parameter reduction technique. The potential of the proposed computational strategy is discussed. Numerical results are presented to demonstrate the high accuracy of the mixed models developed and to show the potential of using the proposed computational strategy for the analysis of tires.
Alsmadi, Othman M K; Abo-Hammour, Zaer S
2015-01-01
A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single input single output (SISO) and multi-input multioutput (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems where some specific dynamics may not have significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA) with the advantage of obtaining a reduced order model, maintaining the exact dominant dynamics in the reduced order, and minimizing the steady state error. The reduction process is performed by obtaining an upper triangular transformed matrix of the system state matrix defined in state space representation along with the elements of B, C, and D matrices. The GA computational procedure is based on maximizing the fitness function corresponding to the response deviation between the full and reduced order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques where simulation results show the potential and advantages of the new approach.
Xing, Yafei; Macq, Benoit
2017-11-01
With the emergence of clinical prototypes and first patient acquisitions for proton therapy, research on prompt gamma imaging is aiming at making the most use of the prompt gamma data for in vivo estimation of any shift from the expected Bragg peak (BP). The simple problem of matching the measured prompt gamma profile of each pencil beam with a reference simulation from the treatment plan is actually made complex by uncertainties which can translate into distortions during treatment. We illustrate this challenge and demonstrate the robustness of a predictive linear model we proposed for BP shift estimation based on the principal component analysis (PCA) method. It considered the first clinical knife-edge slit camera design in use with anthropomorphic phantom CT data. In particular, 4115 error scenarios were simulated for the learning model. PCA was applied to the training input, randomly chosen from 500 scenarios, to eliminate data collinearities. A total variance of 99.95% was used for representing the testing input from 3615 scenarios. This model improved the BP shift estimation by an average of 63+/-19% in a range between -2.5% and 86%, compared to our previous profile shift (PS) method. The robustness of our method was demonstrated by a comparative study conducted by applying Poisson noise 1000 times to each profile. 67% of the cases obtained by the learning model had lower prediction errors than those obtained by the PS method. The estimation accuracy ranged between 0.31 +/- 0.22 mm and 1.84 +/- 8.98 mm for the learning model, while for the PS method it ranged between 0.3 +/- 0.25 mm and 20.71 +/- 8.38 mm.
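The PCA-plus-linear-regression pipeline can be sketched on synthetic data. The "profiles" below are a toy first-order model in which the shift enters linearly through a fixed derivative mode, so a linear model recovers it exactly; real prompt-gamma profiles and the camera geometry are of course far more complex:

```python
import numpy as np

rng = np.random.default_rng(3)
grid = np.linspace(-20.0, 20.0, 100)
base = np.exp(-0.5 * (grid / 5.0) ** 2)     # nominal depth-profile stand-in
mode = -np.gradient(base, grid)             # first-order "shift" mode

# 500 synthetic training profiles: shift enters linearly (small-shift limit)
shifts = rng.uniform(-1.0, 1.0, 500)
profiles = base[None, :] + shifts[:, None] * mode[None, :]

# PCA: centre the data and keep 99.95% of the variance
mean = profiles.mean(axis=0)
Xc = profiles - mean
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(var, 0.9995)) + 1
scores = Xc @ Vt[:k].T

# Linear model (least squares with intercept) from PC scores to shift
D = np.hstack([scores, np.ones((500, 1))])
coef, *_ = np.linalg.lstsq(D, shifts, rcond=None)

def predict_shift(profile):
    z = (profile - mean) @ Vt[:k].T
    return float(np.append(z, 1.0) @ coef)
```

On this idealized linear data a single principal component suffices; in the paper the 99.95% variance threshold selects the number of components from the 500 training scenarios.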
Andreh, Angga Muhamad; Subiyanto, Sunardiyo, Said
2017-01-01
With the development of non-linear loads in industrial applications and distribution systems, harmonic compensation becomes important. Harmonic pollution is an urgent problem in improving power quality. The main contributions of the study are the modeling approach used to design a shunt active filter and the application of the cascaded multilevel inverter topology to improve the power quality of electrical energy. In this study, the shunt active filter was aimed at eliminating the dominant harmonic components by injecting currents opposite to the harmonic components of the system. The active filter was designed in a shunt configuration with the cascaded multilevel inverter method, controlled by a PID controller and SPWM. With this shunt active filter, the harmonic current can be reduced so that the current waveform of the source is approximately sinusoidal. Design and simulation were conducted using Power Simulator (PSIM) software. The shunt active filter performance experiment was conducted on the IEEE four-bus test system. Installing the shunt active filter on the system (IEEE four bus) reduced the current THD from 28.68% to 3.09%. With this result, the active filter can be applied as an effective method to reduce harmonics.
Dimension Reduction Techniques in Morphometrics
Kratochvíl, Jakub
2011-01-01
This thesis centers on dimensionality reduction and its usage on landmark-type data, which are often used in anthropology and morphometrics. In particular we focus on non-linear dimensionality reduction methods - locally linear embedding and multidimensional scaling. We introduce a new approach to dimensionality reduction called multipass dimensionality reduction and show that it improves the quality of classification as well as requiring fewer dimensions for successful classification than the...
Disrattakit, P.; Chanphana, R.; Chatraphorn, P.
2017-10-01
Time-varying skewness (S) and kurtosis (Q) of the height distribution of the (2+1)-dimensional larger curvature (LC) model with and without noise reduction techniques (NRTs) are investigated in both the transient and steady state regimes. In this work, the effects of the multiple hit NRT (m > 1 NRT) and the long surface diffusion length NRT (ℓ > 1 NRT) on the surface morphologies and the characteristics of S and Q are studied. In the early growth time, plots of S and Q versus time for the m > 1 morphologies show pronounced oscillation indicating layer by layer growth. Our results show that S = 0 and Q < 0 at every complete layer. The results are confirmed by the same plots of the results from the Das Sarma-Tamborenea (DT) model. The ℓ > 1 LC model, on the other hand, shows no evidence of the layer by layer growth mode due to the rapidly damped oscillation of S and Q. In the steady state, the m > 1 and ℓ > 1 NRTs only weakly affect the values of S and Q and the mounded morphologies of the film. This leads to evidence of universality of S and Q in the steady state of the LC models with various m and ℓ. The finite size effect on the values of S and Q is found to be very weak in the LC model. By extrapolating to L → ∞, we obtain S(L→∞) ≈ 0.05 and Q(L→∞) ≈ -0.62, which are in agreement with the NRT results.
Principal Components as a Data Reduction and Noise Reduction Technique
Imhoff, M. L.; Campbell, W. J.
1982-01-01
The potential of principal components as a pipeline data reduction technique for Thematic Mapper data was assessed, and the principal components transformation was examined as a noise reduction technique. Two primary factors were considered: (1) how might data reduction and noise reduction using the principal components transformation affect the extraction of accurate spectral classifications; and (2) what are the real savings in terms of computer processing and storage costs of using reduced data over the full 7-band TM complement. An area in central Pennsylvania was chosen as the study area. The image data for the project were collected using the Earth Resources Laboratory's thematic mapper simulator (TMS) instrument.
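The principal-components data reduction described above can be sketched on synthetic multiband pixels. The band count matches the 7-band TM case, but the data, with two latent scene factors plus small sensor noise, are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)
# Synthetic 7-band pixels driven by two latent scene factors plus noise
latent = rng.standard_normal((10000, 2))
mixing = rng.standard_normal((2, 7))
pixels = latent @ mixing + 0.01 * rng.standard_normal((10000, 7))

# Principal components of the band covariance matrix
mean = pixels.mean(axis=0)
cov = np.cov(pixels - mean, rowvar=False)
evals, evecs = np.linalg.eigh(cov)          # eigh returns ascending order
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]
explained = evals / evals.sum()

# Data reduction: keep 2 of 7 components; the noise lives in the rest
pcs = (pixels - mean) @ evecs[:, :2]
recon = pcs @ evecs[:, :2].T + mean
```

Keeping two components here preserves essentially all of the band-to-band signal while discarding most of the noise, the same variance-compaction effect exploited for the TM bands.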
Model Reduction in Biomechanics
Feng, Yan
mechanical parameters from experimental results. However, in the real biological world, these homogeneous and isotropic assumptions are usually invalid. Thus, instead of using a hypothesized model, a specific continuum model at the mesoscopic scale can be introduced based upon data reduction of the results from molecular simulations at the atomistic level. Once a continuum model is established, it can provide details on the distribution of stresses and strains induced within the biomolecular system, which is useful in determining the distribution and transmission of these forces to the cytoskeletal and sub-cellular components, and helps us gain a better understanding of cell mechanics. A data-driven model reduction approach to the problem of microtubule mechanics is presented as an application: a beam element is constructed for microtubules based upon data reduction of the results from molecular simulation of the carbon backbone chain of alpha-beta tubulin dimers. The database of mechanical responses to various types of loads from molecular simulation is reduced to dominant modes. The dominant modes are subsequently used to construct the stiffness matrix of a beam element that captures the anisotropic behavior and deformation mode coupling that arise from a microtubule's spiral structure. In contrast to standard Euler-Bernoulli or Timoshenko beam elements, the link between forces and node displacements results not from hypothesized deformation behavior, but directly from the data obtained by molecular scale simulation. Differences between the resulting microtubule data-driven beam model (MTDDBM) and standard beam elements are presented, with a focus on the coupling of bending, stretch, and shear deformations. The MTDDBM is just as economical to use as a standard beam element, and allows accurate reconstruction of the mechanical behavior of structures within a cell, as exemplified in a simple model of a component element of the mitotic spindle.
Post-placement temperature reduction techniques
DEFF Research Database (Denmark)
Liu, Wei; Nannarelli, Alberto
2010-01-01
With technology scaled to the deep submicron era, temperature and temperature gradient have emerged as important design criteria. We propose two post-placement techniques to reduce peak temperature by intelligently allocating whitespace in the hotspots. Both methods are fully compliant with commercial ... technologies, and can be easily integrated with a state-of-the-art thermal-aware design flow. Experiments in a set of tests on circuits implemented in STM 65 nm technologies show that our methods achieve better peak temperature reduction than directly increasing the circuit's area. ...
Mousavi, Seyed Mahdi; Niaei, Aligholi; Salari, Dariush; Panahi, Parvaneh Nakhostin; Samandari, Masoud
2013-01-01
A response surface methodology (RSM) involving a central composite design was applied to the modelling and optimization of the preparation of Mn/active carbon nanocatalysts for NH3-SCR of NO at 250 °C, and the results were compared with the values predicted by an artificial neural network (ANN). The catalyst preparation parameters, including metal loading (wt%), calcination temperature and pre-oxidization degree (v/v% HNO3), were selected as factors influencing catalyst efficiency. In the RSM model, the predicted values of NO conversion were found to be in good agreement with the experimental values. Pareto graphic analysis showed that all the chosen parameters and some of their interactions had a significant effect on the response. The optimization results showed that maximum NO conversion was achieved at the optimum conditions: 10.2 v/v% HNO3, 6.1 wt% Mn loading and calcination at 480 °C. The ANN model was developed as a feed-forward back-propagation network with a 3-8-1 topology and a Levenberg-Marquardt training algorithm. The mean square errors for the ANN and RSM models were 0.339 and 1.176, respectively, and the R2 values were 0.991 and 0.972, respectively, indicating the superiority of the ANN in capturing the nonlinear behaviour of the system and in accurately estimating the NO conversion.
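As an illustration of the RSM side of such a comparison, the sketch below fits a full quadratic response surface by least squares and computes the MSE and R2 used above as figures of merit. The data are synthetic stand-ins; the paper's actual design points and responses are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(1)

# Coded factors standing in for HNO3 level, Mn loading and calcination
# temperature; the response mimics a NO-conversion surface with curvature.
n = 30
X = rng.uniform(-1.0, 1.0, size=(n, 3))
y = (60 + 8*X[:, 0] + 5*X[:, 1] - 6*X[:, 0]**2 - 4*X[:, 2]**2
     + 3*X[:, 0]*X[:, 1] + rng.normal(0.0, 1.0, n))

def quad_features(X):
    """Full quadratic RSM basis: intercept, linear, square and interaction terms."""
    d = X.shape[1]
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i]**2 for i in range(d)]
    cols += [X[:, i]*X[:, j] for i in range(d) for j in range(i + 1, d)]
    return np.column_stack(cols)

A = quad_features(X)
beta, *_ = np.linalg.lstsq(A, y, rcond=None)    # least-squares RSM fit
y_hat = A @ beta

mse = np.mean((y - y_hat)**2)
r2 = 1.0 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)
```

An ANN comparison would fit a separate nonlinear model to the same points and compare mse and r2 on held-out data.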
Reduction of chemical reaction models
Frenklach, Michael
1991-01-01
An attempt is made to reconcile the different terminologies pertaining to reduction of chemical reaction models. The approaches considered include global modeling, response modeling, detailed reduction, chemical lumping, and statistical lumping. The advantages and drawbacks of each of these methods are pointed out.
Classifying variability modeling techniques
Sinnema, Marco; Deelstra, Sybren
Variability modeling is important for managing variability in software product families, especially during product derivation. In the past few years, several variability modeling techniques have been developed, each using its own concepts to model the variability provided by a product family. The
Structured building model reduction toward parallel simulation
Energy Technology Data Exchange (ETDEWEB)
Dobbs, Justin R. [Cornell University; Hencey, Brondon M. [Cornell University
2013-08-26
Building energy model reduction exchanges accuracy for improved simulation speed by reducing the number of dynamical equations. Parallel computing aims to improve simulation times without loss of accuracy but is poorly utilized by contemporary simulators and is inherently limited by inter-processor communication. This paper bridges these disparate techniques to implement efficient parallel building thermal simulation. We begin with a survey of three structured reduction approaches that compares their performance to a leading unstructured method. We then use structured model reduction to find thermal clusters in the building energy model and allocate processing resources. Experimental results demonstrate faster simulation and low error without any interprocessor communication.
Oladyshkin, S.; Schroeder, P.; Class, H.; Nowak, W.
2013-12-01
Predicting underground carbon dioxide (CO2) storage represents a challenging problem in a complex dynamic system. Due to lacking information about reservoir parameters, quantification of uncertainties may become the dominant question in risk assessment. Calibration on past observed data from a pilot-scale test injection can improve the predictive power of the involved geological, flow, and transport models. The current work performs history matching to pressure time series from a pilot storage site operated in Europe, maintained during an injection period. Simulation of compressible two-phase flow and transport (CO2/brine) in the considered site is computationally very demanding, requiring about 12 days of CPU time for an individual model run. For that reason, brute-force approaches to calibration are not feasible. In the current work, we explore an advanced framework for history matching based on the arbitrary polynomial chaos expansion (aPC) and strict Bayesian principles. The aPC [1] offers a drastic but accurate stochastic model reduction. Unlike many previous chaos expansions, it can handle arbitrary probability distribution shapes of uncertain parameters, and can therefore directly handle the statistical information appearing during the matching procedure. In our study we keep the spatial heterogeneity suggested by geophysical methods, but consider uncertainty in the magnitude of permeability through zone-wise permeability multipliers. We capture the dependence of model output on these multipliers with the expansion-based reduced model. Next, we combined the aPC with bootstrap filtering (a brute-force but fully accurate Bayesian updating mechanism) in order to perform the matching. In comparison to (Ensemble) Kalman filters, our method accounts for higher-order statistical moments and for the non-linearity of both the forward model and the inversion, and thus allows a rigorous quantification of calibrated model uncertainty. The usually high computational costs of
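The core aPC step, building polynomials orthonormal with respect to an arbitrary, sample-based parameter distribution and projecting an expensive model onto them, can be sketched as follows. This is a simplified one-dimensional illustration; the model function and the bimodal parameter samples are hypothetical stand-ins for the reservoir simulator and the permeability multipliers:

```python
import numpy as np

rng = np.random.default_rng(2)

# Arbitrary (here bimodal) samples of an uncertain parameter, e.g. a
# permeability multiplier; no named distribution is assumed.
k = np.concatenate([rng.normal(-1.0, 0.3, 500), rng.normal(1.5, 0.5, 500)])

def model(k):
    """Hypothetical stand-in for the expensive reservoir simulator."""
    return np.exp(-0.5 * k) + 0.3 * k**2

# Gram-Schmidt on monomials under the empirical inner product <f, g> = mean(f*g)
# yields polynomials orthonormal w.r.t. the sampled distribution (the aPC idea).
degree = 4
basis = []
for d in range(degree + 1):
    p = k**d
    for b in basis:
        p = p - np.mean(p * b) * b
    basis.append(p / np.sqrt(np.mean(p * p)))
Psi = np.column_stack(basis)

# Orthonormality makes the least-squares fit a simple projection.
coeff = Psi.T @ model(k) / len(k)
surrogate = Psi @ coeff

rel_err = np.linalg.norm(model(k) - surrogate) / np.linalg.norm(model(k))
```

The cheap surrogate can then be evaluated millions of times inside a bootstrap filter instead of the 12-day simulator runs.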
Error reduction techniques for Monte Carlo neutron transport calculations
International Nuclear Information System (INIS)
Ju, J.H.W.
1981-01-01
Monte Carlo methods have been widely applied to problems in nuclear physics, mathematical reliability, communication theory, and other areas. The work in this thesis is developed mainly with neutron transport applications in mind. For nuclear reactor and many other applications, random walk processes have been used to estimate multi-dimensional integrals and obtain information about the solution of integral equations. When the analysis is statistically based, such calculations are often costly, and the development of efficient estimation techniques plays a critical role in these applications. All of the error reduction techniques developed in this work are applied to model problems. It is found that the nearly optimal parameters selected by the analytic method for use with the GWAN estimator are nearly identical to the parameters selected by the multistage method. Modified path length estimation (based on the path length importance measure) leads to excellent error reduction in all model problems examined. Finally, it should be pointed out that the techniques used for neutron transport problems may be transferred easily to other application areas which are based on random walk processes. The transport problems studied in this dissertation provide exceptionally severe tests of the error reduction potential of any sampling procedure. It is therefore expected that the methods of this dissertation will prove useful in many other application areas.
Alternating minimization algorithm for speckle reduction with a shifting technique.
Woo, Hyenkyun; Yun, Sangwoon
2012-04-01
Speckles (multiplicative noise) in synthetic aperture radar (SAR) make it difficult to interpret the observed image. Due to the edge-preserving feature of total variation (TV), variational models with TV regularization have attracted much interest in reducing speckles. Algorithms based on the augmented Lagrangian function have been proposed to efficiently solve speckle-reduction variational models with TV regularization. However, these algorithms require inner iterations or inverses involving the Laplacian operator at each iteration. In this paper, we adapt Tseng's alternating minimization algorithm with a shifting technique to efficiently remove the speckle without any inner iterations or inverses involving the Laplacian operator. The proposed method is very simple and highly parallelizable; therefore, it is very efficient to despeckle huge-size SAR images. Numerical results show that our proposed method outperforms the state-of-the-art algorithms for speckle-reduction variational models with a TV regularizer in terms of central-processing-unit time.
Microblowing Technique for Drag Reduction, Phase I
National Aeronautics and Space Administration — NASA seeks to develop technologies for aircraft drag reduction which contribute to improved aerodynamic efficiency in support of national goals for reducing fuel...
Multi-loop integrand reduction techniques
Badger, Simon; Zhang, Yang
2014-01-01
We review recent progress in D-dimensional integrand reduction algorithms for two loop amplitudes and give examples of their application to non-planar maximal cuts of the five-point all-plus helicity amplitude in QCD.
Chemical model reduction under uncertainty
Najm, Habib
2016-01-05
We outline a strategy for chemical kinetic model reduction under uncertainty. We present highlights of our existing deterministic model reduction strategy, and describe the extension of the formulation to include parametric uncertainty in the detailed mechanism. We discuss the utility of this construction, as applied to hydrocarbon fuel-air kinetics, and the associated use of uncertainty-aware measures of error between predictions from detailed and simplified models.
Outlier preservation by dimensionality reduction techniques
Onderwater, Martijn
2015-01-01
Sensors are increasingly part of our daily lives: motion detection, lighting control, and energy consumption all rely on sensors. Combining this information into, for instance, simple and comprehensive graphs can be quite challenging. Dimensionality reduction is often used to address this problem, by decreasing the number of variables in the data and looking for shorter representations. However, dimensionality reduction is often aimed at normal daily data, and applying it to event...
Time-Weighted Balanced Stochastic Model Reduction
DEFF Research Database (Denmark)
Tahavori, Maryamsadat; Shaker, Hamid Reza
2011-01-01
A new relative error model reduction technique for linear time invariant (LTI) systems is proposed in this paper. Both continuous and discrete time systems can be reduced within this framework. The proposed model reduction method is mainly based upon time-weighted balanced truncation and a recently ... developed inner-outer factorization technique. Compared to the other analogous counterparts, the proposed method shows to provide more accurate results in terms of time weighted norms, when applied to different practical examples. The results are further illustrated by a numerical example. ...
Dimensionality reduction in complex models
Boukouvalas, Alexis; Maniyar, Dharmesh M.; Cornford, Dan
2007-01-01
As a part of the Managing Uncertainty in Complex Models (MUCM) project, research at Aston University will develop methods for dimensionality reduction of the input and/or output spaces of models, as seen within the emulator framework. Towards this end this report describes a framework for generating toy datasets, whose underlying structure is understood, to facilitate early investigations of dimensionality reduction methods and to gain a deeper understanding of the algorithms employed, both i...
Three-dimensional dynamic range reduction techniques
Harding, Kevin G.; Qian, Xiaoping
2004-02-01
A significant limitation of the application of 3D structured light systems has been the large dynamic range of reflectivity of typical parts, such as machined parts. The advent of digital cameras has helped this problem to some extent by providing a larger dynamic range of detection, but often parts must still be coated with white paint or powder to get a good enough return for 3D measurement techniques such as structured light. This paper presents an overview of methods that have been used to minimize the range of light reflections from many parts, including polarization, multiple-exposure, multiple-viewing and masking techniques. Also presented are methods of analysis, such as phase analysis techniques, which can provide improved robustness. Finally, we discuss the pros and cons of these options as applied to the application of 3D structured light techniques to machined metal parts.
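The multiple-exposure idea can be sketched in a few lines: capture the same scene at two exposure times and, per pixel, keep the longest exposure that did not saturate. The sensor model below (noise-free, with a hypothetical full-well level) is only an illustration of the principle:

```python
import numpy as np

rng = np.random.default_rng(3)

full_well = 255.0                                   # hypothetical saturation level
reflectance = rng.uniform(0.01, 1.0, size=(8, 8))   # wide reflectivity range

def capture(exposure):
    """Idealized noise-free frame: the signal clips at the full well."""
    return np.clip(reflectance * exposure, 0.0, full_well)

short, long_ = capture(200.0), capture(2000.0)

# Prefer the long exposure (better signal in dark regions) unless it clipped;
# dividing by exposure time puts both frames in common radiance units.
merged = np.where(long_ >= full_well, short / 200.0, long_ / 2000.0)
```

With noise included, the long exposure would additionally offer better SNR in the dark regions it does not clip.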
Discussion on variance reduction technique for shielding
Energy Technology Data Exchange (ETDEWEB)
Maekawa, Fujio [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
As a task of the engineering design activity of the International Thermonuclear Experimental Reactor (ITER), a shielding experiment on 316-type stainless steel (SS316) and the compound system of SS316 and water has been carried out using the D-T neutron source of FNS at the Japan Atomic Energy Research Institute. In these analyses, however, enormous working time and computing time were required for determining the Weight Window parameters, and limitations and complications were encountered when carrying out variance reduction with the Weight Window method of the MCNP code. To avoid this difficulty, the effectiveness of variance reduction by the cell importance method was investigated. The conditions of calculation in all cases are shown. As the results, the distribution of the fractional standard deviation (FSD) of the neutron and gamma-ray fluxes along the shield depth is reported. There is an optimal importance change: when the importance was increased at the same rate as the attenuation of the neutron or gamma-ray flux, optimal variance reduction can be achieved. (K.I.)
Volume reduction techniques for solid radioactive wastes
International Nuclear Information System (INIS)
Clarke, J.H.
1980-01-01
This report gives an account of some of the techniques in current use in the UK for the treatment of solid radioactive wastes to reduce their volume prior to storage or disposal. Reference is also made to current research and development projects. It is based on a report presented at a recent International Atomic Energy Agency Technical Committee when this subject was the main theme. An IAEA Technical Series report covering techniques in use in all parts of the world should be published within the next two years. (author)
Model reduction of parametrized systems
Ohlberger, Mario; Patera, Anthony; Rozza, Gianluigi; Urban, Karsten
2017-01-01
The special volume offers a global guide to new concepts and approaches concerning the following topics: reduced basis methods, proper orthogonal decomposition, proper generalized decomposition, approximation theory related to model reduction, learning theory and compressed sensing, stochastic and high-dimensional problems, system-theoretic methods, nonlinear model reduction, reduction of coupled problems/multiphysics, optimization and optimal control, state estimation and control, reduced order models and domain decomposition methods, Krylov-subspace and interpolatory methods, and applications to real industrial and complex problems. The book represents the state of the art in the development of reduced order methods. It contains contributions from internationally respected experts, guaranteeing a wide range of expertise and topics. Further, it reflects an important effort, carried out over the last 12 years, to build a growing research community in this field. Though not a textbook, some of the chapters ca...
Treur, M.; Postma, M.
2014-01-01
Objectives: Patient-level simulation models provide increased flexibility to overcome the limitations of cohort-based approaches in health-economic analysis. However, the computational burden of reaching convergence is a notorious barrier. The objective was to assess the impact of using
Model Reduction of Hybrid Systems
DEFF Research Database (Denmark)
Shaker, Hamid Reza
... for model reduction of switched systems is based on the switching generalized gramians. The reduced-order switched system is guaranteed to be stable for all switching signals in this method. This framework uses stability conditions based on switching quadratic Lyapunov functions, which are less ... conservative than the stability conditions based on common quadratic Lyapunov functions. The stability conditions used for this method are very useful in model reduction and design problems because they have slack variables in the conditions. Similar conditions for a class of switched nonlinear ... High-technology solutions of today are characterized by complex dynamical models. A lot of these models have an inherent hybrid/switching structure. Hybrid/switched systems are powerful models for distributed embedded systems design where discrete controls are applied to continuous processes ...
Fringe biasing: A variance reduction technique for optically thick meshes
International Nuclear Information System (INIS)
Smedley-Stevenson, R. P.
2013-01-01
Fringe biasing is a stratified sampling scheme applicable to Monte Carlo thermal radiation transport codes. The thermal emission source in optically thick cells is partitioned into separate contributions from the cell interiors (where the likelihood of the particles escaping the cells is virtually zero) and the 'fringe' regions close to the cell boundaries. Thermal emission in the cell interiors can now be modelled with fewer particles, the remaining particles being concentrated in the fringes so that they are more likely to contribute to the energy exchange between cells. Unlike other techniques for improving the efficiency in optically thick regions (such as random walk and discrete diffusion treatments), fringe biasing has the benefit of simplicity, as the associated changes are restricted to the sourcing routines with the particle tracking routines being unaffected. This paper presents an analysis of the potential for variance reduction achieved from employing the fringe biasing technique. The aim of this analysis is to guide the implementation of this technique in Monte Carlo thermal radiation codes, specifically in order to aid the choice of the fringe width and the proportion of particles allocated to the fringe (which are interrelated) in multi-dimensional simulations, and to confirm that the significant levels of variance reduction achieved in simulations can be understood by studying the behaviour for simple test cases. The variance reduction properties are studied for a single cell in a slab geometry purely absorbing medium, investigating the accuracy of the scalar flux and current tallies on one of the interfaces with the surrounding medium. (authors)
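The variance-reduction mechanism can be illustrated with a toy stratified estimator. For thermal emission at depth x in a purely absorbing slab [0, L], the escape probability through the far face is exp(-sigma(L - x)), so most escaping particles originate in a thin fringe near x = L; the sketch below (with arbitrarily chosen sigma, fringe width, and particle split) concentrates samples there while keeping the estimator unbiased:

```python
import math
import random

random.seed(4)

sigma, L, w = 8.0, 1.0, 0.25      # optical thickness and fringe width (arbitrary)
n_interior, n_fringe = 200, 1800  # concentrate particles in the fringe

def escape(x):
    """Escape probability through the face at x = L of a purely absorbing slab."""
    return math.exp(-sigma * (L - x))

# Stratified estimator: each stratum's sample mean is weighted by its width / L,
# so the estimate stays unbiased regardless of how particles are allocated.
interior = sum(escape(random.uniform(0.0, L - w)) for _ in range(n_interior)) / n_interior
fringe = sum(escape(random.uniform(L - w, L)) for _ in range(n_fringe)) / n_fringe
estimate = ((L - w) * interior + w * fringe) / L

exact = (1.0 - math.exp(-sigma * L)) / (sigma * L)   # analytic mean escape
```

Because the interior contributes almost nothing to the escape tally, a few hundred interior particles suffice while the fringe gets the bulk of the sampling effort.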
Randomized Local Model Order Reduction
Buhr, Andreas; Smetana, Kathrin
2017-01-01
In this paper we propose local approximation spaces for localized model order reduction procedures such as domain decomposition and multiscale methods. Those spaces are constructed from local solutions of the partial differential equation (PDE) with random boundary conditions, yield an approximation
An indirect reduction technique for percutaneous fixation of calcaneus fractures.
Klima, Matthew; Vlasak, Richard; Sadasivan, Kalia
2014-07-01
We describe a positioning and indirect reduction method that allows for earlier fixation of some displaced calcaneus fractures. Minimally invasive surgery with this technique can provide good results in high-risk patients while minimizing soft-tissue complications.
Exploring the CAESAR database using dimensionality reduction techniques
Mendoza-Schrock, Olga; Raymer, Michael L.
2012-06-01
The Civilian American and European Surface Anthropometry Resource (CAESAR) database, containing over 40 anthropometric measurements on over 4000 humans, has been extensively explored for pattern recognition and classification purposes using the raw, original data [1-4]. However, some of the anthropometric variables would be impossible to collect in an uncontrolled environment. Here, we explore the use of dimensionality reduction methods in concert with a variety of classification algorithms for gender classification using only those variables that are readily observable in an uncontrolled environment. Several dimensionality reduction techniques are employed to learn the underlying structure of the data. These techniques include linear projections, such as classical Principal Components Analysis (PCA), and non-linear (manifold learning) techniques, such as Diffusion Maps and Isomap. This paper briefly describes all three techniques and compares three different classifiers, Naïve Bayes, Adaboost, and Support Vector Machines (SVM), for gender classification in conjunction with each of these three dimensionality reduction approaches.
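A minimal version of such a pipeline, PCA by SVD followed by a simple classifier, is sketched below. The data are synthetic two-class Gaussians, not the CAESAR measurements, and nearest-centroid stands in for the classifiers compared in the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

# Two synthetic "classes" in a 10-D measurement space (not the CAESAR data).
n = 200
shift = np.linspace(0.0, 3.0, 10)
X = np.vstack([rng.normal(size=(n, 10)) + shift,
               rng.normal(size=(n, 10)) - shift])
y = np.array([0] * n + [1] * n)

# PCA: center the data, take the SVD, project onto the top 2 components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# Nearest-centroid classification in the reduced 2-D space.
c0, c1 = Z[y == 0].mean(axis=0), Z[y == 1].mean(axis=0)
pred = (np.linalg.norm(Z - c1, axis=1) < np.linalg.norm(Z - c0, axis=1)).astype(int)
accuracy = (pred == y).mean()
```

Manifold methods such as Isomap or Diffusion Maps would replace only the projection step; the classifier comparison downstream is unchanged.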
A Dimensionality Reduction Technique for Efficient Time Series Similarity Analysis
Wang, Qiang; Megalooikonomou, Vasileios
2008-01-01
We propose a dimensionality reduction technique for time series analysis that significantly improves the efficiency and accuracy of similarity searches. In contrast to piecewise constant approximation (PCA) techniques that approximate each time series with constant value segments, the proposed method--Piecewise Vector Quantized Approximation--uses the closest (based on a distance measure) codeword from a codebook of key-sequences to represent each segment. The new representation is symbolic and it allows for the application of text-based retrieval techniques into time series similarity analysis. Experiments on real and simulated datasets show that the proposed technique generally outperforms PCA techniques in clustering and similarity searches. PMID:18496587
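The quantized-approximation idea can be sketched as follows: split the series into fixed-length segments and replace each by the index of its nearest codeword. The codebook here is learned with a few plain k-means iterations, which may differ from the authors' codebook construction:

```python
import numpy as np

rng = np.random.default_rng(6)

def segments(series, seg_len):
    n = len(series) // seg_len
    return series[:n * seg_len].reshape(n, seg_len)

def build_codebook(train_segs, k, iters=10):
    """Simple k-means codebook over segments (illustrative, not the paper's)."""
    codebook = train_segs[rng.choice(len(train_segs), k, replace=False)]
    for _ in range(iters):
        # Assign each segment to its nearest codeword, then update centroids.
        d = np.linalg.norm(train_segs[:, None, :] - codebook[None], axis=2)
        label = d.argmin(axis=1)
        for j in range(k):
            if np.any(label == j):
                codebook[j] = train_segs[label == j].mean(axis=0)
    return codebook

def encode(series, codebook, seg_len):
    segs = segments(series, seg_len)
    d = np.linalg.norm(segs[:, None, :] - codebook[None], axis=2)
    return d.argmin(axis=1)                 # symbolic representation

t = np.linspace(0, 8 * np.pi, 512)
series = np.sin(t) + 0.05 * rng.standard_normal(512)
cb = build_codebook(segments(series, 8), k=16)
symbols = encode(series, cb, seg_len=8)
reconstruction = cb[symbols].ravel()
```

The 512-point series is compressed to 64 codeword indices, and the symbolic sequence is what text-retrieval-style similarity methods can then operate on.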
Chemical model reduction under uncertainty
Malpica Galassi, Riccardo
2017-03-06
A general strategy for analysis and reduction of uncertain chemical kinetic models is presented, and its utility is illustrated in the context of ignition of hydrocarbon fuel–air mixtures. The strategy is based on a deterministic analysis and reduction method which employs computational singular perturbation analysis to generate simplified kinetic mechanisms, starting from a detailed reference mechanism. We model uncertain quantities in the reference mechanism, namely the Arrhenius rate parameters, as random variables with prescribed uncertainty factors. We propagate this uncertainty to obtain the probability of inclusion of each reaction in the simplified mechanism. We propose probabilistic error measures to compare predictions from the uncertain reference and simplified models, based on the comparison of the uncertain dynamics of the state variables, where the mixture entropy is chosen as progress variable. We employ the construction for the simplification of an uncertain mechanism in an n-butane–air mixture homogeneous ignition case, where a 176-species, 1111-reactions detailed kinetic model for the oxidation of n-butane is used with uncertainty factors assigned to each Arrhenius rate pre-exponential coefficient. This illustration is employed to highlight the utility of the construction, and the performance of a family of simplified models produced depending on chosen thresholds on importance and marginal probabilities of the reactions.
Daghir-Wojtkowiak, Emilia; Wiczling, Paweł; Bocian, Szymon; Kubik, Łukasz; Kośliński, Piotr; Buszewski, Bogusław; Kaliszan, Roman; Markuszewski, Michał Jan
2015-07-17
The objective of this study was to model the retention of nucleosides and pterins in hydrophilic interaction liquid chromatography (HILIC) via a QSRR-based approach. Two home-made (Amino-P-C18, Amino-P-C10) and one commercial (IAM.PC.DD2) HILIC stationary phases were considered. The logarithm of the retention factor at 5% acetonitrile (log kACN), along with descriptors obtained for 16 nucleosides and 11 pterins, was used to develop QSRR models. We used and compared the predictive performance of three regression techniques: partial least squares (PLS), the least absolute shrinkage and selection operator (LASSO), and the LASSO followed by stepwise multiple linear regression. The highest predictive squared correlation coefficient (Q2LOOCV) in PLS analysis was found for Amino-P-C10 (Q2LOOCV = 0.687) and IAM.PC.DD2 (Q2LOOCV = 0.506) and the lowest for IAM.PC.DD2 (Q2LOOCV = -0.01). Much higher values were obtained for the LASSO model: Q2LOOCV equalled 0.9 for Amino-P-C10, 0.66 for IAM.PC.DD2 and 0.59 for Amino-P-C18. The combination of the LASSO with stepwise regression provided models with predictive performance comparable to the LASSO alone, but with the possibility of calculating the standard errors of the estimates. The use of the LASSO, by itself and in combination with classical stepwise regression, may offer greater stability of the developed models thanks to smoother changes of the coefficients and reduced susceptibility to chance correlation. Application of a QSRR-based approach, along with the computational methods proposed in this work, may offer a useful approach to modelling the retention of nucleoside and pterin compounds in HILIC. Copyright © 2015 Elsevier B.V. All rights reserved.
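For readers unfamiliar with the LASSO step, the sketch below implements it by cyclic coordinate descent with soft-thresholding on synthetic, standardized descriptors (this is not the authors' code, and the descriptor data are invented). Note how coefficients of irrelevant descriptors are driven exactly to zero, which is the sparsity property exploited for QSRR model selection:

```python
import numpy as np

rng = np.random.default_rng(7)

n, p = 60, 10
X = rng.standard_normal((n, p))
X = (X - X.mean(axis=0)) / X.std(axis=0)        # standardized descriptors
# Retention depends on three descriptors only; the rest are irrelevant.
beta_true = np.array([1.5, -2.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)
y = y - y.mean()

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, iters=200):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam * ||b||_1."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        for j in range(X.shape[1]):
            r = y - X @ beta + X[:, j] * beta[j]      # partial residual
            z = X[:, j] @ r / len(y)
            beta[j] = soft_threshold(z, lam) / (X[:, j] @ X[:, j] / len(y))
    return beta

beta_hat = lasso_cd(X, y, lam=0.1)
```

In practice the penalty lam would be chosen by cross-validation (e.g. leave-one-out, as the Q2LOOCV values above suggest).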
Semiconductor Modeling Techniques
Xavier, Marie
2012-01-01
This book describes the key theoretical techniques of semiconductor research for quantitatively calculating and simulating semiconductor properties. It presents particular techniques to study novel semiconductor materials, such as 2D heterostructures, quantum wires, quantum dots and nitrogen-containing III-V alloys. The book is aimed primarily at newcomers to the field of semiconductor physics, giving guidance in both theory and experiment. The theoretical techniques for electronic and optoelectronic devices are explained in detail.
Case report macroglossia: Review and application of tongue reduction technique
Directory of Open Access Journals (Sweden)
Bilommi R. Irhamni
2015-05-01
Congenital macroglossia is an uncommon condition. Enlargement can be true enlargement, as seen in vascular malformations or muscular enlargement. It may cause significant symptoms in children such as sleep apnea, respiratory distress, drooling, difficulty in swallowing and dysarthria. Long-standing macroglossia leads to an anterior open-bite deformity, mucosal changes, exposure to potential trauma, an increased incidence of upper respiratory tract infections and failure to thrive. Tongue movements, sounds and speech articulation may also be affected. It is important to achieve uniform global reduction of the enlarged tongue for functional as well as esthetic reasons. The multiple techniques advocated for tongue reduction reveal that an ideal procedure has yet to emerge. In our case report we describe a modified technique for global reduction of the tongue that preserves the taste, sensation and mobility of the tongue, suitable for cases of tongue enlargement due to muscular hypertrophy. It can be used for repeat reductions without jeopardizing the mobility and sensibility of the tongue.
Development and evaluation of thermal model reduction algorithms for spacecraft
Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus
2015-05-01
This paper is concerned with the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming and manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model, for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For the simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to the reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is a major concern here, which restricts the useful application of these methods. Additional model reduction methods have been developed which account for these constraints. The Matrix Reduction method allows the approximation of the differential equation to reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for the reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.
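Modal truncation, one standard option among the mathematical reduction techniques evaluated in such work (the ESATAN-compatible methods developed in the paper differ in detail), can be sketched for a linear thermal network C dT/dt = -K T + q as follows; the 1-D conduction chain and uniform heat load are illustrative only:

```python
import numpy as np

n, r = 50, 8
# 1-D conduction chain with fixed-temperature ends: tridiagonal conductance K.
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
q = np.ones(n)                      # uniform internal heat load

w, V = np.linalg.eigh(K)            # thermal modes, slowest (smallest w) first
Vr = V[:, :r]                       # keep the r slowest modes

# Steady state: the full model solves K T = q; the reduced model solves the
# projected r x r system (Vr^T K Vr) a = Vr^T q and lifts back via Vr.
T_full = np.linalg.solve(K, q)
a = np.linalg.solve(Vr.T @ K @ Vr, Vr.T @ q)
T_red = Vr @ a

rel_err = np.linalg.norm(T_full - T_red) / np.linalg.norm(T_full)
```

The reduced system has 8 unknowns instead of 50 yet reproduces the smooth temperature field closely, which is the accuracy-for-speed exchange the abstract describes.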
Application of chaotic noise reduction techniques to chaotic data ...
Indian Academy of Sciences (India)
We propose a novel method of combining artificial neural networks (ANNs) with chaotic noise reduction techniques that captures the metric and dynamic invariants of a ... Computational Materials Science, Unit-I, Regional Research Laboratory (CSIR), Thiruvananthapuram 695 019, India; Department of Computer Science, ...
Power-reduction techniques for data-center storage systems
Bostoen, Tom; Mullender, Sape J.; Berbers, Yolande
As data-intensive, network-based applications proliferate, the power consumed by the data-center storage subsystem surges. This survey summarizes, organizes, and integrates a decade of research on power-aware enterprise storage systems. All of the existing power-reduction techniques are classified
Effect of Toloposogo Creativity Technique in the Reduction of ...
African Journals Online (AJOL)
This study examined the effects of the TO-LO-PO-SO-GO creativity technique in the reduction of psychopathological behaviours of some adolescents in Nigerian prisons. Sixty adolescent prisoners randomly selected from Ikoyi Lagos and Abeokuta prisons, whose ages ranged from 18 to 21 with a mean of 19.5, were randomly ...
Energy Technology Data Exchange (ETDEWEB)
Khawaja, Ranish Deedar Ali, E-mail: rkhawaja@mgh.harvard.edu [MGH Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Singh, Sarabjeet; Blake, Michael; Harisinghani, Mukesh; Choy, Gary; Karosmangulu, Ali; Padole, Atul; Do, Synho [MGH Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Brown, Kevin; Thompson, Richard; Morton, Thomas; Raihani, Nilgoun [CT Research and Advanced Development, Philips Healthcare, Cleveland, OH (United States); Koehler, Thomas [Philips Technologie GmbH, Innovative Technologies, Hamburg (Germany); Kalra, Mannudeep K. [MGH Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States)
2015-01-15
Highlights: • Limited abdominal CT indications can be performed at a size-specific dose estimate (SSDE) of 1.5 mGy (∼0.9 mSv) in smaller patients (BMI ≤ 25 kg/m²) using a knowledge-based Iterative Model Reconstruction (IMR) technique. • Evaluation of liver tumors and pathologies is unacceptable at this reduced dose with the IMR technique, especially in patients with a BMI greater than 25 kg/m². • IMR body soft tissue and routine settings perform substantially better than the IMR sharp plus setting in reduced-dose CT images. • At an SSDE of 1.5 mGy, objective image noise in reduced-dose IMR images is 8–56% lower than in standard-dose FBP images, with the lowest image noise in IMR body-soft-tissue images. - Abstract: Purpose: To assess lesion detection and image quality parameters of a knowledge-based Iterative Model Reconstruction (IMR) in reduced dose (RD) abdominal CT examinations. Materials and methods: This IRB-approved prospective study included 82 abdominal CT examinations performed for 41 consecutive patients (mean age, 62 ± 12 years; F:M 28:13) who underwent a RD CT (SSDE, 1.5 mGy ± 0.4 [∼0.9 mSv] at 120 kV with 17–20 mAs/slice) immediately after their standard dose (SD) CT exam (10 mGy ± 3 [∼6 mSv] at 120 kV with automatic exposure control) on a 256-slice MDCT scanner (iCT, Philips Healthcare). SD data were reconstructed using filtered back projection (FBP). RD data were reconstructed with FBP and IMR. Four radiologists used a five-point scale (1 = image quality better than SD CT to 5 = image quality unacceptable) to assess both subjective image quality and artifacts. Lesions were first detected on RD FBP images. RD IMR and RD FBP images were then compared side-by-side to SD FBP images in an independent, randomized and blinded fashion. Friedman's test and intraclass correlation coefficient were used for data analysis. Objective measurements included image noise and attenuation as well as noise spectral density (NSD) curves.
Relative Error Model Reduction via Time-Weighted Balanced Stochastic Singular Perturbation
DEFF Research Database (Denmark)
Tahavori, Maryamsadat; Shaker, Hamid Reza
2012-01-01
A new mixed method for relative error model reduction of linear time invariant (LTI) systems is proposed in this paper. This order reduction technique is mainly based upon the time-weighted balanced stochastic model reduction method and the singular perturbation model reduction technique. Compared...
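Singular perturbation approximation, one of the two building blocks named in this abstract, is easy to sketch in numpy. The function below is an illustrative implementation for a generic (already ordered or balanced) realization, not the authors' time-weighted variant, and the toy system is invented for demonstration. It keeps the first r (slow) states and residualizes the rest, which preserves the DC gain exactly:

```python
import numpy as np

def singular_perturbation_approx(A, B, C, D, r):
    """Keep the first r states of (A, B, C, D); residualize the remaining fast states."""
    A11, A12 = A[:r, :r], A[:r, r:]
    A21, A22 = A[r:, :r], A[r:, r:]
    B1, B2 = B[:r], B[r:]
    C1, C2 = C[:, :r], C[:, r:]
    A22inv = np.linalg.inv(A22)        # fast block assumed invertible
    Ar = A11 - A12 @ A22inv @ A21      # Schur complement of the fast dynamics
    Br = B1 - A12 @ A22inv @ B2
    Cr = C1 - C2 @ A22inv @ A21
    Dr = D - C2 @ A22inv @ B2
    return Ar, Br, Cr, Dr

# Toy stable SISO system: shift a random matrix left of the imaginary axis
rng = np.random.default_rng(0)
n, r = 6, 2
M = rng.standard_normal((n, n))
A = M - (np.linalg.norm(M, 2) + 1.0) * np.eye(n)
B, C, D = rng.standard_normal((n, 1)), rng.standard_normal((1, n)), np.zeros((1, 1))
Ar, Br, Cr, Dr = singular_perturbation_approx(A, B, C, D, r)
dc_full = (D - C @ np.linalg.solve(A, B)).item()   # G(0) of the full model
dc_red = (Dr - Cr @ np.linalg.solve(Ar, Br)).item()  # G(0) of the reduced model
```

In contrast to plain truncation, which matches the full model at high frequencies, residualization matches it at zero frequency, which is why mixed methods like the one proposed here combine the two.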
Mathematical modelling techniques
Aris, Rutherford
1995-01-01
""Engaging, elegantly written."" - Applied Mathematical ModellingMathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models.The author begins with a discussion of the term ""model,"" followed by clearly presented examples of the different types of mode
Model Reduction of Switched Systems Based on Switching Generalized Gramians
DEFF Research Database (Denmark)
Shaker, Hamid Reza; Wisniewski, Rafal
2012-01-01
In this paper, a general method for model order reduction of discrete-time switched linear systems is presented. The proposed technique uses switching generalized gramians. It is shown that several classical reduction methods can be developed into the generalized gramian framework for the model reduction of linear systems and for the reduction of switched systems. Discrete-time balanced reduction within a specified frequency interval is taken as an example within this framework. To avoid numerical instability and to increase the numerical efficiency, a generalized gramian-based Petrov...
Relativistic model for statevector reduction
International Nuclear Information System (INIS)
Pearle, P.
1991-04-01
A relativistic quantum field model describing statevector reduction for fermion states is presented. The time evolution of the states is governed by a Schroedinger equation with a Hamiltonian that has a Hermitian and a non-Hermitian part. In addition to the fermions, the Hermitian part describes positive and negative energy mesons of equal mass, analogous to the longitudinal and timelike photons of electromagnetism. The meson-field-sum is coupled to the fermion field. This ''dresses'' each fermion so that, in the extreme nonrelativistic limit (non-moving fermions), a fermion in a position eigenstate is also in an eigenstate of the meson-field-difference with the Yukawa-potential as eigenvalue. However, the fermions do not interact: this is a theory of free dressed fermions. It is possible to obtain a stationary normalized ''vacuum'' state which satisfies two conditions analogous to the gauge conditions of electromagnetism (i.e., that the meson-field-difference, as well as its time derivative, give zero when applied to the vacuum state), to any desired degree of accuracy. The non-Hermitian part of the Hamiltonian contains the coupling of the meson-field-difference to an externally imposed c-number fluctuating white noise field, of the CSL (Continuous Spontaneous Localization) form. This causes statevector reduction, as is shown in the extreme nonrelativistic limit. For example, a superposition of spatially separated wavepackets of a fermion will eventually be reduced to a single wavepacket: the meson-field-difference discriminates among the Yukawa-potential ''handles'' attached to each wavepacket, thereby selecting one wavepacket to survive by the CSL mechanism. Analysis beyond that given in this paper is required to see what happens when the fermions are allowed to move. (It is possible that the ''vacuum'' state becomes involved in the dynamics so that the ''gauge'' conditions can no longer be maintained.) It is shown how to incorporate these ideas into quantum
Boundary representation modelling techniques
2006-01-01
Provides the most complete presentation of boundary representation solid modelling yet publishedOffers basic reference information for software developers, application developers and users Includes a historical perspective as well as giving a background for modern research.
A Comparison of Speckle Reduction Techniques in Medical Ultrasound Imaging
Directory of Open Access Journals (Sweden)
Cristina STOLOJESCU-CRISAN
2015-06-01
Full Text Available Speckle noise is a multiplicative noise that degrades visual evaluation in ultrasound imaging. In addition, it limits the efficient application of intelligent image processing algorithms, such as segmentation techniques. Thus, speckle noise reduction is considered an essential pre-processing step. The objective of this paper is to carry out a comparative evaluation of speckle filtering techniques based on two image quality evaluation metrics, the Peak Signal-to-Noise Ratio (PSNR) and the Structural SIMilarity (SSIM) index, as well as visual evaluation.
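The two metrics used in such comparisons are straightforward to compute. The numpy sketch below is illustrative only: the speckle field and the averaging "filter" are synthetic stand-ins for the paper's data and despeckling filters, and `global_ssim` is a simplified single-window variant of SSIM rather than the standard sliding-window form:

```python
import numpy as np

def psnr(ref, img, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between a reference and a filtered image."""
    mse = np.mean((ref.astype(float) - img.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def global_ssim(ref, img, max_val=255.0):
    """Simplified SSIM computed over the whole image (no sliding Gaussian window)."""
    x, y = ref.astype(float), img.astype(float)
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (x.var() + y.var() + c2))

rng = np.random.default_rng(0)
clean = rng.uniform(50, 200, size=(64, 64))
# Multiplicative speckle: gamma-distributed gain with mean 1
speckled = clean * rng.gamma(10.0, 1.0 / 10.0, size=clean.shape)
filtered = 0.5 * (clean + speckled)   # toy stand-in for a despeckling filter's output
```

A comparative evaluation then ranks filters by how much they raise PSNR and SSIM relative to the noisy input.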
Model Order Reduction of Aeroservoelastic Model of Flexible Aircraft
Wang, Yi; Song, Hongjun; Pant, Kapil; Brenner, Martin J.; Suh, Peter
2016-01-01
This paper presents a holistic model order reduction (MOR) methodology and framework that integrates key technological elements of sequential model reduction, consistent model representation, and model interpolation for constructing high-quality linear parameter-varying (LPV) aeroservoelastic (ASE) reduced order models (ROMs) of flexible aircraft. The sequential MOR encapsulates a suite of reduction techniques, such as truncation and residualization, modal reduction, and balanced realization and truncation, to achieve optimal ROMs at grid points across the flight envelope. Consistency in state representation among local ROMs is obtained by the novel method of common subspace reprojection. Model interpolation is then exploited to stitch ROMs at grid points together to build a global LPV ASE ROM valid at arbitrary flight conditions. The MOR method is applied to the X-56A MUTT vehicle with a flexible wing being tested at NASA/AFRC for flutter suppression and gust load alleviation. Our studies demonstrated that, relative to the full-order model, our X-56A ROM can accurately and reliably capture vehicle dynamics at various flight conditions in the target frequency regime while the number of states in the ROM is reduced by 10X (from 180 to 19), and hence it holds great promise for robust ASE controller synthesis and novel vehicle design.
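Of the techniques in the suite, balanced realization and truncation is the most self-contained to illustrate. The following numpy/scipy sketch uses the square-root method on a toy stable system; it is a generic illustration under stated assumptions, not the X-56A pipeline, and the random system is invented for demonstration:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, svd

def sqrt_factor(X):
    """Factor L with L @ L.T == X for a symmetric positive semidefinite X."""
    w, V = np.linalg.eigh((X + X.T) / 2)
    return V @ np.diag(np.sqrt(np.clip(w, 0.0, None)))

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable LTI system dx/dt = Ax + Bu, y = Cx."""
    P = solve_continuous_lyapunov(A, -B @ B.T)    # controllability Gramian
    Q = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability Gramian
    Lc, Lo = sqrt_factor(P), sqrt_factor(Q)
    U, hsv, Vt = svd(Lo.T @ Lc)                   # hsv: Hankel singular values
    S = np.diag(hsv[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ S                         # right projection
    W = Lo @ U[:, :r] @ S                         # left projection, W.T @ T = I
    return W.T @ A @ T, W.T @ B, C @ T, hsv

rng = np.random.default_rng(1)
n = 6
M = rng.standard_normal((n, n))
A = M - (np.linalg.norm(M, 2) + 1.0) * np.eye(n)  # shifted to guarantee stability
B, C = rng.standard_normal((n, 1)), rng.standard_normal((1, n))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=3)
# DC-gain error respects the classical 2 * (sum of discarded hsv) bound
err = abs((C @ np.linalg.solve(A, B) - Cr @ np.linalg.solve(Ar, Br)).item())
```

The decay of the Hankel singular values is what makes order reductions like 180 to 19 plausible: states with negligible singular values contribute little to the input-output behavior.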
Power Backoff Reduction Techniques for Generalized Multicarrier Waveforms
Directory of Open Access Journals (Sweden)
Wesołowski K
2008-01-01
Full Text Available Amplification of generalized multicarrier (GMC) signals by high-power amplifiers (HPAs) before transmission can result in undesirable out-of-band spectral components, necessitating power backoff, and in low HPA efficiency. We evaluate variations of several peak-to-average power ratio (PAPR) reduction and HPA linearization techniques which were previously proposed for OFDM signals. Our main emphasis is on their applicability to the more general class of GMC signals, including serial modulation and DFT-precoded OFDM. The required power backoff is shown to depend on the type of signal transmitted, the specific HPA nonlinearity characteristic, and the spectrum mask which is imposed to limit adjacent channel interference. PAPR reduction and HPA linearization techniques are shown to be very effective when combined.
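PAPR itself, and the simplest family of reduction techniques studied in this literature (amplitude clipping), can be sketched in a few lines of numpy. The OFDM symbol size and the 3 dB clipping level below are toy choices for illustration, not parameters from the paper:

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio of a complex baseband signal, in dB."""
    p = np.abs(x) ** 2
    return 10.0 * np.log10(p.max() / p.mean())

def clip_amplitude(x, clip_ratio_db=3.0):
    """Clip the signal envelope at clip_ratio_db above the RMS level."""
    a_max = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (clip_ratio_db / 20)
    mag = np.maximum(np.abs(x), 1e-12)          # avoid division by zero
    return np.where(mag > a_max, x / mag * a_max, x)

# Toy OFDM symbol: 256 random QPSK subcarriers mapped to time domain via IFFT
rng = np.random.default_rng(2)
symbols = (rng.choice([-1, 1], 256) + 1j * rng.choice([-1, 1], 256)) / np.sqrt(2)
ofdm = np.fft.ifft(symbols)
clipped = clip_amplitude(ofdm, clip_ratio_db=3.0)
```

Clipping trades PAPR for in-band distortion and spectral regrowth, which is exactly why the paper evaluates it jointly with the HPA nonlinearity and the spectrum mask.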
On the selection of dimension reduction techniques for scientific applications
Energy Technology Data Exchange (ETDEWEB)
Fan, Y J; Kamath, C
2012-02-17
Many dimension reduction methods have been proposed to discover the intrinsic, lower dimensional structure of a high-dimensional dataset. However, determining critical features in datasets that consist of a large number of features is still a challenge. In this paper, through a series of carefully designed experiments on real-world datasets, we investigate the performance of different dimension reduction techniques, ranging from feature subset selection to methods that transform the features into a lower dimensional space. We also discuss methods that calculate the intrinsic dimensionality of a dataset in order to understand the reduced dimension. Using several evaluation strategies, we show how these different methods can provide useful insights into the data. These comparisons enable us to provide guidance to a user on the selection of a technique for their dataset.
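Among the transform-based reducers such comparisons typically include, principal component analysis is the canonical example. The numpy sketch below is a generic illustration on a synthetic dataset, not the paper's experimental setup; it projects data onto the top-k components and reports the variance they explain, a common proxy for intrinsic dimensionality:

```python
import numpy as np

def pca_reduce(X, k):
    """Project rows of X onto the top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)                       # center the data
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (s ** 2) / (s ** 2).sum()         # variance fraction per component
    return Xc @ Vt[:k].T, explained[:k]

# Toy dataset: 200 samples lying near a 2-D plane embedded in 10-D space
rng = np.random.default_rng(3)
latent = rng.standard_normal((200, 2))
mix = rng.standard_normal((2, 10))
X = latent @ mix + 0.05 * rng.standard_normal((200, 10))
Z, ev = pca_reduce(X, 2)   # Z: reduced coordinates, ev: explained variance
```

Feature subset selection, the other family the paper compares, keeps original features instead of mixing them, which preserves interpretability at the cost of possibly needing more dimensions.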
Combining Breast Reduction Techniques to Treat Gigantomastia in Ghana
Directory of Open Access Journals (Sweden)
Melody F. Scheefer, MD
2018-02-01
Full Text Available Summary: In this report of 2 consecutive cases of symptomatic juvenile breast hypertrophy in Ghana, we review the patient presentation and workup, and discuss outcomes following a combined technique of inferior pedicle stump with free nipple graft reduction mammoplasty. The surgical goals for treatment of gigantomastia were two-fold: to resect adequate tissue to obtain symptomatic relief with improved quality of life, while avoiding a flat, boxy-appearing breast shape.
Model Reduction in Groundwater Modeling and Management
Siade, A. J.; Kendall, D. R.; Putti, M.; Yeh, W. W.
2008-12-01
Groundwater management requires the development and implementation of mathematical models that, through simulation, evaluate the effects of anthropogenic impacts on an aquifer system. To obtain high levels of accuracy, one must incorporate high levels of complexity, resulting in computationally demanding models. This study provides a methodology for solving groundwater management problems with reduced computational effort by replacing the large, complex numerical model with a significantly smaller, simpler approximation. This is achieved via Proper Orthogonal Decomposition (POD), where the goal is to project the larger model solution space onto a smaller, reduced subspace in which the management problem is solved, achieving reductions in computation time of up to three orders of magnitude. Once the solution is obtained in the reduced space with acceptable accuracy, it is projected back to the full model space. A major challenge when using this method is the definition of the reduced solution subspace. In POD, this subspace is defined based on samples, or snapshots, taken at specific times from the solution of the full model. In this work we determine when snapshots should be taken on the basis of the exponential behavior of the governing partial differential equation. This selection strategy is then generalized for any groundwater model by obtaining and using the optimal snapshot selection for a simplified, dimensionless model. Protocols are developed to allow the snapshot selection results of the simplified, dimensionless model to be transferred to those of a complex, heterogeneous model with any geometry. The proposed methodology is finally applied to a basin in the Oristano Plain on the island of Sardinia, Italy.
Directory of Open Access Journals (Sweden)
S. Gimeno García
2012-09-01
Full Text Available Handling complexity to the smallest detail in atmospheric radiative transfer models is unfeasible in practice. On the one hand, the properties of the interacting medium, i.e., the atmosphere and the surface, are only available at a limited spatial resolution. On the other hand, the computational cost of accurate radiation models accounting for three-dimensional heterogeneous media is prohibitive for some applications, especially for climate modelling and operational remote-sensing algorithms. Hence, it is still common practice to use simplified models for atmospheric radiation applications.
Three-dimensional radiation models can deal with complex scenarios providing an accurate solution to the radiative transfer. In contrast, one-dimensional models are computationally more efficient, but introduce biases to the radiation results.
With the help of stochastic models that consider the multi-fractal nature of clouds, it is possible to scale cloud properties given at a coarse spatial resolution down to a higher resolution. Performing the radiative transfer within the cloud fields at higher spatial resolution noticeably helps to improve the radiation results.
We present a new Monte Carlo model, MoCaRT, that computes the radiative transfer in three-dimensional inhomogeneous atmospheres. The MoCaRT model is validated by comparison with the consensus results of the Intercomparison of Three-Dimensional Radiation Codes (I3RC) project.
In the framework of this paper, we aim at characterising cloud heterogeneity effects on radiances and broadband fluxes, namely: the errors due to unresolved variability (the so-called plane-parallel homogeneous, PPH, bias) and the errors due to the neglect of transversal photon displacements (independent pixel approximation, IPA, bias). First, we study the effect of the missing cloud variability on reflectivities. We will show that the generation of subscale variability by means of stochastic ...
Volume reduction philosophy and techniques in use or planned
Energy Technology Data Exchange (ETDEWEB)
Row, T.H.
1984-01-01
Siting and development of nuclear waste disposal facilities is an expensive task. In the private sector, such developments face siting and licensing issues, public intervention, and technology challenges. The United States Department of Energy (DOE) faces similar challenges in the management of waste generated by the research and production facilities. Volume reduction can be used to lengthen the service life of existing facilities. A wide variety of volume reduction techniques are applied to different waste forms. Compressible waste is compacted into drums, cardboard and metal boxes, and the loaded drums are supercompacted into smaller units. Large metallic items are size-reduced and melted for recycle or sent to shallow land burial. Anaerobic digestion is a process that can reduce cellulosic and animal wastes by 80%. Incinerators of all types have been investigated for application to nuclear wastes and a number of installations operate or are constructing units for low-level and transuranic solid and liquid combustibles. Technology may help solve many of the problems in volume reduction, but the human element also has an important part in solving the puzzle. Aggressive educational campaigns at two sites have proved very successful in reducing waste generation. This overview of volume reduction is intended to transfer the current information from many DOE facilities. 44 references, 85 figures, 5 tables.
Technique for Reduction of Environmental Pollution from Construction Wastes
Bakaeva, N. V.; Klimenko, M. Y.
2017-11-01
The results of research on the negative impact of construction wastes on the urban environment and on the ecological safety of construction are described. The results are based on statistical data and on indicators calculated using an environmental pollution assessment within the system for restoring the technical condition of urban buildings. The technique for reducing environmental pollution from construction wastes is based on a synthesis of scientific and practical results on ensuring ecological safety during major overhauls and current repairs (reconstruction) of buildings and structures, and on the practical application of probability theory, system analysis, and disperse system theory. Implementing the developed technique requires several stages, ranging from information collection to the formation of a system with optimal performance characteristics that is more resource-saving and energy-efficient for the accumulation of construction wastes from urban construction units. The studies address the following tasks: collection of basic data on construction waste accumulation; definition and comparison of technological combinations at each functional stage of the system intended to reduce the discharge of construction wastes into the environment; calculation of assessment criteria for resource saving and energy efficiency; and determination of optimal working parameters for each implementation stage. Application of the technique in urban construction shows resource-saving criteria of 55.22% to 88.84%; the recycling potential of construction wastes is 450 million tons of damaged construction elements (parts).
Survey of semantic modeling techniques
Energy Technology Data Exchange (ETDEWEB)
Smith, C.L.
1975-07-01
The analysis of the semantics of programming languages was attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state of the art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.
Reduction and analysis techniques for infrared imaging data
Mccaughrean, Mark
1989-01-01
Infrared detector arrays are becoming increasingly available to the astronomy community, with a number of array cameras already in use at national observatories and others under development at many institutions. As the detector technology and imaging instruments grow more sophisticated, more attention is focused on the business of turning raw data into scientifically significant information: turning pictures into papers or, equivalently, astronomy into astrophysics, both accurately and efficiently. Also discussed are some of the factors that can be considered at each of three major stages: acquisition, reduction, and analysis, concentrating in particular on several of the questions most relevant to the techniques currently applied to near-infrared imaging.
Assessing clutter reduction in parallel coordinates using image processing techniques
Alhamaydh, Heba; Alzoubi, Hussein; Almasaeid, Hisham
2018-01-01
Information visualization has emerged as an important research field for multidimensional data and correlation analysis in recent years. Parallel coordinates (PCs) are one of the popular techniques for visualizing high-dimensional data. A problem with the PC technique is that it suffers from crowding, a clutter that hides important data and obscures information. Earlier research has been conducted to reduce clutter without loss of data content. We introduce the use of image processing techniques as an approach for assessing the performance of clutter reduction techniques in PCs. We use histogram analysis as our first measure, where the mean feature of the color histograms of the possible alternative orderings of coordinates for the PC images is calculated and compared. The second measure is the contrast feature extracted from the texture of PC images based on gray-level co-occurrence matrices. The results show that the best PC image is the one that has the minimal mean value of the color histogram feature and the maximal contrast value of the texture feature. In addition to its simplicity, the proposed assessment method has the advantage of objectively assessing alternative orderings of PC visualizations.
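The texture measure described, contrast derived from a gray-level co-occurrence matrix, is simple to compute directly. The sketch below handles only horizontal neighbor pairs (real toolkits such as scikit-image support multiple offsets and angles), and the two images are synthetic stand-ins for rendered PC plots, not the paper's data:

```python
import numpy as np

def glcm_contrast(img, levels=8):
    """GLCM contrast for horizontal neighbor pairs of a [0, 1] grayscale image."""
    q = np.minimum((img * levels).astype(int), levels - 1)  # quantize intensities
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1.0                                   # count co-occurrences
    glcm /= glcm.sum()                                      # normalize to probabilities
    i, j = np.indices(glcm.shape)
    return float(np.sum(glcm * (i - j) ** 2))  # large when neighbors differ strongly

rng = np.random.default_rng(5)
smooth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))  # gentle horizontal gradient
cluttered = rng.random((64, 64))                      # dense, crossing "polylines"
```

An axis ordering whose rendered plot scores higher on such a contrast feature exhibits sharper, better-separated line structure, which is the intuition behind using it to rank orderings.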
Data-Driven Model Order Reduction for Bayesian Inverse Problems
Cui, Tiangang
2014-01-06
One of the major challenges in using MCMC for the solution of inverse problems is the repeated evaluation of computationally expensive numerical models. We develop a data-driven projection-based model order reduction technique to reduce the computational cost of numerical PDE evaluations in this context.
Cogging Torque Reduction Techniques for Spoke-type IPMSM
Bahrim, F. S.; Sulaiman, E.; Kumar, R.; Jusoh, L. I.
2017-08-01
The spoke-type interior permanent magnet synchronous motor (IPMSM) is increasingly used in industry owing to its good flux-weakening capability and high power density. In many applications, the high strength of the permanent magnets causes undesirably high cogging torque, which can degrade motor performance. Cogging torque is significant in IPMSMs because of the similar length and effectiveness of the magnetic air-gap. The aim of this study is to analyze and compare the cogging torque effect and performance of four common cogging torque reduction techniques: skewing, notching, pole pairing and rotor pole pairing. With the aid of 3-D finite element analysis (FEA) in JMAG software, a 6S-4P spoke-type IPMSM with various rotor-PM configurations has been designed. As a result, the cogging torque was reduced by up to 69.5% with the skewing technique, followed by 31.96%, 29.6%, and 17.53% for the pole pairing, axial pole pairing, and notching techniques, respectively.
Interslice leakage artifact reduction technique for simultaneous multislice acquisitions.
Cauley, Stephen F; Polimeni, Jonathan R; Bhat, Himanshu; Wald, Lawrence L; Setsompop, Kawin
2014-07-01
Controlled aliasing techniques for simultaneously acquired echo-planar imaging slices have been shown to significantly increase the temporal efficiency for both diffusion-weighted imaging and functional magnetic resonance imaging studies. The "slice-GRAPPA" (SG) method has been widely used to reconstruct such data. We investigate robust optimization techniques for SG to ensure image reconstruction accuracy through a reduction of leakage artifacts. Split SG is proposed as an alternative kernel optimization method. The performance of Split SG is compared to standard SG using data collected on a spherical phantom and in vivo on two subjects at 3 T. Slice-accelerated and nonaccelerated data were collected for a spin-echo diffusion-weighted acquisition. Signal leakage metrics and time-series SNR were used to quantify the performance of the kernel fitting approaches. The Split SG optimization strategy significantly reduces leakage artifacts for both phantom and in vivo acquisitions. In addition, a significant boost in time-series SNR for in vivo diffusion-weighted acquisitions with in-plane 2× and slice 3× accelerations was observed with the Split SG approach. By minimizing the influence of leakage artifacts during the training of SG kernels, we have significantly improved reconstruction accuracy. Our robust kernel fitting strategy should enable better reconstruction accuracy and higher slice-acceleration across many applications. Copyright © 2013 Wiley Periodicals, Inc.
identification with model reduction issues
Directory of Open Access Journals (Sweden)
A. Bilbao-Guillerna
2005-01-01
with the multiestimation scheme instead of a high-order one. Depending on the frequency spectrum characteristics of the input and on the estimates evolution, the multiestimation scheme selects on-line the most appropriate model and its related estimation scheme in order to improve the identification and control performances. Robust closed-loop stability is proved even in the presence of unmodeled dynamics of sufficiently small sizes as it has been confirmed by simulation results. The scheme chooses in real time the estimator/controller associated with a particular reduced model possessing the best performance according to an identification performance index by implementing a switching rule between estimators. The switching rule is subject to a minimum residence time at each identifier/adaptive controller parameterization for closed-loop stabilization purposes. A conceptually simple higher-level supervisor, based on heuristic updating rules which estimate on-line the weights of the switching rule between estimation schemes, is discussed.
Advanced Atmospheric Ensemble Modeling Techniques
Energy Technology Data Exchange (ETDEWEB)
Buckley, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Chiswell, S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kurzeja, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Maze, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Viner, B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Werth, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)
2017-09-29
Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension of work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied: a coastal release (SF6) and an inland release (Freon) comprising two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce required computing resources for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release, where the spatial and temporal differences due to interior valley heating led to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement shown in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data are assimilated into the simulation. It enhances SRNL's capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as its ability to attract new customers within the intelligence community.
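A minimal sketch of the EnKF analysis step mentioned above (the stochastic, perturbed-observation variant), applied to a toy two-dimensional state. The model, observation operator, ensemble size, and noise levels are illustrative assumptions, unrelated to the SRNL transport setup.

```python
import numpy as np

def enkf_update(ensemble, y_obs, H, obs_var, rng):
    """One stochastic EnKF analysis step (perturbed-observation variant)."""
    n = ensemble.shape[0]
    Xa = ensemble - ensemble.mean(axis=0)            # state anomalies
    HX = ensemble @ H.T                              # ensemble in observation space
    HXa = HX - HX.mean(axis=0)                       # observed anomalies
    Pxy = Xa.T @ HXa / (n - 1)                       # state-observation covariance
    Pyy = HXa.T @ HXa / (n - 1) + obs_var * np.eye(len(y_obs))
    K = Pxy @ np.linalg.inv(Pyy)                     # Kalman gain
    # Perturbed observations keep the analysis ensemble spread consistent
    perturbed = y_obs + rng.normal(0.0, np.sqrt(obs_var), size=(n, len(y_obs)))
    return ensemble + (perturbed - HX) @ K.T

rng = np.random.default_rng(0)
truth = np.array([1.0, -2.0])                        # hidden "true" state
H = np.eye(2)                                        # observe the state directly
ens = rng.normal(0.0, 2.0, size=(20, 2))             # 20-member prior ensemble
for _ in range(50):
    y = truth + rng.normal(0.0, 0.1, size=2)         # noisy observation
    ens = enkf_update(ens, y, H, obs_var=0.01, rng=rng)
```

After repeated assimilation, the ensemble mean converges toward the true state while the spread contracts; a real application would interleave a forecast model step between updates.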
Radaydeh, Redha Mahmoud
2014-02-01
This paper studies generalized single-stream transmit beamforming employing receive array co-channel interference reduction algorithms under slow and flat fading multiuser wireless systems. The impact of imperfect prediction of channel state information for the desired user spatially uncorrelated transmit channels on the effectiveness of transmit beamforming for different interference reduction techniques is investigated. The case of an overloaded receive array with closely-spaced elements is considered, wherein it can be configured to specified interfering sources. Both dominant interference reduction and adaptive interference reduction techniques for statistically ordered and unordered interferers' powers, respectively, are thoroughly studied. The effect of outdated statistical ordering of the interferers' powers on the efficiency of dominant interference reduction is studied and then compared against the adaptive interference reduction. For the system models described above, new analytical formulations for the statistics of combined signal-to-interference-plus-noise ratio are presented, from which results for conventional maximum ratio transmission and single-antenna best transmit selection can be directly deduced as limiting cases. These results are then utilized to obtain quantitative measures for various performance metrics. They are also used to compare the achieved performance of various configuration models under consideration. © 1972-2012 IEEE.
Generalized Gramian Framework for Model/Controller Order Reduction of Switched Systems
DEFF Research Database (Denmark)
Shaker, Hamid Reza; Wisniewski, Rafal
2011-01-01
In this article, a general method for model/controller order reduction of switched linear dynamical systems is presented. The proposed technique is based on the generalised gramian framework for model reduction. It is shown that different classical reduction methods can be developed within the generalised gramian framework…
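The single-system building block of the gramian framework above can be sketched with classical square-root balanced truncation: solve the two Lyapunov equations for the controllability and observability gramians, balance via an SVD of their Cholesky factors, and truncate. The state-space matrices below are illustrative, not from the paper.

```python
import numpy as np
from scipy.linalg import cholesky, solve_continuous_lyapunov, svd

# Illustrative stable 3-state SISO system
A = np.array([[-1.0, 0.1, 0.0],
              [0.0, -2.0, 0.1],
              [0.0, 0.0, -5.0]])
B = np.array([[1.0], [1.0], [0.1]])
C = np.array([[1.0, 0.5, 0.1]])

# Gramians: A P + P A^T + B B^T = 0 and A^T Q + Q A + C^T C = 0
P = solve_continuous_lyapunov(A, -B @ B.T)       # controllability gramian
Q = solve_continuous_lyapunov(A.T, -C.T @ C)     # observability gramian

# Square-root balancing: SVD of Lo^T Lc gives Hankel singular values
Lc = cholesky(P, lower=True)
Lo = cholesky(Q, lower=True)
U, hsv, Vt = svd(Lo.T @ Lc)

r = 2                                            # keep the two largest Hankel values
T = Lc @ Vt[:r].T @ np.diag(hsv[:r] ** -0.5)     # x ≈ T @ x_r
Ti = np.diag(hsv[:r] ** -0.5) @ U[:, :r].T @ Lo.T
Ar, Br, Cr = Ti @ A @ T, Ti @ B, C @ T           # reduced-order realization
```

For a stable system the truncated model is stable and its H-infinity error is bounded by twice the sum of the discarded Hankel singular values; the switched-system extension replaces the two Lyapunov solutions with generalised gramians valid across all modes.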
Preservation of thermodynamic structure in model reduction.
Öttinger, Hans Christian
2015-03-01
Based on the availability of an invariant manifold, we develop a model-reduction procedure that preserves thermodynamic structure. More concretely, we construct the Poisson and irreversible brackets of the general equation for the nonequilibrium reversible-irreversible coupling of nonequilibrium thermodynamics by means of the ideas originally introduced for handling constraints. The general ideas are then applied to the Kramers problem, that is, the description of transitions between two potential wells separated by a high barrier. This example reveals how a fortuitous cancellation mechanism that allows a logarithmic entropy to generate a linear diffusion equation is inherited by a master equation resulting from model reduction.
COGNITIVE RESTRUCTURING: AN ALTERNATIVE COUNSELING TECHNIQUE TO REDUCE ACADEMIC PROCRASTINATION
Directory of Open Access Journals (Sweden)
Annisa Sofiana
2016-12-01
Full Text Available Procrastination is experienced by almost everyone, including students, who often delay completing their responsibilities in the academic process, which lowers individual academic achievement. Cognitive restructuring is one of the cognitive techniques used in counseling, alongside cognitive-behavioral and didactic techniques. This technique follows several procedures focused on identifying and changing dysfunctional thoughts or negative self-statements into new beliefs that are more rational and adaptive, which in turn promote more rational behavior. Cognitive restructuring is therefore assessed to be an alternative counseling technique for reducing academic procrastination.
A comparison of internal validation techniques for multifactor dimensionality reduction
2010-01-01
Background It is hypothesized that common, complex diseases may be due to complex interactions between genetic and environmental factors, which are difficult to detect in high-dimensional data using traditional statistical approaches. Multifactor Dimensionality Reduction (MDR) is the most commonly used data-mining method to detect epistatic interactions. In all data-mining methods, it is important to consider internal validation procedures to obtain prediction estimates to prevent model over-fitting and reduce potential false positive findings. Currently, MDR utilizes cross-validation for internal validation. In this study, we incorporate the use of a three-way split (3WS) of the data in combination with a post-hoc pruning procedure as an alternative to cross-validation for internal model validation to reduce computation time without impairing performance. We compare the power to detect true disease causing loci using MDR with both 5- and 10-fold cross-validation to MDR with 3WS for a range of single-locus and epistatic disease models. Additionally, we analyze a dataset in HIV immunogenetics to demonstrate the results of the two strategies on real data. Results MDR with 3WS is computationally approximately five times faster than 5-fold cross-validation. The power to find the exact true disease loci without detecting false positive loci is higher with 5-fold cross-validation than with 3WS before pruning. However, the power to find the true disease causing loci in addition to false positive loci is equivalent to the 3WS. With the incorporation of a pruning procedure after the 3WS, the power of the 3WS approach to detect only the exact disease loci is equivalent to that of MDR with cross-validation. In the real data application, the cross-validation and 3WS analyses indicate the same two-locus model. Conclusions Our results reveal that the performance of the two internal validation methods is equivalent with the use of pruning procedures. The specific pruning
A comparison of internal validation techniques for multifactor dimensionality reduction
Directory of Open Access Journals (Sweden)
Slater Andrew J
2010-07-01
Full Text Available Abstract Background It is hypothesized that common, complex diseases may be due to complex interactions between genetic and environmental factors, which are difficult to detect in high-dimensional data using traditional statistical approaches. Multifactor Dimensionality Reduction (MDR) is the most commonly used data-mining method to detect epistatic interactions. In all data-mining methods, it is important to consider internal validation procedures to obtain prediction estimates to prevent model over-fitting and reduce potential false positive findings. Currently, MDR utilizes cross-validation for internal validation. In this study, we incorporate the use of a three-way split (3WS) of the data in combination with a post-hoc pruning procedure as an alternative to cross-validation for internal model validation to reduce computation time without impairing performance. We compare the power to detect true disease causing loci using MDR with both 5- and 10-fold cross-validation to MDR with 3WS for a range of single-locus and epistatic disease models. Additionally, we analyze a dataset in HIV immunogenetics to demonstrate the results of the two strategies on real data. Results MDR with 3WS is computationally approximately five times faster than 5-fold cross-validation. The power to find the exact true disease loci without detecting false positive loci is higher with 5-fold cross-validation than with 3WS before pruning. However, the power to find the true disease causing loci in addition to false positive loci is equivalent to the 3WS. With the incorporation of a pruning procedure after the 3WS, the power of the 3WS approach to detect only the exact disease loci is equivalent to that of MDR with cross-validation. In the real data application, the cross-validation and 3WS analyses indicate the same two-locus model. Conclusions Our results reveal that the performance of the two internal validation methods is equivalent with the use of pruning procedures
A comparison of internal validation techniques for multifactor dimensionality reduction.
Winham, Stacey J; Slater, Andrew J; Motsinger-Reif, Alison A
2010-07-22
It is hypothesized that common, complex diseases may be due to complex interactions between genetic and environmental factors, which are difficult to detect in high-dimensional data using traditional statistical approaches. Multifactor Dimensionality Reduction (MDR) is the most commonly used data-mining method to detect epistatic interactions. In all data-mining methods, it is important to consider internal validation procedures to obtain prediction estimates to prevent model over-fitting and reduce potential false positive findings. Currently, MDR utilizes cross-validation for internal validation. In this study, we incorporate the use of a three-way split (3WS) of the data in combination with a post-hoc pruning procedure as an alternative to cross-validation for internal model validation to reduce computation time without impairing performance. We compare the power to detect true disease causing loci using MDR with both 5- and 10-fold cross-validation to MDR with 3WS for a range of single-locus and epistatic disease models. Additionally, we analyze a dataset in HIV immunogenetics to demonstrate the results of the two strategies on real data. MDR with 3WS is computationally approximately five times faster than 5-fold cross-validation. The power to find the exact true disease loci without detecting false positive loci is higher with 5-fold cross-validation than with 3WS before pruning. However, the power to find the true disease causing loci in addition to false positive loci is equivalent to the 3WS. With the incorporation of a pruning procedure after the 3WS, the power of the 3WS approach to detect only the exact disease loci is equivalent to that of MDR with cross-validation. In the real data application, the cross-validation and 3WS analyses indicate the same two-locus model. Our results reveal that the performance of the two internal validation methods is equivalent with the use of pruning procedures. The specific pruning procedure should be chosen
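The three-way split described above can be sketched as follows. The split fractions and the train/select/validate roles are illustrative assumptions; MDR's combinatorial search over locus combinations is omitted.

```python
import numpy as np

def three_way_split(n_samples, frac=(0.5, 0.25, 0.25), seed=0):
    """Disjoint index sets for a three-way split (3WS):
    train    -- fit each candidate model (e.g. each MDR locus combination),
    select   -- pick the best model per model size,
    validate -- held-out set giving the final, unbiased prediction estimate.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)          # shuffle sample indices once
    n_tr = int(frac[0] * n_samples)
    n_se = int(frac[1] * n_samples)
    return idx[:n_tr], idx[n_tr:n_tr + n_se], idx[n_tr + n_se:]

train, select, validate = three_way_split(100)
```

Unlike k-fold cross-validation, each candidate model is fit only once, which is where the roughly five-fold speed-up reported above comes from.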
Energy Reductions Using Next-Generation Remanufacturing Techniques
Energy Technology Data Exchange (ETDEWEB)
Sordelet, Daniel; Racek, Ondrej
2012-02-24
supported the Industrial Technologies Program's initiative titled 'Industrial Energy Efficiency Grand Challenge.' To contribute to this Grand Challenge, we pursued an innovative processing approach for the next generation of thermal spray coatings to capture substantial energy savings and greenhouse gas emission reductions through the remanufacturing of steel and aluminum-based components. The primary goal was to develop a new thermal spray coating process that yields significantly enhanced bond strength. To reach the goal of higher coating bond strength, a laser was coupled with a traditional twin-wire arc (TWA) spray gun to treat the component surface (i.e., heat or partially melt) during deposition. Both ferrous and aluminum-based substrates and coating alloys were examined to determine what materials are more suitable for the laser-assisted twin-wire arc coating technique. Coating adhesion was measured by static tensile and dynamic fatigue techniques, and the results helped to guide the identification of appropriate remanufacturing opportunities that will now be viable due to the increased bond strength of the laser-assisted twin-wire arc coatings. The feasibility of the laser-assisted TWA (LATWA) process was successfully demonstrated in this current effort. Critical processing parameters were identified, and when these were properly controlled, a strong diffusion bond was developed between the substrate and the deposited coating. Consequently, bond strengths were nearly doubled over those typically obtained using conventional grit-blast TWA coatings. Note, however, that successful LATWA processing was limited to ferrous substrates coated with steel coatings (e.g., 1020 and 1080 steel). With Al-based substrates, it was not possible to avoid melting a thin layer of the substrate during spraying, and this layer re-solidified to form a band of intermetallic phases at the substrate/coating interface, which significantly diminished the coating adhesion. The
Dimensional reduction for a SIR type model
Cahyono, Edi; Soeharyadi, Yudi; Mukhsar
2018-03-01
Epidemic phenomena are often modeled in the form of dynamical systems. Such models have also been used to model the spread of rumor, the spread of extreme ideology, and the dissemination of knowledge. Among the simplest is the SIR (susceptible, infected and recovered) model, which consists of three compartments, and hence three variables. The variables are functions of time representing the sizes of the subpopulations, namely susceptible, infected and recovered. The sum of the three is assumed to be constant; hence, the model is actually two dimensional, sitting in a three-dimensional ambient space. This paper deals with the reduction of a SIR type model to two variables in a two-dimensional ambient space in order to understand the geometry and dynamics better. The dynamics is studied, and the phase portrait is presented. The two-dimensional model preserves the equilibria and their stability. The model has been applied to knowledge dissemination, which has been an interest of knowledge management.
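The reduction described above, eliminating one compartment via the conserved sum S + I + R = N, can be sketched as follows. The rates β, γ and the initial condition are illustrative values, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Reduced SIR: with S + I + R = N constant, R is recovered algebraically,
# so the dynamics live entirely in the (S, I) plane.
beta, gamma, N = 0.3, 0.1, 1.0          # illustrative transmission/recovery rates

def reduced_sir(t, y):
    S, I = y
    dS = -beta * S * I / N              # susceptibles become infected
    dI = beta * S * I / N - gamma * I   # infected recover at rate gamma
    return [dS, dI]

sol = solve_ivp(reduced_sir, (0.0, 200.0), [0.99, 0.01], max_step=1.0)
S, I = sol.y
R = N - S - I                           # third compartment, recovered for free
```

The reduced two-dimensional system has the same equilibria and stability as the full model, which is exactly the preservation property the abstract highlights.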
Reduced order methods for modeling and computational reduction
Rozza, Gianluigi
2014-01-01
This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics. Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...
Reduction techniques in the management of atlantoaxial subluxation
Shetty, Arjun; Kumar, Anil; Chacko, Arjun; Guthe, Sachin; Kini, Abhishek R
2013-01-01
Background: The traditional approach to atlantoaxial subluxation which is irreducible after traction is transoral decompression and reduction or odontoid excision and posterior fixation. The transoral approach is associated with comorbidities. However, using a posterior approach with a combination of atlantoaxial joint space release and a variety of manipulation procedures, optimal or near-optimal reduction can be achieved. We analysed our results based on the above procedure. Materials and Methods: 66 cases treated over a 5-year period were evaluated retrospectively. Three cases treated by occipitocervical fusion were not included in the study. The remaining 63 cases were classified into three types. All except two cases were subjected to primary posterior C1-C2 joint space dissection and release followed by on-table manipulation tailored to the type of atlantoaxial subluxation. Optimal or near-optimal reduction was possible in all cases. An anterior transoral decompression was needed in only two cases, where a bony growth (callus) between the C1 anterior arch and the odontoid precluded reduction by posterior manipulation. All cases then underwent posterior fusion and fixation procedures. Patients were neurologically and radiologically evaluated at regular follow-ups to assess fusion and stability for a minimum period of 6 months. Results: Of the 63 cases who underwent posterior manipulation, 49 achieved optimum reduction and the remaining 14 showed near-optimal reduction. Two cases expired in the postoperative period. None of the remaining cases showed neurological worsening after the procedure. Evaluation at 6 months after surgery revealed good stability and fusion in all except three cases. Conclusion: Atlantoaxial joint release and manipulation can be used to achieve reduction in most cases of atlantoaxial subluxation, obviating the need for transoral odontoid excision. PMID:23960275
Model reduction in integrated controls-structures design
Maghami, Peiman G.
1993-01-01
It is the objective of this paper to present a model reduction technique developed for the integrated controls-structures design of flexible structures. Integrated controls-structures design problems are typically posed as nonlinear mathematical programming problems, where the design variables consist of both structural and control parameters. In the solution process, both structural and control design variables are constantly changing; therefore, the dynamic characteristics of the structure are also changing. This presents a problem in obtaining a reduced-order model for active control design and analysis which will be valid for all design points within the design space. In other words, the frequency and number of the significant modes of the structure (modes that should be included) may vary considerably throughout the design process. This is also true as the locations and/or masses of the sensors and actuators change. Moreover, since the number of design evaluations in the integrated design process could easily run into thousands, any feasible order-reduction method should not require model reduction analysis at every design iteration. In this paper a novel and efficient technique for model reduction in the integrated controls-structures design process, which addresses these issues, is presented.
Radon Reduction Techniques in Schools: Interim Technical Guidance.
Environmental Protection Agency, Washington, DC.
This technical document is intended to assist school facilities maintenance personnel in the selection, design, and operation of radon reduction systems in schools. The guidance contained in this document is based largely on research conducted in 1987 and 1988 in schools located in Maryland and Virginia. Researchers from the United States…
Reduction techniques in the management of atlantoaxial subluxation
Directory of Open Access Journals (Sweden)
Arjun Shetty
2013-01-01
Materials and Methods: 66 cases treated over a 5-year period were evaluated retrospectively. Three cases treated by occipitocervical fusion were not included in the study. The remaining 63 cases were classified into three types. All except two cases were subjected to primary posterior C1-C2 joint space dissection and release followed by on-table manipulation tailored to the type of atlantoaxial subluxation. Optimal or near-optimal reduction was possible in all cases. An anterior transoral decompression was needed in only two cases, where a bony growth (callus) between the C1 anterior arch and the odontoid precluded reduction by posterior manipulation. All cases then underwent posterior fusion and fixation procedures. Patients were neurologically and radiologically evaluated at regular follow-ups to assess fusion and stability for a minimum period of 6 months. Results: Of the 63 cases who underwent posterior manipulation, 49 achieved optimum reduction and the remaining 14 showed near-optimal reduction. Two cases expired in the postoperative period. None of the remaining cases showed neurological worsening after the procedure. Evaluation at 6 months after surgery revealed good stability and fusion in all except three cases.
Energy Technology Data Exchange (ETDEWEB)
Bouscaren, R. [CITEPA, Centre Interprofessionnel Technique d`Etudes de la Pollution Atmospherique, 75 - Paris (France)
1996-12-31
Separating techniques offer a large choice of procedures for air pollution reduction in combustion plants: mechanical, electrical, filtering, hydraulic, chemical, physical, catalytic, thermal and biological processes. Much environment-friendly equipment uses such separating techniques, particularly for dust cleaning and fume desulfurization and, more recently, for the abatement of volatile organic pollutants or dioxins and furans. These processes are briefly described.
Model Order Reduction for Non Linear Mechanics
Pinillo, Rubén
2017-01-01
Context: Automotive industry is moving towards a new generation of cars. Main idea: Cars are furnished with radars, cameras, sensors, etc… providing useful information about the environment surrounding the car. Goals: Provide an efficient model for the radar input/output. Reducing computational costs by means of big data techniques.
Martingale models for quantum state reduction
Energy Technology Data Exchange (ETDEWEB)
Adler, S.L.; Brun, T.A. [Institute for Advanced Study, Princeton, NJ (United States)]. E-mails: adler@ias.edu; tbrun@ias.edu; Brody, D.C. [Blackett Laboratory, Imperial College, London (United Kingdom)]. E-mail: dorje@ic.ac.uk; Hughston, L.P. [Department of Mathematics, King's College, Strand, London (United Kingdom)]. E-mail: lane.hughston@kcl.ac.uk
2001-10-26
Stochastic models for quantum state reduction give rise to statistical laws that are in most respects in agreement with those of quantum measurement theory. Here we examine the correspondence of the two theories in detail, making a systematic use of the methods of martingale theory. An analysis is carried out to determine the magnitude of the fluctuations experienced by the expectation of the observable during the course of the reduction process and an upper bound is established for the ensemble average of the greatest fluctuations incurred. We consider the general projection postulate of Lueders applicable in the case of a possibly degenerate eigenvalue spectrum, and derive this result rigorously from the underlying stochastic dynamics for state reduction in the case of both a pure and a mixed initial state. We also analyse the associated Lindblad equation for the evolution of the density matrix, and obtain an exact time-dependent solution for the state reduction that explicitly exhibits the transition from a general initial density matrix to the Lueders density matrix. Finally, we apply Girsanov's theorem to derive a set of simple formulae for the dynamics of the state in terms of a family of geometric Brownian motions, thereby constructing an explicit unravelling of the Lindblad equation. (author)
Application of chaotic noise reduction techniques to chaotic data ...
Indian Academy of Sciences (India)
Deco & Schurmann (1994) have considered recurrent networks that were able to capture the dynamic and metric invariants… The two techniques we consider are Hammel's method and the local projective method. We compare the… where C is the m×m covariance matrix of the vectors z_n within the neighbourhood U_n.
Energy Technology Data Exchange (ETDEWEB)
Kubo, Takeshi, E-mail: tkubo@kuhp.kyoto-u.ac.jp [Department of Diagnostic Imaging and Nuclear Medicine, Kyoto University Graduate School of Medicine, 54 Shogoin Kawahara-cho, Sakyo-ku, Kyoto 606-8507 (Japan); Ohno, Yoshiharu [Division of Functional and Diagnostic Imaging Research, Department of Radiology, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017 (Japan); Advanced Biomedical Imaging Research Center, Kobe University Graduate School of Medicine, 7-5-2 Kusunoki-cho, Chuo-ku, Kobe 650-0017 (Japan); Seo, Joon Beom [Department of Radiology, University of Ulsan College of Medicine, Asan Medical Center, 88 Olympic-ro 43-gil, Songpa-gu, Seoul 05505 (Korea, Republic of); Yamashiro, Tsuneo [Department of Radiology, Graduate School of Medical Science, University of the Ryukyus, 207 Uehara, Nishinara, Okinawa 903-0215 (Japan); Kalender, Willi A. [Institute of Medical Physics, Friedrich-Alexander-University Erlangen-Nürnberg, Henkestr. 91, 91052 Erlangen (Germany); Lee, Chang Hyun [Department of Radiology, Seoul National University Hospital, 28 Yeongeon-dong, Jongno-gu, Seoul (Korea, Republic of); Lynch, David A. [Department of Radiology, National Jewish Health, 1400 Jackson St, A330 Denver, Colorado 80206 (United States); Kauczor, Hans-Ulrich [Diagnostic and Interventional Radiology, University Hospital Heidelberg, Im Neuenheimer Feld 400, 69120 Heidelberg (Germany); Translational Lung Research Center Heidelberg (TLRC), Member of the German Center for Lung Research (DZL), Im Neuenheimer Feld 400, 69120 Heidelberg (Germany); Hatabu, Hiroto, E-mail: hhatabu@partners.org [Center for Pulmonary Functional Imaging, Department of Radiology, Brigham and Women's Hospital, 75 Francis Street, Boston, MA 02115 (United States)
2017-01-15
Highlights: • Various techniques have led to substantial radiation dose reduction in chest CT. • Automatic modulation of tube current has been shown to reduce radiation dose. • Iterative reconstruction makes significant radiation dose reduction possible. • Processing time is currently a limitation for full iterative reconstruction. • Validation of diagnostic accuracy is desirable for routine use of low dose protocols. - Abstract: The increase in radiation exposure from CT examinations prompted investigation of various dose-reduction techniques. Significant dose reduction has been achieved, and the level of radiation exposure of thoracic CT is expected to reach a level equivalent to several chest X-ray examinations. With more scanners with advanced dose-reduction capability deployed, knowledge of radiation dose reduction methods has become essential to clinical practice as well as academic research. This article reviews the history of dose reduction techniques, ongoing changes brought by newer technologies, and areas of further investigation.
Dan, Michael; Phillips, Alfred; Simonian, Marcus; Flannagan, Scott
2015-06-01
We provide a review of the literature on reduction techniques for posterior hip dislocations and present our experience with a novel technique for the reduction of acute posterior hip dislocations in the ED, the 'rocket launcher' technique. We present our results with six patients with prosthetic posterior hip dislocation treated in our rural ED. We recorded patient demographics. The technique involves placing the patient's knee over the physician's shoulder and holding the lower leg like a 'rocket launcher', allowing the physician's shoulder to work as a fulcrum in an ergonomically friendly manner for the reducer. We used Fisher's t-test for cohort analysis between reduction techniques. The mean age of our patients was 74 years (range 66 to 85 years). We had an 83% success rate. The one patient in whom the 'rocket launcher' failed was a hemi-arthroplasty patient in whom all other closed techniques also failed, necessitating open reduction. When compared with the Allis (62% success rate), Whistler (60% success rate) and Captain Morgan (92% success rate) techniques, there was no statistically significant difference in the success of the reduction techniques. There were no neurovascular or periprosthetic complications. We have described a reduction technique for posterior hip dislocations in which placing the patient's knee over the shoulder and holding the lower leg like a 'rocket launcher' allows the physician's shoulder to work as a fulcrum, making it mechanically and ergonomically superior to standard techniques. © 2015 Australasian College for Emergency Medicine and Australasian Society for Emergency Medicine.
Model reduction using a posteriori analysis
Whiteley, Jonathan P.
2010-05-01
Mathematical models in biology and physiology are often represented by large systems of non-linear ordinary differential equations. In many cases, an observed behaviour may be written as a linear functional of the solution of this system of equations. A technique is presented in this study for automatically identifying key terms in the system of equations that are responsible for a given linear functional of the solution. This technique is underpinned by ideas drawn from a posteriori error analysis. This concept has been used in finite element analysis to identify regions of the computational domain and components of the solution where a fine computational mesh should be used to ensure accuracy of the numerical solution. We use this concept to identify regions of the computational domain and components of the solution where accurate representation of the mathematical model is required for accuracy of the functional of interest. The technique presented is demonstrated by application to a model problem, and then to automatically deduce known results from a cell-level cardiac electrophysiology model. © 2010 Elsevier Inc.
Fast Multiscale Reservoir Simulations using POD-DEIM Model Reduction
Ghasemi, Mohammadreza
2015-02-23
In this paper, we present a global-local model reduction for fast multiscale reservoir simulations in highly heterogeneous porous media with applications to optimization and history matching. Our proposed approach identifies a low dimensional structure of the solution space. We introduce an auxiliary variable (the velocity field) in our model reduction that allows achieving a high degree of model reduction. The latter is due to the fact that the velocity field is conservative for any low-order reduced model in our framework. This matters because a typical global model reduction based on POD is a Galerkin finite element method and thus cannot guarantee local mass conservation; this can be observed in numerical simulations that use finite volume based approaches. Discrete Empirical Interpolation Method (DEIM) is used to approximate the nonlinear functions of fine-grid functions in Newton iterations. This approach allows achieving a computational cost that is independent of the fine grid dimension. POD snapshots are inexpensively computed using local model reduction techniques based on the Generalized Multiscale Finite Element Method (GMsFEM), which provides (1) a hierarchical approximation of snapshot vectors, (2) adaptive computations by using coarse grids, and (3) inexpensive global POD operations in small dimensional spaces on a coarse grid. By balancing the errors of the global and local reduced-order models, our new methodology can provide an error bound in simulations. Our numerical results, utilizing a two-phase immiscible flow, show a substantial speed-up, and we compare our results to the standard POD-DEIM in a finite volume setup.
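The POD step of such a global reduction, taken in isolation, can be sketched as follows: build a reduced basis from solution snapshots via an SVD and keep the modes holding most of the energy. The snapshot data here are synthetic (an exactly rank-2 field), and the sketch omits DEIM and the GMsFEM snapshot acceleration described above.

```python
import numpy as np

# Synthetic snapshot matrix: each column is a "solution state" on a 200-point
# grid at one of 50 time instants; by construction its rank is exactly 2.
x = np.linspace(0.0, 1.0, 200)
t = np.linspace(0.0, 1.0, 50)
snapshots = (np.outer(np.sin(np.pi * x), np.cos(2 * np.pi * t))
             + 0.3 * np.outer(np.sin(2 * np.pi * x), np.sin(2 * np.pi * t)))

# POD basis = left singular vectors; singular values rank mode importance
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.9999)) + 1    # modes capturing 99.99% energy
basis = U[:, :r]                                # reduced basis, shape (200, r)

a = basis.T @ snapshots[:, 0]                   # reduced coordinates of a snapshot
reconstruction = basis @ a                      # lift back to the fine grid
```

In the paper's setting the Galerkin projection onto such a basis loses local mass conservation, which is what motivates the auxiliary velocity variable.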
Model Reduction of Fuzzy Logic Systems
Directory of Open Access Journals (Sweden)
Zhandong Yu
2014-01-01
Full Text Available This paper deals with the problem of ℒ2-ℒ∞ model reduction for continuous-time nonlinear uncertain systems. The approach of the construction of a reduced-order model is presented for high-order nonlinear uncertain systems described by the T-S fuzzy systems, which not only approximates the original high-order system well with an ℒ2-ℒ∞ error performance level γ but also translates it into a linear lower-dimensional system. Then, the model approximation is converted into a convex optimization problem by using a linearization procedure. Finally, a numerical example is presented to show the effectiveness of the proposed method.
Energy Technology Data Exchange (ETDEWEB)
Garcia-Pareja, S. [Servicio de Radiofisica Hospitalaria, Hospital Regional Universitario 'Carlos Haya', Avda. Carlos Haya, s/n, E-29010 Malaga (Spain)], E-mail: garciapareja@gmail.com; Vilches, M. [Servicio de Fisica y Proteccion Radiologica, Hospital Regional Universitario 'Virgen de las Nieves', Avda. de las Fuerzas Armadas, 2, E-18014 Granada (Spain); Lallena, A.M. [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, E-18071 Granada (Spain)
2007-09-21
The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators used in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and also permits investigation of the 'hot' regions of the accelerator, information that is essential for developing a source model for this therapy tool.
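The two variance reduction games named above can be illustrated in a few lines; all weights, probabilities, and function names are made up for the sketch, and a real accelerator simulation would apply them inside a transport code:

```python
import random

# Toy sketch of two standard variance reduction techniques:
# - splitting: one particle of weight w becomes n copies of weight w/n
#   in an important region
# - Russian roulette: a low-weight particle survives with probability p
#   (its weight becomes w/p) or is killed, keeping the estimator unbiased

def split(weight, n):
    """Split one particle into n copies; total statistical weight is conserved."""
    return [weight / n] * n

def russian_roulette(weight, p, rng=random):
    """Return the surviving weight, or None if the particle is killed."""
    if rng.random() < p:
        return weight / p
    return None

particles = split(1.0, 4)
print(sum(particles))  # 1.0 -- splitting conserves total weight
```

The unbiasedness of roulette follows because the expected surviving weight, p * (w/p), equals the original weight w.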
International Nuclear Information System (INIS)
Garcia-Pareja, S.; Vilches, M.; Lallena, A.M.
2007-01-01
The ant colony method is used to control the application of variance reduction techniques to the simulation of clinical electron linear accelerators used in cancer therapy. In particular, splitting and Russian roulette, two standard variance reduction methods, are considered. The approach can be applied to any accelerator in a straightforward way and also permits investigation of the 'hot' regions of the accelerator, information that is essential for developing a source model for this therapy tool.
Setup time reduction in pvc boots production through smed technique
Directory of Open Access Journals (Sweden)
Amanda Herculano da Costa
2012-03-01
Full Text Available The competition imposed by the market requires organizations to continuously improve their processes, products, and services while lowering production costs. This article addresses this issue by describing the improvements resulting from the implementation of Single Minute Exchange of Die (SMED) in the mold-exchange process of a PVC injection machine used in boot manufacturing. The case study was conducted in a large footwear company located in the state of Paraíba. To find the best alternative to the mold setup problem, SMED and a problem-solving methodology were used, and the solution that generated the greatest productivity for the company was then implemented. Among the improvements made, we highlight the reduction of idle time from 11.56 minutes to 5 minutes, achieved by reducing the time needed for mold adjustments through the implementation of centering guides and shims to standardize the heights of the molds.
Wagenaar, Dirk; van der Graaf, Emiel R.; van der Schaaf, Arjen; Greuter, Marcel J. W.
2015-01-01
Objectives: Typical streak artifacts known as metal artifacts occur in the presence of strongly attenuating materials in computed tomography (CT). Recently, vendors have started offering metal artifact reduction (MAR) techniques. In addition, a MAR technique called the metal deletion technique (MDT)
Galerkin v. discrete-optimal projection in nonlinear model reduction
Energy Technology Data Exchange (ETDEWEB)
Carlberg, Kevin Thomas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Barone, Matthew Franklin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Antil, Harbir [George Mason Univ., Fairfax, VA (United States)
2015-04-01
Discrete-optimal model-reduction techniques such as the Gauss-Newton with Approximated Tensors (GNAT) method have shown promise, as they have generated stable, accurate solutions for large-scale turbulent, compressible flow problems where standard Galerkin techniques have failed. However, there has been limited comparative analysis of the two approaches. This is due in part to difficulties arising from the fact that Galerkin techniques perform projection at the time-continuous level, while discrete-optimal techniques do so at the time-discrete level. This work provides a detailed theoretical and experimental comparison of the two techniques for two common classes of time integrators: linear multistep schemes and Runge-Kutta schemes. We present a number of new findings, including conditions under which the discrete-optimal ROM has a time-continuous representation, conditions under which the two techniques are equivalent, and time-discrete error bounds for the two approaches. Perhaps most surprisingly, we demonstrate both theoretically and experimentally that decreasing the time step does not necessarily decrease the error for the discrete-optimal ROM; instead, the time step should be 'matched' to the spectral content of the reduced basis. In numerical experiments carried out on a turbulent compressible-flow problem with over one million unknowns, we show that increasing the time step to an intermediate value decreases both the error and the simulation time of the discrete-optimal reduced-order model by an order of magnitude.
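For a linear system, the time-continuous Galerkin projection discussed in this abstract reduces to a small matrix computation; the system, snapshots, and basis size below are illustrative (discrete-optimal methods such as GNAT instead minimize the residual of the time-discrete equations):

```python
import numpy as np

# Minimal sketch of time-continuous Galerkin projection for a linear ODE
# x' = A x. With an orthonormal reduced basis V, the Galerkin ROM is
# x_r' = (V^T A V) x_r, i.e. the residual is forced orthogonal to span(V).
rng = np.random.default_rng(1)
n, r = 50, 4
A = -np.eye(n) + 0.01 * rng.standard_normal((n, n))   # illustrative system

# Reduced basis from a (made-up) snapshot matrix, POD-style
snapshots = rng.standard_normal((n, 20))
V, _, _ = np.linalg.svd(snapshots, full_matrices=False)
V = V[:, :r]                                          # orthonormal columns

A_r = V.T @ A @ V        # r x r reduced operator
print(A_r.shape)         # (4, 4)
```

A discrete-optimal ROM would instead, at every time step, solve a least-squares problem minimizing the norm of the time integrator's residual over the reduced coordinates, which is why the two approaches can behave differently as the time step changes.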
Spray drift reduction techniques for vineyards in fragmented landscapes.
Otto, S; Loddo, D; Baldoin, C; Zanin, G
2015-10-01
In intensive agricultural systems spray drift is one of the major potential diffuse pollution pathways for pesticides and poses a risk to the environment. There is also increasing concern about potential exposure to bystanders and passers-by, especially in fragmented landscapes like the Italian pre-Alps, where orchards and vineyards are surrounded by residential houses. There is thus an urgent need to do field measurements of drift generated by air-blast sprayer in vineyards, and to develop measures for its reduction (mitigation). A field experiment with an "event method" was conducted in north-eastern Italy in no-wind conditions, in the hilly area famed for Prosecco wine production, using an air-blast sprayer in order to evaluate the potential spray drift from equipment and the effectiveness of some practical mitigation measures, either single or in combination. A definition of mitigation is proposed, and a method for the calculation of total effectiveness of a series of mitigation measures is applied to some what-if scenarios of interest. Results show that low-drift equipment reduced potential spray drift by 38% and that a fully developed vine curtain mitigated it by about 70%; when the last row was treated without air-assistance mitigation was about 74%; hedgerows were always very effective in providing mitigation of up to 98%. In conclusion, spray drift is not inevitable and can be markedly reduced using a few mitigation measures, most already available to farmers, that can be strongly recommended for environmental regulatory schemes and community-based participatory research. Copyright © 2015 Elsevier Ltd. All rights reserved.
Reduction of radioactive waste volumes by using supercompaction technique
International Nuclear Information System (INIS)
Santos, John Wagner A.; Lima, Sandro Leonardo N. de
2007-01-01
The Radioactive Waste Management Program of ELETRONUCLEAR comprises the use of techniques and technologies to reduce the volume of processed radwaste, aiming to improve storage capacity and assure protection of the environment as part of ELETRONUCLEAR's business strategy. The Angra site stores radwaste in temporary storage facilities, named Store number 1 and Store number 2, which are routinely managed, surveyed periodically by ELETRONUCLEAR's Radiological Protection Division, and submitted to frequent CNEN inspections. Medium-level and low-level radwastes are stored in those facilities. In January 2005, ELETRONUCLEAR decided to carry out supercompaction of drums with compacted radwaste, mostly from Angra 1 and a small amount from Angra 2. By that time, Store 1 was close to reaching its nominal capacity; this situation demanded a prompt response, and the chosen option was to proceed with supercompaction using a mobile supercompactor unit. In April 2006, 2,027 drums of 200 liters were supercompacted at the plant site, and as a result, the initial storage area became sufficient to store drums for about five more years of operation. The supercompaction process uses a hydraulic press with extra high force. The pellets (crushed drums) were placed inside a special metallic box with 2,500 liters of capacity (the overpack). This operation produced 128 full boxes, containing from 12 to 19 pellets each, and the boxes were stored inside Store 1. (author)
Snowden, Thomas J; van der Graaf, Piet H; Tindall, Marcus J
2018-03-26
In this paper we present a framework for the reduction and linking of physiologically based pharmacokinetic (PBPK) models with models of systems biology to describe the effects of drug administration across multiple scales. To address the issue of model complexity, we propose the reduction of each type of model separately prior to being linked. We highlight the use of balanced truncation in reducing the linear components of PBPK models, whilst proper lumping is shown to be efficient in reducing typically nonlinear systems biology type models. The overall methodology is demonstrated via two example systems; a model of bacterial chemotactic signalling in Escherichia coli and a model of extracellular regulatory kinase activation mediated via the extracellular growth factor and nerve growth factor receptor pathways. Each system is tested under the simulated administration of three hypothetical compounds; a strong base, a weak base, and an acid, mirroring the parameterisation of pindolol, midazolam, and thiopental, respectively. Our method can produce up to an 80% decrease in simulation time, allowing substantial speed-up for computationally intensive applications including parameter fitting or agent based modelling. The approach provides a straightforward means to construct simplified Quantitative Systems Pharmacology models that still provide significant insight into the mechanisms of drug action. Such a framework can potentially bridge pre-clinical and clinical modelling - providing an intermediate level of model granularity between classical, empirical approaches and mechanistic systems describing the molecular scale.
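The balanced truncation step used above for the linear PBPK components can be sketched for a small illustrative state-space system; the matrices and the dense Kronecker-based Lyapunov solver below are stand-ins chosen for readability, not the authors' models:

```python
import numpy as np

def lyap(A, W):
    """Solve A X + X A^T + W = 0 by Kronecker vectorisation (small n only)."""
    n = A.shape[0]
    K = np.kron(np.eye(n), A) + np.kron(A, np.eye(n))
    x = np.linalg.solve(K, -W.flatten(order='F'))
    return x.reshape((n, n), order='F')

# Illustrative stable SISO system (matrices made up for the sketch)
A = np.diag([-1.0, -2.0, -3.0, -4.0])
B = np.ones((4, 1))
C = np.ones((1, 4))

P = lyap(A, B @ B.T)        # controllability Gramian
Q = lyap(A.T, C.T @ C)      # observability Gramian

# Square-root algorithm: balance the Gramians, then truncate
Lp = np.linalg.cholesky(P)
Lq = np.linalg.cholesky(Q)
U, hsv, Vt = np.linalg.svd(Lq.T @ Lp)       # hsv = Hankel singular values

r = 2                                       # states kept
T = Lp @ Vt[:r].T @ np.diag(hsv[:r] ** -0.5)
Ti = np.diag(hsv[:r] ** -0.5) @ U[:, :r].T @ Lq.T

A_r, B_r, C_r = Ti @ A @ T, Ti @ B, C @ T   # reduced-order model
print(hsv)  # decreasing Hankel singular values
```

States with small Hankel singular values contribute little to the input-output behaviour, which is why truncating them preserves the dynamics relevant to drug concentration predictions.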
International Nuclear Information System (INIS)
1980-04-01
A review is made of the state of the art of volume reduction techniques for low level liquid and solid radioactive wastes produced as a result of: (1) operation of commercial nuclear power plants, (2) storage of spent fuel in away-from-reactor facilities, and (3) decontamination/decommissioning of commercial nuclear power plants. The types of wastes and their chemical, physical, and radiological characteristics are identified. Methods used by industry for processing radioactive wastes are reviewed and compared to the new techniques for processing and reducing the volume of radioactive wastes. A detailed system description and report on operating experiences follow for each of the new volume reduction techniques. In addition, descriptions of volume reduction methods presently under development are provided. The Appendix records data collected during site surveys of vendor facilities and operating power plants. A Bibliography is provided for each of the various volume reduction techniques discussed in the report.
Energy Technology Data Exchange (ETDEWEB)
1980-04-01
A review is made of the state of the art of volume reduction techniques for low level liquid and solid radioactive wastes produced as a result of: (1) operation of commercial nuclear power plants, (2) storage of spent fuel in away-from-reactor facilities, and (3) decontamination/decommissioning of commercial nuclear power plants. The types of wastes and their chemical, physical, and radiological characteristics are identified. Methods used by industry for processing radioactive wastes are reviewed and compared to the new techniques for processing and reducing the volume of radioactive wastes. A detailed system description and report on operating experiences follow for each of the new volume reduction techniques. In addition, descriptions of volume reduction methods presently under development are provided. The Appendix records data collected during site surveys of vendor facilities and operating power plants. A Bibliography is provided for each of the various volume reduction techniques discussed in the report.
Hadgaonkar, Shailesh; Shah, Kunal; Khurjekar, Ketan; Krishnan, Vibhu; Shyam, Ashok; Sancheti, Parag
2017-01-01
Study Design: Technical report. Objective: Dorsolumbar vertebral dislocations, with or without associated fractures, occur secondary to very high velocity trauma. The reduction procedures and techniques, which may be adopted in these situations, have been multifariously discussed in the literature. Our objective was to assess the outcome of a novel reduction maneuver, using parallel rods which we have employed in reduction of high-grade thoracolumbar fractures to achieve precise sagittal bala...
The Analysis of Dimensionality Reduction Techniques in Cryptographic Object Code Classification
Energy Technology Data Exchange (ETDEWEB)
Jason L. Wright; Milos Manic
2010-05-01
This paper compares the application of three different dimension reduction techniques to the problem of locating cryptography in compiled object code. A simple classifier is used to compare dimension reduction via sorted covariance, principal component analysis, and correlation-based feature subset selection. The analysis concentrates on the classification accuracy as the number of dimensions is increased.
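Of the three techniques compared, principal component analysis is the easiest to sketch; the feature matrix and dimensions below are synthetic stand-ins for the object-code feature vectors:

```python
import numpy as np

# Sketch of dimension reduction via PCA on synthetic feature vectors
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 32))          # 500 samples, 32 features
X = X - X.mean(axis=0)                      # centre the features

# Principal components = right singular vectors of the centred data
_, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 8
Z = X @ Vt[:k].T                            # project onto top-k components

explained = (s[:k] ** 2).sum() / (s ** 2).sum()   # variance retained
print(Z.shape)        # (500, 8)
```

The reduced vectors `Z` would then be fed to the classifier in place of the full feature vectors, and the classification accuracy tracked as `k` grows.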
Dimensionality reduction of RKHS model parameters.
Taouali, Okba; Elaissi, Ilyes; Messaoud, Hassani
2015-07-01
This paper proposes a new method to reduce the number of parameters of models developed in a Reproducing Kernel Hilbert Space (RKHS). This number is equal to the number of observations used in the learning phase, which is assumed to be high. The proposed method, entitled Reduced Kernel Partial Least Squares (RKPLS), consists of approximating the retained latent components determined using the Kernel Partial Least Squares (KPLS) method by their closest observation vectors. The paper presents the design and a comparative study of the proposed RKPLS method and the Support Vector Machines for Regression (SVR) technique. The proposed method is applied to identify a nonlinear Process Trainer PT326, a physical process available in our laboratory; as a thermal process with a large time response, it allows effective observations that contribute to model identification to be recorded easily. Compared to the SVR technique, the results from the proposed RKPLS method are satisfactory. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Christer Dalen
2017-10-01
Full Text Available A model reduction technique based on optimization theory is presented, where a possible higher order system/model is approximated with an unstable DIPTD model by using only step response data. The DIPTD model is used to tune PD/PID controllers for the underlying possible higher order system. Numerous examples are used to illustrate the theory, i.e. both linear and nonlinear models. The Pareto Optimal controller is used as a reference controller.
Optimisation of 12 MeV electron beam simulation using variance reduction technique
Jayamani, J.; Termizi, N. A. S. Mohd; Kamarulzaman, F. N. Mohd; Aziz, M. Z. Abdul
2017-05-01
Monte Carlo (MC) simulation for electron beam radiotherapy consumes a long computation time. An algorithm called the variance reduction technique (VRT) was implemented in MC to speed up this process. This work focused on optimisation of the VRT parameters, namely electron range rejection and particle history. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model with non-VRT parameters. The validated MC model simulation was repeated applying electron range rejection controlled by global electron cut-off energies of 1, 2 and 5 MeV using 20 × 10⁷ particle histories. The 5 MeV range rejection generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, 5 MeV electron range rejection was utilized in the particle history analysis, which ranged from 7.5 × 10⁷ to 20 × 10⁷. In this study, with a 5 MeV electron cut-off and 10 × 10⁷ particle histories, the simulation was four times faster than the non-VRT calculation with 1% deviation. Proper understanding and use of VRT can significantly reduce MC electron beam calculation time while preserving accuracy.
Optimisation of 12 MeV electron beam simulation using variance reduction technique
International Nuclear Information System (INIS)
Jayamani, J; Aziz, M Z Abdul; Termizi, N A S Mohd; Kamarulzaman, F N Mohd
2017-01-01
Monte Carlo (MC) simulation for electron beam radiotherapy consumes a long computation time. An algorithm called the variance reduction technique (VRT) was implemented in MC to speed up this process. This work focused on optimisation of the VRT parameters, namely electron range rejection and particle history. The EGSnrc MC source code was used to simulate (BEAMnrc code) and validate (DOSXYZnrc code) the Siemens Primus linear accelerator model with non-VRT parameters. The validated MC model simulation was repeated applying electron range rejection controlled by global electron cut-off energies of 1, 2 and 5 MeV using 20 × 10⁷ particle histories. The 5 MeV range rejection generated the fastest MC simulation, with a 50% reduction in computation time compared to the non-VRT simulation. Thus, 5 MeV electron range rejection was utilized in the particle history analysis, which ranged from 7.5 × 10⁷ to 20 × 10⁷. In this study, with a 5 MeV electron cut-off and 10 × 10⁷ particle histories, the simulation was four times faster than the non-VRT calculation with 1% deviation. Proper understanding and use of VRT can significantly reduce MC electron beam calculation time while preserving accuracy. (paper)
International Nuclear Information System (INIS)
Jakabe, Hideo; Maruyama, Yoshinaga
1996-01-01
The development program of KEPCO on outage-free maintenance techniques for distribution line work since 1984 is overviewed. It has succeeded in eliminating maintenance outages since 1989. The original aim was to improve customer satisfaction. However, in all, four benefits were realised through the development. These are cost reduction, securing of worker safety, improvement of customer service, and advancement of distribution techniques and morale in KEPCO. The introduction of robotic techniques for maintenance work and manipulator techniques for repair work is planned for further modernization. These new techniques are helping in both work safety and work efficiency improvement. Cost reduction and advancement of distribution line work techniques is also considered. (R.P.)
International Nuclear Information System (INIS)
Hirabayashi, T.; Kameo, Y.; Nakashio, N.
2001-01-01
For the purpose of reducing the amount and/or volume of low-level radioactive waste (LLW) arising from the decommissioning of nuclear reactors, the Japan Atomic Energy Research Institute (JAERI) has been developing four decontamination techniques: (a) a gas-carrying abrasive method, (b) an in-situ remote electropolishing method for pipe systems before dismantling, (c) a bead reaction - thermal shock method, and (d) a laser-induced chemical method for components after dismantling. JAERI is also carrying out melting tests of metallic and non-metallic wastes. Melting was confirmed to be effective in reducing the volume, homogenizing, and furthermore stabilizing non-metallic wastes. (author)
Advanced structural equation modeling issues and techniques
Marcoulides, George A
2013-01-01
By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines and assumes an understanding of graduate-level multivariate statistics, including an introduction to SEM.
Dimensionality reduction in epidemic spreading models
Frasca, M.; Rizzo, A.; Gallo, L.; Fortuna, L.; Porfiri, M.
2015-09-01
Complex dynamical systems often exhibit collective dynamics that are well described by a reduced set of key variables in a low-dimensional space. Such a low-dimensional description offers a privileged perspective to understand the system behavior across temporal and spatial scales. In this work, we propose a data-driven approach to establish low-dimensional representations of large epidemic datasets by using a dimensionality reduction algorithm based on isometric features mapping (ISOMAP). We demonstrate our approach on synthetic data for epidemic spreading in a population of mobile individuals. We find that ISOMAP is successful in embedding high-dimensional data into a low-dimensional manifold, whose topological features are associated with the epidemic outbreak. Across a range of simulation parameters and model instances, we observe that epidemic outbreaks are embedded into a family of closed curves in a three-dimensional space, in which neighboring points pertain to instants that are close in time. The orientation of each curve is unique to a specific outbreak, and the coordinates correlate with the number of infected individuals. A low-dimensional description of epidemic spreading is expected to improve our understanding of the role of individual response on the outbreak dynamics, inform the selection of meaningful global observables, and, possibly, aid in the design of control and quarantine procedures.
Bidra, Avinash S
2015-06-01
Bone reduction for maxillary fixed implant-supported prosthodontic treatment is often necessary to either gain prosthetic space or to conceal the prosthesis-tissue junction in patients with excessive gingival display (gummy smile). Inadequate bone reduction is often a cause of prosthetic failure due to material fractures, poor esthetics, or inability to perform oral hygiene procedures due to unfavorable ridge lap prosthetic contours. Various instruments and techniques are available for bone reduction. It would be helpful to have an accurate and efficient method for bone reduction at the time of surgery and subsequently create a smooth bony platform. This article presents a straightforward technique for systematic bone reduction by transferring the patient's maximum smile line, recorded clinically, to a clear radiographic smile guide for treatment planning using cone beam computed tomography (CBCT). The patient's smile line and the amount of required bone reduction are transferred clinically by marking bone with a sterile stationery graphite wood pencil at the time of surgery. This technique can help clinicians to accurately achieve the desired bone reduction during surgery, and provide confidence that the diagnostic and treatment planning goals have been achieved. Copyright © 2015 Editorial Council for the Journal of Prosthetic Dentistry. Published by Elsevier Inc. All rights reserved.
Verification of Orthogrid Finite Element Modeling Techniques
Steeve, B. E.
1996-01-01
The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process but still adequately capture the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam, a shell, and a mixed beam-and-shell element model. Results show that the shell element model performs best, but that the simpler beam and beam-and-shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.
Directory of Open Access Journals (Sweden)
Radwane Faroug
2013-01-01
Full Text Available Paediatric calcaneal fractures are rare injuries usually managed conservatively or with open reduction and internal fixation (ORIF). Closed reduction was previously thought to be impossible, and very few cases are reported in the literature. We report a new technique for closed reduction using Ilizarov half-rings. We report successful closed reduction and screwless fixation of an extra-articular calcaneal fracture dislocation in a 7-year-old boy. Reduction was achieved using two Ilizarov half-ring frames arranged perpendicular to each other, enabling simultaneous application of longitudinal and rotational traction. Anatomical reduction was achieved with restored angles of Bohler and Gissane. Two K-wires provided the definitive fixation. Bony union with good functional outcome and minimal pain was achieved at eight-week follow-up. ORIF of calcaneal fractures provides good functional outcome but is associated with high rates of malunion and postoperative pain. Preservation of the unique soft tissue envelope surrounding the calcaneus reduces the risk of infection. Closed reduction prevents distortion of these tissues and may lead to faster healing and mobilisation. Closed reduction and screwless fixation of paediatric calcaneal fractures is an achievable management option. Our technique preserved the soft tissue envelope surrounding the calcaneus, avoided complications related to retained metalwork, and resulted in a good functional outcome.
A Strategy Modelling Technique for Financial Services
Heinrich, Bernd; Winter, Robert
2004-01-01
Strategy planning processes often suffer from a lack of conceptual models that can be used to represent business strategies in a structured and standardized form. If natural language is replaced by an at least semi-formal model, the completeness, consistency, and clarity of strategy descriptions can be drastically improved. A strategy modelling technique is proposed that is based on an analysis of modelling requirements, a discussion of related work and a critical analysis of generic approach...
Temporal rainfall estimation using input data reduction and model inversion
Wright, A. J.; Vrugt, J. A.; Walker, J. P.; Pauwels, V. R. N.
2016-12-01
demonstration of equifinality. The use of a likelihood function that considers both rainfall and streamflow error combined with the use of the DWT as a model data reduction technique allows the joint inference of hydrologic model parameters along with rainfall.
Viswanathan, Karthickeyan
2018-03-01
In the present study, a non-edible seed oil, raw neem oil, was converted into biodiesel using a transesterification process. Two biodiesel blends were prepared, namely B25 (25% neem oil methyl ester with 75% diesel) and B50 (50% neem oil methyl ester with 50% diesel). A urea-based selective catalytic reduction (SCR) technique with a catalytic converter (CC) was fitted in the exhaust tail pipe of the engine for the reduction of exhaust emissions. Initially, the engine was operated with diesel as the working fluid, followed by the biodiesel blends B25 and B50, to obtain baseline readings without SCR and CC. The same procedure was then repeated with the SCR and CC technique to measure the emission reduction for diesel, B25, and B50. The experimental results revealed that the B25 blend showed higher brake thermal efficiency (BTE) and exhaust gas temperature (EGT) with lower brake-specific fuel consumption (BSFC) than the B50 blend at all loads. Compared with the biodiesel blends, diesel showed an increased BTE of 31.9% with a reduced BSFC of 0.29 kg/kWh at full load. A notable emission reduction was noticed for all test fuels in the SCR and CC setup. At full load, B25 showed lower carbon monoxide (CO) of 0.09% volume, hydrocarbons (HC) of 24 ppm, smoke of 14 HSU, and oxides of nitrogen (NOx) of 735 ppm than diesel and B50 in the SCR and CC setup. On the whole, the engine with the SCR and CC setup showed better performance and emission characteristics than standard engine operation.
International Nuclear Information System (INIS)
Jiang Yue; Lin Weizhen; Yao Side; Lin Nianyun
1998-01-01
The one-electron reduction potential (E1/7) is one of the important parameters of electrophilic radioprotectors. In this work, the one-electron reduction potentials of tea polyphenol components, including EGCG, ECG, EGC and EC, in aqueous solution at pH 7 were determined to be -321 mV, -326 mV, -331 mV and -330 mV, respectively, using pulse radiolysis techniques. 2,6-Dimethyl benzoquinone (DQ) was used as a reference compound.
Dimensionality reduction for uncertainty quantification of nuclear engineering models.
Energy Technology Data Exchange (ETDEWEB)
Roderick, O.; Wang, Z.; Anitescu, M. (Mathematics and Computer Science)
2011-01-01
The task of uncertainty quantification consists of relating the available information on uncertainties in the model setup to the resulting variation in the outputs of the model. Uncertainty quantification plays an important role in complex simulation models of nuclear engineering, where better understanding of uncertainty results in greater confidence in the model and in the improved safety and efficiency of engineering projects. In our previous work, we have shown that the effect of uncertainty can be approximated by polynomial regression with derivatives (PRD): a hybrid regression method that uses first-order derivatives of the model output as additional fitting conditions for a polynomial expansion. Numerical experiments have demonstrated the advantage of this approach over classical methods of uncertainty analysis: in precision, computational efficiency, or both. To obtain derivatives, we used automatic differentiation (AD) on the simulation code; hand-coded derivatives are acceptable for simpler models. We now present improvements on the method. We use a tuned version of the method of snapshots, a technique based on proper orthogonal decomposition (POD), to set up the reduced order representation of essential information on uncertainty in the model inputs. The automatically obtained sensitivity information is required to set up the method. Dimensionality reduction in combination with PRD allows analysis on a larger dimension of the uncertainty space (>100), at modest computational cost.
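The PRD idea described above, using first-order derivatives of the model output as additional least-squares fitting conditions for a polynomial expansion, can be sketched as follows; the stand-in model, its gradient, and all sizes are illustrative (a real study would obtain derivatives from automatic differentiation of the simulation code):

```python
import numpy as np

# Sketch of polynomial regression with derivatives (PRD):
# fit p(x) = c0 + sum_i c_i x_i to both model outputs and their gradients.
rng = np.random.default_rng(2)
d = 10                          # dimension of the uncertainty space
A = rng.standard_normal(d)

def model(x):                   # made-up smooth model output
    return A @ x + 0.05 * (x @ x)

def model_grad(x):              # its derivatives (stand-in for AD output)
    return A + 0.1 * x

# Each sample contributes 1 value equation and d derivative equations
samples = [rng.standard_normal(d) for _ in range(3)]
rows, rhs = [], []
for x in samples:
    rows.append(np.concatenate(([1.0], x)))     # value condition: c0 + c.x
    rhs.append(model(x))
    for i in range(d):                          # derivative wrt x_i is c_i
        e = np.zeros(d + 1)
        e[i + 1] = 1.0
        rows.append(e)
        rhs.append(model_grad(x)[i])

M = np.array(rows)
b = np.array(rhs)
coef, *_ = np.linalg.lstsq(M, b, rcond=None)
print(M.shape)   # (33, 11): 3 samples x (1 + 10) conditions, 11 coefficients
```

Because each sample yields d + 1 equations instead of 1, far fewer expensive model runs are needed to determine the expansion coefficients, which is the computational advantage the abstract describes.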
A Method to Test Model Calibration Techniques
Energy Technology Data Exchange (ETDEWEB)
Judkoff, Ron; Polly, Ben; Neymark, Joel
2016-08-26
This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
Kokiopoulou, Effrosyni; Saad, Yousef
2007-01-01
This paper considers the problem of dimensionality reduction by orthogonal projection techniques. The main feature of the proposed techniques is that they attempt to preserve both the intrinsic neighborhood geometry of the data samples and the global geometry. In particular we propose a method, named Orthogonal Neighborhood Preserving Projections, which works by first building an "affinity" graph for the data, in a way that is similar to the method of Locally Linear Embedding (LLE). ...
Model techniques for testing heated concrete structures
International Nuclear Information System (INIS)
Stefanou, G.D.
1983-01-01
Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)
Hadgaonkar, Shailesh; Shah, Kunal; Khurjekar, Ketan; Krishnan, Vibhu; Shyam, Ashok; Sancheti, Parag
2017-06-01
Technical report. Dorsolumbar vertebral dislocations, with or without associated fractures, occur secondary to very high velocity trauma. The reduction procedures and techniques that may be adopted in these situations have been discussed extensively in the literature. Our objective was to assess the outcome of a novel reduction maneuver, using parallel rods, which we have employed in reduction of high-grade thoracolumbar fractures to achieve precise sagittal balance as well as accurate vertebral alignment with minimal soft tissue damage. The study included a total of 11 patients with thoracolumbar dislocations who had presented to our emergency spine services following high-velocity trauma. After appropriate systemic stabilization and necessary investigations, all patients were surgically treated using the described technique. There were no surgical complications at 2-year follow-up. Radiographs showed good reduction and maintained sagittal balance. We believe that this technique is an excellent means of achieving a safer, easier, and more accurate reduction for restoration of sagittal/coronal balance and alignment in high-grade thoracolumbar dislocations. It is easily reproducible and predictable.
ON THE PAPR REDUCTION IN OFDM SYSTEMS: A NOVEL ZCT PRECODING BASED SLM TECHNIQUE
Directory of Open Access Journals (Sweden)
VARUN JEOTI
2011-06-01
High Peak-to-Average Power Ratio (PAPR) reduction is still an important challenge in Orthogonal Frequency Division Multiplexing (OFDM) systems. In this paper, we propose a novel Zadoff-Chu matrix Transform (ZCT) precoding based Selected Mapping (SLM) technique for PAPR reduction in OFDM systems. This technique is based on precoding the constellation symbols with the ZCT precoder after the multiplication by the phase rotation factor and before the Inverse Fast Fourier Transform (IFFT) in SLM based OFDM (SLM-OFDM) systems. Computer simulation results show that the proposed technique can reduce PAPR by up to 5.2 dB for N = 64 (system subcarriers) and V = 16 (dissimilar phase sequences) at a clip rate of 10^-3. Additionally, ZCT based SLM-OFDM (ZCT-SLM-OFDM) systems also take advantage of frequency variations of the communication channel and can offer substantial performance gains in fading multipath channels.
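The core SLM selection step can be sketched as follows. This is a generic SLM illustration in NumPy, not the paper's method: it omits the Zadoff-Chu matrix precoding and uses arbitrary QPSK data and random phase sequences.

```python
import numpy as np

rng = np.random.default_rng(1)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

N, V = 64, 16                 # subcarriers, candidate phase sequences
# Random QPSK symbols on N subcarriers (illustrative data, not from the paper).
symbols = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=N) / np.sqrt(2)

# Plain SLM: multiply by V random phase sequences, take the IFFT of each
# candidate, and transmit the one with the lowest PAPR.  (The ZCT variant
# would additionally precode each candidate before the IFFT.)
best = None
for _ in range(V):
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, size=N))
    candidate = np.fft.ifft(symbols * phases)
    if best is None or papr_db(candidate) < papr_db(best):
        best = candidate

print(round(papr_db(best), 2))
```

The index of the selected phase sequence would be sent as side information so the receiver can undo the rotation.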
National Research Council Canada - National Science Library
Ingel, R
1999-01-01
.... Projection operators are employed for the model reduction or condensation process. Interpolation is then introduced over a user defined frequency window, which can have real and imaginary boundaries and be quite large. Hermitian...
Workshop on Computational Modelling Techniques in Structural ...
Indian Academy of Sciences (India)
Resonance – Journal of Science Education, Volume 22, Issue 6, June 2017, p. 619.
Abdelhamid, Mohamed M; Bayoumy, Maysara Abdelhalim; Elkady, Hesham A; Abdelkawi, Ayman Farouk
2017-12-01
Several techniques of arthroscopic treatment of tibial spine avulsion fractures have been described in the literature. These techniques include the use of various fixation devices such as screws, K-wires, wiring, sutures, and suture anchors. In this study, we evaluate a new wiring technique for the treatment of these injuries. This technique involves fixation by stainless steel tension wires passed over the fractured spine and tied over a bone bridge. The advantages of this technique are that it aids in reduction, allows for compression of the tibial spine fragment anatomically in its fracture bed, provides stable fixation in difficult comminuted fractures, and allows for early mobilization and weight bearing because of the solid fixation.
A Comparative Analysis of Techniques for PAPR Reduction of OFDM Signals
Directory of Open Access Journals (Sweden)
M. Janjić
2014-06-01
In this paper the problem of high Peak-to-Average Power Ratio (PAPR) in Orthogonal Frequency-Division Multiplexing (OFDM) signals is studied. Besides describing three techniques for PAPR reduction, SeLective Mapping (SLM), Partial Transmit Sequence (PTS) and Interleaving, a detailed analysis of the performance of these techniques is carried out for various values of the relevant parameters (number of phase sequences, number of interleavers, number of phase factors, or number of subblocks, depending on the applied technique). The techniques are simulated in Matlab. Results are presented in the form of Complementary Cumulative Distribution Function (CCDF) curves for the PAPR of 30000 randomly generated OFDM symbols. Simulations are performed for OFDM signals with 32 and 256 subcarriers, oversampled by a factor of 4. A detailed comparison of the techniques is made based on the simulation results.
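The CCDF evaluation used for such comparisons can be sketched directly. The example below is an illustrative Monte Carlo estimate of Pr(PAPR > threshold) for unmodified OFDM symbols (no reduction technique applied); the 32-subcarrier, 4x-oversampled setup follows the paper, while the symbol count is reduced for speed.

```python
import numpy as np

rng = np.random.default_rng(2)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Monte Carlo CCDF: Pr(PAPR > threshold) over many random OFDM symbols.
# 32 subcarriers, 4x oversampling via a zero-padded IFFT.
N, L, n_sym = 32, 4, 2000
paprs = np.empty(n_sym)
for i in range(n_sym):
    sym = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=N) / np.sqrt(2)
    # Zero-pad the high frequencies to oversample the time-domain signal.
    padded = np.concatenate([sym[:N//2], np.zeros((L-1)*N, complex), sym[N//2:]])
    paprs[i] = papr_db(np.fft.ifft(padded))

thresholds = np.arange(4, 12, 0.5)
ccdf = [(paprs > t).mean() for t in thresholds]
print(list(zip(thresholds.tolist(), ccdf))[:3])
```

Plotting `ccdf` against `thresholds` on a log scale gives the familiar CCDF curve; a PAPR reduction technique shifts it to the left.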
Model reduction methods for vector autoregressive processes
Brüggemann, Ralf
2004-01-01
1.1 Objective of the Study. Vector autoregressive (VAR) models have become one of the dominant research tools in the analysis of macroeconomic time series during the last two decades. The great success of this modeling class started with Sims' (1980) critique of the traditional simultaneous equation models (SEM). Sims criticized the use of 'too many incredible restrictions' based on 'supposed a priori knowledge' in large scale macroeconometric models which were popular at that time. Therefore, he advocated largely unrestricted reduced form multivariate time series models, unrestricted VAR models in particular. Ever since his influential paper these models have been employed extensively to characterize the underlying dynamics in systems of time series. In particular, tools to summarize the dynamic interaction between the system variables, such as impulse response analysis or forecast error variance decompositions, have been developed over the years. The econometrics of VAR models and related quantities i...
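The 'largely unrestricted reduced form' estimation mentioned above amounts to equation-by-equation least squares. A minimal sketch with simulated data (the coefficient matrix A_true is hypothetical, chosen only to make the system stationary):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + e_t.
A_true = np.array([[0.5, 0.1],
                   [0.2, 0.3]])
T = 5000
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.normal(scale=0.1, size=2)

# Unrestricted reduced-form estimation: OLS of y_t on y_{t-1}.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T

print(np.round(A_hat, 2))
```

With the fitted matrix, impulse responses follow from powers of `A_hat` applied to a unit shock.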
Towards reduction of Paradigm coordination models
S. Andova; L.P.J. Groenewegen; E.P. de Vink (Erik Peter); L. Aceto (Luca); M.R. Mousavi
2011-01-01
The coordination modelling language Paradigm addresses collaboration between components in terms of dynamic constraints. Within a Paradigm model, component dynamics are consistently specified at a detailed and a global level of abstraction. To enable automated verification of Paradigm
Structured Dimensionality Reduction for Additive Model Regression
Fawzi, Alhussein; Fiot, Jean-Baptiste; Chen, Bei; Sinn, Mathieu; Frossard, Pascal
2016-01-01
Additive models are regression methods which model the response variable as the sum of univariate transfer functions of the input variables. Key benefits of additive models are their accuracy and interpretability on many real-world tasks. Additive models are however not adapted to problems involving a large number (e.g., hundreds) of input variables, as they are prone to overfitting in addition to losing interpretability. In this paper, we introduce a novel framework for applying additive ...
Efficient Symmetry Reduction and the Use of State Symmetries for Symbolic Model Checking
Directory of Open Access Journals (Sweden)
Christian Appold
2010-06-01
One technique to reduce the state-space explosion problem in temporal logic model checking is symmetry reduction. The combination of symmetry reduction and symbolic model checking using BDDs long suffered from the prohibitively large BDD for the orbit relation. Dynamic symmetry reduction calculates representatives of equivalence classes of states dynamically and thus avoids the construction of the orbit relation. In this paper, we present a new efficient model checking algorithm based on dynamic symmetry reduction. Our experiments show that the algorithm is very fast and allows the verification of larger systems. We additionally implemented the use of state symmetries for symbolic symmetry reduction. To our knowledge, we are the first to investigate state symmetries in combination with BDD based symbolic model checking.
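The representative-computation idea behind symmetry reduction can be shown in miniature. The sketch below assumes full symmetry between identical processes, in which case sorting the local states yields the canonical (lexicographically smallest) representative; it is a toy illustration and does not involve BDDs.

```python
from itertools import permutations

# Toy illustration of symmetry reduction: if all n processes are fully
# symmetric, a global state (one local state per process) can be mapped
# to a canonical representative, collapsing up to n! equivalent states.
def representative(state):
    # Under full symmetry, the sorted tuple is the lexicographically
    # smallest permutation, so sorting suffices.
    return tuple(sorted(state))

# All 3! = 6 permutations of one state collapse to a single representative.
orbit = {representative(p) for p in permutations(("crit", "idle", "wait"))}
print(orbit)   # {('crit', 'idle', 'wait')}
```

Model checking then explores only representatives, which is where the reduction in state count comes from.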
Sharma, Diksha; Sempau, Josep; Badano, Aldo
2018-02-01
Monte Carlo simulations require a large number of histories to obtain reliable estimates of the quantity of interest and its associated statistical uncertainty. Numerous variance reduction techniques (VRTs) have been employed to increase computational efficiency by reducing the statistical uncertainty. We investigate the effect of two VRTs for optical transport methods on accuracy and computing time for the estimation of variance (noise) in x-ray imaging detectors. We describe two VRTs. In the first, we preferentially alter the direction of the optical photons to increase detection probability. In the second, we follow only a fraction of the total optical photons generated. In both techniques, the statistical weight of photons is altered to maintain the signal mean. We use fastdetect2, an open-source, freely available optical transport routine from the hybridmantis package. We simulate VRTs for a variety of detector models and energy sources. The imaging data from the VRT simulations are then compared to the analog case (no VRT) using pulse height spectra, Swank factor, and the variance of the Swank estimate. We analyze the effect of VRTs on the statistical uncertainty associated with Swank factors. VRTs increased the relative efficiency by as much as a factor of 9. We demonstrate that we can achieve the same variance of the Swank factor with less computing time. With this approach, the simulations can be stopped when the variance of the variance estimates reaches the desired level of uncertainty. We implemented analytic estimates of the variance of the Swank factor and demonstrated the effect of VRTs on image quality calculations. Our findings indicate that the Swank factor is dominated by the x-ray interaction profile as compared to the additional uncertainty introduced in the optical transport by the use of VRTs. For simulation experiments that aim at reducing the uncertainty in the Swank factor estimate, any of the proposed VRTs can be used for increasing the relative
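The second VRT described above (following only a fraction of the optical photons while rescaling their statistical weight) can be sketched with a toy detection model; the detection probability and survival fraction below are arbitrary assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Follow only a fraction f of generated optical photons, boosting each
# survivor's statistical weight by 1/f so the estimated mean is unchanged.
n_photons, p_detect, f = 200_000, 0.3, 0.1

detected = rng.random(n_photons) < p_detect        # analog transport
analog_mean = detected.mean()                      # weight 1 per photon

followed = rng.random(n_photons) < f               # keep a fraction f
weights = np.where(followed, 1.0 / f, 0.0)
vrt_mean = (weights * detected).mean()             # weighted estimate

print(round(analog_mean, 3), round(vrt_mean, 3))   # both near 0.3
```

The weighted estimate is unbiased but has a larger per-history variance; the gain comes from the reduced cost of transporting only the followed photons.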
Variance reduction techniques for 14 MeV neutron streaming problem in rectangular annular bent duct
Energy Technology Data Exchange (ETDEWEB)
Ueki, Kotaro [Ship Research Inst., Mitaka, Tokyo (Japan)
1998-03-01
The Monte Carlo method is a powerful technique for solving a wide range of radiation transport problems. Its features are that it can solve the Boltzmann transport equation almost without approximation, and that the complexity of the systems to be treated rarely becomes a problem. However, the Monte Carlo calculation is always accompanied by statistical errors called variance. In shielding calculations, the standard deviation or fractional standard deviation (FSD) is used frequently. The expression of the FSD is shown. Radiation shielding problems are roughly divided into transmission through deep layers and streaming problems. In streaming problems, the large difference in the weight depending on the history of particles makes the FSD of the Monte Carlo calculation worse. The streaming experiment in the 14 MeV neutron rectangular annular bent duct, a typical streaming benchmark experiment carried out at OKTAVIAN of Osaka University, was analyzed with MCNP 4B, and the reduction of variance or FSD was attempted. The experimental system is shown. The analysis model for MCNP 4B, the input data and the results of the analysis are reported, and the comparison with the experimental results is examined. (K.I.)
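The fractional standard deviation mentioned in the abstract is the relative standard error of the Monte Carlo mean. A small illustration with hypothetical tally scores:

```python
import numpy as np

rng = np.random.default_rng(5)

# Fractional standard deviation (relative error) of a Monte Carlo tally:
# FSD = s_xbar / xbar, the standard error of the mean divided by the mean.
def fsd(scores):
    n = len(scores)
    xbar = scores.mean()
    s_xbar = scores.std(ddof=1) / np.sqrt(n)
    return s_xbar / xbar

# Hypothetical tally scores; FSD shrinks like 1/sqrt(n) with more histories.
small = rng.exponential(scale=1.0, size=1_000)
large = rng.exponential(scale=1.0, size=100_000)
print(round(fsd(small), 4), round(fsd(large), 4))
```

Variance reduction techniques aim to lower the FSD for a fixed number of histories, or equivalently to reach a target FSD with fewer histories.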
Aerosol model selection and uncertainty modelling by adaptive MCMC technique
Directory of Open Access Journals (Sweden)
M. Laine
2008-12-01
We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling, and it allows model selection, calculation of model posterior probabilities and model averaging in a Bayesian way.
The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature of our algorithm is that it is fully automatic and easy to use.
We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of the GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used, and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.
Evaluation of Clipping Based Iterative PAPR Reduction Techniques for FBMC Systems
Directory of Open Access Journals (Sweden)
Zsolt Kollár
2014-01-01
to conventional orthogonal frequency division multiplexing (OFDM) technique. The low ACLR of the transmitted FBMC signal makes it especially favorable in cognitive radio applications, where strict requirements are posed on out-of-band radiation. Large dynamic range resulting in high peak-to-average power ratio (PAPR) is characteristic of all sorts of multicarrier signals. The advantageous spectral properties of the high-PAPR FBMC signal are significantly degraded if nonlinearities are present in the transceiver chain. Spectral regrowth may appear, causing harmful interference in the neighboring frequency bands. This paper presents novel clipping based PAPR reduction techniques, evaluated and compared by simulations and measurements, with an emphasis on spectral aspects. The paper gives an overall comparison of PAPR reduction techniques, focusing on the reduction of the dynamic range of FBMC signals without increasing out-of-band radiation. An overview is presented on transmitter oriented techniques employing baseband clipping, which can maintain the system performance with a desired bit error rate (BER).
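The baseband clipping operation underlying such techniques can be sketched simply; the clipping ratio and the signal below are illustrative, and none of the iterative filtering the paper's schemes would apply is included.

```python
import numpy as np

rng = np.random.default_rng(6)

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

# Amplitude clipping: magnitudes above a threshold are limited while the
# phase of each sample is preserved.
def clip(x, cr_db):
    # cr_db: clipping ratio in dB relative to the RMS amplitude.
    a_max = np.sqrt(np.mean(np.abs(x) ** 2)) * 10 ** (cr_db / 20)
    mag = np.abs(x)
    scale = np.where(mag > a_max, a_max / mag, 1.0)
    return x * scale

# Illustrative multicarrier signal: IFFT of random QPSK symbols.
signal = np.fft.ifft(rng.choice([1+1j, -1+1j, 1-1j, -1-1j], size=256))
clipped = clip(signal, cr_db=4.0)
print(round(papr_db(signal), 2), round(papr_db(clipped), 2))
```

Clipping alone causes in-band distortion and spectral regrowth, which is why the paper's techniques iterate between clipping and filtering.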
Bugenhagen, Scott M; Beard, Daniel A
2012-10-21
Biochemical reaction systems may be viewed as discrete event processes characterized by a number of states and state transitions. These systems may be modeled as state transition systems with transitions representing individual reaction events. Since they often involve a large number of interactions, it can be difficult to construct such a model for a system, and since the resulting state-level model can involve a huge number of states, model analysis can be difficult or impossible. Here, we describe methods for the high-level specification of a system using hypergraphs, for the automated generation of a state-level model from a high-level model, and for the exact reduction of a state-level model using information from the high-level model. Exact reduction is achieved through the automated application to the high-level model of the symmetry reduction technique and reduction by decomposition by independent subsystems, allowing potentially significant reductions without the need to generate a full model. The application of the method to biochemical reaction systems is illustrated by models describing a hypothetical ion-channel at several levels of complexity. The method allows for the reduction of the otherwise intractable example models to a manageable size.
Testing of indoor radon reduction techniques in eastern Pennsylvania: An update
International Nuclear Information System (INIS)
Henschel, D.B.; Scott, A.G.
1987-01-01
EPA has installed radon reduction measures in 38 houses in the Reading Prong region of eastern Pennsylvania. All were basement houses with hollow block or poured concrete foundation walls. The reduction approaches tested in most houses involved active soil ventilation, including: suction on the footing drain tile system; suction underneath the concrete slabs, using pipes inserted through the slabs from inside the house; and ventilation of the void network inside hollow block foundation walls. Heat recovery ventilators (HRVs) were tested in three houses. The current results confirm that, for the houses tested here, drain tile suction appears consistently able to provide high radon reductions when a complete loop of drain tile exists, often reducing high-radon houses to 4 pCi/l (148 Bq/m3) and less. Sub-slab suction (with pipes through the slab) can also provide high reductions if a sufficient number of suction pipes are located properly. Placement of one or more sub-slab suction pipes near each perimeter wall appears in this testing to aid in treating the major soil gas entry routes, although fewer pipes can sometimes give high reductions if conditions are favorable. For effective radon reduction using any active soil ventilation technique, it is important that major wall and slab openings be closed, and that a fan be employed that is capable of developing adequate static pressure
Power system coherency and model reduction
Chow, Joe H
2014-01-01
This book provides a comprehensive treatment for understanding interarea modes in large power systems and obtaining reduced-order models using the coherency concept and selective modal analysis method.
Model Order Reduction: Application to Electromagnetic Problems
Paquay, Yannick
2017-01-01
With the increase in computational resources, numerical modeling has grown exponentially over the last two decades. From structural analysis to combustion modeling and electromagnetics, discretization methods, in particular the finite element method, have had a tremendous impact. Their main advantage consists in a correct representation of dynamical and nonlinear behaviors by solving equations at the local scale; however, the spatial discretization inherent to such approaches is also its main drawbac...
Bulk Current Injection Testing of Cable Noise Reduction Techniques, 50 kHz to 400 MHz
Bradley, Arthur T.; Hare, Richard J.; Singh, Manisha
2009-01-01
This paper presents empirical results of cable noise reduction techniques as demonstrated using bulk current injection (BCI) techniques with radiated fields from 50 kHz - 400 MHz. It is a follow up to the two-part paper series presented at the Asia Pacific EMC Conference that focused on TEM cell signal injection. This paper discusses the effects of cable types, shield connections, and chassis connections on cable noise. For each topic, well established theories are compared with data from a real-world physical system.
Use experiences of MCNP in nuclear energy study. 2. Review of variance reduction techniques
Energy Technology Data Exchange (ETDEWEB)
Sakurai, Kiyoshi; Yamamoto, Toshihiro [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment] [eds.]
1998-03-01
The 'MCNP Use Experience' Working Group was established in 1996 under the Special Committee on Nuclear Code Evaluation. This year's main activity of the working group has been focused on the review of variance reduction techniques of Monte Carlo calculations. This working group dealt with the variance reduction techniques of (1) neutron and gamma ray transport calculation of fusion reactor system, (2) concept design of nuclear transmutation system using accelerator, (3) JMTR core calculation, (4) calculation of prompt neutron decay constant, (5) neutron and gamma ray transport calculation for exposure evaluation, (6) neutron and gamma ray transport calculation of shielding system, etc. Furthermore, this working group started an activity to compile a 'Guideline of Monte Carlo Calculation' which will be a standard in the future. The appendices of this report include this 'Guideline', the use experience of MCNP 4B and examples of Monte Carlo calculations of high energy charged particles. The 11 papers are indexed individually. (J.P.N.)
Multiscale model reduction for shale gas transport in fractured media
Akkutlu, I. Y.
2016-05-18
In this paper, we develop a multiscale model reduction technique that describes shale gas transport in fractured media. Due to the pore-scale heterogeneities and processes, we use upscaled models to describe the matrix. We follow our previous work (Akkutlu et al. Transp. Porous Media 107(1), 235–260, 2015), where we derived an upscaled model in the form of a generalized nonlinear diffusion model to describe the effects of kerogen. To model the interaction between the matrix and the fractures, we use the Generalized Multiscale Finite Element Method (Efendiev et al. J. Comput. Phys. 251, 116–135, 2013, 2015). In this approach, the matrix and fracture interaction is modeled via local multiscale basis functions. In Efendiev et al. (2015), we developed the GMsFEM and applied it to linear flows with horizontal or vertical fracture orientations aligned with a Cartesian fine grid. The approach in Efendiev et al. (2015) does not allow handling arbitrary fracture distributions. In this paper, we (1) consider arbitrary fracture distributions on an unstructured grid; (2) develop GMsFEM for nonlinear flows; and (3) develop online basis function strategies to adaptively improve the convergence. The number of multiscale basis functions in each coarse region represents the degrees of freedom needed to achieve a certain error threshold. Our approach is adaptive in the sense that multiscale basis functions can be added in the regions of interest. Numerical results for a two-dimensional problem are presented to demonstrate the efficiency of the proposed approach. © 2016 Springer International Publishing Switzerland
Dimension reduction techniques for the integrative analysis of multi-omics data
Zeleznik, Oana A.; Thallinger, Gerhard G.; Kuster, Bernhard; Gholami, Amin M.
2016-01-01
State-of-the-art next-generation sequencing, transcriptomics, proteomics and other high-throughput ‘omics' technologies enable the efficient generation of large experimental data sets. These data may yield unprecedented knowledge about molecular pathways in cells and their role in disease. Dimension reduction approaches have been widely used in exploratory analysis of single omics data sets. This review will focus on dimension reduction approaches for simultaneous exploratory analyses of multiple data sets. These methods extract the linear relationships that best explain the correlated structure across data sets, the variability both within and between variables (or observations) and may highlight data issues such as batch effects or outliers. We explore dimension reduction techniques as one of the emerging approaches for data integration, and how these can be applied to increase our understanding of biological systems in normal physiological function and disease. PMID:26969681
A simple noniterative principal component technique for rapid noise reduction in parallel MR images.
Patel, Anand S; Duan, Qi; Robson, Philip M; McKenzie, Charles A; Sodickson, Daniel K
2012-01-01
The utilization of parallel imaging permits increased MR acquisition speed and efficiency; however, parallel MRI usually leads to a deterioration in the signal-to-noise ratio when compared with otherwise equivalent unaccelerated acquisitions. At high accelerations, the parallel image reconstruction matrix tends to become dominated by one principal component. This has been utilized to enable substantial reductions in g-factor-related noise. A previously published technique achieved noise reductions via a computationally intensive search for multiples of the dominant singular vector which, when subtracted from the image, minimized the joint entropy between the accelerated image and a reference image. We describe a simple algorithm that can accomplish similar results without a time-consuming search. Significant reductions in g-factor-related noise were achieved using this new algorithm with in vivo acquisitions at 1.5 T with an eight-element array. Copyright © 2011 John Wiley & Sons, Ltd.
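The rank-1 intuition behind such methods (a single dominant singular component carrying most of the structured noise) can be illustrated on synthetic data. This toy sketch is not the published algorithm; the "image" and the artifact pattern are fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

# A "noisy" image whose artifact is a multiple of one dominant spatial
# pattern (a rank-1 component).  Projecting that component out removes
# most of the structured noise.
clean = rng.normal(size=(64, 64))
pattern = np.outer(np.sin(np.linspace(0, np.pi, 64)),
                   np.cos(np.linspace(0, np.pi, 64)))
noisy = clean + 25.0 * pattern          # artifact dominates the image

U, s, Vt = np.linalg.svd(noisy, full_matrices=False)
denoised = noisy - s[0] * np.outer(U[:, 0], Vt[0])   # drop dominant component

err_before = np.linalg.norm(noisy - clean)
err_after = np.linalg.norm(denoised - clean)
print(round(err_before, 1), round(err_after, 1))
```

The price of the subtraction is the loss of whatever true image content lies along the removed singular vector, which is why the published methods scale rather than fully remove it.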
A Size Reduction Technique for Mobile Phone PIFA Antennas Using Lumped Inductors
DEFF Research Database (Denmark)
Thaysen, Jesper; Jakobsen, Kaj Bjarne
2005-01-01
A size reduction technique for the planar inverted-F antenna (PIFA) is presented. An 18 nH lumped inductor is used in addition to a small 0.3 cm3 PIFA. The PIFA is located on dielectric foam, 5 mm above a 40 mm × 100 mm ground plane. It is possible to reduce the center frequency (|S11|min) by 33 ...
Extreme model reduction of shear layers
Qawasmeh, Bashar Rafee
The aim of this research is to develop nonlinear low-dimensional models (LDMs) to describe vortex dynamics in shear layers. A modified Proper Orthogonal Decomposition (POD)/Galerkin projection method is developed to obtain models at extremely low dimension for shear layers. The idea is to dynamically scale the shear layer along the y direction to factor out the shear layer growth and capture the dynamics by only a couple of modes. The models are developed for two flows, incompressible spatially developing and weakly compressible temporally developing shear layers, respectively. To capture basic dynamics, the low-dimensional models require only two POD modes for each wavenumber/frequency. Thus, a two-mode model is capable of representing single-wavenumber/frequency dynamics such as vortex roll-up, and a four-mode model is capable of representing the nonlinear dynamics involving a fundamental wavenumber/frequency and its subharmonic, such as vortex pairing/merging. Most of the energy is captured by the first mode of each wavenumber/frequency; the second POD mode, however, plays a critical role and needs to be included. In the thesis, we first apply the approach to temporally developing weakly compressible shear layers. In compressible flows, the thermodynamic variables are dynamically important, and must be considered. We choose isentropic Navier-Stokes equations for simplicity, and choose a proper inner product to present both kinetic energy and thermal energy. Two cases of convective Mach numbers are studied for low compressibility and moderate compressibility. Moreover, we study the sensitivity of the compressible four-mode model to several flow parameters: Mach number, the strength of initial perturbations of the fundamental and its subharmonic, and Reynolds number. Second, we apply the approach to spatially developing incompressible shear layers with periodicity in time. We consider a streamwise parabolic form of the Navier-Stokes equations. When we add arbitrary
Efficient Data Reduction Techniques for Remote Applications of a Wireless Visual Sensor Network
Directory of Open Access Journals (Sweden)
Khursheed Khursheed
2013-05-01
A Wireless Visual Sensor Network (WVSN) is formed by deploying many Visual Sensor Nodes (VSNs) in the field. After acquiring an image of the area of interest, the VSN performs local processing on it and transmits the result using an embedded wireless transceiver. Wireless data transmission consumes a great deal of energy, where energy consumption is mainly dependent on the amount of information being transmitted. The image captured by the VSN contains a huge amount of data. For certain applications, segmentation can be performed on the captured images. The amount of information in the segmented images can be reduced by applying efficient bi-level image compression methods. In this way, the communication energy consumption of each of the VSNs can be reduced. However, the data reduction capability of bi-level image compression standards is fixed and is limited by the used compression algorithm. For applications attributing few changes in adjacent frames, change coding can be applied for further data reduction. Detecting and compressing only the Regions of Interest (ROIs) in the change frame is another possibility for further data reduction. In a communication system, where both the sender and the receiver know the employed compression standard, there is a possibility for further data reduction by not including the header information in the compressed bit stream of the sender. This paper summarizes different information reduction techniques such as image coding, change coding and ROI coding. The main contribution is the investigation of the combined effect of all these coding methods and their application to a few representative real life applications. This paper is intended to be a resource for researchers interested in techniques for information reduction in energy constrained embedded applications.
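Change coding, one of the reduction steps summarized above, can be sketched with two synthetic bi-level frames: encoding the XOR of consecutive frames gives a run-length coder far fewer runs than encoding the full frame. The frame contents are fabricated for illustration.

```python
import numpy as np

def run_lengths(bits):
    # Run-length encode a flat binary array: (first value, run lengths).
    change = np.flatnonzero(np.diff(bits)) + 1
    bounds = np.concatenate([[0], change, [len(bits)]])
    return int(bits[0]), np.diff(bounds)

# Two consecutive segmented (bi-level) frames differing in a small region.
prev = np.zeros((64, 64), dtype=np.uint8)
prev[10:30, 10:30] = 1
curr = prev.copy()
curr[28:34, 28:34] = 1            # only a small new region appears

# Change coding: encode the XOR of the frames instead of the full frame.
change_frame = prev ^ curr
_, runs_full = run_lengths(curr.ravel())
_, runs_change = run_lengths(change_frame.ravel())
print(len(runs_full), len(runs_change))   # the change frame has fewer runs
```

The receiver reconstructs the current frame by XOR-ing the decoded change frame with the previously received frame.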
A HYBRID TECHNIQUE FOR PAPR REDUCTION OF OFDM USING DHT PRECODING WITH PIECEWISE LINEAR COMPANDING
Directory of Open Access Journals (Sweden)
Thammana Ajay
2016-06-01
Orthogonal Frequency Division Multiplexing (OFDM) is a fascinating approach for wireless communication applications which require huge data rates. However, the OFDM signal suffers from its large Peak-to-Average Power Ratio (PAPR), which results in significant distortion while passing through a nonlinear device, such as a transmitter high power amplifier (HPA). Due to this high PAPR, the complexity of the HPA as well as of the DAC also increases. Many techniques are available for PAPR reduction in OFDM. Among them, companding is an attractive low complexity technique for reducing the PAPR of the OFDM signal. Recently, a piecewise linear companding technique was recommended, aiming at minimizing companding distortion. In this paper, a combined piecewise linear companding approach with a Discrete Hartley Transform (DHT) precoding method is expected to reduce the PAPR of OFDM to a great extent. Simulation results show that this new proposed method obtains significant PAPR reduction while maintaining improved performance in Bit Error Rate (BER) and Power Spectral Density (PSD) compared to the piecewise linear companding method.
Graphical approach to model reduction for nonlinear biochemical networks.
Holland, David O; Krainak, Nicholas C; Saucerman, Jeffrey J
2011-01-01
Model reduction is a central challenge to the development and analysis of multiscale physiology models. Advances in model reduction are needed not only for computational feasibility but also for obtaining conceptual insights from complex systems. Here, we introduce an intuitive graphical approach to model reduction based on phase plane analysis. Timescale separation is identified by the degree of hysteresis observed in phase-loops, which guides a "concentration-clamp" procedure for estimating explicit algebraic relationships between species equilibrating on fast timescales. The primary advantages of this approach over Jacobian-based timescale decomposition are that: 1) it incorporates nonlinear system dynamics, and 2) it can be easily visualized, even directly from experimental data. We tested this graphical model reduction approach using a 25-variable model of cardiac β(1)-adrenergic signaling, obtaining 6- and 4-variable reduced models that retain good predictive capabilities even in response to new perturbations. These 6 signaling species appear to be optimal "kinetic biomarkers" of the overall β(1)-adrenergic pathway. The 6-variable reduced model is well suited for integration into multiscale models of heart function, and more generally, this graphical model reduction approach is readily applicable to a variety of other complex biological systems.
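The fast-timescale elimination that the "concentration-clamp" procedure formalizes can be sketched on a toy two-variable system; the model and rate constants below are invented for illustration, not taken from the β(1)-adrenergic network:

```python
def simulate_full(eps=0.01, k=0.5, dt=1e-4, T=5.0):
    """Fast-slow pair: f chases its equilibrium g(s) = s/(1+s) on a fast
    timescale eps, while the slow species s decays at a rate set by f."""
    s, f, t = 2.0, 0.0, 0.0
    while t < T:
        g = s / (1.0 + s)
        f += dt * (g - f) / eps   # fast variable
        s += dt * (-k * f)        # slow variable
        t += dt
    return s

def simulate_reduced(k=0.5, dt=1e-4, T=5.0):
    """Reduced model: the fast variable is replaced by its algebraic
    equilibrium g(s), as estimated by the clamp procedure."""
    s, t = 2.0, 0.0
    while t < T:
        s += dt * (-k * (s / (1.0 + s)))
        t += dt
    return s

print(simulate_full(), simulate_reduced())
```

Because the fast variable equilibrates in a time of order eps, the reduced one-variable model tracks the full system closely, which is the property the paper exploits to shrink 25 variables to 6.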
Graphical approach to model reduction for nonlinear biochemical networks.
Directory of Open Access Journals (Sweden)
David O Holland
Full Text Available Model reduction is a central challenge to the development and analysis of multiscale physiology models. Advances in model reduction are needed not only for computational feasibility but also for obtaining conceptual insights from complex systems. Here, we introduce an intuitive graphical approach to model reduction based on phase plane analysis. Timescale separation is identified by the degree of hysteresis observed in phase-loops, which guides a "concentration-clamp" procedure for estimating explicit algebraic relationships between species equilibrating on fast timescales. The primary advantages of this approach over Jacobian-based timescale decomposition are that: 1) it incorporates nonlinear system dynamics, and 2) it can be easily visualized, even directly from experimental data. We tested this graphical model reduction approach using a 25-variable model of cardiac β(1)-adrenergic signaling, obtaining 6- and 4-variable reduced models that retain good predictive capabilities even in response to new perturbations. These 6 signaling species appear to be optimal "kinetic biomarkers" of the overall β(1)-adrenergic pathway. The 6-variable reduced model is well suited for integration into multiscale models of heart function, and more generally, this graphical model reduction approach is readily applicable to a variety of other complex biological systems.
Directory of Open Access Journals (Sweden)
Sudha Mohankumar
2016-06-01
Full Text Available Precise rainfall forecasting is a common challenge across the globe in meteorological prediction. As rainfall forecasting involves rather complex dynamic parameters, demand for novel approaches that improve forecasting accuracy has grown. Recently, Rough Set Theory (RST) has attracted a wide variety of scientific applications and is extensively adopted in decision support systems. Although there are several weather prediction techniques in the existing literature, identifying significant inputs for modelling effective rainfall prediction is not addressed by present mechanisms. Therefore, this investigation examined the feasibility of using rough-set-based feature selection and data mining methods, namely Naïve Bayes (NB), Bayesian Logistic Regression (BLR), Multi-Layer Perceptron (MLP), J48, Classification and Regression Tree (CART), Random Forest (RF), and Support Vector Machine (SVM), to forecast rainfall. Feature selection, or reduction, is the process of identifying a significant feature subset that characterizes the information system as completely as the full feature set. This paper introduces a novel rough-set-based Maximum Frequency Weighted (MFW) feature reduction technique for finding an effective feature subset for modelling an efficient rainfall forecast system. The experimental analysis and results indicate substantial improvements in prediction models when trained using the selected feature subset. The CART and J48 classifiers achieved improved accuracies of 83.42% and 89.72%, respectively. From the experimental study, relative humidity2 (a4) and solar radiation (a6) have been identified as the effective parameters for modelling rainfall prediction.
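As a rough illustration of frequency-weighted feature scoring, a simplified stand-in for the paper's MFW technique (the exact rough-set formulation differs) can be sketched: each feature is scored by how strongly its most frequent value-class pairs dominate.

```python
from collections import Counter

def mfw_scores(rows, labels):
    """Score each categorical feature by the frequency-weighted purity of
    its values: for each value, the fraction of samples carrying it that
    fall in that value's majority class, weighted by the value's frequency.
    Illustrative only, not the paper's exact MFW formulation."""
    n_feat = len(rows[0])
    scores = []
    for j in range(n_feat):
        pair_counts = Counter((r[j], y) for r, y in zip(rows, labels))
        value_counts = Counter(r[j] for r in rows)
        score = 0.0
        for v, cnt in value_counts.items():
            best = max(c for (vv, y), c in pair_counts.items() if vv == v)
            score += (cnt / len(rows)) * (best / cnt)
        scores.append(score)
    return scores

# Toy weather table: feature 0 predicts rain perfectly, feature 1 is noise.
rows = [("humid", "a"), ("humid", "b"), ("dry", "a"), ("dry", "b")]
labels = ["rain", "rain", "no", "no"]
print(mfw_scores(rows, labels))  # feature 0 scores 1.0, feature 1 scores 0.5
```

A reduct would then keep only the top-scoring features, which is the role the selected subset (a4, a6) plays in the paper's forecast models.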
Dimensional reduction of the ABJM model
Nastase, Horatiu; Papageorgakis, Constantinos
2011-03-01
We dimensionally reduce the ABJM model, obtaining a two-dimensional theory that can be thought of as a 'master action'. This encodes information about both T- and S-duality, i.e. it describes fundamental strings (F1) and D-strings (D1) in 9 and 10 dimensions. The Higgsed theory at large VEV, ṽ, and large k yields D1-brane actions in 9d and 10d, depending on which auxiliary fields are integrated out. For N = 1 there is a map to a Green-Schwarz string wrapping a nontrivial circle in ℂ^4/ℤ_k.
Techniques to develop data for hydrogeochemical models
Energy Technology Data Exchange (ETDEWEB)
Thompson, C.M.; Holcombe, L.J.; Gancarz, D.H.; Behl, A.E. (Radian Corp., Austin, TX (USA)); Erickson, J.R.; Star, I.; Waddell, R.K. (Geotrans, Inc., Boulder, CO (USA)); Fruchter, J.S. (Battelle Pacific Northwest Lab., Richland, WA (USA))
1989-12-01
The utility industry, through its research and development organization, the Electric Power Research Institute (EPRI), is developing the capability to evaluate potential migration of waste constituents from utility disposal sites to the environment. These investigations have developed computer programs to predict leaching, transport, attenuation, and fate of inorganic chemicals. To predict solute transport at a site, the computer programs require data concerning the physical and chemical conditions that affect solute transport at the site. This manual provides a comprehensive view of the data requirements for computer programs that predict the fate of dissolved materials in the subsurface environment and describes techniques to measure or estimate these data. In this manual, basic concepts are described first, and individual properties and their associated measurement or estimation techniques are described later. The first three sections review hydrologic and geochemical concepts, discuss data requirements for geohydrochemical computer programs, and describe the types of information the programs produce. The remaining sections define and/or describe the properties of interest for geohydrochemical modeling and summarize available techniques to measure or estimate values for these properties. A glossary of terms associated with geohydrochemical modeling and an index are provided at the end of this manual. 318 refs., 9 figs., 66 tabs.
Development and validation of a building design waste reduction model.
Llatas, C; Osmani, M
2016-10-01
Reduction of construction waste is a pressing need in many countries. The design of building elements is considered a pivotal process for achieving waste reduction at source, as it enables an informed prediction of wastage reduction levels. However, the lack of quantitative methods linking design strategies to waste reduction hinders designing-out-waste practice in building projects. Therefore, this paper addresses this knowledge gap through the design and validation of a Building Design Waste Reduction Strategies (Waste ReSt) model that investigates the relationships between design variables and their impact on onsite waste reduction. The Waste ReSt model was validated in a real-world case study involving 20 residential buildings in Spain. The validation process comprised three stages. Firstly, design waste causes were analyzed. Secondly, design strategies were applied, leading to several alternative low-waste building elements. Finally, their potential source reduction levels were quantified and discussed within the context of the literature. The Waste ReSt model could serve as an instrumental tool to simulate designing-out-waste strategies in building projects. The knowledge provided by the model could help project stakeholders better understand the correlation between the design process and waste sources and subsequently implement design practices for low-waste buildings. Copyright © 2016 Elsevier Ltd. All rights reserved.
Efremenko, Dmitry; Doicu, Adrian; Loyola, Diego; Trautmann, Thomas
2014-01-01
In this paper, we introduce several dimensionality reduction techniques for optical parameters. We consider principal component analysis, local linear embedding methods (locality pursuit embedding, locality preserving projection, locally embedded analysis), and discrete orthogonal transforms (cosine, Legendre, wavelet). Principal component analysis has already been shown to be an effective and accurate method for enhancing radiative transfer performance in simulations of an absorbing and scattering atmosphere. By linearizing the corresponding radiative transfer model, we analyze the applicability of the proposed methods to a practical problem: total ozone column retrieval from UV-backscatter measurements.
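A minimal sketch of the first of these techniques, principal component analysis, in the two-dimensional case where the covariance eigendecomposition has a closed form (the toy data below are invented for illustration):

```python
import math

def pca_2d_first_component(points):
    """First principal component of 2-D data via the closed-form
    eigendecomposition of the 2x2 covariance matrix."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]].
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    lam = tr / 2 + math.sqrt(max(tr * tr / 4 - det, 0.0))
    # Corresponding eigenvector (with a fallback for the axis-aligned case).
    if abs(sxy) > 1e-12:
        v = (lam - syy, sxy)
    else:
        v = (1.0, 0.0) if sxx >= syy else (0.0, 1.0)
    norm = math.hypot(*v)
    return (v[0] / norm, v[1] / norm)

# Points scattered along the line y = 2x: the component should point that way.
pts = [(x, 2 * x + 0.01 * ((-1) ** i)) for i, x in enumerate(range(-5, 6))]
print(pca_2d_first_component(pts))
```

Projecting onto the leading components is what compresses the optical-parameter space before the radiative transfer computation; the embedding and transform methods in the abstract replace this linear projection with other bases.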
Directory of Open Access Journals (Sweden)
Yewon Lee
2014-11-01
Full Text Available Background: Frontal sinus fractures, particularly anterior sinus fractures, are relatively common facial fractures. Many agree on the general principles of frontal fracture management; however, the optimal methods of reduction are still controversial. In this article, we suggest a simple reduction method using a subbrow incision as a treatment for isolated anterior sinus fractures. Methods: Between March 2011 and March 2014, 13 patients with isolated frontal sinus fractures were treated by open reduction and internal fixation through a subbrow incision. The subbrow incision line was designed to be precisely at the lower margin of the brow in order to obtain an inconspicuous scar. A periosteal incision was made 3 mm above the superior orbital rim. The fracture site of the frontal bone was reduced, and bone fixation was performed using an absorbable plate and screws. Results: Contour deformities were completely restored in all patients, and all patients were satisfied with the results. Scars were barely visible in the long-term follow-up. No complications related to the procedure, such as infection, uncontrolled sinus bleeding, hematoma, paresthesia, mucocele, or posterior wall and brain injury, were observed. Conclusions: The subbrow approach allowed for accurate reduction and internal fixation of fractures in the anterior table of the frontal sinus by providing direct visualization of the fracture. Considering the surgical success of the reduction and the rigid fixation, patient satisfaction, and aesthetic outcomes, this transcutaneous approach through a subbrow incision is concluded to be superior to the other reduction techniques used in the case of an anterior table frontal sinus fracture.
Model building by Coset Space Dimensional Reduction scheme
Jittoh, Toshifumi; Koike, Masafumi; Nomura, Takaaki; Sato, Joe; Shimomura, Takashi
2009-04-01
We investigate gauge-Higgs unification models within the scheme of coset space dimensional reduction, beginning with a gauge theory in a fourteen-dimensional spacetime whose extra-dimensional space has the structure of a ten-dimensional compact coset space. We found seventeen phenomenologically acceptable models through an exhaustive search over the candidate coset spaces, the gauge group in fourteen dimensions, and the fermion representation. Of the seventeen, ten models led to SO(10)(×U(1)) GUT-like models after dimensional reduction, three to SU(5)×U(1) GUT-like models, and four to SU(3)×SU(2)×U(1)×U(1) Standard-Model-like models. The combinations of the coset space, the gauge group in the fourteen-dimensional spacetime, and the representation of the fermion content of such models are listed.
Lopez-de-Teruel, Pedro E; Canovas, Oscar; Garcia, Felix J
2017-04-15
Indoor positioning methods based on fingerprinting and radio signals rely on the quality of the radio map. For example, for room-level classification purposes, it is required that the signal observations related to each room exhibit significant differences in their RSSI values. However, it is difficult to verify and visualize that separability since radio maps are constituted by multi-dimensional observations whose dimension is directly related to the number of access points or monitors being employed for localization purposes. In this paper, we propose a refinement cycle for passive indoor positioning systems, which is based on dimensionality reduction techniques, to evaluate the quality of a radio map. By means of these techniques and our own data representation, we have defined two different visualization methods to obtain graphical information about the quality of a particular radio map in terms of overlapping areas and outliers. That information will be useful to determine whether new monitors are required or some existing ones should be moved. We have performed an exhaustive experimental analysis based on a variety of different scenarios, some deployed by our own research group and others corresponding to a well-known existing dataset widely analyzed by the community, in order to validate our proposal. As we will show, among the different combinations of data representation methods and dimensionality reduction techniques that we discuss, we have found that there are some specific configurations that are more useful in order to perform the refinement process.
Lower-Order Compensation Chain Threshold-Reduction Technique for Multi-Stage Voltage Multipliers.
Dell' Anna, Francesco; Dong, Tao; Li, Ping; Wen, Yumei; Azadmehr, Mehdi; Casu, Mario; Berg, Yngvar
2018-04-17
This paper presents a novel threshold-compensation technique for multi-stage voltage multipliers employed in low power applications such as passive and autonomous wireless sensing nodes (WSNs) powered by energy harvesters. The proposed threshold-reduction technique enables a topological design methodology which, through an optimum control of the trade-off among transistor conductivity and leakage losses, is aimed at maximizing the voltage conversion efficiency (VCE) for a given ac input signal and physical chip area occupation. The conducted simulations positively assert the validity of the proposed design methodology, emphasizing the exploitable design space yielded by the transistor connection scheme in the voltage multiplier chain. An experimental validation and comparison of threshold-compensation techniques was performed, adopting 2N5247 N-channel junction field effect transistors (JFETs) for the realization of the voltage multiplier prototypes. The attained measurements clearly support the effectiveness of the proposed threshold-reduction approach, which can significantly reduce the chip area occupation for a given target output performance and ac input signal.
Lower-Order Compensation Chain Threshold-Reduction Technique for Multi-Stage Voltage Multipliers
Directory of Open Access Journals (Sweden)
Francesco Dell’ Anna
2018-04-01
Full Text Available This paper presents a novel threshold-compensation technique for multi-stage voltage multipliers employed in low power applications such as passive and autonomous wireless sensing nodes (WSNs) powered by energy harvesters. The proposed threshold-reduction technique enables a topological design methodology which, through an optimum control of the trade-off among transistor conductivity and leakage losses, is aimed at maximizing the voltage conversion efficiency (VCE) for a given ac input signal and physical chip area occupation. The conducted simulations positively assert the validity of the proposed design methodology, emphasizing the exploitable design space yielded by the transistor connection scheme in the voltage multiplier chain. An experimental validation and comparison of threshold-compensation techniques was performed, adopting 2N5247 N-channel junction field effect transistors (JFETs) for the realization of the voltage multiplier prototypes. The attained measurements clearly support the effectiveness of the proposed threshold-reduction approach, which can significantly reduce the chip area occupation for a given target output performance and ac input signal.
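A first-order, idealized model shows why threshold compensation raises the voltage conversion efficiency of a multi-stage multiplier; the per-stage formula and the numbers below are textbook-style assumptions, not the paper's circuit or device values:

```python
def multiplier_vout(n_stages, v_amp, v_th, v_comp=0.0):
    """Idealized steady-state output of an n-stage voltage multiplier:
    each stage contributes twice the input amplitude minus two
    (compensated) threshold drops. Ignores leakage and load current."""
    v_eff = max(v_th - v_comp, 0.0)
    return n_stages * (2 * v_amp - 2 * v_eff)

def vce(v_out, n_stages, v_amp):
    """Voltage conversion efficiency: actual output over the ideal 2*n*Va."""
    return v_out / (2 * n_stages * v_amp)

uncomp = multiplier_vout(5, 0.5, 0.3)               # no compensation
comp = multiplier_vout(5, 0.5, 0.3, v_comp=0.25)    # threshold mostly cancelled
print(uncomp, vce(uncomp, 5, 0.5))
print(comp, vce(comp, 5, 0.5))
```

In this simple model the uncompensated chain loses 60% of the ideal output to threshold drops, while compensation recovers most of it; the paper's contribution is controlling the resulting conductivity/leakage trade-off rather than this elementary arithmetic.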
Dose-reduction techniques for high-dose worker groups in nuclear power plants
International Nuclear Information System (INIS)
Khan, T.A.; Baum, J.W.; Dionne, B.J.
1991-03-01
This report summarizes the main findings of a study of the extent of radiation dose received by special work groups in the nuclear power industry. Work groups which chronically receive large doses were investigated, using information provided by the industry. The tasks that give high doses to these work groups were examined, and techniques described that were found to be particularly successful in reducing dose. Quantitative information on the extent of radiation doses to various work groups shows that significant numbers of workers in several critical groups receive doses greater than 1 and even 2 rem per year, particularly contract personnel and workers at BWR-type plants. The number of radiation workers whose lifetime dose is greater than their age is far smaller. Although the techniques presented would go some way toward reducing dose, a sizeable reduction for the high-dose work groups may require development of new dose-reduction techniques as well as major changes in procedures. 10 refs., 26 tabs
Earlobe Reduction with Minimally Visible Scars: The Sub-Antitragal Groove Technique.
Van Putte, Lennert; Colpaert, Steven D M
2017-04-01
Earlobe ptosis, defined as an unappealingly large free caudal segment of over 5 mm, is a common consequence of ageing. It is therefore important to consider reduction as a complement to rhytidectomy in selected patients. Moreover, facelifting operations can result in disproportionate or poorly positioned earlobes. Current earlobe-reducing techniques can leave a scar on the free lateral edge, causing notching, or involve complex pattern excisions with limited resection capability and the risk of deformities. The presented technique, on the other hand, is versatile and easy to use, as it follows general geometric principles. Excision of the designed area results in an earlobe flap which can be rotated into the excision defect. This results in ideal scar locations, situated at the sub-antitragal groove and at the cheek junction. The technique is adjustable, to incorporate potential piercing holes. It takes approximately 15 minutes per earlobe to complete. The resulting earlobes have undisturbed free borders. No vascularization-related flap problems were noted. This technique is a viable method for reducing the earlobe with minimally visible scars. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.
Improved modeling techniques for turbomachinery flow fields
Energy Technology Data Exchange (ETDEWEB)
Lakshminarayana, B.; Fagan, J.R. Jr.
1995-12-31
This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamics (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. The work will be accomplished in a cooperative program by Penn State University and the Allison Engine Company. Its tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor.
Advances in transgenic animal models and techniques.
Ménoret, Séverine; Tesson, Laurent; Remy, Séverine; Usal, Claire; Ouisse, Laure-Hélène; Brusselle, Lucas; Chenouard, Vanessa; Anegon, Ignacio
2017-10-01
On May 11th and 12th 2017 was held in Nantes, France, the international meeting "Advances in transgenic animal models and techniques" ( http://www.trm.univ-nantes.fr/ ). This biennial meeting is the fifth one of its kind to be organized by the Transgenic Rats ImmunoPhenomic (TRIP) Nantes facility ( http://www.tgr.nantes.inserm.fr/ ). The meeting was supported by private companies (SONIDEL, Scionics computer innovation, New England Biolabs, MERCK, genOway, Journal Disease Models and Mechanisms) and by public institutions (International Society for Transgenic Technology, University of Nantes, INSERM UMR 1064, SFR François Bonamy, CNRS, Région Pays de la Loire, Biogenouest, TEFOR infrastructure, ITUN, IHU-CESTI and DHU-Oncogeffe and Labex IGO). Around 100 participants, from France but also from different European countries, Japan and USA, attended the meeting.
Reduction and Mastopexy Techniques for Optimal Results in Oncoplastic Breast Reconstruction
Rose, Jessica F.; Colen, Jessica Suarez; Ellsworth, Warren A.
2015-01-01
Breast conservation therapy has emerged as an important option for select cancer patients, as survival rates are similar to those after mastectomy. Large tumor size and the effects of radiation create cosmetic deformities in the shape of the breast after lumpectomy alone. Volume loss, nipple displacement, and asymmetry of the contralateral breast are just a few concerns. Reconstruction of lumpectomy defects with local tissue rearrangement, in concert with reduction and mastopexy techniques, has allowed for outstanding aesthetic results. In patients who have a reasonable tumor-to-breast-size ratio, this oncoplastic surgery can successfully treat the patient's cancer while often improving upon preoperative breast shape. Specific surgical guidelines in reduction and mastopexy help achieve predictable aesthetic results, despite the effects of radiation, and can allow for a single surgical procedure for cancer removal, reconstruction, and contralateral symmetry in one stage. PMID:26528086
Directory of Open Access Journals (Sweden)
Dirk Wagenaar
Full Text Available Typical streak artifacts known as metal artifacts occur in the presence of strongly attenuating materials in computed tomography (CT). Recently, vendors have started offering metal artifact reduction (MAR) techniques. In addition, a MAR technique called the metal deletion technique (MDT) is freely available and able to reduce metal artifacts using reconstructed images. Although a comparison of the MDT to other MAR techniques exists, a comparison of commercially available MAR techniques is lacking. The aim of this study was therefore to quantify the difference in effectiveness of the currently available MAR techniques of different scanners and the MDT technique. Three vendors were asked to use their preferential CT scanner for applying their MAR techniques. The scans were performed on a Philips Brilliance iCT 256 (S1), a GE Discovery CT 750 HD (S2) and a Siemens Somatom Definition AS Open (S3). The scans were made using an anthropomorphic head and neck phantom (Kyoto Kagaku, Japan). Three amalgam dental implants were constructed and inserted between the phantom's teeth. The average absolute error (AAE) was calculated for all reconstructions in the proximity of the amalgam implants. The commercial techniques reduced the AAE by 22.0±1.6%, 16.2±2.6% and 3.3±0.7% for S1 to S3, respectively. After applying the MDT to uncorrected scans of each scanner, the AAE was reduced by 26.1±2.3%, 27.9±1.0% and 28.8±0.5%, respectively. The difference in efficiency between the commercial techniques and the MDT was statistically significant for S2 (p=0.004) and S3 (p<0.001), but not for S1 (p=0.63). The effectiveness of MAR differs between vendors. S1 performed slightly better than S2, and both performed better than S3. Furthermore, for our phantom and outcome measure, the MDT was more effective than the commercial MAR techniques on all scanners.
Noise reduction techniques used on the high power klystron modulators at Argonne National Laboratory
International Nuclear Information System (INIS)
Russell, T.J.
1993-01-01
The modulators used in the Advanced Photon Source at Argonne National Laboratory have been redesigned with an emphasis on electrical noise reduction. Since these are 100 MW modulators with <700 ns rise times, electrical noise can couple very easily to other electronic equipment in the area. This paper will detail the efforts made to reduce noise coupled to surrounding equipment. Shielding and sound grounding techniques accomplished the goal of drastically reducing the noise induced in surrounding equipment. The approach used in grounding and shielding will be discussed, and data will be presented comparing earlier designs to the improved design
Model reduction of nonlinear systems subject to input disturbances
Ndoye, Ibrahima
2017-07-10
The method of convex optimization is used as a tool for model reduction of a class of nonlinear systems in the presence of disturbances. It is shown that under some conditions the nonlinear disturbed system can be approximated by a reduced order nonlinear system with similar disturbance-output properties to the original plant. The proposed model reduction strategy preserves the nonlinearity and the input disturbance nature of the model. It guarantees a sufficiently small error between the outputs of the original and the reduced-order systems, and also maintains the properties of input-to-state stability. The matrices of the reduced order system are given in terms of a set of linear matrix inequalities (LMIs). The paper concludes with a demonstration of the proposed approach on model reduction of a nonlinear electronic circuit with additive disturbances.
H∞ /H2 model reduction through dilated linear matrix inequalities
DEFF Research Database (Denmark)
Adegas, Fabiano Daher; Stoustrup, Jakob
2012-01-01
This paper presents sufficient dilated linear matrix inequality (LMI) conditions for the $H_{\infty}$ and $H_{2}$ model reduction problem. A special structure of the auxiliary (slack) variables allows the original model of order $n$ to be reduced to an order $r=n/s$, where $n,r,s \in \mathbb{N}$. When the reduced-order model does not satisfactorily approximate the original system, an iterative algorithm based on dilated LMIs is proposed to significantly improve the approximation bound. The effectiveness of the method is assessed by numerical experiments. The method is also applied to the $H_2$ order reduction of a flexible wind turbine ...
Hwang, Danny P.
1999-01-01
A new turbulent skin friction reduction technology, called the microblowing technique, has been tested in supersonic flow (Mach number 1.9) on specially designed porous plates with microholes. The skin friction was measured directly by a force balance, and the boundary layer development was measured by a total pressure rake at the trailing edge of a test plate. The free stream Reynolds number was 1.0×10^6 per meter. The turbulent skin friction coefficient ratios (C(sub f)/C(sub f0)) of seven porous plates are given in this report. Test results showed that the microblowing technique could reduce the turbulent skin friction in supersonic flow (up to 90 percent below the solid flat plate value, which was even greater than in subsonic flow).
Kokiopoulou, Effrosyni; Saad, Yousef
2007-12-01
This paper considers the problem of dimensionality reduction by orthogonal projection techniques. The main feature of the proposed techniques is that they attempt to preserve both the intrinsic neighborhood geometry of the data samples and the global geometry. In particular we propose a method, named Orthogonal Neighborhood Preserving Projections, which works by first building an "affinity" graph for the data, in a way that is similar to the method of Locally Linear Embedding (LLE). However, in contrast with the standard LLE where the mapping between the input and the reduced spaces is implicit, ONPP employs an explicit linear mapping between the two. As a result, handling new data samples becomes straightforward, as this amounts to a simple linear transformation. We show how to define kernel variants of ONPP, as well as how to apply the method in a supervised setting. Numerical experiments are reported to illustrate the performance of ONPP and to compare it with a few competing methods.
PV O&M Cost Model and Cost Reduction
Energy Technology Data Exchange (ETDEWEB)
Walker, Andy
2017-03-15
This is a presentation on the PV O&M cost model and cost reduction for the annual Photovoltaic Reliability Workshop (2017), covering the estimation of PV O&M costs, polynomial expansion, and the implementation of Net Present Value (NPV) and reserve accounts in cost models.
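The NPV and reserve-account calculations mentioned can be sketched as follows; the cost, rate, and horizon figures are placeholders, not values from the presentation:

```python
def npv_om(annual_cost, discount_rate, escalation, years):
    """Net present value of an annual O&M cost stream that escalates
    each year and is discounted back to year zero."""
    return sum(annual_cost * (1 + escalation) ** (t - 1) / (1 + discount_rate) ** t
               for t in range(1, years + 1))

def level_reserve(npv, discount_rate, years):
    """Level annual deposit into a reserve account that funds that NPV:
    the annuity payment against the same discount rate."""
    annuity_factor = discount_rate / (1 - (1 + discount_rate) ** -years)
    return npv * annuity_factor

# Hypothetical figures: $20/kW-yr O&M, 6% discount, 2% escalation, 25 years.
npv = npv_om(annual_cost=20.0, discount_rate=0.06, escalation=0.02, years=25)
deposit = level_reserve(npv, 0.06, 25)
print(round(npv, 2), round(deposit, 2))
```

The reserve deposit is the level payment an owner would set aside each year so that, at the chosen discount rate, the account exactly covers the escalating O&M stream.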
The effect of cognitive modelling in the reduction of impulsive ...
African Journals Online (AJOL)
The study investigated the effect of cognitive modeling in the reduction of impulsive behaviour among primary school children. A total of twenty impulsive underachieving participants were randomly assigned to cognitive modeling and control groups. Different instruments comprising Impulsiveness Questionnaire for Children ...
Identification of differences in health impact modelling of salt reduction
M.A.H. Hendriksen (Marieke); J.M. Geleijnse (Marianne); Van Raaij, J.M.A. (Joop M. A.); F.P. Cappuccio (Francesco); Cobiac, L.C. (Linda C.); Scarborough, P. (Peter); W.J. Nusselder (Wilma); Jaccard, A. (Abbygail); H.C. Boshuizen (Hendriek)
2017-01-01
We examined whether specific input data and assumptions explain outcome differences in otherwise comparable health impact assessment models. Seven population health models estimating the impact of salt reduction on morbidity and mortality in western populations were compared on four sets
Identification of differences in health impact modelling of salt reduction
Hendriksen, Marieke A.H.; Geleijnse, Johanna M.; Raaij, Van Joop M.A.; Cappuccio, Francesco P.; Cobiac, Linda C.; Scarborough, Peter; Nusselder, Wilma J.; Jaccard, Abbygail; Boshuizen, Hendriek C.
2017-01-01
We examined whether specific input data and assumptions explain outcome differences in otherwise comparable health impact assessment models. Seven population health models estimating the impact of salt reduction on morbidity and mortality in western populations were compared on four sets of key
Identification of differences in health impact modelling of salt reduction.
Hendriksen, Marieke A H; Geleijnse, Johanna M; van Raaij, Joop M A; Cappuccio, Francesco P; Cobiac, Linda C; Scarborough, Peter; Nusselder, Wilma J; Jaccard, Abbygail; Boshuizen, Hendriek C
2017-01-01
We examined whether specific input data and assumptions explain outcome differences in otherwise comparable health impact assessment models. Seven population health models estimating the impact of salt reduction on morbidity and mortality in western populations were compared on four sets of key
Partial-Order Reduction for GPU Model Checking
Neele, T.; Wijs, A.; Bosnacki, D.; van de Pol, Jan Cornelis; Artho, C; Legay, A.; Peled, D.
2016-01-01
Model checking using GPUs has seen increased popularity over the last few years. Because GPUs have a limited amount of memory, only small to medium-sized systems can be verified. For on-the-fly explicit-state model checking, we improve memory efficiency by applying partial-order reduction. We propose
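The state-space savings that partial-order reduction buys can be illustrated with a toy two-process system; this is a sketch with a deliberately simplified, hypothetical ample-set rule (real POR implementations must also check cycle and visibility conditions):

```python
# Toy two-process system: each process performs N independent local steps.
# A state is (pc1, pc2); the full state space has (N+1)**2 states.
N = 4

def successors(state, reduced):
    pc1, pc2 = state
    moves = []
    if pc1 < N:
        moves.append((pc1 + 1, pc2))
    if pc2 < N:
        moves.append((pc1, pc2 + 1))
    # Hypothetical ample-set rule: because the two steps are independent
    # and invisible, exploring a single representative order is sound here.
    if reduced and len(moves) > 1:
        moves = moves[:1]
    return moves

def explore(reduced):
    seen, stack = {(0, 0)}, [(0, 0)]
    while stack:
        for t in successors(stack.pop(), reduced):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return len(seen)

full = explore(reduced=False)   # 25 states: every interleaving
por = explore(reduced=True)     # 9 states: one canonical order
print(full, por)
```

The reduced search visits 2N+1 states instead of (N+1)^2, which is exactly the kind of memory saving that matters on a GPU.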
Remarks on Dimensional Reduction of Multidimensional Cosmological Models
Günther, Uwe; Zhuk, Alexander
2006-02-01
Multidimensional cosmological models with factorizable geometry and their dimensional reduction to effective four-dimensional theories are analyzed for their sensitivity to different scalings. It is shown that an incorrect gauging of the effective four-dimensional gravitational constant within the dimensional reduction results in an incorrect rescaling of the cosmological constant and the gravexciton/radion masses. The relationship between the effective gravitational constants of theories with different dimensions is discussed for setups where the lower-dimensional theory results via dimensional reduction from the higher-dimensional one and where the compactified space components vary dynamically.
Sivakumar, Brahman S; Wong, Peter; Dick, Charles G; Steer, Richard A; Tetsworth, Kevin
2014-10-01
To highlight a technique combining fluoroscopy and arthroscopy to aid percutaneous reduction and internal fixation of selected displaced intra-articular calcaneal fractures, assess outcome scores, and compare this method with other previously reported percutaneous methods. Retrospective review of all patients treated by this technique between June 2009 and June 2012. A tertiary care center located in Brisbane, Queensland, Australia. Thirteen consecutive patients were treated by this method during this period. All patients had a minimum of 13 months follow-up and were available for radiological checks and assessment of complications; functional outcome scores were available for 9 patients. The patient was placed in a lateral decubitus position. Reduction was achieved with the aid of both intraoperative fluoroscopy and subtalar arthroscopy and held with cannulated screws in orthogonal planes. The patient was mobilized non-weight bearing for 10 weeks. Outcomes measured were improvement in Bohler angle, postoperative complications, and 3 functional outcome scores (American Orthopaedic Foot and Ankle Society ankle-hindfoot score, Foot Function Index, and Calcaneal Fracture Scoring System). Mean postoperative improvement in Bohler angle was 18.3 degrees, with subsidence of 1.7 degrees. Functional outcome scores compared favorably with the prior literature. Based on available postoperative computed tomography scans (8/13), maximal residual articular incongruity measured 2 mm or less in 87.5% of our cases. Early results indicate that this technique, when combined with careful patient selection, offers a valid therapeutic option for the treatment of a distinct subset of displaced intra-articular calcaneal fractures, with diminished risk of wound complications. Large, prospective multicenter studies will be necessary to better evaluate the potential benefits of this technique. Level IV Therapeutic. See Instructions for Authors for a complete description of levels of evidence.
A LATIN-based model reduction approach for the simulation of cycling damage
Bhattacharyya, Mainak; Fau, Amelie; Nackenhorst, Udo; Néron, David; Ladevèze, Pierre
2017-11-01
The objective of this article is to introduce a new method including model order reduction for the life prediction of structures subjected to cyclic damage. Contrary to classical incremental schemes for damage computation, a non-incremental technique, the LATIN method, is used herein as a solution framework. This approach makes it possible to introduce a PGD model reduction technique, which leads to a drastic reduction of the computational cost. The proposed framework is exemplified for structures subjected to cyclic loading, where damage is considered to be isotropic and micro-defect closure effects are taken into account. A difficulty in the use of the LATIN method arises from the state laws, which cannot be transformed into linear relations through an internal variable transformation. A specific treatment of this issue is introduced in this work.
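The PGD separation used inside the LATIN solver is beyond a short snippet, but the payoff of projecting onto a low-order basis can be sketched with the closely related snapshot-POD idea; the snapshot matrix below is synthetic stand-in data, not the damage model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic snapshot matrix: 200-DOF "solutions" at 50 load steps, built
# to have an effective rank of 3 plus small noise (stand-in data only).
modes = rng.standard_normal((200, 3))
S = modes @ rng.standard_normal((3, 50))
S += 1e-6 * rng.standard_normal(S.shape)

# POD basis: leading left singular vectors of the snapshot matrix.
U, sv, _ = np.linalg.svd(S, full_matrices=False)
V = U[:, :3]                    # 200 unknowns -> 3 reduced coordinates

# Project a new solution (lying on the same manifold) and reconstruct.
x = modes @ rng.standard_normal(3)
x_rec = V @ (V.T @ x)
err = np.linalg.norm(x - x_rec) / np.linalg.norm(x)
print(err)                      # tiny: the basis captures the solution set
```

Solving in the 3-dimensional reduced coordinates instead of the 200 original unknowns is where the drastic cost reduction comes from.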
Wang, Xinsheng; Wang, Chenxu; Yu, Mingyan
2016-07-01
In this paper, we propose a generalised sub-block structure-preserving interconnect model order reduction (MOR) technique based on a swarm intelligence method, namely particle swarm optimisation (PSO). The swarm-intelligence-based structure-preserving MOR can serve as a standard against which different structure-preserving interconnect MOR methods are compared. In the proposed technique, the PSO method is used to predict the unknown elements of a structure-preserving reduced-order model of interconnect circuits. The prediction is based on minimising the difference in transfer function between the original full-order and the desired reduced-order system while maintaining the full-order structure in the reduced-order model. The proposed swarm-intelligence-based structure-preserving MOR method is compared with the published SPRIM structure-preserving MOR technique. Simulation and synthesis results verify the accuracy and validity of the new structure-preserving MOR technique.
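The core loop of a PSO-driven reduction can be sketched in a few lines: a minimal global-best PSO fits the parameters of a hypothetical 2nd-order model to a synthetic 4th-order transfer function. The system, frequency band, and PSO coefficients are illustrative choices, not the paper's interconnect circuits or settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Full model: synthetic stable 4th-order transfer function (illustrative).
poles = np.array([-1.0, -2.0, -30.0, -40.0])
def H_full(s):
    return 1.0 / np.prod(s[:, None] - poles[None, :], axis=1)

w = np.logspace(-1, 1, 60)              # frequency band of the dominant poles
s = 1j * w
target = H_full(s)

# Hypothetical reduced model: 2nd order with parameters p = (b0, a1, a0).
def H_red(p, s):
    b0, a1, a0 = p
    return b0 / (s**2 + a1 * s + a0)

def cost(p):
    return np.sum(np.abs(H_red(p, s) - target) ** 2)

# Minimal global-best PSO with inertia; standard coefficient choices.
n_particles, iters = 30, 200
pos = rng.uniform(0.01, 5.0, (n_particles, 3))
vel = np.zeros_like(pos)
pbest, pbest_c = pos.copy(), np.array([cost(p) for p in pos])
g = pbest[np.argmin(pbest_c)].copy()
init_best = pbest_c.min()
for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    c = np.array([cost(p) for p in pos])
    better = c < pbest_c
    pbest[better], pbest_c[better] = pos[better], c[better]
    g = pbest[np.argmin(pbest_c)].copy()
print(g, cost(g))                        # best 2nd-order fit found
```

The paper additionally constrains the search so that the reduced model keeps the sub-block structure of the full interconnect model; the sketch above omits that constraint.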
Feldman, Evan M; Koshy, John C; Chike-Obi, Chuma J; Hatef, Daniel A; Bullocks, Jamal M; Stal, Samuel
2010-09-01
Nasal airway obstruction is a frequently encountered problem, often secondary to inferior turbinate hypertrophy. Medical treatment can be beneficial but is inadequate for many individuals. For these refractory cases, surgical intervention plays a key role in management. The authors evaluate current trends in the surgical management of inferior turbinate hypertrophy and review the senior author's (SS) preferred technique. A questionnaire was devised and sent to members of the American Society for Aesthetic Plastic Surgery (ASAPS) to determine their preferred methods for assessment and treatment of inferior turbinate hypertrophy. One hundred and twenty-seven physicians responded to the survey, with 85% of surveys completed fully. Of the responses, 117 (92%) respondents were trained solely in plastic surgery and 108 (86.4%) were in private practice. Roughly 81.6% of respondents employ a clinical exam alone to evaluate for airway issues. The most commonly preferred techniques to treat inferior turbinate hypertrophy were limited turbinate excision (61.9%) and turbinate outfracture (35.2%). Based on the results of this study, it appears that limited turbinate excision and turbinate outfracture are the techniques most commonly used in private practice by plastic surgeons. Newer techniques such as radiofrequency coblation have yet to become prevalent in terms of application, despite their current prevalence within the medical literature. The optimal method of management for inferior turbinate reduction should take into consideration the surgeon's skill and preference, access to surgical instruments, mode of anesthesia, and the current literature.
Order reduction for a model of marine bacteriophage evolution
Pagliarini, Silvia; Korobeinikov, Andrei
2017-02-01
A typical mechanistic model of viral evolution necessarily includes several time scales, which can differ by orders of magnitude. Such a diversity of time scales makes analysis of these models difficult, so reducing the order of the model is highly desirable. A typical approach applied to such slow-fast (or singularly perturbed) systems is the time-scale separation technique, and constructing the so-called quasi-steady-state approximation is the usual first step in applying it. While this technique is commonly applied, in some cases its straightforward application can lead to unsatisfactory results. In this paper we construct the quasi-steady-state approximation for a model of evolution of marine bacteriophages based on the Beretta-Kuang model. We show that for this particular model the quasi-steady-state approximation is able to produce only a qualitative but not a quantitative fit.
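The quasi-steady-state step can be seen on a toy slow-fast system (illustrative only, not the Beretta-Kuang model): the fast equation is set to equilibrium, turning a stiff two-variable system into a single slow equation:

```python
# Toy slow-fast system (illustrative only, not the Beretta-Kuang model):
#   dx/dt       = -x*y      (slow variable)
#   eps * dy/dt = x - y     (fast variable, relaxes onto y ~ x)
eps, dt, T = 0.005, 5e-5, 2.0
x, y = 1.0, 0.0
for _ in range(int(T / dt)):    # explicit Euler on the full stiff system
    x, y = x + dt * (-x * y), y + dt * (x - y) / eps
x_full = x

# Quasi-steady-state approximation: set the fast equation to equilibrium,
# y = x, leaving dx/dt = -x**2, which solves to x(t) = x0 / (1 + x0*t).
x_qssa = 1.0 / (1.0 + T)
print(x_full, x_qssa)           # agree to O(eps)
```

The small time step is forced by the stiffness of the full system; the reduced equation needs no such resolution, which is the practical gain of the reduction.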
Lopez, Israel; Sarigul-Klijn, Nesrin
2009-10-01
Vibration- and acoustic-based health-monitoring techniques are used in the literature to monitor structural health under dynamic environments. In this paper, we propose a damage detection and monitoring method based on a distance similarity matrix of dimensionally reduced data from which redundancy has been removed. The matrix similarity approach is generic in nature and has the capability of multiscale representation of datasets. To extract damage-sensitive features, dimensional reduction techniques are applied and compared. An ensemble method combining the outputs of the dimensional reduction features is presented and applied to two case studies. The results support why ensembles can often perform better than any single feature-extraction method. For the first case study, aeroacoustic datasets are collected from controlled experimental tests of a subscale wing structure with known damage. For the second case study, a vibration experiment is used for abrupt change detection and tracking. The results of the two case studies demonstrate that the proposed method is very effective in detecting abrupt changes and that the ensemble method developed here can be used for deterioration tracking.
Directory of Open Access Journals (Sweden)
Bradford S. Waddell
2016-03-01
Dislocation of the hip is a well-described event that occurs in conjunction with high-energy trauma or postoperatively after total hip arthroplasty. Bigelow first described closed treatment of a dislocated hip in 1870, and in the last decade many reduction techniques have been proposed. In this article, we review all described techniques for the reduction of hip dislocation while focusing on physician safety. Furthermore, we introduce a modified technique for the reduction of posterior hip dislocation that allows the physician to adhere to the back-safety principles set forth by the Occupational Safety and Health Administration.
Systematic reduction of a detailed atrial myocyte model
Lombardo, Daniel M.; Rappel, Wouter-Jan
2017-09-01
Cardiac arrhythmias are a major health concern and often involve poorly understood mechanisms. Mathematical modeling is able to provide insights into these mechanisms which might result in better treatment options. A key element of this modeling is a description of the electrophysiological properties of cardiac cells. A number of electrophysiological models have been developed, ranging from highly detailed and complex models, containing numerous parameters and variables, to simplified models in which variables and parameters no longer directly correspond to electrophysiological quantities. In this study, we present a systematic reduction of the complexity of the detailed model of Koivumaki et al. using the recently developed manifold boundary approximation method. We reduce the original model, containing 42 variables and 37 parameters, to a model with only 11 variables and 5 parameters and show that this reduced model can accurately reproduce the action potential shape and restitution curve of the original model. The reduced model contains only five currents and all variables and parameters can be directly linked to electrophysiological quantities. Due to its reduction in complexity, simulation times of our model are decreased more than three-fold. Furthermore, fitting the reduced model to clinical data is much more efficient, a potentially important step towards patient-specific modeling.
Directory of Open Access Journals (Sweden)
Ferreira, Gustavo Pacheco Martins; Pires, Paulo Randal; Portugal, André Lopes; Schneiter, Henrique de Gouvêa
2014-04-01
OBJECTIVE: to demonstrate a surgical technique for treating neck fractures of the fifth metacarpal, by means of reduction through intra-focal manipulation and percutaneous fixation using Kirschner wires, with the aims of making it easier to achieve and maintain the reduction during the operation and enabling reduction of these fractures even if a fibrous callus has formed. METHODS: a series of ten patients with neck fractures of the fifth metacarpal presenting palmar angles greater than 30° underwent the surgical technique described, and their results were evaluated through postoperative radiographs and clinical examinations. RESULTS: all the patients achieved reductions that were close to anatomical and evolved to consolidation of the fracture in the position obtained. CONCLUSION: the surgical technique described is effective, easy to carry out, minimally invasive and low-cost, thereby enabling adequate clinical and radiographic reduction, even in subacute fractures already presenting a fibrous callus.
Choi, Hyun Ho; Lee, Ju Hwan; Kim, Sung Min; Park, Sung Yun
2015-01-01
Here, the speckle noise in ultrasonic images is removed using an image fusion-based denoising method. To optimize the denoising performance, each discrete wavelet transform (DWT) and filtering technique was analyzed and compared, and the performances were compared in order to derive the optimal input conditions. To evaluate the speckle-noise removal performance, an image fusion algorithm was applied to the ultrasound images and comparatively analyzed against the original image without the algorithm. As a result, applying the DWT and filtering techniques caused information loss and altered noise characteristics, and did not deliver the best noise reduction performance. Conversely, an image fusion method applying the SRAD-original conditions preserved the key information in the original image while the speckle noise was removed. Based on these characteristics, the SRAD-original input conditions gave the best denoising performance with the ultrasound images. From this study, the best denoising technique proposed on the basis of the results was confirmed to have high potential for clinical application.
International Nuclear Information System (INIS)
Ghirardi, G.C.; Pearle, P.
1991-02-01
The problem of getting a relativistic generalization of the CSL dynamical reduction model, which has been presented in part I, is discussed. In so doing we have the opportunity to introduce the idea of a stochastically invariant theory. The theoretical model we present, that satisfies this kind of invariance requirement, offers us the possibility to reconsider, from a new point of view, some conceptually relevant issues such as nonlocality, the legitimacy of attributing elements of physical reality to physical systems and the problem of establishing causal relations between physical events. (author). Refs, 3 figs
Changela, Kinesh; Ofori, Emmanuel; Duddempudi, Sushil; Anand, Sury; Singhal, Shashideep
2016-02-25
To investigate the techniques and efficacy of peroral endoscopic reduction of dilated gastrojejunal anastomosis after bariatric surgery. An extensive English-language literature search was conducted using PubMed, MEDLINE, Medscape and Google to identify peer-reviewed original and review articles using the keywords "bariatric endoscopic suturing", "overstitch bariatric surgery", "endoscopic anastomotic reduction", "bariatric surgery", "gastric bypass", "obesity", "weight loss". We identified articles describing the technical feasibility, safety, efficacy, and adverse outcomes of the overstitch endoscopic suturing system for transoral outlet reduction in patients with weight regain following Roux-en-Y gastric bypass (RYGB). All studies that contained material applicable to the topic were considered. Retrieved peer-reviewed original and review articles were reviewed by the authors and the data extracted using a standardized collection tool. Data were analyzed as percentages of events. Four original published articles which met our search criteria were pooled. The total number of cases was fifty-nine, with a mean age of 46.75 years (34-63 years). Eight of the patients included in those studies were males (13.6%) and fifty-one were females (86.4%). The mean time elapsed since the primary bypass surgery was 5.75 years. The average pre-procedure body mass index (BMI) was 38.68 (27.5-48.5). Mean body weight regained post-RYGB surgery was 13.4 kg from the post-RYGB nadir. The average pouch length at the initial upper endoscopy was 5.75 cm (2-14 cm). The pre-intervention anastomotic diameter averaged 24.85 mm (8-40 mm). Average procedure time was 74 min (50-164 min). Mean post-intervention anastomotic diameter was 8 mm (3-15 mm). Weight reduction at 3 to 4 mo post revision averaged 10.1 kg. Average overall post-revision BMI was recorded at 37.7. The combined technical and clinical success rate was 94.9% (56
Waste Reduction Model (WARM) Resources for Small Businesses and Organizations
This page provides a brief overview of how EPA’s Waste Reduction Model (WARM) can be used by small businesses and organizations. The page includes a brief summary of uses of WARM for the audience and links to other resources.
Pole solution in six dimensions as a dimensional reduction model
Ichinose, Shoichi
2002-01-01
A solution with the pole configuration in six dimensions is analyzed. It is a dimensional reduction model of Randall-Sundrum type. The soliton configuration is induced by the bulk Higgs mechanism. The boundary condition is systematically solved up to the 6th order. The Riemann curvature is finite everywhere.
Model Reduction by Moment Matching for Linear Switched Systems
DEFF Research Database (Denmark)
Bastug, Mert; Petreczky, Mihaly; Wisniewski, Rafal
2014-01-01
A moment-matching method for the model reduction of linear switched systems (LSSs) is developed. The method is based upon a partial realization theory of LSSs and is similar to the Krylov subspace methods used for moment matching for linear systems. The results are illustrated by numerical...
Data Reduction and Image Reconstruction Techniques for Non-redundant Masking
Sallum, S.; Eisner, J.
2017-11-01
The technique of non-redundant masking (NRM) transforms a conventional telescope into an interferometric array. In practice, this provides a much better constrained point-spread function than a filled aperture and thus higher resolution than traditional imaging methods. Here, we describe an NRM data reduction pipeline. We discuss strategies for NRM observations regarding dithering patterns and calibrator selection. We describe relevant image calibrations and use example Large Binocular Telescope data sets to show their effects on the scatter in the Fourier measurements. We also describe the various ways to calculate Fourier quantities, and discuss different calibration strategies. We present the results of image reconstructions from simulated observations where we adjust prior images, weighting schemes, and error bar estimation. We compare two imaging algorithms and discuss implications for reconstructing images from real observations. Finally, we explore how the current state of the art compares to next-generation Extremely Large Telescopes.
Alexander, Tiffaney Miller
2017-01-01
Research results have shown that more than half of aviation, aerospace and aeronautics mishaps and incidents are attributed to human error. As part of safety work within space exploration ground processing operations, the underlying contributors to and causes of human error must be identified and classified in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factors Analysis and Classification System (HFACS) as analysis tools to identify contributing factors and their impact on human error events, and to predict the human error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of launch-vehicle-related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.
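The HEP calculation at the heart of HEART is a simple multiplicative formula, sketched below. The numeric values are illustrative only, not taken from the published HEART generic-task or error-producing-condition tables:

```python
def heart_hep(nominal_hep, epcs):
    """HEART combination: multiply the generic-task nominal HEP by each
    error-producing condition's assessed effect, (max_effect - 1) * p + 1,
    where p is the assessor's proportion-of-affect in [0, 1]."""
    hep = nominal_hep
    for max_effect, proportion in epcs:
        hep *= (max_effect - 1.0) * proportion + 1.0
    return min(hep, 1.0)        # a probability is capped at 1

# Illustrative numbers only (not values from the published HEART tables):
# a routine task with two error-producing conditions partially in play.
hep = heart_hep(0.003, [(10.0, 0.4), (3.0, 0.2)])
print(hep)                      # 0.003 * 4.6 * 1.4, about 0.0193
```

In the framework described above, HFACS supplies the classification of contributing factors, and a calculation of this shape turns the assessed conditions into a predicted HEP.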
A Fourier dimensionality reduction model for big data interferometric imaging
Vijay Kartik, S.; Carrillo, Rafael E.; Thiran, Jean-Philippe; Wiaux, Yves
2017-06-01
Data dimensionality reduction in radio interferometry can provide savings of computational resources for image reconstruction through reduced memory footprints and lighter computations per iteration, which is important for the scalability of imaging methods to the big data setting of the next-generation telescopes. This article sheds new light on dimensionality reduction from the perspective of the compressed sensing theory and studies its interplay with imaging algorithms designed in the context of convex optimization. We propose a post-gridding linear data embedding to the space spanned by the left singular vectors of the measurement operator, providing a dimensionality reduction below image size. This embedding preserves the null space of the measurement operator and hence its sampling properties are also preserved in light of the compressed sensing theory. We show that this can be approximated by first computing the dirty image and then applying a weighted subsampled discrete Fourier transform to obtain the final reduced data vector. This Fourier dimensionality reduction model ensures a fast implementation of the full measurement operator, essential for any iterative image reconstruction method. The proposed reduction also preserves the independent and identically distributed Gaussian properties of the original measurement noise. For convex optimization-based imaging algorithms, this is key to justify the use of the standard ℓ2-norm as the data fidelity term. Our simulations confirm that this dimensionality reduction approach can be leveraged by convex optimization algorithms with no loss in imaging quality relative to reconstructing the image from the complete visibility data set. Reconstruction results in simulation settings with no direction dependent effects or calibration errors show promising performance of the proposed dimensionality reduction. Further tests on real data are planned as an extension of the current work. matlab code implementing the
Coping with Complexity Model Reduction and Data Analysis
Gorban, Alexander N
2011-01-01
This volume contains the extended version of selected talks given at the international research workshop 'Coping with Complexity: Model Reduction and Data Analysis', Ambleside, UK, August 31 - September 4, 2009. This book is deliberately broad in scope and aims at promoting new ideas and methodological perspectives. The topics of the chapters range from theoretical analysis of complex and multiscale mathematical models to applications in e.g., fluid dynamics and chemical kinetics.
Biological modelling of pelvic radiotherapy. Potential gains from conformal techniques
Energy Technology Data Exchange (ETDEWEB)
Fenwick, J.D
1999-07-01
Models have been developed which describe the dose and volume dependences of various long-term rectal complications of radiotherapy; assumptions underlying the models are consistent with clinical and experimental descriptions of complication pathogenesis. In particular, rectal bleeding - perhaps the most common complication of modern external beam prostate radiotherapy, and which might be viewed as its principal dose-limiting toxicity - has been modelled as a parallel-type complication. Rectal dose-surface histograms have been calculated for 79 patients treated, in the course of the Royal Marsden trial of pelvic conformal radiotherapy, for prostate cancer using conformal or conventional techniques; rectal bleeding data are also available for these patients. The maximum-likelihood fit of the parallel bleeding model to the dose-surface histograms and complication data shows that the complication status of the patients analysed (most of whom received reference point doses of 64 Gy) was significantly dependent on, and almost linearly proportional to, the volume of highly dosed rectal wall: a 1% decrease in the fraction of rectal wall (outlined over an 11 cm rectal length) receiving a dose of 58 Gy or more led to a reduction in the (RTOG) grade 1, 2, 3 bleeding rate of about 1.1% - 95% confidence interval [0.04%, 2.2%]. The parallel model fit to the bleeding data is only marginally biased by uncertainties in the calculated dose-surface histograms (due to setup errors, rectal wall movement and absolute rectal surface area variability), causing the gradient of the observed volume-response curve to be slightly lower than that which would be seen in the absence of these uncertainties. An analysis of published complication data supports these single-centre findings and indicates that the reductions in highly dosed rectal wall volumes obtainable using conformal radiotherapy techniques can be exploited to allow escalation of the dose delivered to the prostate target volume, the
A model with chaotic scattering and reduction of wave packets
Guarneri, Italo
2018-03-01
Some variants of Smilansky’s model of a particle interacting with harmonic oscillators are examined in the framework of scattering theory. A dynamical proof is given of the existence of wave operators. Analysis of a classical version of the model provides a transparent picture for the spectral transition to which the quantum model owes its renown, and for the underlying dynamical behaviour. The model is thereby classified as an extreme case of chaotic scattering, with aspects related to wave packet reduction and irreversibility.
Skin reduction technique for correction of lateral deviation of the erect straight penis.
Shaeer, Osama
2014-07-01
Lateral deviation of the erect straight penis (LDESP) refers to a penis that, despite being straight in the erect state, points laterally, yet can be directed forward manually without the use of force. While LDESP should not impose a negative impact on sexual function, it may have a negative cosmetic impact. This work describes the skin reduction technique (SRT) for correction of LDESP. Counseling was offered to males with LDESP after excluding other abnormalities. Surgery was performed in cases of failed counseling. In the erect state, the degree and direction of LDESP were noted. Skin at the base of the penis on the side contralateral to the LDESP was excised and the edges approximated to correct the deviation. Further excision was repeated if needed. The incision was closed in two layers. Long-term efficacy of SRT was the main outcome measure. Of 183 males with LDESP, 66.7% were not sexually active. Counseling relieved 91.8% of cases. Fifteen patients insisted on surgery, mostly from among the sexually active, where the complaint was mutual from the patient and partner. SRT resulted in full correction of the angle of erection in 12 of 15 cases. Two had minimal recurrence, and one had major recurrence indicating re-SRT. LDESP is a more common complaint among those who have not experienced a coital relationship, and is mostly relieved by counseling. However, sexually active males with this complaint are more difficult to relieve by counseling. A minority of patients may opt for surgical correction. SRT achieves a forward erection in such patients, is minimally invasive, and is relatively safe, provided the angle of erection can be corrected manually without force. Shaeer O. Skin reduction technique for correction of lateral deviation of the erect straight penis. © 2014 International Society for Sexual Medicine.
Xie, Haozhe; Li, Jie; Zhang, Qiaosheng; Wang, Yadong
2016-12-01
The Random Projection (RP) technique has been widely applied in many scenarios because it can reduce high-dimensional features into a low-dimensional space within a short time and meet the need for real-time analysis of massive data. There is an urgent need for dimensionality reduction with the fast increase of big genomics data. However, the performance of RP alone is usually low. We attempt to improve the classification accuracy of RP by combining it with other dimensionality reduction methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Feature Selection (FS). We compared the classification accuracy and running time of different combination methods on three microarray datasets and a simulation dataset. Experimental results show a remarkable improvement of 14.77% in classification accuracy for FS followed by RP compared to RP alone on the BC-TCGA dataset. LDA followed by RP also helps RP to yield a more discriminative subspace, with an increase of 13.65% in classification accuracy on the same dataset. FS followed by RP outperforms other combination methods in classification accuracy on most of the datasets. Copyright © 2016 Elsevier Ltd. All rights reserved.
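The bare RP step these combinations build on is a single matrix multiply, and its justification is the Johnson-Lindenstrauss distance-preservation property. A minimal sketch with synthetic data (illustrative dimensions, not the microarray datasets of the study):

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic "expression" data: 40 samples in 5000 dimensions.
X = rng.standard_normal((40, 5000))

# Gaussian random projection: entries N(0, 1/k) preserve pairwise squared
# distances in expectation (the Johnson-Lindenstrauss property RP relies on).
k = 1000
R = rng.standard_normal((5000, k)) / np.sqrt(k)
Xp = X @ R                      # 40 x 1000 reduced representation

def sq_dists(M):
    sq = np.sum(M**2, axis=1)
    return sq[:, None] + sq[None, :] - 2.0 * (M @ M.T)

mask = ~np.eye(40, dtype=bool)
ratio = sq_dists(Xp)[mask] / sq_dists(X)[mask]
print(ratio.min(), ratio.max()) # close to 1: distances survive the projection
```

The combinations studied in the paper simply insert PCA, LDA, or feature selection before this projection step, so that the random matrix acts on an already more discriminative subspace.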
Kocadal, Onur; Yucel, Mehmet; Pepe, Murad; Aksahin, Ertugrul; Aktekin, Cem Nuri
2016-12-01
Among the most important predictors of functional results of treatment of syndesmotic injuries is the accurate restoration of the syndesmotic space. The purpose of this study was to investigate the reduction performance of screw fixation and suture-button techniques using images obtained from computed tomography (CT) scans. Patients at or below 65 years who were treated with screw or suture-button fixation for syndesmotic injuries accompanying ankle fractures between January 2012 and March 2015 were retrospectively reviewed in our regional trauma unit. A total of 52 patients were included in the present study. Fixation was performed with syndesmotic screws in 26 patients and suture-button fixation in 26 patients. The patients were divided into 2 groups according to the fixation methods. Postoperative CT scans were used for radiologic evaluation. Four parameters (anteroposterior reduction, rotational reduction, the cross-sectional syndesmotic area, and the distal tibiofibular volumes) were taken into consideration for the radiologic assessment. Functional evaluation of patients was done using the American Orthopaedic Foot & Ankle Society (AOFAS) ankle-hindfoot scale at the final follow-up. The mean follow-up period was 16.7 ± 11.0 months, and the mean age was 44.1 ± 13.2. There was a statistically significant decrease in the degree of fibular rotation (P = .03) and an increase in the upper syndesmotic area (P = .006) compared with the contralateral limb in the screw fixation group. In the suture-button fixation group, there was a statistically significant increase in the lower syndesmotic area (P = .02) and distal tibiofibular volumes (P = .04) compared with the contralateral limbs. The mean AOFAS scores were 88.4 ± 9.2 and 86.1 ± 14.0 in the suture-button fixation and screw fixation group, respectively. There was no statistically significant difference in the functional ankle joint scores between the groups. Although the functional outcomes were similar, the
Chukalla, A. D.; Krol, M. S.; Hoekstra, A. Y.
2015-12-01
Consumptive water footprint (WF) reduction in irrigated crop production is essential given the increasing competition for freshwater. This study explores the effect of three management practices on the soil water balance and plant growth, specifically on evapotranspiration (ET) and yield (Y) and thus the consumptive WF of crops (ET / Y). The management practices are four irrigation techniques (furrow, sprinkler, drip and subsurface drip (SSD)), four irrigation strategies (full (FI), deficit (DI), supplementary (SI) and no irrigation), and three mulching practices (no mulching, organic (OML) and synthetic (SML) mulching). Various cases were considered: arid, semi-arid, sub-humid and humid environments in Israel, Spain, Italy and the UK, respectively; wet, normal and dry years; three soil types (sand, sandy loam and silty clay loam); and three crops (maize, potato and tomato). The AquaCrop model and the global WF accounting standard were used to relate the management practices to effects on ET, Y and WF. For each management practice, the associated green, blue and total consumptive WF were compared to the reference case (furrow irrigation, full irrigation, no mulching). The average reduction in the consumptive WF is 8-10 % if we change from the reference to drip or SSD, 13 % when changing to OML, 17-18 % when moving to drip or SSD in combination with OML, and 28 % for drip or SSD in combination with SML. All the aforementioned reductions increase by one to a few per cent when moving from full to deficit irrigation. Reduction in overall consumptive WF always goes together with an increasing ratio of green to blue WF. The WF of growing a crop for a particular environment is smallest under DI, followed by FI, SI and rain-fed. Growing crops with sprinkler irrigation has the largest consumptive WF, followed by furrow, drip and SSD. Furrow irrigation has a smaller consumptive WF compared with sprinkler, even though the classical measure of "irrigation efficiency" for furrow
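As a back-of-the-envelope illustration of the accounting above (consumptive WF = ET / Y, split into green and blue components), here is a sketch with invented numbers; the ET, yield and case values are assumptions, not figures from the study:

```python
# Consumptive water footprint (WF) accounting sketch: WF = ET / Y,
# split into green (rainfall) and blue (irrigation) components.
# All numbers below are illustrative, not from the study.

def consumptive_wf(et_green_mm, et_blue_mm, yield_t_ha):
    """Return (green, blue, total) WF in m3 per tonne.

    1 mm of ET over 1 ha corresponds to 10 m3 of water.
    """
    green = 10.0 * et_green_mm / yield_t_ha
    blue = 10.0 * et_blue_mm / yield_t_ha
    return green, blue, green + blue

# Hypothetical reference case: furrow + full irrigation + no mulching.
g_ref, b_ref, t_ref = consumptive_wf(et_green_mm=250, et_blue_mm=350, yield_t_ha=10.0)
# Hypothetical alternative: subsurface drip + organic mulching.
g_alt, b_alt, t_alt = consumptive_wf(et_green_mm=240, et_blue_mm=260, yield_t_ha=10.2)

reduction_pct = 100.0 * (t_ref - t_alt) / t_ref
print(f"total WF: {t_ref:.0f} -> {t_alt:.0f} m3/t ({reduction_pct:.1f}% reduction)")
print(f"green share: {g_ref / t_ref:.2f} -> {g_alt / t_alt:.2f}")
```

Note how, as in the abstract, a reduction in total consumptive WF goes together with a larger green share when the blue (irrigation) component shrinks faster than the green one.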
Validation of variance reduction techniques in Mediso (SPIRIT DH-V) SPECT system by Monte Carlo
International Nuclear Information System (INIS)
Rodriguez Marrero, J. P.; Diaz Garcia, A.; Gomez Facenda, A.
2015-01-01
Monte Carlo simulation of nuclear medical imaging systems is a widely used method for reproducing their operation in a real clinical environment. There are several Single Photon Emission Computed Tomography (SPECT) systems in Cuba, so it is clearly necessary to introduce a reliable and fast simulation platform in order to obtain consistent image data that reproduce the original measurement conditions. To fulfill these requirements, the Monte Carlo platform GAMOS (Geant4-based Architecture for Medicine-Oriented Simulations) was used. Due to the size and complex configuration of the parallel-hole collimators in real clinical SPECT systems, Monte Carlo simulation usually consumes excessive time and computing resources. The main goal of the present work is to optimize the efficiency of the calculation by means of new GAMOS functionality. Two GAMOS variance reduction techniques were developed and validated to speed up the calculations. These procedures focus and limit the transport of gamma quanta inside the collimator. The obtained results were assessed experimentally on the Mediso (SPIRIT DH-V) SPECT system. The main quality-control parameters, such as sensitivity and spatial resolution, were determined. Differences of 4.6% in sensitivity and 8.7% in spatial resolution were reported against the manufacturer's values. Simulation time was decreased by up to a factor of 650. Using these techniques it was possible to perform several studies in almost 8 hours each. (Author)
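The collimator-specific GAMOS techniques are not reproduced here, but the weighting idea behind photon variance reduction can be illustrated with a deliberately simple toy: estimating transmission through a purely absorbing slab, comparing an analog estimator with survival biasing (implicit capture). All parameters are assumptions:

```python
# Generic variance-reduction illustration (not the GAMOS collimator
# techniques): photon transmission through a 1D purely absorbing slab.
import math
import random

MU = 1.0         # total attenuation coefficient (1/cm), illustrative
THICKNESS = 3.0  # slab thickness (cm); exact transmission is exp(-3)

def analog(n, rng):
    """Analog MC: each photon either crosses the slab (score 1) or is absorbed."""
    hits = 0
    for _ in range(n):
        path = -math.log(1.0 - rng.random()) / MU  # sampled free path
        if path >= THICKNESS:
            hits += 1
    return hits / n

def implicit_capture(n):
    """Survival biasing: every history survives, carrying weight exp(-mu*x).

    For this pure-absorption toy the per-history score is deterministic,
    i.e. a zero-variance estimator; production codes combine such weighting
    with splitting and Russian roulette for scattering problems.
    """
    w = math.exp(-MU * THICKNESS)
    return sum(w for _ in range(n)) / n

rng = random.Random(42)
exact = math.exp(-MU * THICKNESS)
est_analog = analog(20000, rng)
est_weighted = implicit_capture(20000)
print(f"exact {exact:.5f}  analog {est_analog:.5f}  weighted {est_weighted:.5f}")
```

The analog estimator's variance comes from the rare 0/1 transmission events; the weighted estimator spends every history on a useful score, which is the general mechanism by which variance reduction buys speed.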
Inter-slice Leakage Artifact Reduction Technique for Simultaneous Multi-Slice Acquisitions
Cauley, Stephen F.; Polimeni, Jonathan R.; Bhat, Himanshu; Wang, Dingxin; Wald, Lawrence L.; Setsompop, Kawin
2015-01-01
Purpose Controlled aliasing techniques for simultaneously acquired EPI slices have been shown to significantly increase the temporal efficiency for both diffusion-weighted imaging (DWI) and fMRI studies. The “slice-GRAPPA” (SG) method has been widely used to reconstruct such data. We investigate robust optimization techniques for SG to ensure image reconstruction accuracy through a reduction of leakage artifacts. Methods Split slice-GRAPPA (SP-SG) is proposed as an alternative kernel optimization method. The performance of SP-SG is compared to standard SG using data collected on a spherical phantom and in-vivo on two subjects at 3T. Slice accelerated and non-accelerated data were collected for a spin-echo diffusion weighted acquisition. Signal leakage metrics and time-series SNR were used to quantify the performance of the kernel fitting approaches. Results The SP-SG optimization strategy significantly reduces leakage artifacts for both phantom and in-vivo acquisitions. In addition, a significant boost in time-series SNR for in-vivo diffusion weighted acquisitions with in-plane 2× and slice 3× accelerations was observed with the SP-SG approach. Conclusion By minimizing the influence of leakage artifacts during the training of slice-GRAPPA kernels, we have significantly improved reconstruction accuracy. Our robust kernel fitting strategy should enable better reconstruction accuracy and higher slice-acceleration across many applications. PMID:23963964
Computational reduction techniques for numerical vibro-acoustic analysis of hearing aids
DEFF Research Database (Denmark)
Creixell Mediante, Ester
Numerical modelling is a key point for vibro-acoustic analysis and optimization of hearing aids. The great number of small components constituting the devices, and the strong structure-acoustic coupling of the system, make it a challenge to obtain accurate and computationally efficient models. In this thesis, several challenges encountered in the process of modelling and optimizing hearing aids are addressed. Firstly, a strategy for modelling the contacts between plastic parts for harmonic analysis is developed. Irregularities in the contact surfaces, inherent to the manufacturing process of the parts ... Furthermore, topology optimization techniques for structure-acoustic interaction problems are investigated with the aim of evaluating their applicability to the design of hearing aid parts. The strong fluid-structure interaction between the air and some of the thin, soft parts makes it necessary to include the effects ...
Space-time adaptive hierarchical model reduction for parabolic equations.
Perotto, Simona; Zilio, Alessandro
Surrogate solutions and surrogate models for complex problems in many fields of science and engineering represent an important recent research line towards the construction of the best trade-off between modeling reliability and computational efficiency. Among surrogate models, hierarchical model (HiMod) reduction provides an effective approach for phenomena characterized by a dominant direction in their dynamics. The HiMod approach obtains 1D models naturally enhanced by the inclusion of the effect of the transverse dynamics. HiMod reduction couples a finite element approximation along the mainstream with a locally tunable modal representation of the transverse dynamics. In particular, we focus on the pointwise HiMod reduction strategy, where the modal tuning is performed at each finite element node. We formalize the pointwise HiMod approach in an unsteady setting by resorting to a model discontinuous in time, continuous and hierarchically reduced in space (c[M([Formula: see text])G(s)]-dG(q) approximation). The selection of the modal distribution and of the space-time discretization is performed automatically via an adaptive procedure based on an a posteriori analysis of the global error. The final outcome of this procedure is a table, named the HiMod lookup diagram, that sets the time partition and, for each time interval, the corresponding 1D finite element mesh together with the associated modal distribution. The results of the numerical verification confirm the robustness of the proposed adaptive procedure in terms of accuracy, sensitivity with respect to the goal quantity and the boundary conditions, and computational saving. Finally, the validation results in the groundwater experimental setting are promising. The extension of HiMod reduction to an unsteady framework represents a crucial step with a view to practical engineering applications, and the results of the validation phase confirm that the HiMod approximation is a viable approach.
An experimental comparison of modelling techniques for speaker ...
Indian Academy of Sciences (India)
Most of the existing modelling techniques for the speaker recognition task make an implicit assumption of sufficient data for speaker modelling and hence may lead to poor modelling under limited data conditions. The present work gives an experimental evaluation of modelling techniques like Crisp Vector Quantization ...
Directory of Open Access Journals (Sweden)
Wolfgang Witteveen
2014-01-01
The mechanical response of multilayer sheet structures, such as leaf springs or car bodies, is largely determined by the nonlinear contact and friction forces between the sheets involved. Conventional computational approaches based on classical reduction techniques or the direct finite element approach have an inefficient balance between computational time and accuracy. In the present contribution, the method of trial vector derivatives is applied and extended in order to obtain a priori trial vectors for the model reduction which are suitable for determining the nonlinearities in the joints of the reduced system. Findings show that the result quality in terms of displacements and contact forces is comparable to the direct finite element method, but the computational effort is extremely low due to the model order reduction. Two numerical studies are presented to underline the method's accuracy and efficiency. In conclusion, this approach is discussed with respect to the existing body of literature.
FPGA-based RF interference reduction techniques for simultaneous PET–MRI
Gebhardt, P; Wehner, J; Weissler, B; Botnar, R; Marsden, P K; Schulz, V
2016-01-01
The combination of positron emission tomography (PET) and magnetic resonance imaging (MRI) as a multi-modal imaging technique is considered very promising and powerful with regard to in vivo disease progression examination, therapy response monitoring and drug development. However, PET–MRI system design enabling simultaneous operation with unaffected intrinsic performance of both modalities is challenging. As one of the major issues, both the PET detectors and the MRI radio-frequency (RF) subsystem are exposed to electromagnetic (EM) interference, which may lead to PET and MRI signal-to-noise ratio (SNR) deteriorations. Early digitization of electronic PET signals within the MRI bore helps to preserve PET SNR, but occurs at the expense of an increased amount of PET electronics inside the MRI and associated RF field emissions. This raises the likelihood of PET-related MRI interference by coupling into the MRI RF coil unwanted spurious signals considered as RF noise, which degrades MRI SNR and results in MR image artefacts. RF shielding of PET detectors is a commonly used technique to reduce PET-related RF interference, but can introduce eddy-current-related MRI disturbances and hinder the highest system integration. In this paper, we present RF interference reduction methods which rely on EM field coupling–decoupling principles of RF receive coils rather than suppressing emitted fields. By modifying clock frequencies and changing clock phase relations of digital circuits, the resulting RF field emission is optimised with regard to a lower field coupling into the MRI RF coil, thereby increasing the RF silence of PET detectors. Our methods are demonstrated by performing FPGA-based clock frequency and phase shifting of digital silicon photo-multipliers (dSiPMs) used in the PET modules of our MR-compatible Hyperion IID PET insert. We present simulations and magnetic-field map scans visualising the impact of altered clock phase patterns on the spatial RF field distribution.
Transoral outlet reduction: a comparison of purse-string with interrupted stitch technique.
Schulman, Allison R; Kumar, Nitin; Thompson, Christopher C
2017-11-03
Weight regain after Roux-en-Y gastric bypass (RYGB) correlates with dilated gastrojejunal anastomosis (GJA). Endoscopic sutured transoral outlet reduction (TORe) is a safe and effective management option and has predominantly been performed either by placing interrupted sutures at the GJA or by creating a purse-string suture. The aim of the current study was to compare these techniques. All patients undergoing TORe for weight regain after RYGB were prospectively enrolled. The primary outcome was mean percent total weight loss (%TWL) at 3 and 12 months. Secondary outcomes included percent excess weight loss (%EWL), percent regained weight lost (%RWL), and total weight loss. Proportions were compared using the Fisher exact test and continuous variables using the Student t test. A P < .05 was considered significant. Multivariable regression analysis was performed. Two hundred forty-one patients were enrolled (purse string = 187, interrupted = 54). There was no statistical difference between the purse-string and interrupted groups at 3 months in %TWL (8.6 vs 8.0, P = .41), %EWL (20.5 vs 16.7, P = .39), %RWL (44.7 vs 33.3, P = .56), and total weight loss (9.5 vs 11.3, P = .32). At 12 months the purse-string group achieved statistically significant improvement in %TWL (8.6 vs 6.4, P = .02) and %EWL (19.8 vs 11.7). The purse-string technique results in greater weight loss at 12 months than the traditional interrupted suture pattern. Copyright © 2017 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
Architectural and compiler techniques for energy reduction in high-performance microprocessors
Bellas, Nikolaos
1999-11-01
The microprocessor industry has started viewing power, along with area and performance, as a decisive design factor in today's microprocessors. The increasing cost of packaging and cooling systems poses stringent requirements on the maximum allowable power dissipation. Most of the research in recent years has focused on the circuit, gate, and register-transfer (RT) levels of the design. In this research, we focus on the software running on a microprocessor and we view the program as a power consumer. Our work concentrates on the role of the compiler in the construction of "power-efficient" code, and especially its interaction with the hardware so that unnecessary processor activity is saved. We propose techniques that use extra hardware features and compiler-driven code transformations that specifically target activity reduction in certain parts of the CPU which are known to be large power and energy consumers. Design for low power/energy at this level of abstraction entails larger energy gains than in the lower stages of the design hierarchy in which the design team has already made the most important design commitments. The role of the compiler in generating code which exploits the processor organization is also fundamental in energy minimization. Hence, we propose a hardware/software co-design paradigm, and we show what code transformations are necessary by the compiler so that "wasted" power in a modern microprocessor can be trimmed. More specifically, we propose a technique that uses an additional mini cache located between the instruction cache (I-Cache) and the CPU core; the mini cache buffers instructions that are nested within loops and are continuously fetched from the I-Cache. This mechanism can create very substantial energy savings, since the I-Cache unit is one of the main power consumers in most of today's high-performance microprocessors. Results are reported for the SPEC95 benchmarks in the R-4400 processor which implements the MIPS2 instruction
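A toy model of the mini-cache idea can make the mechanism concrete: a small buffer between the I-Cache and the core serves instructions that are re-fetched inside loops, so only the first iteration pays the full I-Cache access cost. The trace shape and buffer policy below are assumptions for illustration, not the paper's design:

```python
# Toy model of a loop buffer / mini-cache between the I-Cache and the core.
# Trace shape, buffer size and FIFO policy are illustrative assumptions.

def fetch_trace(loop_body, iterations, prologue):
    """Instruction-address trace: straight-line code, then a hot loop."""
    trace = list(range(prologue))
    base = prologue
    for _ in range(iterations):
        trace.extend(range(base, base + loop_body))
    return trace

def buffer_hit_rate(trace, buffer_size):
    """Fraction of fetches served by a small FIFO buffer of instruction slots."""
    buf, order, hits = set(), [], 0
    for addr in trace:
        if addr in buf:
            hits += 1          # served cheaply by the mini-buffer
        else:
            buf.add(addr)      # miss: fetch from the I-Cache, fill the buffer
            order.append(addr)
            if len(order) > buffer_size:
                buf.discard(order.pop(0))
    return hits / len(trace)

trace = fetch_trace(loop_body=16, iterations=100, prologue=64)
rate = buffer_hit_rate(trace, buffer_size=32)
print(f"mini-buffer hit rate: {rate:.2%}")
```

With a loop-dominated trace, almost all fetches after the first loop iteration hit the small buffer, which is why such a structure can save I-Cache energy out of proportion to its size.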
Health gain by salt reduction in europe: a modelling study.
Directory of Open Access Journals (Sweden)
Marieke A H Hendriksen
Excessive salt intake is associated with hypertension and cardiovascular diseases. Salt intake exceeds the World Health Organization population nutrition goal of 5 grams per day in the European region. We assessed the health impact of salt reduction in nine European countries (Finland, France, Ireland, Italy, the Netherlands, Poland, Spain, Sweden and the United Kingdom). Through literature research we obtained current salt intake and systolic blood pressure levels for the nine countries. The population health modeling tool DYNAMO-HIA, including country-specific disease data, was used to predict the changes in prevalence of ischemic heart disease and stroke for each country, estimating the effect of salt reduction through its effect on blood pressure levels. A 30% salt reduction would reduce the prevalence of stroke by 6.4% in Finland to 13.5% in Poland. Ischemic heart disease would be decreased by 4.1% in Finland to 8.9% in Poland. When salt intake is reduced to the WHO population nutrient goal, it would reduce the prevalence of stroke by 10.1% in Finland to 23.1% in Poland. Ischemic heart disease would decrease by 6.6% in Finland to 15.5% in Poland. The number of postponed deaths would be 102,100 (0.9%) in France, and 191,300 (2.3%) in Poland. A reduction of salt intake to 5 grams per day is expected to substantially reduce the burden of cardiovascular disease and mortality in several European countries.
Yusob, Diana; Zukhi, Jihan; Aziz Tajuddin, Abd; Zainon, Rafidah
2017-05-01
The aim of this study was to evaluate the efficacy of metal artefact reduction using contrast media in computed tomography (CT) imaging. A water-based abdomen phantom of 32 cm diameter (adult body size) was fabricated from polymethyl methacrylate (PMMA). Three different contrast agents (iodine, barium and gadolinium) were filled into small PMMA tubes placed inside the water-based PMMA adult abdomen phantom, and an orthopedic metal screw was placed in each small PMMA tube separately. Two types of orthopedic metal screw (stainless steel and titanium alloy) were scanned separately, with single-energy CT at 120 kV and with dual-energy CT at fast kV-switching between 80 kV and 140 kV. The scan modes were set automatically using the CARE Dose4D current-modulation setting, and the scans were performed at different pitches and slice thicknesses. The contrast media technique for the orthopedic metal screws was optimised using a pitch of 0.60 and a slice thickness of 5.0 mm. The use of contrast media can reduce metal streaking artefacts on CT images and enhance the CT images surrounding the implants, and it has potential use in improving diagnostic performance in patients with severe metallic artefacts. These results are valuable for imaging protocol optimisation in clinical applications.
Ahmed, Qasim Zeeshan
2013-12-18
This paper investigates and compares the performance of wireless sensor networks where sensors operate on the principles of cooperative communications. We consider a scenario where the source transmits signals to the destination with the help of L sensors. As the destination has the capacity to process only U out of these L signals, the strongest U signals are selected while the remaining (L − U) signals are suppressed. A preprocessing block similar to channel-shortening is proposed in this contribution. However, this preprocessing block employs a rank-reduction technique instead of channel-shortening. By employing this preprocessing, we are able to decrease the computational complexity of the system without affecting the bit error rate (BER) performance. Our simulations show that these schemes outperform the channel-shortening schemes in terms of computational complexity. In addition, the proposed schemes have a superior BER performance compared to channel-shortening schemes when sensors employ fixed-gain amplification. However, for sensors that employ variable-gain amplification, a tradeoff exists in terms of BER performance between channel-shortening and these schemes. These schemes outperform the channel-shortening scheme at lower signal-to-noise ratios.
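The selection step described above, keeping the strongest U of L relayed signals and suppressing the rest, can be sketched as follows; the SNR model and sizes are illustrative assumptions:

```python
# Sketch of the selection step: keep the strongest U of L relayed signals
# and suppress the remaining L - U. Per-branch SNRs are invented.
import numpy as np

rng = np.random.default_rng(0)
L, U = 8, 3
# Received per-branch SNRs (linear scale) from the L cooperating sensors.
snrs = rng.exponential(scale=2.0, size=L)

# Indices of the U strongest branches, strongest first; the rest are dropped.
keep = np.argsort(snrs)[-U:][::-1]
selected = snrs[keep]

print("kept branches:", keep, "SNRs:", np.round(selected, 2))
```

Reducing the destination's processing from L to U branches this way is what lowers the dimensionality, and hence the computational complexity, that any subsequent rank-reduction or detection stage must handle.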
Application of variance reduction technique to nuclear transmutation system driven by accelerator
Energy Technology Data Exchange (ETDEWEB)
Sasa, Toshinobu [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1998-03-01
In Japan, the basic policy is to dispose of the high-level radioactive waste arising from spent nuclear fuel in stable deep strata after glass solidification. If the useful elements in the waste can be separated and utilized, resources are used effectively, and high economic efficiency and safety of geological disposal can be expected. The Japan Atomic Energy Research Institute proposed a hybrid-type transmutation system, in which a high-intensity proton accelerator and a subcritical fast core are combined, or a nuclear reactor optimized exclusively for transmutation. The tungsten-target, minor actinide nitride fuel transmutation system and the molten minor actinide chloride salt target/fuel transmutation system are outlined, and conceptual figures of both systems are shown. As the method of analysis, Version 2.70 of the Lahet Code System, developed by Los Alamos National Laboratory in the USA, was adopted. When carrying out the analysis of an accelerator-driven subcritical core in the energy range below 20 MeV, variance reduction techniques must be applied. (K.I.)
Kanik, Mehmet; Aktas, Ozan; Sen, Huseyin Sener; Durgun, Engin; Bayindir, Mehmet
2014-09-23
We produced kilometer-long, endlessly parallel, spontaneously piezoelectric and thermally stable poly(vinylidene fluoride) (PVDF) micro- and nanoribbons using an iterative size reduction technique based on thermal fiber drawing. Because of the high stress and temperature used in the thermal drawing process, we obtained spontaneously polar γ-phase PVDF micro- and nanoribbons without an electrical poling process. On the basis of X-ray diffraction (XRD) analysis, we observed that the PVDF micro- and nanoribbons are thermally stable and conserve the polar γ phase even after being exposed to heat treatment above the melting point of PVDF. The phase transition mechanism is investigated and explained using ab initio calculations. We measured an average effective piezoelectric constant of −58.5 pm/V from a single PVDF nanoribbon using a piezo evaluation system along with an atomic force microscope. PVDF nanoribbons are promising structures for constructing devices such as highly efficient energy generators, large-area pressure sensors, and artificial muscle and skin, due to their unique geometry and extended lengths, high polar-phase content, high thermal stability and high piezoelectric coefficient. We demonstrated two proof-of-principle devices for energy harvesting and sensing applications, with a 60 V open-circuit peak voltage and a 10 μA peak short-circuit current output.
Directory of Open Access Journals (Sweden)
Gustavo Ventorim
2014-03-01
http://dx.doi.org/10.5902/1980509813337 This study aimed to evaluate the sensitivity of the information obtained on the residual lignin of Eucalyptus grandis kraft pulps analyzed through the nitrobenzene oxidation, copper oxide (CuO) oxidation and acidolysis techniques. The chips were cooked, resulting in pulps of kappa numbers 14.5 and 16.9, respectively. The residual lignins of both pulps were evaluated through the three methods (nitrobenzene oxidation, copper oxide oxidation and acidolysis). The pulps were then subjected to an oxygen delignification stage. The pulp of kappa number 16.9 showed higher levels of non-condensed lignin structures by the acidolysis method, higher syringyl/vanillin (S/V) ratios by the nitrobenzene and copper oxide methods, and better performance in the oxygen delignification stage. The different methods made it possible to differentiate the residual lignins of the pulps with kappa numbers 14.5 and 16.9, and the nitrobenzene oxidation method showed the highest sensitivity in this study.
International Nuclear Information System (INIS)
Makgae, R.
2008-01-01
A private company, Citrus Research International (CIR), intends to construct an insect irradiation facility for pest management in the south-western region of South Africa. The facility will employ a cylindrical Co-60 source in the chamber. An adequate thickness for the concrete shielding walls, and the ability of the labyrinth leading to the irradiation chamber to attenuate radiation to acceptably low dose rates, were determined. Two MCNP variance reduction techniques were applied to accommodate the two pathways: deep penetration, to evaluate the radiological impact outside the 150 cm concrete walls, and streaming of gamma photons through the labyrinth. The point-kernel based MicroShield software was used in the deep-penetration calculations for the walls around the source room to test its accuracy, and the results obtained are in good agreement, with about a 15-20% difference. The dose rate mapping due to radiation streaming along the labyrinth to the facility entrance is also to be validated with the Attila code, a deterministic code that solves the discrete ordinates approximation. (authors)
Respirometry techniques and activated sludge models
Benes, O.; Spanjers, H.; Holba, M.
2002-01-01
This paper aims to explain results of respirometry experiments using Activated Sludge Model No. 1. In cases of insufficient fit of ASM No. 1, further modifications to the model were carried out and the so-called "Enzymatic model" was developed. The best-fit method was used to determine the effect of
Formal modelling techniques in human-computer interaction
de Haan, G.; de Haan, G.; van der Veer, Gerrit C.; van Vliet, J.C.
1991-01-01
This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in
Supervised Gaussian process latent variable model for dimensionality reduction.
Gao, Xinbo; Wang, Xiumei; Tao, Dacheng; Li, Xuelong
2011-04-01
The Gaussian process latent variable model (GP-LVM) has been identified as an effective probabilistic approach for dimensionality reduction because it can obtain a low-dimensional manifold of a data set in an unsupervised fashion. However, the GP-LVM is insufficient for supervised learning tasks (e.g., classification and regression) because it ignores the class label information for dimensionality reduction. In this paper, a supervised GP-LVM is developed for supervised learning tasks, and the maximum a posteriori algorithm is introduced to estimate the positions of all samples in the latent variable space. We present experimental evidence suggesting that the supervised GP-LVM is able to use the class label information effectively, and thus it outperforms the GP-LVM and the discriminative extension of the GP-LVM consistently. A comparison with some supervised classification methods, such as Gaussian process classification and support vector machines, is also given to illustrate the advantage of the proposed method.
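The GP-LVM itself is nonlinear and probabilistic; as a minimal linear analogue of the paper's point, the sketch below contrasts an unsupervised projection (PCA) with a supervised one (Fisher's discriminant) on toy data where the direction of maximum variance is not the discriminative direction. The data and dimensions are assumptions:

```python
# Linear analogue of supervised vs unsupervised dimensionality reduction
# (not the GP-LVM itself): PCA ignores labels, Fisher's LDA uses them.
import numpy as np

rng = np.random.default_rng(1)
n = 500
# Two classes: large shared variance along x, class separation along y.
x0 = np.column_stack([rng.normal(0, 5, n), rng.normal(-1.0, 0.3, n)])
x1 = np.column_stack([rng.normal(0, 5, n), rng.normal(+1.0, 0.3, n)])
X = np.vstack([x0, x1])

# PCA: leading right singular vector of the centered data (label-blind).
Xc = X - X.mean(axis=0)
_, _, vt = np.linalg.svd(Xc, full_matrices=False)
w_pca = vt[0]

# Fisher LDA: w = Sw^{-1} (m1 - m0), which uses the labels.
m0, m1 = x0.mean(axis=0), x1.mean(axis=0)
sw = np.cov(x0, rowvar=False) + np.cov(x1, rowvar=False)
w_lda = np.linalg.solve(sw, m1 - m0)
w_lda /= np.linalg.norm(w_lda)

def separation(w):
    """Between-class distance over pooled within-class spread along w."""
    p0, p1 = x0 @ w, x1 @ w
    return abs(p0.mean() - p1.mean()) / np.sqrt(0.5 * (p0.var() + p1.var()))

print(f"PCA separation: {separation(w_pca):.2f}, LDA separation: {separation(w_lda):.2f}")
```

On this data the unsupervised projection picks the high-variance but uninformative axis, while the supervised one recovers the class-separating axis, which is the same failure mode the supervised GP-LVM addresses for its nonlinear counterpart.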
Directory of Open Access Journals (Sweden)
N.R. Sakthivel
2014-03-01
Bearing faults, impeller faults, seal faults and cavitation are the main causes of breakdown in a mono-block centrifugal pump; hence, the detection and diagnosis of these mechanical faults is crucial for its reliable operation. Based on a continuous acquisition of signals with a data acquisition system, it is possible to classify the faults. This is achieved by extracting features from the measured data and employing data-mining approaches to explore the structural information hidden in the acquired signals. In the present study, statistical features derived from the vibration data are used as the features. In order to increase the robustness of the classifier and to reduce the data-processing load, dimensionality reduction is necessary. In this paper, dimensionality reduction is performed using both traditional and nonlinear dimensionality reduction techniques, and the effectiveness of each technique is verified using visual analysis. The reduced feature set is then classified using a decision tree. The results obtained are compared with those generated by classifiers such as Naïve Bayes, Bayes Net and kNN. The aim is to identify the better-performing dimensionality reduction technique–classifier combination.
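The feature-extraction stage mentioned above (statistical features derived from vibration data) can be sketched as follows; the signal model and fault signature are invented for illustration:

```python
# Statistical features from (synthetic) vibration signals, one vector per
# signal. The "healthy" and "faulty" waveforms are invented: impulsive
# events, e.g. from a damaged bearing element, inflate kurtosis and crest.
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(0, 1.0, 1 / 1000)  # 1 s at 1 kHz

def statistical_features(sig):
    """Mean, std, RMS, skewness, kurtosis and crest factor of one signal."""
    mu, sd = sig.mean(), sig.std()
    rms = np.sqrt(np.mean(sig ** 2))
    skew = np.mean(((sig - mu) / sd) ** 3)
    kurt = np.mean(((sig - mu) / sd) ** 4)
    crest = np.max(np.abs(sig)) / rms
    return np.array([mu, sd, rms, skew, kurt, crest])

healthy = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.normal(size=t.size)
faulty = healthy.copy()
faulty[::100] += 5.0  # repetitive impulses every 100 samples

f_h, f_f = statistical_features(healthy), statistical_features(faulty)
print("kurtosis healthy/faulty:", round(f_h[4], 2), round(f_f[4], 2))
```

Stacking such feature vectors for many signals gives the high-dimensional feature matrix on which a dimensionality reduction technique and a classifier, such as a decision tree, would then operate.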
UAV State Estimation Modeling Techniques in AHRS
Razali, Shikin; Zhahir, Amzari
2017-11-01
An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and allows the flight mission to be completed safely. One sensor configuration used for UAV state estimation is the Attitude and Heading Reference System (AHRS), applied with either an Extended Kalman Filter (EKF) or a feedback controller. The results of these two techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
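A minimal sketch of the state-estimation idea, assuming a scalar Kalman filter that fuses a gyro-style rate with a noisy angle sensor (far simpler than a full EKF-based AHRS; all noise figures are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
dt, T = 0.01, 2000

# True attitude angle: slow sinusoid. The gyro measures its rate with noise;
# an accelerometer-style sensor gives noisy direct angle measurements.
t = np.arange(T) * dt
true_angle = 0.5 * np.sin(0.5 * t)
true_rate = 0.25 * np.cos(0.5 * t)
gyro = true_rate + rng.normal(0, 0.05, T)   # rad/s
meas = true_angle + rng.normal(0, 0.2, T)   # rad

# Scalar Kalman filter: predict with the gyro, correct with the angle sensor.
q, r = (0.05 * dt)**2, 0.2**2               # process / measurement variances
x, p = 0.0, 1.0
est = np.empty(T)
for k in range(T):
    x += gyro[k] * dt          # predict: integrate the measured rate
    p += q
    K = p / (p + r)            # Kalman gain
    x += K * (meas[k] - x)     # correct with the angle measurement
    p *= (1.0 - K)
    est[k] = x

rmse_kf = np.sqrt(np.mean((est - true_angle)**2))
rmse_raw = np.sqrt(np.mean((meas - true_angle)**2))
print(rmse_kf, rmse_raw)
```

The fused estimate tracks the attitude far more closely than the raw angle sensor alone, which is the benefit the EKF brings to the AHRS configuration, there applied to the full nonlinear attitude kinematics.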
Genetic Algorithm-Based Model Order Reduction of Aeroservoelastic Systems with Consistent States
Zhu, Jin; Wang, Yi; Pant, Kapil; Suh, Peter M.; Brenner, Martin J.
2017-01-01
This paper presents a model order reduction framework to construct linear parameter-varying reduced-order models of flexible aircraft for aeroservoelasticity analysis and control synthesis in broad two-dimensional flight parameter space. Genetic algorithms are used to automatically determine physical states for reduction and to generate reduced-order models at grid points within parameter space while minimizing the trial-and-error process. In addition, balanced truncation for unstable systems is used in conjunction with the congruence transformation technique to achieve locally optimal realization and weak fulfillment of state consistency across the entire parameter space. Therefore, aeroservoelasticity reduced-order models at any flight condition can be obtained simply through model interpolation. The methodology is applied to the pitch-plant model of the X-56A Multi-Use Technology Testbed currently being tested at NASA Armstrong Flight Research Center for flutter suppression and gust load alleviation. The present studies indicate that the reduced-order model with more than 12× reduction in the number of states relative to the original model is able to accurately predict system response among all input-output channels. The genetic-algorithm-guided approach exceeds manual and empirical state selection in terms of efficiency and accuracy. The interpolated aeroservoelasticity reduced order models exhibit smooth pole transition and continuously varying gains along a set of prescribed flight conditions, which verifies consistent state representation obtained by congruence transformation. The present model order reduction framework can be used by control engineers for robust aeroservoelasticity controller synthesis and novel vehicle design.
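The paper's LPV/genetic-algorithm framework is much richer, but the balanced-truncation step it builds on can be sketched for a plain stable LTI system (the matrices below are illustrative, not the X-56A model):

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

# Stable 4-state SISO system (illustrative only).
A = np.array([[-1.0, 0.1, 0.0, 0.0],
              [0.0, -2.0, 0.2, 0.0],
              [0.0, 0.0, -5.0, 0.1],
              [0.0, 0.0, 0.0, -10.0]])
B = np.array([[1.0], [0.5], [0.25], [0.1]])
C = np.array([[1.0, 0.5, 0.25, 0.1]])

# Controllability/observability Gramians: A P + P A^T + B B^T = 0, etc.
P = solve_continuous_lyapunov(A, -B @ B.T)
Q = solve_continuous_lyapunov(A.T, -C.T @ C)

# Balancing transformation from Cholesky factors of the Gramians.
Lp = cholesky(P, lower=True)
Lq = cholesky(Q, lower=True)
U, hsv, Vt = svd(Lq.T @ Lp)            # hsv = Hankel singular values
S = np.diag(hsv**-0.5)
T = Lp @ Vt.T @ S                      # balancing (right) transformation
Tinv = S @ U.T @ Lq.T

r = 2                                  # keep the two dominant balanced states
Ar = (Tinv @ A @ T)[:r, :r]
Br = (Tinv @ B)[:r, :]
Cr = (C @ T)[:, :r]

dc_full = (C @ np.linalg.solve(-A, B)).item()
dc_red = (Cr @ np.linalg.solve(-Ar, Br)).item()
print(hsv, dc_full, dc_red)
```

Truncating the states with small Hankel singular values leaves the input-output behavior nearly unchanged; the paper extends this idea to unstable systems and enforces state consistency across flight conditions via congruence transformations.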
Directory of Open Access Journals (Sweden)
Hae-Gwang Jeong
2013-01-01
Full Text Available This paper proposes a second-order harmonic reduction technique using a proportional-resonant (PR) controller for a photovoltaic (PV) power conditioning system (PCS). In a grid-connected single-phase system, inverters create a second-order harmonic at twice the fundamental frequency. This ripple component unsettles the operating points of the PV array and deteriorates the operation of the maximum power point tracking (MPPT) technique. The second-order harmonic component in the PV PCS is analyzed using an equivalent circuit of the DC/DC converter and the DC/AC inverter. A new feed-forward compensation technique using a PR controller for ripple reduction is proposed. The proposed algorithm is advantageous in that additional devices are not required and complex calculations are unnecessary. Therefore, this method is cost-effective and simple to implement. The proposed feed-forward compensation technique is verified by simulation and experimental results.
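A numeric check of why a PR controller suits this job: an idealized PR transfer function concentrates gain at the tuned frequency. The 120 Hz target (twice a 60 Hz fundamental) and the gains below are assumptions for illustration; the paper does not fix these values here:

```python
import numpy as np

# Quasi-ideal proportional-resonant controller
#   G(s) = Kp + 2*Kr*wc*s / (s^2 + 2*wc*s + w0^2)
# tuned at the second harmonic of an assumed 60 Hz grid.
Kp, Kr, wc = 1.0, 100.0, 5.0
w0 = 2 * np.pi * 120.0

def G(w):
    s = 1j * w
    return Kp + 2 * Kr * wc * s / (s**2 + 2 * wc * s + w0**2)

gain_res = abs(G(w0))        # at the targeted ripple frequency
gain_off = abs(G(2 * w0))    # an octave above
print(gain_res, gain_off)
```

At the resonant frequency the resonant term reduces exactly to Kr, so the controller applies very high gain to the second-harmonic ripple while leaving other frequencies nearly untouched, which is what makes the feed-forward compensation selective.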
Verification and Uncertainty Reduction of Amchitka Underground Nuclear Testing Models
Energy Technology Data Exchange (ETDEWEB)
Ahmed Hassan; Jenny Chapman
2006-02-01
zero. The current results of no-breakthrough match this lower bound. (8) Significant uncertainty reduction is achieved for model input parameters (recharge, conductivity, and recharge-conductivity ratio) with the R/K ratio experiencing a very dramatic reduction. (9) Uncertainty in groundwater fluxes is also reduced due to the reduction of R/K uncertainty. (10) Groundwater velocities based on new data are orders of magnitude slower than the velocities produced by the 2002 model due to the higher porosity obtained from the analysis of the MT data. (11) Uncertainty reduction in radionuclide mass flux could not be assessed as the velocities are too small to produce radionuclide breakthrough within the model timeframe of 2,000 years.
Relevance units latent variable model and nonlinear dimensionality reduction.
Gao, Junbin; Zhang, Jun; Tien, David
2010-01-01
A new dimensionality reduction method, called the relevance units latent variable model (RULVM), is proposed in this paper. RULVM has a close link with the framework of the Gaussian process latent variable model (GPLVM), and it originates from a recently developed sparse kernel model called the relevance units machine (RUM). RUM follows the idea of the relevance vector machine (RVM) under the Bayesian framework but relaxes the constraint that relevance vectors (RVs) must be selected from the input vectors; instead, RUM treats relevance units (RUs) as part of the parameters to be learned from the data. As a result, RUM retains all the advantages of RVM while offering superior sparsity. RULVM inherits this sparseness, and experimental results show that the RULVM algorithm possesses considerable computational advantages over the GPLVM algorithm.
Comparative analysis of nonlinear dimensionality reduction techniques for breast MRI segmentation
International Nuclear Information System (INIS)
Akhbardeh, Alireza; Jacobs, Michael A.
2012-01-01
Purpose: Visualization of anatomical structures using radiological imaging methods is an important tool in medicine to differentiate normal from pathological tissue and can generate large amounts of data for a radiologist to read. Integrating these large data sets is difficult and time-consuming. A new approach uses both supervised and unsupervised advanced machine learning techniques to visualize and segment radiological data. This study describes the application of a novel hybrid scheme, based on combining wavelet transform and nonlinear dimensionality reduction (NLDR) methods, to breast magnetic resonance imaging (MRI) data using three well-established NLDR techniques, namely, ISOMAP, local linear embedding (LLE), and diffusion maps (DfM), to perform a comparative performance analysis. Methods: Twenty-five breast lesion subjects were scanned using a 3T scanner. MRI sequences used were T1-weighted, T2-weighted, diffusion-weighted imaging (DWI), and dynamic contrast-enhanced (DCE) imaging. The hybrid scheme consisted of two steps: preprocessing and postprocessing of the data. The preprocessing step was applied for B1 inhomogeneity correction, image registration, and wavelet-based image compression to match and denoise the data. In the postprocessing step, MRI parameters were considered data dimensions and the NLDR-based hybrid approach was applied to integrate the MRI parameters into a single image, termed the embedded image. This was achieved by mapping all pixel intensities from the higher dimension to a lower dimensional (embedded) space. For validation, the authors compared the hybrid NLDR with linear methods of principal component analysis (PCA) and multidimensional scaling (MDS) using synthetic data. For the clinical application, the authors used breast MRI data; comparison was performed using the postcontrast DCE MRI image and evaluating the congruence of the segmented lesions. Results: The NLDR-based hybrid approach was able to define and segment both
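The linear baselines used for validation, PCA and classical metric MDS, are in fact equivalent when MDS is fed Euclidean distances; a short numpy sketch (synthetic data, not the breast MRI set) makes the point:

```python
import numpy as np

rng = np.random.default_rng(3)
# Anisotropic 5-D cloud so the leading components are well separated.
X = rng.normal(size=(60, 5)) * np.array([5.0, 3.0, 1.0, 0.5, 0.2])
Xc = X - X.mean(axis=0)

# PCA embedding: project onto the top-2 right singular vectors.
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pca2 = Xc @ Vt[:2].T

# Classical (metric) MDS: double-centre the squared Euclidean distance
# matrix, then eigendecompose the resulting Gram matrix.
D2 = ((Xc[:, None, :] - Xc[None, :, :])**2).sum(-1)
n = D2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
Bmat = -0.5 * J @ D2 @ J
evals, evecs = np.linalg.eigh(Bmat)
order = np.argsort(evals)[::-1]
mds2 = evecs[:, order[:2]] * np.sqrt(evals[order[:2]])

# The two embeddings agree up to a sign flip per axis.
for k in range(2):
    if np.dot(pca2[:, k], mds2[:, k]) < 0:
        mds2[:, k] *= -1
err = np.abs(pca2 - mds2).max()
print(err)
```

This equivalence is why PCA and MDS serve as a single linear reference point in the comparison, and why the nonlinear methods (ISOMAP, LLE, diffusion maps) are needed to capture manifold structure the linear embedding misses.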
Rozhkov, Mikhail; Bobrov, Dmitry; Kitov, Ivan
2014-05-01
The Master Event technique is a powerful tool for Expert Technical Analysis within the CTBT framework, as well as for real-time monitoring with the waveform cross-correlation (CC) (matched filter) approach. The primary goal of CTBT monitoring is the detection and location of nuclear explosions, so cross-correlation monitoring should be focused on finding such events. The use of physically adequate waveform templates may significantly increase the number of valid events, both natural and manmade, in the Reviewed Event Bulletin (REB) of the International Data Centre, whereas inadequate master-event templates may increase the number of CTBT-irrelevant events in the REB and reduce the sensitivity of the CC technique to valid events. In order to cover the entire Earth, including vast aseismic territories, with CC-based nuclear test monitoring, we conducted thorough research and defined the most appropriate real and synthetic master events representing underground explosion sources. A procedure was developed for optimizing master-event template simulation and narrowing the classes of CC templates used in the detection and location process, based on principal and independent component analysis (PCA and ICA). Actual waveforms and metadata from the DTRA Verification Database were used to validate our approach, and the detection and location results based on real and synthetic master events were compared. The prototype CC-based Global Grid monitoring system developed at the IDC during the last year was populated with different hybrid waveform templates (synthetics, synthetic components, and real components) and its performance was assessed with the world seismicity data flow, including the DPRK-2013 event. The specific features revealed in this study for the P-waves from the DPRK underground nuclear explosions (UNEs) can reduce the global detection threshold of seismic monitoring under the CTBT by 0.5 units of magnitude. This corresponds to the reduction in the test yield by a
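The matched-filter idea behind cross-correlation monitoring can be sketched in a few lines: slide a master-event template over a noisy trace and pick the lag with the highest normalized correlation. The template and noise level below are synthetic stand-ins, not real seismograms:

```python
import numpy as np

rng = np.random.default_rng(4)

# Master-event template: a short damped oscillation (synthetic stand-in).
t = np.arange(100)
template = np.exp(-t / 30.0) * np.sin(2 * np.pi * t / 12.0)

# Continuous trace: noise with the template buried at a known onset.
trace = rng.normal(0, 0.3, 2000)
true_onset = 700
trace[true_onset:true_onset + template.size] += template

def norm_xcorr(trace, tmpl):
    """Normalized cross-correlation of a template at every lag."""
    m = tmpl.size
    tmpl0 = (tmpl - tmpl.mean()) / tmpl.std()
    out = np.empty(trace.size - m + 1)
    for k in range(out.size):
        win = trace[k:k + m]
        sd = win.std()
        out[k] = 0.0 if sd == 0 else np.dot((win - win.mean()) / sd, tmpl0) / m
    return out

cc = norm_xcorr(trace, template)
detected = int(np.argmax(cc))
print(detected, cc[detected])
```

Even at a signal level comparable to the noise, the correlation peak stands well above the background, which is why a physically adequate template is so effective at lowering the detection threshold.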
International Nuclear Information System (INIS)
Shah, Chirag; Vicini, Frank A.
2011-01-01
As more women survive breast cancer, long-term toxicities affecting their quality of life, such as lymphedema (LE) of the arm, gain importance. Although numerous studies have attempted to determine incidence rates, identify optimal diagnostic tests, enumerate efficacious treatment strategies and outline risk reduction guidelines for breast cancer–related lymphedema (BCRL), few groups have consistently agreed on any of these issues. As a result, standardized recommendations are still lacking. This review will summarize the latest data addressing all of these concerns in order to provide patients and health care providers with optimal, contemporary recommendations. Published incidence rates for BCRL vary substantially with a range of 2–65% based on surgical technique, axillary sampling method, radiation therapy fields treated, and the use of chemotherapy. Newer clinical assessment tools can potentially identify BCRL in patients with subclinical disease with prospective data suggesting that early diagnosis and management with noninvasive therapy can lead to excellent outcomes. Multiple therapies exist with treatments defined by the severity of BCRL present. Currently, the standard of care for BCRL in patients with significant LE is complex decongestive physiotherapy (CDP). Contemporary data also suggest that a multidisciplinary approach to the management of BCRL should begin prior to definitive treatment for breast cancer employing patient-specific surgical, radiation therapy, and chemotherapy paradigms that limit risks. Further, prospective clinical assessments before and after treatment should be employed to diagnose subclinical disease. In those patients who require aggressive locoregional management, prophylactic therapies and the use of CDP can help reduce the long-term sequelae of BCRL.
Ambient temperature modelling with soft computing techniques
Energy Technology Data Exchange (ETDEWEB)
Bertini, Ilaria; Ceravolo, Francesco; Citterio, Marco; Di Pietra, Biagio; Margiotta, Francesca; Pizzuti, Stefano; Puglisi, Giovanni [Energy, New Technology and Environment Agency (ENEA), Via Anguillarese 301, 00123 Rome (Italy); De Felice, Matteo [Energy, New Technology and Environment Agency (ENEA), Via Anguillarese 301, 00123 Rome (Italy); University of Rome "Roma 3", Dipartimento di Informatica e Automazione (DIA), Via della Vasca Navale 79, 00146 Rome (Italy)
2010-07-15
This paper proposes a hybrid approach based on soft computing techniques in order to estimate monthly and daily ambient temperature. Indeed, we combine the back-propagation (BP) algorithm and the simple Genetic Algorithm (GA) in order to effectively train artificial neural networks (ANN) in such a way that the BP algorithm initialises a few individuals of the GA's population. Experiments concerned monthly temperature estimation of unknown places and daily temperature estimation for thermal load computation. Results have shown remarkable improvements in accuracy compared to traditional methods. (author)
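A toy version of the BP-seeds-GA idea on a trivial regression problem (the model, learning rates and population sizes below are invented for illustration; the paper trains full neural networks on temperature data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy regression target (stand-in for the ambient-temperature data).
x = np.linspace(0, 1, 50)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, x.size)

def mse(p):
    w, b = p
    return float(np.mean((w * x + b - y)**2))

# "BP" stage: a few gradient-descent steps give one good individual.
p = np.zeros(2)
for _ in range(200):
    e = p[0] * x + p[1] - y
    grad = np.array([2 * np.mean(e * x), 2 * np.mean(e)])
    p -= 0.5 * grad
seed = p.copy()

# GA stage: population seeded with the BP individual plus random ones,
# evolved by Gaussian mutation and truncation selection.
pop = [seed] + [rng.normal(0, 2, 2) for _ in range(19)]
for _ in range(100):
    pop.sort(key=mse)
    parents = pop[:5]                      # elitism keeps the best individuals
    pop = parents + [q + rng.normal(0, 0.05, 2) for q in parents for _ in range(3)]
best = min(pop, key=mse)
print(mse(seed), mse(best))
```

Seeding the population with a gradient-trained individual gives the GA a strong starting point, so the evolutionary search refines rather than rediscovers the solution, which is the hybridization advantage the paper reports.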
A MODEL AND CONTROLLER REDUCTION METHOD FOR ROBUST CONTROL DESIGN.
Energy Technology Data Exchange (ETDEWEB)
Yue, M.; Schlueter, R.
2003-10-20
A bifurcation subsystem based model and controller reduction approach is presented. Using this approach, a robust μ-synthesis SVC control is designed for interarea oscillation and voltage control based on a small reduced-order bifurcation subsystem model of the full system. The control synthesis problem is posed by structured uncertainty modeling and control configuration formulation using the bifurcation subsystem knowledge of the nature of the interarea oscillation caused by a specific uncertainty parameter. The bifurcation subsystem method plays a key role in this paper because it provides (1) a bifurcation parameter for uncertainty modeling; (2) a criterion to reduce the order of the resulting MSVC control; and (3) a low-order model for a bifurcation subsystem based SVC (BMSVC) design. The use of the bifurcation subsystem model to produce a low-order controller simplifies the control design and reduces the computational effort so significantly that the robust μ-synthesis control can be applied to large systems where the computational burden would otherwise make robust control design impractical. The RGA analysis and time simulations show that the reduced BMSVC control design captures the center manifold dynamics and uncertainty structure of the full system model and is capable of stabilizing the full system and achieving satisfactory control performance.
Energy Technology Data Exchange (ETDEWEB)
Carlberg, Kevin Thomas [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Drohmann, Martin [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Tuminaro, Raymond S. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Computational Mathematics; Boggs, Paul T. [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Quantitative Modeling and Analysis; van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States). Optimization and Uncertainty Estimation
2014-10-01
Model reduction for dynamical systems is a promising approach for reducing the computational cost of large-scale physics-based simulations to enable high-fidelity models to be used in many-query (e.g., Bayesian inference) and near-real-time (e.g., fast-turnaround simulation) contexts. While model reduction works well for specialized problems such as linear time-invariant systems, it is much more difficult to obtain accurate, stable, and efficient reduced-order models (ROMs) for systems with general nonlinearities. This report describes several advances that enable nonlinear ROMs to be deployed in a variety of time-critical settings. First, we present an error bound for the Gauss-Newton with Approximated Tensors (GNAT) nonlinear model reduction technique. This bound allows the state-space error for the GNAT method to be quantified when applied with the backward Euler time-integration scheme. Second, we present a methodology for preserving classical Lagrangian structure in nonlinear model reduction. This technique guarantees that important properties--such as energy conservation and symplectic time-evolution maps--are preserved when performing model reduction for models described by a Lagrangian formalism (e.g., molecular dynamics, structural dynamics). Third, we present a novel technique for decreasing the temporal complexity--defined as the number of Newton-like iterations performed over the course of the simulation--by exploiting time-domain data. Fourth, we describe a novel method for refining projection-based reduced-order models a posteriori using a goal-oriented framework similar to mesh-adaptive h-refinement in finite elements. The technique allows the ROM to generate arbitrarily accurate solutions, thereby providing the ROM with a 'failsafe' mechanism in the event of insufficient training data. Finally, we present the reduced-order model error surrogate (ROMES) method for statistically quantifying reduced-order-model
Multiple models guide strategies for agricultural nutrient reductions
Scavia, Donald; Kalcic, Margaret; Muenich, Rebecca Logsdon; Read, Jennifer; Aloysius, Noel; Bertani, Isabella; Boles, Chelsie; Confesor, Remegio; DePinto, Joseph; Gildow, Marie; Martin, Jay; Redder, Todd; Robertson, Dale M.; Sowa, Scott P.; Wang, Yu-Chen; Yen, Haw
2017-01-01
In response to degraded water quality, federal policy makers in the US and Canada called for a 40% reduction in phosphorus (P) loads to Lake Erie, and state and provincial policy makers in the Great Lakes region set a load-reduction target for the year 2025. Here, we configured five separate SWAT (US Department of Agriculture's Soil and Water Assessment Tool) models to assess load reduction strategies for the agriculturally dominated Maumee River watershed, the largest P source contributing to toxic algal blooms in Lake Erie. Although several potential pathways may achieve the target loads, our results show that any successful pathway will require large-scale implementation of multiple practices. For example, one successful pathway involved targeting 50% of row cropland that has the highest P loss in the watershed with a combination of three practices: subsurface application of P fertilizers, planting cereal rye as a winter cover crop, and installing buffer strips. Achieving these levels of implementation will require local, state/provincial, and federal agencies to collaborate with the private sector to set shared implementation goals and to demand innovation and honest assessments of water quality-related programs, policies, and partnerships.
Moving objects management models, techniques and applications
Meng, Xiaofeng; Xu, Jiajie
2014-01-01
This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.
Materials and techniques for model construction
Wigley, D. A.
1985-01-01
The problems confronting the designer of models for cryogenic wind tunnels are discussed, with particular reference to the difficulty of obtaining appropriate data on the mechanical and physical properties of candidate materials and their fabrication technologies. The relationship between the strength and toughness of alloys is discussed in the context of maximizing both and avoiding dimensional and microstructural instability. All major classes of materials used in model construction are considered in some detail, and in the Appendix selected numerical data are given for the most relevant materials. The stepped-specimen program to investigate stress-induced dimensional changes in alloys is discussed in detail, together with an interpretation of the initial results. The methods used to bond model components are considered with particular reference to the selection of filler alloys and temperature cycles that avoid microstructural degradation and loss of mechanical properties.
Directory of Open Access Journals (Sweden)
Pixner Konrad
2015-01-01
Full Text Available The grape variety Vernatsch is prone to the formation of severe reductive notes during alcoholic fermentation (AF), spoiling the fruity aroma characteristic of this variety. We investigated the impact of eight different vinification treatments on the formation of volatile sulfur compounds (VSCs) and their impact on the sensorial quality of wines from this susceptible grape variety. Without the addition of sulfur in the form of potassium metabisulfite (K2S2O5) to the crushed grapes, wines were significantly less reductive. The clarification treatment showed promising results for diminishing reductive notes, but might not be a feasible strategy for commercial wineries. Changing the fermentation temperature or adding air, bentonite or copper to fermenting wines increased the appearance of reductive notes. The addition of sulfur prior to AF increased reductive notes in Vernatsch wines and needs to be considered a crucial factor in the formation of reductive notes.
Kerfriden, P.; Goury, O.; Rabczuk, T.; Bordas, S.P.A.
2013-01-01
We propose in this paper a reduced order modelling technique based on domain partitioning for parametric problems of fracture. We show that coupling domain decomposition and projection-based model order reduction makes it possible to focus the numerical effort where it is most needed: around the zones where damage propagates. No a priori knowledge of the damage pattern is required, the extraction of the corresponding spatial regions being based solely on algebra. The efficiency of the proposed approach is demonstrated numerically with an example relevant to engineering fracture. PMID:23750055
Disc volume reduction with percutaneous nucleoplasty in an animal model.
Directory of Open Access Journals (Sweden)
Richard Kasch
Full Text Available STUDY DESIGN: We assessed volume following nucleoplasty disc decompression in lower lumbar spines from cadaveric pigs using 7.1-Tesla magnetic resonance imaging (MRI). PURPOSE: To investigate coblation-induced volume reductions as a possible mechanism underlying nucleoplasty. METHODS: We assessed volume following nucleoplastic disc decompression in pig spines using 7.1-Tesla MRI. Volumetry was performed in lumbar discs of 21 postmortem pigs. A preoperative image data set was obtained, volume was determined, and either disc decompression or placebo therapy was performed in a randomized manner. Group 1 (nucleoplasty group) was treated according to the usual nucleoplasty protocol with coblation current applied to 6 channels for 10 seconds each in an application field of 360°; in group 2 (placebo group) the same procedure was performed but without coblation current. After the procedure, a second data set was generated and volumes calculated and matched with the preoperative measurements in a blinded manner. To analyze the effectiveness of nucleoplasty, volumes between treatment and placebo groups were compared. RESULTS: The average preoperative nucleus volume was 0.994 ml (SD: 0.298 ml). In the nucleoplasty group (n = 21) volume was reduced by an average of 0.087 ml (SD: 0.110 ml) or 7.14%. In the placebo group (n = 21) volume was increased by an average of 0.075 ml (SD: 0.075 ml) or 8.94%. The average nucleoplasty-induced volume reduction was 0.162 ml (SD: 0.124 ml) or 16.08%. Volume reduction in lumbar discs was significant in favor of the nucleoplasty group (p<0.0001). CONCLUSIONS: Our study demonstrates that nucleoplasty has a volume-reducing effect on the lumbar nucleus pulposus in an animal model. Furthermore, we show the volume reduction to be a coblation effect of nucleoplasty in porcine discs.
Model Order Reductions for Stability Analysis of Islanded Microgrids With Droop Control
DEFF Research Database (Denmark)
Mariani, Valerio; Vasca, Francesco; Vásquez, Juan C.
2015-01-01
Three-phase inverters subject to droop control are widely used in islanded microgrids to interface distributed energy resources to the network and to properly share the loads among different units. In this paper, a mathematical model for islanded microgrids with linear loads and inverters under frequency and voltage droop control is proposed. The model is constructed by introducing a suitable state space transformation which allows the closed loop model to be written in an explicit state space form. Then, the singular perturbations technique is used to obtain reduced order models which reproduce the stability properties of the original closed loop model. The analysis shows that the currents dynamics influence the stability of the microgrid, particularly for high values of the frequency droop control parameters. It is also shown that a further reduction of the model order leads to the typical oscillator…
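The singular perturbations step used in the record above can be illustrated on a toy two-timescale system (a hedged sketch, not the paper's microgrid model): the fast state is slaved to the slow one, and setting the small parameter to zero yields the reduced slow model.

```python
# Singular perturbation reduction, toy illustration (not the paper's microgrid
# model): in  x' = -2x + z,  eps*z' = x - z,  the fast state z is slaved to x.
# Setting eps -> 0 gives the quasi-steady state z = x and the reduced slow
# model  x' = -x.

eps, dt, T = 1e-3, 1e-5, 2.0
steps = int(T / dt)

x, z = 1.0, 0.0
for _ in range(steps):                 # explicit Euler on the full stiff system
    x, z = x + dt * (-2 * x + z), z + dt * (x - z) / eps

xr = 1.0
for _ in range(steps):                 # reduced (slow) model after eps -> 0
    xr = xr + dt * (-xr)

print(f"full x(T)={x:.4f}  reduced x(T)={xr:.4f}")
```

Both trajectories end near exp(-2), showing that the reduced model reproduces the slow dynamics while avoiding the stiff fast equation entirely.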
Achebe, J U; Njeze, G E; Okwesili, O R
2014-01-01
Giant fibroadenoma (GFA) has been defined as a fibroadenoma greater than 5 cm in its widest diameter and/or weighing more than 500 g. Although a benign lesion, its size raises the possibility of malignancy, requiring differentiation from malignant breast disease. When unilateral GFA presents with severe breast asymmetry due to its size, it is not correctable by simple enucleation alone: postoperative asymmetry from volume and ptosis disparity results, which needs to be addressed at the primary surgery. The inverted "T" technique, which is effective in volume reduction and ptosis correction in breast hypertrophy, can be applied in the treatment of unilateral GFA. This is a retrospective review of all GFA treated by the inverted "T" method. A retrospective review was carried out on all patients with GFA treated by the inverted "T" skin pattern method over a period of 20 years (January 1988 to December 2007). The procedures were carried out at the University of Nigeria Teaching Hospital and the National Orthopedic Hospital, Enugu. Information, including patients' demographics, pre-operative assessment, operative findings and outcome of surgery, was obtained from the case files of the patients. The degree of ptosis was recorded for each patient. Diagnosis of GFA was made after clinical evaluation and pre-operative tissue biopsy. Immediate results of treatment were based on the patients' satisfaction, visual assessment of symmetry of breast size, correction of ptosis and position of the nipple areola complex (NAC). A total of 27 patients underwent the inverted "T" technique for excision of GFA in their breasts. Their average age was 17.5 years (range 12-25 years); delay in presentation ranged from 2 months to 15 months. In 16 patients (59.2%) the left breast was involved, whilst the tumor occurred in the right breast in 11 (40.7%). The tumor weighed on average 1500 g (range 655-2200 g). Average tumor diameter was 15 cm (range 12-20 cm). All quadrants of the…
Directory of Open Access Journals (Sweden)
Niaz Mohammad Jafari Chokan
2016-11-01
Anterior shoulder dislocation is the most common joint dislocation in the human body. Many methods have traditionally been described for reduction of shoulder dislocation. Most of these techniques are painful for patients and may be associated with further injury. An ideal method should be easy, effective, and less painful, should not be associated with iatrogenic complications, and should be easy to teach and learn. Among the different reduction methods, the external rotation and Milch methods are the most popular. Both are found to be atraumatic and relatively painless, and can be performed without anesthesia. In this article, we aimed to review the literature on these two reduction methods and to compare their success rates and outcomes. We searched PubMed and Google Scholar for articles related to reduction of anterior shoulder dislocations using either of the two techniques described above. In total, 46 articles were found; 17 of them, which mainly focused on anterior shoulder dislocation reduction by means of the two methods above, were included in this review. The results showed that both techniques were effective, safe, relatively painless, and well tolerated with no complications, but the external rotation method was superior.
Advanced techniques for modeling avian nest survival
Dinsmore, S.J.; White, Gary C.; Knopf, F.L.
2002-01-01
Estimation of avian nest survival has traditionally involved simple measures of apparent nest survival or Mayfield constant-nest-survival models. However, these methods do not allow researchers to build models that rigorously assess the importance of a wide range of biological factors that affect nest survival. Models that incorporate greater detail, such as temporal variation in nest survival and covariates representative of individual nests, represent a substantial improvement over traditional estimation methods. In an attempt to improve nest survival estimation procedures, we introduce the nest survival model now available in the program MARK and demonstrate its use on a nesting study of Mountain Plovers (Charadrius montanus Townsend) in Montana, USA. We modeled the daily survival of Mountain Plover nests as a function of the sex of the incubating adult, nest age, year, linear and quadratic time trends, and two weather covariates (maximum daily temperature and daily precipitation) during a six-year study (1995–2000). We found no evidence for yearly differences or an effect of maximum daily temperature on the daily nest survival of Mountain Plovers. Survival rates of nests tended by female and male plovers differed (female rate = 0.33; male rate = 0.49). The estimate of the additive effect for males on nest survival rate was 0.37 (95% confidence limits were 0.03, 0.71) on a logit scale. Daily survival rates of nests increased with nest age; the estimate of daily nest-age change in survival in the best model was 0.06 (95% confidence limits were 0.04, 0.09) on a logit scale. Daily precipitation decreased the probability that a nest would survive to the next day; the estimate of the additive effect of daily precipitation on the nest survival rate was −1.08 (95% confidence limits were −2.12, −0.13) on a logit scale. Our approach to modeling daily nest-survival rates allowed several biological factors of interest to be easily included in nest survival models.
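The MARK nest survival model reports effects on the logit scale, so a coefficient such as the −1.08 precipitation effect above translates into daily survival probabilities through the inverse logit. A small sketch follows; the baseline intercept is an assumed illustrative value, not a number from the study.

```python
import math

# Daily nest survival on the logit scale, as in program MARK's nest survival
# model. The precipitation effect (-1.08) is taken from the abstract; the
# intercept (baseline logit) is an assumed value for illustration only.

def inv_logit(eta):
    return 1.0 / (1.0 + math.exp(-eta))

beta0 = 3.0          # assumed baseline: daily survival ~0.95 on dry days
beta_precip = -1.08  # reported additive effect of daily precipitation (logit scale)

s_dry = inv_logit(beta0)
s_wet = inv_logit(beta0 + beta_precip)
print(f"daily survival: dry={s_dry:.3f}, wet={s_wet:.3f}")

# Survival over a 30-day exposure with, say, 5 rainy days multiplies daily rates:
nest_success = s_dry ** 25 * s_wet ** 5
print(f"probability nest survives 30 days: {nest_success:.3f}")
```

This multiplicative structure is exactly why small logit-scale effects matter: a modest drop in daily survival compounds over the full exposure period.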
Model measurements for new accelerating techniques
International Nuclear Information System (INIS)
Aronson, S.; Haseroth, H.; Knott, J.; Willis, W.
1988-06-01
We summarize the work carried out over the past two years concerning different ways of achieving high-field gradients, particularly in view of future linear lepton colliders. These studies and measurements on low-power models concern the switched power principle and multifrequency excitation of resonant cavities. 15 refs., 12 figs.
Navier slip model of drag reduction by Leidenfrost vapor layers
Berry, Joseph D.
2017-10-17
Recent experiments found that a hot solid sphere that is able to sustain a stable Leidenfrost vapor layer in a liquid exhibits significant drag reduction during free fall. The variation of the drag coefficient with Reynolds number deviates substantially from the characteristic drag crisis behavior at high Reynolds numbers. Measurements based on liquids of different viscosities show that the onset of the drag crisis depends on the viscosity ratio of the vapor to the liquid. Here we attempt to characterize the complexity of the Leidenfrost vapor layer with respect to its variable thickness and possible vapor circulation within, in terms of the Navier slip model that is defined by a slip length. Such a model can facilitate tangential flow and thereby alter the behavior of the boundary layer. Direct numerical and large eddy simulations of flow past a sphere at moderate to high Reynolds numbers (10^2 ≤ Re ≤ 4×10^4) are employed to quantify comparisons with experimental results, including the drag coefficient and the form of the downstream wake on the sphere. This provides a simple one-parameter characterization of the drag reduction phenomenon due to a stable vapor layer that envelops a solid body.
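The Navier slip model named in the title is the standard slip-length boundary condition on the tangential velocity at the wall; in the usual textbook form (with λ the slip length and n the wall normal):

```latex
u_t\big|_{\text{wall}} \;=\; \lambda\,\frac{\partial u_t}{\partial n}\bigg|_{\text{wall}},
\qquad
\lambda \to 0 \;\Rightarrow\; \text{no slip},
\qquad
\lambda \to \infty \;\Rightarrow\; \text{shear-free (free slip)}.
```

The simulations then vary λ as the single fitting parameter against the measured drag coefficient, which is the "simple one-parameter characterization" the abstract refers to.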
Navier slip model of drag reduction by Leidenfrost vapor layers
Berry, Joseph D.; Vakarelski, Ivan U.; Chan, Derek Y. C.; Thoroddsen, Sigurdur T.
2017-10-01
Recent experiments found that a hot solid sphere that is able to sustain a stable Leidenfrost vapor layer in a liquid exhibits significant drag reduction during free fall. The variation of the drag coefficient with Reynolds number deviates substantially from the characteristic drag crisis behavior at high Reynolds numbers. Measurements based on liquids of different viscosities show that the onset of the drag crisis depends on the viscosity ratio of the vapor to the liquid. Here we attempt to characterize the complexity of the Leidenfrost vapor layer with respect to its variable thickness and possible vapor circulation within, in terms of the Navier slip model that is defined by a slip length. Such a model can facilitate tangential flow and thereby alter the behavior of the boundary layer. Direct numerical and large eddy simulations of flow past a sphere at moderate to high Reynolds numbers (10^2 ≤ Re ≤ 4×10^4) are employed to quantify comparisons with experimental results, including the drag coefficient and the form of the downstream wake on the sphere. This provides a simple one-parameter characterization of the drag reduction phenomenon due to a stable vapor layer that envelops a solid body.
Model Reduction and Coarse-Graining Approaches for Multiscale Phenomena
Gorban, Alexander N; Theodoropoulos, Constantinos; Kazantzis, Nikolaos K; Öttinger, Hans Christian
2006-01-01
Model reduction and coarse-graining are important in many areas of science and engineering. How does a system with many degrees of freedom become one with fewer? How can a reversible micro-description be adapted to the dissipative macroscopic model? These crucial questions, as well as many other related problems, are discussed in this book. Specific areas of study include dynamical systems, non-equilibrium statistical mechanics, kinetic theory, hydrodynamics and mechanics of continuous media, (bio)chemical kinetics, nonlinear dynamics, nonlinear control, nonlinear estimation, and particulate systems from various branches of engineering. The generic nature and the power of the pertinent conceptual, analytical and computational frameworks helps eliminate some of the traditional language barriers, which often unnecessarily impede scientific progress and the interaction of researchers between disciplines such as physics, chemistry, biology, applied mathematics and engineering. All contributions are authored by ex...
A Regression Algorithm for Model Reduction of Large-Scale Multi-Dimensional Problems
Rasekh, Ehsan
2011-11-01
Model reduction is an approach for fast and cost-efficient modelling of large-scale systems governed by Ordinary Differential Equations (ODEs). Multi-dimensional model reduction has been suggested for reducing linear systems simultaneously with respect to frequency and any other parameter of interest. Multi-dimensional model reduction is also used to reduce weakly nonlinear systems based on Volterra theory. Multiple dimensions degrade the efficiency of reduction by increasing the size of the projection matrix. In this paper a new methodology is proposed to efficiently build the reduced model based on regression analysis. A numerical example confirms the validity of the proposed regression algorithm for model reduction.
Kulish-Sklyanin-type models: Integrability and reductions
Gerdjikov, V. S.
2017-08-01
We start with a Riemann-Hilbert problem (RHP) related to BD.I-type symmetric spaces SO(2r+1)/S(O(2r−2s+1) ⊗ O(2s)), s ≥ 1. We consider two RHPs: the first is formulated on the real axis R in the complex-λ plane; the second, on R ⊗ iR. The first RHP for s = 1 allows solving the Kulish-Sklyanin (KS) model; the second RHP is related to a new type of KS model. We consider an important example of nontrivial deep reductions of the KS model and show its effect on the scattering matrix. In particular, we obtain new two-component nonlinear Schrödinger equations. Finally, using the Wronski relations, we show that the inverse scattering method for KS models can be understood as generalized Fourier transforms. We thus find a way to characterize all the fundamental properties of KS models, including the hierarchy of equations and the hierarchy of their Hamiltonian structures.
Directory of Open Access Journals (Sweden)
Amin Asadi
2017-10-01
Purpose: To study the benefits of the Directional Bremsstrahlung Splitting (DBS) dose variance reduction technique in the BEAMnrc Monte Carlo (MC) code for the Oncor® linac at 6 MV and 18 MV energies. Materials and Method: An MC model of the Oncor® linac was built using the BEAMnrc MC code and verified against measured data for 6 MV and 18 MV energies at various field sizes. The Oncor® machine was then modeled running the DBS technique, and the efficiency of total fluence and spatial fluence for electrons and photons, and the efficiency of dose variance reduction of MC calculations for PDD on the central beam axis and for the lateral dose profile across the nominal field, were measured and compared. Result: With the DBS technique applied, the total fluence of electrons and photons increased in turn by 626.8 (6 MV) and 983.4 (6 MV), and 285.6 (18 MV) and 737.8 (18 MV); the spatial fluence of electrons and photons improved in turn by 308.6±1.35% (6 MV) and 480.38±0.43% (6 MV), and 153±0.9% (18 MV) and 462.6±0.27% (18 MV). Moreover, by running the DBS technique, the efficiency of dose variance reduction for PDD MC dose calculations before and after the dose maximum point was enhanced by 187.8±0.68% (6 MV) and 184.6±0.65% (6 MV), and 156±0.43% (18 MV) and 153±0.37% (18 MV), respectively, and the efficiency of MC calculations for the lateral dose profile on the central beam axis and across the treatment field rose in turn by 197±0.66% (6 MV) and 214.6±0.73% (6 MV), and 175±0.36% (18 MV) and 181.4±0.45% (18 MV). Conclusion: Applying the DBS dose variance reduction technique when modeling the Oncor® linac with the BEAMnrc MC code markedly improved the electron and photon fluence, and therefore enhanced the efficiency of dose variance reduction for MC calculations. As a result, running DBS in other kinds of MC simulation codes may be beneficial in reducing the uncertainty of MC calculations.
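Bremsstrahlung splitting is a particle-splitting variance reduction game. The schematic sketch below shows only the weight bookkeeping; the directional test, splitting number, and Russian roulette rule are simplified placeholders, not BEAMnrc's actual DBS implementation.

```python
import random

# Directional particle splitting, schematic sketch (not BEAMnrc's actual DBS
# implementation): a bremsstrahlung photon aimed at the field of interest is
# split into NSPLIT low-weight copies; a photon aimed elsewhere plays Russian
# roulette. Total statistical weight is conserved on average, while far more
# photons reach the region of interest, lowering the variance there.

random.seed(1)
NSPLIT = 100

def split_photon(weight, aimed_at_field):
    """Return the list of photon weights after the DBS-style splitting game."""
    if aimed_at_field:
        # split: many copies, each carrying 1/NSPLIT of the weight
        return [weight / NSPLIT] * NSPLIT
    # Russian roulette: survive with prob 1/NSPLIT, weight boosted by NSPLIT
    return [weight * NSPLIT] if random.random() < 1.0 / NSPLIT else []

# Weight conservation check over many histories:
total_in, total_out = 0.0, 0.0
for _ in range(200_000):
    aimed = random.random() < 0.3        # assumed fraction aimed at the field
    total_in += 1.0
    total_out += sum(split_photon(1.0, aimed))

print(f"mean weight out per history: {total_out / total_in:.3f}")
```

The mean output weight stays near 1.0, which is the unbiasedness property that makes such splitting a legitimate variance reduction technique rather than an approximation.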
Implementation of linguistic models by holographic technique
Pavlov, Alexander V.; Shevchenko, Yanina Y.
2004-01-01
In this paper we consider a linguistic model as an algebraic model and restrict our consideration to semantics only. The concept allows a "natural-like" language to be used by a human teacher to describe for the machine the way of solving a problem, based on the human's knowledge and experience. Imprecise words such as "big", "very big", "not very big", etc. can be used to represent human knowledge. Technically, the problem is to match the metric scale used by the technical device with the linguistic scale intuitively formed by the person. We develop an algebraic description of a 4-f Fourier-holography setup using a triangular-norms-based approach. In the model we use the Fourier duality of the t-norms and t-conorms, which is implemented by the 4-f Fourier-holography setup. We demonstrate that the setup is described adequately by De Morgan's law for involution. Fourier duality of the t-norms and t-conorms leads to fuzzy-valued logic. We consider a General Modus Ponens rule implementation to define the semantic operators that are adequate to the setup. We consider scales formed in both the +1 and −1 orders of diffraction. We use representation of linguistic labels by fuzzy numbers to form the scale and discuss the dependence of the scale grading on the holographic recording medium operator. To implement reasoning with a multi-parametric input variable we use a Lorentz function to approximate linguistic labels. We use an example of medical diagnostics as an experimental illustration of reasoning on the linguistic scale.
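The duality between t-norms and t-conorms invoked above rests on De Morgan's law N(T(a,b)) = S(N(a),N(b)) with the standard negation N(a) = 1 − a. A minimal numerical check for two classic dual pairs:

```python
# De Morgan duality of t-norms and t-conorms, N(T(a,b)) = S(N(a), N(b)) with
# the standard negation N(a) = 1 - a, checked for two classic dual pairs.

t_min  = lambda a, b: min(a, b)          # Godel (minimum) t-norm
s_max  = lambda a, b: max(a, b)          # its dual t-conorm
t_prod = lambda a, b: a * b              # product t-norm
s_psum = lambda a, b: a + b - a * b      # probabilistic sum (its dual)

neg = lambda a: 1.0 - a

grid = [i / 10 for i in range(11)]       # membership values 0.0 .. 1.0
for a in grid:
    for b in grid:
        assert abs(neg(t_min(a, b)) - s_max(neg(a), neg(b))) < 1e-12
        assert abs(neg(t_prod(a, b)) - s_psum(neg(a), neg(b))) < 1e-12
print("De Morgan duality holds on the grid for both t-norm/t-conorm pairs")
```

For the product pair the identity is algebraic: (1−a) + (1−b) − (1−a)(1−b) = 1 − ab, which is exactly the duality the holographic setup is claimed to implement optically.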
Metamaterials modelling, fabrication, and characterisation techniques
DEFF Research Database (Denmark)
Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei
2012-01-01
Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refractive index are also parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, we will present our approach for determining the field enhancement in slits…
Metamaterials modelling, fabrication and characterisation techniques
DEFF Research Database (Denmark)
Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei
Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refractive index are also parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, various approaches for determining the value of the refractive index…
Ma, Haoning; Dong, Liang; Liu, Chuyin; Yi, Ping; Yang, Feng; Tang, Xiangsheng; Tan, Mingsheng
2016-01-01
One-stage anterior release and posterior reduction is one of the most effective methods for irreducible atlantoaxial dislocation. However, the criteria for appropriate tissue release for successful posterior reduction have yet to be confirmed. Hence, an assistant technique using the transoral approach to verify satisfactory release is required. The aim was to evaluate the efficacy of the modified technique of transoral release for irreducible atlantoaxial dislocation (IAAD) in patients who underwent one-stage anterior release and posterior reduction. Between January 2009 and June 2014, 23 consecutive patients diagnosed with IAAD, free from bony union between the C1-C2 facet joints on reconstructive computed tomography scan, underwent one-stage anterior release and posterior reduction after no response to 2 weeks of skull traction. During transoral release, an elevator was used repeatedly as a lever to confirm a 3-5 mm bilateral joint space between the lateral masses of the atlas and axis. The release was considered accomplished once a 3-5 mm joint space was achieved. After anterior release, posterior reduction and instrumented fusion were subsequently performed. All patients were followed for an average of 18 (range 6-50) months. Nineteen of 23 patients achieved complete reduction while four had an incomplete reduction. Significant differences in pre- and postoperative JOA scores and cervicomedullary angle (CMA) were found. The twenty-one patients presenting with myelopathy had a JOA score of 12.9 at final follow-up, improved from 7.8 before surgery. The mean CMA improved to 143.5° postoperatively from 101.8° preoperatively. Bony fusion was confirmed in all cases under radiologic assessment during follow-up; there were no instrument failures. The modified technique of transoral release provides appropriate criteria for anterior release, achieving good posterior reduction without excessive tissue release or intraspinal manipulation, proving its value as an assistant technique in one…
Xie, Yujing; Zhao, Laijun; Xue, Jian; Hu, Qingmi; Xu, Xiang; Wang, Hongbo
2016-12-15
How to effectively control severe regional air pollution has recently become a focus of global concern. The non-cooperative reduction model (NCRM) is still the main air pollution control pattern in China, but it is both ineffective and costly, because each province must independently fight air pollution. Thus, we proposed a cooperative reduction model (CRM), with the goal of maximizing the reduction in adverse health effects (AHEs) at the lowest cost by encouraging neighboring areas to jointly control air pollution. CRM has two parts: a model of optimal pollutant removal rates using two optimization objectives (maximizing the reduction in AHEs and minimizing pollutant reduction cost) while meeting the regional pollution control targets set by the central government, and a model that allocates the cooperation benefits (i.e., health improvement and cost reduction) among the participants according to their contributions using the Shapley value method. We applied CRM to the case of sulfur dioxide (SO2) reduction in the Yangtze River Delta region. Based on data from 2003 to 2013, and using mortality due to respiratory and cardiovascular diseases as the health endpoints, CRM saves 437 more lives than NCRM, amounting to 12.1% of the reduction under NCRM. CRM also reduced costs by US$65.8×10^6 compared with NCRM, which is 5.2% of the total cost of NCRM. Thus, CRM performs significantly better than NCRM. Each province obtains significant benefits from cooperation, which can motivate them to actively cooperate in the long term. A sensitivity analysis was performed to quantify the effects of parameter values on the cooperation benefits. The results showed that the CRM is not sensitive to changes in each province's pollutant carrying capacity and minimum pollutant removal capacity, but is sensitive to the maximum pollutant reduction capacity. Moreover, higher cooperation benefits will be generated when a province's maximum pollutant reduction capacity increases.
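The Shapley-value allocation step mentioned above can be sketched generically. The coalition values below are hypothetical placeholders (the abstract reports only the grand-coalition total of 437 lives saved); the province labels are illustrative too.

```python
import math
from itertools import permutations

# Shapley-value allocation of cooperation benefits, generic sketch. The
# characteristic function below (lives saved by each coalition of provinces)
# is hypothetical; the abstract only reports the grand-coalition total (437).

def shapley(players, v):
    """Average marginal contribution of each player over all join orders."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_fact = math.factorial(len(players))
    return {p: x / n_fact for p, x in phi.items()}

savings = {                       # hypothetical coalition values (lives saved)
    frozenset(): 0,
    frozenset({"JS"}): 50, frozenset({"ZJ"}): 40, frozenset({"SH"}): 30,
    frozenset({"JS", "ZJ"}): 180, frozenset({"JS", "SH"}): 160,
    frozenset({"ZJ", "SH"}): 150, frozenset({"JS", "ZJ", "SH"}): 437,
}
phi = shapley(["JS", "ZJ", "SH"], lambda s: savings[frozenset(s)])
print(phi)
```

By the efficiency property, the allocations always sum to the grand-coalition value, so the full cooperation benefit is distributed with nothing left over — the property that makes the allocation acceptable to all participants.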
Modeling of detective quantum efficiency considering scatter-reduction devices
Energy Technology Data Exchange (ETDEWEB)
Park, Ji Woong; Kim, Dong Woon; Kim, Ho Kyung [Pusan National University, Busan (Korea, Republic of)
2016-05-15
The reduction of signal-to-noise ratio (SNR) cannot be restored and has thus become a severe issue in digital mammography.1 Therefore, antiscatter grids are typically used in mammography. The scatter-cleanup performance of various scatter-reduction devices, such as air gaps,2 linear (1D) or cellular (2D) grids,3,4 and slot-scanning devices,5 has been extensively investigated by many research groups. At present, a digital mammography system with slot-scanning geometry is also commercially available.6 In this study, we theoretically investigate the effect of scattered photons on the detective quantum efficiency (DQE) performance of digital mammography detectors by using the cascaded-systems analysis (CSA) approach. We show a simple DQE formalism describing digital mammography detector systems equipped with scatter-reduction devices by regarding the scattered photons as additive noise sources. The low-frequency drop (LFD) increased with increasing PMMA thickness, and the amount of LFD indicated the corresponding scatter fraction (SF). The estimated SFs were 0.13, 0.21, and 0.29 for PMMA thicknesses of 10, 20, and 30 mm, respectively. While the solid line describing the measured MTF for 0 mm PMMA was the result of a least-squares regression fit using Eq. (14), the other lines simply resulted from multiplying that fit by the (1−SF) estimated from the LFDs in the measured MTFs. The measured spectral noise-power densities over the entire frequency range did not change much with increasing scatter. On the other hand, the calculation results showed that the spectral noise-power densities increased with increasing scatter. This discrepancy may be explained by the fact that the model developed in this study does not account for changes in x-ray interaction parameters for varying spectral shapes due to beam hardening with increasing PMMA thickness.
Integration efficiency for model reduction in micro-mechanical analyses
van Tuijl, Rody A.; Remmers, Joris J. C.; Geers, Marc G. D.
2017-11-01
Micro-structural analyses are an important tool to understand material behavior on a macroscopic scale. The analysis of a microstructure is usually computationally very demanding, and several reduced order modeling techniques are available in the literature to limit the computational cost of repetitive analyses of a single representative volume element. These techniques for speeding up the integration at the micro-scale can be roughly divided into two classes: methods interpolating the integrand, and cubature methods. The empirical interpolation method (high-performance reduced order modeling) and the empirical cubature method are assessed in terms of their accuracy in approximating the full-order result. A micro-structural volume element is therefore considered, subjected to four load cases, including cyclic and path-dependent loading. The differences in approximating the micro- and macroscopic quantities of interest, e.g. micro-fluctuations and stresses, are highlighted. Algorithmic speed-ups for both methods with respect to the full-order micro-structural model are quantified. The pros and cons of both classes are thereby clearly identified.
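The empirical cubature idea (select a few integration points whose weights reproduce the integrals of a set of snapshot integrands) can be sketched as follows. Note this is a simplified stand-in: the published empirical cubature method enforces nonnegative weights via NNLS, whereas this sketch uses plain least squares with a greedy point selection, and the integrands are toy functions.

```python
import numpy as np

# Empirical cubature, simplified sketch: pick a small subset of integration
# points whose least-squares weights reproduce the integrals of snapshot
# integrands. The published ECM enforces nonnegative weights via NNLS; plain
# least squares and toy integrands are used here to keep the sketch short.

m = 101
x = np.linspace(0.0, 1.0, m)
w_full = np.full(m, 1.0 / (m - 1)); w_full[[0, -1]] *= 0.5   # trapezoid weights

F = np.column_stack([np.ones(m), x, x**2, np.sin(x)])  # snapshot integrands
b = F.T @ w_full                                       # full-order integrals

sel, r = [], b.copy()
for _ in range(F.shape[1]):          # greedy: pick point best aligned with residual
    scores = np.abs(F @ r)
    scores[sel] = -np.inf            # never pick the same point twice
    sel.append(int(np.argmax(scores)))
    w_sel, *_ = np.linalg.lstsq(F[sel].T, b, rcond=None)
    r = b - F[sel].T @ w_sel

print(f"selected {len(sel)} of {m} points, residual {np.linalg.norm(r):.2e}")
```

With as many well-chosen points as snapshot integrands, the reduced rule reproduces the full quadrature exactly on the snapshot set, which is the speed-up mechanism: subsequent integrations only evaluate the integrand at the selected points.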
CT of metal implants: reduction of artifacts using an extended CT scale technique.
Link, T M; Berning, W; Scherf, S; Joosten, U; Joist, A; Engelke, K; Daldrup-Link, H E
2000-01-01
The purpose of this work was to use an extended CT scale technique (ECTS) to reduce artifacts due to metal implants and to optimize CT imaging parameters for metal implants using an experimental model. Osteotomies were performed in 20 porcine femur specimens. One hundred cobalt-base screws and 24 steel plates were used for osteosynthesis in these specimens. Artificial lesions were produced in 50 screws, such as osteolysis near the screws (mimicking lysis due to infection, tumor, or loosening), displacement of the screws, and fractures of the screws. All specimens were examined using eight different CT protocols: four conventional (CCT) and four spiral (SCT) CT protocols with different milliampere-second values (130 and 480 mAs for CCT, 130 and 300 mAs for SCT), kilovolt potentials (120 and 140 kVp), and slice thicknesses (2 and 5 mm). The images were analyzed by three observers using a standard window (maximum window width 4,000 HU) and ECTS (maximum window width 40,000 HU). Receiver operating characteristic analysis was performed, and image quality was assessed on a five-level scale. Metal artifacts were significantly reduced using ECTS (p < 0.05). ECTS improved imaging of metal implants. In this study, no significant effects of exposure dose and kilovolt potential were noted. Metal artifacts were more prominent using SCT than using CCT.
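The practical effect of the extended CT scale is a much wider display window: metal sits far above the range a standard ~4,000 HU window can show without clipping. A minimal window/level mapping sketch (the HU values, including the metal value, are illustrative):

```python
import numpy as np

# CT display windowing, minimal sketch: map Hounsfield units to 8-bit display
# values for a given window center/width. With a standard 4,000 HU wide
# window, metal (here ~10,000 HU, an illustrative value) saturates to pure
# white; an extended 40,000 HU window keeps it on the gray scale.

def window_hu(hu, center, width):
    lo, hi = center - width / 2, center + width / 2
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0) * 255.0

hu = np.array([-1000.0, 0.0, 1000.0, 10000.0])   # air, water, bone-ish, metal

standard = window_hu(hu, center=0, width=4000)
extended = window_hu(hu, center=0, width=40000)

print("standard window:", standard)   # metal clipped to pure white (255)
print("extended window:", extended)   # metal remains below saturation
```

The trade-off is contrast: the extended window compresses soft-tissue differences, which is why it is used specifically for assessing the implants themselves.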
Restrictions in Model Reduction for Polymer Chain Models in Dissipative Particle Dynamics
Moreno Chaparro, Nicolas
2014-06-06
We model high-molecular-weight homopolymers in semidilute concentration via Dissipative Particle Dynamics (DPD). We show that in model reduction methodologies for polymers it is not enough to preserve system properties (i.e., density ρ, pressure p, temperature T, radial distribution function g(r)); preserving the characteristic shape and length scale of the polymer chain model is also necessary. In this work we apply a recently proposed DPD model-reduction methodology for linear polymers, and demonstrate why its applicability is limited up to a certain maximum polymer length and why it is not suitable for solvent coarse graining.
Liang, Ke; Sun, Qin; Liu, Xiaoran
2018-05-01
The theoretical buckling load of a perfect cylinder must be reduced by a knock-down factor to account for structural imperfections. The EU project DESICOS proposed a new robust design for imperfection-sensitive composite cylindrical shells using a combination of deterministic and stochastic simulations; however, the high computational complexity seriously limits its wider application in aerospace structure design. In this paper, a nonlinearity reduction technique and the polynomial chaos method are implemented in the robust design process to significantly lower computational costs. The modified Newton-type Koiter-Newton approach, which largely reduces the number of degrees of freedom in the nonlinear finite element model, serves as the nonlinear buckling solver to trace the equilibrium paths of geometrically nonlinear structures efficiently. The non-intrusive polynomial chaos method provides the buckling load with an approximate chaos response surface with respect to imperfections, using buckling solver codes as black boxes. A fast large-sample study can then be applied to the approximate chaos response surface to obtain the probability characteristics of the buckling loads. The performance of the method in terms of reliability, accuracy and computational effort is demonstrated with an unstiffened CFRP cylinder.
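Non-intrusive polynomial chaos treats the buckling solver as a black box sampled at random imperfection amplitudes. A one-dimensional Hermite-chaos sketch follows; the toy response function, sample sizes, and degree are assumptions for illustration, standing in for the Koiter-Newton solver.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Non-intrusive polynomial chaos, one-dimensional sketch: a black-box
# "buckling solver" is evaluated at sampled imperfection amplitudes
# xi ~ N(0,1), and a response surface in probabilists' Hermite polynomials
# He_k is fit by least squares. The toy solver below is purely illustrative.

rng = np.random.default_rng(42)

def buckling_load(xi):
    """Toy knock-down response: smooth decrease with imperfection amplitude."""
    return 1.0 - 0.3 * np.tanh(xi)          # hypothetical, for illustration

xi = rng.standard_normal(500)               # black-box sample evaluations
y = buckling_load(xi)

deg = 5
V = He.hermevander(xi, deg)                 # design matrix [He_0 ... He_5]
coef, *_ = np.linalg.lstsq(V, y, rcond=None)

# Fast large-sample study on the cheap response surface:
xi_big = rng.standard_normal(200_000)
y_big = He.hermeval(xi_big, coef)
print(f"mean load ~ {y_big.mean():.4f}, PCE mean coefficient = {coef[0]:.4f}")
```

Because the Hermite polynomials are orthonormal under the standard normal measure, the zeroth coefficient directly estimates the mean buckling load, and the large-sample study costs only polynomial evaluations instead of finite element solves.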
Optimization using surrogate models - by the space mapping technique
DEFF Research Database (Denmark)
Søndergaard, Jacob
2003-01-01
Surrogate modelling and optimization techniques are intended for engineering design in the case where an expensive physical model is involved. This thesis provides a literature overview of the field of surrogate modelling and optimization. The space mapping technique is one such method for constructing… …conditions are satisfied. So hybrid methods, combining the space mapping technique with classical optimization methods, should be used if convergence to high accuracy is wanted. Approximation abilities of the space mapping surrogate are compared with those of a Taylor model of the expensive model. The space mapping surrogate has a lower approximation error for long steps. For short steps, however, the Taylor model of the expensive model is best, due to exact interpolation at the model origin. Five algorithms for space mapping optimization are presented and the numerical performance is evaluated. Three…
Thermodynamic and kinetic modelling of the reduction of concentrated nitric acid
International Nuclear Information System (INIS)
Sicsic, David
2011-01-01
This research thesis aimed at determining and quantifying the different stages of the reduction mechanism in the case of concentrated nitric acid. After reporting the results of a bibliographical study on the chemical and electrochemical behaviour of concentrated nitric media (generalities, chemical equilibria, NOx reactivity, electrochemical reduction of nitric acid), the author reports the development and discusses the results of a thermodynamic simulation of a nitric environment at 25 °C. This allowed the main species to be identified in the liquid and gaseous phases of nitric acid solutions. The author reports an experimental electrochemical investigation coupled with analytic techniques (infrared and UV-visible spectroscopy), shows that the reduction process depends on the cathodic overvoltage, and identifies three potential areas. A kinetic modelling of the stationary state and of the impedance is then developed in order to better determine, discuss and quantify the reduction process. The application of this kinetic model to the preliminary results of an electrochemical study performed on 304 L steel is then discussed [fr]
DEFF Research Database (Denmark)
Chambon, Julie Claire Claudia; Bjerg, Poul Løgstrup; Scheutz, Charlotte
2013-01-01
Reductive dechlorination is a major degradation pathway of chlorinated ethenes in anaerobic subsurface environments, and reactive kinetic models describing the degradation process are needed in fate and transport models of these contaminants. However, reductive dechlorination is a complex biologi...
Kantowski-Sachs multidimensional cosmological models and dynamical dimensional reduction
International Nuclear Information System (INIS)
Demianski, M.; Rome Univ.; Golda, Z.A.; Heller, M.; Szydlowski, M.
1988-01-01
Einstein's field equations are solved for a multidimensional spacetime (KS) × T^m, where (KS) is a four-dimensional Kantowski-Sachs spacetime and T^m is an m-dimensional torus. Among all possible vacuum solutions there is a large class of spacetimes in which the macroscopic space expands and the microscopic space contracts to a finite volume. We also consider a non-vacuum case and we explicitly solve the field equations for matter satisfying the Zel'dovich equation of state. In non-vacuum models, with matter satisfying an equation of state p = γρ, 0 ≤ γ < 1, at a sufficiently late stage of evolution the microspace always expands and the dynamical dimensional reduction does not occur. (author)
Flood Water Crossing: Laboratory Model Investigations for Water Velocity Reductions
Directory of Open Access Journals (Sweden)
Kasnon N.
2014-01-01
Full Text Available The occurrence of floods may negatively affect road traffic, making it difficult to mobilize traffic and damaging vehicles, which then become stuck and trigger further traffic problems. High water flow velocities occur when no objects are present on the road surface capable of diffusing the flow. The shape, orientation and size of an object placed beside the road as a diffuser are important for effective attenuation of the water flow. To investigate the water flow, a laboratory experiment was set up and models were constructed to study the reduction in flow velocity. The water velocity before and after passing through the diffuser objects was investigated. This paper focuses on laboratory experiments that use sensors to determine the flow velocity of the water before and after it passes through the two best diffuser objects chosen from a previous flow-pattern experiment.
Role of sedimentary organic matter in bacterial sulfate reduction: the G model tested
International Nuclear Information System (INIS)
Westrich, J.T.; Berner, R.A.
1984-01-01
Laboratory study of the bacterial decomposition of Long Island Sound plankton in oxygenated seawater over a period of 2 years shows that the organic material undergoes decomposition via first-order kinetics and can be divided into two decomposable fractions, of considerably different reactivity, and a nonmetabolized fraction. This planktonic material, after undergoing varying degrees of oxic degradation, was added in the laboratory to anoxic sediment taken from a depth of 1 m at the NWC site of Long Island Sound and the rate of bacterial sulfate reduction in the sediment measured by the ³⁵S radiotracer technique. The stimulated rate of sulfate reduction was in direct proportion to the amount of planktonic carbon added. This provides direct confirmation of the first-order decomposition, or G model, for marine sediments and proves that the in situ rate of sulfate reduction is organic-matter limited. Slower sulfate reduction rates resulted when oxically degraded plankton rather than fresh plankton was added, and the results confirm the presence of the same two fractions of organic matter deduced from the oxic degradation studies. Near-surface Long Island Sound sediment, which already contains abundant readily decomposable organic matter, was also subjected to anoxic decomposition by bacterial sulfate reduction. The decrease in sulfate reduction rate with time parallels decreases in the amount of organic matter, and these results also indicate the presence of two fractions of organic carbon of distinctly different reactivity. From plots of the log of reduction rate vs. time, two first-order rate constants were obtained that agree well with those derived from the plankton addition experiment. Together, the two experiments confirm the use of a simple multi-first-order rate law for organic matter decomposition in marine sediments.
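The two-fraction first-order ("multi-G") kinetics described above can be sketched numerically; the rate constants and fraction sizes below are illustrative placeholders, not the paper's fitted values:

```python
import numpy as np

def multi_g_model(t, g1_0, g2_0, g_nr, k1, k2):
    """Multi-G model: total organic carbon remaining at time t.

    Two decomposable fractions decay by first-order kinetics with rate
    constants k1 >> k2; g_nr is the non-metabolized fraction.
    (Illustrative parameterization only.)
    """
    g1 = g1_0 * np.exp(-k1 * t)
    g2 = g2_0 * np.exp(-k2 * t)
    return g1 + g2 + g_nr

def sulfate_reduction_rate(t, g1_0, g2_0, k1, k2):
    """Rate of carbon oxidation, proportional to the sulfate reduction rate."""
    return k1 * g1_0 * np.exp(-k1 * t) + k2 * g2_0 * np.exp(-k2 * t)

t = np.linspace(0.0, 2.0, 5)   # time in years
print(multi_g_model(t, 0.5, 0.3, 0.2, 8.0, 1.0))
print(sulfate_reduction_rate(t, 0.5, 0.3, 8.0, 1.0))
```

On a log plot, the rate curve shows two straight segments whose slopes recover k1 and k2, which is exactly the graphical procedure the abstract describes.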
Karampalis, Christos; Grevitt, Michael; Shafafy, Masood; Webb, John
2012-05-01
To report the results of a cohort of patients treated with this technique, highlighting radiological and functional outcomes, and to discuss the benefits of a gradual reduction procedure compared with other techniques. We evaluated nine patients who underwent high-grade listhesis reduction and circumferential fusion at our institution from 1988 to 2006. The average length of follow-up was 11 years (range 5-19). Functional outcomes and radiological measurements were recorded and reported. Slip magnitude was reduced by an average of 2.9 grades (Meyerding classification). The slip angle improved by an average of 66% (p = 0.0001), the lumbosacral angle by 47% (p = 0.0002), sacral rotation by 51% (p = 0.0068) and sacral inclination by 47% (p = 0.0055). At the latest follow-up, 88.9% had achieved solid fusion. The post-operative 10-point Visual Analogue Score (VAS) for back pain had improved by 70% (p …). The average postoperative Oswestry Disability Index for all patients was 8% (range 0-16%), and the average Low Back Outcome Score was 56.6 (range 44-70). All components of the Short Form 36 Health Survey were greater than 80%. Patients' expectations were met in 100% of cases. This is an effective and safe technique that addresses the lumbosacral kyphosis and cosmetic deformity without the neurological complications that accompany other reduction and fusion techniques for high-grade spondylolisthesis.
Dimensionality reduction method based on a tensor model
Yan, Ronghua; Peng, Jinye; Ma, Dongmei; Wen, Desheng
2017-04-01
Dimensionality reduction is a preprocessing step for hyperspectral image (HSI) classification. Principal component analysis reduces the spectral dimension but does not utilize the spatial information of an HSI. Both spatial and spectral information are used when an HSI is modeled as a tensor; that is, the noise in the spatial dimensions is decreased and the spectral dimension is reduced simultaneously. However, this model does not consider factors affecting the spectral signatures of ground objects, which makes further improvement of classification very difficult. The authors propose that the spectral signatures of ground objects are the composite result of multiple factors, such as illumination, mixture, atmospheric scattering, radiation, and so on. Because these factors are very difficult to distinguish, they are synthesized as within-class factors. Within-class factors, class factors, and pixels are selected to model a third-order tensor. Experimental results indicate that the classification accuracy of the new method is higher than that of previous methods.
An experimental comparison of modelling techniques for speaker ...
Indian Academy of Sciences (India)
Feature extraction involves extracting speaker-specific features from the speech signal at reduced data rate. The extracted features are further combined using modelling techniques to generate speaker models. The speaker models are then tested using the features extracted from the test speech signal. The improvement in ...
Fourierdimredn: Fourier dimensionality reduction model for interferometric imaging
Kartik, S. Vijay; Carrillo, Rafael; Thiran, Jean-Philippe; Wiaux, Yves
2016-10-01
Fourierdimredn (Fourier dimensionality reduction) implements Fourier-based dimensionality reduction of interferometric data. Written in Matlab, it derives the theoretically optimal dimensionality reduction operator from a singular value decomposition perspective of the measurement operator. Fourierdimredn ensures a fast implementation of the full measurement operator and also preserves the i.i.d. Gaussian properties of the original measurement noise.
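The SVD-based reduction described above can be sketched generically; the toy Gaussian measurement operator and its dimensions below are assumptions for illustration, not Fourierdimredn's interferometric operator, and the sketch is in Python rather than the package's Matlab:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear measurement operator (stand-in for an interferometric
# measurement operator; dimensions are illustrative only).
m, n, k = 200, 50, 20
Phi = rng.standard_normal((m, n))

# SVD of the measurement operator: Phi = U @ diag(s) @ Vt.
U, s, Vt = np.linalg.svd(Phi, full_matrices=False)

# Reduction operator: project the data onto the k leading left singular
# vectors. Because the rows of R are orthonormal, i.i.d. Gaussian
# measurement noise stays i.i.d. Gaussian after reduction.
R = U[:, :k].T                 # shape (k, m): maps m data points to k

x = rng.standard_normal(n)     # unknown signal
y = Phi @ x                    # full measurement vector, length m
y_red = R @ y                  # reduced data, length k

print(y_red.shape)             # (20,)
```

The orthonormal-row property of `R` is what preserves the noise statistics, which is the point the abstract emphasizes.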
Directory of Open Access Journals (Sweden)
Vikram Gupta
2016-09-01
Full Text Available Finite element analysis of the failed slope of the Surabhi Resort landslide, located in Mussoorie township, Garhwal Himalaya, has been carried out using the shear strength reduction technique. Two slope models, viz. debris and rock mass, were considered in this study and analysed for possible future slope failure. The critical strength reduction factor (SRF) for the failed slope is observed to be 0.28 for the debris model and 0.83 for the rock mass model. The low SRF value of the slope reveals significant progressive displacement in the zone of detachment. This is also evidenced by cracks in the building of Surabhi Resort and the presence of subsidence zones in the Mussoorie International School. These results are consistent with studies carried out by other workers using different approaches.
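The shear strength reduction idea, reducing the cohesion c and tan φ by a common factor until the slope reaches limit equilibrium, can be illustrated on a textbook dry infinite-slope model; the study's finite element analysis is replaced here by a closed-form factor of safety, and all parameter values are hypothetical, not those of the Surabhi Resort slope:

```python
import math

def infinite_slope_fs(c, phi_deg, gamma, z, beta_deg):
    """Factor of safety of a dry infinite slope (textbook formula)."""
    beta = math.radians(beta_deg)
    phi = math.radians(phi_deg)
    tau = gamma * z * math.sin(beta) * math.cos(beta)     # driving stress
    sigma_n = gamma * z * math.cos(beta) ** 2             # normal stress
    return (c + sigma_n * math.tan(phi)) / tau

def critical_srf(c, phi_deg, gamma, z, beta_deg, lo=0.01, hi=10.0):
    """Find the factor F such that the slope with reduced strength
    (c/F, arctan(tan(phi)/F)) is exactly at limit equilibrium (FS = 1)."""
    def fs_reduced(F):
        phi_red = math.degrees(math.atan(math.tan(math.radians(phi_deg)) / F))
        return infinite_slope_fs(c / F, phi_red, gamma, z, beta_deg)
    for _ in range(60):                 # bisection on FS(F) = 1
        mid = 0.5 * (lo + hi)
        if fs_reduced(mid) > 1.0:
            lo = mid                    # still stable: reduce strength more
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Illustrative debris-like parameters (kPa, kN/m^3, m, degrees).
print(round(critical_srf(c=5.0, phi_deg=20.0, gamma=18.0, z=3.0,
                         beta_deg=40.0), 2))   # ≈ 0.62 (< 1: unstable slope)
```

An SRF below 1, as here and as reported for the debris model in the abstract, means the available shear strength is already insufficient, consistent with an already-failed slope.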
Rigorous Model Reduction for a Damped-Forced Nonlinear Beam Model: An Infinite-Dimensional Analysis
Kogelbauer, Florian; Haller, George
2018-01-01
We use invariant manifold results on Banach spaces to conclude the existence of spectral submanifolds (SSMs) in a class of nonlinear, externally forced beam oscillations. SSMs are the smoothest nonlinear extensions of spectral subspaces of the linearized beam equation. Reduction of the governing PDE to SSMs provides an explicit low-dimensional model which captures the correct asymptotics of the full, infinite-dimensional dynamics. Our approach is general enough to admit extensions to other types of continuum vibrations. The model-reduction procedure we employ also gives guidelines for a mathematically self-consistent modeling of damping in PDEs describing structural vibrations.
International Nuclear Information System (INIS)
Robertson, D.D.; Fishman, E.K.; Kalender, W.A.; Magid, D.; Weiss, P.J.
1986-01-01
Radiographic assessment of revision total hip replacements suffers from the inability to provide adequate information regarding bone stock loss. Even CT, with its transaxial orientation, is limited because of metal artifacts. Three metal artifact reduction techniques are available for CT: material-dependent imaging, planar reformation of image data, and missing projection data replacement. These techniques were used to evaluate preoperatively seven patients with revision total hip replacements, and postoperatively eight patients with primary total hip replacements. Despite significant artifacts on the routine transaxial images, the metal artifact-reduced images were of sufficient quality to provide pertinent clinical information in all cases
Photonic crystal defect mode cavity modelling: a phenomenological dimensional reduction approach
Zhou, Weidong; Qiang, Zexuan; Chen, Li
2007-05-01
A phenomenological dimensional reduction approach (PDRA) for the cavity characteristics in defect mode based photonic crystal (PC) lasers is presented. Based on the fully vectorial three-dimensional finite-difference time-domain (3D FDTD) technique, simultaneous enhancement and suppression in spontaneous emission and absorption were obtained in an absorptive photonic crystal slab (PCS) cavity. Effective index perturbation (EIP) was proposed for fast and accurate determination of the effective index and the dominant resonant cavity frequency in a 3D PCS structure, with two-dimensional (2D) FDTD simulation. Further dimensional reduction from 2D to one-dimensional planar cavity enables phenomenological modelling of lasing characteristics via the effective reflectivity calculation and rate equation analysis. Very fast and accurate results have been achieved with this PDRA approach. A high spontaneous emission factor and cavity quality factor Q were obtained in a single defect cavity, which led to over an order reduction in lasing gain threshold. The model offers a fast and accurate tool for the design and modelling of PC defect mode cavity based devices and aids the research in the proposed novel defect mode based devices such as ultra-compact light sources on Si and spectrally resolved PC infrared photodetectors.
Virtual 3d City Modeling: Techniques and Applications
Singh, S. P.; Jain, K.; Mandla, V. R.
2013-08-01
A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing graphic representations of buildings and other objects in 2.5D or 3D. Generally, three main Geomatics approaches are used to generate virtual 3D city models: in the first approach, researchers use conventional techniques such as vector map data, DEMs and aerial images; the second approach is based on high-resolution satellite images with laser scanning; and in the third, many researchers use terrestrial images with close-range photogrammetry, DSMs and texture mapping. This paper starts with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic and manual methods), and the other based on data-input techniques (photogrammetry and laser techniques). After a detailed study, the paper presents its conclusions, a short justification and analysis, and current trends in 3D city modeling. The paper gives an overview of the techniques for generating virtual 3D city models using Geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3D city model. Every technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3...
A new role for reduction in pressure drop in cyclones using computational fluid dynamics techniques
Directory of Open Access Journals (Sweden)
D. Noriler
2004-01-01
Full Text Available In this work a new mechanical device to improve the gas flow in cyclones through pressure drop reduction is presented and discussed. This behavior occurs due to the introduction of the swirling breakdown phenomenon at the inlet of the vortex finder tube. The device consists of a tube with two gas inlets in opposite spiral fluxes, which produces a sudden reduction in the tangential velocity peak responsible for practically 80% of the pressure drop in cyclones. In turn, the peak reduction decreases the pressure drop by breaking down the swirl, and because of this the solid particles tend to move faster toward the wall, increasing collection efficiency. As a result, the overall performance of cyclones is improved. Numerical simulations with 3-D, transient, asymmetric and anisotropic turbulence closure by differential Reynolds stress for Lapple and Stairmand standard geometries of 0.3 m in diameter show a reduction in pressure drop of 20% and a shift of the tangential velocity peak toward the wall. All numerical experiments were carried out with a commercial CFD code, showing numerical stability and good convergence rates with high-order interpolation schemes, SIMPLEC pressure-velocity coupling and other numerical features.
Maucec, M
2005-01-01
Monte Carlo simulations for nuclear logging applications are considered to be highly demanding transport problems. In this paper, the implementation of weight-window variance reduction schemes in a 'manual' fashion to improve the efficiency of calculations for a neutron logging tool is presented.
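A weight-window scheme of the kind mentioned above combines splitting of heavy particles with Russian roulette of light ones, keeping particle weights inside a prescribed window while preserving the expected tally. The window bounds and survival weight below are illustrative assumptions, not parameters of any specific logging-tool calculation:

```python
import random

def apply_weight_window(weight, w_low, w_high, rng):
    """Weight-window check for one particle.

    Returns a list of resulting particle weights: split copies for
    heavy particles, a survivor or nothing after roulette for light
    ones, or the particle unchanged if its weight is inside the window.
    (Illustrative scheme; the survival weight 2*w_low is an assumption.)
    """
    if weight > w_high:                       # split heavy particles
        n = int(weight / w_high) + 1
        return [weight / n] * n
    if weight < w_low:                        # Russian roulette light ones
        w_survive = 2.0 * w_low               # assumed survival weight
        if rng.random() < weight / w_survive:
            return [w_survive]
        return []                             # particle killed
    return [weight]

rng = random.Random(42)
# The game is "fair": the expected total weight is preserved on average.
trials = [sum(apply_weight_window(0.05, 0.1, 1.0, rng)) for _ in range(100000)]
print(round(sum(trials) / len(trials), 3))    # ~0.05
```

Splitting many low-weight histories into fewer, heavier survivors (and heavy ones into several lighter copies) is what reduces variance per unit of computing time in deep-penetration problems like logging-tool simulations.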
Identification of interactions using model-based multifactor dimensionality reduction.
Gola, Damian; König, Inke R
2016-01-01
Common complex traits may involve multiple genetic and environmental factors and their interactions. Many methods have been proposed to identify these interaction effects, among them several machine learning and data mining methods. These are attractive for identifying interactions because they do not rely on specific genetic model assumptions. To handle the computational burden arising from an exhaustive search including all possible combinations of factors, filter methods try to select promising factors in advance. Model-based multifactor dimensionality reduction (MB-MDR), a semiparametric machine learning method allowing adjustment for confounding variables and lower-level effects, is applied to Genetic Analysis Workshop 19 (GAW19) data to identify interaction effects on different traits. Several filtering methods based on the nearest neighbor algorithm are assessed in terms of compatibility with MB-MDR. Single nucleotide polymorphism (SNP) rs859400 shows a significant interaction effect (corrected p value <0.05) with age on systolic blood pressure (SBP). We identified 23 SNP-SNP interaction effects on hypertension status (HS), 42 interaction effects on SBP, and 26 interaction effects on diastolic blood pressure (DBP). Several of these SNPs are in strong linkage disequilibrium (LD). Three of the interaction effects on HS are identified in filtered subsets. The considered filtering methods seem not to be appropriate for use with MB-MDR. LD pruning is a further quality control step to be incorporated, which can reduce the combinatorial burden by removing redundant SNPs.
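The basic MDR pooling step that underlies MB-MDR can be sketched as follows; this shows only the classic high/low-risk cell labelling for a SNP pair, not MB-MDR's semiparametric cell testing or covariate adjustment, and the toy data are invented:

```python
from collections import defaultdict

def mdr_pool(genotypes_a, genotypes_b, is_case, threshold=None):
    """Classic MDR pooling step for one pair of SNPs.

    Each multilocus genotype cell is labelled 'high' risk if its
    case/control ratio exceeds the overall case/control ratio,
    otherwise 'low'. MB-MDR replaces this hard pooling with a
    statistical test per cell and allows confounder adjustment.
    """
    cases = defaultdict(int)
    controls = defaultdict(int)
    for ga, gb, y in zip(genotypes_a, genotypes_b, is_case):
        if y:
            cases[(ga, gb)] += 1
        else:
            controls[(ga, gb)] += 1
    if threshold is None:  # overall case/control ratio
        threshold = sum(cases.values()) / max(sum(controls.values()), 1)
    cells = set(cases) | set(controls)
    return {cell: 'high' if cases[cell] > threshold * controls[cell] else 'low'
            for cell in cells}

# Tiny toy data: genotypes coded 0/1/2, phenotype 1 = case
ga = [0, 0, 1, 1, 2, 2, 0, 1]
gb = [0, 1, 0, 1, 0, 1, 0, 1]
y  = [1, 0, 0, 1, 0, 1, 1, 0]
print(sorted(mdr_pool(ga, gb, y).items()))
```

Collapsing the two-SNP genotype table into a single binary high/low attribute is the "dimensionality reduction" in the method's name; an exhaustive search then repeats this for every SNP pair, which is the combinatorial burden the filters above try to tame.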
Vehicle Lightweighting: Mass Reduction Spectrum Analysis and Process Cost Modeling
International Nuclear Information System (INIS)
Mascarin, Anthony; Hannibal, Ted; Raghunathan, Anand; Ivanic, Ziga; Clark, Michael
2016-01-01
The U.S. Department of Energy's Vehicle Technologies Office, Materials area commissioned a study to model and assess the manufacturing economics of alternative design and production strategies for a series of lightweight vehicle concepts. The first two phases of this effort examined combinations of strategies aimed at achieving strategic targets of 40% and 45% mass reduction relative to a standard North American midsize passenger sedan at an effective cost of $3.42 per pound (lb) saved. These results have been reported in the Idaho National Laboratory report INL/EXT-14-33863 entitled Vehicle Lightweighting: 40% and 45% Weight Savings Analysis: Technical Cost Modeling for Vehicle Lightweighting, published in March 2015. The data for these strategies were drawn from many sources, including Lotus Engineering Limited and FEV, Inc. lightweighting studies, the U.S. Department of Energy-funded Vehma International of America, Inc./Ford Motor Company Multi-Material Lightweight Prototype Vehicle Demonstration Project, the Aluminum Association Transportation Group, many United States Council for Automotive Research/United States Automotive Materials Partnership LLC lightweight materials programs, and IBIS Associates, Inc.'s decades of experience in automotive lightweighting and materials substitution analyses.
Vehicle Lightweighting: Mass Reduction Spectrum Analysis and Process Cost Modeling
Energy Technology Data Exchange (ETDEWEB)
Mascarin, Anthony [IBIS Associates, Inc., Waltham, MA (United States); Hannibal, Ted [IBIS Associates, Inc., Waltham, MA (United States); Raghunathan, Anand [Energetics Inc., Columbia, MD (United States); Ivanic, Ziga [Energetics Inc., Columbia, MD (United States); Clark, Michael [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2016-03-01
The U.S. Department of Energy’s Vehicle Technologies Office, Materials area commissioned a study to model and assess the manufacturing economics of alternative design and production strategies for a series of lightweight vehicle concepts. The first two phases of this effort examined combinations of strategies aimed at achieving strategic targets of 40% and 45% mass reduction relative to a standard North American midsize passenger sedan at an effective cost of $3.42 per pound (lb) saved. These results have been reported in the Idaho National Laboratory report INL/EXT-14-33863 entitled Vehicle Lightweighting: 40% and 45% Weight Savings Analysis: Technical Cost Modeling for Vehicle Lightweighting, published in March 2015. The data for these strategies were drawn from many sources, including Lotus Engineering Limited and FEV, Inc. lightweighting studies, the U.S. Department of Energy-funded Vehma International of America, Inc./Ford Motor Company Multi-Material Lightweight Prototype Vehicle Demonstration Project, the Aluminum Association Transportation Group, many United States Council for Automotive Research/United States Automotive Materials Partnership LLC lightweight materials programs, and IBIS Associates, Inc.’s decades of experience in automotive lightweighting and materials substitution analyses.
Shin, Sang-Jin; Sohn, Hoon-Sang; Do, Nam-Hoon
2012-10-01
To introduce a modified operative technique for minimally invasive plate osteosynthesis (MIPO) for acute displaced humeral shaft fractures and to evaluate the clinical and radiological outcomes. Prospective clinical series study. University hospital. Twenty-one patients with acute displaced humeral shaft fractures were treated by MIPO with a modified fracture reduction technique. A narrow 4.5/5.0-mm locking compression plate was applied to the anterior aspect of the humerus. Fracture reduction and manipulation were performed using a plate and drill bits. The operating time, time to union, humeral alignment, and functional outcome of the shoulder and elbow joints were evaluated using the University of California Los Angeles shoulder score and the Mayo elbow performance score. No patient experienced a neurological complication. Bony union was obtained in 20/21 patients at a mean of 17.5 weeks postoperatively. Eighteen patients had excellent and 3 patients had good results in the University of California Los Angeles score. The average Mayo elbow performance score was 97.5. Two patients were converted to an open reduction during the operation due to a failure of MIPO. There was 1 nonunion and 1 malunion in this series. Although the MIPO technique for humeral shaft fractures is technically demanding, satisfactory clinical outcomes in terms of bony union and shoulder and elbow function can be obtained using the modified fracture reduction method. Potential postoperative complications, such as malreduction and nonunion, must be considered. Appropriate surgical indications, a thorough understanding of the neurovascular anatomy and skillful surgical technique are needed to reduce potential complications.
Ibarrola-Ulzurrun, Edurne; Marcello-Ruiz, Javier; Gonzalo-Martín, Consuelo
2017-10-01
Hyperspectral imagery is formed by several narrow and continuous bands covering different regions of the electromagnetic spectrum, such as the visible, near infrared and far infrared. Hyperspectral imagery provides much higher spectral resolution than high spatial resolution multispectral imagery, improving the detection capability for terrestrial objects. The greatest difficulty in hyperspectral processing is the high dimensionality of these data, which brings out the 'Hughes' phenomenon: the size of the training set required for a given classification increases exponentially with the number of spectral bands. Therefore, the dimensionality of the hyperspectral data is an important drawback when applying traditional classification or pattern recognition approaches to this imagery. In our context, dimensionality reduction is necessary to obtain accurate thematic maps of natural protected areas. Dimensionality reduction can be divided into feature-selection algorithms and feature-extraction algorithms. We focus the study on feature-extraction algorithms such as Principal Component Analysis (PCA), Minimum Noise Fraction (MNF) and Independent Component Analysis (ICA). After a review of the state of the art, a lack of comparative studies of the techniques used for hyperspectral imagery dimensionality reduction has been observed. In this context, our objective was to perform a comparative study of the traditional techniques of dimensionality reduction (PCA, MNF and ICA) to evaluate their performance in the classification of high spatial resolution imagery from the CASI (Compact Airborne Spectrographic Imager) sensor.
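The PCA feature-extraction step discussed above can be sketched in plain NumPy; the synthetic cube below stands in for real CASI imagery, and MNF or ICA would replace the eigendecomposition with a noise-whitened or independence-seeking transform:

```python
import numpy as np

def pca_reduce(cube, n_components):
    """PCA dimensionality reduction of a hyperspectral cube.

    cube: (rows, cols, bands) array; returns (rows, cols, n_components)
    containing the projections onto the leading principal axes.
    """
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)    # pixels as rows
    X -= X.mean(axis=0)                      # centre each band
    cov = np.cov(X, rowvar=False)            # (bands, bands) covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    top = eigvecs[:, ::-1][:, :n_components] # leading principal axes
    return (X @ top).reshape(r, c, n_components)

# Synthetic 10x10 'image' with 30 correlated bands driven by 3 sources
rng = np.random.default_rng(1)
base = rng.standard_normal((10, 10, 3))
mix = rng.standard_normal((3, 30))
cube = base @ mix + 0.01 * rng.standard_normal((10, 10, 30))
print(pca_reduce(cube, 3).shape)             # (10, 10, 3)
```

Because the 30 bands here are mixtures of 3 underlying sources plus small noise, the first 3 components capture nearly all of the variance, which is the situation that makes feature extraction effective against the Hughes phenomenon.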
Barnea, Yoav; Inbal, Amir; Barsuk, Daphna; Menes, Tehila; Zaretski, Arik; Leshem, David; Weiss, Jerry; Schneebaum, Schlomo; Gur, Eyal
2014-01-01
Background Oncoplastic breast reduction in women with medium to large breasts has reportedly benefitted them both oncologically and cosmetically. We present our experience with an oncoplastic breast reduction technique using a vertical scar superior-medial pedicle pattern for immediate partial breast reconstruction. Methods All patients with breast tumours who underwent vertical scar superior-medial pedicle reduction pattern oncoplastic surgery at our centre between September 2006 and June 2010 were retrospectively studied. Follow-up continued from 12 months to 6 years. Results Twenty women (age 28–72 yr) were enrolled: 16 with invasive carcinoma and 4 with benign tumours. They all had tumour-free surgical margins, and no further oncological operations were required. The patients expressed a high degree of satisfaction from the surgical outcome in terms of improved quality of life and a good cosmetic result. Conclusion The vertical scar superior-medial pedicle reduction pattern is a versatile oncoplastic technique that allows breast tissue rearrangement for various tumour locations. It is oncologically beneficial and is associated with high patient satisfaction. PMID:25078939
International Nuclear Information System (INIS)
Park, Hyeong-Ho; Hwang, Seon-Yong; Jung, Sang Hyun; Kang, Semin; Shin, Hyun-Beom; Kang, Ho Kwan; Ko, Chul Ki; Zhang Xin; Hill, Ross H; Park, Hyung-Ho
2012-01-01
We present a simple size reduction technique for fabricating 400 nm zinc oxide (ZnO) architectures using a silicon master containing only microscale architectures. In this approach, the overall fabrication, from the master to the molds and the final ZnO architectures, features cost-effective UV photolithography, instead of electron beam lithography or deep-UV photolithography. A photosensitive Zn-containing sol–gel precursor was used to imprint architectures by direct UV-assisted nanoimprint lithography (UV-NIL). The resulting Zn-containing architectures were then converted to ZnO architectures with reduced feature sizes by thermal annealing at 400 °C for 1 h. The imprinted and annealed ZnO architectures were also used as new masters for the size reduction technique. ZnO pillars of 400 nm diameter were obtained from a silicon master with pillars of 1000 nm diameter by simply repeating the size reduction technique. The photosensitivity and contrast of the Zn-containing precursor were measured as 6.5 J cm⁻² and 16.5, respectively. Interesting complex ZnO patterns, with both microscale pillars and nanoscale holes, were demonstrated by the combination of dose-controlled UV exposure and a two-step UV-NIL.
Directory of Open Access Journals (Sweden)
Hyeong-Ho Park, Xin Zhang, Seon-Yong Hwang, Sang Hyun Jung, Semin Kang, Hyun-Beom Shin, Ho Kwan Kang, Hyung-Ho Park, Ross H Hill and Chul Ki Ko
2012-01-01
Full Text Available We present a simple size reduction technique for fabricating 400 nm zinc oxide (ZnO) architectures using a silicon master containing only microscale architectures. In this approach, the overall fabrication, from the master to the molds and the final ZnO architectures, features cost-effective UV photolithography, instead of electron beam lithography or deep-UV photolithography. A photosensitive Zn-containing sol–gel precursor was used to imprint architectures by direct UV-assisted nanoimprint lithography (UV-NIL). The resulting Zn-containing architectures were then converted to ZnO architectures with reduced feature sizes by thermal annealing at 400 °C for 1 h. The imprinted and annealed ZnO architectures were also used as new masters for the size reduction technique. ZnO pillars of 400 nm diameter were obtained from a silicon master with pillars of 1000 nm diameter by simply repeating the size reduction technique. The photosensitivity and contrast of the Zn-containing precursor were measured as 6.5 J cm⁻² and 16.5, respectively. Interesting complex ZnO patterns, with both microscale pillars and nanoscale holes, were demonstrated by the combination of dose-controlled UV exposure and a two-step UV-NIL.
Modelling stillbirth mortality reduction with the Lives Saved Tool
Directory of Open Access Journals (Sweden)
Hannah Blencowe
2017-11-01
Full Text Available Background The worldwide burden of stillbirths is large, with an estimated 2.6 million babies stillborn in 2015, including 1.3 million dying during labour. The Every Newborn Action Plan set a stillbirth target of ≤12 per 1000 in all countries by 2030. Planning tools will be essential as countries set policy and plan investment to scale up interventions to meet this target. This paper summarises the approach taken for modelling the impact of scaling up health interventions on stillbirths in the Lives Saved Tool (LiST), and potential future refinements. Methods The specific application to stillbirths of the general method for modelling the impact of interventions in LiST is described. The evidence for the effectiveness of potential interventions to reduce stillbirths is reviewed, and the assumptions about the affected fraction of stillbirths who could potentially benefit from these interventions are presented. The current assumptions and their effects on stillbirth reduction are described and potential future improvements discussed. Results High-quality evidence is not available for all parameters in the LiST stillbirth model. Cause-specific mortality data are not available for stillbirths; therefore stillbirths are modelled in LiST using an attributable-fraction approach by timing of stillbirth (antepartum/intrapartum). Of 35 potential interventions to reduce stillbirths identified, eight are currently modelled in LiST. These include childbirth care, induction for prolonged pregnancy, multiple micronutrient and balanced energy supplementation, malaria prevention, and detection and management of hypertensive disorders of pregnancy, diabetes and syphilis. For three of the interventions, childbirth care, detection and management of hypertensive disorders of pregnancy, and diabetes, the estimate of effectiveness is based on expert opinion through a Delphi process. Only for malaria is coverage information available, with coverage
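The attributable-fraction impact logic described above can be sketched as a single multiplicative estimate; this is a simplification of LiST's actual cascading computation, and every number below is illustrative rather than a LiST parameter:

```python
def stillbirths_averted(baseline, affected_fraction, effectiveness,
                        coverage_old, coverage_new):
    """Simplified LiST-style impact estimate for one intervention.

    baseline: stillbirths before scale-up
    affected_fraction: share of stillbirths the intervention can address
    effectiveness: proportional reduction among the affected fraction
    coverage_old/new: intervention coverage before and after scale-up
    (Single-intervention linear sketch; LiST itself combines interventions
    and applies effects to residual, uncovered mortality.)
    """
    return (baseline * affected_fraction * effectiveness
            * (coverage_new - coverage_old))

# Hypothetical example: scaling one intervention from 60% to 90% coverage
averted = stillbirths_averted(baseline=10000, affected_fraction=0.5,
                              effectiveness=0.4, coverage_old=0.6,
                              coverage_new=0.9)
print(round(averted))   # 600
```

The product structure makes clear why both the affected fraction (timing of stillbirth) and the effectiveness estimates reviewed in the paper drive the modelled impact: halving either halves the projected stillbirths averted.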
Circuit oriented electromagnetic modeling using the PEEC techniques
Ruehli, Albert; Jiang, Lijun
2017-01-01
This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. It begins with an introduction to circuit analysis techniques, laws, and frequency and time domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.
International Nuclear Information System (INIS)
Girauda, P.; Simon, L.; Yorke, E.; Mageras, G.; Jiang, S.; Rosenzweig, K.
2006-01-01
Respiration-gated radiotherapy offers significant potential for improvement in the irradiation of tumour sites affected by respiratory motion, such as lung, breast and liver tumours. Increased conformality of irradiation fields, leading to decreased complication rates for organs at risk (lung, heart), is expected. Four main strategies are used to reduce respiratory motion effects: integration of respiratory movements into treatment planning, breath-hold techniques, respiratory gating techniques, and tracking techniques. Measurements of respiratory movements can be performed either in a representative sample of the general population, or directly on the patient before irradiation. The measured amplitude can be applied as a geometrical margin or integrated into dosimetry. However, these strategies remain limited for very mobile tumours, in which case this approach results in larger irradiated volumes. Reduction of breathing motion can be achieved by using either breath-hold techniques or respiration-synchronized gating techniques. Breath-hold can be achieved with active techniques, in which a valve temporarily blocks the patient's airflow, or passive techniques, in which the patient holds his or her breath voluntarily. Synchronized gating techniques use external devices to predict the phase of the respiration cycle while the patient breathes freely. Another category is tumour tracking, which consists of two major aspects: real-time localization of, and real-time beam adaptation to, a constantly moving tumour. These techniques are presently being investigated in several medical centres worldwide. Although promising, the first results obtained in lung and liver cancer patients require confirmation. This paper describes the most frequently used gating and tracking techniques and the main published clinical reports. (authors)
ENSO dynamics in current climate models: an investigation using nonlinear dimensionality reduction
Directory of Open Access Journals (Sweden)
I. Ross
2008-04-01
Linear dimensionality reduction techniques, notably principal component analysis, are widely used in climate data analysis as a means to aid in the interpretation of datasets of high dimensionality. These linear methods may not be appropriate for the analysis of data arising from nonlinear processes occurring in the climate system. Numerous techniques for nonlinear dimensionality reduction have been developed recently that may provide a potentially useful tool for the identification of low-dimensional manifolds in climate data sets arising from nonlinear dynamics. Here, we apply Isomap, one such technique, to the study of El Niño/Southern Oscillation variability in tropical Pacific sea surface temperatures, comparing observational data with simulations from a number of current coupled atmosphere-ocean general circulation models. We use Isomap to examine El Niño variability in the different datasets and assess the suitability of the Isomap approach for climate data analysis. We conclude that, for the application presented here, analysis using Isomap does not provide additional information beyond that already provided by principal component analysis.
ENSO dynamics in current climate models: an investigation using nonlinear dimensionality reduction
Ross, I.; Valdes, P. J.; Wiggins, S.
2008-04-01
Linear dimensionality reduction techniques, notably principal component analysis, are widely used in climate data analysis as a means to aid in the interpretation of datasets of high dimensionality. These linear methods may not be appropriate for the analysis of data arising from nonlinear processes occurring in the climate system. Numerous techniques for nonlinear dimensionality reduction have been developed recently that may provide a potentially useful tool for the identification of low-dimensional manifolds in climate data sets arising from nonlinear dynamics. Here, we apply Isomap, one such technique, to the study of El Niño/Southern Oscillation variability in tropical Pacific sea surface temperatures, comparing observational data with simulations from a number of current coupled atmosphere-ocean general circulation models. We use Isomap to examine El Niño variability in the different datasets and assess the suitability of the Isomap approach for climate data analysis. We conclude that, for the application presented here, analysis using Isomap does not provide additional information beyond that already provided by principal component analysis.
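Both records above use principal component analysis as the linear baseline against which Isomap is judged. As a minimal, self-contained illustration of that baseline (not the authors' code; the toy "SST field" and all parameter values below are invented for the example), the EOF/principal-component decomposition can be sketched in a few lines of numpy:

```python
import numpy as np

def leading_eofs(data, n_modes=2):
    """Return the leading EOFs (spatial patterns), principal-component time
    series, and explained-variance fractions of a (time x space) matrix,
    computed via SVD of the anomalies."""
    anomalies = data - data.mean(axis=0)            # remove the time mean
    u, s, vt = np.linalg.svd(anomalies, full_matrices=False)
    eofs = vt[:n_modes]                             # spatial patterns
    pcs = u[:, :n_modes] * s[:n_modes]              # amplitude time series
    explained = s[:n_modes] ** 2 / np.sum(s ** 2)   # variance fractions
    return eofs, pcs, explained

# Toy "SST field": one dominant standing oscillation plus weak noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 20, 200)
pattern = np.sin(np.linspace(0, np.pi, 50))         # fixed spatial shape
field = np.outer(np.sin(t), pattern) + 0.05 * rng.standard_normal((200, 50))

eofs, pcs, explained = leading_eofs(field)
print(explained[0] > 0.9)   # the planted mode dominates the variance
```

A nonlinear method such as Isomap replaces the linear projection step with geodesic-distance embedding, which is the part the two records evaluate.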
[Intestinal lengthening techniques: an experimental model in dogs].
Garibay González, Francisco; Díaz Martínez, Daniel Alberto; Valencia Flores, Alejandro; González Hernández, Miguel Angel
2005-01-01
To compare two intestinal lengthening procedures in an experimental dog model. Intestinal lengthening is one of the methods for gastrointestinal reconstruction used in the treatment of short bowel syndrome. The modification to Bianchi's technique is an alternative; it decreases the number of anastomoses to a single one, thus reducing the risk of leaks and strictures. To our knowledge there is no clinical or experimental report comparing both techniques, so we performed the present study. Twelve creole dogs were operated on with the Bianchi intestinal lengthening technique (Group A) and another 12 creole dogs of the same breed and similar weight were operated on with the modified technique (Group B). Both groups were compared with respect to operating time, technical difficulties, cost, intestinal lengthening and anastomosis diameter. There was no statistically significant difference in anastomosis diameter (A = 9.0 mm vs. B = 8.5 mm, p = 0.3846). Operating time (142 min vs. 63 min), cost and technical difficulty were lower in Group B (p < 0.05). The anastomoses (of Group B) and intestinal segments had good blood supply and were patent along their full length. The Bianchi technique and the modified technique offer two reliable alternatives for the treatment of short bowel syndrome; the modified technique improved operating time, cost and technical complexity.
Energy Technology Data Exchange (ETDEWEB)
Sampson, Andrew; Le Yi; Williamson, Jeffrey F. [Department of Radiation Oncology, Virginia Commonwealth University, Richmond, Virginia 23298 (United States)
2012-02-15
Purpose: To demonstrate potential of correlated sampling Monte Carlo (CMC) simulation to improve the calculation efficiency for permanent seed brachytherapy (PSB) implants without loss of accuracy. Methods: CMC was implemented within an in-house MC code family (PTRAN) and used to compute 3D dose distributions for two patient cases: a clinical PSB postimplant prostate CT imaging study and a simulated post lumpectomy breast PSB implant planned on a screening dedicated breast cone-beam CT patient exam. CMC tallies the dose difference, ΔD, between highly correlated histories in homogeneous and heterogeneous geometries. The heterogeneous geometry histories were derived from photon collisions sampled in a geometrically identical but purely homogeneous medium geometry, by altering their particle weights to correct for bias. The prostate case consisted of 78 Model-6711 ¹²⁵I seeds. The breast case consisted of 87 Model-200 ¹⁰³Pd seeds embedded around a simulated lumpectomy cavity. Systematic and random errors in CMC were unfolded using low-uncertainty uncorrelated MC (UMC) as the benchmark. CMC efficiency gains, relative to UMC, were computed for all voxels, and the mean was classified in regions that received minimum doses greater than 20%, 50%, and 90% of D₉₀, as well as for various anatomical regions. Results: Systematic errors in CMC relative to UMC were less than 0.6% for 99% of the voxels and 0.04% for 100% of the voxels for the prostate and breast cases, respectively. For a 1 × 1 × 1 mm³ dose grid, efficiency gains were realized in all structures with 38.1- and 59.8-fold average gains within the prostate and breast clinical target volumes (CTVs), respectively. Greater than 99% of the voxels within the prostate and breast CTVs experienced an efficiency gain. Additionally, it was shown that efficiency losses were confined to low dose regions while the largest gains were located where little difference exists between the homogeneous and
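The variance-reduction idea behind correlated sampling — tallying the difference ΔD between histories that share the same random numbers in the homogeneous and heterogeneous geometries, so that most statistical noise cancels — can be shown with a deliberately simplified one-dimensional sketch. The exponential "dose responses" and the factor-of-10 threshold below are illustrative assumptions, not the PTRAN implementation:

```python
import numpy as np

def dose_difference(n, rng, correlated=True):
    """Estimate D_het - D_hom for two toy 'dose' responses. With
    correlated=True the same random histories drive both geometries, so
    the shared noise cancels in the difference."""
    u = rng.random(n)
    d_hom = np.exp(-3.0 * u)                      # 'homogeneous' response
    if correlated:
        d_het = np.exp(-3.2 * u)                  # same histories, perturbed medium
    else:
        d_het = np.exp(-3.2 * rng.random(n))      # independent histories
    diff = d_het - d_hom
    return diff.mean(), diff.std(ddof=1) / np.sqrt(n)

rng = np.random.default_rng(1)
_, se_corr = dose_difference(100_000, rng, correlated=True)
_, se_uncorr = dose_difference(100_000, rng, correlated=False)
gain = (se_uncorr / se_corr) ** 2     # efficiency gain ~ variance ratio
print(gain > 10)
```

The abstract's 38- to 60-fold gains arise from the same mechanism, applied per voxel in a full transport code.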
Efficient Analysis of Structures with Rotatable Elements Using Model Order Reduction
Directory of Open Access Journals (Sweden)
G. Fotyga
2016-04-01
This paper presents a novel full-wave technique which allows for a fast 3D finite element analysis of waveguide structures containing rotatable tuning elements of arbitrary shapes. Rotation of these elements changes the resonant frequencies of the structure, which can be used in the tuning process to obtain the S-characteristics desired for the device. For fast computations of the response as the tuning elements are rotated, the 3D finite element method is supported by multilevel model-order reduction, orthogonal projection at the boundaries of macromodels and the operation called macromodel cloning. All the time-consuming steps are performed only once in the preparatory stage. In the tuning stage, only small parts of the domain are updated, by means of a special meshing technique. In effect, the tuning process is performed extremely rapidly. The results of the numerical experiments confirm the efficiency and validity of the proposed method.
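The projection step at the heart of model-order reduction can be sketched generically: solve the full system a few times offline, orthonormalise the solutions into a basis, then solve only a tiny projected system online. The tridiagonal toy operator, snapshot parameters, and sizes below are assumptions for illustration; the paper's multilevel macromodel machinery is not reproduced:

```python
import numpy as np

def reduce_and_solve(A, b, Q):
    """Galerkin projection: solve the small system (Q^T A Q) y = Q^T b
    and lift the reduced solution back, x ~ Q y."""
    y = np.linalg.solve(Q.T @ A @ Q, Q.T @ b)
    return Q @ y

# Toy parametric FEM-like system: (K + s*I) x = f on a 1D grid.
n = 300
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # stiffness-like operator
f = np.ones(n)

# Offline stage: snapshots at a few training parameters, orthonormalised.
snaps = np.column_stack([np.linalg.solve(K + s * np.eye(n), f)
                         for s in (0.1, 0.4, 0.7, 1.0)])
Q, _ = np.linalg.qr(snaps)

# Online stage: an unseen parameter needs only a 4x4 solve, not 300x300.
s = 0.55
x_full = np.linalg.solve(K + s * np.eye(n), f)
x_red = reduce_and_solve(K + s * np.eye(n), f, Q)
err = np.linalg.norm(x_red - x_full) / np.linalg.norm(x_full)
print(err < 1e-2)
```

In the paper the "offline" work is the once-only preparatory stage, and rotating a tuning element corresponds to changing the parameter in the cheap online solve.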
Sarhadi, Ali; Burn, Donald H.; Yang, Ge; Ghodsi, Ali
2017-02-01
One of the main challenges in climate change studies is accurate projection of the global warming impacts on the probabilistic behaviour of hydro-climate processes. Due to the complexity of climate-associated processes, identification of predictor variables from high-dimensional atmospheric variables is considered a key factor for improvement of climate change projections in statistical downscaling approaches. For this purpose, the present paper adopts a new approach of supervised dimensionality reduction, called "Supervised Principal Component Analysis (Supervised PCA)", for regression-based statistical downscaling. This method is a generalization of PCA, extracting a sequence of principal components of atmospheric variables that have maximal dependence on the response hydro-climate variable. To capture the nonlinear variability between hydro-climatic response variables and predictors, a kernelized version of Supervised PCA is also applied for nonlinear dimensionality reduction. The effectiveness of the Supervised PCA methods in comparison with some state-of-the-art algorithms for dimensionality reduction is evaluated for the statistical downscaling of precipitation at a specific site using two nonlinear soft-computing machine learning methods, Support Vector Regression and the Relevance Vector Machine. The results demonstrate that the Supervised PCA methods yield a significant improvement in performance accuracy.
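A minimal sketch of the Supervised PCA idea follows: instead of keeping the highest-variance directions, keep the directions with maximal dependence on the response, here via an HSIC-style criterion with a linear target kernel. The synthetic data set and threshold are invented assumptions, not the paper's downscaling setup:

```python
import numpy as np

def supervised_pca(X, y, n_components=1):
    """Supervised PCA sketch: directions u maximizing the dependence
    u^T X^T H K_y H X u, with linear target kernel K_y = y y^T and
    centering matrix H."""
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    Q = X.T @ H @ np.outer(y, y) @ H @ X
    vals, vecs = np.linalg.eigh(Q)
    return vecs[:, np.argsort(vals)[::-1][:n_components]]

# The response depends only on the LOW-variance fifth feature; ordinary PCA
# would rank the four high-variance irrelevant features first.
rng = np.random.default_rng(2)
X = rng.standard_normal((2000, 5))
X[:, :4] *= 3.0                      # high-variance, irrelevant predictors
y = X[:, 4] + 0.01 * rng.standard_normal(2000)

U = supervised_pca(X, y)
print(abs(U[4, 0]) > 0.9)            # the direction locks onto feature 5
```

The kernelized variant mentioned in the abstract replaces the linear kernel y yᵀ with a nonlinear kernel matrix on the response.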
Improving the quality of the audio sources using Gaussianity reduction technique
Naik, Ganesh R.; Kumar, Dinesh K.
2011-07-01
This research develops a novel technique based on a fundamental property of background and foreground signals: background signals result from the summation of a large number of sources, while foreground signals result from a limited number of sources. This makes the statistical properties of the two kinds of signal very different. Using negative entropy, this article demonstrates that it is possible to obtain the foreground signals from a mixture of foreground and background signals. The technique is based on mixing the noisy recording with a similar known signal and separating the signals using negative-entropy-based independent component analysis (ICA). The results indicate that the technique is successful in significantly improving the quality of the audio signals.
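The separation step can be sketched with a standard negentropy-based fixed-point ICA (FastICA-style, tanh contrast). The two-channel toy mixture below is an illustrative assumption — the article's actual scheme mixes the noisy recording with a known reference signal, which is not reproduced here — but it shows the statistical premise: the many-source background is near-Gaussian, the limited-source foreground is strongly non-Gaussian, and negentropy maximization pulls the latter out:

```python
import numpy as np

def fastica_two(X, iters=200, seed=0):
    """Negentropy-based ICA (FastICA fixed point, tanh contrast) for two
    mixtures; returns the two estimated source signals."""
    X = X - X.mean(axis=1, keepdims=True)
    d, E = np.linalg.eigh(X @ X.T / X.shape[1])      # whitening
    Xw = E @ np.diag(d ** -0.5) @ E.T @ X
    W = np.random.default_rng(seed).standard_normal((2, 2))
    for _ in range(iters):
        G = np.tanh(W @ Xw)
        W = G @ Xw.T / Xw.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
        u, _, vt = np.linalg.svd(W)                  # symmetric decorrelation
        W = u @ vt
    return W @ Xw

# Foreground: limited-source, strongly non-Gaussian square wave.
# Background: aggregate of many weak sources, hence near-Gaussian.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 4000)
fg = np.sign(np.sin(2 * np.pi * 13 * t))
bg = rng.standard_normal(4000)
mixtures = np.array([[0.7, 0.5], [0.4, 0.9]]) @ np.vstack([fg, bg])

S = fastica_two(mixtures)
corr = np.abs(np.corrcoef(S, fg)[2, :2]).max()   # best match to the foreground
print(corr > 0.9)
```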
Wan, Kenneth; Williamson, Raymond A; Gebauer, Dieter; Hird, Kathryn
2012-11-01
The study's purpose was to answer the following clinical question: in patients with mandibular angle fractures requiring open reduction and internal fixation, do those who have fixation screws inserted using a transbuccal approach compared with those with fixation screws inserted using a transoral approach have fewer complications after treatment? The investigators hypothesized that the transoral approach was associated with a higher risk of complications. A multicenter retrospective cohort study was performed in patients who had open reduction and internal fixation of mandibular angle fractures from 2008 to 2010 within Western Australia. Patients were divided into transbuccal and transoral groups and then further subdivided into groups with and without fixation failures (primary outcome variable) and statistically compared. Binary logistic regression was used to control for possible confounders, which included patient gender, age, a wisdom tooth within the fracture not extracted, dental caries, partial dentition, bilateral/unilateral fractures, and smoking. In total 597 patients were in the study. Sixteen percent of patients in the transoral group had complications after treatment versus 10% in the transbuccal group. For the transoral technique, the odds of having fixation failure was 1.71 times greater than with the transbuccal technique (95% confidence interval, 1.02 to 2.93; P = .04). Incidences of all complication variables (hardware loosening/fracturing, wound dehiscence, secondary infection, surgery redo, nonunion/malunion of fracture, and removal of plate) were lower in the transbuccal group apart from plate fracture. The transbuccal technique was associated with fewer complications after treatment compared with the transoral technique. Crown Copyright © 2012. Published by Elsevier Inc. All rights reserved.
On a Graphical Technique for Evaluating Some Rational Expectations Models
DEFF Research Database (Denmark)
Johansen, Søren; Swensen, Anders R.
2011-01-01
In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectation models and give a general definition of spread...
Application of the numerical modelling techniques to the simulation ...
African Journals Online (AJOL)
The aquifer was modelled by the application of Finite Element Method (F.E.M), with appropriate initial and boundary conditions. The matrix solver technique adopted for the F.E.M. was that of the Conjugate Gradient Method. After the steady state calibration and transient verification, the model was used to predict the effect of ...
Fuzzy Control Technique Applied to Modified Mathematical Model ...
African Journals Online (AJOL)
In this paper, fuzzy control technique is applied to the modified mathematical model for malaria control presented by the authors in an earlier study. Five Mamdani fuzzy controllers are constructed to control the input (some epidemiological parameters) to the malaria model simulated by 9 fully nonlinear ordinary differential ...
Fast uncertainty reduction strategies relying on Gaussian process models
International Nuclear Information System (INIS)
Chevalier, Clement
2013-01-01
This work deals with sequential and batch-sequential evaluation strategies of real-valued functions under a limited evaluation budget, using Gaussian process models. Optimal Stepwise Uncertainty Reduction (SUR) strategies are investigated for two different problems, motivated by real test cases in nuclear safety. First we consider the problem of identifying the excursion set above a given threshold T of a real-valued function f. Then we study the question of finding the set of 'safe controlled configurations', i.e. the set of controlled inputs where the function remains below T, whatever the value of some other non-controlled inputs. New SUR strategies are presented, together with efficient procedures and formulas to compute and use them in real-world applications. The use of fast formulas to quickly recalculate the posterior mean or covariance function of a Gaussian process (referred to as the 'kriging update formulas') does not only provide substantial computational savings. It is also one of the key tools to derive closed-form formulas enabling the practical use of computationally intensive sampling strategies. A contribution in batch-sequential optimization (with the multi-points Expected Improvement) is also presented. (author)
Summary on several key techniques in 3D geological modeling.
Mei, Gang
2014-01-01
Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
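Of the key techniques listed, spatial interpolation is the easiest to make concrete. Below is a hedged sketch of inverse-distance weighting, one commonly used interpolation scheme for turning scattered picks of a geological interface into a surface; the borehole coordinates and depths are invented for the example:

```python
import numpy as np

def idw_interpolate(points, values, query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered (x, y) -> value
    data at a single query point."""
    d = np.linalg.norm(points - query, axis=1)
    if d.min() < eps:                    # query coincides with a data point
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power                 # nearer picks weigh more
    return float(np.sum(w * values) / np.sum(w))

# Scattered depth picks of a geological interface at four borehole locations.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
depth = np.array([10.0, 12.0, 11.0, 13.0])

z = idw_interpolate(pts, depth, np.array([0.5, 0.5]))
print(z)   # centre point is equidistant from all picks, so z is their mean
```

Evaluating such an interpolant on the nodes of a planar mesh yields the discrete geometric surface that the abstract's second procedure (surface intersection) then operates on.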
Layout-Driven Post-Placement Techniques for Temperature Reduction and Thermal Gradient Minimization
DEFF Research Database (Denmark)
Liu, Wei; Calimera, Andrea; Macii, Alberto
2013-01-01
With the continuing scaling of CMOS technology, on-chip temperature and thermal-induced variations have become a major design concern. To effectively limit the high temperature in a chip equipped with a cost-effective cooling system, thermal specific approaches, besides low power techniques, are ...
A dimensionality reduction technique for 2D scattering problems in photonics
Ivanova, Alyona; Stoffer, Remco; Hammer, Manfred
2010-01-01
This paper describes a simulation method for 2D frequency domain scattering problems in photonics. The technique reduces the spatial dimensionality of the problem by means of global, continuous mode expansion combined with a variational formalism; the resulting equations are solved using a finite
Mesh refinement for uncertainty quantification through model reduction
International Nuclear Information System (INIS)
Li, Jing; Stinis, Panos
2015-01-01
We present a novel way of deciding when and where to refine a mesh in probability space in order to facilitate uncertainty quantification in the presence of discontinuities in random space. A discontinuity in random space makes the application of generalized polynomial chaos expansion techniques prohibitively expensive. The reason is that for discontinuous problems, the expansion converges very slowly. An alternative to using higher terms in the expansion is to divide the random space in smaller elements where a lower degree polynomial is adequate to describe the randomness. In general, the partition of the random space is a dynamic process since some areas of the random space, particularly around the discontinuity, need more refinement than others as time evolves. In the current work we propose a way to decide when and where to refine the random space mesh based on the use of a reduced model. The idea is that a good reduced model can monitor accurately, within a random space element, the cascade of activity to higher degree terms in the chaos expansion. In turn, this facilitates the efficient allocation of computational resources to the areas of random space where they are more needed. For the Kraichnan–Orszag system, the prototypical system to study discontinuities in random space, we present theoretical results which show why the proposed method is sound and numerical results which corroborate the theory.
Development of a kinetic model for biological sulphate reduction ...
African Journals Online (AJOL)
The Rhodes BioSUREÆÊ Process is a low-cost active treatment system for acid mine drainage (AMD) waters. Central to this process is biological sulphate reduction (BSR) using primary sewage sludge (PSS) as the electron donor and organic carbon source, with the concomitant reduction of sulphate to sulphide and ...
Moreno Chaparro, Nicolas
2015-06-30
We introduce a framework for model reduction of polymer chain models for dissipative particle dynamics (DPD) simulations, where the properties governing the phase equilibria such as the characteristic size of the chain, compressibility, density, and temperature are preserved. The proposed methodology reduces the number of degrees of freedom required in traditional DPD representations to model equilibrium properties of systems with complex molecules (e.g., linear polymers). Based on geometrical considerations we explicitly account for the correlation between beads in fine-grained DPD models and consistently represent the effect of these correlations in a reduced model, in a practical and simple fashion via power laws and the consistent scaling of the simulation parameters. In order to satisfy the geometrical constraints in the reduced model we introduce bond-angle potentials that account for the changes in the chain free energy after the model reduction. Following this coarse-graining process we represent high molecular weight DPD chains (i.e., ≥200 beads per chain) with a significant reduction in the number of particles required (i.e., ≥20 times fewer than the original system). We show that our methodology has potential applications modeling systems of high molecular weight molecules at large scales, such as diblock copolymers and DNA.
Oxygen reduction kinetics on mixed conducting SOFC model cathodes
Energy Technology Data Exchange (ETDEWEB)
Baumann, F.S.
2006-07-01
The kinetics of the oxygen reduction reaction at the surface of mixed conducting solid oxide fuel cell (SOFC) cathodes is one of the main factors limiting the performance of these promising systems. For 'realistic' porous electrodes, however, it is usually very difficult to separate the influence of different resistive processes. Therefore, a suitable, geometrically well-defined model system was used in this work to enable an unambiguous distinction of individual electrochemical processes by means of impedance spectroscopy. The electrochemical measurements were performed on dense thin-film microelectrodes, prepared by PLD and photolithography, of mixed conducting perovskite-type materials. The first part of the thesis consists of an extensive impedance spectroscopic investigation of La0.6Sr0.4Co0.8Fe0.2O3 (LSCF) microelectrodes. An equivalent circuit was identified that describes the electrochemical properties of the model electrodes appropriately and enables an unambiguous interpretation of the measured impedance spectra. Hence, the dependencies of individual electrochemical processes such as the surface exchange reaction on a wide range of experimental parameters, including temperature, dc bias and oxygen partial pressure, could be studied. As a result, a comprehensive set of experimental data has been obtained, which was previously not available for a mixed conducting model system. In the course of the experiments on the dc bias dependence of the electrochemical processes a new and surprising effect was discovered: it could be shown that a short but strong dc polarisation of an LSCF microelectrode at high temperature drastically improves its electrochemical performance with respect to the oxygen reduction reaction. The electrochemical resistance associated with the oxygen surface exchange reaction, initially the dominant contribution to the total electrode resistance, can be reduced by two orders of magnitude.
International Nuclear Information System (INIS)
Jeong, K; Kuo, H; Ritter, J; Shen, J; Basavatia, A; Yaparpalvi, R; Kalnicki, S; Tome, W
2015-01-01
Purpose: To evaluate the feasibility of using a metal artifact reduction technique to deplete metal artifacts, and its application to improving dose calculation in external radiation therapy planning. Methods: A CIRS electron density phantom was scanned with and without steel drill bits placed in some plug holes. Metal artifact reduction software using the Metal Deletion Technique (MDT) was used to remove metal artifacts from the scanned image with metal. Hounsfield units of the electron density plugs from the artifact-free reference image and the MDT-processed images were compared. To test the dose calculation improvement with the MDT-processed images, a clinically approved head-and-neck plan with manual dental artifact correction was tested. Patient images were exported and processed with MDT, and the plan was recalculated on the new MDT image without manual correction. Dose profiles near the metal artifacts were compared. Results: The MDT used in this study effectively reduced the metal artifact caused by beam hardening and scatter. The windmill artifact around the metal drill was greatly reduced, leaving a smooth, rounded view. The difference in mean HU in each density plug between the reference and MDT images was less than 10 HU for most of the plugs. Dose differences between the original plan and the MDT images were minimal. Conclusion: Most metal artifact reduction methods were developed for diagnostic improvement purposes, hence Hounsfield unit accuracy had not been rigorously tested before. In our test, MDT effectively eliminated metal artifacts with good HU reproducibility. However, it can introduce new mild artifacts, so MDT images should be checked against the original images.
Breger, A; Ehler, M; Bogunovic, H; Waldstein, S M; Philip, A-M; Schmidt-Erfurth, U; Gerendas, B S
2017-08-01
Purpose: The purpose of the present study is to develop fast automated quantification of retinal fluid in optical coherence tomography (OCT) image sets. Methods: We developed an image analysis pipeline tailored towards OCT images that consists of five steps for binary retinal fluid segmentation. The method is based on feature extraction, pre-segmentation, dimension reduction procedures, and supervised learning tools. Results: Fluid identification using our pipeline was tested on two separate patient groups: one associated with neovascular age-related macular degeneration, the other showing diabetic macular edema. For training and evaluation purposes, retinal fluid was annotated manually in each cross-section by human expert graders of the Vienna Reading Center. Compared with the manual annotations, our pipeline yields good quantification, visually and in numbers. Conclusions: By demonstrating good automated retinal fluid quantification, our pipeline appears useful to expert graders within their current grading processes. Owing to dimension reduction, the actual learning part is fast and requires only a few training samples. Hence, it is well-suited for integration into actual manufacturers' devices, further improving segmentation by its use in daily clinical life.
A Method to Test Model Calibration Techniques: Preprint
Energy Technology Data Exchange (ETDEWEB)
Judkoff, Ron; Polly, Ben; Neymark, Joel
2016-09-01
This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
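The surrogate-data testing loop described above can be sketched end-to-end with a deliberately tiny stand-in "building model" (a single UA·HDD heating model; the parameter names, degree-day values, noise level, and thresholds below are invented for illustration, not the NREL procedure):

```python
import numpy as np

def simulate_bills(ua, hdd):
    """Surrogate 'audit model': monthly heating energy = UA * HDD."""
    return ua * hdd

def calibrate(bills, hdd):
    """Least-squares fit of the single model parameter UA to utility bills."""
    return float(np.dot(hdd, bills) / np.dot(hdd, hdd))

# 1) Generate surrogate utility-bill data from a known 'true' building.
hdd = np.array([800., 700., 500., 300., 100., 50., 40., 60., 150., 350., 600., 750.])
ua_true = 0.25
bills = simulate_bills(ua_true, hdd) + np.random.default_rng(4).normal(0.0, 5.0, 12)

# 2) Calibrate, then score the three figures of merit named in the abstract.
ua_fit = calibrate(bills, hdd)
param_closure = abs(ua_fit - ua_true) / ua_true                    # closure on 'true' inputs
fit_error = np.linalg.norm(simulate_bills(ua_fit, hdd) - bills)    # goodness of fit
true_savings = simulate_bills(ua_true, hdd).sum() * 0.2            # known 20% retrofit
pred_savings = simulate_bills(ua_fit, hdd).sum() * 0.2             # predicted savings
savings_error = abs(pred_savings - true_savings) / true_savings    # savings accuracy
print(param_closure < 0.1 and savings_error < 0.1)
```

Because the "truth" is generated by the model itself, all three figures of merit are computable exactly — the advantage of synthetic surrogate data that the paper contrasts with real building data sets.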
A dimensionality reduction technique for 2D scattering problems in photonics
Alyona Ivanova, O. V.; Stoffer, Remco; Hammer, Manfred
2010-03-01
This paper describes a simulation method for 2D frequency domain scattering problems in photonics. The technique reduces the spatial dimensionality of the problem by means of global, continuous mode expansion combined with a variational formalism; the resulting equations are solved using a finite element method. Transparent influx boundary conditions and perfectly matched layers are employed at the computational window boundaries. Numerical examples validate the method.
Numerical model updating technique for structures using firefly algorithm
Sai Kubair, K.; Mohan, S. C.
2018-03-01
Numerical model updating is a technique used for updating existing numerical models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is updating the numerical model to closely match experimental data obtained from a real or prototype test structure. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the numerical model with the experimental results obtained. The variables for the updating can be material properties, geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results show that the updating brings the numerical models into close agreement with the experimental ones.
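A compact sketch of the firefly-algorithm updating loop, applied to a cantilever tip-deflection example like the paper's first test case. The beam properties, search bounds, and algorithm constants below are illustrative assumptions, and the textbook deflection formula δ = PL³/(3EI) stands in for the numerical model:

```python
import math, random

def firefly_minimize(obj, bounds, n=20, iters=150, seed=5):
    """Minimal 1D firefly algorithm: each firefly moves toward every
    brighter one (attractiveness decaying with squared distance), plus a
    shrinking random walk."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    gamma = 1.0 / (hi - lo) ** 2          # scale attraction to the domain
    alpha = 0.1 * (hi - lo)               # initial random-walk amplitude
    for _ in range(iters):
        bright = [-obj(x) for x in xs]    # brightness = negative cost
        for i in range(n):
            for j in range(n):
                if bright[j] > bright[i]:
                    beta = math.exp(-gamma * (xs[i] - xs[j]) ** 2)
                    xs[i] += beta * (xs[j] - xs[i]) + alpha * (rng.random() - 0.5)
                    xs[i] = min(max(xs[i], lo), hi)
        alpha *= 0.97                     # cool the random walk
    return min(xs, key=obj)

# Model updating toy: recover Young's modulus E from a 'measured' cantilever
# tip deflection, delta = P L^3 / (3 E I)  (all numbers invented).
P, L, I = 1000.0, 2.0, 8e-6               # load [N], length [m], inertia [m^4]
delta_measured = 1.6667e-3                # 'experimental' deflection [m]
cost = lambda E: (P * L ** 3 / (3 * E * I) - delta_measured) ** 2

E_best = firefly_minimize(cost, bounds=(50e9, 400e9))
print(abs(E_best - 2e11) / 2e11 < 0.05)   # recovers E near 200 GPa
```

In the paper's setting the objective would compare a MATLAB finite-element response (tip deflection or natural frequencies) against measured values instead of this closed-form expression.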
A noise reduction technique based on nonlinear kernel function for heart sound analysis.
Mondal, Ashok; Saxena, Ishan; Tang, Hong; Banerjee, Poulami
2017-02-13
The main difficulty encountered in the interpretation of cardiac sounds is interference from noise. The contaminating noise obscures relevant information that is useful for the recognition of heart diseases. The unwanted signals are produced mainly by the lungs and the surrounding environment. In this paper, a novel heart sound de-noising technique is introduced based on a combined framework of wavelet packet transform (WPT) and singular value decomposition (SVD). The most informative node of the wavelet tree is selected on the criterion of a mutual information measurement. Next, the coefficients corresponding to the selected node are processed by the SVD technique to suppress the noisy component of the heart sound signal. To justify the efficacy of the proposed technique, several experiments have been conducted with a heart sound dataset, including normal and pathological cases at different signal-to-noise ratios. The significance of the method is validated by statistical analysis of the results. The biological information preserved in the de-noised heart sound (HS) signal is evaluated by a k-means clustering algorithm and Fit Factor calculation. The overall results show that the proposed method is superior to the baseline methods.
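The SVD stage of such a framework can be sketched in isolation. The wavelet-packet node selection is omitted here; the Hankel (singular-spectrum) embedding, window length, rank, and the synthetic "heart sound" below are illustrative assumptions rather than the authors' pipeline:

```python
import numpy as np

def svd_denoise(signal, window=50, rank=2):
    """SVD-based noise suppression: embed the signal in a Hankel trajectory
    matrix, keep the top singular components, and average anti-diagonals
    back into a 1D signal."""
    n = len(signal)
    rows = n - window + 1
    H = np.lib.stride_tricks.sliding_window_view(signal, window)
    u, s, vt = np.linalg.svd(H, full_matrices=False)
    H_low = (u[:, :rank] * s[:rank]) @ vt[:rank]     # rank-truncated matrix
    out = np.zeros(n)                                # diagonal averaging
    counts = np.zeros(n)
    for i in range(rows):
        out[i:i + window] += H_low[i]
        counts[i:i + window] += 1
    return out / counts

# Synthetic 'heart sound': a damped low-frequency transient in broadband noise.
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 8 * t) * np.exp(-3 * t)
noisy = clean + 0.4 * np.random.default_rng(6).standard_normal(1000)

den = svd_denoise(noisy)
snr = lambda x: 10 * np.log10(np.sum(clean ** 2) / np.sum((x - clean) ** 2))
print(snr(den) > snr(noisy) + 5)   # several dB of SNR improvement
```

A damped sinusoid spans a rank-2 trajectory space, which is why a rank-2 truncation preserves the transient while discarding most of the broadband noise; the WPT stage in the paper plays a comparable role of isolating the informative subband first.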
Kal, Subhadeep; Mohanty, Nihar; Farrell, Richard A.; Franke, Elliott; Raley, Angelique; Thibaut, Sophie; Pereira, Cheryl; Pillai, Karthik; Ko, Akiteru; Mosden, Aelan; Biolsi, Peter
2017-04-01
Scaling beyond the 7nm technology node demands significant control over variability, down to a few angstroms, in order to achieve reasonable yield. For example, to meet current scaling targets it is highly desirable to achieve sub-30nm-pitch line/space features at back end of line (BEOL) or front end of line (FEOL), and uniform, precise contact/hole patterning at middle of line (MOL). One of the quintessential requirements for such precise and possibly self-aligned patterning strategies is superior etch selectivity between the target films while other masks/films are exposed. The need to achieve high etch selectivity becomes more evident for unit process development at MOL and BEOL, as a result of low-density film choices (compared to FEOL film choices) due to the lower temperature budget. Low etch selectivity with conventional plasma and wet chemical etch techniques causes significant gouging (unintended etching of the etch stop layer, as shown in Fig. 1), high line edge roughness (LER)/line width roughness (LWR), non-uniformity, etc. In certain circumstances this may lead to added downstream process stochastics. Furthermore, conventional plasma etches may also have the added disadvantages of plasma VUV damage and corner rounding (Fig. 1). Finally, the above-mentioned factors can potentially compromise edge placement error (EPE) and/or yield. Therefore a process flow enabled with extremely highly selective etches, inherent to film properties and/or etch chemistries, is a significant advantage. To improve etch selectivity for certain etch steps during a process flow, we have to implement alternative highly selective, plasma-free techniques in conjunction with conventional plasma etches (Fig. 2). In this article, we present our plasma-free, chemical gas-phase etch technique using chemistries that have high selectivity towards a spectrum of films owing to the reaction mechanism (as shown in Fig. 1). Gas-phase etches also help eliminate plasma damage to the
International Nuclear Information System (INIS)
Khan, M.A.; Asghar, M.A.; Ahmed, A.; Iqbal, J.; Shamsuddin, Z.A.
2013-01-01
Dundi-cut whole red chillies (Capsicum indicum) are the most revenue-generating commodity of Pakistan. Accordingly, the competence and magnitude of manual hand-picked sorting of red chillies in the reduction of total aflatoxins (AFs) content were assessed in the present study. AFs contents were determined by the thin layer chromatography (TLC) technique. On the basis of AFs content, red chilli samples were grouped as Group A with 1 to 20 µg/kg, Group B with 20 to 30 µg/kg, Group C with 30 to 100 µg/kg, and Group D quality samples with 100 to 150 µg/kg. Physically identifiable defects, including midget/dwarfed, damaged, broken, dusty and dirty pods, were looked for, and such pods were removed. A reduction of 90-100% of AFs was achieved in Group A, 65-80% in B, 65-75% in C and 70% in D quality samples. An average of 78% reduction in AFs content was achieved. Hence, non-destructive physical hand-picked sorting of red chillies can be applied as a rapid, safe and cost-effective method for the reduction of AFs content in red chillies with preserved nutritional values. (author)
Directory of Open Access Journals (Sweden)
Bizon Katarzyna
2015-09-01
Full Text Available Over the last decades the method of proper orthogonal decomposition (POD) has been successfully employed for reduced order modelling (ROM) in many applications, including distributed parameter models of chemical reactors. Nevertheless, there are still a number of issues that need further investigation. Among them is the policy for the collection of a representative ensemble of experimental or simulation data, which is the starting and perhaps most crucial point of the POD-based model reduction procedure. This paper summarises the theoretical background of the POD method and briefly discusses the sampling issue. Next, the reduction procedure is applied to an idealised model of a circulating fluidised bed combustor (CFBC). The results obtained confirm that a proper choice of sampling strategy is essential for convergence of the modes; however, even a low number of observations can be sufficient for the determination of a faithful dynamical ROM.
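The snapshot-POD procedure discussed here can be sketched as follows. This is a generic textbook implementation (not the paper's CFBC code), with the retained-energy threshold `energy` as an illustrative parameter.

```python
import numpy as np

def pod_basis(snapshots, energy=0.99):
    """Snapshot POD: return the orthonormal modes capturing the
    requested fraction of fluctuation energy, plus the ensemble mean.
    snapshots: (n_dof, n_snapshots) array of collected states."""
    mean = snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
    cum = np.cumsum(s ** 2) / np.sum(s ** 2)    # cumulative energy fraction
    r = int(np.searchsorted(cum, energy)) + 1   # smallest rank reaching it
    return U[:, :r], mean
```

A reduced-order model is then obtained by projecting the governing equations onto the span of the returned modes; the quality of the basis depends directly on how the snapshot ensemble was sampled, which is the paper's central point.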
Plasticity models of material variability based on uncertainty quantification techniques
Energy Technology Data Exchange (ETDEWEB)
Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)
2017-11-01
The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.
Models and Techniques for Proving Data Structure Lower Bounds
DEFF Research Database (Denmark)
Larsen, Kasper Green
In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear space static data structures… bound of tu·tq = Ω(lg^(d−1) n). For ball range searching, we get a lower bound of tu·tq = Ω(n^(1−1/d)). The highest previous lower bound proved in the group model does not exceed (lg n / lg lg n)^2 on the maximum of tu and tq. Finally, we present a new technique for proving lower bounds…
Driving the Model to Its Limit: Profile Likelihood Based Model Reduction.
Maiwald, Tim; Hass, Helge; Steiert, Bernhard; Vanlier, Joep; Engesser, Raphael; Raue, Andreas; Kipkeew, Friederike; Bock, Hans H; Kaschek, Daniel; Kreutz, Clemens; Timmer, Jens
2016-01-01
In systems biology, one of the major tasks is to tailor model complexity to information content of the data. A useful model should describe the data and produce well-determined parameter estimates and predictions. Too small of a model will not be able to describe the data whereas a model which is too large tends to overfit measurement errors and does not provide precise predictions. Typically, the model is modified and tuned to fit the data, which often results in an oversized model. To restore the balance between model complexity and available measurements, either new data has to be gathered or the model has to be reduced. In this manuscript, we present a data-based method for reducing non-linear models. The profile likelihood is utilised to assess parameter identifiability and designate likely candidates for reduction. Parameter dependencies are analysed along profiles, providing context-dependent suggestions for the type of reduction. We discriminate four distinct scenarios, each associated with a specific model reduction strategy. Iterating the presented procedure eventually results in an identifiable model, which is capable of generating precise and testable predictions. Source code for all toy examples is provided within the freely available, open-source modelling environment Data2Dynamics based on MATLAB available at http://www.data2dynamics.org/, as well as the R packages dMod/cOde available at https://github.com/dkaschek/. Moreover, the concept is generally applicable and can readily be used with any software capable of calculating the profile likelihood.
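The profile-likelihood computation at the heart of this reduction method can be sketched generically. This is not the Data2Dynamics or dMod/cOde implementation; it is a minimal illustration using `scipy.optimize.minimize`, where a flat profile along a parameter signals non-identifiability and marks that parameter as a reduction candidate.

```python
import numpy as np
from scipy.optimize import minimize

def profile_likelihood(neg_log_lik, theta_hat, index, values):
    """Profile one parameter: fix theta[index] at each value in turn
    and re-optimise all remaining parameters, recording the best
    achievable negative log-likelihood."""
    profile = []
    for v in values:
        free0 = np.delete(theta_hat, index)
        res = minimize(lambda free: neg_log_lik(np.insert(free, index, v)),
                       free0, method="Nelder-Mead")
        profile.append(res.fun)
    return np.array(profile)
```

A parameter whose profile stays below the chosen confidence threshold over an unbounded range is practically non-identifiable, and the shape of the parameter trajectories along the profile suggests which of the four reduction scenarios applies.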
Korzennik, Sylvain
1997-01-01
Under the direction of Dr. Rhodes, and the technical supervision of Dr. Korzennik, the data assimilation of high-spatial-resolution solar dopplergrams was carried out throughout the program on the Intel Delta Touchstone supercomputer. With the help of a research assistant, partially supported by this grant, and under the supervision of Dr. Korzennik, code development was carried out at SAO using various available resources. To ensure cross-platform portability, PVM was selected as the message-passing library. A parallel implementation of power spectra computation for helioseismology data reduction using PVM was successfully completed. It was successfully ported to SMP architectures (i.e., Sun) and to some MPP architectures (i.e., the CM5). Due to limitations of the implementation of PVM on the Cray T3D, the port to that architecture was not completed at the time.
Stability Analysis and H∞ Model Reduction for Switched Discrete-Time Time-Delay Systems
Directory of Open Access Journals (Sweden)
Zheng-Fan Liu
2014-01-01
Full Text Available This paper is concerned with the problem of exponential stability and H∞ model reduction of a class of switched discrete-time systems with state time-varying delay. Some subsystems can be unstable. Based on the average dwell time technique and the Lyapunov-Krasovskii functional (LKF) approach, sufficient conditions for exponential stability with H∞ performance of such systems are derived in terms of linear matrix inequalities (LMIs). For high-order systems, sufficient conditions for the existence of a reduced-order model are derived in terms of LMIs. Moreover, the error system is guaranteed to be exponentially stable and an H∞ error performance is guaranteed. Numerical examples are given to demonstrate the effectiveness and reduced conservatism of the obtained results.
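For context, the standard unweighted baseline against which such H∞ reduction methods are usually compared is balanced truncation of a stable discrete-time system. The sketch below is the generic square-root algorithm, not the paper's LMI-based construction, and handles neither switching nor delays.

```python
import numpy as np
from scipy.linalg import cholesky, solve_discrete_lyapunov, svd

def balanced_truncation(A, B, C, r):
    """Square-root balanced truncation of a stable discrete-time system
    (A, B, C): balance the controllability and observability Gramians,
    then keep the r states with the largest Hankel singular values."""
    P = solve_discrete_lyapunov(A, B @ B.T)      # controllability Gramian
    Q = solve_discrete_lyapunov(A.T, C.T @ C)    # observability Gramian
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)                    # s: Hankel singular values
    T = np.diag(s ** -0.5) @ U.T @ Lq.T          # balancing transformation
    Tinv = Lp @ Vt.T @ np.diag(s ** -0.5)
    Ab, Bb, Cb = T @ A @ Tinv, T @ B, C @ Tinv
    return Ab[:r, :r], Bb[:r], Cb[:, :r], s
```

The discarded Hankel singular values give the familiar twice-the-tail-sum H∞ error bound for this baseline; the paper's LMI conditions play the analogous role for the switched, delayed case.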
Mercury Concentration Reduction In Waste Water By Using Liquid Surfactant Membrane Technique
International Nuclear Information System (INIS)
Prayitno; Sardjono, Joko
2000-01-01
The objective of this research is to determine the effectiveness of a liquid surfactant membrane in reducing the mercury found in waste water. The process can be regarded as the transfer of dissolved mercury from the external phase, which functions as a moving phase, through to the membrane's internal phase. The existence of convective rotation results in a change of the surface pressure over the whole interface, so the dissolved mercury disperses over every part of the interface. Because of this rotation, the dissolved mercury fills every space with particles from the dispersion phase in accordance with its volume. Therefore, the change of the surface pressure over the whole interface can be kept stable to adsorb mercury. The mercury adsorbed in the internal phase moves to the dispersed particles through a molecular diffusion process. In the liquid surfactant membrane technique the membrane phase is realized as an emulsion consisting of kerosene as solvent, sorbitan monooleate (Span 80) 5% (v/v) as surfactant, tributyl phosphate (TBP) 10% (v/v) as extractant, and dissolved mercury as the internal phase. All of these are mixed and stirred at 8000 rpm for 20 minutes. After a stable emulsion is formed, the dissolved mercury is extracted. The effective conditions required to achieve mercury ion recovery with this technique were obtained through extraction and re-extraction processes. The process was conducted for 30 minutes with membrane and mercury at a ratio of 1:1 at a concentration of 100 ppm. The resulting efficiency was 99.6%. This high efficiency shows that the liquid surfactant membrane technique is very effective in treating waste water contaminated by mercury.
Modeling with data tools and techniques for scientific computing
Klemens, Ben
2009-01-01
Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods
Conde, Olga M.; Amado, Marta; García-Allende, Pilar B.; Cobo, Adolfo; López-Higuera, José Miguel
2007-04-01
Foreign object detection processes are improving thanks to imaging spectroscopy techniques, through the employment of hyperspectral systems such as prism-grating-prism spectrographs. These devices offer a valuable but sometimes huge and redundant amount of spectral and spatial information that facilitates and speeds up the classification and sorting of materials in industrial production chains. In this work, different algorithms for supervised and non-supervised Principal Component Analysis (PCA) are thoroughly applied to experimentally acquired hyperspectral images. The evaluated PCA versions implement different statistical mechanisms to maximize class separability. The PCA alternatives (the traditional "m-method", "J-measure", SEPCOR and "Supervised PCA") are compared, taking into account how the achieved spectral compression affects classification performance in terms of accuracy and execution time. During the whole process, the classification stage is fixed and performed by an Artificial Neural Network (ANN). The developed techniques have been tested and successfully validated in the tobacco industry, where detection of plastics, cords, cardboard, paper, textile threads, etc. must be performed so that only tobacco leaves enter the industrial chain.
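The unsupervised PCA baseline that the paper's variants build on can be sketched in a few lines. This is a generic implementation (none of the m-method, J-measure, SEPCOR or Supervised PCA variants), in which each row of `X` would be one pixel's spectrum.

```python
import numpy as np

def pca_compress(X, n_components):
    """Unsupervised PCA: project each spectrum (row of X) onto the
    leading principal components of the ensemble, compressing the
    spectral dimension before classification."""
    mean = X.mean(axis=0)
    _, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    components = Vt[:n_components]          # rows: principal directions
    scores = (X - mean) @ components.T      # compressed representation
    return scores, components, mean
```

The supervised variants differ in how the projection directions are chosen (class separability rather than raw variance); the compressed `scores` would then feed the fixed ANN classifier.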
DEFF Research Database (Denmark)
Dal, Mehmet; Teodorescu, Remus
2011-01-01
…reduction techniques are investigated, and the effectiveness of chattering suppression for current regulation of PM DC drives is tested. The sampling rate was also examined to determine how it affects the amplitude of chattering. This paper concentrates on various combinations of observer-based methods… simulations. To demonstrate the effectiveness of each method, several experiments were performed on a DSP-based PM DC motor drive system. Then, the newly proposed combinations of these methods were implemented. The hardware implementation results are comparatively presented and discussed.
Assessment of patient dose reduction when using AEC technique in toshiba 64 MDCT
International Nuclear Information System (INIS)
Khojali, Wadah Mohamed Ali.
2016-03-01
The aim of this research is to evaluate the efficiency of the AEC (SUREDOSE) used in Toshiba CT scanners in reducing patient radiation dose. 107 patients were studied across four CT scanners. Scan factors and the radiation dose received during abdominal CT were recorded for the contrast phases of the abdominal CT scan: the arterial contrast phase was done with the routine manual protocol, i.e. fixed mA and kVp regardless of patient age, weight and reason for the scan, while the venous phase was done using AEC. The mA values were considerably lower in the venous phase than in the arterial phase for all hospitals, with the exception of hospital 4, where the mA values increased. There were no variations between the two phases in the other scan factors (kVp, pitch, slice thickness, scan length), which indicates that the software was mainly changing the mA values. The mA also showed wide variations during the venous phase, as a result of the varying mA applied by the AEC for different patient ages and weights. The collected data showed that the application of SUREDOSE decreased the average mA by 56.6%, 61.6% and 56.6% for hospitals 1, 2 and 3 respectively. The reductions in the average CTDIvol were 54.2% and 64.1% in hospitals 1, 2 and 3 respectively. The average DLPs were also lower between the phases, by 57.1%, 62.8% and 57.5% in hospitals 1, 2 and 3 respectively. In hospital 4 one row of the CT detector was not functioning, which disturbed the SUREDOSE software, leading to an increase of the mA values and hence the patient radiation dose: the mA, CTDIvol and DLP in this hospital increased by 47.7%, 54.3% and 42.8% respectively. This highlights the risk of not applying the AEC correctly. The non-application of this software was due only to a lack of knowledge of how to use it and of the dose-reduction benefits associated with it. Application of this software is very useful, and operators should be trained to use it in all CT exams. (Author)
Golosio, Bruno; Schoonjans, Tom; Brunetti, Antonio; Oliva, Piernicola; Masala, Giovanni Luca
2014-03-01
Licensing provisions: GNU General Public License version 3 No. of lines in distributed program, including test data, etc.: 83617 No. of bytes in distributed program, including test data, etc.: 1038160 Distribution format: tar.gz Programming language: C++. Computer: Tested on several PCs and on Mac. Operating system: Linux, Mac OS X, Windows (native and cygwin). RAM: It is dependent on the input data but usually between 1 and 10 MB. Classification: 2.5, 21.1. External routines: XrayLib (https://github.com/tschoonj/xraylib/wiki) Nature of problem: Simulation of a wide range of X-ray imaging and spectroscopy experiments using different types of sources and detectors. Solution method: XRMC is a versatile program that is useful for the simulation of a wide range of X-ray imaging and spectroscopy experiments. It enables the simulation of monochromatic and polychromatic X-ray sources, with unpolarised or partially/completely polarised radiation. Single-element detectors as well as two-dimensional pixel detectors can be used in the simulations, with several acquisition options. In the current version of the program, the sample is modelled by combining convex three-dimensional objects demarcated by quadric surfaces, such as planes, ellipsoids and cylinders. The Monte Carlo approach makes XRMC able to accurately simulate X-ray photon transport and interactions with matter up to any order of interaction. The differential cross-sections and all other quantities related to the interaction processes (photoelectric absorption, fluorescence emission, elastic and inelastic scattering) are computed using the xraylib software library, which is currently the most complete and up-to-date software library for X-ray parameters. The use of variance reduction techniques makes XRMC able to reduce the simulation time by several orders of magnitude compared to other general-purpose Monte Carlo simulation programs. Running time: It is dependent on the complexity of the simulation. 
For the examples…
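The variance-reduction idea credited with XRMC's speed-up is easy to illustrate in miniature. The following antithetic-variates example is generic (it is not one of XRMC's actual techniques) and shows how a simple sampling trick cuts estimator variance for a smooth integrand.

```python
import numpy as np

def naive_mc(f, n, rng):
    """Plain Monte Carlo estimate of the integral of f over [0, 1]."""
    return f(rng.random(n)).mean()

def antithetic_mc(f, n, rng):
    """Antithetic variates: pair each sample u with 1 - u. For a
    monotone integrand the two evaluations are negatively correlated,
    which lowers the variance of the averaged estimator at the same
    total sample count."""
    u = rng.random(n // 2)
    return 0.5 * (f(u) + f(1.0 - u)).mean()
```

Photon-transport codes use more elaborate schemes (forced interactions, importance sampling, splitting), but the principle is the same: reshape the sampling so each history carries more information, reducing the runs needed for a given statistical error.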
Energy Technology Data Exchange (ETDEWEB)
Xiao Yamping; Holappa, L. [Helsinki Univ. of Technology, Otaniemi (Finland). Lab. of Metallurgy
1996-12-31
This article summarizes research work on the thermodynamics of chromium slags and kinetic modelling of chromite reduction. The thermodynamic properties of FeCr slag systems were calculated with the regular solution model. The effects of the CaO/MgO ratio, the Al{sub 2}O{sub 3} amount, and the slag basicity on the activities of chromium oxides and the oxidation state of chromium were examined. The calculated results were compared to experimental data in the literature. In the kinetic modelling of chromite reduction, the reduction possibilities and tendencies of the chromite constituents with CO were analysed on the basis of thermodynamic calculation. Two reaction models, a structural grain model and a multi-layer reaction model, were constructed and applied to simulate chromite pellet reduction and chromite lumpy ore reduction, respectively. The calculated reduction rates were compared with the experimental measurements and the reaction mechanisms were discussed. (orig.) SULA 2 Research Programme; 4 refs.
Dose reduction for chest CT: comparison of two iterative reconstruction techniques.
Pourjabbar, Sarvenaz; Singh, Sarabjeet; Kulkarni, Naveen; Muse, Victorine; Digumarthy, Subba R; Khawaja, Ranish Deedar Ali; Padole, Atul; Do, Synho; Kalra, Mannudeep K
2015-06-01
Lowering the radiation dose in computed tomography (CT) results in low-quality, noisy images. Iterative reconstruction techniques are currently used to lower image noise and improve image quality. To evaluate lesion detection and diagnostic acceptability of chest CT images acquired at a CTDIvol of 1.8 mGy and processed with two different iterative reconstruction techniques. Twenty-two patients (mean age, 60 ± 14 years; men, 13; women, 9; body mass index, 27.4 ± 6.5 kg/m(2)) gave informed consent for acquisition of a low-dose (LD) series in addition to the standard-dose (SD) chest CT on a 128-slice multidetector CT (MDCT). LD images were reconstructed with SafeCT C4, L1, and L2 settings, and Safire S1, S2, and S3 settings. Three thoracic radiologists assessed the LD image series (S1, S2, S3, C4, L1, and L2) for lesion detection and compared lesion margin, visibility of normal structures, and diagnostic confidence with SD chest CT. Inter-observer agreement (kappa) was calculated. Average CTDIvol was 6.4 ± 2.7 mGy and 1.8 ± 0.2 mGy for the SD and LD series, respectively. No additional lesion was found in SD as compared to LD images. Visibility of ground-glass opacities and lesion margins, as well as the visibility of normal structures, was not affected on LD. CT image visibility of the major fissure and pericardium was not optimal in some cases (n = 5). Objective image noise in some low-dose images processed with SafeCT and Safire was similar to SD images (P value > 0.5). Routine LD chest CT reconstructed with an iterative reconstruction technique can provide similar diagnostic information in terms of lesion detection, margin, and diagnostic confidence as compared to SD, regardless of the iterative reconstruction settings. © The Foundation Acta Radiologica 2014.
Meyer-Bäse, Anke; Lespinats, Sylvain; Steinbrücker, Frank; Saalbach, Axel; Schlossbauer, Thomas; Barbu, Adrian
2009-04-01
Visualization of multi-dimensional data sets becomes a critical and significant area in modern medical image processing. To analyze such high dimensional data, novel nonlinear embedding approaches become increasingly important to show dependencies among these data in a two- or three-dimensional space. This paper investigates the potential of novel nonlinear dimensional data reduction techniques and compares their results with proven nonlinear techniques when applied to the differentiation of malignant and benign lesions described by high-dimensional data sets arising from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Two important visualization modalities in medical imaging are presented: the mapping on a lower-dimensional data manifold and the image fusion.
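One such nonlinear embedding can be sketched compactly. The following is a minimal Laplacian-eigenmaps implementation for illustration only; it is not necessarily among the specific methods evaluated in the paper, and `k` (neighbourhood size) is an illustrative choice.

```python
import numpy as np

def laplacian_eigenmap(X, k=8, dim=2):
    """Minimal nonlinear embedding sketch (Laplacian eigenmaps):
    build a symmetric k-NN graph over the samples, then embed with
    the eigenvectors of the graph Laplacian belonging to the
    smallest nonzero eigenvalues."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]   # k nearest neighbours
    W = np.zeros_like(d2)
    rows = np.repeat(np.arange(len(X)), k)
    W[rows, idx.ravel()] = 1.0
    W = np.maximum(W, W.T)                     # symmetrise the adjacency
    L = np.diag(W.sum(1)) - W                  # unnormalised graph Laplacian
    vals, vecs = np.linalg.eigh(L)
    return vecs[:, 1:dim + 1]                  # skip the constant eigenvector
```

Applied to high-dimensional DCE-MRI lesion descriptors, such an embedding yields the two- or three-dimensional map on which malignant and benign cases can be visually compared.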
TECHNIQUES OF PAIN REDUCTION IN THE NORMAL LABOR PROCESS : SYSTEMATIC REVIEW
Directory of Open Access Journals (Sweden)
Wan Anita
2017-10-01
Full Text Available Pain during labor is a physiological condition commonly experienced by most maternity mothers. Labor pain is a subjective experience caused by uterine muscle ischemia, withdrawal and traction of the uterine ligaments, ovarian traction, and distension of the fallopian tubes, lower uterus, pelvic floor muscles and perineum. The pain in labor arises from psychic responses and physical reflexes. The purpose of this systematic review is to identify effective methods for reducing pain in the labor process, so that they can be used as alternative methods of pain reduction for patients who will give birth. This systematic review covers articles published via the Google Scholar site, with 17 journals reviewed. In the effort to reduce labor pain, there are various methods that can be used in providing midwifery care during childbirth. Based on this systematic review, it can be concluded that many pain-reduction methods can be used to reduce labor pain: counter-pressure and abdominal lifting, hypnobirthing, religious and murottal music, classical and local music, relaxation, compresses, warm ginger drink, acupressure, TENS, and aromatherapy.
Plants status monitor: Modelling techniques and inherent benefits
International Nuclear Information System (INIS)
Breeding, R.J.; Lainoff, S.M.; Rees, D.C.; Prather, W.A.; Fickiessen, K.O.E.
1987-01-01
The Plant Status Monitor (PSM) is designed to provide plant personnel with information on the operational status of the plant and compliance with the plant technical specifications. The PSM software evaluates system models using a 'distributed processing' technique in which detailed models of individual systems are processed rather than by evaluating a single, plant-level model. In addition, development of the system models for PSM provides inherent benefits to the plant by forcing detailed reviews of the technical specifications, system design and operating procedures, and plant documentation. (orig.)
Norman, M.; Sundvor, I.; Denby, B. R.; Johansson, C.; Gustafsson, M.; Blomqvist, G.; Janhäll, S.
2016-06-01
Road dust emissions in Nordic countries still remain a significant contributor to PM10 concentrations, mainly due to the use of studded tyres. A number of measures have been introduced in these countries in order to reduce road dust emissions. These include speed reductions, reductions in studded tyre use, dust binding and road cleaning. Implementation of such measures can be costly, and some confidence in the impact of the measures is required to weigh the costs against the benefits. Modelling tools are thus required that can predict the impact of these measures. In this paper the NORTRIP road dust emission model is used to simulate real-world abatement measures that have been carried out in Oslo and Stockholm. In Oslo both vehicle speed and studded tyre share reductions occurred over a period from 2004 to 2006 on a major arterial road, RV4. In Stockholm a studded tyre ban on Hornsgatan in 2010 saw a significant reduction in studded tyre share together with a reduction in traffic volume. The model is found to correctly simulate the impact of these measures on the PM10 concentrations when compared to available kerbside measurement data. Importantly, meteorology can have a significant impact on the concentrations through both surface and dispersion conditions. The first year after the implementation of the speed reduction on RV4 was much drier than the previous year, resulting in higher mean concentrations than expected. The following year was much wetter, with significant rain and snowfall leading to wet or frozen road surfaces for 83% of the four-month study period. This significantly reduced the net PM10 concentrations, by 58%, compared to the expected values if meteorological conditions had been similar to the previous years. In the years following the studded tyre ban on Hornsgatan, road wear production through studded tyres decreased by 72%, due to a combination of reduced traffic volume and reduced studded tyre share. However, after accounting for exhaust
Mondal, Ashok; Bhattacharya, P S; Saha, Goutam
2011-01-01
During the recording of lung sound (LS) signals from the chest wall of a subject, heart sound (HS) signals always interfere. This obscures the features of the lung sound signals and creates confusion about any pathological states of the lungs. A novel method based on the empirical mode decomposition (EMD) technique is proposed in this paper for reducing the undesired heart sound interference in the desired lung sound signals. In this method, the mixed signal is split into several components. Some of these components contain larger proportions of the interfering signals, like heart sound, environmental noise, etc., and are filtered out. Experiments have been conducted on simulated and real-time recorded mixed signals of heart and lung sounds. The proposed method is found to be superior in terms of time domain, frequency domain, and time-frequency domain representations, and also in a listening test performed by a pulmonologist.
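The sifting step at the core of EMD can be sketched as follows. This is a crude illustrative version (spline envelopes through local extrema, fixed sift count), not the paper's implementation; boundary handling and stopping criteria in a real EMD are considerably more careful.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def extract_imf(x, n_sift=8):
    """Extract the first intrinsic mode function (IMF) by sifting:
    repeatedly subtract the mean of cubic-spline envelopes drawn
    through the local maxima and minima of the signal."""
    h = x.astype(float).copy()
    t = np.arange(len(x))
    for _ in range(n_sift):
        mx = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        mn = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(mx) < 4 or len(mn) < 4:   # too few extrema to envelope
            break
        upper = CubicSpline(mx, h[mx])(t)
        lower = CubicSpline(mn, h[mn])(t)
        h -= 0.5 * (upper + lower)       # remove the local envelope mean
    return h
```

Subtracting the extracted IMF and repeating yields the remaining components; in the paper's scheme, the components dominated by heart sound and other interference are then dropped before the lung sound is reconstructed.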
Advancement in solar evaporation techniques for volume reduction of chemical effluents
International Nuclear Information System (INIS)
Parakasamurthy, K.S.; Pande, D.P.
1994-01-01
A typical example of the advancement of a unit operation for a given requirement is described. Solar evaporation ponds (SEPs) have technical and economic advantages compared to other evaporation methods for concentrating chemical effluents. The operation of an SEP is strongly dependent on environmental and site conditions. Tropical conditions with high solar incidence and good wind speed, along with hot and dry weather, provide a suitable climate for efficient operation of solar evaporation ponds. The particular site selected for the ponds at the Nuclear Fuel Complex (NFC) has rocky terrain with murrum over sheet rock, a very low water table and a small groundwater velocity. During the past twenty-five years, extensive theoretical and experimental investigations have been carried out for the advancement of the solar evaporation technique. (author)
Sensitivity analysis technique for application to deterministic models
International Nuclear Information System (INIS)
Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.
1987-01-01
The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models, using, for example, regression techniques, with simplified models for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method.
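The idea of computing sensitivities directly from code runs, with no response surface in between, can be illustrated with a simple correlation-based indicator. This generic sketch is not the paper's method; `model` stands in for the original computer code and `samplers` for the assumed input distributions.

```python
import numpy as np

def output_correlations(model, samplers, n=2000, seed=1):
    """Crude sensitivity indicators computed directly from model runs
    (no response surface): the correlation of each sampled input with
    the output over a Monte Carlo ensemble."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([draw(rng, n) for draw in samplers])
    y = np.apply_along_axis(model, 1, X)
    return np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])])
```

An input whose distributional assumptions barely move the output distribution shows a near-zero indicator, while dominant inputs stand out, which is the qualitative question the paper's method answers for SPARC.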
Selection of productivity improvement techniques via mathematical modeling
Directory of Open Access Journals (Sweden)
Mahassan M. Khater
2011-07-01
This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program, which can be solved to optimality using traditional methods. Preliminary results of the implementation indicate that productivity can be improved through changes in equipment, and the model can easily be applied in both manufacturing and service industries.
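The selection step can be sketched as a small mixed integer program. All numbers below (six hypothetical techniques, their gains, costs and the budget) are invented for illustration; the paper's actual model covers fifty-four techniques over a four-stage productivity cycle.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

# Hypothetical data: 6 candidate improvement techniques, each with an
# estimated productivity gain and an implementation cost.
gain = np.array([4.0, 2.5, 6.0, 3.0, 5.5, 1.5])
cost = np.array([3.0, 1.0, 5.0, 2.0, 4.0, 1.0])
budget = 8.0

# Binary decision variables x_i in {0, 1}; milp minimizes, so negate the gains.
res = milp(
    c=-gain,
    constraints=LinearConstraint(cost, ub=budget),
    integrality=np.ones_like(gain),   # all variables integer...
    bounds=Bounds(0, 1),              # ...and binary
)
selected = np.flatnonzero(res.x > 0.5)
print(selected, -res.fun)
```

For this toy instance the optimum picks techniques 1, 3, 4 and 5 for a total gain of 12.5 within the budget of 8.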
Guo, Ke; Sun, Jiaming; Qiao, Qun; Guo, Nengqiang; Guo, Liang
2012-06-01
The dermal bra technique was reported by the authors in 2003 for reduction mammaplasty and ptosis correction. The authors have continuously refined the technique; here they share their experience and analyze its long-term safety and efficacy. Three hundred forty-seven patients underwent the dermal bra technique in the authors' department from October of 2003 to October of 2011, and 213 of them were followed successfully for 3 months to 2 years. Patients treated before and after October of 2006 were divided into early and late groups. The incidence of complications, the long-term satisfaction rate, and the modifications that have been developed were noted and analyzed. Short-term complications occurred in 55 breasts (7.9 percent), including hematoma (seroma), delayed wound healing, fat necrosis, deep folds, necrosis, and numbness of the nipple-areola complex. Long-term complications were found in 28 breasts (6.6 percent), including widened scar and enlarged areola, irregular areola, secondary ptosis, sunken nipple-areola complex, numbness of the nipple-areola complex, cyst, and chronic infection. Except for one case of nipple-areola complex numbness, all complications were corrected successfully. The long-term satisfaction rate was 95.7 percent. With three major modifications (W- or V-shaped gland resection, medial rotation of the gland flap, and a modified purse-string suture), the short-term and long-term complication rates (p bra technique and have made it a mature approach for reduction mammaplasty and ptosis correction. Therapeutic, IV.
International Nuclear Information System (INIS)
Domanski, Roman; Azzain, Gassem
2006-01-01
This paper presents an assessment of the possible reduction in heating and cooling requirements of a 300 m² house-office building when simple thermal passive techniques (TPTs) are applied to the building's construction in the city of Sebha in southern Libya. The well-known dynamic simulation software TRNSYS was used as the environment for the numerical experiments in this study. A prototype representing the building was constructed using the available single-thermal-zone model of TRNSYS (Type 19). The built-in ASHRAE Transfer Function Method within this model was used to calculate the heat flux through the building's materials. First, the thermal load on the building without TPTs was evaluated under the weather conditions of a Typical Meteorological Year (TMY) for Sebha. Then, the building was equipped with simple TPTs (such as control of building materials, insulation, shading, infiltration and ventilation with window resizing), subjected to the same weather conditions, and the thermal load was evaluated again in order to report the percentage reduction of the thermal load. The simulation was conducted successfully and yielded a good assessment of the reduction in annual heating and cooling demands of the building. It is shown that about 46% of the annual heating load and 48% of the annual cooling load can be eliminated if suitable simple TPTs are incorporated in buildings. (author)
Reaction invariant-based reduction of the activated sludge model ASM1 for batch applications
DEFF Research Database (Denmark)
Santa Cruz, Judith A.; Mussati, Sergio F.; Scenna, Nicolás J.
2016-01-01
to batch activated sludge processes described by the Activated Sludge Model No. 1 (ASM1) for carbon and nitrogen removal. The objective of the model reduction is to describe the exact dynamics of the states predicted by the original model with a lower number of ODEs. This leads to a reduction...
Modeling and design techniques for RF power amplifiers
Raghavan, Arvind; Laskar, Joy
2008-01-01
The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges of designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers and design more accurate compact device models, with faster extraction routines, to create cost-effective and reliable circuits.
Modeling and Simulation Techniques for Large-Scale Communications Modeling
National Research Council Canada - National Science Library
Webb, Steve
1997-01-01
.... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.
Gan, Y.; Liang, X. Z.; Duan, Q.; Xu, J.; Zhao, P.; Hong, Y.
2017-12-01
The uncertainties associated with the parameters of a hydrological model need to be quantified and reduced for it to be useful for operational hydrological forecasting and decision support. An uncertainty quantification framework is presented to facilitate practical assessment and reduction of model parametric uncertainties. A case study, using the distributed hydrological model CREST for daily streamflow simulation during the period 2008-2010 over ten watersheds, was used to demonstrate the performance of this new framework. Model behaviors across watersheds were analyzed by a two-stage stepwise sensitivity analysis procedure, using the LH-OAT method for screening out insensitive parameters, followed by MARS-based Sobol' sensitivity indices for quantifying each parameter's contribution to the response variance due to its first-order and higher-order effects. Pareto optimal sets of the influential parameters were then found by an adaptive surrogate-based multi-objective optimization procedure, using a MARS model to approximate the parameter-response relationship and the SCE-UA algorithm to search the optimal parameter sets of the adaptively updated surrogate model. The final optimal parameter sets were validated against the daily streamflow simulation of the same watersheds during the period 2011-2012. The stepwise sensitivity analysis procedure efficiently reduced the number of parameters that need to be calibrated from twelve to seven, which helps to limit the dimensionality of the calibration problem and serves to enhance the efficiency of parameter calibration. The adaptive MARS-based multi-objective calibration exercise provided satisfactory solutions to the reproduction of the observed streamflow for all watersheds. The final optimal solutions showed significant improvement when compared to the default solutions, with about 65-90% reduction in 1-NSE and 60-95% reduction in |RB|. The validation exercise indicated a large improvement in model performance with about 40
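The screening stage can be illustrated with a minimal one-at-a-time sensitivity sketch in the spirit of LH-OAT, using a toy response function in place of CREST (the third parameter is deliberately uninfluential):

```python
import numpy as np

def model(p):
    # Toy stand-in for a hydrological model response (e.g., an NSE score):
    # parameter p[2] deliberately has no effect.
    return 2.0 * p[0] + 0.5 * p[1] ** 2 + 0.0 * p[2]

base = np.array([1.0, 1.0, 1.0])
rel_step = 0.1

# One-at-a-time screening: relative change in output per relative parameter change.
effects = []
y0 = model(base)
for i in range(base.size):
    p = base.copy()
    p[i] *= 1.0 + rel_step
    effects.append(abs(model(p) - y0) / (abs(y0) * rel_step))

ranking = np.argsort(effects)[::-1]   # most to least influential
print(effects, ranking)
```

Parameters whose effect falls below a chosen threshold (here, parameter 2 with an effect of exactly zero) are frozen before the expensive calibration stage.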
Effects of Peer Modelling Technique in Reducing Substance Abuse ...
African Journals Online (AJOL)
The study investigated the effects of peer modelling techniques in reducing substance abuse among undergraduates in Nigeria. The participants were one hundred and twenty (120) undergraduate students at the 100 and 400 levels, respectively. There were two groups: one treatment group and one control group.
Using of Structural Equation Modeling Techniques in Cognitive Levels Validation
Directory of Open Access Journals (Sweden)
Natalija Curkovic
2012-10-01
When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom’s taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for this finding. Other techniques that take into account the nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive level validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.
Directory of Open Access Journals (Sweden)
Lauren Ehrlichman
2017-03-01
Background: While various radiographic parameters and the application of manual/gravity stress have been proposed to elucidate instability in Weber B fibula fractures, the prognostic capability of these modalities remains unclear. Determination of anatomic positioning of the mortise is paramount. We propose a novel view, the Gravity Reduction View (GRV), which helps elucidate non-anatomic positioning and reducibility of the mortise. Methods: The patient is positioned lateral decubitus with the injured leg elevated on a holder with the fibula directed superiorly. The x-ray cassette is placed posterior to the heel, with the beam angled at 15° of internal rotation to obtain a mortise view. Our proposed treatment algorithm is based upon the measurement of the medial clear space (MCS) on the GRV versus the static mortise view (and in comparison to the superior clear space (SCS)) and on the reducibility of the MCS. A retrospective review of patients evaluated utilizing the GRV was performed. Results: 26 patients with Weber B fibula fractures were managed according to this treatment algorithm. Mean age was 50.57 years (range: 18-81, SD = 19). 17 patients underwent operative treatment and 9 patients were initially treated nonoperatively. 2 patients demonstrated late displacement and were treated surgically. Using this algorithm, at a mean follow-up of 26 weeks, all patients had a final MCS that was less than the SCS (final mean MCS 2.86 mm vs. mean SCS of 3.32 mm), indicating the effectiveness of the treatment algorithm. Conclusion: The GRV is a novel radiographic view in which deltoid competency, reducibility and initial positioning of the mortise are assessed by comparing a static mortise view with the appearance of the mortise on the GRV. We have developed a treatment algorithm based on the GRV and have found it useful in guiding treatment and successful at achieving anatomic mortise alignment.
AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS
Energy Technology Data Exchange (ETDEWEB)
Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.
2016-10-01
The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem, but also a multi-scale one (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. One solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
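A minimal sketch of the surrogate idea, with a Gaussian process standing in for the reduced order model and a cheap analytic function standing in for the expensive simulation code (the actual RISMC tools and codes are not reproduced here):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def expensive_simulation(x):
    # Stand-in for an hours-long physics code.
    return np.sin(3 * x) + 0.5 * x

# A modest budget of full-code runs...
X_train = rng.uniform(0, 2, size=(25, 1))
y_train = expensive_simulation(X_train).ravel()

# ...trains a surrogate that then answers new queries almost instantly.
surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X_train, y_train)

X_query = np.linspace(0, 2, 50).reshape(-1, 1)
y_true = expensive_simulation(X_query).ravel()
y_pred = surrogate.predict(X_query)
print(float(np.max(np.abs(y_pred - y_true))))
```

Once trained, the surrogate replaces the code in the many-run risk analysis; its approximation error on held-out queries bounds the trust one can place in the reduced model.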
Successful Gastric Volvulus Reduction and Gastropexy Using a Dual Endoscope Technique
Directory of Open Access Journals (Sweden)
Laith H. Jamil
2014-01-01
Gastric volvulus is a life threatening condition characterized by an abnormal rotation of the stomach around an axis. Although the first line treatment of this disorder is surgical, we report here a case of gastric volvulus that was endoscopically managed using a novel strategy. An 83-year-old female with a history of pancreatic cancer status post pylorus-preserving Whipple procedure presented with a cecal volvulus requiring right hemicolectomy. Postoperative imaging included a CT scan and upper GI series that showed a gastric volvulus with the antrum located above the diaphragm. An upper endoscope was advanced through the pylorus into the duodenum and left in this position to keep the stomach under the diaphragm. A second pediatric endoscope was advanced alongside and used to complete percutaneous endoscopic gastrostomy (PEG) placement for anterior gastropexy. The patient’s volvulus resolved and there were no complications. From our review of the literature, the dual endoscope technique employed here has not been previously described. Patients who are poor surgical candidates or those who do not require emergent surgery can possibly benefit the most from similar minimally invasive endoscopic procedures as described here.
Manufacturing Enhancement through Reduction of Cycle Time using Different Lean Techniques
Suganthini Rekha, R.; Periyasamy, P.; Nallusamy, S.
2017-08-01
In modern manufacturing systems the most important parameters on a production line are work in process, takt time and line balancing. In this article lean tools and techniques were implemented to reduce the cycle time. The aim is to enhance the productivity of the water pump pipe line by identifying the bottleneck stations and non-value-added activities. From the initial time study the bottleneck processes were identified, and the necessary improvement measures for the bottleneck processes were then determined. Subsequently the improvement actions were established and implemented using different lean tools such as value stream mapping, 5S and line balancing. The current-state value stream map was developed to describe the existing status and to identify the various problem areas. 5S was used to implement the steps to reduce the process cycle time and unnecessary movements of man and material. The improvement activities were implemented with the required suggestions, and the future-state value stream map was developed. From the results it was concluded that the total cycle time was reduced by about 290.41 seconds and the customer demand met increased by about 760 units.
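The takt-time and line-balancing arithmetic behind the bottleneck analysis can be sketched as follows; the station times, shift length and demand below are hypothetical, not the study's data:

```python
# Hypothetical line data: cycle time (seconds) at each workstation.
station_times = [42.0, 55.0, 38.0, 61.0, 47.0]

available_seconds = 8 * 3600 * 0.9   # one shift at 90% uptime (assumed)
daily_demand = 400                   # units/day (assumed)

takt = available_seconds / daily_demand          # pace the customer demands
bottleneck = max(station_times)                  # slowest station limits output
balance_eff = sum(station_times) / (len(station_times) * bottleneck)

print(f"takt {takt:.1f}s, bottleneck {bottleneck:.1f}s, balance {balance_eff:.1%}")
```

If the bottleneck cycle time exceeds takt, the line cannot meet demand; line balancing redistributes work to raise the balance efficiency and pull the bottleneck below takt.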
Fault current reduction by SFCL in a distribution system with PV using fuzzy logic technique
Mounika, M.; Lingareddy, P.
2017-07-01
In the modern power system, as the utilization of electric power is very wide, faults and disturbances occur frequently, causing high short-circuit currents. These high fault currents produce large mechanical forces, which cause overheating of the equipment. If large equipment is used in the power system, it requires an extensive protection scheme for severe fault conditions. In general, maintaining the reliability of the electrical power system is of primary importance, but the complete elimination of faults is not possible; the only practical alternative is to minimize the fault currents. For this purpose, a superconducting fault current limiter (SFCL) controlled with a fuzzy logic technique is well suited to reducing severe fault current levels. In this paper, we simulated unsymmetrical and symmetrical faults with a fuzzy-based superconducting fault current limiter. Our analysis shows that the fuzzy logic based superconducting fault current limiter quickly reduces the fault current to a lower value.
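To make the fuzzy-control idea concrete, here is a heavily simplified Mamdani-style sketch that maps a per-unit current measurement to an inserted SFCL resistance via triangular memberships and centroid defuzzification. The membership shapes, rule base and resistance range are all invented for illustration and are not the paper's controller:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with corners (a, b, c)."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

def sfcl_resistance(i_pu):
    """Mamdani-style rule base: normal current -> keep resistance near zero,
    fault-level current -> insert a high limiting resistance.
    Valid for currents roughly in the 0-8 per-unit range of the memberships."""
    normal = tri(i_pu, 0.0, 1.0, 2.0)   # membership: current is "normal"
    fault = tri(i_pu, 1.5, 4.0, 8.0)    # membership: current is "fault"

    r = np.linspace(0.0, 10.0, 501)     # candidate resistance values (ohms)
    low = tri(r, 0.0, 0.0, 2.0)
    high = tri(r, 5.0, 10.0, 10.0)

    # Rule aggregation (max of clipped consequents), then centroid defuzzification.
    agg = np.maximum(np.minimum(normal, low), np.minimum(fault, high))
    return float(np.sum(r * agg) / np.sum(agg))

print(sfcl_resistance(1.0), sfcl_resistance(5.0))
```

At normal load current the controller keeps the inserted resistance small; at fault-level current it ramps the resistance up, which is what limits the short-circuit current.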
New-construction techniques and HVAC overpressurization for radon reduction in schools
International Nuclear Information System (INIS)
Saum, D.; Witter, K.A.; Craig, A.B.
1988-01-01
Construction of a school in Fairfax County, Virginia, is being carefully monitored since elevated indoor radon levels have been identified in many existing houses near the site. Soil gas radon concentrations measured prior to pouring of the slabs were also indicative of a potential radon problem should the soil gas enter the school; however, subslab radon measurements collected thus far are lower than anticipated. Radon-resistant features have been incorporated into construction of the school and include the placing of at least 100 mm of clean coarse aggregate under the slab and a plastic film barrier between the aggregate and the slab, the sealing of all expansion joints, the sealing or plugging of all utility penetrations where possible, and the painting of interior block walls. In addition, the school's heating, ventilating, and air-conditioning (HVAC) system has been designed to operate continuously in overpressurization to help reduce pressure-driven entry of radon-containing soil gas into the building. Following completion, indoor radon levels in the school will be monitored to determine the effectiveness of these radon-resistant new-construction techniques and HVAC overpressurization in limiting radon entry into the school
SVM and ANN Based Classification of Plant Diseases Using Feature Reduction Technique
Directory of Open Access Journals (Sweden)
Jagadeesh D.Pujari
2016-06-01
Computers have been used for mechanization and automation in different applications of agriculture/horticulture. Critical decisions on agricultural yield and plant protection are supported by expert systems (decision support systems) developed using computer vision techniques. One of the areas considered in the present work is the processing of images of plant diseases affecting agriculture/horticulture crops. The first symptoms of plant disease have to be correctly detected, identified, and quantified in the initial stages. Color and texture features have been used in order to work with the sample images of plant diseases. Algorithms for the extraction of color and texture features have been developed, which are in turn used to train support vector machine (SVM) and artificial neural network (ANN) classifiers. The study presents a reduced-feature-set approach for the recognition and classification of images of plant diseases. The results reveal that the SVM classifier is more suitable for the identification and classification of plant diseases affecting agriculture/horticulture crops.
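The reduced-feature classification pipeline can be sketched generically with scikit-learn, using PCA as one possible feature-reduction choice and the iris dataset as a stand-in for the color/texture feature vectors (the study's own features and reduction method are not reproduced here):

```python
from sklearn.datasets import load_iris          # stand-in for color/texture features
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in data: in the study these would be feature vectors extracted from
# leaf images, with disease classes as labels.
X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

# Feature reduction (here PCA to 2 components) feeding an SVM classifier.
clf = make_pipeline(StandardScaler(), PCA(n_components=2), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(round(clf.score(X_te, y_te), 3))
```

The same pipeline shape holds for the ANN variant: only the final estimator changes, which is what makes the SVM/ANN comparison in the paper straightforward to run.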
Khalil, Haitham H; Malahias, Marco; Shetty, Geeta
2016-01-01
Although Wise pattern reduction mammoplasty is one of the most prevalent procedures providing satisfactory cutaneous reduction, it comes at the expense of inevitably lengthier scars and wound complications, especially at the inverted-T junction. We describe a novel technique providing tension-free closure at the T junction by performing triangular lipodermal flaps. The aim is to alleviate skin tension, thus reducing skin necrosis, dehiscence and excessive scarring at the T junction. One hundred seventy-three consecutive procedures were performed on 137 patients between 2009 and 2013. Data collected included demographics, perioperative morbidity and resected breast tissue weight. The follow-up period ranged from three to 30 months; early and late postoperative complications and patient satisfaction were recorded. Superficial epidermolysis without T-junction dehiscence was experienced in eight (4.6%) procedures, while five (2.9%) procedures developed full-thickness wound dehiscence. Ninety-four percent of patients were highly satisfied with the outcome. The technique is safe, versatile and easy to execute, providing a tension-free zone and acting as an internal dermal sling, thus providing better wound healing with a more favourable aesthetic outcome and maintaining breast projection.
Dietary Impact of Adding Potassium Chloride to Foods as a Sodium Reduction Technique
Directory of Open Access Journals (Sweden)
Leo van Buren
2016-04-01
Potassium chloride is a leading reformulation technology for reducing sodium in food products. As, globally, sodium intake exceeds guidelines, this technology is beneficial; however, its potential impact on potassium intake is unknown. Therefore, a modeling study was conducted using Dutch National Food Survey data to examine the dietary impact of reformulation (n = 2106). Product-specific sodium criteria, designed to enable a maximum daily sodium chloride intake of 5 grams/day, were applied to all foods consumed in the survey. The impact of replacing 20%, 50% and 100% of the sodium chloride in each product with potassium chloride was modeled. At baseline, median potassium intake was 3334 mg/day. An increase in the median intake of potassium of 453 mg/day was seen when a 20% replacement was applied, 674 mg/day with a 50% replacement scenario and 733 mg/day with a 100% replacement scenario. Reformulation had the largest impact on bread, processed fruit and vegetables, snacks and processed meat. Replacement of sodium chloride by potassium chloride, particularly in key contributing product groups, would result in better compliance with potassium intake guidelines (3510 mg/day). Moreover, it could be considered safe for the general adult population, as intake remains compliant with EFSA guidelines. Based on the current modeling, potassium chloride presents as a valuable, safe replacer for sodium chloride in food products.
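The substitution arithmetic can be illustrated with a back-of-envelope sketch: replacing a fraction of NaCl gram-for-gram with KCl adds potassium in proportion to KCl's potassium mass fraction. The assumed salt intake is invented and the sketch ignores the product-specific sodium criteria, so it will not reproduce the survey's figures:

```python
# Back-of-envelope version of the replacement scenarios (illustrative numbers).
M_NA, M_K, M_CL = 22.99, 39.10, 35.45   # atomic masses, g/mol

nacl_intake_g = 9.0     # assumed daily salt (NaCl) intake, grams/day
base_k_mg = 3334.0      # baseline median potassium intake from the survey, mg/day

def replaced_potassium_mg(replace_frac):
    """Extra potassium if `replace_frac` of NaCl is swapped gram-for-gram for KCl."""
    kcl_g = nacl_intake_g * replace_frac
    return 1000.0 * kcl_g * M_K / (M_K + M_CL)   # K is ~52% of KCl by mass

for frac in (0.2, 0.5, 1.0):
    extra = replaced_potassium_mg(frac)
    print(f"{frac:.0%}: +{extra:.0f} mg/day -> total {base_k_mg + extra:.0f} mg/day")
```

The study's smaller increments (453-733 mg/day) reflect that replacement applies only to the reformulated sodium within each product category, not to all salt consumed.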
Space geodetic techniques for global modeling of ionospheric peak parameters
Alizadeh, M. Mahdi; Schuh, Harald; Schmidt, Michael
The rapid development of new technological systems for navigation, telecommunication, and space missions that transmit signals through the Earth’s upper atmosphere - the ionosphere - makes precise, reliable and near real-time models of the ionospheric parameters ever more crucial. In recent decades, space geodetic techniques have become a capable tool for measuring ionospheric parameters in terms of Total Electron Content (TEC) or the electron density. Among these systems, the current space geodetic techniques, such as Global Navigation Satellite Systems (GNSS), Low Earth Orbiting (LEO) satellites, satellite altimetry missions, and others, have found applications in a broad range of commercial and scientific fields. This paper aims at the development of a three-dimensional integrated model of the ionosphere, using various space geodetic techniques and applying a combination procedure for the computation of a global model of electron density. To model the ionosphere in 3D, the electron density is represented as a function of the maximum electron density (NmF2) and its corresponding height (hmF2). NmF2 and hmF2 are then modeled in longitude, latitude, and height using two sets of spherical harmonic expansions of degree and order 15. To perform the estimation, GNSS input data are simulated in such a way that the true positions of the satellites are detected and used, but the STEC values are obtained through a simulation procedure using the IGS VTEC maps. After simulating the input data, the a priori values required for the estimation procedure are calculated using the IRI-2012 model and by applying the ray-tracing technique. The estimated results are compared with F2-peak parameters derived from the IRI model to assess the least-squares estimation procedure; moreover, to validate the developed maps, the results are compared with the raw F2-peak parameters derived from the Formosat-3/Cosmic data.
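Evaluating such a spherical harmonic parameterization of NmF2 (or hmF2) can be sketched with SciPy; the coefficients below are random placeholders, where the real ones would be estimated from the space geodetic observations:

```python
import numpy as np
from scipy.special import sph_harm

def eval_expansion(coeff, lon_deg, lat_deg):
    """Evaluate a complex spherical-harmonic expansion at one location.
    `coeff` maps (degree n, order m) to a coefficient."""
    theta = np.deg2rad(lon_deg % 360.0)   # azimuth (scipy's sph_harm convention)
    phi = np.deg2rad(90.0 - lat_deg)      # colatitude
    return sum(c * sph_harm(m, n, theta, phi)
               for (n, m), c in coeff.items()).real

# Hypothetical degree/order-15 coefficient set with a decaying spectrum; in the
# paper these would be estimated from GNSS, LEO and altimetry observations.
L = 15
rng = np.random.default_rng(2)
coeff = {(n, m): rng.normal(scale=1.0 / (1 + n))
         for n in range(L + 1) for m in range(-n, n + 1)}

print(eval_expansion(coeff, 45.0, 30.0))
```

A degree/order-15 expansion has (15 + 1)² = 256 coefficients per parameter, which is the size of the least-squares problem solved for each of NmF2 and hmF2.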
Directory of Open Access Journals (Sweden)
Rachana J Shah
2014-09-01
A resilient soft liner (RSL) has been used in processed complete dentures, but controlling its thickness has always been a challenge because of uneven reduction of the denture’s intaglio surface. The use of a thermoplastic vacuum-formed template and an endodontic K-file as guides for the reduction of a processed mandibular complete denture to receive an RSL is described in the present report. A processed mandibular complete denture is prepared by reducing its borders and drilling holes in its surface. A thermoplastic sheet adapted to the intaglio surface and an endodontic K-file with a rubber stop adjusted to the desired dimension are used as guides for the reduction procedure, allowing intermittent measurement of the reduced areas. This technique helps in reducing the processed denture’s intaglio surface in a controlled manner, thus maintaining the strength of the denture base and the effectiveness of the soft liner. It also makes the application of a resilient soft liner a cost- and time-effective maneuver.
Wells, Kelley C.; Millet, Dylan B.; Bousserez, Nicolas; Henze, Daven K.; Griffis, Timothy J.; Chaliyakunnel, Sreelekha; Dlugokencky, Edward J.; Saikawa, Eri; Xiang, Gao; Prinn, Ronald G.; O'Doherty, Simon; Young, Dickon; Weiss, Ray F.; Dutton, Geoff S.; Elkins, James W.; Krummel, Paul B.; Langenfelds, Ray; Steele, L. Paul
2018-01-01
We present top-down constraints on global monthly N2O emissions for 2011 from a multi-inversion approach and an ensemble of surface observations. The inversions employ the GEOS-Chem adjoint and an array of aggregation strategies to test how well current observations can constrain the spatial distribution of global N2O emissions. The strategies include (1) a standard 4D-Var inversion at native model resolution (4° × 5°), (2) an inversion for six continental and three ocean regions, and (3) a fast 4D-Var inversion based on a novel dimension reduction technique employing randomized singular value decomposition (SVD). The optimized global flux ranges from 15.9 Tg N yr⁻¹ (SVD-based inversion) to 17.5-17.7 Tg N yr⁻¹ (continental-scale, standard 4D-Var inversions), with the former better capturing the extratropical N2O background measured during the HIAPER Pole-to-Pole Observations (HIPPO) airborne campaigns. We find that the tropics provide a greater contribution to the global N2O flux than is predicted by the prior bottom-up inventories, likely due to underestimated agricultural and oceanic emissions. We infer an overestimate of natural soil emissions in the extratropics and find that predicted emissions are seasonally biased in northern midlatitudes. Here, optimized fluxes exhibit a springtime peak consistent with the timing of spring fertilizer and manure application, soil thawing, and elevated soil moisture. Finally, the inversions reveal a major emission underestimate in the US Corn Belt in the bottom-up inventory used here. We extensively test the impact of initial conditions on the analysis and recommend formally optimizing the initial N2O distribution to avoid biasing the inferred fluxes. We find that the SVD-based approach provides a powerful framework for deriving emission information from N2O observations: by defining the optimal resolution of the solution based on the information content of the inversion, it provides spatial information that is lost when
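The dimension-reduction step can be sketched with a randomized SVD of a synthetic low-effective-rank sensitivity matrix (the real Jacobian comes from the GEOS-Chem adjoint; everything below is a stand-in):

```python
import numpy as np
from sklearn.utils.extmath import randomized_svd

rng = np.random.default_rng(3)

# Stand-in for a tall sensitivity (Jacobian) matrix with a rapidly decaying
# spectrum, as arises when observations constrain only a few flux patterns.
U0, _ = np.linalg.qr(rng.normal(size=(500, 20)))
V0, _ = np.linalg.qr(rng.normal(size=(80, 20)))
J = (U0 * (2.0 ** -np.arange(20))) @ V0.T        # singular values 2^0 ... 2^-19

# Randomized SVD extracts the leading subspace cheaply; the inversion is then
# solved in this low-dimensional space instead of on the native grid.
U, s, Vt = randomized_svd(J, n_components=10, random_state=0)
J_approx = (U * s) @ Vt
print(np.linalg.norm(J - J_approx) / np.linalg.norm(J))
```

Truncating at the rank where the singular values fall below the noise level is what "defining the optimal resolution based on the information content" amounts to in practice.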
Directory of Open Access Journals (Sweden)
Pouya Nezafati
2015-10-01
Background: About half of all patients who undergo mitral valve surgery suffer from atrial fibrillation (AF). Cox described the surgical cut-and-sew Maze procedure, which is effective but has some complications. This study was designed to evaluate the efficacy of radiofrequency ablation (RFA) as a substitute method for patients undergoing mitral valve surgery with AF. Methods: We evaluated 50 patients, comprising 40 men and 10 women at a mean age of 61.8 ± 7.5 years, who underwent mitral valve surgery with RFA between March 2010 and August 2013. All the patients had permanent AF with an enlarged left atrium (LA). The first indication for surgery was underlying organic lesions. Mitral valve replacement or repair was performed in the patients as a single procedure or in combination with aortic valve replacement or coronary artery bypass grafting. Radiofrequency energy was used to create continuous endocardial lesions mimicking most incisions and sutures. We evaluated the pre- and postoperative LA size, duration of aortic cross-clamping, cardiopulmonary bypass time, intensive care unit stay, and total hospital stay. Results: The mean preoperative and postoperative LA sizes were 7.5 ± 1.4 cm and 4.3 ± 0.7 cm (p value = 0.0001), respectively. The mean cardiopulmonary bypass time and the aortic cross-clamping time were 134.3 ± 33.7 min and 109.0 ± 28.4 min, respectively. The average stay in the intensive care unit was 2.1 ± 1.2 days, and the total hospital stay was 8.3 ± 2.4 days. Rebleeding was the only complication, found in one patient. There was no early or late mortality. Eighty-two percent of the patients were discharged in normal sinus rhythm. Five other patients had normal sinus rhythm at 6 months' follow-up, and the remaining 4 patients did not have a normal sinus rhythm after 6 months. Conclusion: Radiofrequency ablation, combined with LA reduction, is an effective option for the treatment of permanent AF concomitant with
Health Gain by Salt Reduction in Europe: A Modelling Study
Hendriksen, M.A.H.; Raaij, van J.M.A.; Geleijnse, J.M.; Breda, J.; Boshuizen, H.C.
2015-01-01
Excessive salt intake is associated with hypertension and cardiovascular diseases. Salt intake exceeds the World Health Organization population nutrition goal of 5 grams per day in the European region. We assessed the health impact of salt reduction in nine European countries (Finland, France,
Modeling phonologization : vowel reduction and epenthesis in Lunigiana dialects
Cavirani, Edoardo
2015-01-01
Building upon wave-theoretic assumptions, this dissertation provides a formal description of the relationship between diatopic/diachronic micro-variation and phonologization. In particular, an analysis is performed of the phonetic/phonological properties of unstressed vowel reduction and vowel
Study of wind change for the development of loads reduction techniques for the space shuttle
Adelfang, S. I.
1987-01-01
Wind change statistics are analyzed for Vandenberg AFB, California (VAFB) and Kennedy Space Center, Florida (KSC). Means and standard deviations of wind component change and vector wind change modulus within 3-9 and 9-16 km altitude bands are tabulated. The contribution to 3.5 hr wind component change by wind perturbations in various wavelength bands is evaluated. Probability distributions of maximum 3.5 hr wind change in an altitude band are presented and a model for wind change at a specified altitude is tested with data derived from six data bases from VAFB and Santa Monica, California.
Drag reduction of a car model by linear genetic programming control
Li, Ruiying; Noack, Bernd R.; Cordier, Laurent; Borée, Jacques; Harambat, Fabien
2017-08-01
We investigate open- and closed-loop active control for aerodynamic drag reduction of a car model. Turbulent flow around a blunt-edged Ahmed body is examined at ReH ≈ 3 × 10⁵ based on body height. The actuation is performed with pulsed jets at all trailing edges (multiple inputs) combined with a Coanda deflection surface. The flow is monitored with 16 pressure sensors distributed at the rear side (multiple outputs). We apply a recently developed model-free control strategy building on genetic programming in Dracopoulos and Kent (Neural Comput Appl 6:214-228, 1997) and Gautier et al. (J Fluid Mech 770:424-441, 2015). The optimized control laws comprise periodic forcing, multi-frequency forcing and sensor-based feedback, including time-history information feedback and combinations thereof. A key enabler is linear genetic programming (LGP) as a powerful regression technique for optimizing the multiple-input multiple-output control laws. The proposed LGP control can select the best open- or closed-loop control in an unsupervised manner. Approximately 33% base pressure recovery, associated with 22% drag reduction, is achieved in all considered classes of control laws. Intriguingly, the feedback actuation emulates periodic high-frequency forcing. In addition, the control automatically identified the only sensor which listens to high-frequency flow components with a good signal-to-noise ratio. Our control strategy is, in principle, applicable to all experiments with multiple actuators and sensors.
Adiabatic reduction of a model of stochastic gene expression with jump Markov process.
Yvinec, Romain; Zhuge, Changjing; Lei, Jinzhi; Mackey, Michael C
2014-04-01
This paper considers adiabatic reduction in a model of stochastic gene expression with bursting transcription considered as a jump Markov process. In this model, the process of gene expression with auto-regulation is described by fast/slow dynamics. The production of mRNA is assumed to follow a compound Poisson process occurring at a rate depending on protein levels (the phenomenon called bursting in molecular biology), and the production of protein is a linear function of mRNA numbers. When the dynamics of mRNA is assumed to be a fast process (due to faster mRNA degradation than that of protein), we prove that, with appropriate scalings in the burst rate, jump size or translational rate, the bursting phenomenon can be transmitted to the slow variable. We show that, depending on the scaling, the reduced equation is either a stochastic differential equation with a jump Poisson process or a deterministic ordinary differential equation. These results are significant because adiabatic reduction techniques seem not to have been rigorously justified for a stochastic differential system containing a jump Markov process. We expect that the results can be generalized to adiabatic methods in more general stochastic hybrid systems.
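The fast/slow bursting mechanism described in this abstract can be sketched in a few lines of simulation code. This is a minimal illustration only: all rate constants, the exponential burst-size distribution, and the constant burst rate are assumptions made for the sketch (the paper lets the burst rate depend on protein levels):

```python
import random

def simulate_bursting(t_end=100.0, dt=0.01, k_burst=0.5, burst_mean=10.0,
                      k_translate=0.2, d_m=1.0, d_p=0.05, seed=1):
    """Fixed-step simulation of a bursty gene-expression model (illustrative)."""
    rng = random.Random(seed)
    m, p = 0.0, 0.0          # mRNA and protein levels
    traj, t = [], 0.0
    while t < t_end:
        # mRNA bursts arrive as a compound Poisson process; the burst size
        # is drawn from an exponential distribution (an assumed choice)
        if rng.random() < k_burst * dt:
            m += rng.expovariate(1.0 / burst_mean)
        # fast mRNA degradation (d_m large relative to d_p) and
        # protein production linear in mRNA numbers, as in the abstract
        m += -d_m * m * dt
        p += (k_translate * m - d_p * p) * dt
        traj.append((t, m, p))
        t += dt
    return traj
```

With mRNA degradation much faster than protein degradation, the protein trajectory approaches the behaviour of the reduced slow-variable equation the paper derives.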
Jarosch, H. S.
1982-01-01
A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.
Model-checking techniques based on cumulative residuals.
Lin, D Y; Wei, L J; Ying, Z
2002-03-01
Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
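The core idea, cumulating residuals over a covariate ordering and calibrating the resulting sup-statistic by simulation, can be sketched for a simple linear model. This is a simplified illustration of the multiplier-resampling idea, not the authors' full procedure (which also accounts for parameter-estimation effects in the Gaussian approximation):

```python
import random

def cumres_check(x, y, n_sim=200, seed=0):
    """Cumulative-residual check of the model y = a + b*x (minimal sketch)."""
    rng = random.Random(seed)
    n = len(x)
    # ordinary least squares fit of the working linear model
    xb, yb = sum(x) / n, sum(y) / n
    b = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y)) / \
        sum((xi - xb) ** 2 for xi in x)
    a = yb - b * xb
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    order = sorted(range(n), key=lambda i: x[i])
    # observed cumulative-sum process over the covariate ordering
    cum, w_obs = 0.0, 0.0
    for i in order:
        cum += resid[i]
        w_obs = max(w_obs, abs(cum))
    # approximate the null distribution of the sup-statistic by perturbing
    # residuals with standard-normal multipliers and replaying the process
    exceed = 0
    for _ in range(n_sim):
        cum, w = 0.0, 0.0
        for i in order:
            cum += resid[i] * rng.gauss(0.0, 1.0)
            w = max(w, abs(cum))
        if w >= w_obs:
            exceed += 1
    return w_obs, exceed / n_sim  # sup-statistic and approximate p-value
```

A systematic drift in the cumulative-sum process (large sup-statistic, small p-value) points at misspecification of the covariate's functional form, which is exactly the use case the abstract highlights.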
Development of a kinetic model for biological sulphate reduction ...
African Journals Online (AJOL)
Further, in the BSR model the end-product sulphide has a gaseous equilibrium that is not in the UCTADM1 model, and hence the physical gas exchange for sulphide is included. The BSR biological, chemical and physical processes are integrated with those of the UCTADM1 model, to give a complete kinetic model for competitive ...
Takayanagi, Tomoya; Arai, Takehiro; Amanuma, Makoto; Sano, Tomonari; Ichiba, Masato; Ishizaka, Kazumasa; Sekine, Takako; Matsutani, Hideyuki; Morita, Hitomi; Takase, Shinichi
2017-01-01
Coronary computed tomography angiography (CCTA) in patients with a pacemaker suffers from metallic lead-induced artifacts, which often interfere with accurate assessment of coronary luminal stenosis. The purpose of this study was to assess the frequency of lead-induced artifacts and the artifact-suppression effect of the single-energy metal artifact reduction (SEMAR) technique. Forty-one patients with a dual-chamber pacemaker were evaluated using a 320 multi-detector row CT (MDCT). Among them, 22 patients with motion-free full data reconstruction images were the final candidates. Images with and without the SEMAR technique were subjectively compared, and the degree of metallic artifacts was compared. On images without SEMAR, severe metallic artifacts were often observed in the right coronary artery (#1, #2, #3) and the distal anterior descending branch (#8). These artifacts were effectively suppressed by SEMAR, and the luminal accessibility was significantly improved in #3 and #8. While pacemaker leads often cause metallic artifacts, the SEMAR technique reduced the artifacts and significantly improved the accessibility of the coronary lumen in #3 and #8.
Chromium (VI) reduction in acetate- and molasses-amended natural media: empirical model development
Energy Technology Data Exchange (ETDEWEB)
Hansen, Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Boukhalfa, Hakim [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Karra, Satish [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Wang, Dongping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Vesselinov, Velimir Valentinov [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-11-21
Stimulating indigenous microbes to reduce heavy metals from highly toxic oxidized species to more benign reduced species is a promising groundwater remediation technique that has already seen successful field applications. Designing such a bio-remediation scheme requires a model incorporating the kinetics of nonlinear bio-geochemical interactions between multiple species. With this motivation, we performed a set of microcosm experiments in natural sediments and their indigenous pore water and microbes, generating simultaneous time series for concentrations of Cr(VI), an electron donor (both molasses and acetate were considered), and biomass. Molasses was found to undergo a rapid direct abiotic reaction which eliminated all Cr(VI) before any biomass had time to grow. This was not found in the acetate microcosms, and a distinct zero-order bio-reduction process was observed. Existing models were found inappropriate, and a new set of three coupled governing equations representing these process dynamics was developed and its parameters were calibrated against the time series from the acetate-amended microcosms. Cell suspension batch experiments were also performed to calibrate bio-reduction rates in the absence of electron donor and sediment. The donor used to initially grow the cells (molasses or acetate) was found not to impact the reduction rate constants in suspension, which were orders of magnitude larger than those explaining the natural media microcosm experiments. This suggests the limited utility of kinetics determined in suspension for remedial design. Scoping studies on the natural media microcosms were also performed, suggesting limited impact of foreign abiotic material and minimal effect of diffusion limitation in the vertical dimension. These analyses may be of independent value to future researchers.
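The structure of such a coupled model (zero-order Cr(VI) bio-reduction driven by a growing biomass that consumes an electron donor) can be sketched as a simple time-stepped system. The rate forms and every parameter value below are assumptions for the illustration; the paper calibrates its own three coupled governing equations against the microcosm time series:

```python
def simulate_bioreduction(c0=50.0, b0=0.1, d0=500.0, k_red=2.0, mu=0.05,
                          y=0.01, t_end=100.0, dt=0.01):
    """Illustrative coupled Cr(VI)/biomass/donor dynamics (all values assumed)."""
    c, b, d = c0, b0, d0   # Cr(VI), biomass, electron donor
    out, t = [], 0.0
    while t < t_end:
        growth = mu * b if d > 0 else 0.0   # biomass grows while donor remains
        red = k_red * b if c > 0 else 0.0   # zero-order in Cr(VI), scaled by biomass
        c = max(c - red * dt, 0.0)
        b += growth * dt
        d = max(d - (growth / y) * dt, 0.0)  # donor consumed in proportion to growth
        out.append((t, c, b, d))
        t += dt
    return out
```

The zero-order character (reduction rate independent of the Cr(VI) concentration itself) is what the abstract reports for the acetate microcosms.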
International Nuclear Information System (INIS)
Kugler, Ulrike; Theloke, Jochen; Joerss, Wolfram
2013-01-01
The modeling of the reference scenario and the various reduction scenarios in PAREST was based on the Central System of Emissions (CSE) (CSE, 2007). Emissions from road traffic were calculated using the traffic emission model TREMOD (Knoerr et al., 2005) and fed into the CSE. Version TREMOD 4.17 was used. The resulting emission levels in the PAREST reference scenario were supplemented by the emission-reducing effect of the implementation of the future Euro 5 and 6 emission standards for cars and light commercial vehicles and Euro VI for heavy commercial vehicles, in combination with the truck toll extension.
Advanced techniques in reliability model representation and solution
Palumbo, Daniel L.; Nicol, David M.
1992-01-01
The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.
International Nuclear Information System (INIS)
Grebenkov, A.; Mansoux, H.; Yakushau, A.; Antsipov, G.; Averin, V.; Zhouchenko, Y.; Minenko, V.; Tirmarche, M.
2004-01-01
countermeasures recommended and was incorporated in the database developed. It addresses the above settlements and all other critical locations, assists for evaluation of radioecological conditions and ranking of contaminated settlements, files data on these settlements, recommends and evaluates protection measures, and provides decision-makers with a risk-based remedial planning and a feasibility assessment of recommended measures based on a cost-benefit analysis. To support a decision protocol, the database, in addition to informational block, contains also a simulation block comprised by the following models: (A) contaminant transfer to basic forage resources and food; (B) exposure and risk assessment and prediction; and (C) cost and efficiency analysis based on data of available countermeasures. This study is funded by FGI Programme. (author)
Procedures for Geometric Data Reduction in Solid Log Modelling
Luis G. Occeña; Wenzhen Chen; Daniel L. Schmoldt
1995-01-01
One of the difficulties in solid log modelling is working with huge data sets, such as those that come from computed axial tomographic imaging. Algorithmic procedures are described in this paper that have successfully reduced data without sacrificing modelling integrity.
Use of hydrological modelling and isotope techniques in Guvenc basin
International Nuclear Information System (INIS)
Altinbilek, D.
1991-07-01
The study covers the work performed under Project No. 335-RC-TUR-5145, entitled "Use of Hydrologic Modelling and Isotope Techniques in Guvenc Basin", and is an initial part of a program for estimating runoff from Central Anatolia watersheds. The study presented herein consists mainly of three parts: 1) the acquisition of a library of rainfall excess, direct runoff and isotope data for Guvenc basin; 2) the modification of the SCS model to be applied first to Guvenc basin and then to other basins of Central Anatolia for predicting the surface runoff from gaged and ungaged watersheds; and 3) the use of the environmental isotope technique in order to define the basin components of streamflow of Guvenc basin. 31 refs, figs and tabs
Herkt, Sabrina
2008-01-01
This thesis shows an approach to combining the advantages of MBS tyre models and FEM models for use in full vehicle simulations. The procedure proposed in this thesis aims to describe a nonlinear structure with a finite element approach combined with nonlinear model reduction methods. Unlike most model reduction methods, such as the frequently used Craig-Bampton approach, the method of Proper Orthogonal Decomposition (POD) offers a projection basis suitable for nonlinear models. For the linear...
Equivalence and Differences between Structural Equation Modeling and State-Space Modeling Techniques
Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, Ellen L.; Dolan, Conor V.
2010-01-01
State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and differences through analytic comparisons and…
Total laparoscopic gastrocystoplasty: experimental technique in a porcine model
Frederico R. Romero; Claudemir Trapp; Michael Muntener; Fabio A. Brito; Louis R. Kavoussi; Thomas W. Jarrett
2007-01-01
OBJECTIVE: Describe a unique simplified experimental technique for total laparoscopic gastrocystoplasty in a porcine model. MATERIAL AND METHODS: We performed laparoscopic gastrocystoplasty on 10 animals. The gastroepiploic arch was identified and carefully mobilized from its origin at the pylorus to the beginning of the previously demarcated gastric wedge. The gastric segment was resected with sharp dissection. Both gastric suturing and gastrovesical anastomosis were performed with absorbabl...
A Bayesian Technique for Selecting a Linear Forecasting Model
Ramona L. Trader
1983-01-01
The specification of a forecasting model is considered in the context of linear multiple regression. Several potential predictor variables are available, but some of them convey little information about the dependent variable which is to be predicted. A technique for selecting the "best" set of predictors which takes into account the inherent uncertainty in prediction is detailed. In addition to current data, there is often substantial expert opinion available which is relevant to the forecas...
[Evaluation on a fast weight reduction model in vitro].
Li, Songtao; Li, Ying; Wen, Ying; Sun, Changhao
2010-03-01
To establish a fast and effective in vitro model for screening weight-reducing drugs and to carry out a preliminary evaluation of the model. Mature adipocytes of SD rats induced by oleic acid were used to establish an obesity model in vitro. Isoprel, genistein and caffeine were selected as positive agents and curcumine as a negative agent to evaluate the obesity model. Lipolysis of adipocytes was stimulated significantly by isoprel, genistein and caffeine but not by curcumine. This model could be used efficiently for screening weight-reducing drugs.
Fuzzy techniques for subjective workload-score modeling under uncertainties.
Kumar, Mohit; Arndt, Dagmar; Kreuzfeld, Steffi; Thurow, Kerstin; Stoll, Norbert; Stoll, Regina
2008-12-01
This paper deals with the development of a computer model to estimate the subjective workload score of individuals by evaluating their heart-rate (HR) signals. The identification of a model to estimate the subjective workload score of individuals under different workload situations is too ambitious a task because different individuals (due to different body conditions, emotional states, age, gender, etc.) show different physiological responses (assessed by evaluating the HR signal) under different workload situations. This is equivalent to saying that the mathematical mappings between physiological parameters and the workload score are uncertain. Our approach to dealing with the uncertainties in a workload-modeling problem consists of the following steps: 1) the uncertainties arising due to the individual variations in identifying a common model valid for all individuals are filtered out using a fuzzy filter; 2) the uncertainties (provided by the fuzzy filter) are modeled stochastically using finite-mixture models, and this information regarding the uncertainties is used to identify the structure and initial parameters of a workload model; and 3) finally, the workload model parameters for an individual are identified in an online scenario using machine learning algorithms. The contribution of this paper is to propose, with a mathematical analysis, a fuzzy-based modeling technique that first filters out the uncertainties from the modeling problem, analyzes the uncertainties statistically using finite-mixture modeling, and, finally, utilizes the information about uncertainties for adapting the workload model to an individual's physiological conditions. The approach of this paper, demonstrated with the real-world medical data of 11 subjects, provides a fuzzy-based tool useful for modeling in the presence of uncertainties.
Sensitivity analysis techniques for models of human behavior.
Energy Technology Data Exchange (ETDEWEB)
Bier, Asmeret Brooke
2010-09-01
Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
Practical Techniques for Modeling Gas Turbine Engine Performance
Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.
2016-01-01
The cost and risk associated with the design and operation of gas turbine engine systems has led to an increasing dependence on mathematical models. In this paper, the fundamentals of engine simulation will be reviewed, an example performance analysis will be performed, and relationships useful for engine control system development will be highlighted. The focus will be on thermodynamic modeling utilizing techniques common in industry, such as: the Brayton cycle, component performance maps, map scaling, and design point criteria generation. In general, these topics will be viewed from the standpoint of an example turbojet engine model; however, demonstrated concepts may be adapted to other gas turbine systems, such as gas generators, marine engines, or high bypass aircraft engines. The purpose of this paper is to provide an example of gas turbine model generation and system performance analysis for educational uses, such as curriculum creation or student reference.
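As a small worked example of the thermodynamic relationships such engine models build on, the ideal (cold-air-standard) Brayton-cycle thermal efficiency depends only on the compressor pressure ratio and the specific-heat ratio of air. This is textbook thermodynamics, not the paper's full component-map model; real engines fall short of this figure because of the component losses the paper discusses:

```python
def brayton_thermal_efficiency(pressure_ratio, gamma=1.4):
    """Ideal Brayton-cycle thermal efficiency, cold-air-standard analysis.

    eta = 1 - r^(-(gamma-1)/gamma), where r is the compressor pressure
    ratio and gamma the specific-heat ratio of the working fluid (air).
    """
    return 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)
```

For example, a pressure ratio of 10 with gamma = 1.4 gives an ideal efficiency of roughly 48%, an upper bound that the map-scaled component models refine.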
Dimensional reduction of Markov state models from renormalization group theory
Orioli, S.; Faccioli, P.
2016-09-01
Renormalization Group (RG) theory provides the theoretical framework to define rigorous effective theories, i.e., systematic low-resolution approximations of arbitrary microscopic models. Markov state models are shown to be rigorous effective theories for Molecular Dynamics (MD). Based on this fact, we use real space RG to vary the resolution of the stochastic model and define an algorithm for clustering microstates into macrostates. The result is a lower-dimensional stochastic model which, by construction, provides the optimal coarse-grained Markovian representation of the system's relaxation kinetics. To illustrate and validate our theory, we analyze a number of test systems of increasing complexity, ranging from synthetic toy models to two realistic applications, built from all-atom MD simulations. The computational cost of computing the low-dimensional model remains affordable on a desktop computer even for thousands of microstates.
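The end product of any such clustering is a coarse-grained transition matrix over macrostates. A generic way to lump a microstate transition matrix once the clusters are chosen, shown below, is the standard stationary-weighted aggregation used in Markov-state-model practice; it is not the paper's RG clustering algorithm, which is what selects the clusters in the first place:

```python
def coarse_grain(T, pi, clusters):
    """Lump a row-stochastic microstate transition matrix T, with stationary
    distribution pi, into macrostates given by `clusters` (lists of indices).
    Generic kinetic lumping, shown here for illustration."""
    K = len(clusters)
    Tc = [[0.0] * K for _ in range(K)]
    for A, ca in enumerate(clusters):
        wA = sum(pi[i] for i in ca)  # stationary weight of macrostate A
        for B, cb in enumerate(clusters):
            # stationary-weighted aggregate transition probability A -> B
            Tc[A][B] = sum(pi[i] * T[i][j] for i in ca for j in cb) / wA
    return Tc
```

For a metastable system, a good clustering yields a lumped matrix whose diagonal (within-macrostate) entries dominate, i.e. the macrostates capture the slow relaxation kinetics.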
New techniques and models for assessing ischemic heart disease risks
Directory of Open Access Journals (Sweden)
I.N. Yakovina
2017-09-01
The paper focuses on the task of creating and implementing a new technique aimed at assessing ischemic heart disease risk. The technique is based on a laboratory-diagnostic complex which includes oxidative, lipid-lipoprotein, inflammatory and metabolic biochemical parameters; a system of logic-mathematic models used for obtaining numeric risk assessments; and a program module which allows the results to be calculated and analyzed. We justified our models in the course of our research, which included 172 patients suffering from ischemic heart disease (IHD) combined with coronary atherosclerosis verified by coronary arteriography, and 167 patients who did not have ischemic heart disease. Our research program included demographic and social data, questioning on tobacco and alcohol addiction, questioning about dietary habits, chronic disease case history and medication intake, cardiologic questioning as per Rose, anthropometry, blood pressure measured three times, spirometry, and electrocardiogram recording with decoding as per the Minnesota code. We detected the biochemical parameters of each patient and adjusted our task of creating techniques and models for assessing ischemic heart disease risks on the basis of inflammatory, oxidative, and lipid biological markers. We created a system of logic and mathematic models which is a universal scheme for laboratory parameter processing, allowing for dissimilar data specificity. The system of models is universal, but the diagnostic approach to the applied biochemical parameters is specific. The created program module (calculator) helps a physician to obtain a result on the basis of laboratory research data; the result characterizes the numeric risks of coronary atherosclerosis and ischemic heart disease for a patient. It also allows a visual image of the system of parameters and their deviation from a conditional «standard – pathology» boundary to be obtained. The complex is implemented into practice by the Scientific
Mathematical and Numerical Techniques in Energy and Environmental Modeling
Chen, Z.; Ewing, R. E.
Mathematical models have been widely used to predict, understand, and optimize many complex physical processes, from semiconductor or pharmaceutical design to large-scale applications such as global weather models to astrophysics. In particular, simulation of environmental effects of air pollution is extensive. Here we address the need for using similar models to understand the fate and transport of groundwater contaminants and to design in situ remediation strategies. Three basic problem areas need to be addressed in the modeling and simulation of the flow of groundwater contamination. First, one obtains an effective model to describe the complex fluid/fluid and fluid/rock interactions that control the transport of contaminants in groundwater. This includes the problem of obtaining accurate reservoir descriptions at various length scales and modeling the effects of this heterogeneity in the reservoir simulators. Next, one develops accurate discretization techniques that retain the important physical properties of the continuous models. Finally, one develops efficient numerical solution algorithms that utilize the potential of the emerging computing architectures. We will discuss recent advances and describe the contribution of each of the papers in this book in these three areas. Keywords: reservoir simulation, mathematical models, partial differential equations, numerical algorithms
Climate change air toxic co-reduction in the context of macroeconomic modelling.
Crawford-Brown, Douglas; Chen, Pi-Cheng; Shi, Hsiu-Ching; Chao, Chia-Wei
2013-08-15
This paper examines the health implications of global PM reduction accompanying greenhouse gas emissions reductions in the 180 national economies of the global macroeconomy. A human health effects module based on empirical data on GHG emissions, PM emissions, background PM concentrations, source apportionment and human health risk coefficients is used to estimate reductions in morbidity and mortality from PM exposures globally as co-reduction of GHG reductions. These results are compared against the "fuzzy bright line" that often underlies regulatory decisions for environmental toxics, and demonstrate that the risk reduction through PM reduction would usually be considered justified in traditional risk-based decisions for environmental toxics. It is shown that this risk reduction can be on the order of more than 4 × 10⁻³ excess lifetime mortality risk, with global annual cost savings of slightly more than $10B, when uniform GHG reduction measures across all sectors of the economy form the basis for climate policy ($2.2B if only Annex I nations reduce). Consideration of co-reduction of PM-10 within a climate policy framework harmonized with other environmental policies can therefore be an effective driver of climate policy. An error analysis comparing results of the current model against those of significantly more spatially resolved models at city and national scales indicates errors caused by the low spatial resolution of the global model used here may be on the order of a factor of 2. Copyright © 2013 Elsevier Ltd. All rights reserved.
Hallas, Tony
There are two distinct kinds of noise - structural and color. Each requires a specific method of attack to minimize. The great challenge is to reduce the noise without reducing the faint and delicate detail in the image. My most-used and favorite noise suppression is found in Photoshop CS 5 Camera Raw. If I cannot get the desired results with the first choice, I will use Noise Ninja, which has certain advantages in some situations that we will cover.
A Romanian energy system model and a nuclear reduction strategy
DEFF Research Database (Denmark)
Gota, Dan-Ioan; Lund, Henrik; Miclea, Liviu
2011-01-01
This paper presents a model of the Romanian energy system with the purpose of providing a tool for the analysis of future sustainable energy strategies. The model represents the total national energy system and is detailed to the level of hourly demand and production in order to be able to analyse the consequences of adding fluctuating renewable energy sources to the system. The model has been implemented into the EnergyPLAN tool and has been validated in order to determine if it can be used as a reference model for other simulations. In EnergyPLAN, two different future strategy scenarios for the Romanian energy system are compared to the actual data of Romania of year 2008. First, a comparison is made between the 2008 model and the 2013 strategy scenario corresponding to the grid of the Romanian transmission system operator (TSO) Transelectrica. Then, a comparison is made to a second strategy scenario ...
Energy demand modelling and GHG emission reduction: case study Croatia
DEFF Research Database (Denmark)
Pukšec, Tomislav; Mathiesen, Brian Vad; Novosel, Tomislav
2013-01-01
... and develop new energy policy towards energy efficiency and renewable energy sources, in order to comply with all of the presented tasks. Planning future energy demand, considering various policy options like regulation, fiscal and financial measures, becomes one of the crucial issues of future national energy strategy. This paper analyses Croatian long term energy demand and its effect on the future national GHG emissions. For that purpose the national energy demand model was constructed (NeD model). The model is comprised of six modules, each representing one sector, following the Croatian national energy balance: industry, transport, households, services, agriculture and construction. For three of the modules (industry, transport and households) previously developed long term energy demand models were used, while for the remaining three new models were constructed. As an additional feature, new ...
International Nuclear Information System (INIS)
Katsevich, A.I.
1996-01-01
Conventional tomographic imaging techniques are nonlocal: to reconstruct an unknown function f at a point x, one needs to know its Radon transform (RT) f̂(θ,p). Suppose that one is interested in the recovery of f only for x in some set U. The author calls U the region of interest (ROI). Define the local data as the integrals of f along the lines that intersect the ROI. He proposes algorithms for finding locations and values of jumps (sharp variations) of f from only the local data. In case of transmission tomography, this results in a reduction of the x-ray dose to a patient. The proposed algorithms can also be used in emission tomographies. They allow one: to image jumps of f with better resolution than conventional techniques; to take into account variable attenuation (if it is known); and to obtain meaningful images even if the attenuation is not known. Results of testing the proposed algorithms on the simulated and real data are presented
Goitein, Orly; Matetzky, Shlomi; Eshet, Yael; Goitein, David; Hamdan, Ashraf; Segni, Elio Di; Konen, Eli
2011-10-01
Coronary CT angiography (CCTA) is used daily in acute chest pain triage, although it exposes patients to a significant radiation dose. CCTA using prospective ECG gating (PG CCTA) enables significant radiation reduction. To determine whether the routine use of 128- vs. 64-multidetector CT (MDCT) can increase the proportion of patients scanned using the PG CCTA technique, lowering radiation exposure without decreasing image quality. The study comprised 232 patients: 116 consecutive patients scanned using 128 MDCT (mean age 49 years, 79 men, BMI 28) and 116 consecutive patients (mean age 50 years, 75 men, BMI 28) scanned using 64 MDCT. PG CCTA was performed whenever technically permissible by each type of scanner (for the 64 MDCT, a stable heart rate (HR) was required). Radiation exposure was 6.2 ± 4.8 mSv and 10.4 ± 7.5 mSv for the 128 and 64 MDCT, respectively (P = 0.008). The 128 MDCT scanner enables utilization of the PG CCTA technique in a greater proportion of patients, thereby decreasing the related radiation significantly, without hampering image quality.
Xu, Fan; Wang, Jiaxing; Zhu, Daiyin; Tu, Qi
2018-04-01
Speckle noise has always been a particularly tricky problem in improving the ranging capability and accuracy of Lidar systems, especially in harsh environments. Currently, effective speckle de-noising techniques are extremely scarce and should be further developed. In this study, a speckle noise reduction technique is proposed based on independent component analysis (ICA). Since few changes normally happen in the shape of the laser pulse itself, the authors employed the laser source as a reference pulse and executed the ICA decomposition to find the optimal matching position. In order to achieve self-adaptability of the algorithm, a local mean square error (MSE) has been defined as an appropriate criterion for evaluating the iteration results. The experimental results demonstrated that the self-adaptive pulse-matching ICA (PM-ICA) method could effectively decrease the speckle noise and recover the useful Lidar echo signal component with high quality. In particular, the proposed method achieves a 4 dB greater improvement of signal-to-noise ratio (SNR) than a traditional homomorphic wavelet method.
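The pulse-matching idea above can be illustrated independently of the full ICA machinery: slide a known reference pulse along the noisy record and score each lag by a local mean square error. The sketch below is a minimal stand-in for the matching step only (pulse shape, noise level and pulse position are invented for illustration), not the authors' PM-ICA implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Reference laser pulse (Gaussian shape; all parameters invented)
t = np.arange(64)
ref = np.exp(-0.5 * ((t - 32) / 4.0) ** 2)

# Synthetic Lidar return: the pulse buried at sample 300 in noise
signal = np.zeros(1024)
true_pos = 300
signal[true_pos:true_pos + ref.size] += ref
noisy = signal + 0.2 * rng.standard_normal(signal.size)

# Slide the reference over the record; score each lag by local MSE
def local_mse(record, template):
    n = template.size
    return np.array([np.mean((record[i:i + n] - template) ** 2)
                     for i in range(record.size - n + 1)])

best = int(np.argmin(local_mse(noisy, ref)))
print(best)  # lands at (or very near) the true pulse position
```

The minimum of the local MSE marks the lag where the template explains the record best, which is the matching criterion the abstract describes.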
Kinetic modeling of liquefied petroleum gas (LPG) reduction of titania in MATLAB
Yin, Tan Wei; Ramakrishnan, Sivakumar; Rezan, Sheikh Abdul; Noor, Ahmad Fauzi Mohd; Izah Shoparwe, Noor; Alizadeh, Reza; Roohi, Parham
2017-04-01
In the present study, the reduction of titania (TiO2) by a liquefied petroleum gas (LPG)-hydrogen-argon gas mixture was investigated experimentally and by kinetic modelling in MATLAB. The reduction experiments were carried out in the temperature range of 1100-1200°C with a reduction time of 1-3 hours and 10-20 minutes of LPG flowing time. A shrinking core model (SCM) was employed for the kinetic modelling in order to determine the rate and extent of reduction. The highest experimental extent of reduction, 38%, occurred at a temperature of 1200°C with 3 hours of reduction time and 20 minutes of LPG flowing time. The SCM gave a predicted extent of reduction of 82.1% due to assumptions made in the model. The deviation between the SCM and experimental data was attributed to porosity, thermodynamic properties and minute thermal fluctuations within the sample. In general, the reduction rates increased with increasing reduction temperature and LPG flowing time.
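For reference, a shrinking core model has a closed-form conversion law when the surface reaction controls the rate: t/τ = 1 − (1 − X)^(1/3), where τ is the time for complete conversion. The sketch below simply evaluates that law; τ and the choice of rate-controlling step are assumptions for illustration, not fitted values from the paper's MATLAB model.

```python
import numpy as np

def scm_conversion(t_hours, tau_hours):
    """Shrinking core model, surface-reaction control:
    t/tau = 1 - (1 - X)**(1/3)  =>  X = 1 - (1 - t/tau)**3,
    where tau (assumed here) is the time for complete conversion."""
    frac = np.clip(np.asarray(t_hours, dtype=float) / tau_hours, 0.0, 1.0)
    return 1.0 - (1.0 - frac) ** 3

# Illustrative run over the 1-3 h window of the study (tau is a guess)
times = np.array([1.0, 2.0, 3.0])
X = scm_conversion(times, tau_hours=10.0)
print(X)  # extent of reduction grows monotonically with time
```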
Climate models agree remarkably well on Arctic sea ice reductions
Christensen, Jens H.; Yang, Shuting; Langen, Peter L.; Thejl, Peter; Boberg, Fredrik
2017-04-01
Coupled global climate models have been used as major tools to provide future climate projections, based on the physical laws that govern the dynamics and thermodynamics of the climate system. However, while climate models in general predict declines in Arctic sea ice cover (i.e., ice extent and volume) from the late 20th century through the next decades in response to the increase of anthropogenic forcing, models show a wide inter-model spread in hindcast, with simulated sea ice extent as low as 50% or as high as 200% of the observed present-day conditions. Likewise, models show a wide range in the timing of the projected sea ice decline, raising the question of uncertainty in model-predicted polar climate and casting doubt on the robustness of findings based on multi-model approaches, such as provided by the Coupled Model Intercomparison Project phase 5 (CMIP5). Constrained estimates of when global mean temperature passes a certain threshold leading to a new sea ice state in the Arctic with summer-time open water conditions are in increasing demand, both for scientific reasons and from policymakers and stakeholders in general. Climate models are used to pursue this, but due to the model inadequacies or 'errors' mentioned above, as well as a wide spread in possible future projections, uncertainties due to model deficiencies have been seen as the main source of uncertainty in providing the demanded information with sufficient accuracy. As an effort within the ERC-Synergy project Ice2Ice, here we demonstrate that relating relative changes in sea ice area to global mean temperature change from individual models, using all available information from the CMIP5 archives from the historical and the RCP4.5 and RCP8.5 future scenarios together with the observed variation from 1979-2015, shows that i) simulated and observed sea ice area cannot at the 95% level be seen as coming from different statistical populations; ii) the Arctic could as a combination of natural variability and anthropogenic
Improved ceramic slip casting technique [application to aircraft model fabrication]
Buck, Gregory M. (Inventor); Vasquez, Peter (Inventor)
1993-01-01
A primary concern in modern fluid dynamics research is the experimental verification of computational aerothermodynamic codes. This research requires high precision and detail in the test model employed. Ceramic materials are used for these models because of their low heat conductivity and their survivability at high temperatures. To fabricate such models, slip casting techniques were developed to provide net-form, precision casting capability for high-purity ceramic materials in aqueous solutions. In previous slip casting techniques, block, or flask, molds made of plaster-of-paris were used to draw liquid from the slip material. Upon setting, parts were removed from the flask mold and cured in a kiln at high temperatures. Casting detail was usually limited with this technique -- detailed parts were frequently damaged upon separation from the flask mold, as the molded parts are extremely delicate in the uncured state, and the flask mold is inflexible. Ceramic surfaces were also marred by 'parting lines' caused by mold separation. This adversely affected the aerodynamic surface quality of the model as well. (Parting lines are invariably necessary on or near the leading edges of wings, nosetips, and fins for mold separation. These areas are also critical for flow boundary layer control.) Parting agents used in the casting process also affected surface quality. These agents eventually soaked into the mold or the model, or flaked off when releasing the cast model. Different materials were tried, such as oils, paraffin, and even algae. The algae released best, but some of it remained on the model and imparted an uneven texture and discoloration to the model surface when cured. According to the present invention, a wax pattern for a shell mold is provided, and an aqueous mixture of a calcium sulfate-bonded investment material is applied as a coating to the wax pattern. The coated wax pattern is then dried, followed by curing to vaporize the wax pattern and leave a shell
Selective catalytic reduction of NO in a reverse-flow reactor: Modelling and experimental validation
International Nuclear Information System (INIS)
Muñoz, Emilio; Marín, Pablo; Díez, Fernando V.; Ordóñez, Salvador
2015-01-01
Highlights: • Reverse-flow reactors easily overcome feed concentration disturbances. • Central feeding improves ammonia adsorption in reverse-flow reactors. • Dynamic heterogeneous model validated with bench-scale experiments. • Optimum reverse-flow reactor design improves efficiency and reduces reactor size. - Abstract: The abatement of nitrogen oxides produced in combustion processes and in the chemical industry requires efficient and reliable technologies capable of fulfilling strict environmental regulations. Selective catalytic reduction (SCR) with ammonia in fixed-bed (monolithic) reactors has stood out among other techniques in the last decades. In this work, the use of reverse-flow reactors, operated under the forced unsteady state generated by the periodic reversal of the flow direction, is studied for improving SCR performance. This reactor can take advantage of ammonia adsorption in the catalyst to enhance concentration profiles in the reactor, increasing reaction rate and efficiency and reducing the emission of unreacted ammonia. The process has been studied experimentally in a bench-scale device using a commercial monolithic catalyst. The optimum operating conditions, the best ammonia feed configuration (side or central) and the capacity of the reactor to deal with feed concentration disturbances are analysed. The experiments have also been used to validate a mathematical model of the reactor based on mass conservation equations, and the model has been used to design a full-size reverse-flow reactor capable of operating at industrial conditions
Cooperative cognitive radio networking system model, enabling techniques, and performance
Cao, Bin; Mark, Jon W
2016-01-01
This SpringerBrief examines the active cooperation between users of Cooperative Cognitive Radio Networking (CCRN), exploring the system model, enabling techniques, and performance. The brief provides a systematic study of active cooperation between primary users and secondary users, i.e., CCRN, followed by discussions of the research issues and challenges in designing spectrum-energy efficient CCRN. As an effort to shed light on the design of spectrum-energy efficient CCRN, the authors model the CCRN based on orthogonal modulation and an orthogonally dual-polarized antenna (ODPA). The resource allocation issues are detailed with respect to both models, in terms of problem formulation, solution approach, and numerical results. Finally, the optimal communication strategies for both primary and secondary users to achieve spectrum-energy efficient CCRN are analyzed.
Numerical and modeling techniques used in the EPIC code
International Nuclear Information System (INIS)
Pizzica, P.A.; Abramson, P.B.
1977-01-01
EPIC models the fuel and coolant motion that results from internal fuel pin pressure (from fission gas or fuel vapor) and/or from the generation of sodium vapor pressures in the coolant channel subsequent to pin failure in an LMFBR. The modeling includes the ejection of molten fuel from the pin into a coolant channel with any amount of voiding, through a clad rip which may be of any length or which may expand with time. One-dimensional Eulerian hydrodynamics is used to model both the motion of fuel and fission gas inside a molten fuel cavity and the mixture of two-phase sodium and fission gas in the channel. Motion of molten fuel particles in the coolant channel is tracked with a particle-in-cell technique
Teaching scientific concepts through simple models and social communication techniques
International Nuclear Information System (INIS)
Tilakaratne, K.
2011-01-01
For science education, it is important to demonstrate to students the relevance of scientific concepts in every-day life experiences. Although there are methods available for achieving this goal, it is more effective if cultural flavor is also added to the teaching techniques and thereby the teacher and students can easily relate the subject matter to their surroundings. Furthermore, this would bridge the gap between science and day-to-day experiences in an effective manner. It could also help students to use science as a tool to solve problems faced by them and consequently they would feel science is a part of their lives. In this paper, it has been described how simple models and cultural communication techniques can be used effectively in demonstrating important scientific concepts to the students of secondary and higher secondary levels by using two consecutive activities carried out at the Institute of Fundamental Studies (IFS), Sri Lanka. (author)
Sanz, Luis; Alonso, Juan Antonio
2017-12-01
In this work we develop approximate aggregation techniques in the context of slow-fast linear population models governed by stochastic differential equations, and apply the results to the treatment of populations with spatial heterogeneity. Approximate aggregation techniques allow one to transform a complex system, involving many coupled variables and processes acting on different time scales, into a simpler reduced model with a smaller number of 'global' variables, in such a way that the dynamics of the former can be approximated by that of the latter. In our model we consider a linear fast deterministic process together with a linear slow process in which the parameters are affected by additive noise, and give conditions for the solutions corresponding to positive initial conditions to remain positive for all times. By letting the fast process reach equilibrium we build a reduced system with a smaller number of variables, and provide results relating the asymptotic behaviour of the first- and second-order moments of the population vector for the original and the reduced system. The general technique is illustrated by analysing a multiregional stochastic system in which dispersal is deterministic and the growth rate of the populations in each patch is affected by additive noise.
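The deterministic core of the aggregation idea can be sketched in a few lines: let a fast dispersal process reach its equilibrium distribution v, then replace the multi-patch system with a single 'global' variable whose growth rate is the v-weighted average of the patch rates. The two-patch numbers below are invented, and the additive noise of the paper's stochastic model is omitted.

```python
import numpy as np

# Fast process: dispersal between two patches (column-stochastic matrix)
P = np.array([[0.7, 0.4],
              [0.3, 0.6]])
# Equilibrium distribution v of the fast process: P v = v, sum(v) = 1
w, V = np.linalg.eig(P)
v = np.real(V[:, np.argmax(np.real(w))])
v = v / v.sum()

# Slow process: per-patch growth rates (illustrative values, no noise)
r = np.array([0.02, -0.01])
r_bar = float(v @ r)  # aggregated growth rate of the reduced model

x = np.array([10.0, 5.0])   # full two-patch state
n = x.sum()                 # reduced 'global' variable

for _ in range(50):
    x = np.linalg.matrix_power(P, 50) @ x  # fast dispersal equilibrates
    x = (1.0 + r) * x                      # slow demography acts
    n *= (1.0 + r_bar)                     # one step of the reduced model

print(x.sum(), n)  # the totals agree closely
```

Because dispersal re-equilibrates before each slow step, the total population of the full system grows exactly at the aggregated rate, which is the separation-of-time-scales argument in miniature.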
An Iterative Uncertainty Assessment Technique for Environmental Modeling
International Nuclear Information System (INIS)
Engel, David W.; Liebetrau, Albert M.; Jarman, Kenneth D.; Ferryman, Thomas A.; Scheibe, Timothy D.; Didier, Brett T.
2004-01-01
The reliability of and confidence in predictions from model simulations are crucial--these predictions can significantly affect risk assessment decisions. For example, the fate of contaminants at the U.S. Department of Energy's Hanford Site has critical impacts on long-term waste management strategies. In the uncertainty estimation efforts for the Hanford Site-Wide Groundwater Modeling program, computational issues severely constrain both the number of uncertain parameters that can be considered and the degree of realism that can be included in the models. Substantial improvements in the overall efficiency of uncertainty analysis are needed to fully explore and quantify significant sources of uncertainty. We have combined state-of-the-art statistical and mathematical techniques in a unique iterative, limited sampling approach to efficiently quantify both local and global prediction uncertainties resulting from model input uncertainties. The approach is designed for application to widely diverse problems across multiple scientific domains. Results are presented for both an analytical model where the response surface is "known" and a simplified contaminant fate transport and groundwater flow model. The results show that our iterative method for approximating a response surface (for subsequent calculation of uncertainty estimates) of specified precision requires less computing time than traditional approaches based upon noniterative sampling methods
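A toy version of such an iterative, limited-sampling response-surface scheme: fit a polynomial surrogate to a handful of model runs, then spend each new (expensive) run where the surrogate currently disagrees most with the model, stopping once the surface is known to a specified precision. The analytical "model", tolerance and polynomial degrees are invented; the actual method combines statistical techniques well beyond this sketch.

```python
import numpy as np

def model(x):
    # Stand-in for an expensive simulation (an analytical "known" response)
    return np.sin(2.0 * x) + 0.3 * x

xs = list(np.linspace(0.0, 3.0, 4))       # initial coarse sample
candidates = np.linspace(0.0, 3.0, 61)    # where the surface is checked

for _ in range(40):
    ys = model(np.array(xs))
    surrogate = np.polynomial.Polynomial.fit(xs, ys, deg=min(9, len(xs) - 1))
    resid = np.abs(surrogate(candidates) - model(candidates))
    if resid.max() < 1e-2:                # surface known to target precision
        break
    xs.append(float(candidates[np.argmax(resid)]))  # one new model run

print(len(xs), resid.max())
```

The point of the iteration is that sampling is concentrated where the surrogate is worst, so far fewer model runs are needed than with a fixed (noniterative) sampling plan.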
Light aircraft sound transmission studies - Noise reduction model
Atwal, Mahabir S.; Heitman, Karen E.; Crocker, Malcolm J.
1987-01-01
Experimental tests conducted on the fuselage of a single-engine Piper Cherokee light aircraft suggest that the cabin interior noise can be reduced by increasing the transmission loss of the dominant sound transmission paths and/or by increasing the cabin interior sound absorption. The validity of using a simple room equation model to predict the cabin interior sound-pressure level for different fuselage and exterior sound field conditions is also presented. The room equation model is based on the sound power flow balance for the cabin space and utilizes the measured transmitted sound intensity data. The room equation model predictions were considered good enough to be used for preliminary acoustical design studies.
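The room equation referred to above balances transmitted sound power against cabin absorption; in diffuse-field form the interior mean-square pressure follows from p² = 4Wρc/R, with room constant R = S·ᾱ/(1 − ᾱ). The small numeric sketch below uses hypothetical cabin area, absorption coefficients and transmitted power, not the Piper Cherokee measurements.

```python
import math

RHO_C = 415.0  # characteristic impedance of air rho*c (Pa*s/m, approx.)
P_REF = 2e-5   # reference sound pressure (Pa)

def interior_spl(w_transmitted, surface_area, alpha_bar):
    """Diffuse-field room equation: transmitted sound power is balanced
    by absorption, p^2 = 4*W*rho*c / R with R = S*alpha/(1 - alpha)."""
    room_constant = surface_area * alpha_bar / (1.0 - alpha_bar)
    p_sq = 4.0 * w_transmitted * RHO_C / room_constant
    return 10.0 * math.log10(p_sq / P_REF**2)

# Hypothetical cabin: 1 mW of transmitted power, 12 m^2 interior surface
base = interior_spl(1e-3, 12.0, 0.2)
treated = interior_spl(1e-3, 12.0, 0.4)   # added interior absorption
print(base, base - treated)  # extra absorption lowers the cabin SPL
```

Doubling the mean absorption coefficient raises the room constant and lowers the interior level by 10·log10 of the ratio, which is the mechanism the abstract cites for reducing cabin noise.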
Use of advanced modeling techniques to optimize thermal packaging designs.
Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar
2010-01-01
Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed
Modelling galaxy formation with multi-scale techniques
International Nuclear Information System (INIS)
Hobbs, A.
2011-01-01
Galaxy formation and evolution depends on a wide variety of physical processes - star formation, gas cooling, supernovae explosions and stellar winds, etc. - that span an enormous range of physical scales. We present a novel technique for modelling such massively multiscale systems. This has two key new elements: Lagrangian re-simulation, and convergent 'sub-grid' physics. The former allows us to hone in on interesting simulation regions with very high resolution. The latter allows us to increase resolution for the physics that we can resolve, without unresolved physics spoiling convergence. We illustrate the power of our new approach by showing some new results for star formation in the Milky Way. (author)
Prescribed wind shear modelling with the actuator line technique
DEFF Research Database (Denmark)
Mikkelsen, Robert Flemming; Sørensen, Jens Nørkær; Troldborg, Niels
2007-01-01
A method for prescribing arbitrary steady atmospheric wind shear profiles combined with CFD is presented. The method is furthermore combined with the actuator line technique governing the aerodynamic loads on a wind turbine. Computations are carried out on a wind turbine exposed to a representative...... steady atmospheric wind shear profile with and without wind direction changes up through the atmospheric boundary layer. Results show that the main impact on the turbine is captured by the model. Analysis of the wake behind the wind turbine reveals the formation of a skewed wake geometry interacting...
Validation techniques of agent based modelling for geospatial simulations
Darvishi, M.; Ahmadi, G.
2014-10-01
One of the most interesting aspects of modelling and simulation study is to describe real-world phenomena that have specific properties, especially those that are large in scale and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of users' growing interest in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. A key challenge for ABMS, however, is the difficulty of validation and verification. Because of frequently emergent patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems to be necessary. In this paper, after a review of the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.
Optimal Tax Reduction by Depreciation : A Stochastic Model
Berg, M.; De Waegenaere, A.M.B.; Wielhouwer, J.L.
1996-01-01
This paper focuses on the choice of a depreciation method when trying to minimize the expected value of the present value of future tax payments. In a quite general model that allows for stochastic future cash-flows and a tax structure with tax brackets, we determine the optimal choice between the
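A deterministic special case makes the trade-off concrete: with a flat tax rate and a positive discount rate, an accelerated schedule shifts deductions forward and raises the present value of the tax savings. The sketch below compares straight-line with double-declining-balance depreciation under invented figures; the paper's model adds stochastic cash flows and tax brackets on top of this.

```python
def pv_tax_savings(schedule, tax_rate, discount_rate):
    # Present value of the tax shield generated by a depreciation schedule
    return sum(tax_rate * d / (1.0 + discount_rate) ** (t + 1)
               for t, d in enumerate(schedule))

asset, years, tax, rate = 1000.0, 5, 0.3, 0.08

straight_line = [asset / years] * years

# Double-declining balance, switching to straight line on the remainder
ddb, book = [], asset
for t in range(years):
    charge = max(book * 2.0 / years, book / (years - t))
    ddb.append(charge)
    book -= charge

pv_sl = pv_tax_savings(straight_line, tax, rate)
pv_ddb = pv_tax_savings(ddb, tax, rate)
print(pv_sl, pv_ddb)  # the accelerated schedule has the higher PV
```

Both schedules write off the same total, so the comparison isolates the pure timing effect that the optimization in the paper exploits.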
Equation-free model reduction for complex dynamical systems
International Nuclear Information System (INIS)
Le Maitre, O. P.; Mathelin, L.; Le Maitre, O. P.
2010-01-01
This paper presents a reduced model strategy for simulation of complex physical systems. A classical reduced basis is first constructed relying on proper orthogonal decomposition of the system. Then, unlike the alternative approaches, such as Galerkin projection schemes for instance, an equation-free reduced model is constructed. It consists in the determination of an explicit transformation, or mapping, for the evolution over a coarse time-step of the projection coefficients of the system state on the reduced basis. The mapping is expressed as an explicit polynomial transformation of the projection coefficients and is computed once and for all in a pre-processing stage using the detailed model equation of the system. The reduced system can then be advanced in time by successive applications of the mapping. The CPU cost of the method lies essentially in the mapping approximation which is performed offline, in a parallel fashion, and only once. Subsequent application of the mapping to perform a time-integration is carried out at a low cost thanks to its explicit character. Application of the method is considered for the 2-D flow around a circular cylinder. We investigate the effectiveness of the reduced model in rendering the dynamics for both asymptotic state and transient stages. It is shown that the method leads to a stable and accurate time-integration for only a fraction of the cost of a detailed simulation, provided that the mapping is properly approximated and the reduced basis remains relevant for the dynamics investigated. (authors)
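The offline/online split described above can be imitated on a toy linear system: build a reduced basis from snapshots, fit an explicit mapping of the projection coefficients over one coarse step in a pre-processing stage, then advance the reduced state by repeated application of the mapping. Here the mapping is degree-1 and the "detailed model" is an invented low-rank linear map, far simpler than the cylinder-flow application.

```python
import numpy as np

rng = np.random.default_rng(1)

# Detailed model: linear map x_{k+1} = A x_k whose dynamics live on a
# low-dimensional subspace (a convenient stand-in, not the paper's CFD case)
n, r_dim = 20, 3
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(np.concatenate([[0.9, 0.8, 0.7], np.zeros(n - 3)])) @ Q.T

# Snapshots of the detailed model (after one step, to land on the subspace)
x = A @ rng.standard_normal(n)
snaps = []
for _ in range(60):
    snaps.append(x.copy())
    x = A @ x
S = np.array(snaps).T

# Reduced basis by SVD (POD), then the offline stage: fit an explicit
# degree-1 mapping of the projection coefficients over one coarse step
U = np.linalg.svd(S, full_matrices=False)[0][:, :r_dim]
C = U.T @ S
M = C[:, 1:] @ np.linalg.pinv(C[:, :-1])

# Online stage: advance the reduced state by repeated application of M
a = U.T @ snaps[0]
for _ in range(10):
    a = M @ a
full = np.linalg.matrix_power(A, 10) @ snaps[0]
err = np.linalg.norm(U @ a - full) / np.linalg.norm(full)
print(err)  # the explicit mapping reproduces the detailed trajectory
```

The mapping is computed once from detailed-model data and is then cheap to apply, which is the cost structure the abstract emphasizes.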
Modeling and Recovery of Iron (Fe) from Red Mud by Coal Reduction
Zhao, Xiancong; Li, Hongxu; Wang, Lei; Zhang, Lifeng
Recovery of Fe from red mud has been studied using statistically designed experiments. The effects of three factors, namely reduction temperature, reduction time and proportion of additive, on the recovery of Fe have been investigated. Experiments have been carried out using orthogonal central composite design and factorial design methods. A model has been obtained through variance analysis at a 92.5% confidence level.
Laparoscopic anterior resection: new anastomosis technique in a pig model.
Bedirli, Abdulkadir; Yucel, Deniz; Ekim, Burcu
2014-01-01
Bowel anastomosis after anterior resection is one of the most difficult tasks to perform during laparoscopic colorectal surgery. This study aims to evaluate a new feasible and safe intracorporeal anastomosis technique after laparoscopic left-sided colon or rectum resection in a pig model. The technique was evaluated in 5 pigs. The OrVil device (Covidien, Mansfield, Massachusetts) was inserted into the anus and advanced proximally to the rectum. A 0.5-cm incision was made in the sigmoid colon, and the 2 sutures attached to its delivery tube were cut. After the delivery tube was evacuated through the anus, the tip of the anvil was removed through the perforation. The sigmoid colon was transected just distal to the perforation with an endoscopic linear stapler. The rectosigmoid segment to be resected was removed through the anus with a grasper, and distal transection was performed. A 25-mm circular stapler was inserted and combined with the anvil, and end-to-side intracorporeal anastomosis was then performed. We performed the technique in 5 pigs. Anastomosis required an average of 12 minutes. We observed that the proximal and distal donuts were completely removed in all pigs. No anastomotic air leakage was observed in any of the animals. This study shows the efficacy and safety of intracorporeal anastomosis with the OrVil device after laparoscopic anterior resection.
New efficient optimizing techniques for Kalman filters and numerical weather prediction models
Famelis, Ioannis; Galanis, George; Liakatas, Aristotelis
2016-06-01
The need for accurate local environmental predictions and simulations beyond classical meteorological forecasts has been increasing in recent years due to the great number of applications that are directly or indirectly affected: renewable energy resource assessment, natural hazards early warning systems, global warming and questions on climate change can be listed among them. Within this framework, the utilization of numerical weather and wave prediction systems in conjunction with advanced statistical techniques that support the elimination of the model bias and the reduction of the error variability may successfully address the above issues. In the present work, new optimization methods are studied and tested in selected areas of Greece where the use of renewable energy sources is of critical importance. The added value of the proposed work is due to the solid mathematical background adopted, making use of Information Geometry and statistical techniques, new versions of Kalman filters and state-of-the-art numerical analysis tools.
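As a minimal illustration of the Kalman-filter ingredient, the scalar filter below tracks the systematic bias of a forecast as a random-walk state and blends each new forecast-minus-observation error into the estimate. The noise variances and the true bias are invented; the paper's filters are considerably more elaborate.

```python
import numpy as np

rng = np.random.default_rng(2)

# Scalar Kalman filter for forecast bias: state b follows a random walk,
# each observation is the forecast-minus-observation error
q, r = 1e-4, 0.25          # process / observation noise variances (assumed)
b_hat, p = 0.0, 1.0        # initial bias estimate and its variance

true_bias = 1.5
for _ in range(500):
    obs_error = true_bias + rng.normal(0.0, np.sqrt(r))
    p += q                              # predict step (random-walk state)
    k = p / (p + r)                     # Kalman gain
    b_hat += k * (obs_error - b_hat)    # update: blend in the new error
    p *= (1.0 - k)

print(b_hat)  # settles near the true systematic bias
```

Subtracting the filtered bias from raw forecasts is the standard route to eliminating model bias while adapting as the bias drifts.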
Reduction and identification for hybrid dynamical models of terrestrial locomotion
Burden, Samuel A.; Sastry, S. Shankar
2013-06-01
The study of terrestrial locomotion has compelling applications ranging from the design of legged robots to the development of novel prosthetic devices. From a first-principles perspective, the dynamics of legged locomotion seem overwhelmingly complex, as nonlinear rigid body dynamics couple to a granular substrate through viscoelastic limbs. However, a surfeit of empirical data demonstrates that animals use a small fraction of their available degrees of freedom during locomotion on regular terrain, suggesting that a reduced-order model can accurately describe the dynamical variation observed during steady-state locomotion. Exploiting this emergent phenomenon has the potential to dramatically simplify the design and control of micro-scale legged robots. We propose a paradigm for studying dynamic terrestrial locomotion using empirically-validated reduced-order models.
Rumpler, Romain; Deü, Jean-François; Göransson, Peter
2012-11-01
Structural-acoustic finite element models including three-dimensional (3D) modeling of porous media are generally computationally costly. While being the most commonly used predictive tool in the context of noise reduction applications, efficient solution strategies are required. In this work, an original modal reduction technique, involving real-valued modes computed from a classical eigenvalue solver is proposed to reduce the size of the problem associated with the porous media. In the form presented in this contribution, the method is suited for homogeneous porous layers. It is validated on a 1D poro-acoustic academic problem and tested for its performance on a 3D application, using a subdomain decomposition strategy. The performance of the proposed method is estimated in terms of degrees of freedom downsizing, computational time enhancement, as well as matrix sparsity of the reduced system.
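The essence of such a modal reduction can be shown on a toy structure: compute real-valued modes with a classical eigenvalue solver, keep a few of them, and solve the decoupled modal equations instead of the full system. The spring-mass chain below is a stand-in for the (much richer) poroelastic finite element model; sizes and loading are invented.

```python
import numpy as np

# Undamped N-dof spring-mass chain, unit masses and unit springs
N = 30
K = 2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
K[-1, -1] = 1.0  # free end

# Real-valued normal modes from a classical eigenvalue solver
lam, phi = np.linalg.eigh(K)

# Keep the few lowest modes; solve the harmonic problem (K - w^2 I)u = f
r = 5
f = np.zeros(N); f[-1] = 1.0
w2 = 0.25 * lam[0]                       # excitation well below mode 1
u_full = np.linalg.solve(K - w2 * np.eye(N), f)
q = (phi[:, :r].T @ f) / (lam[:r] - w2)  # decoupled modal equations
u_red = phi[:, :r] @ q

err = np.linalg.norm(u_red - u_full) / np.linalg.norm(u_full)
print(err)  # a handful of modes reproduce the low-frequency response
```

The reduced solve involves r scalar equations instead of an N-by-N system, which is the downsizing-versus-accuracy trade the abstract quantifies for the porous layers.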
Advanced applications of numerical modelling techniques for clay extruder design
Kandasamy, Saravanakumar
Ceramic materials play a vital role in our day to day life. Recent advances in research, manufacture and processing techniques and production methodologies have broadened the scope of ceramic products such as bricks, pipes and tiles, especially in the construction industry. These are mainly manufactured using an extrusion process in auger extruders. During their long history of application in the ceramic industry, most of the design developments of extruder systems have resulted from expensive laboratory-based experimental work and field-based trial and error runs. In spite of these design developments, auger extruders continue to be energy intensive devices with high operating costs. Limited understanding of the physical processes involved and the cost and time requirements of lab-based experiments were found to be the major obstacles to the further development of auger extruders. An attempt has been made herein to use Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) based numerical modelling techniques to reduce the costs and time associated with research into design improvement by experimental trials. These two techniques, although used widely in other engineering applications, have rarely been applied to auger extruder development. This has been due to a number of reasons, including technical limitations of the CFD tools previously available. Modern CFD and FEA software packages have much enhanced capabilities and allow the modelling of the flow of complex fluids such as clay. This research work presents a methodology using a Herschel-Bulkley fluid flow based CFD model to simulate and assess the flow of a clay-water mixture through the extruder and the die of a vacuum de-airing type clay extrusion unit used in ceramic extrusion. The extruder design and the operating parameters were varied to study their influence on the power consumption and the extrusion pressure. The model results were then validated using results from
Modeling nitrate-nitrogen load reduction strategies for the Des Moines River, Iowa using SWAT.
Schilling, Keith E; Wolter, Calvin F
2009-10-01
The Des Moines River, which drains a watershed of 16,175 km(2) in portions of Iowa and Minnesota, is impaired for nitrate-nitrogen (nitrate) due to concentrations that exceed regulatory limits for public water supplies. The Soil Water Assessment Tool (SWAT) model was used to model streamflow and nitrate loads and to evaluate a suite of basin-wide changes and targeting configurations to potentially reduce nitrate loads in the river. The SWAT model comprised 173 subbasins and 2,516 hydrologic response units and included point and nonpoint nitrogen sources. The model was calibrated for an 11-year period, and three basin-wide and four targeting strategies were evaluated. Results indicated that nonpoint sources accounted for 95% of the total nitrate export. A reduction in fertilizer applications from 170 to 50 kg/ha achieved a 38% reduction in nitrate loads, exceeding the 34% reduction required. In terms of targeting, the most efficient load reductions occurred when fertilizer applications were reduced in the subbasins nearest the watershed outlet. The greatest load reduction for the area of land treated was associated with reducing loads from the 55 subbasins with the highest nitrate loads, where a 14% reduction in nitrate loads was achieved by reducing applications on 30% of the land area. SWAT model results provide much needed guidance on how to begin implementing load reduction strategies most efficiently in the Des Moines River watershed.
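The targeting logic reported above (treat the highest-load subbasins first until the basin-wide target is met) can be sketched as a greedy selection. All loads and the per-subbasin treatment effectiveness below are invented for illustration, not SWAT output.

```python
# Greedy targeting sketch: treat the highest-load subbasins first until a
# basin-wide reduction target is met (all numbers are hypothetical)
loads = [120.0, 95.0, 80.0, 60.0, 45.0, 30.0, 20.0, 10.0]  # kg/yr, per subbasin
treatment_effect = 0.5   # assumed fractional load cut in a treated subbasin
target_fraction = 0.34   # required basin-wide nitrate load reduction

total = sum(loads)
treated, removed = [], 0.0
for load in sorted(loads, reverse=True):
    if removed / total >= target_fraction:
        break
    treated.append(load)
    removed += treatment_effect * load

print(len(treated), removed / total)
```

Ranking by load means each treated subbasin buys the largest possible reduction per unit of land, which mirrors the paper's finding that targeted configurations outperform uniform basin-wide changes.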
Yang, L. M.; Shu, C.; Yang, W. M.; Wu, J.
2018-04-01
High memory consumption and computational cost are the major barriers preventing the widespread use of the discrete velocity method (DVM) in the simulation of flows in all flow regimes. To overcome this drawback, an implicit DVM with a memory reduction technique for solving a steady discrete velocity Boltzmann equation (DVBE) is presented in this work. In the method, the distribution functions in the whole discrete velocity space do not need to be stored; they are calculated from the macroscopic flow variables. As a result, its memory requirement is of the same order as that of a conventional Euler/Navier-Stokes solver. At the same time, it is more efficient than the explicit DVM for the simulation of various flows. To make the method efficient for solving flow problems in all flow regimes, a prediction step is introduced to estimate the local equilibrium state of the DVBE. In the prediction step, the distribution function at the cell interface is calculated from the local solution of the DVBE. When the cell size is less than the mean free path, the prediction step has almost no effect on the solution. However, when the cell size is much larger than the mean free path, the prediction step dominates the solution so as to provide reasonable results in such a flow regime. In addition, to further improve the computational efficiency of the developed scheme in the continuum flow regime, the implicit technique is also introduced into the prediction step. Numerical results showed that the proposed implicit scheme provides reasonable results in all flow regimes and significantly increases the computational efficiency in the continuum flow regime as compared with existing DVM solvers.
Verified reduction of dimensionality for an all-vanadium redox flow battery model
Sharma, A. K.; Ling, C. Y.; Birgersson, E.; Vynnycky, M.; Han, M.
2015-04-01
The computational cost of all-vanadium redox flow battery (VRFB) models that seek to capture the transport phenomena usually increases with the number of spatial dimensions considered. In this context, we carry out a scale analysis to derive a reduced zero-dimensional model. Two nondimensional numbers, and the limits in which they support the model reduction, are identified. We verify the reduced model by comparing its charge-discharge curve predictions with those of a full two-dimensional model. The proposed analysis leading to the reduction in dimensionality is generic and can be employed for other types of redox flow batteries.
Assessments of Bubble Dynamics Model and Influential Parameters in Microbubble Drag Reduction
National Research Council Canada - National Science Library
Skudarnov, P. V; Lin, C. X
2006-01-01
.... The effects of mixture density variation, free stream turbulence intensity, free stream velocity, and surface roughness on the microbubble drag reduction were studied using a single phase model based...
Techniques to Access Databases and Integrate Data for Hydrologic Modeling
International Nuclear Information System (INIS)
Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.
2009-01-01
This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and
Techniques to Access Databases and Integrate Data for Hydrologic Modeling
Energy Technology Data Exchange (ETDEWEB)
Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.
2009-06-17
This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and
Mathematical Modeling and Dimension Reduction in Dynamical Systems
DEFF Research Database (Denmark)
Elmegård, Michael
Processes that change in time are typically described in mathematics by differential equations. These may be applied to model everything from weather forecasting, brain patterns, reaction kinetics, water waves, finance, social dynamics, structural dynamics and electrodynamics, to name only a few. These systems are generically nonlinear and the studies of them often become enormously complex. The framework in which such systems are best understood is the theory of dynamical systems, where the critical behavior is systematically analyzed by performing bifurcation theory. In that context the current...
Coset Space Dimensional Reduction approach to the Standard Model
International Nuclear Information System (INIS)
Farakos, K.; Kapetanakis, D.; Koutsoumbas, G.; Zoupanos, G.
1988-01-01
We present a unified theory in ten dimensions based on the gauge group E8, which is dimensionally reduced to the Standard Model SU(3)c×SU(2)L×U(1), which breaks further spontaneously to SU(3)c×U(1)em. The model gives similar predictions for sin²θw and proton decay as the minimal SU(5) G.U.T., while a natural choice of the coset space radii predicts light Higgs masses a la Coleman-Weinberg
Modelling of biofilters for ammonium reduction in combined sewer overflow.
Henrichs, M; Welker, A; Uhl, M
2009-01-01
Biofiltration has proved to be a useful system to treat combined sewer overflow (CSO). The study presented uses numerical simulation to detect the critical operating conditions of the filter. The multi-component reactive transport module CW2D was used for the simulation study. Single-event simulations of lab-scale column experiments with varying boundary conditions regarding the throttle outflow rate were carried out. For the calibration of the CW2D model, measurement results of four experiments in two lab-scale columns were used. The model was validated by simulating four events in two further columns filled with the same filter material. These columns were operated with higher throttle outflow rates than the columns used for calibration. For ammonium (NH(4)-N) a good fit between measured and simulated data could be achieved. However, the comparison of simulated and measured effluent concentrations of nitrate (NO(3)-N) showed that there is a need for further investigation, mainly due to the uncertainties in the degradation processes during dry periods between the loadings.
Directory of Open Access Journals (Sweden)
Tom Cattaert
We propose a novel multifactor dimensionality reduction method for epistasis detection in small or extended pedigrees, FAM-MDR. It combines features of the Genome-wide Rapid Association using Mixed Model And Regression approach (GRAMMAR) with Model-Based MDR (MB-MDR). We focus on continuous traits, although the method is general and can be used for outcomes of any type, including binary and censored traits. When comparing FAM-MDR with Pedigree-based Generalized MDR (PGMDR), which is a generalization of Multifactor Dimensionality Reduction (MDR) to continuous traits and related individuals, FAM-MDR was found to outperform PGMDR in terms of power, in most of the considered simulated scenarios. Additional simulations revealed that PGMDR does not appropriately deal with multiple testing and consequently gives rise to overly optimistic results. FAM-MDR adequately deals with multiple testing in epistasis screens and is in contrast rather conservative, by construction. Furthermore, simulations show that correcting for lower order (main) effects is of utmost importance when claiming epistasis. As Type 2 Diabetes Mellitus (T2DM) is a complex phenotype likely influenced by gene-gene interactions, we applied FAM-MDR to examine data on glucose area-under-the-curve (GAUC), an endophenotype of T2DM for which multiple independent genetic associations have been observed, in the Amish Family Diabetes Study (AFDS). This application reveals that FAM-MDR makes more efficient use of the available data than PGMDR and can deal with multi-generational pedigrees more easily. In conclusion, we have validated FAM-MDR and compared it to PGMDR, the current state-of-the-art MDR method for family data, using both simulations and a practical dataset. FAM-MDR is found to outperform PGMDR in that it handles the multiple testing issue more correctly, has increased power, and efficiently uses all available information.
Cattaert, Tom; Urrea, Víctor; Naj, Adam C; De Lobel, Lizzy; De Wit, Vanessa; Fu, Mao; Mahachie John, Jestinah M; Shen, Haiqing; Calle, M Luz; Ritchie, Marylyn D; Edwards, Todd L; Van Steen, Kristel
2010-04-22
We propose a novel multifactor dimensionality reduction method for epistasis detection in small or extended pedigrees, FAM-MDR. It combines features of the Genome-wide Rapid Association using Mixed Model And Regression approach (GRAMMAR) with Model-Based MDR (MB-MDR). We focus on continuous traits, although the method is general and can be used for outcomes of any type, including binary and censored traits. When comparing FAM-MDR with Pedigree-based Generalized MDR (PGMDR), which is a generalization of Multifactor Dimensionality Reduction (MDR) to continuous traits and related individuals, FAM-MDR was found to outperform PGMDR in terms of power, in most of the considered simulated scenarios. Additional simulations revealed that PGMDR does not appropriately deal with multiple testing and consequently gives rise to overly optimistic results. FAM-MDR adequately deals with multiple testing in epistasis screens and is in contrast rather conservative, by construction. Furthermore, simulations show that correcting for lower order (main) effects is of utmost importance when claiming epistasis. As Type 2 Diabetes Mellitus (T2DM) is a complex phenotype likely influenced by gene-gene interactions, we applied FAM-MDR to examine data on glucose area-under-the-curve (GAUC), an endophenotype of T2DM for which multiple independent genetic associations have been observed, in the Amish Family Diabetes Study (AFDS). This application reveals that FAM-MDR makes more efficient use of the available data than PGMDR and can deal with multi-generational pedigrees more easily. In conclusion, we have validated FAM-MDR and compared it to PGMDR, the current state-of-the-art MDR method for family data, using both simulations and a practical dataset. FAM-MDR is found to outperform PGMDR in that it handles the multiple testing issue more correctly, has increased power, and efficiently uses all available information.
Directory of Open Access Journals (Sweden)
Wenjun Chen
2016-03-01
A nuclear magnetic resonance (NMR) experiment for measurement of time-dependent magnetic fields was introduced. To improve the signal-to-interference-plus-noise ratio (SINR) of NMR data, a new method for interference cancellation and noise reduction (ICNR) based on singular value decomposition (SVD) was proposed. The singular values corresponding to the radio frequency interference (RFI) signal were identified in terms of the correlation between the FID data and the reference data, and then the RFI and noise were suppressed by setting the corresponding singular values to zero. The validity of the algorithm was verified by processing the measured NMR data. The results indicated that this method significantly suppresses RFI and random noise while preserving the FID signal well. At present, the major limitation of the proposed SVD-based ICNR technique is that the threshold value for interference cancellation needs to be selected manually. Finally, the inversion waveform of the applied alternating magnetic field was obtained by fitting the processed experimental data.
Energy Technology Data Exchange (ETDEWEB)
Ahmad, Sajjad, E-mail: sajjadhaleli@gmail.com [Department of Physics, Bahauddin Zakariya University, Multan 60800 (Pakistan); Ziya, Amer Bashir [Department of Physics, Bahauddin Zakariya University, Multan 60800 (Pakistan); Ashiq, Muhammad Naeem, E-mail: naeemashiqqau@yahoo.com [Institute of Chemical Sciences, Bahauddin Zakariya University, Multan 60800 (Pakistan); Ibrahim, Ather; Atiq, Shabbar [Institute of Advanced Materials, Bahauddin Zakariya University, Multan 60800 (Pakistan); Ahmad, Naseeb [Department of Physics, Government College University, Faisalabad (Pakistan); Shakeel, Muhammad [Institute of Advanced Materials, Bahauddin Zakariya University, Multan 60800 (Pakistan); Khan, Muhammad Azhar [Department of Physics, The Islamia University of Bahawalpur, Bahawalpur 63100 (Pakistan)
2016-12-01
Fe–Ni–Cu invar alloys of various compositions (Fe65Ni35−xCux, x=0, 0.2, 0.6, 1, 1.4 and 1.8) were synthesized via a chemical reduction route. These alloys were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM) and vibrating sample magnetometry (VSM) techniques. The XRD analysis revealed the formation of a face-centered cubic (fcc) structure. The lattice parameter and the crystallite size of the investigated alloys were calculated, and the line broadening indicated the nano-crystallite size of the alloy powder. The particle size, estimated from SEM, decreases with the incorporation of Cu and was found to be in the range of 24–40 nm. The addition of Cu to these alloys appreciably enhances the saturation magnetization, which increases from 99 to 123 emu/g. Electrical conductivity was improved with Cu addition. The thermal conductivity was calculated using the Wiedemann–Franz law. - Graphical abstract: M–H loops of Fe65Ni35−xCux (x=0, 0.2, 0.6, 1, 1.4, 1.8) nano-invar alloys. - Highlights: • A simple method has been employed for the synthesis of invar alloys. • The magnetic properties have been enhanced by the Cu content. • The electrical conductivity has been improved.
Ahmad, Sajjad; Ziya, Amer Bashir; Ashiq, Muhammad Naeem; Ibrahim, Ather; Atiq, Shabbar; Ahmad, Naseeb; Shakeel, Muhammad; Khan, Muhammad Azhar
2016-12-01
Fe-Ni-Cu invar alloys of various compositions (Fe65Ni35-xCux, x=0, 0.2, 0.6, 1, 1.4 and 1.8) were synthesized via a chemical reduction route. These alloys were characterized by X-ray diffraction (XRD), scanning electron microscopy (SEM) and vibrating sample magnetometry (VSM) techniques. The XRD analysis revealed the formation of a face-centered cubic (fcc) structure. The lattice parameter and the crystallite size of the investigated alloys were calculated, and the line broadening indicated the nano-crystallite size of the alloy powder. The particle size, estimated from SEM, decreases with the incorporation of Cu and was found to be in the range of 24-40 nm. The addition of Cu to these alloys appreciably enhances the saturation magnetization, which increases from 99 to 123 emu/g. Electrical conductivity was improved with Cu addition. The thermal conductivity was calculated using the Wiedemann-Franz law.
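The Wiedemann-Franz step mentioned in the abstract above has a standard closed form, kappa = L·sigma·T with the Sommerfeld Lorenz number L. The sketch below is a generic illustration, not the authors' code, and the conductivity value in the usage note is for pure copper, not the studied alloys:

```python
LORENZ = 2.44e-8  # W·Ω/K², Sommerfeld value of the Lorenz number


def thermal_conductivity(sigma, temperature):
    """Wiedemann-Franz estimate of thermal conductivity.

    sigma: electrical conductivity in S/m
    temperature: absolute temperature in K
    returns: thermal conductivity in W/(m·K)
    """
    return LORENZ * sigma * temperature
```

For example, copper at 300 K (sigma about 5.96e7 S/m) gives roughly 436 W/(m·K), close to the measured value near 400.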
Chen, Wenjun; Ma, Hong; Yu, De; Zhang, Hua
2016-03-04
A nuclear magnetic resonance (NMR) experiment for measurement of time-dependent magnetic fields was introduced. To improve the signal-to-interference-plus-noise ratio (SINR) of NMR data, a new method for interference cancellation and noise reduction (ICNR) based on singular value decomposition (SVD) was proposed. The singular values corresponding to the radio frequency interference (RFI) signal were identified in terms of the correlation between the FID data and the reference data, and then the RFI and noise were suppressed by setting the corresponding singular values to zero. The validity of the algorithm was verified by processing the measured NMR data. The results indicated that this method significantly suppresses RFI and random noise while preserving the FID signal well. At present, the major limitation of the proposed SVD-based ICNR technique is that the threshold value for interference cancellation needs to be selected manually. Finally, the inversion waveform of the applied alternating magnetic field was obtained by fitting the processed experimental data.
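The SVD truncation step at the core of the ICNR method described above can be sketched generically. In the paper the interference-related singular values are identified by correlation with a reference channel; in this sketch their indices are assumed given:

```python
import numpy as np


def svd_icnr(data, rfi_indices):
    """Suppress interference by zeroing selected singular values.

    data: 2-D array, e.g. repeated FID records stacked row-wise.
    rfi_indices: indices of singular values attributed to RFI/noise
    (chosen in the paper via correlation with reference data; here
    they are simply passed in).
    Returns the reconstructed, interference-suppressed array.
    """
    U, s, Vt = np.linalg.svd(data, full_matrices=False)
    s_clean = s.copy()
    s_clean[list(rfi_indices)] = 0.0  # cancel the RFI components
    return U @ np.diag(s_clean) @ Vt
```

When the interference dominates the signal, its energy concentrates in the leading singular values, so zeroing them removes most of the RFI while the lower-rank FID structure survives.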
Model Reduction of Nonlinear Aeroelastic Systems Experiencing Hopf Bifurcation
Abdelkefi, Abdessattar
2013-06-18
In this paper, we employ the normal form to derive a reduced-order model that reproduces nonlinear dynamical behavior of aeroelastic systems that undergo Hopf bifurcation. As an example, we consider a rigid two-dimensional airfoil that is supported by nonlinear springs in the pitch and plunge directions and subjected to nonlinear aerodynamic loads. We apply the center manifold theorem to the governing equations to derive their normal form, which constitutes a simplified representation of the aeroelastic system near flutter onset (the manifestation of Hopf bifurcation). Then, we use the normal form to identify a self-excited oscillator governed by a time-delay ordinary differential equation that approximates the dynamical behavior while reducing the dimension of the original system. Results obtained from this oscillator show a great capability to properly predict limit cycle oscillations that take place beyond and above flutter, as compared with the original aeroelastic system.
Model reduction and analysis of a vibrating beam microgyroscope
Ghommem, Mehdi
2012-05-08
The present work is concerned with the nonlinear dynamic analysis of a vibrating beam microgyroscope composed of a rotating cantilever beam with a tip mass at its end. The rigid mass is coupled to two orthogonal electrodes in the drive and sense directions, which are attached to the rotating base. The microbeam is driven by an AC voltage in the drive direction, which induces vibrations in the orthogonal sense direction due to rotation about the microbeam axis. The electrode placed in the sense direction is used to measure the induced motions and extract the underlying angular speed. A reduced-order model of the gyroscope is developed using the method of multiple scales and used to examine its dynamic behavior.
Modeling and Forecasting Electricity Demand in Azerbaijan Using Cointegration Techniques
Directory of Open Access Journals (Sweden)
Fakhri J. Hasanov
2016-12-01
Policymakers in developing and transitional economies require sound models to: (i) understand the drivers of rapidly growing energy consumption and (ii) produce forecasts of future energy demand. This paper attempts to model electricity demand in Azerbaijan and provide future forecast scenarios; as far as we are aware, this is the first such attempt for Azerbaijan using a comprehensive modelling framework. Electricity consumption increased and decreased considerably in Azerbaijan from 1995 to 2013 (the period used for the empirical analysis): it increased on average by about 4% per annum from 1995 to 2006, decreased by about 4½% per annum from 2006 to 2010, and increased thereafter. It is therefore vital that Azerbaijani planners and policymakers understand what drives electricity demand and be able to forecast how it will grow in order to plan for future power production. However, modeling electricity demand for such a country has many challenges. Azerbaijan is rich in energy resources, and consequently GDP is heavily influenced by oil prices; hence, real non-oil GDP is employed as the activity driver in this research (unlike almost all previous aggregate energy demand studies). Moreover, electricity prices are administered rather than market driven. Therefore, different cointegration and error correction techniques are employed to estimate a number of per capita electricity demand models for Azerbaijan, which are used to produce forecast scenarios for up to 2025. The resulting estimated models (in terms of coefficients, etc.) and forecasts of electricity demand for Azerbaijan in 2025 prove to be very similar, with the Business as Usual forecast ranging from about 19½ to 21 TWh.
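The two-step Engle-Granger procedure is one common cointegration and error correction technique of the kind the abstract refers to. The toy sketch below is generic, not the authors' specification (which involves per capita demand, non-oil GDP and administered prices):

```python
import numpy as np


def engle_granger_ecm(y, x):
    """Two-step Engle-Granger sketch.

    Step 1: long-run OLS of y on x (levels regression).
    Step 2: short-run regression of dy on dx and the lagged
    residual, whose coefficient is the error-correction term.
    Returns (long_run_slope, error_correction_coefficient).
    """
    A = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta                       # long-run disequilibrium
    dy, dx = np.diff(y), np.diff(x)
    B = np.column_stack([np.ones_like(dx), dx, resid[:-1]])
    gamma, *_ = np.linalg.lstsq(B, dy, rcond=None)
    return beta[1], gamma[2]
```

For cointegrated series the error-correction coefficient comes out negative, reflecting the pull back toward the long-run relationship.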
Adaptive Autoregressive Model for Reduction of Noise in SPECT
Directory of Open Access Journals (Sweden)
Reijo Takalo
2015-01-01
This paper presents improved autoregressive (AR) modelling to reduce noise in SPECT images. An AR filter was applied to prefilter projection images and to postfilter ordered subset expectation maximisation (OSEM) reconstruction images (AR-OSEM-AR method). The performance of this method was compared with filtered back projection (FBP) preceded by Butterworth filtering (BW-FBP method) and the OSEM reconstruction method followed by Butterworth filtering (OSEM-BW method). A mathematical cylinder phantom was used for the study. It consisted of hot and cold objects. The tests were performed using three simulated SPECT datasets. Image quality was assessed by means of the percentage contrast resolution (CR%) and the full width at half maximum (FWHM) of the line spread functions of the cylinders. The BW-FBP method showed the highest CR% values and the AR-OSEM-AR method gave the lowest CR% values for cold stacks. In the analysis of hot stacks, the BW-FBP method had higher CR% values than the OSEM-BW method. The BW-FBP method exhibited the lowest FWHM values for cold stacks and the AR-OSEM-AR method for hot stacks. In conclusion, the AR-OSEM-AR method is a feasible way to remove noise from SPECT images. It has good spatial resolution for hot objects.
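As a generic illustration of autoregressive filtering of the kind described above, the sketch below fits AR(p) coefficients by least squares and replaces each sample with its one-step AR prediction. This is a plain AR smoothing pass, not the specific adaptive AR filter of the paper:

```python
import numpy as np


def ar_smooth(x, p=4):
    """Fit an AR(p) model by least squares and return the signal
    with each sample (beyond the first p) replaced by its one-step
    AR prediction, a simple noise-reduction pass.
    """
    # Design matrix of lagged samples: column k holds x[t-k-1]
    X = np.column_stack(
        [x[p - k - 1 : len(x) - k - 1] for k in range(p)]
    )
    coeffs, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
    pred = X @ coeffs
    return np.concatenate([x[:p], pred])
```

On a signal that is exactly autoregressive, the predictions reproduce the input; on a noisy signal they attenuate components the AR model cannot track.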
Towner, Robert L.; Band, Jonathan L.
2012-01-01
An analysis technique was developed to compare and track mode shapes for different Finite Element Models. The technique may be applied to a variety of structural dynamics analyses, including model reduction validation (comparing unreduced and reduced models), mode tracking for various parametric analyses (e.g., launch vehicle model dispersion analysis to identify sensitivities to modal gain for Guidance, Navigation, and Control), comparing models of different mesh fidelity (e.g., a coarse model for a preliminary analysis compared to a higher-fidelity model for a detailed analysis) and mode tracking for a structure with properties that change over time (e.g., a launch vehicle from liftoff through end-of-burn, with propellant being expended during the flight). Mode shapes for different models are compared and tracked using several numerical indicators, including traditional Cross-Orthogonality and Modal Assurance Criteria approaches, as well as numerical indicators obtained by comparing modal strain energy and kinetic energy distributions. This analysis technique has been used to reliably identify correlated mode shapes for complex Finite Element Models that would otherwise be difficult to compare using traditional techniques. This improved approach also utilizes an adaptive mode tracking algorithm that allows for automated tracking when working with complex models and/or comparing a large group of models.
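The Modal Assurance Criterion mentioned above has a standard closed form, MAC(i,j) = |phi_i^T phi_j|^2 / ((phi_i^T phi_i)(phi_j^T phi_j)). The sketch below pairs modes by highest MAC, assuming real mode shapes; the adaptive tracking logic and the energy-distribution indicators of the paper are not reproduced:

```python
import numpy as np


def mac_matrix(phi_a, phi_b):
    """Modal Assurance Criterion between two mode-shape sets
    (columns are modes). Entries near 1 indicate correlated shapes."""
    num = np.abs(phi_a.T @ phi_b) ** 2
    den = np.outer(np.sum(phi_a**2, axis=0), np.sum(phi_b**2, axis=0))
    return num / den


def track_modes(phi_ref, phi_new):
    """Pair each reference mode with the new mode of highest MAC.
    Returns, for each reference column, the matching column index
    in phi_new."""
    return np.argmax(mac_matrix(phi_ref, phi_new), axis=1)
```

A quick check is to permute the columns of a mode-shape matrix and confirm the pairing recovers the permutation.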
International Nuclear Information System (INIS)
Rosselet, C.M.; Kerr, J.A.
1993-05-01
During summertime high-pressure conditions, high photo-oxidant (O3, H2O2, PAN and others) levels are frequently observed in the planetary boundary layer in central Europe. It is well known that close to the earth's surface ozone is formed by complex reactions involving VOC, NOx, and sunlight. Substantial reductions of both precursors are needed to reduce photo-oxidant levels. In this context the reductions of the abundance of the precursors and the variation of their ratios are of great importance. Here we report model calculations from the Harwell Photochemical Trajectory Model of the levels of O3, H2O2 and PAN along a trajectory over the Swiss Plateau from Lake Constance to Lake Geneva. These calculations are in satisfactory agreement with measurements made during the intensive observation period of the research program POLLUMET (Pollution and Meteorology in Switzerland). Sensitivity calculations of emission reduction scenarios indicate that on the Swiss Plateau the ozone production may be mainly NOx-limited under conditions where the CO levels are closer to the upper limit of the range (120-600 ppbv). The calculated peak ozone level reduction caused by an exclusive NOx-emission reduction is about three times larger than that caused by an exclusive VOC reduction. The combined reduction of all precursor compounds is the most efficient strategy, although it is only marginally more efficient than the NOx-reduction scenario alone. (author)
Model Building by Coset Space Dimensional Reduction Scheme Using Ten-Dimensional Coset Spaces
Jittoh, T.; Koike, M.; Nomura, T.; Sato, J.; Shimomura, T.
2008-12-01
We investigate the gauge-Higgs unification models within the scheme of the coset space dimensional reduction, beginning with a gauge theory in a fourteen-dimensional spacetime where the extra-dimensional space has the structure of a ten-dimensional compact coset space. We found seventeen phenomenologically acceptable models through an exhaustive search over the candidates for the coset space, the gauge group in fourteen dimensions, and the fermion representation. Of the seventeen, ten models led to SO(10)(×U(1)) GUT-like models after dimensional reduction, three models led to SU(5)×U(1) GUT-like models, and four to SU(3)×SU(2)×U(1)×U(1) Standard-Model-like models. The combinations of the coset space, the gauge group in the fourteen-dimensional spacetime, and the representation of the fermion contents of such models are listed.
Building a model by coset space dimensional reduction using 10 dimensional coset spaces
Jittoh, Toshifumi; Koike, Masafumi; Nomura, Takaaki; Sato, Joe; Shimomura, Takashi
2008-05-01
We investigate gauge-Higgs unification models within the scheme of the coset space dimensional reduction, beginning with a gauge theory in a fourteen-dimensional spacetime whose extra-dimensional space has the structure of a ten-dimensional compact coset space. We found seventeen phenomenologically acceptable models through an exhaustive search over the candidates for the coset space, the gauge group in fourteen dimensions, and the fermion representation. Of the seventeen, ten models led to SO(10)(×U(1)) GUT-like models after dimensional reduction, three models led to SU(5)×U(1) GUT-like models, and four to SU(3)×SU(2)×U(1)×U(1) Standard-Model-like models. The combinations of the coset space, the gauge group in the fourteen-dimensional spacetime, and the representation of the fermion contents of such models are listed.
Model assessment using a multi-metric ranking technique
Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.
2017-12-01
Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and identifying adeptness in extreme events, while maintaining simplicity for management decisions. Flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique which ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as rank correlation, were also explored, but removed when their information was found to be generally duplicative of other metrics. While equal weights are applied here, weights could be altered to favor preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts, instead of distance, along-track, and cross-track errors, is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process, but were found useful in an independent context and will be briefly reported.
Total laparoscopic gastrocystoplasty: experimental technique in a porcine model
Directory of Open Access Journals (Sweden)
Frederico R. Romero
2007-02-01
OBJECTIVE: Describe a unique simplified experimental technique for total laparoscopic gastrocystoplasty in a porcine model. MATERIAL AND METHODS: We performed laparoscopic gastrocystoplasty on 10 animals. The gastroepiploic arch was identified and carefully mobilized from its origin at the pylorus to the beginning of the previously demarcated gastric wedge. The gastric segment was resected with sharp dissection. Both gastric suturing and gastrovesical anastomosis were performed with absorbable running sutures. The complete procedure and the stages of gastric dissection, gastric closure, and gastrovesical anastomosis were separately timed for each laparoscopic gastrocystoplasty. The end result of the gastric suturing and the bladder augmentation was evaluated by fluoroscopy or endoscopy. RESULTS: Mean total operative time was 5.2 (range 3.5-8) hours: 84.5 (range 62-110) minutes for the gastric dissection, 56 (range 28-80) minutes for the gastric suturing, and 170.6 (range 70-200) minutes for the gastrovesical anastomosis. A cystogram showed a small leakage from the vesical anastomosis in the first two cases. No extravasation from the gastric closure was observed in the postoperative gastrogram. CONCLUSIONS: Total laparoscopic gastrocystoplasty is a feasible but complex procedure that currently has limited clinical application. With the increasing use of laparoscopy in reconstructive surgery of the lower urinary tract, gastrocystoplasty may become an attractive option because of its potential advantages over techniques using small and large bowel segments.
Mapping the Complexities of Online Dialogue: An Analytical Modeling Technique
Directory of Open Access Journals (Sweden)
Robert Newell
2014-03-01
Full Text Available The e-Dialogue platform was developed in 2001 to explore the potential of using the Internet for engaging diverse groups of people and multiple perspectives in substantive dialogue on sustainability. The system is online, text-based, and serves as a transdisciplinary space for bringing together researchers, practitioners, policy-makers and community leaders. The Newell-Dale Conversation Modeling Technique (NDCMT) was designed for in-depth analysis of e-Dialogue conversations and uses empirical methodology to minimize observer bias during analysis of a conversation transcript. NDCMT elucidates emergent ideas, identifies connections between ideas and themes, and provides a coherent synthesis and deeper understanding of the underlying patterns of online conversations. Continual application and improvement of NDCMT can lead to powerful methodologies for empirically analyzing digital discourse and better capture of innovations produced through such discourse. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs140221
Vector machine techniques for modeling of seismic liquefaction data
Directory of Open Access Journals (Sweden)
Pijush Samui
2014-06-01
Full Text Available This article employs three soft computing techniques, Support Vector Machine (SVM), Least Square Support Vector Machine (LSSVM), and Relevance Vector Machine (RVM), for prediction of liquefaction susceptibility of soil. SVM and LSSVM are based on the structural risk minimization (SRM) principle, which seeks to minimize an upper bound of the generalization error consisting of the sum of the training error and a confidence interval. RVM is a sparse Bayesian kernel machine. SVM, LSSVM and RVM have been used as classification tools. The developed SVM, LSSVM and RVM give equations for prediction of liquefaction susceptibility of soil. A comparative study has been carried out between the developed SVM, LSSVM and RVM models. The results from this article indicate that the developed SVM gives the best performance for prediction of liquefaction susceptibility of soil.
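Of the three techniques, LSSVM is the simplest to sketch: replacing the SVM's inequality constraints with equality constraints reduces training to one linear solve. The sketch below is a generic LSSVM classifier with an RBF kernel; the two-feature "liquefaction-style" data, the kernel width, and the regularization constant are all hypothetical, not taken from the article.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise squared distances, then a Gaussian (RBF) kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_train(X, y, gamma=1.0, C=10.0):
    """Least Squares SVM classifier: a single (n+1)x(n+1) linear solve.

    Solves [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y].
    """
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, gamma=1.0):
    return np.sign(rbf_kernel(X_new, X_train, gamma) @ alpha + b)

# Hypothetical data: two normalized features (e.g. penetration
# resistance and cyclic stress ratio), labels +1 (liquefied) / -1 (not).
X = np.array([[0.10, 0.90], [0.20, 0.80], [0.15, 0.85],
              [0.90, 0.10], [0.80, 0.20], [0.85, 0.15]])
y = np.array([1.0, 1.0, 1.0, -1.0, -1.0, -1.0])
b, alpha = lssvm_train(X, y)
pred = lssvm_predict(X, b, alpha, X)
```

Unlike the standard SVM's quadratic program, every training point here receives a (generally nonzero) dual weight, which is why LSSVM trades sparsity for training speed.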
Demand Management Based on Model Predictive Control Techniques
Directory of Open Access Journals (Sweden)
Yasser A. Davizón
2014-01-01
Full Text Available Demand management (DM) is the process that helps companies to sell the right product to the right customer, at the right time, and for the right price. Therefore, the challenge for any company is to determine how much to sell, at what price, and to which market segment while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a proper dynamical system analogy based on active suspension, and a stability analysis is provided via the Lyapunov direct method.
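The receding-horizon logic of MPC applied to pricing can be sketched in a few lines. This is not the paper's formulation: the scalar demand model, the candidate price grid, and the cost weights below are invented for illustration, and the horizon optimization is done by brute-force enumeration rather than a QP solver.

```python
import itertools

def mpc_price(x0, x_ref, horizon=3, prices=(0.0, 0.25, 0.5, 0.75, 1.0),
              a=0.9, b=0.4, q=1.0, r=0.1):
    """One receding-horizon step: evaluate every candidate price sequence
    over the horizon and return the first move of the cheapest one.

    Illustrative demand model: x[k+1] = a*x[k] + b*(1 - u[k]), so a
    higher price u suppresses next-period demand. The stage cost
    penalizes deviation from the capacity target x_ref and high prices.
    """
    best_cost, best_u0 = float("inf"), prices[0]
    for seq in itertools.product(prices, repeat=horizon):
        x, cost = x0, 0.0
        for u in seq:
            x = a * x + b * (1.0 - u)
            cost += q * (x - x_ref) ** 2 + r * u ** 2
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# Closed loop: apply only the first price, observe, re-optimize (MPC).
x, x_ref = 2.0, 1.0
traj = [x]
for _ in range(15):
    u = mpc_price(x, x_ref)
    x = 0.9 * x + 0.4 * (1.0 - u)
    traj.append(x)
```

Re-solving at every step is what distinguishes MPC from a precomputed open-loop price schedule: disturbances to observed demand are absorbed at the next optimization.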
Karaca, Koray; Bayın, Selçuk
2008-04-01
We construct a physical model to study the effects of dimensional reduction that might have taken place during the inflationary phase of the universe. The model we propose is a (1 + D)-dimensional (D > 3), nonsingular, spatially homogeneous and isotropic Friedmann model. We consider dimensional reduction to take place in a stepwise manner and interpret each step as a phase transition. Independent of the details of the process of dimensional reduction, we impose suitable boundary conditions across the transitions and trace the effects of dimensional reduction to the currently observable parameters of the universe. In order to exhibit the cosmological features of the proposed model, we construct a (1 + 4)-dimensional toy model for both closed and open cases of Friedmann geometries. It is shown that in these models the universe makes transition into the lower dimension when the critical length parameter l_{4,3}, which signals dimensional reduction, reaches the Planck length in D = 3. The numerical models we present in this paper have the capability of making definite predictions about the cosmological parameters of the universe such as the Hubble parameter, age and density.
Arsenault, Richard; Poissant, Dominique; Brissette, François
2015-11-01
This paper evaluated the effects of parametric reduction of a hydrological model on five regionalization methods and 267 catchments in the province of Quebec, Canada. The Sobol' variance-based sensitivity analysis was used to rank the model parameters by their influence on the model results, and sequential parameter fixing was performed. The reduction in parameter correlations improved parameter identifiability; however, this improvement was minimal and did not carry over to the regionalization results. It was shown that 11 of the HSAMI model's 23 parameters could be fixed with little or no loss in regionalization skill. The main conclusions were that (1) the conceptual lumped models used in this study did not represent physical processes sufficiently well to warrant parameter reduction for physics-based regionalization methods for the Canadian basins examined and (2) catchment descriptors did not adequately represent the relevant hydrological processes, namely snow accumulation and melt.
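The variance-based ranking step can be illustrated with a first-order Sobol' estimator. This is a generic sketch, not the HSAMI analysis: the toy "hydrological" response and its parameter weights are hypothetical, and the estimator is the standard Saltelli pick-freeze scheme implemented directly with numpy.

```python
import numpy as np

def sobol_first_order(model, n_params, n_samples=4096, seed=0):
    """Monte Carlo estimate of first-order Sobol' indices (Saltelli
    pick-freeze scheme).

    Two independent sample matrices A, B on [0,1]^d are drawn, and
    S_i = mean(f(B) * (f(AB_i) - f(A))) / Var(f), where AB_i is A with
    column i replaced by column i of B.
    """
    rng = np.random.default_rng(seed)
    A = rng.random((n_samples, n_params))
    B = rng.random((n_samples, n_params))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(n_params)
    for i in range(n_params):
        AB = A.copy()
        AB[:, i] = B[:, i]
        S[i] = np.mean(fB * (model(AB) - fA)) / var
    return S

# Hypothetical toy response: dominated by parameter 0, with a weak
# contribution from parameter 1 and none from parameter 2 — parameter 2
# is a candidate for fixing, as in the sequential parameter-fixing step.
def response(X):
    return 4.0 * X[:, 0] + 0.5 * X[:, 1] + 0.0 * X[:, 2]

S = sobol_first_order(response, n_params=3)
```

Parameters whose indices fall below a chosen threshold would then be fixed at nominal values and the calibration repeated, mirroring the paper's sequential fixing procedure.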
Cai, Chenxia; Kelly, James T; Avise, Jeremy C; Kaduwela, Ajith P; Stockwell, William R
2011-05-01
An updated version of the Statewide Air Pollution Research Center (SAPRC) chemical mechanism (SAPRC07C) was implemented into the Community Multiscale Air Quality (CMAQ) model version 4.6. CMAQ simulations using SAPRC07C and the previously released version, SAPRC99, were performed and compared for an episode during July-August 2000. Ozone (O3) predictions of the SAPRC07C simulation are generally lower than those of the SAPRC99 simulation in the key areas of central and southern California, especially in areas where modeled concentrations are greater than the federal 8-hr O3 standard of 75 parts per billion (ppb) and/or when the volatile organic compound (VOC)/nitrogen oxides (NOx) ratio is less than 13. The relative changes of ozone production efficiency (OPE) against the VOC/NOx ratio at 46 sites indicate that the OPE is reduced in SAPRC07C compared with SAPRC99 at most sites by as much as approximately 22%. The SAPRC99 and SAPRC07C mechanisms respond similarly to 20% reductions in anthropogenic VOC emissions. The response of the mechanisms to 20% NOx emissions reductions can be grouped into three cases. In case 1, in which both mechanisms show a decrease in daily maximum 8-hr O3 concentration with decreasing NOx emissions, the O3 decrease in SAPRC07C is smaller. In case 2, in which both mechanisms show an increase in O3 with decreasing NOx emissions, the O3 increase is larger in SAPRC07C. In case 3, SAPRC07C simulates an increase in O3 in response to reduced NOx emissions whereas SAPRC99 simulates a decrease in O3 for the same region. As a result, the areas where NOx controls would be disbeneficial are spatially expanded in SAPRC07C. Although the results presented here are valuable for understanding differences in predictions and model response for SAPRC99 and SAPRC07C, the study did not evaluate the impact of mechanism differences in the context of the U.S. Environmental Protection Agency's guidance for using numerical models in demonstrating air quality attainment.
Directory of Open Access Journals (Sweden)
Yong-Beom Lee
2017-01-01
Full Text Available Background. Among coracoclavicular (CC) fixation techniques, the use of a flip button device was demonstrated to have successful outcomes, with the advantage of being able to accommodate an arthroscopic procedure. Purpose. This study was conducted to investigate the factors associated with loss of fixation after arthroscopically assisted CC fixation using a single flip button device for acromioclavicular (AC) joint dislocations. Materials and Methods. We enrolled a total of 47 patients (35 men and 12 women). Plain radiography was performed at a mean of 24 months postoperatively to evaluate the final radiological outcome. The primary outcome measure was a long-term reduction of the AC joint for at least 24 months. Results. We found that 29 patients had a high quality reduction (61.7%) and 18 patients had a low quality reduction (38.3%) in initial postoperative CT findings. Our study showed that the duration from injury to treatment (5 days) and the quality of initial postoperative reduction were significantly associated with the maintenance of reduction at final follow-up. Conclusion. Our study showed that maintaining stable reduction after arthroscopically assisted CC fixation using a single flip button device technique is difficult, especially in patients who received delayed treatment or whose initial reduction quality was poor.
International Nuclear Information System (INIS)
Schubiger-Banz, S; Arisona, S M; Zhong, C
2014-01-01
This paper presents a workflow to increase the level of detail of reality-based 3D urban models. It combines the established workflows from photogrammetry and procedural modeling in order to exploit distinct advantages of both approaches. The combination has advantages over purely automatic acquisition in terms of visual quality, accuracy and model semantics. Compared to manual modeling, procedural techniques can be much more time-effective while maintaining the qualitative properties of the modeled environment. In addition, our method includes processes for procedurally adding additional features such as road and rail networks. The resulting models meet the increasing needs in urban environments for planning, inventory, and analysis.
McCollough, Cynthia H; Leng, Shuai; Sunnegardh, Johan; Vrieze, Thomas J; Yu, Lifeng; Lane, John; Raupach, Rainer; Stierstorfer, Karl; Flohr, Thomas
2013-06-01
To assess the z-axis resolution improvement and dose reduction potential achieved using a z-axis deconvolution technique with iterative reconstruction (IR) relative to filtered backprojection (FBP) images created with the use of a z-axis comb filter. Each of three phantoms was scanned with two different acquisition modes: (1) an ultrahigh resolution (UHR) scan mode that uses a comb filter in the fan angle direction to increase in-plane spatial resolution and (2) a z-axis ultrahigh spatial resolution (zUHR) scan mode that uses comb filters in both the fan and cone angle directions to improve both in-plane and z-axis spatial resolution. All other scanning parameters were identical. First, the ACR CT Accreditation phantom, rotated by 90° so that the high-contrast spatial resolution targets were parallel to the coronal plane, was scanned to assess limiting spatial resolution and image noise. Second, section sensitivity profiles (SSPs) were measured using a copper foil embedded in an acrylic cylinder, and the full-width-at-half-maximum (FWHM) and full-width-at-tenth-maximum (FWTM) of the SSPs were calculated. Third, an anthropomorphic head phantom containing a human skull was scanned to assess clinical acceptability for imaging of the temporal bone. For each scan, FBP images were reconstructed for the zUHR scan using the narrowest image thickness available. For the CT accreditation phantom, zUHR images were also reconstructed using an IR algorithm (SAFIRE, Siemens Healthcare, Forchheim, Germany) to assess the influence of the IR algorithm on image noise. A z-axis deconvolution technique combined with the IR algorithm was used to reconstruct images at the narrowest image thickness possible from the UHR scan data. Images of the ACR and head phantoms were reformatted into the coronal plane. The head phantom images were evaluated by a neuroradiologist to assess acceptability for use in patients undergoing clinically indicated CT imaging of the temporal bone. The limiting
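The FWHM and FWTM of a section sensitivity profile can be measured numerically as the profile width at 50% and 10% of its peak. The sketch below is a generic width-at-fraction routine with linear interpolation between samples, checked against a Gaussian profile (where FWHM = 2·sqrt(2·ln 2)·σ ≈ 2.355·σ); the profile itself is synthetic, not scanner data.

```python
import numpy as np

def width_at_fraction(z, profile, frac):
    """Width of a single-peaked profile at `frac` of its maximum, with
    linear interpolation between samples (FWHM for frac=0.5, FWTM for
    frac=0.1)."""
    p = np.asarray(profile, dtype=float)
    level = frac * p.max()
    idx = np.where(p >= level)[0]
    i0, i1 = idx[0], idx[-1]
    # Interpolate the left and right crossings of `level`
    if i0 > 0:
        left = z[i0 - 1] + (level - p[i0 - 1]) / (p[i0] - p[i0 - 1]) \
            * (z[i0] - z[i0 - 1])
    else:
        left = z[i0]
    if i1 < len(p) - 1:
        right = z[i1] + (level - p[i1]) / (p[i1 + 1] - p[i1]) \
            * (z[i1 + 1] - z[i1])
    else:
        right = z[i1]
    return right - left

# Synthetic Gaussian SSP along z (arbitrary units)
z = np.linspace(-5, 5, 2001)
sigma = 0.8
ssp = np.exp(-z**2 / (2 * sigma**2))
fwhm = width_at_fraction(z, ssp, 0.5)
fwtm = width_at_fraction(z, ssp, 0.1)
```

Reporting both FWHM and FWTM is useful because deconvolution techniques can narrow the profile's core while leaving tails that only the FWTM reveals.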
Carlberg, Kevin
2010-12-10
A novel model reduction technique for static systems is presented. The method is developed using a goal-oriented framework, and it extends the concept of snapshots for proper orthogonal decomposition (POD) to include (sensitivity) derivatives of the state with respect to system input parameters. The resulting reduced-order model generates accurate approximations due to its goal-oriented construction and the explicit 'training' of the model for parameter changes. The model is less computationally expensive to construct than typical POD approaches, since efficient multiple right-hand side solvers can be used to compute the sensitivity derivatives. The effectiveness of the method is demonstrated on a parameterized aerospace structure problem. © 2010 John Wiley & Sons, Ltd.
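The core construction, augmenting the POD snapshot matrix with parameter sensitivities dx/dμ before the SVD, can be sketched directly. The parameterized response below is a made-up analytic function standing in for a static structural solve; only the snapshot-augmentation pattern reflects the abstract.

```python
import numpy as np

def pod_basis(snapshots, sens_snapshots, energy=0.9999):
    """POD basis from state snapshots augmented with parameter
    sensitivities dx/dmu, truncated by cumulative singular-value
    energy."""
    S = np.hstack([snapshots, sens_snapshots])
    U, s, _ = np.linalg.svd(S, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1
    return U[:, :r]

# Hypothetical parameterized static response x(mu) and its analytic
# sensitivity dx/dmu, sampled at three training parameters.
n = 200
grid = np.linspace(0, 1, n)
mus = [0.5, 1.0, 1.5]
X  = np.column_stack([np.sin(mu * np.pi * grid) for mu in mus])
dX = np.column_stack([np.pi * grid * np.cos(mu * np.pi * grid)
                      for mu in mus])
Phi = pod_basis(X, dX)

# Reduced approximation of an unseen parameter via projection
x_new = np.sin(1.25 * np.pi * grid)
x_rom = Phi @ (Phi.T @ x_new)
err = np.linalg.norm(x_new - x_rom) / np.linalg.norm(x_new)
```

The sensitivity columns act like Hermite data in parameter space: the basis captures not just the sampled states but their first-order trend in μ, which is why the unseen parameter μ = 1.25 is recovered well from only three training points.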
A stochastic approach for model reduction and memory function design in hydrogeophysical inversion
Hou, Z.; Kellogg, A.; Terry, N.
2009-12-01
Geophysical (e.g., seismic, electromagnetic, radar) techniques and statistical methods are essential for research related to subsurface characterization, including monitoring subsurface flow and transport processes, oil/gas reservoir identification, etc. For deep subsurface characterization such as reservoir petroleum exploration, seismic methods have been widely used. Recently, electromagnetic (EM) methods have drawn great attention in the area of reservoir characterization. However, considering the enormous computational demand of seismic and EM forward modeling, having too many unknown parameters in the modeling domain is usually prohibitive. For shallow subsurface applications, the characterization can be very complicated considering the complexity and nonlinearity of flow and transport processes in the unsaturated zone. It is warranted to reduce the dimension of parameter space to a reasonable level. Another common concern is how to make the best use of time-lapse data with spatial-temporal correlations. This is even more critical when we try to monitor subsurface processes using geophysical data collected at different times. The normal practice is to compute the inverse images individually. These images are not necessarily continuous or even reasonably related, because of the non-uniqueness of hydrogeophysical inversion. We propose to use a stochastic framework by integrating the minimum-relative-entropy concept, quasi-Monte Carlo sampling techniques, and statistical tests. The approach allows efficient and sufficient exploration of all possibilities of model parameters and evaluation of their significance to geophysical responses. The analyses enable us to reduce the parameter space significantly. The approach can be combined with Bayesian updating, allowing us to treat the updated ‘posterior’ pdf as a memory function, which stores all the information up to date about the distributions of soil/field attributes/properties, then consider the
Directory of Open Access Journals (Sweden)
Shilpa Bhandari
2016-01-01
Full Text Available INTRODUCTION: With the advent of assisted reproductive treatment options, the incidence of multiple pregnancies has increased. Although the need for elective single embryo transfer is emphasized time and again, its uniform applicability in practice is yet a distant goal. In view of the fact that triplet and higher order pregnancies are associated with significant fetomaternal complications, fetal reduction is a commonly used option in such cases. This retrospective study aims to compare the perinatal outcome in patients with triplet gestation who have undergone spontaneous fetal reduction (SFR) as against those in whom multifetal pregnancy reduction (MFPR) was done. MATERIALS AND METHODS: In the present study, eighty patients with triplet gestation at 6 weeks were considered. The patients underwent SFR or MFPR at or before 12-13 weeks and were divided into two groups (34 and 46, respectively). RESULTS: Our study found no statistical difference in perinatal outcome between the SFR and MFPR groups in terms of average gestational age at delivery, abortion rate, preterm delivery rate, and birth weight. The study shows that the risk of aborting all fetuses in the subsequent 2 weeks after SFR is three times (odds ratio [OR] = 3.600, 95% confidence interval [CI] = 0.2794-46.388) that of MFPR. There were more chances of loss of an extra fetus in the SFR group (23.5%) than in the MFPR group (8.7%) (OR = 3.889, 95% CI = 1.030-14.680). As neither group offers any significant protection from preterm delivery, multiple pregnancies continue to be responsible for preterm delivery despite fetal reduction. CONCLUSION: There appear to be some advantages of MFPR in perinatal outcome when compared to SFR, especially if the latter happens at advanced gestation. Therefore, although it is advisable to wait for SFR to occur, in patients with triplet gestation at 11-12 weeks, MFPR is a viable option to be considered.
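The odds ratios and confidence intervals quoted above come from standard 2x2-table arithmetic, sketched below with the Woolf (log-normal) interval. The counts used in the example are hypothetical and are not the study's raw data; they merely illustrate the calculation.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table [[a, b], [c, d]] (events / non-events
    in two groups) with a Woolf (log-normal) 95% confidence interval:
    OR = (a*d)/(b*c), SE(log OR) = sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 8 of 34 losses in one group vs 4 of 46 in the other
or_, lo, hi = odds_ratio_ci(8, 26, 4, 42)
```

Note how small cell counts inflate the standard error of log(OR): an interval such as 0.2794-46.388 in the abstract signals very few events, so the point estimate alone should be interpreted cautiously.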
Global-local nonlinear model reduction for flows in heterogeneous porous media
AlOtaibi, Manal
2015-08-01
In this paper, we combine discrete empirical interpolation techniques, global mode decomposition methods, and local multiscale methods, such as the Generalized Multiscale Finite Element Method (GMsFEM), to reduce the computational complexity associated with nonlinear flows in highly-heterogeneous porous media. To solve the nonlinear governing equations, we employ the GMsFEM to represent the solution on a coarse grid with multiscale basis functions and apply proper orthogonal decomposition on a coarse grid. Computing the GMsFEM solution involves calculating the residual and the Jacobian on a fine grid. As such, we use local and global empirical interpolation concepts to circumvent performing these computations on the fine grid. The resulting reduced-order approach significantly reduces the flow problem size while accurately capturing the behavior of fully-resolved solutions. We consider several numerical examples of nonlinear multiscale partial differential equations that are numerically integrated using fully-implicit time marching schemes to demonstrate the capability of the proposed model reduction approach to speed up simulations of nonlinear flows in high-contrast porous media.
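The discrete empirical interpolation step mentioned above selects a few rows at which the nonlinear term is actually evaluated. The sketch below is the standard DEIM greedy index selection applied to a made-up family of Gaussian "nonlinear-term" snapshots; it is not the paper's GMsFEM setup.

```python
import numpy as np

def deim_indices(U):
    """Discrete Empirical Interpolation Method: greedy selection of
    interpolation rows from a POD basis U (n x m) of nonlinear-term
    snapshots."""
    n, m = U.shape
    p = [int(np.argmax(np.abs(U[:, 0])))]
    for j in range(1, m):
        # Interpolate the j-th basis vector on the current index set,
        # then pick the row where the residual is largest.
        c = np.linalg.solve(U[p, :j], U[p, j])
        r = U[:, j] - U[:, :j] @ c
        p.append(int(np.argmax(np.abs(r))))
    return np.array(p)

# POD basis of a hypothetical nonlinear term's snapshots
x = np.linspace(0, 1, 100)
snaps = np.column_stack([np.exp(-((x - mu) ** 2) / 0.01)
                         for mu in np.linspace(0.2, 0.8, 20)])
Ub, s, _ = np.linalg.svd(snaps, full_matrices=False)
U = Ub[:, :5]
idx = deim_indices(U)

# DEIM approximation of a new snapshot from only 5 sampled entries
f = np.exp(-((x - 0.43) ** 2) / 0.01)
c = np.linalg.solve(U[idx, :], f[idx])
f_deim = U @ c
err = np.linalg.norm(f - f_deim) / np.linalg.norm(f)
```

This is exactly what lets a reduced model avoid fine-grid residual and Jacobian assembly: the nonlinear term is sampled at the few DEIM rows and lifted back through the basis.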
Candreia, Claudia; Birrer, Ruth; Fistarol, Susanna; Kompis, Martin; Caversaccio, Marco; Arnold, Andreas; Stieger, Christof
2016-12-01
We present an analysis of adverse events after implantation of bone anchored hearing device in our patient population with focus on individual risk factors for peri-implant skin reactions. The investigation involved a chart review of adult Baha patients (n = 179) with 203 Bahas implanted with skin reduction techniques between 1993 and 2009, a questionnaire (n = 97) and a free clinical examination (n = 47). Skin reactions were graded by severity from 0 (no skin reaction) to 4 (implant loss resulting from infection) according to Holgers. We analyzed the skin reaction rate (SRR) defined as the number of skin reactions per year and the worst Holgers grade (WHG), which indicates the grade of the worst skin reaction per implant. We defined 20 parameters including the demographic characteristics, surgery details, subjective benefits, handling and individual factors. The most frequent adverse events (85 %) were skin reactions. The average SRR was 0.426 per Baha year. Six parameters showed an association with the SRR or the WHG. The clinically most relevant factors are an elevated Body Mass Index (BMI, p = 0.02) and darker skin type (p = 0.03). The SRR increased with the distance between the tragus and the implant (p = 0.02). Regarding the identified risk factors, the SRR might be reduced by selecting a location for the implant near the pinna and by specific counseling regarding post-operative care for patients with darker skin type or an elevated Body Mass Index (BMI). Few of the factors analyzed were found to influence the SRR and WHG. Since most adverse skin reactions could be treated easily with local therapy, our results suggest that in adult patients, individual risk factors for skin reactions are not a contraindication for Baha implantation. Thus, patients can be selected purely on audiological criteria.
Ebuna, D. R.; Kluesner, J.; Cunningham, K. J.; Edwards, J. H.
2016-12-01
An effective method for determining the approximate spatial extent of karst pore systems is critical for hydrological modeling in such environments. When using geophysical techniques, karst features are especially challenging to constrain due to their inherent heterogeneity and complex seismic signatures. We present a method for mapping these systems using three-dimensional seismic reflection data by combining applications of machine learning and modern data science. Supervised neural networks (NN) have been successfully implemented in seismic reflection studies to produce multi-attributes (or meta-attributes) for delineating faults, chimneys, salt domes, and slumps. Using a seismic reflection dataset from southeast Florida, we develop an objective multi-attribute workflow for mapping karst in which potential interpreter bias is minimized by applying linear and non-linear data transformations for dimensionality reduction. This statistical approach yields a reduced set of input seismic attributes to the NN by eliminating irrelevant and overly correlated variables, while still preserving the vast majority of the observed data variance. By initiating the supervised NN from an eigenspace that maximizes the separation between classes, the convergence time and accuracy of the computations are improved since the NN only needs to recognize small perturbations to the provided decision boundaries. We contend that this 3D seismic reflection, data-driven method for defining the spatial bounds of karst pore systems provides great value as a standardized preliminary step for hydrological characterization and modeling in these complex geological environments.
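The dimensionality-reduction step that precedes the neural network can be sketched with a plain principal component analysis over a standardized attribute table. The 6-attribute synthetic data below (where half the columns are noisy copies of the others, i.e. "overly correlated variables") is hypothetical; the paper's workflow also uses non-linear transformations that this sketch omits.

```python
import numpy as np

def pca_reduce(X, var_kept=0.95):
    """Linear dimensionality reduction: standardize the attributes,
    then project onto the leading principal components that retain
    `var_kept` of the observed variance."""
    Xc = (X - X.mean(0)) / X.std(0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    ratio = s**2 / np.sum(s**2)
    k = int(np.searchsorted(np.cumsum(ratio), var_kept)) + 1
    return Xc @ Vt[:k].T, ratio[:k]

# Hypothetical attribute table: 500 traces x 6 seismic attributes,
# where attributes 3-5 are noisy copies of attributes 0-2.
rng = np.random.default_rng(1)
base = rng.normal(size=(500, 3))
X = np.hstack([base, base + 0.05 * rng.normal(size=(500, 3))])
Z, kept = pca_reduce(X)
```

Feeding the reduced scores Z rather than the raw attributes to the supervised network is what the abstract describes: redundant inputs are eliminated while nearly all of the data variance is preserved.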
DEFF Research Database (Denmark)
Manoli, Gabriele; Chambon, Julie C.; Bjerg, Poul L.
2012-01-01
A numerical model of metabolic reductive dechlorination is used to describe the performance of enhanced bioremediation in fractured clay till. The model is developed to simulate field observations of a full scale bioremediation scheme in a fractured clay till and thereby to assess remediation...
The phase field technique for modeling multiphase materials
Singer-Loginova, I.; Singer, H. M.
2008-10-01
This paper reviews methods and applications of the phase field technique, one of the fastest growing areas in computational materials science. The phase field method is used as a theory and computational tool for predictions of the evolution of arbitrarily shaped morphologies and complex microstructures in materials. In this method, the interface between two phases (e.g. solid and liquid) is treated as a region of finite width having a gradual variation of different physical quantities, i.e. it is a diffuse interface model. An auxiliary variable, the phase field or order parameter φ(x), is introduced, which distinguishes one phase from the other. Interfaces are identified by the variation of the phase field. We begin by presenting the physical background of the phase field method and give a detailed thermodynamical derivation of the phase field equations. We demonstrate how equilibrium and non-equilibrium physical phenomena at the phase interface are incorporated into the phase field methods. Then we address in detail dendritic and directional solidification of pure and multicomponent alloys, effects of natural convection and forced flow, grain growth, nucleation, solid-solid phase transformation and highlight other applications of the phase field methods. In particular, we review the novel phase field crystal model, which combines atomistic length scales with diffusive time scales. We also discuss aspects of quantitative phase field modeling such as thin interface asymptotic analysis and coupling to thermodynamic databases. The phase field methods result in a set of partial differential equations, whose solutions require time-consuming large-scale computations and often limit the applicability of the method. Subsequently, we review numerical approaches to solve the phase field equations and present a finite difference discretization of the anisotropic Laplacian operator.
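The diffuse-interface idea can be made concrete with the simplest phase field equation. The sketch below takes one explicit finite-difference step of the 1-D Allen-Cahn equation, phi_t = eps^2 * phi_xx + phi - phi^3, whose equilibrium interface is the tanh profile of width ~ eps; the grid, eps, and time step are illustrative choices, not from the review.

```python
import numpy as np

def allen_cahn_step(phi, dx, dt, eps=0.05):
    """One explicit step of the 1-D Allen-Cahn phase field equation
    phi_t = eps^2 * phi_xx + phi - phi^3, a diffuse interface between
    the bulk phases phi = -1 and phi = +1 (periodic boundaries)."""
    lap = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2
    return phi + dt * (eps**2 * lap + phi - phi**3)

# Relax a sharp jump toward the equilibrium diffuse profile
# phi(x) ~ tanh(x / (sqrt(2) * eps)).
n, L = 400, 2.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / (2 * 0.05**2)   # safely below the explicit stability limit
phi = np.where(x < 0, -1.0, 1.0)
for _ in range(2000):
    phi = allen_cahn_step(phi, dx, dt)
```

The order parameter stays pinned at ±1 in the bulk while the initially sharp jump spreads into a finite-width transition region; this is precisely the "gradual variation across a region of finite width" the review describes.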
Yi, Guilian; Sui, Yunkang; Du, Jiazheng
2011-06-01
To reduce vibration and noise, a damping layer and constraint layer are usually pasted on the inner surface of a gearbox thin shell, and their thicknesses are the main parameters in the vibration and noise reduction design. The normal acceleration of a point on the gearbox surface is the main index reflecting the vibration and noise at that point, and the normal accelerations of different points reflect the degree of vibration and noise of the whole structure. The K-S function is adopted to aggregate the normal accelerations of many points into a comprehensive index of the vibration characteristics of the whole structure, and the vibration acceleration level is adopted to measure the degree of vibration and noise. Secondary development of the Abaqus pre- and postprocessing, based on Python scripting, automatically modifies the model parameters, submits the job, and restarts the analysis, which avoids the tedious work of returning to Abaqus/CAE to modify and resubmit, and improves the speed of pre- and postprocessing and the computational efficiency.
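The K-S (Kreisselmeier-Steinhauser) aggregation mentioned above is a smooth, conservative envelope of the maximum response. The sketch below uses the standard overflow-safe form; the acceleration values and the aggregation parameter rho are hypothetical.

```python
import numpy as np

def ks_aggregate(g, rho=50.0):
    """Kreisselmeier-Steinhauser envelope of many responses: a smooth,
    differentiable surrogate for max(g), computed in the overflow-safe
    form KS = max(g) + (1/rho) * log(sum(exp(rho * (g - max(g)))))."""
    g = np.asarray(g, dtype=float)
    m = g.max()
    return m + np.log(np.sum(np.exp(rho * (g - m)))) / rho

# Hypothetical normal-acceleration levels (dB) at monitored points
acc = np.array([92.0, 95.5, 94.1, 90.3])
ks = ks_aggregate(acc)
```

KS always lies between max(g) and max(g) + ln(n)/rho, so increasing rho tightens the envelope toward the true maximum; the smoothness (unlike max itself) is what makes the index usable in gradient-based thickness optimization.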
Directory of Open Access Journals (Sweden)
Sadik Kamel Gharghan
2016-01-01
Full Text Available In most wireless sensor network (WSN applications, the sensor nodes (SNs are battery powered and the amount of energy consumed by the nodes in the network determines the network lifespan. For future Internet of Things (IoT applications, reducing energy consumption of SNs has become mandatory. In this paper, an ultra-low-power nRF24L01 wireless protocol is considered for a bicycle WSN. The power consumption of the mobile node on the cycle track was modified by combining adjustable data rate, sleep/wake, and transmission power control (TPC based on two algorithms. The first algorithm was a TPC-based distance estimation, which adopted a novel hybrid particle swarm optimization-artificial neural network (PSO-ANN using the received signal strength indicator (RSSI, while the second algorithm was a novel TPC-based accelerometer using inclination angle of the bicycle on the cycle track. Based on the second algorithm, the power consumption of the mobile and master nodes can be improved compared with the first algorithm and constant transmitted power level. In addition, an analytical model is derived to correlate the power consumption and data rate of the mobile node. The results indicate that the power savings based on the two algorithms outperformed the conventional operation (i.e., without power reduction algorithm by 78%.
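The RSSI-based part of the first algorithm rests on the log-distance path-loss model, which can be inverted for distance and then used to pick the lowest sufficient transmit power. The sketch below uses the nRF24L01's four output power levels (-18, -12, -6, 0 dBm); the path-loss exponent, reference RSSI, receiver sensitivity, and fade margin are hypothetical, and the paper's actual estimator is a PSO-ANN hybrid rather than this closed-form inversion.

```python
import math

def distance_from_rssi(rssi_dbm, rssi_d0, d0=1.0, n=2.7):
    """Log-distance path-loss inversion:
    d = d0 * 10^((RSSI(d0) - RSSI) / (10 * n)),
    where n is the environment-dependent path-loss exponent."""
    return d0 * 10 ** ((rssi_d0 - rssi_dbm) / (10 * n))

def select_tx_power(d, levels_dbm=(-18, -12, -6, 0), sensitivity=-85.0,
                    rssi_d0_at_0dbm=-40.0, d0=1.0, n=2.7, margin=5.0):
    """TPC rule: lowest transmit power whose predicted RSSI at distance
    d still clears the receiver sensitivity plus a fade margin."""
    path_loss = 10 * n * math.log10(max(d, d0) / d0)
    for p in sorted(levels_dbm):
        rssi = p + rssi_d0_at_0dbm - path_loss
        if rssi >= sensitivity + margin:
            return p
    return max(levels_dbm)

d = distance_from_rssi(-67.0, -40.0)   # estimated link distance in meters
p = select_tx_power(d)                 # lowest adequate nRF24L01 level
```

Stepping the radio down to the lowest adequate level is where the energy saving comes from: transmit current on short links drops without sacrificing the link budget.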
Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.
Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an
2009-05-01
In experimental musculoskeletal oncology, there remains a need for animal models that can be used to assess the efficacy of new and innovative treatment methodologies for bone tumors. Rats play a very important role in bone research, especially in the evaluation of metabolic bone diseases. The objective of this study was to develop a rat osteosarcoma model for evaluation of new surgical and molecular methods of treatment for extremity sarcoma. One hundred male SD rats weighing 125.45 ± 8.19 g were divided into 5 groups and anesthetized intraperitoneally with 10% chloral hydrate. Orthotopic implantation models of rat osteosarcoma were created by injecting directly into the SD rat femur with a needle for inoculation with SD tumor cells. In the first step of the experiment, 2×10^5 to 1×10^6 UMR106 cells in 50 μl were injected intraosseously into the median or distal part of the femoral shaft and the tumor take rate was determined. The second stage consisted of determining tumor volume, correlating findings from ultrasound with findings from necropsy and determining time of survival. In the third stage, the orthotopically implanted tumors and lung nodules were resected entirely, sectioned, and then counterstained with hematoxylin and eosin for histopathologic evaluation. The tumor take rate was 100% for implants with 8×10^5 tumor cells or more, which was much less than the amount required for subcutaneous implantation, with a high lung metastasis rate of 93.0%. Ultrasound and necropsy findings matched closely (r=0.942; p<0.01), which demonstrated that Doppler ultrasonography is a convenient and reliable technique for measuring cancer at any stage. The tumor growth curve showed that orthotopically implanted tumors expanded vigorously with time-lapse, especially in the first 3 weeks. The median time of survival was 38 days and surgical mortality was 0%. The UMR106 cell line has strong carcinogenic capability and high lung metastasis frequency. The present rat
DEFF Research Database (Denmark)
Larsson, Caroline; Vitger, Anne; Jensen, Rasmus Bovbjerg
2014-01-01
was to validate and evaluate the suitability of the oral 13C-bicarbonate technique (o13CBT) for measuring EE in dog obesity studies. A further objective was to investigate the impact of body weight (BW) reduction and changes in body composition on the EE when measured under conditions corresponding to the basal...
Directory of Open Access Journals (Sweden)
Mohamed Sayed Abdelhafez
2018-02-01
Conclusion: Early transvaginal reduction of triplets to twins leads to improved obstetric outcomes, as it decreases prematurity and its related neonatal morbidities and mortality without an increase in the miscarriage rate. Early fetal reduction seems to be better than continuation of triplet pregnancies with prophylactic placement of cervical cerclage.
Directory of Open Access Journals (Sweden)
Missa Takasaka
2016-06-01
Full Text Available ABSTRACT OBJECTIVE: To evaluate, compare and identify the surgical technique with the best results for treating intra-articular calcaneal fractures, taking into account postoperative outcomes, complications and scoring in the AOFAS questionnaire. METHODS: This was a retrospective study on 54 patients with fractures of the calcaneus who underwent surgery between 2002 and 2012 by means of the following techniques: (1) open reduction with extended L-shaped lateral incision and fixation with a double-H plate of 3.5 mm; (2) open reduction with a minimal-incision lateral approach and percutaneous fixation with wires and screws; and (3) open reduction with a minimal-incision lateral approach and fixation with an adjustable monoplanar external fixator. RESULTS: Patients treated using a lateral approach with plate fixation had a mean AOFAS score of 76 points; those treated through a minimal-incision lateral approach with screw and wire fixation had a mean score of 71 points; and those treated through a minimal-incision lateral approach with an external fixator had a mean score of 75 points. The three surgical techniques were shown to be effective for treating intra-articular calcaneal fractures, with no evidence that any one technique was superior. CONCLUSION: Intra-articular calcaneal fractures are complex and their treatment should be individualized based on patient characteristics, the type of fracture and the surgeon's experience with the surgical technique chosen.
Wentworth, Mami Tonoe
Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification
Müller, Verena; Fikatas, Panagiotis; Gül, Safak; Noesser, Maximilian; Fuehrer, Kirsten; Sauer, Igor; Pratschke, Johann; Zorron, Ricardo
2017-01-01
Bariatric surgery is currently the most effective method to ameliorate co-morbidities in morbidly obese patients with BMI over 35 kg/m2. Endoscopic techniques have been developed to treat patients with mild obesity and ameliorate comorbidities, but endoscopic skills are needed, besides the cost of the devices. To report a new technique for internal gastric plication using an intragastric single port device in an experimental swine model. Twenty experiments using fresh pig cadaver stomachs in a laparoscopic trainer were performed. In ten pigs the procedure was performed as follows: 1) volume measurement; 2) insufflation of the stomach with CO2; 3) extroversion of the stomach through the simulator and installation of the single port device (Gelpoint Applied Mini) through a gastrotomy close to the pylorus; 4) performance of four intragastric handsewn 4-point sutures with Prolene 2-0, from the gastric fundus to the antrum; 5) measurement of the residual volume. Sleeve gastrectomy was also performed in a further ten pigs, and pre- and post-procedure gastric volumes were measured. The internal gastric plication technique was performed successfully in all ten swine experiments. The mean procedure time was 27±4 min. It produced a mean gastric volume reduction of 51%, versus a mean of 90% for sleeve gastrectomy in this swine model. The internal gastric plication technique using an intragastric single port device required few skills to perform, had a low operative time, and achieved a good reduction (51%) of gastric volume in this in vitro experimental model.
Directory of Open Access Journals (Sweden)
Ina Koch
2017-06-01
Motivation: Arabidopsis thaliana is a well-established model system for the analysis of the basic physiological and metabolic pathways of plants. Nevertheless, the system is not yet fully understood, although many mechanisms have been described and information exists for many processes. However, combining and interpreting the large amount of biological data remains a big challenge, not only because data sets for metabolic pathways are still incomplete. Moreover, they are often inconsistent, because they come from different experiments at various scales, differing, for example, in accuracy and/or significance. Here, theoretical modeling is a powerful way to formulate hypotheses about pathways and the dynamics of the metabolism, even when the biological data are incomplete. To be reliable, mathematical models must be verified for consistency. This is still a challenging task, because many verification techniques already fail for medium-sized models. Consequently, new methods, such as decomposition and reduction approaches, have been developed to circumvent this problem. Methods: We present a new semi-quantitative mathematical model of the metabolism of Arabidopsis thaliana. We used the Petri net formalism to express the complex reaction system in a mathematically unambiguous manner. To verify the model for correctness and consistency, we applied concepts of network decomposition and network reduction, such as transition invariants, common transition pairs, and invariant transition pairs. Results: We formulated the core metabolism of Arabidopsis thaliana based on recent knowledge from the literature, including the Calvin cycle, glycolysis and citric acid cycle, glyoxylate cycle, urea cycle, sucrose synthesis, and starch metabolism. By applying network decomposition and reduction techniques at steady-state conditions, we suggest a straightforward mathematical modeling process. We demonstrate that potential steady-state pathways exist, which provide the
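A transition invariant of the kind used above for consistency checking is a firing-count vector in the null space of the Petri net's incidence matrix: firing the transitions with those multiplicities reproduces the original marking. The toy sketch below (a three-reaction closed cycle, a stand-in for a pathway fragment, not the actual Arabidopsis net) recovers the null space numerically via SVD; note that proper T-invariants must additionally be nonnegative and integral, which this numerical sketch does not enforce.

```python
import numpy as np

# Toy Petri net: three metabolites (places) in a closed cycle of three
# reactions (transitions).  C[p, t] = tokens produced minus consumed.
C = np.array([
    [-1,  0,  1],   # place A: consumed by t0, produced by t2
    [ 1, -1,  0],   # place B: produced by t0, consumed by t1
    [ 0,  1, -1],   # place C: produced by t1, consumed by t2
])

def null_space_basis(C, tol=1e-10):
    """Basis of the null space of the incidence matrix: firing-count
    vectors x with C @ x = 0, i.e. firing sequences that reproduce
    the marking (candidates for transition invariants)."""
    _, s, vt = np.linalg.svd(C)
    rank = int(np.sum(s > tol))
    return vt[rank:]        # each remaining row spans the null space

inv = null_space_basis(C)
```

For this cycle the single invariant is proportional to (1, 1, 1): firing each reaction once returns every metabolite count to its starting value, which is the steady-state behavior the verification step looks for.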
Modeling of Chemical Reactions in Afterburning for the Reduction of N2O
DEFF Research Database (Denmark)
Gustavsson, Lennart; Glarborg, Peter; Leckner, Bo
1996-01-01
Full scale tests in a 12 MW fluidized bed combustor on reduction of N2O by secondary fuel injection are analyzed in terms of a model that involves a detailed reaction mechanism for the gas phase chemistry as well as a description of gas-solid reactions.
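At its very simplest, the N2O destruction chemistry in such a model can be caricatured as a single effective first-order decay, d[N2O]/dt = -k_eff·[N2O], integrated over the residence time in the afterburning zone. The sketch below does this with explicit Euler; the rate constant, inlet concentration, and residence time are made-up placeholders, not values from the paper's detailed mechanism.

```python
import math

# Illustrative lumped model only: pseudo-first-order N2O destruction.
k_eff = 2.0            # 1/s, hypothetical effective rate constant
c0 = 100.0             # ppm N2O entering the afterburning zone
dt, t_end = 1e-4, 1.0  # s, Euler step and residence time

c = c0
for _ in range(int(t_end / dt)):   # explicit Euler integration
    c += -k_eff * c * dt

reduction = 1.0 - c / c0           # fractional N2O reduction achieved
# For k_eff * t_end = 2, this approaches 1 - exp(-2), about 86%.
```

A detailed mechanism replaces the single k_eff with a stiff system of elementary gas-phase and gas-solid reactions, but the same residence-time integration idea applies.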
Geometric subspace updates with applications to online adaptive nonlinear model reduction
DEFF Research Database (Denmark)
Zimmermann, Ralf; Peherstorfer, Benjamin; Willcox, Karen
2017-01-01
In many scientific applications, including model reduction and image processing, subspaces are used as ansatz spaces for the low-dimensional approximation and reconstruction of the state vectors of interest. We introduce a procedure for adapting an existing subspace based on information from...... Estimation (GROUSE). We establish for GROUSE a closed-form expression for the residual function along the geodesic descent direction. Specific applications of subspace adaptation are discussed in the context of image processing and model reduction of nonlinear partial differential equation systems....
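A single GROUSE-style step of the kind referenced above moves an orthonormal basis along a geodesic of the Grassmann manifold toward a newly observed state vector. The sketch below is a simplified, fully-observed variant (no missing entries, fixed step size eta), with illustrative random data; the rank-one structure of the update keeps the cost linear in the ambient dimension and, notably, preserves orthonormality exactly.

```python
import numpy as np

def grouse_step(U, v, eta):
    """One rank-one geodesic update of an orthonormal basis U toward
    the observation v (simplified GROUSE-style sketch, full data)."""
    w = U.T @ v                     # coefficients of v in current basis
    p = U @ w                       # projection of v onto the subspace
    r = v - p                       # residual, orthogonal to col(U)
    sigma = np.linalg.norm(r) * np.linalg.norm(p)
    if sigma < 1e-12:
        return U                    # v already lies in the subspace
    theta = eta * sigma             # geodesic step length
    step = ((np.cos(theta) - 1.0) * p / np.linalg.norm(p)
            + np.sin(theta) * r / np.linalg.norm(r))
    return U + np.outer(step, w / np.linalg.norm(w))

rng = np.random.default_rng(0)
U = np.linalg.qr(rng.standard_normal((6, 2)))[0]   # orthonormal basis
v = rng.standard_normal(6)
U_new = grouse_step(U, v, eta=0.1)                 # still orthonormal
```

Because the update rotates only the single basis direction aligned with the projection of v, repeated steps adapt the ansatz space online, which is what makes the approach attractive for adaptive nonlinear model reduction.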
Apivatthakakul, Theerachai; Phornphutkul, C; Bunmaprasert, T; Sananpanich, K; Fernandez Dell'Oca, Alberto
2012-06-01
Periprosthetic femoral fractures (PPFs) at or near a well-fixed femoral prosthesis (Vancouver type-B1) present a clinical challenge due to the poor quality of the bone stock and the instability of the fracture. The purpose of this study was to present a novel reduction technique and to analyze the clinical and radiographic outcomes of patients with Vancouver type-B1 fractures treated with percutaneous cerclage wiring for fracture reduction and maintenance of reduction, combined with minimally invasive plate osteosynthesis (MIPO) utilizing a locking compression plate (LCP). Between March 2007 and December 2008, ten consecutive patients with spiral, oblique or wedge Vancouver type-B1 fractures were treated with closed percutaneous cerclage wiring using a new cerclage passer instrument (Synthes) through small 2-3 cm incisions for reduction and maintenance of reduction. Internal fixation with MIPO was obtained utilizing a long LCP (Synthes) bridging the fracture. The reduction time, fixation time and operative time were recorded. The rehabilitation protocol consisted of partial weight bearing as tolerated. Clinical and radiographic outcomes, including evidence of union, return to pre-injury mobility, and surgical complications, were recorded. There were three men and seven women with an average age of 74 years (range 47-84 years) at the time the fracture occurred. The average follow-up was 13.2 months. One patient died 2 months after surgery due to cardiovascular problems and was excluded. The average reduction time with percutaneous cerclage wiring was 24.4 min (range 7-45 min). The average fixation time was 79 min (range 53-100 min). The average operative time was 103 min (range 75-140 min). Blood loss was minimal, and only two patients needed a blood transfusion. All fractures healed, with a mean time to union of 18 weeks (range 16-20 weeks). One implant bent 10° in the post-operative period, but the fracture went on to heal uneventfully within 16 weeks. There was no evidence of loosening of any