WorldWideScience

Sample records for modeling techniques results

  1. Numerical modelling of radon-222 entry into houses: An outline of techniques and results

    DEFF Research Database (Denmark)

    Andersen, C.E.

    2001-01-01

    Numerical modelling is a powerful tool for studies of soil gas and radon-222 entry into houses. It is the purpose of this paper to review some main techniques and results. In the past, modelling has focused on Darcy flow of soil gas (driven by indoor–outdoor pressure differences) and combined diffusive and advective transport of radon. Models of different complexity have been used. The simpler ones are finite-difference models with one or two spatial dimensions. The more complex models allow for full three-dimensional geometry and time dependency. Advanced features include: soil heterogeneity, anisotropy, fractures, moisture, non-uniform soil temperature, non-Darcy flow of gas, and flow caused by changes in the atmospheric pressure. Numerical models can be used to estimate the importance of specific factors for radon entry. Models are also helpful when results obtained in special laboratory or test structure…
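
A minimal sketch of the simpler one-dimensional finite-difference models the abstract describes, here for purely diffusive radon transport in a soil column with radioactive decay and a uniform generation term. All parameter values (D, lam, G, grid size) are illustrative assumptions, not values from the paper.

```python
def radon_profile(n=50, dx=0.02, D=1e-6, lam=2.1e-6, G=1e-2,
                  dt=50.0, steps=20000):
    """Explicit finite-difference stepping of
    dC/dt = D*d2C/dx2 - lam*C + G on a 1-D soil column:
    open surface at x = 0 (C = 0), zero-flux condition at depth.
    Stability requires dt <= dx**2 / (2*D)."""
    C = [0.0] * n
    for _ in range(steps):
        new = C[:]
        for i in range(1, n - 1):
            lap = (C[i - 1] - 2.0 * C[i] + C[i + 1]) / dx ** 2
            new[i] = C[i] + dt * (D * lap - lam * C[i] + G)
        new[0] = 0.0       # radon escapes freely to the atmosphere
        new[-1] = new[-2]  # no flux through the bottom boundary
        C = new
    return C

profile = radon_profile()
```

The resulting concentration is zero at the open surface and rises with depth toward the generation/decay balance G/lam, the qualitative behaviour such models are used to explore.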

  2. Determining Plutonium Mass in Spent Fuel with Nondestructive Assay Techniques -- Preliminary Modeling Results Emphasizing Integration among Techniques

    International Nuclear Information System (INIS)

    Tobin, S.J.; Fensin, M.L.; Ludewigt, B.A.; Menlove, H.O.; Quiter, B.J.; Sandoval, N.P.; Swinhoe, M.T.; Thompson, S.J.

    2009-01-01

    There are a variety of motivations for quantifying Pu in spent (used) fuel assemblies by means of nondestructive assay (NDA), including the following: strengthening the capability of the International Atomic Energy Agency to safeguard nuclear facilities, quantifying shipper/receiver differences, determining the input accountability value at reprocessing facilities, and providing quantitative input to burnup credit determination for repositories. For the purpose of determining the Pu mass in spent fuel assemblies, twelve NDA techniques were identified that provide information about the composition of an assembly. A key point motivating the present research path is the realization that none of these techniques, in isolation, is capable of both (1) quantifying the elemental Pu mass of an assembly and (2) detecting the diversion of a significant number of pins. As such, the focus of this work is determining how best to integrate two or three techniques into a system that can quantify elemental Pu, and assessing how well such a system can detect material diversion. Furthermore, it is important economically to down-select among the various techniques before advancing to the experimental phase. In order to achieve this dual goal of integration and down-selection, a Monte Carlo library of PWR assemblies was created; it is described in another paper at Global 2009 (Fensin et al.). The research presented here emphasizes integration among techniques. An overview of a five-year research plan starting in 2009 is given. Preliminary modeling results for the Monte Carlo assembly library are presented for three NDA techniques: Delayed Neutrons, Differential Die-Away, and Nuclear Resonance Fluorescence. As part of the focus on integration, the concept of 'Pu isotopic correlation' and the role of cooling-time determination are discussed.

  3. Evaluation of data assimilation techniques for a mesoscale meteorological model and their effects on air quality model results

    Science.gov (United States)

    Amicarelli, A.; Gariazzo, C.; Finardi, S.; Pelliccioni, A.; Silibello, C.

    2008-05-01

    Data assimilation techniques are methods to limit the growth of errors in a dynamical model by allowing observations distributed in space and time to force (nudge) the model solution. They have become common in meteorological model applications in recent years, especially to enhance weather forecasts and to support air-quality studies. In order to investigate the influence of different data assimilation techniques on the meteorological fields produced by the RAMS model, and to evaluate their effects on the ozone and PM10 concentrations predicted by the FARM model, several numerical experiments were conducted over the urban area of Rome, Italy, during a summer episode.
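
The "nudging" idea described above can be sketched in a few lines: an extra relaxation term pulls the model state toward an observation while the model's own dynamics still act. The toy scalar model, coefficients, and observed value below are assumptions for illustration only, not RAMS internals.

```python
def integrate(x0, forcing, dt, steps, obs=None, g=0.0):
    """Forward-Euler integration of dx/dt = forcing(x) + g*(obs - x);
    the last term is the Newtonian-relaxation (nudging) term."""
    x = x0
    for _ in range(steps):
        tendency = forcing(x)
        if obs is not None:
            tendency += g * (obs - x)  # relax the state toward the observation
        x += dt * tendency
    return x

# Toy model: a decaying scalar.  The free run stays at zero; the nudged run
# is pulled toward an observed value of 10 (steady state 5/0.6 for g = 0.5).
free = integrate(0.0, lambda x: -0.1 * x, dt=0.1, steps=1000)
nudged = integrate(0.0, lambda x: -0.1 * x, dt=0.1, steps=1000, obs=10.0, g=0.5)
```

The nudged solution settles where model tendency and relaxation balance, which is the essential trade-off every nudging coefficient choice controls.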

  4. Evaluation of data assimilation techniques for a mesoscale meteorological model and their effects on air quality model results

    Energy Technology Data Exchange (ETDEWEB)

    Amicarelli, A; Pelliccioni, A [ISPESL - Dipartimento Insediamenti Produttivi e Interazione con l'Ambiente, Via Fontana Candida 1, 00040 Monteporzio Catone (RM) (Italy)]; Finardi, S; Silibello, C [ARIANET, via Gilino 9, 20128 Milano (Italy)]; Gariazzo, C

    2008-05-01

    Data assimilation techniques are methods to limit the growth of errors in a dynamical model by allowing observations distributed in space and time to force (nudge) the model solution. They have become common in meteorological model applications in recent years, especially to enhance weather forecasts and to support air-quality studies. In order to investigate the influence of different data assimilation techniques on the meteorological fields produced by the RAMS model, and to evaluate their effects on the ozone and PM10 concentrations predicted by the FARM model, several numerical experiments were conducted over the urban area of Rome, Italy, during a summer episode.

  5. Evaluation of data assimilation techniques for a mesoscale meteorological model and their effects on air quality model results

    International Nuclear Information System (INIS)

    Amicarelli, A; Pelliccioni, A; Finardi, S; Silibello, C; Gariazzo, C

    2008-01-01

    Data assimilation techniques are methods to limit the growth of errors in a dynamical model by allowing observations distributed in space and time to force (nudge) the model solution. They have become common in meteorological model applications in recent years, especially to enhance weather forecasts and to support air-quality studies. In order to investigate the influence of different data assimilation techniques on the meteorological fields produced by the RAMS model, and to evaluate their effects on the ozone and PM10 concentrations predicted by the FARM model, several numerical experiments were conducted over the urban area of Rome, Italy, during a summer episode.

  6. VLBI FOR GRAVITY PROBE B. IV. A NEW ASTROMETRIC ANALYSIS TECHNIQUE AND A COMPARISON WITH RESULTS FROM OTHER TECHNIQUES

    International Nuclear Information System (INIS)

    Lebach, D. E.; Ratner, M. I.; Shapiro, I. I.; Bartel, N.; Bietenholz, M. F.; Lederman, J. I.; Ransom, R. R.; Campbell, R. M.; Gordon, D.; Lestrade, J.-F.

    2012-01-01

    When very long baseline interferometry (VLBI) observations are used to determine the position or motion of a radio source relative to reference sources nearby on the sky, the astrometric information is usually obtained via (1) phase-referenced maps or (2) parametric model fits to measured fringe phases or multiband delays. In this paper, we describe a 'merged' analysis technique which combines some of the most important advantages of these other two approaches. In particular, our merged technique combines the superior model-correction capabilities of parametric model fits with the ability of phase-referenced maps to yield astrometric measurements of sources that are too weak to be used in parametric model fits. We compare the results from this merged technique with the results from phase-referenced maps and from parametric model fits in the analysis of astrometric VLBI observations of the radio-bright star IM Pegasi (HR 8703) and the radio source B2252+172 nearby on the sky. In these studies we use central-core components of radio sources 3C 454.3 and B2250+194 as our positional references. We obtain astrometric results for IM Peg with our merged technique even when the source is too weak to be used in parametric model fits, and we find that our merged technique yields astrometric results superior to the phase-referenced mapping technique. We used our merged technique to estimate the proper motion and other astrometric parameters of IM Peg in support of the NASA/Stanford Gravity Probe B mission.

  7. Numerical models: Detailing and simulation techniques aimed at comparison with experimental data, support to test result interpretation

    International Nuclear Information System (INIS)

    Lin Chiwen

    2001-01-01

    This part of the presentation discusses the modelling details required and the simulation techniques available for analyses, facilitating comparison with experimental data and providing support for interpretation of the test results. It covers the following topics: analysis inputs; basic modelling requirements for the reactor coolant system; methods applicable to the reactor coolant system; consideration of damping values and integration time steps; typical analytic models used for analysis of the reactor pressure vessel and internals; hydrodynamic mass and fluid damping for the internals analysis; impact elements for fuel analysis; and the PEI theorem and its applications. The intention of these topics is to identify the key parameters associated with the analysis models and analytical methods. This should provide a proper basis for useful comparison with the test results.

  8. Automated Techniques for the Qualitative Analysis of Ecological Models: Continuous Models

    Directory of Open Access Journals (Sweden)

    Lynn van Coller

    1997-06-01

    The mathematics required for a detailed analysis of the behavior of a model can be formidable. In this paper, I demonstrate how various computer packages can aid qualitative analyses by implementing techniques from dynamical systems theory. Because computer software is used to obtain the results, the techniques can be used by nonmathematicians as well as mathematicians. In-depth analyses of complicated models that were previously very difficult to study can now be done. Because the paper is intended as an introduction to applying the techniques to ecological models, I have included an appendix describing some of the ideas and terminology. A second appendix shows how the techniques can be applied to a fairly simple predator-prey model and establishes the reliability of the computer software. The main body of the paper discusses a ratio-dependent model. The new techniques highlight some limitations of isocline analyses in this three-dimensional setting and show that the model is structurally unstable. Another appendix describes a larger model of a sheep-pasture-hyrax-lynx system. Dynamical systems techniques are compared with a traditional sensitivity analysis and are found to give more information. As a result, an incomplete relationship in the model is highlighted. I also discuss the resilience of these models to both parameter and population perturbations.

  9. Verification of Orthogrid Finite Element Modeling Techniques

    Science.gov (United States)

    Steeve, B. E.

    1996-01-01

    The stress analysis of orthogrid structures, specifically those with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process while still adequately capturing the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam model, a shell model, and a mixed beam-and-shell element model. Results show that the shell element model performs best, but that the simpler beam and mixed beam-and-shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.

  10. Techniques for sensitivity analysis of SYVAC results

    International Nuclear Information System (INIS)

    Prust, J.O.

    1985-05-01

    Sensitivity analysis techniques may be required to examine the sensitivity of SYVAC model predictions to the input parameter values, to the subjective probability distributions assigned to the input parameters, and to the relationship between dose and the probability of fatal cancers plus serious hereditary disease in the first two generations of offspring of a member of the critical group. This report mainly considers techniques for determining the sensitivity of dose and risk to the variable input parameters. The performance of a sensitivity analysis technique may be improved by decomposing the model and data into subsets for analysis, making use of existing information on sensitivity, and concentrating sampling in regions of the parameter space that generate high doses or risks. A number of sensitivity analysis techniques are reviewed for their application to the SYVAC model, including four techniques tested in an earlier study by CAP Scientific for the SYVAC project. This report recommends developing now a method for evaluating the derivative of dose with respect to parameter value, and extending the Kruskal-Wallis technique to test for interactions between parameters. It is also recommended that the sensitivity of the output of each sub-model of SYVAC to input parameter values be examined. (author)
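
The derivative-based measure the report recommends can be sketched as a central finite difference of the model output with respect to one named input parameter. The dose model below is an invented stand-in for SYVAC, used only to show the mechanics.

```python
def central_sensitivity(model, params, name, rel_step=1e-4):
    """Central-difference estimate of d(output)/d(parameter) for one
    named input parameter, holding the others fixed."""
    h = abs(params[name]) * rel_step or rel_step
    up = dict(params)
    up[name] = params[name] + h
    down = dict(params)
    down[name] = params[name] - h
    return (model(up) - model(down)) / (2 * h)

# Hypothetical stand-in dose model: dose rises with flow q, falls with sorption k.
def toy_dose(p):
    return p["q"] / (1.0 + p["k"]) ** 2

nominal = {"q": 2.0, "k": 3.0}
dq = central_sensitivity(toy_dose, nominal, "q")  # analytically 1/16
dk = central_sensitivity(toy_dose, nominal, "k")  # analytically -1/16
```

The signs and magnitudes of such derivatives at a nominal parameter set are exactly the local sensitivity information the report proposes to compute for each SYVAC sub-model.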

  11. Advanced Atmospheric Ensemble Modeling Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Chiswell, S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kurzeja, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Maze, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Viner, B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Werth, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-09-29

    Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research extends work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two release cases were studied: a coastal release (SF6) and an inland release (Freon), which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce the computing resources required for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release, where the spatial and temporal differences due to interior valley heating lead to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement in the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data are assimilated into the simulation. It enhances SRNL's capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.
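
The EnKF analysis step mentioned above can be sketched for a single directly observed scalar state: each ensemble member is shifted toward a perturbed observation by the Kalman gain computed from the ensemble spread. Ensemble size, prior spread, and the observation below are illustrative assumptions, not values from the study.

```python
import random

def enkf_update(ensemble, obs, obs_var, rng):
    """Stochastic EnKF analysis step for one observed scalar: shift each
    member toward a perturbed observation using the Kalman gain
    K = P / (P + R), with P the ensemble variance and R the obs-error variance."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    P = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    K = P / (P + obs_var)
    return [x + K * (obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

rng = random.Random(42)
prior = [rng.gauss(5.0, 2.0) for _ in range(200)]  # forecast ensemble
posterior = enkf_update(prior, obs=8.0, obs_var=0.25, rng=rng)
```

After the update, the ensemble mean moves toward the observation and the spread shrinks, which is how assimilated local data sharpens the transport forecast.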

  12. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program, which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through changes in equipment, and the model can be easily applied to both manufacturing and service industries.
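
The selection problem described above is, in miniature, a 0/1 integer program: pick the subset of techniques maximizing a linear productivity gain under a resource budget. The sketch below solves a tiny invented instance by brute force; the paper's actual model has fifty-four techniques and is solved with standard mixed-integer methods.

```python
from itertools import combinations

def best_subset(gains, costs, budget):
    """Brute-force 0/1 selection: maximize total gain subject to a budget.
    Feasible only for small instances; a MIP solver handles the real model."""
    n = len(gains)
    best, best_gain = (), 0.0
    for r in range(n + 1):
        for subset in combinations(range(n), r):
            cost = sum(costs[i] for i in subset)
            gain = sum(gains[i] for i in subset)
            if cost <= budget and gain > best_gain:
                best, best_gain = subset, gain
    return best, best_gain

# Invented gains/costs for four candidate improvement techniques.
gains = [4.0, 3.0, 5.0, 1.0]
costs = [3.0, 2.0, 4.0, 1.0]
choice, gain = best_subset(gains, costs, budget=5.0)
```

Here the optimum combines the first two techniques (total gain 7 at cost 5) rather than the single highest-gain technique, which is exactly why combination selection needs an optimization model.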

  13. Numerical model updating technique for structures using firefly algorithm

    Science.gov (United States)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique used to refine numerical models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is updating the numerical model to closely match experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as the optimization tool in this study. In this updating process, a response parameter of the structure has to be chosen which helps to correlate the numerical model with the experimental results obtained. The variables for the updating can be material properties, geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show a close match between the experimental and numerical models.
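
A hedged sketch of the updating loop described above: a one-parameter firefly search tunes a stiffness value until the numerical response (here a cantilever tip deflection, delta = P*L^3/(3*EI)) matches a "measured" value. The swarm settings and the true EI = 2.0 are invented for illustration; they are not from the paper.

```python
import math
import random

def firefly_minimize(obj, lo, hi, n=15, iters=60, beta0=1.0, gamma=1.0,
                     alpha=0.1, seed=1):
    """1-D firefly algorithm: dimmer fireflies move toward brighter ones
    (lower objective) with attractiveness beta0*exp(-gamma*r^2), plus a
    damped random walk."""
    rng = random.Random(seed)
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    for _ in range(iters):
        bright = [-obj(x) for x in xs]
        for i in range(n):
            for j in range(n):
                if bright[j] > bright[i]:
                    beta = beta0 * math.exp(-gamma * (xs[i] - xs[j]) ** 2)
                    xs[i] += beta * (xs[j] - xs[i]) + alpha * (rng.random() - 0.5)
                    xs[i] = min(max(xs[i], lo), hi)
                    bright[i] = -obj(xs[i])
        alpha *= 0.97  # shrink the random walk as the swarm converges
    return min(xs, key=obj)

# Hypothetical "experiment": tip deflection measured with true EI = 2.0;
# the firefly search recovers EI from the measured deflection.
P, L = 10.0, 1.0
target = P * L ** 3 / (3 * 2.0)
best_EI = firefly_minimize(lambda EI: (P * L ** 3 / (3 * EI) - target) ** 2,
                           0.5, 5.0)
```

In a real updating run the objective would compare several measured responses (deflections or natural frequencies) against the finite element model's predictions over many parameters at once.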

  14. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    There are many business process modeling techniques in use today. This article reports research on the differences among them: for each technique, the definition and structure are explained. The paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation, and how the technique works when implemented for Somerleyton Animal Park. Each technique is summarized with its advantages and disadvantages. The final conclusion recommends business process modeling techniques that are easy to use, and serves as a basis for evaluating further modelling techniques.

  15. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear space static data structures… In the group model, we obtain a lower bound of tu·tq = Ω(lg^(d-1) n); for ball range searching, we get a lower bound of tu·tq = Ω(n^(1-1/d)). The highest previous lower bound proved in the group model does not exceed Ω((lg n / lg lg n)^2) on the maximum of tu and tq. Finally, we present a new technique for proving lower bounds…

  16. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather the information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and the 'calibrated model' is then used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: (1) accuracy of the post-retrofit energy savings prediction, (2) closure on the 'true' input parameter values, and (3) goodness of fit to the utility bill data. The paper also discusses the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
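
The surrogate-data idea can be sketched end to end: a simple energy model with known "true" parameters generates synthetic bills, a calibration routine recovers parameters from those bills, and the calibrated model's retrofit-savings prediction is checked against the known truth. The monthly model (use = base load + UA x degree-days), the climate data, and the grid search are all invented stand-ins for a real simulation program and calibration technique.

```python
# Assumed monthly heating degree-days and "true" building parameters.
HDD = [600, 500, 350, 200, 80, 10, 0, 0, 60, 250, 420, 560]
true_base, true_ua = 300.0, 0.9

def bills(base, ua):
    """Toy building model: monthly energy use = base + ua * degree-days."""
    return [base + ua * h for h in HDD]

truth = bills(true_base, true_ua)  # surrogate utility-bill data

def calibrate(target):
    """Grid-search calibration minimizing squared bill error, a stand-in
    for real techniques judged only by 'goodness of fit'."""
    best = None
    for base in range(0, 601, 10):
        for ua10 in range(0, 21):
            ua = ua10 / 10
            err = sum((m - t) ** 2 for m, t in zip(bills(base, ua), target))
            if best is None or err < best[0]:
                best = (err, base, ua)
    return best[1], best[2]

base_c, ua_c = calibrate(truth)
# Hypothetical retrofit: a 20 % envelope improvement reduces ua.
pred = sum(bills(base_c, ua_c)) - sum(bills(base_c, 0.8 * ua_c))
true_savings = sum(truth) - sum(bills(true_base, 0.8 * true_ua))
```

Because the truth is known, all three figures of merit can be scored: savings-prediction accuracy, closure on the true parameters, and bill fit; with real buildings only the last is observable.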

  17. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models (using, for example, regression techniques) for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSMs, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method.

  18. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    Science.gov (United States)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  19. New techniques for subdivision modelling

    OpenAIRE

    BEETS, Koen

    2006-01-01

    In this dissertation, several tools and techniques for modelling with subdivision surfaces are presented. Building on the large body of theoretical knowledge about subdivision surfaces, we present techniques to facilitate practical 3D modelling that make subdivision surfaces even more useful. Subdivision surfaces regained attention several years ago after their application in full-featured 3D animation movies such as Toy Story. Since then, and due to their attractive properties, an ever i…

  20. Variances in the projections, resulting from CLIMEX, Boosted Regression Trees and Random Forests techniques

    Science.gov (United States)

    Shabani, Farzin; Kumar, Lalit; Solhjouy-fard, Samaneh

    2017-08-01

    The aim of this study was to comparatively investigate and evaluate the capabilities of correlative and mechanistic modeling processes, applied to the projection of future distributions of date palm in novel environments, and to establish a method of minimizing uncertainty in the projections of differing techniques. The study area is the Middle Eastern countries, considered on a global scale. We compared the mechanistic model CLIMEX (CL) with the correlative models MaxEnt (MX), Boosted Regression Trees (BRT), and Random Forests (RF) to project current and future distributions of date palm (Phoenix dactylifera L.). The Global Climate Model (GCM) CSIRO-Mk3.0 (CS), using the A2 emissions scenario, was selected for making projections. Both indigenous and alien distribution data of the species were utilized in the modeling process. The common areas predicted by MX, BRT, RF, and CL from the CS GCM were extracted and compared to ascertain the projection uncertainty level of each individual technique. The common areas identified by all four modeling techniques were used to produce a map indicating suitable and unsuitable areas for date palm cultivation in Middle Eastern countries, for the present and the year 2100. The four modeling approaches predict fairly different distributions. Projections from CL were more conservative than those from MX. The BRT and RF were the most conservative methods in terms of projections for the current time. The combination of the final CL and MX projections for the present and 2100 provides higher certainty concerning those areas that will become highly suitable for future date palm cultivation. According to the four models, cold, heat, and wet stress, with differences on a regional basis, appear to be the major restrictions on future date palm distribution. The results demonstrate variances in the projections resulting from the different techniques. The assessment and interpretation of model projections requires reservations.
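
The "common areas" step above reduces to a cell-wise intersection of boolean suitability grids: a cell counts as suitable only if every technique flags it. The tiny grids below are invented; real inputs would be the CLIMEX, MaxEnt, BRT, and RF suitability maps.

```python
def common_area(*grids):
    """Cell-wise AND over any number of equally shaped 0/1 suitability
    grids, keeping only cells all techniques agree on."""
    rows, cols = len(grids[0]), len(grids[0][0])
    return [[all(g[r][c] for g in grids) for c in range(cols)]
            for r in range(rows)]

# Invented 2x3 suitability grids for the four techniques.
cl  = [[1, 1, 0], [0, 1, 1]]
mx  = [[1, 1, 1], [0, 1, 0]]
brt = [[0, 1, 1], [0, 1, 0]]
rf  = [[1, 1, 0], [1, 1, 0]]
agreed = common_area(cl, mx, brt, rf)
```

Only the cells predicted suitable by all four models survive, which is what lowers the single-technique projection uncertainty at the cost of a more conservative map.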

  1. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programming languages has been attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state of the art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  2. Modelling of ground penetrating radar data in stratified media using the reflectivity technique

    International Nuclear Information System (INIS)

    Sena, Armando R; Sen, Mrinal K; Stoffa, Paul L

    2008-01-01

    Horizontally layered media are often encountered in shallow exploration geophysics. Ground penetrating radar (GPR) data in these environments can be modelled by techniques that are more efficient than finite difference (FD) or finite element (FE) schemes because the lateral homogeneity of the media allows us to reduce the dependence on the horizontal spatial variables through Fourier transforms on these coordinates. We adapt and implement the invariant embedding or reflectivity technique used to model elastic waves in layered media to model GPR data. The results obtained with the reflectivity and FDTD modelling techniques are in excellent agreement and the effects of the air–soil interface on the radiation pattern are correctly taken into account by the reflectivity technique. Comparison with real wide-angle GPR data shows that the reflectivity technique can satisfactorily reproduce the real GPR data. These results and the computationally efficient characteristics of the reflectivity technique (compared to FD or FE) demonstrate its usefulness in interpretation and possible model-based inversion schemes of GPR data in stratified media
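
The reflectivity idea can be sketched at normal incidence: interface reflection coefficients are combined bottom-up, with a two-way phase delay across each layer. The impedances, layer thickness, and velocity (about 0.1 m/ns, plausible for moist soil) are illustrative assumptions; a full GPR implementation applies this recursion per horizontal wavenumber after the Fourier transforms mentioned above.

```python
import cmath

def reflectivity(imped, thick, vel, freq):
    """Normal-incidence reflectivity of a stack of layers.
    imped: impedances top to bottom (last entry is the half-space);
    thick/vel: thickness and velocity of each internal layer."""
    # Interface reflection coefficients from impedance contrasts.
    r = [(imped[i + 1] - imped[i]) / (imped[i + 1] + imped[i])
         for i in range(len(imped) - 1)]
    R = r[-1]
    for i in range(len(r) - 2, -1, -1):
        # Two-way propagation phase across layer i.
        phase = cmath.exp(2j * 2 * cmath.pi * freq * thick[i] / vel[i])
        R = (r[i] + R * phase) / (1 + r[i] * R * phase)
    return R

# Two layers over a half-space: impedances 1, 2, 4; 0.5 m layer at 1e8 m/s,
# probed at 100 MHz (one full two-way cycle, so the responses add in phase).
R = reflectivity(imped=[1.0, 2.0, 4.0], thick=[0.5], vel=[1e8], freq=100e6)
```

Evaluating this recursion over a band of frequencies and inverse-transforming gives the layered-earth response that the paper compares against FDTD results.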

  3. Constructing canine carotid artery stenosis model by endovascular technique

    International Nuclear Information System (INIS)

    Cheng Guangsen; Liu Yizhi

    2005-01-01

    Objective: To establish a carotid artery stenosis model, by an endovascular technique, suitable for neuro-interventional therapy. Methods: Twelve dogs were anesthetized, and the tunica media and intima of segments of the carotid arteries were damaged with a homemade corneous guiding wire. Twenty-four carotid artery stenosis models were thus created. DSA examinations were performed at postprocedural weeks 2, 4, 8, and 10 to assess the changes in the stenotic carotid arteries. Results: Twenty-four carotid artery stenosis models were successfully created in the twelve dogs. Conclusions: Canine carotid artery stenosis models can be created by the endovascular method, with pathological characteristics and hemodynamic changes similar to those in humans. The model is useful for further research on new techniques and materials for interventional treatment. (authors)

  4. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing graphic representations of buildings and other objects in 2.5D or 3D. Generally, three main Geomatics approaches are used for generating virtual 3D city models: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; in the third method, many researchers use terrestrial images, applying close-range photogrammetry with DSMs and texture mapping. We start this paper with an introduction to the various Geomatics techniques for 3D city modeling. These techniques divide into two main categories: one based on the degree of automation (automatic, semi-automatic, and manual methods), and another based on data input techniques (photogrammetry and laser techniques). After a detailed study of these, we summarize the conclusions of this research, give a short justification and analysis, and present current trends in 3D city modeling. This paper gives an overview of the techniques for generating virtual 3D city models using Geomatics, and of the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3D city model. Every technique and method has some advantages and some drawbacks. The point cloud model is a modern trend for virtual 3D city modeling. Photo-realistic, scalable, geo-referenced virtual 3…

  5. The Common Technique for Analyzing the Financial Results Report

    Directory of Open Access Journals (Sweden)

    Pasternak Maria M.

    2017-04-01

Full Text Available The article is aimed at generalizing the theoretical approaches to the structure and elements of a technique for analyzing the Financial results report (Cumulative income report) and at providing suggestions for its improvement. The current methods have been analyzed, and the relevance of applying a common technique for such analysis has been substantiated. A common technique for analyzing the Financial results report has been proposed, which includes the definition of the objectives and tasks of the analysis, its subjects and objects, and its information sources. The stages of such an analysis are allocated and described. The findings of the article can be used to theoretically substantiate and practically develop a technique for analyzing the Financial results report in the branches of the Ukrainian economy.

  6. Construct canine intracranial aneurysm model by endovascular technique

    International Nuclear Information System (INIS)

    Liang Xiaodong; Liu Yizhi; Ni Caifang; Ding Yi

    2004-01-01

Objective: To construct canine bifurcation aneurysms suitable for evaluating endovascular devices for interventional therapy. Methods: The right common carotid artery of six dogs was expanded with a pliable balloon by means of an endovascular technique, and then embolization with a detachable balloon was performed at its origin. DSA examinations were performed 1, 2 and 3 days after the procedure. Results: Six aneurysm models were created successfully in the six dogs, with the mean width and height of the aneurysms decreasing over 3 days. Conclusions: On DSA images this canine aneurysm model reproduces the size and shape of human cerebral bifurcation saccular aneurysms and is suitable for exploring endovascular devices for aneurysmal therapy. The procedure is quick, reliable and reproducible. (authors)

  7. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods.

  8. Machine Learning Techniques for Modelling Short Term Land-Use Change

    Directory of Open Access Journals (Sweden)

    Mileva Samardžić-Petrović

    2017-11-01

    Full Text Available The representation of land use change (LUC is often achieved by using data-driven methods that include machine learning (ML techniques. The main objectives of this research study are to implement three ML techniques, Decision Trees (DT, Neural Networks (NN, and Support Vector Machines (SVM for LUC modeling, in order to compare these three ML techniques and to find the appropriate data representation. The ML techniques are applied on the case study of LUC in three municipalities of the City of Belgrade, the Republic of Serbia, using historical geospatial data sets and considering nine land use classes. The ML models were built and assessed using two different time intervals. The information gain ranking technique and the recursive attribute elimination procedure were implemented to find the most informative attributes that were related to LUC in the study area. The results indicate that all three ML techniques can be used effectively for short-term forecasting of LUC, but the SVM achieved the highest agreement of predicted changes.
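The information-gain ranking used above for attribute selection can be sketched in a few lines of Python (the attributes and labels below are a toy example invented for illustration, not the Belgrade dataset):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attribute_values, labels):
    """Reduction in label entropy obtained by splitting on one attribute."""
    n = len(labels)
    remainder = 0.0
    for v in set(attribute_values):
        subset = [l for a, l in zip(attribute_values, labels) if a == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

# Toy data: two candidate attributes and a land-use class label.
slope    = ["low", "low", "high", "high"]   # perfectly predicts the label
zoning   = ["a", "b", "a", "b"]             # uninformative here
land_use = ["urban", "urban", "rural", "rural"]

attributes = {"slope": slope, "zoning": zoning}
ranking = sorted(attributes,
                 key=lambda name: information_gain(attributes[name], land_use),
                 reverse=True)
print(ranking[0])  # "slope" carries all the information in this toy example
```

The most informative attributes float to the front of the ranking; the recursive elimination mentioned in the abstract would then repeatedly drop the lowest-ranked attribute and re-fit.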

  9. A neuro-fuzzy computing technique for modeling hydrological time series

    Science.gov (United States)

    Nayak, P. C.; Sudheer, K. P.; Rangan, D. M.; Ramasastri, K. S.

    2004-05-01

    Intelligent computing tools such as artificial neural network (ANN) and fuzzy logic approaches are proven to be efficient when applied individually to a variety of problems. Recently there has been a growing interest in combining both these approaches, and as a result, neuro-fuzzy computing techniques have evolved. This approach has been tested and evaluated in the field of signal processing and related areas, but researchers have only begun evaluating the potential of this neuro-fuzzy hybrid approach in hydrologic modeling studies. This paper presents the application of an adaptive neuro fuzzy inference system (ANFIS) to hydrologic time series modeling, and is illustrated by an application to model the river flow of Baitarani River in Orissa state, India. An introduction to the ANFIS modeling approach is also presented. The advantage of the method is that it does not require the model structure to be known a priori, in contrast to most of the time series modeling techniques. The results showed that the ANFIS forecasted flow series preserves the statistical properties of the original flow series. The model showed good performance in terms of various statistical indices. The results are highly promising, and a comparative analysis suggests that the proposed modeling approach outperforms ANNs and other traditional time series models in terms of computational speed, forecast errors, efficiency, peak flow estimation etc. It was observed that the ANFIS model preserves the potential of the ANN approach fully, and eases the model building process.
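As a rough illustration of the neuro-fuzzy idea (not the paper's ANFIS implementation, which also adapts the membership functions), the sketch below fixes Gaussian memberships and estimates zero-order Sugeno rule consequents by least squares; the data series and all parameters are invented:

```python
import math

def gauss(x, c, s):
    return math.exp(-((x - c) ** 2) / (2 * s ** 2))

def firing(x, centers, sigma):
    """Normalized rule firing strengths for input x."""
    w = [gauss(x, c, sigma) for c in centers]
    total = sum(w)
    return [wi / total for wi in w]

def solve(M, b):
    """Gauss-Jordan elimination for a small dense linear system."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_consequents(xs, ys, centers, sigma):
    """Zero-order Sugeno: y ~ sum_i w_i(x) * c_i; solve for c via normal equations."""
    m = len(centers)
    A = [firing(x, centers, sigma) for x in xs]
    ata = [[sum(A[k][i] * A[k][j] for k in range(len(xs))) for j in range(m)]
           for i in range(m)]
    aty = [sum(A[k][i] * ys[k] for k in range(len(xs))) for i in range(m)]
    return solve(ata, aty)

# Train on a smooth synthetic "flow" curve and predict it back.
xs = [i / 10 for i in range(11)]
ys = [0.5 + 0.4 * math.sin(2 * x) for x in xs]
centers, sigma = [0.0, 0.5, 1.0], 0.3
c = fit_consequents(xs, ys, centers, sigma)
pred = lambda x: sum(w * ci for w, ci in zip(firing(x, centers, sigma), c))
```

The full ANFIS additionally back-propagates errors into the membership parameters, which is what gives the hybrid its adaptive character.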

  10. Nondestructive determination of plutonium mass in spent fuel: preliminary modeling results using the passive neutron Albedo reactivity technique

    International Nuclear Information System (INIS)

    Evans, Louise G.; Tobin, Stephen J.; Schear, Melissa A.; Menlove, Howard O.; Lee, Sang Y.; Swinhoe, Martyn T.

    2009-01-01

There are a variety of motivations for quantifying plutonium (Pu) in spent fuel assemblies by means of nondestructive assay (NDA), including the following: strengthening the capability of the International Atomic Energy Agency (IAEA) to safeguard nuclear facilities, quantifying shipper/receiver differences, determining the input accountability value at pyrochemical processing facilities, providing quantitative input to burnup credit, and final safeguards measurements at a long-term repository. In order to determine Pu mass in spent fuel assemblies, thirteen NDA techniques were identified that provide information about the composition of an assembly. A key motivation of the present research is the realization that none of these techniques, in isolation, is capable of both (1) quantifying the Pu mass of an assembly and (2) detecting the diversion of a significant number of rods. It is therefore anticipated that a combination of techniques will be required. A 5-year effort funded by the Next Generation Safeguards Initiative (NGSI) of the U.S. DOE was recently started in pursuit of these goals. The first two years involve researching all thirteen techniques using Monte Carlo modeling, while the final three years involve fabricating hardware and measuring spent fuel. Here, we present the work in two main parts: (1) an overview of this NGSI effort describing the motivations and approach being taken; (2) preliminary results for one of the NDA techniques, Passive Neutron Albedo Reactivity (PNAR). The PNAR technique functions by using the intrinsic neutron emission of the fuel (primarily from the spontaneous fission of curium) to self-interrogate any fissile material present. Two separate measurements of the spent fuel are made, with and without cadmium (Cd) present. The ratios of the Singles, Doubles and Triples count rates obtained in each case are analyzed; these are known as the Cd ratios.
The primary differences between the two measurements are the neutron energy spectrum…

  11. Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling

    DEFF Research Database (Denmark)

    Dotto, C. B.; Mannina, G.; Kleidorfer, M.

    2012-01-01

…-UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multiobjective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty techniques, common criteria have been set for the likelihood formulation, defining the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside the same dataset. The comparison results for a well-posed rainfall/runoff model showed that the four methods provide similar probability distributions of model parameters, and model prediction intervals. For the ill-posed water quality model the differences between the results were much wider; and the paper…

  12. A pilot modeling technique for handling-qualities research

    Science.gov (United States)

    Hess, R. A.

    1980-01-01

    A brief survey of the more dominant analysis techniques used in closed-loop handling-qualities research is presented. These techniques are shown to rely on so-called classical and modern analytical models of the human pilot which have their foundation in the analysis and design principles of feedback control. The optimal control model of the human pilot is discussed in some detail and a novel approach to the a priori selection of pertinent model parameters is discussed. Frequency domain and tracking performance data from 10 pilot-in-the-loop simulation experiments involving 3 different tasks are used to demonstrate the parameter selection technique. Finally, the utility of this modeling approach in handling-qualities research is discussed.

  13. New quantitative safety standards: different techniques, different results?

    International Nuclear Information System (INIS)

    Rouvroye, J.L.; Brombacher, A.C.

    1999-01-01

Safety Instrumented Systems (SIS) are used in the process industry to perform safety functions. Many factors can influence the safety of a SIS, such as system layout, diagnostics, testing and repair. In standards like the German DIN, no quantitative analysis is demanded (DIN V 19250 Grundlegende Sicherheitsbetrachtungen fuer MSR-Schutzeinrichtungen, Berlin, 1994; DIN/VDE 0801 Grundsaetze fuer Rechner in Systemen mit Sicherheitsaufgaben, Berlin, 1990). The analysis according to these standards is based on expert opinion and qualitative analysis techniques. Newer standards like IEC 61508 (IEC 61508 Functional safety of electrical/electronic/programmable electronic safety-related systems, IEC, Geneva, 1997) and ISA-S84.01 (ISA-S84.01.1996 Application of Safety Instrumented Systems for the Process Industries, Instrument Society of America, Research Triangle Park, 1996) require quantitative risk analysis but do not prescribe how to perform the analysis. Earlier publications of the authors (Rouvroye et al., Uncertainty in safety, new techniques for the assessment and optimisation of safety in process industry, D.W. Pyatt (ed), SERA-Vol. 4, Safety engineering and risk analysis, ASME, New York, 1995; Rouvroye et al., A comparison study of qualitative and quantitative analysis techniques for the assessment of safety in industry, P.C. Cacciabue, I.A. Papazoglou (eds), Proceedings PSAM III conference, Crete, Greece, June 1996) have shown that different analysis techniques cover different aspects of system behaviour. This paper shows, by means of a case study, that different (quantitative) analysis techniques may lead to different results. The consequence is that the application of the standards to practical systems will not always lead to unambiguous results. The authors therefore propose a technique to overcome this major disadvantage.

  14. Model Checking Markov Chains: Techniques and Tools

    NARCIS (Netherlands)

    Zapreev, I.S.

    2008-01-01

This dissertation deals with four important aspects of model checking Markov chains: the development of efficient model-checking tools, the improvement of model-checking algorithms, the efficiency of the state-space reduction techniques, and the development of simulation-based model-checking…

  15. The Effect of Learning Based on Technology Model and Assessment Technique toward Thermodynamic Learning Achievement

    Science.gov (United States)

    Makahinda, T.

    2018-02-01

The purpose of this research is to find out the effect of a technology-based learning model and of the assessment technique on thermodynamics learning achievement, controlling for students' intelligence. This is an experimental study; the sample was taken through cluster random sampling, with a total of 80 student respondents. The results show that the thermodynamics achievement of students taught with the environment-utilization learning model is higher than that of students taught with animated simulation, after controlling for student intelligence. There is also an interaction effect between the technology-based learning model and the assessment technique on students' thermodynamics achievement, after controlling for student intelligence. Based on these findings, lectures should use the environment-based thermodynamics learning model together with the project assessment technique.

  16. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new…

  17. Level-set techniques for facies identification in reservoir modeling

    Science.gov (United States)

    Iglesias, Marco A.; McLaughlin, Dennis

    2011-03-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil-water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301-29 2004 Inverse Problems 20 259-82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg-Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush-Kuhn-Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies.
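A toy one-dimensional sketch of the level-set idea (not the paper's PDE-constrained reservoir scheme): the facies is the region where a level-set function is negative, and a misfit-driven update plays the role of the velocity that decreases the cost functional. The grid, target region, and step size below are all invented:

```python
# Grid on [0, 1]; the facies is the region where phi < 0.
n = 101
xs = [i / (n - 1) for i in range(n)]

def indicator(phi):
    return [1.0 if p < 0 else 0.0 for p in phi]

# "Measured" data: the true facies occupies [0.3, 0.6].
data = [1.0 if 0.3 <= x <= 0.6 else 0.0 for x in xs]

# Initial guess: a level-set function whose zero level brackets [0.1, 0.4].
phi = [max(0.1 - x, x - 0.4) for x in xs]

# Descent: raise phi where the guess over-covers, lower it where it
# under-covers, so each step reduces the misfit sum((chi - data)^2).
dt = 0.05
for _ in range(300):
    chi = indicator(phi)
    phi = [p + dt * (c - d) for p, c, d in zip(phi, chi, data)]

recovered = indicator(phi)
```

In the paper this "velocity" comes from the shape derivative of the data misfit under the reservoir dynamics, but the mechanism of moving the zero level set downhill is the same.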

  18. Level-set techniques for facies identification in reservoir modeling

    International Nuclear Information System (INIS)

    Iglesias, Marco A; McLaughlin, Dennis

    2011-01-01

    In this paper we investigate the application of level-set techniques for facies identification in reservoir models. The identification of facies is a geometrical inverse ill-posed problem that we formulate in terms of shape optimization. The goal is to find a region (a geologic facies) that minimizes the misfit between predicted and measured data from an oil–water reservoir. In order to address the shape optimization problem, we present a novel application of the level-set iterative framework developed by Burger in (2002 Interfaces Free Bound. 5 301–29; 2004 Inverse Problems 20 259–82) for inverse obstacle problems. The optimization is constrained by (the reservoir model) a nonlinear large-scale system of PDEs that describes the reservoir dynamics. We reformulate this reservoir model in a weak (integral) form whose shape derivative can be formally computed from standard results of shape calculus. At each iteration of the scheme, the current estimate of the shape derivative is utilized to define a velocity in the level-set equation. The proper selection of this velocity ensures that the new shape decreases the cost functional. We present results of facies identification where the velocity is computed with the gradient-based (GB) approach of Burger (2002) and the Levenberg–Marquardt (LM) technique of Burger (2004). While an adjoint formulation allows the straightforward application of the GB approach, the LM technique requires the computation of the large-scale Karush–Kuhn–Tucker system that arises at each iteration of the scheme. We efficiently solve this system by means of the representer method. We present some synthetic experiments to show and compare the capabilities and limitations of the proposed implementations of level-set techniques for the identification of geologic facies

  19. Probabilistic evaluation of process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2016-01-01

Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to…

  20. GRAVTool, a Package to Compute Geoid Model by Remove-Compute-Restore Technique

    Science.gov (United States)

    Marotta, G. S.; Blitzkow, D.; Vidotti, R. M.

    2015-12-01

Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astro-geodetic data or a combination of them. Among the techniques to compute a precise geoid model, Remove-Compute-Restore (RCR) has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and global geopotential coefficients, respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and that adjust these models to one local vertical datum. This research presents a package called GRAVTool, based on MATLAB software, to compute local geoid models by the RCR technique, and its application in a study area. The study area comprises the Federal District of Brazil (~6000 km²), with wavy relief and heights varying from 600 m to 1340 m, located between coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The results of the numerical example show the local geoid model computed by the GRAVTool package (Figure), using 1377 terrestrial gravity data, SRTM data with 3 arc seconds of resolution, and geopotential coefficients of the EIGEN-6C4 model to degree 360. The accuracy of the computed model (σ = ±0.071 m, RMS = 0.069 m, maximum = 0.178 m and minimum = -0.123 m) matches the uncertainty (σ = ±0.073 m) of 21 randomly spaced points where the geoid was determined by the geometric levelling technique supported by GNSS positioning. The results were also better than those achieved by the official Brazilian regional geoid model (σ = ±0.099 m, RMS = 0.208 m, maximum = 0.419 m and minimum = -0.040 m).
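The remove-compute-restore bookkeeping can be sketched as follows. All numbers are invented for illustration, and the real "compute" step is a Stokes integration of the residual anomalies rather than the stand-in linear factor used here:

```python
# Illustrative observed gravity anomalies (mGal) at three stations, and the
# long- and short-wavelength parts predicted by a global geopotential model
# (GGM) and a residual terrain model (RTM). All numbers are made up.
obs_anomaly = [24.1, 18.7, 31.5]
ggm_anomaly = [20.0, 17.0, 28.0]   # long wavelengths (geopotential coefficients)
rtm_anomaly = [3.0, 0.5, 2.5]      # short wavelengths (digital terrain model)

# REMOVE: subtract the modelled wavelengths to obtain a smooth residual field.
residual = [o - g - t for o, g, t in zip(obs_anomaly, ggm_anomaly, rtm_anomaly)]

# COMPUTE: convert residual anomalies to residual geoid heights.
# Stand-in for Stokes integration: a linear response factor (metres per mGal).
k = 0.01
n_residual = [k * r for r in residual]

# RESTORE: add back the geoid contributions of the removed wavelengths.
n_ggm = [21.30, 21.10, 21.55]  # metres, from the global model (invented)
n_rtm = [0.04, 0.01, 0.03]     # metres, from the terrain model (invented)
geoid = [ng + nr + nt for ng, nr, nt in zip(n_ggm, n_residual, n_rtm)]
```

The point of the remove step is that the residual field is small and smooth, so the compute step (and any gridding or interpolation in it) behaves well; the restore step then reinstates the full signal.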

  1. The impact of applying product-modelling techniques in configurator projects

    DEFF Research Database (Denmark)

    Hvam, Lars; Kristjansdottir, Katrin; Shafiee, Sara

    2018-01-01

This paper aims to increase understanding of the impact of using product-modelling techniques to structure and formalise knowledge in configurator projects. Companies that provide customised products increasingly apply configurators in support of sales and design activities, reaping benefits that include shorter lead times, improved quality of specifications and products, and lower overall product costs. The design and implementation of configurators are a challenging task that calls for scientifically based modelling techniques to support the formal representation of configurator knowledge. Even… (1) UML-based modelling techniques, in which the phenomenon model and information model are considered visually, (2) non-UML-based modelling techniques, in which only the phenomenon model is considered, and (3) non-formal modelling techniques. This study analyses the impact to companies from increased availability of product knowledge and improved control…

  2. Enhancing photogrammetric 3d city models with procedural modeling techniques for urban planning support

    International Nuclear Information System (INIS)

    Schubiger-Banz, S; Arisona, S M; Zhong, C

    2014-01-01

    This paper presents a workflow to increase the level of detail of reality-based 3D urban models. It combines the established workflows from photogrammetry and procedural modeling in order to exploit distinct advantages of both approaches. The combination has advantages over purely automatic acquisition in terms of visual quality, accuracy and model semantics. Compared to manual modeling, procedural techniques can be much more time effective while maintaining the qualitative properties of the modeled environment. In addition, our method includes processes for procedurally adding additional features such as road and rail networks. The resulting models meet the increasing needs in urban environments for planning, inventory, and analysis

  3. Results and Error Estimates from GRACE Forward Modeling over Antarctica

    Science.gov (United States)

    Bonin, Jennifer; Chambers, Don

    2013-04-01

    Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Antarctica. However when tested previously, the least squares technique has required constraints in the form of added process noise in order to be reliable. Poor choice of local basin layout has also adversely affected results, as has the choice of spatial smoothing used with GRACE. To develop design parameters which will result in correct high-resolution mass detection and to estimate the systematic errors of the method over Antarctica, we use a "truth" simulation of the Antarctic signal. We apply the optimal parameters found from the simulation to RL05 GRACE data across Antarctica and the surrounding ocean. We particularly focus on separating the Antarctic peninsula's mass signal from that of the rest of western Antarctica. Additionally, we characterize how well the technique works for removing land leakage signal from the nearby ocean, particularly that near the Drake Passage.
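The weighted least squares projection at the heart of the forward-modeling technique can be sketched on a toy problem (the basin leakage patterns, weights, and mass values below are invented; with no observation noise, the recovery is exact):

```python
# Two "basins" observed at four grid points: A[i][j] is the contribution of a
# unit mass change in basin j to observation i (an illustrative leakage pattern).
A = [[1.0, 0.2],
     [0.8, 0.3],
     [0.2, 1.0],
     [0.1, 0.9]]
true_x = [5.0, -2.0]            # basin mass changes to recover
y = [sum(a * t for a, t in zip(row, true_x)) for row in A]  # noise-free demo
w = [1.0, 1.0, 0.5, 0.5]        # per-observation weights

# Weighted normal equations: (A^T W A) x = A^T W y.
m = 2
atwa = [[sum(w[k] * A[k][i] * A[k][j] for k in range(len(y))) for j in range(m)]
        for i in range(m)]
atwy = [sum(w[k] * A[k][i] * y[k] for k in range(len(y))) for i in range(m)]

# Solve the 2x2 system by Cramer's rule.
det = atwa[0][0] * atwa[1][1] - atwa[0][1] * atwa[1][0]
x = [(atwy[0] * atwa[1][1] - atwa[0][1] * atwy[1]) / det,
     (atwa[0][0] * atwy[1] - atwy[0] * atwa[1][0]) / det]
```

In the real application the design matrix encodes how each pre-determined basin's mass change maps into smoothed GRACE observations, and the added process-noise constraints mentioned in the abstract regularize this solve.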

  4. Percutaneous radiofrequency ablation of osteoid osteomas. Technique and results

    International Nuclear Information System (INIS)

    Bruners, P.; Penzkofer, T.; Guenther, R. W.; Mahnken, A.

    2009-01-01

Purpose: Osteoid osteoma is a benign primary bone tumor that typically occurs in children and young adults. Besides local pain, which is often worse at night, prompt relief after medication with acetylsalicylic acid (ASA) is characteristic of this bone lesion. Because long-term ASA medication does not represent an alternative treatment strategy, due to its potentially severe side effects, different minimally invasive image-guided techniques for the therapy of osteoid osteoma have been developed. In this context, radiofrequency (RF) ablation in particular has become part of the clinical routine. The technique and results of image-guided RF ablation are compared to alternative treatment strategies. Materials and Methods: In this technique, a usually needle-shaped RF applicator is placed percutaneously into the tumor under image guidance. A high-frequency alternating current is then applied at the tip of the applicator, which causes ionic motion within the tissue, resulting in local heating and thus in thermal destruction of the surrounding tissue, including the tumor. Results: The published primary and secondary success rates of this technique are 87% and 83%, respectively. Surgical resection and open curettage show comparable success rates but are associated with higher complication rates. In addition, image-guided RF ablation of osteoid osteomas is associated with low costs. (orig.)

  5. HIGHLY-ACCURATE MODEL ORDER REDUCTION TECHNIQUE ON A DISCRETE DOMAIN

    Directory of Open Access Journals (Sweden)

    L. D. Ribeiro

    2015-09-01

Full Text Available Abstract: In this work, we present a highly accurate technique of model order reduction applied to staged processes. The proposed method reduces the dimension of the original system based on null values of moment-weighted sums of heat and mass balance residuals on real stages. To compute these sums of weighted residuals, a discrete form of Gauss-Lobatto quadrature was developed, allowing a high degree of accuracy in these calculations. The locations where the residuals are cancelled vary with time and operating conditions, characterizing a desirable adaptive nature of this technique. Balances related to upstream and downstream devices (such as the condenser, reboiler, and feed tray of a distillation column) are considered as boundary conditions of the corresponding difference-differential equation system. The chosen number of moments is the dimension of the reduced model, which is much lower than the dimension of the complete model and does not depend on the size of the original model. Scaling of the discrete independent variable related to the stages was crucial for the computational implementation of the proposed method, avoiding the accumulation of round-off errors present even in low-degree polynomial approximations in the original discrete variable. Dynamic simulations of distillation columns were carried out to check the performance of the proposed model order reduction technique. The obtained results show the superiority of the proposed procedure in comparison with the orthogonal collocation method.

  6. Results and Error Estimates from GRACE Forward Modeling over Greenland, Canada, and Alaska

    Science.gov (United States)

    Bonin, J. A.; Chambers, D. P.

    2012-12-01

    Forward modeling using a weighted least squares technique allows GRACE information to be projected onto a pre-determined collection of local basins. This decreases the impact of spatial leakage, allowing estimates of mass change to be better localized. The technique is especially valuable where models of current-day mass change are poor, such as over Greenland and Antarctica. However, the accuracy of the forward model technique has not been determined, nor is it known how the distribution of the local basins affects the results. We use a "truth" model composed of hydrology and ice-melt slopes as an example case, to estimate the uncertainties of this forward modeling method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We then apply these optimal parameters in a forward model estimate created from RL05 GRACE data. We compare the resulting mass slopes with the expected systematic errors from the simulation, as well as GIA and basic trend-fitting uncertainties. We also consider whether specific regions (such as Ellesmere Island and Baffin Island) can be estimated reliably using our optimal basin layout.

  7. Effects of pump recycling technique on stimulated Brillouin scattering threshold: a theoretical model.

    Science.gov (United States)

    Al-Asadi, H A; Al-Mansoori, M H; Ajiya, M; Hitam, S; Saripan, M I; Mahdi, M A

    2010-10-11

We develop a theoretical model that can be used to predict the stimulated Brillouin scattering (SBS) threshold in optical fibers as affected by the Brillouin pump recycling technique. Simulation results obtained from our model are in close agreement with our experimental results. The developed model utilizes single-mode optical fibers of different lengths as the Brillouin gain media. For a 5-km long single-mode fiber, the calculated threshold power for SBS is about 16 mW with the conventional technique. This value is reduced to about 8 mW when the residual Brillouin pump is recycled at the end of the fiber. The decrease of the SBS threshold is due to the longer interaction length between the Brillouin pump and the Stokes wave.
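For orientation, the classic single-pass SBS threshold estimate is P_th ≈ 21 A_eff / (g_B L_eff) with effective length L_eff = (1 − exp(−αL))/α. The sketch below uses typical single-mode-fiber constants rather than the paper's values, and models pump recycling crudely as a doubling of the interaction (an assumption for illustration, not the paper's coupled-equation model):

```python
import math

# Typical SMF constants (illustrative, not the paper's fibre parameters).
g_b   = 5e-11               # Brillouin gain coefficient, m/W
a_eff = 80e-12              # effective mode area, m^2
alpha = 0.2 / 4.343 / 1000  # 0.2 dB/km attenuation converted to 1/m

def sbs_threshold(length_m, pump_recycled=False):
    """Textbook single-pass SBS threshold, P_th ~ 21 * A_eff / (g_B * L_eff)."""
    l_eff = (1 - math.exp(-alpha * length_m)) / alpha
    p_th = 21 * a_eff / (g_b * l_eff)
    # Crude stand-in for pump recycling: the reflected residual pump roughly
    # doubles the effective interaction, halving the threshold (assumption).
    return p_th / 2 if pump_recycled else p_th

p_plain = sbs_threshold(5000)                       # a few mW for 5 km of SMF
p_recycled = sbs_threshold(5000, pump_recycled=True)
```

The estimate lands in the milliwatt range for a 5-km fiber, consistent in order of magnitude with the abstract's 16 mW / 8 mW figures, though the exact values depend on the fibre parameters and on the full model.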

  8. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results in a much shorter time (microseconds instead of hours/days).
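The surrogate idea can be sketched with a stand-in "expensive" code and a quadratic response surface fitted by least squares. The function, design points, and fit are all invented for illustration; the RISMC tools use far richer surrogates:

```python
import math

def expensive_simulation(x):
    """Stand-in for a long-running physics code (e.g. a response vs. a load)."""
    return 600 + 150 * math.sin(1.5 * x) + 40 * x * x

# Sample the code at a handful of design points ...
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [expensive_simulation(x) for x in xs]

# ... and fit a quadratic surrogate y ~ c0 + c1*x + c2*x^2 by normal equations.
def fit_quadratic(xs, ys):
    cols = [[1.0, x, x * x] for x in xs]
    ata = [[sum(r[i] * r[j] for r in cols) for j in range(3)] for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(cols, ys)) for i in range(3)]
    # Gauss-Jordan elimination on the 3x3 augmented system.
    M = [row[:] + [aty[i]] for i, row in enumerate(ata)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

c0, c1, c2 = fit_quadratic(xs, ys)
surrogate = lambda x: c0 + c1 * x + c2 * x * x  # microseconds instead of hours
```

Once fitted from a small number of expensive runs, the surrogate can be evaluated millions of times inside the risk analysis at negligible cost, at the price of an approximation error that must be quantified.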

  9. Modeling techniques for quantum cascade lasers

    Energy Technology Data Exchange (ETDEWEB)

    Jirauschek, Christian [Institute for Nanoelectronics, Technische Universität München, D-80333 Munich (Germany); Kubis, Tillmann [Network for Computational Nanotechnology, Purdue University, 207 S Martin Jischke Drive, West Lafayette, Indiana 47907 (United States)

    2014-03-15

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.
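The finite-difference solution of the one-dimensional Schrödinger equation mentioned in the review can be sketched for the simplest possible case, an infinite square well rather than an actual QCL heterostructure, in units with hbar = m = 1 so the analytic levels are E_n = (n*pi/L)^2 / 2.

```python
import numpy as np

# Finite-difference 1D Schrödinger equation for an infinite square well
# of width L, units hbar = m = 1. Interior grid only; psi = 0 at the walls.
L = 1.0
N = 500                          # interior grid points
x = np.linspace(0.0, L, N + 2)
h = x[1] - x[0]

# Kinetic operator -(1/2) d^2/dx^2 as a tridiagonal matrix.
main = np.full(N, 1.0 / h**2)
off = np.full(N - 1, -0.5 / h**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E = np.linalg.eigvalsh(H)[:3]                              # lowest three levels
E_exact = np.array([(n * np.pi / L) ** 2 / 2 for n in (1, 2, 3)])
print(E, E_exact)
```

For a real quantum-well structure one would add the potential profile on the diagonal and, for Schrödinger-Poisson, iterate with the electrostatic potential.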

  10. Modeling techniques for quantum cascade lasers

    Science.gov (United States)

    Jirauschek, Christian; Kubis, Tillmann

    2014-03-01

    Quantum cascade lasers are unipolar semiconductor lasers covering a wide range of the infrared and terahertz spectrum. Lasing action is achieved by using optical intersubband transitions between quantized states in specifically designed multiple-quantum-well heterostructures. A systematic improvement of quantum cascade lasers with respect to operating temperature, efficiency, and spectral range requires detailed modeling of the underlying physical processes in these structures. Moreover, the quantum cascade laser constitutes a versatile model device for the development and improvement of simulation techniques in nano- and optoelectronics. This review provides a comprehensive survey and discussion of the modeling techniques used for the simulation of quantum cascade lasers. The main focus is on the modeling of carrier transport in the nanostructured gain medium, while the simulation of the optical cavity is covered at a more basic level. Specifically, the transfer matrix and finite difference methods for solving the one-dimensional Schrödinger equation and Schrödinger-Poisson system are discussed, providing the quantized states in the multiple-quantum-well active region. The modeling of the optical cavity is covered with a focus on basic waveguide resonator structures. Furthermore, various carrier transport simulation methods are discussed, ranging from basic empirical approaches to advanced self-consistent techniques. The methods include empirical rate equation and related Maxwell-Bloch equation approaches, self-consistent rate equation and ensemble Monte Carlo methods, as well as quantum transport approaches, in particular the density matrix and non-equilibrium Green's function formalism. The derived scattering rates and self-energies are generally valid for n-type devices based on one-dimensional quantum confinement, such as quantum well structures.

  11. PND fuel handling decontamination program: specialized techniques and results

    International Nuclear Information System (INIS)

    Pan, R.; Hobbs, K.; Minnis, M.; Graham, K.

    1995-01-01

    The use of various decontamination techniques and equipment has become a critical part of Fuel Handling maintenance work at the Pickering Nuclear Station, an eight unit CANDU station located about 30 km east of Toronto. This paper presents an overview of the set up and techniques used for cleaning in the PND Fuel Handling Maintenance Facility, and the results achieved. (author)

  12. Probabilistic method/techniques of evaluation/modeling that permits to optimize/reduce the necessary resources

    International Nuclear Information System (INIS)

    Florescu, G.; Apostol, M.; Farcasiu, M.; Luminita Bedreaga, M.; Nitoi, M.; Turcu, I.

    2004-01-01

Fault tree/event tree modeling is widely used to model and simulate the behavior of nuclear structures, systems and components (NSSCs) under different operating conditions. Probabilistic techniques are likewise widely used to evaluate NSSC reliability, availability, risk or safety during operation. Growing computer capabilities have opened new possibilities for designing, processing and using large NSSC models. There are situations where large, complex and correct NSSC models must be associated with rapid results/solutions/decisions, or with multiple processing runs, in order to obtain specific results. Large fault/event trees are hard to develop, review and process, and during NSSC operation time is an especially important factor in decision making. The paper presents a probabilistic method for NSSC evaluation/modeling that addresses these problems by adopting appropriate techniques. The method is stated for special applications and is based on specific PSA analysis steps, information, algorithms, criteria and relations, in correspondence with fault tree/event tree modeling and similar techniques, in order to obtain appropriate results for NSSC model analysis. A special classification of NSSCs is stated to reflect aspects of the method's use. Common reliability databases are part of the information necessary to complete the analysis process, and special data and information bases contribute to stating the proposed method/techniques. The paper also presents the specific steps of the method, its applicability, its main advantages and the problems to be studied further. The method permits optimization/reduction of the resources used to perform PSA activities. (author)
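The fault-tree evaluation step underlying such PSA work can be illustrated with a toy top-event calculation over hypothetical minimal cut sets; the events and probabilities below are invented for the example.

```python
from itertools import combinations

# Hypothetical minimal cut sets: the top event occurs if every basic
# event in any one cut set fails. Basic events assumed independent.
cut_sets = [{"P1"}, {"V1", "V2"}, {"V1", "P2"}]
p = {"P1": 1e-4, "V1": 1e-2, "V2": 2e-2, "P2": 5e-3}

def cut_prob(events):
    prob = 1.0
    for e in events:
        prob *= p[e]
    return prob

# Exact top-event probability via inclusion-exclusion over cut sets
# (the intersection of cut sets fails iff the union of their events fails).
top = 0.0
for k in range(1, len(cut_sets) + 1):
    for combo in combinations(cut_sets, k):
        union = set().union(*combo)
        top += (-1) ** (k + 1) * cut_prob(union)

# Rare-event approximation: simple sum of cut-set probabilities.
approx = sum(cut_prob(cs) for cs in cut_sets)
print(top, approx)
```

For rare events the simple sum is a tight upper bound, which is why large industrial trees are usually quantified that way rather than by full inclusion-exclusion.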

  13. Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems

    Science.gov (United States)

    Yang, Le; Wang, Shuo; Feng, Jianghua

    2017-11-01

Electromagnetic interference (EMI) causes electromechanical damage to the motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike the fundamental-frequency components in motor drive systems, high-frequency EMI noise, coupled with the parasitic parameters of the whole system, is difficult to analyze and reduce. In this article, EMI modeling techniques for the different functional units in a VFD system, including induction motors, motor bearings, and rectifier-inverters, are reviewed and evaluated in terms of applied frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques used to eliminate bearing current are evaluated. Modeling techniques for conventional rectifier-inverter systems are also summarized. EMI noise suppression techniques, including passive filters, Wheatstone bridge balance, active filters, and optimized modulation, are reviewed and compared based on the VFD system models.

  14. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    International Nuclear Information System (INIS)

    Zhang, Jeff L; Michael Morey, A; Kadrmas, Dan J

    2016-01-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg–Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models. (paper)
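The separable (variable projection) idea in this record can be sketched on a generic two-exponential model standing in for a compartment-model time-activity curve: the amplitudes enter linearly and are solved in closed form for each candidate pair of decay rates, so the exhaustive search runs only over the two nonlinear parameters. All values are illustrative, not PET data.

```python
import numpy as np

# Two-exponential model y(t) = a1*exp(-b1*t) + a2*exp(-b2*t).
# (a1, a2) are linear parameters; (b1, b2) are nonlinear.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 100)
y = 3.0 * np.exp(-0.3 * t) + 1.0 * np.exp(-2.0 * t)
y = y + rng.normal(0.0, 0.01, t.size)          # small measurement noise

def residual(b1, b2):
    # For fixed decay rates, amplitudes follow from linear least squares.
    A = np.column_stack([np.exp(-b1 * t), np.exp(-b2 * t)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sum((A @ coef - y) ** 2), coef

# Exhaustive search over the reduced, 2-D nonlinear parameter space only.
grid = np.linspace(0.05, 3.0, 60)
sse, b1, b2 = min((residual(b1, b2)[0], b1, b2)
                  for b1 in grid for b2 in grid if b1 < b2)
print(b1, b2, sse)
```

The dimensionality reduction is what makes the exhaustive searches described in the abstract tractable: a four-parameter fit becomes a two-dimensional grid.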

  15. EXCHANGE-RATES FORECASTING: EXPONENTIAL SMOOTHING TECHNIQUES AND ARIMA MODELS

    Directory of Open Access Journals (Sweden)

    Dezsi Eva

    2011-07-01

Full Text Available Exchange rate forecasting is, and has been, a challenging task in finance. Statistical and econometric models are widely used in the analysis and forecasting of foreign exchange rates. This paper investigates the behavior of daily exchange rates of the Romanian Leu against the Euro, United States Dollar, British Pound, Japanese Yen, Chinese Renminbi and the Russian Ruble. Smoothing techniques are generated and compared with each other. These models include the Simple Exponential Smoothing technique, the Double Exponential Smoothing technique, the Simple Holt-Winters and the Additive Holt-Winters techniques, as well as the Autoregressive Integrated Moving Average (ARIMA) model.
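Simple Exponential Smoothing, the first technique listed, reduces to a one-line recursion, s_t = alpha*y_t + (1-alpha)*s_{t-1}, whose last smoothed value is the one-step-ahead forecast. The sample series below is illustrative, not actual exchange-rate data.

```python
# Simple exponential smoothing: the forecast is the final smoothed value.
def ses_forecast(series, alpha):
    s = series[0]                     # initialize with the first observation
    for y in series[1:]:
        s = alpha * y + (1 - alpha) * s
    return s

rates = [4.21, 4.23, 4.20, 4.25, 4.28, 4.26, 4.30]  # illustrative values
print(round(ses_forecast(rates, 0.3), 4))
```

A larger smoothing constant alpha tracks recent observations more closely; the paper's comparison amounts to selecting among such recursions (and ARIMA fits) by out-of-sample error.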

  16. Liver metastases: interventional therapeutic techniques and results, state of the art

    International Nuclear Information System (INIS)

    Vogl, T.J.; Mueller, P.K.; Mack, M.G.; Straub, R.; Engelmann, K.; Neuhaus, P.

    1999-01-01

    The liver is the most common site of metastatic tumour deposits. Hepatic metastases are the major cause of morbidity and mortality in patients with gastrointestinal carcinomas and other malignant tumours. The rationale and results for interventional therapeutic techniques in the treatment of liver metastases are presented. For the treatment of patients with irresectable liver metastases, alternative local ablative therapeutic modalities have been developed. Technique and results of local interventional therapies are presented such as microwave-, radiofrequency (RF)- and ultrasound ablation, and laser-induced interstitial therapy (LITT), cryotherapy and local drug administration such as alcohol injection, endotumoral chemotherapy and regional chemoembolisation. In addition to cryotherapy, all ablative techniques can be performed percutaneously with low morbidity and mortality. Cryotherapy is an effective and precise technique for inducing tumour necrosis, but it is currently performed via laparotomy. Percutaneous local alcohol injection results in an inhomogeneous distribution in liver metastases with unreliable control rates. Local chemotherapeutic drug instillation and regional chemoembolisation produces relevant but non-reproducible lesions. Laser-induced interstitial thermotherapy (LITT) performed under MRI guidance results in precise and reproducible areas of induced necrosis with a local control of 94 %, and with an improved survival rate. Interventional therapeutic techniques of liver metastases do result in a remarkable local tumour control rate with improved survival results. (orig.)

  17. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification means quantifying and reducing uncertainties; the objective is to quantify uncertainties in parameters, models and measurements, and to propagate them through the model, so that one can make predictive estimates with quantified uncertainties. Two aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of an HIV disease. They are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, meaning that they cannot be uniquely determined from the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impact on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is part of a nuclear reactor model. We employ this simple heat model to illustrate verification
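The kind of Bayesian model calibration being verified can be illustrated by a minimal Metropolis sampler for a one-parameter linear model; this is a generic sketch, not the HIV or heat model from the dissertation, and all values are invented.

```python
import numpy as np

# Synthetic data from y = theta * x + noise, with known noise level.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 1.0, 50)
theta_true, sigma = 2.0, 0.1
y = theta_true * x + rng.normal(0.0, sigma, x.size)

def log_post(theta):
    # Flat prior, Gaussian likelihood (up to an additive constant).
    return -0.5 * np.sum((y - theta * x) ** 2) / sigma**2

# Random-walk Metropolis sampling of the posterior over theta.
chain = [1.0]
for _ in range(5000):
    prop = chain[-1] + rng.normal(0.0, 0.1)
    if np.log(rng.uniform()) < log_post(prop) - log_post(chain[-1]):
        chain.append(prop)
    else:
        chain.append(chain[-1])

posterior_mean = np.mean(chain[1000:])       # discard burn-in
print(posterior_mean)
```

A verification framework of the kind described would check such a sampler against cases with known posteriors before trusting it on the real model.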

  18. Application of Soft Computing Techniques and Multiple Regression Models for CBR prediction of Soils

    Directory of Open Access Journals (Sweden)

    Fatimah Khaleel Ibrahim

    2017-08-01

Full Text Available Soft computing techniques such as Artificial Neural Networks (ANN) have improved predictive capability and have found application in geotechnical engineering. The aim of this research is to utilize soft computing techniques and Multiple Regression Models (MLR) for forecasting the California Bearing Ratio (CBR) of soil from its index properties. The CBR of a soil can be predicted from various soil-characterizing parameters with the assistance of MLR and ANN methods. The database was collected in the laboratory by conducting tests on 86 soil samples gathered from different projects in the Basrah districts. Data gained from the experimental results were used in the regression models and in the soft computing technique using artificial neural networks. The liquid limit, plasticity index, modified compaction test and the CBR test were determined. In this work, different ANN and MLR models were formulated with different collections of inputs in order to recognize their significance in the prediction of CBR. The strengths of the developed models were examined in terms of regression coefficient (R2), relative error (RE%) and mean square error (MSE) values. From the results of this paper, it was noticed that all the proposed ANN models perform better than the MLR model. In particular, the ANN model with all input parameters reveals better outcomes than the other ANN models.
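The three goodness-of-fit measures used in the paper (R2, RE% and MSE) can be computed as below; the measured and predicted CBR values are invented for the example, not the paper's data.

```python
import numpy as np

# Hypothetical measured vs. predicted CBR values for six samples.
measured = np.array([8.2, 12.5, 6.7, 15.1, 9.8, 11.3])
predicted = np.array([8.0, 12.9, 7.1, 14.6, 9.5, 11.8])

mse = np.mean((measured - predicted) ** 2)             # mean square error
ss_res = np.sum((measured - predicted) ** 2)
ss_tot = np.sum((measured - measured.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot                             # coefficient of determination
re_pct = 100.0 * np.mean(np.abs(measured - predicted) / measured)  # relative error %
print(f"R2={r2:.3f}  MSE={mse:.3f}  RE%={re_pct:.2f}")
```

Comparing candidate ANN and MLR models on held-out samples with these three numbers is exactly the model-selection step the abstract describes.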

  19. Structural Modeling Using "Scanning and Mapping" Technique

    Science.gov (United States)

    Amos, Courtney L.; Dash, Gerald S.; Shen, J. Y.; Ferguson, Frederick; Noga, Donald F. (Technical Monitor)

    2000-01-01

Supported by NASA Glenn Center, we are in the process of developing a structural damage diagnostic and monitoring system for rocket engines, which consists of five modules: Structural Modeling, Measurement Data Pre-Processor, Structural System Identification, Damage Detection Criterion, and Computer Visualization. The function of the system is to detect damage as it is incurred by the engine structures. The scientific principle for identifying damage is to utilize the changes in vibrational properties between the pre-damaged and post-damaged structures. The vibrational properties of the pre-damaged structure can be obtained from an analytic computer model of the structure. Thus, as the first stage of the whole research plan, we currently focus on the first module, Structural Modeling. Three computer software packages are selected and will be integrated for this purpose: PhotoModeler-Pro, AutoCAD-R14, and MSC/NASTRAN. AutoCAD is the most popular PC-CAD system currently available on the market. For our purpose, it acts as an interface to generate structural models of any particular engine parts or assembly, which are then passed to MSC/NASTRAN for extracting structural dynamic properties. Although AutoCAD is a powerful structural modeling tool, the complexity of engine components requires a further improvement in structural modeling techniques. We are working on a relatively new technique, so-called "scanning and mapping". The basic idea is to produce a full and accurate 3D structural model by tracing on multiple overlapping photographs taken from different angles. There is no need to input point positions, angles, distances or axes. Photographs can be taken by any type of camera with different lenses. With the integration of such a modeling technique, the capability of structural modeling will be enhanced. The prototypes of any complex structural components will be produced by PhotoModeler first based on existing similar

  20. Skin fluorescence model based on the Monte Carlo technique

    Science.gov (United States)

    Churmakov, Dmitry Y.; Meglinski, Igor V.; Piletsky, Sergey A.; Greenhalgh, Douglas A.

    2003-10-01

A novel Monte Carlo technique for simulating the spatial fluorescence distribution within human skin is presented. The computational model of skin takes into account the spatial distribution of fluorophores, which follows the packing of the collagen fibers, whereas in the epidermis and stratum corneum the distribution of fluorophores is assumed to be homogeneous. The simulation results suggest that the distribution of auto-fluorescence is significantly suppressed in the NIR spectral region, while the fluorescence of a sensor layer embedded in the epidermis is localized at the adjusted depth. The model is also able to simulate the skin fluorescence spectra.
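The Monte Carlo idea can be sketched for a two-layer absorbing medium at normal incidence, sampling where excitation photons are absorbed; the layer thickness and absorption coefficients are illustrative, not tissue optical data, and scattering is ignored for brevity.

```python
import random

# Sample photon absorption depths in a two-layer medium
# (a thin "epidermis" over a "dermis" with a lower absorption coefficient).
random.seed(1)
mu_epi, mu_derm = 2.0, 1.0      # absorption coefficients [1/mm], illustrative
d_epi = 0.1                     # epidermis thickness [mm]

def absorption_depth():
    # Sample the optical depth at absorption, then map it to a
    # geometric depth through the two homogeneous layers.
    tau = random.expovariate(1.0)          # optical depth ~ Exp(1)
    if tau <= mu_epi * d_epi:
        return tau / mu_epi                # absorbed in the epidermis
    return d_epi + (tau - mu_epi * d_epi) / mu_derm

depths = [absorption_depth() for _ in range(100_000)]
frac_epi = sum(d <= d_epi for d in depths) / len(depths)
print(f"fraction absorbed in epidermis: {frac_epi:.3f}")
```

A full skin model additionally samples scattering directions and tissue heterogeneity, which is what makes the Monte Carlo approach in the record worthwhile.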

  1. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. Written for a broad audience crossing many disciplines, the book assumes an understanding of graduate-level multivariate statistics, including an introduction to SEM.

  2. Multi-scale model of the ionosphere from the combination of modern space-geodetic satellite techniques - project status and first results

    Science.gov (United States)

    Schmidt, M.; Hugentobler, U.; Jakowski, N.; Dettmering, D.; Liang, W.; Limberger, M.; Wilken, V.; Gerzen, T.; Hoque, M.; Berdermann, J.

    2012-04-01

Near real-time, high-resolution and high-precision ionosphere models are needed for a large number of applications, e.g. in navigation, positioning, telecommunications or astronautics. Today these ionosphere models are mostly empirical, i.e., based purely on mathematical approaches. In the DFG project 'Multi-scale model of the ionosphere from the combination of modern space-geodetic satellite techniques (MuSIK)' the complex phenomena within the ionosphere are described vertically by combining the Chapman electron density profile with a plasmasphere layer. To capture the horizontal and temporal behaviour, the fundamental target parameters of this physics-motivated approach are modelled by series expansions in terms of tensor products of localizing B-spline functions depending on longitude, latitude and time. For testing the procedure, the model will be applied to an appropriate region in South America, which covers relevant ionospheric processes and phenomena such as the Equatorial Anomaly. The project connects the expertise of the three project partners, namely Deutsches Geodätisches Forschungsinstitut (DGFI) Munich, the Institute of Astronomical and Physical Geodesy (IAPG) of the Technical University Munich (TUM) and the German Aerospace Center (DLR), Neustrelitz. In this presentation we focus on the current status of the project. In the first year of the project we studied the behaviour of the ionosphere in the test region, set up appropriate test periods covering high and low solar activity as well as winter and summer, and started the data collection, analysis, pre-processing and archiving. We have partly developed the mathematical-physical modelling approach and performed first computations based on simulated input data. Here we present information on the data coverage for the area and the time periods of our investigations, and we outline challenges of the multi-dimensional mathematical-physical modelling approach. We show first results and discuss problems.

  3. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J. (comps.)

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques.

  4. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    International Nuclear Information System (INIS)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J.

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques

  5. Liver metastases: interventional therapeutic techniques and results, state of the art

    Energy Technology Data Exchange (ETDEWEB)

    Vogl, T.J.; Mueller, P.K.; Mack, M.G.; Straub, R.; Engelmann, K. [Dept. of Radiology, Univ. of Frankfurt (Germany); Neuhaus, P. [Dept. of Surgery, Humboldt University of Berlin (Germany)

    1999-05-01

    The liver is the most common site of metastatic tumour deposits. Hepatic metastases are the major cause of morbidity and mortality in patients with gastrointestinal carcinomas and other malignant tumours. The rationale and results for interventional therapeutic techniques in the treatment of liver metastases are presented. For the treatment of patients with irresectable liver metastases, alternative local ablative therapeutic modalities have been developed. Technique and results of local interventional therapies are presented such as microwave-, radiofrequency (RF)- and ultrasound ablation, and laser-induced interstitial therapy (LITT), cryotherapy and local drug administration such as alcohol injection, endotumoral chemotherapy and regional chemoembolisation. In addition to cryotherapy, all ablative techniques can be performed percutaneously with low morbidity and mortality. Cryotherapy is an effective and precise technique for inducing tumour necrosis, but it is currently performed via laparotomy. Percutaneous local alcohol injection results in an inhomogeneous distribution in liver metastases with unreliable control rates. Local chemotherapeutic drug instillation and regional chemoembolisation produces relevant but non-reproducible lesions. Laser-induced interstitial thermotherapy (LITT) performed under MRI guidance results in precise and reproducible areas of induced necrosis with a local control of 94 %, and with an improved survival rate. Interventional therapeutic techniques of liver metastases do result in a remarkable local tumour control rate with improved survival results. (orig.) With 5 figs., 1 tab., 43 refs.

  6. Identification techniques for phenomenological models of hysteresis based on the conjugate gradient method

    International Nuclear Information System (INIS)

    Andrei, Petru; Oniciuc, Liviu; Stancu, Alexandru; Stoleriu, Laurentiu

    2007-01-01

    An identification technique for the parameters of phenomenological models of hysteresis is presented. The basic idea of our technique is to set up a system of equations for the parameters of the model as a function of known quantities on the major or minor hysteresis loops (e.g. coercive force, susceptibilities at various points, remanence), or other magnetization curves. This system of equations can be either over or underspecified and is solved by using the conjugate gradient method. Numerical results related to the identification of parameters in the Energetic, Jiles-Atherton, and Preisach models are presented
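The core of the identification step, solving the assembled system of equations for the model parameters by the conjugate gradient method, can be sketched on a generic least-squares problem; the sensitivity matrix, measurements and parameter values below are illustrative, not a real hysteresis model.

```python
import numpy as np

# Measured points on a magnetization curve give a system J p = m for the
# model parameters p; we solve it in the least-squares sense by applying
# conjugate gradients to the SPD normal equations (J^T J) p = J^T m.
rng = np.random.default_rng(2)
J = rng.normal(size=(20, 3))          # sensitivities of 20 measurements to 3 parameters
p_true = np.array([1.5, -0.7, 0.3])
m = J @ p_true                        # noise-free synthetic "measurements"

A = J.T @ J
b = J.T @ m

# Conjugate gradient iteration; exact in at most n steps for an n x n SPD system.
p = np.zeros(3)
r = b - A @ p
d = r.copy()
for _ in range(3):
    alpha = (r @ r) / (d @ (A @ d))
    p = p + alpha * d
    r_new = r - alpha * (A @ d)
    d = r_new + ((r_new @ r_new) / (r @ r)) * d
    r = r_new
print(p)
```

With an over- or under-specified system, as in the record, one would add regularization or iterate CG on the least-squares functional directly.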

  7. Fuzzy modeling and control of rotary inverted pendulum system using LQR technique

    International Nuclear Information System (INIS)

    Fairus, M A; Mohamed, Z; Ahmad, M N

    2013-01-01

Rotary inverted pendulum (RIP) system is a nonlinear, non-minimum phase, unstable and underactuated system. Controlling such a system can be a challenge and is considered a benchmark problem in control theory. Prior to designing a controller, equations that represent the behaviour of the RIP system must be developed as accurately as possible without compromising the complexity of the equations. Through the Takagi-Sugeno (T-S) fuzzy modeling technique, the nonlinear system model is transformed into several local linear time-invariant models, which are then blended together to reproduce, or approximate, the nonlinear system model within a local region. A parallel distributed compensation (PDC) based fuzzy controller using the linear quadratic regulator (LQR) technique is designed to control the RIP system. The results show that the designed controller is able to balance the RIP system.
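The LQR design step used for each local model can be sketched with SciPy on a double integrator, as a stand-in for one linearized local model; this is not the actual RIP dynamics, and the weights are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Double integrator as a stand-in local linear model: x = [pos, vel].
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
Q = np.diag([10.0, 1.0])    # state weighting
R = np.array([[1.0]])       # control weighting

# Solve the continuous-time algebraic Riccati equation, then form the gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)        # optimal state feedback u = -K x

# The closed-loop poles must lie in the open left half-plane.
poles = np.linalg.eigvals(A - B @ K)
print(K, poles.real)
```

In the PDC scheme of the record, one such gain is computed per local T-S model and the control actions are blended with the same fuzzy membership functions.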

  8. Applications of soft computing in time series forecasting simulation and modeling techniques

    CERN Document Server

    Singh, Pritpal

    2016-01-01

    This book reports on an in-depth study of fuzzy time series (FTS) modeling. It reviews and summarizes previous research work in FTS modeling and also provides a brief introduction to other soft-computing techniques, such as artificial neural networks (ANNs), rough sets (RS) and evolutionary computing (EC), focusing on how these techniques can be integrated into different phases of the FTS modeling approach. In particular, the book describes novel methods resulting from the hybridization of FTS modeling approaches with neural networks and particle swarm optimization. It also demonstrates how a new ANN-based model can be successfully applied in the context of predicting Indian summer monsoon rainfall. Thanks to its easy-to-read style and the clear explanations of the models, the book can be used as a concise yet comprehensive reference guide to fuzzy time series modeling, and will be valuable not only for graduate students, but also for researchers and professionals working for academic, business and governmen...

  9. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  10. Analysis of Multipath Mitigation Techniques with Land Mobile Satellite Channel Model

    Directory of Open Access Journals (Sweden)

M. Z. H. Bhuiyan; J. Zhang

    2012-12-01

is of utmost importance to analyze the performance of different multipath mitigation techniques in realistic measurement-based channel models, for example, the Land Mobile Satellite (LMS) channel model [1]-[4] developed at the German Aerospace Center (DLR). The DLR LMS channel model is widely used for simulating the positioning accuracy of mobile satellite navigation receivers in urban outdoor scenarios. The main objective of this paper is to present a comprehensive analysis of some of the most promising techniques with the DLR LMS channel model in varying multipath scenarios. Four multipath mitigation techniques are chosen for performance comparison, namely, the narrow Early-Minus-Late (nEML) discriminator, the High Resolution Correlator, the C/N0-based two-stage delay tracking technique, and the Reduced Search Space Maximum Likelihood (RSSML) delay estimator. The first two are the most popular and traditional techniques used in today's GNSS receivers, whereas the latter two are more advanced techniques recently proposed by the authors. In addition, the implementation of the RSSML is optimized here for a narrow-bandwidth receiver configuration, in the sense that it now requires significantly fewer correlators and less memory than its original implementation. The simulation results show that the reduced-complexity RSSML achieves the best multipath mitigation performance at moderate-to-good carrier-to-noise density ratios with the DLR LMS channel model in varying multipath scenarios.
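The nEML discriminator, the first of the four techniques compared, can be sketched on an ideal triangular code correlation function with no noise or multipath: the early and late correlators balance at zero delay error, so the discriminator output is zero there and grows with the tracking error.

```python
import numpy as np

# Ideal open-loop code correlation: a triangle of width +/- 1 chip.
def corr(tau):
    return np.maximum(0.0, 1.0 - np.abs(tau))

d = 0.1                                  # narrow early-late spacing [chips]

def discriminator(tau_err):
    # Early-minus-late power-free (coherent) discriminator.
    early = corr(tau_err - d / 2)
    late = corr(tau_err + d / 2)
    return early - late

print(discriminator(0.0), discriminator(0.05))
```

A multipath echo adds a second, delayed triangle to `corr`, which shifts the discriminator's zero crossing; that bias is exactly what the more advanced techniques in the paper are designed to reduce.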

  11. Reduction of thermal models of buildings: improvement of techniques using meteorological influence models

    Energy Technology Data Exchange (ETDEWEB)

    Dautin, S.

    1997-04-01

    This work concerns the modeling of thermal phenomena inside buildings for the evaluation of the energy operating costs of thermal installations and for the modeling of thermal and airflow transient phenomena. This thesis comprises 7 chapters dealing with: (1) the thermal phenomena inside buildings and the CLIM2000 calculation code, (2) the ETNA and GENEC experimental cells and their modeling, (3) the model reduction techniques tested (Marshall's truncation, the Michailesco aggregation method and Moore's truncation) with their algorithms and their encoding in the MATRED software, (4) the application of the model reduction methods to the GENEC and ETNA cells and to a medium-size dual-zone building, (5) the modeling of the meteorological influences classically applied to buildings (external temperature and solar flux), (6) the analytical expression of these modeled meteorological influences. The last chapter presents the results of these improved methods on the GENEC and ETNA cells and on a lower-inertia building. These new methods are compared to the classical ones. (J.S.) 69 refs.

  12. MODELLING THE DELAMINATION FAILURE ALONG THE CFRP-CFST BEAM INTERACTION SURFACE USING DIFFERENT FINITE ELEMENT TECHNIQUES

    Directory of Open Access Journals (Sweden)

    AHMED W. AL-ZAND

    2017-01-01

    Nonlinear finite element (FE) models are prepared to investigate the behaviour of concrete-filled steel tube (CFST) beams strengthened by carbon fibre reinforced polymer (CFRP) sheets. The beams are strengthened from the bottom side only, with varied sheet lengths (full and partial beam lengths), and then subjected to ultimate flexural loads. Three surface interaction techniques are used to implement the bonding behaviour between the steel tube and the CFRP sheet, namely, the full tie interaction (TI), cohesive element (CE) and cohesive behaviour (CB) techniques in the ABAQUS software. A comparison between the FE analysis and an existing experimental study confirms that FE models with the TI technique are applicable to beams strengthened by CFRP sheets over the full wrapping length, but that this technique cannot accurately reproduce the CFRP delamination failure which occurred for beams with a partial wrapping length. Meanwhile, FE models with the CE and CB techniques can represent both CFRP failure modes (rupture and delamination) for full and partial wrapping lengths, respectively. The ultimate load ratios achieved by the FE models using the TI, CE and CB techniques are about 1.122, 1.047 and 1.045, respectively, compared with the results of the existing experimental tests.

  13. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory, the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods, focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order models...
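As a minimal illustration of the static (Guyan) condensation mentioned first, the sketch below condenses the middle degree of freedom out of a hypothetical three-mass spring chain and compares eigenvalues. Guyan reduction is exact only at zero frequency, so the reduced model slightly overestimates the lowest eigenvalue, which is the behaviour the more advanced (dynamic, SEREP, iterative) variants aim to improve on.

```python
import numpy as np

def guyan(K, M, master):
    # Static (Guyan) condensation: retain the "master" DOFs and let the
    # remaining (slave) DOFs follow statically, neglecting their inertia.
    n = K.shape[0]
    slave = [i for i in range(n) if i not in master]
    Kss = K[np.ix_(slave, slave)]
    Ksm = K[np.ix_(slave, master)]
    # Transformation u_full = T @ u_master.
    T = np.zeros((n, len(master)))
    T[master, range(len(master))] = 1.0
    T[np.ix_(slave, range(len(master)))] = -np.linalg.solve(Kss, Ksm)
    return T.T @ K @ T, T.T @ M @ T

# Fixed-fixed chain of three unit masses and unit springs.
K = np.array([[2., -1., 0.], [-1., 2., -1.], [0., -1., 2.]])
M = np.eye(3)
Kr, Mr = guyan(K, M, master=[0, 2])

full = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
red = np.sort(np.linalg.eigvals(np.linalg.solve(Mr, Kr)).real)
# Lowest eigenvalue: full model 2 - sqrt(2) ~ 0.586, reduced ~ 0.667.
```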

  14. Development Model of Basic Technique Skills Training Shot-Put Obrien Style Based Biomechanics Review

    Directory of Open Access Journals (Sweden)

    Danang Rohmat Hidayanto

    2018-03-01

    The background of this research is the unavailability of a learning model for the basic techniques of the O'Brien-style shot put, integrated in a skills programme based on biomechanical analysis, that can serve as a reference for building students' basic O'Brien-style technique. The purpose of this study is to develop a training model for the basic techniques of the O'Brien-style shot put based on biomechanical analysis for beginners, covering the basic preparatory stance, the glide, the final stage, the put itself, the follow-through, and overall putting performance, all arranged in a medium that is easily accessible at any time, by anyone and anywhere, especially in SMK Negeri 1 Kalijambe Sragen. The research method used is the "Research and Development" approach. Preliminary studies show that 43.0% of respondents considered the O'Brien style very important to develop with a biomechanics-based skills training model, and 40.0% of respondents stated that it is important to develop with biomechanics-based learning media. Therefore, it was deemed necessary to develop learning media for O'Brien-style training skills based on biomechanical analysis. Development of the media started from the design of the storyboard and the script to be used as media. The design of this model is called the draft model. The draft was reviewed by a multimedia expert and an O'Brien-style expert to establish the product's validity. A total of 78.24% of the experts declared the product viable, with some input. In a small group with n = 6, a value of 72.2% was obtained, i.e. valid enough to be tested in large groups. In the large-group test with n = 12, a value of 70.83% was obtained, i.e. feasible enough to be tested in the field. In the field test, an experimental group was prepared with treatment according to the media and a control group with free treatment. From the result of the significance test it can be

  15. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
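The idea can be sketched for ordinary linear regression as follows. This is a deliberately simplified multiplier version on synthetic data (the paper's full construction also accounts for the variability of the estimated coefficients): the observed cumulative-residual process is compared with multiplier realizations that approximate the null zero-mean Gaussian process.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: the fitted model is linear but the truth is quadratic,
# so the cumulative-residual process should drift away from zero.
n = 200
x = rng.uniform(-2, 2, n)
y = 1.0 + 0.5 * x + 0.8 * x**2 + rng.normal(0, 0.3, n)

# Fit the (misspecified) linear model and form raw residuals.
X = np.column_stack([np.ones(n), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
r = y - X @ beta

# Observed cumulative-sum process of residuals ordered by the covariate.
order = np.argsort(x)
stat = np.max(np.abs(np.cumsum(r[order]) / np.sqrt(n)))

# Multiplier realizations: perturb residuals with standard normals to
# mimic realizations of the null zero-mean Gaussian process.
sims = [np.max(np.abs(np.cumsum(r[order] * rng.normal(size=n)) / np.sqrt(n)))
        for _ in range(500)]
p_value = np.mean(np.array(sims) >= stat)
```

A small p-value (here the quadratic misspecification is strong, so it is essentially zero) says the drift in the residual plot is model misspecification rather than natural variation.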

  16. Results of hip arthroplasty using Paavilainen technique in patients with congenitally dislocated hip

    Directory of Open Access Journals (Sweden)

    R. M. Tikhilov

    2014-01-01

    The purpose of the study was to analyze the medium- and long-term results of hip arthroplasty using the Paavilainen technique in patients with congenitally dislocated hips. Methods: From 2001 to 2012, 180 operations using the Paavilainen technique were carried out in 140 patients with high dislocation of the hip (Crowe IV). All patients were clinically evaluated using the Harris Hip Score (HHS), VAS and radiography. Statistical analysis was performed using Pearson correlation coefficients, multiple regression analysis and classification-tree analysis. Results: The average Harris score improved significantly, from 41.6 (40.3-43.5) preoperatively to 79.3 (77.9-82.7) at final follow-up. Early complications occurred in 9% of cases (most frequently fractures of the proximal femur) and late complications in 16.7% (pseudoarthrosis of the greater trochanter, 13.9%; dislocations, 1.1%; aseptic loosening of the components, 1.7%); reoperation was performed in 8.3% of cases. Factors such as age and limb length had a statistically significant effect on functional outcomes. The established predictive model makes it possible to obtain the best achievable functional outcome in such patients with severe dysplasia. Conclusions: Total hip arthroplasty using the Paavilainen technique is an effective method of surgical treatment in patients with congenitally dislocated hips, but it is a technically difficult operation with a higher incidence of complications than standard primary total hip replacement.

  17. A Strategy Modelling Technique for Financial Services

    OpenAIRE

    Heinrich, Bernd; Winter, Robert

    2004-01-01

    Strategy planning processes often suffer from a lack of conceptual models that can be used to represent business strategies in a structured and standardized form. If natural language is replaced by an at least semi-formal model, the completeness, consistency, and clarity of strategy descriptions can be drastically improved. A strategy modelling technique is proposed that is based on an analysis of modelling requirements, a discussion of related work and a critical analysis of generic approach...

  18. Use of advanced modeling techniques to optimize thermal packaging designs.

    Science.gov (United States)

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed
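For orientation, the crudest conduction-style estimate of a passive shipper's payload temperature is a lumped-capacitance response; the sketch below uses invented thermal resistance and capacitance values (the study's models are full 3-D conduction and coupled conduction-convection simulations, far beyond this).

```python
import math

def product_temp(t_hours, t0=5.0, t_amb=35.0, r_th=3.0, c_th=2.0e5):
    # Lumped-capacitance model: the payload, starting at t0 (deg C),
    # relaxes toward ambient with time constant R*C (thermal resistance
    # in K/W, thermal capacitance in J/K -- hypothetical values).
    tau = r_th * c_th
    return t_amb + (t0 - t_amb) * math.exp(-t_hours * 3600.0 / tau)

# Payload temperature at the start, after one day, and at 96 h.
profile = [product_temp(h) for h in (0.0, 24.0, 96.0)]
```

Such a single-node model captures the monotone approach to ambient but none of the internal gradients or convective effects the case study shows to matter.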

  19. Numerical and modeling techniques used in the EPIC code

    International Nuclear Information System (INIS)

    Pizzica, P.A.; Abramson, P.B.

    1977-01-01

    EPIC models fuel and coolant motion which results from internal fuel pin pressure (from fission gas or fuel vapor) and/or from the generation of sodium vapor pressures in the coolant channel subsequent to pin failure in an LMFBR. The modeling includes the ejection of molten fuel from the pin into a coolant channel with any amount of voiding through a clad rip which may be of any length or which may expand with time. One-dimensional Eulerian hydrodynamics is used to model both the motion of fuel and fission gas inside a molten fuel cavity and the mixture of two-phase sodium and fission gas in the channel. Motion of molten fuel particles in the coolant channel is tracked with a particle-in-cell technique
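A one-dimensional Eulerian update of the simplest donor-cell (first-order upwind) kind can be sketched as below. This is illustrative only: EPIC's actual hydrodynamics, with two-phase sodium, fission gas and coupled particle tracking, is far more involved.

```python
import numpy as np

def upwind_step(rho, u, dx, dt):
    # One donor-cell (first-order upwind) step for 1-D Eulerian advection
    # with constant velocity u > 0 and periodic boundaries: mass leaves
    # each cell through its right face and enters from the left neighbour.
    flux = rho * u                      # mass flux through right faces
    return rho - dt / dx * (flux - np.roll(flux, 1))

rho = np.zeros(50)
rho[10:15] = 1.0                        # a slab of material
for _ in range(100):
    rho = upwind_step(rho, u=1.0, dx=1.0, dt=0.5)  # CFL number 0.5
# Total mass is conserved exactly; the slab advects downstream while
# spreading through numerical diffusion, as first-order schemes do.
```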

  20. Modeling of high-pressure generation using the laser colliding foil technique

    Energy Technology Data Exchange (ETDEWEB)

    Fabbro, R.; Faral, B.; Virmont, J.; Cottet, F.; Romain, J.P.

    1989-03-01

    An analytical model describing the collision of two foils is presented and applied to the collision of laser-accelerated foils. Numerical simulations have been made to verify this model and to compare its results in the case of laser-accelerated foils. Scaling laws relating the different parameters (shock pressure, laser intensity, target material, etc.) have been established. The application of this technique to high-pressure equation of state experiments is then discussed.

  1. Modeling of high-pressure generation using the laser colliding foil technique

    International Nuclear Information System (INIS)

    Fabbro, R.; Faral, B.; Virmont, J.; Cottet, F.; Romain, J.P.

    1989-01-01

    An analytical model describing the collision of two foils is presented and applied to the collision of laser-accelerated foils. Numerical simulations have been made to verify this model and to compare its results in the case of laser-accelerated foils. Scaling laws relating the different parameters (shock pressure, laser intensity, target material, etc.) have been established. The application of this technique to high-pressure equation of state experiments is then discussed.

  2. TAPP - Stuttgart technique and result of a large single center series

    Directory of Open Access Journals (Sweden)

    Bittner R

    2006-01-01

    Laparoscopic hernioplasty is considered a difficult operation. Operative technique determines the frequency of complications, the time of recovery and the rate of recurrence. A proper technique is absolutely necessary to achieve results that are superior to open hernia surgery. Technique: The key points in our technique are: (1) use of non-disposable instruments; (2) use of blunt trocars with expanding, non-incisive cone-shaped tips; (3) a spacious and curved opening of the peritoneum, high above all possible hernia openings; (4) meticulous dissection of the entire pelvic floor; (5) complete reduction of the hernial sac; (6) wide parietalization of the peritoneal sac, at least down to the middle of the psoas muscle; (7) implantation of a large mesh, at least 10 cm x 15 cm; (8) fixation of the mesh by clips to Cooper's ligament, to the rectus muscle and lateral to the epigastric vessels, high above the iliopubic tract; (9) the use of glue, which also allows fixation in the latero-caudal region; and (10) closure of the peritoneum by running suture. Results: With this technique in 12,678 hernia repairs, the following results were achieved: operating time, 40 min; morbidity, 2.9%; recurrence rate, 0.7%; disability to work, 14 days. Similar results were achieved in all types of hernias (recurrence after previous open surgery, recurrence after previous preperitoneal operation, scrotal hernia, and hernia in patients after transabdominal prostate resection). Summary: Laparoscopic hernia repair can be performed successfully in clinical practice, even by surgeons in training. The precondition for success is a strictly standardized operative technique and a well-structured educational program.

  3. Recent applications of nuclear medicine techniques and results in Vietnam

    International Nuclear Information System (INIS)

    Phan Sy An

    2008-01-01

    The author presented recent applications of nuclear medicine techniques and results in Vietnam, concentrating on valuable and helpful diagnostic studies such as functional tests and myocardial perfusion, bone, thyroid, lung, kidney and gastrointestinal tract scintigraphy. The results of RIA and IRMA assays concerning thyroid diseases, cancer, microalbuminuria, and TSH in blood spots on paper for the screening of congenital hypothyroidism in newborn babies were also given. The report also mentioned the results of liver cancer treatment and palliative treatment of bone metastases in Vietnam. A new technique using a gamma probe in surgery for breast cancer was presented. The author introduced some modern teleradiotherapy modalities, such as the CyberKnife, Gamma Knife, rotating gamma system and linac, recently installed in Vietnam. (author)

  4. An Animal Model of Abdominal Aortic Aneurysm Created with Peritoneal Patch: Technique and Initial Results

    International Nuclear Information System (INIS)

    Maynar, Manuel; Qian Zhong; Hernandez, Javier; Sun Fei; Miguel, Carmen de; Crisostomo, Veronica; Uson, Jesus; Pineda, Luis-Fernando; Espinoza, Carmen G.; Castaneda, Wilfrido R.

    2003-01-01

    The purpose of this study was to develop an abdominal aortic aneurysm model that more closely resembles the morphology of human aneurysms, with potential for further growth of the sac. An infrarenal abdominal aortic aneurysm (AAA) model was created with a double-layered peritoneal patch in 27 domestic swine. The patch, measuring on average 6 to 12 cm in length and 2 to 3 cm in width, was sutured to the edge of an aortotomy. Pre- and postsurgical digital subtraction aortograms (DSA) were obtained to document the appearance and dimensions of the aneurysm. All animals were followed with DSA for up to 5 months. Laparoscopic examination, enhanced by the use of laparoscopic ultrasound, was also carried out in 2 animals to assess the aneurysm at 30 and 60 days following surgery. Histological examination was performed on 4 animals. All the animals that underwent surgical creation of the AAA survived the procedure. Postsurgical DSA demonstrated the presence of the AAA, defined as more than a 50% increase in diameter, in all animals. The mean aneurysmal diameter increased from the baseline of 10.27 ± 1.24 mm to 16.69 ± 2.29 mm immediately after surgery, to 27.6 ± 6.59 mm at 14 days and 32.45 ± 8.76 mm at 30 days (p < 0.01), and subsequently decreased to 25.98 ± 3.75 mm at 60 days. A total of 15 animals died of aneurysmal rupture, which occurred more frequently in the long aneurysms (≥6 cm in length) than in the short aneurysms (<6 cm in length) during the first 2 weeks after surgery (p < 0.05). No rupture occurred beyond 16 days after surgery. Four animals survived and underwent 60-day angiographic follow-up. Laparoscopic follow-up showed strong pulses, a reddish external appearance and undetectable suture lines on the aneurysmal wall. On pathology, the patches were well incorporated into the aortic wall, the luminal wall appeared almost completely endothelialized, and cellular and matrix proliferation were noted in the aneurysmal wall. A reproducible technique for the

  5. A fully blanketed early B star LTE model atmosphere using an opacity sampling technique

    International Nuclear Information System (INIS)

    Phillips, A.P.; Wright, S.L.

    1980-01-01

    A fully blanketed LTE model of a stellar atmosphere with T_e = 21914 K (θ_e = 0.23), log g = 4 is presented. The model includes an explicit representation of the opacity due to the strongest lines, and uses a statistical opacity sampling technique to represent the weaker line opacity. The sampling technique is subjected to several tests and the model is compared with an atmosphere calculated using the line-distribution function method. The limitations of the distribution function method and of the particular opacity sampling method used here are discussed in the light of the results obtained. (author)
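The essence of opacity sampling can be caricatured in a few lines: rather than integrating the line opacity over a fine frequency grid (or pre-sorting it into distribution functions), one evaluates it at a random sample of frequencies and uses the sample statistics. The opacity function below is entirely invented (unit continuum plus a dense Lorentzian line forest), not a stellar line list.

```python
import numpy as np

rng = np.random.default_rng(7)

def opacity(nu):
    # Hypothetical band opacity: unit continuum plus a dense forest of
    # narrow Lorentzian lines (strengths and widths are invented).
    centers = np.linspace(0.0, 1.0, 400)
    return 1.0 + np.sum(0.05 / ((nu[..., None] - centers) ** 2 + 1e-4),
                        axis=-1)

# Reference: mean opacity over the band from a fine frequency grid.
fine = np.linspace(0.0, 1.0, 5001)
reference = opacity(fine).mean()

# Opacity sampling: the same mean estimated from 500 random frequencies.
sampled = opacity(rng.uniform(0.0, 1.0, 500)).mean()
```

With a dense, blended line forest the sampled estimate lands within a few percent of the grid value at a tenth of the evaluations, which is the trade-off the sampling technique exploits.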

  6. Modelling, analysis and validation of microwave techniques for the characterisation of metallic nanoparticles

    Science.gov (United States)

    Sulaimalebbe, Aslam

    High Frequency Structure Simulator (HFSS), followed by the electrical characterisation of synthesised Pt NP films using the novel miniature fabricated OCP technique. The results obtained from this technique provided the inspiration to synthesise and evaluate the microwave properties of Au NPs, and those findings in turn provided the motivation to characterise both the Pt and Au NP films using the DR technique. Unlike the OCP technique, the DR method is highly sensitive, but the achievable measurement accuracy is limited since this technique does not have the broadband frequency capability of the OCP method. The results obtained from the DR technique show good agreement with the theoretical prediction. In the last phase of this research, a further validation of the aperture admittance models for different types of OCP (i.e., RG-405 and RG-402 cables and an SMA connector) was carried out with 3D full-wave models developed in the HFSS software, followed by the development of universal models for the aforementioned OCPs based on the same 3D full-wave models.

  7. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    Campbell and Shiller (1987) proposed a graphical technique for the present value model, which consists of plotting estimates of the spread and the theoretical spread as calculated from the cointegrated vector autoregressive model without imposing the restrictions implied by the present value model. In addition to giving a visual impression of the fit of the model, the purpose is to see whether the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectations models and give a general definition of spread...

  8. 3D Modeling Techniques for Print and Digital Media

    Science.gov (United States)

    Stephens, Megan Ashley

    In developing my thesis, I looked to gain skills using ZBrush to create 3D models, 3D scanning, and 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models from other vertebrates. I successfully learned ZBrush and 3D scanning, and successfully printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for the project, but may be of use for later projects. I was able to 3D print using two different techniques as well.

  9. The Effect of the Group Investigation Learning Model with the Brainstorming Technique on Students' Learning Outcomes

    Directory of Open Access Journals (Sweden)

    Astiti, Kade Ayu

    2018-01-01

    This study aims to determine the effect of the group investigation (GI) learning model with the brainstorming technique on students' physics learning outcomes (PLO), compared to the jigsaw learning model with the brainstorming technique. The learning outcomes in this research are results in the cognitive domain. The method used is an experiment with a Randomised Posttest-Only Control Group Design. The population is all students of class XI IPA, SMA Negeri 9 Kupang, academic year 2015/2016. The selected sample comprises 40 students of class XI IPA 1 as the experimental class and 38 students of class XI IPA 2 as the control class, chosen using a simple random sampling technique. The instrument used is a 13-item essay test. The first hypothesis was tested using a two-tailed t-test, from which H0 was rejected, meaning there is a difference in students' physics learning outcomes. The second hypothesis was tested using a one-tailed t-test; H0 was again rejected, meaning the students' PLO in the experimental class was higher than in the control class. Based on these results, the researchers recommend the use of the GI learning model with the brainstorming technique to improve PLO, especially in the cognitive domain.
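The one-tailed comparison in such a posttest-only design boils down to a pooled-variance two-sample t statistic; the sketch below uses invented scores, not the study's data.

```python
import math

def pooled_t(a, b):
    # Pooled-variance two-sample t statistic for a posttest-only design,
    # H1 (one-tailed): the mean of group a exceeds the mean of group b.
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2  # compare t against the one-tailed critical value

# Hypothetical posttest scores: experimental (GI + brainstorming) vs control.
t, df = pooled_t([78, 82, 75, 88, 80], [70, 74, 69, 72, 75])
```

Here t is about 3.49 on 8 degrees of freedom, exceeding the one-tailed 5% critical value of 1.86, so H0 would be rejected in favour of the experimental group.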

  10. Trimming a hazard logic tree with a new model-order-reduction technique

    Science.gov (United States)

    Porter, Keith; Field, Edward; Milner, Kevin R

    2017-01-01

    The size of the logic tree within the Uniform California Earthquake Rupture Forecast Version 3, Time-Dependent (UCERF3-TD) model can challenge risk analyses of large portfolios. An insurer or catastrophe risk modeler concerned with losses to a California portfolio might have to evaluate a portfolio 57,600 times to estimate risk in light of the hazard possibility space. Which branches of the logic tree matter most, and which can one ignore? We employed two model-order-reduction techniques to simplify the model. We sought a subset of parameters that must vary, and the specific fixed values for the remaining parameters, to produce approximately the same loss distribution as the original model. The techniques are (1) a tornado-diagram approach we employed previously for UCERF2, and (2) an apparently novel probabilistic sensitivity approach that seems better suited to functions of nominal random variables. The new approach produces a reduced-order model with only 60 of the original 57,600 leaves. One can use the results to reduce computational effort in loss analyses by orders of magnitude.
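The tornado-diagram step can be sketched generically: hold every branch parameter at a baseline, swing one parameter at a time between its extremes, and rank parameters by the induced change in the output. The toy loss function and parameter names below are invented for illustration and are not UCERF3's.

```python
def tornado(model, baseline, ranges):
    # Swing of the model output as each parameter alone moves low -> high,
    # all others fixed at baseline; sorted to form a tornado diagram.
    swings = {}
    for name, (lo, hi) in ranges.items():
        out_lo = model(**{**baseline, name: lo})
        out_hi = model(**{**baseline, name: hi})
        swings[name] = abs(out_hi - out_lo)
    return sorted(swings.items(), key=lambda kv: -kv[1])

# Hypothetical loss model: ground-motion scaling dominates the other terms.
loss = lambda gm, rate, frag: gm ** 2 * rate * frag
base = dict(gm=1.0, rate=0.02, frag=0.5)
ranges = dict(gm=(0.5, 2.0), rate=(0.015, 0.025), frag=(0.45, 0.55))
ranking = tornado(loss, base, ranges)
```

Parameters at the bottom of the ranking are candidates for being fixed at their baseline values, which is how such an analysis trims the tree.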

  11. On the Reliability of Nonlinear Modeling using Enhanced Genetic Programming Techniques

    Science.gov (United States)

    Winkler, S. M.; Affenzeller, M.; Wagner, S.

    The use of genetic programming (GP) in nonlinear system identification enables the automated search for mathematical models that are evolved by an evolutionary process using the principles of selection, crossover and mutation. Due to the stochastic element that is intrinsic to any evolutionary process, GP cannot guarantee the generation of similar or even equal models in each GP process execution; still, if there is a physical model underlying the data that are analyzed, then GP is expected to find these structures and produce somewhat similar results. In this paper we define a function for measuring the syntactic similarity of mathematical models represented as structure trees; using this similarity function we compare the results produced by GP techniques for a data set representing measurement data of a BMW Diesel engine.
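A toy structural-similarity measure over expression trees might look like the following; the authors' actual similarity function differs, and this sketch simply scores the overlap of shared subtrees between two models.

```python
from collections import Counter

def subtrees(tree):
    # Expression trees as nested tuples, e.g. ('+', ('x',), ('*', ('x',), ('x',)));
    # yield the tree itself and, recursively, every subtree.
    yield tree
    for child in tree[1:]:
        yield from subtrees(child)

def similarity(a, b):
    # Dice-style overlap of the multisets of subtrees of two models:
    # 1.0 for identical trees, smaller as their structures diverge.
    ca, cb = Counter(subtrees(a)), Counter(subtrees(b))
    shared = sum((ca & cb).values())
    return 2 * shared / (sum(ca.values()) + sum(cb.values()))

t1 = ('+', ('x',), ('*', ('x',), ('x',)))   # x + x*x
t2 = ('*', ('x',), ('x',))                  # x*x
print(similarity(t1, t2))  # 0.75
```

Averaging such pairwise scores over the models returned by repeated GP runs gives a rough handle on how reproducible the evolved structures are.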

  12. Saturation of superstorms and finite compressibility of the magnetosphere: Results of the magnetogram inversion technique and global PPMLR-MHD model

    Science.gov (United States)

    Mishin, V. V.; Mishin, V. M.; Karavaev, Yu.; Han, J. P.; Wang, C.

    2016-07-01

    We report on novel features of the saturation process of the polar cap magnetic flux and of the Poynting flux into the magnetosphere from the solar wind during three superstorms. In addition to the well-known effect of the interplanetary electric field (Esw) and of the southward interplanetary magnetic field (IMF Bz), we found that the saturation also depends on the solar wind ram pressure Pd. By means of the magnetogram inversion technique and a global MHD numerical model (Piecewise Parabolic Method with a Lagrangian Remap, PPMLR), we explore the dependence of the magnetopause standoff distance on the ram pressure and the southward IMF. Unlike in earlier studies, in the considered superstorms both Pd and Bz achieve extreme values. As a result, we show that the compression rate of the dayside magnetosphere decreases with increasing Pd and southward Bz, approaching very small values for extreme Pd ≥ 15 nPa and Bz ≤ -40 nT. This dependence suggests that the finite compressibility of the magnetosphere controls the saturation of superstorms.

  13. A probabilistic evaluation procedure for process model matching techniques

    NARCIS (Netherlands)

    Kuss, Elena; Leopold, Henrik; van der Aa, Han; Stuckenschmidt, Heiner; Reijers, Hajo A.

    2018-01-01

    Process model matching refers to the automatic identification of corresponding activities between two process models. It represents the basis for many advanced process model analysis techniques such as the identification of similar process parts or process model search. A central problem is how to

  14. Meteorological uncertainty of atmospheric dispersion model results (MUD)

    International Nuclear Information System (INIS)

    Havskov Soerensen, J.; Amstrup, B.; Feddersen, H.

    2013-08-01

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for long-range atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing e.g. the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced from which uncertainties in the various meteorological parameters are estimated, e.g. probabilities for rain. Corresponding ensembles of atmospheric dispersion can now be computed from which uncertainties of predicted radionuclide concentration and deposition patterns can be derived. (Author)
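The ensemble idea can be caricatured with a toy Gaussian-plume dispersion model whose meteorological inputs are perturbed; all constants below are hypothetical, and operational systems instead perturb full NWP initial states and run the actual dispersion model per ensemble member.

```python
import numpy as np

rng = np.random.default_rng(42)

def plume(y, u, sigma_y=50.0, q=1.0):
    # Crude steady-state Gaussian-plume concentration at crosswind
    # offset y (m) for wind speed u (m/s); constants are invented.
    return q / (2 * np.pi * sigma_y**2 * u) * np.exp(-y**2 / (2 * sigma_y**2))

# Ensemble: perturb wind speed and the effective plume offset, mimicking
# the way NWP ensembles perturb the initial meteorological state.
u_members = rng.normal(5.0, 1.0, 200).clip(0.5)
y_members = rng.normal(0.0, 30.0, 200)
conc = plume(y_members, u_members)

# Instead of a single "most likely" value, report percentiles and an
# exceedance probability at the receptor for decision makers.
p10, p50, p90 = np.percentile(conc, [10, 50, 90])
p_exceed = float(np.mean(conc > 1e-5))
```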

  15. Plasticity models of material variability based on uncertainty quantification techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-11-01

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.

  16. Integrated approach to model decomposed flow hydrograph using artificial neural network and conceptual techniques

    Science.gov (United States)

    Jain, Ashu; Srinivasulu, Sanaga

    2006-02-01

    This paper presents the findings of a study aimed at decomposing a flow hydrograph into different segments based on physical concepts in a catchment, and modelling the different segments using different techniques, viz. conceptual techniques and artificial neural networks (ANNs). An integrated modelling framework is proposed that is capable of modelling infiltration, base flow, evapotranspiration, soil moisture accounting, and certain segments of the decomposed flow hydrograph using conceptual techniques, and the complex, non-linear, and dynamic rainfall-runoff process using an ANN technique. Specifically, five different multi-layer perceptron (MLP) and two self-organizing map (SOM) models have been developed. The rainfall and streamflow data derived from the Kentucky River catchment were employed to test the proposed methodology and develop all the models. The performance of all the models was evaluated using seven different standard statistical measures. The results obtained in this study indicate that (a) the rainfall-runoff relationship in a large catchment consists of at least three or four different mappings corresponding to different dynamics of the underlying physical processes, (b) an integrated approach that models the different segments of the decomposed flow hydrograph using different techniques is better than a single ANN in modelling the complex, dynamic, non-linear, and fragmented rainfall-runoff process, (c) a simple model based on the concept of flow recession is better than an ANN for modelling the falling limb of a flow hydrograph, and (d) decomposing a flow hydrograph into different segments corresponding to different dynamics based on physical concepts is better than the soft decomposition performed using SOM.
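    The flow-recession model mentioned in finding (c) is commonly written as Q(t) = Q0·exp(−kt). A minimal sketch of fitting the recession constant to falling-limb flows, using synthetic data rather than the Kentucky River record:

```python
import numpy as np

# Synthetic falling-limb flows following Q(t) = Q0 * exp(-k t) plus noise
rng = np.random.default_rng(1)
t = np.arange(0, 20.0)
q_true = 100.0 * np.exp(-0.15 * t)
q_obs = q_true * (1 + 0.02 * rng.standard_normal(t.size))

# Fit the recession constant k by linear regression on log-flows:
# log Q = log Q0 - k t
slope, intercept = np.polyfit(t, np.log(q_obs), 1)
k_hat, q0_hat = -slope, np.exp(intercept)

# Reconstructed falling limb from the fitted parameters
q_pred = q0_hat * np.exp(-k_hat * t)
```

    Such a two-parameter recession model is what the study found preferable to an ANN for the falling limb.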

  17. Application of nonlinear reduction techniques in chemical process modeling: a review

    International Nuclear Information System (INIS)

    Muhaimin, Z; Aziz, N.; Abd Shukor, S.R.

    2006-01-01

    Model reduction techniques have been used widely in engineering fields, in electrical and mechanical as well as chemical engineering. The basic idea of a reduction technique is to replace the original system by an approximating system of much smaller state-space dimension. A reduced-order model is more practical for process and industrial applications, in particular for control purposes. This paper provides a review of applications of nonlinear reduction techniques in chemical processes. The advantages and disadvantages of each technique reviewed are also highlighted.

  18. Summary on several key techniques in 3D geological modeling.

    Science.gov (United States)

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
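    As an illustration of the spatial-interpolation step, here is a minimal inverse-distance-weighting (IDW) sketch, one common interpolator for scattered geological data; the borehole points and elevation values are hypothetical:

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance-weighted interpolation of scattered data
    onto query points: weights fall off as 1/d**power."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d**power + eps)
    return (w @ z_known) / w.sum(axis=1)

# Scattered "borehole" elevations of a geological interface (hypothetical)
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = np.array([10.0, 12.0, 11.0, 13.0])

# Interpolate onto query locations: the cell centre and a known point
grid = np.array([[0.5, 0.5], [0.0, 0.0]])
z_grid = idw(pts, z, grid)
```

    At the centre point all four weights are equal, so the result is the mean of the data; at a known point the interpolator reproduces the observed value, which is the behaviour expected when gridding a geological surface.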

  19. Transvaginal Cystocele Repair by Purse-String Technique Reinforced with Three Simple Sutures: Surgical Technique and Results

    Directory of Open Access Journals (Sweden)

    Ho-Sook Song

    2012-09-01

    Purpose: Different techniques for cystocele repair, including conventional anterior colporrhaphy and mesh techniques, are known. Our goal was to evaluate the anatomical success and safety of our method of transvaginal anterior vaginal wall repair by the purse-string technique reinforced with three simple additional sutures in the repair of cystocele over a 4-year follow-up period. Methods: This was a retrospective review of 69 consecutive patients (grades 2 to 4) who underwent the above operations between 2001 and 2011, including their success rates as assessed by use of the Baden-Walker halfway classification system. Results: Of the patients, 62 (98%) were completely cured of cystocele and 1 patient showed grade 2 cystocele recurrence that required no further treatment. Two patients with grade 4 cystocele were completely cured. There was no vaginal erosion related to the cystocele repair. Conclusions: Transvaginal anterior colporrhaphy by a purse-string technique reinforced with simple additive sutures appears to be a simple, safe, and easily performed approach to cystocele repair. There is no need for other material for reinforcement, even in high-grade cystocele, which is an advantage of our technique.

  20. Rabbit tissue model (RTM) harvesting technique.

    Science.gov (United States)

    Medina, Marelyn

    2002-01-01

    A method for creating a tissue model using a female rabbit for laparoscopic simulation exercises is described. The specimen is called a Rabbit Tissue Model (RTM). Dissection techniques are described for transforming the rabbit carcass into a small, compact unit that can be used for multiple training sessions. Preservation is accomplished by using saline and refrigeration. Only the animal trunk is used, with the rest of the animal carcass being discarded. Practice exercises are provided for using the preserved organs. Basic surgical skills, such as dissection, suturing, and knot tying, can be practiced on this model. In addition, the RTM can be used with any pelvic trainer that permits placement of larger practice specimens within its confines.

  1. An experimental technique for the modelling of air flow movements in nuclear plant

    International Nuclear Information System (INIS)

    Ainsworth, R.W.; Hallas, N.J.

    1986-01-01

    This paper describes an experimental technique developed at Harwell to model ventilation flows in plant at 1/5th scale. The technique achieves dynamic similarity not only for forced convection imposed by the plant ventilation system, but also for the interaction between natural convection (from heated objects) and forced convection. The use of a scale model to study flow of fluids is a well established technique, relying upon various criteria, expressed in terms of dimensionless numbers, to achieve dynamic similarity. For forced convective flows, simulation of Reynolds number is sufficient, but to model natural convection and its interaction with forced convection, the Rayleigh, Grashof and Prandtl numbers must be simulated at the same time. This paper describes such a technique, used in experiments on a hypothetical glove box cell to study the interaction between forced and natural convection. The model contained features typically present in a cell, such as a man, motor, stairs, glove box, etc. The aim of the experiment was to study the overall flow patterns, especially around the model man 'working' at the glove box. The cell ventilation was theoretically designed to produce a downward flow over the face of the man working at the glove box. However, the results have shown that the flow velocities produced an upwards flow over the face of the man. The work has indicated the viability of modelling simultaneously the forced and natural convection processes in a cell. It has also demonstrated that simplistic assumptions cannot be made about ventilation flow patterns. (author)

  2. [Intestinal lengthening techniques: an experimental model in dogs].

    Science.gov (United States)

    Garibay González, Francisco; Díaz Martínez, Daniel Alberto; Valencia Flores, Alejandro; González Hernández, Miguel Angel

    2005-01-01

    To compare two intestinal lengthening procedures in an experimental dog model. Intestinal lengthening is one of the methods of gastrointestinal reconstruction used for treatment of short bowel syndrome. The modification to Bianchi's technique is an alternative; the modified technique decreases the number of anastomoses to a single one, thus reducing the risk of leaks and strictures. To our knowledge there is no clinical or experimental report comparing both techniques, so we performed the present study. Twelve creole dogs were operated on with the Bianchi technique for intestinal lengthening (group A) and another 12 creole dogs of the same breed and weight were operated on by the modified technique (group B). Both groups were compared in relation to operating time, technical difficulties, cost, intestinal lengthening, and anastomosis diameter. There was no statistical difference in anastomosis diameter (A = 9.0 mm vs. B = 8.5 mm, p = 0.3846). Operating time (142 min vs. 63 min), cost, and technical difficulties were lower in group B. The anastomoses (of group B) and intestinal segments had good blood supply and were patent along their full length. The Bianchi technique and the modified technique offer two good, reliable alternatives for the treatment of short bowel syndrome. The modified technique improved operating time, cost, and technical issues.

  3. A new cerebral vasospasm model established with endovascular puncture technique

    International Nuclear Information System (INIS)

    Tu Jianfei; Liu Yizhi; Ji Jiansong; Zhao Zhongwei

    2011-01-01

    Objective: To investigate the method of establishing cerebral vasospasm (CVS) models in rabbits by using the endovascular puncture technique. Methods: The endovascular puncture procedure was performed in 78 New Zealand white rabbits to produce subarachnoid hemorrhage (SAH). The surviving rabbits were randomly divided into seven groups (3 h, 12 h, 1 d, 2 d, 3 d, 7 d and 14 d), with five rabbits in each group for both the study group (SAH group) and the control group. Cerebral CT scanning was carried out in all rabbits both before and after the operation. The inner diameter and wall thickness of both the posterior communicating artery (PcoA) and the basilar artery (BA) were determined after the animals were sacrificed, and the results were analyzed. Results: Of the 78 experimental rabbits, a CVS model was successfully established in 45, including 35 in the SAH group and 10 in the control group. The technical success rate was 57.7%. Twelve hours after the procedure, the inner diameters of the PcoA and BA in the SAH group were decreased by 45.6% and 52.3%, respectively, compared with those in the control group. The vascular narrowing showed biphasic changes: the inner diameter markedly decreased again at the 7th day, when the decrease reached its peak of 31.2% and 48.6%, respectively. Conclusion: The endovascular puncture technique is an effective method for establishing CVS models in rabbits. The death rate of experimental animals can be decreased if new interventional material is used and the manipulation is performed carefully. (authors)

  4. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom's taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account the nonlinear behaviour of items, as well as qualitative techniques, might be more useful for the purpose of cognitive level validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.

  5. Rotational acceleration during head impact resulting from different judo throwing techniques.

    Science.gov (United States)

    Murayama, Haruo; Hitosugi, Masahito; Motozawa, Yasuki; Ogino, Masahiro; Koyama, Katsuhiro

    2014-01-01

    Most severe head injuries in judo are reported as acute subdural hematoma. It is thus necessary to examine the rotational acceleration of the head to clarify the mechanism of head injuries. We determined the rotational acceleration of the head when the subject is thrown by judo techniques. One Japanese male judo expert threw an anthropomorphic test device using two throwing techniques, Osoto-gari and Ouchi-gari. Rotational and translational head accelerations were measured with and without an under-mat. For Osoto-gari, peak resultant rotational acceleration ranged from 4,284.2 rad/s² to 5,525.9 rad/s² and peak resultant translational acceleration ranged from 64.3 g to 87.2 g; for Ouchi-gari, the accelerations respectively ranged from 1,708.0 rad/s² to 2,104.1 rad/s² and from 120.2 g to 149.4 g. The resultant rotational acceleration did not decrease with installation of an under-mat for either Ouchi-gari or Osoto-gari. We found that head contact with the tatami could produce the peak values of both translational and rotational acceleration. In general, because the kinematics of the body strongly affects translational and rotational accelerations of the head, both accelerations should be measured to analyze the underlying mechanism of head injury. As a primary preventative measure, throwing techniques should be restricted to participants demonstrating ability in ukemi techniques, to avoid head contact with the tatami.

  6. Nurse Practitioners' Use of Communication Techniques: Results of a Maryland Oral Health Literacy Survey.

    Science.gov (United States)

    Koo, Laura W; Horowitz, Alice M; Radice, Sarah D; Wang, Min Q; Kleinman, Dushanka V

    2016-01-01

    We examined nurse practitioners' use and opinions of recommended communication techniques for the promotion of oral health as part of a Maryland state-wide oral health literacy assessment. Use of recommended health-literate and patient-centered communication techniques has been shown to improve health outcomes. A 27-item self-report survey containing 17 communication-technique items across 5 domains was mailed to 1,410 licensed nurse practitioners (NPs) in Maryland in 2010. Use of communication techniques and opinions about their effectiveness were analyzed using descriptive statistics. General linear models explored provider and practice characteristics to predict differences in the total number and the mean number of communication techniques routinely used in a week. More than 80% of NPs (N = 194) routinely used 3 of the 7 basic communication techniques: simple language, limiting teaching to 2-3 concepts, and speaking slowly. More than 75% of respondents believed that 6 of the 7 basic communication techniques are effective. Sociodemographic provider characteristics and practice characteristics were not significant predictors of the mean number or the total number of communication techniques routinely used by NPs in a week. Potential predictors for using more of the 7 basic communication techniques, each demonstrating significance in one general linear model, were assessing the office for user-friendliness and ever taking a communication course in addition to nursing school. NPs in Maryland self-reported routinely using some recommended health-literate communication techniques, with belief in their effectiveness. Our findings suggest that NPs who had assessed the office for patient-friendliness or who had taken a communication course beyond their initial education may use more of the 7 basic communication techniques. These self-reported findings should be validated with observational studies. Graduate and continuing education for NPs

  7. Application of the weighted total field-scattering field technique to 3D-PSTD light scattering model

    Science.gov (United States)

    Hu, Shuai; Gao, Taichang; Liu, Lei; Li, Hao; Chen, Ming; Yang, Bo

    2018-04-01

    PSTD (Pseudo-Spectral Time Domain) is an excellent model for light-scattering simulation of nonspherical aerosol particles. However, due to the particular discretization form of its Maxwell's equations, the traditional Total Field/Scattered Field (TF/SF) technique for FDTD (Finite-Difference Time Domain) is not applicable to PSTD, and the time-consuming pure scattered-field technique has mainly been applied to introduce the incident wave. To this end, the weighted TF/SF technique proposed by X. Gao is generalized and applied to the 3D-PSTD scattering model. Using this technique, the incident light can be effectively introduced by modifying the electromagnetic components in an inserted connecting region between the total-field and scattered-field regions with incident terms, where the incident terms are obtained by weighting the incident field with a window function. To optimally determine the thickness of the connection region and the window-function type for PSTD calculations, their influence on the modeling accuracy is first analyzed. To further verify the effectiveness and advantages of the weighted TF/SF technique, the improved PSTD model is validated against the PSTD model equipped with the pure scattered-field technique in both calculation accuracy and efficiency. The results show that the performance of PSTD appears insensitive to the choice of window function. The number of connection layers required decreases with increasing spatial resolution; for a spatial resolution of 24 grids per wavelength, a 6-layer region is thick enough. The scattering phase matrices and integral scattering parameters obtained by the improved PSTD show excellent consistency with those of well-tested models for spherical and nonspherical particles, illustrating that the weighted TF/SF technique introduces the incident wave precisely. The weighted TF/SF technique shows higher computational efficiency than the pure scattered-field technique.
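    A sketch of what the window-function weighting across the connection region might look like. A raised-cosine ramp is assumed here purely for illustration; the paper's actual window choices and the PSTD update equations are not reproduced:

```python
import numpy as np

# Weight profile across an N-layer connection region: the incident field
# is weighted from 0 (scattered-field side) to 1 (total-field side).
# A raised-cosine (Hanning-type) ramp is one possible window choice.
n_layers = 6
x = np.linspace(0.0, 1.0, n_layers)
w = 0.5 * (1 - np.cos(np.pi * x))

# In the update step, the incident term added to a field component in
# layer i would be scaled by w[i] (sketch only; PSTD details omitted).
```

    Any smooth monotone ramp from 0 to 1 serves the same purpose; per the abstract, the results were not very sensitive to the particular window chosen.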

  8. A Dynamic Operation Permission Technique Based on an MFM Model and Numerical Simulation

    International Nuclear Information System (INIS)

    Akio, Gofuku; Masahiro, Yonemura

    2011-01-01

    It is important to support operator activities in an abnormal plant situation where many counter actions are taken in a relatively short time. The authors previously proposed a technique called dynamic operation permission to decrease human errors, without eliminating the creative ideas of operators for coping with an abnormal plant situation, by checking whether the counter action taken is consistent with the emergency operation procedure. If the counter action is inconsistent, a dynamic operation permission system warns the operators. It also explains how and why the counter action is inconsistent and what influence it will have on future plant behavior, using a qualitative influence-inference technique based on an MFM (Multilevel Flow Modeling) model. However, the previous dynamic operation permission is not able to explain quantitative effects on future plant behavior. Moreover, many possible influence paths are derived, because qualitative reasoning does not give a unique solution when positive and negative influences propagate to the same node. This study extends dynamic operation permission by combining qualitative reasoning with a numerical simulation technique. The qualitative reasoning based on an MFM model of the plant derives all possible influence-propagation paths. Then, a numerical simulation gives a prediction of future plant behavior in the case of taking a counter action. Influence propagations that do not coincide with the simulation results are excluded from the possible influence paths. The extended technique is implemented in a dynamic operation permission system for an oil refinery plant. An MFM model and a static numerical simulator have been developed. The results of dynamic operation permission for some abnormal plant situations show improvement in the accuracy of dynamic operation permission and in the quality of the explanation of the effects of the counter action taken.

  9. Modelling techniques for predicting the long term consequences of radiation on natural aquatic populations

    International Nuclear Information System (INIS)

    Wallis, I.G.

    1978-01-01

    The purpose of this working paper is to describe modelling techniques for predicting the long term consequences of radiation on natural aquatic populations. Ideally, it would be possible to use aquatic population models: (1) to predict changes in the health and well-being of all aquatic populations as a result of changing the composition, amount and location of radionuclide discharges; (2) to compare the effects of steady, fluctuating and accidental releases of radionuclides; and (3) to evaluate the combined impact of the discharge of radionuclides and other wastes, and natural environmental stresses, on aquatic populations. At the outset it should be stated that there is no existing model which can achieve this ideal performance. However, modelling skills and techniques are available to develop useful aquatic population models. This paper discusses the considerations involved in developing these models and briefly describes the various types of population models which have been developed to date

  10. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 1: Concepts and methodology

    Directory of Open Access Journals (Sweden)

    A. Elshorbagy

    2010-10-01

    A comprehensive data-driven modeling experiment is presented in a two-part paper. In this first part, an extensive data-driven modeling experiment is proposed. The most important concerns regarding the way data-driven modeling (DDM) techniques and data were handled, compared, and evaluated, and the basis on which findings and conclusions were drawn, are discussed. A concise review of key articles that presented comparisons among various DDM techniques is given. Six DDM techniques, namely neural networks, genetic programming, evolutionary polynomial regression, support vector machines, M5 model trees, and K-nearest neighbors, are proposed and explained. Multiple linear regression and naïve models are also suggested as baselines for comparison with the various techniques. Five datasets from Canada and Europe representing evapotranspiration, upper and lower layer soil moisture content, and the rainfall-runoff process are described and proposed, in the second paper, for the modeling experiment. Twelve different realizations (groups) from each dataset are created by a procedure involving random sampling. Each group contains three subsets: training, cross-validation, and testing. Each modeling technique is applied to each of the 12 groups of each dataset. This way, both the prediction accuracy and the uncertainty of the modeling techniques can be evaluated. The description of the datasets, the implementation of the modeling techniques, results and analysis, and the findings of the modeling experiment are deferred to the second part of this paper.
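    The random-sampling procedure that creates 12 groups, each with training, cross-validation, and testing subsets, can be sketched as follows; the 50/25/25 split fractions are an assumption for illustration, not stated in the abstract:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_realizations(n_samples, n_groups=12, fractions=(0.5, 0.25, 0.25)):
    """Create groups by random sampling; each group holds disjoint
    training, cross-validation, and testing index subsets."""
    n_train = int(fractions[0] * n_samples)
    n_cv = int(fractions[1] * n_samples)
    groups = []
    for _ in range(n_groups):
        idx = rng.permutation(n_samples)          # a fresh random shuffle
        groups.append({"train": idx[:n_train],
                       "cv": idx[n_train:n_train + n_cv],
                       "test": idx[n_train + n_cv:]})
    return groups

groups = make_realizations(1000)
```

    Fitting every modeling technique on every group then yields a distribution of performance scores per technique, which is how both prediction accuracy and uncertainty can be evaluated.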

  11. Application of data assimilation technique for flow field simulation for Kaiga site using TAPM model

    International Nuclear Information System (INIS)

    Shrivastava, R.; Oza, R.B.; Puranik, V.D.; Hegde, M.N.; Kushwaha, H.S.

    2008-01-01

    Data assimilation techniques are becoming popular nowadays for obtaining realistic flow-field simulations for the site under consideration. The present paper describes a data assimilation technique for flow-field simulation for the Kaiga site using The Air Pollution Model (TAPM) developed by CSIRO, Australia. In this work, the TAPM model was run for the Kaiga site for a period of one month (November 2004) using the analysed meteorological data supplied with the model for the Central Asian (CAS) region, and the model solutions were nudged with the observed wind speed and wind direction data available for the site. The model was run with 4 nested grids, with grid spacings of 30 km, 10 km, 3 km and 1 km, respectively. The model-generated results, with and without nudging, are statistically compared with the observations. (author)
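    Nudging a model field toward observations is often implemented as Newtonian relaxation. A minimal sketch of that idea follows; the relaxation time-scale and field values are illustrative, not TAPM's internals:

```python
import numpy as np

def nudge(u_model, u_obs, dt, tau):
    """Newtonian relaxation: pull the model field toward observations
    with relaxation time-scale tau (one common nudging formulation)."""
    return u_model + (dt / tau) * (u_obs - u_model)

# Hypothetical model wind field (m/s) and an observed value at those points
u = np.full(5, 3.0)
u_obs = np.full(5, 5.0)

# Apply the nudging term each model time step
for _ in range(100):
    u = nudge(u, u_obs, dt=60.0, tau=3600.0)
```

    Each step moves the model state a fraction dt/tau of the way toward the observation, so the simulated field relaxes smoothly onto the observed winds instead of being overwritten.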

  12. Meteorological uncertainty of atmospheric dispersion model results (MUD)

    Energy Technology Data Exchange (ETDEWEB)

    Havskov Soerensen, J.; Amstrup, B.; Feddersen, H. [Danish Meteorological Institute, Copenhagen (Denmark)] [and others]

    2013-08-15

    The MUD project addresses assessment of uncertainties of atmospheric dispersion model predictions, as well as possibilities for optimum presentation to decision makers. Previously, it has not been possible to estimate such uncertainties quantitatively, but merely to calculate the 'most likely' dispersion scenario. However, recent developments in numerical weather prediction (NWP) include probabilistic forecasting techniques, which can be utilised also for long-range atmospheric dispersion models. The ensemble statistical methods developed and applied to NWP models aim at describing the inherent uncertainties of the meteorological model results. These uncertainties stem from e.g. limits in meteorological observations used to initialise meteorological forecast series. By perturbing e.g. the initial state of an NWP model run in agreement with the available observational data, an ensemble of meteorological forecasts is produced from which uncertainties in the various meteorological parameters are estimated, e.g. probabilities for rain. Corresponding ensembles of atmospheric dispersion can now be computed from which uncertainties of predicted radionuclide concentration and deposition patterns can be derived. (Author)
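    Given an ensemble of dispersion results, per-grid-point uncertainty measures such as exceedance probabilities can be estimated as simple member fractions. A sketch with synthetic deposition fields; the lognormal fields and the threshold are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical ensemble of deposition fields (members x grid points)
n_members, n_grid = 50, 100
deposition = rng.lognormal(mean=0.0, sigma=1.0, size=(n_members, n_grid))

# Probability that deposition exceeds a threshold, per grid point,
# estimated as the fraction of ensemble members above it
threshold = 1.0
p_exceed = (deposition > threshold).mean(axis=0)

# Ensemble mean and spread as simple uncertainty summaries
dep_mean = deposition.mean(axis=0)
dep_std = deposition.std(axis=0)
```

    Maps of p_exceed and dep_std are the kind of probabilistic products the project proposes presenting to decision makers, instead of a single "most likely" scenario.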

  13. Application of object modeling technique to medical image retrieval system

    International Nuclear Information System (INIS)

    Teshima, Fumiaki; Abe, Takeshi

    1993-01-01

    This report describes the results of discussions on the object-oriented analysis methodology, which is one of the object-oriented paradigms. In particular, we considered application of the object modeling technique (OMT) to the analysis of a medical image retrieval system. The object-oriented methodology places emphasis on the construction of an abstract model from real-world entities. The effectiveness of and future improvements to OMT are discussed from the standpoint of the system's expandability. These discussions have elucidated that the methodology is sufficiently well-organized and practical to be applied to commercial products, provided that it is applied to the appropriate problem domain. (author)

  14. Increasing the reliability of ecological models using modern software engineering techniques

    Science.gov (United States)

    Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff

    2009-01-01

    Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...

  15. Immersive visualization of dynamic CFD model results

    International Nuclear Information System (INIS)

    Comparato, J.R.; Ringel, K.L.; Heath, D.J.

    2004-01-01

    With immersive visualization the engineer has the means for vividly understanding problem causes and discovering opportunities to improve design. Software can generate an interactive world in which collaborators experience the results of complex mathematical simulations such as computational fluid dynamic (CFD) modeling. Such software, while providing unique benefits over traditional visualization techniques, presents special development challenges. The visualization of large quantities of data interactively requires both significant computational power and shrewd data management. On the computational front, commodity hardware is outperforming large workstations in graphical quality and frame rates. Also, 64-bit commodity computing shows promise in enabling interactive visualization of large datasets. Initial interactive transient visualization methods and examples are presented, as well as development trends in commodity hardware and clustering. Interactive, immersive visualization relies on relevant data being stored in active memory for fast response to user requests. For large or transient datasets, data management becomes a key issue. Techniques for dynamic data loading and data reduction are presented as means to increase visualization performance. (author)

  16. National Alliance of Business Sales Techniques and Results (STAR).

    Science.gov (United States)

    Golightly, Steven J.

    This paper presents an overview of the Sales Techniques and Results (STAR) training program developed by the National Alliance of Business in conjunction with IBM. The STAR training program can be used to help vocational directors, teachers, and counselors to be better salespersons for cooperative education or job placement programs. The paper…

  17. A note on the multi model super ensemble technique for reducing forecast errors

    International Nuclear Information System (INIS)

    Kantha, L.; Carniel, S.; Sclavo, M.

    2008-01-01

    The multi-model superensemble (SE) technique has been used with considerable success to improve meteorological forecasts and is now being applied to ocean models. Although the technique has been shown to produce deterministic forecasts that can be superior to the individual models in the ensemble or to a simple multi-model ensemble forecast, there is a clear need to understand its strengths and limitations. This paper is an attempt to do so in simple, easily understood contexts. The results demonstrate that the SE forecast is almost always better than the simple ensemble forecast, the degree of improvement depending on the properties of the models in the ensemble. However, the skill of the SE forecast with respect to the true state depends on a number of factors, principal among which is the skill of the models in the ensemble. As can be expected, if the ensemble consists of models with poor skill, the SE forecast will also be poor, although better than the ensemble forecast. On the other hand, the inclusion of even a single skillful model in the ensemble increases the forecast skill significantly.
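    The SE technique fits regression weights to the member forecasts over a training period and then applies those weights in the forecast period. A minimal synthetic sketch; the member models, biases, and noise levels are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic truth and three biased/noisy member forecasts
n_train, n_fcst = 200, 50
truth = np.sin(np.linspace(0, 20, n_train + n_fcst))
models = np.stack([truth + 0.5 + 0.1 * rng.standard_normal(truth.size),
                   0.8 * truth + 0.2 * rng.standard_normal(truth.size),
                   truth + 0.3 * rng.standard_normal(truth.size)])

# Superensemble: regress observations on member forecasts (plus an
# intercept) over the training period...
A = np.vstack([models[:, :n_train], np.ones(n_train)]).T
coef, *_ = np.linalg.lstsq(A, truth[:n_train], rcond=None)

# ...then apply the fitted weights in the forecast period
A_f = np.vstack([models[:, n_train:], np.ones(n_fcst)]).T
se_fcst = A_f @ coef
ens_mean = models[:, n_train:].mean(axis=0)

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
rmse_se = rmse(se_fcst, truth[n_train:])
rmse_ens = rmse(ens_mean, truth[n_train:])
```

    Because the regression removes each member's bias and down-weights noisy members, the SE forecast beats the plain ensemble mean here, consistent with the paper's main finding.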

  18. A New Profile Learning Model for Recommendation System based on Machine Learning Technique

    Directory of Open Access Journals (Sweden)

    Shereen H. Ali

    2016-03-01

    Recommender systems (RSs) have been used to successfully address the information overload problem by providing personalized and targeted recommendations to end users. RSs are software tools and techniques that provide suggestions for items likely to be of use to a user; hence, they typically apply techniques and methodologies from data mining. The main contribution of this paper is to introduce a new user-profile learning model to promote the recommendation accuracy of vertical recommendation systems. The proposed profile learning model employs the vertical classifier that has been used in the multi-classification module of the Intelligent Adaptive Vertical Recommendation (IAVR) system to discover the user's area of interest, and then builds the user's profile accordingly. Experimental results have proven the effectiveness of the proposed profile learning model, which accordingly will promote the recommendation accuracy.

  19. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-06-01

    Full Text Available This research work is aimed at optimizing the availability of a framework comprising two units linked together in a series configuration, utilizing the Markov Model and Monte Carlo (MC) Simulation techniques. In this article, an effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, with as well as without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov Model. Validation of the results is later carried out with the help of MC Simulation. In addition, MC Simulation-based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov Model.
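
    A minimal sketch of the analytical (Markov) part of such an availability calculation, with a hypothetical three-state unit (good, degraded, failed) and made-up transition rates; the paper's actual model and repair cases are more detailed, and the series availability below assumes the two units fail independently.

```python
import numpy as np

# Hypothetical per-unit Markov model with three states:
# 0 = good, 1 = degraded, 2 = failed. Rates (per hour) are illustrative.
lam1, lam2 = 0.01, 0.02   # good -> degraded, degraded -> failed
mu1, mu2 = 0.1, 0.05      # degraded -> good, failed -> good (perfect repair)

# Transition rate matrix Q (rows sum to zero).
Q = np.array([
    [-lam1,         lam1,         0.0 ],
    [  mu1, -(mu1 + lam2),        lam2],
    [  mu2,          0.0,        -mu2 ],
])

# Steady-state distribution: solve pi @ Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

unit_availability = pi[0] + pi[1]             # unit works if good or degraded
series_availability = unit_availability ** 2  # two independent units in series
print(round(series_availability, 4))
```

    A Monte Carlo validation would simulate many random state trajectories with these rates and check that the fraction of time both units are up converges to the same value.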

  20. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. It begins with an introduction to circuit analysis techniques, laws, and frequency- and time-domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  1. A characteristic study of CCF modeling techniques and optimization of CCF defense strategies

    International Nuclear Information System (INIS)

    Kim, Min Chull

    2000-02-01

    Common Cause Failures (CCFs) are among the major contributors to risk and core damage frequency (CDF) in operating nuclear power plants (NPPs). Our study on CCF focused on the following aspects: (1) a characteristic study of CCF modeling techniques and (2) development of an optimal CCF defense strategy. Firstly, the characteristics of CCF modeling techniques were studied through a sensitivity study of CCF occurrence probability with respect to system redundancy. The modeling techniques considered in this study include those most widely used worldwide, i.e., the beta factor, MGL, alpha factor, and binomial failure rate models. We found that the MGL and alpha factor models are essentially identical in terms of the CCF probability. Secondly, in the study of CCF defense, the various methods identified in previous studies for defending against CCF were classified into five categories. Based on these categories, we developed a generic method by which the optimal CCF defense strategy can be selected. The method is not only qualitative but also quantitative in nature: the selection of the optimal strategy among candidates is based on the analytic hierarchy process (AHP). We applied this method to two motor-driven valves for containment sump isolation in the Ulchin 3 and 4 nuclear power plants. The result indicates that the method for developing an optimal CCF defense strategy is effective.
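
    For illustration, a beta-factor calculation (the failure probability and beta value are invented, not from the thesis) showing why CCF limits the benefit of redundancy: the independent term shrinks rapidly with the number of trains, but the common-cause term does not.

```python
# Beta-factor CCF model for an m-redundant system (illustrative sketch).
def system_failure_prob(q_total: float, beta: float, m: int) -> float:
    """P(all m redundant trains fail): independent failures of every
    train, plus a single common-cause event that fails all trains."""
    q_ind = (1.0 - beta) * q_total   # independent part per train
    q_ccf = beta * q_total           # common-cause part
    return q_ind ** m + q_ccf

q_total, beta = 1e-3, 0.1            # made-up demand failure prob. and beta
for m in (2, 3, 4):
    print(m, system_failure_prob(q_total, beta, m))
```

    For these numbers the system failure probability is pinned near beta * q_total = 1e-4 regardless of m, which is the kind of redundancy sensitivity the cited characteristic study examines.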

  2. System Response Analysis and Model Order Reduction, Using Conventional Method, Bond Graph Technique and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Lubna Moin

    2009-04-01

    Full Text Available This research paper explores and compares different modeling and analysis techniques, and then explores the model order reduction approach and its significance. The traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, but the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. Bond Graphs are also used for analyzing linear and nonlinear dynamic production systems, artificial intelligence, image processing, robotics, and industrial automation. This paper describes a technique for generating a genetic design from the tree-structured transfer function obtained from a Bond Graph. The work combines Bond Graphs for model representation with Genetic Programming for exploring the design space; the tree-structured transfer function results from replacing each typical Bond Graph element with its impedance equivalent, specifying impedance laws for the Bond Graph multiport. The tree-structured form thus obtained from the Bond Graph is used to generate the genetic tree. Application studies will identify key issues and their importance for advancing this approach towards becoming an effective and efficient design tool for synthesizing designs for electrical systems. In the first phase, the system is modeled using the Bond Graph technique. Its system response and transfer function are analyzed with the conventional and Bond Graph methods, and an approach towards model order reduction is then examined. The suggested algorithm and other known modern model order reduction techniques are applied to an 11th-order high-pass filter [1], with different approaches. The model order reduction technique developed in this paper has the least reduction errors, and the final model retains structural information. The system response and the stability analysis of the system transfer function obtained by the conventional and Bond Graph methods are compared.

  3. Prediction of high level vibration test results by use of available inelastic analysis techniques

    International Nuclear Information System (INIS)

    Hofmayer, C.H.; Park, Y.J.; Costello, J.F.

    1991-01-01

    As part of a cooperative study between the United States and Japan, the US Nuclear Regulatory Commission and the Ministry of International Trade and Industry of Japan agreed to perform a test program that would subject a large-scale piping model to significant plastic strains under excitation conditions much greater than the design condition for nuclear power plants. The objective was to compare the results of the tests with state-of-the-art analyses. Comparisons were done at different excitation levels, from elastic to elastic-plastic to levels where cracking was induced in the test model. The program was called the High Level Vibration Test (HLVT). The HLVT was performed on the seismic table at the Tadotsu Engineering Laboratory of the Nuclear Power Engineering Test Center in Japan. The test model was constructed by modifying the 1/2.5 scale model of one loop of a PWR primary coolant system which was previously tested by NUPEC as part of their seismic proving test program. A comparison of various analysis techniques with test results shows a higher prediction error in the detailed strain values than in the overall response values. This prediction error is magnified as the plasticity in the test model increases. There is no significant difference in the peak responses between the simplified and the detailed analyses. A comparison between various detailed finite element model runs indicates that the material properties and plasticity modeling have a significant impact on the plastic strain responses under dynamic loading reversals. 5 refs., 12 figs

  4. In vivo dosimetry for total body irradiation: five‐year results and technique comparison

    Science.gov (United States)

    Warry, Alison J.; Eaton, David J.; Collis, Christopher H.; Rosenberg, Ivan

    2014-01-01

    The aim of this work is to establish if the new CT‐based total body irradiation (TBI) planning techniques used at University College London Hospital (UCLH) and Royal Free Hospital (RFH) are comparable to the previous technique at the Middlesex Hospital (MXH) by analyzing predicted and measured diode results. TBI aims to deliver a homogeneous dose to the entire body, typically using extended SSD fields with beam modulation to limit doses to organs at risk. In vivo dosimetry is used to verify the accuracy of delivered doses. In 2005, when the Middlesex Hospital was decommissioned and merged with UCLH, both UCLH and the RFH introduced updated CT‐planned TBI techniques, based on the old MXH technique. More CT slices and in vivo measurement points were used by both; UCLH introduced a beam modulation technique using MLC segments, while RFH updated to a combination of lead compensators and bolus. Semiconductor diodes were used to measure entrance and exit doses in several anatomical locations along the entire body. Diode results from both centers for over five years of treatments were analyzed and compared to the previous MXH technique for accuracy and precision of delivered doses. The most stable location was the field center with standard deviations of 4.1% (MXH), 3.7% (UCLH), and 1.7% (RFH). The least stable position was the ankles. Mean variation with fraction number was within 1.5% for all three techniques. In vivo dosimetry can be used to verify complex modulated CT‐planned TBI, and demonstrate improvements and limitations in techniques. The results show that the new UCLH technique is no worse than the previous MXH one and comparable to the current RFH technique. PACS numbers: 87.55.Qr, 87.56.N‐ PMID:25207423

  5. System health monitoring using multiple-model adaptive estimation techniques

    Science.gov (United States)

    Sifford, Stanley Ryan

    Monitoring system health for fault detection and diagnosis by tracking system parameters concurrently with state estimates is approached using a new multiple-model adaptive estimation (MMAE) method. This novel method is called GRid-based Adaptive Parameter Estimation (GRAPE). GRAPE expands existing MMAE methods by using new techniques to sample the parameter space. GRAPE expands on MMAE with the hypothesis that sample models can be applied and resampled without relying on a predefined set of models. GRAPE is initially implemented in a linear framework using Kalman filter models. A more generalized GRAPE formulation is presented using extended Kalman filter (EKF) models to represent nonlinear systems. GRAPE can handle both time invariant and time varying systems as it is designed to track parameter changes. Two techniques are presented to generate parameter samples for the parallel filter models. The first approach is called selected grid-based stratification (SGBS). SGBS divides the parameter space into equally spaced strata. The second approach uses Latin Hypercube Sampling (LHS) to determine the parameter locations and minimize the total number of required models. LHS is particularly useful when the parameter dimensions grow. Adding more parameters does not require the model count to increase for LHS. Each resample is independent of the prior sample set other than the location of the parameter estimate. SGBS and LHS can be used for both the initial sample and subsequent resamples. Furthermore, resamples are not required to use the same technique. Both techniques are demonstrated for both linear and nonlinear frameworks. The GRAPE framework further formalizes the parameter tracking process through a general approach for nonlinear systems. These additional methods allow GRAPE to either narrow the focus to converged values within a parameter range or expand the range in the appropriate direction to track the parameters outside the current parameter range boundary
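
    A minimal sketch of the Latin Hypercube Sampling step described above: one sample per equal-probability stratum in each dimension, with strata randomly paired across dimensions, so the number of parameter samples (and hence filter models) need not grow with the number of parameters. This is illustrative code, not the GRAPE implementation.

```python
import numpy as np

def latin_hypercube(n_samples: int, bounds, rng=None):
    """Latin hypercube sample: each of n_samples equal-probability strata
    contains exactly one point in every dimension."""
    rng = rng if rng is not None else np.random.default_rng()
    bounds = np.asarray(bounds, dtype=float)   # shape (n_dims, 2): [low, high]
    n_dims = bounds.shape[0]
    # One random position inside each of the n_samples strata on [0, 1).
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    # Shuffle the strata independently per dimension to pair them randomly.
    for d in range(n_dims):
        rng.shuffle(u[:, d])
    # Scale from the unit cube to the requested parameter bounds.
    return bounds[:, 0] + u * (bounds[:, 1] - bounds[:, 0])

# Hypothetical 2-parameter space: a gain on [0, 1] and an offset on [-5, 5].
samples = latin_hypercube(10, [(0.0, 1.0), (-5.0, 5.0)], np.random.default_rng(1))
print(samples.shape)  # (10, 2)
```

    Each resample around the current parameter estimate would simply call this again with narrowed or shifted bounds, matching the resampling behavior the abstract describes.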

  6. Numerical modeling techniques for flood analysis

    Science.gov (United States)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to find out the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and their estimation in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO 2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO 2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness in grids, were found; these can be improved through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open-channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D model for the floodplain should be developed, considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the findings on the causes and effects of flooding.

  7. Plants status monitor: Modelling techniques and inherent benefits

    International Nuclear Information System (INIS)

    Breeding, R.J.; Lainoff, S.M.; Rees, D.C.; Prather, W.A.; Fickiessen, K.O.E.

    1987-01-01

    The Plant Status Monitor (PSM) is designed to provide plant personnel with information on the operational status of the plant and compliance with the plant technical specifications. The PSM software evaluates system models using a 'distributed processing' technique, in which detailed models of individual systems are processed rather than a single, plant-level model. In addition, development of the system models for PSM provides inherent benefits to the plant by forcing detailed reviews of the technical specifications, system design and operating procedures, and plant documentation. (orig.)

  8. A complex-plane strategy for computing rotating polytropic models - Numerical results for strong and rapid differential rotation

    International Nuclear Information System (INIS)

    Geroyannis, V.S.

    1990-01-01

    In this paper, a numerical method, called complex-plane strategy, is implemented in the computation of polytropic models distorted by strong and rapid differential rotation. The differential rotation model results from a direct generalization of the classical model, in the framework of the complex-plane strategy; this generalization yields very strong differential rotation. Accordingly, the polytropic models assume extremely distorted interiors, while their boundaries are slightly distorted. For an accurate simulation of differential rotation, a versatile method, called multiple partition technique is developed and implemented. It is shown that the method remains reliable up to rotation states where other elaborate techniques fail to give accurate results. 11 refs

  9. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    Energy Technology Data Exchange (ETDEWEB)

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.

  10. Analyzing the field of bioinformatics with the multi-faceted topic modeling technique.

    Science.gov (United States)

    Heo, Go Eun; Kang, Keun Young; Song, Min; Lee, Jeong-Hoon

    2017-05-31

    Bioinformatics is an interdisciplinary field at the intersection of molecular biology and computing technology. To characterize the field as a convergent domain, researchers have used bibliometrics, augmented with text-mining techniques for content analysis. In previous studies, Latent Dirichlet Allocation (LDA) was the most representative topic modeling technique for identifying the topic structure of subject areas. However, as opposed to revealing the topic structure in relation to metadata such as authors, publication date, and journals, LDA only displays the simple topic structure. In this paper, we adopt Tang et al.'s Author-Conference-Topic (ACT) model to study the field of bioinformatics from the perspective of keyphrases, authors, and journals. The ACT model is capable of incorporating the paper, author, and conference into the topic distribution simultaneously. To obtain more meaningful results, we use journals and keyphrases instead of conferences and bag-of-words. For analysis, we used PubMed to collect forty-six bioinformatics journals from the MEDLINE database. We conducted time series topic analysis over four periods from 1996 to 2015 to further examine the interdisciplinary nature of bioinformatics, and analyzed the ACT model results in each period. Additionally, for further integrated analysis, we conducted a time series analysis among the top-ranked keyphrases, journals, and authors according to their frequency. We also examined the patterns in the top journals by simultaneously identifying the topical probability in each period, as well as the top authors and keyphrases. The results indicate that in recent years diversified topics have become more prevalent and convergent topics have become more clearly represented. The results of our analysis imply that over time the field of bioinformatics has become more interdisciplinary, with a steady increase in peripheral fields such as conceptual, mathematical, and systems biology.

  11. VLF surface-impedance modelling techniques for coal exploration

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, G.; Thiel, D.; O' Keefe, S. [Central Queensland University, Rockhampton, Qld. (Australia). Faculty of Engineering and Physical Systems

    2000-10-01

    New and efficient computational techniques are required for geophysical investigations of coal. This will allow automated inverse analysis procedures to be used for interpretation of field data. In this paper, a number of methods of modelling electromagnetic surface impedance measurements are reviewed, particularly as applied to typical coal seam geology found in the Bowen Basin. At present, the Impedance method and the finite-difference time-domain (FDTD) method appear to offer viable solutions although both have problems. The Impedance method is currently slightly inaccurate, and the FDTD method has large computational demands. In this paper both methods are described and results are presented for a number of geological targets. 17 refs., 14 figs.

  12. Hydroponic cultivation techniques: good results with Eg system

    Energy Technology Data Exchange (ETDEWEB)

    Mimiola, G; Sigliuzzo, C [Tecnagro, Valenzano (Italy)

    1988-12-01

    This report describes results obtained at the Tecnagro agronomic institute (Valenzano, Italy), where research is being carried out on the use of the Eg hydroponic system developed in Israel. The research program examined the following: composition of nutritive solutions for ornamental plants and vegetables, methods of application of nutritive substances, and breeding densities for ornamental plants and vegetables. Successful nutritive formulas were obtained which resulted, in the case of ornamental plants, in increases in plant height (from 30 to 50%) and foliage area (50%), as well as in shortened growth cycles. For vegetables, shortened growth cycles were achieved along with greater and more consistent production. From an economic point of view, tomatoes proved to be the best choice of vegetable for cultivation with the Eg technique.

  13. Model assessment using a multi-metric ranking technique

    Science.gov (United States)

    Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.

    2017-12-01

    Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and identifying adeptness in extreme events, while maintaining simplicity for management decisions. Flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique which ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's Tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as root mean square difference and rank correlation, were also explored, but removed when the information was discovered to be generally duplicative of other metrics. While equal weights are applied, the weights could be altered depending on preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts, instead of distance, along-track, and cross-track errors, is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process, but were found useful in an independent context, and will be briefly reported.
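
    A toy version of the weighted rank tally described above (metric names are taken from the abstract, but the values and weights are invented): each model is ranked per metric, and a weighted sum of ranks gives the consolidated score.

```python
import numpy as np

models = ["modelA", "modelB", "modelC"]
metrics = {                      # lower is better for these error-type metrics
    "abs_error": np.array([1.2, 0.9, 1.5]),
    "bias":      np.abs(np.array([0.3, -0.5, 0.1])),
    "rmse_diff": np.array([1.0, 1.1, 1.4]),
}
weights = {"abs_error": 1.0, "bias": 1.0, "rmse_diff": 1.0}  # equal weights

tally = np.zeros(len(models))
for name, values in metrics.items():
    ranks = values.argsort().argsort() + 1   # rank 1 = smallest (best) value
    tally += weights[name] * ranks           # weighted tally of ranks

best = models[int(np.argmin(tally))]         # lowest tally wins
print(best)
```

    Metrics where larger is better (e.g. correlations) would simply be ranked in descending order before entering the tally; altering the `weights` dictionary reproduces the preferred-metric flexibility the abstract mentions.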

  14. On a Numerical and Graphical Technique for Evaluating some Models Involving Rational Expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  15. On a numerical and graphical technique for evaluating some models involving rational expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  16. Advanced Modeling and Uncertainty Quantification for Flight Dynamics; Interim Results and Challenges

    Science.gov (United States)

    Hyde, David C.; Shweyk, Kamal M.; Brown, Frank; Shah, Gautam

    2014-01-01

    into a real-time simulation capability, generating techniques for uncertainty modeling that draw data from multiple modeling sources, and providing a unified database model that includes nominal plus increments for each flight condition. This paper presents status of testing in the BR&T water tunnel and analysis of the resulting data and efforts to characterize these data using alternative modeling methods. Program challenges and issues are also presented.

  17. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used by the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions...... into the operation of the grid-connected power converters. This paper describes a quasi passive method for estimating the line impedance of the distribution electricity network. The method uses the model based identification technique to obtain the resistive and inductive parts of the line impedance. The quasi...
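
    A minimal sketch of the model-based identification idea (synthetic signals and parameter values; the paper's quasi-passive method is more involved): assume the line model v = R·i + L·di/dt and estimate the resistive and inductive parts by least squares from sampled measurements.

```python
import numpy as np

# Synthetic 50 Hz current injection through a line with "unknown" R and L.
# All parameter values below are illustrative, not from the paper.
Ts = 1e-4                                   # sampling period, s
t = np.arange(0, 0.02, Ts)
R_true, L_true = 0.5, 2e-3                  # ohm, henry
i = 10 * np.sin(2 * np.pi * 50 * t)         # injected current, A
di = np.gradient(i, Ts)                     # numerical di/dt
v = R_true * i + L_true * di                # "measured" voltage (noise-free)

# Model-based identification: least-squares fit of v = R*i + L*di/dt.
A = np.column_stack([i, di])
(R_est, L_est), *_ = np.linalg.lstsq(A, v, rcond=None)
print(round(R_est, 3), round(L_est * 1e3, 3))  # ≈ 0.5 ohm, 2.0 mH
```

    With measurement noise or a richer grid model the same regression structure applies, only with more samples and possibly extra regressors.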

  18. Results of arthrospine assisted percutaneous technique for lumbar discectomy

    Directory of Open Access Journals (Sweden)

    Mohinder Kaushal

    2016-01-01

    Full Text Available Background: Available minimally invasive arthro/endoscopic techniques are not compatible with the 30° arthroscope which orthopedic surgeons use in knee and shoulder arthroscopy. The minimally invasive “arthrospine-assisted percutaneous technique for lumbar discectomy” is an attempt to allow standard, familiar microsurgical discectomy and decompression to be performed using the 30° arthroscope used in knee and shoulder arthroscopy, with conventional microdiscectomy instruments. Materials and Methods: 150 patients suffering from lumbar disc herniations were operated on between January 2004 and December 2012 using the indigenously designed arthrospine system and were evaluated retrospectively. In the lumbar discectomy group, there were 85 males and 65 females aged between 18 and 72 years (mean, 38.4 years). The delay between onset of symptoms and surgery ranged from 3 months to 7 years. Levels operated upon included L1-L2 (n = 3), L2-L3 (n = 2), L3-L4 (n = 8), L4-L5 (n = 90), and L5-S1 (n = 47). Ninety patients had radiculopathy on the right side and 60 on the left side. There were 22 central, 88 paracentral, 12 contained, 3 extraforaminal, and 25 sequestrated herniations. The standard protocol of preoperative blood tests, X-ray of the LS spine, preoperative MRI, and pre-anaesthetic evaluation was followed in all cases. The technique comprised localization of the symptomatic level followed by percutaneous dilatation and insertion of the newly devised arthrospine system device over a dilator through a 15 mm skin and fascial incision. Arthro/endoscopic discectomy was then carried out with a 30° arthroscope and conventional disc surgery instruments. Results: Based on modified Macnab's criteria, of the 150 patients operated on for lumbar discectomy, 136 (90%) had excellent to good, 12 (8%) had fair, and 2 (1.3%) had poor results. The complications observed were discitis in 3 patients (2%), dural tear in 4 patients (2.6%), and nerve root injury in 2 patients (1.3%).

  19. Model technique for aerodynamic study of boiler furnace

    Energy Technology Data Exchange (ETDEWEB)

    1966-02-01

    The help of the Division was recently sought to improve the heat transfer and reduce the exit gas temperature in a pulverized-fuel-fired boiler at an Australian power station. One approach adopted was to construct from Perspex a 1:20 scale cold-air model of the boiler furnace and to use a flow-visualization technique to study the aerodynamic patterns established when air was introduced through the p.f. burners of the model. The work established good correlations between the behaviour of the model and of the boiler furnace.

  20. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik

    2015-01-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can...

  1. A New ABCD Technique to Analyze Business Models & Concepts

    OpenAIRE

    Aithal P. S.; Shailasri V. T.; Suresh Kumar P. M.

    2015-01-01

    Various techniques are used to analyze individual characteristics or organizational effectiveness, such as SWOT analysis, SWOC analysis, PEST analysis, etc. These techniques provide an easy and systematic way of identifying the various issues affecting a system and provide an opportunity for further development. Whereas they provide a broad-based assessment of individual institutions and systems, they suffer limitations when applied to a business context. The success of any business model depends on ...

  2. Extending the reach of strong-coupling: an iterative technique for Hamiltonian lattice models

    International Nuclear Information System (INIS)

    Alberty, J.; Greensite, J.; Patkos, A.

    1983-12-01

    The authors propose an iterative method for doing lattice strong-coupling-like calculations in a range of medium to weak couplings. The method is a modified Lanczos scheme, with greatly improved convergence properties. The technique is tested on the Mathieu equation and on a Hamiltonian finite-chain XY model, with excellent results. (Auth.)

  3. A fermionic molecular dynamics technique to model nuclear matter

    International Nuclear Information System (INIS)

    Vantournhout, K.; Jachowicz, N.; Ryckebusch, J.

    2009-01-01

    Full text: At sub-nuclear densities of about 10^14 g/cm^3, nuclear matter arranges itself in a variety of complex shapes. This can be the case in the crust of neutron stars and in core-collapse supernovae. These slab-like and rod-like structures, designated as nuclear pasta, have been modelled with classical molecular dynamics techniques. We present a technique, based on fermionic molecular dynamics, to model nuclear matter at sub-nuclear densities in a semi-classical framework. The dynamical evolution of an antisymmetric ground state is described under the assumption of periodic boundary conditions. Adding the concepts of antisymmetry, spin, and probability distributions to classical molecular dynamics brings the dynamical description of nuclear matter to a quantum mechanical level. Applications of this model range from the investigation of macroscopic observables and the equation of state to the study of fundamental interactions in the microscopic structure of the matter. (author)

  4. New diagnostic technique for Zeeman-compensated atomic beam slowing: technique and results

    OpenAIRE

    Molenaar, P.A.; Straten, P. van der; Heideman, H.G.M.; Metcalf, H.

    1997-01-01

    We have developed a new diagnostic tool for the study of Zeeman-compensated slowing of an alkali atomic beam. Our time-of-flight technique measures the longitudinal velocity distribution of the slowed atoms with a resolution below the Doppler limit of 30 cm/s. Furthermore, it can map the position and velocity distribution of atoms in either ground hyperfine level inside the solenoid without any devices inside the solenoid. The technique reveals the optical pumping effects, and shows in de...

  5. Is the Linear Modeling Technique Good Enough for Optimal Form Design? A Comparison of Quantitative Analysis Models

    Science.gov (United States)

    Lin, Yang-Cheng; Yeh, Chung-Hsing; Wang, Chen-Cheng; Wei, Chun-Chun

    2012-01-01

    How to design highly reputable and hot-selling products is an essential issue in product design. Whether consumers choose a product depends largely on their perception of the product image. A consumer-oriented design approach presented in this paper helps product designers incorporate consumers' perceptions of product forms in the design process. The consumer-oriented design approach uses quantification theory type I, grey prediction (the linear modeling technique), and neural networks (the nonlinear modeling technique) to determine the optimal form combination of product design for matching a given product image. An experimental study based on the concept of Kansei Engineering is conducted to collect numerical data for examining the relationship between consumers' perception of product image and product form elements of personal digital assistants (PDAs). The result of performance comparison shows that the QTTI model is good enough to help product designers determine the optimal form combination of product design. Although the PDA form design is used as a case study, the approach is applicable to other consumer products with various design elements and product images. The approach provides an effective mechanism for facilitating the consumer-oriented product design process. PMID:23258961

  6. New techniques and results for worldline simulations of lattice field theories

    Science.gov (United States)

    Giuliani, Mario; Orasch, Oliver; Gattringer, Christof

    2018-03-01

    We use the complex φ4 field at finite density as a model system for developing further techniques based on worldline formulations of lattice field theories. More specifically we: 1) Discuss new variants of the worm algorithm for updating the φ4 theory and related systems with site weights. 2) Explore the possibility of canonical simulations in the worldline formulation. 3) Study the connection of 2-particle condensation at low temperature to scattering parameters of the theory.

  7. Techniques for discrimination-free predictive models (Chapter 12)

    NARCIS (Netherlands)

    Kamiran, F.; Calders, T.G.K.; Pechenizkiy, M.; Custers, B.H.M.; Calders, T.G.K.; Schermer, B.W.; Zarsky, T.Z.

    2013-01-01

    In this chapter, we give an overview of the techniques developed ourselves for constructing discrimination-free classifiers. In discrimination-free classification the goal is to learn a predictive model that classifies future data objects as accurately as possible, yet the predicted labels should be

  8. Air quality modelling using chemometric techniques | Azid | Journal ...

    African Journals Online (AJOL)

    This study shows that chemometric techniques and modelling are an excellent tool for API assessment, air pollution source identification and apportionment, and can assist in designing an API monitoring network for effective air pollution resource management. Keywords: air pollutant index; chemometric; ANN; ...

  9. Modelling Technique for Demonstrating Gravity Collapse Structures in Jointed Rock.

    Science.gov (United States)

    Stimpson, B.

    1979-01-01

    Described is a base-friction modeling technique for studying the development of collapse structures in jointed rocks. A moving belt beneath weak material is designed to simulate gravity. A description is given of the model frame construction. (Author/SA)

  10. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Directory of Open Access Journals (Sweden)

    A. Elshorbagy

    2010-10-01

    Full Text Available In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were generated from each dataset by randomly sampling without replacement from the original dataset. Artificial neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distributions of model residuals, and the deterioration rate of prediction performance during the testing phase. The Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike the two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance can be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and, if it is appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi-linear data, which cover a wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP can be more successful than other modeling techniques. K-nn is also successful in linear situations, and it

  11. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Science.gov (United States)

    Elshorbagy, A.; Corzo, G.; Srinivasulu, S.; Solomatine, D. P.

    2010-10-01

    In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were randomly generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distributions of model residuals, and the deterioration rate of prediction performance during the testing phase. The Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike the two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance can be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and, if it is appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi-linear data, which cover a wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP can be more successful than other modeling techniques. K-nn is also successful in linear situations, and it should
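
    Several of the compared techniques are simple enough to sketch. For instance, the K-nn predictor mentioned above averages the observed targets of the k most similar historical inputs. A toy stdlib-Python sketch (the 1-D data below are invented for illustration, not taken from the study):

```python
def knn_predict(train_x, train_y, query, k=3):
    """Predict by averaging the targets of the k nearest training points."""
    # Rank training pairs by distance to the query (1-D absolute distance here)
    nearest = sorted(zip(train_x, train_y), key=lambda p: abs(p[0] - query))[:k]
    return sum(y for _, y in nearest) / k

# Toy rainfall-runoff style pairs: runoff roughly twice the rainfall
rain = [1.0, 2.0, 3.0, 4.0, 5.0]
runoff = [2.1, 3.9, 6.2, 8.0, 9.8]
print(knn_predict(rain, runoff, 3.5, k=2))  # average of the two nearest targets
```

    Because the prediction is a local average of observed targets, K-nn interpolates well in linear and densely sampled regions, which is consistent with the behavior reported above.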

  12. Continuous Measurements and Quantitative Constraints: Challenge Problems for Discrete Modeling Techniques

    Science.gov (United States)

    Goodrich, Charles H.; Kurien, James; Clancy, Daniel (Technical Monitor)

    2001-01-01

    We present some diagnosis and control problems that are difficult to solve with discrete or purely qualitative techniques. We analyze the nature of the problems, classify them and explain why they are frequently encountered in systems with closed loop control. This paper illustrates the problem with several examples drawn from industrial and aerospace applications and presents detailed information on one important application: In-Situ Resource Utilization (ISRU) on Mars. The model for an ISRU plant is analyzed showing where qualitative techniques are inadequate to identify certain failure modes and to maintain control of the system in degraded environments. We show why the solution to the problem will result in significantly more robust and reliable control systems. Finally, we illustrate requirements for a solution to the problem by means of examples.

  13. Modeling and design techniques for RF power amplifiers

    CERN Document Server

    Raghavan, Arvind; Laskar, Joy

    2008-01-01

    The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, and design more accurate compact device models, with faster extraction routines, to create cost effective and reliable circuits.

  14. Earthquake induced rock shear through a deposition hole. Modelling of three model tests scaled 1:10. Verification of the bentonite material model and the calculation technique

    Energy Technology Data Exchange (ETDEWEB)

    Boergesson, Lennart (Clay Technology AB, Lund (Sweden)); Hernelind, Jan (5T Engineering AB, Vaesteraas (Sweden))

    2010-11-15

    Three model shear tests of very high quality, simulating a horizontal rock shear through a deposition hole at the centre of a canister, were performed in 1986. The tests and the results are described by /Boergesson 1986/. The tests simulated a deposition hole at scale 1:10 with the reference density of the buffer, a very stiff confinement simulating the rock, and a solid bar of copper simulating the canister. The three tests were almost identical, with the exception of the rate of shear, which was varied between 0.031 and 160 mm/s, i.e. by a factor of more than 5,000, and the density of the bentonite, which differed slightly. The tests were very well documented. Shear force, shear rate, total stress in the bentonite, strain in the copper, and the movement of the top of the simulated canister were measured continuously during the shear. After the shear was finished, the equipment was dismantled and careful sampling of the bentonite, with measurement of water ratio and density, was made. The deformed copper 'canister' was also carefully measured after the test. The tests have been modelled with the finite element code Abaqus with the same models and techniques that were used for the full-scale scenarios in SR-Site. The results have been compared with the measured results, which has yielded very valuable information about the relevance of the material models and the modelling technique. An elastic-plastic material model was used for the bentonite, where the stress-strain relations were derived from laboratory tests. The material model is a function of both the density and the strain rate at shear. Since the shear is fast and takes place under undrained conditions, the density does not change during the tests. However, the strain rate varies largely with both the location of the elements and time. This can be taken into account in Abaqus by making the material model a function of the strain rate for each element. A similar model, based on tensile tests on the copper used in

  15. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
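
    The core of the technique, varying an additional effective diffusivity until predicted and experimental profiles match optimally, can be illustrated with a deliberately simplified scalar sketch (Fick's-law fluxes at a few toy grid points; the real implementation uses FACETS::Core and DAKOTA, not this code, and the numbers below are invented):

```python
def profile_mismatch(d_add, d_model, grad, observed):
    """Sum of squared differences between predicted and 'experimental' fluxes,
    with flux = -(D_model + D_add) * grad(n) at each toy grid point."""
    predicted = [-(d_model + d_add) * g for g in grad]
    return sum((p - o) ** 2 for p, o in zip(predicted, observed))

def best_additional_diffusivity(d_model, grad, observed, lo=0.0, hi=5.0, steps=500):
    """Grid scan for the additional diffusivity minimizing the mismatch."""
    candidates = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(candidates, key=lambda d: profile_mismatch(d, d_model, grad, observed))

# Toy density gradients; "experimental" fluxes generated with D_true = 1.5
grad = [-0.2, -0.5, -1.0, -2.0]
d_model = 0.5                        # what the tested transport model supplies
observed = [1.5 * -g for g in grad]  # flux = -D_true * grad(n)
print(best_additional_diffusivity(d_model, grad, observed))  # → 1.0
```

    The recovered value (1.0) is exactly the diffusivity the tested model is missing, which is the diagnostic the additive flux minimization technique extracts, pointwise along the profile, in the real V and V workflow.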

  16. Validation of transport models using additive flux minimization technique

    International Nuclear Information System (INIS)

    Pankin, A. Y.; Kruger, S. E.; Groebner, R. J.; Hakim, A.; Kritz, A. H.; Rafiq, T.

    2013-01-01

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile

  17. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    Energy Technology Data Exchange (ETDEWEB)

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    in the wellbore); and (3) accurate approaches to account for the effects of reservoir heterogeneity and for the optimization of nonconventional well deployment. An overview of our progress in each of these main areas is as follows. A general purpose object-oriented research simulator (GPRS) was developed under this project. The GPRS code is managed using modern software management techniques and has been deployed to many companies and research institutions. The simulator includes general black-oil and compositional modeling modules. The formulation is general in that it allows for the selection of a wide variety of primary and secondary variables and accommodates varying degrees of solution implicitness. Specifically, we developed and implemented an IMPSAT procedure (implicit in pressure and saturation, explicit in all other variables) for compositional modeling as well as an adaptive implicit procedure. Both of these capabilities allow for efficiency gains through selective implicitness. The code treats cell connections through a general connection list, which allows it to accommodate both structured and unstructured grids. The GPRS code was written to be easily extendable so new modeling techniques can be readily incorporated. Along these lines, we developed a new dual porosity module compatible with the GPRS framework, as well as a new discrete fracture model applicable for fractured or faulted reservoirs. Both of these methods display substantial advantages over previous implementations. Further, we assessed the performance of different preconditioners in an attempt to improve the efficiency of the linear solver. As a result of this investigation, substantial improvements in solver performance were achieved.

  18. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B. [Pennsylvania State Univ., University Park, PA (United States); Fagan, J.R. Jr. [Allison Engine Company, Indianapolis, IN (United States)

    1995-10-01

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbo-machinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution periodic and aperiodic components arising from various blade rows and non-deterministic (which includes random components) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery and drastically reduce the time required for the design and development cycle of turbomachinery.

  19. Soft computing techniques toward modeling the water supplies of Cyprus.

    Science.gov (United States)

    Iliadis, L; Maris, F; Tachos, S

    2011-10-01

    This research effort aims at the application of soft computing techniques to water resources management. More specifically, the target is the development of reliable soft computing models capable of estimating the water supply for the case of the "Germasogeia" mountainous watersheds in Cyprus. Initially, ε-Regression Support Vector Machine (ε-RSVM) and fuzzy weighted ε-RSVM models have been developed that accept five input parameters. At the same time, reliable artificial neural networks have been developed to perform the same job. The 5-fold cross-validation approach has been employed in order to eliminate bad local behaviors and to produce a more representative training data set. Thus, fuzzy weighted Support Vector Regression (SVR) combined with the fuzzy partition has been employed in an effort to enhance the quality of the results. Several rational and reliable models have been produced that can enhance the efficiency of water policy designers. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Mechanical Properties of Nanostructured Materials Determined Through Molecular Modeling Techniques

    Science.gov (United States)

    Clancy, Thomas C.; Gates, Thomas S.

    2005-01-01

    The potential for gains in material properties over conventional materials has motivated an effort to develop novel nanostructured materials for aerospace applications. These novel materials typically consist of a polymer matrix reinforced with particles on the nanometer length scale. In this study, molecular modeling is used to construct fully atomistic models of a carbon nanotube embedded in an epoxy polymer matrix. Functionalization of the nanotube, which consists of the introduction of direct chemical bonding between the polymer matrix and the nanotube and hence provides a load transfer mechanism, is systematically varied. The relative effectiveness of functionalization in a nanostructured material may depend on a variety of factors related to the details of the chemical bonding and the polymer structure at the nanotube-polymer interface. The objective of this modeling is to determine what influence the details of functionalization of the carbon nanotube with the polymer matrix have on the resulting mechanical properties. By considering a range of degrees of functionalization, the structure-property relationships of these materials are examined, and the mechanical properties of these models are calculated using standard techniques.

  1. Controls/CFD Interdisciplinary Research Software Generates Low-Order Linear Models for Control Design From Steady-State CFD Results

    Science.gov (United States)

    Melcher, Kevin J.

    1997-01-01

    The NASA Lewis Research Center is developing analytical methods and software tools to create a bridge between the controls and computational fluid dynamics (CFD) disciplines. Traditionally, control design engineers have used coarse nonlinear simulations to generate information for the design of new propulsion system controls. However, such traditional methods are not adequate for modeling the propulsion systems of complex, high-speed vehicles like the High Speed Civil Transport. To properly model the relevant flow physics of high-speed propulsion systems, one must use simulations based on CFD methods. Such CFD simulations have become useful tools for engineers that are designing propulsion system components. The analysis techniques and software being developed as part of this effort are an attempt to evolve CFD into a useful tool for control design as well. One major aspect of this research is the generation of linear models from steady-state CFD results. CFD simulations, often used during the design of high-speed inlets, yield high resolution operating point data. Under a NASA grant, the University of Akron has developed analytical techniques and software tools that use these data to generate linear models for control design. The resulting linear models have the same number of states as the original CFD simulation, so they are still very large and computationally cumbersome. Model reduction techniques have been successfully applied to reduce these large linear models by several orders of magnitude without significantly changing the dynamic response. The result is an accurate, easy to use, low-order linear model that takes less time to generate than those generated by traditional means. The development of methods for generating low-order linear models from steady-state CFD is most complete at the one-dimensional level, where software is available to generate models with different kinds of input and output variables. One-dimensional methods have been extended

  2. [Hierarchy structuring for mammography technique by interpretive structural modeling method].

    Science.gov (United States)

    Kudo, Nozomi; Kurowarabi, Kunio; Terashita, Takayoshi; Nishimoto, Naoki; Ogasawara, Katsuhiko

    2009-10-20

    Participation in screening mammography is currently encouraged in Japan because of the increase in breast cancer morbidity. However, the pain and discomfort of mammography are recognized as a significant deterrent for women considering this examination. Thus quick procedures, sufficient experience, and advanced skills are required of radiologic technologists. The aim of this study was to make the key points of the imaging technique explicit and to help technologists understand the complicated procedure. We interviewed 3 technologists who were highly skilled in mammography, and 14 factors were retrieved by using brainstorming and the KJ method. We then applied Interpretive Structural Modeling (ISM) to the factors and developed a hierarchical concept structure. The result showed a six-layer hierarchy whose top node was explanation of the entire mammography procedure. The presence of male technologists was identified as a negative factor. Factors concerned with explanation were at the upper nodes. We gave particular attention to X-ray techniques and considerations. The findings will help beginners improve their skills.
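
    The ISM level partitioning used to build such a hierarchy repeatedly extracts the factors whose reachability set is contained in their antecedent set, assigns them to the current level, and removes them. A schematic stdlib-Python sketch (the 4-factor reachability matrix is hypothetical, not the paper's 14 mammography factors):

```python
def ism_levels(reach):
    """Partition factors into hierarchy levels from a reachability matrix.

    reach[i][j] == 1 means factor i reaches (influences) factor j. The matrix
    is assumed reflexive and transitively closed, as in standard ISM.
    """
    n = len(reach)
    remaining = set(range(n))
    levels = []
    while remaining:
        level = set()
        for i in remaining:
            R = {j for j in remaining if reach[i][j]}  # reachability set
            A = {j for j in remaining if reach[j][i]}  # antecedent set
            if R & A == R:  # R subset of A: factor belongs to the current level
                level.add(i)
        levels.append(sorted(level))
        remaining -= level
    return levels

# Hypothetical 4-factor chain: factor 0 influences 1, 1 influences 2, etc.
M = [
    [1, 1, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
]
print(ism_levels(M))  # → [[3], [2], [1], [0]]
```

    Each extracted level forms one layer of the ISM digraph, so a six-layer result like the one reported above corresponds to six passes of this extraction loop.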

  3. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    KAUST Repository

    Khaki, M.

    2017-07-06

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as particle filters (PF) using two different resampling approaches, multinomial resampling and systematic resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS data are assimilated into W3RA over the whole of Australia. To evaluate the filters' performance and analyze their impact on model simulations, their estimates are validated against independent in-situ measurements. Our results indicate that all implemented filters improve the water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), which reduce the model's groundwater estimation errors by 34% and 31%, respectively, compared to a model run without assimilation. Applying the PF along with systematic resampling successfully decreases the model estimation error by 23%.
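
    As a rough illustration of the stochastic EnKF analysis step among the filters compared above (reduced to a scalar state observed directly, H = 1; the actual study assimilates gridded GRACE TWS into W3RA, and all numbers below are invented):

```python
import random
import statistics

def enkf_update(ensemble, obs, obs_var, seed=0):
    """Stochastic EnKF analysis step for a scalar state with H = 1."""
    rng = random.Random(seed)
    var_f = statistics.variance(ensemble)   # forecast ensemble variance
    gain = var_f / (var_f + obs_var)        # Kalman gain for a direct observation
    # Perturb the observation for each member (the "stochastic" EnKF variant);
    # a deterministic EnKF such as EnSRF avoids this perturbation step.
    return [x + gain * (obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

# Forecast TWS ensemble (arbitrary units) updated with an observation of 10.0
forecast = [8.0, 9.0, 9.5, 10.5, 11.0]
analysis = enkf_update(forecast, obs=10.0, obs_var=0.25)
print(statistics.mean(forecast), statistics.mean(analysis))
```

    The analysis ensemble is pulled toward the perturbed observations with a weight set by the forecast-to-observation variance ratio, and its spread shrinks accordingly, which is the mechanism all EnKF variants in the comparison share.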

  4. Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Palmer, Kevin [Teck Resources Limited (Canada); Deutsch, Clayton V.; Szymanski, Jozef [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Etsell, Thomas H. [University of Alberta, Department of Chemical and Materials Engineering (Canada)

    2016-06-15

    High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scale, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and are rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.

  5. A local isotropic/global orthotropic finite element technique for modeling the crush of wood in impact limiters

    International Nuclear Information System (INIS)

    Attaway, S.W.; Yoshimura, H.R.

    1989-01-01

    Wood is often used as the energy absorbing material in impact limiters, because it begins to crush at low strains, then maintains a near constant crush stress up to nearly 60% volume reduction, and then locks up. Hill (Hill and Joseph, 1974) has performed tests that show that wood is an excellent absorber. However, wood's orthotropic behavior for large crush is difficult to model. In the past, analysts have used isotropic foam-like material models for modeling wood. A new finite element technique is presented in this paper that gives a better model of wood crush than the model currently in use. The orthotropic technique is based on locally isotropic, but globally orthotropic (LIGO) (Attaway, 1988) assumptions in which alternating layers of hard and soft crushable material are used. Each layer is isotropic; however, by alternating hard and soft thin layers, the resulting global behavior is orthotropic. In the remainder of this paper, the new technique for modeling orthotropic wood crush will be presented. The model is used to predict the crush behavior for different grain orientations of balsa wood. As an example problem, an impact limiter containing balsa wood as the crushable material is analyzed using both an isotropic model and the LIGO model

  6. Accuracy Enhanced Stability and Structure Preserving Model Reduction Technique for Dynamical Systems with Second Order Structure

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    A method for model reduction of dynamical systems with the second order structure is proposed in this paper. The proposed technique preserves the second order structure of the system, and also preserves the stability of the original systems. The method uses the controllability and observability...... gramians within the time interval to build the appropriate Petrov-Galerkin projection for dynamical systems within the time interval of interest. The bound on approximation error is also derived. The numerical results are compared with the counterparts from other techniques. The results confirm...

  7. Prediction of Monthly Summer Monsoon Rainfall Using Global Climate Models Through Artificial Neural Network Technique

    Science.gov (United States)

    Nair, Archana; Singh, Gurjeet; Mohanty, U. C.

    2018-01-01

    The monthly prediction of summer monsoon rainfall is very challenging because of its complex and chaotic nature. In this study, a non-linear technique known as Artificial Neural Network (ANN) has been employed on the outputs of Global Climate Models (GCMs) to bring out the vagaries inherent in monthly rainfall prediction. The GCMs that are considered in the study are from the International Research Institute (IRI) (2-tier CCM3v6) and the National Centre for Environmental Prediction (Coupled-CFSv2). The ANN technique is applied on different ensemble members of the individual GCMs to obtain monthly scale prediction over India as a whole and over its spatial grid points. In the present study, a double-cross-validation and simple randomization technique was used to avoid the over-fitting during training process of the ANN model. The performance of the ANN-predicted rainfall from GCMs is judged by analysing the absolute error, box plots, percentile and difference in linear error in probability space. Results suggest that there is significant improvement in prediction skill of these GCMs after applying the ANN technique. The performance analysis reveals that the ANN model is able to capture the year to year variations in monsoon months with fairly good accuracy in extreme years as well. ANN model is also able to simulate the correct signs of rainfall anomalies over different spatial points of the Indian domain.
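
    The double-cross-validation with simple randomization described above can be sketched generically: an outer loop holds out a test fold while an inner randomized split carves a validation set from the remainder. The fold count and hold-out fraction below are illustrative assumptions, not the paper's settings:

```python
import random

def double_cross_validation_splits(n_samples, outer_k=5, seed=42):
    """Yield (train, validation, test) index splits. The outer loop holds out
    one fold as the test set; the inner step randomly carves a validation set
    from the remaining samples to guard against over-fitting during training."""
    rng = random.Random(seed)
    idx = list(range(n_samples))
    rng.shuffle(idx)
    folds = [idx[i::outer_k] for i in range(outer_k)]
    for i, test in enumerate(folds):
        rest = [j for k, fold in enumerate(folds) if k != i for j in fold]
        rng.shuffle(rest)
        cut = len(rest) // 4          # hold out ~25% of the rest for validation
        yield rest[cut:], rest[:cut], test

for train, val, test in double_cross_validation_splits(20):
    print(len(train), len(val), len(test))  # → 12 4 4 (five times)
```

    Training stops (or the model is selected) on the validation score, and only the untouched test fold is used for the skill assessment, so the reported performance is not inflated by over-fitting.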

  8. Shaded computer graphic techniques for visualizing and interpreting analytic fluid flow models

    Science.gov (United States)

    Parke, F. I.

    1981-01-01

    Mathematical models which predict the behavior of fluid flow in different experiments are simulated using digital computers. The simulations predict values of parameters of the fluid flow (pressure, temperature and velocity vector) at many points in the fluid. Visualization of the spatial variation in the value of these parameters is important to comprehend and check the data generated, to identify the regions of interest in the flow, and to communicate information about the flow effectively to others. State-of-the-art imaging techniques developed in the field of three-dimensional shaded computer graphics are applied to the visualization of fluid flow. Use of an imaging technique known as 'SCAN' for visualizing fluid flow is studied and the results are presented.

  9. Effect of Load Model Using Ranking Identification Technique for Multi Type DG Incorporating Embedded Meta EP-Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Abdul Rahim Siti Rafidah

    2018-01-01

    Full Text Available This paper presents the effect of the load model on distributed generation (DG) planning in a distribution system. To achieve optimal DG allocation and placement, a ranking identification technique is proposed for studying DG planning using the pre-developed Embedded Meta Evolutionary Programming-Firefly Algorithm. The aim of this study is to analyse the effect of different DG types on reducing total losses while considering the load factor. To demonstrate the effectiveness of the proposed technique, the IEEE 33-bus test system was used as the test case. In this study, the proposed technique was used to determine the DG size and a suitable location for DG planning. The results support the DG optimization process for power system operators and planners in the utility, who can choose a suitable size and location from the results obtained in this study within the company's budget. The modeling of voltage-dependent loads is presented, and the results show that voltage-dependent load models have a significant effect on the total losses of a distribution system for different DG types.

  10. Field results of antifouling techniques for optical instruments

    Science.gov (United States)

    Strahle, W.J.; Hotchkiss, F.S.; Martini, Marinna A.

    1998-01-01

    An antifouling technique is developed for protecting optical instruments from biofouling; it leaches a bromide compound into the sample chamber and pumps fresh water into the chamber prior to each measurement. The primary advantage of using bromide is that it is less toxic than metal-based antifoulants. The drawback of the bromide technique is also discussed.

  11. Comparative data compression techniques and multi-compression results

    International Nuclear Information System (INIS)

    Hasan, M R; Ibrahimy, M I; Motakabber, S M A; Ferdaus, M M; Khan, M N H

    2013-01-01

    Data compression is very necessary in business data processing because of the cost savings it offers and the large volume of data manipulated in many business applications. It is a method or system for transmitting a digital image (i.e., an array of pixels) from a digital data source to a digital data receiver. The smaller the data, the better the transmission speed and the greater the time saved. In communication, data should be transmitted efficiently and free of noise. This paper provides several compression techniques for lossless text-type data and compares the results of single and multiple compression, which helps identify the better compression output and supports the development of compression algorithms.
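
A single-versus-double compression comparison of the kind described can be sketched with standard lossless compressors; the sample text and library choices are ours, not the paper's:

```python
import bz2
import lzma
import zlib

# Compare three lossless compressors on repetitive text, then apply a
# second compression pass to see whether it gains anything further.
text = ("business data processing and transmission " * 500).encode()

sizes = {
    "zlib": len(zlib.compress(text, 9)),
    "bz2": len(bz2.compress(text, 9)),
    "lzma": len(lzma.compress(text)),
}

once = zlib.compress(text, 9)
twice = zlib.compress(once, 9)   # second pass sees high-entropy input
```

On redundant text all three compress dramatically, but a second pass typically gains little: the first pass has already removed most of the redundancy, so its output looks nearly random to the second compressor.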

  12. Transcorporeal cervical foraminotomy: description of technique and results

    Directory of Open Access Journals (Sweden)

    Guilherme Pereira Corrêa Meyer

    2014-09-01

    Full Text Available OBJECTIVE: Retrospective analysis of 216 patients undergoing foraminal decompression via a transcorporeal approach, with a review of the surgical technique. METHOD: 216 patients with a minimum follow-up of 2 years (average 41.8 months) were included in the study. The clinical records of these patients were reviewed for complications, NDI (neck disability index) and VAS (visual analogue scale). Pre- and post-operative radiographs were used to evaluate the disc height. RESULTS: At the end of follow-up, patients had significant clinical improvement, with a reduction in NDI of 88.3% and reductions in VAS of 86.5% and 68.3% for the neck and upper limb, respectively (p<0.05). A reduction of 8.8% in disc height was observed, without other associated complications (p<0.05). CONCLUSION: Radicular decompression through a transcorporeal approach is an alternative that provides good clinical results without the need for fusion and with few complications.

  13. 3-D thermo-mechanical laboratory modeling of plate-tectonics: modeling scheme, technique and first experiments

    Directory of Open Access Journals (Sweden)

    D. Boutelier

    2011-05-01

    Full Text Available We present an experimental apparatus for 3-D thermo-mechanical analogue modeling of plate tectonic processes such as oceanic and continental subduction and arc-continent or continental collision. The model lithosphere, made of temperature-sensitive elasto-plastic analogue materials with strain softening, is subjected to a constant temperature gradient causing a strength reduction with depth in each layer. The surface temperature is imposed using infrared emitters, which allows an unobstructed view of the model surface to be maintained and permits the use of a high-resolution optical strain monitoring technique (Particle Image Velocimetry). Subduction experiments illustrate how the stress conditions on the interplate zone can be estimated using a force sensor attached to the back of the upper plate, and adjusted via the density and strength of the subducting lithosphere or the lubrication of the plate boundary. The first experimental results reveal the potential of the experimental set-up to investigate the three-dimensional solid-mechanics interactions of lithospheric plates in multiple natural situations.

  14. Identifying and quantifying energy savings on fired plant using low cost modelling techniques

    International Nuclear Information System (INIS)

    Tucker, Robert; Ward, John

    2012-01-01

    Research highlights: → Furnace models based on the zone method for radiation calculation are described. → Validated steady-state and transient models have been developed. → We show how these simple models can identify the best options for saving energy. → High-emissivity coatings are predicted to give a performance enhancement on a fired heater. → Optimal heat recovery strategies on a steel reheating furnace are predicted. -- Abstract: Combustion in fired heaters, boilers and furnaces often accounts for the major energy consumption of industrial processes. Small improvements in efficiency can result in large reductions in energy consumption, CO2 emissions and operating costs. This paper describes some useful low-cost modelling techniques based on the zone method to help identify energy-saving opportunities on high-temperature fuel-fired process plant. The zone method has for many decades been successfully applied to plant ranging from small batch furnaces through to large steel-reheating furnaces, glass tanks, boilers and fired heaters on petrochemical plant. Zone models can simulate both steady-state furnace operation and the more complex transient operation typical of a production environment. These models can be used to predict thermal efficiency and performance and, more importantly, to assist in identifying and predicting energy-saving opportunities from such measures as: improving air/fuel ratio and temperature controls; improved insulation; use of oxygen or oxygen enrichment; air preheating via flue-gas heat recovery; and modification of furnace geometry and hearth loading. There is also increasing interest in the application of refractory coatings for increasing surface radiation in fired plant. All of the techniques can yield savings ranging from a few percent upwards and can deliver rapid financial payback, but their evaluation often requires robust and reliable models in order to increase confidence in making financial investment decisions. This paper gives
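
The radiative exchange that a zone model aggregates over many zone pairs reduces, for a single pair, to a grey-body Stefan-Boltzmann balance. The sketch below uses illustrative temperatures and emissivities (not values from the paper) and shows why a high-emissivity coating raises the flux to the load:

```python
# Grey-body radiative exchange for one zone pair: net flux per unit area
# from a hot furnace zone to the load surface.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def net_radiant_flux(t_hot_k, t_load_k, emissivity):
    # net exchange scales with the difference of fourth powers
    return emissivity * SIGMA * (t_hot_k ** 4 - t_load_k ** 4)

q_plain = net_radiant_flux(1500.0, 900.0, 0.6)    # uncoated refractory
q_coated = net_radiant_flux(1500.0, 900.0, 0.9)   # high-emissivity coating
```

In this simplified picture the flux scales linearly with emissivity, so raising it from 0.6 to 0.9 raises the flux by half; a full zone model resolves the view factors and gas absorption that this one-pair sketch ignores.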

  15. The commercial use of segmentation and predictive modeling techniques for database marketing in the Netherlands

    NARCIS (Netherlands)

    Verhoef, PC; Spring, PN; Hoekstra, JC; Leeflang, PSH

    Although the application of segmentation and predictive modeling is an important topic in the database marketing (DBM) literature, no study has yet investigated the extent of adoption of these techniques. We present the results of a Dutch survey involving 228 database marketing companies. We find

  16. A hybrid SEA/modal technique for modeling structural-acoustic interior noise in rotorcraft.

    Science.gov (United States)

    Jayachandran, V; Bonilha, M W

    2003-03-01

    This paper describes a hybrid technique that combines Statistical Energy Analysis (SEA) predictions for structural vibration with acoustic modal summation techniques to predict interior noise levels in rotorcraft. The method was applied for predicting the sound field inside a mock-up of the interior panel system of the Sikorsky S-92 helicopter. The vibration amplitudes of the frame and panel systems were predicted using a detailed SEA model and these were used as inputs to the model of the interior acoustic space. The spatial distribution of the vibration field on individual panels, and their coupling to the acoustic space were modeled using stochastic techniques. Leakage and nonresonant transmission components were accounted for using space-averaged values obtained from a SEA model of the complete structural-acoustic system. Since the cabin geometry was quite simple, the modeling of the interior acoustic space was performed using a standard modal summation technique. Sound pressure levels predicted by this approach at specific microphone locations were compared with measured data. Agreement within 3 dB in one-third octave bands above 40 Hz was observed. A large discrepancy in the one-third octave band in which the first acoustic mode is resonant (31.5 Hz) was observed. Reasons for such a discrepancy are discussed in the paper. The developed technique provides a method for modeling helicopter cabin interior noise in the frequency mid-range where neither FEA nor SEA is individually effective or accurate.
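
The modal-summation side of the hybrid method starts from the natural frequencies of the interior acoustic space; for a rigid-walled rectangular cavity these have a closed form. The cabin dimensions below are hypothetical, chosen so the first mode falls near the 31.5 Hz band discussed above:

```python
import math

# Rigid-wall rectangular cavity mode frequencies:
#   f = (c/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2)
def cavity_mode_hz(nx, ny, nz, lx, ly, lz, c=343.0):
    return (c / 2.0) * math.sqrt((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)

# Hypothetical cabin: 5.4 m long, 2.0 m wide, 1.8 m high
f_100 = cavity_mode_hz(1, 0, 0, 5.4, 2.0, 1.8)   # first longitudinal mode
f_010 = cavity_mode_hz(0, 1, 0, 5.4, 2.0, 1.8)   # first lateral mode
```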

  17. Mixed Models and Reduction Techniques for Large-Rotation, Nonlinear Analysis of Shells of Revolution with Application to Tires

    Science.gov (United States)

    Noor, A. K.; Andersen, C. M.; Tanner, J. A.

    1984-01-01

    An effective computational strategy is presented for the large-rotation, nonlinear axisymmetric analysis of shells of revolution. The three key elements of the computational strategy are: (1) use of mixed finite-element models with discontinuous stress resultants at the element interfaces; (2) substantial reduction in the total number of degrees of freedom through the use of a multiple-parameter reduction technique; and (3) reduction in the size of the analysis model through the decomposition of asymmetric loads into symmetric and antisymmetric components coupled with the use of the multiple-parameter reduction technique. The potential of the proposed computational strategy is discussed. Numerical results are presented to demonstrate the high accuracy of the mixed models developed and to show the potential of using the proposed computational strategy for the analysis of tires.

  18. Establishment of 60Co dose calibration curve using fluorescent in situ hybridization assay technique: Result of preliminary study

    International Nuclear Information System (INIS)

    Rahimah Abdul Rahim; Noriah Jamal; Noraisyah Mohd Yusof; Juliana Mahamad Napiah; Nelly Bo Nai Lee

    2010-01-01

    This study aims to establish an in-vitro 60Co dose calibration curve using the fluorescence in situ hybridization (FISH) assay technique for the Malaysian National Biodosimetry Laboratory. Blood samples collected from a healthy female donor were irradiated with several doses of 60Co radiation. Following culturing of the lymphocytes, microscope slides were prepared, denatured and hybridized. The frequencies of translocations were estimated in the metaphases. A calibration curve was then generated using a regression technique; it shows a good fit to a linear-quadratic model. The results of this study may be useful for retrospectively estimating the absorbed dose of individuals exposed to ionizing radiation. This information may serve as a guide for medical treatment in the assessment of possible health consequences. (author)
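
The linear-quadratic regression reported above, Y = c + alpha*D + beta*D^2, can be sketched as an ordinary polynomial fit; the translocation yields below are synthetic, for illustration only:

```python
import numpy as np

# Fit a linear-quadratic dose-response curve to (dose, yield) pairs.
dose = np.array([0.0, 0.5, 1.0, 2.0, 3.0, 4.0])      # Gy
yields = 0.001 + 0.02 * dose + 0.06 * dose ** 2      # translocations/cell (synthetic)

# np.polyfit returns coefficients highest power first: [beta, alpha, c]
beta, alpha, c = np.polyfit(dose, yields, 2)
```

On noise-free synthetic data the fit recovers the generating coefficients exactly; real calibration fits additionally weight each dose point by its cell count and scoring uncertainty.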

  19. Energetic neutral atom imaging with the Polar CEPPAD/IPS instrument: Initial forward modeling results

    International Nuclear Information System (INIS)

    Henderson, M.G.; Reeves, G.D.; Moore, K.R.; Spence, H.E.; Jorgensen, A.M.; Roelof, E.C.

    1997-01-01

    Although the primary function of the CEPPAD/IPS instrument on Polar is the in situ measurement of energetic ions, it has also proven to be a very capable Energetic Neutral Atom (ENA) imager. Raw ENA images are currently being constructed on a routine basis with a temporal resolution of minutes during both active and quiet times. However, while analyses of these images by themselves provide much information on the spatial distribution and dynamics of the energetic ion population in the ring current, detailed modeling is required to extract the actual ion distributions. In this paper, the authors present the initial results of forward modeling an IPS ENA image obtained during a small geomagnetic storm on June 9, 1997. The equatorial ion distribution inferred with this technique reproduces the expected large noon/midnight and dawn/dusk asymmetries. The limitations of the model are discussed, and a number of modifications to the basic forward modeling technique are proposed which should significantly improve its performance in future studies.

  20. Towards a Business Process Modeling Technique for Agile Development of Case Management Systems

    Directory of Open Access Journals (Sweden)

    Ilia Bider

    2017-12-01

    Full Text Available A modern organization needs to adapt its behavior to changes in the business environment by changing its Business Processes (BP) and the corresponding Business Process Support (BPS) systems. One way of achieving such adaptability is via separation of the system code from the process description/model by applying the concept of executable process models. Furthermore, to ease the introduction of changes, such a process model should separate different perspectives, for example the control-flow, human resources and data perspectives, from each other. In addition, for developing a completely new process, it should be possible to start with a reduced process model to get a BPS system running quickly, and then continue to develop it in an agile manner. This article consists of two parts: the first sets requirements on modeling techniques that could be used in tools that support agile development of BPs and BPS systems. The second part suggests a business process modeling technique that allows modeling to start with the data/information perspective, which is appropriate for processes supported by Case or Adaptive Case Management (CM/ACM) systems. In a model produced by this technique, called a data-centric business process model, a process instance/case is defined as a sequence of states in a specially designed instance database, while the process model is defined as a set of rules that place restrictions on allowed states and the transitions between them. The article details the background of the project of developing the data-centric process modeling technique, presents an outline of the structure of the model, and gives formal definitions for a substantial part of the model.
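
The rule-based, data-centric idea, a case as a sequence of states with the model restricting allowed transitions, can be sketched minimally; the state names and rules below are invented for illustration, not taken from the article:

```python
# The "model" is a set of rules mapping each state to its allowed
# successor states; a case is valid if every transition obeys a rule.
RULES = {
    "registered": {"under_review"},
    "under_review": {"approved", "rejected"},
    "approved": set(),
    "rejected": set(),
}

def is_valid_case(states):
    # check every consecutive pair of states against the rules
    return all(b in RULES.get(a, set()) for a, b in zip(states, states[1:]))

ok = is_valid_case(["registered", "under_review", "approved"])
bad = is_valid_case(["registered", "approved"])   # skips mandatory review
```

Adding a new perspective (e.g. who may trigger a transition) then amounts to adding rules rather than rewriting control-flow code, which is the agility argument the article makes.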

  1. Characterising and modelling regolith stratigraphy using multiple geophysical techniques

    Science.gov (United States)

    Thomas, M.; Cremasco, D.; Fotheringham, T.; Hatch, M. A.; Triantifillis, J.; Wilford, J.

    2013-12-01

    -registration, depth correction, etc.) each geophysical profile was evaluated by matching it against the core data. Applying traditional geophysical techniques, the best profiles were inverted using the core data, creating two-dimensional (2-D) stratigraphic regolith models for each transect, which were evaluated using independent validation. Next, in a test of an alternative method borrowed from digital soil mapping, the best preprocessed geophysical profiles were co-registered and stratigraphic models for each property were created using multivariate environmental correlation. After independent validation, the quality of the latter models was compared with that of the traditionally derived 2-D inverted models. Finally, the best overall stratigraphic models were used in conjunction with local environmental data (e.g. geology, geochemistry, terrain, soils) to create conceptual regolith hillslope models for each transect, highlighting important features and processes, e.g. morphology, hydropedology and weathering characteristics. Results are presented with recommendations regarding the use of geophysics in modelling regolith stratigraphy at fine scales.

  2. Image-Based Modeling Techniques for Architectural Heritage 3d Digitalization: Limits and Potentialities

    Science.gov (United States)

    Santagati, C.; Inzerillo, L.; Di Paola, F.

    2013-07-01

    3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collections to rapidly build detailed 3D models. The simultaneous application of different algorithms (MVS) and the different techniques of image matching, feature extraction and mesh optimization form an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing the user to fulfill other tasks on the computer, whereas desktop systems demand long processing times and heavyweight workflows. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches to verify metric accuracy are few, and none addresses Autodesk 123D Catch applied to architectural heritage documentation. Our approach to this challenging problem is to compare 3D models produced by Autodesk 123D Catch with 3D models produced by terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioner purposes.

  3. Examining Interior Grid Nudging Techniques Using Two-Way Nesting in the WRF Model for Regional Climate Modeling

    Science.gov (United States)

    This study evaluates interior nudging techniques using the Weather Research and Forecasting (WRF) model for regional climate modeling over the conterminous United States (CONUS) using a two-way nested configuration. NCEP–Department of Energy Atmospheric Model Intercomparison Pro...

  4. Structural and geochemical techniques for the hydrogeological characterisation and stochastic modelling of fractured media

    International Nuclear Information System (INIS)

    Vela, A.; Elorza, F.J.; Florez, F.; Paredes, C.; Mazadiego, L.; Llamas, J.F.; Perez, E.; Vives, L.; Carrera, J.; Munoz, A.; De Vicente, G.; Casquet, C.

    1999-01-01

    Safety analysis of radioactive waste storage systems requires studies of fractured rock. Performance assessment studies of this type of problem include the development of radionuclide flow and transport models to predict the evolution of possible contaminants released from the repository to the biosphere. The methodology developed in the HIDROBAP project, and some results obtained from its application to the El Berrocal granite batholith, are presented. It integrates modern tools belonging to different disciplines. A Discrete Fracture Network (DFN) model was selected to simulate the fractured medium, and a 3D finite element flow and transport model that includes inverse problem techniques has been coupled to the DFN model to simulate water movement through the fracture network system. Preliminary results show that this integrated methodology can be very useful for the hydrogeological characterisation of fractured rock media. (author)

  5. BIOMEHANICAL MODEL OF THE GOLF SWING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Milan Čoh

    2011-08-01

    Full Text Available Golf is an extremely complex game which depends on a number of interconnected factors. One of the most important elements is undoubtedly the golf swing technique. High performance in the golf swing technique is generated by: the level of motor abilities, a high degree of movement control, the level of movement structure stabilisation, morphological characteristics, inter- and intra-muscular coordination, motivation, and concentration. The golf swing technique was investigated using the biomechanical analysis method. Kinematic parameters were registered using two synchronised high-speed cameras at a frequency of 2,000 Hz. The sample of subjects consisted of three professional golf players. The study results showed a relatively high variability of the swing technique. The maximum velocity of the ball after a wood swing ranged from 227 to 233 km/h. The velocity of the ball after an iron swing was lower by 10 km/h on average. The elevation angle of the ball ranged from 11.7 to 15.3 degrees. In the final phase of the golf swing, i.e. the downswing, the trunk rotators play the key role.

  6. Pressure Measurement Techniques for Abdominal Hypertension: Conclusions from an Experimental Model.

    Science.gov (United States)

    Chopra, Sascha Santosh; Wolf, Stefan; Rohde, Veit; Freimann, Florian Baptist

    2015-01-01

    Introduction. Intra-abdominal pressure (IAP) measurement is an indispensable tool for the diagnosis of abdominal hypertension. Different techniques have been described in the literature and applied in the clinical setting. Methods. A porcine model was created to simulate an abdominal compartment syndrome ranging from baseline IAP to 30 mmHg. Three different measurement techniques were applied, comprising telemetric piezoresistive probes at two different sites (epigastric and pelvic) for direct pressure measurement and intragastric and intravesical probes for indirect measurement. Results. The mean difference between the invasive IAP measurements using telemetric pressure probes and the IVP measurements was -0.58 mmHg. The bias between the invasive IAP measurements and the IGP measurements was 3.8 mmHg. Compared to the realistic results of the intraperitoneal and intravesical measurements, the intragastric data showed a strong tendency towards decreased values. The hydrostatic character of the IAP was eliminated at high-pressure levels. Conclusion. We conclude that intragastric pressure measurement is potentially hazardous and might lead to inaccurately low intra-abdominal pressure values. This may result in missed diagnosis of elevated abdominal pressure or even ACS. The intravesical measurements showed the most accurate values during baseline pressure and both high-pressure plateaus.
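
The bias figures quoted above (e.g. a mean difference of -0.58 mmHg between techniques) come from a standard paired-difference analysis. A minimal sketch, with invented paired readings rather than the study's data:

```python
# Mean difference (bias) and 95% limits of agreement between a reference
# measurement technique and a test technique, Bland-Altman style.
def bias_and_limits(reference, test):
    diffs = [t - r for r, t in zip(reference, test)]
    mean_diff = sum(diffs) / len(diffs)
    sd = (sum((d - mean_diff) ** 2 for d in diffs) / (len(diffs) - 1)) ** 0.5
    return mean_diff, (mean_diff - 1.96 * sd, mean_diff + 1.96 * sd)

iap = [5.0, 10.0, 15.0, 20.0, 25.0, 30.0]     # direct readings, mmHg (invented)
ivp = [4.5, 9.8, 14.6, 19.5, 24.4, 29.7]      # intravesical readings (invented)
bias, limits = bias_and_limits(iap, ivp)
```

A bias near zero with narrow limits of agreement is what qualifies an indirect technique as a substitute for direct measurement; a systematic negative bias is exactly the hazard the abstract flags for the intragastric route.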

  7. Development of groundwater flow modeling techniques for low-level radwaste disposal (III)

    Energy Technology Data Exchange (ETDEWEB)

    Bae, Dae-Seok; Kim, Chun-Soo; Kim, Kyung-Soo; Park, Byung-Yoon; Koh, Yong-Kweon; Park, Hyun-Soo [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2000-12-01

    The project aims to establish a methodology for hydrogeologic assessment through field application of the evaluation techniques gained and accumulated from previous hydrogeological research work in Korea. The results of the project and their possible areas of application are (1) acquisition of detailed hydrogeologic information using a borehole televiewer and a multipacker system, (2) establishment of an integrated hydrogeological assessment method for fractured rocks, (3) acquisition of the fracture parameters for fracture modeling, (4) an inversion analysis of hydraulic parameters from fracture network modeling, (5) geostatistical methods for the spatial assignment of hydraulic parameters for fractured rocks, and (6) establishment of the groundwater flow modeling procedure for a repository. 75 refs., 72 figs., 34 tabs. (Author)

  8. Proposal of a congestion control technique in LAN networks using an econometric model ARIMA

    Directory of Open Access Journals (Sweden)

    Joaquín F Sánchez

    2017-01-01

    Full Text Available Hasty software development can produce immediate implementations with unnecessarily complex and hardly readable source code. These small forms of software decay generate technical debt that can be big enough to seriously affect future maintenance activities. This work presents an analysis technique for identifying architectural technical debt related to non-uniformity of naming patterns; the technique is based on term frequency over package hierarchies. The proposal has been evaluated on projects of two popular organizations, Apache and Eclipse. The results show that most of the projects have frequent occurrences of the proposed naming patterns, and that using a graph model and aggregated data could enable the elaboration of simple queries for debt identification. The technique has features that favor its applicability to emergent architectures and agile software development.

  9. Comparison of Analysis and Spectral Nudging Techniques for Dynamical Downscaling with the WRF Model over China

    OpenAIRE

    Ma, Yuanyuan; Yang, Yi; Mai, Xiaoping; Qiu, Chongjian; Long, Xiao; Wang, Chenghai

    2016-01-01

    To overcome the problem that the horizontal resolution of global climate models may be too low to resolve features which are important at the regional or local scales, dynamical downscaling has been extensively used. However, dynamical downscaling results generally drift away from large-scale driving fields. The nudging technique can be used to balance the performance of dynamical downscaling at large and small scales, but the performances of the two nudging techniques (analysis nudging and s...

  10. Dynamic model reduction: An overview of available techniques with application to power systems

    Directory of Open Access Journals (Sweden)

    Đukić Savo D.

    2012-01-01

    Full Text Available This paper summarises the model reduction techniques used for the reduction of large-scale linear and nonlinear dynamic models described by the differential and algebraic equations commonly used in control theory. The groups of methods discussed in this paper for reduction of linear dynamic models are based on singular perturbation analysis, modal analysis, singular value decomposition, moment matching, and combinations of singular value decomposition and moment matching. Among the nonlinear dynamic model reduction methods, proper orthogonal decomposition, the trajectory piecewise linear method, balancing-based methods, reduction by optimising system matrices, and projection from a linearised model are described. Part of the paper is devoted to the techniques commonly used for the reduction (equivalencing) of large-scale power systems, which are based on coherency, synchrony, singular perturbation analysis, modal analysis and identification. Two of the most interesting of the described techniques are applied to the reduction of the commonly used New England 10-generator, 39-bus test power system.

  11. Sabots, Obturator and Gas-In-Launch Tube Techniques for Heat Flux Models in Ballistic Ranges

    Science.gov (United States)

    Bogdanoff, David W.; Wilder, Michael C.

    2013-01-01

    For thermal protection system (heat shield) design for space vehicle entry into Earth's and other planetary atmospheres, it is essential to know the augmentation of the heat flux due to vehicle surface roughness. At the NASA Ames Hypervelocity Free Flight Aerodynamic Facility (HFFAF) ballistic range, a campaign of heat flux studies on rough models, using infrared camera techniques, has been initiated. Several phenomena can interfere with obtaining good heat flux data when using this measuring technique. These include leakage of the hot drive gas in the gun barrel through joints in the sabot (model carrier), which creates spurious thermal imprints on the model forebody; deposition of sabot material on the model forebody, which changes the thermal properties of the model surface; and unknown in-barrel heating of the model. This report presents developments in launch techniques to greatly reduce or eliminate these problems. The techniques include the use of obturator cups behind the launch package, enclosed versus open-front sabot designs, and the use of hydrogen gas in the launch tube. Attention also had to be paid to the problem of the obturator drafting behind the model and impacting it. Of the techniques presented, the obturator cups and hydrogen in the launch tube were successful when properly implemented.

  12. Modelling the effects of the sterile insect technique applied to Eldana saccharina Walker in sugarcane

    Directory of Open Access Journals (Sweden)

    L Potgieter

    2012-12-01

    Full Text Available A mathematical model is formulated for the population dynamics of an Eldana saccharina Walker infestation of sugarcane under the influence of partially sterile released insects. The model describes the population growth of, and interaction between, normal and sterile E. saccharina moths in a temporally variable but spatially homogeneous environment. The model consists of a deterministic system of difference equations subject to strictly positive initial data. The primary objective of this model is to determine suitable parameters in terms of which the above population growth and interaction may be quantified, and according to which E. saccharina infestation levels and the associated sugarcane damage may be measured. Although many models describing the sterile insect technique have been formulated in the past, few of them describe the technique for lepidopteran species with more than one life stage and where F1 sterility is relevant. In addition, none of these models considers the technique when fully sterile females and partially sterile males are being released. The model formulated here is also the first to describe the technique applied specifically to E. saccharina, and to consider the economic viability of applying the technique to this species. Pertinent decision support is provided to farm managers in terms of the best timing for releases, release ratios and release frequencies.
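
The core mechanism of a sterile-release difference-equation model can be sketched in a deliberately toy form (one life stage, fully sterile males, invented parameters, so far simpler than the multi-stage F1-sterility model of the paper): with S sterile males released each generation, only a fraction N/(N + S) of matings is fertile, which suppresses growth.

```python
# Toy sterile insect technique model: N_{t+1} = lambda * N_t * N_t/(N_t + S),
# where lambda is the per-generation growth factor and S the sterile release.
def simulate(n0, growth, sterile_release, generations):
    n = n0
    history = [n]
    for _ in range(generations):
        fertile_fraction = n / (n + sterile_release)   # chance of fertile mating
        n = growth * n * fertile_fraction
        history.append(n)
    return history

no_release = simulate(1000.0, 3.0, 0.0, 10)        # unchecked growth
with_release = simulate(1000.0, 3.0, 5000.0, 10)   # 5:1 overflooding ratio
```

With no releases the toy population triples each generation; with a sufficient overflooding ratio the fertile fraction drops below 1/growth and the population collapses, which is the qualitative behaviour release-ratio decision support is built on.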

  13. The application of neural networks with artificial intelligence technique in the modeling of industrial processes

    International Nuclear Information System (INIS)

    Saini, K. K.; Saini, Sanju

    2008-01-01

    Neural networks are a relatively new artificial intelligence technique that emulates the behavior of biological neural systems in digital software or hardware. These networks can automatically 'learn' complex relationships among data. This feature makes the technique very useful in modeling processes for which mathematical modeling is difficult or impossible. The work described here outlines some examples of the application of neural networks as an artificial intelligence technique in the modeling of industrial processes.

  14. Use of hydrological modelling and isotope techniques in Guvenc basin

    International Nuclear Information System (INIS)

    Altinbilek, D.

    1991-07-01

    The study covers the work performed under Project No. 335-RC-TUR-5145 entitled ''Use of Hydrologic Modelling and Isotope Techniques in Guvenc Basin'' and is an initial part of a program for estimating runoff from Central Anatolia Watersheds. The study presented herein consists mainly of three parts: 1) the acquisition of a library of rainfall excess, direct runoff and isotope data for Guvenc basin; 2) the modification of the SCS model to be applied first to Guvenc basin and then to other basins of Central Anatolia for predicting surface runoff from gaged and ungaged watersheds; and 3) the use of the environmental isotope technique to define the basin components of streamflow of Guvenc basin. 31 refs, figs and tabs

  15. Modeling of an Aged Porous Silicon Humidity Sensor Using ANN Technique

    Directory of Open Access Journals (Sweden)

    Tarikul ISLAM

    2006-10-01

    Full Text Available A porous silicon (PS) sensor based on the capacitive technique for measuring relative humidity has the advantages of low cost, ease of fabrication with controlled structure, and CMOS compatibility. But the response of the sensor is a nonlinear function of humidity and suffers from errors due to aging and poor stability. One adaptive linear (ADALINE) ANN model has been developed to model the behavior of the sensor with a view to estimating these errors and compensating for them. The response of the sensor is represented by a third-order polynomial basis function whose coefficients are determined by the ANN technique. The drift in sensor output due to aging of the PS layer is also modeled by adapting the weights of the polynomial function. ANN-based modeling is found to be more suitable than conventional physical modeling of the PS humidity sensor in a changing environment and under drift due to aging. It helps online estimation of nonlinearity as well as monitoring of faults of the PS humidity sensor using the coefficients of the model.
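
    The fitting step described above — a third-order polynomial whose coefficients are adapted online — can be sketched with the classic ADALINE/LMS rule. The learning rate, epoch count, and the synthetic response curve below are illustrative assumptions, not the authors' values:

```python
def adaline_fit_poly3(xs, ys, lr=0.1, epochs=2000):
    """Fit y ~ w0 + w1*x + w2*x^2 + w3*x^3 with the ADALINE (LMS) rule:
    after each sample, nudge every weight along err * input."""
    w = [0.0, 0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = (1.0, x, x * x, x ** 3)
            err = y - sum(wi * pi for wi, pi in zip(w, phi))
            w = [wi + lr * err * pi for wi, pi in zip(w, phi)]
    return w

def predict(w, x):
    return w[0] + w[1] * x + w[2] * x * x + w[3] * x ** 3

# Demo: recover a known cubic response sampled on [0, 1].
xs = [i / 20.0 for i in range(21)]
ys = [0.2 + 0.5 * x - 0.3 * x * x + 0.1 * x ** 3 for x in xs]
w = adaline_fit_poly3(xs, ys)
```

    Drift compensation then amounts to re-running the fit on fresh calibration points and tracking how the coefficients move over time.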

  16. Evaluation of mesh morphing and mapping techniques in patient specific modeling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2013-01-01

    Robust generation of pelvic finite element models is necessary to understand the variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmark-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity-based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis and their strain distributions evaluated. Morphing and mapping techniques were effectively applied to generate good-quality, geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.

  17. Evaluation of mesh morphing and mapping techniques in patient specific modelling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2012-08-01

    Robust generation of pelvic finite element models is necessary to understand variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmark-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity-based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis, and their strain distributions were evaluated. Morphing and mapping techniques were effectively applied to generate good-quality, geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.

  18. New Diagnostic, Launch and Model Control Techniques in the NASA Ames HFFAF Ballistic Range

    Science.gov (United States)

    Bogdanoff, David W.

    2012-01-01

    This report presents new diagnostic, launch and model control techniques used in the NASA Ames HFFAF ballistic range. High-speed movies were used to view the sabot separation process and the passage of the model through the model splap paper. Cavities in the rear of the sabot, designed to catch the muzzle blast of the gun, were used to control sabot finger separation angles and distances. Inserts were installed in the powder chamber to greatly reduce the ullage volume (empty space) in the chamber. This resulted in much more complete and repeatable combustion of the powder and, hence, much more repeatable muzzle velocities. Sheets of paper or cardstock, impacting one half of the model, were used to control the amplitudes of the model pitch oscillations.

  19. Monosomy 3 by FISH in uveal melanoma: variability in techniques and results.

    Science.gov (United States)

    Aronow, Mary; Sun, Yang; Saunthararajah, Yogen; Biscotti, Charles; Tubbs, Raymond; Triozzi, Pierre; Singh, Arun D

    2012-09-01

    Tumor monosomy 3 confers a poor prognosis in patients with uveal melanoma. We critically review the techniques used for fluorescence in situ hybridization (FISH) detection of monosomy 3 in order to assess variability in practice patterns and to explain differences in results. Significant variability that has likely affected reported results was found in tissue sampling methods, selection of FISH probes, number of cells counted, and the cut-off point used to determine monosomy 3 status. Clinical parameters and specific techniques employed to report FISH results should be specified so as to allow meta-analysis of published studies. FISH-based detection of monosomy 3 in uveal melanoma has not been performed in a standardized manner, which limits conclusions regarding its clinical utility. FISH is a widely available, versatile technology, and when performed optimally has the potential to be a valuable tool for determining the prognosis of uveal melanoma. Copyright © 2012 Elsevier Inc. All rights reserved.

  20. NMR and modelling techniques in structural and conformation analysis

    Energy Technology Data Exchange (ETDEWEB)

    Abraham, R J [Liverpool Univ. (United Kingdom)

    1994-12-31

    The use of Lanthanide Induced Shifts (L.I.S.) and modelling techniques in conformational analysis is presented. The use of Co{sup III} porphyrins as shift reagents is discussed, with examples of their use in the conformational analysis of some heterocyclic amines. (author) 13 refs., 9 figs.

  1. A review of cutting mechanics and modeling techniques for biological materials.

    Science.gov (United States)

    Takabi, Behrouz; Tai, Bruce L

    2017-07-01

    This paper presents a comprehensive survey on the modeling of tissue cutting, including both soft tissue and bone cutting processes. In order to achieve higher accuracy in tissue cutting, a critical process in surgical operations, the meticulous modeling of such processes is important, in particular for surgical tool development and analysis. This review is focused on the mechanical concepts and modeling techniques utilized to simulate tissue cutting, such as cutting forces and chip morphology. These models are presented in two major categories, namely soft tissue cutting and bone cutting. Fracture toughness is commonly used to describe tissue cutting, while the Johnson-Cook material model is often adopted for bone cutting in conjunction with finite element analysis (FEA). In each section, the most recent mathematical and computational models are summarized. The differences and similarities among these models, challenges, novel techniques, and recommendations for future work are discussed within each section. This review aims to provide a broad and in-depth vision of the methods suitable for tissue and bone cutting simulations. Copyright © 2017 IPEM. Published by Elsevier Ltd. All rights reserved.
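
    The Johnson-Cook flow stress mentioned above multiplies a strain-hardening factor, a strain-rate factor, and a thermal-softening factor. A minimal sketch follows; the default constants are illustrative placeholders, not literature values for cortical bone:

```python
import math

def johnson_cook_stress(eps, eps_rate, T,
                        A=50e6, B=101e6, n=0.08, C=0.03, m=1.0,
                        eps_rate0=1.0, T_ref=293.0, T_soft=373.0):
    """Johnson-Cook flow stress (Pa):
    sigma = (A + B*eps**n) * (1 + C*ln(eps_rate/eps_rate0)) * (1 - T***m),
    with homologous temperature T* = (T - T_ref) / (T_soft - T_ref).
    """
    t_star = (T - T_ref) / (T_soft - T_ref)
    hardening = A + B * eps ** n                              # plastic strain term
    rate = 1.0 + C * math.log(max(eps_rate, 1e-12) / eps_rate0)  # strain-rate term
    softening = 1.0 - t_star ** m                             # thermal term
    return hardening * rate * softening
```

    In an FEA bone-cutting simulation this relation supplies the material's flow stress at each integration point from the local strain, strain rate, and temperature.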

  2. IMAGE-BASED MODELING TECHNIQUES FOR ARCHITECTURAL HERITAGE 3D DIGITALIZATION: LIMITS AND POTENTIALITIES

    Directory of Open Access Journals (Sweden)

    C. Santagati

    2013-07-01

    Full Text Available 3D reconstruction from images has undergone a revolution in the last few years. Computer vision techniques use photographs from data set collection to rapidly build detailed 3D models. The simultaneous application of different algorithms (MVS), the different techniques of image matching, feature extraction and mesh optimization are an active field of research in computer vision. The results are promising: the obtained models are beginning to challenge the precision of laser-based reconstructions. Among all the possibilities we can mainly distinguish desktop and web-based packages. The latter offer the opportunity to exploit the power of cloud computing in order to carry out semi-automatic data processing, thus allowing the user to fulfill other tasks on his or her computer; desktop systems, by contrast, demand long processing times and heavyweight workflows. Computer vision researchers have explored many applications to verify the visual accuracy of 3D models, but approaches to verifying metric accuracy are few, and none concerns Autodesk 123D Catch applied to architectural heritage documentation. Our approach to this challenging problem is to compare 3D models by Autodesk 123D Catch and 3D models by terrestrial LIDAR, considering different object sizes, from details (capitals, moldings, bases) to large-scale buildings, for practitioner purposes.

  3. Childhood familial pheochromocytoma. Conflicting results of localization techniques

    International Nuclear Information System (INIS)

    Turner, M.C.; DeQuattro, V.; Falk, R.; Ansari, A.; Lieberman, E.

    1986-01-01

    Childhood familial pheochromocytoma was investigated in four patients by abdominal computed tomographic scan, [131I]metaiodobenzylguanidine scan, and vena caval catecholamine sampling. Results conflicted with surgical findings. Computed tomographic scan identified all four adrenal tumors but missed two midline tumors in one patient. [131I]metaiodobenzylguanidine scan identified two of three adrenal tumors but also suggested extra-adrenal tumors not confirmed at operation in two of three patients. Vena caval sampling for catecholamines confirmed all adrenal tumors but suggested additional tumors not verified at operation in two of three patients. All patients are asymptomatic and have normal urinary catecholamines 15 to 51 months after operation. Because of the frequency of multiple tumors in familial pheochromocytoma, different diagnostic techniques were employed. False-positive results were more frequent with [131I]metaiodobenzylguanidine and vena caval sampling. Reinterpretation of the [131I]metaiodobenzylguanidine scans at a later date led to less false-positive interpretation, although the false-negative rate remained unchanged. More pediatric experience with [131I]metaiodobenzylguanidine scans and vena caval sampling in familial pheochromocytoma is needed. Confirmation of tumor and its localization rest with meticulous surgical exploration.

  4. Transfer of physics detector models into CAD systems using modern techniques

    International Nuclear Information System (INIS)

    Dach, M.; Vuoskoski, J.

    1996-01-01

    Designing high energy physics detectors for future experiments requires sophisticated computer aided design and simulation tools. In order to satisfy the future demands in this domain, modern techniques, methods, and standards have to be applied. We present an interface application, designed and implemented using object-oriented techniques, for the widely used GEANT physics simulation package. It converts GEANT detector models into the future industrial standard, STEP. (orig.)

  5. Cystoscopic-assisted partial cystectomy: description of technique and results

    Directory of Open Access Journals (Sweden)

    Gofrit ON

    2014-10-01

    Full Text Available Ofer N Gofrit,1 Amos Shapiro,1 Ran Katz,1 Mordechai Duvdevani,1 Vladimir Yutkin,1 Ezekiel H Landau,1 Kevin C Zorn,2 Guy Hidas,1 Dov Pode1 1Department of Urology, Hadassah Hebrew University Hospital, Jerusalem, Israel; 2Department of Surgery, Section of Urology, Montreal, Canada Background: Partial cystectomy provides oncological results comparable with those of radical cystectomy in selected patients with invasive bladder cancer without the morbidity associated with radical cystectomy and urinary diversion. We describe a novel technique of partial cystectomy that allows accurate identification of tumor margins while minimizing damage to the rest of the bladder. Methods: During the study period, 30 patients underwent partial cystectomy for invasive high-grade cancer. In 19 patients, the traditional method of tumor identification was used, ie, identifying the tumor by palpation and cystotomy. In eleven patients, after mobilization of the bladder, flexible cystoscopy was done and the light of the cystoscope was pointed toward one edge of the planned resected ellipse around the tumor, thus avoiding cystotomy. Results: Patients who underwent partial cystectomy using the novel method were similar in all characteristics to patients operated on using the traditional technique except for tumor diameter, which was significantly larger in patients operated on using the novel method (4.3±1.5 cm versus 3.11±1.18 cm, P=0.032). Complications were rare in both types of surgery. The 5-year local recurrence-free survival was marginally superior using the novel method (0.8 versus 0.426, P=0.088). Overall, disease-specific and disease-free survival rates were similar. Conclusion: The use of a flexible cystoscope during partial cystectomy is a simple, low-cost maneuver that assists in planning the bladder incision and minimizes injury to the remaining bladder by avoiding the midline cystotomy. Initial oncological results show a trend toward a lower rate of local

  6. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    Full Text Available One of the most interesting aspects of modelling and simulation study is to describe real-world phenomena that have specific properties, especially those that are large in scale and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI’s ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest among users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. A key challenge for ABMS, however, is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, the attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  7. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation study is to describe real-world phenomena that have specific properties, especially those that are large in scale and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest among users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, models can be built easily and applied to a wider range of applications than traditional simulation. A key challenge for ABMS, however, is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, the attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  8. Towards Model Validation and Verification with SAT Techniques

    OpenAIRE

    Gogolla, Martin

    2010-01-01

    After sketching how system development and the UML (Unified Modeling Language) and the OCL (Object Constraint Language) are related, validation and verification with the tool USE (UML-based Specification Environment) is demonstrated. As a more efficient alternative for verification tasks, two approaches using SAT-based techniques are put forward: First, a direct encoding of UML and OCL with Boolean variables and propositional formulas, and second, an encoding employing an...

  9. Evaluation of 3-dimensional superimposition techniques on various skeletal structures of the head using surface models.

    Science.gov (United States)

    Gkantidis, Nikolaos; Schauseil, Michael; Pazera, Pawel; Zorkun, Berna; Katsaros, Christos; Ludwig, Björn

    2015-01-01

    To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data transformed to triangulated surface data. Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non-parametric multivariate models and Bland-Altman difference plots were used for analyses. There was no difference among operators or between time points in the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate (D < 0.05), but the detected structural changes differed significantly between techniques (p<0.05). Bland-Altman difference plots showed that BZ superimposition was comparable to AC, though it presented slightly higher random error. Superimposition of 3D datasets using surface models created from voxel data can provide accurate, precise, and reproducible results, offering also high efficiency and increased post-processing capabilities. In the present study population, the BZ superimposition was comparable to AC, with the added advantage of being applicable to scans with a smaller field of view.
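
    The registration step shared by these superimposition techniques — finding the rigid transform that best aligns corresponding landmarks — has a closed form. Below is a hypothetical 2-D sketch of least-squares rigid superimposition; the study itself works with 3-D surface models, where the rotation is obtained analogously via an SVD:

```python
import math

def rigid_superimpose_2d(src, dst):
    """Least-squares rigid (rotation + translation) landmark superimposition.

    Returns the transformed source points and the RMS residual distance.
    """
    n = len(src)
    csx = sum(x for x, _ in src) / n; csy = sum(y for _, y in src) / n
    cdx = sum(x for x, _ in dst) / n; cdy = sum(y for _, y in dst) / n
    s = [(x - csx, y - csy) for x, y in src]   # centre both landmark sets
    d = [(x - cdx, y - cdy) for x, y in dst]
    # Optimal rotation angle maximises the sum of dot products d_i . R s_i
    num = sum(sx * dy - sy * dx for (sx, sy), (dx, dy) in zip(s, d))
    den = sum(sx * dx + sy * dy for (sx, sy), (dx, dy) in zip(s, d))
    theta = math.atan2(num, den)
    c, si = math.cos(theta), math.sin(theta)
    out = [(c * sx - si * sy + cdx, si * sx + c * sy + cdy) for sx, sy in s]
    rms = math.sqrt(sum((ox - px) ** 2 + (oy - py) ** 2
                        for (ox, oy), (px, py) in zip(out, dst)) / n)
    return out, rms

# Demo: recover a known 30-degree rotation plus translation.
_th = math.radians(30.0)
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
dst = [(math.cos(_th) * x - math.sin(_th) * y + 5.0,
        math.sin(_th) * x + math.cos(_th) * y - 2.0) for x, y in src]
aligned, rms = rigid_superimpose_2d(src, dst)
```

    The RMS residual plays the role of the distance D used above to score accuracy on form-stable anatomical areas.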

  10. [Pseudo-continent perineal colostomy. Results and techniques].

    Science.gov (United States)

    Lasser, P; Dubé, P; Guillot, J M; Elias, D

    1997-09-01

    This prospective study was conducted to assess functional results obtained after pseudo-continent perineal colostomy using the Schmidt procedure. Functional outcome was assessed in 40 patients who had undergone amputation of the rectum for cancer and pseudo-continent perineal colostomy reconstruction between 1989 and 1995 in our institution. The cancer pathology, operative procedure and post-operative care were noted. Morbidity, functional outcome and degree of patient satisfaction were recorded. Mean follow-up was 45 months (18-87) in 100% of the patients. There were no operative deaths. Twenty patients had post-operative complications and 2 patients required early conversion to definitive abdominal colostomy due to severe perineal complications. Functional outcome showed normal continence in 4 patients, air incontinence in 23, occasional minimal leakage in 9 and incontinence requiring iliac colostomy in 2. Eighty-six percent of the patients were highly satisfied or satisfied with their continence capacity. Pseudo-continent perineal colostomy is a reliable technique which can be proposed as an alternative to left iliac colostomy after amputation of the rectum for cancer if a rigorous procedure is applied: careful patient selection, informed consent, rigorous surgical procedure, daily life-long irrigation of the colon.

  11. Nudging technique for scale bridging in air quality/climate atmospheric composition modelling

    Directory of Open Access Journals (Sweden)

    A. Maurizi

    2012-04-01

    Full Text Available The interaction between air quality and climate involves dynamical scales that cover a very wide range. Bridging these scales in numerical simulations is fundamental in studies devoted to megacity/hot-spot impacts on larger scales. A technique based on nudging is proposed as a bridging method that can couple different models at different scales.

    Here, nudging is used to force low-resolution chemical composition models with a run of a high-resolution model on a critical area. A one-year numerical experiment focused on the Po Valley hot spot is performed using the BOLCHEM model to assess the method.

    The results show that the model response is stable to the perturbation induced by the nudging and that, taking the high-resolution run as a reference, the performance of the nudged run improves with respect to the non-forced run. The effect outside the forcing area depends on transport and is significant in a relevant number of events, although it becomes weak on a seasonal or yearly basis.
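
    Schematically, nudging adds a relaxation term that pulls the coarse model's state toward the high-resolution reference inside the forcing area. The following is a toy scalar sketch of that coupling, not BOLCHEM's actual formulation; the tendency term and all parameters are illustrative:

```python
def nudged_step(x, x_ref, dt=0.1, tau=0.5, decay=0.2):
    """One explicit Euler step of dx/dt = -decay*x + (x_ref - x)/tau.

    -decay*x stands in for the coarse model's own tendency; the second
    term relaxes the state toward the high-resolution reference x_ref
    on the nudging timescale tau.
    """
    return x + dt * (-decay * x + (x_ref - x) / tau)

def run(x0, x_ref, steps=300):
    x = x0
    for _ in range(steps):
        x = nudged_step(x, x_ref)
    return x
```

    The steady state is x* = x_ref / (1 + decay*tau), so a shorter tau couples the coarse run more tightly to the high-resolution reference.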

  12. New diagnostic technique for Zeeman-compensated atomic beam slowing: technique and results

    NARCIS (Netherlands)

    Molenaar, P.A.; Straten, P. van der; Heideman, H.G.M.; Metcalf, H.

    1997-01-01

    We have developed a new diagnostic tool for the study of Zeeman-compensated slowing of an alkali atomic beam. Our time-of-flight technique measures the longitudinal velocity distribution of the slowed atoms with a resolution below the Doppler limit of 30 cm/s. Furthermore, it can map

  13. [Preparation of simulate craniocerebral models via three dimensional printing technique].

    Science.gov (United States)

    Lan, Q; Chen, A L; Zhang, T; Zhu, Q; Xu, T

    2016-08-09

    The three-dimensional (3D) printing technique was used to prepare simulated craniocerebral models, which were applied to preoperative planning and surgical simulation. The image data were collected from a PACS system. Image data of skull bone, brain tissue and tumors, cerebral arteries and aneurysms, and functional regions and relative neural tracts of the brain were extracted from thin-slice scans (slice thickness 0.5 mm) of computed tomography (CT), magnetic resonance imaging (MRI, slice thickness 1 mm), computed tomography angiography (CTA), and functional magnetic resonance imaging (fMRI) data, respectively. MIMICS software was applied to reconstruct colored virtual models by identifying and differentiating tissues according to their gray scales. The colored virtual models were then submitted to a 3D printer, which produced life-sized craniocerebral models for surgical planning and surgical simulation. 3D-printed craniocerebral models allowed neurosurgeons to perform complex procedures in specific clinical cases through detailed surgical planning. They offered great convenience for evaluating the size of the spatial fissure of the sellar region before surgery, which helped to optimize surgical approach planning. These 3D models also provided detailed information about the location of aneurysms and their parent arteries, which helped surgeons to choose appropriate aneurysm clips, as well as perform surgical simulation. The models further gave clear indications of the depth and extent of tumors and their relationship to eloquent cortical areas and adjacent neural tracts, which helped avoid surgical damage to important neural structures. As a novel and promising technique, the application of 3D-printed craniocerebral models could improve surgical planning by converting virtual visualization into real life-sized models. It also contributes to the study of functional anatomy.

  14. A new technique for measuring aerosols with moonlight observations and a sky background model

    Science.gov (United States)

    Jones, Amy; Noll, Stefan; Kausch, Wolfgang; Kimeswenger, Stefan; Szyszka, Ceszary; Unterguggenberger, Stefanie

    2014-05-01

    moonlight model is designed for the average atmospheric conditions at Cerro Paranal. The Mie scattering is calculated for the average distribution of aerosol particles, but this input can be modified. We can avoid the airglow emission lines, and near full Moon the airglow continuum can be ignored. In the case study, by comparing the scattered moonlight for the various angles and wavelengths along with the extinction curve from the standard stars, we can iteratively find the optimal aerosol size distribution for the time of observation. We will present this new technique, the results from this case study, and how it can be implemented for investigating aerosols using the X-Shooter archive and other astronomical archives.

  15. Experience with the Large Eddy Simulation (LES) Technique for the Modelling of Premixed and Non-premixed Combustion

    OpenAIRE

    Malalasekera, W; Ibrahim, SS; Masri, AR; Gubba, SR; Sadasivuni, SK

    2013-01-01

    Compared to RANS-based combustion modelling, the Large Eddy Simulation (LES) technique has recently emerged as a more accurate and very adaptable technique for handling complex turbulent interactions in combustion modelling problems. In this paper, the application of the LES-based combustion modelling technique and the validation of models in non-premixed and premixed situations are considered. Two well-defined experimental configurations where high-quality data are available for validation is...

  16. An overview of techniques for linking high-dimensional molecular data to time-to-event endpoints by risk prediction models.

    Science.gov (United States)

    Binder, Harald; Porzelius, Christine; Schumacher, Martin

    2011-03-01

    Analysis of molecular data promises identification of biomarkers for improving prognostic models, thus potentially enabling better patient management. For identifying such biomarkers, risk prediction models can be employed that link high-dimensional molecular covariate data to a clinical endpoint. In low-dimensional settings, a multitude of statistical techniques already exists for building such models, e.g. allowing for variable selection or for quantifying the added value of a new biomarker. We provide an overview of techniques for regularized estimation that transfer this toward high-dimensional settings, with a focus on models for time-to-event endpoints. Techniques for incorporating specific covariate structure are discussed, as well as techniques for dealing with more complex endpoints. Employing gene expression data from patients with diffuse large B-cell lymphoma, some typical modeling issues from low-dimensional settings are illustrated in a high-dimensional application. First, the performance of classical stepwise regression is compared to stage-wise regression, as implemented by a component-wise likelihood-based boosting approach. A second issue arises when artificially transforming the response into a binary variable. The effects of the resulting loss of efficiency and potential bias in a high-dimensional setting are illustrated, and a link to competing risks models is provided. Finally, we discuss conditions for adequately quantifying the added value of high-dimensional gene expression measurements, both at the stage of model fitting and when performing evaluation. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
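
    Component-wise boosting, used above for stage-wise regression, updates a single covariate per step — the one that most reduces the current residual. The sketch below uses squared-error loss for clarity, whereas the cited approach is likelihood-based for time-to-event endpoints; the data and tuning values are illustrative:

```python
import random

def componentwise_boost(X, y, steps=300, nu=0.1):
    """Component-wise least-squares boosting.

    At each step, fit a simple least-squares coefficient for every
    covariate against the current residuals, update only the best one
    by a shrunken amount nu * b, and recompute the residuals.
    Covariates never selected keep a zero coefficient, which is how
    the procedure performs variable selection in high dimensions.
    """
    p = len(X[0])
    beta = [0.0] * p
    resid = list(y)
    for _ in range(steps):
        best_j, best_b, best_gain = 0, 0.0, -1.0
        for j in range(p):
            xj = [row[j] for row in X]
            sxx = sum(v * v for v in xj)
            if sxx == 0.0:
                continue
            b = sum(v * r for v, r in zip(xj, resid)) / sxx
            gain = b * b * sxx  # RSS reduction if j were fully updated by b
            if gain > best_gain:
                best_j, best_b, best_gain = j, b, gain
        beta[best_j] += nu * best_b
        resid = [r - nu * best_b * row[best_j] for r, row in zip(resid, X)]
    return beta

# Demo: 10 covariates, only x0 and x3 informative (synthetic data).
random.seed(1)
X = [[random.gauss(0.0, 1.0) for _ in range(10)] for _ in range(200)]
y = [2.0 * row[0] - 1.5 * row[3] + random.gauss(0.0, 0.1) for row in X]
beta = componentwise_boost(X, y)
```

    The number of steps and the shrinkage nu act as the regularization parameters; in practice they are chosen by cross-validation.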

  17. Using the Continuum of Design Modelling Techniques to Aid the Development of CAD Modeling Skills in First Year Industrial Design Students

    Science.gov (United States)

    Storer, I. J.; Campbell, R. I.

    2012-01-01

    Industrial Designers need to understand and command a number of modelling techniques to communicate their ideas to themselves and others. Verbal explanations, sketches, engineering drawings, computer aided design (CAD) models and physical prototypes are the most commonly used communication techniques. Within design, unlike some disciplines,…

  18. Hawaii Solar Integration Study: Solar Modeling Developments and Study Results; Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Orwig, K.; Corbus, D.; Piwko, R.; Schuerger, M.; Matsuura, M.; Roose, L.

    2012-12-01

    The Hawaii Solar Integration Study (HSIS) is a follow-up to the Oahu Wind Integration and Transmission Study completed in 2010. HSIS focuses on the impacts of higher penetrations of solar energy on the electrical grid and on other generation. HSIS goes beyond the island of Oahu and investigates Maui as well. The study examines reserve strategies, impacts on thermal unit commitment and dispatch, utilization of energy storage, renewable energy curtailment, and other aspects of grid reliability and operation. For the study, high-frequency (2-second) solar power profiles were generated using a new combined Numerical Weather Prediction model/ stochastic-kinematic cloud model approach, which represents the 'sharp-edge' effects of clouds passing over solar facilities. As part of the validation process, the solar data was evaluated using a variety of analysis techniques including wavelets, power spectral densities, ramp distributions, extreme values, and cross correlations. This paper provides an overview of the study objectives, results of the solar profile validation, and study results.
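
    Of the validation metrics listed — ramp distributions and extreme values among them — the first two are straightforward to compute from a power time series. A generic sketch follows; the window lengths and sample series are arbitrary, not the study's definitions:

```python
def ramps(power, window):
    """Power changes over a sliding window of `window` samples."""
    return [power[i + window] - power[i] for i in range(len(power) - window)]

def extreme_ramp(power, window):
    """Largest absolute ramp, a screening statistic for reserve sizing."""
    return max(abs(r) for r in ramps(power, window))
```

    Comparing the distribution of `ramps(...)` between simulated and measured profiles is one way to check that a synthetic solar profile reproduces the "sharp-edge" cloud-passage behaviour.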

  19. Base Oils Biodegradability Prediction with Data Mining Techniques

    Directory of Open Access Journals (Sweden)

    Malika Trabelsi

    2010-02-01

    Full Text Available In this paper, we apply various data mining techniques including continuous numeric and discrete classification prediction models of base oils biodegradability, with emphasis on improving prediction accuracy. The results show that highly biodegradable oils can be better predicted through numeric models. In contrast, classification models did not uncover a similar dichotomy. With the exception of Memory Based Reasoning and Decision Trees, the tested classification techniques achieved high classification prediction accuracy. However, the technique of Decision Trees helped uncover the most significant predictors. A simple classification rule derived from this predictor resulted in good classification accuracy. The application of this rule enables efficient classification of base oils into either low or high biodegradability classes with high accuracy. For the latter, a higher precision biodegradability prediction can be obtained using continuous modeling techniques.
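
    The "simple classification rule derived from a single predictor" idea can be illustrated with a one-threshold rule. The data below are hypothetical, not the paper's base-oil measurements: the sketch scans candidate thresholds on one predictor and keeps the one with the best classification accuracy, which is essentially a depth-one decision tree.

    ```python
    # Hypothetical data: one numeric predictor and a binary biodegradability class.
    data = [(0.2, 0), (0.5, 0), (0.8, 0), (1.1, 0), (1.9, 1),
            (2.3, 1), (2.8, 1), (3.5, 1), (1.0, 0), (2.0, 1)]

    def rule_accuracy(threshold):
        """Accuracy of the rule: predict class 1 iff predictor > threshold."""
        correct = sum(1 for x, label in data if (x > threshold) == (label == 1))
        return correct / len(data)

    # Scan candidate thresholds midway between consecutive sorted predictor values.
    xs = sorted(x for x, _ in data)
    candidates = [(a + b) / 2 for a, b in zip(xs, xs[1:])]
    best = max(candidates, key=rule_accuracy)
    print(best, rule_accuracy(best))
    ```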

  20. Chronology of DIC technique based on the fundamental mathematical modeling and dehydration impact.

    Science.gov (United States)

    Alias, Norma; Saipol, Hafizah Farhah Saipan; Ghani, Asnida Che Abd

    2014-12-01

    A chronology of mathematical models for the heat and mass transfer equation is proposed for the prediction of moisture and temperature behavior during drying using the DIC (Détente Instantanée Contrôlée), or instant controlled pressure drop, technique. The DIC technique has potential as a widely used dehydration method for high-value food, maintaining nutrition and the best possible quality for food storage. The model is governed by the regression model, followed by 2D Fick's and Fourier's parabolic equation and 2D elliptic-parabolic equation in a rectangular slice. The models neglect shrinkage and radiation effects. The simulations of heat and mass transfer equations with parabolic and elliptic-parabolic types through numerical methods based on the finite difference method (FDM) are illustrated. Intel®Core™2Duo processors with Linux operating system and C programming language have been considered as a computational platform for the simulation. Qualitative and quantitative differences between the DIC technique and conventional drying methods are presented as a comparison.
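
    As a rough sketch of an explicit finite-difference method (FDM) for the 2D Fourier heat equation of the kind the abstract describes, the following uses an illustrative grid and a diffusivity typical of moist food; all values are assumptions, not the paper's parameters.

    ```python
    # Explicit finite-difference step for the 2D heat equation
    #   dT/dt = alpha * (d2T/dx2 + d2T/dy2)   on a rectangular slice.
    # The illustrative values keep the stability condition
    #   alpha*dt*(1/dx^2 + 1/dy^2) <= 1/2  comfortably satisfied.

    nx, ny = 12, 12
    dx = dy = 0.01          # m
    alpha = 1.4e-7          # m^2/s, roughly a moist-food thermal diffusivity
    dt = 0.05               # s  -> alpha*dt/dx^2 = 7e-5, well within stability

    # Initial state: hot slab interior (80 C) with a cooler boundary (20 C).
    T = [[20.0 if i in (0, nx - 1) or j in (0, ny - 1) else 80.0
          for j in range(ny)] for i in range(nx)]

    def step(T):
        """One explicit (FTCS) time step over the interior nodes."""
        new = [row[:] for row in T]
        for i in range(1, nx - 1):
            for j in range(1, ny - 1):
                lap = ((T[i+1][j] - 2*T[i][j] + T[i-1][j]) / dx**2 +
                       (T[i][j+1] - 2*T[i][j] + T[i][j-1]) / dy**2)
                new[i][j] = T[i][j] + alpha * dt * lap
        return new

    for _ in range(2000):
        T = step(T)

    print(round(T[1][1], 2))   # a near-boundary cell, visibly cooled
    ```

    The same time-stepping loop, written implicitly or with an elliptic-parabolic coupling, is what the abstract's C/Linux simulation platform would execute at much larger scale.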

  1. Reliability modeling of digital component in plant protection system with various fault-tolerant techniques

    International Nuclear Information System (INIS)

    Kim, Bo Gyung; Kang, Hyun Gook; Kim, Hee Eun; Lee, Seung Jun; Seong, Poong Hyun

    2013-01-01

    Highlights: • Integrated fault coverage is introduced for reflecting characteristics of fault-tolerant techniques in the reliability model of digital protection system in NPPs. • The integrated fault coverage considers the process of fault-tolerant techniques from detection to fail-safe generation process. • With integrated fault coverage, the unavailability of repairable component of DPS can be estimated. • The new developed reliability model can reveal the effects of fault-tolerant techniques explicitly for risk analysis. • The reliability model makes it possible to confirm changes of unavailability according to variation of diverse factors. - Abstract: With the improvement of digital technologies, digital protection systems (DPS) incorporate multiple sophisticated fault-tolerant techniques (FTTs), in order to increase fault detection and to help the system safely perform the required functions in spite of the possible presence of faults. Fault detection coverage is a vital factor of FTT reliability. However, fault detection coverage alone is insufficient to reflect the effects of various FTTs in a reliability model. To reflect the characteristics of FTTs in the reliability model, integrated fault coverage is introduced. The integrated fault coverage considers the process of an FTT from detection to fail-safe generation. A model has been developed to estimate the unavailability of repairable components of a DPS using the integrated fault coverage. The newly developed model can quantify unavailability under a diversity of conditions. Sensitivity studies are performed to ascertain important variables which affect the integrated fault coverage and unavailability.

  2. A Review of Domain Modelling and Domain Imaging Techniques in Ferroelectric Crystals

    Directory of Open Access Journals (Sweden)

    John E. Huber

    2011-02-01

    Full Text Available The present paper reviews models of domain structure in ferroelectric crystals, thin films and bulk materials. Common crystal structures in ferroelectric materials are described and the theory of compatible domain patterns is introduced. Applications to multi-rank laminates are presented. Alternative models employing phase-field and related techniques are reviewed. The paper then presents methods of observing ferroelectric domain structure, including optical, polarized light, scanning electron microscopy, X-ray and neutron diffraction, atomic force microscopy and piezo-force microscopy. Use of more than one technique for unambiguous identification of the domain structure is also described.

  3. Methods and Techniques Used to Convey Total System Performance Assessment Analyses and Results for Site Recommendation at Yucca Mountain, Nevada, USA

    International Nuclear Information System (INIS)

    Mattie, Patrick D.; McNeish, Jerry A.; Sevougian, S. David; Andrews, Robert W.

    2001-01-01

    Total System Performance Assessment (TSPA) is used as a key decision-making tool for the potential geologic repository of high level radioactive waste at Yucca Mountain, Nevada USA. Because of the complexity and uncertainty involved in a post-closure performance assessment, an important goal is to produce a transparent document describing the assumptions, the intermediate steps, the results, and the conclusions of the analyses. An important objective for a TSPA analysis is to illustrate confidence in performance projections of the potential repository given a complex system of interconnected process models, data, and abstractions. The methods and techniques used for the recent TSPA analyses demonstrate an effective process to portray complex models and results with transparency and credibility

  4. The Integrated Use of Enterprise and System Dynamics Modelling Techniques in Support of Business Decisions

    Directory of Open Access Journals (Sweden)

    K. Agyapong-Kodua

    2012-01-01

    Full Text Available Enterprise modelling techniques support business process (re)engineering by capturing existing processes and, based on perceived outputs, support the design of future process models capable of meeting enterprise requirements. System dynamics modelling tools, on the other hand, are used extensively for policy analysis and for modelling aspects of dynamics which impact on businesses. In this paper, the use of enterprise and system dynamics modelling techniques has been integrated to facilitate qualitative and quantitative reasoning about the structures and behaviours of processes and resource systems used by a Manufacturing Enterprise during the production of composite bearings. The case study testing reported has led to the specification of a new modelling methodology for analysing and managing dynamics and complexities in production systems. This methodology is based on a systematic transformation process, which synergises the use of a selection of public domain enterprise modelling, causal loop and continuous simulation modelling techniques. The success of the modelling process defined relies on the creation of useful CIMOSA process models which are then converted to causal loops. The causal loop models are then structured and translated to equivalent dynamic simulation models using the proprietary continuous simulation modelling tool iThink.

  5. Evaluation of 3-dimensional superimposition techniques on various skeletal structures of the head using surface models.

    Directory of Open Access Journals (Sweden)

    Nikolaos Gkantidis

    Full Text Available To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data transformed to triangulated surface data. Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non-parametric multivariate models and Bland-Altman difference plots were used for analyses. There was no difference among operators or between time points in the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate; the detected structural changes differed significantly between techniques (p<0.05). Bland-Altman difference plots showed that BZ superimposition was comparable to AC, though it presented slightly higher random error. Superimposition of 3D datasets using surface models created from voxel data can provide accurate, precise, and reproducible results, offering also high efficiency and increased post-processing capabilities. In the present study population, the BZ superimposition was comparable to AC, with the added advantage of being applicable to scans with a smaller field of view.

  6. Improved ceramic slip casting technique. [application to aircraft model fabrication

    Science.gov (United States)

    Buck, Gregory M. (Inventor); Vasquez, Peter (Inventor)

    1993-01-01

    A primary concern in modern fluid dynamics research is the experimental verification of computational aerothermodynamic codes. This research requires high precision and detail in the test model employed. Ceramic materials are used for these models because of their low heat conductivity and their survivability at high temperatures. To fabricate such models, slip casting techniques were developed to provide net-form, precision casting capability for high-purity ceramic materials in aqueous solutions. In previous slip casting techniques, block, or flask molds made of plaster-of-paris were used to draw liquid from the slip material. Upon setting, parts were removed from the flask mold and cured in a kiln at high temperatures. Casting detail was usually limited with this technique -- detailed parts were frequently damaged upon separation from the flask mold, as the molded parts are extremely delicate in the uncured state, and the flask mold is inflexible. Ceramic surfaces were also marred by 'parting lines' caused by mold separation. This adversely affected the aerodynamic surface quality of the model as well. (Parting lines are invariably necessary on or near the leading edges of wings, nosetips, and fins for mold separation. These areas are also critical for flow boundary layer control.) Parting agents used in the casting process also affected surface quality. These agents eventually soaked into the mold, the model, or flaked off when releasing the cast model. Different materials were tried, such as oils, paraffin, and even an algae. The algae released best, but some of it remained on the model and imparted an uneven texture and discoloration on the model surface when cured. According to the present invention, a wax pattern for a shell mold is provided, and an aqueous mixture of a calcium sulfate-bonded investment material is applied as a coating to the wax pattern. The coated wax pattern is then dried, followed by curing to vaporize the wax pattern and leave a shell

  7. Techniques to extract physical modes in model-independent analysis of rings

    International Nuclear Information System (INIS)

    Wang, C.-X.

    2004-01-01

    A basic goal of Model-Independent Analysis is to extract the physical modes underlying the beam histories collected at a large number of beam position monitors so that beam dynamics and machine properties can be deduced independent of specific machine models. Here we discuss techniques to achieve this goal, especially the Principal Component Analysis and the Independent Component Analysis.
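
    A minimal sketch of the Principal Component Analysis step on synthetic BPM histories (the spatial mode, noise level, and tune-like frequencies are hypothetical; power iteration on the sample covariance stands in for a full SVD):

    ```python
    import math, random

    random.seed(1)

    # Hypothetical BPM histories: readings at 6 monitors over 500 turns,
    # dominated by one betatron-like spatial mode plus small noise.
    n_bpm, n_turn = 6, 500
    mode = [math.cos(0.7 * k) for k in range(n_bpm)]       # spatial pattern
    data = [[mode[k] * math.sin(2.3 * t) + 0.05 * random.gauss(0, 1)
             for k in range(n_bpm)] for t in range(n_turn)]

    # Sample covariance matrix across BPMs.
    cov = [[sum(data[t][a] * data[t][b] for t in range(n_turn)) / n_turn
            for b in range(n_bpm)] for a in range(n_bpm)]

    # Power iteration extracts the principal component (dominant spatial mode).
    v = [1.0] * n_bpm
    for _ in range(100):
        w = [sum(cov[a][b] * v[b] for b in range(n_bpm)) for a in range(n_bpm)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]

    # Compare the recovered direction with the true mode (up to sign).
    mnorm = math.sqrt(sum(m * m for m in mode))
    overlap = abs(sum(v[k] * mode[k] / mnorm for k in range(n_bpm)))
    print(round(overlap, 3))
    ```

    Independent Component Analysis goes further by separating several mixed modes rather than just the largest-variance direction, which is why the paper discusses both.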

  8. Technique and results of femoral bifurcation endarterectomy by eversion.

    Science.gov (United States)

    Dufranc, Julie; Palcau, Laura; Heyndrickx, Maxime; Gouicem, Djelloul; Coffin, Olivier; Felisaz, Aurélien; Berger, Ludovic

    2015-03-01

    This study evaluated, in a contemporary prospective series, the safety and efficacy of femoral endarterectomy using the eversion technique and compared our results with results obtained in the literature for the standard endarterectomy with patch closure. Between 2010 and 2012, 121 patients (76% male; mean age, 68.7 years; diabetes, 28%; renal insufficiency, 20%) underwent 147 consecutive femoral bifurcation endarterectomies using the eversion technique, with or without concomitant inflow or outflow revascularization. The indications were claudication in 89 procedures (60%) and critical limb ischemia in 58 (40%). Primary, primary assisted, and secondary patency of the femoral bifurcation, clinical improvement, limb salvage, and survival were assessed using Kaplan-Meier life-table analysis. Factors associated with those primary end-points were evaluated with univariate analysis. The technical success of eversion was 93.2%. The 30-day mortality was 0%, and the complication rate was 8.2%, of which half were local and benign. Median follow-up was 16 months (range, 1.6-31.2 months). Primary, primary assisted, and secondary patencies were, respectively, 93.2%, 97.2%, and 98.6% at 2 years. Primary, primary assisted, and secondary maintenance of clinical improvement were, respectively, 79.9%, 94.6%, and 98.6% at 2 years. The predictive factors for clinical degradation were clinical stage (Rutherford category 5 or 6, P = .024), platelet aggregation inhibitor treatment other than clopidogrel (P = .005), malnutrition (P = .025), and bad tibial runoff (P = .0016). A reintervention was necessary in 18.3% of limbs at 2 years: 2% involving femoral bifurcation, 6.1% inflow improvement, and 9.5% outflow improvement. The risk factors of reintervention were platelet aggregation inhibitor (other than clopidogrel, P = .049) and cancer (P = .011). Limb preservation at 2 years was 100% in the claudicant population. Limb salvage was 88.6% in the critical limb ischemia population
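
    The Kaplan-Meier life-table estimate used for the patency analyses can be sketched as follows, on hypothetical follow-up records rather than the study's data: at each event time, the survival probability is multiplied by (1 − events / number at risk), while censored records only shrink the risk set.

    ```python
    # Hypothetical follow-up data: (time in months, event), where event=1 is
    # loss of patency and event=0 is censoring.
    followup = [(3, 1), (6, 0), (9, 1), (12, 0), (16, 0), (20, 1), (24, 0), (24, 0)]

    def kaplan_meier(records):
        """Return [(time, survival probability)] at each distinct event time."""
        surv, curve = 1.0, []
        for t in sorted({tt for tt, e in records if e == 1}):
            d = sum(1 for tt, e in records if tt == t and e == 1)   # events at t
            n = sum(1 for tt, _ in records if tt >= t)              # at risk at t
            surv *= 1 - d / n
            curve.append((t, surv))
        return curve

    print(kaplan_meier(followup))
    ```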

  9. Current control design for three-phase grid-connected inverters using a pole placement technique based on numerical models

    OpenAIRE

    Citro, Costantino; Gavriluta, Catalin; Nizak Md, H. K.; Beltran, H.

    2012-01-01

    This paper presents a design procedure for linear current controllers of three-phase grid-connected inverters. The proposed method consists in deriving a numerical model of the converter by using software simulations and applying the pole placement technique to design the controller with the desired performances. A clear example on how to apply the technique is provided. The effectiveness of the proposed design procedure has been verified through the experimental results obtained with ...
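
    For a toy second-order plant (a double integrator, not the paper's converter model), the pole placement step reduces to matching characteristic-polynomial coefficients; the sketch below illustrates the idea under that assumption. With state feedback u = -[k1, k2] x, the closed-loop matrix [[0, 1], [-k1, -k2]] has characteristic polynomial s^2 + k2*s + k1, so equating it with the desired (s - p1)(s - p2) gives the gains directly.

    ```python
    # Pole placement for a simple current-loop-like model (illustrative plant):
    #   x' = A x + B u,  A = [[0, 1], [0, 0]],  B = [0, 1]^T
    # Closed loop with u = -[k1, k2] x:
    #   A - B K = [[0, 1], [-k1, -k2]]  ->  char. poly  s^2 + k2*s + k1

    def place_poles(p1, p2):
        """Gains that place the closed-loop poles of the toy plant at p1, p2."""
        k1 = p1 * p2            # constant coefficient of (s - p1)(s - p2)
        k2 = -(p1 + p2)         # s coefficient of (s - p1)(s - p2)
        return k1, k2

    k1, k2 = place_poles(-50.0, -200.0)   # two fast, well-damped real poles
    print(k1, k2)
    ```

    For general plants the same idea is automated by Ackermann's formula or numerical routines such as `scipy.signal.place_poles`; the paper's contribution is feeding a numerically identified converter model into this step.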

  10. Continuous Modeling Technique of Fiber Pullout from a Cement Matrix with Different Interface Mechanical Properties Using Finite Element Program

    Directory of Open Access Journals (Sweden)

    Leandro Ferreira Friedrich

    Full Text Available Abstract Fiber-matrix interface performance has a great influence on the mechanical properties of fiber reinforced composite. This influence is mainly presented during fiber pullout from the matrix. As the fiber pullout process consists of a fiber debonding stage and a pullout stage which involve a complex contact problem, numerical modeling is the best way to investigate the interface influence. Although many numerical research works have been conducted, practical and effective techniques suitable for continuous modeling of the fiber pullout process are still scarce. The reason is that numerical divergence frequently occurs, interrupting the modeling. By coupling the popular finite element program ANSYS with MATLAB, we propose a continuous modeling technique and realize modeling of fiber pullout from a cement matrix with the desired interface mechanical performance. For the debonding process, we used interface elements with cohesive surface traction and exponential failure behavior. For the pullout process, we switched interface elements to spring elements with variable stiffness, which is related to the interface shear stress as a function of the interface slip displacement. For both processes, the results obtained are very good in comparison with other numerical or analytical models and experimental tests. We suggest using the present technique to model toughening achieved by randomly distributed fibers.

  11. Results and techniques of multi-loop calculations

    International Nuclear Information System (INIS)

    Steinhauser, M.

    2002-01-01

    In this review some recent multi-loop results obtained in the framework of perturbative quantum chromodynamics (QCD) and quantum electrodynamics (QED) are discussed. After reviewing the most advanced techniques used for the computation of renormalization group functions, we consider the decoupling of heavy quarks. In particular, an effective method for the evaluation of the decoupling constants is presented and explicit results are given. Furthermore the connection to observables involving a scalar Higgs boson is worked out in detail. An all-order low-energy theorem is derived which establishes a relation between the coefficient functions in the hadronic Higgs decay and the decoupling constants. We review the radiative corrections to the decay of a Higgs boson into gluons and quarks and present explicit results up to order α_s^4 and α_s^3, respectively. In this review special emphasis is put on the applications of asymptotic expansions. A method is described which combines expansion terms of different kinematical regions with the help of conformal mapping and Padé approximation. This method allows us to proceed beyond the present scope of exact multi-loop calculations. As far as physical processes are concerned, we review the computation of three-loop current correlators in QCD taking into account the full mass dependence. In particular, we concentrate on the evaluation of the total cross section for the production of hadrons in e+e− annihilation. The knowledge of the complete mass dependence at order α_s^2 has triggered a number of theory-driven analyses of the hadronic contribution to the electromagnetic coupling evaluated at high energy scales. The status is summarized in this review. In a further application four-loop diagrams are considered which contribute to the order α^2 QED corrections to the μ decay. Its relevance for the determination of the Fermi constant G_F is discussed. Finally the calculation of the three-loop relation between the MS-bar and on

  12. THE INFLUENCE OF CONVERSION MODEL CHOICE FOR EROSION RATE ESTIMATION AND THE SENSITIVITY OF THE RESULTS TO CHANGES IN THE MODEL PARAMETER

    Directory of Open Access Journals (Sweden)

    Nita Suhartini

    2010-06-01

    Full Text Available A study of soil erosion rates was carried out on a gentle, long slope of cultivated area in Ciawi - Bogor, using the 137Cs technique. The objective of the present study was to evaluate the applicability of the 137Cs technique in obtaining spatially distributed information on soil redistribution at a small catchment. This paper reports the influence of the choice of conversion model on erosion rate estimates and the sensitivity of the results to changes in the model parameters. For this purpose, a small site was selected, namely landuse I (LU-I). The top of a slope was chosen as a reference site. The erosion/deposition rate at individual sampling points was estimated using three conversion models, namely the Proportional Model (PM), Mass Balance Model 1 (MBM1) and Mass Balance Model 2 (MBM2). A comparison of the conversion models showed that the lowest values are obtained by the PM. MBM1 gave values close to MBM2, but MBM2 gave more reliable values. In this study, a sensitivity analysis suggests that the conversion models are sensitive to changes in parameters that depend on the site conditions, but insensitive to changes in parameters related to the onset of the 137Cs fallout input. Keywords: soil erosion, environmental radioisotope, cesium

  13. Tapering of the CHESS-APS undulator: Results and modelling

    International Nuclear Information System (INIS)

    Lai, B.; Viccaro, P.J.; Dejus, R.; Gluskin, E.; Yun, W.B.; McNulty, I.; Henderson, C.; White, J.; Shen, Q.; Finkelstein, K.

    1992-01-01

    When the magnetic gap of an undulator is tapered along the beam direction, the slowly varying peak field B0 introduces a spread in the value of the deflection parameter K. The result is a broad energy-band undulator that still maintains a high degree of spatial collimation. These properties are very useful for EXAFS and energy dispersive techniques. We have characterized the CHESS-APS undulator (λu = 3.3 cm) at one tapered configuration (10% change of the magnetic gap from one end of the undulator to the other). Spatial distribution and energy spectra of the first three harmonics through a pinhole were measured. The on-axis first harmonic width increased from 0.27 keV to 0.61 keV (FWHM) at the central energy of E1 = 6.6 keV (average K = 0.69). Broadening in the angular distribution due to tapering was minimal. These results will be compared with computer modelling which simulates the actual electron trajectory in the tapered case

  14. Reactivity change measurements on plutonium-uranium fuel elements in hector experimental techniques and results

    International Nuclear Information System (INIS)

    Tattersall, R.B.; Small, V.G.; MacBean, I.J.; Howe, W.D.

    1964-08-01

    The techniques used in making reactivity change measurements on HECTOR are described and discussed. Pile period measurements were used in the majority of cases, though the pile oscillator technique was used occasionally. These two methods are compared. Flux determinations were made in the vicinity of the fuel element samples using manganese foils, and the techniques used are described and an error assessment made. Results of both reactivity change and flux measurements on 1.2 in. diameter uranium and plutonium-uranium alloy fuel elements are presented, these measurements being carried out in a variety of graphite moderated lattices at temperatures up to 450 deg. C. (author)

  15. Mathematical Model and Artificial Intelligent Techniques Applied to a Milk Industry through DSM

    Science.gov (United States)

    Babu, P. Ravi; Divya, V. P. Sree

    2011-08-01

    The resources for electrical energy are depleting and hence the gap between the supply and the demand is continuously increasing. Under such circumstances, the option left is optimal utilization of available energy resources. The main objective of this chapter is to discuss about the Peak load management and overcome the problems associated with it in processing industries such as Milk industry with the help of DSM techniques. The chapter presents a generalized mathematical model for minimizing the total operating cost of the industry subject to the constraints. The work presented in this chapter also deals with the results of application of Neural Network, Fuzzy Logic and Demand Side Management (DSM) techniques applied to a medium scale milk industrial consumer in India to achieve the improvement in load factor, reduction in Maximum Demand (MD) and also the consumer gets saving in the energy bill.
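
    A toy version of the DSM peak-load management idea: shift a bounded amount of deferrable load out of peak hours, reducing maximum demand (MD) and improving the load factor (average load / peak load). The profile, deferrable amount, and cap below are hypothetical, not the milk plant's actual data.

    ```python
    # Hourly load profile in kW (hypothetical medium-scale dairy plant).
    load = [40, 38, 36, 35, 40, 55, 70, 90, 95, 92, 88, 80,
            78, 82, 85, 90, 98, 100, 96, 85, 70, 60, 50, 45]
    deferrable = 15          # kW of pumping/chilling load that can be moved per hour
    cap = 85                 # target maximum demand after DSM

    def shift_load(profile, movable, cap):
        """Clip peak hours by up to `movable` kW and refill the valleys."""
        profile = profile[:]
        excess = 0
        for h, p in enumerate(profile):
            cut = min(max(p - cap, 0), movable)   # clip each peak hour
            profile[h] -= cut
            excess += cut
        # Dump the shifted energy into the lowest-load (off-peak) hours.
        while excess > 0:
            h = min(range(len(profile)), key=lambda i: profile[i])
            add = min(excess, cap - profile[h])
            profile[h] += add
            excess -= add
        return profile

    def load_factor(profile):
        return sum(profile) / len(profile) / max(profile)

    new = shift_load(load, deferrable, cap)
    print(round(load_factor(load), 3), round(load_factor(new), 3))
    ```

    The paper's actual formulation minimizes total operating cost subject to process constraints (a constrained optimization, possibly tuned with neural network and fuzzy logic components); this greedy sketch only illustrates the load-factor mechanism.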

  16. Modeling, Control and Analyze of Multi-Machine Drive Systems using Bond Graph Technique

    Directory of Open Access Journals (Sweden)

    J. Belhadj

    2006-03-01

    Full Text Available In this paper, a system viewpoint method has been investigated to study and analyze complex systems using the Bond Graph technique. These systems are multi-machine multi-inverter systems based on Induction Machines (IM), widely used in industries like rolling mills, textile, and railway traction. These systems are multi-domain and multi-time-scale, and present very strong internal and external couplings, with non-linearity characterized by a high model order. Classical study with analytic models is difficult to manipulate and limited to some performances. In this study, a “systemic approach” is presented to design these kinds of systems, using an energetic representation based on the Bond Graph formalism. Three types of multi-machine systems are studied with their control strategies. The modeling is carried out with Bond Graphs and results are discussed to show the performances of this methodology

  17. Parameter estimation techniques and uncertainty in ground water flow model predictions

    International Nuclear Information System (INIS)

    Zimmerman, D.A.; Davis, P.A.

    1990-01-01

    Quantification of uncertainty in predictions of nuclear waste repository performance is a requirement of Nuclear Regulatory Commission regulations governing the licensing of proposed geologic repositories for high-level radioactive waste disposal. One of the major uncertainties in these predictions is in estimating the ground-water travel time of radionuclides migrating from the repository to the accessible environment. The cause of much of this uncertainty has been attributed to a lack of knowledge about the hydrogeologic properties that control the movement of radionuclides through the aquifers. A major reason for this lack of knowledge is the paucity of data that is typically available for characterizing complex ground-water flow systems. Because of this, considerable effort has been put into developing parameter estimation techniques that infer property values in regions where no measurements exist. Currently, no single technique has been shown to be superior or even consistently conservative with respect to predictions of ground-water travel time. This work was undertaken to compare a number of parameter estimation techniques and to evaluate how differences in the parameter estimates and the estimation errors are reflected in the behavior of the flow model predictions. That is, we wished to determine to what degree uncertainties in flow model predictions may be affected simply by the choice of parameter estimation technique used. 3 refs., 2 figs

  18. Adaptive parametric model order reduction technique for optimization of vibro-acoustic models: Application to hearing aid design

    Science.gov (United States)

    Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin

    2018-06-01

    Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.

  19. Comparison of Analysis and Spectral Nudging Techniques for Dynamical Downscaling with the WRF Model over China

    Directory of Open Access Journals (Sweden)

    Yuanyuan Ma

    2016-01-01

    Full Text Available To overcome the problem that the horizontal resolution of global climate models may be too low to resolve features which are important at the regional or local scales, dynamical downscaling has been extensively used. However, dynamical downscaling results generally drift away from large-scale driving fields. The nudging technique can be used to balance the performance of dynamical downscaling at large and small scales, but the relative performance of the two nudging techniques (analysis nudging and spectral nudging) is debated. Moreover, dynamical downscaling is now performed at the convection-permitting scale to reduce the parameterization uncertainty and obtain finer resolution. To compare the performances of the two nudging techniques in this study, three sensitivity experiments (with no nudging, analysis nudging, and spectral nudging) covering a period of two months with a grid spacing of 6 km over continental China are conducted to downscale the 1-degree National Centers for Environmental Prediction (NCEP) dataset with the Weather Research and Forecasting (WRF) model. Compared with observations, the results show that both of the nudging experiments decrease the bias of conventional meteorological elements near the surface and at different heights during the process of dynamical downscaling. However, spectral nudging outperforms analysis nudging for predicting precipitation, and analysis nudging outperforms spectral nudging for the simulation of air humidity and wind speed.
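
    Both nudging variants amount to adding a Newtonian relaxation term (u_ref − u)/τ to the model tendency (spectral nudging applies it only to selected large-scale wavenumbers). The following one-variable caricature, with illustrative time scales and a fabricated reference signal, shows the drift-control effect:

    ```python
    import math

    # One-variable caricature of analysis nudging: a relaxation term
    # (u_ref - u)/tau pulls the downscaled state back toward the driving analysis.

    tau = 3.0 * 3600.0                 # nudging time scale: 3 h, in seconds
    dt = 600.0                         # 10-minute model time step
    u_ref = lambda t: 10.0 + 5.0 * math.sin(2 * math.pi * t / 86400.0)  # daily cycle

    def integrate(nudge, steps=864):   # 864 steps of 10 min = 6 days
        u = 0.0                        # poor initial state, far from the analysis
        err = 0.0
        for n in range(steps):
            t = n * dt
            tendency = 0.0             # the free model tendency is omitted in this toy
            if nudge:
                tendency += (u_ref(t) - u) / tau
            u += dt * tendency         # forward Euler step
            err = abs(u - u_ref(t))
        return err

    print(round(integrate(False), 2), round(integrate(True), 2))
    ```

    Without the relaxation term the state simply drifts (here, stays at its initial value); with it, the state tracks the driving field up to a lag set by τ, which is the balance between large-scale fidelity and small-scale freedom the abstract discusses.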

  20. Variability and trends in the Arctic Sea ice cover: Results from different techniques

    Science.gov (United States)

    Comiso, Josefino C.; Meier, Walter N.; Gersten, Robert

    2017-08-01

    In this study, a comparison of results from four different techniques that are frequently used shows significant disagreements in the characterization of the distribution of the sea ice cover, primarily in areas that have a large fraction of new ice cover or a significant amount of surface melt. However, the actual changes in the ice cover are consistently depicted, and the trends in sea ice extent and ice area from the different data sets are practically the same, providing strong confidence that satellite data are interpreted consistently by different scientists independently and confirming that the ice extent of the Arctic perennial ice is indeed declining at the rate of about 11% per decade. The results provide useful information for modelers, policy makers, and the general scientific public.

  1. Respirometry techniques and activated sludge models

    NARCIS (Netherlands)

    Benes, O.; Spanjers, H.; Holba, M.

    2002-01-01

    This paper aims to explain results of respirometry experiments using Activated Sludge Model No. 1. In cases of insufficient fit of ASM No. 1, further modifications to the model were carried out and the so-called "Enzymatic model" was developed. The best-fit method was used to determine the effect of

  2. Prostate Cancer Probability Prediction By Machine Learning Technique.

    Science.gov (United States)

    Jović, Srđan; Miljković, Milica; Ivanović, Miljan; Šaranović, Milena; Arsić, Milena

    2017-11-26

    The main goal of the study was to explore the possibility of prostate cancer prediction by machine learning techniques. In order to improve the survival probability of prostate cancer patients, it is essential to build suitable prediction models. If one makes a relevant prediction of prostate cancer, it is easier to devise suitable treatment based on the prediction results. Machine learning techniques are the most common techniques for the creation of predictive models; therefore, in this study several machine learning techniques were applied and compared. The obtained results were analyzed and discussed. It was concluded that machine learning techniques could be used for the relevant prediction of prostate cancer.
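The abstract does not name the specific algorithms compared. As a minimal, hypothetical sketch of building one such predictive model, the following fits a logistic-regression classifier by batch gradient descent on synthetic data (the two "risk factors" and their labels are invented for illustration, not taken from the study):

```python
import math
import random

def train_logreg(X, y, lr=0.5, epochs=2000):
    """Fit a logistic-regression classifier by batch gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))       # predicted probability
            err = p - yi                          # gradient of the log-loss
            for j, xj in enumerate(xi):
                gw[j] += err * xj
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1 if z >= 0 else 0

# Toy data: two synthetic "risk factors"; label 1 when their sum is high.
random.seed(0)
X = [[random.random(), random.random()] for _ in range(200)]
y = [1 if x[0] + x[1] > 1.0 else 0 for x in X]
w, b = train_logreg(X, y)
accuracy = sum(predict(w, b, xi) == yi for xi, yi in zip(X, y)) / len(X)
```

In a real comparison study, several such models (e.g. tree ensembles, neural networks) would be trained and evaluated on held-out patient data rather than training accuracy.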

  3. Acquisition War-Gaming Technique for Acquiring Future Complex Systems: Modeling and Simulation Results for Cost Plus Incentive Fee Contract

    Directory of Open Access Journals (Sweden)

    Tien M. Nguyen

    2018-03-01

    Full Text Available This paper provides a high-level discussion and propositions of frameworks and models for acquisition strategy of complex systems. In particular, it presents an innovative system engineering approach to model the Department of Defense (DoD acquisition process and offers several optimization modules including simulation models using game theory and war-gaming concepts. Our frameworks employ Advanced Game-based Mathematical Framework (AGMF and Unified Game-based Acquisition Framework (UGAF, and related advanced simulation and mathematical models that include a set of War-Gaming Engines (WGEs implemented in MATLAB statistical optimization models. WGEs are defined as a set of algorithms, characterizing the Program and Technical Baseline (PTB, technology enablers, architectural solutions, contract type, contract parameters and associated incentives, and industry bidding position. As a proof of concept, Aerospace, in collaboration with the North Carolina State University (NCSU and University of Hawaii (UH, successfully applied and extended the proposed frameworks and decision models to determine the optimum contract parameters and incentives for a Cost Plus Incentive Fee (CPIF contract. As a result, we can suggest a set of acquisition strategies that ensure the optimization of the PTB.

  4. Statistical and Machine-Learning Data Mining Techniques for Better Predictive Modeling and Analysis of Big Data

    CERN Document Server

    Ratner, Bruce

    2011-01-01

    The second edition of a bestseller, Statistical and Machine-Learning Data Mining: Techniques for Better Predictive Modeling and Analysis of Big Data is still the only book, to date, to distinguish between statistical data mining and machine-learning data mining. The first edition, titled Statistical Modeling and Analysis for Database Marketing: Effective Techniques for Mining Big Data, contained 17 chapters of innovative and practical statistical data mining techniques. In this second edition, renamed to reflect the increased coverage of machine-learning data mining techniques, the author has

  5. Radiation therapy for retinoblastoma: comparison of results with lens-sparing versus lateral beam techniques

    International Nuclear Information System (INIS)

    McCormick, B.; Ellsworth, R.; Abramson, D.; Haik, B.; Tome, M.; Grabowski, E.; LoSasso, T.

    1988-01-01

    From 1979 through 1986, 170 children diagnosed with retinoblastoma were seen at our institution. Sixty-six of the children, with involvement of 121 eyes, were referred for definitive external beam radiation to one or both eyes. During the study period, two distinct radiation techniques were used. From 1980 through mid-1984, a lens-sparing technique, which combined an anterior electron beam with a contact-lens-mounted lead shield and a lateral field, was used. Since mid-1984, a modified lateral beam technique has been used, mixing lateral electrons and superior and inferior lateral oblique split-beam wedged photons. Doses prescribed were similar for both techniques, ranging from 3,850 to 5,000 cGy in 4 to 5 weeks. The lens-sparing and the modified lateral techniques are compared for local control. For eyes with Group I through III disease, the lens-sparing technique resulted in local control in 33% of the eyes treated, whereas the modified lateral technique controlled 83% of the eyes treated (p = .006). Mean time to relapse was similar in both groups, 24 and 26 months respectively. Most relapses were successfully treated with further local therapy, including laser or cryosurgery, or 60Co plaques. Five eyes required enucleation following initial treatment with the lens-sparing technique, but none thus far with the lateral beam technique. For eyes with Group IV and V disease, no significant differences were found between the two techniques in terms of local control or eventual need for enucleation. With a mean follow-up time of 33 months for the entire group, the 4-year survival is 93%. Two of the 4 deaths are due to second primary tumor, and all 4 have occurred in the lens-sparing group. Because follow-up time is more limited in the lateral beam group, this is not statistically significant and direct survival comparisons are premature

  6. Viable Techniques, Leontief’s Closed Model, and Sraffa’s Subsistence Economies

    Directory of Open Access Journals (Sweden)

    Alberto Benítez

    2014-11-01

    Full Text Available This paper studies the production techniques employed in economies that reproduce themselves. Special attention is paid to the distinction usually made between those that do not produce a surplus and those that do, which are referred to as first- and second-class economies, respectively. Based on this, we present a new definition of viable economies and show that every viable economy of the second class can be represented as a viable economy of the first class under two different forms, Leontief's closed model and Sraffa's subsistence economies. This allows us to present some remarks concerning the economic interpretation of the two models. On the one hand, we argue that the participation of each good in the production of every good can be considered a normal characteristic of the first model and, on the other hand, we provide a justification for the same condition to be considered a characteristic of the second model. Furthermore, we discuss three definitions of viable techniques advanced by other authors and show that they differ from ours because they admit economies that do not reproduce themselves completely.
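One standard way to make the first-class/second-class distinction concrete (this uses the conventional Perron-root criterion for input matrices, not necessarily the authors' exact definition) is to estimate the dominant eigenvalue of the nonnegative input-coefficient matrix: a root of 1 means the economy reproduces itself exactly, a root below 1 means it produces a surplus. A sketch with invented coefficients:

```python
def dominant_eigenvalue(A, iters=200):
    """Estimate the Perron root of a nonnegative square matrix by power iteration."""
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(y)                 # max-norm of A*x approximates the Perron root
        if lam == 0.0:
            return 0.0
        x = [yi / lam for yi in y]   # renormalize to avoid overflow/underflow
    return lam

# Input coefficients a[i][j]: amount of good i used to produce one unit of good j.
A_subsistence = [[0.5, 0.5],
                 [0.5, 0.5]]   # Perron root 1: reproduces itself exactly (first class)
A_surplus     = [[0.3, 0.2],
                 [0.2, 0.3]]   # Perron root 0.5: produces a surplus (second class)

lam_first = dominant_eigenvalue(A_subsistence)
lam_second = dominant_eigenvalue(A_surplus)
```

An economy whose Perron root exceeds 1 cannot reproduce itself at all and is non-viable under this criterion.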

  7. First results from the International Urban Energy Balance Model Comparison: Model Complexity

    Science.gov (United States)

    Blackett, M.; Grimmond, S.; Best, M.

    2009-04-01

    offline to ensure no feedback to larger scale conditions within the modelling domain. Initially, participants were issued with just forcing data from an unknown urban site (termed "Alpha"); in subsequent stages, further details of the site were provided. Results from each stage, for each participating model, were then compared using a variety of statistical and graphical techniques. * The EGU2009-5713 Team: C.S.B. Grimmond1, M. Blackett1, M. Best2, J. Barlow3, J.-J. Baik4, S. Belcher3, S. Bohnenstengel3, I. Calmet5, F. Chen6, A. Dandou7, K. Fortuniak8, M. Gouvea1, R. Hamdi9, M. Hendry2, H. Kondo10, S. Krayenhoff11, S. H. Lee4, T. Loridan1, A. Martilli12, S. Miao13, K. Oleson6, G. Pigeon14, A. Porson2,3, F. Salamanca12, L. Shashua-Bar15, G.-J. Steeneveld16, M. Tombrou7, J. Voogt17, N. Zhang18. 1King's College London, UK, 2UK Met Office, UK, 3University of Reading, UK, 4Seoul National University, Korea, 5Ecole Centrale de Nantes, France, 6National Center for Atmospheric Research, USA, 7University of Athens, Greece, 8University of Łódź, Poland, 9Royal Meteorological Institute, Belgium, 10National Institute of Advanced Industrial Science and Technology, Japan, 11University of British Columbia, Canada, 12CIEMAT, Spain, 13IUM, CMA, China, 14Meteo France, France, 15Ben Gurion University, Israel, 16Wageningen University, Netherlands, 17University of Western Ontario, Canada, 18Nanjing University, China.

  8. Endovascular Aortic Aneurysm Repair with Chimney and Snorkel Grafts: Indications, Techniques and Results

    Energy Technology Data Exchange (ETDEWEB)

    Patel, Rakesh P., E-mail: rpatel9@nhs.net [Northwick Park Hospital, Department of Vascular Radiology (United Kingdom); Katsargyris, Athanasios, E-mail: kthanassos@yahoo.com; Verhoeven, Eric L. G., E-mail: Eric.Verhoeven@klinikum-nuernberg.de [Klinikum Nuernberg, Department of Vascular and Endovascular Surgery (Germany); Adam, Donald J., E-mail: donald.adam@tiscali.co.uk [Heartlands Hospital, Department of Vascular Surgery (United Kingdom); Hardman, John A., E-mail: johnhardman@doctors.org.uk [Royal United Hospital Bath, Department of Vascular Radiology (United Kingdom)

    2013-12-15

    The chimney technique in endovascular aortic aneurysm repair (Ch-EVAR) involves placement of a stent or stent-graft parallel to the main aortic stent-graft to extend the proximal or distal sealing zone while maintaining side branch patency. Ch-EVAR can facilitate endovascular repair of juxtarenal and aortic arch pathology using available standard aortic stent-grafts, thereby eliminating the manufacturing delays required for customised fenestrated and branched stent-grafts. Several case series have demonstrated the feasibility of Ch-EVAR in both acute and elective cases with good early results. This review discusses indications, technique, and the currently available clinical data on Ch-EVAR.

  9. Space Geodetic Technique Co-location in Space: Simulation Results for the GRASP Mission

    Science.gov (United States)

    Kuzmicz-Cieslak, M.; Pavlis, E. C.

    2011-12-01

    The Global Geodetic Observing System-GGOS, places very stringent requirements on the accuracy and stability of future realizations of the International Terrestrial Reference Frame (ITRF): an origin definition at 1 mm or better at epoch and a temporal stability on the order of 0.1 mm/y, with similar numbers for the scale (0.1 ppb) and orientation components. These goals were derived from the requirements of Earth science problems that are currently the international community's highest priority. None of the geodetic positioning techniques can achieve this goal alone. This is due in part to the non-observability of certain attributes from a single technique. Another limitation is imposed by the extent and uniformity of the tracking network and the schedule of observational availability and number of suitable targets. The final limitation derives from the difficulty of "tying" the reference points of each technique at the same site to an accuracy that will support the GGOS goals. The future GGOS network will address decisively the ground segment and to a certain extent the space segment requirements. The JPL-proposed multi-technique mission GRASP (Geodetic Reference Antenna in Space) attempts to resolve the accurate tie between techniques, using their co-location in space, onboard a well-designed spacecraft equipped with GNSS receivers, a SLR retroreflector array, a VLBI beacon and a DORIS system. Using the anticipated system performance for all four techniques at the time the GGOS network is completed (ca 2020), we generated a number of simulated data sets for the development of a TRF. Our simulation studies examine the degree to which GRASP can improve the inter-technique "tie" issue compared to the classical approach, and the likely modus operandi for such a mission. The success of the examined scenarios is judged by the quality of the origin and scale definition of the resulting TRF.

  10. Atmospheric Deposition Modeling Results

    Data.gov (United States)

    U.S. Environmental Protection Agency — This asset provides data on model results for dry and total deposition of sulfur, nitrogen and base cation species. Components include deposition velocities, dry...

  11. Establishment of SHG-44 human glioma model in brain of wistar rat with stereotactic technique

    International Nuclear Information System (INIS)

    Hong Xinyu; Luo Yi'nan; Fu Shuanglin; Wang Zhanfeng; Bie Li; Cui Jiale

    2004-01-01

    Objective: To establish a solid intracerebral human glioma model in Wistar rats with xenograft methods. Methods: SHG-44 cells were injected into the right caudate nucleus of the brain of previously immuno-inhibited Wistar rats with a stereotactic technique. MRI scans were performed 1 week and 2 weeks after implantation. After 2 weeks the rats were killed, and pathological examination and immunohistologic staining for human GFAP were performed. Results: The MRI scan 1 week after implantation showed the glioma was growing; pathological histochemical examination demonstrated the tumor was a glioma. Human GFAP staining was positive. The growth rate of the glioma model was about 60%. Conclusion: A solid intracerebral human glioma model in previously immuno-inhibited Wistar rats was successfully established

  12. Comparing modelling techniques when designing VPH gratings for BigBOSS

    Science.gov (United States)

    Poppett, Claire; Edelstein, Jerry; Lampton, Michael; Jelinsky, Patrick; Arns, James

    2012-09-01

    BigBOSS is a Stage IV Dark Energy instrument based on the Baryon Acoustic Oscillations (BAO) and Red Shift Distortions (RSD) techniques, using spectroscopic data of 20 million ELG and LRG galaxies at redshifts above 0.5. Volume phase holographic (VPH) gratings have been identified as a key technology which will enable the efficiency requirement to be met; however, it is important to be able to accurately predict their performance. In this paper we quantitatively compare different modelling techniques in order to assess the parameter space over which they are more capable of accurately predicting measured performance. Finally, we present baseline parameters for grating designs that are most suitable for the BigBOSS instrument.

  13. Establishment of atherosclerotic model and USPIO enhanced MRI techniques study in rabbits

    International Nuclear Information System (INIS)

    Li Yonggang; Zhu Mo; Dai Yinyu; Chen Jianhua; Guo Liang; Ni Jiankun

    2010-01-01

    Objective: To explore methods for the establishment of an atherosclerotic model and USPIO-enhanced MRI techniques in rabbits. Methods: Thirty New Zealand male rabbits were divided randomly into two groups: 20 animals in the experimental group and 10 animals in the control group. The animal model of atherosclerosis was induced with aortic balloon endothelial injury and high-fat diet feeding. There was no intervention in the control group. MRI examination included a plain scan, USPIO-enhanced black-blood sequences, and a white-blood sequence. The features of the plaques were analyzed in the experimental group, and the effects of different coils, sequences, and parameters on image quality were also analyzed statistically. Results: The animal model of atherosclerosis was successfully established in 12 rabbits, and most plaques were located in the abdominal aorta. There were 86 plaques within the scanning scope, of which 67 plaques were positive on Prussian blue staining. The image quality of the knee joint coil was better than that of the other coils. Although there was no difference in the number of AS plaques detected between the USPIO-enhanced black-blood sequences and the white-blood sequence (P > 0.05), the black-blood sequences were superior to the white-blood sequence in demonstrating the components of the plaques. Conclusion: Aortic balloon endothelial injury combined with high-fat diet feeding can readily establish the AS model in rabbits within a shorter period, and it may be used for controlling the location of the plaques. USPIO-enhanced MRI sequences have high sensitivity in the detection of AS plaques and can reveal their components. The optimization of MRI techniques is very important for improving image quality and plaque detection. (authors)

  14. Statistical techniques for modeling extreme price dynamics in the energy market

    International Nuclear Information System (INIS)

    Mbugua, L N; Mwita, P N

    2013-01-01

    Extreme events have a large impact throughout the span of engineering, science, and economics, because extreme events often lead to failure and losses, owing to the unobservable nature of extraordinary occurrences. In this context, this paper focuses on appropriate statistical methods that combine a quantile regression approach with extreme value theory to model the excesses; this plays a vital role in risk management. Locally, nonparametric quantile regression is used, a method that is flexible and best suited when one knows little about the functional form of the object being estimated. Conditions are derived in order to estimate the extreme value distribution function. The threshold model of extreme values is used to circumvent the lack of adequate observations at the tail of the distribution function. The application of a selection of these techniques is demonstrated on the volatile fuel market. The results indicate that the method used can extract the maximum possible reliable information from the data. The key attraction of this method is that it offers a set of ready-made approaches to the most difficult problem of risk modeling.
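A minimal sketch of the threshold-excess idea (simplified to exponential excesses, i.e. a generalized Pareto distribution with shape parameter zero, rather than the paper's full quantile-regression machinery; the threshold and synthetic data are illustrative choices):

```python
import math
import random

def pot_tail_quantile(data, threshold, p):
    """Peaks-over-threshold tail quantile, assuming exponential excesses (GPD shape 0)."""
    excesses = [x - threshold for x in data if x > threshold]
    n, k = len(data), len(excesses)
    beta = sum(excesses) / k           # MLE scale of the exponential excess model
    # Invert P(X > x) = (k/n) * exp(-(x - threshold)/beta) at exceedance probability p.
    return threshold + beta * math.log(k / (n * p))

random.seed(1)
data = [random.expovariate(1.0) for _ in range(10000)]   # synthetic "price shocks"
q999 = pot_tail_quantile(data, threshold=2.0, p=0.001)
# For Exp(1) the true 99.9% quantile is ln(1000) ~ 6.91, so q999 should land nearby.
```

For heavy-tailed market data, the full GPD with a nonzero shape parameter would be fitted instead of the exponential simplification.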

  15. Learning-based computing techniques in geoid modeling for precise height transformation

    Science.gov (United States)

    Erol, B.; Erol, S.

    2013-03-01

    Precise determination of the local geoid is of particular importance for establishing height control in geodetic GNSS applications, since the classical leveling technique is too laborious. A geoid model can be accurately obtained from properly distributed benchmarks having both GNSS and leveling observations, using an appropriate computing algorithm. Besides the classical multivariable polynomial regression equations (MPRE), this study attempts an evaluation of learning-based computing algorithms: artificial neural networks (ANNs), the adaptive network-based fuzzy inference system (ANFIS) and especially the wavelet neural networks (WNNs) approach in geoid surface approximation. These algorithms were developed in parallel with advances in computer technologies and have recently been used for solving complex nonlinear problems in many applications. However, they are rather new in dealing with the precise modeling problem of the Earth's gravity field. In the scope of the study, these methods were applied to Istanbul GPS Triangulation Network data. The performances of the methods were assessed considering the validation results of the geoid models at the observation points. In conclusion, ANFIS and WNN revealed higher prediction accuracies compared to the ANN and MPRE methods. Besides their prediction capabilities, these methods were also compared and discussed from a practical point of view in the conclusions.
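As a hedged illustration of the simplest MPRE-style approach, the sketch below fits a first-order polynomial surface (a plane) to hypothetical (latitude, longitude, geoid-height) benchmarks via the normal equations; real geoid models use higher polynomial orders, projected coordinates, and many more benchmarks:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]     # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                   # back substitution
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def fit_plane(points):
    """Least-squares fit of N = a + b*phi + c*lam to (phi, lam, N) benchmarks."""
    rows = [[1.0, p, l] for p, l, _ in points]
    Ns = [N for _, _, N in points]
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    Atb = [sum(r[i] * N for r, N in zip(rows, Ns)) for i in range(3)]
    return solve(AtA, Atb)

# Synthetic benchmarks lying exactly on the surface N = 40 + 0.5*phi - 0.3*lam.
pts = [(p, l, 40 + 0.5 * p - 0.3 * l) for p in range(5) for l in range(5)]
a, b, c = fit_plane(pts)
```

The learning-based methods in the study (ANN, ANFIS, WNN) replace this fixed polynomial form with a flexible function learned from the same benchmark data.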

  16. Parallel-scanning tomosynthesis using a slot scanning technique: Fixed-focus reconstruction and the resulting image quality

    International Nuclear Information System (INIS)

    Shibata, Koichi; Notohara, Daisuke; Sakai, Takihito

    2014-01-01

    Purpose: Parallel-scanning tomosynthesis (PS-TS) is a novel technique that fuses the slot scanning technique and the conventional tomosynthesis (TS) technique. This approach allows one to obtain long-view tomosynthesis images in addition to normally sized tomosynthesis images, even when using a system that has no linear tomographic scanning function. The reconstruction technique and an evaluation of the resulting image quality for PS-TS are described in this paper. Methods: The PS-TS image-reconstruction technique consists of several steps: (1) the projection images are divided into strips, (2) the strips are stitched together to construct images corresponding to the reconstruction plane, (3) the stitched images are filtered, and (4) the filtered stitched images are back-projected. In the case of PS-TS using the fixed-focus reconstruction method (PS-TS-F), one set of stitched images is used for the reconstruction planes at all heights, thus avoiding the necessity of repeating steps (1)–(3). A physical evaluation of the image quality of PS-TS-F compared with that of the conventional linear TS was performed using a R/F table (Sonialvision safire, Shimadzu Corp., Kyoto, Japan). The tomographic plane with the best theoretical spatial resolution (the in-focus plane, IFP) was set at a height of 100 mm from the table top by adjusting the reconstruction program. First, the spatial frequency response was evaluated at heights of −100, −50, 0, 50, 100, and 150 mm from the IFP using the edge of a 0.3-mm-thick copper plate. Second, the spatial resolution at each height was visually evaluated using an x-ray test pattern (Model No. 38, PTW Freiburg, Germany). Third, the slice sensitivity at each height was evaluated via the wire method using a 0.1-mm-diameter tungsten wire. Phantom studies using a knee phantom and a whole-body phantom were also performed. Results: The spatial frequency response of PS-TS-F yielded the best results at the IFP and degraded slightly as the
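The stitch-filter-backproject pipeline above is specific to PS-TS; as a hedged, much-simplified illustration of the underlying tomosynthesis principle, the sketch below uses plain 1-D shift-and-add (no stitching or filtering) to bring one reconstruction plane into focus. All geometry values are invented for illustration:

```python
def shift(signal, k):
    """Shift a 1-D signal by k samples, zero-padding the ends."""
    n = len(signal)
    out = [0.0] * n
    for i in range(n):
        j = i - k
        if 0 <= j < n:
            out[i] = signal[j]
    return out

def shift_and_add(projections, shifts, height):
    """Refocus on one plane: undo each projection's height-dependent shift, then average."""
    n = len(projections[0])
    recon = [0.0] * n
    for proj, s in zip(projections, shifts):
        sh = shift(proj, -int(round(s * height)))
        recon = [r + v for r, v in zip(recon, sh)]
    return [r / len(projections) for r in recon]

# Point objects at x=10 (height 0) and x=30 (height 2), seen from 5 source positions.
shifts = [-2, -1, 0, 1, 2]            # per-unit-height lateral shift of each projection
projections = []
for s in shifts:
    proj = [0.0] * 64
    proj[10 + s * 0] = 1.0            # height-0 feature never moves
    proj[30 + s * 2] = 1.0            # height-2 feature moves with the source
    projections.append(proj)

plane0 = shift_and_add(projections, shifts, 0)   # height-0 feature in focus, other blurred
plane2 = shift_and_add(projections, shifts, 2)   # height-2 feature in focus, other blurred
```

Features lying in the selected plane add coherently to full amplitude, while out-of-plane features are smeared; the filtering step in the actual PS-TS method further suppresses that out-of-plane blur.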

  17. Parallel-scanning tomosynthesis using a slot scanning technique: Fixed-focus reconstruction and the resulting image quality

    Energy Technology Data Exchange (ETDEWEB)

    Shibata, Koichi, E-mail: shibatak@suzuka-u.ac.jp [Department of Radiological Technology, Faculty of Health Science, Suzuka University of Medical Science 1001-1, Kishioka-cho, Suzuka 510-0293 (Japan); Notohara, Daisuke; Sakai, Takihito [R and D Department, Medical Systems Division, Shimadzu Corporation 1, Nishinokyo-Kuwabara-cho, Nakagyo-ku, Kyoto 604-8511 (Japan)

    2014-11-01

    Purpose: Parallel-scanning tomosynthesis (PS-TS) is a novel technique that fuses the slot scanning technique and the conventional tomosynthesis (TS) technique. This approach allows one to obtain long-view tomosynthesis images in addition to normally sized tomosynthesis images, even when using a system that has no linear tomographic scanning function. The reconstruction technique and an evaluation of the resulting image quality for PS-TS are described in this paper. Methods: The PS-TS image-reconstruction technique consists of several steps: (1) the projection images are divided into strips, (2) the strips are stitched together to construct images corresponding to the reconstruction plane, (3) the stitched images are filtered, and (4) the filtered stitched images are back-projected. In the case of PS-TS using the fixed-focus reconstruction method (PS-TS-F), one set of stitched images is used for the reconstruction planes at all heights, thus avoiding the necessity of repeating steps (1)–(3). A physical evaluation of the image quality of PS-TS-F compared with that of the conventional linear TS was performed using a R/F table (Sonialvision safire, Shimadzu Corp., Kyoto, Japan). The tomographic plane with the best theoretical spatial resolution (the in-focus plane, IFP) was set at a height of 100 mm from the table top by adjusting the reconstruction program. First, the spatial frequency response was evaluated at heights of −100, −50, 0, 50, 100, and 150 mm from the IFP using the edge of a 0.3-mm-thick copper plate. Second, the spatial resolution at each height was visually evaluated using an x-ray test pattern (Model No. 38, PTW Freiburg, Germany). Third, the slice sensitivity at each height was evaluated via the wire method using a 0.1-mm-diameter tungsten wire. Phantom studies using a knee phantom and a whole-body phantom were also performed. Results: The spatial frequency response of PS-TS-F yielded the best results at the IFP and degraded slightly as the

  18. A Three-Component Model for Magnetization Transfer. Solution by Projection-Operator Technique, and Application to Cartilage

    Science.gov (United States)

    Adler, Ronald S.; Swanson, Scott D.; Yeung, Hong N.

    1996-01-01

    A projection-operator technique is applied to a general three-component model for magnetization transfer, extending our previous two-component model [R. S. Adler and H. N. Yeung, J. Magn. Reson. A 104, 321 (1993), and H. N. Yeung, R. S. Adler, and S. D. Swanson, J. Magn. Reson. A 106, 37 (1994)]. The PO technique provides an elegant means of deriving a simple, effective rate equation in which there is a natural separation of relaxation and source terms, and it allows incorporation of Redfield-Provotorov theory without any additional assumptions or restrictive conditions. The PO technique is extended to incorporate more general, multicomponent models. The three-component model is used to fit experimental data from samples of human hyaline cartilage and fibrocartilage. The fits of the three-component model are compared to the fits of the two-component model.

  19. The phase field technique for modeling multiphase materials

    Science.gov (United States)

    Singer-Loginova, I.; Singer, H. M.

    2008-10-01

    This paper reviews methods and applications of the phase field technique, one of the fastest growing areas in computational materials science. The phase field method is used as a theory and computational tool for predictions of the evolution of arbitrarily shaped morphologies and complex microstructures in materials. In this method, the interface between two phases (e.g. solid and liquid) is treated as a region of finite width having a gradual variation of different physical quantities, i.e. it is a diffuse interface model. An auxiliary variable, the phase field or order parameter φ(x), is introduced, which distinguishes one phase from the other. Interfaces are identified by the variation of the phase field. We begin with presenting the physical background of the phase field method and give a detailed thermodynamical derivation of the phase field equations. We demonstrate how equilibrium and non-equilibrium physical phenomena at the phase interface are incorporated into the phase field methods. Then we address in detail dendritic and directional solidification of pure and multicomponent alloys, effects of natural convection and forced flow, grain growth, nucleation, solid-solid phase transformation and highlight other applications of the phase field methods. In particular, we review the novel phase field crystal model, which combines atomistic length scales with diffusive time scales. We also discuss aspects of quantitative phase field modeling such as thin interface asymptotic analysis and coupling to thermodynamic databases. The phase field methods result in a set of partial differential equations, whose solutions require time-consuming large-scale computations and often limit the applicability of the method. Subsequently, we review numerical approaches to solve the phase field equations and present a finite difference discretization of the anisotropic Laplacian operator.
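As a minimal sketch of such a diffuse-interface evolution, the code below integrates a 1-D Allen-Cahn equation, dφ/dt = ε²φ_xx − W'(φ) with double-well potential W(φ) = (φ² − 1)²/4, using an explicit finite-difference scheme. Parameters are chosen for numerical stability and are not taken from the review:

```python
def evolve(phi, eps=1.0, dx=0.5, dt=0.05, steps=2000):
    """Explicit finite-difference integration of 1-D Allen-Cahn with periodic ends."""
    n = len(phi)
    for _ in range(steps):
        lap = [(phi[(i - 1) % n] - 2 * phi[i] + phi[(i + 1) % n]) / dx**2
               for i in range(n)]
        # W'(phi) = phi^3 - phi drives phi toward the bulk values -1 and +1.
        phi = [p + dt * (eps**2 * l - (p**3 - p)) for p, l in zip(phi, lap)]
    return phi

# Start from a sharp step; the scheme relaxes it into a smooth (diffuse) interface
# whose width is set by eps, while the bulk stays at the two phase values -1 and +1.
n = 100
phi0 = [-1.0] * (n // 2) + [1.0] * (n // 2)
phi = evolve(phi0)
```

The interface is exactly the region where φ varies between the two bulk values, which is how phase field models locate moving boundaries without explicitly tracking them.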

  20. Monte Carlo technique for very large Ising models

    Science.gov (United States)

    Kalle, C.; Winkelmann, V.

    1982-08-01

    Rebbi's multispin coding technique is improved and applied to the kinetic Ising model with size 600*600*600. We give the central part of our computer program (for a CDC Cyber 76), which will be helpful also in a simulation of smaller systems, and describe the other tricks necessary to go to large lattices. The magnetization M at T = 1.4 Tc is found to decay asymptotically as exp(-t/2.90) if t is measured in Monte Carlo steps per spin, with M(t = 0) = 1 initially.
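Multispin coding packs many spins into one machine word to update them in parallel; the hedged sketch below instead shows the underlying single-spin Metropolis update on a small 2-D lattice (a slow stand-in for the 600³ simulation described, using the 2-D critical temperature and an ordered start):

```python
import math
import random

def metropolis_sweep(spins, L, T, rng):
    """One Monte Carlo sweep of the 2-D Ising model (single-spin Metropolis updates)."""
    for _ in range(L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nb            # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            spins[i][j] *= -1

rng = random.Random(42)
L = 16
T = 1.4 * 2.269                              # 1.4 * Tc (2-D value), above criticality
spins = [[1] * L for _ in range(L)]          # fully ordered start, M(t=0) = 1
mags = []
for sweep in range(200):
    metropolis_sweep(spins, L, T, rng)
    mags.append(abs(sum(map(sum, spins))) / (L * L))
late_mag = sum(mags[-50:]) / 50
# Above Tc the ordered start decays, so the late-time |M| stays small.
```

The decay of M toward its small disordered value mirrors the relaxation measured in the abstract, though quantitative time constants require far larger lattices and word-packed updates.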

  1. Comparison of Grid Nudging and Spectral Nudging Techniques for Dynamical Climate Downscaling within the WRF Model

    Science.gov (United States)

    Fan, X.; Chen, L.; Ma, Z.

    2010-12-01

    Climate downscaling has been an active research and application area in the past several decades, focusing on regional climate studies. Dynamical downscaling, in addition to statistical methods, has been widely used as advanced modern numerical weather and regional climate models have emerged. The use of numerical models ensures that a full set of climate variables is generated in the process of downscaling, dynamically consistent due to the constraints of physical laws. While generating high-resolution regional climate, the large-scale climate patterns should be retained. To serve this purpose, nudging techniques, including grid analysis nudging and spectral nudging, have been used in different models. There are studies demonstrating the benefits and advantages of each nudging technique; however, the results are sensitive to many factors, such as the nudging coefficients and the amount of information nudged toward, and thus the conclusions remain controversial. While approaches for quantitative assessment of the downscaled climate are developed in a companion work, in this study the two nudging techniques are put through extensive experiments in the Weather Research and Forecasting (WRF) model. Using the same model provides fair comparability, and applying the quantitative assessments provides objectiveness of comparison. Three types of downscaling experiments were performed for a selected month. The first type serves as a baseline in which the large-scale information is communicated through the lateral boundary conditions only; the second uses grid analysis nudging; and the third uses spectral nudging. Emphasis is given to experiments with different nudging coefficients and nudging toward different variables in the grid analysis nudging, while in spectral nudging we focus on testing the nudging coefficients and different wavenumbers to nudge on different model levels.
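Both nudging flavours add a Newtonian relaxation term G·(x_ref − x) to the model tendencies, pulling the state toward the driving analysis (spectral nudging applies the term only to selected large-scale wavenumbers). A toy sketch of that relaxation, with all model dynamics replaced by a constant drift purely for illustration:

```python
def integrate(x0, x_ref, bias, G, dt=0.1, steps=500):
    """Integrate dx/dt = bias + G*(x_ref - x); bias stands in for a drifting model."""
    x = x0
    for _ in range(steps):
        tendency = bias                    # the model's own (systematically biased) dynamics
        x += dt * (tendency + G * (x_ref - x))   # nudging term relaxes x toward x_ref
    return x

x_free   = integrate(0.0, x_ref=10.0, bias=1.0, G=0.0)   # no nudging: drifts without bound
x_nudged = integrate(0.0, x_ref=10.0, bias=1.0, G=0.5)   # nudged: settles at x_ref + bias/G
```

The equilibrium offset bias/G illustrates the trade-off the abstract describes: a larger nudging coefficient keeps the run closer to the driving fields but also suppresses the fine-scale departures the downscaling is meant to produce.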

  2. Use of Video Modeling to Teach Weight Lifting Techniques to Adults with Down Syndrome: A Pilot Study

    Science.gov (United States)

    Carter, Kathleen; Pennington, Robert; Ledford, Elizabeth

    2017-01-01

    As adults with Down syndrome (DS) age, their strength decreases, resulting in difficulty performing activities of daily living. In the current study, we investigated the use of video modeling for teaching three adults with DS to perform weight lifting techniques. A multiple probe design across behaviors (i.e., lifts) was used to evaluate…

  3. Finite-element-model updating using computational intelligence techniques applications to structural dynamics

    CERN Document Server

    Marwala, Tshilidzi

    2010-01-01

    Finite element models (FEMs) are widely used to understand the dynamic behaviour of various systems. FEM updating allows FEMs to be tuned better to reflect measured data and may be conducted using two different statistical frameworks: the maximum likelihood approach and Bayesian approaches. Finite Element Model Updating Using Computational Intelligence Techniques applies both strategies to the field of structural mechanics, an area vital for aerospace, civil and mechanical engineering. Vibration data is used for the updating process. Following an introduction, a number of computational intelligence techniques to facilitate the updating process are proposed; they include: • multi-layer perceptron neural networks for real-time FEM updating; • particle swarm and genetic-algorithm-based optimization methods to accommodate the demands of global versus local optimization models; • simulated annealing to put the methodologies on a sound statistical basis; and • response surface methods and expectation m...

  4. Review of the phenomenon of fluidization and its numerical modelling techniques

    Directory of Open Access Journals (Sweden)

    H Khawaja

    2016-10-01

    The paper introduces the phenomenon of fluidization as a process. Fluidization occurs when a fluid (liquid or gas) is pushed upwards through a bed of granular material. This may make the granular material behave like a liquid and, for example, keep a level meniscus in a tilted container, or make a lighter object float on top and a heavier object sink to the bottom. The behavior of the granular material, when fluidized, depends on the superficial gas velocity, particle size, particle density, and fluid properties, resulting in various regimes of fluidization. These regimes are discussed in detail in the paper. The paper also discusses applications of fluidized beds, from their early use in the Winkler coal gasifier to more recent applications in the manufacturing of carbon nanotubes. In addition, the Geldart grouping, based on ranges of particle size, is discussed. The minimum fluidization condition is defined, and it is demonstrated that it may register slightly differently depending on whether the particles are being fluidized or de-fluidized. The paper presents a discussion of three numerical modelling techniques: the two-fluid model, the unresolved fluid-particle model and the resolved fluid-particle model. The two-fluid model is often referred to as the Eulerian-Eulerian method of solution and treats the particles, as well as the fluid, as a continuum. The unresolved and resolved fluid-particle models are based on the Eulerian-Lagrangian method of solution; the key difference between them is whether a drag correlation is used or the boundary layer around the particles is resolved. The paper ends with a discussion of the applicability of these models.
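
    The minimum fluidization condition mentioned above is commonly estimated with the Wen & Yu (1966) correlation, a standard textbook relation (not taken from this paper); a minimal sketch:

```python
import math

def minimum_fluidization_velocity(d_p, rho_p, rho_f, mu, g=9.81):
    """Superficial minimum fluidization velocity U_mf (m/s) from the
    Wen & Yu correlation: Re_mf = sqrt(33.7^2 + 0.0408*Ar) - 33.7,
    where Ar is the Archimedes number of the particle/fluid pair."""
    ar = rho_f * (rho_p - rho_f) * g * d_p ** 3 / mu ** 2  # Archimedes number
    re_mf = math.sqrt(33.7 ** 2 + 0.0408 * ar) - 33.7      # Reynolds number at U_mf
    return re_mf * mu / (rho_f * d_p)
```

    For 500-micron sand in ambient air (illustrative property values), this gives roughly 0.2 m/s, in line with typical bubbling-bed experience.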

  5. New analytical technique for carbon dioxide absorption solvents

    Energy Technology Data Exchange (ETDEWEB)

    Pouryousefi, F.; Idem, R.O. [University of Regina, Regina, SK (Canada). Faculty of Engineering

    2008-02-15

    The densities and refractive indices of two binary systems (water + MEA and water + MDEA) and three ternary systems (water + MEA + CO{sub 2}, water + MDEA + CO{sub 2}, and water + MEA + MDEA) used for carbon dioxide (CO{sub 2}) capture were measured over the range of compositions of the aqueous alkanolamine(s) used for CO{sub 2} absorption at temperatures from 295 to 338 K. Experimental densities were modeled empirically, while the experimental refractive indices were modeled using well-established models based on the known values of the pure-component densities and refractive indices. The density and Gladstone-Dale refractive index models were then used to obtain the compositions of unknown samples of the binary and ternary systems by simultaneous solution of the density and refractive index equations. The results from this technique were compared with HPLC (high-performance liquid chromatography) results, while a third independent technique (acid-base titration) was used to verify the results. The results show that the system compositions obtained from the simple and easy-to-use refractive index/density technique were very comparable to those obtained from the expensive and laborious HPLC/titration techniques, suggesting that the refractive index/density technique can be used to replace existing methods for analysis of fresh or nondegraded, CO{sub 2}-loaded, single and mixed alkanolamine solutions.
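
    The "simultaneous solution of the density and refractive index equations" can be illustrated with idealized mixing rules. The sketch below assumes ideal-volume mixing for density and the Gladstone-Dale rule for refractive index (the paper's empirical density model is not reproduced here), which makes the mass fractions of a ternary mixture the solution of a 3x3 linear system. The pure-component values in the test are illustrative:

```python
def composition_from_density_ri(rho_mix, n_mix, comps):
    """Mass fractions w_i of a 3-component mixture from measured density
    and refractive index, assuming ideal-volume mixing
    (1/rho = sum w_i/rho_i) and the Gladstone-Dale rule
    ((n-1)/rho = sum w_i*(n_i-1)/rho_i).  comps = [(rho_i, n_i), ...]."""
    a = [[1.0, 1.0, 1.0],                        # closure: sum of w_i = 1
         [1.0 / r for r, _ in comps],            # density equation
         [(n - 1.0) / r for r, n in comps]]      # Gladstone-Dale equation
    b = [1.0, 1.0 / rho_mix, (n_mix - 1.0) / rho_mix]
    # Tiny Gauss-Jordan solve with partial pivoting.
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(3):
            if r != col:
                f = m[r][col] / m[col][col]
                m[r] = [x - f * y for x, y in zip(m[r], m[col])]
    return [m[i][3] / m[i][i] for i in range(3)]
```

    In practice the component rows must be well separated in (density, refractivity) space, or the system becomes ill-conditioned and small measurement errors blow up in the recovered composition.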

  6. Model-driven engineering of information systems principles, techniques, and practice

    CERN Document Server

    Cretu, Liviu Gabriel

    2015-01-01

    Model-driven engineering (MDE) is the automatic production of software from simplified models of structure and functionality. It mainly involves the automation of routine and technologically complex programming tasks, thus allowing developers to focus on the true value-adding functionality that the system needs to deliver. This book serves as an overview of some of the core topics in MDE. The volume is broken into two sections offering a selection of papers that help the reader not only understand the MDE principles and techniques, but also learn from practical examples. Also covered are the

  7. Comparison of lung tumor motion measured using a model-based 4DCT technique and a commercial protocol.

    Science.gov (United States)

    O'Connell, Dylan; Shaverdian, Narek; Kishan, Amar U; Thomas, David H; Dou, Tai H; Lewis, John H; Lamb, James M; Cao, Minsong; Tenn, Stephen; Percy, Lee P; Low, Daniel A

    2017-11-11

    To compare lung tumor motion measured with a model-based technique to commercial 4-dimensional computed tomography (4DCT) scans and describe a workflow for using model-based 4DCT as a clinical simulation protocol. Twenty patients were imaged using a model-based technique and commercial 4DCT. Tumor motion was measured on each commercial 4DCT dataset and was calculated on model-based datasets for 3 breathing amplitude percentile intervals: 5th to 85th, 5th to 95th, and 0th to 100th. Internal target volumes (ITVs) were defined on the 4DCT and 5th to 85th interval datasets and compared using Dice similarity. Images were evaluated for noise and rated by 2 radiation oncologists for artifacts. Mean differences in tumor motion magnitude between commercial and model-based images were 0.47 ± 3.0, 1.63 ± 3.17, and 5.16 ± 4.90 mm for the 5th to 85th, 5th to 95th, and 0th to 100th amplitude intervals, respectively. Dice coefficients between ITVs defined on commercial and 5th to 85th model-based images had a mean value of 0.77 ± 0.09. Single standard deviation image noise was 11.6 ± 9.6 HU in the liver and 6.8 ± 4.7 HU in the aorta for the model-based images compared with 57.7 ± 30 and 33.7 ± 15.4 for commercial 4DCT. Mean model error within the ITV regions was 1.71 ± 0.81 mm. Model-based images exhibited reduced presence of artifacts at the tumor compared with commercial images. Tumor motion measured with the model-based technique using the 5th to 85th percentile breathing amplitude interval corresponded more closely to commercial 4DCT than the 5th to 95th or 0th to 100th intervals, which showed greater motion on average. The model-based technique tended to display increased tumor motion when breathing amplitude intervals wider than 5th to 85th were used because of the influence of unusually deep inhalations. These results suggest that care must be taken in selecting the appropriate interval during image generation when using model-based 4DCT methods. Copyright © 2017
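
    The Dice similarity used above to compare ITVs has a compact definition, 2|A ∩ B| / (|A| + |B|); a minimal sketch over voxel-index sets (illustrative, not the authors' implementation):

```python
def dice(a, b):
    """Dice similarity coefficient between two volumes represented as
    sets of voxel indices: 2|A ∩ B| / (|A| + |B|), in [0, 1]."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0  # two empty volumes are conventionally identical
    return 2.0 * len(a & b) / (len(a) + len(b))
```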

  8. Results of the naive quark model

    International Nuclear Information System (INIS)

    Gignoux, C.

    1987-10-01

    The hypotheses and limits of the naive quark model are recalled, and results on nucleon-nucleon scattering and possible multiquark states are presented. The results show that the model does not produce Roper resonances. For hadron-hadron interactions, the model predicts Van der Waals forces that the resonance group method does not allow. Known many-body forces are not reproduced by the model. The lack of mesons shows up in the absence of a long-range force. However, the model does have strengths: it is free from center-of-mass spuriousness and allows a democratic handling of flavor. It has few parameters, and its predictions are very good [fr

  9. All-Arthroscopic Revision Eden-Hybinette Procedure for Failed Instability Surgery: Technique and Preliminary Results.

    Science.gov (United States)

    Giannakos, Antonios; Vezeridis, Peter S; Schwartz, Daniel G; Jany, Richard; Lafosse, Laurent

    2017-01-01

    To describe the technique of an all-arthroscopic Eden-Hybinette procedure in the revision setting for treatment of a failed instability procedure, particularly after a failed Latarjet, and to present preliminary results of this technique. Between 2007 and 2011, 18 shoulders with persistent instability after failed instability surgery were treated with an arthroscopic Eden-Hybinette technique using an autologous bicortical iliac crest bone graft. Of 18 patients, 12 (9 men, 3 women) were available for follow-up. The average follow-up was 28.8 months (range, 15 to 60 months). A Latarjet procedure had been performed as the index surgery in 10 patients (83%); 2 patients (17%) had a prior arthroscopic Bankart repair. Eight patients (67%) obtained a good or excellent result, whereas 4 patients (33%) reported a fair or poor result. Seven patients (58%) returned to sport activities. A positive apprehension test persisted in 5 patients (42%), including 2 patients (17%) with recurrent subluxations. The Rowe score increased from 30.00 to 78.33 points, and the Instability Index score showed a good result of 28.71% (603 points). The average anterior flexion was 176° (range, 150° to 180°), and the average external rotation was 66° (range, 0° to 90°). Two patients (16.67%) showed a progression of glenohumeral osteoarthritic changes, with each patient increasing by one stage in the Samilson-Prieto classification. All 4 patients (33%) with a fair or poor result had a nonunion identified on postoperative computed tomography scan. An all-arthroscopic Eden-Hybinette procedure in the revision setting for failed instability surgery, although technically demanding, is a safe, effective, and reproducible technique. Although the learning curve is considerable, this procedure offers all the advantages of arthroscopic surgery and allows reconstruction of glenoid defects and restoration of shoulder stability in this challenging patient population. In our hands, this procedure yields good results.

  10. Joining of polymer-metal lightweight structures using self-piercing riveting (SPR) technique: Numerical approach and simulation results

    Science.gov (United States)

    Amro, Elias; Kouadri-Henni, Afia

    2018-05-01

    Restrictions on pollutant emissions dictated at the European Commission level in the past few years have urged mass-production car manufacturers to rapidly pursue several strategies to significantly reduce the energy consumption of their vehicles. One of the most relevant actions taken is the light-weighting of body-in-white (BIW) structures, concretely visible in the increased introduction of polymer-based composite materials reinforced by carbon/glass fibers. However, the design and manufacturing of such "hybrid" structures limits the use of conventional assembly techniques such as resistance spot welding (RSW), which are not directly transferable to polymer-metal joining. This research aims at developing a joining technique that would eventually enable the assembly of a sheet molding compound (SMC) polyester thermoset component onto a structure composed of several high-strength steel grades. A state-of-the-art review of polymer-metal joining techniques highlighted the few candidates potentially able to meet the industrial challenge: structural bonding, self-piercing riveting (SPR), direct laser joining and friction spot welding (FSpW). In this study, the promising SPR technique is investigated. Modelling of the SPR process in the case of polymer-metal joining was performed by building a 2D axisymmetric FE model using the commercial code Abaqus CAE 6.10-1. Details of the numerical approach are presented, with particular attention to the composite sheet, for which Mori-Tanaka's homogenization method is used to estimate overall mechanical properties. Large deformations induced by the riveting process are handled with a mixed finite element formulation, ALE (arbitrary Lagrangian-Eulerian). FE model predictions are compared with experimental data, followed by a discussion.

  11. Minimally invasive treatment of trochanteric fractures with intramedullary nails. Technique and results.

    Science.gov (United States)

    Todor, Adrian; Pojar, Adina; Lucaciu, Dan

    2013-01-01

    The aim of the study was to evaluate the results of minimally invasive treatment of trochanteric fractures with the use of intramedullary nails. From September 2010 to September 2012 we treated 21 patients with pertrochanteric fractures by a minimally invasive technique using the Gamma 3 (Stryker, Howmedica) nail. There were 13 women and 8 men, with a mean age of 74.1 years (range, 58 to 88 years). Fractures were classified as stable (AO type 31-A1) in 5 cases and unstable (AO types 31-A2 and A3) in the remaining 16 cases. Patients were reviewed at 6 weeks and 3 months postoperatively. Mean surgery time was 46.8 minutes and mean hospital stay was 14.9 days. No patients required blood transfusions. During the hospital stay all patients were mobilized with weight bearing as tolerated. All patients were available for review at 6 weeks, and 2 were lost to the 3-month follow-up. Sixteen patients regained their previous level of activity. This minimally invasive technique using a Gamma nail device for pertrochanteric fractures reliably gives good results, with excellent preservation of hip function.

  12. Investigations of auroral dynamics: techniques and results

    International Nuclear Information System (INIS)

    Steen, Aa.

    1988-10-01

    This study is an experimental investigation of the dynamics of the aurora, describing both the systems developed for the optical measurements and the results obtained. It is found that during an auroral arc deformation, a fold travelling eastward along the arc is associated with an enhanced F-region ion temperature of 2700 K, measured by EISCAT, indicative of enhanced ionospheric electric fields. It is shown that for an auroral break-up, the large-scale westward travelling surge (WTS) is the last developed spiral in a sequence of spiral formations. It is proposed that the Kelvin-Helmholtz instability is the responsible process. In another event it is shown that large-amplitude, long-lasting pulsations, observed both in ground-based magnetic field and photometer recordings, correspond to strong modulations of the particle intensity at the equatorial orbit (6.6 Re). In this event a gradual transition occurs between pulses classified as Ps6/auroral torches and pulses with the characteristics of substorms. The observations are explained by the Kelvin-Helmholtz instability in a magnetospheric boundary layer. The meridional neutral wind, at about 240 km altitude, is found to be reduced prior to or at the onset of auroral activity. These findings are suggestive of large-scale reconfigurations of the ionospheric electric fields prior to auroral onsets. A new real-time triangulation technique developed to determine the altitude of auroral arcs is presented, and an alternative method to analyze incoherent scatter data is discussed. (With 46 refs.) (author)

  13. Uncertainty estimates of a GRACE inversion modelling technique over Greenland using a simulation

    Science.gov (United States)

    Bonin, Jennifer; Chambers, Don

    2013-07-01

    The low spatial resolution of GRACE causes leakage, where signals in one location spread out into nearby regions. Because of this leakage, using simple techniques such as basin averages may result in an incorrect estimate of the true mass change in a region. A fairly simple least squares inversion technique can be used to localize mass changes more specifically into a pre-determined set of basins of uniform internal mass distribution. However, the accuracy of these higher-resolution basin mass amplitudes has not been determined, nor is it known how the distribution of the chosen basins affects the results. We use a simple `truth' model over Greenland as an example case, to estimate the uncertainties of this inversion method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We determine that an appropriate level of smoothing (300-400 km) and process noise (0.30 cm2 of water) yields the best results. The trends of the Greenland internal basins and Iceland can be reasonably estimated with this method, with average systematic errors of 3.5 cm yr-1 per basin. The largest mass losses found from GRACE RL04 occur in the coastal northwest (-19.9 and -33.0 cm yr-1) and southeast (-24.2 and -27.9 cm yr-1), with small mass gains (+1.4 to +7.7 cm yr-1) found across the northern interior. Acceleration of mass change is measurable at the 95 per cent confidence level in four northwestern basins, but not elsewhere in Greenland. Due to an insufficiently detailed distribution of basins across internal Canada, the trend estimates of Baffin and Ellesmere Islands are expected to be incorrect due to systematic errors caused by the inversion technique.
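
    The least squares inversion described above can be sketched in miniature: given a known leakage matrix mapping basin amplitudes to smoothed observations, solve the normal equations for the amplitudes. The two-basin example and all numbers below are illustrative, not the paper's Greenland configuration:

```python
def invert_two_basins(A, y):
    """Least-squares amplitudes (w1, w2) for two basins from "leaked"
    observations y, where A[i] = (a_i1, a_i2) gives the fraction of each
    basin's signal appearing in observation i.  The normal equations
    (A^T A) w = A^T y are solved by Cramer's rule."""
    s11 = sum(a1 * a1 for a1, _ in A)
    s12 = sum(a1 * a2 for a1, a2 in A)
    s22 = sum(a2 * a2 for _, a2 in A)
    b1 = sum(a1 * yi for (a1, _), yi in zip(A, y))
    b2 = sum(a2 * yi for (_, a2), yi in zip(A, y))
    det = s11 * s22 - s12 * s12
    return ((b1 * s22 - b2 * s12) / det, (s11 * b2 - s12 * b1) / det)
```

    The conditioning of the normal equations degrades as the basins' leakage patterns overlap, which is exactly why the basin distribution matters in the paper's uncertainty study.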

  14. A predictive model for dysphagia following IMRT for head and neck cancer: Introduction of the EMLasso technique

    International Nuclear Information System (INIS)

    Kim, De Ruyck; Duprez, Fréderic; Werbrouck, Joke; Sabbe, Nick; Sofie, De Langhe; Boterberg, Tom; Madani, Indira; Thas, Olivier; Wilfried, De Neve; Thierens, Hubert

    2013-01-01

    Background and purpose: To design a model for prediction of acute dysphagia following intensity-modulated radiotherapy (IMRT) for head and neck cancer, and to illustrate the use of the EMLasso technique for model selection. Material and methods: Radiation-induced dysphagia was scored using CTCAE v.3.0 in 189 head and neck cancer patients. Clinical data (gender, age, nicotine and alcohol use, diabetes, tumor location), treatment parameters (chemotherapy, surgery involving the primary tumor, lymph node dissection, overall treatment time), dosimetric parameters (doses delivered to the pharyngeal constrictor (PC) muscles and esophagus) and 19 genetic polymorphisms were used in model building. The prediction model was obtained with EMLasso, i.e. an EM algorithm to account for missing values, applied to penalized logistic regression, which allows for variable selection by tuning the penalization parameter through cross-validation on AUC, thus avoiding overfitting. Results: Fifty-three patients (28%) developed acute grade ⩾3 dysphagia. The final model has an AUC of 0.71 and contains concurrent chemotherapy, D2 to the superior PC, and the rs3213245 (XRCC1) polymorphism. The model's false negative rate and false positive rate at the optimal operating point on the ROC curve are 21% and 49%, respectively. Conclusions: This study demonstrated the utility of the EMLasso technique for model selection in predictive radiogenetics.
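
    The core of the approach, lasso-penalized logistic regression, can be sketched with proximal gradient descent (ISTA) on toy data. This is not the EMLasso algorithm itself, which additionally handles missing values via EM and tunes the penalty by cross-validated AUC; it only shows how the L1 penalty drives uninformative coefficients exactly to zero:

```python
import math

def soft(x, t):
    # Soft-thresholding operator, the proximal map of the L1 penalty.
    return math.copysign(max(abs(x) - t, 0.0), x)

def l1_logistic(X, y, lam=0.1, step=0.1, iters=2000):
    """Lasso-penalized logistic regression (no intercept) fitted by
    proximal gradient: gradient step on the mean log-loss, then
    soft-thresholding with threshold step*lam."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        grad = [0.0] * p
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi))
            err = 1.0 / (1.0 + math.exp(-z)) - yi  # sigmoid(z) - y
            for j in range(p):
                grad[j] += err * xi[j] / n
        w = [soft(wj - step * gj, step * lam) for wj, gj in zip(w, grad)]
    return w
```

    With one informative feature and one noise feature, the fitted weight vector keeps the former and zeroes the latter, which is the variable-selection behaviour the abstract refers to.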

  15. Metabolic syndrome after bariatric surgery. Results depending on the technique performed.

    Science.gov (United States)

    Gracia-Solanas, Jose Antonio; Elia, M; Aguilella, V; Ramirez, J M; Martínez, J; Bielsa, M A; Martínez, M

    2011-02-01

    There is a lack of long-term studies of the metabolic syndrome after bariatric surgery. Our aim is to show the evolution of the parameters that define the metabolic syndrome after bariatric surgery, over up to 10 years of follow-up, in order to clarify which technique yields the best results with the fewest complications. The IDF definition of the metabolic syndrome was used for this study. One hundred twenty-five morbidly obese and superobese patients underwent vertical banded gastroplasty, 265 morbidly obese and superobese patients had biliopancreatic diversion (Scopinaro and modified biliopancreatic diversions), and 152 morbidly obese patients underwent laparoscopic gastric bypass. A mean follow-up of up to 7 years was achieved in all groups. Prior to surgery, metabolic syndrome was diagnosed in 114 patients of the Scopinaro group (76%), 85 patients of the modified biliopancreatic diversion group (73.9%), 81 patients of the laparoscopic gastric bypass group (53.4%), and 98 patients of the vertical banded gastroplasty group (78.4%). When the metabolic syndrome parameters were evaluated at 7 years of follow-up, they had returned close to preoperative values in both the laparoscopic gastric bypass and vertical banded gastroplasty groups, owing to weight regain. According to our results, the technique that best resolves the metabolic syndrome is the modified biliopancreatic diversion. Due to its high morbidity, it should be considered only in superobese patients. In obese patients, laparoscopic gastric bypass may be a less aggressive choice, but it should be coupled with lifestyle changes to prevent long-term weight regain. Restrictive procedures may be indicated only in a few well-selected cases.

  16. Personal recommender systems for learners in lifelong learning: requirements, techniques and model

    NARCIS (Netherlands)

    Drachsler, Hendrik; Hummel, Hans; Koper, Rob

    2007-01-01

    Drachsler, H., Hummel, H. G. K., & Koper, R. (2008). Personal recommender systems for learners in lifelong learning: requirements, techniques and model. International Journal of Learning Technology, 3(4), 404-423.

  17. Development of Reservoir Characterization Techniques and Production Models for Exploiting Naturally Fractured Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Michael L.; Brown, Raymon L.; Civan, Frauk; Hughes, Richard G.

    2001-08-15

    Research continues on characterizing and modeling the behavior of naturally fractured reservoir systems. Work has progressed on developing techniques for estimating fracture properties from seismic and well log data, developing naturally fractured wellbore models, and developing a model to characterize the transfer of fluid from the matrix to the fracture system for use in the naturally fractured reservoir simulator.

  18. Adult spinal deformity treated with minimally invasive surgery. Description of surgical technique, radiological results and literature review.

    Science.gov (United States)

    Domínguez, I; Luque, R; Noriega, M; Rey, J; Alía, J; Urda, A; Marco, F

    The prevalence of adult spinal deformity has been increasing exponentially over time. Surgery has been credited with good radiological and clinical results, but the incidence of complications is high. MIS techniques provide good results with fewer complications. This is a retrospective study of 25 patients with adult spinal deformity treated by MIS surgery, with a minimum follow-up of 6 months. Radiological improvement was as follows: SVA from 5 to 2 cm, coronal Cobb angle from 31° to 6°, and lumbar lordosis from 18° to 38°. All of these parameters remained stable over time. We also present the complications that appeared in 4 patients (16%); only one patient needed reoperation. We describe the technique used and review the literature on the subject. We conclude that the MIS technique for treating adult spinal deformity achieves results comparable to those of conventional techniques but with fewer complications. Copyright © 2017 SECOT. Publicado por Elsevier España, S.L.U. All rights reserved.

  19. Numerical and physical testing of upscaling techniques for constitutive properties

    International Nuclear Information System (INIS)

    McKenna, S.A.; Tidwell, V.C.

    1995-01-01

    This paper evaluates upscaling techniques for hydraulic conductivity measurements based on accuracy and practicality for implementation in evaluating the performance of the potential repository at Yucca Mountain. Analytical and numerical techniques are compared to one another, to the results of physical upscaling experiments, and to the results obtained on the original domain. The results from the different scaling techniques are then compared to the case where unscaled point-scale statistics are used to generate realizations directly at the flow-model grid-block scale. Initial results indicate that analytical techniques provide a practical means of upscaling constitutive properties from the point measurement scale to the flow-model grid-block scale; however, no single analytic technique proves to be adequate for all situations. Numerical techniques are also accurate, but they are time intensive and their accuracy depends on knowledge of the local flow regime at every grid block
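
    The classical analytical upscaling rules referred to here bound the effective conductivity of a heterogeneous block: the harmonic mean is exact for flow perpendicular to perfect layers, the arithmetic mean for flow parallel to them, and the geometric mean is a common estimate for 2D statistically isotropic media. A minimal sketch:

```python
import math

def arithmetic_mean(k):
    # Exact effective conductivity for flow parallel to perfect layers.
    return sum(k) / len(k)

def harmonic_mean(k):
    # Exact effective conductivity for flow perpendicular to perfect layers.
    return len(k) / sum(1.0 / x for x in k)

def geometric_mean(k):
    # Common estimate for 2D statistically isotropic log-normal media.
    return math.exp(sum(math.log(x) for x in k) / len(k))
```

    The three means always satisfy harmonic <= geometric <= arithmetic, which is one reason no single analytical rule suits every flow configuration.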

  20. Full Core modeling techniques for research reactors with irregular geometries using Serpent and PARCS applied to the CROCUS reactor

    International Nuclear Information System (INIS)

    Siefman, Daniel J.; Girardin, Gaëtan; Rais, Adolfo; Pautz, Andreas; Hursin, Mathieu

    2015-01-01

    Highlights: • Modeling of research reactors. • Serpent and PARCS coupling. • Lattice physics codes modeling techniques. - Abstract: This paper summarizes the results of modeling methodologies developed for the zero-power (100 W) teaching and research reactor CROCUS located in the Laboratory for Reactor Physics and Systems Behavior (LRS) at the Swiss Federal Institute of Technology in Lausanne (EPFL). The study gives evidence that the Monte Carlo code Serpent can be used effectively as a lattice physics tool for small reactors. CROCUS' core has an irregular geometry with two fuel zones of different lattice pitches. This and the reactor's small size necessitate the use of nonstandard cross-section homogenization techniques when modeling the full core with a 3D nodal diffusion code (e.g. PARCS). The primary goal of this work is the development of these techniques for steady-state neutronics and future transient neutronics analyses of not only CROCUS, but research reactors in general. In addition, the modeling methods can provide useful insight for analyzing small modular reactor concepts based on light water technology. Static computational models of CROCUS with the codes Serpent and MCNP5 are presented, and methodologies are analyzed for using Serpent and SerpentXS to prepare macroscopic homogenized group cross-sections for a pin-by-pin model of CROCUS with PARCS. The most accurate homogenization scheme led to a difference in k-eff of 385 pcm between the Serpent and PARCS models, while the MCNP5 and Serpent models differed in k-eff by 13 pcm (within the statistical error of each simulation). Comparisons of the axial power profiles between the Serpent model as a reference and a set of PARCS models using different homogenization techniques showed a consistent root-mean-square deviation of ∼8%, indicating that the differences are not due to the homogenization technique but rather arise from the definition of the diffusion coefficients

  1. Abnormal urinalysis results are common, regardless of specimen collection technique, in women without urinary tract infections.

    Science.gov (United States)

    Frazee, Bradley W; Enriquez, Kayla; Ng, Valerie; Alter, Harrison

    2015-06-01

    Voided urinalysis to test for urinary tract infection (UTI) is prone to false-positive results for a number of reasons. Specimens are often collected at triage from women with any abdominal complaint, creating a low UTI prevalence population. Improper collection technique by the patient may affect the result. At least four indices, if positive, can indicate UTI. We examine the impact of voided specimen collection technique on urinalysis indicators of UTI and on urine culture contamination in disease-free women. In this crossover design, 40 menstrual-age female emergency department staff without UTI symptoms collected urine two ways: directly in a cup ("non-clean") and midstream clean catch ("ideal"). Samples underwent standard automated urinalysis and culture. Urinalysis indices and culture contamination were compared. The proportions of abnormal results from samples collected by "non-clean" vs. "ideal" technique, respectively, were: leukocyte esterase (>trace) 50%, 35% (95% confidence interval for difference -6% to 36%); nitrites (any) 2.5%, 2.5% (difference -2.5 to 2.5%); white blood cells (>5/high-power field [HPF]) 50%, 27.5% (difference 4 to 41%); bacteria (any/HPF) 77.5%, 62.5% (difference -7 to 37%); epithelial cells (>few) 65%, 30% (difference 13 to 56%); culture contamination (>1000 colony-forming units of commensal or >2 species) 77%, 63% (difference -5 to 35%). No urinalysis index was positively correlated with culture contamination. Contemporary automated urinalysis indices were often abnormal in a disease-free population of women, even using ideal collection technique. In clinical practice, such false-positive results could lead to false-positive UTI diagnosis. Only urine nitrite showed a high specificity. Culture contamination was common regardless of collection technique and was not predicted by urinalysis results. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Field studies of the thermal plume from the D. C. Cook submerged discharge with comparisons to hydraulic-model results

    International Nuclear Information System (INIS)

    Frigo, A.A.; Paddock, R.A.; McCown, D.L.

    1975-06-01

    The Donald C. Cook Nuclear Plant at Bridgman, Michigan, uses submerged-diffuser discharges as a means of disposing waste heat into Lake Michigan. Preliminary results of temperature surveys of the thermal plume at the D. C. Cook Plant are presented. Indications are that the spatial extent of the plume at the surface is much smaller than previous results for surface shoreline discharges, particularly in the near and intermediate portions of the plume. Comparisons of limited prototype data with hydraulic (tank)-model predictions indicate that the model predictions for centerline temperature decay at the surface are too high for the initial 200 m from the discharge, but are generally correct beyond this point to the limits of the model. In addition, the hydraulic-model results underestimate the areal extent of the near and intermediate portions of the plume at the surface. Because this is the first report of a new field program, several inadequacies in the field-measurement techniques are noted and discussed. New techniques that have been developed to remedy these deficiencies, and which will be implemented for future field work, are also described. (auth)

  3. Subsurface stormflow modeling with sensitivity analysis using a Latin-hypercube sampling technique

    International Nuclear Information System (INIS)

    Gwo, J.P.; Toran, L.E.; Morris, M.D.; Wilson, G.V.

    1994-09-01

    Subsurface stormflow, because of its dynamic and nonlinear features, has been a very challenging process in both field experiments and modeling studies. The disposal of wastes in subsurface stormflow and vadose zones at Oak Ridge National Laboratory, however, demands more effort to characterize these flow zones and to study their dynamic flow processes. Field data and modeling studies for these flow zones are relatively scarce, and the effect of engineering designs on the flow processes is poorly understood. On the basis of a risk assessment framework and a conceptual model for the Oak Ridge Reservation area, numerical models of a proposed waste disposal site were built, and a Latin-hypercube simulation technique was used to study the uncertainty of model parameters. Four scenarios, with three engineering designs, were simulated, and the effectiveness of the engineering designs was evaluated. Sensitivity analysis of model parameters suggested that hydraulic conductivity was the most influential parameter. However, local heterogeneities may alter flow patterns and result in complex recharge and discharge patterns. Hydraulic conductivity, therefore, may not be used as the only reference for subsurface flow monitoring and engineering operations. Neither of the two engineering designs, capping and French drains, was found to be effective in hydrologically isolating downslope waste trenches. However, pressure head contours indicated that combinations of both designs may prove more effective than either one alone
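
    Latin-hypercube sampling, as used in the uncertainty analysis above, stratifies each parameter's range so that every equal-probability interval is sampled exactly once, giving better coverage than plain random sampling for the same number of model runs. A minimal sketch (parameter ranges in the test are illustrative, not the study's values):

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin-hypercube sample: each parameter's range is split into
    n_samples equal strata, one draw is taken per stratum, and the
    strata are independently shuffled across parameters."""
    rng = random.Random(seed)
    dims = len(bounds)
    samples = [[0.0] * dims for _ in range(n_samples)]
    for d, (lo, hi) in enumerate(bounds):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n_samples  # uniform point within stratum s
            samples[i][d] = lo + u * (hi - lo)
    return samples
```

    Each column of the result then contains exactly one value per stratum, which is the property that makes sensitivity estimates efficient with few runs.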

  4. Near-real-time regional troposphere models for the GNSS precise point positioning technique

    International Nuclear Information System (INIS)

    Hadas, T; Kaplon, J; Bosy, J; Sierny, J; Wilgan, K

    2013-01-01

    The GNSS precise point positioning (PPP) technique requires the application of high-quality products (orbits and clocks), since their errors directly affect the quality of positioning. For real-time purposes it is possible to utilize ultra-rapid precise orbits and clocks which are disseminated through the Internet. In order to eliminate as many unknown parameters as possible, one may introduce external information on zenith troposphere delay (ZTD). It is desirable that the a priori model is accurate and reliable, especially for real-time application. One of the open problems in GNSS positioning is troposphere delay modelling on the basis of ground meteorological observations. The Institute of Geodesy and Geoinformatics of Wroclaw University of Environmental and Life Sciences (IGG WUELS) has developed two independent regional troposphere models for the territory of Poland. The first is estimated in a near-real-time regime using GNSS data from a Polish ground-based augmentation system named ASG-EUPOS, established by the Polish Head Office of Geodesy and Cartography (GUGiK) in 2008. The second is based on meteorological parameters (temperature, pressure and humidity) gathered from various meteorological networks operating over the area of Poland and surrounding countries. This paper describes the methodology of both model calculation and verification. It also presents results of applying various ZTD models in kinematic PPP in post-processing mode using the Bernese GPS Software. Positioning results were used to assess the quality of the developed models during changing weather conditions. Finally, the impact of model application on the precision, accuracy and convergence time of simulated real-time PPP is discussed. (paper)
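
    A common way to build an a priori ZTD from surface meteorology is the Saastamoinen model for the hydrostatic part; the sketch below implements that standard formula (the wet part, which the regional models above target, is much harder to model and is omitted here):

```python
import math

def zenith_hydrostatic_delay(p_hpa, lat_rad, h_m):
    """Saastamoinen zenith hydrostatic delay in metres, from surface
    pressure (hPa), station latitude (rad) and height (m).  This is the
    dominant, well-modelled component of the total ZTD."""
    return 0.0022768 * p_hpa / (1.0 - 0.00266 * math.cos(2.0 * lat_rad) - 2.8e-7 * h_m)
```

    At standard sea-level pressure the hydrostatic delay is about 2.3 m, while the wet delay typically adds a few tens of centimetres that vary rapidly with humidity.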

  5. Real-time kinetic modeling of YSZ thin film roughness deposited by e-beam evaporation technique

    International Nuclear Information System (INIS)

    Galdikas, A.; Cerapaite-Trusinskiene, R.; Laukaitis, G.; Dudonis, J.

    2008-01-01

    In the present study, the deposition of yttria-stabilized zirconia (YSZ) thin films on optical quartz (SiO2) substrates using the e-beam deposition technique, with the electron gun power as the controlled parameter, is analyzed. It was found that the electron gun power influences the non-monotonic kinetics of YSZ film surface roughness. The evolution of YSZ thin film surface roughness was analyzed by a kinetic model. The model is based on rate equations and includes surface diffusion of adatoms and clusters, as well as nucleation, growth and coalescence of islands for thin film growth in the Volmer-Weber mode. The analysis of the experimental results by modeling explains the non-monotonic kinetics and the dependence of the surface roughness on the electron gun power. A good quantitative agreement with experimental results is obtained by taking into account the initial roughness of the substrate surface and the amount of clusters in the flux of evaporated material.
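The flavour of a rate-equation growth model can be sketched in a few lines. The system below is a generic mean-field adatom/island nucleation pair in dimensionless units (deposition, dimer nucleation, capture by islands), not the authors' full model, which also includes cluster diffusion, coalescence and a roughness term; all rates here are hypothetical.

```python
def island_kinetics(flux, diff, t_end=1.0, dt=1e-4):
    """Toy rate equations for Volmer-Weber-style growth (dimensionless units).

    n1 : adatom density,  nx : stable island density.
    dn1/dt = F - 2*D*n1^2 - D*n1*nx   (deposition, dimer nucleation, capture)
    dnx/dt = D*n1^2                   (each nucleation event creates an island)
    Integrated with a simple forward-Euler loop.
    """
    n1 = nx = 0.0
    for _ in range(int(t_end / dt)):
        dn1 = flux - 2.0 * diff * n1 * n1 - diff * n1 * nx
        dnx = diff * n1 * n1
        n1 += dn1 * dt
        nx += dnx * dt
    return n1, nx
```

The qualitative behaviour matches the standard picture: the adatom density rises quickly, then decays as a growing island population captures the deposited material.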

  6. The Development and Application of Reactive Transport Modeling Techniques to Study Radionuclide Migration at Yucca Mountain, NV

    International Nuclear Information System (INIS)

    Hari Selvi Viswanathan

    1999-01-01

    Yucca Mountain, Nevada has been chosen as a possible site for the first high-level radioactive waste repository in the United States. As part of the site investigation studies, we need to make scientifically rigorous estimations of radionuclide migration in the event of a repository breach. Performance assessment models used to make these estimations are computationally intensive. We have developed two reactive transport modeling techniques to simulate radionuclide transport at Yucca Mountain: (1) the selective coupling approach applied to the convection-dispersion-reaction (CDR) model and (2) a reactive stream tube approach (RST). These models were designed to capture the important processes that influence radionuclide migration while being computationally efficient. The conventional method of modeling reactive transport is to solve a coupled set of multi-dimensional partial differential equations for the relevant chemical components in the system. We have developed an iterative solution technique, denoted the selective coupling method, that represents a versatile alternative to traditional uncoupled iterative techniques and the fully coupled global implicit method. We show that selective coupling results in computational and memory savings relative to these approaches. We developed RST as an alternative to the CDR method for solving large two- or three-dimensional reactive transport simulations in cases where one is interested in predicting the flux across a specific control plane. In the RST method, the multidimensional problem is reduced to a series of one-dimensional transport simulations along streamlines. The key assumption of RST is that mixing at the control plane approximates the transverse dispersion between streamlines. We compare the CDR and RST approaches for several scenarios that are relevant to the Yucca Mountain Project. For example, we apply the CDR and RST approaches to model an ongoing field experiment called the Unsaturated Zone
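The stream tube idea can be illustrated with a toy calculation: each streamline carries an independent one-dimensional result, and the control-plane value is a flux-weighted average over streamlines. In the sketch below, analytic first-order decay stands in for the reactive chemistry, and the travel times and weights are made-up inputs, not Yucca Mountain data.

```python
import math

def control_plane_concentration(travel_times, weights, decay_const, c0=1.0):
    """Reactive stream tube (RST) sketch: solve each streamline independently
    (here an analytic first-order decay over the streamline's travel time) and
    mix the results at the control plane using flux weights."""
    mixed = sum(w * c0 * math.exp(-decay_const * t)
                for t, w in zip(travel_times, weights))
    return mixed / sum(weights)
```

Because each streamline is solved in isolation, the cost grows with the number of streamlines rather than with the full multidimensional grid, which is the computational advantage the abstract describes.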

  7. Development and validation of predictive simulation model of multi-layer repair welding process by temper bead technique

    International Nuclear Information System (INIS)

    Okano, Shigetaka; Miyasaka, Fumikazu; Mochizuki, Masahito; Tanaka, Manabu

    2015-01-01

    Stress corrosion cracking (SCC) has recently been observed in the nickel-base alloy weld metal of dissimilar pipe joints used in pressurized water reactors (PWR). The temper bead technique has been developed as a repair procedure against SCC applicable in cases where post-weld heat treatment (PWHT) is difficult to carry out. In this regard, however, it is essential to pass the property and performance qualification test to confirm the effect of tempering on the mechanical properties of repair welds before the temper bead technique is actually used in practice. Thus the appropriate welding procedure conditions in the temper bead technique are determined on the basis of property and performance qualification testing. This is necessary for certifying the structural soundness and reliability of repair welds, but at present it takes a lot of work and time. It is therefore desirable to establish reasonable alternatives for qualifying the property and performance of repair welds. In this study, mathematical modeling and numerical simulation procedures were developed for predicting weld bead configuration and temperature distribution during a multi-layer repair welding process by the temper bead technique. In the developed simulation technique, the characteristics of the heat source in temper bead welding are calculated from the weld heat input conditions through an arc plasma simulation, and the weld bead configuration and temperature distribution during temper bead welding are then calculated from the obtained heat source characteristics through a coupled analysis of bead surface shape and thermal conduction. The simulation results were compared with experimental results under the same welding heat input conditions. The bead surface shape and temperature distribution, such as the Ac1 lines, were in good agreement between simulation and experiment. It was concluded that the developed simulation technique has the potential to become useful for

  8. Electricity market price spike analysis by a hybrid data model and feature selection technique

    International Nuclear Information System (INIS)

    Amjady, Nima; Keynia, Farshid

    2010-01-01

    In a competitive electricity market, energy price forecasting is an important activity for both suppliers and consumers. For this reason, many techniques have been proposed in recent years to predict electricity market prices. However, the electricity price is a complex volatile signal with many spikes. Most electricity price forecast techniques focus on normal price prediction, while price spike forecasting is a different and more complex prediction process. Price spike forecasting has two main aspects: prediction of price spike occurrence and of spike value. In this paper, a novel technique for price spike occurrence prediction is presented, composed of a new hybrid data model, a novel feature selection technique and an efficient forecast engine. The hybrid data model includes both wavelet and time domain variables as well as calendar indicators, comprising a large candidate input set. The set is refined by the proposed feature selection technique, which evaluates both the relevancy and the redundancy of the candidate inputs. The forecast engine is a probabilistic neural network, which is fed with the inputs selected by the feature selection technique and predicts price spike occurrence. The efficiency of the whole proposed method for price spike occurrence forecasting is evaluated by means of real data from the Queensland and PJM electricity markets. (author)
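A relevancy-minus-redundancy filter of the general kind described above can be sketched greedily: rank candidates by relevance to the target, then repeatedly add the candidate whose relevance most exceeds its average redundancy with the already-selected set. The sketch uses absolute Pearson correlation as a simple stand-in for the paper's information-theoretic criterion; all data below are synthetic.

```python
import numpy as np

def select_features(X, y, k):
    """Greedy relevancy-minus-redundancy feature filter (mRMR-style sketch;
    the paper uses its own mutual-information-based criterion)."""
    n_features = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_features)])
    selected = [int(np.argmax(relevance))]        # start with most relevant
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            # Penalize candidates that duplicate already-selected inputs
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected
```

On data with a near-duplicate feature pair, the filter keeps only one of the pair and prefers an independent informative input, which is the behaviour the abstract attributes to joint relevancy/redundancy evaluation.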

  9. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

    "Engaging, elegantly written." - Applied Mathematical Modelling. Mathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models. The author begins with a discussion of the term "model," followed by clearly presented examples of the different types of models

  10. Using ecosystem modelling techniques in exposure assessments of radionuclides - an overview

    International Nuclear Information System (INIS)

    Kumblad, L.

    2005-01-01

    The risk to humans from potential releases from nuclear facilities is evaluated in safety assessments. Essential components of these assessments are exposure models, which estimate the transport of radionuclides in the environment, the uptake in biota, and the transfer to humans. Recently, there has been growing concern for radiological protection of the whole environment, not only humans, and a first attempt has been to employ model approaches based on stylized environments and transfer functions to biota based exclusively on bioconcentration factors (BCF). These are generally of a non-mechanistic nature and involve no knowledge of the actual processes involved, which is a severe limitation when assessing real ecosystems. In this paper, the possibility of using an ecological modelling approach as a complement or an alternative to the use of BCF-based models is discussed. The paper gives an overview of ecological and ecosystem modelling and examples of studies where ecosystem models have been used in association with ecological risk assessment studies for pollutants other than radionuclides. It also discusses the potential to use this technique in exposure assessments of radionuclides, with a few examples from the safety assessment work performed by the Swedish nuclear fuel and waste management company (SKB). Finally, there is a comparison of the characteristics of ecosystem models and of the traditional exposure models used to estimate the radionuclide exposure of biota. The evaluation of ecosystem models already applied in safety assessments has shown that the ecosystem approach can be used to assess exposure to biota, and that it can handle many of the identified modelling problems related to BCF models. The findings in this paper suggest that both national and international assessment frameworks for protection of the environment from ionising radiation would benefit from striving to adopt methodologies based on ecologically sound principles and
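The relationship between a BCF-based description and a simple mechanistic alternative can be made concrete: a first-order uptake/depuration model reduces to the BCF description at steady state, with BCF = k_u / k_e, while also resolving the transient that a pure BCF model cannot. The sketch below is a generic textbook-style illustration, not a model used by SKB; all rate constants are hypothetical.

```python
import math

def biota_concentration(c_water, k_uptake, k_elim, t, c0=0.0):
    """First-order uptake/depuration: dC_b/dt = k_u*C_w - k_e*C_b.

    For constant water concentration the analytic solution tends to the
    steady state C_b = (k_u/k_e)*C_w, i.e. the BCF result with BCF = k_u/k_e."""
    c_ss = (k_uptake / k_elim) * c_water          # steady state = BCF * C_w
    return c_ss + (c0 - c_ss) * math.exp(-k_elim * t)
```

A BCF model answers only the long-time limit of this equation; the kinetic form additionally tells you how long the biota takes to approach it, which is one reason mechanistic ecosystem models can handle situations BCF models cannot.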

  11. Evaluation of the functional results after rotator cuff arthroscopic repair with the suture bridge technique

    Directory of Open Access Journals (Sweden)

    Alberto Naoki Miyazaki

    Full Text Available ABSTRACT OBJECTIVE: To evaluate the results of arthroscopic treatment of large and extensive rotator cuff injuries (RCI) that involved the supraspinatus and infraspinatus muscles using the suture bridge (SB) technique. METHODS: Between July 2010 and November 2014, 37 patients with RCI who were treated with the SB technique were evaluated. The study included all patients with a minimum follow-up of 12 months who underwent primary surgery of the shoulder. Twenty-four patients were male and 13 were female. The mean age was 60 years (45-75). The dominant side was affected in 32 cases. The most common cause of injury was trauma (18 cases). The mean preoperative range of motion was 123° of forward elevation, 58° of lateral rotation, and medial rotation at T11. Through magnetic resonance imaging, fatty degeneration was classified according to Goutallier in 36 cases. Patients underwent rotator cuff repair with the SB technique, which consists of using a medial row anchor with two Corkscrew® FiberTape® or FiberWire® at the articular margin, associated with lateral fixation without knots using PushLocks® or SwiveLocks®. RESULTS: The mean age was 60 years and the mean fatty degeneration was 2.6. The mean range of motion (following the AAOS) in the postoperative evaluation was 148° of forward elevation, 55° of lateral rotation, and medial rotation at T9. Using the criteria of the University of California at Los Angeles (UCLA), 35 patients (94%) had excellent and good results; one (2.7%), fair; and one (2.7%), poor. CONCLUSION: Arthroscopic repair of large and extensive RCI using the SB technique had good and excellent results in 94% of the patients.

  12. Transtemporal amygdalohippocampectomy: a novel minimally-invasive technique with optimal clinical results and low cost

    Directory of Open Access Journals (Sweden)

    Juan Antonio Castro Flores

    Full Text Available ABSTRACT Mesial temporal sclerosis creates a focal epileptic syndrome that usually requires surgical resection of mesial temporal structures. Objective: To describe a novel operative technique for treatment of temporal lobe epilepsy and its clinical results. Methods: Prospective case-series at a single institution, performed by a single surgeon, from 2006 to 2012. A total of 120 patients were submitted to minimally-invasive keyhole transtemporal amygdalohippocampectomy. Results: Of the patients, 55% were male, and 85% had a right-sided disease. The first 70 surgeries had a mean surgical time of 2.51 hours, and the last 50 surgeries had a mean surgical time of 1.62 hours. There was 3.3% morbidity, and 5% mild temporal muscle atrophy. There was no visual field impairment. On the Engel Outcome Scale at the two-year follow-up, 71% of the patients were Class I, 21% were Class II, and 6% were Class III. Conclusion: This novel technique is feasible and reproducible, with optimal clinical results.

  13. Properties of hypothesis testing techniques and (Bayesian) model selection for exploration-based and theory-based (order-restricted) hypotheses.

    Science.gov (United States)

    Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene

    2015-05-01

    In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.
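One of the findings above, that reasonable order-restricted hypotheses have more power than unrestricted evaluation, can be reproduced in miniature with a Monte Carlo comparison of a one-sided (order-restricted H1: mu2 > mu1) versus a two-sided z-test on unit-variance normal data. All numbers are illustrative and the sketch is far simpler than the paper's six-technique simulation design.

```python
import math
import random

def power_sim(delta, n, reps=2000, seed=1):
    """Monte Carlo power of a one-sided vs. two-sided two-sample z-test,
    known unit variance, true mean difference = delta (illustrative sketch)."""
    rng = random.Random(seed)
    z_one, z_two = 1.6449, 1.9600   # 5% critical values, one- and two-sided
    hit_one = hit_two = 0
    for _ in range(reps):
        m1 = sum(rng.gauss(0.0, 1.0) for _ in range(n)) / n
        m2 = sum(rng.gauss(delta, 1.0) for _ in range(n)) / n
        z = (m2 - m1) / math.sqrt(2.0 / n)
        if z > z_one:        # order-restricted: only mu2 > mu1 counts
            hit_one += 1
        if abs(z) > z_two:   # unrestricted two-sided alternative
            hit_two += 1
    return hit_one / reps, hit_two / reps
```

When the true effect agrees with the hypothesized ordering, the one-sided test detects it more often, which is the intuition behind the paper's confirmation-versus-exploration result.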

  14. Optimization models and techniques for implementation and pricing of electricity markets

    International Nuclear Information System (INIS)

    Madrigal Martinez, M.

    2001-01-01

    The operation and planning of vertically integrated electric power systems can be optimized using models that simulate solutions to problems. As the electric power industry is going through a period of restructuring, there is a need for new optimization tools. This thesis describes the importance of optimization tools and presents techniques for implementing them. It also presents methods for pricing primary electricity markets. Three modeling groups are studied. The first considers a simplified continuous and discrete model for power pool auctions. The second considers the unit commitment problem, and the third makes use of a new type of linear network-constrained clearing system model for daily markets for power and spinning reserve. The newly proposed model considers bids for supply and demand and bilateral contracts. It is a direct current model for the transmission network
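The simplest of the auction models mentioned above can be cleared with a merit-order stack: sort supply bids by price, accept quantity until demand is met, and let the marginal accepted bid set the uniform clearing price. The numbers below are hypothetical, and the thesis models add network constraints, discrete unit commitment and bilateral contracts on top of this basic mechanism.

```python
def clear_market(bids, demand):
    """Merit-order clearing of a single-period pool auction (simplified sketch).

    bids   : list of (price, quantity) supply offers
    demand : total inelastic demand to be served
    Returns the dispatch per offer and the uniform clearing price."""
    dispatch, remaining, price = [], demand, 0.0
    for p, q in sorted(bids):                 # cheapest offers first
        take = min(q, remaining)
        if take > 0:
            price = p                         # marginal accepted offer sets price
        dispatch.append((p, take))
        remaining -= take
    if remaining > 1e-9:
        raise ValueError("insufficient supply to meet demand")
    return dispatch, price
```

In a full network-constrained clearing model this becomes a linear program whose balance-constraint duals give locational prices, but the merit-order special case already shows why the marginal unit sets the price.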

  15. Analytical Modelling of the Effects of Different Gas Turbine Cooling Techniques on Engine Performance

    Science.gov (United States)

    Uysal, Selcuk Can

    In this research, MATLAB Simulink® was used to develop a cooled engine model for industrial gas turbines and aero-engines. The model consists of an uncooled on-design, mean-line turbomachinery design and a cooled off-design analysis in order to evaluate the engine performance parameters using operating conditions, polytropic efficiencies, material information and cooling system details. The cooling analysis algorithm involves a second-law analysis to calculate the losses from the cooling technique applied. The model is used in a sensitivity analysis that evaluates the impacts of variations in metal Biot number, thermal barrier coating Biot number, film cooling effectiveness, internal cooling effectiveness and maximum allowable blade temperature on the main engine performance parameters of aero and industrial gas turbine engines. The model is subsequently used to analyze the relative performance impact of employing Anti-Vortex Film Cooling holes (AVH) by means of data obtained for these holes by Detached Eddy Simulation CFD techniques that are valid for engine-like turbulence intensity conditions. Cooled blade configurations with AVH and other external cooling techniques were used in a performance comparison study. (Abstract shortened by ProQuest.)

  16. Parallel-scanning tomosynthesis using a slot scanning technique: fixed-focus reconstruction and the resulting image quality.

    Science.gov (United States)

    Shibata, Koichi; Notohara, Daisuke; Sakai, Takihito

    2014-11-01

    Parallel-scanning tomosynthesis (PS-TS) is a novel technique that fuses the slot scanning technique with the conventional tomosynthesis (TS) technique. This approach allows one to obtain long-view tomosynthesis images in addition to normally sized tomosynthesis images, even when using a system that has no linear tomographic scanning function. The reconstruction technique and an evaluation of the resulting image quality for PS-TS are described in this paper. The PS-TS image-reconstruction technique consists of several steps: (1) the projection images are divided into strips, (2) the strips are stitched together to construct images corresponding to the reconstruction plane, (3) the stitched images are filtered, and (4) the filtered stitched images are back-projected. In the case of PS-TS using the fixed-focus reconstruction method (PS-TS-F), one set of stitched images is used for the reconstruction planes at all heights, thus avoiding the necessity of repeating steps (1)-(3). A physical evaluation of the image quality of PS-TS-F compared with that of conventional linear TS was performed using an R/F table (Sonialvision safire, Shimadzu Corp., Kyoto, Japan). The tomographic plane with the best theoretical spatial resolution (the in-focus plane, IFP) was set at a height of 100 mm from the table top by adjusting the reconstruction program. First, the spatial frequency response was evaluated at heights of -100, -50, 0, 50, 100, and 150 mm from the IFP using the edge of a 0.3-mm-thick copper plate. Second, the spatial resolution at each height was visually evaluated using an x-ray test pattern (Model No. 38, PTW Freiburg, Germany). Third, the slice sensitivity at each height was evaluated via the wire method using a 0.1-mm-diameter tungsten wire. Phantom studies using a knee phantom and a whole-body phantom were also performed. The spatial frequency response of PS-TS-F yielded the best results at the IFP and degraded slightly as the distance from the IFP increased. A
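Steps (1)-(4) above describe a stitch-filter-backproject pipeline. The sketch below shows only the generic shift-and-add core of tomosynthesis backprojection, with no stitching or filtering: each projection is shifted in proportion to the height of the reconstruction plane and the results are averaged, so structures lying in that plane add coherently. The geometry and numbers are made up, and `np.roll` is a circular-shift stand-in for proper resampling.

```python
import numpy as np

def shift_and_add(projections, shifts_per_mm, plane_height_mm):
    """Unfiltered shift-and-add tomosynthesis sketch.

    projections    : list of 2-D arrays (one per tube position)
    shifts_per_mm  : per-projection lateral shift (pixels) per mm of plane height
    Returns the reconstruction of the plane at the given height."""
    recon = np.zeros_like(projections[0], dtype=float)
    for img, s in zip(projections, shifts_per_mm):
        shift_px = int(round(s * plane_height_mm))
        recon += np.roll(img, shift_px, axis=1)   # align, then accumulate
    return recon / len(projections)
```

A point object reconstructed at its true height adds up in phase across all projections, while at other heights its copies spread out, which is the basic tomosynthesis depth-discrimination effect the slice-sensitivity measurements above quantify.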

  17. Exact and Direct Modeling Technique for Rotor-Bearing Systems with Arbitrary Selected Degrees-of-Freedom

    Directory of Open Access Journals (Sweden)

    Shilin Chen

    1994-01-01

    Full Text Available An exact and direct modeling technique is proposed for modeling of rotor-bearing systems with arbitrary selected degrees-of-freedom. This technique is based on the combination of the transfer and dynamic stiffness matrices. The technique differs from the usual combination methods in that the global dynamic stiffness matrix for the system or the subsystem is obtained directly by rearranging the corresponding global transfer matrix. Therefore, the dimension of the global dynamic stiffness matrix is independent of the number of the elements or the substructures. In order to show the simplicity and efficiency of the method, two numerical examples are given.
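The rearrangement from a transfer matrix to a dynamic stiffness matrix can be shown for the simplest case, a single field element with scalar state [x, f]: given [x_R, f_R]^T = T [x_L, f_L]^T, solving for the end forces in terms of the end displacements yields a nodal stiffness matrix. Sign conventions vary between references; here the left-end nodal force is taken as the negative of the internal section force, which is an assumption of this sketch rather than the paper's notation. A massless spring serves as a check.

```python
def transfer_to_dynamic_stiffness(t11, t12, t21, t22):
    """Rearrange a 2x2 transfer matrix [x_R, f_R]^T = T [x_L, f_L]^T into a
    nodal dynamic stiffness matrix [F_L, F_R]^T = K [x_L, x_R]^T, where the
    left nodal force is F_L = -f_L (external-force convention; conventions
    differ between texts). Requires t12 != 0."""
    inv = 1.0 / t12
    return [[inv * t11, -inv],
            [t21 - t22 * inv * t11, t22 * inv]]
```

For a massless spring of stiffness k (transfer matrix [[1, 1/k], [0, 1]]) this recovers the familiar [[k, -k], [-k, k]] stiffness matrix, illustrating how the global stiffness matrix can be obtained directly from a global transfer matrix as the abstract describes.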

  18. A Robust Computational Technique for Model Order Reduction of Two-Time-Scale Discrete Systems via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Othman M. K. Alsmadi

    2015-01-01

    Full Text Available A robust computational technique for model order reduction (MOR) of multi-time-scale discrete systems (single-input single-output (SISO) and multi-input multi-output (MIMO)) is presented in this paper. This work is motivated by the singular perturbation of multi-time-scale systems, where some specific dynamics may not have a significant influence on the overall system behavior. The new approach is proposed using genetic algorithms (GA), with the advantage of obtaining a reduced-order model that maintains the exact dominant dynamics in the reduced order and minimizes the steady-state error. The reduction process is performed by obtaining an upper-triangular transformed matrix of the system state matrix defined in state-space representation, along with the elements of the B, C, and D matrices. The GA computational procedure is based on maximizing a fitness function corresponding to the response deviation between the full- and reduced-order models. The proposed computational intelligence MOR method is compared to recently published work on MOR techniques, and simulation results show the potential and advantages of the new approach.
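A minimal version of the GA idea, with made-up plant numbers and a much simpler encoding than the paper's triangular-transformation approach: evolve the two parameters (a, b) of a first-order reduced model so that its step response tracks a two-time-scale full model, using a fitness that penalizes response deviation.

```python
import random

def full_response(n=50):
    # Hypothetical two-time-scale full model: slow pole 0.9, fast pole 0.1
    x_slow = x_fast = 0.0
    out = []
    for _ in range(n):
        x_slow = 0.9 * x_slow + 0.1
        x_fast = 0.1 * x_fast + 0.05
        out.append(x_slow + x_fast)
    return out

def step_response(a, b, n=50):
    # First-order reduced model y[k+1] = a*y[k] + b*u[k], unit step input
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + b
        out.append(x)
    return out

def ga_reduce(generations=60, pop_size=30, seed=2):
    """Toy GA: elitism plus averaging crossover and Gaussian mutation,
    maximizing fitness = 1 / (1 + step-response deviation)."""
    rng = random.Random(seed)
    target = full_response()

    def fitness(ind):
        err = sum((y - t) ** 2 for y, t in zip(step_response(*ind), target))
        return 1.0 / (1.0 + err)

    pop = [(rng.uniform(0, 1), rng.uniform(0, 1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 3]
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            children.append(((p1[0] + p2[0]) / 2 + rng.gauss(0, 0.02),
                             (p1[1] + p2[1]) / 2 + rng.gauss(0, 0.02)))
        pop = elite + children
    return max(pop, key=fitness)
```

The evolved reduced model should recover the dominant (slow) pole near 0.9 and approximately match the full model's steady state, mirroring the "retain dominant dynamics, minimize steady-state error" goals of the paper.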

  19. Validation Techniques of network harmonic models based on switching of a series linear component and measuring resultant harmonic increments

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

    In this paper two methods for the validation of transmission network harmonic models are introduced. The methods were developed as a result of the work presented in [1]. The first method allows calculating the transfer harmonic impedance between two nodes of a network: a linear, series network element, for example a transmission line, is switched, and the resulting harmonic increments are used for calculation of the transfer harmonic impedance between the nodes. The determined transfer harmonic impedance can be used to validate a computer model of the network. The second method is an extension of the first one: it allows switching a series element that contains a shunt branch. Both methods require that the harmonic measurements performed at the two ends of the disconnected element are precisely synchronized.

  20. Fiscal 1997 report of the verification research on geothermal prospecting technology. Theme 5-2. Development of a reservoir change prospecting method (reservoir change prediction technique (modeling support technique)); 1997 nendo chinetsu tansa gijutsu nado kensho chosa. 5-2. Choryuso hendo tansaho kaihatsu (choryuso hendo yosoku gijutsu (modeling shien gijutsu)) hokokusho

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-03-01

    To evaluate geothermal reservoirs in the initial stage of development, to keep stable output in service operation, and to develop a technology effective for extraction from peripheral reservoirs, study was made on a reservoir variation prediction technique, in particular, a modeling support technique. This paper describes the result in fiscal 1997. Underground temperature estimation technique using homogenization temperatures of fluid inclusions among core fault system measurement systems was applied to Wasabizawa field. The effect of stretching is important to estimate reservoir temperatures, and use of a minimum homogenization temperature of fluid inclusions in quartz was suitable. Even in the case of no quartz in hydrothermal veins, measured data of quartz (secondary fluid inclusion) in parent rocks adjacent to hydrothermal veins well agreed with measured temperature data. The developmental possibility of a new modeling support technique was confirmed enough through collection of documents and information. Based on the result, measurement equipment suitable for R and D was selected, and a measurement system was established through preliminary experiments. 39 refs., 35 figs., 6 tabs.

  1. Stakeholder approach, Stakeholders mental model: A visualization test with cognitive mapping technique

    Directory of Open Access Journals (Sweden)

    Garoui Nassreddine

    2012-04-01

    Full Text Available The idea of this paper is to determine the mental models of actors in the firm with respect to the stakeholder approach to corporate governance. Cognitive maps are used to visualize these mental models and to show the ways of thinking about, and conceptualizing, the stakeholder approach. The paper takes a corporate governance perspective and discusses the stakeholder model, applying a cognitive mapping technique.

  2. Labia Majora Augmentation with Hyaluronic Acid Filler: Technique and Results.

    Science.gov (United States)

    Fasola, Elena; Gazzola, Riccardo

    2016-11-01

    External female genitalia lose elasticity and volume with age. In the literature several techniques address the redundancy of the labia minora, but only a few reports describe augmentation of the labia majora with fat grafting. At present, no studies describe augmentation of the labia majora with hyaluronic acid. This study aims to present our technique of infiltration of hyaluronic acid filler, analyzing effectiveness, patient satisfaction, and complications. We retrospectively analyzed 54 patients affected by hypotrophy of the labia majora; they were treated with hyaluronic acid filler between November 2010 and December 2014. The Global Aesthetic Improvement Scale (GAIS), filled out by the doctor and the patients, was used to evaluate the results 12 months after the infiltration. Complications were recorded. A total of 31 patients affected by mild to moderate labia majora hypotrophy were treated with 19 mg/mL HA filler; 23 patients affected by severe labia majora hypotrophy were treated with 21 mg/mL HA filler. Among the first group of patients, one underwent a second infiltration 6 months later with 19 mg/mL HA filler (maximum 1 mL). A significant improvement was observed: hyaluronic acid filler infiltration of the labia majora is able to provide a significant rejuvenation with a simple outpatient procedure. We achieved significant improvements with one infiltration in all cases. The treatment is repeatable, has virtually no complications, and is reversible. Level of Evidence: 4 (Therapeutic). © 2016 The American Society for Aesthetic Plastic Surgery, Inc.

  3. Nonlinear modelling of polymer electrolyte membrane fuel cell stack using nonlinear cancellation technique

    Energy Technology Data Exchange (ETDEWEB)

    Barus, R. P. P., E-mail: rismawan.ppb@gmail.com [Engineering Physics, Faculty of Industrial Technology, Institut Teknologi Bandung, Jalan Ganesa 10 Bandung and Centre for Material and Technical Product, Jalan Sangkuriang No. 14 Bandung (Indonesia); Tjokronegoro, H. A.; Leksono, E. [Engineering Physics, Faculty of Industrial Technology, Institut Teknologi Bandung, Jalan Ganesa 10 Bandung (Indonesia); Ismunandar [Chemistry Study, Faculty of Mathematics and Science, Institut Teknologi Bandung, Jalan Ganesa 10 Bandung (Indonesia)

    2014-09-25

    Fuel cells are promising new energy conversion devices that are friendly to the environment. A set of control systems is required in order to operate a fuel cell based power plant system optimally. For the purpose of control system design, an accurate fuel cell stack model describing the dynamics of the real system is needed. Currently, linear models are widely used for fuel cell stack control purposes, but they are valid only over a narrow operation range, while nonlinear models lead to nonlinear control implementations, which are more complex and computationally demanding. In this research, a nonlinear cancellation technique is used to transform a nonlinear model into a linear form while maintaining the nonlinear characteristics. The transformation is done by replacing the input of the original model by a certain virtual input that has a nonlinear relationship with the original input. The equality of the two models is then tested by running a series of simulations. Input variations of H2, O2 and H2O as well as the disturbance input I (current load) are studied by simulation. The error between the proposed model and the original nonlinear model is less than 1%. Thus we can conclude that the nonlinear cancellation technique can be used to represent a fuel cell nonlinear model in a simple linear form while maintaining the nonlinear characteristics and therefore retaining the wide operation range.
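The transformation described above, replacing the physical input by a virtual input that has a nonlinear relationship with it, can be shown on a one-state toy plant (not the fuel-cell stack model): choosing the input to cancel the nonlinearity makes the state follow a pure linear integrator exactly.

```python
def nonlinear_plant(x, u, a=1.0, b=2.0):
    """Toy nonlinear plant dx/dt = -a*x**2 + b*u (illustrative, not the
    fuel-cell model; a and b are made-up constants)."""
    return -a * x * x + b * u

def cancelling_input(x, v, a=1.0, b=2.0):
    """Virtual input v: choosing u = (v + a*x**2)/b cancels the nonlinearity,
    so the plant dynamics reduce to the linear system dx/dt = v."""
    return (v + a * x * x) / b

# Euler simulation: with the cancelling input, the nonlinear plant's state
# tracks the linear model dx/dt = v step for step
x, x_lin, dt, v = 0.5, 0.5, 1e-3, 0.2
for _ in range(1000):
    u = cancelling_input(x, v)          # nonlinear map from virtual to real input
    x += nonlinear_plant(x, u) * dt     # nonlinear plant driven by u
    x_lin += v * dt                     # equivalent linear model driven by v
```

The linear form is exact here because the cancellation is algebraic; in the paper the equivalence is instead verified by simulation, with a reported error below 1%.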

  4. Nonlinear modelling of polymer electrolyte membrane fuel cell stack using nonlinear cancellation technique

    International Nuclear Information System (INIS)

    Barus, R. P. P.; Tjokronegoro, H. A.; Leksono, E.; Ismunandar

    2014-01-01

    Fuel cells are promising new energy conversion devices that are friendly to the environment. A set of control systems is required in order to operate a fuel cell based power plant system optimally. For the purpose of control system design, an accurate fuel cell stack model describing the dynamics of the real system is needed. Currently, linear models are widely used for fuel cell stack control purposes, but they are valid only over a narrow operation range, while nonlinear models lead to nonlinear control implementations, which are more complex and computationally demanding. In this research, a nonlinear cancellation technique is used to transform a nonlinear model into a linear form while maintaining the nonlinear characteristics. The transformation is done by replacing the input of the original model by a certain virtual input that has a nonlinear relationship with the original input. The equality of the two models is then tested by running a series of simulations. Input variations of H2, O2 and H2O as well as the disturbance input I (current load) are studied by simulation. The error between the proposed model and the original nonlinear model is less than 1%. Thus we can conclude that the nonlinear cancellation technique can be used to represent a fuel cell nonlinear model in a simple linear form while maintaining the nonlinear characteristics and therefore retaining the wide operation range.

  5. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and allows the flight mission to be achieved safely. One of the sensor configurations used for UAV state estimation is the Attitude Heading and Reference System (AHRS), with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques for estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
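A single-axis toy version of the EKF approach mentioned above can make the predict/update cycle concrete. The state is reduced to one angle, so the Jacobians collapse to scalars; a real AHRS filter estimates quaternions and gyro biases, and all noise values here are made up.

```python
class SimpleAttitudeEKF:
    """Minimal single-axis attitude estimator in Kalman-filter form
    (illustrative sketch only, not a full AHRS implementation).

    State: angle theta. Predict: gyro rate integration (F = H = 1, so the
    EKF reduces to a scalar Kalman filter). Update: direct angle measurement,
    e.g. from the accelerometer's gravity direction."""

    def __init__(self, q=1e-4, r=1e-2):
        self.theta, self.P = 0.0, 1.0
        self.q, self.r = q, r            # process / measurement noise variances

    def predict(self, gyro_rate, dt):
        self.theta += gyro_rate * dt     # propagate state with the gyro
        self.P += self.q                 # P <- F*P*F' + Q with F = 1

    def update(self, meas_theta):
        K = self.P / (self.P + self.r)   # Kalman gain with H = 1
        self.theta += K * (meas_theta - self.theta)
        self.P *= (1.0 - K)
```

Driven by a stationary gyro and consistent angle measurements, the estimate converges to the measured attitude while the covariance shrinks, which is the feedback behaviour the abstract's AHRS configuration relies on.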

  6. Comparison of a new expert elicitation model with the Classical Model, equal weights and single experts, using a cross-validation technique

    Energy Technology Data Exchange (ETDEWEB)

    Flandoli, F. [Dip.to di Matematica Applicata, Universita di Pisa, Pisa (Italy); Giorgi, E. [Dip.to di Matematica Applicata, Universita di Pisa, Pisa (Italy); Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Pisa, via della Faggiola 32, 56126 Pisa (Italy); Aspinall, W.P. [Dept. of Earth Sciences, University of Bristol, and Aspinall and Associates, Tisbury (United Kingdom); Neri, A., E-mail: neri@pi.ingv.it [Istituto Nazionale di Geofisica e Vulcanologia, Sezione di Pisa, via della Faggiola 32, 56126 Pisa (Italy)

    2011-10-15

    The problem of ranking and weighting experts' performances when quantitative judgments are being elicited for decision support is considered. A new scoring model, the Expected Relative Frequency model, is presented, based on the closeness between central values provided by the expert and known values used for calibration. Using responses from experts in five different elicitation datasets, a cross-validation technique is used to compare this new approach with the Cooke Classical Model, the Equal Weights model, and individual experts. The analysis is performed using alternative reward schemes designed to capture proficiency either in quantifying uncertainty, or in estimating true central values. Results show that although there is only a limited probability that one approach is consistently better than another, the Cooke Classical Model is generally the most suitable for assessing uncertainties, whereas the new ERF model should be preferred if the goal is central value estimation accuracy. - Highlights: > A new expert elicitation model, named Expected Relative Frequency (ERF), is presented. > A cross-validation approach to evaluate the performance of different elicitation models is applied. > The new ERF model shows the best performance with respect to the point-wise estimates.

  7. GRAVTool, Advances on the Package to Compute Geoid Model path by the Remove-Compute-Restore Technique, Following Helmert's Condensation Method

    Science.gov (United States)

    Marotta, G. S.

    2017-12-01

    Currently, there are several methods to determine geoid models. They can be based on terrestrial gravity data, geopotential coefficients, astrogeodetic data or a combination of them. Among the techniques to compute a precise geoid model, the Remove-Compute-Restore (RCR) technique has been widely applied. It considers short, medium and long wavelengths derived from altitude data provided by Digital Terrain Models (DTM), terrestrial gravity data and Global Geopotential Models (GGM), respectively. In order to apply this technique, it is necessary to create procedures that compute gravity anomalies and geoid models by the integration of different wavelengths, and that adjust these models to a local vertical datum. This research presents advances on the package called GRAVTool for computing geoid models by the RCR technique, following Helmert's condensation method, and its application in a study area. The study area comprises the Federal District of Brazil, with 6000 km² of wavy relief and heights varying from 600 m to 1340 m, located between the coordinates 48.25ºW, 15.45ºS and 47.33ºW, 16.06ºS. The numerical results show a geoid model computed by the GRAVTool package, after analysis of the density, DTM and GGM values most adequate to the reference values of the study area. The accuracy of the computed model (σ = ±0.058 m, RMS = 0.067 m, maximum = 0.124 m and minimum = -0.155 m), using a density value of 2.702 ± 0.024 g/cm³, the DTM SRTM Void Filled 3 arc-second and the GGM EIGEN-6C4 up to degree and order 250, matches the uncertainty (σ = ±0.073 m) of 26 randomly spaced points where the geoid was determined by geometric levelling supported by GNSS positioning. The results were also better than those achieved by the official Brazilian regional geoid model (σ = ±0.076 m, RMS = 0.098 m, maximum = 0.320 m and minimum = -0.061 m).

  8. Using an inverse modelling approach to evaluate the water retention in a simple water harvesting technique

    Directory of Open Access Journals (Sweden)

    K. Verbist

    2009-10-01

    Full Text Available In arid and semi-arid zones, runoff harvesting techniques are often applied to increase water retention and infiltration on steep slopes. Additionally, they act as an erosion control measure to reduce land degradation hazards. Nevertheless, few efforts have been made to quantify the water harvesting processes of these techniques and to evaluate their efficiency. In this study, a combination of detailed field measurements and modelling with the HYDRUS-2D software package was used to visualize the effect of an infiltration trench on the soil water content of a bare slope in northern Chile. Rainfall simulations were combined with high spatial and temporal resolution water content monitoring in order to construct a useful dataset for inverse modelling purposes. Initial estimates of model parameters were provided by detailed infiltration and soil water retention measurements. Four different measurement techniques were used to determine the saturated hydraulic conductivity (Ksat) independently. The tension infiltrometer measurements proved to be a good estimator of the Ksat value and a proxy for those measured under simulated rainfall, whereas the pressure and constant-head well infiltrometer measurements showed larger variability. Six different parameter optimization functions were tested as combinations of soil water content, water retention and cumulative infiltration data. Infiltration data alone proved insufficient to obtain high model accuracy, due to large scatter in the data set, and water content data were needed to obtain optimized effective parameter sets with small confidence intervals. Correlation between the observed soil water content and the simulated values was as high as R² = 0.93 for the ten observation points selected for the model calibration phase, with the overall correlation for all 22 observation points equal to 0.85. The model results indicate that the infiltration trench has a

  9. A forward model and conjugate gradient inversion technique for low-frequency ultrasonic imaging.

    Science.gov (United States)

    van Dongen, Koen W A; Wright, William M D

    2006-10-01

    Emerging methods of hyperthermia cancer treatment require noninvasive temperature monitoring, and ultrasonic techniques show promise in this regard. Various tomographic algorithms are available that reconstruct sound speed or contrast profiles, which can be related to temperature distribution. The requirement of a high enough frequency for adequate spatial resolution and a low enough frequency for adequate tissue penetration is a difficult compromise. In this study, the feasibility of using low frequency ultrasound for imaging and temperature monitoring was investigated. The transient probing wave field had a bandwidth spanning the frequency range 2.5-320.5 kHz. The results from a forward model which computed the propagation and scattering of low-frequency acoustic pressure and velocity wave fields were used to compare three imaging methods formulated within the Born approximation, representing two main types of reconstruction. The first uses Fourier techniques to reconstruct sound-speed profiles from projection or Radon data based on optical ray theory, seen as an asymptotical limit for comparison. The second uses backpropagation and conjugate gradient inversion methods based on acoustical wave theory. The results show that the accuracy in localization was 2.5 mm or better when using low frequencies and the conjugate gradient inversion scheme, which could be used for temperature monitoring.
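
    At its core, the conjugate gradient inversion step amounts to iteratively solving a symmetric positive-definite linear system; a minimal sketch on a small random system (not the acoustic forward model itself):

```python
import numpy as np

# Minimal conjugate-gradient solver for A x = b with symmetric positive-
# definite A -- the core iteration of the inversion scheme, demonstrated on
# a small random system rather than the acoustic forward model.
def conjugate_gradient(A, b, tol=1e-12, max_iter=100):
    x = np.zeros_like(b)
    r = b - A @ x              # residual
    p = r.copy()               # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)  # optimal step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5.0 * np.eye(5)  # SPD by construction
b = rng.standard_normal(5)
x = conjugate_gradient(A, b)
residual = float(np.linalg.norm(A @ x - b))
```

    In the imaging context, A would be the (normal-equation form of the) Born-approximation forward operator and b the measured scattered field data.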

  10. A Search Technique for Weak and Long-Duration Gamma-Ray Bursts from Background Model Residuals

    Science.gov (United States)

    Skelton, R. T.; Mahoney, W. A.

    1993-01-01

    We report on a planned search technique for Gamma-Ray Bursts too weak to trigger the on-board threshold. The technique is to search residuals from a physically based background model used for analysis of point sources by the Earth occultation method.

  11. Synchrotron-Based Microspectroscopic Analysis of Molecular and Biopolymer Structures Using Multivariate Techniques and Advanced Multi-Components Modeling

    International Nuclear Information System (INIS)

    Yu, P.

    2008-01-01

    Recently, the advanced synchrotron radiation-based bioanalytical technique SRFTIRM has been applied as a novel non-invasive analysis tool to study molecular, functional group and biopolymer chemistry, nutrient make-up and structural conformation in biomaterials. This novel synchrotron technique, taking advantage of bright synchrotron light (which is a million times brighter than sunlight), is capable of exploring biomaterials at the molecular and cellular levels. However, with the SRFTIRM technique, a large number of molecular spectral data are usually collected. The objective of this article was to illustrate how to use two multivariate statistical techniques, (1) agglomerative hierarchical cluster analysis (AHCA) and (2) principal component analysis (PCA), and two advanced multi-component modeling methods, (1) Gaussian and (2) Lorentzian multi-component peak modeling, for molecular spectrum analysis of bio-tissues. The studies indicated that the two multivariate analyses (AHCA, PCA) are able to create molecular spectral classifications by utilizing not just one intensity or frequency point of a molecular spectrum but the entire spectral information. Gaussian and Lorentzian modeling techniques are able to quantify spectral component peaks of molecular structure, functional groups and biopolymers. By application of these four statistical and modeling methods, inherent molecular structures, functional group make-up and biopolymer conformation between and among biological samples can be quantified, discriminated and classified with great efficiency.
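
    A minimal sketch of multi-component peak modeling, assuming the band centres and widths are known so the component amplitudes can be recovered by linear least squares (a full Gaussian/Lorentzian fit would optimise centres and widths too; all axis and band parameters are illustrative):

```python
import numpy as np

# Two-component spectral peak model: one Gaussian and one Lorentzian band.
# The synthetic spectrum is a known mixture; linear least squares recovers
# the component amplitudes exactly in this noiseless case.
def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def lorentzian(x, x0, gamma):
    return gamma ** 2 / ((x - x0) ** 2 + gamma ** 2)

x = np.linspace(1000.0, 1800.0, 400)           # wavenumber axis, cm^-1
design = np.column_stack([gaussian(x, 1650.0, 25.0),
                          lorentzian(x, 1550.0, 30.0)])
true_amps = np.array([2.0, 0.8])
spectrum = design @ true_amps                  # noiseless synthetic spectrum

fitted, *_ = np.linalg.lstsq(design, spectrum, rcond=None)
```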

  12. Clinical Results After Prostatic Artery Embolization Using the PErFecTED Technique: A Single-Center Study

    International Nuclear Information System (INIS)

    Amouyal, Gregory; Thiounn, Nicolas; Pellerin, Olivier; Yen-Ting, Lin; Giudice, Costantino Del; Dean, Carole; Pereira, Helena; Chatellier, Gilles; Sapoval, Marc

    2016-01-01

    Background: Prostatic artery embolization (PAE) has been performed for a few years, but there is no report on PAE using the PErFecTED technique outside the team that initiated this approach. Objective: This single-center retrospective open-label study reports our experience and clinical results in patients suffering from symptomatic BPH who underwent PAE aiming to use the PErFecTED technique. Materials and Methods: We treated 32 consecutive patients, mean age 65 (52–84 years old), between December 2013 and January 2015. Patients were referred for PAE after failure of medical treatment and refusal of, or contra-indication to, surgery. They were treated using the PErFecTED technique, when feasible, with 300–500 µm calibrated microspheres (two-night hospital stay or outpatient procedure). Follow-up was performed at 3, 6, and 12 months. Results: We had a 100 % immediate technical success of embolization (68 % feasibility of the PErFecTED technique) with no immediate complications. After a mean follow-up of 7.7 months, we observed a 78 % rate of clinical success. Mean IPSS decreased from 15.3 to 4.2 (p = .03), mean QoL from 5.4 to 2 (p = .03), mean Qmax increased from 9.2 to 19.2 (p = .25), and mean prostatic volume decreased from 91 to 62 mL (p = .009). There was no retrograde ejaculation and no major complication. Conclusion: PAE using the PErFecTED technique is a safe and efficient technique to treat bothersome LUTS related to BPH. It is of interest to note that the PErFecTED technique cannot be performed in some cases for anatomical reasons.

  13. Clinical Results After Prostatic Artery Embolization Using the PErFecTED Technique: A Single-Center Study

    Energy Technology Data Exchange (ETDEWEB)

    Amouyal, Gregory, E-mail: gregamouyal@hotmail.com; Thiounn, Nicolas, E-mail: nicolas.thiounn@aphp.fr; Pellerin, Olivier, E-mail: olivier.pellerin@aphp.fr [Université Paris Descartes - Sorbonne - Paris - Cité, Faculté de Médecine (France); Yen-Ting, Lin, E-mail: ymerically@gmail.com [Assistance Publique - Hôpitaux de Paris, Hôpital Européen Georges Pompidou, Interventional Radiology Department (France); Giudice, Costantino Del, E-mail: costantino.delgiudice@aphp.fr [Université Paris Descartes - Sorbonne - Paris - Cité, Faculté de Médecine (France); Dean, Carole, E-mail: carole.dean@aphp.fr [Assistance Publique - Hôpitaux de Paris, Hôpital Européen Georges Pompidou, Interventional Radiology Department (France); Pereira, Helena, E-mail: helena.pereira@aphp.fr [Assistance Publique - Hôpitaux de Paris, Hôpital Européen Georges Pompidou, Clinical Research Unit (France); Chatellier, Gilles, E-mail: gilles.chatellier@aphp.fr; Sapoval, Marc, E-mail: marc.sapoval2@aphp.fr [Université Paris Descartes - Sorbonne - Paris - Cité, Faculté de Médecine (France)

    2016-03-15

    Background: Prostatic artery embolization (PAE) has been performed for a few years, but there is no report on PAE using the PErFecTED technique outside the team that initiated this approach. Objective: This single-center retrospective open-label study reports our experience and clinical results in patients suffering from symptomatic BPH who underwent PAE aiming to use the PErFecTED technique. Materials and Methods: We treated 32 consecutive patients, mean age 65 (52–84 years old), between December 2013 and January 2015. Patients were referred for PAE after failure of medical treatment and refusal of, or contra-indication to, surgery. They were treated using the PErFecTED technique, when feasible, with 300–500 µm calibrated microspheres (two-night hospital stay or outpatient procedure). Follow-up was performed at 3, 6, and 12 months. Results: We had a 100 % immediate technical success of embolization (68 % feasibility of the PErFecTED technique) with no immediate complications. After a mean follow-up of 7.7 months, we observed a 78 % rate of clinical success. Mean IPSS decreased from 15.3 to 4.2 (p = .03), mean QoL from 5.4 to 2 (p = .03), mean Qmax increased from 9.2 to 19.2 (p = .25), and mean prostatic volume decreased from 91 to 62 mL (p = .009). There was no retrograde ejaculation and no major complication. Conclusion: PAE using the PErFecTED technique is a safe and efficient technique to treat bothersome LUTS related to BPH. It is of interest to note that the PErFecTED technique cannot be performed in some cases for anatomical reasons.

  14. A review on reflective remote sensing and data assimilation techniques for enhanced agroecosystem modeling

    Science.gov (United States)

    Dorigo, W. A.; Zurita-Milla, R.; de Wit, A. J. W.; Brazile, J.; Singh, R.; Schaepman, M. E.

    2007-05-01

    During the last 50 years, the management of agroecosystems has been undergoing major changes to meet the growing demand for food, timber, fibre and fuel. As a result of this intensified use, the ecological status of many agroecosystems has severely deteriorated. Modeling the behavior of agroecosystems is therefore of great help, since it allows the definition of management strategies that maximize (crop) production while minimizing the environmental impacts. Remote sensing can support such modeling by offering information on the spatial and temporal variation of important canopy state variables that would be very difficult to obtain otherwise. In this paper, we present an overview of different methods that can be used to derive biophysical and biochemical canopy state variables from optical remote sensing data in the VNIR-SWIR regions. The overview is based on an extensive literature review in which both statistical-empirical and physically based methods are discussed. Subsequently, the prevailing techniques for assimilating remote sensing data into agroecosystem models are outlined. The increasing complexity of data assimilation methods and of models describing agroecosystem functioning has significantly increased computational demands. For this reason, we include a short section on the potential of parallel processing to deal with the complex and computationally intensive algorithms described in the preceding sections. The studied literature reveals that many valuable techniques have been developed both for the retrieval of canopy state variables from reflective remote sensing data and for assimilating the retrieved variables into agroecosystem models. However, for agroecosystem modeling and remote sensing data assimilation to be commonly employed on a global operational basis, emphasis will have to be put on bridging the mismatch between data availability and accuracy on one hand, and model and user requirements on the other. This could be achieved by

  15. Modelling desertification risk in the north-west of Jordan using geospatial and remote sensing techniques

    Directory of Open Access Journals (Sweden)

    Jawad T. Al-Bakri

    2016-03-01

    Full Text Available Remote sensing, climate, and ground data were used within a geographic information system (GIS to map desertification risk in the north-west of Jordan. The approach was based on modelling wind and water erosion and incorporating the results with a map representing the severity of drought. Water erosion was modelled by the universal soil loss equation, while wind erosion was modelled by a dust emission model. The extent of drought was mapped using the evapotranspiration water stress index (EWSI which incorporated actual and potential evapotranspiration. Output maps were assessed within GIS in terms of spatial patterns and the degree of correlation with soil surficial properties. Results showed that both topography and soil explained 75% of the variation in water erosion, while soil explained 25% of the variation in wind erosion, which was mainly controlled by natural factors of topography and wind. Analysis of the EWSI map showed that drought risk was dominating most of the rainfed areas. The combined effects of soil erosion and drought were reflected on the desertification risk map. The adoption of these geospatial and remote sensing techniques is, therefore, recommended to map desertification risk in Jordan and in similar arid environments.
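
    The water-erosion component rests on the universal soil loss equation, A = R·K·LS·C·P; a short numerical sketch with illustrative factor values (not the study's calibration):

```python
# Universal soil loss equation (USLE): A = R * K * LS * C * P, with
# rainfall erosivity R, soil erodibility K, slope length-steepness LS,
# cover-management C and support-practice P.  All values are illustrative.
R, K, LS, C, P = 300.0, 0.3, 1.8, 0.25, 1.0
A = R * K * LS * C * P   # mean annual soil loss, t/ha/yr
```

    In a GIS implementation each factor is a raster layer and the product is evaluated cell by cell to produce the water-erosion map.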

  16. Korean round-robin result for new international program to assess the reliability of emerging nondestructive techniques

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Kyung Cho; Kim, Jin Gyum; Kang, Sung Sik; Jhung, Myung Jo [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2017-04-15

    The Korea Institute of Nuclear Safety, as a representative organization of Korea, in February 2012 participated in an international Program to Assess the Reliability of Emerging Nondestructive Techniques initiated by the U.S. Nuclear Regulatory Commission. The goal of the Program to Assess the Reliability of Emerging Nondestructive Techniques is to investigate the performance of emerging and prospective novel nondestructive techniques to find flaws in nickel-alloy welds and base materials. In this article, Korean round-robin test results were evaluated with respect to the test blocks and various nondestructive examination techniques. The test blocks were prepared to simulate large-bore dissimilar metal welds, small-bore dissimilar metal welds, and bottom-mounted instrumentation penetration welds in nuclear power plants. Also, lessons learned from the Korean round-robin test were summarized and discussed.

  17. A model for teaching and learning spinal thrust manipulation and its effect on participant confidence in technique performance.

    Science.gov (United States)

    Wise, Christopher H; Schenk, Ronald J; Lattanzi, Jill Black

    2016-07-01

    Despite emerging evidence to support the use of high-velocity thrust manipulation in the management of lumbar spinal conditions, utilization of thrust manipulation among clinicians remains relatively low. One reason for the underutilization of these procedures may be related to disparity in training in the performance of these techniques at the professional and post-professional levels. The purpose of this study was to assess the effect of a new model of active learning on participant confidence in the performance of spinal thrust manipulation, and the implications for its use in the professional and post-professional training of physical therapists. A cohort of 15 DPT students in their final semester of entry-level professional training participated in an active training session emphasizing a sequential partial task practice (SPTP) strategy, in which participants engaged in partial task practice over several repetitions with different partners. Participants' level of confidence in the performance of these techniques was determined through comparison of pre- and post-training-session surveys and a post-session open-ended interview. The increase in scores across all items of the pre- and post-session surveys suggests that this model was effective in changing overall participant perception of the effectiveness and safety of these techniques and in increasing student confidence in their performance. Interviews revealed that participants greatly preferred the SPTP strategy, which enhanced their confidence in technique performance. The results indicate that this new model of psychomotor training may be effective at improving confidence in the performance of spinal thrust manipulation and, subsequently, may be useful for encouraging the future use of these techniques in the care of individuals with impairments of the spine. As such, this method of instruction may be useful for training physical therapists at both the professional and post-professional levels.

  18. Comparison between two meshless methods based on collocation technique for the numerical solution of four-species tumor growth model

    Science.gov (United States)

    Dehghan, Mehdi; Mohammadi, Vahid

    2017-03-01

    As described in [27], the tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations with surface effects treated through diffuse-interface models [27]. Numerical simulations can be used to evaluate this practical model. The present paper investigates the solution of the tumor growth model with meshless techniques. The meshless methods are applied based on the collocation technique and employ multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantage of these choices lies in the natural behavior of meshless approaches: they can easily be applied to solve partial differential equations in high dimensions using arbitrary distributions of points on regular and irregular domains. The paper considers a time-dependent system of partial differential equations that describes a four-species tumor growth model. To handle the time variable, two procedures are used: a semi-implicit finite difference method based on the Crank-Nicolson scheme, and an explicit Runge-Kutta time integration. The first gives a linear system of algebraic equations to be solved at each time step; the second is efficient but conditionally stable. The obtained numerical results are reported to confirm the ability of these techniques to solve the two- and three-dimensional tumor-growth equations.
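
    The MQ-RBF collocation building block can be sketched as scattered-data interpolation in one dimension (node layout, shape parameter and test function below are illustrative assumptions, not the tumor-growth system itself):

```python
import numpy as np

# Multiquadric (MQ) radial basis function collocation in one dimension:
# solve A w = f with A_ij = phi(|x_i - x_j|), then evaluate the
# interpolant anywhere in the domain.
def mq(r, c=0.5):
    return np.sqrt(r ** 2 + c ** 2)   # multiquadric kernel, shape param c

nodes = np.linspace(0.0, 1.0, 11)
f = np.sin(2.0 * np.pi * nodes)       # sampled test function

A = mq(np.abs(nodes[:, None] - nodes[None, :]))  # collocation matrix
w = np.linalg.solve(A, f)                        # expansion weights

def interpolate(x):
    return mq(np.abs(x - nodes)) @ w

err = abs(interpolate(0.37) - np.sin(2.0 * np.pi * 0.37))
```

    Solving a PDE rather than an interpolation problem replaces rows of A by the kernel's derivatives at the collocation points, but the dense-linear-algebra structure is the same.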

  19. Land Cover Mapping Analysis and Urban Growth Modelling Using Remote Sensing Techniques in Greater Cairo Region—Egypt

    Directory of Open Access Journals (Sweden)

    Yasmine Megahed

    2015-09-01

    Full Text Available This study modeled the urban growth in the Greater Cairo Region (GCR, one of the fastest growing mega cities in the world, using remote sensing data and ancillary data. Three land use land cover (LULC maps (1984, 2003 and 2014 were produced from satellite images by using Support Vector Machines (SVM. Then, land cover changes were detected by applying a high level mapping technique that combines binary maps (change/no-change and post classification comparison technique. The spatial and temporal urban growth patterns were analyzed using selected statistical metrics developed in the FRAGSTATS software. Major transitions to urban were modeled to predict the future scenarios for year 2025 using Land Change Modeler (LCM embedded in the IDRISI software. The model results, after validation, indicated that 14% of the vegetation and 4% of the desert in 2014 will be urbanized in 2025. The urban areas within a 5-km buffer around: the Great Pyramids, Islamic Cairo and Al-Baron Palace were calculated, highlighting an intense urbanization especially around the Pyramids; 28% in 2014 up to 40% in 2025. Knowing the current and estimated urbanization situation in GCR will help decision makers to adjust and develop new plans to achieve a sustainable development of urban areas and to protect the historical locations.
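
    Post-classification comparison reduces to cross-tabulating two label maps into a transition matrix; a minimal sketch (the class coding and toy maps are illustrative assumptions):

```python
import numpy as np

# Post-classification comparison: cross-tabulate two classified maps into a
# class-transition matrix (toy class coding: 0 = desert, 1 = vegetation,
# 2 = urban).
def transition_matrix(before, after, n_classes=3):
    m = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(m, (before.ravel(), after.ravel()), 1)  # count i -> j moves
    return m

before = np.array([[0, 0, 1],
                   [1, 2, 2]])
after = np.array([[0, 2, 1],
                  [2, 2, 2]])
m = transition_matrix(before, after)
# m[i, j] counts pixels that moved from class i to class j: the diagonal
# holds unchanged pixels, off-diagonal entries are the detected changes.
```

    Dividing each row of m by its sum gives the per-class transition probabilities that drive Markov-based growth predictions such as those produced by LCM.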

  20. Generalized image contrast enhancement technique based on Heinemann contrast discrimination model

    Science.gov (United States)

    Liu, Hong; Nodine, Calvin F.

    1994-03-01

    This paper presents a generalized image contrast enhancement technique which equalizes perceived brightness based on the Heinemann contrast discrimination model. This is a modified algorithm which presents an improvement over the previous study by Mokrane in its mathematically proven existence of a unique solution and in its easily tunable parameterization. The model uses a log-log representation of contrast luminosity between targets and the surround in a fixed luminosity background setting. The algorithm consists of two nonlinear gray-scale mapping functions which have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of gray scale distribution of the image, and can be uniquely determined once the previous three are given. Tests have been carried out to examine the effectiveness of the algorithm for increasing the overall contrast of images. It can be demonstrated that the generalized algorithm provides better contrast enhancement than histogram equalization. In fact, the histogram equalization technique is a special case of the proposed mapping.
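
    Plain histogram equalization, the baseline the proposed mapping is compared against, can be sketched in a few lines (synthetic low-contrast image; this is not the Heinemann-model mapping itself):

```python
import numpy as np

# Plain histogram equalization on a synthetic low-contrast 8-bit image:
# map each gray level through the normalized cumulative histogram so the
# output levels spread over the full 0-255 range.
def equalize(img):
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[np.nonzero(cdf)][0]   # cdf of darkest present level
    lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255.0)
    return np.clip(lut, 0, 255).astype(np.uint8)[img]

rng = np.random.default_rng(1)
img = rng.integers(100, 140, size=(32, 32), dtype=np.uint8)  # narrow range
out = equalize(img)
```

    The equalized image spans the full dynamic range, while the input occupied fewer than 40 gray levels; the paper's point is that its perceptually motivated mapping is a generalization of this lookup-table scheme.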

  1. Robotic and endoscopic transoral thyroidectomy: feasibility and description of the technique in the cadaveric model.

    Science.gov (United States)

    Kahramangil, Bora; Mohsin, Khuzema; Alzahrani, Hassan; Bu Ali, Daniah; Tausif, Syed; Kang, Sang-Wook; Kandil, Emad; Berber, Eren

    2017-12-01

    Numerous new approaches have been described over the years to improve the cosmetic outcomes of thyroid surgery. Transoral approach is a new technique that aims to achieve superior cosmetic outcomes by concealing the incision in the oral cavity. Transoral thyroidectomy through vestibular approach was performed in two institutions on cadaveric models. Procedure was performed endoscopically in one institution, while the robotic technique was utilized at the other. Transoral thyroidectomy was successfully performed at both institutions with robotic and endoscopic techniques. All vital structures were identified and preserved. Transoral thyroidectomy has been performed in animal and cadaveric models, as well as in some clinical studies. Our initial experience indicates the feasibility of this approach. More clinical studies are required to elucidate its full utility.

  2. New horizontal global solar radiation estimation models for Turkey based on robust coplot supported genetic programming technique

    International Nuclear Information System (INIS)

    Demirhan, Haydar; Kayhan Atilgan, Yasemin

    2015-01-01

    Highlights: • Precise horizontal global solar radiation estimation models are proposed for Turkey. • Genetic programming technique is used to construct the models. • Robust coplot analysis is applied to reduce the impact of outlier observations. • Better estimation and prediction properties are observed for the models. - Abstract: Renewable energy sources have been attracting more and more attention of researchers due to the diminishing and harmful nature of fossil energy sources. Because of the importance of solar energy as a renewable energy source, an accurate determination of significant covariates and their relationships with the amount of global solar radiation reaching the Earth is a critical research problem. There are numerous meteorological and terrestrial covariates that can be used in the analysis of horizontal global solar radiation. Some of these covariates are highly correlated with each other. It is possible to find a large variety of linear or non-linear models to explain the amount of horizontal global solar radiation. However, models that explain the amount of global solar radiation with the smallest set of covariates should be obtained. In this study, use of the robust coplot technique to reduce the number of covariates before going forward with advanced modelling techniques is considered. After reducing the dimensionality of model space, yearly and monthly mean daily horizontal global solar radiation estimation models for Turkey are built by using the genetic programming technique. It is observed that application of robust coplot analysis is helpful for building precise models that explain the amount of global solar radiation with the minimum number of covariates without suffering from outlier observations and the multicollinearity problem. Consequently, over a dataset of Turkey, precise yearly and monthly mean daily global solar radiation estimation models are introduced using the model spaces obtained by robust coplot technique and

  3. VNIR spectral modeling of Mars analogue rocks: first results

    Science.gov (United States)

    Pompilio, L.; Roush, T.; Pedrazzi, G.; Sgavetti, M.

    Knowledge regarding the surface composition of Mars and other bodies of the inner solar system is fundamental to understanding of their origin, evolution, and internal structures. Technological improvements of remote sensors and associated implications for planetary studies have encouraged increased laboratory and field spectroscopy research to model the spectral behavior of terrestrial analogues for planetary surfaces. This approach has proven useful during Martian surface and orbital missions, and petrologic studies of Martian SNC meteorites. Thermal emission data were used to suggest two lithologies occurring on Mars surface: basalt with abundant plagioclase and clinopyroxene and andesite, dominated by plagioclase and volcanic glass [1,2]. Weathered basalt has been suggested as an alternative to the andesite interpretation [3,4]. Orbital VNIR spectral imaging data also suggest the crust is dominantly basaltic, chiefly feldspar and pyroxene [5,6]. A few outcrops of ancient crust have higher concentrations of olivine and low-Ca pyroxene, and have been interpreted as cumulates [6]. Based upon these orbital observations future lander/rover missions can be expected to encounter particulate soils, rocks, and rock outcrops. Approaches to qualitative and quantitative analysis of remotely-acquired spectra have been successfully used to infer the presence and abundance of minerals and to discover compositionally associated spectral trends [7-9]. Both empirical [10] and mathematical [e.g. 11-13] methods have been applied, typically with full compositional knowledge, to chiefly particulate samples and as a result cannot be considered as objective techniques for predicting the compositional information, especially for understanding the spectral behavior of rocks. Extending the compositional modeling efforts to include more rocks and developing objective criteria in the modeling are the next required steps. This is the focus of the present investigation. 
We present results of
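A common quantitative starting point in such work is a linear (areal) mixing model, in which an observed spectrum is treated as an abundance-weighted sum of endmember spectra. The sketch below illustrates the idea with synthetic, made-up endmember curves (not real mineral data) and a plain least-squares inversion:

```python
import numpy as np

# Linear (areal) mixing: an observed reflectance spectrum is modeled as a
# weighted sum of endmember spectra; the weights approximate abundances.
# The endmember spectra here are synthetic placeholders, not mineral data.

def unmix(observed, endmembers):
    """Least-squares abundance estimate, normalized to sum to 1."""
    coeffs, *_ = np.linalg.lstsq(endmembers.T, observed, rcond=None)
    coeffs = np.clip(coeffs, 0.0, None)          # crude non-negativity
    return coeffs / coeffs.sum()

wavelengths = np.linspace(0.4, 2.5, 50)          # microns (VNIR range)
plagioclase = 0.6 + 0.1 * np.sin(wavelengths)    # synthetic endmember
pyroxene = 0.4 + 0.2 * np.cos(wavelengths)       # synthetic endmember
E = np.vstack([plagioclase, pyroxene])

mixed = 0.7 * plagioclase + 0.3 * pyroxene       # synthetic 70/30 mixture
abundances = unmix(mixed, E)
print(abundances)                                 # abundances ~ [0.7, 0.3]
```

In practice non-negativity is enforced by a constrained solver rather than clipping, and intimate (non-linear) mixing requires more elaborate models.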

  4. Models of cognitive behavior in nuclear power plant personnel. A feasibility study: summary of results. Volume 1

    International Nuclear Information System (INIS)

    Woods, D.D.; Roth, E.M.; Hanes, L.F.

    1986-07-01

    This report summarizes the results of a feasibility study to determine if the current state of models of human cognitive activities can serve as the basis for improved techniques for predicting human error in nuclear power plants emergency operations. Based on the answer to this question, two subsequent phases of research are planned. Phase II is to develop a model of cognitive activities, and Phase III is to test the model. The feasibility study included an analysis of the cognitive activities that occur in emergency operations and an assessment of the modeling concepts/tools available to capture these cognitive activities. The results indicated that a symbolic processing (or artificial intelligence) model of cognitive activities in nuclear power plants is both desirable and feasible. This cognitive model can be built upon the computational framework provided by an existing artificial intelligence system for medical problem solving, called Caduceus. The resulting cognitive model will increase the capability to capture the human contribution to risk in probabilistic risk assessment studies. Volume 1 summarizes the major findings and conclusions of the study. Volume 2 provides a complete description of the methods and results, including a synthesis of the cognitive activities that occur during emergency operations, and a literature review on cognitive modeling relevant to nuclear power plants. 19 refs

  5. A novel hybrid model for air quality index forecasting based on two-phase decomposition technique and modified extreme learning machine.

    Science.gov (United States)

    Wang, Deyun; Wei, Shuai; Luo, Hongyuan; Yue, Chenqiang; Grunder, Olivier

    2017-02-15

    The randomness, non-stationarity and irregularity of air quality index (AQI) series make AQI forecasting difficult. To enhance forecast accuracy, a novel hybrid forecasting model combining a two-phase decomposition technique and an extreme learning machine (ELM) optimized by the differential evolution (DE) algorithm is developed for AQI forecasting in this paper. In phase I, complementary ensemble empirical mode decomposition (CEEMD) is utilized to decompose the AQI series into a set of intrinsic mode functions (IMFs) with different frequencies; in phase II, in order to further handle the high-frequency IMFs, which increase the forecast difficulty, variational mode decomposition (VMD) is employed to decompose the high-frequency IMFs into a number of variational modes (VMs). Then, the ELM model optimized by the DE algorithm is applied to forecast all the IMFs and VMs. Finally, the forecast value of each high-frequency IMF is obtained by adding up the forecast results of all corresponding VMs, and the forecast series of the AQI is obtained by aggregating the forecast results of all IMFs. To verify and validate the proposed model, two daily AQI series from July 1, 2014 to June 30, 2016, collected from Beijing and Shanghai in China, are taken as test cases for the empirical study. The experimental results show that the proposed hybrid model based on the two-phase decomposition technique is remarkably superior to all other considered models in forecast accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
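The ELM at the core of such a hybrid can be sketched in a few lines: hidden-layer weights are drawn at random and only the output weights are solved for, in closed form. The sketch below forecasts a synthetic stand-in for one decomposed component; the CEEMD/VMD phases and DE tuning are omitted, and all parameters are illustrative:

```python
import numpy as np

# Minimal extreme learning machine (ELM): random hidden weights, output
# weights obtained by least squares. The series below is a synthetic
# stand-in for a single IMF, not real AQI data.

rng = np.random.default_rng(0)

def elm_fit(X, y, hidden=20):
    W = rng.normal(size=(X.shape[1], hidden))      # random input weights
    b = rng.normal(size=hidden)                    # random biases
    H = np.tanh(X @ W + b)                         # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # closed-form output weights
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

t = np.arange(200)
series = np.sin(0.1 * t) + 0.1 * rng.normal(size=200)      # stand-in signal
lags = 3
X = np.column_stack([series[i:len(series) - lags + i] for i in range(lags)])
y = series[lags:]                                          # one-step-ahead target

model = elm_fit(X[:150], y[:150])
pred = elm_predict(X[150:], model)
rmse = np.sqrt(np.mean((pred - y[150:]) ** 2))
print(round(rmse, 3))
```

In the paper's scheme, DE would additionally search over the random-weight configuration, and one such model would be trained per IMF/VM before aggregation.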

  6. Modeling and Simulation of Voids in Composite Tape Winding Process Based on Domain Superposition Technique

    Science.gov (United States)

    Deng, Bo; Shi, Yaoyao

    2017-11-01

    Tape winding technology is an effective way to fabricate rotationally symmetric composite products. Nevertheless, some inevitable defects seriously influence the performance of winding products. One of the crucial ways to assess the quality of fiber-reinforced composite products is to examine their void content, and significant improvement in the products' mechanical properties can be achieved by minimizing void defects. Two methods were applied in this study, finite element analysis and experimental testing, to investigate the mechanism of void formation in the composite tape winding process. Based on the theories of interlayer intimate contact and the Domain Superposition Technique (DST), a three-dimensional model of prepreg tape voids was built in SolidWorks. Thereafter, the ABAQUS simulation software was used to simulate how the void content changes with pressure and temperature. Finally, a series of experiments was performed to determine the accuracy of the model-based predictions. The results showed that the model is effective for predicting the void content in the composite tape winding process.

  7. Data-driven remaining useful life prognosis techniques stochastic models, methods and applications

    CERN Document Server

    Si, Xiao-Sheng; Hu, Chang-Hua

    2017-01-01

    This book introduces data-driven remaining useful life prognosis techniques, and shows how to utilize the condition monitoring data to predict the remaining useful life of stochastic degrading systems and to schedule maintenance and logistics plans. It is also the first book that describes the basic data-driven remaining useful life prognosis theory systematically and in detail. The emphasis of the book is on the stochastic models, methods and applications employed in remaining useful life prognosis. It includes a wealth of degradation monitoring experiment data, practical prognosis methods for remaining useful life in various cases, and a series of applications incorporated into prognostic information in decision-making, such as maintenance-related decisions and ordering spare parts. It also highlights the latest advances in data-driven remaining useful life prognosis techniques, especially in the contexts of adaptive prognosis for linear stochastic degrading systems, nonlinear degradation modeling based pro...

  8. Uranium exploration techniques

    International Nuclear Information System (INIS)

    Nichols, C.E.

    1984-01-01

    The subject is discussed under the headings: introduction (genetic description of some uranium deposits; typical concentrations of uranium in the natural environment); sedimentary host rocks (sandstones; tabular deposits; roll-front deposits; black shales); metamorphic host rocks (exploration techniques); geologic techniques (alteration features in sandstones; favourable features in metamorphic rocks); geophysical techniques (radiometric surveys; surface vehicle methods; airborne methods; input surveys); geochemical techniques (hydrogeochemistry; petrogeochemistry; stream sediment geochemistry; pedogeochemistry; emanometry; biogeochemistry); geochemical model for roll-front deposits; geologic model for vein-like deposits. (U.K.)

  9. Comparison of results between two different techniques of cranio-cervical decompression in patients with Chiari I malformation.

    Science.gov (United States)

    Kunert, Przemysław; Janowski, Mirosław; Zakrzewska, Agnieszka; Marchel, Andrzej

    2009-01-01

    A variety of approaches are employed for the treatment of Chiari I malformation, differing in the extent of cranio-cervical decompression. A technique based on arachnoid preservation and duroplasty was introduced in our department in 2001. The aim of this study was to compare the previous and present techniques. A retrospective analysis of 38 patients with Chiari I malformation treated between 1998 and 2004 was performed. The previous technique, including arachnoid incision, coagulation of cerebellar tonsils, and fourth ventricle exploration without duroplasty, was used to treat 21 patients (group 1). A further 17 patients were treated with the present technique, consisting of arachnoid preservation and duroplasty (group 2). Complication rates as well as early and late results of treatment were evaluated. The Karnofsky (KPS), Rankin (RS) and Bidziński (BS) scales were used for evaluation of results. Post-operative complications were detected in 9 patients in group 1 (43%): liquorrhoea (5 cases), meningitis (1 case), and symptom progression (3 cases). There were no surgical complications in group 2 (p = 0.002). Neurological improvement in the early period (until discharge from hospital) occurred in 10 patients (48%) in group 1 and in 13 (76%) in group 2 (p = NS). Further improvement or lack of symptom progression was found in 58% in group 1 and 82% in group 2 (p = NS). Assessment on the KPS, RS and BS scales showed slightly better results in group 2, but the difference was statistically insignificant. The results of both techniques are comparable. However, the risk of post-operative complications after extra-arachnoid cranio-cervical decompression with duroplasty is significantly lower.

  10. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Full Text Available Demand management (DM) is the process that helps companies sell the right product to the right customer, at the right time, and for the right price. The challenge for any company is therefore to determine how much to sell, at what price, and to which market segment, while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control-system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a dynamical system analogy based on an active suspension, and a stability analysis is provided via the Lyapunov direct method.
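The flavor of an MPC-based demand controller can be illustrated with a scalar linear model of excess demand driven by a price adjustment. The dynamics, cost weights and horizon below are invented for illustration and are not the model used in the paper:

```python
import numpy as np

# Receding-horizon (MPC-style) regulation of a scalar linear demand model
# x[k+1] = a*x[k] + b*u[k], where x is excess demand and u a price
# adjustment. All coefficients are illustrative, not from the paper.

a, b, horizon, r = 0.9, 0.5, 10, 0.1

def mpc_step(x0, ref=0.0):
    # Express predicted states over the horizon as free response + forced
    # response, then minimize sum (x-ref)^2 + r*u^2 by regularized least
    # squares; only the first input of the optimal sequence is applied.
    F = np.array([a ** (i + 1) for i in range(horizon)])   # free response
    G = np.zeros((horizon, horizon))                       # forced response
    for i in range(horizon):
        for j in range(i + 1):
            G[i, j] = a ** (i - j) * b
    A = np.vstack([G, np.sqrt(r) * np.eye(horizon)])
    y = np.concatenate([ref - F * x0, np.zeros(horizon)])
    u, *_ = np.linalg.lstsq(A, y, rcond=None)
    return u[0]

x = 5.0                                   # initial excess demand
for _ in range(20):                       # closed loop: re-solve each step
    x = a * x + b * mpc_step(x)
print(round(x, 4))                        # regulated toward zero
```

The receding-horizon structure, re-solving a finite-horizon problem at each step and applying only the first move, is what distinguishes MPC from a fixed feedback law.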

  11. Interpreting Results from the Multinomial Logit Model

    DEFF Research Database (Denmark)

    Wulff, Jesper

    2015-01-01

    This article provides guidelines and illustrates the practical steps necessary for an analysis of results from the multinomial logit model (MLM). The MLM is a popular model in the strategy literature because it allows researchers to examine strategic choices with multiple outcomes. However, there seem to be systematic issues with regard to how researchers interpret their results when using the MLM. In this study, I present a set of guidelines critical to analyzing and interpreting results from the MLM. The procedure involves intuitive graphical representations of predicted probabilities and marginal effects suitable for both the interpretation and the communication of results. The practical steps are illustrated through an application of the MLM to the choice of foreign market entry mode.
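The predicted probabilities that drive such graphical interpretation come from the softmax of each outcome's linear index, with one category fixed as the base. A minimal sketch with made-up coefficients (e.g. three hypothetical entry modes):

```python
import numpy as np

# Predicted probabilities from a multinomial logit model: softmax over the
# linear indices of each outcome, with the base category's index fixed at 0.
# The coefficients and covariates below are invented for illustration.

def predict_probs(X, B):
    """X: (n, k) covariates; B: (k, m) coefficients for the m non-base outcomes."""
    scores = np.column_stack([np.zeros(len(X)), X @ B])   # base category = 0
    e = np.exp(scores - scores.max(axis=1, keepdims=True))  # stable softmax
    return e / e.sum(axis=1, keepdims=True)

B = np.array([[0.8, -0.4],    # hypothetical coefficient on firm size
              [-0.3, 0.6]])   # hypothetical coefficient on experience
X = np.array([[1.0, 2.0]])    # one observation
p = predict_probs(X, B)
print(p.round(3), float(p.sum()))   # three probabilities summing to 1
```

Plotting `p` as a covariate varies over its observed range, holding the others at representative values, is the kind of graphical presentation the article recommends over reading raw coefficients.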

  12. New model reduction technique for a class of parabolic partial differential equations

    NARCIS (Netherlands)

    Vajta, Miklos

    1991-01-01

    A model reduction (or lumping) technique for a class of parabolic-type partial differential equations is given, and its application is discussed. The frequency response of the temperature distribution in any multilayer solid is developed and given by a matrix expression. The distributed transfer

  13. Evaluation of inverse modeling techniques for pinpointing water leakages at building constructions

    NARCIS (Netherlands)

    Schijndel, van A.W.M.

    2015-01-01

    The location and nature of the moisture leakages are sometimes difficult to detect. Moreover, the relation between observed inside surface moisture patterns and where the moisture enters the construction is often not clear. The objective of this paper is to investigate inverse modeling techniques as

  14. Therapeutic Results of Radiotherapy in Rectal Carcinoma -Comparison of Sandwich Technique Radiotherapy with Postoperative Radiotherapy

    International Nuclear Information System (INIS)

    Huh, Gil Cha; Suh, Hyun Suk; Lee, Hyuk Sang; Kim, Re Hwe; Kim, Chul Soo; Kim, Hong Yong; Kim, Sung Rok

    1996-01-01

    Purpose: To evaluate the potential advantage of 'sandwich' technique radiotherapy compared to postoperative radiotherapy in resectable rectal cancer. Between January 1989 and May 1994, 60 patients with resectable rectal cancer were treated at Inje University Seoul and Sanggye Paik Hospital. Fifty-one patients were available for analysis: 20 patients were treated with sandwich technique radiotherapy and 31 patients were treated with postoperative radiotherapy. In sandwich technique radiotherapy (RT), patients were treated with preoperative RT of 1500 cGy/5 fx followed by immediate curative resection. Patients staged as Astler-Coller B2 or C were considered for postoperative RT with 2500-4500 cGy. In postoperative RT, a total radiation dose of 4500-6120 cGy, 180 cGy daily, was delivered at 4-6 weeks. Patients were followed for a median period of 25 months. Results: The overall 5-year survival rates for the sandwich technique RT group and the postoperative RT group were 60% and 71%, respectively (p>0.05). The 5-year disease-free survival rates for both groups were 63%. There was no difference in local failure rate between the two groups (11% versus 7%). The incidence of distant metastasis was 11% (2/20) in the sandwich technique RT group and 20% (6/31) in the postoperative RT group (p>0.05). The frequencies of acute and chronic complications were comparable in both groups. Conclusion: The sandwich technique radiotherapy group shows local recurrence and survival similar to those of the postoperative RT group, but reduced distant metastasis compared to the postoperative RT group. However, long-term follow-up and a larger number of patients are needed to reach any firm conclusion regarding the value of the sandwich technique RT.

  15. Control System Design for Cylindrical Tank Process Using Neural Model Predictive Control Technique

    Directory of Open Access Journals (Sweden)

    M. Sridevi

    2010-10-01

    Full Text Available Chemical manufacturing and the process industry require innovative technologies for process identification. This paper deals with model identification and control of a cylindrical tank process. Model identification of the process was done using the ARMAX technique. A neural model predictive controller (NMPC) was designed for the identified model. The performance of the controllers was evaluated using MATLAB software. The performance of the NMPC controller was compared with a Smith predictor controller and an IMC controller based on rise time, settling time, overshoot and ISE, and it was found that the NMPC controller is better suited for this process.
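Identification of this kind can be illustrated with the ARX special case of the ARMAX structure (the moving-average noise terms omitted), where the parameters follow from ordinary least squares on lagged input/output data. The plant and coefficients below are synthetic, not the tank process of the paper:

```python
import numpy as np

# ARX identification (a simplification of ARMAX: no MA noise terms):
#   y[k] = a1*y[k-1] + b1*u[k-1] + e[k]
# The "true" plant below is synthetic; least squares recovers (a1, b1).

rng = np.random.default_rng(1)
n = 500
u = rng.uniform(-1, 1, n)                    # persistently exciting input
y = np.zeros(n)
for k in range(1, n):
    y[k] = 0.8 * y[k - 1] + 0.3 * u[k - 1] + 0.01 * rng.normal()

Phi = np.column_stack([y[:-1], u[:-1]])      # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
print(theta.round(3))                        # estimates close to [0.8, 0.3]
```

The identified model then serves as the internal prediction model for a predictive controller such as the NMPC described in the abstract.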

  16. Data mining techniques for thermophysical properties of refrigerants

    International Nuclear Information System (INIS)

    Kuecueksille, Ecir Ugur; Selbas, Resat; Sencan, Arzu

    2009-01-01

    This study presents ten modeling techniques within the data mining process for the prediction of thermophysical properties of refrigerants (R134a, R404a, R407c and R410a): linear regression (LR), multilayer perceptron (MLP), pace regression (PR), simple linear regression (SLR), sequential minimal optimization (SMO), KStar, additive regression (AR), M5 model tree, decision table (DT), and M5'Rules models. Relations depending on temperature and pressure were derived for the determination of thermophysical properties such as the specific heat capacity, viscosity, heat conduction coefficient and density of the refrigerants. The model results for each refrigerant were compared and the best model was identified. The results indicate that formulations derived from these techniques will facilitate the design and optimization of heat exchangers, which are key components of, in particular, vapor compression refrigeration systems.

  17. Constraint-based Student Modelling in Probability Story Problems with Scaffolding Techniques

    Directory of Open Access Journals (Sweden)

    Nabila Khodeir

    2018-01-01

    Full Text Available Constraint-based student modelling (CBM) is an important technique employed in intelligent tutoring systems to model student knowledge and provide relevant assistance. This paper introduces the Math Story Problem Tutor (MAST), a Web-based intelligent tutoring system for probability story problems, which is able to generate problems of different contexts, types and difficulty levels for self-paced learning. Constraints in MAST are specified at a low level of granularity to allow fine-grained diagnosis of student errors. Furthermore, MAST extends CBM to address errors due to misunderstanding of the narrative story. It can locate and highlight keywords that may have been overlooked or misunderstood, leading to an error. This is achieved by utilizing the role of sentences and keywords that are defined through the Natural Language Generation (NLG) methods deployed in the story problem generation. MAST also integrates CBM with scaffolding questions and feedback to provide various forms of help and guidance to the student, allowing the student to discover and correct any errors in his or her solution. MAST has been preliminarily evaluated empirically, and the results show its potential effectiveness in tutoring students, with a decrease in the percentage of violated constraints along the learning curve. Additionally, there is a significant improvement in the post-test exam results, relative to the pre-test exam, of the students using MAST in comparison to those relying on the textbook.
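The core CBM idea, that each constraint pairs a relevance condition with a satisfaction condition over the student's solution, can be sketched as follows. The constraint shown is a simplified illustration, not one of MAST's actual constraints:

```python
from dataclasses import dataclass
from typing import Callable

# Constraint-based modelling (CBM): a constraint is a (relevance,
# satisfaction) pair of predicates over the student's solution. The
# constraint below is a toy example, not taken from MAST itself.

@dataclass
class Constraint:
    name: str
    relevant: Callable[[dict], bool]    # does this constraint apply here?
    satisfied: Callable[[dict], bool]   # if it applies, is it met?
    feedback: str

constraints = [
    Constraint(
        "probability-range",
        relevant=lambda s: "answer" in s,
        satisfied=lambda s: 0.0 <= s["answer"] <= 1.0,
        feedback="A probability must lie between 0 and 1.",
    ),
]

def diagnose(solution):
    """Return feedback for every relevant-but-violated constraint."""
    return [c.feedback for c in constraints
            if c.relevant(solution) and not c.satisfied(solution)]

bad = diagnose({"answer": 1.4})     # violated -> feedback returned
ok = diagnose({"answer": 0.25})     # no violations -> empty list
print(bad, ok)
```

Fine-grained diagnosis, as the abstract describes, amounts to writing many narrow constraints of this shape so each violation maps to one specific misconception.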

  18. GUIDING NONLINEAR FORCE-FREE MODELING USING CORONAL OBSERVATIONS: FIRST RESULTS USING A QUASI-GRAD-RUBIN SCHEME

    Energy Technology Data Exchange (ETDEWEB)

    Malanushenko, A. [Department of Physics, Montana State University, Bozeman, MT (United States); Schrijver, C. J.; DeRosa, M. L. [Lockheed Martin Advanced Technology Center, Palo Alto, CA (United States); Wheatland, M. S.; Gilchrist, S. A. [Sydney Institute for Astronomy, School of Physics, University of Sydney (Australia)

    2012-09-10

    At present, many models of the coronal magnetic field rely on photospheric vector magnetograms, but these data have been shown to be problematic as the sole boundary information for nonlinear force-free field extrapolations. Magnetic fields in the corona manifest themselves in high-energy images (X-rays and EUV) in the shapes of coronal loops, providing an additional constraint that is not at present used in the computational domain to directly influence the evolution of the model. This is in part due to the mathematical complications of incorporating such input into numerical models. Projection effects, confusion due to overlapping loops (the coronal plasma is optically thin), and the limited number of usable loops further complicate the use of information from coronal images. We develop and test a new algorithm to use images of coronal loops in the modeling of the solar coronal magnetic field. We first fit projected field lines with those of constant-α force-free fields to approximate the three-dimensional distribution of currents in the corona along a sparse set of trajectories. We then apply a Grad-Rubin-like iterative technique, which uses these trajectories as volume constraints on the values of α, to obtain a volume-filling nonlinear force-free model of the magnetic field, modifying a code and method presented by Wheatland. We thoroughly test the technique on known analytical and solar-like model magnetic fields previously used for comparing different extrapolation techniques and compare the results with those obtained by currently available methods relying only on the photospheric data. We conclude that we have developed a functioning method of modeling the coronal magnetic field by combining the line-of-sight component of the photospheric magnetic field with information from coronal images. Whereas we focus on the use of coronal loop information in combination with line-of-sight magnetograms, the method is readily extended to
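The force-free condition underlying these extrapolations, and the reason α is constant along each field line, can be stated compactly (standard definitions, not the paper's own notation):

```latex
% Force-free condition: currents flow parallel to the magnetic field
\nabla \times \mathbf{B} = \alpha \mathbf{B}, \qquad \nabla \cdot \mathbf{B} = 0
% Taking the divergence of the first relation gives
0 = \nabla \cdot (\alpha \mathbf{B}) = \mathbf{B} \cdot \nabla \alpha
% so \alpha is constant along each field line. The constant-\alpha (linear)
% force-free fields used to fit the projected loops are the special case
% \alpha = \mathrm{const} throughout the volume.
```

This is why fitted loop trajectories can serve as volume constraints on α: each loop carries a single α value along its entire length.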

  19. GUIDING NONLINEAR FORCE-FREE MODELING USING CORONAL OBSERVATIONS: FIRST RESULTS USING A QUASI-GRAD-RUBIN SCHEME

    International Nuclear Information System (INIS)

    Malanushenko, A.; Schrijver, C. J.; DeRosa, M. L.; Wheatland, M. S.; Gilchrist, S. A.

    2012-01-01

    At present, many models of the coronal magnetic field rely on photospheric vector magnetograms, but these data have been shown to be problematic as the sole boundary information for nonlinear force-free field extrapolations. Magnetic fields in the corona manifest themselves in high-energy images (X-rays and EUV) in the shapes of coronal loops, providing an additional constraint that is not at present used in the computational domain to directly influence the evolution of the model. This is in part due to the mathematical complications of incorporating such input into numerical models. Projection effects, confusion due to overlapping loops (the coronal plasma is optically thin), and the limited number of usable loops further complicate the use of information from coronal images. We develop and test a new algorithm to use images of coronal loops in the modeling of the solar coronal magnetic field. We first fit projected field lines with those of constant-α force-free fields to approximate the three-dimensional distribution of currents in the corona along a sparse set of trajectories. We then apply a Grad-Rubin-like iterative technique, which uses these trajectories as volume constraints on the values of α, to obtain a volume-filling nonlinear force-free model of the magnetic field, modifying a code and method presented by Wheatland. We thoroughly test the technique on known analytical and solar-like model magnetic fields previously used for comparing different extrapolation techniques and compare the results with those obtained by currently available methods relying only on the photospheric data. We conclude that we have developed a functioning method of modeling the coronal magnetic field by combining the line-of-sight component of the photospheric magnetic field with information from coronal images. Whereas we focus on the use of coronal loop information in combination with line-of-sight magnetograms, the method is readily extended to incorporate

  20. Composite use of numerical groundwater flow modeling and geoinformatics techniques for monitoring Indus Basin aquifer, Pakistan.

    Science.gov (United States)

    Ahmad, Zulfiqar; Ashraf, Arshad; Fryar, Alan; Akhter, Gulraiz

    2011-02-01

    The integration of Geographic Information System (GIS), groundwater modeling and satellite remote sensing capabilities provides an efficient way of analyzing and monitoring groundwater behavior and its associated land conditions. A 3-dimensional finite element model (Feflow) has been used for regional groundwater flow modeling of the Upper Chaj Doab in the Indus Basin, Pakistan. The approach of using GIS techniques that partially fulfill the data requirements and define the parameters of existing hydrologic models was adopted. The numerical groundwater flow model was developed to configure the groundwater equipotential surface and hydraulic head gradient, and to estimate the groundwater budget of the aquifer. GIS was used for spatial database development and for integration with remote sensing and numerical groundwater flow modeling capabilities. Thematic layers of soils, land use, hydrology, infrastructure, and climate were developed using GIS. The ArcView GIS software was used as an additional tool to develop supporting data for the numerical groundwater flow modeling and for the integration and presentation of image processing and modeling results. The groundwater flow model was calibrated to simulate future changes in piezometric heads for the period 2006 to 2020. Different scenarios were developed to study the impact of extreme climatic conditions (drought/flood) and variable groundwater abstraction on the regional groundwater system. The model results indicated a significant water-table response to these external factors. The developed model provides an effective tool for evaluating management options for future groundwater development in the study area.
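Finite-element codes of this kind solve the standard saturated groundwater flow equation; in the textbook form below (not the paper's notation), h is hydraulic head, the K terms are principal components of the hydraulic conductivity tensor, S_s is specific storage and W is a source/sink term (positive for recharge):

```latex
\frac{\partial}{\partial x}\!\left(K_{xx}\frac{\partial h}{\partial x}\right)
+ \frac{\partial}{\partial y}\!\left(K_{yy}\frac{\partial h}{\partial y}\right)
+ \frac{\partial}{\partial z}\!\left(K_{zz}\frac{\partial h}{\partial z}\right)
+ W \;=\; S_s \,\frac{\partial h}{\partial t}
```

Calibration then amounts to adjusting K, S_s and W within plausible ranges until simulated heads match observed piezometric levels, after which forward scenarios (drought, flood, pumping changes) can be run.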

  1. The Role of Flow Diagnostic Techniques in Fan and Open Rotor Noise Modeling

    Science.gov (United States)

    Envia, Edmane

    2016-01-01

    A principal source of turbomachinery noise is the interaction of the rotating and stationary blade rows with perturbations in the airstream through the engine. As such, a lot of research has been devoted to the study of turbomachinery noise generation mechanisms. This is particularly true of fans and open rotors, both of which are major contributors to the overall noise output of modern aircraft engines. Much of the research in fan and open rotor noise has been focused on developing theoretical models for predicting their noise characteristics. These models, which run the gamut from semi-empirical to fully computational ones, are, in one form or another, informed by the description of the unsteady flowfield in which the propulsors (i.e., the fans and open rotors) operate. Not surprisingly, the fidelity of the theoretical models depends, to a large extent, on capturing the nuances of the unsteady flowfield that have a direct role in the noise generation process. As such, flow diagnostic techniques have proven indispensable in identifying the shortcomings of theoretical models and in helping to improve them. This presentation provides a few examples of the role of flow diagnostic techniques in assessing the fidelity and robustness of fan and open rotor noise prediction models.

  2. Constructing an Urban Population Model for Medical Insurance Scheme Using Microsimulation Techniques

    Directory of Open Access Journals (Sweden)

    Linping Xiong

    2012-01-01

    Full Text Available China launched a pilot project of medical insurance reform in 79 cities in 2007 to cover urban non-working residents. In this paper, an urban population model was created for China's medical insurance scheme using microsimulation techniques. The model clarifies for policy makers the population distributions of the different groups of people who are the potential urban residents entering the medical insurance scheme. The income trends of individuals and family units were also obtained. These factors are essential for the challenging policy decisions involved in balancing the long-term financial sustainability of the medical insurance scheme.
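A dynamic microsimulation of this kind advances a synthetic population of individual records through time under transition rules. The toy sketch below conveys the mechanics only; the population size, rates and eligibility rule are invented, not those of China's scheme:

```python
import numpy as np

# Toy dynamic microsimulation: each record is one individual who ages one
# year per step and whose income evolves stochastically. All rates and the
# eligibility rule are invented placeholders for illustration.

rng = np.random.default_rng(2)
n = 10_000
age = rng.integers(0, 90, n)                       # years
income = rng.lognormal(mean=9.0, sigma=0.5, size=n)
working = rng.random(n) < 0.55                     # employed -> covered elsewhere

for year in range(5):                              # simulate 5 years forward
    age += 1
    income *= rng.normal(1.03, 0.05, n)            # ~3% mean income growth

# Non-working residents form the pool entering the hypothetical scheme.
eligible = ~working
print(int(eligible.sum()), round(float(np.median(income[eligible])), 1))
```

A real model of this type would add birth, death, migration and labor-force transitions, and would be calibrated to census and survey data before projecting enrollment and contributions.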

  3. Technique of adjustable sutures: Results

    Directory of Open Access Journals (Sweden)

    Lourdes R. Hernández Santos

    2001-06-01

    Full Text Available A preoperative and postoperative sensory and motor study was conducted on 84 patients aged 13 years and older who attended the Binocular Vision consultation with a diagnosis of horizontal strabismus. Student's t and chi-square tests were used for the statistical analysis. Our objective was to determine the postoperative results of strabismus surgery performed with the adjustable-suture technique. The results were as follows: 61% of the patients with exotropia and 71.4% of those with a diagnosis of exotropia were in orthotropia 6 months after the operation; 71.4% of the patients with esotropia and 83.3% of those with a diagnosis of esotropia were in orthotropia one year after the operation. This surgical technique allows modification of the deviation in the immediate postoperative period.

  4. Comparison of numerical and experimental results of the flow in the U9 Kaplan turbine model

    Energy Technology Data Exchange (ETDEWEB)

    Petit, O; Nilsson, H [Division of Fluid Mechanics, Chalmers University of Technology, Hörsalsvägen 7A, SE-41296 Göteborg (Sweden); Mulu, B; Cervantes, M, E-mail: olivierp@chalmers.s [Division of Fluid Mechanics, Luleå University of Technology, SE-971 87 Luleå (Sweden)

    2010-08-15

    The present work compares simulations made using the OpenFOAM CFD code with experimental measurements of the flow in the U9 Kaplan turbine model. Comparisons of the velocity profiles in the spiral casing and in the draft tube are presented. The U9 Kaplan turbine prototype located in Porjus and its model, located in Älvkarleby, Sweden, have curved inlet pipes that lead the flow to the spiral casing. Nowadays, this curved pipe and its effect on the flow in the turbine is not taken into account when numerical simulations are performed at design stage. To study the impact of the inlet pipe curvature on the flow in the turbine, and to get a better overview of the flow of the whole system, measurements were made on the 1:3.1 model of the U9 turbine. Previously published measurements were taken at the inlet of the spiral casing and just before the guide vanes, using the laser Doppler anemometry (LDA) technique. In the draft tube, a number of velocity profiles were measured using the LDA techniques. The present work extends the experimental investigation with a horizontal section at the inlet of the draft tube. The experimental results are used to specify the inlet boundary condition for the numerical simulations in the draft tube, and to validate the computational results in both the spiral casing and the draft tube. The numerical simulations were realized using the standard k-ε model and a block-structured hexahedral wall function mesh.

  5. Comparison of numerical and experimental results of the flow in the U9 Kaplan turbine model

    Science.gov (United States)

    Petit, O.; Mulu, B.; Nilsson, H.; Cervantes, M.

    2010-08-01

    The present work compares simulations made using the OpenFOAM CFD code with experimental measurements of the flow in the U9 Kaplan turbine model. Comparisons of the velocity profiles in the spiral casing and in the draft tube are presented. The U9 Kaplan turbine prototype located in Porjus and its model, located in Älvkarleby, Sweden, have curved inlet pipes that lead the flow to the spiral casing. Nowadays, this curved pipe and its effect on the flow in the turbine is not taken into account when numerical simulations are performed at design stage. To study the impact of the inlet pipe curvature on the flow in the turbine, and to get a better overview of the flow of the whole system, measurements were made on the 1:3.1 model of the U9 turbine. Previously published measurements were taken at the inlet of the spiral casing and just before the guide vanes, using the laser Doppler anemometry (LDA) technique. In the draft tube, a number of velocity profiles were measured using the LDA techniques. The present work extends the experimental investigation with a horizontal section at the inlet of the draft tube. The experimental results are used to specify the inlet boundary condition for the numerical simulations in the draft tube, and to validate the computational results in both the spiral casing and the draft tube. The numerical simulations were realized using the standard k-ε model and a block-structured hexahedral wall function mesh.

  6. Comparison of numerical and experimental results of the flow in the U9 Kaplan turbine model

    International Nuclear Information System (INIS)

    Petit, O; Nilsson, H; Mulu, B; Cervantes, M

    2010-01-01

    The present work compares simulations made using the OpenFOAM CFD code with experimental measurements of the flow in the U9 Kaplan turbine model. Comparisons of the velocity profiles in the spiral casing and in the draft tube are presented. The U9 Kaplan turbine prototype, located in Porjus, and its model, located in Älvkarleby, Sweden, have curved inlet pipes that lead the flow to the spiral casing. At present, this curved pipe and its effect on the flow in the turbine are not taken into account when numerical simulations are performed at the design stage. To study the impact of the inlet pipe curvature on the flow in the turbine, and to get a better overview of the flow in the whole system, measurements were made on the 1:3.1 model of the U9 turbine. Previously published measurements were taken at the inlet of the spiral casing and just before the guide vanes using the laser Doppler anemometry (LDA) technique. In the draft tube, a number of velocity profiles were measured using the same technique. The present work extends the experimental investigation with a horizontal section at the inlet of the draft tube. The experimental results are used to specify the inlet boundary condition for the numerical simulations in the draft tube, and to validate the computational results in both the spiral casing and the draft tube. The numerical simulations were performed using the standard k-ε model and a block-structured hexahedral wall-function mesh.
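The standard k-ε closure named above relates the turbulent (eddy) viscosity to the turbulent kinetic energy k and its dissipation rate ε. A minimal sketch of that relation together with the usual inlet estimates from a turbulence intensity and a mixing length (the inlet velocity, intensity, and length scale below are illustrative assumptions, not values from the U9 measurements):

```python
# Standard k-epsilon model: eddy viscosity nu_t = C_mu * k^2 / epsilon,
# with the standard model constant C_mu = 0.09.

C_MU = 0.09

def eddy_viscosity(k: float, eps: float) -> float:
    """Kinematic eddy viscosity nu_t [m^2/s] from k [m^2/s^2] and eps [m^2/s^3]."""
    return C_MU * k * k / eps

def inlet_k_eps(u_mean: float, intensity: float, length_scale: float):
    """Common inlet estimates: k from turbulence intensity, eps from a length scale."""
    k = 1.5 * (u_mean * intensity) ** 2
    eps = C_MU ** 0.75 * k ** 1.5 / length_scale
    return k, eps

# Illustrative inlet values (assumed, not from the paper):
k, eps = inlet_k_eps(u_mean=5.0, intensity=0.05, length_scale=0.07)
nu_t = eddy_viscosity(k, eps)
```

These estimates are what a CFD code typically needs as inlet boundary conditions when, as here, measured velocity profiles are imposed at the draft-tube inlet.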

  7. A two-system, single-analysis, fluid-structure interaction technique for modelling abdominal aortic aneurysms.

    Science.gov (United States)

    Kelly, S C; O'Rourke, M J

    2010-01-01

    This work reports on the implementation and validation of a two-system, single-analysis, fluid-structure interaction (FSI) technique that uses the finite volume (FV) method for performing simulations on abdominal aortic aneurysm (AAA) geometries. This FSI technique, which was implemented in OpenFOAM, included fluid and solid mesh motion and incorporated a non-linear material model to represent AAA tissue. Fully implicit coupling was implemented, ensuring that both the fluid and solid domains reached convergence within each time step. The fluid and solid parts of the FSI code were validated independently through comparison with experimental data, before performing a complete FSI simulation on an idealized AAA geometry. Results from the FSI simulation showed that a vortex formed at the proximal end of the aneurysm during systolic acceleration, and moved towards the distal end of the aneurysm during diastole. Wall shear stress (WSS) values were found to peak at both the proximal and distal ends of the aneurysm and remain low along the centre of the aneurysm. The maximum von Mises stress in the aneurysm wall was found to be 408 kPa, and this occurred at the proximal end of the aneurysm, while the maximum displacement of 2.31 mm occurred in the centre of the aneurysm. These results were found to be consistent with results from other FSI studies in the literature.

  8. Predicting bottlenose dolphin distribution along Liguria coast (northwestern Mediterranean Sea) through different modeling techniques and indirect predictors.

    Science.gov (United States)

    Marini, C; Fossa, F; Paoli, C; Bellingeri, M; Gnone, G; Vassallo, P

    2015-03-01

    Habitat modeling is an important tool to investigate the quality of the habitat for a species within a certain area, to predict species distribution and to understand the ecological processes behind it. Many species have been investigated by means of habitat modeling techniques, mainly to support effective management and protection policies, and cetaceans play an important role in this context. The bottlenose dolphin (Tursiops truncatus) has been investigated with habitat modeling techniques since 1997. The objectives of this work were to predict the distribution of the bottlenose dolphin in a coastal area through the use of static morphological features and to compare the prediction performances of three different modeling techniques: Generalized Linear Model (GLM), Generalized Additive Model (GAM) and Random Forest (RF). Four static variables were tested: depth, bottom slope, distance from the 100 m bathymetric contour and distance from the coast. RF proved to be both the most accurate and the most precise modeling technique, with very high predicted distribution probabilities in presence cells (a mean predicted probability of 90.4%) and with 66.7% of presence cells having a predicted probability between 90% and 100%. The bottlenose dolphin distribution obtained with RF allowed the identification of specific areas with particularly high presence probability along the coastal zone; the recognition of these core areas may be the starting point for developing effective management practices to improve T. truncatus protection.

  9. Advanced particle-in-cell simulation techniques for modeling the Lockheed Martin Compact Fusion Reactor

    Science.gov (United States)

    Welch, Dale; Font, Gabriel; Mitchell, Robert; Rose, David

    2017-10-01

    We report on particle-in-cell developments in the study of the Compact Fusion Reactor. Millisecond-scale, two- and three-dimensional simulations (cubic-meter volume) of confinement and neutral beam heating of the magnetic confinement device require accurate representation of the complex orbits, near-perfect energy conservation, and significant computational power. In order to determine the initial plasma fill and neutral beam heating, these simulations include ionization, elastic, and charge-exchange hydrogen reactions. To this end, we are pursuing fast electromagnetic kinetic modeling algorithms, including two implicit techniques and a hybrid quasi-neutral algorithm with kinetic ions. The kinetic modeling includes use of the Poisson-corrected direct implicit, magnetic implicit, and second-order cloud-in-cell techniques. The hybrid algorithm, which ignores electron inertial effects, is two orders of magnitude faster than the kinetic approach but not as accurate with respect to confinement. The advantages and disadvantages of these techniques will be presented. Funded by Lockheed Martin.
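The "complex orbits" and "near perfect energy conservation" requirements above are exactly what the classic Boris push used in many explicit PIC codes provides: it splits the Lorentz force into two half electric kicks around a pure magnetic rotation, so with no electric field the particle speed is conserved to round-off. A minimal sketch of the standard Boris rotation for a single particle (a generic textbook scheme, not the implicit algorithms of this particular study):

```python
# Boris particle push: half electric kick, magnetic rotation, half electric kick.
# With E = 0 the rotation preserves |v| exactly, step after step.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def boris_push(v, E, B, qm, dt):
    """Advance velocity v by one time step dt; qm is the charge-to-mass ratio."""
    # First half electric acceleration
    v_minus = [v[i] + qm * E[i] * dt / 2 for i in range(3)]
    # Magnetic rotation through the angle set by t = (q/m) B dt/2
    t = [qm * B[i] * dt / 2 for i in range(3)]
    t2 = sum(c * c for c in t)
    vp = cross(v_minus, t)
    v_prime = [v_minus[i] + vp[i] for i in range(3)]
    s = [2 * c / (1 + t2) for c in t]
    vs = cross(v_prime, s)
    v_plus = [v_minus[i] + vs[i] for i in range(3)]
    # Second half electric acceleration
    return [v_plus[i] + qm * E[i] * dt / 2 for i in range(3)]

# Gyration in a uniform B field: kinetic energy stays constant over many steps.
v = [1.0, 0.0, 0.5]
for _ in range(1000):
    v = boris_push(v, E=[0.0, 0.0, 0.0], B=[0.0, 0.0, 1.0], qm=1.0, dt=0.1)
speed2 = sum(c * c for c in v)  # remains 1.25 to round-off
```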

  10. Materials and techniques for model construction

    Science.gov (United States)

    Wigley, D. A.

    1985-01-01

    The problems confronting the designer of models for cryogenic wind tunnels are discussed, with particular reference to the difficulties in obtaining appropriate data on the mechanical and physical properties of candidate materials and their fabrication technologies. The relationship between the strength and toughness of alloys is discussed in the context of maximizing both and avoiding the problem of dimensional and microstructural instability. All major classes of materials used in model construction are considered in some detail, and in the Appendix selected numerical data are given for the most relevant materials. The stepped-specimen program to investigate stress-induced dimensional changes in alloys is discussed in detail, together with the interpretation of the initial results. The methods used to bond model components are considered with particular reference to the selection of filler alloys and temperature cycles that avoid microstructural degradation and loss of mechanical properties.

  11. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    KAUST Repository

    Khaki, M.; Hoteit, Ibrahim; Kuhn, M.; Awange, J.; Forootan, E.; van Dijk, A.; Schumacher, M.; Pattiaratchi, C.

    2017-01-01

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques

  12. Comparison of QuadrapolarTM radiofrequency lesions produced by standard versus modified technique: an experimental model

    Directory of Open Access Journals (Sweden)

    Safakish R

    2017-06-01

    Ramin Safakish, Allevio Pain Management Clinic, Toronto, ON, Canada. Abstract: Lower back pain (LBP) is a global public health issue and is associated with substantial financial costs and loss of quality of life. Over the years, the literature has provided varying statistics regarding the causes of back pain; the following statistic is the closest estimate for our patient population: sacroiliac (SI) joint pain is responsible for LBP in 18%–30% of individuals with LBP. Quadrapolar™ radiofrequency ablation, which involves ablation of the nerves of the SI joint using heat, is a commonly used treatment for SI joint pain. However, the standard Quadrapolar radiofrequency procedure is not always effective at ablating all the sensory nerves that cause the pain in the SI joint. One of the major limitations of the standard Quadrapolar radiofrequency procedure is that it produces small lesions of ~4 mm in diameter. Smaller lesions increase the likelihood of failure to ablate all nociceptive input. In this study, we compare the standard Quadrapolar radiofrequency ablation technique to a modified Quadrapolar ablation technique that has produced improved patient outcomes in our clinic. The methodologies of the two techniques are compared. In addition, we compare results from an experimental model comparing the lesion sizes produced by the two techniques. Taken together, the findings from this study suggest that the modified Quadrapolar technique provides longer-lasting relief for back pain caused by SI joint dysfunction. A randomized controlled clinical trial is the next step required to quantify the difference in symptom relief and quality of life produced by the two techniques. Keywords: lower back pain, radiofrequency ablation, sacroiliac joint, Quadrapolar radiofrequency ablation

  13. Experimental and Numerical Modeling of Fluid Flow Processes in Continuous Casting: Results from the LIMMCAST-Project

    Science.gov (United States)

    Timmel, K.; Kratzsch, C.; Asad, A.; Schurmann, D.; Schwarze, R.; Eckert, S.

    2017-07-01

    The present paper reports on numerical simulations and model experiments concerned with the fluid flow in the continuous casting process of steel. This work was carried out in the LIMMCAST project in the framework of the Helmholtz alliance LIMTECH. A brief description of the LIMMCAST facilities used for the experimental modeling at HZDR is given here. Ultrasonic and inductive techniques and X-ray radioscopy were employed for flow measurements and for visualizations of the two-phase flow regimes occurring in the submerged entry nozzle and the mold. Corresponding numerical simulations were performed at TUBAF, taking into account the dimensions and properties of the model experiments. The numerical models were successfully validated against the experimental database. The reasonable, and in many cases excellent, agreement of the numerical with the experimental data allows the models to be extrapolated to real casting configurations. Exemplary results are presented showing the effect of electromagnetic brakes or electromagnetic stirrers on the flow in the mold, and illustrating the properties of two-phase flows resulting from an Ar injection through the stopper rod.

  14. Estimation of genetic variability and heritability of wheat agronomic traits resulted from some gamma rays irradiation techniques

    International Nuclear Information System (INIS)

    Wijaya Murti Indriatama; Trikoesoemaningtyas; Syarifah Iis Aisyah; Soeranto Human

    2016-01-01

    Gamma irradiation techniques have a significant effect on the frequency and spectrum of macro-mutations, but studies of their effect on micro-mutations, which relate to the genetic variability of mutated populations, are very limited. The aim of this research was to study the effect of gamma irradiation techniques on the genetic variability and heritability of wheat agronomic characters in the M2 generation. This research was conducted from July to November 2014 at the Cibadak experimental station, Indonesian Center for Agricultural Biotechnology and Genetic Resources Research and Development, Ministry of Agriculture. Three introduced wheat breeding lines (F-44, Kiran-95 & WL-711) were treated with 3 gamma irradiation techniques (acute, fractionated and intermittent). The M1 generation of the combination treatments was planted, and its spikes were harvested individually per plant. For the M2 generation, seeds of 75 M1 spikes were planted in the field with the one-row-one-spike method and evaluated on the agronomic characters and their genetic components. The gamma irradiation techniques decreased the means but increased the ranges of agronomic traits in the M2 populations. Fractionated irradiation induced a higher mean and a wider range of spike length and number of spikelets per spike than the other irradiation techniques. Fractionated and intermittent irradiation resulted in greater variability of grain weight per plant than acute irradiation. The number of tillers, spike weight, grain weight per spike and grain weight per plant in the M2 populations resulting from the three gamma irradiation techniques showed high estimated heritability and broad-sense genetic coefficient of variability values. The three gamma irradiation techniques increased the genetic variability of agronomic traits in the M2 populations, except for plant height. (author)
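The broad-sense heritability estimates quoted above follow the standard variance-component decomposition: environmental variance is estimated from a non-segregating (e.g. parental) population and subtracted from the total phenotypic variance of the segregating M2 population. A hedged sketch of that calculation (the formulas are textbook quantitative genetics; the phenotype numbers are made up for illustration, not taken from the paper):

```python
from statistics import mean, pvariance

def broad_sense_h2(m2_values, parent_values):
    """Broad-sense heritability H^2 = V_G / V_P,
    with V_G = V_P(M2) - V_E, and V_E taken from a non-segregating line."""
    vp = pvariance(m2_values)        # total phenotypic variance of the M2 population
    ve = pvariance(parent_values)    # environmental variance (non-segregating line)
    vg = max(vp - ve, 0.0)           # genetic variance (clipped at zero)
    return vg / vp if vp > 0 else 0.0

def gcv_percent(m2_values, parent_values):
    """Genetic coefficient of variability: 100 * sqrt(V_G) / mean."""
    vg = max(pvariance(m2_values) - pvariance(parent_values), 0.0)
    return 100.0 * vg ** 0.5 / mean(m2_values)

# Illustrative grain-weight-per-plant data (grams), not from the study:
m2 = [12.1, 9.8, 15.3, 8.2, 13.7, 11.0, 16.4, 7.5]
parent = [11.2, 11.8, 10.9, 11.5, 11.1, 11.6]
h2 = broad_sense_h2(m2, parent)
```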

  15. Development of evaluation method for software safety analysis techniques

    International Nuclear Information System (INIS)

    Huang, H.; Tu, W.; Shih, C.; Chen, C.; Yang, W.; Yih, S.; Kuo, C.; Chen, M.

    2006-01-01

    Full text: Following the massive adoption of digital instrumentation and control (I and C) systems for nuclear power plants (NPP), various Software Safety Analysis (SSA) techniques are used to evaluate NPP safety when adopting an appropriate digital I and C system, and thereby to reduce risk to an acceptable level. However, each technique has its specific advantages and disadvantages. If two or more techniques can be complementarily incorporated, the SSA combination becomes more acceptable. As a result, if proper evaluation criteria are available, the analyst can choose an appropriate technique combination to perform the analysis on the basis of available resources. This research evaluated the currently applicable software safety analysis techniques, such as Preliminary Hazard Analysis (PHA), Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis (FTA), Markov chain modeling, Dynamic Flowgraph Methodology (DFM), and simulation-based model analysis, and then determined indexes in view of their characteristics, which include dynamic capability, completeness, achievability, detail, signal/noise ratio, complexity, and implementation cost. These indexes may help decision makers and software safety analysts to choose the best SSA combination and arrange their own software safety plans. With this proposed method, analysts can evaluate various SSA combinations for a specific purpose. According to the case study results, the traditional PHA + FMEA + FTA (with failure rate) + Markov chain modeling (without transfer rate) combination is not competitive, due to the dilemma of obtaining acceptable software failure rates. However, the systematic architecture of FTA and Markov chain modeling is still valuable for revealing the software fault structure. The system-centric techniques, such as DFM and simulation-based model analysis, show advantages in dynamic capability, achievability, detail, and signal/noise ratio. However, their disadvantages are completeness and complexity.

  16. Forecasting performance of three automated modelling techniques during the economic crisis 2007-2009

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    In this work we consider forecasting macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feedforward autoregressive neural network models. What makes these models interesting in the present context is that they form a cla...... during the economic crisis 2007–2009. Forecast accuracy is measured by the root mean square forecast error. Hypothesis testing is also used to compare the performance of the different techniques with each other....
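The accuracy measure named above, the root mean square forecast error (RMSFE), is a one-liner worth writing out. A minimal sketch, with a naive random-walk benchmark forecast on illustrative data (the series below is made up, not from the study):

```python
from math import sqrt

def rmsfe(actual, forecast):
    """Root mean square forecast error over a sequence of out-of-sample forecasts."""
    errors = [(a - f) ** 2 for a, f in zip(actual, forecast)]
    return sqrt(sum(errors) / len(errors))

# Illustrative quarterly growth rates (%); the naive forecast for each
# period is simply the previous observed value (random-walk benchmark).
actual = [0.5, -1.2, -2.8, -0.4, 1.1]
naive  = [0.6,  0.5, -1.2, -2.8, -0.4]
score = rmsfe(actual, naive)
```

Competing forecasting techniques are then ranked by their RMSFE over the same evaluation window, as the abstract describes.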

  17. Comparison of results analysis of chemical composition of alloys inside the U-Zr-Nb by XRF and AAS techniques

    International Nuclear Information System (INIS)

    Masrukan; Tri Yulianto; Anwar Muchsin

    2011-01-01

    U-Zr-Nb alloy chemical composition analysis using X Ray Fluorescence (XRF) and Atomic Absorption Spectroscopy (AAS) techniques have been conducted, where U-Zr- Nb alloy was chosen as candidates for new high-density fuel for future research reactors . Composition analysis is necessary because the composition of elements in the fuel will determine the characteristics of fuel during the fabrication process and in the reactor. The use of two kinds of analysis techniques were designed to obtain accurate analysis results. The experiment was conducted to determine the major element composition and impurities in the alloy U-Zr-Nb. First U-Zr-Nb varying alloy composition Nb were respectively 1%, 4%, 7% (U10% Zr1% Nb, U10% Zr4% Nb and U10% 7% Nb) as results of the melting process of measuring the diameter of 120 mm crushed on the surface bottom. Once on the bottom surface is smooth, then analyzed using XRF techniques. To analyze the elements using AAS techniques, alloy U-Zr-Nb cut into 10 mm x 5 mm then dissolved using HF and nitric acid. Solution that occurred were analyzed using AAS technique. From the analysis using the XRF technique is obtained the alloy U-10% Zr-1% Nb, U-10% Zr-4% Nb and Zr-10% U-7% Nb) had a content of each element as follows: U (87.8858%), Zr (2.6097%) and Nb (0.2206%), U (87.8556%), Zr (2.6302%), and Nb (0.6573%); U (84.6334%), Zr (2.5773%), and Nb (1.0940) weight. Results of analysis using AAS techniques on samples obtained third consecutive Zr content of 9.25%, 8.90% and 9.80% while the content of Nb was not detected. Meanwhile, the results of elemental analysis of impurities in all three samples showed that almost all the elements are still qualify as fuel except Zn element. Element Zn at the three samples of each alloys U-10% Zr-1% Nb, U-10% Zr-4% Nb and U-10% Zr-7%Nb is 1.3266%, 3.2756% and 1.0927% weight. It could be concluded that the results of analysis of elemental content and impurities in the alloy U-Nb-Zr using both XRF and AAS visible

  18. Physical simulations using centrifuge techniques

    International Nuclear Information System (INIS)

    Sutherland, H.J.

    1981-01-01

    Centrifuge techniques offer a means of performing physical simulations of the long-term mechanical response of deep-ocean sediment to the emplacement of waste canisters and to the temperature gradients generated by them. Preliminary investigations of the scaling laws for the pertinent phenomena indicate that the time scaling will be consistent among them and equal to the scaling factor squared. This result implies that the technique will permit accelerated life testing of proposed configurations; i.e., long-term studies may be done in relatively short times. Existing centrifuges are presently being modified to permit scale-model testing. This testing will start next year
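The time scaling quoted above (scaling factor squared) is what makes centrifuge testing an accelerated-life test: a 1/N-scale model spun at N gravities reproduces diffusion- and consolidation-type processes N² times faster than the prototype. A small sketch of that conversion (the scale factor and test duration below are illustrative, not from the paper):

```python
# Centrifuge time scaling for diffusion/consolidation phenomena:
# a model at 1/N geometric scale, spun at N g, compresses prototype
# time by a factor of N^2.

def prototype_duration_hours(model_time_hours: float, scale_factor: float) -> float:
    """Prototype time represented by a centrifuge model test of the given duration."""
    return model_time_hours * scale_factor ** 2

# Illustrative: a 10-hour test at 100 g (N = 100)...
hours = prototype_duration_hours(10.0, 100.0)
years = hours / (24 * 365)
# ...represents on the order of a decade of prototype behaviour.
```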

  19. Comparison of groundwater residence time using isotope techniques and numerical groundwater flow model in Gneissic Terrain, Korea

    International Nuclear Information System (INIS)

    Bae, D.S.; Kim, C.S.; Koh, Y.K.; Kim, K.S.; Song, M.Y.

    1997-01-01

    The prediction of groundwater flow affecting the migration of radionuclides is an important component of the performance assessment of radioactive waste disposal. Groundwater flow in fractured rock mass is controlled by fracture networks, transmissivity and the hydraulic gradient. Furthermore, the scale-dependent and anisotropic properties of the hydraulic parameters result mainly from irregular patterns of the fracture system, which are very complex to evaluate properly with the currently available techniques. For the purpose of characterizing groundwater flow in fractured rock mass, the discrete fracture network (DFN) concept is available, based on the assumptions that groundwater flows only along fractures and that flowpaths in the rock mass are formed by interconnected fractures. To increase the reliability of the assessment of groundwater flow phenomena, a numerical groundwater flow model and isotopic techniques were applied. Fracture mapping and borehole acoustic scanning were performed to identify conductive fractures in the gneissic terrane. Tracer techniques using deuterium, oxygen-18 and tritium were applied to evaluate the recharge area and the groundwater residence time
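The tritium-based residence-time estimate mentioned above rests on simple radioactive-decay dating: with tritium's half-life of roughly 12.32 years, a piston-flow apparent age follows from the ratio of the input to the measured concentration. A hedged sketch (the concentrations are illustrative, and a real interpretation must also account for the time-varying tritium input function, which this simple formula ignores):

```python
from math import log

T_HALF_TRITIUM = 12.32  # years

def apparent_age(c_measured: float, c_input: float,
                 t_half: float = T_HALF_TRITIUM) -> float:
    """Piston-flow apparent age from radioactive decay:
    t = (t_half / ln 2) * ln(C0 / C)."""
    return (t_half / log(2)) * log(c_input / c_measured)

# One half-life of decay yields an apparent age equal to the half-life:
age = apparent_age(c_measured=5.0, c_input=10.0)  # 12.32 years
```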

  20. Application of HEART technique in health care system and accuracy of its results

    Directory of Open Access Journals (Sweden)

    Mohammad Beiruti

    2016-12-01

    Introduction: Human error is considered a crucial challenge in occupational settings, and the health care system is among the occupational environments with a high rate of human errors. Numerous preceding studies have noted that more than two thirds of medical errors are preventable. Accordingly, different methods have been suggested to evaluate human errors, especially in the nuclear industry. The aim of this study was to evaluate the application and accuracy of the HEART technique in the medical health system. Material and Method: This qualitative study was conducted in the surgical intensive care units of a hospital in Shiraz. All recorded nursing errors were categorized according to the given tasks, and the tasks were then ranked by number of errors. The probability of error in the nurses' tasks was estimated through the AHP-HEART method, and the resulting ranking was compared with the recorded errors. Additionally, the prioritization of the factors contributing to errors, as determined by the AHP and AHP-HEART methods, was compared using the Pearson statistical test. Results: Based on the results, there was concordance between the rate of nurses' errors determined by the HEART method and the recorded errors. However, there was no significant correlation between the error-contributing factors determined by the AHP and AHP-HEART methods. Conclusion: This study suggests that although the HEART technique successfully ranked the tasks by magnitude of error probability, the coefficients of the error-producing conditions should be customized for nurses' tasks in order to provide appropriate control measures.
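The HEART calculation underlying such a ranking multiplies a generic task error probability by the assessed effect of each error-producing condition (EPC). A minimal sketch of the standard HEART formula (the generic probability and EPC values below are illustrative textbook-style numbers, not the study's nurse-specific coefficients):

```python
# HEART (Human Error Assessment and Reduction Technique):
# HEP = generic_task_EP * product over EPCs of ((max_effect - 1) * proportion + 1),
# where 'proportion' is the analyst's assessed proportion of affect in [0, 1].

def heart_hep(generic_ep: float, epcs) -> float:
    """Human error probability from a generic task probability and a list of
    (max_multiplier, assessed_proportion_of_affect) pairs."""
    hep = generic_ep
    for max_effect, proportion in epcs:
        hep *= (max_effect - 1.0) * proportion + 1.0
    return min(hep, 1.0)  # probabilities are capped at 1

# Illustrative: a routine task (generic EP 0.02) under time shortage
# (multiplier 11, 40% affect) and low morale (multiplier 1.2, 50% affect):
hep = heart_hep(0.02, [(11.0, 0.4), (1.2, 0.5)])  # 0.02 * 5.0 * 1.1 = 0.11
```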

  1. Preliminary results on 3D channel modeling: From theory to standardization

    KAUST Repository

    Kammoun, Abla; Khanfir, Hajer; Altman, Zwi; Debbah, Mérouane; Kamoun, Mohamed Amine

    2014-01-01

    Three dimensional (3D) beamforming (also elevation beamforming) is now gaining interest among researchers in wireless communication. The reason can be attributed to its potential for enabling a variety of strategies such as sector or user specific elevation beamforming and cell-splitting. Since these techniques cannot be directly supported by current LTE releases, the 3GPP is now working on defining the required technical specifications. In particular, a large effort is currently being made to get accurate 3D channel models that support the elevation dimension. This step is necessary as it will evaluate the potential of 3D and full dimensional (FD) beamforming techniques to benefit from the richness of real channels. This work aims at presenting the on-going 3GPP study item 'study on 3D-channel model for elevation beamforming and FD-MIMO studies for LTE' and positioning it with respect to previous standardization works. © 2014 IEEE.
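Among other things, the 3D channel models developed in that study item specify the antenna element gain pattern in both elevation and azimuth. A sketch of the element pattern of the form adopted in 3GPP TR 36.873 (the parameter values below are the commonly quoted defaults; treat them as an assumption and consult the TR for normative values):

```python
# 3GPP-style 3D antenna element pattern (dB, relative to boresight):
# a parabolic vertical cut and horizontal cut, combined and floored at A_max.

def element_gain_db(theta_deg: float, phi_deg: float,
                    theta_3db: float = 65.0, phi_3db: float = 65.0,
                    sla_v: float = 30.0, a_max: float = 30.0) -> float:
    """Element gain in dB; theta is the zenith angle (90 deg = horizon),
    phi the azimuth relative to boresight."""
    a_v = -min(12.0 * ((theta_deg - 90.0) / theta_3db) ** 2, sla_v)
    a_h = -min(12.0 * (phi_deg / phi_3db) ** 2, a_max)
    return -min(-(a_v + a_h), a_max)

g_boresight = element_gain_db(90.0, 0.0)   # 0 dB at boresight
g_off_axis = element_gain_db(90.0, 65.0)   # -12 dB at phi = phi_3db
# The 3 dB azimuth point sits at phi = phi_3db / 2 = 32.5 deg.
```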

  2. Preliminary results on 3D channel modeling: From theory to standardization

    KAUST Repository

    Kammoun, Abla

    2014-06-01

    Three dimensional (3D) beamforming (also elevation beamforming) is now gaining interest among researchers in wireless communication. The reason can be attributed to its potential for enabling a variety of strategies such as sector or user specific elevation beamforming and cell-splitting. Since these techniques cannot be directly supported by current LTE releases, the 3GPP is now working on defining the required technical specifications. In particular, a large effort is currently being made to get accurate 3D channel models that support the elevation dimension. This step is necessary as it will evaluate the potential of 3D and full dimensional (FD) beamforming techniques to benefit from the richness of real channels. This work aims at presenting the on-going 3GPP study item 'study on 3D-channel model for elevation beamforming and FD-MIMO studies for LTE' and positioning it with respect to previous standardization works. © 2014 IEEE.

  3. Artificial Intelligence techniques for mission planning for mobile robots

    International Nuclear Information System (INIS)

    Martinez, J.M.; Nomine, J.P.

    1990-01-01

    This work focuses on Spatial Modelization Techniques and on Control Software Architectures, in order to deal efficiently with the Navigation and Perception problems encountered in Mobile Autonomous Robotics. After a brief survey of the various current approaches to these techniques, we present ongoing simulation work for a specific mission in robotics. The studies in progress on Spatial Reasoning are based on new approaches combining Artificial Intelligence and Geometrical techniques. These methods deal with the problem of environment modelization using three types of models: geometrical, topological and semantic models at different levels. The decision-making processes of control are presented as the result of cooperation within a group of decentralized agents that communicate by sending messages. (author)

  4. Optimization of DNA Sensor Model Based Nanostructured Graphene Using Particle Swarm Optimization Technique

    Directory of Open Access Journals (Sweden)

    Hediyeh Karimi

    2013-01-01

    It has been predicted that graphene nanomaterials will be among the candidate materials for post-silicon electronics due to their astonishing properties, such as high carrier mobility, thermal conductivity, and biocompatibility. Graphene is a zero-gap semimetal nanomaterial with a demonstrated ability to serve as an excellent candidate for DNA sensing. Graphene-based DNA sensors have been used to detect DNA adsorption and thereby examine the DNA concentration in an analyte solution. In particular, there is an essential need to develop cost-effective DNA sensors, given their suitability for the diagnosis of genetic or pathogenic diseases. In this paper, the particle swarm optimization technique is employed to optimize an analytical model of a graphene-based DNA sensor that is used for electrical detection of DNA molecules. The results are reported for 5 different concentrations, covering a range from 0.01 nM to 500 nM. Comparison of the optimized model with the experimental data shows an accuracy of more than 95%, which verifies that the optimized model is reliable for use in any application of the graphene-based DNA sensor.
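Particle swarm optimization itself is only a few dozen lines: particles move under inertia plus random attraction toward their own best-seen point and the swarm's best-seen point. A generic sketch minimizing a toy objective (this stands in for fitting the sensor model's parameters to measured data; the objective, bounds, and coefficients below are illustrative assumptions, not from the paper):

```python
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=200,
                 w=0.7, c1=1.5, c2=1.5, seed=42):
    """Minimize f over a box [lo, hi]^dim using a basic global-best PSO."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective with a known minimum at (1, -2):
best, best_val = pso_minimize(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
                              dim=2, bounds=(-5.0, 5.0))
```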

  5. Robustness of an uncertainty and sensitivity analysis of early exposure results with the MACCS reactor accident consequence model

    International Nuclear Information System (INIS)

    Helton, J.C.; Johnson, J.D.; McKay, M.D.; Shiver, A.W.; Sprung, J.L.

    1995-01-01

    Uncertainty and sensitivity analysis techniques based on Latin hypercube sampling, partial correlation analysis and stepwise regression analysis were used in an investigation with the MACCS model of the early health effects associated with a severe accident at a nuclear power station. The following results were obtained in tests to check the robustness of the analysis techniques: two independent Latin hypercube samples produced similar uncertainty and sensitivity analysis results; setting important variables to best-estimate values produced substantial reductions in uncertainty, while setting the less important variables to best-estimate values had little effect on uncertainty; similar sensitivity analysis results were obtained when the original uniform and loguniform distributions assigned to the 34 imprecisely known input variables were changed to left-triangular distributions and then to right-triangular distributions; and analyses with rank-transformed and logarithmically-transformed data produced similar results and substantially outperformed analyses with raw (i.e., untransformed) data
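Latin hypercube sampling, the workhorse of the analysis described above, stratifies each input variable into N equal-probability intervals, draws one value from each interval, and pairs the variables through independent random permutations. A minimal sketch with uniform marginals only (the MACCS analysis also used loguniform and triangular distributions, which would be handled by mapping these [0, 1) values through the inverse CDFs):

```python
import random

def latin_hypercube(n_samples, n_vars, seed=0):
    """Return an n_samples x n_vars LHS design on [0, 1):
    exactly one point per equal-width stratum for every variable."""
    rng = random.Random(seed)
    columns = []
    for _ in range(n_vars):
        # One value inside each of the n strata, then shuffle the column so
        # the variables are paired randomly rather than monotonically.
        col = [(k + rng.random()) / n_samples for k in range(n_samples)]
        rng.shuffle(col)
        columns.append(col)
    # Transpose: rows are samples, columns are variables.
    return [list(row) for row in zip(*columns)]

sample = latin_hypercube(n_samples=10, n_vars=3)
# Each variable has exactly one value in each interval [k/10, (k+1)/10).
```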

  6. Identifying and prioritizing the tools/techniques of knowledge management based on the Asian Productivity Organization Model (APO) to use in hospitals.

    Science.gov (United States)

    Khajouei, Hamid; Khajouei, Reza

    2017-12-01

    in the knowledge application step. The results showed that 12 out of the 26 tools in the APO model are appropriate for hospitals, of which 11 are significantly applicable and "storytelling" is marginally applicable. In this study, the preferred tools/techniques for implementing each of the five KM steps in hospitals are introduced.

  7. Monte Carlo Techniques for the Comprehensive Modeling of Isotopic Inventories in Future Nuclear Systems and Fuel Cycles. Final Report

    International Nuclear Information System (INIS)

    Paul P.H. Wilson

    2005-01-01

    The development of Monte Carlo techniques for isotopic inventory analysis has been explored in order to facilitate the modeling of systems with flowing streams of material through varying neutron irradiation environments. This represents a novel application of Monte Carlo methods to a field that has traditionally relied on deterministic solutions to systems of first-order differential equations. The Monte Carlo techniques were based largely on the known modeling techniques of Monte Carlo radiation transport, but with important differences, particularly in the area of variance reduction and efficiency measurement. The software that was developed to implement and test these methods now provides a basis for validating approximate modeling techniques that are available to deterministic methodologies. The Monte Carlo methods have been shown to be effective in reproducing the solutions of simple problems that are possible using both stochastic and deterministic methods. The Monte Carlo methods are also effective for tracking flows of materials through complex systems, including the ability to model removal of individual elements or isotopes in the system. Computational performance is best for flows that have characteristic times that are large fractions of the system lifetime. As the characteristic times become short, leading to thousands or millions of passes through the system, the computational performance drops significantly. Further research is underway to determine modeling techniques to improve performance within this range of problems. This report describes the technical development of Monte Carlo techniques for isotopic inventory analysis. The primary motivation for this solution methodology is the ability to model systems of flowing material being exposed to varying and stochastically varying radiation environments. The methodology was developed in three stages: analog methods, which model each atom with true reaction probabilities (Section 2); non-analog methods
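The analog approach described above assigns each atom its true (physical) event probabilities. Its simplest instance is pure radioactive decay: sample each atom's lifetime from an exponential distribution and count survivors, which should reproduce the analytic solution N0·exp(−λt) to within statistical noise. A minimal illustrative sketch (single nuclide, no transmutation chains or flowing streams):

```python
import math
import random

def analog_decay_survivors(n_atoms, decay_const, t_end, seed=1):
    """Analog Monte Carlo decay: each atom's lifetime is sampled from its
    true exponential distribution; return the number surviving to t_end."""
    rng = random.Random(seed)
    survivors = 0
    for _ in range(n_atoms):
        lifetime = rng.expovariate(decay_const)  # true reaction probability
        if lifetime > t_end:
            survivors += 1
    return survivors

n0, lam, t = 100_000, 1.0, 1.0
mc = analog_decay_survivors(n0, lam, t)
analytic = n0 * math.exp(-lam * t)  # ~36788
# The stochastic estimate agrees with the analytic (Bateman-type) solution
# to within statistical noise, roughly 0.4% (one sigma) at this sample size.
```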

  8. The preparation of aneurysm model in rabbits by vessel ligation and elastase-induced technique

    International Nuclear Information System (INIS)

    Lu Chuan; Xie Qianyu; Liu Linxiang

    2010-01-01

    Objective: To establish an aneurysm model in rabbits that closely resembles the human intracranial aneurysm in morphology, by means of vessel ligation together with an elastase-induced technique. Methods: Sixteen New Zealand white rabbits were used in this study. Distal carotid ligation and intraluminal elastase incubation were employed in ten rabbits (study group) to create an aneurysm on the right common carotid artery, while surgical suture of a segment of the left common carotid artery was carried out in six rabbits (control group) to establish the aneurysm model. DSA examination of the created aneurysms via femoral artery catheterization was performed at one week and at one month after surgery. The patency, morphology and pathology of the aneurysms were observed, and the results were statistically analyzed. Results: The aneurysms in both groups remained patent immediately after they were created. Angiography one week after surgery showed that all the aneurysms in the study group were patent, while in the control group only two aneurysms showed opacification with contrast medium and the remaining four were occluded. DSA at one month demonstrated that all the aneurysms in the study group remained patent, whereas the two previously patent aneurysms in the control group had become occluded. The mean width and length of the aneurysmal cavity in the study group immediately after the procedure were (3.70 ± 0.16) mm and (6.53 ± 0.65) mm respectively, which enlarged to (5.06 ± 0.31) mm and (9.0 ± 0.52) mm respectively one month after surgery; the difference in size was statistically significant (P < 0.05). Pathologically, almost complete absence of the internal elastic lamina and medial wall elastin of the aneurysms was observed. Conclusion: The aneurysm model prepared with vessel ligation together with the elastase-induced technique carries a high patency rate and grows spontaneously; moreover, its morphology is quite similar to the

  9. Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools

    CERN Document Server

    Ding, Steven X

    2013-01-01

    Guaranteeing a high system performance over a wide operating range is an important issue surrounding the design of automatic control systems with successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains: · new material on fault isolation and identification, and fault detection in feedback control loops; · extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; · addition of the continuously-stirred tank heater as a representative process-industrial benchmark; and · enhanced discussion of residual evaluation in stochastic processes. Model-based Fault Diagno...

  10. Modeling Techniques for a Computational Efficient Dynamic Turbofan Engine Model

    Directory of Open Access Journals (Sweden)

    Rory A. Roberts

    2014-01-01

    A transient two-stream engine model has been developed. Individual component models developed exclusively in MATLAB/Simulink, including the fan, high-pressure compressor, combustor, high-pressure turbine, low-pressure turbine, plenum volumes, and exit nozzle, have been combined to investigate the behavior of a turbofan two-stream engine. Special attention has been paid to developing transient capabilities throughout the model, increasing the fidelity of the physics models, eliminating algebraic constraints, and reducing simulation time by enabling the use of advanced numerical solvers. Reducing computation time is paramount for conducting future aircraft system-level design trade studies and optimization. The new engine model is simulated for a fuel perturbation and a specified mission while tracking critical parameters. These results, as well as the simulation times, are presented. The new approach significantly reduces the simulation time.
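
    A minimal sketch of the kind of component formulation the abstract describes: a plenum volume written as a pure ODE state, so that downstream components read a pressure rather than closing an algebraic loop. The equation form and all numerical values below are illustrative assumptions, not the paper's model.

    ```python
    import numpy as np

    def simulate_plenum(mdot_in, k_out, RT_over_V, p0, dt, n_steps):
        """Plenum volume as a pure ODE state: pressure integrates the mass-flow
        imbalance, so no algebraic constraint couples the components.
            dP/dt = (R*T/V) * (mdot_in - mdot_out),  with  mdot_out = k_out * P
        Explicit Euler is used here only for brevity; a production model would
        hand this state to an advanced stiff solver."""
        p = p0
        history = np.empty(n_steps)
        for i in range(n_steps):
            mdot_out = k_out * p
            p += dt * RT_over_V * (mdot_in - mdot_out)
            history[i] = p
        return history

    # Hypothetical, non-dimensional values: after an inflow step the pressure
    # relaxes toward the steady state P* = mdot_in / k_out = 2.0.
    hist = simulate_plenum(mdot_in=1.0, k_out=0.5, RT_over_V=10.0, p0=1.0,
                           dt=1e-3, n_steps=5000)
    ```

    The state converges exponentially to the steady-state pressure, which is the behavior a transient trade study would track after a fuel perturbation.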

  11. Optimization of the design of thick, segmented scintillators for megavoltage cone-beam CT using a novel, hybrid modeling technique

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Langechuan; Antonuk, Larry E., E-mail: antonuk@umich.edu; El-Mohri, Youcef; Zhao, Qihua; Jiang, Hao [Department of Radiation Oncology, University of Michigan, Ann Arbor, Michigan 48109 (United States)

    2014-06-15

    Purpose: Active matrix flat-panel imagers (AMFPIs) incorporating thick, segmented scintillators have demonstrated order-of-magnitude improvements in detective quantum efficiency (DQE) at radiotherapy energies compared to systems based on conventional phosphor screens. Such improved DQE values facilitate megavoltage cone-beam CT (MV CBCT) imaging at clinically practical doses. However, the MV CBCT performance of such AMFPIs is highly dependent on the design parameters of the scintillators. In this paper, optimization of the design of segmented scintillators was explored using a hybrid modeling technique which encompasses both radiation and optical effects. Methods: Imaging performance in terms of the contrast-to-noise ratio (CNR) and spatial resolution of various hypothetical scintillator designs was examined through a hybrid technique involving Monte Carlo simulation of radiation transport in combination with simulation of optical gain distributions and optical point spread functions. The optical simulations employed optical parameters extracted from a best fit to measurement results reported in a previous investigation of a 1.13 cm thick, 1016 μm pitch prototype BGO segmented scintillator. All hypothetical designs employed BGO material with a thickness and element-to-element pitch ranging from 0.5 to 6 cm and from 0.508 to 1.524 mm, respectively. In the CNR study, for each design, full tomographic scans of a contrast phantom incorporating various soft-tissue inserts were simulated at a total dose of 4 cGy. Results: Theoretical values for contrast, noise, and CNR were found to be in close agreement with empirical results from the BGO prototype, strongly supporting the validity of the modeling technique. CNR and spatial resolution for the various scintillator designs demonstrate complex behavior as scintillator thickness and element pitch are varied, with a clear trade-off between these two imaging metrics up to a thickness of ∼3 cm. Based on these results, an

  12. Teaching scientific concepts through simple models and social communication techniques

    International Nuclear Information System (INIS)

    Tilakaratne, K.

    2011-01-01

    For science education, it is important to demonstrate to students the relevance of scientific concepts in everyday life experiences. Although there are methods available for achieving this goal, teaching is more effective if a cultural flavor is also added to the techniques, so that the teacher and students can easily relate the subject matter to their surroundings. Furthermore, this would bridge the gap between science and day-to-day experiences in an effective manner. It could also help students to use science as a tool to solve problems they face, and consequently they would feel that science is a part of their lives. This paper describes how simple models and cultural communication techniques can be used effectively in demonstrating important scientific concepts to students at the secondary and higher secondary levels, drawing on two consecutive activities carried out at the Institute of Fundamental Studies (IFS), Sri Lanka. (author)

  13. Application of radiosurgical techniques to produce a primate model of brain lesions

    Directory of Open Access Journals (Sweden)

    Jun eKunimatsu

    2015-04-01

    Behavioral analysis of subjects with discrete brain lesions provides important information about the mechanisms of various brain functions. However, it is generally difficult to experimentally produce discrete lesions in deep brain structures. Here we show that a radiosurgical technique, which is used as an alternative treatment for brain tumors and vascular malformations, is applicable to creating non-invasive lesions in experimental animals for research in systems neuroscience. We delivered highly focused radiation (130–150 Gy at the isocenter) to the frontal eye field of macaque monkeys using a clinical linear accelerator (LINAC). The effects of irradiation were assessed by analyzing oculomotor performance along with magnetic resonance (MR) images before and up to 8 months following irradiation. In parallel with tissue edema indicated by MR images, deficits in saccadic and smooth pursuit eye movements were observed during the several days following irradiation. Although the initial signs of oculomotor deficits disappeared within a month, damage to the tissue and impaired eye movements gradually developed during the course of the subsequent 6 months. Postmortem histological examinations showed necrosis and hemorrhages within a large area of the white matter and, to a lesser extent, in the adjacent gray matter, centered at the irradiated target. These results indicate that the LINAC system is useful for making brain lesions in experimental animals, while suitable radiation parameters to generate more focused lesions need to be further explored. We propose the use of this radiosurgical technique for establishing animal models of brain lesions, and discuss its possible uses for functional neurosurgical treatments in humans.

  14. Study of 3D bathymetry modelling using LAPAN Surveillance Unmanned Aerial Vehicle 02 (LSU-02) photo data with stereo photogrammetry technique, Wawaran Beach, Pacitan, East Java, Indonesia

    Science.gov (United States)

    Sari, N. M.; Nugroho, J. T.; Chulafak, G. A.; Kushardono, D.

    2018-05-01

    The coast is an ecosystem with unique objects and phenomena. Aerial photo data with very high spatial resolution have extensive potential for covering coastal areas. One usable data source is LAPAN Surveillance UAV 02 (LSU-02) photo data, acquired in 2016 with a spatial resolution reaching 10 cm. This research aims to create an initial bathymetry model with the stereo photogrammetry technique using LSU-02 data. The bathymetry model was made by constructing a 3D model with the stereo photogrammetry technique, utilizing the dense point cloud created from the overlap of those photos. The result shows that a 3D bathymetry model can be built with the stereo photogrammetry technique, as can be seen from the surface and bathymetry transect profiles.

  15. A physiological production model for cacao : results of model simulations

    NARCIS (Netherlands)

    Zuidema, P.A.; Leffelaar, P.A.

    2002-01-01

    CASE2 is a physiological model for cocoa (Theobroma cacao L.) growth and yield. This report introduces the CAcao Simulation Engine for water-limited production in a non-technical way and presents simulation results obtained with the model.

  16. IMAGE ANALYSIS BASED ON EDGE DETECTION TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    纳瑟; 刘重庆

    2002-01-01

    A method that incorporates an edge detection technique, Markov Random Field (MRF) modeling, watershed segmentation and merging techniques is presented for performing image segmentation and edge detection tasks. It first applies the edge detection technique to obtain a Difference In Strength (DIS) map. An initial segmented result is obtained using K-means clustering and the minimum-distance rule. The region process is then modeled by an MRF to obtain an image that contains regions of different intensity. Gradient values are calculated and the watershed technique is applied. The DIS value is computed for each pixel to identify all the edges (weak or strong) in the image, yielding the DIS map. This map serves as prior knowledge about possible region boundaries for the subsequent MRF step, which produces an image containing all the edge and region information. In the MRF model, the gray level l at pixel location i in an image X depends on the gray levels of neighboring pixels. The segmentation results are improved using the watershed algorithm. After all pixels of the segmented regions are processed, a map of primitive regions with edges is generated, and the edge map is obtained through a merge process based on averaged region intensity values. Common edge detectors applied to the MRF-segmented image are used for comparison. The segmentation and edge detection result is one closed boundary per actual region in the image.
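
    The initial K-means step described above can be sketched on gray levels alone: cluster the intensities, then assign each pixel to its nearest cluster centre by the minimum-distance rule. This is a generic NumPy illustration, not the authors' implementation.

    ```python
    import numpy as np

    def kmeans_gray(image, k=2, n_iter=20, seed=0):
        """Initial segmentation step: cluster pixel gray levels with K-means
        and label each pixel by the nearest cluster centre."""
        rng = np.random.default_rng(seed)
        pixels = image.reshape(-1).astype(float)
        centres = rng.choice(pixels, size=k, replace=False)   # random init
        for _ in range(n_iter):
            # Minimum-distance assignment of every pixel to a centre.
            labels = np.argmin(np.abs(pixels[:, None] - centres[None, :]), axis=1)
            # Move each centre to the mean of its assigned pixels.
            for j in range(k):
                if np.any(labels == j):
                    centres[j] = pixels[labels == j].mean()
        return labels.reshape(image.shape), np.sort(centres)

    # Synthetic test image: dark left half (~50), bright right half (~200).
    img = np.zeros((8, 8))
    img[:, :4] = 50.0
    img[:, 4:] = 200.0
    labels, centres = kmeans_gray(img, k=2)
    ```

    In the full method, this label image would seed the MRF region process before the watershed and merge stages refine the boundaries.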

  17. Multidisciplinary Optimization of Tilt Rotor Blades Using Comprehensive Composite Modeling Technique

    Science.gov (United States)

    Chattopadhyay, Aditi; McCarthy, Thomas R.; Rajadas, John N.

    1997-01-01

    An optimization procedure is developed for addressing the design of composite tilt rotor blades. A comprehensive technique, based on a higher-order laminate theory, is developed for the analysis of the thick composite load-carrying sections, modeled as box beams, in the blade. The theory, which is based on a refined displacement field, is a three-dimensional model which approximates the elasticity solution so that the beam cross-sectional properties are not reduced to one-dimensional beam parameters. Both inplane and out-of-plane warping are included automatically in the formulation. The model can accurately capture the transverse shear stresses through the thickness of each wall while satisfying stress free boundary conditions on the inner and outer surfaces of the beam. The aerodynamic loads on the blade are calculated using the classical blade element momentum theory. Analytical expressions for the lift and drag are obtained based on the blade planform with corrections for the high lift capability of rotor blades. The aerodynamic analysis is coupled with the structural model to formulate the complete coupled equations of motion for aeroelastic analyses. Finally, a multidisciplinary optimization procedure is developed to improve the aerodynamic, structural and aeroelastic performance of the tilt rotor aircraft. The objective functions include the figure of merit in hover and the high speed cruise propulsive efficiency. Structural, aerodynamic and aeroelastic stability criteria are imposed as constraints on the problem. The Kreisselmeier-Steinhauser function is used to formulate the multiobjective function problem. The search direction is determined by the Broyden-Fletcher-Goldfarb-Shanno algorithm. The optimum results are compared with the baseline values and show significant improvements in the overall performance of the tilt rotor blade.
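
    The Kreisselmeier–Steinhauser aggregation mentioned above has a standard closed form, KS(g) = g_max + (1/ρ) ln Σ_i exp(ρ(g_i − g_max)), a smooth, conservative envelope of the maximum that a gradient-based search such as BFGS can handle. A small sketch with hypothetical constraint values:

    ```python
    import numpy as np

    def ks_function(g, rho=50.0):
        """Kreisselmeier-Steinhauser envelope of the values g_i:
            KS = g_max + (1/rho) * ln(sum_i exp(rho * (g_i - g_max)))
        A smooth upper bound on max(g) that tightens as rho grows, letting a
        gradient-based optimizer treat many objectives or constraints as one
        differentiable function. (Shifting by g_max keeps the exponentials
        numerically safe.)"""
        g = np.asarray(g, dtype=float)
        g_max = g.max()
        return g_max + np.log(np.sum(np.exp(rho * (g - g_max)))) / rho

    # Hypothetical normalized constraint values: KS slightly exceeds their max.
    g = [0.2, -0.1, 0.35, 0.0]
    ks_val = ks_function(g)
    ```

    With a large draw-down factor ρ the envelope hugs the active constraint; smaller ρ blends the constraints more, which is the usual tuning trade-off in multiobjective formulations like the one in this paper.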

  18. Modelling of Evaporator in Waste Heat Recovery System using Finite Volume Method and Fuzzy Technique

    Directory of Open Access Journals (Sweden)

    Jahedul Islam Chowdhury

    2015-12-01

    The evaporator is an important component in the Organic Rankine Cycle (ORC)-based Waste Heat Recovery (WHR) system, since the effectiveness of heat transfer in this device reflects on the efficiency of the system. When the WHR system operates under supercritical conditions, the heat transfer mechanism in the evaporator is difficult to predict because the thermo-physical properties of the fluid change with temperature. Although a conventional finite volume model can successfully capture those changes in the evaporator of the WHR process, the computation time for this method is high. To reduce the computation time, this paper develops a new fuzzy-based evaporator model and compares its performance with the finite volume method. The results show that the fuzzy technique can be applied to predict the output of the supercritical evaporator in the waste heat recovery system and can significantly reduce the required computation time. The proposed model, therefore, has the potential to be used in real-time control applications.
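
    A zero-order Sugeno-style rule base gives the flavor of such a fuzzy evaporator surrogate: triangular memberships fire a few crisp consequents, and the weighted average replaces a fine finite-volume evaluation. The breakpoints and consequents below are hypothetical stand-ins for values that would be fitted against finite-volume results, not the paper's rules.

    ```python
    import numpy as np

    def tri(x, a, b, c):
        """Triangular membership function with feet at a, c and peak at b."""
        return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

    def fuzzy_outlet_temp(t_in):
        """Zero-order Sugeno rules mapping inlet temperature (K) to outlet
        temperature (K): firing strengths weight crisp rule consequents."""
        rules = [
            (tri(t_in, 300.0, 350.0, 400.0), 420.0),  # "low" inlet  -> low outlet
            (tri(t_in, 350.0, 400.0, 450.0), 500.0),  # "mid" inlet  -> mid outlet
            (tri(t_in, 400.0, 450.0, 500.0), 560.0),  # "high" inlet -> high outlet
        ]
        w = np.array([r[0] for r in rules])
        z = np.array([r[1] for r in rules])
        return float(np.dot(w, z) / w.sum())

    t_out_mid = fuzzy_outlet_temp(400.0)      # only the "mid" rule fires fully
    t_out_between = fuzzy_outlet_temp(375.0)  # "low" and "mid" fire equally
    ```

    Evaluating such a rule base is a handful of arithmetic operations, which is why a fitted fuzzy surrogate can run inside a real-time control loop where a finite-volume sweep cannot.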

  19. Developing material for promoting problem-solving ability through bar modeling technique

    Science.gov (United States)

    Widyasari, N.; Rosiyanti, H.

    2018-01-01

    This study aimed at developing material for enhancing problem-solving ability through the bar modeling technique with thematic learning. Polya’s steps of problem-solving were chosen as the basis of the study. The method of the study was research and development. The subjects were fifteen fifth-grade students of the Lab-school FIP UMJ elementary school. Expert review and analysis of students’ responses were used to collect the data, which were then analyzed using qualitative descriptive and quantitative methods. The findings showed that the material in the theme “Selalu Berhemat Energi” was categorized as valid and practical. Validity was measured using aspects of language, content, and graphics. Based on the expert comments, the materials were easy to implement in the teaching-learning process. In addition, the students’ responses showed that the material was both interesting and easy to understand; thus, students gained more understanding in learning problem-solving.

  20. Development of evaluation method for software hazard identification techniques

    International Nuclear Information System (INIS)

    Huang, H. W.; Chen, M. H.; Shih, C.; Yih, S.; Kuo, C. T.; Wang, L. H.; Yu, Y. C.; Chen, C. W.

    2006-01-01

    This research evaluated currently applicable software hazard identification techniques, such as Preliminary Hazard Analysis (PHA), Failure Modes and Effects Analysis (FMEA), Fault Tree Analysis (FTA), Markov chain modeling, Dynamic Flow-graph Methodology (DFM), and simulation-based model analysis, and then determined evaluation indexes in view of their characteristics, including dynamic capability, completeness, achievability, detail, signal/noise ratio, complexity, and implementation cost. With this proposed method, analysts can evaluate various software hazard identification combinations for a specific purpose. According to the case study results, the traditional PHA + FMEA + FTA (with failure rate) + Markov chain modeling (with transfer rate) combination is not competitive, due to the dilemma of obtaining acceptable software failure rates. However, the systematic architecture of FTA and Markov chain modeling is still valuable for revealing the software fault structure. The system-centric techniques, such as DFM and simulation-based model analysis, show advantages in dynamic capability, achievability, detail, and signal/noise ratio; their disadvantages are completeness, complexity, and implementation cost. This evaluation method can be a platform for reaching common consensus among the stakeholders. As software hazard identification techniques evolve, the evaluation results could change. However, insight into the software hazard identification techniques is much more important than the numbers obtained by the evaluation. (authors)
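
    The Markov chain modeling (with transfer rates) mentioned above can be illustrated with a toy discrete-time chain over software states; every transition probability below is hypothetical.

    ```python
    import numpy as np

    # Hypothetical discrete-time Markov model of software states: each row gives
    # per-step transfer probabilities among OK -> degraded -> failed (absorbing).
    P = np.array([
        [0.98, 0.015, 0.005],   # OK
        [0.00, 0.950, 0.050],   # degraded
        [0.00, 0.000, 1.000],   # failed (absorbing)
    ])

    def state_distribution(p0, P, n_steps):
        """Propagate the state probability vector: p_{k+1} = p_k @ P."""
        p = np.asarray(p0, dtype=float)
        for _ in range(n_steps):
            p = p @ P
        return p

    p100 = state_distribution([1.0, 0.0, 0.0], P, 100)
    failure_prob_100 = p100[2]   # probability of being failed after 100 steps
    ```

    The dilemma the abstract points to shows up here directly: the chain is only as credible as the failure/transfer rates placed in P, which are hard to justify for software.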

  1. A comparison of geostatistically based inverse techniques for use in performance assessment analysis at the Waste Isolation Pilot Plant Site: Results from Test Case No. 1

    International Nuclear Information System (INIS)

    Zimmerman, D.A.; Gallegos, D.P.

    1993-10-01

    The groundwater flow pathway in the Culebra Dolomite aquifer at the Waste Isolation Pilot Plant (WIPP) has been identified as a potentially important pathway for radionuclide migration to the accessible environment. Consequently, uncertainties in the models used to describe flow and transport in the Culebra need to be addressed. A ''Geostatistics Test Problem'' is being developed to evaluate a number of inverse techniques that may be used for flow calculations in the WIPP performance assessment (PA). The Test Problem is actually a series of test cases, each being developed as a highly complex synthetic data set; the intent is for the ensemble of these data sets to span the range of possible conceptual models of groundwater flow at the WIPP site. The Test Problem analysis approach is to use a comparison of the probabilistic groundwater travel time (GWTT) estimates produced by each technique as the basis for the evaluation. Participants are given observations of head and transmissivity (possibly including measurement error) or other information such as drawdowns from pumping wells, and are asked to develop stochastic models of groundwater flow for the synthetic system. Cumulative distribution functions (CDFs) of groundwater flow (computed via particle tracking) are constructed using the head and transmissivity data generated through the application of each technique; one semi-analytical method generates the CDFs of groundwater flow directly. This paper describes the results from Test Case No. 1.
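
    Constructing a CDF of groundwater travel time from particle-tracking output reduces to an empirical CDF over the simulated travel times. The lognormal sample below is a synthetic stand-in for particle-tracking results, not WIPP data.

    ```python
    import numpy as np

    def empirical_cdf(samples):
        """Sort particle travel times and pair each with its cumulative
        probability i/n, giving the empirical CDF used to compare techniques."""
        x = np.sort(np.asarray(samples, dtype=float))
        probs = np.arange(1, len(x) + 1) / len(x)
        return x, probs

    # Stand-in for particle-tracking output: lognormal travel times (years)
    # with a median of 10,000 years, chosen purely for illustration.
    rng = np.random.default_rng(42)
    travel_times = rng.lognormal(mean=np.log(10_000.0), sigma=0.5, size=2000)
    times, cdf = empirical_cdf(travel_times)

    # Estimated probability that the travel time falls below 10,000 years:
    p_below_10k = cdf[np.searchsorted(times, 10_000.0)]
    ```

    Comparing techniques then amounts to overlaying these CDFs: two inverse methods that produce similar travel-time CDFs would be judged equivalent for the PA flow calculations.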

  2. Calibration techniques and results for the Portsmouth Cf shuffler

    International Nuclear Information System (INIS)

    Gross, J.C.; Wines, K.M.

    1993-01-01

    As environmental concerns over radioactive waste disposal continue to rise, the importance of californium shufflers as versatile waste monitoring and segregation instruments also continues to increase. The extent to which different amounts and types of materials can be measured by the shuffler is directly related to the extent of its calibration. As shufflers become more commonplace and their waste management uses expand, a wide-ranging and thorough calibration becomes critical. This paper presents the techniques used at the Portsmouth Gaseous Diffusion Plant for calibrating the shuffler to detect levels of U-235 in radioactive waste. While the calibration techniques are similar to those used by Los Alamos, the standards that were used were constructed somewhat differently so that geometric effects are maximized. Also presented are shuffler transmission measurements that are used to determine the matrix type and the corresponding calibration. A discussion of the calibration data is given, including specific aspects of the calibration such as overall range, high-end limits, and poly shielding range and usefulness

  3. Effects of Different Missing Data Imputation Techniques on the Performance of Undiagnosed Diabetes Risk Prediction Models in a Mixed-Ancestry Population of South Africa.

    Directory of Open Access Journals (Sweden)

    Katya L Masconi

    Imputation techniques used to handle missing data are based on the principle of replacement. It is widely advocated that multiple imputation is superior to other imputation methods; however, studies have suggested that simple methods for filling in missing data can be just as accurate as complex methods. The objective of this study was to implement a number of simple and more complex imputation methods, and to assess the effect of these techniques on the performance of undiagnosed diabetes risk prediction models during external validation. Data from the Cape Town Bellville-South cohort served as the basis for this study. Imputation methods and models were identified via recent systematic reviews. Model discrimination was assessed and compared using the C-statistic and non-parametric methods, before and after recalibration through simple intercept adjustment. The study sample consisted of 1256 individuals, of whom 173 were excluded due to previously diagnosed diabetes. Of the final 1083 individuals, 329 (30.4%) had missing data. Family history had the highest proportion of missing data (25%). Imputation of the outcome, undiagnosed diabetes, was highest in stochastic regression imputation (163 individuals). Overall, deletion resulted in the lowest model performance, while simple imputation yielded the highest C-statistic for the Cambridge Diabetes Risk model, Kuwaiti Risk model, Omani Diabetes Risk model and Rotterdam Predictive model. Multiple imputation yielded the highest C-statistic only for the Rotterdam Predictive model, and was matched by simpler imputation methods. Deletion was confirmed as a poor technique for handling missing data. However, despite the emphasized disadvantages of simpler imputation methods, this study showed that implementing these methods results in similar predictive utility for undiagnosed diabetes when compared to multiple imputation.
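
    The simplest of the compared strategies, complete-case deletion and single mean imputation, can be sketched on a toy predictor matrix (illustrative only, not the cohort data):

    ```python
    import numpy as np

    def listwise_delete(X):
        """Drop every row containing a missing value (complete-case analysis),
        shrinking the sample the prediction model is validated on."""
        return X[~np.isnan(X).any(axis=1)]

    def mean_impute(X):
        """Replace each missing entry with its column mean over observed
        values, keeping every individual in the validation sample."""
        X = X.copy()
        col_means = np.nanmean(X, axis=0)
        idx = np.where(np.isnan(X))
        X[idx] = np.take(col_means, idx[1])
        return X

    # Toy data: two predictors (e.g. age and a family-history flag),
    # with missing entries encoded as NaN.
    X = np.array([
        [25.0, 1.0],
        [40.0, np.nan],
        [np.nan, 0.0],
        [55.0, 1.0],
    ])
    X_del = listwise_delete(X)   # keeps only the 2 complete rows
    X_imp = mean_impute(X)       # keeps all 4 rows, gaps filled by column means
    ```

    The study's finding is visible in miniature here: deletion discards half the toy sample, while even naive imputation preserves it, which is one reason simple imputation can match multiple imputation in external validation.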

  4. Limited vs extended face-lift techniques: objective analysis of intraoperative results.

    Science.gov (United States)

    Litner, Jason A; Adamson, Peter A

    2006-01-01

    To compare the intraoperative outcomes of superficial musculoaponeurotic system plication, imbrication, and deep-plane rhytidectomy techniques. Thirty-two patients undergoing primary deep-plane rhytidectomy participated. Each hemiface in all patients was submitted sequentially to 3 progressively more extensive lifts, while other variables were standardized. Four major outcome measures were studied, including the extent of skin redundancy and the repositioning of soft tissues along the malar, mandibular, and cervical vectors of lift. The amount of skin excess was measured without tension from the free edge to a point over the intertragal incisure, along a plane overlying the jawline. Using a soft tissue caliper, repositioning was examined by measurement of preintervention and immediate postintervention distances from dependent points to fixed anthropometric reference points. The mean skin excesses were 10.4, 12.8, and 19.4 mm for the plication, imbrication, and deep-plane lifts, respectively. The greatest absolute soft tissue repositioning was noted along the jawline, with the least in the midface. Analysis revealed significant differences from baseline and between lift types for each of the studied techniques in each of the variables tested. These data support the use of the deep-plane rhytidectomy technique to achieve a superior intraoperative lift relative to comparator techniques.

  5. Coronary artery plaques: Cardiac CT with model-based and adaptive-statistical iterative reconstruction technique

    International Nuclear Information System (INIS)

    Scheffel, Hans; Stolzmann, Paul; Schlett, Christopher L.; Engel, Leif-Christopher; Major, Gyöngi Petra; Károlyi, Mihály; Do, Synho; Maurovich-Horvat, Pál; Hoffmann, Udo

    2012-01-01

    Objectives: To compare image quality of coronary artery plaque visualization at CT angiography with images reconstructed with filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR), and model-based iterative reconstruction (MBIR) techniques. Methods: The coronary arteries of three ex vivo human hearts were imaged by CT and reconstructed with FBP, ASIR and MBIR. Coronary cross-sectional images were co-registered between the different reconstruction techniques and assessed for qualitative and quantitative image quality parameters. Readers were blinded to the reconstruction algorithm. Results: A total of 375 triplets of coronary cross-sectional images were co-registered. Using MBIR, 26% of the images were rated as having excellent overall image quality, which was significantly better as compared to ASIR and FBP (4% and 13%, respectively, all p < 0.001). Qualitative assessment of image noise demonstrated a noise reduction by using ASIR as compared to FBP (p < 0.01) and further noise reduction by using MBIR (p < 0.001). The contrast-to-noise ratio (CNR) using MBIR was better as compared to ASIR and FBP (44 ± 19, 29 ± 15, 26 ± 9, respectively; all p < 0.001). Conclusions: Using MBIR improved image quality, reduced image noise and increased CNR as compared to the other available reconstruction techniques. This may further improve the visualization of coronary artery plaque and allow radiation reduction.
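
    One common definition of the contrast-to-noise ratio used in such comparisons is |mean(ROI) − mean(background)| / std(background); the study's exact definition may differ, and the numbers below are synthetic:

    ```python
    import numpy as np

    def cnr(roi, background):
        """Contrast-to-noise ratio under one common definition:
        |mean(ROI) - mean(background)| / std(background)."""
        roi = np.asarray(roi, dtype=float)
        bg = np.asarray(background, dtype=float)
        return abs(roi.mean() - bg.mean()) / bg.std()

    # Synthetic regions of interest (HU-like values, purely illustrative):
    # a "plaque" region against a "lumen" background with the same noise.
    rng = np.random.default_rng(1)
    plaque = rng.normal(130.0, 10.0, size=500)
    lumen = rng.normal(40.0, 10.0, size=500)
    value = cnr(plaque, lumen)   # roughly (130 - 40) / 10
    ```

    Halving the noise standard deviation, which is the effect iterative reconstruction aims for, would double this figure at fixed contrast.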

  6. Three-dimensional accuracy of different impression techniques for dental implants

    Directory of Open Access Journals (Sweden)

    Mohammadreza Nakhaei

    2015-01-01

    Background: Accurate impression making is an essential prerequisite for achieving a passive fit between the implant and the superstructure. The aim of this in vitro study was to compare the three-dimensional accuracy of open-tray and three closed-tray impression techniques. Materials and Methods: Three acrylic resin mandibular master models with four parallel implants were used: Biohorizons (BIO), Straumann tissue-level (STL), and Straumann bone-level (SBL). Forty-two putty/wash polyvinyl siloxane impressions of the models were made using open-tray and closed-tray techniques. Closed-tray impressions were made using snap-on (STL model), transfer coping (TC) (BIO model) and TC plus plastic cap (TC-Cap) (SBL model). The impressions were poured with type IV stone, and the positional accuracy of the implant analog heads in each dimension (x, y and z axes) and the linear displacement (ΔR) were evaluated using a coordinate measuring machine. Data were analyzed using ANOVA and post-hoc Tukey tests (α = 0.05). Results: The ΔR values of the snap-on technique were significantly lower than those of the TC and TC-Cap techniques (P < 0.001). No significant differences were found between closed- and open-tray impression techniques for STL in Δx, Δy, Δz and ΔR values (P = 0.444, P = 0.181, P = 0.835 and P = 0.911, respectively). Conclusion: Considering the limitations of this study, the snap-on implant-level impression technique resulted in greater three-dimensional accuracy than TC and TC-Cap, and was similar to the open-tray technique.
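
    The reported ΔR is presumably the resultant of the per-axis deviations, ΔR = √(Δx² + Δy² + Δz²); a one-line check with hypothetical millimetre values:

    ```python
    import math

    def linear_displacement(dx, dy, dz):
        """Resultant 3D positional error: Delta-R = sqrt(dx^2 + dy^2 + dz^2),
        the combined measure reported alongside the per-axis deviations."""
        return math.sqrt(dx**2 + dy**2 + dz**2)

    # Hypothetical per-axis analog deviations in millimetres.
    dr = linear_displacement(0.03, 0.04, 0.12)
    ```

    With these numbers the resultant displacement works out to 0.13 mm, showing how a single dominant axis (here Δz) drives the combined measure.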

  7. Modeling and simulation of PEM fuel cell's flow channels using CFD techniques

    International Nuclear Information System (INIS)

    Cunha, Edgar F.; Andrade, Alexandre B.; Robalinho, Eric; Bejarano, Martha L.M.; Linardi, Marcelo; Cekinski, Efraim

    2007-01-01

    Fuel cells are among the most important devices for obtaining electrical energy from hydrogen. The Proton Exchange Membrane Fuel Cell (PEMFC) consists of two important parts: the Membrane Electrode Assembly (MEA), where the reactions occur, and the flow field plates. The plates have many functions in a fuel cell: they distribute reactant gases (hydrogen and air or oxygen), conduct electrical current, remove heat and water from the electrodes and make the cell robust. The cost of the bipolar plates corresponds to up to 45% of the total stack costs. Computational Fluid Dynamics (CFD) is a very useful tool for simulating hydrogen and oxygen gas flow channels, reducing the costs of bipolar plate production and optimizing mass transport. Two types of flow channels were studied. The first type was a commercial plate by ELECTROCELL and the other was entirely designed at the Programa de Celula a Combustivel (IPEN/CNEN-SP); the experimental data were compared with modelling results. Optimum values for each set of variables were obtained, and model verification was carried out in order to show the feasibility of this technique for improving fuel cell efficiency. (author)

  8. Improving default risk prediction using Bayesian model uncertainty techniques.

    Science.gov (United States)

    Kazemi, Reza; Mosleh, Ali

    2012-11-01

    Credit risk is the potential exposure of a creditor to an obligor's failure or refusal to repay the debt in principal or interest. The potential of exposure is measured in terms of probability of default. Many models have been developed to estimate credit risk, with rating agencies dating back to the 19th century. They provide their assessment of probability of default and transition probabilities of various firms in their annual reports. Regulatory capital requirements for credit risk outlined by the Basel Committee on Banking Supervision have made it essential for banks and financial institutions to develop sophisticated models in an attempt to measure credit risk with higher accuracy. The Bayesian framework proposed in this article uses the techniques developed in physical sciences and engineering for dealing with model uncertainty and expert accuracy to obtain improved estimates of credit risk and associated uncertainties. The approach uses estimates from one or more rating agencies and incorporates their historical accuracy (past performance data) in estimating future default risk and transition probabilities. Several examples demonstrate that the proposed methodology can assess default probability with accuracy exceeding the estimations of all the individual models. Moreover, the methodology accounts for potentially significant departures from "nominal predictions" due to "upsetting events" such as the 2008 global banking crisis. © 2012 Society for Risk Analysis.
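
    A heavily simplified sketch of the idea, using Beta-Binomial conjugacy to blend an agency's estimate (weighted by a pseudo-count standing in for its historical accuracy) with observed defaults. This illustrates the Bayesian updating at the heart of the approach, not the article's full multi-model framework.

    ```python
    def posterior_default_probability(prior_pd, prior_strength, defaults, n_obligors):
        """Beta-Binomial update: encode an agency's probability-of-default
        estimate as a Beta prior with mean prior_pd and pseudo-count
        prior_strength (a crude proxy for the agency's past performance),
        then update with observed default counts."""
        alpha0 = prior_pd * prior_strength
        beta0 = (1.0 - prior_pd) * prior_strength
        alpha = alpha0 + defaults
        beta = beta0 + (n_obligors - defaults)
        return alpha / (alpha + beta)   # posterior mean default probability

    # Hypothetical case: the agency says PD = 2%, then 8 of 100 obligors default.
    pd_post = posterior_default_probability(prior_pd=0.02, prior_strength=50.0,
                                            defaults=8, n_obligors=100)
    ```

    The posterior mean lands between the agency's prior (2%) and the raw observed rate (8%), pulled toward the data in proportion to how much evidence the prior pseudo-count represents; a sudden "upsetting event" shows up as data overwhelming the prior.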

  9. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo techniques for analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall PMCMC provides a very compelling, computationally fast...... and efficient framework for estimation. These advantages are used, for instance, to estimate stochastic volatility models with leverage effect or with Student-t distributed errors. We also model changing time series characteristics of the US inflation rate by considering a heteroskedastic ARFIMA model where...

  10. Generalized image contrast enhancement technique based on the Heinemann contrast discrimination model

    Science.gov (United States)

    Liu, Hong; Nodine, Calvin F.

    1996-07-01

    This paper presents a generalized image contrast enhancement technique, which equalizes the perceived brightness distribution based on the Heinemann contrast discrimination model. It is based on the mathematically proven existence of a unique solution to a nonlinear equation, and is formulated with easily tunable parameters. The model uses a two-step log-log representation of luminance contrast between targets and surround in a luminous background setting. The algorithm consists of two nonlinear gray scale mapping functions that have seven parameters, two of which are adjustable Heinemann constants. Another parameter is the background gray level. The remaining four parameters are nonlinear functions of the gray-level distribution of the given image, and can be uniquely determined once the previous three are set. Tests have been carried out to demonstrate the effectiveness of the algorithm for increasing the overall contrast of radiology images. The traditional histogram equalization can be reinterpreted as an image enhancement technique based on the knowledge of human contrast perception. In fact, it is a special case of the proposed algorithm.
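
The abstract notes that traditional histogram equalization can be reinterpreted as a special case of the proposed algorithm. The classic equalization mapping it generalizes can be sketched as follows (illustrative only, not the Heinemann-model mapping):

```python
# Classic histogram equalization: each gray level is mapped through the
# normalized cumulative histogram, flattening the intensity distribution.
# This is the special case the paper generalizes, not its full algorithm.

def equalize(pixels, levels=256):
    """pixels: flat list of integer gray levels in [0, levels)."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for h in hist:
        total += h
        cdf.append(total)
    n = len(pixels)
    # Standard mapping: new level = round(cdf(v) / n * (levels - 1))
    return [round(cdf[p] / n * (levels - 1)) for p in pixels]

# A dark cluster at level 10 is stretched toward the bright end.
out = equalize([10, 10, 10, 200], levels=256)
```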

  11. Generalised Chou-Yang model and recent results

    International Nuclear Information System (INIS)

    Fazal-e-Aleem; Rashid, H.

    1996-01-01

    It is shown that most recent results of E710 and UA4/2 collaboration for the total cross section and ρ together with earlier measurements give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author)

  12. Laparoscopic anterior resection: new anastomosis technique in a pig model.

    Science.gov (United States)

    Bedirli, Abdulkadir; Yucel, Deniz; Ekim, Burcu

    2014-01-01

    Bowel anastomosis after anterior resection is one of the most difficult tasks to perform during laparoscopic colorectal surgery. This study aims to evaluate a new feasible and safe intracorporeal anastomosis technique after laparoscopic left-sided colon or rectum resection in a pig model. The technique was evaluated in 5 pigs. The OrVil device (Covidien, Mansfield, Massachusetts) was inserted into the anus and advanced proximally to the rectum. A 0.5-cm incision was made in the sigmoid colon, and the 2 sutures attached to its delivery tube were cut. After the delivery tube was evacuated through the anus, the tip of the anvil was removed through the perforation. The sigmoid colon was transected just distal to the perforation with an endoscopic linear stapler. The rectosigmoid segment to be resected was removed through the anus with a grasper, and distal transection was performed. A 25-mm circular stapler was inserted and combined with the anvil, and end-to-side intracorporeal anastomosis was then performed. We performed the technique in 5 pigs. Anastomosis required an average of 12 minutes. We observed that the proximal and distal donuts were completely removed in all pigs. No anastomotic air leakage was observed in any of the animals. This study shows the efficacy and safety of intracorporeal anastomosis with the OrVil device after laparoscopic anterior resection.

  13. Design and results of the ice sheet model initialisation initMIP-Greenland: an ISMIP6 intercomparison

    Directory of Open Access Journals (Sweden)

    H. Goelzer

    2018-04-01

    Full Text Available Earlier large-scale Greenland ice sheet sea-level projections (e.g. those run during the ice2sea and SeaRISE initiatives) have shown that ice sheet initial conditions have a large effect on the projections and give rise to important uncertainties. The goal of this initMIP-Greenland intercomparison exercise is to compare, evaluate, and improve the initialisation techniques used in the ice sheet modelling community and to estimate the associated uncertainties in modelled mass changes. initMIP-Greenland is the first in a series of ice sheet model intercomparison activities within ISMIP6 (the Ice Sheet Model Intercomparison Project for CMIP6), which is the primary activity within the Coupled Model Intercomparison Project Phase 6 (CMIP6) focusing on the ice sheets. Two experiments for the large-scale Greenland ice sheet have been designed to allow intercomparison between participating models of (1) the initial present-day state of the ice sheet and (2) the response in two idealised forward experiments. The forward experiments serve to evaluate the initialisation in terms of model drift (forward run without additional forcing) and in response to a large perturbation (prescribed surface mass balance anomaly); they should not be interpreted as sea-level projections. We present and discuss results that highlight the diversity of data sets, boundary conditions, and initialisation techniques used in the community to generate initial states of the Greenland ice sheet. We find good agreement across the ensemble for the dynamic response to surface mass balance changes in areas where the simulated ice sheets overlap, but differences arise from the initial size of the ice sheet. The model drift in the control experiment is reduced for models that participated in earlier intercomparison exercises.

  14. High frequency magnetic field technique: mathematical modelling and development of a full scale water fraction meter

    Energy Technology Data Exchange (ETDEWEB)

    Cimpan, Emil

    2004-09-15

    This work is concerned with the development of a new on-line measuring technique to be used in measurements of the water concentration in a two component oil/water or three component (i.e. multiphase) oil/water/gas flow. The technique is based on using non-intrusive coil detectors and experiments were performed both statically (medium at rest) and dynamically (medium flowing through a flow rig). The various coil detectors were constructed with either one or two coils and specially designed electronics were used. The medium was composed by air, machine oil, and water having different conductivity values, i.e. seawater and salt water with various conductivities (salt concentrations) such as 1 S/m, 4.9 S/m and 9.3 S/m. The experimental measurements done with the different mixtures were further used to mathematically model the physical principle used in the technique. This new technique is based on measuring the coil impedance and signal frequency at the self-resonance frequency of the coil to determine the water concentration in the mix. By using numerous coils it was found, experimentally, that generally both the coil impedance and the self-resonance frequency of the coil decreased as the medium conductivity increased. Both the impedance and the self-resonance frequency of the coil depended on the medium loss due to the induced eddy currents within the conductive media in the mixture, i.e. water. In order to detect relatively low values of the medium loss, the self-resonance frequency of the coil and also of the magnetic field penetrating the media should be relatively high (within the MHz range and higher). Therefore, the technique was called and referred to throughout the entire work as the high frequency magnetic field technique (HFMFT). To practically use the HFMFT, it was necessary to circumscribe an analytical frame to this technique. 
This was done by working out a mathematical model that relates the impedance and the self-resonance frequency of the coil to the

  15. Rotational Acceleration during Head Impact Resulting from Different Judo Throwing Techniques

    OpenAIRE

    MURAYAMA, Haruo; HITOSUGI, Masahito; MOTOZAWA, Yasuki; OGINO, Masahiro; KOYAMA, Katsuhiro

    2014-01-01

    Most severe head injuries in judo are reported as acute subdural hematoma. It is thus necessary to examine the rotational acceleration of the head to clarify the mechanism of head injuries. We determined the rotational acceleration of the head when the subject is thrown by judo techniques. One Japanese male judo expert threw an anthropomorphic test device using two throwing techniques, Osoto-gari and Ouchigari. Rotational and translational head accelerations were measured with and without an ...

  16. Next generation initiation techniques

    Science.gov (United States)

    Warner, Tom; Derber, John; Zupanski, Milija; Cohn, Steve; Verlinde, Hans

    1993-01-01

    Four-dimensional data assimilation strategies can generally be classified as either current or next generation, depending upon whether they are used operationally or not. Current-generation data-assimilation techniques are those that are presently used routinely in operational-forecasting or research applications. They can be classified into the following categories: intermittent assimilation, Newtonian relaxation, and physical initialization. It should be noted that these techniques are the subject of continued research, and their improvement will parallel the development of next generation techniques described by the other speakers. Next generation assimilation techniques are those that are under development but are not yet used operationally. Most of these procedures are derived from control theory or variational methods and primarily represent continuous assimilation approaches, in which the data and model dynamics are 'fitted' to each other in an optimal way. Another 'next generation' category is the initialization of convective-scale models. Intermittent assimilation systems use an objective analysis to combine all observations within a time window that is centered on the analysis time. Continuous first-generation assimilation systems are usually based on the Newtonian-relaxation or 'nudging' techniques. Physical initialization procedures generally involve the use of standard or nonstandard data to force some physical process in the model during an assimilation period. Under the topic of next-generation assimilation techniques, variational approaches are currently being actively developed. Variational approaches seek to minimize a cost or penalty function which measures a model's fit to observations, background fields and other imposed constraints. Alternatively, the Kalman filter technique, which is also under investigation as a data assimilation procedure for numerical weather prediction, can yield acceptable initial conditions for mesoscale models. The
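
The Newtonian-relaxation ("nudging") idea described above can be illustrated with a toy scalar model: an extra relaxation term G*(obs - x) continuously pulls the model state toward an observation during the assimilation window. The decay dynamics, gain, and time step below are arbitrary assumptions for illustration, not an operational scheme.

```python
# Toy sketch of nudging: forward-Euler integration of a scalar model
# with an added relaxation (nudging) term toward an observed value.

def nudge(x0, obs, g=0.5, dt=0.1, steps=200):
    """x0: initial model state; obs: observation; g: nudging gain."""
    x = x0
    for _ in range(steps):
        dxdt = -0.1 * x          # toy model dynamics (assumed)
        dxdt += g * (obs - x)    # nudging term pulls the state toward obs
        x += dt * dxdt
    return x

# The state settles at the balance point between dynamics and nudging,
# i.e. the fixed point of -0.1*x + g*(obs - x) = 0.
x_final = nudge(x0=10.0, obs=2.0)
```

With gain g = 0.5 and obs = 2, the fixed point is 0.5*2 / (0.1 + 0.5) = 1/0.6; a stronger gain would pin the state closer to the observation.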

  17. A Model for the Acceptance of Cloud Computing Technology Using DEMATEL Technique and System Dynamics Approach

    Directory of Open Access Journals (Sweden)

    seyyed mohammad zargar

    2018-03-01

    Full Text Available Cloud computing is a new method to provide computing resources and increase computing power in organizations. Despite its many benefits, this method has not been universally adopted because of obstacles, including security issues, that remain a concern for IT managers in organizations. In this paper, the general definition of cloud computing is presented. In addition, having reviewed previous studies, the researchers identified the variables affecting technology acceptance and, especially, cloud computing technology. Then, using the DEMATEL technique, the influence exerted and received by each variable was determined. The researchers also designed a model to show the dynamics at work in cloud computing technology using a system dynamics approach. The validity of the model was confirmed through evaluation methods for dynamic models using VENSIM software. Finally, based on different conditions of the proposed model, a variety of scenarios were designed and their implementation was simulated within the proposed model. The results showed that any increase in data security, government support and user training can lead to an increase in the adoption and use of cloud computing technology.

  18. Computed tomography landmark-based semi-automated mesh morphing and mapping techniques: generation of patient specific models of the human pelvis without segmentation.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Wright, David; Whyne, Cari Marisa

    2015-04-13

    Current methods for the development of pelvic finite element (FE) models generally are based upon specimen specific computed tomography (CT) data. This approach has traditionally required segmentation of CT data sets, which is time consuming and necessitates high levels of user intervention due to the complex pelvic anatomy. The purpose of this research was to develop and assess CT landmark-based semi-automated mesh morphing and mapping techniques to aid the generation and mechanical analysis of specimen-specific FE models of the pelvis without the need for segmentation. A specimen-specific pelvic FE model (source) was created using traditional segmentation methods and morphed onto a CT scan of a different (target) pelvis using a landmark-based method. The morphed model was then refined through mesh mapping by moving the nodes to the bone boundary. A second target model was created using traditional segmentation techniques. CT intensity based material properties were assigned to the morphed/mapped model and to the traditionally segmented target models. Models were analyzed to evaluate their geometric concurrency and strain patterns. Strains generated in a double-leg stance configuration were compared to experimental strain gauge data generated from the same target cadaver pelvis. CT landmark-based morphing and mapping techniques were efficiently applied to create a geometrically multifaceted specimen-specific pelvic FE model, which was similar to the traditionally segmented target model and better replicated the experimental strain results (R(2)=0.873). This study has shown that mesh morphing and mapping represents an efficient validated approach for pelvic FE model generation without the need for segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
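
The landmark-based morphing step can be illustrated in miniature: corresponding landmarks on the source and target anatomy define a spatial transform that carries every node of the source mesh toward the target geometry. The sketch below uses an exact 2-D affine transform determined by three landmark pairs, a far simpler stand-in for the authors' 3-D morphing and mapping pipeline.

```python
# Illustrative sketch (not the authors' pipeline): three source/target
# landmark pairs determine a 2-D affine transform exactly; the transform
# can then warp every source mesh node toward the target.

def affine_from_landmarks(src, dst):
    """src, dst: three (x, y) landmark pairs. Returns a warp function."""
    (x1, y1), (x2, y2), (x3, y3) = src

    def solve(v1, v2, v3):
        # Solve a*xi + b*yi + c = vi for the three landmarks (Cramer's rule).
        det = x1 * (y2 - y3) - y1 * (x2 - x3) + (x2 * y3 - x3 * y2)
        a = (v1 * (y2 - y3) - y1 * (v2 - v3) + (v2 * y3 - v3 * y2)) / det
        b = (x1 * (v2 - v3) - v1 * (x2 - x3) + (x2 * v3 - x3 * v2)) / det
        c = (x1 * (y2 * v3 - y3 * v2) - y1 * (x2 * v3 - x3 * v2)
             + v1 * (x2 * y3 - x3 * y2)) / det
        return a, b, c

    ax, bx, cx = solve(*(p[0] for p in dst))
    ay, by, cy = solve(*(p[1] for p in dst))
    return lambda x, y: (ax * x + bx * y + cx, ay * x + by * y + cy)

# Landmarks related by a scale of 2 plus a shift of (1, 1).
warp = affine_from_landmarks([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
                             [(1.0, 1.0), (3.0, 1.0), (1.0, 3.0)])
```

The subsequent mapping step in the paper then refines the warped nodes onto the bone boundary extracted from CT intensities.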

  19. Development of acoustic model-based iterative reconstruction technique for thick-concrete imaging

    Science.gov (United States)

    Almansouri, Hani; Clayton, Dwight; Kisner, Roger; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2016-02-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of MBIR for ultrasonic signals and shows reconstruction results for synthetically generated data.
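
The core MBIR idea, reconstruction as minimization of a model-based cost function, can be sketched with a toy quadratic problem; the hypothetical forward matrix and simple quadratic regularizer below stand in for the far richer acoustic forward model and prior used in the paper.

```python
# Toy sketch of model-based iterative reconstruction: minimize
# ||A x - y||^2 + lam * ||x||^2 by gradient descent, where A is a
# (hypothetical) linear forward model of the measurement process.

def mbir(A, y, lam=0.01, step=0.05, iters=500):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Residual r = A x - y between predicted and measured data.
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        # Gradient of the cost: 2 A^T r + 2 lam x.
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) + 2.0 * lam * x[j]
             for j in range(n)]
        x = [xj - step * gj for xj, gj in zip(x, g)]
    return x

A = [[1.0, 0.0], [0.0, 2.0]]      # toy forward model (assumed)
x_hat = mbir(A, y=[1.0, 4.0])     # regularized answer is close to [1, 2]
```

Practical MBIR replaces the gradient loop with more efficient optimizers (e.g. coordinate descent) and the quadratic prior with edge-preserving regularization.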

  20. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2016-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of MBIR for ultrasonic signals and shows reconstruction results for synthetically generated data.

  1. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2015-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. This paper documents the first implementation of MBIR for ultrasonic signals and shows reconstruction results for synthetically generated data.

  2. Feasibility of the use of optimisation techniques to calibrate the models used in a post-closure radiological assessment

    International Nuclear Information System (INIS)

    Laundy, R.S.

    1991-01-01

    This report addresses the feasibility of the use of optimisation techniques to calibrate the models developed for the impact assessment of a radioactive waste repository. The maximum likelihood method for improving parameter estimates is considered in detail, and non-linear optimisation techniques for finding solutions are reviewed. Applications are described for the calibration of groundwater flow, radionuclide transport and biosphere models. (author)
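
The maximum likelihood calibration idea can be sketched as follows: given observations and a parameterized model, pick the parameter value that maximizes the Gaussian log-likelihood of the observations. The linear model, noise level, data, and grid search below are illustrative assumptions standing in for the report's groundwater-flow models and non-linear optimisers.

```python
# Illustrative sketch of maximum-likelihood model calibration with
# Gaussian observation errors. The model, data, and grid are hypothetical.
import math

def log_likelihood(param, observations, model, sigma=1.0):
    return sum(-0.5 * ((obs - model(param, i)) / sigma) ** 2
               - math.log(sigma * math.sqrt(2.0 * math.pi))
               for i, obs in enumerate(observations))

def calibrate(observations, model, grid):
    """Grid search stands in for the non-linear optimisers the report reviews."""
    return max(grid, key=lambda p: log_likelihood(p, observations, model))

# Hypothetical linear model: prediction at location i is param * i.
model = lambda p, i: p * i
obs = [0.0, 2.1, 3.9, 6.0]                  # data generated near param = 2
grid = [i / 10.0 for i in range(0, 51)]
best = calibrate(obs, model, grid)
```

For Gaussian errors, maximizing this likelihood is equivalent to least-squares fitting of the model predictions to the observations.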

  3. Combining variational and model-based techniques to register PET and MR images in hand osteoarthritis

    International Nuclear Information System (INIS)

    Magee, Derek; Tanner, Steven F; Jeavons, Alan P; Waller, Michael; Tan, Ai Lyn; McGonagle, Dennis

    2010-01-01

    Co-registration of clinical images acquired using different imaging modalities and equipment is finding increasing use in patient studies. Here we present a method for registering high-resolution positron emission tomography (PET) data of the hand acquired using high-density avalanche chambers with magnetic resonance (MR) images of the finger obtained using a 'microscopy coil'. This allows the identification of the anatomical location of the PET radiotracer and thereby locates areas of active bone metabolism/'turnover'. Image fusion involving data acquired from the hand is demanding because rigid-body transformations cannot be employed to accurately register the images. The non-rigid registration technique that has been implemented in this study uses a variational approach to maximize the mutual information between images acquired using these different imaging modalities. A piecewise model of the fingers is employed to ensure that the methodology is robust and that it generates an accurate registration. Evaluation of the accuracy of the technique is tested using both synthetic data and PET and MR images acquired from patients with osteoarthritis. The method outperforms some established non-rigid registration techniques and results in a mean registration error that is less than approximately 1.5 mm in the vicinity of the finger joints.
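
The mutual-information similarity measure that such registration maximizes can be estimated from a joint intensity histogram; a minimal estimator (without the paper's variational maximisation machinery or piecewise finger model) looks like this:

```python
# Sketch of the mutual-information measure used in multimodal registration,
# estimated from a joint histogram of paired (binned) intensities.
import math
from collections import Counter

def mutual_information(img_a, img_b):
    """MI (in nats) between two equal-length lists of binned intensities."""
    n = len(img_a)
    joint = Counter(zip(img_a, img_b))
    pa, pb = Counter(img_a), Counter(img_b)
    mi = 0.0
    for (a, b), c in joint.items():
        # p(a,b) / (p(a) p(b)) simplifies to c*n / (count_a * count_b).
        mi += (c / n) * math.log(c * n / (pa[a] * pb[b]))
    return mi

# Perfectly dependent intensity pairs give maximal MI; independent ones ~0.
mi_dep = mutual_information([0, 0, 1, 1], [5, 5, 7, 7])
mi_ind = mutual_information([0, 1, 0, 1], [5, 5, 7, 7])
```

A registration algorithm adjusts the transform parameters so that this quantity, evaluated over overlapping PET and MR voxels, is maximized.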

  4. Results of steel containment vessel model test

    International Nuclear Information System (INIS)

    Luk, V.K.; Ludwigsen, J.S.; Hessheimer, M.F.; Komine, Kuniaki; Matsumoto, Tomoyuki; Costello, J.F.

    1998-05-01

    A series of static overpressurization tests of scale models of nuclear containment structures is being conducted by Sandia National Laboratories for the Nuclear Power Engineering Corporation of Japan and the US Nuclear Regulatory Commission. Two tests are being conducted: (1) a test of a model of a steel containment vessel (SCV) and (2) a test of a model of a prestressed concrete containment vessel (PCCV). This paper summarizes the conduct of the high pressure pneumatic test of the SCV model and the results of that test. Results of this test are summarized and are compared with pretest predictions performed by the sponsoring organizations and others who participated in a blind pretest prediction effort. Questions raised by this comparison are identified and plans for posttest analysis are discussed.

  5. Forecasting performances of three automated modelling techniques during the economic crisis 2007-2009

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    2014-01-01

    In this work we consider the forecasting of macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feed-forward autoregressive neural network models. What makes these models interesting in the present context is the fact...... that they form a class of universal approximators and may be expected to work well during exceptional periods such as major economic crises. Neural network models are often difficult to estimate, and we follow the idea of White (2006) of transforming the specification and nonlinear estimation problem...... The performances of these three model selectors are compared by looking at the accuracy of the forecasts of the estimated neural network models. We apply the neural network model and the three modelling techniques to monthly industrial production and unemployment series from the G7 countries and the four......

  6. Characterization technique for inhomogeneous 4H-SiC Schottky contacts: A practical model for high temperature behavior

    Science.gov (United States)

    Brezeanu, G.; Pristavu, G.; Draghici, F.; Badila, M.; Pascu, R.

    2017-08-01

    In this paper, a characterization technique for 4H-SiC Schottky diodes with varying levels of metal-semiconductor contact inhomogeneity is proposed. A macro-model, suitable for high-temperature evaluation of SiC Schottky contacts with discrete barrier height non-uniformity, is introduced in order to determine the temperature interval and bias domain where the electrical behavior of the devices can be described by thermionic emission theory (i.e. exhibits quasi-ideal performance). A minimal set of parameters is associated: the effective barrier height and the non-uniformity factor peff. Model-extracted parameters are discussed in comparison with literature-reported results based on existing inhomogeneity approaches, in terms of complexity and physical relevance. Special consideration was given to models based on a Gaussian distribution of barrier heights over the contact surface. The proposed methodology is validated by electrical characterization of nickel silicide Schottky contacts on silicon carbide (4H-SiC), where a discrete barrier distribution can be considered. The same method is applied to inhomogeneous Pt/4H-SiC contacts. The forward characteristics measured at different temperatures are accurately reproduced using this inhomogeneous barrier model. A quasi-ideal behavior is identified over intervals spanning 200 °C for all measured Schottky samples, with both Ni and Pt contact metals. A predictable exponential current-voltage variation over at least 2 orders of magnitude is also proven, with a stable barrier height and effective area for temperatures up to 400 °C. This application-oriented characterization technique is confirmed by using the model parameters to fit a SiC Schottky high-temperature sensor's response.
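
The thermionic-emission diode law against which quasi-ideal behavior is judged can be sketched directly; the Richardson constant value, contact area, barrier height, and bias points below are illustrative assumptions, and the inhomogeneity macro-model itself (effective barrier plus peff) is not reproduced.

```python
# Sketch of the thermionic-emission (TE) forward current of a Schottky
# diode: I = A * A** * T^2 * exp(-phi_B / kT) * (exp(V / n kT) - 1).
# Constants and bias values are illustrative, not the paper's data.
import math

K_B = 8.617e-5          # Boltzmann constant, eV/K
A_STAR = 146.0          # Richardson constant often cited for 4H-SiC, A/cm^2/K^2

def te_current(v, t_kelvin, phi_b, n=1.0, area=1e-3):
    """Forward current (A) for barrier phi_b (eV) and ideality factor n."""
    kt = K_B * t_kelvin
    i_s = area * A_STAR * t_kelvin ** 2 * math.exp(-phi_b / kt)
    return i_s * (math.exp(v / (n * kt)) - 1.0)

# The saturation current grows steeply with temperature, which is what
# makes the barrier height extractable from I-V curves at several T.
i_300 = te_current(0.3, 300.0, phi_b=1.3)
i_600 = te_current(0.3, 600.0, phi_b=1.3)
```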

  7. Integration of computational modeling and experimental techniques to design fuel surrogates

    DEFF Research Database (Denmark)

    Choudhury, H.A.; Intikhab, S.; Kalakul, Sawitree

    2017-01-01

    performance. A simplified alternative is to develop surrogate fuels that have fewer compounds and emulate certain important desired physical properties of the target fuels. Six gasoline blends were formulated through a computer aided model based technique “Mixed Integer Non-Linear Programming” (MINLP...... Virtual Process-Product Design Laboratory (VPPD-Lab) are applied onto the defined compositions of the surrogate gasoline. The aim is to primarily verify the defined composition of gasoline by means of VPPD-Lab. ρ, η and RVP are calculated with more accuracy and constraints such as distillation curve...... and flash point on the blend design are also considered. A post-design experiment-based verification step is proposed to further improve and fine-tune the “best” selected gasoline blends following the computation work. Here, advanced experimental techniques are used to measure the RVP, ρ, η, RON...

  8. Generalised Chou-Yang model and recent results

    International Nuclear Information System (INIS)

    Fazal-e-Aleem; Rashid, H.

    1995-09-01

    It is shown that most recent results of E710 and UA4/2 collaboration for the total cross section and ρ together with earlier measurements give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author). 16 refs, 2 figs

  9. Generalised Chou-Yang model and recent results

    Energy Technology Data Exchange (ETDEWEB)

    Fazal-e-Aleem [International Centre for Theoretical Physics, Trieste (Italy); Rashid, H. [Punjab Univ., Lahore (Pakistan). Centre for High Energy Physics

    1996-12-31

    It is shown that most recent results of E710 and UA4/2 collaboration for the total cross section and {rho} together with earlier measurements give good agreement with measurements for the differential cross section at 546 and 1800 GeV within the framework of Generalised Chou-Yang model. These results are also compared with the predictions of other models. (author) 16 refs.

  10. Investigating the probability of detection of typical cavity shapes through modelling and comparison of geophysical techniques

    Science.gov (United States)

    James, P.

    2011-12-01

    With a growing need for housing in the U.K., the government has proposed increased development of brownfield sites. However, old mine workings and natural cavities represent a potential hazard before, during and after construction on such sites, and add further complication to subsurface parameters. Cavities are hence a limitation to certain redevelopment, and their detection is an ever more important consideration. The current standard technique for cavity detection is a borehole grid, which is intrusive, non-continuous, slow and expensive. A new robust investigation standard for the detection of cavities is sought, and geophysical techniques offer an attractive alternative. Geophysical techniques have previously been utilised successfully in the detection of cavities in various geologies, but still have an uncertain reputation in the engineering industry. Engineers are unsure of the techniques and are inclined to rely on well-known methods rather than utilise new technologies. Bad experiences with geophysics are commonly due to the indiscriminate choice of particular techniques. It is imperative that a geophysical survey is designed with the specific site and target in mind at all times, with the ability and judgement to rule out some, or all, techniques. To this author's knowledge no comparative software exists to aid technique choice. Also, previous modelling software limits the shapes of bodies, and hence typical cavity shapes are not represented. Here, we introduce 3D modelling software (Matlab) which computes and compares the response to various cavity targets from a range of techniques (gravity, gravity gradient, magnetic, magnetic gradient and GPR). Typical near-surface cavity shapes are modelled, including shafts, bellpits, various lining and capping materials, and migrating voids. The probability of cavity detection is assessed in typical subsurface and noise conditions across a range of survey parameters. 
Techniques can be compared and the limits of detection distance
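
A flavor of the forward modelling such comparison software performs: the vertical gravity anomaly over a buried spherical cavity has a simple closed form (point-mass approximation). The depth, radius, and density contrast below are illustrative values, and real surveys deal with the far richer shapes listed in the abstract.

```python
# Sketch of the gravity forward model for a buried spherical void:
# g_z(x) = G * M * z / (x^2 + z^2)^(3/2), with M the anomalous mass
# (negative for an air-filled cavity in rock). Values are illustrative.
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def cavity_gravity_anomaly(x, depth, radius, delta_rho=-2000.0):
    """Vertical anomaly (m/s^2) at surface offset x from the sphere centre.
    delta_rho is the density contrast in kg/m^3 (negative for a void)."""
    mass = delta_rho * (4.0 / 3.0) * math.pi * radius ** 3
    return G * mass * depth / (x ** 2 + depth ** 2) ** 1.5

# The anomaly is strongest directly above the cavity and negative for a void,
# which is what sets the detection threshold against instrument noise.
g0 = cavity_gravity_anomaly(0.0, depth=5.0, radius=2.0)
g10 = cavity_gravity_anomaly(10.0, depth=5.0, radius=2.0)
```

Comparing the peak anomaly against a survey's noise floor is one way to estimate the probability of detection for a given cavity and technique.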

  11. Korean national QPE technique development: Analysis of current QPE results and future plan

    Science.gov (United States)

    Cha, Joo Wan

    2013-04-01

    Korea Meteorological Administration (KMA) has developed a Real-time ADjusted Radar-AWS (Automatic Weather Station) Rainrate (RAD-RAR) system using eleven radars over South Korea. The real-time procedure of the RAD-RAR system consists of four steps: 1) quality control of volumetric reflectivity for each radar, 2) computation of the 10-min rain gauge rainfall within each radar's range, 3) real-time (10-min updated) rainfall estimation by a Z-R relationship that minimizes the difference between the 1.5-km constant altitude plan position indicator and rain gauge rainfall, based on the Window Probability Matching Method (WPMM), together with a real-time bias correction of RAD-RAR conducted every 10 minutes for each radar, and 4) composition of the 11-radar estimated rainfall data. In addition, a local gauge correction method is applied in the RAD-RAR system. A correlation coefficient of R2 = 0.81 was obtained between the daily accumulated observed and RAD-RAR estimated rainfall in 2012. We aim to develop a new QPE system using multi-sensor (radar, rain gauge, numerical model output, and lightning) data to further improve the Korean national QPE system. We built a prototype QPE system in 2012 and are now improving its detailed techniques. In the future, the new high-performance QPE system will include a dual-polarization radar observation technique to provide more accurate and valuable national QPE data
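
The Z-R step at the heart of such radar rainfall estimation can be sketched directly; the Marshall-Palmer coefficients below are a textbook default used for illustration, whereas RAD-RAR fits the relationship in real time against gauge data.

```python
# Sketch of radar QPE's Z-R step: reflectivity in dBZ is converted to a
# rain rate via Z = a * R**b. Marshall-Palmer coefficients (a=200, b=1.6)
# are used here only as illustrative defaults.

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    """Invert Z = a * R**b, where Z (mm^6/m^3) comes from dBZ = 10*log10(Z).
    Returns rain rate R in mm/h."""
    z = 10.0 ** (dbz / 10.0)
    return (z / a) ** (1.0 / b)

r_light = rain_rate_from_dbz(20.0)   # stratiform-scale rate, mm/h
r_heavy = rain_rate_from_dbz(45.0)   # convective-scale rate, mm/h
```

Techniques like WPMM effectively replace the fixed (a, b) pair with a mapping matched to the observed gauge rainfall distribution.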

  12. Biodegradable Magnesium Stent Treatment of Saccular Aneurysms in a Rat Model - Introduction of the Surgical Technique.

    Science.gov (United States)

    Nevzati, Edin; Rey, Jeannine; Coluccia, Daniel; D'Alonzo, Donato; Grüter, Basil; Remonda, Luca; Fandino, Javier; Marbacher, Serge

    2017-10-01

    The steady progress in the armamentarium of techniques available for endovascular treatment of intracranial aneurysms requires affordable and reproducible experimental animal models to test novel embolization materials such as stents and flow diverters. The aim of the present project was to design a safe, fast, and standardized surgical technique for stent-assisted embolization of saccular aneurysms in a rat model. Saccular aneurysms were created from an arterial graft from the descending aorta. The aneurysms were microsurgically transplanted through end-to-side anastomosis to the infrarenal abdominal aorta of a syngeneic male Wistar rat weighing >500 g. Following aneurysm anastomosis, aneurysm embolization was performed using balloon-expandable magnesium stents (2.5 mm x 6 mm). The stent system was introduced retrogradely from the lower abdominal aorta using a modified Seldinger technique. Following a pilot series of 6 animals, a total of 67 rats were operated on according to established standard operating procedures. Mean surgery time, mean anastomosis time, and mean suturing time of the artery puncture site were 167 ± 22 min, 26 ± 6 min and 11 ± 5 min, respectively. The mortality rate was 6% (n=4). The morbidity rate was 7.5% (n=5), and in-stent thrombosis was found in 4 cases (n=2 early, n=2 late in-stent thrombosis). The results demonstrate the feasibility of standardized stent occlusion of saccular sidewall aneurysms in rats, with low rates of morbidity and mortality. This stent embolization procedure provides the opportunity to study novel concepts of stent- or flow-diverter-based devices as well as the molecular aspects of healing.

  13. Near infrared spectrometric technique for testing fruit quality: optimisation of regression models using genetic algorithms

    Science.gov (United States)

    Isingizwe Nturambirwe, J. Frédéric; Perold, Willem J.; Opara, Umezuruike L.

    2016-02-01

    Near infrared (NIR) spectroscopy has gained extensive use in quality evaluation. It is arguably one of the most advanced spectroscopic tools in non-destructive quality testing of foodstuffs, from measurement to data analysis and interpretation. NIR spectral data are interpreted through means often involving multivariate statistical analysis, sometimes associated with optimisation techniques for model improvement. The objective of this research was to explore the extent to which genetic algorithms (GA) can be used to enhance model development for predicting fruit quality. Apple fruits were used, and NIR spectra in the range from 12000 to 4000 cm-1 were acquired on both bruised and healthy tissues, with different degrees of mechanical damage. GAs were used in combination with partial least squares (PLS) regression methods to develop bruise severity prediction models, which were compared to PLS models developed using the full NIR spectrum. A classification model was developed, which clearly separated bruised from unbruised apple tissue. GAs helped improve prediction models by over 10% in comparison with full-spectrum-based models, as evaluated in terms of root mean square error of cross-validation (RMSECV). PLS models to predict internal quality, such as sugar content and acidity, were developed and compared to versions optimised by the genetic algorithm. Overall, the results highlighted the potential of the GA method to improve the speed and accuracy of fruit quality prediction.
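    The GA-plus-regression idea can be sketched compactly; the toy below evolves binary masks that select informative "wavelengths" for an ordinary least-squares model on synthetic data (the paper couples GA with PLS on real NIR spectra, so the data, fitness function and GA settings here are all invented):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "spectra": 60 samples x 50 wavelengths; only 5 bands carry signal
X = rng.normal(size=(60, 50))
true_idx = [3, 11, 19, 27, 35]
y = X[:, true_idx].sum(axis=1) + 0.1 * rng.normal(size=60)

def rmse_of_mask(mask):
    """Fit least squares on the selected wavelengths; return fit RMSE
    (a real application would cross-validate to avoid overfitting)."""
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return np.inf
    A = np.c_[X[:, cols], np.ones(len(X))]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sqrt(np.mean((A @ coef - y) ** 2)))

def evolve(pop_size=40, n_gen=30, p_mut=0.02):
    pop = rng.random((pop_size, X.shape[1])) < 0.2   # sparse initial masks
    for _ in range(n_gen):
        fit = np.array([rmse_of_mask(m) for m in pop])
        elite = pop[np.argsort(fit)[: pop_size // 2]]  # truncation selection
        # uniform crossover between random elite parents
        pa = elite[rng.integers(len(elite), size=pop_size)]
        pb = elite[rng.integers(len(elite), size=pop_size)]
        children = np.where(rng.random(pa.shape) < 0.5, pa, pb)
        children ^= rng.random(children.shape) < p_mut   # bit-flip mutation
        pop = children
        pop[0] = elite[0]                                # elitism
    fit = np.array([rmse_of_mask(m) for m in pop])
    return pop[np.argmin(fit)], float(fit.min())

best_mask, best_rmse = evolve()
```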

  14. Application of class-modelling techniques to infrared spectra for analysis of pork adulteration in beef jerky.

    Science.gov (United States)

    Kuswandi, Bambang; Putri, Fitra Karima; Gani, Agus Abdul; Ahmad, Musa

    2015-12-01

    The use of chemometrics to analyse infrared spectra to predict pork adulteration in beef jerky (dendeng) was explored. In the first step, pork was blended into the beef jerky formulation at 5-80 % levels. The samples were then powdered and divided into a training set and a test set. In the second step, the spectra of the two sets were recorded by Fourier Transform Infrared (FTIR) spectroscopy using an attenuated total reflection (ATR) cell on the basis of spectral data in the frequency region 4000-700 cm(-1). The spectra were organised into four data sets, i.e. (a) spectra in the whole region as data set 1; (b) spectra in the fingerprint region (1500-600 cm(-1)) as data set 2; (c) spectra in the whole region with treatment as data set 3; and (d) spectra in the fingerprint region with treatment as data set 4. In the third step, chemometric analysis was carried out on the data sets using three class-modelling techniques (i.e. LDA, SIMCA, and SVM). Finally, the model giving the best results on the adulteration analysis of the samples was selected and compared with the ELISA method. From the chemometric results, the LDA model on data set 1 was found to be the best model, since it could classify and predict the samples tested with 100 % accuracy. The LDA model was applied to real samples of beef jerky marketed in Jember, and the results showed that the LDA model developed was in good agreement with the ELISA method.
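    As a minimal sketch of the class-modelling step, a two-class Fisher LDA can be written directly in numpy; the "spectra" below are synthetic stand-ins for the FTIR data, not the paper's measurements:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for ATR-FTIR spectra: class 0 = pure beef jerky, class 1 =
# pork-adulterated; two informative "absorbance" features are shifted between
# classes, the remaining dimensions are noise. Purely synthetic numbers.
def make_samples(n, shift):
    data = rng.normal(size=(n, 6))
    data[:, :2] += shift
    return data

X0, X1 = make_samples(30, 0.0), make_samples(30, 3.0)

# Fisher LDA: project onto w = Sw^-1 (mu1 - mu0), threshold midway between
# the projected class means
mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # pooled scatter
w = np.linalg.solve(Sw, mu1 - mu0)
c = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()

def predict(X):
    """1 = flagged as adulterated."""
    return (X @ w > c).astype(int)

accuracy = 0.5 * ((predict(X0) == 0).mean() + (predict(X1) == 1).mean())
```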

  15. A comparison of linear and nonlinear statistical techniques in performance attribution.

    Science.gov (United States)

    Chan, N H; Genovese, C R

    2001-01-01

    Performance attribution is usually conducted under the linear framework of multifactor models. Although commonly used by practitioners in finance, linear multifactor models are known to be less than satisfactory in many situations. After a brief survey of nonlinear methods, nonlinear statistical techniques are applied to performance attribution of a portfolio constructed from a fixed universe of stocks, using factors derived from some commonly used cross-sectional linear multifactor models. By rebalancing this portfolio monthly, the cumulative returns for procedures based on the standard linear multifactor model and three nonlinear techniques (model selection, additive models, and neural networks) are calculated and compared. It is found that the first two nonlinear techniques, especially in combination, outperform the standard linear model. The results in the neural-network case are inconclusive because of the great variety of possible models. Although these methods are more complicated and may require some tuning, toolboxes are developed and suggestions on calibration are proposed. This paper demonstrates the usefulness of modern nonlinear statistical techniques in performance attribution.
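    The gap between a linear factor model and an additive nonlinear one is easy to demonstrate on synthetic data; the sketch below, with an invented quadratic exposure effect, compares out-of-sample RMSE of a straight-line fit against a one-term additive basis expansion (a minimal stand-in for the additive models discussed):

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented factor-return data with a nonlinear (quadratic) exposure effect
x = rng.normal(size=400)
y = 0.5 * x + 0.8 * x ** 2 + 0.3 * rng.normal(size=400)
x_tr, x_te, y_tr, y_te = x[:300], x[300:], y[:300], y[300:]

def fit_predict(design_tr, design_te, y_tr):
    coef, *_ = np.linalg.lstsq(design_tr, y_tr, rcond=None)
    return design_te @ coef

lin_tr = np.c_[np.ones_like(x_tr), x_tr]          # linear model
lin_te = np.c_[np.ones_like(x_te), x_te]
add_tr = np.c_[np.ones_like(x_tr), x_tr, x_tr ** 2]  # additive expansion
add_te = np.c_[np.ones_like(x_te), x_te, x_te ** 2]

def rmse(pred, truth):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

rmse_lin = rmse(fit_predict(lin_tr, lin_te, y_tr), y_te)
rmse_add = rmse(fit_predict(add_tr, add_te, y_tr), y_te)
```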

  16. The results of surgical treatment of oviductal infertility with use of microsurgical technique

    International Nuclear Information System (INIS)

    Cislo, M.; Murawski, M.; Palczynski, B.

    1993-01-01

    Forty women were operated on for infertility due to an oviductal factor. This kind of infertility had previously been diagnosed by hysterosalpingography and was then verified in 22 cases (55%) by laparoscopy combined with chromotubation. The operations were carried out using microsurgical techniques and instruments. At the same time, prophylaxis against postoperative intraperitoneal adhesions was applied. Seven pregnancies were obtained, a success rate of 17.5%. This outcome is comparable with the results presented by many other world centers of gynecological microsurgery. (author)

  17. The Danish national passenger model - model specification and results

    DEFF Research Database (Denmark)

    Rich, Jeppe; Hansen, Christian Overgaard

    2016-01-01

    The paper describes the structure of the new Danish National Passenger model and provides on this basis a general discussion of large-scale model design, cost-damping and model validation. The paper aims at providing three main contributions to the existing literature. Firstly, at the general level......, the paper provides a description of a large-scale forecast model with a discussion of the linkage between population synthesis, demand and assignment. Secondly, the paper gives specific attention to model specification and in particular choice of functional form and cost-damping. Specifically we suggest...... a family of logarithmic spline functions and illustrate how it is applied in the model. Thirdly and finally, we evaluate model sensitivity and performance by evaluating the distance distribution and elasticities. In the paper we present results where the spline-function is compared with more traditional...

  18. Passive Super-Low Frequency electromagnetic prospecting technique

    Science.gov (United States)

    Wang, Nan; Zhao, Shanshan; Hui, Jian; Qin, Qiming

    2017-03-01

    The Super-Low Frequency (SLF) electromagnetic prospecting technique, adopted as a non-imaging remote sensing tool for depth sounding, is systematically proposed for subsurface geological survey. In this paper, as a first step, we propose and theoretically illustrate natural-source magnetic amplitudes as SLF responses. In order to directly calculate multi-dimensional theoretical SLF responses, modeling algorithms were developed and evaluated using the finite difference method. The theoretical results of three-dimensional (3-D) models show that the average normalized SLF magnetic amplitude responses were numerically stable and appropriate for practical interpretation. To explore the depth resolution, three-layer models were configured. The modeling results prove that the SLF technique is more sensitive to conductive objective layers than to highly resistive ones, with the SLF responses of conductive objective layers showing clearly rising amplitudes in the low frequency range. Afterwards, we proposed an improved Frequency-Depth transformation based on Bostick inversion to realize the depth sounding by empirically adjusting two parameters. The SLF technique has already been successfully applied in geothermal exploration and coalbed methane (CBM) reservoir interpretation, which demonstrates that the proposed methodology is effective in revealing low-resistivity distributions. Furthermore, it significantly contributes to reservoir identification with electromagnetic radiation anomaly extraction. Meanwhile, the SLF interpretation results are in accordance with the dynamic production status of CBM reservoirs, meaning it could provide an economical, convenient and promising method for exploring and monitoring subsurface geo-objects.
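    The classic Bostick transform underlying the improved Frequency-Depth transformation maps each (frequency, apparent resistivity) pair to a depth and resistivity estimate; the sketch below uses the standard textbook formulas, not the authors' empirically adjusted two-parameter version:

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def bostick_depth(rho_a, freq):
    """Classic Bostick depth of investigation (m) for apparent resistivity
    rho_a (ohm*m) at frequency freq (Hz): D = sqrt(rho_a / (mu0 * omega))."""
    omega = 2.0 * math.pi * freq
    return math.sqrt(rho_a / (MU0 * omega))

def bostick_resistivity(rho_a, m):
    """Bostick resistivity from rho_a and the local log-log slope
    m = d(log rho_a)/d(log T) of the sounding curve (m = 0 for a half-space)."""
    return rho_a * (1.0 + m) / (1.0 - m)

# 100 ohm*m half-space sampled at 10 Hz maps to a depth of roughly 1.1 km
depth = bostick_depth(100.0, 10.0)
```

Lower frequencies map to greater depths, which is why sounding curves sweep frequency to build a resistivity-depth profile.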

  19. Storm-time ring current: model-dependent results

    Directory of Open Access Journals (Sweden)

    N. Yu. Ganushkina

    2012-01-01

    Full Text Available The main point of the paper is to investigate how much the modeled ring current depends on the representations of magnetic and electric fields and on the boundary conditions used in simulations. Two storm events, one moderate (SymH minimum of −120 nT on 6–7 November 1997 and one intense (SymH minimum of −230 nT on 21–22 October 1999, are modeled. A rather simple ring current model is employed, namely, the Inner Magnetosphere Particle Transport and Acceleration model (IMPTAM, in order to make the results most evident. Four different magnetic field and two electric field representations and four boundary conditions are used. We find that different combinations of the magnetic and electric field configurations and boundary conditions result in very different modeled ring currents, and, therefore, the physical conclusions based on simulation results can differ significantly. A time-dependent boundary outside of 6.6 RE makes it possible to take into account the particles in the transition region (between dipole and stretched field lines) forming the partial ring current and the near-Earth tail current in that region. Calculating the model SymH* by the Biot-Savart law instead of the widely used Dessler-Parker-Sckopke (DPS) relation gives larger and more realistic values, since the currents are calculated in regions with a nondipolar magnetic field. Therefore, the boundary location and the method of SymH* calculation are of key importance for ring current data-model comparisons to be correctly interpreted.
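    The contrast drawn above between Biot-Savart and DPS estimates can be illustrated with the simplest possible Biot-Savart case, the field at the centre of a circular current loop (B = mu0*I/(2r)); the current magnitude and radius below are invented round numbers, not model output:

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability, H/m
RE = 6.371e6           # Earth radius, m

def ring_current_db(total_current, radius_re):
    """Magnetic perturbation (nT) at Earth's center from a circular westward
    ring current of total_current amperes at radius_re Earth radii, via the
    Biot-Savart result for a loop center: B = mu0*I/(2*r). Negative sign:
    a westward current depresses the northward surface field."""
    b = MU0 * total_current / (2.0 * radius_re * RE)
    return -b * 1e9

# A hypothetical 3 MA ring current at 4 Earth radii gives roughly -74 nT
delta_b = ring_current_db(3e6, 4.0)
```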

  20. Effect of the Impeller Design on Degasification Kinetics Using the Impeller Injector Technique Assisted by Mathematical Modeling

    Directory of Open Access Journals (Sweden)

    Diego Abreu-López

    2017-04-01

    Full Text Available A mathematical model was developed to describe the hydrodynamics of a batch reactor for aluminum degassing utilizing the rotor-injector technique. The mathematical model uses the Eulerian algorithm to represent the two-phase system, including the simulation of vortex formation at the free surface, and uses the RNG k-ε model to account for the turbulence in the system. The model was employed to test the performance of three different impeller designs, two of which are available commercially, while the third is a new design proposed in previous work. The model simulates the hydrodynamics and consequently helps to explain and connect the performance in terms of degassing kinetics and gas consumption found in the physical modeling previously reported; accordingly, the model simulates a water physical model. The model reveals that the new impeller design distributes the bubbles more uniformly throughout the ladle and exhibits a better-agitated bath, since the transfer of momentum to the fluids is better. Gas is evenly distributed with this design because both phases, gas and liquid, are dragged to the bottom of the ladle as a result of the higher pumping effect in comparison to the commercial designs.

  1. Microstructure evolution during homogenization of Al–Mn–Fe–Si alloys: Modeling and experimental results

    International Nuclear Information System (INIS)

    Du, Q.; Poole, W.J.; Wells, M.A.; Parson, N.C.

    2013-01-01

    Microstructure evolution during the homogenization heat treatment of Al–Mn–Fe–Si, or AA3xxx, alloys has been investigated using a combination of modeling and experimental studies. The model is fully coupled to CALculation of PHAse Diagrams (CALPHAD) software and explicitly takes into account the two different length scales for diffusion encountered in modeling the homogenization process. The model is able to predict the evolution of all the important microstructural features during homogenization, including the inhomogeneous spatial distribution of dispersoids and alloying elements in solution, the dispersoid number density and size distribution, and the type and fraction of intergranular constituent particles. Experiments were conducted using four direct chill (DC) cast AA3xxx alloys subjected to various homogenization treatments. The resulting microstructures were then characterized using a range of techniques, including optical and electron microscopy, electron microprobe analysis, field emission gun scanning electron microscopy, and electrical resistivity measurements. The model predictions have been compared with the experimental measurements to validate the model. Further, it has been demonstrated that the validated model is able to predict the effects of alloying elements (e.g. Si and Mn) on microstructure evolution. It is concluded that the model provides a time- and cost-effective tool for optimizing and designing industrial AA3xxx alloy chemistries and homogenization heat treatments.
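    At the within-grain length scale, homogenization is essentially the diffusive decay of microsegregation; the sketch below relaxes a sinusoidal solute profile across half a dendrite arm spacing with an explicit finite-difference scheme (the diffusivity, spacing and hold time are assumed round numbers for illustration, not values from the paper):

```python
import math

# Explicit FD sketch: decay of a sinusoidal microsegregation profile across
# half a dendrite arm spacing L with zero-flux (mirror) boundaries.
nx = 51
L = 25e-6        # half arm spacing, m (assumed)
D = 1e-13        # assumed solute diffusivity at the hold temperature, m^2/s
dx = L / (nx - 1)
dt = 0.4 * dx * dx / D          # stable explicit step (r = 0.4 < 0.5)
r = D * dt / dx ** 2
c = [1.0 + math.cos(math.pi * i / (nx - 1)) for i in range(nx)]  # initial profile

steps = int(3600.0 / dt)        # 1 h hold
for _ in range(steps):
    new = c[:]
    for i in range(1, nx - 1):
        new[i] = c[i] + r * (c[i + 1] - 2 * c[i] + c[i - 1])
    new[0], new[-1] = new[1], new[-2]   # mirror cells enforce zero flux
    c = new

amplitude = (max(c) - min(c)) / 2.0     # residual segregation amplitude
```

The residual amplitude agrees with the analytic decay factor exp(-pi^2 D t / L^2), a useful sanity check for this kind of solver.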

  2. Computational modelling of the mechanics of trabecular bone and marrow using fluid structure interaction techniques.

    Science.gov (United States)

    Birmingham, E; Grogan, J A; Niebur, G L; McNamara, L M; McHugh, P E

    2013-04-01

    Bone marrow found within the porous structure of trabecular bone provides a specialized environment for numerous cell types, including mesenchymal stem cells (MSCs). Studies have sought to characterize the mechanical environment imposed on MSCs; however, a particular challenge is that marrow displays the characteristics of a fluid while surrounded by bone that is subject to deformation, and previous experimental and computational studies have been unable to fully capture the resulting complex mechanical environment. The objective of this study was to develop a fluid structure interaction (FSI) model of trabecular bone and marrow to predict the mechanical environment of MSCs in vivo and to examine how this environment changes during osteoporosis. An idealized repeating unit was used to compare FSI techniques to a computational-fluid-dynamics-only approach. These techniques were used to determine the effect of lower bone mass and different marrow viscosities, representative of osteoporosis, on the shear stress generated within bone marrow. The results show that the shear stresses generated within bone marrow under physiological loading conditions are within the range known to stimulate a mechanobiological response in MSCs in vitro. Additionally, lower bone mass leads to an increase in the shear stress generated within the marrow, while a decrease in bone marrow viscosity reduces this generated shear stress.

  3. COST Action TU1208 - Working Group 3 - Electromagnetic modelling, inversion, imaging and data-processing techniques for Ground Penetrating Radar

    Science.gov (United States)

    Pajewski, Lara; Giannopoulos, Antonios; Sesnic, Silvestar; Randazzo, Andrea; Lambot, Sébastien; Benedetto, Francesco; Economou, Nikos

    2017-04-01

    This work aims at presenting the main results achieved by Working Group (WG) 3 "Electromagnetic methods for near-field scattering problems by buried structures; data processing techniques" of the COST (European COoperation in Science and Technology) Action TU1208 "Civil Engineering Applications of Ground Penetrating Radar" (www.GPRadar.eu, www.cost.eu). The main objective of the Action, started in April 2013 and ending in October 2017, is to exchange and increase scientific-technical knowledge and experience of Ground Penetrating Radar (GPR) techniques in civil engineering, whilst promoting in Europe the effective use of this safe non-destructive technique. The Action involves more than 150 Institutions from 28 COST Countries, a Cooperating State, 6 Near Neighbour Countries and 6 International Partner Countries. Among the most interesting achievements of WG3, we wish to mention the following: (i) A new open-source version of the finite-difference time-domain simulator gprMax was developed and released. The new gprMax is written in Python and includes many advanced features such as anisotropic and dispersive-material modelling, building of realistic heterogeneous objects with rough surfaces, built-in libraries of antenna models, optimisation of parameters based on Taguchi's method, and more. (ii) A new freeware CAD was developed and released for the construction of two-dimensional gprMax models. This tool also includes scripts easing the execution of gprMax on multi-core machines or networks of computers, and scripts for basic plotting of gprMax results. (iii) A series of interesting freeware codes were developed and will be released by the end of the Action, implementing differential and integral forward-scattering methods for the solution of simple electromagnetic problems involving buried objects. (iv) An open database of synthetic and experimental GPR radargrams was created, in cooperation with WG2. The idea behind this initiative is to give researchers the
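    The finite-difference time-domain scheme at the heart of gprMax can be illustrated with a bare-bones one-dimensional Yee update loop (free space, normalized units, soft Gaussian source); gprMax itself adds materials, dispersion, antennas and absorbing boundaries on top of this kernel:

```python
import math

nz, nt = 200, 150          # grid cells, time steps
ez = [0.0] * nz            # electric field
hy = [0.0] * nz            # magnetic field, staggered half a cell

for t in range(nt):
    for k in range(nz - 1):
        hy[k] += 0.5 * (ez[k + 1] - ez[k])            # Courant number 0.5
    ez[100] += math.exp(-(((t - 30) / 10.0) ** 2))    # soft Gaussian source
    for k in range(1, nz):
        ez[k] += 0.5 * (hy[k] - hy[k - 1])

# Two pulses now travel outward from the source at 0.5 cells per step
peak = max(abs(v) for v in ez)
```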

  4. Fast Spectral Velocity Estimation Using Adaptive Techniques: In-Vivo Results

    DEFF Research Database (Denmark)

    Gran, Fredrik; Jakobsson, Andreas; Udesen, Jesper

    2007-01-01

    Adaptive spectral estimation techniques are known to provide good spectral resolution and contrast even when the observation window (OW) is very short. In this paper two adaptive techniques are tested and compared to the averaged periodogram (Welch) for blood velocity estimation. The Blood Power...... the blood process over slow-time and averaging over depth to find the power spectral density estimate. In this paper, the two adaptive methods are explained, and performance is assessed in controlled steady-flow experiments and in-vivo measurements. The three methods were tested on a circulating flow rig...... with a blood-mimicking fluid flowing in the tube. The scanning section is submerged in water to allow ultrasound data acquisition. Data were recorded using a BK8804 linear array transducer and the RASMUS ultrasound scanner. The controlled experiments showed that the OW could be significantly reduced when
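    The Welch reference method is simply an averaged, windowed periodogram; the sketch below applies it to a synthetic single-velocity slow-time signal (the adaptive estimators of the paper replace this average with data-dependent filtering and are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(3)

def welch_psd(x, nseg=64, step=32):
    """Averaged modified periodogram (Welch): Hann-windowed segments
    with 50% overlap, periodograms averaged across segments."""
    win = np.hanning(nseg)
    segs = [x[i:i + nseg] for i in range(0, len(x) - nseg + 1, step)]
    spectra = [np.abs(np.fft.fft(s * win)) ** 2 for s in segs]
    return np.mean(spectra, axis=0) / (win ** 2).sum()

# Synthetic slow-time signal: one velocity component (a tone) in white noise
n = np.arange(1024)
f0 = 12 / 64                      # tone centred exactly on FFT bin 12
x = np.cos(2 * np.pi * f0 * n) + 0.5 * rng.normal(size=n.size)
psd = welch_psd(x)
peak_bin = int(np.argmax(psd[:32]))   # search positive frequencies only
```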

  5. RC Beams Strengthened with Mechanically Fastened Composites: Experimental Results and Numerical Modeling

    Directory of Open Access Journals (Sweden)

    Enzo Martinelli

    2014-03-01

    Full Text Available The use of mechanically-fastened fiber-reinforced polymer (MF-FRP systems has recently emerged as a competitive solution for the flexural strengthening of reinforced concrete (RC beams and slabs. An overview of the experimental research has proven the effectiveness and the potentiality of the MF-FRP technique which is particularly suitable for emergency repairs or when the speed of installation and immediacy of use are imperative. A finite-element (FE model has been recently developed by the authors with the aim to simulate the behavior of RC beams strengthened in bending by MF-FRP laminates; such a model has also been validated by using a wide experimental database collected from the literature. By following the previous study, the FE model and the assembled database are considered herein with the aim of better exploring the influence of some specific aspects on the structural response of MF-FRP strengthened members, such as the bearing stress-slip relationship assumed for the FRP-concrete interface, the stress-strain law considered for reinforcing steel rebars and the cracking process in RC members resulting in the well-known tension stiffening effect. The considerations drawn from this study will be useful to researchers for the calibration of criteria and design rules for strengthening RC beams through MF-FRP laminates.

  6. Mechanical Elongation of the Small Intestine: Evaluation of Techniques for Optimal Screw Placement in a Rodent Model

    Directory of Open Access Journals (Sweden)

    P. A. Hausbrandt

    2013-01-01

    Full Text Available Introduction. The aim of this study was to evaluate techniques and establish an optimal method for mechanical elongation of the small intestine (MESI) using screws in a rodent model, in order to develop a potential therapy for short bowel syndrome (SBS). Material and Methods. Adult female Sprague Dawley rats (n=24) with body weights from 250 to 300 g (mean = 283 g) were evaluated in 5 different groups, in which the basic denominator for the technique involved the fixation of a blind loop of the intestine on the abdominal wall with the placement of a screw in the lumen secured to the abdominal wall. Results. In all groups with accessible screws, the rodents removed the implants despite the use of washers or suits to prevent removal. Subcutaneous placement of the screw combined with antibiotic treatment and dietary modifications was finally successful. In two animals autologous transplantation of the lengthened intestinal segment was successful. Discussion. While the rodent model may provide useful basic information on mechanical intestinal lengthening, further investigations should be performed in larger animals to make use of the translational nature of MESI in human SBS treatment.

  7. Outcomes of laryngohyoid suspension techniques in an ovine model of profound oropharyngeal dysphagia.

    Science.gov (United States)

    Johnson, Christopher M; Venkatesan, Naren N; Siddiqui, M Tausif; Cates, Daniel J; Kuhn, Maggie A; Postma, Gregory M; Belafsky, Peter C

    2017-12-01

    To evaluate the efficacy of various techniques of laryngohyoid suspension in the elimination of aspiration, utilizing a cadaveric ovine model of profound oropharyngeal dysphagia. Animal study. The head and neck of a Dorper cross ewe were placed in the lateral fluoroscopic view. Five conditions were tested: baseline, thyroid cartilage to hyoid approximation (THA), thyroid cartilage to hyoid to mandible (laryngohyoid) suspension (LHS), LHS with cricopharyngeus muscle myotomy (LHS-CPM), and cricopharyngeus muscle myotomy (CPM) alone. Five 20-mL trials of barium sulfate were delivered into the oropharynx under fluoroscopy for each condition. Outcome measures included the penetration aspiration scale (PAS) and the National Institutes of Health (NIH) Swallow Safety Scale (NIH-SSS). Median baseline PAS and NIH-SSS scores were 8 and 6, respectively, indicating severe impairment. THA scores were not improved from baseline. LHS alone reduced the PAS to 1 (P = .025) and the NIH-SSS to 2 (P = .025) from baseline. LHS-CPM reduced the PAS to 1 (P = .025) and the NIH-SSS to 0 (P = .025) from baseline. CPM alone did not improve scores. LHS-CPM displayed an improved NIH-SSS over LHS alone (P = .003). This cadaveric model represents end-stage profound oropharyngeal dysphagia such as could result from severe neurological insult. CPM alone failed to improve fluoroscopic outcomes in this model. Thyrohyoid approximation also failed to improve outcomes. LHS significantly improved both PAS and NIH-SSS. The addition of CPM to LHS resulted in improvement over suspension alone. NA. Laryngoscope, 127:E422-E427, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  8. Modeling ionospheric foF2 response during geomagnetic storms using neural network and linear regression techniques

    Science.gov (United States)

    Tshisaphungo, Mpho; Habarulema, John Bosco; McKinnell, Lee-Anne

    2018-06-01

    In this paper, the modeling of ionospheric foF2 changes during geomagnetic storms by means of neural network (NN) and linear regression (LR) techniques is presented. The results will lead to a valuable tool for modeling the complex ionospheric changes during disturbed days in an operational space weather monitoring and forecasting environment. Storm-time foF2 data during 1996-2014 from the Grahamstown (33.3°S, 26.5°E) ionosonde station in South Africa were used in the modeling. Six storms were reserved to validate the models and hence were not used in the modeling process. We found that the performance of the NN and LR models is comparable during selected storms which fell within the data period (1996-2014) used in modeling. However, when validated on storm periods beyond 1996-2014, the NN model gives a better performance (R = 0.62) than the LR model (R = 0.56) for a storm that reached a minimum Dst index of -155 nT during 19-23 December 2015. We also found that both NN and LR models are capable of capturing the ionospheric foF2 responses during two great geomagnetic storms (28 October-1 November 2003 and 6-12 November 2004), which have been demonstrated to be difficult storms to model in previous studies.
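    The NN-versus-LR comparison can be sketched on synthetic data with a saturating response, which a straight line cannot follow; everything below (driver, response, network size, training settings) is invented for illustration and is not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy stand-in: a storm driver x and a response y that saturates
x = rng.uniform(-3, 3, size=(400, 1))
y = np.tanh(x[:, 0]) + 0.1 * rng.normal(size=400)

# Linear regression (LR): ordinary least squares with an intercept
A = np.c_[np.ones(400), x]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred_lr = A @ coef

# Neural network (NN): one hidden tanh layer, batch gradient descent
h, eta = 8, 0.1
W1, b1 = 0.5 * rng.normal(size=(1, h)), np.zeros(h)
W2, b2 = 0.5 * rng.normal(size=h), 0.0
for _ in range(10000):
    z = np.tanh(x @ W1 + b1)
    err = z @ W2 + b2 - y                  # gradient of 0.5*MSE wrt output
    dz = np.outer(err, W2) * (1 - z ** 2)  # backprop through tanh
    W2 -= eta * z.T @ err / 400
    b2 -= eta * err.mean()
    W1 -= eta * x.T @ dz / 400
    b1 -= eta * dz.mean(axis=0)
pred_nn = np.tanh(x @ W1 + b1) @ W2 + b2

# Correlation coefficient R, the comparison metric used in the abstract
r_lr = float(np.corrcoef(pred_lr, y)[0, 1])
r_nn = float(np.corrcoef(pred_nn, y)[0, 1])
```

With enough training the network can also fit the saturated ends of the response that the straight line necessarily misses.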

  9. Increasing the Intelligence of Virtual Sales Assistants through Knowledge Modeling Techniques

    OpenAIRE

    Molina, Martin

    2001-01-01

    Shopping agents are web-based applications that help consumers to find appropriate products in the context of e-commerce. In this paper we argue about the utility of advanced model-based techniques that recently have been proposed in the fields of Artificial Intelligence and Knowledge Engineering, in order to increase the level of support provided by this type of applications. We illustrate this approach with a virtual sales assistant that dynamically configures a product according to the nee...

  10. Some techniques and results from high-pressure shock-wave experiments utilizing the radiation from shocked transparent materials

    International Nuclear Information System (INIS)

    McQueen, R.G.; Fritz, J.N.

    1981-01-01

    It has been known for many years that some transparent materials emit radiation when shocked to high pressures. This property was used to determine the temperature of shocked fused and crystal quartz, which in turn allowed the thermal expansion of SiO2 at high pressure and also the specific heat to be calculated. Once the radiative energy as a function of pressure is known for one material, it is shown how this can be used to determine the temperature of other transparent materials. By the nature of the experiments very accurate shock velocities can be measured and hence high-quality equation-of-state data obtained. Some techniques and results are presented on measuring sound velocities from symmetrical impacts of nontransparent materials using radiation-emitting transparent analyzers, and on nonsymmetrical impact experiments on transparent materials. Because of special requirements in the latter experiments, techniques were developed that led to very high-precision shock-wave data. Preliminary results using these techniques are presented for making estimates of the melting region and the yield strength of some metals under strong shock conditions

  11. RESULTS OF THE USE OF PEEK CAGES IN THE TREATMENT OF BASILAR INVAGINATION BY GOEL TECHNIQUE

    Directory of Open Access Journals (Sweden)

    Luís Eduardo Carelli Teixeira da Silva

    2016-03-01

    Full Text Available ABSTRACT Objective: Analysis of the use of polyetheretherketone (PEEK) cages for atlantoaxial facet realignment and distraction in the treatment of basilar invagination by the Goel technique. Method: Retrospective descriptive statistical analysis of neurological status, pain, presence of subsidence and bone fusion with the use of PEEK cages in 8 atlantoaxial joints of 4 patients with basilar invagination. All patients were treated with atlantoaxial facet distraction and realignment and subsequent C1-C2 arthrodesis by the technique of Goel modified by the use of a PEEK cage. Results: All patients showed improvement on the Nurick neurological assessment scale and the Visual Analogue Scale (VAS) of pain. There were no cases of subsidence, migration, or damage to the vertebral artery during insertion of the cage. All joints achieved bone fusion, as assessed by dynamic radiographs and computed tomography. Two patients developed neuropathic pain in the C2 dermatome, and one patient had a unilateral vertebral artery injury during C2 instrumentation, treated with insertion of a pedicle screw to control the bleeding. Conclusion: The treatment of basilar invagination by the Goel technique with the use of PEEK cages was shown to be effective and safe, although further studies are needed to confirm this use.

  12. INTRAVAL test case 1b - modelling results

    International Nuclear Information System (INIS)

    Jakob, A.; Hadermann, J.

    1991-07-01

    This report presents results obtained within Phase I of the INTRAVAL study. Six different models are fitted to the results of four infiltration experiments with 233U tracer on small samples of crystalline bore cores originating from deep drillings in Northern Switzerland. Four of these are dual porosity media models taking into account advection and dispersion in water-conducting zones (either tube-like veins or planar fractures), matrix diffusion out of these into pores of the solid phase, and either non-linear or linear sorption of the tracer onto inner surfaces. The remaining two are equivalent porous media models (excluding matrix diffusion) including either non-linear sorption onto surfaces of a single fissure family or linear sorption onto surfaces of several different fissure families. The fits to the experimental data have been carried out by a Marquardt-Levenberg procedure, yielding error estimates of the parameters, correlation coefficients and also, as a measure of the goodness of the fits, the minimum values of the χ2 merit function. The effects of different upstream boundary conditions are demonstrated, and the penetration depth for matrix diffusion is discussed briefly for both alternative flow path scenarios. The calculations show that the dual porosity media models fit the experimental data significantly better than the single porosity media concepts. Moreover, it is matrix diffusion rather than the non-linearity of the sorption isotherm which is responsible for the tailing part of the break-through curves. The extracted parameter values of some models, for both the linear and non-linear (Freundlich) sorption isotherms, are consistent with the results of independent static batch sorption experiments. From the fits it is generally not possible to discriminate between the two alternative flow path geometries. On the basis of the modelling results, some proposals for further experiments are presented. (author) 15 refs., 23 figs., 7 tabs
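    The Marquardt-Levenberg procedure used for the fits is compact enough to sketch; below it recovers the two parameters of a synthetic exponentially decaying "tail" (the model, data and settings are invented stand-ins, not the report's transport models):

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic tailing curve y = a*exp(-b*t) + noise, with a=2.0, b=0.4
t = np.linspace(0.0, 10.0, 60)
y = 2.0 * np.exp(-0.4 * t) + 0.02 * rng.normal(size=60)

def model(p):
    return p[0] * np.exp(-p[1] * t)

def jacobian(p):
    e = np.exp(-p[1] * t)
    return np.c_[e, -p[0] * t * e]   # d(model)/da, d(model)/db

def marquardt_levenberg(p, lam=1e-2, n_iter=50):
    """Bare-bones Marquardt-Levenberg: damped Gauss-Newton steps; the damping
    lam is raised when a trial step fails and lowered when it succeeds."""
    cost = float(((y - model(p)) ** 2).sum())
    for _ in range(n_iter):
        r = y - model(p)
        J = jacobian(p)
        step = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
        p_try = p + step
        cost_try = float(((y - model(p_try)) ** 2).sum())
        if cost_try < cost:
            p, cost, lam = p_try, cost_try, lam * 0.5
        else:
            lam *= 2.0
    return p, cost

p_fit, chi2 = marquardt_levenberg(np.array([1.0, 1.0]))
```

The same damped-step skeleton underlies most implementations; real fits add convergence tests and derive parameter error estimates from the final (J.T @ J) inverse.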

  13. In silico modeling techniques for predicting the tertiary structure of human H4 receptor.

    Science.gov (United States)

    Zaid, Hilal; Raiyn, Jamal; Osman, Midhat; Falah, Mizied; Srouji, Samer; Rayan, Anwar

    2016-01-01

    First cloned in 2000, the human Histamine H4 Receptor (hH4R) is the last member of the histamine receptor family discovered so far. It belongs to the GPCR super-family and is involved in a wide variety of immunological and inflammatory responses. Potential hH4R antagonists are proposed to have therapeutic potential for the treatment of allergies, inflammation, asthma and colitis. So far, no hH4R ligands have been successfully introduced to the pharmaceutical market, which creates a strong demand for new selective ligands to be developed. In silico techniques and structure-based modeling are likely to facilitate the achievement of this goal. In this review paper we attempt to cover the fundamental concepts of hH4R structure modeling and its implementations in drug discovery and development, especially those that have been experimentally tested, and to highlight some ideas currently being discussed on the dynamic nature of hH4R and GPCRs with regard to computerized techniques for 3-D structure modeling.

  14. Data-driven techniques to estimate parameters in a rate-dependent ferromagnetic hysteresis model

    International Nuclear Information System (INIS)

    Hu Zhengzheng; Smith, Ralph C.; Ernstberger, Jon M.

    2012-01-01

    The quantification of rate-dependent ferromagnetic hysteresis is important in a range of applications including high speed milling using Terfenol-D actuators. There exist a variety of frameworks for characterizing rate-dependent hysteresis including the magnetic model in Ref. , the homogenized energy framework, Preisach formulations that accommodate after-effects, and Prandtl-Ishlinskii models. A critical issue when using any of these models to characterize physical devices concerns the efficient estimation of model parameters through least squares data fits. A crux of this issue is the determination of initial parameter estimates based on easily measured attributes of the data. In this paper, we present data-driven techniques to efficiently and robustly estimate parameters in the homogenized energy model. This framework was chosen due to its physical basis and its applicability to ferroelectric, ferromagnetic and ferroelastic materials.
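The homogenized energy model itself is too involved to reproduce here, but the data-driven idea the abstract emphasises (read initial parameter estimates directly off easily measured attributes of the data, then refine by least squares) can be sketched on a simple stand-in: an anhysteretic magnetisation curve M(H) = Ms·tanh(H/a), where the saturation level and the initial susceptibility give the starting guesses. The model form and all values are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Stand-in anhysteretic curve (NOT the homogenized energy model).
def anhysteretic(H, Ms, a):
    return Ms * np.tanh(H / a)

rng = np.random.default_rng(1)
H = np.linspace(-5e4, 5e4, 201)                       # A/m, applied field sweep
M = anhysteretic(H, 1.2e6, 8e3) + rng.normal(0, 1e4, H.size)  # noisy data

# Data-driven initial estimates from easily measured attributes:
Ms0 = np.max(np.abs(M))                    # saturation level from data extremes
slope0 = np.gradient(M, H)[H.size // 2]    # initial susceptibility near H = 0
a0 = Ms0 / slope0                          # since dM/dH at H=0 equals Ms / a

# Least-squares refinement starting from the data-driven guesses.
popt, _ = curve_fit(anhysteretic, H, M, p0=[Ms0, a0])
```

The point of the exercise is that `p0` is already close to the optimum, so the iterative fit converges quickly and robustly, which is what the paper seeks for the (much richer) homogenized energy parameter set.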

  15. Measurement and modeling of out-of-field doses from various advanced post-mastectomy radiotherapy techniques

    Science.gov (United States)

    Yoon, Jihyung; Heins, David; Zhao, Xiaodong; Sanders, Mary; Zhang, Rui

    2017-12-01

    More and more advanced radiotherapy techniques have been adopted for post-mastectomy radiotherapy (PMRT). Patient dose reconstruction is challenging for these advanced techniques because they enlarge the low out-of-field dose area, while the accuracy of out-of-field dose calculations by current commercial treatment planning systems (TPSs) is poor. We aim to measure and model the out-of-field radiation doses from various advanced PMRT techniques. PMRT treatment plans for an anthropomorphic phantom were generated, including volumetric modulated arc therapy with standard and flattening-filter-free photon beams, mixed beam therapy, 4-field intensity modulated radiation therapy (IMRT), and tomotherapy. We measured doses in the phantom where the TPS-calculated doses were lower than 5% of the prescription dose using thermoluminescent dosimeters (TLDs). The TLD measurements were corrected by two additional energy correction factors, namely the out-of-beam out-of-field (OBOF) correction factor K_OBOF and the in-beam out-of-field (IBOF) correction factor K_IBOF, which were determined by separate measurements using an ion chamber and TLD. A simple analytical model was developed to predict out-of-field dose as a function of distance from the field edge for each PMRT technique. The root mean square discrepancies between measured and calculated out-of-field doses were within 0.66 cGy Gy⁻¹ for all techniques. The IBOF doses were highly scattered and should be evaluated case by case. One can easily combine the out-of-field dose measured here with the in-field dose calculated by the local TPS to reconstruct organ doses for a specific PMRT patient if the same treatment apparatus and technique are used.
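The paper's analytical model is not given in the abstract; a common hedged choice for out-of-field dose versus distance from the field edge, assumed here purely for illustration, is a decaying exponential plus a constant floor, fitted to TLD-like readings (the data points below are synthetic, not the paper's measurements).

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed functional form (not necessarily the paper's): exponential falloff
# with distance from the field edge plus a constant background term.
def oof_dose(x_cm, a, mu, c):
    return a * np.exp(-mu * x_cm) + c

x = np.array([2, 4, 6, 8, 10, 15, 20, 30], dtype=float)    # cm from field edge
d = np.array([2.48, 1.52, 0.94, 0.59, 0.38, 0.14, 0.08, 0.05])  # cGy/Gy, synthetic

popt, _ = curve_fit(oof_dose, x, d, p0=[5.0, 0.2, 0.05])
rmse = np.sqrt(np.mean((d - oof_dose(x, *popt))**2))   # fit quality, cGy/Gy
```

Given such a fitted curve per technique, the abstract's proposal is to evaluate it at each organ's distance from the field edge and add the TPS-computed in-field dose to reconstruct patient organ doses.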

  16. Identification of System Parameters by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Kirkegaard, Poul Henning; Rytter, Anders

    1991-01-01

    The aim of this paper is to investigate and illustrate the possibilities of using correlation functions estimated by the Random Decrement Technique as a basis for parameter identification. A two-stage system identification system is used: first, the correlation functions are estimated by the Random Decrement Technique, and then the system parameters are identified from the correlation function estimates. Three different techniques are used in the parameter identification process: a simple non-parametric method, estimation of an Auto Regressive (AR) model by solving an overdetermined set of Yule-Walker equations and, finally, least-square fitting of the theoretical correlation function. The results are compared to the results of fitting an Auto Regressive Moving Average (ARMA) model directly to the system output from a single-degree-of-freedom system loaded by white noise.
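The first stage of the two-stage scheme can be sketched compactly: the Random Decrement signature is the average of response segments that start at up-crossings of a trigger level, and for a white-noise-loaded system it is proportional to the correlation function. The AR(2) surrogate for a single-degree-of-freedom system and all parameter values below are illustrative; the second (parameter-fitting) stage is omitted.

```python
import numpy as np
from scipy.signal import lfilter

# Stage 1 of the two-stage identification: Random Decrement signature.
def random_decrement(y, trigger, n_lags):
    # indices where the response up-crosses the trigger level
    starts = np.where((y[:-1] < trigger) & (y[1:] >= trigger))[0] + 1
    starts = starts[starts + n_lags <= y.size]
    # averaging the segments cancels the random part and leaves a
    # free-decay-like estimate proportional to the correlation function
    return np.mean([y[i:i + n_lags] for i in starts], axis=0)

rng = np.random.default_rng(2)
r, theta = 0.99, 0.1                  # pole radius (damping) and angle (frequency)
w = rng.standard_normal(200_000)      # white-noise load
# AR(2) process as a discrete surrogate for a white-noise-loaded SDOF system
y = lfilter([1.0], [1.0, -2.0 * r * np.cos(theta), r**2], w)

sig = random_decrement(y, trigger=np.std(y), n_lags=300)  # decaying oscillation
```

Stage 2 would then fit, e.g., an exponentially damped cosine (or an AR model via Yule-Walker equations) to `sig` to recover frequency and damping.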

  17. Development of self-learning Monte Carlo technique for more efficient modeling of nuclear logging measurements

    International Nuclear Information System (INIS)

    Zazula, J.M.

    1988-01-01

    The self-learning Monte Carlo technique has been implemented in the commonly used general-purpose neutron transport code MORSE in order to enhance sampling of the particle histories that contribute to a detector response. The parameters of all the biasing techniques available in MORSE, i.e. splitting, Russian roulette, source and collision outgoing-energy importance sampling, path-length transformation and additional biasing of the source angular distribution, are optimized. The learning process is performed iteratively after each batch of particles, by retrieving the data concerning the subset of histories that passed through the detector region and energy range in the previous batches. This procedure has been tested on two sample problems in nuclear geophysics, where an unoptimized Monte Carlo calculation is particularly inefficient. The results are encouraging, although the presented method does not directly minimize the variance, and the convergence of our algorithm is restricted by the statistics of successful histories from previous random walks. Further applications to the modeling of nuclear logging measurements seem promising. 11 refs., 2 figs., 3 tabs. (author)
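Two of the biasing games named above, splitting and Russian roulette, can be sketched as a simple weight window; the thresholds and bookkeeping below are illustrative and are not MORSE's actual scheme. The key property both games share is that the total particle weight is preserved in expectation (splitting preserves it exactly), so the detector estimate stays unbiased while the population is concentrated where it matters.

```python
import numpy as np

# Hedged sketch of a weight window combining splitting and Russian roulette.
def weight_window(w, rng, w_high=2.0, w_low=0.25):
    if w > w_high:                         # split into n equal-weight daughters
        n = int(np.ceil(w / w_high))
        return [w / n] * n
    if w < w_low:                          # Russian roulette: kill or promote
        return [w_low] if rng.random() < w / w_low else []
    return [w]                             # inside the window: leave unchanged

rng = np.random.default_rng(3)
# Unbiasedness check: low-weight particles are mostly killed, but survivors
# are promoted so that the mean surviving weight equals the input weight.
survivors = [weight_window(0.1, rng) for _ in range(10_000)]
mean_w = sum(sum(s) for s in survivors) / 10_000   # close to 0.1
```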

  18. First results in the use of sterile insect technique against Trialeurodes vaporariorum (Homoptera: Aleyroididae) in greenhouses

    International Nuclear Information System (INIS)

    Calvitti, M.; Remotti, P.C.; Pasquali, A.; Cirio, U.

    1998-01-01

    Trials for the evaluation of the effectiveness of the sterile insect technique for the suppression of the greenhouse whitefly, Trialeurodes vaporariorum (Westwood), in both cage and greenhouse conditions are described. The results show a significant reduction of the reproductive capacity of the untreated whitefly populations interacting with sterile insects. Untreated whiteflies, co-existing in a mixed population together with sterile insects, attained less than half (44%) of their potential reproductive capacity. This trend was also evident in the cage test, where the untreated whitefly population, crossed with the sterile whiteflies, increased without exceeding 2/3 of the density recorded in the control cages. These results may be based on two joint sterile insect technique effects: primarily, a drastic reduction of the progeny of normal untreated females when mating with sterile males, carriers of dominant lethal mutations, and secondarily, a progressive reduction of the females in the population due to an increasing rate of unsuccessful matings resulting in a condition of forced arrhenotoky. No deleterious effects on plant health and fruit quality were observed on plants exposed to high sterile whitefly pressures

  19. Meta-heuristic ant colony optimization technique to forecast the amount of summer monsoon rainfall: skill comparison with Markov chain model

    Science.gov (United States)

    Chaudhuri, Sutapa; Goswami, Sayantika; Das, Debanjana; Middey, Anirban

    2014-05-01

    Forecasting summer monsoon rainfall with precision becomes crucial for farmers to plan for harvesting in a country like India, where the national economy is mostly based on regional agriculture. The forecast of monsoon rainfall based on artificial neural networks is a well-researched problem. In the present study, the meta-heuristic ant colony optimization (ACO) technique is implemented to forecast the amount of summer monsoon rainfall for the next day over Kolkata (22.6°N, 88.4°E), India. The ACO technique belongs to swarm intelligence and simulates the decision-making processes of an ant colony, similar to other adaptive learning techniques. The ACO technique takes inspiration from the foraging behaviour of some ant species: the ants deposit pheromone on the ground in order to mark a favourable path that should be followed by other members of the colony. A range of rainfall amounts replicating the pheromone concentration is evaluated during the summer monsoon season. The maximum amount of rainfall during the summer monsoon season (June-September) is observed to be within the range of 7.5-35 mm during the period from 1998 to 2007, which is in the range 4 category set by the India Meteorological Department (IMD). The results reveal that the accuracy in forecasting the amount of rainfall for the next day during the summer monsoon season using the ACO technique is 95%, whereas the forecast accuracy is 83% with a Markov chain model (MCM). The forecasts through ACO and MCM are compared with other existing models and validated against IMD observations from 2008 to 2012.
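The pheromone mechanism described above can be sketched as a toy discrete ACO: candidate rainfall ranges play the role of paths, and pheromone deposited on well-scoring candidates biases later "ants" toward them. The candidate amounts, the scoring rule, and all parameters below are invented for illustration and are not the paper's scheme.

```python
import numpy as np

rng = np.random.default_rng(4)
bins = np.array([2.5, 7.5, 15.0, 25.0, 35.0])   # candidate rainfall amounts (mm)
target = 25.0                                   # hypothetical observed rainfall
tau = np.ones(bins.size)                        # pheromone trail per candidate
rho, eps = 0.05, 0.1                            # evaporation rate, exploration floor

for _ in range(1000):                           # one ant per iteration
    p = (tau + eps) / (tau + eps).sum()         # choice probability ~ pheromone
    k = rng.choice(bins.size, p=p)
    quality = 1.0 / (1.0 + abs(bins[k] - target))  # better = closer to target
    tau *= (1.0 - rho)                          # evaporation on all trails
    tau[k] += quality                           # reinforcement of chosen trail

best = bins[np.argmax(tau)]                     # pheromone concentrates near target
```

The evaporation term prevents early random luck from locking in a poor candidate, while the quality-weighted deposit provides the positive feedback that makes the colony converge on the best-scoring range.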

  20. Using Game Theory Techniques and Concepts to Develop Proprietary Models for Use in Intelligent Games

    Science.gov (United States)

    Christopher, Timothy Van

    2011-01-01

    This work is about analyzing games as models of systems. The goal is to understand the techniques that have been used by game designers in the past, and to compare them to the study of mathematical game theory. Through the study of a system or concept a model often emerges that can effectively educate students about making intelligent decisions…

  1. Multi-factor models and signal processing techniques application to quantitative finance

    CERN Document Server

    Darolles, Serges; Jay, Emmanuelle

    2013-01-01

    With recent outbreaks of multiple large-scale financial crises, amplified by interconnected risk sources, a new paradigm of fund management has emerged. This new paradigm leverages "embedded" quantitative processes and methods to provide more transparent, adaptive, reliable and easily implemented "risk assessment-based" practices. This book surveys the most widely used factor models employed within the field of financial asset pricing. Through the concrete application of evaluating risks in the hedge fund industry, the authors demonstrate that signal processing techniques are an intere

  2. Different Techniques For Producing Precision Holes (>20 mm) In Hardened Steel—Comparative Results

    Science.gov (United States)

    Coelho, R. T.; Tanikawa, S. T.

    2009-11-01

    High speed machining (HSM), or high performance machining, has been one of the most recent technological advances. When applied to milling operations, using adequate machines, CAM programs and tooling, it allows cutting hardened steels, which was not feasible just a couple of years ago. The use of very stiff and precise machines has created the possibility of machining holes in hardened steels, such as AISI H13 at 48-50 HRC, using helical interpolation, for example. Such a process is particularly useful for holes with diameters bigger than those of commercially available solid carbide drills, around 20 mm or higher. Such holes may need narrow tolerances and fine surface finishing, which can be obtained by end milling operations alone. The present work compares some of the strategies used to obtain such holes by end milling, and also some techniques employed to finish them, by milling, boring and also by fine grinding on the same machine. Results indicate that it is possible to obtain holes with less than 0.36 µm in circularity, 7.41 µm in cylindricity and 0.12 µm in surface roughness Ra. Additionally, there is less possibility of producing heat-affected layers when using such a technique.

  3. [Endoscopic calcaneoplasty (ECP) in Haglund's syndrome. Indication, surgical technique, surgical findings and results].

    Science.gov (United States)

    Jerosch, J; Sokkar, S; Dücker, M; Donner, A

    2012-06-01

    Posterior calcaneal exostosis treatment modalities have given rise to many controversial opinions. After failure of conservative treatment, surgical bursectomy and resection of the calcaneal exostosis are indicated by many authors, but clinical studies also show a high rate of unsatisfactory results with a relatively high incidence of complications. The minimally invasive surgical technique of endoscopic calcaneoplasty (ECP) could be an option to overcome some of these problems. Between 1999 and 2010 we operated on 164 patients, aged between 16 and 67 years, 81 males and 83 females. The radiological examination prior to surgery documented in all cases a posterior superior calcaneal exostosis that showed friction against the Achilles tendon. All patients included in the study had neither clinical varus of the hind foot nor cavus deformities. All patients had undergone a trial of conservative treatment for at least 6 months and did not show a positive response. The average follow-up was 46.3 (range: 8-120) months. According to the Ogilvie-Harris score, 71 patients presented good and 84 patients excellent results, while 5 patients showed fair results and 4 patients only poor results. All post-operative radiographs showed sufficient resection of the calcaneal spur. In 61 patients the preoperative MRI showed a partial rupture of the Achilles tendon close to the insertion site. In no case could we observe a complete tear at the time of follow-up. Only minor postoperative complications were observed. In many patients we could observe a chondral layer at the posterior aspect of the calcaneus. Close to the insertion, the Achilles tendon also showed chondroid metaplasia in many patients. ECP is an effective and minimally invasive procedure for the treatment of patients with calcaneal exostosis. After a short learning curve the endoscopic exposure is superior to the open technique, with less morbidity, less operating time, and nearly no complications.
Moreover, the

  4. Design and results of the ice sheet model initialisation experiments initMIP-Greenland: an ISMIP6 intercomparison

    Science.gov (United States)

    Goelzer, Heiko; Nowicki, Sophie; Edwards, Tamsin; Beckley, Matthew; Abe-Ouchi, Ayako; Aschwanden, Andy; Calov, Reinhard; Gagliardini, Olivier; Gillet-Chaulet, Fabien; Golledge, Nicholas R.; Gregory, Jonathan; Greve, Ralf; Humbert, Angelika; Huybrechts, Philippe; Kennedy, Joseph H.; Larour, Eric; Lipscomb, William H.; Le clec'h, Sébastien; Lee, Victoria; Morlighem, Mathieu; Pattyn, Frank; Payne, Antony J.; Rodehacke, Christian; Rückamp, Martin; Saito, Fuyuki; Schlegel, Nicole; Seroussi, Helene; Shepherd, Andrew; Sun, Sainan; van de Wal, Roderik; Ziemen, Florian A.

    2018-04-01

    Earlier large-scale Greenland ice sheet sea-level projections (e.g. those run during the ice2sea and SeaRISE initiatives) have shown that ice sheet initial conditions have a large effect on the projections and give rise to important uncertainties. The goal of this initMIP-Greenland intercomparison exercise is to compare, evaluate, and improve the initialisation techniques used in the ice sheet modelling community and to estimate the associated uncertainties in modelled mass changes. initMIP-Greenland is the first in a series of ice sheet model intercomparison activities within ISMIP6 (the Ice Sheet Model Intercomparison Project for CMIP6), which is the primary activity within the Coupled Model Intercomparison Project Phase 6 (CMIP6) focusing on the ice sheets. Two experiments for the large-scale Greenland ice sheet have been designed to allow intercomparison between participating models of (1) the initial present-day state of the ice sheet and (2) the response in two idealised forward experiments. The forward experiments serve to evaluate the initialisation in terms of model drift (forward run without additional forcing) and in response to a large perturbation (prescribed surface mass balance anomaly); they should not be interpreted as sea-level projections. We present and discuss results that highlight the diversity of data sets, boundary conditions, and initialisation techniques used in the community to generate initial states of the Greenland ice sheet. We find good agreement across the ensemble for the dynamic response to surface mass balance changes in areas where the simulated ice sheets overlap, but differences arise from the initial size of the ice sheet. The model drift in the control experiment is reduced for models that participated in earlier intercomparison exercises.

  5. Floating Node Method and Virtual Crack Closure Technique for Modeling Matrix Cracking-Delamination Migration

    Science.gov (United States)

    DeCarvalho, Nelson V.; Chen, B. Y.; Pinho, Silvestre T.; Baiz, P. M.; Ratcliffe, James G.; Tay, T. E.

    2013-01-01

    A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.
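The VCCT ingredient of the approach rests on a simple closed-form estimate: for a 2-D mode-I crack, the energy release rate follows from the nodal force at the crack tip and the opening displacement one element behind it. The numbers below are illustrative, not from the paper.

```python
# One-step VCCT estimate of the mode-I energy release rate,
# G_I = F * dw / (2 * b * da). All values are illustrative.
F = 120.0      # N, crack-tip nodal force normal to the crack plane
dw = 2.0e-5    # m, relative opening displacement one element behind the tip
da = 1.0e-3    # m, element length along the crack front
b = 1.0e-2     # m, specimen thickness
G_I = F * dw / (2.0 * b * da)   # J/m^2, mode-I energy release rate
```

In a failure criterion, G_I (and its mode-II counterpart computed analogously from the sliding displacement) is compared against the interface fracture toughness to decide whether the delamination advances or migrates.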

  6. Clinical Results of Flexor Tendon Repair in Zone II Using a six Strand Double Loop Technique.

    Science.gov (United States)

    Savvidou, Christiana; Tsai, Tsu-Min

    2015-06-01

    The purpose of this study is to report the clinical results after repair of zone II flexor tendon injuries utilizing a six-strand double-loop technique and early post-operative active rehabilitation. We retrospectively reviewed 22 patients involving 51 cases with zone II flexor tendon repair using a six-strand double-loop technique from September 1996 to December 2012. The most common mechanism of injury was sharp laceration (86.5%). Tendon injuries occurred equally in manual and non-manual workers and were work-related in 33% of the cases. The Strickland score for active range of motion (ROM) postoperatively was excellent or good in the majority of the cases (81%). The rupture rate was 1.9%. The six-strand double-loop technique for zone II flexor tendon repair leads to good and excellent motion in the majority of patients and a low re-rupture rate. It is clinically effective and allows for early postoperative active rehabilitation.

  7. Comparison of experimental target currents with analytical model results for plasma immersion ion implantation

    International Nuclear Information System (INIS)

    En, W.G.; Lieberman, M.A.; Cheung, N.W.

    1995-01-01

    Ion implantation is a standard fabrication technique used in semiconductor manufacturing. Implantation has also been used to modify the surface properties of materials to improve their resistance to wear, corrosion and fatigue. However, conventional ion implanters require complex optics to scan a narrow ion beam across the target to achieve implantation uniformity. An alternative implantation technique, called Plasma Immersion Ion Implantation (PIII), immerses the target into a plasma. The ions are extracted from the plasma directly and accelerated by applying negative high-voltage pulses to the target. An analytical model of the voltage and current characteristics of a remote plasma is presented. The model simulates the ion, electron and secondary electron currents induced before, during and after a high voltage negative pulse is applied to a target immersed in a plasma. The model also includes analytical relations that describe the sheath expansion and collapse due to negative high voltage pulses. The sheath collapse is found to be important for high repetition rate pulses. Good correlation is shown between the model and experiment for a wide variety of voltage pulses and plasma conditions
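One standard ingredient of such analytical PIII models is the steady-state Child-law sheath that forms under the high negative bias: s = (√2/3)·λ_D·(2V₀/Tₑ)^(3/4) (as derived in Lieberman and Lichtenberg's plasma processing text). The plasma parameters below are illustrative, not those of the paper's experiments.

```python
import numpy as np

# Child-law sheath thickness for a planar target at high negative bias.
e = 1.602e-19          # C, elementary charge
eps0 = 8.854e-12       # F/m, vacuum permittivity
n0 = 1e16              # m^-3, assumed plasma density
Te = 2.0               # V, assumed electron temperature (in volts)
V0 = 2.0e4             # V, magnitude of the applied high-voltage pulse

lambda_D = np.sqrt(eps0 * Te / (e * n0))       # Debye length, ~0.1 mm here
s = (np.sqrt(2.0) / 3.0) * lambda_D * (2.0 * V0 / Te) ** 0.75   # m, sheath width
```

For these numbers the sheath is several centimetres thick, which is why the dynamics of its expansion and collapse (and not just its steady-state size) dominate the target current waveform that the paper's model reproduces.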

  8. Weighted Least Squares Techniques for Improved Received Signal Strength Based Localization

    Directory of Open Access Journals (Sweden)

    José R. Casar

    2011-09-01

    Full Text Available The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network. The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.
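The circular (range-based) variant of the weighted scheme can be sketched end to end: ranges are inverted from RSS through a log-distance path-loss model, and the position is found by weighted non-linear least squares, trusting the (typically more accurate) short-range measurements more. The anchor layout, path-loss parameters, and the 1/d weighting heuristic below are illustrative assumptions, not the paper's exact weighting.

```python
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # m
p_true = np.array([3.0, 6.0])           # unknown position (for simulation only)
P0, n_pl = -40.0, 2.5                   # RSS at 1 m (dBm) and path-loss exponent

rng = np.random.default_rng(5)
d_true = np.linalg.norm(anchors - p_true, axis=1)
rss = P0 - 10 * n_pl * np.log10(d_true) + rng.normal(0, 1.0, 4)  # noisy RSS, dB
d_est = 10 ** ((P0 - rss) / (10 * n_pl))     # ranges from the inverted model
wts = 1.0 / d_est                            # heuristic: trust short ranges more

# Weighted circular positioning: minimise weighted range residuals.
res = least_squares(
    lambda p: wts * (np.linalg.norm(anchors - p, axis=1) - d_est),
    x0=anchors.mean(axis=0))
```

Because RSS noise is roughly constant in dB, the absolute range error grows with distance, which is the intuition behind down-weighting far anchors instead of assuming a perfectly calibrated channel model.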

  9. Defining Malaysian Knowledge Society: Results from the Delphi Technique

    Science.gov (United States)

    Hamid, Norsiah Abdul; Zaman, Halimah Badioze

    This paper outlines the findings of research whose central idea is to define the term Knowledge Society (KS) in the Malaysian context. The research focuses on three important dimensions, namely knowledge, ICT and human capital. This study adopts a modified Delphi technique to seek the important dimensions that can contribute to the development of Malaysia's KS. The Delphi technique involved ten experts in a five-round iterative and controlled feedback procedure to obtain consensus on the important dimensions and to verify the proposed definition of KS. The findings show that all three dimensions proposed initially scored high and moderate consensus. Round One (R1) proposed an initial definition of KS and solicited comments and inputs from the panel. These inputs were then used to develop items for the R2 questionnaire. In R2, 56 out of 73 items scored high consensus, and in R3, 63 out of 90 items scored high. R4 was conducted to re-rate the new items, of which 8 out of 17 scored high. Other items scored moderate consensus, and no item scored low or no consensus in any round. The final round (R5) was employed to verify the final definition of KS. The findings of this study are significant to the definition of KS and the development of a framework in the Malaysian context.

  10. Image acquisition and planimetry systems to develop wounding techniques in 3D wound model

    Directory of Open Access Journals (Sweden)

    Kiefer Ann-Kathrin

    2017-09-01

    Full Text Available Wound healing represents a complex biological repair process. Established 2D monolayers and wounding techniques investigate cell migration, but do not represent coordinated multi-cellular systems. We aim to use wound surface area measurements obtained from image acquisition and planimetry systems to establish our wounding technique and in vitro organotypic tissue. These systems will be used in our future wound healing treatment studies to assess the rate of wound closure in response to wound healing treatment with light therapy (photobiomodulation). The image acquisition and planimetry systems were developed, calibrated, and verified to measure wound surface area in vitro. The system consists of a recording system (Sony DSC HX60, 20.4 MPixel, 1/2.3″ CMOS sensor) and is calibrated with 1 mm scale paper. Macro photography with an optical zoom magnification of 2:1 achieves sufficient resolution to evaluate the 3 mm wound size and healing growth. The camera system was leveled with an aluminum construction to ensure constant distance and orientation of the images. The JPG-format images were processed with a planimetry system in MATLAB. Edge detection enables definition of the wounded area, and the wound area can be calculated with surface integrals. To separate the wounded area from the background, the image was filtered in several steps. Agar models, wounded by several test persons with different levels of experience, were used as pilot data to test the planimetry software. These image acquisition and planimetry systems support the development of our wound healing research. The reproducibility of our wounding technique can be assessed by the variability in initial wound surface area, and wound healing treatment effects can be assessed by the change in rate of wound closure. These techniques represent the foundations of our wound model, wounding technique, and analysis systems in our ongoing studies of wound healing and therapy.
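The planimetry step reduces to counting segmented pixels and converting through the calibration (pixels per mm from the scale paper). The sketch below uses a synthetic circular "wound" mask in place of a real segmented photograph, and the pixel density is an assumed value; the paper's MATLAB pipeline additionally filters and edge-detects before this point.

```python
import numpy as np

px_per_mm = 20.0                      # assumed calibration from 1 mm scale paper
radius_mm = 1.5                       # half of the nominal 3 mm wound diameter

# Synthetic segmentation result: a filled disk standing in for the wound mask.
yy, xx = np.mgrid[0:400, 0:400]
mask = (xx - 200) ** 2 + (yy - 200) ** 2 <= (radius_mm * px_per_mm) ** 2

# Planimetry: pixel count divided by squared pixel density gives area in mm^2.
area_mm2 = mask.sum() / px_per_mm ** 2        # close to pi * 1.5^2 = 7.07 mm^2
```

Tracking `area_mm2` over successive images of the same wound gives exactly the two quantities the abstract names: the spread of initial areas (wounding reproducibility) and the slope of area versus time (rate of closure).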

  11. Using Unified Modelling Language (UML) as a process-modelling technique for clinical-research process improvement.

    Science.gov (United States)

    Kumarapeli, P; De Lusignan, S; Ellis, T; Jones, B

    2007-03-01

    The Primary Care Data Quality programme (PCDQ) is a quality-improvement programme which processes routinely collected general practice computer data. Patient data collected from a wide range of different brands of clinical computer systems are aggregated, processed, and fed back to practices in an educational context to improve the quality of care. Process modelling is a well-established approach used to gain understanding and systematic appraisal, and identify areas of improvement of a business process. Unified modelling language (UML) is a general purpose modelling technique used for this purpose. We used UML to appraise the PCDQ process to see if the efficiency and predictability of the process could be improved. Activity analysis and thinking-aloud sessions were used to collect data to generate UML diagrams. The UML model highlighted the sequential nature of the current process as a barrier for efficiency gains. It also identified the uneven distribution of process controls, lack of symmetric communication channels, critical dependencies among processing stages, and failure to implement all the lessons learned in the piloting phase. It also suggested that improved structured reporting at each stage - especially from the pilot phase, parallel processing of data and correctly positioned process controls - should improve the efficiency and predictability of research projects. Process modelling provided a rational basis for the critical appraisal of a clinical data processing system; its potential may be underutilized within health care.

  12. Hair analysis by means of laser induced breakdown spectroscopy technique and support vector machine model for diagnosing addiction

    Directory of Open Access Journals (Sweden)

    M Vahid Dastjerdi

    2018-02-01

    Full Text Available Along with the development of laboratory methods for diagnosing addiction, concealment methods, either physical or chemical, for creating false results have also been developed. In this research, based on the Laser Induced Breakdown Spectroscopy (LIBS) technique and analysis of the hair of addicted and normal people, we propose a new method to overcome problems in conventional methods and reduce the possibility of cheating in the process of diagnosing addiction. For this purpose, we first sampled the hair of 17 normal and addicted people and recorded 5 spectra for each sample, 170 spectra overall. After analyzing the recorded LIBS spectra and detecting the atomic and ionic lines as well as molecular bands, the relative intensities of the emission lines for aluminum to calcium (Al/Ca) and aluminum to sodium (Al/Na) were selected as the input variables for the Support Vector Machine (SVM) model. The radial basis and polynomial kernel functions and a linear function were chosen for classifying the data in the SVM model. The results of this research showed that by the combination of the LIBS technique and SVM one can distinguish addicted persons with a precision of 100%. Because of several advantages of LIBS, such as high-speed analysis and portability, this method can be used individually or together with available methods as an automatic method for diagnosing addiction through hair analysis.
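The classification step can be sketched with scikit-learn: two line-intensity ratios (Al/Ca and Al/Na) form the feature vector and an RBF-kernel SVM separates the two groups. The feature values below are synthetic stand-ins, not spectra-derived data, and the class means, spreads, and SVM hyperparameters are invented for illustration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(6)
# Synthetic (Al/Ca, Al/Na) ratio pairs for the two groups (invented values).
normal = rng.normal([0.8, 1.2], 0.1, (40, 2))
addicted = rng.normal([1.4, 0.6], 0.1, (40, 2))
X = np.vstack([normal, addicted])
y = np.array([0] * 40 + [1] * 40)          # 0 = normal, 1 = addicted

# RBF kernel, one of the three kernel choices named in the abstract.
clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, y)
acc = clf.score(X, y)                      # training accuracy on separable data
```

In practice one would report cross-validated rather than training accuracy; the paper's 100% figure refers to its own spectra-derived feature set.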

  13. Comparison of the cytology technique and the frozen section results in intraoperative consultation of the breast lesions

    Directory of Open Access Journals (Sweden)

    Haeri H

    2002-07-01

    Full Text Available The cytology study is an effective and reliable technique in intraoperative consultation. This study was performed to evaluate the accuracy of the cytology study in intraoperative consultation of breast lesions. 125 specimens of breast lesions were examined and studied in Imam Khomeini Hospital during the years 1998-99. The sensitivity, specificity and accuracy were 87.5%, 95% and 90.5% for the cytological method and 92.4%, 100% and 95.4% for the frozen section, respectively. The false positive reports were 2% in the cytology technique, and the most important source of error and false positive reports in this method was fibroadenoma. By reviewing the results, it can be concluded that the combination of these two techniques is beneficial and more reliable in intraoperative consultation reports of breast lesions
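The reported figures follow directly from a confusion matrix. The counts below are invented, chosen only so that they reproduce the cytology sensitivity and specificity quoted above; they are not the study's raw data.

```python
# Hypothetical confusion-matrix counts (not the study's actual case counts).
tp, fn = 42, 6      # malignant lesions called positive / missed
tn, fp = 57, 3      # benign lesions called negative / falsely called positive

sensitivity = tp / (tp + fn)                    # 42/48 = 0.875
specificity = tn / (tn + fp)                    # 57/60 = 0.95
accuracy = (tp + tn) / (tp + fn + tn + fp)      # overall fraction correct
```

With the abstract's fibroadenoma finding, the fp cell is where that source of error would show up: fibroadenomas misread as malignant inflate fp and pull specificity down.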

  14. Photolysis frequency measurement techniques: results of a comparison within the ACCENT project

    Directory of Open Access Journals (Sweden)

    K. C. Clemitshaw

    2008-09-01

    Full Text Available An intercomparison of different radiometric techniques measuring atmospheric photolysis frequencies j(NO2), j(HCHO) and j(O1D) was carried out in a two-week field campaign in June 2005 at Jülich, Germany. Three double-monochromator-based spectroradiometers (DM-SR), three single-monochromator-based spectroradiometers with diode-array detectors (SM-SR) and seventeen filter radiometers (FR; ten j(NO2)-FR, seven j(O1D)-FR) took part in this comparison. For j(NO2), all spectroradiometer results agreed within ±3%. For j(HCHO), agreement was slightly poorer, between −8% and +4% of the DM-SR reference result. For the SM-SR, deviations were explained by poorer spectral resolutions and lower accuracies caused by decreased sensitivities of the photodiode arrays in the wavelength range below 350 nm. For j(O1D), the results were more complex, within +8% and −4%, with increasing deviations towards larger solar zenith angles for the SM-SR. The direction and the magnitude of the deviations were dependent on the technique of background determination. All j(NO2)-FR showed good linearity, with single calibration factors being sufficient to convert from output voltages to j(NO2). Measurements were feasible until sunset, and comparison with previous calibrations showed good long-term stability. For the j(O1D)-FR, conversion from output voltages to j(O1D) needed calibration factors and correction functions considering the influences of total ozone column and elevation of the sun. All instruments showed good linearity at photolysis frequencies exceeding about 10% of maximum values. At larger solar zenith angles, the agreement was non-uniform, with deviations explainable by insufficient correction functions. Comparison with previous calibrations for some j(O1D)-FR indicated

  15. A biomechanical modeling-guided simultaneous motion estimation and image reconstruction technique (SMEIR-Bio) for 4D-CBCT reconstruction

    Science.gov (United States)

    Huang, Xiaokun; Zhang, You; Wang, Jing

    2018-02-01

    Reconstructing four-dimensional cone-beam computed tomography (4D-CBCT) images directly from respiratory phase-sorted traditional 3D-CBCT projections can capture target motion trajectories, reduce motion artifacts, and reduce imaging dose and time. However, the limited number of projections in each phase after phase-sorting degrades CBCT image quality under traditional reconstruction techniques. To address this problem, we developed a simultaneous motion estimation and image reconstruction (SMEIR) algorithm, an iterative method that can reconstruct higher-quality 4D-CBCT images from limited projections using an inter-phase intensity-driven motion model. However, the accuracy of the intensity-driven motion model is limited in regions with fine details, whose quality is degraded by the insufficient number of projections, which in turn degrades the reconstructed image quality in the corresponding regions. In this study, we developed a new 4D-CBCT reconstruction algorithm by introducing biomechanical modeling into SMEIR (SMEIR-Bio) to boost the accuracy of the motion model in regions with small, fine structures. The biomechanical modeling uses tetrahedral meshes to model organs of interest and solves internal organ motion using tissue elasticity parameters and mesh boundary conditions. This physics-driven approach enhances the accuracy of the solved motion in the organ’s fine-structure regions. This study used 11 lung patient cases to evaluate the performance of SMEIR-Bio, making both qualitative and quantitative comparisons between SMEIR-Bio, SMEIR, and the algebraic reconstruction technique with total variation regularization (ART-TV). The reconstruction results suggest that SMEIR-Bio improves the motion model’s accuracy in regions containing small, fine details, which consequently enhances the accuracy and quality of the reconstructed 4D-CBCT images.

  16. The Recoverability of P-Technique Factor Analysis

    Science.gov (United States)

    Molenaar, Peter C. M.; Nesselroade, John R.

    2009-01-01

    It seems that just when we are about to lay P-technique factor analysis finally to rest as obsolete because of newer, more sophisticated multivariate time-series models using latent variables--dynamic factor models--it rears its head to inform us that an obituary may be premature. We present the results of some simulations demonstrating that even…

  17. Application of computer-aided three-dimensional skull model with rapid prototyping technique in repair of zygomatico-orbito-maxillary complex fracture.

    Science.gov (United States)

    Li, Wei Zhong; Zhang, Mei Chao; Li, Shao Ping; Zhang, Lei Tao; Huang, Yu

    2009-06-01

    With the advent of CAD/CAM and rapid prototyping (RP), a technical revolution in oral and maxillofacial trauma care has benefited the treatment and repair of maxillofacial fractures and the reconstruction of maxillofacial defects. For a patient with a zygomatico-facial collapse deformity resulting from a zygomatico-orbito-maxillary complex (ZOMC) fracture, CT scan data were processed using Mimics 10.0 for three-dimensional (3D) reconstruction. The reduction design was aided by 3D virtual imaging, and the 3D skull model was reproduced using the RP technique. In line with the design from Mimics, presurgery was performed on the 3D skull model, and a semi-coronal incision was used for reduction of the ZOMC fracture, based on the outcome of the presurgery. Postoperative CT images revealed significant correction of the zygomatic collapse, restoration of the zygomatic arch, and well-restored facial symmetry. The CAD/CAM and RP technique is a relatively useful tool that can assist surgeons with reconstruction of the maxillofacial skeleton, especially in repairs of ZOMC fractures.

  18. Why 1D electrical resistivity techniques can result in inaccurate siting of boreholes in hard rock aquifers and why electrical resistivity tomography must be preferred: the example of Benin, West Africa

    Science.gov (United States)

    Alle, Iboukoun Christian; Descloitres, Marc; Vouillamoz, Jean-Michel; Yalo, Nicaise; Lawson, Fabrice Messan Amen; Adihou, Akonfa Consolas

    2018-03-01

    Hard rock aquifers are particularly important for supplying people with drinking water in Africa and worldwide. Despite the common use of one-dimensional (1D) electrical resistivity techniques to locate drilling sites, the failure rate of boreholes is usually high. For instance, about 40% of boreholes drilled in hard rock aquifers in Benin are unsuccessful. This study investigates why the current use of 1D techniques (e.g. electrical profiling and electrical sounding) can result in inaccurate siting of boreholes, and assesses the value and the limitations of two-dimensional (2D) electrical resistivity tomography (ERT). Geophysical numerical modeling and comprehensive 1D and 2D resistivity surveys were carried out in hard rock aquifers in Benin. The experiments carried out at 7 sites located in different hard rock groups confirmed the results of the numerical modeling: the current use of 1D techniques can frequently lead to inaccurate siting, and ERT better reveals hydrogeological targets such as a thick weathered zone (e.g. a stratiform fractured layer and preferential weathering associated with subvertical fractured zones). Moreover, a cost analysis demonstrates that the use of ERT can save money at the scale of a drilling programme if ERT improves the success rate by only 5% compared with the success rate obtained with 1D techniques. Finally, this study demonstrates, using the example of Benin, that electrical resistivity profiling and sounding for siting boreholes in weathered hard rocks of western Africa should be discarded in favour of the more efficient ERT technique.
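    The cost argument can be made concrete with a simple expected-cost-per-successful-borehole calculation; the unit costs and success rates below are illustrative assumptions, not figures from the study.

```python
# Hypothetical cost comparison between 1D resistivity siting and 2D ERT siting.
# All figures are illustrative assumptions (arbitrary currency units), not
# values from the study.

def expected_cost_per_successful_borehole(survey_cost, drilling_cost, success_rate):
    """Expected total cost per successful borehole.

    On average 1/success_rate attempts are needed per success, and each
    attempt incurs both a survey and a drilling cost.
    """
    attempts_per_success = 1.0 / success_rate
    return attempts_per_success * (survey_cost + drilling_cost)

# 1D siting: cheap survey, ~60% success (40% failure rate, as in Benin).
cost_1d = expected_cost_per_successful_borehole(survey_cost=300,
                                                drilling_cost=10000,
                                                success_rate=0.60)
# ERT siting: a threefold survey cost, but success improved by 5 points.
cost_ert = expected_cost_per_successful_borehole(survey_cost=900,
                                                 drilling_cost=10000,
                                                 success_rate=0.65)
print(f"1D siting:  {cost_1d:.0f} per successful borehole")
print(f"ERT siting: {cost_ert:.0f} per successful borehole")
```

Because drilling dominates the per-attempt cost, even a modest gain in success rate can offset a much more expensive survey.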

  19. An experimental evaluation of the generalizing capabilities of process discovery techniques and black-box sequence models

    NARCIS (Netherlands)

    Tax, N.; van Zelst, S.J.; Teinemaa, I.; Gulden, Jens; Reinhartz-Berger, Iris; Schmidt, Rainer; Guerreiro, Sérgio; Guédria, Wided; Bera, Palash

    2018-01-01

    A plethora of automated process discovery techniques have been developed which aim to discover a process model based on event data originating from the execution of business processes. The aim of the discovered process models is to describe the control-flow of the underlying business process. At the

  20. A Continuous Dynamic Traffic Assignment Model From Plate Scanning Technique

    Energy Technology Data Exchange (ETDEWEB)

    Rivas, A.; Gallego, I.; Sanchez-Cambronero, S.; Ruiz-Ripoll, L.; Barba, R.M.

    2016-07-01

    This paper presents a methodology for the dynamic estimation of traffic flows on all links of a network from observable field data, assuming the first-in-first-out (FIFO) hypothesis. The traffic flow intensities recorded at the exits of the scanned links are propagated to obtain the flow waves on unscanned links. For that, the model calculates the flow-cost functions through information registered with the plate scanning technique. The model also addresses the concern about the quality of the flow-cost function parameters in replicating real traffic flow behaviour: it includes a new algorithm for adjusting the parameter values to link characteristics when their quality is questionable. This requires an a priori study of the locations of the scanning devices so that all path flows can be identified and travel times measured on all links. A synthetic network is used to illustrate the proposed method and to prove its usefulness and feasibility. (Author)
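    Under the FIFO hypothesis, propagating a recorded flow wave to a downstream unscanned link amounts to delaying it by the link travel time. The sketch below assumes a constant travel time purely for illustration, in place of the flow-cost functions the paper estimates from plate-scanning data.

```python
# Sketch: propagate a discretized traffic flow wave from a scanned link's exit
# to a downstream link under the FIFO assumption. A constant travel time (in
# time intervals) is an illustrative simplification of the paper's estimated
# flow-cost functions.

def propagate_flow(flow_at_exit, travel_time_steps):
    """Shift a discretized flow wave by the link travel time.

    Under FIFO, vehicles leave the downstream link in the order they entered,
    so the outflow profile is the inflow profile delayed by the travel time.
    """
    return [0.0] * travel_time_steps + list(flow_at_exit)

inflow = [0, 5, 12, 20, 14, 6, 0]   # veh/interval recorded at the scanned exit
outflow = propagate_flow(inflow, travel_time_steps=2)
print(outflow)                      # the same wave, two intervals later
```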

  1. Techniques for asynchronous and periodically-synchronous coupling of atmosphere and ocean models. Pt. 1. General strategy and application to the cyclo-stationary case

    Energy Technology Data Exchange (ETDEWEB)

    Sausen, R [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Wessling (Germany). Inst. fuer Physik der Atmosphaere; Voss, R [Deutsches Klimarechenzentrum (DKRZ), Hamburg (Germany)

    1995-07-01

    Asynchronous and periodically-synchronous schemes for coupling atmosphere and ocean models are presented. The performance of the schemes is tested by simulating the climatic response to a step-function forcing and to a gradually increasing forcing with a simple zero-dimensional non-linear energy balance model. Both the initial transient response and the asymptotic approach to the equilibrium state are studied. If no annual cycle is allowed, the asynchronous coupling technique proves to be a suitable tool. However, if the annual cycle is retained, the periodically-synchronous coupling technique reproduces the results of the synchronously coupled runs with smaller bias. In this case it is important that the total length of one synchronous period plus one ocean-only period is not a multiple of 6 months. (orig.)
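    A zero-dimensional non-linear energy balance model of the kind used here as a test bed can be sketched in a few lines, showing both the transient response to a step forcing and the asymptotic approach to equilibrium; the parameter values below are generic illustrative choices, not those of the paper.

```python
# Zero-dimensional non-linear energy balance model:
#   C dT/dt = S0/4 * (1 - albedo) + F - eps * sigma * T^4
# Parameter values are generic illustrative choices, not those of the paper.

SIGMA = 5.670e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
C = 2.0e8           # effective heat capacity, J m^-2 K^-1
S0 = 1361.0         # solar constant, W m^-2
ALBEDO = 0.30
EPS = 0.61          # effective emissivity, tuned to give an Earth-like T_eq

def step_response(forcing, t_end_years, dt_days=1.0):
    """Explicit-Euler integration starting from the unforced equilibrium."""
    dt = dt_days * 86400.0
    T = (S0 / 4 * (1 - ALBEDO) / (EPS * SIGMA)) ** 0.25  # unforced equilibrium
    for _ in range(int(t_end_years * 365 / dt_days)):
        dTdt = (S0 / 4 * (1 - ALBEDO) + forcing - EPS * SIGMA * T ** 4) / C
        T += dt * dTdt
    return T

T0 = step_response(0.0, 1)         # no forcing: stays at equilibrium (~288 K)
T_forced = step_response(4.0, 50)  # 4 W m^-2 step forcing, run to near-equilibrium
print(T0, T_forced)
```

The relaxation timescale of this model is roughly C / (4 * EPS * SIGMA * T^3), about two years here, so the 50-year run is effectively the new equilibrium.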

  2. The uncertainty analysis of model results a practical guide

    CERN Document Server

    Hofer, Eduard

    2018-01-01

    This book is a practical guide to the uncertainty analysis of computer model applications. Used in many areas, such as engineering, ecology and economics, computer models are subject to various uncertainties at the level of model formulations, parameter values and input data. Naturally, it would be advantageous to know the combined effect of these uncertainties on the model results as well as whether the state of knowledge should be improved in order to reduce the uncertainty of the results most effectively. The book supports decision-makers, model developers and users in their argumentation for an uncertainty analysis and assists them in the interpretation of the analysis results.

  3. Identification of System Parameters by the Random Decrement Technique

    DEFF Research Database (Denmark)

    Brincker, Rune; Kirkegaard, Poul Henning; Rytter, Anders

    The aim of this paper is to investigate and illustrate the possibilities of using correlation functions estimated by the Random Decrement technique as a basis for parameter identification. A two-stage system identification method is used: first the correlation functions are estimated by the Random Decrement technique, and then the system parameters are identified from the correlation function estimates. Three different techniques are used in the parameter identification process: a simple non-parametric method, estimation of an Auto Regressive (AR) model by solving an overdetermined set of Yule-Walker equations, and finally least-squares fitting of the theoretical correlation function. The results are compared to the results of fitting an Auto Regressive Moving Average (ARMA) model directly to the system output. All investigations are performed on the simulated output from a single-degree-of-freedom system.
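    The first stage named above, estimating correlation-function-like signatures by the Random Decrement technique, can be sketched as follows: average the signal segments that follow a level-crossing triggering condition. The SDOF simulation (an AR(2) process driven by white noise) and the trigger level are illustrative choices, not the paper's setup.

```python
import math, random

def random_decrement(y, trigger_level, seg_len):
    """RD signature: average of segments following up-crossings of trigger_level."""
    sig = [0.0] * seg_len
    count = 0
    for i in range(1, len(y) - seg_len):
        if y[i - 1] < trigger_level <= y[i]:   # level up-crossing trigger
            for k in range(seg_len):
                sig[k] += y[i + k]
            count += 1
    return [s / count for s in sig], count

# Simulate a single-degree-of-freedom system under white-noise excitation as an
# AR(2) process (the sampled form of a lightly damped modal response).
random.seed(1)
fs, f0, zeta = 50.0, 2.0, 0.05
dt = 1.0 / fs
w0 = 2 * math.pi * f0
wd = w0 * math.sqrt(1 - zeta ** 2)
a1 = 2 * math.exp(-zeta * w0 * dt) * math.cos(wd * dt)
a2 = -math.exp(-2 * zeta * w0 * dt)
y = [0.0, 0.0]
for _ in range(100000):
    y.append(a1 * y[-1] + a2 * y[-2] + random.gauss(0, 1))

level = 1.5 * math.sqrt(sum(v * v for v in y) / len(y))  # trigger at 1.5 std
rd, n_trig = random_decrement(y, level, seg_len=200)
print(n_trig, rd[0])   # many triggers; the signature starts near the trigger level
```

For a lightly damped linear system the RD signature is proportional to the free decay (and hence to the correlation function), which is what makes stage two, fitting AR/ARMA or theoretical correlation functions, possible.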

  4. Modelling rainfall erosion resulting from climate change

    Science.gov (United States)

    Kinnell, Peter

    2016-04-01

    It is well known that soil erosion leads to declines in agricultural productivity and contributes to declines in water quality. The widely used models for determining soil erosion for management purposes in agriculture focus on long-term (~20-year) average annual soil loss and are not well suited to determining variations that occur over short timespans or as a result of climate change. Soil loss resulting from rainfall erosion is directly dependent on the product of runoff and sediment concentration, both of which are likely to be influenced by climate change. This presentation demonstrates the capacity of models like the USLE, USLE-M and WEPP to predict variations in runoff and erosion associated with rainfall events eroding bare fallow plots in the USA, with a view to modelling rainfall erosion in areas subject to climate change.
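    For orientation, the factor-product structure of the USLE, and the runoff-dependent event erosivity that distinguishes the USLE-M, can be sketched as follows; the factor values are hypothetical.

```python
# Sketch of the models named in the abstract. The USLE predicts long-term
# average annual soil loss as a product of factors; the USLE-M variant makes
# event erosivity depend on the runoff ratio, which is what lets the model
# respond to event-scale (and hence climate-driven) variations in runoff.

def usle(R, K, L, S, C, P):
    """Universal Soil Loss Equation: A = R * K * L * S * C * P.

    R: rainfall-runoff erosivity, K: soil erodibility, L, S: slope length and
    steepness factors, C: cover-management, P: support practice.
    """
    return R * K * L * S * C * P

def usle_m_event_erosivity(runoff_ratio, EI30):
    """USLE-M event erosivity: R_e = Q_R * EI30, where Q_R is the event
    runoff ratio (runoff depth / rainfall depth)."""
    return runoff_ratio * EI30

# Illustrative (hypothetical) factor values for a bare fallow plot (C = P = 1):
A = usle(R=1500.0, K=0.03, L=1.2, S=1.1, C=1.0, P=1.0)
print(f"predicted average annual soil loss: {A:.1f} (mass per unit area)")
```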

  5. A Morphing Technique Applied to Lung Motions in Radiotherapy: Preliminary Results

    Directory of Open Access Journals (Sweden)

    R. Laurent

    2010-01-01

    Full Text Available Organ motion leads to dosimetric uncertainties during a patient’s treatment. Much work has been done to quantify the dosimetric effects of lung movement during radiation treatment, and there is a particular need for a good description and prediction of organ motion. To describe lung motion more precisely, we have examined the possibility of using a computer technique: a morphing algorithm. Morphing is an iterative method which consists of blending one image into another. To evaluate the use of morphing, a four-dimensional computed tomography (4DCT) acquisition of a patient was performed. The lungs were automatically segmented for the different phases, and morphing was performed using the end-inspiration and end-expiration phase scans only. Intermediate morphing files were compared with the intermediate 4DCT images. The results showed good agreement between morphing images and 4DCT images: fewer than 2% of the 512 by 256 voxels were wrongly classified as belonging/not belonging to a lung section. This paper presents preliminary results, and our morphing algorithm needs improvement. We can infer that morphing offers considerable advantages in terms of radiation protection of the patient during the diagnosis phase, handling of artifacts, definition of organ contours and description of organ motion.
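    The idea of generating intermediate shapes by blending the two extreme phases can be illustrated with a toy shape-interpolation sketch (signed-distance blending on binary lung masks). The paper's iterative morphing algorithm is more elaborate; this stand-in only shows how intermediate segmentations arise from the two end phases.

```python
import math

# Toy illustration of morphing between two respiratory phases: interpolate the
# signed distance fields of the end-expiration and end-inspiration lung masks
# and re-threshold at zero. This is a minimal stand-in, not the paper's method.

def signed_distance(mask):
    """Brute-force signed distance: negative inside the shape, positive outside."""
    rows, cols = len(mask), len(mask[0])
    cells = [(i, j) for i in range(rows) for j in range(cols)]
    sd = [[0.0] * cols for _ in range(rows)]
    for i, j in cells:
        d = min(math.hypot(i - a, j - b)
                for a, b in cells if mask[a][b] != mask[i][j])
        sd[i][j] = -d if mask[i][j] else d
    return sd

def morph(mask_a, mask_b, t):
    """Blend the two signed distance fields with weight t and threshold at zero."""
    da, db = signed_distance(mask_a), signed_distance(mask_b)
    return [[1 if (1 - t) * da[i][j] + t * db[i][j] <= 0 else 0
             for j in range(len(mask_a[0]))] for i in range(len(mask_a))]

# End-expiration: a single-voxel "lung"; end-inspiration: an expanded diamond.
N = 7
expiration = [[1 if (i, j) == (3, 3) else 0 for j in range(N)] for i in range(N)]
inspiration = [[1 if abs(i - 3) + abs(j - 3) <= 2 else 0 for j in range(N)]
               for i in range(N)]

mid = morph(expiration, inspiration, t=0.5)
area = lambda m: sum(map(sum, m))
print(area(expiration), area(mid), area(inspiration))  # intermediate area in between
```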

  6. Combined rock-physical modelling and seismic inversion techniques for characterisation of stacked sandstone reservoir

    NARCIS (Netherlands)

    Justiniano, A.; Jaya, Y.; Diephuis, G.; Veenhof, R.; Pringle, T.

    2015-01-01

    The objective of the study is to characterise the Triassic massive stacked sandstone deposits of the Main Buntsandstein Subgroup at Block Q16 located in the West Netherlands Basin. The characterisation was carried out through combining rock-physics modelling and seismic inversion techniques. The

  7. Towards Understanding Soil Forming in Santa Clotilde Critical Zone Observatory: Modelling Soil Mixing Processes in a Hillslope using Luminescence Techniques

    Science.gov (United States)

    Sanchez, A. R.; Laguna, A.; Reimann, T.; Giráldez, J. V.; Peña, A.; Wallinga, J.; Vanwalleghem, T.

    2017-12-01

    Different geomorphological processes, such as bioturbation and erosion-deposition, intervene in soil formation and landscape evolution. The latter processes produce the alteration and degradation of the materials that compose the rocks. The degree to which the bedrock is weathered is estimated through the fraction of the bedrock that is mixed into the soil, either vertically or laterally. This study presents an analytical solution of the diffusion-advection equation to quantify bioturbation and erosion-deposition rates in profiles along a catena. The model is calibrated with age-depth data obtained from the profiles using luminescence dating based on single-grain infrared stimulated luminescence (IRSL). Luminescence techniques provide a direct measurement of the bioturbation and erosion-deposition processes. The single-grain IRSL technique was applied to feldspar minerals in fifteen samples collected from four soil profiles at different depths along a catena in the Santa Clotilde Critical Zone Observatory, Cordoba province, SE Spain. A sensitivity analysis was performed to assess the importance of the parameters in the analytical model, and an uncertainty analysis was carried out to establish the best fit of the parameters to the measured age-depth data. The results indicate a diffusion constant at 20 cm depth of 47 mm2/year in the hill-base profile and 4.8 mm2/year in the hilltop profile. The model has high uncertainty in the estimation of erosion and deposition rates. This study reveals the potential of single-grain luminescence techniques to quantify pedoturbation processes.
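    The link between diffusive soil mixing and luminescence age-depth profiles can be illustrated with a toy random-walk simulation (not the paper's analytical solution): grains are bleached, i.e. their apparent age is reset, whenever they reach the surface, so apparent age increases with depth. All parameter values are illustrative, not the calibrated ones.

```python
import random

# Toy random-walk illustration of why luminescence ages increase with depth
# under diffusive bioturbation: grains random-walk vertically, and a grain's
# apparent age resets whenever it is bleached at the surface.
# All parameter values are illustrative, not the study's calibrated values.

random.seed(42)
DEPTH = 50          # soil column, in cells
STEPS = 10000       # time steps per grain
P_MOVE = 0.5        # probability of moving one cell up or down per step

ages = []           # (final depth, apparent age) for each grain
for _ in range(200):
    z = random.randrange(DEPTH)               # start at a random depth
    age = 0
    for _ in range(STEPS):
        age += 1
        if random.random() < P_MOVE:
            z = max(0, min(DEPTH - 1, z + random.choice((-1, 1))))
        if z == 0:
            age = 0                           # bleached at the surface
    ages.append((z, age))

shallow = [a for z, a in ages if z < DEPTH // 2]
deep = [a for z, a in ages if z >= DEPTH // 2]
print("mean apparent age, shallow half:", sum(shallow) / len(shallow))
print("mean apparent age, deep half:   ", sum(deep) / len(deep))
```

Fitting an age-depth model of this kind to measured single-grain ages is what yields the depth-dependent diffusion constants reported in the abstract.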

  8. Models of signal validation using artificial intelligence techniques applied to a nuclear reactor

    International Nuclear Information System (INIS)

    Oliveira, Mauro V.; Schirru, Roberto

    2000-01-01

    This work presents two models of signal validation in which the analytical redundancy of the monitored signals from a nuclear plant is provided by neural networks. In one model the analytical redundancy is provided by a single neural network, while in the other it is provided by several neural networks, each one working in a specific part of the entire operating region of the plant. Four clustering techniques were tested to separate the entire operating region into several specific regions. Additional information on the signals' reliability is supplied by a fuzzy inference system. The models were implemented in C language and tested with signals acquired from the Angra I nuclear power plant, from startup to 100% of power. (author)

  9. Spotted star light curve numerical modeling technique and its application to HII 1883 surface imaging

    Science.gov (United States)

    Kolbin, A. I.; Shimansky, V. V.

    2014-04-01

    We developed a code for imaging the surfaces of spotted stars with a set of circular spots having a uniform temperature distribution. The flux from the spotted surface is computed by partitioning the spots into elementary areas. The code takes into account the passage of spots behind the visible stellar limb, limb darkening, and the overlapping of spots. Modeling of the light curves draws on recent results from the theory of stellar atmospheres to account for the temperature dependence of the flux intensity and of the limb-darkening coefficients. The search for spot parameters is based on the analysis of several light curves obtained in different photometric bands. We test our technique by applying it to HII 1883.
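    A minimal forward model of this kind, one circular spot with a linear limb-darkening law and a small-spot approximation for the projected area, can be sketched as below; this toy version ignores spot overlap and partial limb crossing, which the paper's code handles, and all parameter values are illustrative.

```python
import math

# Toy spotted-star light curve: one small circular spot of uniform temperature,
# linear limb darkening, small-spot approximation for the projected area.
# Spot overlap and partial limb crossing are deliberately ignored here.

def light_curve(spot_lat, spot_lon0, spot_radius, inclination,
                temp_ratio4, u=0.6, n_phase=100):
    """Relative flux vs. rotation phase for one small circular spot.

    spot_lat: spot latitude; inclination: angle between rotation axis and the
    line of sight; temp_ratio4: (T_spot / T_phot)**4 bolometric contrast;
    u: linear limb-darkening coefficient. All angles in radians.
    """
    fluxes = []
    for p in range(n_phase):
        lon = spot_lon0 + 2 * math.pi * p / n_phase
        # cosine of the angle between the spot normal and the line of sight
        mu = (math.sin(inclination) * math.cos(spot_lat) * math.cos(lon)
              + math.cos(inclination) * math.sin(spot_lat))
        if mu <= 0:                        # spot is on the far side of the star
            fluxes.append(1.0)
            continue
        limb = 1.0 - u * (1.0 - mu)        # linear limb-darkening law at the spot
        frac_area = spot_radius ** 2 * mu  # projected spot area / disc area
        # disc-integrated flux of a limb-darkened star carries the (1 - u/3) factor
        dip = frac_area * limb * (1.0 - temp_ratio4) / (1.0 - u / 3.0)
        fluxes.append(1.0 - dip)
    return fluxes

lc = light_curve(spot_lat=math.radians(30), spot_lon0=0.0,
                 spot_radius=math.radians(10), inclination=math.radians(70),
                 temp_ratio4=0.8 ** 4)
print(min(lc), max(lc))   # dip when the spot faces us, full flux when hidden
```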

  10. Application of neural network technique to determine a corrector surface for global geopotential model using GPS/levelling measurements in Egypt

    Science.gov (United States)

    Elshambaky, Hossam Talaat

    2018-01-01

    Owing to the appearance of many global geopotential models, it is necessary to determine the most appropriate model for use in Egyptian territory. In this study, we investigate three global models, namely EGM2008, EIGEN-6c4, and GECO. We use five mathematical transformation techniques, i.e., polynomial expression, exponential regression, least-squares collocation, multilayer feed-forward neural network, and radial basis neural networks, to make the conversion from the regional geometric geoid to the global geoid models and vice versa. From a statistical comparison based on quality indexes between the transformation techniques, we find that the multilayer feed-forward neural network with two neurons is the most accurate of the examined transformation techniques, and that, based on the mean tide condition, EGM2008 is the most suitable global geopotential model for use in Egyptian territory to date. The final product of this study is the corrector surface used to facilitate the transformation between the regional geometric geoid model and the global geoid model.
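    The corrector-surface idea can be sketched with one of the techniques the study compares, a radial-basis interpolant fitted to the residuals between the geometric geoid and a global model at GPS/levelling benchmarks. The coordinates and residual values below are made up purely for illustration.

```python
import math

# Sketch of a corrector surface: at GPS/levelling benchmarks the residual
# between the geometric geoid undulation and the global-model undulation
# (e.g. EGM2008) is known; a Gaussian radial-basis interpolant fitted to those
# residuals then corrects the global model anywhere in the territory.
# Benchmark coordinates and residuals are hypothetical.

def solve(A, b):
    """Tiny Gauss-Jordan elimination with partial pivoting (solves Ax = b)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_rbf(points, residuals, eps=1.0):
    """Fit Gaussian RBF weights so the surface interpolates the residuals."""
    phi = lambda a, b: math.exp(-eps * ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2))
    A = [[phi(p, q) for q in points] for p in points]
    w = solve(A, residuals)
    return lambda x: sum(wi * phi(x, p) for wi, p in zip(w, points))

# Hypothetical benchmarks: (lat, lon) and residual N_geometric - N_global in m.
bench = [(30.0, 31.0), (30.5, 31.4), (29.8, 30.6), (30.2, 30.9)]
resid = [0.35, 0.42, 0.28, 0.33]
corrector = fit_rbf(bench, resid)

print([round(corrector(p), 3) for p in bench])   # reproduces the residuals
print(round(corrector((30.1, 31.0)), 3))         # interpolates in between
```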

  11. Modern techniques in galaxy kinematics : Results from planetary nebula spectroscopy

    NARCIS (Netherlands)

    Romanowsky, AJ; Douglas, NG; Kuijken, K; Arnaboldi, M; Gerssen, J; Merrifield, MR; Kwok, S; Dopita, M; Sutherland, R

    2003-01-01

    We have observed planetary nebulae (PNe) in several early-type galaxies using new techniques on 4- to 8-meter-class telescopes. We obtain the first large data sets (greater than or similar to 100 velocities each) of PN kinematics in galaxies at greater than or similar to 15 Mpc, and present some

  12. A time-dependent event tree technique for modelling recovery operations

    International Nuclear Information System (INIS)

    Kohut, P.; Fitzpatrick, R.

    1991-01-01

    The development of a simplified time-dependent event tree methodology is presented. The technique is especially applicable to describing recovery operations in nuclear reactor accident scenarios initiated by support system failures. The event tree logic is constructed using time-dependent top events combined with a damage function that contains information about the final-state time behavior of the reactor core. Both the failure and the success states may be utilized in the analysis. The method is illustrated by modeling the loss of the service water function, with special emphasis on the RCP [reactor coolant pump] seal LOCA [loss of coolant accident] scenario. 5 refs., 2 figs., 2 tabs

  13. Construction of an experimental simplified model for determining of flow parameters in chemical reactors, using nuclear techniques

    International Nuclear Information System (INIS)

    Araujo Paiva, J.A. de.

    1981-03-01

    The development of a simplified experimental model for the investigation of nuclear techniques to determine the solid-phase parameters in gas-solid flows is presented. A method for the measurement of the solid-phase residence time inside a chemical reactor of the type used in fluid catalytic cracking is described. An appropriate radioactive labelling technique for the solid phase and the construction of an electronic timing circuit were the principal stages in the definition of the measurement technique. (Author) [pt

  14. Application of Tissue Culture and Transformation Techniques in Model Species Brachypodium distachyon.

    Science.gov (United States)

    Sogutmaz Ozdemir, Bahar; Budak, Hikmet

    2018-01-01

    Brachypodium distachyon has recently emerged as a model plant species for the grass family (Poaceae), which includes the major cereal crops and forage grasses. One of the important traits of a model species is its capacity to be transformed and its ease of growth both in tissue culture and under greenhouse conditions. Hence, plant transformation technology is crucial for improvements in agricultural studies, both for the study of new genes and for the production of new transgenic plant species. In this chapter, we review an efficient tissue culture protocol and two different transformation systems for Brachypodium using the most commonly preferred gene transfer techniques in plant species: the microprojectile bombardment method (biolistics) and Agrobacterium-mediated transformation. In plant transformation studies, immature embryos are frequently used as explant material owing to their higher transformation efficiency and regeneration capacity; however, mature embryos are available throughout the year, in contrast to immature embryos. We explain a tissue culture protocol for Brachypodium using mature embryos of selected inbred lines from our collection. Embryogenic calluses obtained from mature embryos are used to transform Brachypodium with both transformation techniques, revised from previously studied protocols applied in the grasses, for example by applying vacuum infiltration, different wounding treatments, modifications to the inoculation and cocultivation steps, or optimization of the bombardment parameters.

  15. Impact of head models in N170 component source imaging: results in control subjects and ADHD patients

    International Nuclear Information System (INIS)

    Beltrachini, L; Blenkmann, A; Ellenrieder, N von; Muravchik, C H; Petroni, A; Urquina, H; Manes, F; Ibáñez, A

    2011-01-01

    A major goal of event-related potential studies is source localization: identifying the loci of neural activity that give rise to a particular voltage distribution measured on the surface of the scalp. In this paper we evaluate the effect of the head model adopted when estimating the N170 component source in attention deficit hyperactivity disorder (ADHD) patients and control subjects, considering face and word stimuli. The standardized low-resolution brain electromagnetic tomography algorithm (sLORETA) is used to compare the three-shell spherical head model with a fully realistic model based on the ICBM-152 atlas. We compare their variance in source estimation and analyze the impact on N170 source localization. Results show that the often-used three-shell spherical model may lead to erroneous solutions, especially in ADHD patients, so its use is not recommended. Our results also suggest that the N170 sources are mainly located in the right occipital fusiform gyrus for face stimuli and in the left occipital fusiform gyrus for word stimuli, for both control subjects and ADHD patients. We also found a notable decrease in the estimated N170 source amplitude in ADHD patients, making it a plausible marker of the disease.

  16. Impact of head models in N170 component source imaging: results in control subjects and ADHD patients

    Science.gov (United States)

    Beltrachini, L.; Blenkmann, A.; von Ellenrieder, N.; Petroni, A.; Urquina, H.; Manes, F.; Ibáñez, A.; Muravchik, C. H.

    2011-12-01

    A major goal of event-related potential studies is source localization: identifying the loci of neural activity that give rise to a particular voltage distribution measured on the surface of the scalp. In this paper we evaluate the effect of the head model adopted when estimating the N170 component source in attention deficit hyperactivity disorder (ADHD) patients and control subjects, considering face and word stimuli. The standardized low-resolution brain electromagnetic tomography algorithm (sLORETA) is used to compare the three-shell spherical head model with a fully realistic model based on the ICBM-152 atlas. We compare their variance in source estimation and analyze the impact on N170 source localization. Results show that the often-used three-shell spherical model may lead to erroneous solutions, especially in ADHD patients, so its use is not recommended. Our results also suggest that the N170 sources are mainly located in the right occipital fusiform gyrus for face stimuli and in the left occipital fusiform gyrus for word stimuli, for both control subjects and ADHD patients. We also found a notable decrease in the estimated N170 source amplitude in ADHD patients, making it a plausible marker of the disease.

  17. Impact of head models in N170 component source imaging: results in control subjects and ADHD patients

    Energy Technology Data Exchange (ETDEWEB)

    Beltrachini, L; Blenkmann, A; Ellenrieder, N von; Muravchik, C H [Laboratory of Industrial Electronics, Control and Instrumentation (LEICI), National University of La Plata (Argentina); Petroni, A [Integrative Neuroscience Laboratory, Physics Department, University of Buenos Aires, Buenos Aires (Argentina); Urquina, H; Manes, F; Ibanez, A [Institute of Cognitive Neurology (INECO) and Institute of Neuroscience, Favaloro University, Buenos Aires (Argentina)

    2011-12-23

    A major goal of event-related potential studies is source localization: identifying the loci of neural activity that give rise to a particular voltage distribution measured on the surface of the scalp. In this paper we evaluate the effect of the head model adopted when estimating the N170 component source in attention deficit hyperactivity disorder (ADHD) patients and control subjects, considering face and word stimuli. The standardized low-resolution brain electromagnetic tomography algorithm (sLORETA) is used to compare the three-shell spherical head model with a fully realistic model based on the ICBM-152 atlas. We compare their variance in source estimation and analyze the impact on N170 source localization. Results show that the often-used three-shell spherical model may lead to erroneous solutions, especially in ADHD patients, so its use is not recommended. Our results also suggest that the N170 sources are mainly located in the right occipital fusiform gyrus for face stimuli and in the left occipital fusiform gyrus for word stimuli, for both control subjects and ADHD patients. We also found a notable decrease in the estimated N170 source amplitude in ADHD patients, making it a plausible marker of the disease.

  18. Nuclear techniques to assess irrigation schedules for field crops. Results of a co-ordinated research programme

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-06-01

    This TECDOC summarizes the results of a Co-ordinated Research Programme on The Use of Nuclear and Related Techniques in Assessment of Irrigation Schedules of Field Crops to Increase Effective Use of Water in Irrigation Projects. The programme was carried out between 1990 and 1995 through the technical co-ordination of the Soil Fertility, Irrigation and Crop Production Section of the Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture of the International Atomic Energy Agency. Fourteen Member States of the IAEA and FAO carried out a series of field experiments aimed at improving irrigation water use efficiency through a type of irrigation scheduling known as deficit irrigation. Refs, figs, tabs.

  19. Nuclear techniques to assess irrigation schedules for field crops. Results of a co-ordinated research programme

    International Nuclear Information System (INIS)

    1996-06-01

    This TECDOC summarizes the results of a Co-ordinated Research Programme on The Use of Nuclear and Related Techniques in Assessment of Irrigation Schedules of Field Crops to Increase Effective Use of Water in Irrigation Projects. The programme was carried out between 1990 and 1995 through the technical co-ordination of the Soil Fertility, Irrigation and Crop Production Section of the Joint FAO/IAEA Division of Nuclear Techniques in Food and Agriculture of the International Atomic Energy Agency. Fourteen Member States of the IAEA and FAO carried out a series of field experiments aimed at improving irrigation water use efficiency through a type of irrigation scheduling known as deficit irrigation. Refs, figs, tabs

  20. A stochastic delay model for pricing debt and equity: Numerical techniques and applications

    Science.gov (United States)

    Tambue, Antoine; Kemajou Brown, Elisabeth; Mohammed, Salah

    2015-01-01

    Delayed nonlinear models for pricing corporate liabilities and European options were recently developed. Using a self-financing strategy and duplication, we derived a random partial differential equation (RPDE) whose solutions describe the evolution of the debt and equity values of a corporate in the last delay-period interval in the accompanying paper (Kemajou et al., 2012) [14]. In this paper, we provide robust numerical techniques to solve the delayed nonlinear model for the corporate value, along with the corresponding RPDEs modeling the debt and equity values of the corporate. Using financial data from some firms, we forecast and compare numerical solutions from both the nonlinear delayed model and the classical Merton model with the real corporate data. This comparison suggests that in corporate finance the past dependence of the firm value process may be an important feature and therefore should not be ignored.
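    The classical Merton model used here as the benchmark treats equity as a European call on firm value, with debt as the remainder; a minimal sketch with illustrative figures:

```python
import math

# Classical Merton structural model (the benchmark in the abstract): equity is
# a European call on the firm's assets V with strike equal to the face value of
# debt D; the risky debt value is then B = V - E. Figures are illustrative.

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def merton_equity_debt(V, D, r, sigma, T):
    """Black-Scholes call on firm value V with strike D, maturity T."""
    d1 = (math.log(V / D) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    equity = V * norm_cdf(d1) - D * math.exp(-r * T) * norm_cdf(d2)
    debt = V - equity
    return equity, debt

E, B = merton_equity_debt(V=100.0, D=80.0, r=0.05, sigma=0.25, T=1.0)
print(round(E, 2), round(B, 2))   # equity and risky-debt values
```

The delayed model of the paper replaces the Markovian firm-value dynamics behind this formula with past-dependent dynamics, which is exactly what the comparison in the abstract probes.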

  1. Towards representing human behavior and decision making in Earth system models - an overview of techniques and approaches

    Science.gov (United States)

    Müller-Hansen, Finn; Schlüter, Maja; Mäs, Michael; Donges, Jonathan F.; Kolb, Jakob J.; Thonicke, Kirsten; Heitzig, Jobst

    2017-11-01

    Today, humans have a critical impact on the Earth system and vice versa, which can generate complex feedback processes between social and ecological dynamics. Integrating human behavior into formal Earth system models (ESMs), however, requires crucial modeling assumptions about actors and their goals, behavioral options, and decision rules, as well as modeling decisions regarding human social interactions and the aggregation of individuals' behavior. Here, we review existing modeling approaches and techniques from various disciplines and schools of thought dealing with human behavior at different levels of decision making. We demonstrate modelers' often vast degrees of freedom but also seek to make modelers aware of the often crucial consequences of seemingly innocent modeling assumptions. After discussing which socioeconomic units are potentially important for ESMs, we compare models of individual decision making that correspond to alternative behavioral theories and that make diverse modeling assumptions about individuals' preferences, beliefs, decision rules, and foresight. We review approaches to model social interaction, covering game theoretic frameworks, models of social influence, and network models. Finally, we discuss approaches to studying how the behavior of individuals, groups, and organizations can aggregate to complex collective phenomena, discussing agent-based, statistical, and representative-agent modeling and economic macro-dynamics. We illustrate the main ingredients of modeling techniques with examples from land-use dynamics as one of the main drivers of environmental change bridging local to global scales.

  2. Development of Ultrasonic Modeling Techniques for the Study of Crustal Inhomogeneities.

    Science.gov (United States)

    1983-08-01

    The layer material consisted of carnauba wax and silica powder. A 2% (by weight) amount of beeswax was added to the middle layer material to reduce the... hard carnauba wax dominates the Rayleigh velocity to a great extent; the Rayleigh... and tested to evaluate our seismic ultrasonic modeling technique. A 2.3 mm thick layer composed of the carnauba wax mixture was deposited on a

  3. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. 
The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  4. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    International Nuclear Information System (INIS)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-01-01

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. 
The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  5. Modeling fuel cell stack systems

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J H [Los Alamos National Lab., Los Alamos, NM (United States); Lalk, T R [Dept. of Mech. Eng., Texas A and M Univ., College Station, TX (United States)

    1998-06-15

    A technique for modeling fuel cell stacks is presented along with the results from an investigation designed to test the validity of the technique. The technique was specifically designed so that models developed using it can be used to determine the fundamental thermal-physical behavior of a fuel cell stack for any operating and design configuration. Such models would be useful tools for investigating fuel cell power system parameters. The modeling technique can be applied to any type of fuel cell stack for which performance data is available for a laboratory scale single cell. Use of the technique is demonstrated by generating sample results for a model of a Proton Exchange Membrane Fuel Cell (PEMFC) stack consisting of 125 cells each with an active area of 150 cm{sup 2}. A PEMFC stack was also used in the verification investigation. This stack consisted of four cells, each with an active area of 50 cm{sup 2}. Results from the verification investigation indicate that models developed using the technique are capable of accurately predicting fuel cell stack performance. (orig.)
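
    The scaling step at the heart of such a technique, from a laboratory single cell to a stack of identical cells in series, can be sketched as below. The cell-voltage expression here is a toy Tafel-plus-ohmic stand-in for measured single-cell data, not the model from the paper.

```python
import math

def stack_performance(j, n_cells=125, area_cm2=150.0):
    """Scale a single-cell polarization curve to a stack, assuming identical
    cells in series. The cell-voltage model (open-circuit voltage minus
    activation and ohmic losses) is an illustrative stand-in for measured
    laboratory data. j is current density in A/cm^2."""
    v_cell = 1.0 - 0.06 * math.log(max(j, 1e-3) / 1e-3) - 0.2 * j
    v_stack = n_cells * v_cell      # series cells add voltage
    current = j * area_cm2          # the same current flows through every cell
    return v_stack, v_stack * current  # stack voltage (V) and power (W)

v, p = stack_performance(0.5)  # operating point at 0.5 A/cm^2
```

    Sweeping j over a range of current densities yields the stack polarization and power curves used to explore operating and design configurations.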

  6. Evaluation of the influence of double and triple Gaussian proton kernel models on accuracy of dose calculations for spot scanning technique.

    Science.gov (United States)

    Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki

    2016-03-01

    The main purpose of this study was to present the results of beam modeling and to show how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for the spot scanning technique. The accuracy of the calculations was important for treatment planning software (TPS) because the energy, spot position, and absolute dose had to be determined by the TPS for the spot scanning technique. The dose distribution was calculated by convolving the in-air fluence with the dose kernel. The dose kernel was the in-water 3D dose distribution of an infinitesimal pencil beam and consisted of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region was important for the spot scanning technique because the dose distribution is formed by accumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were also prepared for comparison. The parameters of the lateral kernel model were derived by fitting a simulated in-water lateral dose profile induced by an infinitesimal proton beam, whose emittance was zero, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table. These stored parameters for each energy and depth in water were acquired from the look-up table when incorporated into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature. These were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations. The authors investigated the difference
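
    The kernel-fitting step described above can be sketched as a least-squares fit of a weighted sum of Gaussians to a lateral dose profile. The synthetic profile below stands in for the Monte Carlo data, and all parameter values are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gauss(r, w, s1, s2):
    """Weighted sum of two zero-mean radial Gaussians: a common
    parameterization of a pencil beam's lateral profile (narrow core
    plus a broad low-dose halo)."""
    g = lambda s: np.exp(-r**2 / (2.0 * s**2)) / (2.0 * np.pi * s**2)
    return w * g(s1) + (1.0 - w) * g(s2)

r = np.linspace(0.0, 50.0, 200)              # off-axis distance (mm)
profile = double_gauss(r, 0.9, 4.0, 15.0)    # stand-in for simulated MC data
popt, _ = curve_fit(double_gauss, r, profile, p0=[0.8, 3.0, 10.0])
```

    At each depth the fitted parameters (w, s1, s2) would be stored in the look-up table the abstract describes; a triple-Gaussian variant simply adds one more weighted term to the model function.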

  7. Network modeling and analysis technique for the evaluation of nuclear safeguards systems effectiveness

    International Nuclear Information System (INIS)

    Grant, F.H. III; Miner, R.J.; Engi, D.

    1978-01-01

    Nuclear safeguards systems are concerned with the physical protection and control of nuclear materials. The Safeguards Network Analysis Procedure (SNAP) provides a convenient and standard analysis methodology for the evaluation of safeguards system effectiveness. This is achieved through a standard set of symbols which characterize the various elements of safeguards systems and an analysis program to execute simulation models built using the SNAP symbology. The reports provided by the SNAP simulation program enable analysts to evaluate existing sites as well as alternative design possibilities. This paper describes the SNAP modeling technique and provides an example illustrating its use

  8. Modeling and numerical techniques for high-speed digital simulation of nuclear power plants

    International Nuclear Information System (INIS)

    Wulff, W.; Cheng, H.S.; Mallen, A.N.

    1987-01-01

    Conventional computing methods are contrasted with newly developed high-speed and low-cost computing techniques for simulating normal and accidental transients in nuclear power plants. Six principles are formulated for cost-effective high-fidelity simulation with emphasis on modeling of transient two-phase flow coolant dynamics in nuclear reactors. Available computing architectures are characterized. It is shown that the combination of the newly developed modeling and computing principles with the use of existing special-purpose peripheral processors is capable of achieving low-cost and high-speed simulation with high-fidelity and outstanding user convenience, suitable for detailed reactor plant response analyses

  9. Network modeling and analysis technique for the evaluation of nuclear safeguards systems effectiveness

    International Nuclear Information System (INIS)

    Grant, F.H. III; Miner, R.J.; Engi, D.

    1979-02-01

    Nuclear safeguards systems are concerned with the physical protection and control of nuclear materials. The Safeguards Network Analysis Procedure (SNAP) provides a convenient and standard analysis methodology for the evaluation of safeguards system effectiveness. This is achieved through a standard set of symbols which characterize the various elements of safeguards systems and an analysis program to execute simulation models built using the SNAP symbology. The reports provided by the SNAP simulation program enable analysts to evaluate existing sites as well as alternative design possibilities. This paper describes the SNAP modeling technique and provides an example illustrating its use

  10. Statistical techniques to extract information during SMAP soil moisture assimilation

    Science.gov (United States)

    Kolassa, J.; Reichle, R. H.; Liu, Q.; Alemohammad, S. H.; Gentine, P.

    2017-12-01

    Statistical techniques permit the retrieval of soil moisture estimates in a model climatology while retaining the spatial and temporal signatures of the satellite observations. As a consequence, the need for bias correction prior to an assimilation of these estimates is reduced, which could result in a more effective use of the independent information provided by the satellite observations. In this study, a statistical neural network (NN) retrieval algorithm is calibrated using SMAP brightness temperature observations and modeled soil moisture estimates (similar to those used to calibrate the SMAP Level 4 DA system). Daily values of surface soil moisture are estimated using the NN and then assimilated into the NASA Catchment model. The skill of the assimilation estimates is assessed based on a comprehensive comparison to in situ measurements from the SMAP core and sparse network sites as well as the International Soil Moisture Network. The NN retrieval assimilation is found to significantly improve the model skill, particularly in areas where the model does not represent processes related to agricultural practices. Additionally, the NN method is compared to assimilation experiments using traditional bias correction techniques. The NN retrieval assimilation is found to more effectively use the independent information provided by SMAP resulting in larger model skill improvements than assimilation experiments using traditional bias correction techniques.

  11. Modeling FWM and impairments aware amplifiers placement technique for an optical MAN/WAN: Inline amplifiers case

    Science.gov (United States)

    Singh, Gurpreet; Singh, Maninder Lal

    2015-08-01

    A new four wave mixing (FWM) model for an optical network with amplifiers, together with a comparative analysis of three proposed amplifier placement techniques, is presented in this paper. The FWM model is validated against experimentally measured data. The novelty of this model is that, on direct substitution of network parameters such as length, it works even for unequal inter-amplifier separations. The novelty of the analysis of the three schemes is that it offers a fair choice of amplifier placement methods for varied total system lengths. The appropriateness of these three schemes has been analyzed on the basis of critical system length, critical number of amplifiers and critical bit error rate (10^-9) in the presence of four wave mixing (FWM) and amplified spontaneous emission (ASE) noise. The analysis is illustrated with the example of a regenerative metropolitan area network (MAN). The results suggest that the decreasing fiber section scheme should be avoided for amplifier placement, while the IUFS and EFS schemes show their importance interchangeably for different sets of parameters.

  12. Sterile insect technique: A model for dose optimisation for improved sterile insect quality

    International Nuclear Information System (INIS)

    Parker, A.; Mehta, K.

    2007-01-01

    The sterile insect technique (SIT) is an environment-friendly pest control technique with application in the area-wide integrated control of key pests, including the suppression or elimination of introduced populations and the exclusion of new introductions. Reproductive sterility is normally induced by ionizing radiation, a convenient and consistent method that maintains a reasonable degree of competitiveness in the released insects. The cost and effectiveness of a control program integrating the SIT depend on the balance between sterility and competitiveness, but it appears that current operational programs with an SIT component are not achieving an appropriate balance. In this paper we discuss optimization of the sterilization process and present a simple model and procedure for determining the optimum dose. (author)
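
    A toy version of such a sterility-competitiveness trade-off (my illustrative formulation, not the authors' model) makes the optimization concrete: induced sterility saturates with dose while competitiveness declines, and the optimum dose maximizes their product rather than sterility alone.

```python
import numpy as np

def effective_sterility(dose, d0=40.0, comp_slope=0.004):
    """Illustrative trade-off: the fraction sterilized saturates as
    1 - exp(-dose/d0) while the competitiveness of released insects
    declines linearly with dose (both parameters are hypothetical)."""
    sterility = 1.0 - np.exp(-dose / d0)
    competitiveness = np.clip(1.0 - comp_slope * dose, 0.0, 1.0)
    return sterility * competitiveness

doses = np.linspace(0.0, 250.0, 2501)   # candidate doses (Gy)
scores = effective_sterility(doses)
optimum = doses[np.argmax(scores)]      # an interior optimum, not the maximal dose
```

    The point of the sketch is qualitative: because competitiveness falls with dose, the product peaks well below the dose giving full sterility, which is the balance the abstract argues operational programs are missing.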

  13. Mathematical Foundation Based Inter-Connectivity modelling of Thermal Image processing technique for Fire Protection

    Directory of Open Access Journals (Sweden)

    Sayantan Nath

    2015-09-01

    Full Text Available In this paper, the integration between multiple image processing functions and their statistical parameters for an intelligent alarm-series-based fire detection system is presented. The proper inter-connectivity mapping between imagery processing elements, based on a classification factor for temperature monitoring and a multilevel intelligent alarm sequence, is introduced through an abstract canonical approach. The flow of image processing components between the core implementation of an intelligent alarming system with temperature-wise area segmentation as well as boundary detection techniques is not yet fully explored in the present era of thermal imaging. In light of the analytical perspective of convolutive functionalism in thermal imaging, an abstract-algebra-based inter-mapping model is discussed between event-calculus-supported DAGSVM classification, for step-by-step generation of the alarm series with a gradual monitoring technique, and segmentation of regions with their affected boundaries in a thermographic image of coal with respect to temperature distinctions. The connectedness of the multifunctional image processing operations of a compatible fire protection system with a proper monitoring sequence is investigated here. The mathematical models representing the relation between the temperature-affected areas and their boundaries in the obtained thermal image, defined in partial-derivative fashion, are the core contribution of this study. The thermal image of a coal sample was obtained in a real-life scenario with a self-assembled thermographic camera. The amalgamation of area segmentation, boundary detection and the alarm series is described in abstract algebra. The principal objective of this paper is to understand the dependency pattern and working principles of the image processing components and to structure an inter-connected modelling technique for those components with the help of a mathematical foundation.
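
    The temperature-wise area segmentation feeding the alarm series can be sketched as simple band thresholding of a thermal frame; the band limits below are hypothetical, not the study's calibrated values.

```python
import numpy as np

def temperature_zones(thermal, thresholds=(40.0, 80.0, 120.0)):
    """Segment a thermal image (degrees C per pixel) into alarm zones:
    0 = normal, then one alarm level per exceeded threshold."""
    zones = np.zeros(thermal.shape, dtype=int)
    for level, t in enumerate(thresholds, start=1):
        zones[thermal >= t] = level
    return zones

frame = np.array([[25.0, 45.0],
                  [90.0, 130.0]])      # tiny stand-in thermographic frame
zones = temperature_zones(frame)       # [[0, 1], [2, 3]]
```

    Each zone level would then drive one step of the multilevel alarm sequence; boundary detection operates on the edges between adjacent zone labels.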

  14. Techniques for Automated Performance Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Marcus, Ryan C. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2014-09-02

    The performance of a particular HPC code depends on a multitude of variables, including compiler selection, optimization flags, OpenMP pool size, file system load, memory usage, MPI configuration, etc. As a result of this complexity, current predictive models have limited applicability, especially at scale. We present a formulation of scientific codes, nodes, and clusters that reduces complex performance analysis to well-known mathematical techniques. Building accurate predictive models and enhancing our understanding of scientific codes at scale is an important step towards exascale computing.
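
    A minimal predictive model of the kind discussed, fit with ordinary least squares over hypothetical measurements (all numbers below are invented for illustration), looks like:

```python
import numpy as np

# hypothetical runs: (OpenMP threads, problem size) -> runtime in seconds
runs = np.array([[1, 100], [2, 100], [4, 100], [8, 100],
                 [1, 200], [2, 200], [4, 200], [8, 200]], dtype=float)
runtime = np.array([10.1, 5.3, 2.9, 1.8, 20.3, 10.4, 5.6, 3.2])

# model: runtime ~ a * (size / threads) + b, i.e. ideal strong scaling
# of the parallel work plus a fixed serial overhead
features = np.column_stack([runs[:, 1] / runs[:, 0], np.ones(len(runtime))])
(a, b), *_ = np.linalg.lstsq(features, runtime, rcond=None)
predict = lambda threads, size: a * size / threads + b
```

    Real codes need many more variables (compiler, flags, file system load, MPI configuration), which is exactly why the formulation the abstract proposes reduces that complexity to well-known techniques such as this one.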

  15. Tsunami Hazard Preventing Based Land Use Planning Model Using GIS Techniques in Muang Krabi, Thailand

    Directory of Open Access Journals (Sweden)

    Abdul Salam Soomro

    2012-10-01

    Full Text Available The terrible tsunami disaster of 26 December 2004 hit Krabi, one of the ecotourist and very fascinating provinces of southern Thailand, as well as its neighbouring regions, e.g. Phangna and Phuket, devastating human lives, coastal communications and economic activities. This research study aimed to generate a tsunami hazard preventing based land use planning model using GIS (Geographical Information Systems), following a hazard suitability analysis approach. Different triggering factors, e.g. elevation, proximity to the shoreline, population density, mangrove, forest, stream and road, have been used based on the land use zoning criteria. Those criteria have been weighted using the Saaty scale of importance, one of the mathematical techniques. The model has been classified according to the land suitability classification. Various GIS techniques, namely subsetting, spatial analysis, map difference and data conversion, have been used. The model has been generated with five categories, namely high, moderate, low, very low and not suitable regions, each illustrated with an appropriate definition for decision makers to redevelop the region.
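
    The Saaty-scale weighting step can be sketched as the standard AHP principal-eigenvector computation; the 3-factor comparison matrix below is illustrative, not taken from the study.

```python
import numpy as np

def ahp_weights(pairwise):
    """Factor weights from a Saaty pairwise-comparison matrix via its
    principal eigenvector (the standard AHP prioritization step)."""
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    w = np.abs(principal)
    return w / w.sum()

# e.g. elevation vs. proximity to shoreline vs. population density
A = [[1.0,   3.0,   5.0],
     [1/3.0, 1.0,   3.0],
     [1/5.0, 1/3.0, 1.0]]
weights = ahp_weights(A)   # descending importance, summing to 1
```

    The resulting weights would then score each raster cell's triggering factors before classifying cells into the five suitability categories.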

  16. Tsunami hazard preventing based land use planning model using GIS technique in Muang Krabi, Thailand

    International Nuclear Information System (INIS)

    Soormo, A.S.

    2012-01-01

    The terrible tsunami disaster of 26 December 2004 hit Krabi, one of the ecotourist and very fascinating provinces of southern Thailand, as well as its neighbouring regions, e.g. Phangna and Phuket, devastating human lives, coastal communications and economic activities. This research study aimed to generate a tsunami hazard preventing based land use planning model using GIS (Geographical Information Systems), following a hazard suitability analysis approach. Different triggering factors, e.g. elevation, proximity to the shoreline, population density, mangrove, forest, stream and road, have been used based on the land use zoning criteria. Those criteria have been weighted using the Saaty scale of importance, one of the mathematical techniques. The model has been classified according to the land suitability classification. Various GIS techniques, namely subsetting, spatial analysis, map difference and data conversion, have been used. The model has been generated with five categories, namely high, moderate, low, very low and not suitable regions, each illustrated with an appropriate definition for decision makers to redevelop the region. (author)

  17. Scapular flap for maxillectomy defect reconstruction and preliminary results using three-dimensional modeling.

    Science.gov (United States)

    Modest, Mara C; Moore, Eric J; Abel, Kathryn M Van; Janus, Jeffrey R; Sims, John R; Price, Daniel L; Olsen, Kerry D

    2017-01-01

    Discuss current techniques utilizing the scapular tip and subscapular system for free tissue reconstruction of maxillary defects and highlight the impact of medical modeling on these techniques with a case series. Case review series at an academic hospital of patients undergoing maxillectomy + thoracodorsal scapula composite free flap (TSCF) reconstruction. Three-dimensional (3D) models were used in the last five cases. 3D modeling, surgical, functional, and aesthetic outcomes were reviewed. Nine patients underwent TSCF reconstruction for maxillectomy defects (median age = 43 years; range, 19-66 years). Five patients (55%) had a total maxillectomy (TM) ± orbital exenteration, whereas four patients (44%) underwent subtotal palatal maxillectomy. For TM, the contralateral scapula tip was positioned with its natural concavity recreating facial contour. The laterally based vascular pedicle was ideally positioned for facial vessel anastomosis. For subtotal-palatal defect, an ipsilateral flap was harvested, but inset with the convex surface facing superiorly. Once 3D models were available from our anatomic modeling lab, they were used for intraoperative planning of the last five patients. Use of the model intraoperatively improved efficiency and allowed for better contouring/plating of the TSCF. At last follow-up, all patients had good functional outcomes. Aesthetic outcomes were more successful in patients where 3D-modeling was used (100% vs. 50%). There were no flap failures. Median follow-up >1 month was 5.2 months (range, 1-32.7 months). Reconstruction of maxillectomy defects is complex. Successful aesthetic and functional outcomes are critical to patient satisfaction. The TSCF is a versatile flap. Based on defect type, choosing laterality is crucial for proper vessel orientation and outcomes. The use of internally produced 3D models has helped refine intraoperative contouring and flap inset, leading to more successful outcomes. 4. Laryngoscope, 127:E8-E14

  18. DEVELOPMENT OF RESERVOIR CHARACTERIZATION TECHNIQUES AND PRODUCTION MODELS FOR EXPLOITING NATURALLY FRACTURED RESERVOIRS

    Energy Technology Data Exchange (ETDEWEB)

    Michael L. Wiggins; Raymon L. Brown; Faruk Civan; Richard G. Hughes

    2002-12-31

    For many years, geoscientists and engineers have undertaken research to characterize naturally fractured reservoirs. Geoscientists have focused on understanding the process of fracturing and the subsequent measurement and description of fracture characteristics. Engineers have concentrated on the fluid flow behavior in the fracture-porous media system and the development of models to predict the hydrocarbon production from these complex systems. This research attempts to integrate these two complementary views to develop a quantitative reservoir characterization methodology and flow performance model for naturally fractured reservoirs. The research has focused on estimating naturally fractured reservoir properties from seismic data, predicting fracture characteristics from well logs, and developing a naturally fractured reservoir simulator. It is important to develop techniques that can be applied to estimate the important parameters in predicting the performance of naturally fractured reservoirs. This project proposes a method to relate seismic properties to the elastic compliance and permeability of the reservoir based upon a sugar cube model. In addition, methods are presented to use conventional well logs to estimate localized fracture information for reservoir characterization purposes. The ability to estimate fracture information from conventional well logs is very important in older wells where data are often limited. Finally, a desktop naturally fractured reservoir simulator has been developed for the purpose of predicting the performance of these complex reservoirs. The simulator incorporates vertical and horizontal wellbore models, methods to handle matrix to fracture fluid transfer, and fracture permeability tensors. This research project has developed methods to characterize and study the performance of naturally fractured reservoirs that integrate geoscience and engineering data. This is an important step in developing exploitation strategies for
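
    The sugar cube idealization mentioned above is commonly paired with the cubic law for fracture flow. A hedged sketch of that relation (my simplification, not necessarily the project's exact formulation):

```python
def sugar_cube_permeability(aperture_m, spacing_m):
    """Cubic-law estimate of the equivalent permeability contributed by one
    set of parallel fractures in a regular 'sugar cube' block model:
    k = b**3 / (12 * s), with aperture b and fracture spacing s in metres."""
    return aperture_m**3 / (12.0 * spacing_m)

# 100-micron aperture, half-metre matrix blocks
k = sugar_cube_permeability(aperture_m=1e-4, spacing_m=0.5)  # in m^2
```

    Because permeability scales with the cube of aperture, small aperture changes inferred from seismic or log data translate into large flow-performance differences, which is why linking the two data types matters.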

  19. Double contrast barium enema: technique, indications, results and limitations of a conventional imaging methodology in the MDCT virtual endoscopy era.

    Science.gov (United States)

    Rollandi, Gian Andrea; Biscaldi, Ennio; DeCicco, Enzo

    2007-03-01

    The double contrast barium enema of the colon continues to be a widespread conventional radiological technique and allows for the diagnosis of neoplastic and inflammatory pathology. After the 1970s, a massive initiative was undertaken to simplify, perfect and codify the method of the double contrast barium enema: Altaras in Germany, Miller in the USA and Cittadini in Italy are responsible for the perfection of this technique over the last 30 years. Tailored patient preparation, a perfect execution technique and precise radiological documentation are essential steps to obtain a reliable examination. The main limitation of the double contrast enema is that it considers pathology only from the mucosal surface. In the evaluation of neoplastic pathology the main limitation concerns staging of the "T" parameter, and evaluation of the "N" and "M" parameters is even more limited. Today the double contrast technique continues to be a refined, sensitive and specific diagnostic method; however, its diagnostic results cannot compete with the new multislice CT techniques (CT enteroclysis and virtual colonoscopy), which can examine both the lumen and the wall of the colon. The double contrast enema is a cheap and simple examination, but in the near future a progressive substitution of conventional radiology by the new multislice techniques is predictable, because cross-sectional imaging is more frequently able to detect the causes of symptoms, whether of colonic or non-colonic origin.

  20. Double contrast barium enema: Technique, indications, results and limitations of a conventional imaging methodology in the MDCT virtual endoscopy era

    International Nuclear Information System (INIS)

    Rollandi, Gian Andrea; Biscaldi, Ennio; DeCicco, Enzo

    2007-01-01

    The double contrast barium enema of the colon continues to be a widespread conventional radiological technique and allows for the diagnosis of neoplastic and inflammatory pathology. After the 1970s, a massive initiative was undertaken to simplify, perfect and codify the method of the double contrast barium enema: Altaras in Germany, Miller in the USA and Cittadini in Italy are responsible for the perfection of this technique over the last 30 years. Tailored patient preparation, a perfect execution technique and precise radiological documentation are essential steps to obtain a reliable examination. The main limitation of the double contrast enema is that it considers pathology only from the mucosal surface. In the evaluation of neoplastic pathology the main limitation concerns staging of the 'T' parameter, and evaluation of the 'N' and 'M' parameters is even more limited. Today the double contrast technique continues to be a refined, sensitive and specific diagnostic method; however, its diagnostic results cannot compete with the new multislice CT techniques (CT enteroclysis and virtual colonoscopy), which can examine both the lumen and the wall of the colon. The double contrast enema is a cheap and simple examination, but in the near future a progressive substitution of conventional radiology by the new multislice techniques is predictable, because cross-sectional imaging is more frequently able to detect the causes of symptoms, whether of colonic or non-colonic origin.