WorldWideScience

Sample records for modeling technique based

  1. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
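
    A minimal sketch of the idea on simulated data, assuming an ordinary linear regression and a multiplier-resampling approximation of the zero-mean Gaussian process (the correction for parameter estimation used in the full method is omitted); all data and settings below are illustrative, not from the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      # Simulated data: the true relationship is quadratic, but a linear model is fitted,
      # so the cumulative-residual process should drift systematically.
      n = 200
      x = rng.uniform(0.0, 2.0, n)
      y = 1.0 + 0.5 * x + 0.8 * x**2 + rng.normal(0.0, 0.5, n)

      X = np.column_stack([np.ones(n), x])              # misspecified linear fit
      beta = np.linalg.lstsq(X, y, rcond=None)[0]
      resid = y - X @ beta

      order = np.argsort(x)
      W_obs = np.cumsum(resid[order])                   # cumulative sum of residuals over x

      # Zero-mean Gaussian analogues of the process under the assumed model,
      # generated here by simple multiplier resampling of the residuals.
      W_null = np.array([np.cumsum((resid * rng.normal(size=n))[order])
                         for _ in range(1000)])

      # Compare the observed supremum statistic with the simulated realizations.
      stat = np.max(np.abs(W_obs))
      p_value = np.mean(np.max(np.abs(W_null), axis=1) >= stat)
      print(f"sup|W| = {stat:.1f}, resampling p-value = {p_value:.3f}")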

  2. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and exhibit dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturizing real-world phenomena within the framework of a model in order to simulate them is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and applied to a wider range of applications than traditional simulation. A key challenge of ABMS, however, is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, finding appropriate validation techniques for ABM is necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  3. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Full Text Available Demand management (DM) is the process that helps companies sell the right product to the right customer, at the right time, and for the right price. Therefore, the challenge for any company is to determine how much to sell, at what price, and to which market segment, while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control-system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a dynamical system analogy based on active suspension, and a stability analysis is provided via the Lyapunov direct method.
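
    A minimal receding-horizon sketch of demand management with dynamic pricing via MPC, assuming a hypothetical first-order linear demand-price model; the coefficients, bounds and target below are illustrative, and the active-suspension analogy and Lyapunov analysis of the paper are not reproduced.

      import numpy as np
      from scipy.optimize import minimize

      # Hypothetical demand-price dynamics: d[k+1] = a*d[k] + b*p[k] + c
      # (all numbers are illustrative assumptions, not taken from the paper).
      a, b, c = 0.5, -0.4, 60.0        # persistence, price sensitivity, baseline demand
      horizon, d_ref = 10, 100.0       # prediction horizon and target demand level

      def predict(d0, prices):
          """Roll the demand model forward over the pricing horizon."""
          d, traj = d0, []
          for p in prices:
              d = a * d + b * p + c
              traj.append(d)
          return np.array(traj)

      def cost(prices, d0):
          """Track the target demand while penalizing abrupt price changes."""
          traj = predict(d0, prices)
          return np.sum((traj - d_ref) ** 2) + 0.1 * np.sum(np.diff(prices) ** 2)

      def mpc_step(d0):
          """Solve the finite-horizon problem and apply only the first price (receding horizon)."""
          res = minimize(cost, x0=np.full(horizon, 20.0), args=(d0,),
                         bounds=[(0.0, 50.0)] * horizon)
          return res.x[0]

      demand = 80.0
      for k in range(5):
          price = mpc_step(demand)
          demand = a * demand + b * price + c   # "plant" update (same model, for simplicity)
          print(f"step {k}: price = {price:5.2f}, demand = {demand:6.2f}")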

  4. Plasticity models of material variability based on uncertainty quantification techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jones, Reese E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Rizzi, Francesco [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Templeton, Jeremy Alan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ostien, Jakob [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2017-11-01

    The advent of fabrication techniques like additive manufacturing has focused attention on the considerable variability of material response due to defects and other micro-structural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. Lastly, we demonstrate that the new method provides predictive realizations that are superior to more traditional ones, and how these UQ techniques can be used in model selection and assessing the quality of calibrated physical parameters.

  5. Models, Web-Based Simulations, and Integrated Analysis Techniques for Improved Logistical Performance

    National Research Council Canada - National Science Library

    Hill, Raymond

    2001-01-01

    ... Laboratory, Logistics Research Division, Logistics Readiness Branch to propose a research agenda entitled, "Models, Web-based Simulations, and Integrated Analysis Techniques for Improved Logistical Performance...

  6. Line impedance estimation using model based identification technique

    DEFF Research Database (Denmark)

    Ciobotaru, Mihai; Agelidis, Vassilios; Teodorescu, Remus

    2011-01-01

    The estimation of the line impedance can be used in the control of numerous grid-connected systems, such as active filters, islanding detection techniques, non-linear current controllers, and detection of the on/off grid operation mode. Therefore, estimating the line impedance can add extra functions......-passive behaviour of the proposed method comes from the combination of the non-intrusive behaviour of the passive methods with the better accuracy of the active methods. The simulation results reveal the good accuracy of the proposed method....

  7. Ionospheric scintillation forecasting model based on NN-PSO technique

    Science.gov (United States)

    Sridhar, M.; Venkata Ratnam, D.; Padma Raju, K.; Sai Praharsha, D.; Saathvika, K.

    2017-09-01

    The forecasting and modeling of ionospheric scintillation effects are crucial for precise satellite positioning and navigation applications. In this paper, a Neural Network model, trained using the Particle Swarm Optimization (PSO) algorithm, has been implemented for the prediction of amplitude scintillation index (S4) observations. Global Positioning System (GPS) and Ionosonde data available at Darwin, Australia (12.4634° S, 130.8456° E) during 2013 have been considered. A correlation analysis between GPS S4 and Ionosonde drift velocities (hmF2 and foF2) has been conducted for forecasting the S4 values. The results indicate that the forecasted S4 values closely follow the measured S4 values for both quiet and disturbed conditions. The outcome of this work will be useful for understanding the ionospheric scintillation phenomena over low-latitude regions.

  8. The Effect of Learning Based on Technology Model and Assessment Technique toward Thermodynamic Learning Achievement

    Science.gov (United States)

    Makahinda, T.

    2018-02-01

    The purpose of this research is to find out the effect of a technology-based learning model and assessment technique on thermodynamics learning achievement, controlling for students' intelligence. This is an experimental study. The sample was taken through cluster random sampling, with a total of 80 student respondents. The results show that the thermodynamics learning achievement of students taught with the environmental-utilization learning model is higher than that of students taught with animation simulation, after controlling for student intelligence. There is also an interaction effect between the technology-based learning model and the assessment technique on students' thermodynamics learning achievement, after controlling for student intelligence. Based on these findings, lectures should use the environmental-utilization thermodynamics learning model together with the project assessment technique.

  9. New sunshine-based models for predicting global solar radiation using PSO (particle swarm optimization) technique

    International Nuclear Information System (INIS)

    Behrang, M.A.; Assareh, E.; Noghrehabadi, A.R.; Ghanbarzadeh, A.

    2011-01-01

    PSO (particle swarm optimization) technique is applied to estimate monthly average daily GSR (global solar radiation) on a horizontal surface for different regions of Iran. To achieve this, five new models were developed and six models were chosen from the literature. First, for each city, the empirical coefficients for all models were separately determined using the PSO technique. The results indicate that the new models presented in this study perform better than existing models in the literature for 10 of the 17 cities considered. It is also shown that the empirical coefficients found for a given latitude can be generalized to estimate solar radiation in cities at similar latitudes. Some case studies are presented to demonstrate this generalization, with the results showing good agreement with the measurements. More importantly, these case studies further validate the models developed and demonstrate their general applicability. Finally, the results obtained with the PSO technique were compared with those of SRTs (statistical regression techniques) on the Angstrom model for all 17 cities. The results showed that the empirical coefficients for the Angstrom model obtained with PSO are more accurate than those obtained with SRTs for all 17 cities. -- Highlights: → The first study to apply an intelligent optimization technique to more accurately determine empirical coefficients in solar radiation models. → New models which are presented in this study have better performance than existing models. → The empirical coefficients found for a given latitude can be generalized to estimate solar radiation in cities at similar latitude. → A fair comparison between the performance of PSO and SRTs on GSR modeling.
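
    A minimal sketch of the fitting step, assuming the classical Angstrom form H/H0 = a + b(S/S0) and a basic particle swarm; the monthly data and PSO settings below are made up for illustration and do not come from the study.

      import numpy as np

      # Illustrative monthly values: s = S/S0 (sunshine fraction), k = H/H0 (clearness index).
      s = np.array([0.55, 0.60, 0.62, 0.68, 0.74, 0.80, 0.82, 0.81, 0.75, 0.68, 0.60, 0.56])
      k = np.array([0.48, 0.51, 0.52, 0.56, 0.60, 0.64, 0.65, 0.64, 0.60, 0.56, 0.51, 0.49])

      def rmse(coeff):
          a, b = coeff
          return np.sqrt(np.mean((a + b * s - k) ** 2))    # Angstrom model: H/H0 = a + b*(S/S0)

      # Basic particle swarm: each particle is a candidate (a, b) pair.
      rng = np.random.default_rng(0)
      n_particles, n_iter, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
      pos = rng.uniform(-1.0, 1.0, size=(n_particles, 2))
      vel = np.zeros_like(pos)
      pbest, pbest_val = pos.copy(), np.array([rmse(p) for p in pos])
      gbest = pbest[np.argmin(pbest_val)].copy()

      for _ in range(n_iter):
          r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = pos + vel
          val = np.array([rmse(p) for p in pos])
          better = val < pbest_val
          pbest[better], pbest_val[better] = pos[better], val[better]
          gbest = pbest[np.argmin(pbest_val)].copy()

      print("PSO-fitted Angstrom coefficients (a, b):", np.round(gbest, 3), " RMSE:", round(rmse(gbest), 4))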

  10. Modeling of PV Systems Based on Inflection Points Technique Considering Reverse Mode

    Directory of Open Access Journals (Sweden)

    Bonie J. Restrepo-Cuestas

    2013-11-01

    Full Text Available This paper proposes a methodology for photovoltaic (PV) system modeling that considers the behavior in both direct and reverse operating modes and accounts for mismatching conditions. The proposed methodology is based on the inflection points technique, with a linear approximation to model the bypass diode and a simplified PV model. The proposed mathematical model makes it possible to evaluate the energetic performance of a PV system, exhibiting short simulation times for large PV systems. In addition, this methodology makes it possible to estimate the condition of modules affected by partial shading, since the power dissipated due to operation in the second quadrant can be determined.

  11. [Eco-value level classification model of forest ecosystem based on modified projection pursuit technique].

    Science.gov (United States)

    Wu, Chengzhen; Hong, Wei; Hong, Tao

    2006-03-01

    To optimize the projection function and direction of the projection pursuit technique, simplify its implementation, and overcome its drawbacks of long computation time and difficulty in optimizing the projection direction and in computer programming, this paper presents a modified simplex method (MSM) and, based on it, puts forward an eco-value level classification model (EVLCM) for forest ecosystems, which integrates the multidimensional classification indices into a one-dimensional projection value, with a high projection value denoting high ecosystem services value. Example forest ecosystems could be reasonably classified by the new model according to their projection values, suggesting that EVLCM, driven directly by sample data of forest ecosystems, is simple, feasible, applicable, and maneuverable. The calculating time and the value of the projection function were 34% and 143%, respectively, of those obtained with the traditional projection pursuit technique. This model could be applied extensively to classify and estimate all kinds of non-linear and multidimensional data in ecology, biology, and regional sustainable development.
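
    A minimal sketch of the projection step on made-up indicator data, using a simplified projection index (the spread of the projected values) and SciPy's Nelder-Mead simplex as a stand-in for the authors' modified simplex method; the real EVLCM optimizes a more elaborate projection index that is not reproduced here.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)

      # Made-up samples: rows are forest ecosystems, columns are classification indicators
      # rescaled to [0, 1] (illustrative only).
      X = rng.random((20, 4))
      X = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))

      def neg_index(a):
          """Simplified projection index: spread (std) of the one-dimensional projections."""
          a = a / np.linalg.norm(a)
          return -np.std(X @ a)

      res = minimize(neg_index, x0=np.ones(X.shape[1]), method="Nelder-Mead")
      direction = res.x / np.linalg.norm(res.x)
      z = X @ direction                         # one-dimensional projection values

      # Higher projection value -> higher eco-value level; three levels used here.
      levels = np.digitize(z, np.quantile(z, [1 / 3, 2 / 3]))
      print("projection direction:", np.round(direction, 3))
      print("eco-value levels:", levels)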

  12. Development Model of Basic Technique Skills Training Shot-Put Obrien Style Based Biomechanics Review

    Directory of Open Access Journals (Sweden)

    danang rohmat hidayanto

    2018-03-01

    Full Text Available The background of this research is the lack of a learning model for the basic techniques of the O'Brien shot-put style, integrated into a biomechanics-based skills programme, that can be used as a reference for building students' basic O'Brien-style technique skills. The purpose of this study is to develop a model of the basic technique of the O'Brien-style shot put based on biomechanical studies for beginner level, covering the basic starting technique, glide, final stage, release (put), follow-through, and overall O'Brien-style put performance, all arranged in a medium that is easily accessible at any time, by anyone and anywhere, especially in SMK Negeri 1 Kalijambe Sragen. The research method used is the "Research and Development" approach. Preliminary studies show that 43.0% of respondents considered the O'Brien style very important to develop with a biomechanics-based skills exercise model, and 40.0% of respondents stated that it is important to develop it with biomechanics-based learning media. Therefore, it was deemed necessary to develop learning media for O'Brien-style training skills based on biomechanical studies. Development of the media started from the design of the storyboard and script that would be used as media. The design of this model is called the draft model. Draft models that had been prepared were reviewed by a multimedia expert and an O'Brien-style expert to establish the product's validity. A total of 78.24% of experts declared the product viable, with some input. In the small-group test with n = 6, a value of 72.2% was obtained, valid enough to be tested in large groups. In the large-group test with n = 12, a value of 70.83% was obtained, feasible enough to be tested in the field. In the field test, the experimental group was given treatment according to the media and the control group received free treatment. From the result of the significance test it can be

  13. Tsunami hazard preventing based land use planing model using GIS technique in Muang Krabi, Thailand

    International Nuclear Information System (INIS)

    Soormo, A.S.

    2012-01-01

    The terrible tsunami disaster of 26 December 2004 hit Krabi, one of the ecotourism and most fascinating provinces of southern Thailand, as well as other regions such as Phang Nga and Phuket, devastating human lives, coastal communities and economic activities. This research study aims to generate a tsunami-hazard-prevention-based land use planning model using GIS (Geographical Information Systems) based on a hazard suitability analysis approach. Different triggering factors, e.g. elevation, proximity to the shoreline, population density, mangrove, forest, stream and road, have been used based on the land use zoning criteria. These criteria have been weighted using the Saaty scale of importance, one of the mathematical techniques. The model has been classified according to the land suitability classification. Various GIS techniques, namely subsetting, spatial analysis, map difference and data conversion, have been used. The model has been generated with five categories, namely high, moderate, low, very low and not suitable regions, illustrated with their appropriate definitions for decision makers to redevelop the region. (author)
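
    A minimal sketch of deriving factor weights from a Saaty-style pairwise comparison matrix using the principal-eigenvector method and the standard consistency check; the judgement values and the factors named in the comments are illustrative, not the study's actual matrix.

      import numpy as np

      # Illustrative pairwise comparison matrix for four factors
      # (e.g. elevation, proximity to shoreline, population density, land cover).
      A = np.array([[1.0, 3.0, 5.0, 7.0],
                    [1 / 3, 1.0, 3.0, 5.0],
                    [1 / 5, 1 / 3, 1.0, 3.0],
                    [1 / 7, 1 / 5, 1 / 3, 1.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      weights = np.abs(eigvecs[:, k].real)
      weights /= weights.sum()                  # principal eigenvector -> factor weights

      n = A.shape[0]
      CI = (eigvals.real[k] - n) / (n - 1)      # consistency index
      RI = 0.90                                 # Saaty's random index for n = 4
      print("weights:", np.round(weights, 3), " consistency ratio:", round(CI / RI, 3))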

  14. Mathematical Foundation Based Inter-Connectivity modelling of Thermal Image processing technique for Fire Protection

    Directory of Open Access Journals (Sweden)

    Sayantan Nath

    2015-09-01

    Full Text Available In this paper, the integration between multiple image processing functions and their statistical parameters for an intelligent alarm-series-based fire detection system is presented. The proper inter-connectivity mapping between imagery processing elements, based on a classification factor for temperature monitoring and a multilevel intelligent alarm sequence, is introduced by an abstractive canonical approach. The flow of image processing components between the core implementation of the intelligent alarm system, temperature-wise area segmentation and the boundary detection technique has not yet been fully explored in thermal imaging. From the analytical perspective of convolutive functionalism in thermal imaging, an abstract-algebra-based inter-mapping model is discussed that links event-calculus-supported DAGSVM classification, for step-by-step generation of the alarm series with a gradual monitoring technique, with the segmentation of regions and their affected boundaries in a thermographic image of coal with respect to temperature distinctions. The connectedness of the multifunctional image processing operations of a compatible fire protection system with a proper monitoring sequence is investigated here. The mathematical models representing, in partial-derivative form, the relation between the temperature-affected areas and their boundaries in the obtained thermal image are the core contribution of this study. The thermal image of the coal sample is obtained in a real-life scenario with a self-assembled thermographic camera. The amalgamation of area segmentation, boundary detection and the alarm series is described in abstract algebra. The principal objective of this paper is to understand the dependency pattern and working principles of the image processing components and to structure an inter-connected modelling technique for those components with the help of a mathematical foundation.

  15. A Critical Review of Model-Based Economic Studies of Depression: Modelling Techniques, Model Structure and Data Sources

    OpenAIRE

    Hossein Haji Ali Afzali; Jonathan Karnon; Jodi Gray

    2012-01-01

    Depression is the most common mental health disorder and is recognized as a chronic disease characterized by multiple acute episodes/relapses. Although modelling techniques play an increasingly important role in the economic evaluation of depression interventions, comparatively little attention has been paid to issues around modelling studies with a focus on potential biases. This, however, is important as different modelling approaches, variations in model structure and input parameters may ...

  16. Repositioning the knee joint in human body FE models using a graphics-based technique.

    Science.gov (United States)

    Jani, Dhaval; Chawla, Anoop; Mukherjee, Sudipto; Goyal, Rahul; Vusirikala, Nataraju; Jayaraman, Suresh

    2012-01-01

    Human body finite element models (FE-HBMs) are available in standard occupant or pedestrian postures. There is a need to have FE-HBMs in the same posture as a crash victim or to be configured in varying postures. Developing FE models for all possible positions is not practically viable. The current work aims at obtaining a posture-specific human lower extremity model by reconfiguring an existing one. A graphics-based technique was developed to reposition the lower extremity of an FE-HBM by specifying the flexion-extension angle. Elements of the model were segregated into rigid (bones) and deformable components (soft tissues). The bones were rotated about the flexion-extension axis followed by rotation about the longitudinal axis to capture the twisting of the tibia. The desired knee joint movement was thus achieved. Geometric heuristics were then used to reposition the skin. A mapping defined over the space between bones and the skin was used to regenerate the soft tissues. Mesh smoothing was then done to augment mesh quality. The developed method permits control over the kinematics of the joint and maintains the initial mesh quality of the model. For some critical areas (in the joint vicinity) where element distortion is large, mesh smoothing is done to improve mesh quality. A method to reposition the knee joint of a human body FE model was developed. Repositioning of a model from 9 degrees of flexion to 90 degrees of flexion in just a few seconds, without subjective interventions, was demonstrated. Because the mesh quality of the repositioned model was maintained to a predefined level (typically to the level of a well-made model in the initial configuration), the model was suitable for subsequent simulations.
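
    A minimal sketch of the rigid repositioning step, rotating "bone" node coordinates about a flexion-extension axis through a pivot with the Rodrigues formula; the node coordinates, axis and pivot are illustrative, and the soft-tissue mapping and mesh smoothing steps of the paper are not shown.

      import numpy as np

      def rotate_about_axis(points, pivot, axis, angle_deg):
          """Rotate points (N x 3) about an arbitrary axis through 'pivot' (Rodrigues formula)."""
          k = axis / np.linalg.norm(axis)
          theta = np.radians(angle_deg)
          K = np.array([[0.0, -k[2], k[1]],
                        [k[2], 0.0, -k[0]],
                        [-k[1], k[0], 0.0]])
          R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
          return (points - pivot) @ R.T + pivot

      # Illustrative tibia node coordinates (mm) and a flexion-extension axis through the knee centre.
      tibia_nodes = np.array([[10.0, 0.0, -50.0],
                              [12.0, 5.0, -120.0],
                              [8.0, -4.0, -200.0]])
      knee_centre = np.array([0.0, 0.0, 0.0])
      flexion_axis = np.array([1.0, 0.0, 0.0])          # roughly medial-lateral

      flexed = rotate_about_axis(tibia_nodes, knee_centre, flexion_axis, angle_deg=81.0)
      # A further rotation about the tibial long axis could capture the coupled internal rotation.
      print(np.round(flexed, 1))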

  17. Hybrid LES-RANS technique based on a one-equation near-wall model

    Science.gov (United States)

    Breuer, M.; Jaffrézic, B.; Arora, K.

    2008-05-01

    In order to reduce the high computational effort of wall-resolved large-eddy simulations (LES), the present paper suggests a hybrid LES-RANS approach which splits up the simulation into a near-wall RANS part and an outer LES part. Generally, RANS is adequate for attached boundary layers requiring reasonable CPU-time and memory, where LES can also be applied but demands extremely large resources. Contrarily, RANS often fails in flows with massive separation or large-scale vortical structures. Here, LES is without a doubt the best choice. The basic concept of hybrid methods is to combine the advantages of both approaches yielding a prediction method, which, on the one hand, assures reliable results for complex turbulent flows, including large-scale flow phenomena and massive separation, but, on the other hand, consumes much fewer resources than LES, especially for high Reynolds number flows encountered in technical applications. In the present study, a non-zonal hybrid technique is considered (according to the signification retained by the authors concerning the terms zonal and non-zonal), which leads to an approach where the suitable simulation technique is chosen more or less automatically. For this purpose the hybrid approach proposed relies on a unique modeling concept. In the LES mode a subgrid-scale model based on a one-equation model for the subgrid-scale turbulent kinetic energy is applied, where the length scale is defined by the filter width. For the viscosity-affected near-wall RANS mode the one-equation model proposed by Rodi et al. (J Fluids Eng 115:196-205, 1993) is used, which is based on the wall-normal velocity fluctuations as the velocity scale and algebraic relations for the length scales. Although the idea of combined LES-RANS methods is not new, a variety of open questions still has to be answered. This includes, in particular, the demand for appropriate coupling techniques between LES and RANS, adaptive control mechanisms, and proper subgrid

  18. Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools

    CERN Document Server

    Ding, Steven X

    2013-01-01

    Guaranteeing a high system performance over a wide operating range is an important issue surrounding the design of automatic control systems with successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains: new material on fault isolation and identification, and fault detection in feedback control loops; extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; addition of the continuously-stirred tank heater as a representative process-industrial benchmark; and enhanced discussion of residual evaluation in stochastic processes. Model-based Fault Diagno...

  19. Modeling and Simulation of Voids in Composite Tape Winding Process Based on Domain Superposition Technique

    Science.gov (United States)

    Deng, Bo; Shi, Yaoyao

    2017-11-01

    The tape winding technology is an effective way to fabricate rotational composite materials. Nevertheless, some inevitable defects will seriously influence the performance of winding products. One of the crucial ways to identify the quality of fiber-reinforced composite material products is examining their void content. Significant improvement in products' mechanical properties can be achieved by minimizing the void defect. Two methods were applied in this study, finite element analysis and experimental testing, to investigate the mechanism of how voids form in the composite tape winding process. Based on the theories of interlayer intimate contact and the Domain Superposition Technique (DST), a three-dimensional model of prepreg tape voids has been built in SolidWorks. Thereafter, the ABAQUS simulation software was used to simulate the change of void content with pressure and temperature. Finally, a series of experiments were performed to determine the accuracy of the model-based predictions. The results showed that the model is effective for predicting the void content in the composite tape winding process.

  20. A numerical study of different projection-based model reduction techniques applied to computational homogenisation

    Science.gov (United States)

    Soldner, Dominic; Brands, Benjamin; Zabihyan, Reza; Steinmann, Paul; Mergheim, Julia

    2017-10-01

    Computing the macroscopic material response of a continuum body commonly involves the formulation of a phenomenological constitutive model. However, the response is mainly influenced by the heterogeneous microstructure. Computational homogenisation can be used to determine the constitutive behaviour on the macro-scale by solving a boundary value problem at the micro-scale for every so-called macroscopic material point within a nested solution scheme. Hence, this procedure requires the repeated solution of similar microscopic boundary value problems. To reduce the computational cost, model order reduction techniques can be applied. An important aspect thereby is the robustness of the obtained reduced model. Within this study reduced-order modelling (ROM) for the geometrically nonlinear case using hyperelastic materials is applied for the boundary value problem on the micro-scale. This involves the Proper Orthogonal Decomposition (POD) for the primary unknown and hyper-reduction methods for the arising nonlinearity. Therein three methods for hyper-reduction, differing in how the nonlinearity is approximated and the subsequent projection, are compared in terms of accuracy and robustness. Introducing interpolation or Gappy-POD based approximations may not preserve the symmetry of the system tangent, rendering the widely used Galerkin projection sub-optimal. Hence, a different projection related to a Gauss-Newton scheme (Gauss-Newton with Approximated Tensors- GNAT) is favoured to obtain an optimal projection and a robust reduced model.
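
    A minimal sketch of the POD step on a snapshot matrix and a Galerkin-type projection of a linear system onto the reduced basis; the data and dimensions are synthetic, and the hyper-reduction variants (Gappy-POD, GNAT) compared in the paper are not shown.

      import numpy as np

      rng = np.random.default_rng(0)

      # Snapshot matrix: each column is a full-order solution collected from previously
      # solved problems (synthetic low-rank data plus noise for illustration).
      n_dof, n_snap = 500, 40
      S = rng.standard_normal((n_dof, 5)) @ rng.standard_normal((5, n_snap))
      S += 0.01 * rng.standard_normal((n_dof, n_snap))

      # Proper Orthogonal Decomposition: truncated left singular vectors form the reduced basis.
      U, sigma, _ = np.linalg.svd(S, full_matrices=False)
      energy = np.cumsum(sigma**2) / np.sum(sigma**2)
      r = int(np.searchsorted(energy, 0.9999)) + 1
      V = U[:, :r]                                     # reduced basis, n_dof x r

      # Galerkin projection of a linear, symmetric positive definite system K u = f.
      K = np.eye(n_dof) + 0.1 * (S @ S.T) / n_snap     # illustrative stiffness-like matrix
      f = K @ S[:, 0]                                  # right-hand side whose solution lies near the snapshot space
      Kr, fr = V.T @ K @ V, V.T @ f                    # r x r reduced system
      u_rom = V @ np.linalg.solve(Kr, fr)              # reconstructed full-order approximation
      u_full = np.linalg.solve(K, f)
      print("basis size:", r, " relative error:",
            np.linalg.norm(u_rom - u_full) / np.linalg.norm(u_full))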

  1. Model-Based Data Integration and Process Standardization Techniques for Fault Management: A Feasibility Study

    Science.gov (United States)

    Haste, Deepak; Ghoshal, Sudipto; Johnson, Stephen B.; Moore, Craig

    2018-01-01

    This paper describes the theory and considerations in the application of model-based techniques to assimilate information from disjoint knowledge sources for performing NASA's Fault Management (FM)-related activities using the TEAMS® toolset. FM consists of the operational mitigation of existing and impending spacecraft failures. NASA's FM directives have both design-phase and operational-phase goals. This paper highlights recent studies by QSI and DST of the capabilities required in the TEAMS® toolset for conducting FM activities with the aim of reducing operating costs, increasing autonomy, and conforming to time schedules. These studies use and extend the analytic capabilities of QSI's TEAMS® toolset to conduct a range of FM activities within a centralized platform.

  2. Assess and Predict Automatic Generation Control Performances for Thermal Power Generation Units Based on Modeling Techniques

    Science.gov (United States)

    Zhao, Yan; Yang, Zijiang; Gao, Song; Liu, Jinbiao

    2018-02-01

    Automatic generation control (AGC) is a key technology for maintaining the real-time balance between power generation and load, and for ensuring the quality of power supply. Power grids require each power generation unit to have a satisfactory AGC performance, as specified in two detailed rules. The two rules provide a set of indices to measure the AGC performance of a power generation unit. However, the commonly used method to calculate these indices is based on particular data samples from AGC responses and can lead to incorrect results in practice. This paper proposes a new method to estimate the AGC performance indices via system identification techniques. In addition, a nonlinear regression model between the performance indices and the load command is built in order to predict the AGC performance indices. The effectiveness of the proposed method is validated through industrial case studies.

  3. Constraint-based Student Modelling in Probability Story Problems with Scaffolding Techniques

    Directory of Open Access Journals (Sweden)

    Nabila Khodeir

    2018-01-01

    Full Text Available Constraint-based student modelling (CBM) is an important technique employed in intelligent tutoring systems to model student knowledge and provide relevant assistance. This paper introduces the Math Story Problem Tutor (MAST), a Web-based intelligent tutoring system for probability story problems, which is able to generate problems of different contexts, types and difficulty levels for self-paced learning. Constraints in MAST are specified at a low level of granularity to allow fine-grained diagnosis of the student error. Furthermore, MAST extends CBM to address errors due to misunderstanding of the narrative story. It can locate and highlight keywords that may have been overlooked or misunderstood, leading to an error. This is achieved by utilizing the role of sentences and keywords that are defined through the Natural Language Generation (NLG) methods deployed in the story problem generation. MAST also integrates CBM with scaffolding questions and feedback to provide various forms of help and guidance to the student. This allows the student to discover and correct any errors in his/her solution. MAST has been preliminarily evaluated empirically and the results show its potential effectiveness in tutoring students, with a decrease in the percentage of violated constraints along the learning curve. Additionally, the improvement from the pre-test to the post-test exam is significantly greater for the students using MAST than for those relying on the textbook.

  4. Classifying variability modeling techniques

    NARCIS (Netherlands)

    Sinnema, Marco; Deelstra, Sybren

    Variability modeling is important for managing variability in software product families, especially during product derivation. In the past few years, several variability modeling techniques have been developed, each using its own concepts to model the variability provided by a product family. The

  5. Global ocean modeling and rendering techniques based on ellipsoid Rectangular grid mapping

    Science.gov (United States)

    Ma, W.; Wan, G.; Wang, L.; Li, W. J.

    2013-10-01

    Summary: In recent years, with the development of virtual reality technology and data acquisition technology, the demand for GIS visualization has increased. Since the ocean accounts for 70 percent of the global area and serves as a base environment for visualization, global ocean visualization is particularly important in some applications. This paper studies global ocean visualization and modeling techniques under the framework of the WGS84 ellipsoid and achieves a method for rapid, photorealistic global ocean rendering. The main research works are as follows: 1. For height-field modeling, using statistical ocean wave laws and the Phillips wave spectrum, we produce a single height map that accounts for the influence of the wind field on the ocean wave magnitude; 2. With the ellipsoid rectangular grid mapping relationship, the single height map produced above is mapped onto the ellipsoid repeatedly, achieving global ocean height-field modeling; 3. With the conversion between the screen-space coordinate system and the rectangular spatial coordinate system, sampling points can be acquired on the view-dependent ellipsoid; 4. With the introduction of global bathymetric data, sampled rapidly on the GPU, sampling points with associated transparency and depth values are obtained to achieve processing of the global ocean and land borders.
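
    A minimal sketch of step 1, synthesizing a single static height map from the Phillips wave spectrum on an FFT grid; the grid size, patch length and wind parameters are illustrative, and the Hermitian symmetrization, time evolution and ellipsoid mapping steps are omitted.

      import numpy as np

      # Static height-map synthesis from the Phillips spectrum (Tessendorf-style sketch).
      N, patch = 256, 1000.0              # grid resolution and patch size in metres
      g, A = 9.81, 3e-7                   # gravity and spectrum amplitude constant
      wind_speed, wind_dir = 20.0, np.array([1.0, 0.0])
      L = wind_speed**2 / g               # largest waves sustained by this wind speed

      k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=patch / N)
      kx, ky = np.meshgrid(k1d, k1d)
      k = np.sqrt(kx**2 + ky**2)
      k[0, 0] = 1e-6                      # avoid division by zero at the DC component

      cos_wind = (kx * wind_dir[0] + ky * wind_dir[1]) / k
      phillips = A * np.exp(-1.0 / (k * L) ** 2) / k**4 * cos_wind**2
      phillips[0, 0] = 0.0                # no mean offset

      rng = np.random.default_rng(0)
      xi = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
      h_tilde = xi * np.sqrt(phillips / 2.0)

      # Real part of the inverse FFT gives a plausible static wave-height snapshot.
      height = np.real(np.fft.ifft2(h_tilde)) * N * N
      print("height range (m):", np.round(height.min(), 2), "to", np.round(height.max(), 2))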

  6. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2016-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An example application space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the algorithm and shows reconstruction results for synthetically generated data.

  7. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2015-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An example application space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the algorithm and shows reconstruction results for synthetically generated data.

  8. Advances in Intelligent Modelling and Simulation Artificial Intelligence-Based Models and Techniques in Scalable Computing

    CERN Document Server

    Khan, Samee; Burczyński, Tadeusz

    2012-01-01

    One of the most challenging issues in today's large-scale computational modeling and design is to effectively manage the complex distributed environments, such as computational clouds, grids, ad hoc, and P2P networks, operating under various types of users with evolving relationships fraught with uncertainties. In this context, the IT resources and services usually belong to different owners (institutions, enterprises, or individuals) and are managed by different administrators. Moreover, uncertainties are presented to the system at hand in various forms of information that is incomplete, imprecise, fragmentary, or overloading, which hinders the full and precise resolution of the evaluation criteria, sequencing and selection, and the assignment of scores. Intelligent scalable systems enable flexible routing and charging, advanced user interactions and the aggregation and sharing of geographically-distributed resources in modern large-scale systems. This book presents new ideas, theories, models...

  9. Coronary stent on coronary CT angiography: Assessment with model-based iterative reconstruction technique

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun Chae; Kim, Yeo Koon; Chun, Eun Ju; Choi, Sang IL [Dept. of of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of)

    2016-05-15

    To assess the performance of the model-based iterative reconstruction (MBIR) technique for evaluation of coronary artery stents on coronary CT angiography (CCTA). Twenty-two patients with coronary stent implantation who underwent CCTA were retrospectively enrolled for comparison of image quality between filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR) and MBIR. In each data set, image noise was measured as the standard deviation of the measured attenuation units within circular regions of interest in the ascending aorta (AA) and left main coronary artery (LM). To objectively assess the noise and blooming artifacts in the coronary stent, we additionally measured the standard deviation of the measured attenuation and the intra-luminal stent diameters of 35 stents in total with dedicated software. All image noise values measured in the AA (all p < 0.001), LM (p < 0.001, p = 0.001) and coronary stent (all p < 0.001) were significantly lower with MBIR than with FBP or ASIR. The intraluminal stent diameter was significantly larger with MBIR than with ASIR or FBP (p < 0.001, p = 0.001). MBIR can reduce image noise and blooming artifact from the stent, leading to better in-stent assessment in patients with coronary artery stents.

  10. Validation of a COMSOL Multiphysics based soil model using imaging techniques

    Science.gov (United States)

    Hayes, Robert; Newill, Paul; Podd, Frank; Dorn, Oliver; York, Trevor; Grieve, Bruce

    2010-05-01

    In the face of climate change the ability to rapidly identify new plant varieties that will be tolerant to drought, and other stresses, is going to be key to breeding the food crops of tomorrow. Currently, above soil features (phenotypes) are monitored in industrial greenhouses and field trials during seed breeding programmes so as to provide an indication of which plants have the most likely preferential genetics to thrive in the future global environments. These indicators of 'plant vigour' are often based on loosely related features which may be straightforward to examine, such as an additional ear of corn on a maize plant, but which are labour intensive and often lacking in direct linkage to the required crop features. A new visualisation tool is being developed for seed breeders, providing on-line data for each individual plant in a screening programme indicating how efficiently each plant utilises the water and nutrients available in the surrounding soil. It will be used as an in-field tool for early detection of desirable genetic traits with the aim of increased efficiency in identification and delivery of tomorrow's drought tolerant food crops. Visualisation takes the form of Electrical Impedance Tomography (EIT), a non-destructive and non-intrusive imaging technique. The measurement space is typical of medical and industrial process monitoring i.e. on a small spatial scale as opposed to that of typical geophysical applications. EIT measurements are obtained for an individual plant thus allowing water and nutrient absorption levels for an individual specimen to be inferred from the resistance distribution image obtained. In addition to traditional soft-field image reconstruction techniques the inverse problem is solved using mathematical models for the mobility of water and solutes in soil. The University of Manchester/Syngenta LCT2 (Low Cost Tomography 2) instrument has been integrated into crop growth studies under highly controlled soil, nutrient and

  11. Model-based wear measurements in total knee arthroplasty : development and validation of novel radiographic techniques

    NARCIS (Netherlands)

    IJsseldijk, van E.A.

    2016-01-01

    The primary aim of this work was to develop novel model-based mJSW measurement methods using a 3D reconstruction and compare the accuracy and precision of these methods to conventional mJSW measurement. This thesis contributed to the development, validation and clinical application of model-based

  12. A New Profile Learning Model for Recommendation System based on Machine Learning Technique

    Directory of Open Access Journals (Sweden)

    Shereen H. Ali

    2016-03-01

    Full Text Available Recommender systems (RSs) have been used to successfully address the information overload problem by providing personalized and targeted recommendations to end users. RSs are software tools and techniques providing suggestions for items likely to be of use to a user; hence, they typically apply techniques and methodologies from data mining. The main contribution of this paper is to introduce a new user profile learning model to promote the recommendation accuracy of vertical recommendation systems. The proposed profile learning model employs the vertical classifier that has been used in the multi-classification module of the Intelligent Adaptive Vertical Recommendation (IAVR) system to discover the user’s area of interest, and then build the user’s profile accordingly. Experimental results have proven the effectiveness of the proposed profile learning model, which accordingly will promote the recommendation accuracy.

  13. Statistical Techniques to Explore the Quality of Constraints in Constraint-Based Modeling Environments

    Science.gov (United States)

    Gálvez, Jaime; Conejo, Ricardo; Guzmán, Eduardo

    2013-01-01

    One of the most popular student modeling approaches is Constraint-Based Modeling (CBM). It is an efficient approach that can be easily applied inside an Intelligent Tutoring System (ITS). Even with these characteristics, building new ITSs requires carefully designing the domain model to be taught because different sources of errors could affect…

  14. Modeling and Control PV-Wind Hybrid System Based On Fuzzy Logic Control Technique

    Directory of Open Access Journals (Sweden)

    Doaa M. Atia

    2012-09-01

    Full Text Available As energy demands around the world increase, the need for renewable energy sources that will not harm the environment also increases. The overall objective of renewable energy systems is to obtain electricity that is cost-competitive and even advantageous with respect to other energy sources. The optimal design of a renewable energy system can significantly improve the economical and technical performance of the power supply. This paper presents power management control using the fuzzy logic control technique. Also, complete mathematical modeling and a MATLAB SIMULINK model of the electrical part of the proposed aquaculture system are implemented to track the system performance. The simulation results show the feasibility of the control technique.

  15. Introduction to Information Visualization (InfoVis) Techniques for Model-Based Systems Engineering

    Science.gov (United States)

    Sindiy, Oleg; Litomisky, Krystof; Davidoff, Scott; Dekens, Frank

    2013-01-01

    This paper presents insights that conform to numerous system modeling languages/representation standards. The insights are drawn from best practices of Information Visualization as applied to aerospace-based applications.

  16. Model-based recognition of 3-D objects by geometric hashing technique

    International Nuclear Information System (INIS)

    Severcan, M.; Uzunalioglu, H.

    1992-09-01

    A model-based object recognition system is developed for the recognition of polyhedral objects. The system consists of feature extraction, modelling and matching stages. Linear features are used for object descriptions. Lines are obtained from edges using a rotation transform. For the modelling and recognition process, the geometric hashing method is utilized. Each object is modelled using 2-D views taken from viewpoints on the viewing sphere. A hidden line elimination algorithm is used to find these views from the wire frame model of the objects. The recognition experiments yielded satisfactory results. (author). 8 refs, 5 figs
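
    A 2-D toy sketch of the geometric hashing idea (off-line hashing of model points in pair-defined bases, then voting at recognition time); the model shapes and quantization step are illustrative, and the paper's view-based modelling of 3-D objects is not reproduced.

      import numpy as np
      from collections import defaultdict
      from itertools import permutations

      def basis_coords(points, p0, p1):
          """Express points in the frame defined by the ordered basis pair (p0, p1)."""
          e1 = p1 - p0
          e2 = np.array([-e1[1], e1[0]])              # perpendicular vector of equal length
          return np.linalg.solve(np.column_stack([e1, e2]), (points - p0).T).T

      def quantize(c, step=0.1):
          return tuple(np.round(c / step).astype(int))

      # Off-line stage: hash every model point in every ordered basis of every model.
      models = {"square": np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float),
                "triangle": np.array([[0, 0], [1, 0], [0.5, 0.9]], float)}
      table = defaultdict(list)
      for name, pts in models.items():
          for i, j in permutations(range(len(pts)), 2):
              for c in basis_coords(pts, pts[i], pts[j]):
                  table[quantize(c)].append((name, (i, j)))

      # Recognition stage: a rotated and translated "scene"; one basis pair votes into the table.
      theta = 0.7
      R = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
      scene = models["square"] @ R.T + np.array([3.0, -2.0])
      votes = defaultdict(int)
      for c in basis_coords(scene, scene[0], scene[1]):
          for entry in table.get(quantize(c), []):
              votes[entry] += 1
      print(max(votes.items(), key=lambda kv: kv[1]))   # best (model, basis) hypothesis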

  17. Comparison of lung tumor motion measured using a model-based 4DCT technique and a commercial protocol.

    Science.gov (United States)

    O'Connell, Dylan; Shaverdian, Narek; Kishan, Amar U; Thomas, David H; Dou, Tai H; Lewis, John H; Lamb, James M; Cao, Minsong; Tenn, Stephen; Percy, Lee P; Low, Daniel A

    2017-11-11

    To compare lung tumor motion measured with a model-based technique to commercial 4-dimensional computed tomography (4DCT) scans and describe a workflow for using model-based 4DCT as a clinical simulation protocol. Twenty patients were imaged using a model-based technique and commercial 4DCT. Tumor motion was measured on each commercial 4DCT dataset and was calculated on model-based datasets for 3 breathing amplitude percentile intervals: 5th to 85th, 5th to 95th, and 0th to 100th. Internal target volumes (ITVs) were defined on the 4DCT and 5th to 85th interval datasets and compared using Dice similarity. Images were evaluated for noise and rated by 2 radiation oncologists for artifacts. Mean differences in tumor motion magnitude between commercial and model-based images were 0.47 ± 3.0, 1.63 ± 3.17, and 5.16 ± 4.90 mm for the 5th to 85th, 5th to 95th, and 0th to 100th amplitude intervals, respectively. Dice coefficients between ITVs defined on commercial and 5th to 85th model-based images had a mean value of 0.77 ± 0.09. Single standard deviation image noise was 11.6 ± 9.6 HU in the liver and 6.8 ± 4.7 HU in the aorta for the model-based images compared with 57.7 ± 30 and 33.7 ± 15.4 for commercial 4DCT. Mean model error within the ITV regions was 1.71 ± 0.81 mm. Model-based images exhibited reduced presence of artifacts at the tumor compared with commercial images. Tumor motion measured with the model-based technique using the 5th to 85th percentile breathing amplitude interval corresponded more closely to commercial 4DCT than the 5th to 95th or 0th to 100th intervals, which showed greater motion on average. The model-based technique tended to display increased tumor motion when breathing amplitude intervals wider than 5th to 85th were used because of the influence of unusually deep inhalations. These results suggest that care must be taken in selecting the appropriate interval during image generation when using model-based 4DCT methods.
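
    A minimal sketch of the Dice similarity measure used to compare the ITV contours, computed here on two illustrative overlapping binary masks rather than clinical volumes.

      import numpy as np

      def dice(mask_a, mask_b):
          """Dice similarity coefficient between two binary masks: 2|A n B| / (|A| + |B|)."""
          a, b = mask_a.astype(bool), mask_b.astype(bool)
          return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

      # Two overlapping spherical "ITVs" on a small voxel grid (illustrative only).
      z, y, x = np.ogrid[:40, :40, :40]
      itv_commercial = (x - 20) ** 2 + (y - 20) ** 2 + (z - 20) ** 2 <= 10 ** 2
      itv_model_based = (x - 23) ** 2 + (y - 20) ** 2 + (z - 20) ** 2 <= 10 ** 2
      print(f"Dice = {dice(itv_commercial, itv_model_based):.2f}")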

  18. Short term load forecasting technique based on the seasonal exponential adjustment method and the regression model

    International Nuclear Information System (INIS)

    Wu, Jie; Wang, Jianzhou; Lu, Haiyan; Dong, Yao; Lu, Xiaoxiao

    2013-01-01

    Highlights: ► The seasonal and trend items of the data series are forecasted separately. ► The seasonal item in the data series is verified by Kendall τ correlation testing. ► Different regression models are applied to the trend item forecasting. ► We examine the superiority of the combined models by comparison of quartile values. ► A paired-sample T test is utilized to confirm the superiority of the combined models. - Abstract: For an energy-limited economy, it is crucial to forecast load demand accurately. This paper is devoted to a 1-week-ahead daily load forecasting approach in which the load demand series is predicted by employing information from days whose profiles are similar to that of the forecast day. As in many nonlinear systems, a seasonal item and a trend item coexist in load demand datasets. In this paper, the existence of the seasonal item in the load demand data series is first verified using the Kendall τ correlation testing method. Then, in the belief that forecasting the seasonal item and the trend item separately would improve the forecasting accuracy, hybrid models combining the seasonal exponential adjustment method (SEAM) with regression methods are proposed, where SEAM and the regression models are employed for the seasonal and trend item forecasting, respectively. Comparisons of the quartile values as well as the mean absolute percentage error values demonstrate that this forecasting technique can significantly improve the accuracy even though eleven different models are applied to the trend item forecasting. This superior performance of the separate forecasting technique is further confirmed by paired-sample T tests.
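
    A minimal sketch of the separate-forecasting idea on synthetic daily load data, using simple weekly seasonal indices as a stand-in for SEAM and a linear regression for the trend item; the series, indices and horizon below are illustrative and the exact SEAM formulation is not reproduced.

      import numpy as np

      rng = np.random.default_rng(0)

      # Synthetic daily load series with a weekly seasonal item and a linear trend.
      days = np.arange(8 * 7)
      seasonal_true = np.tile([1.10, 1.05, 1.00, 0.98, 1.02, 0.85, 0.80], 8)
      load = (500.0 + 2.0 * days) * seasonal_true + rng.normal(0.0, 5.0, days.size)

      # Step 1: estimate weekly seasonal indices (a simple stand-in for SEAM).
      weekday = days % 7
      indices = np.array([load[weekday == d].mean() for d in range(7)])
      indices /= indices.mean()                       # normalize so the indices average to 1

      # Step 2: deseasonalize and fit the trend item with a regression model.
      deseasonalized = load / indices[weekday]
      slope, intercept = np.polyfit(days, deseasonalized, 1)

      # Step 3: 1-week-ahead forecast = trend forecast x seasonal index.
      future = np.arange(days[-1] + 1, days[-1] + 8)
      forecast = (intercept + slope * future) * indices[future % 7]
      print(np.round(forecast, 1))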

  19. Model-based orientation-independent 3-D machine vision techniques

    Science.gov (United States)

    De Figueiredo, R. J. P.; Kehtarnavaz, N.

    1988-01-01

    Orientation-independent techniques for the identification of a three-dimensional object by a machine vision system are presented in two parts. In the first part, the data consist of intensity images of polyhedral objects obtained by a single camera, while in the second part, the data consist of range images of curved objects obtained by a laser scanner. In both cases, the attributed graph representation of the object surface is used to drive the respective algorithm. In this representation, a graph node represents a surface patch and a link represents the adjacency between two patches. For polyhedral objects, the attributes assigned to nodes are moment invariants of the corresponding face. For range images, the Gaussian curvature is used as a segmentation criterion for providing symbolic shape attributes. Identification is achieved by an efficient graph-matching algorithm used to match the graph obtained from the data to a subgraph of one of the model graphs stored in the computer memory.

  20. Suppression of Spiral Waves by Voltage Clamp Techniques in a Conductance-Based Cardiac Tissue Model

    International Nuclear Information System (INIS)

    Lian-Chun, Yu; Guo-Yong, Zhang; Yong, Chen; Jun, Ma

    2008-01-01

    A new control method is proposed to control the spatio-temporal dynamics in excitable media, which are described by the Morris-Lecar cell model. It is confirmed that successful suppression of spiral waves can be obtained by spatially clamping the membrane voltage of the excitable cells. Low voltage clamping induces breakup of spiral waves and the fragments are soon absorbed by the low voltage obstacles, whereas high voltage clamping generates travelling waves that annihilate spiral waves through collision with them. However, each method has its shortcomings. Furthermore, a two-step method that combines both the low and high voltage clamp techniques is then presented as a possible way out of this predicament. (cross-disciplinary physics and related areas of science and technology)
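
    For reference, the standard form of the Morris-Lecar membrane model that underlies the excitable-medium description (written with the usual gating functions; the diffusive coupling between cells and the clamp terms of the controlled tissue model are not shown):

      \[
      C\,\frac{dV}{dt} = I_{\mathrm{ext}}
        - g_{L}(V - V_{L})
        - g_{Ca}\, m_{\infty}(V)\,(V - V_{Ca})
        - g_{K}\, w\,(V - V_{K}),
      \qquad
      \frac{dw}{dt} = \phi\,\frac{w_{\infty}(V) - w}{\tau_{w}(V)},
      \]
      \[
      m_{\infty}(V) = \tfrac{1}{2}\left[1 + \tanh\frac{V - V_{1}}{V_{2}}\right],
      \quad
      w_{\infty}(V) = \tfrac{1}{2}\left[1 + \tanh\frac{V - V_{3}}{V_{4}}\right],
      \quad
      \tau_{w}(V) = \left[\cosh\frac{V - V_{3}}{2V_{4}}\right]^{-1}.
      \]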

  1. Model-based fault diagnosis techniques design schemes, algorithms, and tools

    CERN Document Server

    Ding, Steven

    2008-01-01

    The objective of this book is to introduce basic model-based FDI schemes, advanced analysis and design algorithms, and the needed mathematical and control theory tools at a level for graduate students and researchers as well as for engineers. This is a textbook with extensive examples and references. Most methods are given in the form of an algorithm that enables a direct implementation in a programme. Comparisons among different methods are included when possible.

  2. Temperature based daily incoming solar radiation modeling based on gene expression programming, neuro-fuzzy and neural network computing techniques.

    Science.gov (United States)

    Landeras, G.; López, J. J.; Kisi, O.; Shiri, J.

    2012-04-01

    The correct observation/estimation of surface incoming solar radiation (RS) is very important for many agricultural, meteorological and hydrological applications. While most weather stations are provided with sensors for air temperature detection, sensors for the detection of solar radiation are less common and the data quality they provide is sometimes poor. In these cases it is necessary to estimate this variable. Temperature-based modeling procedures are reported in this study for estimating daily incoming solar radiation by using Gene Expression Programming (GEP) for the first time, and other artificial intelligence models such as Artificial Neural Networks (ANNs) and the Adaptive Neuro-Fuzzy Inference System (ANFIS). Traditional temperature-based solar radiation equations were also included in this study and compared with the artificial intelligence based approaches. Root mean square error (RMSE), mean absolute error (MAE), the RMSE-based skill score (SSRMSE), the MAE-based skill score (SSMAE) and the r2 criterion of Nash and Sutcliffe were used to assess the models' performances. An ANN (a four-input multilayer perceptron with ten neurons in the hidden layer) presented the best performance among the studied models (RMSE of 2.93 MJ m-2 d-1). A four-input ANFIS model was revealed as an interesting alternative to ANNs (RMSE of 3.14 MJ m-2 d-1). A very limited number of studies have been done on the estimation of solar radiation based on ANFIS, and the present one demonstrated the ability of ANFIS to model solar radiation based on temperatures and extraterrestrial radiation. Moreover, this study demonstrated, for the first time, the ability of GEP models to model solar radiation based on daily atmospheric variables. Although the accuracy of the GEP models was slightly lower than that of the ANFIS and ANN models, the genetic programming models (i.e., GEP) are superior to other artificial intelligence models in giving a simple explicit equation for the

  3. Implementation of a model based fault detection and diagnosis technique for actuation faults of the SSME

    Science.gov (United States)

    Duyar, A.; Guo, T.-H.; Merrill, W.; Musgrave, J.

    1991-01-01

    In a previous study, Guo, Merrill and Duyar, 1990, reported a conceptual development of a fault detection and diagnosis system for actuation faults of the Space Shuttle main engine. This study, which is a continuation of the previous work, implements the developed fault detection and diagnosis scheme for the real time actuation fault diagnosis of the Space Shuttle Main Engine. The scheme will be used as an integral part of an intelligent control system demonstration experiment at NASA Lewis. The diagnosis system utilizes a model based method with real time identification and hypothesis testing for actuation, sensor, and performance degradation faults.

  4. Recent developments of the projected shell model based on many-body techniques

    Directory of Open Access Journals (Sweden)

    Sun Yang

    2015-01-01

    Full Text Available Recent developments of the projected shell model (PSM) are summarized. Firstly, by using the Pfaffian algorithm, the multi-quasiparticle configuration space is expanded to include 6-quasiparticle states. The yrast band of 166Hf at very high spins is studied as an example, where the observed third back-bending in the moment of inertia is well reproduced and explained. Secondly, an angular-momentum projected generator coordinate method is developed based on the PSM. The evolution of the low-lying states, including the second 0+ state, of the soft Gd, Dy, and Er isotopes to the well-deformed ones is calculated and compared with experimental data.

  5. Comparison between two meshless methods based on collocation technique for the numerical solution of four-species tumor growth model

    Science.gov (United States)

    Dehghan, Mehdi; Mohammadi, Vahid

    2017-03-01

    As stated in [27], the tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations with surface effects handled through diffuse-interface models [27]. Simulations of this practical model using numerical methods can be applied to evaluate it. The present paper investigates the solution of the tumor growth model with meshless techniques. Meshless methods based on the collocation technique are applied, employing multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantages of these choices come from the natural behavior of meshless approaches. Moreover, a meshless method can be applied easily to find the solution of partial differential equations in high dimensions using any distribution of points on regular and irregular domains. The present paper involves a time-dependent system of partial differential equations that describes a four-species tumor growth model. To handle the time variable, two procedures are used: one is a semi-implicit finite difference method based on the Crank-Nicolson scheme and the other is based on explicit Runge-Kutta time integration. The first case gives a linear system of algebraic equations that must be solved at each time step. The second case is efficient but conditionally stable. The obtained numerical results are reported to confirm the ability of these techniques to solve the two- and three-dimensional tumor-growth equations.
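
    For orientation, here is a minimal one-dimensional sketch of the multiquadric RBF collocation idea (a Kansa-type scheme) applied to a simple steady Poisson problem rather than the four-species tumor-growth system; the node count and shape parameter c are illustrative choices, not values from the paper.

```python
import numpy as np

# Multiquadric RBF and its second derivative in 1D
def mq(x, xc, c):
    return np.sqrt((x - xc) ** 2 + c ** 2)

def mq_xx(x, xc, c):
    return c ** 2 / ((x - xc) ** 2 + c ** 2) ** 1.5

# Solve u''(x) = -pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0 (exact: u = sin(pi x))
n, c = 25, 0.15
nodes = np.linspace(0.0, 1.0, n)          # collocation points = RBF centres
A = np.zeros((n, n))
b = np.zeros(n)
for i, xi in enumerate(nodes):
    if i == 0 or i == n - 1:              # boundary rows: enforce u = 0
        A[i, :] = mq(xi, nodes, c)
        b[i] = 0.0
    else:                                 # interior rows: enforce the differential equation
        A[i, :] = mq_xx(xi, nodes, c)
        b[i] = -np.pi ** 2 * np.sin(np.pi * xi)

coef = np.linalg.solve(A, b)
x_eval = np.linspace(0.0, 1.0, 101)
u = np.array([mq(x, nodes, c) @ coef for x in x_eval])
print("max error vs exact solution:", np.max(np.abs(u - np.sin(np.pi * x_eval))))
```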

  6. Pre-analysis techniques applied to area-based correlation aiming Digital Terrain Model generation

    Directory of Open Access Journals (Sweden)

    Maurício Galo

    2005-12-01

    Full Text Available Area-based matching is a useful procedure in some photogrammetric processes and its results are of crucial importance in applications such as relative orientation, phototriangulation and Digital Terrain Model generation. The successful determination of correspondence depends on radiometric and geometric factors. Considering these aspects, the use of procedures that estimate in advance the quality of the parameters to be computed is a relevant issue. This paper describes such procedures, and it is shown that the quality prediction can be computed before performing matching by correlation, through the analysis of the reference window. This procedure can be incorporated in the correspondence process for Digital Terrain Model generation and phototriangulation. The proposed approach comprises the estimation of the variance matrix of the translations from the gray levels in the reference window and the reduction of the search space using knowledge of the epipolar geometry. As a consequence, the correlation process becomes more reliable, avoiding the application of matching procedures in doubtful areas. Some experiments with simulated and real data are presented, demonstrating the efficiency of the studied strategy.
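
    The sketch below illustrates the basic area-based matching step by normalized cross-correlation of a reference window against a search image; it is a generic illustration under synthetic data, not the authors' implementation, and it omits the variance-matrix prediction and the epipolar search-space reduction described above.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match(reference, search):
    """Slide the reference window over the search image and return the best offset."""
    h, w = reference.shape
    best, best_rc = -np.inf, (0, 0)
    for r in range(search.shape[0] - h + 1):
        for col in range(search.shape[1] - w + 1):
            score = ncc(reference, search[r:r + h, col:col + w])
            if score > best:
                best, best_rc = score, (r, col)
    return best_rc, best

# synthetic example: locate a 7x7 reference window inside a larger search image
rng = np.random.default_rng(0)
image = rng.normal(size=(40, 40))
patch = image[12:19, 20:27]
print(match(patch, image))   # expected offset (12, 20) with a score close to 1.0
```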

  7. Semiconductor Modeling Techniques

    CERN Document Server

    Xavier, Marie

    2012-01-01

    This book describes the key theoretical techniques for semiconductor research to quantitatively calculate and simulate the properties. It presents particular techniques to study novel semiconductor materials, such as 2D heterostructures, quantum wires, quantum dots and nitrogen containing III-V alloys. The book is aimed primarily at newcomers working in the field of semiconductor physics to give guidance in theory and experiment. The theoretical techniques for electronic and optoelectronic devices are explained in detail.

  8. Advanced Atmospheric Ensemble Modeling Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Buckley, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Chiswell, S. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Kurzeja, R. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Maze, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Viner, B. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Werth, D. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-09-29

    Ensemble modeling (EM), the creation of multiple atmospheric simulations for a given time period, has become an essential tool for characterizing uncertainties in model predictions. We explore two novel ensemble modeling techniques: (1) perturbation of model parameters (Adaptive Programming, AP), and (2) data assimilation (Ensemble Kalman Filter, EnKF). The current research is an extension of work from last year and examines transport on a small spatial scale (<100 km) in complex terrain, for more rigorous testing of the ensemble technique. Two different release cases were studied, a coastal release (SF6) and an inland release (Freon) which consisted of two release times. Observations of tracer concentration and meteorology are used to judge the ensemble results. In addition, adaptive grid techniques have been developed to reduce the required computing resources for transport calculations. Using a 20-member ensemble, the standard approach generated downwind transport that was quantitatively good for both releases; however, the EnKF method produced additional improvement for the coastal release, where the spatial and temporal differences due to interior valley heating lead to the inland movement of the plume. The AP technique showed improvements for both release cases, with more improvement shown for the inland release. This research demonstrated that transport accuracy can be improved when models are adapted to a particular location/time or when important local data are assimilated into the simulation, and it enhances SRNL’s capability in atmospheric transport modeling in support of its current customer base and local site missions, as well as our ability to attract new customers within the intelligence community.
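
    As a minimal illustration of the data-assimilation idea mentioned above, the sketch below performs one stochastic Ensemble Kalman Filter analysis step for a generic three-variable state and a 20-member ensemble; it is not SRNL's operational implementation, and the state vector, observation operator and error levels are hypothetical.

```python
import numpy as np

def enkf_update(ensemble, obs, H, obs_err_std, rng):
    """Stochastic EnKF analysis step.
    ensemble: (n_state, n_members), obs: (n_obs,), H: (n_obs, n_state)."""
    n_obs, n_mem = obs.size, ensemble.shape[1]
    A = ensemble - ensemble.mean(axis=1, keepdims=True)       # state anomalies
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)                  # observation-space anomalies
    R = np.eye(n_obs) * obs_err_std ** 2                      # observation error covariance
    PHt = A @ HA.T / (n_mem - 1)                              # cross-covariance P H^T
    S = HA @ HA.T / (n_mem - 1) + R                           # innovation covariance
    K = PHt @ np.linalg.inv(S)                                # Kalman gain
    Y = obs[:, None] + rng.normal(0.0, obs_err_std, size=(n_obs, n_mem))  # perturbed obs
    return ensemble + K @ (Y - HX)

rng = np.random.default_rng(1)
ens = rng.normal(5.0, 2.0, size=(3, 20))         # 20-member ensemble of a 3-variable state
H = np.array([[1.0, 0.0, 0.0]])                  # observe only the first state variable
analysis = enkf_update(ens, np.array([4.0]), H, 0.5, rng)
print("prior mean:", ens.mean(axis=1), "analysis mean:", analysis.mean(axis=1))
```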

  9. Spherical harmonics based intrasubject 3-D kidney modeling/registration technique applied on partial information

    Science.gov (United States)

    Dillenseger, Jean-Louis; Guillaume, Hélène; Patard, Jean-Jacques

    2006-01-01

    This paper presents a 3D shape reconstruction/intra-patient rigid registration technique used to establish a Nephron-Sparing Surgery preoperative planning. The usual preoperative imaging system is the Spiral CT Urography, which provides successive 3D acquisitions of complementary information on kidney anatomy. Because the kidney is difficult to demarcate from the liver or from the spleen, only limited information on its volume or surface is available. In this paper we propose a methodology allowing a global kidney spatial representation on a spherical harmonics basis. The spherical harmonics are exploited to recover the kidney 3D shape and also to perform intra-patient 3D rigid registration. An evaluation performed on synthetic data showed that this technique presented lower performance than expected for the 3D shape recovery but exhibited registration results slightly more accurate than the ICP technique, with faster computation time. PMID:17073323

  10. Dynamic model updating based on strain mode shape and natural frequency using hybrid pattern search technique

    Science.gov (United States)

    Guo, Ning; Yang, Zhichun; Wang, Le; Ouyang, Yan; Zhang, Xinping

    2018-05-01

    With the aim of providing a precise dynamic structural finite element (FE) model for dynamic strength evaluation in addition to dynamic analysis, a dynamic FE model updating method is presented to correct the uncertain parameters of the FE model of a structure using strain mode shapes and natural frequencies. The strain mode shape, which is sensitive to local changes in the structure, is used instead of the displacement mode to enhance model updating. The coordinate strain modal assurance criterion is developed to evaluate the correlation level at each coordinate between the experimental and the analytical strain mode shapes. Moreover, the natural frequencies, which provide global information about the structure, are used to guarantee the accuracy of the modal properties of the global model. The weighted summation of the natural frequency residual and the coordinate strain modal assurance criterion residual is then used as the objective function in the proposed dynamic FE model updating procedure. A hybrid genetic/pattern-search optimization algorithm is adopted to perform the dynamic FE model updating procedure. A numerical simulation and a model updating experiment for a clamped-clamped beam are performed to validate the feasibility and effectiveness of the present method. The results show that the proposed method can be used to update the uncertain parameters with good robustness, and that the updated dynamic FE model of the beam structure, which correctly predicts both the natural frequencies and the local dynamic strains, is reliable for subsequent dynamic analysis and dynamic strength evaluation.
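
    A rough sketch of this kind of objective function follows, combining a relative natural-frequency residual with a modal-assurance-criterion residual on a (strain) mode shape; the two-parameter "model" is hypothetical, and scipy's Nelder-Mead is used only as a stand-in for the hybrid genetic/pattern-search optimizer named in the record.

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical measured data: two natural frequencies (Hz) and one strain mode shape
f_exp = np.array([12.0, 74.0])
phi_exp = np.array([0.2, 0.9, 1.0, 0.6])

def model(params):
    """Hypothetical analytical model: frequencies and strain mode shape as functions
    of two uncertain stiffness parameters (illustrative dependence only)."""
    k1, k2 = params
    f = np.array([12.0 * np.sqrt(k1), 74.0 * np.sqrt(k2)])
    phi = np.array([0.2, 0.9, 1.0, 0.6]) + 0.3 * (k1 - 1.0) * np.array([1.0, -0.5, 0.2, 0.0])
    return f, phi

def mac(a, b):
    """Modal assurance criterion between two mode shape vectors."""
    return (a @ b) ** 2 / ((a @ a) * (b @ b))

def objective(params, w_f=1.0, w_mac=1.0):
    f_num, phi_num = model(params)
    res_f = np.sum(((f_num - f_exp) / f_exp) ** 2)     # frequency residual
    res_mac = (1.0 - mac(phi_num, phi_exp)) ** 2        # mode shape (MAC) residual
    return w_f * res_f + w_mac * res_mac

result = minimize(objective, x0=[0.8, 1.2], method="Nelder-Mead")
print("updated parameters:", result.x)                  # expected to approach [1.0, 1.0]
```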

  11. LOCALITY UNCERTAINTY AND THE DIFFERENTIAL PERFORMANCE OF FOUR COMMON NICHE-BASED MODELING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Miguel Fernandez

    2009-09-01

    Full Text Available We address a poorly understood aspect of ecological niche modeling: its sensitivity to different levels of geographic uncertainty in organism occurrence data. Our primary interest was to assess how accuracy degrades under increasing uncertainty, with performance measured indirectly through model consistency. We used Monte Carlo simulations and a similarity measure to assess model sensitivity across three variables: locality accuracy, niche modeling method, and species. Randomly generated data sets with known levels of locality uncertainty were compared to an original prediction using Fuzzy Kappa. Data sets where locality uncertainty is low were expected to produce distribution maps similar to the original; in contrast, data sets where locality uncertainty is high were expected to produce less similar maps. BIOCLIM, DOMAIN, Maxent and GARP were used to predict the distributions for 1200 simulated datasets (3 species x 4 buffer sizes x 100 randomized data sets). Thus, our experimental design produced a total of 4800 similarity measures, with each of the simulated distributions compared to the prediction of the original data set and corresponding modeling method. A general linear model (GLM) analysis was performed, which enabled us to simultaneously measure the effect of buffer size, modeling method, and species, as well as interactions among all variables. Our results show that modeling method has the largest effect on similarity scores and uniquely accounts for 40% of the total variance in the model. The second most important factor was buffer size, but it uniquely accounts for only 3% of the variation in the model. The newer and currently more popular methods, GARP and Maxent, were shown to produce more inconsistent predictions than the earlier and simpler methods, BIOCLIM and DOMAIN. Understanding the performance of different niche modeling methods under varying levels of geographic uncertainty is an important step toward more productive

  12. Fusion of 3D models derived from TLS and image-based techniques for CH enhanced documentation

    Science.gov (United States)

    Bastonero, P.; Donadio, E.; Chiabrando, F.; Spanò, A.

    2014-05-01

    Recognizing the various advantages offered by new 3D metric survey technologies in the Cultural Heritage documentation phase, this paper presents some tests of 3D model generation using different methods, and their possible fusion. With the aim of defining the potentialities and problems deriving from the integration or fusion of metric data acquired with different survey techniques, the selected test case is an outstanding Cultural Heritage item, presenting both widespread and specific complexities connected to the conservation of historical buildings. The site is the Staffarda Abbey, the most relevant evidence of medieval architecture in Piedmont. This application faced one of the most topical architectural issues, namely the opportunity to study and analyze an object as a whole from two acquisition-sensor locations, the terrestrial and the aerial one. In particular, the work consists in the evaluation of the possibilities deriving from a simple union or from the fusion of different 3D cloud models of the abbey, achieved by multi-sensor techniques. The aerial survey is based on a photogrammetric RPAS (Remotely Piloted Aircraft System) flight, while the terrestrial acquisition has been fulfilled by a laser scanning survey. Both techniques allowed extracting and processing different point clouds and generating the corresponding 3D continuous models, which are characterized by different scales, that is to say different resolutions and diverse levels of detail and precision. Starting from these models, the proposed process, applied to a sample area of the building, aimed to test the generation of a unique 3D model through a fusion of the different sensor point clouds. Surely, the descriptive potential and the metric and thematic gains achievable by the final model exceeded those offered by the two detached models.

  13. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

    ""Engaging, elegantly written."" - Applied Mathematical ModellingMathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models.The author begins with a discussion of the term ""model,"" followed by clearly presented examples of the different types of mode

  14. Assessment of surface solar irradiance derived from real-time modelling techniques and verification with ground-based measurements

    Science.gov (United States)

    Kosmopoulos, Panagiotis G.; Kazadzis, Stelios; Taylor, Michael; Raptis, Panagiotis I.; Keramitsoglou, Iphigenia; Kiranoudis, Chris; Bais, Alkiviadis F.

    2018-02-01

    This study focuses on the assessment of surface solar radiation (SSR) based on operational neural network (NN) and multi-regression function (MRF) modelling techniques that produce instantaneous (in less than 1 min) outputs. Using real-time cloud and aerosol optical properties inputs from the Spinning Enhanced Visible and Infrared Imager (SEVIRI) on board the Meteosat Second Generation (MSG) satellite and the Copernicus Atmosphere Monitoring Service (CAMS), respectively, these models are capable of calculating SSR in high resolution (1 nm, 0.05°, 15 min) that can be used for spectrally integrated irradiance maps, databases and various applications related to energy exploitation. The real-time models are validated against ground-based measurements of the Baseline Surface Radiation Network (BSRN) in a temporal range varying from 15 min to monthly means, while a sensitivity analysis of the cloud and aerosol effects on SSR is performed to ensure reliability under different sky and climatological conditions. The simulated outputs, compared to their common training dataset created by the radiative transfer model (RTM) libRadtran, showed median error values in the range -15 to 15 % for the NN that produces spectral irradiances (NNS), 5-6 % underestimation for the integrated NN and close to zero errors for the MRF technique. The verification against BSRN revealed that the real-time calculation uncertainty ranges from -100 to 40 and -20 to 20 W m-2, for the 15 min and monthly mean global horizontal irradiance (GHI) averages, respectively, while the accuracy of the input parameters, in terms of aerosol and cloud optical thickness (AOD and COT), and their impact on GHI, was of the order of 10 % as compared to the ground-based measurements. The proposed system aims to be utilized through studies and real-time applications which are related to solar energy production planning and use.

  15. Experimental models of brain ischemia: a review of techniques, magnetic resonance imaging and investigational cell-based therapies

    Directory of Open Access Journals (Sweden)

    Alessandra eCanazza

    2014-02-01

    Full Text Available Stroke continues to be a significant cause of death and disability worldwide. Although major advances have been made in the past decades in prevention, treatment and rehabilitation, enormous challenges remain in the way of translating new therapeutic approaches from bench to bedside. Thrombolysis, while routinely used for ischemic stroke, is only a viable option within a narrow time window. Recently, progress in stem cell biology has opened up avenues to therapeutic strategies aimed at supporting and replacing neural cells in infarcted areas. Realistic experimental animal models are crucial to understand the mechanisms of neuronal survival following ischemic brain injury and to develop therapeutic interventions. Current studies on experimental stroke therapies evaluate the efficiency of neuroprotective agents and cell-based approaches using primarily rodent models of permanent or transient focal cerebral ischemia. In parallel, advancements in imaging techniques permit better mapping of the spatial-temporal evolution of the lesioned cortex and its functional responses. This review provides a condensed conceptual review of the state of the art of this field, from models and magnetic resonance imaging techniques through to stem cell therapies.

  16. Boundary representation modelling techniques

    CERN Document Server

    2006-01-01

    Provides the most complete presentation of boundary representation solid modelling yet published. Offers basic reference information for software developers, application developers and users. Includes a historical perspective as well as a background for modern research.

  17. REVIEW OF ADVANCES IN COBB ANGLE CALCULATION AND IMAGE-BASED MODELLING TECHNIQUES FOR SPINAL DEFORMITIES

    Directory of Open Access Journals (Sweden)

    V. Giannoglou

    2016-06-01

    Full Text Available Scoliosis is a 3D deformity of the human spinal column that is caused by the bending of the latter, leading to pain, aesthetic and respiratory problems. This internal deformation is reflected in the outer shape of the human back. The gold standard for diagnosis and monitoring of scoliosis is the Cobb angle, which refers to the internal curvature of the trunk. This work is the first part of a post-doctoral research project, presenting the most important research that has been done in the field of scoliosis concerning its digital visualisation, in order to provide a more precise and robust identification and monitoring of scoliosis. The research is divided into four fields, namely, X-ray processing, automatic Cobb angle(s) calculation, 3D modelling of the spine that provides a more accurate representation of the trunk, and the reduction of X-ray radiation exposure throughout the monitoring of scoliosis. Despite the fact that many researchers have been working in the field for at least the last decade, there is no reliable and universal tool to automatically calculate the Cobb angle(s) and successfully perform proper 3D modelling of the spinal column that would assist a more accurate detection and monitoring of scoliosis.

  18. 3D printing of high-resolution PLA-based structures by hybrid electrohydrodynamic and fused deposition modeling techniques

    International Nuclear Information System (INIS)

    Zhang, Bin; Seong, Baekhoon; Byun, Doyoung; Nguyen, VuDat

    2016-01-01

    Recently, the three-dimensional (3D) printing technique has received much attention for shape forming and manufacturing. The fused deposition modeling (FDM) printer is one of the various 3D printers available and has become widely used due to its simplicity, low-cost, and easy operation. However, the FDM technique has a limitation whereby its patterning resolution is too low at around 200 μm. In this paper, we first present a hybrid mechanism of electrohydrodynamic jet printing with the FDM technique, which we name E-FDM. We then develop a novel high-resolution 3D printer based on the E-FDM process. To determine the optimal condition for structuring, we also investigated the effect of several printing parameters, such as temperature, applied voltage, working height, printing speed, flow-rate, and acceleration on the patterning results. This method was capable of fabricating both high resolution 2D and 3D structures with the use of polylactic acid (PLA). PLA has been used to fabricate scaffold structures for tissue engineering, which has different hierarchical structure sizes. The fabrication speed was up to 40 mm/s and the pattern resolution could be improved to 10 μm. (paper)

  19. A new mathematical modelling based shape extraction technique for Forensic Odontology.

    Science.gov (United States)

    G, Jaffino; A, Banumathi; Gurunathan, Ulaganathan; B, Vijayakumari; J, Prabin Jose

    2017-04-01

    Forensic Odontology is a specific means of identifying a deceased person, particularly in fatality incidents. An algorithm is proposed to identify a person by comparing postmortem (PM) and antemortem (AM) dental radiographs and photographs. This work aims to introduce a new mathematical algorithm for photographs in addition to radiographs. The isoperimetric graph partitioning method is used to extract the shape of dental images in forensic identification. Shape matching is done by comparing AM and PM dental images using both similarity and distance measures. Experimental results show that higher matching distances are observed with the distance metric rather than with similarity measures. The results of this algorithm show that a high hit rate is observed for distance-based performance measures, and the method is well suited for forensic odontologists to identify a person. Copyright © 2017 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  20. A novel approach to model exposure of coastal-marine ecosystems to riverine flood plumes based on remote sensing techniques.

    Science.gov (United States)

    Álvarez-Romero, Jorge G; Devlin, Michelle; Teixeira da Silva, Eduardo; Petus, Caroline; Ban, Natalie C; Pressey, Robert L; Kool, Johnathan; Roberts, Jason J; Cerdeira-Estrada, Sergio; Wenger, Amelia S; Brodie, Jon

    2013-04-15

    Increased loads of land-based pollutants are a major threat to coastal-marine ecosystems. Identifying the affected marine areas and the scale of influence on ecosystems is critical to assess the impacts of degraded water quality and to inform planning for catchment management and marine conservation. Studies using remotely-sensed data have contributed to our understanding of the occurrence and influence of river plumes, and to our ability to assess exposure of marine ecosystems to land-based pollutants. However, refinement of plume modeling techniques is required to improve risk assessments. We developed a novel, complementary, approach to model exposure of coastal-marine ecosystems to land-based pollutants. We used supervised classification of MODIS-Aqua true-color satellite imagery to map the extent of plumes and to qualitatively assess the dispersal of pollutants in plumes. We used the Great Barrier Reef (GBR), the world's largest coral reef system, to test our approach. We combined frequency of plume occurrence with spatially distributed loads (based on a cost-distance function) to create maps of exposure to suspended sediment and dissolved inorganic nitrogen. We then compared annual exposure maps (2007-2011) to assess inter-annual variability in the exposure of coral reefs and seagrass beds to these pollutants. We found this method useful to map plumes and qualitatively assess exposure to land-based pollutants. We observed inter-annual variation in exposure of ecosystems to pollutants in the GBR, stressing the need to incorporate a temporal component into plume exposure/risk models. Our study contributes to our understanding of plume spatial-temporal dynamics of the GBR and offers a method that can also be applied to monitor exposure of coastal-marine ecosystems to plumes and explore their ecological influences. Copyright © 2013 Elsevier Ltd. All rights reserved.

  1. Properties of hypothesis testing techniques and (Bayesian) model selection for exploration-based and theory-based (order-restricted) hypotheses.

    Science.gov (United States)

    Kuiper, Rebecca M; Nederhoff, Tim; Klugkist, Irene

    2015-05-01

    In this paper, the performance of six types of techniques for comparisons of means is examined. These six emerge from the distinction between the method employed (hypothesis testing, model selection using information criteria, or Bayesian model selection) and the set of hypotheses that is investigated (a classical, exploration-based set of hypotheses containing equality constraints on the means, or a theory-based limited set of hypotheses with equality and/or order restrictions). A simulation study is conducted to examine the performance of these techniques. We demonstrate that, if one has specific, a priori specified hypotheses, confirmation (i.e., investigating theory-based hypotheses) has advantages over exploration (i.e., examining all possible equality-constrained hypotheses). Furthermore, examining reasonable order-restricted hypotheses has more power to detect the true effect/non-null hypothesis than evaluating only equality restrictions. Additionally, when investigating more than one theory-based hypothesis, model selection is preferred over hypothesis testing. Because of the first two results, we further examine the techniques that are able to evaluate order restrictions in a confirmatory fashion by examining their performance when the homogeneity of variance assumption is violated. Results show that the techniques are robust to heterogeneity when the sample sizes are equal. When the sample sizes are unequal, the performance is affected by heterogeneity. The size and direction of the deviations from the baseline, where there is no heterogeneity, depend on the effect size (of the means) and on the trend in the group variances with respect to the ordering of the group sizes. Importantly, the deviations are less pronounced when the group variances and sizes exhibit the same trend (e.g., are both increasing with group number). © 2014 The British Psychological Society.

  2. Towards a new technique to construct a 3D shear-wave velocity model based on converted waves

    Science.gov (United States)

    Hetényi, G.; Colavitti, L.

    2017-12-01

    A 3D model is essential in all branches of solid Earth sciences because geological structures can be heterogeneous and change significantly in their lateral dimension. The main target of this research is to build a crustal S-wave velocity structure in 3D. The currently popular methodologies to construct 3D shear-wave velocity models are Ambient Noise Tomography (ANT) and Local Earthquake Tomography (LET). Here we propose a new technique to map Earth discontinuities and velocities at depth based on the analysis of receiver functions. The 3D model is obtained by simultaneously inverting P-to-S converted waveforms recorded at a dense array. The individual velocity models corresponding to each trace are extracted from the 3D initial model along ray paths that are calculated using the shooting method, and the velocity model is updated during the inversion. We consider a spherical approximation of ray propagation using a global velocity model (iasp91, Kennett and Engdahl, 1991) for the teleseismic part, while we adopt Cartesian coordinates and a local velocity model for the crust. During the inversion process we work with a multi-layer crustal model for shear-wave velocity, with a flexible mesh for the depth of the interfaces. The RFs inversion represents a complex problem because the amplitude and the arrival time of different phases depend in a non-linear way on the depth of interfaces and the characteristics of the velocity structure. The solution we envisage to manage the inversion problem is the stochastic Neighbourhood Algorithm (NA, Sambridge, 1999), whose goal is to find an ensemble of models that sample the good data-fitting regions of a multidimensional parameter space. Depending on the studied area, this method can accommodate possible independent and complementary geophysical data (gravity, active seismics, LET, ANT, etc.), helping to reduce the non-linearity of the inversion. Our first focus of application is the Central Alps, where a 20-year long dataset of

  3. A new wind speed forecasting strategy based on the chaotic time series modelling technique and the Apriori algorithm

    International Nuclear Information System (INIS)

    Guo, Zhenhai; Chi, Dezhong; Wu, Jie; Zhang, Wenyu

    2014-01-01

    Highlights: • Impact of meteorological factors on wind speed forecasting is taken into account. • Forecasted wind speed results are corrected by the association rules. • Forecasting accuracy is improved by the new wind speed forecasting strategy. • Robustness of the proposed model is validated with data sampled from different sites. - Abstract: Wind energy has been the fastest growing renewable energy resource in recent years. Because of the intermittent nature of wind, wind power is a fluctuating source of electrical energy. Therefore, to minimize the impact of wind power on the electrical grid, accurate and reliable wind power forecasting is mandatory. In this paper, a new wind speed forecasting approach based on the chaotic time series modelling technique and the Apriori algorithm has been developed. The new approach consists of four procedures: (I) clustering by using the k-means clustering approach; (II) employing the Apriori algorithm to discover the association rules; (III) forecasting the wind speed according to the chaotic time series forecasting model; and (IV) correcting the forecasted wind speed data using the association rules discovered previously. This procedure has been verified by 31-day-ahead daily average wind speed forecasting case studies, which employed the wind speed and other meteorological data collected from four meteorological stations located in the Hexi Corridor area of China. The results of these case studies reveal that the chaotic forecasting model can efficiently improve the accuracy of the wind speed forecasting, and the Apriori algorithm can effectively discover the association rules between the wind speed and other meteorological factors. In addition, the correction results demonstrate that the association rules discovered by the Apriori algorithm have powerful capabilities in correcting the forecasted wind speed values when the forecasted values do not match the classification given by the association rules.
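
    To make steps (I) and (II) of the strategy concrete, the sketch below clusters synthetic meteorological variables with k-means and then mines simple pairwise rules with a hand-rolled counter standing in for the Apriori algorithm; the data, class counts and thresholds are all illustrative, not from the study.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# hypothetical daily records: wind speed (m/s), temperature (C), pressure anomaly (hPa)
data = rng.normal([6.0, 15.0, 0.0], [2.0, 5.0, 3.0], size=(365, 3))

# Step I: cluster each meteorological variable into discrete classes
labels = {
    name: KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(data[:, [i]])
    for i, name in enumerate(["wind", "temp", "pres"])
}

# Step II: mine one-to-one association rules "temperature class -> wind class"
# (a simplified stand-in for Apriori, restricted to pairwise rules)
def rules(antecedent, consequent, min_support=0.1, min_conf=0.6):
    n = len(antecedent)
    found = []
    for a in np.unique(antecedent):
        for c in np.unique(consequent):
            both = np.sum((antecedent == a) & (consequent == c))
            support, conf = both / n, both / np.sum(antecedent == a)
            if support >= min_support and conf >= min_conf:
                found.append((a, c, support, conf))
    return found

print(rules(labels["temp"], labels["wind"]))
# Step IV (correction) would nudge a forecasted wind speed toward the class
# implied by the strongest matching rule whenever the two disagree.
```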

  4. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programming languages was attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state of the art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  5. A Strategy Modelling Technique for Financial Services

    OpenAIRE

    Heinrich, Bernd; Winter, Robert

    2004-01-01

    Strategy planning processes often suffer from a lack of conceptual models that can be used to represent business strategies in a structured and standardized form. If natural language is replaced by an at least semi-formal model, the completeness, consistency, and clarity of strategy descriptions can be drastically improved. A strategy modelling technique is proposed that is based on an analysis of modelling requirements, a discussion of related work and a critical analysis of generic approach...

  6. Feasibility Study on Tension Estimation Technique for Hanger Cables Using the FE Model-Based System Identification Method

    Directory of Open Access Journals (Sweden)

    Kyu-Sik Park

    2015-01-01

    Full Text Available Hanger cables in suspension bridges are partly constrained by horizontal clamps. Therefore, existing tension estimation methods based on a single cable model are prone to higher errors as the cable gets shorter, making it more sensitive to flexural rigidity. For this reason, inverse analysis and system identification methods based on finite element models have been suggested recently. In this paper, the applicability of system identification methods is investigated using the hanger cables of the Gwang-An bridge. The test results show that the inverse analysis and system identification methods based on finite element models are more reliable than the existing string theory and linear regression method for calculating the tension, in terms of natural frequency errors. However, the estimation error of the tension can vary according to the accuracy of the finite element model in model-based methods. In particular, the boundary conditions affect the results more profoundly when the cable gets shorter. Therefore, it is important to identify the boundary conditions through experiment if possible. The FE model-based tension estimation method using the system identification method can take various boundary conditions into account. Also, since it is not sensitive to the number of natural frequency inputs, the availability of this system is high.
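
    For reference, the "existing string theory" baseline mentioned in this record reduces to a few lines: the taut-string relation f_n = (n/2L)·sqrt(T/m) inverted for tension. The cable length, mass per unit length and measured frequencies below are hypothetical.

```python
import numpy as np

def string_tension(freq_hz, mode_n, length_m, mass_per_m):
    """Taut-string estimate: f_n = (n / 2L) * sqrt(T / m)  =>  T = 4 m L^2 f_n^2 / n^2."""
    return 4.0 * mass_per_m * length_m ** 2 * freq_hz ** 2 / mode_n ** 2

# hypothetical short hanger cable: 12 m long, 60 kg/m, first three measured frequencies
freqs = np.array([3.10, 6.25, 9.45])   # Hz
modes = np.array([1, 2, 3])
print(string_tension(freqs, modes, 12.0, 60.0) / 1e3, "kN")   # one estimate per mode
```

    The scatter among the per-mode estimates is one symptom of the flexural-rigidity and boundary-condition effects that the FE model-based identification methods are meant to absorb.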

  7. Quantification of intervertebral displacement with a novel MRI-based modeling technique: Assessing measurement bias and reliability with a porcine spine model.

    Science.gov (United States)

    Mahato, Niladri K; Montuelle, Stephane; Goubeaux, Craig; Cotton, John; Williams, Susan; Thomas, James; Clark, Brian C

    2017-05-01

    The purpose of this study was to develop a novel magnetic resonance imaging (MRI)-based modeling technique for measuring intervertebral displacements. Here, we present the measurement bias and reliability of the developmental work using a porcine spine model. Porcine lumbar vertebral segments were fitted in a custom-built apparatus placed within an externally calibrated imaging volume of an open-MRI scanner. The apparatus allowed movement of the vertebrae through pre-assigned magnitudes of sagittal and coronal translation and rotation. The induced displacements were imaged with static (T1) and fast dynamic (2D HYCE S) pulse sequences. These images were imported into animation software, in which they formed a background 'scene'. Three-dimensional models of the vertebrae were created using static axial scans of the specimen and then transferred into the animation environment. There, the user manually moved the models (rotoscoping) to perform model-to-'scene' matching, fitting the models to their image silhouettes, and assigned anatomical joint axes to the motion segments. The animation protocol quantified the experimental translation and rotation displacements between the vertebral models. Accuracy of the technique was calculated as 'bias' using a linear mixed effects model, average percentage error and root mean square errors. Between-session reliability was examined by computing intra-class correlation coefficients (ICC) and coefficients of variation (CV). For translation trials, a constant bias (β0) of 0.35 (±0.11) mm was detected for the 2D HYCE S sequence (p=0.01). The model did not demonstrate significant additional bias with each mm increase in experimental translation (β1 = 0.01 mm; p=0.69). Using the T1 sequence for the same assessments did not significantly change the bias (p>0.05). ICC values for the T1 and 2D HYCE S pulse sequences were 0.98 and 0.97, respectively. For rotation trials, a constant bias (

  8. Computed tomography landmark-based semi-automated mesh morphing and mapping techniques: generation of patient specific models of the human pelvis without segmentation.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Wright, David; Whyne, Cari Marisa

    2015-04-13

    Current methods for the development of pelvic finite element (FE) models generally are based upon specimen specific computed tomography (CT) data. This approach has traditionally required segmentation of CT data sets, which is time consuming and necessitates high levels of user intervention due to the complex pelvic anatomy. The purpose of this research was to develop and assess CT landmark-based semi-automated mesh morphing and mapping techniques to aid the generation and mechanical analysis of specimen-specific FE models of the pelvis without the need for segmentation. A specimen-specific pelvic FE model (source) was created using traditional segmentation methods and morphed onto a CT scan of a different (target) pelvis using a landmark-based method. The morphed model was then refined through mesh mapping by moving the nodes to the bone boundary. A second target model was created using traditional segmentation techniques. CT intensity based material properties were assigned to the morphed/mapped model and to the traditionally segmented target models. Models were analyzed to evaluate their geometric concurrency and strain patterns. Strains generated in a double-leg stance configuration were compared to experimental strain gauge data generated from the same target cadaver pelvis. CT landmark-based morphing and mapping techniques were efficiently applied to create a geometrically multifaceted specimen-specific pelvic FE model, which was similar to the traditionally segmented target model and better replicated the experimental strain results (R(2)=0.873). This study has shown that mesh morphing and mapping represents an efficient validated approach for pelvic FE model generation without the need for segmentation. Copyright © 2015 Elsevier Ltd. All rights reserved.
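
    A minimal sketch of the landmark-based idea follows: a 3D affine transform, estimated by least squares from corresponding landmarks, morphs source points onto the target geometry (the full method also refines the mesh by mapping nodes to the bone boundary, which is not shown); the landmark coordinates and transform below are synthetic.

```python
import numpy as np

def fit_affine_3d(source_pts, target_pts):
    """Least-squares 3D affine transform mapping source landmarks onto target landmarks.
    Points are (n, 3) arrays; returns a 3x4 matrix [A | t]."""
    n = source_pts.shape[0]
    X = np.hstack([source_pts, np.ones((n, 1))])       # homogeneous source coordinates
    # solve X @ M ~= target for M (4 x 3), one least-squares fit per target coordinate
    M, *_ = np.linalg.lstsq(X, target_pts, rcond=None)
    return M.T

rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(12, 3))                 # landmarks on the source pelvis model
true_A = np.array([[1.05, 0.02, 0.0], [0.0, 0.98, 0.03], [0.01, 0.0, 1.1]])
dst = src @ true_A.T + np.array([5.0, -2.0, 3.0])       # same landmarks on the target CT
M = fit_affine_3d(src, dst)
warped = np.hstack([src, np.ones((12, 1))]) @ M.T       # morph source nodes onto the target
print("max landmark error:", np.abs(warped - dst).max())
```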

  9. Plankton Biomass Models Based on GIS and Remote Sensing Technique for Predicting Marine Megafauna Hotspots in the Solor Waters

    Science.gov (United States)

    Putra, MIH; Lewis, SA; Kurniasih, EM; Prabuning, D.; Faiqoh, E.

    2016-11-01

    Geographic information system and remote sensing techniques can be used to assist with distribution modelling, a useful tool that helps with strategic design and management plans for MPAs. This study built a pilot model of plankton biomass and distribution in the waters off Solor and Lembata, and is the first study to identify marine megafauna foraging areas in the region. Forty-three samples of zooplankton were collected every 4 km according to the range time and station of Aqua MODIS. A generalized additive model (GAM) was used to model the zooplankton biomass response to environmental properties. Thirty-one samples were used to build an inverse distance weighting (IDW) model (cell size 0.01°) and 12 samples were used as a control to verify the model's accuracy. Furthermore, Getis-Ord Gi was used to identify significant hotspots and cold-spots of foraging areas. The GAM explained 88.1% of the zooplankton biomass response, with percent to full moon and phytoplankton biomass being strong predictors. The sampling design was essential in order to build highly accurate models. Our models were 96% accurate for phytoplankton and 88% accurate for zooplankton. Foraging behaviour was significantly related to plankton biomass hotspots, which were two times higher than plankton cold-spots. In addition, the extremely steep slopes of the Lamakera strait support strong upwelling with highly productive waters that affect the presence of marine megafauna. This study shows that the Lamakera strait provides the planktonic requirements for marine megafauna foraging, helping to explain why this region supports such high diversity and abundance of marine megafauna.
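
    A minimal sketch of the inverse distance weighting (IDW) interpolation used to build such biomass surfaces is given below; the station locations and biomass values are synthetic, and the GAM, hotspot (Getis-Ord) and validation steps are not shown.

```python
import numpy as np

def idw(xy_known, values, xy_query, power=2.0):
    """Inverse distance weighting: weight = 1 / d^power, normalized over all samples."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.where(d == 0, 1e-12, d)            # avoid division by zero at sample locations
    w = 1.0 / d ** power
    return (w @ values) / w.sum(axis=1)

rng = np.random.default_rng(0)
stations = rng.uniform(0, 10, size=(31, 2))   # 31 sampling stations (arbitrary units)
biomass = rng.lognormal(mean=2.0, sigma=0.5, size=31)
grid_x, grid_y = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid = np.column_stack([grid_x.ravel(), grid_y.ravel()])
surface = idw(stations, biomass, grid).reshape(50, 50)   # interpolated biomass raster
print(surface.shape, surface.min(), surface.max())
```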

  10. Bases of technique of sprinting

    Directory of Open Access Journals (Sweden)

    Valeriy Druz

    2015-06-01

    Full Text Available Purpose: to determine the consistent biomechanical patterns of body movement that provide the highest sprinting speed. Material and Methods: analysis of the scientific and methodological literature on the problem, the anthropometric characteristics of the surveyed group of athletes, and analysis of high-speed footage of the world's leading runners. Results: the biomechanical basis of sprinting technique consists of the acceleration and movement of the athlete's general centre of body mass along a parabolic curve in the start phase, taking into account its initial height in the low-start position. Its further movement follows a cycloidal trajectory formed by the pendulum-like movement of the limbs, which creates lift and makes the flight phase of a running step last longer than the support phase. Conclusions: the identified biomechanical regularities of sprinting technique make it possible to increase the efficiency of sprint training.

  11. Probabilistic hydrological nowcasting using radar based nowcasting techniques and distributed hydrological models: application in the Mediterranean area

    Science.gov (United States)

    Poletti, Maria Laura; Pignone, Flavio; Rebora, Nicola; Silvestro, Francesco

    2017-04-01

    The exposure of urban areas to flash floods is particularly significant for Mediterranean coastal cities, which are generally densely inhabited. Severe rainfall events, often associated with intense and organized thunderstorms, produced flash floods and landslides during the last century, causing serious damage to urban areas and, in the worst events, human losses. The temporal scale of these events has been observed to be strictly linked to the size of the catchments involved: in the Mediterranean area, a great number of catchments that pass through coastal cities have a small drainage area (less than 100 km2) and a corresponding hydrologic response timescale of the order of a few hours. A suitable nowcasting chain is essential for the timely forecast of this kind of event. Meteorological forecast systems are in fact unable to predict precipitation at the scale of these events, which is small both in space (a few km) and in time (hourly). Nowcasting models, covering the time interval of the following two hours starting from the observation, try to extend the predictability limits of the forecasting models in support of real-time flood alert system operations. This work aims to present the use of hydrological models coupled with nowcasting techniques. The nowcasting model PhaSt furnishes an ensemble of equiprobable future precipitation scenarios on time horizons of 1-3 h starting from the most recent radar observations. The coupling of the nowcasting model PhaSt with the hydrological model Continuum allows the flood to be forecast a few hours in advance. In this way it is possible to generate different discharge predictions for the following hours and associated return period maps: these maps can be used as a support in the decision process for the warning system.

  12. Technique for infrared and visible image fusion based on non-subsampled shearlet transform and spiking cortical model

    Science.gov (United States)

    Kong, Weiwei; Wang, Binghe; Lei, Yang

    2015-07-01

    Fusion of infrared and visible images is an active research area in image processing, and a variety of relevant algorithms have been developed. However, the existing techniques commonly cannot achieve good fusion performance and acceptable computational complexity simultaneously. This paper proposes a novel image fusion approach that integrates the non-subsampled shearlet transform (NSST) with the spiking cortical model (SCM) to overcome the above drawbacks. On the one hand, using NSST for decomposition and reconstruction is not only consistent with human vision characteristics but also effectively decreases the computational complexity compared with currently popular multi-resolution analysis tools such as the non-subsampled contourlet transform (NSCT). On the other hand, the SCM, which has recently been considered an optimal neural network model, is responsible for the fusion of sub-images from different scales and directions. Experimental results indicate that the proposed method is promising and significantly improves fusion quality in terms of both subjective visual performance and objective comparisons with other currently popular methods.

  13. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    Full Text Available We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and it allows model selection, calculation of model posterior probabilities and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature of our algorithm is that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.
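
    The model-averaging step can be illustrated with a few lines: predictions from several candidate models are combined using posterior model probabilities, with the total uncertainty split into within-model and between-model parts. The probabilities, predictions and spreads below are hypothetical, not GOMOS results.

```python
import numpy as np

# hypothetical posterior model probabilities from an (A)ARJ-type run over four
# aerosol cross-section models, plus each model's prediction and its standard deviation
post_prob = np.array([0.47, 0.31, 0.15, 0.07])
pred_mean = np.array([1.02, 0.95, 1.10, 0.88])   # e.g. cross-section at one wavelength
pred_std = np.array([0.05, 0.04, 0.08, 0.06])

# model-averaged estimate
bma_mean = np.sum(post_prob * pred_mean)
# total variance = within-model variance + between-model variance
bma_var = np.sum(post_prob * (pred_std ** 2 + (pred_mean - bma_mean) ** 2))
print(f"averaged estimate: {bma_mean:.3f} +/- {np.sqrt(bma_var):.3f}")
```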

  14. Extension of an Itô-based general approximation technique for random vibration of a BBW general hysteresis model part II: Non-Gaussian analysis

    Science.gov (United States)

    Davoodi, H.; Noori, M.

    1990-07-01

    The work presented in this paper constitutes the second phase of ongoing research aimed at developing mathematical models for representing the general hysteretic behavior of structures and approximation techniques for the computation and analysis of the response of hysteretic systems to random excitations. In this second part, the technique previously developed by the authors for the Gaussian response analysis of non-linear systems with general hysteretic behavior is extended to the non-Gaussian analysis of these systems. This approximation technique is based on the approach proposed independently by Ibrahim and Wu-Lin. In this work, up to fourth-order moments of the response co-ordinates are obtained for the Bouc-Baber-Wen smooth hysteresis model. These higher-order statistics have not previously been available for general hysteresis models using existing approximation methods. Second-order moments obtained for the model by this non-Gaussian closure scheme are compared with equivalent linearization and Gaussian closure results via Monte Carlo simulation (MCS). Higher-order moments are compared with the simulation results. The study, performed for a wide range of degradation parameters and input power spectral density (PSD) levels, shows that the non-Gaussian responses obtained by this approach are in better agreement with the MCS results than the linearized and Gaussian ones. This approximation technique can provide information on higher-order moments for general hysteretic systems. This information is valuable in random vibration and the reliability analysis of hysteretically yielding structures.
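
    For orientation, the sketch below integrates the basic (non-degrading) Bouc-Wen hysteresis law dz/dt = A·ẋ − β·|ẋ|·|z|^(n−1)·z − γ·ẋ·|z|^n for an imposed harmonic displacement; the parameter values are illustrative, and the degradation and pinching features of the full BBW model, as well as the random excitation and closure analysis of the paper, are omitted.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Bouc-Wen parameters (illustrative values, not fitted to any structure)
A_bw, beta, gamma, n = 1.0, 0.5, 0.5, 1.0
alpha, k = 0.1, 1.0                       # post/pre-yield stiffness ratio, stiffness

def displacement(t):                      # imposed harmonic displacement history
    return 1.5 * np.sin(2.0 * np.pi * 0.5 * t)

def velocity(t):
    return 1.5 * 2.0 * np.pi * 0.5 * np.cos(2.0 * np.pi * 0.5 * t)

def dz_dt(t, z):                          # evolution of the hysteretic variable z
    xd = velocity(t)
    return A_bw * xd - beta * np.abs(xd) * np.abs(z) ** (n - 1) * z - gamma * xd * np.abs(z) ** n

t_eval = np.linspace(0.0, 6.0, 600)
sol = solve_ivp(dz_dt, (0.0, 6.0), [0.0], t_eval=t_eval, max_step=0.01)
z = sol.y[0]
force = alpha * k * displacement(t_eval) + (1.0 - alpha) * k * z   # hysteretic restoring force
print("peak restoring force:", np.max(np.abs(force)))
```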

  15. Identifying and prioritizing the tools/techniques of knowledge management based on the Asian Productivity Organization Model (APO) to use in hospitals.

    Science.gov (United States)

    Khajouei, Hamid; Khajouei, Reza

    2017-12-01

    Appropriate knowledge, correct information, and relevant data are vital in medical diagnosis and treatment systems. Knowledge Management (KM), through its tools/techniques, provides a pertinent framework for decision-making in healthcare systems. The objective of this study was to identify and prioritize the KM tools/techniques that apply to the hospital setting. This is a descriptive survey study. Data were collected using a researcher-made questionnaire that was developed based on experts' opinions to select the appropriate tools/techniques from the 26 tools/techniques of the Asian Productivity Organization (APO) model. Questions were categorized into five steps of KM (identifying, creating, storing, sharing, and applying the knowledge) according to this model. The study population consisted of middle and senior managers of hospitals and managing directors of the Vice-Chancellor for Curative Affairs at Kerman University of Medical Sciences in Kerman, Iran. The data were analyzed in SPSS v.19 using one-sample t-tests. Twelve out of the 26 tools/techniques of the APO model were identified as tools applicable in hospitals. "Knowledge café" and the "APO knowledge management assessment tool", with respective means of 4.23 and 3.7, were the most and the least applicable tools in the knowledge identification step. The "mentor-mentee scheme" and "voice and Voice over Internet Protocol (VOIP)", with respective means of 4.20 and 3.52, were the most and the least applicable tools/techniques in the knowledge creation step. "Knowledge café" and "voice and VOIP", with respective means of 3.85 and 3.42, were the most and the least applicable tools/techniques in the knowledge storage step. "Peer assist" and "voice and VOIP", with respective means of 4.14 and 3.38, were the most and the least applicable tools/techniques in the knowledge sharing step. Finally, the "knowledge worker competency plan" and the "knowledge portal", with respective means of 4.38 and 3.85, were the most and the least applicable tools/techniques

  16. Customization of UWB 3D-RTLS Based on the New Uncertainty Model of the AoA Ranging Technique

    Directory of Open Access Journals (Sweden)

    Bartosz Jachimczyk

    2017-01-01

    Full Text Available The increased potential and effectiveness of Real-Time Locating Systems (RTLSs) substantially influence their application spectrum. They are widely used, inter alia, in the industrial sector, healthcare, home care, and in logistic and security applications. The research aims to develop an analytical method to customize UWB-based RTLSs in order to improve their localization performance in terms of accuracy and precision. The analytical uncertainty model of Angle of Arrival (AoA) localization in a 3D indoor space, which is the foundation of the customization concept, is established in a working environment. Additionally, a suitable angular-based 3D localization algorithm is introduced. The paper investigates the following issues: the influence of the proposed correction vector on the localization accuracy, and the impact of the system's configuration and the location sensors' (LSs') relative deployment on the localization precision distribution map. The advantages of the method are verified by comparing them with a reference commercial RTLS localization engine. The results of simulations and physical experiments prove the value of the proposed customization method. The research confirms that the analytical uncertainty model is a valid representation of the RTLS's localization uncertainty in terms of accuracy and precision and can be useful for its performance improvement. The research shows that Angle of Arrival localization in a 3D indoor space, applying the simple angular-based localization algorithm and correction vector, improves localization accuracy and precision to the extent that the system challenges the reference hardware's advanced localization engine. Moreover, the research guides the deployment of location sensors to enhance localization precision.
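
    A minimal sketch of 3D Angle-of-Arrival localization follows: the tag position is recovered as the least-squares intersection of the bearing rays measured by several location sensors. The sensor layout and tag position are hypothetical, and the correction vector and uncertainty model of the paper are not reproduced.

```python
import numpy as np

def aoa_least_squares(sensor_pos, directions):
    """Least-squares intersection of the bearing rays measured by each location sensor."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, u in zip(sensor_pos, directions):
        P = np.eye(3) - np.outer(u, u)   # projector onto the plane orthogonal to the ray
        A += P
        b += P @ p
    return np.linalg.solve(A, b)

# four ceiling-mounted sensors and one tag (coordinates in metres, purely illustrative)
sensors = np.array([[0.0, 0.0, 3.0], [10.0, 0.0, 3.0], [0.0, 8.0, 3.0], [10.0, 8.0, 3.0]])
tag = np.array([4.0, 3.0, 1.2])
dirs = [(tag - p) / np.linalg.norm(tag - p) for p in sensors]   # noise-free bearing vectors
print(aoa_least_squares(sensors, dirs))                          # ~ [4.0, 3.0, 1.2]
```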

  17. Technique for obtaining an engine emissions model based on continuous EPA-CVS test data and a dynamic vehicle model. SAE Paper 760156

    Energy Technology Data Exchange (ETDEWEB)

    Radtke, R.; Frank, A.; Beachley, N.

    1976-01-01

    A new technique for obtaining engine emission flow rate maps has been developed. The maps are based on specific emissions data obtained on a continuous basis over a single EPA-CVS urban driving cycle test. The data are averaged for the various torque-speed ranges of the engine. It has been found that these dynamic average emission flow rate maps, which are functions only of engine torque and speed, allow instantaneous emissions to be fairly accurately predicted. It also appears that the technique might be used advantageously to determine engine calibration parameters.
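
    A minimal sketch of the map-building idea follows: continuously sampled emissions are binned by engine torque and speed and averaged within each bin to form a flow-rate map. The data below are synthetic, not EPA-CVS measurements, and the bin edges are illustrative.

```python
import numpy as np

def emission_map(torque, speed, emission_rate, t_edges, s_edges):
    """Average instantaneous emission flow rate in each torque-speed bin."""
    total, _, _ = np.histogram2d(torque, speed, bins=[t_edges, s_edges], weights=emission_rate)
    count, _, _ = np.histogram2d(torque, speed, bins=[t_edges, s_edges])
    with np.errstate(invalid="ignore"):
        return np.where(count > 0, total / count, np.nan)

rng = np.random.default_rng(0)
n = 1370                                          # ~ one sample per second over a driving cycle
torque = rng.uniform(0, 200, n)                   # N*m
speed = rng.uniform(600, 4000, n)                 # rpm
nox = 1e-4 * torque * speed / 1000 + rng.normal(0, 0.002, n)   # g/s, synthetic
grid = emission_map(torque, speed, nox,
                    t_edges=np.linspace(0, 200, 9),
                    s_edges=np.linspace(600, 4000, 9))
print(grid.shape)   # 8 x 8 average NOx flow-rate map, indexed by torque and speed bins
```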

  18. Size Adaptive Region Based Huffman Compression Technique

    OpenAIRE

    Nandi, Utpal; Mandal, Jyotsna Kumar

    2014-01-01

    A lossless compression technique is proposed which uses a variable-length region formation technique to divide the input file into a number of variable-length regions. Huffman codes are obtained for the entire file after the formation of regions. Symbols of each region are compressed one by one. Comparisons are made among the proposed technique, the Region Based Huffman compression technique and the classical Huffman technique. The proposed technique offers a better compression ratio for some files than the other two.
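
    For reference, here is a compact sketch of classical Huffman coding, the baseline the region-based technique is compared against; the region-formation step itself is not shown.

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes):
    """Build a Huffman code table (symbol -> bit string) for the symbols in `data`."""
    counts = Counter(data)
    # each heap entry: [frequency, tie-breaker, list of [symbol, code] pairs]
    heap = [[freq, i, [[sym, ""]]] for i, (sym, freq) in enumerate(counts.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                        # degenerate case: a single distinct symbol
        return {heap[0][2][0][0]: "0"}
    counter = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)              # two least frequent subtrees
        hi = heapq.heappop(heap)
        for pair in lo[2]:
            pair[1] = "0" + pair[1]           # prefix codes in the low branch with 0
        for pair in hi[2]:
            pair[1] = "1" + pair[1]           # and the high branch with 1
        heapq.heappush(heap, [lo[0] + hi[0], counter, lo[2] + hi[2]])
        counter += 1
    return {sym: code for sym, code in heap[0][2]}

text = b"abracadabra abracadabra"
table = huffman_codes(text)
encoded_bits = "".join(table[b] for b in text)
print(table)
print("compressed size:", len(encoded_bits), "bits vs", 8 * len(text), "bits raw")
```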

  19. Development of a morphology-based modeling technique for tracking solid-body displacements: examining the reliability of a potential MRI-only approach for joint kinematics assessment.

    Science.gov (United States)

    Mahato, Niladri K; Montuelle, Stephane; Cotton, John; Williams, Susan; Thomas, James; Clark, Brian

    2016-05-18

    Single or biplanar video radiography and Roentgen stereophotogrammetry (RSA) techniques used for the assessment of in-vivo joint kinematics involve the application of ionizing radiation, which is a limitation for clinical research involving human subjects. To overcome this limitation, our long-term goal is to develop a magnetic resonance imaging (MRI)-only, three dimensional (3-D) modeling technique that permits dynamic imaging of joint motion in humans. Here, we present our initial findings, as well as reliability data, for an MRI-only protocol and modeling technique. We developed a morphology-based motion-analysis technique that uses MRI of custom-built solid-body objects to animate and quantify experimental displacements between them. The technique involved four major steps. First, the imaging volume was calibrated using a custom-built grid. Second, 3-D models were segmented from axial scans of two custom-built solid-body cubes. Third, these cubes were positioned at pre-determined relative displacements (translation and rotation) in the magnetic resonance coil and scanned with T1 and fast contrast-enhanced pulse sequences. The digital imaging and communications in medicine (DICOM) images were then processed for animation. The fourth step involved importing these processed images into an animation software, where they were displayed as background scenes. In the same step, 3-D models of the cubes were imported into the animation software, where the user manipulated the models to match their outlines in the scene (rotoscoping) and registered the models into an anatomical joint system. Measurements of displacements obtained from two different rotoscoping sessions were tested for reliability using coefficients of variation (CV), intraclass correlation coefficients (ICC), Bland-Altman plots, and Limits of Agreement analyses. Between-session reliability was high for both the T1 and the contrast-enhanced sequences. Specifically, the average CVs for translation were 4

  20. Development of a morphology-based modeling technique for tracking solid-body displacements: examining the reliability of a potential MRI-only approach for joint kinematics assessment

    International Nuclear Information System (INIS)

    Mahato, Niladri K.; Montuelle, Stephane; Cotton, John; Williams, Susan; Thomas, James; Clark, Brian

    2016-01-01

    Single or biplanar video radiography and Roentgen stereophotogrammetry (RSA) techniques used for the assessment of in-vivo joint kinematics involve the application of ionizing radiation, which is a limitation for clinical research involving human subjects. To overcome this limitation, our long-term goal is to develop a magnetic resonance imaging (MRI)-only, three dimensional (3-D) modeling technique that permits dynamic imaging of joint motion in humans. Here, we present our initial findings, as well as reliability data, for an MRI-only protocol and modeling technique. We developed a morphology-based motion-analysis technique that uses MRI of custom-built solid-body objects to animate and quantify experimental displacements between them. The technique involved four major steps. First, the imaging volume was calibrated using a custom-built grid. Second, 3-D models were segmented from axial scans of two custom-built solid-body cubes. Third, these cubes were positioned at pre-determined relative displacements (translation and rotation) in the magnetic resonance coil and scanned with T1 and fast contrast-enhanced pulse sequences. The digital imaging and communications in medicine (DICOM) images were then processed for animation. The fourth step involved importing these processed images into animation software, where they were displayed as background scenes. In the same step, 3-D models of the cubes were imported into the animation software, where the user manipulated the models to match their outlines in the scene (rotoscoping) and registered the models into an anatomical joint system. Measurements of displacements obtained from two different rotoscoping sessions were tested for reliability using coefficients of variation (CV), intraclass correlation coefficients (ICC), Bland-Altman plots, and Limits of Agreement analyses. Between-session reliability was high for both the T1 and the contrast-enhanced sequences. Specifically, the average CVs for translation were 4

  1. A laplace transform-based technique for solving multiscale and multidomain problems: Application to a countercurrent hemodialyzer model.

    Science.gov (United States)

    Simon, Laurent

    2017-08-01

    An integral-based method was employed to evaluate the behavior of a countercurrent hemodialyzer model. Solute transfer from the blood into the dialysate was described by writing mass balance equations over a section of the device. The approach provided Laplace transform concentration profiles on both sides of the membrane. Applications of the final value theorem led to the development of the effective time constants and steady-state concentrations in the exit streams. Transient responses were derived by a numerical inversion algorithm. Simulations show that the period elapsed, before reaching equilibrium in the effluents, decreased when the blood flow rate increased from 0.25 to 0.30 ml/s. The performance index decreased from 0.80 to 0.71 when the blood-to-dialysate flow ratio increased by 20% and increased from 0.80 to 0.85 when this fraction was reduced by 17%. The analytical solution predicted methadone removal in patients undergoing dialysis. Clinicians can use these findings to predict the time required to achieve a target extraction ratio. Copyright © 2017 Elsevier Ltd. All rights reserved.
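
    The abstract mentions a numerical inversion algorithm without naming it. The sketch below uses the Gaver-Stehfest method, a common choice for real-valued Laplace inversion, applied to a first-order lag whose inverse transform is known in closed form so the approximation can be checked; the transfer function and time constant are invented for the illustration and are unrelated to the hemodialyzer model.

```python
import math

def stehfest_coefficients(N: int):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    assert N % 2 == 0
    half = N // 2
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, half) + 1):
            s += (j ** half * math.factorial(2 * j)
                  / (math.factorial(half - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + half) * s)
    return V

def invert_laplace(F, t: float, N: int = 14) -> float:
    """Approximate f(t) from its Laplace transform F(s), for real t > 0."""
    ln2 = math.log(2.0)
    V = stehfest_coefficients(N)
    return ln2 / t * sum(Vk * F((k + 1) * ln2 / t) for k, Vk in enumerate(V))

if __name__ == "__main__":
    tau = 5.0                                    # hypothetical time constant
    F = lambda s: 1.0 / (s * (tau * s + 1.0))    # step response of a first-order lag
    for t in (1.0, 5.0, 20.0):
        exact = 1.0 - math.exp(-t / tau)         # known closed-form inverse, for comparison
        print(t, round(invert_laplace(F, t), 4), round(exact, 4))
```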

  2. A novel hybrid model for air quality index forecasting based on two-phase decomposition technique and modified extreme learning machine.

    Science.gov (United States)

    Wang, Deyun; Wei, Shuai; Luo, Hongyuan; Yue, Chenqiang; Grunder, Olivier

    2017-02-15

    The randomness, non-stationarity and irregularity of air quality index (AQI) series make AQI forecasting difficult. To enhance forecast accuracy, a novel hybrid forecasting model combining a two-phase decomposition technique and an extreme learning machine (ELM) optimized by the differential evolution (DE) algorithm is developed for AQI forecasting in this paper. In phase I, the complementary ensemble empirical mode decomposition (CEEMD) is utilized to decompose the AQI series into a set of intrinsic mode functions (IMFs) with different frequencies; in phase II, in order to further handle the high frequency IMFs which will increase the forecast difficulty, variational mode decomposition (VMD) is employed to decompose the high frequency IMFs into a number of variational modes (VMs). Then, the ELM model optimized by the DE algorithm is applied to forecast all the IMFs and VMs. Finally, the forecast value of each high frequency IMF is obtained through adding up the forecast results of all corresponding VMs, and the forecast series of AQI is obtained by aggregating the forecast results of all IMFs. To verify and validate the proposed model, two daily AQI series from July 1, 2014 to June 30, 2016 collected from Beijing and Shanghai located in China are taken as the test cases to conduct the empirical study. The experimental results show that the proposed hybrid model based on the two-phase decomposition technique is remarkably superior to all other considered models for its higher forecast accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
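
    The full hybrid model chains CEEMD, VMD and a DE-optimized ELM, none of which is specified in detail in the abstract. Purely as an illustration of the ELM component, the sketch below fits a minimal extreme learning machine (random hidden layer, least-squares output weights) to a synthetic lagged series; the decomposition phases and the DE tuning of the hidden weights are omitted, and all data and hyperparameters are assumptions.

```python
import numpy as np

class ELMRegressor:
    """Minimal single-hidden-layer extreme learning machine (ELM).

    Hidden-layer weights are drawn at random; only the output weights are
    fitted, via a least-squares solve.
    """
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)

    def fit(self, X, y):
        n_features = X.shape[1]
        self.W = self.rng.normal(size=(n_features, self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self._hidden(X)
        self.beta, *_ = np.linalg.lstsq(H, y, rcond=None)
        return self

    def predict(self, X):
        return self._hidden(X) @ self.beta

if __name__ == "__main__":
    # Toy one-step-ahead forecast from the three previous values of a series.
    rng = np.random.default_rng(1)
    series = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.normal(size=400)
    X = np.column_stack([series[i:-3 + i] for i in range(3)])
    y = series[3:]
    model = ELMRegressor(n_hidden=30).fit(X[:300], y[:300])
    rmse = np.sqrt(np.mean((model.predict(X[300:]) - y[300:]) ** 2))
    print("hold-out RMSE:", round(float(rmse), 4))
```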

  3. RF Tomography in Free Space: Experimental Validation of the Forward Model and an Inversion Algorithm Based on the Algebraic Reconstruction Technique

    Directory of Open Access Journals (Sweden)

    V. Picco

    2013-01-01

    Full Text Available Radio-frequency tomography was originally proposed to image underground cavities. Its flexible forward model can be used in free-space by choosing an appropriate dyadic Green's function and can be translated in the microwave domain. Experimental data are used to validate a novel inversion scheme, based on the algebraic reconstruction technique. The proposed method is improved by introducing physical bounds on the solution returned. As a result, the images of the dielectric permittivity profiles obtained are superior in quality to the ones obtained using classical regularization approaches such as the truncated singular value decomposition. The results from three experimental case studies are presented and discussed.
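
    The inversion scheme above is based on the algebraic reconstruction technique with physical bounds imposed on the returned solution. The following Python sketch shows that idea on an invented dense linear system: Kaczmarz-style ART sweeps followed by projection of the iterate onto a box constraint. The matrix size, noise level and bounds are illustrative assumptions, not values from the experiments.

```python
import numpy as np

def art_solve(A, b, lower=None, upper=None, sweeps=50, relax=0.5):
    """Kaczmarz-style ART solver for A x = b with optional box constraints.

    After each sweep the iterate is projected onto [lower, upper], mimicking
    the idea of enforcing physical bounds on the reconstructed profile.
    """
    m, n = A.shape
    x = np.zeros(n)
    row_norms = np.einsum("ij,ij->i", A, A)
    for _ in range(sweeps):
        for i in range(m):
            if row_norms[i] == 0.0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * residual / row_norms[i] * A[i]
        if lower is not None or upper is not None:
            x = np.clip(x, lower, upper)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.normal(size=(80, 40))
    # Invented "permittivity-like" profile constrained to [1, 3].
    x_true = np.clip(rng.normal(1.5, 0.5, size=40), 1.0, 3.0)
    b = A @ x_true + 0.01 * rng.normal(size=80)
    x_hat = art_solve(A, b, lower=1.0, upper=3.0)
    err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print("relative error:", round(float(err), 4))
```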

  4. Optimization-based human motion prediction using an inverse-inverse dynamics technique implemented in the AnyBody Modeling System

    DEFF Research Database (Denmark)

    Farahani, Saeed Davoudabadi; Andersen, Michael Skipper; de Zee, Mark

    2012-01-01

    derived from the detailed musculoskeletal analysis. The technique is demonstrated on a human model pedaling a bicycle. We use a physiology-based cost function expressing the mean square of all muscle activities over the cycle to predict a realistic motion pattern. Posture and motion prediction......, the parameters of these functions are optimized to produce an optimum posture or movement according to a user-defined cost function and constraints. The cost function and the constraints typically express performance, comfort, injury risk, fatigue, muscle load, joint forces and other physiological properties...

  5. Autonomous Robotic Following Using Vision Based Techniques

    National Research Council Canada - National Science Library

    Kania, Robert T; Frederick, Phil A; Del Rose, Mike

    2006-01-01

    .... ISAAC is designed to follow a specific person using solely vision based techniques. The core of the vision based algorithms used in this application is based on years of research from a collaboration of government and university partners...

  6. Autonomous Robotic Following Using Vision Based Techniques

    National Research Council Canada - National Science Library

    Kania, Robert T; Del Rose, Michael

    2005-01-01

    .... ISAAC is designed to follow a specific person using solely vision based techniques. The core of the vision based algorithms used in this application is based on years of research from a collaboration of government and university partners...

  7. Neural-network-based prediction techniques for single station modeling and regional mapping of the foF2 and M(3000)F2 ionospheric characteristics

    Directory of Open Access Journals (Sweden)

    T. D. Xenos

    2002-01-01

    Full Text Available In this work, Neural-Network-based single-station hourly daily foF2 and M(3000)F2 modelling of 15 European ionospheric stations is investigated. The data used are hourly daily values from the period 1964-1988 for training the neural networks and from the period 1989-1994 for checking the prediction accuracy. Two types of models are presented for the F2-layer critical frequency prediction and two for the propagation factor M(3000)F2. The first foF2 model employs the E-layer local noon calculated daily critical frequency (foE12) and the local noon F2-layer critical frequency of the previous day. The second foF2 model, which introduces a new regional mapping technique, employs the Juliusruh neural network model and uses the E-layer local noon calculated daily critical frequency (foE12) and the previous day F2-layer critical frequency measured at Juliusruh at noon. The first M(3000)F2 model employs the E-layer local noon calculated daily critical frequency (foE12), its ± 3 h deviations and the local noon cosine of the solar zenith angle (cos χ12). The second model, which introduces a new M(3000)F2 mapping technique, employs the Juliusruh neural network model and uses the E-layer local noon calculated daily critical frequency (foE12) and the previous day F2-layer critical frequency measured at Juliusruh at noon.

  8. Microprocessor based techniques at CESR

    International Nuclear Information System (INIS)

    Giannini, G.; Cornell Univ., Ithaca, NY

    1981-01-01

    Microprocessor based systems successfully used in connection with the High Energy Physics experimental program at the Cornell Electron Storage Ring are described. The multiprocessor calibration system for the CUSB calorimeter is analyzed in view of present and future applications. (orig.)

  9. Validation Techniques of network harmonic models based on switching of a series linear component and measuring resultant harmonic increments

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

    In this paper two methods of validation of transmission network harmonic models are introduced. The methods were developed as a result of the work presented in [1]. The first method allows calculating the transfer harmonic impedance between two nodes of a network. Switching a linear, series network......, as for example a transmission line. Both methods require that harmonic measurements performed at two ends of the disconnected element are precisely synchronized....... are used for calculation of the transfer harmonic impedance between the nodes. The determined transfer harmonic impedance can be used to validate a computer model of the network. The second method is an extension of the first one. It allows switching a series element that contains a shunt branch

  10. Bases en technique du vide

    CERN Document Server

    Rommel, Guy

    2017-01-01

    This second edition, 20 years after the first, should continue to help technicians build their vacuum systems. Vacuum technology is now used in many fields that differ greatly from one another, and with very reliable equipment. Yet it is often given little study; moreover, it is a discipline in which know-how takes on its full meaning. Unfortunately, its transmission by experienced engineers and technicians no longer takes place, or happens too quickly. Vacuum technology draws on physics, chemistry, mechanics, metallurgy, industrial drawing, electronics, thermal engineering, and so on. The discipline therefore requires mastering techniques from very diverse fields, which is no easy task. Each installation is a particular case in itself, with its own needs, its own way of treating materials and of using equipment. Vacuum systems are sometimes copied from one laboratory to another and the...

  11. Spatio-temporal surveillance of water based infectious disease (malaria) in Rawalpindi, Pakistan using geostatistical modeling techniques.

    Science.gov (United States)

    Ahmad, Sheikh Saeed; Aziz, Neelam; Butt, Amna; Shabbir, Rabia; Erum, Summra

    2015-09-01

    One of the features of medical geography that has made it so useful in health research is statistical spatial analysis, which enables the quantification and qualification of health events. The main objective of this research was to study the spatial distribution patterns of malaria in Rawalpindi district using spatial statistical techniques to identify the hot spots and the possible risk factors. Spatial statistical analyses were done in ArcGIS, and satellite images for land use classification were processed in ERDAS Imagine. Four hundred and fifty water samples were also collected from the study area to identify the presence or absence of any microbial contamination. The results of this study indicated that malaria incidence varied according to geographical location and eco-climatic conditions, showing significant positive spatial autocorrelation. Hot spots, or locations of clusters, were identified using the Getis-Ord Gi* statistic. Significant clustering of malaria incidence occurred in the rural central part of the study area, including Gujar Khan, Kaller Syedan, and some parts of Kahuta and Rawalpindi Tehsil. Ordinary least square (OLS) regression analysis was conducted to analyze the relationship of risk factors with the disease cases. The relationship of different land cover types with the disease cases indicated that malaria was more related to the agriculture, low vegetation, and water classes. Temporal variation of malaria cases showed a significant positive association with the meteorological variables, including average monthly rainfall and temperature. The results of the study further suggested that the water supply and sewage system and the solid waste collection system need serious attention to prevent any outbreak in the study area.
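
    Hot spots in this study were identified with the Getis-Ord Gi* statistic inside ArcGIS. As an illustration of that statistic only, the sketch below computes Gi* z-scores in Python using a binary distance-band weight matrix (focal point included) on synthetic point data; the coordinates, case counts and bandwidth are made-up values rather than the Rawalpindi data.

```python
import numpy as np

def getis_ord_gi_star(values, coords, bandwidth):
    """Getis-Ord Gi* z-scores with a binary distance-band weight matrix.

    values    : 1-D array of incidence counts per location
    coords    : (n, 2) array of x/y coordinates
    bandwidth : neighbours within this distance (including self) get weight 1
    """
    x = np.asarray(values, dtype=float)
    n = x.size
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    W = (d <= bandwidth).astype(float)          # includes w_ii = 1 (the "star" part)
    xbar = x.mean()
    S = np.sqrt((x ** 2).mean() - xbar ** 2)
    wx = W @ x
    wsum = W.sum(axis=1)
    w2sum = (W ** 2).sum(axis=1)
    denom = S * np.sqrt((n * w2sum - wsum ** 2) / (n - 1))
    return (wx - xbar * wsum) / denom

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 10, size=(100, 2))
    cases = rng.poisson(2.0, size=100).astype(float)
    cases[(coords[:, 0] > 7) & (coords[:, 1] > 7)] += 8   # planted hot spot
    z = getis_ord_gi_star(cases, coords, bandwidth=1.5)
    print("locations with z > 1.96:", int((z > 1.96).sum()))
```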

  12. Finger Based Techniques for Nonvisual Touchscreen Text Entry

    OpenAIRE

    Fakrudeen, Mohammed; Yousef, Sufian; Miraz, Mahdi H.; Hussein, AbdelRahman Hamza

    2017-01-01

    This research proposes Finger Based Technique (FBT) for non-visual touch screen device interaction designed for blind users. Based on the proposed technique, the blind user can access virtual keys based on finger holding positions. Three different models have been proposed. They are Single Digit Finger-Digit Input (FDI), Double Digit FDI for digital text entry, and Finger-Text Input (FTI) for normal text entry. All the proposed models were implemented with voice feedback while enabling touch ...

  13. Hydrodynamically Coupled Brownian Dynamics: A coarse-grain particle-based Brownian dynamics technique with hydrodynamic interactions for modeling self-developing flow of polymer solutions

    Science.gov (United States)

    Ahuja, V. R.; van der Gucht, J.; Briels, W. J.

    2018-01-01

    We present a novel coarse-grain particle-based simulation technique for modeling self-developing flow of dilute and semi-dilute polymer solutions. The central idea in this paper is the two-way coupling between a mesoscopic polymer model and a phenomenological fluid model. As our polymer model, we choose Responsive Particle Dynamics (RaPiD), a Brownian dynamics method, which formulates the so-called "conservative" and "transient" pair-potentials through which the polymers interact besides experiencing random forces in accordance with the fluctuation dissipation theorem. In addition to these interactions, our polymer blobs are also influenced by the background solvent velocity field, which we calculate by solving the Navier-Stokes equation discretized on a moving grid of fluid blobs using the Smoothed Particle Hydrodynamics (SPH) technique. While the polymers experience this frictional force opposing their motion relative to the background flow field, our fluid blobs also in turn are influenced by the motion of the polymers through an interaction term. This makes our technique a two-way coupling algorithm. We have constructed this interaction term in such a way that momentum is conserved locally, thereby preserving long range hydrodynamics. Furthermore, we have derived pairwise fluctuation terms for the velocities of the fluid blobs using the Fokker-Planck equation, which have been alternatively derived using the General Equation for the Non-Equilibrium Reversible-Irreversible Coupling (GENERIC) approach in Smoothed Dissipative Particle Dynamics (SDPD) literature. These velocity fluctuations for the fluid may be incorporated into the velocity updates for our fluid blobs to obtain a thermodynamically consistent distribution of velocities. In cases where these fluctuations are insignificant, however, these additional terms may well be dropped out as they are in a standard SPH simulation. We have applied our technique to study the rheology of two different

  14. GIS-based bivariate statistical techniques for groundwater potential ...

    Indian Academy of Sciences (India)

    Ali Haghizadeh

    2017-11-23

    Nov 23, 2017 ... So, these models are known as computational intelligence and machine learning techniques and are used to replace physically based models. In contrast, knowledge-driven methods (KDM) use rich prior knowledge for model building based on knowledge engineering and management technologies (Azkune.

  15. System-Level Modeling and Synthesis Techniques for Flow-Based Microfluidic Very Large Scale Integration Biochips

    DEFF Research Database (Denmark)

    Minhass, Wajid Hassan

    Microfluidic biochips integrate different biochemical analysis functionalities on-chip and offer several advantages over the conventional biochemical laboratories. In this thesis, we focus on the flow-based biochips. The basic building block of such a chip is a valve which can be fabricated at very...... to reduce the macro-assembly around the chip and enhance chip scalability, we propose an approach for the biochip pin count minimization. We also propose a throughput maximization scheme for the cell culture mVLSI biochips, saving time and reducing costs. We have extensively evaluated the proposed...

  16. Mechanistic model to predict colostrum intake based on deuterium oxide dilution technique data and impact of gestation and prefarrowing diets on piglet intake and sow yield of colostrum.

    Science.gov (United States)

    Theil, P K; Flummer, C; Hurley, W L; Kristensen, N B; Labouriau, R L; Sørensen, M T

    2014-12-01

    The aims of the present study were to quantify colostrum intake (CI) of piglets using the D2O dilution technique, to develop a mechanistic model to predict CI, to compare these data with CI predicted by a previous empirical predictive model developed for bottle-fed piglets, and to study how the composition of diets fed to gestating sows affected piglet CI, sow colostrum yield (CY), and colostrum composition. In total, 240 piglets from 40 litters were enriched with D2O. The CI measured by D2O from birth until 24 h after the birth of the first-born piglet was on average 443 g (SD 151). Based on measured CI, a mechanistic model to predict CI was developed using piglet characteristics (24-h weight gain [WG; g], BW at birth [BWB; kg], and duration of CI [D; min]): CI (g) = -106 + 2.26 WG + 200 BWB + 0.111 D - 1,414 WG/D + 0.0182 WG/BWB (R² = 0.944). This model was used to predict the CI for all colostrum suckling piglets within the 40 litters (n=500, mean=437 g, SD=153 g) and was compared with the CI predicted by a previous empirical predictive model (mean=305 g, SD=140 g). The previous empirical model underestimated the CI by 30% compared with that obtained by the new mechanistic model. The sows were fed 1 of 4 gestation diets (n=10 per diet) based on different fiber sources (low fiber [17%] or potato pulp, pectin residue, or sugarbeet pulp [32 to 40%]) from mating until d 108 of gestation. From d 108 of gestation until parturition, sows were fed 1 of 5 prefarrowing diets (n=8 per diet) varying in supplemented fat (3% animal fat, 8% coconut oil, 8% sunflower oil, 8% fish oil, or 4% fish oil+4% octanoic acid). Sows fed diets with pectin residue or sugarbeet pulp during gestation produced colostrum with lower protein, fat, DM, and energy concentrations and higher lactose concentrations, and their piglets had greater CI as compared with sows fed potato pulp or the low-fiber diet (P < …). Coconut oil decreased lactose and increased DM concentrations of colostrum compared with other prefarrowing diets (P
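
    The mechanistic equation reported above can be transcribed directly. A possible Python version follows; the example piglet (100 g of 24-h weight gain, 1.4 kg birth weight, 1400 min of suckling) is an invented illustration, not an animal from the study.

```python
def colostrum_intake(wg_g: float, bwb_kg: float, d_min: float) -> float:
    """Mechanistic colostrum intake (g) prediction from the abstract's equation.

    wg_g   : 24-h weight gain (g)
    bwb_kg : body weight at birth (kg)
    d_min  : duration of colostrum intake (min)
    """
    return (-106 + 2.26 * wg_g + 200 * bwb_kg + 0.111 * d_min
            - 1414 * wg_g / d_min + 0.0182 * wg_g / bwb_kg)

# Hypothetical example piglet: 100 g gain, 1.4 kg at birth, 1400 min of suckling.
print(round(colostrum_intake(100, 1.4, 1400), 1), "g")
```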

  17. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines, assumes an understanding of graduate level multivariate statistics, including an introduction to SEM.

  18. Cells, Agents, and Support Vectors in Interaction - Modeling Urban Sprawl based on Machine Learning and Artificial Intelligence Techniques in a Post-Industrial Region

    Science.gov (United States)

    Rienow, A.; Menz, G.

    2015-12-01

    Since the beginning of the millennium, artificial intelligence techniques as cellular automata (CA) and multi-agent systems (MAS) have been incorporated into land-system simulations to address the complex challenges of transitions in urban areas as open, dynamic systems. The study presents a hybrid modeling approach for modeling the two antagonistic processes of urban sprawl and urban decline at once. The simulation power of support vector machines (SVM), cellular automata (CA) and multi-agent systems (MAS) are integrated into one modeling framework and applied to the largest agglomeration of Central Europe: the Ruhr. A modified version of SLEUTH (short for Slope, Land-use, Exclusion, Urban, Transport, and Hillshade) functions as the CA component. SLEUTH makes use of historic urban land-use data sets and growth coefficients for the purpose of modeling physical urban expansion. The machine learning algorithm of SVM is applied in order to enhance SLEUTH. Thus, the stochastic variability of the CA is reduced and information about the human and ecological forces driving the local suitability of urban sprawl is incorporated. Subsequently, the supported CA is coupled with the MAS ReHoSh (Residential Mobility and the Housing Market of Shrinking City Systems). The MAS models population patterns, housing prices, and housing demand in shrinking regions based on interactions between household and city agents. Semi-explicit urban weights are introduced as a possibility of modeling from and to the pixel simultaneously. Three scenarios of changing housing preferences reveal the urban development of the region in terms of quantity and location. They reflect the dissemination of sustainable thinking among stakeholders versus the steady dream of owning a house in sub- and exurban areas. Additionally, the outcomes are transferred into a digital petri dish reflecting a synthetic environment with perfect conditions of growth. Hence, the generic growth elements affecting the future

  19. Verification of Orthogrid Finite Element Modeling Techniques

    Science.gov (United States)

    Steeve, B. E.

    1996-01-01

    The stress analysis of orthogrid structures, specifically with I-beam sections, is regularly performed using finite elements. Various modeling techniques are often used to simplify the modeling process but still adequately capture the actual hardware behavior. The accuracy of such 'short cuts' is sometimes in question. This report compares three modeling techniques to actual test results from a loaded orthogrid panel. The finite element models include a beam, shell, and mixed beam and shell element model. Results show that the shell element model performs the best, but that the simpler beam and beam and shell element models provide reasonable to conservative results for a stress analysis. When deflection and stiffness are critical, it is important to capture the effect of the orthogrid nodes in the model.

  20. Matlab-Based Modeling and Simulations to Study the Performance of Different MPPT Techniques Used for Photovoltaic Systems under Partially Shaded Conditions

    Directory of Open Access Journals (Sweden)

    Jehun Hahm

    2015-01-01

    Full Text Available A pulse-width-modulator (PWM)-based sliding mode controller is developed to study the effects of partial shade, temperature, and insolation on the performance of maximum power point tracking (MPPT) used in photovoltaic (PV) systems. Under partially shaded conditions and varying temperature, PV array characteristics become more complex, with multiple power-voltage maxima. MPPT is an automatic control technique to adjust power interfaces and deliver power for a diverse range of insolation values, temperatures, and partially shaded modules. The PV system is tested using two conventional algorithms: the Perturb and Observe (P&O) algorithm and the Incremental Conductance (IncCond) algorithm, which are simple to implement for a PV array. The proposed method applied a model to simulate the performance of the PV system for solar energy usage, which is compared to the conventional methods under nonuniform insolation, improving the PV system utilization efficiency and allowing optimization of the system performance. The PWM-based sliding mode controller successfully overcomes the issues presented by nonuniform conditions and tracks the global MPP. In this paper, the PV system consists of a solar module under shade connected to a boost converter that is controlled by three different algorithms and is simulated using Matlab/Simulink.
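
    Of the algorithms compared in this study, Perturb and Observe is the simplest to state. The sketch below implements the textbook P&O update rule on an invented single-peak P-V curve; under uniform conditions it converges to and then oscillates around the maximum, but the same rule can lock onto a local peak when partial shading creates several maxima, which is the situation the proposed sliding mode controller is designed to handle. The curve shape, step size and starting voltage are assumptions for the example.

```python
def pv_power(v):
    """Toy single-peak P-V curve (uniform insolation) with a maximum near 17 V."""
    return max(0.0, 60.0 - 0.2 * (v - 17.0) ** 2)

def perturb_and_observe(v, p, v_prev, p_prev, step=0.2):
    """Classic P&O rule: keep perturbing in the direction that increased power."""
    if p - p_prev == 0:
        return v
    if (p > p_prev) == (v > v_prev):
        return v + step
    return v - step

if __name__ == "__main__":
    v_prev, v = 10.0, 10.2
    p_prev = pv_power(v_prev)
    for _ in range(100):
        p = pv_power(v)
        v_next = perturb_and_observe(v, p, v_prev, p_prev)
        v_prev, p_prev, v = v, p, v_next
    print("operating point ~", round(v, 2), "V,", round(pv_power(v), 2), "W")
```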

  1. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  2. Model techniques for testing heated concrete structures

    International Nuclear Information System (INIS)

    Stefanou, G.D.

    1983-01-01

    Experimental techniques are described which may be used in the laboratory to measure strains of model concrete structures representing to scale actual structures of any shape or geometry, operating at elevated temperatures, for which time-dependent creep and shrinkage strains are dominant. These strains could be used to assess the distribution of stress in the scaled structure and hence to predict the actual behaviour of concrete structures used in nuclear power stations. Similar techniques have been employed in an investigation to measure elastic, thermal, creep and shrinkage strains in heated concrete models representing to scale parts of prestressed concrete pressure vessels for nuclear reactors. (author)

  3. Techniques for managing behaviour in pediatric dentistry: comparative study of live modelling and tell-show-do based on children's heart rates during treatment.

    Science.gov (United States)

    Farhat-McHayleh, Nada; Harfouche, Alice; Souaid, Philippe

    2009-05-01

    Tell-show-do is the most popular technique for managing children's behaviour in dentists' offices. Live modelling is used less frequently, despite the satisfactory results obtained in studies conducted during the 1980s. The purpose of this study was to compare the effects of these 2 techniques on children's heart rates during dental treatments, heart rate being the simplest biological parameter to measure and an increase in heart rate being the most common physiologic indicator of anxiety and fear. For this randomized, controlled, parallel-group single-centre clinical trial, children 5 to 9 years of age presenting for the first time to the Saint Joseph University dental care centre in Beirut, Lebanon, were divided into 3 groups: those in groups A and B were prepared for dental treatment by means of live modelling, the mother serving as the model for children in group A and the father as the model for children in group B. The children in group C were prepared by a pediatric dentist using the tell-show-do method. Each child's heart rate was monitored during treatment, which consisted of an oral examination and cleaning. A total of 155 children met the study criteria and participated in the study. Children who received live modelling with the mother as model had lower heart rates than those who received live modelling with the father as model and those who were prepared by the tell-show-do method (p < …) … pediatric dentistry.

  4. Workshop on Computational Modelling Techniques in Structural ...

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 22; Issue 6. Workshop on Computational Modelling Techniques in Structural Biology. Information and Announcements Volume 22 Issue 6 June 2017 pp 619-619. Fulltext. Click here to view fulltext PDF. Permanent link:

  5. A brief review of dispensing-based rapid prototyping techniques in tissue scaffold fabrication: role of modeling on scaffold properties prediction

    Energy Technology Data Exchange (ETDEWEB)

    Li, M G; Chen, X B [Department of Mechanical Engineering, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5A9 (Canada); Tian, X Y, E-mail: mil715@mail.usask.c [Division of Biomedical Engineering, University of Saskatchewan, Saskatoon, Saskatchewan S7N 5A9 (Canada)

    2009-09-15

    Artificial scaffolds play vital roles in tissue engineering as they provide a supportive environment for cell attachment, proliferation and differentiation during tissue formation. Fabrication of tissue scaffolds is thus of fundamental importance for tissue engineering. Of the variety of scaffold fabrication techniques available, rapid prototyping (RP) methods have attracted a great deal of attention in recent years. This method can improve conventional scaffold fabrication by controlling scaffold microstructure, incorporating cells into scaffolds and regulating cell distribution. All of these contribute towards the ultimate goal of tissue engineering: functional tissues or organs. Dispensing is typically used in different RP techniques to implement the layer-by-layer fabrication process. This article reviews RP methods in tissue scaffold fabrication, with emphasis on dispensing-based techniques, and analyzes the effects of different process factors on fabrication performance, including flow rate, pore size and porosity, and mechanical cell damage that can occur in the bio-manufacturing process. (topical review)

  6. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and other man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is basically a computerized or digital model of a city that contains the graphic representation of buildings and other objects in 2.5 or 3D. Generally, three main Geomatics approaches are used for virtual 3-D city model generation: in the first approach, researchers use conventional techniques such as vector map data, DEM, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; and in the third method, many researchers use terrestrial images through close range photogrammetry with DSM and texture mapping. This paper starts with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic, and manual methods), and another based on data input techniques (photogrammetry and laser techniques). After a detailed study, the paper closes with conclusions, a short justification and analysis, and the present trend in 3D city modeling. The paper gives an overview of the techniques related to the generation of virtual 3-D city models using Geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques play a major role in creating a virtual 3-D city model. Each technique and method has some advantages and some drawbacks. The point cloud model is a modern trend for virtual 3-D city models. Photo-realistic, scalable, geo-referenced virtual 3

  7. Ultra-low dose abdominal MDCT: Using a knowledge-based Iterative Model Reconstruction technique for substantial dose reduction in a prospective clinical study

    Energy Technology Data Exchange (ETDEWEB)

    Khawaja, Ranish Deedar Ali, E-mail: rkhawaja@mgh.harvard.edu [MGH Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Singh, Sarabjeet; Blake, Michael; Harisinghani, Mukesh; Choy, Gary; Karosmangulu, Ali; Padole, Atul; Do, Synho [MGH Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Brown, Kevin; Thompson, Richard; Morton, Thomas; Raihani, Nilgoun [CT Research and Advanced Development, Philips Healthcare, Cleveland, OH (United States); Koehler, Thomas [Philips Technologie GmbH, Innovative Technologies, Hamburg (Germany); Kalra, Mannudeep K. [MGH Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States)

    2015-01-15

    Highlights: • Limited abdominal CT indications can be performed at a size specific dose estimate (SSDE) of 1.5 mGy (∼0.9 mSv) in smaller patients (BMI less than or equal to 25 kg/m²) using a knowledge based Iterative Model Reconstruction (IMR) technique. • Evaluation of liver tumors and pathologies is unacceptable at this reduced dose with IMR technique especially in patients with a BMI greater than 25 kg/m². • IMR body soft tissue and routine settings perform substantially better than IMR sharp plus setting in reduced dose CT images. • At SSDE of 1.5 mGy, objective image noise in reduced dose IMR images is 8–56% less than that of standard dose FBP images, with lowest image noise in IMR body-soft tissue images. - Abstract: Purpose: To assess lesion detection and image quality parameters of a knowledge-based Iterative Model Reconstruction (IMR) in reduced dose (RD) abdominal CT examinations. Materials and methods: This IRB-approved prospective study included 82 abdominal CT examinations performed for 41 consecutive patients (mean age, 62 ± 12 years; F:M 28:13) who underwent a RD CT (SSDE, 1.5 mGy ± 0.4 [∼0.9 mSv] at 120 kV with 17–20 mAs/slice) immediately after their standard dose (SD) CT exam (10 mGy ± 3 [∼6 mSv] at 120 kV with automatic exposure control) on 256 MDCT (iCT, Philips Healthcare). SD data were reconstructed using filtered back projection (FBP). RD data were reconstructed with FBP and IMR. Four radiologists used a five-point scale (1 = image quality better than SD CT to 5 = image quality unacceptable) to assess both subjective image quality and artifacts. Lesions were first detected on RD FBP images. RD IMR and RD FBP images were then compared side-by-side to SD-FBP images in an independent, randomized and blinded fashion. Friedman's test and intraclass correlation coefficient were used for data analysis. Objective measurements included image noise and attenuation as well as noise spectral density (NSD) curves

  8. Ultra-low dose abdominal MDCT: Using a knowledge-based Iterative Model Reconstruction technique for substantial dose reduction in a prospective clinical study

    International Nuclear Information System (INIS)

    Khawaja, Ranish Deedar Ali; Singh, Sarabjeet; Blake, Michael; Harisinghani, Mukesh; Choy, Gary; Karosmangulu, Ali; Padole, Atul; Do, Synho; Brown, Kevin; Thompson, Richard; Morton, Thomas; Raihani, Nilgoun; Koehler, Thomas; Kalra, Mannudeep K.

    2015-01-01

    Highlights: • Limited abdominal CT indications can be performed at a size specific dose estimate (SSDE) of 1.5 mGy (∼0.9 mSv) in smaller patients (BMI less than or equal to 25 kg/m²) using a knowledge based Iterative Model Reconstruction (IMR) technique. • Evaluation of liver tumors and pathologies is unacceptable at this reduced dose with IMR technique especially in patients with a BMI greater than 25 kg/m². • IMR body soft tissue and routine settings perform substantially better than IMR sharp plus setting in reduced dose CT images. • At SSDE of 1.5 mGy, objective image noise in reduced dose IMR images is 8–56% less than that of standard dose FBP images, with lowest image noise in IMR body-soft tissue images. - Abstract: Purpose: To assess lesion detection and image quality parameters of a knowledge-based Iterative Model Reconstruction (IMR) in reduced dose (RD) abdominal CT examinations. Materials and methods: This IRB-approved prospective study included 82 abdominal CT examinations performed for 41 consecutive patients (mean age, 62 ± 12 years; F:M 28:13) who underwent a RD CT (SSDE, 1.5 mGy ± 0.4 [∼0.9 mSv] at 120 kV with 17–20 mAs/slice) immediately after their standard dose (SD) CT exam (10 mGy ± 3 [∼6 mSv] at 120 kV with automatic exposure control) on 256 MDCT (iCT, Philips Healthcare). SD data were reconstructed using filtered back projection (FBP). RD data were reconstructed with FBP and IMR. Four radiologists used a five-point scale (1 = image quality better than SD CT to 5 = image quality unacceptable) to assess both subjective image quality and artifacts. Lesions were first detected on RD FBP images. RD IMR and RD FBP images were then compared side-by-side to SD-FBP images in an independent, randomized and blinded fashion. Friedman's test and intraclass correlation coefficient were used for data analysis. Objective measurements included image noise and attenuation as well as noise spectral density (NSD) curves to

  9. Analytical research using synchrotron radiation based techniques

    International Nuclear Information System (INIS)

    Jha, Shambhu Nath

    2015-01-01

    There are many Synchrotron Radiation (SR) based techniques such as X-ray Absorption Spectroscopy (XAS), X-ray Fluorescence Analysis (XRF), SR-Fourier-transform Infrared (SRFTIR), Hard X-ray Photoelectron Spectroscopy (HAXPS) etc. which are increasingly being employed worldwide in analytical research. With the advent of modern synchrotron sources these analytical techniques have been further revitalized and have paved the way for new techniques such as microprobe XRF and XAS, FTIR microscopy, Hard X-ray Photoelectron Spectroscopy (HAXPS) etc. The talk will cover mainly two techniques illustrating their capability in analytical research, namely XRF and XAS. XRF spectroscopy: XRF spectroscopy is an analytical technique which involves the detection of emitted characteristic X-rays following excitation of the elements within the sample. While electron, particle (protons or alpha particles), or X-ray beams can be employed as the exciting source for this analysis, the use of X-ray beams from a synchrotron source has been instrumental in the advancement of the technique in the area of microprobe XRF imaging and trace level compositional characterisation of any sample. Synchrotron radiation induced X-ray emission spectroscopy has become competitive with the earlier microprobe and nanoprobe techniques following the advancements in manipulating and detecting these X-rays. There are two important features that contribute to the superb elemental sensitivities of microprobe SR induced XRF: (i) the absence of the continuum (Bremsstrahlung) background radiation that is a feature of spectra obtained from charged particle beams, and (ii) the increased X-ray flux on the sample associated with the use of tunable third generation synchrotron facilities. Detection sensitivities have been reported in the ppb range, with values of 10⁻¹⁷ g to 10⁻¹⁴ g (depending on the particular element and matrix). Keeping in mind its demand, a microprobe XRF beamline has been set up by RRCAT at Indus-2 synchrotron

  10. Power system stabilizers based on modern control techniques

    Energy Technology Data Exchange (ETDEWEB)

    Malik, O.P.; Chen, G.P.; Zhang, Y.; El-Metwally, K. [Calgary Univ., AB (Canada). Dept. of Electrical and Computer Engineering

    1994-12-31

    Developments in digital technology have made it feasible to develop and implement improved controllers based on sophisticated control techniques. Power system stabilizers based on adaptive control, fuzzy logic and artificial networks are being developed. Each of these control techniques possesses unique features and strengths. In this paper, the relative performance of power systems stabilizers based on adaptive control, fuzzy logic and neural network, both in simulation studies and real time tests on a physical model of a power system, is presented and compared to that of a fixed parameter conventional power system stabilizer. (author) 16 refs., 45 figs., 3 tabs.

  11. Graph based techniques for tag cloud generation

    DEFF Research Database (Denmark)

    Leginus, Martin; Dolog, Peter; Lage, Ricardo Gomes

    2013-01-01

    A tag cloud is one of the navigation aids for exploring documents. Tag clouds also link documents through user-defined terms. We explore various graph-based techniques to improve tag cloud generation. Moreover, we introduce relevance measures based on underlying data such as ratings...... or citation counts for improved measurement of the relevance of tag clouds. We show that, on the given data sets, our approach outperforms the state-of-the-art baseline methods with respect to such relevance by 41 % on the Movielens dataset and by 11 % on the Bibsonomy data set....

  12. Techniques to develop data for hydrogeochemical models

    Energy Technology Data Exchange (ETDEWEB)

    Thompson, C.M.; Holcombe, L.J.; Gancarz, D.H.; Behl, A.E. (Radian Corp., Austin, TX (USA)); Erickson, J.R.; Star, I.; Waddell, R.K. (Geotrans, Inc., Boulder, CO (USA)); Fruchter, J.S. (Battelle Pacific Northwest Lab., Richland, WA (USA))

    1989-12-01

    The utility industry, through its research and development organization, the Electric Power Research Institute (EPRI), is developing the capability to evaluate potential migration of waste constituents from utility disposal sites to the environment. These investigations have developed computer programs to predict leaching, transport, attenuation, and fate of inorganic chemicals. To predict solute transport at a site, the computer programs require data concerning the physical and chemical conditions that affect solute transport at the site. This manual provides a comprehensive view of the data requirements for computer programs that predict the fate of dissolved materials in the subsurface environment and describes techniques to measure or estimate these data. In this manual, basic concepts are described first and individual properties and their associated measurement or estimation techniques are described later. The first three sections review hydrologic and geochemical concepts, discuss data requirements for geohydrochemical computer programs, and describe the types of information the programs produce. The remaining sections define and/or describe the properties of interest for geohydrochemical modeling and summarize available techniques to measure or estimate values for these properties. A glossary of terms associated with geohydrochemical modeling and an index are provided at the end of this manual. 318 refs., 9 figs., 66 tabs.

  13. GIS-Based bivariate statistical techniques for groundwater potential ...

    Indian Academy of Sciences (India)

    24

    driven and knowledge-driven models (Corsini ... addition, the application of the GIS-based SI technique in groundwater potential mapping .... lithology of a given area affects the drainage density and can be of great value for evaluating ...

  14. Simulation-based optimization parametric optimization techniques and reinforcement learning

    CERN Document Server

    Gosavi, Abhijit

    2003-01-01

    Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of simulation-based optimization. The book's objective is two-fold: (1) It examines the mathematical governing principles of simulation-based optimization, thereby providing the reader with the ability to model relevant real-life problems using these techniques. (2) It outlines the computational technology underlying these methods. Taken together these two aspects demonstrate that the mathematical and computational methods discussed in this book do work. Broadly speaking, the book has two parts: (1) parametric (static) optimization and (2) control (dynamic) optimization. Some of the book's special features are: *An accessible introduction to reinforcement learning and parametric-optimization techniques. *A step-by-step description of several algorithms of simulation-based optimization. *A clear and simple introduction to the methodology of neural networks. *A gentle introduction to converg...

  15. Flood alert system based on bayesian techniques

    Science.gov (United States)

    Gulliver, Z.; Herrero, J.; Viesca, C.; Polo, M. J.

    2012-04-01

    The problem of floods in the Mediterranean regions is closely linked to the occurrence of torrential storms in dry regions, where even the water supply relies on adequate water management. Like other Mediterranean basins in Southern Spain, the Guadalhorce River Basin is a medium-sized watershed (3856 km²) where recurrent yearly floods occur, mainly in autumn and spring periods, driven by cold front phenomena. The torrential character of the precipitation in such small basins, with a concentration time of less than 12 hours, produces flash flood events with catastrophic effects over the city of Malaga (600000 inhabitants). From this fact arises the need for specific alert tools which can forecast these kinds of phenomena. Bayesian networks (BN) have been emerging in the last decade as a very useful and reliable computational tool for water resources and for the decision making process. The joint use of Artificial Neural Networks (ANN) and BN has served us to recognize and simulate the two different types of hydrological behaviour in the basin: natural and regulated. This led to the establishment of causal relationships between precipitation, discharge from upstream reservoirs, and water levels at a gauging station. It was seen that a recurrent ANN model working at an hourly scale, considering daily precipitation and the two previous hourly values of reservoir discharge and water level, could provide R² values of 0.86. The BN results slightly improve this fit, while also attaching uncertainty to the prediction. In our current work to design a weather warning service based on Bayesian techniques, the first steps were carried out through an analysis of the correlations between the water level and rainfall at certain representative points in the basin, along with the upstream reservoir discharge. The lower correlation found between precipitation and water level emphasizes the highly regulated condition of the stream. The autocorrelations of the variables were also
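
    The reported ANN uses daily precipitation plus the two previous hourly values of reservoir discharge and water level as inputs. The sketch below reproduces only that feature layout on synthetic series, with a small feed-forward network standing in for the recurrent model described in the abstract; the data generator, network size and the R² printed at the end are illustrative assumptions, not results from the Guadalhorce basin.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

# Synthetic hourly series standing in for the basin data (illustration only).
rng = np.random.default_rng(0)
n = 2000
rain = rng.gamma(0.3, 2.0, size=n)                                  # precipitation proxy
discharge = np.convolve(rain, np.exp(-np.arange(24) / 6.0), mode="same")
level = 0.5 * discharge + 0.3 * np.roll(discharge, 1) + 0.05 * rng.normal(size=n)

# Features: precipitation plus the two previous values of discharge and level.
X = np.column_stack([rain[2:], discharge[1:-1], discharge[:-2], level[1:-1], level[:-2]])
y = level[2:]
split = int(0.8 * len(y))

model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])
print("hold-out R2:", round(r2_score(y[split:], model.predict(X[split:])), 3))
```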

  16. Artificial Intelligence based technique for BTS placement

    International Nuclear Information System (INIS)

    Alenoghena, C O; Emagbetere, J O; Aibinu, A M (Department of Telecommunications Engineering, Federal University of Technology, Minna (Nigeria))

    2013-01-01

    The increase in base transceiver stations (BTS) in most urban areas can be traced to the drive by network providers to meet demand for coverage and capacity. In traditional network planning, the final decision on BTS placement is taken by a team of radio planners; this decision is not foolproof against regulatory requirements. In this paper, an intelligent algorithm for optimal BTS site placement is proposed. The proposed technique takes neighbour and regulatory considerations into account objectively while determining the cell site. The application will lead to a quantitatively unbiased, evaluated decision-making process in BTS placement. Experimental data for a 2 km by 3 km territory were simulated to test the new algorithm; the results obtained show 100% performance of the neighbour-constrained algorithm in BTS placement optimization. Results on the application of GA with the neighbourhood constraint indicate that the choices of location can be unbiased and optimization of facility placement for network design can be carried out
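
    The paper's exact objective function and neighbour constraint are not given in the abstract, so the following Python sketch is a generic illustration of the idea: a simple genetic algorithm selects a subset of candidate sites in a 2 km by 3 km territory, rewarding covered demand and penalising selections that violate a minimum-separation (neighbour) rule. All coordinates, radii, penalty weights and GA settings are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2 km x 3 km territory: demand points and candidate BTS sites (metres).
demand = rng.uniform([0, 0], [2000, 3000], size=(300, 2))
sites = rng.uniform([0, 0], [2000, 3000], size=(40, 2))
COVER_RADIUS, MIN_SEPARATION, N_BTS = 500.0, 400.0, 8

def fitness(mask):
    """Covered demand points, penalised for neighbour-constraint violations
    and for deviating from the target number of BTS."""
    chosen = sites[mask.astype(bool)]
    if len(chosen) == 0:
        return 0.0
    d = np.linalg.norm(demand[:, None, :] - chosen[None, :, :], axis=2)
    covered = (d.min(axis=1) <= COVER_RADIUS).sum()
    violations = 0
    if len(chosen) > 1:
        pair = np.linalg.norm(chosen[:, None, :] - chosen[None, :, :], axis=2)
        violations = (pair[np.triu_indices(len(chosen), 1)] < MIN_SEPARATION).sum()
    return covered - 50.0 * violations - 10.0 * abs(mask.sum() - N_BTS)

def genetic_search(pop_size=60, generations=200, p_mut=0.02):
    pop = (rng.random((pop_size, len(sites))) < N_BTS / len(sites)).astype(int)
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]     # keep the best half
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, len(sites))                  # single-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(len(sites)) < p_mut               # bit-flip mutation
            child[flip] = 1 - child[flip]
            children.append(child)
        pop = np.array(children)
    best = max(pop, key=fitness)
    return best, fitness(best)

best, score = genetic_search()
print("selected sites:", int(best.sum()), "fitness:", score)
```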

  17. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B.; Fagan, J.R. Jr.

    1995-12-31

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. This will be accomplished in a cooperative program by Penn State University and the Allison Engine Company. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor.

  18. Advances in transgenic animal models and techniques.

    Science.gov (United States)

    Ménoret, Séverine; Tesson, Laurent; Remy, Séverine; Usal, Claire; Ouisse, Laure-Hélène; Brusselle, Lucas; Chenouard, Vanessa; Anegon, Ignacio

    2017-10-01

    On May 11th and 12th 2017 was held in Nantes, France, the international meeting "Advances in transgenic animal models and techniques" ( http://www.trm.univ-nantes.fr/ ). This biennial meeting is the fifth one of its kind to be organized by the Transgenic Rats ImmunoPhenomic (TRIP) Nantes facility ( http://www.tgr.nantes.inserm.fr/ ). The meeting was supported by private companies (SONIDEL, Scionics computer innovation, New England Biolabs, MERCK, genOway, Journal Disease Models and Mechanisms) and by public institutions (International Society for Transgenic Technology, University of Nantes, INSERM UMR 1064, SFR François Bonamy, CNRS, Région Pays de la Loire, Biogenouest, TEFOR infrastructure, ITUN, IHU-CESTI and DHU-Oncogeffe and Labex IGO). Around 100 participants, from France but also from different European countries, Japan and USA, attended the meeting.

  19. Ambient temperature modelling with soft computing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Bertini, Ilaria; Ceravolo, Francesco; Citterio, Marco; Di Pietra, Biagio; Margiotta, Francesca; Pizzuti, Stefano; Puglisi, Giovanni [Energy, New Technology and Environment Agency (ENEA), Via Anguillarese 301, 00123 Rome (Italy)]; De Felice, Matteo [Energy, New Technology and Environment Agency (ENEA), Via Anguillarese 301, 00123 Rome (Italy); University of Rome 'Roma 3', Dipartimento di Informatica e Automazione (DIA), Via della Vasca Navale 79, 00146 Rome (Italy)]

    2010-07-15

    This paper proposes a hybrid approach based on soft computing techniques in order to estimate monthly and daily ambient temperature. Indeed, we combine the back-propagation (BP) algorithm and the simple Genetic Algorithm (GA) in order to effectively train artificial neural networks (ANN) in such a way that the BP algorithm initialises a few individuals of the GA's population. Experiments concerned monthly temperature estimation of unknown places and daily temperature estimation for thermal load computation. Results have shown remarkable improvements in accuracy compared to traditional methods. (author)

  20. Technique for Assessing the Stability and Controllability Characteristics of Naval Aircraft Systems Based on the Rational Combination of Modeling, Identification and Flight Experiments

    Directory of Open Access Journals (Sweden)

    S. V. Nikolaev

    2015-01-01

    Full Text Available The aim of this work is to improve the quality and reliability of tests of modern naval aircraft for the assessment of stability and controllability characteristics, and to shorten testing. To achieve this goal it is necessary to develop algorithmic, mathematical and methodological support for the flight trials and for the mathematical modeling of controlled flight modes used to determine the stability and controllability characteristics of the naval aircraft. The article analyses the problems related to determining the stability and controllability characteristics in flight tests and describes the technique used to correct a mathematical model of the aerodynamic characteristics and engine thrust forces of modern naval aircraft. It shows the importance of using an algorithm to check the correctness of onboard measurements of flight parameters. The article presents new results of identification of the aircraft aerodynamic coefficients and shows that, when identifying characteristics of the longitudinal control channel, it is necessary to take the engine thrust forces into account. The aerodynamic coefficients obtained by identification methods are compared with those in the original aerodynamic data bank. An important new component of the work, described in the fourth part of the article, is a set of computer programmes integrated into a common interface. The development of this software has greatly improved the processing of flight experiment materials and the identification of the aerodynamic characteristics of the aircraft. When the results of this work are applied in the testing phase, the required stability and controllability characteristics are determined by simulation, and identification provides the model refinement according to the flight data. The created technology of practical identification is used to verify and refine the mathematical models according to the flight experiment data. Thus, the result is a proven and refined model of the aircraft

  1. Current NMR Techniques for Structure-Based Drug Discovery.

    Science.gov (United States)

    Sugiki, Toshihiko; Furuita, Kyoko; Fujiwara, Toshimichi; Kojima, Chojiro

    2018-01-12

    A variety of nuclear magnetic resonance (NMR) applications have been developed for structure-based drug discovery (SBDD). NMR provides many advantages over other methods, such as the ability to directly observe chemical compounds and target biomolecules, and to be used for ligand-based and protein-based approaches. NMR can also provide important information about the interactions in a protein-ligand complex, such as structure, dynamics, and affinity, even when the interaction is too weak to be detected by ELISA or fluorescence resonance energy transfer (FRET)-based high-throughput screening (HTS) or to be crystallized. In this study, we reviewed current NMR techniques. We focused on recent progress in NMR measurement and sample preparation techniques that have expanded the potential of NMR-based SBDD, such as fluorine NMR (19F-NMR) screening, structure modeling of weak complexes, and site-specific isotope labeling of challenging targets.

  2. A novel model-free data analysis technique based on clustering in a mutual information space: application to resting-state fMRI

    Directory of Open Access Journals (Sweden)

    Simon Benjaminsson

    2010-08-01

    Full Text Available Non-parametric data-driven analysis techniques can be used to study datasets with few assumptions about the data and underlying experiment. Variations of Independent Component Analysis (ICA) have been the methods mostly used on fMRI data, e.g. in finding resting-state networks thought to reflect the connectivity of the brain. Here we present a novel data analysis technique and demonstrate it on resting-state fMRI data. It is a generic method with few underlying assumptions about the data. The results are built from the statistical relations between all input voxels, resulting in a whole-brain analysis on a voxel level. It has good scalability properties and the parallel implementation is capable of handling large datasets and databases. From the mutual information between the activities of the voxels over time, a distance matrix is created for all voxels in the input space. Multidimensional scaling is used to put the voxels in a lower-dimensional space reflecting the dependency relations based on the distance matrix. By performing clustering in this space we can find the strong statistical regularities in the data, which for the resting-state data turns out to be the resting-state networks. The decomposition is performed in the last step of the algorithm and is computationally simple. This opens up for rapid analysis and visualization of the data on different spatial levels, as well as automatically finding a suitable number of decomposition components.
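
    A hypothetical sketch of the pipeline described above, run on synthetic "voxel" time courses rather than real fMRI data: pairwise mutual information, converted to a distance matrix, embedded with multidimensional scaling, and clustered. The bin count, number of embedding dimensions and cluster count are illustrative choices.

```python
import numpy as np
from sklearn.metrics import mutual_info_score
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_voxels, n_timepoints = 60, 200
# two synthetic "networks": voxels within a group share a common signal
base = rng.normal(size=(2, n_timepoints))
signals = np.vstack([base[i // 30] + 0.8 * rng.normal(size=n_timepoints)
                     for i in range(n_voxels)])

def mi(a, b, bins=8):
    """Mutual information between two continuous series via histogram binning."""
    cxy = np.histogram2d(a, b, bins)[0]
    return mutual_info_score(None, None, contingency=cxy)

# Distance matrix: high mutual information -> small distance.
M = np.array([[mi(signals[i], signals[j]) for j in range(n_voxels)]
              for i in range(n_voxels)])
D = M.max() - M
np.fill_diagonal(D, 0.0)

# Embed voxels in a low-dimensional space that respects the MI distances,
# then cluster to recover the strong statistical regularities ("networks").
embedding = MDS(n_components=3, dissimilarity="precomputed",
                random_state=0).fit_transform(D)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
print(labels)
```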

  3. Exploring a physico-chemical multi-array explanatory model with a new multiple covariance-based technique: structural equation exploratory regression.

    Science.gov (United States)

    Bry, X; Verron, T; Cazes, P

    2009-05-29

    In this work, we consider chemical and physical variable groups describing a common set of observations (cigarettes). One of the groups, minor smoke compounds (minSC), is assumed to depend on the others (minSC predictors). PLS regression (PLSR) of minSC on the set of all predictors appears not to lead to a satisfactory analytic model, because it does not take into account the expert's knowledge. PLS path modeling (PLSPM) does not use the multidimensional structure of predictor groups. Indeed, the expert needs to separate the influence of several pre-designed predictor groups on minSC, in order to see what dimensions this influence involves. To meet these needs, we consider a multi-group component-regression model, and propose a method to extract from each group several strong uncorrelated components that fit the model. Estimation is based on a global multiple covariance criterion, used in combination with an appropriate nesting approach. Compared to PLSR and PLSPM, the structural equation exploratory regression (SEER) we propose fully uses predictor group complementarity, both conceptually and statistically, to predict the dependent group.

  4. Flat-Panel Detector—Based Volume Computed Tomography: A Novel 3D Imaging Technique to Monitor Osteolytic Bone Lesions in a Mouse Tumor Metastasis Model

    Directory of Open Access Journals (Sweden)

    Jeannine Missbach-Guentner

    2007-09-01

    Full Text Available Skeletal metastasis is an important cause of mortality in patients with breast cancer. Hence, animal models, in combination with various imaging techniques, are in high demand for preclinical assessment of novel therapies. We evaluated the applicability of flat-panel volume computed tomography (fpVCT) to noninvasive detection of osteolytic bone metastases that develop in severely immunodeficient mice after intracardial injection of MDA-MB-231 breast cancer cells. A single fpVCT scan at 200-µm isotropic resolution was employed to detect osteolysis within the entire skeleton. Osteolytic lesions identified by fpVCT correlated with Faxitron X-ray analysis and were subsequently confirmed by histopathological examination. Isotropic three-dimensional image data sets obtained by fpVCT were the basis for the precise visualization of the extent of the lesion within the cortical bone and for the measurement of bone loss. Furthermore, fpVCT imaging allows continuous monitoring of growth kinetics for each metastatic site and visualization of lesions in more complex regions of the skeleton, such as the skull. Our findings suggest that fpVCT is a powerful tool that can be used to monitor the occurrence and progression of osteolytic lesions in vivo and can be further developed to monitor responses to antimetastatic therapies over the course of the disease.

  5. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large amount of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
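
    A minimal illustration of the surrogate-model idea discussed above: a fast emulator (here a Gaussian-process regressor) is fitted on a handful of expensive simulation runs and then queried instead of the code. The "expensive simulation" is a stand-in analytic function, not an actual RISMC code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulation(x):
    # placeholder for a long-running thermal-hydraulic run
    return np.sin(3 * x[:, 0]) * np.exp(-x[:, 1])

rng = np.random.default_rng(42)
X_train = rng.uniform(0, 1, size=(30, 2))       # 30 "simulation runs"
y_train = expensive_simulation(X_train)

kernel = ConstantKernel(1.0) * RBF(length_scale=0.2)
surrogate = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
surrogate.fit(X_train, y_train)

# The surrogate answers almost instantly and also reports its own uncertainty,
# which can guide where additional expensive runs would be most informative.
X_query = rng.uniform(0, 1, size=(5, 2))
mean, std = surrogate.predict(X_query, return_std=True)
print(np.c_[mean, std])
```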

  6. Implementation of linguistic models by holographic technique

    Science.gov (United States)

    Pavlov, Alexander V.; Shevchenko, Yanina Y.

    2004-01-01

    In this paper we consider a linguistic model as an algebraic model and restrict our consideration to the semantics only. The concept allows a "natural-like" language to be used by a human teacher to describe for the machine the way of solving a problem, based on the human's knowledge and experience. Imprecise words such as "big", "very big", "not very big", etc. can be used to represent the human's knowledge. Technically, the problem is to match the metric scale, used by the technical device, with the linguistic scale, intuitively formed by the person. We develop an algebraic description of a 4-f Fourier-holography setup by using a triangular-norms based approach. In the model we use the Fourier duality of the t-norms and t-conorms, which is implemented by the 4-f Fourier-holography setup. We demonstrate that the setup is described adequately by De Morgan's law for involution. Fourier duality of the t-norms and t-conorms leads to fuzzy-valued logic. We consider the General Modus Ponens rule implementation to define the semantical operators, which are adequate to the setup. We consider scales formed in both +1 and -1 orders of diffraction. We use representation of linguistic labels by fuzzy numbers to form the scale and discuss the dependence of the scale grading on the holographic recording medium operator. To implement reasoning with a multi-parametric input variable we use the Lorentz function to approximate linguistic labels. We use an example of medical diagnostics for experimental illustration of reasoning on the linguistic scale.
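
    A small numeric sketch of the triangular-norm algebra mentioned above, using the product t-norm, its dual t-conorm and the standard negation; it simply verifies the De Morgan duality for involution that the holographic setup is said to implement. The membership values are arbitrary examples.

```python
import numpy as np

def t_norm(a, b):          # product t-norm (fuzzy AND)
    return a * b

def t_conorm(a, b):        # dual t-conorm, the probabilistic sum (fuzzy OR)
    return a + b - a * b

def negation(a):           # standard involutive negation
    return 1.0 - a

a = np.array([0.2, 0.7, 0.9])
b = np.array([0.5, 0.4, 0.8])

# De Morgan's law for involution: N(T(a, b)) == S(N(a), N(b))
lhs = negation(t_norm(a, b))
rhs = t_conorm(negation(a), negation(b))
print(np.allclose(lhs, rhs))   # True
```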

  7. A Comparative Study of Three Vibration Based Damage Assessment Techniques

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Rytter, A.

    Three different vibration based damage assessment techniques have been compared. One of the techniques uses the ratios between changes in experimentally and theoretically estimated natural frequencies, respectively, to locate a damage. The second technique relies on updating of an FEM based on experimentally estimated natural frequencies, where the stiffness matrix is given as a function of damage size and location. The last technique is based on neural networks trained with the relative changes in natural frequencies. It has been found that all techniques seem to be useful. Especially, the neural networks based technique seems to be very promising.

  8. Estimation of strong ground motion in broad-frequency band based on a seismic source scaling model and an empirical Green's function technique

    Directory of Open Access Journals (Sweden)

    K. Kamae

    1994-06-01

    Full Text Available We introduce a generalized method for simulating strong ground motion from large earthquakes by summing subevent records to follow the ω-squared law. The original idea of the method is based on a constant stress parameter between the target event and the subevent. It is applicable to a case where both events have a different stress drop after some manipulation. However, the simulation of a very large earthquake from a small event with this method inevitably has some deficiencies of spectral amplitudes in the intermediate frequency range, deviating from the ω-squared model, although the high and low frequency motions match the scaling. We improve the simulation algorithm so as not to produce spectral sags, introducing a self-similar distribution of subfaults with different sizes in the fault plane, the so-called fractal composite faulting model. We show successful simulations for intermediate-sized earthquakes (MJMA = 5.0, 6.0 and 6.1, the large aftershocks of the 1983 Akita-Oki earthquake), using the records of smaller aftershocks (MJMA = 3.9 and 5.0) as an empirical Green's function. Further, we attempted to estimate strong ground motion for the 1946 Nankai earthquake with Mw 8.2, using the records of a MJMA 5.1 earthquake occurring near the source region of the mainshock. We found that strong ground motions simulated for the fractal composite faulting model, with two asperities radiating significantly high frequency motions, matched well the observed data such as the near-field displacement record, the source spectrum estimated from the teleseismic record, and the seismic intensity distribution during the 1946 Nankai earthquake.

  9. Knowledge-based iterative model reconstruction technique in computed tomography of lumbar spine lowers radiation dose and improves tissue differentiation for patients with lower back pain.

    Science.gov (United States)

    Yang, Cheng Hui; Wu, Tung-Hsin; Lin, Chung-Jung; Chiou, Yi-You; Chen, Ying-Chou; Sheu, Ming-Huei; Guo, Wan-Yuo; Chiu, Chen Fen

    2016-10-01

    To evaluate the image quality and diagnostic confidence of reduced-dose computed tomography (CT) of the lumbar spine (L-spine) reconstructed with knowledge-based iterative model reconstruction (IMR). Prospectively, group A consisted of 55 patients imaged with standard acquisition reconstructed with filtered back-projection. Group B consisted of 58 patients imaged with half tube current, reconstructed with hybrid iterative reconstruction (iDose(4)) in Group B1 and knowledge-based IMR in Group B2. Signal-to-noise ratio (SNR) of different regions, the contrast-to-noise ratio between the intervertebral disc (IVD) and dural sac (D-D CNR), and subjective image quality of different regions were compared. Higher strength IMR was also compared in spinal stenosis cases. The SNR of the psoas muscle and D-D CNR were significantly higher in the IMR group. Except for the facet joint, subjective image quality of other regions including IVD, intervertebral foramen (IVF), dural sac, peridural fat, ligamentum flavum, and overall diagnostic acceptability were best for the IMR group. Diagnostic confidence of narrowing IVF and IVD was good (kappa=0.58-0.85). Higher strength IMR delineated IVD better in spinal stenosis cases. Lower dose CT of L-spine reconstructed with IMR demonstrates better tissue differentiation than iDose(4) and standard dose CT with FBP. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Knowledge-based iterative model reconstruction technique in computed tomography of lumbar spine lowers radiation dose and improves tissue differentiation for patients with lower back pain

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Cheng Hui [Department of Medical Imaging, Pojen General Hopsital, Taipei, Taiwan (China); School of Medicine, National Yang-Ming University, Taipei, Taiwan (China); Wu, Tung-Hsin [Department of Biomedical Imaging and Radiological Sciences, National Yang-Ming University, Taipei, Taiwan (China); Lin, Chung-Jung, E-mail: bcjlin@me.com [School of Medicine, National Yang-Ming University, Taipei, Taiwan (China); Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan (China); Chiou, Yi-You; Chen, Ying-Chou; Sheu, Ming-Huei; Guo, Wan-Yuo; Chiu, Chen Fen [School of Medicine, National Yang-Ming University, Taipei, Taiwan (China); Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan (China)

    2016-10-15

    Highlights: • Knowledge-based IMR improves tissue differentiation in CT of L-spine better than hybrid IR (iDose(4)). • Higher strength IMR improves image quality of the IVD and IVF in spinal stenosis. • IMR provides diagnostic lower dose CT of L-spine. - Abstract: Purpose: To evaluate the image quality and diagnostic confidence of reduced-dose computed tomography (CT) of the lumbar spine (L-spine) reconstructed with knowledge-based iterative model reconstruction (IMR). Materials and methods: Prospectively, group A consisted of 55 patients imaged with standard acquisition reconstructed with filtered back-projection. Group B consisted of 58 patients imaged with half tube current, reconstructed with hybrid iterative reconstruction (iDose(4)) in Group B1 and knowledge-based IMR in Group B2. Signal-to-noise ratio (SNR) of different regions, the contrast-to-noise ratio between the intervertebral disc (IVD) and dural sac (D-D CNR), and subjective image quality of different regions were compared. Higher strength IMR was also compared in spinal stenosis cases. Results: The SNR of the psoas muscle and D-D CNR were significantly higher in the IMR group. Except for the facet joint, subjective image quality of other regions including IVD, intervertebral foramen (IVF), dural sac, peridural fat, ligamentum flavum, and overall diagnostic acceptability were best for the IMR group. Diagnostic confidence of narrowing IVF and IVD was good (kappa = 0.58–0.85). Higher strength IMR delineated IVD better in spinal stenosis cases. Conclusion: Lower dose CT of L-spine reconstructed with IMR demonstrates better tissue differentiation than iDose(4) and standard dose CT with FBP.

  11. A voxel-based technique to estimate volume and volumetric error of terrestrial photogrammetry-derived digital terrain models (DTM) of topographic depressions

    Science.gov (United States)

    Székely, Balázs; Raveloson, Andrea; Rasztovits, Sascha; Molnár, Gábor; Dorninger, Peter

    2013-04-01

    It is a common task in geoscience to determine the volume of a topographic depression (e.g., a valley, a crater, a gully, etc.) based on a digital terrain model (DTM). In the case of DTMs based on laser scanned data this task can be fulfilled with relatively high accuracy. However, if the DTM is generated using terrestrial photogrammetric methods, the limitations of the technology often make the models geodetically inaccurate or biased at forested or poorly visible areas, or if the landform has an ill-posed geometry (e.g. it is elongated). In these cases the inaccuracies may hamper the generation of a proper DTM. On the other hand, if we are interested rather in determining the volume of the feature with a certain accuracy, or we intend to carry out an order-of-magnitude volumetric estimation, a DTM having larger inaccuracies is tolerable. In this case the volume calculation can still be done by setting realistic assumptions about the errors of the DTM. In our approach two DTMs are generated to create top and bottom envelope surfaces that confine the "true" but unknown DTM. The varying accuracy of the photogrammetric DTM is considered via the varying deviation of these two surfaces: at problematic corners of the feature the deviation of the two surfaces will be larger, whereas at well-renderable domains the deviation of the surfaces remains minimal. Since such topographic depressions may have a complicated geometry, the error-prone areas may complicate the geometry of the aforementioned envelopes even more, and the proper calculation of the volume may turn out to be difficult. To reduce this difficulty, a voxel-based approach is used. The volumetric error is calculated based on the gridded envelopes using an appropriate voxel resolution. The method is applied to gully features termed lavakas existing in large numbers in Madagascar. These landforms are typically characterised by a complex shape, steep walls, they are often elongated, and have internal crests. All these
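
    A simplified sketch of the voxel-based volume estimate described above: two gridded envelope surfaces bound the unknown terrain, and counting voxels between each envelope and a reference surface gives lower and upper bounds on the depression volume. The synthetic surfaces, grid spacing and voxel height are illustrative assumptions.

```python
import numpy as np

cell = 0.5                     # horizontal grid spacing in metres
dz = 0.1                       # voxel height in metres
x, y = np.meshgrid(np.arange(0, 20, cell), np.arange(0, 10, cell))

rim = np.zeros_like(x)                                   # reference (original) surface
depth = 4.0 * np.exp(-((x - 10) ** 2 / 20 + (y - 5) ** 2 / 5))
bottom_env = rim - depth                                 # deepest plausible surface
top_env = rim - 0.8 * depth                              # shallowest plausible surface

def volume_below(surface, reference, cell, dz):
    """Voxel count between a surface and the reference, converted to volume."""
    n_voxels = np.floor((reference - surface).clip(min=0) / dz).sum()
    return n_voxels * cell * cell * dz

v_max = volume_below(bottom_env, rim, cell, dz)          # upper bound on depression volume
v_min = volume_below(top_env, rim, cell, dz)             # lower bound
print(f"volume between {v_min:.1f} and {v_max:.1f} m^3, "
      f"spread {v_max - v_min:.1f} m^3")
```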

  12. DCT-based cyber defense techniques

    Science.gov (United States)

    Amsalem, Yaron; Puzanov, Anton; Bedinerman, Anton; Kutcher, Maxim; Hadar, Ofer

    2015-09-01

    With the increasing popularity of video streaming services and multimedia sharing via social networks, there is a need to protect the multimedia from malicious use. An attacker may use steganography and watermarking techniques to embed malicious content, in order to attack the end user. Most of the attack algorithms are robust to basic image processing techniques such as filtering, compression, noise addition, etc. Hence, in this article two novel, real-time, defense techniques are proposed: smart threshold and anomaly correction. Both techniques operate in the DCT domain, and are applicable to JPEG images and H.264 I-frames. The defense performance was evaluated against a highly robust attack, and the perceptual quality degradation was measured by the well-known PSNR and SSIM quality assessment metrics. A set of defense techniques is suggested for improving the defense efficiency. For the most aggressive attack configuration, the combination of all the defense techniques results in 80% protection against cyber-attacks with a PSNR of 25.74 dB.
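
    An illustrative sketch of DCT-domain processing of a single 8x8 image block, the kind of operation on which the defenses above operate; it is not the authors' smart-threshold algorithm. The block values and the coefficient threshold are arbitrary.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(7)
block = rng.integers(0, 256, size=(8, 8)).astype(float)

coeffs = dctn(block, norm="ortho")         # forward 2-D DCT (as used for JPEG blocks)

# Zero out small high-frequency coefficients, a crude stand-in for removing
# subtle embedded perturbations while keeping perceptual quality.
mask = np.abs(coeffs) >= 10.0
mask[0, 0] = True                           # always keep the DC coefficient
cleaned = idctn(coeffs * mask, norm="ortho")

print(np.round(cleaned - block, 1))         # residual introduced by the defense
```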

  13. Nasal base narrowing: the combined alar base excision technique.

    Science.gov (United States)

    Foda, Hossam M T

    2007-01-01

    To evaluate the role of the combined alar base excision technique in narrowing the nasal base and correcting excessive alar flare. The study included 60 cases presenting with a wide nasal base and excessive alar flaring. The surgical procedure combined an external alar wedge resection with an internal vestibular floor excision. All cases were followed up for a mean of 32 (range, 12-144) months. Nasal tip modification and correction of any preexisting caudal septal deformities were always completed before the nasal base narrowing. The mean width of the external alar wedge excised was 7.2 (range, 4-11) mm, whereas the mean width of the sill excision was 3.1 (range, 2-7) mm. Completing the internal excision first resulted in a more conservative external resection, thus avoiding any blunting of the alar-facial crease. No cases of postoperative bleeding, infection, or keloid formation were encountered, and the external alar wedge excision healed with an inconspicuous scar that was well hidden in the depth of the alar-facial crease. Finally, the risk of notching of the alar rim, which can occur at the junction of the external and internal excisions, was significantly reduced by adopting a 2-layered closure of the vestibular floor (P = .01). The combined alar base excision resulted in effective narrowing of the nasal base with elimination of excessive alar flare. Commonly feared complications, such as blunting of the alar-facial crease or notching of the alar rim, were avoided by using simple modifications in the technique of excision and closure.

  14. Combining agent based-models and virtual screening techniques to predict the best citrus-derived vaccine adjuvants against human papilloma virus.

    Science.gov (United States)

    Pennisi, Marzio; Russo, Giulia; Ravalli, Silvia; Pappalardo, Francesco

    2017-12-28

    Human papillomavirus infection is a global social burden that, every year, leads to thousands of new diagnoses of cancer. The introduction of an immunization protocol, with the Gardasil and Cervarix vaccines, has radically changed the way this infection spreads among people. Even though vaccination is only preventive and not therapeutic, it is a strong tool capable of avoiding the consequences that this pathogen could cause. The Gardasil vaccine is not free from side effects and the duration of immunity is not always well determined. This work aims to enhance the effects of the vaccination by using a new class of adjuvants and a different administration protocol. Due to their minimal side effects, their easy extraction, their low production costs and their proven immune stimulating activity, citrus-derived molecules are valid candidates to be administered as adjuvants in a vaccine formulation against HPV. With the aim of obtaining a stronger immune response against HPV infection, we built an in silico model that delivers a way to predict the best adjuvants and the optimal means of administration to reach such a goal. Simulations predicted that the use of Neohesperidin elicited a strong immune response, which was then validated in vivo. We built a computational infrastructure made of a virtual screening approach able to preselect promising citrus-derived compounds, and of an agent-based model that reproduces HPV dynamics subject to vaccine stimulation. This integrated methodology was able to predict the best protocol that confers a very good immune response against HPV infection. We finally tested the in silico results through in vivo experiments on mice, finding good agreement.

  15. An experimental comparison of modelling techniques for speaker ...

    Indian Academy of Sciences (India)

    Most of the existing modelling techniques for the speaker recognition task make an implicit assumption of sufficient data for speaker modelling and hence may lead to poor modelling under limited data condition. The present work gives an experimental evaluation of the modelling techniques like Crisp Vector Quantization ...

  16. An experimental comparison of modelling techniques for speaker ...

    Indian Academy of Sciences (India)

    Abstract. Most of the existing modelling techniques for the speaker recognition task make an implicit assumption of sufficient data for speaker modelling and hence may lead to poor modelling under limited data condition. The present work gives an experimental evaluation of the modelling techniques like Crisp.

  17. Spatio-temporal patterns and source apportionment of pollution in Qiantang River (China) using neural-based modeling and multivariate statistical techniques

    Science.gov (United States)

    Su, Shiliang; Zhi, Junjun; Lou, Liping; Huang, Fang; Chen, Xia; Wu, Jiaping

    Characterizing the spatio-temporal patterns and apportioning the pollution sources of water bodies are important for the management and protection of water resources. The main objective of this study is to describe the dynamics of water quality and provide references for improving river pollution control practices. Comprehensive application of neural-based modeling and different multivariate methods was used to evaluate the spatio-temporal patterns and source apportionment of pollution in Qiantang River, China. Measurement data were obtained and pretreated for 13 variables from 41 monitoring sites for the period of 2001-2004. A self-organizing map classified the 41 monitoring sites into three groups (Group A, B and C), representing different pollution characteristics. Four significant parameters (dissolved oxygen, biochemical oxygen demand, total phosphorus and total lead) were identified by discriminant analysis for distinguishing variations of different years, with about 80% correct assignment for temporal variation. Rotated principal component analysis (PCA) identified four potential pollution sources for Group A (domestic sewage and agricultural pollution, industrial wastewater pollution, mineral weathering, vehicle exhaust and sand mining), five for Group B (heavy metal pollution, agricultural runoff, vehicle exhaust and sand mining, mineral weathering, chemical plants discharge) and another five for Group C (vehicle exhaust and sand mining, chemical plants discharge, soil weathering, biochemical pollution, mineral weathering). The identified potential pollution sources explained 75.6% of the total variances for Group A, 75.0% for Group B and 80.0% for Group C, respectively. Receptor-based source apportionment was applied to further estimate source contributions for each pollution variable in the three groups, which facilitated and supported the PCA results. These results could assist managers to develop optimal strategies and determine priorities for river
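
    A rough sketch of the multivariate part of the workflow above: standardise the water-quality variables, extract principal components, and rotate the loadings (varimax) to ease source interpretation. The variable names and data are synthetic placeholders, and the rotation is a generic implementation rather than the exact procedure of the study.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
variables = ["DO", "BOD", "TP", "Pb", "NH3", "Cl"]      # assumed example variables
X = rng.normal(size=(164, len(variables)))              # synthetic monitoring records

Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=4).fit(Z)
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)

def varimax(L, n_iter=100):
    """Simple varimax rotation of a loading matrix."""
    p, k = L.shape
    R = np.eye(k)
    for _ in range(n_iter):
        LR = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (LR ** 3 - LR @ np.diag(np.sum(LR ** 2, axis=0)) / p))
        R = u @ vt
    return L @ R

rotated = varimax(loadings)
print(np.round(rotated, 2))                              # rotated loadings per variable
print("variance explained:", np.round(pca.explained_variance_ratio_.sum(), 2))
```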

  18. Power system dynamic state estimation using prediction based evolutionary technique

    International Nuclear Information System (INIS)

    Basetti, Vedik; Chandel, Ashwani K.; Chandel, Rajeevan

    2016-01-01

    In this paper, a new robust LWS (least winsorized square) estimator is proposed for dynamic state estimation of a power system. One of the main advantages of this estimator is that it has an inbuilt bad data rejection property and is less sensitive to bad data measurements. In the proposed approach, Brown's double exponential smoothing technique has been utilised for its reliable performance at the prediction step. The state estimation problem is solved as an optimisation problem using a new jDE-self adaptive differential evolution with prediction based population re-initialisation technique at the filtering step. This new stochastic search technique has been embedded with different state scenarios using the predicted state. The effectiveness of the proposed LWS technique is validated under different conditions, namely normal operation, bad data, sudden load change, and loss of transmission line conditions on three different IEEE test bus systems. The performance of the proposed approach is compared with the conventional extended Kalman filter. On the basis of various performance indices, the results thus obtained show that the proposed technique increases the accuracy and robustness of power system dynamic state estimation performance. - Highlights: • To estimate the states of the power system under dynamic environment. • The performance of the EKF method is degraded during anomaly conditions. • The proposed method remains robust towards anomalies. • The proposed method provides precise state estimates even in the presence of anomalies. • The results show that prediction accuracy is enhanced by using the proposed model.
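
    A minimal sketch of Brown's double exponential smoothing, the prediction step mentioned above, producing one-step-ahead forecasts of a state variable. The smoothing constant and the synthetic state trajectory are illustrative; the LWS filtering and jDE optimisation steps of the paper are not shown.

```python
import numpy as np

def brown_forecast(series, alpha=0.3):
    """One-step-ahead forecasts from Brown's double exponential smoothing."""
    s1 = s2 = series[0]
    forecasts = [series[0]]
    for x in series[:-1]:
        s1 = alpha * x + (1 - alpha) * s1          # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2         # second smoothing
        a = 2 * s1 - s2                            # level estimate
        b = alpha / (1 - alpha) * (s1 - s2)        # trend estimate
        forecasts.append(a + b)                    # forecast for the next step
    return np.array(forecasts)

rng = np.random.default_rng(5)
state = 1.0 + 0.01 * np.arange(100) + 0.02 * rng.normal(size=100)  # e.g. a slowly drifting state
pred = brown_forecast(state)
print("mean absolute prediction error:", np.mean(np.abs(pred - state)))
```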

  19. Fuzzy techniques for subjective workload-score modeling under uncertainties.

    Science.gov (United States)

    Kumar, Mohit; Arndt, Dagmar; Kreuzfeld, Steffi; Thurow, Kerstin; Stoll, Norbert; Stoll, Regina

    2008-12-01

    This paper deals with the development of a computer model to estimate the subjective workload score of individuals by evaluating their heart-rate (HR) signals. The identification of a model to estimate the subjective workload score of individuals under different workload situations is too ambitious a task because different individuals (due to different body conditions, emotional states, age, gender, etc.) show different physiological responses (assessed by evaluating the HR signal) under different workload situations. This is equivalent to saying that the mathematical mappings between physiological parameters and the workload score are uncertain. Our approach to deal with the uncertainties in a workload-modeling problem consists of the following steps: 1) the uncertainties arising due to the individual variations in identifying a common model valid for all the individuals are filtered out using a fuzzy filter; 2) stochastic modeling of the uncertainties (provided by the fuzzy filter) uses finite-mixture models and utilizes this information regarding uncertainties for identifying the structure and initial parameters of a workload model; and 3) finally, the workload model parameters for an individual are identified in an online scenario using machine learning algorithms. The contribution of this paper is to propose, with a mathematical analysis, a fuzzy-based modeling technique that first filters out the uncertainties from the modeling problem, analyzes the uncertainties statistically using finite-mixture modeling, and, finally, utilizes the information about uncertainties for adapting the workload model to an individual's physiological conditions. The approach of this paper, demonstrated with the real-world medical data of 11 subjects, provides a fuzzy-based tool useful for modeling in the presence of uncertainties.
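
    A short sketch of step 2 above: the filtered uncertainties (here, synthetic residuals between individual responses and a common model) are described by a finite Gaussian mixture. The component count and the residual data are assumed for the demonstration.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(11)
# synthetic "uncertainty" residuals, as if left over after the fuzzy filtering step
residuals = np.concatenate([rng.normal(-0.5, 0.2, 300),
                            rng.normal(0.8, 0.4, 200)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(residuals)
print("component means  :", gmm.means_.ravel())
print("component weights:", gmm.weights_)
```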

  20. Electromagnetic interference modeling and suppression techniques in variable-frequency drive systems

    Science.gov (United States)

    Yang, Le; Wang, Shuo; Feng, Jianghua

    2017-11-01

    Electromagnetic interference (EMI) causes electromechanical damage to the motors and degrades the reliability of variable-frequency drive (VFD) systems. Unlike fundamental frequency components in motor drive systems, high-frequency EMI noise, coupled with the parasitic parameters of the whole system, is difficult to analyze and reduce. In this article, EMI modeling techniques for different function units in a VFD system, including induction motors, motor bearings, and rectifier-inverters, are reviewed and evaluated in terms of applied frequency range, model parameterization, and model accuracy. The EMI models for the motors are categorized based on modeling techniques and model topologies. Motor bearing and shaft models are also reviewed, and techniques that are used to eliminate bearing current are evaluated. Modeling techniques for conventional rectifier-inverter systems are also summarized. EMI noise suppression techniques, including passive filters, Wheatstone bridge balance, active filters, and optimized modulation, are reviewed and compared based on the VFD system models.

  1. OFF-LINE HANDWRITING RECOGNITION USING VARIOUS HYBRID MODELING TECHNIQUES AND CHARACTER N-GRAMS

    NARCIS (Netherlands)

    Brakensiek, A.; Rottland, J.; Kosmala, A.; Rigoll, G.

    2004-01-01

    In this paper a system for on-line cursive handwriting recognition is described. The system is based on Hidden Markov Models (HMMs) using discrete and hybrid modeling techniques. Here, we focus on two aspects of the recognition system. First, we present different hybrid modeling techniques, whereas

  2. Enhancing photogrammetric 3d city models with procedural modeling techniques for urban planning support

    International Nuclear Information System (INIS)

    Schubiger-Banz, S; Arisona, S M; Zhong, C

    2014-01-01

    This paper presents a workflow to increase the level of detail of reality-based 3D urban models. It combines the established workflows from photogrammetry and procedural modeling in order to exploit distinct advantages of both approaches. The combination has advantages over purely automatic acquisition in terms of visual quality, accuracy and model semantics. Compared to manual modeling, procedural techniques can be much more time effective while maintaining the qualitative properties of the modeled environment. In addition, our method includes processes for procedurally adding additional features such as road and rail networks. The resulting models meet the increasing needs in urban environments for planning, inventory, and analysis

  3. Respirometry techniques and activated sludge models

    NARCIS (Netherlands)

    Benes, O.; Spanjers, H.; Holba, M.

    2002-01-01

    This paper aims to explain results of respirometry experiments using Activated Sludge Model No. 1. In cases of insufficient fit of ASM No. 1, further modifications to the model were carried out and the so-called "Enzymatic model" was developed. The best-fit method was used to determine the effect of

  4. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    de Haan, G.; de Haan, G.; van der Veer, Gerrit C.; van Vliet, J.C.

    1991-01-01

    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in

  5. A DIFFERENT WEB-BASED GEOCODING SERVICE USING FUZZY TECHNIQUES

    Directory of Open Access Journals (Sweden)

    P. Pahlavani

    2015-12-01

    Full Text Available Geocoding – the process of finding a position based on descriptive data such as an address or postal code – is considered one of the most commonly used spatial analyses. Many online map providers such as Google Maps, Bing Maps and Yahoo Maps offer geocoding as one of their basic capabilities. Despite the diversity of geocoding services, users usually face some limitations when they use available online geocoding services. In existing geocoding services, the concepts of proximity and nearness are not modelled appropriately, and these services search only by matching the descriptive address data. In addition, there are also some limitations in displaying search results. Resolving these limitations can enhance the efficiency of existing geocoding services. This paper proposes the idea of integrating fuzzy techniques with the geocoding process to resolve these limitations. In order to implement the proposed method, a web-based system is designed. In the proposed method, nearness to places is defined by fuzzy membership functions and multiple fuzzy distance maps are created. These fuzzy distance maps are then integrated using a fuzzy overlay technique to obtain the results. The proposed method provides different capabilities for users, such as the ability to search multi-part addresses, searching for places based on their location, non-point representation of results, as well as displaying search results based on their priority.
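
    A conceptual sketch of the fuzzy part of the idea above: "nearness" to each reference place is a fuzzy membership that decays with distance, and a fuzzy overlay (here the min operator) combines several distance maps into one suitability surface. Grid size, decay constants and place coordinates are invented for the example.

```python
import numpy as np

x, y = np.meshgrid(np.linspace(0, 10, 200), np.linspace(0, 10, 200))
places = {"school": (2.0, 3.0), "park": (7.0, 6.0)}     # hypothetical reference places

def nearness(px, py, scale=2.0):
    """Fuzzy membership of 'near the place', decaying with Euclidean distance."""
    d = np.hypot(x - px, y - py)
    return np.exp(-(d / scale) ** 2)

maps = [nearness(*coord) for coord in places.values()]
combined = np.minimum.reduce(maps)       # fuzzy overlay: near the school AND near the park

best = np.unravel_index(np.argmax(combined), combined.shape)
print("best cell (row, col):", best, "membership:", round(float(combined[best]), 3))
```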

  6. Liquid Tunable Microlenses based on MEMS techniques

    Science.gov (United States)

    Zeng, Xuefeng; Jiang, Hongrui

    2013-01-01

    The recent rapid development in microlens technology has provided many opportunities for miniaturized optical systems, and has found a wide range of applications. Of these microlenses, tunable-focus microlenses are of special interest as their focal lengths can be tuned using micro-scale actuators integrated with the lens structure. Realization of such tunable microlenses generally relies on microelectromechanical system (MEMS) technologies. Here, we review the recent progress in tunable liquid microlenses. The underlying physics relevant to these microlenses is first discussed, followed by descriptions of three main categories of tunable microlenses involving MEMS techniques: mechanically driven, electrically driven, and those integrated within microfluidic systems. PMID:24163480

  7. UAV State Estimation Modeling Techniques in AHRS

    Science.gov (United States)

    Razali, Shikin; Zhahir, Amzari

    2017-11-01

    An autonomous unmanned aerial vehicle (UAV) system depends on state estimation feedback to control flight operation. Estimating the correct state improves navigation accuracy and helps achieve the flight mission safely. One of the sensor configurations used for UAV state estimation is the Attitude and Heading Reference System (AHRS) with application of an Extended Kalman Filter (EKF) or a feedback controller. The results of these two different techniques in estimating UAV states in the AHRS configuration are displayed through position and attitude graphs.
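
    A highly simplified sketch of the predict/update cycle on which an AHRS state estimator with an EKF builds, reduced to a linear model with a single attitude angle and rate. The noise settings and the simulated measurements are illustrative; a real AHRS EKF carries a larger, nonlinear state driven by gyro, accelerometer and magnetometer inputs.

```python
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition: angle, angular rate
H = np.array([[1.0, 0.0]])                 # only the angle is measured
Q = np.diag([1e-5, 1e-3])                  # process noise covariance (assumed)
R = np.array([[0.05]])                     # measurement noise covariance (assumed)

rng = np.random.default_rng(2)
true_angle = np.cumsum(0.2 * dt * np.ones(500))          # slow constant-rate motion
meas = true_angle + rng.normal(0, 0.2, 500)              # noisy attitude measurements

x = np.array([0.0, 0.0])
P = np.eye(2)
estimates = []
for z in meas:
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P
    estimates.append(x[0])

print("final angle estimate:", round(estimates[-1], 3),
      "true:", round(true_angle[-1], 3))
```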

  8. Cooperative cognitive radio networking system model, enabling techniques, and performance

    CERN Document Server

    Cao, Bin; Mark, Jon W

    2016-01-01

    This SpringerBrief examines the active cooperation between users of Cooperative Cognitive Radio Networking (CCRN), exploring the system model, enabling techniques, and performance. The brief provides a systematic study on active cooperation between primary users and secondary users, i.e., (CCRN), followed by the discussions on research issues and challenges in designing spectrum-energy efficient CCRN. As an effort to shed light on the design of spectrum-energy efficient CCRN, they model the CCRN based on orthogonal modulation and orthogonally dual-polarized antenna (ODPA). The resource allocation issues are detailed with respect to both models, in terms of problem formulation, solution approach, and numerical results. Finally, the optimal communication strategies for both primary and secondary users to achieve spectrum-energy efficient CCRN are analyzed.

  9. Advanced applications of numerical modelling techniques for clay extruder design

    Science.gov (United States)

    Kandasamy, Saravanakumar

    Ceramic materials play a vital role in our day to day life. Recent advances in research, manufacture and processing techniques and production methodologies have broadened the scope of ceramic products such as bricks, pipes and tiles, especially in the construction industry. These are mainly manufactured using an extrusion process in auger extruders. During their long history of application in the ceramic industry, most of the design developments of extruder systems have resulted from expensive laboratory-based experimental work and field-based trial and error runs. In spite of these design developments, the auger extruders continue to be energy intensive devices with high operating costs. Limited understanding of the physical processes involved and the cost and time requirements of lab-based experiments were found to be the major obstacles in the further development of auger extruders. An attempt has been made herein to use Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) based numerical modelling techniques to reduce the costs and time associated with research into design improvement by experimental trials. These two techniques, although used widely in other engineering applications, have rarely been applied for auger extruder development. This had been due to a number of reasons including technical limitations of the CFD tools previously available. Modern CFD and FEA software packages have much enhanced capabilities and allow the modelling of the flow of complex fluids such as clay. This research work presents a methodology using a Herschel-Bulkley fluid flow based CFD model to simulate and assess the flow of the clay-water mixture through the extruder and the die of a vacuum de-airing type clay extrusion unit used in ceramic extrusion. The extruder design and the operating parameters were varied to study their influence on the power consumption and the extrusion pressure. The model results were then validated using results from
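
    A small sketch of the Herschel-Bulkley constitutive relation underlying the CFD model mentioned above, tau = tau0 + K * (shear rate)^n, from which an apparent viscosity can be computed. The parameter values for the clay-water mixture are assumed for illustration, not taken from the thesis.

```python
import numpy as np

tau0, K, n = 20.0e3, 5.0e3, 0.4           # yield stress [Pa], consistency index, flow index (assumed)

def apparent_viscosity(shear_rate):
    """Apparent viscosity mu_app = tau / shear_rate for a Herschel-Bulkley fluid."""
    shear_rate = np.asarray(shear_rate, dtype=float)
    tau = tau0 + K * shear_rate ** n
    return tau / shear_rate

for rate in (0.1, 1.0, 10.0, 100.0):       # example extruder shear rates [1/s]
    print(f"shear rate {rate:7.1f} 1/s -> mu_app {float(apparent_viscosity(rate)):10.1f} Pa.s")
```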

  10. New techniques and models for assessing ischemic heart disease risks

    Directory of Open Access Journals (Sweden)

    I.N. Yakovina

    2017-09-01

    Full Text Available The paper focuses on the tasks of creating and implementing a new technique aimed at assessing ischemic heart disease risk. The technique is based on a laboratory-diagnostic complex which includes oxidative, lipid-lipoprotein, inflammatory and metabolic biochemical parameters; a system of logic-mathematical models used for obtaining numeric risk assessments; and a program module which allows calculating and analyzing the results. We justified our models in the course of our research, which included 172 patients suffering from ischemic heart disease (IHD) combined with coronary atherosclerosis verified by coronary arteriography, and 167 patients who didn't have ischemic heart disease. Our research program included demographic and social data, questioning on tobacco and alcohol addiction, questioning about dietary habits, chronic diseases case history and medications intake, cardiologic questioning as per Rose, anthropometry, blood pressure measured three times, spirometry, and electrocardiogram recording with decoding as per the Minnesota code. We detected biochemical parameters of each patient and adjusted our task of creating techniques and models for assessing ischemic heart disease risks on the basis of inflammatory, oxidative, and lipid biological markers. We created a system of logic and mathematical models which is a universal scheme for laboratory parameters processing allowing for dissimilar data specificity. The system of models is universal, but the diagnostic approach to the applied biochemical parameters is specific. The created program module (calculator) helps a physician to obtain a result on the basis of laboratory research data; the result characterizes numeric risks of coronary atherosclerosis and ischemic heart disease for a patient. It also allows obtaining a visual image of a system of parameters and their deviation from a conditional «standard – pathology» boundary. The complex is implemented into practice by the Scientific

  11. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  12. Materials and techniques for model construction

    Science.gov (United States)

    Wigley, D. A.

    1985-01-01

    The problems confronting the designer of models for cryogenic wind tunnel models are discussed with particular reference to the difficulties in obtaining appropriate data on the mechanical and physical properties of candidate materials and their fabrication technologies. The relationship between strength and toughness of alloys is discussed in the context of maximizing both and avoiding the problem of dimensional and microstructural instability. All major classes of materials used in model construction are considered in some detail and in the Appendix selected numerical data is given for the most relevant materials. The stepped-specimen program to investigate stress-induced dimensional changes in alloys is discussed in detail together with interpretation of the initial results. The methods used to bond model components are considered with particular reference to the selection of filler alloys and temperature cycles to avoid microstructural degradation and loss of mechanical properties.

  13. Noise Evaluation Technique Based on Surface Pressure

    DEFF Research Database (Denmark)

    Fischer, Andreas

    2012-01-01

    In this chapter the relevant theory for the understanding of trailing edge (TE) noise modeling is collected. It contains the acoustic formulations of [31] and [57]. Both give a relation for the far-field sound pressure as a function of the frequency-wavenumber spectral density of the pressure on the airfoil

  14. Dynamics Change of Honghu Lake's Water Surface Area and Its Driving Force Analysis Based on Remote Sensing Technique and TOPMODEL model

    International Nuclear Information System (INIS)

    Wen, X; Cao, B; Shen, S; Hu, D; Tang, X

    2014-01-01

    Honghu Lake is the largest freshwater lake in the Hubei Province of China. This paper introduces a remote sensing approach to monitor the dynamics of the lake's water surface area over the last 40 years by using multi-temporal remote sensing imagery including Landsat and HJ-1. Meanwhile, the daily precipitation and evaporation data provided by the Honghu meteorological station since the 1970s were also collected and used to analyze the influence of climate change factors. A typical precipitation situation was selected as an input into the TOPMODEL model to simulate the hydrological process in Honghu Lake. The simulation result was analyzed together with the water surface area extracted from remote sensing imagery. This experiment shows how the amount and timing of precipitation affect changes in the lake, as observed in the remote sensing data, and demonstrates the potential of using the TOPMODEL model to analyze the combined hydrological processes in Honghu Lake.

  15. Standardized fluoroscopy-based technique to measure intraoperative cup anteversion.

    Science.gov (United States)

    Zingg, Matthieu; Boudabbous, Sana; Hannouche, Didier; Montet, Xavier; Boettner, Friedrich

    2017-10-01

    Direct anterior approach (DAA) with the patient lying supine has facilitated the use of intraoperative fluoroscopy and allows for standardized positioning of the patient. The current study presents a new technique to measure acetabular component anteversion using intraoperative fluoroscopy. The current paper describes a mathematical formula to calculate true acetabular component anteversion based on the acetabular component abduction angle and the c-arm tilt angle (CaT). The CaT is determined by tilting the c-arm until an external pelvic oblique radiograph with the equatorial plane of the acetabular component perpendicular to the fluoroscopy receptor is obtained. CaT is determined by direct reading on the C-arm device. The technique was validated using a radiopaque synbone model comparing the described technique to computed tomography anteversion measurement. The experiment was repeated 25 times. The difference in anteversion between the two measuring techniques was on average 0.2° (range -3.0-3.1). The linear regression coefficients evaluating the agreement between the experimental and control methods were 0.99 (95%CI 0.88-1.10, p < 0.001) and 0.33 (95%CI -1.53-2.20, p = 0.713) for the slope and intercept, respectively. The current study confirms that the described three-step c-arm acetabular cup measuring technique can reproducibly and reliably assess acetabular component anteversion in the supine position, as compared to CT-imaging. © 2017 Orthopaedic Research Society. Published by Wiley Periodicals, Inc. J Orthop Res 35:2307-2312, 2017.

  16. Accelerator based techniques for contraband detection

    Science.gov (United States)

    Vourvopoulos, George

    1994-05-01

    It has been shown that narcotics, explosives, and other contraband materials, contain various chemical elements such as H, C, N, O, P, S, and Cl in quantities and ratios that differentiate them from each other and from other innocuous substances. Neutrons and γ-rays have the ability to penetrate through various materials at large depths. They are thus able, in a non-intrusive way, to interrogate volumes ranging from suitcases to Sea-Land containers, and have the ability to image the object with an appreciable degree of reliability. Neutron induced reactions such as (n, γ), (n, n') (n, p) or proton induced γ-resonance absorption are some of the reactions currently investigated for the identification of the chemical elements mentioned above. Various DC and pulsed techniques are discussed and their advantages, characteristics, and current progress are shown. Areas where use of these methods is currently under evaluation are detection of hidden explosives, illicit drug interdiction, chemical war agents identification, nuclear waste assay, nuclear weapons destruction and others.

  17. Advanced techniques for modeling avian nest survival

    Science.gov (United States)

    Dinsmore, S.J.; White, Gary C.; Knopf, F.L.

    2002-01-01

    Estimation of avian nest survival has traditionally involved simple measures of apparent nest survival or Mayfield constant-nest-survival models. However, these methods do not allow researchers to build models that rigorously assess the importance of a wide range of biological factors that affect nest survival. Models that incorporate greater detail, such as temporal variation in nest survival and covariates representative of individual nests represent a substantial improvement over traditional estimation methods. In an attempt to improve nest survival estimation procedures, we introduce the nest survival model now available in the program MARK and demonstrate its use on a nesting study of Mountain Plovers (Charadrius montanus Townsend) in Montana, USA. We modeled the daily survival of Mountain Plover nests as a function of the sex of the incubating adult, nest age, year, linear and quadratic time trends, and two weather covariates (maximum daily temperature and daily precipitation) during a six-year study (1995–2000). We found no evidence for yearly differences or an effect of maximum daily temperature on the daily nest survival of Mountain Plovers. Survival rates of nests tended by female and male plovers differed (female rate = 0.33; male rate = 0.49). The estimate of the additive effect for males on nest survival rate was 0.37 (95% confidence limits were 0.03, 0.71) on a logit scale. Daily survival rates of nests increased with nest age; the estimate of daily nest-age change in survival in the best model was 0.06 (95% confidence limits were 0.04, 0.09) on a logit scale. Daily precipitation decreased the probability that the nest would survive to the next day; the estimate of the additive effect of daily precipitation on the nest survival rate was −1.08 (95% confidence limits were −2.12, −0.13) on a logit scale. Our approach to modeling daily nest-survival rates allowed several biological factors of interest to be easily included in nest survival models
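
    A hedged numeric sketch of how logit-scale quantities like those reported above translate into daily survival probabilities. The baseline logit is invented; only the additive effect sizes echo the magnitudes quoted in the abstract.

```python
import numpy as np

def inv_logit(x):
    """Back-transform a logit-scale value into a probability."""
    return 1.0 / (1.0 + np.exp(-x))

baseline_logit = 3.0                      # assumed female daily-survival logit (hypothetical)
male_effect = 0.37                        # additive male effect quoted in the abstract
rain_effect = -1.08                       # additive effect of daily precipitation

print("female, dry day :", round(inv_logit(baseline_logit), 3))
print("male,   dry day :", round(inv_logit(baseline_logit + male_effect), 3))
print("female, rain day:", round(inv_logit(baseline_logit + rain_effect), 3))
```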

  18. 3D Models for All: Low-Cost Acquisition Through Mobile Devices in Comparison with Image Based Techniques. Potentialities and Weaknesses in Cultural Heritage Domain

    Science.gov (United States)

    Santagati, C.; Lo Turco, M.; Bocconcino, M. M.; Donato, V.; Galizia, M.

    2017-11-01

    Nowadays, 3D digital imaging proposes effective solutions for preserving the expression of human creativity across the centuries, and is a great tool to guarantee global dissemination of knowledge and wide access to these invaluable resources of the past. Nevertheless, in several cases, a massive digitalisation of cultural heritage items (from the archaeological site up to the monument and museum collections) could be unworkable due to the still high costs in terms of equipment and human resources: 3D acquisition technologies and the need of a skilled team within cultural institutions. Therefore, it is necessary to explore new possibilities offered by growing technologies: the lower costs of these technologies as well as their attractive visual quality constitute a challenge for researchers. Besides these possibilities, it is also important to consider how information is spread through graphic representation of knowledge. The focus of this study is to explore the potentialities and weaknesses of a newly released low cost device in the cultural heritage domain, trying to understand its effective usability in museum collections. The aim of the research is to test their usability, critically analysing the final outcomes of this entry level technology in relation to the other better assessed low cost technologies for 3D scanning, such as Structure from Motion (SfM) techniques (also produced by the same device) combined with a dataset generated by a professional digital camera. The final outcomes were compared in terms of quality definition, time processing and file size. The specimens of the collections of the Civic Museum Castello Ursino in Catania have been chosen as the site of experimentation.

  19. 3D MODELS FOR ALL: LOW-COST ACQUISITION THROUGH MOBILE DEVICES IN COMPARISON WITH IMAGE BASED TECHNIQUES. POTENTIALITIES AND WEAKNESSES IN CULTURAL HERITAGE DOMAIN

    Directory of Open Access Journals (Sweden)

    C. Santagati

    2017-11-01

    Full Text Available Nowadays, 3D digital imaging proposes effective solutions for preserving the expression of human creativity across the centuries, and is a great tool to guarantee global dissemination of knowledge and wide access to these invaluable resources of the past. Nevertheless, in several cases, a massive digitalisation of cultural heritage items (from the archaeological site up to the monument and museum collections) could be unworkable due to the still high costs in terms of equipment and human resources: 3D acquisition technologies and the need of a skilled team within cultural institutions. Therefore, it is necessary to explore new possibilities offered by growing technologies: the lower costs of these technologies as well as their attractive visual quality constitute a challenge for researchers. Besides these possibilities, it is also important to consider how information is spread through graphic representation of knowledge. The focus of this study is to explore the potentialities and weaknesses of a newly released low cost device in the cultural heritage domain, trying to understand its effective usability in museum collections. The aim of the research is to test their usability, critically analysing the final outcomes of this entry level technology in relation to the other better assessed low cost technologies for 3D scanning, such as Structure from Motion (SfM) techniques (also produced by the same device) combined with a dataset generated by a professional digital camera. The final outcomes were compared in terms of quality definition, time processing and file size. The specimens of the collections of the Civic Museum Castello Ursino in Catania have been chosen as the site of experimentation.

  20. CT-based Techniques for Brain Perfusion.

    Science.gov (United States)

    Krishnan, Pradeep; Murphy, Amanda; Aviv, Richard I

    2017-06-01

    Recent rapid advances in endovascular treatment for acute ischemic stroke highlight the crucial role of neuroimaging, especially multimodal computed tomography (CT) including CT perfusion, in stroke triage and management decisions. With an increasing focus on changes in cerebral physiology along with time-based metrics in clinical decisions for acute ischemic stroke, CT perfusion provides a rapid and practical modality for assessment and identification of salvageable tissue at risk and infarct core, and provides a better understanding of the changes in cerebral physiology. Although there are challenges with the lack of standardization and accuracy of quantitative assessment, CT perfusion is evolving as a cornerstone for imaging-based strategies in the rapid management of acute ischemic stroke.

  1. Use of advanced modeling techniques to optimize thermal packaging designs.

    Science.gov (United States)

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed

  2. Model measurements for new accelerating techniques

    International Nuclear Information System (INIS)

    Aronson, S.; Haseroth, H.; Knott, J.; Willis, W.

    1988-06-01

    We summarize the work carried out for the past two years, concerning some different ways for achieving high-field gradients, particularly in view of future linear lepton colliders. These studies and measurements on low power models concern the switched power principle and multifrequency excitation of resonant cavities. 15 refs., 12 figs

  3. An Iterative Uncertainty Assessment Technique for Environmental Modeling

    International Nuclear Information System (INIS)

    Engel, David W.; Liebetrau, Albert M.; Jarman, Kenneth D.; Ferryman, Thomas A.; Scheibe, Timothy D.; Didier, Brett T.

    2004-01-01

    The reliability of and confidence in predictions from model simulations are crucial--these predictions can significantly affect risk assessment decisions. For example, the fate of contaminants at the U.S. Department of Energy's Hanford Site has critical impacts on long-term waste management strategies. In the uncertainty estimation efforts for the Hanford Site-Wide Groundwater Modeling program, computational issues severely constrain both the number of uncertain parameters that can be considered and the degree of realism that can be included in the models. Substantial improvements in the overall efficiency of uncertainty analysis are needed to fully explore and quantify significant sources of uncertainty. We have combined state-of-the-art statistical and mathematical techniques in a unique iterative, limited sampling approach to efficiently quantify both local and global prediction uncertainties resulting from model input uncertainties. The approach is designed for application to widely diverse problems across multiple scientific domains. Results are presented for both an analytical model where the response surface is ''known'' and a simplified contaminant fate transport and groundwater flow model. The results show that our iterative method for approximating a response surface (for subsequent calculation of uncertainty estimates) of specified precision requires less computing time than traditional approaches based upon noniterative sampling methods
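
    The record describes the approach only at a high level; the following minimal Python sketch illustrates the general pattern of a limited number of expensive model runs, a cheap response-surface surrogate fitted to them, and Monte Carlo propagation through the surrogate for uncertainty estimates. The placeholder model, parameter ranges, and quadratic surrogate are assumptions for illustration, not the Hanford models.

```python
# Minimal sketch: few expensive runs -> quadratic response surface -> cheap
# Monte Carlo uncertainty estimates. Model and ranges are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(k, por):
    # placeholder for an expensive flow/transport simulation
    return 10.0 * np.exp(-k) + 2.0 * por ** 2

# 1. Draw a small design of input samples (plain uniform here; LHS in practice).
k_s = rng.uniform(0.1, 2.0, 30)
por_s = rng.uniform(0.05, 0.4, 30)
y_s = expensive_model(k_s, por_s)

# 2. Fit a cheap quadratic response surface to the sampled runs.
X = np.column_stack([np.ones_like(k_s), k_s, por_s, k_s**2, por_s**2, k_s*por_s])
coef, *_ = np.linalg.lstsq(X, y_s, rcond=None)

def surrogate(k, por):
    return (coef[0] + coef[1]*k + coef[2]*por
            + coef[3]*k**2 + coef[4]*por**2 + coef[5]*k*por)

# 3. Propagate many cheap samples through the surrogate.
k_mc = rng.uniform(0.1, 2.0, 100_000)
por_mc = rng.uniform(0.05, 0.4, 100_000)
y_mc = surrogate(k_mc, por_mc)
lo, hi = np.percentile(y_mc, [2.5, 97.5])
print(f"surrogate mean {y_mc.mean():.3f}, 95% interval ({lo:.3f}, {hi:.3f})")
```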

  4. Language Based Techniques for Systems Biology

    DEFF Research Database (Denmark)

    Pilegaard, Henrik

    Process calculus is the common denominator for a class of compact, idealised, domain-specific formalisms normally associated with the study of reactive concurrent systems within Computer Science. With the rise of the interaction-centred science of Systems Biology a number of bio-inspired process......), is context insensitive, while the other, a poly-variant analysis (2CFA), is context-sensitive. These analyses compute safe approximations to the set of spatial configurations that are reachable according to a given model. This is useful in the qualitative study of cellular self-organisation and, e.g., the effects of receptor defects or drug delivery mechanisms. The property of sequential realisability, which is closely related to the function of biochemical pathways, is addressed by a variant of traditional Data Flow Analysis (DFA). This so-called ‘Pathway Analysis’ computes safe approximations to the set...

  5. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Sarfraz, M.

    2004-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  6. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.; Banissi, E.; Khosrowshahi, F.; Sarfraz, M.; Ursyn, A.

    2001-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  7. FDI and Accommodation Using NN Based Techniques

    Science.gov (United States)

    Garcia, Ramon Ferreiro; de Miguel Catoira, Alberto; Sanz, Beatriz Ferreiro

    Dynamic backpropagation neural networks are applied extensively to closed-loop control FDI (fault detection and isolation) tasks. The process dynamics are mapped by a trained backpropagation NN, which is then applied to residual generation. Process supervision is then applied to discriminate between faults in process sensors and in process plant parameters. A rule-based expert system is used to implement the decision-making task and the corresponding solution in terms of fault accommodation and/or reconfiguration. Results show an efficient and robust FDI system which could be used as the core of a SCADA system or, alternatively, as a complementary supervision tool operating in parallel with the SCADA, as demonstrated on a heat exchanger.
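
    As a rough illustration of residual-based FDI, the sketch below uses a simple fitted model as a stand-in for the trained backpropagation network, forms the residual between measurement and prediction, and applies a threshold; the process data, stand-in model, and threshold are all hypothetical.

```python
# Toy model-based residual generation for FDI: a data-driven process model
# (a least-squares fit standing in for the trained NN) predicts the output,
# and the residual between measurement and prediction is thresholded.
import numpy as np

rng = np.random.default_rng(1)

# "Training" data from the healthy process y = 0.8*u + noise (hypothetical).
u_train = rng.uniform(0, 10, 200)
y_train = 0.8 * u_train + rng.normal(0, 0.05, 200)
w = np.polyfit(u_train, y_train, 1)           # stand-in for the trained NN

def predict(u):
    return np.polyval(w, u)

THRESHOLD = 0.5                               # residual threshold (assumed)

def check_fault(u_meas, y_meas):
    residual = y_meas - predict(u_meas)
    return abs(residual) > THRESHOLD, residual

print(check_fault(5.0, 4.1))   # healthy reading -> (False, small residual)
print(check_fault(5.0, 6.5))   # sensor fault    -> (True, large residual)
```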

  8. Metamaterials modelling, fabrication, and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    2012-01-01

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited but epsilon-near-zero and sub-unitary refraction index are also...... parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, we will present our approach for determining the field enhancement in slits...

  9. Metamaterials modelling, fabrication and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited but epsilon-near-zero and sub-unitary refraction index are also...... parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of metamaterials research field that we deal with at our department. From the modelling part, various approaches for determining the value of the refractive index...

  10. Huffman-based code compression techniques for embedded processors

    KAUST Repository

    Bonny, Mohamed Talal

    2010-09-01

    The size of embedded software is increasing at a rapid pace. It is often challenging and time-consuming to fit an amount of required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article, we introduce a novel and efficient hardware-supported compression technique that is based on Huffman Coding. Our technique reduces the size of the generated decoding table, which takes a large portion of the memory. It combines our previous techniques, the Instruction Splitting Technique and the Instruction Re-encoding Technique, into a new one called the Combined Compression Technique, which improves the final compression ratio by taking advantage of both. The Instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman Coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of re-encoding unused bits (we call them re-encodable bits) in the instruction format for a specific application to improve the compression ratio. Re-encoding those bits can reduce the size of decoding tables by up to 40%. Using this technique, we improve the final compression ratios in comparison to the first technique to 46% and 45% for ARM and MIPS, respectively (including all overhead incurred). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and we have applied each technique to two major embedded processor architectures
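
    The article's instruction-splitting and re-encoding steps are ISA-specific, but the core Huffman stage can be sketched generically; the following snippet builds a Huffman code over a made-up stream of byte-sized "patterns" and reports the resulting bit count. The pattern values are arbitrary placeholders.

```python
# Minimal Huffman coder over byte-sized "instruction patterns"; the actual
# splitting/re-encoding of ARM/MIPS instructions is not reproduced here.
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Return {symbol: bitstring} for the given iterable of symbols."""
    freq = Counter(symbols)
    # each heap entry: (frequency, tie-breaker, [(symbol, code), ...])
    heap = [(f, i, [(s, "")]) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                      # degenerate single-symbol case
        return {heap[0][2][0][0]: "0"}
    while len(heap) > 1:
        f1, _, lo = heapq.heappop(heap)
        f2, _, hi = heapq.heappop(heap)
        merged = [(s, "0" + c) for s, c in lo] + [(s, "1" + c) for s, c in hi]
        heapq.heappush(heap, (f1 + f2, len(heap), merged))
    return dict(heap[0][2])

# Hypothetical stream of 8-bit instruction patterns.
patterns = [0x00, 0x00, 0x00, 0xE5, 0xE5, 0x1A, 0x00, 0xE5, 0x3C]
codes = huffman_codes(patterns)
encoded = "".join(codes[p] for p in patterns)
print(codes)
print("original bits:", 8 * len(patterns), " compressed bits:", len(encoded))
```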

  11. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...
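
    Of the condensation methods the book compares, static (Guyan) condensation is the simplest to illustrate; the sketch below reduces a hypothetical 4-DOF spring-mass chain to two master DOFs with NumPy. The matrices and the choice of masters are made up for the example.

```python
# Static (Guyan) condensation: slave DOFs are eliminated through the static
# constraint, giving reduced stiffness and mass matrices.
import numpy as np

def guyan_reduce(K, M, masters):
    n = K.shape[0]
    slaves = [i for i in range(n) if i not in masters]
    Ksm = K[np.ix_(slaves, masters)]
    Kss = K[np.ix_(slaves, slaves)]
    # transformation u_full = T @ u_masters, with u_slaves = -Kss^-1 Ksm u_m
    T = np.zeros((n, len(masters)))
    T[masters, :] = np.eye(len(masters))
    T[slaves, :] = -np.linalg.solve(Kss, Ksm)
    return T.T @ K @ T, T.T @ M @ T

# Hypothetical 4-DOF spring-mass chain, keeping DOFs 0 and 3 as masters.
k = 1000.0
K = k * np.array([[ 2, -1,  0,  0],
                  [-1,  2, -1,  0],
                  [ 0, -1,  2, -1],
                  [ 0,  0, -1,  2]], float)
M = np.diag([1.0, 1.0, 1.0, 1.0])
Kr, Mr = guyan_reduce(K, M, masters=[0, 3])
print(Kr)
print(Mr)
```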

  12. Optimization using surrogate models - by the space mapping technique

    DEFF Research Database (Denmark)

    Søndergaard, Jacob

    2003-01-01

    Surrogate modelling and optimization techniques are intended for engineering design in the case where an expensive physical model is involved. This thesis provides a literature overview of the field of surrogate modelling and optimization. The space mapping technique is one such method...... conditions are satisfied. So hybrid methods, combining the space mapping technique with classical optimization methods, should be used if convergence to high accuracy is wanted. Approximation abilities of the space mapping surrogate are compared with those of a Taylor model of the expensive model. The space...... mapping surrogate has a lower approximation error for long steps. For short steps, however, the Taylor model of the expensive model is best, due to exact interpolation at the model origin. Five algorithms for space mapping optimization are presented and the numerical performance is evaluated. Three...
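
    As a toy illustration of the space mapping idea (optimize the cheap coarse model, then repeatedly re-align it with the expensive fine model through parameter extraction), consider the one-dimensional sketch below; the "resonance" responses, target, and search grids are assumptions, not the thesis's test problems.

```python
# One-dimensional space-mapping sketch: coarse optimum + parameter extraction.
import numpy as np
from scipy.optimize import minimize_scalar

freqs = np.linspace(0.0, 4.0, 81)
F_TARGET = 2.0                                   # desired resonance frequency

def fine_response(x):                            # "expensive" model (assumed)
    return (freqs - x) ** 2

def coarse_response(z):                          # cheap, shifted surrogate
    return (freqs - z - 0.3) ** 2

def coarse_objective(z):                         # coarse response at the target
    return (F_TARGET - z - 0.3) ** 2

z_star = minimize_scalar(coarse_objective).x     # coarse optimum (cheap)

x = z_star                                       # initial design
zs = np.linspace(0.0, 4.0, 401)                  # parameter-extraction grid
for it in range(4):
    f_resp = fine_response(x)                    # one expensive evaluation
    # parameter extraction: align the coarse response with the fine response
    mism = [np.sum((coarse_response(z) - f_resp) ** 2) for z in zs]
    z_ext = zs[int(np.argmin(mism))]
    x = x + (z_star - z_ext)                     # quasi-Newton step with B = I
    print(f"iter {it}: design x = {x:.4f}, error = {abs(x - F_TARGET):.4f}")
```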

  13. Orientation of student entrepreneurial practices based on administrative techniques

    Directory of Open Access Journals (Sweden)

    Héctor Horacio Murcia Cabra

    2005-07-01

    Full Text Available As part of the second phase of the research project «Application of a creativity model to update the teaching of the administration in Colombian agricultural entrepreneurial systems», it was decided to reinforce the planning and execution skills of students of the Agricultural Business Administration Faculty of La Salle University, with special attention to those finishing their studies. The plan of action was initiated in the second semester of 2003. It was initially defined as a model of entrepreneurial strengthening based on a coherent methodology that included the most recent administration and management techniques. Later, the applicability of this model was tested in some organizations of the agricultural sector that had asked for support in their planning processes. Through an action-research process the methodology was redefined in order to arrive at a final model that could be used by the Faculty's students and graduates. The results obtained were applied to the teaching of the Entrepreneurial Laboratory for ninth-semester students, with the aim of improving administrative support to agricultural enterprises. More than 100 students and 200 agricultural producers applied this procedure between June 2003 and July 2005. The methodology used and the results obtained are presented in this article.

  14. A Comparative Analysis of Transmission Control Protocol Improvement Techniques over Space-Based Transmission Media

    National Research Council Canada - National Science Library

    Lawson, Joseph M

    2006-01-01

    ... justification for the implementation of a given enhancement technique. The research questions were answered through model and simulation of a satellite transmission system via a Linux-based network topology...

  15. Inverter-based circuit design techniques for low supply voltages

    CERN Document Server

    Palani, Rakesh Kumar

    2017-01-01

    This book describes intuitive analog design approaches using digital inverters, providing filter architectures and circuit techniques that enable high-performance analog circuit design. The authors provide process, supply voltage and temperature (PVT) variation-tolerant design techniques for inverter-based circuits. They also discuss various analog design techniques for lower technology nodes and lower supply voltages, which can be used for designing high-performance systems-on-chip.

  16. Array-based techniques for fingerprinting medicinal herbs

    Directory of Open Access Journals (Sweden)

    Xue Charlie

    2011-05-01

    Full Text Available Poor quality control of medicinal herbs has led to instances of toxicity, poisoning and even deaths. The fundamental step in quality control of herbal medicine is accurate identification of herbs. Array-based techniques have recently been adapted to authenticate or identify herbal plants. This article reviews the current array-based techniques, e.g. oligonucleotide microarrays, gene-based probe microarrays, Suppression Subtractive Hybridization (SSH)-based arrays, Diversity Array Technology (DArT) and Subtracted Diversity Array (SDA). We further compare these techniques according to important parameters such as markers, polymorphism rates, restriction enzymes and sample type. The applicability of the array-based methods for fingerprinting depends on the availability of genomic and genetic information for the species to be fingerprinted. For species with little genome sequence information but high polymorphism rates, SDA techniques are particularly recommended because they require less labour and lower material cost.

  17. Clustering economies based on multiple criteria decision making techniques

    Directory of Open Access Journals (Sweden)

    Mansour Momeni

    2011-10-01

    Full Text Available One of the primary concerns in many countries is to determine the important factors affecting economic growth. In this paper, we study factors such as unemployment rate, inflation rate, population growth and average annual income to cluster different countries. The proposed model uses the analytic hierarchy process (AHP) to prioritize the criteria and then uses a K-means technique to cluster 59 countries into four groups based on the ranked criteria. The first group includes countries with high standards such as Germany and Japan. The second cluster contains developing countries with relatively good economic growth such as Saudi Arabia and Iran. The third cluster comprises countries with faster rates of growth than those in the second group, such as China, India and Mexico. Finally, the fourth cluster includes countries with very low rates of growth such as Jordan, Mali and Niger.
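
    A minimal sketch of the two-stage procedure (AHP-derived criterion weights followed by K-means clustering) might look as follows; the pairwise comparison matrix, the indicator values, and the number of clusters are invented for illustration and do not reproduce the paper's data on 59 countries.

```python
# AHP priority vector from a pairwise comparison matrix, then K-means on
# weighted indicators. All numbers are hypothetical.
import numpy as np
from sklearn.cluster import KMeans

# Pairwise comparison of 4 criteria (unemployment, inflation, population
# growth, average income); entry [i, j] = importance of i relative to j.
A = np.array([[1.0,  2.0, 4.0, 1.0],
              [0.5,  1.0, 3.0, 0.5],
              [0.25, 1/3, 1.0, 0.25],
              [1.0,  2.0, 4.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
weights = w / w.sum()                      # AHP priority vector

# Hypothetical standardized indicators for 6 economies (rows).
X = np.array([[0.2, 0.1, 0.3, 0.9],
              [0.3, 0.2, 0.4, 0.8],
              [0.6, 0.5, 0.6, 0.5],
              [0.7, 0.6, 0.7, 0.4],
              [0.9, 0.9, 0.9, 0.1],
              [0.8, 0.8, 0.8, 0.2]])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X * weights)
print("AHP weights:", np.round(weights, 3))
print("cluster labels:", labels)
```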

  18. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow the specification of test cases at a higher level of abstraction, that give guidance on test identification and specification, and that support automated test generation. Model-based security testing (MBST) is a relatively new field especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a research challenge and of high interest for industrial applications. MBST includes, e.g., security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey of MBST techniques and the related models, as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.

  19. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of the Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide
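
    DRAM and DREAM are adaptive refinements of the basic Metropolis sampler; the sketch below shows plain random-walk Metropolis on a one-parameter toy model, which conveys the accept/reject mechanics that those algorithms build on. The model, data, prior bounds, and proposal width are hypothetical, not the heat or HIV models of the dissertation.

```python
# Plain random-walk Metropolis for one parameter of a toy linear model.
import numpy as np

rng = np.random.default_rng(2)

# Synthetic data from y = theta * t + noise, true theta = 1.5 (assumed).
t = np.linspace(0, 10, 50)
y_obs = 1.5 * t + rng.normal(0, 0.5, t.size)
sigma = 0.5

def log_post(theta):
    if not (0.0 < theta < 5.0):                      # uniform prior bounds
        return -np.inf
    resid = y_obs - theta * t
    return -0.5 * np.sum((resid / sigma) ** 2)       # Gaussian likelihood

theta, lp = 1.0, log_post(1.0)
chain = []
for _ in range(20_000):
    prop = theta + rng.normal(0, 0.05)               # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:         # accept/reject step
        theta, lp = prop, lp_prop
    chain.append(theta)

samples = np.array(chain[5_000:])                    # discard burn-in
print(f"posterior mean {samples.mean():.3f}, 95% CI "
      f"({np.percentile(samples, 2.5):.3f}, {np.percentile(samples, 97.5):.3f})")
```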

  20. An experimental comparison of modelling techniques for speaker ...

    Indian Academy of Sciences (India)

    Feature extraction involves extracting speaker-specific features from the speech signal at reduced data rate. The extracted features are further combined using modelling techniques to generate speaker models. The speaker models are then tested using the features extracted from the test speech signal. The improvement in ...

  1. Model Based Temporal Reasoning

    Science.gov (United States)

    Rabin, Marla J.; Spinrad, Paul R.; Fall, Thomas C.

    1988-03-01

    Systems that assess the real world must cope with evidence that is uncertain, ambiguous, and spread over time. Typically, the most important function of an assessment system is to identify when activities are occurring that are unusual or unanticipated. Model based temporal reasoning addresses both of these requirements. The differences among temporal reasoning schemes lie in the methods used to avoid computational intractability. If we had n pieces of data and we wanted to examine how they were related, the worst case would be where we had to examine every subset of these points to see if that subset satisfied the relations. This would be 2^n subsets, which is intractable. Models compress this; if several data points are all compatible with a model, then that model represents all those data points. Data points are then considered related if they lie within the same model or if they lie in models that are related. Models thus address the intractability problem. They also address the problem of determining unusual activities: if the data do not agree with models that are indicated by earlier data, then something out of the norm is taking place. The models can summarize what we know up to that time, so when they are not predicting correctly, either something unusual is happening or we need to revise our models. The model based reasoner developed at Advanced Decision Systems is thus both intuitive and powerful. It is currently being used on one operational system and several prototype systems. It has enough power to be used in domains spanning the spectrum from manufacturing engineering and project management to low-intensity conflict and strategic assessment.

  2. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    International Nuclear Information System (INIS)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-01-01

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  3. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  4. Validation of a satellite-based cyclogenesis technique over the ...

    Indian Academy of Sciences (India)

    2013), it is called a depression over the NIO. It becomes a cyclonic storm if the MSW is 34 knots or more. The intensity of the low pressure system is also defined based on Dvorak's technique (Dvorak 1984) using satellite imagery. In this technique, the intensity is defined with an assigned number on a scale of 1–8 in an ...

  5. Estimate-Merge-Technique-based algorithms to track an underwater ...

    Indian Academy of Sciences (India)

    In this paper, two novel methods based on the Estimate Merge Technique are proposed. The Estimate Merge Technique involves a process of obtaining a final estimate by the fusion of a posteriori estimates given by different nonlinear estimators, which are in turn driven by the towed array bearing-only measurements.

  6. Introducing Risk Management Techniques Within Project Based Software Engineering Courses

    Science.gov (United States)

    Port, Daniel; Boehm, Barry

    2002-03-01

    In 1996, USC switched its core two-semester software engineering course from a hypothetical-project, homework-and-exam course based on the Bloom taxonomy of educational objectives (knowledge, comprehension, application, analysis, synthesis, and evaluation) to a real-client team-project course based on the CRESST model of learning objectives (content understanding, problem solving, collaboration, communication, and self-regulation). We used the CRESST cognitive demands analysis to determine the necessary student skills required for software risk management and the other major project activities, and have been refining the approach over the last 5 years of experience, including revised versions for one-semester undergraduate and graduate project courses at Columbia. This paper summarizes our experiences in evolving the risk management aspects of the project course. These have helped us mature more general techniques such as risk-driven specifications, domain-specific simplifier and complicator lists, and the schedule as an independent variable (SAIV) process model. The largely positive results in terms of pass/fail rates, client evaluations, product adoption rates, and hiring manager feedback are summarized as well.

  7. Mapping the Complexities of Online Dialogue: An Analytical Modeling Technique

    Directory of Open Access Journals (Sweden)

    Robert Newell

    2014-03-01

    Full Text Available The e-Dialogue platform was developed in 2001 to explore the potential of using the Internet for engaging diverse groups of people and multiple perspectives in substantive dialogue on sustainability. The system is online, text-based, and serves as a transdisciplinary space for bringing together researchers, practitioners, policy-makers and community leaders. The Newell-Dale Conversation Modeling Technique (NDCMT) was designed for in-depth analysis of e-Dialogue conversations and uses empirical methodology to minimize observer bias during analysis of a conversation transcript. NDCMT elucidates emergent ideas, identifies connections between ideas and themes, and provides a coherent synthesis and deeper understanding of the underlying patterns of online conversations. Continual application and improvement of NDCMT can lead to powerful methodologies for empirically analyzing digital discourse and better capture of innovations produced through such discourse. URN: http://nbn-resolving.de/urn:nbn:de:0114-fqs140221

  8. Vector machine techniques for modeling of seismic liquefaction data

    Directory of Open Access Journals (Sweden)

    Pijush Samui

    2014-06-01

    Full Text Available This article employs three soft computing techniques, Support Vector Machine (SVM), Least Square Support Vector Machine (LSSVM) and Relevance Vector Machine (RVM), for prediction of liquefaction susceptibility of soil. SVM and LSSVM are based on the structural risk minimization (SRM) principle, which seeks to minimize an upper bound of the generalization error consisting of the sum of the training error and a confidence interval. RVM is a sparse Bayesian kernel machine. SVM, LSSVM and RVM have been used as classification tools. The developed SVM, LSSVM and RVM give equations for prediction of liquefaction susceptibility of soil. A comparative study has been carried out between the developed SVM, LSSVM and RVM models. The results from this article indicate that the developed SVM gives the best performance for prediction of liquefaction susceptibility of soil.
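
    A minimal sketch of an SVM classifier for liquefaction susceptibility is shown below using scikit-learn; the two input features and the synthetic labels are placeholders and do not correspond to the article's data set or its reported equations.

```python
# SVM classification of liquefaction susceptibility on synthetic features.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)

# Synthetic training set: liquefaction (1) tends to occur at low blow count
# and high cyclic stress ratio (both normalized to [0, 1]).
n = 200
spt = rng.uniform(0, 1, n)           # normalized SPT blow count
csr = rng.uniform(0, 1, n)           # normalized cyclic stress ratio
labels = (csr - spt + rng.normal(0, 0.1, n) > 0).astype(int)
X = np.column_stack([spt, csr])

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
print("prediction for (spt=0.2, csr=0.8):", clf.predict([[0.2, 0.8]])[0])
```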

  9. Nitrous oxide-based techniques versus nitrous oxide-free techniques for general anaesthesia.

    Science.gov (United States)

    Sun, Rao; Jia, Wen Qin; Zhang, Peng; Yang, KeHu; Tian, Jin Hui; Ma, Bin; Liu, Yali; Jia, Run H; Luo, Xiao F; Kuriyama, Akira

    2015-11-06

    anaesthesia (or both) with any general anaesthesia using a volatile anaesthetic or propofol-based maintenance of anaesthesia but no nitrous oxide for adults undergoing surgery. Our primary outcome was inhospital case fatality rate. Secondary outcomes were complications and length of stay. Two review authors independently assessed trial quality and extracted the outcome data. We used meta-analysis for data synthesis. Heterogeneity was examined with the Chi² test and by calculating the I² statistic. We used a fixed-effect model if the measure of inconsistency was low for all comparisons (I² statistic nitrous oxide-based techniques increased the incidence of pulmonary atelectasis (odds ratio (OR) 1.57, 95% confidence interval (CI) 1.18 to 2.10, P = 0.002), but had no effects on the inhospital case fatality rate, the incidence of pneumonia, myocardial infarction, stroke, severe nausea and vomiting, venous thromboembolism, wound infection, or the length of hospital stay. The sensitivity analyses suggested that the results of the meta-analyses were all robust except for the outcomes of pneumonia, and severe nausea and vomiting. Two trials reported length of intensive care unit (ICU) stay but the data were skewed so were not pooled. Both trials reported that nitrous oxide-based techniques had no effects on the length of ICU stay. We rated the quality of evidence for two outcomes (pulmonary atelectasis, myocardial infarction) as high, four outcomes (inhospital case fatality rate, stroke, venous thromboembolism, length of hospital stay) as moderate, and three (pneumonia, severe nausea and vomiting, wound infection rate) as low. Given the evidence from this Cochrane review, the avoidance of nitrous oxide may be reasonable in participants with pre-existing poor pulmonary function or at high risk of postoperative nausea and vomiting. Since there are eight studies awaiting classification, selection bias may exist in our systematic review.

  10. improvement of digital image watermarking techniques based on FPGA implementation

    International Nuclear Information System (INIS)

    EL-Hadedy, M.E

    2006-01-01

    Digital watermarking establishes the ownership of a piece of digital data by marking the data invisibly or visibly. This can be used to protect several types of multimedia objects such as audio, text, image and video. This thesis demonstrates the different types of watermarking techniques, such as the discrete cosine transform (DCT) and the discrete wavelet transform (DWT), and their characteristics. It then classifies these techniques, stating their advantages and disadvantages. An improved technique with distinguished features, such as peak signal-to-noise ratio (PSNR) and similarity ratio (SR), has been introduced. The modified technique has been compared with the other techniques by measuring their robustness against different attacks. Finally, a field programmable gate array (FPGA)-based implementation and comparison for the proposed watermarking technique have been presented and discussed
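
    The thesis targets an FPGA implementation, but the DCT-domain embedding idea itself can be sketched in a few lines; the snippet below (assuming SciPy's scipy.fft.dctn/idctn are available) adds a pseudorandom watermark to mid-band DCT coefficients and checks detection by correlation and PSNR. The image, band selection, and strength are arbitrary illustrative choices, not the thesis's scheme.

```python
# Additive DCT-domain watermark: embed in mid-band coefficients, detect by
# correlation, and report PSNR of the marked image.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(4)
img = rng.uniform(0, 255, (64, 64))                 # stand-in host image
alpha = 5.0                                          # embedding strength

C = dctn(img, norm="ortho")
s = np.add.outer(np.arange(64), np.arange(64))       # i + j for each coefficient
idx = np.argwhere((s > 8) & (s < 24))                # mid-band positions
wm = rng.choice([-1.0, 1.0], size=len(idx))          # bipolar watermark

C_marked = C.copy()
C_marked[idx[:, 0], idx[:, 1]] += alpha * wm
img_marked = idctn(C_marked, norm="ortho")

# detection: extract coefficient differences and correlate with the watermark
C_test = dctn(img_marked, norm="ortho")
extracted = C_test[idx[:, 0], idx[:, 1]] - C[idx[:, 0], idx[:, 1]]
corr = extracted @ wm / (np.linalg.norm(extracted) * np.linalg.norm(wm))
psnr = 10 * np.log10(255.0 ** 2 / np.mean((img - img_marked) ** 2))
print(f"detection correlation {corr:.3f}, PSNR {psnr:.1f} dB")
```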

  11. The detection of bulk explosives using nuclear-based techniques

    Energy Technology Data Exchange (ETDEWEB)

    Morgado, R.E.; Gozani, T.; Seher, C.C.

    1988-01-01

    In 1986 we presented a rationale for the detection of bulk explosives based on nuclear techniques that addressed the requirements of civil aviation security in the airport environment. Since then, efforts have intensified to implement a system based on thermal neutron activation (TNA), with new work developing in fast neutron and energetic photon reactions. In this paper we will describe these techniques and present new results from laboratory and airport testing. Based on preliminary results, we contended in our earlier paper that nuclear-based techniques did provide sufficiently penetrating probes and distinguishable detectable reaction products to achieve the FAA operational goals; new data have supported this contention. The status of nuclear-based techniques for the detection of bulk explosives presently under investigation by the US Federal Aviation Administration (FAA) is reviewed. These include thermal neutron activation (TNA), fast neutron activation (FNA), the associated particle technique, nuclear resonance absorption, and photoneutron activation. The results of comprehensive airport testing of the TNA system performed during 1987-88 are summarized. From a technical point of view, nuclear-based techniques now represent the most comprehensive and feasible approach for meeting the operational criteria of detection, false alarms, and throughput. 9 refs., 5 figs., 2 tabs.

  12. Template Metaprogramming Techniques for Concept-Based Specialization

    Directory of Open Access Journals (Sweden)

    Bruno Bachelet

    2013-01-01

    Full Text Available In generic programming, software components are parameterized on types. When available, a static specialization mechanism allows selecting, for a given set of parameters, a more suitable version of a generic component than its primary version. The normal C++ template specialization mechanism is based on the type pattern of the parameters, which is not always the best way to guide the specialization process: type patterns are missing some information on types that could be relevant to define specializations. The notion of a concept, which represents a set of requirements (including syntactic and semantic aspects for a type, is known to be an interesting approach to control template specialization. For many reasons, concepts were dropped from C++11 standard, this article therefore describes template metaprogramming techniques for declaring concepts, modeling relationships (meaning that a type fulfills the requirements of a concept, and refinement relationships (meaning that a concept refines the requirements of another concept. From a taxonomy of concepts and template specializations based on concepts, an automatic mechanism selects the most appropriate version of a generic component for a given instantiation. Our purely library-based solution is also open for retroactive extension: new concepts, relationships, and template specializations can be defined at any time; such additions will then be picked up by the specialization mechanism.

  13. Simulation-driven design by knowledge-based response correction techniques

    CERN Document Server

    Koziel, Slawomir

    2016-01-01

    Focused on efficient simulation-driven multi-fidelity optimization techniques, this monograph on simulation-driven optimization covers simulations utilizing physics-based low-fidelity models, often based on coarse-discretization simulations or other types of simplified physics representations, such as analytical models. The methods presented in the book exploit as much as possible any knowledge about the system or device of interest embedded in the low-fidelity model with the purpose of reducing the computational overhead of the design process. Most of the techniques described in the book are of response correction type and can be split into parametric (usually based on analytical formulas) and non-parametric, i.e., not based on analytical formulas. The latter, while more complex in implementation, tend to be more efficient. The book presents a general formulation of response correction techniques as well as a number of specific methods, including those based on correcting the low-fidelity model response (out...
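
    A minimal sketch of one response-correction flavour, a first-order correction that matches the low-fidelity model's value and slope to the expensive model at the current design before optimizing the corrected surrogate, is given below; both models, the bounds, and the iteration count are toy assumptions rather than examples from the monograph.

```python
# First-order additive response correction of a cheap low-fidelity model.
import numpy as np
from scipy.optimize import minimize_scalar

def high_fidelity(x):                 # "expensive" model (assumed)
    return 1.1 * (x - 1.2) ** 2 + 0.1 * x

def low_fidelity(x):                  # cheap, systematically biased model
    return (x - 1.0) ** 2

def dfdx(f, x, h=1e-5):               # finite-difference slope
    return (f(x + h) - f(x - h)) / (2 * h)

x = 0.0                               # initial design
for it in range(6):
    d0 = high_fidelity(x) - low_fidelity(x)              # value mismatch
    d1 = dfdx(high_fidelity, x) - dfdx(low_fidelity, x)  # slope mismatch
    corrected = lambda z, x=x, d0=d0, d1=d1: low_fidelity(z) + d0 + d1 * (z - x)
    x = minimize_scalar(corrected, bounds=(-1.0, 3.0), method="bounded").x
    print(f"iter {it}: x = {x:.4f}, high_fidelity(x) = {high_fidelity(x):.4f}")
```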

  14. Advanced Multipath Mitigation Techniques for Satellite-Based Positioning Applications

    Directory of Open Access Journals (Sweden)

    Mohammad Zahidul H. Bhuiyan

    2010-01-01

    Full Text Available Multipath remains a dominant source of ranging errors in Global Navigation Satellite Systems (GNSS), such as the Global Positioning System (GPS) or the future European satellite navigation system Galileo. Multipath is generally considered undesirable in the context of GNSS, since the reception of multipath can significantly distort the shape of the correlation function used for time delay estimation. Some wireless communications techniques exploit multipath in order to provide signal diversity; in GNSS, however, the major challenge is to effectively mitigate the multipath, since we are interested only in the satellite-receiver transit time offset of the Line-Of-Sight (LOS) signal for the receiver's position estimate. Therefore, the multipath problem has been approached from several directions in order to mitigate the impact of multipath on navigation receivers, including the development of novel signal processing techniques. In this paper, we propose a maximum likelihood-based technique, namely, the Reduced Search Space Maximum Likelihood (RSSML) delay estimator, which is capable of mitigating the multipath effects reasonably well at the expense of increased complexity. The proposed RSSML attempts to compensate the multipath error contribution by performing a nonlinear curve fit on the input correlation function, which finds a perfect match from a set of ideal reference correlation functions with certain amplitude(s), phase(s), and delay(s) of the multipath signal. It also incorporates a threshold-based peak detection method, which eventually reduces the code-delay search space significantly. However, the downfall of RSSML is the memory requirement which it uses to store the reference correlation functions. The multipath performance of other delay-tracking methods previously studied for Binary Phase Shift Keying (BPSK)- and Sine Binary Offset Carrier (SinBOC)-modulated signals is also analyzed in a closed-loop model with the new Composite
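
    The essence of matching the measured correlation function against ideal reference correlation shapes can be illustrated with a toy grid search; the sketch below uses an idealized BPSK correlation triangle, a single multipath ray, and arbitrary search grids, and is a simplified stand-in rather than the RSSML estimator itself.

```python
# Toy ML delay estimation: least-squares match of the measured correlation
# function against ideal LOS + one-ray multipath reference shapes.
import numpy as np

def tri(lags, delay, amp):
    """Ideal BPSK correlation triangle of width +/-1 chip."""
    return amp * np.clip(1.0 - np.abs(lags - delay), 0.0, None)

lags = np.linspace(-1.5, 1.5, 121)            # correlator lag grid (chips)
true_los, mp_delay, mp_amp = 0.10, 0.40, 0.5  # LOS delay + one multipath ray
measured = tri(lags, true_los, 1.0) + tri(lags, true_los + mp_delay, mp_amp)
measured += np.random.default_rng(5).normal(0, 0.01, lags.size)

best = (np.inf, None)
for tau in np.linspace(-0.5, 0.5, 101):              # reduced LOS search space
    for d in np.linspace(0.1, 0.8, 15):              # multipath extra delay
        for a in np.linspace(0.0, 0.8, 9):           # multipath amplitude
            ref = tri(lags, tau, 1.0) + tri(lags, tau + d, a)
            err = np.sum((measured - ref) ** 2)
            if err < best[0]:
                best = (err, tau)
print(f"estimated LOS delay: {best[1]:.3f} chips (true {true_los} chips)")
```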

  15. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. It begins with an introduction to circuit analysis techniques, laws, and frequency and time domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, nonorthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  16. A line-based vegetation sampling technique and its application in ...

    African Journals Online (AJOL)

    A plotless vegetation sampling technique (ellipse-intercept method) is described and its accuracy and precision compared by simulation models with two other line-based techniques: line-intercept and a point-intercept method (which measure species covers). The ellipse-intercept method provides three measures of plant ...

  17. A novel technique for active vibration control, based on optimal ...

    Indian Academy of Sciences (India)

    ... structure by simultaneously tracking zero references for modes of vibration. To illustrate the technique, a two-degrees-of-freedom spring-mass-damper system is considered as a test system. The mathematical model of the system is derived and then converted into a state-space model. A linear quadratic tracking control law ...
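
    A compact sketch of the linear quadratic design step for such a two-degrees-of-freedom system is given below (tracking zero references reduces to regulation); the masses, stiffnesses, damping values, and weighting matrices are illustrative assumptions, not the paper's.

```python
# LQ regulator for a 2-DOF spring-mass-damper in state-space form.
import numpy as np
from scipy.linalg import solve_continuous_are

m1 = m2 = 1.0
k1 = k2 = 100.0
c1 = c2 = 0.5

# state x = [q1, q2, q1dot, q2dot], single force input on mass 1
A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [-(k1 + k2) / m1,  k2 / m1, -(c1 + c2) / m1,  c2 / m1],
              [  k2 / m2,       -k2 / m2,   c2 / m2,       -c2 / m2]])
B = np.array([[0.0], [0.0], [1.0 / m1], [0.0]])

Q = np.diag([100.0, 100.0, 1.0, 1.0])     # penalize position/velocity error
R = np.array([[0.1]])                     # penalize control effort

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)           # optimal state-feedback gain
print("LQ gain K =", np.round(K, 3))
print("closed-loop eigenvalues:", np.round(np.linalg.eigvals(A - B @ K), 3))
```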

  18. [Intestinal lengthening techniques: an experimental model in dogs].

    Science.gov (United States)

    Garibay González, Francisco; Díaz Martínez, Daniel Alberto; Valencia Flores, Alejandro; González Hernández, Miguel Angel

    2005-01-01

    To compare two intestinal lengthening procedures in an experimental dog model. Intestinal lengthening is one of the methods of gastrointestinal reconstruction used for the treatment of short bowel syndrome. A modification of Bianchi's technique is an alternative. The modified technique decreases the number of anastomoses to a single one, thus reducing the risk of leaks and strictures. To our knowledge, no clinical or experimental report has compared both techniques, which motivated the present study. Twelve creole dogs underwent the Bianchi intestinal lengthening technique (group A) and another 12 creole dogs of the same breed and weight were operated on with the modified technique (group B). Both groups were compared with respect to operating time, technical difficulty, cost, intestinal lengthening and anastomosis diameter. There was no statistically significant difference in anastomosis diameter (A = 9.0 mm vs. B = 8.5 mm, p = 0.3846). Operating time (142 min vs. 63 min), cost and technical difficulties were lower in group B (p anastomoses (of group B) and intestinal segments had good blood supply and were patent along their full length. The Bianchi technique and the modified technique offer two good, reliable alternatives for the treatment of short bowel syndrome. The modified technique improved operating time, cost and technical issues.

  19. The impact of applying product-modelling techniques in configurator projects

    DEFF Research Database (Denmark)

    Hvam, Lars; Kristjansdottir, Katrin; Shafiee, Sara

    2018-01-01

    This paper aims to increase understanding of the impact of using product-modelling techniques to structure and formalise knowledge in configurator projects. Companies that provide customised products increasingly apply configurators in support of sales and design activities, reaping benefits...... though extant literature has shown the importance of formal modelling techniques, the impact of utilising these techniques remains relatively unknown. Therefore, this article studies three main areas: (1) the impact of using modelling techniques based on Unified Modelling Language (UML), in which...... ability to reduce the number of product variants. This paper contributes to an increased understanding of what companies can gain from using more formalised modelling techniques in configurator projects, and under what circumstances they should be used....

  20. Skull base tumor model.

    Science.gov (United States)

    Gragnaniello, Cristian; Nader, Remi; van Doormaal, Tristan; Kamel, Mahmoud; Voormolen, Eduard H J; Lasio, Giovanni; Aboud, Emad; Regli, Luca; Tulleken, Cornelius A F; Al-Mefty, Ossama

    2010-11-01

    Resident duty-hours restrictions have now been instituted in many countries worldwide. Shortened training times and increased public scrutiny of surgical competency have led to a move away from the traditional apprenticeship model of training. The development of educational models for brain anatomy is a fascinating innovation allowing neurosurgeons to train without the need to practice on real patients and it may be a solution to achieve competency within a shortened training period. The authors describe the use of Stratathane resin ST-504 polymer (SRSP), which is inserted at different intracranial locations to closely mimic meningiomas and other pathological entities of the skull base, in a cadaveric model, for use in neurosurgical training. Silicone-injected and pressurized cadaveric heads were used for studying the SRSP model. The SRSP presents unique intrinsic metamorphic characteristics: liquid at first, it expands and foams when injected into the desired area of the brain, forming a solid tumorlike structure. The authors injected SRSP via different passages that did not influence routes used for the surgical approach for resection of the simulated lesion. For example, SRSP injection routes included endonasal transsphenoidal or transoral approaches if lesions were to be removed through standard skull base approach, or, alternatively, SRSP was injected via a cranial approach if the removal was planned to be via the transsphenoidal or transoral route. The model was set in place in 3 countries (US, Italy, and The Netherlands), and a pool of 13 physicians from 4 different institutions (all surgeons and surgeons in training) participated in evaluating it and provided feedback. All 13 evaluating physicians had overall positive impressions of the model. The overall score on 9 components evaluated--including comparison between the tumor model and real tumor cases, perioperative requirements, general impression, and applicability--was 88% (100% being the best possible

  1. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    . In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectation models and give a general definition of spread...

  2. Application of the numerical modelling techniques to the simulation ...

    African Journals Online (AJOL)

    The aquifer was modelled by the application of the Finite Element Method (F.E.M.), with appropriate initial and boundary conditions. The matrix solver technique adopted for the F.E.M. was the Conjugate Gradient Method. After the steady-state calibration and transient verification, the model was used to predict the effect of ...
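
    The solver stage mentioned above can be sketched with SciPy's Conjugate Gradient routine applied to a small symmetric positive-definite system standing in for the assembled FEM equations; the 1-D Laplacian and unit source vector below are placeholders, not the aquifer model.

```python
# Conjugate Gradient solve of a sparse SPD system (FEM-style stand-in).
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 100
A = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)                      # source/recharge term (assumed)

h, info = cg(A, b)                  # nodal heads
print("CG converged:", info == 0, " max head:", round(float(h.max()), 2))
```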

  3. Fuzzy Control Technique Applied to Modified Mathematical Model ...

    African Journals Online (AJOL)

    In this paper, a fuzzy control technique is applied to the modified mathematical model for malaria control presented by the authors in an earlier study. Five Mamdani fuzzy controllers are constructed to control the input (some epidemiological parameters) to the malaria model simulated by 9 fully nonlinear ordinary differential ...
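
    A hand-rolled, single-input Mamdani controller conveys the min-implication, max-aggregation and centroid defuzzification that such controllers rely on; the membership functions and rules below are invented for illustration and are not the paper's five controllers.

```python
# Minimal single-input Mamdani fuzzy controller (no toolbox required).
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

u = np.linspace(0.0, 1.0, 201)                    # output universe: control effort
out_low = trimf(u, -0.5, 0.0, 0.5)
out_high = trimf(u, 0.5, 1.0, 1.5)

def controller(infection_level):
    """IF infection LOW THEN effort LOW; IF infection HIGH THEN effort HIGH."""
    mu_low = trimf(infection_level, -0.6, 0.0, 0.6)
    mu_high = trimf(infection_level, 0.4, 1.0, 1.6)
    agg = np.maximum(np.minimum(mu_low, out_low), np.minimum(mu_high, out_high))
    return float(np.sum(agg * u) / np.sum(agg))   # centroid defuzzification

for level in (0.1, 0.5, 0.9):
    print(f"infection level {level:.1f} -> control effort {controller(level):.2f}")
```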

  4. Model-based Sensor Data Acquisition and Management

    OpenAIRE

    Aggarwal, Charu C.; Sathe, Saket; Papaioannou, Thanasis G.; Jeung, Ho Young; Aberer, Karl

    2012-01-01

    In recent years, due to the proliferation of sensor networks, there has been a genuine need of researching techniques for sensor data acquisition and management. To this end, a large number of techniques have emerged that advocate model-based sensor data acquisition and management. These techniques use mathematical models for performing various, day-to-day tasks involved in managing sensor data. In this chapter, we survey the state-of-the-art techniques for model-based sensor data acquisition...

  5. Summary on several key techniques in 3D geological modeling.

    Science.gov (United States)

    Mei, Gang

    2014-01-01

    Several key techniques in 3D geological modeling including planar mesh generation, spatial interpolation, and surface intersection are summarized in this paper. Note that these techniques are generic and widely used in various applications but play a key role in 3D geological modeling. There are two essential procedures in 3D geological modeling: the first is the simulation of geological interfaces using geometric surfaces and the second is the building of geological objects by means of various geometric computations such as the intersection of surfaces. Discrete geometric surfaces that represent geological interfaces can be generated by creating planar meshes first and then spatially interpolating; those surfaces intersect and then form volumes that represent three-dimensional geological objects such as rock bodies. In this paper, the most commonly used algorithms of the key techniques in 3D geological modeling are summarized.
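
    As an example of the spatial interpolation step, the sketch below applies inverse-distance weighting to scattered sample elevations to build a discrete surface on a planar grid; the sample points, trend, and grid resolution are synthetic.

```python
# Inverse-distance-weighted (IDW) interpolation of a geological interface.
import numpy as np

rng = np.random.default_rng(6)
# scattered sample points (x, y) with interface elevation z (hypothetical)
pts = rng.uniform(0, 100, (30, 2))
z = 50.0 + 0.1 * pts[:, 0] - 0.05 * pts[:, 1] + rng.normal(0, 0.5, 30)

def idw(px, py, power=2.0):
    d = np.hypot(pts[:, 0] - px, pts[:, 1] - py)
    if np.any(d < 1e-9):                 # exactly on a sample point
        return z[np.argmin(d)]
    w = 1.0 / d ** power
    return np.sum(w * z) / np.sum(w)

# interpolate onto a regular 20 x 20 planar mesh
gx, gy = np.meshgrid(np.linspace(0, 100, 20), np.linspace(0, 100, 20))
surface = np.vectorize(idw)(gx, gy)
print("interpolated elevation range:", surface.min(), "-", surface.max())
```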

  6. Structural design systems using knowledge-based techniques

    International Nuclear Information System (INIS)

    Orsborn, K.

    1993-01-01

    Engineering information management and the corresponding information systems are of a strategic importance for industrial enterprises. This thesis treats the interdisciplinary field of designing computing systems for structural design and analysis using knowledge-based techniques. Specific conceptual models have been designed for representing the structure and the process of objects and activities in a structural design and analysis domain. In this thesis, it is shown how domain knowledge can be structured along several classification principles in order to reduce complexity and increase flexibility. By increasing the conceptual level of the problem description and representation of the domain knowledge in a declarative form, it is possible to enhance the development, maintenance and use of software for mechanical engineering. This will result in a corresponding increase of the efficiency of the mechanical engineering design process. These ideas together with the rule-based control point out the leverage of declarative knowledge representation within this domain. Used appropriately, a declarative knowledge representation preserves information better, is more problem-oriented and change-tolerant than procedural representations. 74 refs

  7. An introduction to laser-based tissue microdissection techniques.

    Science.gov (United States)

    Curran, Stephanie; Murray, Graeme I

    2005-01-01

    The development and application of laser-based tissue microdissection techniques has provided a major impetus to the sensitive and specific molecular analysis of solid tissues and tumors. This chapter provides an overview of the different laser-based microdissection systems and an introduction to the principles involved in the function and applications of these individual systems.

  8. The Research of Histogram Enhancement Technique Based on Matlab Software

    Directory of Open Access Journals (Sweden)

    Li Kai

    2014-08-01

    Full Text Available Histogram enhancement is a technique widely applied in digital image processing. Based on Matlab software, the paper applies the two approaches of histogram equalization and histogram specification to darker images, using partial equalization and histogram mapping to transform the original histograms and thereby enhance the image information. The results show that both techniques can significantly improve image quality and enhance image features.
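
    The paper works in Matlab; an equivalent NumPy sketch of the histogram equalization step (one of the two techniques discussed) is shown below on a synthetic dark image.

```python
# Histogram equalization via the normalized cumulative histogram (NumPy analogue).
import numpy as np

rng = np.random.default_rng(7)
img = rng.normal(60, 15, (128, 128)).clip(0, 255).astype(np.uint8)  # dark image

hist = np.bincount(img.ravel(), minlength=256)
cdf = np.cumsum(hist).astype(float)
cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())       # normalized CDF
lut = np.round(255 * cdf).astype(np.uint8)              # equalization mapping
img_eq = lut[img]

print("original range:", img.min(), "-", img.max())
print("equalized range:", img_eq.min(), "-", img_eq.max())
```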

  9. An Image Registration Based Technique for Noninvasive Vascular Elastography

    OpenAIRE

    Valizadeh, Sina; Makkiabadi, Bahador; Mirbagheri, Alireza; Soozande, Mehdi; Manwar, Rayyan; Mozaffarzadeh, Moein; Nasiriavanaki, Mohammadreza

    2018-01-01

    Non-invasive vascular elastography is an emerging technique in vascular tissue imaging. Over the past decades, several techniques have been suggested to estimate tissue elasticity by measuring the displacement of the carotid vessel wall. Cross-correlation-based methods are the most prevalent approaches to measuring the strain exerted on the vessel wall by the blood pressure. In the case of a low pressure, the displacement is too small to be apparent in ultrasound imaging, especially in th...

  10. MPPT Technique Based on Current and Temperature Measurements

    Directory of Open Access Journals (Sweden)

    Eduardo Moreira Vicente

    2015-01-01

    Full Text Available This paper presents a new maximum power point tracking (MPPT) method based on the measurement of temperature and short-circuit current, in a simple and efficient approach. These measurements, which can precisely define the maximum power point (MPP), have not been used together in other existing techniques. The temperature is measured with a low-cost sensor and the solar irradiance is estimated through the relationship between the measured short-circuit current and its reference. Fast tracking speed and stable steady-state operation are advantages of this technique, which presents higher performance when compared to other well-known techniques.
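
    A rough sketch of a measurement-based MPP estimate of this kind is given below: the MPP voltage follows a temperature-corrected datasheet value and the MPP current a fixed fraction of the measured short-circuit current. All panel coefficients are illustrative assumptions, not the paper's values.

```python
# MPP setpoint estimate from cell temperature and short-circuit current.
V_MP_STC = 30.0      # MPP voltage at standard test conditions [V] (assumed)
KV = -0.12           # voltage temperature coefficient [V/degC]   (assumed)
I_SC_STC = 8.0       # short-circuit current at STC [A]           (assumed)
K_MP = 0.92          # typical ratio I_mp / I_sc                  (assumed)

def mpp_setpoint(temp_c, i_sc_measured):
    """Estimate (V_mp, I_mp) from cell temperature and short-circuit current."""
    v_mp = V_MP_STC + KV * (temp_c - 25.0)        # temperature correction
    irradiance_ratio = i_sc_measured / I_SC_STC   # I_sc scales with irradiance
    i_mp = K_MP * i_sc_measured
    return v_mp, i_mp, irradiance_ratio

v, i, g = mpp_setpoint(temp_c=45.0, i_sc_measured=6.4)
print(f"V_mp = {v:.1f} V, I_mp = {i:.2f} A, est. irradiance = {g:.0%} of STC")
```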

  11. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
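
    The testing idea can be sketched end-to-end with a deliberately simple one-parameter model: a known "true" parameter generates surrogate utility data, a calibration routine tunes the model to those data, and the recovered parameter and predicted retrofit savings are compared with the truth. Everything below is a toy stand-in for the building simulation described in the paper.

```python
# Testing a calibration technique against surrogate (synthetic) utility data.
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical monthly heating degree-days.
hdd = np.array([800, 650, 500, 300, 120, 30, 10, 20, 90, 300, 550, 750.0])

def energy_use(ua):
    """Monthly heating energy [kWh] for an overall loss coefficient ua [W/K]."""
    return 0.024 * ua * hdd

UA_TRUE = 250.0
bills = energy_use(UA_TRUE)                          # surrogate utility data

# "calibration": tune ua to reproduce the surrogate bills
fit = minimize_scalar(lambda ua: np.sum((energy_use(ua) - bills) ** 2),
                      bounds=(50, 600), method="bounded")
ua_cal = fit.x

# figure of merit: predicted savings of a retrofit that cuts UA by 20 %
true_savings = energy_use(UA_TRUE).sum() - energy_use(0.8 * UA_TRUE).sum()
cal_savings = energy_use(ua_cal).sum() - energy_use(0.8 * ua_cal).sum()
print(f"recovered UA = {ua_cal:.1f} (true {UA_TRUE}), "
      f"savings error = {abs(cal_savings - true_savings):.2f} kWh")
```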

  12. Numerical model updating technique for structures using firefly algorithm

    Science.gov (United States)

    Sai Kubair, K.; Mohan, S. C.

    2018-03-01

    Numerical model updating is a technique used for updating existing numerical models of structures in civil, mechanical, automotive, marine, aerospace engineering, etc. The basic concept behind this technique is updating the numerical models to closely match experimental data obtained from real or prototype test structures. The present work involves the development of a numerical model using MATLAB as a computational tool, with mathematical equations that define the experimental model. The firefly algorithm is used as an optimization tool in this study. In this updating process a response parameter of the structure has to be chosen, which helps to correlate the developed numerical model with the experimental results obtained. The variables for the updating can be either material or geometrical properties of the model, or both. In this study, to verify the proposed technique, a cantilever beam is analyzed for its tip deflection and a space frame is analyzed for its natural frequencies. Both models are updated with their respective response values obtained from experimental results. The numerical results after updating show that a close relationship can be established between the experimental and the numerical models.
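
    A compact sketch of how a firefly-style search could drive such an updating loop, here minimizing the squared difference between a simple cantilever tip-deflection model and a target "experimental" value (the beam data and algorithm constants are illustrative assumptions, not the paper's example):

```python
import numpy as np

def tip_deflection(E, L=1.0, I=8.3e-9, P=100.0):
    """Cantilever tip deflection: delta = P*L^3 / (3*E*I)."""
    return P * L**3 / (3.0 * E * I)

def objective(E, delta_exp):
    return (tip_deflection(E) - delta_exp) ** 2

def firefly_update(delta_exp, n=20, iters=100, beta0=1.0, gamma=1.0, alpha=0.05):
    """Very small firefly algorithm over one design variable (Young's modulus)."""
    rng = np.random.default_rng(0)
    x = rng.uniform(50e9, 250e9, n)            # candidate E values [Pa]
    for _ in range(iters):
        f = np.array([objective(xi, delta_exp) for xi in x])
        for i in range(n):
            for j in range(n):
                if f[j] < f[i]:                # move firefly i toward the brighter firefly j
                    r = abs(x[i] - x[j]) / 200e9
                    beta = beta0 * np.exp(-gamma * r**2)
                    x[i] += beta * (x[j] - x[i]) + alpha * 200e9 * (rng.random() - 0.5)
        x = np.clip(x, 50e9, 250e9)            # keep candidates within bounds
    return x[np.argmin([objective(xi, delta_exp) for xi in x])]

delta_exp = tip_deflection(E=200e9) * 1.02     # synthetic "experimental" deflection
print(firefly_update(delta_exp) / 1e9, "GPa")
```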

  13. Camera-based calibration techniques for seamless multiprojector displays.

    Science.gov (United States)

    Brown, Michael; Majumder, Aditi; Yang, Ruigang

    2005-01-01

    Multiprojector, large-scale displays are used in scientific visualization, virtual reality, and other visually intensive applications. In recent years, a number of camera-based computer vision techniques have been proposed to register the geometry and color of tiled projection-based displays. These automated techniques use cameras to "calibrate" display geometry and photometry, computing per-projector corrective warps and intensity corrections that are necessary to produce seamless imagery across projector mosaics. These techniques replace the traditional labor-intensive manual alignment and maintenance steps, making such displays cost-effective, flexible, and accessible. In this paper, we present a survey of different camera-based geometric and photometric registration techniques reported in the literature to date. We discuss several techniques that have been proposed and demonstrated, each addressing particular display configurations and modes of operation. We overview each of these approaches and discuss their advantages and disadvantages. We examine techniques that address registration on both planar (video walls) and arbitrary display surfaces and photometric correction for different kinds of display surfaces. We conclude with a discussion of the remaining challenges and research opportunities for multiprojector displays.

  14. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in both the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: The first Ω(lg n) query time lower bound for linear space static data structures... bound of t_u·t_q = Ω(lg^(d−1) n). For ball range searching, we get a lower bound of t_u·t_q = Ω(n^(1−1/d)). The highest previous lower bound proved in the group model does not exceed Ω((lg n / lg lg n)^2) on the maximum of t_u and t_q. Finally, we present a new technique for proving lower bounds...

  15. Model Based Definition

    Science.gov (United States)

    Rowe, Sidney E.

    2010-01-01

    In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS) based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings for auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high value examples are reviewed.

  16. Toner and paper-based fabrication techniques for microfluidic applications.

    Science.gov (United States)

    Coltro, Wendell Karlos Tomazelli; de Jesus, Dosil Pereira; da Silva, José Alberto Fracassi; do Lago, Claudimir Lucio; Carrilho, Emanuel

    2010-08-01

    The interest in low-cost microfluidic platforms as well as emerging microfabrication techniques has increased considerably over the last few years. Toner- and paper-based techniques have appeared as two of the most promising platforms for the production of disposable devices for on-chip applications. This review focuses on recent advances in the fabrication techniques and in the analytical/bioanalytical applications of toner and paper-based devices. The discussion is divided into two parts dealing with (i) toner and (ii) paper devices. Examples of miniaturized devices fabricated by using direct-printing or toner transfer masking in polyester-toner, glass, PDMS as well as conductive platforms such as recordable compact disks and printed circuit boards are presented. The construction and the use of paper-based devices for off-site diagnosis and bioassays are also described to cover this emerging platform for low-cost diagnostics.

  17. Modeling with data tools and techniques for scientific computing

    CERN Document Server

    Klemens, Ben

    2009-01-01

    Modeling with Data fully explains how to execute computationally intensive analyses on very large data sets, showing readers how to determine the best methods for solving a variety of different problems, how to create and debug statistical models, and how to run an analysis and evaluate the results. Ben Klemens introduces a set of open and unlimited tools, and uses them to demonstrate data management, analysis, and simulation techniques essential for dealing with large data sets and computationally intensive procedures. He then demonstrates how to easily apply these tools to the many threads of statistical technique, including classical, Bayesian, maximum likelihood, and Monte Carlo methods

  18. PC Based Linear Variable Differential Displacement Measurement Uses Optical Technique

    Directory of Open Access Journals (Sweden)

    Tapan Kumar MAITI

    2007-07-01

    Full Text Available PC based linear variable differential displacement (LVDD) measurement with an optical approach has been presented. The technique is a good blending of both hardware and software and is basically an alternative method to the linear variable differential transformer (LVDT). A Visual Basic (VB) program is used for this PC based measurement. Here the voltage output and the displacement of the reflector can be studied and stored continuously. Theoretical predictions are supported by experimental results. This technique can be used for the measurement of some non-electrical parameters, e.g. force, torque and liquid level.

  19. EVE: Explainable Vector Based Embedding Technique Using Wikipedia

    OpenAIRE

    Qureshi, M. Atif; Greene, Derek

    2017-01-01

    We present an unsupervised explainable word embedding technique, called EVE, which is built upon the structure of Wikipedia. The proposed model defines the dimensions of a semantic vector representing a word using human-readable labels, thereby making it readily interpretable. Specifically, each vector is constructed using the Wikipedia category graph structure together with the Wikipedia article link structure. To test the effectiveness of the proposed word embedding model, we consider its usefulne...

  20. Modern modelling techniques are data hungry: a simulation study for predicting dichotomous endpoints.

    Science.gov (United States)

    van der Ploeg, Tjeerd; Austin, Peter C; Steyerberg, Ewout W

    2014-12-22

    Modern modelling techniques may potentially provide more accurate predictions of binary outcomes than classical techniques. We aimed to study the predictive performance of different modelling techniques in relation to the effective sample size ("data hungriness"). We performed simulation studies based on three clinical cohorts: 1282 patients with head and neck cancer (with 46.9% 5 year survival), 1731 patients with traumatic brain injury (22.3% 6 month mortality) and 3181 patients with minor head injury (7.6% with CT scan abnormalities). We compared three relatively modern modelling techniques: support vector machines (SVM), neural nets (NN), and random forests (RF) and two classical techniques: logistic regression (LR) and classification and regression trees (CART). We created three large artificial databases with 20 fold, 10 fold and 6 fold replication of subjects, where we generated dichotomous outcomes according to different underlying models. We applied each modelling technique to increasingly larger development parts (100 repetitions). The area under the ROC-curve (AUC) indicated the performance of each model in the development part and in an independent validation part. Data hungriness was defined by plateauing of AUC and small optimism (difference between the mean apparent AUC and the mean validated AUC). The RF, SVM and NN models showed instability and a high optimism even with >200 events per variable. Modern modelling techniques such as SVM, NN and RF may need over 10 times as many events per variable to achieve a stable AUC and a small optimism as classical modelling techniques such as LR. This implies that such modern techniques should only be used in medical prediction problems if very large data sets are available.
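
    A toy version of such a "data hungriness" experiment (scikit-learn is assumed to be available; the synthetic data-generating model and sample sizes are illustrative, not those of the cited cohorts), measuring apparent and validated AUC as the development sample grows:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=20000, n_features=10, weights=[0.8], random_state=1)
X_dev, y_dev, X_val, y_val = X[:10000], y[:10000], X[10000:], y[10000:]

for n in [200, 500, 1000, 5000, 10000]:
    for name, model in [("LR", LogisticRegression(max_iter=1000)),
                        ("RF", RandomForestClassifier(n_estimators=100, random_state=1))]:
        model.fit(X_dev[:n], y_dev[:n])
        auc_app = roc_auc_score(y_dev[:n], model.predict_proba(X_dev[:n])[:, 1])
        auc_val = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        optimism = auc_app - auc_val          # small optimism indicates a stable fit
        print(f"n={n:5d} {name}: apparent={auc_app:.3f} validated={auc_val:.3f} optimism={optimism:.3f}")
```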

  1. Structural reliability analysis based on the cokriging technique

    International Nuclear Information System (INIS)

    Zhao Wei; Wang Wei; Dai Hongzhe; Xue Guofeng

    2010-01-01

    Approximation methods are widely used in structural reliability analysis because they are simple to create and provide explicit functional relationships between the responses and variables instead of the implicit limit state function. Recently, the kriging method, which is a semi-parametric interpolation technique that can be used for deterministic optimization and structural reliability, has gained popularity. However, to fully exploit the kriging method, especially in high-dimensional problems, a large number of sample points should be generated to fill the design space, and this can be very expensive and even impractical in practical engineering analysis. Therefore, in this paper a new method, the cokriging method, which is an extension of kriging, is proposed to calculate the structural reliability. Cokriging approximation incorporates secondary information such as the values of the gradients of the function being approximated. This paper explores the use of the cokriging method for structural reliability problems by comparing it with the kriging method on some numerical examples. The results indicate that the cokriging procedure described in this work can generate approximation models that improve on the accuracy and efficiency of structural reliability analysis and is a viable alternative to kriging.
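
    As a rough illustration of the surrogate-based reliability idea (ordinary kriging without the gradient information that cokriging adds; scikit-learn's Gaussian process regressor stands in for a kriging code, and the limit state function is a made-up example rather than one from the paper):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def limit_state(x):
    """Illustrative limit state g(x); failure corresponds to g(x) < 0."""
    return 3.0 - x[:, 0] - 0.5 * x[:, 1] ** 2

rng = np.random.default_rng(0)

# Small design of experiments to train the surrogate of the (nominally expensive) model
X_train = rng.normal(0.0, 1.0, size=(40, 2))
y_train = limit_state(X_train)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True).fit(X_train, y_train)

# Cheap Monte Carlo on the surrogate instead of the expensive model
X_mc = rng.normal(0.0, 1.0, size=(100000, 2))
pf_surrogate = np.mean(gp.predict(X_mc) < 0.0)
pf_direct = np.mean(limit_state(X_mc) < 0.0)
print(f"Pf (surrogate) = {pf_surrogate:.4f}, Pf (direct MC) = {pf_direct:.4f}")
```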

  2. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand an increase in both the productivity and the quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose a relation definition markup language (RDML) for defining the relationships between models.

  3. Plants status monitor: Modelling techniques and inherent benefits

    International Nuclear Information System (INIS)

    Breeding, R.J.; Lainoff, S.M.; Rees, D.C.; Prather, W.A.; Fickiessen, K.O.E.

    1987-01-01

    The Plant Status Monitor (PSM) is designed to provide plant personnel with information on the operational status of the plant and compliance with the plant technical specifications. The PSM software evaluates system models using a 'distributed processing' technique in which detailed models of individual systems are processed rather than a single, plant-level model being evaluated. In addition, development of the system models for PSM provides inherent benefits to the plant by forcing detailed reviews of the technical specifications, system design and operating procedures, and plant documentation. (orig.)

  4. Sensitivity analysis technique for application to deterministic models

    International Nuclear Information System (INIS)

    Ishigami, T.; Cazzoli, E.; Khatib-Rahbar, M.; Unwin, S.D.

    1987-01-01

    The characterization of severe accident source terms for light water reactors should include consideration of uncertainties. An important element of any uncertainty analysis is an evaluation of the sensitivity of the output probability distributions reflecting source term uncertainties to assumptions regarding the input probability distributions. Historically, response surface methods (RSMs) were developed to replace physical models with simplified models, using, for example, regression techniques, for extensive calculations. The purpose of this paper is to present a new method for sensitivity analysis that does not utilize RSM, but instead relies directly on the results obtained from the original computer code calculations. The merits of this approach are demonstrated by application of the proposed method to the suppression pool aerosol removal code (SPARC), and the results are compared with those obtained by sensitivity analysis with (a) the code itself, (b) a regression model, and (c) Iman's method

  5. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers four-stage cycle productivity, and the productivity is assumed to be a linear function of fifty four improvement techniques. The proposed model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through a change in equipment, and the model can be easily applied to both manufacturing and service industries.

  6. Modeling and design techniques for RF power amplifiers

    CERN Document Server

    Raghavan, Arvind; Laskar, Joy

    2008-01-01

    The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, and design more accurate compact device models, with faster extraction routines, to create cost effective and reliable circuits.

  7. Modeling and Simulation Techniques for Large-Scale Communications Modeling

    National Research Council Canada - National Science Library

    Webb, Steve

    1997-01-01

    .... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.

  8. Effects of Peer Modelling Technique in Reducing Substance Abuse ...

    African Journals Online (AJOL)

    The study investigated the effects of peer modelling techniques in reducing substance abuse among undergraduates in Nigeria. The participants were one hundred and twenty (120) undergraduate students in 100 and 400 levels respectively. There are two groups: one treatment group and one control group.

  9. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    Full Text Available When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of the cognitive levels most often represent some modification of the original Bloom’s taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account the nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive levels validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.

  10. Advanced alloy design technique: High temperature cobalt base superalloy

    Science.gov (United States)

    Dreshfield, R. L.; Freche, J. C.; Sandrock, G. D.

    1972-01-01

    Advanced alloy design technique was developed for treating alloys that will have extended life in service at high temperature and intermediate temperatures. Process stabilizes microstructure of the alloy by designing it so that compound identified with embrittlement is eliminated or minimized. Design process is being used to develop both nickel and cobalt-base superalloys.

  11. A GIS-Based Optimization Technique for Spatial Location of ...

    African Journals Online (AJOL)

    A geographic information system (GIS)-based package, TransCAD v. 5.0, was used to determine the optimal locations of one to ten waste bins. This optimization technique requires less computational time and the output of ten computer runs showed that partial service coverage ...

  12. GIS-based bivariate statistical techniques for groundwater potential ...

    Indian Academy of Sciences (India)

    Groundwater potential analysis prepares better comprehension of hydrological settings of different regions. This study shows the potency of two GIS-based data driven bivariate techniques namely statistical index (SI) and Dempster–Shafer theory (DST) to analyze groundwater potential in Broujerd region of Iran.

  13. Satellite-based technique for nowcasting of thunderstorms over ...

    Indian Academy of Sciences (India)

    Suman Goyal

    2017-08-31

    120 min forecast and longitudinal error varies from 34.1 to 90.2 km for 30–120 min forecast. Hence there is longitudinal bias in the forecast track of the MCS. 4. Conclusions. The satellite based nowcast technique developed.

  14. Study and analysis of wavelet based image compression techniques ...

    African Journals Online (AJOL)

    This paper presents a comprehensive study with performance analysis of recent wavelet transform based image compression techniques. Image compression is one of the necessities for such communication. The goals of image compression are to minimize the storage requirement and the communication bandwidth.

  15. A novel image inpainting technique based on median diffusion

    Indian Academy of Sciences (India)

    The concept of digital inpainting was introduced by Bertalmio et al. (2000). The proposed algorithm addresses inpainting of images with a homogeneous or heterogeneous background. The technique is based on diffusing the median value of pixels exterior to the inpainting area into the area to be inpainted. This is a ...
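
    A minimal NumPy sketch of the diffusion idea described above: pixels just outside the inpainting mask feed median values inward until the hole is filled (this is an illustrative reading of the method, not the authors' implementation):

```python
import numpy as np

def median_diffuse_inpaint(image, mask, max_iter=500):
    """Fill masked pixels with the median of their already-known 8-neighbours."""
    img = image.astype(float).copy()
    unknown = mask.astype(bool).copy()
    for _ in range(max_iter):
        if not unknown.any():
            break
        next_unknown = unknown.copy()
        ys, xs = np.nonzero(unknown)
        for y, x in zip(ys, xs):
            y0, y1 = max(y - 1, 0), min(y + 2, img.shape[0])
            x0, x1 = max(x - 1, 0), min(x + 2, img.shape[1])
            patch = img[y0:y1, x0:x1]
            known = ~unknown[y0:y1, x0:x1]
            if known.any():                       # diffuse the median of known neighbours inward
                img[y, x] = np.median(patch[known])
                next_unknown[y, x] = False
        unknown = next_unknown
    return img

image = np.tile(np.arange(32), (32, 1)).astype(float)   # simple gradient background
mask = np.zeros(image.shape, dtype=bool)
mask[12:20, 12:20] = True                                # square hole to inpaint
print(median_diffuse_inpaint(image, mask)[15, 15])
```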

  16. Modal Analysis Based on the Random Decrement Technique

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    1998-01-01

    This article describes the work carried out within the project: Modal Analysis Based on the Random Decrement Technique - Application to Civil Engineering Structures. The project is part of the research programme: Dynamics of Structures sponsored by the Danish Technical Research Council. The planned...

  17. Generation of 3-D finite element models of restored human teeth using micro-CT techniques.

    NARCIS (Netherlands)

    Verdonschot, N.J.J.; Fennis, W.M.M.; Kuys, R.H.; Stolk, J.; Kreulen, C.M.; Creugers, N.H.J.

    2001-01-01

    PURPOSE: This article describes the development of a three-dimensional finite element model of a premolar based on a microscale computed tomographic (CT) data-acquisition technique. The development of the model is part of a project studying the optimal design and geometry of adhesive tooth-colored

  18. Space geodetic techniques for global modeling of ionospheric peak parameters

    Science.gov (United States)

    Alizadeh, M. Mahdi; Schuh, Harald; Schmidt, Michael

    The rapid development of new technological systems for navigation, telecommunication, and space missions which transmit signals through the Earth’s upper atmosphere - the ionosphere - makes the necessity of precise, reliable and near real-time models of the ionospheric parameters more crucial. In the last decades space geodetic techniques have turned into a capable tool for measuring ionospheric parameters in terms of Total Electron Content (TEC) or the electron density. Among these systems, the current space geodetic techniques, such as Global Navigation Satellite Systems (GNSS), Low Earth Orbiting (LEO) satellites, satellite altimetry missions, and others have found several applications in a broad range of commercial and scientific fields. This paper aims at the development of a three-dimensional integrated model of the ionosphere, by using various space geodetic techniques and applying a combination procedure for computation of the global model of electron density. In order to model ionosphere in 3D, electron density is represented as a function of maximum electron density (NmF2), and its corresponding height (hmF2). NmF2 and hmF2 are then modeled in longitude, latitude, and height using two sets of spherical harmonic expansions with degree and order 15. To perform the estimation, GNSS input data are simulated in such a way that the true position of the satellites are detected and used, but the STEC values are obtained through a simulation procedure, using the IGS VTEC maps. After simulating the input data, the a priori values required for the estimation procedure are calculated using the IRI-2012 model and also by applying the ray-tracing technique. The estimated results are compared with F2-peak parameters derived from the IRI model to assess the least-square estimation procedure and moreover, to validate the developed maps, the results are compared with the raw F2-peak parameters derived from the Formosat-3/Cosmic data.
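
    In outline, the parameterization described above expands each F2-peak parameter in surface spherical harmonics; a hedged sketch of such an expansion (degree and order 15, as stated, with coefficient symbols chosen here purely for illustration) is:

```latex
% Spherical harmonic representation of an F2-peak parameter (NmF2 or hmF2)
\[
  P(\varphi, \lambda, t) \;=\; \sum_{n=0}^{15} \sum_{m=0}^{n}
  \tilde{P}_{nm}(\sin\varphi)\,
  \bigl[ a_{nm}(t)\cos(m\lambda) + b_{nm}(t)\sin(m\lambda) \bigr]
\]
% where \varphi and \lambda are latitude and longitude, \tilde{P}_{nm} are normalized
% associated Legendre functions, and a_{nm}, b_{nm} are the coefficients estimated
% by least squares from the (simulated) GNSS observations.
```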

  19. Optical supervised filtering technique based on Hopfield neural network

    Science.gov (United States)

    Bal, Abdullah

    2004-11-01

    The Hopfield neural network is commonly preferred for optimization problems. In image segmentation, conventional Hopfield neural networks (HNN) are formulated as a cost-function-minimization problem to perform gray level thresholding on the image histogram or the pixels' gray levels arranged in a one-dimensional array [R. Sammouda, N. Niki, H. Nishitani, Pattern Rec. 30 (1997) 921-927; K.S. Cheng, J.S. Lin, C.W. Mao, IEEE Trans. Med. Imag. 15 (1996) 560-567; C. Chang, P. Chung, Image and Vision comp. 19 (2001) 669-678]. In this paper, a new high speed supervised filtering technique is proposed for image feature extraction and enhancement problems by modifying the conventional HNN. The essential improvement in this technique is the use of a 2D convolution operation instead of weight-matrix multiplication. Thereby, a new neural-network-based filtering technique has been obtained that requires just a 3 × 3 filter mask matrix instead of a large weight coefficient matrix. Optical implementation of the proposed filtering technique is executed easily using the joint transform correlator. The requirement of non-negative data for optical implementation is met by a bias technique that converts the bipolar data to non-negative data. Simulation results of the proposed optical supervised filtering technique are reported for various feature extraction problems such as edge detection, corner detection, horizontal and vertical line extraction, and fingerprint enhancement.
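
    The core operation the modified network relies on, replacing weight-matrix multiplication with a 3 × 3 convolution, can be sketched as follows (SciPy is assumed to be available; the mask shown is a generic edge-detection kernel, not the one used in the paper):

```python
import numpy as np
from scipy.signal import convolve2d

# 3x3 filter mask playing the role of the reduced weight matrix
edge_mask = np.array([[-1, -1, -1],
                      [-1,  8, -1],
                      [-1, -1, -1]], dtype=float)

def supervised_filter(image, mask):
    """Apply the small convolution mask and rescale to a non-negative range,
    mirroring the bias step needed for an optical (joint transform correlator) setup."""
    response = convolve2d(image, mask, mode="same", boundary="symm")
    response -= response.min()                      # bias: make the data non-negative
    return response / (response.max() + 1e-12)

image = np.zeros((32, 32)); image[8:24, 8:24] = 1.0   # simple square target
edges = supervised_filter(image, edge_mask)
print(edges.min(), edges.max())
```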

  20. Video Multiple Watermarking Technique Based on Image Interlacing Using DWT

    Directory of Open Access Journals (Sweden)

    Mohamed M. Ibrahim

    2014-01-01

    Full Text Available Digital watermarking is one of the important techniques to secure digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling the required memory capacity and communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.

  1. Application of integrated modeling technique for data services ...

    African Journals Online (AJOL)

    This paper, therefore, describes the application of the integrated simulation technique for deriving the optimum resources required for data services in an asynchronous transfer mode (ATM) based private wide area network (WAN) to guarantee specific QoS requirement. The simulation tool drastically cuts the simulation ...

  2. Simulation technique for hard-disk models in two dimensions

    DEFF Research Database (Denmark)

    Fraser, Diane P.; Zuckermann, Martin J.; Mouritsen, Ole G.

    1990-01-01

    A method is presented for studying hard-disk systems by Monte Carlo computer-simulation techniques within the NpT ensemble. The method is based on the Voronoi tessellation, which is dynamically maintained during the simulation. By an analysis of the Voronoi statistics, a quantity is identified...
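
    A bare-bones Metropolis move for hard disks (without the Voronoi bookkeeping or the pressure-coupled volume moves of the NpT method described above; purely an illustrative sketch):

```python
import numpy as np

def mc_hard_disks(n=50, box=10.0, sigma=1.0, steps=20000, dmax=0.3, seed=0):
    """Metropolis moves for hard disks: any trial move that creates an overlap is rejected."""
    rng = np.random.default_rng(seed)
    # Start from a loose square lattice so there are no initial overlaps
    side = int(np.ceil(np.sqrt(n)))
    pos = np.array([[i, j] for i in range(side) for j in range(side)][:n], dtype=float)
    pos *= box / side
    accepted = 0
    for _ in range(steps):
        i = rng.integers(n)
        trial = (pos[i] + rng.uniform(-dmax, dmax, 2)) % box    # periodic box
        d = pos - trial
        d -= box * np.round(d / box)                            # minimum-image convention
        r2 = np.sum(d * d, axis=1)
        r2[i] = np.inf                                          # ignore the moved disk itself
        if np.all(r2 >= sigma ** 2):                            # no overlap -> accept
            pos[i] = trial
            accepted += 1
    return pos, accepted / steps

_, acc = mc_hard_disks()
print(f"acceptance ratio: {acc:.2f}")
```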

  3. Non-destructive techniques based on eddy current testing.

    Science.gov (United States)

    García-Martín, Javier; Gómez-Gil, Jaime; Vázquez-Sánchez, Ernesto

    2011-01-01

    Non-destructive techniques are used widely in the metal industry in order to control the quality of materials. Eddy current testing is one of the most extensively used non-destructive techniques for inspecting electrically conductive materials at very high speeds that does not require any contact between the test piece and the sensor. This paper includes an overview of the fundamentals and main variables of eddy current testing. It also describes the state-of-the-art sensors and modern techniques such as multi-frequency and pulsed systems. Recent advances in complex models towards solving crack-sensor interaction, developments in instrumentation due to advances in electronic devices, and the evolution of data processing suggest that eddy current testing systems will be increasingly used in the future.

  4. Non-Destructive Techniques Based on Eddy Current Testing

    Science.gov (United States)

    García-Martín, Javier; Gómez-Gil, Jaime; Vázquez-Sánchez, Ernesto

    2011-01-01

    Non-destructive techniques are used widely in the metal industry in order to control the quality of materials. Eddy current testing is one of the most extensively used non-destructive techniques for inspecting electrically conductive materials at very high speeds that does not require any contact between the test piece and the sensor. This paper includes an overview of the fundamentals and main variables of eddy current testing. It also describes the state-of-the-art sensors and modern techniques such as multi-frequency and pulsed systems. Recent advances in complex models towards solving crack-sensor interaction, developments in instrumentation due to advances in electronic devices, and the evolution of data processing suggest that eddy current testing systems will be increasingly used in the future. PMID:22163754

  5. Knowledge based systems advanced concepts, techniques and applications

    CERN Document Server

    1997-01-01

    The field of knowledge-based systems (KBS) has expanded enormously during the last years, and many important techniques and tools are currently available. Applications of KBS range from medicine to engineering and aerospace.This book provides a selected set of state-of-the-art contributions that present advanced techniques, tools and applications. These contributions have been prepared by a group of eminent researchers and professionals in the field.The theoretical topics covered include: knowledge acquisition, machine learning, genetic algorithms, knowledge management and processing under unc

  6. Electromagnetism based atmospheric ice sensing technique - A conceptual review

    Directory of Open Access Journals (Sweden)

    U Mughal

    2016-09-01

    Full Text Available Electromagnetic and vibrational properties of ice can be used to measure certain parameters such as ice thickness, type and icing rate. In this paper we present a review of the dielectric based measurement techniques for matter and the dielectric/spectroscopic properties of ice. Atmospheric Ice is a complex material with a variable dielectric constant, but precise calculation of this constant may form the basis for measurement of its other properties such as thickness and strength using some electromagnetic methods. Using time domain or frequency domain spectroscopic techniques, by measuring both the reflection and transmission characteristics of atmospheric ice in a particular frequency range, the desired parameters can be determined.

  7. An Observed Voting System Based On Biometric Technique

    Directory of Open Access Journals (Sweden)

    B. Devikiruba

    2015-08-01

    Full Text Available ABSTRACT This article describes a computational framework which can run on almost every computer connected to an IP based network to study biometric techniques. The paper discusses a system in which the protection of confidential information puts strong security demands on identification. Biometry provides a user-friendly method for this identification and is becoming a competitor to current identification mechanisms. The experimentation section focuses on biometric verification, specifically based on fingerprints. This article should be read as a warning to those thinking of using methods of identification without first examining the technical opportunities for compromising such mechanisms and the associated legal consequences. The development is based on the Java language, which easily builds on existing software packages and is useful for testing new control techniques.

  8. A technique based on droplet evaporation to recognize alcoholic drinks

    Science.gov (United States)

    González-Gutiérrez, Jorge; Pérez-Isidoro, Rosendo; Ruiz-Suárez, J. C.

    2017-07-01

    Chromatography is, at present, the most used technique to determine the purity of alcoholic drinks. This involves a careful separation of the components of the liquid elements. However, since this technique requires sophisticated instrumentation, there are alternative techniques such as conductivity measurements and UV-Vis and infrared spectrometries. We report here a method based on salt-induced crystallization patterns formed during the evaporation of alcoholic drops. We found that droplets of different samples form different structures upon drying, which we characterize by their radial density profiles. We prove that using the dried deposit of a spirit as a control sample, our method allows us to differentiate between pure and adulterated drinks. As a proof of concept, we study tequila.

  9. Under-Frequency Load Shedding Technique Considering Event-Based for an Islanded Distribution Network

    Directory of Open Access Journals (Sweden)

    Hasmaini Mohamad

    2016-06-01

    Full Text Available One of the biggest challenges for an islanding operation is to sustain frequency stability. A large power imbalance following islanding would cause under-frequency, hence an appropriate control is required to shed a certain amount of load. The main objective of this research is to develop an adaptive under-frequency load shedding (UFLS) technique for an islanding system. The technique is designed on an event basis, covering the moment the system is islanded and the tripping of any DG unit during islanding operation. A disturbance magnitude is calculated to determine the amount of load to be shed. The technique is modeled by using the PSCAD simulation tool. Simulation studies on a distribution network with mini hydro generation are carried out to evaluate the UFLS model. They are performed under different load conditions: peak and base load. Results show that the load shedding technique has successfully shed the required amount of load and stabilized the system frequency.
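
    A simplified sketch of the event-based idea: estimate the power deficit from the initial frequency decay via the swing equation and shed just enough of a ranked feeder list to cover it (the inertia constant, base power and feeder table below are illustrative assumptions, not values from the paper):

```python
F_NOM = 50.0     # nominal frequency [Hz]
H_SYS = 4.0      # assumed equivalent system inertia constant [s]
S_BASE = 10.0    # assumed system base power [MVA]

def disturbance_magnitude(df_dt):
    """Power deficit from the swing equation, dP = (2*H/f0)*df/dt in p.u., converted to MW."""
    return 2.0 * H_SYS / F_NOM * abs(df_dt) * S_BASE

def select_loads_to_shed(deficit_mw, ranked_loads):
    """Shed feeders from a ranked list until the estimated deficit is covered."""
    shed, total = [], 0.0
    for name, mw in ranked_loads:        # feeders assumed pre-sorted by shedding priority
        if total >= deficit_mw:
            break
        shed.append(name)
        total += mw
    return shed, total

ranked_loads = [("feeder_D", 0.5), ("feeder_C", 0.8), ("feeder_B", 1.5), ("feeder_A", 2.0)]  # MW
deficit = disturbance_magnitude(df_dt=-0.6)    # measured rate of frequency decline [Hz/s]
print(f"deficit = {deficit:.2f} MW, shed: {select_loads_to_shed(deficit, ranked_loads)}")
```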

  10. Characterization techniques for graphene-based materials in catalysis

    Directory of Open Access Journals (Sweden)

    Maocong Hu

    2017-06-01

    Full Text Available Graphene-based materials have been studied in a wide range of applications including catalysis due to the outstanding electronic, thermal, and mechanical properties. The unprecedented features of graphene-based catalysts, which are believed to be responsible for their superior performance, have been characterized by many techniques. In this article, we comprehensively summarized the characterization methods covering bulk and surface structure analysis, chemisorption ability determination, and reaction mechanism investigation. We reviewed the advantages/disadvantages of different techniques including Raman spectroscopy, X-ray photoelectron spectroscopy (XPS, Fourier transform infrared spectroscopy (FTIR and Diffuse Reflectance Fourier Transform Infrared Spectroscopy (DRIFTS, X-Ray diffraction (XRD, X-ray absorption near edge structure (XANES and X-ray absorption fine structure (XAFS, atomic force microscopy (AFM, scanning electron microscopy (SEM, transmission electron microscopy (TEM, high-resolution transmission electron microscopy (HRTEM, ultraviolet-visible spectroscopy (UV-vis, X-ray fluorescence (XRF, inductively coupled plasma mass spectrometry (ICP, thermogravimetric analysis (TGA, Brunauer–Emmett–Teller (BET, and scanning tunneling microscopy (STM. The application of temperature-programmed reduction (TPR, CO chemisorption, and NH3/CO2-temperature-programmed desorption (TPD was also briefly introduced. Finally, we discussed the challenges and provided possible suggestions on choosing characterization techniques. This review provides key information to catalysis community to adopt suitable characterization techniques for their research.

  11. Fractal Image Compression Based on High Entropy Values Technique

    Directory of Open Access Journals (Sweden)

    Douaa Younis Abbaas

    2018-04-01

    Full Text Available Many attempts have been made to improve the encoding stage of FIC because it is time consuming. These attempts work by reducing the size of the search pool for range-domain matching, but most of them lead to poor quality or a lower compression ratio of the reconstructed image. This paper aims to present a method to improve the performance of the full search algorithm by combining FIC (lossy compression) with another lossless technique (in this case entropy coding). The entropy technique reduces the size of the domain pool (i.e., the number of domain blocks) based on the entropy value of each range block and domain block; the results of the full search algorithm and the proposed entropy-based algorithm are then compared to see which gives the best results, such as reduced encoding time with acceptable values of both compression quality parameters, C.R (Compression Ratio) and PSNR (Image Quality). The experimental results of the proposed algorithm prove that using the proposed entropy technique reduces the encoding time while keeping compression rates and reconstructed image quality as good as possible.
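
    A sketch of the entropy-pruning idea: compute the Shannon entropy of each range and domain block and keep only the domain blocks whose entropy is close to that of the range block being encoded (block sizes, the histogram binning and the tolerance here are illustrative choices, not the paper's settings):

```python
import numpy as np

def block_entropy(block, bins=32):
    """Shannon entropy (bits) of a block's gray-level histogram."""
    hist, _ = np.histogram(block, bins=bins, range=(0, 256))
    p = hist[hist > 0] / block.size
    return -np.sum(p * np.log2(p))

def prune_domain_pool(range_block, domain_blocks, tol=0.5):
    """Keep only domain blocks whose entropy is within `tol` bits of the range block's."""
    h_r = block_entropy(range_block)
    return [d for d in domain_blocks if abs(block_entropy(d) - h_r) <= tol]

rng = np.random.default_rng(0)
image = rng.integers(0, 256, (64, 64))
domains = [image[i:i + 16, j:j + 16] for i in range(0, 48, 8) for j in range(0, 48, 8)]
range_blk = image[0:8, 0:8]
print(len(domains), "->", len(prune_domain_pool(range_blk, domains)))
```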

  12. Probabilistic forecasts of near-term climate change based on a resampling ensemble technique

    OpenAIRE

    Räisänen, J.; Ruokolainen, L.

    2006-01-01

    Probabilistic forecasts of near-term climate change are derived by using a multimodel ensemble of climate change simulations and a simple resampling technique that increases the number of realizations for the possible combination of anthropogenic climate change and internal climate variability. The technique is based on the assumption that the probability distribution of local climate changes is only a function of the all-model mean global average warming. Although this is unlikely to be exac...

  13. Advanced techniques in reliability model representation and solution

    Science.gov (United States)

    Palumbo, Daniel L.; Nicol, David M.

    1992-01-01

    The current tendency of flight control system designs is towards increased integration of applications and increased distribution of computational elements. The reliability analysis of such systems is difficult because subsystem interactions are increasingly interdependent. Researchers at NASA Langley Research Center have been working for several years to extend the capability of Markov modeling techniques to address these problems. This effort has been focused in the areas of increased model abstraction and increased computational capability. The reliability model generator (RMG) is a software tool that uses as input a graphical object-oriented block diagram of the system. RMG uses a failure-effects algorithm to produce the reliability model from the graphical description. The ASSURE software tool is a parallel processing program that uses the semi-Markov unreliability range evaluator (SURE) solution technique and the abstract semi-Markov specification interface to the SURE tool (ASSIST) modeling language. A failure modes-effects simulation is used by ASSURE. These tools were used to analyze a significant portion of a complex flight control system. The successful combination of the power of graphical representation, automated model generation, and parallel computation leads to the conclusion that distributed fault-tolerant system architectures can now be analyzed.

  14. Use of hydrological modelling and isotope techniques in Guvenc basin

    International Nuclear Information System (INIS)

    Altinbilek, D.

    1991-07-01

    The study covers the work performed under Project No. 335-RC-TUR-5145 entitled "Use of Hydrologic Modelling and Isotope Techniques in Guvenc Basin" and is an initial part of a program for estimating runoff from Central Anatolia watersheds. The study presented herein consists mainly of three parts: 1) the acquisition of a library of rainfall excess, direct runoff and isotope data for Guvenc basin; 2) the modification of the SCS model to be applied to Guvenc basin first and then to other basins of Central Anatolia for predicting the surface runoff from gaged and ungaged watersheds; and 3) the use of the environmental isotope technique in order to define the basin components of streamflow of Guvenc basin. 31 refs, figs and tabs
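
    For reference, the standard SCS curve-number relation underlying the model mentioned above can be sketched as follows (the curve number and rainfall depth are illustrative; the paper's modification of the SCS model is not reproduced here):

```python
def scs_runoff(p_mm, cn, ia_ratio=0.2):
    """Direct runoff depth Q from rainfall P using the SCS curve-number method.
    S = 25400/CN - 254 (mm); Q = (P - Ia)^2 / (P - Ia + S) when P > Ia, else 0."""
    s = 25400.0 / cn - 254.0          # potential maximum retention [mm]
    ia = ia_ratio * s                 # initial abstraction [mm]
    if p_mm <= ia:
        return 0.0
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(scs_runoff(p_mm=40.0, cn=75))   # runoff from a 40 mm storm on CN 75 soil/cover
```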

  15. Equivalence and Differences between Structural Equation Modeling and State-Space Modeling Techniques

    Science.gov (United States)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, Ellen L.; Dolan, Conor V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and differences through analytic comparisons and…

  16. MPPT Technique Based on Current and Temperature Measurements

    OpenAIRE

    Vicente, Eduardo Moreira; Moreno, Robson Luiz; Ribeiro, Enio Roberto

    2015-01-01

    This paper presents a new maximum power point tracking (MPPT) method based on the measurement of temperature and short-circuit current, in a simple and efficient approach. These measurements, which can precisely define the maximum power point (MPP), have not been used together in other existing techniques. The temperature is measured with a low cost sensor and the solar irradiance is estimated through the relationship of the measured short-circuit current and its reference. Fast tracking spee...

  17. Cooperative Technique Based on Sensor Selection in Wireless Sensor Network

    OpenAIRE

    ISLAM, M. R.; KIM, J.

    2009-01-01

    An energy efficient cooperative technique is proposed for IEEE 1451 based Wireless Sensor Networks. Selected numbers of Wireless Transducer Interface Modules (WTIMs) are used to form a Multiple Input Single Output (MISO) structure wirelessly connected with a Network Capable Application Processor (NCAP). Energy efficiency and delay of the proposed architecture are derived for different combinations of cluster size and selected numbers of WTIMs. Optimized constellation parameters are used for...

  18. Total laparoscopic gastrocystoplasty: experimental technique in a porcine model

    OpenAIRE

    Frederico R. Romero; Claudemir Trapp; Michael Muntener; Fabio A. Brito; Louis R. Kavoussi; Thomas W. Jarrett

    2007-01-01

    OBJECTIVE: Describe a unique simplified experimental technique for total laparoscopic gastrocystoplasty in a porcine model. MATERIAL AND METHODS: We performed laparoscopic gastrocystoplasty on 10 animals. The gastroepiploic arch was identified and carefully mobilized from its origin at the pylorus to the beginning of the previously demarcated gastric wedge. The gastric segment was resected with sharp dissection. Both gastric suturing and gastrovesical anastomosis were performed with absorbabl...

  19. A Bayesian Technique for Selecting a Linear Forecasting Model

    OpenAIRE

    Ramona L. Trader

    1983-01-01

    The specification of a forecasting model is considered in the context of linear multiple regression. Several potential predictor variables are available, but some of them convey little information about the dependent variable which is to be predicted. A technique for selecting the "best" set of predictors which takes into account the inherent uncertainty in prediction is detailed. In addition to current data, there is often substantial expert opinion available which is relevant to the forecas...

  20. Study on ABCD Analysis Technique for Business Models, business strategies, Operating Concepts & Business Systems

    OpenAIRE

    Sreeramana Aithal

    2016-01-01

    When studying the implications of a business model, choosing success strategies, developing viable operational concepts or evolving a functional system, it is important to analyse it in all dimensions. For this purpose, various analysing techniques/frameworks are used. This paper is a discussion on how to use an innovative analysing framework called the ABCD model on a given business model, or on a business strategy or an operational concept/idea or business system. Based on four constructs Advantages...

  1. An Image Morphing Technique Based on Optimal Mass Preserving Mapping

    Science.gov (United States)

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2013-01-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128

  2. Constraint-Based Model Weaving

    Science.gov (United States)

    White, Jules; Gray, Jeff; Schmidt, Douglas C.

    Aspect-oriented modeling (AOM) is a promising technique for untangling the concerns of complex enterprise software systems. AOM decomposes the crosscutting concerns of a model into separate models that can be woven together to form a composite solution model. In many domains, such as multi-tiered e-commerce web applications, separating concerns is much easier than deducing the proper way to weave the concerns back together into a solution model. For example, modeling the types and sizes of caches that can be leveraged by a Web application is much easier than deducing the optimal way to weave the caches back into the solution architecture to achieve high system throughput.

  3. Sensitivity analysis techniques for models of human behavior.

    Energy Technology Data Exchange (ETDEWEB)

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.

  4. Practical Techniques for Modeling Gas Turbine Engine Performance

    Science.gov (United States)

    Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.

    2016-01-01

    The cost and risk associated with the design and operation of gas turbine engine systems has led to an increasing dependence on mathematical models. In this paper, the fundamentals of engine simulation will be reviewed, an example performance analysis will be performed, and relationships useful for engine control system development will be highlighted. The focus will be on thermodynamic modeling utilizing techniques common in industry, such as: the Brayton cycle, component performance maps, map scaling, and design point criteria generation. In general, these topics will be viewed from the standpoint of an example turbojet engine model; however, demonstrated concepts may be adapted to other gas turbine systems, such as gas generators, marine engines, or high bypass aircraft engines. The purpose of this paper is to provide an example of gas turbine model generation and system performance analysis for educational uses, such as curriculum creation or student reference.
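
    A compact example in the spirit of the thermodynamic approach described above: an ideal Brayton-cycle calculation for a simple turbojet-like gas generator (all station values and gas properties below are illustrative assumptions, not the paper's example model):

```python
# Ideal Brayton cycle with cold-air-standard assumptions
GAMMA = 1.4          # ratio of specific heats for air
CP = 1005.0          # specific heat at constant pressure [J/(kg K)]

def brayton(t1=288.0, pr=10.0, t3=1400.0):
    """Return compressor work, turbine work, net specific work and thermal efficiency."""
    t2 = t1 * pr ** ((GAMMA - 1.0) / GAMMA)       # compressor exit temperature [K]
    t4 = t3 / pr ** ((GAMMA - 1.0) / GAMMA)       # turbine exit temperature [K]
    w_comp = CP * (t2 - t1)                       # compressor specific work [J/kg]
    w_turb = CP * (t3 - t4)                       # turbine specific work [J/kg]
    q_in = CP * (t3 - t2)                         # combustor heat addition [J/kg]
    eta = (w_turb - w_comp) / q_in                # cycle thermal efficiency
    return w_comp, w_turb, w_turb - w_comp, eta

print(brayton())   # for PR = 10, eta should match 1 - 1/PR^((gamma-1)/gamma), about 0.48
```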

  5. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    The paper demonstrates that a wide variety of event-based modeling approaches are based on special cases of the same general event concept, and that the general event concept can be used to unify the otherwise unrelated fields of information modeling and process modeling. A set of event-based modeling approaches are analyzed and the results are used to formulate a general event concept that can be used for unifying the seemingly unrelated event concepts. Events are characterized as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms...

  6. CEAI: CCM based Email Authorship Identification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah

    2013-01-01

    In this paper we present a model for email authorship identification (EAI) by employing a Cluster-based Classification (CCM) technique. Traditionally, stylometric features have been successfully employed in various authorship analysis tasks; we extend the traditional feature-set to include some...... reveal that the proposed CCM-based email authorship identification model, along with the proposed feature set, outperforms the state-of-the-art support vector machine (SVM)-based models, as well as the models proposed by Iqbal et al. [1, 2]. The proposed model attains an accuracy rate of 94% for 10...... authors, 89% for 25 authors, and 81% for 50 authors, respectively on Enron data set, while 89.5% accuracy has been achieved on authors' constructed real email data set. The results on Enron data set have been achieved on quite a large number of authors as compared to the models proposed by Iqbal et al. [1...

  7. SKILLS-BASED ECLECTIC TECHNIQUES MATRIX FOR ELT MICROTEACHINGS

    Directory of Open Access Journals (Sweden)

    İskender Hakkı Sarıgöz

    2016-10-01

    Full Text Available Foreign language teaching undergoes constant changes due to the methodological improvement. This progress may be examined in two parts. They are the methods era and the post-methods era. It is not pragmatic today to propose a particular language teaching method and its techniques for all purposes. The holistic inflexibility of mid-century methods has long gone. In the present day, constructivist foreign language teaching trends attempt to see the learner as a whole person and an individual who may be different from the other students in many respects. At the same time, the individual differences should not keep the learners away from group harmony. For this reason, current teacher training programs require eclectic teaching matrixes for unit design considering the mixed ability student groups. These matrixes can be prepared in a multidimensional fashion because there are many functional techniques in different methods and other new techniques to be created by instructors freely in accordance with the teaching aims. The hypothesis in this argument is that the collection of foreign language teaching techniques compiled in ELT microteachings for a particular group of learners has to be arranged eclectically in order to update the teaching process. Nevertheless, designing a teaching format of this sort is a demanding and highly criticized task. This study briefly argues eclecticism in language-skills based methodological struggle from the perspective of ELT teacher education.

  8. Novel optical password security technique based on optical fractal synthesizer

    Science.gov (United States)

    Wu, Kenan; Hu, Jiasheng; Wu, Xu

    2009-06-01

    A novel optical security technique for safeguarding user passwords based on an optical fractal synthesizer is proposed. A validating experiment has been carried out. In the proposed technique, a user password is protected by being converted to a fractal image. When a user sets up a new password, the password is transformed into a fractal pattern, and the fractal pattern is stored by the authority. If the user is online-validated, his or her password is converted to a fractal pattern again to compare with the previously stored fractal pattern. The converting process is called the fractal encoding procedure, which consists of two steps. First, the password is nonlinearly transformed to get the parameters for the optical fractal synthesizer. Then the optical fractal synthesizer is operated to generate the output fractal image. The experimental result proves the validity of our method. The proposed technique bridges the gap between digital security systems and optical security systems and has many advantages, such as high security level, convenience, flexibility, hyper extensibility, etc. This provides an interesting optical security technique for the protection of digital passwords.

  9. Mathematical and Numerical Techniques in Energy and Environmental Modeling

    Science.gov (United States)

    Chen, Z.; Ewing, R. E.

    Mathematical models have been widely used to predict, understand, and optimize many complex physical processes, from semiconductor or pharmaceutical design to large-scale applications such as global weather models to astrophysics. In particular, simulation of environmental effects of air pollution is extensive. Here we address the need for using similar models to understand the fate and transport of groundwater contaminants and to design in situ remediation strategies. Three basic problem areas need to be addressed in the modeling and simulation of the flow of groundwater contamination. First, one obtains an effective model to describe the complex fluid/fluid and fluid/rock interactions that control the transport of contaminants in groundwater. This includes the problem of obtaining accurate reservoir descriptions at various length scales and modeling the effects of this heterogeneity in the reservoir simulators. Next, one develops accurate discretization techniques that retain the important physical properties of the continuous models. Finally, one develops efficient numerical solution algorithms that utilize the potential of the emerging computing architectures. We will discuss recent advances and describe the contribution of each of the papers in this book in these three areas. Keywords: reservoir simulation, mathematical models, partial differential equations, numerical algorithms

  10. Wind Turbine Rotor Simulation via CFD Based Actuator Disc Technique Compared to Detailed Measurement

    Directory of Open Access Journals (Sweden)

    Esmail Mahmoodi

    2015-10-01

    Full Text Available In this paper, a generalized Actuator Disc (AD) is used to model the wind turbine rotor of the MEXICO experiment, a collaborative European wind turbine project. The AD model, a combination of the CFD technique and User Defined Function codes (UDF), the so-called UDF/AD model, is used to simulate the loads and performance of the rotor in three different wind speed tests. The modelling focuses on the distributed force on the blade and on the thrust and power production of the rotor, as these are important design parameters of wind turbine rotors. A Blade Element Momentum (BEM) code as well as a full rotor simulation, both taken from the literature, are included in the results for comparison and discussion. The output of all techniques is compared to detailed measurements for validation, which leads to the final conclusions.
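
    For orientation, the actuator-disc concept itself rests on one-dimensional momentum theory, in which an axial induction factor a relates rotor thrust and power to the free-stream conditions. The snippet below is a textbook-style sketch of those relations, not the paper's UDF/AD implementation; rotor radius, wind speed, air density and induction factor are assumed values.

    ```python
    import numpy as np

    def actuator_disc(U_inf: float, radius: float, a: float, rho: float = 1.225):
        """One-dimensional momentum relations for an ideal actuator disc."""
        A = np.pi * radius ** 2              # rotor swept area
        CT = 4.0 * a * (1.0 - a)             # thrust coefficient
        CP = 4.0 * a * (1.0 - a) ** 2        # power coefficient (Betz limit at a = 1/3)
        thrust = 0.5 * rho * A * U_inf ** 2 * CT
        power = 0.5 * rho * A * U_inf ** 3 * CP
        return thrust, power

    # Example with assumed values: 45 m rotor at 15 m/s and optimal induction a = 1/3.
    T, P = actuator_disc(U_inf=15.0, radius=45.0, a=1.0 / 3.0)
    print(f"Thrust = {T / 1e3:.1f} kN, Power = {P / 1e6:.2f} MW")
    ```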

  11. A Multi-Model Reduction Technique for Optimization of Coupled Structural-Acoustic Problems

    DEFF Research Database (Denmark)

    Creixell Mediante, Ester; Jensen, Jakob Søndergaard; Brunskog, Jonas

    2016-01-01

    Finite Element models of structural-acoustic coupled systems can become very large for complex structures with multiple connected parts. Optimization of the performance of the structure based on harmonic analysis of the system requires solving the coupled problem iteratively and for several frequencies, which can become highly time consuming. Several modal-based model reduction techniques for structure-acoustic interaction problems have been developed in the literature. The unsymmetric nature of the pressure-displacement formulation of the problem poses the question of how the reduction modal base should be formed, given that the modal vectors are not orthogonal due to the asymmetry of the system matrices. In this paper, a multi-model reduction (MMR) technique for structure-acoustic interaction problems is developed. In MMR, the reduction base is formed with the modal vectors of a family...

  12. Néron Models and Base Change

    DEFF Research Database (Denmark)

    Halle, Lars Halvard; Nicaise, Johannes

    Presenting the first systematic treatment of the behavior of Néron models under ramified base change, this book can be read as an introduction to various subtle invariants and constructions related to Néron models of semi-abelian varieties, motivated by concrete research problems and complemented...... on Néron component groups, Edixhoven’s filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions...

  13. A probabilistic graphical model based stochastic input model construction

    International Nuclear Information System (INIS)

    Wan, Jiang; Zabaras, Nicholas

    2014-01-01

    Model reduction techniques have been widely used in modeling of high-dimensional stochastic input in uncertainty quantification tasks. However, the probabilistic modeling of random variables projected into reduced-order spaces presents a number of computational challenges. Due to the curse of dimensionality, the underlying dependence relationships between these random variables are difficult to capture. In this work, a probabilistic graphical model based approach is employed to learn the dependence by running a number of conditional independence tests using observation data. Thus a probabilistic model of the joint PDF is obtained and the PDF is factorized into a set of conditional distributions based on the dependence structure of the variables. The estimation of the joint PDF from data is then transformed to estimating conditional distributions under reduced dimensions. To improve the computational efficiency, a polynomial chaos expansion is further applied to represent the random field in terms of a set of standard random variables. This technique is combined with both linear and nonlinear model reduction methods. Numerical examples are presented to demonstrate the accuracy and efficiency of the probabilistic graphical model based stochastic input models. - Highlights: • Data-driven stochastic input models without the assumption of independence of the reduced random variables. • The problem is transformed to a Bayesian network structure learning problem. • Examples are given in flows in random media

  14. A Multiagent Based Model for Tactical Planning

    Science.gov (United States)

    2002-10-01

    Pub. Co. 1985. [10] Castillo, J.M. Aproximación mediante procedimientos de Inteligencia Artificial al planeamiento táctico [An Artificial Intelligence approach to tactical planning]. Doctoral Thesis... been developed under the same conceptual model and using similar Artificial Intelligence tools. We use four different stimulus/response agents in... The conceptual model is built on the basis of agent theory. To implement the different agents we have used Artificial Intelligence techniques such

  15. Model-Based Motion Tracking of Infants

    DEFF Research Database (Denmark)

    Olsen, Mikkel Damgaard; Herskind, Anna; Nielsen, Jens Bo

    2014-01-01

    Even though motion tracking is a widely used technique to analyze and measure human movements, only a few studies focus on motion tracking of infants. In recent years, a number of studies have emerged focusing on analyzing the motion pattern of infants using computer vision. Most of these studies... that resembles the body surface of an infant, where the model is based on simple geometric shapes and a hierarchical skeleton model...

  16. Modelling of ground penetrating radar data in stratified media using the reflectivity technique

    International Nuclear Information System (INIS)

    Sena, Armando R; Sen, Mrinal K; Stoffa, Paul L

    2008-01-01

    Horizontally layered media are often encountered in shallow exploration geophysics. Ground penetrating radar (GPR) data in these environments can be modelled by techniques that are more efficient than finite difference (FD) or finite element (FE) schemes because the lateral homogeneity of the media allows us to reduce the dependence on the horizontal spatial variables through Fourier transforms on these coordinates. We adapt and implement the invariant embedding or reflectivity technique used to model elastic waves in layered media to model GPR data. The results obtained with the reflectivity and FDTD modelling techniques are in excellent agreement and the effects of the air–soil interface on the radiation pattern are correctly taken into account by the reflectivity technique. Comparison with real wide-angle GPR data shows that the reflectivity technique can satisfactorily reproduce the real GPR data. These results and the computationally efficient characteristics of the reflectivity technique (compared to FD or FE) demonstrate its usefulness in interpretation and possible model-based inversion schemes of GPR data in stratified media

  17. Efficient Identification Using a Prime-Feature-Based Technique

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar; Haq, Shaiq A.; Valente, Andrea

    2011-01-01

    Identification of authorized train drivers through biometrics is a growing area of interest in locomotive radio remote control systems. The existing technique of password authentication is not very reliable, and unauthorized personnel may potentially operate the system on behalf of the operator.... A fingerprint identification system, implemented on PC/104-based real-time systems, can accurately identify the operator. Traditionally, the uniqueness of a fingerprint is determined by the overall pattern of ridges and valleys as well as by local ridge anomalies, e.g., a ridge bifurcation or a ridge ending... in this paper. The technique involves identifying the most prominent feature of the fingerprint and searching only for that feature in the database to expedite the search process. The proposed architecture provides an efficient matching process, and the indexing feature used for identification is unique....

  18. Adaptive differential correspondence imaging based on sorting technique

    Directory of Open Access Journals (Sweden)

    Heng Wu

    2017-04-01

    Full Text Available We develop an adaptive differential correspondence imaging (CI method using a sorting technique. Different from the conventional CI schemes, the bucket detector signals (BDS are first processed by a differential technique, and then sorted in a descending (or ascending order. Subsequently, according to the front and last several frames of the sorted BDS, the positive and negative subsets (PNS are created by selecting the relative frames from the reference detector signals. Finally, the object image is recovered from the PNS. Besides, an adaptive method based on two-step iteration is designed to select the optimum number of frames. To verify the proposed method, a single-detector computational ghost imaging (GI setup is constructed. We experimentally and numerically compare the performance of the proposed method with different GI algorithms. The results show that our method can improve the reconstruction quality and reduce the computation cost by using fewer measurement data.
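
    The reconstruction logic described above (differential bucket signals, sorting, and positive/negative frame subsets) can be sketched numerically for computational ghost imaging. The code below is a simplified single-pixel simulation under assumed scene, pattern count and subset size; it is not the authors' implementation or their adaptive two-step frame selection.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Assumed test scene and random illumination patterns (computational ghost imaging).
    size, n_frames = 32, 4000
    obj = np.zeros((size, size)); obj[10:22, 14:18] = 1.0           # simple "object"
    patterns = rng.random((n_frames, size, size))
    bucket = np.einsum('kij,ij->k', patterns, obj)                  # bucket detector signals

    # Differential processing: remove the mean illumination contribution.
    diff = bucket - bucket.mean()

    # Sort frames and form positive/negative subsets from the two ends.
    order = np.argsort(diff)                                        # ascending order
    n_sel = n_frames // 10                                          # assumed subset size
    neg_set = patterns[order[:n_sel]]                               # lowest differential signals
    pos_set = patterns[order[-n_sel:]]                              # highest differential signals

    # Recover the object from the difference of the subset-averaged reference frames.
    recon = pos_set.mean(axis=0) - neg_set.mean(axis=0)
    print("correlation with ground truth:",
          np.corrcoef(recon.ravel(), obj.ravel())[0, 1])
    ```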

  19. Wear Detection of Drill Bit by Image-based Technique

    Science.gov (United States)

    Sukeri, Maziyah; Zulhilmi Paiz Ismadi, Mohd; Rahim Othman, Abdul; Kamaruddin, Shahrul

    2018-03-01

    Image processing for computer vision plays an essential role in the manufacturing industries for tool condition monitoring. This study proposes a dependable direct method for measuring tool wear using image-based analysis. Segmentation and thresholding techniques were used to filter the colour image and convert it to binary data. The edge detection method was then applied to characterize the edge of the drill bit. Using the cross-correlation method, the edges of the original and worn drill bits were correlated with each other. The cross-correlation graphs were able to detect the worn edge despite the small differences between the graphs. Future development will focus on quantifying the worn profile as well as enhancing the sensitivity of the technique.
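
    As a rough illustration of this pipeline (thresholding, edge extraction, correlation of edge profiles), the sketch below compares a reference and a "worn" edge profile using NumPy only. The synthetic images, threshold and wear region are assumptions for demonstration, not the authors' data or exact algorithm.

    ```python
    import numpy as np

    def edge_profile(gray: np.ndarray, threshold: float = 0.5) -> np.ndarray:
        """Binarize an image and return, per column, the row index of the top edge."""
        binary = gray > threshold
        return binary.argmax(axis=0).astype(float)   # first "material" row per column

    def edge_similarity(profile_a: np.ndarray, profile_b: np.ndarray) -> float:
        """Normalized cross-correlation (zero lag) between two edge profiles."""
        a = profile_a - profile_a.mean()
        b = profile_b - profile_b.mean()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    # Synthetic example: the worn bit has a locally receded (chipped) cutting edge.
    cols = np.arange(100)
    new_edge = 40 + 2 * np.sin(cols / 15.0)
    worn_edge = new_edge.copy(); worn_edge[40:60] += 6.0            # assumed wear region

    rows = np.arange(100)[:, None]
    img_new = (rows >= new_edge[None, :]).astype(float)             # material below edge
    img_worn = (rows >= worn_edge[None, :]).astype(float)

    p_new, p_worn = edge_profile(img_new), edge_profile(img_worn)
    print("similarity (new vs new): ", edge_similarity(p_new, p_new))
    print("similarity (new vs worn):", edge_similarity(p_new, p_worn))
    ```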

  20. Underwater Time Service and Synchronization Based on Time Reversal Technique

    Science.gov (United States)

    Lu, Hao; Wang, Hai-bin; Aissa-El-Bey, Abdeldjalil; Pyndiah, Ramesh

    2010-09-01

    Real-time service and synchronization are very important to many underwater systems, but existing time service and synchronization methods do not work well due to the multi-path propagation and random phase fluctuation of signals in the ocean channel. The time reversal mirror (TRM) technique can concentrate energy through self-matching of the ocean channel and has very good spatial and temporal focusing properties. Based on the TRM technique, we present the Time Reversal Mirror Real Time service and synchronization (TRMRT) method, which can bypass multi-path processing on the server side and reduce multi-path contamination on the client side. Thus TRMRT can improve the accuracy of the time service. Furthermore, as an efficient and precise method of time service, TRMRT could be widely used in underwater exploration activities and in underwater navigation and positioning systems.

  1. Improved ceramic slip casting technique. [application to aircraft model fabrication

    Science.gov (United States)

    Buck, Gregory M. (Inventor); Vasquez, Peter (Inventor)

    1993-01-01

    A primary concern in modern fluid dynamics research is the experimental verification of computational aerothermodynamic codes. This research requires high precision and detail in the test model employed. Ceramic materials are used for these models because of their low heat conductivity and their survivability at high temperatures. To fabricate such models, slip casting techniques were developed to provide net-form, precision casting capability for high-purity ceramic materials in aqueous solutions. In previous slip casting techniques, block, or flask molds made of plaster-of-paris were used to draw liquid from the slip material. Upon setting, parts were removed from the flask mold and cured in a kiln at high temperatures. Casting detail was usually limited with this technique -- detailed parts were frequently damaged upon separation from the flask mold, as the molded parts are extremely delicate in the uncured state, and the flask mold is inflexible. Ceramic surfaces were also marred by 'parting lines' caused by mold separation. This adversely affected the aerodynamic surface quality of the model as well. (Parting lines are invariably necessary on or near the leading edges of wings, nosetips, and fins for mold separation. These areas are also critical for flow boundary layer control.) Parting agents used in the casting process also affected surface quality. These agents eventually soaked into the mold, the model, or flaked off when releasing the case model. Different materials were tried, such as oils, paraffin, and even an algae. The algae released best, but some of it remained on the model and imparted an uneven texture and discoloration on the model surface when cured. According to the present invention, a wax pattern for a shell mold is provided, and an aqueous mixture of a calcium sulfate-bonded investment material is applied as a coating to the wax pattern. The coated wax pattern is then dried, followed by curing to vaporize the wax pattern and leave a shell

  2. Integrated approach to model decomposed flow hydrograph using artificial neural network and conceptual techniques

    Science.gov (United States)

    Jain, Ashu; Srinivasulu, Sanaga

    2006-02-01

    This paper presents the findings of a study aimed at decomposing a flow hydrograph into different segments based on physical concepts in a catchment, and modelling different segments using different technique viz. conceptual and artificial neural networks (ANNs). An integrated modelling framework is proposed capable of modelling infiltration, base flow, evapotranspiration, soil moisture accounting, and certain segments of the decomposed flow hydrograph using conceptual techniques and the complex, non-linear, and dynamic rainfall-runoff process using ANN technique. Specifically, five different multi-layer perceptron (MLP) and two self-organizing map (SOM) models have been developed. The rainfall and streamflow data derived from the Kentucky River catchment were employed to test the proposed methodology and develop all the models. The performance of all the models was evaluated using seven different standard statistical measures. The results obtained in this study indicate that (a) the rainfall-runoff relationship in a large catchment consists of at least three or four different mappings corresponding to different dynamics of the underlying physical processes, (b) an integrated approach that models the different segments of the decomposed flow hydrograph using different techniques is better than a single ANN in modelling the complex, dynamic, non-linear, and fragmented rainfall runoff process, (c) a simple model based on the concept of flow recession is better than an ANN to model the falling limb of a flow hydrograph, and (d) decomposing a flow hydrograph into the different segments corresponding to the different dynamics based on the physical concepts is better than using the soft decomposition employed using SOM.
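
    One of the conceptual components mentioned above, the falling-limb (recession) model, is simple enough to show directly: base flow is commonly treated as an exponential decay from the last observed flow, Q(t) = Q0·exp(-t/k). The sketch below fits the recession constant k from an observed recession limb by linear regression on log(Q); the flow series and constants are illustrative assumptions, not the Kentucky River data.

    ```python
    import numpy as np

    def fit_recession_constant(q_obs: np.ndarray, dt: float = 1.0) -> float:
        """Fit k in Q(t) = Q0 * exp(-t/k) from a recession-limb flow series."""
        t = np.arange(len(q_obs)) * dt
        slope, _ = np.polyfit(t, np.log(q_obs), 1)   # log Q = log Q0 - t/k
        return -1.0 / slope

    def recession_forecast(q0: float, k: float, n_steps: int, dt: float = 1.0) -> np.ndarray:
        t = np.arange(1, n_steps + 1) * dt
        return q0 * np.exp(-t / k)

    # Illustrative recession limb (daily flows, m^3/s) with a true k of about 8 days.
    rng = np.random.default_rng(1)
    q_true = 120.0 * np.exp(-np.arange(15) / 8.0)
    q_obs = q_true * rng.normal(1.0, 0.02, size=q_true.size)

    k = fit_recession_constant(q_obs)
    print(f"fitted recession constant k = {k:.1f} days")
    print(recession_forecast(q_obs[-1], k, n_steps=5).round(1))
    ```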

  3. A gray-box DPDA-based intrusion detection technique using system-call monitoring

    NARCIS (Netherlands)

    Jafarian, Jafar Haadi; Abbasi, Ali; Safaei Sheikhabadi, Siavash

    2011-01-01

    In this paper, we present a novel technique for automatic and efficient intrusion detection based on learning program behaviors. Program behavior is captured in terms of issued system calls augmented with point-of-system-call information, and is modeled according to an efficient deterministic

  4. Mobile Augmented Reality Support for Architects based on feature Tracking Techniques

    DEFF Research Database (Denmark)

    Grønbæk, Kaj; Nielsen, Mikkel Bang; Kramp, Gunnar

    2004-01-01

    This paper presents a mobile Augmented Reality (AR) system called the SitePack supporting architects in visualizing 3D models in real-time on site. We describe how vision based feature tracking techniques can help architects making decisions on site concerning visual impact assessment. The AR sys...

  5. Numerical and modeling techniques used in the EPIC code

    International Nuclear Information System (INIS)

    Pizzica, P.A.; Abramson, P.B.

    1977-01-01

    EPIC models fuel and coolant motion which result from internal fuel pin pressure (from fission gas or fuel vapor) and/or from the generation of sodium vapor pressures in the coolant channel subsequent to pin failure in an LMFBR. The modeling includes the ejection of molten fuel from the pin into a coolant channel with any amount of voiding through a clad rip which may be of any length or which may expand with time. One-dimensional Eulerian hydrodynamics is used to model both the motion of fuel and fission gas inside a molten fuel cavity and the mixture of two-phase sodium and fission gas in the channel. Motion of molten fuel particles in the coolant channel is tracked with a particle-in-cell technique

  6. Teaching scientific concepts through simple models and social communication techniques

    International Nuclear Information System (INIS)

    Tilakaratne, K.

    2011-01-01

    For science education, it is important to demonstrate to students the relevance of scientific concepts in every-day life experiences. Although there are methods available for achieving this goal, it is more effective if cultural flavor is also added to the teaching techniques and thereby the teacher and students can easily relate the subject matter to their surroundings. Furthermore, this would bridge the gap between science and day-to-day experiences in an effective manner. It could also help students to use science as a tool to solve problems faced by them and consequently they would feel science is a part of their lives. In this paper, it has been described how simple models and cultural communication techniques can be used effectively in demonstrating important scientific concepts to the students of secondary and higher secondary levels by using two consecutive activities carried out at the Institute of Fundamental Studies (IFS), Sri Lanka. (author)

  7. Artificial intelligence techniques for modeling database user behavior

    Science.gov (United States)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering use access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.

  8. A fast image reconstruction technique based on ART

    International Nuclear Information System (INIS)

    Zhang Shunli; Zhang Dinghua; Wang Kai; Huang Kuidong; Li Weibin

    2007-01-01

    Algebraic Reconstruction Technique (ART) is an iterative method for image reconstruction, and improving its reconstruction speed has been one of the important research topics for ART. For the simplified weight-coefficient reconstruction model of ART, a fast grid traverse algorithm is proposed, which can determine the grid index by simple operations such as addition, subtraction and comparison. Since the weight coefficients are calculated in real time during iteration, a large amount of storage is saved and the reconstruction speed is greatly increased. Experimental results show that the new algorithm is very effective and that the reconstruction speed is improved about 10 times compared with the traditional algorithm. (authors)
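
    For reference, the core ART update (independent of the fast weight-coefficient scheme proposed here) is the Kaczmarz row-action iteration: each projection equation in turn corrects the current image estimate along its weight row. The following is a minimal dense-matrix sketch with a toy system and an assumed relaxation factor; a real implementation would compute the weight rows on the fly, as the paper describes.

    ```python
    import numpy as np

    def art_reconstruct(A: np.ndarray, b: np.ndarray, n_sweeps: int = 50,
                        relax: float = 0.5) -> np.ndarray:
        """Algebraic Reconstruction Technique (Kaczmarz) for A x = b.

        A : (n_rays, n_pixels) weight-coefficient matrix
        b : (n_rays,) measured projections
        """
        x = np.zeros(A.shape[1])
        row_norms = (A * A).sum(axis=1)
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                if row_norms[i] > 0.0:
                    residual = b[i] - A[i] @ x
                    x += relax * residual / row_norms[i] * A[i]
        return x

    # Toy example: a 4-pixel "image" probed by 4 rays with assumed weights.
    x_true = np.array([1.0, 0.5, 0.0, 2.0])
    A = np.array([[1, 1, 0, 0],
                  [0, 0, 1, 1],
                  [1, 0, 1, 0],
                  [0, 1, 1, 0]], dtype=float)
    b = A @ x_true
    print(art_reconstruct(A, b, n_sweeps=200).round(3))   # approaches x_true
    ```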

  9. Integrating knowledge-based techniques into well-test interpretation

    Energy Technology Data Exchange (ETDEWEB)

    Harrison, I.W.; Fraser, J.L. [Artificial Intelligence Applications Inst., Edinburgh (United Kingdom)

    1995-04-01

    The goal of the Spirit Project was to develop a prototype of next-generation well-test-interpretation (WTI) software that would include knowledge-based decision support for the WTI model selection task. This paper describes how Spirit makes use of several different types of information (pressure, seismic, petrophysical, geological, and engineering) to support the user in identifying the most appropriate WTI model. Spirit's knowledge-based approach to type-curve matching is to generate several different feasible interpretations by making assumptions about the possible presence of both wellbore storage and late-time boundary effects. Spirit fuses information from type-curve matching and other data sources by use of a knowledge-based decision model developed in collaboration with a WTI expert. The sponsors of the work have judged the resulting prototype system a success.

  10. Constructing a Soil Class Map of Denmark based on the FAO Legend Using Digital Techniques

    DEFF Research Database (Denmark)

    Adhikari, Kabindra; Minasny, Budiman; Greve, Mette Balslev

    2014-01-01

    Soil mapping in Denmark has a long history and a series of soil maps based on conventional mapping approaches have been produced. In this study, a national soil map of Denmark was constructed based on the FAO–Unesco Revised Legend 1990 using digital soil mapping techniques, existing soil profile observations and environmental data. This map was developed using soil-landscape models generated with a decision tree-based digital soil mapping technique. As input variables in the model, more than 1170 soil profile data and 17 environmental variables including geology, land use, landscape type, area of wetlands, digital elevation model and its derivatives were compiled. The predicted map showed that Podzols and Luvisols were the most frequent soil groups, covering almost two-thirds of the area of Denmark. Geographically, Podzols occupied a major portion of western Denmark, where the soils have developed...
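
    A decision-tree digital soil mapping workflow of this kind can be sketched with scikit-learn: environmental covariates extracted at profile locations serve as predictors and the soil group as the class label, and the fitted tree is then applied across a prediction grid. The feature set, data values and tree depth below are placeholders, not the Danish dataset or the authors' model settings.

    ```python
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    rng = np.random.default_rng(42)

    # Placeholder training table: covariates extracted at soil-profile locations.
    n = 500
    X = np.column_stack([
        rng.uniform(0, 150, n),     # elevation (m), assumed covariate
        rng.uniform(0, 10, n),      # slope (degrees)
        rng.integers(0, 4, n),      # geology class code
        rng.integers(0, 3, n),      # land-use class code
    ])
    # Placeholder labels standing in for FAO groups (e.g. 0=Podzol, 1=Luvisol, 2=other).
    y = rng.integers(0, 3, n)

    # Fit the soil-landscape model and predict at unsampled grid locations.
    model = DecisionTreeClassifier(max_depth=6, min_samples_leaf=20, random_state=0)
    model.fit(X, y)

    grid = np.column_stack([rng.uniform(0, 150, 10), rng.uniform(0, 10, 10),
                            rng.integers(0, 4, 10), rng.integers(0, 3, 10)])
    print(model.predict(grid))
    ```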

  11. Reduced order modelling techniques for mesh movement strategies as applied to fluid structure interactions

    CSIR Research Space (South Africa)

    Bogaers, Alfred EJ

    2010-01-01

    Full Text Available In this paper, we implement the method of Proper Orthogonal Decomposition (POD) to generate a reduced order model (ROM) of an optimization based mesh movement technique. In the study it is shown that POD can be used effectively to generate a ROM...
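
    The POD step itself can be written as a truncated SVD of a snapshot matrix: the leading left singular vectors form the reduced basis, and new states are projected onto it. The snapshot data, noise level and truncation rank below are assumptions for illustration, not the mesh-movement snapshots of the paper.

    ```python
    import numpy as np

    def pod_basis(snapshots: np.ndarray, rank: int):
        """Proper Orthogonal Decomposition of a (n_dof, n_snapshots) matrix."""
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        energy = np.cumsum(s ** 2) / np.sum(s ** 2)
        return U[:, :rank], energy[rank - 1]

    rng = np.random.default_rng(3)
    # Assumed snapshot matrix: mesh displacements sampled at 40 design/time points,
    # built from a low-dimensional structure plus small noise.
    n_dof, n_snap = 1000, 40
    modes = rng.normal(size=(n_dof, 5))
    snapshots = modes @ rng.normal(size=(5, n_snap)) + 0.01 * rng.normal(size=(n_dof, n_snap))

    Phi, captured = pod_basis(snapshots, rank=5)
    print(f"energy captured by 5 modes: {captured:.4f}")

    # Reduced-order reconstruction of a new state: project onto the POD basis.
    x_new = modes @ rng.normal(size=5)
    x_rom = Phi @ (Phi.T @ x_new)
    print("relative reconstruction error:",
          np.linalg.norm(x_new - x_rom) / np.linalg.norm(x_new))
    ```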

  12. Antimisting kerosene: Base fuel effects, blending and quality control techniques

    Science.gov (United States)

    Yavrouian, A. H.; Ernest, J.; Sarohia, V.

    1984-01-01

    The problems associated with blending the AMK additive with Jet A, and the effects of the base fuel on AMK properties, are addressed. Results from the evaluation of some of the quality control techniques for AMK are presented. The principal conclusions of this investigation are: significant compositional differences exist for the base fuel (Jet A) within ASTM specification D1655; a higher aromatic content of the base fuel was found to be beneficial for polymer dissolution at ambient (20 C) temperature; using static mixer technology, the antimisting additive (FM-9) is in-line blended with Jet A, producing AMK which has adequate fire-protection properties 15 to 20 minutes after blending; degradability of freshly blended and equilibrated AMK indicated that maximum degradability is reached after adequate fire protection is obtained; the results of AMK degradability, as measured by filter ratio, confirmed previous RAE data showing that power requirements to degrade freshly blended AMK are significantly higher than for equilibrated AMK; blending of the additive by using FM-9 concentrate in Jet A produces equilibrated AMK almost instantly; nephelometry offers a simple continuous monitoring capability and is used as a real-time quality control device for AMK; and trajectory (jet thrust) and pressure drop tests are useful laboratory techniques for evaluating AMK quality.

  13. Two-dimensional gel electrophoresis image registration using block-matching techniques and deformation models.

    Science.gov (United States)

    Rodriguez, Alvaro; Fernandez-Lozano, Carlos; Dorado, Julian; Rabuñal, Juan R

    2014-06-01

    Block-matching techniques have been widely used in the task of estimating displacement in medical images, and they represent the best approach in scenes with deformable structures such as tissues, fluids, and gels. In this article, a new iterative block-matching technique, based on successive deformation, search, fitting, filtering, and interpolation stages, is proposed to measure elastic displacements in two-dimensional polyacrylamide gel electrophoresis (2D-PAGE) images. The proposed technique uses different deformation models in the task of correlating proteins in real 2D electrophoresis gel images, obtaining an accuracy of 96.6% and improving on the results obtained with other techniques. This technique represents a general solution that is easy to adapt to different 2D deformable cases and provides an experimental reference for block-matching algorithms. Copyright © 2014 Elsevier Inc. All rights reserved.
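
    At the core of any such scheme is the block-matching search itself: for a block in the source image, find the displacement within a search window of the target image that minimizes a dissimilarity measure (here the sum of squared differences). The NumPy sketch below shows only that search step, with assumed block and window sizes; the deformation-model fitting, filtering and interpolation stages of the paper are not reproduced.

    ```python
    import numpy as np

    def match_block(src: np.ndarray, dst: np.ndarray, top: int, left: int,
                    block: int = 16, search: int = 8):
        """Return the (dy, dx) displacement of one block by exhaustive SSD search."""
        ref = src[top:top + block, left:left + block]
        best, best_disp = np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + block > dst.shape[0] or x + block > dst.shape[1]:
                    continue
                cand = dst[y:y + block, x:x + block]
                ssd = float(((ref - cand) ** 2).sum())
                if ssd < best:
                    best, best_disp = ssd, (dy, dx)
        return best_disp

    # Synthetic check: shift an image by (3, -2) and recover the displacement.
    rng = np.random.default_rng(7)
    img = rng.random((64, 64))
    shifted = np.roll(np.roll(img, 3, axis=0), -2, axis=1)
    print(match_block(img, shifted, top=24, left=24))   # expected (3, -2)
    ```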

  14. Model-based Software Engineering

    DEFF Research Database (Denmark)

    Kindler, Ekkart

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...

  15. Full-duplex MIMO system based on antenna cancellation technique

    DEFF Research Database (Denmark)

    Foroozanfard, Ehsan; Franek, Ondrej; Tatomirescu, Alexandru

    2014-01-01

    The performance of an antenna cancellation technique for a multiple-input–multiple-output (MIMO) full-duplex system that is based on null-steering beamforming and antenna polarization diversity is investigated. A practical implementation of a symmetric antenna topology comprising three dual-polarized patch antennas operating at 2.4 GHz is described. The measurement results show an average of 60 dB self-interference cancellation over 200 MHz bandwidth. Moreover, a decoupling level of up to 22 dB is achieved for MIMO multiplexing using antenna polarization diversity. The performance evaluation...

  16. Prediction of drug synergy in cancer using ensemble-based machine learning techniques

    Science.gov (United States)

    Singh, Harpreet; Rana, Prashant Singh; Singh, Urvinder

    2018-04-01

    Drug synergy prediction plays a significant role in the medical field for inhibiting specific cancer agents and can serve as a pre-processing tool for therapeutic success. Different drug-drug interactions can be examined through the drug synergy score, which requires efficient regression-based machine learning approaches to minimize prediction errors. Numerous machine learning techniques such as neural networks, support vector machines, random forests, LASSO, Elastic Nets, etc., have been used in the past to meet the requirement mentioned above. However, these techniques individually do not provide sufficient accuracy for the drug synergy score. Therefore, the primary objective of this paper is to design a neuro-fuzzy-based ensembling approach. To achieve this, nine well-known machine learning techniques have been implemented on the drug synergy data. Based on the accuracy of each model, the four techniques with the highest accuracy are selected to develop the ensemble-based machine learning model. These models are Random Forest, Fuzzy Rules Using Genetic Cooperative-Competitive Learning (GFS.GCCL), the Adaptive-Network-Based Fuzzy Inference System (ANFIS) and the Dynamic Evolving Neural-Fuzzy Inference System (DENFIS). Ensembling is achieved by evaluating the biased weighted aggregation (i.e., assigning larger weights to models with higher prediction scores) of the data predicted by the selected models. The proposed and existing machine learning techniques have been evaluated on the drug synergy score data. The comparative analysis reveals that the proposed method outperforms the others in terms of accuracy, root mean square error and coefficient of correlation.
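
    The biased weighted aggregation step can be written compactly: each base model's weight is derived from its validation accuracy (here, inversely from its RMSE), the weights are normalized, and the ensemble prediction is the weighted sum of the individual predictions. The sketch below uses placeholder predictions standing in for the four selected models; the weighting rule and data are assumptions, not the paper's exact procedure.

    ```python
    import numpy as np

    def rmse(y_true: np.ndarray, y_pred: np.ndarray) -> float:
        return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

    def weighted_ensemble(preds: dict, y_val: np.ndarray):
        """Biased weighted aggregation: larger weight for lower validation RMSE."""
        scores = {name: 1.0 / (rmse(y_val, p) + 1e-12) for name, p in preds.items()}
        total = sum(scores.values())
        weights = {name: s / total for name, s in scores.items()}
        ensemble = sum(weights[name] * p for name, p in preds.items())
        return ensemble, weights

    # Placeholder validation targets and per-model predictions (stand-ins for
    # Random Forest, GFS.GCCL, ANFIS and DENFIS outputs).
    rng = np.random.default_rng(5)
    y_val = rng.normal(50, 10, size=200)
    preds = {name: y_val + rng.normal(0, sd, size=200)
             for name, sd in [("rf", 2.0), ("gccl", 4.0), ("anfis", 3.0), ("denfis", 6.0)]}

    y_ens, w = weighted_ensemble(preds, y_val)
    print({k: round(v, 3) for k, v in w.items()})
    print("ensemble RMSE:", round(rmse(y_val, y_ens), 3))
    ```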

  17. Application of Soft Computing Techniques and Multiple Regression Models for CBR prediction of Soils

    Directory of Open Access Journals (Sweden)

    Fatimah Khaleel Ibrahim

    2017-08-01

    Full Text Available Soft computing techniques such as Artificial Neural Networks (ANN) have improved predictive capability and have found application in geotechnical engineering. The aim of this research is to utilize soft computing techniques and Multiple Regression Models (MLR) for forecasting the California Bearing Ratio (CBR) of soil from its index properties. The CBR of a soil can be predicted from various soil characterization parameters with the assistance of MLR and ANN methods. The database was collected in the laboratory by conducting tests on 86 soil samples gathered from different projects in the Basrah districts. Data obtained from the experimental results were used in the regression models and in the soft computing technique using artificial neural networks. The liquid limit, plasticity index, modified compaction test and the CBR test have been determined. In this work, different ANN and MLR models were formulated with different collections of inputs in order to recognize their significance in the prediction of CBR. The strengths of the models that were developed were examined in terms of regression coefficient (R2), relative error (RE%) and mean square error (MSE) values. From the results of this paper, it was observed that all the proposed ANN models perform better than the MLR model. In particular, the ANN model with all input parameters reveals better outcomes than the other ANN models.
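
    The MLR baseline described here amounts to an ordinary least-squares fit of CBR against the index properties, after which R2 and MSE can be reported. A minimal NumPy sketch with invented sample values (not the 86 Basrah samples or the paper's fitted coefficients):

    ```python
    import numpy as np

    # Invented index properties: liquid limit, plasticity index, max dry density, OMC.
    rng = np.random.default_rng(11)
    n = 86
    X = np.column_stack([rng.uniform(25, 60, n), rng.uniform(5, 30, n),
                         rng.uniform(1.6, 2.1, n), rng.uniform(8, 20, n)])
    cbr = (30 - 0.2 * X[:, 0] - 0.3 * X[:, 1] + 8 * X[:, 2] - 0.4 * X[:, 3]
           + rng.normal(0, 1.0, n))                 # synthetic CBR values

    # Multiple linear regression: CBR = b0 + b1*LL + b2*PI + b3*MDD + b4*OMC
    A = np.column_stack([np.ones(n), X])
    coef, *_ = np.linalg.lstsq(A, cbr, rcond=None)
    pred = A @ coef

    mse = np.mean((cbr - pred) ** 2)
    r2 = 1.0 - np.sum((cbr - pred) ** 2) / np.sum((cbr - cbr.mean()) ** 2)
    print("coefficients:", coef.round(3))
    print(f"R2 = {r2:.3f}, MSE = {mse:.3f}")
    ```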

  18. Modelling galaxy formation with multi-scale techniques

    International Nuclear Information System (INIS)

    Hobbs, A.

    2011-01-01

    Full text: Galaxy formation and evolution depends on a wide variety of physical processes - star formation, gas cooling, supernovae explosions and stellar winds, etc. - that span an enormous range of physical scales. We present a novel technique for modelling such massively multi-scale systems. This has two key new elements: Lagrangian resimulation, and convergent 'sub-grid' physics. The former allows us to home in on interesting simulation regions with very high resolution. The latter allows us to increase resolution for the physics that we can resolve, without unresolved physics spoiling convergence. We illustrate the power of our new approach by showing some new results for star formation in the Milky Way. (author)

  19. Prescribed wind shear modelling with the actuator line technique

    DEFF Research Database (Denmark)

    Mikkelsen, Robert Flemming; Sørensen, Jens Nørkær; Troldborg, Niels

    2007-01-01

    A method for prescribing arbitrary steady atmospheric wind shear profiles combined with CFD is presented. The method is furthermore combined with the actuator line technique governing the aerodynamic loads on a wind turbine. Computations are carried out on a wind turbine exposed to a representative steady atmospheric wind shear profile, with and without wind direction changes up through the atmospheric boundary layer. Results show that the main impact on the turbine is captured by the model. Analysis of the wake behind the wind turbine reveals the formation of a skewed wake geometry interacting...
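
    A steady atmospheric shear profile of the kind prescribed here is often written as a power law, u(z) = u_ref (z/z_ref)^alpha, optionally with a height-dependent direction change (veer). The values and the linear-veer assumption below are illustrative only, not the profile used in the paper.

    ```python
    import numpy as np

    def sheared_inflow(z: np.ndarray, u_ref: float = 8.0, z_ref: float = 90.0,
                       alpha: float = 0.2, veer_deg_per_m: float = 0.0):
        """Power-law wind shear with an optional linear direction veer over height."""
        speed = u_ref * (z / z_ref) ** alpha
        direction = veer_deg_per_m * (z - z_ref)          # degrees, zero at hub height
        u = speed * np.cos(np.radians(direction))
        v = speed * np.sin(np.radians(direction))
        return u, v

    heights = np.array([30.0, 60.0, 90.0, 120.0, 150.0])  # assumed heights (m)
    u, v = sheared_inflow(heights, veer_deg_per_m=0.05)
    for z, ui, vi in zip(heights, u, v):
        print(f"z = {z:5.0f} m   u = {ui:5.2f} m/s   v = {vi:5.2f} m/s")
    ```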

  20. Radiation-Based Medical Imaging Techniques: An Overview

    Science.gov (United States)

    Prior, John O.; Lecoq, Paul

    This chapter will present an overview of two radiation-based medical imaging techniques using radiopharmaceuticals used in nuclear medicine/molecular imaging, namely, single-photon emission computed tomography (SPECT) and positron emission tomography (PET). The relative merits in terms of radiation sensitivity and image resolution of SPECT and PET will be compared to the main conventional radiologic modalities, computed tomography (CT) and magnetic resonance (MR) imaging. Differences in terms of temporal resolution will also be outlined, as well as the other similarities and dissimilarities of these two techniques, including their latest and upcoming multimodality combination. The main clinical applications are briefly described and examples of specific SPECT and PET radiopharmaceuticals are listed. SPECT and PET imaging will then be further detailed in the two subsequent chapters describing in greater depth the basics and future trends of each technique (see Chaps. 37, "SPECT Imaging: Basics and New Trends", 10.1007/978-3-642-13271-1_37, and 38, "PET Imaging: Basics and New Trends", 10.1007/978-3-642-13271-1_38).

  1. Principles of models based engineering

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  2. Structural level characterization of base oils using advanced analytical techniques

    KAUST Repository

    Hourani, Nadim

    2015-05-21

    Base oils, blended for finished lubricant formulations, are classified by the American Petroleum Institute into five groups, viz., groups I-V. Groups I-III consist of petroleum based hydrocarbons whereas groups IV and V are made of synthetic polymers. In the present study, five base oil samples belonging to groups I and III were extensively characterized using high performance liquid chromatography (HPLC), comprehensive two-dimensional gas chromatography (GC×GC), and Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) equipped with atmospheric pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) sources. First, the capabilities and limitations of each analytical technique were evaluated, and then the available information was combined to reveal compositional details of the base oil samples studied. HPLC showed the overwhelming presence of saturated over aromatic compounds in all five base oils. A similar trend was further corroborated using GC×GC, which yielded semiquantitative information on the compound classes present in the samples and provided further details on the carbon number distributions within these classes. In addition to the chromatography methods, FT-ICR MS supplemented the compositional information on the base oil samples by resolving the aromatic compounds into alkyl- and naphtheno-substituted families. APCI proved more effective than APPI for the ionization of the highly saturated base oil components. Furthermore, beyond the detailed information on hydrocarbon molecules, FT-ICR MS revealed the presence of saturated and aromatic sulfur species in all base oil samples. The results presented herein offer a unique perspective into the detailed molecular structure of base oils typically used to formulate lubricants. © 2015 American Chemical Society.

  3. Model-Based Methods for Fault Diagnosis: Some Guide-Lines

    DEFF Research Database (Denmark)

    Patton, R.J.; Chen, J.; Nielsen, S.B.

    1995-01-01

    This paper provides a review of model-based fault diagnosis techniques. Starting from basic principles, the properties....

  4. Weighted Least Squares Techniques for Improved Received Signal Strength Based Localization

    Directory of Open Access Journals (Sweden)

    José R. Casar

    2011-09-01

    Full Text Available The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network. The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.
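
    The circular (multilateration) variant with measurement weighting can be sketched as follows: RSS readings are converted to range estimates through a log-distance path-loss model, the circular equations are linearized against a reference anchor, and the position is obtained from a weighted least-squares solve in which more reliable (shorter-range) measurements receive larger weights. The path-loss parameters, anchor layout and weighting rule below are assumptions, not the paper's calibrated values.

    ```python
    import numpy as np

    def rss_to_distance(rss_dbm, p0_dbm=-40.0, n_exp=2.5):
        """Log-distance path-loss model: RSS = P0 - 10*n*log10(d)."""
        return 10.0 ** ((p0_dbm - np.asarray(rss_dbm)) / (10.0 * n_exp))

    def wls_position(anchors, dists, weights):
        """Weighted least squares on the circular equations linearized against anchor 0."""
        x0, y0 = anchors[0]
        d0 = dists[0]
        A, b = [], []
        for (xi, yi), di in zip(anchors[1:], dists[1:]):
            A.append([2.0 * (xi - x0), 2.0 * (yi - y0)])
            b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
        A, b = np.array(A), np.array(b)
        W = np.diag(weights[1:])                 # weight of the reference anchor unused
        return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

    # Assumed anchor layout and a true position for the synthetic test.
    anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    true_pos = np.array([3.0, 6.0])
    true_d = np.linalg.norm(anchors - true_pos, axis=1)
    rss = -40.0 - 25.0 * np.log10(true_d) + np.random.default_rng(2).normal(0, 0.5, 4)

    d_hat = rss_to_distance(rss)
    weights = 1.0 / d_hat**2                     # assumed rule: trust short ranges more
    print(wls_position(anchors, d_hat, weights))  # close to [3, 6]
    ```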

  5. Laparoscopic anterior resection: new anastomosis technique in a pig model.

    Science.gov (United States)

    Bedirli, Abdulkadir; Yucel, Deniz; Ekim, Burcu

    2014-01-01

    Bowel anastomosis after anterior resection is one of the most difficult tasks to perform during laparoscopic colorectal surgery. This study aims to evaluate a new feasible and safe intracorporeal anastomosis technique after laparoscopic left-sided colon or rectum resection in a pig model. The technique was evaluated in 5 pigs. The OrVil device (Covidien, Mansfield, Massachusetts) was inserted into the anus and advanced proximally to the rectum. A 0.5-cm incision was made in the sigmoid colon, and the 2 sutures attached to its delivery tube were cut. After the delivery tube was evacuated through the anus, the tip of the anvil was removed through the perforation. The sigmoid colon was transected just distal to the perforation with an endoscopic linear stapler. The rectosigmoid segment to be resected was removed through the anus with a grasper, and distal transection was performed. A 25-mm circular stapler was inserted and combined with the anvil, and end-to-side intracorporeal anastomosis was then performed. We performed the technique in 5 pigs. Anastomosis required an average of 12 minutes. We observed that the proximal and distal donuts were completely removed in all pigs. No anastomotic air leakage was observed in any of the animals. This study shows the efficacy and safety of intracorporeal anastomosis with the OrVil device after laparoscopic anterior resection.

  6. Modeling and Control of Multivariable Process Using Intelligent Techniques

    Directory of Open Access Journals (Sweden)

    Subathra Balasubramanian

    2010-10-01

    Full Text Available For nonlinear dynamic systems, first-principles-based modeling and control is difficult to implement. In this study, a fuzzy controller and a recurrent fuzzy controller are developed for a MIMO process. A fuzzy logic controller is a model-free controller designed based on knowledge about the process. In fuzzy control, two types of rule-based fuzzy models are available: the linguistic (Mamdani) model and the Takagi-Sugeno (TS) model. Of these two, the Takagi-Sugeno model has attracted the most attention. The application of fuzzy controllers is limited to static processes due to their feedforward structure; however, most real-time processes are dynamic and require the history of input/output data. In order to store the past values, a memory unit is needed, which is introduced by the recurrent structure. The proposed recurrent fuzzy structure is used to develop a controller for the two-tank heating process. Both controllers are designed and implemented in a real-time environment and their performance is compared.

  7. MIPCE: An MI-based protein complex extraction technique

    Indian Academy of Sciences (India)

    2015-09-28

    Sep 28, 2015 ... complexity of protein interaction data, many methods which are based on modelling the PPI data with graphs have been developed for analysing the structure of PPI networks. PPI networks are represented as undirected graphs with nodes corresponding to proteins and edges representing the interactions ...

  8. Problem-Based Learning Supported by Semantic Techniques

    Science.gov (United States)

    Lozano, Esther; Gracia, Jorge; Corcho, Oscar; Noble, Richard A.; Gómez-Pérez, Asunción

    2015-01-01

    Problem-based learning has been applied over the last three decades to a diverse range of learning environments. In this educational approach, different problems are posed to the learners so that they can develop different solutions while learning about the problem domain. When applied to conceptual modelling, and particularly to Qualitative…

  9. On HTML and XML based web design and implementation techniques

    International Nuclear Information System (INIS)

    Bezboruah, B.; Kalita, M.

    2006-05-01

    Web implementation is truly a multidisciplinary field with influences from programming, choosing of scripting languages, graphic design, user interface design, and database design. The challenge of a Web designer/implementer is his ability to create an attractive and informative Web. To work with the universal framework and link diagrams from the design process as well as the Web specifications and domain information, it is essential to create Hypertext Markup Language (HTML) or other software and multimedia to accomplish the Web's objective. In this article we will discuss Web design standards and the techniques involved in Web implementation based on HTML and Extensible Markup Language (XML). We will also discuss the advantages and disadvantages of HTML over its successor XML in designing and implementing a Web. We have developed two Web pages, one utilizing the features of HTML and the other based on the features of XML to carry out the present investigation. (author)

  10. Thiophene-based monolayer OFETs prepared by Langmuir techniques

    Science.gov (United States)

    Agina, Elena V.; Sizov, Alexey S.; Anisimov, Daniil S.; Trul, Askold A.; Borshchev, Oleg V.; Paraschuk, Dmitry Y.; Shcherbina, Maxim A.; Chvalun, Sergey N.; Ponomarenko, Sergey A.

    2015-08-01

    A novel, fast, easily processible and highly reproducible approach to the fabrication of thiophene-based monolayer OFETs by Langmuir-Blodgett or Langmuir-Schaefer techniques was developed and successfully applied. It is based on self-assembly of organosilicon derivatives of oligothiophenes or benzothienobenzothiophene at the water-air interface. The influence of the conjugation length and the anchor group chemistry of the self-assembling molecules on the monolayer structure and the electrical performance of monolayer OFETs was systematically investigated. Efficient monolayer OFETs with charge carrier mobilities up to 0.01 cm2/Vs and on/off ratios up to 10^6 were fabricated, and their functionality in integrated circuits under normal air conditions was demonstrated.

  11. Designing on ICT reconstruction software based on DSP techniques

    International Nuclear Information System (INIS)

    Liu Jinhui; Xiang Xincheng

    2006-01-01

    The convolution back-projection (CBP) algorithm is generally used to perform CT image reconstruction in ICT, and it is usually executed on a PC or workstation. In order to add multi-platform operation capability to CT reconstruction software, a CT reconstruction method based on modern digital signal processor (DSP) techniques is proposed and realized in this paper. A hardware system based on TI's C6701 DSP processor is selected to support the CT software construction. The CT reconstruction software is written entirely in assembly language tied to the DSP hardware. The CT software can be run on TI's C6701 EVM board by inputting the CT data, and it produces CT images that satisfy the practical requirements. (authors)

  12. Biological modelling of pelvic radiotherapy. Potential gains from conformal techniques

    Energy Technology Data Exchange (ETDEWEB)

    Fenwick, J.D

    1999-07-01

    Models have been developed which describe the dose and volume dependences of various long-term rectal complications of radiotherapy; assumptions underlying the models are consistent with clinical and experimental descriptions of complication pathogenesis. In particular, rectal bleeding - perhaps the most common complication of modern external beam prostate radiotherapy, and which might be viewed as its principal dose-limiting toxicity - has been modelled as a parallel-type complication. Rectal dose-surface-histograms have been calculated for 79 patients treated, in the course of the Royal Marsden trial of pelvic conformal radiotherapy, for prostate cancer using conformal or conventional techniques; rectal bleeding data are also available for these patients. The maximum-likelihood fit of the parallel bleeding model to the dose-surface-histograms and complication data shows that the complication status of the patients analysed (most of whom received reference point doses of 64 Gy) was significantly dependent on, and almost linearly proportional to, the volume of highly dosed rectal wall: a 1% decrease in the fraction of rectal wall (outlined over an 11 cm rectal length) receiving a dose of 58 Gy or more led to a reduction in the (RTOG) grade 1,2,3 bleeding rate of about 1.1% - 95% confidence interval [0.04%, 2.2%]. The parallel model fit to the bleeding data is only marginally biased by uncertainties in the calculated dose-surface-histograms (due to setup errors, rectal wall movement and absolute rectal surface area variability), causing the gradient of the observed volume-response curve to be slightly lower than that which would be seen in the absence of these uncertainties. An analysis of published complication data supports these single-centre findings and indicates that the reductions in highly dosed rectal wall volumes obtainable using conformal radiotherapy techniques can be exploited to allow escalation of the dose delivered to the prostate target volume, the

  13. Risk-based maintenance-Techniques and applications

    International Nuclear Information System (INIS)

    Arunraj, N.S.; Maiti, J.

    2007-01-01

    Plant and equipment, however well designed, will not remain safe or reliable if they are not maintained. The general objective of the maintenance process is to make use of the knowledge of failures and accidents to achieve the highest possible safety at the lowest possible cost. The concept of risk-based maintenance was developed to inspect high-risk components with greater frequency and thoroughness, and to maintain them more rigorously, in order to achieve tolerable risk criteria. Risk-based maintenance methodology provides a tool for maintenance planning and decision making to reduce the probability of failure of equipment and the consequences of failure. In this paper, risk analysis and risk-based maintenance methodologies were identified and classified into suitable classes. The factors affecting the quality of risk analysis were identified and analyzed. The applications, input data and output data were studied to understand their functioning and efficiency. The review showed that there is no unique way to perform risk analysis and risk-based maintenance. The use of suitable techniques and methodologies, careful investigation during the risk analysis phase, and detailed and structured results are necessary to make proper risk-based maintenance decisions

  14. Generation of Quasi-Gaussian Pulses Based on Correlation Techniques

    Directory of Open Access Journals (Sweden)

    POHOATA, S.

    2012-02-01

    Full Text Available Gaussian pulses have mostly been used in communications, where some applications can be emphasized: mobile telephony (GSM), where GMSK signals are used, and UWB communications, where short-period pulses based on the Gaussian waveform are generated. Since the Gaussian function is a theoretical concept that cannot be realized physically, it has to be approximated by functions that admit physical implementation. New techniques for generating Gaussian pulse responses of good precision are proposed and investigated in this paper. The second- and third-order derivatives of the Gaussian pulse response are accurately generated. The third-order derivative is composed of four individual rectangular pulses of fixed amplitude, which are easy to generate by standard techniques. In order to generate pulses able to satisfy the spectral mask requirements, an adequate filter must be applied. This paper presents a comparative analysis based on the relative error and the energy spectra of the proposed pulses.
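
    The construction described above can be sketched numerically: the third-order derivative of a Gaussian has four lobes separated by its three zero crossings, and each lobe can be replaced by a rectangular pulse of fixed amplitude (here chosen to preserve the lobe area, which is one possible choice, not necessarily the authors'). The pulse width and sampling rate are assumptions; the filtering stage needed to meet a spectral mask is omitted.

    ```python
    import numpy as np

    sigma, fs = 1.0, 200.0                       # assumed pulse width (s) and sample rate (Hz)
    t = np.arange(-5 * sigma, 5 * sigma, 1.0 / fs)
    g = np.exp(-t**2 / (2 * sigma**2))

    # Third-order derivative of the Gaussian (analytic form).
    d3 = (3 * t / sigma**4 - t**3 / sigma**6) * g

    # Locate the zero crossings and replace each of the four lobes by a rectangle
    # whose constant amplitude preserves the lobe area (equal-mean rectangle).
    sign = np.signbit(d3).astype(int)
    crossings = np.where(np.diff(sign) != 0)[0]
    edges = np.concatenate(([0], crossings + 1, [len(t)]))
    approx = np.zeros_like(d3)
    for lo, hi in zip(edges[:-1], edges[1:]):
        approx[lo:hi] = d3[lo:hi].mean()

    print("number of rectangular segments:", len(edges) - 1)
    print("relative L2 approximation error:",
          np.linalg.norm(d3 - approx) / np.linalg.norm(d3))
    ```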

  15. Semantic-based technique for thai documents plagiarism detection

    Directory of Open Access Journals (Sweden)

    Sorawat Prapanitisatian

    2014-03-01

    Full Text Available Plagiarism is the act of taking another person's writing or idea without referring to the source of information, and it is one of the major problems in educational institutions. A number of plagiarism detection tools are available on the Internet; however, few of them work well. Typically, they use a simple method for plagiarism detection, e.g., string matching. The main weakness of this method is that it cannot detect plagiarism when the author replaces words with synonyms. This paper therefore presents a new technique for semantic-based plagiarism detection using Semantic Role Labeling (SRL) and term weighting. SRL is deployed in order to calculate the semantic similarity. The main difference from existing frameworks is that terms in a sentence are weighted dynamically depending on their roles in the sentence, e.g., subject, verb or object. This technique detects plagiarism more effectively than existing systems, even when the positions of terms in a sentence are reordered. The experimental results show that the proposed method can detect plagiarized documents more effectively than the existing methods Anti-kobpae, Turnit-in and traditional Semantic Role Labeling.

  16. Determination of Complex-Valued Parametric Model Coefficients Using Artificial Neural Network Technique

    Directory of Open Access Journals (Sweden)

    A. M. Aibinu

    2010-01-01

    Full Text Available A new approach for determining the coefficients of a complex-valued autoregressive (CAR) and a complex-valued autoregressive moving average (CARMA) model using a complex-valued neural network (CVNN) technique is discussed in this paper. The CAR and complex-valued moving average (CMA) coefficients which constitute a CARMA model are computed simultaneously from the adaptive weights and the coefficients of the linear activation functions in a two-layered CVNN. The performance of the proposed technique has been evaluated using simulated complex-valued data (CVD) with three different types of activation functions. The results show that the proposed method can accurately determine the model coefficients provided that the network is properly trained. Furthermore, application of the developed CVNN-based technique to MRI k-space reconstruction results in images with improved resolution.

  17. Experimental evaluation of a quasi-modal parameter based rotor foundation identification technique

    Science.gov (United States)

    Yu, Minli; Liu, Jike; Feng, Ningsheng; Hahn, Eric J.

    2017-12-01

    Correct modelling of the foundation of rotating machinery is an invaluable asset in model-based rotor dynamic studies. One attractive approach for this purpose is to identify the relevant modal parameters of an equivalent foundation using motion measurements of the rotor and foundation at the bearing supports. Previous research showed that a complex quasi-modal parameter based system identification technique could be feasible for this purpose; however, the technique was only validated by identifying simple structures under harmonic excitation. In this paper, the identification technique is further extended and evaluated by identifying the foundation of a numerical rotor-bearing-foundation system and of an experimental rotor rig, respectively. In the identification of a rotor foundation with multiple bearing supports, all application points of the excitation forces transmitted through the bearings need to be included; however, the assumed vibration modes far outside the rotor operating speed range cannot, and need not, be identified. The extended identification technique allows one to correctly identify an equivalent foundation with fewer modes than the assumed number of degrees of freedom, essentially by generalising the technique to handle rectangular complex modal matrices. The extended technique is robust in numerical and experimental validation and is therefore likely to be applicable in the field.

  18. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Directory of Open Access Journals (Sweden)

    A. Elshorbagy

    2010-10-01

    Full Text Available In this second part of the two-part paper, the data driven modeling (DDM experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations were randomly generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs, genetic programming (GP, evolutionary polynomial regression (EPR, Support vector machines (SVM, M5 model trees (M5, K-nearest neighbors (K-nn, and multiple linear regression (MLR techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distribution of model residuals, and the deterioration rate of prediction performance during the testing phase. Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance could be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and if appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi linear data, which cover wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques. K-nn is also successful in linear situations, and it

  19. Experimental investigation of the predictive capabilities of data driven modeling techniques in hydrology - Part 2: Application

    Science.gov (United States)

    Elshorbagy, A.; Corzo, G.; Srinivasulu, S.; Solomatine, D. P.

    2010-10-01

    In this second part of the two-part paper, the data driven modeling (DDM) experiment, presented and explained in the first part, is implemented. Inputs for the five case studies (half-hourly actual evapotranspiration, daily peat soil moisture, daily till soil moisture, and two daily rainfall-runoff datasets) are identified, either based on previous studies or using the mutual information content. Twelve groups (realizations) were randomly generated from each dataset by randomly sampling without replacement from the original dataset. Neural networks (ANNs), genetic programming (GP), evolutionary polynomial regression (EPR), Support vector machines (SVM), M5 model trees (M5), K-nearest neighbors (K-nn), and multiple linear regression (MLR) techniques are implemented and applied to each of the 12 realizations of each case study. The predictive accuracy and uncertainties of the various techniques are assessed using multiple average overall error measures, scatter plots, frequency distribution of model residuals, and the deterioration rate of prediction performance during the testing phase. Gamma test is used as a guide to assist in selecting the appropriate modeling technique. Unlike two nonlinear soil moisture case studies, the results of the experiment conducted in this research study show that ANNs were a sub-optimal choice for the actual evapotranspiration and the two rainfall-runoff case studies. GP is the most successful technique due to its ability to adapt the model complexity to the modeled data. EPR performance could be close to GP with datasets that are more linear than nonlinear. SVM is sensitive to the kernel choice and if appropriately selected, the performance of SVM can improve. M5 performs very well with linear and semi linear data, which cover wide range of hydrological situations. In highly nonlinear case studies, ANNs, K-nn, and GP could be more successful than other modeling techniques. K-nn is also successful in linear situations, and it should
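
    The resampling-and-comparison protocol described above lends itself to a compact illustration. The sketch below, using synthetic placeholder data rather than the authors' datasets or code, draws several realizations by sampling without replacement and compares a multiple linear regression against a K-nearest-neighbour regressor with test-phase RMSE, mirroring the multi-technique comparison idea.

```python
# Minimal sketch of the realization-based model comparison idea:
# draw realizations by sampling without replacement, fit two of the
# candidate techniques (MLR and K-nn), and compare test-phase RMSE.
# Data here are synthetic placeholders, not the paper's datasets.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neighbors import KNeighborsRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(2000, 3))           # stand-ins for hydro-meteorological inputs
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(2000)

n_realizations, n_sample = 12, 500
for r in range(n_realizations):
    idx = rng.choice(len(X), size=n_sample, replace=False)   # sampling without replacement
    Xr, yr = X[idx], y[idx]
    split = int(0.7 * n_sample)
    Xtr, Xte, ytr, yte = Xr[:split], Xr[split:], yr[:split], yr[split:]
    for name, model in [("MLR", LinearRegression()),
                        ("K-nn", KNeighborsRegressor(n_neighbors=5))]:
        model.fit(Xtr, ytr)
        rmse = mean_squared_error(yte, model.predict(Xte)) ** 0.5
        print(f"realization {r:2d}  {name:5s}  RMSE = {rmse:.3f}")
```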

  20. Optimizing Availability of a Framework in Series Configuration Utilizing Markov Model and Monte Carlo Simulation Techniques

    Directory of Open Access Journals (Sweden)

    Mansoor Ahmed Siddiqui

    2017-06-01

    Full Text Available This research work is aimed at optimizing the availability of a framework comprising two units linked together in a series configuration, utilizing Markov Model and Monte Carlo (MC) Simulation techniques. In this article, effort has been made to develop a maintenance model that incorporates three distinct states for each unit, while taking into account their different levels of deterioration. Calculations are carried out using the proposed model for two distinct cases of corrective repair, namely perfect and imperfect repairs, with as well as without opportunistic maintenance. Initially, results are obtained using an analytical technique, i.e., the Markov Model. Validation of the results achieved is later carried out with the help of MC Simulation. In addition, MC Simulation based codes also work well for frameworks that follow non-exponential failure and repair rates, and thus overcome the limitations of the Markov Model.
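
    As a rough illustration of the Monte Carlo side of such an availability study, the sketch below simulates a two-unit series framework with exponential failure and repair times and estimates steady-state availability. The rates are hypothetical, repair is assumed perfect, and the three-state and opportunistic-maintenance logic of the paper is not reproduced.

```python
# Monte Carlo sketch: availability of two units in series with exponential
# failure/repair times (hypothetical rates, perfect repair). The series
# system is counted as "up" only while both units are up.
import random

def simulate_availability(lam=(0.01, 0.02), mu=(0.5, 0.4),
                          horizon=1_000_000.0, seed=1):
    random.seed(seed)
    up_time, t = 0.0, 0.0
    state = [True, True]                      # both units start "up"
    # next event time per unit: failure if up, repair completion if down
    next_event = [random.expovariate(lam[0]), random.expovariate(lam[1])]
    while t < horizon:
        i = 0 if next_event[0] <= next_event[1] else 1
        dt = next_event[i] - t
        if all(state):
            up_time += dt                     # series system was up during dt
        t = next_event[i]
        state[i] = not state[i]               # unit i fails or finishes repair
        rate = mu[i] if not state[i] else lam[i]
        next_event[i] = t + random.expovariate(rate)
    return up_time / horizon

print(f"estimated series availability ~ {simulate_availability():.4f}")
# analytic check for exponential rates: product of mu_i / (lam_i + mu_i)
```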

  1. Kerf modelling in abrasive waterjet milling using evolutionary computation and ANOVA techniques

    Science.gov (United States)

    Alberdi, A.; Rivero, A.; Carrascal, A.; Lamikiz, A.

    2012-04-01

    Many researchers demonstrated the capability of Abrasive Waterjet (AWJ) technology for precision milling operations. However, the concurrence of several input parameters along with the stochastic nature of this technology leads to complex process control, which requires work focused on process modelling. This research work introduces a model to predict the kerf shape in AWJ slot milling in Aluminium 7075-T651 in terms of four important process parameters: the pressure, the abrasive flow rate, the stand-off distance and the traverse feed rate. A hybrid evolutionary approach was employed for kerf shape modelling. This technique allowed characterizing the profile through two parameters: the maximum cutting depth and the full width at half maximum. On the other hand, based on ANOVA and regression techniques, these two parameters were also modelled as a function of the process parameters. The combination of both models resulted in an adequate strategy to predict the kerf shape for different machining conditions.

  2. Refractive index sensor based on optical fiber end face using pulse reference-based compensation technique

    Science.gov (United States)

    Bian, Qiang; Song, Zhangqi; Zhang, Xueliang; Yu, Yang; Chen, Yuzhong

    2018-03-01

    We proposed a refractive index sensor based on the optical fiber end face using a pulse reference-based compensation technique. With this compensation technique, fluctuations in the light source power and changes in the transmission loss of optical components and in the coupler splitting ratio can be compensated, which largely reduces the background noise. The refractive index resolution reaches 3.8 × 10^-6 RIU and 1.6 × 10^-6 RIU in different refractive index regions.

  3. Control System Design for Cylindrical Tank Process Using Neural Model Predictive Control Technique

    Directory of Open Access Journals (Sweden)

    M. Sridevi

    2010-10-01

    Full Text Available The chemical manufacturing and process industry requires innovative technologies for process identification. This paper deals with model identification and control of a cylindrical tank process. Model identification of the process was done using the ARMAX technique. A neural model predictive controller was designed for the identified model. The performance of the controllers was evaluated using MATLAB software. The performance of the NMPC controller was compared with the Smith Predictor controller and the IMC controller based on rise time, settling time, overshoot and ISE, and it was found that the NMPC controller is better suited for this process.

  4. Leveraging model-based study designs and serial micro-sampling techniques to understand the oral pharmacokinetics of the potent LTB4 inhibitor, CP-105696, for mouse pharmacology studies.

    Science.gov (United States)

    Spilker, Mary E; Chung, Heekyung; Visswanathan, Ravi; Bagrodia, Shubha; Gernhardt, Steven; Fantin, Valeria R; Ellies, Lesley G

    2017-07-01

    1. Leukotriene B4 (LTB4) is a proinflammatory mediator important in the progression of a number of inflammatory diseases. Preclinical models can explore the role of LTB4 in pathophysiology using tool compounds, such as CP-105696, that modulate its activity. To support preclinical pharmacology studies, micro-sampling techniques and mathematical modeling were used to determine the pharmacokinetics of CP-105696 in mice within the context of systemic inflammation induced by a high-fat diet (HFD). 2. Following oral administration of doses > 35 mg/kg, CP-105696 kinetics can be described by a one-compartment model with first order absorption. The compound's half-life is 44-62 h with an apparent volume of distribution of 0.51-0.72 L/kg. Exposures in animals fed an HFD are within 2-fold of those fed a normal chow diet. Daily dosing at 100 mg/kg was not tolerated and resulted in a >20% weight loss in the mice. 3. CP-105696's long half-life has the potential to support a twice weekly dosing schedule. Given that most chronic inflammatory diseases will require long-term therapies, these results are useful in determining the optimal dosing schedules for preclinical studies using CP-105696.
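
    The one-compartment model with first-order absorption mentioned above has a closed-form concentration profile; the sketch below evaluates it for illustrative parameter values chosen within the reported ranges. The dose, bioavailability, volume and absorption rate used here are placeholders, not the study's fitted estimates.

```python
# One-compartment oral-absorption PK sketch:
# C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t))
# Parameter values below are illustrative placeholders only.
import math

def concentration(t_h, dose_mg_per_kg=35.0, F=1.0, V_l_per_kg=0.6,
                  half_life_h=50.0, ka_per_h=0.5):
    ke = math.log(2) / half_life_h              # elimination rate from half-life
    A = F * dose_mg_per_kg * ka_per_h / (V_l_per_kg * (ka_per_h - ke))
    return A * (math.exp(-ke * t_h) - math.exp(-ka_per_h * t_h))   # mg/L

for t in (1, 6, 24, 48, 96, 168):
    print(f"t = {t:4d} h   C ~ {concentration(t):7.1f} mg/L")
```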

  5. Enhancing the effectiveness of IST through risk-based techniques

    Energy Technology Data Exchange (ETDEWEB)

    Floyd, S.D.

    1996-12-01

    Current IST requirements were developed mainly through deterministic-based methods. While this approach has resulted in an adequate level of safety and reliability for pumps and valves, insights from probabilistic safety assessments suggest a better safety focus can be achieved at lower costs. That is, some high safety impact pumps and valves are currently not tested under the IST program and should be added, while low safety impact valves could be tested at significantly greater intervals than allowed by the current IST program. The nuclear utility industry, through the Nuclear Energy Institute (NEI), has developed a draft guideline for applying risk-based techniques to focus testing on those pumps and valves with a high safety impact while reducing test frequencies on low safety impact pumps and valves. The guideline is being validated through an industry pilot application program that is being reviewed by the U.S. Nuclear Regulatory Commission. NEI and the ASME maintain a dialogue on the two groups' activities related to risk-based IST. The presenter will provide an overview of the NEI guideline, discuss the methodological approach for applying risk-based technology to IST and provide the status of the industry pilot plant effort.

  6. Controller Design of DFIG Based Wind Turbine by Using Evolutionary Soft Computational Techniques

    Directory of Open Access Journals (Sweden)

    O. P. Bharti

    2017-06-01

    Full Text Available This manuscript illustrates the controller design for a doubly fed induction generator based variable speed wind turbine by using a bioinspired scheme. The methodology is based on exploiting two proficient swarm intelligence based evolutionary soft computational procedures. The particle swarm optimization (PSO) and bacterial foraging optimization (BFO) techniques are employed to design the controller intended for the small damping plant of the DFIG. A wind energy overview and the DFIG operating principle, along with the equivalent circuit model, are adequately discussed in this paper. The controller designs for the DFIG based WECS using PSO and BFO are described comparatively in detail. The responses of the DFIG system regarding terminal voltage, current, active-reactive power, and DC-Link voltage are slightly improved with the evolutionary soft computational procedure. Lastly, the obtained output is compared with a standard technique for performance improvement of the DFIG based wind energy conversion system.
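
    Since the record leans on PSO, a minimal particle swarm loop is sketched below for tuning two controller gains against a stand-in cost surface. This is a generic PSO, not the authors' DFIG-specific formulation, and the objective function is purely illustrative.

```python
# Generic particle swarm optimization sketch for tuning two controller
# gains (Kp, Ki). The objective is a stand-in quadratic surrogate cost,
# not a DFIG plant model.
import numpy as np

def objective(gains):                      # hypothetical "damping cost"
    kp, ki = gains
    return (kp - 3.0) ** 2 + 0.5 * (ki - 1.2) ** 2 + 0.1 * kp * ki

rng = np.random.default_rng(42)
n_particles, n_iter, dim = 20, 100, 2
w, c1, c2 = 0.7, 1.5, 1.5                  # inertia and acceleration coefficients

x = rng.uniform(0.0, 10.0, size=(n_particles, dim))     # positions (gain candidates)
v = np.zeros_like(x)                                     # velocities
pbest = x.copy()
pbest_f = np.array([objective(p) for p in x])
gbest = pbest[np.argmin(pbest_f)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, 0.0, 10.0)
    f = np.array([objective(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)].copy()

print("best gains:", gbest, "cost:", objective(gbest))
```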

  7. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah; Wiil, Uffe Kock

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases......, the classifier is trained on each cluster having reduced dimensionality and less number of examples. The experimental results show that the proposed model outperforms the existing classification models for the task of suspicious email detection and topic categorization on the Reuters-21578 and 20 Newsgroups...... datasets. Our model also outperforms A Decision Cluster Classification (ADCC) and the Decision Cluster Forest Classification (DCFC) models on the Reuters-21578 dataset....
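
    A rough sketch of the cluster-then-classify idea described above (not the authors' exact model) is given below: documents are clustered on TF-IDF features and a separate lightweight classifier is trained inside each cluster. The toy documents and labels are placeholders, not the Reuters-21578 or 20 Newsgroups experiments.

```python
# Sketch of cluster-based text classification: cluster training documents,
# train one classifier per cluster, and route each new document to its
# cluster's classifier. Toy data only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

docs = ["cheap meds buy now", "meeting agenda for monday", "win money fast",
        "project status report", "claim your prize money now", "quarterly budget review"]
labels = np.array([1, 0, 1, 0, 1, 0])            # 1 = suspicious, 0 = normal

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

cluster_clf = {}
for c in range(km.n_clusters):
    mask = km.labels_ == c
    if len(set(labels[mask])) > 1:               # need both classes to train a classifier
        cluster_clf[c] = LogisticRegression().fit(X[mask], labels[mask])

def classify(text):
    x = vectorizer.transform([text])
    c = int(km.predict(x)[0])
    if c in cluster_clf:
        return int(cluster_clf[c].predict(x)[0])
    return int(labels[km.labels_ == c][0])       # single-class cluster: return its label

print(classify("free prize money"), classify("status report for the project"))
```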

  8. Satellite communication performance evaluation: Computational techniques based on moments

    Science.gov (United States)

    Omura, J. K.; Simon, M. K.

    1980-01-01

    Computational techniques that efficiently compute bit error probabilities when only moments of the various interference random variables are available are presented. The approach taken is a generalization of the well-known Gauss-Quadrature rules used for numerically evaluating single or multiple integrals. In what follows, basic algorithms are developed. Some of their properties and generalizations are shown and their many potential applications are described. Some typical interference scenarios for which the results are particularly applicable include: intentional jamming, adjacent and cochannel interferences; radar pulses (RFI); multipath; and intersymbol interference. While the examples presented stress evaluation of bit error probabilities in uncoded digital communication systems, the moment techniques can also be applied to the evaluation of other parameters, such as computational cutoff rate under both normal and mismatched receiver cases in coded systems. Another important application is the determination of the probability distributions of the output of a discrete time dynamical system. This type of model occurs widely in control systems, queueing systems, and synchronization systems (e.g., discrete phase locked loops).
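
    The moment-based quadrature idea can be sketched as follows: given the first 2n moments of the interference variable, a Golub-Welsch-style construction yields n nodes and weights, after which any expectation (for instance a Q-function bit-error integrand) is approximated by a weighted sum. This is a generic textbook construction offered only for illustration, not the exact algorithm of the report.

```python
# Sketch: build an n-point Gauss quadrature rule from moments m_0..m_2n
# (Golub-Welsch via Cholesky of the Hankel moment matrix), then use it to
# approximate E[f(X)] when only moments of X are known.
import math
import numpy as np

def gauss_rule_from_moments(moments):
    """moments = [m_0, ..., m_{2n}] -> (nodes, weights) of an n-point rule."""
    n = (len(moments) - 1) // 2
    H = np.array([[moments[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(H).T                  # H = R^T R, R upper triangular
    alpha = np.array([R[j, j + 1] / R[j, j] - (R[j - 1, j] / R[j - 1, j - 1] if j else 0.0)
                      for j in range(n)])
    off = np.array([R[j + 1, j + 1] / R[j, j] for j in range(n - 1)])
    J = np.diag(alpha) + np.diag(off, 1) + np.diag(off, -1)    # Jacobi matrix
    nodes, vecs = np.linalg.eigh(J)
    weights = moments[0] * vecs[0, :] ** 2
    return nodes, weights

# Example: X ~ N(0,1) known only through its moments; approximate E[Q(X + 2)].
m = [1, 0, 1, 0, 3, 0, 15, 0, 105]               # standard normal moments m_0..m_8
nodes, weights = gauss_rule_from_moments(m)
Q = lambda x: 0.5 * math.erfc(x / math.sqrt(2))  # Gaussian tail function
approx = sum(w * Q(x + 2.0) for x, w in zip(nodes, weights))
print(f"E[Q(X+2)] ~ {approx:.6f}")               # exact value is Q(2/sqrt(2)) ~ 0.0786
```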

  9. Modeling and Forecasting Electricity Demand in Azerbaijan Using Cointegration Techniques

    Directory of Open Access Journals (Sweden)

    Fakhri J. Hasanov

    2016-12-01

    Full Text Available Policymakers in developing and transitional economies require sound models to: (i) understand the drivers of rapidly growing energy consumption and (ii) produce forecasts of future energy demand. This paper attempts to model electricity demand in Azerbaijan and provide future forecast scenarios—as far as we are aware this is the first such attempt for Azerbaijan using a comprehensive modelling framework. Electricity consumption increased and decreased considerably in Azerbaijan from 1995 to 2013 (the period used for the empirical analysis)—it increased on average by about 4% per annum from 1995 to 2006 but decreased by about 4½% per annum from 2006 to 2010 and increased thereafter. It is therefore vital that Azerbaijani planners and policymakers understand what drives electricity demand and be able to forecast how it will grow in order to plan for future power production. However, modeling electricity demand for such a country has many challenges. Azerbaijan is rich in energy resources, consequently GDP is heavily influenced by oil prices; hence, real non-oil GDP is employed as the activity driver in this research (unlike almost all previous aggregate energy demand studies). Moreover, electricity prices are administered rather than market driven. Therefore, different cointegration and error correction techniques are employed to estimate a number of per capita electricity demand models for Azerbaijan, which are used to produce forecast scenarios for up to 2025. The resulting estimated models (in terms of coefficients, etc.) and forecasts of electricity demand for Azerbaijan in 2025 prove to be very similar, with the Business as Usual forecast ranging from about 19½ to 21 TWh.
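
    One of the cointegration approaches alluded to, the Engle-Granger two-step procedure, is easy to sketch: a long-run (levels) regression gives the equilibrium relationship, and the lagged residual then enters a differenced short-run equation as the error-correction term. The sketch below uses synthetic series, not the Azerbaijani data or the paper's preferred estimator.

```python
# Engle-Granger two-step sketch: (1) long-run levels regression of
# electricity demand on activity, (2) error-correction model in first
# differences with the lagged long-run residual. Synthetic data only.
import numpy as np

rng = np.random.default_rng(3)
T = 200
gdp = np.cumsum(0.01 + 0.02 * rng.standard_normal(T))        # log non-oil GDP (random walk)
demand = 0.8 * gdp + 0.05 * rng.standard_normal(T)           # cointegrated log demand

def ols(y, X):
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, y - X @ beta

# Step 1: long-run relationship and equilibrium errors
beta_lr, resid = ols(demand, gdp)

# Step 2: short-run ECM  d(demand)_t = a + b*d(gdp)_t + g*resid_{t-1} + e_t
dy, dx, ect = np.diff(demand), np.diff(gdp), resid[:-1]
beta_ecm, _ = ols(dy, np.column_stack([dx, ect]))

print("long-run elasticity  :", round(beta_lr[1], 3))
print("error-correction term:", round(beta_ecm[2], 3))       # expected to be negative
```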

  10. Graph Model Based Indoor Tracking

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Lu, Hua; Yang, Bin

    2009-01-01

    The tracking of the locations of moving objects in large indoor spaces is important, as it enables a range of applications related to, e.g., security and indoor navigation and guidance. This paper presents a graph model based approach to indoor tracking that offers a uniform data management...... infrastructure for different symbolic positioning technologies, e.g., Bluetooth and RFID. More specifically, the paper proposes a model of indoor space that comprises a base graph and mappings that represent the topology of indoor space at different levels. The resulting model can be used for one or several...... indoor positioning technologies. Focusing on RFID-based positioning, an RFID specific reader deployment graph model is built from the base graph model. This model is then used in several algorithms for constructing and refining trajectories from raw RFID readings. Empirical studies with implementations...

  11. Evaluation of mesh morphing and mapping techniques in patient specific modeling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2013-01-01

    Robust generation of pelvic finite element models is necessary to understand the variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmarked-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis and their strain distributions evaluated. Morphing and mapping techniques were effectively applied to generate good quality geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.

  12. Evaluation of mesh morphing and mapping techniques in patient specific modelling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2012-08-01

    Robust generation of pelvic finite element models is necessary to understand variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmarked-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity-based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis, and their strain distributions were evaluated. Morphing and mapping techniques were effectively applied to generate good quality and geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model. Copyright © 2012 John Wiley & Sons, Ltd.

  13. Activity-based DEVS modeling

    DEFF Research Database (Denmark)

    Alshareef, Abdurrahman; Sarjoughian, Hessam S.; Zarrin, Bahram

    2018-01-01

    architecture and the UML concepts. In this paper, we further this work by grounding Activity-based DEVS modeling and developing a fully-fledged modeling engine to demonstrate applicability. We also detail the relevant aspects of the created metamodel in terms of modeling and simulation. A significant number...

  14. Use of machine learning techniques for modeling of snow depth

    Directory of Open Access Journals (Sweden)

    G. V. Ayzel

    2017-01-01

    Full Text Available Snow exerts a significant regulating effect on the land hydrological cycle since it controls the intensity of heat and water exchange between the soil-vegetative cover and the atmosphere. Estimating spring flood runoff or rain floods on mountainous rivers requires an understanding of the snow cover dynamics on a watershed. In our work, the problem of snow cover depth modeling is addressed using both available databases of hydro-meteorological observations and easily accessible scientific software that allows complete reproduction of the investigation results and further development of this theme by the scientific community. In this research we used daily observational data on the snow cover and surface meteorological parameters, obtained at three stations situated in different geographical regions: Col de Porte (France), Sodankyla (Finland), and Snoqualmie Pass (USA). Statistical modeling of the snow cover depth is based on a set of freely distributed, present-day machine learning models: Decision Trees, Adaptive Boosting, Gradient Boosting. It is demonstrated that the combination of modern machine learning methods with available meteorological data provides good accuracy of snow cover modeling. The best results of snow cover depth modeling for every investigated site were obtained by the ensemble method of gradient boosting over decision trees – this model reproduces well both the periods of snow cover accumulation and of melting. The purposeful character of the learning process for gradient boosting models, their ensemble character, and the use of the combined redundancy of a test sample in the learning procedure make this type of model a good and sustainable research tool. The results obtained can be used for estimating snow cover characteristics for river basins where hydro-meteorological information is absent or insufficient.
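
    The gradient-boosting-over-decision-trees approach favoured above is straightforward to reproduce in outline. The sketch below trains a boosted regressor on synthetic stand-in meteorological predictors, not the Col de Porte, Sodankyla or Snoqualmie Pass observations, and is only a schematic of the idea.

```python
# Sketch: gradient boosting over decision trees for snow depth, trained on
# daily meteorological predictors. Synthetic stand-in data only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(7)
n = 3000
temp = rng.uniform(-25, 15, n)                   # daily mean air temperature, deg C
precip = rng.exponential(2.0, n)                 # daily precipitation, mm
doy = rng.integers(1, 366, n)                    # day of year
# toy "snow depth": accumulates when cold and wet, shrinks when warm
depth = np.clip(-temp, 0, None) * 0.8 + (temp < 0) * precip * 1.5 \
        + 5 * np.cos(2 * np.pi * doy / 365) + rng.normal(0, 2, n)
depth = np.clip(depth, 0, None)

X = np.column_stack([temp, precip, doy])
Xtr, Xte, ytr, yte = train_test_split(X, depth, test_size=0.3, random_state=0)

gbr = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3, random_state=0).fit(Xtr, ytr)
print("test MAE:", round(mean_absolute_error(yte, gbr.predict(Xte)), 2))
```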

  15. Classification of the Regional Ionospheric Disturbance Based on Machine Learning Techniques

    Science.gov (United States)

    Terzi, Merve Begum; Arikan, Orhan; Karatay, Secil; Arikan, Feza; Gulyaeva, Tamara

    2016-08-01

    In this study, Total Electron Content (TEC) estimated from GPS receivers is used to model the regional and local variability that differs from global activity, along with solar and geomagnetic indices. For the automated classification of regional disturbances, a classification technique based on a robust machine learning method that has found widespread use, the Support Vector Machine (SVM), is proposed. The performance of the developed classification technique is demonstrated for the midlatitude ionosphere over Anatolia using TEC estimates generated from GPS data provided by the Turkish National Permanent GPS Network (TNPGN-Active) for the solar maximum year of 2011. By applying the developed classification technique to Global Ionospheric Map (GIM) TEC data, which are provided by the NASA Jet Propulsion Laboratory (JPL), it is shown that SVM can be a suitable learning method to detect anomalies in TEC variations.
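
    A minimal sketch of an SVM classifier for quiet versus disturbed days is shown below. The features (a daily TEC deviation and a variability measure) and the labels are synthetic placeholders, not the TNPGN or GIM data, and the feature engineering of the study is not reproduced.

```python
# Sketch: SVM classification of "quiet" vs "disturbed" days from simple
# TEC-derived features. Features and labels below are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(11)
n = 400
dev = np.concatenate([rng.normal(0, 2, n // 2), rng.normal(8, 4, n // 2)])   # TEC deviation
var = np.concatenate([rng.normal(1, 0.3, n // 2), rng.normal(3, 1, n // 2)]) # TEC variability
X = np.column_stack([dev, var])
y = np.array([0] * (n // 2) + [1] * (n // 2))    # 0 = quiet, 1 = disturbed

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```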

  16. Chemistry research and chemical techniques based on research reactors

    International Nuclear Information System (INIS)

    1963-01-01

    Chemistry has occupied an important position historically in the sciences associated with nuclear reactors and it continues to play a prominent role in reactor-based research investigations. This Panel of prominent scientists in the field was convened by the International Atomic Energy Agency (IAEA) to assess the present state of such chemistry research for the information of its Member States and others interested in the subject. There are two ways in which chemistry is associated with nuclear reactors: (a) general applications to many scientific fields in which chemical techniques are involved as essential service functions; and (b) specific applications of reactor facilities to the solution of chemical problems themselves. Twenty years of basic research with nuclear reactors have demonstrated a very widespread, and still increasing, demand for radioisotopes and isotopically-labelled molecules in all fields of the physical and biological sciences. Similarly, the determination of the elemental composition of a material through the analytical technique of activation analysis can be applied throughout experimental science. Refs, figs and tabs

  17. Detecting Molecular Properties by Various Laser-Based Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hsin, Tse-Ming [Iowa State Univ., Ames, IA (United States)

    2007-01-01

    Four different laser-based techniques were applied to study physical and chemical characteristics of biomolecules and dye molecules. These techniques are hole burning spectroscopy, single molecule spectroscopy, time-resolved coherent anti-Stokes Raman spectroscopy and laser-induced fluorescence microscopy. Results from hole burning and single molecule spectroscopy suggested that two antenna states (C708 & C714) of photosystem I from cyanobacterium Synechocystis PCC 6803 are connected by effective energy transfer and the corresponding energy transfer time is ~6 ps. In addition, results from hole burning spectroscopy indicated that the chlorophyll dimer of the C714 state has a large distribution of the dimer geometry. Direct observation of vibrational peaks and evolution of coumarin 153 in the electronic excited state was demonstrated by using the fs/ps CARS, a variation of time-resolved coherent anti-Stokes Raman spectroscopy. In three different solvents, methanol, acetonitrile, and butanol, a vibration peak related to the stretch of the carbonyl group exhibits different relaxation dynamics. Laser-induced fluorescence microscopy, along with the biomimetic containers-liposomes, allows the measurement of the enzymatic activity of individual alkaline phosphatase from bovine intestinal mucosa without potential interferences from glass surfaces. The result showed a wide distribution of the enzyme reactivity. Protein structural variation is one of the major reasons that are responsible for this highly heterogeneous behavior.

  18. A human visual based binarization technique for histological images

    Science.gov (United States)

    Shreyas, Kamath K. M.; Rajendran, Rahul; Panetta, Karen; Agaian, Sos

    2017-05-01

    In the field of vision-based systems for object detection and classification, thresholding is a key pre-processing step. Thresholding is a well-known technique for image segmentation. Segmentation of medical images, such as Computed Axial Tomography (CAT), Magnetic Resonance Imaging (MRI), X-Ray, Phase Contrast Microscopy, and Histological images, present problems like high variability in terms of the human anatomy and variation in modalities. Recent advances made in computer-aided diagnosis of histological images help facilitate detection and classification of diseases. Since most pathology diagnosis depends on the expertise and ability of the pathologist, there is clearly a need for an automated assessment system. Histological images are stained to a specific color to differentiate each component in the tissue. Segmentation and analysis of such images is problematic, as they present high variability in terms of color and cell clusters. This paper presents an adaptive thresholding technique that aims at segmenting cell structures from Haematoxylin and Eosin stained images. The thresholded result can further be used by pathologists to perform effective diagnosis. The effectiveness of the proposed method is analyzed by visually comparing the results to the state of art thresholding methods such as Otsu, Niblack, Sauvola, Bernsen, and Wolf. Computer simulations demonstrate the efficiency of the proposed method in segmenting critical information.

  19. Analysis of Multipath Mitigation Techniques with Land Mobile Satellite Channel Model

    Directory of Open Access Journals (Sweden)

    M. Z. H. Bhuiyan; J. Zhang

    2012-12-01

    Full Text Available Multipath is undesirable for Global Navigation Satellite System (GNSS) receivers, since the reception of multipath can create a significant distortion to the shape of the correlation function, leading to an error in the receivers’ position estimate. Many multipath mitigation techniques exist in the literature to deal with the multipath propagation problem in the context of GNSS. The multipath studies in the literature are often based on optimistic assumptions, for example, assuming a static two-path channel or a fading channel with a Rayleigh or a Nakagami distribution. But, in reality, there are a lot of channel modeling issues, for example, satellite-to-user geometry, variable number of paths, variable path delays and gains, Non-Line-Of-Sight (NLOS) path conditions, receiver movements, etc., that are kept out of consideration when analyzing the performance of these techniques. Therefore, it is of utmost importance to analyze the performance of different multipath mitigation techniques in realistic measurement-based channel models, for example, the Land Mobile Satellite channel model.

  20. Diagnosis of Dengue Infection Using Conventional and Biosensor Based Techniques

    Science.gov (United States)

    Parkash, Om; Hanim Shueb, Rafidah

    2015-01-01

    Dengue is an arthropod-borne viral disease caused by four antigenically different serotypes of dengue virus. This disease is considered as a major public health concern around the world. Currently, there is no licensed vaccine or antiviral drug available for the prevention and treatment of dengue disease. Moreover, clinical features of dengue are indistinguishable from other infectious diseases such as malaria, chikungunya, rickettsia and leptospira. Therefore, prompt and accurate laboratory diagnostic test is urgently required for disease confirmation and patient triage. The traditional diagnostic techniques for the dengue virus are viral detection in cell culture, serological testing, and RNA amplification using reverse transcriptase PCR. This paper discusses the conventional laboratory methods used for the diagnosis of dengue during the acute and convalescent phase and highlights the advantages and limitations of these routine laboratory tests. Subsequently, the biosensor based assays developed using various transducers for the detection of dengue are also reviewed. PMID:26492265

  1. Diagnosis of Dengue Infection Using Conventional and Biosensor Based Techniques.

    Science.gov (United States)

    Parkash, Om; Shueb, Rafidah Hanim

    2015-10-19

    Dengue is an arthropod-borne viral disease caused by four antigenically different serotypes of dengue virus. This disease is considered as a major public health concern around the world. Currently, there is no licensed vaccine or antiviral drug available for the prevention and treatment of dengue disease. Moreover, clinical features of dengue are indistinguishable from other infectious diseases such as malaria, chikungunya, rickettsia and leptospira. Therefore, prompt and accurate laboratory diagnostic test is urgently required for disease confirmation and patient triage. The traditional diagnostic techniques for the dengue virus are viral detection in cell culture, serological testing, and RNA amplification using reverse transcriptase PCR. This paper discusses the conventional laboratory methods used for the diagnosis of dengue during the acute and convalescent phase and highlights the advantages and limitations of these routine laboratory tests. Subsequently, the biosensor based assays developed using various transducers for the detection of dengue are also reviewed.

  2. An RSS based location estimation technique for cognitive relay networks

    KAUST Repository

    Qaraqe, Khalid A.

    2010-11-01

    In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine the location of the source using the direct and the relayed signal at the destination. We derive the Cramer-Rao lower bound (CRLB) expressions separately for x and y coordinates of the location estimate. We analyze the effects of cognitive behaviour of the relay on the performance of the proposed method. We also discuss and quantify the reliability of the location estimate using the proposed technique if the source is not stationary. The overall performance of the proposed method is presented through simulations. ©2010 IEEE.

  3. Whitelists Based Multiple Filtering Techniques in SCADA Sensor Networks

    Directory of Open Access Journals (Sweden)

    DongHo Kang

    2014-01-01

    Full Text Available The Internet of Things (IoT) consists of several tiny devices connected together to form a collaborative computing environment. Recently, IoT technologies have begun to merge with supervisory control and data acquisition (SCADA) sensor networks to more efficiently gather and analyze real-time data from sensors in industrial environments. But SCADA sensor networks are becoming more and more vulnerable to cyber-attacks due to increased connectivity. To safely adopt IoT technologies in SCADA environments, it is important to improve the security of SCADA sensor networks. In this paper we propose a multiple filtering technique based on whitelists to detect illegitimate packets. Our proposed system detects the traffic of network and application protocol attacks with a set of whitelists collected from normal traffic.

  4. Model assessment using a multi-metric ranking technique

    Science.gov (United States)

    Fitzpatrick, P. J.; Lau, Y.; Alaka, G.; Marks, F.

    2017-12-01

    Validation comparisons of multiple models present challenges when skill levels are similar, especially in regimes dominated by the climatological mean. Assessing skill separation requires advanced validation metrics and identifying adeptness in extreme events, while maintaining simplicity for management decisions. Flexibility for operations is also an asset. This work postulates a weighted tally and consolidation technique which ranks results by multiple types of metrics. Variables include absolute error, bias, acceptable absolute error percentages, outlier metrics, model efficiency, Pearson correlation, Kendall's Tau, reliability index, multiplicative gross error, and root mean squared differences. Other metrics, such as root mean square difference and rank correlation, were also explored but removed when the information was found to be generally duplicative of other metrics. While equal weights are applied, the weights could be altered depending on the preferred metrics. Two examples are shown comparing ocean models' currents and tropical cyclone products, including experimental products. The importance of using magnitude and direction for tropical cyclone track forecasts instead of distance, along-track, and cross-track errors is discussed. Tropical cyclone intensity and structure prediction are also assessed. Vector correlations are not included in the ranking process, but were found useful in an independent context and will be briefly reported.
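
    The weighted tally idea can be sketched with a handful of the listed metrics: score each candidate model per metric, rank the models within each metric, and consolidate the ranks with weights. The forecast and observation arrays below are toy numbers, and only a subset of the metrics named above is used.

```python
# Sketch of a multi-metric ranking tally: score candidate models on several
# metrics, rank them per metric, and consolidate ranks with (equal) weights.
import numpy as np

obs = np.array([2.1, 3.4, 5.0, 4.2, 6.1, 5.5])
forecasts = {
    "model_A": np.array([2.0, 3.0, 5.5, 4.0, 6.5, 5.0]),
    "model_B": np.array([2.5, 3.9, 4.6, 4.9, 5.6, 6.2]),
    "model_C": np.array([1.5, 2.8, 4.1, 3.5, 5.2, 4.8]),
}

def metrics(f, o):
    err = f - o
    return {
        "mean_abs_error": np.mean(np.abs(err)),          # lower is better
        "abs_bias": abs(np.mean(err)),                   # lower is better
        "neg_correlation": -np.corrcoef(f, o)[0, 1],     # negated so lower is better
        "rmse": np.sqrt(np.mean(err ** 2)),              # lower is better
    }

table = {name: metrics(f, obs) for name, f in forecasts.items()}
names = list(table)
weights = {m: 1.0 for m in next(iter(table.values()))}   # equal weights per metric

tally = {name: 0.0 for name in names}
for m, w in weights.items():
    order = sorted(names, key=lambda n: table[n][m])     # rank 1 = best for this metric
    for rank, name in enumerate(order, start=1):
        tally[name] += w * rank

for name in sorted(tally, key=tally.get):
    print(f"{name}: consolidated rank score = {tally[name]:.1f}")
```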

  5. Total laparoscopic gastrocystoplasty: experimental technique in a porcine model

    Directory of Open Access Journals (Sweden)

    Frederico R. Romero

    2007-02-01

    Full Text Available OBJECTIVE: Describe a unique simplified experimental technique for total laparoscopic gastrocystoplasty in a porcine model. MATERIAL AND METHODS: We performed laparoscopic gastrocystoplasty on 10 animals. The gastroepiploic arch was identified and carefully mobilized from its origin at the pylorus to the beginning of the previously demarcated gastric wedge. The gastric segment was resected with sharp dissection. Both gastric suturing and gastrovesical anastomosis were performed with absorbable running sutures. The complete procedure and stages of gastric dissection, gastric closure, and gastrovesical anastomosis were separately timed for each laparoscopic gastrocystoplasty. The end-result of the gastric suturing and the bladder augmentation was evaluated by fluoroscopy or endoscopy. RESULTS: Mean total operative time was 5.2 (range 3.5 - 8) hours: 84.5 (range 62 - 110) minutes for the gastric dissection, 56 (range 28 - 80) minutes for the gastric suturing, and 170.6 (range 70 to 200) minutes for the gastrovesical anastomosis. A cystogram showed a small leakage from the vesical anastomosis in the first two cases. No extravasation from gastric closure was observed in the postoperative gastrogram. CONCLUSIONS: Total laparoscopic gastrocystoplasty is a feasible but complex procedure that currently has limited clinical application. With the increasing use of laparoscopy in reconstructive surgery of the lower urinary tract, gastrocystoplasty may become an attractive option because of its potential advantages over techniques using small and large bowel segments.

  6. An investigation of a video-based patient repositioning technique

    International Nuclear Information System (INIS)

    Yan Yulong; Song Yulin; Boyer, Arthur L.

    2002-01-01

    Purpose: We have investigated a video-based patient repositioning technique designed to use skin features for radiotherapy repositioning. We investigated the feasibility of the clinical application of this system by quantitative evaluation of performance characteristics of the methodology. Methods and Materials: Multiple regions of interest (ROI) were specified in the field of view of video cameras. We used a normalized correlation pattern-matching algorithm to compute the translations of each ROI pattern in a target image. These translations were compared against trial translations using a quadratic cost function for an optimization process in which the patient rotation and translational parameters were calculated. Results: A hierarchical search technique achieved high-speed (compute correlation for 128x128 ROI in 512x512 target image within 0.005 s) and subpixel spatial accuracy (as high as 0.2 pixel). By treating the observed translations as movements of points on the surfaces of a hypothetical cube, we were able to estimate accurately the actual translations and rotations of the test phantoms used in our experiments to less than 1 mm and 0.2 deg. with a standard deviation of 0.3 mm and 0.5 deg. respectively. For human volunteer cases, we estimated the translations and rotations to have an accuracy of 2 mm and 1.2 deg. Conclusion: A personal computer-based video system is suitable for routine patient setup of fractionated conformal radiotherapy. It is expected to achieve high-precision repositioning of the skin surface with high efficiency
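
    The core of the matching step described above, normalized correlation of an ROI pattern against a target image, can be sketched at integer-pixel precision as below. This is a generic, unoptimized normalized cross-correlation search, not the hierarchical subpixel implementation of the study.

```python
# Minimal normalized cross-correlation sketch: locate a small ROI pattern
# inside a larger target image at integer-pixel precision.
import numpy as np

def normalized_correlation(target, roi):
    th, tw = target.shape
    rh, rw = roi.shape
    roi_z = roi - roi.mean()
    roi_norm = np.sqrt((roi_z ** 2).sum())
    best, best_pos = -np.inf, (0, 0)
    for i in range(th - rh + 1):
        for j in range(tw - rw + 1):
            win = target[i:i + rh, j:j + rw]
            win_z = win - win.mean()
            denom = np.sqrt((win_z ** 2).sum()) * roi_norm
            if denom > 0:
                score = (win_z * roi_z).sum() / denom
                if score > best:
                    best, best_pos = score, (i, j)
    return best_pos, best

rng = np.random.default_rng(5)
target = rng.random((64, 64))
roi = target[20:36, 30:46].copy()               # take a known patch as the ROI
print(normalized_correlation(target, roi))      # expect position (20, 30), score ~1.0
```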

  7. CANDU in-reactor quantitative visual-based inspection techniques

    Science.gov (United States)

    Rochefort, P. A.

    2009-02-01

    This paper describes two separate visual-based inspection procedures used at CANDU nuclear power generating stations. The techniques are quantitative in nature and are delivered and operated in highly radioactive environments with access that is restrictive, and in one case is submerged. Visual-based inspections at stations are typically qualitative in nature. For example a video system will be used to search for a missing component, inspect for a broken fixture, or locate areas of excessive corrosion in a pipe. In contrast, the methods described here are used to measure characteristic component dimensions that in one case ensure ongoing safe operation of the reactor and in the other support reactor refurbishment. CANDU reactors are Pressurized Heavy Water Reactors (PHWR). The reactor vessel is a horizontal cylindrical low-pressure calandria tank approximately 6 m in diameter and length, containing heavy water as a neutron moderator. Inside the calandria, 380 horizontal fuel channels (FC) are supported at each end by integral end-shields. Each FC holds 12 fuel bundles. The heavy water primary heat transport water flows through the FC pressure tube, removing the heat from the fuel bundles and delivering it to the steam generator. The general design of the reactor governs both the type of measurements that are required and the methods to perform the measurements. The first inspection procedure is a method to remotely measure the gap between FC and other in-core horizontal components. The technique involves delivering vertically a module with a high-radiation-resistant camera and lighting into the core of a shutdown but fuelled reactor. The measurement is done using a line-of-sight technique between the components. Compensation for image perspective and viewing elevation to the measurement is required. The second inspection procedure measures flaws within the reactor's end shield FC calandria tube rolled joint area. The FC calandria tube (the outer shell of the FC) is

  8. A Visual Attention Model Based Image Fusion

    OpenAIRE

    Rishabh Gupta; M.R.Vimala Devi; M. Devi

    2013-01-01

    To develop an efficient image fusion algorithm based on visual attention model for images with distinct objects. Image fusion is a process of combining complementary information from multiple images of the same scene into an image, so that the resultant image contains a more accurate description of the scene than any of the individual source images. The two basic fusion techniques are pixel level and region level fusion. Pixel level fusion deals with the operations on each and every pixel sep...

  9. Coupled Numerical Methods to Analyze Interacting Acoustic-Dynamic Models by Multidomain Decomposition Techniques

    Directory of Open Access Journals (Sweden)

    Delfim Soares

    2011-01-01

    Full Text Available In this work, the coupled numerical analysis of interacting acoustic and dynamic models is addressed. In this context, several numerical methods, such as the finite difference method, the finite element method, the boundary element method, meshless methods, and so forth, are considered to model each subdomain of the coupled model, and multidomain decomposition techniques are applied to deal with the coupling relations. Two basic coupling algorithms are discussed here, namely the explicit direct coupling approach and the implicit iterative coupling approach, which are formulated based on explicit/implicit time-marching techniques. Completely independent spatial and temporal discretizations among the interacting subdomains are permitted, allowing optimal discretization for each sub-domain of the model to be considered. At the end of the paper, numerical results are presented, illustrating the performance and potentialities of the discussed methodologies.

  10. Analysis on the Metrics used in Optimizing Electronic Business based on Learning Techniques

    Directory of Open Access Journals (Sweden)

    Irina-Steliana STAN

    2014-09-01

    Full Text Available The present paper proposes a methodology for analyzing the metrics related to electronic business. The drafts of the optimizing models include KPIs that can highlight the specifics of the business, provided they are integrated using learning-based techniques. Having identified the most important and high-impact elements of the business, the models should ultimately capture the links between them by automating business flows. Human resources will increasingly collaborate with the optimizing models, which will translate into high-quality decisions followed by increased profitability.

  11. An improved visualization-based force-measurement technique for short-duration hypersonic facilities

    Energy Technology Data Exchange (ETDEWEB)

    Laurence, Stuart J.; Karl, Sebastian [Institute of Aerodynamics and Flow Technology, Spacecraft Section, German Aerospace Center (DLR), Goettingen (Germany)

    2010-06-15

    This article is concerned with describing and exploring the limitations of an improved version of a recently proposed visualization-based technique for the measurement of forces and moments in short-duration hypersonic wind tunnels. The technique is based on tracking the motion of a free-flying body over a sequence of high-speed visualizations; while this idea is not new in itself, the use of high-speed digital cinematography combined with a highly accurate least-squares tracking algorithm allows improved results over what has previously been possible with such techniques. The technique precision is estimated through the analysis of artificially constructed and experimental test images, and the resulting error in acceleration measurements is characterized. For wind-tunnel scale models, position measurements to within a few microns are shown to be readily attainable. Image data from two previous experimental studies in the T5 hypervelocity shock tunnel are then reanalyzed with the improved technique: the uncertainty in the mean drag acceleration is shown to be reduced to the order of the flow unsteadiness, 2-3%, and time-resolved acceleration measurements are also shown to be possible. The response time of the technique for the configurations studied is estimated to be approximately 0.5 ms. Comparisons with computations using the DLR TAU code also yield agreement to within the overall experimental uncertainty. Measurement of the pitching moment for blunt geometries still appears challenging, however. (orig.)
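
    At its simplest, the measurement principle reduces to recovering the mean acceleration from a tracked position history and converting it to a force. The sketch below fits a quadratic to a synthetic position record; the frame rate, model mass and acceleration are made-up illustrative values, not tunnel data, and the actual technique uses a far more elaborate least-squares tracker.

```python
# Sketch: recover mean drag acceleration of a free-flying model from a
# tracked position history by fitting x(t) = x0 + v0*t + 0.5*a*t^2.
import numpy as np

rng = np.random.default_rng(2)
fps = 20000.0                                   # high-speed camera frame rate, assumed
t = np.arange(100) / fps                        # 5 ms of frames
a_true, v0, x0 = -120.0, 0.0, 0.0               # m/s^2, m/s, m (made up)
x = x0 + v0 * t + 0.5 * a_true * t ** 2 + rng.normal(0, 2e-6, t.size)  # micron-level noise

coeffs = np.polyfit(t, x, 2)                    # least-squares quadratic fit
a_est = 2.0 * coeffs[0]
mass = 0.15                                     # model mass in kg, assumed
print(f"estimated acceleration = {a_est:8.1f} m/s^2   drag force ~ {mass * abs(a_est):.2f} N")
```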

  12. Reliability Assessment of Wind Farm Electrical System Based on a Probability Transfer Technique

    Directory of Open Access Journals (Sweden)

    Hejun Yang

    2018-03-01

    Full Text Available The electrical system of a wind farm has a significant influence on the wind farm reliability and electrical energy yield. The disconnect switches installed in an electrical system can not only improve the operating flexibility, but also enhance the reliability of a wind farm. Therefore, this paper develops a probabilistic transfer technique for integrating the electrical topology structure, the isolation operation of disconnect switches, and the stochastic failure of electrical equipment into the reliability assessment of the wind farm electrical system. Firstly, as the traditional two-state reliability model of electrical equipment cannot consider the isolation operation, the paper develops a three-state reliability model to replace the two-state model and incorporate the isolation operation. In addition, a proportion apportion technique is presented to evaluate the state probabilities. Secondly, the paper develops a probabilistic transfer technique based on the idea of transferring the unreliability of the electrical system to the energy transmission interruption of the wind turbine generators (WTGs). Finally, some novel indices for describing the reliability of the wind farm electrical system are designed, and the variance coefficient of the designed indices is used as a convergence criterion to determine the termination of the assessment process. The proposed technique is applied to the reliability assessment of a wind farm with different topologies. The simulation results show that the proposed techniques are effective in practical applications.

  13. A novel technique for active vibration control, based on optimal

    Indian Academy of Sciences (India)

    In the last few decades, researchers have proposed many control techniques to suppress unwanted vibrations in a structure. In this work, a novel and simple technique is proposed for the active vibration control. In this technique, an optimal tracking control is employed to suppress vibrations in a structure by simultaneously ...

  14. Néron Models and Base Change

    DEFF Research Database (Denmark)

    Halle, Lars Halvard; Nicaise, Johannes

    Presenting the first systematic treatment of the behavior of Néron models under ramified base change, this book can be read as an introduction to various subtle invariants and constructions related to Néron models of semi-abelian varieties, motivated by concrete research problems and complemented...... with explicit examples. Néron models of abelian and semi-abelian varieties have become an indispensable tool in algebraic and arithmetic geometry since Néron introduced them in his seminal 1964 paper. Applications range from the theory of heights in Diophantine geometry to Hodge theory. We focus specifically...... on Néron component groups, Edixhoven’s filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions...

  15. The effects of processing techniques on magnesium-based composite

    Science.gov (United States)

    Rodzi, Siti Nur Hazwani Mohamad; Zuhailawati, Hussain

    2016-12-01

    The aim of this study is to investigate the effect of processing techniques on the densification, hardness and compressive strength of Mg alloy and Mg-based composite for biomaterial application. The control sample (pure Mg) and Mg-based composite (Mg-Zn/HAp) were fabricated through a mechanical alloying process using a high energy planetary mill, whilst another Mg-Zn/HAp composite was fabricated through double step processing (the matrix Mg-Zn alloy was fabricated by planetary mill, subsequently HAp was dispersed by roll mill). As-milled powder was then consolidated by cold press into 10 mm diameter pellets under 400 MPa compaction pressure before being sintered at 300 °C for 1 hour under the flow of argon. The densification of the sintered pellets was then determined by the Archimedes principle. Mechanical properties of the sintered pellets were characterized by microhardness and compression tests. The results show that the density of the pellets was significantly increased by the addition of HAp, but the highest density was observed when the sample was fabricated through double step processing (1.8046 g/cm3). A slight increment in hardness and ultimate compressive strength was observed for the Mg-Zn/HAp composite fabricated through double step processing (58.09 HV, 132.19 MPa), as compared to Mg-Zn/HAp produced through single step processing (47.18 HV, 122.49 MPa).

  16. Acellular dermal matrix based nipple reconstruction: A modified technique

    Directory of Open Access Journals (Sweden)

    Raghavan Vidya

    2017-09-01

    Full Text Available Nipple areolar reconstruction (NAR) has evolved with the advancement in breast reconstruction and can improve self-esteem and, consequently, patient satisfaction. Although a variety of reconstruction techniques have been described in the literature, varying from nipple sharing and local flaps to alloplastic and allograft augmentation, loss of nipple projection over time remains a major problem. Acellular dermal matrices (ADM) have revolutionised breast reconstruction more recently. We discuss the use of ADM to act as a base plate and strut to give support to the base and offer nipple bulk and projection in a primary procedure of NAR with a local clover shaped dermal flap in 5 breasts (4 patients). We used 5-point Likert scales (1 = highly unsatisfied, 5 = highly satisfied) to assess patient satisfaction. Median age was 46 years (range: 38–55 years). Nipple projections of 8 mm, 7 mm, and 7 mm were achieved in the unilateral cases and 6 mm in the bilateral case over a median 18-month period. All patients reported at least a 4 on the Likert scale. We had no post-operative complications. It seems that nipple areolar reconstruction [NAR] using ADM can achieve nipple projection which is considered aesthetically pleasing for patients.

  17. Anthropometrically-Based Surgical Technique for Tessier 3 Cleft Reconstruction.

    Science.gov (United States)

    Morgan, Aaron L; Cason, Roger; El Amm, Christian A

    2016-11-01

    Craniofacial clefts are rare entities, with an incidence reported as 1.43 to 4.85 per 100,000 births. The Tessier number 3 cleft, the most medial of the oblique clefts, can manifest as clefting of the lip between the canine and lateral incisors, colobomas of the nasal ala and lower eyelid, and inferior displacement of the medial canthus-frequently disrupting the lacrimal system with extreme variability in expressivity (Eppley).Literature on cleft lip repair is extensive and has evolved to incorporate anthropometric techniques, based on identifiable landmarks and anthropometric measurements that are compared with contralateral unaffected anatomy or population means and tracked over time to assess impact on growth. Recent focus has been placed on "subunit" repair that repairs "like with like." These approaches have resulted in a remarkable reproducibility of methods and outcomes.Facial cleft surgery publications are sparse due to the rarity of the disorders, and consensus has yet to develop on standardized landmarks, reference measurements, and principles of repair. The authors describe a method of correcting incomplete unilateral Tessier 3 cleft based on the principles described above. Intraoperative photographs, including secondary revisions, as well as immediate and long-term postoperative results are presented.

  18. Crack identification based on synthetic artificial intelligent technique

    International Nuclear Information System (INIS)

    Shim, Mun Bo; Suh, Myung Won

    2001-01-01

    It has been established that a crack has an important effect on the dynamic behavior of a structure. This effect depends mainly on the location and depth of the crack. To identify the location and depth of a crack in a structure, a method is presented in this paper which uses a synthetic artificial intelligence technique: an Adaptive-Network-based Fuzzy Inference System (ANFIS), trained via a hybrid learning algorithm (the back-propagation gradient descent and the least-squares method), is used to learn the input (the location and depth of a crack)-output (the structural eigenfrequencies) relation of the structural system. With this ANFIS and a Continuous Evolutionary Algorithm (CEA), it is possible to formulate the inverse problem. CEAs based on genetic algorithms work efficiently for continuous search space optimization problems like the parameter identification problem. With the ANFIS, CEAs are used to identify the crack location and depth by minimizing the difference from the measured frequencies. We have tried this new idea on a simple beam structure and the results are promising.

  19. Electron tomography based on a total variation minimization reconstruction technique

    Energy Technology Data Exchange (ETDEWEB)

    Goris, B., E-mail: bart.goris@ua.ac.be [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van den Broek, W. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [Centrum Wiskunde and Informatica, Science Park 123, NL-1098XG Amsterdam (Netherlands); Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Heidari Mezerji, H.; Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium)

    2012-02-15

    The 3D reconstruction of a tilt series for electron tomography is mostly carried out using the weighted backprojection (WBP) algorithm or using one of the iterative algorithms such as the simultaneous iterative reconstruction technique (SIRT). However, it is known that these reconstruction algorithms cannot compensate for the missing wedge. Here, we apply a new reconstruction algorithm for electron tomography, which is based on compressive sensing. This is a field in image processing specialized in finding a sparse solution or a solution with a sparse gradient to a set of ill-posed linear equations. Therefore, it can be applied to electron tomography where the reconstructed objects often have a sparse gradient at the nanoscale. Using a combination of different simulated and experimental datasets, it is shown that missing wedge artefacts are reduced in the final reconstruction. Moreover, it seems that the reconstructed datasets have a higher fidelity and are easier to segment in comparison to reconstructions obtained by more conventional iterative algorithms. -- Highlights: • A reconstruction algorithm for electron tomography is investigated based on total variation minimization. • Missing wedge artefacts are reduced by this algorithm. • The reconstruction is easier to segment. • More reliable quantitative information can be obtained.

  20. A Modified Particle Swarm Optimization Technique for Finding Optimal Designs for Mixture Models

    Science.gov (United States)

    Wong, Weng Kee; Chen, Ray-Bing; Huang, Chien-Chih; Wang, Weichung

    2015-01-01

    Particle Swarm Optimization (PSO) is a meta-heuristic algorithm that has been shown to be successful in solving a wide variety of real and complicated optimization problems in engineering and computer science. This paper introduces a projection based PSO technique, named ProjPSO, to efficiently find different types of optimal designs, or nearly optimal designs, for mixture models with and without constraints on the components, and also for related models, like the log contrast models. We also compare the modified PSO performance with Fedorov's algorithm, a popular algorithm used to generate optimal designs, the Cocktail algorithm, and the recent algorithm proposed by [1]. PMID:26091237
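
    A hypothetical minimal particle swarm sketch for a mixture setting, where each candidate design point is projected back onto the probability simplex so that components stay non-negative and sum to one. The objective below is a toy placeholder, not a real design-optimality criterion from the paper, and the crude clip-and-renormalise projection is an assumption for illustration.

        import numpy as np

        rng = np.random.default_rng(2)

        def project_to_simplex(x):
            # Crude projection used only for illustration: clip, then renormalise each point.
            x = np.clip(x, 1e-9, None)
            return x / x.sum(axis=-1, keepdims=True)

        def objective(points):
            # Toy criterion (assumed): spread the design points apart (maximin distance).
            d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
            return -d[np.triu_indices(len(points), 1)].min()

        n_particles, n_points, n_comp = 30, 4, 3
        pos = project_to_simplex(rng.random((n_particles, n_points, n_comp)))
        vel = np.zeros_like(pos)
        pbest, pbest_val = pos.copy(), np.array([objective(p) for p in pos])
        gbest = pbest[pbest_val.argmin()]

        for _ in range(200):
            r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = project_to_simplex(pos + vel)
            vals = np.array([objective(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()]

        print("best toy mixture design:")
        print(gbest)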

  1. Floating Node Method and Virtual Crack Closure Technique for Modeling Matrix Cracking-Delamination Migration

    Science.gov (United States)

    DeCarvalho, Nelson V.; Chen, B. Y.; Pinho, Silvestre T.; Baiz, P. M.; Ratcliffe, James G.; Tay, T. E.

    2013-01-01

    A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.

  2. Comparing modelling techniques when designing VPH gratings for BigBOSS

    Science.gov (United States)

    Poppett, Claire; Edelstein, Jerry; Lampton, Michael; Jelinsky, Patrick; Arns, James

    2012-09-01

    BigBOSS is a Stage IV Dark Energy instrument based on the Baryon Acoustic Oscillations (BAO) and Red Shift Distortions (RSD) techniques, using spectroscopic data of 20 million ELG and LRG galaxies at redshifts of 0.5 and above. Volume Phase Holographic (VPH) gratings have been identified as a key technology which will enable the efficiency requirement to be met; however, it is important to be able to accurately predict their performance. In this paper we quantitatively compare different modelling techniques in order to assess the parameter space over which they are more capable of accurately predicting measured performance. Finally we present baseline parameters for grating designs that are most suitable for the BigBOSS instrument.

  3. A characteristic study of CCF modeling techniques and optimization of CCF defense strategies

    International Nuclear Information System (INIS)

    Kim, Min Chull

    2000-02-01

    Common Cause Failures (CCFs) are among the major contributors to risk and core damage frequency (CDF) from operating nuclear power plants (NPPs). Our study on CCF focused on the following aspects: 1) a characteristic study on the CCF modeling techniques and 2) development of the optimal CCF defense strategy. Firstly, the characteristics of CCF modeling techniques were studied through a sensitivity study of the CCF occurrence probability with respect to system redundancy. The modeling techniques considered in this study include those most widely used worldwide, i.e., beta factor, MGL, alpha factor, and binomial failure rate models. We found that MGL and alpha factor models are essentially identical in terms of the CCF probability. Secondly, in the study for CCF defense, the various methods identified in the previous studies for defending against CCF were classified into five different categories. Based on these categories, we developed a generic method by which the optimal CCF defense strategy can be selected. The method is not only qualitative but also quantitative in nature: the selection of the optimal strategy among candidates is based on the use of the analytic hierarchy process (AHP). We applied this method to two motor-driven valves for containment sump isolation in Ulchin 3 and 4 nuclear power plants. The result indicates that the method for developing an optimal CCF defense strategy is effective.
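
    To make the parametric CCF models concrete, here is a small hedged sketch (the numbers are illustrative, not from the study) computing the probability that all components of a redundant group fail together under the beta factor model and under the alpha factor model for the group-size event; the exact MGL/alpha conventions used in a plant study depend on group size and data treatment.

        # Illustrative numbers only (not from the study).
        q_total = 1.0e-3                 # total failure probability of one component
        beta = 0.1                       # beta factor: fraction of failures that are common cause

        # Beta factor model: probability that ALL components in the group fail together.
        q_ccf_beta = beta * q_total

        # Alpha factor model for a group of m components (non-staggered testing):
        # Q_m = m * alpha_m / alpha_t * q_total, with alpha_t = sum_k k * alpha_k.
        m = 3
        alpha = {1: 0.95, 2: 0.03, 3: 0.02}          # assumed alpha parameters
        alpha_t = sum(k * a for k, a in alpha.items())
        q_ccf_alpha = m * alpha[m] / alpha_t * q_total

        print(f"beta factor CCF probability : {q_ccf_beta:.2e}")
        print(f"alpha factor CCF probability: {q_ccf_alpha:.2e}")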

  4. Ground-based intercomparison of two isoprene measurement techniques

    Directory of Open Access Journals (Sweden)

    E. Leibrock

    2003-01-01

    Full Text Available An informal intercomparison of two isoprene (C5H8 measurement techniques was carried out during Fall of 1998 at a field site located approximately 3 km west of Boulder, Colorado, USA. A new chemical ionization mass spectrometric technique (CIMS was compared to a well-established gas chromatographic technique (GC. The CIMS technique utilized benzene cation chemistry to ionize isoprene. The isoprene levels measured by the CIMS were often larger than those obtained with the GC. The results indicate that the CIMS technique suffered from an anthropogenic interference associated with air masses from the Denver, CO metropolitan area as well as an additional interference occurring in clean conditions. However, the CIMS technique is also demonstrated to be sensitive and fast. Especially after introduction of a tandem mass spectrometric technique, it is therefore a candidate for isoprene measurements in remote environments near isoprene sources.

  5. The phase field technique for modeling multiphase materials

    Science.gov (United States)

    Singer-Loginova, I.; Singer, H. M.

    2008-10-01

    This paper reviews methods and applications of the phase field technique, one of the fastest growing areas in computational materials science. The phase field method is used as a theory and computational tool for predictions of the evolution of arbitrarily shaped morphologies and complex microstructures in materials. In this method, the interface between two phases (e.g. solid and liquid) is treated as a region of finite width having a gradual variation of different physical quantities, i.e. it is a diffuse interface model. An auxiliary variable, the phase field or order parameter φ(x), is introduced, which distinguishes one phase from the other. Interfaces are identified by the variation of the phase field. We begin by presenting the physical background of the phase field method and give a detailed thermodynamical derivation of the phase field equations. We demonstrate how equilibrium and non-equilibrium physical phenomena at the phase interface are incorporated into the phase field methods. Then we address in detail dendritic and directional solidification of pure and multicomponent alloys, effects of natural convection and forced flow, grain growth, nucleation, solid-solid phase transformation and highlight other applications of the phase field methods. In particular, we review the novel phase field crystal model, which combines atomistic length scales with diffusive time scales. We also discuss aspects of quantitative phase field modeling such as thin interface asymptotic analysis and coupling to thermodynamic databases. The phase field methods result in a set of partial differential equations, whose solutions require time-consuming large-scale computations and often limit the applicability of the method. Subsequently, we review numerical approaches to solve the phase field equations and present a finite difference discretization of the anisotropic Laplacian operator.
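
    The diffuse-interface idea can be illustrated with a minimal 1D Allen-Cahn relaxation sketch (an assumption made here for illustration, far simpler than the solidification models reviewed in the paper): the order parameter φ evolves by gradient descent on a double-well free energy plus a gradient-energy term.

        import numpy as np

        # 1D Allen-Cahn: d(phi)/dt = eps^2 * d2(phi)/dx2 - W'(phi), with W(phi) = (phi^2 - 1)^2 / 4
        n, dx, dt, eps = 200, 0.05, 1e-4, 0.1
        x = np.linspace(0, (n - 1) * dx, n)
        phi = np.tanh((x - x.mean()) / eps) + 0.05 * np.random.default_rng(3).standard_normal(n)

        for _ in range(5000):
            lap = (np.roll(phi, 1) - 2 * phi + np.roll(phi, -1)) / dx**2   # periodic Laplacian
            phi += dt * (eps**2 * lap - (phi**3 - phi))                    # -W'(phi) = phi - phi^3

        print("phase values span:", phi.min(), phi.max())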

  6. Event-Based Conceptual Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2009-01-01

    The purpose of the paper is to obtain insight into and provide practical advice for event-based conceptual modeling. We analyze a set of event concepts and use the results to formulate a conceptual event model that is used to identify guidelines for creation of dynamic process models and static information models. We characterize events as short-duration processes that have participants, consequences, and properties, and that may be modeled in terms of information structures. The conceptual event model is used to characterize a variety of event concepts and it is used to illustrate how events can be used to integrate dynamic modeling of processes and static modeling of information structures. The results are unique in the sense that no other general event concept has been used to unify a similar broad variety of seemingly incompatible event concepts. The general event concept can be used...

  7. Active Vibration damping of Smart composite beams based on system identification technique

    Science.gov (United States)

    Bendine, Kouider; Satla, Zouaoui; Boukhoulda, Farouk Benallel; Nouari, Mohammed

    2018-03-01

    In the present paper, the active vibration control of a composite beam using a piezoelectric actuator is investigated. The state-space equation is determined using a system identification technique based on the structure's input-output response provided by the ANSYS APDL finite element package. The Linear Quadratic Gaussian (LQG) control law is designed and integrated into ANSYS APDL to perform closed-loop simulations. Numerical examples for different types of excitation loads are presented to test the efficiency and the accuracy of the proposed model.

  8. Temperature Control of Gas Chromatograph Based on Switched Delayed System Techniques

    Directory of Open Access Journals (Sweden)

    Xiao-Liang Wang

    2014-01-01

    Full Text Available We address the temperature control problem of the gas chromatograph. We model the temperature control system of the gas chromatograph as a switched delayed system and analyze its stability by a common Lyapunov functional technique. The PI controller parameters can be given based on the proposed linear matrix inequalities (LMIs) condition, and the designed controller can make the temperature of the gas chromatograph track the reference signal asymptotically. An experiment is given to illustrate the effectiveness of the stability criterion.
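
    As a purely illustrative companion (not the LMI-based design from the paper), the following sketch simulates a discrete PI controller tracking a temperature set point for a hypothetical first-order oven model with a small actuation delay; gains, time constants and the anti-windup-free integrator are all assumptions.

        import numpy as np

        # Hypothetical oven: first-order lag with a small input delay (discrete time).
        dt, tau, gain, delay_steps = 1.0, 120.0, 2.0, 5
        kp, ki = 2.0, 0.02                      # PI gains chosen by hand for the toy model
        setpoint = 150.0                        # degrees C

        temp, integral = 25.0, 0.0
        u_history = [0.0] * delay_steps
        for step in range(1200):
            error = setpoint - temp
            integral += error * dt
            u = np.clip(kp * error + ki * integral, 0.0, 100.0)   # heater power, saturated
            u_history.append(u)
            u_delayed = u_history.pop(0)                          # actuation delay
            temp += dt / tau * (-(temp - 25.0) + gain * u_delayed)

        print(f"temperature after {1200 * dt:.0f} s: {temp:.1f} C")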

  9. A linac-based stereotactic irradiation technique of uveal melanoma

    International Nuclear Information System (INIS)

    Dieckmann, Karin; Bogner, Joachim; Georg, Dietmar; Zehetmayer, Martin; Kren, Gerhard; Poetter, Richard

    2001-01-01

    Purpose: To describe a stereotactic irradiation technique for uveal melanomas performed at a linac, based on a non-invasive eye fixation and eye monitoring system. Methods: For eye immobilization a light source system is integrated in a standard stereotactic mask system in front of the healthy eye: During treatment preparation (computed tomography/magnetic resonance imaging) as well as for treatment delivery, patients are instructed to gaze at the fixation light source. A mini-video camera monitors the pupil center position of the diseased eye. For treatment planning and beam delivery standard stereotactic radiotherapy equipment is used. If the pupil center deviation from a predefined 'zero-position' exceeds 1 mm (for more than 2 s), treatment delivery is interrupted. Between 1996 and 1999, 60 patients with uveal melanomas, where (i) tumor height exceeded 7 mm, or (ii) tumor height was more than 3 mm, and the central tumor distance to the optic disc and/or the macula was less than 3 mm, have been treated. A total dose of 60 or 70 Gy has been given in 5 fractions within 10 days. Results: The repositioning accuracy in the mask system is 0.47±0.36 mm in rostral-occipital direction, 0.75±0.52 mm laterally, and 1.12±0.96 mm in vertical direction. An eye movement analysis performed for 23 patients shows a pupil center deviation from the 'zero' position <1 mm in 91% of all cases investigated. In a theoretical analysis, pupil center deviations are correlated with GTV 'movements'. For a pupil center deviation of 1 mm (rotation of the globe of 5°) the GTV is still encompassed by the 80% isodose in 94%. Conclusion: For treatments of uveal melanomas, linac-based stereotactic radiotherapy combined with a non-invasive eye immobilization and monitoring system represents a feasible, accurate and reproducible method. Besides considerable technical requirements, the complexity of the treatment technique demands an interdisciplinary team continuously dedicated to this

  10. Event-Based Activity Modeling

    DEFF Research Database (Denmark)

    Bækgaard, Lars

    2004-01-01

    We present and discuss a modeling approach that supports event-based modeling of information and activity in information systems. Interacting human actors and IT-actors may carry out such activity. We use events to create meaningful relations between information structures and the related...

  11. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    In experimental musculoskeletal oncology, there remains a need for animal models that can be used to assess the efficacy of new and innovative treatment methodologies for bone tumors. Rat plays a very important role in the bone field especially in the evaluation of metabolic bone diseases. The objective of this study was to develop a rat osteosarcoma model for evaluation of new surgical and molecular methods of treatment for extremity sarcoma. One hundred male SD rats weighing 125.45±8.19 g were divided into 5 groups and anesthetized intraperitoneally with 10% chloral hydrate. Orthotopic implantation models of rat osteosarcoma were performed by injecting directly into the SD rat femur with a needle for inoculation with SD tumor cells. In the first step of the experiment, 2×10^5 to 1×10^6 UMR106 cells in 50 µl were injected intraosseously into the median or distal part of the femoral shaft and the tumor take rate was determined. The second stage consisted of determining tumor volume, correlating findings from ultrasound with findings from necropsia and determining time of survival. In the third stage, the orthotopically implanted tumors and lung nodules were resected entirely, sectioned, and then counterstained with hematoxylin and eosin for histopathologic evaluation. The tumor take rate was 100% for implants with 8×10^5 tumor cells or more, which was much less than the amount required for subcutaneous implantation, with a high lung metastasis rate of 93.0%. Ultrasound and necropsia findings matched closely (r=0.942; p<0.01), which demonstrated that Doppler ultrasonography is a convenient and reliable technique for measuring cancer at any stage. Tumor growth curve showed that orthotopically implanted tumors expanded vigorously with time-lapse, especially in the first 3 weeks. The median time of survival was 38 days and surgical mortality was 0%. The UMR106 cell line has strong carcinogenic capability and high lung metastasis frequency. The present rat

  12. Efficient techniques for wave-based sound propagation in interactive applications

    Science.gov (United States)

    Mehra, Ravish

    Sound propagation techniques model the effect of the environment on sound waves and predict their behavior from point of emission at the source to the final point of arrival at the listener. Sound is a pressure wave produced by mechanical vibration of a surface that propagates through a medium such as air or water, and the problem of sound propagation can be formulated mathematically as a second-order partial differential equation called the wave equation. Accurate techniques based on solving the wave equation, also called the wave-based techniques, are too expensive computationally and memory-wise. Therefore, these techniques face many challenges in terms of their applicability in interactive applications including sound propagation in large environments, time-varying source and listener directivity, and high simulation cost for mid-frequencies. In this dissertation, we propose a set of efficient wave-based sound propagation techniques that solve these three challenges and enable the use of wave-based sound propagation in interactive applications. Firstly, we propose a novel equivalent source technique for interactive wave-based sound propagation in large scenes spanning hundreds of meters. It is based on the equivalent source theory used for solving radiation and scattering problems in acoustics and electromagnetics. Instead of using a volumetric or surface-based approach, this technique takes an object-centric approach to sound propagation. The proposed equivalent source technique generates realistic acoustic effects and takes orders of magnitude less runtime memory compared to prior wave-based techniques. Secondly, we present an efficient framework for handling time-varying source and listener directivity for interactive wave-based sound propagation. The source directivity is represented as a linear combination of elementary spherical harmonic sources. This spherical harmonic-based representation of source directivity can support analytical, data
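
    A minimal sketch of what "solving the wave equation" means numerically, assuming a 1D finite-difference time-domain scheme with a Gaussian pulse source; the actual techniques in the dissertation (equivalent sources, spherical-harmonic directivity) are far more involved.

        import numpy as np

        # 1D acoustic wave equation: p_tt = c^2 * p_xx, solved with leapfrog finite differences.
        c, dx, nx, steps = 343.0, 0.05, 400, 800
        dt = 0.9 * dx / c                         # CFL-stable time step
        p_prev, p, p_next = (np.zeros(nx) for _ in range(3))

        for n in range(steps):
            lap = np.zeros(nx)
            lap[1:-1] = (p[2:] - 2 * p[1:-1] + p[:-2]) / dx**2
            p_next = 2 * p - p_prev + (c * dt)**2 * lap
            p_next[nx // 4] += np.exp(-((n * dt - 0.01) / 0.002)**2)   # Gaussian pulse source
            p_prev, p = p, p_next

        print("pressure at the 'listener' position:", p[3 * nx // 4])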

  13. Analysis of fluidized bed granulation process using conventional and novel modeling techniques.

    Science.gov (United States)

    Petrović, Jelena; Chansanroj, Krisanin; Meier, Brigitte; Ibrić, Svetlana; Betz, Gabriele

    2011-10-09

    Various modeling techniques have been applied to analyze fluidized-bed granulation process. Influence of various input parameters (product, inlet and outlet air temperature, consumption of liquid-binder, granulation liquid-binder spray rate, spray pressure, drying time) on granulation output properties (granule flow rate, granule size determined using light scattering method and sieve analysis, granules Hausner ratio, porosity and residual moisture) has been assessed. Both conventional and novel modeling techniques were used, such as screening test, multiple regression analysis, self-organizing maps, artificial neural networks, decision trees and rule induction. Diverse testing of developed models (internal and external validation) has been discussed. Good correlation has been obtained between the predicted and the experimental data. It has been shown that nonlinear methods based on artificial intelligence, such as neural networks, are far better in generalization and prediction in comparison to conventional methods. Possibility of usage of SOMs, decision trees and rule induction technique to monitor and optimize fluidized-bed granulation process has also been demonstrated. Obtained findings can serve as guidance to implementation of modeling techniques in fluidized-bed granulation process understanding and control. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Flexible multibody simulation of automotive systems with non-modal model reduction techniques

    Science.gov (United States)

    Shiiba, Taichi; Fehr, Jörg; Eberhard, Peter

    2012-12-01

    The stiffness of the body structure of an automobile has a strong relationship with its noise, vibration, and harshness (NVH) characteristics. In this paper, the effect of the stiffness of the body structure upon ride quality is discussed with flexible multibody dynamics. In flexible multibody simulation, the local elastic deformation of the vehicle has been described traditionally with modal shape functions. Recently, linear model reduction techniques from system dynamics and mathematics came into focus to find more sophisticated elastic shape functions. In this work, the NVH-relevant states of a racing kart are simulated, whereas the elastic shape functions are calculated with modern model reduction techniques like moment matching by projection on Krylov-subspaces, singular value decomposition-based reduction techniques, and combinations of those. The whole elastic multibody vehicle model consisting of tyres, steering, axle, etc. is considered, and an excitation with vibration characteristics over a wide frequency range is evaluated in this paper. The accuracy and the calculation performance of these modern model reduction techniques are investigated, including a comparison with the modal reduction approach.
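
    To illustrate the moment-matching idea mentioned above, here is a hedged numpy sketch that projects a small random stable state-space model onto a Krylov subspace built with an Arnoldi-style iteration; the elastic multibody models in the paper are of course much larger and are handled with specialised software.

        import numpy as np

        rng = np.random.default_rng(4)
        n, r = 60, 6                                          # full and reduced orders (toy sizes)

        A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))    # assumed stable-ish test matrix
        b = rng.standard_normal((n, 1))
        c = rng.standard_normal((1, n))

        # Arnoldi on A^{-1} b: matches moments of the transfer function about s = 0.
        V = np.zeros((n, r))
        v = np.linalg.solve(A, b).ravel()
        V[:, 0] = v / np.linalg.norm(v)
        for j in range(1, r):
            w = np.linalg.solve(A, V[:, j - 1])               # next Krylov direction
            w -= V[:, :j] @ (V[:, :j].T @ w)                  # Gram-Schmidt orthogonalisation
            V[:, j] = w / np.linalg.norm(w)

        Ar, br, cr = V.T @ A @ V, V.T @ b, c @ V              # projected (reduced) model

        s = 0.5j
        full = (c @ np.linalg.solve(s * np.eye(n) - A, b)).item()
        red = (cr @ np.linalg.solve(s * np.eye(r) - Ar, br)).item()
        print("transfer function at s = 0.5j, full vs reduced:", full, red)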

  15. Concepts and techniques for conducting performance-based audits

    International Nuclear Information System (INIS)

    Espy, I.J.

    1990-01-01

    Quality assurance (QA) audits have historically varied in purpose and approach and have earned labels that attempt to name each type of audit. Some more common labels for QA audits include compliance, program, product, and performance-based. While documentation and methodologies are important and hence controlled, an organization's product has ultimate impact on the user. Product quality then must be of more concern to the user than documentation and methodologies of the provider. Performance-based auditing (PBA) provides for assessing product quality by evaluating the supplier's activities that produce and affect product quality. PBA is defined as auditing that evaluates the ability of an activity to regularly produce and release only acceptable product, where product refers to the output of the activity. The output may be hardware, software, or a service, and acceptability includes suitability to the user's needs. To satisfy this definition, PBA should focus on the activities that produce and affect product and should evaluate the systematics of each activity in terms of its ability to produce acceptable product. The activity evaluation model provides a framework for evaluating the systematicness of any activity. Elements of the activity evaluation model are described.

  16. Reduced technique for modeling electromagnetic immunity on braid shielding cable bundles

    International Nuclear Information System (INIS)

    Xiao Pei; Du Ping-An; Nie Bao-Lin; Ren Dan

    2017-01-01

    In this paper, an efficient multi-conductor simplification technique is proposed to model the electromagnetic immunity on cable bundles within a braid shielding structure over a large frequency range. By grouping together the conductors based on the knowledge of the Z-Smith chart, the required computation time is markedly reduced and the complexity of modeling the completely shielded cable bundles is significantly simplified with good accuracy. After a brief description of the immunity problems in the shielding structure, a six-phase procedure is detailed to generate the geometrical characteristics of the reduced cable bundles. Numerical simulation is carried out using the commercial software CST to validate the efficiency and advantages of the proposed approach. The research addressed in this paper is considered as a simplified modeling technique for the electromagnetic immunity within a shielding structure. (paper)

  17. Finite-element-model updating using computational intelligence techniques applications to structural dynamics

    CERN Document Server

    Marwala, Tshilidzi

    2010-01-01

    Finite element models (FEMs) are widely used to understand the dynamic behaviour of various systems. FEM updating allows FEMs to be tuned better to reflect measured data and may be conducted using two different statistical frameworks: the maximum likelihood approach and Bayesian approaches. Finite Element Model Updating Using Computational Intelligence Techniques applies both strategies to the field of structural mechanics, an area vital for aerospace, civil and mechanical engineering. Vibration data is used for the updating process. Following an introduction a number of computational intelligence techniques to facilitate the updating process are proposed; they include: • multi-layer perceptron neural networks for real-time FEM updating; • particle swarm and genetic-algorithm-based optimization methods to accommodate the demands of global versus local optimization models; • simulated annealing to put the methodologies into a sound statistical basis; and • response surface methods and expectation m...

  18. Nonlinear modelling of polymer electrolyte membrane fuel cell stack using nonlinear cancellation technique

    International Nuclear Information System (INIS)

    Barus, R. P. P.; Tjokronegoro, H. A.; Leksono, E.; Ismunandar

    2014-01-01

    Fuel cells are promising new energy conversion devices that are friendly to the environment. A set of control systems is required in order to operate a fuel cell based power plant system optimally. For the purpose of control system design, an accurate fuel cell stack model that describes the dynamics of the real system is needed. Currently, linear models are widely used for fuel cell stack control purposes, but they are only valid over a narrow operating range, while nonlinear models lead to nonlinear control implementations that are more complex and computationally demanding. In this research, a nonlinear cancellation technique is used to transform a nonlinear model into a linear form while maintaining the nonlinear characteristics. The transformation is done by replacing the input of the original model by a certain virtual input that has a nonlinear relationship with the original input. Then the equivalence of the two models is tested by running a series of simulations. Input variations of H2, O2 and H2O as well as the disturbance input I (current load) are studied by simulation. The error between the proposed model and the original nonlinear model is less than 1%. Thus we can conclude that the nonlinear cancellation technique can be used to represent a fuel cell nonlinear model in a simple linear form while maintaining the nonlinear characteristics and therefore retaining the wide operation range.

  19. A Technique for Volumetric CSG Based on Morphology

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Christensen, Niels Jørgen

    2001-01-01

    In this paper, a new technique for volumetric CSG is presented. The technique requires the input volumes to correspond to solids which fulfill a voxelization suitability criterion. Assume the CSG operation is union. The volumetric union of two such volumes is defined in terms of the voxelization of the union of the two original solids. The theory behind the new technique is discussed, and the algorithm and implementation are presented. Finally, we present images and timings.
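
    In voxel terms, the basic CSG operations reduce to element-wise boolean combinations of occupancy grids. The sketch below is an assumption made for illustration (not the morphology-based algorithm of the paper): it voxelises two spheres on a common grid and forms their union, intersection and difference.

        import numpy as np

        # Voxelise two solid spheres on a common grid.
        n = 64
        idx = np.indices((n, n, n)) - n // 2
        sphere_a = (idx**2).sum(axis=0) <= 20**2
        sphere_b = ((idx - np.array([10, 0, 0]).reshape(3, 1, 1, 1))**2).sum(axis=0) <= 18**2

        union = sphere_a | sphere_b
        intersection = sphere_a & sphere_b
        difference = sphere_a & ~sphere_b

        for name, vol in [("union", union), ("intersection", intersection), ("difference", difference)]:
            print(f"{name:13s}: {vol.sum()} occupied voxels")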

  20. A new membrane-based crystallization technique: tests on lysozyme

    Science.gov (United States)

    Curcio, Efrem; Profio, Gianluca Di; Drioli, Enrico

    2003-01-01

    The great importance of protein science both in industrial and scientific fields, in conjunction with the intrinsic difficulty of growing macromolecular crystals, stimulates the development of new observations and ideas that can be useful in initiating more systematic studies using novel approaches. In this regard, an innovative technique, based on the employment of microporous hydrophobic membranes in order to promote the formation of lysozyme crystals from supersaturated solutions, is introduced in this work. Operational principles and possible advantages, both in terms of controlled extraction of solvent by acting on the concentration of the stripping solution and reduced induction times, are outlined. Theoretical developments and experimental results concerning the mass transfer, in vapour phase, through the membrane are presented, as well as the results from X-ray diffraction to 1.7 Å resolution of obtained lysozyme crystals using NaCl as the crystallizing agent and sodium acetate as the buffer. Crystals were found to be tetragonal with unit cell dimensions of a = b = 79.1 Å and c = 37.9 Å; the overall Rmerge on intensities in the resolution range from 25 to 1.7 Å was, in the best case, 4.4%.

  1. Model-based biosignal interpretation.

    Science.gov (United States)

    Andreassen, S

    1994-03-01

    Two relatively new approaches to model-based biosignal interpretation, qualitative simulation and modelling by causal probabilistic networks, are compared to modelling by differential equations. A major problem in applying a model to an individual patient is the estimation of the parameters. The available observations are unlikely to allow a proper estimation of the parameters, and even if they do, the task appears to have exponential computational complexity if the model is non-linear. Causal probabilistic networks have both differential equation models and qualitative simulation as special cases, and they can provide both Bayesian and maximum-likelihood parameter estimates, in most cases in much less than exponential time. In addition, they can calculate the probabilities required for a decision-theoretical approach to medical decision support. The practical applicability of causal probabilistic networks to real medical problems is illustrated by a model of glucose metabolism which is used to adjust insulin therapy in type I diabetic patients.

  2. DEVELOPMENT MODEL OF PATISSERIE PROJECT-BASED LEARNING

    OpenAIRE

    Ana Ana; Lutfhiyah Nurlaela

    2013-01-01

    The study aims to find a model of patisserie project-based learning with a production approach that can improve the effectiveness of patisserie learning. The Delphi Technique, Cohen's Kappa and percentages of agreement were used to assess the model of patisserie project-based learning. Data collection techniques employed in the study were questionnaires, checklist worksheets, observation, and interview sheets. Subjects were 13 lecturers with expertise in food and nutrition and 91 students of Food and Nutrition ...

  3. Model-based machine learning.

    Science.gov (United States)

    Bishop, Christopher M

    2013-02-13

    Several decades of research in the field of machine learning have resulted in a multitude of different algorithms for solving a broad range of problems. To tackle a new application, a researcher typically tries to map their problem onto one of these existing methods, often influenced by their familiarity with specific algorithms and by the availability of corresponding software implementations. In this study, we describe an alternative methodology for applying machine learning, in which a bespoke solution is formulated for each new application. The solution is expressed through a compact modelling language, and the corresponding custom machine learning code is then generated automatically. This model-based approach offers several major advantages, including the opportunity to create highly tailored models for specific scenarios, as well as rapid prototyping and comparison of a range of alternative models. Furthermore, newcomers to the field of machine learning do not have to learn about the huge range of traditional methods, but instead can focus their attention on understanding a single modelling environment. In this study, we show how probabilistic graphical models, coupled with efficient inference algorithms, provide a very flexible foundation for model-based machine learning, and we outline a large-scale commercial application of this framework involving tens of millions of users. We also describe the concept of probabilistic programming as a powerful software environment for model-based machine learning, and we discuss a specific probabilistic programming language called Infer.NET, which has been widely used in practical applications.

  4. Evaluation of 3-dimensional superimposition techniques on various skeletal structures of the head using surface models.

    Directory of Open Access Journals (Sweden)

    Nikolaos Gkantidis

    Full Text Available To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data, transformed to triangulated surface data. Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non-parametric multivariate models and Bland-Altman difference plots were used for analyses. There was no difference among operators or between time points on the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate (D0.05), the detected structural changes differed significantly between different techniques (p<0.05). Bland-Altman difference plots showed that BZ superimposition was comparable to AC, though it presented slightly higher random error. Superimposition of 3D datasets using surface models created from voxel data can provide accurate, precise, and reproducible results, offering also high efficiency and increased post-processing capabilities. In the present study population, the BZ superimposition was comparable to AC, with the added advantage of being applicable to scans with a smaller field of view.

  5. Evaluation of 3-dimensional superimposition techniques on various skeletal structures of the head using surface models.

    Science.gov (United States)

    Gkantidis, Nikolaos; Schauseil, Michael; Pazera, Pawel; Zorkun, Berna; Katsaros, Christos; Ludwig, Björn

    2015-01-01

    To test the applicability, accuracy, precision, and reproducibility of various 3D superimposition techniques for radiographic data, transformed to triangulated surface data. Five superimposition techniques (3P: three-point registration; AC: anterior cranial base; AC + F: anterior cranial base + foramen magnum; BZ: both zygomatic arches; 1Z: one zygomatic arch) were tested using eight pairs of pre-existing CT data (pre- and post-treatment). These were obtained from non-growing orthodontic patients treated with rapid maxillary expansion. All datasets were superimposed by three operators independently, who repeated the whole procedure one month later. Accuracy was assessed by the distance (D) between superimposed datasets on three form-stable anatomical areas, located on the anterior cranial base and the foramen magnum. Precision and reproducibility were assessed using the distances between models at four specific landmarks. Non parametric multivariate models and Bland-Altman difference plots were used for analyses. There was no difference among operators or between time points on the accuracy of each superimposition technique (p>0.05). The AC + F technique was the most accurate (D0.05), the detected structural changes differed significantly between different techniques (p<0.05). Bland-Altman difference plots showed that BZ superimposition was comparable to AC, though it presented slightly higher random error. Superimposition of 3D datasets using surface models created from voxel data can provide accurate, precise, and reproducible results, offering also high efficiency and increased post-processing capabilities. In the present study population, the BZ superimposition was comparable to AC, with the added advantage of being applicable to scans with a smaller field of view.

  6. Reformulation linearization technique based branch-and-reduce approach applied to regional water supply system planning

    Science.gov (United States)

    Lan, Fujun; Bayraksan, Güzin; Lansey, Kevin

    2016-03-01

    A regional water supply system design problem that determines pipe and pump design parameters and water flows over a multi-year planning horizon is considered. A non-convex nonlinear model is formulated and solved by a branch-and-reduce global optimization approach. The lower bounding problem is constructed via a three-pronged effort that involves transforming the space of certain decision variables, polyhedral outer approximations, and the Reformulation Linearization Technique (RLT). Range reduction techniques are employed systematically to speed up convergence. Computational results demonstrate the efficiency of the proposed algorithm; in particular, the critical role range reduction techniques could play in RLT based branch-and-bound methods. Results also indicate using reclaimed water not only saves freshwater sources but is also a cost-effective non-potable water source in arid regions. Supplemental data for this article can be accessed at http://dx.doi.org/10.1080/0305215X.2015.1016508.
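
    The core reformulation-linearization step can be illustrated on a single bilinear term: a new variable w replaces x*y and is bounded by the four McCormick/RLT inequalities generated from the variable bounds. The helper below is a hedged, generic sketch of that one step, not the paper's full branch-and-reduce implementation.

        def rlt_bilinear_cuts(xl, xu, yl, yu):
            """Four McCormick/RLT inequalities bounding w = x*y on the box [xl,xu] x [yl,yu].

            Each tuple (ax, ay, aw, rhs) encodes the inequality ax*x + ay*y + aw*w <= rhs.
            """
            return [
                ( yl,  xl, -1.0,  xl * yl),   # w >= xl*y + yl*x - xl*yl
                ( yu,  xu, -1.0,  xu * yu),   # w >= xu*y + yu*x - xu*yu
                (-yl, -xu,  1.0, -xu * yl),   # w <= xu*y + yl*x - xu*yl
                (-yu, -xl,  1.0, -xl * yu),   # w <= xl*y + yu*x - xl*yu
            ]

        # Quick sanity check: any point with w exactly equal to x*y satisfies all four cuts.
        x, y = 2.0, 5.0
        w = x * y
        for ax, ay, aw, rhs in rlt_bilinear_cuts(0.0, 4.0, 1.0, 8.0):
            assert ax * x + ay * y + aw * w <= rhs + 1e-9
        print("all RLT cuts satisfied at (x, y, w) =", (x, y, w))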

  7. A novel image inpainting technique based on median diffusion

    Indian Academy of Sciences (India)

    Image inpainting is the technique of filling-in the missing regions and removing unwanted objects from an image by diffusing the pixel information from the neighbourhood pixels. Image inpainting techniques are in use over a long time for various applications like removal of scratches, restoring damaged/missing portions or ...

  8. Modeling of an Aged Porous Silicon Humidity Sensor Using ANN Technique

    Directory of Open Access Journals (Sweden)

    Tarikul ISLAM

    2006-10-01

    Full Text Available A porous silicon (PS) sensor based on a capacitive technique used for measuring relative humidity has the advantages of low cost, ease of fabrication with controlled structure, and CMOS compatibility. However, the response of the sensor is a nonlinear function of humidity and suffers from errors due to aging and stability issues. An adaptive linear (ADALINE) ANN model has been developed to model the behavior of the sensor with a view to estimating and compensating these errors. The response of the sensor is represented by a third order polynomial basis function whose coefficients are determined by the ANN technique. The drift in sensor output due to aging of the PS layer is also modeled by adapting the weights of the polynomial function. ANN based modeling is found to be more suitable than conventional physical modeling of the PS humidity sensor in a changing environment and under drift due to aging. It enables online estimation of nonlinearity as well as monitoring of faults of the PS humidity sensor using the coefficients of the model.
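
    A hedged sketch of the basic idea, using synthetic (assumed) humidity/capacitance data: an ADALINE-style least-mean-squares update adapts the coefficients of a third-order polynomial basis, so the same weights can keep tracking a slowly drifting sensor.

        import numpy as np

        rng = np.random.default_rng(5)

        # Synthetic (assumed) sensor: capacitance as a nonlinear function of relative humidity.
        rh = rng.uniform(10, 90, 2000)                       # % relative humidity
        cap = 50 + 0.8 * rh + 0.01 * rh**2 + 2 * rng.standard_normal(rh.size)   # pF, noisy

        def basis(h):
            h = h / 100.0                                    # scale to keep the LMS update stable
            return np.array([1.0, h, h**2, h**3])

        w = np.zeros(4)                                      # polynomial coefficients (weights)
        lr = 0.05
        for _ in range(5):                                   # a few LMS (ADALINE) passes over the data
            for h, c in zip(rh, cap):
                phi = basis(h)
                err = c - w @ phi
                w += lr * err * phi                          # Widrow-Hoff update

        print("learned coefficients:", np.round(w, 2))
        print("prediction at 60% RH:", round(w @ basis(60.0), 1), "pF")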

  9. Software Reuse of Mobile Systems based on Modelling

    Directory of Open Access Journals (Sweden)

    Guo Ping

    2016-01-01

    Full Text Available This paper presents an architectural style based modelling approach for the architectural design and analysis of mobile systems. The approach is developed based on UML-like meta models and graph transformation techniques to support sound methodological principles, formal analysis and refinement. The approach could support mobile system development.

  10. Review on discretization techniques for complex fluid flow models: past, present and future

    Science.gov (United States)

    Ammar, A.; Chinesta, F.; Cueto, E.; Phillips, T.

    2007-04-01

    In the last decades several new and advanced numerical strategies have been proposed for solving the flow models of complex fluids. Most of them were based on the classical discretization techniques (finite elements, finite volumes, finite differences, spectral methods, meshless approaches, etc.) applied to the macroscopic descriptions of such flows (differential and integral models), where special advances were introduced to account for the mixed character of the associated variational formulations as well as to stabilize the advection terms in the motion and constitutive equations. Recently, micro-macro approaches have been applied more and more. They make it possible to avoid closure relations and to describe the microscopic physics more accurately. These models are based on kinetic theory and their main difficulty concerns the curse of dimensionality. The microstructure conformation is defined in a multidimensional space where standard discretization techniques fail. To overcome this difficulty, stochastic techniques were introduced (inspired by Monte Carlo techniques), but the control of the statistical noise and the low convergence order are some of their main drawbacks. Other new strategies have recently been proposed, for example those based on sparse grids and the separated representation, which allow the aforementioned difficulties to be circumvented. However, the models are increasingly focused on the microscopic scale, where they are formulated in terms of Brownian or molecular dynamics. They describe the molecular dynamics very precisely, but the computing time remains their main drawback. Thus, in the coming years new efforts must be devoted to reducing the computing time involved in microscopic simulations and to defining bridges between the different description scales.

  11. Numerical Time-Domain Modeling of Lamb Wave Propagation Using Elastodynamic Finite Integration Technique

    Directory of Open Access Journals (Sweden)

    Hussein Rappel

    2014-01-01

    This paper presents numerical time-domain modeling of Lamb wave propagation based on the elastodynamic finite integration technique (EFIT), as well as its validation with analytical results. The Lamb wave method is a long range inspection technique which is considered to have a unique future in the field of structural health monitoring. One of the main problems facing the Lamb wave method is how to choose the most appropriate frequency to generate waves for adequate transmission, capable of properly propagating in the material, interfering with defects/damages, and being received in good condition. Modern simulation tools based on numerical methods such as the finite integration technique (FIT), the finite element method (FEM), and the boundary element method (BEM) may be used for modeling. In this paper, two sets of simulations are performed. In the first set, group velocities of Lamb waves in a steel plate are obtained numerically. Results are then compared with analytical results to validate the simulation. In the second set, EFIT is employed to study fundamental symmetric mode interaction with a surface breaking defect.

  12. Proposal of a congestion control technique in LAN networks using an econometric model ARIMA

    Directory of Open Access Journals (Sweden)

    Joaquín F Sánchez

    2017-01-01

    Full Text Available Hasty software development can produce immediate implementations with source code unnecessarily complex and hardly readable. These small kinds of software decay generate a technical debt that could be big enough to seriously affect future maintenance activities. This work presents an analysis technique for identifying architectural technical debt related to non-uniformity of naming patterns; the technique is based on term frequency over package hierarchies. The proposal has been evaluated on projects of two popular organizations, Apache and Eclipse. The results have shown that most of the projects have frequent occurrences of the proposed naming patterns, and using a graph model and aggregated data could enable the elaboration of simple queries for debt identification. The technique has features that favor its applicability on emergent architectures and agile software development.

  13. SPAM CLASSIFICATION BASED ON SUPERVISED LEARNING USING MACHINE LEARNING TECHNIQUES

    Directory of Open Access Journals (Sweden)

    T. Hamsapriya

    2011-12-01

    Full Text Available E-mail is one of the most popular and frequently used ways of communication due to its worldwide accessibility, relatively fast message transfer, and low sending cost. The flaws in the e-mail protocols and the increasing amount of electronic business and financial transactions directly contribute to the increase in e-mail-based threats. Email spam is one of the major problems of today's Internet, bringing financial damage to companies and annoying individual users. Spam emails are invading users without their consent and filling their mailboxes. They consume more network capacity as well as time in checking and deleting spam mails. The vast majority of Internet users are outspoken in their disdain for spam, although enough of them respond to commercial offers that spam remains a viable source of income to spammers. While most users want to do the right thing to avoid and get rid of spam, they need clear and simple guidelines on how to behave. In spite of all the measures taken to eliminate spam, it is not yet eradicated. Also, when the countermeasures are oversensitive, even legitimate emails will be eliminated. Among the approaches developed to stop spam, filtering is one of the most important techniques. Much research in spam filtering has been centered on the more sophisticated classifier-related issues. In recent years, machine learning for spam classification has become an important research issue. The proposed work explores and identifies the use of different learning algorithms for classifying spam messages from e-mail and evaluates their effectiveness. A comparative analysis among the algorithms has also been presented.
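
    To make the filtering idea concrete, here is a small hedged example using scikit-learn's multinomial Naive Bayes on a handful of made-up messages; real spam filters are trained on large labelled corpora and combine many more features than raw word counts.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Tiny made-up corpus purely for illustration.
        messages = [
            "win a free prize now, click here",
            "cheap loans approved instantly, limited offer",
            "meeting rescheduled to monday at 10am",
            "please review the attached project report",
            "congratulations, you won a lottery, claim now",
            "lunch tomorrow with the design team?",
        ]
        labels = ["spam", "spam", "ham", "ham", "spam", "ham"]

        # Bag-of-words features feeding a multinomial Naive Bayes classifier.
        model = make_pipeline(CountVectorizer(), MultinomialNB())
        model.fit(messages, labels)

        print(model.predict(["claim your free prize now", "see you at the meeting on monday"]))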

  14. Multi-disciplinary techniques for understanding time-varying space-based imagery

    Science.gov (United States)

    Casasent, D.; Sanderson, A.; Kanade, T.

    1984-06-01

    A multidisciplinary program for space-based image processing is reported. This project combines optical and digital processing techniques and pattern recognition, image understanding and artificial intelligence methodologies. Time change image processing was recognized as the key issue to be addressed. Three time change scenarios were defined based on the frame rate of the data change. This report details the recent research on: various statistical and deterministic image features, recognition of sub-pixel targets in time varying imagery, and 3-D object modeling and recognition.

  15. Spectral-based features ranking for gamelan instruments identification using filter techniques

    Directory of Open Access Journals (Sweden)

    Diah P Wulandari

    2013-03-01

    Full Text Available In this paper, we describe an approach of spectral-based features ranking for Javanese gamelan instruments identification using filter techniques. The model extracted a spectral-based feature set of the signal using the Short Time Fourier Transform (STFT). The rank of the features was determined using five algorithms, namely ReliefF, Chi-Squared, Information Gain, Gain Ratio, and Symmetric Uncertainty. Then, we tested the ranked features by cross validation using a Support Vector Machine (SVM). The experiment showed that the Gain Ratio algorithm gave the best result, yielding an accuracy of 98.93%.

  16. A Technique for Estimating Intensity of Emotional Expressions and Speaking Styles in Speech Based on Multiple-Regression HSMM

    Science.gov (United States)

    Nose, Takashi; Kobayashi, Takao

    In this paper, we propose a technique for estimating the degree or intensity of emotional expressions and speaking styles appearing in speech. The key idea is based on a style control technique for speech synthesis using a multiple regression hidden semi-Markov model (MRHSMM), and the proposed technique can be viewed as the inverse of the style control. In the proposed technique, the acoustic features of spectrum, power, fundamental frequency, and duration are simultaneously modeled using the MRHSMM. We derive an algorithm for estimating explanatory variables of the MRHSMM, each of which represents the degree or intensity of emotional expressions and speaking styles appearing in acoustic features of speech, based on a maximum likelihood criterion. We show experimental results to demonstrate the ability of the proposed technique using two types of speech data, simulated emotional speech and spontaneous speech with different speaking styles. It is found that the estimated values have correlation with human perception.

  17. Model-Based Analysis of Hand Radiographs

    Science.gov (United States)

    Levitt, Tod S.; Hedgcock, Marcus W.

    1989-05-01

    As a step toward computer assisted imagery interpretation, we are developing algorithms for computed radiography that allow a computer to recognize specific bones and joints, and to identify variations from normal in size, shape and density. In this paper we report on our approach to model-based computer recognition of hands in radiographs. First, image processing generates hypotheses of the imaged bones. Multiple hypotheses of the size and orientation of the imaged anatomy are matched against stored 3D models of the relevant bones, obtained from statistically valid population studies. Probabilities of the hypotheses are accrued using Bayesian inference techniques whose evaluation is guided by the structure of the hand model and the observed image-derived evidence such as anti-parallel edges, local contrast, etc. High probability matches between the hand model and the image data can cue additional image processing-based search for bones, joints and soft tissue to confirm hypotheses of the location of the imaged hand. At this point multiple disease detection techniques, automated bone age identification, etc. can be employed.

  18. Estimation of single plane unbalance parameters of a rotor-bearing system using Kalman filtering based force estimation technique

    Science.gov (United States)

    Shrivastava, Akash; Mohanty, A. R.

    2018-03-01

    This paper proposes a model-based method to estimate single plane unbalance parameters (amplitude and phase angle) in a rotor using a Kalman filter and recursive least squares based input force estimation technique. The Kalman filter based input force estimation technique requires a state-space model and response measurements. A modified system equivalent reduction expansion process (SEREP) technique is employed to obtain a reduced-order model of the rotor system so that limited response measurements can be used. The method is demonstrated using numerical simulations on a rotor-disk-bearing system. Results are presented for different measurement sets including displacement, velocity, and rotational response. Effects of measurement noise level, filter parameters (process noise covariance and forgetting factor), and modeling error are also presented, and it is observed that the unbalance parameter estimation is robust with respect to measurement noise.
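
    As a hedged illustration of the Kalman-filtering building block only (not the full augmented-state force estimator or the SEREP reduction), the sketch below runs a standard predict/update loop on a two-state oscillator with an assumed noisy displacement measurement.

        import numpy as np

        rng = np.random.default_rng(6)

        # Discrete model of a lightly damped oscillator (assumed parameters).
        dt, wn, zeta = 1e-3, 2 * np.pi * 2, 0.05
        Ac = np.array([[0.0, 1.0], [-wn**2, -2 * zeta * wn]])
        A = np.eye(2) + Ac * dt                      # simple Euler discretisation
        H = np.array([[1.0, 0.0]])                   # we measure displacement only
        Q, R = 1e-6 * np.eye(2), np.array([[1e-6]])

        x_true = np.array([1e-3, 0.0])
        x_est, P = np.zeros(2), np.eye(2) * 1e-4

        for _ in range(2000):
            x_true = A @ x_true
            z = H @ x_true + 1e-3 * rng.standard_normal(1)        # noisy measurement

            x_est = A @ x_est                                     # predict
            P = A @ P @ A.T + Q
            S = H @ P @ H.T + R                                   # update
            K = P @ H.T @ np.linalg.inv(S)
            x_est = x_est + (K @ (z - H @ x_est))
            P = (np.eye(2) - K @ H) @ P

        print("true vs estimated state:", x_true, x_est)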

  19. Computer-Based Modeling Environments

    Science.gov (United States)

    1989-01-01

    1988). "An introduction to graph-based modeling Rich. E. (1983). Artificial Inteligence , McGraw-Hill, New York. systems", Working Paper 88-10-2...Hall, J., S. Lippman, and J. McCall. "Expected Utility Maximizing Job Search," Chapter 7 of Studies in the Economics of Search, 1979, North-Holland. WMSI...The same shape has been used theory, as knowledge representation in artificial for data sources and analytical models because, at intelligence, and as

  20. Modal Analysis Based on the Random Decrement Technique

    DEFF Research Database (Denmark)

    Asmussen, J. C.

    The thesis describes and develops the theoretical foundations of the Random Decrement technique, while giving several examples of modal analysis of large building constructions (bridges). The connection between modal parameters and Random Decrement functions is described theoretically. The efficiency of the Random Decrement technique for the estimation of correlation functions is compared to other equivalent methods (FFT, Direct method). It is shown that the Random Decrement technique can be as much as a hundred times faster than other methods. The theory behind the Random Decrement technique is expanded to include both a vector formulation that increases speed considerably, and a new method for the prediction of the variance of the estimated Random Decrement functions. The thesis closes with a number of examples of modal analysis of bridges exposed to natural (ambient) load.
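
    A hedged numpy sketch of the basic Random Decrement estimate, assuming a level-crossing triggering condition on a simulated narrow-band random response; the vector formulation and the variance prediction developed in the thesis are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(7)

        # Simulated stationary response of a lightly damped structure under random load:
        # white noise band-limited around 5 Hz via a crude frequency-domain filter.
        fs, n = 200.0, 60000
        white = rng.standard_normal(n)
        spec = np.fft.rfft(white)
        freqs = np.fft.rfftfreq(n, 1 / fs)
        spec *= 1.0 / (1.0 + ((freqs - 5.0) / 0.3)**2)
        y = np.fft.irfft(spec, n)

        # Random Decrement: average segments that start where y up-crosses a trigger level.
        trigger = 1.0 * y.std()
        seg_len = int(2.0 * fs)                       # 2-second Random Decrement signature
        starts = np.where((y[:-1] < trigger) & (y[1:] >= trigger))[0]
        starts = starts[starts < n - seg_len]
        rd = np.mean([y[s:s + seg_len] for s in starts], axis=0)

        print(f"averaged {len(starts)} triggered segments; RD value at lag 0 = {rd[0]:.3f}")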

  1. Development and optimisation of a novel three-way extraction technique based on a combination of Soxhlet extraction, membrane assisted solvent extraction and a molecularly imprinted polymer using sludge polycyclic aromatic hydrocarbons as model compounds.

    Science.gov (United States)

    Ncube, Somandla; Lekoto, Goitsemang; Cukrowska, Ewa; Chimuka, Luke

    2018-02-01

    A novel technique that integrates extraction and clean-up into a single step format is reported as part of the search for new sample preparation techniques in the analysis of persistent organic pollutants from complex samples. This was achieved by combining the extraction efficiency of the Soxhlet extractor, the selectivity of a size exclusion membrane and the specificity of a molecularly imprinted polymer for the extraction of polycyclic aromatic hydrocarbons from wastewater sludge followed by quantitation using gas chromatography with time-of-flight mass spectrometry. The approach is described as the Soxhlet extraction membrane-assisted solvent extraction molecularly imprinted polymer technique. This technique was optimised for various parameters such as extraction solvent, reflux time and membrane acceptor phase. The applicability of the developed technique was optimised using a wastewater sludge certified reference material and then tested on real wastewater sludge samples. The method detection limits ranged from 0.14 to 12.86 ng/g with relative standard deviation values for the extraction of the 16 US-EPA priority polycyclic aromatic hydrocarbons from wastewater sludge samples ranging from 0.78 to 18%. The extraction process was therefore reproducible and showed remarkable selectivity. The developed technique is a promising prospect that can be applied in the analysis of organic pollutants from complex solid samples. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  2. FPGA based mixed-signal circuit novel testing techniques

    International Nuclear Information System (INIS)

    Pouros, Sotirios; Vassios, Vassilios; Papakostas, Dimitrios; Hristov, Valentin

    2013-01-01

    Fault detection techniques for electronic circuits, especially modern mixed-signal circuits, have evolved and been customized around the world to meet industry needs. The paper presents techniques used for fault detection in mixed-signal circuits. Moreover, the paper covers standardized methods, along with current innovations for external testing such as Design for Testability (DfT) and Built In Self Test (BIST) systems. Finally, the research team introduces a circuit implementation scheme using an FPGA

  3. Biogeosystem technique as a base of Sustainable Irrigated Agriculture

    Science.gov (United States)

    Batukaev, Abdulmalik

    2016-04-01

    The world water strategy has to change because the current imitational gravitational frontal isotropic-continual paradigm of irrigation is not sustainable. This paradigm causes excessive consumption of fresh water - up to 4-15 times, contributing to the global deficit - and adverse effects on soils and landscapes. Current methods of irrigation do not control how water spreads throughout the soil continuum. Preferential downward fluxes of irrigation water form, and up to 70% or more of the water supply is lost to the vadose zone. The moisture of irrigated soil is high, the soil loses structure through flotation decomposition of its granulometric fractions, the stomatal apparatus of the plant leaf is fully open, and the transpiration rate is maximal. We propose the Biogeosystem technique - a transcendental, uncommon and non-imitating method for Sustainable Natural Resources Management. The new paradigm of irrigation is based on the intra-soil pulse discrete method of water supply into the soil continuum by injection in small discrete portions. An individual volume of water is supplied as a vertical cylinder of preliminary soil watering. The cylinder position in the soil is at a depth from 10 to 30 cm. The diameter of the cylinder is 1-2 cm. Within 5-10 min after injection the water spreads from the cylinder of preliminary watering into the surrounding soil by capillary, film and vapor transfer. A small amount of water is transferred gravitationally to a depth of 35-40 cm. The soil watering cylinder position in the soil profile is at a depth of 5-50 cm, and the diameter of the cylinder is 2-4 cm. The lateral distance between neighbouring cylinders along the plant row is 10-15 cm. The carcass of non-watered soil surrounding the cylinder remains relatively dry and mechanically stable. After water injection the structure of the soil in the cylinder restores quickly, because there is no compression from the stable adjoining volume of soil and because of soil structure memory. The mean soil thermodynamic water potential of the watered zone is -0.2 MPa. At this potential

  4. Applications of soft computing in time series forecasting simulation and modeling techniques

    CERN Document Server

    Singh, Pritpal

    2016-01-01

    This book reports on an in-depth study of fuzzy time series (FTS) modeling. It reviews and summarizes previous research work in FTS modeling and also provides a brief introduction to other soft-computing techniques, such as artificial neural networks (ANNs), rough sets (RS) and evolutionary computing (EC), focusing on how these techniques can be integrated into different phases of the FTS modeling approach. In particular, the book describes novel methods resulting from the hybridization of FTS modeling approaches with neural networks and particle swarm optimization. It also demonstrates how a new ANN-based model can be successfully applied in the context of predicting Indian summer monsoon rainfall. Thanks to its easy-to-read style and the clear explanations of the models, the book can be used as a concise yet comprehensive reference guide to fuzzy time series modeling, and will be valuable not only for graduate students, but also for researchers and professionals working for academic, business and governmen...

  5. Experimental Study on Damage Detection in Timber Specimens Based on an Electromechanical Impedance Technique and RMSD-Based Mahalanobis Distance

    Directory of Open Access Journals (Sweden)

    Dansheng Wang

    2016-10-01

    Full Text Available In the electromechanical impedance (EMI) method, the PZT patch performs the functions of both sensor and exciter. Due to the high frequency actuation and non-model-based characteristics, the EMI method can be utilized to detect incipient structural damage. In recent years EMI techniques have been widely applied to monitor the health status of concrete and steel materials; however, studies on application to timber are limited. This paper will explore the feasibility of using the EMI technique for damage detection in timber specimens. In addition, the conventional damage index, namely root mean square deviation (RMSD), is employed to evaluate the level of damage. On that basis, a new damage index, the Mahalanobis distance based on RMSD, is proposed to evaluate the damage severity of timber specimens. Experimental studies are implemented to detect notch and hole damage in the timber specimens. Experimental results verify the availability and robustness of the proposed damage index and its superiority over the RMSD indexes.
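
    The RMSD and RMSD-based Mahalanobis distance indices are only named, not defined, in this record. A minimal sketch of how such indices are commonly computed from baseline and post-damage EMI signatures is given below; the function names, the plain-NumPy implementation and the toy numbers are assumptions, not the authors' code.

    ```python
    import numpy as np

    def rmsd(baseline, damaged):
        """Root mean square deviation (%) between two impedance signatures."""
        baseline = np.asarray(baseline, dtype=float)
        damaged = np.asarray(damaged, dtype=float)
        return 100.0 * np.sqrt(np.sum((damaged - baseline) ** 2) / np.sum(baseline ** 2))

    def mahalanobis_from_rmsd(rmsd_vector, healthy_rmsd_samples):
        """Mahalanobis distance of a vector of RMSD values (e.g. one per PZT patch or
        per frequency band) from the distribution observed on healthy specimens."""
        X = np.asarray(healthy_rmsd_samples, dtype=float)   # shape (n_samples, n_features)
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False)
        diff = np.asarray(rmsd_vector, dtype=float) - mu
        return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

    # Hypothetical example: RMSD over 3 frequency bands, 5 healthy baseline tests
    healthy = np.random.default_rng(0).normal(2.0, 0.3, size=(5, 3))
    current = np.array([2.1, 4.8, 3.9])   # elevated RMSD values suggest damage
    print(mahalanobis_from_rmsd(current, healthy))
    ```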

  6. Robust image modeling technique with a bioluminescence image segmentation application

    Science.gov (United States)

    Zhong, Jianghong; Wang, Ruiping; Tian, Jie

    2009-02-01

    A robust pattern classifier algorithm is developed for the variable symmetric plane model, in which the driving noise is a mixture of a Gaussian and an outlier process. The accuracy and high-speed performance of the pattern recognition algorithm are demonstrated. Bioluminescence tomography (BLT) has recently gained wide acceptance in the field of in vivo small-animal molecular imaging, so it is very important for BLT to acquire a high-precision region of interest in a bioluminescence image (BLI) in order to avoid inaccuracy in quantitative analysis. An algorithm based on this model is developed to improve operation speed; it estimates the parameters and the original image intensity simultaneously from the noise-corrupted image produced by the BLT optical hardware system. The focus pixel value is obtained from the symmetric plane according to a realistic assumption about the noise sequence in the restored image. The neighborhood size is adaptive and small, and the classifier function is based on statistical features. If the classifier conditions are satisfied, the focus pixel intensity is set to the largest value in the neighborhood; otherwise it is set to zero. Finally, pseudo-color is added to the segmented bioluminescence image. The whole process has been implemented on our 2D BLT optical system platform and the model has been validated.
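
    The focus-pixel rule described above (take the largest value in a small neighborhood when the classifier condition holds, zero otherwise) can be illustrated with a short sketch. The intensity threshold used here is only a placeholder for the statistics-based classifier of the paper, and all array values are hypothetical.

    ```python
    import numpy as np
    from scipy.ndimage import maximum_filter

    def segment_bli(image, threshold, size=3):
        """Toy version of the focus-pixel rule: where the classifier condition holds
        (here simply intensity > threshold), keep the largest value in a small
        neighborhood; elsewhere set the pixel to zero."""
        image = np.asarray(image, dtype=float)
        local_max = maximum_filter(image, size=size)   # largest value in each neighborhood
        mask = image > threshold                        # placeholder classifier condition
        return np.where(mask, local_max, 0.0)

    # Hypothetical noisy bioluminescence image with a few bright sources
    rng = np.random.default_rng(1)
    img = rng.normal(0.0, 1.0, (64, 64)) + 10.0 * (rng.random((64, 64)) > 0.98)
    print(segment_bli(img, threshold=5.0).max())
    ```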

  7. A titration model for evaluating calcium hydroxide removal techniques

    Directory of Open Access Journals (Sweden)

    Mark PHILLIPS

    2015-02-01

    Full Text Available Objective Calcium hydroxide (Ca(OH)2) has been used in endodontics as an intracanal medicament due to its antimicrobial effects and its ability to inactivate bacterial endotoxin. The inability to totally remove this intracanal medicament from the root canal system, however, may interfere with the setting of eugenol-based sealers or inhibit bonding of resin to dentin, thus presenting clinical challenges with endodontic treatment. This study used a chemical titration method to measure residual Ca(OH)2 left after different endodontic irrigation methods. Material and Methods Eighty-six human canine roots were prepared for obturation. Thirty teeth were filled with known but different amounts of Ca(OH)2 for 7 days, which were dissolved out and titrated to quantitate the residual Ca(OH)2 recovered from each root to produce a standard curve. Forty-eight of the remaining teeth were filled with equal amounts of Ca(OH)2 followed by gross Ca(OH)2 removal using hand files and randomized treatment of either: (1) syringe irrigation; (2) syringe irrigation with use of an apical file; (3) syringe irrigation with an added 30 s of passive ultrasonic irrigation (PUI); or (4) syringe irrigation with apical file and PUI (n=12/group). Residual Ca(OH)2 was dissolved with glycerin and titrated to measure residual Ca(OH)2 left in the root. Results No method completely removed all residual Ca(OH)2. The addition of 30 s PUI, with or without apical file use, removed Ca(OH)2 significantly better than irrigation alone. Conclusions This technique allowed quantification of residual Ca(OH)2. The use of PUI (with or without apical file) resulted in significantly lower Ca(OH)2 residue compared to irrigation alone.

  8. Nuclear power plant monitoring and fault diagnosis methods based on the artificial intelligence technique

    International Nuclear Information System (INIS)

    Yoshikawa, S.; Saiki, A.; Ugolini, D.; Ozawa, K.

    1996-01-01

    The main objective of this paper is to develop an advanced diagnosis system based on artificial intelligence techniques to monitor operation and to improve the operational safety of nuclear power plants. Three different methods have been elaborated in this study: an artificial neural network local diagnosis (NNds) scheme that, acting at the component level, discriminates between normal and abnormal transients; a model-based diagnostic reasoning mechanism; and a physical causal network model-based knowledge compiler (KC) that generates applicable diagnostic rules from widely accepted physical knowledge. Although the three methods have been developed and verified independently, they are highly correlated and, when connected together, form an effective and robust diagnosis and monitoring tool. (authors)

  9. Determining and ranking dimensions of knowledge management implementation using Hicks model and fuzzy TOPSIS Technique

    Directory of Open Access Journals (Sweden)

    Mona Ahani

    2013-02-01

    Full Text Available The 20th century was the age of an industry-based as well as knowledge-based economy. In a knowledge-based economy, knowledge plays an essential role to produce wealth compared with other tangible and physical assets. The purpose of this research is to identify and rank different aspects of knowledge management based on the Hicks model using the fuzzy TOPSIS technique for one of the most prestigious universities in Iran. The proposed model considers four main criteria of knowledge including creation, distribution, storage, and application along with 17 sub-criteria. The Chi-square correlation test indicates a positive and meaningful correlation between four mentioned criteria and knowledge management implementation. Using the fuzzy TOPSIS technique, the results also indicate that “Need for new and updated information and knowledge” was selected as the most important sub-criterion and “Sharing or distribution of knowledge” was selected as the most important main criterion on Hicks model.
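
    The record does not reproduce the fuzzy TOPSIS computations. As a point of reference, the crisp TOPSIS backbone (vector normalization, ideal and anti-ideal solutions, closeness coefficient) that the fuzzy variant generalizes can be sketched as below; the decision matrix, weights and criteria are hypothetical, not the study's data.

    ```python
    import numpy as np

    def topsis(decision_matrix, weights, benefit=None):
        """Crisp TOPSIS ranking: rows = alternatives (e.g. KM sub-criteria),
        columns = evaluation criteria. Returns closeness coefficients (higher = better)."""
        X = np.asarray(decision_matrix, dtype=float)
        w = np.asarray(weights, dtype=float)
        if benefit is None:
            benefit = np.ones(X.shape[1], dtype=bool)   # treat all criteria as benefits
        R = X / np.linalg.norm(X, axis=0)                # vector normalization
        V = R * w                                        # weighted normalized matrix
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti  = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_plus  = np.linalg.norm(V - ideal, axis=1)
        d_minus = np.linalg.norm(V - anti, axis=1)
        return d_minus / (d_plus + d_minus)

    # Hypothetical scores of 4 knowledge-management alternatives against 3 criteria
    scores = np.array([[7, 8, 6], [9, 6, 7], [5, 7, 8], [8, 9, 5]], dtype=float)
    cc = topsis(scores, weights=[0.5, 0.3, 0.2])
    print(np.argsort(cc)[::-1])   # ranking, best alternative first
    ```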

  10. Data-driven remaining useful life prognosis techniques stochastic models, methods and applications

    CERN Document Server

    Si, Xiao-Sheng; Hu, Chang-Hua

    2017-01-01

    This book introduces data-driven remaining useful life prognosis techniques, and shows how to utilize the condition monitoring data to predict the remaining useful life of stochastic degrading systems and to schedule maintenance and logistics plans. It is also the first book that describes the basic data-driven remaining useful life prognosis theory systematically and in detail. The emphasis of the book is on the stochastic models, methods and applications employed in remaining useful life prognosis. It includes a wealth of degradation monitoring experiment data, practical prognosis methods for remaining useful life in various cases, and a series of applications incorporated into prognostic information in decision-making, such as maintenance-related decisions and ordering spare parts. It also highlights the latest advances in data-driven remaining useful life prognosis techniques, especially in the contexts of adaptive prognosis for linear stochastic degrading systems, nonlinear degradation modeling based pro...

  11. Adaptive Atmospheric Modeling Key Techniques in Grid Generation, Data Structures, and Numerical Operations with Applications

    CERN Document Server

    Behrens, Jörn

    2006-01-01

    Gives an overview and guidance in the development of adaptive techniques for atmospheric modeling. This book covers paradigms of adaptive techniques, such as error estimation and adaptation criteria. Considering applications, it demonstrates several techniques for discretizing relevant conservation laws from atmospheric modeling.

  12. Modeling, Control and Analyze of Multi-Machine Drive Systems using Bond Graph Technique

    Directory of Open Access Journals (Sweden)

    J. Belhadj

    2006-03-01

    Full Text Available In this paper, a system-viewpoint method is investigated to study and analyze complex systems using the Bond Graph technique. These systems are multi-machine, multi-inverter systems based on the Induction Machine (IM), widely used in industries such as rolling mills, textiles, and railway traction. They are multi-domain, involve multiple time scales, and present very strong internal and external couplings, with non-linearities and a high model order. A classical study with an analytical model is difficult to manipulate and is limited in the performances it can capture. In this study, a “systemic approach” is presented to design these kinds of systems, using an energetic representation based on the Bond Graph formalism. Three types of multi-machine systems are studied with their control strategies. The modeling is carried out with Bond Graphs and the results are discussed to show the performance of this methodology.

  13. Frequency Weighted Model Order Reduction Technique and Error Bounds for Discrete Time Systems

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

    for the whole frequency range. However, certain applications (like controller reduction) require frequency weighted approximation, which introduces the concept of using frequency weights in model reduction techniques. Limitations of some existing frequency weighted model reduction techniques include a lack of stability of reduced order models (for the two-sided weighting case) and a lack of frequency response error bounds. A new frequency weighted technique for balanced model reduction of discrete time systems is proposed. The proposed technique guarantees stable reduced order models even for the case when two-sided weightings are present. An efficient technique for computing the frequency weighted Gramians is also proposed. Results are compared with other existing frequency weighted model reduction techniques for discrete time systems. Moreover, the proposed technique yields frequency response error bounds.
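
    The record does not give the proposed frequency-weighted Gramian computation. As background, the unweighted building blocks of balanced reduction for a discrete-time system - the Gramians obtained from discrete Lyapunov equations and the resulting Hankel singular values - can be sketched as follows; the random system below is a stand-in, not an example from the paper.

    ```python
    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    def hankel_singular_values(A, B, C):
        """Gramians of a stable discrete-time system x[k+1] = A x[k] + B u[k],
        y[k] = C x[k], and the Hankel singular values that drive (unweighted)
        balanced truncation."""
        P = solve_discrete_lyapunov(A, B @ B.T)      # controllability: A P A' - P + B B' = 0
        Q = solve_discrete_lyapunov(A.T, C.T @ C)    # observability:   A' Q A - Q + C' C = 0
        eigvals = np.linalg.eigvals(P @ Q).real
        return np.sqrt(np.clip(np.sort(eigvals)[::-1], 0.0, None))

    # Hypothetical stable 4th-order discrete-time model
    rng = np.random.default_rng(2)
    A = 0.5 * rng.standard_normal((4, 4))
    A /= max(1.0, np.max(np.abs(np.linalg.eigvals(A))) / 0.9)   # enforce spectral radius <= 0.9
    B = rng.standard_normal((4, 1))
    C = rng.standard_normal((1, 4))
    print(hankel_singular_values(A, B, C))
    ```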

  14. Constitutional Model and Rationality in Judicial Decisions from Proportionality Technique

    OpenAIRE

    Feio, Thiago Alves

    2016-01-01

    In current legal systems, the content of the Constitutions consists of values that serve to limit state action. The department in charge of the control of this system is, usually, the Judiciary. This choice leads to two major problems: the tension between democracy and constitutionalism, and the subjectivity of that control. One of the solutions to subjectivity is the weighting of principles through the proportionality technique, which aims to produce rational decisions. This technique doesn’t elimi...

  15. A finite element model updating technique for adjustment of parameters near boundaries

    Science.gov (United States)

    Gwinn, Allen Fort, Jr.

    Even though there have been many advances in research related to methods of updating finite element models based on measured normal mode vibration characteristics, there is yet to be a widely accepted method that works reliably with a wide range of problems. This dissertation focuses on the specific class of problems having to do with changes in stiffness near the clamped boundary of plate structures. This class of problems is especially important as it relates to the performance of turbine engine blades, where a change in stiffness at the base of the blade can be indicative of structural damage. The method that is presented herein is a new technique for resolving the differences between the physical structure and the finite element model. It is a semi-iterative technique that incorporates a "physical expansion" of the measured eigenvectors along with appropriate scaling of these expanded eigenvectors into an iterative loop that uses the Engel's model modification method to then calculate adjusted stiffness parameters for the finite element model. Three example problems are presented that use eigenvalues and mass normalized eigenvectors that have been calculated from experimentally obtained accelerometer readings. The test articles that were used were all thin plates with one edge fully clamped. They each had a cantilevered length of 8.5 inches and a width of 4 inches. The three plates differed from one another in thickness from 0.100 inches to 0.188 inches. These dimensions were selected in order to approximate a gas turbine engine blade. The semi-iterative modification technique is shown to do an excellent job of calculating the necessary adjustments to the finite element model so that the analytically determined eigenvalues and eigenvectors for the adjusted model match the corresponding values from the experimental data with good agreement. Furthermore, the semi-iterative method is quite robust. For the examples presented here, the method consistently converged

  16. High frequency magnetic field technique: mathematical modelling and development of a full scale water fraction meter

    Energy Technology Data Exchange (ETDEWEB)

    Cimpan, Emil

    2004-09-15

    This work is concerned with the development of a new on-line measuring technique to be used in measurements of the water concentration in a two component oil/water or three component (i.e. multiphase) oil/water/gas flow. The technique is based on using non-intrusive coil detectors and experiments were performed both statically (medium at rest) and dynamically (medium flowing through a flow rig). The various coil detectors were constructed with either one or two coils and specially designed electronics were used. The medium was composed by air, machine oil, and water having different conductivity values, i.e. seawater and salt water with various conductivities (salt concentrations) such as 1 S/m, 4.9 S/m and 9.3 S/m. The experimental measurements done with the different mixtures were further used to mathematically model the physical principle used in the technique. This new technique is based on measuring the coil impedance and signal frequency at the self-resonance frequency of the coil to determine the water concentration in the mix. By using numerous coils it was found, experimentally, that generally both the coil impedance and the self-resonance frequency of the coil decreased as the medium conductivity increased. Both the impedance and the self-resonance frequency of the coil depended on the medium loss due to the induced eddy currents within the conductive media in the mixture, i.e. water. In order to detect relatively low values of the medium loss, the self-resonance frequency of the coil and also of the magnetic field penetrating the media should be relatively high (within the MHz range and higher). Therefore, the technique was called and referred to throughout the entire work as the high frequency magnetic field technique (HFMFT). To practically use the HFMFT, it was necessary to circumscribe an analytical frame to this technique. This was done by working out a mathematical model that relates the impedance and the self-resonance frequency of the coil to the

  17. Knowledge-based geometric modeling in construction

    DEFF Research Database (Denmark)

    Bonev, Martin; Hvam, Lars

    2012-01-01

    A wider application of IT-based solutions, such as configuration systems and the implementation of modeling standards, has facilitated the trend to produce mass customized products to support, inter alia, the specification process of the increasing product variety. However, not all industries have realized the full potential of using product and process modelling tools as well as the implementation of configuration systems to support their business processes. Especially in the building industry, where Engineer-to-Order (ETO) manufacturers provide complex custom-tailored products, up to now a considerably high amount of their resources is often required for designing and specifying the majority of their product assortment. As design decisions are hereby based on knowledge and experience about the behaviour and applicability of construction techniques and materials for a predefined design situation, smart...

  18. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    Science.gov (United States)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. Besides, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The requirements on dosimetry distribution are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes beam weights by minimizing the deviation of soft constraints subject to hard constraints, with a constraint on the l1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and optimizing the weights of the remaining beams using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP
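
    The exact SVDLP formulation is not reproduced in this record. A toy version of the LP core - minimizing the total violation of soft target-dose goals subject to hard organ-at-risk limits and an l1 budget on the non-negative beam weights (for non-negative weights the l1 norm is simply their sum) - can be sketched with scipy.optimize.linprog; the influence matrices and dose levels below are made up, not clinical data.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    rng = np.random.default_rng(3)
    n_beams, n_tgt, n_oar = 30, 40, 25
    A_tgt = rng.random((n_tgt, n_beams))          # hypothetical dose influence on target voxels
    A_oar = 0.3 * rng.random((n_oar, n_beams))    # hypothetical influence on an organ at risk
    d_presc, d_max, tau = 60.0, 20.0, 50.0        # prescription, OAR limit, l1 beam-weight budget

    # Variables: beam weights w (n_beams) followed by soft-constraint slacks s (n_tgt)
    c = np.concatenate([np.zeros(n_beams), np.ones(n_tgt)])      # minimize total under-dose
    A_ub = np.block([
        [-A_tgt, -np.eye(n_tgt)],                                 # A_tgt w + s >= d_presc (soft)
        [A_oar, np.zeros((n_oar, n_tgt))],                        # A_oar w <= d_max (hard)
        [np.ones((1, n_beams)), np.zeros((1, n_tgt))],            # sum(w) <= tau (sparsity budget)
    ])
    b_ub = np.concatenate([-d_presc * np.ones(n_tgt), d_max * np.ones(n_oar), [tau]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n_beams + n_tgt))
    print(res.status, np.count_nonzero(res.x[:n_beams] > 1e-6))   # how many beams end up used
    ```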

  19. A comparison of approximation techniques for variance-based sensitivity analysis of biochemical reaction systems

    Directory of Open Access Journals (Sweden)

    Goutsias John

    2010-05-01

    Full Text Available Abstract Background Sensitivity analysis is an indispensable tool for the analysis of complex systems. In a recent paper, we have introduced a thermodynamically consistent variance-based sensitivity analysis approach for studying the robustness and fragility properties of biochemical reaction systems under uncertainty in the standard chemical potentials of the activated complexes of the reactions and the standard chemical potentials of the molecular species. In that approach, key sensitivity indices were estimated by Monte Carlo sampling, which is computationally very demanding and impractical for large biochemical reaction systems. Computationally efficient algorithms are needed to make variance-based sensitivity analysis applicable to realistic cellular networks, modeled by biochemical reaction systems that consist of a large number of reactions and molecular species. Results We present four techniques, derivative approximation (DA), polynomial approximation (PA), Gauss-Hermite integration (GHI), and orthonormal Hermite approximation (OHA), for analytically approximating the variance-based sensitivity indices associated with a biochemical reaction system. By using a well-known model of the mitogen-activated protein kinase signaling cascade as a case study, we numerically compare the approximation quality of these techniques against traditional Monte Carlo sampling. Our results indicate that, although DA is computationally the most attractive technique, special care should be exercised when using it for sensitivity analysis, since it may only be accurate at low levels of uncertainty. On the other hand, PA, GHI, and OHA are computationally more demanding than DA but can work well at high levels of uncertainty. GHI results in a slightly better accuracy than PA, but it is more difficult to implement. OHA produces the most accurate approximation results and can be implemented in a straightforward manner. It turns out that the computational cost of the
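
    The four analytical approximations (DA, PA, GHI, OHA) are not detailed in this record. For orientation, the plain Monte Carlo estimator of a first-order variance-based sensitivity index that such approximations aim to replace can be written in a few lines; the toy model, input distributions and sample size below are assumptions.

    ```python
    import numpy as np

    def first_order_sobol(model, n_params, n_samples=10000, seed=0):
        """Plain Monte Carlo (Saltelli-style) estimate of first-order variance-based
        sensitivity indices for a model with independent U(0,1) inputs."""
        rng = np.random.default_rng(seed)
        A = rng.random((n_samples, n_params))
        B = rng.random((n_samples, n_params))
        fA, fB = model(A), model(B)
        var = np.var(np.concatenate([fA, fB]))
        S = np.empty(n_params)
        for i in range(n_params):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                 # replace column i with samples from B
            S[i] = np.mean(fB * (model(ABi) - fA)) / var
        return S

    # Hypothetical toy response with deliberately unequal parameter influence
    model = lambda x: 4.0 * x[:, 0] + 1.0 * x[:, 1] + 0.1 * x[:, 2] ** 2
    print(first_order_sobol(model, n_params=3))
    ```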

  20. Damage identification in beams by a response surface based technique

    Directory of Open Access Journals (Sweden)

    Teidj S.

    2014-01-01

    Full Text Available In this work, identification of damage in uniform homogeneous metallic beams was considered through the propagation of non-dispersive elastic torsional waves. The proposed damage detection procedure consisted of the following sequence. Given a localized torque excitation, having the form of a short half-sine pulse, the first step was calculating the transient solution of the resulting torsional wave. This torque could be generated in practice by means of asymmetric laser irradiation of the beam surface. Then, a localized defect, assumed to be characterized by an abrupt reduction of the beam section area with a given height and extent, was placed at a known location of the beam. Next, the response in terms of transverse section rotation rate was obtained for a point situated after the defect, where the sensor was positioned; in practice this could rely on laser vibrometry. A parametric study was then conducted by using a full factorial design of experiments table and numerical simulations based on a finite difference characteristic scheme. This enabled the derivation of a response surface model that was shown to represent adequately the response of the system in terms of the following factors: defect extent and severity. The final step was solving the inverse problem in order to identify the defect characteristics from the measurements.
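
    A minimal sketch of the response-surface step - fitting a quadratic surface in the two factors (defect extent and severity) over a full-factorial design and then inverting it with a simple grid search - is given below. The synthetic response values stand in for the finite-difference simulations of the paper; factor levels and coefficients are hypothetical.

    ```python
    import numpy as np

    # Hypothetical full-factorial design over two factors: defect extent and severity
    extent = np.array([0.5, 1.0, 1.5])          # cm
    severity = np.array([0.1, 0.3, 0.5])        # fractional section reduction
    X1, X2 = np.meshgrid(extent, severity)
    x1, x2 = X1.ravel(), X2.ravel()
    y = 1.0 + 0.8 * x1 + 2.5 * x2 + 0.4 * x1 * x2 \
        + np.random.default_rng(4).normal(0.0, 0.02, x1.size)   # synthetic "simulated" response

    # Quadratic response surface: y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

    def predict(b, e, s):
        return b @ np.array([1.0, e, s, e**2, s**2, e * s])

    # Inverse problem: find the factor pair whose predicted response matches a "measurement"
    grid = [(e, s) for e in np.linspace(0.5, 1.5, 21) for s in np.linspace(0.1, 0.5, 21)]
    measured = 2.2
    best = min(grid, key=lambda p: abs(predict(coeffs, *p) - measured))
    print(coeffs, best)
    ```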

  1. Skull base tumours part I: Imaging technique, anatomy and anterior skull base tumours

    Energy Technology Data Exchange (ETDEWEB)

    Borges, Alexandra [Instituto Portugues de Oncologia Francisco Gentil, Centro de Lisboa, Servico de Radiologia, Rua Professor Lima Basto, 1093 Lisboa Codex (Portugal)], E-mail: borgesalexandra@clix.pt

    2008-06-15

    Advances in cross-sectional imaging, surgical technique and adjuvant treatment have largely contributed to improving the prognosis and lessening the morbidity and mortality of patients with skull base tumours, and to the growing medical investment in the management of these patients. Because clinical assessment of the skull base is limited, cross-sectional imaging has become indispensable in the diagnosis, treatment planning and follow-up of patients with suspected skull base pathology, and the radiologist is increasingly responsible for the fate of these patients. This review will focus on the advances in imaging technique, their contribution to patient management, and the imaging features of the most common tumours affecting the anterior skull base. Emphasis is given to a systematic approach to skull base pathology based upon an anatomic division taking into account the major tissue constituents in each skull base compartment. The most relevant information that should be conveyed to surgeons and radiation oncologists involved in patient management will be discussed.

  2. A novel technique for extracting clouds base height using ground based imaging

    Directory of Open Access Journals (Sweden)

    E. Hirsch

    2011-01-01

    Full Text Available The height of a cloud in the atmospheric column is a key parameter in its characterization. Several remote sensing techniques (passive and active, either ground-based or on space-borne platforms) and in-situ measurements are routinely used in order to estimate the top and base heights of clouds. In this article we present a novel method that combines thermal imaging from the ground and a sounded wind profile in order to derive the cloud base height. This method is independent of cloud type, making it efficient for both low boundary layer and high clouds. In addition, using thermal imaging ensures extraction of cloud features during daytime as well as at nighttime. The proposed technique was validated by comparison to active sounding by ceilometers (which is a standard ground-based method), to lifted condensation level (LCL) calculations, and to MODIS products obtained from space. As with all passive remote sensing techniques, the proposed method extracts only the height of the lowest cloud layer, thus upper cloud layers are not detected. Nevertheless, the information derived from this method can be complementary to space-borne cloud top measurements when deep-convective clouds are present. Unlike techniques such as LCL, this method is not limited to boundary layer clouds, and can extract the cloud base height at any level, as long as sufficient thermal contrast exists between the radiative temperatures of the cloud and its surrounding air parcel. Another advantage of the proposed method is its simplicity and modest power needs, making it particularly suitable for field measurements and deployment at remote locations. Our method can be further simplified for use with a visible CCD or CMOS camera (although nighttime clouds will not be observed).

  3. [Detection of Carbon Dioxide Concentration in Soil Profile Based on Nondispersive Infrared Spectroscopy Technique].

    Science.gov (United States)

    Tu, Zhi-hua; Zhao, Yang; Zheng, Li-wen; Jia, Guo-dong; Chen, Li-hua; Yu, Xin-xiao

    2015-04-01

    In order to explore the variation of CO2 concentration and soil respiration in the soil profile, the nondispersive infrared (NDIR) spectroscopy technique was applied to continually estimate the soil CO2 concentration in different soil layers (the humus horizon and the A-, B- and C-horizons) in situ. The main instrument used in this experiment was a silicon-based nondispersive infrared sensor, which can work in severe environments. Measurement values were collected with the NDIR spectroscopy technique throughout 2013. The soil carbon flux in the different soil layers was calculated based on a gradient-method model and calibrated against measurements with an automated soil CO2 efflux system (LI-8100). The results showed that a vertical gradient of the carbon dioxide concentration in the soil profile was found, with the concentration highest in the deepest soil horizon. Moreover, a linear correlation between the soil CO2 effluxes was calculated based on the model and the measurements, and the model prediction correlation coefficients were 0.9069, 0.7185, 0.8382, and 0.9030 in the H-, A-, B-, and C-horizons, respectively. The root mean square errors (RMSE) were 0.2067, 0.1041, 0.0156, and 0.0096 in the H-, A-, B-, and C-horizons, respectively. These results suggest that the gradient method based on the NDIR spectroscopy technique can be successfully used to measure soil CO2 efflux in different soil layers, and they reveal that diffusion and convection transport CO2 between the soil layers. It is a promising sensor for detecting CO2 concentration in the soil profile, providing basic data for calculating the global carbon in soil profiles.
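
    The gradient-method model is not written out in the record. A common form is a Fick's-law flux between two measurement depths, F = Ds * dC/dz; the sketch below takes the flux toward the surface as positive and uses hypothetical concentrations and an assumed effective diffusivity, not the study's data.

    ```python
    def gradient_flux(c_upper, c_lower, z_upper, z_lower, ds):
        """Upward soil CO2 flux (umol m-2 s-1) from the concentration gradient between
        two depths (positive = efflux toward the surface). c_* in umol m-3, depths z_*
        in m (positive downward), ds = assumed effective diffusivity in m2 s-1."""
        dc_dz = (c_lower - c_upper) / (z_lower - z_upper)
        return ds * dc_dz

    # Example: concentration increasing with depth drives an upward efflux
    print(gradient_flux(c_upper=16000.0, c_lower=30000.0,
                        z_upper=0.05, z_lower=0.20, ds=2.0e-6))
    ```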

  4. Model-based phase-shifting interferometer

    Science.gov (United States)

    Liu, Dong; Zhang, Lei; Shi, Tu; Yang, Yongying; Chong, Shiyao; Miao, Liang; Huang, Wei; Shen, Yibing; Bai, Jian

    2015-10-01

    A model-based phase-shifting interferometer (MPI) is developed, in which a novel calculation technique is proposed instead of the traditionally complicated system structure, to achieve versatile, high-precision and quantitative surface tests. In the MPI, a partial null lens (PNL) is employed to implement the non-null test. With a set of alternative PNLs, similar to the transmission spheres in ZYGO interferometers, the MPI provides a flexible test for general spherical and aspherical surfaces. Based on modern computer modeling techniques, a reverse iterative optimizing construction (ROR) method is employed for the retrace error correction of the non-null test, as well as figure error reconstruction. A self-compiled ray-tracing program is set up for accurate system modeling and reverse ray tracing. The surface figure error can then be easily extracted from the wavefront data in the form of Zernike polynomials by the ROR method. Experiments on spherical and aspherical tests are presented to validate the flexibility and accuracy. The test results are compared with those of a Zygo interferometer (null tests), which demonstrates the high accuracy of the MPI. With such accuracy and flexibility, the MPI has large potential in modern optical shop testing.

  5. AN IMAGE-BASED TECHNIQUE FOR 3D BUILDING RECONSTRUCTION USING MULTI-VIEW UAV IMAGES

    Directory of Open Access Journals (Sweden)

    F. Alidoost

    2015-12-01

    Full Text Available Nowadays, with the development of urban areas, the automatic reconstruction of buildings, as important objects in complex city structures, has become a challenging topic in computer vision and photogrammetric research. In this paper, the capability of multi-view Unmanned Aerial Vehicle (UAV) images is examined to provide a 3D model of complex building façades using an efficient image-based modelling workflow. The main steps of this work include pose estimation, point cloud generation, and 3D modelling. After improving the initial values of the interior and exterior parameters in the first step, an efficient image matching technique such as Semi-Global Matching (SGM) is applied to the UAV images and a dense point cloud is generated. Then, a mesh model of the points is calculated using Delaunay 2.5D triangulation and refined to obtain an accurate model of the building. Finally, a texture is assigned to the mesh in order to create a realistic 3D model. The resulting model provides sufficient detail of the building based on visual assessment.

  6. Review of air quality modeling techniques. Volume 8

    International Nuclear Information System (INIS)

    Rosen, L.C.

    1977-01-01

    Air transport and diffusion models which are applicable to the assessment of the environmental effects of nuclear, geothermal, and fossil-fuel electric generation are reviewed. The general classification of models and model inputs is discussed. A detailed examination of the statistical, Gaussian plume, Gaussian puff, one-box and species-conservation-of-mass models is given. Representative models are discussed with attention given to the assumptions, input data requirements, advantages, disadvantages and applicability of each
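
    For the Gaussian plume class of models reviewed here, the standard ground-reflection plume equation can be expressed directly in code; the power-law dispersion coefficients used below are crude placeholders rather than the dispersion curves of any specific model in the review, and all source parameters are hypothetical.

    ```python
    import numpy as np

    def gaussian_plume(x, y, z, Q, u, H, a=0.08, b=0.06):
        """Steady-state Gaussian plume concentration (g/m3) at (x, y, z) downwind of a
        point source of strength Q (g/s), with wind speed u (m/s) and effective stack
        height H (m). sigma_y = a*x and sigma_z = b*x are simple placeholder curves."""
        sy, sz = a * x, b * x
        lateral = np.exp(-y**2 / (2 * sy**2))
        vertical = (np.exp(-(z - H)**2 / (2 * sz**2))
                    + np.exp(-(z + H)**2 / (2 * sz**2)))   # ground-reflection term
        return Q / (2 * np.pi * u * sy * sz) * lateral * vertical

    # Ground-level centerline concentration 1 km downwind of a hypothetical 50 m stack
    print(gaussian_plume(x=1000.0, y=0.0, z=0.0, Q=100.0, u=5.0, H=50.0))
    ```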

  7. Modeling technique for the process of liquid film disintegration

    Science.gov (United States)

    Modorskii, V. Ya.; Sipatov, A. M.; Babushkina, A. V.; Kolodyazhny, D. Yu.; Nagorny, V. S.

    2016-10-01

    In the course of numerical experiments, a method for calculating two-phase flows was developed by solving a model problem. The results of the study were compared between two models that describe the processes of two-phase flow and the breakup of a liquid jet into droplets: the VoF model and the QMOM model, the two mathematical models considered for the implementation of the spray.

  8. GIS-based bivariate statistical techniques for groundwater potential ...

    Indian Academy of Sciences (India)

    ... complicated relation between groundwater occurrence and groundwater conditioning factors, which permits investigation of both systemic and stochastic uncertainty. Finally, it can be realized that these techniques are very beneficial for groundwater potential analyzing and can be practical for water-resource management ...

  9. The use of continuous improvement techniques: A survey-based ...

    African Journals Online (AJOL)

    Prior research has focused mainly on the effect of continuous improvement practices on performance. In contrast, this research collected data through a survey administered across multiple industries throughout the U.S. and identified the continuous improvement techniques (i.e., lean and quality improvement methods) that ...

  10. GIS-based bivariate statistical techniques for groundwater potential ...

    Indian Academy of Sciences (India)

    Ali Haghizadeh

    2017-11-23

    Nov 23, 2017 ... of both systemic and stochastic uncertainty. Finally, it can be realized that these techniques are very beneficial for groundwater potential analyzing and can be practical for water-resource management experts. Keywords. Groundwater; statistical index; Dempster–Shafer theory; water resource management; ...

  11. Application of entropy measurement technique in grey based ...

    African Journals Online (AJOL)

    Welding is an absolutely essential technique used in various industries, such as the automotive, construction and aviation industries. ... For this study, four control variables are selected (current, voltage, gas flow rate and wire feed rate) and their effect on the four quality characteristics is analysed: ultimate ...

  12. Non-lens-based surgical techniques for presbyopia correction.

    Science.gov (United States)

    Bilbao-Calabuig, R; Llovet-Osuna, F

    2017-09-01

    Presbyopia is the most prevalent refractive error. With a progressively aging population, its surgical correction is an important challenge for the ophthalmology community, as well as for the millions of patients who suffer from it and who are increasingly demanding alternatives to its correction with glasses or contact lenses. A review is presented with a synthesised discussion on the pathophysiological theories of presbyopia and an updated and analytical description of the surgical techniques used to treat presbyopia that do not involve the lens. Corneal procedures include various types of corneal implants and photo-ablative techniques that generate a multifocal cornea, or monovision. Scleral procedures exert a traction on the supralenticular sclera that supposedly improves the amplitude of accommodation. None of the techniques is able to completely eliminate the need for near-vision glasses, but many of them manage to improve the refractive status of the patients. More studies with rigorous and standardised methods and longer follow-up are needed to evaluate the changes in the near vision of the patients, in order to corroborate the real and practical usefulness of many of these techniques. Copyright © 2017 Sociedad Española de Oftalmología. Published by Elsevier España, S.L.U. All rights reserved.

  13. MRA Based Efficient Database Storing and Fast Querying Technique

    Directory of Open Access Journals (Sweden)

    Mitko Kostov

    2017-02-01

    Full Text Available In this paper we consider a specific way of organizing 1D signal or 2D image databases, such that more efficient storage and faster querying are achieved. A multiresolution data processing technique is used in order to save the most significant processed data.

  14. Advanced and automated laser-based technique to evaluate aggregates

    CSIR Research Space (South Africa)

    Anochie-Boateng, Joseph

    2011-11-01

    Full Text Available A three-dimensional laser scanning technique is presented to determine flatness, elongation and sphericity of aggregates used in South African roads. A three-dimensional (3-D) laser scanning device was used to obtain the dimensions, surface area, volume, and subsequently, compute flat...

  15. MySQL based selection of appropriate indexing technique in ...

    African Journals Online (AJOL)

    This paper deals with the selection of an appropriate indexing technique applied to a MySQL database for a health care system, and related performance issues, using a multiclass support vector machine (SVM). The patient database is generally huge and contains a lot of variation. For the quick search or fast retrieval of the desired ...

  16. Efficient Identification Using a Prime-Feature-Based Technique

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar; Haq, Shaiq A.; Valente, Andrea

    2011-01-01

    Identification of authorized train drivers through biometrics is a growing area of interest in locomotive radio remote control systems. The existing technique of password authentication is not very reliable and potentially unauthorized personnel may also operate the system on behalf of the operat...

  17. Assessing sequential data assimilation techniques for integrating GRACE data into a hydrological model

    KAUST Repository

    Khaki, M.

    2017-07-06

    The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular sequential data assimilation techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches, Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also for accounting for error distributions. In particular, the deterministic EnKF is tested to avoid perturbing observations before assimilation (as is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering all of Australia. To evaluate the filters' performances and analyze their impact on model simulations, their estimates are validated by independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of the deterministic EnKF, i.e. the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), respectively improving the model groundwater estimation errors by 34% and 31% compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
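
    The W3RA/GRACE assimilation itself cannot be reproduced from this record, but the stochastic EnKF analysis step with perturbed observations - the step that the deterministic variants (SQRA, EnSRF) avoid - can be sketched as follows; the state dimension, observation operator and error covariance below are hypothetical.

    ```python
    import numpy as np

    def enkf_update(ensemble, obs, H, R, seed=0):
        """Stochastic EnKF analysis step. ensemble: (n_state, n_members) forecast states,
        obs: (n_obs,) observation vector (e.g. a TWS-like quantity), H: (n_obs, n_state)
        observation operator, R: (n_obs, n_obs) observation error covariance."""
        rng = np.random.default_rng(seed)
        n_state, n_members = ensemble.shape
        A = ensemble - ensemble.mean(axis=1, keepdims=True)     # ensemble anomalies
        P = A @ A.T / (n_members - 1)                            # sample forecast covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)             # Kalman gain
        # Perturb the observations for each member (the step a deterministic EnKF avoids)
        obs_pert = obs[:, None] + rng.multivariate_normal(np.zeros(len(obs)), R, n_members).T
        return ensemble + K @ (obs_pert - H @ ensemble)

    # Hypothetical example: 3 storage states, 20 members, one observation of their sum
    ens = np.random.default_rng(5).normal(100.0, 10.0, (3, 20))
    H = np.ones((1, 3))
    R = np.array([[25.0]])
    print(enkf_update(ens, np.array([310.0]), H, R).mean(axis=1))
    ```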

  18. Egg volume prediction using machine vision technique based on pappus theorem and artificial neural network.

    Science.gov (United States)

    Soltani, Mahmoud; Omid, Mahmoud; Alimardani, Reza

    2015-05-01

    Egg size is one of the important properties of an egg that is judged by customers. Accordingly, in egg sorting and grading, the size of eggs must be considered. In this research, a new method of egg volume prediction was proposed without the need to measure egg weight. An accurate and efficient image processing algorithm was designed and implemented for computing the major and minor diameters of eggs. Two methods of egg size modeling were developed. In the first method, a mathematical model was proposed based on the Pappus theorem. In the second method, an Artificial Neural Network (ANN) technique was used to estimate egg volume. The egg volumes determined by these methods were compared statistically with actual values. For the mathematical model, the R2, mean absolute error and maximum absolute error values were 0.99, 0.59 cm3 and 1.69 cm3, respectively. To determine the best ANN, R2test and RMSEtest were used as selection criteria. The best ANN topology was 2-28-1, which had an R2test of 0.992 and an RMSEtest of 0.66. After system calibration, the proposed models were evaluated. The results indicated that the mathematical model yielded more satisfactory results, so this technique was selected for egg size determination.
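
    The paper's Pappus-based model is not given explicitly in this record. If the egg profile is approximated as a prolate spheroid, both Pappus's centroid theorem and the disc method reduce to V = (pi/6)·L·W², with L and W the image-derived major and minor diameters; the sketch below uses that simplification and hypothetical measurements, not the authors' exact formulation.

    ```python
    import math

    def egg_volume_spheroid(major_diameter_cm, minor_diameter_cm):
        """Prolate-spheroid approximation of egg volume (cm3) from the two image-derived
        diameters: V = (pi/6) * L * W**2. A simplification of the solid-of-revolution
        (Pappus / disc-method) model, not the paper's exact formulation."""
        return math.pi / 6.0 * major_diameter_cm * minor_diameter_cm ** 2

    # Hypothetical diameters as measured by the image processing algorithm
    print(round(egg_volume_spheroid(5.8, 4.3), 2), "cm3")
    ```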

  19. Modelling tick abundance using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    satellite images to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance...... the predicted distribution of larvae was mostly even throughout Denmark, it was primarily around the coastlines in Norway and Sweden. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict for larger...... the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region....

  20. Model Based Vision for Aircraft Position Determination

    Science.gov (United States)

    Sridhar, Banavar; Chatterji, Gano B.; Soni, Tarun; Showman, Robert D. (Technical Monitor)

    1994-01-01

    This paper investigates the use of imaging sensors to estimate the position of an aircraft with respect to the runway during landing. Passive vision techniques to estimate aircraft position during landing rely on the known runway model, images acquired by onboard imaging sensor, orientation information provided by the inertial navigation system and the position estimate provided by devices such as the global positioning system. Point features in the runway model are compared with the onboard sensor images of the features and the difference between the two is used to correct the aircraft position and orientation. In this paper the sensitivity of point features is examined as a means of determining the accuracy of such position estimation techniques. Expressions are derived for the sensitivity of image point to errors in the position and orientation of the sensor. Using these, the sensitivity of the image to aircraft position and orientation errors along a typical landing glide path is studied. A least squares technique based on this sensitivity analysis is described for the correction of position and orientation estimates. The final version of the paper will include results from the application of this analysis to real image sequences collected in flight.

  1. A model evaluation checklist for process-based environmental models

    Science.gov (United States)

    Jackson-Blake, Leah

    2015-04-01

    the conceptual model on which it is based. In this study, a number of model structural shortcomings were identified, such as a lack of dissolved phosphorus transport via infiltration excess overland flow, potential discrepancies in the particulate phosphorus simulation and a lack of spatial granularity. (4) Conceptual challenges, as conceptual models on which predictive models are built are often outdated, having not kept up with new insights from monitoring and experiments. For example, soil solution dissolved phosphorus concentration in INCA-P is determined by the Freundlich adsorption isotherm, which could potentially be replaced using more recently-developed adsorption models that take additional soil properties into account. This checklist could be used to assist in identifying why model performance may be poor or unreliable. By providing a model evaluation framework, it could help prioritise which areas should be targeted to improve model performance or model credibility, whether that be through using alternative calibration techniques and statistics, improved data collection, improving or simplifying the model structure or updating the model to better represent current understanding of catchment processes.
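
    The Freundlich adsorption isotherm mentioned above has the standard form S = Kf·C^(1/n), relating sorbed and dissolved phosphorus; a minimal sketch with hypothetical parameter values and units (not those used in INCA-P) is shown below.

    ```python
    import numpy as np

    def freundlich_sorbed(C, Kf, n):
        """Freundlich isotherm: sorbed concentration S = Kf * C**(1/n), with C the
        dissolved-phosphorus concentration. Kf and n below are hypothetical values."""
        return Kf * np.power(C, 1.0 / n)

    C = np.linspace(0.01, 1.0, 5)                   # mg P per litre in soil solution (assumed)
    print(freundlich_sorbed(C, Kf=35.0, n=2.5))     # mg P per kg soil (assumed units)
    ```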

  2. Vibration measurement-based simple technique for damage detection of truss bridges: A case study

    Directory of Open Access Journals (Sweden)

    Sudath C. Siriwardane

    2015-10-01

    Full Text Available Bridges experience increasing traffic volume and weight, deterioration of components and a large number of stress cycles. Therefore, assessment of the current condition of steel railway bridges becomes necessary. Most of the commonly available approaches for structural health monitoring are based on visual inspection and non-destructive testing methods. Visual inspection is unreliable as it depends on the inspectors and their experience, and non-destructive testing methods are found to be expensive. Recent research has therefore noted that structural health monitoring methods based on dynamic modal parameters or vibration measurements are economical and may also provide more realistic predictions of the damage state of civil infrastructure. This paper therefore proposes a simple technique to locate the damage region of railway truss bridges based on measured modal parameters. The technique is discussed with a case study. Initially the paper describes the details of the considered railway bridge. Then observations from visual inspection, material testing and in situ load testing are discussed in separate sections. The development of a validated finite element model of the considered bridge is comprehensively discussed. Variations of modal parameters versus the position of the damage are then plotted. These plots are considered the main reference for locating damage of the railway bridge in future periodic inspections, by comparing the corresponding measured modal parameters. Finally the procedure of periodic vibration measurement and the damage locating technique are clearly illustrated.

  3. Remediation of textile effluents by membrane based treatment techniques: a state of the art review.

    Science.gov (United States)

    Dasgupta, Jhilly; Sikder, Jaya; Chakraborty, Sudip; Curcio, Stefano; Drioli, Enrico

    2015-01-01

    The textile industries hold an important position in the global industrial arena because of their undeniable contributions to basic human needs satisfaction and to the world economy. These industries are however major consumers of water, dyes and other toxic chemicals. The effluents generated from each processing step comprise substantial quantities of unutilized resources. The effluents if discharged without prior treatment become potential sources of pollution due to their several deleterious effects on the environment. The treatment of heterogeneous textile effluents therefore demands the application of environmentally benign technology with appreciable quality water reclamation potential. These features can be observed in various innovative membrane based techniques. The present review paper thus elucidates the contributions of membrane technology towards textile effluent treatment and unexhausted raw materials recovery. The reuse possibilities of water recovered through membrane based techniques, such as ultrafiltration and nanofiltration in primary dye houses or auxiliary rinse vats have also been explored. Advantages and bottlenecks, such as membrane fouling associated with each of these techniques have also been highlighted. Additionally, several pragmatic models simulating transport mechanism across membranes have been documented. Finally, various accounts dealing with techno-economic evaluation of these membrane based textile wastewater treatment processes have been provided. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    Science.gov (United States)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  5. Synopsis of Soft Computing Techniques used in Quadrotor UAV Modelling and Control

    Directory of Open Access Journals (Sweden)

    Attila Nemes

    2015-01-01

    Full Text Available The aim of this article is to give an introduction to quadrotor systems with an overview of soft computing techniques used in quadrotor unmanned aerial vehicle (UAV) control, modelling, object following and collision avoidance. The quadrotor system basics, its structure and dynamic model definitions are recapitulated. Further on, a synopsis is given of previously proposed methods, the results are evaluated and the conclusions drawn by the authors of the referenced publications are summarized. The result of this article is a summary of multiple papers on fuzzy logic techniques used in position and altitude control systems for UAVs. An overview of fuzzy-system-based visual servoing for object tracking and collision avoidance is also given, together with a briefing on a study of the efficiency of quadrotor UAV control techniques. The conclusion is that, though soft computing methods are widely used with good results, there is still room for much research on finding more efficient soft computing tools for simple modelling, robust dynamic control and fast collision avoidance in quadrotor UAV control.

  6. A Survey of Model-based Sensor Data Acquisition and Management

    OpenAIRE

    Aggarwal, Charu C.; Sathe, Saket; Papaioannou, Thanasis; Jeung, Hoyoung; Aberer, Karl

    2013-01-01

    In recent years, due to the proliferation of sensor networks, there has been a genuine need of researching techniques for sensor data acquisition and management. To this end, a large number of techniques have emerged that advocate model-based sensor data acquisition and management. These techniques use mathematical models for performing various, day-to-day tasks involved in managing sensor data. In this chapter, we survey the state-of-the-art techniques for model-based sensor data acquisition...

  7. LFC based adaptive PID controller using ANN and ANFIS techniques

    Directory of Open Access Journals (Sweden)

    Mohamed I. Mosaad

    2014-12-01

    Full Text Available This paper presents an adaptive PID Load Frequency Control (LFC) for power systems using Neuro-Fuzzy Inference Systems (ANFIS) and Artificial Neural Networks (ANN) oriented by a Genetic Algorithm (GA). The PID controller parameters are tuned off-line using the GA to minimize the integral square error over a wide range of load variations. The values of the PID controller parameters obtained from the GA are used to train both ANFIS and ANN. Therefore, the two proposed techniques can tune the PID controller parameters online for an optimal response at any other load point within the operating range. Testing of the developed techniques shows that the adaptive PID-LFC preserves optimal performance over the whole loading range. The results signify the superiority of ANFIS over ANN in terms of performance measures.
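
    The GA-based off-line tuning is described only at a high level in this record. A compact sketch of the same idea - searching for PID gains that minimize the integral of squared error on a toy first-order frequency-deviation plant, with SciPy's differential evolution standing in for the GA - is given below; the plant, disturbance and gain bounds are all assumptions.

    ```python
    import numpy as np
    from scipy.optimize import differential_evolution

    def ise(gains, tau=10.0, dt=0.05, t_end=60.0, disturbance=0.1):
        """Integral of squared error for a PID loop regulating a toy first-order
        plant dy/dt = (-y + u - disturbance) / tau (all parameter values hypothetical)."""
        kp, ki, kd = gains
        y, integ, e_prev, cost = 0.0, 0.0, 0.0, 0.0
        for _ in range(int(t_end / dt)):
            e = 0.0 - y                              # setpoint: zero frequency deviation
            integ += e * dt
            deriv = (e - e_prev) / dt
            u = kp * e + ki * integ + kd * deriv     # PID control law
            y += dt * (-y + u - disturbance) / tau   # explicit Euler plant step
            e_prev = e
            cost += e * e * dt
        return cost

    # Differential evolution stands in for the GA used in the paper
    result = differential_evolution(ise, bounds=[(0, 10), (0, 5), (0, 2)], seed=6, maxiter=30)
    print(result.x, result.fun)
    ```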

  8. A Review On Segmentation Based Image Compression Techniques

    Directory of Open Access Journals (Sweden)

    S.Thayammal

    2013-11-01

    Full Text Available Abstract - The storage and transmission of imagery have become a more challenging task in the current scenario of multimedia applications. Hence, an efficient compression scheme is highly essential for imagery, as it reduces the requirements for storage media and transmission bandwidth. Compression techniques must not only improve performance but also converge quickly in order to be applied to real-time applications. Various algorithms have been developed for image compression, but each has its own pros and cons. Here, an extensive analysis of existing methods is performed, and the use of existing work for developing novel techniques that face the challenging task of image storage and transmission in multimedia applications is highlighted.

  9. (NHIS) using data mining technique as a statistical model

    African Journals Online (AJOL)

    kofi.mereku

    2014-05-23

    May 23, 2014 ... Scheme (NHIS) claims in the Awutu-Effutu-Senya District using data mining techniques, with a specific focus on .... transform them into a format that is friendly to data mining algorithms, such as .... many groups to access the data, facilitate updating the data, and improve the efficiency of checking the data for ...

  10. Fuzzy model-based control of a nuclear reactor

    International Nuclear Information System (INIS)

    Van Den Durpel, L.; Ruan, D.

    1994-01-01

    The fuzzy model-based control of a nuclear power reactor is an emerging research topic world-wide. SCK-CEN is dealing with this research in a preliminary stage, including two aspects, namely fuzzy control and fuzzy modelling. The aim is to combine both methodologies in contrast to conventional model-based PID control techniques, and to state advantages of including fuzzy parameters as safety and operator feedback. This paper summarizes the general scheme of this new research project

  11. Survey of Contrast Enhancement Techniques based on Histogram Equalization

    OpenAIRE

    Manpreet Kaur,; Jasdeep Kaur; Jappreet Kaur

    2011-01-01

    Contrast enhancement is frequently referred to as one of the most important issues in image processing. Histogram equalization (HE) is one of the common methods used for improving contrast in digital images, and has proved to be a simple and effective image contrast enhancement technique. However, conventional histogram equalization methods usually result in excessive contrast enhancement, which causes an unnatural look and visual artifacts in the processed i...
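
    As a concrete reference for the conventional HE baseline that the surveyed variants improve upon, a minimal NumPy implementation of global histogram equalization for an 8-bit grayscale image is given below; the low-contrast test image is synthetic.

    ```python
    import numpy as np

    def histogram_equalize(img):
        """Global histogram equalization of an 8-bit grayscale image via the CDF mapping."""
        img = np.asarray(img, dtype=np.uint8)
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum()
        cdf_min = cdf[np.nonzero(cdf)[0][0]]                  # first non-zero CDF value
        lut = np.round((cdf - cdf_min) / (img.size - cdf_min) * 255.0)
        lut = np.clip(lut, 0, 255).astype(np.uint8)
        return lut[img]                                        # apply the lookup table

    # Hypothetical low-contrast image: values squeezed into a narrow band
    rng = np.random.default_rng(7)
    low_contrast = rng.integers(90, 140, size=(128, 128), dtype=np.uint8)
    print(low_contrast.std(), histogram_equalize(low_contrast).std())
    ```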

  12. Research on chronicles correlation based network intrusion detection techniques

    International Nuclear Information System (INIS)

    Han Zhengping; Jin Yan; Chen Taiwei; Xu Rongsheng

    2007-01-01

    To address some problems existing in network intrusion detection techniques, such as overwhelming numbers of alerts, false positives and a lack of alert description, this paper introduces a chronicle correlation method for alert event analysis, illustrated by some correlative examples. With a designed chronicle recognition language, portscan alerts can be reduced, false positives in buffer overflow alerts can be detected, and the semantics of NetBios DCERPC attack alerts can be improved. (authors)

  13. Wavelet-based de-noising techniques in MRI

    Czech Academy of Sciences Publication Activity Database

    Bartušek, Karel; Přinosil, J.; Smékal, Z.

    2011-01-01

    Roč. 104, č. 3 (2011), s. 480-488 ISSN 0169-2607 R&D Projects: GA MŠk ED0017/01/01; GA ČR GAP102/11/0318 Institutional research plan: CEZ:AV0Z20650511 Keywords : wavelet transformation * filtering technique * magnetic resonance imaging Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 1.516, year: 2011

  14. Computer based policy making in draining old surgical techniques.

    Science.gov (United States)

    Dragomirescu, E; Stavri, E; Dimitriu, R; Belu, I

    1995-01-01

    Discopathy has a significant incidence worldwide, affects more men than women, and is influenced by social factors. Treatment often carries a surgical indication. The operating techniques depend on the one hand on the equipment, instruments and expertise, and on the other hand on the specifics of the case, the surgeon's inspiration, etc. These factors influence the postoperative evolution of the patient.

  15. Use of System Dynamics Techniques in the Garrison Health Modelling Tool

    Science.gov (United States)

    2010-11-01

    Joint Health Command (JHC) tasked DSTO to develop techniques for modelling Defence health service delivery both in a Garrison environment in Australia ... Garrison Health Modelling Tool, a prototype software package designed to provide decision-support to JHC health officers and managers in a garrison

  16. On a Numerical and Graphical Technique for Evaluating some Models Involving Rational Expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  17. On a numerical and graphical technique for evaluating some models involving rational expectations

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders Rygh

    Campbell and Shiller (1987) proposed a graphical technique for the present value model which consists of plotting the spread and theoretical spread as calculated from the cointegrated vector autoregressive model. We extend these techniques to a number of rational expectation models and give...

  18. Space Vector Pulse Width Modulation Technique Based Design and Simulation of a Three-Phase Voltage Source Converter Systems

    OpenAIRE

    Farhan Beg

    2014-01-01

    A Space Vector based Pulse Width Modulation control technique for the three-phase PWM converter is proposed in this paper. The proposed control scheme is based on a synchronous reference frame model. High performance and efficiency are obtained with regard to the DC bus voltage and the power factor of the PWM rectifier, thus leading to low losses. MATLAB/SIMULINK is used as a platform for the simulations and a SIMULINK model is presented in the paper. The r...
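
    As a rough, hedged illustration of the space-vector idea summarized above (and not the authors' SIMULINK model), the sketch below computes the sector and the dwell times of the two active vectors for a reference voltage given in the stationary alpha-beta frame; the variable names and example numbers are assumptions made here for illustration only.

    ```python
    # Minimal SVPWM sketch: sector selection and active-vector dwell times
    # for a reference voltage expressed in the stationary alpha-beta frame.
    # Assumed names (v_alpha, v_beta, v_dc, t_s) are illustrative placeholders.
    import math

    def svpwm_dwell_times(v_alpha, v_beta, v_dc, t_s):
        v_ref = math.hypot(v_alpha, v_beta)          # magnitude of the reference vector
        theta = math.atan2(v_beta, v_alpha) % (2 * math.pi)
        sector = int(theta // (math.pi / 3)) + 1     # sectors 1..6, each 60 degrees wide
        phi = theta - (sector - 1) * math.pi / 3     # angle measured inside the sector
        m = math.sqrt(3) * v_ref / v_dc              # modulation index
        t1 = t_s * m * math.sin(math.pi / 3 - phi)   # dwell time of the first active vector
        t2 = t_s * m * math.sin(phi)                 # dwell time of the second active vector
        t0 = t_s - t1 - t2                           # remaining time shared by the zero vectors
        return sector, t1, t2, t0

    # Example: 230 V reference at 30 degrees, 600 V DC link, 100 us switching period
    print(svpwm_dwell_times(230 * math.cos(math.radians(30)),
                            230 * math.sin(math.radians(30)), 600.0, 100e-6))
    ```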

  19. Using an inverse modelling approach to evaluate the water retention in a simple water harvesting technique

    Directory of Open Access Journals (Sweden)

    K. Verbist

    2009-10-01

    Full Text Available In arid and semi-arid zones, runoff harvesting techniques are often applied to increase the water retention and infiltration on steep slopes. Additionally, they act as an erosion control measure to reduce land degradation hazards. Nevertheless, few efforts were observed to quantify the water harvesting processes of these techniques and to evaluate their efficiency. In this study, a combination of detailed field measurements and modelling with the HYDRUS-2D software package was used to visualize the effect of an infiltration trench on the soil water content of a bare slope in northern Chile. Rainfall simulations were combined with high spatial and temporal resolution water content monitoring in order to construct a useful dataset for inverse modelling purposes. Initial estimates of model parameters were provided by detailed infiltration and soil water retention measurements. Four different measurement techniques were used to determine the saturated hydraulic conductivity (Ksat independently. The tension infiltrometer measurements proved a good estimator of the Ksat value and a proxy for those measured under simulated rainfall, whereas the pressure and constant head well infiltrometer measurements showed larger variability. Six different parameter optimization functions were tested as a combination of soil-water content, water retention and cumulative infiltration data. Infiltration data alone proved insufficient to obtain high model accuracy, due to large scatter on the data set, and water content data were needed to obtain optimized effective parameter sets with small confidence intervals. Correlation between the observed soil water content and the simulated values was as high as R2=0.93 for ten selected observation points used in the model calibration phase, with overall correlation for the 22 observation points equal to 0.85. The model results indicate that the infiltration trench has a
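
    The record above relies on HYDRUS-2D and measured retention and infiltration data for its inverse modelling, which cannot be reproduced here. As a minimal, hedged sketch of the kind of parameter-optimization step involved, the snippet below fits van Genuchten retention parameters to a handful of made-up retention points with non-linear least squares; all measurement values and bounds are illustrative assumptions.

    ```python
    # Illustrative inverse-estimation step (not HYDRUS-2D itself): fit van Genuchten
    # retention parameters to measured water-retention points with least squares.
    # The measurement arrays below are made-up placeholders.
    import numpy as np
    from scipy.optimize import curve_fit

    def van_genuchten(h, theta_r, theta_s, alpha, n):
        """Volumetric water content as a function of suction head h (positive, cm)."""
        m = 1.0 - 1.0 / n
        return theta_r + (theta_s - theta_r) / (1.0 + (alpha * h) ** n) ** m

    h_obs = np.array([1.0, 10.0, 31.6, 100.0, 316.0, 1000.0])   # suction heads (cm)
    theta_obs = np.array([0.42, 0.38, 0.33, 0.26, 0.19, 0.14])  # measured water contents

    p0 = [0.05, 0.43, 0.02, 1.5]                                 # initial parameter guesses
    popt, pcov = curve_fit(van_genuchten, h_obs, theta_obs, p0=p0,
                           bounds=([0.0, 0.2, 1e-4, 1.05], [0.2, 0.6, 1.0, 5.0]))
    perr = np.sqrt(np.diag(pcov))                                # rough 1-sigma uncertainties
    print("theta_r, theta_s, alpha, n =", popt)
    print("approx. standard errors    =", perr)
    ```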

  20. Development of new techniques for assimilating satellite altimetry data into ocean models

    Science.gov (United States)

    Yu, Peng

    State of the art fully three-dimensional ocean models are very computationally expensive and their adjoints are even more resource intensive. However, many features of interest are approximated by the first baroclinic mode over much of the ocean, especially in the lower and mid latitude regions. Based on this dynamical feature, a new type of data assimilation scheme to assimilate sea surface height (SSH) data, a reduced-space adjoint technique, is developed and implemented with a three-dimensional model using vertical normal mode decomposition. The technique is tested with the Navy Coastal Ocean Model (NCOM) configured to simulate the Gulf of Mexico. The assimilation procedure works by minimizing the cost function, which generalizes the misfit between the observations and their counterpart model variables. The "forward" model is integrated for the period during which the data are assimilated. Vertical normal mode decomposition retrieves the first baroclinic mode, and the data misfit between the model outputs and observations is calculated. Adjoint equations based on a one-active-layer reduced gravity model, which approximates the first baroclinic mode, are integrated backward in time to get the gradient of the cost function with respect to the control variables (velocity and SSH of the first baroclinic mode). The gradient is input to an optimization algorithm (the limited memory Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is used for the cases presented here) to determine the new first baroclinic mode velocity and SSH fields, which are used to update the forward model variables at the initial time. Two main issues in the area of ocean data assimilation are addressed: (1) How can information provided only at the sea surface be transferred dynamically into deep layers? (2) How can information provided only locally, in limited oceanic regions, be horizontally transferred to ocean areas far away from the data-dense regions, but dynamically connected to it? The first
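
    The assimilation loop described above (run the forward model, obtain the gradient of the cost function from an adjoint integration, update the controls with limited-memory BFGS) cannot be reproduced without NCOM, but the control flow can be sketched with a toy quadratic misfit. In the placeholder below, a linear operator H stands in for "run the forward model and sample it at the observations"; all sizes and data are assumptions.

    ```python
    # Toy stand-in for the variational update described above: minimise a quadratic
    # data-misfit cost with the limited-memory BFGS optimiser. The real forward and
    # adjoint models are replaced by a placeholder linear operator H.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    n_ctrl, n_obs = 50, 20                   # control-vector and observation sizes (illustrative)
    H = rng.normal(size=(n_obs, n_ctrl))     # placeholder observation/forward operator
    x_true = rng.normal(size=n_ctrl)
    y_obs = H @ x_true + 0.01 * rng.normal(size=n_obs)

    def cost_and_grad(x):
        misfit = H @ x - y_obs
        j = 0.5 * float(misfit @ misfit)     # cost function: data misfit
        grad = H.T @ misfit                  # gradient, as an adjoint integration would supply
        return j, grad

    res = minimize(cost_and_grad, x0=np.zeros(n_ctrl), jac=True, method="L-BFGS-B")
    print("final cost:", res.fun, "iterations:", res.nit)
    ```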

  1. A BLENDING TECHNIQUE OF TOPOGRAHIC AND HYDROGRAPHIC DEMs FOR RIVER ALIGNMENT MODELLING

    Directory of Open Access Journals (Sweden)

    H. Karim

    2017-10-01

    Full Text Available Current practice in combining bathymetric and topographic DEMs is based on overlaying and merging both datasets into a new DEM along the river boundary. Across a few sample datasets from recent projects, the authors found that this method does not preserve the natural characteristics of the river, especially the slope between riverbank and riverbed. Several related issues are also highlighted: the validity of the topographic DEM as well as of the river boundary, the limitations of the DEMs, and how the bathymetry survey was carried out in the field. To overcome these issues, a new technique called DEM blending was proposed and tested on the project datasets. It is based on a fusion of the two DEMs (with respective buffer, offset and fusion ratio) from a validated river boundary to produce the riverbank slope, and a merging of two different interpolation results to produce the best riverbed DEM. A simple riverbank ontology was prescribed to illustrate the gain in accuracy and visualization provided by this technique. The output from three projects/DEM results is presented as a comparison between current practice and the proposed technique.
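
    A minimal sketch of the blending idea follows, assuming a simple linear fusion ratio across a fixed buffer width; the grids, river mask and buffer width are placeholders rather than the paper's datasets.

    ```python
    # Simplified DEM-fusion sketch: inside the river mask the bathymetric DEM is kept,
    # outside the buffer the topographic DEM is kept, and within the buffer the two
    # are blended with a linear fusion ratio to form a riverbank slope.
    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def blend_dems(topo, bathy, river_mask, buffer_cells=10):
        # Distance (in cells) from every land cell to the river boundary
        dist = distance_transform_edt(~river_mask)
        # Fusion weight: 1 inside the river, ramping linearly to 0 across the buffer
        w = np.clip(1.0 - dist / buffer_cells, 0.0, 1.0)
        return w * bathy + (1.0 - w) * topo

    topo = np.full((100, 100), 5.0)        # placeholder topographic DEM (m)
    bathy = np.full((100, 100), -2.0)      # placeholder riverbed DEM (m)
    river = np.zeros((100, 100), dtype=bool)
    river[:, 45:55] = True                 # a straight river channel
    blended = blend_dems(topo, bathy, river)
    print(blended[50, 40:60].round(2))     # smooth bank slope across the buffer
    ```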

  2. The application of neural networks with artificial intelligence technique in the modeling of industrial processes

    International Nuclear Information System (INIS)

    Saini, K. K.; Saini, Sanju

    2008-01-01

    Neural networks are a relatively new artificial intelligence technique that emulates the behavior of biological neural systems in digital software or hardware. These networks can 'learn', automatically, complex relationships among data. This feature makes the technique very useful in modeling processes for which mathematical modeling is difficult or impossible. The work described here outlines some examples of the application of neural networks with artificial intelligence technique in the modeling of industrial processes.

  3. Knowledge-Based Environmental Context Modeling

    Science.gov (United States)

    Pukite, P. R.; Challou, D. J.

    2017-12-01

    As we move from the oil-age to an energy infrastructure based on renewables, the need arises for new educational tools to support the analysis of geophysical phenomena and their behavior and properties. Our objective is to present models of these phenomena to make them amenable for incorporation into more comprehensive analysis contexts. Starting at the level of a college-level computer science course, the intent is to keep the models tractable and therefore practical for student use. Based on research performed via an open-source investigation managed by DARPA and funded by the Department of Interior [1], we have adapted a variety of physics-based environmental models for a computer-science curriculum. The original research described a semantic web architecture based on patterns and logical archetypal building-blocks (see figure) well suited for a comprehensive environmental modeling framework. The patterns span a range of features that cover specific land, atmospheric and aquatic domains intended for engineering modeling within a virtual environment. The modeling engine contained within the server relied on knowledge-based inferencing capable of supporting formal terminology (through NASA JPL's Semantic Web for Earth and Environmental Technology (SWEET) ontology and a domain-specific language) and levels of abstraction via integrated reasoning modules. One of the key goals of the research was to simplify models that were ordinarily computationally intensive to keep them lightweight enough for interactive or virtual environment contexts. The breadth of the elements incorporated is well-suited for learning as the trend toward ontologies and applying semantic information is vital for advancing an open knowledge infrastructure. As examples of modeling, we have covered such geophysics topics as fossil-fuel depletion, wind statistics, tidal analysis, and terrain modeling, among others. Techniques from the world of computer science will be necessary to promote efficient

  4. Application of geo-spatial techniques and cellular automata for modelling urban growth of a heterogeneous urban fringe

    OpenAIRE

    Mahesh Kumar Jat; Mahender Choudhary; Ankita Saxena

    2017-01-01

    Urban growth monitoring and assessment are essential for sustainable natural resource planning and optimum utilization, and for reducing the risk of problems arising from unplanned urban growth such as pollution, urban heat islands and ecological disturbances. Cellular Automata (CA) based modelling techniques have become popular in the recent past for simulating urban growth. The present study aims to evaluate the performance of the CA based SLEUTH model in simulating the urban growth of a complex ...

  5. Modelling Data Mining Dynamic Code Attributes with Scheme Definition Technique

    OpenAIRE

    Sipayung, Evasaria M; Fiarni, Cut; Tanudjaja, Randy

    2014-01-01

    Data mining is a technique used in different disciplines to search for significant relationships among variables in large data sets. One of the important steps in data mining is data preparation. In this step, complex data with more than one attribute need to be transformed into a format suitable for data mining algorithms. In this study, we concentrate on designing a proposed system to fetch attributes from complex data such as a product ID. The proposed system will then determine the basic ...

  6. Wave propagation in fluids models and numerical techniques

    CERN Document Server

    Guinot, Vincent

    2012-01-01

    This second edition with four additional chapters presents the physical principles and solution techniques for transient propagation in fluid mechanics and hydraulics. The application domains vary including contaminant transport with or without sorption, the motion of immiscible hydrocarbons in aquifers, pipe transients, open channel and shallow water flow, and compressible gas dynamics. The mathematical formulation is covered from the angle of conservation laws, with an emphasis on multidimensional problems and discontinuous flows, such as steep fronts and shock waves. Finite

  7. An eigenexpansion technique for modelling plasma start-up

    International Nuclear Information System (INIS)

    Pillsbury, R.D.

    1989-01-01

    An algorithm has been developed and implemented in a computer program that allows the estimation of PF coil voltages required to start-up an axisymmetric plasma in a tokamak in the presence of eddy currents in toroidally continuous conducting structures. The algorithm makes use of an eigen-expansion technique to solve the lumped parameter circuit loop voltage equations associated with the PF coils and passive (conducting) structures. An example of start-up for CIT (Compact Ignition Tokamak) is included

  8. Near infrared spectrometric technique for testing fruit quality: optimisation of regression models using genetic algorithms

    Science.gov (United States)

    Isingizwe Nturambirwe, J. Frédéric; Perold, Willem J.; Opara, Umezuruike L.

    2016-02-01

    Near infrared (NIR) spectroscopy has gained extensive use in quality evaluation. It is arguably one of the most advanced spectroscopic tools in non-destructive quality testing of food stuff, from measurement to data analysis and interpretation. NIR spectral data are interpreted through means often involving multivariate statistical analysis, sometimes associated with optimisation techniques for model improvement. The objective of this research was to explore the extent to which genetic algorithms (GA) can be used to enhance model development, for predicting fruit quality. Apple fruits were used, and NIR spectra in the range from 12000 to 4000 cm-1 were acquired on both bruised and healthy tissues, with different degrees of mechanical damage. GAs were used in combination with partial least squares regression methods to develop bruise severity prediction models, and compared to PLS models developed using the full NIR spectrum. A classification model was developed, which clearly separated bruised from unbruised apple tissue. GAs helped improve prediction models by over 10%, in comparison with full spectrum-based models, as evaluated in terms of error of prediction (Root Mean Square Error of Cross-validation). PLS models to predict internal quality, such as sugar content and acidity were developed and compared to the versions optimized by genetic algorithm. Overall, the results highlighted the potential use of GA method to improve speed and accuracy of fruit quality prediction.
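
    The study's spectra and exact GA settings are not available, but the wrapper logic (binary wavelength masks, cross-validated PLS as the fitness function, one-point crossover and bit-flip mutation) can be sketched on synthetic data as follows; population size, generation count and mutation rate are illustrative assumptions.

    ```python
    # Compact illustration of GA-driven wavelength selection wrapped around PLS
    # regression; the data and GA settings are synthetic placeholders.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    n_samples, n_wavelengths = 80, 120
    X = rng.normal(size=(n_samples, n_wavelengths))
    y = X[:, 10] - 0.5 * X[:, 45] + 0.1 * rng.normal(size=n_samples)   # synthetic "quality" trait

    def fitness(mask):
        if mask.sum() < 3:                       # PLS needs at least a few wavelengths
            return -np.inf
        pls = PLSRegression(n_components=2)
        return cross_val_score(pls, X[:, mask], y, cv=5,
                               scoring="neg_root_mean_squared_error").mean()

    pop_size, n_generations, p_mut = 20, 15, 0.02
    population = rng.random((pop_size, n_wavelengths)) < 0.3            # random initial masks
    for _ in range(n_generations):
        fits = np.array([fitness(ind) for ind in population])
        parents = population[np.argsort(fits)[::-1][: pop_size // 2]]  # keep the best half
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_wavelengths)
            child = np.concatenate([a[:cut], b[cut:]])                  # one-point crossover
            child ^= rng.random(n_wavelengths) < p_mut                  # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, np.array(children)])

    best = max(population, key=fitness)
    print("number of selected wavelengths:", int(best.sum()))
    ```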

  9. Voltage Stabilizer Based on SPWM technique Using Microcontroller

    Directory of Open Access Journals (Sweden)

    K. N. Tarchanidis

    2013-01-01

    Full Text Available This paper presents an application of the well-known SPWM technique to a voltage stabilizer, using a microcontroller. The stabilizer is of the AC/DC/AC type: the system rectifies the input AC voltage to a suitable DC level, and the intelligent control of an embedded microcontroller regulates the pulse width of the output voltage in order to produce, through a filter, a perfect sinusoidal AC voltage. The control program on the microcontroller can change the FET transistor firing in order to compensate for any input voltage variation. The software exploits the microcontroller’s interrupts to achieve concurrency in the running program.
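
    Below is a numerical sketch of the comparison an SPWM modulator performs (here in NumPy rather than on the paper's microcontroller): a 50 Hz sine reference is compared with a high-frequency triangular carrier, and the local average of the resulting gating signal tracks the sine that the output filter recovers. The frequencies and modulation index are assumptions.

    ```python
    # Illustrative SPWM waveform generation: sine reference vs. triangular carrier.
    import numpy as np
    from scipy.signal import sawtooth

    f_ref, f_carrier, m_a = 50.0, 5000.0, 0.8            # reference, carrier, modulation index
    t = np.linspace(0.0, 0.02, 20000, endpoint=False)    # one fundamental period

    reference = m_a * np.sin(2 * np.pi * f_ref * t)
    carrier = sawtooth(2 * np.pi * f_carrier * t, width=0.5)   # triangle wave in [-1, 1]
    gate = (reference > carrier).astype(int)             # 1 = upper switch on

    # The average of the gating signal over each carrier period tracks the sine,
    # which is what the output LC filter recovers as a sinusoidal AC voltage.
    print("duty cycle near the sine peak:", gate[4800:5200].mean().round(2))
    ```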

  10. Ozone Monitoring Based on Multi-Sensor Information Fusion Techniques

    Directory of Open Access Journals (Sweden)

    Yang GAO

    2014-07-01

    Full Text Available As a strong oxidizer, ozone can damage a wide variety of materials, so when ozone is used for purification its concentration must be monitored and controlled strictly. Monitoring and controlling the ozone concentration can greatly reduce the damage it causes. To this end, a multi-sensor information fusion technique is applied to fuse the gathered multi-sensor data, enabling real-time monitoring, forecasting and control. This improves the accuracy of ozone concentration measurements; careless errors are effectively eliminated and systematic errors are reduced.
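
    The record does not specify the fusion algorithm, so the snippet below shows one common rule, inverse-variance weighting, for combining several ozone-sensor readings into a single, more precise estimate; the readings and variances are made up for illustration.

    ```python
    # Minimal multi-sensor fusion example: inverse-variance weighted average.
    import numpy as np

    readings = np.array([0.061, 0.058, 0.065])     # ozone concentration readings (ppm)
    variances = np.array([1e-5, 4e-6, 2.5e-5])     # each sensor's error variance

    weights = (1.0 / variances) / np.sum(1.0 / variances)
    fused = np.sum(weights * readings)
    fused_var = 1.0 / np.sum(1.0 / variances)      # the fused estimate is more precise

    print(f"fused concentration: {fused:.4f} ppm (variance {fused_var:.2e})")
    ```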

  11. GPU-Based Techniques for Global Illumination Effects

    CERN Document Server

    Szirmay-Kalos, László; Sbert, Mateu

    2008-01-01

    This book presents techniques to render photo-realistic images by programming the Graphics Processing Unit (GPU). We discuss effects such as mirror reflections, refractions, caustics, diffuse or glossy indirect illumination, radiosity, single or multiple scattering in participating media, tone reproduction, glow, and depth of field. This book targets game developers, graphics programmers, and also students with some basic understanding of computer graphics algorithms, rendering APIs like Direct3D or OpenGL, and shader programming. In order to make this book self-contained, the most important c

  12. Adaptive optics correction based on stochastic parallel gradient descent technique using Zernike polynomials

    Science.gov (United States)

    Ma, Huimin; Qiao, Yan; Shen, Chunshan

    2017-10-01

    Adaptive optics systems based on stochastic parallel gradient descent (SPGD) optimization have shown great potential for compensating phase distortions induced by wave propagation through atmospheric turbulence. One key issue is increasing the convergence rate of the SPGD correction system. In the present study, it is shown that the convergence rate of the algorithm is greatly reduced as the number of corrector units increases. In this paper, an improved SPGD optimization process based on Zernike modes is developed: the optimized quantities are changed from actuator voltages to Zernike coefficients. A simulation model of an SPGD-based adaptive optics correction system for a laser beam projection system is built, and a numerical simulation of the compensation of random atmospheric turbulence is presented. To improve the convergence speed, the correction system is further refined by controlling the perturbation according to the proportion of the atmospheric aberration carried by each Zernike mode. The results show that the system converges smoothly and more rapidly once these aberration proportions are taken into account in the Zernike correction.
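
    A compact sketch of the Zernike-mode SPGD iteration follows, with a toy quadratic image-quality metric standing in for a real far-field measurement; the gain, perturbation amplitude and mode count are illustrative assumptions.

    ```python
    # Sketch of an SPGD iteration over Zernike coefficients with a toy metric.
    import numpy as np

    rng = np.random.default_rng(3)
    n_modes = 15
    a_turb = rng.normal(scale=0.5, size=n_modes)     # "atmospheric" Zernike coefficients

    def metric(a_corr):
        # Toy image-quality metric: larger when the correction cancels the aberration
        residual = a_turb + a_corr
        return -np.sum(residual ** 2)

    a_corr = np.zeros(n_modes)                       # correction coefficients (controls)
    gain, sigma = 0.3, 0.05
    for _ in range(500):
        delta = sigma * rng.choice([-1.0, 1.0], size=n_modes)   # bipolar perturbation
        j_plus = metric(a_corr + delta)
        j_minus = metric(a_corr - delta)
        a_corr += gain * (j_plus - j_minus) * delta             # SPGD update rule

    print("residual RMS after correction:", np.sqrt(np.mean((a_turb + a_corr) ** 2)).round(4))
    ```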

  13. Lab-Based Measurement of Remediation Techniques for Radiation Portal Monitors (Initial Report)

    International Nuclear Information System (INIS)

    Livesay, Jake

    2012-01-01

    Radiation Portal Monitors (RPM) deployed by the Second Line of Defense (SLD) are known to be sensitive to the natural environmental radioactive background. There are several techniques used to mitigate the effects of background on the monitors, but since the installation environments can vary significantly from one another, a standardized, systematic study of remediation techniques was proposed and carried out. This study is not meant to serve as the absolute last word on the subject. The data collected are, however, intelligible and useful. Some compromises were made, each of which will be described in detail. The hope of this initial report is to familiarize the SLD science teams with ORNL's effort to model the effect of various remediation techniques on simple, static backgrounds. This study provides a good start toward benchmarking the model, and each additional increment of data will serve to make the model more robust. The scope of this initial study is limited to a few basic cases. Its purpose is to prove the utility of lab-based study of remediation techniques and serve as a standard data set for future use. The importance of this first step of standardization will become obvious when science teams are working in parallel on issues of remediation; having a common starting point will do away with one category of difference, thereby making easier the task of determining the sources of disagreement. Further measurements will augment this data set, allowing for further constraint of the universe of possible situations. As will be discussed in the 'Going Forward' section, more data will be included in the final report of this work. Of particular interest will be the data taken with the official TSA lead collimators, which will provide more direct results for comparison with installation data.

  14. TESTING DIFFERENT SURVEY TECHNIQUES TO MODEL ARCHITECTONIC NARROW SPACES

    Directory of Open Access Journals (Sweden)

    A. Mandelli

    2017-08-01

    Full Text Available In the architectural survey field, there has been the spread of a vast number of automated techniques. However, it is important to underline the gap that exists between the technical specification sheet of a particular instrument and its usability, accuracy and level of automation reachable in real-case scenarios, especially when speaking about the Cultural Heritage (CH) field. In fact, even if the technical specifications (range, accuracy and field of view) are known for each instrument, their functioning and features are influenced by the environment, shape and materials of the object. The results depend more on how techniques are employed than on the nominal specifications of the instruments. The aim of this article is to evaluate the real usability, for the 1:50 architectonic restitution scale, of common and not so common survey techniques applied to the complex scenario of dark, intricate and narrow spaces such as service areas, corridors and stairs of Milan’s cathedral indoors. Tests have shown that the quality of the results is strongly affected by side-issues like the impossibility of following the theoretical ideal methodology when surveying such spaces. The tested instruments are: the laser scanner Leica C10, the GeoSLAM ZEB1, the DOT DPI 8 and two photogrammetric setups, a full frame camera with a fisheye lens and the NCTech iSTAR, a panoramic camera. Each instrument presents advantages and limits concerning both the sensors themselves and the acquisition phase.

  15. Testing Different Survey Techniques to Model Architectonic Narrow Spaces

    Science.gov (United States)

    Mandelli, A.; Fassi, F.; Perfetti, L.; Polari, C.

    2017-08-01

    In the architectural survey field, there has been the spread of a vast number of automated techniques. However, it is important to underline the gap that exists between the technical specification sheet of a particular instrument and its usability, accuracy and level of automation reachable in real-case scenarios, especially when speaking about the Cultural Heritage (CH) field. In fact, even if the technical specifications (range, accuracy and field of view) are known for each instrument, their functioning and features are influenced by the environment, shape and materials of the object. The results depend more on how techniques are employed than on the nominal specifications of the instruments. The aim of this article is to evaluate the real usability, for the 1:50 architectonic restitution scale, of common and not so common survey techniques applied to the complex scenario of dark, intricate and narrow spaces such as service areas, corridors and stairs of Milan's cathedral indoors. Tests have shown that the quality of the results is strongly affected by side-issues like the impossibility of following the theoretical ideal methodology when surveying such spaces. The tested instruments are: the laser scanner Leica C10, the GeoSLAM ZEB1, the DOT DPI 8 and two photogrammetric setups, a full frame camera with a fisheye lens and the NCTech iSTAR, a panoramic camera. Each instrument presents advantages and limits concerning both the sensors themselves and the acquisition phase.

  16. Dynamic document object model formation technique for corporate website protection against automatic coping of information

    Directory of Open Access Journals (Sweden)

    Galushka Vasily

    2017-01-01

    Full Text Available The article describes an approach to the problem of automatic copying of information from websites on the Internet, which is carried out using parsing techniques based on regular expressions or function libraries. To protect against this type of information security threat, it is proposed to dynamically generate and periodically change the object model of the HTML document when it is generated and sent to the browser. These changes should affect the values of the identifying tag attributes and the structure of the object model tree. As attribute values, it is proposed to use character strings of limited length obtained by hashing random numbers; the structure of the object model should be changed by adding extra tags at the corresponding levels of the hierarchy of the tree representing it. The simultaneous application of these methods rules out compiling an algorithm that extracts the required information from the overall structure of the web page.
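
    A minimal sketch of the attribute-randomisation step follows, assuming identifiers are short strings derived by hashing random bytes and regenerated on every page build so that scrapers cannot rely on fixed id or class selectors; the names and lengths are illustrative.

    ```python
    # Generate per-build randomised id/class values by hashing random bytes.
    import hashlib
    import secrets
    import string

    def random_identifier(length=8):
        digest = hashlib.sha256(secrets.token_bytes(16)).hexdigest()
        ident = digest[:length]
        # HTML ids conventionally start with a letter, so substitute one if needed
        if ident[0] not in string.ascii_letters:
            ident = "a" + ident[1:]
        return ident

    # Map stable internal names to attribute values that change on every generation
    id_map = {name: random_identifier() for name in ("price", "title", "article-body")}
    print(id_map)   # different values each time the page is generated
    ```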

  17. Comparing smoothing techniques in Cox models for exposure-response relationships.

    Science.gov (United States)

    Govindarajulu, Usha S; Spiegelman, Donna; Thurston, Sally W; Ganguli, Bhaswati; Eisen, Ellen A

    2007-09-10

    To allow for non-linear exposure-response relationships, we applied flexible non-parametric smoothing techniques to models of time to lung cancer mortality in two occupational cohorts with skewed exposure distributions. We focused on three different smoothing techniques in Cox models: penalized splines, restricted cubic splines, and fractional polynomials. We compared standard software implementations of these three methods based on their visual representation and criterion for model selection. We propose a measure of the difference between a pair of curves based on the area between them, standardized by the average of the areas under the pair of curves. To capture the variation in the difference over the range of exposure, the area between curves was also calculated at percentiles of exposure and expressed as a percentage of the total difference. The dose-response curves from the three methods were similar in both studies over the denser portion of the exposure range, with the difference between curves up to the 50th percentile less than 1 per cent of the total difference. A comparison of inverse variance weighted areas applied to the data set with a more skewed exposure distribution allowed us to estimate area differences with more precision by reducing the proportion attributed to the upper 1 per cent tail region. Overall, the penalized spline and the restricted cubic spline were closer to each other than either was to the fractional polynomial. (c) 2007 John Wiley & Sons, Ltd.
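
    The proposed curve-difference measure can be written down directly: the area between two fitted curves, standardised by the average of the areas under them. The sketch below evaluates it numerically for two toy dose-response curves standing in for, say, a penalized-spline fit and a fractional-polynomial fit; the curves themselves are illustrative assumptions.

    ```python
    # Numerical sketch of the standardised area-between-curves measure.
    import numpy as np

    exposure = np.linspace(0.0, 10.0, 200)
    curve_a = np.log1p(exposure)                             # stand-in for one smoother
    curve_b = 0.9 * np.log1p(exposure) + 0.05 * exposure     # stand-in for another smoother

    area_between = np.trapz(np.abs(curve_a - curve_b), exposure)
    avg_area_under = 0.5 * (np.trapz(curve_a, exposure) + np.trapz(curve_b, exposure))
    standardised_difference = area_between / avg_area_under
    print(f"standardised difference: {standardised_difference:.3f}")
    ```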

  18. Surrogate based approaches to parameter inference in ocean models

    KAUST Repository

    Knio, Omar

    2016-01-06

    This talk discusses the inference of physical parameters using model surrogates. Attention is focused on the use of sampling schemes to build suitable representations of the dependence of the model response on uncertain input data. Non-intrusive spectral projections and regularized regressions are used for this purpose. A Bayesian inference formalism is then applied to update the uncertain inputs based on available measurements or observations. To perform the update, we consider two alternative approaches, based on the application of Markov Chain Monte Carlo methods or of adjoint-based optimization techniques. We outline the implementation of these techniques to infer dependence of wind drag, bottom drag, and internal mixing coefficients.

  19. Multi-factor models and signal processing techniques application to quantitative finance

    CERN Document Server

    Darolles, Serges; Jay, Emmanuelle

    2013-01-01

    With recent outbreaks of multiple large-scale financial crises, amplified by interconnected risk sources, a new paradigm of fund management has emerged. This new paradigm leverages "embedded" quantitative processes and methods to provide more transparent, adaptive, reliable and easily implemented "risk assessment-based" practices.This book surveys the most widely used factor models employed within the field of financial asset pricing. Through the concrete application of evaluating risks in the hedge fund industry, the authors demonstrate that signal processing techniques are an intere

  20. Shape modeling technique KOALA validated by ESA Rosetta at (21) Lutetia

    Science.gov (United States)

    Carry, B.; Kaasalainen, M.; Merline, W. J.; Müller, T. G.; Jorda, L.; Drummond, J. D.; Berthier, J.; O'Rourke, L.; Ďurech, J.; Küppers, M.; Conrad, A.; Tamblyn, P.; Dumas, C.; Sierks, H.; Osiris Team (M. A'Hearn, F. Angrilli, C. Barbieri, A. Barucci, J.-L. Bertaux, G.Cremonese, V. Da Deppo, B. Davidsson, S. Debei, M. De Cecco, S. Fornasier, M. Fulle, O. Groussin, P. Gutiérrez, W.-H. Ip, S. Hviid, H.U. Keller, D. Koschny, J. Knollenberg, J.R. Kramm, E. Kuehrt, P. Lamy, L.M. Lara, M. Lazzarin, J.J. López-Moreno, F. Marzari, H. Michalik, G. Naletto, H. Rickman, R. Rodrigo, L. Sabau, N. Thomas, K.-P. Wenzel.)

    2012-06-01

    We present here a comparison of our results from ground-based observations of asteroid (21) Lutetia with imaging data acquired during the flyby of the asteroid by the ESA Rosetta mission. This flyby provided a unique opportunity to evaluate and calibrate our method of determination of size, 3-D shape, and spin of an asteroid from ground-based observations. Knowledge of certain observable physical properties of small bodies (e.g., size, spin, 3-D shape, and density) have far-reaching implications in furthering our understanding of these objects, such as composition, internal structure, and the effects of non-gravitational forces. We review the different observing techniques used to determine the above physical properties of asteroids and present our 3-D shape-modeling technique KOALA - Knitted Occultation, Adaptive-optics, and Lightcurve Analysis - which is based on multi-dataset inversion. We compare the results we obtained with KOALA, prior to the flyby, on asteroid (21) Lutetia with the high-spatial resolution images of the asteroid taken with the OSIRIS camera on-board the ESA Rosetta spacecraft, during its encounter with Lutetia on 2010 July 10. The spin axis determined with KOALA was found to be accurate to within 2°, while the KOALA diameter determinations were within 2% of the Rosetta-derived values. The 3-D shape of the KOALA model is also confirmed by the spectacular visual agreement between both 3-D shape models (KOALA pre- and OSIRIS post-flyby). We found a typical deviation of only 2 km at local scales between the profiles from KOALA predictions and OSIRIS images, resulting in a volume uncertainty provided by KOALA better than 10%. Radiometric techniques for the interpretation of thermal infrared data also benefit greatly from the KOALA shape model: the absolute size and geometric albedo can be derived with high accuracy, and thermal properties, for example the thermal inertia, can be determined unambiguously. The corresponding Lutetia analysis leads

  1. Wide-area Power System Oscillation Damping using Model Predictive Control Technique

    Science.gov (United States)

    Mohamed, Tarek Hassan; Abdel-Rahim, Abdel-Moamen Mohammed; Hassan, Ahmed Abd-Eltawwab; Hiyama, Takashi

    This paper presents a new approach to the robust tuning of power system stabilizers (PSS) and automatic voltage regulators (AVR) in multi-machine power systems. The proposed method is based on a model predictive control (MPC) technique for improving the stability of a wide-area power system with multiple generators and distribution systems including dispersed generation. The proposed method provides better damping of power system oscillations under small and large disturbances, even with the inclusion of local PSSs. The effectiveness of the proposed approach is demonstrated on a two-area, four-machine power system. A performance comparison between the proposed controller and several other controllers is carried out, confirming the superiority of the proposed technique. It has also been observed that the proposed algorithm can be successfully applied to larger multi-area power systems and does not suffer from computational difficulties. The proposed algorithm is implemented using the MATLAB/SIMULINK software package.

  2. Identifying content-based and relational techniques to change behaviour in motivational interviewing.

    Science.gov (United States)

    Hardcastle, Sarah J; Fortier, Michelle; Blake, Nicola; Hagger, Martin S

    2017-03-01

    Motivational interviewing (MI) is a complex intervention comprising multiple techniques aimed at changing health-related motivation and behaviour. However, MI techniques have not been systematically isolated and classified. This study aimed to identify the techniques unique to MI, classify them as content-related or relational, and evaluate the extent to which they overlap with techniques from the behaviour change technique taxonomy version 1 [BCTTv1; Michie, S., Richardson, M., Johnston, M., Abraham, C., Francis, J., Hardeman, W., … Wood, C. E. (2013). The behavior change technique taxonomy (v1) of 93 hierarchically clustered techniques: Building an international consensus for the reporting of behavior change interventions. Annals of Behavioral Medicine, 46, 81-95]. Behaviour change experts (n = 3) content-analysed MI techniques based on Miller and Rollnick's [(2013). Motivational interviewing: Preparing people for change (3rd ed.). New York: Guildford Press] conceptualisation. Each technique was then coded for independence and uniqueness by independent experts (n = 10). The experts also compared each MI technique to those from the BCTTv1. Experts identified 38 distinct MI techniques with high agreement on clarity, uniqueness, preciseness, and distinctiveness ratings. Of the identified techniques, 16 were classified as relational techniques. The remaining 22 techniques were classified as content based. Sixteen of the MI techniques were identified as having substantial overlap with techniques from the BCTTv1. The isolation and classification of MI techniques will provide researchers with the necessary tools to clearly specify MI interventions and test the main and interactive effects of the techniques on health behaviour. The distinction between relational and content-based techniques within MI is also an important advance, recognising that changes in motivation and behaviour in MI is a function of both intervention content and the interpersonal style

  3. Reduction of thermal models of buildings: improvement of techniques using meteorological influence models; Reduction de modeles thermiques de batiments: amelioration des techniques par modelisation des sollicitations meteorologiques

    Energy Technology Data Exchange (ETDEWEB)

    Dautin, S.

    1997-04-01

    This work concerns the modeling of thermal phenomena inside buildings for the evaluation of energy exploitation costs of thermal installations and for the modeling of thermal and aeraulic transient phenomena. This thesis comprises 7 chapters dealing with: (1) the thermal phenomena inside buildings and the CLIM2000 calculation code, (2) the ETNA and GENEC experimental cells and their modeling, (3) the model reduction techniques tested (Marshall's truncation, Michailesco's aggregation method and Moore's truncation) with their algorithms and their encoding in the MATRED software, (4) the application of model reduction methods to the GENEC and ETNA cells and to a medium-size dual-zone building, (5) the modeling of meteorological influences classically applied to buildings (external temperature and solar flux), (6) the analytical expression of these modeled meteorological influences. The last chapter presents the results of these improved methods on the GENEC and ETNA cells and on a lower-inertia building. These new methods are compared to classical methods. (J.S.) 69 refs.

  4. Agent-based modeling in ecological economics.

    Science.gov (United States)

    Heckbert, Scott; Baynes, Tim; Reeson, Andrew

    2010-01-01

    Interconnected social and environmental systems are the domain of ecological economics, and models can be used to explore feedbacks and adaptations inherent in these systems. Agent-based modeling (ABM) represents autonomous entities, each with dynamic behavior and heterogeneous characteristics. Agents interact with each other and their environment, resulting in emergent outcomes at the macroscale that can be used to quantitatively analyze complex systems. ABM is contributing to research questions in ecological economics in the areas of natural resource management and land-use change, urban systems modeling, market dynamics, changes in consumer attitudes, innovation, and diffusion of technology and management practices, commons dilemmas and self-governance, and psychological aspects to human decision making and behavior change. Frontiers for ABM research in ecological economics involve advancing the empirical calibration and validation of models through mixed methods, including surveys, interviews, participatory modeling, and, notably, experimental economics to test specific decision-making hypotheses. Linking ABM with other modeling techniques at the level of emergent properties will further advance efforts to understand dynamics of social-environmental systems.

  5. Application of Neural Networks Technique in depositional environment interpretation for the Niger Delta a Novel computer-Based methodology for 3-D reservoir geological modelling and exploration studies. (The pilot application in X-Field, Niger Delta)

    International Nuclear Information System (INIS)

    Iloghalu, E.M.

    2002-01-01

    An artificial neural network is a virtual intelligence tool that mimics the human brain to perform analysis and produce results. Its application in petroleum engineering is very recent, is gradually evolving, and is set to dominate or take over other analytical tools used in the Exploration and Production industry. There are two types of neural network, namely unsupervised and supervised neural networks; a proper combination of the two produces high-resolution results. In this work, interpreted core data were depth-matched to well logs and 5 genetic units were calibrated to define the combined log responses for each genetic unit. These combined log responses were then used to train the supervised neural networks to recognise and interpret these units elsewhere in the field. Thereafter, the unsupervised neural network was run to generate classes within the cored interval. The results were then compared with the supervised network output and extrapolated vertically and laterally to other parts of the field. This technique, having been used successfully to perform automatic interpretation of genetic units and lithofacies associations at reservoir scale, is also very useful and applicable in exploration. Specific reservoirs or stratigraphic units can be automatically interpreted across a wide area using well data controlled by one or a combination of lithostratigraphy, allostratigraphy, sequence stratigraphy and biostratigraphy. Using this technique, well-data cost and time are saved tremendously. It is the key to achieving computerised basin-scale reservoir characterisation for the Niger Delta

  6. Optimization models and techniques for implementation and pricing of electricity markets

    Science.gov (United States)

    Madrigal Martinez, Marcelino

    Vertically integrated electric power systems extensively use optimization models and solution techniques to guide their optimal operation and planning. The advent of electric power system restructuring has created the need for new optimization tools and for the revision of those inherited from the vertical-integration era to fit the market environment. This thesis presents further developments in the use of optimization models and techniques for the implementation and pricing of primary electricity markets. New models, solution approaches, and price-setting alternatives are proposed. Three different modeling groups are studied. The first modeling group considers simplified continuous and discrete models for power pool auctions driven by central-cost minimization. The direct solution of the dual problems, and the use of a Branch-and-Bound algorithm to solve the primal, allow the effects of disequilibrium, and of different price-setting alternatives, on the existence of multiple solutions to be identified. It is shown that particular pricing rules worsen the conflict of interest that arises when multiple solutions exist under disequilibrium. A price-setting alternative based on dual variables is shown to diminish such conflict. The second modeling group considers the unit commitment problem. An interior-point/cutting-plane method is proposed for the solution of the dual problem. The new method has better convergence characteristics and does not suffer from the parameter-tuning drawback of previous methods. The robustness characteristics of the interior-point/cutting-plane method, combined with a non-uniform price-setting alternative, show that the conflict of interest is diminished when multiple near-optimal solutions exist. The non-uniform price-setting alternative is compared to a classic average pricing rule. The last modeling group concerns a new type of linear network-constrained clearing-system models for daily markets for power and spinning reserve. A new model and

  7. Intelligent Transportation and Evacuation Planning A Modeling-Based Approach

    CERN Document Server

    Naser, Arab

    2012-01-01

    Intelligent Transportation and Evacuation Planning: A Modeling-Based Approach provides a new paradigm for evacuation planning strategies and techniques. Recently, evacuation planning and modeling have increasingly attracted interest among researchers as well as government officials. This interest stems from the recent catastrophic hurricanes and weather-related events that occurred in the southeastern United States (Hurricane Katrina and Rita). The evacuation methods that were in place before and during the hurricanes did not work well and resulted in thousands of deaths. This book offers insights into the methods and techniques that allow for implementing mathematical-based, simulation-based, and integrated optimization and simulation-based engineering approaches for evacuation planning. This book also: Comprehensively discusses the application of mathematical models for evacuation and intelligent transportation modeling Covers advanced methodologies in evacuation modeling and planning Discusses principles a...

  8. Techniques based on genetic algorithms for large deflection ...

    Indian Academy of Sciences (India)

    A couple of non-convex search strategies, based on the genetic algorithm, are suggested and numerically explored in the context of large-deflection analysis of planar, elastic beams. The first of these strategies is based on the stationarity of the energy functional in the equilibrium state and may therefore be considered ...

  9. Wavelet packet transform-based robust video watermarking technique

    Indian Academy of Sciences (India)

    In this paper, a wavelet packet transform (WPT)-based robust video watermarking algorithm is proposed. A visible meaningful binary image is used as the watermark. First, sequent frames are extracted from the video clip. Then, WPT is applied on each frame and from each orientation one sub-band is selected based on ...

  10. Predicting Plywood Properties with Wood-based Composite Models

    Science.gov (United States)

    Christopher Adam Senalik; Robert J. Ross

    2015-01-01

    Previous research revealed that stress wave nondestructive testing techniques could be used to evaluate the tensile and flexural properties of wood-based composite materials. Regression models were developed that related stress wave transmission characteristics (velocity and attenuation) to modulus of elasticity and strength. The developed regression models accounted...

  11. Activity based costing model for inventory valuation

    Directory of Open Access Journals (Sweden)

    Vineet Chouhan

    2017-03-01

    Full Text Available Activity-Based Costing (ABC) is used to significantly improve overhead accounting systems by providing the best information required for managerial decisions. This paper discusses the applicability of the ABC technique to inventory valuation as a management accounting innovation. In order to prove the applicability of ABC for inventory control, a material-driven, medium-sized and privately owned company from the engineering (iron and steel) industry is selected, and by analysis of its production process, its material dependency and its use of indirect inventory, an ABC model is explored for better inventory control. The case revealed that the necessity of ABC in the area of inventory control is significant. The company is not only able to increase the quality of its decisions but can also analyze in detail its direct material cost and the valuation of direct material, and use the implications for better decision making.
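
    A small worked sketch of the ABC allocation logic applied to inventory-related overheads follows: costs are pooled by activity, converted into driver rates, and assigned to an item according to its driver consumption. All figures are invented for illustration and are not taken from the case company.

    ```python
    # Worked ABC allocation example with illustrative cost pools and drivers.
    activity_costs = {"receiving": 12000.0, "storage": 30000.0, "picking": 18000.0}
    driver_totals = {"receiving": 400,      # number of deliveries
                     "storage": 1500,       # pallet-months
                     "picking": 9000}       # picking orders

    # Rate per driver unit for each activity
    rates = {a: activity_costs[a] / driver_totals[a] for a in activity_costs}

    # Driver consumption of one inventory item
    item_drivers = {"receiving": 12, "storage": 40, "picking": 310}
    item_overhead = sum(rates[a] * item_drivers[a] for a in rates)

    print("activity rates:", {a: round(r, 2) for a, r in rates.items()})
    print("overhead assigned to the item:", round(item_overhead, 2))
    ```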

  12. Multiparous Ewe as a Model for Teaching Vaginal Hysterectomy Techniques.

    Science.gov (United States)

    Kerbage, Yohan; Cosson, Michel; Hubert, Thomas; Giraudet, Géraldine

    2017-12-01

    Despite being linked to improving patient outcomes and limiting costs, the use of vaginal hysterectomy is on the wane. Although a combination of reasons might explain this trend, one cause is a lack of practical training. An appropriate teaching model must therefore be devised. Currently, only low-fidelity simulators exist. Ewes provide an appropriate model for pelvic anatomy and are well-suited for testing vaginal mesh properties. This article sets out a vaginal hysterectomy procedure for use as an education and training model. A multiparous ewe was the model. Surgery was performed under general anesthesia. The ewe was in a lithotomy position resembling that assumed by women on the operating table. Two vaginal hysterectomies were performed on two ewes, following every step precisely as if the model were human. Each surgical step of vaginal hysterectomy performed on the ewe and on a woman were compared side by side. We identified that all surgical steps were particularly similar. The main limitations of this model are costs ($500/procedure), logistic problems (housing large animals), and public opposition to animal training models. The ewe appears to be an appropriate model for teaching and training of vaginal hysterectomy.

  13. Model–Based Techniques for Virtual Sensing of Longitudinal Flight Parameters

    Directory of Open Access Journals (Sweden)

    Seren Cédric

    2015-03-01

    Full Text Available Introduction of fly-by-wire and increasing levels of automation significantly improve the safety of civil aircraft, and result in advanced capabilities for detecting, protecting and optimizing A/C guidance and control. However, this higher complexity requires the availability of some key flight parameters to be extended. Hence, the monitoring and consolidation of those signals is a significant issue, usually achieved via many functionally redundant sensors to extend the way those parameters are measured. This solution penalizes the overall system performance in terms of weight, maintenance, and so on. Other alternatives rely on signal processing or model-based techniques that make a global use of all or part of the sensor data available, supplemented by a model-based simulation of the flight mechanics. That processing achieves real-time estimates of the critical parameters and yields dissimilar signals. Filtered and consolidated information is delivered in unfaulty conditions by estimating an extended state vector, including wind components, and can replace failed signals in degraded conditions. Accordingly, this paper describes two model-based approaches allowing the longitudinal flight parameters of a civil A/C to be estimated on-line. Results are displayed to evaluate the performances in different simulated and real flight conditions, including realistic external disturbances and modeling errors.

  14. CROWDSOURCING BASED 3D MODELING

    Directory of Open Access Journals (Sweden)

    A. Somogyi

    2016-06-01

    Full Text Available Web-based photo albums that support organizing and viewing the users’ images are widely used. These services provide a convenient solution for storing, editing and sharing images. In many cases, the users attach geotags to the images in order to enable using them e.g. in location based applications on social networks. Our paper discusses a procedure that collects open access images from a site frequently visited by tourists. Geotagged pictures showing the image of a sight or tourist attraction are selected and processed in photogrammetric processing software that produces the 3D model of the captured object. For the particular investigation we selected three attractions in Budapest. To assess the geometrical accuracy, we used laser scanner and DSLR as well as smart phone photography to derive reference values to enable verifying the spatial model obtained from the web-album images. The investigation shows how detailed and accurate models could be derived applying photogrammetric processing software, simply by using images of the community, without visiting the site.

  15. Fault Tree Analysis with Temporal Gates and Model Checking Technique for Qualitative System Safety Analysis

    International Nuclear Information System (INIS)

    Koh, Kwang Yong; Seong, Poong Hyun

    2010-01-01

    Although fault tree analysis (FTA) has been one of the most widely used safety analysis techniques in the nuclear industry, it suffers from several drawbacks: it uses only static gates and hence cannot capture the dynamic behavior of complex systems precisely; it lacks rigorous semantics; and the reasoning process that checks whether basic events really cause top events is carried out manually, which is labor-intensive and time-consuming for complex systems. Although several attempts have been made to overcome these problems, they still cannot model absolute (actual) time, because they adopt a relative time concept and can capture only the sequential behaviors of the system. In this work, to resolve these problems, FTA and model checking are integrated to provide formal, automated and qualitative assistance to informal and/or quantitative safety analysis. Our approach proposes to build a formal model of the system together with the fault trees. We introduce several temporal gates based on timed computational tree logic (TCTL) to capture the absolute-time behavior of the system and to give concrete semantics to fault tree gates, reducing errors during the analysis, and we use a model checking technique to automate the reasoning process of FTA

  16. Viable Techniques, Leontief’s Closed Model, and Sraffa’s Subsistence Economies

    Directory of Open Access Journals (Sweden)

    Alberto Benítez

    2014-11-01

    Full Text Available This paper studies the production techniques employed in economies that reproduce themselves. Special attention is paid to the distinction usually made between those that do not produce a surplus and those that do, which are referred to as first and second class economies, respectively. Based on this, we present a new definition of viable economies and show that every viable economy of the second class can be represented as a viable economy of the first class under two different forms, Leontief‘s closed model and Sraffa’s subsistence economies. This allows us to present some remarks concerning the economic interpretation of the two models. On the one hand, we argue that the participation of each good in the production of every good can be considered as a normal characteristic of the first model and, on the other hand, we provide a justification for the same condition to be considered a characteristic of the second model. Furthermore, we discuss three definitions of viable techniques advanced by other authors and show that they differ from ours because they admit economies that do not reproduce themselves completely.
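
    As a hedged numerical illustration in the spirit of the distinction above (and not the paper's own definition of viability), one common check classifies an input-coefficient matrix by its dominant Perron-Frobenius eigenvalue: roughly, an eigenvalue equal to 1 corresponds to an economy that merely reproduces itself, and an eigenvalue below 1 to one that can produce a surplus. The matrices are made-up examples.

    ```python
    # Classify toy input-coefficient matrices by their dominant eigenvalue.
    import numpy as np

    A_subsistence = np.array([[0.5, 0.5],     # columns: inputs required per unit of output
                              [0.5, 0.5]])
    A_surplus = np.array([[0.3, 0.2],
                          [0.2, 0.4]])

    for name, A in [("subsistence", A_subsistence), ("surplus", A_surplus)]:
        lam = max(abs(np.linalg.eigvals(A)))
        label = ("(no surplus)" if np.isclose(lam, 1.0)
                 else "(surplus possible)" if lam < 1.0 else "(not viable)")
        print(f"{name}: dominant eigenvalue = {lam:.3f} {label}")
    ```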

  17. Néron models and base change

    CERN Document Server

    Halle, Lars Halvard

    2016-01-01

    Presenting the first systematic treatment of the behavior of Néron models under ramified base change, this book can be read as an introduction to various subtle invariants and constructions related to Néron models of semi-abelian varieties, motivated by concrete research problems and complemented with explicit examples. Néron models of abelian and semi-abelian varieties have become an indispensable tool in algebraic and arithmetic geometry since Néron introduced them in his seminal 1964 paper. Applications range from the theory of heights in Diophantine geometry to Hodge theory. We focus specifically on Néron component groups, Edixhoven’s filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions of abelian varieties. The final chapter contains a list of challenging open questions. This book is a...

  18. Household water use and conservation models using Monte Carlo techniques

    Directory of Open Access Journals (Sweden)

    R. Cahill

    2013-10-01

    Full Text Available The increased availability of end use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. This model represents existing water use conditions in 2010 and is calibrated to 2006–2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests clothes washer rebates are among most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings from indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate cost-effectiveness of water conservation programs.
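
    A scaled-down sketch of the Monte Carlo end-use approach follows, assuming per-household end uses drawn from simple Poisson, lognormal and normal distributions; the distribution shapes and parameters are placeholders, not the calibrated San Ramon values.

    ```python
    # Monte Carlo sketch: sample per-household end-use parameters and sum to a daily demand.
    import numpy as np

    rng = np.random.default_rng(42)
    n_households = 1000

    occupants = rng.poisson(lam=2.6, size=n_households) + 1          # people per household
    shower_lpcd = rng.lognormal(mean=np.log(45), sigma=0.35, size=n_households)
    toilet_lpcd = rng.normal(loc=30, scale=6, size=n_households).clip(min=5)
    washer_lpcd = rng.lognormal(mean=np.log(25), sigma=0.4, size=n_households)
    faucet_lpcd = rng.normal(loc=20, scale=5, size=n_households).clip(min=2)

    # Litres per household per day, summed over end uses and occupants
    indoor_lpd = occupants * (shower_lpcd + toilet_lpcd + washer_lpcd + faucet_lpcd)

    print(f"mean indoor use: {indoor_lpd.mean():.0f} L/household/day")
    print("10th-90th percentile:", np.percentile(indoor_lpd, [10, 90]).round(0))
    ```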

  19. A measurement-based technique for incipient anomaly detection

    KAUST Repository

    Harrou, Fouzi

    2016-06-13

    Fault detection is essential for safe operation of various engineering systems. Principal component analysis (PCA) has been widely used in monitoring highly correlated process variables. Conventional PCA-based methods, nevertheless, often fail to detect small or incipient faults. In this paper, we develop new PCA-based monitoring charts, combining PCA with multivariate memory control charts, such as the multivariate cumulative sum (MCUSUM) and multivariate exponentially weighted moving average (MEWMA) monitoring schemes. The multivariate control charts with memory are sensitive to small and moderate faults in the process mean, which significantly improves the performance of PCA methods and widen their applicability in practice. Using simulated data, we demonstrate that the proposed PCA-based MEWMA and MCUSUM control charts are more effective in detecting small shifts in the mean of the multivariate process variables, and outperform the conventional PCA-based monitoring charts. © 2015 IEEE.
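
    A minimal sketch of coupling PCA with a multivariate EWMA statistic, in the spirit of the charts described above, is given below; the process data, smoothing parameter and implied control limit are synthetic placeholders rather than a validated monitoring design.

    ```python
    # PCA scores monitored with a multivariate EWMA (MEWMA-style) statistic.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(7)
    train = rng.normal(size=(500, 6))             # in-control training data
    test = rng.normal(size=(200, 6))
    test[100:] += 0.4                             # small mean shift halfway through

    pca = PCA(n_components=3).fit(train)
    scores_train = pca.transform(train)
    scores_test = pca.transform(test)
    cov_inv = np.linalg.inv(np.cov(scores_train, rowvar=False))

    lam = 0.2                                     # EWMA smoothing parameter
    z = np.zeros(scores_test.shape[1])
    t2_ewma = []
    for x in scores_test:
        z = lam * x + (1 - lam) * z               # multivariate EWMA recursion
        # T^2 of the smoothed vector; lam/(2-lam) scales the asymptotic covariance
        t2_ewma.append(z @ cov_inv @ z / (lam / (2 - lam)))

    print("mean statistic before/after shift:",
          round(np.mean(t2_ewma[:100]), 2), round(np.mean(t2_ewma[100:]), 2))
    ```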

  20. Chebyshev-based technique for automated restoration of digital copies of faded photographic prints

    Science.gov (United States)

    Uchaev, Dmitry V.; Uchaev, Denis V.; Malinnikov, Vasiliy A.

    2017-01-01

    We present a technique for the automated restoration of digital images obtained from faded photographic prints. The proposed defading technique uses our earlier proposed image contrast enhancement algorithm, based on a contrast measure of images in the Chebyshev moment transform domain. The experimental results demonstrate some advantages of the technique compared to other widely used image enhancement methods.