WorldWideScience

Sample records for modeling technique based

  1. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
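
    The record above describes the procedure only in words; a minimal sketch of the underlying resampling idea is given below, using a deliberately misspecified linear fit on invented data. Residuals are cumulated over a covariate and the observed process is compared with zero-mean realizations obtained by multiplying the residuals with standard normal variates; this is a simplified stand-in for the authors' Gaussian-process approximation, not their exact estimator.

```python
# Simplified sketch of a cumulative-residual model check (not the authors' exact
# procedure): residuals of a fitted linear model are cumulated over a covariate,
# and the observed process is compared with zero-mean realizations obtained by
# multiplying the residuals with standard normal variates.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: true relationship is quadratic, fitted working model is linear.
n = 200
x = rng.uniform(0, 1, n)
y = 1.0 + 2.0 * x ** 2 + rng.normal(scale=0.2, size=n)

X = np.column_stack([np.ones(n), x])          # linear working model
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

order = np.argsort(x)                         # cumulate residuals over the covariate x
W_obs = np.cumsum(resid[order])

# Null realizations: perturb the residuals with N(0,1) multipliers.
n_sim = 1000
W_sim = np.cumsum(resid[order] * rng.normal(size=(n_sim, n)), axis=1)

# Supremum statistic and its Monte-Carlo p-value.
stat_obs = np.max(np.abs(W_obs))
stat_sim = np.max(np.abs(W_sim), axis=1)
p_value = np.mean(stat_sim >= stat_obs)
print(f"sup|W| = {stat_obs:.2f}, p ~ {p_value:.3f}")   # a small p hints at misspecification
```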

  2. Crop Yield Forecasted Model Based on Time Series Techniques

    Institute of Scientific and Technical Information of China (English)

    Li Hong-ying; Hou Yan-lin; Zhou Yong-juan; Zhao Hui-ming

    2012-01-01

    Traditional studies on potential yield mainly referred to attainable yield: the maximum yield which could be reached by a crop in a given environment. A new concept of crop yield under average climate conditions, which is affected by the advancement of science and technology, was defined in this paper. Based on this new concept of crop yield, time series techniques relying on past yield data were employed to set up a forecasting model. The model was tested using average grain yields of Liaoning Province in China from 1949 to 2005. The testing combined dynamic n-choosing and micro tendency rectification, and the average forecasting error was 1.24%. Because a turning point may occur in the trend line of yield change, an inflexion model was used to handle such yield turning points.

  3. A formal model for integrity protection based on DTE technique

    Institute of Scientific and Technical Information of China (English)

    JI Qingguang; QING Sihan; HE Yeping

    2006-01-01

    In order to provide integrity protection for the secure operating system to satisfy the structured protection class' requirements, a DTE technique based integrity protection formalization model is proposed after the implications and structures of the integrity policy have been analyzed in detail. This model consists of some basic rules for configuring DTE and a state transition model, which are used to instruct how the domains and types are set, and how security invariants obtained from the initial configuration are maintained in the process of system transition, respectively. In this model, ten invariants are introduced; in particular, some new invariants dealing with information flow are proposed, and their relations with corresponding invariants described in the literature are also discussed. Thirteen transition rules with well-formed atomicity are presented in an operational manner. The basic security theorems corresponding to these invariants and transition rules are proved. The rationale for proposing the invariants is further annotated by analyzing the differences between this model and those described in the literature. Finally, future work is outlined; in particular, it is pointed out that it is possible to use this model to analyze SE-Linux security.

  4. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and exhibit dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturizing world phenomena within the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. However, a key challenge of ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, finding appropriate validation techniques for ABM is necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  5. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and exhibit dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturizing world phenomena within the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. However, a key challenge of ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, finding appropriate validation techniques for ABM is necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  6. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Demand management (DM) is the process that helps companies to sell the right product to the right customer, at the right time, and for the right price. The challenge for any company is therefore to determine how much to sell, at what price, and to which market segment while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a proper dynamical system analogy based on active suspension, and a stability analysis is provided via the Lyapunov direct method.
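
    To make the control-system view concrete, the sketch below implements a generic unconstrained linear MPC loop on a small, invented two-state system: a finite-horizon quadratic cost is minimized in closed form at every step and only the first input is applied. It illustrates the receding-horizon mechanism only; the paper's demand/pricing dynamics, constraints and Lyapunov analysis are not reproduced.

```python
# Minimal unconstrained linear MPC sketch (illustrative, not the paper's
# demand-management model): batch prediction matrices turn the finite-horizon
# quadratic cost into a least-squares problem solved in closed form.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # hypothetical discrete-time dynamics
B = np.array([[0.0], [0.1]])
Q = np.diag([1.0, 0.1])                  # state weight
R = np.array([[0.01]])                   # input weight
N = 20                                   # prediction horizon
nx, nu = B.shape

# Batch prediction matrices: X = Phi x0 + Gamma U over the horizon.
Phi = np.vstack([np.linalg.matrix_power(A, i + 1) for i in range(N)])
Gamma = np.zeros((N * nx, N * nu))
for i in range(N):
    for j in range(i + 1):
        Gamma[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(A, i - j) @ B

Qbar = np.kron(np.eye(N), Q)
Rbar = np.kron(np.eye(N), R)
x = np.array([1.0, 0.0])                 # initial state
x_ref = np.zeros(N * nx)                 # regulate to the origin

for k in range(50):                      # receding-horizon loop
    H = Gamma.T @ Qbar @ Gamma + Rbar
    f = Gamma.T @ Qbar @ (Phi @ x - x_ref)
    U = np.linalg.solve(H, -f)           # closed-form optimum (no constraints)
    u = U[:nu]                           # apply only the first move
    x = A @ x + B @ u
print("final state:", x)
```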

  7. NVC Based Model for Selecting Effective Requirement Elicitation Technique

    Directory of Open Access Journals (Sweden)

    Md. Rizwan Beg

    2012-10-01

    The requirement engineering process starts with the gathering of requirements, i.e. requirements elicitation. Requirements elicitation (RE) is the basic building block for a software project and also has a very high impact on the subsequent design and build phases. Failure to accurately capture system requirements is a major factor in the failure of most software projects. Due to the criticality and impact of this phase, it is very important to perform requirements elicitation as close to perfectly as possible. One of the most difficult jobs for the elicitor is to select an appropriate technique for eliciting the requirements. Interviewing and interacting with stakeholders during elicitation is a communication-intensive activity involving verbal and non-verbal communication (NVC). The elicitor should give emphasis to non-verbal communication along with verbal communication so that requirements are recorded more efficiently and effectively. In this paper we propose a model in which stakeholders are classified by observing non-verbal communication, and this classification is used as a basis for elicitation technique selection. We also propose an efficient plan for requirements elicitation which intends to overcome the constraints faced by the elicitor.

  8. Non-linear control logics for vibrations suppression: a comparison between model-based and non-model-based techniques

    Science.gov (United States)

    Ripamonti, Francesco; Orsini, Lorenzo; Resta, Ferruccio

    2015-04-01

    Non-linear behavior is present in many mechanical system operating conditions. In these cases, a common engineering practice is to linearize the equation of motion around a particular operating point and to design a linear controller. The main disadvantage is that the stability properties and validity of the controller are only local. In order to improve controller performance, non-linear control techniques represent a very attractive solution for many smart structures. The aim of this paper is to compare non-linear model-based and non-model-based control techniques. In particular, the model-based sliding-mode control (SMC) technique is considered because of its easy implementation and the strong robustness of the controller even under heavy model uncertainties. Among the non-model-based control techniques, fuzzy control (FC), which allows the controller to be designed according to if-then rules, has been considered. It defines the controller without a system reference model, offering many advantages such as intrinsic robustness. These techniques have been tested on a non-linear pendulum system.
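
    As an illustration of the model-based branch of the comparison, the sketch below applies a textbook sliding-mode controller to a damped pendulum with invented parameters; a tanh switching term is used to limit chattering. It is not the authors' experimental setup, and the fuzzy counterpart is not shown.

```python
# Sketch of a sliding-mode controller regulating a damped pendulum to a set angle.
import numpy as np

g, l, m, b = 9.81, 1.0, 1.0, 0.1      # gravity, length, mass, viscous damping (illustrative)
lam, k = 4.0, 8.0                     # sliding-surface slope and switching gain
theta_d = np.deg2rad(30.0)            # desired angle
dt, T = 1e-3, 5.0

theta, omega = 0.0, 0.0
for _ in range(int(T / dt)):
    e, edot = theta - theta_d, omega
    s = edot + lam * e                                   # sliding variable
    # Equivalent control + smoothed switching term (tanh limits chattering).
    u = m * l**2 * (g / l * np.sin(theta) - lam * edot - k * np.tanh(s / 0.05)) + b * omega
    # Plant: m l^2 theta_dd = u - b omega - m g l sin(theta)
    alpha = (u - b * omega - m * g * l * np.sin(theta)) / (m * l**2)
    omega += alpha * dt
    theta += omega * dt
print("final angle error [deg]:", np.degrees(theta - theta_d))
```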

  9. A review of propeller modelling techniques based on Euler methods

    NARCIS (Netherlands)

    Zondervan, G.J.D.

    1998-01-01

    Future generation civil aircraft will be powered by new, highly efficient propeller propulsion systems. New, advanced design tools like Euler methods will be needed in the design process of these aircraft. This report describes the application of Euler methods to the modelling of flowfields generate

  10. Ionospheric scintillation forecasting model based on NN-PSO technique

    Science.gov (United States)

    Sridhar, M.; Venkata Ratnam, D.; Padma Raju, K.; Sai Praharsha, D.; Saathvika, K.

    2017-09-01

    The forecasting and modeling of ionospheric scintillation effects are crucial for precise satellite positioning and navigation applications. In this paper, a Neural Network model, trained using the Particle Swarm Optimization (PSO) algorithm, has been implemented for the prediction of amplitude scintillation index (S4) observations. The Global Positioning System (GPS) and ionosonde data available at Darwin, Australia (12.4634° S, 130.8456° E) during 2013 have been considered. A correlation analysis between GPS S4 and ionosonde drift velocities (hmF2 and foF2) has been conducted for forecasting the S4 values. The results indicate that the forecasted S4 values closely follow the measured S4 values for both quiet and disturbed conditions. The outcome of this work will be useful for understanding ionospheric scintillation phenomena over low latitude regions.

  11. INTELLIGENT CAR STYLING TECHNIQUE AND SYSTEM BASED ON A NEW AERODYNAMIC-THEORETICAL MODEL

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Car styling technique based on a new theoretical model of automotive aerodynamics is introduced, which is proved to be feasible and effective by wind tunnel tests. Development of a multi-module software system from this technique, including modules of knowledge processing, referential styling and ANN aesthetic evaluation etc, capable of assisting car styling works in an intelligent way, is also presented and discussed.

  12. Internet enabled modelling of extended manufacturing enterprises using the process based techniques

    OpenAIRE

    Cheng, K; Popov, Y

    2004-01-01

    The paper presents the preliminary results of an ongoing research project on Internet enabled process-based modelling of extended manufacturing enterprises. It is proposed to apply the Open System Architecture for CIM (CIMOSA) modelling framework alongside object-oriented Petri Net models of enterprise processes and object-oriented techniques for extended enterprise modelling. The main features of the proposed approach are described and some components discussed. Elementary examples of ...

  13. Antenna pointing system for satellite tracking based on Kalman filtering and model predictive control techniques

    Science.gov (United States)

    Souza, André L. G.; Ishihara, João Y.; Ferreira, Henrique C.; Borges, Renato A.; Borges, Geovany A.

    2016-12-01

    The present work proposes a new approach for an antenna pointing system for satellite tracking. Such a system uses the received signal to estimate the beam pointing deviation and then adjusts the antenna pointing. The present work has two contributions. First, the estimation is performed by a Kalman filter based conical scan technique. This technique uses the Kalman filter to avoid the batch estimator and applies a mathematical manipulation to avoid linearization approximations. Second, a control technique based on model predictive control, together with an explicit state feedback solution, is obtained in order to reduce the computational burden. Numerical examples illustrate the results.
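
    The following sketch shows the Kalman-filter half of the idea on a toy problem: a slowly drifting pointing error is tracked from noisy scalar measurements with a constant-velocity state model. The model, noise levels and measurement geometry are invented; the paper's conical-scan formulation and MPC stage are not reproduced.

```python
# Minimal Kalman-filter sketch: track a drifting pointing error from noisy measurements.
import numpy as np

dt = 0.1
F = np.array([[1, dt], [0, 1]])        # state: [pointing error, error rate]
H = np.array([[1.0, 0.0]])             # only the error itself is measured
Q = 1e-4 * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])   # process noise
R = np.array([[0.05**2]])              # measurement noise variance

rng = np.random.default_rng(1)
x_true = np.array([0.2, -0.01])
x_est = np.zeros(2)
P = np.eye(2)

for _ in range(200):
    x_true = F @ x_true + rng.multivariate_normal(np.zeros(2), Q)   # simulated truth
    z = H @ x_true + rng.normal(scale=0.05, size=1)                 # noisy measurement

    # Predict
    x_est = F @ x_est
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_est = x_est + (K @ (z - H @ x_est)).ravel()
    P = (np.eye(2) - K @ H) @ P

print("estimated pointing error:", x_est[0], " true:", x_true[0])
```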

  14. Communication Analysis modelling techniques

    CERN Document Server

    España, Sergio; Pastor, Óscar; Ruiz, Marcela

    2012-01-01

    This report describes and illustrates several modelling techniques proposed by Communication Analysis; namely Communicative Event Diagram, Message Structures and Event Specification Templates. The Communicative Event Diagram is a business process modelling technique that adopts a communicational perspective by focusing on communicative interactions when describing the organizational work practice, instead of focusing on physical activities; at this abstraction level, we refer to business activities as communicative events. Message Structures is a technique based on structured text that allows specifying the messages associated to communicative events. Event Specification Templates are a means to organise the requirements concerning a communicative event. This report can be useful to analysts and business process modellers in general, since, according to our industrial experience, it is possible to apply many Communication Analysis concepts, guidelines and criteria to other business process modelling notation...

  15. Modeling of PV Systems Based on Inflection Points Technique Considering Reverse Mode

    Directory of Open Access Journals (Sweden)

    Bonie J. Restrepo-Cuestas

    2013-11-01

    This paper proposes a methodology for photovoltaic (PV) system modeling that considers behavior in both direct and reverse operating modes and under mismatching conditions. The proposed methodology is based on the inflection points technique with a linear approximation to model the bypass diode and a simplified PV model. The proposed mathematical model allows the energetic performance of a PV system to be evaluated, exhibiting short simulation times for large PV systems. In addition, this methodology allows the condition of modules affected by partial shading to be estimated, since the power dissipated due to operation in the second quadrant can be determined.
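
    For context, the sketch below evaluates the standard single-diode PV cell model by solving the implicit current equation with Newton's method at each voltage. All parameters are illustrative, and only forward (first-quadrant) operation is covered; the paper's inflection-point approximation, bypass-diode handling and reverse-mode behavior are not reproduced.

```python
# Standard single-diode PV model: solve I = Iph - I0*(exp((V+I*Rs)/(n*Vt)) - 1) - (V+I*Rs)/Rsh
# for the current at each voltage with Newton's method (illustrative parameters).
import numpy as np

Iph, I0, n, Vt = 5.0, 1e-9, 1.3, 0.02585   # photocurrent [A], saturation current [A], ideality, thermal voltage [V]
Rs, Rsh = 0.01, 100.0                       # series and shunt resistances [ohm]

def cell_current(V, iters=50):
    I = Iph                                 # initial guess
    for _ in range(iters):
        f = Iph - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1) - (V + I * Rs) / Rsh - I
        df = -I0 * Rs / (n * Vt) * np.exp((V + I * Rs) / (n * Vt)) - Rs / Rsh - 1
        I -= f / df                         # Newton step on f(I) = 0
    return I

voltages = np.linspace(0, 0.7, 50)
currents = np.array([cell_current(v) for v in voltages])
power = voltages * currents
print("approximate maximum power point: %.3f W at %.3f V"
      % (power.max(), voltages[power.argmax()]))
```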

  16. Modeling and simulation of atmosphere interference signal based on FTIR spectroscopy technique

    Science.gov (United States)

    Zhang, Yugui; Li, Qiang; Yu, Zhengyang; Liu, Zhengmin

    2016-09-01

    Fourier Transform Infrared (FTIR) spectroscopy, featuring a large frequency range and high spectral resolution, is becoming a research focus in the spectrum analysis area and is spreading to atmosphere detection applications in the aerospace field. In this paper, based on the FTIR spectroscopy technique, the principle of atmosphere interference signal generation is derived in theory, and its mathematical model and simulation are carried out. Finally, the intrinsic characteristics of the interference signal in the time domain and frequency domain, which give a theoretical foundation for the performance parameter design of the electrical signal processing, are analyzed.
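
    The generation principle can be illustrated numerically: an interferogram is the integral of the source spectrum weighted by 1 + cos(2πνx) over wavenumber ν at each optical path difference x, and the spectrum can be recovered with a cosine transform. The sketch below does exactly that for an invented two-line spectrum; it is not the paper's atmospheric signal model.

```python
# Simulate an FTIR interferogram from an assumed spectrum and invert it back.
import numpy as np

# Assumed source spectrum B(nu): two Gaussian lines (wavenumber nu in cm^-1).
nu = np.linspace(0.0, 2000.0, 2048)
dnu = nu[1] - nu[0]
B = np.exp(-((nu - 1000.0) / 30.0) ** 2) + 0.6 * np.exp(-((nu - 1600.0) / 50.0) ** 2)

# Interferogram over optical path difference x [cm]: I(x) = sum_nu B(nu)*(1 + cos(2*pi*nu*x))*dnu.
x = np.linspace(0.0, 0.4, 2048)
I = (B[None, :] * (1.0 + np.cos(2.0 * np.pi * nu[None, :] * x[:, None]))).sum(axis=1) * dnu

# Recover the spectrum from the AC part of the interferogram with a cosine transform.
ac = I - I.mean()
nu_probe = nu[::16]
spectrum = np.array([np.trapz(ac * np.cos(2.0 * np.pi * v * x), x) for v in nu_probe])
print("strongest recovered line near %.0f cm^-1 (true: 1000)" % nu_probe[spectrum.argmax()])
```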

  17. A Novel Algorithmic Cost Estimation Model Based on Soft Computing Technique

    Directory of Open Access Journals (Sweden)

    Iman Attarzadeh

    2010-01-01

    Problem statement: Software development effort estimation is the process of predicting the most realistic effort required for developing software based on some parameters. It has been one of the biggest challenges in computer science for decades, because time and cost estimates at the early stages of software development are the most difficult to obtain and are often the least accurate. Traditional algorithmic techniques such as regression models, Software Life Cycle Management (SLIM), the COCOMO II model and function points require a long estimation process, which is nowadays not acceptable for software developers and companies. Newer soft computing approaches to effort estimation based on non-algorithmic techniques such as Fuzzy Logic (FL) may offer an alternative for solving the problem. This work aims to propose a new realistic fuzzy logic model to achieve more accuracy in software effort estimation. The main objective of this research was to investigate the role of the fuzzy logic technique in improving effort estimation accuracy by characterizing input parameters using a two-sided Gaussian function, which gives a superior transition from one interval to another. Approach: The methodology adopted in this study was the use of a fuzzy logic approach rather than classical intervals in COCOMO II. Using the advantages of fuzzy logic, such as fuzzy sets, input parameters can be specified by the distribution of their possible values, and these fuzzy sets are represented by membership functions. In this study, to get a smoother transition in the membership functions for the input parameters, the associated linguistic values were represented by two-sided Gaussian Membership Functions (2-D GMF) and rules. Results: After analyzing the results attained by applying COCOMO II and the proposed fuzzy-logic-based model to the NASA dataset and an artificially created dataset, it had been found that the proposed model was performing
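
    The two-sided Gaussian membership function mentioned in the abstract is easy to state in code; the sketch below mirrors the usual gauss2mf definition (a flat core between two Gaussian shoulders) with invented parameters for a hypothetical COCOMO-style cost driver.

```python
# Two-sided Gaussian membership function (gauss2mf-style), illustrative parameters.
import numpy as np

def gauss2mf(x, c1, sigma1, c2, sigma2):
    """Left shoulder (c1, sigma1), flat core on [c1, c2], right shoulder (c2, sigma2)."""
    x = np.asarray(x, dtype=float)
    y = np.ones_like(x)
    left, right = x < c1, x > c2
    y[left] = np.exp(-0.5 * ((x[left] - c1) / sigma1) ** 2)
    y[right] = np.exp(-0.5 * ((x[right] - c2) / sigma2) ** 2)
    return y

# Example: membership of a hypothetical "nominal" linguistic value of a cost driver.
x = np.linspace(0.5, 1.5, 11)
print(np.round(gauss2mf(x, c1=0.95, sigma1=0.1, c2=1.05, sigma2=0.1), 3))
```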

  18. Chronology of DIC technique based on the fundamental mathematical modeling and dehydration impact.

    Science.gov (United States)

    Alias, Norma; Saipol, Hafizah Farhah Saipan; Ghani, Asnida Che Abd

    2014-12-01

    A chronology of mathematical models for the heat and mass transfer equations is proposed for the prediction of moisture and temperature behavior during drying using the DIC (Détente Instantanée Contrôlée), or instant controlled pressure drop, technique. The DIC technique has potential as a commonly used dehydration method for high-value food, maintaining nutrition and the best possible quality for food storage. The model is governed by a regression model, followed by 2D Fick's and Fourier's parabolic equations and a 2D elliptic-parabolic equation in a rectangular slice. The models neglect shrinkage and radiation effects. Simulations of the heat and mass transfer equations of parabolic and elliptic-parabolic type using numerical methods based on the finite difference method (FDM) are illustrated. Intel® Core™2 Duo processors with the Linux operating system and the C programming language were used as the computational platform for the simulation. Qualitative and quantitative differences between the DIC technique and conventional drying methods are presented as a comparison.
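
    As a stand-in for the Fourier (heat) part of the model, the sketch below integrates the 2D heat equation on a rectangular slice with an explicit finite-difference scheme, written in Python rather than the C platform mentioned above. Geometry, diffusivity and boundary temperatures are illustrative; the coupled mass-transfer (Fick) equation and the elliptic-parabolic model are not included.

```python
# Explicit finite-difference solution of the 2D heat equation on a rectangular slice.
import numpy as np

alpha = 1.4e-7          # thermal diffusivity [m^2/s] (illustrative)
Lx, Ly = 0.02, 0.01     # slice dimensions [m]
nx, ny = 41, 21
dx, dy = Lx / (nx - 1), Ly / (ny - 1)
dt = 0.2 * min(dx, dy) ** 2 / alpha      # well below the explicit stability limit

T = np.full((nx, ny), 20.0)              # initial product temperature [C]
T_surface = 60.0                         # imposed boundary temperature [C]

for _ in range(2000):
    T[0, :] = T[-1, :] = T[:, 0] = T[:, -1] = T_surface   # Dirichlet boundaries
    lap = ((T[2:, 1:-1] - 2 * T[1:-1, 1:-1] + T[:-2, 1:-1]) / dx**2
           + (T[1:-1, 2:] - 2 * T[1:-1, 1:-1] + T[1:-1, :-2]) / dy**2)
    T[1:-1, 1:-1] += alpha * dt * lap     # forward-Euler update of interior nodes

print("centre temperature after %.1f s: %.1f C" % (2000 * dt, T[nx // 2, ny // 2]))
```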

  19. Multiple Model Adaptive Estimation Techniques for Adaptive Model-Based Robot Control

    Science.gov (United States)

    1989-12-01

    Proportional Derivative (PD) or Proportional Integral Derivative (PID) feedback controller [6]. The PD or PID controllers feedback the measured... Unfortunately, as the speed of the trajectory increases or the configuration of the robot changes, the PD or PID controllers cannot maintain track along the... desired trajectory. The main reason for poor tracking is that the PD and PID controllers were developed based on a simplified linear dynamics model
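
    The PD/PID baseline that the report argues against is simple to write down; a minimal discrete PID controller driving a first-order toy plant is sketched below with invented gains, purely to fix ideas (it is not a robot dynamics model).

```python
# Minimal discrete PID controller and a first-order toy plant (illustrative gains).
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: drive a first-order plant y' = (-y + u)/tau towards a unit setpoint.
dt, tau = 0.01, 0.5
pid, y = PID(kp=4.0, ki=2.0, kd=0.1, dt=dt), 0.0
for _ in range(1000):
    u = pid.update(setpoint=1.0, measurement=y)
    y += dt * (-y + u) / tau
print("output after 10 s:", round(y, 3))
```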

  20. Hybrid Model Testing Technique for Deep-Sea Platforms Based on Equivalent Water Depth Truncation

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, an inner turret moored FPSO which operates in water of 320 m depth is selected to study the so-called "passively-truncated + numerical-simulation" type of hybrid model testing technique, with a truncated water depth of 160 m and a model scale of λ=80. During the investigation, the optimization design of the equivalent-depth truncated system is performed by using the similarity of the static characteristics between the truncated system and the full-depth one as the objective function. Based on the truncated system, the corresponding physical test model is built. By adopting the coupled time domain simulation method, the truncated system model test is numerically reconstructed to verify the computer simulation software and to adjust the corresponding hydrodynamic parameters. Based on the above work, the numerical extrapolation to the full-depth system is performed using the verified computer software and the adjusted hydrodynamic parameters. The full-depth system model test is then performed in the basin and the results are compared with those from the numerical extrapolation. Finally, the implementation procedure and the key techniques of hybrid model testing of deep-sea platforms are summarized and presented. Through the above investigations, some beneficial conclusions are drawn.

  1. Surrogate-based modeling and dimension reduction techniques for multi-scale mechanics problems

    Institute of Scientific and Technical Information of China (English)

    Wei Shyy; Young-Chang Cho; Wenbo Du; Amit Gupta; Chien-Chou Tseng; Ann Marie Sastry

    2011-01-01

    Successful modeling and/or design of engineering systems often requires one to address the impact of multiple “design variables” on the prescribed outcome. There are often multiple, competing objectives based on which we assess the outcome of optimization. Since accurate, high fidelity models are typically time consuming and computationally expensive, comprehensive evaluations can be conducted only if an efficient framework is available. Furthermore, informed decisions of the model/hardware's overall performance rely on an adequate understanding of the global, not local, sensitivity of the individual design variables on the objectives. The surrogate-based approach, which involves approximating the objectives as continuous functions of design variables from limited data, offers a rational framework to reduce the number of important input variables, i.e., the dimension of a design or modeling space. In this paper, we review the fundamental issues that arise in surrogate-based analysis and optimization, highlighting concepts, methods, techniques, as well as modeling implications for mechanics problems. To aid the discussions of the issues involved, we summarize recent efforts in investigating cryogenic cavitating flows, active flow control based on dielectric barrier discharge concepts, and lithium (Li)-ion batteries. It is also stressed that many multi-scale mechanics problems can naturally benefit from the surrogate approach for “scale bridging.”
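
    A minimal surrogate-based workflow of the kind reviewed here can be sketched in a few lines: sample a (stand-in) expensive model at a handful of design points, fit a quadratic response surface by least squares, and then evaluate the cheap surrogate across the design space. The objective function and sample sizes below are invented for illustration.

```python
# Quadratic response-surface surrogate fitted to samples of a stand-in "expensive" model.
import numpy as np

rng = np.random.default_rng(0)

def expensive_model(x1, x2):
    """Stand-in for a costly high-fidelity simulation."""
    return np.sin(3 * x1) + 0.5 * x2 ** 2 + 0.2 * x1 * x2

# Design of experiments: random samples of the two design variables.
X = rng.uniform(-1, 1, size=(40, 2))
y = expensive_model(X[:, 0], X[:, 1])

def basis(X):
    """Quadratic polynomial basis: 1, x1, x2, x1^2, x2^2, x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

coef, *_ = np.linalg.lstsq(basis(X), y, rcond=None)

# The surrogate is now cheap to evaluate anywhere in the design space.
X_test = rng.uniform(-1, 1, size=(1000, 2))
y_surr = basis(X_test) @ coef
rmse = np.sqrt(np.mean((y_surr - expensive_model(X_test[:, 0], X_test[:, 1])) ** 2))
print("surrogate RMSE over the design space:", round(rmse, 3))
```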

  2. Tsunami Hazard Preventing Based Land Use Planning Model Using GIS Techniques in Muang Krabi, Thailand

    Directory of Open Access Journals (Sweden)

    Abdul Salam Soomro

    2012-10-01

    The terrible tsunami disaster of 26 December 2004 hit Krabi, one of the eco-tourist and very fascinating provinces of southern Thailand, as well as other regions such as Phangnga and Phuket, devastating human lives, coastal communications and economic activities. This research study aims to generate a tsunami-hazard-preventing land use planning model using GIS (Geographical Information Systems) based on a hazard suitability analysis approach. Different triggering factors, e.g. elevation, proximity to the shoreline, population density, mangrove, forest, stream and road, have been used based on land use zoning criteria. The criteria have been weighted using the Saaty scale of importance, one of the mathematical techniques. The model has been classified according to a land suitability classification. Various GIS techniques, namely subsetting, spatial analysis, map difference and data conversion, have been used. The model has been generated with five categories, namely high, moderate, low, very low and not suitable regions, illustrated with their appropriate definitions for decision makers to redevelop the region.
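
    The Saaty-scale weighting step can be illustrated with the usual AHP principal-eigenvector computation; the pairwise comparison matrix below is invented and does not reflect the study's actual judgements of the triggering factors.

```python
# Factor weights from a Saaty-style pairwise comparison matrix (AHP eigenvector method).
import numpy as np

factors = ["elevation", "shoreline proximity", "population density", "land cover"]
# A[i, j] expresses how much more important factor i is than factor j (1-9 scale, invented).
A = np.array([
    [1,   3,   5,   7],
    [1/3, 1,   3,   5],
    [1/5, 1/3, 1,   3],
    [1/7, 1/5, 1/3, 1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)                    # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency ratio (CR < 0.1 is conventionally acceptable); random index RI for n = 4 is 0.90.
ci = (eigvals[k].real - len(A)) / (len(A) - 1)
cr = ci / 0.90
for f, w in zip(factors, weights):
    print(f"{f:22s} {w:.3f}")
print("consistency ratio:", round(cr, 3))
```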

  3. ADBT Frame Work as a Testing Technique: An Improvement in Comparison with Traditional Model Based Testing

    Directory of Open Access Journals (Sweden)

    Mohammed Akour

    2016-05-01

    Software testing is an embedded activity in all software development life cycle phases. Due to the difficulties and high costs of software testing, many testing techniques have been developed with the common goal of testing software in the most optimal and cost-effective manner. Model-based testing (MBT) is used to direct testing activities such as test verification and selection. MBT is employed to encapsulate and understand the behavior of the system under test, which supports and helps software engineers to validate the system with various likely actions. The widespread usage of models has influenced the usage of MBT in the testing process, especially with UML. In this research, we propose an improved model-based testing strategy, which involves and uses four different diagrams in the testing process. This paper also discusses and explains the activities in the proposed model with the finite state model (FSM). Comparisons with traditional model-based testing have been made in terms of test case generation and results.

  4. Data flow modeling techniques

    Science.gov (United States)

    Kavi, K. M.

    1984-01-01

    There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.

  5. Mathematical Foundation Based Inter-Connectivity modelling of Thermal Image processing technique for Fire Protection

    Directory of Open Access Journals (Sweden)

    Sayantan Nath

    2015-09-01

    In this paper, the integration between multiple functions of image processing and their statistical parameters for an intelligent alarming-series-based fire detection system is presented. The proper inter-connectivity mapping between processing elements of imagery, based on a classification factor for temperature monitoring and a multilevel intelligent alarm sequence, is introduced by an abstractive canonical approach. The flow of image processing components between the core implementation of an intelligent alarming system with temperature-wise area segmentation, as well as the boundary detection technique, is not yet fully explored in the present era of thermal imaging. In light of the analytical perspective of convolutive functionalism in thermal imaging, the abstract algebra based inter-mapping model between event-calculus supported DAGSVM classification for step-by-step generation of the alarm series with a gradual monitoring technique, and the segmentation of regions with their affected boundaries in a thermographic image of coal with respect to temperature distinctions, is discussed. The connectedness of the multifunctional operations of image processing based compatible fire protection systems with a proper monitoring sequence is investigated here. The mathematical models representing the relation between the temperature-affected areas and their boundaries in the obtained thermal image, defined in partial derivative fashion, are the core contribution of this study. The thermal image of a coal sample is obtained in a real-life scenario by a self-assembled thermographic camera in this study. The amalgamation of area segmentation, boundary detection and the alarm series is described in abstract algebra. The principal objective of this paper is to understand the dependency pattern and the principles of working of image processing components and to structure an inter-connected modelling technique for those components with the help of a mathematical foundation.

  6. Model-Based Fault Diagnosis Techniques Design Schemes, Algorithms and Tools

    CERN Document Server

    Ding, Steven X

    2013-01-01

    Guaranteeing a high system performance over a wide operating range is an important issue surrounding the design of automatic control systems with successively increasing complexity. As a key technology in the search for a solution, advanced fault detection and identification (FDI) is receiving considerable attention. This book introduces basic model-based FDI schemes, advanced analysis and design algorithms, and mathematical and control-theoretic tools. This second edition of Model-Based Fault Diagnosis Techniques contains: new material on fault isolation and identification, and fault detection in feedback control loops; extended and revised treatment of systematic threshold determination for systems with both deterministic unknown inputs and stochastic noises; addition of the continuously-stirred tank heater as a representative process-industrial benchmark; and enhanced discussion of residual evaluation in stochastic processes. Model-based Fault Diagno...

  7. Fuzzy Time Series Forecasting Model Based on Automatic Clustering Techniques and Generalized Fuzzy Logical Relationship

    Directory of Open Access Journals (Sweden)

    Wangren Qiu

    2015-01-01

    Among techniques for constructing high-order fuzzy time series models, there are three types: those based on advanced algorithms, those based on computational methods, and those based on grouping the fuzzy logical relationships. The last type of model is easy to understand for a decision maker who does not know anything about fuzzy set theory or advanced algorithms. To deal with forecasting problems, this paper presents novel high-order fuzzy time series models, denoted GTS(M, N), based on generalized fuzzy logical relationships and automatic clustering. The paper introduces the concept of generalized fuzzy logical relationships and an operation for combining the generalized relationships. The procedure of the proposed model is then applied to forecasting enrollment data at the University of Alabama. To show its considerable outperformance, the proposed approach is also applied to forecasting the Shanghai Stock Exchange Composite Index. Finally, the effects of the parameters M and N, the model order, and the concerned principal fuzzy logical relationships on the forecasting results are also discussed.

  8. Synthetic aperture radar imaging based on attributed scatter model using sparse recovery techniques

    Institute of Scientific and Technical Information of China (English)

    苏伍各; 王宏强; 阳召成

    2014-01-01

    Sparse recovery algorithms formulate the synthetic aperture radar (SAR) imaging problem in terms of the sparse representation (SR) of a small number of strong scatterers' positions among a much larger number of potential scatterers' positions, and provide an effective approach to improve SAR image resolution. Based on the attributed scattering center model, several experiments were performed under different practical considerations to evaluate the performance of five representative SR techniques, namely sparse Bayesian learning (SBL), fast Bayesian matching pursuit (FBMP), the smoothed l0 norm method (SL0), sparse reconstruction by separable approximation (SpaRSA), and the fast iterative shrinkage-thresholding algorithm (FISTA). The parameter settings of the five SR algorithms are discussed, and their performances in different situations are compared. Through the comparison of MSE and failure rate in each algorithm simulation, FBMP and SpaRSA are found suitable for dealing with problems in SAR imaging based on the attributed scattering center model. Although SBL is time-consuming, it consistently achieves better performance in terms of failure rate at high SNR.
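
    One of the five compared solvers, FISTA, is sketched below on a synthetic sparse-recovery problem standing in for the scatterer-position model: it minimizes a least-squares data term plus an l1 penalty by accelerated proximal gradient steps. Dictionary, sparsity level and noise are invented.

```python
# FISTA for min_x 0.5*||y - A x||_2^2 + lam*||x||_1 on a synthetic sparse problem.
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 64, 256, 5                       # measurements, dictionary size, sparsity
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k) * 3
y = A @ x_true + 0.01 * rng.normal(size=m)

lam = 0.05
L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the quadratic term's gradient

def soft(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
z, t = x.copy(), 1.0
for _ in range(300):
    x_new = soft(z - (A.T @ (A @ z - y)) / L, lam / L)   # proximal gradient step
    t_new = (1 + np.sqrt(1 + 4 * t**2)) / 2              # momentum update
    z = x_new + (t - 1) / t_new * (x_new - x)
    x, t = x_new, t_new

print("relative reconstruction error:",
      round(float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)), 3))
```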

  9. Solution Procedure for Transport Modeling in Effluent Recharge Based on Operator-Splitting Techniques

    Directory of Open Access Journals (Sweden)

    Shutang Zhu

    2008-01-01

    The coupling of groundwater movement and reactive transport during groundwater recharge with wastewater leads to a complicated mathematical model, involving terms describing convection-dispersion, adsorption/desorption and/or biodegradation, and so forth. Such a coupled model has been found very difficult to solve either analytically or numerically. The present study adopts operator-splitting techniques to decompose the coupled model into two submodels with different intrinsic characteristics. By applying an upwind finite difference scheme to the finite volume integral of the convection flux term, an implicit solution procedure is derived to solve the convection-dominant equation. The dispersion term is discretized in a standard central-difference scheme, while the dispersion-dominant equation is solved using either the preconditioned Jacobi conjugate gradient (PJCG) method or the Thomas method based on a locally one-dimensional scheme. The solution method proposed in this study is applied successfully to the demonstration project of groundwater recharge with secondary effluent at the Gaobeidian sewage treatment plant (STP).
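
    The splitting idea can be shown on a 1D toy version of the transport equation: an explicit upwind step handles convection, then an implicit central-difference step, solved with the Thomas algorithm, handles dispersion. The grid, velocity and dispersion coefficient below are invented, not the site data.

```python
# Operator splitting for 1D convection-dispersion: upwind convection + implicit dispersion.
import numpy as np

L, nx = 10.0, 201
dx = L / (nx - 1)
v, D = 0.5, 0.02          # pore velocity [m/d], dispersion coefficient [m^2/d] (illustrative)
dt = 0.8 * dx / v         # respect the advective CFL condition
c = np.zeros(nx)
c[0] = 1.0                # continuous source at the inlet

def thomas(a, b, cc, d):
    """Solve a tridiagonal system with lower diag a, main diag b, upper diag cc."""
    n = len(d)
    b, d = b.copy(), d.copy()
    for i in range(1, n):
        w = a[i - 1] / b[i - 1]
        b[i] -= w * cc[i - 1]
        d[i] -= w * d[i - 1]
    x = np.empty(n)
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - cc[i] * x[i + 1]) / b[i]
    return x

r = D * dt / dx**2
for _ in range(200):
    # Step 1: explicit upwind convection (flow in +x direction).
    c[1:] = c[1:] - v * dt / dx * (c[1:] - c[:-1])
    c[0] = 1.0
    # Step 2: implicit dispersion on interior nodes, (1 + 2r) c_i - r c_{i-1} - r c_{i+1} = c_i_old.
    a = np.full(nx - 3, -r)
    b = np.full(nx - 2, 1 + 2 * r)
    rhs = c[1:-1].copy()
    rhs[0] += r * c[0]
    rhs[-1] += r * c[-1]
    c[1:-1] = thomas(a, b, np.full(nx - 3, -r), rhs)

print("concentration front position ~", round(dx * int(np.argmin(np.abs(c - 0.5))), 2), "m")
```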

  10. Development of acoustic model-based iterative reconstruction technique for thick-concrete imaging

    Science.gov (United States)

    Almansouri, Hani; Clayton, Dwight; Kisner, Roger; Polsky, Yarom; Bouman, Charles; Santos-Villalobos, Hector

    2016-02-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the algorithm and shows reconstruction results for synthetically generated data.

  11. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2015-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the algorithm and shows reconstruction results for synthetically generated data.

  12. Development of Acoustic Model-Based Iterative Reconstruction Technique for Thick-Concrete Imaging

    Energy Technology Data Exchange (ETDEWEB)

    Almansouri, Hani [Purdue University; Clayton, Dwight A [ORNL; Kisner, Roger A [ORNL; Polsky, Yarom [ORNL; Bouman, Charlie [Purdue University; Santos-Villalobos, Hector J [ORNL

    2016-01-01

    Ultrasound signals have been used extensively for non-destructive evaluation (NDE). However, typical reconstruction techniques, such as the synthetic aperture focusing technique (SAFT), are limited to quasi-homogenous thin media. New ultrasonic systems and reconstruction algorithms are needed for one-sided NDE of non-homogenous thick objects. An application example space is imaging of reinforced concrete structures for commercial nuclear power plants (NPPs). These structures provide important foundation, support, shielding, and containment functions. Identification and management of aging and degradation of concrete structures is fundamental to the proposed long-term operation of NPPs. Another example is geothermal and oil/gas production wells. These multi-layered structures are composed of steel, cement, and several types of soil and rocks. Ultrasound systems with greater penetration range and image quality will allow for better monitoring of the well's health and prediction of high-pressure hydraulic fracturing of the rock. These application challenges need to be addressed with an integrated imaging approach, where the application, hardware, and reconstruction software are highly integrated and optimized. Therefore, we are developing an ultrasonic system with Model-Based Iterative Reconstruction (MBIR) as the image reconstruction backbone. As the first implementation of MBIR for ultrasonic signals, this paper documents the algorithm and shows reconstruction results for synthetically generated data.

  13. Model-based sub-Nyquist sampling and reconstruction technique for ultra-wideband (UWB) radar

    Science.gov (United States)

    Nguyen, Lam; Tran, Trac D.

    2010-04-01

    The Army Research Lab has recently developed an ultra-wideband (UWB) synthetic aperture radar (SAR). The radar has been employed to support proof-of-concept demonstrations for several concealed target detection programs. The radar transmits and receives short impulses to achieve a wide bandwidth from 300 MHz to 3000 MHz. Since the radar directly digitizes the wide-bandwidth receive signals, the challenge is how to employ relatively slow and inexpensive analog-to-digital (A/D) converters to sample the signals at a rate that is greater than the minimum Nyquist rate. ARL has developed a sampling technique that allows us to employ inexpensive A/D converters (ADCs) to digitize the wide-bandwidth signals. However, this technique still has a major drawback due to the longer time required to complete a data acquisition cycle. This in turn translates to lower average power and lower effective pulse repetition frequency (PRF). Compressed Sensing (CS) theory offers a new approach to data acquisition. In the CS framework, we can reconstruct certain signals or images from far fewer samples than traditional sampling methods require, provided that the signals are sparse in certain domains. However, while the CS framework offers the data compression feature, it still does not address the above mentioned drawback, since the data acquisition must be operated in equivalent time because many global measurements (obtained from global random projections) are required, as depicted by the sensing matrix Φ in the CS framework. In this paper, we propose a new technique that allows the sub-Nyquist sampling and reconstruction of the wide-bandwidth data. In this technique, each wide-bandwidth radar data record is modeled as a superposition of many backscatter signals from reflective point targets. The technique is based on direct sparse recovery using a special dictionary containing many time-delayed versions of the transmitted probing signal. We demonstrate via simulated as well as
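
    The dictionary-based recovery step can be sketched with a generic greedy solver; below, orthogonal matching pursuit (used here purely for illustration, not necessarily the authors' recovery algorithm) finds the delays of a few point scatterers in a synthetic record built from time-shifted replicas of an invented probing pulse.

```python
# Sparse recovery with a dictionary of time-delayed replicas of the transmitted pulse.
import numpy as np

rng = np.random.default_rng(2)
n = 512                                                    # samples per record
t = np.arange(n)
pulse = np.exp(-((t - 16) / 4.0) ** 2) * np.cos(0.6 * t)   # invented short probing pulse

# Dictionary: column d is the pulse delayed by d samples.
D = np.column_stack([np.roll(pulse, d) for d in range(n)])

# Synthetic record: three well-separated point scatterers plus noise.
delays_true, amps_true = [60, 200, 330], [1.0, 0.7, -0.5]
y = sum(a * D[:, d] for a, d in zip(amps_true, delays_true)) + 0.02 * rng.normal(size=n)

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: pick k atoms, refitting by least squares."""
    residual, support, coef = y.copy(), [], None
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef

support, coef = omp(D, y, k=3)
print("recovered delays:", sorted(support), " true:", sorted(delays_true))
print("recovered amplitudes:", np.round(coef[np.argsort(support)], 2))
```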

  14. Advances in Intelligent Modelling and Simulation Artificial Intelligence-Based Models and Techniques in Scalable Computing

    CERN Document Server

    Khan, Samee; Burczy´nski, Tadeusz

    2012-01-01

    One of the most challenging issues in today's large-scale computational modeling and design is to effectively manage complex distributed environments, such as computational clouds, grids, ad hoc, and P2P networks operating under various types of users with evolving relationships fraught with uncertainties. In this context, the IT resources and services usually belong to different owners (institutions, enterprises, or individuals) and are managed by different administrators. Moreover, uncertainties are presented to the system at hand in various forms of information that are incomplete, imprecise, fragmentary, or overloading, which hinders the full and precise resolution of the evaluation criteria, sequencing and selection, and the assignment of scores. Intelligent scalable systems enable flexible routing and charging, advanced user interactions and the aggregation and sharing of geographically-distributed resources in modern large-scale systems. This book presents new ideas, theories, models...

  15. Identification techniques for phenomenological models of hysteresis based on the conjugate gradient method

    Energy Technology Data Exchange (ETDEWEB)

    Andrei, Petru [Electrical and Computer Engineering Department, Florida State University, Tallahassee, FL 32310 (United States) and Electrical and Computer Engineering Department, Florida A and M University, Tallahassee, FL 32310 (United States)]. E-mail: pandrei@eng.fsu.edu; Oniciuc, Liviu [Electrical and Computer Engineering Department, Florida State University, Tallahassee, FL 32310 (United States); Stancu, Alexandru [Faculty of Physics, 'Al. I. Cuza' University, Iasi 700506 (Romania); Stoleriu, Laurentiu [Faculty of Physics, 'Al. I. Cuza' University, Iasi 700506 (Romania)

    2007-09-15

    An identification technique for the parameters of phenomenological models of hysteresis is presented. The basic idea of our technique is to set up a system of equations for the parameters of the model as a function of known quantities on the major or minor hysteresis loops (e.g. coercive force, susceptibilities at various points, remanence), or other magnetization curves. This system of equations can be either over or underspecified and is solved by using the conjugate gradient method. Numerical results related to the identification of parameters in the Energetic, Jiles-Atherton, and Preisach models are presented.

  16. An Improved Technique Based on Firefly Algorithm to Estimate the Parameters of the Photovoltaic Model

    Directory of Open Access Journals (Sweden)

    Issa Ahmed Abed

    2016-12-01

    This paper presents a method to enhance the firefly algorithm by coupling it with a local search. The constructed technique is applied to identifying the parameters of the solar cell model, where the method has proved its ability to obtain the photovoltaic model parameters. The standard firefly algorithm (FA), the electromagnetism-like (EM) algorithm, and the electromagnetism-like algorithm without local search (EMW) are all compared with the suggested method to test its capability to solve this model.
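
    A plain firefly algorithm, which the paper enhances with a local search, is sketched below minimizing an invented test function standing in for the PV parameter-fitting error; the local-search coupling itself is not implemented here.

```python
# Standard firefly algorithm minimizing an invented test function.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                       # stand-in for the PV parameter-fitting error
    return np.sum((x - 0.3) ** 2) + 0.1 * np.sum(np.sin(10 * x) ** 2)

dim, n_fireflies, n_iter = 3, 20, 200
alpha, beta0, gamma = 0.2, 1.0, 1.0     # randomness, base attractiveness, absorption

X = rng.uniform(-1, 1, size=(n_fireflies, dim))
f = np.array([objective(x) for x in X])

for it in range(n_iter):
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if f[j] < f[i]:             # move firefly i towards the brighter firefly j
                r2 = np.sum((X[i] - X[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                X[i] += beta * (X[j] - X[i]) + alpha * (rng.random(dim) - 0.5)
                f[i] = objective(X[i])
    alpha *= 0.98                       # gradually reduce the random walk

best = X[np.argmin(f)]
print("best solution:", np.round(best, 3), " objective:", round(float(f.min()), 4))
```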

  17. A statistical model-based technique for accounting for prostate gland deformation in endorectal coil-based MR imaging.

    Science.gov (United States)

    Tahmasebi, Amir M; Sharifi, Reza; Agarwal, Harsh K; Turkbey, Baris; Bernardo, Marcelino; Choyke, Peter; Pinto, Peter; Wood, Bradford; Kruecker, Jochen

    2012-01-01

    In prostate brachytherapy procedures, combining high-resolution endorectal coil (ERC)-MRI with Computed Tomography (CT) images has been shown to improve the diagnostic specificity for malignant tumors. Despite this advantage, a major complication in the fusion of the two imaging modalities is the deformation of the prostate shape in ERC-MRI. Conventionally, non-linear deformable registration techniques have been utilized to account for such deformation. In this work, we present a model-based technique for accounting for the deformation of the prostate gland in ERC-MR imaging, in which a unique deformation vector is estimated for every point within the prostate gland. Modes of deformation for every point in the prostate are statistically identified using an MR-based training set (with and without ERC-MRI). Deformation of the prostate from a deformed (ERC-MRI) to a non-deformed state in a different modality (CT) is then realized by first calculating partial deformation information for a limited number of points (such as surface points or anatomical landmarks) and then utilizing the calculated deformation from this subset of points to determine the coefficient values for the modes of deformation provided by the statistical deformation model. Using leave-one-out cross-validation, our results demonstrated a mean estimation error of 1 mm for MR-to-MR registration.
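
    The statistical-deformation-model idea can be sketched with synthetic data: PCA modes are learned from training deformation fields, mode coefficients are estimated by least squares from a small subset of "landmark" points, and the full field is extrapolated. Everything below is simulated; it is not the authors' prostate data or their exact model.

```python
# PCA-based statistical deformation model fitted from a subset of landmark points.
import numpy as np

rng = np.random.default_rng(0)
n_points, n_train, n_modes = 300, 25, 3

# Synthetic training deformation fields (x-displacement of each point), built
# from a few hidden smooth basis fields plus noise.
s = np.linspace(0, 1, n_points)
hidden = np.stack([np.sin(np.pi * s), np.sin(2 * np.pi * s), s ** 2])
train = rng.normal(size=(n_train, 3)) @ hidden + 0.01 * rng.normal(size=(n_train, n_points))

mean = train.mean(axis=0)
U, S, Vt = np.linalg.svd(train - mean, full_matrices=False)
modes = Vt[:n_modes]                          # principal deformation modes

# A new deformation, observed only at 20 "landmark" points.
true_coef = np.array([1.5, -0.8, 0.4])
full_true = mean + true_coef @ modes
idx = rng.choice(n_points, 20, replace=False)

# Least-squares estimate of the mode coefficients from the landmarks, then extrapolate.
coef, *_ = np.linalg.lstsq(modes[:, idx].T, full_true[idx] - mean[idx], rcond=None)
full_est = mean + coef @ modes
print("RMS extrapolation error:",
      round(float(np.sqrt(np.mean((full_est - full_true) ** 2))), 5))
```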

  18. Applying Model-Based Techniques for Aerospace Projects in Accordance with DO-178C, DO-331, and DO-333

    OpenAIRE

    Eisemann, Ulrich

    2016-01-01

    The new standard for software development in civil aviation, DO-178C, differs from its predecessor DO-178B mainly in that it has standard supplements to provide greater scope for using new software development methods. The most important standard supplements are DO-331, on the methods of model-based development and model-based verification, and DO-333, on the use of formal methods such as model checking and abstract interpretation. These key software design techniques of...

  19. Optimization of numerical weather/wave prediction models based on information geometry and computational techniques

    Science.gov (United States)

    Galanis, George; Famelis, Ioannis; Kalogeri, Christina

    2014-10-01

    In recent years, a new, highly demanding framework has been set for environmental sciences and applied mathematics as a result of the needs posed by issues that are of interest not only to the scientific community but to today's society in general: global warming, renewable energy resources and natural hazards can be listed among them. There are two main directions that the research community follows today in order to address the above problems: the utilization of environmental observations obtained from in situ or remote sensing sources, and meteorological-oceanographic simulations based on physical-mathematical models. In particular, in trying to reach credible local forecasts, the two previous data sources are combined by algorithms that are essentially based on optimization processes. Conventional approaches in this framework usually neglect the topological-geometrical properties of the space of the data under study by adopting least squares methods based on classical Euclidean geometry tools. In the present work, new optimization techniques are discussed that make use of methodologies from a rapidly advancing branch of applied mathematics, Information Geometry. The latter proves that the distributions of data sets are elements of non-Euclidean structures in which the underlying geometry may differ significantly from the classical one. Geometrical entities like Riemannian metrics, distances, curvature and affine connections are utilized in order to define the optimum distributions fitting the environmental data at specific areas and to form differential systems that describe the optimization procedures. The proposed methodology is clarified by an application to wind speed forecasts on the island of Kefalonia, Greece.

  20. STATISTICAL INFERENCES FOR VARYING-COEFFICIENT MODELS BASED ON LOCALLY WEIGHTED REGRESSION TECHNIQUE

    Institute of Scientific and Technical Information of China (English)

    梅长林; 张文修; 梁怡

    2001-01-01

    Some fundamental issues on statistical inference relating to varying-coefficient regression models are addressed and studied. An exact testing procedure is proposed for checking the goodness of fit of a varying-coefficient model fitted by the locally weighted regression technique versus an ordinary linear regression model. Also, an appropriate statistic for testing variation of the model parameters over the locations where the observations are collected is constructed, and a formal testing approach, which is essential to exploring spatial non-stationarity in geography science, is suggested.
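
    The locally weighted fitting step can be sketched directly: at each location u0, the coefficients of y = a(u) + b(u)x are estimated by kernel-weighted least squares. The simulated data and bandwidth below are illustrative; the paper's exact test statistics are not reproduced.

```python
# Locally weighted least squares for a varying-coefficient model y = a(u) + b(u)*x + e.
import numpy as np

rng = np.random.default_rng(0)
n = 500
u = rng.uniform(0, 1, n)              # location variable
x = rng.normal(size=n)                # covariate
a_true, b_true = np.sin(2 * np.pi * u), 1 + u**2
y = a_true + b_true * x + 0.2 * rng.normal(size=n)

def local_coefficients(u0, bandwidth=0.1):
    w = np.exp(-0.5 * ((u - u0) / bandwidth) ** 2)       # Gaussian kernel weights in u
    X = np.column_stack([np.ones(n), x])
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
    return beta                                          # [a(u0), b(u0)]

for u0 in (0.25, 0.5, 0.75):
    a_hat, b_hat = local_coefficients(u0)
    print(f"u0={u0}: a~{a_hat:.2f} (true {np.sin(2*np.pi*u0):.2f}), "
          f"b~{b_hat:.2f} (true {1+u0**2:.2f})")
```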

  1. Coronary stent on coronary CT angiography: Assessment with model-based iterative reconstruction technique

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Eun Chae; Kim, Yeo Koon; Chun, Eun Ju; Choi, Sang IL [Dept. of of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of)

    2016-05-15

    To assess the performance of the model-based iterative reconstruction (MBIR) technique for evaluation of coronary artery stents on coronary CT angiography (CCTA). Twenty-two patients with coronary stent implantation who underwent CCTA were retrospectively enrolled for comparison of image quality between filtered back projection (FBP), adaptive statistical iterative reconstruction (ASIR) and MBIR. In each data set, image noise was measured as the standard deviation of the measured attenuation within circular regions of interest in the ascending aorta (AA) and left main coronary artery (LM). To objectively assess the noise and blooming artifacts in the coronary stents, we additionally measured the standard deviation of the measured attenuation and the intra-luminal stent diameters of 35 stents in total with dedicated software. All image noise values measured in the AA (all p < 0.001), LM (p < 0.001, p = 0.001) and coronary stents (all p < 0.001) were significantly lower with MBIR than with FBP or ASIR. Intraluminal stent diameter was significantly higher with MBIR than with ASIR or FBP (p < 0.001, p = 0.001). MBIR can reduce image noise and blooming artifacts from the stent, leading to better in-stent assessment in patients with coronary artery stents.

  2. Meso-damage modelling of polymer based particulate composites using finite element technique

    Science.gov (United States)

    Tsui, Chi Pong

    The development of a new particulate polymer composite (PPC) with desired mechanical properties is usually accomplished through an experimental trial-and-error approach. A new technique, which predicts the damage mechanism and its effects on the mechanical properties of PPC, has been proposed. This meso-mechanical modelling technique, which offers a means to bridge the micro-damage mechanism and the macro-structural behaviour, has been implemented in a finite element code. A three-dimensional finite element meso-cell model has been designed and constructed to simulate the damage mechanism of PPC. The meso-cell model consists of a micro-particle, an interface, and a matrix. The initiation of the particle/polymer-matrix debonding process has been predicted on the basis of a tensile criterion. By considering the meso-cell model as a representative volume element (RVE), the effects of damage on the macro-structural constitutive behaviour of PPC have been determined. An experimental investigation has been made on glass bead (GB) reinforced polyphenylene oxide (PPO) for verification of the meso-cell model and the meso-mechanical finite element technique. The predicted constitutive relation has been found to be in good agreement with the experimental results. The results of the in-situ microscopic test also verify the correctness of the meso-cell model. The application of the meso-mechanical finite element modelling technique has been extended to a macro-structural analysis to simulate the response of an engineering structure made of PPC under a static load. In the simulation, a damage variable has been defined in terms of the computational results of the cell model at the meso-scale. Hence, the damage-coupled constitutive relation of the GB/PPO composite could be derived. A user-defined subroutine VUMAT in the FORTRAN language describing the damage-coupled constitutive behaviour has then been incorporated into the ABAQUS finite element code. On a macro-scale, the ABAQUS finite element code

  3. Validation of a COMSOL Multiphysics based soil model using imaging techniques

    Science.gov (United States)

    Hayes, Robert; Newill, Paul; Podd, Frank; Dorn, Oliver; York, Trevor; Grieve, Bruce

    2010-05-01

    In the face of climate change the ability to rapidly identify new plant varieties that will be tolerant to drought, and other stresses, is going to be key to breeding the food crops of tomorrow. Currently, above soil features (phenotypes) are monitored in industrial greenhouses and field trials during seed breeding programmes so as to provide an indication of which plants have the most likely preferential genetics to thrive in the future global environments. These indicators of 'plant vigour' are often based on loosely related features which may be straightforward to examine, such as an additional ear of corn on a maize plant, but which are labour intensive and often lacking in direct linkage to the required crop features. A new visualisation tool is being developed for seed breeders, providing on-line data for each individual plant in a screening programme indicating how efficiently each plant utilises the water and nutrients available in the surrounding soil. It will be used as an in-field tool for early detection of desirable genetic traits with the aim of increased efficiency in identification and delivery of tomorrow's drought tolerant food crops. Visualisation takes the form of Electrical Impedance Tomography (EIT), a non-destructive and non-intrusive imaging technique. The measurement space is typical of medical and industrial process monitoring i.e. on a small spatial scale as opposed to that of typical geophysical applications. EIT measurements are obtained for an individual plant thus allowing water and nutrient absorption levels for an individual specimen to be inferred from the resistance distribution image obtained. In addition to traditional soft-field image reconstruction techniques the inverse problem is solved using mathematical models for the mobility of water and solutes in soil. The University of Manchester/Syngenta LCT2 (Low Cost Tomography 2) instrument has been integrated into crop growth studies under highly controlled soil, nutrient and

  4. Model-based wear measurements in total knee arthroplasty : development and validation of novel radiographic techniques

    NARCIS (Netherlands)

    IJsseldijk, van E.A.

    2016-01-01

    The primary aim of this work was to develop novel model-based mJSW measurement methods using a 3D reconstruction and compare the accuracy and precision of these methods to conventional mJSW measurement. This thesis contributed to the development, validation and clinical application of model-based mJ

  5. Model-based review of Doppler global velocimetry techniques with laser frequency modulation

    Science.gov (United States)

    Fischer, Andreas

    2017-06-01

    Optical measurements of flow velocity fields are of crucial importance to understand the behavior of complex flows. One flow field measurement technique is Doppler global velocimetry (DGV). A large variety of different DGV approaches exist, e.g., applying different kinds of laser frequency modulation. In order to investigate the measurement capabilities especially of the newer DGV approaches with laser frequency modulation, a model-based review of all DGV measurement principles is performed. The DGV principles can be categorized by the respective number of required time steps. The systematic review of all DGV principles reveals drawbacks and benefits of the different measurement approaches with respect to the temporal resolution, the spatial resolution and the measurement range. Furthermore, the Cramér-Rao bound for photon shot noise is calculated and discussed, which represents a fundamental limit of the achievable measurement uncertainty. As a result, all DGV techniques provide similar minimal uncertainty limits. With Nphotons as the number of scattered photons, the minimal standard deviation of the flow velocity reads about 106 m/s/√Nphotons, which was calculated for a perpendicular arrangement of the illumination and observation direction and a laser wavelength of 895 nm. As a further result, the signal processing efficiencies are determined with a Monte-Carlo simulation. Except for the newest correlation-based DGV method, the signal processing algorithms are already optimal or near the optimum. Finally, the different DGV approaches are compared regarding errors due to temporal variations of the scattered light intensity and the flow velocity. The influence of a linear variation of the scattered light intensity can be reduced by maximizing the number of time steps, because this means acquiring more information for the correction of this systematic effect. However, more time steps can result in a flow velocity measurement with a lower temporal resolution.
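
    The quoted shot-noise limit can be restated as a simple formula: the minimal standard deviation of the velocity is roughly 106 m/s divided by the square root of the number of scattered photons (for perpendicular illumination/observation and 895 nm). The sketch below, with assumed photon counts, simply evaluates this bound.

    ```python
    # Minimal sketch of the quoted shot-noise limit for DGV: the minimal standard
    # deviation of the velocity scales as ~106 m/s divided by sqrt(N_photons)
    # (perpendicular illumination/observation, 895 nm). Photon numbers are examples.
    import numpy as np

    def dgv_velocity_uncertainty(n_photons, limit_m_per_s=106.0):
        """Cramér-Rao-type lower bound on the velocity standard deviation."""
        return limit_m_per_s / np.sqrt(np.asarray(n_photons, dtype=float))

    for n in (1e4, 1e6, 1e8):
        print(f"N = {n:.0e}: sigma_v >= {dgv_velocity_uncertainty(n):.3f} m/s")
    ```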

  6. Diffusion of a Sustainable Farming Technique in Sri Lanka: An Agent-Based Modeling Approach

    Science.gov (United States)

    Jacobi, J. H.; Gilligan, J. M.; Carrico, A. R.; Truelove, H. B.; Hornberger, G.

    2012-12-01

    We live in a changing world - anthropogenic climate change is disrupting historic climate patterns and social structures are shifting as large-scale population growth and massive migrations place unprecedented strain on natural and social resources. Agriculture in many countries is affected by these changes in the social and natural environments. In Sri Lanka, rice farmers in the Mahaweli River watershed have seen increases in temperature and decreases in precipitation. In addition, a government-led resettlement project has altered the demographics and social practices in villages throughout the watershed. These changes have the potential to impact rice yields in a country where self-sufficiency in rice production is a point of national pride. Studies of the climate can elucidate physical effects on rice production, while research on social behaviors can illuminate the influence of community dynamics on agricultural practices. Only an integrated approach, however, can capture the combined and interactive impacts of these global changes on Sri Lankan agriculture. As part of an interdisciplinary team, we present an agent-based modeling (ABM) approach to studying the effects of physical and social changes on farmers in Sri Lanka. In our research, the diffusion of a sustainable farming technique, the system of rice intensification (SRI), throughout a farming community is modeled to identify factors that either inhibit or promote the spread of a more sustainable approach to rice farming. Inputs into the ABM are both physical and social and include temperature, precipitation, the Palmer Drought Severity Index (PDSI), community trust, and social networks. Outputs from the ABM demonstrate the importance of meteorology and social structure on the diffusion of SRI throughout a farming community.

  7. A New Profile Learning Model for Recommendation System based on Machine Learning Technique

    Directory of Open Access Journals (Sweden)

    Shereen H. Ali

    2016-03-01

    Full Text Available Recommender systems (RSs) have been used to successfully address the information overload problem by providing personalized and targeted recommendations to the end users. RSs are software tools and techniques providing suggestions for items to be of use to a user; hence, they typically apply techniques and methodologies from Data Mining. The main contribution of this paper is to introduce a new user profile learning model to promote the recommendation accuracy of vertical recommendation systems. The proposed profile learning model employs the vertical classifier that has been used in the multi-classification module of the Intelligent Adaptive Vertical Recommendation (IAVR) system to discover the user’s area of interest, and then build the user’s profile accordingly. Experimental results have proven the effectiveness of the proposed profile learning model, which accordingly will promote the recommendation accuracy.

  8. Introduction to Information Visualization (InfoVis) Techniques for Model-Based Systems Engineering

    Science.gov (United States)

    Sindiy, Oleg; Litomisky, Krystof; Davidoff, Scott; Dekens, Frank

    2013-01-01

    This paper presents insights that conform to numerous system modeling languages/representation standards. The insights are drawn from best practices of Information Visualization as applied to aerospace-based applications.

  9. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    There are many business process modelling techniques in use today. This article investigates the differences between business process modelling techniques. For each technique, the definition and the structure are explained. This paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation and how the technique works when implemented in Somerleyton Animal Park. Each technique is summarised with its advantages and disadvantages. The final conclusion recommends business process modelling techniques that are easy to use and serves as a basis for evaluating further modelling techniques.

  10. Combining variational and model-based techniques to register PET and MR images in hand osteoarthritis

    Energy Technology Data Exchange (ETDEWEB)

    Magee, Derek [School of Computing, University of Leeds, Leeds (United Kingdom); Tanner, Steven F; Jeavons, Alan P [Division of Medical Physics, University of Leeds, Leeds (United Kingdom); Waller, Michael; Tan, Ai Lyn; McGonagle, Dennis, E-mail: D.R.Magee@leeds.ac.u [Leeds Teaching Hospitals NHS Trust, Leeds (United Kingdom)

    2010-08-21

    Co-registration of clinical images acquired using different imaging modalities and equipment is finding increasing use in patient studies. Here we present a method for registering high-resolution positron emission tomography (PET) data of the hand acquired using high-density avalanche chambers with magnetic resonance (MR) images of the finger obtained using a 'microscopy coil'. This allows the identification of the anatomical location of the PET radiotracer and thereby locates areas of active bone metabolism/'turnover'. Image fusion involving data acquired from the hand is demanding because rigid-body transformations cannot be employed to accurately register the images. The non-rigid registration technique that has been implemented in this study uses a variational approach to maximize the mutual information between images acquired using these different imaging modalities. A piecewise model of the fingers is employed to ensure that the methodology is robust and that it generates an accurate registration. The accuracy of the technique is evaluated using both synthetic data and PET and MR images acquired from patients with osteoarthritis. The method outperforms some established non-rigid registration techniques and results in a mean registration error that is less than approximately 1.5 mm in the vicinity of the finger joints.

  11. Combining variational and model-based techniques to register PET and MR images in hand osteoarthritis

    Science.gov (United States)

    Magee, Derek; Tanner, Steven F.; Waller, Michael; Tan, Ai Lyn; McGonagle, Dennis; Jeavons, Alan P.

    2010-08-01

    Co-registration of clinical images acquired using different imaging modalities and equipment is finding increasing use in patient studies. Here we present a method for registering high-resolution positron emission tomography (PET) data of the hand acquired using high-density avalanche chambers with magnetic resonance (MR) images of the finger obtained using a 'microscopy coil'. This allows the identification of the anatomical location of the PET radiotracer and thereby locates areas of active bone metabolism/'turnover'. Image fusion involving data acquired from the hand is demanding because rigid-body transformations cannot be employed to accurately register the images. The non-rigid registration technique that has been implemented in this study uses a variational approach to maximize the mutual information between images acquired using these different imaging modalities. A piecewise model of the fingers is employed to ensure that the methodology is robust and that it generates an accurate registration. The accuracy of the technique is evaluated using both synthetic data and PET and MR images acquired from patients with osteoarthritis. The method outperforms some established non-rigid registration techniques and results in a mean registration error that is less than approximately 1.5 mm in the vicinity of the finger joints.

  12. Wavelet-based spatial comparison technique for analysing and evaluating two-dimensional geophysical model fields

    Directory of Open Access Journals (Sweden)

    S. Saux Picart

    2011-11-01

    Full Text Available Complex numerical models of the Earth's environment, based around 3-D or 4-D time and space domains are routinely used for applications including climate predictions, weather forecasts, fishery management and environmental impact assessments. Quantitatively assessing the ability of these models to accurately reproduce geographical patterns at a range of spatial and temporal scales has always been a difficult problem to address. However, this is crucial if we are to rely on these models for decision making. Satellite data are potentially the only observational dataset able to cover the large spatial domains analysed by many types of geophysical models. Consequently optical wavelength satellite data is beginning to be used to evaluate model hindcast fields of terrestrial and marine environments. However, these satellite data invariably contain regions of occluded or missing data due to clouds, further complicating or impacting on any comparisons with the model. A methodology has recently been developed to evaluate precipitation forecasts using radar observations. It allows model skill to be evaluated at a range of spatial scales and rain intensities. Here we extend the original method to allow its generic application to a range of continuous and discontinuous geophysical data fields, and therefore allowing its use with optical satellite data. This is achieved through two major improvements to the original method: (i) all thresholds are determined based on the statistical distribution of the input data, so no a priori knowledge about the model fields being analysed is required and (ii) occluded data can be analysed without impacting on the metric results. The method can be used to assess a model's ability to simulate geographical patterns over a range of spatial scales. We illustrate how the method provides a compact and concise way of visualising the degree of agreement between spatial features in two datasets. The application of the new method, its

  13. A combination of receptor-based pharmacophore modeling & QM techniques for identification of human chymase inhibitors.

    Directory of Open Access Journals (Sweden)

    Mahreen Arooj

    Full Text Available Inhibition of chymase is likely to divulge therapeutic ways for the treatment of cardiovascular diseases, and fibrotic disorders. To find novel and potent chymase inhibitors and to provide a new idea for drug design, we used both ligand-based and structure-based methods to perform the virtual screening (VS) of commercially available databases. Different pharmacophore models generated from various crystal structures of enzyme may depict diverse inhibitor binding modes. Therefore, multiple pharmacophore-based approach is applied in this study. X-ray crystallographic data of chymase in complex with different inhibitors were used to generate four structure-based pharmacophore models. One ligand-based pharmacophore model was also developed from experimentally known inhibitors. After successful validation, all pharmacophore models were employed in database screening to retrieve hits with novel chemical scaffolds. Drug-like hit compounds were subjected to molecular docking using GOLD and AutoDock. Finally four structurally diverse compounds with high GOLD score and binding affinity for several crystal structures of chymase were selected as final hits. Identification of final hits by three different pharmacophore models necessitates the use of multiple pharmacophore-based approach in VS process. Quantum mechanical calculation is also conducted for analysis of electrostatic characteristics of compounds which illustrates their significant role in driving the inhibitor to adopt a suitable bioactive conformation oriented in the active site of enzyme. In general, this study is used as example to illustrate how multiple pharmacophore approach can be useful in identifying structurally diverse hits which may bind to all possible bioactive conformations available in the active site of enzyme. The strategy used in the current study could be appropriate to design drugs for other enzymes as well.

  14. A Combination of Receptor-Based Pharmacophore Modeling & QM Techniques for Identification of Human Chymase Inhibitors

    Science.gov (United States)

    Arooj, Mahreen; Sakkiah, Sugunadevi; Kim, Songmi; Arulalapperumal, Venkatesh; Lee, Keun Woo

    2013-01-01

    Inhibition of chymase is likely to divulge therapeutic ways for the treatment of cardiovascular diseases, and fibrotic disorders. To find novel and potent chymase inhibitors and to provide a new idea for drug design, we used both ligand-based and structure-based methods to perform the virtual screening(VS) of commercially available databases. Different pharmacophore models generated from various crystal structures of enzyme may depict diverse inhibitor binding modes. Therefore, multiple pharmacophore-based approach is applied in this study. X-ray crystallographic data of chymase in complex with different inhibitors were used to generate four structure–based pharmacophore models. One ligand–based pharmacophore model was also developed from experimentally known inhibitors. After successful validation, all pharmacophore models were employed in database screening to retrieve hits with novel chemical scaffolds. Drug-like hit compounds were subjected to molecular docking using GOLD and AutoDock. Finally four structurally diverse compounds with high GOLD score and binding affinity for several crystal structures of chymase were selected as final hits. Identification of final hits by three different pharmacophore models necessitates the use of multiple pharmacophore-based approach in VS process. Quantum mechanical calculation is also conducted for analysis of electrostatic characteristics of compounds which illustrates their significant role in driving the inhibitor to adopt a suitable bioactive conformation oriented in the active site of enzyme. In general, this study is used as example to illustrate how multiple pharmacophore approach can be useful in identifying structurally diverse hits which may bind to all possible bioactive conformations available in the active site of enzyme. The strategy used in the current study could be appropriate to design drugs for other enzymes as well. PMID:23658661

  15. Visualization of simulated small vessels on computed tomography using a model-based iterative reconstruction technique

    Directory of Open Access Journals (Sweden)

    Toru Higaki

    2017-08-01

    Full Text Available This article describes a quantitative evaluation of visualizing small vessels using several image reconstruction methods in computed tomography. Simulated vessels with diameters of 1–6 mm made by a 3D printer were scanned using 320-row detector computed tomography (CT). Hybrid iterative reconstruction (hybrid IR) and model-based iterative reconstruction (MBIR) were performed for the image reconstruction.

  16. BOLD-based Techniques for Quantifying Brain Hemodynamic and Metabolic Properties – Theoretical Models and Experimental Approaches

    Science.gov (United States)

    Yablonskiy, Dmitriy A.; Sukstanskii, Alexander L.; He, Xiang

    2012-01-01

    Quantitative evaluation of brain hemodynamics and metabolism, particularly the relationship between brain function and oxygen utilization, is important for understanding normal human brain operation as well as pathophysiology of neurological disorders. It can also be of great importance for evaluation of hypoxia within tumors of the brain and other organs. A fundamental discovery by Ogawa and co-workers of the BOLD (Blood Oxygenation Level Dependent) contrast opened a possibility to use this effect to study brain hemodynamic and metabolic properties by means of MRI measurements. Such measurements require developing theoretical models connecting MRI signal to brain structure and functioning and designing experimental techniques allowing MR measurements of salient features of theoretical models. In our review we discuss several such theoretical models and experimental methods for quantification of brain hemodynamic and metabolic properties. Our review aims mostly at methods for measuring oxygen extraction fraction, OEF, based on measuring blood oxygenation level. Combining measurement of OEF with measurement of CBF allows evaluation of oxygen consumption, CMRO2. We first consider in detail magnetic properties of blood – magnetic susceptibility, MR relaxation and theoretical models of intravascular contribution to MR signal under different experimental conditions. Then, we describe a “through-space” effect – the influence of inhomogeneous magnetic fields, created in the extravascular space by intravascular deoxygenated blood, on the MR signal formation. Further we describe several experimental techniques taking advantage of these theoretical models. Some of these techniques – MR susceptometry and T2-based quantification of OEF – utilize the intravascular MR signal. Another technique – qBOLD – evaluates OEF by making use of through-space effects. In this review we targeted both scientists just entering the MR field and more experienced MR researchers.

  17. The estimation of coal thickness based on Kriging technique and 3D coal seam modeling

    Energy Technology Data Exchange (ETDEWEB)

    Li, X.; Hu, J.; Zhu, H.; Ding, X. [Tongji University, Shanghai (China)

    2008-07-15

    Based on borehole data, the spatial variation of coal seam thickness was studied. Using contour data, a Triangulated Irregular Network (TIN) model was constructed with a strip algorithm. By using the Kriging method, the thickness at each point of the TIN was calculated. The coal thickness is characterized by spatial uncertainty. The top TIN of the coal seam can be acquired through mapping the bottom TIN with the calculated thickness. A 3D model of the coal seam was constructed by building the corresponding relationship between the top and bottom TIN. 15 refs., 6 figs., 1 tab.
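
    As a rough illustration of the interpolation step, the following Python sketch performs ordinary kriging of seam thickness from boreholes to a single TIN node. The exponential variogram, its parameters and the synthetic borehole data are assumptions for illustration only; the record does not specify them.

    ```python
    # Hedged sketch of ordinary kriging of coal-seam thickness from borehole data,
    # as used to assign a thickness to each TIN node. The exponential variogram,
    # its parameters and the borehole coordinates are illustrative assumptions.
    import numpy as np

    def variogram(h, sill=4.0, range_m=800.0, nugget=0.1):
        return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / range_m))

    def ordinary_kriging(xy_obs, z_obs, xy_new):
        n = len(z_obs)
        d_obs = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
        G = variogram(d_obs)
        np.fill_diagonal(G, 0.0)
        # Kriging system with a Lagrange multiplier enforcing unbiased weights.
        A = np.ones((n + 1, n + 1)); A[:n, :n] = G; A[n, n] = 0.0
        b = np.append(variogram(np.linalg.norm(xy_obs - xy_new, axis=-1)), 1.0)
        w = np.linalg.solve(A, b)
        return float(w[:n] @ z_obs)          # kriged thickness at the TIN node

    # Example with synthetic boreholes (easting/northing in m, thickness in m).
    xy = np.array([[0, 0], [500, 100], [200, 700], [900, 800], [600, 400]], float)
    thick = np.array([2.1, 2.6, 3.0, 2.2, 2.8])
    print(ordinary_kriging(xy, thick, np.array([400.0, 500.0])))
    ```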

  18. Image reconstruction algorithms for electrical capacitance tomography based on ROF model using new numerical techniques

    Science.gov (United States)

    Chen, Jiaoxuan; Zhang, Maomao; Liu, Yinyan; Chen, Jiaoliao; Li, Yi

    2017-03-01

    Electrical capacitance tomography (ECT) is a promising technique applied in many fields. However, the solutions for ECT are not unique and are highly sensitive to measurement noise. To preserve the shape of the reconstructed object and tolerate noisy data, a Rudin–Osher–Fatemi (ROF) model with total variation regularization is applied to image reconstruction in ECT. Two numerical methods, simplified augmented Lagrangian (SAL) and accelerated alternating direction method of multipliers (AADMM), are introduced to solve the above-mentioned problems in ECT. The effects of the parameters and the number of iterations for the different algorithms, and of the noise level in the capacitance data, are discussed. Both simulation and experimental tests were carried out to validate the feasibility of the proposed algorithms, compared to the Landweber iteration (LI) algorithm. The results show that the SAL and AADMM algorithms can handle a high level of noise and that the AADMM algorithm outperforms the other algorithms in identifying the object from its background.
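
    The ROF model itself can be illustrated compactly on an already reconstructed image: total variation regularization removes noise while keeping sharp object boundaries. The sketch below uses an off-the-shelf TV denoiser as a stand-in (it is not the SAL or AADMM solver of the paper, and it does not include the ECT forward problem); the phantom, noise level and regularization weight are assumptions.

    ```python
    # Illustrative sketch of ROF-style total-variation regularization applied to a
    # noisy reconstructed image (not the full ECT forward/inverse problem). The
    # phantom, noise level and regularization weight are assumptions.
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    rng = np.random.default_rng(1)
    phantom = np.zeros((64, 64))
    phantom[20:44, 20:44] = 1.0                      # object against background
    noisy = phantom + 0.3 * rng.standard_normal(phantom.shape)

    tv_image = denoise_tv_chambolle(noisy, weight=0.15)  # ROF model, TV penalty
    print("noisy RMSE:", np.sqrt(np.mean((noisy - phantom) ** 2)))
    print("TV    RMSE:", np.sqrt(np.mean((tv_image - phantom) ** 2)))
    ```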

  19. Suppression of Spiral Waves by Voltage Clamp Techniques in a Conductance-Based Cardiac Tissue Model

    Institute of Scientific and Technical Information of China (English)

    YU Lian-Chun; MA Jun; ZHANG Guo-Yong; CHEN Yong

    2008-01-01

    A new control method is proposed to control the spatio-temporal dynamics in excitable media, described by the Morris-Lecar cell model. It is confirmed that successful suppression of spiral waves can be obtained by spatially clamping the membrane voltage of the excitable cells. The low voltage clamping induces breakup of spiral waves and the fragments are soon absorbed by low voltage obstacles, whereas the high voltage clamping generates travelling waves that annihilate spiral waves through collision with them. However, each method has its shortcomings. Furthermore, a two-step method that combines both low and high voltage clamp techniques is then presented as a possible way out of this predicament.

  20. Model-based fault diagnosis techniques design schemes, algorithms, and tools

    CERN Document Server

    Ding, Steven

    2008-01-01

    The objective of this book is to introduce basic model-based FDI schemes, advanced analysis and design algorithms, and the needed mathematical and control theory tools at a level for graduate students and researchers as well as for engineers. This is a textbook with extensive examples and references. Most methods are given in the form of an algorithm that enables a direct implementation in a programme. Comparisons among different methods are included when possible.

  1. Spectral Target Detection using Physics-Based Modeling and a Manifold Learning Technique

    Science.gov (United States)

    Albano, James A.

    Identification of materials from calibrated radiance data collected by an airborne imaging spectrometer depends strongly on the atmospheric and illumination conditions at the time of collection. This thesis demonstrates a methodology for identifying material spectra using the assumption that each unique material class forms a lower-dimensional manifold (surface) in the higher-dimensional spectral radiance space and that all image spectra reside on, or near, these theoretic manifolds. Using a physical model, a manifold characteristic of the target material exposed to varying illumination and atmospheric conditions is formed. A graph-based model is then applied to the radiance data to capture the intricate structure of each material manifold, followed by the application of the commute time distance (CTD) transformation to separate the target manifold from the background. Detection algorithms are then applied in the CTD subspace. This nonlinear transformation is based on a random walk on a graph and is derived from an eigendecomposition of the pseudoinverse of the graph Laplacian matrix. This work provides a geometric interpretation of the CTD transformation, its algebraic properties, the atmospheric and illumination parameters varied in the physics-based model, and the influence the target manifold samples have on the orientation of the coordinate axes in the transformed space. This thesis concludes by demonstrating improved detection results in the CTD subspace as compared to detection in the original spectral radiance space.
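
    The commute time distance step described above can be written down compactly: CTD(i, j) = vol(G) * (L+_ii + L+_jj - 2 L+_ij), with L+ the pseudoinverse of the graph Laplacian. The following Python sketch computes this for a small assumed graph; it is only a toy illustration of the transformation, not the full physics-based detection pipeline.

    ```python
    # Minimal sketch of the commute-time-distance (CTD) computation described above:
    # CTD(i, j) = vol(G) * (L+_ii + L+_jj - 2 L+_ij), where L+ is the pseudoinverse
    # of the graph Laplacian. The small example graph is an illustrative assumption.
    import numpy as np

    def commute_time_distances(W: np.ndarray) -> np.ndarray:
        """Pairwise commute-time distances from a symmetric weighted adjacency W."""
        d = W.sum(axis=1)
        L = np.diag(d) - W                      # graph Laplacian
        L_pinv = np.linalg.pinv(L)              # eigendecomposition-based pseudoinverse
        diag = np.diag(L_pinv)
        vol = d.sum()
        return vol * (diag[:, None] + diag[None, :] - 2.0 * L_pinv)

    # Example: 4-node graph where nodes 0-1-2 are tightly connected and 3 is remote.
    W = np.array([[0.00, 1.00, 1.00, 0.05],
                  [1.00, 0.00, 1.00, 0.00],
                  [1.00, 1.00, 0.00, 0.05],
                  [0.05, 0.00, 0.05, 0.00]])
    print(np.round(commute_time_distances(W), 2))
    ```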

  2. Regression Test-Selection Technique Using Component Model Based Modification: Code to Test Traceability

    Directory of Open Access Journals (Sweden)

    Ahmad A. Saifan

    2016-04-01

    Full Text Available Regression testing is a safeguarding procedure to validate and verify adapted software, and guarantee that no errors have emerged. However, regression testing is very costly when testers need to re-execute all the test cases against the modified software. This paper proposes a new approach in the regression test selection domain. The approach is based on meta-models (test models and structured models) to decrease the number of test cases to be used in the regression testing process. The approach has been evaluated using three Java applications. To measure the effectiveness of the proposed approach, we compare the results with those of the retest-all approach. The results have shown that our approach reduces the size of the test suite without a negative impact on the effectiveness of fault detection.

  3. Replacement Value - Representation of Fair Value in Accounting. Techniques and Modeling Suitable for the Income Based Approach

    OpenAIRE

    MANEA MARINELA – DANIELA

    2011-01-01

    The term fair value is spread within the sphere of international standards without reference to any detailed guidance on how to apply it. However, for specialized tangible assets, which are rarely sold, the rule IAS 16 "Intangible assets" makes it possible to estimate fair value using an income approach or a depreciated replacement cost approach. The following material is intended to identify potential modeling of fair value as an income-based approach, appealing to techniques used by professional evalu...

  4. Comparison between two meshless methods based on collocation technique for the numerical solution of four-species tumor growth model

    Science.gov (United States)

    Dehghan, Mehdi; Mohammadi, Vahid

    2017-03-01

    As stated in [27], the tumor-growth model incorporates the nutrient within the mixture, as opposed to modeling it with an auxiliary reaction-diffusion equation. The formulation involves systems of highly nonlinear partial differential equations with surface effects described through diffuse-interface models [27]. Numerical simulations can be applied to evaluate this practical model. The present paper investigates the solution of the tumor-growth model with meshless techniques. Meshless methods are applied based on the collocation technique, which employs multiquadric (MQ) radial basis functions (RBFs) and generalized moving least squares (GMLS) procedures. The main advantage of these choices comes from the natural behavior of meshless approaches. Moreover, a meshless method can easily be applied to find the solution of partial differential equations in high dimensions using any distribution of points on regular and irregular domains. The present paper involves a time-dependent system of partial differential equations that describes the four-species tumor growth model. To handle the time variable, two procedures are used. One of them is a semi-implicit finite difference method based on the Crank-Nicolson scheme and the other is based on explicit Runge-Kutta time integration. The first case gives a linear system of algebraic equations to be solved at each time-step. The second case is efficient but conditionally stable. The obtained numerical results are reported to confirm the ability of these techniques to solve the two- and three-dimensional tumor-growth equations.
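
    To make the collocation idea concrete, the sketch below applies multiquadric RBF collocation to a one-dimensional toy boundary value problem with a known solution. The shape parameter, node count and test problem are assumptions; the actual tumor-growth system is a coupled nonlinear PDE system solved with the time-stepping schemes described above.

    ```python
    # Hedged sketch of multiquadric (MQ) RBF collocation for a 1-D toy boundary value
    # problem u''(x) = -pi^2 sin(pi x), u(0) = u(1) = 0 (exact solution sin(pi x)).
    # The shape parameter c and the node count are illustrative assumptions.
    import numpy as np

    def mq(x, centers, c=0.2):
        return np.sqrt((x[:, None] - centers[None, :]) ** 2 + c ** 2)

    def mq_xx(x, centers, c=0.2):
        return c ** 2 / mq(x, centers, c) ** 3     # second derivative of the MQ kernel

    nodes = np.linspace(0.0, 1.0, 21)
    interior, boundary = nodes[1:-1], nodes[[0, -1]]

    # Collocation matrix: PDE rows at interior nodes, Dirichlet rows at the boundary.
    A = np.vstack([mq_xx(interior, nodes), mq(boundary, nodes)])
    rhs = np.concatenate([-np.pi ** 2 * np.sin(np.pi * interior), np.zeros(2)])
    coef = np.linalg.solve(A, rhs)

    x_eval = np.linspace(0, 1, 101)
    u_num = mq(x_eval, nodes) @ coef
    print("max error:", np.abs(u_num - np.sin(np.pi * x_eval)).max())
    ```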

  5. A New Model for Intrusion Detection based on Reduced Error Pruning Technique

    Directory of Open Access Journals (Sweden)

    Mradul Dhakar

    2013-09-01

    Full Text Available The increasing misuse of the internet has forced security agencies to work very hard to diminish the presence of abnormal users on the web. The motive of these illicit users (called intruders) is to harm the system or the network, either by gaining access to the system or by prohibiting genuine users from accessing the resources. Hence, in order to tackle these abnormalities, the Intrusion Detection System (IDS) with Data Mining has evolved as the most demanded approach. On the one hand, an IDS aims to detect intrusions by monitoring a given environment, while on the other hand Data Mining allows mining of these intrusions hidden among genuine users. In this regard, IDS with Data Mining has been through several revisions to meet current requirements for efficient detection of intrusions. Several models have also been proposed for enhancing system performance. In the context of improved performance, this paper presents a new model for intrusion detection. This improved model, named the REP (Reduced Error Pruning) based Intrusion Detection Model, results in higher accuracy along with an increased number of correctly classified instances.

  6. Recent developments of the projected shell model based on many-body techniques

    Directory of Open Access Journals (Sweden)

    Sun Yang

    2015-01-01

    Full Text Available Recent developments of the projected shell model (PSM) are summarized. Firstly, by using the Pfaffian algorithm, the multi-quasiparticle configuration space is expanded to include 6-quasiparticle states. The yrast band of 166Hf at very high spins is studied as an example, where the observed third back-bending in the moment of inertia is well reproduced and explained. Secondly, an angular-momentum projected generator coordinate method is developed based on PSM. The evolution of the low-lying states, including the second 0+ state, of the soft Gd, Dy, and Er isotopes to the well-deformed ones is calculated, and compared with experimental data.

  7. Model building techniques for analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Walther, Howard P.; McDaniel, Karen Lynn; Keener, Donald; Cordova, Theresa Elena; Henry, Ronald C.; Brooks, Sean; Martin, Wilbur D.

    2009-09-01

    The practice of mechanical engineering for product development has evolved into a complex activity that requires a team of specialists for success. Sandia National Laboratories (SNL) has product engineers, mechanical designers, design engineers, manufacturing engineers, mechanical analysts and experimentalists, qualification engineers, and others that contribute through product realization teams to develop new mechanical hardware. The goal of SNL's Design Group is to change product development by enabling design teams to collaborate within a virtual model-based environment whereby analysis is used to guide design decisions. Computer-aided design (CAD) models using PTC's Pro/ENGINEER software tools are heavily relied upon in the product definition stage of parts and assemblies at SNL. The three-dimensional CAD solid model acts as the design solid model that is filled with all of the detailed design definition needed to manufacture the parts. Analysis is an important part of the product development process. The CAD design solid model (DSM) is the foundation for the creation of the analysis solid model (ASM). Creating an ASM from the DSM currently is a time-consuming effort; the turnaround time for results of a design needs to be decreased to have an impact on the overall product development. This effort can be decreased immensely through simple Pro/ENGINEER modeling techniques that, in essence, concern the way features are created in a part model. This document contains recommended modeling techniques that increase the efficiency of the creation of the ASM from the DSM.

  8. Temperature based daily incoming solar radiation modeling based on gene expression programming, neuro-fuzzy and neural network computing techniques.

    Science.gov (United States)

    Landeras, G.; López, J. J.; Kisi, O.; Shiri, J.

    2012-04-01

    The correct observation/estimation of surface incoming solar radiation (RS) is very important for many agricultural, meteorological and hydrological applications. While most weather stations are provided with sensors for air temperature detection, the presence of sensors necessary for the detection of solar radiation is not so habitual and the data quality provided by them is sometimes poor. In these cases it is necessary to estimate this variable. Temperature-based modeling procedures are reported in this study for estimating daily incoming solar radiation by using Gene Expression Programming (GEP) for the first time, and other artificial intelligence models such as Artificial Neural Networks (ANNs), and Adaptive Neuro-Fuzzy Inference System (ANFIS). Traditional temperature-based solar radiation equations were also included in this study and compared with artificial intelligence based approaches. Root mean square error (RMSE), mean absolute error (MAE), RMSE-based skill score (SSRMSE), MAE-based skill score (SSMAE) and the r2 criterion of Nash and Sutcliffe were used to assess the models' performances. An ANN (a four-input multilayer perceptron with ten neurons in the hidden layer) presented the best performance among the studied models (2.93 MJ m-2 d-1 of RMSE). A four-input ANFIS model emerged as an interesting alternative to ANNs (3.14 MJ m-2 d-1 of RMSE). A very limited number of studies have been done on the estimation of solar radiation based on ANFIS, and the present one demonstrated the ability of ANFIS to model solar radiation based on temperatures and extraterrestrial radiation. In addition, this study demonstrated, for the first time, the ability of GEP models to model solar radiation based on daily atmospheric variables. Although the accuracy of the GEP models was slightly lower than that of the ANFIS and ANN models, the genetic programming models (i.e., GEP) are superior to other artificial intelligence models in giving a simple explicit equation for the
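
    For readers wanting a concrete baseline, the sketch below evaluates one traditional temperature-based estimator (the Hargreaves-Samani form Rs = krs * sqrt(Tmax - Tmin) * Ra) and the RMSE criterion on a synthetic week of data. The temperatures, extraterrestrial radiation values and krs coefficient are assumptions; they are not data from the study.

    ```python
    # Hedged sketch of one traditional temperature-based estimator of daily solar
    # radiation (Hargreaves-Samani: Rs = krs * sqrt(Tmax - Tmin) * Ra) together with
    # the RMSE criterion used above. All input values and krs are assumptions.
    import numpy as np

    def hargreaves_rs(tmax, tmin, ra, krs=0.16):
        """Daily incoming solar radiation estimate, MJ m-2 d-1."""
        return krs * np.sqrt(np.maximum(tmax - tmin, 0.0)) * ra

    def rmse(pred, obs):
        return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

    # Synthetic example week of data.
    tmax = np.array([24.0, 26.5, 23.0, 28.0, 27.5, 25.0, 22.0])
    tmin = np.array([12.0, 13.5, 11.0, 15.0, 14.0, 13.0, 10.5])
    ra = np.array([38.0, 38.2, 38.4, 38.6, 38.8, 39.0, 39.2])   # extraterrestrial radiation
    rs_obs = np.array([20.5, 22.0, 19.0, 23.5, 23.0, 21.0, 18.5])
    rs_est = hargreaves_rs(tmax, tmin, ra)
    print("RMSE:", rmse(rs_est, rs_obs), "MJ m-2 d-1")
    ```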

  9. Progress in bionic information processing techniques for an electronic nose based on olfactory models

    Institute of Scientific and Technical Information of China (English)

    LI Guang; FU Jun; ZHANG Jia; ZHENG JunBao

    2009-01-01

    As a novel bionic analytical technique, an electronic nose, inspired by the mechanism of the biological olfactory system and integrated with modern sensing technology, electronic technology and pattern recognition technology, has been widely used in many areas. Moreover, recent basic research findings in biological olfaction combined with computational neuroscience promote its development both in methodology and application. In this review, the basic information processing principles of biological olfaction and artificial olfaction are summarized and compared, and four olfactory models and their applications to electronic noses are presented. Finally, a chaotic olfactory neural network is detailed and the utilization of several biologically oriented learning rules and its spatiotemporal dynamic properties for electronic noses are discussed. The integration of various phenomena and their mechanisms from biological olfaction into an electronic nose context for information processing will not only make electronic noses more bionic, but also enable them to perform better than conventional methods. However, many problems still remain, which should be solved by further cooperation between theorists and engineers.

  10. Inverse reconstruction technique based on time-dependent Petschek-type reconnection model: first application to THEMIS magnetotail observations

    Directory of Open Access Journals (Sweden)

    V. Ivanova

    2009-12-01

    Full Text Available We apply the inverse reconstruction technique based on the two-dimensional time-dependent Petschek-type reconnection model to a dual bipolar magnetic structure observed by THEMIS B probe in the Earth's magnetotail during a substorm on 22 February 2008 around 04:35 UT. The technique exploits the recorded bipolar magnetic field variation as an input and provides the reconnection electric field and the location of the X-line as an output. As a result of the technique application, we get (1) the electric field, reaching ~1.1 mV/m at the maximum and consisting of two successive pulses with total duration of ~6 min, and (2) the approximate X-line position located in the magnetotail between 18 and 20 RE.

  11. Model assisted probability of detection for a guided waves based SHM technique

    Science.gov (United States)

    Memmolo, V.; Ricci, F.; Maio, L.; Boffa, N. D.; Monaco, E.

    2016-04-01

    Guided wave (GW) Structural Health Monitoring (SHM) allows the health of aerostructures to be assessed thanks to its great sensitivity to the appearance of delaminations and/or debondings. Due to the several complexities affecting wave propagation in composites, an efficient GW SHM system requires effective quantification associated with a rigorous statistical evaluation procedure. The Probability of Detection (POD) approach is a commonly accepted measurement method to quantify NDI results and it can be effectively extended to an SHM context. However, it requires a very complex setup arrangement and many coupons. When a rigorous correlation with measurements is adopted, Model Assisted POD (MAPOD) is an efficient alternative to classic methods. This paper is concerned with the identification of small emerging delaminations in composite structural components. An ultrasonic GW tomography approach focused on impact damage detection in composite plate-like structures, recently developed by the authors, is investigated, laying the basis for a more complex MAPOD analysis. Experimental tests carried out on a typical composite wing structure demonstrated the effectiveness of the modeling approach in detecting damage with the tomographic algorithm. Environmental disturbances, which affect signal waveforms and consequently damage detection, are considered by simulating mathematical noise in the modeling stage. A statistical method is used for an effective decision-making procedure. A Damage Index approach is implemented as the metric to interpret the signals collected from a distributed sensor network, and a subsequent graphic interpolation is carried out to reconstruct the damage appearance. Model validation and first reliability assessment results are provided, in view of system performance quantification and optimization.

  12. Modeling and Analyzing Wavelet based Watermarking System using Game Theoretic Optimization Technique

    Directory of Open Access Journals (Sweden)

    Shweta

    2011-03-01

    Full Text Available This paper deals with establishing an economically viable and robust multilevel game-theoretic watermarking security system for the digital community, based on CPU time utilization with respect to channel capacity and system complexity. The watermark coefficients are embedded into the host image at a selected transformation level and are in turn extracted by inverse transformation at the decoder to develop the game matrix, which is iteratively searched for the optimized stable state of the system using a permissible threshold. The rational thinking of maximizing the payoff of the watermarker (encoder) with respect to the attacker (noise) can be merged with the model, deriving winning strategies to gain optimality. The data are analyzed and tested to describe the behavior of the method for varying system parameter values and to gain performance optimization on different gray images with added Gaussian, salt-and-pepper and JPEG compression noise. Signal to noise ratio and correlation coefficients are used as criteria for testing the method.
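
    As a minimal illustration of wavelet-domain embedding (without the game-theoretic optimization of the payoff matrix), the Python sketch below additively embeds a bipolar watermark into the diagonal detail band of a one-level DWT and checks it with a correlation detector. The host image, watermark, wavelet and embedding strength are assumptions.

    ```python
    # Illustrative sketch of embedding a watermark into selected wavelet coefficients,
    # in the spirit of the wavelet-domain scheme above (the game-theoretic optimization
    # itself is not reproduced). Host image, watermark, wavelet and strength are assumptions.
    import numpy as np
    import pywt

    def embed_watermark(host: np.ndarray, bits: np.ndarray, alpha: float = 8.0):
        """Additively embed a bipolar watermark into the diagonal detail band."""
        cA, (cH, cV, cD) = pywt.dwt2(host.astype(float), "haar")
        wm = np.resize(np.where(bits > 0, 1.0, -1.0), cD.shape)
        return pywt.idwt2((cA, (cH, cV, cD + alpha * wm)), "haar"), wm

    def detect_watermark(marked: np.ndarray, host: np.ndarray, wm: np.ndarray) -> float:
        """Correlation between the embedded pattern and the extracted difference band."""
        _, (_, _, cD_marked) = pywt.dwt2(marked.astype(float), "haar")
        _, (_, _, cD_host) = pywt.dwt2(host.astype(float), "haar")
        diff = cD_marked - cD_host
        return float(np.corrcoef(diff.ravel(), wm.ravel())[0, 1])

    rng = np.random.default_rng(2)
    host = rng.integers(0, 256, (128, 128)).astype(float)
    bits = rng.integers(0, 2, 64 * 64)
    marked, wm = embed_watermark(host, bits)
    print("detector correlation:", detect_watermark(marked, host, wm))
    ```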

  13. A New 3D Model-Based Tracking Technique for Robust Camera Pose Estimation

    Directory of Open Access Journals (Sweden)

    Fakhreddine Ababsa

    2012-04-01

    Full Text Available In this paper we present a new robust camera pose estimation approach based on 3D line features. The proposed method is well adapted for mobile augmented reality applications. We used an Extended Kalman Filter (EKF) to incrementally update the camera pose in real-time. The principal contributions of our method include, first, the expansion of the RANSAC scheme in order to achieve a robust matching algorithm that associates 2D edges from the image with the 3D line segments from the input model and, second, a new powerful framework for camera pose estimation using only 2D-3D straight lines within an EKF. Experimental results on real image sequences are presented to evaluate the performance and feasibility of the proposed approach in indoor and outdoor environments.

  14. 2D Flood Modelling Using Advanced Terrain Analysis Techniques And A Fully Continuous DEM-Based Rainfall-Runoff Algorithm

    Science.gov (United States)

    Nardi, F.; Grimaldi, S.; Petroselli, A.

    2012-12-01

    Remotely sensed Digital Elevation Models (DEMs), largely available at high resolution, and advanced terrain analysis techniques built into Geographic Information Systems (GIS), provide unique opportunities for DEM-based hydrologic and hydraulic modelling in data-scarce river basins, paving the way for flood mapping at the global scale. This research is based on the implementation of a fully continuous hydrologic-hydraulic modelling approach optimized for ungauged basins with limited river flow measurements. The proposed procedure is characterized by a rainfall generator that feeds a continuous rainfall-runoff model producing flow time series that are routed along the channel using a bidimensional hydraulic model for the detailed representation of the inundation process. The main advantage of the proposed approach is the characterization of the entire physical process during hydrologic extreme events, from channel runoff generation and propagation to overland flow within the floodplain domain. This physically-based model removes the need for synthetic design hyetograph and hydrograph estimation, which constitutes the main source of subjective analysis and uncertainty in standard methods for flood mapping. Selected case studies show results and performances of the proposed procedure with respect to standard event-based approaches.

  15. NEO fireball diversity: energetics-based entry modeling and analysis techniques

    Science.gov (United States)

    Revelle, Douglas O.

    2007-05-01

    Observations of fireballs reveal that a number of very different types of materials are routinely entering the atmosphere over a very large height and corresponding mass and energy range. There are five well-known fireball groups. The compositions of these groups can be reliably deduced on a statistical basis based entirely on their observed end-heights in the atmosphere (Ceplecha and McCrosky, 1970, Wetherill and ReVelle, 1981). ReVelle (1983, 2001, 2002, 2005) has also reinterpreted these observations in terms of the properties of porous meteoroids, using the degree to which the observational data can be reproduced using a modern hypersonic aerodynamic entry dynamics approach for porous as well as homogeneous bodies. These data and modeled parameters include the standard properties of drag, deceleration, ablation and fragmentation as well as most recently a model of the panchromatic luminous emission from the fireball during progressive atmospheric penetration. Using a recently developed bolide entry modeling code, ReVelle (2005) has systematically examined the behavior of meteoroids using their semi-well known physical properties. In order to illustrate this, we have investigated a sampling of four of the possible extremes within the NEO bolide population: 1) Type I: Antarctic bolide of 2003: A "small" Aten asteroid, 2) Type I: Park Forest meteorite fall: March 27, 2003, 3) Type I: Mediterranean bolide June 6, 2002, 4) Type II: Revelstoke meteorite fall: March 31, 1965 (with no luminosity data available), and 5) Type II/III: Tagish Lake meteorite fall: January 18, 2000 (with infrasonic data questionable?) In addition to the entry properties, each of these events (except possibly Tagish Lake) also had mechanical, acoustic-gravity waves generated that were subsequently detected following their entry into the atmosphere. Since these waves can also be used to identify key physical properties of these unusual objects, we will also report on our ability to model such

  16. Research on giant magnetostrictive actuator online nonlinear modeling based on data driven principle with grating sensing technique

    Science.gov (United States)

    Han, Ping

    2017-01-01

    A novel Giant Magnetostrictive Actuator (GMA) experimental system with Fiber Bragg Grating (FBG) sensing technique and its modeling method based on data driven principle are proposed. The FBG sensors are adopted to gather the multi-physics fields' status data of GMA considering the strong nonlinearity of the Giant Magnetostrictive Material and GMA micro-actuated structure. The feedback features are obtained from the raw dynamic status data, which are preprocessed by data fill and abnormal value detection algorithms. Correspondingly the Least Squares Support Vector Machine method is utilized to realize GMA online nonlinear modeling with data driven principle. The model performance and its relative algorithms are experimentally evaluated. The model can regularly run in the frequency range from 10 to 1000 Hz and temperature range from 20 to 100 °C with the minimum prediction error stable in the range from -1.2% to 1.1%.
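
    A minimal sketch of the modelling core, Least Squares Support Vector Machine regression with an RBF kernel solved in closed form, is given below. The synthetic input-output data, kernel width and regularization constant are assumptions standing in for the measured GMA status data.

    ```python
    # Hedged sketch of Least Squares Support Vector Machine (LS-SVM) regression with
    # an RBF kernel, solved in closed form. The toy data, kernel width and
    # regularization constant are illustrative assumptions.
    import numpy as np

    def rbf_kernel(X1, X2, sigma=0.5):
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def lssvm_fit(X, y, gamma=100.0, sigma=0.5):
        n = len(y)
        K = rbf_kernel(X, X, sigma)
        # LS-SVM dual system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0; A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / gamma
        sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
        return sol[0], sol[1:]                      # bias b, dual weights alpha

    def lssvm_predict(Xq, X, b, alpha, sigma=0.5):
        return rbf_kernel(Xq, X, sigma) @ alpha + b

    # Toy nonlinear input-output data standing in for the measured GMA status data.
    rng = np.random.default_rng(3)
    X = rng.uniform(-1, 1, (60, 1))
    y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(60)
    b, alpha = lssvm_fit(X, y)
    Xq = np.linspace(-1, 1, 5).reshape(-1, 1)
    print(np.round(lssvm_predict(Xq, X, b, alpha), 3))
    ```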

  17. Kurtosis based weighted sparse model with convex optimization technique for bearing fault diagnosis

    Science.gov (United States)

    Zhang, Han; Chen, Xuefeng; Du, Zhaohui; Yan, Ruqiang

    2016-12-01

    Bearing failure, generating harmful vibrations, is one of the most frequent reasons for machine breakdowns. Thus, performing bearing fault diagnosis is an essential procedure to improve the reliability of the mechanical system and reduce its operating expenses. Most of the previous studies focused on rolling bearing fault diagnosis can be categorized into two main families, the kurtosis-based filter method and the wavelet-based shrinkage method. Although tremendous progress has been made, their effectiveness suffers from three potential drawbacks: firstly, fault information is often decomposed into proximal frequency bands, resulting in the impulsive feature frequency band splitting (IFFBS) phenomenon, which significantly degrades the performance of capturing the optimal information band; secondly, noise energy spreads throughout all frequency bins and contaminates fault information in the information band, especially under heavy noise circumstances; thirdly, wavelet coefficients are shrunk equally to satisfy the sparsity constraints and most of the feature information energy is thus eliminated unreasonably. Therefore, exploiting two pieces of prior information (i.e., one is that the coefficient sequence of fault information in the wavelet basis is sparse, and the other is that the kurtosis of the envelope spectrum can accurately evaluate the information capacity of rolling bearing faults), a novel weighted sparse model and its corresponding framework for bearing fault diagnosis is proposed in this paper, coined KurWSD. KurWSD formulates the prior information into weighted sparse regularization terms and then obtains a nonsmooth convex optimization problem. The alternating direction method of multipliers (ADMM) is sequentially employed to solve this problem and the fault information is extracted through the estimated wavelet coefficients. Compared with state-of-the-art methods, KurWSD overcomes the three drawbacks and utilizes the advantages of both family

  18. DNA-COMPACT: DNA COMpression based on a pattern-aware contextual modeling technique.

    Directory of Open Access Journals (Sweden)

    Pinghao Li

    Full Text Available Genome data are becoming increasingly important for modern medicine. As the rate of increase in DNA sequencing outstrips the rate of increase in disk storage capacity, the storage and transfer of large genome data are becoming important concerns for biomedical researchers. We propose a two-pass lossless genome compression algorithm, which highlights the synthesis of complementary contextual models, to improve the compression performance. The proposed framework can handle genome compression with and without reference sequences, and demonstrated performance advantages over the best existing algorithms. The method for reference-free compression led to bit rates of 1.720 and 1.838 bits per base for bacteria and yeast, which were approximately 3.7% and 2.6% better than the state-of-the-art algorithms. Regarding performance with reference, we tested on the first Korean personal genome sequence data set, and our proposed method demonstrated a 189-fold compression rate, reducing the raw file size from 2986.8 MB to 15.8 MB at a decompression cost comparable to existing algorithms. DNAcompact is freely available at https://sourceforge.net/projects/dnacompact/ for research purposes.
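
    The contextual-modeling idea can be illustrated with a tiny order-k model that estimates how many bits per base an arithmetic coder driven by such a model would need. This is only a toy sketch under assumed inputs; it reproduces neither the pattern-aware two-pass design nor the reported bit rates.

    ```python
    # Illustrative sketch of an order-k contextual model for DNA, estimating the bits
    # per base an arithmetic coder driven by such a model would need. The sequence
    # and the context order are illustrative assumptions.
    import math
    from collections import defaultdict

    def context_model_bits_per_base(seq: str, order: int = 2) -> float:
        counts = defaultdict(lambda: defaultdict(int))
        total_bits, coded = 0.0, 0
        for i in range(order, len(seq)):
            ctx, sym = seq[i - order:i], seq[i]
            ctx_counts = counts[ctx]
            ctx_total = sum(ctx_counts.values())
            # Laplace-smoothed adaptive probability of the next base given its context.
            p = (ctx_counts[sym] + 1) / (ctx_total + 4)
            total_bits += -math.log2(p)
            coded += 1
            ctx_counts[sym] += 1
        return total_bits / coded

    seq = "ATGCGATATATATCGCGCGATATATGCGC" * 50
    print(f"{context_model_bits_per_base(seq, order=2):.3f} bits/base")
    ```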

  19. AN AIR POLLUTION PREDICTION TECHNIQUE FOR URBAN DISTRICTS BASED ON MESO-SCALE NUMERICAL MODEL

    Institute of Scientific and Technical Information of China (English)

    YAN Jing-hua; XU Jian-ping

    2005-01-01

    Taking Shenzhen city as an example, the statistical and physical relationships between the density of pollutants and various atmospheric parameters are analyzed in detail, and a space-partitioned city air pollution potential prediction scheme is established based on them. The scheme quantitatively considers more than ten factors at the surface and in the planetary boundary layer (PBL), especially the effects of the anisotropy of the geographical environment, and treats wind direction as an independent impact factor. While the scheme treats the prediction equation separately for different pollutants according to their differences in dilution properties, it also considers the possible differences in dilution properties in different districts of the city under the same atmospheric conditions, treating predictions separately for different districts. Finally, temporally and spatially high-resolution predictions of the atmospheric factors are made with a high-resolution numerical model, and the space-partitioned, time-varying city pollution potential predictions are then made. The scheme is objective and quantitative, and has clear physical meaning, so it is suitable for making high-resolution air pollution predictions.

  20. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

    ""Engaging, elegantly written."" - Applied Mathematical ModellingMathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models.The author begins with a discussion of the term ""model,"" followed by clearly presented examples of the different types of mode

  1. Fusion of 3D models derived from TLS and image-based techniques for CH enhanced documentation

    Science.gov (United States)

    Bastonero, P.; Donadio, E.; Chiabrando, F.; Spanò, A.

    2014-05-01

    Recognizing the various advantages offered by new 3D metric survey technologies in the Cultural Heritage documentation phase, this paper presents some tests of 3D model generation, using different methods, and of their possible fusion. With the aim of defining the potentialities and problems deriving from the integration or fusion of metric data acquired with different survey techniques, the chosen test case is an outstanding Cultural Heritage item, presenting both widespread and specific complexities connected to the conservation of historical buildings. The site is Staffarda Abbey, the most relevant evidence of medieval architecture in Piedmont. This application faced one of the most topical architectural issues, namely the opportunity to study and analyze an object as a whole from two acquisition sensor locations, the terrestrial and the aerial one. In particular, the work consists in evaluating the possibilities deriving from a simple union or from the fusion of different 3D cloud models of the abbey, achieved by multi-sensor techniques. The aerial survey is based on a photogrammetric RPAS (Remotely Piloted Aircraft System) flight, while the terrestrial acquisition has been fulfilled by a laser scanning survey. Both techniques allowed different point clouds to be extracted and processed and consequent 3D continuous models to be generated, which are characterized by different scales, that is to say different resolutions and different levels of detail and precision. Starting from these models, the proposed process, applied to a sample area of the building, aimed to test the generation of a unique 3D model through a fusion of different sensor point clouds. The descriptive potential and the metric and thematic gains achievable by the final model exceed those offered by the two detached models.

  2. Crash Risk Prediction Modeling Based on the Traffic Conflict Technique and a Microscopic Simulation for Freeway Interchange Merging Areas.

    Science.gov (United States)

    Li, Shen; Xiang, Qiaojun; Ma, Yongfeng; Gu, Xin; Li, Han

    2016-11-19

    This paper evaluates the traffic safety of freeway interchange merging areas based on the traffic conflict technique. The hourly composite risk index (HCRI) is defined. Using unmanned aerial vehicle (UAV) photography and video processing techniques, the conflict type and severity were identified, with time to collision (TTC) adopted as the traffic conflict evaluation index, and the TTC severity threshold was then determined. By quantifying conflict weights using the direct losses of freeway traffic accidents of different severities, the weights used in calculating the HCRI are obtained. The relevant parameters of the microscopic traffic simulator VISSIM are calibrated against travel times from the field data. Variables are arranged in orthogonal tables at different levels; on the basis of these tables, a trajectory file is simulated for every traffic condition and then submitted to the Surrogate Safety Assessment Model (SSAM) to identify the number of hourly traffic conflicts in the merging area, from which the HCRI is computed. Moreover, a multivariate linear regression model is presented and validated to study the relationship between the HCRI and the influencing variables. A comparison between the HCRI model and the unweighted hourly conflict ratio (HCR) shows that the goodness of fit of the HCRI model is clearly higher than that of the HCR. This can serve as a reference for the design and implementation of operational plans.
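
    The record describes the HCRI as a severity-weighted count of hourly conflicts that is then regressed on traffic variables; a minimal sketch of both steps is given below. The severity weights, variable names and data are assumptions for illustration, not values from the paper.

    ```python
    import numpy as np

    # Assumed severity weights derived from direct accident losses (illustrative only).
    SEVERITY_WEIGHTS = {"slight": 1.0, "moderate": 3.0, "severe": 8.0}

    def hcri(hourly_conflicts):
        """Hourly composite risk index: severity-weighted sum of conflicts in one hour."""
        return sum(SEVERITY_WEIGHTS[s] * n for s, n in hourly_conflicts.items())

    # Hypothetical design matrix: merging flow, mainline flow, speed difference.
    X = np.array([[600, 3200, 12.0],
                  [800, 3600, 15.0],
                  [450, 2800, 9.0],
                  [950, 4100, 18.0]], dtype=float)
    y = np.array([hcri({"slight": 20, "moderate": 4, "severe": 1}),
                  hcri({"slight": 35, "moderate": 7, "severe": 2}),
                  hcri({"slight": 12, "moderate": 2, "severe": 0}),
                  hcri({"slight": 48, "moderate": 11, "severe": 3})])

    # Multivariate linear regression via ordinary least squares.
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    print("intercept and coefficients:", coef)
    ```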

  3. Crash Risk Prediction Modeling Based on the Traffic Conflict Technique and a Microscopic Simulation for Freeway Interchange Merging Areas

    Directory of Open Access Journals (Sweden)

    Shen Li

    2016-11-01

    Full Text Available This paper evaluates the traffic safety of freeway interchange merging areas based on the traffic conflict technique. The hourly composite risk index (HCRI) is defined. Using unmanned aerial vehicle (UAV) photography and video processing techniques, the conflict type and severity were identified, with time to collision (TTC) adopted as the traffic conflict evaluation index, and the TTC severity threshold was then determined. By quantifying conflict weights using the direct losses of freeway traffic accidents of different severities, the weights used in calculating the HCRI are obtained. The relevant parameters of the microscopic traffic simulator VISSIM are calibrated against travel times from the field data. Variables are arranged in orthogonal tables at different levels; on the basis of these tables, a trajectory file is simulated for every traffic condition and then submitted to the Surrogate Safety Assessment Model (SSAM) to identify the number of hourly traffic conflicts in the merging area, from which the HCRI is computed. Moreover, a multivariate linear regression model is presented and validated to study the relationship between the HCRI and the influencing variables. A comparison between the HCRI model and the unweighted hourly conflict ratio (HCR) shows that the goodness of fit of the HCRI model is clearly higher than that of the HCR. This can serve as a reference for the design and implementation of operational plans.

  4. Experimental models of brain ischemia: a review of techniques, magnetic resonance imaging and investigational cell-based therapies

    Directory of Open Access Journals (Sweden)

    Alessandra eCanazza

    2014-02-01

    Full Text Available Stroke continues to be a significant cause of death and disability worldwide. Although major advances have been made in the past decades in prevention, treatment and rehabilitation, enormous challenges remain in the way of translating new therapeutic approaches from bench to bedside. Thrombolysis, while routinely used for ischemic stroke, is only a viable option within a narrow time window. Recently, progress in stem cell biology has opened up avenues to therapeutic strategies aimed at supporting and replacing neural cells in infarcted areas. Realistic experimental animal models are crucial to understand the mechanisms of neuronal survival following ischemic brain injury and to develop therapeutic interventions. Current studies on experimental stroke therapies evaluate the efficiency of neuroprotective agents and cell-based approaches using primarily rodent models of permanent or transient focal cerebral ischemia. In parallel, advancements in imaging techniques permit better mapping of the spatial-temporal evolution of the lesioned cortex and its functional responses. This review provides a condensed conceptual review of the state of the art of this field, from models and magnetic resonance imaging techniques through to stem cell therapies.

  5. Hybrid and Model-Based Iterative Reconstruction Techniques for Pediatric CT

    NARCIS (Netherlands)

    den Harder, Annemarie M.; Willemink, Martin J.; Budde, Ricardo P. J.; Schilham, Arnold M. R.; Leiner, Tim; de Jong, Pim A.

    2015-01-01

    OBJECTIVE. Radiation exposure from CT examinations should be reduced to a minimum in children. Iterative reconstruction (IR) is a method to reduce image noise that can be used to improve CT image quality, thereby allowing radiation dose reduction. This article reviews the use of hybrid and model-based iterative reconstruction techniques for pediatric CT.

  6. 3D Buildings Modelling Based on a Combination of Techniques and Methodologies

    NARCIS (Netherlands)

    Pop, G.; Bucksch, A.K.; Gorte, B.G.H.

    2007-01-01

    Three-dimensional architectural models are more and more important for a large number of applications, and specialists look for faster and more precise ways to generate them. This paper discusses methods to combine methodologies for handling data acquired from multiple sources: maps, terrestrial laser scanning and other techniques.

  7. A review of experimental and modeling techniques to determine properties of biopolymer-based nanocomposites

    Science.gov (United States)

    The nonbiodegradable and nonrenewable nature of plastic packaging has led to a renewed interest in packaging materials based on bio-nanocomposites (a biopolymer matrix reinforced with nanoparticles such as layered silicates). One of the reasons for the unique properties of bio-nanocomposites is the differ...

  8. Retention of features on a mapped Drosophila brain surface using a Bézier-tube-based surface model averaging technique.

    Science.gov (United States)

    Chen, Guan-Yu; Wu, Cheng-Chi; Shao, Hao-Chiang; Chang, Hsiu-Ming; Chiang, Ann-Shyn; Chen, Yung-Chang

    2012-12-01

    Model averaging is a widely used technique in biomedical applications. Two established model averaging methods, the iterative shape averaging (ISA) method and the virtual insect brain (VIB) method, have been applied to several organisms to generate average representations of their brain surfaces. However, without sufficient samples, some features of the average Drosophila brain surface obtained using the above methods may disappear or become distorted. To overcome this problem, we propose a Bézier-tube-based surface model averaging strategy. The proposed method first compensates for disparities in position, orientation, and dimension of the input surfaces, and then evaluates the average surface by performing shape-based interpolation. Structural features with larger individual disparities are simplified with half-ellipse-shaped Bézier tubes, and are unified according to these tubes to avoid distortion during the averaging process. Experimental results show that the average model yielded by our method can preserve fine features and avoid structural distortions even if only a limited number of input samples is used. Finally, we qualitatively compare our results with those obtained by the ISA and VIB methods by measuring the surface-to-surface distances between the input surfaces and the averaged ones. The comparisons show that the proposed method can generate a more representative average surface than both the ISA and VIB methods.
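
    As context for the disparity-compensation step described in the record (aligning position, orientation and scale before averaging), the sketch below aligns two surfaces with an ordinary Procrustes analysis and then averages corresponding vertices. It is a generic illustration under the assumption that the surfaces share a vertex correspondence, not the Bézier-tube method itself.

    ```python
    import numpy as np
    from scipy.spatial import procrustes

    # Hypothetical surfaces given as corresponding vertex arrays (N x 3).
    surface_a = np.random.rand(500, 3)
    surface_b = surface_a @ np.diag([1.1, 0.9, 1.0]) + 0.05 * np.random.rand(500, 3)

    # Procrustes analysis removes differences in position, orientation and scale.
    aligned_a, aligned_b, disparity = procrustes(surface_a, surface_b)

    # Naive average surface from the aligned, corresponding vertices.
    average_surface = 0.5 * (aligned_a + aligned_b)
    print("residual disparity after alignment:", disparity)
    ```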

  9. 3D printing of high-resolution PLA-based structures by hybrid electrohydrodynamic and fused deposition modeling techniques

    Science.gov (United States)

    Zhang, Bin; Seong, Baekhoon; Nguyen, VuDat; Byun, Doyoung

    2016-02-01

    Recently, the three-dimensional (3D) printing technique has received much attention for shape forming and manufacturing. The fused deposition modeling (FDM) printer is one of the various 3D printers available and has become widely used due to its simplicity, low cost, and easy operation. However, the FDM technique has a limitation in that its patterning resolution is limited to around 200 μm. In this paper, we first present a hybrid mechanism combining electrohydrodynamic jet printing with the FDM technique, which we name E-FDM, and then develop a novel high-resolution 3D printer based on the E-FDM process. To determine the optimal conditions for structuring, we also investigated the effects of several printing parameters, such as temperature, applied voltage, working height, printing speed, flow rate, and acceleration, on the patterning results. The method was capable of fabricating both high-resolution 2D and 3D structures with polylactic acid (PLA), which has been used to fabricate scaffold structures for tissue engineering with different hierarchical structure sizes. The fabrication speed was up to 40 mm/s and the pattern resolution could be improved to 10 μm.

  10. Review of Advances in Cobb Angle Calculation and Image-Based Modelling Techniques for Spinal Deformities

    Science.gov (United States)

    Giannoglou, V.; Stylianidis, E.

    2016-06-01

    Scoliosis is a 3D deformity of the human spinal column caused by the bending of the latter, leading to pain, aesthetic and respiratory problems. This internal deformation is reflected in the outer shape of the human back. The gold standard for diagnosis and monitoring of scoliosis is the Cobb angle, which refers to the internal curvature of the trunk. This work is the first part of a post-doctoral research project, presenting the most important research that has been done in the field of scoliosis concerning its digital visualisation, with the aim of providing more precise and robust identification and monitoring of scoliosis. The research is divided into four fields, namely X-ray processing, automatic Cobb angle(s) calculation, 3D modelling of the spine to provide a more accurate representation of the trunk, and the reduction of X-ray radiation exposure throughout the monitoring of scoliosis. Despite the fact that many researchers have been working in the field for at least the last decade, there is no reliable and universal tool to automatically calculate the Cobb angle(s) and successfully perform proper 3D modelling of the spinal column that would assist more accurate detection and monitoring of scoliosis.
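
    For reference, the Cobb angle is simply the angle between the superior endplate of the upper end vertebra and the inferior endplate of the lower end vertebra. A minimal computation from two endplate landmark pairs could look like the sketch below; the coordinates are hypothetical.

    ```python
    import numpy as np

    def cobb_angle(upper_endplate, lower_endplate):
        """Cobb angle in degrees from two endplate line segments.

        Each endplate is a pair of 2D landmark points (e.g. picked on a frontal
        radiograph); the angle between the two lines is returned.
        """
        u = np.asarray(upper_endplate[1]) - np.asarray(upper_endplate[0])
        v = np.asarray(lower_endplate[1]) - np.asarray(lower_endplate[0])
        cosang = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

    # Hypothetical landmark coordinates (pixels) on a frontal radiograph.
    print(cobb_angle([(100, 200), (160, 188)], [(104, 420), (166, 444)]))
    ```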

  11. REVIEW OF ADVANCES IN COBB ANGLE CALCULATION AND IMAGE-BASED MODELLING TECHNIQUES FOR SPINAL DEFORMITIES

    Directory of Open Access Journals (Sweden)

    V. Giannoglou

    2016-06-01

    Full Text Available Scoliosis is a 3D deformity of the human spinal column caused by the bending of the latter, leading to pain, aesthetic and respiratory problems. This internal deformation is reflected in the outer shape of the human back. The gold standard for diagnosis and monitoring of scoliosis is the Cobb angle, which refers to the internal curvature of the trunk. This work is the first part of a post-doctoral research project, presenting the most important research that has been done in the field of scoliosis concerning its digital visualisation, with the aim of providing more precise and robust identification and monitoring of scoliosis. The research is divided into four fields, namely X-ray processing, automatic Cobb angle(s) calculation, 3D modelling of the spine to provide a more accurate representation of the trunk, and the reduction of X-ray radiation exposure throughout the monitoring of scoliosis. Despite the fact that many researchers have been working in the field for at least the last decade, there is no reliable and universal tool to automatically calculate the Cobb angle(s) and successfully perform proper 3D modelling of the spinal column that would assist more accurate detection and monitoring of scoliosis.

  12. Localization technique research of a pipeline robot based on the magnetic-dipole model

    Institute of Scientific and Technical Information of China (English)

    Junyuan LI; Hongjun CHEN; Shengfeng LI; Xiaohua ZHANG

    2008-01-01

    The magnetic field distribution of an emission antenna is studied in this paper. When the slenderness ratio of the emission antenna is high, the antenna can be simplified as a magnetic dipole for practical applications. Numerical results for the magnetic dipole field show that the magnitude distribution has a hump shape along the direction perpendicular to the antenna axis. A localization method based on detecting this hump-shaped signal is presented. Experimental results show that the precision can reach ±5 cm. The method can be used to localize a pipeline robot working in a metal pipe.
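
    As a reminder of the underlying physics, the field magnitude of a magnetic dipole of moment m at distance r and polar angle θ is |B| = (μ0·m / 4π·r³)·sqrt(1 + 3·cos²θ). The sketch below evaluates it along a line at fixed depth, reproducing the hump-shaped profile the method exploits; the geometry and moment values are assumed for illustration.

    ```python
    import numpy as np

    MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

    def dipole_field_magnitude(moment, r, theta):
        """|B| of a magnetic dipole at distance r (m) and polar angle theta (rad)."""
        return MU0 * moment / (4.0 * np.pi * r**3) * np.sqrt(1.0 + 3.0 * np.cos(theta)**2)

    # Receiver moved along x at a fixed depth; the dipole axis lies along x.
    x = np.linspace(-2.0, 2.0, 401)          # m, along-axis offset
    depth = 0.5                              # m, assumed burial depth of the pipe
    r = np.hypot(x, depth)
    theta = np.arctan2(depth, x)             # angle measured from the dipole axis
    profile = dipole_field_magnitude(moment=1.0, r=r, theta=theta)
    print("peak located at x =", x[np.argmax(profile)], "m")   # hump centred over the dipole
    ```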

  13. GAUSSIAN MIXTURE MODEL BASED LEVEL SET TECHNIQUE FOR AUTOMATED SEGMENTATION OF CARDIAC MR IMAGES

    Directory of Open Access Journals (Sweden)

    G. Dharanibai,

    2011-04-01

    Full Text Available In this paper we propose a Gaussian mixture model (GMM) integrated level set method for automated segmentation of the left ventricle (LV), right ventricle (RV) and myocardium from short-axis views of cardiac magnetic resonance images. By fitting a GMM to the image histogram, the global pixel intensity characteristics of the blood pool, myocardium and background are estimated. The GMM provides an initial segmentation, and the segmentation solution is regularized using a level set. Parameters controlling the level set evolution are automatically estimated from the Bayesian classification of pixels. We propose a new speed function that combines edge and region information and stops the evolving level set at the myocardial boundary. Segmentation efficacy is analyzed qualitatively via visual inspection. Results show the improved performance of our proposed speed function over the conventional Bayesian-driven adaptive speed function in automatic segmentation of the myocardium.
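
    The first stage described here (fitting a GMM to the image intensities to separate blood pool, myocardium and background) can be sketched with scikit-learn's GaussianMixture. The three-class assumption and the synthetic image below are illustrative only, not the paper's data or implementation.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Synthetic stand-in for a short-axis cardiac MR slice (intensity image).
    rng = np.random.default_rng(0)
    image = np.concatenate([rng.normal(30, 5, 4000),    # background
                            rng.normal(90, 8, 3000),    # myocardium
                            rng.normal(160, 10, 3000)]) # blood pool
    image = image.reshape(100, 100)

    # Fit a 3-component GMM to the pixel intensities (the image histogram).
    gmm = GaussianMixture(n_components=3, random_state=0).fit(image.reshape(-1, 1))

    # Initial segmentation: most probable component per pixel; the posterior
    # probabilities could then drive a level-set speed function.
    labels = gmm.predict(image.reshape(-1, 1)).reshape(image.shape)
    print("estimated class means:", np.sort(gmm.means_.ravel()))
    ```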

  14. A new mathematical modelling based shape extraction technique for Forensic Odontology.

    Science.gov (United States)

    G, Jaffino; A, Banumathi; Gurunathan, Ulaganathan; B, Vijayakumari; J, Prabin Jose

    2017-02-28

    Forensic odontology is a specific means of identifying a person who is deceased, particularly in fatality incidents. An algorithm is proposed to identify a person by comparing postmortem (PM) and antemortem (AM) dental radiographs and photographs. This work aims to introduce a new mathematical algorithm for photographs in addition to radiographs. The isoperimetric graph partitioning method is used to extract the shape of dental images for forensic identification. Shape matching is done by comparing AM and PM dental images using both similarity and distance measures. Experimental results show that better matching is achieved with distance metrics than with similarity measures. The results of this algorithm show a high hit rate for distance-based performance measures, making it well suited for forensic odontologists identifying a person.

  15. A novel approach to model exposure of coastal-marine ecosystems to riverine flood plumes based on remote sensing techniques.

    Science.gov (United States)

    Álvarez-Romero, Jorge G; Devlin, Michelle; Teixeira da Silva, Eduardo; Petus, Caroline; Ban, Natalie C; Pressey, Robert L; Kool, Johnathan; Roberts, Jason J; Cerdeira-Estrada, Sergio; Wenger, Amelia S; Brodie, Jon

    2013-04-15

    Increased loads of land-based pollutants are a major threat to coastal-marine ecosystems. Identifying the affected marine areas and the scale of influence on ecosystems is critical to assess the impacts of degraded water quality and to inform planning for catchment management and marine conservation. Studies using remotely-sensed data have contributed to our understanding of the occurrence and influence of river plumes, and to our ability to assess exposure of marine ecosystems to land-based pollutants. However, refinement of plume modeling techniques is required to improve risk assessments. We developed a novel, complementary, approach to model exposure of coastal-marine ecosystems to land-based pollutants. We used supervised classification of MODIS-Aqua true-color satellite imagery to map the extent of plumes and to qualitatively assess the dispersal of pollutants in plumes. We used the Great Barrier Reef (GBR), the world's largest coral reef system, to test our approach. We combined frequency of plume occurrence with spatially distributed loads (based on a cost-distance function) to create maps of exposure to suspended sediment and dissolved inorganic nitrogen. We then compared annual exposure maps (2007-2011) to assess inter-annual variability in the exposure of coral reefs and seagrass beds to these pollutants. We found this method useful to map plumes and qualitatively assess exposure to land-based pollutants. We observed inter-annual variation in exposure of ecosystems to pollutants in the GBR, stressing the need to incorporate a temporal component into plume exposure/risk models. Our study contributes to our understanding of plume spatial-temporal dynamics of the GBR and offers a method that can also be applied to monitor exposure of coastal-marine ecosystems to plumes and explore their ecological influences. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. A Survey on Hidden Markov Model (HMM) Based Intention Prediction Techniques

    Directory of Open Access Journals (Sweden)

    Mrs. Manisha Bharati

    2016-01-01

    Full Text Available The extensive use of virtualization in implementing cloud infrastructure brings unrivaled security concerns for cloud tenants or customers and introduces an additional layer that itself must be completely configured and secured. Intruders can exploit the large amount of cloud resources for their attacks. This paper discusses two approaches. In the first, three features, namely early warnings about future ongoing attacks, autonomic prevention actions, and a risk measure, are integrated into our Autonomic Cloud Intrusion Detection Framework (ACIDF), as most current security technologies do not provide these essential security features for cloud systems. The early warnings are signaled through a new finite-state hidden Markov prediction model that captures the interaction between the attackers and cloud assets. The risk assessment model measures the potential impact of a threat on assets given its occurrence probability. The estimated risk of each security alert is updated dynamically as the alert is correlated to prior ones. This enables the adaptive risk metric to evaluate the cloud's overall security state. The prediction system raises early warnings about potential attacks to the autonomic component, the controller. Thus, the controller can take proactive corrective actions before the attacks pose a serious security risk to the system. In the second approach, Attack Sequence Detection (ASD), tasks from different users may be performed on the same machine, so one primary security concern is whether user data are secure in the cloud. On the other hand, a hacker may use cloud computing to launch a wider range of attacks, such as a port scan in a cloud with multiple virtual machines executing such malicious actions. In addition, a hacker may perform a sequence of attacks in order to compromise his target system in the cloud, for example, evading an easy-to-exploit machine in a
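
    As a generic illustration of the kind of hidden Markov prediction mentioned in the record (not the ACIDF model itself), the forward algorithm below propagates a belief over hidden attack stages from a sequence of observed alert types; all state names and matrices are assumed toy values.

    ```python
    import numpy as np

    # Hidden attack stages and observable alert categories (toy example).
    states = ["recon", "exploit", "exfiltrate"]
    A = np.array([[0.7, 0.25, 0.05],        # transition probabilities
                  [0.1, 0.6, 0.3],
                  [0.05, 0.15, 0.8]])
    B = np.array([[0.8, 0.15, 0.05],        # P(alert type | stage)
                  [0.2, 0.6, 0.2],
                  [0.1, 0.2, 0.7]])
    pi = np.array([0.9, 0.08, 0.02])        # initial stage distribution

    def forward(obs):
        """Forward algorithm: filtered distribution over hidden stages."""
        alpha = pi * B[:, obs[0]]
        alpha /= alpha.sum()
        for o in obs[1:]:
            alpha = (alpha @ A) * B[:, o]
            alpha /= alpha.sum()
        return alpha

    # Observed alert sequence (indices of alert categories): scan, scan, exploit-like.
    belief = forward([0, 0, 1])
    prediction = belief @ A                  # one-step-ahead stage prediction (early warning)
    print(dict(zip(states, prediction.round(3))))
    ```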

  17. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programming languages was attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state of the art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  18. Feasibility Study on Tension Estimation Technique for Hanger Cables Using the FE Model-Based System Identification Method

    Directory of Open Access Journals (Sweden)

    Kyu-Sik Park

    2015-01-01

    Full Text Available Hanger cables in suspension bridges are partly constrained by horizontal clamps. Existing tension estimation methods based on a single-cable model are therefore prone to larger errors as the cable gets shorter, because the cable becomes more sensitive to flexural rigidity. For this reason, inverse analysis and system identification methods based on finite element models have recently been suggested. In this paper, the applicability of system identification methods is investigated using the hanger cables of the Gwang-An bridge. The test results show that the inverse analysis and system identification methods based on finite element models are more reliable than the existing string theory and linear regression methods for calculating the tension, in terms of natural frequency errors. However, in model-based methods the tension estimation error varies with the accuracy of the finite element model. In particular, the boundary conditions affect the results more profoundly as the cable gets shorter, so it is important to identify the boundary conditions through experiments where possible. The FE-model-based tension estimation method using system identification can take various boundary conditions into account, and since it is not sensitive to the number of natural frequency inputs, it is highly applicable.
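
    For comparison, the classical string-theory estimate the record refers to follows from the taut-string relation f_n = (n / 2L)·sqrt(T/m), so T = 4·m·L²·(f_n/n)². The sketch below averages this estimate over several measured modes; the cable properties and frequencies are assumed values, and the formula deliberately ignores flexural rigidity and end restraints, which is why it degrades for short hangers.

    ```python
    import numpy as np

    def tension_from_frequencies(frequencies_hz, mode_numbers, length_m, mass_per_m):
        """Taut-string tension estimates T = 4 m L^2 (f_n / n)^2, averaged over modes."""
        f = np.asarray(frequencies_hz, dtype=float)
        n = np.asarray(mode_numbers, dtype=float)
        return np.mean(4.0 * mass_per_m * length_m**2 * (f / n)**2)

    # Hypothetical hanger cable: 12 m long, 47 kg/m, first three measured frequencies.
    print(tension_from_frequencies([9.1, 18.3, 27.8], [1, 2, 3], 12.0, 47.0), "N")
    ```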

  19. Research on the Forecast Model of Total Viable Count on Bacon Based on Hyperspectral Imaging Technique

    Directory of Open Access Journals (Sweden)

    Zhao Junhua

    2016-01-01

    Full Text Available An excessive total viable count (TVC) in bacon can cause serious damage to human health. In order to find a rapid and nondestructive detection method for TVC, hyperspectral imaging was applied to the quantitative analysis of TVC in bacon. After comprehensively comparing pretreatment methods such as multiplicative scatter correction and derivative methods, multiplicative scatter correction was finally used for pretreatment. An interval partial least squares (iPLS) model was then established for prediction and gave good results: the correlation coefficients of calibration and prediction were both 0.808, and the corresponding cross-validation root mean square errors were 0.115 and 0.198, respectively. Therefore, hyperspectral imaging combined with iPLS can be used for the rapid detection of TVC in bacon.

  1. Computed tomography landmark-based semi-automated mesh morphing and mapping techniques: generation of patient specific models of the human pelvis without segmentation.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Wright, David; Whyne, Cari Marisa

    2015-04-13

    Current methods for the development of pelvic finite element (FE) models generally are based upon specimen specific computed tomography (CT) data. This approach has traditionally required segmentation of CT data sets, which is time consuming and necessitates high levels of user intervention due to the complex pelvic anatomy. The purpose of this research was to develop and assess CT landmark-based semi-automated mesh morphing and mapping techniques to aid the generation and mechanical analysis of specimen-specific FE models of the pelvis without the need for segmentation. A specimen-specific pelvic FE model (source) was created using traditional segmentation methods and morphed onto a CT scan of a different (target) pelvis using a landmark-based method. The morphed model was then refined through mesh mapping by moving the nodes to the bone boundary. A second target model was created using traditional segmentation techniques. CT intensity-based material properties were assigned to the morphed/mapped model and to the traditionally segmented target models. Models were analyzed to evaluate their geometric concurrency and strain patterns. Strains generated in a double-leg stance configuration were compared to experimental strain gauge data generated from the same target cadaver pelvis. CT landmark-based morphing and mapping techniques were efficiently applied to create a geometrically multifaceted specimen-specific pelvic FE model, which was similar to the traditionally segmented target model and better replicated the experimental strain results (R² = 0.873). This study has shown that mesh morphing and mapping represents an efficient validated approach for pelvic FE model generation without the need for segmentation.
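
    The landmark-based morphing step can be illustrated with a radial-basis-function warp that maps source-model landmarks onto the corresponding target-CT landmarks and then deforms all mesh nodes. This is a generic sketch under assumed landmark data, not the authors' exact pipeline; the subsequent mapping of nodes to the bone boundary is not shown.

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Hypothetical corresponding anatomical landmarks (N x 3) on source and target pelves.
    src_landmarks = np.random.rand(12, 3) * 100.0
    tgt_landmarks = src_landmarks + np.random.normal(0.0, 3.0, src_landmarks.shape)

    # Thin-plate-spline-like warp: a displacement field interpolated from landmark pairs.
    warp = RBFInterpolator(src_landmarks, tgt_landmarks - src_landmarks,
                           kernel="thin_plate_spline")

    # Morph every node of the source FE mesh; mesh mapping to the bone boundary would follow.
    source_nodes = np.random.rand(5000, 3) * 100.0
    morphed_nodes = source_nodes + warp(source_nodes)
    print(morphed_nodes.shape)
    ```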

  2. A Biomechanical Modeling Guided CBCT Estimation Technique.

    Science.gov (United States)

    Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing

    2017-02-01

    Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks.

  3. Plankton Biomass Models Based on GIS and Remote Sensing Technique for Predicting Marine Megafauna Hotspots in the Solor Waters

    Science.gov (United States)

    Putra, MIH; Lewis, SA; Kurniasih, EM; Prabuning, D.; Faiqoh, E.

    2016-11-01

    Geographic information system and remote sensing techniques can be used to assist with distribution modelling, a useful tool that supports strategic design and management plans for MPAs. This study built a pilot model of plankton biomass and distribution in the waters off Solor and Lembata, and is the first study to identify marine megafauna foraging areas in the region. Forty-three zooplankton samples were collected every 4 km, matched to the overpass times and stations of Aqua MODIS. A generalized additive model (GAM) was used to model the response of zooplankton biomass to environmental properties. Thirty-one samples were used to build an inverse distance weighting (IDW) interpolation model (cell size 0.01°), and 12 samples were retained as a control to verify the models' accuracy. Furthermore, the Getis-Ord Gi statistic was used to identify significant hotspots and cold-spots of foraging activity. The GAM explained 88.1% of the variation in zooplankton biomass, with the percentage to full moon and phytoplankton biomass being strong predictors. The sampling design was essential in order to build highly accurate models; our models were 96% accurate for phytoplankton and 88% accurate for zooplankton. Foraging behaviour was significantly related to plankton biomass hotspots, where biomass was twice as high as in cold-spots. In addition, the extremely steep slopes of the Lamakera strait support strong upwelling and highly productive waters that affect the presence of marine megafauna. The study indicates that the Lamakera strait provides the planktonic requirements for marine megafauna foraging, helping to explain why this region supports such high diversity and abundance of marine megafauna.
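
    The IDW interpolation used for the biomass surface is straightforward to sketch: each grid cell takes a distance-weighted average of the sampled biomass values. The power parameter, sample coordinates and biomass values below are assumptions for illustration.

    ```python
    import numpy as np

    def idw(sample_xy, sample_values, grid_xy, power=2.0):
        """Inverse distance weighting: weighted average of samples at each grid point."""
        d = np.linalg.norm(grid_xy[:, None, :] - sample_xy[None, :, :], axis=2)
        d = np.maximum(d, 1e-12)                       # avoid division by zero at samples
        w = 1.0 / d**power
        return (w @ sample_values) / w.sum(axis=1)

    # Hypothetical zooplankton biomass samples (lon, lat) and a 0.01-degree grid.
    rng = np.random.default_rng(1)
    samples = rng.uniform([123.2, -8.6], [123.6, -8.2], size=(31, 2))
    biomass = rng.uniform(5.0, 60.0, size=31)          # assumed units, e.g. mg/m^3
    lon, lat = np.meshgrid(np.arange(123.2, 123.6, 0.01), np.arange(-8.6, -8.2, 0.01))
    grid = np.column_stack([lon.ravel(), lat.ravel()])
    surface = idw(samples, biomass, grid).reshape(lon.shape)
    print(surface.shape, round(surface.min(), 1), round(surface.max(), 1))
    ```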

  4. Polynomial fuzzy model-based control systems stability analysis and control synthesis using membership function dependent techniques

    CERN Document Server

    Lam, Hak-Keung

    2016-01-01

    This book presents recent research on the stability analysis of polynomial-fuzzy-model-based control systems in which the concepts of partially/imperfectly matched premises and membership-function-dependent analysis are considered. The membership-function-dependent analysis offers a new research direction for fuzzy-model-based control systems by taking the characteristics and information of the membership functions into account in the stability analysis. The book presents, at a research level, the most recent and advanced research results, promotes research on polynomial-fuzzy-model-based control systems, and provides theoretical support and a research direction for postgraduate students and fellow researchers. Each chapter provides numerical examples to verify the analysis results, demonstrate the effectiveness of the proposed polynomial fuzzy control schemes, and explain the design procedure. The book is comprehensively written, enclosing detailed derivation steps and mathematical derivations also for read...

  5. Modeling Techniques for IN/Internet Interworking

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper focuses on the authors' contributions to ITU-T to develop network modeling for the support of IN/Internet interworking. Following an introduction to benchmark interworking services, the paper describes the consensus enhanced DFP architecture, which was reached on the basis of the IETF reference model and the authors' proposal. The proposed information flows for the benchmark services are then presented, with new or updated flows identified. Finally, a brief description of implementation techniques is given.

  6. Wavelet Based Image Denoising Technique

    Directory of Open Access Journals (Sweden)

    Sachin D Ruikar

    2011-03-01

    Full Text Available This paper proposes different approaches to wavelet-based image denoising. The search for efficient image denoising methods is still a valid challenge at the crossing of functional analysis and statistics. In spite of the sophistication of recently proposed methods, most algorithms have not yet attained a desirable level of applicability. Wavelet algorithms are useful tools for signal processing tasks such as image compression and denoising, and multiwavelets can be considered an extension of scalar wavelets. The main idea is to modify the wavelet coefficients in the new basis so that the noise can be removed from the data. In this paper, we extend the existing technique and provide a comprehensive evaluation of the proposed method. Results for different noise types, such as Gaussian, Poisson, salt-and-pepper, and speckle noise, are presented, with the signal-to-noise ratio preferred as the measure of denoising quality.
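
    A minimal wavelet-thresholding denoiser in the spirit described (decompose, soft-threshold the detail coefficients, reconstruct) can be written with PyWavelets. The wavelet, level and universal threshold below are common defaults, not the paper's specific choices, and the test image is synthetic.

    ```python
    import numpy as np
    import pywt

    def wavelet_denoise(image, wavelet="db2", level=2):
        """Soft-threshold the detail coefficients and reconstruct the image."""
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        # Noise sigma from the finest diagonal band (robust median estimator).
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745
        threshold = sigma * np.sqrt(2.0 * np.log(image.size))   # universal threshold
        denoised = [coeffs[0]] + [
            tuple(pywt.threshold(band, threshold, mode="soft") for band in detail)
            for detail in coeffs[1:]
        ]
        return pywt.waverec2(denoised, wavelet)

    # Synthetic noisy image as a stand-in for a test photograph.
    rng = np.random.default_rng(0)
    clean = np.outer(np.hanning(128), np.hanning(128))
    noisy = clean + rng.normal(0.0, 0.05, clean.shape)
    print(np.abs(wavelet_denoise(noisy) - clean).mean() < np.abs(noisy - clean).mean())
    ```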

  7. Probabilistic hydrological nowcasting using radar based nowcasting techniques and distributed hydrological models: application in the Mediterranean area

    Science.gov (United States)

    Poletti, Maria Laura; Pignone, Flavio; Rebora, Nicola; Silvestro, Francesco

    2017-04-01

    The exposure of urban areas to flash floods is particularly significant for Mediterranean coastal cities, which are generally densely inhabited. Severe rainfall events, often associated with intense and organized thunderstorms, have produced flash floods and landslides during the last century, causing serious damage to urban areas and, in the worst events, loss of human life. The temporal scale of these events has been observed to be strictly linked to the size of the catchments involved: in the Mediterranean area a great number of catchments that pass through coastal cities have a small drainage area (less than 100 km²) and a corresponding hydrologic response timescale of the order of a few hours. A suitable nowcasting chain is essential for the timely forecast of this kind of event, since meteorological forecast systems are unable to predict precipitation at the scale of these events, which is small both in space (a few km) and in time (hourly). Nowcasting models, covering the time interval of the following two hours starting from the observation, try to extend the predictability limits of the forecasting models in support of real-time flood alert system operations. This work presents the use of hydrological models coupled with nowcasting techniques. The nowcasting model PhaSt furnishes an ensemble of equi-probable future precipitation scenarios on time horizons of 1-3 h starting from the most recent radar observations. Coupling the nowcasting model PhaSt with the hydrological model Continuum allows floods to be forecast a few hours in advance. In this way it is possible to generate different discharge predictions for the following hours and associated return period maps; these maps can be used to support the decision process of the warning system.

  8. Assessing the performance of different model-based techniques to estimate water content in the upper soil layer

    Science.gov (United States)

    Negm, Amro; Capodici, Fulvio; Ciraolo, Giuseppe; Maltese, Antonino; Minacapilli, Mario; Provenzano, Giuseppe; Rallo, Giovanni

    2016-04-01

    Knowledge of the soil water content (SWC) of the upper soil layer is important for most hydrological processes occurring over vegetated areas and under dry climates. Because direct field measurements of SWC are difficult, the use of different types of sensors and model-based approaches has been proposed and extensively investigated during the last decade. The main objective of this work is to assess the performance of two models estimating the SWC of the upper soil layer: the transient line heat source method and the physically based Hydrus-1D model. The models' performance is assessed using field measurements acquired with a time domain reflectometer (TDR). The experiment was carried out in an olive orchard located near the town of Castelvetrano (South-West Sicily; latitude 37.6429°, longitude 12.8471°). The temporal dynamics of topsoil water content were investigated in two samplers, under wet and dry conditions. The samplers were open at the upper boundary and inserted into the soil to ensure the continuity of the soil surface. A KD2 Pro sensor was used to measure the soil thermal properties, allowing soil thermal inertia and then SWC to be estimated. The physically based Hydrus-1D model was also used to estimate the SWC of both samples. Hourly records of soil water content, acquired by a TDR100 probe, were used to validate both models. The comparison between the SWCs simulated by Hydrus-1D and the corresponding values measured by the TDR method showed good agreement. Similarly, the SWCs derived from the thermal diffusion model were fairly close to those measured with the TDR.

  9. Certain investigations on the reduction of side lobe level of an uniform linear antenna array using biogeography based optimization technique with sinusoidal migration model and simplified-BBO

    Indian Academy of Sciences (India)

    T S Jeyali Laseetha; R Sukanesh

    2014-02-01

    In this paper, we propose a biogeography-based optimization (BBO) technique, with linear and sinusoidal migration models, and a simplified biogeography-based optimization (S-BBO) for the synthesis of uniformly spaced linear antenna arrays to maximize the reduction of the side lobe level (SLL). The paper explores biogeography theory and generalizes two migration models in BBO, namely the linear migration model and the sinusoidal migration model. The performance of SLL reduction in a uniform linear array (ULA) is investigated. Our performance study shows that, of the two, the sinusoidal migration model is the more promising candidate for optimization. The simplified BBO algorithm is also deployed. It determines an optimal set of amplitude excitations for the antenna array elements that generates a radiation pattern with maximum side lobe level reduction. Our detailed investigation also shows that the sinusoidal migration model of BBO performs better than the other evolutionary algorithms discussed in this paper.
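
    For context, the quantity being optimized can be evaluated directly: the array factor of an N-element uniform linear array with excitation amplitudes a_n is AF(θ) = Σ a_n·exp(j·k·n·d·cosθ), and the side lobe level is the ratio of the largest side lobe to the main-lobe peak. The sketch below computes it for an assumed half-wavelength-spaced broadside array; it is only the objective evaluation, not the BBO optimizer itself.

    ```python
    import numpy as np

    def sidelobe_level_db(amplitudes, spacing_wavelengths=0.5, n_theta=4096):
        """Peak side lobe level (dB) of a broadside uniform linear array."""
        a = np.asarray(amplitudes, dtype=float)
        n = np.arange(len(a))
        theta = np.linspace(0.0, np.pi, n_theta)
        psi = 2.0 * np.pi * spacing_wavelengths * np.cos(theta)      # k * d * cos(theta)
        af = np.abs(np.exp(1j * np.outer(psi, n)) @ a)
        af_db = 20.0 * np.log10(af / af.max() + 1e-12)
        main = np.argmax(af)
        # Walk from the main lobe out to its first nulls, then take the largest side lobe.
        left = main
        while left > 0 and af[left - 1] < af[left]:
            left -= 1
        right = main
        while right < n_theta - 1 and af[right + 1] < af[right]:
            right += 1
        return max(np.max(af_db[:left], initial=-120.0),
                   np.max(af_db[right + 1:], initial=-120.0))

    print(sidelobe_level_db(np.ones(10)))          # uniform excitation: about -13 dB
    print(sidelobe_level_db(np.hamming(10)))       # tapered excitation: much lower SLL
    ```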

  10. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    Full Text Available We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and allows model selection, calculation of model posterior probabilities and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature of our algorithm is that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval for the GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used, and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account when assessing the uncertainty of the estimates.

  11. IMAGE ANALYSIS BASED ON EDGE DETECTION TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    纳瑟; 刘重庆

    2002-01-01

    A method that incorporates an edge detection technique, a Markov random field (MRF) model, watershed segmentation and merging techniques was presented for performing image segmentation and edge detection tasks. It first applies an edge detection technique to obtain a Difference In Strength (DIS) map. An initial segmentation result is obtained based on K-means clustering and the minimum distance. The region process is then modeled by an MRF to obtain an image that contains regions of different intensity. Gradient values are calculated and the watershed technique is applied. The DIS value is computed for each pixel to define all the edges (weak or strong) in the image, yielding the DIS map. This serves as prior knowledge about possible region boundaries for the next step (MRF), which gives an image containing all the edge and region information. In the MRF model, the gray level l at pixel location i in an image X depends on the gray levels of the neighboring pixels. The segmentation results are improved by using the watershed algorithm. After all pixels of the segmented regions are processed, a map of primitive regions with edges is generated. The edge map is obtained using a merge process based on averaged intensity mean values. Common edge detectors that work on the MRF-segmented image are used and the results are compared. The segmentation and edge detection result is one closed boundary per actual region in the image.

  12. Rapid Tooling Technique Based on Stereolithograph Prototype

    Institute of Scientific and Technical Information of China (English)

    丁浩; 狄平; 顾伟生; 朱世根

    2001-01-01

    The rapid tooling technique based on a stereolithography prototype is investigated, and the epoxy tooling process is elucidated. The epoxy resin formulation for easy casting, the curing process, and the release agents are analyzed in detail, and a transitional plaster model is also proposed. A mold for encapsulating mutual inductors with epoxy and a mold for injection moulding plastic soapboxes were made with this technique. The tooling requires very little time and cost, since the process only needs to achieve an accurate replica of the prototype, which is beneficial for trial and small-batch production.

  13. Language Based Techniques for Systems Biology

    DEFF Research Database (Denmark)

    Pilegaard, Henrik

    on the π calculus fragment of BioAmbients. In both cases the analyses compute very precise estimates of the temporal structure of the underlying pathways; hence they are applicable across a family of widely used bio-ware languages that descend from Milner’s Calculus of Communicating Systems. The presented...... calculi have similarly been used for the study of bio-chemical reactive systems. In this dissertation it is argued that techniques rooted in the theory and practice of programming languages, language based techniques if you will, constitute a strong basis for the investigation of models of biological...

  14. A cone-beam CT based technique to augment the 3D virtual skull model with a detailed dental surface.

    Science.gov (United States)

    Swennen, G R J; Mommaerts, M Y; Abeloos, J; De Clercq, C; Lamoral, P; Neyt, N; Casselman, J; Schutyser, F

    2009-01-01

    Cone-beam computed tomography (CBCT) is used for maxillofacial imaging. 3D virtual planning of orthognathic and facial orthomorphic surgery requires detailed visualisation of the interocclusal relationship. This study aimed to introduce and evaluate the use of a double CBCT scan procedure with a modified wax bite wafer to augment the 3D virtual skull model with a detailed dental surface. The impressions of the dental arches and the wax bite wafer were scanned separately for ten patients using a high-resolution standardized CBCT scanning protocol. Surface-based rigid registration using ICP (iterative closest point) was used to fit the virtual models onto the wax bite wafer. Automatic rigid point-based registration of the wax bite wafer on the patient scan was performed to implement the digital virtual dental arches into the patient's skull model. Probability error histograms showed the errors of the wax bite wafer registration used to set up a 3D virtual augmented model of the skull with a detailed dental surface.

  15. Technique for infrared and visible image fusion based on non-subsampled shearlet transform and spiking cortical model

    Science.gov (United States)

    Kong, Weiwei; Wang, Binghe; Lei, Yang

    2015-07-01

    Fusion of infrared and visible images is an active research area in image processing, and a variety of relevant algorithms have been developed. However, existing techniques commonly cannot achieve good fusion performance and acceptable computational complexity simultaneously. This paper proposes a novel image fusion approach that integrates the non-subsampled shearlet transform (NSST) with the spiking cortical model (SCM) to overcome the above drawbacks. On the one hand, using NSST to conduct the decomposition and reconstruction is not only consistent with human vision characteristics, but also effectively decreases the computational complexity compared with currently popular multi-resolution analysis tools such as the non-subsampled contourlet transform (NSCT). On the other hand, SCM, which has recently been considered an optimal neural network model, is responsible for the fusion of sub-images from different scales and directions. Experimental results indicate that the proposed method is promising, and that it significantly improves fusion quality in terms of both subjective visual performance and objective comparisons with other currently popular methods.

  16. Predicting ecological regime shift under climate change: New modelling techniques and potential of molecular-based approaches

    Directory of Open Access Journals (Sweden)

    Richard STAFFORD, V. Anne SMITH, Dirk HUSMEIER, Thomas GRIMA, Barbara-ann GUINN

    2013-06-01

    Full Text Available Ecological regime shift is the rapid transition from one stable community structure to another, often ecologically inferior, stable community. Such regime shifts are especially common in shallow marine communities, such as the transition of kelp forests to algal turfs that harbour far lower biodiversity. Stable regimes in communities are a result of balanced interactions between species, and predicting new regimes therefore requires an evaluation of new species interactions, as well as the resilience of the 'stable' position. While computational optimisation techniques can predict new potential regimes, predicting the most likely community state of the various options produced is currently educated guesswork. In this study we integrate a stable regime optimisation approach with a Bayesian network used to infer prior knowledge of the likely stress of climate change (or, in practice, any other disturbance) on each component species of a representative rocky shore community model. Combining the results, by calculating the product of the match between resilient computational predictions and the posterior probabilities of the Bayesian network, gives a refined set of model predictors, and demonstrates the use of the process in determining community changes, as might occur through processes such as climate change. To inform Bayesian priors, we conduct a review of molecular approaches applied to the analysis of the transcriptome of rocky shore organisms, and show how such an approach could be linked to measurable stress variables in the field. Hence species-specific microarrays could be designed as biomarkers of in situ stress, and used to inform predictive modelling approaches such as those described here [Current Zoology 59 (3): 403–417, 2013].

  17. Predicting ecological regime shift under climate change: New modelling techniques and potential of molecular-based approaches

    Institute of Scientific and Technical Information of China (English)

    Richard STAFFORD; V.Anne SMITH; Dirk HUSMEIER; Thomas GRIMA; Barbara-ann GUINN

    2013-01-01

    Ecological regime shift is the rapid transition from one stable community structure to another, often ecologically inferior, stable community. Such regime shifts are especially common in shallow marine communities, such as the transition of kelp forests to algal turfs that harbour far lower biodiversity. Stable regimes in communities are a result of balanced interactions between species, and predicting new regimes therefore requires an evaluation of new species interactions, as well as the resilience of the 'stable' position. While computational optimisation techniques can predict new potential regimes, predicting the most likely community state of the various options produced is currently educated guesswork. In this study we integrate a stable regime optimisation approach with a Bayesian network used to infer prior knowledge of the likely stress of climate change (or, in practice, any other disturbance) on each component species of a representative rocky shore community model. Combining the results, by calculating the product of the match between resilient computational predictions and the posterior probabilities of the Bayesian network, gives a refined set of model predictors, and demonstrates the use of the process in determining community changes, as might occur through processes such as climate change. To inform Bayesian priors, we conduct a review of molecular approaches applied to the analysis of the transcriptome of rocky shore organisms, and show how such an approach could be linked to measurable stress variables in the field. Hence species-specific microarrays could be designed as biomarkers of in situ stress, and used to inform predictive modelling approaches such as those described here.

  18. Advanced interaction techniques for medical models

    OpenAIRE

    Monclús, Eva

    2014-01-01

    Advances in medical visualization allow the analysis of anatomical structures using 3D models reconstructed from stacks of intensity-based images acquired through different techniques, computed tomography (CT) being one of the most common. A general medical volume graphics application usually includes an exploration task, which is sometimes preceded by an analysis process in which the anatomical structures of interest are first identified. ...

  19. Enhancements to commissioning techniques and quality assurance of brachytherapy treatment planning systems that use model-based dose calculation algorithms.

    Science.gov (United States)

    Rivard, Mark J; Beaulieu, Luc; Mourtada, Firas

    2010-06-01

    The current standard for brachytherapy dose calculations is based on the AAPM TG-43 formalism. Simplifications used in the TG-43 formalism have been challenged by many publications over the past decade. With the continuous increase in computing power, approaches based on fundamental physics processes or physics models such as the linear-Boltzmann transport equation are now applicable in a clinical setting. Thus, model-based dose calculation algorithms (MBDCAs) have been introduced to address TG-43 limitations for brachytherapy. The MBDCA approach results in a paradigm shift, which will require a concerted effort to integrate them properly into the radiation therapy community. MBDCA will improve treatment planning relative to the implementation of the traditional TG-43 formalism by accounting for individualized, patient-specific radiation scatter conditions, and the radiological effect of material heterogeneities differing from water. A snapshot of the current status of MBDCA and AAPM Task Group reports related to the subject of QA recommendations for brachytherapy treatment planning is presented. Some simplified Monte Carlo simulation results are also presented to delineate the effects MBDCA are called to account for and facilitate the discussion on suggestions for (i) new QA standards to augment current societal recommendations, (ii) consideration of dose specification such as dose to medium in medium, collisional kerma to medium in medium, or collisional kerma to water in medium, and (iii) infrastructure needed to uniformly introduce these new algorithms. Suggestions in this Vision 20/20 article may serve as a basis for developing future standards to be recommended by professional societies such as the AAPM, ESTRO, and ABS toward providing consistent clinical implementation throughout the brachytherapy community and rigorous quality management of MBDCA-based treatment planning systems.

  20. Simulation and Prediction of Soil Organic Carbon Dynamics in Jiangsu Province Based on Model and GIS Techniques

    Institute of Scientific and Technical Information of China (English)

    SHEN Yu; HUANG Yao; ZONG Liang-gang; ZHANG Wen; XU Mao; LIU Lin-wang

    2003-01-01

    Based on databases of soils, meteorology, crop production, and agricultural management, changes in the soil organic carbon (SOC) of agro-ecosystems in Jiangsu Province were simulated by using a soil organic carbon model linked to GIS. Four data sets of soil organic carbon measured in various field experiments in Jiangsu Province were used to validate the model, and the simulations in general agreed with the field measurements. The simulations indicated that the SOC content in approximately 77% of the agricultural soils in Jiangsu Province has increased since the Second National Soil Survey completed in the early 1980s. Compared with the values in 1985, the SOC content in 2000 was estimated to have increased by 1.0-3.0 g kg-1 for the north and the coastal areas of the province, and by 3.5-5.0 g kg-1 for the Tai Lake region in the south. A slight decrease (about 0.5-1.5 g kg-1) was estimated for the central region of Jiangsu Province and the Nanjing-Zhenjiang hilly area. Model predictions for 2010 A.D. under two scenarios, i.e., incorporation of 30% and 50% of the harvested crop straw, suggested that the SOC in Jiangsu Province would increase and that the agricultural soils have potential as an organic carbon store. The incorporation of crop straw into soils is of great benefit for increasing soil carbon storage, and consequently for helping to control the rise of atmospheric CO2 concentration and maintaining the sustainable development of agriculture.
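
    The record does not give the model equations; as a generic, hedged illustration of the kind of carbon bookkeeping such simulations perform, a one-pool annual balance dC/dt = h·I − k·C (humification of residue inputs minus first-order decomposition) can be integrated as below, with all coefficients assumed and no relation to the paper's calibrated model.

    ```python
    def simulate_soc(soc0_g_per_kg, years, residue_input, humification=0.3, decay=0.012):
        """One-pool soil organic carbon balance (illustrative, not the paper's model).

        soc0_g_per_kg : initial SOC content (g C per kg soil)
        residue_input : annual residue C input expressed in g C per kg soil per year
        """
        soc = soc0_g_per_kg
        trajectory = [soc]
        for _ in range(years):
            soc += humification * residue_input - decay * soc
            trajectory.append(soc)
        return trajectory

    # Hypothetical comparison: 30% vs 50% of harvested straw returned to the field.
    low_return = simulate_soc(12.0, 25, residue_input=0.6)
    high_return = simulate_soc(12.0, 25, residue_input=1.0)
    print(round(low_return[-1], 2), round(high_return[-1], 2))
    ```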

  1. Optimization-based human motion prediction using an inverse-inverse dynamics technique implemented in the AnyBody Modeling System

    DEFF Research Database (Denmark)

    Farahani, Saeed Davoudabadi; Andersen, Michael Skipper; de Zee, Mark

    2012-01-01

    derived from the detailed musculoskeletal analysis. The technique is demonstrated on a human model pedaling a bicycle. We use a physiology-based cost function expressing the mean square of all muscle activities over the cycle to predict a realistic motion pattern. Posture and motion prediction......This paper presents an optimization-based human movement prediction using the AnyBody Modeling System (AMS). It is explained how AMS can enable the prediction of a realistic human movement by means of a computationally efficient optimization-based algorithm. The human motion predicted in AMS is based......, the parameters of these functions are optimized to produce an optimum posture or movement according to a user-defined cost function and constraints. The cost function and the constraints typically express performance, comfort, injury risk, fatigue, muscle load, joint forces and other physiological properties...

  2. Customization of UWB 3D-RTLS Based on the New Uncertainty Model of the AoA Ranging Technique

    Directory of Open Access Journals (Sweden)

    Bartosz Jachimczyk

    2017-01-01

    Full Text Available The increased potential and effectiveness of Real-Time Locating Systems (RTLSs) substantially influence their application spectrum. They are widely used, inter alia, in the industrial sector, healthcare, home care, and in logistic and security applications. This research aims to develop an analytical method to customize UWB-based RTLSs in order to improve their localization performance in terms of accuracy and precision. The analytical uncertainty model of Angle of Arrival (AoA) localization in a 3D indoor space, which is the foundation of the customization concept, is established in a working environment. Additionally, a suitable angular-based 3D localization algorithm is introduced. The paper investigates the following issues: the influence of the proposed correction vector on the localization accuracy, and the impact of the system's configuration and the location sensors' relative deployment on the localization precision distribution map. The advantages of the method are verified by comparison with a reference commercial RTLS localization engine. The results of simulations and physical experiments prove the value of the proposed customization method. The research confirms that the analytical uncertainty model is a valid representation of the RTLS's localization uncertainty in terms of accuracy and precision and can be useful for its performance improvement. The research shows that Angle of Arrival localization in a 3D indoor space, applying the simple angular-based localization algorithm and the correction vector, improves localization accuracy and precision to the point that the system challenges the reference hardware's advanced localization engine. Moreover, the research guides the deployment of location sensors to enhance the localization precision.
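
    A common way to turn AoA measurements into a 3D position (offered only as a generic illustration, not the paper's algorithm or correction vector) is to treat each sensor's measured azimuth/elevation as a bearing line and find the point minimizing the squared distance to all lines, which reduces to a small linear system; the sensor layout and tag position below are hypothetical.

    ```python
    import numpy as np

    def aoa_position(sensor_positions, azimuths_rad, elevations_rad):
        """Least-squares 3D position from bearing lines defined by AoA measurements.

        Each sensor i at p_i measures a unit direction u_i; the point x minimizing
        sum_i ||(I - u_i u_i^T)(x - p_i)||^2 solves (sum_i M_i) x = sum_i M_i p_i,
        with M_i = I - u_i u_i^T.
        """
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for p, az, el in zip(sensor_positions, azimuths_rad, elevations_rad):
            u = np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])
            M = np.eye(3) - np.outer(u, u)
            A += M
            b += M @ np.asarray(p, dtype=float)
        return np.linalg.solve(A, b)

    # Hypothetical tag at (2, 3, 1) m observed by three wall-mounted sensors.
    sensors = [(0.0, 0.0, 2.5), (6.0, 0.0, 2.5), (0.0, 6.0, 2.5)]
    tag = np.array([2.0, 3.0, 1.0])
    az = [np.arctan2(tag[1] - p[1], tag[0] - p[0]) for p in sensors]
    el = [np.arctan2(tag[2] - p[2], np.hypot(tag[0] - p[0], tag[1] - p[1])) for p in sensors]
    print(aoa_position(sensors, az, el).round(3))     # recovers approximately (2, 3, 1)
    ```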

  3. 基于图像技术的真实场景造型与编辑%Modeling and Editing Real Scenes with Image-Based Techniques

    Institute of Scientific and Technical Information of China (English)

    俞益洲

    2000-01-01

    Image-based modeling and rendering techniques greatly advanced the level of photorealism in computer graphics. They were originally proposed to accelerate rendering with the ability to vary viewpoint only. My work in this area focused on capturing and modeling real scenes for novel visual interactions such as varying lighting condition and scene configuration in addition to viewpoint. This work can lead to applications such as virtual navigation of a real scene, interaction with the scene, novel scene composition, interior lighting design, and augmented reality.

  4. Research Techniques Made Simple: Skin Carcinogenesis Models: Xenotransplantation Techniques.

    Science.gov (United States)

    Mollo, Maria Rosaria; Antonini, Dario; Cirillo, Luisa; Missero, Caterina

    2016-02-01

    Xenotransplantation is a widely used technique to test the tumorigenic potential of human cells in vivo using immunodeficient mice. Here we describe basic technologies and recent advances in xenotransplantation applied to study squamous cell carcinomas (SCCs) of the skin. SCC cells isolated from tumors can either be cultured to generate a cell line or injected directly into mice. Several immunodeficient mouse models are available for selection based on the experimental design and the type of tumorigenicity assay. Subcutaneous injection is the most widely used technique for xenotransplantation because it involves a simple procedure allowing the use of a large number of cells, although it may not mimic the original tumor environment. SCC cell injections at the epidermal-to-dermal junction or grafting of organotypic cultures containing human stroma have also been used to more closely resemble the tumor environment. Mixing of SCC cells with cancer-associated fibroblasts can allow the study of their interaction and reciprocal influence, which can be followed in real time by intradermal ear injection using conventional fluorescent microscopy. In this article, we will review recent advances in xenotransplantation technologies applied to study behavior of SCC cells and their interaction with the tumor environment in vivo.

  5. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  6. Selected Logistics Models and Techniques.

    Science.gov (United States)

    1984-09-01

    ACCESS PROCEDURE: On-Line System (OLS), UNINET. RCA maintains proprietary control of this model, and the model is available only through a lease arrangement. • SPONSOR: ASD/ACCC

  7. The MSFC/J70 orbital atmosphere model and the data bases for the MSFC solar activity prediction technique

    Science.gov (United States)

    Johnson, D. L.; Smith, R. E.

    1985-01-01

    The MSFC/J70 Orbital Atmospheric Density Model, a modified version of the Smithsonian Astrophysical Observatory Jacchia 1970 model, is explained. The algorithms describing the MSFC/J70 model are included, as well as a listing of the computer program. The 13-month smoothed values of solar flux (F10.7) and geomagnetic index (Sp), which are required as inputs for the MSFC/J70 model, are also included and discussed.

  8. Nontraditional manufacturing technique-Nano machining technique based on SPM

    Institute of Scientific and Technical Information of China (English)

    DONG; Shen; YAN; Yongda; SUN; Tao; LIANG; Yingchun; CHENG

    2004-01-01

    Nano machining based on SPM is a novel, nontraditional advanced manufacturing technique. There are three main machining methods based on SPM, i.e., single-atom manipulation, surface modification using physical or chemical actions, and mechanical scratching. The current development of this technique is summarized. Based on an analysis of the mechanical scratching mechanism, a 5 μm micro inflation hole is fabricated on the surface of an inertial confinement fusion (ICF) target. The processing technique is optimized. The machining properties of a brittle material, single-crystal Ge, are investigated. A micro machining system combining SPM and a high-accuracy stage is developed. Some 2D and 3D microstructures are fabricated using the system. This method has broad applications in the field of nano machining.

  9. Neural-network-based prediction techniques for single station modeling and regional mapping of the foF2 and M(3000)F2 ionospheric characteristics

    Directory of Open Access Journals (Sweden)

    T. D. Xenos

    2002-01-01

    Full Text Available In this work, Neural-Network-based single-station hourly daily foF2 and M(3000)F2 modelling of 15 European ionospheric stations is investigated. The data used are hourly daily values from the period 1964-1988 for training the neural networks and from the period 1989-1994 for checking the prediction accuracy. Two types of models are presented for the F2-layer critical frequency prediction and two for the propagation factor M(3000)F2. The first foF2 model employs the E-layer local noon calculated daily critical frequency (foE12) and the local noon F2-layer critical frequency of the previous day. The second foF2 model, which introduces a new regional mapping technique, employs the Juliusruh neural network model and uses the E-layer local noon calculated daily critical frequency (foE12) and the previous day's F2-layer critical frequency measured at Juliusruh at noon. The first M(3000)F2 model employs the E-layer local noon calculated daily critical frequency (foE12), its ±3 h deviations and the local noon cosine of the solar zenith angle (cos χ12). The second model, which introduces a new M(3000)F2 mapping technique, employs the Juliusruh neural network model and uses the E-layer local noon calculated daily critical frequency (foE12) and the previous day's F2-layer critical frequency measured at Juliusruh at noon.

  10. Crude oil price forecasting based on hybridizing wavelet multiple linear regression model, particle swarm optimization techniques, and principal component analysis.

    Science.gov (United States)

    Shabri, Ani; Samsudin, Ruhaidah

    2014-01-01

    Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first used to decompose the original time series into several subseries with different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series.
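    A minimal sketch of the hybrid pipeline this record describes (wavelet decomposition of the price series, PCA on the resulting subseries features, and a multiple linear regression), assuming synthetic price data; the wavelet choice, number of lags, PCA dimensionality, and the plain least-squares fit standing in for the PSO-tuned MLR are illustrative assumptions, not details taken from the paper:

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(0, 1, 512)) + 80.0   # synthetic stand-in for a crude oil series

# 1) Discrete (Mallat) wavelet decomposition into approximation + detail subseries.
wavelet, level = "db4", 3
coeffs = pywt.wavedec(price, wavelet, level=level)
subseries = []
for i in range(len(coeffs)):
    kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
    subseries.append(pywt.waverec(kept, wavelet)[: len(price)])
subseries = np.column_stack(subseries)            # shape (T, level + 1)

# 2) Lagged subseries values as regression features for the next-day price.
lags = 3
X = np.vstack([subseries[t - lags:t].ravel() for t in range(lags, len(price))])
y = price[lags:]

# 3) PCA on the subseries features, then MLR (the paper tunes this step with PSO;
#    ordinary least squares is used here as a simple stand-in).
split = int(0.8 * len(X))
pca = PCA(n_components=5).fit(X[:split])
mlr = LinearRegression().fit(pca.transform(X[:split]), y[:split])
pred = mlr.predict(pca.transform(X[split:]))
rmse = np.sqrt(np.mean((pred - y[split:]) ** 2))
print(f"test RMSE on synthetic data: {rmse:.3f}")
```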

  11. Crude Oil Price Forecasting Based on Hybridizing Wavelet Multiple Linear Regression Model, Particle Swarm Optimization Techniques, and Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    Ani Shabri

    2014-01-01

    Full Text Available Crude oil prices play a significant role in the global economy and are a key input into option pricing formulas, portfolio allocation, and risk measurement. In this paper, a hybrid model integrating wavelets and multiple linear regression (MLR) is proposed for crude oil price forecasting. In this model, the Mallat wavelet transform is first used to decompose the original time series into several subseries with different scales. Then, principal component analysis (PCA) is used to process the subseries data in the MLR for crude oil price forecasting. Particle swarm optimization (PSO) is used to select the optimal parameters of the MLR model. To assess the effectiveness of this model, the daily West Texas Intermediate (WTI) crude oil market has been used as the case study. The time series prediction performance of the WMLR model is compared with the MLR, ARIMA, and GARCH models using various statistical measures. The experimental results show that the proposed model outperforms the individual models in forecasting the crude oil price series.

  12. Modeling Techniques: Theory and Practice

    OpenAIRE

    Odd A. Asbjørnsen

    1985-01-01

    A survey is given of some crucial concepts in chemical process modeling. Those are the concepts of physical unit invariance, of reaction invariance and stoichiometry, the chromatographic effect in heterogeneous systems, the conservation and balance principles and the fundamental structures of cause and effect relationships. As an example, it is shown how the concept of reaction invariance may simplify the homogeneous reactor modeling to a large extent by an orthogonal decomposition of the pro...

  13. Modeling Techniques: Theory and Practice

    Directory of Open Access Journals (Sweden)

    Odd A. Asbjørnsen

    1985-07-01

    Full Text Available A survey is given of some crucial concepts in chemical process modeling. Those are the concepts of physical unit invariance, of reaction invariance and stoichiometry, the chromatographic effect in heterogeneous systems, the conservation and balance principles and the fundamental structures of cause and effect relationships. As an example, it is shown how the concept of reaction invariance may simplify the homogeneous reactor modeling to a large extent by an orthogonal decomposition of the process variables. This allows residence time distribution function parameters to be estimated with the reaction in situ, but without any correlation between the estimated residence time distribution parameters and the estimated reaction kinetic parameters. A general word of warning is given to the choice of wrong mathematical structure of models.

  14. In Vivo Imaging-Based Mathematical Modeling Techniques That Enhance the Understanding of Oncogene Addiction in relation to Tumor Growth

    Directory of Open Access Journals (Sweden)

    Chinyere Nwabugwu

    2013-01-01

    Full Text Available The dependence on the overexpression of a single oncogene constitutes an exploitable weakness for molecular targeted therapy. These drugs can produce dramatic tumor regression by targeting the driving oncogene, but relapse often follows. Understanding the complex interactions of the tumor’s multifaceted response to oncogene inactivation is key to tumor regression. It has become clear that a collection of cellular responses lead to regression and that immune-mediated steps are vital to preventing relapse. Our integrative mathematical model includes a variety of cellular response mechanisms of tumors to oncogene inactivation. It allows for correct predictions of the time course of events following oncogene inactivation and their impact on tumor burden. A number of aspects of our mathematical model have proven to be necessary for recapitulating our experimental results. These include a number of heterogeneous tumor cell states since cells following different cellular programs have vastly different fates. Stochastic transitions between these states are necessary to capture the effect of escape from oncogene addiction (i.e., resistance. Finally, delay differential equations were used to accurately model the tumor growth kinetics that we have observed. We use this to model oncogene addiction in MYC-induced lymphoma, osteosarcoma, and hepatocellular carcinoma.

  15. Mechanistic model to predict colostrum intake based on deuterium oxide dilution technique data and impact of gestation and prefarrowing diets on piglet intake and sow yield of colostrum

    DEFF Research Database (Denmark)

    Theil, Peter Kappel; Flummer, Christine; Hurley, W L

    2014-01-01

    The aims of the present study were to quantify colostrum intake (CI) of piglets using the D2O dilution technique, to develop a mechanistic model to predict CI, to compare these data with CI predicted by a previous empirical predictive model developed for bottle-fed piglets, and to study how the composition of diets fed to gestating sows affected piglet CI, sow colostrum yield, and colostrum composition. ... The sows were fed 1 of 4 gestation diets (n = 10 per diet) based on different fiber sources (low fiber [17%] or potato pulp, pectin residue, or sugarbeet pulp [32 to 40%]) from mating until d 108 of gestation. From d 108 of gestation until parturition, sows were fed 1 of 5 prefarrowing diets (n = 8 per diet) varying in supplemented fat (3% animal fat, 8% coconut oil, 8% sunflower oil, 8% fish oil, or 4% fish oil + 4% octanoic acid). Sows fed diets with pectin residue or sugarbeet pulp during gestation produced colostrum with lower protein, fat, DM, and energy concentrations and higher lactose concentrations ...

  16. Bases en technique du vide

    CERN Document Server

    Rouviere, Nelly

    2017-01-01

    This second edition, 20 years after the first, should continue to help technicians build their vacuum systems. Vacuum technology is now used in many fields that differ widely from one another, and with very reliable equipment. Yet it is often given little study, and it is moreover a discipline in which know-how is essential. Unfortunately, its transmission by experienced engineers and technicians no longer takes place, or happens too quickly. Vacuum technology draws on physics, chemistry, mechanics, metallurgy, industrial drawing, electronics, heat transfer, and so on. The discipline therefore requires mastering techniques from very diverse fields, and that is no easy task. Each installation is in itself a particular case, with its own needs, its own way of treating materials and of using equipment. Vacuum systems are sometimes copied from one laboratory to another, and the...

  17. Symmetry and partial order reduction techniques in model checking Rebeca

    NARCIS (Netherlands)

    Jaghouri, M.M.; Sirjani, M.; Mousavi, M.R.; Movaghar, A.

    2007-01-01

    Rebeca is an actor-based language with formal semantics that can be used in modeling concurrent and distributed software and protocols. In this paper, we study the application of partial order and symmetry reduction techniques to model checking dynamic Rebeca models. Finding symmetry based equivalen

  18. Geometrical geodesy techniques in Goddard earth models

    Science.gov (United States)

    Lerch, F. J.

    1974-01-01

    The method for combining geometrical data with satellite dynamical and gravimetry data for the solution of geopotential and station location parameters is discussed. Geometrical tracking data (simultaneous events) from the global network of BC-4 stations are currently being processed in a solution that will greatly enhance the geodetic world system of stations. Previously, the stations in Goddard earth models have been derived only from dynamical tracking data. A linear regression model is formulated for combining the data, based upon the statistical technique of weighted least squares. Reduced normal equations, independent of satellite and instrumental parameters, are derived for the solution of the geodetic parameters. Exterior standards for the evaluation of the solution and for the scale of the earth's figure are discussed.

  19. Validation Techniques of network harmonic models based on switching of a series linear component and measuring resultant harmonic increments

    DEFF Research Database (Denmark)

    Wiechowski, Wojciech Tomasz; Lykkegaard, Jan; Bak, Claus Leth

    2007-01-01

    In this paper two methods of validation of transmission network harmonic models are introduced. The methods were developed as a result of the work presented in [1]. The first method allows calculating the transfer harmonic impedance between two nodes of a network. Switching a linear, series network...... are used for calculation of the transfer harmonic impedance between the nodes. The determined transfer harmonic impedance can be used to validate a computer model of the network. The second method is an extension of the first one. It allows switching a series element that contains a shunt branch......, as for example a transmission line. Both methods require that harmonic measurements performed at two ends of the disconnected element are precisely synchronized....

  20. Fundamental Behavior Analysis of Single-Frequency Sine Wave Forced Oscillator based on Linear Model and Multi-Time Technique

    Directory of Open Access Journals (Sweden)

    A. Kitipongwatana

    2014-06-01

    Full Text Available In this article, an excited oscillator analyzed using a multi-time linear analytical model is proposed. The obtained closed-form solution can be exploited to explain phenomena not only in the beat and locked states, which are mostly studied in the literature, but also in an additional state called the non-locked state. With the proposed analysis, it is found that the non-locked state of the oscillator behaves similarly to the up-conversion process. It provides a new point of view on oscillator phase noise. Moreover, our principle indicates that the important factor defining the behavior in each state and state transition is the transfer function of the system. The proposed mathematical model is verified by the experimental and numerical results.

  1. Efficient workflows for 3D building full-color model reconstruction using LIDAR long-range laser and image-based modeling techniques

    Science.gov (United States)

    Shih, Chihhsiong

    2005-01-01

    Two efficient workflows are developed for the reconstruction of a 3D full-color building model. One uses a point-wise sensing device to sample an unknown object densely and attaches color textures from a digital camera separately. The other uses an image-based approach to reconstruct the model with color texture attached automatically. The point-wise sensing device reconstructs the CAD model using a modified best-view algorithm that collects the maximum number of construction faces in one view. The partial views of the point cloud data are then glued together using a common face between two consecutive views. Typical overlapping-mesh removal and coarsening procedures are adapted to generate a unified 3D mesh shell structure. A post-processing step then combines the digital image content from a separate camera with the 3D mesh shell surfaces. An indirect uv-mapping procedure first divides the model faces into groups within which every face shares the same normal direction. The corresponding images of the faces in a group are then adjusted using the uv map as guidance. The final assembled image is then glued back to the 3D mesh to present a fully colored building model. The result is a virtual building that reflects the true dimensions and surface material conditions of a real-world campus building. The image-based modeling procedure uses a commercial photogrammetry package to reconstruct the 3D model. A novel view-planning algorithm is developed to guide the photo-taking procedure. This algorithm generates a minimum set of view angles; the set of pictures taken at these view angles guarantees that each model face appears in at least two of the pictures and no more than three. The 3D model can then be reconstructed with a minimum amount of labor spent in correlating picture pairs. The finished model is compared with the original object in both topological and dimensional aspects. All the test cases show the exact same topology and

  2. Study of factors affecting the productivity of nurses based on the ACHIEVE model and prioritizing them using analytic hierarchy process technique, 2012

    Directory of Open Access Journals (Sweden)

    Payam Farhadi

    2013-01-01

    Full Text Available Objective: Improving productivity is one of the most important strategies for social-economic development. Human resources are known as the most important resource for an organization's survival and success. Aims: To determine the factors affecting human resource productivity using the ACHIEVE model from the nurses' perspective and then prioritize them from the perspective of head nurses using the Analytic Hierarchy Process (AHP) technique. Settings and Design: Iran, Shiraz University of Medical Sciences teaching hospitals in 2012. Materials and Methods: This was an applied, cross-sectional and analytical-descriptive study conducted in two phases. In the first phase, to determine the factors affecting human resource productivity from the nurses' perspective, 110 nurses were selected using a two-stage cluster sampling method. Required data were collected using the Persian version of Hersey and Goldsmith's Human Resource Productivity Questionnaire. In the second phase, in order to prioritize the factors affecting human resource productivity based on the ACHIEVE model using the AHP technique, pairwise comparison matrices were given to 19 randomly selected head nurses to express their opinions about those factors' relative priorities or importance. Statistical Analysis Used: Collected data and matrices in the two phases were analyzed using SPSS 15.0 and statistical tests including the independent-samples t-test and Pearson correlation coefficient, as well as Super Decisions software (latest beta). Results: Human resource productivity had significant relationships with nurses' sex (P = 0.008), marital status (P < 0.001), education level (P < 0.001), and all questionnaire factors (P < 0.05). Nurses' productivity from their perspective was below average (44.97 ± 7.43). Also, the priorities of factors affecting the productivity of nurses based on the ACHIEVE model from the head nurses' perspective using the AHP technique, from the
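    As a rough illustration of the AHP step described in this record, the following sketch computes priority weights as the principal eigenvector of a pairwise comparison matrix and checks consistency; the 4x4 matrix, the factor names, and the example values are illustrative assumptions, not the study's data:

```python
import numpy as np

# Hypothetical 4x4 pairwise comparison matrix (Saaty 1-9 scale) for four
# illustrative ACHIEVE-style factors; the numbers are invented for the example.
A = np.array([
    [1.0, 3.0, 5.0, 2.0],
    [1/3, 1.0, 3.0, 1/2],
    [1/5, 1/3, 1.0, 1/4],
    [1/2, 2.0, 4.0, 1.0],
])

# Priority vector = normalized principal eigenvector of A.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w /= w.sum()

# Consistency ratio: CI = (lambda_max - n) / (n - 1); Saaty's random index RI(n=4) = 0.90.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
CR = CI / 0.90
print("priorities:", np.round(w, 3), " consistency ratio:", round(CR, 3))
```

    A consistency ratio below about 0.1 is the usual acceptance threshold before the priorities are used for ranking.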

  3. Incorporation of RAM techniques into simulation modeling

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.C. Jr.; Haire, M.J.; Schryver, J.C.

    1995-07-01

    This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model that represents the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next-generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve the operational performance of the vehicles and reduce manpower. The network simulation model used in this work is task based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment and includes failure management subnetworks. RAM information and other performance measures are collected which have impact on design requirements. Design changes are evaluated through "what if" questions, sensitivity studies, and battle scenario changes.

  4. Towards elicitation of users requirements for hospital information system: from a care process modelling technique to a web based collaborative tool.

    Science.gov (United States)

    Staccini, Pascal M; Joubert, Michel; Quaranta, Jean-Francois; Fieschi, Marius

    2002-01-01

    Growing attention is being given to the use of process modeling methodology for user requirements elicitation. In the analysis phase of hospital information systems, the usefulness of care-process models has been investigated to evaluate their conceptual applicability and practical understandability by clinical staff and members of user teams. Nevertheless, there still remains a gap between users and analysts in their mutual ability to share conceptual views and vocabulary, keeping the meaning of the clinical context while providing elements for analysis. One of the solutions for filling this gap is to consider the process model itself in the role of a hub, as a centralized means of facilitating communication between team members. Starting with a robust and descriptive technique for process modeling called IDEF0/SADT, we refined the basic data model by extracting concepts from ISO 9000 process analysis and from enterprise ontology. We defined a web-based architecture to serve as a collaborative tool and implemented it using an object-oriented database. The prospects of such a tool are discussed, notably regarding its ability to generate data dictionaries and to be used as a navigation tool through the medium of hospital-wide documentation.

  5. 基于混合建模技术的复合肥养分含量MIMO软测量模型%MIMO Soft-sensor Model of Nutrient Content for Compound Fertilizer Based on Hybrid Modeling Technique

    Institute of Scientific and Technical Information of China (English)

    傅永峰; 苏宏业; 褚健

    2007-01-01

    In compound fertilizer production, several quality variables need to be monitored and controlled simultaneously. It is very difficult to measure these variables on-line with existing instruments and sensors, so the soft-sensor technique becomes an indispensable method for implementing real-time quality control. In this article, a new multi-input multi-output (MIMO) soft-sensor model, constructed using a hybrid modeling technique, is proposed for these interacting variables. A data-driven modeling method and a simplified first-principle modeling method are combined in this model. The data-driven modeling method, based on the limited-memory partial least squares (LM-PLS) algorithm, is used to build soft-sensor models for some secondary variables; then, the simplified first-principle model is used to compute three primary variables on-line. The proposed model has been used in a practical process; the results indicate that the proposed model is precise and efficient, and that it is possible to realize on-line quality control for the compound fertilizer process.

  6. Model checking timed automata : techniques and applications

    NARCIS (Netherlands)

    Hendriks, Martijn.

    2006-01-01

    Model checking is a technique to automatically analyse systems that have been modeled in a formal language. The timed automaton framework is such a formal language. It is suitable to model many realistic problems in which time plays a central role. Examples are distributed algorithms, protocols, emb

  7. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines and assumes an understanding of graduate-level multivariate statistics, including an introduction to SEM.

  8. Using Visualization Techniques in Multilayer Traffic Modeling

    Science.gov (United States)

    Bragg, Arnold

    We describe visualization techniques for multilayer traffic modeling - i.e., traffic models that span several protocol layers, and traffic models of protocols that cross layers. Multilayer traffic modeling is challenging, as one must deal with disparate traffic sources; control loops; the effects of network elements such as IP routers; cross-layer protocols; asymmetries in bandwidth, session lengths, and application behaviors; and an enormous number of complex interactions among the various factors. We illustrate by using visualization techniques to identify relationships, transformations, and scaling; to smooth simulation and measurement data; to examine boundary cases, subtle effects and interactions, and outliers; to fit models; and to compare models with others that have fewer parameters. Our experience suggests that visualization techniques can provide practitioners with extraordinary insight about complex multilayer traffic effects and interactions that are common in emerging next-generation networks.

  9. Cells, Agents, and Support Vectors in Interaction - Modeling Urban Sprawl based on Machine Learning and Artificial Intelligence Techniques in a Post-Industrial Region

    Science.gov (United States)

    Rienow, A.; Menz, G.

    2015-12-01

    Since the beginning of the millennium, artificial intelligence techniques such as cellular automata (CA) and multi-agent systems (MAS) have been incorporated into land-system simulations to address the complex challenges of transitions in urban areas as open, dynamic systems. The study presents a hybrid modeling approach for modeling the two antagonistic processes of urban sprawl and urban decline at once. The simulation power of support vector machines (SVM), cellular automata (CA), and multi-agent systems (MAS) is integrated into one modeling framework and applied to the largest agglomeration in Central Europe: the Ruhr. A modified version of SLEUTH (short for Slope, Land-use, Exclusion, Urban, Transport, and Hillshade) functions as the CA component. SLEUTH makes use of historic urban land-use data sets and growth coefficients for the purpose of modeling physical urban expansion. The SVM machine learning algorithm is applied in order to enhance SLEUTH. Thus, the stochastic variability of the CA is reduced and information about the human and ecological forces driving the local suitability for urban sprawl is incorporated. Subsequently, the supported CA is coupled with the MAS ReHoSh (Residential Mobility and the Housing Market of Shrinking City Systems). The MAS models population patterns, housing prices, and housing demand in shrinking regions based on interactions between household and city agents. Semi-explicit urban weights are introduced as a way of modeling from and to the pixel simultaneously. Three scenarios of changing housing preferences reveal the urban development of the region in terms of quantity and location. They reflect the dissemination of sustainable thinking among stakeholders versus the steady dream of owning a house in sub- and exurban areas. Additionally, the outcomes are transferred into a digital petri dish reflecting a synthetic environment with perfect growth conditions. Hence, the generic growth elements affecting the future

  10. Graph based techniques for tag cloud generation

    DEFF Research Database (Denmark)

    Leginus, Martin; Dolog, Peter; Lage, Ricardo Gomes

    2013-01-01

    Tag cloud is one of the navigation aids for exploring documents. Tag clouds also link documents through user-defined terms. We explore various graph-based techniques to improve tag cloud generation. Moreover, we introduce relevance measures based on underlying data such as ratings or citat...

  11. Fuzzy Logic-Based Techniques for Modeling the Correlation between the Weld Bead Dimension and the Process Parameters in MIG Welding

    Directory of Open Access Journals (Sweden)

    Y. Surender

    2013-01-01

    Full Text Available Fuzzy logic-based techniques have been developed to model input-output relationships of the metal inert gas (MIG) welding process. Both conventional and hierarchical fuzzy logic controllers (FLCs) of Mamdani type have been developed, and their performances are compared. The conventional FLC suffers from the curse of dimensionality when handling a large number of variables, and a hierarchical FLC was proposed earlier to tackle this problem. However, in that study, the structure and knowledge base of the FLC were not optimized simultaneously, which has been attempted here. Simultaneous optimization of the structure and knowledge base is a difficult task, and to solve it, a genetic algorithm (GA) will have to deal with strings of varied lengths. A new scheme has been proposed here to tackle the problem related to crossover of two parents with unequal lengths. It is interesting to observe that the conventional FLC yields the best accuracy in predictions, whereas the hierarchical FLC can be computationally faster than the others, but at the cost of accuracy. Moreover, there is no improvement of interpretability by introducing a hierarchical fuzzy system. Thus, there exists a trade-off between the accuracy obtained in predictions and the computational complexity of the various FLCs.
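    A minimal sketch of a Mamdani-style fuzzy inference for one weld-bead dimension, assuming two hypothetical inputs (welding current and travel speed) and a weighted-average defuzzification; the membership ranges, rule base, and output values are invented for illustration and are not the paper's knowledge base:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function over [a, c] with peak at b."""
    return max(min((x - a) / (b - a + 1e-12), (c - x) / (c - b + 1e-12)), 0.0)

def bead_width(current, speed):
    # Fuzzification (ranges and labels are illustrative, not from the paper).
    cur_low, cur_high = tri(current, 100, 150, 200), tri(current, 150, 200, 250)
    spd_low, spd_high = tri(speed, 20, 30, 40), tri(speed, 30, 40, 50)

    # Mamdani-style rules (min for AND); each rule maps to a crisp output prototype (mm).
    rules = [
        (min(cur_low,  spd_low),  6.0),   # low current, low speed   -> medium bead
        (min(cur_low,  spd_high), 4.0),   # low current, high speed  -> narrow bead
        (min(cur_high, spd_low),  9.0),   # high current, low speed  -> wide bead
        (min(cur_high, spd_high), 6.5),   # high current, high speed -> medium bead
    ]
    # Weighted-average defuzzification (a simplification of centroid defuzzification).
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules) + 1e-12
    return num / den

print(f"predicted bead width: {bead_width(current=180, speed=34):.2f} mm")
```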

  12. Mechanistic model to predict colostrum intake based on deuterium oxide dilution technique data and impact of gestation and prefarrowing diets on piglet intake and sow yield of colostrum.

    Science.gov (United States)

    Theil, P K; Flummer, C; Hurley, W L; Kristensen, N B; Labouriau, R L; Sørensen, M T

    2014-12-01

    The aims of the present study were to quantify colostrum intake (CI) of piglets using the D2O dilution technique, to develop a mechanistic model to predict CI, to compare these data with CI predicted by a previous empirical predictive model developed for bottle-fed piglets, and to study how the composition of diets fed to gestating sows affected piglet CI, sow colostrum yield (CY), and colostrum composition. In total, 240 piglets from 40 litters were enriched with D2O. The CI measured by D2O from birth until 24 h after the birth of the first-born piglet was on average 443 g (SD 151). Based on the measured CI, a mechanistic model to predict CI was developed using piglet characteristics (24-h weight gain [WG, g], BW at birth [BWB, kg], and duration of CI [D, min]): CI (g) = -106 + 2.26 WG + 200 BWB + 0.111 D - 1,414 WG/D + 0.0182 WG/BWB (R2 = 0.944). This model was used to predict the CI of all colostrum-suckling piglets within the 40 litters (n = 500, mean = 437 g, SD = 153 g) and was compared with the CI predicted by a previous empirical predictive model (mean = 305 g, SD = 140 g). The previous empirical model underestimated the CI by 30% compared with that obtained by the new mechanistic model. The sows were fed 1 of 4 gestation diets (n = 10 per diet) based on different fiber sources (low fiber [17%] or potato pulp, pectin residue, or sugarbeet pulp [32 to 40%]) from mating until d 108 of gestation. From d 108 of gestation until parturition, sows were fed 1 of 5 prefarrowing diets (n = 8 per diet) varying in supplemented fat (3% animal fat, 8% coconut oil, 8% sunflower oil, 8% fish oil, or 4% fish oil + 4% octanoic acid). Sows fed diets with pectin residue or sugarbeet pulp during gestation produced colostrum with lower protein, fat, DM, and energy concentrations and higher lactose concentrations, and their piglets had greater CI as compared with sows fed potato pulp or the low-fiber diet (P ...). ... colostrum compared with other prefarrowing diets (P ...). ... colostrum composition.
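    A worked example of the fitted equation quoted in this record, evaluated for a hypothetical piglet; the function simply implements the published regression, and the example input values are not from the study:

```python
def colostrum_intake(wg_g, bwb_kg, d_min):
    """Predicted colostrum intake (g) from the regression in the abstract.
    wg_g: 24-h weight gain (g); bwb_kg: body weight at birth (kg); d_min: duration of CI (min)."""
    return (-106 + 2.26 * wg_g + 200 * bwb_kg + 0.111 * d_min
            - 1414 * wg_g / d_min + 0.0182 * wg_g / bwb_kg)

# Hypothetical piglet: 100 g of 24-h gain, 1.4 kg at birth, suckling over 1440 min (24 h).
print(round(colostrum_intake(100, 1.4, 1440), 1), "g")   # ~463 g, within the observed range
```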

  13. Comparative Analysis of Vehicle Make and Model Recognition Techniques

    Directory of Open Access Journals (Sweden)

    Faiza Ayub Syed

    2014-03-01

    Full Text Available Vehicle Make and Model Recognition (VMMR) has emerged as a significant element of vision-based systems because of its applications in access control systems, traffic control and monitoring systems, security systems, surveillance systems, etc. So far, a number of techniques have been developed for vehicle recognition. Each technique follows a different methodology and classification approach. The evaluation results highlight the recognition technique with the highest accuracy level. In this paper we have described the workings of various vehicle make and model recognition techniques and compared these techniques on the basis of methodology, principles, classification approach, classifier and level of recognition. After comparing these factors we concluded that Locally Normalized Harris Corner Strengths (LHNS) performs best as compared to other techniques. LHNS uses Bayes and K-NN classification approaches for vehicle classification. It extracts information from the frontal view of vehicles for vehicle make and model recognition.

  14. Modeling Fatigue Damage Onset and Progression in Composites Using an Element-Based Virtual Crack Closure Technique Combined With the Floating Node Method

    Science.gov (United States)

    De Carvalho, Nelson V.; Krueger, Ronald

    2016-01-01

    A new methodology is proposed to model the onset and propagation of matrix cracks and delaminations in carbon-epoxy composites subject to fatigue loading. An extended interface element, based on the Floating Node Method, is developed to represent delaminations and matrix cracks explicitly in a mesh-independent fashion. Crack propagation is determined using an element-based Virtual Crack Closure Technique approach to determine mixed-mode energy release rates, and the Paris-law relationship to obtain the crack growth rate. Crack onset is determined using a stress-based onset criterion coupled with a stress vs. cycle curve and the Palmgren-Miner rule to account for fatigue damage accumulation. The approach is implemented in Abaqus/Standard® via the user subroutine functionality. Verification exercises are performed to assess the accuracy and correct implementation of the approach. Finally, it was demonstrated that this approach captures the differences in fatigue failure morphology for two laminates of identical stiffness but with layups containing ?deg plies that were either stacked in a single group or distributed through the laminate thickness.
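    As a rough sketch of the two fatigue ingredients named above, the following code integrates a Paris-law crack-growth relation driven by an energy release rate and accumulates Palmgren-Miner onset damage; the constants, the normalized form of the Paris law, and the S-N curve are hypothetical placeholders, not the paper's material data or its VCCT implementation:

```python
# Illustrative Paris-law constants and S-N onset curve; all values are hypothetical.
C, m = 1.0e-3, 3.0          # da/dN = C * (G_max / G_c)^m  [mm/cycle], normalized driving force
G_c = 0.5                   # critical energy release rate [kJ/m^2]

def crack_growth(a0_mm, g_max, cycles, block=1000):
    """Integrate Paris-law growth over 'cycles' assuming constant G_max per block."""
    a = a0_mm
    for _ in range(0, cycles, block):
        a += block * C * (g_max / G_c) ** m
    return a

def miner_onset_damage(stress_blocks, sn_curve):
    """Palmgren-Miner accumulation: damage = sum(n_i / N_i); onset is predicted when >= 1."""
    return sum(n / sn_curve(s) for s, n in stress_blocks)

sn = lambda s: 1.0e6 * (100.0 / s) ** 5          # hypothetical S-N law N(s)
blocks = [(80.0, 2.0e5), (120.0, 5.0e4)]         # (stress MPa, cycles) per loading block
print("crack length after 1e5 cycles:", round(crack_growth(1.0, 0.2, 100_000), 2), "mm")
print("onset damage index:", round(miner_onset_damage(blocks, sn), 3))
```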

  15. Matlab-Based Modeling and Simulations to Study the Performance of Different MPPT Techniques Used for Photovoltaic Systems under Partially Shaded Conditions

    Directory of Open Access Journals (Sweden)

    Jehun Hahm

    2015-01-01

    Full Text Available A pulse-width-modulator (PWM)-based sliding mode controller is developed to study the effects of partial shade, temperature, and insolation on the performance of maximum power point tracking (MPPT) used in photovoltaic (PV) systems. Under partially shaded conditions and varying temperature, PV array characteristics become more complex, with multiple power-voltage maxima. MPPT is an automatic control technique that adjusts power interfaces and delivers power for a diverse range of insolation values, temperatures, and partially shaded modules. The PV system is tested using two conventional algorithms: the Perturb and Observe (P&O) algorithm and the Incremental Conductance (IncCond) algorithm, which are simple to implement for a PV array. The proposed method applies a model to simulate the performance of the PV system for solar energy usage, which is compared to the conventional methods under nonuniform insolation, improving the PV system utilization efficiency and allowing optimization of the system performance. The PWM-based sliding mode controller successfully overcomes the issues presented by nonuniform conditions and tracks the global MPP. In this paper, the PV system consists of a solar module under shade connected to a boost converter that is controlled by the three different algorithms and is simulated using Matlab/Simulink.
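    A minimal sketch of the Perturb and Observe (P&O) logic mentioned above, run on a toy single-peak PV curve; the step size, voltage range, and power curve are illustrative assumptions, and the sketch deliberately ignores the multi-peak partial-shading case that the paper's sliding-mode controller is designed to handle:

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One P&O iteration: return the next reference voltage for the converter.
    v, p: present PV voltage and power; v_prev, p_prev: previous sample."""
    dv, dp = v - v_prev, p - p_prev
    if dp == 0:
        return v                       # on the MPP, hold
    if (dp > 0) == (dv > 0):
        return v + step                # still climbing toward the MPP, keep direction
    return v - step                    # moved past the MPP, reverse direction

# Toy PV power curve with a single maximum near 17 V (uniform insolation only;
# under partial shading there are multiple maxima and plain P&O can lock onto a local peak).
pv_power = lambda v: max(0.0, -0.5 * (v - 17.0) ** 2 + 60.0)

v_prev, p_prev, v = 14.0, pv_power(14.0), 14.5
for _ in range(40):
    p = pv_power(v)
    v_next = perturb_and_observe(v, p, v_prev, p_prev)
    v_prev, p_prev, v = v, p, v_next
print(f"settled near V = {v:.1f} V, P = {pv_power(v):.1f} W")
```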

  16. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique: 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper also discusses the pros and cons of using this synthetic surrogate-data approach versus trying to use real data sets from actual buildings.
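    A toy illustration of the surrogate-data idea described above: a stand-in "simulation program" with known parameters generates synthetic utility bills and true retrofit savings, a simple grid-search calibration is applied, and the three figures of merit are reported. The monthly degree-day model, parameter names, and retrofit assumption are all hypothetical, not the paper's test procedure:

```python
import numpy as np

# Toy "simulation program": monthly energy = UA * HDD + base load, with parameters
# unknown to the calibration technique. Model form and values are illustrative only.
hdd = np.array([600, 500, 400, 250, 120, 40, 20, 30, 100, 300, 450, 580])  # heating degree-days

def simulate(ua, base):
    return ua * hdd + base

true_ua, true_base = 0.8, 300.0
surrogate_bills = simulate(true_ua, true_base)                 # synthetic "utility data"
retrofit_truth = simulate(true_ua * 0.6, true_base)            # true post-retrofit (UA reduced 40%)
true_savings = surrogate_bills.sum() - retrofit_truth.sum()

# A stand-in calibration technique: brute-force least squares over a parameter grid.
grid_ua = np.linspace(0.4, 1.2, 81)
grid_base = np.linspace(100, 500, 81)
err = [(abs(simulate(u, b) - surrogate_bills).mean(), u, b) for u in grid_ua for b in grid_base]
fit_err, ua_hat, base_hat = min(err)

pred_savings = simulate(ua_hat, base_hat).sum() - simulate(ua_hat * 0.6, base_hat).sum()
print("1) savings prediction error:", round(abs(pred_savings - true_savings), 1))
print("2) parameter closure (UA, base):", round(abs(ua_hat - true_ua), 3), round(abs(base_hat - true_base), 1))
print("3) goodness of fit to bills (MAE):", round(float(fit_err), 2))
```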

  17. Quantifying sources of black carbon in Western North America using observationally based analysis and an emission tagging technique in the Community Atmosphere Model

    Directory of Open Access Journals (Sweden)

    R. Zhang

    2015-05-01

    Full Text Available The Community Atmosphere Model (CAM5), equipped with a technique to tag black carbon (BC) emissions by source regions and types, has been employed to establish source-receptor relationships for atmospheric BC and its deposition to snow over Western North America. The CAM5 simulation was conducted with meteorological fields constrained by reanalysis for year 2013, when measurements of BC in both near-surface air and snow are available for model evaluation. We find that CAM5 has a significant low bias in predicted mixing ratios of BC in snow but only a small low bias in predicted atmospheric concentrations over the Northwest USA and West Canada. Even with a strong low bias in snow mixing ratios, radiative transfer calculations show that the BC-in-snow darkening effect is substantially larger than the BC dimming effect at the surface by atmospheric BC. Local sources contribute more to near-surface atmospheric BC and to deposition than distant sources, while the latter are more important in the middle and upper troposphere where wet removal is relatively weak. Fossil fuel (FF) is the dominant source type for total column BC burden over the two regions. FF is also the dominant local source type for BC column burden, deposition, and near-surface BC, while for all distant source regions combined the contribution of biomass/biofuel (BB) is larger than FF. An observationally based Positive Matrix Factorization (PMF) analysis of the snow-impurity chemistry is conducted to quantitatively evaluate the CAM5 BC source-type attribution. While CAM5 is qualitatively consistent with the PMF analysis with respect to partitioning of BC originating from BB and FF emissions, it significantly underestimates the relative contribution of BB. In addition to a possible low bias in the BB emissions used in the simulation, the model is likely missing a significant source of snow darkening from local soil found in the observations.

  18. A Shape Based Image Search Technique

    Directory of Open Access Journals (Sweden)

    Aratrika Sarkar

    2014-08-01

    Full Text Available This paper describes an interactive application we have developed based on a shape-based image retrieval technique. The key concepts described in the project are: (i) matching of images based on contour matching; (ii) matching of images based on edge matching; (iii) matching of images based on pixel matching of colours. Further, the application facilitates matching of images invariant to transformations such as (i) translation; (ii) rotation; (iii) scaling. The key feature of the system is that it shows, graphically, the percentage unmatched of the uploaded image with respect to the images already existing in the database, while the integrity of the system lies in the unique matching techniques used for optimum results. This increases the accuracy of the system. For example, when a user uploads an image, say an image of a mango leaf, the application shows all mango leaves present in the database as well as other leaves matching the colour and shape of the uploaded mango leaf.
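    A rough sketch of two of the matching ideas in this record (contour matching and colour matching) using OpenCV, assuming OpenCV 4; the file names, the Otsu binarisation, the histogram settings, and the weighting of the two distances are illustrative assumptions, not the application's actual algorithm:

```python
import cv2

def percent_unmatched(query_path, candidate_path):
    """Rough shape/colour dissimilarity between two images, expressed as a percentage."""
    q = cv2.imread(query_path)
    c = cv2.imread(candidate_path)

    # (i) Contour matching on the largest external contour of each binarised image.
    # Hu-moment based matching is translation/rotation/scale invariant, echoing the
    # transformation invariance described in the record.
    def main_contour(img):
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        return max(contours, key=cv2.contourArea)

    shape_dist = cv2.matchShapes(main_contour(q), main_contour(c), cv2.CONTOURS_MATCH_I1, 0.0)

    # (iii) Colour matching via normalised hue histograms.
    def hue_hist(img):
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        h = cv2.calcHist([hsv], [0], None, [32], [0, 180])
        return cv2.normalize(h, h)

    colour_dist = cv2.compareHist(hue_hist(q), hue_hist(c), cv2.HISTCMP_BHATTACHARYYA)

    # Arbitrary equal weighting of the two distances for the illustrative "percentage unmatched".
    return 100.0 * (0.5 * min(shape_dist, 1.0) + 0.5 * colour_dist)

# print(percent_unmatched("mango_leaf_query.png", "db_leaf_0001.png"))  # hypothetical files
```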

  19. Numerical modeling techniques for flood analysis

    Science.gov (United States)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to find out the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and their estimation in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO-2D models are best, in terms of economical and accurate flood analysis, for river and floodplain modeling respectively. Limitations of FLO-2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness in grids, were found, which can be improved through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open channel flows have been developed recently, but not for floodplains. Hence, it was suggested that a 3D model for floodplains should be developed by considering all of the hydrological and high-resolution topographic parameters discussed in this review, to enhance the findings on the causes and effects of flooding.

  20. Techniques for managing behaviour in pediatric dentistry: comparative study of live modelling and tell-show-do based on children's heart rates during treatment.

    Science.gov (United States)

    Farhat-McHayleh, Nada; Harfouche, Alice; Souaid, Philippe

    2009-05-01

    Tell-show-do is the most popular technique for managing children's behaviour in dentists' offices. Live modelling is used less frequently, despite the satisfactory results obtained in studies conducted during the 1980s. The purpose of this study was to compare the effects of these 2 techniques on children's heart rates during dental treatments, heart rate being the simplest biological parameter to measure and an increase in heart rate being the most common physiologic indicator of anxiety and fear. For this randomized, controlled, parallel-group, single-centre clinical trial, children 5 to 9 years of age presenting for the first time to the Saint Joseph University dental care centre in Beirut, Lebanon, were divided into 3 groups: those in groups A and B were prepared for dental treatment by means of live modelling, the mother serving as the model for children in group A and the father as the model for children in group B. The children in group C were prepared by a pediatric dentist using the tell-show-do method. Each child's heart rate was monitored during treatment, which consisted of an oral examination and cleaning. A total of 155 children met the study criteria and participated in the study. Children who received live modelling with the mother as model had lower heart rates than those who received live modelling with the father as model and those who were prepared by the tell-show-do method (p ...) ... dentistry.

  1. Dynamic modeling of breast tissue with application of model reference adaptive system identification technique based on clinical robot-assisted palpation.

    Science.gov (United States)

    Keshavarz, M; Mojra, A

    2015-11-01

    Accurate identification of breast tissue's dynamic behavior in physical examination is critical to successful diagnosis and treatment. In this study a model reference adaptive system identification (MRAS) algorithm is utilized to estimate the dynamic behavior of breast tissue from mechanical stress-strain datasets. A robot-assisted device (Robo-Tac-BMI) mimics physical palpation on a 45-year-old woman having a benign mass in the left breast. Stress-strain datasets were collected over 14 regions of both breasts in a specific period of time. Then, a 2nd-order linear model was adapted to the experimental datasets. It was confirmed that a unique dynamic model with a maximum error of about 0.89% is descriptive of the breast tissue behavior; meanwhile, mass detection may be achieved via a 56.1% difference from the normal tissue.
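    As a rough illustration, the sketch below identifies a 2nd-order discrete-time (ARX) model from a simulated stress-strain record by batch least squares, standing in for the recursive MRAS adaptation described in this record; the signals, coefficients, and noise level are synthetic assumptions, not the clinical data:

```python
import numpy as np

# Hypothetical stress (input) and strain (output) samples from a palpation cycle.
rng = np.random.default_rng(1)
stress = np.sin(np.linspace(0, 4 * np.pi, 400)) ** 2            # normalized input signal
strain = np.zeros_like(stress)
for k in range(2, len(stress)):                                  # synthetic "true" tissue response
    strain[k] = 1.2 * strain[k-1] - 0.36 * strain[k-2] + 0.05 * stress[k-1] + 0.03 * stress[k-2]
strain += rng.normal(0, 1e-3, strain.shape)                      # measurement noise

# Identify a 2nd-order ARX model y[k] = a1 y[k-1] + a2 y[k-2] + b1 u[k-1] + b2 u[k-2]
# by ordinary least squares (a batch stand-in for the recursive MRAS adaptation).
Y = strain[2:]
Phi = np.column_stack([strain[1:-1], strain[:-2], stress[1:-1], stress[:-2]])
theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
pred = Phi @ theta
max_err = 100 * np.max(np.abs(pred - Y)) / (np.max(np.abs(Y)) + 1e-12)
print("identified [a1, a2, b1, b2]:", np.round(theta, 3), f" max error: {max_err:.2f}%")
```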

  2. Quantitative model validation techniques: new insights

    CERN Document Server

    Ling, You

    2012-01-01

    This paper develops new insights into quantitative methods for the validation of computational model prediction. Four types of methods are investigated, namely classical and Bayesian hypothesis testing, a reliability-based method, and an area metric-based method. Traditional Bayesian hypothesis testing is extended based on interval hypotheses on distribution parameters and equality hypotheses on probability distributions, in order to validate models with deterministic/stochastic output for given inputs. Two types of validation experiments are considered - fully characterized (all the model/experimental inputs are measured and reported as point values) and partially characterized (some of the model/experimental inputs are not measured or are reported as intervals). Bayesian hypothesis testing can minimize the risk in model selection by properly choosing the model acceptance threshold, and its results can be used in model averaging to avoid Type I/II errors. It is shown that Bayesian interval hypothesis testing...

  3. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and man-made features belonging to the urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing the graphic representation of buildings and other objects in 2.5 or 3D. Generally, three main geomatics approaches are used for virtual 3D city model generation: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; in the third method, many researchers use terrestrial images via close range photogrammetry with DSM and texture mapping. We start this paper with an introduction to the various geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic and manual methods), and another based on data input techniques (photogrammetry and laser techniques). After a detailed study, we give the conclusions of this research together with a short view of justification and analysis, and the present trend in 3D city modeling. This paper gives an overview of the techniques related to the generation of virtual 3D city models using geomatics techniques and the applications of virtual 3D city models. Photogrammetry (close range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern geomatics techniques play a major role in creating a virtual 3D city model. Each technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3

  4. Wavelet-based technique for target segmentation

    Science.gov (United States)

    Sadjadi, Firooz A.

    1995-07-01

    Segmentation of targets embedded in clutter obtained by IR imaging sensors is one of the challenging problems in automatic target recognition (ATR). In this paper a new texture-based segmentation technique is presented that uses the statistics of the 2D wavelet decomposition components of local sections of the image. A measure of statistical similarity is then used to segment the image and separate the target from the background. This technique is applied to a set of real sequential IR imagery and has been shown to produce a high degree of segmentation accuracy across varying ranges.
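    A minimal sketch of the idea described above, assuming a synthetic IR-like frame: local windows are characterised by statistics of their 2D wavelet detail subbands and compared with reference target statistics; the window size, the Haar wavelet, the choice of standard deviation as the statistic, and the threshold are illustrative assumptions, not the paper's exact similarity measure:

```python
import numpy as np
import pywt

def wavelet_stats(patch):
    """Feature vector: std of the three detail subbands of a single-level 2D DWT."""
    _, (cH, cV, cD) = pywt.dwt2(patch, "haar")
    return np.array([cH.std(), cV.std(), cD.std()])

def segment(image, ref_stats, win=16, thresh=1.0):
    """Label each non-overlapping window as target (1) if its wavelet statistics
    are close to the reference target statistics, else background (0)."""
    h, w = image.shape
    mask = np.zeros((h // win, w // win), dtype=np.uint8)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            patch = image[i*win:(i+1)*win, j*win:(j+1)*win]
            dist = np.linalg.norm(wavelet_stats(patch) - ref_stats)   # similarity measure
            mask[i, j] = 1 if dist < thresh else 0
    return mask

# Synthetic IR-like frame: smooth background clutter plus a high-texture "target" block.
rng = np.random.default_rng(2)
img = rng.normal(0, 0.2, (128, 128))
img[48:80, 48:80] += rng.normal(0, 2.0, (32, 32))
ref = wavelet_stats(img[48:80, 48:80])            # statistics taken from a target exemplar
print(segment(img, ref, win=16, thresh=1.0))
```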

  5. A brief review of dispensing-based rapid prototyping techniques in tissue scaffold fabrication: role of modeling on scaffold properties prediction.

    Science.gov (United States)

    Li, M G; Tian, X Y; Chen, X B

    2009-09-01

    Artificial scaffolds play vital roles in tissue engineering as they provide a supportive environment for cell attachment, proliferation and differentiation during tissue formation. Fabrication of tissue scaffolds is thus of fundamental importance for tissue engineering. Of the variety of scaffold fabrication techniques available, rapid prototyping (RP) methods have attracted a great deal of attention in recent years. This method can improve conventional scaffold fabrication by controlling scaffold microstructure, incorporating cells into scaffolds and regulating cell distribution. All of these contribute towards the ultimate goal of tissue engineering: functional tissues or organs. Dispensing is typically used in different RP techniques to implement the layer-by-layer fabrication process. This article reviews RP methods in tissue scaffold fabrication, with emphasis on dispensing-based techniques, and analyzes the effects of different process factors on fabrication performance, including flow rate, pore size and porosity, and mechanical cell damage that can occur in the bio-manufacturing process.

  6. Ultra-low dose abdominal MDCT: Using a knowledge-based Iterative Model Reconstruction technique for substantial dose reduction in a prospective clinical study

    Energy Technology Data Exchange (ETDEWEB)

    Khawaja, Ranish Deedar Ali, E-mail: rkhawaja@mgh.harvard.edu [MGH Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Singh, Sarabjeet; Blake, Michael; Harisinghani, Mukesh; Choy, Gary; Karosmangulu, Ali; Padole, Atul; Do, Synho [MGH Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States); Brown, Kevin; Thompson, Richard; Morton, Thomas; Raihani, Nilgoun [CT Research and Advanced Development, Philips Healthcare, Cleveland, OH (United States); Koehler, Thomas [Philips Technologie GmbH, Innovative Technologies, Hamburg (Germany); Kalra, Mannudeep K. [MGH Imaging, Massachusetts General Hospital and Harvard Medical School, Boston, MA (United States)

    2015-01-15

    Highlights: • Limited abdominal CT indications can be performed at a size-specific dose estimate (SSDE) of 1.5 mGy (∼0.9 mSv) in smaller patients (BMI less than or equal to 25 kg/m²) using a knowledge-based Iterative Model Reconstruction (IMR) technique. • Evaluation of liver tumors and pathologies is unacceptable at this reduced dose with the IMR technique, especially in patients with a BMI greater than 25 kg/m². • IMR body soft tissue and routine settings perform substantially better than the IMR sharp plus setting in reduced dose CT images. • At an SSDE of 1.5 mGy, objective image noise in reduced dose IMR images is 8–56% less than in standard dose FBP images, with the lowest image noise in IMR body-soft tissue images. - Abstract: Purpose: To assess lesion detection and image quality parameters of a knowledge-based Iterative Model Reconstruction (IMR) in reduced dose (RD) abdominal CT examinations. Materials and methods: This IRB-approved prospective study included 82 abdominal CT examinations performed for 41 consecutive patients (mean age, 62 ± 12 years; F:M 28:13) who underwent a RD CT (SSDE, 1.5 mGy ± 0.4 [∼0.9 mSv] at 120 kV with 17–20 mAs/slice) immediately after their standard dose (SD) CT exam (10 mGy ± 3 [∼6 mSv] at 120 kV with automatic exposure control) on 256-MDCT (iCT, Philips Healthcare). SD data were reconstructed using filtered back projection (FBP). RD data were reconstructed with FBP and IMR. Four radiologists used a five-point scale (1 = image quality better than SD CT to 5 = image quality unacceptable) to assess both subjective image quality and artifacts. Lesions were first detected on RD FBP images. RD IMR and RD FBP images were then compared side-by-side to SD-FBP images in an independent, randomized and blinded fashion. Friedman's test and the intraclass correlation coefficient were used for data analysis. Objective measurements included image noise and attenuation as well as noise spectral density (NSD) curves

  7. MATRIX BASED INDEXING TECHNIQUE FOR VIDEO DATA

    Directory of Open Access Journals (Sweden)

    Devarj Saravanan

    2013-01-01

    Full Text Available Due to the increasing usage of media, video plays a central role as it supports various applications. Video is a particular medium which contains a complex collection of objects such as audio, motion, text, color and picture. Due to the rapid growth of this information, a video indexing process is mandatory for fast and effective retrieval. Many current indexing techniques fail to extract the needed image from the stored data set based on the user's query. Urgent attention in the field of video indexing and image retrieval is the need of the hour. Here, a new matrix-based indexing technique for image retrieval is proposed. Experimental results show that the proposed method provides better retrieval results.

  8. Interactive early warning technique based on SVDD

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    After reviewing current research on early warning, it is found that the "bad" data of some systems is not easy to obtain, which makes the methods proposed in these studies unsuitable for the monitored systems. An interactive early warning technique based on SVDD (support vector data description) is proposed that adopts "good" data as samples to overcome the difficulty of obtaining "bad" data. The process consists of two parts: (1) a hypersphere is fitted on the "good" data using SVDD; if a data object lies outside the hypersphere, it is taken as "suspicious"; (2) a group of experts decide whether the suspicious data are "bad" or "good", and early warning messages are issued according to their decisions. The detailed process of implementation is also proposed. Finally, an experiment based on data from a macroeconomic system is conducted to verify the proposed technique.
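
    The screening step described above is straightforward to prototype. The sketch below uses scikit-learn's OneClassSVM (with an RBF kernel, a close relative of SVDD) as a stand-in for the hypersphere fit on "good" data; the synthetic feature matrix, the nu parameter and the flagging logic are illustrative assumptions, not details from the original study.

```python
# Minimal SVDD-style screening sketch: fit a one-class boundary on "good" data
# and flag new observations that fall outside it as "suspicious".
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
good_data = rng.normal(loc=0.0, scale=1.0, size=(200, 4))   # "good" samples only
new_data = rng.normal(loc=0.0, scale=2.0, size=(10, 4))     # incoming observations

scaler = StandardScaler().fit(good_data)
model = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05)   # nu bounds the fraction treated as outliers
model.fit(scaler.transform(good_data))                      # fit the "hypersphere" on good data

# Observations outside the learned boundary are flagged as "suspicious" and
# would be passed to the expert group for the final good/bad decision.
flags = model.predict(scaler.transform(new_data))           # +1 = inside, -1 = suspicious
suspicious = np.where(flags == -1)[0]
print("suspicious observation indices:", suspicious)
```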

  9. Power system stabilizers based on modern control techniques

    Energy Technology Data Exchange (ETDEWEB)

    Malik, O.P.; Chen, G.P.; Zhang, Y.; El-Metwally, K. [Calgary Univ., AB (Canada). Dept. of Electrical and Computer Engineering

    1994-12-31

    Developments in digital technology have made it feasible to develop and implement improved controllers based on sophisticated control techniques. Power system stabilizers based on adaptive control, fuzzy logic and artificial neural networks are being developed. Each of these control techniques possesses unique features and strengths. In this paper, the relative performance of power system stabilizers based on adaptive control, fuzzy logic and neural networks, both in simulation studies and in real-time tests on a physical model of a power system, is presented and compared to that of a fixed-parameter conventional power system stabilizer. (author) 16 refs., 45 figs., 3 tabs.

  10. Efficient Plant Supervision Strategy Using NN Based Techniques

    Science.gov (United States)

    Garcia, Ramon Ferreiro; Rolle, Jose Luis Calvo; Castelo, Francisco Javier Perez

    Most non-linear type-one and type-two control systems suffer from a lack of detectability when model-based techniques are applied to FDI (fault detection and isolation) tasks. In general, all types of processes also suffer from a lack of detectability due to the ambiguity in discriminating between the process, sensors and actuators when isolating a given fault. This work deals with a strategy to detect and isolate faults that combines massive neural-network-based functional approximation procedures with recursive rule-based techniques applied to a parity space approach.

  11. Field Assessment Techniques for Bank Erosion Modeling

    Science.gov (United States)

    1990-11-22

    Field Assessment Techniques for Bank Erosion Modeling. First Interim Report, prepared for the US Army European Research Office, Edison House. Includes sedimentation analysis sheets and guidelines for the use of sedimentation analysis sheets in the field, prepared for the US Army Engineer Waterways Experiment Station.

  12. MATRIX BASED INDEXING TECHNIQUE FOR VIDEO DATA

    OpenAIRE

    2013-01-01

    Due to the increasing usage of media, video plays a central role as it supports various applications. Video is a particular medium which contains a complex collection of objects such as audio, motion, text, color and picture. Due to the rapid growth of this information, a video indexing process is mandatory for fast and effective retrieval. Many current indexing techniques fail to extract the needed image from the stored data set based on the user's query. Urgent attention in the fi...

  13. Multiview video codec based on KTA techniques

    Science.gov (United States)

    Seo, Jungdong; Kim, Donghyun; Ryu, Seungchul; Sohn, Kwanghoon

    2011-03-01

    Multi-view video coding (MVC) is a video coding standard developed by MPEG and VCEG for multi-view video. It showed an average PSNR gain of 1.5 dB compared with view-independent coding by H.264/AVC. However, because the resolutions of multi-view video are getting higher for a more realistic 3D effect, a high-performance video codec is needed. MVC adopted the hierarchical B-picture structure and inter-view prediction as core techniques. The hierarchical B-picture structure removes the temporal redundancy, and the inter-view prediction reduces the inter-view redundancy by compensated prediction from the reconstructed neighboring views. Nevertheless, MVC has an inherent limitation in coding efficiency, because it is based on H.264/AVC. To overcome this limit, an enhanced video codec for multi-view video based on the Key Technology Area (KTA) is proposed. KTA is a high-efficiency video codec by the Video Coding Experts Group (VCEG), developed to achieve coding efficiency beyond H.264/AVC. The KTA software showed better coding gain than H.264/AVC by using additional coding techniques. These techniques and the inter-view prediction are implemented in the proposed codec, which showed high coding gain compared with the view-independent coding result by KTA. The results show that inter-view prediction can achieve higher efficiency in a multi-view video codec based on a high-performance video codec such as HEVC.

  14. Guidelines for a Digital Reinterpretation of Architectural Restoration Work: Reality-Based Models and Reverse Modelling Techniques Applied to the Architectural Decoration of the Teatro Marittimo, Villa Adriana

    Science.gov (United States)

    Adembri, B.; Cipriani, L.; Bertacchi, G.

    2017-05-01

    The Maritime Theatre is one of the iconic buildings of Hadrian's Villa, Tivoli. The state of conservation of the theatre is not only the result of weathering over time, but also of restoration work carried out during the 1950s. Although this anastylosis process had the virtue of partially restoring a few of the fragments of the compound's original image, it now reveals diverse inconsistencies and genuine errors in the reassembling of the fragments. This study aims at carrying out a digital reinterpretation of the restoration of the architectural fragments in relation to the architectural order, with particular reference to the miscellaneous decoration of the frieze of the Teatro Marittimo (vestibule and atrium). Over the course of the last few years the Teatro Marittimo has been the target of numerous surveying campaigns using digital methodology (laser scanning and SfM/MVS photogrammetry). Starting with the study of the remains of the opus caementicium on the ground, it is possible to identify surfaces which are then used in the model for subsequent cross sections, so as to achieve the best-fitting circumferences to use as reference points to put the fragments back into place.

  15. Simulation-based optimization parametric optimization techniques and reinforcement learning

    CERN Document Server

    Gosavi, Abhijit

    2003-01-01

    Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of simulation-based optimization. The book's objective is two-fold: (1) It examines the mathematical governing principles of simulation-based optimization, thereby providing the reader with the ability to model relevant real-life problems using these techniques. (2) It outlines the computational technology underlying these methods. Taken together these two aspects demonstrate that the mathematical and computational methods discussed in this book do work. Broadly speaking, the book has two parts: (1) parametric (static) optimization and (2) control (dynamic) optimization. Some of the book's special features are: *An accessible introduction to reinforcement learning and parametric-optimization techniques. *A step-by-step description of several algorithms of simulation-based optimization. *A clear and simple introduction to the methodology of neural networks. *A gentle introduction to converg...

  16. Level of detail technique for plant models

    Institute of Scientific and Technical Information of China (English)

    Xiaopeng ZHANG; Qingqiong DENG; Marc JAEGER

    2006-01-01

    Realistic modelling and interactive rendering of forestry and landscape are a challenge in computer graphics and virtual reality. Recent developments in plant growth modelling and simulation have led to plant models faithful to botanical structure and development, representing not only the complex architecture of a real plant but also its functioning in interaction with its environment. The complex geometry and materials of a large group of plants are a heavy burden even for high-performance computers, and they often overwhelm both the numerical calculation power and the graphic rendering power. Thus, in order to accelerate the rendering speed of a group of plants, software techniques are often developed. In this paper, we focus on plant organs, i.e. leaves, flowers, fruits and inter-nodes. Our approach is a simplification process of all sparse organs at the same time, i.e. Level of Detail (LOD) and multi-resolution models for plants. We explain here the principles and construction of plant simplification. They are used to construct LOD and multi-resolution models of sparse organs and branches of big trees. These approaches benefit from basic knowledge of plant architecture, clustering tree organs according to biological structures. We illustrate the potential of our approach on several big virtual plants for geometrical compression or LOD model definition. Finally, we demonstrate the efficiency of the proposed LOD models for realistic rendering with a virtual scene composed of 184 mature trees.

  17. Flood alert system based on bayesian techniques

    Science.gov (United States)

    Gulliver, Z.; Herrero, J.; Viesca, C.; Polo, M. J.

    2012-04-01

    The problem of floods in the Mediterranean regions is closely linked to the occurrence of torrential storms in dry regions, where even the water supply relies on adequate water management. Like other Mediterranean basins in Southern Spain, the Guadalhorce River Basin is a medium-sized watershed (3856 km2) where recurrent yearly floods occur, mainly in autumn and spring, driven by cold front phenomena. The torrential character of the precipitation in such small basins, with a concentration time of less than 12 hours, produces flash flood events with catastrophic effects on the city of Malaga (600,000 inhabitants). From this fact arises the need for specific alert tools which can forecast these kinds of phenomena. Bayesian networks (BN) have been emerging in the last decade as a very useful and reliable computational tool for water resources and for the decision-making process. The joint use of Artificial Neural Networks (ANN) and BN has served us to recognize and simulate the two different types of hydrological behaviour in the basin: natural and regulated. This led to the establishment of causal relationships between precipitation, discharge from upstream reservoirs, and water levels at a gauging station. It was seen that a recurrent ANN model working at an hourly scale, considering daily precipitation and the two previous hourly values of reservoir discharge and water level, could provide R2 values of 0.86. The BN results slightly improve this fit, and additionally contribute uncertainty estimates to the prediction. In our current work to design a weather warning service based on Bayesian techniques, the first steps were carried out through an analysis of the correlations between the water level and rainfall at certain representative points in the basin, along with the upstream reservoir discharge. The lower correlation found between precipitation and water level emphasizes the highly regulated condition of the stream. The autocorrelations of the variables were also
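
    As an illustration of the lagged-input structure described above (daily precipitation plus the two previous hourly values of reservoir discharge and water level as predictors of the current water level), the sketch below fits a small feed-forward network on synthetic data; MLPRegressor stands in for the recurrent ANN of the study and none of the numbers are real.

```python
# Build lagged features [P(t), Q(t-1), Q(t-2), H(t-1), H(t-2)] and predict H(t).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 500
precip_daily = rng.gamma(2.0, 5.0, size=n)        # daily precipitation (mm), repeated hourly
discharge = rng.gamma(3.0, 10.0, size=n)          # hourly reservoir discharge (m3/s)
level = 0.02 * precip_daily + 0.05 * discharge + rng.normal(0, 0.5, size=n)  # hourly water level (m)

X = np.column_stack([precip_daily[2:], discharge[1:-1], discharge[:-2],
                     level[1:-1], level[:-2]])
y = level[2:]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, shuffle=False)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
print("R^2 on held-out hours:", round(ann.score(X_te, y_te), 3))
```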

  18. A general technique to train language models on language models

    NARCIS (Netherlands)

    Nederhof, MJ

    2005-01-01

    We show that under certain conditions, a language model can be trained on the basis of a second language model. The main instance of the technique trains a finite automaton on the basis of a probabilistic context-free grammar, such that the Kullback-Leibler distance between grammar and trained automaton

  19. A finite element parametric modeling technique of aircraft wing structures

    Institute of Scientific and Technical Information of China (English)

    Tang Jiapeng; Xi Ping; Zhang Baoyuan; Hu Bifu

    2013-01-01

    A finite element parametric modeling method for aircraft wing structures is proposed in this paper to address the time-consuming nature of finite element analysis pre-processing. The research is positioned in the preliminary design phase of aircraft structures. A knowledge-driven system for fast finite element modeling is built. Based on this method, and employing a template parametric technique, knowledge including design methods, rules, and expert experience in the modeling process is encapsulated and a finite element model is established automatically, which greatly improves the speed, accuracy, and standardization degree of modeling. The skeleton model, geometric mesh model, and finite element model, including the finite element mesh and property data, are established with parametric description and automatic updating. The outcomes of the research show that the method resolves a series of problems of parameter association and model updating in the process of finite element modeling, which establishes a key technical basis for finite element parametric analysis and optimization design.

  20. Knowledge-based techniques in software engineering

    Energy Technology Data Exchange (ETDEWEB)

    Jairam, B.N.; Agarwal, A.; Emrich, M.L.

    1988-05-04

    Recent trends in software engineering research focus on the incorporation of AI techniques. The feasibility of an overlap between AI and software engineering is examined. The benefits of merging the two fields are highlighted. The long-term goal is to automate the software development process. Some projects being undertaken towards the attainment of this goal are presented as examples. Finally, research on the Oak Ridge Reservation aimed at developing a knowledge-based software project management aid is presented. 25 refs., 1 tab.

  1. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B.; Fagan, J.R. Jr.

    1995-12-31

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. This will be accomplished in a cooperative program by Penn State University and the Allison Engine Company. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor.

  2. Artificial Intelligence based technique for BTS placement

    Science.gov (United States)

    Alenoghena, C. O.; Emagbetere, J. O.; Aibinu, A. M.

    2013-12-01

    The increase in base transceiver stations (BTS) in most urban areas can be traced to the drive by network providers to meet demand for coverage and capacity. In traditional network planning, the final decision on BTS placement is taken by a team of radio planners, and this decision is not foolproof against regulatory requirements. In this paper, an intelligence-based algorithm for optimal BTS site placement is proposed. The proposed technique objectively takes neighbour and regulatory considerations into account while determining the cell site. Its application leads to a quantitatively unbiased, evaluated decision-making process in BTS placement. Experimental data for a 2 km by 3 km territory were simulated to test the new algorithm; the results obtained show a 100% performance of the neighbour-constrained algorithm in BTS placement optimization. Results on the application of the GA with the neighbourhood constraint indicate that the choices of location can be unbiased and that optimization of facility placement for network design can be carried out.

  3. Feature-based multiresolution techniques for product design

    Institute of Scientific and Technical Information of China (English)

    LEE Sang Hun; LEE Kunwoo

    2006-01-01

    3D computer-aided design (CAD) systems based on the feature-based solid modelling technique have been widely adopted for product design. However, when part models associated with features are used in various downstream applications, simplified models at various levels of detail (LODs) are frequently more desirable than the full details of the parts. In particular, the need for feature-based multiresolution representation of a solid model, representing an object at multiple LODs at the feature level, is increasing for engineering tasks. One challenge is to generate valid models at various LODs after an arbitrary rearrangement of features using a certain LOD criterion, because composite Boolean operations consisting of union and subtraction are not commutative. The other challenges are to devise a proper topological framework for multiresolution representation, to suggest more reasonable LOD criteria, and to extend applications. This paper surveys the recent research on these issues.

  4. Technique for Assessing the Stability and Controllability Characteristics of Naval Aircraft Systems Based on the Rational Combination of Modeling, Identification and Flight Experiments

    Directory of Open Access Journals (Sweden)

    S. V. Nikolaev

    2015-01-01

    Full Text Available The aim of this work is to improve the test quality and reliability of modern naval aircraft for the assessment of stability and controllability characteristics and to shorten testing. To achieve this goal it is necessary to develop algorithmic, mathematical and methodological support for the flight trials and the mathematical modeling of controlled flight modes to determine the stability and controllability characteristics of the naval aircraft. The article analyses the problems related to determining the stability and controllability characteristics under flight tests and describes the technique used to correct a mathematical model of the aerodynamic characteristics and engine thrust forces of modern naval aircraft. It shows the importance of using an algorithm to control the correctness of onboard measurements of flight parameters. The article presents new results of identification of the aircraft aerodynamic coefficients and proves that in identifying characteristics of the longitudinal control channel it is necessary to take into account the engine thrust forces. In the article the aerodynamic coefficients obtained by identification methods are compared with those in the original aerodynamic data bank. An important and new component of the work, described in the fourth part of the article, is a set of computer programmes integrated into a common interface. The development of this software has greatly improved the processing technology for the flight experiment materials and the identification of the aerodynamic characteristics of the aircraft. When applying the work results in the testing phase, the required characteristics of stability and controllability are determined by simulation, and identification provides the model refinement according to the flight data. The created technology of practical identification is used to verify and refine the mathematical models according to the flight experiment data. Thus, the result is a proven and refined model of the aircraft

  5. 3D Model Reconstruction Based on Laser Scanning Technique

    Institute of Scientific and Technical Information of China (English)

    Nguyen Tien Thanh; 刘修国; 王红平; 于明旭; 周文浩

    2011-01-01

    The technique, method and workflow of building a 3D model from point cloud data acquired by a 3D laser scanning system are presented. The process of acquiring point cloud data with a terrestrial 3D laser scanner and the combined use of the RiSCAN PRO and Geomagic Studio software packages to build the 3D model are discussed. The original measurement point cloud data are processed (noise elimination, smoothing, registration of multi-station data, extraction of the target building, and so on) to obtain correct and complete surface information for the target building; a triangular mesh is then constructed to build its 3D surface model. Finally, texture mapping using photographs taken during data acquisition yields the realistic 3D model. The experiment shows that point cloud data acquired by a 3D laser scanning system can be dealt with effectively and rapid 3D visual modeling of a building can be achieved via the technique described above.
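
    A minimal sketch of a comparable workflow using the open-source Open3D library as a stand-in for RiSCAN PRO and Geomagic Studio: denoise and thin each scan station, register the stations with ICP, and reconstruct a triangular mesh. The file names, voxel sizes and thresholds are illustrative assumptions.

```python
# Point cloud denoising, registration and surface reconstruction with Open3D.
import numpy as np
import open3d as o3d

def preprocess(path, voxel_size=0.02):
    pcd = o3d.io.read_point_cloud(path)                       # load one scan station
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20,  # noise elimination
                                            std_ratio=2.0)
    pcd = pcd.voxel_down_sample(voxel_size)                   # smoothing / thinning
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel_size, max_nn=30))
    return pcd

source = preprocess("station_1.ply")
target = preprocess("station_2.ply")

# Pairwise ICP registration to stitch two stations into one point cloud.
result = o3d.pipelines.registration.registration_icp(
    source, target, max_correspondence_distance=0.05, init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
source.transform(result.transformation)
merged = source + target

# Triangular surface reconstruction of the merged building facade.
mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(merged, depth=9)
o3d.io.write_triangle_mesh("building_model.ply", mesh)
```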

  6. Choice of rainfall inputs for event-based rainfall-runoff modeling in a catchment with multiple rainfall stations using data-driven techniques

    Science.gov (United States)

    Chang, Tak Kwin; Talei, Amin; Alaghmand, Sina; Ooi, Melanie Po-Leen

    2017-02-01

    Input selection for data-driven rainfall-runoff models is an important task as these models find the relationship between rainfall and runoff by direct mapping of inputs to output. In this study, two different input selection methods were used: cross-correlation analysis (CCA), and a combination of mutual information and cross-correlation analyses (MICCA). Selected inputs were used to develop adaptive network-based fuzzy inference system (ANFIS) models in the Sungai Kayu Ara basin, Selangor, Malaysia. The study catchment has 10 rainfall stations and one discharge station located at the outlet of the catchment. A total of 24 rainfall-runoff events (10-min interval) from 1996 to 2004 were selected, from which 18 events were used for training and the remaining 6 were reserved for validating (testing) the models. The results of the ANFIS models were then compared against those obtained with the conceptual model HEC-HMS. The CCA and MICCA methods selected rainfall inputs from only 2 (stations 1 and 5) and 3 (stations 1, 3, and 5) rainfall stations, respectively. The ANFIS model developed from the MICCA inputs (ANFIS-MICCA) performed slightly better than the one developed from the CCA inputs (ANFIS-CCA). ANFIS-CCA and ANFIS-MICCA were able to perform comparably to the HEC-HMS model, for which rainfall data from all 10 stations had been used; however, in peak estimation, ANFIS-MICCA was the best model. A sensitivity analysis on HEC-HMS was conducted by recalibrating the model using the same rainfall stations selected for ANFIS. It was concluded that HEC-HMS model performance deteriorates if the number of rainfall stations is reduced. In general, ANFIS was found to be a reliable alternative to HEC-HMS in cases where not all rainfall stations are functioning. This study showed that the selected stations received the highest total rain and rainfall intensity (stations 3 and 5). Moreover, the contributing rainfall stations selected by CCA and MICCA were found to be located near the outlet of
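
    The input-selection step (not ANFIS itself) can be sketched as follows: each candidate rainfall station is scored against runoff by cross-correlation (as in CCA) and by mutual information (the non-linear ingredient of MICCA), and highly ranked stations are kept as model inputs. The synthetic data, station count and thresholds below are illustrative assumptions.

```python
# Rank candidate rainfall stations by linear and non-linear dependence on runoff.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(42)
n_steps, n_stations = 1000, 10
rain = rng.gamma(2.0, 3.0, size=(n_steps, n_stations))        # 10-min rainfall per station
runoff = 0.6 * rain[:, 0] + 0.3 * rain[:, 4] + rng.normal(0, 1.0, n_steps)

# Cross-correlation analysis: linear association of each station with runoff.
cc = np.array([np.corrcoef(rain[:, j], runoff)[0, 1] for j in range(n_stations)])

# Mutual information: also captures non-linear dependence.
mi = mutual_info_regression(rain, runoff, random_state=0)

# Keep stations that score highly on either criterion (threshold is arbitrary here);
# the selected columns would then feed an ANFIS or similar data-driven model.
selected = np.where((cc > 0.3) | (mi > 0.1))[0]
print("selected station indices:", selected)
```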

  7. Laser-based direct-write techniques for cell printing.

    Science.gov (United States)

    Schiele, Nathan R; Corr, David T; Huang, Yong; Raof, Nurazhani Abdul; Xie, Yubing; Chrisey, Douglas B

    2010-09-01

    Fabrication of cellular constructs with spatial control of cell location (+/-5 microm) is essential to the advancement of a wide range of applications including tissue engineering, stem cell and cancer research. Precise cell placement, especially of multiple cell types in co- or multi-cultures and in three dimensions, can enable research possibilities otherwise impossible, such as the cell-by-cell assembly of complex cellular constructs. Laser-based direct writing, a printing technique first utilized in electronics applications, has been adapted to transfer living cells and other biological materials (e.g., enzymes, proteins and bioceramics). Many different cell types have been printed using laser-based direct writing, and this technique offers significant improvements when compared to conventional cell patterning techniques. The predominance of work to date has not been in application of the technique, but rather focused on demonstrating the ability of direct writing to pattern living cells, in a spatially precise manner, while maintaining cellular viability. This paper reviews laser-based additive direct-write techniques for cell printing, and the various cell types successfully laser direct-written that have applications in tissue engineering, stem cell and cancer research are highlighted. A particular focus is paid to process dynamics modeling and process-induced cell injury during laser-based cell direct writing.

  8. Laser-based direct-write techniques for cell printing

    Energy Technology Data Exchange (ETDEWEB)

    Schiele, Nathan R; Corr, David T [Biomedical Engineering Department, Rensselaer Polytechnic Institute, Troy, NY (United States); Huang Yong [Department of Mechanical Engineering, Clemson University, Clemson, SC (United States); Raof, Nurazhani Abdul; Xie Yubing [College of Nanoscale Science and Engineering, University at Albany, SUNY, Albany, NY (United States); Chrisey, Douglas B, E-mail: schien@rpi.ed, E-mail: chrisd@rpi.ed [Material Science and Engineering Department, Rensselaer Polytechnic Institute, Troy, NY (United States)

    2010-09-15

    Fabrication of cellular constructs with spatial control of cell location (±5 µm) is essential to the advancement of a wide range of applications including tissue engineering, stem cell and cancer research. Precise cell placement, especially of multiple cell types in co- or multi-cultures and in three dimensions, can enable research possibilities otherwise impossible, such as the cell-by-cell assembly of complex cellular constructs. Laser-based direct writing, a printing technique first utilized in electronics applications, has been adapted to transfer living cells and other biological materials (e.g., enzymes, proteins and bioceramics). Many different cell types have been printed using laser-based direct writing, and this technique offers significant improvements when compared to conventional cell patterning techniques. The predominance of work to date has not been in application of the technique, but rather focused on demonstrating the ability of direct writing to pattern living cells, in a spatially precise manner, while maintaining cellular viability. This paper reviews laser-based additive direct-write techniques for cell printing, and the various cell types successfully laser direct-written that have applications in tissue engineering, stem cell and cancer research are highlighted. A particular focus is paid to process dynamics modeling and process-induced cell injury during laser-based cell direct writing. (topical review)

  9. Model assisted qualification of NDE techniques

    Science.gov (United States)

    Ballisat, Alexander; Wilcox, Paul; Smith, Robert; Hallam, David

    2017-02-01

    The costly and time-consuming nature of the empirical trials typically performed for NDE technique qualification is a major barrier to the introduction of NDE techniques into service. The use of computational models has been proposed as a method by which the process of qualification can be accelerated. However, given the number of possible parameters present in an inspection, the number of combinations of parameter values scales as a power law, and running simulations at all of these points rapidly becomes infeasible. Given that many NDE inspections result in a single-valued scalar quantity, such as a phase or amplitude, using suitable sampling and interpolation methods significantly reduces the number of simulations that have to be performed. This paper presents initial results of applying Latin Hypercube Designs and Multivariate Adaptive Regression Splines to the inspection of a fastener hole using an oblique ultrasonic shear wave inspection. It is demonstrated that an accurate mapping of the response of the inspection for the variations considered can be achieved by sampling only a small percentage of the parameter space of variations, and that the required percentage decreases as the number of parameters and the number of possible sample points increases. It is then shown how the outcome of this process can be used to assess the reliability of the inspection through commonly used metrics such as probability of detection, thereby providing an alternative methodology to the current practice of performing empirical probability of detection trials.
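
    A minimal sketch of the sampling-and-surrogate idea: draw a Latin Hypercube design over the inspection parameters, evaluate a (here faked) forward model only at those points, fit a regression surrogate, and query the cheap surrogate densely to estimate a probability of detection. GradientBoostingRegressor stands in for MARS, and the parameter ranges, placeholder physics and detection threshold are illustrative assumptions.

```python
# Latin Hypercube sampling of an inspection parameter space plus a cheap surrogate.
import numpy as np
from scipy.stats import qmc
from sklearn.ensemble import GradientBoostingRegressor

def forward_model(x):
    """Placeholder for an expensive NDE simulation: returns a response amplitude."""
    defect_size, angle, probe_offset = x.T
    return defect_size * np.cos(np.radians(angle)) - 0.02 * probe_offset**2

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=50)                                   # only 50 "simulation" runs
X = qmc.scale(unit, l_bounds=[0.5, 0.0, 0.0], u_bounds=[3.0, 30.0, 5.0])
y = forward_model(X)

surrogate = GradientBoostingRegressor(random_state=0).fit(X, y)

# Query the cheap surrogate densely to map the response over the space and
# estimate the fraction of scenarios exceeding a detection threshold.
dense = qmc.scale(qmc.LatinHypercube(d=3, seed=1).random(n=20000),
                  [0.5, 0.0, 0.0], [3.0, 30.0, 5.0])
pod = np.mean(surrogate.predict(dense) > 1.0)
print("estimated probability of detection:", round(pod, 3))
```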

  10. Natural and human causes of a flash flood in a small catchment (Rhodes Island, Greece) based on atmospheric forcing and runoff modeling techniques

    Science.gov (United States)

    Karalis, Sotirios; Katsafados, Petros; Karymbalis, Efthimios; Tsanakas, Konstantinos; Valkanou, Kanella

    2014-05-01

    This study investigates the natural (hydro-meteorological and geomorphological) and human-induced factors responsible for a flash flood event that occurred on November 22nd, 2013 in a small ungauged catchment (covering an area of about 24 km2) of Rhodes Island, Greece. The flash flooding killed four people and caused over €10 million worth of damage, located mainly around the Kremasti village. In this study the reconstruction of this extreme hydro-meteorological event is attempted by using detailed spatiotemporal rainfall information, a physically based hydrological model (LISEM) and the 1D hydraulic model HEC-RAS. Furthermore, the human impacts, which are responsible for extreme flood discharge within the drainage basin, are recorded and mapped. The major meteorological feature of this event is associated with the passage of a cold front over the SE Aegean Sea. The destructive flash flood was triggered by extreme precipitation (almost 100 mm in 4 hours was recorded at the meteorological stations closest to the flooded area). An advanced nowcasting method is applied in order to provide a high spatiotemporal distribution of the precipitation over the catchment area. OpenLISEM (Limburg Soil Erosion Model) is used as a runoff model for exploring the response of the catchment. It is a freeware raster model (based on PCRaster) that simulates the surface water and sediment balance for every grid cell. It is event-based and has fine spatial and temporal resolution. The model is designed to simulate the effects of detailed land use changes or conservation measures on runoff, flooding and erosion during heavy rainstorms. Since OpenLISEM provides a detailed simulation of runoff processes, it is very demanding on input data (it requires a minimum of 24 maps depending on the input options). The PCRaster GIS functionality was used to derive the necessary data from the basic maps (DEM, land unit map and map of impermeable areas). The sources for the basic maps include geological

  11. A novel model-free data analysis technique based on clustering in a mutual information space: application to resting-state fMRI

    Directory of Open Access Journals (Sweden)

    Simon Benjaminsson

    2010-08-01

    Full Text Available Non-parametric data-driven analysis techniques can be used to study datasets with few assumptions about the data and the underlying experiment. Variations of Independent Component Analysis (ICA) have been the methods most commonly used on fMRI data, e.g. in finding resting-state networks thought to reflect the connectivity of the brain. Here we present a novel data analysis technique and demonstrate it on resting-state fMRI data. It is a generic method with few underlying assumptions about the data. The results are built from the statistical relations between all input voxels, resulting in a whole-brain analysis at the voxel level. It has good scalability properties and the parallel implementation is capable of handling large datasets and databases. From the mutual information between the activities of the voxels over time, a distance matrix is created for all voxels in the input space. Multidimensional scaling is used to put the voxels into a lower-dimensional space reflecting the dependency relations based on the distance matrix. By performing clustering in this space we can find the strong statistical regularities in the data, which for the resting-state data turn out to be the resting-state networks. The decomposition is performed in the last step of the algorithm and is computationally simple. This opens up rapid analysis and visualization of the data at different spatial levels, as well as automatically finding a suitable number of decomposition components.
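
    The pipeline described above can be prototyped in a few lines: pairwise mutual information between voxel time courses, conversion to a distance matrix, multidimensional scaling into a low-dimensional space, and clustering there. The sketch below uses tiny synthetic data and simple binning for the mutual information estimate; the paper's implementation is parallel and operates on whole-brain datasets.

```python
# Mutual-information distance matrix -> MDS embedding -> clustering.
import numpy as np
from sklearn.metrics import mutual_info_score
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
n_voxels, n_timepoints = 60, 200
signals = rng.normal(size=(n_voxels, n_timepoints))
signals[:30] += np.sin(np.linspace(0, 20, n_timepoints))      # one "network" of covarying voxels

# Pairwise mutual information on binned signals.
binned = np.digitize(signals, np.quantile(signals, [0.25, 0.5, 0.75]))
mi = np.zeros((n_voxels, n_voxels))
for i in range(n_voxels):
    for j in range(i, n_voxels):
        mi[i, j] = mi[j, i] = mutual_info_score(binned[i], binned[j])

# Distance matrix: voxels with high mutual information end up close together.
dist = mi.max() - mi
np.fill_diagonal(dist, 0.0)

embedding = MDS(n_components=3, dissimilarity="precomputed", random_state=0).fit_transform(dist)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embedding)
print("cluster sizes:", np.bincount(labels))
```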

  12. Modelling Gesture Based Ubiquitous Applications

    CERN Document Server

    Zacharia, Kurien; Varghese, Surekha Mariam

    2011-01-01

    A cost-effective, gesture-based modelling technique called Virtual Interactive Prototyping (VIP) is described in this paper. Prototyping is implemented by projecting a virtual model of the equipment to be prototyped. Users can interact with the virtual model like the original working equipment. Image and sound processing techniques are used for capturing and tracking the user interactions with the model. VIP is a flexible and interactive prototyping method that has many applications in ubiquitous computing environments. Different commercial as well as socio-economic applications, and an extension of VIP to interactive advertising, are also discussed.

  13. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B. [Pennsylvania State Univ., University Park, PA (United States); Fagan, J.R. Jr. [Allison Engine Company, Indianapolis, IN (United States)

    1995-10-01

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbo-machinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution periodic and aperiodic components arising from various blade rows and non-deterministic (which includes random components) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery and drastically reduce the time required for the design and development cycle of turbomachinery.

  14. Clustering economies based on multiple criteria decision making techniques

    OpenAIRE

    2011-01-01

    One of the primary concerns in many countries is to determine the important factors affecting economic growth. In this paper, we study factors such as unemployment rate, inflation rate, population growth, average annual income, etc., to cluster different countries. The proposed model of this paper uses the analytical hierarchy process (AHP) to prioritize the criteria and then uses a K-means technique to cluster 59 countries, based on the ranked criteria, into four groups. The first group i...
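
    A minimal sketch of the two-stage procedure: derive criteria weights from an AHP pairwise-comparison matrix via its principal eigenvector, then cluster the countries with K-means on the weighted indicators. The comparison matrix and the indicator data below are illustrative assumptions, not the study's actual figures.

```python
# AHP criteria weighting followed by K-means clustering of economies.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# AHP pairwise comparisons for 4 criteria: unemployment, inflation, population
# growth, average annual income (Saaty 1-9 scale, reciprocal matrix).
A = np.array([[1,   3,   5,   1/2],
              [1/3, 1,   2,   1/4],
              [1/5, 1/2, 1,   1/6],
              [2,   4,   6,   1  ]])
eigvals, eigvecs = np.linalg.eig(A)
w = np.abs(np.real(eigvecs[:, np.argmax(np.real(eigvals))]))
weights = w / w.sum()                                   # AHP priority weights

rng = np.random.default_rng(3)
countries = rng.normal(size=(59, 4))                    # stand-in indicators for 59 economies
scaled = StandardScaler().fit_transform(countries) * weights

groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(scaled)
print("countries per cluster:", np.bincount(groups))
```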

  15. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address this computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the computational cost of RISMC analyses by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much shorter time (microseconds instead of hours/days).
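
    As an illustration of the surrogate-model idea (not the RISMC toolchain itself), the sketch below fits a Gaussian process to a small number of runs of a placeholder "expensive" simulator and then evaluates the cheap surrogate many times to estimate an exceedance probability; all functions, ranges and limits are illustrative assumptions.

```python
# Train a cheap surrogate on a handful of expensive runs, then sample it densely.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulation(x):
    """Stand-in for an hours-long physics run: returns a peak temperature (K)."""
    power, cooldown = x.T
    return 800 + 150 * power - 60 * np.log1p(cooldown)

rng = np.random.default_rng(7)
X_train = rng.uniform([0.8, 0.5], [1.2, 5.0], size=(30, 2))   # only 30 code runs
y_train = expensive_simulation(X_train)

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), normalize_y=True)
gp.fit(X_train, y_train)

# Brute-force sampling is now feasible on the surrogate (microseconds per call).
X_mc = rng.uniform([0.8, 0.5], [1.2, 5.0], size=(100000, 2))
exceedance = np.mean(gp.predict(X_mc) > 1000.0)               # probability of exceeding a limit
print("surrogate-estimated exceedance probability:", exceedance)
```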

  16. Facial Expression Recognition Techniques Based on Bilinear Model

    Institute of Scientific and Technical Information of China (English)

    徐欢

    2014-01-01

    Aiming at the problems currently existing in facial expression recognition, and based on the data in the 3D expression database BU-3DFE, we study the point cloud alignment of 3D facial expression data, establish bilinear models based on the aligned data, and improve the recognition algorithms based on the bilinear model in order to form new recognition and classification algorithms, reduce the amount of identity-feature calculation in the original algorithm, minimize the influence of identity features on the overall expression recognition process, and improve the results of facial expression recognition, ultimately achieving highly robust 3D facial expression recognition.

  17. Flat-Panel Detector-Based Volume Computed Tomography: A Novel 3D Imaging Technique to Monitor Osteolytic Bone Lesions in a Mouse Tumor Metastasis Model

    Directory of Open Access Journals (Sweden)

    Jeannine Missbach-Guentner

    2007-09-01

    Full Text Available Skeletal metastasis is an important cause of mortality in patients with breast cancer. Hence, animal models, in combination with various imaging techniques, are in high demand for preclinical assessment of novel therapies. We evaluated the applicability of flat-panel volume computed tomography (fpVCT) to noninvasive detection of osteolytic bone metastases that develop in severely immunodeficient mice after intracardial injection of MDA-MB-231 breast cancer cells. A single fpVCT scan at 200-µm isotropic resolution was employed to detect osteolysis within the entire skeleton. Osteolytic lesions identified by fpVCT correlated with Faxitron X-ray analysis and were subsequently confirmed by histopathological examination. Isotropic three-dimensional image data sets obtained by fpVCT were the basis for the precise visualization of the extent of the lesion within the cortical bone and for the measurement of bone loss. Furthermore, fpVCT imaging allows continuous monitoring of growth kinetics for each metastatic site and visualization of lesions in more complex regions of the skeleton, such as the skull. Our findings suggest that fpVCT is a powerful tool that can be used to monitor the occurrence and progression of osteolytic lesions in vivo and can be further developed to monitor responses to antimetastatic therapies over the course of the disease.

  18. X-Ray based Lung Function measurement–a sensitive technique to quantify lung function in allergic airway inflammation mouse models

    Science.gov (United States)

    Dullin, C.; Markus, M. A.; Larsson, E.; Tromba, G.; Hülsmann, S.; Alves, F.

    2016-11-01

    In mice, along with the assessment of eosinophils, lung function measurements, most commonly carried out by plethysmography, are essential to monitor the course of allergic airway inflammation, to examine therapy efficacy and to correlate animal with patient data. To date, plethysmography techniques either use intubation and/or restraining of the mice and are thus invasive, or are limited in their sensitivity. We present a novel unrestrained lung function method based on low-dose planar cinematic x-ray imaging (X-Ray Lung Function, XLF) and demonstrate its performance in monitoring OVA induced experimental allergic airway inflammation in mice and an improved assessment of the efficacy of the common treatment dexamethasone. We further show that XLF is more sensitive than unrestrained whole body plethysmography (UWBP) and that conventional broncho-alveolar lavage and histology provide only limited information of the efficacy of a treatment when compared to XLF. Our results highlight the fact that a multi-parametric imaging approach as delivered by XLF is needed to address the combined cellular, anatomical and functional effects that occur during the course of asthma and in response to therapy.

  19. X-Ray based Lung Function measurement-a sensitive technique to quantify lung function in allergic airway inflammation mouse models.

    Science.gov (United States)

    Dullin, C; Markus, M A; Larsson, E; Tromba, G; Hülsmann, S; Alves, F

    2016-11-02

    In mice, along with the assessment of eosinophils, lung function measurements, most commonly carried out by plethysmography, are essential to monitor the course of allergic airway inflammation, to examine therapy efficacy and to correlate animal with patient data. To date, plethysmography techniques either use intubation and/or restraining of the mice and are thus invasive, or are limited in their sensitivity. We present a novel unrestrained lung function method based on low-dose planar cinematic x-ray imaging (X-Ray Lung Function, XLF) and demonstrate its performance in monitoring OVA induced experimental allergic airway inflammation in mice and an improved assessment of the efficacy of the common treatment dexamethasone. We further show that XLF is more sensitive than unrestrained whole body plethysmography (UWBP) and that conventional broncho-alveolar lavage and histology provide only limited information of the efficacy of a treatment when compared to XLF. Our results highlight the fact that a multi-parametric imaging approach as delivered by XLF is needed to address the combined cellular, anatomical and functional effects that occur during the course of asthma and in response to therapy.

  20. Integration of genetic virtual screening patterns and latent multivariate modeling techniques for QSAR optimization based on combinations and/or interactions between peptides and proteins

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Both the concept and the model of snug quantitative structure-activity relationship (QSAR) were proposed and developed for molecular design through constructing QSAR based on some known mode of receptor/ligand interactions. Many disadvantages of traditional models can be avoided by using the proposed method, because the traditional models are determined only by the molecular structural features of the sample sets themselves. A genetic virtual screening of peptide/protein combinations (GVSPPC) is proposed for the first time by utilizing this idea to examine peptide/protein affinity activities. A genetic algorithm (GA) was developed for screening combinative targets with an interaction mode for virtual receptors. GVSPPC succeeds in overcoming difficulties in rational QSAR by searching for ligand/receptor interactions under conditions of unknown structures. Some bioactive oligo-/poly-peptide systems covering 58 angiotensin converting enzyme (ACE) inhibitors and 18 double-site mutation residues in the camel antibody protein cAb-Lys3 were investigated by GVSPPC with satisfactory results (R²cu > 0.91, Q²cv > 0.86, ERMS = 0.19–0.95), respectively, which demonstrates that GVSPPC is more interpretable in terms of the ligand-receptor interaction than the traditional QSAR method.

  1. Integration of genetic virtual screening patterns and latent multivariate modeling techniques for QSAR optimization based on combinations and/or interactions between peptides and proteins

    Institute of Scientific and Technical Information of China (English)

    LI ZhiLiang; CHEN Gang; LI GenRong; TIAN FeiFei; WU ShiRong; YANG ShanBin; YANG ShengXi; ZHOU Yuan; ZHANG QiaoXia; QIN RenHui; MEI Hu

    2008-01-01

    Both the concept and the model of snug quantitative structure-activity relationship (QSAR) were proposed and developed for molecular design through constructing QSAR based on some known mode of receptor/ligand interactions. Many disadvantages of traditional models can be avoided by using the proposed method, because the traditional models are determined only by the molecular structural features of the sample sets themselves. A genetic virtual screening of peptide/protein combinations (GVSPPC) is proposed for the first time by utilizing this idea to examine peptide/protein affinity activities. A genetic algorithm (GA) was developed for screening combinative targets with an interaction mode for virtual receptors. GVSPPC succeeds in overcoming difficulties in rational QSAR by searching for ligand/receptor interactions under conditions of unknown structures. Some bioactive oligo-/poly-peptide systems covering 58 angiotensin converting enzyme (ACE) inhibitors and 18 double-site mutation residues in the camel antibody protein cAb-Lys3 were investigated by GVSPPC with satisfactory results (Rcu2 > 0.91, Qcv2 > 0.86, ERMS = 0.19–0.95), respectively, which demonstrates that GVSPPC is more interpretable in terms of the ligand-receptor interaction than the traditional QSAR method.

  2. Videogrammetric Model Deformation Measurement Technique for Wind Tunnel Applications

    Science.gov (United States)

    Barrows, Danny A.

    2006-01-01

    Videogrammetric measurement technique developments at NASA Langley were driven largely by the need to quantify model deformation at the National Transonic Facility (NTF). This paper summarizes recent wind tunnel applications and issues at the NTF and other NASA Langley facilities including the Transonic Dynamics Tunnel, 31-Inch Mach 10 Tunnel, 8-Ft High Temperature Tunnel, and the 20-Ft Vertical Spin Tunnel. In addition, several adaptations of wind tunnel techniques to non-wind tunnel applications are summarized. These applications include wing deformation measurements on vehicles in flight, determining aerodynamic loads based on optical elastic deformation measurements, measurements on ultra-lightweight and inflatable space structures, and the use of an object-to-image plane scaling technique to support NASA's Space Exploration program.

  3. Extensions in model-based system analysis

    OpenAIRE

    Graham, Matthew R.

    2007-01-01

    Model-based system analysis techniques provide a means for determining desired system performance prior to actual implementation. In addition to specifying desired performance, model-based analysis techniques require mathematical descriptions that characterize relevant behavior of the system. The developments of this dissertation give extended formulations for control-relevant model estimation as well as model-based analysis conditions for performance requirements specified as frequency do...

  4. Compact Models and Measurement Techniques for High-Speed Interconnects

    CERN Document Server

    Sharma, Rohit

    2012-01-01

    Compact Models and Measurement Techniques for High-Speed Interconnects provides detailed analysis of issues related to high-speed interconnects from the perspective of modeling approaches and measurement techniques. Particular focus is laid on the unified approach (variational method combined with the transverse transmission line technique) to develop efficient compact models for planar interconnects. This book will give a qualitative summary of the various reported modeling techniques and approaches and will help researchers and graduate students with deeper insights into interconnect models in particular and interconnect in general. Time domain and frequency domain measurement techniques and simulation methodology are also explained in this book.

  5. NEW VERSATILE CAMERA CALIBRATION TECHNIQUE BASED ON LINEAR RECTIFICATION

    Institute of Scientific and Technical Information of China (English)

    Pan Feng; Wang Xuanyin

    2004-01-01

    A new versatile camera calibration technique for machine vision using off-the-shelf cameras is described. To address the large distortion of off-the-shelf cameras, a new camera distortion rectification technology based on line rectification is proposed. A full camera distortion model is introduced and a linear algorithm is provided to obtain the solution. After the camera rectification, the intrinsic and extrinsic parameters are obtained based on the relationship between the homography and the absolute conic. This technology needs neither a high-accuracy three-dimensional calibration block, nor a complicated translation or rotation platform. Both simulations and experiments show that this method is effective and robust.
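
    For comparison, a standard planar-pattern calibration with OpenCV is sketched below; it recovers the intrinsic matrix, distortion coefficients and per-view extrinsics from checkerboard images. This is shown only as a conventional baseline, not the line-rectification algorithm of the paper, and the image file names and board geometry are illustrative assumptions.

```python
# Conventional checkerboard camera calibration with OpenCV.
import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # inner corners of the checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob("calib_*.jpg"):             # assumed calibration images
    gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Intrinsics K, distortion coefficients, and per-view extrinsics (rvecs, tvecs).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("re-projection RMS error:", rms)
undistorted = cv2.undistort(cv2.imread("calib_0.jpg"), K, dist)
```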

  6. New modulation-based watermarking technique for video

    Science.gov (United States)

    Lemma, Aweke; van der Veen, Michiel; Celik, Mehmet

    2006-02-01

    Successful watermarking algorithms have already been developed for various applications ranging from meta-data tagging to forensic tracking. Nevertheless, it is worthwhile to develop alternative watermarking techniques that provide a broader basis for meeting emerging services, usage models and security threats. To this end, we propose a new multiplicative watermarking technique for video, which is based on the principles of our successful MASK audio watermark. Audio-MASK embedded the watermark by modulating the short-time envelope of the audio signal and performed detection using a simple envelope detector followed by a SPOMF (symmetrical phase-only matched filter). Video-MASK takes a similar approach and modulates the image luminance envelope. In addition, it incorporates a simple model to account for the luminance sensitivity of the HVS (human visual system). Preliminary tests show the algorithm's transparency and robustness to lossy compression.
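
    A minimal sketch of multiplicative, modulation-based embedding on the luminance channel in the spirit of the approach above; the actual Video-MASK scheme modulates a short-time luminance envelope and detects with SPOMF, whereas everything below (the pseudo-random pattern, the strength alpha and the simple correlation detector) is a simplified illustrative assumption.

```python
# Multiplicative luminance watermark: embed a known +-1 pattern, then correlate.
import numpy as np

rng = np.random.default_rng(0)
height, width = 144, 176
luma = rng.integers(16, 236, size=(height, width)).astype(np.float64)  # Y channel of one frame

# Spread-spectrum payload: a pseudo-random +-1 pattern known to the detector.
watermark = rng.choice([-1.0, 1.0], size=(height, width))
alpha = 0.03                                            # embedding strength (perceptual budget)

marked = np.clip(luma * (1.0 + alpha * watermark), 0, 255)

# Blind correlation detector: correlate the mean-removed frame with the pattern.
residual = marked - marked.mean()
score = float(np.sum(residual * watermark) / residual.size)
print("detector correlation score:", round(score, 4))   # positive when the mark is present
```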

  7. Mammalian Models Based on RCAS-TVA Technique

    Institute of Scientific and Technical Information of China (English)

    牛屹东; 梁蜀龙

    2008-01-01

    The avian retroviral vector (RCAS) has been widely used in the avian system to study development and diseases, but it is not suitable for mammals, which do not produce the retrovirus receptor TVA. In this review, we trace the current uses of the RCAS-TVA approach in the mammalian system with improved strategies, including the generation of tv-a transgenic mice, the selection of specific promoters, the use of soluble TVA receptor and retroviral receptor-ligand fusion proteins, and improvements to RCAS vectors, and we compare a series of mammalian models in various studies of gene function, development, oncogenesis and gene therapy. All these studies demonstrate that RCAS-TVA based mammalian models are powerful tools for understanding the mechanisms and targeted treatment of human diseases.

  8. Dynamics-based Nondestructive Structural Monitoring Techniques

    Science.gov (United States)

    2012-06-21

    in the practice of non-destructive evaluation (NDE) and structural health monitoring (SHM). Guided wave techniques have several advantages over ... conventional bulk wave ultrasonic NDE/SHM techniques. Some of these advantages are outlined in Table I. However, in addition to the advantages of ... PVDF transducers for SHM applications with controlled guided wave modes and frequencies [7]. Wilcox used EMATs with circular coils in a guided wave

  9. DCT-based cyber defense techniques

    Science.gov (United States)

    Amsalem, Yaron; Puzanov, Anton; Bedinerman, Anton; Kutcher, Maxim; Hadar, Ofer

    2015-09-01

    With the increasing popularity of video streaming services and multimedia sharing via social networks, there is a need to protect the multimedia from malicious use. An attacker may use steganography and watermarking techniques to embed malicious content in order to attack the end user. Most attack algorithms are robust to basic image processing techniques such as filtering, compression, noise addition, etc. Hence, in this article two novel, real-time defense techniques are proposed: smart threshold and anomaly correction. Both techniques operate in the DCT domain and are applicable to JPEG images and H.264 I-frames. The defense performance was evaluated against a highly robust attack, and the perceptual quality degradation was measured by the well-known PSNR and SSIM quality assessment metrics. A set of defense techniques is suggested for improving the defense efficiency. For the most aggressive attack configuration, the combination of all the defense techniques results in 80% protection against cyber-attacks with a PSNR of 25.74 dB.
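
    A minimal sketch of DCT-domain coefficient thresholding in the spirit of the smart-threshold defence: transform 8x8 blocks, clamp AC coefficients whose magnitude exceeds a bound, and invert. The fixed bound used here is an illustrative assumption, not the authors' adaptive rule.

```python
# Block-wise DCT coefficient thresholding as a simple embedded-payload scrubber.
import numpy as np
from scipy.fft import dctn, idctn

def defend(image, limit=400.0):
    h, w = image.shape
    out = np.empty_like(image, dtype=np.float64)
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            block = image[y:y+8, x:x+8].astype(np.float64)
            coeffs = dctn(block, norm="ortho")
            dc = coeffs[0, 0]                              # keep the DC term untouched
            coeffs = np.clip(coeffs, -limit, limit)        # suppress implausibly large AC terms
            coeffs[0, 0] = dc
            out[y:y+8, x:x+8] = idctn(coeffs, norm="ortho")
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in for a decoded I-frame
clean = defend(frame)
print("max pixel change after defence:",
      int(np.max(np.abs(clean.astype(int) - frame.astype(int)))))
```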

  10. An observational model for biomechanical assessment of sprint kayaking technique.

    Science.gov (United States)

    McDonnell, Lisa K; Hume, Patria A; Nolte, Volker

    2012-11-01

    Sprint kayaking stroke phase descriptions for biomechanical analysis of technique vary across the kayaking literature, with inconsistencies not conducive to the advancement of applied biomechanics service or research. We aimed to provide a consistent basis for the categorisation and analysis of sprint kayak technique by proposing a clear observational model. Electronic databases were searched using the key words kayak, sprint, technique, and biomechanics, with 20 sources reviewed. Nine phase-defining positions were identified within the kayak literature and were divided into three distinct types based on how the positions were defined: water-contact-defined positions, paddle-shaft-defined positions, and body-defined positions. Videos of elite paddlers from multiple camera views were reviewed to determine the visibility of the positions used to define phases. The water-contact-defined positions of catch, immersion, extraction, and release were visible from multiple camera views, and were therefore suitable for practical use by coaches and researchers. Using these positions, phases and sub-phases were created for a new observational model. We recommend that kayaking data should be reported using single strokes and described using two phases: water and aerial. For more detailed analysis without disrupting the basic two-phase model, a four-sub-phase model consisting of entry, pull, exit, and aerial sub-phases should be used.

  11. Nasal base narrowing: the combined alar base excision technique.

    Science.gov (United States)

    Foda, Hossam M T

    2007-01-01

    To evaluate the role of the combined alar base excision technique in narrowing the nasal base and correcting excessive alar flare. The study included 60 cases presenting with a wide nasal base and excessive alar flaring. The surgical procedure combined an external alar wedge resection with an internal vestibular floor excision. All cases were followed up for a mean of 32 (range, 12-144) months. Nasal tip modification and correction of any preexisting caudal septal deformities were always completed before the nasal base narrowing. The mean width of the external alar wedge excised was 7.2 (range, 4-11) mm, whereas the mean width of the sill excision was 3.1 (range, 2-7) mm. Completing the internal excision first resulted in a more conservative external resection, thus avoiding any blunting of the alar-facial crease. No cases of postoperative bleeding, infection, or keloid formation were encountered, and the external alar wedge excision healed with an inconspicuous scar that was well hidden in the depth of the alar-facial crease. Finally, the risk of notching of the alar rim, which can occur at the junction of the external and internal excisions, was significantly reduced by adopting a 2-layered closure of the vestibular floor (P = .01). The combined alar base excision resulted in effective narrowing of the nasal base with elimination of excessive alar flare. Commonly feared complications, such as blunting of the alar-facial crease or notching of the alar rim, were avoided by using simple modifications in the technique of excision and closure.

  12. SMS Spam Filtering Technique Based on Artificial Immune System

    Directory of Open Access Journals (Sweden)

    Tarek M Mahmoud

    2012-03-01

    Full Text Available The Short Message Service (SMS) has an important economic impact for end users and service providers. Spam is a serious universal problem that causes trouble for almost all users. Several studies have been presented, including implementations of spam filters that prevent spam from reaching its destination. The Naïve Bayesian algorithm is one of the most effective approaches used in filtering techniques. The computational power of smart phones is increasing, making it increasingly possible to perform spam filtering on these devices as a mobile agent application, leading to better personalization and effectiveness. The challenge of filtering SMS spam is that the short messages often consist of few words composed of abbreviations and idioms. In this paper, we propose an anti-spam technique based on an Artificial Immune System (AIS) for filtering SMS spam messages. The proposed technique utilizes a set of features that can be used as inputs to a spam detection model. The idea is to classify messages using a trained dataset that contains Phone Numbers, Spam Words, and Detectors. Our proposed technique utilizes a double collection of bulk SMS messages, Spam and Ham, in the training process. We describe a set of stages that help us build the dataset, such as a tokenizer, a stop-word filter, and the training process. Experimental results presented in this paper are based on the iPhone Operating System (iOS). The results on the testing messages show that the proposed system can classify SMS spam and ham accurately compared with the Naïve Bayesian algorithm.
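
    As an illustration of the detector-matching idea (phone numbers, spam words and detectors used as features), here is a toy sketch in Python. The feature lists, scoring weights and threshold are invented for illustration and are not taken from the paper.

```python
# Toy detector-matching spam filter in the spirit of an AIS approach.
# Blacklist, spam words, detectors, weights and threshold are all invented examples.
import re

SPAM_NUMBERS = {"+10000000000"}                          # hypothetical blacklisted senders
SPAM_WORDS = {"winner", "free", "claim", "prize", "urgent"}
DETECTORS = [{"claim", "prize"}, {"free", "winner"}]     # word-set "detectors"

def tokenize(msg):
    return set(re.findall(r"[a-z']+", msg.lower()))

def is_spam(msg, sender=None, threshold=2.0):
    tokens = tokenize(msg)
    score = 0.0
    if sender in SPAM_NUMBERS:
        score += 2.0                                     # known spam source
    score += len(tokens & SPAM_WORDS)                    # spam-word hits
    score += sum(1.5 for d in DETECTORS if d <= tokens)  # full detector matches
    return score >= threshold

print(is_spam("URGENT! You are a winner, claim your free prize now"))   # True
print(is_spam("See you at lunch tomorrow"))                             # False
```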

  13. A Comparative Study of Three Vibration Based Damage Assessment Techniques

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Rytter, A.

    Three different vibration based damage assessment techniques have been compared. One of the techniques uses the ratios between changes in experimentally and theoretically estimated natural frequencies, respectively, to locate a damage. The second technique relies on updating of an FEM based...

  14. One technique for refining the global Earth gravity models

    Science.gov (United States)

    Koneshov, V. N.; Nepoklonov, V. B.; Polovnev, O. V.

    2017-01-01

    The results of theoretical and experimental research on a technique for refining global Earth geopotential models such as EGM2008 in continental regions are presented. The discussed technique is based on high-resolution satellite data for the Earth's surface topography, which enables allowance for the fine structure of the Earth's gravitational field without additional gravimetry data. The experimental studies are conducted using the example of the new GGMplus global gravity model of the Earth with a resolution of about 0.5 km, which is obtained by expanding the EGM2008 model to degree 2190 with corrections for the topography calculated from the SRTM data. The GGMplus and EGM2008 models are compared with regional geoid models in 21 regions of North America, Australia, Africa, and Europe. The obtained estimates largely support the possibility of refining global geopotential models such as EGM2008 by the procedure implemented in GGMplus, particularly in regions with relatively high elevation differences.

  15. Noninvasive in vivo glucose sensing using an iris based technique

    Science.gov (United States)

    Webb, Anthony J.; Cameron, Brent D.

    2011-03-01

    Physiological glucose monitoring is an important aspect of the treatment of individuals afflicted with diabetes mellitus. Although invasive techniques for glucose monitoring are widely available, it would be very beneficial to make such measurements in a noninvasive manner. In this study, a New Zealand White (NZW) rabbit animal model was utilized to evaluate a developed iris-based imaging technique for the in vivo measurement of physiological glucose concentration. The animals were anesthetized with isoflurane, and an insulin/dextrose protocol was used to control blood glucose concentration. To further help restrict eye movement, a developed ocular fixation device was used. During the experimental time frame, near-infrared illuminated iris images were acquired along with corresponding discrete blood glucose measurements taken with a handheld glucometer. Calibration was performed using an image-based Partial Least Squares (PLS) technique. Independent validation was also performed to assess model performance along with Clarke Error Grid Analysis (CEGA). Initial validation results were promising and show that a high percentage of the predicted glucose concentrations are within 20% of the reference values.
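
    The calibration step described above (image-derived features regressed onto reference glucose with Partial Least Squares) can be sketched with scikit-learn as below. The synthetic feature matrix stands in for the real iris-image features, and the number of PLS components is an assumption.

```python
# Minimal PLS calibration sketch: X holds image-derived features, y reference glucose.
# Synthetic data stand in for the real measurements; n_components is an assumption.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))                                      # 120 images x 50 features
y = 90 + 8 * X[:, :5].sum(axis=1) + rng.normal(scale=5, size=120)   # reference glucose, mg/dL

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, y_tr)
y_hat = pls.predict(X_te).ravel()

within_20pct = np.mean(np.abs(y_hat - y_te) <= 0.2 * y_te)
print(f"fraction of predictions within 20% of reference: {within_20pct:.2f}")
```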

  16. IMAGE SEGMENTATION BASED ON MARKOV RANDOM FIELD AND WATERSHED TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    纳瑟; 刘重庆

    2002-01-01

    This paper presents a method that incorporates Markov Random Field (MRF), watershed segmentation and merging techniques for performing image segmentation and edge detection tasks. MRF is used to obtain an initial estimate of the regions in the image under processing, where in the MRF model the gray level x at pixel location i in an image X depends on the gray levels of neighboring pixels. The process needs an initial segmented result, which is obtained with a K-means clustering technique and the minimum-distance criterion; the region process is then modeled by MRF to obtain an image containing regions of different intensity. Starting from this, we calculate the gradient values of that image and then employ a watershed technique. The MRF method yields an image that has distinct intensity regions together with all the edge and region information; the segmentation result is then improved by superimposing a closed, accurate boundary on each region using the watershed algorithm. After all pixels of the segmented regions have been processed, a map of primitive regions with edges is generated. Finally, a merging process based on averaged mean values is employed. The final segmentation and edge detection result is one closed boundary per actual region in the image.
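
    A minimal sketch of part of this pipeline (K-means initial labelling, gradient computation, watershed refinement) is shown below using scikit-learn and scikit-image; the MRF modelling and region-merging steps are omitted, and the number of clusters and the marker-selection rule are assumptions.

```python
# Sketch of the K-means -> gradient -> watershed part of the pipeline (MRF step omitted).
# The cluster count and the low-gradient marker rule are assumptions.
import numpy as np
from sklearn.cluster import KMeans
from skimage import data, filters, segmentation

img = data.coins().astype(float)

# 1) initial intensity-region estimate: K-means on gray levels
k = 3
labels0 = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(
    img.reshape(-1, 1)).reshape(img.shape)

# 2) gradient image
gradient = filters.sobel(img)

# 3) watershed: seed only confident (low-gradient) pixels with their K-means label
markers = np.zeros(img.shape, dtype=int)
low_grad = gradient < np.percentile(gradient, 20)
markers[low_grad] = labels0[low_grad] + 1
labels = segmentation.watershed(gradient, markers=markers)
print("regions found:", len(np.unique(labels)))
```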

  17. OFF-LINE HANDWRITING RECOGNITION USING VARIOUS HYBRID MODELING TECHNIQUES AND CHARACTER N-GRAMS

    NARCIS (Netherlands)

    Brakensiek, A.; Rottland, J.; Kosmala, A.; Rigoll, G.

    2004-01-01

    In this paper a system for on-line cursive handwriting recognition is described. The system is based on Hidden Markov Models (HMMs) using discrete and hybrid modeling techniques. Here, we focus on two aspects of the recognition system. First, we present different hybrid modeling techniques, whereas

  18. A Comparison of Evolutionary Computation Techniques for IIR Model Identification

    Directory of Open Access Journals (Sweden)

    Erik Cuevas

    2014-01-01

    Full Text Available System identification is a complex optimization problem which has recently attracted attention in the fields of science and engineering. In particular, the use of infinite impulse response (IIR) models for identification is preferred over their equivalent FIR (finite impulse response) models, since the former yield more accurate models of physical plants for real-world applications. However, IIR structures tend to produce multimodal error surfaces whose cost functions are significantly difficult to minimize. Evolutionary computation techniques (ECT) are used to estimate the solution to complex optimization problems. They are often designed to meet the requirements of particular problems because no single optimization algorithm can solve all problems competitively. Therefore, when new algorithms are proposed, their relative efficacies must be appropriately evaluated. Several comparisons among ECT have been reported in the literature. Nevertheless, they suffer from one limitation: their conclusions are based on the performance of popular evolutionary approaches over a set of synthetic functions with exact solutions and well-known behaviors, without considering the application context or including recent developments. This study presents a comparison of various evolutionary computation optimization techniques applied to IIR model identification. Results over several models are presented and statistically validated.
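
    As one concrete instance of an evolutionary computation technique applied to IIR model identification, the sketch below uses SciPy's differential evolution to fit a second-order IIR model to synthetic input-output data by minimizing the output mean squared error. The plant, parameter bounds and noise level are assumptions, and this is not necessarily one of the specific algorithms compared in the paper.

```python
# Sketch: identify a 2nd-order IIR model with differential evolution by minimizing
# the output MSE; the synthetic "plant", bounds and noise level are assumptions.
import numpy as np
from scipy.signal import lfilter
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
x = rng.normal(size=2000)                              # excitation signal
b_true, a_true = [0.3, 0.2], [1.0, -0.9, 0.4]          # "unknown" plant
y = lfilter(b_true, a_true, x) + 0.01 * rng.normal(size=x.size)

def mse(theta):
    b = theta[:2]
    a = np.r_[1.0, theta[2:]]                          # a0 fixed to 1
    y_hat = lfilter(b, a, x)
    err = np.mean((y - y_hat) ** 2)
    return err if np.isfinite(err) else 1e12           # guard against unstable candidates

bounds = [(-1, 1), (-1, 1), (-2, 2), (-1, 1)]          # b0, b1, a1, a2
result = differential_evolution(mse, bounds, seed=0, tol=1e-8)
print("estimated [b0, b1, a1, a2]:", np.round(result.x, 3))
```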

  19. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    Haan, de G.; Veer, van der G.C.; Vliet, van J.C.

    1991-01-01

    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in hum

  20. A Different Web-Based Geocoding Service Using Fuzzy Techniques

    Science.gov (United States)

    Pahlavani, P.; Abbaspour, R. A.; Zare Zadiny, A.

    2015-12-01

    Geocoding - the process of finding a position based on descriptive data such as an address or postal code - is considered one of the most commonly used spatial analyses. Many online map providers such as Google Maps, Bing Maps and Yahoo Maps present geocoding as one of their basic capabilities. Despite the diversity of geocoding services, users usually face some limitations when they use available online geocoding services. In existing geocoding services, the concept of proximity and nearness is not modelled appropriately, and these services search for an address only by address matching based on descriptive data. In addition, there are also some limitations in displaying search results. Resolving these limitations can enhance the efficiency of existing geocoding services. This paper proposes the idea of integrating fuzzy techniques with the geocoding process to resolve these limitations. In order to implement the proposed method, a web-based system is designed. In the proposed method, nearness to places is defined by fuzzy membership functions, and multiple fuzzy distance maps are created. These fuzzy distance maps are then integrated using a fuzzy overlay technique to obtain the results. The proposed method provides different capabilities for users, such as the ability to search multi-part addresses, search for places based on their location, represent results as non-point features, and display search results based on their priority.

  1. A DIFFERENT WEB-BASED GEOCODING SERVICE USING FUZZY TECHNIQUES

    Directory of Open Access Journals (Sweden)

    P. Pahlavani

    2015-12-01

    Full Text Available Geocoding – the process of finding a position based on descriptive data such as an address or postal code – is considered one of the most commonly used spatial analyses. Many online map providers such as Google Maps, Bing Maps and Yahoo Maps present geocoding as one of their basic capabilities. Despite the diversity of geocoding services, users usually face some limitations when they use available online geocoding services. In existing geocoding services, the concept of proximity and nearness is not modelled appropriately, and these services search for an address only by address matching based on descriptive data. In addition, there are also some limitations in displaying search results. Resolving these limitations can enhance the efficiency of existing geocoding services. This paper proposes the idea of integrating fuzzy techniques with the geocoding process to resolve these limitations. In order to implement the proposed method, a web-based system is designed. In the proposed method, nearness to places is defined by fuzzy membership functions, and multiple fuzzy distance maps are created. These fuzzy distance maps are then integrated using a fuzzy overlay technique to obtain the results. The proposed method provides different capabilities for users, such as the ability to search multi-part addresses, search for places based on their location, represent results as non-point features, and display search results based on their priority.
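
    The core idea of fuzzy nearness maps combined by a fuzzy overlay can be sketched on a toy grid as follows; the linear membership function and the min (fuzzy AND) overlay operator are assumptions rather than the paper's exact choices.

```python
# Fuzzy "nearness" maps for two reference places combined with a fuzzy overlay.
# The linear membership function and the min (fuzzy AND) operator are assumptions.
import numpy as np

def nearness(dist_m, zero_at=2000.0):
    """Membership in [0, 1]: 1 on top of the place, 0 beyond zero_at metres."""
    return np.clip(1.0 - dist_m / zero_at, 0.0, 1.0)

# toy 200 x 200 grid with 10 m cells; two places the user wants to be near
yy, xx = np.mgrid[0:200, 0:200]
dist_to = lambda px, py: 10.0 * np.hypot(xx - px, yy - py)

mu_school = nearness(dist_to(50, 60))
mu_metro = nearness(dist_to(140, 120))

overlay = np.minimum(mu_school, mu_metro)          # near school AND near metro
best = np.unravel_index(np.argmax(overlay), overlay.shape)
print("best cell (row, col):", best, "membership:", round(float(overlay[best]), 3))
```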

  2. A VIKOR Technique with Applications Based on DEMATEL and ANP

    Science.gov (United States)

    Ou Yang, Yu-Ping; Shieh, How-Ming; Tzeng, Gwo-Hshiung

    In multiple criteria decision making (MCDM) methods, the compromise ranking method (named VIKOR) was introduced as one applicable technique to implement within MCDM. It was developed for multicriteria optimization of complex systems. However, few papers discuss conflicting (competing) criteria with dependence and feedback in the compromise solution method. Therefore, this study proposes and provides applications for a novel model using the VIKOR technique based on DEMATEL and the ANP to solve the problem of conflicting criteria with dependence and feedback. In addition, this research also uses DEMATEL to normalize the unweighted supermatrix of the ANP to suit the real world. An example is also presented to illustrate the proposed method with applications thereof. The results show the proposed method is suitable and effective in real-world applications.
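
    The compromise-ranking core of VIKOR (the S, R and Q measures) can be sketched as below; in the paper the criteria weights come from DEMATEL and the ANP, whereas here they are placeholder values and all criteria are assumed to be benefit-type.

```python
# Core VIKOR computation (S, R, Q). In the paper the weights come from DEMATEL/ANP;
# here they are placeholders, and all criteria are treated as benefit-type.
import numpy as np

F = np.array([[7.0, 6.0, 8.0],        # alternatives (rows) x criteria (columns)
              [8.0, 4.0, 6.0],
              [6.0, 8.0, 7.0]])
w = np.array([0.40, 0.35, 0.25])      # criteria weights (placeholder)
v = 0.5                               # weight of the "majority rule" strategy

f_best, f_worst = F.max(axis=0), F.min(axis=0)
d = (f_best - F) / (f_best - f_worst)            # normalised distance to the ideal
S = (w * d).sum(axis=1)                          # group utility
R = (w * d).max(axis=1)                          # individual regret
Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())
print("compromise ranking (best first):", np.argsort(Q))
```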

  3. Model-based geostatistics

    CERN Document Server

    Diggle, Peter J

    2007-01-01

    Model-based geostatistics refers to the application of general statistical principles of modeling and inference to geostatistical problems. This volume provides a treatment of model-based geostatistics with an emphasis on statistical methods and applications. It also features analyses of datasets from a range of scientific contexts.

  4. Regression based peak load forecasting using a transformation technique

    Energy Technology Data Exchange (ETDEWEB)

    Haida, Takeshi; Muto, Shoichi (Tokyo Electric Power Co. (Japan). Computer and Communication Research Center)

    1994-11-01

    This paper presents a regression-based daily peak load forecasting method with a transformation technique. In order to forecast the load precisely throughout a year, seasonal load change, annual load growth and the latest daily load change should all be considered. To deal with these characteristics in the load forecasting, a transformation technique is presented. This technique consists of a transformation function with translation and reflection methods. The transformation function is estimated with the previous year's data points, so that the function converts those data points into a set of new data points while preserving the shape of the temperature-load relationship of the previous year. Then, the function is slightly translated so that the transformed data points fit the shape of the temperature-load relationship in the current year. Finally, multivariate regression analysis with the latest daily loads and weather observations estimates the forecasting model. Large forecasting errors caused by the nonlinear weather-load characteristic in transitional seasons such as spring and fall are reduced. The performance of the technique, verified with simulations on actual load data of the Tokyo Electric Power Company, is also described.

  5. Cooperative cognitive radio networking system model, enabling techniques, and performance

    CERN Document Server

    Cao, Bin; Mark, Jon W

    2016-01-01

    This SpringerBrief examines the active cooperation between users of Cooperative Cognitive Radio Networking (CCRN), exploring the system model, enabling techniques, and performance. The brief provides a systematic study on active cooperation between primary users and secondary users, i.e., (CCRN), followed by the discussions on research issues and challenges in designing spectrum-energy efficient CCRN. As an effort to shed light on the design of spectrum-energy efficient CCRN, they model the CCRN based on orthogonal modulation and orthogonally dual-polarized antenna (ODPA). The resource allocation issues are detailed with respect to both models, in terms of problem formulation, solution approach, and numerical results. Finally, the optimal communication strategies for both primary and secondary users to achieve spectrum-energy efficient CCRN are analyzed.

  6. Evolution of Modelling Techniques for Service Oriented Architecture

    Directory of Open Access Journals (Sweden)

    Mikit Kanakia

    2014-07-01

    Full Text Available Service-oriented architecture (SOA) is a software design and architecture pattern based on independent pieces of software providing functionality as services to other applications. The benefit of SOA in the IT infrastructure is to allow parallel use of, and data exchange between, the programs which act as services to the enterprise. Unified Modelling Language (UML) is a standardized general-purpose modelling language in the field of software engineering. UML includes a set of graphic notation techniques to create visual models of object-oriented software systems. We want to make UML available for SOA as well. SoaML (Service oriented architecture Modelling Language) is an open source specification project from the Object Management Group (OMG), describing a UML profile and meta-model for the modelling and design of services within a service-oriented architecture. BPMN was also extended for SOA, but with a few pitfalls. There is a need for a modelling framework dedicated to SOA. Michael Bell authored such a framework, called the Service Oriented Modelling Framework (SOMF), which is dedicated to SOA.

  7. Techniques and Simulation Models in Risk Management

    OpenAIRE

    Mirela GHEORGHE

    2012-01-01

    In the present paper, the scientific approach of the research starts from the theoretical framework of the simulation concept and then continues in the setting of the practical reality, thus providing simulation models for a broad range of inherent risks specific to any organization and simulation of those models, using the informatics instrument @Risk (Palisade). The reason behind this research lies in the need for simulation models that will allow the person in charge with decision taking i...

  8. Path Based Mapping Technique for Robots

    Directory of Open Access Journals (Sweden)

    Amiraj Dhawan

    2013-05-01

    Full Text Available The purpose of this paper is to explore a new way of autonomous mapping. Current systems using perception techniques like LASER or SONAR use probabilistic methods and have the drawback of allowing considerable uncertainty in the mapping process. Our approach is to break down the environment, specifically indoor, into reachable areas and objects, separated by boundaries, and to identify their shapes, in order to render various navigable paths around them. This is a novel method to do away with uncertainties, as far as possible, at the cost of temporal efficiency. Also, this system demands only minimal and cheap hardware, as it relies only on infra-red sensors to do the job.

  9. PIE: A Dynamic Failure-Based Technique

    Science.gov (United States)

    Voas, Jeffrey M.

    1990-01-01

    This paper presents a dynamic technique for statistically estimating three program characteristics that affect a program's computational behavior: (1) the probability that a particular section of a program is executed, (2) the probability that the particular section affects the data state, and (3) the probability that a data state produced by that section has an effect on program output. These three characteristics can be used to predict whether faults are likely to be uncovered by software testing. Index Terms: Software testing, data state, fault, failure, testability.

  10. GIS Based Stereoscopic Visualization Technique for Weather Radar Data

    Science.gov (United States)

    Lim, S.; Jang, B. J.; Lee, K. H.; Lee, C.; Kim, W.

    2014-12-01

    As rainfall becomes more erratic and localized, it is important to provide prompt and accurate warnings to the public. To monitor localized heavy rainfall, a reliable disaster monitoring system with advanced remote observation technology and a high-precision display system is needed. To achieve even more accurate weather monitoring using weather radar, there has been growing interest in real-time mapping of radar observations onto geographical coordinate systems, along with visualization and display methods for radar data based on spatial interpolation techniques and geographical information systems (GIS). Currently, the method of simultaneously displaying GIS and radar data is widely used to synchronize the radar and ground systems accurately, and displaying radar data in a 2D GIS coordinate system has been the most common way of providing weather information from weather radar. This paper proposes a realistic 3D weather radar data display technique with higher spatiotemporal resolution, based on the integration of 3D image processing and GIS interaction. The method focuses on stereoscopic visualization, while conventional radar image display works are based on flat or two-dimensional interpretation. Furthermore, using the proposed technique, the atmospheric change at each moment can be observed three-dimensionally at various geographical locations simultaneously. Simulation results indicate that 3D display of weather radar data can be performed in real time. One merit of the proposed technique is that it provides an intuitive understanding of the influence of beam blockage by topography. By exactly matching each 3D-modelled radar beam with the 3D GIS map, terrain-masked areas can be identified, which facilitates correcting QPE underestimation caused by ground clutter filtering. It can also be expected that more accurate short-term forecasting will be

  11. Application of experimental design techniques to structural simulation meta-model building using neural network

    Institute of Scientific and Technical Information of China (English)

    费庆国; 张令弥

    2004-01-01

    Neural networks are being used to construct meta-models in numerical simulation of structures. In addition to network structures and training algorithms, training samples also greatly affect the accuracy of neural network models. In this paper, the main existing sampling techniques are evaluated, including techniques based on experimental design theory, random selection, and rotating sampling. First, the advantages and disadvantages of each technique are reviewed. Then, seven techniques are used to generate samples for training radial neural network models for two benchmarks: an antenna model and an aircraft model. Results show that uniform design, which takes into account both the number of samples and the mean square error of the network models, is the best sampling technique for neural-network-based meta-model building.

  12. Comparison of Vibration-Based Damage Assessment Techniques

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Rytter, A.

    1995-01-01

    Three different vibration-based damage assessment techniques have been compared. One of the techniques uses the ratios between changes in experimentally and theoretically estimated natural frequencies, respectively, to locate a damage. The second technique relies on updating of a finite element m...

  13. A TECHNIQUE OF DIGITAL SURFACE MODEL GENERATION

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    It is usually a time-consuming process to set up a 3D digital surface model (DSM) of an object with a complex surface in real time. On the basis of the architectural survey project "Chilin Nunnery Reconstruction", this paper investigates an easy and feasible way, that is, applying digital close-range photogrammetry and CAD techniques on the project site to establish the DSM for simulating ancient architectures with complex surfaces. The method has proved very effective in practice.

  14. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  15. Empirically Based, Agent-based models

    Directory of Open Access Journals (Sweden)

    Elinor Ostrom

    2006-12-01

    Full Text Available There is an increasing drive to combine agent-based models with empirical methods. An overview is provided of the various empirical methods that are used for different kinds of questions. Four categories of empirical approaches are identified in which agent-based models have been empirically tested: case studies, stylized facts, role-playing games, and laboratory experiments. We discuss how these different types of empirical studies can be combined. The various ways empirical techniques are used illustrate the main challenges of contemporary social sciences: (1) how to develop models that are generalizable and still applicable in specific cases, and (2) how to scale up the processes of interactions of a few agents to interactions among many agents.

  16. A New Particle Swarm Optimization Based Stock Market Prediction Technique

    Directory of Open Access Journals (Sweden)

    Essam El. Seidy

    2016-04-01

    Full Text Available Over the last years, the average person's interest in the stock market has grown dramatically. This demand has doubled with the advancement of technology that has opened up the international stock market, so that nowadays anybody can own stocks and use many types of software to pursue the desired profit with minimum risk. Consequently, the analysis and prediction of future values and trends of the financial markets have received more attention, and due to the large number of applications in different business transactions, stock market prediction has become a critical topic of research. In this paper, our earlier presented particle swarm optimization with center of mass technique (PSOCoM) is applied to the task of training an adaptive linear combiner to form a new stock market prediction model. This prediction model is used with some common indicators to maximize the return and minimize the risk for the stock market. The experimental results show that the proposed technique is superior to the other PSO-based models in terms of prediction accuracy.
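
    A hedged sketch of the underlying idea, a PSO-trained adaptive linear combiner used for one-step-ahead prediction, is given below. It uses a plain PSO rather than the paper's PSOCoM variant, and the price series, swarm parameters and lag length are assumptions.

```python
# Plain PSO (not the paper's PSOCoM variant) training an adaptive linear combiner
# for one-step-ahead prediction; the price series and swarm parameters are synthetic.
import numpy as np

rng = np.random.default_rng(0)
price = np.cumsum(rng.normal(size=400)) + 100.0          # synthetic price series
lag = 5
X = np.array([price[i - lag:i] for i in range(lag, len(price))])
y = price[lag:]                                          # one-step-ahead targets

def mse(w):
    return np.mean((X @ w - y) ** 2)

n_particles, iters = 30, 200
pos = rng.normal(scale=0.1, size=(n_particles, lag))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([mse(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[pbest_val.argmin()].copy()

print("training MSE of the PSO-trained combiner:", round(float(mse(gbest)), 4))
```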

  17. Noise Evaluation Technique Based on Surface Pressure

    DEFF Research Database (Denmark)

    Fischer, Andreas

    2012-01-01

    In this chapter the relevant theory for the understanding of TE noise modeling is collected. It contains the acoustic formulations of [31] and [57]. Both give a relation for the far-field sound pressure as a function of the frequency-wavenumber spectral density of the pressure on the airfoil...

  18. IMPROVED AERODYNAMIC APPROXIMATION MODEL WITH CASE-BASED REASONING TECHNIQUE FOR MDO OF AIRCRAFT

    Institute of Scientific and Technical Information of China (English)

    白振东; 刘虎; 柴雪; 武哲

    2008-01-01

    To increase the efficiency of the multidisciplinary optimization of aircraft, an aerodynamic approximation model is improved. Based on the study of the aerodynamic approximation model constructed by the scaling correction model, a case-based reasoning technique is introduced to improve the approximation model for optimization. The aircraft case model is constructed by utilizing the plane parameters related to aerodynamic characteristics as attributes of the cases, and the case-retrieval formula is improved. Finally, the aerodynamic approximation model for optimization is improved by reusing the correction factors of the aircraft most similar to the current one. The multidisciplinary optimization of a civil aircraft concept is carried out with the improved aerodynamic approximation model. The results demonstrate that, for aircraft design optimization problems with a large number of design variables, the precision and the efficiency of the optimization can be improved by utilizing the improved aerodynamic approximation model with the case-based reasoning technique.

  19. The base line problem in DLTS technique

    OpenAIRE

    G. Couturier; Thabti, A.; Barrière, A.S.

    1989-01-01

    This paper describes a solution to suppress the base line problem in DLTS spectroscopy using a lock-in amplifier. The method has been used to characterize deep levels in a GaAs Schottky diode. Comparison with the classical method based on the use of a capacitance meter in the differential mode is established. The electric field dependence of the DLTS signal in a weakly doped semiconductor is also reported and proves the efficiency of the method. Finally, the data process is discussed.

  20. Use of advanced modeling techniques to optimize thermal packaging designs.

    Science.gov (United States)

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed

  1. Microwave Diffraction Techniques from Macroscopic Crystal Models

    Science.gov (United States)

    Murray, William Henry

    1974-01-01

    Discusses the construction of a diffractometer table and four microwave models which are built of styrofoam balls with implanted metallic reflecting spheres and designed to simulate the structures of carbon (graphite structure), sodium chloride, tin oxide, and palladium oxide. Included are samples of Bragg patterns and computer-analysis results.…

  2. Application Research on Multiple Material Modeling Based on FDM Technique

    Institute of Scientific and Technical Information of China (English)

    莫志勇; 冯春梅; 尹亚楠; 白英杰; 袁尧; 陈瑾

    2016-01-01

    The use of multiple materials in three-dimensional printing can not only enhance the appearance of the model, but also make it possible to produce heterogeneous parts, so in recent years it has been used more and more widely. In order to solve the fundamental problems that hinder multiple-material 3-D printing, several modeling methods are studied. On the basis of a comprehensive comparison, the multiple-color distance field method is presented for multiple-material CAD modeling; such a model can describe not only the traditional structural information but also the material distribution of the model, expressed with different colors. Besides, deformation is an important factor determining the quality of the model. In order to solve the problem of deformation in FDM printing, a theoretical analysis is conducted and a mathematical model is established, from which ways to reduce deformation are found. The new methods are now being used in multiple-material 3-D printing with satisfactory results.

  3. A Study of Big Data Modeling Techniques of Single Disease Knowledge Base of Tumor and Cardiovascular Disease

    Institute of Scientific and Technical Information of China (English)

    林婕

    2015-01-01

    This article studies big data modeling techniques for a single-disease knowledge base of tumor and cardiovascular disease. Key technical difficulties in the modeling process and their solutions are discussed, in order to provide clinicians with full-sample evidence-based medicine support and thereby improve the diagnosis and treatment of major diseases.

  4. An Authentication Technique Based on Classification

    Institute of Scientific and Technical Information of China (English)

    李钢; 杨杰

    2004-01-01

    We present a novel watermarking approach based on classification for authentication, in which a watermark is embedded into the host image. When the marked image is modified, the extracted watermark also differs from the original watermark, and different kinds of modification lead to different extracted watermarks. In this paper, different kinds of modification are considered as classes, and we use a classification algorithm to recognize the modifications with high probability. Simulation results show that the proposed method is promising and effective.

  5. Validation technique using mean and variance of kriging model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ho Sung; Jung, Jae Jun; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)

    2007-07-01

    Rigorously validating the accuracy of a metamodel is an important research area in metamodel techniques. A leave-k-out cross-validation technique not only requires considerable computational cost but also cannot measure the fidelity of the metamodel quantitatively. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even while the kriging model is still inaccurate. In this research, we propose a new validation technique using the average and the variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique is more efficient and accurate than the cross-validation technique, because it integrates the kriging model explicitly to achieve an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a similar trend to the root mean squared error, such that it can be used as a stop criterion for sequential sampling.
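
    The idea of using the kriging model's own predictive mean and variance as a stopping criterion during sequential sampling can be sketched with a Gaussian-process regressor as below; the one-dimensional test function, the uncertainty-based point selection and the tolerance are assumptions, not the paper's exact criterion.

```python
# Sketch: average predictive standard deviation of a kriging (GP) model used as a
# stop criterion during sequential sampling. Test function, point-selection rule and
# tolerance are assumptions; the paper's exact criterion is not reproduced.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

f = lambda x: np.sin(3 * x) + 0.5 * x                 # stand-in for an expensive simulation
X_grid = np.linspace(0, 4, 200).reshape(-1, 1)

X = np.array([[0.0], [2.0], [4.0]])                   # initial design
for _ in range(20):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                  normalize_y=True).fit(X, f(X).ravel())
    mean, std = gp.predict(X_grid, return_std=True)
    if std.mean() < 0.02:                             # validation / stop criterion (assumed)
        break
    X = np.vstack([X, X_grid[std.argmax()]])          # sample the most uncertain point

print(f"stopped after {len(X)} samples, mean predictive std = {std.mean():.4f}")
```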

  6. Model-based tomographic reconstruction

    Science.gov (United States)

    Chambers, David H; Lehman, Sean K; Goodman, Dennis M

    2012-06-26

    A model-based approach to estimating wall positions for a building is developed and tested using simulated data. It borrows two techniques from geophysical inversion problems, layer stripping and stacking, and combines them with a model-based estimation algorithm that minimizes the mean-square error between the predicted signal and the data. The technique is designed to process multiple looks from an ultra wideband radar array. The processed signal is time-gated and each section processed to detect the presence of a wall and estimate its position, thickness, and material parameters. The floor plan of a building is determined by moving the array around the outside of the building. In this paper we describe how the stacking and layer stripping algorithms are combined and show the results from a simple numerical example of three parallel walls.

  7. Sketch-based Interfaces and Modeling

    CERN Document Server

    Jorge, Joaquim

    2011-01-01

    The field of sketch-based interfaces and modeling (SBIM) is concerned with developing methods and techniques to enable users to interact with a computer through sketching - a simple, yet highly expressive medium. SBIM blends concepts from computer graphics, human-computer interaction, artificial intelligence, and machine learning. Recent improvements in hardware, coupled with new machine learning techniques for more accurate recognition, and more robust depth inferencing techniques for sketch-based modeling, have resulted in an explosion of both sketch-based interfaces and pen-based computing

  8. Huffman-based code compression techniques for embedded processors

    KAUST Repository

    Bonny, Mohamed Talal

    2010-09-01

    The size of embedded software is increasing at a rapid pace. It is often challenging and time consuming to fit an amount of required software functionality within a given hardware resource budget. Code compression is a means to alleviate the problem by providing substantial savings in terms of code size. In this article we introduce a novel and efficient hardware-supported compression technique that is based on Huffman Coding. Our technique reduces the size of the generated decoding table, which takes a large portion of the memory. It combines our previous techniques, Instruction Splitting Technique and Instruction Re-encoding Technique into new one called Combined Compression Technique to improve the final compression ratio by taking advantage of both previous techniques. The instruction Splitting Technique is instruction set architecture (ISA)-independent. It splits the instructions into portions of varying size (called patterns) before Huffman coding is applied. This technique improves the final compression ratio by more than 20% compared to other known schemes based on Huffman Coding. The average compression ratios achieved using this technique are 48% and 50% for ARM and MIPS, respectively. The Instruction Re-encoding Technique is ISA-dependent. It investigates the benefits of reencoding unused bits (we call them reencodable bits) in the instruction format for a specific application to improve the compression ratio. Reencoding those bits can reduce the size of decoding tables by up to 40%. Using this technique, we improve the final compression ratios in comparison to the first technique to 46% and 45% for ARM and MIPS, respectively (including all overhead that incurs). The Combined Compression Technique improves the compression ratio to 45% and 42% for ARM and MIPS, respectively. In our compression technique, we have conducted evaluations using a representative set of applications and we have applied each technique to two major embedded processor architectures
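
    For reference, the Huffman-coding core that these compression techniques build on can be sketched as follows; the instruction "patterns" here are toy strings, and the paper's instruction-splitting and re-encoding steps are not reproduced.

```python
# Huffman code construction for instruction "patterns" (toy byte strings); the paper's
# instruction-splitting and re-encoding steps are not reproduced here.
import heapq
from collections import Counter

def huffman_code(symbols):
    freq = Counter(symbols)
    # heap entries: [total frequency, tie-breaker, {symbol: code-so-far}]
    nodes = [[n, i, {sym: ""}] for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(nodes)
    if len(nodes) == 1:                                  # degenerate single-symbol case
        return {next(iter(nodes[0][2])): "0"}
    uid = len(nodes)
    while len(nodes) > 1:
        lo, hi = heapq.heappop(nodes), heapq.heappop(nodes)
        for s in lo[2]:
            lo[2][s] = "0" + lo[2][s]
        for s in hi[2]:
            hi[2][s] = "1" + hi[2][s]
        heapq.heappush(nodes, [lo[0] + hi[0], uid, {**lo[2], **hi[2]}])
        uid += 1
    return nodes[0][2]

patterns = ["e3a0", "e3a0", "e59f", "e59f", "e59f", "eb00", "e12f", "e3a0"]
codes = huffman_code(patterns)
orig_bits = 16 * len(patterns)                           # 16-bit patterns, uncompressed
comp_bits = sum(len(codes[p]) for p in patterns)
print(codes, f"compression ratio: {comp_bits / orig_bits:.2f}")
```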

  9. Metamaterials modelling, fabrication and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited but epsilon-near-zero and sub-unitary refraction index are also...... parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of metamaterials research field that we deal with at our department. From the modelling part, various approaches for determining the value of the refractive index...

  10. Metamaterials modelling, fabrication, and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    2012-01-01

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refraction index are also...... parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, we will present our approach for determining the field enhancement in slits...

  11. FDI and Accommodation Using NN Based Techniques

    Science.gov (United States)

    Garcia, Ramon Ferreiro; de Miguel Catoira, Alberto; Sanz, Beatriz Ferreiro

    Dynamic backpropagation neural networks are applied extensively to closed-loop control FDI (fault detection and isolation) tasks. The process dynamics is mapped by means of a trained backpropagation NN, which is then applied to residual generation. Process supervision is then applied to discriminate faults in process sensors and process plant parameters. A rule-based expert system is used to implement the decision-making task and the corresponding solution in terms of fault accommodation and/or reconfiguration. Results show an efficient and robust FDI system which could be used as the core of a SCADA or, alternatively, as a complementary supervision tool operating in parallel with the SCADA when applied to a heat exchanger.

  12. Autonomous selection of PDE inpainting techniques vs. exemplar inpainting techniques for void fill of high resolution digital surface models

    Science.gov (United States)

    Rahmes, Mark; Yates, J. Harlan; Allen, Josef DeVaughn; Kelley, Patrick

    2007-04-01

    High resolution Digital Surface Models (DSMs) may contain voids (missing data) due to the data collection process used to obtain the DSM, inclement weather conditions, low returns, system errors/malfunctions for various collection platforms, and other factors. DSM voids are also created during bare earth processing where culture and vegetation features have been extracted. The Harris LiteSite TM Toolkit handles these void regions in DSMs via two novel techniques. We use both partial differential equations (PDEs) and exemplar based inpainting techniques to accurately fill voids. The PDE technique has its origin in fluid dynamics and heat equations (a particular subset of partial differential equations). The exemplar technique has its origin in texture analysis and image processing. Each technique is optimally suited for different input conditions. The PDE technique works better where the area to be void filled does not have disproportionately high frequency data in the neighborhood of the boundary of the void. Conversely, the exemplar based technique is better suited for high frequency areas. Both are autonomous with respect to detecting and repairing void regions. We describe a cohesive autonomous solution that dynamically selects the best technique as each void is being repaired.

  13. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...

  14. Segmentation of Color Images Based on Different Segmentation Techniques

    Directory of Open Access Journals (Sweden)

    Purnashti Bhosale

    2013-03-01

    Full Text Available In this paper, we propose a color image segmentation algorithm based on different segmentation techniques. We recognize background objects such as the sky, ground, and trees based on color and texture information using various segmentation methods. Segmentation techniques using different threshold methods, such as global and local techniques, are studied and compared with one another so as to choose the best technique for threshold segmentation. Further segmentation is then performed using a clustering method and a graph-cut method to improve the segmentation results.
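
    The comparison of global and local threshold techniques mentioned above can be sketched with scikit-image as below; the sample image and the local block size and offset are assumptions.

```python
# Global (Otsu) versus local adaptive thresholding on a sample image.
# The block size and offset of the local method are assumptions.
from skimage import data, filters

img = data.page()                                   # grayscale sample image

t_global = filters.threshold_otsu(img)
seg_global = img > t_global

t_local = filters.threshold_local(img, block_size=35, offset=10)
seg_local = img > t_local

print("global Otsu threshold:", t_global)
print("foreground fraction (global / local):",
      float(seg_global.mean()), float(seg_local.mean()))
```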

  15. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear-space static data structures ... for range reporting problems in the pointer machine and the I/O-model. With this technique, we tighten the gap between the known upper bound and lower bound for the most fundamental range reporting problem, orthogonal range reporting.

  16. Earthquake Analysis of Structure by Base Isolation Technique in SAP

    OpenAIRE

    T. Subramani; J. Jothi

    2014-01-01

    This paper presents an overview of the present state of base isolation techniques with special emphasis and a brief on other techniques developed world over for mitigating earthquake forces on the structures. The dynamic analysis procedure for isolated structures is briefly explained. The provisions of FEMA 450 for base isolated structures are highlighted. The effects of base isolation on structures located on soft soils and near active faults are given in brief. Simple case s...

  17. First-Stage Development and Validation of a Web-Based Automated Dietary Modeling Tool: Using Constraint Optimization Techniques to Streamline Food Group and Macronutrient Focused Dietary Prescriptions for Clinical Trials

    Science.gov (United States)

    Morrison, Evan; Sullivan, Emma; Dam, Hoa Khanh

    2016-01-01

    Background Standardizing the background diet of participants during a dietary randomized controlled trial is vital to trial outcomes. For this process, dietary modeling based on food groups and their target servings is employed via a dietary prescription before an intervention, often using a manual process. Partial automation has employed the use of linear programming. Validity of the modeling approach is critical to allow trial outcomes to be translated to practice. Objective This paper describes the first-stage development of a tool to automatically perform dietary modeling using food group and macronutrient requirements as a test case. The Dietary Modeling Tool (DMT) was then compared with existing approaches to dietary modeling (manual and partially automated), which were previously available to dietitians working within a dietary intervention trial. Methods Constraint optimization techniques were implemented to determine whether nonlinear constraints are best suited to the development of the automated dietary modeling tool using food composition and food consumption data. Dietary models were produced and compared with a manual Microsoft Excel calculator, a partially automated Excel Solver approach, and the automated DMT that was developed. Results The web-based DMT was produced using nonlinear constraint optimization, incorporating estimated energy requirement calculations, nutrition guidance systems, and the flexibility to amend food group targets for individuals. Percentage differences between modeling tools revealed similar results for the macronutrients. Polyunsaturated fatty acids and monounsaturated fatty acids showed greater variation between tools (practically equating to a 2-teaspoon difference), although it was not considered clinically significant when the whole diet, as opposed to targeted nutrients or energy requirements, were being addressed. Conclusions Automated modeling tools can streamline the modeling process for dietary intervention trials
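
    The constraint-optimization idea behind the tool can be sketched as a small linear program (the paper additionally investigates nonlinear constraints, which are not shown). All food-composition values, targets and bounds below are invented for illustration.

```python
# Diet-style constraint optimization sketch with scipy.optimize.linprog.
# All composition values, targets and bounds are invented; the paper's tool also
# explores nonlinear constraints, which are not shown here.
import numpy as np
from scipy.optimize import linprog

# columns: servings of [bread, milk, chicken, vegetables]
energy = np.array([290.0, 260.0, 340.0, 80.0])     # kJ per serving (made up)
protein = np.array([3.0, 8.5, 30.0, 2.0])          # g per serving (made up)
cost = np.ones(4)                                   # minimise the total number of servings

# constraints: 8000 kJ <= total energy <= 9000 kJ, protein >= 80 g
A_ub = np.vstack([energy, -energy, -protein])
b_ub = np.array([9000.0, -8000.0, -80.0])
bounds = [(0, 10)] * 4                              # 0-10 servings of each food group

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("servings per food group:", np.round(res.x, 2))
```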

  18. Orientation of student entrepreneurial practices based on administrative techniques

    Directory of Open Access Journals (Sweden)

    Héctor Horacio Murcia Cabra

    2005-07-01

    Full Text Available As part of the second phase of the research project «Application of a creativity model to update the teaching of administration in Colombian agricultural entrepreneurial systems», it was decided to reinforce the planning and execution skills of the students of the Agricultural Business Administration Faculty of La Salle University. Those finishing their studies were given special attention. The plan of action was initiated in the second semester of 2003. It was initially defined as a model of entrepreneurial strengthening based on a coherent methodology that included the most recent administration and management techniques. Later, the applicability of this model was tested in some organizations of the agricultural sector that had asked for support in their planning processes. Through an action-research process the methodology was redefined in order to arrive at a final model that could be used by faculty students and graduates. The results obtained were applied to the teaching of the Entrepreneurial Laboratory to ninth-semester students with the hope of improving administrative support to agricultural enterprises. Following this procedure, more than 100 students and 200 agricultural producers have applied it between June 2003 and July 2005. The methodology used and the results obtained are presented in this article.

  19. Array-based techniques for fingerprinting medicinal herbs

    Directory of Open Access Journals (Sweden)

    Xue Charlie

    2011-05-01

    Full Text Available Poor quality control of medicinal herbs has led to instances of toxicity, poisoning and even deaths. The fundamental step in quality control of herbal medicine is accurate identification of herbs. Array-based techniques have recently been adapted to authenticate or identify herbal plants. This article reviews the current array-based techniques, e.g. oligonucleotide microarrays, gene-based probe microarrays, Suppression Subtractive Hybridization (SSH)-based arrays, Diversity Array Technology (DArT) and Subtracted Diversity Array (SDA). We further compare these techniques according to important parameters such as markers, polymorphism rates, restriction enzymes and sample type. The applicability of the array-based methods for fingerprinting depends on the availability of genomic and genetic information for the species to be fingerprinted. For species with little genome sequence information but high polymorphism rates, SDA techniques are particularly recommended because they require less labour and lower material cost.

  20. Inverter-based circuit design techniques for low supply voltages

    CERN Document Server

    Palani, Rakesh Kumar

    2017-01-01

    This book describes intuitive analog design approaches using digital inverters, providing filter architectures and circuit techniques that enable high-performance analog circuit design. The authors provide process, supply voltage and temperature (PVT) variation-tolerant design techniques for inverter-based circuits. They also discuss various analog design techniques for lower technology nodes and lower power supplies, which can be used for designing high-performance systems-on-chip.

  1. Clustering economies based on multiple criteria decision making techniques

    Directory of Open Access Journals (Sweden)

    Mansour Momeni

    2011-10-01

    Full Text Available One of the primary concerns in many countries is to determine the important factors affecting economic growth. In this paper, we study factors such as unemployment rate, inflation ratio, population growth, average annual income, etc. to cluster different countries. The proposed model uses the analytical hierarchy process (AHP) to prioritize the criteria and then uses a K-means technique to cluster 59 countries, based on the ranked criteria, into four groups. The first group includes countries with high living standards such as Germany and Japan. In the second cluster, there are some developing countries with relatively good economic growth such as Saudi Arabia and Iran. The third cluster belongs to countries with faster rates of growth compared with the countries located in the second group, such as China, India and Mexico. Finally, the fourth cluster includes countries with relatively very low rates of growth such as Jordan, Mali, Niger, etc.
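
    A minimal sketch of the two-step procedure (AHP weights from a pairwise comparison matrix, then K-means on the weighted indicators) is given below; the comparison judgements and country data are toy values, not those of the study.

```python
# AHP weights from a pairwise comparison matrix (principal eigenvector), then K-means
# on the weighted indicators. The judgements and country data are toy values.
import numpy as np
from sklearn.cluster import KMeans

# pairwise comparisons of 3 criteria (toy judgements)
A = np.array([[1.0, 3.0, 0.5],
              [1 / 3, 1.0, 0.25],
              [2.0, 4.0, 1.0]])
eigvals, eigvecs = np.linalg.eig(A)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                                     # AHP priority weights

# rows: countries, columns: standardised criterion values (toy data)
X = np.array([[0.2, 0.1, 0.9],
              [0.4, 0.3, 0.7],
              [0.8, 0.9, 0.2],
              [0.7, 0.8, 0.3],
              [0.3, 0.2, 0.8],
              [0.9, 0.7, 0.1]])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X * w)
print("AHP weights:", np.round(w, 3), "cluster labels:", labels)
```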

  2. Prediction of survival with alternative modeling techniques using pseudo values

    NARCIS (Netherlands)

    T. van der Ploeg (Tjeerd); F.R. Datema (Frank); R.J. Baatenburg de Jong (Robert Jan); E.W. Steyerberg (Ewout)

    2014-01-01

    textabstractBackground: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo

  3. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    techniques for model calibration. For Bayesian model calibration, we employ adaptive Metropolis algorithms to construct densities for input parameters in the heat model and the HIV model. To quantify the uncertainty in the parameters, we employ two MCMC algorithms: Delayed Rejection Adaptive Metropolis (DRAM) [33] and Differential Evolution Adaptive Metropolis (DREAM) [66, 68]. The densities obtained using these methods are compared to those obtained through the direct numerical evaluation of the Bayes' formula. We also combine uncertainties in input parameters and measurement errors to construct predictive estimates for a model response. A significant emphasis is on the development and illustration of techniques to verify the accuracy of sampling-based Metropolis algorithms. We verify the accuracy of DRAM and DREAM by comparing chains, densities and correlations obtained using DRAM, DREAM and the direct evaluation of Bayes formula. We also perform similar analysis for credible and prediction intervals for responses. Once the parameters are estimated, we employ energy statistics test [63, 64] to compare the densities obtained by different methods for the HIV model. The energy statistics are used to test the equality of distributions. We also consider parameter selection and verification techniques for models having one or more parameters that are noninfluential in the sense that they minimally impact model outputs. We illustrate these techniques for a dynamic HIV model but note that the parameter selection and verification framework is applicable to a wide range of biological and physical models. To accommodate the nonlinear input to output relations, which are typical for such models, we focus on global sensitivity analysis techniques, including those based on partial correlations, Sobol indices based on second-order model representations, and Morris indices, as well as a parameter selection technique based on standard errors. A significant objective is to provide
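
    As a simplified illustration of sampling-based Bayesian model calibration, the sketch below runs a plain random-walk Metropolis sampler (not DRAM or DREAM) to calibrate a single parameter of a toy exponential-decay model; the prior, noise level and proposal step size are assumptions.

```python
# Plain random-walk Metropolis (not DRAM/DREAM) calibrating one parameter of a toy
# exponential-decay model; prior, noise level and step size are assumptions.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 5, 40)
theta_true, sigma = 0.8, 0.05
data = np.exp(-theta_true * t) + rng.normal(scale=sigma, size=t.size)

def log_post(theta):
    if not 0.0 < theta < 5.0:                        # uniform prior on (0, 5)
        return -np.inf
    resid = data - np.exp(-theta * t)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2    # Gaussian log-likelihood (up to a constant)

chain, theta, lp = [], 1.5, log_post(1.5)
for _ in range(20000):
    prop = theta + 0.05 * rng.normal()               # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:          # Metropolis accept/reject
        theta, lp = prop, lp_prop
    chain.append(theta)

burned = np.array(chain[5000:])
print(f"posterior mean {burned.mean():.3f}, 95% interval "
      f"[{np.percentile(burned, 2.5):.3f}, {np.percentile(burned, 97.5):.3f}]")
```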

  4. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  5. Use of surgical techniques in the rat pancreas transplantation model

    National Research Council Canada - National Science Library

    Ma, Yi; Guo, Zhi-Yong

    2008-01-01

    ... (also called type 1 diabetes). With the improvement of microsurgical techniques, pancreas transplantation in rats has been the major model for physiological and immunological experimental studies in the past 20 years...

  6. Web Based VRML Modelling

    NARCIS (Netherlands)

    Kiss, S.

    2001-01-01

    Presents a method to connect VRML (Virtual Reality Modeling Language) and Java components in a Web page using EAI (External Authoring Interface), which makes it possible to interactively generate and edit VRML meshes. The meshes used are based on regular grids, to provide an interaction and modeling

  7. New technique based on a probabilistic model for facial feature location

    Institute of Scientific and Technical Information of China (English)

    彭小宁; 邹北骥; 王磊; 罗平

    2009-01-01

    This paper describes a novel technique called the analytic boosted cascade detector (ABCD) to automatically locate features on the human face. ABCD extends the original boosted cascade detector (BCD) in three ways: a) a probabilistic model is introduced to connect the classifier responses with the facial features; b) a feature location method is formulated based on this probabilistic model; c) two selection criteria for face candidates are presented. The new technique merges face detection and facial feature location into a unified process. It outperforms average positions (AVG) and boosted classifiers + best response (BestHit), and it is also considerably faster than methods based on nonlinear optimization, e.g. AAM and SOS.

  8. Introducing Risk Management Techniques Within Project Based Software Engineering Courses

    Science.gov (United States)

    Port, Daniel; Boehm, Barry

    2002-03-01

    In 1996, USC switched its core two-semester software engineering course away from a hypothetical-project, homework-and-exam format based on the Bloom taxonomy of educational objectives (knowledge, comprehension, application, analysis, synthesis, and evaluation). The revised course is a real-client team-project course based on the CRESST model of learning objectives (content understanding, problem solving, collaboration, communication, and self-regulation). We used the CRESST cognitive demands analysis to determine the student skills required for software risk management and the other major project activities, and have been refining the approach over the last 5 years of experience, including revised versions for one-semester undergraduate and graduate project courses at Columbia. This paper summarizes our experiences in evolving the risk management aspects of the project course. These have helped us mature more general techniques such as risk-driven specifications, domain-specific simplifier and complicator lists, and the schedule as an independent variable (SAIV) process model. The largely positive results in terms of pass/fail rates, client evaluations, product adoption rates, and hiring manager feedback are summarized as well.

  9. The use of continuous improvement techniques: A survey-based ...

    African Journals Online (AJOL)

    International Journal of Engineering, Science and Technology ... The use of continuous improvement techniques: A survey-based study of current practices ... Prior research has focused mainly on the effect of continuous improvement practices ...

  10. Technique for image fusion based on a non-subsampled contourlet transform domain receptive field model

    Institute of Scientific and Technical Information of China (English)

    孔韦韦; 雷英杰; 雷阳; 李卫忠

    2011-01-01

    To address the multi-sensor image fusion problem, a technique for image fusion based on a non-subsampled contourlet transform (NSCT) domain receptive field model is presented. First, NSCT is used to perform a multi-scale, multi-direction sparse decomposition of the source images. Then, an improved receptive field model is used to fuse the low-frequency sub-images, while the high-frequency sub-images are fused with an adaptive unit-fast-linking pulse coupled neural network model. Finally, the fused image is obtained by applying the inverse NSCT to all sub-images. Simulation results show the effectiveness of the proposed technique.

  11. Model-Based Security Testing

    CERN Document Server

    Schieferdecker, Ina; Schneider, Martin; 10.4204/EPTCS.80.1

    2012-01-01

    Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for specification of test cases at a higher level of abstraction, that enable guidance on test identification and specification, and that support automated test generation. Model-based security testing (MBST) is a relatively new field, especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing,...

  12. Simple parameter estimation for complex models — Testing evolutionary techniques on 3-dimensional biogeochemical ocean models

    Science.gov (United States)

    Mattern, Jann Paul; Edwards, Christopher A.

    2017-01-01

    Parameter estimation is an important part of numerical modeling and often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy to implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than random, uninformed parameter search but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.
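
    A minimal sketch of two ingredients mentioned in this record: a normalized model-observation misfit cost function and an uninformed random search baseline over box bounds. The toy "model", parameters, and bounds below are invented stand-ins for the expensive 3-dimensional biogeochemical simulations, and the actual estimation techniques in the paper are more sophisticated than this baseline.

```python
import numpy as np

rng = np.random.default_rng(0)
true_params = np.array([0.6, 1.8])
t = np.linspace(0.0, 10.0, 50)

def run_model(params):
    """Toy stand-in for one expensive 3-D biogeochemical simulation."""
    growth, half_sat = params
    return growth * t / (half_sat + t)

observations = run_model(true_params) + 0.02 * rng.standard_normal(t.size)

def cost(params):
    """Normalised model-observation misfit; the study aggregates several data types this way."""
    sim = run_model(params)
    return np.nanmean((sim - observations) ** 2) / np.nanvar(observations)

# uninformed random search baseline over box bounds (~100 model evaluations, as in the study)
lo, hi = np.array([0.1, 0.1]), np.array([2.0, 5.0])
best_p, best_c = None, np.inf
for _ in range(100):
    p = lo + (hi - lo) * rng.random(2)
    c = cost(p)
    if c < best_c:
        best_p, best_c = p, c
print("best parameters:", best_p, "cost:", best_c)
```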

  13. Simulation-driven design by knowledge-based response correction techniques

    CERN Document Server

    Koziel, Slawomir

    2016-01-01

    Focused on efficient simulation-driven multi-fidelity optimization techniques, this monograph on simulation-driven optimization covers simulations utilizing physics-based low-fidelity models, often based on coarse-discretization simulations or other types of simplified physics representations, such as analytical models. The methods presented in the book exploit as much as possible any knowledge about the system or device of interest embedded in the low-fidelity model with the purpose of reducing the computational overhead of the design process. Most of the techniques described in the book are of response correction type and can be split into parametric (usually based on analytical formulas) and non-parametric, i.e., not based on analytical formulas. The latter, while more complex in implementation, tend to be more efficient. The book presents a general formulation of response correction techniques as well as a number of specific methods, including those based on correcting the low-fidelity model response (out...

  14. Prediction of survival with alternative modeling techniques using pseudo values.

    Directory of Open Access Journals (Sweden)

    Tjeerd van der Ploeg

    Full Text Available BACKGROUND: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo values enable statistically appropriate analyses of survival outcomes when used in seven alternative modeling techniques. METHODS: In this case study, we analyzed survival of 1282 Dutch patients with newly diagnosed Head and Neck Squamous Cell Carcinoma (HNSCC) with conventional Kaplan-Meier and Cox regression analysis. We subsequently calculated pseudo values to reflect the individual survival patterns. We used these pseudo values to compare recursive partitioning (RPART), neural nets (NNET), logistic regression (LR), general linear models (GLM) and three variants of support vector machines (SVM) with respect to dichotomous 60-month survival, and continuous pseudo values at 60 months or estimated survival time. We used the area under the ROC curve (AUC) and the root of the mean squared error (RMSE) to compare the performance of these models using bootstrap validation. RESULTS: Of a total of 1282 patients, 986 patients died during a median follow-up of 66 months (60-month survival: 52% [95% CI: 50%-55%]). The LR model had the highest optimism-corrected AUC (0.791) to predict 60-month survival, followed by the SVM model with a linear kernel (AUC 0.787). The GLM model had the smallest optimism-corrected RMSE when continuous pseudo values were considered for 60-month survival or the estimated survival time, followed by SVM models with a linear kernel. The estimated importance of predictors varied substantially by the specific aspect of survival studied and modeling technique used. CONCLUSIONS: The use of pseudo values makes it readily possible to apply alternative modeling techniques to survival problems, to compare their performance and to search further for promising
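
    For readers unfamiliar with pseudo values, the sketch below computes jackknife pseudo-observations of the Kaplan-Meier survival probability at a fixed horizon; these per-patient values can then be fed to any regression or machine-learning technique, which is the idea exploited in the study. The implementation and the tiny synthetic data are a minimal illustration, not the authors' code.

```python
import numpy as np

def km_survival(time, event, horizon):
    """Kaplan-Meier survival probability S(horizon); events are processed before censorings at ties."""
    order = np.lexsort((1 - event, time))          # sort by time, events first at tied times
    time, event = time[order], event[order]
    surv, at_risk = 1.0, len(time)
    for t, e in zip(time, event):
        if t > horizon:
            break
        if e:
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
    return surv

def pseudo_values(time, event, horizon):
    """Jackknife pseudo-observations of S(horizon), one per subject."""
    n = len(time)
    s_full = km_survival(time, event, horizon)
    idx = np.arange(n)
    return np.array([n * s_full - (n - 1) * km_survival(time[idx != i], event[idx != i], horizon)
                     for i in range(n)])

# tiny synthetic example: durations in months and event indicators (1 = death, 0 = censored)
time = np.array([12.0, 30.0, 45.0, 60.0, 72.0, 80.0, 90.0, 100.0])
event = np.array([1, 1, 0, 1, 0, 1, 0, 1])
print(pseudo_values(time, event, horizon=60.0))   # values that GLM, SVM, NNET, etc. can model
```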

  15. The detection of bulk explosives using nuclear-based techniques

    Energy Technology Data Exchange (ETDEWEB)

    Morgado, R.E.; Gozani, T.; Seher, C.C.

    1988-01-01

    In 1986 we presented a rationale for the detection of bulk explosives based on nuclear techniques that addressed the requirements of civil aviation security in the airport environment. Since then, efforts have intensified to implement a system based on thermal neutron activation (TNA), with new work developing in fast neutron and energetic photon reactions. In this paper we will describe these techniques and present new results from laboratory and airport testing. Based on preliminary results, we contended in our earlier paper that nuclear-based techniques did provide sufficiently penetrating probes and distinguishable detectable reaction products to achieve the FAA operational goals; new data have supported this contention. The status of nuclear-based techniques for the detection of bulk explosives presently under investigation by the US Federal Aviation Administration (FAA) is reviewed. These include thermal neutron activation (TNA), fast neutron activation (FNA), the associated particle technique, nuclear resonance absorption, and photoneutron activation. The results of comprehensive airport testing of the TNA system performed during 1987-88 are summarized. From a technical point of view, nuclear-based techniques now represent the most comprehensive and feasible approach for meeting the operational criteria of detection, false alarms, and throughput. 9 refs., 5 figs., 2 tabs.

  16. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. The book begins with an introduction to circuit analysis techniques, laws, and frequency and time domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  17. Advanced Multipath Mitigation Techniques for Satellite-Based Positioning Applications

    Directory of Open Access Journals (Sweden)

    Mohammad Zahidul H. Bhuiyan

    2010-01-01

    Full Text Available Multipath remains a dominant source of ranging errors in Global Navigation Satellite Systems (GNSS), such as the Global Positioning System (GPS) or the future European satellite navigation system Galileo. Multipath is generally considered undesirable in the context of GNSS, since the reception of multipath can significantly distort the shape of the correlation function used for time delay estimation. Some wireless communications techniques exploit multipath in order to provide signal diversity; in GNSS, however, the major challenge is to effectively mitigate the multipath, since we are interested only in the satellite-receiver transit time offset of the Line-Of-Sight (LOS) signal for the receiver's position estimate. Therefore, the multipath problem has been approached from several directions in order to mitigate the impact of multipath on navigation receivers, including the development of novel signal processing techniques. In this paper, we propose a maximum likelihood-based technique, namely the Reduced Search Space Maximum Likelihood (RSSML) delay estimator, which is capable of mitigating multipath effects reasonably well at the expense of increased complexity. The proposed RSSML attempts to compensate the multipath error contribution by performing a nonlinear curve fit on the input correlation function, which finds a perfect match from a set of ideal reference correlation functions with certain amplitude(s), phase(s), and delay(s) of the multipath signal. It also incorporates a threshold-based peak detection method, which eventually reduces the code-delay search space significantly. The downside of RSSML is the memory required to store the reference correlation functions. The multipath performance of other delay-tracking methods previously studied for Binary Phase Shift Keying- (BPSK-) and Sine Binary Offset Carrier- (SinBOC-) modulated signals is also analyzed in closed loop model with the new Composite

  18. Using data mining techniques for building fusion models

    Science.gov (United States)

    Zhang, Zhongfei; Salerno, John J.; Regan, Maureen A.; Cutler, Debra A.

    2003-03-01

    Over the past decade many techniques have been developed which attempt to predict possible events through the use of given models or patterns of activity. These techniques work quite well when one has a model or a valid representation of activity. In reality, however, this is usually not the case. The models that do exist were in many cases hand crafted, required many man-hours to develop, and are very brittle in the dynamic world in which we live. Data mining techniques have shown some promise in providing a set of solutions. In this paper we provide the details of our motivation, the theory and techniques we have developed, and the results of a set of experiments.

  19. Simulation of multi-color brushstrokes in Chinese painting based on a writing-technique model

    Institute of Scientific and Technical Information of China (English)

    邓学雄; 李牧; 章文; 王脘卿

    2012-01-01

    为了完善“一笔多色”国画技法效果的计算机模拟,提出了基于笔法的多色效果仿真.通过建立笔锋模型,改变子笔道的粗细(中锋)或将多个子笔道偏移一定位置(侧锋)进行叠加,避免了逐点计算笔道中的颜色的复杂过程,提高了运算速度.中锋、侧锋是根据笔杆与纸面的夹角进行判断的,计算机智能笔法就是通过中锋和侧锋的转换来实现的,从而产生两种不同笔法的一笔多色模拟效果.%In order to improve the computer simulation of the brushstroke multi-color in Chinese painting, this paper proposes the multi-color effect emulate based on the technique of writing. By establishing a stroke model, the thick or thin of sub-stroke (center writing) is changed, or several sub-strokes (side writing) are superposed which offset a certain distance. Thus the complex process of calculating the color in the stroke area point by point is avoided and the running speed is enhanced. The center writing and the side writing are determined by the angle between brush and paper, and the intelligent technique of writing is realized when the center writing and the side writing are interchanged. Then the effect of brushstroke multi-color emulate is produced by two different techniques of writing.

  20. Model-Based Security Testing

    Directory of Open Access Journals (Sweden)

    Ina Schieferdecker

    2012-02-01

    Full Text Available Security testing aims at validating software system requirements related to security properties like confidentiality, integrity, authentication, authorization, availability, and non-repudiation. Although security testing techniques have been available for many years, there have been few approaches that allow for specification of test cases at a higher level of abstraction, that enable guidance on test identification and specification, and that support automated test generation. Model-based security testing (MBST) is a relatively new field, especially dedicated to the systematic and efficient specification and documentation of security test objectives, security test cases and test suites, as well as to their automated or semi-automated generation. In particular, the combination of security modelling and test generation approaches is still a challenge in research and of high interest for industrial applications. MBST includes e.g. security functional testing, model-based fuzzing, risk- and threat-oriented testing, and the usage of security test patterns. This paper provides a survey on MBST techniques and the related models as well as samples of new methods and tools that are under development in the European ITEA2 project DIAMONDS.

  1. Ionospheric Plasma Drift Analysis Technique Based On Ray Tracing

    Science.gov (United States)

    Ari, Gizem; Toker, Cenk

    2016-07-01

    Ionospheric drift measurements provide important information about variability in the ionosphere, which can be used to quantify ionospheric disturbances caused by natural phenomena such as solar, geomagnetic, gravitational and seismic activities. One prominent approach to drift measurement relies on instrument-based measurements, e.g. using an ionosonde. Drift estimation with an ionosonde depends on measuring the Doppler shift of the received signal, where the main cause of the Doppler shift is the change in the length of the propagation path of the signal between the transmitter and the receiver. Unfortunately, ionosondes are expensive devices and their installation and maintenance require special care. Furthermore, the ionosonde network over the world, or even over Europe, is not dense enough to obtain a global or continental drift map. To overcome the difficulties related to ionosondes, we propose a technique to perform ionospheric drift estimation based on ray tracing. First, a two-dimensional TEC map is constructed using the IONOLAB-MAP tool, which spatially interpolates the VTEC estimates obtained from the EUREF CORS network. Next, a three-dimensional electron density profile is generated by inputting the TEC estimates to the IRI-2015 model. Eventually, a close-to-real electron density profile is obtained in which ray tracing can be performed. These profiles can be constructed periodically, with a period as short as 30 seconds. By processing two consecutive snapshots together and calculating the propagation paths, we estimate the drift over any coordinate of concern. We test our technique by comparing the results to the drift measurements taken at the DPS ionosonde at Pruhonice, Czech Republic. This study is supported by TUBITAK 115E915 and joint TUBITAK 114E092 and AS CR14/001 projects.

  2. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    . In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectation models and give a general definition of spread...

  3. Matrix eigenvalue model: Feynman graph technique for all genera

    Energy Technology Data Exchange (ETDEWEB)

    Chekhov, Leonid [Steklov Mathematical Institute, ITEP and Laboratoire Poncelet, Moscow (Russian Federation); Eynard, Bertrand [SPhT, CEA, Saclay (France)

    2006-12-15

    We present the diagrammatic technique for calculating the free energy of the matrix eigenvalue model (the model with an arbitrary power β of the Vandermonde determinant) to all orders of the 1/N expansion in the case where the limiting eigenvalue distribution spans an arbitrary (but fixed) number of disjoint intervals (curves)

  4. Manifold learning techniques and model reduction applied to dissipative PDEs

    CERN Document Server

    Sonday, Benjamin E; Gear, C William; Kevrekidis, Ioannis G

    2010-01-01

    We link nonlinear manifold learning techniques for data analysis/compression with model reduction techniques for evolution equations with time scale separation. In particular, we demonstrate a "nonlinear extension" of the POD-Galerkin approach to obtaining reduced dynamic models of dissipative evolution equations. The approach is illustrated through a reaction-diffusion PDE, and the performance of different simulators on the full and the reduced models is compared. We also discuss the relation of this nonlinear extension with the so-called "nonlinear Galerkin" methods developed in the context of Approximate Inertial Manifolds.
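
    The "nonlinear extension" itself is beyond a short example, but the classical POD-Galerkin step it builds on can be sketched in a few lines: collect snapshots, take an SVD, keep the leading modes, and project a (linearized) operator onto them. The snapshot matrix and operator below are random placeholders, not the reaction-diffusion PDE of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic snapshot matrix: 500-dim states dominated by 5 latent modes, plus small noise
modes, coeffs = rng.standard_normal((500, 5)), rng.standard_normal((5, 200))
snapshots = modes @ coeffs + 0.01 * rng.standard_normal((500, 200))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s ** 2) / np.sum(s ** 2)
r = int(np.searchsorted(energy, 0.999)) + 1     # keep enough modes for 99.9 % of the energy
basis = U[:, :r]                                 # POD basis (columns are modes)

A = rng.standard_normal((500, 500))              # placeholder full-order (linearised) operator
A_reduced = basis.T @ A @ basis                  # Galerkin projection onto the POD subspace
print(f"kept {r} of {len(s)} modes; reduced operator shape: {A_reduced.shape}")
```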

  5. Research on the Propagation Models and Defense Techniques of Internet Worms

    Institute of Scientific and Technical Information of China (English)

    Tian-Yun Huang

    2008-01-01

    Internet worms are harmful to network security, and they have become a research hotspot in recent years. This paper makes a thorough survey of the propagation models and defense techniques of Internet worms. We first give a strict definition and discuss the working mechanism. We then analyze and compare some representative worm propagation models proposed in recent years, such as the K-M model, two-factor model, worm-anti-worm model (WAW), firewall-based model, quarantine-based model and hybrid benign worm-based model. Some typical defense techniques such as virtual honeypots, active worm prevention and agent-oriented worm defense are also discussed. The future direction of worm defense systems is pointed out.
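
    As a concrete example of the propagation models surveyed, the classical simple-epidemic (Kermack-McKendrick style) random-scanning worm model can be integrated in a few lines; the host count and scan rate below are illustrative assumptions, not values from the paper.

```python
N = 360_000          # assumed vulnerable host population (illustrative, Code-Red scale)
omega = 2 ** 32      # size of the scanned IPv4 address space
scan_rate = 358      # assumed scans per second per infected host
I, dt = 1.0, 1.0     # start from a single infected host, 1-second Euler steps

infected = []
for _ in range(24 * 3600):                     # simulate one day
    I += scan_rate / omega * I * (N - I) * dt  # simple-epidemic (SI) growth term
    infected.append(I)
print(f"infected after 24 h: {infected[-1]:.0f} of {N}")
```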

  6. Formal Verification Techniques Based on Boolean Satisfiability Problem

    Institute of Scientific and Technical Information of China (English)

    Xiao-Wei Li; Guang-Hui Li; Ming Shao

    2005-01-01

    This paper exploits the Boolean satisfiability (SAT) problem in equivalence checking and model checking, respectively. A combinational equivalence checking method based on incremental satisfiability is presented. This method chooses the candidate equivalent pairs with some new techniques, and uses an incremental satisfiability algorithm to improve its performance. By substituting the internal equivalent pairs and converting the equivalence relations into conjunctive normal form (CNF) formulas, this approach can avoid false negatives and reduce the search space of the SAT procedure. Experimental results on ISCAS'85 benchmark circuits show that the presented approach is faster and more robust than those in the literature. This paper also presents an algorithm for extracting an unsatisfiable core, which has an important application in abstraction and refinement for model checking to alleviate the state-space explosion bottleneck. The error of approximate extraction is analyzed by means of simulation. The analysis reveals an interesting phenomenon: as the density of the formula increases, the average error of the extraction decreases. An exact extraction approach for MU subformulas, referred to as the pre-assignment algorithm, is proposed. Both theoretical analysis and experimental results show that it is more efficient.
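
    A minimal sketch of SAT-based combinational equivalence checking: two tiny circuits computing a AND b are Tseitin-encoded, a miter constraint forces their outputs to differ, and unsatisfiability of the resulting CNF proves equivalence. It assumes the python-sat (pysat) package is installed; the incremental solving and candidate-pair selection techniques of the paper are not shown.

```python
from pysat.solvers import Glucose3   # assumes the python-sat package is installed

a, b, o1, o2 = 1, 2, 3, 4            # CNF variable ids
clauses = [
    # Tseitin encoding of circuit 1: o1 <-> (a AND b)
    [-o1, a], [-o1, b], [o1, -a, -b],
    # circuit 2 is NOT(NOT a OR NOT b), which simplifies to the same AND gate: o2 <-> (a AND b)
    [-o2, a], [-o2, b], [o2, -a, -b],
    # miter: assert that the two outputs differ (o1 XOR o2)
    [o1, o2], [-o1, -o2],
]

solver = Glucose3(bootstrap_with=clauses)
if solver.solve():
    print("not equivalent, distinguishing assignment:", solver.get_model())
else:
    print("circuits are equivalent (miter is unsatisfiable)")
solver.delete()
```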

  7. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    Science.gov (United States)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
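
    The first of the two approximation models, a least-squares quadratic polynomial, is easy to sketch; the test function, sample counts, and design space below are invented for illustration, and the kriging interpolator discussed in the paper is not reproduced here.

```python
import numpy as np

def quadratic_design(X):
    """Full quadratic basis in several variables: 1, x_i, and x_i * x_j for i <= j."""
    n, d = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(d)]
    cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(50, 2))                 # 50 samples of 2 design variables
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(50)

coef, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)   # least-squares polynomial fit
rms = np.sqrt(np.mean((quadratic_design(X) @ coef - y) ** 2))
print(f"RMS fit error of the quadratic response surface: {rms:.3f}")
```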

  8. An Improved Particle Swarm Optimization Algorithm Based on Ensemble Technique

    Institute of Scientific and Technical Information of China (English)

    SHI Yan; HUANG Cong-ming

    2006-01-01

    An improved particle swarm optimization (PSO) algorithm based on an ensemble technique is presented. The algorithm combines some previous best positions (pbest) of the particles to get an ensemble position (Epbest), which is used to replace the global best position (gbest). It is compared, on three different benchmark functions, with the standard PSO algorithm of Kennedy and Eberhart and with some improved PSO algorithms. The simulation results show that the improved PSO based on the ensemble technique can get better solutions than the standard PSO and some other improved algorithms under all test cases.
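
    A hedged sketch of the idea on a standard benchmark: the velocity update uses an ensemble position, formed here as the mean of the k best personal bests, in place of gbest. The record does not specify exactly how the pbest positions are combined, so the averaging rule, parameters, and objective below are illustrative assumptions rather than the paper's algorithm.

```python
import numpy as np

def sphere(x):                                   # benchmark objective to minimise
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(0)
n_particles, dim, iters, k = 30, 10, 200, 5
x = rng.uniform(-5.0, 5.0, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), sphere(x)
w, c1, c2 = 0.7, 1.5, 1.5

for _ in range(iters):
    # ensemble position: mean of the k best personal bests, used in place of gbest
    epbest = pbest[np.argsort(pbest_val)[:k]].mean(axis=0)
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (epbest - x)
    x = x + v
    val = sphere(x)
    better = val < pbest_val
    pbest[better], pbest_val[better] = x[better], val[better]

print("best value found:", pbest_val.min())
```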

  9. An Agent Communication Framework Based on XML and SOAP Technique

    Institute of Scientific and Technical Information of China (English)

    李晓瑜

    2009-01-01

    After introducing XML and SOAP technology, this thesis presents an agent communication framework based on XML and SOAP techniques, and analyzes its principle, architecture, function and benefits. At the end, based on KQML communication primitive languages.

  10. Decomposition Techniques and Effective Algorithms in Reliability-Based Optimization

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1995-01-01

    The common problem of an extensive number of limit state function calculations in the various formulations and applications of reliability-based optimization is treated. It is suggested to use a formulation based on decomposition techniques so the nested two-level optimization problem can be solved...

  11. Data Mining and Neural Network Techniques in Case Based System

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper first puts forward a case-based system framework based on data mining techniques. Then the paper examines the possibility of using neural networks as a method of retrieval in such a case-based system. In this system we propose data mining algorithms to discover case knowledge, together with other supporting algorithms.

  12. A Hough Transform based Technique for Text Segmentation

    CERN Document Server

    Saha, Satadal; Nasipuri, Mita; Basu, Dipak Kr

    2010-01-01

    Text segmentation is an inherent part of an OCR system irrespective of its domain of application. The OCR system contains a segmentation module where the text lines, words and ultimately the characters must be segmented properly for successful recognition. The present work implements a Hough transform based technique for line and word segmentation from digitized images. The proposed technique is applied not only to a document image dataset but also to datasets for a business card reader system and a license plate recognition system. To standardize the performance evaluation, the technique is also applied to the public-domain dataset published on the website of CMATER, Jadavpur University. The document images consist of multi-script printed and handwritten text lines with varying scripts and line spacing within a single document image. The technique performs quite satisfactorily when applied to low-resolution business card images captured with a mobile camera. The usefulness of the technique is verified...
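
    The Hough primitive underlying the technique can be illustrated with OpenCV; the sketch below only detects straight segments in a binarized page image and omits the paper's actual line and word segmentation logic. The file name and thresholds are placeholders, and OpenCV (cv2) is assumed to be installed.

```python
import cv2
import numpy as np

img = cv2.imread("document.png", cv2.IMREAD_GRAYSCALE)            # hypothetical scanned page
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 31, 15)     # text as white foreground
lines = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180, threshold=120,
                        minLineLength=img.shape[1] // 3, maxLineGap=20)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), 0, 1)                   # overlay detected segments
cv2.imwrite("hough_lines.png", img)
```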

  13. 3D Modeling Techniques for Print and Digital Media

    Science.gov (United States)

    Stephens, Megan Ashley

    In developing my thesis, I looked to gain skills using ZBrush to create 3D models, 3D scanning, and 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models from other vertebrates. I successfully learned ZBrush and 3D scanning, and successfully printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for the project, but may be of use for later projects. I was able to 3D print using two different techniques as well.

  14. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  15. An Empirical Study of Smoothing Techniques for Language Modeling

    CERN Document Server

    Chen, S F; Chen, Stanley F.; Goodman, Joshua T.

    1996-01-01

    We present an extensive empirical comparison of several smoothing techniques in the domain of language modeling, including those described by Jelinek and Mercer (1980), Katz (1987), and Church and Gale (1991). We investigate for the first time how factors such as training data size, corpus (e.g., Brown versus Wall Street Journal), and n-gram order (bigram versus trigram) affect the relative performance of these methods, which we measure through the cross-entropy of test data. In addition, we introduce two novel smoothing techniques, one a variation of Jelinek-Mercer smoothing and one a very simple linear interpolation technique, both of which outperform existing methods.
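
    A minimal sketch of one of the smoothing families compared above, Jelinek-Mercer interpolation of a bigram estimate with a unigram estimate. In practice the interpolation weight is tuned on held-out data rather than fixed as it is here, and the tiny corpus is only for illustration.

```python
from collections import Counter

def train_bigram(tokens):
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens[:-1], tokens[1:]))
    return unigrams, bigrams, len(tokens)

def jm_prob(word, prev, unigrams, bigrams, total, lam=0.7):
    """Jelinek-Mercer: interpolate the bigram ML estimate with the unigram estimate."""
    p_uni = unigrams[word] / total
    p_bi = bigrams[(prev, word)] / unigrams[prev] if unigrams[prev] else 0.0
    return lam * p_bi + (1.0 - lam) * p_uni

corpus = "the cat sat on the mat and the cat ate".split()
uni, bi, n = train_bigram(corpus)
print(jm_prob("cat", "the", uni, bi, n))   # in practice lam is estimated on held-out data
```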

  16. Memory Based Machine Intelligence Techniques in VLSI hardware

    OpenAIRE

    James, Alex Pappachen

    2012-01-01

    We briefly introduce the memory based approaches to emulate machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low level intelligence tasks and aim at providing scalable solutions to high ...

  17. Bond strength with custom base indirect bonding techniques.

    Science.gov (United States)

    Klocke, Arndt; Shi, Jianmin; Kahl-Nieke, Bärbel; Bismayer, Ulrich

    2003-04-01

    Different types of adhesives for indirect bonding techniques have been introduced recently. But there is limited information regarding bond strength with these new materials. In this in vitro investigation, stainless steel brackets were bonded to 100 permanent bovine incisors using the Thomas technique, the modified Thomas technique, and light-cured direct bonding for a control group. The following five groups of 20 teeth each were formed: (1) modified Thomas technique with thermally cured base composite (Therma Cure) and chemically cured sealant (Maximum Cure), (2) Thomas technique with thermally cured base composite (Therma Cure) and chemically cured sealant (Custom I Q), (3) Thomas technique with light-cured base composite (Transbond XT) and chemically cured sealant (Sondhi Rapid Set), (4) modified Thomas technique with chemically cured base adhesive (Phase II) and chemically cured sealant (Maximum Cure), and (5) control group directly bonded with light-cured adhesive (Transbond XT). Mean bond strengths in groups 3, 4, and 5 were 14.99 +/- 2.85, 15.41 +/- 3.21, and 13.88 +/- 2.33 MPa, respectively, and these groups were not significantly different from each other. Groups 1 (mean bond strength 7.28 +/- 4.88 MPa) and 2 (mean bond strength 7.07 +/- 4.11 MPa) showed significantly lower bond strengths than groups 3, 4, and 5 and a higher probability of bond failure. Both the original (group 2) and the modified (group 1) Thomas technique were able to achieve bond strengths comparable to the light-cured direct bonded control group.

  18. Memory Based Machine Intelligence Techniques in VLSI hardware

    CERN Document Server

    James, Alex Pappachen

    2012-01-01

    We briefly introduce the memory based approaches to emulate machine intelligence in VLSI hardware, describing the challenges and advantages. Implementation of artificial intelligence techniques in VLSI hardware is a practical and difficult problem. Deep architectures, hierarchical temporal memories and memory networks are some of the contemporary approaches in this area of research. The techniques attempt to emulate low level intelligence tasks and aim at providing scalable solutions to high level intelligence problems such as sparse coding and contextual processing.

  19. Image analysis techniques associated with automatic data base generation.

    Science.gov (United States)

    Bond, A. D.; Ramapriyan, H. K.; Atkinson, R. J.; Hodges, B. C.; Thomas, D. T.

    1973-01-01

    This paper considers some basic problems relating to automatic data base generation from imagery, the primary emphasis being on fast and efficient automatic extraction of relevant pictorial information. Among the techniques discussed are recursive implementations of some particular types of filters which are much faster than FFT implementations, a 'sequential similarity detection' technique of implementing matched filters, and sequential linear classification of multispectral imagery. Several applications of the above techniques are presented including enhancement of underwater, aerial and radiographic imagery, detection and reconstruction of particular types of features in images, automatic picture registration and classification of multiband aerial photographs to generate thematic land use maps.

  20. MPPT Technique Based on Current and Temperature Measurements

    Directory of Open Access Journals (Sweden)

    Eduardo Moreira Vicente

    2015-01-01

    Full Text Available This paper presents a new maximum power point tracking (MPPT) method based on the measurement of temperature and short-circuit current, in a simple and efficient approach. These measurements, which can precisely define the maximum power point (MPP), have not been used together in other existing techniques. The temperature is measured with a low cost sensor and the solar irradiance is estimated through the relationship of the measured short-circuit current and its reference. Fast tracking speed and stable steady-state operation are advantages of this technique, which presents higher performance when compared to other well-known techniques.

  1. Comparative Studies of Clustering Techniques for Real-Time Dynamic Model Reduction

    CERN Document Server

    Hogan, Emilie; Halappanavar, Mahantesh; Huang, Zhenyu; Lin, Guang; Lu, Shuai; Wang, Shaobu

    2015-01-01

    Dynamic model reduction in power systems is necessary for improving computational efficiency. Traditional model reduction using linearized models or offline analysis would not be adequate to capture power system dynamic behaviors, especially the new mix of intermittent generation and intelligent consumption makes the power system more dynamic and non-linear. Real-time dynamic model reduction emerges as an important need. This paper explores the use of clustering techniques to analyze real-time phasor measurements to determine generator groups and representative generators for dynamic model reduction. Two clustering techniques -- graph clustering and evolutionary clustering -- are studied in this paper. Various implementations of these techniques are compared and also compared with a previously developed Singular Value Decomposition (SVD)-based dynamic model reduction approach. Various methods exhibit different levels of accuracy when comparing the reduced model simulation against the original model. But some ...

  2. Modelling and Design of a Microstrip Band-Pass Filter Using Space Mapping Techniques

    CERN Document Server

    Tavakoli, Saeed; Mohanna, Shahram

    2010-01-01

    Determination of design parameters based on electromagnetic simulations of microwave circuits is an iterative and often time-consuming procedure. Space mapping is a powerful technique to optimize such complex models by efficiently substituting accurate but expensive electromagnetic models, fine models, with fast and approximate models, coarse models. In this paper, we apply two space mapping techniques, an explicit space mapping as well as an implicit and response residual space mapping, to a case study application, a microstrip band-pass filter. First, we model the case study application and optimize its design parameters using the explicit space mapping modelling approach. Then, we use the implicit and response residual space mapping approach to optimize the filter's design parameters. Finally, the performance of each design method is evaluated. It is shown that the use of the above-mentioned techniques leads to satisfactory design solutions with a minimum number of computationally expensive fine model evaluations.

  3. Molecular dynamics techniques for modeling G protein-coupled receptors.

    Science.gov (United States)

    McRobb, Fiona M; Negri, Ana; Beuming, Thijs; Sherman, Woody

    2016-10-01

    G protein-coupled receptors (GPCRs) constitute a major class of drug targets and modulating their signaling can produce a wide range of pharmacological outcomes. With the growing number of high-resolution GPCR crystal structures, we have the unprecedented opportunity to leverage structure-based drug design techniques. Here, we discuss a number of advanced molecular dynamics (MD) techniques that have been applied to GPCRs, including long time scale simulations, enhanced sampling techniques, water network analyses, and free energy approaches to determine relative binding free energies. On the basis of the many success stories, including those highlighted here, we expect that MD techniques will be increasingly applied to aid in structure-based drug design and lead optimization for GPCRs.

  4. Optimization using surrogate models - by the space mapping technique

    DEFF Research Database (Denmark)

    Søndergaard, Jacob

    2003-01-01

    mapping surrogate has a lower approximation error for long steps. For short steps, however, the Taylor model of the expensive model is best, due to exact interpolation at the model origin. Five algorithms for space mapping optimization are presented and the numerical performance is evaluated. Three...... conditions are satisfied. So hybrid methods, combining the space mapping technique with classical optimization methods, should be used if convergence to high accuracy is wanted. Approximation abilities of the space mapping surrogate are compared with those of a Taylor model of the expensive model. The space...

  5. Optimization using surrogate models - by the space mapping technique

    DEFF Research Database (Denmark)

    Søndergaard, Jacob

    2003-01-01

    mapping surrogate has a lower approximation error for long steps. For short steps, however, the Taylor model of the expensive model is best, due to exact interpolation at the model origin. Five algorithms for space mapping optimization are presented and the numerical performance is evaluated. Three...... conditions are satisfied. So hybrid methods, combining the space mapping technique with classical optimization methods, should be used if convergence to high accuracy is wanted. Approximation abilities of the space mapping surrogate are compared with those of a Taylor model of the expensive model. The space...

  6. Development of a model for internalizing character values in history learning through the Value Clarification Technique model

    Directory of Open Access Journals (Sweden)

    Nunuk Suryani

    2013-07-01

    Full Text Available This research produces a product model for the internalization of character values in history learning through the Value Clarification Technique (VCT), as a revitalization of the role of social studies in the formation of national character. In general, the research consists of three stages: (1) a pre-survey that identified the current condition of character-value learning in history teaching; (2) development of a model based on the pre-survey findings, using the Dick and Carey model; and (3) validation of the model. Model development was carried out with limited trials and extensive testing. The findings lead to the conclusion that the VCT model is effective for internalizing character values in history learning and for increasing the role of history learning in the formation of student character. It can be concluded that the VCT model is effective for improving the quality of the processes and products of character-value learning in junior secondary (SMP) social studies, especially in Surakarta. Keywords: internalization, character values, VCT model, history learning, social studies learning

  7. Runtime Monitoring Technique to handle Tautology based SQL Injection Attacks

    Directory of Open Access Journals (Sweden)

    Ramya Dharam

    2015-05-01

    Full Text Available Software systems, like web applications, are often used to provide reliable online services such as banking, shopping, social networking, etc., to users. The increasing use of such systems has led to a high need for assuring confidentiality, integrity, and availability of user data. SQL Injection Attacks (SQLIAs) are one of the major security threats to web applications. They allow attackers to gain unauthorized access to the back-end database consisting of confidential user information. In this paper we present and evaluate a Runtime Monitoring Technique to detect and prevent tautology-based SQLIAs in web applications. Our technique monitors the behavior of the application during its post-deployment to identify all the tautology-based SQLIAs. A framework called the Runtime Monitoring Framework, which implements our technique, is used in the development of runtime monitors. The framework uses two pre-deployment testing techniques, basis-path and data-flow testing, to identify a minimal set of all legal/valid execution paths of the application. Runtime monitors are then developed and integrated to perform runtime monitoring of the application during its post-deployment for the identified valid/legal execution paths. For evaluation we targeted a subject application with a large number of both legitimate inputs and illegitimate tautology-based inputs, and measured the performance of the proposed technique. The results of our study show that the runtime monitor developed for the application was successfully able to detect all the tautology-based attacks without generating any false positives.
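
    The sketch below is only a crude illustration of catching an always-true WHERE clause at runtime; it is a pattern heuristic, not the path-based runtime monitoring framework evaluated in the paper, and the regular expression and helper name are invented.

```python
import re

# flags comparisons of a literal with itself, e.g. '1'='1' or 1=1 (hypothetical helper)
TAUTOLOGY = re.compile(r"(['\"]?)(\w+)\1\s*=\s*(['\"]?)\2\3")

def looks_like_tautology_attack(query: str) -> bool:
    """Crude runtime heuristic, far simpler than the paper's path-based monitors."""
    parts = query.lower().split(" where ", 1)
    if len(parts) < 2:
        return False
    return any(TAUTOLOGY.search(clause) for clause in re.split(r"\s+or\s+", parts[1]))

print(looks_like_tautology_attack("SELECT * FROM users WHERE name='' OR '1'='1'"))   # True
print(looks_like_tautology_attack("SELECT * FROM users WHERE name='alice'"))         # False
```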

  8. PCA Based Rapid and Real Time Face Recognition Technique

    Directory of Open Access Journals (Sweden)

    T R Chandrashekar

    2013-12-01

    Full Text Available Face biometrics, economical and efficient, has become a popular form of biometric system used in various applications. Face recognition has been a topic of research for the last few decades, and several techniques have been proposed to improve the performance of face recognition systems. Accuracy is tested against intensity, distance from the camera, and pose variance. Multiple-face recognition is another subtopic currently under research. The speed at which a technique works is a further parameter used to evaluate it. As an example, a support vector machine performs really well for face recognition, but its computational efficiency degrades significantly as the number of classes increases. The eigenface technique produces quality features for face recognition, but its accuracy has proved to be comparatively lower than many other techniques. With the increasing number of processor cores in personal computers and applications demanding fast processing and multiple-face detection and recognition (for example, an entry detection system in a shopping mall or a factory), demand for such systems is growing, as there is a need for automated systems worldwide. In this paper we propose a novel face recognition system developed with C#.Net that can detect multiple faces and recognize them in parallel by utilizing system resources and processor cores. The system is built around Haar cascade based face detection and PCA based face recognition in C#.Net. A parallel library designed for .Net is used to aid high-speed detection and recognition of faces in real time. Analysis of the performance of the proposed technique against some conventional techniques reveals that it is not only accurate but also fast in comparison to other techniques.
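
    A minimal sketch of the PCA (eigenface) recognition stage: centre the training faces, take an SVD to obtain eigenfaces, and classify a probe by nearest neighbour in the projected space. The random "faces", image size, and component count are placeholders, and the Haar-cascade detection and C#/.Net parallelization of the paper are not shown; the sketch uses Python/NumPy instead.

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.random((40, 64 * 64))           # stand-in for 40 pre-cropped 64x64 training faces
labels = np.arange(40)

mean_face = faces.mean(axis=0)
centered = faces - mean_face
_, _, Vt = np.linalg.svd(centered, full_matrices=False)   # right singular vectors = eigenfaces
eigenfaces = Vt[:20]                                       # keep the 20 leading components

gallery = centered @ eigenfaces.T                          # PCA features of the training faces

def recognize(img):
    feat = eigenfaces @ (img - mean_face)                  # project the probe into face space
    return labels[int(np.argmin(np.linalg.norm(gallery - feat, axis=1)))]

print(recognize(faces[7]))   # nearest neighbour in eigenface space recovers identity 7
```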

  9. Application of Krylov Reduction Technique for a Machine Tool Multibody Modelling

    Directory of Open Access Journals (Sweden)

    M. Sulitka

    2014-02-01

    Full Text Available Quick calculation of machine tool dynamic response is one of the major requirements for machine tool virtual modelling and virtual machining, aiming at simulating the machining process performance, quality, and precision of a workpiece. Enhanced time effectiveness in machine tool dynamic simulations may be achieved by applying model order reduction (MOR) techniques to the full finite element (FE) models. The paper provides a case study comparing the Krylov subspace based and mode truncation techniques. The application of both reduction techniques for creating a machine tool multibody model is evaluated. The Krylov subspace reduction technique shows high quality in terms of the dynamic properties of the reduced multibody model while keeping time demands very low.
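
    A hedged sketch of one-sided Krylov (moment-matching) reduction for a first-order state-space model; real machine-tool FE models are second-order and far larger, so the random operator below is only a stand-in. The printed DC gains of the full and reduced models should agree because the moment of the transfer function at s = 0 is matched.

```python
import numpy as np

def krylov_reduce(A, b, c, r):
    """One-sided Krylov projection matching leading moments of c^T (sI - A)^{-1} b at s = 0."""
    K = np.empty((A.shape[0], r))
    v = np.linalg.solve(A, b)
    K[:, 0] = v
    for j in range(1, r):
        v = np.linalg.solve(A, v)
        K[:, j] = v
    V, _ = np.linalg.qr(K)                       # orthonormal basis of the Krylov subspace
    return V.T @ A @ V, V.T @ b, V.T @ c         # reduced state-space matrices

rng = np.random.default_rng(3)
n = 400
A = -5.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))   # stand-in for an FE-derived operator
b, c = rng.standard_normal(n), rng.standard_normal(n)

A_r, b_r, c_r = krylov_reduce(A, b, c, r=10)
# the DC gains of the full and reduced models should agree (moment matching at s = 0)
print(c @ np.linalg.solve(A, b), c_r @ np.linalg.solve(A_r, b_r))
```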

  10. Team mental models: techniques, methods, and analytic approaches.

    Science.gov (United States)

    Langan-Fox, J; Code, S; Langfield-Smith, K

    2000-01-01

    Effective team functioning requires the existence of a shared or team mental model among members of a team. However, the best method for measuring team mental models is unclear. Methods reported vary in terms of how mental model content is elicited and analyzed or represented. We review the strengths and weaknesses of various methods that have been used to elicit, represent, and analyze individual and team mental models and provide recommendations for method selection and development. We describe the nature of mental models and review techniques that have been used to elicit and represent them. We focus on a case study on selecting a method to examine team mental models in industry. The processes involved in the selection and development of an appropriate method for eliciting, representing, and analyzing team mental models are described. The criteria for method selection were (a) applicability to the problem under investigation; (b) practical considerations - suitability for collecting data from the targeted research sample; and (c) theoretical rationale - the assumption that associative networks in memory are a basis for the development of mental models. We provide an evaluation of the method matched to the research problem and make recommendations for future research. The practical applications of this research include the provision of a technique for analyzing team mental models in organizations, the development of methods and processes for eliciting a mental model from research participants in their normal work environment, and a survey of available methodologies for mental model research.

  11. A Knowledge—Based Specification Technique for Protocol Development

    Institute of Scientific and Technical Information of China (English)

    张尧学; 史美林; 等

    1993-01-01

    This paper proposes a knowledge-based specification technique (KST) for protocol development. The technique semi-automatically translates a protocol described in an informal form (natural language or graphs) into one described in formal specifications (Estelle and SDL). The translation processes are supported by knowledge stored in the knowledge base. The paper discusses the concept and the specification control mechanism of KST, as well as the rules and algorithms for producing FSMs, which are the basis of Estelle and SDL.

  12. Research on Feasibilityof Top-Coal Caving Based on Neural Network Technique

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Based on the neural network technique, this paper proposes a BP neural network model which integrates the geological factors that affect top-coal caving into a comprehensive index. The top-coal caving index may be used to forecast the mining cost of working faces, which shows the model's potential for application.

  13. Fault Based Techniques for Testing Boolean Expressions: A Survey

    CERN Document Server

    Badhera, Usha; Taruna, S

    2012-01-01

    Boolean expressions are a major focus of specifications and are very prone to the introduction of faults; this survey presents various fault-based testing techniques. It identifies that the techniques differ in their fault detection capabilities and in the test suites they generate. Techniques such as cause-effect graphing, the meaningful impact strategy, the Branch Operator Strategy (BOR), BOR+MI, MUMCUT, and Modified Condition/Decision Coverage (MCDC) are considered. This survey describes the basic algorithms and fault categories used by these strategies for evaluating their performance. Finally, it contains short summaries of the papers that use Boolean expressions to specify the requirements for detecting faults. These techniques have been empirically evaluated by various researchers on a simplified safety-related real-time control system.
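
    A tiny illustration of the fault-based idea: given a specification expression and a faulty mutant (here an operator-reference fault), a test suite detects the fault only if it contains an assignment on which the two differ. The expressions are invented examples, not taken from the surveyed papers.

```python
from itertools import product

def distinguishing_tests(spec, mutant, n_vars):
    """Truth assignments on which a faulty mutant differs from the specification."""
    return [bits for bits in product([False, True], repeat=n_vars)
            if spec(*bits) != mutant(*bits)]

# specification (a AND b) OR c versus a mutant with an operator-reference fault: (a OR b) OR c
spec   = lambda a, b, c: (a and b) or c
mutant = lambda a, b, c: (a or b) or c

tests = distinguishing_tests(spec, mutant, 3)
print(tests)   # a test suite kills this mutant only if it contains at least one of these
```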

  14. Least-squares based iterative multipath super-resolution technique

    CERN Document Server

    Nam, Wooseok

    2011-01-01

    In this paper, we study the problem of multipath channel estimation for direct sequence spread spectrum signals. To resolve multipath components arriving within a short interval, we propose a new algorithm called the least-squares based iterative multipath super-resolution (LIMS). Compared to conventional super-resolution techniques, such as the multiple signal classification (MUSIC) and the estimation of signal parameters via rotation invariance techniques (ESPRIT), our algorithm has several appealing features. In particular, even in critical situations where the conventional super-resolution techniques are not very powerful due to limited data or the correlation between path coefficients, the LIMS algorithm can produce successful results. In addition, due to its iterative nature, the LIMS algorithm is suitable for recursive multipath tracking, whereas the conventional super-resolution techniques may not be. Through numerical simulations, we show that the LIMS algorithm can resolve the first arrival path amo...

  15. Comparison of multivariate preprocessing techniques as applied to electronic tongue based pattern classification for black tea

    Energy Technology Data Exchange (ETDEWEB)

    Palit, Mousumi [Department of Electronics and Telecommunication Engineering, Central Calcutta Polytechnic, Kolkata 700014 (India); Tudu, Bipan, E-mail: bt@iee.jusl.ac.in [Department of Instrumentation and Electronics Engineering, Jadavpur University, Kolkata 700098 (India); Bhattacharyya, Nabarun [Centre for Development of Advanced Computing, Kolkata 700091 (India); Dutta, Ankur; Dutta, Pallab Kumar [Department of Instrumentation and Electronics Engineering, Jadavpur University, Kolkata 700098 (India); Jana, Arun [Centre for Development of Advanced Computing, Kolkata 700091 (India); Bandyopadhyay, Rajib [Department of Instrumentation and Electronics Engineering, Jadavpur University, Kolkata 700098 (India); Chatterjee, Anutosh [Department of Electronics and Communication Engineering, Heritage Institute of Technology, Kolkata 700107 (India)

    2010-08-18

    In an electronic tongue, preprocessing of raw data precedes pattern analysis, and the choice of the appropriate preprocessing technique is crucial for the performance of the pattern classifier. While attempting to classify different grades of black tea using a voltammetric electronic tongue, different preprocessing techniques have been explored and a comparison of their performances is presented in this paper. The preprocessing techniques are compared first by a quantitative measurement of separability followed by principal component analysis; then two different supervised pattern recognition models based on neural networks are used to evaluate the performance of the preprocessing techniques.
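
    The specific preprocessing methods compared in the paper are not listed in this record; the sketch below only illustrates the general workflow, applying two candidate preprocessing steps to the same sensor matrix and inspecting class separability in the leading principal components (the data, labels and separability measure are synthetic stand-ins):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic voltammetric responses: 30 samples x 200 points, 3 tea grades.
labels = np.repeat([0, 1, 2], 10)
X = rng.normal(size=(30, 200)) + labels[:, None] * 0.3

def pca_scores(data, n_components=2):
    """Project mean-centred data onto its leading principal components."""
    centred = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt[:n_components].T

# Two candidate preprocessing techniques (illustrative choices only).
mean_centred = X - X.mean(axis=0)
autoscaled = (X - X.mean(axis=0)) / X.std(axis=0)

for name, prep in [("mean-centring", mean_centred), ("autoscaling", autoscaled)]:
    scores = pca_scores(prep)
    # Crude separability measure: between-class spread / within-class spread.
    centroids = np.array([scores[labels == c].mean(axis=0) for c in range(3)])
    between = np.var(centroids, axis=0).sum()
    within = np.mean([scores[labels == c].var(axis=0).sum() for c in range(3)])
    print(f"{name}: separability ~ {between / within:.3f}")
```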

  16. Comparison of multivariate preprocessing techniques as applied to electronic tongue based pattern classification for black tea.

    Science.gov (United States)

    Palit, Mousumi; Tudu, Bipan; Bhattacharyya, Nabarun; Dutta, Ankur; Dutta, Pallab Kumar; Jana, Arun; Bandyopadhyay, Rajib; Chatterjee, Anutosh

    2010-08-18

    In an electronic tongue, preprocessing of raw data precedes pattern analysis, and the choice of the appropriate preprocessing technique is crucial for the performance of the pattern classifier. While attempting to classify different grades of black tea using a voltammetric electronic tongue, different preprocessing techniques have been explored and a comparison of their performances is presented in this paper. The preprocessing techniques are compared first by a quantitative measurement of separability followed by principal component analysis; then two different supervised pattern recognition models based on neural networks are used to evaluate the performance of the preprocessing techniques.

  17. Research on technique of wavefront retrieval based on Foucault test

    Science.gov (United States)

    Yuan, Lvjun; Wu, Zhonghua

    2010-05-01

    During fine grinding of the best-fit sphere and the initial stage of polishing, the surface error of large-aperture aspheric mirrors is too large to test with a common interferometer. The Foucault test is widely used in fabricating large-aperture mirrors. However, the optical path is seriously disturbed by air turbulence, and changes of light and dark zones cannot be identified, which often lowers the tester's ability to judge and leads to mistakes in diagnosing the surface error of the whole mirror. To solve this problem, the research presents wavefront retrieval based on the Foucault test through digital image processing and quantitative calculation. Firstly, a real Foucault image is obtained by collecting a variety of images with a CCD and then averaging these images to eliminate air turbulence. Secondly, gray values are converted into surface error values through principle derivation, mathematical modeling, and software programming. Thirdly, the linear deviation introduced by defocus is removed by the least-squares method to obtain the real surface error. At last, according to the real surface error, the wavefront map, gray contour map and corresponding pseudo-color contour map are plotted. The experimental results indicate that the three-dimensional wavefront map and two-dimensional contour map are able to accurately and intuitively show the surface error over the whole mirror under test, and they are beneficial for grasping the surface error as a whole. The technique can be used to guide the fabrication of large-aperture and long-focal-length mirrors during grinding and the initial stage of polishing the aspheric surface, which greatly improves fabricating efficiency and precision.
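
    The abstract outlines three computational steps: frame averaging to suppress turbulence, conversion of gray values to surface error, and least-squares removal of the defocus term. A schematic NumPy sketch of those steps, with an invented linear gray-to-error calibration, might be:

```python
import numpy as np

def retrieve_wavefront(frames, gray_to_error=1.0e-3):
    """Schematic Foucault-based wavefront retrieval (illustrative only)."""
    # 1) Average many CCD frames to suppress air turbulence.
    mean_img = np.mean(frames, axis=0)
    # 2) Convert gray values to surface error (hypothetical linear calibration).
    error = (mean_img - mean_img.mean()) * gray_to_error
    # 3) Remove the linear deviation introduced by defocus with a least-squares plane fit.
    ny, nx = error.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    A = np.column_stack([xx.ravel(), yy.ravel(), np.ones(error.size)])
    coeff, *_ = np.linalg.lstsq(A, error.ravel(), rcond=None)
    return error - (A @ coeff).reshape(error.shape)

frames = np.random.rand(20, 64, 64)          # stand-in for a stack of CCD images
surface = retrieve_wavefront(frames)
print(surface.shape, surface.std())
```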

  18. Laser polymerization-based novel lift-off technique

    Energy Technology Data Exchange (ETDEWEB)

    Bhuian, B. [Tyndall National Institute, Lee Maltings, Prospect Row, Cork (Ireland); Department of Microelectronic Engineering, University College Cork, Cork (Ireland); Winfield, R.J. [Tyndall National Institute, Lee Maltings, Prospect Row, Cork (Ireland)], E-mail: richard.winfield@tyndall.ie; Crean, G.M. [Tyndall National Institute, Lee Maltings, Prospect Row, Cork (Ireland); Department of Microelectronic Engineering, University College Cork, Cork (Ireland)

    2009-03-01

    The fabrication of microstructures by two-photon polymerization has been widely reported as a means of directly writing three-dimensional nanoscale structures. In the majority of cases a single point serial writing technique is used to form a polymer model. Single layer writing can also be used to fabricate two-dimensional patterns and we report an extension of this capability by using two-photon polymerization to form a template that can be used as a sacrificial layer for a novel lift-off process. A Ti:sapphire laser, with wavelength 795 nm, 80 MHz repetition rate, 100 fs pulse duration and an average power of 700 mW, was used to write 2D grid patterns with pitches of 0.8 and 1.0 µm in a urethane acrylate resin that was spun on to a lift-off base layer. This was overcoated with gold and the grid lifted away to leave an array of gold islands. The optical transmission properties of the gold arrays were measured and found to be in agreement with a rigorous coupled-wave analysis simulation.

  19. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers four-stage cycle productivity, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through a change in equipment, and the model can easily be applied to both manufacturing and service industries.
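
    The paper's actual mixed-integer formulation is not reproduced in this record; the toy sketch below only conveys the flavour of the selection problem, choosing a subset of candidate improvement techniques to maximise a linear productivity gain under a budget, by brute-force enumeration (all technique names and numbers are invented):

```python
from itertools import combinations

# Hypothetical improvement techniques: (name, productivity gain, cost).
techniques = [
    ("preventive maintenance", 0.08, 30),
    ("operator training",      0.05, 20),
    ("equipment upgrade",      0.12, 60),
    ("layout redesign",        0.07, 40),
    ("quality circles",        0.04, 10),
]
budget = 90

best_gain, best_set = 0.0, ()
for r in range(len(techniques) + 1):
    for subset in combinations(techniques, r):
        cost = sum(t[2] for t in subset)
        gain = sum(t[1] for t in subset)   # linearity assumption, as in the paper
        if cost <= budget and gain > best_gain:
            best_gain, best_set = gain, tuple(t[0] for t in subset)

print(best_set, round(best_gain, 2))
```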

  20. Maternal, Infant Characteristics, Breastfeeding Techniques, and Initiation: Structural Equation Modeling Approaches

    OpenAIRE

    2015-01-01

    Objectives The aim of this study was to examine the relationships among maternal and infant characteristics, breastfeeding techniques, and exclusive breastfeeding initiation in different modes of birth using structural equation modeling approaches. Methods We examined a hypothetical model based on integrating concepts of a breastfeeding decision-making model, a breastfeeding initiation model, and a social cognitive theory among 952 mother-infant dyads. The LATCH breastfeeding assessment tool ...

  1. Experimental evaluation of optimal Vehicle Dynamic Control based on the State Dependent Riccati Equation technique

    NARCIS (Netherlands)

    Alirezaei, M.; Kanarachos, S.A.; Scheepers, B.T.M.; Maurice, J.P.

    2013-01-01

    Development and experimental evaluation of an optimal Vehicle Dynamic Control (VDC) strategy based on the State Dependent Riccati Equation (SDRE) control technique is presented. The proposed nonlinear controller is based on a nonlinear vehicle model with nonlinear tire characteristics. A novel ext
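
    The vehicle model and controller details are not given in this record; the sketch below only illustrates the general SDRE idea, re-solving a state-dependent Riccati equation at each state to obtain the feedback gain, for a toy state-dependent system (the matrices are invented and are not the vehicle model of the paper):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def sdre_gain(x, Q, R):
    """One SDRE step: factor the dynamics as A(x), B(x), solve the ARE, return K(x)."""
    # Toy state-dependent coefficient (SDC) factorisation x_dot = A(x) x + B(x) u.
    A = np.array([[0.0, 1.0],
                  [-1.0 - x[0] ** 2, -0.5]])   # stiffness varies with the state
    B = np.array([[0.0], [1.0]])
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)          # K = R^{-1} B^T P

Q, R = np.diag([10.0, 1.0]), np.array([[1.0]])
x = np.array([0.2, -0.1])
dt = 0.01
for _ in range(500):                            # simple closed-loop simulation
    K = sdre_gain(x, Q, R)
    u = -K @ x
    A = np.array([[0.0, 1.0], [-1.0 - x[0] ** 2, -0.5]])
    x = x + dt * (A @ x + np.array([0.0, u[0]]))
print(x)  # state driven towards the origin
```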

  2. Experimental evaluation of optimal Vehicle Dynamic Control based on the State Dependent Riccati Equation technique

    NARCIS (Netherlands)

    Alirezaei, M.; Kanarachos, S.A.; Scheepers, B.T.M.; Maurice, J.P.

    2013-01-01

    Development and experimental evaluation of an optimal Vehicle Dynamic Control (VDC) strategy based on the State Dependent Riccati Equation (SDRE) control technique is presented. The proposed nonlinear controller is based on a nonlinear vehicle model with nonlinear tire characteristics. A novel

  3. Concerning the Feasibility of Example-driven Modelling Techniques

    OpenAIRE

    Thorne, Simon; Ball, David; Lawson, Zoe Frances

    2008-01-01

    We report on a series of experiments concerning the feasibility of example-driven modelling. The main aim was to establish experimentally, within an academic environment, the relationship between error and task complexity using a) traditional spreadsheet modelling, b) example-driven techniques. We report on the experimental design, sampling, research methods and the tasks set for both control and treatment groups. Analysis of the completed tasks allows comparison of several...

  4. Advanced Phase noise modeling techniques of nonlinear microwave devices

    OpenAIRE

    Prigent, M.; J. C. Nallatamby; R. Quere

    2004-01-01

    In this paper we present a coherent set of tools allowing an accurate and predictive design of low phase noise oscillators. Advanced phase noise modelling techniques in nonlinear microwave devices must be supported by a proven combination of the following: - electrical modeling of low-frequency noise of semiconductor devices, oriented to circuit CAD; the local noise sources will be either cyclostationary noise sources or quasistationary noise sources; - theoretic...

  5. Modeling and design techniques for RF power amplifiers

    CERN Document Server

    Raghavan, Arvind; Laskar, Joy

    2008-01-01

    The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, and design more accurate compact device models, with faster extraction routines, to create cost effective and reliable circuits.

  6. A Novel Nanofabrication Technique of Silicon-Based Nanostructures

    Science.gov (United States)

    Meng, Lingkuan; He, Xiaobin; Gao, Jianfeng; Li, Junjie; Wei, Yayi; Yan, Jiang

    2016-11-01

    A novel nanofabrication technique which can produce highly controlled silicon-based nanostructures at wafer scale has been proposed, using a simple amorphous silicon (α-Si) material as an etch mask. The SiO2 nanostructures directly fabricated can serve as nanotemplates for transfer into underlying substrates such as silicon, germanium, transistor gates, or other dielectric materials to form electrically functional nanostructures and devices. In this paper, two typical silicon-based nanostructures, a nanoline and a nanofin, have been successfully fabricated by this technique, demonstrating excellent etch performance. In addition, the silicon nanostructures fabricated above can be further trimmed to less than 10 nm by combining the technique with assisted post-treatment methods. The novel nanofabrication technique is expected to become an emerging technology with low process complexity and good compatibility with existing silicon integrated circuits, and it is an important step towards the easy fabrication of a wide variety of nanoelectronics, biosensors, and optoelectronic devices.

  7. Membrane-based microextraction techniques in analytical chemistry: A review.

    Science.gov (United States)

    Carasek, Eduardo; Merib, Josias

    2015-06-23

    The use of membrane-based sample preparation techniques in analytical chemistry has gained growing attention from the scientific community since the development of miniaturized sample preparation procedures in the 1990s. The use of membranes makes the microextraction procedures more stable, allowing the determination of analytes in complex and "dirty" samples. This review describes some characteristics of classical membrane-based microextraction techniques (membrane-protected solid-phase microextraction, hollow-fiber liquid-phase microextraction and hollow-fiber renewal liquid membrane) as well as some alternative configurations (thin film and electromembrane extraction) used successfully for the determination of different analytes in a large variety of matrices; some critical points regarding each technique are highlighted.

  8. Validation of Models : Statistical Techniques and Data Availability

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1999-01-01

    This paper shows which statistical techniques can be used to validate simulation models, depending on which real-life data are available. Concerning this availability three situations are distinguished (i) no data, (ii) only output data, and (iii) both input and output data. In case (i) - no real

  9. Techniques and tools for efficiently modeling multiprocessor systems

    Science.gov (United States)

    Carpenter, T.; Yalamanchili, S.

    1990-01-01

    System-level tools and methodologies associated with an integrated approach to the development of multiprocessor systems are examined. Tools for capturing initial program structure, automated program partitioning, automated resource allocation, and high-level modeling of the combined application and resource are discussed. The primary language focus of the current implementation is Ada, although the techniques should be appropriate for other programming paradigms.

  10. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    Full Text Available When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom's taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive level validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.

  11. Comparing modelling techniques for analysing urban pluvial flooding.

    Science.gov (United States)

    van Dijk, E; van der Meulen, J; Kluck, J; Straatman, J H M

    2014-01-01

    Short peak rainfall intensities cause sewer systems to overflow leading to flooding of streets and houses. Due to climate change and densification of urban areas, this is expected to occur more often in the future. Hence, next to their minor (i.e. sewer) system, municipalities have to analyse their major (i.e. surface) system in order to anticipate urban flooding during extreme rainfall. Urban flood modelling techniques are powerful tools in both public and internal communications and transparently support design processes. To provide more insight into the (im)possibilities of different urban flood modelling techniques, simulation results have been compared for an extreme rainfall event. The results show that, although modelling software is tending to evolve towards coupled one-dimensional (1D)-two-dimensional (2D) simulation models, surface flow models, using an accurate digital elevation model, prove to be an easy and fast alternative to identify vulnerable locations in hilly and flat areas. In areas at the transition between hilly and flat, however, coupled 1D-2D simulation models give better results since catchments of major and minor systems can differ strongly in these areas. During the decision making process, surface flow models can provide a first insight that can be complemented with complex simulation models for critical locations.

  12. Image encryption techniques based on the fractional Fourier transform

    Science.gov (United States)

    Hennelly, B. M.; Sheridan, J. T.

    2003-11-01

    The fractional Fourier transform (FRT) is a generalisation of the Fourier transform which allows domains of mixed spatial frequency and spatial information to be examined. A number of methods have recently been proposed in the literature for the encryption of two-dimensional information using optical systems based on the FRT. Typically, these methods require random phase screen keys to decrypt the data, which must be stored at the receiver and must be carefully aligned with the received encrypted data. We have proposed a new technique based on a random shifting, or jigsaw, transformation. This method does not require the use of phase keys. The image is encrypted by juxtaposition of sections of the image in various FRT domains. The new method has been compared numerically with existing methods and shows comparable or superior robustness to blind decryption. An optical implementation is also proposed and the sensitivity of the various encryption keys to blind decryption is quantified. We also present a second image encryption technique, which is based on a recently proposed method of optical phase retrieval using the optical FRT and one of its discrete counterparts. Numerical simulations of the new algorithm indicate that the sensitivity of the keys is much greater than for any of the techniques currently available. In fact the sensitivity appears to be so high that optical implementation, based on existing optical signal processing technology, may be impossible. However, the technique has been shown to be a powerful method of 2-D image data encryption.
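
    The optical FRT stages are not easily reproduced in a few lines; the sketch below only illustrates the jigsaw (random block juxtaposition) component in the purely digital domain, with a seeded permutation standing in for the encryption key (the block size and key handling are invented):

```python
import numpy as np

def jigsaw(image, block=8, key=42, inverse=False):
    """Shuffle (or unshuffle) non-overlapping blocks of an image with a keyed permutation."""
    h, w = image.shape
    bh, bw = h // block, w // block
    blocks = (image[:bh * block, :bw * block]
              .reshape(bh, block, bw, block)
              .swapaxes(1, 2)
              .reshape(bh * bw, block, block))
    perm = np.random.default_rng(key).permutation(bh * bw)
    if inverse:
        out = np.empty_like(blocks)
        out[perm] = blocks          # undo the juxtaposition
    else:
        out = blocks[perm]          # juxtapose sections of the image
    return (out.reshape(bh, bw, block, block)
               .swapaxes(1, 2)
               .reshape(bh * block, bw * block))

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
scrambled = jigsaw(img)
restored = jigsaw(scrambled, inverse=True)
print(np.allclose(img, restored))   # True
```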

  13. Full-duplex MIMO system based on antenna cancellation technique

    DEFF Research Database (Denmark)

    Foroozanfard, Ehsan; Franek, Ondrej; Tatomirescu, Alexandru

    2014-01-01

    The performance of an antenna cancellation technique for a multiple-input– multiple-output (MIMO) full-duplex system that is based on null-steering beamforming and antenna polarization diversity is investigated. A practical implementation of a symmetric antenna topology comprising three dual-pola...

  14. Model Based Iterative Reconstruction for Bright Field Electron Tomography (Postprint)

    Science.gov (United States)

    2013-02-01

    ...Typical algorithms such as Filtered Back Projection (FBP) and the Simultaneous Iterative Reconstruction Technique (SIRT) are applied to the data. Model based iterative reconstruction (MBIR) provides a powerful framework for tomographic reconstruction...

  15. Separable Watermarking Technique Using the Biological Color Model

    Directory of Open Access Journals (Sweden)

    David Nino

    2009-01-01

    Full Text Available Problem statement: The issue of having robust and fragile watermarking is still a main focus for various researchers worldwide. The performance of a watermarking technique depends on how complex it is as well as how feasible it is to implement. These issues are tested using various kinds of attacks, including geometry and transformation. Watermarking techniques for color images are more challenging than for gray images in terms of complexity and information handling. In this study, we focused on the implementation of a watermarking technique for color images using the biological model. Approach: We proposed a novel method for watermarking using the spatial and Discrete Cosine Transform (DCT) domains. The proposed method dealt with color images in the biological color model, the Hue, Saturation and Intensity (HSI) model. The technique was implemented and tested against various color images, including standard ones such as the pepper image. The experiments were done using various attacks such as cropping, transformation and geometry. Results: The method's robustness showed high accuracy in data retrieval, and the technique is fragile against geometric attacks. Conclusion: Watermark security was increased by using the Hadamard transform matrix. The watermarks used were meaningful and of varying sizes and details.

  16. Video multiple watermarking technique based on image interlacing using DWT.

    Science.gov (United States)

    Ibrahim, Mohamed M; Abdel Kader, Neamat S; Zorkany, M

    2014-01-01

    Digital watermarking is one of the important techniques used to secure digital media files in the domains of data authentication and copyright protection. In nonblind watermarking systems, the need for the original host file in the watermark recovery operation imposes an overhead on system resources, doubling the required memory capacity and communications bandwidth. In this paper, a robust video multiple watermarking technique is proposed to solve this problem. This technique is based on image interlacing. In this technique, a three-level discrete wavelet transform (DWT) is used as the watermark embedding/extracting domain, the Arnold transform is used as the watermark encryption/decryption method, and different types of media (gray image, color image, and video) are used as watermarks. The robustness of this technique is tested by applying different types of attacks such as geometric, noising, format-compression, and image-processing attacks. The simulation results show the effectiveness and good performance of the proposed technique in saving system resources, memory capacity, and communications bandwidth.
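
    As a small illustration of the Arnold transform scrambling step used here for watermark encryption/decryption, a NumPy sketch for a square watermark might look as follows (the iteration count, acting as the key, and the watermark size are arbitrary):

```python
import numpy as np

def arnold(img, iterations=1):
    """Arnold cat map scrambling of a square N x N image."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def arnold_inverse(img, iterations=1):
    """Invert the Arnold cat map (watermark decryption)."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        restored = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                restored[(2 * x - y) % n, (y - x) % n] = out[x, y]
        out = restored
    return out

wm = np.random.randint(0, 256, (32, 32))
print(np.array_equal(wm, arnold_inverse(arnold(wm, 5), 5)))   # True
```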

  17. Face Veins Based MCMT Technique for Personal Identification

    Directory of Open Access Journals (Sweden)

    Kamta Nath Mishra

    2015-08-01

    Full Text Available Face vein based personal identification is a challenging task in the field of identity verification, because many other techniques do not capture the uniqueness of a person. This research paper establishes the uniqueness of a person on the basis of a face vein based technique. In this paper, face vein images of five different persons have been used with different rotation angles (left/right 90° to 270° and 315°). For each person, eight different images at different rotations were used, and for each of these images the same minimum cost minutiae tree (MCMT) is obtained. Here, Prim's or Kruskal's algorithm is used for finding the MCMT from a minutiae graph. The MCMT is traversed in pre-order to generate a unique string of vertices and edge lengths. We deviated the edge lengths of each MCMT by five pixels in positive and negative directions for robustness testing. It is observed in our experiments that the traversed string, which consists of the vertices and edge lengths of the MCMT, is unique for each person, and this unique sequence correctly identifies a person with an accuracy above 95%. Further, we have compared the performance of our proposed technique with other standard techniques and observed that the proposed technique gives promising results.
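
    The record describes building a minimum cost minutiae tree with Prim's or Kruskal's algorithm and traversing it in pre-order to obtain a signature string; a compact sketch of that pipeline on made-up minutiae coordinates could be:

```python
import math

def mcmt_signature(minutiae):
    """Build an MST (Prim's algorithm) over minutiae points and return its pre-order signature."""
    n = len(minutiae)
    dist = lambda a, b: math.dist(minutiae[a], minutiae[b])
    in_tree, parent = {0}, {0: None}
    while len(in_tree) < n:
        # Pick the cheapest edge leaving the tree.
        u, v = min(((i, j) for i in in_tree for j in range(n) if j not in in_tree),
                   key=lambda e: dist(*e))
        in_tree.add(v)
        parent[v] = u
    children = {i: [] for i in range(n)}
    for v, u in parent.items():
        if u is not None:
            children[u].append(v)

    # Pre-order traversal emits vertices and rounded edge lengths as the signature string.
    def preorder(u):
        parts = [f"v{u}"]
        for v in sorted(children[u]):
            parts.append(str(round(dist(u, v))))
            parts.extend(preorder(v))
        return parts

    return "-".join(preorder(0))

points = [(10, 12), (40, 15), (22, 60), (70, 42), (55, 80)]   # hypothetical minutiae coordinates
print(mcmt_signature(points))
```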

  18. The Real-Time Image Processing Technique Based on DSP

    Institute of Scientific and Technical Information of China (English)

    QI Chang; CHEN Yue-hua; HUANG Tian-shu

    2005-01-01

    This paper proposes a novel real-time image processing technique based on a digital signal processor (DSP). With respect to the wavelet transform (WT) algorithm, the technique uses the second-generation wavelet transform (lifting scheme WT), which has low computational complexity, for 2-D image data processing. Since the lifting scheme WT processes 1-D data notably better than 2-D data, this paper proposes a reformed processing method: transform the 2-D image data into a 1-D data sequence by a linearization method, then process the 1-D data sequence with the lifting scheme WT. The method changes the image convolution mode, which is based on cross filtering of rows and columns. As for the hardware realization, the technique optimizes the DSP program structure to exploit the processing power of the DSP's on-chip memory. The experimental results show that the real-time image processing technique proposed in this paper can meet the real-time requirement of video-image transmission in the video surveillance system of electric power. So the technique is a feasible and efficient DSP solution.
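
    The abstract does not spell out the lifting steps used on the DSP; as a generic illustration of a second-generation (lifting scheme) wavelet transform applied to a 1-D sequence obtained by linearizing a 2-D image, a minimal Haar-style sketch could be (the predict/update steps are the simplest possible choice, not necessarily those of the paper):

```python
import numpy as np

def lifting_forward(signal):
    """One level of a Haar-style lifting wavelet transform on a 1-D sequence."""
    even, odd = signal[0::2].astype(float), signal[1::2].astype(float)
    detail = odd - even            # predict step: odd samples predicted from even ones
    approx = even + detail / 2     # update step: preserves the running average
    return approx, detail

def lifting_inverse(approx, detail):
    """Invert the lifting steps exactly."""
    even = approx - detail / 2
    odd = detail + even
    out = np.empty(even.size + odd.size)
    out[0::2], out[1::2] = even, odd
    return out

# Linearize a 2-D image into a 1-D sequence (row-major), as proposed in the paper,
# then apply the 1-D lifting transform.
image = np.arange(16, dtype=float).reshape(4, 4)
seq = image.ravel()
a, d = lifting_forward(seq)
print(np.allclose(lifting_inverse(a, d), seq))   # True
```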

  19. Impact of Domain Modeling Techniques on the Quality of Domain Model: An Experiment

    Directory of Open Access Journals (Sweden)

    Hiqmat Nisa

    2016-10-01

    Full Text Available The unified modeling language (UML) is widely used to analyze and design different software development artifacts in object oriented development. The domain model is a significant artifact that models the problem domain and visually represents real world objects and the relationships among them. It facilitates the comprehension process by identifying the vocabulary and key concepts of the business world. The category list technique identifies concepts and associations with the help of predefined categories, which are important to business information systems, whereas the noun phrasing technique performs grammatical analysis of the use case description to recognize concepts and associations. Both of these techniques are used for the construction of the domain model; however, no empirical evidence exists that evaluates the quality of the resultant domain model constructed via these two basic techniques. A controlled experiment was performed to investigate the impact of the category list and noun phrasing techniques on the quality of the domain model. The constructed domain model is evaluated for completeness, correctness and the effort required for its design. The obtained results show that the category list technique is better than the noun phrasing technique for the identification of concepts, as it avoids generating unnecessary elements, i.e. extra concepts, associations and attributes, in the domain model. The noun phrasing technique produces a comprehensive domain model and requires less effort as compared to the category list. There is no statistically significant difference between both techniques in the case of correctness.

  20. Impact of Domain Modeling Techniques on the Quality of Domain Model: An Experiment

    Directory of Open Access Journals (Sweden)

    Hiqmat Nisa

    2016-11-01

    Full Text Available The unified modeling language (UML) is widely used to analyze and design different software development artifacts in object oriented development. The domain model is a significant artifact that models the problem domain and visually represents real world objects and the relationships among them. It facilitates the comprehension process by identifying the vocabulary and key concepts of the business world. The category list technique identifies concepts and associations with the help of predefined categories, which are important to business information systems, whereas the noun phrasing technique performs grammatical analysis of the use case description to recognize concepts and associations. Both of these techniques are used for the construction of the domain model; however, no empirical evidence exists that evaluates the quality of the resultant domain model constructed via these two basic techniques. A controlled experiment was performed to investigate the impact of the category list and noun phrasing techniques on the quality of the domain model. The constructed domain model is evaluated for completeness, correctness and the effort required for its design. The obtained results show that the category list technique is better than the noun phrasing technique for the identification of concepts, as it avoids generating unnecessary elements, i.e. extra concepts, associations and attributes, in the domain model. The noun phrasing technique produces a comprehensive domain model and requires less effort as compared to the category list. There is no statistically significant difference between both techniques in the case of correctness.

  1. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    Science.gov (United States)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).

  2. Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Palmer, Kevin [Teck Resources Limited (Canada); Deutsch, Clayton V.; Szymanski, Jozef [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Etsell, Thomas H. [University of Alberta, Department of Chemical and Materials Engineering (Canada)

    2016-06-15

    High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scale, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and are rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.

  3. Model Based Definition

    Science.gov (United States)

    Rowe, Sidney E.

    2010-01-01

    In September 2007, the Engineering Directorate at the Marshall Space Flight Center (MSFC) created the Design System Focus Team (DSFT). MSFC was responsible for the in-house design and development of the Ares 1 Upper Stage and the Engineering Directorate was preparing to deploy a new electronic Configuration Management and Data Management System with the Design Data Management System (DDMS) based upon a Commercial Off The Shelf (COTS) Product Data Management (PDM) System. The DSFT was to establish standardized CAD practices and a new data life cycle for design data. Of special interest here, the design teams were to implement Model Based Definition (MBD) in support of the Upper Stage manufacturing contract. It is noted that this MBD does use partially dimensioned drawings for auxiliary information to the model. The design data lifecycle implemented several new release states to be used prior to formal release that allowed the models to move through a flow of progressive maturity. The DSFT identified some 17 Lessons Learned as outcomes of the standards development, pathfinder deployments and initial application to the Upper Stage design completion. Some of the high value examples are reviewed.

  4. The Integrated Use of Enterprise and System Dynamics Modelling Techniques in Support of Business Decisions

    Directory of Open Access Journals (Sweden)

    K. Agyapong-Kodua

    2012-01-01

    Full Text Available Enterprise modelling techniques support business process (re)engineering by capturing existing processes and, based on perceived outputs, support the design of future process models capable of meeting enterprise requirements. System dynamics modelling tools, on the other hand, are used extensively for policy analysis and for modelling aspects of dynamics which impact on businesses. In this paper, the use of enterprise and system dynamics modelling techniques has been integrated to facilitate qualitative and quantitative reasoning about the structures and behaviours of processes and resource systems used by a manufacturing enterprise during the production of composite bearings. The case study testing reported has led to the specification of a new modelling methodology for analysing and managing dynamics and complexities in production systems. This methodology is based on a systematic transformation process, which synergises the use of a selection of public domain enterprise modelling, causal loop and continuous simulation modelling techniques. The success of the modelling process defined relies on the creation of useful CIMOSA process models which are then converted to causal loops. The causal loop models are then structured and translated to equivalent dynamic simulation models using the proprietary continuous simulation modelling tool iThink.

  5. Traceability in Model-Based Testing

    Directory of Open Access Journals (Sweden)

    Mathew George

    2012-11-01

    Full Text Available The growing complexities of software and the demand for shorter time to market are two important challenges that face today’s IT industry. These challenges demand the increase of both productivity and quality of software. Model-based testing is a promising technique for meeting these challenges. Traceability modeling is a key issue and challenge in model-based testing. Relationships between the different models will help to navigate from one model to another, and trace back to the respective requirements and the design model when the test fails. In this paper, we present an approach for bridging the gaps between the different models in model-based testing. We propose relation definition markup language (RDML for defining the relationships between models.

  6. Proposing a Wiki-Based Technique for Collaborative Essay Writing

    Directory of Open Access Journals (Sweden)

    Mabel Ortiz Navarrete

    2014-10-01

    Full Text Available This paper aims at proposing a technique for students learning English as a foreign language when they collaboratively write an argumentative essay in a wiki environment. A wiki environment and collaborative work play an important role within the academic writing task. Nevertheless, an appropriate and systematic work assignment is required in order to make use of both. In this paper the proposed technique when writing a collaborative essay mainly attempts to provide the most effective way to enhance equal participation among group members by taking as a base computer mediated collaboration. Within this context, the students’ role is clearly defined and individual and collaborative tasks are explained.

  7. Knowledge based systems advanced concepts, techniques and applications

    CERN Document Server

    1997-01-01

    The field of knowledge-based systems (KBS) has expanded enormously during the last years, and many important techniques and tools are currently available. Applications of KBS range from medicine to engineering and aerospace.This book provides a selected set of state-of-the-art contributions that present advanced techniques, tools and applications. These contributions have been prepared by a group of eminent researchers and professionals in the field.The theoretical topics covered include: knowledge acquisition, machine learning, genetic algorithms, knowledge management and processing under unc

  8. Line Search-Based Inverse Lithography Technique for Mask Design

    Directory of Open Access Journals (Sweden)

    Xin Zhao

    2012-01-01

    Full Text Available As feature sizes are much smaller than the wavelength of the illumination source of lithography equipment, resolution enhancement technology (RET) has been increasingly relied upon to minimize image distortions. In advanced process nodes, a pixelated mask becomes essential for RET to achieve an acceptable resolution. In this paper, we investigate the problem of pixelated binary mask design in a partially coherent imaging system. Similar to previous approaches, the mask design problem is formulated as a nonlinear program and is solved by gradient-based search. Our contributions are four novel techniques to achieve significantly better image quality. First, to transform the original bound-constrained formulation to an unconstrained optimization problem, we propose a new noncyclic transformation of mask variables to replace the well-known cyclic one. As our transformation is monotonic, it enables better control in flipping pixels. Second, based on this new transformation, we propose a highly efficient line search-based heuristic technique to solve the resulting unconstrained optimization. Third, to simplify the optimization, instead of using a discretization regularization penalty technique, we directly round the optimized gray mask into a binary mask for pattern error evaluation. Fourth, we introduce a jump technique in order to escape local minima and continue the search.

  9. A Survey on Statistical Based Single Channel Speech Enhancement Techniques

    Directory of Open Access Journals (Sweden)

    Sunnydayal. V

    2014-11-01

    Full Text Available Speech enhancement is a long-standing problem with various applications like hearing aids, and the automatic recognition and coding of speech signals. Single channel speech enhancement techniques are used for the enhancement of speech degraded by additive background noises. Background noise can have an adverse impact on our ability to converse without hindrance or smoothly in very noisy environments, such as busy streets, in a car, or the cockpit of an airplane. Such noises can affect the quality and intelligibility of speech. This is a survey paper and its objective is to provide an overview of speech enhancement algorithms that enhance a noisy speech signal corrupted by additive noise. The algorithms are mainly based on statistical approaches. Different estimators are compared. Challenges and opportunities of speech enhancement are also discussed. This paper helps in choosing the best statistical based technique for speech enhancement.

  10. An Observed Voting System Based On Biometric Technique

    Directory of Open Access Journals (Sweden)

    B. Devikiruba

    2015-08-01

    Full Text Available ABSTRACT This article describes a computational framework which can run on almost every computer connected to an IP-based network to study biometric techniques. This paper discusses a system in which the protection of confidential information puts strong security demands on identification. Biometry provides a user-friendly method for this identification and is becoming a competitor to current identification mechanisms. The experimentation section focuses on biometric verification specifically based on fingerprints. This article should be read as a warning to those thinking of using methods of identification without first examining the technical opportunities for compromising the mechanisms and the associated legal consequences. The development is based on the Java language, which makes it easy to extend the software packages used for testing new control techniques.

  11. Non-Destructive Techniques Based on Eddy Current Testing

    Science.gov (United States)

    García-Martín, Javier; Gómez-Gil, Jaime; Vázquez-Sánchez, Ernesto

    2011-01-01

    Non-destructive techniques are used widely in the metal industry in order to control the quality of materials. Eddy current testing is one of the most extensively used non-destructive techniques for inspecting electrically conductive materials at very high speeds that does not require any contact between the test piece and the sensor. This paper includes an overview of the fundamentals and main variables of eddy current testing. It also describes the state-of-the-art sensors and modern techniques such as multi-frequency and pulsed systems. Recent advances in complex models towards solving crack-sensor interaction, developments in instrumentation due to advances in electronic devices, and the evolution of data processing suggest that eddy current testing systems will be increasingly used in the future. PMID:22163754

  12. Non-Destructive Techniques Based on Eddy Current Testing

    Directory of Open Access Journals (Sweden)

    Ernesto Vázquez-Sánchez

    2011-02-01

    Full Text Available Non-destructive techniques are used widely in the metal industry in order to control the quality of materials. Eddy current testing is one of the most extensively used non-destructive techniques for inspecting electrically conductive materials at very high speeds that does not require any contact between the test piece and the sensor. This paper includes an overview of the fundamentals and main variables of eddy current testing. It also describes the state-of-the-art sensors and modern techniques such as multi-frequency and pulsed systems. Recent advances in complex models towards solving crack-sensor interaction, developments in instrumentation due to advances in electronic devices, and the evolution of data processing suggest that eddy current testing systems will be increasingly used in the future.

  13. Non-destructive techniques based on eddy current testing.

    Science.gov (United States)

    García-Martín, Javier; Gómez-Gil, Jaime; Vázquez-Sánchez, Ernesto

    2011-01-01

    Non-destructive techniques are used widely in the metal industry in order to control the quality of materials. Eddy current testing is one of the most extensively used non-destructive techniques for inspecting electrically conductive materials at very high speeds that does not require any contact between the test piece and the sensor. This paper includes an overview of the fundamentals and main variables of eddy current testing. It also describes the state-of-the-art sensors and modern techniques such as multi-frequency and pulsed systems. Recent advances in complex models towards solving crack-sensor interaction, developments in instrumentation due to advances in electronic devices, and the evolution of data processing suggest that eddy current testing systems will be increasingly used in the future.

  14. Nonlinear Second-Order Partial Differential Equation-Based Image Smoothing Technique

    Directory of Open Access Journals (Sweden)

    Tudor Barbu

    2016-09-01

    Full Text Available A second-order nonlinear parabolic PDE-based restoration model is provided in this article. The proposed anisotropic diffusion-based denoising approach is based on robust versions of the edge-stopping function and of the conductance parameter. Two stable and consistent approximation schemes are then developed for this differential model. Our PDE-based filtering technique achieves efficient noise removal while preserving the edges and other image features. It outperforms both conventional filters and many other PDE-based denoising approaches, as demonstrated by the successful experiments and the method comparison performed.
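
    The particular edge-stopping function and conductance variants proposed in the paper are not given in this record; a generic anisotropic-diffusion (Perona-Malik style) sketch of the kind of scheme being refined might be:

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=20, kappa=15.0, lam=0.2):
    """Explicit Perona-Malik-style smoothing; g() is one common edge-stopping choice."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)   # edge-stopping function
    for _ in range(n_iter):
        # Finite differences towards the four neighbours.
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u,  1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u,  1, axis=1) - u
        u += lam * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

noisy = np.random.rand(64, 64) * 10 + np.tri(64) * 50   # toy image with an edge
smoothed = anisotropic_diffusion(noisy)
print(noisy.std(), smoothed.std())
```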

  15. Generating 3D model of slope eroded gully based on photo reconstruction technique

    Institute of Scientific and Technical Information of China (English)

    李俊利; 李斌兵; 柳方明; 李占斌

    2015-01-01

    Based on Structure from Motion (SFM) and Multi-View Stereo (MVS) techniques, this paper proposed a rapid 3D reconstruction method for slope eroded gullies. Firstly, feature points were extracted and described using the Scale-Invariant Feature Transform (SIFT), and the Random Sample Consensus (RANSAC) algorithm was then applied to filter inaccurate matching points generated by the Nearest Neighbor (NN) algorithm. Secondly, since neither camera parameters nor 3D scene information was available, SFM was used because it provides a way to iteratively recover the camera matrices and 3D point coordinates. During the iteration, the Bundle Adjustment (BA) algorithm was used for nonlinear optimization and to distribute the error evenly in order to preserve the precision of the reconstructed model. After that, under the constraints of local photometric consistency and global visibility, the Patch-Based Multi-View Stereo (PMVS) algorithm was adopted to expand the sparse point cloud generated by SFM, completing the dense point cloud reconstruction. In order to validate the rationality and accuracy of using this method to monitor gully erosion, an indoor runoff scouring experiment was conducted in the hydrology and water resources laboratory at Xi'an University of Technology. Photos used in the reconstruction were taken by a Canon 550D SLR camera. Because the modeling process relies on tracking oriented points on the subject to determine the final 3D point set, the angular difference between two adjacent photos cannot be too large, otherwise tracked points are lost. Reasonable selection of the photo shooting locations, trajectory and angles should be considered according to the experimental environment and conditions. This paper used the VisualSFM software to complete the detection and matching of feature points, the sparse reconstruction of the point cloud, and the self-calibration of the camera; it used the CMVS and PMVS2 tools to finish the dense reconstruction, and
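
    A minimal OpenCV sketch of the first stage described above (SIFT feature detection, nearest-neighbour matching and RANSAC filtering of bad matches) might look like the following; the file names are placeholders, and the full SFM/PMVS pipeline, handled in the paper by VisualSFM, CMVS and PMVS2, is not reproduced:

```python
import cv2
import numpy as np

# Two overlapping photographs of the eroded gully (placeholder file names).
img1 = cv2.imread("gully_view_1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("gully_view_2.jpg", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Nearest-neighbour matching with Lowe's ratio test.
matcher = cv2.BFMatcher()
raw = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in raw if m.distance < 0.75 * n.distance]

# RANSAC on the fundamental matrix removes the remaining bad correspondences.
pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
print(len(good), "matches,", int(inlier_mask.sum()), "RANSAC inliers")
```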

  16. Multivariate discrimination technique based on the Bayesian theory

    Institute of Scientific and Technical Information of China (English)

    JIN Ping; PAN Chang-zhou; XIAO Wei-guo

    2007-01-01

    A multivariate discrimination technique was established based on Bayesian theory. Using this technique, P/S ratios of different types (e.g., Pn/Sn, Pn/Lg, Pg/Sn or Pg/Lg) measured within different frequency bands and from different stations were combined to discriminate seismic events in Central Asia. Major advantages of the Bayesian approach are that the probability of being an explosion can be directly calculated for any unknown event given the measurements of a group of discriminants, while at the same time correlations among these discriminants can be fully taken into account. It was proved theoretically that the Bayesian technique is optimal and that its discriminating performance is better than that of any individual discriminant, as well as better than that yielded by a linear combination approach that ignores correlations among discriminants. This conclusion was also validated in this paper by applying the Bayesian approach to the above-mentioned observed data.
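
    A stripped-down numerical sketch of the kind of Bayesian combination described, using multivariate Gaussian class-conditional densities over several correlated P/S discriminants to obtain a posterior probability of explosion for a new event, might be as follows (the training data are synthetic):

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(7)
# Synthetic log(P/S) discriminants (e.g. Pn/Lg, Pg/Lg in two bands) for training events.
earthquakes = rng.multivariate_normal([-0.4, -0.3, -0.5], 0.05 * np.eye(3) + 0.02, size=200)
explosions  = rng.multivariate_normal([ 0.3,  0.4,  0.2], 0.05 * np.eye(3) + 0.02, size=60)

def fit(data):
    # Full covariance keeps the correlations among discriminants.
    return data.mean(axis=0), np.cov(data, rowvar=False)

mu_eq, cov_eq = fit(earthquakes)
mu_ex, cov_ex = fit(explosions)
prior_ex = len(explosions) / (len(explosions) + len(earthquakes))

def prob_explosion(x):
    """Posterior probability that an unknown event is an explosion (Bayes' rule)."""
    l_ex = multivariate_normal.pdf(x, mu_ex, cov_ex) * prior_ex
    l_eq = multivariate_normal.pdf(x, mu_eq, cov_eq) * (1 - prior_ex)
    return l_ex / (l_ex + l_eq)

print(prob_explosion([0.25, 0.35, 0.15]))   # close to 1 for explosion-like ratios
```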

  17. RANKING THE REFACTORING TECHNIQUES BASED ON THE INTERNAL QUALITY ATTRIBUTES

    Directory of Open Access Journals (Sweden)

    Sultan Alshehri

    2014-01-01

    Full Text Available The analytic hierarchy process (AHP) has been applied in many fields, and especially to complex engineering problems and applications. The AHP is capable of structuring decision problems and finding mathematically determined judgments built on knowledge and experience. This suggests that AHP should prove useful in agile software development, where complex decisions occur routinely. In this paper, the AHP is used to rank the refactoring techniques based on internal code quality attributes. XP encourages applying refactoring where the code smells bad; however, refactoring may consume considerable time and effort. So, to maximize the benefits of refactoring in less time and with less effort, the AHP has been applied to achieve this purpose. It was found that ranking the refactoring techniques helped the XP team to focus on the techniques that improve the code and the XP development process in general.
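
    The paper's pairwise judgments are not reproduced in this record; the core AHP computation it relies on, deriving priority weights from a pairwise comparison matrix via its principal eigenvector and checking consistency, can be sketched as follows (the matrix entries are invented):

```python
import numpy as np

# Hypothetical pairwise comparisons of four refactoring techniques on one quality attribute
# (Saaty's 1-9 scale; entry [i, j] says how strongly technique i is preferred over j).
A = np.array([
    [1,   3,   5,   2],
    [1/3, 1,   3,   1/2],
    [1/5, 1/3, 1,   1/4],
    [1/2, 2,   4,   1],
], dtype=float)

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # priority vector (ranking of the techniques)

# Consistency ratio CR = CI / RI, with Saaty's random index RI = 0.90 for n = 4.
n = A.shape[0]
ci = (eigvals[k].real - n) / (n - 1)
cr = ci / 0.90
print("weights:", np.round(weights, 3), "consistency ratio:", round(cr, 3))
```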

  18. MEMS-Based Power Generation Techniques for Implantable Biosensing Applications

    Directory of Open Access Journals (Sweden)

    Jonathan Lueke

    2011-01-01

    Full Text Available Implantable biosensing is attractive for both medical monitoring and diagnostic applications. It is possible to monitor phenomena such as physical loads on joints or implants, vital signs, or osseointegration in vivo and in real time. Microelectromechanical (MEMS)-based generation techniques can allow for the autonomous operation of implantable biosensors by generating electrical power to replace or supplement existing battery-based power systems. By supplementing existing battery-based power systems for implantable biosensors, the operational lifetime of the sensor is increased. In addition, the potential for a greater amount of available power allows additional components to be added to the biosensing module, such as computational and wireless components, improving the functionality and performance of the biosensor. Photovoltaic, thermovoltaic, micro fuel cell, electrostatic, electromagnetic, and piezoelectric based generation schemes are evaluated in this paper for applicability to implantable biosensing. MEMS-based generation techniques that harvest ambient energy, such as vibration, are much better suited for implantable biosensing applications than fuel-based approaches, producing up to milliwatts of electrical power. High power density MEMS-based approaches, such as piezoelectric and electromagnetic schemes, allow for supplemental and replacement power schemes for biosensing applications to improve device capabilities and performance. In addition, this may allow the biosensor to be further miniaturized, reducing the need for relatively large batteries with respect to device size. This would make the implanted biosensor less invasive, increasing the quality of care received by the patient.

  19. MODELING AND COMPENSATION TECHNIQUE FOR THE GEOMETRIC ERRORS OF FIVE-AXIS CNC MACHINE TOOLS

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    One of the important trends in precision machining is the development of real-time error compensation techniques. Error compensation for multi-axis CNC machine tools is both very difficult and attractive. Modeling of the geometric error of five-axis CNC machine tools based on multi-body systems is proposed, and the key technique of the compensation, identifying the geometric error parameters, is developed. The simulation of cutting a workpiece to verify the multi-body-system-based modeling is also considered.

  20. Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling

    DEFF Research Database (Denmark)

    Dotto, C. B.; Mannina, G.; Kleidorfer, M.

    2012-01-01

    is the assessment and comparison of different techniques generally used in the uncertainty assessment of the parameters of water models. This paper compares a number of these techniques: the Generalized Likelihood Uncertainty Estimation (GLUE), the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multiobjective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty techniques, common criteria have been set for the likelihood formulation, defining the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside...
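
    Of the techniques listed, GLUE is the simplest to illustrate: sample parameter sets from a prior range, weight each by an informal likelihood of its simulation against observations, keep the behavioural ones and read off percentile uncertainty bounds. A toy sketch with a stand-in one-parameter model follows:

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0, 10, 50)
observed = 2.0 * np.exp(-0.3 * t) + rng.normal(0, 0.05, t.size)   # synthetic observations

def model(k):
    """Stand-in for the stormwater quantity/quality model (a single parameter here)."""
    return 2.0 * np.exp(-k * t)

# 1) Monte Carlo sampling of the parameter from its prior range.
k_samples = rng.uniform(0.05, 0.8, 5000)
sims = np.array([model(k) for k in k_samples])

# 2) Informal likelihood (inverse error variance here) and a behavioural threshold.
likelihood = 1.0 / np.mean((sims - observed) ** 2, axis=1)
behavioural = likelihood > np.quantile(likelihood, 0.90)

# 3) Percentile uncertainty bounds from the behavioural simulations
#    (a full GLUE implementation would weight them by the likelihood).
lower = np.quantile(sims[behavioural], 0.05, axis=0)
upper = np.quantile(sims[behavioural], 0.95, axis=0)
print(lower[:3], upper[:3])
```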

  1. Q-DPM: An Efficient Model-Free Dynamic Power Management Technique

    CERN Document Server

    Li, Min; Yao, Richard; Yan, Xiaolang

    2011-01-01

    When applying the Dynamic Power Management (DPM) technique to pervasively deployed embedded systems, the technique needs to be very efficient so that it is feasible to implement it on a low-end processor with a tight memory budget. Furthermore, it should have the capability to track time-varying behavior rapidly, because time variation is an inherent characteristic of real-world systems. Existing methods, which are usually model-based, may not satisfy the aforementioned requirements. In this paper, we propose a model-free DPM technique based on Q-learning. Q-DPM is much more efficient because it removes the overhead of the parameter estimator and the mode-switch controller. Furthermore, its policy optimization is performed via consecutive online trials, which also leads to very rapid response to time-varying behavior.
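
    The state and action encoding of Q-DPM is not detailed in this record; a toy tabular Q-learning sketch in the same spirit, learning when to put a device to sleep from observed idle periods without any workload model, could look like this (the states, costs and toy workload are invented):

```python
import random

# States: discretised idle time so far; actions: 0 = stay active, 1 = go to sleep.
N_STATES, ACTIONS = 5, (0, 1)
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.1, 0.9, 0.1

def cost(action, next_request_soon):
    """Invented cost model: idling wastes power; sleeping just before a request adds latency."""
    if action == 0:
        return 1.0                              # power cost of staying active while idle
    return 3.0 if next_request_soon else 0.2    # wake-up penalty vs. cheap sleep

random.seed(0)
for _ in range(20000):
    state = random.randrange(N_STATES)
    # Toy workload: the longer the device has been idle, the less likely an imminent request.
    next_request_soon = random.random() < 1.0 / (state + 1)
    if random.random() < eps:
        action = random.choice(ACTIONS)
    else:
        action = min(ACTIONS, key=lambda a: Q[state][a])
    c = cost(action, next_request_soon)
    next_state = min(state + 1, N_STATES - 1)
    # Q-learning update towards the minimum expected discounted cost.
    Q[state][action] += alpha * (c + gamma * min(Q[next_state]) - Q[state][action])

policy = [min(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)]
print("decision per idle-time bucket (1 = sleep):", policy)
```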

  2. Deriving Framework Usages Based on Behavioral Models

    Science.gov (United States)

    Zenmyo, Teruyoshi; Kobayashi, Takashi; Saeki, Motoshi

    One of the critical issues in framework-based software development is the huge introduction cost caused by the technical gap between developers and users of frameworks. This paper proposes a technique for deriving framework usages to implement a given requirements specification. By using the derived usages, the users can use the frameworks without understanding them in detail. Requirements specifications which describe definite behavioral requirements cannot be related to frameworks as-is, since the frameworks do not have a definite control structure, so that users can customize them to suit given requirements specifications. To cope with this issue, a new technique based on satisfiability problems (SAT) is employed to derive the control structures of the framework model. In the proposed technique, requirements specifications and frameworks are modeled based on Labeled Transition Systems (LTSs) with branch conditions represented by predicates. Truth assignments of the branch conditions in the framework models are not given initially, representing the customizable control structure. The derivation of truth assignments of the branch conditions is regarded as a SAT instance by assuming relations between termination states of the requirements specification model and those of the framework model. This derivation technique is incorporated into a technique we have proposed previously for relating actions of requirements specifications to those of frameworks. Furthermore, this paper discusses a case study of typical use cases in e-commerce systems.

  3. An EMG-assisted model calibration technique that does not require MVCs.

    Science.gov (United States)

    Dufour, Jonathan S; Marras, William S; Knapik, Gregory G

    2013-06-01

    As personalized biologically-assisted models of the spine have evolved, the normalization of raw electromyographic (EMG) signals has become increasingly important. The traditional method of normalizing myoelectric signals, relative to measured maximum voluntary contractions (MVCs), is susceptible to error and is problematic for evaluating symptomatic low back pain (LBP) patients. Additionally, efforts to circumvent MVCs have not been validated during complex free-dynamic exertions. Therefore, the objective of this study was to develop an MVC-independent biologically-assisted model calibration technique that overcomes the limitations of previous normalization efforts, and to validate this technique over a variety of complex free-dynamic conditions including symmetrical and asymmetrical lifting. The newly developed technique (non-MVC) eliminates the need to collect MVCs by combining gain (maximum strength per unit area) and MVC into a single muscle property (gain ratio) that can be determined during model calibration. Ten subjects (five male, five female) were evaluated to compare gain ratio prediction variability, spinal load predictions, and model fidelity between the new non-MVC and established MVC-based model calibration techniques. The new non-MVC model calibration technique demonstrated at least as low gain ratio prediction variability, similar spinal loads, and similar model fidelity when compared to the MVC-based technique, indicating that it is a valid alternative to traditional MVC-based EMG normalization. Spinal loading for individuals who are unwilling or unable to produce reliable MVCs can now be evaluated. In particular, this technique will be valuable for evaluating symptomatic LBP patients, which may provide significant insight into the underlying nature of the LBP disorder.

  4. An Efficient Image Compression Technique Based on Arithmetic Coding

    Directory of Open Access Journals (Sweden)

    Prof. Rajendra Kumar Patel

    2012-12-01

    Full Text Available The rapid growth of digital imaging applications, including desktop publishing, multimedia, teleconferencing, and high visual definition, has increased the need for effective and standardized image compression techniques. Digital images play a very important role in conveying detailed information. The key obstacle for many applications is the vast amount of data required to represent a digital image directly. The various processes of digitizing images to obtain them in the best quality for clearer and more accurate information lead to a requirement for more storage space and better storage and access mechanisms in the form of hardware or software. In this paper we concentrate mainly on this problem, so that we reduce the required space while keeping the best image quality. State-of-the-art techniques can compress typical images from 1/10 to 1/50 of their uncompressed size without visibly affecting image quality. From our study we observe that there is a need for a good image compression technique which provides better reduction in terms of both storage and quality. Arithmetic coding is the best way of reducing the encoded data. So in this paper we propose an arithmetic coding with Walsh transformation based image compression technique, which is an efficient way of achieving such reduction.
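
    For readers unfamiliar with arithmetic coding, the toy sketch below encodes and decodes a short string by interval subdivision. It uses floating-point intervals, so it only works for short messages, and it omits the Walsh transformation stage of the proposed method; it is illustrative only, not the authors' coder.

```python
from collections import Counter

def build_intervals(text):
    """Assign each symbol a sub-interval of [0, 1) proportional to its frequency."""
    freq = Counter(text)
    total = len(text)
    intervals, low = {}, 0.0
    for sym, count in sorted(freq.items()):
        high = low + count / total
        intervals[sym] = (low, high)
        low = high
    return intervals

def encode(text, intervals):
    """Narrow [low, high) symbol by symbol; any number in the final interval encodes the text."""
    low, high = 0.0, 1.0
    for sym in text:
        s_low, s_high = intervals[sym]
        span = high - low
        high = low + span * s_high
        low = low + span * s_low
    return (low + high) / 2

def decode(code, intervals, length):
    """Invert the narrowing: locate the symbol interval containing the code, rescale, repeat."""
    out = []
    for _ in range(length):
        for sym, (s_low, s_high) in intervals.items():
            if s_low <= code < s_high:
                out.append(sym)
                code = (code - s_low) / (s_high - s_low)
                break
    return "".join(out)

message = "ABRACADABRA"
ivals = build_intervals(message)
code = encode(message, ivals)
print(code, decode(code, ivals, len(message)))   # round-trips for short messages
```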

  5. Characterization techniques for graphene-based materials in catalysis

    Directory of Open Access Journals (Sweden)

    Maocong Hu

    2017-06-01

    Full Text Available Graphene-based materials have been studied in a wide range of applications including catalysis due to their outstanding electronic, thermal, and mechanical properties. The unprecedented features of graphene-based catalysts, which are believed to be responsible for their superior performance, have been characterized by many techniques. In this article, we comprehensively summarize the characterization methods covering bulk and surface structure analysis, chemisorption ability determination, and reaction mechanism investigation. We review the advantages/disadvantages of different techniques including Raman spectroscopy, X-ray photoelectron spectroscopy (XPS), Fourier transform infrared spectroscopy (FTIR) and diffuse reflectance Fourier transform infrared spectroscopy (DRIFTS), X-ray diffraction (XRD), X-ray absorption near edge structure (XANES) and X-ray absorption fine structure (XAFS), atomic force microscopy (AFM), scanning electron microscopy (SEM), transmission electron microscopy (TEM), high-resolution transmission electron microscopy (HRTEM), ultraviolet-visible spectroscopy (UV-vis), X-ray fluorescence (XRF), inductively coupled plasma mass spectrometry (ICP), thermogravimetric analysis (TGA), Brunauer–Emmett–Teller (BET) analysis, and scanning tunneling microscopy (STM). The application of temperature-programmed reduction (TPR), CO chemisorption, and NH3/CO2 temperature-programmed desorption (TPD) is also briefly introduced. Finally, we discuss the challenges and provide possible suggestions on choosing characterization techniques. This review provides key information for the catalysis community to adopt suitable characterization techniques for their research.

  6. Earthquake Analysis of Structure by Base Isolation Technique in SAP

    Directory of Open Access Journals (Sweden)

    T. Subramani

    2014-06-01

    Full Text Available This paper presents an overview of the present state of base isolation techniques, with special emphasis on them and a brief review of other techniques developed worldwide for mitigating earthquake forces on structures. The dynamic analysis procedure for isolated structures is briefly explained. The provisions of FEMA 450 for base isolated structures are highlighted. The effects of base isolation on structures located on soft soils and near active faults are given in brief. A simple case study on natural base isolation using naturally available soils is presented. Also, future areas of research are indicated. Earthquakes are one of nature's greatest hazards; throughout historic time they have caused significant loss of life and severe damage to property, especially to man-made structures. On the other hand, earthquakes provide architects and engineers with a number of important design criteria foreign to the normal design process. From well established procedures reviewed by many researchers, seismic isolation may be used to provide an effective solution for a wide range of seismic design problems. The application of base isolation techniques to protect structures against damage from earthquake attacks has been considered one of the most effective approaches and has gained increasing acceptance during the last two decades. This is because base isolation limits the effects of the earthquake attack: a flexible base largely decouples the structure from the ground motion, and the structural response accelerations are usually less than the ground acceleration. In general, the addition of viscous damping in the structure may reduce the displacement and acceleration responses of the structure. This study also seeks to evaluate the effects of additional damping on the seismic response when compared with structures without additional damping for different ground motions.

  7. Interpolation techniques in robust constrained model predictive control

    Science.gov (United States)

    Kheawhom, Soorathep; Bumroongsri, Pornchai

    2017-05-01

    This work investigates interpolation techniques that can be employed in off-line robust constrained model predictive control for a discrete time-varying system. A sequence of feedback gains is determined by solving off-line a series of optimal control optimization problems. A sequence of nested corresponding robustly positive invariant sets, which are either ellipsoidal or polyhedral, is then constructed. At each sampling time, the smallest invariant set containing the current state is determined. If the current invariant set is the innermost set, the pre-computed gain associated with the innermost set is applied. Otherwise, the feedback gain is variable and determined by a linear interpolation of the pre-computed gains. The proposed algorithms are illustrated with case studies of a two-tank system. The simulation results show that the proposed interpolation techniques significantly improve the control performance of off-line robust model predictive control without sacrificing much on-line computational performance.
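
    A minimal sketch of the on-line part of such a scheme is given below: the state is located in the smallest of a family of nested ellipsoidal invariant sets, and the feedback gain is linearly interpolated between the pre-computed gains of adjacent sets. The matrices, gains and the particular interpolation weight are invented for illustration and do not reproduce the paper's off-line synthesis.

```python
import numpy as np

# Illustrative off-line data: nested ellipsoidal invariant sets E_i = {x : x' P_i x <= 1}
# (index 0 = innermost) and their pre-computed feedback gains K_i.  The matrices,
# gains and the interpolation weight below are assumptions for illustration only.
P = [np.diag([4.0, 4.0]), np.diag([1.0, 1.0]), np.diag([0.25, 0.25])]
K = [np.array([[-0.8, -1.2]]), np.array([[-0.5, -0.9]]), np.array([[-0.3, -0.6]])]

def level(x, Pi):
    return float(x.T @ Pi @ x)

def feedback_gain(x):
    """Pick the smallest invariant set containing x; interpolate gains between it
    and the next inner set (one simple choice of interpolation weight)."""
    inside = [i for i, Pi in enumerate(P) if level(x, Pi) <= 1.0]
    if not inside:
        raise ValueError("state outside the outermost invariant set")
    i = min(inside)                      # smallest (innermost) set containing x
    if i == 0:
        return K[0]                      # innermost set: use its fixed gain
    # weight in [0, 1]: 0 when x touches the boundary of the inner set E_{i-1},
    # 1 when x touches the boundary of the current set E_i
    lam = (level(x, P[i - 1]) - 1.0) / max(level(x, P[i - 1]) - level(x, P[i]), 1e-12)
    lam = min(max(lam, 0.0), 1.0)
    return (1.0 - lam) * K[i - 1] + lam * K[i]

x = np.array([[1.2], [0.4]])
u = feedback_gain(x) @ x                 # control move applied at this sampling time
print(u)
```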

  8. An effectiveness-NTU technique for characterising a finned tubes PCM system using a CFD model

    OpenAIRE

    Tay, N. H. Steven; Belusko, M.; Castell, Albert; Cabeza, Luisa F.; Bruno, F.

    2014-01-01

    Numerical modelling is commonly used to design, analyse and optimise tube-in-tank phase change thermal energy storage systems with fins. A new simplified two dimensional mathematical model, based on the effectiveness-number of transfer units technique, has been developed to characterise tube-in-tank phase change material systems, with radial round fins. The model applies an empirically derived P factor which defines the proportion of the heat flow which is parallel and isothermal....
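
    To make the effectiveness-NTU idea concrete, the sketch below treats the PCM store as a heat exchanger whose cold side stays at the (approximately constant) melting temperature, so the effectiveness reduces to 1 - exp(-NTU). The conductance, flow rate and temperatures are assumed values, and the paper's empirical P factor is not reproduced.

```python
import math

# Minimal effectiveness-NTU sketch for charging a PCM store through a finned tube.
# Because the PCM melts at a nearly constant temperature, the store behaves like a
# heat exchanger with capacity ratio -> 0, so effectiveness = 1 - exp(-NTU).
# All numbers are illustrative assumptions, not data from the paper.

def effectiveness(UA, m_dot, cp):
    ntu = UA / (m_dot * cp)              # number of transfer units
    return 1.0 - math.exp(-ntu)

UA = 35.0            # overall conductance of the finned tube, W/K (assumed)
m_dot = 0.02         # heat-transfer-fluid mass flow rate, kg/s (assumed)
cp = 4180.0          # HTF specific heat, J/(kg K), water assumed
T_in, T_pcm = 60.0, 27.0                 # HTF inlet and PCM melting temperature, deg C

eps = effectiveness(UA, m_dot, cp)
q = eps * m_dot * cp * (T_in - T_pcm)    # heat delivered to the PCM, W
T_out = T_in - eps * (T_in - T_pcm)      # HTF outlet temperature, deg C
print(f"effectiveness={eps:.3f}, q={q:.1f} W, T_out={T_out:.1f} C")
```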

  9. Study on ABCD Analysis Technique for Business Models, business strategies, Operating Concepts & Business Systems

    OpenAIRE

    Aithal, Sreeramana

    2016-01-01

    When studying the implications of a business model, choosing success strategies, developing viable operational concepts or evolving a functional system, it is important to analyse it in all dimensions. For this purpose, various analysing techniques/frameworks are used. This paper is a discussion of how to use an innovative analysing framework called the ABCD model on a given business model, business strategy, operational concept/idea or business system. Based on four constructs Advantages,...

  10. Simulation-based Manufacturing System Modeling

    Institute of Scientific and Technical Information of China (English)

    卫东; 金烨; 范秀敏; 严隽琪

    2003-01-01

    In recent years, computer simulation has proven to be a very advantageous technique for researching resource-constrained manufacturing systems. This paper presents an object-oriented simulation modeling method, which combines the merits of traditional methods such as IDEF0 and Petri nets. In this paper, a four-layer-one-angle hierarchical modeling framework based on OOP is defined, and the modeling description of these layers is expounded, including hybrid production control modeling and human resource dispatch modeling. To validate the modeling method, a case study of an auto-product line in a motor manufacturing company has been carried out.

  11. Finding Within Cluster Dense Regions Using Distance Based Technique

    Directory of Open Access Journals (Sweden)

    Wesam Ashour

    2012-03-01

    Full Text Available One of the main categories in data clustering is density based clustering. Density based clustering techniques like DBSCAN are attractive because they can find arbitrarily shaped clusters along with noisy outliers. The main weakness of traditional density based algorithms like DBSCAN is clustering data sets whose clusters have different density levels. DBSCAN applies the given parameters to all points in a data set, while the densities of the clusters may be totally different. The proposed algorithm overcomes this weakness of the traditional density based algorithms. The algorithm starts by partitioning the data within a cluster into units based on a user parameter and computes the density for each unit separately. Consequently, the algorithm compares the results and merges neighboring units with close approximate density values into a new cluster. The experimental results of the simulation show that the proposed algorithm gives good results in finding clusters for data sets with clusters of different density.
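
    The sketch below illustrates the unit-based idea in the abstract: grid the points of one cluster, compute a density per grid unit, and merge neighbouring units with similar densities. The grid size, similarity threshold and merging rule are illustrative choices, not the paper's exact algorithm.

```python
import numpy as np
from collections import defaultdict, deque

def dense_regions(points, cell_size=1.0, density_tol=0.5):
    """Partition one cluster into grid units, compute per-unit densities and merge
    neighbouring units whose densities are close (BFS over the grid)."""
    cells = defaultdict(list)
    for p in points:
        cells[tuple((p // cell_size).astype(int))].append(p)
    density = {c: len(pts) / cell_size**2 for c, pts in cells.items()}

    def neighbours(c):
        (i, j) = c
        return [(i + di, j + dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0)]

    regions, seen = [], set()
    for start in cells:
        if start in seen:
            continue
        region, queue = [], deque([start])
        seen.add(start)
        while queue:                       # BFS merge of similar-density neighbours
            c = queue.popleft()
            region.append(c)
            for n in neighbours(c):
                if n in cells and n not in seen and \
                   abs(density[n] - density[c]) <= density_tol * density[c]:
                    seen.add(n)
                    queue.append(n)
        regions.append([p for c in region for p in cells[c]])
    return regions

rng = np.random.default_rng(0)
cluster = np.vstack([rng.normal(0, 0.6, (200, 2)),      # dense core
                     rng.normal(3, 1.5, (60, 2))])      # sparser halo
print([len(r) for r in dense_regions(cluster, cell_size=1.0, density_tol=0.5)])
```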

  12. Gabor-based fusion technique for Optical Coherence Microscopy.

    Science.gov (United States)

    Rolland, Jannick P; Meemon, Panomsak; Murali, Supraja; Thompson, Kevin P; Lee, Kye-sung

    2010-02-15

    We recently reported on an Optical Coherence Microscopy technique whose innovation intrinsically builds on a recently reported liquid-lens-based dynamic focusing optical probe with a 2 micrometer invariant lateral resolution by design throughout a 2 mm cubic full field of view [Murali et al., Optics Letters 34, 145-147, 2009]. We report in this paper on the image acquisition enabled by this optical probe when combined with an automatic data fusion method, developed and described here, to produce an in-focus high resolution image throughout the imaging depth of the sample. An African frog tadpole (Xenopus laevis) was imaged with the novel probe and the Gabor-based fusion technique, demonstrating subcellular resolution over a 0.5 mm (lateral) x 0.5 mm (axial) field without the need, for the first time, for x-y translation stages, depth scanning, high-cost adaptive optics, or manual intervention. In vivo images of human skin are also presented.

  13. Mathematical analysis techniques for modeling the space network activities

    Science.gov (United States)

    Foster, Lisa M.

    1992-01-01

    The objective of the present work was to explore and identify mathematical analysis techniques, and in particular, the use of linear programming. This topic was then applied to the Tracking and Data Relay Satellite System (TDRSS) in order to understand the space network better. Finally, a small scale version of the system was modeled, variables were identified, data was gathered, and comparisons were made between actual and theoretical data.
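
    As a toy example of the linear-programming flavour of such an analysis, the sketch below allocates a limited number of relay contact hours among user spacecraft with SciPy; the coefficients and constraints are invented and are not taken from the TDRSS model in the report.

```python
from scipy.optimize import linprog

# Toy linear program: allocate relay-contact hours to three user spacecraft to
# maximize total data returned, subject to a limited number of available relay
# hours and per-user needs.  All numbers are invented for illustration.
data_per_hour = [5.0, 3.0, 4.0]          # Gbit returned per contact hour, per user
c = [-r for r in data_per_hour]          # linprog minimizes, so negate to maximize

A_ub = [[1, 1, 1]]                       # total relay hours available
b_ub = [20]
bounds = [(2, 10), (2, 10), (2, 10)]     # each user needs 2 to 10 contact hours

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(res.x, -res.fun)                   # hours per user, total Gbit returned
```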

  14. EXPERIENCE WITH SYNCHRONOUS GENERATOR MODEL USING PARTICLE SWARM OPTIMIZATION TECHNIQUE

    OpenAIRE

    N.RATHIKA; Dr.A.Senthil kumar; A.ANUSUYA

    2014-01-01

    This paper addresses the modeling of a polyphase synchronous generator and the minimization of power losses using the Particle Swarm Optimization (PSO) technique with a constriction factor. The use of a polyphase synchronous generator means that the total power circulating in the system can be distributed across all phases. Another advantage of a polyphase system is that a fault at one winding does not lead to system shutdown. Process optimization is the discipline of adjusting a process so as...
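
    A minimal particle swarm optimizer with Clerc's constriction factor is sketched below against a stand-in quadratic loss; the swarm size, bounds and objective are illustrative assumptions, not the generator loss model of the paper.

```python
import numpy as np

def losses(x):
    """Placeholder loss function standing in for the generator power-loss model."""
    return np.sum((x - 1.5) ** 2, axis=-1)

dim, n_particles, iters = 4, 30, 200
phi1 = phi2 = 2.05
phi = phi1 + phi2
chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))   # constriction factor ~0.729

rng = np.random.default_rng(1)
x = rng.uniform(-5, 5, (n_particles, dim))     # particle positions (decision variables)
v = np.zeros_like(x)                           # particle velocities
pbest, pbest_val = x.copy(), losses(x)         # personal bests
gbest = pbest[np.argmin(pbest_val)]            # global best

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = chi * (v + phi1 * r1 * (pbest - x) + phi2 * r2 * (gbest - x))
    x = x + v
    val = losses(x)
    improved = val < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], val[improved]
    gbest = pbest[np.argmin(pbest_val)]

print(gbest, losses(gbest))
```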

  15. Equivalence and Differences between Structural Equation Modeling and State-Space Modeling Techniques

    Science.gov (United States)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, Ellen L.; Dolan, Conor V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and differences through analytic comparisons and…

  16. Equivalence and differences between structural equation modeling and state-space modeling techniques

    NARCIS (Netherlands)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, E.L.; Dolan, C.V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and

  18. Ultrabroadband Phased-Array Receivers Based on Optical Techniques

    Science.gov (United States)

    2016-02-26

    AFRL-AFOSR-VA-TR-2016-0121: Ultrabroadband Phased-array Receivers Based on Optical Techniques. Christopher Schuetz, University of Delaware, Newark, DE. Final report (no abstract available; the record contains only the standard SF 298 report documentation page).

  19. Comparative analysis of affinity-based 5-hydroxymethylation enrichment techniques

    Science.gov (United States)

    Thomson, John P.; Hunter, Jennifer M.; Nestor, Colm E.; Dunican, Donncha S.; Terranova, Rémi; Moggs, Jonathan G.; Meehan, Richard R.

    2013-01-01

    The epigenetic modification of 5-hydroxymethylcytosine (5hmC) is receiving great attention due to its potential role in DNA methylation reprogramming and as a cell state identifier. Given this interest, it is important to identify reliable and cost-effective methods for the enrichment of 5hmC marked DNA for downstream analysis. We tested three commonly used affinity-based enrichment techniques; (i) antibody, (ii) chemical capture and (iii) protein affinity enrichment and assessed their ability to accurately and reproducibly report 5hmC profiles in mouse tissues containing high (brain) and lower (liver) levels of 5hmC. The protein-affinity technique is a poor reporter of 5hmC profiles, delivering 5hmC patterns that are incompatible with other methods. Both antibody and chemical capture-based techniques generate highly similar genome-wide patterns for 5hmC, which are independently validated by standard quantitative PCR (qPCR) and glucosyl-sensitive restriction enzyme digestion (gRES-qPCR). Both antibody and chemical capture generated profiles reproducibly link to unique chromatin modification profiles associated with 5hmC. However, there appears to be a slight bias of the antibody to bind to regions of DNA rich in simple repeats. Ultimately, the increased specificity observed with chemical capture-based approaches makes this an attractive method for the analysis of locus-specific or genome-wide patterns of 5hmC. PMID:24214958

  20. Sensitivity analysis techniques for models of human behavior.

    Energy Technology Data Exchange (ETDEWEB)

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
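
    As an example of a global, interaction-aware method of the kind mentioned above, the sketch below runs a Sobol (variance-based) sensitivity analysis on a toy behavior model using the open-source SALib package; the model, input names and bounds are invented stand-ins, not the Sandia model or tooling.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Variance-based (Sobol) global sensitivity analysis of a toy behavior model.
# The model, inputs and bounds are illustrative assumptions only.
problem = {
    "num_vars": 3,
    "names": ["social_influence", "risk_aversion", "memory_decay"],
    "bounds": [[0.0, 1.0], [0.0, 1.0], [0.0, 1.0]],
}

def behavior_model(x):
    s, r, m = x[:, 0], x[:, 1], x[:, 2]
    return s * (1.0 - r) + 0.5 * np.sin(np.pi * m) * s    # nonlinear, with interaction

X = saltelli.sample(problem, 1024)       # Saltelli sampling scheme
Y = behavior_model(X)
Si = sobol.analyze(problem, Y)

for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:>17s}: first-order {s1:5.2f}, total-order {st:5.2f}")
```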

  1. Comparing Four Touch-Based Interaction Techniques for an Image-Based Audience Response System

    NARCIS (Netherlands)

    Jorritsma, Wiard; Prins, Jonatan T.; van Ooijen, Peter M. A.

    2015-01-01

    This study aimed to determine the most appropriate touch-based interaction technique for I2Vote, an image-based audience response system for radiology education in which users need to accurately mark a target on a medical image. Four plausible techniques were identified: land-on, take-off, zoom-poin

  2. User Identification Detector Based on Power of R Technique

    Institute of Scientific and Technical Information of China (English)

    WANG Chun-jiang; YU Quan; LIU Yuan-an

    2005-01-01

    To avoid inaccurate estimation of the number of active users and the corresponding performance degradation, a novel POR-based User Identification Detector (UID) is proposed for Code Division Multiple Access (CDMA) systems. The new detector adopts the Power of R (POR) technique and the Multiple Signal Classification (MUSIC) method, does not require an estimate of the number of active users, and obtains a lower false alarm probability than the subspace-based UID in multipath channels. However, our analysis shows that increasing the order m does not improve the performance. Therefore, when m is one, the performance of the new detector is maximal.

  3. A Multi-Model Reduction Technique for Optimization of Coupled Structural-Acoustic Problems

    DEFF Research Database (Denmark)

    Creixell Mediante, Ester; Jensen, Jakob Søndergaard; Brunskog, Jonas;

    2016-01-01

    Finite Element models of structural-acoustic coupled systems can become very large for complex structures with multiple connected parts. Optimization of the performance of the structure based on harmonic analysis of the system requires solving the coupled problem iteratively and for several frequencies, which can become highly time consuming. Several modal-based model reduction techniques for structure-acoustic interaction problems have been developed in the literature. The unsymmetric nature of the pressure-displacement formulation of the problem poses the question of how the reduction modal base should be formed, given that the modal vectors are not orthogonal due to the asymmetry of the system matrices. In this paper, a multi-model reduction (MMR) technique for structure-acoustic interaction problems is developed. In MMR, the reduction base is formed with the modal vectors of a family...

  4. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N K; Duan, Q; Gao, X; Sorooshian, S

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
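
    Two of the simpler schemes can be sketched directly: the Simple Multi-model Average and a weighted average whose weights are fitted to observations over a calibration period. The synthetic flows below stand in for DMIP model output; the MMSE/M3SE bias-correction steps are not reproduced.

```python
import numpy as np

# Sketch of the Simple Multi-model Average (SMA) and a Weighted Average Method (WAM)
# whose weights are fitted to observed flows by least squares (clipped, renormalized).
# The streamflow numbers are synthetic stand-ins, not DMIP output.
rng = np.random.default_rng(0)
obs = 10.0 + 3.0 * np.sin(np.linspace(0, 6, 200))            # "observed" flows
members = np.stack([obs + rng.normal(b, 1.0, obs.size)        # biased, noisy member models
                    for b in (-2.0, 0.5, 1.5)])

sma = members.mean(axis=0)                                    # simple average

w, *_ = np.linalg.lstsq(members.T, obs, rcond=None)           # fit combination weights
w = np.clip(w, 0.0, None)
w = w / w.sum()
wam = w @ members

def rmse(sim):
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

print("member RMSEs:", [round(rmse(m), 2) for m in members])
print("SMA RMSE:", round(rmse(sma), 2), " WAM RMSE:", round(rmse(wam), 2),
      " weights:", np.round(w, 2))
```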

  5. Wind Turbine Rotor Simulation via CFD Based Actuator Disc Technique Compared to Detailed Measurement

    Directory of Open Access Journals (Sweden)

    Esmail Mahmoodi

    2015-10-01

    Full Text Available In this paper, a generalized Actuator Disc (AD) is used to model the wind turbine rotor of the MEXICO experiment, a collaborative European wind turbine project. The AD model, a combination of the CFD technique and User Defined Functions code (UDF), the so-called UDF/AD model, is used to simulate the loads and performance of the rotor in three different wind speed tests. The modeling focuses on the distributed force on the blade and the thrust and power production of the rotor, which are important design parameters of wind turbine rotors. Results from a Blade Element Momentum (BEM) code as well as a full rotor simulation, both from the literature, are included for comparison and discussion. The output of all techniques is compared to detailed measurements for validation, which led us to the final conclusions.
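
    For orientation, the sketch below evaluates the one-dimensional momentum-theory relations that underlie actuator disc models, CT = 4a(1-a) and CP = 4a(1-a)^2; the rotor radius, wind speeds and induction factor are assumed values, and this is the textbook relation rather than the UDF/AD implementation.

```python
import numpy as np

# One-dimensional actuator disc (momentum theory) relations: for an axial induction
# factor a, the thrust and power coefficients are CT = 4a(1-a) and CP = 4a(1-a)^2.
rho = 1.225            # air density, kg/m^3
R = 2.25               # rotor radius, m (assumed)
A = np.pi * R**2       # swept area, m^2

def disc_loads(U, a):
    CT = 4.0 * a * (1.0 - a)
    CP = 4.0 * a * (1.0 - a) ** 2
    thrust = 0.5 * rho * A * U**2 * CT
    power = 0.5 * rho * A * U**3 * CP
    return thrust, power

for U in (10.0, 15.0, 24.0):           # assumed wind-speed cases
    T, P = disc_loads(U, a=1.0 / 3.0)  # a = 1/3 gives the Betz optimum, CP = 16/27
    print(f"U = {U:4.1f} m/s: thrust = {T:7.1f} N, power = {P/1000:6.2f} kW")
```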

  6. Nasal base narrowing: the alar flap advancement technique.

    Science.gov (United States)

    Ismail, Ahmed Soliman

    2011-01-01

    To evaluate the role of creating an alar-based advancement flap in narrowing the nasal base and correcting excessive alar flare. Case series with chart review. This is a retrospective record review study. The study included 35 cases presenting with a wide nasal base and excessive alar flaring. The surgical procedure combined the alar base reduction with alar flare excision by creating a single laterally based alar flap. Any caudal septal deformities and any nasal tip modification procedures were corrected before the nasal base narrowing. The mean follow-up period was 23 months. The mean alar flap narrowing was 6.3 mm, whereas the mean width of sill narrowing was 2.9 mm. This single laterally based advancement alar flap resulted in a more conservative external resection, thus avoiding alar wedge overresection or blunting of the alar-facial crease. No cases of postoperative bleeding, infection, or keloid were encountered, and the external alar wedge excision healed with no apparent scar that was hidden in the depth of the alar-facial crease. The risk of notching of the alar rim at the sill incision is reduced by adopting a 2-layer closure of the vestibular floor. The alar base advancement flap is an effective technique in narrowing both the nasal base and excessive alar flare. It adopts a single skin excision to correct the 2 deformities while commonly feared complications were avoided.

  7. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
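
    A cartoon of the additive-flux idea is sketched below: an additional diffusivity is varied until a deliberately simplistic, steady-state slab model profile best matches a synthetic "experimental" profile. All profiles and numbers are invented; the actual work uses FACETS::Core and DAKOTA on DIII-D data.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Toy version of additive flux minimization: choose an additional diffusivity D_add
# so a predicted density profile best matches an "experimental" one.
x = np.linspace(0.0, 1.0, 50)                 # normalized radius across the pedestal
S = 1.0                                       # constant particle flux (arbitrary units)
n_exp = 2.0 - 1.6 * x**2                      # synthetic "experimental" density profile

def predicted_profile(D_add, D_model=0.3):
    # steady state with constant flux: dn/dx = -S / (D_model + D_add), n(1) pinned
    grad = -S / (D_model + D_add)
    return n_exp[-1] + grad * (x - 1.0)

def mismatch(D_add):
    return float(np.sum((predicted_profile(D_add) - n_exp) ** 2))

res = minimize_scalar(mismatch, bounds=(0.0, 5.0), method="bounded")
print(f"additional diffusivity needed: {res.x:.3f} (arbitrary units)")
```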

  8. A New Mathematical Modeling Technique for Pull Production Control Systems

    Directory of Open Access Journals (Sweden)

    O. Srikanth

    2013-12-01

    Full Text Available The Kanban Control System is widely used to control the release of parts in multistage manufacturing systems operating under a pull production control system. Most of the work on the Kanban Control System deals with multi-product manufacturing systems. In this paper, we propose a regression modeling technique for a multistage manufacturing system that coordinates the release of parts into each stage of the system with the arrival of customer demands for final products. We also compare two variants of the Kanban Control System model and combine mathematical and Simulink models for the production coordination of parts in an assembly manufacturing system. In both variants, the production of a new subassembly is authorized only when an assembly kanban is available. Assembly kanbans become available when finished product is consumed. A simulation environment for the product line system is generated with the proposed model, and the mathematical model is implemented against the simulation model in MATLAB. Both the simulation and model outputs provide an in-depth analysis of each of the resulting control systems for the modeled product line system.

  9. Performance Based Novel Techniques for Semantic Web Mining

    Directory of Open Access Journals (Sweden)

    Mahendra Thakur

    2012-01-01

    Full Text Available The explosive growth in the size and use of the World Wide Web continuously creates new challenges and needs. The need to predict users' preferences in order to expedite and improve the browsing through a site can be addressed by personalizing the website. Most of the research efforts in web personalization correspond to the evolution of extensive research in web usage mining, i.e. the exploitation of the navigational patterns of the web site visitors. When a personalization system relies solely on usage-based results, however, valuable information conceptually related to what is finally recommended may be missed. Moreover, the structural properties of the web site are often disregarded. In this paper, we propose novel techniques that use the content semantics and the structural properties of a web site in order to improve the effectiveness of web personalization. In the first part of our work we present a Semantic Web Personalization system that integrates usage data with content semantics, expressed in ontology terms, in order to compute semantically enhanced navigational patterns and effectively generate useful recommendations. To the best of our knowledge, our proposed technique is the only semantic web personalization system that may be used by non-semantic web sites. In the second part of our work, we present a novel approach for enhancing the quality of recommendations based on the underlying structure of a web site. We introduce UPR (Usage-based PageRank), a PageRank-style algorithm that relies on the recorded usage data and link analysis techniques. Overall, we demonstrate that our proposed hybrid personalization framework results in more objective and representative predictions than existing techniques.
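
    The flavour of a usage-based PageRank can be sketched in a few lines: transition probabilities are taken from recorded page-to-page usage counts rather than uniform out-links, and the standard damped power iteration is applied. The tiny site graph and counts below are invented, and this is not the exact UPR definition from the paper.

```python
import numpy as np

# PageRank-style ranking with usage-weighted transition probabilities.
pages = ["home", "products", "blog", "contact"]
usage = np.array([                      # usage[i, j] = observed transitions i -> j
    [0, 40, 10, 5],
    [8,  0,  2, 20],
    [15, 5,  0, 1],
    [3,  1,  1, 0],
], dtype=float)

d, n = 0.85, len(pages)                 # damping factor, number of pages
P = usage / usage.sum(axis=1, keepdims=True)    # usage-weighted transition matrix

r = np.full(n, 1.0 / n)
for _ in range(100):                    # damped power iteration
    r = (1 - d) / n + d * (P.T @ r)

for page, score in sorted(zip(pages, r), key=lambda t: -t[1]):
    print(f"{page:>9s}: {score:.3f}")
```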

  10. Laser image denoising technique based on multi-fractal theory

    Science.gov (United States)

    Du, Lin; Sun, Huayan; Tian, Weiqing; Wang, Shuai

    2014-02-01

    The noise in laser images is complex, including both additive and multiplicative noise. Considering the features of laser images and the basic processing capability and defects of common algorithms, this paper introduces fractal theory into research on laser image denoising. The research on laser image denoising is implemented mainly through analysis of the singularity exponent of each pixel in fractal space and the features of the multi-fractal spectrum. According to quantitative and qualitative evaluation of the processed images, the laser image processing technique based on fractal theory not only effectively removes the complicated noise of laser images obtained by a range-gated laser active imaging system, but also maintains detail information while performing the denoising. For different laser images, the multi-fractal denoising technique can increase the SNR of the laser image by at least 1-2 dB compared with other denoising techniques, which basically meets the needs of laser image denoising.

  11. Multi-Model Combination Techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N; Duan, Q; Gao, X; Sorooshian, S

    2006-05-08

    This paper examines several multi-model combination techniques: the Simple Multimodel Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.

  12. System identification and model reduction using modulating function techniques

    Science.gov (United States)

    Shen, Yan

    1993-01-01

    Weighted least squares (WLS) and adaptive weighted least squares (AWLS) algorithms are initiated for continuous-time system identification using Fourier type modulating function techniques. Two stochastic signal models are examined using the mean square properties of the stochastic calculus: an equation error signal model with white noise residuals, and a more realistic white measurement noise signal model. The covariance matrices in each model are shown to be banded and sparse, and a joint likelihood cost function is developed which links the real and imaginary parts of the modulated quantities. The superior performance of the above algorithms is demonstrated by comparing them with the LS/MFT and the popular prediction error method (PEM) through 200 Monte Carlo simulations. A model reduction problem is formulated with the AWLS/MFT algorithm, and comparisons are made via six examples with a variety of model reduction techniques, including the well-known balanced realization method. Here the AWLS/MFT algorithm manifests higher accuracy in almost all cases, and exhibits its unique flexibility and versatility. Armed with this model reduction, the AWLS/MFT algorithm is extended to MIMO transfer function system identification problems. The impact of the discrepancy in bandwidths and gains among subsystems is explored through five examples. Finally, as a comprehensive application, the stability derivatives of the longitudinal and lateral dynamics of an F-18 aircraft are identified using physical flight data provided by NASA. A pole-constrained SIMO and MIMO AWLS/MFT algorithm is devised and analyzed. Monte Carlo simulations illustrate its high noise-rejecting properties. Utilizing the flight data, comparisons among different MFT algorithms are tabulated, and the AWLS is found to be strongly favored in almost all facets.

  13. Wavelet-Based Techniques for the Gamma-Ray Sky

    CERN Document Server

    McDermott, Samuel D; Cholis, Ilias; Lee, Samuel K

    2015-01-01

    We demonstrate how the image analysis technique of wavelet decomposition can be applied to the gamma-ray sky to separate emission on different angular scales. New structures on scales that differ from the scales of the conventional astrophysical foreground and background uncertainties can be robustly extracted, allowing a model-independent characterization with no presumption of exact signal morphology. As a test case, we generate mock gamma-ray data to demonstrate our ability to extract extended signals without assuming a fixed spatial template. For some point source luminosity functions, our technique also allows us to differentiate a diffuse signal in gamma-rays from dark matter annihilation and extended gamma-ray point source populations in a data-driven way.
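
    A minimal version of the scale-separation step can be sketched with a 2D wavelet transform (here via the PyWavelets package): decompose a synthetic map containing smooth diffuse emission plus point-like sources, and keep only the coarsest level to isolate the large-scale component. The wavelet family, level count and the toy map are assumptions, not the authors' pipeline.

```python
import numpy as np
import pywt

# Separate large- and small-scale emission in a synthetic "sky map" with a
# 2D wavelet decomposition.  The map and wavelet choices are illustrative only.
rng = np.random.default_rng(0)
y, x = np.mgrid[0:128, 0:128]
diffuse = np.exp(-((x - 64) ** 2 + (y - 64) ** 2) / (2 * 40.0**2))     # large-scale blob
sky = diffuse.copy()
for _ in range(30):                                                     # point-like sources
    px, py = rng.integers(0, 128, 2)
    sky[py, px] += rng.uniform(0.5, 2.0)

coeffs = pywt.wavedec2(sky, "haar", level=4)

# Keep only the coarsest approximation to isolate the diffuse-like component.
large_only = [coeffs[0]] + [tuple(np.zeros_like(c) for c in detail) for detail in coeffs[1:]]
large_scale = pywt.waverec2(large_only, "haar")
small_scale = sky - large_scale[:128, :128]      # residual: small-scale structure

print(sky.sum(), large_scale.sum(), small_scale.sum())
```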

  14. Use of surgical techniques in the rat pancreas transplantation model

    Institute of Scientific and Technical Information of China (English)

    Yi Ma; Zhi-Yong Guo

    2008-01-01

    BACKGROUND: Pancreas transplantation is currently considered to be the most reliable and effective treatment for insulin-dependent diabetes mellitus (also called type 1 diabetes). With the improvement of microsurgical techniques, pancreas transplantation in rats has been the major model for physiological and immunological experimental studies in the past 20 years. We investigated the surgical techniques of pancreas transplantation in rats by analysing the differences between cervical segmental pancreas transplantation and abdominal pancreaticoduodenal transplantation. METHODS: Two hundred and forty male adult Wistar rats weighing 200-300 g were used, 120 as donors and 120 as recipients. Sixty cervical segmental pancreas transplants and 60 abdominal pancreaticoduodenal transplants were carried out and vessel anastomoses were made with microsurgical techniques. RESULTS: The time of donor pancreas harvesting in the cervical and abdominal groups was 31±6 and 37.6±3.8 min, respectively, and the lengths of the recipient operations were 49.2±5.6 and 60.6±7.8 min. The time for the donor operation was not significantly different (P>0.05), but the recipient operation time in the abdominal group was longer than that in the cervical group (P<0.05). CONCLUSIONS: Both pancreas transplantation methods are stable models for immunological and physiological studies in pancreas transplantation. Since each has its own advantages and disadvantages, the designer can choose the appropriate method according to the requirements of the study.

  15. Mobile Augmented Reality Support for Architects based on feature Tracking Techniques

    DEFF Research Database (Denmark)

    Grønbæk, Kaj; Nielsen, Mikkel Bang; Kramp, Gunnar

    2004-01-01

    This paper presents a mobile Augmented Reality (AR) system called the SitePack supporting architects in visualizing 3D models in real-time on site. We describe how vision based feature tracking techniques can help architects making decisions on site concerning visual impact assessment. The AR...

  17. A gray-box DPDA-based intrusion detection technique using system-call monitoring

    NARCIS (Netherlands)

    Jafarian, Jafar Haadi; Abbasi, Ali; Safaei Sheikhabadi, Siavash

    2011-01-01

    In this paper, we present a novel technique for automatic and efficient intrusion detection based on learning program behaviors. Program behavior is captured in terms of issued system calls augmented with point-of-system-call information, and is modeled according to an efficient deterministic

  18. Establishment of mean sea surface height model for Zhejiang coastal areas based on satellite altimetry technique

    Institute of Scientific and Technical Information of China (English)

    李静; 吉渊明; 岳建平; 彭刚跃; 宋亚宏

    2015-01-01

    Based on the waveform data from the Jason-2 and SARAL/AltiKa satellites, a new method of eliminating the gross error of altimetry data was developed. After eliminating sea surface heights that were not on the predetermined trajectory, the gross error in each segment of altimetry data was eliminated according to the sea surface height in each cycle of each Pass file, in order to improve the usability of satellite data in coastal areas. Through crossover adjustment, the radial orbit error and the time-varying sea level signals were further weakened. The high-accuracy discrete sea surface heights, obtained with the remove-restore technique, were gridded using the radial basis function method, and a mean sea surface height model with a grid resolution of 2.5′×2.5′ was established. The root mean square error between the sea surface heights from the established model and the data from tidal stations is ±0.017 m, and the standard deviation between the established model and MSS-CNES-CLS11 is ±0.070 m. The results show that the established mean sea surface height model for Zhejiang coastal areas is reliable.

  19. Néron Models and Base Change

    DEFF Research Database (Denmark)

    Halle, Lars Halvard; Nicaise, Johannes

    on Néron component groups, Edixhoven’s filtration and the base change conductor of Chai and Yu, and we study these invariants using various techniques such as models of curves, sheaves on Grothendieck sites and non-archimedean uniformization. We then apply our results to the study of motivic zeta functions...

  20. Efficient Identification Using a Prime-Feature-Based Technique

    DEFF Research Database (Denmark)

    Hussain, Dil Muhammad Akbar; Haq, Shaiq A.; Valente, Andrea

    2011-01-01

    Identification of authorized train drivers through biometrics is a growing area of interest in locomotive radio remote control systems. The existing technique of password authentication is not very reliable, and potentially unauthorized personnel may also operate the system on behalf of the operator. A fingerprint identification system, implemented on PC/104 based real-time systems, can accurately identify the operator. Traditionally, the uniqueness of a fingerprint is determined by the overall pattern of ridges and valleys as well as the local ridge anomalies, e.g., a ridge bifurcation or a ridge ending ... in this paper. The technique involves identifying the most prominent feature of the fingerprint and searching only for that feature in the database to expedite the search process. The proposed architecture provides an efficient matching process, and the indexing feature used for identification is unique....

  1. An Improved Face Recognition Technique Based on Modular LPCA Approach

    Directory of Open Access Journals (Sweden)

    Mathu S.S. Kumar

    2011-01-01

    Full Text Available Problem statement: A face identification algorithm based on modular localized variation by the Eigen subspace technique, also called modular localized principal component analysis, is presented in this study. Approach: The face imagery was partitioned into smaller sub-divisions from a predefined neighborhood and they were ultimately fused to acquire many sets of features. Since a few of the normal facial features of an individual do not differ even when the pose and illumination differ, the proposed method manages these variations. Results: The proposed feature selection module significantly enhanced the identification precision on standard face databases when compared to conventional and modular PCA techniques. Conclusion: The proposed algorithm, when compared with the conventional PCA algorithm and modular PCA, has enhanced recognition accuracy for face imagery with illumination, expression and pose variations.
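
    The modular idea can be sketched as follows: split each image into blocks, fit a separate PCA for each block position, and concatenate the block projections into one feature vector that is matched by nearest neighbour. The image size, block size and random stand-in faces below are placeholders, not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA

# Sketch of modular PCA: per-block-position PCA models, concatenated projections.
rng = np.random.default_rng(0)
n_faces, H, W, block = 40, 32, 32, 8
faces = rng.random((n_faces, H, W))          # stand-in for a face database

def blocks(img):
    return [img[i:i + block, j:j + block].ravel()
            for i in range(0, H, block) for j in range(0, W, block)]

per_block = list(zip(*[blocks(f) for f in faces]))        # group samples by block position
models = [PCA(n_components=5).fit(np.array(samples)) for samples in per_block]

def features(img):
    return np.concatenate([m.transform(b.reshape(1, -1))[0]
                           for m, b in zip(models, blocks(img))])

gallery = np.array([features(f) for f in faces])
probe = features(faces[7] + 0.05 * rng.random((H, W)))     # noisy version of face 7
match = int(np.argmin(np.linalg.norm(gallery - probe, axis=1)))
print("best match:", match)
```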

  2. Novel synchrotron based techniques for characterization of energy materials

    Energy Technology Data Exchange (ETDEWEB)

    Poulsen, H.F.; Nielsen, S.F.; Olsen, U.L.; Schmidt, S. (Risoe DTU, Materials Research Dept., Roskilde (Denmark)); Wright, J. (European Synchrotron Radiation Facility, Grenoble Cedex (France))

    2008-10-15

    Two synchrotron techniques are reviewed, both based on the use of high energy x-rays, and both applicable to in situ studies of bulk materials. Firstly, 3DXRD microscopy, which enables 3D characterization of the position, morphology, phase, elastic strain and crystallographic orientation of the individual embedded grains in polycrystalline specimens. In favourable cases, hundreds of grains can be studied simultaneously during processing. Secondly, plastic strain tomography: a unique method for determining the plastic strain field within materials during processing. The potential applications of these techniques for basic and applied studies of four types of energy materials are discussed: polymer composites for wind turbines, solid oxide fuel cells, hydrogen storage materials and superconducting tapes. Furthermore, progress on new detectors aiming at improving the spatial and temporal resolution of such measurements is described. (au)

  3. New Intellectual Economized Technique on Electricity Based on DSP

    Institute of Scientific and Technical Information of China (English)

    Chang-ming LI; Tao JI; Ying SUN

    2010-01-01

    In order to resolve the problem of unbalanced three-phase and unstable voltage, an intellectual economized technique on electricity based on electromagnetic regulation and control is proposed in this paper. We choose the TMS320LF2407A as the control chip and a stepper motor as the executing agency. The equipment drives the movable contact to the assigned position on the magnetic coil quickly and accurately, and outputs a steady sine-wave voltage that follows the network voltage variation through a fuzzy Proportional Integral Derivative (PID) control algorithm with integral separation, in incremental form and with a set dead zone. The working principle and the key techniques of the electromagnetic regulation and control are introduced in detail in this paper. The experimental results verify the algorithms described in this paper.
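
    The control law named above, an incremental (velocity-form) PID with integral separation and a dead zone, can be sketched as follows; the gains, thresholds and first-order plant are illustrative assumptions only.

```python
# Incremental PID with integral separation and dead zone, driving a stepper-motor
# position that regulates an output voltage.  All numbers are illustrative.
KP, KI, KD = 2.0, 0.6, 0.1
DEAD_BAND = 0.5            # |error| below this: no correction (avoids hunting)
INT_SEP = 20.0             # |error| above this: drop the integral term (anti-windup)

def make_controller():
    e_prev, e_prev2 = 0.0, 0.0
    def step(error):
        nonlocal e_prev, e_prev2
        if abs(error) < DEAD_BAND:
            delta = 0.0
        else:
            ki = 0.0 if abs(error) > INT_SEP else KI        # integral separation
            delta = (KP * (error - e_prev)                  # incremental (velocity) form
                     + ki * error
                     + KD * (error - 2 * e_prev + e_prev2))
        e_prev2, e_prev = e_prev, error
        return delta
    return step

controller = make_controller()
setpoint, voltage, position = 220.0, 150.0, 0.0
for _ in range(60):
    delta = controller(setpoint - voltage)
    position += delta                                        # stepper-motor increment
    voltage += 0.4 * (200.0 + 0.5 * position - voltage)      # toy first-order response
print(round(voltage, 1), round(position, 1))
```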

  4. Experimental technique of calibration of symmetrical air pollution models

    Indian Academy of Sciences (India)

    P Kumar

    2005-10-01

    Based on the inherent property of symmetry of air pollution models, a Symmetrical Air Pollution Model Index (SAPMI) has been developed to calibrate the accuracy of predictions made by such models, where the initial quantity of release at the source is not known. For exact prediction the value of SAPMI should be equal to 1. If the predicted values are overestimating then SAPMI is > 1, and if it is underestimating then SAPMI is < 1. A specific design for the layout of receptors has been suggested as a requirement for the calibration experiments. SAPMI is applicable for all variations of symmetrical air pollution dispersion models.

  5. Artificial intelligence techniques for modeling database user behavior

    Science.gov (United States)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering use access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.

  6. Automated Techniques for the Qualitative Analysis of Ecological Models: Continuous Models

    Directory of Open Access Journals (Sweden)

    Lynn van Coller

    1997-06-01

    Full Text Available The mathematics required for a detailed analysis of the behavior of a model can be formidable. In this paper, I demonstrate how various computer packages can aid qualitative analyses by implementing techniques from dynamical systems theory. Because computer software is used to obtain the results, the techniques can be used by nonmathematicians as well as mathematicians. In-depth analyses of complicated models that were previously very difficult to study can now be done. Because the paper is intended as an introduction to applying the techniques to ecological models, I have included an appendix describing some of the ideas and terminology. A second appendix shows how the techniques can be applied to a fairly simple predator-prey model and establishes the reliability of the computer software. The main body of the paper discusses a ratio-dependent model. The new techniques highlight some limitations of isocline analyses in this three-dimensional setting and show that the model is structurally unstable. Another appendix describes a larger model of a sheep-pasture-hyrax-lynx system. Dynamical systems techniques are compared with a traditional sensitivity analysis and are found to give more information. As a result, an incomplete relationship in the model is highlighted. I also discuss the resilience of these models to both parameter and population perturbations.

  7. A quantitative comparison of the TERA modeling and DFT magnetic resonance image reconstruction techniques.

    Science.gov (United States)

    Smith, M R; Nichols, S T; Constable, R T; Henkelman, R M

    1991-05-01

    The resolution of magnetic resonance images reconstructed using the discrete Fourier transform (DFT) algorithm is limited by the effective window generated by the finite data length. The transient error reconstruction approach (TERA) is an alternative reconstruction method based on autoregressive moving average (ARMA) modeling techniques. Quantitative measurements comparing the truncation artifacts present during DFT and TERA image reconstruction show that the modeling method substantially reduces these artifacts on "full" (256 X 256), "truncated" (256 X 192), and "severely truncated" (256 X 128) data sets without introducing the global amplitude distortion found in other modeling techniques. Two global measures for determining the success of modeling are suggested. Problem areas for one-dimensional modeling are examined and reasons for considering two-dimensional modeling discussed. Analysis of both medical and phantom data reconstructions are presented.

  8. Automatic parameter extraction techniques in IC-CAP for a compact double gate MOSFET model

    Science.gov (United States)

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin

    2013-05-01

    In this paper, automatic parameter extraction techniques of Agilent's IC-CAP modeling package are presented to extract our explicit compact model parameters. This model is developed based on a surface potential model and coded in Verilog-A. The model has been adapted to Trigate MOSFETs, includes short channel effects (SCEs) and allows accurate simulations of the device characteristics. The parameter extraction routines provide an effective way to extract the model parameters. The techniques minimize the discrepancy and error between the simulation results and the available experimental data for more accurate parameter values and reliable circuit simulation. Behavior of the second derivative of the drain current is also verified and proves to be accurate and continuous through the different operating regimes. The results show good agreement with measured transistor characteristics under different conditions and through all operating regimes.

  9. CEAI: CCM based Email Authorship Identification Model

    DEFF Research Database (Denmark)

    Nizamani, Sarwat; Memon, Nasrullah

    2013-01-01

    In this paper we present a model for email authorship identification (EAI) by employing a Cluster-based Classification (CCM) technique. Traditionally, stylometric features have been successfully employed in various authorship analysis tasks; we extend the traditional feature-set to include some more interesting and effective features for email authorship identification (e.g. the last punctuation mark used in an email, the tendency of an author to use capitalization at the start of an email, or the punctuation after a greeting or farewell). We also included Info Gain feature selection based ... reveal that the proposed CCM-based email authorship identification model, along with the proposed feature set, outperforms the state-of-the-art support vector machine (SVM)-based models, as well as the models proposed by Iqbal et al. [1, 2]. The proposed model attains an accuracy rate of 94% for 10...

  10. Theoretical modeling techniques and their impact on tumor immunology.

    Science.gov (United States)

    Woelke, Anna Lena; Murgueitio, Manuela S; Preissner, Robert

    2010-01-01

    Currently, cancer is one of the leading causes of death in industrial nations. While conventional cancer treatment usually results in the patient suffering from severe side effects, immunotherapy is a promising alternative. Nevertheless, some questions remain unanswered with regard to using immunotherapy to treat cancer hindering it from being widely established. To help rectify this deficit in knowledge, experimental data, accumulated from a huge number of different studies, can be integrated into theoretical models of the tumor-immune system interaction. Many complex mechanisms in immunology and oncology cannot be measured in experiments, but can be analyzed by mathematical simulations. Using theoretical modeling techniques, general principles of tumor-immune system interactions can be explored and clinical treatment schedules optimized to lower both tumor burden and side effects. In this paper, we aim to explain the main mathematical and computational modeling techniques used in tumor immunology to experimental researchers and clinicians. In addition, we review relevant published work and provide an overview of its impact to the field.

  11. Model-Based Motion Tracking of Infants

    DEFF Research Database (Denmark)

    Olsen, Mikkel Damgaard; Herskind, Anna; Nielsen, Jens Bo;

    2014-01-01

    Even though motion tracking is a widely used technique to analyze and measure human movements, only a few studies focus on motion tracking of infants. In recent years, a number of studies have emerged focusing on analyzing the motion pattern of infants using computer vision. Most of these studies are based on 2D images, but few are based on 3D information. In this paper, we present a model-based approach for tracking infants in 3D. The study extends a novel study on graph-based motion tracking of infants and we show that the extension improves the tracking results. A 3D model is constructed that resembles the body surface of an infant, where the model is based on simple geometric shapes and a hierarchical skeleton model....

  12. Spoken Document Retrieval Leveraging Unsupervised and Supervised Topic Modeling Techniques

    Science.gov (United States)

    Chen, Kuan-Yu; Wang, Hsin-Min; Chen, Berlin

    This paper describes the application of two attractive categories of topic modeling techniques to the problem of spoken document retrieval (SDR), viz. document topic model (DTM) and word topic model (WTM). Apart from using the conventional unsupervised training strategy, we explore a supervised training strategy for estimating these topic models, imagining a scenario that user query logs along with click-through information of relevant documents can be utilized to build an SDR system. This attempt has the potential to associate relevant documents with queries even if they do not share any of the query words, thereby improving on retrieval quality over the baseline system. Likewise, we also study a novel use of pseudo-supervised training to associate relevant documents with queries through a pseudo-feedback procedure. Moreover, in order to lessen SDR performance degradation caused by imperfect speech recognition, we investigate leveraging different levels of index features for topic modeling, including words, syllable-level units, and their combination. We provide a series of experiments conducted on the TDT (TDT-2 and TDT-3) Chinese SDR collections. The empirical results show that the methods deduced from our proposed modeling framework are very effective when compared with a few existing retrieval approaches.

  13. Prospective memory rehabilitation based on visual imagery techniques.

    Science.gov (United States)

    Potvin, Marie-Julie; Rouleau, Isabelle; Sénéchal, Geneviève; Giguère, Jean-François

    2011-12-01

    Despite the frequency of prospective memory (PM) problems in the traumatic brain injury (TBI) population, there are only a few rehabilitation programmes that have been specifically designed to address this issue, other than those using external compensatory strategies. In the present study, a PM rehabilitation programme based on visual imagery techniques expected to strengthen the cue-action association was developed. Ten moderate to severe chronic TBI patients learned to create a mental image representing the association between a prospective cue and an intended action within progressively more complex and naturalistic PM tasks. We hypothesised that compared to TBI patients (n = 20) who received a short session of education (control condition), TBI patients in the rehabilitation group would exhibit a greater improvement on the event-based than on the time-based condition of a PM ecological task. Results revealed however that this programme was similarly beneficial for both conditions. TBI patients in the rehabilitation group and their relatives also reported less everyday PM failures following the programme, which suggests generalisation. The PM improvement appears to be specific since results on cognitive control tasks remained similar. Therefore, visual imagery techniques appear to improve PM functioning by strengthening the memory trace of the intentions and inducing an automatic recall of the intentions.

  14. Antimisting kerosene: Base fuel effects, blending and quality control techniques

    Science.gov (United States)

    Yavrouian, A. H.; Ernest, J.; Sarohia, V.

    1984-01-01

    The problems associated with blending of the AMK additive with Jet A, and the base fuel effects on AMK properties, are addressed. The results from the evaluation of some of the quality control techniques for AMK are presented. The principal conclusions of this investigation are: significant compositional differences exist for the base fuel (Jet A) within the ASTM specification D1655; higher aromatic content of the base fuel was found to be beneficial for polymer dissolution at ambient (20 C) temperature; using static mixer technology, the antimisting additive (FM-9) is in-line blended with Jet A, producing AMK which has adequate fire-protection properties 15 to 20 minutes after blending; degradability of freshly blended and equilibrated AMK indicated that maximum degradability is reached after adequate fire protection is obtained; the results of AMK degradability as measured by filter ratio confirmed previous RAE data that power requirements to degrade freshly blended AMK are significantly higher than for equilibrated AMK; blending of the additive by using FM-9 concentrate in Jet A produces equilibrated AMK almost instantly; nephelometry offers a simple continuous monitoring capability and is used as a real time quality control device for AMK; and trajectory (jet thrust) and pressure drop tests are useful laboratory techniques for evaluating AMK quality.

  15. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    Energy Technology Data Exchange (ETDEWEB)

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    Nonconventional wells, which include horizontal, deviated, multilateral and ''smart'' wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow

  16. Estimating monthly temperature using point based interpolation techniques

    Science.gov (United States)

    Saaban, Azizan; Mah Hashim, Noridayu; Murat, Rusdi Indra Zuhdi

    2013-04-01

    This paper discusses the use of point based interpolation to estimate the value of temperature at an unallocated meteorology stations in Peninsular Malaysia using data of year 2010 collected from the Malaysian Meteorology Department. Two point based interpolation methods which are Inverse Distance Weighted (IDW) and Radial Basis Function (RBF) are considered. The accuracy of the methods is evaluated using Root Mean Square Error (RMSE). The results show that RBF with thin plate spline model is suitable to be used as temperature estimator for the months of January and December, while RBF with multiquadric model is suitable to estimate the temperature for the rest of the months.
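
    As an illustration of the point-based interpolation idea, the sketch below implements plain Inverse Distance Weighted estimation; the station coordinates and temperatures are hypothetical, and an RBF variant could be built analogously (for example with scipy.interpolate.RBFInterpolator).

        import numpy as np

        def idw(stations, values, query, power=2.0):
            # Inverse Distance Weighted estimate at an unmonitored location.
            d = np.linalg.norm(stations - query, axis=1)
            if np.any(d == 0.0):             # query coincides with a station
                return values[d == 0.0][0]
            w = 1.0 / d**power               # closer stations get larger weights
            return np.sum(w * values) / np.sum(w)

        # Hypothetical monthly mean temperatures (deg C) at four stations
        stations = np.array([[101.7, 3.1], [100.3, 5.4], [103.1, 5.3], [102.2, 2.2]])
        temps = np.array([27.4, 26.8, 27.9, 27.1])
        print(idw(stations, temps, np.array([101.9, 4.0])))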

  17. Signal Processing Technique for Combining Numerous MEMS Gyroscopes Based on Dynamic Conditional Correlation

    OpenAIRE

    Jieyu Liu; Qiang Shen; Weiwei Qin

    2015-01-01

    A signal processing technique is presented to improve the angular rate accuracy of Micro-Electro-Mechanical System (MEMS) gyroscope by combining numerous gyroscopes. Based on the conditional correlation between gyroscopes, a dynamic data fusion model is established. Firstly, the gyroscope error model is built through Generalized Autoregressive Conditional Heteroskedasticity (GARCH) process to improve overall performance. Then the conditional covariance obtained through dynamic conditional cor...

  18. Transformer-based design techniques for oscillators and frequency dividers

    CERN Document Server

    Luong, Howard Cam

    2016-01-01

    This book provides in-depth coverage of transformer-based design techniques that enable CMOS oscillators and frequency dividers to achieve state-of-the-art performance.  Design, optimization, and measured performance of oscillators and frequency dividers for different applications are discussed in detail, focusing on not only ultra-low supply voltage but also ultra-wide frequency tuning range and locking range.  This book will be an invaluable reference for anyone working or interested in CMOS radio-frequency or mm-Wave integrated circuits and systems.

  19. Modal Analysis Based on the Random Decrement Technique

    DEFF Research Database (Denmark)

    Asmussen, J. C.; Brincker, Rune

    1998-01-01

    This article describes the work carried out within the project: Modal Analysis Based on the Random Decrement Technique - Application to Civil Engineering Structures. The project is part of the research programme: Dynamics of Structures sponsored by the Danish Technical Research Council. The planned...... contents and the requirements for the project prior to its start are described together with the results obtained during the 3 year period of the project. The project was mainly carried out as a Ph.D project by the first author from September 1994 to August 1997 in cooperation with associate professor Rune...
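
    The core of the random decrement technique is easy to sketch: segments of the measured response are collected every time a trigger condition is met and then ensemble-averaged, which under broad-band excitation leaves an estimate of the free decay. The Python sketch below uses a synthetic AR(2) "structure" driven by white noise; it is not the project's implementation and all numbers are assumptions.

        import numpy as np

        def random_decrement(y, trigger, seg_len):
            # Collect a segment at every up-crossing of the trigger level and average them.
            starts = [i for i in range(1, len(y) - seg_len)
                      if y[i - 1] < trigger <= y[i]]
            return np.mean([y[i:i + seg_len] for i in starts], axis=0)

        # Illustrative ambient response: an AR(2) resonator excited by white noise
        rng = np.random.default_rng(1)
        w0, r = 2 * np.pi * 0.02, 0.995            # resonance (cycles/sample) and decay
        a1, a2 = 2 * r * np.cos(w0), -r * r
        y = np.zeros(50000)
        e = rng.standard_normal(y.size)
        for k in range(2, y.size):
            y[k] = a1 * y[k - 1] + a2 * y[k - 2] + e[k]

        signature = random_decrement(y, trigger=y.std(), seg_len=400)
        print(signature[:5])   # proportional to the free decay, from which modal parameters follow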

  20. A comparison of base running and sliding techniques in collegiate baseball with implications for sliding into first base

    Institute of Scientific and Technical Information of China (English)

    Travis Ficklin; Jesus Dapena; Alexander Brunfeldt

    2016-01-01

    Purpose: The purpose of this study was to compare 4 techniques for arrival at a base after sprinting maximally to reach it:sliding head-first, sliding feet-first, running through the base without slowing, and stopping on the base. A secondary purpose of the study was to determine any advantage there may be to diving into first base to arrive sooner than running through the base. Methods: Two high-definition video cameras were used to capture 3-dimensional kinematics of sliding techniques of 9 intercollegiate baseball players. Another video camera was used to time runs from first base to second in 4 counterbalanced conditions:running through the base, sliding head-first, sliding feet-first, and running to a stop. Mathematical modeling was used to simulate diving to first base such that the slide would begin when the hand touches the base. Results: Based upon overall results, the quickest way to the base is by running through it, followed by head-first, feet-first, and running to a stop. Conclusion: There was a non-significant trend toward an advantage for diving into first base over running through it, but more research is needed, and even if the advantage is real, the risks of executing this technique probably outweigh the miniscule gain.

  1. Office-based tracheoesophageal puncture: updates in techniques and outcomes.

    Science.gov (United States)

    Bergeron, Jennifer L; Jamal, Nausheen; Erman, Andrew; Chhetri, Dinesh K

    2014-01-01

    Tracheoesophageal puncture (TEP) is an effective rehabilitation method for postlaryngectomy speech and has already been described as a procedure that is safely performed in the office. We review our long-term experience with office-based TEP over the past 7 years in the largest cohort published to date. A retrospective chart review was performed of all patients who underwent TEP by a single surgeon from 2005 through 2012, including office-based and operating room procedures. Indications for the chosen technique (office versus operating room) and surgical outcomes were evaluated. Fifty-nine patients underwent 72 TEP procedures, with 55 performed in the outpatient setting and 17 performed in the operating room, all without complication. The indications for performing TEPs in the operating room included 2 primary TEPs, 14 due to concomitant procedures requiring general anesthesia, and 1 due to failed attempt at office-based TEP. Nineteen patients with prior rotational or free flap reconstruction successfully underwent office-based TEP. TEP in an office-based setting with immediate voice prosthesis placement continues to be a safe method of voice rehabilitation for postlaryngectomy patients, including those who have previously undergone free flap or rotational flap reconstruction. Office-based TEP is now our primary approach for postlaryngectomy voice rehabilitation. Copyright © 2014 Elsevier Inc. All rights reserved.

  2. Cluster Based Text Classification Model

    DEFF Research Database (Denmark)

    2011-01-01

    We propose a cluster based classification model for suspicious email detection and other text classification tasks. The text classification tasks comprise many training examples that require a complex classification model. Using clusters for classification makes the model simpler and increases th...... datasets. Our model also outperforms A Decision Cluster Classification (ADCC) and the Decision Cluster Forest Classification (DCFC) models on the Reuters-21578 dataset....

  3. A Multiagent Based Model for Tactical Planning

    Science.gov (United States)

    2002-10-01

    ...has been developed under the same conceptual model and using similar Artificial Intelligence tools. We use four different stimulus/response agents in... The conceptual model is built on the basis of agent theory. To implement the different agents we have used Artificial Intelligence techniques such as...

  4. Weighted Least Squares Techniques for Improved Received Signal Strength Based Localization

    Directory of Open Access Journals (Sweden)

    José R. Casar

    2011-09-01

    Full Text Available The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network. The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.
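
    A minimal sketch of the weighted circular (least squares multilateration) idea follows: RSS readings are converted to distances with a log-distance path-loss model, and the position is obtained from a weighted linear least squares solution. All anchor positions, RSS values and path-loss parameters are hypothetical, and weighting by inverse squared distance is only one plausible way of favouring the more accurate measurements.

        import numpy as np

        def rss_to_distance(rss, p0=-40.0, d0=1.0, n_path=2.5):
            # Log-distance path-loss model (parameters are illustrative assumptions).
            return d0 * 10.0 ** ((p0 - rss) / (10.0 * n_path))

        def wls_position(anchors, dists, weights):
            # Linearised circular positioning: subtract the first anchor's equation.
            a0, r0 = anchors[0], dists[0]
            A = 2.0 * (anchors[1:] - a0)
            b = r0**2 - dists[1:]**2 + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2)
            W = np.diag(weights[1:])
            return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

        anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        rss = np.array([-62.0, -55.0, -68.0, -64.0])   # hypothetical readings (dBm)
        d = rss_to_distance(rss)
        w = 1.0 / d**2                                  # trust nearby, less noisy measurements more
        print(wls_position(anchors, d, w))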

  5. Weighted least squares techniques for improved received signal strength based localization.

    Science.gov (United States)

    Tarrío, Paula; Bernardos, Ana M; Casar, José R

    2011-01-01

    The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is to be perfectly characterized a priori. In practice, this assumption does not hold and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency of having an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.

  6. Advanced modeling techniques in application to plasma pulse treatment

    Science.gov (United States)

    Pashchenko, A. F.; Pashchenko, F. F.

    2016-06-01

    Different approaches are considered for the simulation of the plasma pulse treatment process. The assumption of a significant non-linearity of processes in the treatment of oil wells has been confirmed. The method of functional transformations and fuzzy logic methods are suggested for the construction of a mathematical model. It is shown that models based on fuzzy logic are able to provide a satisfactory accuracy of simulation and prediction of the non-linear processes observed.

  7. Structural level characterization of base oils using advanced analytical techniques

    KAUST Repository

    Hourani, Nadim

    2015-05-21

    Base oils, blended for finished lubricant formulations, are classified by the American Petroleum Institute into five groups, viz., groups I-V. Groups I-III consist of petroleum based hydrocarbons whereas groups IV and V are made of synthetic polymers. In the present study, five base oil samples belonging to groups I and III were extensively characterized using high performance liquid chromatography (HPLC), comprehensive two-dimensional gas chromatography (GC×GC), and Fourier transform ion cyclotron resonance mass spectrometry (FT-ICR MS) equipped with atmospheric pressure chemical ionization (APCI) and atmospheric pressure photoionization (APPI) sources. First, the capabilities and limitations of each analytical technique were evaluated, and then the information obtained was combined to reveal compositional details on the base oil samples studied. HPLC showed the overwhelming presence of saturated over aromatic compounds in all five base oils. A similar trend was further corroborated using GC×GC, which yielded semiquantitative information on the compound classes present in the samples and provided further details on the carbon number distributions within these classes. In addition to the chromatography methods, FT-ICR MS supplemented the compositional information on the base oil samples by resolving the aromatic compounds into alkyl- and naphtheno-substituted families. APCI proved more effective for the ionization of the highly saturated base oil components compared to APPI. Furthermore, beyond the detailed information on hydrocarbon molecules, FT-ICR MS revealed the presence of saturated and aromatic sulfur species in all base oil samples. The results presented herein offer a unique perspective into the detailed molecular structure of base oils typically used to formulate lubricants. © 2015 American Chemical Society.

  8. A Comparative Analysis of Exemplar Based and Wavelet Based Inpainting Technique

    Directory of Open Access Journals (Sweden)

    Vaibhav V Nalawade

    2012-06-01

    Full Text Available Image inpainting is the process of filling in missing regions of an image so as to preserve its overall continuity. Image inpainting is manipulation and modification of an image in a form that is not easily detected. Digital image inpainting is a relatively new area of research, but numerous and different approaches to tackle the inpainting problem have been proposed since the concept was first introduced. This paper compares two separate techniques, viz. the exemplar based inpainting technique and the wavelet based inpainting technique, each portraying a different set of characteristics. The algorithms analyzed under the exemplar technique are large object removal by exemplar based inpainting (Criminisi) and modified exemplar (Cheng). The algorithm analyzed under wavelet is Chen's visual image inpainting method. A number of examples on real and synthetic images are demonstrated to compare the results of the different algorithms using both qualitative and quantitative parameters.

  9. Determination of Complex-Valued Parametric Model Coefficients Using Artificial Neural Network Technique

    Directory of Open Access Journals (Sweden)

    A. M. Aibinu

    2010-01-01

    Full Text Available A new approach for determining the coefficients of a complex-valued autoregressive (CAR) model and a complex-valued autoregressive moving average (CARMA) model using a complex-valued neural network (CVNN) technique is discussed in this paper. The CAR and complex-valued moving average (CMA) coefficients which constitute a CARMA model are computed simultaneously from the adaptive weights and coefficients of the linear activation functions in a two-layered CVNN. The performance of the proposed technique has been evaluated using simulated complex-valued data (CVD) with three different types of activation functions. The results show that the proposed method can accurately determine the model coefficients provided that the network is properly trained. Furthermore, application of the developed CVNN-based technique to MRI k-space reconstruction results in images with improved resolution.

  10. Simulation technique for hard-disk models in two dimensions

    DEFF Research Database (Denmark)

    Fraser, Diane P.; Zuckermann, Martin J.; Mouritsen, Ole G.

    1990-01-01

    A method is presented for studying hard-disk systems by Monte Carlo computer-simulation techniques within the NpT ensemble. The method is based on the Voronoi tesselation, which is dynamically maintained during the simulation. By an analysis of the Voronoi statistics, a quantity is identified...... that is extremely sensitive to structural changes in the system. This quantity, which is derived from the edge-length distribution function of the Voronoi polygons, displays a dramatic change at the solid-liquid transition. This is found to be more useful for locating the transition than either the defect density...

  11. Technique for Early Reliability Prediction of Software Components Using Behaviour Models

    Science.gov (United States)

    Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad

    2016-01-01

    Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748
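
    The flavour of such a computation can be conveyed with a generic sketch (this is not the authors' CPDG algorithm): each component is given a reliability, each edge a transition probability, and system reliability is accumulated over all execution paths of an acyclic dependency graph using an explicit stack. All node names, reliabilities and probabilities below are hypothetical.

        # Generic architecture-based reliability sketch over an acyclic dependency graph.
        rel = {"A": 0.99, "B": 0.97, "C": 0.95, "END": 1.0}            # hypothetical component reliabilities
        edges = {"A": [("B", 0.6), ("C", 0.4)], "B": [("END", 1.0)],
                 "C": [("END", 1.0)], "END": []}                       # hypothetical transition probabilities

        def system_reliability(start="A"):
            total = 0.0
            stack = [(start, rel[start])]          # (node, accumulated path reliability)
            while stack:
                node, acc = stack.pop()
                if not edges[node]:
                    total += acc                   # reached a terminal node
                for nxt, p in edges[node]:
                    stack.append((nxt, acc * p * rel[nxt]))
            return total

        print(system_reliability())   # 0.99 * (0.6*0.97 + 0.4*0.95)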

  12. Technique for Early Reliability Prediction of Software Components Using Behaviour Models.

    Science.gov (United States)

    Ali, Awad; N A Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad

    Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction.

  13. Comparison of acrylamide intake from Western and guideline based diets using probabilistic techniques and linear programming.

    Science.gov (United States)

    Katz, Josh M; Winter, Carl K; Buttrey, Samuel E; Fadel, James G

    2012-03-01

    Western and guideline based diets were compared to determine if dietary improvements resulting from following dietary guidelines reduce acrylamide intake. Acrylamide forms in heat treated foods and is a human neurotoxin and animal carcinogen. Acrylamide intake from the Western diet was estimated with probabilistic techniques using teenage (13-19 years) National Health and Nutrition Examination Survey (NHANES) food consumption estimates combined with FDA data on the levels of acrylamide in a large number of foods. Guideline based diets were derived from NHANES data using linear programming techniques to comport to recommendations from the Dietary Guidelines for Americans, 2005. Whereas the guideline based diets were more properly balanced and rich in consumption of fruits, vegetables, and other dietary components than the Western diets, acrylamide intake (mean±SE) was significantly greater from the guideline based diets derived by linear programming. The results demonstrate that linear programming techniques can be used to model specific diets for the assessment of toxicological and nutritional dietary components.
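
    The linear programming step can be illustrated with a toy diet problem; every food, nutrient content, guideline value and acrylamide level below is hypothetical. The objective is to satisfy minimum nutrient guidelines while minimising acrylamide intake.

        import numpy as np
        from scipy.optimize import linprog

        foods = ["whole grains", "vegetables", "poultry"]
        acrylamide = np.array([15.0, 5.0, 0.5])        # ug per serving (objective)
        # Nutrient content per serving: rows = fibre (g), protein (g)
        nutrients = np.array([[6.0, 3.0, 0.0],
                              [3.0, 2.0, 25.0]])
        guideline_min = np.array([25.0, 55.0])          # hypothetical daily minimums

        # linprog expects "A_ub x <= b_ub", so minimum constraints are negated
        res = linprog(c=acrylamide,
                      A_ub=-nutrients, b_ub=-guideline_min,
                      bounds=[(0, 10)] * 3)             # at most 10 servings of each food
        print(dict(zip(foods, res.x)), "->", res.fun, "ug acrylamide/day")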

  14. Concerning the Feasibility of Example-driven Modelling Techniques

    CERN Document Server

    Thorne, Simon R; Lawson, Z

    2008-01-01

    We report on a series of experiments concerning the feasibility of example driven modelling. The main aim was to establish experimentally within an academic environment: the relationship between error and task complexity using a) Traditional spreadsheet modelling; b) example driven techniques. We report on the experimental design, sampling, research methods and the tasks set for both control and treatment groups. Analysis of the completed tasks allows comparison of several different variables. The experimental results compare the performance indicators for the treatment and control groups by comparing accuracy, experience, training, confidence measures, perceived difficulty and perceived completeness. The various results are thoroughly tested for statistical significance using: the Chi squared test, Fisher's exact test for significance, Cochran's Q test and McNemar's test on difficulty.

  15. Advanced computer modeling techniques expand belt conveyor technology

    Energy Technology Data Exchange (ETDEWEB)

    Alspaugh, M.

    1998-07-01

    Increased mining production is continuing to challenge engineers and manufacturers to keep up. The pressure to produce larger and more versatile equipment is increasing. This paper will show some recent major projects in the belt conveyor industry that have pushed the limits of design and engineering technology. Also, it will discuss the systems engineering discipline and advanced computer modeling tools that have helped make these achievements possible. Several examples of technologically advanced designs will be reviewed. However, new technology can sometimes produce increased problems with equipment availability and reliability if not carefully developed. Computer modeling techniques that help one design larger equipment can also compound operational headaches if engineering processes and algorithms are not carefully analyzed every step of the way.

  16. EXPERIENCE WITH SYNCHRONOUS GENERATOR MODEL USING PARTICLE SWARM OPTIMIZATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    N.RATHIKA

    2014-07-01

    Full Text Available This paper addresses the modeling of a polyphase synchronous generator and the minimization of power losses using the particle swarm optimization (PSO) technique with a constriction factor. Use of a polyphase synchronous generator allows the total power circulating in the system to be distributed across all phases. Another advantage of a polyphase system is that a fault in one winding does not lead to system shutdown. Process optimization is the discipline of adjusting a process so as to optimize some stipulated set of parameters without violating a constraint. Accurate parameter values can be extracted using PSO and the model can be reformulated accordingly. Modeling and simulation of the machine are carried out, and MATLAB/Simulink has been used to implement and validate the results.
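
    The constriction-factor PSO referred to here can be sketched generically as follows (a Python sketch rather than the paper's MATLAB/Simulink implementation). The quadratic test objective merely stands in for the generator loss model, and every constant is an assumption.

        import numpy as np

        def pso_constriction(loss, dim, n_particles=30, iters=200, seed=0):
            # Minimal particle swarm optimiser with Clerc's constriction factor.
            rng = np.random.default_rng(seed)
            c1 = c2 = 2.05
            phi = c1 + c2
            chi = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))   # ~0.7298
            x = rng.uniform(-5.0, 5.0, (n_particles, dim))
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.array([loss(p) for p in x])
            gbest = pbest[np.argmin(pbest_val)].copy()
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
                x = x + v
                vals = np.array([loss(p) for p in x])
                improved = vals < pbest_val
                pbest[improved], pbest_val[improved] = x[improved], vals[improved]
                gbest = pbest[np.argmin(pbest_val)].copy()
            return gbest, pbest_val.min()

        # Stand-in objective for generator power losses as a function of two parameters
        loss = lambda p: (p[0] - 1.2)**2 + (p[1] + 0.4)**2 + 0.3
        print(pso_constriction(loss, dim=2))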

  17. Method for gesture based modeling

    DEFF Research Database (Denmark)

    2006-01-01

    A computer program based method is described for creating models using gestures. On an input device, such as an electronic whiteboard, a user draws a gesture which is recognized by a computer program and interpreted relative to a predetermined meta-model. Based on the interpretation, an algorithm...... is assigned to the gesture drawn by the user. The executed algorithm may, for example, consist in creating a new model element, modifying an existing model element, or deleting an existing model element....

  18. Multiple Fan-Beam Optical Tomography: Modelling Techniques

    Directory of Open Access Journals (Sweden)

    Pang Jon Fea

    2009-10-01

    Full Text Available This paper explains in detail the solution to the forward and inverse problem faced in this research. In the forward problem section, the projection geometry and the sensor modelling are discussed. The dimensions, distributions and arrangements of the optical fibre sensors are determined based on the real hardware constructed and these are explained in the projection geometry section. The general idea in sensor modelling is to simulate an artificial environment, but with similar system properties, to predict the actual sensor values for various flow models in the hardware system. The sensitivity maps produced from the solution of the forward problems are important in reconstructing the tomographic image.

  19. ONLINE GRINDING WHEEL WEAR COMPENSATION BY IMAGE BASED MEASURING TECHNIQUES

    Institute of Scientific and Technical Information of China (English)

    WAN Daping; HU Dejin; WU Qi; ZHANG Yonghong

    2006-01-01

    Automatic compensation of grinding wheel wear in dry grinding is accomplished by an image based online measurement method. A PC-based charge-coupled device image recognition system is designed, which detects the topography changes of the grinding wheel surface. Profile data, which correspond to the wear and the topography, are measured by using a digital image processing method. The grinding wheel wear is evaluated by analyzing the position deviation of the grinding wheel edge. The online wear compensation is achieved according to the measurement results. The precise detection and automatic compensation system is integrated into an open structure CNC curve grinding machine. A practical application is carried out to perform precision curve grinding. The experimental results confirm the benefits of the proposed techniques, and the online detection accuracy is within 5 μm. The grinding machine provides higher precision owing to the in-process grinding wheel error compensation.

  20. On combining Laplacian and optimization-based mesh smoothing techniques

    Energy Technology Data Exchange (ETDEWEB)

    Freitag, L.A.

    1997-07-01

    Local mesh smoothing algorithms have been shown to be effective in repairing distorted elements in automatically generated meshes. The simplest such algorithm is Laplacian smoothing, which moves grid points to the geometric center of incident vertices. Unfortunately, this method operates heuristically and can create invalid meshes or elements of worse quality than those contained in the original mesh. In contrast, optimization-based methods are designed to maximize some measure of mesh quality and are very effective at eliminating extremal angles in the mesh. These improvements come at a higher computational cost, however. In this article the author proposes three smoothing techniques that combine a smart variant of Laplacian smoothing with an optimization-based approach. Several numerical experiments are performed that compare the mesh quality and computational cost for each of the methods in two and three dimensions. The author finds that the combined approaches are very cost effective and yield high-quality meshes.
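
    Plain Laplacian smoothing, the starting point these combined methods build on, is easy to state in code: every free vertex is repeatedly moved to the centroid of its incident vertices. The tiny mesh below is illustrative, and no optimization-based quality step is included in this sketch.

        import numpy as np

        def laplacian_smooth(points, neighbours, n_iter=10, keep=()):
            # Move each free vertex to the centroid of its incident vertices;
            # vertices listed in 'keep' (e.g. boundary nodes) stay fixed.
            pts = points.copy()
            for _ in range(n_iter):
                new = pts.copy()
                for v, nbrs in neighbours.items():
                    if v in keep or not nbrs:
                        continue
                    new[v] = pts[list(nbrs)].mean(axis=0)
                pts = new
            return pts

        # Tiny illustrative mesh: four fixed corner nodes and one distorted interior node
        points = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0.9, 0.85]], dtype=float)
        neighbours = {4: [0, 1, 2, 3]}
        print(laplacian_smooth(points, neighbours, keep=(0, 1, 2, 3))[4])   # -> [0.5, 0.5]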

  1. Model-Based Methods for Fault Diagnosis: Some Guide-Lines

    DEFF Research Database (Denmark)

    Patton, R.J.; Chen, J.; Nielsen, S.B.

    1995-01-01

    This paper provides a review of model-based fault diagnosis techniques. Starting from basic principles, the properties....

  2. Variational Data Assimilation Technique in Mathematical Modeling of Ocean Dynamics

    Science.gov (United States)

    Agoshkov, V. I.; Zalesny, V. B.

    2012-03-01

    Problems of the variational data assimilation for the primitive equation ocean model constructed at the Institute of Numerical Mathematics, Russian Academy of Sciences are considered. The model has a flexible computational structure and consists of two parts: a forward prognostic model, and its adjoint analog. The numerical algorithm for the forward and adjoint models is constructed based on the method of multicomponent splitting. The method includes splitting with respect to physical processes and space coordinates. Numerical experiments are performed with the use of the Indian Ocean and the World Ocean as examples. These numerical examples support the theoretical conclusions and demonstrate the rationality of the approach using an ocean dynamics model with an observed data assimilation procedure.
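
    A drastically simplified sketch of the variational idea (a small 3D-Var problem, not the INM ocean model) is shown below: the analysis state minimises a cost function that penalises both distance from the background state and misfit to the observations, and the adjoint of the observation operator enters through the gradient. All matrices and values are illustrative assumptions.

        import numpy as np
        from scipy.optimize import minimize

        n, m = 4, 3
        xb = np.array([1.0, 0.5, -0.2, 0.8])                 # background (prior) state
        B = 0.5 * np.eye(n)                                   # background error covariance
        H = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1]], dtype=float)
        R = 0.1 * np.eye(m)                                   # observation error covariance
        y = np.array([1.3, 0.2, 1.1])                         # observations

        Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)

        def cost(x):
            db, do = x - xb, y - H @ x
            return 0.5 * db @ Binv @ db + 0.5 * do @ Rinv @ do

        def grad(x):                  # the adjoint of H appears here as H.T
            return Binv @ (x - xb) - H.T @ Rinv @ (y - H @ x)

        xa = minimize(cost, xb, jac=grad, method="L-BFGS-B").x
        print("analysis state:", xa)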

  3. Updates on measurements and modeling techniques for expendable countermeasures

    Science.gov (United States)

    Gignilliat, Robert; Tepfer, Kathleen; Wilson, Rebekah F.; Taczak, Thomas M.

    2016-10-01

    The potential threat of recently-advertised anti-ship missiles has instigated research at the United States (US) Naval Research Laboratory (NRL) into the improvement of measurement techniques for visual band countermeasures. The goal of measurements is the collection of radiometric imagery for use in the building and validation of digital models of expendable countermeasures. This paper will present an overview of measurement requirements unique to the visual band and differences between visual band and infrared (IR) band measurements. A review of the metrics used to characterize signatures in the visible band will be presented and contrasted to those commonly used in IR band measurements. For example, the visual band measurements require higher fidelity characterization of the background, including improved high-transmittance measurements and better characterization of solar conditions to correlate results more closely with changes in the environment. The range of relevant engagement angles has also been expanded to include higher altitude measurements of targets and countermeasures. In addition to the discussion of measurement techniques, a top-level qualitative summary of modeling approaches will be presented. No quantitative results or data will be presented.

  4. Improving predictive mapping of deep-water habitats: Considering multiple model outputs and ensemble techniques

    Science.gov (United States)

    Robert, Katleen; Jones, Daniel O. B.; Roberts, J. Murray; Huvenne, Veerle A. I.

    2016-07-01

    In the deep sea, biological data are often sparse; hence models capturing relationships between observed fauna and environmental variables (acquired via acoustic mapping techniques) are often used to produce full coverage species assemblage maps. Many statistical modelling techniques are being developed, but there remains a need to determine the most appropriate mapping techniques. Predictive habitat modelling approaches (redundancy analysis, maximum entropy and random forest) were applied to a heterogeneous section of seabed on Rockall Bank, NE Atlantic, for which landscape indices describing the spatial arrangement of habitat patches were calculated. The predictive maps were based on remotely operated vehicle (ROV) imagery transects and high-resolution autonomous underwater vehicle (AUV) sidescan backscatter maps. Area under the curve (AUC) and accuracy indicated similar performances for the three models tested, but performance varied by species assemblage, with the transitional species assemblage showing the weakest predictive performance. Spatial predictions of habitat suitability differed between statistical approaches, but niche similarity metrics showed redundancy analysis and random forest predictions to be most similar. As one statistical technique could not be found to outperform the others when all assemblages were considered, ensemble mapping techniques, where the outputs of many models are combined, were applied. They showed higher accuracy than any single model. Different statistical approaches for predictive habitat modelling possess varied strengths and weaknesses and by examining the outputs of a range of modelling techniques and their differences, more robust predictions, with better described variation and areas of uncertainty, can be achieved. As improvements to prediction outputs can be achieved without additional costly data collection, ensemble mapping approaches have clear value for spatial management.
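
    The ensemble step itself is straightforward to sketch: fit several different suitability models and combine their predicted probabilities, here by a simple unweighted average. The predictors, labels and model choices below are synthetic stand-ins, not the study's data or its exact model set.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 4))              # e.g. depth, slope, backscatter, rugosity
        y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=300) > 0).astype(int)

        models = [RandomForestClassifier(n_estimators=200, random_state=0),
                  LogisticRegression(max_iter=1000)]
        for m in models:
            m.fit(X, y)

        X_new = rng.normal(size=(5, 4))            # unsampled grid cells
        ensemble = np.mean([m.predict_proba(X_new)[:, 1] for m in models], axis=0)
        print(ensemble)                            # averaged habitat-suitability scores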

  5. Hash Based Least Significant Bit Technique For Video Steganography

    Directory of Open Access Journals (Sweden)

    Prof. Dr. P. R. Deshmukh ,

    2014-01-01

    Full Text Available The hash based least significant bit technique for video steganography deals with hiding a secret message or information within a video. Steganography is covered writing: it includes processes that conceal information within other data and also conceal the fact that a secret message is being sent. Steganography is the art of secret communication or the science of invisible communication. In this paper a hash based least significant bit (LSB) technique for video steganography has been proposed whose main goal is to embed secret information in a particular video file and then extract it using a stego key or password. The LSB insertion method is used for steganography so as to embed data in the cover video by changing only the lowest bit; this LSB insertion is not visible. Data hiding is the process of embedding information in a video without changing its perceptual quality. The proposed method involves two measures, the Peak Signal to Noise Ratio (PSNR) and the Mean Square Error (MSE). These two measures are computed between the original video files and the steganographic video files over all video frames, where distortion is measured using PSNR. A hash function is used to select the particular positions for insertion of the bits of the secret message in the LSB bits.
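
    The embedding step can be sketched in a few lines: message bits overwrite the least significant bit of pixels at key-dependent positions. In this sketch a keyed pseudo-random permutation stands in for the paper's hash-based position selection, the frame is synthetic, and the PSNR line illustrates the quality measure mentioned above.

        import numpy as np

        def embed_lsb(frame, message, key=1234):
            # Hide 'message' (bytes) in the least significant bits of a frame.
            flat = frame.flatten()
            bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
            positions = np.random.default_rng(key).permutation(flat.size)[:bits.size]
            flat[positions] = (flat[positions] & 0xFE) | bits    # overwrite only the LSB
            return flat.reshape(frame.shape), positions

        def extract_lsb(frame, positions):
            bits = frame.flatten()[positions] & 1
            return np.packbits(bits).tobytes()

        frame = np.random.default_rng(0).integers(0, 256, (64, 64), dtype=np.uint8)
        stego, pos = embed_lsb(frame.copy(), b"secret")
        print(extract_lsb(stego, pos))                            # b'secret'

        mse = np.mean((frame.astype(float) - stego.astype(float)) ** 2)
        print("PSNR (dB):", 10 * np.log10(255.0 ** 2 / mse))      # distortion is barely measurable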

  6. Kerf modelling in abrasive waterjet milling using evolutionary computation and ANOVA techniques

    Science.gov (United States)

    Alberdi, A.; Rivero, A.; Carrascal, A.; Lamikiz, A.

    2012-04-01

    Many researchers demonstrated the capability of Abrasive Waterjet (AWJ) technology for precision milling operations. However, the concurrence of several input parameters along with the stochastic nature of this technology leads to a complex process control, which requires a work focused in process modelling. This research work introduces a model to predict the kerf shape in AWJ slot milling in Aluminium 7075-T651 in terms of four important process parameters: the pressure, the abrasive flow rate, the stand-off distance and the traverse feed rate. A hybrid evolutionary approach was employed for kerf shape modelling. This technique allowed characterizing the profile through two parameters: the maximum cutting depth and the full width at half maximum. On the other hand, based on ANOVA and regression techniques, these two parameters were also modelled as a function of process parameters. Combination of both models resulted in an adequate strategy to predict the kerf shape for different machining conditions.

  7. Deformable surface modeling based on dual subdivision

    Institute of Scientific and Technical Information of China (English)

    WANG Huawei; SUN Hanqiu; QIN Kaihuai

    2005-01-01

    Based on dual Doo-Sabin subdivision and the corresponding parameterization, a modeling technique of deformable surfaces is presented in this paper. In the proposed model, all the dynamic parameters are computed in a unified way for both non-defective and defective subdivision matrices, and central differences are used to discretize the Lagrangian dynamics equation instead of backward differences. Moreover, a local scheme is developed to solve the dynamics equation approximately, thus the order of the linear equation is reduced greatly. Therefore, the proposed model is more efficient and faster than the existing dynamic models. It can be used for deformable surface design, interactive surface editing, medical imaging and simulation.

  8. Model Construct Based Enterprise Model Architecture and Its Modeling Approach

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    In order to support enterprise integration, a model construct based enterprise model architecture and its modeling approach are studied in this paper. First, the structural makeup and internal relationships of the enterprise model architecture are discussed. Then, the concept of the reusable model construct (MC), which belongs to the control view and can help to derive other views, is proposed. The modeling approach based on model constructs consists of three steps: reference model architecture synthesis, enterprise model customization, and system design and implementation. Following the MC based modeling approach, a case study with the background of one-kind-product machinery manufacturing enterprises is illustrated. It is shown that the proposed model construct based enterprise model architecture and modeling approach are practical and efficient.

  9. Controller Design of DFIG Based Wind Turbine by Using Evolutionary Soft Computational Techniques

    Directory of Open Access Journals (Sweden)

    O. P. Bharti

    2017-06-01

    Full Text Available This manuscript illustrates the controller design for a doubly fed induction generator (DFIG) based variable speed wind turbine by using a bio-inspired scheme. The methodology exploits two proficient swarm intelligence based evolutionary soft computational procedures. The particle swarm optimization (PSO) and bacterial foraging optimization (BFO) techniques are employed to design the controller intended for the small damping plant of the DFIG. A wind energy overview and the DFIG operating principle, along with the equivalent circuit model, are adequately discussed in this paper. The controller designs for the DFIG based WECS using PSO and BFO are described comparatively in detail. The responses of the DFIG system regarding terminal voltage, current, active and reactive power, and DC-link voltage are slightly improved with the evolutionary soft computational procedures. Lastly, the obtained output is compared with a standard technique for performance improvement of the DFIG based wind energy conversion system.

  10. Evaluation of mesh morphing and mapping techniques in patient specific modelling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2012-08-01

    Robust generation of pelvic finite element models is necessary to understand variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan generated target surface of a second pelvis using a landmarked-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity-based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis, and their strain distributions were evaluated. Morphing and mapping techniques were effectively applied to generate good quality and geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model.

  11. Study on modeling of vehicle dynamic stability and control technique

    Institute of Scientific and Technical Information of China (English)

    GAO Yun-ting; LI Pan-feng

    2012-01-01

    In order to solve the problem of enhancing vehicle driving stability and safety, which has been a topic of intense research in the vehicle industry, a new control method was investigated. After analysis of the tire motion characteristics and the forces acting on the vehicle, a tire model based on the extended Pacejka magic formula, which combines longitudinal and lateral motion, was developed, and a nonlinear vehicle dynamic stability model with seven degrees of freedom was built. A new model reference adaptive control scheme was designed which takes the slip angle and yaw rate of the vehicle body as the output and feedback variables and adjusts the torque of the vehicle body to control vehicle stability. A simulation model was also built in Matlab/Simulink to evaluate this control scheme. It is made up of many mathematical subsystem models, mainly including the tire model module, the yaw moment calculation module, the center of mass parameter calculation module, the tire parameter calculation module, and so forth. The severe lane change simulation results show that this vehicle model and the model reference adaptive control method perform excellently.

  12. HMM-based Trust Model

    DEFF Research Database (Denmark)

    ElSalamouny, Ehab; Nielsen, Mogens; Sassone, Vladimiro

    2010-01-01

    with their dynamic behaviour. Using Hidden Markov Models (HMMs) for both modelling and approximating the behaviours of principals, we introduce the HMM-based trust model as a new approach to evaluating trust in systems exhibiting dynamic behaviour. This model avoids the fixed behaviour assumption which is considered...... the major limitation of existing Beta trust model. We show the consistency of the HMM-based trust model and contrast it against the well known Beta trust model with the decay principle in terms of the estimation precision....

  13. An interactive tutorial-based training technique for vertebral morphometry.

    Science.gov (United States)

    Gardner, J C; von Ingersleben, G; Heyano, S L; Chesnut, C H

    2001-01-01

    The purpose of this work was to develop a computer-based procedure for training technologists in vertebral morphometry. The utility of the resulting interactive, tutorial based training method was evaluated in this study. The training program was composed of four steps: (1) review of an online tutorial, (2) review of analyzed spine images, (3) practice in fiducial point placement and (4) testing. During testing, vertebral heights were measured from digital, lateral spine images containing osteoporotic fractures. Inter-observer measurement precision was compared between research technicians, and between technologists and radiologist. The technologists participating in this study had no prior experience in vertebral morphometry. Following completion of the online training program, good inter-observer measurement precision was seen between technologists, showing mean coefficients of variation of 2.33% for anterior, 2.87% for central and 2.65% for posterior vertebral heights. Comparisons between the technicians and radiologist ranged from 2.19% to 3.18%. Slightly better precision values were seen with height measurements compared with height ratios, and with unfractured compared with fractured vertebral bodies. The findings of this study indicate that self-directed, tutorial-based training for spine image analyses is effective, resulting in good inter-observer measurement precision. The interactive tutorial-based approach provides standardized training methods and assures consistency of instructional technique over time.

  14. Enhancing the effectiveness of IST through risk-based techniques

    Energy Technology Data Exchange (ETDEWEB)

    Floyd, S.D.

    1996-12-01

    Current IST requirements were developed mainly through deterministic-based methods. While this approach has resulted in an adequate level of safety and reliability for pumps and valves, insights from probabilistic safety assessments suggest a better safety focus can be achieved at lower costs. That is, some high safety impact pumps and valves are currently not tested under the IST program and should be added, while low safety impact valves could be tested at significantly greater intervals than allowed by the current IST program. The nuclear utility industry, through the Nuclear Energy Institute (NEI), has developed a draft guideline for applying risk-based techniques to focus testing on those pumps and valves with a high safety impact while reducing test frequencies on low safety impact pumps and valves. The guideline is being validated through an industry pilot application program that is being reviewed by the U.S. Nuclear Regulatory Commission. NEI and the ASME maintain a dialogue on the two groups' activities related to risk-based IST. The presenter will provide an overview of the NEI guideline, discuss the methodological approach for applying risk-based technology to IST and provide the status of the industry pilot plant effort.

  15. Model-Based Reasoning

    Science.gov (United States)

    Ifenthaler, Dirk; Seel, Norbert M.

    2013-01-01

    In this paper, there will be a particular focus on mental models and their application to inductive reasoning within the realm of instruction. A basic assumption of this study is the observation that the construction of mental models and related reasoning is a slowly developing capability of cognitive systems that emerges effectively with proper…

  16. Application of Discrete Fracture Modeling and Upscaling Techniques to Complex Fractured Reservoirs

    Science.gov (United States)

    Karimi-Fard, M.; Lapene, A.; Pauget, L.

    2012-12-01

    During the last decade, an important effort has been made to improve data acquisition (seismic and borehole imaging) and workflow for reservoir characterization which has greatly benefited the description of fractured reservoirs. However, the geological models resulting from the interpretations need to be validated or calibrated against dynamic data. Flow modeling in fractured reservoirs remains a challenge due to the difficulty of representing mass transfers at different heterogeneity scales. The majority of the existing approaches are based on dual continuum representation where the fracture network and the matrix are represented separately and their interactions are modeled using transfer functions. These models are usually based on idealized representation of the fracture distribution which makes the integration of real data difficult. In recent years, due to increases in computer power, discrete fracture modeling techniques (DFM) are becoming popular. In these techniques the fractures are represented explicitly allowing the direct use of data. In this work we consider the DFM technique developed by Karimi-Fard et al. [1] which is based on an unstructured finite-volume discretization. The mass flux between two adjacent control-volumes is evaluated using an optimized two-point flux approximation. The result of the discretization is a list of control-volumes with the associated pore-volumes and positions, and a list of connections with the associated transmissibilities. Fracture intersections are simplified using a connectivity transformation which contributes considerably to the efficiency of the methodology. In addition, the method is designed for general purpose simulators and any connectivity based simulator can be used for flow simulations. The DFM technique is either used standalone or as part of an upscaling technique. The upscaling techniques are required for large reservoirs where the explicit representation of all fractures and faults is not possible

  17. A Hybrid Model for the Mid-Long Term Runoff Forecasting by Evolutionary Computation Techniques

    Institute of Scientific and Technical Information of China (English)

    Zou Xiu-fen; Kang Li-shan; Cae Hong-qing; Wu Zhi-jian

    2003-01-01

    The mid-long term hydrology forecasting is one of the most challenging problems in hydrological studies. This paper proposes an efficient dynamical system prediction model using evolutionary computation techniques. The new model overcomes some disadvantages of conventional hydrology forecasting models. The observed data are divided into two parts: the slow "smooth and steady" data, and the fast "coarse and fluctuating" data. Under the divide and conquer strategy, the behavior of the smooth data is modeled by ordinary differential equations based on evolutionary modeling, and that of the coarse data is modeled using a gray correlative forecasting method. Our model is verified on the test data of the mid-long term hydrology forecast in the northeast region of China. The experimental results show that the model is superior to the gray system prediction model (GSPM).

  18. Liquid propellant analogy technique in dynamic modeling of launch vehicle

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The coupling effects among the lateral, longitudinal and torsional modes of a launch vehicle cannot be taken into account in traditional dynamic analysis using a lateral beam model and a longitudinal spring-mass model individually. To deal with the problem, propellant analogy methods based on a beam model are proposed and the coupled mass matrix of the liquid propellant is constructed through additional mass in the present study. Then an integrated model of the launch vehicle for free vibration analysis is established, by which research on the interactions between the longitudinal and lateral modes, and between the longitudinal and torsional modes, of the launch vehicle can be carried out. Numerical examples for tandem tanks validate the present method and demonstrate its necessity.

  19. Evaluation of dynamical models: dissipative synchronization and other techniques.

    Science.gov (United States)

    Aguirre, Luis Antonio; Furtado, Edgar Campos; Tôrres, Leonardo A B

    2006-12-01

    Some recent developments for the validation of nonlinear models built from data are reviewed. Besides giving an overall view of the field, a procedure is proposed and investigated based on the concept of dissipative synchronization between the data and the model, which is very useful in validating models that should reproduce dominant dynamical features, like bifurcations, of the original system. In order to assess the discriminating power of the procedure, four well-known benchmarks have been used: namely, Duffing-Ueda, Duffing-Holmes, and van der Pol oscillators, plus the Hénon map. The procedure, developed for discrete-time systems, is focused on the dynamical properties of the model, rather than on statistical issues. For all the systems investigated, it is shown that the discriminating power of the procedure is similar to that of bifurcation diagrams--which in turn is much greater than, say, that of correlation dimension--but at a much lower computational cost.

  20. Evaluation of dynamical models: Dissipative synchronization and other techniques

    Science.gov (United States)

    Aguirre, Luis Antonio; Furtado, Edgar Campos; Tôrres, Leonardo A. B.

    2006-12-01

    Some recent developments for the validation of nonlinear models built from data are reviewed. Besides giving an overall view of the field, a procedure is proposed and investigated based on the concept of dissipative synchronization between the data and the model, which is very useful in validating models that should reproduce dominant dynamical features, like bifurcations, of the original system. In order to assess the discriminating power of the procedure, four well-known benchmarks have been used: namely, Duffing-Ueda, Duffing-Holmes, and van der Pol oscillators, plus the Hénon map. The procedure, developed for discrete-time systems, is focused on the dynamical properties of the model, rather than on statistical issues. For all the systems investigated, it is shown that the discriminating power of the procedure is similar to that of bifurcation diagrams—which in turn is much greater than, say, that of correlation dimension—but at a much lower computational cost.

  1. Model-based Software Engineering

    DEFF Research Database (Denmark)

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...

  2. Model-based Software Engineering

    DEFF Research Database (Denmark)

    2010-01-01

    The vision of model-based software engineering is to make models the main focus of software development and to automatically generate software from these models. Part of that idea works already today. But, there are still difficulties when it comes to behaviour. Actually, there is no lack in models...

  3. Classification of the Regional Ionospheric Disturbance Based on Machine Learning Techniques

    Science.gov (United States)

    Terzi, Merve Begum; Arikan, Orhan; Karatay, Secil; Arikan, Feza; Gulyaeva, Tamara

    2016-08-01

    In this study, Total Electron Content (TEC) estimated from GPS receivers is used to model the regional and local variability that differs from global activity along with solar and geomagnetic indices. For the automated classification of regional disturbances, a classification technique based on a robust machine learning method that has found widespread use, the Support Vector Machine (SVM), is proposed. The performance of the developed classification technique is demonstrated for the midlatitude ionosphere over Anatolia using TEC estimates generated from GPS data provided by the Turkish National Permanent GPS Network (TNPGN-Active) for the solar maximum year of 2011. By applying the developed classification technique to Global Ionospheric Map (GIM) TEC data, which is provided by the NASA Jet Propulsion Laboratory (JPL), it is shown that SVM can be a suitable learning method to detect anomalies in TEC variations.
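
    A minimal sketch of the SVM classification step is given below, using scikit-learn on synthetic stand-in features; the actual TEC feature extraction from TNPGN-Active and GIM data described above is not reproduced.

      # Toy SVM classifier for "quiet" vs. "disturbed" days; features are synthetic.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(0)
      n = 400
      X = rng.normal(size=(n, 2))   # hypothetical features, e.g. TEC deviation and its variance
      y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
      clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
      clf.fit(X_tr, y_tr)
      print("test accuracy:", clf.score(X_te, y_te))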

  4. Satellite communication performance evaluation: Computational techniques based on moments

    Science.gov (United States)

    Omura, J. K.; Simon, M. K.

    1980-01-01

    Computational techniques that efficiently compute bit error probabilities when only moments of the various interference random variables are available are presented. The approach taken is a generalization of the well-known Gauss-Quadrature rules used for numerically evaluating single or multiple integrals. In what follows, the basic algorithms are developed, some of their properties and generalizations are shown, and their many potential applications are described. Some typical interference scenarios for which the results are particularly applicable include: intentional jamming, adjacent and cochannel interferences; radar pulses (RFI); multipath; and intersymbol interference. While the examples presented stress evaluation of bit error probabilities in uncoded digital communication systems, the moment techniques can also be applied to the evaluation of other parameters, such as computational cutoff rate under both normal and mismatched receiver cases in coded systems. Another important application is the determination of the probability distributions of the output of a discrete time dynamical system. This type of model occurs widely in control systems, queueing systems, and synchronization systems (e.g., discrete phase locked loops).
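
    The core of such moment techniques can be sketched as follows: a Gauss quadrature rule is built from the known moments of the interference variable (here via the classical Golub-Welsch construction), and the bit error probability is then approximated as a weighted sum of Q-function values at the quadrature nodes. The moment values and signal amplitude below are illustrative, not taken from the reference.

      # Sketch: Gauss quadrature rule from moments, applied to an averaged Q-function.
      import numpy as np
      from scipy.stats import norm

      def gauss_from_moments(m):
          """n-point Gauss rule (nodes, weights) from moments m[0..2n] (Golub-Welsch)."""
          n = (len(m) - 1) // 2
          H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])  # Hankel moment matrix
          R = np.linalg.cholesky(H).T            # upper-triangular factor, H = R.T @ R
          alpha, beta, prev = np.zeros(n), np.zeros(max(n - 1, 0)), 0.0
          for k in range(n):
              cur = R[k, k + 1] / R[k, k]
              alpha[k] = cur - prev
              prev = cur
              if k < n - 1:
                  beta[k] = R[k + 1, k + 1] / R[k, k]
          J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)   # Jacobi matrix
          nodes, vecs = np.linalg.eigh(J)
          return nodes, m[0] * vecs[0, :] ** 2

      # moments of a uniform(-0.3, 0.3) interference term (m_k = E[X^k], k = 0..6)
      a = 0.3
      moments = [1.0, 0.0, a**2 / 3, 0.0, a**4 / 5, 0.0, a**6 / 7]
      nodes, weights = gauss_from_moments(moments)

      amp = 2.0                                   # assumed sqrt(2*Eb/N0) for BPSK
      pe = sum(w * norm.sf(amp * (1.0 + x)) for x, w in zip(nodes, weights))
      print(f"approximate bit error probability: {pe:.3e}")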

  5. Use of machine learning techniques for modeling of snow depth

    Directory of Open Access Journals (Sweden)

    G. V. Ayzel

    2017-01-01

    Full Text Available Snow exerts a significant regulating effect on the land hydrological cycle since it controls the intensity of heat and water exchange between the soil-vegetative cover and the atmosphere. Estimating a spring flood runoff or a rain-flood on mountainous rivers requires understanding of the snow cover dynamics on a watershed. In our work, solving the problem of snow cover depth modeling is based on both available databases of hydro-meteorological observations and easily accessible scientific software that allows complete reproduction of investigation results and further development of this theme by the scientific community. In this research we used the daily observational data on the snow cover and surface meteorological parameters, obtained at three stations situated in different geographical regions: Col de Porte (France), Sodankyla (Finland), and Snoquamie Pass (USA). Statistical modeling of the snow cover depth is based on a set of freely distributed, present-day machine learning models: Decision Trees, Adaptive Boosting, Gradient Boosting. It is demonstrated that the combination of modern machine learning methods with available meteorological data provides good accuracy of snow cover modeling. The best results of snow cover depth modeling for every investigated site were obtained by the ensemble method of gradient boosting over decision trees: this model reproduces well both the periods of snow cover accumulation and its melting. The purposeful character of the learning process for models of the gradient boosting type, their ensemble character, and the use of the combined redundancy of a test sample in the learning procedure make this type of model a good and sustainable research tool. The results obtained can be used for estimating the snow cover characteristics for river basins where hydro-meteorological information is absent or insufficient.
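
    A compact sketch of the gradient-boosting-over-decision-trees setup is shown below with scikit-learn; the meteorological features and the toy snow-depth target are invented placeholders, not the station data listed in the record.

      # Toy gradient boosting regressor for daily snow depth from meteorological inputs.
      import numpy as np
      from sklearn.ensemble import GradientBoostingRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_squared_error

      rng = np.random.default_rng(1)
      n = 1000
      air_temp = rng.normal(-5, 8, n)          # daily mean air temperature, deg C
      precip = rng.gamma(2.0, 2.0, n)          # daily precipitation, mm
      doy = rng.integers(1, 366, n)            # day of year
      X = np.column_stack([air_temp, precip, doy])
      # toy target: accumulation when cold, melt when warm
      depth = np.clip(50 - 2.0 * air_temp + 0.5 * precip + rng.normal(0, 5, n), 0, None)

      X_tr, X_te, y_tr, y_te = train_test_split(X, depth, test_size=0.3, random_state=1)
      model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
      model.fit(X_tr, y_tr)
      rmse = mean_squared_error(y_te, model.predict(X_te)) ** 0.5
      print(f"hold-out RMSE: {rmse:.1f} cm")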

  6. Principles of models based engineering

    Energy Technology Data Exchange (ETDEWEB)

    Dolin, R.M.; Hefele, J.

    1996-11-01

    This report describes a Models Based Engineering (MBE) philosophy and implementation strategy that has been developed at Los Alamos National Laboratory's Center for Advanced Engineering Technology. A major theme in this discussion is that models based engineering is an information management technology enabling the development of information driven engineering. Unlike other information management technologies, models based engineering encompasses the breadth of engineering information, from design intent through product definition to consumer application.

  7. PDMS microchannel fabrication technique based on microwire-molding

    Institute of Scientific and Technical Information of China (English)

    JIA YueFei; JIANG JiaHuan; MA XiaoDong; LI Yuan; HUANG HeMing; CAI KunBao; CAI ShaoXi; WU YunPeng

    2008-01-01

    The micro-flow channel is a basic functional component of a microfluidic chip, and every advance in its construction technique receives worldwide attention. This article presents an uncomplicated but flexible method for the fabrication of micro-flow channels. The method mainly utilizes the conventional molding capability of polydimethylsiloxane (PDMS) and widespread commercial microwires as templates. We have fabricated some conventional types of microchannels with different topological shapes, as examples demonstrating this flexible fabrication route, which does not depend on the stringent demands of photolithographic or microelectromechanical system (MEMS) techniques. The smooth surface, high strength, and high flexibility of the wires made it possible to create many types of topological structures of two-dimensional or three-dimensional microchannels or channel arrays. The geometric shape of the cross-section of the resulting microchannel in PDMS was the negative of that of the embedded microwire, with high fidelity if suitable measures were taken. Moreover, such a microchannel fabrication process can easily exploit the conductivity and low resistivity of the metal wire to create micro-flow devices that are suitable for the electromagnetic control of liquid or for temperature regulation in the microchannel. Furthermore, some preliminary optical analysis was provided for the observation of the resulting rounded microchannel. Based on this molding strategy, we even made some prototypes for functional microflow applications, such as microsolenoid chips and temperature control gadgets. An experiment of forming a droplet in the cross channel further confirmed the feasibility and applicability of this flexible microchannel forming technique.

  8. Intelligent model-based OPC

    Science.gov (United States)

    Huang, W. C.; Lai, C. M.; Luo, B.; Tsai, C. K.; Chih, M. H.; Lai, C. W.; Kuo, C. C.; Liu, R. G.; Lin, H. T.

    2006-03-01

    Optical proximity correction is the technique of pre-distorting mask layouts so that the printed patterns are as close to the desired shapes as possible. For model-based optical proximity correction, a lithographic model to predict the edge position (contour) of patterns on the wafer after lithographic processing is needed. Generally, segmentation of edges is performed prior to the correction. Pattern edges are dissected into several small segments with corresponding target points. During the correction, the edges are moved back and forth from the initial drawn position, assisted by the lithographic model, to finally settle on the proper positions. When the correction converges, the intensity predicted by the model at every target point hits the model-specific threshold value. Several iterations are required to achieve convergence, and the computation time increases with the number of required iterations. An artificial neural network is an information-processing paradigm inspired by biological nervous systems, such as the way the brain processes information. It is composed of a large number of highly interconnected processing elements (neurons) working in unison to solve specific problems. A neural network can be a powerful data-modeling tool that is able to capture and represent complex input/output relationships. The network can accurately predict the behavior of a system via the learning procedure. A radial basis function network, a variant of the artificial neural network, is an efficient function approximator. In this paper, a radial basis function network was used to build a mapping from the segment characteristics to the edge shift from the drawn position. This network can provide a good initial guess for each segment on which OPC is carried out. The good initial guess reduces the required iterations. Consequently, cycle time can be shortened effectively. The optimization of the radial basis function network for this system was practiced by a genetic algorithm
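
    The mapping described above can be pictured with the small radial basis function regression below: Gaussian basis functions centred on training samples are combined with least-squares output weights to predict an edge shift from segment features. The feature set and training data are invented for illustration and are not the production OPC model.

      # Toy RBF network: segment features -> initial edge-shift guess.
      import numpy as np

      rng = np.random.default_rng(2)
      X = rng.uniform(0.0, 1.0, size=(200, 2))    # hypothetical features (e.g. local density, spacing)
      shift = 10.0 * X[:, 0] - 5.0 * X[:, 1] + rng.normal(0, 0.5, 200)   # toy edge shifts (nm)

      centers = X[rng.choice(len(X), 20, replace=False)]   # RBF centres taken from the data
      sigma = 0.2

      def design(Xq):
          # Gaussian RBF activations of each query point with respect to each centre
          d2 = ((Xq[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2.0 * sigma ** 2))

      w, *_ = np.linalg.lstsq(design(X), shift, rcond=None)   # linear output weights
      new_segment = np.array([[0.5, 0.3]])
      print("initial edge-shift guess:", design(new_segment) @ w)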

  9. RESEARCH AND IMPLEMENTATION OF MODEL CHECKING OF PETRI NET BASED ON ON-THE-FLY TECHNIQUE

    Institute of Scientific and Technical Information of China (English)

    沈云付; 解晓方

    2011-01-01

    Petri nets are a very widely used modeling tool: they can describe control systems concisely and precisely and, in particular, describe the structure of concurrent systems well, while also supporting analysis of a system's dynamic properties. In this paper, based on a discussion of model checking for Petri nets, we adopt a double-DFS algorithm to improve the Petri-net-based model checking algorithm and put forward an on-the-fly algorithm tailored to Petri nets. We also present the implementation and testing of on-the-fly model checking for Petri nets, so that system models expressed as Petri nets can be model checked effectively.

  10. Element-Based Computational Model

    Directory of Open Access Journals (Sweden)

    Conrad Mueller

    2012-02-01

    Full Text Available A variation on the data-flow model is proposed for developing parallel architectures. While the model is a data driven model, it has significant differences from the data-flow model. The proposed model has an evaluation cycle of processing elements (encapsulated data) that is similar to the instruction cycle of the von Neumann model. The elements contain the information required to process them. The model is inherently parallel. An emulation of the model has been implemented. The objective of this paper is to motivate support for taking the research further. Using matrix multiplication as a case study, the element/data-flow based model is compared with the instruction-based model. This is done using complexity analysis followed by empirical testing to verify this analysis. The positive results are given as motivation for the research to be taken to the next stage - that is, implementing the model using FPGAs.

  11. Leveraging model-based study designs and serial micro-sampling techniques to understand the oral pharmacokinetics of the potent LTB4 inhibitor, CP-105696, for mouse pharmacology studies.

    Science.gov (United States)

    Spilker, Mary E; Chung, Heekyung; Visswanathan, Ravi; Bagrodia, Shubha; Gernhardt, Steven; Fantin, Valeria R; Ellies, Lesley G

    2017-07-01

    1. Leukotriene B4 (LTB4) is a proinflammatory mediator important in the progression of a number of inflammatory diseases. Preclinical models can explore the role of LTB4 in pathophysiology using tool compounds, such as CP-105696, that modulate its activity. To support preclinical pharmacology studies, micro-sampling techniques and mathematical modeling were used to determine the pharmacokinetics of CP-105696 in mice within the context of systemic inflammation induced by a high-fat diet (HFD). 2. Following oral administration of doses > 35 mg/kg, CP-105696 kinetics can be described by a one-compartment model with first order absorption. The compound's half-life is 44-62 h with an apparent volume of distribution of 0.51-0.72 L/kg. Exposures in animals fed an HFD are within 2-fold of those fed a normal chow diet. Daily dosing at 100 mg/kg was not tolerated and resulted in a >20% weight loss in the mice. 3. CP-105696's long half-life has the potential to support a twice weekly dosing schedule. Given that most chronic inflammatory diseases will require long-term therapies, these results are useful in determining the optimal dosing schedules for preclinical studies using CP-105696.
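
    For reference, the concentration-time profile of such a one-compartment model with first-order absorption can be written out directly; the sketch below uses parameter values in the ranges reported in the record, while the bioavailability and absorption rate constant are assumptions made only for illustration.

      # One-compartment oral PK model: C(t) = F*D*ka / (V*(ka - ke)) * (exp(-ke*t) - exp(-ka*t))
      import numpy as np

      dose = 35.0               # mg/kg, lowest dose level mentioned in the record
      F = 1.0                   # assumed oral bioavailability
      V = 0.6                   # L/kg, mid-range of the reported 0.51-0.72 L/kg
      t_half = 50.0             # h, within the reported 44-62 h
      ke = np.log(2) / t_half   # first-order elimination rate constant
      ka = 1.0                  # 1/h, assumed first-order absorption rate constant

      t = np.linspace(0.0, 168.0, 337)   # one week in hours
      C = F * dose * ka / (V * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))
      print(f"Cmax ~ {C.max():.1f} mg/L at t ~ {t[C.argmax()]:.1f} h")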

  12. Filling-Based Techniques Applied to Object Projection Feature Estimation

    CERN Document Server

    Quesada, Luis

    2012-01-01

    3D motion tracking is a critical task in many computer vision applications. Unsupervised markerless 3D motion tracking systems determine the most relevant object in the screen and then track it by continuously estimating its projection features (center and area) from the edge image and a point inside the relevant object projection (namely, inner point), until the tracking fails. Existing object projection feature estimation techniques are based on ray-casting from the inner point. These techniques present three main drawbacks: when the inner point is surrounded by edges, rays may not reach other relevant areas; as a consequence of that issue, the estimated features may greatly vary depending on the position of the inner point relative to the object projection; and finally, increasing the number of rays being cast and the ray-casting iterations (which would make the results more accurate and stable) increases the processing time to the point the tracking cannot be performed on the fly. In this paper, we anal...

  13. Investigations on landmine detection by neutron-based techniques.

    Science.gov (United States)

    Csikai, J; Dóczi, R; Király, B

    2004-07-01

    Principles and techniques of some neutron-based methods used to identify the antipersonnel landmines (APMs) are discussed. New results have been achieved in the field of neutron reflection, transmission, scattering and reaction techniques. Some conclusions are as follows: The neutron hand-held detector is suitable for the observation of anomaly caused by a DLM2-like sample in different soils with a scanning speed of 1 m²/1.5 min; the reflection cross section of thermal neutrons rendered the determination of equivalent thickness of different soil components possible; a simple method was developed for the determination of the thermal neutron flux perturbation factor needed for multi-elemental analysis of bulky samples; unfolded spectra of elastically backscattered neutrons using broad-spectrum sources render the identification of APMs possible; the knowledge of leakage spectra of different source neutrons is indispensable for the determination of the differential and integrated reaction rates and through it the dimension of the interrogated volume; the precise determination of the C/O atom fraction requires the investigations on the angular distribution of the 6.13 MeV gamma-ray emitted in the ¹⁶O(n,n'γ) reaction. These results, in addition to the identification of landmines, render the improvement of the non-intrusive neutron methods possible.

  14. Investigations on landmine detection by neutron-based techniques

    Energy Technology Data Exchange (ETDEWEB)

    Csikai, J. E-mail: csikai@delfin.klte.hu; Doczi, R.; Kiraly, B

    2004-07-01

    Principles and techniques of some neutron-based methods used to identify the antipersonnel landmines (APMs) are discussed. New results have been achieved in the field of neutron reflection, transmission, scattering and reaction techniques. Some conclusions are as follows: The neutron hand-held detector is suitable for the observation of anomaly caused by a DLM2-like sample in different soils with a scanning speed of 1 m²/1.5 min; the reflection cross section of thermal neutrons rendered the determination of equivalent thickness of different soil components possible; a simple method was developed for the determination of the thermal neutron flux perturbation factor needed for multi-elemental analysis of bulky samples; unfolded spectra of elastically backscattered neutrons using broad-spectrum sources render the identification of APMs possible; the knowledge of leakage spectra of different source neutrons is indispensable for the determination of the differential and integrated reaction rates and through it the dimension of the interrogated volume; the precise determination of the C/O atom fraction requires the investigations on the angular distribution of the 6.13 MeV gamma-ray emitted in the ¹⁶O(n,n'γ) reaction. These results, in addition to the identification of landmines, render the improvement of the non-intrusive neutron methods possible.

  15. A human visual based binarization technique for histological images

    Science.gov (United States)

    Shreyas, Kamath K. M.; Rajendran, Rahul; Panetta, Karen; Agaian, Sos

    2017-05-01

    In the field of vision-based systems for object detection and classification, thresholding is a key pre-processing step. Thresholding is a well-known technique for image segmentation. Segmentation of medical images, such as Computed Axial Tomography (CAT), Magnetic Resonance Imaging (MRI), X-Ray, Phase Contrast Microscopy, and Histological images, presents problems like high variability in terms of the human anatomy and variation in modalities. Recent advances made in computer-aided diagnosis of histological images help facilitate detection and classification of diseases. Since most pathology diagnosis depends on the expertise and ability of the pathologist, there is clearly a need for an automated assessment system. Histological images are stained to a specific color to differentiate each component in the tissue. Segmentation and analysis of such images is problematic, as they present high variability in terms of color and cell clusters. This paper presents an adaptive thresholding technique that aims at segmenting cell structures from Haematoxylin and Eosin stained images. The thresholded result can further be used by pathologists to perform effective diagnosis. The effectiveness of the proposed method is analyzed by visually comparing the results to state-of-the-art thresholding methods such as Otsu, Niblack, Sauvola, Bernsen, and Wolf. Computer simulations demonstrate the efficiency of the proposed method in segmenting critical information.

  16. Detecting Molecular Properties by Various Laser-Based Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Hsin, Tse-Ming [Iowa State Univ., Ames, IA (United States)

    2007-01-01

    Four different laser-based techniques were applied to study physical and chemical characteristics of biomolecules and dye molecules. These techniques are hole burning spectroscopy, single molecule spectroscopy, time-resolved coherent anti-Stokes Raman spectroscopy and laser-induced fluorescence microscopy. Results from hole burning and single molecule spectroscopy suggested that two antenna states (C708 & C714) of photosystem I from cyanobacterium Synechocystis PCC 6803 are connected by effective energy transfer and the corresponding energy transfer time is ~6 ps. In addition, results from hole burning spectroscopy indicated that the chlorophyll dimer of the C714 state has a large distribution of the dimer geometry. Direct observation of vibrational peaks and evolution of coumarin 153 in the electronic excited state was demonstrated by using the fs/ps CARS, a variation of time-resolved coherent anti-Stokes Raman spectroscopy. In three different solvents, methanol, acetonitrile, and butanol, a vibration peak related to the stretch of the carbonyl group exhibits different relaxation dynamics. Laser-induced fluorescence microscopy, along with the biomimetic containers (liposomes), allows the measurement of the enzymatic activity of individual alkaline phosphatase from bovine intestinal mucosa without potential interferences from glass surfaces. The result showed a wide distribution of the enzyme reactivity. Protein structural variation is one of the major reasons that are responsible for this highly heterogeneous behavior.

  17. Graph Model Based Indoor Tracking

    DEFF Research Database (Denmark)

    Jensen, Christian Søndergaard; Lu, Hua; Yang, Bin

    2009-01-01

    infrastructure for different symbolic positioning technologies, e.g., Bluetooth and RFID. More specifically, the paper proposes a model of indoor space that comprises a base graph and mappings that represent the topology of indoor space at different levels. The resulting model can be used for one or several...... indoor positioning technologies. Focusing on RFID-based positioning, an RFID specific reader deployment graph model is built from the base graph model. This model is then used in several algorithms for constructing and refining trajectories from raw RFID readings. Empirical studies with implementations...

  18. Machine learning techniques for astrophysical modelling and photometric redshift estimation of quasars in optical sky surveys

    CERN Document Server

    Kumar, N Daniel

    2008-01-01

    Machine learning techniques are utilised in several areas of astrophysical research today. This dissertation addresses the application of ML techniques to two classes of problems in astrophysics, namely, the analysis of individual astronomical phenomena over time and the automated, simultaneous analysis of thousands of objects in large optical sky surveys. Specifically investigated are (1) techniques to approximate the precise orbits of the satellites of Jupiter and Saturn given Earth-based observations as well as (2) techniques to quickly estimate the distances of quasars observed in the Sloan Digital Sky Survey. Learning methods considered include genetic algorithms, particle swarm optimisation, artificial neural networks, and radial basis function networks. The first part of this dissertation demonstrates that GAs and PSOs can both be efficiently used to model functions that are highly non-linear in several dimensions. It is subsequently demonstrated in the second part that ANNs and RBFNs can be used as ef...

  19. Total laparoscopic gastrocystoplasty: experimental technique in a porcine model

    Directory of Open Access Journals (Sweden)

    Frederico R. Romero

    2007-02-01

    Full Text Available OBJECTIVE: Describe a unique simplified experimental technique for total laparoscopic gastrocystoplasty in a porcine model. MATERIAL AND METHODS: We performed laparoscopic gastrocystoplasty on 10 animals. The gastroepiploic arch was identified and carefully mobilized from its origin at the pylorus to the beginning of the previously demarcated gastric wedge. The gastric segment was resected with sharp dissection. Both gastric suturing and gastrovesical anastomosis were performed with absorbable running sutures. The complete procedure and stages of gastric dissection, gastric closure, and gastrovesical anastomosis were separately timed for each laparoscopic gastrocystoplasty. The end-result of the gastric suturing and the bladder augmentation were evaluated by fluoroscopy or endoscopy. RESULTS: Mean total operative time was 5.2 (range 3.5 - 8) hours: 84.5 (range 62 - 110) minutes for the gastric dissection, 56 (range 28 - 80) minutes for the gastric suturing, and 170.6 (range 70 - 200) minutes for the gastrovesical anastomosis. A cystogram showed a small leakage from the vesical anastomosis in the first two cases. No extravasation from gastric closure was observed in the postoperative gastrogram. CONCLUSIONS: Total laparoscopic gastrocystoplasty is a feasible but complex procedure that currently has limited clinical application. With the increasing use of laparoscopy in reconstructive surgery of the lower urinary tract, gastrocystoplasty may become an attractive option because of its potential advantages over techniques using small and large bowel segments.

  20. Method of pectus excavatum measurement based on structured light technique

    Science.gov (United States)

    Glinkowski, Wojciech; Sitnik, Robert; Witkowski, Marcin; Kocoń, Hanna; Bolewicki, Pawel; Górecki, Andrzej

    2009-07-01

    We present an automatic method for assessment of pectus excavatum severity based on an optical 3-D markerless shape measurement. A four-directional measurement system based on a structured light projection method is built to capture the shape of the body surface of the patients. The system setup is described and typical measurement parameters are given. The automated data analysis path is explained. Its main steps are: normalization of trunk model orientation, cutting the model into slices, analysis of each slice shape, selecting the proper slice for the assessment of pectus excavatum of the patient, and calculating its shape parameter. We develop a new shape parameter (I3ds) that shows high correlation with the computed tomography (CT) Haller index widely used for assessment of pectus excavatum. Clinical results and the evaluation of developed indexes are presented.

  1. High-extensible scene graph framework based on component techniques

    Institute of Scientific and Technical Information of China (English)

    LI Qi-cheng; WANG Guo-ping; ZHOU Feng

    2006-01-01

    In this paper, a novel component-based scene graph is proposed, in which all objects in the scene are classified into different entities, and a scene can be represented as a hierarchical graph composed of the instances of entities. Each entity contains basic data and its operations, which are encapsulated into the entity component. The entity possesses certain behaviours which are responses to rules and interaction defined by the high-level application. Such behaviours can be described by scripts or behaviour models. The component-based scene graph in this paper is more abstract and higher-level than traditional scene graphs. The contents of a scene can be extended flexibly by adding new entities and new entity components, and behaviour modification can be obtained by modifying the model components or behaviour scripts. Its robustness and efficiency are verified by many examples implemented in the Virtual Scenario developed by Peking University.

  2. Method of pectus excavatum measurement based on structured light technique.

    Science.gov (United States)

    Glinkowski, Wojciech; Sitnik, Robert; Witkowski, Marcin; Kocoń, Hanna; Bolewicki, Pawel; Górecki, Andrzej

    2009-01-01

    We present an automatic method for assessment of pectus excavatum severity based on an optical 3-D markerless shape measurement. A four-directional measurement system based on a structured light projection method is built to capture the shape of the body surface of the patients. The system setup is described and typical measurement parameters are given. The automated data analysis path is explained. Its main steps are: normalization of trunk model orientation, cutting the model into slices, analysis of each slice shape, selecting the proper slice for the assessment of pectus excavatum of the patient, and calculating its shape parameter. We develop a new shape parameter (I(3ds)) that shows high correlation with the computed tomography (CT) Haller index widely used for assessment of pectus excavatum. Clinical results and the evaluation of developed indexes are presented.

  3. An RSS based location estimation technique for cognitive relay networks

    KAUST Repository

    Qaraqe, Khalid A.

    2010-11-01

    In this paper, a received signal strength (RSS) based location estimation method is proposed for a cooperative wireless relay network where the relay is a cognitive radio. We propose a method for the considered cognitive relay network to determine the location of the source using the direct and the relayed signal at the destination. We derive the Cramer-Rao lower bound (CRLB) expressions separately for x and y coordinates of the location estimate. We analyze the effects of cognitive behaviour of the relay on the performance of the proposed method. We also discuss and quantify the reliability of the location estimate using the proposed technique if the source is not stationary. The overall performance of the proposed method is presented through simulations. ©2010 IEEE.
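
    A generic RSS-based position estimate (log-distance path-loss model plus nonlinear least squares over several receivers) is sketched below for orientation; it is not the cognitive-relay estimator of the paper, and the path-loss parameters and geometry are assumed.

      # Generic RSS localization sketch: log-distance path loss + least squares.
      import numpy as np
      from scipy.optimize import least_squares

      P0, n_exp = -40.0, 3.0                  # RSS at 1 m (dBm) and path-loss exponent (assumed)
      receivers = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 50.0], [50.0, 50.0]])
      true_source = np.array([18.0, 31.0])

      def rss(src, rx):
          d = np.linalg.norm(rx - src, axis=1)
          return P0 - 10.0 * n_exp * np.log10(d)

      rng = np.random.default_rng(3)
      measured = rss(true_source, receivers) + rng.normal(0, 1.0, len(receivers))

      est = least_squares(lambda p: rss(p, receivers) - measured, x0=[25.0, 25.0]).x
      print("estimated source position:", est)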

  4. A 3-Level Secure Histogram Based Image Steganography Technique

    Directory of Open Access Journals (Sweden)

    G V Chaitanya

    2013-04-01

    Full Text Available Steganography is an art that involves communication of secret data in an appropriate carrier, e.g., images, audio, video, etc., with a goal to hide the very existence of the embedded data so as not to arouse an eavesdropper's suspicion. In this paper, a steganographic technique with a high level of security and a data hiding capacity close to 20% of the cover image data has been developed. An adaptive and matched bit replacement method is used based on the sensitivity of the Human Visual System (HVS) at different intensities. The proposed algorithm ensures that the generated stego image has a PSNR greater than 38.5 and is also resistant to visual attack. A three-level security is infused into the algorithm which makes data retrieval from the stego image possible only in case of having all the right keys.
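
    The mechanics of bit replacement and the PSNR check can be illustrated with the plain LSB sketch below on synthetic data; the adaptive, HVS-matched replacement and the three-level keying of the proposed method are not reproduced.

      # Plain LSB embedding/extraction with a PSNR check (illustrative only).
      import numpy as np

      rng = np.random.default_rng(4)
      cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
      bits = rng.integers(0, 2, size=cover.size, dtype=np.uint8)      # payload, one bit per pixel

      stego = (cover & 0xFE) | bits.reshape(cover.shape)              # replace the least significant bit
      assert np.array_equal(stego & 1, bits.reshape(cover.shape))     # lossless extraction

      mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
      psnr = 10.0 * np.log10(255.0 ** 2 / mse)
      print(f"PSNR: {psnr:.1f} dB")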

  5. Proposed Arabic Text Steganography Method Based on New Coding Technique

    Directory of Open Access Journals (Sweden)

    Assist. prof. Dr. Suhad M. Kadhem

    2016-09-01

    Full Text Available Steganography is one of the important fields of information security that depends on hiding secret information in a cover medium (video, image, audio, text) such that an unauthorized person fails to realize its existence. One of the lossless data compression techniques used for a file that contains much redundant data is run-length encoding (RLE). Sometimes the RLE output will be expanded rather than compressed, and this is the main problem of RLE. In this paper we use a new coding method such that its output contains sequences of ones with few zeros, so the modified RLE that we propose in this paper will be suitable for compression. Finally, we employ the modified RLE output for steganography purposes, based on Unicode and non-printed characters, to hide the secret information in an Arabic text.
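
    For reference, plain run-length encoding and decoding are sketched below; the paper's modified RLE and the Unicode/non-printed-character hiding step are not shown, and the sample string is arbitrary.

      # Basic run-length encoding/decoding (the unmodified form referred to above).
      def rle_encode(data):
          runs, i = [], 0
          while i < len(data):
              j = i
              while j < len(data) and data[j] == data[i]:
                  j += 1
              runs.append((data[i], j - i))        # (symbol, run length)
              i = j
          return runs

      def rle_decode(runs):
          return "".join(sym * count for sym, count in runs)

      s = "1111100011111111"
      encoded = rle_encode(s)
      assert rle_decode(encoded) == s
      print(encoded)                               # [('1', 5), ('0', 3), ('1', 8)]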

  6. A polarization-based Thomson scattering technique for burning plasmas

    CERN Document Server

    Parke, E; Hartog, D J Den

    2013-01-01

    The traditional Thomson scattering diagnostic is based on measurement of the wavelength spectrum of scattered light, where electron temperature measurements are inferred from thermal broadening of the scattered laser light. At sufficiently high temperatures, especially those predicted for ITER and other burning plasmas, relativistic effects cause a change in the polarization state of the scattered photons. The resulting depolarization of the scattered light is temperature dependent and has been proposed elsewhere as a potential alternative to the traditional spectral decomposition technique. Following similar work, we analytically calculate the degree of polarization for incoherent Thomson scattering. For the first time, we obtain exact results valid for the full range of incident laser polarization states and electron temperatures. While previous work focused only on linear polarization, we show that circularly polarized incident light optimizes the degree of depolarization for a wide range of temperatures r...

  7. SAR IMAGE ENHANCEMENT BASED ON BEAM SHARPENING TECHNIQUE

    Institute of Scientific and Technical Information of China (English)

    LI Yong; ZHANG Kun-hui; ZHU Dai-yin; ZHU Zhao-da

    2004-01-01

    A major problem encountered in enhancing SAR images is the total loss of phase information and the unknown parameters of the imaging system. The beam sharpening technique, combined with synthetic aperture radiation pattern estimation, provides an approach to process this kind of data to achieve higher apparent resolution. Based on the criterion of minimizing the expected quadratic estimation error, an optimum FIR filter with a symmetrical structure is designed whose coefficients depend on the azimuth response of local isolated prominent points, because this response can be approximately regarded as the synthetic aperture radiation pattern of the imaging system. The point target simulation shows that the angular resolution is improved by a ratio of almost two to one. The processing results of a live SAR image demonstrate the validity of the method.

  8. Dynamic analysis of granite rockburst based on the PIV technique

    Institute of Scientific and Technical Information of China (English)

    Wang Hongjian; Liu Da’an; Gong Weili; Li Liyun

    2015-01-01

    This paper describes a deep rockburst simulation system used to reproduce the granite instantaneous rockburst process. Based on the PIV (Particle Image Velocimetry) technique, a rockburst can be analyzed quantitatively: tracer-particle images, displacement and strain fields can be obtained, and the debris trajectory described. According to the observation of on-site tests, the dynamic rockburst is actually a gas–solid high speed flow process, which is caused by the interaction of rock fragments and surrounding air. With the help of analysis of high speed video and PIV images, the granite rockburst failure process is found to be composed of six stages of platey fragment spalling and debris ejection. Meanwhile, the elastic energy for these six stages has been calculated to study the energy variation. The results indicate that the rockburst process can be summarized as: an initiating stage, an intensive developing stage and a gradual decay stage. This research will be helpful for our further understanding of the rockburst mechanism.

  9. Hierarchical Spread Spectrum Fingerprinting Scheme Based on the CDMA Technique

    Directory of Open Access Journals (Sweden)

    Kuribayashi Minoru

    2011-01-01

    Full Text Available Digital fingerprinting is a method to insert a user's own ID into digital contents in order to identify illegal users who distribute unauthorized copies. One of the serious problems in a fingerprinting system is the collusion attack, in which several users combine their copies of the same content to modify or delete the embedded fingerprints. In this paper, we propose a collusion-resistant fingerprinting scheme based on the CDMA technique. Our fingerprint sequences are orthogonal sequences of DCT basis vectors modulated by a PN sequence. In order to increase the number of users, a hierarchical structure is produced by assigning a pair of the fingerprint sequences to a user. Under the assumption that the frequency components of detected sequences modulated by the PN sequence follow a Gaussian distribution, the design of thresholds and the weighting of parameters are studied to improve the performance. The robustness against collusion attack and the computational costs required for the detection are estimated in our simulation.
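
    The basic embedding and correlation detection behind such a scheme can be sketched as follows; the hierarchical user structure, threshold design and collusion analysis of the paper are omitted, and the host signal and embedding strength are arbitrary.

      # Toy spread-spectrum fingerprinting: PN-modulated orthogonal DCT carriers.
      import numpy as np
      from scipy.fft import idct

      N, n_users = 256, 8
      rng = np.random.default_rng(5)
      pn = rng.choice([-1.0, 1.0], size=N)                 # PN modulation sequence
      basis = idct(np.eye(N)[:n_users], norm="ortho")      # orthogonal DCT basis vectors (rows)
      fingerprints = basis * pn                            # PN-modulated carriers (still orthonormal)

      host = rng.normal(0.0, 10.0, N)                      # stand-in for the content samples
      user = 3
      marked = host + 2.0 * fingerprints[user]             # embed user 3's fingerprint

      scores = fingerprints @ (marked - host)              # non-blind correlation detector
      print("detected user:", int(np.argmax(scores)))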

  10. Whitelists Based Multiple Filtering Techniques in SCADA Sensor Networks

    Directory of Open Access Journals (Sweden)

    DongHo Kang

    2014-01-01

    Full Text Available The Internet of Things (IoT) consists of several tiny devices connected together to form a collaborative computing environment. Recently, IoT technologies have begun to merge with supervisory control and data acquisition (SCADA) sensor networks to more efficiently gather and analyze real-time data from sensors in industrial environments. But SCADA sensor networks are becoming more and more vulnerable to cyber-attacks due to increased connectivity. To safely adopt IoT technologies in SCADA environments, it is important to improve the security of SCADA sensor networks. In this paper we propose a multiple filtering technique based on whitelists to detect illegitimate packets. Our proposed system detects the traffic of network and application protocol attacks with a set of whitelists collected from normal traffic.
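
    A minimal version of such whitelist-based filtering is sketched below; the flow fields, the application commands and the rule set are invented for illustration and do not reflect the paper's implementation.

      # Minimal multiple-whitelist packet filter (illustrative fields and rules).
      flow_whitelist = {("10.0.0.5", "10.0.0.9", 502),     # (src, dst, port) tuples seen in normal traffic
                        ("10.0.0.7", "10.0.0.9", 502)}
      command_whitelist = {"READ_COILS", "READ_HOLDING_REGISTERS"}

      def is_legitimate(packet):
          flow = (packet["src"], packet["dst"], packet["port"])
          return flow in flow_whitelist and packet["command"] in command_whitelist

      packets = [
          {"src": "10.0.0.5", "dst": "10.0.0.9", "port": 502, "command": "READ_COILS"},
          {"src": "10.0.0.66", "dst": "10.0.0.9", "port": 502, "command": "WRITE_SINGLE_COIL"},
      ]
      for p in packets:
          print(p["src"], "->", "accept" if is_legitimate(p) else "drop")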

  11. A Novel Technique Based on Node Registration in MANETs

    Directory of Open Access Journals (Sweden)

    Rashid Jalal Qureshi

    2012-09-01

    Full Text Available In an ad hoc network, communication links between the nodes are wireless, each node acts as a router for the other nodes, and packets are forwarded from one node to another. This type of network helps in solving challenges and problems that may arise in everyday communication. Mobile Ad Hoc Networks (MANETs) are a new field of research, particularly useful in situations where network infrastructure is costly. Protecting MANETs from security threats is a challenging task because of their dynamic topology. Every node in a MANET is independent and is free to move in any direction, and therefore changes its connections to other nodes frequently. Due to this decentralized nature, different types of attacks can occur. The aim of this research paper is to investigate different MANET security attacks, and a node registration based technique using cryptographic functions is proposed.

  12. Diagnosis of Dengue Infection Using Conventional and Biosensor Based Techniques

    Directory of Open Access Journals (Sweden)

    Om Parkash

    2015-10-01

    Full Text Available Dengue is an arthropod-borne viral disease caused by four antigenically different serotypes of dengue virus. This disease is considered a major public health concern around the world. Currently, there is no licensed vaccine or antiviral drug available for the prevention and treatment of dengue disease. Moreover, clinical features of dengue are indistinguishable from other infectious diseases such as malaria, chikungunya, rickettsia and leptospira. Therefore, a prompt and accurate laboratory diagnostic test is urgently required for disease confirmation and patient triage. The traditional diagnostic techniques for the dengue virus are viral detection in cell culture, serological testing, and RNA amplification using reverse transcriptase PCR. This paper discusses the conventional laboratory methods used for the diagnosis of dengue during the acute and convalescent phases and highlights the advantages and limitations of these routine laboratory tests. Subsequently, the biosensor based assays developed using various transducers for the detection of dengue are also reviewed.

  13. Astronomical Image Compression Techniques Based on ACC and KLT Coder

    Directory of Open Access Journals (Sweden)

    J. Schindler

    2011-01-01

    Full Text Available This paper deals with compression of image data in astronomy applications. Astronomical images have typical specific properties: high grayscale bit depth, size, noise occurrence and special processing algorithms. They belong to the class of scientific images. Their processing and compression is quite different from the classical approach of multimedia image processing. The database of images from BOOTES (Burst Observer and Optical Transient Exploring System) has been chosen as a source of the testing signal. BOOTES is a Czech-Spanish robotic telescope for observing AGN (active galactic nuclei) and for searching for the optical transients of GRBs (gamma-ray bursts). This paper discusses an approach based on an analysis of statistical properties of image data. A comparison of two irrelevancy reduction methods is presented from a scientific (astrometric and photometric) point of view. The first method is based on a statistical approach, using the Karhunen-Loeve transform (KLT) with uniform quantization in the spectral domain. The second technique is derived from wavelet decomposition with adaptive selection of the used prediction coefficients. Finally, the comparison of three redundancy reduction methods is discussed. The multimedia format JPEG2000 and HCOMPRESS, designed especially for astronomical images, are compared with the new Astronomical Context Coder (ACC) based on adaptive median regression.
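
    The statistical idea behind a KLT coder can be sketched as follows: block vectors are decorrelated with the eigenvectors of their covariance matrix, and only the strongest coefficients are kept and uniformly quantized. The image, block size, number of retained components and quantization step below are arbitrary; this is not the ACC coder.

      # Sketch of KLT (PCA) coding of 8x8 blocks with uniform quantization.
      import numpy as np

      rng = np.random.default_rng(6)
      img = rng.normal(1000.0, 50.0, size=(128, 128))               # stand-in for an astronomical frame
      blocks = img.reshape(16, 8, 16, 8).transpose(0, 2, 1, 3).reshape(-1, 64)

      mean = blocks.mean(axis=0)
      cov = np.cov(blocks - mean, rowvar=False)
      eigval, eigvec = np.linalg.eigh(cov)
      basis = eigvec[:, ::-1][:, :16]                               # keep the 16 strongest KLT components

      step = 4.0                                                    # uniform quantization step
      coeffs = np.round(((blocks - mean) @ basis) / step)           # transform + quantize
      recon = (coeffs * step) @ basis.T + mean                      # dequantize + inverse transform
      rmse = np.sqrt(np.mean((recon - blocks) ** 2))
      print(f"kept 16/64 coefficients, reconstruction RMSE: {rmse:.2f}")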

  14. CIVA workstation for NDE: mixing of NDE techniques and modeling

    Energy Technology Data Exchange (ETDEWEB)

    Benoist, P.; Besnard, R. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Procedes et Systemes Avances; Bayon, G. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Reacteurs Experimentaux; Boutaine, J.L. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Applications et de la Metrologie des Rayonnements Ionisants

    1994-12-31

    In order to compare the capabilities of different NDE techniques, or to use complementary inspection methods, the same components are examined with different procedures. It is then very useful to have a single evaluation tool allowing direct comparison of the methods: CIVA is an open system for processing NDE data; it is adapted to a standard work station (UNIX, C, MOTIF) and can read different supports on which the digitized data are stored. It includes a large library of signal and image processing methods accessible and adapted to NDE data (filtering, deconvolution, 2D and 3D spatial correlations...). Different CIVA application examples are described: brazing inspection (neutronography, ultrasonic), tube inspection (eddy current, ultrasonic), aluminium welds examination (UT and radiography). Modelling and experimental results are compared. 16 fig., 7 ref.

  15. Randomization techniques for the intensity modulation-based quantum stream cipher and progress of experiment

    Science.gov (United States)

    Kato, Kentaro; Hirota, Osamu

    2011-08-01

    The quantum noise based direct encryption protocol Y-OO is expected to provide physical complexity based security, which is thought to be comparable to information theoretic security in mathematical cryptography, for the physical layer of fiber-optic communication systems. So far, several randomization techniques for the quantum stream cipher by the Y-OO protocol have been proposed, but most of them were developed under the assumption that phase shift keying is used as the modulation format. On the other hand, the recent progress in the experimental study of the intensity modulation based quantum stream cipher by the Y-OO protocol raises expectations for its realization. The purpose of this paper is to present design and implementation methods for a composite model of the intensity modulation based quantum stream cipher with some randomization techniques. As a result, this paper gives a viewpoint of how the Y-OO cryptosystem is miniaturized.

  16. CANDU in-reactor quantitative visual-based inspection techniques

    Science.gov (United States)

    Rochefort, P. A.

    2009-02-01

    This paper describes two separate visual-based inspection procedures used at CANDU nuclear power generating stations. The techniques are quantitative in nature and are delivered and operated in highly radioactive environments with access that is restrictive, and in one case is submerged. Visual-based inspections at stations are typically qualitative in nature. For example a video system will be used to search for a missing component, inspect for a broken fixture, or locate areas of excessive corrosion in a pipe. In contrast, the methods described here are used to measure characteristic component dimensions that in one case ensure ongoing safe operation of the reactor and in the other support reactor refurbishment. CANDU reactors are Pressurized Heavy Water Reactors (PHWR). The reactor vessel is a horizontal cylindrical low-pressure calandria tank approximately 6 m in diameter and length, containing heavy water as a neutron moderator. Inside the calandria, 380 horizontal fuel channels (FC) are supported at each end by integral end-shields. Each FC holds 12 fuel bundles. The heavy water primary heat transport water flows through the FC pressure tube, removing the heat from the fuel bundles and delivering it to the steam generator. The general design of the reactor governs both the type of measurements that are required and the methods to perform the measurements. The first inspection procedure is a method to remotely measure the gap between FC and other in-core horizontal components. The technique involves delivering vertically a module with a high-radiation-resistant camera and lighting into the core of a shutdown but fuelled reactor. The measurement is done using a line-of-sight technique between the components. Compensation for image perspective and viewing elevation to the measurement is required. The second inspection procedure measures flaws within the reactor's end shield FC calandria tube rolled joint area. The FC calandria tube (the outer shell of the FC) is

  17. An improved visualization-based force-measurement technique for short-duration hypersonic facilities

    Energy Technology Data Exchange (ETDEWEB)

    Laurence, Stuart J.; Karl, Sebastian [Institute of Aerodynamics and Flow Technology, Spacecraft Section, German Aerospace Center (DLR), Goettingen (Germany)

    2010-06-15

    This article is concerned with describing and exploring the limitations of an improved version of a recently proposed visualization-based technique for the measurement of forces and moments in short-duration hypersonic wind tunnels. The technique is based on tracking the motion of a free-flying body over a sequence of high-speed visualizations; while this idea is not new in itself, the use of high-speed digital cinematography combined with a highly accurate least-squares tracking algorithm allows improved results over what have been previously possible with such techniques. The technique precision is estimated through the analysis of artificially constructed and experimental test images, and the resulting error in acceleration measurements is characterized. For wind-tunnel scale models, position measurements to within a few microns are shown to be readily attainable. Image data from two previous experimental studies in the T5 hypervelocity shock tunnel are then reanalyzed with the improved technique: the uncertainty in the mean drag acceleration is shown to be reduced to the order of the flow unsteadiness, 2-3%, and time-resolved acceleration measurements are also shown to be possible. The response time of the technique for the configurations studied is estimated to be ~0.5 ms. Comparisons with computations using the DLR TAU code also yield agreement to within the overall experimental uncertainty. Measurement of the pitching moment for blunt geometries still appears challenging, however. (orig.)

  18. An improved visualization-based force-measurement technique for short-duration hypersonic facilities

    Science.gov (United States)

    Laurence, Stuart J.; Karl, Sebastian

    2010-06-01

    This article is concerned with describing and exploring the limitations of an improved version of a recently proposed visualization-based technique for the measurement of forces and moments in short-duration hypersonic wind tunnels. The technique is based on tracking the motion of a free-flying body over a sequence of high-speed visualizations; while this idea is not new in itself, the use of high-speed digital cinematography combined with a highly accurate least-squares tracking algorithm allows improved results over what have been previously possible with such techniques. The technique precision is estimated through the analysis of artificially constructed and experimental test images, and the resulting error in acceleration measurements is characterized. For wind-tunnel scale models, position measurements to within a few microns are shown to be readily attainable. Image data from two previous experimental studies in the T5 hypervelocity shock tunnel are then reanalyzed with the improved technique: the uncertainty in the mean drag acceleration is shown to be reduced to the order of the flow unsteadiness, 2-3%, and time-resolved acceleration measurements are also shown to be possible. The response time of the technique for the configurations studied is estimated to be ˜0.5 ms. Comparisons with computations using the DLR TAU code also yield agreement to within the overall experimental uncertainty. Measurement of the pitching moment for blunt geometries still appears challenging, however.
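
    The core of the measurement can be pictured as a least-squares fit of a constant-acceleration model to the tracked positions; in the sketch below the frame rate, flight duration, deceleration and tracking noise are all assumed values chosen only to illustrate the fit.

      # Fit x(t) = x0 + v0*t + 0.5*a*t^2 to tracked positions to recover the acceleration.
      import numpy as np

      fps = 50_000                                   # assumed high-speed camera frame rate
      t = np.arange(60) / fps                        # 60 frames of free flight
      a_true, v0, x0 = -950.0, 0.0, 0.0              # toy deceleration (m/s^2) and initial state
      rng = np.random.default_rng(7)
      x = x0 + v0 * t + 0.5 * a_true * t**2 + rng.normal(0.0, 2e-6, t.size)   # ~2 micron tracking noise

      coeffs = np.polyfit(t, x, 2)                   # [a/2, v0, x0]
      print(f"estimated acceleration: {2.0 * coeffs[0]:.0f} m/s^2 (true {a_true})")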

  19. Modeling and Analyzing Terrain Data Acquired by Modern Mapping Techniques

    Science.gov (United States)

    2009-09-22

    enhanced by new terrain mapping technologies such as Laser altimetry (LIDAR), ground-based laser scanning and Real Time Kinematic GPS (RTK-GPS) that... developed and implemented an approach that has the following features: it is modular so that a user can use different models for each of the modules... support some way of connecting separate modules together to form pipelines, however this requires manual intervention. While a typical GIS can manage

  20. Analysis on the Metrics used in Optimizing Electronic Business based on Learning Techniques

    Directory of Open Access Journals (Sweden)

    Irina-Steliana STAN

    2014-09-01

    Full Text Available The present paper proposes a methodology for analyzing the metrics related to electronic business. The draft optimization models include KPIs that can highlight the specifics of the business, provided they are integrated using learning-based techniques. Having identified the most important and highest-impact elements of the business, the models should ultimately capture the links between them by automating business flows. Human resources will increasingly collaborate with the optimization models, which will translate into higher-quality decisions followed by an increase in profitability.