Sample records for model reference automatic

  1. Design and Implementation of PID Control with Model Reference Adaptive Control for Automatic Safe Landing of a Quadcopter UAV

    Teddy Sudewo


    During quadcopter flight, the landing phase is the most critical one, with the greatest risk of accidents. The problem arises from several constraints, such as the small airframe structure, the increased load on the aircraft's wings, and the influence of wind, which make the vehicle unstable. In this final project, a control system for a quadcopter UAV is designed using a PID controller with Model Reference Adaptive Control (MRAC). MRAC-based control offers several advantages for handling nonlinear plants such as the quadcopter. MRAC is an adaptive control scheme in which the output of the system (process) follows the output of its reference model. In this work, the reference model is specified in advance with fixed specifications, so the MRAC adaptation mechanism can be designed directly. The process parameters θ (a1, a2, b0, b1) are estimated using the Extended Least Squares method, and these parameters tune the controller parameters (k0, k1, k2, k3) to produce the PID control signal. Test results show that when the plant parameters change, the controller is able to correct the response so that it keeps following the reference model, and the MRAC adaptation method handles disturbances well, as indicated by the relatively short recovery time.
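
    The adaptation loop sketched in this abstract (online estimation of the plant parameters, which in turn re-tune the controller gains) can be illustrated with a short simulation. The sketch below is an assumption-laden illustration, not the paper's algorithm: ordinary recursive least squares stands in for Extended Least Squares, the plant is a generic second-order discrete model, and all coefficients are made-up values.

```python
# Indirect MRAC sketch: estimate a second-order plant online with recursive least
# squares (a stand-in for Extended Least Squares) and re-tune controller gains
# k0..k3 every step so the closed loop matches a fixed reference model.
import numpy as np

np.random.seed(0)
a1, a2, b0, b1 = -1.4, 0.45, 0.5, 0.3        # "true" plant (illustrative values)
am1, am2 = -1.2, 0.36                        # reference-model denominator (poles at 0.6)

theta = np.array([-1.0, 0.2, 0.4, 0.1])      # estimate of [a1, a2, b0, b1]
P = np.eye(4) * 1000.0                       # RLS covariance
y1 = y2 = u1 = u2 = ym1 = ym2 = 0.0
r = 1.0                                      # step set-point

for k in range(200):
    phi = np.array([-y1, -y2, u1, u2])                    # regression vector
    y = -a1*y1 - a2*y2 + b0*u1 + b1*u2 + 0.01*np.random.randn()
    K = P @ phi / (1.0 + phi @ P @ phi)                   # RLS gain
    theta = theta + K * (y - phi @ theta)
    P = P - np.outer(K, phi @ P)

    a1h, a2h, b0h, b1h = theta
    b0h = b0h if abs(b0h) > 1e-3 else 1e-3                # avoid division by ~0
    k1 = (am1 - a1h) / b0h                                # feedback from y[k]
    k2 = (am2 - a2h) / b0h                                # feedback from y[k-1]
    k3 = b1h / b0h                                        # feedback from u[k-1]
    k0 = (1.0 + am1 + am2) / b0h                          # feedforward (unit DC gain)
    u = k0*r - k1*y - k2*y1 - k3*u1                       # control signal

    ym = -am1*ym1 - am2*ym2 + (1.0 + am1 + am2)*r         # reference-model output
    y2, y1, u2, u1, ym2, ym1 = y1, y, u1, u, ym1, ym

print("estimated plant parameters:", np.round(theta, 3))
print("final tracking error |y - ym|:", round(abs(y1 - ym1), 4))
```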

  2. Automatic reference level control for an antenna pattern recording system

    Lipin, R., Jr.


    Automatic gain control system keeps recorder reference levels within 0.2 decibels during operation. System reduces recorder drift during antenna radiation distribution determinations over an eight hour period.

  3. A mixture model with a reference-based automatic selection of components for disease classification from protein and/or gene expression levels

    Kopriva Ivica


    Background: Bioinformatics data analysis often uses a linear mixture model representing samples as an additive mixture of components. Properly constrained blind matrix factorization methods extract those components using mixture samples only. However, automatic selection of the extracted components to be retained for classification analysis remains an open issue. Results: The method proposed here is applied to well-studied protein and genomic datasets of ovarian, prostate and colon cancers to extract components for disease prediction. It achieves average sensitivities of 96.2% (sd = 2.7%), 97.6% (sd = 2.8%) and 90.8% (sd = 5.5%) and average specificities of 93.6% (sd = 4.1%), 99% (sd = 2.2%) and 79.4% (sd = 9.8%) in 100 independent two-fold cross-validations. Conclusions: We propose an additive mixture model of a sample for feature extraction using, in principle, sparseness-constrained factorization on a sample-by-sample basis. In contrast, existing methods factorize the complete dataset simultaneously. The sample model is composed of a reference sample, representing the control and/or case (disease) groups, and a test sample. Each sample is decomposed into two or more components that are selected automatically (without using label information) as control specific, case specific and not differentially expressed (neutral). The number of components is determined by cross-validation. Automatic assignment of features (m/z ratios or genes) to a particular component is based on thresholds estimated from each sample directly. Due to the locality of decomposition, the strength of the expression of each feature across the samples can vary, yet features will still be allocated to the related disease and/or control specific component. Since label information is not used in the selection process, case and control specific components can be used for classification, which is not the case with standard factorization methods. Moreover, the component selected by the proposed method...

  4. [Automatization of microscopic blood smear analyses and quality control using reference virtual slides].

    Medovyĭ, V S; Nikolaenko, D S; Parpara, A A; Piatnitskiĭ, A M; Sokolinskiĭ, B Z; Dem'ianov, V L; Zhurkina, T V; Pal'chunova, I B


    MEKOC microscopy complexes have a group of specialized automatic functions for medical analyses of biomaterials integrated with general virtual microscopy accessories. Such functions provide a way of making specialized reference virtual slides (RVS). The latter contain the results of virtual analysis or expert evidence of the automatic analysis results presented in the virtual slide. The use of RVS yields an open system with a step-by-step control of the quality of automatic operations. RVS, as realistic preparation models, are also used to train staff. The results of step-by-step trials of the MEKOC-2 are presented in the paper.

  5. Reference Man anatomical model

    Cristy, M.


    The 70-kg Standard Man or Reference Man has been used in physiological models since at least the 1920s to represent adult males. It came into use in radiation protection in the late 1940s, was developed extensively during the 1950s, and was used by the International Commission on Radiological Protection (ICRP) in its Publication 2 in 1959. The current Reference Man for Purposes of Radiation Protection is a monumental book published in 1975 by the ICRP as ICRP Publication 23. It contains a wealth of information useful for radiation dosimetry, including anatomical and physiological data and the gross and elemental composition of the body and of its organs and tissues. The anatomical data include specified reference values for an adult male and an adult female; other reference values are primarily for the adult male. The anatomical data also include much data on fetuses and children, although reference values are not established. An ICRP task group is currently working on revising selected parts of the Reference Man document.

  6. Extending reference assembly models

    Church, Deanna M.; Schneider, Valerie A.; Steinberg, Karyn Meltz


    The human genome reference assembly is crucial for aligning and analyzing sequence data, and for genome annotation, among other roles. However, the models and analysis assumptions that underlie the current assembly need revising to fully represent human sequence diversity. Improved analysis tools...

  8. The Reference Encounter Model.

    White, Marilyn Domas


    Develops model of the reference interview which explicitly incorporates human information processing, particularly schema ideas presented by Marvin Minsky and other theorists in cognitive processing and artificial intelligence. Questions are raised concerning use of content analysis of transcribed verbal protocols as methodology for studying…

  9. A Reference Implementation of a Generic Automatic Pointing System

    Staig, T.; Tobar, R.; Araya, M. A.; Guajardo, C.; von Brand, H. H.


    The correction of every existing observation error is impossible. Nevertheless, the approach taken to do this should be the best possible one. Even though there are a huge number of problems to solve, if one knows how much they affect the observation for given conditions then it is possible to observe as desired by counteracting these deviations. Automatic pointing adjustments help us to do this by providing mathematical support to model the perturbations, and therefore the deviations. This paper presents a generic open-source pointing system developed by the ALMA-UTFSM team, intended to work with the gTCS. This pointing system includes several physical terms, terms with spherical harmonics and user-customised terms which allow the generation of pointing models in a generic way. Accurate results have been obtained with test data. Graphical support is also included in our work and helps to show the variation between experimental and theoretical values of several variables in relation to different coordinates. Thanks to its open-source character, it could be easily integrated into a TCS, automating the pointing calibration process for a given telescope and allowing the interesting, previously unavailable functionality of changing the pointing model while observing.

  10. Preliminary reference Earth model

    Dziewonski, Adam M.; Anderson, Don L.


    A large data set consisting of about 1000 normal mode periods, 500 summary travel time observations, 100 normal mode Q values, mass and moment of inertia have been inverted to obtain the radial distribution of elastic properties, Q values and density in the Earth's interior. The data set was supplemented with a special study of 12 years of ISC phase data which yielded an additional 1.75 × 10^6 travel time observations for P and S waves. In order to obtain satisfactory agreement with the entire data set we were required to take into account anelastic dispersion. The introduction of transverse isotropy into the outer 220 km of the mantle was required in order to satisfy the shorter period fundamental toroidal and spheroidal modes. This anisotropy also improved the fit of the larger data set. The horizontal and vertical velocities in the upper mantle differ by 2-4%, both for P and S waves. The mantle below 220 km is not required to be anisotropic. Mantle Rayleigh waves are surprisingly sensitive to compressional velocity in the upper mantle. High Sn velocities, low Pn velocities and a pronounced low-velocity zone are features of most global inversion models that are suppressed when anisotropy is allowed for in the inversion. The Preliminary Reference Earth Model, PREM, and auxiliary tables showing fits to the data are presented.

  11. 1927 reference in the new millennium: where is the Automat?

    Worel, Sunny Lynn; Rethlefsen, Melissa Lyle


    James Ballard, director at the Boston Medical Library, tracked questions he received at the reference desk in 1927 to recognize the trend of queries and to record the information for future use. He presented a paper on reference services that listed sixty of his reference questions at the Thirtieth Annual Meeting of the Medical Library Association (MLA) in 1927. During a two-month period in 2001, the authors examined Ballard's questions by attempting to answer them with print sources from the 1920s and with the Internet. The searchers answered 85% of the questions with the Internet and 80% with 1920s reference sources. The authors compared Internet and 1920s print resources for practical use. When answering the questions with 1920s resources, the searchers rediscovered a time in health sciences libraries when there was no Ulrich's Periodicals Directory, no standardized subject headings, and no comprehensive listings of available books. Yet, the authors found many of the 1920s reference materials to be quite useful and often multifunctional. The authors recorded observations regarding the impact of automation on answering reference questions. Even though the Internet has changed the outward appearance of reference services, many things remain the same.

  12. Automatic terrain modeling using transfinite element analysis

    Collier, Nathaniel O.


    An automatic procedure for modeling terrain is developed based on L2 projection-based interpolation of discrete terrain data onto transfinite function spaces. The function space is refined automatically by the use of image processing techniques to detect regions of high error and the flexibility of the transfinite interpolation to add degrees of freedom to these areas. Examples are shown of a section of the Palo Duro Canyon in northern Texas.

  13. Reference-free automatic quality assessment of tracheoesophageal speech.

    Huang, Andy; Falk, Tiago H; Chan, Wai-Yip; Parsa, Vijay; Doyle, Philip


    Evaluation of the quality of tracheoesophageal (TE) speech using machines instead of human experts can enhance the voice rehabilitation process for patients who have undergone total laryngectomy and voice restoration. Towards the goal of devising a reference-free TE speech quality estimation algorithm, we investigate the efficacy of speech signal features that are used in standard telephone-speech quality assessment algorithms, in conjunction with a recently introduced speech modulation spectrum measure. Tests performed on two TE speech databases demonstrate that the modulation spectral measure and a subset of features in the standard ITU-T P.563 algorithm estimate TE speech quality with better correlation (up to 0.9) than previously proposed features.

  14. Reference Models for Virtual Enterprises

    Tølle, Martin; Bernus, Peter; Vesterager, Johan


    This paper analyses different types of Reference Models (RMs) applicable to support the set-up and (re)configuration of virtual enterprises (VEs). RMs are models capturing concepts common to VEs, aiming to convert the task of setting up a VE into a configuration task and hence reduce the time needed for VE creation. The RMs are analysed through a mapping onto the Virtual Enterprise Reference Architecture (VERA) created in the IMS GLOBEMEN project based upon GERAM.

  16. Linguistics Computation, Automatic Model Generation, and Intensions

    Nourani, C F


    Techniques are presented for defining models of computational linguistics theories. The methods of generalized diagrams that were developed by this author for modeling artificial intelligence planning and reasoning are shown to be applicable to models of computation of linguistics theories. It is shown that for extensional and intensional interpretations, models can be generated automatically which assign meaning to computations of linguistics theories for natural languages. Keywords: Computational Linguistics, Reasoning Models, G-diagrams For Models, Dynamic Model Implementation, Linguistics and Logics For Artificial Intelligence

  17. Modeling of a Multiple Digital Automatic Gain Control System

    WANG Jingdian; LU Xiuhong; ZHANG Li


    Automatic gain control (AGC) has been used in many applications. The key features of AGC, including a steady state output and static/dynamic timing response, depend mainly on key parameters such as the reference and the filter coefficients. A simple model developed to describe AGC systems based on several simple assumptions shows that AGC always converges to the reference and that the timing constant depends on the filter coefficients. Measures are given to prevent oscillations and limit cycle effects. The simple AGC system is adapted to a multiple AGC system for a TV tuner in a much more efficient model. Simulations using the C language are 16 times faster than those with MATLAB, and 10 times faster than those with a mixed register transfer level (RTL)-simulation program with integrated circuit emphasis (SPICE) model.
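
    A minimal numerical sketch of the loop behaviour described above (convergence to the reference, settling time set by the filter coefficient) is given below; the first-order loop and all numbers are assumptions for illustration, not the paper's model.

```python
# First-order AGC loop: integrate the error between the measured output level and a
# fixed reference. The output level converges to the reference, and the settling time
# is governed by the loop/filter coefficient mu. For stability, mu * input_level must
# stay below 2 (below 1 for monotone convergence); larger values give the oscillations
# and limit cycles the abstract warns about. All numbers are illustrative.
reference = 1.0        # desired output level
mu = 0.05              # loop (filter) coefficient
gain = 0.1             # initial gain
input_level = 4.0      # unknown input signal level

for k in range(400):
    output_level = gain * input_level          # level at the AGC output
    gain += mu * (reference - output_level)    # integrate the level error
    if k == 200:
        input_level = 2.0                      # step change: the AGC re-converges

print("final gain:", round(gain, 3), "-> output level", round(gain * input_level, 3))
```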

  18. The Reference Forward Model (RFM)

    Dudhia, Anu


    The Reference Forward Model (RFM) is a general purpose line-by-line radiative transfer model, currently supported by the UK National Centre for Earth Observation. This paper outlines the algorithms used by the RFM, focusing on standard calculations of terrestrial atmospheric infrared spectra followed by a brief summary of some additional capabilities and extensions to microwave wavelengths and extraterrestrial atmospheres. At its most basic level - the 'line-by-line' component - it calculates molecular absorption cross-sections by applying the Voigt lineshape to all transitions up to ±25 cm^-1 from line-centre. Alternatively, absorptions can be directly interpolated from various forms of tabulated data. These cross-sections are then used to construct infrared radiance or transmittance spectra for ray paths through homogeneous cells, plane-parallel or circular atmospheres. At a higher level, the RFM can apply instrumental convolutions to simulate measurements from Fourier transform spectrometers. It can also calculate Jacobian spectra and so act as a stand-alone forward model within a retrieval scheme. The RFM is designed for robustness, flexibility and ease-of-use (particularly by the non-expert), and no claims are made for superior accuracy, or indeed novelty, compared to other line-by-line codes. Its main limitations at present are a lack of scattering and simplified modelling of surface reflectance and line-mixing.

  19. Automatic Queuing Model for Banking Applications

    Dr. Ahmed S. A. AL-Jumaily


    Queuing is the process of moving customers in a specific sequence to a specific service according to the customer's need. The term scheduling stands for the process of computing a schedule, which may be done by a queue-based scheduler. This paper focuses on bank queuing systems, the different queuing algorithms that are used in banks to serve customers, and the average waiting time. The aim of this paper is to build an automatic queuing system for organizing bank queues that can analyse the queue status and decide which customer to serve. The new queuing architecture model can switch between different scheduling algorithms according to the testing results and the average waiting time. The main innovation of this work is that the average waiting time is taken into account in processing, together with switching to the scheduling algorithm that gives the best average waiting time.
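
    The switching idea described above can be illustrated with a toy single-teller simulation that compares the average waiting time of two candidate scheduling policies and selects the better one. The arrival and service-time distributions below are assumptions, not data from the paper.

```python
# Single-teller queue simulation: compute average waiting time under two policies
# and "switch" to the one with the smaller average wait.
import random

random.seed(42)
t, customers = 0.0, []
for _ in range(200):
    t += random.expovariate(1 / 3.0)                      # inter-arrival time (mean 3 min)
    customers.append((t, random.expovariate(1 / 2.0)))    # (arrival, service time, mean 2 min)

def average_wait(customers, key):
    """One teller; 'key' chooses which waiting customer is served next."""
    clock, waiting, waits, i = 0.0, [], [], 0
    while i < len(customers) or waiting:
        if not waiting:                                   # teller idle: jump to next arrival
            clock = max(clock, customers[i][0])
        while i < len(customers) and customers[i][0] <= clock:
            waiting.append(customers[i]); i += 1          # customers who have arrived so far
        nxt = min(waiting, key=key)                       # scheduling decision
        waiting.remove(nxt)
        waits.append(clock - nxt[0])                      # time spent waiting in line
        clock += nxt[1]                                    # serve the customer
    return sum(waits) / len(waits)

fifo = average_wait(customers, key=lambda c: c[0])        # first-come-first-served
sjf = average_wait(customers, key=lambda c: c[1])         # shortest service time first
print(f"FIFO: {fifo:.2f} min, SJF: {sjf:.2f} min -> use {'SJF' if sjf < fifo else 'FIFO'}")
```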

  20. Using full-reference image quality metrics for automatic image sharpening

    Krasula, Lukas; Fliegel, Karel; Le Callet, Patrick; Klíma, Miloš


    Image sharpening is a post-processing technique employed for the artificial enhancement of perceived sharpness by shortening the transitions between luminance levels or increasing the contrast on edges. The greatest challenge in this area is to determine the level of perceived sharpness that is optimal for human observers. This task is complex because the enhancement is gained only up to a certain threshold; after reaching it, the quality of the resulting image drops due to the presence of annoying artifacts. Despite the effort dedicated to automatic sharpness estimation, none of the existing metrics is designed to localize this threshold, even though it is a very important step towards automatic image sharpening. In this work, the possible use of full-reference image quality metrics for finding the optimal amount of sharpening is proposed and investigated. An intentionally over-sharpened "anchor image" was included in the calculation as an "anti-reference", and the final metric score was computed from the differences between the reference, processed, and anchor versions of the scene. Quality scores obtained from a subjective experiment were used to determine the optimal combination of partial metric values. Five popular fidelity metrics - SSIM, MS-SSIM, IW-SSIM, VIF, and FSIM - were tested. The performance of the proposed approach was then verified in the subjective experiment.
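
    The anchor-based selection described above can be sketched as follows. In this illustration a plain negative-MSE score stands in for SSIM/VIF/FSIM, the combination weights are arbitrary assumptions rather than weights fitted to subjective scores, and the unsharp-masking operator is a generic one.

```python
# Sharpen at several strengths, score each candidate against both the reference and
# an intentionally over-sharpened "anchor", and keep the strength with the best
# combined score.
import numpy as np

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                               # stand-in reference image

def blur(img):
    """3x3 box blur; border pixels are left unchanged."""
    out = img.copy()
    out[1:-1, 1:-1] = sum(img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    return out

def sharpen(img, s):
    """Unsharp masking with strength s."""
    return img + s * (img - blur(img))

def metric(a, b):
    """Stand-in full-reference quality metric (higher is better)."""
    return -np.mean((a - b) ** 2)

anchor = sharpen(ref, 8.0)                               # over-sharpened "anti-reference"
w_ref, w_anchor = 0.7, 0.3                               # assumed combination weights
scores = {s: w_ref * metric(sharpen(ref, s), ref) + w_anchor * metric(sharpen(ref, s), anchor)
          for s in np.linspace(0.0, 4.0, 9)}
best = max(scores, key=scores.get)
print("selected sharpening strength:", best)
```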

  1. Genetic Programming for Automatic Hydrological Modelling

    Chadalawada, Jayashree; Babovic, Vladan


    One of the recent challenges for the hydrologic research community is the need for the development of coupled systems that involve the integration of hydrologic, atmospheric and socio-economic relationships. This poses a requirement for novel modelling frameworks that can accurately represent complex systems given the limited understanding of the underlying processes, increasing volumes of data, and high levels of uncertainty. Each of the existing hydrological models varies in terms of conceptualization and process representation and is best suited to capture the environmental dynamics of a particular hydrological system. Data-driven approaches can be used in the integration of alternative process hypotheses in order to achieve a unified theory at catchment scale. The key steps in the implementation of an integrated modelling framework that is informed by prior understanding and data include the choice of the technique for the induction of knowledge from data, the identification of alternative structural hypotheses, the definition of rules and constraints for the meaningful, intelligent combination of model component hypotheses, and the definition of evaluation metrics. This study aims at defining a Genetic Programming based modelling framework that tests different conceptual model constructs against a wide range of objective functions and evolves accurate and parsimonious models that capture the dominant hydrological processes at catchment scale. In this paper, GP initializes the evolutionary process using the modelling decisions inspired by the Superflex framework [Fenicia et al., 2011] and automatically combines them into model structures that are scrutinized against observed data using statistical, hydrological and flow-duration-curve based performance metrics. The collaboration between data-driven and physical, conceptual modelling paradigms improves the ability to model and manage hydrologic systems. Fenicia, F., D. Kavetski, and H. H. Savenije (2011), Elements of a flexible approach...

  2. Hidden Markov models in automatic speech recognition

    Wrzoskowicz, Adam


    This article describes a method for constructing an automatic speech recognition system based on hidden Markov models (HMMs). The author discusses the basic concepts of HMM theory and the application of these models to the analysis and recognition of speech signals. The author provides algorithms which make it possible to train the ASR system and recognize signals on the basis of distinct stochastic models of selected speech sound classes. The author describes the specific components of the system and the procedures used to model and recognize speech. The author discusses problems associated with the choice of optimal signal detection and parameterization characteristics and their effect on the performance of the system. The author presents different options for the choice of speech signal segments and their consequences for the ASR process. The author gives special attention to the use of lexical, syntactic, and semantic information for the purpose of improving the quality and efficiency of the system. The author also describes an ASR system developed by the Speech Acoustics Laboratory of the IBPT PAS. The author discusses the results of experiments on the effect of noise on the performance of the ASR system and describes methods of constructing HMMs designed to operate in a noisy environment. The author also describes a language for human-robot communications which was defined as a complex multilevel network from an HMM of speech sounds geared towards Polish inflections. The author also added mandatory lexical and syntactic rules to the system for its communications vocabulary.
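
    As a small illustration of the decoding step in such a recognizer, the sketch below runs the Viterbi algorithm on a toy two-state model with made-up transition and per-frame observation probabilities; real systems use many states per phone and likelihoods from Gaussian mixtures or neural networks.

```python
# Viterbi decoding for a toy two-state HMM: find the most probable state sequence
# given per-frame observation likelihoods.
import numpy as np

states = ["silence", "speech"]
log_pi = np.log([0.6, 0.4])                       # initial state probabilities
log_A = np.log([[0.8, 0.2],                       # transition matrix
                [0.3, 0.7]])
log_B = np.log([[0.9, 0.1],                       # P(observation | state) per frame,
                [0.2, 0.8],                       # e.g. from Gaussian mixtures
                [0.1, 0.9],
                [0.8, 0.2]])                      # 4 frames x 2 states

def viterbi(log_pi, log_A, log_B):
    T, N = log_B.shape
    delta = log_pi + log_B[0]                     # best log-prob ending in each state
    psi = np.zeros((T, N), dtype=int)             # back-pointers
    for t in range(1, T):
        trans = delta[:, None] + log_A            # trans[i, j]: come from i, go to j
        psi[t] = trans.argmax(axis=0)
        delta = trans.max(axis=0) + log_B[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):                 # trace the best path backwards
        path.append(int(psi[t][path[-1]]))
    return [states[s] for s in reversed(path)]

print(viterbi(log_pi, log_A, log_B))   # ['silence', 'speech', 'speech', 'silence']
```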

  3. Biological models for automatic target detection

    Schachter, Bruce


    Humans are better at detecting targets in literal imagery than any known algorithm. Recent advances in modeling visual processes have resulted from f-MRI brain imaging with humans and the use of more invasive techniques with monkeys. There are four startling new discoveries. 1) The visual cortex does not simply process an incoming image. It constructs a physics based model of the image. 2) Coarse category classification and range-to-target are estimated quickly - possibly through the dorsal pathway of the visual cortex, combining rapid coarse processing of image data with expectations and goals. This data is then fed back to lower levels to resize the target and enhance the recognition process feeding forward through the ventral pathway. 3) Giant photosensitive retinal ganglion cells provide data for maintaining circadian rhythm (time-of-day) and modeling the physics of the light source. 4) Five filter types implemented by the neurons of the primary visual cortex have been determined. A computer model for automatic target detection has been developed based upon these recent discoveries. It uses an artificial neural network architecture with multiple feed-forward and feedback paths. Our implementation's efficiency derives from the observation that any 2-D filter kernel can be approximated by a sum of 2-D box functions. And, a 2-D box function easily decomposes into two 1-D box functions. Further efficiency is obtained by decomposing the largest neural filter into a high pass filter and a more sparsely sampled low pass filter.

  4. Virtual Reference, Real Money: Modeling Costs in Virtual Reference Services

    Eakin, Lori; Pomerantz, Jeffrey


    Libraries nationwide are in yet another phase of belt tightening. Without an understanding of the economic factors that influence library operations, however, controlling costs and performing cost-benefit analyses on services is difficult. This paper describes a project to develop a cost model for collaborative virtual reference services. This…

  5. A novel GIS-based tool for estimating present-day ocean reference depth using automatically processed gridded bathymetry data

    Jurecka, Mirosława; Niedzielski, Tomasz; Migoń, Piotr


    This paper presents a new method for computing the present-day value of the reference depth (dr) which is an essential input information for assessment of past sea-level changes. The method applies a novel automatic geoprocessing tool developed using Python script and ArcGIS, and uses recent data about ocean floor depth, sediment thickness, and age of oceanic crust. The procedure is multi-step and involves creation of a bathymetric dataset corrected for sediment loading and isostasy, delineation of subduction zones, computation of perpendicular sea-floor profiles, and statistical analysis of these profiles versus crust age. The analysis of site-specific situations near the subduction zones all around the world shows a number of instances where the depth of the oceanic crust stabilizes at a certain level before reaching the subduction zone, and this occurs at depths much lower than proposed in previous approaches to the reference depth issue. An analysis of Jurassic and Cretaceous oceanic lithosphere shows that the most probable interval at which the reference depth occurs is 5300-5800 m. This interval is broadly consistent with dr estimates determined using the Global Depth-Heatflow model (GDH1), but is significantly lower than dr estimates calculated on a basis of the Parsons-Sclater Model (PSM).

  6. The Role of Item Models in Automatic Item Generation

    Gierl, Mark J.; Lai, Hollis


    Automatic item generation represents a relatively new but rapidly evolving research area where cognitive and psychometric theories are used to produce tests that include items generated using computer technology. Automatic item generation requires two steps. First, test development specialists create item models, which are comparable to templates…

  7. SAM Photovoltaic Model Technical Reference

    Gilman, P. [National Renewable Energy Laboratory (NREL), Golden, CO (United States)]


    This manual describes the photovoltaic performance model in the System Advisor Model (SAM). The U.S. Department of Energy’s National Renewable Energy Laboratory maintains and distributes SAM, which is available as a free download. These descriptions are based on SAM 2015.1.30 (SSC 41).

  8. A Reference Model for Virtual Machine Launching Overhead

    Wu, Hao; Ren, Shangping; Garzoglio, Gabriele; Timm, Steven; Bernabeu, Gerard; Chadwick, Keith; Noh, Seo-Young


    Cloud bursting is one of the key research topics in the cloud computing communities. A well designed cloud bursting module enables private clouds to automatically launch virtual machines (VMs) to public clouds when more resources are needed. One of the main challenges in developing a cloud bursting module is to decide when and where to launch a VM so that all resources are most effectively and efficiently utilized and the system performance is optimized. However, based on system operational data obtained from FermiCloud, a private cloud developed by the Fermi National Accelerator Laboratory for scientific workflows, the VM launching overhead is not a constant. It varies with physical resource utilization, such as CPU and I/O device utilizations, at the time when a VM is launched. Hence, to make judicious decisions as to when and where a VM should be launched, a VM launching overhead reference model is needed. In this paper, we first develop a VM launching overhead reference model based on operational data we have obtained on FermiCloud. Second, we apply the developed reference model on FermiCloud and compare calculated VM launching overhead values based on the model with measured overhead values on FermiCloud. Our empirical results on FermiCloud indicate that the developed reference model is accurate. We believe, with the guidance of the developed reference model, efficient resource allocation algorithms can be developed for cloud bursting process to minimize the operational cost and resource waste.

  9. Reference Inflow Characterization for River Resource Reference Model (RM2)

    Neary, Vincent S [ORNL


    Sandia National Laboratory (SNL) is leading an effort to develop reference models for marine and hydrokinetic technologies and wave and current energy resources. This effort will allow the refinement of technology design tools, accurate estimates of a baseline levelized cost of energy (LCoE), and the identification of the main cost drivers that need to be addressed to achieve a competitive LCoE. As part of this effort, Oak Ridge National Laboratory was charged with examining and reporting reference river inflow characteristics for reference model 2 (RM2). Published turbulent flow data from large rivers, a water supply canal and laboratory flumes, are reviewed to determine the range of velocities, turbulence intensities and turbulent stresses acting on hydrokinetic technologies, and also to evaluate the validity of classical models that describe the depth variation of the time-mean velocity and turbulent normal Reynolds stresses. The classical models are found to generally perform well in describing river inflow characteristics. A potential challenge in river inflow characterization, however, is the high variability of depth and flow over the design life of a hydrokinetic device. This variation can have significant effects on the inflow mean velocity and turbulence intensity experienced by stationary and bottom mounted hydrokinetic energy conversion devices, which requires further investigation, but are expected to have minimal effects on surface mounted devices like the vertical axis turbine device designed for RM2. A simple methodology for obtaining an approximate inflow characterization for surface deployed devices is developed using the relation umax=(7/6)V where V is the bulk velocity and umax is assumed to be the near-surface velocity. The application of this expression is recommended for deriving the local inflow velocity acting on the energy extraction planes of the RM2 vertical axis rotors, where V=Q/A can be calculated given a USGS gage flow time
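
    The inflow approximation quoted above can be applied directly; the discharge and cross-sectional area in the snippet below are illustrative assumptions, not RM2 site data.

```python
# Worked example of the near-surface velocity approximation umax = (7/6) * V,
# where the bulk velocity V is discharge over cross-sectional area (V = Q / A).
Q = 4300.0          # discharge [m^3/s] (illustrative)
A = 2100.0          # channel cross-sectional area [m^2] (illustrative)
V = Q / A           # bulk (depth-averaged) velocity [m/s]
u_max = 7.0 / 6.0 * V
print(f"V = {V:.2f} m/s, near-surface velocity umax = {u_max:.2f} m/s")
```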

  10. AD Model Builder: using automatic differentiation for statistical inference of highly parameterized complex nonlinear models

    Fournier, David A.; Skaug, Hans J.; Ancheta, Johnoel


    Many criteria for statistical parameter estimation, such as maximum likelihood, are formulated as a nonlinear optimization problem. Automatic Differentiation Model Builder (ADMB) is a programming framework based on automatic differentiation, aimed at highly nonlinear models with a large number...

  11. A Reference Model for Online Learning Communities

    Seufert, Sabine; Lechner, Ulrike; Stanoevska, Katarina


    Online learning communities are introduced as a comprehensive model for technology-enabled learning. We give an analysis of goals in education and of the requirements for community platforms. The main contribution of the article is a reference model for online learning communities that consists of four layers describing the organizational, interaction, channel or service, and technological models of learning communities. This reference model captures didactic goals, learning methods and learning...

  12. Semi-automatic Assessment Model of Student Texts - Pedagogical Foundations

    Kakkonen, T.; Sutinen, E.


    This paper introduces the concept of the semi-automatic assessment of student texts that aims at offering the twin benefits of fully automatic grading and feedback together with the advantages that can be provided by human assessors. This paper concentrates on the pedagogical foundations of the model by demonstrating how the relevant findings in research into written composition and writing education have been taken into account in the model design.

  13. A Comparison Between the OSI Reference Model and the B-ISDN Protocol Reference Model

    Staalhagen, Lars


    This article aims at comparing the Open Systems Interconnection (OSI) Reference Model (RM) and the broadband integrated services digital network (B-ISDN) Protocol Reference Model (PRM). According to the International Telecommunications Union - Telecommunications Sector (ITU-T), the exact...

  14. A periodic pricing model considering reference effect

    Yang Hui


    The purpose of this paper is to investigate optimal pricing strategies with reference effects in revenue management settings. We first propose a static pricing model with stochastic demand, a finite horizon and fixed capacity, and prove the existence and uniqueness of the solution. Second, we extend the fixed pricing model to a periodic pricing model and incorporate a memory-based reference price in the demand function to investigate how the reference effect impacts traditional revenue management decisions. We present numerical examples for both low-demand and high-demand situations, for different levels of reference effects and different updating frequencies. The results show that dynamic pricing strategies are superior to a static one even when reference effects are taken into consideration. We also provide some managerial insights, including pricing directions, price dispersion and the optimal updating frequency for both demand situations.
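
    A toy numerical sketch of the memory-based reference-price mechanism described above is given below; the linear demand form, the smoothing parameter and the candidate price paths are all assumptions for illustration, not the paper's model or results.

```python
# Memory-based reference-price effect: the reference price is an exponentially
# smoothed history of posted prices, demand reacts to the gap between reference and
# posted price, and the seller re-optimizes the price periodically.
a, b, gamma = 100.0, 2.0, 1.5        # base demand, price sensitivity, reference effect
alpha = 0.7                          # memory parameter of the reference price
horizon, update_every = 60, 10       # periodic pricing: re-optimize every 10 periods

def simulate(prices_per_block):
    r, revenue = 20.0, 0.0           # initial reference price
    for t in range(horizon):
        p = prices_per_block[t // update_every]
        d = max(a - b * p + gamma * (r - p), 0.0)   # demand with reference effect
        revenue += p * d
        r = alpha * r + (1 - alpha) * p             # memory-based reference update
    return revenue

# crude comparison of a static price versus a simple declining price path
candidates = {
    "static price 22": [22.0] * 6,
    "declining 26->18": [26.0, 24.0, 22.0, 21.0, 20.0, 18.0],
}
for name, plan in candidates.items():
    print(name, "-> total revenue", round(simulate(plan)))
```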

  15. An automatic composition model of Chinese folk music

    Zheng, Xiaomei; Li, Dongyang; Wang, Lei; Shen, Lin; Gao, Yanyuan; Zhu, Yuanyuan


    Automatic composition has achieved rich results in recent decades, for Western and some other musical traditions. However, the automatic composition of Chinese music has received less attention. After thousands of years of development, Chinese folk music has a wealth of resources, so designing an automatic composition model that learns the characteristics of Chinese folk melodies and imitates the creative process of music is of some significance. According to the melodic features of Chinese folk music, a Chinese folk music composition method based on a Markov model is proposed to analyze Chinese traditional music. Folk songs with typical Chinese national characteristics are selected for analysis. In this paper, an example of automatic composition is given. The experimental results show that this composition model can produce music with the characteristics of Chinese folk music.
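
    The core mechanism described above (note-to-note transition probabilities estimated from a corpus and then sampled to generate a melody) can be sketched in a few lines; the pentatonic "corpus" below is a made-up stand-in for real Chinese folk tunes.

```python
# First-order Markov-chain melody generator: estimate note-to-note transitions from
# a small corpus and sample a new melody from them.
import random

random.seed(7)
corpus = [
    ["C", "D", "E", "G", "E", "D", "C", "A", "G", "A", "C"],
    ["G", "A", "C", "D", "E", "D", "C", "A", "G", "E", "G"],
]

# transition counts: for each note, collect the notes that follow it in the corpus
transitions = {}
for tune in corpus:
    for a, b in zip(tune, tune[1:]):
        transitions.setdefault(a, []).append(b)

def compose(start="C", length=16):
    melody = [start]
    for _ in range(length - 1):
        melody.append(random.choice(transitions[melody[-1]]))  # sample next note
    return melody

print(" ".join(compose()))
```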

  16. Automatic Modeling of Virtual Humans and Body Clothing

    Nadia Magnenat-Thalmann; Hyewon Seo; Frederic Cordier


    Highly realistic virtual human models are rapidly becoming commonplace in computer graphics. These models, often represented by complex shapes and requiring a labor-intensive process, challenge the problem of automatic modeling. The problem of, and solutions to, automatic modeling of animatable virtual humans are studied. Methods for capturing the shape of real people, and parameterization techniques for modeling the static shape (the variety of human body shapes) and dynamic shape (how the body shape changes as it moves) of virtual humans, are classified, summarized and compared. Finally, methods for clothed virtual humans are reviewed.

  17. Neuro-fuzzy system modeling based on automatic fuzzy clustering

    Yuangang TANG; Fuchun SUN; Zengqi SUN


    A neuro-fuzzy system model based on automatic fuzzy clustering is proposed. A hybrid model identification algorithm is also developed to decide the model structure and model parameters. The algorithm mainly includes three parts: 1) automatic fuzzy C-means (AFCM), which is applied to generate fuzzy rules automatically and then fix the size of the neuro-fuzzy network, by which the complexity of system design is reduced greatly at the price of some fitting capability; 2) recursive least squares estimation (RLSE), which is used to update the parameters of the Takagi-Sugeno model employed to describe the behavior of the system; 3) a gradient descent algorithm, proposed for the fuzzy values according to the back-propagation algorithm of neural networks. Finally, modeling the dynamical equation of a two-link manipulator with the proposed approach is illustrated to validate the feasibility of the method.

  18. Issues in acoustic modeling of speech for automatic speech recognition

    Gong, Yifan; Haton, Jean-Paul; Mari, Jean-François


    Projet RFIA; Stochastic modeling is a flexible method for handling the large variability in speech for recognition applications. In contrast to dynamic time warping where heuristic training methods for estimating word templates are used, stochastic modeling allows a probabilistic and automatic training for estimating models. This paper deals with the improvement of stochastic techniques, especially for a better representation of time varying phenomena.

  19. Automatic modeling of the linguistic values for database fuzzy querying



    In order to evaluate vague queries, each linguistic term is considered according to its fuzzy model. Usually, the linguistic terms are defined as fuzzy sets during a classical, off-line knowledge acquisition process. But they can also be automatically extracted from the actual content of the database by an online process. In at least two situations, automatically modeling the linguistic values would be very useful: first, to simplify the knowledge engineer’s task by extracting the definitions from the database content; and second, where mandatory, to dynamically define the linguistic values in the evaluation of complex-criteria queries. Procedures to automatically extract the fuzzy model of the linguistic values from the existing data are presented in this paper.
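
    A minimal sketch of the online extraction idea is given below: membership functions for the linguistic values are derived directly from the current column values (here via percentiles) rather than defined by a knowledge engineer. The percentile anchors, the triangular shape and the sample data are assumptions for illustration.

```python
# Derive "low" / "medium" / "high" membership functions for a numeric column directly
# from its data, then evaluate a vague query value against them.
import numpy as np

salaries = np.array([1200, 1500, 1800, 2100, 2500, 3200, 4100, 5600, 7800, 12000])
lo, p25, p50, p75, hi = np.percentile(salaries, [0, 25, 50, 75, 100])

def triangular(x, a, b, c):
    """Membership of x in a triangular fuzzy set with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

linguistic_values = {
    "low":    lambda x: 1.0 if x <= lo else triangular(x, lo - 1, lo, p50),
    "medium": lambda x: triangular(x, p25, p50, p75),
    "high":   lambda x: 1.0 if x >= hi else triangular(x, p50, hi, hi + 1),
}

query_value = 3000     # e.g. evaluating the vague query "salary is high"
for term, mu in linguistic_values.items():
    print(term, round(mu(query_value), 2))
```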

  1. A reference model for database security proxy

    蔡亮; 杨小虎; 董金祥


    How to protect the database, the kernel resource of information warfare, is becoming more and more important with the rapid development of computer and communication technology. As an application-level firewall, a database security proxy can successfully repulse attacks originating from outside the network and reduce to zero-level the damage from foreign DBMS products. We enhanced the capability of the COAST firewall reference model by adding a transmission unit modification function and an attribute value mapping function, describe the schematic and semantic layer reference model, and finally form a reference model for a DBMS security proxy, which greatly helps in the design and implementation of database security proxies. This modeling process can clearly separate the system functionality into three layers, define the possible security functions for each layer, and estimate the computational cost for each layer.

  2. Using suggestion to model different types of automatic writing.

    Walsh, E; Mehta, M A; Oakley, D A; Guilmette, D N; Gabay, A; Halligan, P W; Deeley, Q


    Our sense of self includes awareness of our thoughts and movements, and our control over them. This feeling can be altered or lost in neuropsychiatric disorders as well as in phenomena such as "automatic writing" whereby writing is attributed to an external source. Here, we employed suggestion in highly hypnotically suggestible participants to model various experiences of automatic writing during a sentence completion task. Results showed that the induction of hypnosis, without additional suggestion, was associated with a small but significant reduction of control, ownership, and awareness for writing. Targeted suggestions produced a double dissociation between thought and movement components of writing, for both feelings of control and ownership, and additionally, reduced awareness of writing. Overall, suggestion produced selective alterations in the control, ownership, and awareness of thought and motor components of writing, thus enabling key aspects of automatic writing, observed across different clinical and cultural settings, to be modelled.

  3. An automatic 3D CAD model errors detection method of aircraft structural part for NC machining

    Bo Huang


    Feature-based NC machining, which requires high-quality 3D CAD models, is widely used in machining aircraft structural parts. However, there has been little research on how to automatically detect CAD model errors; as a result, the user has to check for errors manually, with great effort, before NC programming. This paper proposes an automatic CAD model error detection approach for aircraft structural parts. First, the base faces are identified based on the reference directions corresponding to machining coordinate systems. Then, the CAD models are partitioned into multiple local regions based on the base faces. Finally, the CAD model error types are evaluated based on heuristic rules. A prototype system based on CATIA has been developed to verify the effectiveness of the proposed approach.

  4. Model-Based Reasoning in Humans Becomes Automatic with Training.

    Marcos Economides


    Model-based and model-free reinforcement learning (RL) have been suggested as algorithmic realizations of goal-directed and habitual action strategies. Model-based RL is more flexible than model-free RL but requires sophisticated calculations using a learnt model of the world. This has led model-based RL to be identified with slow, deliberative processing, and model-free RL with fast, automatic processing. In support of this distinction, it has recently been shown that model-based reasoning is impaired by placing subjects under cognitive load, a hallmark of non-automaticity. Here, using the same task, we show that cognitive load does not impair model-based reasoning if subjects receive prior training on the task. This finding is replicated across two studies and a variety of analysis methods. Thus, task familiarity permits use of model-based reasoning in parallel with other cognitive demands. The ability to deploy model-based reasoning in an automatic, parallelizable fashion has widespread theoretical implications, particularly for the learning and execution of complex behaviors. It also suggests a range of important failure modes in psychiatric disorders.

  5. Towards Automatic Processing of Virtual City Models for Simulations

    Piepereit, R.; Schilling, A.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.


    Especially in the field of numerical simulations, such as flow and acoustic simulations, the interest in using virtual 3D models to optimize urban systems is increasing. The few instances in which simulations have already been carried out in practice have been associated with an extremely high, and therefore uneconomical, manual effort for processing the models. The different ways of capturing models in Geographic Information Systems (GIS) and Computer Aided Engineering (CAE) further increase the already very high complexity of the processing. To obtain virtual 3D models suitable for simulation, we developed a tool for automatic processing with the goal of establishing ties between the worlds of GIS and CAE. In this paper we introduce a way to use Coons surfaces for the automatic processing of building models in LoD2, and investigate ways to simplify LoD3 models in order to reduce unnecessary information for a numerical simulation.
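
    As a small illustration of the Coons blending mentioned above, the sketch below evaluates a bilinearly blended Coons patch through four boundary curves; the curves are simple made-up functions, not building geometry from CityGML models.

```python
# Bilinearly blended Coons patch: blend two ruled surfaces between opposite boundary
# curves and subtract the bilinear interpolation of the corner points.
import numpy as np

# boundary curves, each parameterized on [0, 1] and returning a 3D point
c0 = lambda u: np.array([u, 0.0, 0.2 * np.sin(np.pi * u)])   # bottom edge, v = 0
c1 = lambda u: np.array([u, 1.0, 0.0])                        # top edge,    v = 1
d0 = lambda v: np.array([0.0, v, 0.0])                        # left edge,   u = 0
d1 = lambda v: np.array([1.0, v, 0.1 * v * (1 - v)])          # right edge,  u = 1

def coons(u, v):
    """Bilinearly blended Coons patch through the four boundary curves."""
    ruled_u = (1 - v) * c0(u) + v * c1(u)
    ruled_v = (1 - u) * d0(v) + u * d1(v)
    corners = ((1 - u) * (1 - v) * c0(0) + u * (1 - v) * c0(1)
               + (1 - u) * v * c1(0) + u * v * c1(1))
    return ruled_u + ruled_v - corners

# the patch interpolates all four boundaries, e.g. S(u, 0) == c0(u)
print(coons(0.5, 0.0), c0(0.5))
```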

  6. Automatic Building Information Model Query Generation

    Jiang, Yufei; Yu, Nan; Ming, Jiang; Lee, Sanghoon; DeGraw, Jason; Yen, John; Messner, John I.; Wu, Dinghao


    Energy efficient building design and construction calls for extensive collaboration between different subfields of the Architecture, Engineering and Construction (AEC) community. Performing building design and construction engineering raises challenges on data integration and software interoperability. Using Building Information Modeling (BIM) data hub to host and integrate building models is a promising solution to address those challenges, which can ease building design information management. However, the partial model query mechanism of current BIM data hub collaboration model has several limitations, which prevents designers and engineers to take advantage of BIM. To address this problem, we propose a general and effective approach to generate query code based on a Model View Definition (MVD). This approach is demonstrated through a software prototype called QueryGenerator. By demonstrating a case study using multi-zone air flow analysis, we show how our approach and tool can help domain experts to use BIM to drive building design with less labour and lower overhead cost.

  7. Automatic 3D modeling of the urban landscape

    I. Esteban; J. Dijk; F. Groen


    In this paper we present a fully automatic system for building 3D models of urban areas at the street level. We propose a novel approach for the accurate estimation of the scale consistent camera pose given two previous images. We employ a new method for global optimization and use a novel sampling

  9. Small-Scale Helicopter Automatic Autorotation: Modeling, Guidance, and Control

    Taamallah, S.


    Our research objective consists in developing a, model-based, automatic safety recovery system, for a small-scale helicopter Unmanned Aerial Vehicle (UAV) in autorotation, i.e. an engine OFF flight condition, that safely flies and lands the helicopter to a pre-specified ground location. In pursuit o

  10. Reference models for advanced e-services

    Mendes, M.J.; Vissers, C.A.; Suomi, R.; Lankhorst, M.M.; Passos, C.; Slagter, R.J.


    Reference models (RMs) capitalize on the experience that key functions and relationships determine a system’s main design structure which has to be established before other design details can be settled. As such RMs can play an important role in designing complex (distributed) systems, in allocating

  11. A numerical reference model for themomechanical subduction

    Quinquis, Matthieu; Chemia, Zurab; Tosi, Nicola


    Our reference model represents ocean-ocean convergence and describes initial geometries and lithological stratification for a three-layered subducting slab and overriding plate along with their respective flow laws and chemical composition. It also includes kinematic and thermal boundary conditions...

  12. Dissociating Working Memory Updating and Automatic Updating: The Reference-Back Paradigm

    Rac-Lubashevsky, Rachel; Kessler, Yoav


    Working memory (WM) updating is a controlled process through which relevant information in the environment is selected to enter the gate to WM and substitute its contents. We suggest that there is also an automatic form of updating, which influences performance in many tasks and is primarily manifested in reaction time sequential effects. The goal…

  13. Reference spectral signature selection using density-based cluster for automatic oil spill detection in hyperspectral images.

    Liu, Delian; Zhang, Jianqi; Wang, Xiaorui


    Reference spectral signature selection is fundamental for automatic oil spill detection. To address this issue, a new approach is proposed here, which employs density-based clustering to select a specific spectral signature from a hyperspectral image. The paper first introduces the framework of oil spill detection from hyperspectral images, indicating that detecting oil spills requires a reference spectral signature of the oil spill, parameters of the background, and a target detection algorithm. Based on this framework, we give the new reference spectral signature selection approach in detail. Then, we demonstrate the estimation of the background parameters according to the reflectance of seawater in the infrared bands. Next, the conventional adaptive cosine estimator (ACE) algorithm is employed to achieve oil spill detection. Finally, the proposed approach is tested on several practical hyperspectral images collected during the Deepwater Horizon oil spill. The experimental results show that the new approach can automatically select the reference spectral signature of oil spills from hyperspectral images and has high detection performance.
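
    The ACE detection step named above can be sketched in a few lines of numpy; the reference signature, background statistics, implanted targets and threshold below are synthetic assumptions, not values from the paper.

```python
# Adaptive cosine estimator (ACE): squared cosine angle between a pixel and the
# reference signature in the whitened (background-covariance) space.
import numpy as np

rng = np.random.default_rng(3)
bands, n_pixels = 20, 500

# background statistics (in practice estimated from sea-water pixels)
mu = rng.normal(0.3, 0.05, bands)                 # background mean spectrum
cov = 0.01 * np.eye(bands)                        # background covariance
cov_inv = np.linalg.inv(cov)

s = rng.normal(0.6, 0.05, bands) - mu             # reference oil signature (mean-removed)

pixels = rng.multivariate_normal(mu, cov, n_pixels)
pixels[:25] += 0.5 * s                            # implant 25 "oil" pixels

def ace(x):
    d = x - mu
    num = (s @ cov_inv @ d) ** 2
    return num / ((s @ cov_inv @ s) * (d @ cov_inv @ d))

scores = np.array([ace(x) for x in pixels])
detections = np.flatnonzero(scores > 0.5)         # assumed detection threshold
print("detected pixels:", len(detections), "of 25 implanted targets")
```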

  14. Global daily reference evapotranspiration modeling and evaluation

    Senay, G.B.; Verdin, J.P.; Lietzow, R.; Melesse, Assefa M.


    Accurate and reliable evapotranspiration (ET) datasets are crucial in regional water and energy balance studies. Due to the complex instrumentation requirements, actual ET values are generally estimated from reference ET values by adjustment factors using coefficients for water stress and vegetation conditions, commonly referred to as crop coefficients. Until recently, the modeling of reference ET has been solely based on important weather variables collected from weather stations that are generally located in selected agro-climatic locations. Since 2001, the National Oceanic and Atmospheric Administration's Global Data Assimilation System (GDAS) has been producing six-hourly climate parameter datasets that are used to calculate daily reference ET for the whole globe at 1-degree spatial resolution. The U.S. Geological Survey Center for Earth Resources Observation and Science has been producing daily reference ET (ETo) since 2001, and it has been used on a variety of operational hydrological models for drought and streamflow monitoring all over the world. With the increasing availability of local station-based reference ET estimates, we evaluated the GDAS-based reference ET estimates using data from the California Irrigation Management Information System (CIMIS). Daily CIMIS reference ET estimates from 85 stations were compared with GDAS-based reference ET at different spatial and temporal scales using five-year daily data from 2002 through 2006. Despite the large difference in spatial scale (point vs. ~100 km grid cell) between the two datasets, the correlations between station-based ET and GDAS-ET were very high, exceeding 0.97 on a daily basis to more than 0.99 on time scales of more than 10 days. Both the temporal and spatial correspondences in trend/pattern and magnitudes between the two datasets were satisfactory, suggesting the reliability of using GDAS parameter-based reference ET for regional water and energy balance studies in many parts of the world.

  15. Geometric model of robotic arc welding for automatic programming


    Geometric information is important for the automatic programming of arc welding robots. Complete geometric models of robotic arc welding are established in this paper. In the geometric model of the weld seam, an equation with seam length as its parameter is introduced to represent any weld seam, and a method to determine discrete programming points on a weld seam is presented. In the geometric model of the weld workpiece, three classes of primitives and a CSG tree are used to describe the weld workpiece, and a detailed data structure is presented. For the pose transformation of the torch, the world frame, torch frame and active frame are defined, and the transformations between frames are presented. Based on these geometric models, an automatic programming software package for robotic arc welding, RAWCAD, is developed. Experiments show that the geometric models are practical and reliable.

  16. Time series modeling for automatic target recognition

    Sokolnikov, Andre


    Time series modeling is proposed for identification of targets whose images are not clearly seen. The model building takes into account air turbulence, precipitation, fog, smoke and other factors obscuring and distorting the image. The complex of library data (of images, etc.) serving as a basis for identification provides the deterministic part of the identification process, while the partial image features, distorted parts, irrelevant pieces and absence of particular features comprise the stochastic part of the target identification. The missing data approach is elaborated that helps the prediction process for the image creation or reconstruction. The results are provided.

  17. Towards automatic calibration of 2-dimensional flood propagation models

    P. Fabio


    Hydraulic models for flood propagation description are an essential tool in many fields, e.g. civil engineering, flood hazard and risk assessments, and the evaluation of flood control measures. Nowadays there are many models of different complexity available, regarding both the mathematical foundation and the spatial dimensions, and most of them are comparatively easy to operate due to sophisticated tools for model setup and control. However, the calibration of these models is still underdeveloped in contrast to other models, e.g. hydrological models or models used in ecosystem analysis. This has basically two reasons: first, the lack of relevant data against which the models can be calibrated, because flood events are very rarely monitored due to the disturbances inflicted by them and the lack of appropriate measuring equipment in place. Second, especially the two-dimensional models are computationally very demanding and therefore the use of available sophisticated automatic calibration procedures is restricted in many cases. This study takes a well-documented flood event in August 2002 at the Mulde River in Germany as an example and investigates the most appropriate calibration strategy for a full 2-D hyperbolic finite element model. The model-independent optimiser PEST, which makes automatic calibration possible, is used. The application of the parallel version of the optimiser to the model and calibration data showed that (a) it is possible to use automatic calibration in combination with a 2-D hydraulic model, and (b) equifinality of model parameterisation can also be caused by a too large number of degrees of freedom in the calibration data in contrast to a too simple model setup. In order to improve model calibration and reduce equifinality, a method was developed to identify calibration data with likely errors that obstruct model calibration.

  18. Automatization of hydrodynamic modelling in a Floreon+ system

    Ronovsky, Ales; Kuchar, Stepan; Podhoranyi, Michal; Vojtek, David


    The paper describes fully automatized hydrodynamic modelling as a part of the Floreon+ system. The main purpose of hydrodynamic modelling in disaster management is to provide an accurate overview of the hydrological situation in a given river catchment. Automatization of the process as a web service can provide immediate data based on extreme weather conditions, such as heavy rainfall, without the intervention of an expert. Such a service can be used by non-scientific users such as fire-fighter operators or representatives of a military service organizing evacuation during floods or river dam breaks. The paper describes the whole process, beginning with the definition of the schematization required by the hydrodynamic model, the gathering and processing of the necessary data for a simulation, the model itself, and the post-processing and visualization of results on a web service. The process is demonstrated on real data collected during the 2010 floods in the Moravian-Silesian region.

  19. Automatic extraction of reference gene from literature in plants based on text mining.

    He, Lin; Shen, Gengyu; Li, Fei; Huang, Shuiqing


    Real-Time Quantitative Polymerase Chain Reaction (qRT-PCR) is widely used in biological research. Selecting a stable reference gene is key to the validity of a qRT-PCR experiment. However, selecting an appropriate reference gene usually requires rigorous biological experiments for verification, making the selection process costly. The scientific literature has accumulated many results on the selection of reference genes. Therefore, mining reference genes for specific experimental conditions from the literature can provide reliable reference genes for similar qRT-PCR experiments, with the advantages of reliability, economy and efficiency. An auxiliary method for discovering reference genes from the literature is proposed in this paper, integrating machine learning, natural language processing and text mining approaches. Validity tests showed that the new method achieves good precision and recall in the extraction of reference genes and their experimental environments.




    Despite the wide recognition of the quality of the forecasts obtained by applying an ARIMA model to univariate time series, this approach is not widely used because of the lack of automatic, computerized procedures. In this work this problem is discussed and an algorithm is proposed.
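
    The kind of automatic procedure referred to above can be sketched, for instance, as a small grid search over ARIMA orders scored by AIC; the following Python example (using statsmodels on a synthetic series) is an illustration of that idea, not the algorithm proposed in the paper.

        # Minimal sketch of an automatic ARIMA procedure: search a small (p, d, q)
        # grid and keep the order with the lowest AIC. The series here is synthetic.
        import itertools
        import numpy as np
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(1)
        series = np.cumsum(rng.normal(size=200)) + 0.5 * np.sin(np.arange(200) / 6.0)

        best_order, best_aic = None, np.inf
        for p, d, q in itertools.product(range(3), range(2), range(3)):
            try:
                res = ARIMA(series, order=(p, d, q)).fit()
            except Exception:
                continue                      # skip orders that fail to converge
            if res.aic < best_aic:
                best_order, best_aic = (p, d, q), res.aic

        print("selected order:", best_order, "AIC:", round(best_aic, 1))
        forecast = ARIMA(series, order=best_order).fit().forecast(steps=10)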

  1. Automatic balancing of 3D models

    Christiansen, Asger Nyman; Schmidt, Ryan; Bærentzen, Jakob Andreas


    3D printing technologies allow for more diverse shapes than are possible with molds and the cost of making just one single object is negligible compared to traditional production methods. However, not all shapes are suitable for 3D print. One of the remaining costs is therefore human time spent......, in these cases, we will apply a rotation of the object which only deforms the shape a little near the base. No user input is required but it is possible to specify manufacturing constraints related to specific 3D print technologies. Several models have successfully been balanced and printed using both polyjet...

  2. Semi-automatic term extraction for the African languages, with special reference to Northern Sotho

    Elsabé Taljard; Gilles-Maurice de Schryver


    Abstract: Worldwide, semi-automatically extracting terms from corpora is becoming the norm for the compilation of terminology lists, term banks or dictionaries for special purposes. If African-language terminologists are willing to take their rightful place in the new millennium, they must not only take cognisance of this trend but also be ready to implement the new technology. In this article it is advocated that the best way to do the latter two at this stage is to opt for computat...

  3. Behavioral Reference Model for Pervasive Healthcare Systems.

    Tahmasbi, Arezoo; Adabi, Sahar; Rezaee, Ali


    The emergence of mobile healthcare systems is an important outcome of applying pervasive computing concepts for medical care purposes. These systems provide the facilities and infrastructure required for automatic and ubiquitous sharing of medical information. Healthcare systems have a dynamic structure and configuration, therefore having an architecture is essential for the future development of these systems. The need for increased response rates, the problem of limited storage, accelerated processing, and the tendency toward creating a new generation of healthcare system architectures highlight the need for further focus on cloud-based solutions for data transfer and data processing challenges. Integrity and reliability of healthcare systems are of critical importance, as even the slightest error may put patients' lives in danger; therefore acquiring a behavioral model for these systems and developing the tools required to model their behavior are of significant importance. High-level designs may contain flaws, therefore the system must be fully examined for different scenarios and conditions. This paper presents a software architecture for the development of healthcare systems based on pervasive computing concepts, and then models the behavior of the described system. A set of solutions is then proposed to improve the design's qualitative characteristics, including availability, interoperability and performance.

  4. Automatic Determination of the Conic Coronal Mass Ejection Model Parameters

    Pulkkinen, A.; Oates, T.; Taktakishvili, A.


    Characterization of the three-dimensional structure of solar transients using incomplete plane-of-sky data is a difficult problem whose solutions have potential societal benefit in terms of space weather applications. In this paper transients are characterized in three dimensions by means of the conic coronal mass ejection (CME) approximation. A novel method for the automatic determination of cone model parameters from observed halo CMEs is introduced. The method uses both standard image processing techniques to extract the CME mass from white-light coronagraph images and a novel inversion routine providing the final cone parameters. A bootstrap technique is used to provide model parameter distributions. When combined with heliospheric modeling, the cone model parameter distributions will provide direct means for ensemble predictions of transient propagation in the heliosphere. An initial validation of the automatic method is carried out by comparison to manually determined cone model parameters. It is shown using 14 halo CME events that there is reasonable agreement, especially between the heliocentric locations of the cones derived with the two methods. It is argued that both the heliocentric locations and the opening half-angles of the automatically determined cones may be more realistic than those obtained from the manual analysis.
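
    The bootstrap step can be illustrated with a toy example: the measurements entering the fit are resampled with replacement and a simple stand-in estimator is refitted to each replicate to obtain parameter distributions. The fit below is hypothetical and much simpler than the paper's inversion routine.

        # Minimal sketch of the bootstrap step: resample the measurements used in the
        # cone fit and refit each replicate to obtain parameter distributions. The
        # "fit" here is a hypothetical toy (mean position angle and angular width),
        # not the paper's inversion routine.
        import numpy as np

        rng = np.random.default_rng(2)
        position_angles = rng.normal(loc=120.0, scale=18.0, size=400)   # degrees, synthetic

        def fit_cone(sample):
            centre = sample.mean()                       # heliocentric direction proxy
            half_angle = 2.0 * sample.std(ddof=1)        # opening half-angle proxy
            return centre, half_angle

        replicates = np.array([
            fit_cone(rng.choice(position_angles, size=position_angles.size, replace=True))
            for _ in range(1000)
        ])

        centre_lo, centre_hi = np.percentile(replicates[:, 0], [2.5, 97.5])
        print("direction 95% interval:", round(centre_lo, 1), "-", round(centre_hi, 1), "deg")
        print("half-angle mean:", round(replicates[:, 1].mean(), 1), "deg")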

  5. Automatic Texture Mapping of Architectural and Archaeological 3d Models

    Kersten, T. P.; Stallmann, D.


    Today, detailed, complete and exact 3D models with photo-realistic textures are increasingly demanded for numerous applications in architecture and archaeology. Manual texture mapping of 3D models from digital photographs with software packages such as Maxon Cinema 4D, Autodesk 3Ds Max or Maya still requires a complex and time-consuming workflow. So, procedures for automatic texture mapping of 3D models are in demand. In this paper two automatic procedures are presented. The first procedure generates 3D surface models with textures by web services, while the second procedure textures already existing 3D models with the software tmapper. The program tmapper is based on the Multi Layer 3D image (ML3DImage) algorithm and developed in the programming language C++. The studies show that the visibility analysis using the ML3DImage algorithm is not sufficient to obtain acceptable results for automatic texture mapping. To overcome the visibility problem the Point Cloud Painter algorithm in combination with the Z-buffer procedure will be applied in the future.

  6. MEMOPS: data modelling and automatic code generation.

    Fogh, Rasmus H; Boucher, Wayne; Ionides, John M C; Vranken, Wim F; Stevens, Tim J; Laue, Ernest D


    In recent years the amount of biological data has exploded to the point where much useful information can only be extracted by complex computational analyses. Such analyses are greatly facilitated by metadata standards, both in terms of the ability to compare data originating from different sources, and in terms of exchanging data in standard forms, e.g. when running processes on a distributed computing infrastructure. However, standards thrive on stability whereas science tends to constantly move, with new methods being developed and old ones modified. Therefore maintaining both metadata standards, and all the code that is required to make them useful, is a non-trivial problem. Memops is a framework that uses an abstract definition of the metadata (described in UML) to generate internal data structures and subroutine libraries for data access (application programming interfaces--APIs--currently in Python, C and Java) and data storage (in XML files or databases). For the individual project these libraries obviate the need for writing code for input parsing, validity checking or output. Memops also ensures that the code is always internally consistent, massively reducing the need for code reorganisation. Across a scientific domain a Memops-supported data model makes it easier to support complex standards that can capture all the data produced in a scientific area, share them among all programs in a complex software pipeline, and carry them forward to deposition in an archive. The principles behind the Memops generation code will be presented, along with example applications in Nuclear Magnetic Resonance (NMR) spectroscopy and structural biology.

  7. Automatic paper sliceform design from 3D solid models.

    Le-Nguyen, Tuong-Vu; Low, Kok-Lim; Ruiz, Conrado; Le, Sang N


    A paper sliceform or lattice-style pop-up is a form of papercraft that uses two sets of parallel paper patches slotted together to make a foldable structure. The structure can be folded flat, as well as fully opened (popped-up) to make the two sets of patches orthogonal to each other. Automatic design of paper sliceforms is still not supported by existing computational models and remains a challenge. We propose novel geometric formulations of valid paper sliceform designs that consider the stability, flat-foldability and physical realizability of the designs. Based on a set of sufficient construction conditions, we also present an automatic algorithm for generating valid sliceform designs that closely depict the given 3D solid models. By approximating the input models using a set of generalized cylinders, our method significantly reduces the search space for stable and flat-foldable sliceforms. To ensure the physical realizability of the designs, the algorithm automatically generates slots or slits on the patches such that no two cycles embedded in two different patches are interlocking each other. This guarantees local pairwise assembility between patches, which is empirically shown to lead to global assembility. Our method has been demonstrated on a number of example models, and the output designs have been successfully made into real paper sliceforms.

  8. Automatic computational models of acoustical category features: Talking versus singing

    Gerhard, David


    The automatic discrimination between acoustical categories has been an increasingly interesting problem in the fields of computer listening, multimedia databases, and music information retrieval. A system is presented which automatically generates classification models, given a set of destination classes and a set of a priori labeled acoustic events. Computational models are created using comparative probability density estimations. For the specific example presented, the destination classes are talking and singing. Individual feature models are evaluated using two measures: the Kolmogorov-Smirnov distance measures feature separation, and accuracy is measured using absolute and relative metrics. The system automatically segments the event set into a user-defined number (n) of development subsets, and runs a development cycle for each set, generating n separate systems, each of which is evaluated using the above metrics to improve overall system accuracy and to reduce inherent data skew from any one development subset. Multiple features for the same acoustical categories are then compared for underlying feature overlap using cross-correlation. Advantages of automated computational models include improved system development and testing, shortened development cycle, and automation of common system evaluation tasks. Numerical results are presented relating to the talking/singing classification problem.
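
    The feature-separation measure mentioned above can be illustrated directly: the Kolmogorov-Smirnov statistic between the per-class distributions of a single feature, here computed with SciPy on synthetic talking/singing feature values.

        # Minimal sketch of scoring a single acoustic feature by class separation:
        # the Kolmogorov-Smirnov statistic between the feature's empirical
        # distributions for the two classes. The feature values are synthetic.
        import numpy as np
        from scipy.stats import ks_2samp

        rng = np.random.default_rng(3)
        talking_feature = rng.normal(loc=0.20, scale=0.05, size=500)   # e.g. a vibrato measure
        singing_feature = rng.normal(loc=0.35, scale=0.07, size=500)

        ks = ks_2samp(talking_feature, singing_feature)
        print("KS distance:", round(ks.statistic, 3), "p-value:", ks.pvalue)
        # A KS distance near 1 indicates well-separated class distributions;
        # near 0 indicates the feature carries little class information.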

  9. A model for automatic identification of human pulse signals

    Hui-yan WANG; Pei-yong ZHANG


    This paper presents a quantitative method for automatic identification of human pulse signals. The idea is to start with the extraction of characteristic parameters and then to construct the recognition model based on Bayesian networks. To identify depth, frequency and rhythm, several parameters are proposed. To distinguish the strength and shape, which cannot be represented by one or several parameters and are hard to recognize, the main time-domain feature parameters are computed based on the feature points of the pulse signal. Then the extracted parameters are taken as the input and five models for automatic pulse signal identification are constructed based on Bayesian networks. Experimental results demonstrate that the method is feasible and effective in recognizing depth, frequency, rhythm, strength and shape of pulse signals, which can be expected to facilitate the modernization of pulse diagnosis.

  10. Automatic Model Generation Framework for Computational Simulation of Cochlear Implantation

    Mangado Lopez, Nerea; Ceresa, Mario; Duchateau, Nicolas


    Recent developments in computational modeling of cochlear implantation are promising to study in silico the performance of the implant before surgery. However, creating a complete computational model of the patient's anatomy while including an external device geometry remains challenging. To address such a challenge, we propose an automatic framework for the generation of patient-specific meshes for finite element modeling of the implanted cochlea. First, a statistical shape model is constructed from high-resolution anatomical μCT images. Then, by fitting the statistical model to a patient's CT image, an accurate model of the patient-specific cochlea anatomy is obtained. An algorithm based on the parallel transport frame is employed to perform the virtual insertion of the cochlear implant. Our automatic framework also incorporates the surrounding bone and nerve fibers and assigns...

  11. Excel Automatic Locking Scaffold. Deactivation and Decommissioning Focus Area. OST Reference #2320

    None, None


    The United States Department of Energy (DOE) continually seeks safer and more cost-effective technologies for decontamination and decommissioning (D&D) of nuclear facilities. To this end, the Deactivation and Decommissioning Focus Area (DDFA) of the DOE’s Office of Science and Technology sponsors large-scale demonstration and deployment projects (LSDDPs). At these LSDDPs, developers and vendors of improved or innovative technologies showcase products that are potentially beneficial to the DOE’s projects and to others in the D&D community. Benefits sought include decreased health and safety risks to personnel and the environment, increased productivity, and decreased cost of operation. The Idaho National Engineering and Environmental Laboratory (INEEL) LSDDP generated a list of need statements defining specific needs or problems where improved technologies could be incorporated into ongoing D&D tasks. Although not addressed explicitly, the use of scaffolds features in several of the listed needs, including characterization, demolition, and asbestos abatement. In these areas, scaffold towers are used to access areas that cannot be reached using mechanical methods such as manlifts or mechanical platforms. In addition, the work requires more mobility than can be achieved using ladders. Because of the wide use of scaffold on D&D projects, a need exists for a scaffold system that is safer to use, faster to set up, and cheaper overall. This demonstration investigated the feasibility of using the Excel Automatic Locking Scaffold (innovative technology) to access areas where tube and clamp scaffold (baseline) is currently being used in D&D activities. Benefits expected from using the innovative technology include: decreased exposure to radiation, chemical, and/or physical hazards during scaffold erection and dismantlement; increased safety; easier use; a shortened D&D schedule; reduced cost of operation; and compatibility of the Excel scaffold with tube and clamp scaffold. This report

  12. Automatic Navigation for Rat-Robots with Modeling of the Human Guidance

    Chao Sun; Nenggan Zheng; Xinlu Zhang; Weidong Chen; Xiaoxiang Zheng


    A bio-robot system refers to an animal equipped with a Brain-Computer Interface (BCI), through which outer stimulation is delivered directly into the animal's brain to control its behaviors. The development of bio-robots suffers from the dependency on real-time guidance by human operators. Because of its inherent difficulties, there is no feasible method for automatic control of bio-robots yet. In this paper, we propose a new method to realize automatic navigation for bio-robots. A General Regression Neural Network (GRNN) is adopted to analyze and model the controlling procedure of human operations. Compared to traditional approaches with explicit controlling rules, our algorithm learns the controlling process and imitates the decision-making of human beings to steer the rat-robot automatically. In real-time navigation experiments, our method successfully controls bio-robots to follow given paths automatically and precisely. This work is significant for future applications of bio-robots and provides a new way to realize hybrid intelligent systems that combine artificial intelligence and natural biological intelligence.

  13. Automatic code generation from the OMT-based dynamic model

    Ali, J.; Tanaka, J.


    The OMT object-oriented software development methodology suggests creating three models of the system, i.e., object model, dynamic model and functional model. We have developed a system that automatically generates implementation code from the dynamic model. The system first represents the dynamic model as a table and then generates executable Java language code from it. We used inheritance for super-substate relationships. We considered that transitions relate to states in a state diagram exactly as operations relate to classes in an object diagram. In the generated code, each state in the state diagram becomes a class and each event on a state becomes an operation on the corresponding class. The system is implemented and can generate executable code for any state diagram. This makes the role of the dynamic model more significant and the job of designers even simpler.
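
    The state-to-class and event-to-operation mapping described above can be sketched in a few lines; the example below generates Python rather than Java (an intentional simplification) from a small hand-written transition table.

        # Minimal sketch of the mapping described above: a state-transition table is
        # turned into source code in which each state becomes a class and each event
        # becomes a method returning the next state. The paper generates Java; this
        # sketch emits Python for brevity.
        transitions = {                      # state -> {event: next_state}
            "Idle":    {"start": "Running"},
            "Running": {"pause": "Paused", "stop": "Idle"},
            "Paused":  {"resume": "Running", "stop": "Idle"},
        }

        def generate(table):
            lines = []
            for state, events in table.items():
                lines.append(f"class {state}:")
                for event, target in events.items():
                    lines.append(f"    def {event}(self):")
                    lines.append(f"        return {target}()")
                lines.append("")
            return "\n".join(lines)

        code = generate(transitions)
        print(code)
        exec(code)                            # defines Idle, Running, Paused in this namespace
        state = Idle().start()                # calling the "start" event yields a Running instance
        print(type(state).__name__)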

  14. Biomass Scenario Model Documentation: Data and References

    Lin, Y.; Newes, E.; Bush, B.; Peterson, S.; Stright, D.


    The Biomass Scenario Model (BSM) is a system dynamics model that represents the entire biomass-to-biofuels supply chain, from feedstock to fuel use. The BSM is a complex model that has been used for extensive analyses; the model and its results can be better understood if input data used for initialization and calibration are well-characterized. It has been carefully validated and calibrated against the available data, with data gaps filled in using expert opinion and internally consistent assumed values. Most of the main data sources that feed into the model are recognized as baseline values by the industry. This report documents data sources and references in Version 2 of the BSM (BSM2), which only contains the ethanol pathway, although subsequent versions of the BSM contain multiple conversion pathways. The BSM2 contains over 12,000 total input values, with 506 distinct variables. Many of the variables are opportunities for the user to define scenarios, while others are simply used to initialize a stock, such as the initial number of biorefineries. However, around 35% of the distinct variables are defined by external sources, such as models or reports. The focus of this report is to provide insight into which sources are most influential in each area of the supply chain.

  15. Automatic Generation of 3D Building Models with Multiple Roofs

    Kenichi Sugihara; Yoshitugu Hayashi


    Based on building footprints (building polygons) on digital maps, we are proposing the GIS and CG integrated system that automatically generates 3D building models with multiple roofs. Most building polygons' edges meet at right angles (orthogonal polygon). The integrated system partitions orthogonal building polygons into a set of rectangles and places rectangular roofs and box-shaped building bodies on these rectangles. In order to partition an orthogonal polygon, we proposed a useful polygon expression in deciding from which vertex a dividing line is drawn. In this paper, we propose a new scheme for partitioning building polygons and show the process of creating 3D roof models.

  16. Aircraft automatic flight control system with model inversion

    Smith, G. A.; Meyer, George


    A simulator study was conducted to verify the advantages of a Newton-Raphson model-inversion technique as a design basis for an automatic trajectory control system in an aircraft with highly nonlinear characteristics. The simulation employed a detailed mathematical model of the aerodynamic and propulsion system performance characteristics of a vertical-attitude takeoff and landing tactical aircraft. The results obtained confirm satisfactory control system performance over a large portion of the flight envelope. System response to wind gusts was satisfactory for various plausible combinations of wind magnitude and direction.
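
    The model-inversion idea can be illustrated with a toy nonlinear model: Newton-Raphson iteration on the control inputs until the model output matches a commanded output. The model, inputs and numbers below are invented for the sketch and are unrelated to the aircraft model of the study.

        # Minimal sketch of model inversion by Newton-Raphson iteration: given a
        # nonlinear model mapping control inputs to outputs, iterate on the inputs
        # until the model output matches the commanded trajectory point.
        import numpy as np

        def model(u):
            """Toy nonlinear plant: inputs (throttle, pitch) -> outputs (speed, climb rate)."""
            throttle, pitch = u
            return np.array([40.0 * np.sqrt(throttle) * np.cos(pitch),
                             25.0 * throttle * np.sin(pitch)])

        def jacobian(u, eps=1e-6):
            """Finite-difference Jacobian of the model."""
            J = np.zeros((2, 2))
            for j in range(2):
                du = np.zeros(2)
                du[j] = eps
                J[:, j] = (model(u + du) - model(u - du)) / (2.0 * eps)
            return J

        command = np.array([30.0, 3.0])       # desired speed and climb rate
        u = np.array([0.5, 0.1])              # initial guess for (throttle, pitch)
        for _ in range(20):
            error = command - model(u)
            if np.linalg.norm(error) < 1e-8:
                break
            u = u + np.linalg.solve(jacobian(u), error)   # Newton-Raphson update

        print("inverted controls:", np.round(u, 4), "model output:", np.round(model(u), 4))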

  18. A New Model for Automatic Raster-to-Vector Conversion

    Hesham E. ElDeeb


    Full Text Available There is a growing need for automatic digitizing, or so-called automated raster-to-vector conversion (ARVC), for maps. The benefit of ARVC is the production of maps that consume less space and are easy to search or retrieve information from. In addition, ARVC is the fundamental step in reusing old maps at a higher level of recognition. In this paper, a new model for ARVC is developed. The proposed model converts the “paper maps” into electronic formats for Geographic Information Systems (GIS) and evaluates the performance of the conversion process. To overcome the limitations of existing commercial vectorization software packages, the proposed model is customized to separate textual information, usually the cause of problems in the automatic conversion process, from the delimiting graphics of the map. The model retains the coordinates of the textual information for a later merge with the map after the conversion process. The proposed model also addresses the localization problems in ARVC through a knowledge-supported intelligent vectorization system that is designed specifically to improve the accuracy and speed of the vectorization process. Finally, the model has been implemented on a symmetric multiprocessing (SMP) architecture in order to achieve higher speed-up and performance.

  19. Variable-mass Thermodynamics Calculation Model for Gas-operated Automatic Weapon

    陈建彬; 吕小强


    Aiming at the fact that energy and mass exchange phenomena exist between the barrel and the gas-operated device of an automatic weapon, and in order to describe its interior ballistics and the dynamic characteristics of the gas-operated device accurately, a new variable-mass thermodynamics model is built. It is used to calculate the automatic mechanism velocity of a certain automatic weapon; the calculation results agree well with the experimental results, which validates the model. The influences of structure parameters on the gas-operated device's dynamic characteristics are discussed. It is shown that the model is valuable for the design and accurate performance prediction of gas-operated automatic weapons.

  20. Automatic Relevance Determination for multi-way models

    Mørup, Morten; Hansen, Lars Kai


    Estimating the adequate number of components is an important yet difficult problem in multi-way modelling. We demonstrate how a Bayesian framework for model selection based on Automatic Relevance Determination (ARD) can be adapted to the Tucker and CP models. By assigning priors for the model...... parameters and learning the hyperparameters of these priors the method is able to turn off excess components and simplify the core structure at a computational cost of fitting the conventional Tucker/CP model. To investigate the impact of the choice of priors we based the ARD on both Laplace and Gaussian...... of components of data within the Tucker and CP structure. For the Tucker and CP model the approach performs better than heuristics such as the Bayesian Information Criterion, Akaike's Information Criterion, DIFFIT and the numerical convex hull (NumConvHull) while operating only at the cost of estimating...

  1. Progress towards a Venus reference cloud model

    Wilson, Colin; Ignatiev, Nikolay; Marcq, Emmanuel

    Venus is completely enveloped by clouds. The main cloud layers stretch from altitudes of 48 - 75 km, with additional tenuous hazes found at altitudes 30 - 100 km. Clouds play a crucial role in governing atmospheric circulation, chemistry and climate on all planets, but particularly so on Venus due to the optical thickness of the atmosphere. The European Space Agency’s Venus Express (VEx) satellite has carried out a wealth of observations of Venus clouds since its arrival at Venus in April 2006. Many VEx observations are relevant to cloud science - from imagers and spectrometers to solar, stellar and radio occultation - each covering different altitude ranges, spectral ranges and atmospheric constituents. We have formed an International Team at the International Space Science Institute to bring together scientists from each of the relevant Venus Express investigation teams as well as from previous missions, as well as those developing computational and analytical models of clouds and hazes. The aims of the project are (1) to create self-consistent reference cloud/haze models which capture not only a mean cloud structure but also its main modes of variability; and (2) to bring together modelers and observers, to reach an understanding of clouds and hazes on Venus which matches all observables and is physically consistent. Our approach is to first to assemble an averaged cloud profile for low latitudes, showing how cloud number abundances and other observables vary as a function of altitude, consistent with all available observations. In a second step, we will expand this work to produce a reference cloud profile which varies with latitude and local solar time, as well as optical thickness of the cloud. We will present our status in progressing towards this goal. We acknowledge the support of the International Space Science Institute of Berne, Switzerland, in hosting our Team’s meetings.

  2. An Automatic Registration Algorithm for 3D Maxillofacial Model

    Qiu, Luwen; Zhou, Zhongwei; Guo, Jixiang; Lv, Jiancheng


    3D image registration aims at aligning two 3D data sets in a common coordinate system, and has been widely used in computer vision, pattern recognition and computer-assisted surgery. One challenging problem in 3D registration is that point-wise correspondences between two point sets are often unknown a priori. In this work, we develop an automatic algorithm for the registration of 3D maxillofacial models, including the facial surface model and the skull model. Our proposed registration algorithm can achieve a good alignment between a partial and a whole maxillofacial model in spite of ambiguous matching, which has a potential application in oral and maxillofacial reparative and reconstructive surgery. The proposed algorithm includes three steps: (1) 3D-SIFT feature extraction and FPFH descriptor construction; (2) feature matching using SAC-IA; (3) coarse rigid alignment and refinement by ICP. Experiments on facial surfaces and mandible skull models demonstrate the efficiency and robustness of our algorithm.
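
    Of the three steps, the ICP refinement is compact enough to sketch; the example below implements a plain point-to-point ICP with an SVD-based rigid fit on synthetic point sets, assuming the coarse feature-based alignment (3D-SIFT/FPFH/SAC-IA) has already been performed.

        # Minimal sketch of the ICP refinement step (step 3 above): alternately match
        # nearest neighbours and solve for the best rigid transform via SVD. The coarse
        # feature-based alignment is assumed done; two synthetic point sets start from
        # a rough alignment.
        import numpy as np
        from scipy.spatial import cKDTree

        def best_rigid_transform(src, dst):
            """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
            c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
            H = (src - c_src).T @ (dst - c_dst)
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                  # avoid reflections
                Vt[-1] *= -1
                R = Vt.T @ U.T
            return R, c_dst - R @ c_src

        rng = np.random.default_rng(4)
        target = rng.uniform(-1.0, 1.0, size=(500, 3))                 # "skull" surface points
        angle = np.radians(8.0)
        R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                           [np.sin(angle),  np.cos(angle), 0.0],
                           [0.0, 0.0, 1.0]])
        source = (R_true @ target.T).T + np.array([0.05, -0.02, 0.03]) # roughly pre-aligned

        tree = cKDTree(target)
        for _ in range(30):                                            # ICP iterations
            _, idx = tree.query(source)                                # nearest-neighbour match
            R, t = best_rigid_transform(source, target[idx])
            source = (R @ source.T).T + t

        print("mean point distance after ICP:",
              np.sqrt(((source - target) ** 2).sum(axis=1)).mean())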

  3. Automatic identification of model reductions for discrete stochastic simulation

    Wu, Sheng; Fu, Jin; Li, Hong; Petzold, Linda


    Multiple time scales in cellular chemical reaction systems present a challenge for the efficiency of stochastic simulation. Numerous model reductions have been proposed to accelerate the simulation of chemically reacting systems by exploiting time scale separation. However, these are often identified and deployed manually, requiring expert knowledge. This is time-consuming, prone to error, and opportunities for model reduction may be missed, particularly for large models. We propose an automatic model analysis algorithm using an adaptively weighted Petri net to dynamically identify opportunities for model reductions for both the stochastic simulation algorithm and tau-leaping simulation, with no requirement of expert knowledge input. Results are presented to demonstrate the utility and effectiveness of this approach.
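
    For context, the kind of simulation whose reductions are being automated can be sketched with Gillespie's direct method on a small fast/slow reaction system; the rates and species below are invented, and the sketch does not implement the paper's Petri-net analysis.

        # Minimal sketch of the stochastic simulation algorithm (Gillespie's direct
        # method) for a fast/slow two-timescale system.
        import numpy as np

        rng = np.random.default_rng(5)

        # Reactions:  A -> B and B -> A (fast, rates k1, k2),  B -> C (slow, rate k3)
        stoichiometry = np.array([[-1, +1, 0],   # A -> B
                                  [+1, -1, 0],   # B -> A
                                  [0, -1, +1]])  # B -> C
        k = np.array([50.0, 40.0, 0.5])

        state = np.array([100, 0, 0])            # initial copy numbers of A, B, C
        t, t_end = 0.0, 5.0
        while t < t_end:
            propensities = k * np.array([state[0], state[1], state[1]])
            total = propensities.sum()
            if total == 0.0:
                break
            t += rng.exponential(1.0 / total)             # time to next reaction
            reaction = rng.choice(3, p=propensities / total)
            state = state + stoichiometry[reaction]

        print("final state (A, B, C):", state)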

  4. Automatic Generation of Symbolic Model for Parameterized Synchronous Systems

    Wei-Wen Xu


    With the purpose of making the verification of parameterized systems more general and easier, in this paper a new and intuitive language, PSL (Parameterized-system Specification Language), is proposed to specify a class of parameterized synchronous systems. From a PSL script, an automatic method is proposed to generate a constraint-based symbolic model. The model can concisely and symbolically represent collections of global states by counting the number of processes in a given state. Moreover, a theorem has been proved that there is a simulation relation between the original system and its symbolic model. Since abstract and symbolic techniques are exploited in the symbolic model, the state-explosion problem of traditional verification methods is effectively avoided. Based on the proposed symbolic model, a reachability analysis procedure is implemented using ANSI C++ on a UNIX platform. Thus, a complete tool for verifying parameterized synchronous systems is obtained and tested on some cases. The experimental results show that the method is satisfactory.

  5. Establishing a business process reference model for Universities

    Svensson, Carsten; Hvolby, Hans-Henrik


    process enablement, collection of performance data and systematic reuse of existing community experience and knowledge. For these reasons reference models such as the SCOR (Supply Chain Operations Reference), DCOR (Design Chain Operations Reference) and ITIL (Information Technology Infrastructure Library...

  6. Respiratory motion correction of liver contrast-enhanced ultrasound sequences by selecting reference image automatically

    Zhang, Ji; Zhang, Yan-Rong; Chen, Juan; Chen, Xiao-Hui; Zhong, Xiao-Li


    Objective: Respiratory motion correction is necessary for the quantitative analysis of liver contrast-enhanced ultrasound (CEUS) image sequences; however, the traditional manual selection of the reference image affects the accuracy of the respiratory motion correction. Methods: First, the original high-dimensional ultrasound gray-level image data were mapped into a two-dimensional space using Laplacian Eigenmaps (LE). Then, cluster analysis was performed using K-means, and the optimal ultrasound reference image was obtained for respiratory motion correction. Finally, the proposed method was validated on 18 CEUS cases of VX2 tumor in rabbit liver, and its effectiveness was demonstrated. Results: After correction, the time-intensity curves extracted from the region of interest of the CEUS image sequences became smoother. Before correction, the average total mean structural similarity (TMSSIM) and the average mean correlation coefficient (MCC) of the image sequences were 0.45+/-0.11 and 0.67+/-0.16, respectively. After correction, the two parameters increased significantly (P<0.001), to 0.59+/-0.11 and 0.81+/-0.11, respectively. The average deviation value (DV) of the image sequences before correction was 92.16+/-18.12; after correction, the average was reduced to one-third of the original value. Conclusions: The proposed respiratory motion correction method improves the accuracy of quantitative CEUS analysis compared with the traditional manual selection of the reference image. The method is simple to operate and has potential for clinical application.
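
    A minimal sketch of the described selection pipeline is given below: frames of a synthetic sequence are embedded in 2-D with Laplacian Eigenmaps (scikit-learn's SpectralEmbedding), clustered with K-means, and the frame nearest the centroid of the largest cluster is taken as the reference; the cluster count and the synthetic data are assumptions of the example.

        # Minimal sketch of reference-image selection: flatten each frame, embed the
        # sequence in 2-D with Laplacian Eigenmaps, cluster with K-means, and pick the
        # frame closest to the centroid of the largest cluster.
        import numpy as np
        from sklearn.manifold import SpectralEmbedding
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(6)
        n_frames, h, w = 60, 32, 32
        phase = np.sin(np.linspace(0, 6 * np.pi, n_frames))            # breathing-like motion
        frames = np.stack([
            rng.normal(0.0, 0.05, (h, w)) + np.roll(np.eye(h, w), int(3 * p), axis=0)
            for p in phase
        ])
        X = frames.reshape(n_frames, -1)                               # frames as row vectors

        embedding = SpectralEmbedding(n_components=2, n_neighbors=10).fit_transform(X)
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(embedding)

        largest = np.bincount(labels).argmax()
        members = np.where(labels == largest)[0]
        centroid = embedding[members].mean(axis=0)
        reference_idx = members[np.argmin(np.linalg.norm(embedding[members] - centroid, axis=1))]
        print("selected reference frame:", reference_idx)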

  7. Automatic selection of reference taxa for protein-protein interaction prediction with phylogenetic profiling

    Simonsen, Martin; Maetschke, S.R.; Ragan, M.A.


    Motivation: Phylogenetic profiling methods can achieve good accuracy in predicting protein–protein interactions, especially in prokaryotes. Recent studies have shown that the choice of reference taxa (RT) is critical for accurate prediction, but with more than 2500 fully sequenced taxa publicly......: We present three novel methods for automating the selection of RT, using machine learning based on known protein–protein interaction networks. One of these methods in particular, Tree-Based Search, yields greatly improved prediction accuracies. We further show that different methods for constituting...

  8. Fast tracking ICT infrastructure requirements and design, based on Enterprise Reference Architecture and matching Reference Models

    Bernus, Peter; Baltrusch, Rob; Vesterager, Johan


    The Globemen Consortium has developed the virtual enterprise reference architecture and methodology (VERAM), based on GERAM, and developed reference models for virtual enterprise management and joint mission delivery. The planned virtual enterprise capability includes the areas of sales and market...

  9. Therapeutic concordance of two portable monitors and two routine automatic oral anticoagulant monitoring systems using as reference the manual prothrombin time technique.

    Vacas, Marta; Lafuente, Pedro José; Unanue, Iciar; Santos, Mónica; Iriarte, Jose Antonio


    Two models of capillary blood prothrombin time (PT) monitoring systems were evaluated for analytical performance and then compared with two routine PT systems using the reference manual technique and a high-sensitivity thromboplastin. Two sets of 60 and 80 plasmas were analyzed from anticoagulated patients stabilized over 3 months in an INR range 2-3.5 for therapy. Capillary PT determination was performed in two portable monitors, CoaguChek S and CoaguChek PT (Roche Diagnostics), and plasma automatic methods were Neoplastine/STA (Diagnostics Stago) and PT-FibrinogenHsPlus/ACL7000 (Instrumental Laboratories). Thromboplastin Bilbao (TBi), an in-house high-sensitivity rabbit thromboplastin (ISI=1.08), recommended as the reference reagent by an External Spanish Oral Anticoagulant Quality Assessment, was used in the PT manual technique. The two monitors' coefficients of correlation with the reference system were 0.74 for CoaguChek S and 0.81 for CoaguChek PT. The automatic routine systems showed a correlation of 0.92 (Neoplastine/STA) and 0.91 (PT-FbHsPlus/ACL7000). Clinical agreement expressed as the percentage of simple correlation ranged between 75.0% (CoaguChek S) and 88.9% (Neoplastine/STA). The systems having the best kappa index with the manual technique were CoaguChek PT (71.9%) and the Neoplastine/STA system (73%). The routine PT management systems exhibited better correlation and percentage of concordance when using the TBi/manual technique than did the portable monitors, which moreover performed unequally in this regard.

  10. Model Considerations for Memory-based Automatic Music Transcription

    Albrecht, Štěpán; Šmídl, Václav


    The problem of automatic music description is considered. The recorded music is modeled as a superposition of known sounds from a library weighted by unknown weights. Similar observation models are commonly used in statistics and machine learning. Many methods for estimation of the weights are available. These methods differ in the assumptions imposed on the weights. In Bayesian paradigm, these assumptions are typically expressed in the form of prior probability density function (pdf) on the weights. In this paper, commonly used assumptions about music signal are summarized and complemented by a new assumption. These assumptions are translated into pdfs and combined into a single prior density using combination of pdfs. Validity of the model is tested in simulation using synthetic data.
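
    The superposition model can be illustrated by solving for non-negative library weights with ordinary non-negative least squares; this is a stand-in for the Bayesian estimation with priors discussed in the paper, and the library spectra below are synthetic.

        # Minimal sketch of the superposition model: the observation is modelled as a
        # library of known sound spectra weighted by unknown non-negative weights.
        import numpy as np
        from scipy.optimize import nnls

        rng = np.random.default_rng(7)
        n_bins, n_library = 128, 12
        library = np.abs(rng.normal(size=(n_bins, n_library)))     # spectra of library sounds

        true_weights = np.zeros(n_library)
        true_weights[[2, 5, 9]] = [0.8, 0.3, 1.1]                   # three sounds are active
        observation = library @ true_weights + rng.normal(0.0, 0.01, n_bins)

        weights, residual = nnls(library, observation)
        print("estimated active sounds:", np.where(weights > 0.05)[0],
              "residual:", round(residual, 3))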

  11. Automatic Modelling of Photographed Parts in CATIA CAD Environment

    Yunus Kayır


    Full Text Available In this study, a system was developed that can automatically model parts in the CATIA CAD program using photographic images of the parts. The system, called ImageCAD, can use any kind of photograph taken of prismatic and cylindrical parts. It can recognize geometric entities, such as lines, circles, arcs and free curves, in the image according to the selection of the user. ImageCAD saves the generated knowledge of the entities in a format suitable for the CATIA program. ImageCAD is controlled by menus created in the CATIA interface and turns the desired photographs into 3D CAD models. The obtained CAD models have a structure suitable for all CATIA applications. The Visual Basic programming language was used to implement the system.

  12. Fast Automatic Precision Tree Models from Terrestrial Laser Scanner Data

    Mathias Disney


    Full Text Available This paper presents a new method for constructing, quickly and automatically, precision tree models from point clouds of the trunk and branches obtained by terrestrial laser scanning. The input of the method is a point cloud of a single tree scanned from multiple positions. The surface of the visible parts of the tree is robustly reconstructed by making a flexible cylinder model of the tree. The thorough quantitative model also records the topological branching structure. In this paper, every major step of the whole model reconstruction process, from the input to the finished model, is presented in detail. The model is constructed by a local approach in which the point cloud is covered with small sets corresponding to connected surface patches on the tree surface. The neighbor relations and geometrical properties of these cover sets are used to reconstruct the details of the tree and, step by step, the whole tree. The point cloud and the sets are segmented into branches, after which the branches are modeled as collections of cylinders. From the model, the branching structure and size properties, such as volume and branch size distributions, can be approximated for the whole tree or some of its parts. The approach is validated using both measured and modeled terrestrial laser scanner data from real trees and detailed 3D models. The results show that the method allows an easy extraction of various tree attributes from terrestrial or mobile laser scanning point clouds.

  13. An automatic fault management model for distribution networks

    Lehtonen, M.; Haenninen, S. [VTT Energy, Espoo (Finland); Seppaenen, M. [North-Carelian Power Co (Finland); Antila, E.; Markkila, E. [ABB Transmit Oy (Finland)


    An automatic computer model, called the FI/FL-model, for fault location, fault isolation and supply restoration is presented. The model works as an integrated part of the substation SCADA, the AM/FM/GIS system and the medium-voltage distribution network automation systems. In the model, three different techniques are used for fault location. First, by comparing the measured fault current to the computed one, an estimate of the fault distance is obtained. This information is then combined, in order to find the actual fault point, with the data obtained from the fault indicators at the line branching points. As a third technique, in the absence of better fault location data, statistical information on line section fault frequencies can also be used. Fuzzy logic is used to combine the different fault location information. As a result, probability weights for the fault being located in the different line sections are obtained. Once the faulty section is identified, it is automatically isolated by remote control of line switches. Then the supply is restored to the remaining parts of the network. If needed, reserve connections from other adjacent feeders can also be used. During the restoration process, the technical constraints of the network are checked. Among these are the load-carrying capacity of line sections, voltage drop and the settings of relay protection. If there are several possible network topologies, the model selects the technically best alternative. The FI/FL-model has been in trial use at two substations of the North-Carelian Power Company since November 1996. This chapter lists the practical experiences during the test use period. Also the benefits of this kind of automation are assessed and future developments are outlined.
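
    A much-simplified sketch of combining the three evidence sources into per-section probability weights is shown below; a normalised product of scores stands in for the fuzzy-logic combination of the FI/FL-model, and the section scores are invented.

        # Minimal sketch of combining three fault-location evidence sources into
        # per-section probability weights (invented data, simplified combination rule).
        import numpy as np

        sections = ["S1", "S2", "S3", "S4"]

        # Evidence 1: match between measured and computed fault current (distance estimate)
        distance_score = np.array([0.10, 0.55, 0.30, 0.05])
        # Evidence 2: fault indicators at branching points (passed / not passed)
        indicator_score = np.array([0.05, 0.45, 0.45, 0.05])
        # Evidence 3: historical fault frequency of each line section
        frequency_score = np.array([0.20, 0.30, 0.35, 0.15])

        combined = distance_score * indicator_score * frequency_score
        weights = combined / combined.sum()

        for name, w in zip(sections, weights):
            print(f"{name}: probability weight {w:.2f}")
        print("most likely faulty section:", sections[int(np.argmax(weights))])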

  14. The International Reference Ionosphere: Model Update 2016

    Bilitza, Dieter; Altadill, David; Reinisch, Bodo; Galkin, Ivan; Shubin, Valentin; Truhlik, Vladimir


    The International Reference Ionosphere (IRI) is recognized as the official standard for the ionosphere (COSPAR, URSI, ISO) and is widely used for a multitude of different applications as evidenced by the many papers in science and engineering journals that acknowledge the use of IRI (e.g., about 11% of all Radio Science papers each year). One of the shortcomings of the model has been the dependence of the F2 peak height modeling on the propagation factor M(3000)F2. With the 2016 version of IRI, two new models will be introduced for hmF2 that were developed directly based on hmF2 measurements by ionosondes [Altadill et al., 2013] and by COSMIC radio occultation [Shubin, 2015], respectively. In addition IRI-2016 will include an improved representation of the ionosphere during the very low solar activities that were reached during the last solar minimum in 2008/2009. This presentation will review these and other improvements that are being implemented with the 2016 version of the IRI model. We will also discuss recent IRI workshops and their findings and results. One of the most exciting new projects is the development of the Real-Time IRI [Galkin et al., 2012]. We will discuss the current status and plans for the future. Altadill, D., S. Magdaleno, J.M. Torta, E. Blanch (2013), Global empirical models of the density peak height and of the equivalent scale height for quiet conditions, Advances in Space Research 52, 1756-1769, doi:10.1016/j.asr.2012.11.018. Galkin, I.A., B.W. Reinisch, X. Huang, and D. Bilitza (2012), Assimilation of GIRO Data into a Real-Time IRI, Radio Science, 47, RS0L07, doi:10.1029/2011RS004952. Shubin V.N. (2015), Global median model of the F2-layer peak height based on ionospheric radio-occultation and ground-based Digisonde observations, Advances in Space Research 56, 916-928, doi:10.1016/j.asr.2015.05.029.

  15. An automatic invisible axion in the SUSY preon model

    Babu, K. S.; Choi, Kiwoon; Pati, J. C.; Zhang, X.


    It is shown that the recently proposed preon model which provides a unified origin of the diverse mass scales and an explanation of family replication as well as of inter-family mass-hierarchy, naturally possesses a Peccei-Quinn (PQ) symmetry whose spontaneous breaking leads to an automatic invisible axion. Existence of the PQ-symmetry is simply a consequence of supersymmetry and the requirement of minimality in the field-content and interactions, which proposes that the lagrangian should possess only those terms which are dictated by the gauge principle and no others. In addition to the axion, the model also generates two superlight Goldstone bosons and their superpartners all of which are cosmologically safe.

  16. Automatically extracting sheet-metal features from solid model

    刘志坚; 李建军; 王义林; 李材元; 肖祥芷


    With the development of modern industry, sheet-metal parts in mass production have been widely applied in the mechanical, communication, electronics, and light industries in recent decades; but the advances in sheet-metal part design and manufacturing remain too slow compared with the increasing importance of sheet-metal parts in modern industry. This paper proposes a method for automatically extracting features from an arbitrary solid model of sheet-metal parts, whose characteristics are used for classification and graph-based representation of the sheet-metal features to extract the features embodied in a sheet-metal part. The feature extraction process can be divided into validity checking of the model geometry, feature matching, and feature relationship analysis. Since the extracted features include abundant geometry and engineering information, they will be effective for downstream applications such as feature rebuilding and stamping process planning.

  18. Automatic generation of matrix element derivatives for tight binding models

    Elena, Alin M.; Meister, Matthias


    Tight binding (TB) models are one approach to the quantum mechanical many-particle problem. An important role in TB models is played by hopping and overlap matrix elements between the orbitals on two atoms, which of course depend on the relative positions of the atoms involved. This dependence can be expressed with the help of Slater-Koster parameters, which are usually taken from tables. Recently, a way to generate these tables automatically was published. If TB approaches are applied to simulations of the dynamics of a system, also derivatives of matrix elements can appear. In this work we give general expressions for first and second derivatives of such matrix elements. Implemented in a tight binding computer program, like, for instance, DINAMO, they obviate the need to type all the required derivatives of all occurring matrix elements by hand.
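
    The generation of such derivatives can be sketched with symbolic differentiation; the example below assumes an exponential-times-polynomial hopping element (not the paper's parameterisation), differentiates it twice with SymPy and compiles the results to numerical functions.

        # Minimal sketch of generating matrix-element derivatives automatically: an
        # assumed distance-dependent hopping element is differentiated symbolically
        # and compiled to fast numerical functions.
        import sympy as sp

        r = sp.Symbol("r", positive=True)
        V0, a, r0 = sp.symbols("V0 a r0", positive=True)

        hopping = V0 * (r / r0) ** 2 * sp.exp(-a * (r - r0))   # example hopping element

        d1 = sp.diff(hopping, r)            # first derivative, needed e.g. for forces
        d2 = sp.diff(hopping, r, 2)         # second derivative

        print(sp.simplify(d1))
        print(sp.simplify(d2))

        # Compile to NumPy-callable functions for use inside a dynamics loop
        f, f1, f2 = (sp.lambdify((r, V0, a, r0), expr, "numpy") for expr in (hopping, d1, d2))
        print(f(2.5, 1.0, 1.2, 2.0), f1(2.5, 1.0, 1.2, 2.0), f2(2.5, 1.0, 1.2, 2.0))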

  19. Virtual Reference Transcript Analysis: A Few Models.

    Smyth, Joanne


    Describes the introduction of virtual, or digital, reference service at the University of New Brunswick libraries. Highlights include analyzing transcripts from LIVE (Library Information in a Virtual Environment); reference question types; ACRL (Association of College and Research Libraries) information literacy competency standards; and the Big 6…

  20. Quantitative evaluation of six graph based semi-automatic liver tumor segmentation techniques using multiple sets of reference segmentation

    Su, Zihua; Deng, Xiang; Chefd'hotel, Christophe; Grady, Leo; Fei, Jun; Zheng, Dong; Chen, Ning; Xu, Xiaodong


    Graph based semi-automatic tumor segmentation techniques have demonstrated great potential in efficiently measuring tumor size from CT images. Comprehensive and quantitative validation is essential to ensure the efficacy of graph based tumor segmentation techniques in clinical applications. In this paper, we present a quantitative validation study of six graph based 3D semi-automatic tumor segmentation techniques using multiple sets of expert segmentation. The six segmentation techniques are the Random Walk (RW), Watershed based Random Walk (WRW), LazySnapping (LS), GraphCut (GHC), GrabCut (GBC), and GrowCut (GWC) algorithms. The validation was conducted using clinical CT data of 29 liver tumors and four sets of expert segmentation. The performance of the six algorithms was evaluated in terms of accuracy and reproducibility. The accuracy was quantified using the Normalized Probabilistic Rand Index (NPRI), which takes into account the variation of multiple expert segmentations. The reproducibility was evaluated by the change of the NPRI from 10 different sets of user initializations. Our results from the accuracy test demonstrated that RW (0.63) showed the highest NPRI value, compared to WRW (0.61), GWC (0.60), GHC (0.58), LS (0.57), and GBC (0.27). The results from the reproducibility test indicated that GBC is more sensitive to user initialization than the other five algorithms. Compared to previous tumor segmentation validation studies using one set of reference segmentation, our evaluation methods use multiple sets of expert segmentation to address the inter- or intra-rater variability issue in ground truth annotation, and provide a quantitative assessment for comparing different segmentation algorithms.
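
    A simplified flavour of the multi-reference evaluation can be sketched by averaging the adjusted Rand index of one segmentation against several expert masks; this omits the normalization details of the NPRI and uses tiny synthetic masks.

        # Minimal sketch of evaluating one segmentation against several expert
        # references by averaging the adjusted Rand index (a simplified stand-in
        # for the NPRI used in the study).
        import numpy as np
        from sklearn.metrics import adjusted_rand_score

        rng = np.random.default_rng(8)
        shape = (64, 64)
        yy, xx = np.mgrid[:shape[0], :shape[1]]

        # Four expert references: slightly different delineations of the same lesion
        experts = [((yy - 32 + rng.integers(-2, 3)) ** 2 +
                    (xx - 32 + rng.integers(-2, 3)) ** 2) < (15 + rng.integers(-1, 2)) ** 2
                   for _ in range(4)]

        # An algorithm's segmentation to be scored (here: a slightly smaller circle)
        algorithm = ((yy - 32) ** 2 + (xx - 32) ** 2) < 13 ** 2

        scores = [adjusted_rand_score(ref.ravel(), algorithm.ravel()) for ref in experts]
        print("per-expert ARI:", np.round(scores, 3), "mean:", round(np.mean(scores), 3))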

  1. An Ideological Analysis of Digital Reference Service Models.

    Dilevko, Juris


    Looks at some of the new paradigms for reference service, in particular the ideological implications of the digital reference call-center model, demonstrates how they lead to a "deprofessionalization" of reference work, and provides examples of how extensive reading can help reference librarians provide better service and become an…

  2. Automatic Texture Reconstruction of 3d City Model from Oblique Images

    Kang, Junhua; Deng, Fei; Li, Xinwei; Wan, Fang


    In recent years, photorealistic 3D city models have become increasingly important in various geospatial applications related to virtual city tourism, 3D GIS, urban planning and real-estate management. Besides the acquisition of high-precision 3D geometric data, texture reconstruction is also a crucial step for generating high-quality and visually realistic 3D models. However, most texture reconstruction approaches tend to produce texture fragmentation and memory inefficiency. In this paper, we introduce an automatic framework of texture reconstruction to generate textures from oblique images for photorealistic visualization. Our approach includes three major steps: mesh parameterization, texture atlas generation and texture blending. Firstly, a mesh parameterization procedure involving mesh segmentation and mesh unfolding is performed to reduce the geometric distortion in the process of mapping 2D texture to the 3D model. Secondly, in the texture atlas generation step, the texture of each segmented region in the texture domain is reconstructed from all visible images with exterior and interior orientation parameters. Thirdly, to avoid color discontinuities at boundaries between texture regions, the final texture map is generated by blending texture maps from several corresponding images. We evaluated our texture reconstruction framework on a city dataset. The resulting mesh model can be textured with the created texture without resampling. Experimental results show that our method can effectively mitigate the occurrence of texture fragmentation. It is demonstrated that the proposed framework is effective and useful for automatic texture reconstruction of 3D city models.

  3. An automatic and effective parameter optimization method for model tuning

    T. Zhang


    Full Text Available Physical parameterizations in General Circulation Models (GCMs), having various uncertain parameters, greatly impact model performance and model climate sensitivity. Traditional manual and empirical tuning of these parameters is time consuming and ineffective. In this study, a "three-step" methodology is proposed to automatically and effectively obtain the optimum combination of some key parameters in cloud and convective parameterizations according to a comprehensive objective evaluation metric. Differing from traditional optimization methods, two extra steps, one that determines parameter sensitivity and one that chooses the optimum initial values of the sensitive parameters, are introduced before the downhill simplex method to reduce the computational cost and improve the tuning performance. Atmospheric GCM simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding the unavoidable comprehensive parameter tuning during the model development stage.
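
    The three-step idea can be sketched on a toy objective: one-at-a-time sensitivity screening, selection of the sensitive parameters, and downhill simplex (Nelder-Mead) tuning of only those parameters; the "model skill" function below is an invented stand-in for the GCM evaluation metric.

        # Minimal sketch of the "three-step" idea: (1) screen parameter sensitivity by
        # one-at-a-time perturbation, (2) keep only the sensitive parameters,
        # (3) tune them with the downhill simplex (Nelder-Mead) method.
        import numpy as np
        from scipy.optimize import minimize

        def skill(params):
            """Toy evaluation metric (lower is better); only p0 and p2 really matter."""
            p = np.asarray(params)
            return (p[0] - 1.3) ** 2 + 0.001 * p[1] ** 2 + (p[2] + 0.7) ** 2 + 0.002 * p[3] ** 2

        default = np.array([1.0, 0.5, 0.0, 0.2])

        # Step 1: one-at-a-time sensitivity screening
        sensitivity = []
        for i in range(default.size):
            perturbed = default.copy()
            perturbed[i] *= 1.1
            perturbed[i] += 0.1 if default[i] == 0 else 0.0     # handle zero defaults
            sensitivity.append(abs(skill(perturbed) - skill(default)))
        sensitive = [i for i, s in enumerate(sensitivity) if s > 0.01]
        print("sensitive parameters:", sensitive)

        # Steps 2-3: optimise only the sensitive parameters with Nelder-Mead
        def reduced_objective(x):
            full = default.copy()
            full[sensitive] = x
            return skill(full)

        result = minimize(reduced_objective, default[sensitive], method="Nelder-Mead")
        print("tuned values:", np.round(result.x, 3), "skill:", round(result.fun, 4))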

  4. Improving Statistical Language Model Performance with Automatically Generated Word Hierarchies

    McMahon, J; Mahon, John Mc


    An automatic word classification system has been designed which processes word unigram and bigram frequency statistics extracted from a corpus of natural language utterances. The system implements a binary top-down form of word clustering which employs an average class mutual information metric. Resulting classifications are hierarchical, allowing variable class granularity. Words are represented as structural tags --- unique $n$-bit numbers the most significant bit-patterns of which incorporate class information. Access to a structural tag immediately provides access to all classification levels for the corresponding word. The classification system has successfully revealed some of the structure of English, from the phonemic to the semantic level. The system has been compared --- directly and indirectly --- with other recent word classification systems. Class based interpolated language models have been constructed to exploit the extra information supplied by the classifications and some experiments have sho...

  5. Towards Automatic Semantic Labelling of 3D City Models

    Rook, M.; Biljecki, F.; Diakité, A. A.


    The lack of semantic information in many 3D city models is a considerable limiting factor for their use, as many applications rely on semantics. Such information is not always available: it may not be collected at all, it might be lost during data transformation, or it may be missing because of non-interoperability when integrating data from other sources. This research is a first step towards an automatic workflow that semantically labels a plain 3D city model, represented by a soup of polygons, with semantic and thematic information as defined in the CityGML standard. The first step involves the reconstruction of the topology, which is used in a region growing algorithm that clusters upward-facing adjacent triangles. Heuristic rules, embedded in a decision tree, are used to compute a likeliness score indicating whether a region represents the ground (terrain) or a RoofSurface. Regions with a high likeliness score for one of the two classes are used to create a decision space, which is used in a support vector machine (SVM). Next, topological relations are utilised to select seeds that act as starting points in a region growing algorithm that creates regions of triangles of other semantic classes. The topological relationships of the regions are used in the aggregation of the thematic building features. Finally, the level of detail is detected in order to generate the correct output in CityGML. The results show an accuracy between 85% and 99% for the automatic semantic labelling on four different test datasets. The paper concludes by indicating problems and difficulties that point to the next steps in the research.

  6. Signature prediction for model-based automatic target recognition

    Keydel, Eric R.; Lee, Shung W.


    The moving and stationary target recognition (MSTAR) model-based automatic target recognition (ATR) system utilizes a paradigm which matches features extracted from an unknown SAR target signature against predictions of those features generated from models of the sensing process and candidate target geometries. The candidate target geometry yielding the best match between predicted and extracted features defines the identity of the unknown target. MSTAR will extend the current model-based ATR state-of-the-art in a number of significant directions. These include: use of Bayesian techniques for evidence accrual, reasoning over target subparts, coarse-to-fine hypothesis search strategies, and explicit reasoning over target articulation, configuration, occlusion, and lay-over. These advances also imply significant technical challenges, particularly for the MSTAR feature prediction module (MPM). In addition to accurate electromagnetics, the MPM must provide traceback between input target geometry and output features, on-line target geometry manipulation, target subpart feature prediction, explicit models for local scene effects, and generation of sensitivity and uncertainty measures for the predicted features. This paper describes the MPM design which is being developed to satisfy these requirements. The overall module structure is presented, along with the specific design elements focused on MSTAR requirements. Particular attention is paid to design elements that enable on-line prediction of features within the time constraints mandated by model-driven ATR. Finally, the current status, development schedule, and further extensions in the module design are described.

  7. Systematic approach for the identification of process reference models

    Van Der Merwe, A


    Full Text Available Process models are used in different application domains to capture knowledge on the process flow. Process reference models (PRM) are used to capture reusable process models, which should simplify the identification process of process models...

  8. Brain-inspired speech segmentation for automatic speech recognition using the speech envelope as a temporal reference

    Lee, Byeongwook; Cho, Kwang-Hyun


    Speech segmentation is a crucial step in automatic speech recognition because additional speech analyses are performed for each framed speech segment. Conventional segmentation techniques primarily segment speech using a fixed frame size for computational simplicity. However, this approach is insufficient for capturing the quasi-regular structure of speech, which causes substantial recognition failure in noisy environments. How does the brain handle quasi-regular structured speech and maintain high recognition performance under any circumstance? Recent neurophysiological studies have suggested that the phase of neuronal oscillations in the auditory cortex contributes to accurate speech recognition by guiding speech segmentation into smaller units at different timescales. A phase-locked relationship between neuronal oscillation and the speech envelope has recently been obtained, which suggests that the speech envelope provides a foundation for multi-timescale speech segmental information. In this study, we quantitatively investigated the role of the speech envelope as a potential temporal reference to segment speech using its instantaneous phase information. We evaluated the proposed approach by the achieved information gain and recognition performance in various noisy environments. The results indicate that the proposed segmentation scheme not only extracts more information from speech but also provides greater robustness in a recognition test.
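
    A minimal sketch of the envelope-as-temporal-reference idea is given below, under simplifying assumptions: the amplitude envelope is taken from the analytic signal (Hilbert transform), band-limited to a syllable-rate band, and candidate segment boundaries are placed at phase wraps of its instantaneous phase. The band limits and the boundary rule are illustrative choices, not the authors' exact procedure.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt

def envelope_phase_boundaries(speech, fs, band=(2.0, 10.0)):
    """Candidate segment boundaries taken from the instantaneous phase of the
    band-limited speech envelope (a sketch of envelope-guided framing)."""
    envelope = np.abs(hilbert(speech))                    # amplitude envelope
    sos = butter(2, band, btype="band", fs=fs, output="sos")
    slow = sosfiltfilt(sos, envelope)                     # syllable-rate modulation
    phase = np.angle(hilbert(slow))                       # instantaneous phase
    wraps = np.flatnonzero(np.diff(phase) < -np.pi)       # one boundary per cycle
    return wraps / fs                                     # boundary times (s)

# Toy usage: one second of noise amplitude-modulated at ~4 Hz.
fs = 16000
t = np.arange(fs) / fs
x = (1.0 + np.sin(2 * np.pi * 4 * t)) * np.random.randn(fs)
print(envelope_phase_boundaries(x, fs))
```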

  9. Automatic localization of IASLC-defined mediastinal lymph node stations on CT images using fuzzy models

    Matsumoto, Monica M. S.; Beig, Niha G.; Udupa, Jayaram K.; Archer, Steven; Torigian, Drew A.


    Lung cancer is associated with the highest cancer mortality rates among men and women in the United States. The accurate and precise identification of the lymph node stations on computed tomography (CT) images is important for staging disease and potentially for prognosticating outcome in patients with lung cancer, as well as for pretreatment planning and response assessment purposes. To facilitate a standard means of referring to lymph nodes, the International Association for the Study of Lung Cancer (IASLC) has recently proposed a definition of the different lymph node stations and zones in the thorax. However, nodal station identification is typically performed manually by visual assessment in clinical radiology. This approach leaves room for error due to the subjective and potentially ambiguous nature of visual interpretation, and is labor intensive. We present a method of automatically recognizing the mediastinal IASLC-defined lymph node stations by modifying a hierarchical fuzzy modeling approach previously developed for body-wide automatic anatomy recognition (AAR) in medical imagery. Our AAR-lymph node (AAR-LN) system follows the AAR methodology and consists of two steps. In the first step, the various lymph node stations are manually delineated on a set of CT images following the IASLC definitions. These delineations are then used to build a fuzzy hierarchical model of the nodal stations which are considered as 3D objects. In the second step, the stations are automatically located on any given CT image of the thorax by using the hierarchical fuzzy model and object recognition algorithms. Based on 23 data sets used for model building, 22 independent data sets for testing, and 10 lymph node stations, a mean localization accuracy of within 1-6 voxels has been achieved by the AAR-LN system.

  10. Multiobjective Automatic Parameter Calibration of a Hydrological Model

    Donghwi Jung


    Full Text Available This study proposes variable balancing approaches for the exploration (diversification) and exploitation (intensification) of the non-dominated sorting genetic algorithm-II (NSGA-II) with simulated binary crossover (SBX) and polynomial mutation (PM) in the multiobjective automatic parameter calibration of a lumped hydrological model, the HYMOD model. Two objectives, minimizing the percent bias and minimizing three peak flow differences, are considered in the calibration of the six parameters of the model. The proposed balancing approaches, which migrate the focus between exploration and exploitation over generations by varying the crossover and mutation distribution indices of SBX and PM, respectively, are compared with traditional static balancing approaches (the two distribution indices are fixed during optimization) in a benchmark hydrological calibration problem for the Leaf River (1950 km2) near Collins, Mississippi. Three performance metrics (solution quality, spacing, and convergence) are used to quantify and compare the quality of the Pareto solutions obtained by the two different balancing approaches. The variable balancing approaches that migrate the focus of exploration and exploitation differently for SBX and PM outperformed the other methods.
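
    The mechanics of varying the SBX distribution index over generations can be pictured with the hedged sketch below. The crossover follows the standard SBX formula; the linear eta schedule, the parameter bounds, and the example values are illustrative assumptions, not the schedule actually used in the study.

```python
import numpy as np

def sbx_crossover(p1, p2, eta_c, rng, low, high):
    """Simulated binary crossover (SBX): a larger eta_c keeps children close to
    their parents (exploitation), a smaller one spreads them out (exploration)."""
    u = rng.random(p1.shape)
    beta = np.where(u <= 0.5,
                    (2.0 * u) ** (1.0 / (eta_c + 1.0)),
                    (1.0 / (2.0 * (1.0 - u))) ** (1.0 / (eta_c + 1.0)))
    c1 = 0.5 * ((1 + beta) * p1 + (1 - beta) * p2)
    c2 = 0.5 * ((1 - beta) * p1 + (1 + beta) * p2)
    return np.clip(c1, low, high), np.clip(c2, low, high)

def eta_schedule(gen, max_gen, eta_start=2.0, eta_end=20.0):
    """Variable balancing: migrate from exploration (small eta) toward
    exploitation (large eta) as the generations progress."""
    return eta_start + (eta_end - eta_start) * gen / max_gen

rng = np.random.default_rng(0)
low, high = np.zeros(6), np.ones(6)          # six HYMOD-like parameters scaled to [0, 1]
p1, p2 = rng.random(6), rng.random(6)
for gen in (0, 50, 100):
    eta_c = eta_schedule(gen, 100)
    print(gen, round(eta_c, 1), sbx_crossover(p1, p2, eta_c, rng, low, high)[0])
```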

  11. Semi-automatic registration of 3D orthodontics models from photographs

    Destrez, Raphaël.; Treuillet, Sylvie; Lucas, Yves; Albouy-Kissi, Benjamin


    In orthodontics, a common practice used to diagnose and plan the treatment is the dental cast. After digitization by a CT-scan or a laser scanner, the obtained 3D surface models can feed orthodontics numerical tools for computer-aided diagnosis and treatment planning. One of the critical pre-processing steps is the 3D registration of dental arches to obtain the occlusion of these numerical models. For this task, we propose a vision-based method to automatically compute the registration based on photos of the patient's mouth. From a set of matched singular points between two photos and the dental 3D models, the rigid transformation to apply to the mandible to bring it into contact with the maxilla may be computed by minimizing the reprojection errors. In a previous study, we established the feasibility of this visual registration approach with a manual selection of singular points. This paper addresses the issue of automatic point detection. Based on a priori knowledge, histogram thresholding and edge detection are used to extract specific points in 2D images. Concurrently, curvature information is used to detect corresponding 3D points. To improve the quality of the final registration, we also introduce a combined optimization of the projection matrix with the 2D/3D point positions. These new developments are evaluated on real data by considering the reprojection errors and the deviation angles after registration with respect to the manual reference occlusion realized by a specialist.
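
    The core reprojection-error minimization can be pictured with the hedged sketch below: a rigid transform (rotation vector plus translation) applied to mandible points is fitted so that their projection through a fixed camera matrix matches the observed 2D points. The projection matrix, point sets, and optimizer settings are toy assumptions for illustration, not the paper's calibration or data.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial.transform import Rotation

def reprojection_residuals(x, P, pts3d_mand, pts2d):
    """x = [rotation vector (3), translation (3)] applied to mandible points;
    P is a fixed 3x4 projection matrix for one photograph."""
    R = Rotation.from_rotvec(x[:3]).as_matrix()
    moved = pts3d_mand @ R.T + x[3:6]
    hom = np.hstack([moved, np.ones((len(moved), 1))]) @ P.T
    proj = hom[:, :2] / hom[:, 2:3]
    return (proj - pts2d).ravel()

# Toy example with a hypothetical projection matrix and matched points.
P = np.hstack([np.eye(3), np.zeros((3, 1))])          # identity camera, for illustration
pts3d = np.random.rand(8, 3) + [0.0, 0.0, 5.0]        # points in front of the camera
true_t = np.array([0.1, -0.05, 0.2])
obs = np.hstack([pts3d + true_t, np.ones((8, 1))]) @ P.T
pts2d_obs = obs[:, :2] / obs[:, 2:3]

fit = least_squares(reprojection_residuals, x0=np.zeros(6), args=(P, pts3d, pts2d_obs))
print(fit.x[3:6])   # should recover approximately true_t
```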

  12. Reference models supporting enterprise networks and virtual enterprises

    Tølle, Martin; Bernus, Peter


    This article analyses different types of reference models applicable to support the set-up and (re)configuration of Virtual Enterprises (VEs). Reference models are models capturing concepts common to VEs, aiming to convert the task of setting up a VE into a configuration task and hence reducing the time needed for VE creation. The reference models are analysed through a mapping onto the Virtual Enterprise Reference Architecture (VERA), based upon GERAM and created in the IMS GLOBEMEN project.

  13. The Use of Reference Models in Business Process Renovation

    Dejan Pajk


    Full Text Available Enterprise resource planning (ERP) systems are often used by companies to automate and enhance their business processes. The capabilities of ERP systems can be described by best-practice reference models. The purpose of the article is to demonstrate the business process renovation approach with the use of reference models. Although the use of reference models brings many positive effects for business, they are still rarely used in Slovenian small and medium-sized companies. The reasons for this may be found in the reference models themselves as well as in project implementation methodologies. In the article a reference model based on Microsoft Dynamics NAV is suggested. The reference model is designed using upgraded BPMN notation with additional business objects, which help to describe the models in more detail.

  14. Charging of mobile services by mobile payment reference model

    Pousttchi, Key; Wiedemann, Dietmar Georg


    The purpose of the paper is to analyze mobile payments in the mobile commerce scenario. Therefore, we first classify the mobile payment in the mobile commerce scenario by explaining general offer models, charging concepts, and intermediaries. Second, we describe the mobile payment reference model, especially, the mobile payment reference organization model and different mobile payment standard types. Finally, we conclude our findings.

  15. Reference models supporting enterprise networks and virtual enterprises

    Tølle, Martin; Bernus, Peter


    This article analyses different types of reference models applicable to support the set up and (re)configuration of Virtual Enterprises (VEs). Reference models are models capturing concepts common to VEs aiming to convert the task of setting up of VE into a configuration task, and hence reducing...

  16. Automatic versus manual model differentiation to compute sensitivities and solve non-linear inverse problems

    Elizondo, D.; Cappelaere, B.; Faure, Ch.


    Emerging tools for automatic differentiation (AD) of computer programs should be of great benefit for the implementation of many derivative-based numerical methods such as those used for inverse modeling. The Odyssée software, one such tool for Fortran 77 codes, has been tested on a sample model that solves a 2D non-linear diffusion-type equation. Odyssée offers both the forward and the reverse differentiation modes, that produce the tangent and the cotangent models, respectively. The two modes have been implemented on the sample application. A comparison is made with a manually-produced differentiated code for this model (MD), obtained by solving the adjoint equations associated with the model's discrete state equations. Following a presentation of the methods and tools and of their relative advantages and drawbacks, the performances of the codes produced by the manual and automatic methods are compared, in terms of accuracy and of computing efficiency (CPU and memory needs). The perturbation method (finite-difference approximation of derivatives) is also used as a reference. Based on the test of Taylor, the accuracy of the two AD modes proves to be excellent and as high as machine precision permits, a good indication of Odyssée's capability to produce error-free codes. In comparison, the manually-produced derivatives (MD) sometimes appear to be slightly biased, which is likely due to the fact that a theoretical model (state equations) and a practical model (computer program) do not exactly coincide, while the accuracy of the perturbation method is very uncertain. The MD code largely outperforms all other methods in computing efficiency, a subject of current research for the improvement of AD tools. Yet these tools can already be of considerable help for the computer implementation of many numerical methods, avoiding the tedious task of hand-coding the differentiation of complex algorithms.
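
    The "test of Taylor" used above to check derivative accuracy can be sketched as follows, under toy assumptions: a small analytic function stands in for the model, and its hand-coded gradient stands in for the tangent/adjoint code. With a correct gradient, the first-order Taylor remainder should shrink quadratically as the step is halved.

```python
import numpy as np

def taylor_test(f, grad_f, x, h, steps=6):
    """Taylor test: with an exact gradient, the remainder
    |f(x + a*h) - f(x) - a*<grad_f(x), h>| should shrink like a**2."""
    g = grad_f(x)
    remainders = []
    for k in range(steps):
        a = 0.5 ** k
        remainders.append(abs(f(x + a * h) - f(x) - a * np.dot(g, h)))
    rates = [np.log2(remainders[k] / remainders[k + 1]) for k in range(steps - 1)]
    return rates    # should approach 2.0 for a correct derivative code

# Toy model: f(x) = sum(sin(x)**2), with its analytic gradient supplied by hand.
f = lambda x: float(np.sum(np.sin(x) ** 2))
grad_f = lambda x: np.sin(2 * x)          # d/dx sin(x)^2 = sin(2x)
x0 = np.linspace(0.1, 1.0, 5)
print(taylor_test(f, grad_f, x0, h=0.1 * np.ones(5)))
```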

  17. Research in Adaptronic Automatic Control System and Biosensor System Modelling

    Skopis Vladimir


    Full Text Available This paper describes the author's research on adaptronic systems and proposes the use of biosensors that can later be inserted into such systems. Adaptronic systems are based, on the one hand, on the adaptronic approach, in which the system is designed not to always meet the worst-case condition but to change its structure according to external conditions. On the other hand, they are an extension of common automatic control and adaptive systems. The introduction therefore first explains the adaptronic approach and the term biosensor. Adaptive systems, upon which adaptronic ones are based, are also mentioned. Then the construction of a biosensor is described, and information is given about the classification of biosensors and their main groups. It is also suggested that lichen indicators be used in industry to monitor the concentration of chemical substances in the air. After that, mathematical models and computer experiments for adaptronic system and biosensor analysis are given.

  18. Automatic prediction of facial trait judgments: appearance vs. structural models.

    Mario Rojas

    Full Text Available Evaluating other individuals with respect to personality characteristics plays a crucial role in human relations and it is the focus of attention for research in diverse fields such as psychology and interactive computer systems. In psychology, face perception has been recognized as a key component of this evaluation system. Multiple studies suggest that observers use face information to infer personality characteristics. Interactive computer systems are trying to take advantage of these findings and apply them to increase the natural aspect of interaction and to improve the performance of interactive computer systems. Here, we experimentally test whether the automatic prediction of facial trait judgments (e.g. dominance) can be made by using the full appearance information of the face and whether a reduced representation of its structure is sufficient. We evaluate two separate approaches: a holistic representation model using the facial appearance information and a structural model constructed from the relations among facial salient points. State-of-the-art machine learning methods are applied to (a) derive a facial trait judgment model from training data and (b) predict a facial trait value for any face. Furthermore, we address the issue of whether there are specific structural relations among facial points that predict perception of facial traits. Experimental results over a set of labeled data (9 different trait evaluations) and classification rules (4 rules) suggest that (a) prediction of perception of facial traits is learnable by both holistic and structural approaches; (b) the most reliable prediction of facial trait judgments is obtained by certain types of holistic descriptions of the face appearance; and (c) for some traits such as attractiveness and extroversion, there are relationships between specific structural features and social perceptions.

  19. A numerical reference model for themomechanical subduction

    Quinquis, Matthieu; Chemia, Zurab; Tosi, Nicola;


    Building an advanced numerical model of subduction requires choosing values for various geometrical parameters and material properties, among others, the initial lithosphere thicknesses, representative lithological types and their mechanical and thermal properties, rheologies, initial temperature...

  20. A blood circulation model for reference man

    Leggett, R.W.; Eckerman, K.F. [Oak Ridge National Lab., TN (United States). Health Sciences Research Div.; Williams, L.R. [Indiana Univ., South Bend, IN (United States). Div. of Liberal Arts and Sciences


    This paper describes a dynamic blood circulation model that predicts the movement and gradual dispersal of a bolus of material in the circulation after its intravascular injection into an adult human. The main purpose of the model is to improve the dosimetry of internally deposited radionuclides that decay in the circulation to a significant extent. The total blood volume is partitioned into the blood contents of 24 separate organs or tissues, right heart chambers, left heart chambers, pulmonary circulation, arterial outflow to the systemic tissues (aorta and large arteries), and venous return from the systemic tissues (large veins). As a compromise between physical reality and computational simplicity, the circulation of blood is viewed as a system of first-order transfers between blood pools, with the delay time depending on the mean transit time across the pool. The model allows consideration of incomplete, tissue-dependent extraction of material during passage through the circulation and return of material from tissues to plasma.
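
    The "system of first-order transfers between blood pools" can be illustrated with the hedged sketch below: a three-pool loop (not the 24-compartment reference model itself) in which each pool empties at a rate given by the reciprocal of a hypothetical mean transit time, and a unit bolus injected into one pool disperses toward a steady-state distribution.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Out-rates (per minute) of pools 0, 1, 2: reciprocals of hypothetical mean
# transit times; the real model uses 24 organ/tissue pools plus heart and lungs.
k = np.array([60.0, 12.0, 60.0])

def first_order_transfers(t, x):
    # Closed loop: pool 0 -> 1 -> 2 -> 0, each transfer first order.
    return np.array([k[2] * x[2] - k[0] * x[0],
                     k[0] * x[0] - k[1] * x[1],
                     k[1] * x[1] - k[2] * x[2]])

x0 = np.array([1.0, 0.0, 0.0])               # unit bolus injected into pool 0
sol = solve_ivp(first_order_transfers, (0.0, 1.0), x0, max_step=1e-3)
print(sol.y[:, -1])                          # approaches the steady-state distribution
```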

  1. Moving object detection using keypoints reference model

    Wan Zaki Wan Mimi Diyana


    Full Text Available This article presents a new method for background subtraction (BGS) and object detection for a real-time video application using a combination of frame differencing and a scale-invariant feature detector. This method takes the benefits of background modelling and the invariant feature detector to improve the accuracy in various environments. The proposed method consists of three main modules, namely, modelling, matching and subtraction modules. A comparison of the proposed method with a popular Gaussian mixture model showed that correct classification can be improved to up to 98%, with a reduction of the false negative and true positive rates. Besides that, the proposed method has shown great potential to overcome the drawbacks of the traditional BGS in handling challenges like shadow effects and lighting fluctuation.

  2. Fast tracking ICT infrastructure requirements and design, based on Enterprise Reference Architecture and matching Reference Models

    Bernus, Peter; Baltrusch, Rob; Vesterager, Johan;


    The Globemen Consortium has developed the virtual enterprise reference architecture and methodology (VERAM), based on GERAM, and developed reference models for virtual enterprise management and joint mission delivery. The planned virtual enterprise capability includes the areas of sales and marketing, global engineering, and customer relationship management. The reference models are the basis for the development of ICT infrastructure requirements. These in turn can be used for ICT infrastructure specification (sometimes referred to as 'ICT architecture'). Part of the ICT architecture is industry-wide, part of it is industry-specific and a part is specific to the domains of the joint activity that characterises the given Virtual Enterprise Network at hand. The article advocates a step-by-step approach to building virtual enterprise capability.

  3. Automatic generation of computable implementation guides from clinical information models.

    Boscá, Diego; Maldonado, José Alberto; Moner, David; Robles, Montserrat


    Clinical information models are increasingly used to describe the contents of Electronic Health Records. Implementation guides are a common specification mechanism used to define such models. They contain, among other reference materials, all the constraints and rules that clinical information must obey. However, these implementation guides are typically oriented towards human readability, and thus cannot be processed by computers. As a consequence, they must be reinterpreted and transformed manually into an executable language such as Schematron or Object Constraint Language (OCL). This task can be difficult and error prone due to the large gap between the two representations. The challenge is to develop a methodology for the specification of implementation guides in such a way that humans can read and understand them easily and, at the same time, computers can process them. In this paper, we propose and describe a novel methodology that uses archetypes as the basis for the generation of implementation guides. We use archetypes to generate formal rules expressed in Natural Rule Language (NRL) and other reference materials usually included in implementation guides, such as sample XML instances. We also generate Schematron rules from NRL rules to be used for the validation of data instances. We have implemented these methods in LinkEHR, an archetype editing platform, and exemplify our approach by generating NRL rules and implementation guides from EN ISO 13606, openEHR, and HL7 CDA archetypes.

  4. Evaluation of Model Recognition for Grammar-Based Automatic 3d Building Model Reconstruction

    Yu, Qian; Helmholz, Petra; Belton, David


    In recent years, 3D city models have been in high demand by many public and private organisations, and their steadily growing quality and quantity are further increasing that demand. The quality evaluation of these 3D models is a relevant issue both from the scientific and the practical point of view. In this paper, we present a method for the quality evaluation of 3D building models which are reconstructed automatically from terrestrial laser scanning (TLS) data based on an attributed building grammar. The entire evaluation process has been performed in all three dimensions in terms of completeness and correctness of the reconstruction. Six quality measures are introduced and applied to four datasets of reconstructed building models in order to describe the quality of the automatic reconstruction, and their validity is also assessed from the evaluation point of view.

  5. Path Tracking Control of Automatic Parking Cloud Model considering the Influence of Time Delay

    Yiding Hua


    Full Text Available This paper establishes the kinematic model of the automatic parking system and analyzes the kinematic constraints of the vehicle. Furthermore, it addresses the problem that the traditional automatic parking system model fails to take the time delay into account. Firstly, based on simulation, the influence of the time delay on the dynamic trajectory of a vehicle in the automatic parking system is analyzed for different transverse distances Dlateral between target spaces. Secondly, on the basis of the cloud model, this paper utilizes intelligent path tracking control that is closer to human intelligent behavior to further study the Cloud Generator-based parking path tracking control method and to construct a vehicle path tracking control model. Moreover, the tracking and steering control effects of the model are verified through simulation analysis. Finally, the effectiveness and timeliness of the automatic parking controller in terms of path tracking are tested through a real vehicle experiment.
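
    How an actuation delay distorts a parking trajectory can be pictured with the minimal sketch below: a kinematic bicycle model driven in reverse with a pure time delay inserted on the steering command. The vehicle parameters, steering pulse, and delay length are hypothetical values for illustration, not those of the paper's model.

```python
import numpy as np
from collections import deque

def simulate_parking_path(delta_cmd, v=-0.5, L=2.6, delay_steps=20, dt=0.01):
    """Kinematic bicycle model reversing at speed v with a pure time delay of
    delay_steps*dt seconds on the steering command.
    delta_cmd: callable t -> commanded steering angle (rad)."""
    x = y = yaw = 0.0
    buffer = deque([0.0] * delay_steps, maxlen=delay_steps)  # delayed steering queue
    xs, ys = [], []
    for k in range(int(8.0 / dt)):
        t = k * dt
        buffer.append(delta_cmd(t))
        delta = buffer[0]                       # command issued delay_steps samples ago
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        yaw += v / L * np.tan(delta) * dt
        xs.append(x); ys.append(y)
    return np.array(xs), np.array(ys)

# Compare an (almost) undelayed path with a 0.2 s actuation delay.
cmd = lambda t: 0.5 if 1.0 < t < 4.0 else 0.0   # simple steering pulse
x_ref, y_ref = simulate_parking_path(cmd, delay_steps=1)     # negligible delay
x_del, y_del = simulate_parking_path(cmd, delay_steps=20)    # 0.2 s delay
print(abs(y_ref[-1] - y_del[-1]))                # lateral offset caused by the delay
```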

  6. Automaticity and control in prospective memory: a computational model.

    Sam J Gilbert

    Full Text Available Prospective memory (PM) refers to our ability to realize delayed intentions. In event-based PM paradigms, participants must act on an intention when they detect the occurrence of a pre-established cue. Some theorists propose that in such paradigms PM responding can only occur when participants deliberately initiate processes for monitoring their environment for appropriate cues. Others propose that perceptual processing of PM cues can directly trigger PM responding in the absence of strategic monitoring, at least under some circumstances. In order to address this debate, we present a computational model implementing the latter account, using a parallel distributed processing (interactive activation) framework. In this model PM responses can be triggered directly as a result of spreading activation from units representing perceptual inputs. PM responding can also be promoted by top-down monitoring for PM targets. The model fits a wide variety of empirical findings from PM paradigms, including the effect of maintaining PM intentions on ongoing response time and the intention superiority effect. The model also makes novel predictions concerning the effect of stimulus degradation on PM performance, the shape of response time distributions on ongoing and prospective memory trials, and the effects of instructing participants to make PM responses instead of ongoing responses or alongside them. These predictions were confirmed in two empirical experiments. We therefore suggest that PM should be considered to result from the interplay between bottom-up triggering of PM responses by perceptual input, and top-down monitoring for appropriate cues. We also show how the model can be extended to simulate encoding new intentions and subsequently deactivating them, and consider links between the model's performance and results from neuroimaging.

  7. Related work on reference modeling for collaborative networks

    Afsarmanesh, H.; Camarinha-Matos, L.M.; Camarinha-Matos, L.M.; Afsarmanesh, H.


    Several international research and development initiatives have led to development of models for organizations and organization interactions. These models and their approaches constitute a background for development of reference models for collaborative networks. A brief survey of work on modeling t

  8. Automatic removal of eye movement artifacts from the EEG using ICA and the dipole model

    Weidong Zhou; Jean Gotman


    12 patients were analyzed. The experimental results indicate that ICA with the dipole model is very efficient at automatically subtracting the eye movement artifacts, while retaining the EEG slow waves and making their interpretation easier.

  9. Reference analysis of the signal + background model in counting experiments II. Approximate reference prior

    Casadei, D.


    The objective Bayesian treatment of a model representing two independent Poisson processes, labelled as ``signal'' and ``background'' and both contributing additively to the total number of counted events, is considered. It is shown that the reference prior for the parameter of interest (the signal intensity) can be well approximated by the widely (ab)used flat prior only when the expected background is very high. On the other hand, a very simple approximation (the limiting form of the reference prior for perfect prior background knowledge) can be safely used over a large portion of the background parameters space. The resulting approximate reference posterior is a Gamma density whose parameters are related to the observed counts. This limiting form is simpler than the result obtained with a flat prior, with the additional advantage of representing a much closer approximation to the reference posterior in all cases. Hence such limiting prior should be considered a better default or conventional prior than the uniform prior. On the computing side, it is shown that a 2-parameter fitting function is able to reproduce extremely well the reference prior for any background prior. Thus, it can be useful in applications requiring the evaluation of the reference prior for a very large number of times.
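
    For orientation, a hedged reconstruction (not quoted from the paper): the limiting prior described above has the familiar Jeffreys-type form for a Poisson mean with known background b, and the corresponding posterior after observing n counts is of Gamma type in s + b.

```latex
% Sketch under the assumption of perfectly known background b:
\pi(s) \;\propto\; (s+b)^{-1/2}, \qquad
p(s \mid n) \;\propto\; (s+b)^{\,n-1/2}\, e^{-(s+b)}, \qquad s \ge 0 .
```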

  10. Technical Note: Automatic river network generation for a physically-based river catchment model


    SHETRAN is a physically-based distributed modelling system that gives detailed simulations in time and space of water flow and sediment and solute transport in river catchments. Standard algorithms for the automatic generation of river channel networks from digital elevation data are impossible to apply in SHETRAN and other similar models because the river channels are assumed to run along the edges of grid cells. In this work a new algorithm for the automatic generation of a river cha...

  11. Technical Note: Automatic river network generation for a physically-based river catchment model


    SHETRAN is a physically-based distributed modelling system that gives detailed simulations in time and space of water flow and sediment and solute transport in river catchments. Standard algorithms for the automatic generation of river channel networks from digital elevation data are impossible to apply in SHETRAN and other similar models because the river channels are assumed to run along the edges of grid cells. In this work a new algorithm for the automatic generation of a river channel ne...

  12. A Reference Model for Mobile Social Software for Learning

    De Jong, Tim; Specht, Marcus; Koper, Rob


    De Jong, T., Specht, M., & Koper, R. (2008). A reference model for mobile social software for learning. International Journal of Continuing Engineering Education and Life-Long Learning, 18(1), 118-138.

  13. Reference models for forming organisational or collaborative pedagogical best practices

    Lee, Chien-Sing; Koper, Rob; Kommers, Piet; Hedberg, John


    Lee, Chien-Sing, Koper, R., Kommers, P., & Hedberg, John (Eds.) (2008). Reference models for forming organisational or collaborative pedagogical best practices [special issue]. International Journal of Continuing Engineering Education and Life-Long Learning, 18(1).

  14. Establishing a Business Process Reference Model for Universities

    Svensson, Carsten


    Modern universities are by any standard complex organizations that, from an IT perspective, present a number of unique challenges. This paper will propose establishing a business process reference framework. The benefit to the users would be a better understanding of the system landscape, business process enablement, collection of performance data and systematic reuse of existing community experience and knowledge. For these reasons reference models such as the SCOR (Supply Chain Operations Reference), DCOR (Design Chain Operations Reference) and ITIL (Information Technology Infrastructure Library) have gained popularity among organizations in both the private and public sectors. We speculate that this success can be replicated in a university setting. Furthermore the paper will outline how the research group suggests moving ahead with the research which will lead to a reference model.

  15. Assessing Automatic Aid as an Emergency Response Model


    often include the idea of mutual and automatic aid. Clovis noted in his paper "Thinking about National Preparedness" that resources, being a...

  16. Modeling and Prototyping of Automatic Clutch System for Light Vehicles

    Murali, S.; Jothi Prakash, V. M.; Vishal, S.


    Nowadays, recycling or regenerating waste into something useful is appreciated all around the globe, as it reduces the greenhouse gas emissions that contribute to global climate change. This study deals with the provision of an automatic clutch mechanism in vehicles to facilitate smooth gear changes. It proposes using the exhaust gases, which are normally expelled as waste from the turbocharger, to actuate the clutch mechanism. At present, clutches in four-wheelers are operated automatically by using an air compressor. In this study, a conceptual design is proposed in which the clutch is operated by the exhaust gas from the turbocharger, which removes the air compressor from the existing system; the driver therefore does not need to operate the clutch manually. This work involved the development, analysis and validation of the conceptual design through simulation software. The developed conceptual design of an automatic pneumatic clutch system was then tested with a prototype.

  17. Measuring the Compliance of Processes with Reference Models

    Gerke, Kerstin; Cardoso, Jorge; Claus, Alexander

    Reference models provide a set of generally accepted best practices to create efficient processes to be deployed inside organizations. However, a central challenge is to determine how these best practices are implemented in practice. One limitation of existing approaches for measuring compliance is the assumption that compliance can be determined using the notion of process equivalence. Nonetheless, the use of equivalence algorithms is not adequate, since two models can have different structures and yet one process can still be compliant with the other. This paper presents a new approach and algorithm which allow the compliance of process models with reference models to be measured. We evaluate our approach by measuring the compliance of a model currently used by a German passenger airline with the IT Infrastructure Library (ITIL) reference model and by comparing our results with existing approaches.

  18. Towards a reference model for portfolio management for product development

    Larsson, Flemming


    The aim of this paper is to explore the concept of portfolio management for product development at company level. Departing from a combination of explorative interviews with industry professionals and a literature review, a proposal for a reference model for portfolio management is developed. The model consists of a set of defined and interrelated concepts which together form a coherent and consistent reference model that explicates the totality of the portfolio management concept at company level in terms of structure, processes and elements. The model simultaneously pinpoints, positions and integrates several central dimensions of portfolio management.

  19. Towards a reference model for portfolio management for product development

    Larsson, Flemming


    The aim of this paper is to explore the concept of portfolio management for product development at company level. Departing from a combination of explorative interviews with industry professionals and a literature review, a proposal for a reference model for portfolio management is developed. The model consists of a set of defined and interrelated concepts which together form a coherent and consistent reference model that explicates the totality of the portfolio management concept at company level in terms of structure, processes and elements. The model simultaneously pinpoints, positions and integrates several central dimensions of portfolio management.

  20. Cognitive Modeling of Individual Variation in Reference Production and Comprehension.

    Hendriks, Petra


    A challenge for most theoretical and computational accounts of linguistic reference is the observation that language users vary considerably in their referential choices. Part of the variation observed among and within language users and across tasks may be explained from variation in the cognitive resources available to speakers and listeners. This paper presents a computational model of reference production and comprehension developed within the cognitive architecture ACT-R. Through simulations with this ACT-R model, it is investigated how cognitive constraints interact with linguistic constraints and features of the linguistic discourse in speakers' production and listeners' comprehension of referring expressions in specific tasks, and how this interaction may give rise to variation in referential choice. The ACT-R model of reference explains and predicts variation among language users in their referential choices as a result of individual and task-related differences in processing speed and working memory capacity. Because of limitations in their cognitive capacities, speakers sometimes underspecify or overspecify their referring expressions, and listeners sometimes choose incorrect referents or are overly liberal in their interpretation of referring expressions.

  1. Cognitive Modeling of Individual Variation in Reference Production and Comprehension

    Petra eHendriks


    Full Text Available A challenge for most theoretical and computational accounts of linguistic reference is the observation that language users vary considerably in their referential choices. Part of the variation observed among and within language users and across tasks may be explained from variation in the cognitive resources available to speakers and listeners. This paper presents a computational model of reference production and comprehension developed within the cognitive architecture ACT-R. Through simulations with this ACT-R model, it is investigated how cognitive constraints interact with linguistic constraints and features of the linguistic discourse in speakers’ production and listeners’ comprehension of referring expressions in specific tasks, and how this interaction may give rise to variation in referential choice. The ACT-R model of reference explains and predicts variation among language users in their referential choices as a result of individual and task-related differences in processing speed and working memory capacity. Because of limitations in their cognitive capacities, speakers sometimes underspecify or overspecify their referring expressions, and listeners sometimes choose incorrect referents or are overly liberal in their interpretation of referring expressions.

  2. Brain-inspired speech segmentation for automatic speech recognition using the speech envelope as a temporal reference

    Byeongwook Lee; Kwang-Hyun Cho


    Speech segmentation is a crucial step in automatic speech recognition because additional speech analyses are performed for each framed speech segment. Conventional segmentation techniques primarily segment speech using a fixed frame size for computational simplicity. However, this approach is insufficient for capturing the quasi-regular structure of speech, which causes substantial recognition failure in noisy environments. How does the brain handle quasi-regular structured speech and maintai...

  3. Model reference, sliding mode adaptive control for flexible structures

    Yurkovich, S.; Ozguner, U.; Al-Abbass, F.


    A decentralized model reference adaptive approach using variable-structure sliding-mode control has been developed for the vibration suppression of large flexible structures. Local models are derived based upon the desired damping and response time in a model-following scheme, and variable structure controllers are then designed which employ colocated angular rate and position feedback. Numerical simulations have been performed using NASA's flexible grid experimental apparatus.

  4. Reference physiological parameters for pharmacodynamic modeling of liver cancer

    Travis, C.C.; Arms, A.D.


    This document presents a compilation of measured values for physiological parameters used in pharmacodynamic modeling of liver cancer. The physiological parameters include body weight, liver weight, the liver weight/body weight ratio, and number of hepatocytes. Reference values for use in risk assessment are given for each of the physiological parameters based on analyses of valid measurements taken from the literature and other reliable sources. The proposed reference values for rodents include sex-specific measurements for B6C3F1 mice and for Fischer 344/N, Sprague-Dawley, and Wistar rats. Reference values are also provided for humans. 102 refs., 65 tabs.

  5. Automatic gate design model from wood & tire for farmers

    Indrawan Ivan


    Full Text Available Indonesia is one of the main potential paddy-farming areas in Southeast Asia, and North Sumatra is one of the provinces that supplies it. Yet Indonesia still imports rice from abroad, even though the government now aims to meet its own needs. Almost 10% of the irrigation areas in Indonesia are connected to the sea, which means a system is needed to manage the circulation of fresh water and to block seawater from entering the irrigation area through the irrigation channel. Many systems use gates for water management, and most of them use automatic sluice gates because the gates are usually positioned far from the village, which makes manual operation difficult. Unfortunately, not all farmers can use this kind of gate because of its accessibility and cost. This research was carried out to design automatic gates that are easy to build, user friendly, low cost and dependable, so that in the future farmers with limited means or without government support can build such gates themselves. The research was conducted in a laboratory using a flume, pumps, a reservoir, and a gate prototype.

  6. A Java Reference Model of Transacted Memory for Smart Cards

    Poll, Erik; Hartel, Pieter; Jong, de Eduard


    Transacted Memory offers persistence, undoability and auditing. We present a Java/JML Reference Model of the Transacted Memory system on the basis of our earlier separate Z model and C implementation. We conclude that Java/JML combines the advantages of a high level specification in the JML part (ba

  7. Modelling Human Speech Recognition using Automatic Speech Recognition Paradigms in SpeM

    Scharenborg, O.E.; McQueen, J.M.; Bosch, L.F.M. ten; Norris, D.


    We have recently developed a new model of human speech recognition, based on automatic speech recognition techniques [1]. The present paper has two goals. First, we show that the new model performs well in the recognition of lexically ambiguous input. These demonstrations suggest that the model is

  8. A Model-Based Method for Content Validation of Automatically Generated Test Items

    Zhang, Xinxin; Gierl, Mark


    The purpose of this study is to describe a methodology to recover the item model used to generate multiple-choice test items with a novel graph theory approach. Beginning with the generated test items and working backward to recover the original item model provides a model-based method for validating the content used to automatically generate test…

  9. JEDI Marine and Hydrokinetic Model: User Reference Guide

    Goldberg, M.; Previsic, M.


    The Jobs and Economic Development Impact Model (JEDI) for Marine and Hydrokinetics (MHK) is a user-friendly spreadsheet-based tool designed to demonstrate the economic impacts associated with developing and operating MHK power systems in the United States. The JEDI MHK User Reference Guide was developed to assist users in using and understanding the model. This guide provides information on the model's underlying methodology, as well as the sources and parameters used to develop the cost data utilized in the model. This guide also provides basic instruction on model add-in features, operation of the model, and a discussion of how the results should be interpreted.

  10. On the monitoring model of reference point of VLBI antenna

    Zhang, J.; Li, J.


    By parameterizing the rotation of the VLBI antenna and modelling, within the local control network, the coordinates of targets fixed on the antenna, it is expected that fully automatic monitoring of the antenna parameters can be performed without any interference with the normal operations of the telescope. Some insights and analysis are presented concerning the mathematical monitoring model, the setting of parameters and the selection of constraints for the observation equation, which are verified via simulated-data analysis to be rational and effective. Some factors which may affect the estimation precision of the antenna parameters are analyzed in order to design and develop the monitoring procedure and the data analysis software, and to make the necessary preparations for practical application of the new monitoring concept for VLBI antennas.

  11. Reference analysis of the signal + background model in counting experiments

    Casadei, D.


    The model representing two independent Poisson processes, labelled as ``signal'' and ``background'' and both contributing additively to the total number of counted events, is considered from a Bayesian point of view. This is a widely used model for the searches of rare or exotic events in presence of a background source, as for example in the searches performed by high-energy physics experiments. In the assumption of prior knowledge about the background yield, a reference prior is obtained for the signal alone and its properties are studied. Finally, the properties of the full solution, the marginal reference posterior, are illustrated with few examples.

  12. Automatic discovery of the communication network topology for building a supercomputer model

    Sobolev, Sergey; Stefanov, Konstantin; Voevodin, Vadim


    The Research Computing Center of Lomonosov Moscow State University is developing the Octotron software suite for automatic monitoring and mitigation of emergency situations in supercomputers so as to maximize hardware reliability. The suite is based on a software model of the supercomputer. The model uses a graph to describe the computing system components and their interconnections. One of the most complex components of a supercomputer that needs to be included in the model is its communication network. This work describes the proposed approach for automatically discovering the Ethernet communication network topology in a supercomputer and its description in terms of the Octotron model. This suite automatically detects computing nodes and switches, collects information about them and identifies their interconnections. The application of this approach is demonstrated on the "Lomonosov" and "Lomonosov-2" supercomputers.

  13. Reference analysis of the signal + background model in counting experiments II. Approximate reference prior

    Casadei, Diego


    The objective Bayesian treatment of a model representing two independent Poisson processes, labelled as "signal" and "background" and both contributing additively to the total number of counted events, is considered. It is shown that the reference prior for the parameter of interest (the signal intensity) is well approximated by the widely (ab)used flat prior only when the expected background is very high. For a large portion of the background parameter space, a very simple approximation (the asymptotic form of the reference prior for the limit of perfect prior background knowledge) can be safely used. In all cases, this approximation outperforms the uniform prior. When the asymptotic prior is not good enough, a simple 1-parameter fitting function is often sufficient to obtain an objective Bayesian solution. Otherwise, it is shown that a 2-parameter fitting function is able to reproduce the reference prior in all other cases. The latter is also useful to speed up the computing time, which can be useful in a...

  14. Reference Model of Desired Yaw Angle for Automated Lane Changing Behavior of Vehicle

    Dianbo Ren; Guanzhe Zhang; Hangzhe Wu


    This paper studies the problem of trajectory planning and tracking for the lane-changing behavior of a vehicle in automated highway systems. Based on a yaw-angle-acceleration model with a positive and negative trapezoid constraint, and by analyzing the variation laws of the yaw motion of a vehicle during a lane-changing maneuver, a reference model of the desired yaw angle and yaw rate for lane changing is generated. According to the yaw angle model, the vertical and horizontal coordinates of the lane-change trajectory are calculated. Assuming constant road curvature, the differences and associations between two scenarios, lane-changing maneuvers on a curved road and on a straight road, are analyzed. On this basis, a method for calculating the desired yaw angle for lane changing on a circular road is deduced. Simulation results show that, unlike the traditional lateral-acceleration planning method with a trapezoid constraint, the trapezoidal yaw-acceleration reference model proposed in this paper yields a continuous expected yaw angular acceleration, and step tracking of the steering angle does not need to be implemented. Because the desired yaw model is designed directly from the variation laws of the yaw motion of the vehicle during a lane-changing maneuver, rather than calculated indirectly from the lane-change trajectory model, the calculation steps are simplified.
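
    The trapezoidal yaw-acceleration idea can be pictured with the hedged sketch below: a reference built from positive and negative trapezoids is integrated once to obtain a continuous yaw rate and twice to obtain the yaw angle, both of which return to zero after the maneuver on a straight road. The amplitudes and durations are illustrative values, not the paper's parameters.

```python
import numpy as np

def trapezoid(a_max, t_ramp, t_hold, dt):
    """One trapezoidal acceleration pulse: ramp up, hold, ramp down."""
    up = np.linspace(0.0, a_max, int(round(t_ramp / dt)), endpoint=False)
    hold = np.full(int(round(t_hold / dt)), a_max)
    return np.concatenate([up, hold, up[::-1]])

def lane_change_yaw_reference(a_max=0.2, t_ramp=0.3, t_hold=0.6, dt=0.01):
    """Yaw-acceleration reference built from positive and negative trapezoids
    (+T, -T, -T, +T): integrating once gives a smooth pair of yaw-rate lobes,
    integrating twice gives a yaw angle that returns to zero after the maneuver."""
    tz = trapezoid(a_max, t_ramp, t_hold, dt)
    accel = np.concatenate([tz, -tz, -tz, tz])
    yaw_rate = np.cumsum(accel) * dt
    yaw_angle = np.cumsum(yaw_rate) * dt
    t = np.arange(len(accel)) * dt
    return t, accel, yaw_rate, yaw_angle

t, a, r, psi = lane_change_yaw_reference()
print(abs(r[-1]) < 1e-9, abs(psi[-1]) < 1e-6)   # yaw rate and yaw angle return to ~0
```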

  15. A new method for automatic discontinuity traces sampling on rock mass 3D model

    Umili, G.; Ferrero, A.; Einstein, H. H.


    A new automatic method for discontinuity traces mapping and sampling on a rock mass digital model is described in this work. The implemented procedure allows one to automatically identify discontinuity traces on a Digital Surface Model: traces are detected directly as surface breaklines, by means of maximum and minimum principal curvature values of the vertices that constitute the model surface. Color influence and user errors, that usually characterize the trace mapping on images, are eliminated. Also trace sampling procedures based on circular windows and circular scanlines have been implemented: they are used to infer trace data and to calculate values of mean trace length, expected discontinuity diameter and intensity of rock discontinuities. The method is tested on a case study: results obtained applying the automatic procedure on the DSM of a rock face are compared to those obtained performing a manual sampling on the orthophotograph of the same rock face.
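
    A minimal sketch of the curvature-based breakline idea follows. It assumes the per-vertex principal curvatures of the digital surface model have already been computed (they are generated randomly here just so the example runs) and flags vertices with extreme maximum or minimum principal curvature as candidate trace points; the quantile threshold is an illustrative choice, not the paper's criterion.

```python
import numpy as np

def breakline_vertices(k_max, k_min, quantile=0.95):
    """Flag candidate trace (breakline) vertices: those whose maximum or minimum
    principal curvature is extreme relative to the rest of the surface.
    k_max, k_min: per-vertex principal curvatures of the DSM (assumed given)."""
    hi = np.quantile(k_max, quantile)       # convex edges (e.g. protruding blocks)
    lo = np.quantile(k_min, 1 - quantile)   # concave edges (e.g. open joints)
    return np.flatnonzero((k_max > hi) | (k_min < lo))

# Toy example: a mostly smooth surface with a few high-curvature vertices.
rng = np.random.default_rng(1)
k_max = rng.normal(0.0, 0.05, 1000); k_max[[10, 500]] = 2.0
k_min = rng.normal(0.0, 0.05, 1000); k_min[[20, 700]] = -2.0
print(breakline_vertices(k_max, k_min))
```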

  16. Automatic Model-Based Generation of Parameterized Test Cases Using Data Abstraction

    Calamé, Jens R.; Ioustinova, Natalia; Pol, van de Jaco; Romijn, J.M.T.; Smith, G.; Pol, van de J.C.


    Developing test suites is a costly and error-prone process. Model-based test generation tools facilitate this process by automatically generating test cases from system models. The applicability of these tools, however, depends on the size of the target systems. Here, we propose an approach to gener

  17. The MADE reference information model for interoperable pervasive telemedicine systems

    Fung, N.L.S.; Jones, V.M.; Hermens, H.J.


    Objectives: The main objective is to develop and validate a reference information model (RIM) to support semantic interoperability of pervasive telemedicine systems. The RIM is one component within a larger, computer-interpretable "MADE language" developed by the authors in the context of the MobiGu

  18. Reference Priors for the General Location-Scale Model

    Fernández, C.; Steel, M.F.J.


    The reference prior algorithm (Berger and Bernardo 1992) is applied to multivariate location-scale models with any regular sampling density, where we establish the irrelevance of the usual assumption of Normal sampling if our interest is in either the location or the scale. This result immediately

  19. Implementing Kuhlthau: A New Model for Library and Reference Instruction.

    Isbell, Dennis; Kammerlocher, Lisa


    Summarizes Carol Kuhlthau's research on the information search process. Discusses how Kuhlthau's model of students' information search process (ISP) has been integrated into a course at Arizona State University and is being used experimentally as a training tool in the library's reference services. Selected student responses to research process…

  20. Fraction Multiplication and Division Models: A Practitioner Reference Paper

    Ervin, Heather K.


    It is well documented in literature that rational number is an important area of understanding in mathematics. Therefore, it follows that teachers and students need to have an understanding of rational number and related concepts such as fraction multiplication and division. This practitioner reference paper examines models that are important to…

  1. Efficient Adoption and Assessment of Multiple Process Improvement Reference Models

    Simona Jeners


    Full Text Available A variety of reference models such as CMMI, COBIT or ITIL support IT organizations in improving their processes. These process improvement reference models (IRMs) cover different domains such as IT development, IT Services or IT Governance, but also share some similarities. As there are organizations that address multiple domains and need to coordinate their processes in their improvement, we present MoSaIC, an approach to support organizations in efficiently adopting and conforming to multiple IRMs. Our solution realizes a semantic integration of IRMs based on common meta-models. The resulting IRM integration model enables organizations to efficiently implement and assess multiple IRMs and to benefit from synergy effects.

  2. On fractional order composite model reference adaptive control

    Wei, Yiheng; Sun, Zhenyuan; Hu, Yangsheng; Wang, Yong


    This paper presents a novel composite model reference adaptive control approach for a class of fractional order linear systems with unknown constant parameters. The method is extended from model reference adaptive control. The parameter estimation error of our method depends on both the tracking error and the prediction error, whereas the existing method depends only on the tracking error, which gives our method better transient performance in the sense of generating smooth system output. With the aid of the continuous frequency distributed model, stability of the proposed approach is established in the Lyapunov sense. Furthermore, the convergence property of the model parameter estimation is presented, on the premise that the closed-loop control system is stable. Finally, numerical simulation examples are given to demonstrate the effectiveness of the proposed schemes.

  3. Automatically inferred Markov network models for classification of chromosomal band pattern structures.

    Granum, E; Thomason, M G


    A structural pattern recognition approach to the analysis and classification of metaphase chromosome band patterns is presented. An operational method of representing band pattern profiles as sharp-edged idealized profiles is outlined. These profiles are nonlinearly scaled to a small but fixed number of "density" levels. Previous experience has shown that profiles of six levels are appropriate and that the differences between successive bands in these profiles are suitable for classification. String representations, which focus on the sequences of transitions between local band pattern levels, are derived from such "difference profiles." A method of syntactic analysis of the band transition sequences by dynamic programming for optimal (maximal probability) string-to-network alignments is described. It develops automatic data-driven inference of band pattern models (Markov networks) per class, and uses these models for classification. The method does not use centromere information, but assumes the p-q-orientation of the band pattern profiles to be known a priori. It is experimentally established that the method can build Markov network models which, when used for classification, show a recognition rate of about 92% on test data. The experiments used 200 samples (chromosome profiles) for each of the 22 autosome chromosome types and were designed to also investigate various classifier design problems. It is found that the use of a priori knowledge of Denver Group assignment only improved classification by 1 or 2%. A scheme for typewise normalization of the class relationship measures proves useful, partly through improvements on average results and partly through a more evenly distributed error pattern. The choice of reference for the p-q-orientation of the band patterns is found to be unimportant, and timing results for the analysis show that recent and efficient implementations can process one cell in less than 1 min on current standard

  4. IP Telephony Interconnection Reference Challenges, Models, and Engineering

    Boucadair, Mohamed; Neves, Pedro Miguel; Einarsson, Olafur Pall


    Addressing the growth of IP telephony service offerings within the corporate and residential realm, IP Telephony Interconnection Reference: Challenges, Models, and Engineering examines the technical and regulatory issues related to IP telephony interconnection at the large scale. It describes business and interconnection models, reviews emerging architectures such as IMS and TISPAN, identifies commonly-encountered issues, and supplies solutions to technical issues. The authors offer a detailed overview of SPEERMINT activity and proposed architecture, the current work undertaken in i3 Forum, an

  5. Spatial uncertainty assessment in modelling reference evapotranspiration at regional scale

    G. Buttafuoco


    Full Text Available Evapotranspiration is one of the major components of the water balance and has been identified as a key factor in hydrological modelling. For this reason, several methods have been developed to calculate the reference evapotranspiration (ET0). In modelling reference evapotranspiration it is inevitable that both the model and the input data will present some uncertainty. Whatever model is used, the errors in the input will propagate to the calculated ET0. Neglecting information about estimation uncertainty, however, may lead to improper decision-making and water resources management. One geostatistical approach to spatial analysis is stochastic simulation, which draws alternative, equally probable realizations of a regionalized variable. Differences between the realizations provide a measure of spatial uncertainty and allow an error propagation analysis to be carried out. Among the evapotranspiration models, the Hargreaves-Samani model was used.

    The aim of this paper was to assess the spatial uncertainty of a monthly reference evapotranspiration model resulting from the uncertainties in the input attributes (mainly temperature) at regional scale. A case study was presented for the Calabria region (southern Italy). Temperature data were jointly simulated by conditional turning bands simulation with elevation as external drift, and 500 realizations were generated.

    The ET0 was then estimated for each of the 500 realizations of the input variables, and the ensemble of model outputs was used to infer the reference evapotranspiration probability distribution function. This approach allowed the areas characterized by greater uncertainty to be delineated, supporting supplementary sampling strategies and improving ET0 predictions.
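
    A minimal sketch of the error-propagation idea: evaluate the Hargreaves-Samani formula, ET0 = 0.0023 * Ra * (Tmean + 17.8) * sqrt(Tmax - Tmin), for an ensemble of equally probable temperature realizations and summarize the spread. The radiation value and the synthetic realizations below are placeholders standing in for the paper's turning-bands simulations.

```python
import numpy as np

def hargreaves_samani(t_mean, t_max, t_min, ra):
    """Reference evapotranspiration (mm/day).
    ra: extraterrestrial radiation expressed in mm/day of evaporation equivalent."""
    return 0.0023 * ra * (t_mean + 17.8) * np.sqrt(t_max - t_min)

rng = np.random.default_rng(0)
n_real = 500                 # number of equally probable realizations
ra = 12.0                    # placeholder Ra for the month (mm/day equivalent)

# Placeholder temperature realizations standing in for the geostatistical output.
t_min = rng.normal(12.0, 1.0, n_real)
t_max = rng.normal(27.0, 1.5, n_real)
t_mean = 0.5 * (t_min + t_max)

et0 = hargreaves_samani(t_mean, t_max, t_min, ra)
print(f"ET0 mean {et0.mean():.2f} mm/day, "
      f"95% interval [{np.percentile(et0, 2.5):.2f}, {np.percentile(et0, 97.5):.2f}]")
```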

  6. Technical Note: Automatic river network generation for a physically-based river catchment model

    S. J. Birkinshaw


    Full Text Available SHETRAN is a physically-based distributed modelling system that gives detailed simulations in time and space of water flow and sediment and solute transport in river catchments. Standard algorithms for the automatic generation of river channel networks from digital elevation data are impossible to apply in SHETRAN and other similar models because the river channels are assumed to run along the edges of grid cells. In this work a new algorithm for the automatic generation of a river channel network in SHETRAN is described and its use in an example catchment demonstrated.

  7. Technical Note: Automatic river network generation for a physically-based river catchment model

    Birkinshaw, S. J.


    SHETRAN is a physically-based distributed modelling system that gives detailed simulations in time and space of water flow and sediment and solute transport in river catchments. Standard algorithms for the automatic generation of river channel networks from digital elevation data are impossible to apply in SHETRAN and other similar models because the river channels are assumed to run along the edges of grid cells. In this work a new algorithm for the automatic generation of a river channel network in SHETRAN is described and its use in an example catchment demonstrated.


  9. Semi-automatic simulation model generation of virtual dynamic networks for production flow planning

    Krenczyk, D.; Skolud, B.; Olender, M.


    Computer modelling, simulation and visualization of production flow allow the efficiency of the production planning process in dynamic manufacturing networks to be increased. The use of a semi-automatic model generation concept based on a parametric approach supporting production planning processes is presented. The presented approach allows the use of simulation and visualization for the verification of production plans and alternative topologies of manufacturing network configurations, as well as the automatic generation of a series of production flow scenarios. Computational examples using the Enterprise Dynamics simulation software, comprising the steps of production planning and control for a manufacturing network, are also presented.

  10. Automatic methods for the refinement of system models from the specification to the implementation

    Seiter, Julia; Drechsler, Rolf


    This book provides a comprehensive overview of automatic model refinement, which helps readers close the gap between initial textual specification and its desired implementation. The authors enable readers to follow two “directions” for refinement: Vertical refinement, for adding detail and precision to single description for a given model and Horizontal refinement, which considers several views on one level of abstraction, refining the system specification by dedicated descriptions for structure or behavior. The discussion includes several methods which support designers of electronic systems in this refinement process, including verification methods to check automatically whether a refinement has been conducted as intended.

  11. A cultural evolutionary programming approach to automatic analytical modeling of electrochemical phenomena through impedance spectroscopy

    Arpaia, Pasquale


    An approach to the automatic analytical modeling of electrochemical impedance spectroscopy data by evolutionary programming based on cultural algorithms is proposed. A solution-search strategy based on a cultural mechanism is exploited for defining the equivalent-circuit model automatically: information on the search advance is transmitted to all potential solutions, rather than only to a small inheriting subset, as in a traditional genetic approach. Moreover, with respect to the state of the art, specific information related to constraints derived from knowledge of the application physics is also transferred. Experimental results of the proposed approach, implemented in impedance spectroscopy for general-purpose electrochemical circuit analysis and for corrosion monitoring and diagnosis, are presented.

  12. Reference Model 6 (RM6): Oscillating Wave Energy Converter.

    Bull, Diana L; Smith, Chris; Jenne, Dale Scott; Jacob, Paul; Copping, Andrea; Willits, Steve; Fontaine, Arnold; Brefort, Dorian; Gordon, Margaret Ellen; Copeland, Robert; Jepsen, Richard Alan


    This report is an addendum to SAND2013-9040: Methodology for Design and Economic Analysis of Marine Energy Conversion (MEC) Technologies. This report describes an Oscillating Water Column Wave Energy Converter reference model design in a complementary manner to Reference Models 1-4 contained in the above report. In this report, a conceptual design for an Oscillating Water Column Wave Energy Converter (WEC) device appropriate for the modeled reference resource site was identified, and a detailed backward bent duct buoy (BBDB) device design was developed using a combination of numerical modeling tools and scaled physical models. Our team used the methodology in SAND2013-9040 for the economic analysis that included costs for designing, manufacturing, deploying, and operating commercial-scale MEC arrays, up to 100 devices. The methodology was applied to identify key cost drivers and to estimate levelized cost of energy (LCOE) for this RM6 Oscillating Water Column device in dollars per kilowatt-hour ($/kWh). Although many costs were difficult to estimate at this time due to the lack of operational experience, the main contribution of this work was to disseminate a detailed set of methodologies and models that allow for an initial cost analysis of this emerging technology. This project is sponsored by the U.S. Department of Energy's (DOE) Wind and Water Power Technologies Program Office (WWPTO), within the Office of Energy Efficiency & Renewable Energy (EERE). Sandia National Laboratories, the lead in this effort, collaborated with partners from National Laboratories, industry, and universities to design and test this reference model.
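
    The levelized cost of energy calculation referenced above can be sketched in a few lines. The cost figures, fixed charge rate and capacity factor below are invented placeholders for illustration, not values from the RM6 report.

```python
def lcoe_per_kwh(capex, opex_per_year, fcr, aep_kwh):
    """Simple annualized LCOE: (FCR * CapEx + annual OpEx) / annual energy production."""
    return (fcr * capex + opex_per_year) / aep_kwh

# Placeholder numbers for a hypothetical 10-device array (not from SAND2013-9040).
capex = 40e6                   # total capital expenditure, $
opex = 1.5e6                   # annual operations & maintenance, $/yr
fcr = 0.113                    # fixed charge rate, 1/yr
aep = 10 * 360 * 8760 * 0.30   # 10 devices, 360 kW rated, 30% capacity factor, kWh/yr

print(f"LCOE = ${lcoe_per_kwh(capex, opex, fcr, aep):.2f}/kWh")
```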

  13. [Spirographic reference values. Mathematical models and practical use (author's transl)].

    Drouet, D; Kauffmann, F; Brille, D; Lellouch, J


    Various models predicting VC and FEV1 from age and height have been compared by both theoretical and practical approaches on several subgroups of a working population examined in 1960 and 1972. The models in which spirographic values are proportional to the cube of the height give a significantly worse fit to the data. All the other models give similar predicted values in practical terms, but cutoff points depend on the distributions of VC and FEV1 given age and height. Results show that these distributions are closer to a normal than to a lognormal distribution. The use of reference values and classical cutoffs is then discussed. Rather than using a single cutoff point, a more quantitative way of describing a subject's functional status is proposed, for example by situating him or her at a percentile of the reference population. In screening, cutoff points cannot be chosen without first specifying the decision considered and the population concerned.
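
    The percentile-based reporting proposed above amounts to standardizing the measured value against the reference regression and reading off the normal cumulative probability. The regression coefficients and residual standard deviation below are illustrative placeholders, not the published prediction equations.

```python
from scipy.stats import norm

def percentile_in_reference(observed, predicted, residual_sd):
    """Position of a subject within the reference population, assuming
    normally distributed residuals around the predicted value."""
    z = (observed - predicted) / residual_sd
    return 100.0 * norm.cdf(z)

# Hypothetical example: FEV1 predicted from age and height (placeholder coefficients).
age, height_m = 45, 1.75
predicted_fev1 = -1.0 + 3.0 * height_m - 0.025 * age   # litres, illustrative only
residual_sd = 0.45                                      # litres, illustrative only
observed_fev1 = 3.1

p = percentile_in_reference(observed_fev1, predicted_fev1, residual_sd)
print(f"subject percentile within the reference population: {p:.1f}")
```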

  14. Tracking stochastic resonance curves using an assisted reference model

    Calderón Ramírez, Mario; Rico Martínez, Ramiro [Departamento de Ingeniería Química, Instituto Tecnológico de Celaya, Av. Tecnológico y A. García Cubas S/N, Celaya, Guanajuato, 38010 (Mexico); Ramírez Álvarez, Elizeth [Nonequilibrium Chemical Physics, Physik-Department, TU-München, James-Franck-Str. 1, 85748 Garching bei München (Germany); Parmananda, P. [Department of Physics, Indian Institute of Technology Bombay, Powai, Mumbai 400 076 (India)


    The optimal noise amplitude for Stochastic Resonance (SR) is located employing an Artificial Neural Network (ANN) reference model with a nonlinear predictive capability. A modified Kalman Filter (KF) was coupled to this reference model in order to compensate for semi-quantitative forecast errors. Three manifestations of stochastic resonance, namely, Periodic Stochastic Resonance (PSR), Aperiodic Stochastic Resonance (ASR), and finally Coherence Resonance (CR) were considered. Using noise amplitude as the control parameter, for the case of PSR and ASR, the cross-correlation curve between the sub-threshold input signal and the system response is tracked. However, using the same parameter the Normalized Variance curve is tracked for the case of CR. The goal of the present work is to track these curves and converge to their respective extremal points. The ANN reference model strategy captures and subsequently predicts the nonlinear features of the model system while the KF compensates for the perturbations inherent to the superimposed noise. This technique, implemented in the FitzHugh-Nagumo model, enabled us to track the resonance curves and eventually locate their optimal (extremal) values. This would yield the optimal value of noise for the three manifestations of the SR phenomena.

  15. Towards an automatic model transformation mechanism from UML state machines to DEVS models

    Ariel González


    Full Text Available The development of complex event-driven systems requires studies and analysis prior to deployment with the goal of detecting unwanted behavior. UML is a language widely used by the software engineering community for modeling these systems through state machines, among other mechanisms. Currently, these models do not have appropriate execution and simulation tools to analyze the real behavior of systems. Existing tools do not provide appropriate libraries (sampling from a probability distribution, plotting, etc.) both to build and to analyze models. Modeling and simulation for design and prototyping of systems are widely used techniques to predict, investigate and compare the performance of systems. In particular, the Discrete Event System Specification (DEVS) formalism separates modeling from simulation; there are several tools available on the market that run and collect information from DEVS models. This paper proposes a model transformation mechanism from UML state machines to DEVS models in the Model-Driven Development (MDD) context, through the declarative QVT Relations language, in order to perform simulations using tools such as PowerDEVS. A mechanism to validate the transformation is proposed. Moreover, examples of application to analyze the behavior of an automatic banking machine and a control system of an elevator are presented.

  16. An automatic and effective parameter optimization method for model tuning

    T. Zhang


    Simulation results show that the optimum combination of these parameters determined using this method is able to improve the model's overall performance by 9%. The proposed methodology and software framework can be easily applied to other GCMs to speed up the model development process, especially regarding unavoidable comprehensive parameter tuning during the model development stage.

  17. Direct Model Reference Adaptive Control for a Magnetic Bearing

    Durling, Mike [Rensselaer Polytechnic Inst., Troy, NY (United States)


    A Direct Model Reference Adaptive Controller (DMRAC) is applied to a magnetic bearing test stand. The bearing of interest is the MBC 500 Magnetic Bearing System manufactured by Magnetic Moments, LLC. The bearing model is presented in state space form and the system transfer function is measured directly using a closed-loop swept sine technique. Next, the bearing models are used to design a phase-lead controller, notch filter and then a DMRAC. The controllers are tuned in simulations and finally are implemented using a combination of MATLAB, SIMULINK and dSPACE. The results show a successful implementation of a DMRAC on the magnetic bearing hardware.

  18. Automatic fitting of spiking neuron models to electrophysiological recordings

    Cyrille Rossant


    Full Text Available Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.




    In this paper the presentation of the ball-packing method is reviewed, and a scheme to generate meshes for complex 3-D geometric models is given, which consists of 4 steps: (1) create nodes in 3-D models by the ball-packing method, (2) connect nodes to generate the mesh by 3-D Delaunay triangulation, (3) retrieve the boundary of the model after Delaunay triangulation, (4) improve the mesh.

  20. Active Shapes for Automatic 3D Modeling of Buildings

    Sirmacek, B.; Lindenbergh, R.C.


    Recent technological developments help us to acquire high quality 3D measurements of our urban environment. However, these measurements, which come as point clouds or Digital Surface Models (DSM), do not directly give 3D geometrical models of buildings. In addition to that, they are not suitable for


    Gennady N. Zverev


    Full Text Available The paper deals with the problems of constructing objective models of an educational course, learning processes, and the control of learning results. We consider the possibility of automated test generation using formalized concepts of testology and semiotic and mathematical models of pedagogical processes.

  2. Showing Automatically Generated Students' Conceptual Models to Students and Teachers

    Perez-Marin, Diana; Pascual-Nieto, Ismael


    A student conceptual model can be defined as a set of interconnected concepts associated with an estimation value that indicates how well these concepts are used by the students. It can model just one student or a group of students, and can be represented as a concept map, conceptual diagram or one of several other knowledge representation…

  3. Towards automatic model based controller design for reconfigurable plants

    Michelsen, Axel Gottlieb; Stoustrup, Jakob; Izadi-Zamanabadi, Roozbeh


    This paper introduces model-based Plug and Play Process Control, a novel concept for process control, which allows a model-based control system to be reconfigured when a sensor or an actuator is plugged into a controlled process. The work reported in this paper focuses on composing a monolithic m...


    Numerical models are a useful tool in evaluating and designing NAPL remediation systems. Traditional constitutive finite difference and finite element models are complex and expensive to apply. For this reason, this paper presents the application of a simplified stochastic-Lagran...

  5. GOES-R Ground Segment Technical Reference Model

    Krause, R. G.; Burnett, M.; Khanna, R.


    NOAA's Geostationary Operational Environmental Satellite-R Series (GOES-R) Ground Segment Project (GSP) has developed a Technical Reference Model (TRM) to support the documentation of technologies that could form the basis for a set of requirements supporting the evolution towards a NESDIS enterprise ground system. Architecture and technologies in this TRM can be applied or extended to other ground systems for planning and development. The TRM maps GOES-R technologies to the Office of Management and Budget's (OMB) Federal Enterprise Architecture (FEA) Consolidated Reference Model (CRM) V 2.3 Technical Services Standard (TSS). The FEA TRM categories are the framework for the GOES-R TRM. This poster will present the GOES-R TRM.

  6. A Method for Modeling the Virtual Instrument Automatic Test System Based on the Petri Net

    MA Min; CHEN Guang-ju


    Virtual instruments play an important role in automatic test systems. This paper introduces the composition of a virtual instrument automatic test system, taking as an example a VXIbus-based test software platform developed by the CAT lab of UESTC. A method to model this system based on Petri nets is then proposed. Through this method, test task scheduling can be analyzed to prevent deadlock or resource conflicts. Finally, the feasibility of the method is analyzed.

  7. Automatically Creating Design Models from 3D Anthropometry Data

    Wuhrer, Stefanie; Bose, Prosenjit


    When designing a product that needs to fit the human shape, designers often use a small set of 3D models, called design models, either in physical or digital form, as representative shapes to cover the shape variabilities of the population for which the products are designed. Until recently, the process of creating these models has been an art involving manual interaction and empirical guesswork. The availability of the 3D anthropometric databases provides an opportunity to create design models optimally. In this paper, we propose a novel way to use 3D anthropometric databases to generate design models that represent a given population for design applications such as the sizing of garments and gear. We generate the representative shapes by solving a covering problem in a parameter space. Well-known techniques in computational geometry are used to solve this problem. We demonstrate the method using examples in designing glasses and helmets.
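
    One simple way to realize the covering idea, not necessarily the computational-geometry routine used by the authors, is a greedy cover: repeatedly pick the candidate shape that covers the most not-yet-covered individuals within a tolerance in parameter space. The anthropometric parameters, population and tolerance below are invented for illustration.

```python
import numpy as np

def greedy_design_models(population, tolerance, max_models=10):
    """Greedy covering: choose members of the population as design models so that
    every individual is within `tolerance` (Euclidean distance in normalized
    parameter space) of at least one chosen model."""
    uncovered = np.ones(len(population), dtype=bool)
    chosen = []
    while uncovered.any() and len(chosen) < max_models:
        # pairwise distances: rows are individuals, columns are candidate models
        d = np.linalg.norm(population[:, None, :] - population[None, :, :], axis=2)
        covers = (d <= tolerance) & uncovered[:, None]
        best = covers.sum(axis=0).argmax()        # candidate covering most uncovered people
        chosen.append(int(best))
        uncovered &= ~covers[:, best]
    return chosen

rng = np.random.default_rng(1)
# Placeholder population: two normalized head dimensions for 200 people.
population = rng.normal(size=(200, 2))
print("chosen design-model indices:", greedy_design_models(population, tolerance=0.8))
```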

  8. A Stochastic Approach for Automatic and Dynamic Modeling of Students' Learning Styles in Adaptive Educational Systems

    Dorça, Fabiano Azevedo; Lima, Luciano Vieira; Fernandes, Márcia Aparecida; Lopes, Carlos Roberto


    Considering learning and how to improve students' performances, an adaptive educational system must know how an individual learns best. In this context, this work presents an innovative approach for student modeling through probabilistic learning styles combination. Experiments have shown that our approach is able to automatically detect and…

  9. Unidirectional high fiber content composites: Automatic 3D FE model generation and damage simulation

    Qing, Hai; Mishnaevsky, Leon


    A new method and a software code for the automatic generation of 3D micromechanical FE models of unidirectional long-fiber-reinforced composite (LFRC) with high fiber volume fraction with random fiber arrangement are presented. The fiber arrangement in the cross-section is generated through random...

  10. Reference-data modelling for tracking and tracing

    Dorp, van C.A.


    Subject headings: supply chain, tracking and tracing, reference-data modelling

  11. Reference Model 5 (RM5): Oscillating Surge Wave Energy Converter

    Yu, Y. H. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Jenne, D. S. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Thresher, R. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Copping, A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Geerlofs, S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hanna, L. A. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)


    This report is an addendum to SAND2013-9040: Methodology for Design and Economic Analysis of Marine Energy Conversion (MEC) Technologies. This report describes an Oscillating Surge Wave Energy Converter (OSWEC) reference model design in a complementary manner to Reference Models 1-4 contained in the above report. A conceptual design for a taut moored oscillating surge wave energy converter was developed. The design had an annual electrical power of 108 kilowatts (kW), rated power of 360 kW, and intended deployment at water depths between 50 m and 100 m. The study includes structural analysis, power output estimation, a hydraulic power conversion chain system, and mooring designs. The results were used to estimate device capital cost and annual operation and maintenance costs. The device performance and costs were used for the economic analysis, following the methodology presented in SAND2013-9040 that included costs for designing, manufacturing, deploying, and operating commercial-scale MEC arrays up to 100 devices. The levelized cost of energy estimated for the Reference Model 5 OSWEC, presented in this report, was for a single device and arrays of 10, 50, and 100 units, and it enabled the economic analysis to account for cost reductions associated with economies of scale. The baseline commercial levelized cost of energy estimate for the Reference Model 5 device in an array comprised of 10 units is $1.44/kilowatt-hour (kWh), and the value drops to approximately $0.69/kWh for an array of 100 units.


  13. Mathematical modeling and analytical solution for stretching force of automatic feed mechanism

    魏志芳; 陈国光


    The load of an automatic feed mechanism is composed of the stretching force of the feed belt at the entrance to the lower flexible guidance and the friction force between the feed belt and the flexible guidance. A mathematical model for computing the load was presented. An optimization problem was formulated to determine the attitude of the flexible guidance, based on the principle that the potential energy stored in the system is at a minimum at equilibrium. The friction force was then obtained from the attitude of the guide leaves, the moving velocity of the feed belt and the friction factor. Consequently, the load of the automatic feed mechanism can be calculated. Finally, an example was given to compute the load when the horizontal and elevating firing angles of the automatic mechanism were 45° and 30°, respectively. The computed result can serve as a criterion for determining the design parameters of the automatic mechanism.

  14. A new reference viscosity model for hydrogen sulfide

    Schmidt, K.A.G. [Alberta Univ., Edmonton, AB (Canada). Dept. of Chemical and Materials Engineering, Electrical and Computer Engineering Research Facility; Quinones-Cisneros, S.E. [Univ. Nacional Autonoma de Mexico, Mexico City (Mexico). Dept. of Rheology, Materials Research Inst.; Giri, B.R.; Blais, P.; Marriott, R.A. [Alberta Sulphur Research Ltd., Calgary, AB (Canada); Calgary Univ., AB (Canada). Dept. of Chemistry


    New and economical ways of reducing emissions of acid gases to the atmosphere are becoming increasingly important in the petroleum industry. This presentation discussed the promising sequestration option of injecting these acid gases into formations for disposal and/or storage. Acid gas injection (AGI) is a commonly used process for the disposal of mixtures of hydrogen sulphide and carbon dioxide, particularly in small-scale schemes. The acid gas is sometimes used as a miscible flood fluid for pressure maintenance. The use of AGI is being considered for the production of elemental sulphur. Accurate viscosities are needed in the design of these injection schemes to determine pressure drops due to fluid flow in both the acid gas pipeline and the injection well. This presentation included experimental data and discussed the applicability of the friction theory of viscosity modelling to reproduce the existing experimental viscosities of hydrogen sulphide and its mixtures. The friction theory model was shown to be a highly flexible and powerful tool for modelling the viscosity of reservoir fluids, from light to heavy fluids, under broad conditions of temperature, pressure and composition. During the development of this reference viscosity model, a literature review identified areas where additional data are needed to fill voids and resolve discrepancies in existing data sets. It was concluded that although the developed model was based on limited data, the sound physical reasoning provided good results. An experimental program has been launched to determine the viscosities of hydrogen sulphide (H{sub 2}S) in the critical areas identified in the initial reference model. The current update to the data set consists of experimental H{sub 2}S viscosities up to 1000 bar and at temperatures between 0 and 150 degrees C. The data will be applied to update the H{sub 2}S reference viscosity model based on the friction theory. The updated reference equation will help improve

  15. CADLIVE toolbox for MATLAB: automatic dynamic modeling of biochemical networks with comprehensive system analysis.

    Inoue, Kentaro; Maeda, Kazuhiro; Miyabe, Takaaki; Matsuoka, Yu; Kurata, Hiroyuki


    Mathematical modeling has become a standard technique to understand the dynamics of complex biochemical systems. To promote such modeling, we previously developed the CADLIVE dynamic simulator, which automatically converts a biochemical map into its associated mathematical model, simulates its dynamic behaviors and analyzes its robustness. To enhance the usability of CADLIVE and extend its functions, we propose the CADLIVE toolbox for MATLAB, which implements not only the existing functions of the CADLIVE dynamic simulator, but also the latest tools including global parameter search methods with robustness analysis. The seamless, bottom-up processes consisting of biochemical network construction, automatic construction of its dynamic model, simulation, optimization, and S-system analysis greatly facilitate dynamic modeling, contributing to research in systems biology and synthetic biology. The application can be freely downloaded together with instructions.

  16. Semi-Automatic Modelling of Building FAÇADES with Shape Grammars Using Historic Building Information Modelling

    Dore, C.; Murphy, M.


    This paper outlines a new approach for generating digital heritage models from laser scan or photogrammetric data using Historic Building Information Modelling (HBIM). HBIM is a plug-in for Building Information Modelling (BIM) software that uses parametric library objects and procedural modelling techniques to automate the modelling stage. The HBIM process involves a reverse engineering solution whereby parametric interactive objects representing architectural elements are mapped onto laser scan or photogrammetric survey data. A library of parametric architectural objects has been designed from historic manuscripts and architectural pattern books. These parametric objects were built using an embedded programming language within the ArchiCAD BIM software called Geometric Description Language (GDL). Procedural modelling techniques have been implemented with the same language to create a parametric building façade which automatically combines library objects based on architectural rules and proportions. Different configurations of the façade are controlled by user parameter adjustment. The automatically positioned elements of the façade can be subsequently refined using graphical editing while overlaying the model with orthographic imagery. Along with this semi-automatic method for generating façade models, manual plotting of library objects can also be used to generate a BIM model from survey data. After the 3D model has been completed conservation documents such as plans, sections, elevations and 3D views can be automatically generated for conservation projects.

  17. A generic method for automatic translation between input models for different versions of simulation codes

    Serfontein, Dawid E., E-mail: [School of Mechanical and Nuclear Engineering, North West University (PUK-Campus), PRIVATE BAG X6001 (Internal Post Box 360), Potchefstroom 2520 (South Africa); Mulder, Eben J. [School of Mechanical and Nuclear Engineering, North West University (South Africa); Reitsma, Frederik [Calvera Consultants (South Africa)


    A computer code was developed for the semi-automatic translation of input models for the VSOP-A diffusion neutronics simulation code to the format of the newer VSOP 99/05 code. In this paper, this algorithm is presented as a generic method for producing codes for the automatic translation of input models from the format of one code version to another, or even to that of a completely different code. Normally, such translations are done manually. However, input model files, such as those for the VSOP codes, are often very large and may consist of many thousands of numeric entries that make no particular sense to the human eye. Therefore the task of, for instance, nuclear regulators to verify the accuracy of such translated files can be very difficult and cumbersome. This may cause translation errors not to be picked up, which may have disastrous consequences later on when a reactor with such a faulty design is built. A generic algorithm for producing such automatic translation codes may therefore ease the translation and verification process to a great extent. It will also remove human error from the process, which may significantly enhance the accuracy and reliability of the process. The developed algorithm also automatically creates a verification log file which permanently records the names and values of each variable used, as well as the list of meanings of all the possible values. This should greatly facilitate reactor licensing applications.

  18. Strategies to Automatically Derive a Process Model from a Configurable Process Model Based on Event Data

    Mauricio Arriagada-Benítez


    Full Text Available Configurable process models are frequently used to represent business workflows and other discrete event systems among different branches of large organizations: they unify commonalities shared by all branches and describe their differences, at the same time. The configuration of such models is usually done manually, which is challenging. On the one hand, when the number of configurable nodes in the configurable process model grows, the size of the search space increases exponentially. On the other hand, the person performing the configuration may lack the holistic perspective to make the right choice for all configurable nodes at the same time, since choices influence each other. Nowadays, information systems that support the execution of business processes create event data reflecting how processes are performed. In this article, we propose three strategies (based on exhaustive search, genetic algorithms and a greedy heuristic that use event data to automatically derive a process model from a configurable process model that better represents the characteristics of the process in a specific branch. These strategies have been implemented in our proposed framework and tested in both business-like event logs as recorded in a higher educational enterprise resource planning system and a real case scenario involving a set of Dutch municipalities.
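
    The greedy idea can be illustrated without any process-mining machinery: for each configurable node, pick the variant that best matches what the event log shows, one node at a time. The configuration options and the event log below are invented placeholders, not the authors' framework.

```python
from collections import Counter

# Invented example: each configurable node offers a set of variants (activity labels),
# and the event log is a multiset of activities observed in one branch.
configurable_nodes = {
    "register": {"register_online", "register_desk"},
    "payment":  {"pay_card", "pay_cash", "pay_invoice"},
    "delivery": {"ship", "pickup"},
}
event_log = Counter({
    "register_online": 180, "register_desk": 20,
    "pay_card": 150, "pay_invoice": 40, "pay_cash": 10,
    "ship": 170, "pickup": 30,
})

def greedy_configuration(nodes, log):
    """Choose, node by node, the variant most frequently observed in the log."""
    return {node: max(variants, key=lambda v: log[v]) for node, variants in nodes.items()}

print(greedy_configuration(configurable_nodes, event_log))
```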

  19. Automatic age and gender classification using supervised appearance model

    Bukar, Ali Maina; Ugail, Hassan; Connah, David


    Age and gender classification are two important problems that recently gained popularity in the research community, due to their wide range of applications. Research has shown that both age and gender information are encoded in the face shape and texture, hence the active appearance model (AAM), a statistical model that captures shape and texture variations, has been one of the most widely used feature extraction techniques for the aforementioned problems. However, AAM suffers from some drawbacks, especially when used for classification. This is primarily because principal component analysis (PCA), which is at the core of the model, works in an unsupervised manner, i.e., PCA dimensionality reduction does not take into account how the predictor variables relate to the response (class labels). Rather, it explores only the underlying structure of the predictor variables, thus, it is no surprise if PCA discards valuable parts of the data that represent discriminatory features. Toward this end, we propose a supervised appearance model (sAM) that improves on AAM by replacing PCA with partial least-squares regression. This feature extraction technique is then used for the problems of age and gender classification. Our experiments show that sAM has better predictive power than the conventional AAM.
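
    The substitution described above, PCA replaced by partial least squares so that the projection is steered by the class labels, can be sketched with scikit-learn. The random matrix below stands in for concatenated AAM shape-and-texture vectors; it is not the authors' data or pipeline.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Placeholder "appearance" vectors (e.g., concatenated shape + texture) and binary labels.
X = rng.normal(size=(300, 120))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Supervised feature extraction: PLS components are chosen to covary with the labels.
pls = PLSRegression(n_components=10).fit(X_tr, y_tr)
Z_tr, Z_te = pls.transform(X_tr), pls.transform(X_te)

clf = LogisticRegression(max_iter=1000).fit(Z_tr, y_tr)
print("accuracy on PLS features:", clf.score(Z_te, y_te))
```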

  20. Information Model for Connection Management in Automatic Switched Optical Network

    Xu Yunbin(徐云斌); Song Hongsheng; Gui Xuan; Zhang Jie; Gu Wanyi


    The three types of connections (Permanent Connection, Soft Permanent Connection and Switched Connection) provided by ASON can meet the requirements of different network services. Management and maintenance of these three connections are the most important aspects of ASON management. The information models proposed in this paper are intended for ASON connection management. First, a new information model is proposed to meet the requirements of the control plane introduced by ASON. In this model, a new class, ControlNE, is given, and the relationship between the ControlNE and the transport NE (network element) is also defined. The paper then proposes, for the first time, information models for the three types of connections and analyzes the relationship between the three kinds of connections and the basic network transport entities. Finally, the paper defines some CORBA interfaces for the management of the three connections. In these interfaces, operations such as creating or releasing a connection are defined, and other operations manage the performance of the three kinds of connections, which is necessary for a distributed management system.

  1. Renaturalisation of forest ecosystems: is a reference model really needed?

    Nocentini S


    Full Text Available Renaturalisation is more and more often considered the aim of management when dealing with simplified forests. The term “renaturalisation” has become the keyword of many forest management projects. A reference model or system is often considered essential for forest renaturalisation. This approach is coherent with a school of thought which finds relevant examples in the science and practice of Ecological restoration. The search for a reference system has several practical limitations and, especially, a severe theoretical fault. The definition of a reference system underlies the idea that ecosystem reactions to management can be exactly forecast and thus ecosystems can be guided towards a predefined composition, structure and functionality. This idea stems from a deterministic imprinting which characterises traditional forestry thinking and which is clearly in contrast with the dynamic nature of forest ecosystems. If renaturalisation is seen as a silvicultural and management approach which tends to favour natural evolutionary processes through the system’s ability to autonomously increase its complexity and biodiversity, then the actual system under management is the only possible reference system. An accurate analysis of the evolutionary trends in relation to the actual environmental conditions and landscape matrix should therefore be the basis for the renaturalisation process. Management must proceed as an experiment: the reaction to each intervention must be monitored using appropriate indicators. These are not to be seen as reference limits but as parameters for quantifying changes in the system’s self-regulating processes. In conclusion, renaturalisation has more to do with the way we interact with nature than with a closed project with a clearly defined beginning and end.

  2. Coordinate Reference System Metadata in Interdisciplinary Environmental Modeling

    Blodgett, D. L.; Arctur, D. K.; Hnilo, J.; Danko, D. M.; Rutledge, G. K.


    For global climate modeling based on a unit sphere, the positional accuracy of transformations between "real earth" coordinates and the spherical earth coordinates is practically irrelevant due to the coarse grid and precision of global models. Consequently, many climate models are driven by data using real-earth coordinates without transforming them to the shape of the model grid. Additionally, metadata to describe the earth shape and its relationship to latitude longitude demarcations, or datum, used for model output is often left unspecified or ambiguous. Studies of weather and climate effects on coastal zones, water resources, agriculture, biodiversity, and other critical domains typically require positional accuracy on the order of several meters or less. This precision requires that a precise datum be used and accounted for in metadata. While it may be understood that climate model results using spherical earth coordinates could not possibly approach this level of accuracy, precise coordinate reference system metadata is nevertheless required by users and applications integrating climate and geographic information. For this reason, data publishers should provide guidance regarding the appropriate datum to assume for their data. Without some guidance, analysts must make assumptions they are uncomfortable or unwilling to make and may spend inordinate amounts of time researching the correct assumption to make. A consequence of the (practically justified for global climate modeling) disregard for datums is that datums are also neglected when publishing regional or local scale climate and weather data where datum information may be important. For example, observed data, like precipitation and temperature measurements, used in downscaling climate model results are georeferenced precisely. If coordinate reference system metadata are disregarded in cases like this, systematic biases in geolocation can result. Additionally, if no datum transformation was applied to

  3. A reference worldwide model for antineutrinos from reactors

    Baldoncini, Marica; Fiorentini, Giovanni; Mantovani, Fabio; Ricci, Barbara; Strati, Virginia; Xhixha, Gerti


    Antineutrinos produced at nuclear reactors constitute a severe source of background for the detection of geoneutrinos, which bring to the Earth's surface information about natural radioactivity in the whole planet. In this framework we provide a reference worldwide model for antineutrinos from reactors, based on the reactor operational records published yearly by the International Atomic Energy Agency (IAEA). We evaluate the expected signal from commercial reactors for ongoing (KamLAND and Borexino), planned (SNO+) and proposed (Juno, RENO-50, LENA and Hanohano) experimental sites. Uncertainties related to reactor antineutrino production, propagation and detection processes are estimated using a Monte Carlo based approach, which provides an overall site-dependent uncertainty on the signal in the geoneutrino energy window on the order of 3%. We also implement the off-equilibrium correction to the reference reactor spectra associated with the long-lived isotopes and we estimate a 2.4% increase of the unoscillate...

  4. Automatic segmentation of vertebral arteries in CT angiography using combined circular and cylindrical model fitting

    Lee, Min Jin; Hong, Helen; Chung, Jin Wook


    We propose an automatic vessel segmentation method for vertebral arteries in CT angiography using combined circular and cylindrical model fitting. First, to generate multi-segmented volumes, the whole volume is automatically divided into four segments according to the anatomical properties of bone structures along the z-axis of the head and neck. To define an optimal volume circumscribing the vertebral arteries, anterior-posterior bounding and side boundaries are defined as the initial extracted vessel region. Second, the initial vessel candidates are tracked using circular model fitting. Since the boundaries of the vertebral arteries are ambiguous where the arteries pass through the transverse foramen in the cervical vertebra, the circle model is extended along the z-axis to a cylinder model to take into account additional vessel information from neighboring slices. Finally, the boundaries of the vertebral arteries are detected using graph-cut optimization. In the experiments, the proposed method provided accurate results without bone artifacts or eroded vessels in the cervical vertebra.

  5. Stochastic Modeling as a Means of Automatic Speech Recognition


    ... the posterior probability Pr(X[1:T] = x[1:T] | Y[1:T] = y[1:T], A, L, P, S), where A, L, P, S represent the acoustic-phonetic, lexical, phonological, and ... phonetic sequence by using multiple dictionary entries, phonological rules embedded in the dictionary, and a "degarbling" procedure. The search is ... 1) a statistical model of the language, 2) a phonemic dictionary and statistical phonological rules, 3) a phonetic matching algorithm, 4) word-level search

  6. On Automatic Modeling and Use of Domain-specific Ontologies

    Andreasen, Troels; Knappe, Rasmus; Bulskov, Henrik


    is a specific lattice-based concept algebraic language by which ontologies are inherently generative. The modeling of a domain specific ontology is based on a general ontology built upon common knowledge resources as dictionaries and thesauri. Based on analysis of concept occurrences in the object document......-based navigation. Finally, a measure of concept similarity is derived from the domain specific ontology based on occurrences, commonalities, and distances in the ontology....


    Olha Sushchenko


    Full Text Available Purpose: The paper deals with the mathematical description of gimballed attitude and heading reference systems, which can be applied in the design of strategic precision navigation systems. The main goal is to create a mathematical description that takes into consideration the different operating modes of this class of navigation systems. To provide high accuracy, indirect control is used: the position of the gimballed platform is controlled by signals of gyroscopic devices, which are corrected using accelerometer signals. Methods: To solve the given problem, methods of classical theoretical mechanics, gyro theory, and inertial navigation are used. Results: The full mathematical model of the gimballed attitude and heading reference system is derived, including descriptions of the different operating modes. Expressions for the control and correction moments in the different modes are presented. Simulation results are given. Conclusions: The presented results prove the efficiency of the proposed models. The developed mathematical models can be useful for the design of navigation systems for a wide class of moving vehicles.

  8. Generic method for automatic bladder segmentation on cone beam CT using a patient-specific bladder shape model

    Schoot, A. J. A. J. van de, E-mail:; Schooneveldt, G.; Wognum, S.; Stalpers, L. J. A.; Rasch, C. R. N.; Bel, A. [Department of Radiation Oncology, Academic Medical Center, University of Amsterdam, Meibergdreef 9, 1105 AZ Amsterdam (Netherlands); Hoogeman, M. S. [Department of Radiation Oncology, Daniel den Hoed Cancer Center, Erasmus Medical Center, Groene Hilledijk 301, 3075 EA Rotterdam (Netherlands); Chai, X. [Department of Radiation Oncology, Stanford University School of Medicine, 875 Blake Wilbur Drive, Palo Alto, California 94305 (United States)


    Purpose: The aim of this study is to develop and validate a generic method for automatic bladder segmentation on cone beam computed tomography (CBCT), independent of gender and treatment position (prone or supine), using only pretreatment imaging data. Methods: Data of 20 patients, treated for tumors in the pelvic region with the entire bladder visible on CT and CBCT, were divided into four equally sized groups based on gender and treatment position. The full and empty bladder contours, which can be acquired with pretreatment CT imaging, were used to generate a patient-specific bladder shape model. This model was used to guide the segmentation process on CBCT. To obtain the bladder segmentation, the reference bladder contour was deformed iteratively by maximizing the cross-correlation between directional grey value gradients over the reference and CBCT bladder edge. To overcome incorrect segmentations caused by CBCT image artifacts, automatic adaptations were implemented. Moreover, locally incorrect segmentations could be adapted manually. After each adapted segmentation, the bladder shape model was expanded and new shape patterns were calculated for subsequent segmentations. All available CBCTs were used to validate the segmentation algorithm. The bladder segmentations were validated by comparison with the manual delineations and the segmentation performance was quantified using the Dice similarity coefficient (DSC), surface distance error (SDE) and SD of contour-to-contour distances. Also, bladder volumes obtained by manual delineations and segmentations were compared using a Bland-Altman error analysis. Results: The mean DSC, mean SDE, and mean SD of contour-to-contour distances between segmentations and manual delineations were 0.87, 0.27 cm and 0.22 cm (female, prone), 0.85, 0.28 cm and 0.22 cm (female, supine), 0.89, 0.21 cm and 0.17 cm (male, supine) and 0.88, 0.23 cm and 0.17 cm (male, prone), respectively. Manual local adaptations improved the segmentation
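
    The Dice similarity coefficient used for validation above has a one-line definition; the sketch below computes it for two binary masks (toy arrays, not CBCT data).

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two boolean masks: 2|A∩B| / (|A| + |B|)."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy example: two overlapping "bladder" masks on a small grid.
seg = np.zeros((10, 10), dtype=bool); seg[2:8, 2:8] = True
ref = np.zeros((10, 10), dtype=bool); ref[3:9, 2:8] = True
print(f"DSC = {dice(seg, ref):.2f}")
```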

  9. Forecasting the Reference Evapotranspiration Using Time Series Model

    H. Zare Abyaneh


    Full Text Available Introduction: Reference evapotranspiration is one of the most important factors in irrigation timing and field management. Moreover, reference evapotranspiration forecasting can play a vital role in future developments. Therefore, in this study the seasonal autoregressive integrated moving average (ARIMA) model was used to forecast the reference evapotranspiration time series at the Esfahan, Semnan, Shiraz, Kerman, and Yazd synoptic stations. Materials and Methods: For all stations (characteristics of the synoptic stations are given in Table 1), the meteorological data, including mean, maximum and minimum air temperature, relative humidity, dry- and wet-bulb temperature, dew-point temperature, wind speed, precipitation, air vapor pressure and sunshine hours, were collected from the Islamic Republic of Iran Meteorological Organization (IRIMO) for the 41 years from 1965 to 2005. The FAO Penman-Monteith equation was used to calculate the monthly reference evapotranspiration at the five synoptic stations and the evapotranspiration time series were formed. The unit root test was used to identify whether the time series was stationary; then, using the Box-Jenkins method, seasonal ARIMA models were applied to the sample data.

    Table 1. Geographical location and climate conditions of the synoptic stations

    Station   Longitude (E)  Latitude (N)  Altitude (m)  Mean annual temp. (°C)  Min.-Max. temp. (°C)  Mean precipitation (mm)  Climate (De Martonne)
    Esfahan   51° 40'        32° 37'       1550.4        16.36                   9.4-23.3              122                      Arid
    Semnan    53° 33'        35° 35'       1130.8        18.0                    12.4-23.8             140                      Arid
    Shiraz    52° 36'        29° 32'       1484          18.0                    10.2-25.9             324                      Semi-arid
    Kerman    56° 58'        30° 15'       1753.8        15.6                    6.7-24.6              142                      Arid
    Yazd      54° 17'        31° 54'       1237.2        19.2                    11.8-26.0             61                       Arid

    Results and Discussion: The monthly meteorological data were used as input for the Ref-ET software and monthly reference
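
    A seasonal ARIMA forecast of a monthly ET0 series can be sketched with statsmodels; the synthetic series and the (1,0,1)(1,1,1,12) order below are placeholders rather than the orders identified by the authors through Box-Jenkins analysis.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Synthetic monthly ET0-like series (mm/month) standing in for a station record.
rng = np.random.default_rng(0)
months = pd.date_range("1965-01", periods=41 * 12, freq="MS")
seasonal = 120 + 90 * np.sin(2 * np.pi * (np.arange(len(months)) % 12) / 12)
et0 = pd.Series(seasonal + rng.normal(0, 10, len(months)), index=months)

# Placeholder seasonal ARIMA order; in practice chosen by Box-Jenkins identification.
model = SARIMAX(et0, order=(1, 0, 1), seasonal_order=(1, 1, 1, 12))
fit = model.fit(disp=False)
print(fit.forecast(steps=12).round(1))   # forecast for the next 12 months
```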

  10. Trajectory models and reference frames for crustal motion geodesy

    Bevis, Michael; Brown, Abel


    We sketch the evolution of station trajectory models used in crustal motion geodesy over the last several decades, and describe some recent generalizations of these models that allow geodesists and geophysicists to parameterize accelerating patterns of displacement in general, and postseismic transient deformation in particular. Modern trajectory models are composed of three sub-models that represent secular trends, annual oscillations, and instantaneous jumps in coordinate time series. Traditionally the trend model invoked constant station velocity. This can be generalized by assuming that position is a polynomial function of time. The trajectory model can also be augmented as needed, by including one or more logarithmic transients in order to account for typical multi-year patterns of postseismic transient motion. Many geodetic and geophysical research groups are using general classes of trajectory model to characterize their crustal displacement time series, but few if any of them are using these trajectory models to define and realize the terrestrial reference frames (RFs) in which their time series are expressed. We describe a global GPS reanalysis program in which we use two general classes of trajectory model, tuned on a station by station basis. We define the network trajectory model as the set of station trajectory models encompassing every station in the network. We use the network trajectory model from each global analysis to assign prior position estimates for the next round of GPS data processing. We allow our daily orbital solutions to relax so as to maintain their consistency with the network polyhedron. After several iterations we produce GPS time series expressed in an RF similar to, but not identical with, ITRF2008. We find that each iteration produces an improvement in the daily repeatability of our global time series and in the predictive power of our trajectory models.
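
    The three-part trajectory model described above (polynomial trend, seasonal oscillations, instantaneous jumps, plus optional logarithmic transients) can be written as a single function of time. The coefficients below are arbitrary illustrative values, not estimates from any station.

```python
import numpy as np

def station_trajectory(t, poly, seasonal, jumps, log_transients):
    """Station displacement as a function of t (years since a reference epoch).
    poly:           [c0, c1, ...]                         polynomial trend coefficients
    seasonal:       [(amplitude, phase, period_years), ...]
    jumps:          [(t_jump, offset), ...]               instantaneous offsets
    log_transients: [(t_event, amplitude, tau_years), ...] postseismic terms"""
    x = sum(c * t**k for k, c in enumerate(poly))
    x += sum(a * np.sin(2 * np.pi * t / p + phi) for a, phi, p in seasonal)
    x += sum(off * (t >= tj) for tj, off in jumps)
    x += sum(a * np.log1p(np.maximum(t - te, 0.0) / tau) for te, a, tau in log_transients)
    return x

t = np.linspace(0.0, 15.0, 400)                 # years since the reference epoch
x = station_trajectory(
    t,
    poly=[0.0, 12.0],                           # 12 mm/yr secular trend
    seasonal=[(3.0, 0.4, 1.0), (1.0, 1.1, 0.5)],  # annual + semiannual terms, mm
    jumps=[(8.3, -15.0)],                       # coseismic / equipment offset, mm
    log_transients=[(8.3, 8.0, 0.1)],           # postseismic relaxation, mm
)
print(f"modelled displacement at t = {t[-1]:.1f} yr: {x[-1]:.1f} mm")
```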

  11. Towards the Availability of the Distributed Cluster Rendering System: Automatic Modeling and Verification

    Wang, Kemin; Jiang, Zhengtao; Wang, Yongbin;


    In this study, we proposed a Continuous Time Markov Chain model for the availability of n-node clusters of a Distributed Rendering System. The model is an infinite one; we formalized it and, based on it, implemented a software tool which can automatically build the model in the PRISM language. With the tool, whenever the number of nodes n and related parameters vary, we can create the PRISM model file rapidly and then use the PRISM model checker to verify related system properties. At the end of this study, we analyzed and verified the availability distributions of the Distributed Cluster Rendering System.

  12. Mooring Design for the Floating Oscillating Water Column Reference Model

    Brefort, Dorian [Univ. of Michigan, Ann Arbor, MI (United States); Bull, Diana L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)


    To reduce the price of the reference Backward Bent Duct Buoy (BBDB), a study was done analyzing the effects of reducing the mooring line length, and a new mooring design was developed. It was found that the overall length of the mooring lines could be reduced by 1290 meters, allowing a significant price reduction of the system. In this paper, we will first give a description of the model and the storm environment it will be subject to. We will then give a recommendation for the new mooring system, followed by a discussion of the severe weather simulation results, and an analysis of the conservative and aggressive aspects of the design.

  13. Transformation of equations in analysis of proportionality through referent models

    Romay, E O


    In the study of the proportionality of objects, samples or populations, we usually work with proportionality Z scores calculated through referent models, instead of working directly with the variables of the objects themselves. In these studies it is necessary to transform the equations that use the variables of the object into equations that use the Z scores directly as variables. In the present work a method is developed to transform the parametric equations into equations in the Z variables, using as an example the studies of human proportionality based on the Phantom stratagem of Ross and Wilson.
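
    The Ross-Wilson Phantom approach computes a proportionality Z score by geometrically scaling each measurement to the Phantom's stature before standardizing against the Phantom reference value. A minimal sketch follows; the 170.18 cm stature is the published Phantom height, while the mean, standard deviation and measurement values are placeholders.

```python
def phantom_z(value, height_cm, phantom_mean, phantom_sd, dimension=1):
    """Proportionality Z score relative to the Phantom reference model.
    The measurement is scaled to the Phantom stature (170.18 cm) before standardization.
    dimension: 1 for lengths/girths, 2 for areas, 3 for masses."""
    scaled = value * (170.18 / height_cm) ** dimension
    return (scaled - phantom_mean) / phantom_sd

# Hypothetical example: arm girth (cm) of a 182 cm subject; the Phantom mean and SD
# below are placeholders, not the published Phantom values.
z = phantom_z(value=32.0, height_cm=182.0, phantom_mean=28.0, phantom_sd=2.5, dimension=1)
print(f"proportionality Z score: {z:+.2f}")
```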

  14. Fully automatic and reference-marker-free image stitching method for full-spine and full-leg imaging with computed radiography

    Wang, Xiaohui; Foos, David H.; Doran, James; Rogers, Michael K.


    Full-leg and full-spine imaging with standard computed radiography (CR) systems requires several cassettes/storage phosphor screens to be placed in a staggered arrangement and exposed simultaneously to achieve an increased imaging area. A method has been developed that can automatically and accurately stitch the acquired sub-images without relying on any external reference markers. It can detect and correct the order, orientation, and overlap arrangement of the subimages for stitching. The automatic determination of the order, orientation, and overlap arrangement of the sub-images consists of (1) constructing a hypothesis list that includes all cassette/screen arrangements, (2) refining hypotheses based on a set of rules derived from imaging physics, (3) correlating each consecutive sub-image pair in each hypothesis and establishing an overall figure-of-merit, (4) selecting the hypothesis of maximum figure-of-merit. The stitching process requires the CR reader to over scan each CR screen so that the screen edges are completely visible in the acquired sub-images. The rotational displacement and vertical displacement between two consecutive sub-images are calculated by matching the orientation and location of the screen edge in the front image and its corresponding shadow in the back image. The horizontal displacement is estimated by maximizing the correlation function between the two image sections in the overlap region. Accordingly, the two images are stitched together. This process is repeated for the newly stitched composite image and the next consecutive sub-image until a full-image composite is created. The method has been evaluated in both phantom experiments and clinical studies. The standard deviation of image misregistration is below one image pixel.
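
    The horizontal-displacement step, maximizing a correlation between the two candidate overlap regions, can be sketched for 1-D row-mean profiles. The synthetic profiles below stand in for the CR sub-images; this is an illustration of the correlation search, not the authors' implementation.

```python
import numpy as np

def best_overlap(profile_top, profile_bottom, min_overlap=20, max_overlap=200):
    """Return the overlap (in rows) maximizing the normalized correlation between
    the bottom of the upper sub-image and the top of the lower sub-image."""
    best, best_score = min_overlap, -np.inf
    for k in range(min_overlap, max_overlap + 1):
        score = np.corrcoef(profile_top[-k:], profile_bottom[:k])[0, 1]
        if score > best_score:
            best, best_score = k, score
    return best, best_score

# Synthetic 1-D row-mean profiles of two sub-images sharing a 75-row overlap.
rng = np.random.default_rng(0)
scene = rng.normal(size=1000)
top, bottom = scene[:600], scene[525:]      # 75 rows shared between the two profiles
overlap, score = best_overlap(top, bottom)
print(f"estimated overlap: {overlap} rows (correlation {score:.2f})")
```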

  15. An automatic synthesis method of compact models of integrated circuit devices based on equivalent circuits

    Abramov, I. I.


    An automatic method for synthesizing equivalent circuits of integrated circuit devices is described in the paper. The method is based on a physical approach to constructing finite-difference approximations to the basic equations of semiconductor device physics. It makes it possible to automatically synthesize compact equivalent circuits of different devices as an alternative to, for example, the rather formal BSIM2 and BSIM3 models used in circuit simulation programs of the SPICE type. The method is one possible variant of a general methodology for the automatic synthesis of compact equivalent circuits of almost arbitrary devices and circuit-type structures of micro- and nanoelectronics [1]. The method is easily extended, when necessary, to account for thermal effects in integrated circuits. It was shown that its application would be especially promising for the analysis of integrated circuit fragments as a whole and for the identification of significant collective physical effects, including parasitic effects in VLSI and ULSI. The paper considers examples illustrating the possibilities of the method for the automatic synthesis of compact equivalent circuits of several semiconductor devices and integrated circuit devices. Special attention is given to examples of integrated circuit devices on coarse spatial discretization grids (fewer than 10 nodes).

  16. A Simulated Annealing based Optimization Algorithm for Automatic Variogram Model Fitting

    Soltani-Mohammadi, Saeed; Safa, Mohammad


    Fitting a theoretical model to an experimental variogram is an important issue in geostatistical studies, because if the variogram model parameters are tainted with uncertainty, that uncertainty will spread into the results of estimations and simulations. Although the most popular fitting method is fitting by eye, in some cases the automatic fitting method, which combines geostatistical principles with optimization techniques, is used to: 1) provide a basic model to improve fitting by eye, 2) fit a model to a large number of experimental variograms in a short time, and 3) incorporate the variogram-related uncertainty into the model fitting. Effort has been made in this paper to improve the quality of the fitted model by improving the popular objective function (weighted least squares) used in automatic fitting. Also, since the variogram model function and the number of structures (m) also affect the model quality, a program has been written in the MATLAB software that can produce optimum nested variogram models using the simulated annealing method. Finally, to select the most desirable model from among the single/multi-structured fitted models, the cross-validation method has been used, and the best model is presented to the user as the output. In order to check the capability of the proposed objective function and procedure, 3 case studies are presented.
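
    A minimal sketch of the idea, under simple assumptions: a spherical model gamma(h) = c0 + c*(1.5*h/a - 0.5*(h/a)^3) for h <= a (and c0 + c beyond a) is fitted to experimental points by minimising a weighted least-squares objective with simulated annealing. The cooling schedule and the choice of weights (number of pairs per lag) are illustrative, not those of the cited program.

```python
import numpy as np

def spherical(h, c0, c, a):
    h = np.asarray(h, dtype=float)
    g = c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h <= a, g, c0 + c)

def wls(params, lags, gamma_exp, n_pairs):
    """Weighted least-squares misfit between model and experimental variogram."""
    c0, c, a = params
    if c0 < 0 or c <= 0 or a <= 0:
        return np.inf
    resid = gamma_exp - spherical(lags, c0, c, a)
    return float(np.sum(n_pairs * resid ** 2))

def fit_sa(lags, gamma_exp, n_pairs, n_iter=20000, t0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array([0.1 * gamma_exp.max(), gamma_exp.max(), lags.max() / 2])  # nugget, sill, range
    fx, temp = wls(x, lags, gamma_exp, n_pairs), t0
    for k in range(n_iter):
        cand = x * (1 + 0.1 * rng.standard_normal(3))    # random perturbation
        fc = wls(cand, lags, gamma_exp, n_pairs)
        if fc < fx or rng.random() < np.exp(-(fc - fx) / max(temp, 1e-9)):
            x, fx = cand, fc
        temp = t0 * (1 - k / n_iter)                      # linear cooling
    return x, fx
```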

  17. Bond graph modeling, simulation, and reflex control of the Mars planetary automatic vehicle

    Amara, Maher; Friconneau, Jean Pierre; Micaelli, Alain


    The bond graph modeling, simulation, and reflex control of the Mars Planetary Automatic Vehicle are considered. A simulator derived from a complete bond graph model of the vehicle is presented. This model includes both knowledge and representation models of the mechanical structure, the floor contact, and the Mars site. MACSYMEN (the French acronym for aided design method of multi-energetic systems) is used and applied to study the input-output power transfers. The reflex control is then considered. The controller architecture and locomotion specificities are described. A numerical study highlights some interesting results concerning the robot's and the controller's capabilities.

  18. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    Po-Chia Yeh


    Full Text Available The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  19. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    Rau, Jiann-Yeou; Yeh, Po-Chia


    The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The imaged objects include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  20. Regular algorithm for the automatic refinement of the spectral characteristics of acoustic finite element models

    Suvorov, A. S.; Sokov, E. M.; V'yushkina, I. A.


    A new method is presented for the automatic refinement of finite element models of complex mechanical-acoustic systems using the results of experimental studies. The method is based on controlling the spectral characteristics through selection of the optimal distribution of adjustments to the stiffness of the finite element mesh. Test results are given which show that the method can significantly increase the accuracy with which the vibration characteristics of bodies with arbitrary spatial configuration are simulated.
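
    The sketch below illustrates the spectral-refinement idea on a toy problem: stiffness adjustment factors of a small spring-mass chain are tuned so that the model's natural frequencies approach experimentally measured ones. The 3-DOF chain, the target frequencies and the optimiser are illustrative stand-ins for a full finite element model, not the algorithm of the cited paper.

```python
import numpy as np
from scipy.optimize import minimize

M = np.diag([1.0, 1.0, 1.0])                    # lumped masses [kg]
k_nominal = np.array([1.0e5, 1.0e5, 1.0e5])     # nominal spring stiffnesses [N/m]

def stiffness(k):
    k1, k2, k3 = k
    return np.array([[k1 + k2, -k2, 0.0],
                     [-k2, k2 + k3, -k3],
                     [0.0, -k3, k3]])

def natural_frequencies(k):
    # M is diagonal with unit masses, so M^-1 K stays symmetric here.
    evals = np.linalg.eigvalsh(np.linalg.solve(M, stiffness(k)))
    return np.sqrt(np.clip(evals, 0.0, None)) / (2 * np.pi)   # [Hz]

f_measured = np.array([22.0, 63.0, 92.0])       # hypothetical test data [Hz]

def objective(scale):
    """Relative spectral error for a given distribution of stiffness adjustments."""
    f_model = natural_frequencies(k_nominal * scale)
    return float(np.sum(((f_model - f_measured) / f_measured) ** 2))

res = minimize(objective, x0=np.ones(3), method="Nelder-Mead")
print("stiffness adjustment factors:", np.round(res.x, 3))
```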

  1. GIS Data Based Automatic High-Fidelity 3D Road Network Modeling

    Wang, Jie; Shen, Yuzhong


    3D road models are widely used in many computer applications such as racing games and driving simulations. However, almost all high-fidelity 3D road models have been generated manually by professional artists at the expense of intensive labor. There are very few existing methods for automatically generating high-fidelity 3D road networks, especially for roads existing in the real world. This paper presents a novel approach that can automatically produce high-fidelity 3D road network models from real 2D road GIS data that mainly contain road centerline information. The proposed method first builds parametric representations of the road centerlines through segmentation and fitting. A basic set of civil engineering rules for road design (e.g., cross slope, superelevation, grade) is then selected in order to generate realistic road surfaces in compliance with these rules. While the proposed method applies to any type of road, this paper mainly addresses the automatic generation of complex traffic interchanges and intersections, which are the most sophisticated elements in road networks.

  2. Grammar-based Automatic 3D Model Reconstruction from Terrestrial Laser Scanning Data

    Yu, Q.; Helmholz, P.; Belton, D.; West, G.


    The automatic reconstruction of 3D buildings has been an important research topic in recent years. In this paper, a novel method is proposed to automatically reconstruct 3D building models from segmented data based on pre-defined formal grammars and rules. Such segmented data can be extracted, e.g., from terrestrial or mobile laser scanning devices. Two steps are considered in detail. The first step is to transform the segmented data into 3D shapes, for instance using the DXF (Drawing Exchange Format) format, which is a CAD data file format used for data interchange between AutoCAD and other programs. Second, we develop a formal grammar to describe the building model structure and integrate the pre-defined grammars into the reconstruction process. Depending on the segmented data, the selected grammar and rules are applied to drive the reconstruction process in an automatic manner. Compared with other existing approaches, our proposed method allows model reconstruction directly from 3D shapes and takes the whole building into account.

  3. Properties of Closed-Loop Reference Models in Adaptive Control: Part I Full States Accessible

    Gibson, Travis E; Lavretsky, Eugene


    This paper explores the properties of adaptive systems with closed-loop reference models. Historically, reference models in adaptive systems run open-loop in parallel with the plant and controller, using no information from the plant or controller to alter the trajectory of the reference system. Closed-loop reference models, on the other hand, use information from the plant to alter the reference trajectory. We show that closed-loop reference models have one more free design parameter as compared to their open-loop counterparts. Using the extra design freedom, we study closed-loop reference models and their impact on transient response and robustness in adaptive systems.
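
    A minimal scalar simulation, under standard MRAC assumptions, contrasting an open-loop reference model with a closed-loop one: the plant is xdot = a*x + b*u with unknown a, the control law u = -kx*x + kr*r uses gradient-type adaptive gains, and the closed-loop reference model adds the feedback term ell*(x - xm), which plays the role of the extra free design parameter discussed above. All numbers are illustrative; this is not the formulation of the cited paper verbatim.

```python
import numpy as np

def simulate(ell, a=1.0, b=1.0, am=-2.0, bm=2.0, gamma=10.0, dt=1e-3, T=10.0):
    n = int(T / dt)
    x = xm = 0.0
    kx, kr = 0.0, 0.0                        # adaptive feedback / feedforward gains
    err = np.empty(n)
    for i in range(n):
        t = i * dt
        r = 1.0 if (t % 4.0) < 2.0 else -1.0  # square-wave reference command
        u = -kx * x + kr * r
        e = x - xm
        # gradient adaptive laws (sign(b) = +1 assumed known)
        kx += dt * gamma * e * x
        kr += dt * (-gamma * e * r)
        # plant and reference model; ell = 0 recovers the open-loop reference model
        x += dt * (a * x + b * u)
        xm += dt * (am * xm + bm * r + ell * (x - xm))
        err[i] = e
    return np.max(np.abs(err))

print("peak |x - xm|, open-loop RM :", round(simulate(ell=0.0), 4))
print("peak |x - xm|, closed-loop RM:", round(simulate(ell=5.0), 4))
```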

  4. Reference Model 2: "Rev 0" Rotor Design.

    Barone, Matthew F.; Berg, Jonathan Charles; Griffith, Daniel


    The preliminary design for a three-bladed cross-flow rotor for a reference marine hydrokinetic turbine is presented. A rotor performance design code is described, along with modifications to the code to allow prediction of blade support strut drag as well as interference between two counter-rotating rotors. The rotor is designed to operate in a reference site corresponding to a riverine environment. Basic rotor performance and rigid-body loads calculations are performed to size the rotor elements and select the operating speed range. The preliminary design is verified with a simple finite element model that provides estimates of bending stresses during operation. A concept for joining the blades and support struts is developed and analyzed with a separate finite element analysis. Rotor mass, production costs, and annual energy capture are estimated in order to allow calculations of system cost-of-energy.

  5. Testing and reference model analysis of FTTH system

    Feng, Xiancheng; Cui, Wanlong; Chen, Ying


    With the rapid development of the Internet and broadband access networks, technologies such as xDSL, FTTx+LAN and WLAN have found more applications, and new network services emerge in an endless stream, especially network gaming, conference TV and video on demand. FTTH supports all present and future services with enormous bandwidth, including traditional telecommunication services, traditional data services and traditional TV services, as well as future digital TV and VOD. With its huge bandwidth, FTTH becomes the final solution for broadband networks and the ultimate goal of optical access network development. Fiber to the Home (FTTH) will be the goal of telecommunications cable broadband access. In accordance with the development trend of telecommunication services, to enhance the capacity of the integrated access network and to achieve triple-play (voice, data, image), the optical fiber can be extended, on the basis of the existing Fiber to the Curb (FTTC), Fiber to the Zone (FTTZ) and Fiber to the Building (FTTB) user optical cable networks, to an FTTH system reaching the end user by using EPON technology. The article first introduces the basic components of an FTTH system and then explains the reference model and reference points for testing of the FTTH system. Finally, by means of the testing connection diagram, the testing process and the expected results, it analyzes SNI interface testing, PON interface testing, Ethernet performance testing, UNI interface testing, Ethernet functional testing, PON functional testing, equipment functional testing, telephone functional testing, operational support capability testing and other tests of the FTTH system. ...

  6. Approach for the Semi-Automatic Verification of 3d Building Models

    Helmholz, P.; Belton, D.; Moncrieff, S.


    In the field of spatial sciences, there are a large number of disciplines and techniques for capturing data to solve a variety of different tasks and problems for different applications. Examples include: traditional surveying for boundary definitions, aerial imagery for building models, and laser scanning for heritage facades. These techniques have different attributes such as the number of dimensions, accuracy and precision, and the format of the data. However, because of the number of applications and jobs, over time these data sets, captured from different sensor platforms and for different purposes, will often overlap in some way. In most cases, while this data is archived, it is not used in future applications to add value to the data capture campaigns of current projects. It is also the case that newly acquired data are often not used to combine with and improve existing models and data integrity. The purpose of this paper is to discuss a methodology and infrastructure to automatically support this concept. That is, based on a job specification, to automatically query existing and newly acquired data based on temporal and spatial relations, and to automatically combine them and generate the best solution. To this end, there are three main challenges to examine: change detection, thematic accuracy and data matching.

  7. Modeling and monitoring of pipelines and networks advanced tools for automatic monitoring and supervision of pipelines

    Torres, Lizeth


    This book focuses on the analysis and design of advanced techniques for on-line automatic computational monitoring of pipelines and pipe networks. It discusses how to improve the systems’ security considering mathematical models of the flow, historical flow rate and pressure data, with the main goal of reducing the number of sensors installed along a pipeline. The techniques presented in the book have been implemented in digital systems to enhance the abilities of the pipeline network’s operators in recognizing anomalies. A real leak scenario in a Mexican water pipeline is used to illustrate the benefits of these techniques in locating the position of a leak. Intended for an interdisciplinary audience, the book addresses researchers and professionals in the areas of mechanical, civil and control engineering. It covers topics on fluid mechanics, instrumentation, automatic control, signal processing, computing, construction and diagnostic technologies.

  8. Reference respiratory waveforms by minimum jerk model analysis

    Anetai, Yusuke, E-mail:; Sumida, Iori; Takahashi, Yutaka; Yagi, Masashi; Mizuno, Hirokazu; Ogawa, Kazuhiko [Department of Radiation Oncology, Osaka University Graduate School of Medicine, Yamadaoka 2-2, Suita-shi, Osaka 565-0871 (Japan); Ota, Seiichi [Department of Medical Technology, Osaka University Hospital, Yamadaoka 2-15, Suita-shi, Osaka 565-0871 (Japan)


    Purpose: The CyberKnife® robotic surgery system has the ability to deliver radiation to a tumor subject to respiratory movements using Synchrony® mode with less than 2 mm tracking accuracy. However, rapid and rough motion tracking causes mechanical tracking errors and puts mechanical stress on the robotic joints, leading to unexpected radiation delivery errors. During clinical treatment, patient respiratory motions are much more complicated, suggesting the need for patient-specific modeling of respiratory motion. The purpose of this study was to propose a novel method that provides a reference respiratory wave to enable smooth tracking for each patient. Methods: The minimum jerk model, which mathematically derives smoothness by means of jerk, the third derivative of position with respect to time (the derivative of acceleration), which is proportional to the rate of change of force, was introduced to model a patient-specific respiratory motion wave and to provide smooth motion tracking using CyberKnife®. To verify that patient-specific minimum jerk respiratory waves were being tracked smoothly by Synchrony® mode, a tracking laser projection from CyberKnife® was optically analyzed every 0.1 s using a webcam and a calibrated grid on a motion phantom whose motion followed three pattern waves (cosine, typical free-breathing, and minimum jerk theoretical wave models) for the clinically relevant superior-inferior direction from six volunteers, assessed on the same node of the same isocentric plan. Results: Tracking discrepancy from the center of the grid to the beam projection was evaluated. The minimum jerk theoretical wave reduced the maximum-peak amplitude of radial tracking discrepancy compared with the waveforms modeled by the cosine and typical free-breathing models by 22% and 35%, respectively, and provided smooth tracking in the radial direction. Motion tracking constancy as indicated by radial tracking discrepancy
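
    A minimal sketch of a minimum-jerk reference wave, assuming the classic minimum-jerk polynomial x(tau) = x0 + (xf - x0)*(10*tau^3 - 15*tau^4 + 6*tau^5) between end-exhale and end-inhale positions; a periodic respiratory reference is then built by alternating inhale and exhale segments. The amplitude and period values are illustrative, not patient data.

```python
import numpy as np

def min_jerk_segment(x0, xf, duration, dt=0.1):
    """Minimum-jerk trajectory from x0 to xf over the given duration."""
    tau = np.arange(0.0, duration, dt) / duration
    return x0 + (xf - x0) * (10 * tau**3 - 15 * tau**4 + 6 * tau**5)

def reference_breathing_wave(amplitude_mm=10.0, period_s=4.0, n_cycles=3, dt=0.1):
    """Periodic reference wave built from alternating minimum-jerk half-cycles."""
    inhale = min_jerk_segment(0.0, amplitude_mm, period_s / 2, dt)
    exhale = min_jerk_segment(amplitude_mm, 0.0, period_s / 2, dt)
    return np.tile(np.concatenate([inhale, exhale]), n_cycles)

wave = reference_breathing_wave()
print(len(wave), wave.max())
```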

  9. Broadband access network reference models: a different prospective

    Mostafa, Mohamed S.


    The current view of the fiber-based broadband access network is that it could basically be modeled into two target networks represented by the following architectures, the fiber to the curb, building, home (FTTC/B/H) -- also termed switched digital video (SDV) -- architecture, and the hybrid fiber coax (HFC) architecture. Both architectures support on-demand digital services. One way to distinguish between these two architectures is based on the digital modulation scheme. The SDV/FTTC architecture utilizes baseband digital modulation both in the fiber distribution and the point-to- point drop. Whereas, the HFC architecture is pass-band and utilizes digitally modulated (as well as analog modulated) subcarriers both on the fiber and the coax for distribution to customers. From a network modeling point of view, the distinction between these two architectures is fuzzy. A hybrid between the above two architectures represents other architectural advantages especially bandwidth utilization in the upstream direction. This paper describes this hybrid architecture and provides an evaluation of the different access network configuration scenarios based on an expanded version of the DAVIC reference models.

  10. Model-reference robust tuning of PID controllers

    Alfaro, Victor M


    This book presents a unified methodology for the design of PID controllers that encompasses the wide range of different dynamics to be found in industrial processes. This is extended to provide a coherent way of dealing with the tuning of PID controllers. The particular method at the core of the book is the so-called model-reference robust tuning (MoReRT), developed by the authors. MoReRT constitutes a novel and powerful way of thinking about a robust design and taking into account the usual design trade-offs encountered in any control design problem. The book starts by presenting the different two-degree-of-freedom PID control algorithm variations and their conversion relations as well as the indexes used for performance, robustness and fragility evaluation: the bases of the proposed model. Secondly, the MoReRT design methodology and the normalized controlled process models and controllers used in the design are described in order to facilitate the formulation of the different design problems and subsequent derivati...
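
    As a point of reference for the two-degree-of-freedom structure mentioned above, the sketch below implements a generic discrete-time 2DoF PID law u = Kp*(beta*r - y) + Ki*integral(r - y) + Kd*d/dt(gamma*r - y). It is for illustration only, not the MoReRT tuning procedure itself, and all gains and weights are placeholders.

```python
class PID2DoF:
    def __init__(self, kp, ki, kd, beta=1.0, gamma=0.0, dt=0.01):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.beta, self.gamma, self.dt = beta, gamma, dt
        self.integral = 0.0
        self.prev_d_err = None

    def update(self, r, y):
        # integral acts on the plain error, proportional/derivative on weighted set-points
        self.integral += (r - y) * self.dt
        d_err = self.gamma * r - y
        deriv = 0.0 if self.prev_d_err is None else (d_err - self.prev_d_err) / self.dt
        self.prev_d_err = d_err
        return (self.kp * (self.beta * r - y)
                + self.ki * self.integral
                + self.kd * deriv)

# Example: one controller step for a unit set-point and a measured output of 0.2.
ctrl = PID2DoF(kp=2.0, ki=1.0, kd=0.1, beta=0.7, gamma=0.0)
print(ctrl.update(r=1.0, y=0.2))
```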

  11. Developing Dynamic Reference Models and a Decision Support Framework for Southeastern Ecosystems: An Integrated Approach


    ...rates (e.g., see Section 3.5). This is important to note, however, because this assumption could automatically reduce the accuracy of the ST-SIM model... In addition to facilitating the processing of the field data, the DSS automatically generates standardized and accessible reports that allow

  12. Automatic Generation of Cycle-Approximate TLMs with Timed RTOS Model Support

    Hwang, Yonghyun; Schirner, Gunar; Abdi, Samar

    This paper presents a technique for automatically generating cycle-approximate transaction level models (TLMs) for multi-process applications mapped to embedded platforms. It incorporates three key features: (a) basic block level timing annotation, (b) RTOS model integration, and (c) RTOS overhead delay modeling. The inputs to TLM generation are application C processes and their mapping to processors in the platform. A processor data model, including pipelined datapath, memory hierarchy and branch delay model is used to estimate basic block execution delays. The delays are annotated to the C code, which is then integrated with a generated SystemC RTOS model. Our abstract RTOS provides dynamic scheduling and inter-process communication (IPC) with processor- and RTOS-specific pre-characterized timing. Our experiments using a MP3 decoder and a JPEG encoder show that timed TLMs, with integrated RTOS models, can be automatically generated in less than a minute. Our generated TLMs simulated three times faster than real-time and showed less than 10% timing error compared to board measurements.

  13. Automatic parametrization of implicit solvent models for the blind prediction of solvation free energies

    Wang, Bao; Wei, Guowei


    In this work, a systematic protocol is proposed to automatically parametrize implicit solvent models with polar and nonpolar components. The proposed protocol utilizes the classical Poisson model or the Kohn-Sham density functional theory (KSDFT) based polarizable Poisson model for modeling polar solvation free energies. For the nonpolar component, either the standard model of surface area, molecular volume, and van der Waals interactions, or a model with atomic surface areas and molecular volume is employed. Based on the assumption that similar molecules have similar parametrizations, we develop scoring and ranking algorithms to classify solute molecules. Four sets of radius parameters are combined with four sets of charge force fields to arrive at a total of 16 different parametrizations for the Poisson model. A large database with 668 experimental data is utilized to validate the proposed protocol. The lowest leave-one-out root mean square (RMS) error for the database is 1.33 kcal/mol. Additionally, five s...

  14. Adjustment of automatic control systems of production facilities at coal processing plants using multivariant physico- mathematical models

    Evtushenko, V. F.; Myshlyaev, L. P.; Makarov, G. V.; Ivushkin, K. A.; Burkova, E. V.


    The structure of multi-variant physical and mathematical models of a control system is presented, as well as its application to the adjustment of the automatic control system (ACS) of production facilities, using a coal processing plant as an example.

  15. LanHEP - a package for automatic generation of Feynman rules in gauge models

    Semenov, A Yu


    We consider the general problem of deriving the Feynman rules for matrix elements in the momentum representation from a given Lagrangian in coordinate space that is invariant under the transformations of some gauge group. The LanHEP package presented in this paper makes it possible to define the gauge model Lagrangian in canonical form in a convenient way and then to generate the Feynman rules automatically, which can be used in subsequent calculations of physical processes by means of the CompHEP package. A detailed description of the LanHEP commands is given and several examples of LanHEP applications (QED, QCD, the Standard Model in the 't Hooft-Feynman gauge) are presented.

  16. Preface: International Reference Ionosphere - Progress in Ionospheric Modelling

    Bilitza Dieter; Reinisch, Bodo


    The International Reference Ionosphere (IRI) is the internationally recommended empirical model for the specification of ionospheric parameters supported by the Committee on Space Research (COSPAR) and the International Union of Radio Science (URSI) and recognized by the International Standardization Organization (ISO). IRI is being continually improved by a team of international experts as new data become available and better models are developed. This issue chronicles the latest phase of model updates as reported during two IRI-related meetings. The first was a special session during the Scientific Assembly of the Committee on Space Research (COSPAR) in Montreal, Canada in July 2008 and the second was an IRI Task Force Activity at the US Air Force Academy in Colorado Springs in May 2009. This work led to several improvements and additions to the model which will be included in the next version, IRI-2010. The issue is divided into three sections focusing on the improvements made in the topside ionosphere, the F-peak, and the lower ionosphere, respectively. This issue would not have been possible without the reviewing efforts of many individuals. Each paper was reviewed by two referees. We thankfully acknowledge the contribution to this issue made by the following reviewers: Jacob Adeniyi, David Altadill, Eduardo Araujo, Feza Arikan, Dieter Bilitza, Ljiljana Cander, Bela Fejer, Tamara Gulyaeva, Manuel Hernández-Pajares, Ivan Kutiev, John MacDougal, Leo McNamara, Bruno Nava, Olivier Obrou, Elijah Oyeyemi, Vadym Paznukhov, Bodo Reinisch, John Retterer, Phil Richards, Gary Sales, J.H. Sastri, Ludger Scherliess, Iwona Stanislavska, Stamir Stankov, Shin-Yi Su, Manlian Zhang, Yongliang Zhang, and Irina Zakharenkova. We are grateful to Peggy Ann Shea for her final review and guidance as the editor-in-chief for special issues of Advances in Space Research. We thank the authors for their timely submission and their quick response to the reviewer comments and humbly

  17. The Modelling Of Basing Holes Machining Of Automatically Replaceable Cubical Units For Reconfigurable Manufacturing Systems With Low-Waste Production

    Bobrovskij, N. M.; Levashkin, D. G.; Bobrovskij, I. N.; Melnikov, P. A.; Lukyanov, A. A.


    The article is devoted to solving the problem of the machining accuracy of basing holes in automatically replaceable cubical units (carriers) for reconfigurable manufacturing systems with low-waste production (RMS). Results of modeling the machining of the basing holes of automatically replaceable units on the basis of dimensional chain analysis are presented. The influence of the machining parameters on the accuracy of the center-to-center spacing between basing holes is shown. A mathematical model of the machining accuracy of the carriers' basing holes is offered.

  18. Automatic parameter extraction techniques in IC-CAP for a compact double gate MOSFET model

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin


    In this paper, automatic parameter extraction techniques of Agilent's IC-CAP modeling package are presented to extract our explicit compact model parameters. This model is developed based on a surface potential model and coded in Verilog-A. The model has been adapted to Trigate MOSFETs, includes short channel effects (SCEs) and allows accurate simulations of the device characteristics. The parameter extraction routines provide an effective way to extract the model parameters. The techniques minimize the discrepancy and error between the simulation results and the available experimental data for more accurate parameter values and reliable circuit simulation. Behavior of the second derivative of the drain current is also verified and proves to be accurate and continuous through the different operating regimes. The results show good agreement with measured transistor characteristics under different conditions and through all operating regimes.

  19. Model-based automatic 3d building model generation by integrating LiDAR and aerial images

    Habib, A.; Kwak, E.; Al-Durgham, M.


    Accurate, detailed, and up-to-date 3D building models are important for several applications such as telecommunication network planning, urban planning, and military simulation. Existing building reconstruction approaches can be classified according to the data sources they use (i.e., single versus multi-sensor approaches), the processing strategy (i.e., data-driven, model-driven, or hybrid), or the amount of user interaction (i.e., manual, semiautomatic, or fully automated). While it is obvious that 3D building models are important components for many applications, they still lack the economical and automatic techniques for their generation while taking advantage of the available multi-sensory data and combining processing strategies. In this research, an automatic methodology for building modelling by integrating multiple images and LiDAR data is proposed. The objective of this research work is to establish a framework for automatic building generation by integrating data driven and model-driven approaches while combining the advantages of image and LiDAR datasets.

  20. Automaticity or active control

    Tudoran, Ana Alina; Olsen, Svein Ottar

    This study addresses the quasi-moderating role of habit strength in explaining action loyalty. A model of loyalty behaviour is proposed that extends the traditional satisfaction–intention–action loyalty network. Habit strength is conceptualised as a cognitive construct to refer to the psychological...... aspects of the construct, such as routine, inertia, automaticity, or very little conscious deliberation. The data consist of 2962 consumers participating in a large European survey. The results show that habit strength significantly moderates the association between satisfaction and action loyalty, and...


    T. Partovi


    Full Text Available Through improvements in satellite sensors and matching technology, the derivation of 3D models from spaceborne stereo data has attracted a lot of interest for various applications such as mobile navigation, urban planning, telecommunication, and tourism. The automatic reconstruction of 3D building models from spaceborne point cloud data is still an active research topic. The challenging problem in this field is the relatively low quality of the Digital Surface Model (DSM) generated by stereo matching of satellite data compared to airborne LiDAR data. In order to establish an efficient method to achieve high-quality models and complete automation from the mentioned DSM, in this paper a new method based on a model-driven strategy is proposed. To improve the results, refined orthorectified panchromatic images are introduced into the process as additional data. The idea of this method is based on ridge line extraction and the analysis of height values in the direction of, and perpendicular to, the ridge line. After applying pre-processing to the orthorectified data, some feature descriptors are extracted from the DSM to improve the automatic ridge line detection. Applying RANSAC, a line is fitted to each group of ridge points. Finally, these ridge lines are refined by matching them or closing gaps. In order to select the type of roof model, the heights of points along the extension of the ridge line and the height differences perpendicular to the ridge line are analysed. After roof model selection, building edge information is extracted by Canny edge detection and the parameters are derived from the roof parts. Then the best model is fitted to the extracted roof faces based on the detected model type. Each roof is modelled independently and the final 3D buildings are reconstructed by merging the roof models with the corresponding walls.
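
    The ridge-line step can be sketched with a standard RANSAC line fit over a group of candidate ridge points, followed by a total-least-squares refit on the inliers. Thresholds and iteration counts below are illustrative, not those of the cited method.

```python
import numpy as np

def ransac_line(points, n_iter=500, inlier_tol=0.5, seed=0):
    """points: (N, 2) array of x, y ridge-point coordinates."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        p, q = points[i], points[j]
        d = q - p
        norm = np.hypot(d[0], d[1])
        if norm < 1e-9:
            continue
        # perpendicular distance of every point to the line through p and q
        dist = np.abs(d[0] * (points[:, 1] - p[1]) - d[1] * (points[:, 0] - p[0])) / norm
        inliers = dist < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # refit the line with total least squares on the inlier set
    pts = points[best_inliers]
    centroid = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - centroid)
    direction = vt[0]
    return centroid, direction, best_inliers
```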

  2. Automatic parameter extraction technique for gate leakage current modeling in double gate MOSFET

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin


    Direct Tunneling (DT) and Trap Assisted Tunneling (TAT) gate leakage current parameters have been extracted and verified using an automatic parameter extraction approach. The industry standard package IC-CAP is used to extract our leakage current model parameters. The model is coded in Verilog-A, and the comparison between the model and measured data makes it possible to obtain the model parameter values and the parameter correlations/relations. The model and parameter extraction techniques have been used to study the impact of the parameters on the gate leakage current based on the extracted parameter values. It is shown that the gate leakage current depends on the interfacial barrier height more strongly than on the barrier height of the dielectric layer. The scenario is almost the same with respect to the carrier effective masses in the interfacial layer and the dielectric layer. The comparison between the simulated results and the available measured gate leakage current characteristics of Trigate MOSFETs shows good agreement.

  3. A 6D CAD Model for the Automatic Assessment of Building Sustainability

    Ping Yung


    Full Text Available Current building assessment methods limit themselves to environmental impact, failing to consider the other two aspects of sustainability: the economic and the social. They tend to be complex and costly to run, and therefore are of limited value in comparing design options. This paper proposes and develops a model for the automatic assessment of a building's sustainability life cycle with the building information modelling (BIM) approach and its enabling technologies. A 6D CAD model is developed which could be used as a design aid instead of as a post-construction evaluation tool. 6D CAD includes 3D design as well as a fourth dimension (schedule), a fifth dimension (cost) and a sixth dimension (sustainability). The model can automatically derive quantities (5D), calculate economic (5D and 6D), environmental and social impacts (6D), and evaluate the sustainability performance of alternative design options. The sustainability assessment covers the life cycle stages of a building, namely material production, construction, operation, maintenance, demolition and disposal.

  4. A semi-automatic method for developing an anthropomorphic numerical model of dielectric anatomy by MRI

    Mazzurana, M [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Sandrini, L [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Vaccari, A [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Malacarne, C [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Cristoforetti, L [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy); Pontalti, R [ITC-irst - Bioelectromagnetism Laboratory, FCS Department, 38050 Povo, Trento (Italy)


    Complex permittivity values have a dominant role in the overall consideration of interaction between radiofrequency electromagnetic fields and living matter, and in related applications such as electromagnetic dosimetry. There are still some concerns about the accuracy of published data and about their variability due to the heterogeneous nature of biological tissues. The aim of this study is to provide an alternative semi-automatic method by which numerical dielectric human models for dosimetric studies can be obtained. Magnetic resonance imaging (MRI) tomography was used to acquire images. A new technique was employed to correct nonuniformities in the images and frequency-dependent transfer functions to correlate image intensity with complex permittivity were used. The proposed method provides frequency-dependent models in which permittivity and conductivity vary with continuity-even in the same tissue-reflecting the intrinsic realistic spatial dispersion of such parameters. The human model is tested with an FDTD (finite difference time domain) algorithm at different frequencies; the results of layer-averaged and whole-body-averaged SAR (specific absorption rate) are compared with published work, and reasonable agreement has been found. Due to the short time needed to obtain a whole body model, this semi-automatic method may be suitable for efficient study of various conditions that can determine large differences in the SAR distribution, such as body shape, posture, fat-to-muscle ratio, height and weight.

  5. Switching Control System Based on Robust Model Reference Adaptive Control

    HU Qiong; FEI Qing; MA Hongbin; WU Qinghe; GENG Qingbo


    For conventional adaptive control, time-varying parametric uncertainty and unmodeled dynamics are ticklish problems, which lead to undesirable performance or even instability and nonrobust behavior, respectively. In this study, a class of discrete-time switched systems with unmodeled dynamics is taken into consideration. Moreover, nonlinear systems are here supposed to be approximated by the class of switched systems considered in this paper, and thereby switching control design is investigated for both switched systems and nonlinear systems to assure stability and performance. For robustness against unmodeled dynamics and uncertainty, a robust model reference adaptive control (RMRAC) law is developed as the basis of controller design for each individual subsystem in the switched systems or nonlinear systems. Meanwhile, two different switching laws are presented for switched systems and nonlinear systems, respectively. Thereby, the authors incorporate the corresponding switching law into the RMRAC law to construct two schemes of switching control for the two kinds of controlled systems. Both closed-loop analyses and simulation examples are provided to illustrate the validity of the two proposed switching control schemes. Furthermore, as to the proposed scheme for nonlinear systems, its potential for practical application is demonstrated through simulations of longitudinal control for the F-16 aircraft.

  6. Reference Models for Structural Technology Assessment and Weight Estimation

    Cerro, Jeff; Martinovic, Zoran; Eldred, Lloyd


    Previously, the Exploration Concepts Branch of NASA Langley Research Center developed techniques for automating the preliminary-design-level structural analysis of launch vehicle airframes for the purpose of enhancing historical regression-based mass estimating relationships. This past work was useful and greatly reduced design time; however, its application area was very narrow in terms of being able to handle a large variety of structural and vehicle general arrangement alternatives. The implementation of the analysis approach presented herein also incorporates some newly developed computer programs. Loft is a program developed to create analysis meshes and simultaneously define structural element design regions. A simple component-defining ASCII file is read by Loft to begin the design process. HSLoad is a Visual Basic implementation of the HyperSizer Application Programming Interface, which automates the structural element design process. Details of these two programs and their use are explained in this paper. A feature which falls naturally out of the above analysis paradigm is the concept of "reference models". The flexibility of the FEA-based JAVA processing procedures and associated process control classes, coupled with the general utility of Loft and HSLoad, makes it possible to create generic program template files for the analysis of components ranging from something as simple as a stiffened flat panel, to curved panels, fuselage and cryogenic tank components, flight control surfaces, wings, through to full air and space vehicle general arrangements.

  7. Automatic Seamline Network Generation for Urban Orthophoto Mosaicking with the Use of a Digital Surface Model

    Qi Chen


    Full Text Available Intelligent seamline selection for image mosaicking is an area of active research in the fields of massive data processing, computer vision, photogrammetry and remote sensing. In mosaicking applications for digital orthophoto maps (DOMs), the visual transition in mosaics is mainly caused by differences in positioning accuracy, image tone and relief displacement of high ground objects between overlapping DOMs. Among these three factors, relief displacement, which prevents the seamless mosaicking of images, is relatively more difficult to address. To minimize visual discontinuities, many optimization algorithms have been studied for the automatic selection of seamlines that avoid high ground objects. Thus, a new automatic seamline selection algorithm using a digital surface model (DSM) is proposed. The main idea of this algorithm is to guide a seamline toward a low area on the basis of the elevation information in a DSM. Given that the elevation of a DSM is not completely synchronous with a DOM, a new model, called the orthoimage elevation synchronous model (OESM), is derived and introduced. OESM can accurately reflect the elevation information for each DOM unit. Through morphological processing of the OESM data in the overlapping area, an initial path network is obtained for seamline selection. Subsequently, a cost function is defined on the basis of several measurements, and Dijkstra's algorithm is adopted to determine the least-cost path from the initial network. Finally, the proposed algorithm is employed for automatic seamline network construction; the effective mosaic polygon of each image is determined, and a seamless mosaic is generated. The experiments with three different datasets indicate that the proposed method meets the requirements for seamline network construction. In comparative trials, the generated seamlines pass through fewer ground objects with low time consumption.
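
    A minimal sketch of the least-cost idea: Dijkstra's algorithm over a 4-connected grid whose per-cell cost grows with elevation, so the path is steered toward low ground. The cost definition below is illustrative; the paper's cost function combines several measurements.

```python
import heapq
import numpy as np

def seamline(elevation, start, goal):
    """Least-cost path between two grid cells, cost weighted by elevation."""
    rows, cols = elevation.shape
    cost = elevation - elevation.min() + 1.0          # strictly positive cell costs
    dist = np.full(elevation.shape, np.inf)
    prev = {}
    dist[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == goal:
            break
        if d > dist[r, c]:
            continue
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr, nc]
                if nd < dist[nr, nc]:
                    dist[nr, nc] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(heap, (nd, (nr, nc)))
    # walk back from the goal to recover the seamline
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```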

  8. Cooperative Reference Services Policy Manual: A Model Outline.

    RQ, 1995


    Provides a framework of topics that should be covered by a policy manual on cooperative reference services. It is organized into sections on mission statement, administration, delivery of services, and evaluation of services, and is intended for use in conjunction with existing RASD (Reference and Adult Services Division) documents. (Author)

  9. SAR Automatic Target Recognition Based on Numerical Scattering Simulation and Model-based Matching

    Zhou Yu


    Full Text Available This study proposes a model-based Synthetic Aperture Radar (SAR automatic target recognition algorithm. Scattering is computed offline using the laboratory-developed Bidirectional Analytic Ray Tracing software and the same system parameter settings as the Moving and Stationary Target Acquisition and Recognition (MSTAR datasets. SAR images are then created by simulated electromagnetic scattering data. Shape features are extracted from the measured and simulated images, and then, matches are searched. The algorithm is verified using three types of targets from MSTAR data and simulated SAR images, and it is shown that the proposed approach is fast and easy to implement with high accuracy.

  10. Highly accurate SVM model with automatic feature selection for word sense disambiguation

    王浩; 陈贵林; 吴连献


    A novel algorithm for word sense disambiguation (WSD) based on an SVM model improved with automatic feature selection is introduced. This learning method employs rich contextual features to predict the proper senses for specific words. Experimental results show that this algorithm can achieve an excellent performance on the data set released during the SENSEVAL-2 competition. We present the results obtained and discuss the transplantation of this algorithm to other languages such as Chinese. Experimental results on a Chinese corpus show that our algorithm achieves an accuracy of 70.0% even with small training data.

  11. Sequential Clustering based Facial Feature Extraction Method for Automatic Creation of Facial Models from Orthogonal Views

    Ghahari, Alireza


    Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues in future video systems. We aim to make automatic feature extraction and natural 3D feature construction more reliable and robust from 2D features detected on a pair of frontal- and profile-view face images. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent non-perfect orthogonal condition and non-coherent luminance. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the applicability of the resulting facial models to practical applications such as face recognition and facial animation.

  12. Automatic method for building indoor boundary models from dense point clouds collected by laser scanners.

    Valero, Enrique; Adán, Antonio; Cerrada, Carlos


    In this paper we present a method that automatically yields Boundary Representation Models (B-rep) for indoors after processing dense point clouds collected by laser scanners from key locations through an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoors in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data coming from laser scanners yielding promising results. We have deeply evaluated the results by analyzing how reliably these elements can be detected and how accurately they are modeled.

  13. A Model for Semi-Automatic Composition of Educational Content from Open Repositories of Learning Objects

    Paula Andrea Rodríguez Marín


    Full Text Available Learning object (LO) repositories are important in building educational content and should allow search, retrieval and composition processes to be carried out successfully to reach educational goals. However, such processes are time-consuming and do not always provide the desired results. Thus, the aim of this paper is to propose a model for the semi-automatic composition of LOs, which are automatically recovered from open repositories. For the development of the model, various text similarity measures are discussed, while for calibration and validation some comparison experiments were performed using results obtained by teachers. Experimental results show that when using a value of k (the number of LOs selected) of at least 3, the percentage of similarity between the model and the composition made by experts exceeds 75%. To conclude, it can be established that the proposed model allows teachers to save time and effort in LO selection by performing a pre-filtering process.
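
    As one example of the text-similarity measures discussed above, the sketch below ranks candidate learning-object descriptions against an educational goal by cosine similarity over term-frequency vectors and returns the top k. The toy documents are illustrative, not data from the paper.

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between simple term-frequency vectors of two texts."""
    va, vb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(va[w] * vb[w] for w in set(va) & set(vb))
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def top_k(goal, lo_descriptions, k=3):
    """Pre-filter: keep the k learning objects most similar to the goal."""
    ranked = sorted(lo_descriptions, key=lambda d: cosine_similarity(goal, d), reverse=True)
    return ranked[:k]

print(top_k("introduction to recursion in programming",
            ["recursion examples in programming", "history of art", "sorting algorithms"], k=2))
```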

  14. Automatic Assessment of Craniofacial Growth in a Mouse Model of Crouzon Syndrome

    Thorup, Signe Strann; Larsen, Rasmus; Darvann, Tron Andre


    BACKGROUND & PURPOSE: Crouzon syndrome is characterized by growth disturbances caused by premature craniosynostosis. A mouse model with mutation Fgfr2C342Y, equivalent to the most common Crouzon syndrome mutation (henceforth called the Crouzon mouse model), has a phenotype showing many parallels...... to the human counterpart. Quantifying growth in the Crouzon mouse model could test hypotheses of the relationship between craniosynostosis and dysmorphology, leading to better understanding of the causes of Crouzon syndrome as well as providing knowledge relevant for surgery planning. METHODS: Automatic non...... for each mouse-type; growth models were created using linear interpolation and visualized as 3D animations. Spatial regions of significantly different growth were identified using the local False Discovery Rate method, estimating the expected percentage of false predictions in a set of predictions. For all...

  15. A chest-shape target automatic detection method based on Deformable Part Models

    Zhang, Mo; Jin, Weiqi; Li, Li


    Automatic weapon platforms are an important research direction both domestically and abroad; they need to accomplish fast searching for the object to be shot against a complex background. Therefore, fast detection of a given target is the foundation of further tasks. Considering that the chest-shape target is a common target in shooting practice, this paper treats the chest-shape target as the object of interest and studies an automatic target detection method based on Deformable Part Models. The algorithm computes Histogram of Oriented Gradients (HOG) features of the target and trains a model using a latent-variable Support Vector Machine (SVM); in this model, the target image is divided into several parts so that a root filter and part filters are obtained. Finally, the algorithm detects the target on the HOG feature pyramid with a sliding-window method. The running time of extracting the HOG pyramid can be shortened by 36% by using a lookup table. The results indicate that this algorithm can detect the chest-shape target in natural environments, indoors or outdoors. The true positive rate of detection reaches 76% with many hard samples, and the false positive rate approaches 0. Running on a PC (Intel(R) Core(TM) i5-4200H CPU) with C++ language, the detection time for images with a resolution of 640 × 480 is 2.093 s. Given the TI company run-time library for image pyramids and convolution on the DM642 and other hardware, our detection algorithm is expected to be implemented on a hardware platform, and it has application prospects in actual systems.

  16. Automatic Gauge Control in Rolling Process Based on Multiple Smith Predictor Models

    Jiangyun Li


    Full Text Available Automatic rolling is a high-speed process which always requires high-speed control and communication capabilities. Meanwhile, it is also a typical complex electromechanical system; distributed control has become the mainstream computer control architecture for rolling mills. Generally, the control system adopts a 2-level control structure, basic automation (Level 1) and process control (Level 2), to achieve automatic gauge control. In Level 1, there is always a certain distance between the roll gap of each stand and the thickness testing point, leading to a time delay in gauge control. The Smith predictor is a method to cope with time-delay systems, but practical feedback control based on the traditional Smith predictor cannot obtain the ideal control result, because the time delay is hard to measure precisely and in some situations it may vary within a certain range. In this paper, based on an adaptive Smith predictor, we employ multiple models to cover the uncertainties of the time delay. The optimal model is selected by the proposed switching mechanism. Simulations show that the proposed multiple-Smith-model method exhibits excellent performance in improving the control result, even for systems with jumping time delay.
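
    A minimal discrete-time sketch of a single Smith predictor for a first-order plant with transport delay, as used in gauge-control loops of this kind: the controller acts on the undelayed internal model output plus the mismatch between the measured output and the delayed model output. The plant parameters, PI gains and delay are illustrative; the paper's multiple-model switching layer is not shown.

```python
from collections import deque

class SmithPredictorPI:
    def __init__(self, kp, ki, a, b, delay_steps, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.a, self.b = a, b                  # internal model: y[k+1] = a*y[k] + b*u[k]
        self.y_model = 0.0
        self.delay_buf = deque([0.0] * delay_steps, maxlen=delay_steps)
        self.integral = 0.0

    def update(self, setpoint, y_measured):
        # feedback signal = undelayed model output + (measured - delayed model output)
        y_model_delayed = self.delay_buf[0]
        e = setpoint - (self.y_model + (y_measured - y_model_delayed))
        self.integral += e * self.dt
        u = self.kp * e + self.ki * self.integral
        # advance the undelayed internal model and its delayed copy
        self.y_model = self.a * self.y_model + self.b * u
        self.delay_buf.append(self.y_model)
        return u

ctrl = SmithPredictorPI(kp=0.8, ki=0.5, a=0.95, b=0.05, delay_steps=20, dt=0.01)
print(ctrl.update(setpoint=1.0, y_measured=0.0))
```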

  17. Automatic intelligibility assessment of speakers after laryngeal cancer by means of acoustic modeling.

    Bocklet, Tobias; Riedhammer, Korbinian; Nöth, Elmar; Eysholdt, Ulrich; Haderlein, Tino


    One aspect of voice and speech evaluation after laryngeal cancer is acoustic analysis. Perceptual evaluation by expert raters is a standard in the clinical environment for global criteria such as overall quality or intelligibility. So far, automatic approaches evaluate acoustic properties of pathologic voices based on voiced/unvoiced distinction and fundamental frequency analysis of sustained vowels. Because of the high amount of noisy components and the increasing aperiodicity of highly pathologic voices, a fully automatic analysis of fundamental frequency is difficult. We introduce a purely data-driven system for the acoustic analysis of pathologic voices based on recordings of a standard text. Short-time segments of the speech signal are analyzed in the spectral domain, and speaker models based on this information are built. These speaker models act as a clustered representation of the acoustic properties of a person's voice and are thus characteristic for speakers with different kinds and degrees of pathologic conditions. The system is evaluated on two different data sets with speakers reading standardized texts. One data set contains 77 speakers after laryngeal cancer treated with partial removal of the larynx. The other data set contains 54 totally laryngectomized patients, equipped with a Provox shunt valve. Each speaker was rated by five expert listeners regarding three different criteria: strain, voice quality, and speech intelligibility. We show correlations for each data set with r and ρ≥0.8 between the automatic system and the mean value of the five raters. The interrater correlation of one rater to the mean value of the remaining raters is in the same range. We thus assume that for selected evaluation criteria, the system can serve as a validated objective support for acoustic voice and speech analysis. Copyright © 2012 The Voice Foundation. Published by Mosby, Inc. All rights reserved.

  18. Semi-Automatic Building Models and Façade Texture Mapping from Mobile Phone Images

    Jeong, J.; Kim, T.


    Research on 3D urban modelling has been actively carried out for a long time. Recently the need for 3D urban modelling research has increased rapidly due to improved geo-web services and popular smart devices. Nowadays 3D urban models provided by, for example, Google Earth use aerial photos for 3D urban modelling, but there are some limitations: immediate updating for changes of building models is difficult, many buildings lack 3D models and textures, and large resources for maintenance and updating are inevitable. To resolve the limitations mentioned above, we propose a method for semi-automatic building modelling and façade texture mapping from mobile phone images and analyse the modelling results against actual measurements. Our method consists of a camera geometry estimation step, an image matching step, and a façade mapping step. Models generated by this method were compared with actual measurements of real buildings by comparing the ratios of edge lengths of the models and the measurements. The results showed a 5.8% average error in the length ratio. Through this method, we could generate a simple building model with fine façade textures without expensive dedicated tools and datasets.

  19. Automatic procedure for realistic 3D finite element modelling of human brain for bioelectromagnetic computations

    Aristovich, K Y; Khan, S H, E-mail: [School of Engineering and Mathematical Sciences, City University London, Northampton Square, London EC1V 0HB (United Kingdom)


    Realistic computer modelling of biological objects requires building very accurate and realistic computer models based on geometric and material data, and on the type and accuracy of the numerical analyses. This paper presents some of the automatic tools and algorithms that were used to build an accurate and realistic 3D finite element (FE) model of the whole brain. These models were used to solve the forward problem in magnetic field tomography (MFT) based on magnetoencephalography (MEG). The forward problem involves modelling and computation of the magnetic fields produced by the human brain during cognitive processing. The geometric parameters of the model were obtained from accurate Magnetic Resonance Imaging (MRI) data, and the material properties from Diffusion Tensor MRI (DTMRI). The 3D FE models of the brain built using this approach have been shown to be very accurate in terms of both geometric and material properties. The model is stored on the computer in Computer-Aided Parametrical Design (CAD) format. This allows the model to be used in a wide range of analysis methods, such as the finite element method (FEM), the Boundary Element Method (BEM), Monte Carlo simulations, etc. The generic model-building approach presented here could be used for accurate and realistic modelling of the human brain and many other biological objects.

  20. Analyzing Students' Understanding of Models and Modeling Referring to the Disciplines Biology, Chemistry, and Physics

    Krell, Moritz; Reinisch, Bianca; Krüger, Dirk


    In this study, secondary school students' (N = 617; grades 7 to 10) understanding of models and modeling was assessed using tasks which explicitly refer to the scientific disciplines of biology, chemistry, and physics and, as a control, to no scientific discipline. The students' responses are interpreted as their biology-, chemistry-, and…

  1. Modeling Technology for Automatic Test System Software Based on Automatic Test Markup Language Standard

    杨占才; 王红; 范利花; 张桂英; 杨小辉


    The architecture of the ATML standard and the methods used to describe all of the sub-models that constitute it are discussed. Building on the existing automatic test system (ATS) software platform, the technical approaches required to achieve compatibility with the ATML standard are proposed, including the design of the modeling flow, model identification, and the model execution flow. This provides the technical foundation for making the ATS software platform generic and open, and for sharing test resources across all maintenance levels of weapon equipment.


    İsmail ŞAHİN


    Full Text Available This paper examines how to automatically reconstruct three-dimensional (3D) models from their two and three orthographic views and explains a new approach developed for that purpose. The approach is based on the identification of geometric features through the interpretation of 2D views, their volumetric intersections, and the reconstruction of solid models. A number of rules have been defined for this goal and implemented in prototype software using an expert-system approach. The developed software allows efficient determination of features such as slots, holes, blind holes, and closed prismatic holes. Another contribution of this research is the reconstruction of solid models from full-section and half-section views, which is almost nonexistent in the related literature.

  3. Learning to Automatically Detect Features for Mobile Robots Using Second-Order Hidden Markov Models

    Olivier Aycard


    Full Text Available In this paper, we propose a new method based on Hidden Markov Models to interpret temporal sequences of sensor data from mobile robots in order to automatically detect features. Hidden Markov Models have been used for a long time in pattern recognition, especially in speech recognition. Their main advantage over other methods (such as neural networks) is their ability to model noisy temporal signals of variable length. We show in this paper that this approach is well suited for the interpretation of temporal sequences of mobile-robot sensor data. We present two distinct experiments and results: the first in an indoor environment where a mobile robot learns to detect features like open doors or T-intersections, the second in an outdoor environment where a different mobile robot has to identify situations like climbing a hill or crossing a rock.
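
    The following sketch shows the general idea of HMM-based feature detection from robot sensor sequences. It uses hmmlearn's first-order GaussianHMM (the paper itself uses second-order HMMs, which hmmlearn does not provide), and the sensor data, class names, and model sizes are synthetic assumptions.

    import numpy as np
    from hmmlearn import hmm

    rng = np.random.default_rng(0)

    def make_sequence(kind, length=60):
        """Synthetic 2-D sensor readings (e.g. left/right range) for a situation."""
        base = np.array([2.0, 0.5]) if kind == "open_door" else np.array([0.5, 0.5])
        return base + 0.1 * rng.standard_normal((length, 2))

    # Train one HMM per feature class on example sequences of that class.
    models = {}
    for kind in ("open_door", "corridor"):
        seqs = [make_sequence(kind) for _ in range(20)]
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[kind] = m

    # Classify a new sequence by the model with the highest log-likelihood.
    test = make_sequence("open_door")
    scores = {kind: m.score(test) for kind, m in models.items()}
    print(max(scores, key=scores.get))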

  4. Automatic Generation of Building Models with Levels of Detail 1-3

    Nguatem, W.; Drauschke, M.; Mayer, H.


    We present a workflow for the automatic generation of building models with levels of detail (LOD) 1 to 3 according to the CityGML standard (Gröger et al., 2012). We start by orienting unsorted image sets employing (Mayer et al., 2012), compute depth maps using semi-global matching (SGM) (Hirschmüller, 2008), and fuse these depth maps to reconstruct dense 3D point clouds (Kuhn et al., 2014). Based on planes segmented from these point clouds, we have developed a stochastic method for roof model selection (Nguatem et al., 2013) and window model selection (Nguatem et al., 2014). We demonstrate our workflow up to the export into CityGML.

  5. A Parallel Interval Computation Model for Global Optimization with Automatic Load Balancing

    Yong Wu; Arun Kumar


    In this paper, we propose a decentralized parallel computation model for global optimization using interval analysis. The model is adaptive to any number of processors, and the workload is automatically and evenly distributed among all processors by alternating message passing. The problems received by each processor are processed based on their local dominance properties, which avoids unnecessary interval evaluations. Further, the problem is treated as a whole at the beginning of the computation, so that no initial decomposition scheme is required. Numerical experiments indicate that the model works well and is stable with different numbers of parallel processors, distributes the load evenly among the processors, and provides an impressive speedup, especially when the problem is time-consuming to solve.
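
    A minimal serial sketch of the interval branch-and-bound idea underlying such solvers is shown below; it does not reproduce the paper's decentralized load balancing across processors, and the objective function and its interval bounds are illustrative assumptions.

    import heapq

    def f_interval(lo, hi):
        """Inclusion function for f(x) = x**2 - 4x on an interval [lo, hi]."""
        sq_lo = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)   # bound x**2
        sq_hi = max(lo * lo, hi * hi)
        return sq_lo - 4.0 * hi, sq_hi - 4.0 * lo                   # add bound of -4x

    def interval_minimize(lo, hi, tol=1e-6):
        best_upper = float("inf")
        work = [(f_interval(lo, hi)[0], lo, hi)]          # priority queue on lower bound
        while work:
            lower, a, b = heapq.heappop(work)
            if lower > best_upper or (b - a) < tol:       # dominance / width test
                continue
            mid = 0.5 * (a + b)
            for sub in ((a, mid), (mid, b)):
                sub_lo, sub_hi = f_interval(*sub)
                best_upper = min(best_upper, sub_hi)
                if sub_lo <= best_upper:
                    heapq.heappush(work, (sub_lo, *sub))
        return best_upper

    print(interval_minimize(-10.0, 10.0))   # minimum of x^2 - 4x is -4 at x = 2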

  6. Learning to Automatically Detect Features for Mobile Robots Using Second-Order Hidden Markov Models

    Richard Washington


    Full Text Available In this paper, we propose a new method based on Hidden Markov Models to interpret temporal sequences of sensor data from mobile robots in order to automatically detect features. Hidden Markov Models have been used for a long time in pattern recognition, especially in speech recognition. Their main advantage over other methods (such as neural networks) is their ability to model noisy temporal signals of variable length. We show in this paper that this approach is well suited for the interpretation of temporal sequences of mobile-robot sensor data. We present two distinct experiments and results: the first in an indoor environment where a mobile robot learns to detect features like open doors or T-intersections, the second in an outdoor environment where a different mobile robot has to identify situations like climbing a hill or crossing a rock.

  7. LHC-GCS a model-driven approach for automatic PLC and SCADA code generation

    Thomas, Geraldine; Barillère, Renaud; Cabaret, Sebastien; Kulman, Nikolay; Pons, Xavier; Rochez, Jacques


    The LHC experiments’ Gas Control System (LHC GCS) project [1] aims to provide the four LHC experiments (ALICE, ATLAS, CMS and LHCb) with control for their 23 gas systems. To ease the production and maintenance of 23 control systems, a model-driven approach has been adopted to generate automatically the code for the Programmable Logic Controllers (PLCs) and for the Supervision Control And Data Acquisition (SCADA) systems. The first milestones of the project have been achieved. The LHC GCS framework [4] and the generation tools have been produced. A first control application has actually been generated and is in production, and a second is in preparation. This paper describes the principle and the architecture of the model-driven solution. It will in particular detail how the model-driven solution fits with the LHC GCS framework and with the UNICOS [5] data-driven tools.

  8. Lightning Protection Performance Assessment of Transmission Line Based on ATP model Automatic Generation

    Luo Hanwu


    Full Text Available This paper presents a novel method to determine the initial lightning breakdown current by combining the ATP and MATLAB simulation software, with the aim of evaluating the lightning protection performance of transmission lines. First, an executable ATP simulation model is generated automatically from the required information, such as power source parameters, tower parameters, overhead line parameters, grounding resistance and lightning current parameters, through an interface program coded in MATLAB. Then, data are extracted from the LIS files obtained by executing the ATP simulation model, and the occurrence of transmission line breakdown is determined from the relevant data in the LIS file. The lightning current amplitude is reduced when breakdown occurs and increased otherwise. Thus, the initial lightning breakdown current of a transmission line with given parameters can be determined accurately by continuously changing the lightning current amplitude, which is realized by a loop algorithm coded in MATLAB. The method proposed in this paper generates the ATP simulation program automatically and facilitates the lightning protection performance assessment of transmission lines.

  9. Automatic lung tumor segmentation on PET/CT images using fuzzy Markov random field model.

    Guo, Yu; Feng, Yuanming; Sun, Jian; Zhang, Ning; Lin, Wang; Sa, Yu; Wang, Ping


    The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information about human tissues, and it has been used for better tumor volume definition in lung cancer. This paper proposes a robust method for automatic lung tumor segmentation on PET/CT images. The new method is based on a fuzzy Markov random field (MRF) model. The combination of PET and CT image information is achieved by using a proper joint posterior probability distribution of observed features in the fuzzy MRF model, which performs better than the commonly used Gaussian joint distribution. In this study, the PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. Tumor segmentations with the proposed method and manual segmentation by an experienced radiation oncologist on the fused images were performed, respectively. Segmentation results obtained with the two methods were similar, and Dice's similarity coefficient (DSC) was 0.85 ± 0.013. It has been shown that effective and automatic segmentation can be achieved with this method for lung tumors that are located near other organs with similar intensities in PET and CT images, such as when the tumors extend into the chest wall or mediastinum.

  10. Automatic Lung Tumor Segmentation on PET/CT Images Using Fuzzy Markov Random Field Model

    Yu Guo


    Full Text Available The combination of positron emission tomography (PET) and CT images provides complementary functional and anatomical information about human tissues, and it has been used for better tumor volume definition in lung cancer. This paper proposes a robust method for automatic lung tumor segmentation on PET/CT images. The new method is based on a fuzzy Markov random field (MRF) model. The combination of PET and CT image information is achieved by using a proper joint posterior probability distribution of observed features in the fuzzy MRF model, which performs better than the commonly used Gaussian joint distribution. In this study, the PET and CT simulation images of 7 non-small cell lung cancer (NSCLC) patients were used to evaluate the proposed method. Tumor segmentations with the proposed method and manual segmentation by an experienced radiation oncologist on the fused images were performed, respectively. Segmentation results obtained with the two methods were similar, and Dice's similarity coefficient (DSC) was 0.85 ± 0.013. It has been shown that effective and automatic segmentation can be achieved with this method for lung tumors that are located near other organs with similar intensities in PET and CT images, such as when the tumors extend into the chest wall or mediastinum.

  11. Automatic detection of the belt-like region in an image with variational PDE model

    Shoutao Li; Xiaomao Li; Yandong Tang


    In this paper, we propose a novel method to automatically detect belt-like objects, such as highways or rivers, in a given image based on the Mumford-Shah functional and the evolution of two phase curves. The method automatically detects two curves that are the boundaries of the belt-like object. In fact, this is a partition problem, and we model it as the energy minimization of a Mumford-Shah-based minimal partition problem, similar to an active contour model. With the Eulerian formulation, the partial differential equations (PDEs) of curve evolution are given, and the two curves stop on the desired boundary. The stopping term does not depend on the gradient of the image, and the initial curves can be placed anywhere in the image. We also give a numerical algorithm using finite differences and present various experimental results. Compared with other methods, our method can directly detect the boundaries of a belt-like object as two continuous curves, even if the image is very noisy.

  12. Automatic orientation and 3D modelling from markerless rock art imagery

    Lerma, J. L.; Navarro, S.; Cabrelles, M.; Seguí, A. E.; Hernández, D.


    This paper investigates the use of two detectors and descriptors on image pyramids for automatic image orientation and generation of 3D models. The detectors and descriptors replace manual measurements and are used to detect, extract and match features across multiple images. The Scale-Invariant Feature Transform (SIFT) and the Speeded Up Robust Features (SURF) are assessed based on speed, number of features, matched features, and precision in image and object space, depending on the adopted hierarchical matching scheme. The influence of additionally applying Area Based Matching (ABM) with normalised cross-correlation (NCC) and least squares matching (LSM) is also investigated. The pipeline makes use of photogrammetric and computer vision algorithms, aiming at minimum interaction and maximum accuracy from a calibrated camera. Both the exterior orientation parameters and the 3D coordinates in object space are sequentially estimated, combining relative orientation, single space resection and bundle adjustment. The fully automatic image-based pipeline presented herein to automate the image orientation step of a sequence of terrestrial markerless imagery is compared with manual bundle block adjustment and terrestrial laser scanning (TLS), which serves as ground truth. The benefits of applying ABM after FBM are assessed both in image and object space for the 3D modelling of a complex rock art shelter.
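
    As one ingredient of such a pipeline, the sketch below shows feature-based matching between two images with SIFT and a Lowe ratio test using OpenCV; the file names are placeholders, and this is not the authors' full photogrammetric chain.

    import cv2

    img1 = cv2.imread("rock_art_view1.jpg", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("rock_art_view2.jpg", cv2.IMREAD_GRAYSCALE)

    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Brute-force matching with a ratio test to keep only distinctive matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    raw_matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in raw_matches if m.distance < 0.75 * n.distance]
    print(f"{len(good)} putative correspondences")

    # The correspondences could then feed relative orientation / bundle adjustment,
    # e.g. via the essential matrix for a calibrated camera (K assumed known):
    # E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)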

  13. Automatic pre-processing for an object-oriented distributed hydrological model using GRASS-GIS

    Sanzana, P.; Jankowfsky, S.; Branger, F.; Braud, I.; Vargas, X.; Hitschfeld, N.


    Landscapes are very heterogeneous, which impacts the hydrological processes occurring in catchments, especially in the modeling of peri-urban catchments. The Hydrological Response Units (HRUs), resulting from the intersection of different maps, such as land use, soil types and geology, and flow networks, allow the representation of these elements in an explicit way, preserving the natural and artificial contours of the different layers. These HRUs are used as the model mesh in some distributed object-oriented hydrological models, allowing the application of a topology-oriented approach. The connectivity between polygons and polylines provides a detailed representation of the water balance and overland flow in these distributed hydrological models, based on irregular hydro-landscape units. When computing fluxes between these HRUs, geometrical parameters, such as the distance between the centroid of an HRU and the river network, or the length of the perimeter, can impact the realism of the calculated overland, sub-surface and groundwater fluxes. Therefore, it is necessary to process the original model mesh in order to avoid these numerical problems. We present an automatic pre-processing procedure implemented in the open-source GRASS-GIS software, using several Python scripts and already available algorithms, such as the Triangle software. First, scripts were developed to improve the topology of the various elements, such as snapping the river network to the closest contours. When data are derived from remote sensing, such as vegetation areas, their perimeters have many right angles, which were smoothed. Second, the algorithms more specifically address badly shaped elements of the model mesh, such as polygons with narrow shapes, markedly irregular contours and/or centroids outside the polygons. To identify these elements we used shape descriptors. The convexity index was considered the best descriptor to identify them with a threshold
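
    A minimal sketch of flagging badly shaped polygons with a convexity index is given below, taking the index as the ratio of polygon area to convex-hull area; the threshold and geometry are illustrative assumptions and the paper's GRASS-GIS implementation may differ.

    from shapely.geometry import Polygon

    def convexity_index(poly: Polygon) -> float:
        """1.0 for convex polygons, smaller for irregular, narrow shapes."""
        return poly.area / poly.convex_hull.area

    # An elongated, L-shaped polygon mimicking a badly shaped model-mesh element.
    hru = Polygon([(0, 0), (10, 0), (10, 1), (2, 1), (2, 6), (0, 6)])

    ci = convexity_index(hru)
    if ci < 0.8:                       # illustrative threshold
        print(f"convexity index {ci:.2f}: candidate for reshaping")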

  14. Automaticity or active control

    Tudoran, Ana Alina; Olsen, Svein Ottar

    This study addresses the quasi-moderating role of habit strength in explaining action loyalty. A model of loyalty behaviour is proposed that extends the traditional satisfaction–intention–action loyalty network. Habit strength is conceptualised as a cognitive construct referring to the psychological aspects of the construct, such as routine, inertia, automaticity, or very little conscious deliberation. The data consist of 2962 consumers participating in a large European survey. The results show that habit strength significantly moderates the association between satisfaction and action loyalty and between intended loyalty and action loyalty, respectively. At high levels of habit strength, consumers are more likely to free up cognitive resources and incline the balance from controlled to routine and automatic-like responses.

  15. Automatic detection of avalanches in seismic data using Hidden Markov Models

    Heck, Matthias; Hammer, Conny; van Herwijnen, Alec; Schweizer, Jürg; Fäh, Donat


    Seismic monitoring systems are well suited for the remote detection of mass movements, such as landslides, rockfalls and debris flows. For snow avalanches, this has been known since the 1970s, and seismic monitoring could potentially provide valuable information for avalanche forecasting. We thus explored continuous seismic data from a string of vertical-component geophones in an avalanche starting zone above Davos, Switzerland. The overall goal is to automatically detect avalanches with a Hidden Markov Model (HMM), a statistical pattern recognition tool widely used for speech recognition. An HMM uses a classifier to determine the likelihood that input objects belong to a finite number of classes. These classes are obtained by learning a multidimensional Gaussian mixture model representation of the overall observable feature space. This model is then used to derive the HMM parameters for avalanche waveforms using a single training sample to build the final classifier. We classified data from the winter season of 2010 and compared the results to several hundred avalanches manually identified in the seismic data. First results from classifying a single day have shown that the model performs well in terms of probability of detection while having a relatively low false-alarm rate. We further implemented a voting-based classification approach that discards events detected by only one sensor, to further improve model performance. For instance, on 22 March 2010, a day with particularly high avalanche activity, 17 avalanches were positively identified by at least three sensors with no false alarms. These results show that the automatic detection of avalanches in seismic data is feasible, bringing us one step closer to implementing seismic monitoring systems in operational forecasting.

  16. Slow Dynamics Model of Compressed Air Energy Storage and Battery Storage Technologies for Automatic Generation Control

    Krishnan, Venkat; Das, Trishna


    Increasing variable generation penetration and the consequent increase in short-term variability make energy storage technologies attractive, especially in the ancillary market for providing frequency regulation services. This paper presents slow-dynamics models for compressed air energy storage and battery storage technologies that can be used in automatic generation control studies to assess the system frequency response and quantify the benefits of storage technologies in providing regulation service. The paper also presents the slow-dynamics model of the power system integrated with storage technologies in complete state-space form. The storage technologies have been integrated into the single-area IEEE 24-bus system, and a comparative study of various solution strategies, including transmission enhancement and combustion turbines, has been performed in terms of generation cycling and frequency response performance metrics.

  17. Automatic sleep classification using a data-driven topic model reveals latent sleep states

    Koch, Henriette; Christensen, Julie Anja Engelhard; Frandsen, Rune


    Background: The golden standard for sleep classification uses manual scoring of polysomnography despite points of criticism such as oversimplification, low inter-rater reliability and the standard being designed on young and healthy subjects. New method: To meet the criticism and reveal latent sleep states, this study developed a general and automatic sleep classifier using a data-driven approach. Spectral EEG and EOG measures and eye correlation in 1 s windows were calculated, and each sleep epoch was expressed as a mixture of probabilities of latent sleep states by using the topic model Latent Dirichlet Allocation. Model application was tested on control subjects and patients with periodic leg movements (PLM) representing a non-neurodegenerative group, and patients with idiopathic REM sleep behavior disorder (iRBD) and Parkinson's Disease (PD) representing a neurodegenerative group...
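
    The sketch below illustrates the core step of expressing epochs as mixtures of latent "topics" via Latent Dirichlet Allocation, using scikit-learn. Real input would be counts of discretized EEG/EOG spectral features per epoch; here the count matrix is random and the discretization scheme is an assumption, not the authors' exact pipeline.

    import numpy as np
    from sklearn.decomposition import LatentDirichletAllocation

    rng = np.random.default_rng(0)
    n_epochs, vocab_size = 200, 50          # epochs as "documents", feature bins as "words"
    counts = rng.poisson(lam=2.0, size=(n_epochs, vocab_size))

    lda = LatentDirichletAllocation(n_components=5, random_state=0)   # 5 latent sleep states
    lda.fit(counts)

    # Each epoch becomes a probability mixture over the latent states.
    epoch_topic_mix = lda.transform(counts)
    print(epoch_topic_mix[0].round(2), epoch_topic_mix[0].sum())      # sums to ~1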

  18. Out-of-Bounds Array Access Fault Model and Automatic Testing Method Study

    GAO Chuanping; DUAN Miyi; TAN Liqun; GONG Yunzhan


    Out-of-bounds array access (OOB) is one of the fault models commonly encountered in object-oriented programming languages. At present, code insertion and optimization techniques are widely used to detect and fix this kind of fault. Although this approach can detect some OOB faults, it cannot test programs thoroughly, nor locate all faults correctly. The code insertion makes the test procedure inefficient, so that testing becomes costly and time-consuming. This paper uses a special static testing technique to realize fault detection in OOB programs. We first establish the fault models for OOB, and then develop an automatic test tool to detect the faults. Experiments have been carried out, and the results show that the method proposed in the paper is efficient and feasible in practical applications.

  19. Perceptual quality estimation of H.264/AVC videos using reduced-reference and no-reference models

    Shahid, Muhammad; Pandremmenou, Katerina; Kondi, Lisimachos P.; Rossholm, Andreas; Lövström, Benny


    Reduced-reference (RR) and no-reference (NR) models for video quality estimation, using features that account for the impact of coding artifacts, spatio-temporal complexity, and packet losses, are proposed. The purpose of this study is to analyze a number of potentially quality-relevant features in order to select the most suitable set of features for building the desired models. The proposed sets of features have not been used in the literature and some of the features are used for the first time in this study. The features are employed by the least absolute shrinkage and selection operator (LASSO), which selects only the most influential of them toward perceptual quality. For comparison, we apply feature selection in the complete feature sets and ridge regression on the reduced sets. The models are validated using a database of H.264/AVC encoded videos that were subjectively assessed for quality in an ITU-T compliant laboratory. We infer that just two features selected by RR LASSO and two bitstream-based features selected by NR LASSO are able to estimate perceptual quality with high accuracy, higher than that of ridge, which uses more features. The comparisons with competing works and two full-reference metrics also verify the superiority of our models.
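
    In the spirit of the described models, the sketch below selects quality-relevant features with LASSO and compares against ridge regression using scikit-learn; the synthetic features and quality targets are illustrative, not the paper's data.

    import numpy as np
    from sklearn.linear_model import Lasso, Ridge
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.standard_normal((300, 20))               # candidate quality features
    w_true = np.zeros(20)
    w_true[[2, 7]] = [1.5, -2.0]                     # only two features truly matter
    y = X @ w_true + 0.1 * rng.standard_normal(300)  # perceptual quality scores

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    lasso = Lasso(alpha=0.05).fit(X_tr, y_tr)
    ridge = Ridge(alpha=1.0).fit(X_tr, y_tr)

    selected = np.flatnonzero(lasso.coef_)           # LASSO drives most weights to zero
    print("selected features:", selected)
    print("LASSO R^2:", round(lasso.score(X_te, y_te), 3),
          "Ridge R^2:", round(ridge.score(X_te, y_te), 3))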

  20. [Automatic detection of exudates in retinal images based on threshold moving average models].

    Wisaeng, K; Hiransakolwong, N; Pothiruk, E


    Since exudate diagnostic procedures require the attention of an expert ophthalmologist as well as regular monitoring of the disease, the workload of expert ophthalmologists will eventually exceed current screening capabilities. Retinal imaging technology, a current screening practice, provides a potential solution. In this paper, fast and robust automatic detection of exudates based on moving-average histogram models of the fuzzy image is applied, and an improved histogram is derived. After segmentation of the exudate candidates, the true exudates are identified using a Sobel edge detector and automatic Otsu thresholding, resulting in accurate localization of the exudates in digital retinal images. To compare the performance of exudate detection methods we have constructed a large database of digital retinal images. The method was trained on a set of 200 retinal images and tested on a completely independent set of 1220 retinal images. Results show that the exudate detection method achieves overall sensitivity, specificity, and accuracy of 90.42%, 94.60%, and 93.69%, respectively.
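
    A minimal sketch combining Otsu thresholding and a Sobel edge map to prune bright exudate candidates is given below, loosely following the described pipeline; the file name, channel choice, and the final pruning rule are illustrative assumptions.

    import cv2
    import numpy as np

    img = cv2.imread("retina.png")
    green = img[:, :, 1]                               # exudates contrast well in the green channel

    # Automatic Otsu threshold to get bright candidate regions.
    _, candidates = cv2.threshold(green, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Sobel gradient magnitude as an edge-strength map.
    gx = cv2.Sobel(green, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(green, cv2.CV_64F, 0, 1, ksize=3)
    edges = np.hypot(gx, gy)

    # Keep candidate pixels that also show strong edges nearby (crude pruning rule).
    strong = (edges > edges.mean() + 2 * edges.std()).astype(np.uint8) * 255
    strong = cv2.dilate(strong, np.ones((5, 5), np.uint8))
    exudate_mask = cv2.bitwise_and(candidates, strong)
    cv2.imwrite("exudate_mask.png", exudate_mask)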

  1. Model design and simulation of automatic sorting machine using proximity sensor

    Bankole I. Oladapo


    Full Text Available Automatic sorting systems have been reported to be complex and a global problem, because of the inability of sorting machines to incorporate flexibility in their design concept. This research therefore designed and developed an automated object-sorting system based on a conveyor belt. The developed automated sorting machine incorporates flexibility, separates species of non-ferrous metal objects, and moves objects automatically to the basket as defined by the regulation of the Programmable Logic Controller (PLC), with a capacitive proximity sensor to detect a value range of objects. The results obtained show that plastic, wood, and steel were sorted into their respective and correct positions with average sorting times of 9.903 s, 14.072 s and 18.648 s, respectively. The proposed model could be adopted by any institution or industry whose practices are based on mechatronic engineering systems, to guide the industrial sector in object sorting and to serve as a teaching aid, producing lists of classified materials according to the enabled sorting program commands.

  2. Automatic detection and classification of sleep stages by multichannel EEG signal modeling.

    Zhovna, Inna; Shallom, Ilan D


    In this paper, a novel method for automatic detection and classification of sleep stages using multichannel electroencephalography (EEG) is presented. Understanding the sleep mechanism is vital for the diagnosis and treatment of sleep disorders, and the EEG is one of the most important tools for studying and diagnosing them. Interpretation of EEG waveform activity is performed by visual analysis, a very difficult procedure. The aim of this research is to ease the difficulties involved in the existing manual process of EEG interpretation by proposing an automatic sleep stage detection and classification system. The suggested method is based on a Multichannel Auto-Regressive (MAR) model; the multichannel analysis approach incorporates the cross-correlation information existing between different EEG signals. In the training phase, we used the vector quantization (VQ) algorithm Linde-Buzo-Gray (LBG) and defined sleep stages by estimating probability mass functions (pmf) per sleep stage using the Generalized Log Likelihood Ratio (GLLR) distortion. The classification phase was performed using the Kullback-Leibler (KL) divergence. The results of this research are promising, with a classification accuracy rate of 93.2%. The results encourage continuation of this research in the sleep field and in other biomedical signal applications.


    Saeed Shahrivari


    Full Text Available Page tagging is one of the most important facilities for increasing the accuracy of information retrieval on the web. Tags are simple pieces of data that usually consist of one or several words and briefly describe a page. Tags provide useful information about a page and can be used for boosting the accuracy of searching, document clustering, and result grouping. The most accurate solution to page tagging is using human experts. However, when the number of pages is large, humans cannot be used, and automatic solutions are needed instead. We propose a solution called PerTag which can automatically tag a set of Persian web pages. PerTag is based on n-gram models and uses the tf-idf method plus some effective Persian language rules to select proper tags for each web page. Since our target is huge sets of web pages, PerTag is built on top of the MapReduce distributed computing framework. We used a set of more than 500 million Persian web pages during our experiments, and extracted tags for each page using a cluster of 40 machines. The experimental results show that PerTag is both fast and accurate.
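
    A minimal sketch of tf-idf-based tag extraction is shown below using scikit-learn; the documents are toy English strings, whereas PerTag itself works on Persian text with additional language-specific rules and runs on MapReduce, which is not shown here.

    from sklearn.feature_extraction.text import TfidfVectorizer

    pages = [
        "solar power plants convert sunlight into electricity",
        "football league results and match reports",
        "recipes for traditional bread and pastry baking",
    ]

    vectorizer = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
    tfidf = vectorizer.fit_transform(pages)
    vocab = vectorizer.get_feature_names_out()

    for i in range(len(pages)):
        row = tfidf[i].toarray().ravel()
        top = row.argsort()[::-1][:3]          # three highest-scoring terms as tags
        print(f"page {i}: {[vocab[j] for j in top]}")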

  4. Automatic corpus callosum segmentation using a deformable active Fourier contour model

    Vachet, Clement; Yvernault, Benjamin; Bhatt, Kshamta; Smith, Rachel G.; Gerig, Guido; Cody Hazlett, Heather; Styner, Martin


    The corpus callosum (CC) is a structure of interest in many neuroimaging studies of neuro-developmental pathology such as autism. It plays an integral role in relaying sensory, motor and cognitive information from homologous regions in both hemispheres. We have developed a framework that allows automatic segmentation of the corpus callosum and its lobar subdivisions. Our approach employs constrained elastic deformation of flexible Fourier contour model, and is an extension of Szekely's 2D Fourier descriptor based Active Shape Model. The shape and appearance model, derived from a large mixed population of 150+ subjects, is described with complex Fourier descriptors in a principal component shape space. Using MNI space aligned T1w MRI data, the CC segmentation is initialized on the mid-sagittal plane using the tissue segmentation. A multi-step optimization strategy, with two constrained steps and a final unconstrained step, is then applied. If needed, interactive segmentation can be performed via contour repulsion points. Lobar connectivity based parcellation of the corpus callosum can finally be computed via the use of a probabilistic CC subdivision model. Our analysis framework has been integrated in an open-source, end-to-end application called CCSeg both with a command line and Qt-based graphical user interface (available on NITRC). A study has been performed to quantify the reliability of the semi-automatic segmentation on a small pediatric dataset. Using 5 subjects randomly segmented 3 times by two experts, the intra-class correlation coefficient showed a superb reliability (0.99). CCSeg is currently applied to a large longitudinal pediatric study of brain development in autism.

  5. Stability and homogeneity of microbiological reference materials: some statistical models

    Heisterkamp SH; Hoogenveen RT; van Strijp-Lockefeer NGWM; Hoekstra JA; Havelaar AH; Mooijman KA


    Microbiological reference materials have been developed by the RIVM for several years. These materials consist of capsules filled with milk powder artificially contaminated with a bacterial test strain of choice (i.e. Escherichia coli, Enterobacter cloacae, Salmonella typhimurium). Both from long

  6. Towards Automatic Validation and Healing of Citygml Models for Geometric and Semantic Consistency

    Alam, N.; Wagner, D.; Wewetzer, M.; von Falkenhausen, J.; Coors, V.; Pries, M.


    A steadily growing number of application fields for large 3D city models have emerged in recent years. As in many other domains, data quality is recognized as a key factor for successful business. Quality management is mandatory in the production chain nowadays. Automated domain-specific tools are widely used for validation of business-critical data, but common standards defining correct geometric modelling are still not precise enough to provide a sound basis for data validation of 3D city models. Although the workflow for 3D city models is well established from data acquisition to processing, analysis and visualization, quality management is not yet a standard part of this workflow. Processing data sets with an unclear specification leads to erroneous results and application defects. We show that this problem persists even if the data are standard compliant. Validation results of real-world city models are presented to demonstrate the potential of the approach. A tool to repair the errors detected during the validation process is under development; first results are presented and discussed. The goal is to heal defects of the models automatically and export a corrected CityGML model.

  7. Man vs. Machine: An interactive poll to evaluate hydrological model performance of a manual and an automatic calibration

    Wesemann, Johannes; Burgholzer, Reinhard; Herrnegger, Mathew; Schulz, Karsten


    In recent years, a lot of research effort in hydrological modelling has been invested in improving the automatic calibration of rainfall-runoff models. This includes, for example, (1) the implementation of new optimisation methods, (2) the incorporation of new and different objective criteria and signatures in the optimisation and (3) the use of auxiliary data sets apart from runoff. Nevertheless, in many applications manual calibration is still justifiable and frequently applied. The hydrologist performing the manual calibration, with his expert knowledge, is able to judge the hydrographs both in detail and holistically. This integrated eye-ball verification procedure available to a human can be difficult to formulate in objective criteria, even when using a multi-criteria approach. Comparing the results of automatic and manual calibration is not straightforward. Automatic calibration often solely involves objective criteria such as the Nash-Sutcliffe efficiency or the Kling-Gupta efficiency as a benchmark during the calibration. Consequently, a comparison based on such measures is intrinsically biased towards automatic calibration. Additionally, objective criteria do not cover all aspects of a hydrograph, leaving questions concerning the quality of a simulation open. This contribution therefore seeks to examine the quality of manually and automatically calibrated hydrographs by interactively involving expert knowledge in the evaluation. Simulations have been performed for the Mur catchment in Austria with the rainfall-runoff model COSERO using two parameter sets evolved from a manual and an automatic calibration. A subset of resulting hydrographs for observation and simulation, representing the typical flow conditions and events, will be evaluated in this study. In an interactive crowdsourcing approach experts attending the session can vote for their preferred simulated hydrograph without having information on the calibration method that
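
    For reference, the sketch below implements the two objective criteria named above, the Nash-Sutcliffe efficiency (NSE) and the Kling-Gupta efficiency (KGE, in the standard Gupta et al. 2009 form), for comparing simulated and observed runoff; the example series are illustrative.

    import numpy as np

    def nse(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def kge(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        r = np.corrcoef(obs, sim)[0, 1]          # linear correlation
        alpha = sim.std() / obs.std()            # variability ratio
        beta = sim.mean() / obs.mean()           # bias ratio
        return 1.0 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

    obs = [1.0, 1.4, 3.2, 5.0, 2.7, 1.8, 1.2]
    sim = [1.1, 1.3, 2.9, 4.6, 3.0, 1.9, 1.1]
    print(f"NSE = {nse(obs, sim):.3f}, KGE = {kge(obs, sim):.3f}")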

  8. Modelling Nonlinearities and Reference Dependence in General Practitioners' Income Preferences.

    Holte, Jon Helgheim; Sivey, Peter; Abelsen, Birgit; Olsen, Jan Abel


    This paper tests for the existence of nonlinearity and reference dependence in income preferences for general practitioners. Confirming the theory of reference dependent utility within the context of a discrete choice experiment, we find that losses loom larger than gains in income for Norwegian general practitioners, i.e. they value losses from their current income level around three times higher than the equivalent gains. Our results are validated by comparison with equivalent contingent valuation values for marginal willingness to pay and marginal willingness to accept compensation for changes in job characteristics. Physicians' income preferences determine the effectiveness of 'pay for performance' and other incentive schemes. Our results may explain the relative ineffectiveness of financial incentive schemes that rely on increasing physicians' incomes. Copyright © 2015 John Wiley & Sons, Ltd.

  9. Non-parametric iterative model constraint graph min-cut for automatic kidney segmentation.

    Freiman, M; Kronman, A; Esses, S J; Joskowicz, L; Sosna, J


    We present a new non-parametric model constraint graph min-cut algorithm for automatic kidney segmentation in CT images. The segmentation is formulated as a maximum a-posteriori estimation of a model-driven Markov random field. A non-parametric hybrid shape and intensity model is treated as a latent variable in the energy functional. The latent model and labeling map that minimize the energy functional are then simultaneously computed with an expectation maximization approach. The main advantages of our method are that it does not assume a fixed parametric prior model, which is subject to inter-patient variability and registration errors, and that it combines both the model and the image information into a unified graph min-cut based segmentation framework. We evaluated our method on 20 kidneys from 10 CT datasets with and without contrast agent, for which ground-truth segmentations were generated by averaging three manual segmentations. Our method yields an average volumetric overlap error of 10.95% and an average symmetric surface distance of 0.79 mm. These results indicate that our method is accurate and robust for kidney segmentation.

  10. BioASF: a framework for automatically generating executable pathway models specified in BioPAX.

    Haydarlou, Reza; Jacobsen, Annika; Bonzanni, Nicola; Feenstra, K Anton; Abeln, Sanne; Heringa, Jaap


    Biological pathways play a key role in most cellular functions. To better understand these functions, diverse computational and cell biology researchers use biological pathway data for various analysis and modeling purposes. For specifying these biological pathways, a community of researchers has defined BioPAX and provided various tools for creating, validating and visualizing BioPAX models. However, a generic software framework for simulating BioPAX models is missing. Here, we attempt to fill this gap by introducing a generic simulation framework for BioPAX. The framework explicitly separates the execution model from the model structure as provided by BioPAX, with the advantage that the modelling process becomes more reproducible and intrinsically more modular; this ensures natural biological constraints are satisfied upon execution. The framework is based on the principles of discrete event systems and multi-agent systems, and is capable of automatically generating a hierarchical multi-agent system for a given BioPAX model. To demonstrate the applicability of the framework, we simulated two types of biological network models: a gene regulatory network modeling the haematopoietic stem cell regulators and a signal transduction network modeling the Wnt/β-catenin signaling pathway. We observed that the results of the simulations performed using our framework were entirely consistent with the simulation results reported by the researchers who developed the original models in a proprietary language. The framework, implemented in Java, is open source, and its source code, documentation and tutorial are available online. © The Author 2016. Published by Oxford University Press.

  11. Levelized Cost of Energy Analysis of Marine and Hydrokinetic Reference Models: Preprint

    Jenne, D. S.; Yu, Y. H.; Neary, V.


    In 2010 the U.S. Department of Energy initiated the development of six marine energy converter reference models. The reference models are point designs of well-known marine energy converters. Each device was designed to operate in a specific marine resource, rather than as a generic device that can be deployed at any location. This approach allows each device to be used as a benchmark for future devices. The six designs consist of three current energy converters and three wave energy converters. The reference model project has generated both technical and economic data sets that are available in the public domain. The methodology to calculate the levelized cost of energy for the reference model project and an overall comparison of the cost of energy from these six reference-model designs are presented in this paper.
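
    As a minimal sketch of the levelized cost of energy calculation (discounted lifetime costs divided by discounted lifetime energy production), the function below can be used; the capital cost, O&M cost, energy yield, and discount rate are illustrative only and do not come from the reference model project.

    def lcoe(capex, annual_opex, annual_energy_kwh, discount_rate, lifetime_years):
        cost = capex
        energy = 0.0
        for year in range(1, lifetime_years + 1):
            factor = (1.0 + discount_rate) ** year
            cost += annual_opex / factor
            energy += annual_energy_kwh / factor
        return cost / energy            # $/kWh

    value = lcoe(capex=4.0e6, annual_opex=1.5e5, annual_energy_kwh=2.0e6,
                 discount_rate=0.07, lifetime_years=20)
    print(f"LCOE = {value:.3f} $/kWh")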

  12. Toward Automatic Recognition of Children's Affective State Using Physiological Parameters and Fuzzy Model of Emotions

    SCHIPOR, O.-A.


    Full Text Available Affective computing, the ability of a system to recognize, understand and simulate human emotional intelligence, is one of the most dynamic fields of Human-Computer Interaction (HCI). These characteristics find their applicability in areas where it is necessary to extend traditional cognitive communication with emotional features. That is why Computer Based Speech Therapy Systems (CBST), and especially those involving children with speech disorders, require this qualitative shift. In this paper we propose an original emotion-recognition framework as an extension of our previously developed system, Logomon. A fuzzy model is used to interpret the values of specific physiological parameters and to obtain the emotional state of the subject. Moreover, an experiment that indicates the emotion pattern (average fuzzy sets) for each therapeutic sequence is also presented. The obtained results encourage us to continue working on automatic emotion recognition and provide important clues regarding the future development of our CBST.

  13. A semi-automatic multiple view texture mapping for the surface model extracted by laser scanning

    Zhang, Zhichao; Huang, Xianfeng; Zhang, Fan; Chang, Yongmin; Li, Deren


    Laser scanning is an effective way to acquire geometry data of cultural heritage with complex architecture. After generating the 3D model of the object, it is difficult to perform exact texture mapping for the real object. We therefore aim to create seamless texture maps for a virtual heritage model of arbitrary topology. Texture detail is acquired directly from the real object under lighting conditions as uniform as possible. After preprocessing, images are registered on the 3D mesh in a semi-automatic way. We then divide the mesh into mesh patches that overlap with each other according to the valid texture area of each image, and an optimal correspondence between mesh patches and sections of the acquired images is built. Then, a smoothing approach based on texture blending is proposed to erase the seams between different images that map onto adjacent mesh patches. The result obtained with a Buddha of the Dunhuang Mogao Grottoes is presented and discussed.

  14. Modeling Earthen Dike Stability: Sensitivity Analysis and Automatic Calibration of Diffusivities Based on Live Sensor Data

    Melnikova, N B; Sloot, P M A


    The paper describes the concept and implementation details of integrating a finite element module for dike stability analysis, Virtual Dike, into an early warning system for flood protection. The module operates in real-time mode and includes fluid and structural sub-models for simulation of porous flow through the dike and for dike stability analysis. Real-time measurements obtained from pore pressure sensors are fed into the simulation module, to be compared with simulated pore pressure dynamics. Implementation of the module has been performed for a real-world test case: an earthen levee protecting a sea-port in Groningen, the Netherlands. Sensitivity analysis and calibration of diffusivities have been performed for tidal fluctuations. An algorithm for automatic diffusivity calibration for a heterogeneous dike is proposed and studied. Analytical solutions describing tidal propagation in a one-dimensional saturated aquifer are employed in the algorithm to generate initial estimates of the diffusivities.

  15. Hierarchical Model-Based Activity Recognition With Automatic Low-Level State Discovery

    Justin Muncaster


    Full Text Available Activity recognition in video streams is increasingly important for both the computer vision and artificial intelligence communities, with many applications in security and video surveillance. Ultimately, in such applications one wishes to recognize complex activities, which can be viewed as combinations of simple activities. In this paper, we present a general framework of a D-level dynamic Bayesian network to perform complex activity recognition. The levels of the network are constrained to enforce a state hierarchy, while the D-th level models the duration of the simplest events. Moreover, we propose to use the deterministic annealing clustering method to automatically define the simple activities, which correspond to the low-level states of the observable levels in the dynamic Bayesian network. We used real data sets for the experiments, and the experimental results show the effectiveness of our proposed method.

  16. Automatic Detection of Repetitive Components in 3D Mechanical Engineering Models

    Laixiang Wen


    Full Text Available We present an intelligent method to automatically detect repetitive components in 3D mechanical engineering models. In our work, a new Voxel-based Shape Descriptor (VSD) is proposed for effective matching, based on which a similarity function is defined. It uses the voxels intersecting the 3D outline of a mechanical component as the feature descriptor. Because each mechanical component may have a different pose, alignment is needed before matching. For the alignment, we adopt a genetic algorithm to search for the optimal solution that maximizes the global similarity. Two components are considered the same if the maximum global similarity exceeds a certain threshold. Note that the voxelization of components during feature extraction and the genetic algorithm search for the maximum global similarity are entirely implemented on the GPU, so the efficiency is improved significantly compared with a CPU implementation. Experimental results show that our method is more effective and efficient than existing methods for repetitive component detection.
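
    The sketch below shows a voxel-overlap similarity between two components in the spirit of comparing voxel-based shape descriptors; it uses a plain intersection-over-union on boolean grids, whereas the paper's VSD, pose search, and GPU implementation differ, and the box geometry and threshold are illustrative.

    import numpy as np

    def voxelize_box(grid_shape, lo, hi):
        """Mark voxels inside an axis-aligned box; a stand-in for real voxelization."""
        grid = np.zeros(grid_shape, dtype=bool)
        grid[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]] = True
        return grid

    def similarity(a, b):
        inter = np.logical_and(a, b).sum()
        union = np.logical_or(a, b).sum()
        return inter / union if union else 1.0

    part_a = voxelize_box((32, 32, 32), (4, 4, 4), (20, 12, 12))
    part_b = voxelize_box((32, 32, 32), (5, 4, 4), (21, 12, 12))   # slightly shifted copy

    score = similarity(part_a, part_b)
    print(f"similarity = {score:.2f}")
    if score > 0.8:                                                 # illustrative threshold
        print("components considered repetitive")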

  17. Contour-based automatic crater recognition using digital elevation models from Chang'E missions

    Zuo, Wei; Zhang, Zhoubin; Li, Chunlai; Wang, Rongwu; Yu, Linjie; Geng, Liang


    In order to provide fundamental information for exploration and related scientific research on the Moon and other planets, we propose a new automatic method to recognize craters on the lunar surface based on contour data extracted from a digital elevation model (DEM). Through DEM and image processing, this method can be used to reconstruct contour surfaces, extract and combine contour lines, set the characteristic parameters of crater morphology, and establish a crater pattern recognition program. The method has been tested and verified with DEM data from Chang'E-1 (CE-1) and Chang'E-2 (CE-2), showing a strong crater recognition ability with high detection rate, high robustness, and good adaptation to recognize various craters with different diameter and morphology. The method has been used to identify craters with high precision and accuracy on the Moon. The results meet requirements for supporting exploration and related scientific research for the Moon and planets.

  18. Automatic Construction of Predictive Neuron Models through Large Scale Assimilation of Electrophysiological Data

    Nogaret, Alain; Meliza, C. Daniel; Margoliash, Daniel; Abarbanel, Henry D. I.


    We report on the construction of neuron models by assimilating electrophysiological data with large-scale constrained nonlinear optimization. The method implements interior point line parameter search to determine parameters from the responses to intracellular current injections of zebra finch HVC neurons. We incorporated these parameters into a nine ionic channel conductance model to obtain completed models which we then use to predict the state of the neuron under arbitrary current stimulation. Each model was validated by successfully predicting the dynamics of the membrane potential induced by 20–50 different current protocols. The dispersion of parameters extracted from different assimilation windows was studied. Differences in constraints from current protocols, stochastic variability in neuron output, and noise behave as a residual temperature which broadens the global minimum of the objective function to an ellipsoid domain whose principal axes follow an exponentially decaying distribution. The maximum likelihood expectation of extracted parameters was found to provide an excellent approximation of the global minimum and yields highly consistent kinetics for both neurons studied. Large scale assimilation absorbs the intrinsic variability of electrophysiological data over wide assimilation windows. It builds models in an automatic manner treating all data as equal quantities and requiring minimal additional insight.

  19. Fitmunk: improving protein structures by accurate, automatic modeling of side-chain conformations.

    Porebski, Przemyslaw Jerzy; Cymborowski, Marcin; Pasenkiewicz-Gierula, Marta; Minor, Wladek


    Improvements in crystallographic hardware and software have allowed automated structure-solution pipelines to approach a near-`one-click' experience for the initial determination of macromolecular structures. However, in many cases the resulting initial model requires a laborious, iterative process of refinement and validation. A new method has been developed for the automatic modeling of side-chain conformations that takes advantage of rotamer-prediction methods in a crystallographic context. The algorithm, which is based on deterministic dead-end elimination (DEE) theory, uses new dense conformer libraries and a hybrid energy function derived from experimental data and prior information about rotamer frequencies to find the optimal conformation of each side chain. In contrast to existing methods, which incorporate the electron-density term into protein-modeling frameworks, the proposed algorithm is designed to take advantage of the highly discriminatory nature of electron-density maps. This method has been implemented in the program Fitmunk, which uses extensive conformational sampling. This improves the accuracy of the modeling and makes it a versatile tool for crystallographic model building, refinement and validation. Fitmunk was extensively tested on over 115 new structures, as well as a subset of 1100 structures from the PDB. It is demonstrated that the ability of Fitmunk to model more than 95% of side chains accurately is beneficial for improving the quality of crystallographic protein models, especially at medium and low resolutions. Fitmunk can be used for model validation of existing structures and as a tool to assess whether side chains are modeled optimally or could be better fitted into electron density. Fitmunk is available as a web service at or at




    Full Text Available This study describes an approach for modeling an assembly system, which is one of the main problems encountered during assembly. In this approach, wire-frame modeling of the assembly system is used, and each part is drawn in a different color. The assembly drawing and its various views are scanned along three different axes (-x, -y, -z). Scanning is done automatically by the developed software. The color codes obtained by scanning, which represent the different assembly parts, are assessed by the software along the six directions of the Cartesian coordinate system. Contact matrices are then formed to represent the relations among the assembly parts; these matrices are sufficient to represent an assembly model. The approach was applied to various assembly systems: pincer, hinge and clutch systems. One of the basic advantages of this approach is that the wire-frame model of the assembly system can be created in various CAD programs, and it can be applied to assembly systems containing many parts.

  1. Automatic modeling of pectus excavatum corrective prosthesis using artificial neural networks.

    Rodrigues, Pedro L; Rodrigues, Nuno F; Pinho, A C M; Fonseca, Jaime C; Correia-Pinto, Jorge; Vilaça, João L


    Pectus excavatum is the most common deformity of the thorax. Pre-operative diagnosis usually includes Computed Tomography (CT) to successfully employ a thoracic prosthesis for anterior chest wall remodeling. Aiming at the elimination of radiation exposure, this paper presents a novel methodology for the replacement of CT by a 3D laser scanner (radiation-free) for prosthesis modeling. The complete elimination of CT is based on an accurate determination of rib positions and the prosthesis placement region through skin surface points. The developed solution resorts to a normalized and combined outcome of an artificial neural network (ANN) set. Each ANN model was trained with data vectors from 165 male patients and using soft tissue thicknesses (STT) comprising information from the skin and rib cage (automatically determined by image processing algorithms). Tests revealed that rib positions for prosthesis placement and modeling can be estimated with an average error of 5.0 ± 3.6 mm. It was also shown that the ANN performance can be improved by introducing a manually determined initial STT value in the ANN normalization procedure (average error of 2.82 ± 0.76 mm). This error range is well below that of current manual prosthesis modeling (approximately 11 mm), providing a valuable and radiation-free procedure for prosthesis personalization. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  2. Multiobjective Optimal Algorithm for Automatic Calibration of Daily Streamflow Forecasting Model

    Yi Liu


    Full Text Available A single objective function cannot describe the characteristics of a complicated hydrologic system, so multiple objective functions are needed for the calibration of hydrologic models. Multiobjective algorithms based on the theory of nondominance are employed to solve this multiobjective optimization problem. In this paper, a novel multiobjective optimization method based on differential evolution with adaptive Cauchy mutation and chaos searching (MODE-CMCS) is proposed to optimize a daily streamflow forecasting model. Besides, to enhance the diversity of the Pareto solutions, a more precise crowding distance assignment is presented in this paper. Furthermore, because the traditional generalized spread metric (SP) is sensitive to the size of the Pareto set, a novel diversity performance metric that is independent of the Pareto set size is put forward in this research. The efficacy of the new MODE-CMCS algorithm is compared with the nondominated sorting genetic algorithm II (NSGA-II) on a daily streamflow forecasting model based on a support vector machine (SVM). The results verify that the performance of MODE-CMCS is superior to NSGA-II for the automatic calibration of the hydrologic model.
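
    For context, the sketch below implements the standard NSGA-II crowding-distance assignment used to preserve diversity among nondominated solutions; it is not the paper's modified assigner, and the objective values are illustrative.

    import numpy as np

    def crowding_distance(objectives):
        """objectives: (n_solutions, n_objectives) array; returns a distance per solution."""
        obj = np.asarray(objectives, dtype=float)
        n, m = obj.shape
        dist = np.zeros(n)
        for k in range(m):
            order = np.argsort(obj[:, k])
            span = obj[order[-1], k] - obj[order[0], k]
            dist[order[0]] = dist[order[-1]] = np.inf      # boundary solutions always kept
            if span == 0:
                continue
            for i in range(1, n - 1):
                dist[order[i]] += (obj[order[i + 1], k] - obj[order[i - 1], k]) / span
        return dist

    front = [[0.1, 0.9], [0.3, 0.6], [0.5, 0.5], [0.9, 0.1]]
    print(crowding_distance(front))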

  3. ADOPT: A tool for automatic detection of tectonic plates at the surface of convection models

    Mallard, C.; Jacquet, B.; Coltice, N.


    Mantle convection models with plate-like behavior produce surface structures comparable to Earth's plate boundaries. However, analyzing those structures is a difficult task, since convection models produce, as on Earth, diffuse deformation and elusive plate boundaries. Therefore we present here and share a quantitative tool to identify plate boundaries and produce plate polygon layouts from results of numerical models of convection: Automatic Detection Of Plate Tectonics (ADOPT). This digital tool operates within the free open-source visualization software Paraview. It is based on image segmentation techniques to detect objects. The fundamental algorithm used in ADOPT is the watershed transform. We transform the output of convection models into a topographic map, the crest lines being the regions of deformation (plate boundaries) and the catchment basins being the plate interiors. We propose two generic protocols (the field and the distance methods) that we test against an independent visual detection of plate polygons. We show that ADOPT is effective at identifying the smaller plates and at closing plate polygons in areas where boundaries are diffuse or elusive. ADOPT allows the export of plate polygons in the standard OGR-GMT format for visualization, modification, and analysis under generic software such as GMT or GPlates.
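    A minimal sketch of the watershed idea behind ADOPT, assuming a synthetic deformation field in place of convection-model output: high-deformation crest lines act as plate boundaries and catchment basins as plate interiors. This is not the Paraview plugin itself.

```python
# Watershed segmentation of a "topography" built from a (synthetic) surface deformation field.
import numpy as np
from scipy import ndimage as ndi
from skimage.segmentation import watershed

rng = np.random.default_rng(0)
deformation = ndi.gaussian_filter(rng.random((200, 200)), sigma=10)  # stand-in strain-rate field
topography = deformation                                             # high deformation = ridges

# markers: one label per local minimum of the topography (candidate plate interiors)
minima = topography == ndi.minimum_filter(topography, size=25)
markers, n_plates = ndi.label(minima)

plates = watershed(topography, markers)          # plate polygons as labelled basins
print("detected plate polygons:", n_plates)
```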

  4. Fuzzy Time Series Forecasting Model Based on Automatic Clustering Techniques and Generalized Fuzzy Logical Relationship

    Wangren Qiu


    Full Text Available Among techniques for constructing high-order fuzzy time series models, there are three types: those based on advanced algorithms, on computational methods, and on grouping the fuzzy logical relationships. The last type is the easiest for decision makers to understand without knowledge of fuzzy set theory or advanced algorithms. To deal with forecasting problems, this paper presents novel high-order fuzzy time series models, denoted GTS(M, N), based on generalized fuzzy logical relationships and automatic clustering. The paper introduces the concept of the generalized fuzzy logical relationship and an operation for combining the generalized relationships. The procedure of the proposed model was then applied to forecasting enrollment data at the University of Alabama. To demonstrate its performance, the proposed approach was also applied to forecasting the Shanghai Stock Exchange Composite Index. Finally, the effects of the parameters M and N, the model order, and the principal fuzzy logical relationships considered on the forecasting results are also discussed.
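    A much simpler first-order (Chen-style) fuzzy time series sketch, showing the basic machinery that the GTS(M, N) model builds on - partitioning the universe of discourse, fuzzifying the series, grouping fuzzy logical relationships, and defuzzifying forecasts. The enrollment-like numbers and the interval count are invented.

```python
# First-order fuzzy time series forecast (simplified; not the GTS(M, N) model itself).
import numpy as np

series = np.array([13055, 13563, 13867, 14696, 15460, 15311, 15603, 15861, 16807, 16919])
lo, hi, n_intervals = series.min() - 500, series.max() + 500, 6
edges = np.linspace(lo, hi, n_intervals + 1)
mids = (edges[:-1] + edges[1:]) / 2

def fuzzify(x):
    # index of the interval (fuzzy set) containing x
    return int(np.clip(np.searchsorted(edges, x, side="right") - 1, 0, n_intervals - 1))

labels = [fuzzify(x) for x in series]

# fuzzy logical relationship groups: A_i -> {A_j that historically followed A_i}
groups = {}
for cur, nxt in zip(labels[:-1], labels[1:]):
    groups.setdefault(cur, set()).add(nxt)

def forecast(last_value):
    rhs = groups.get(fuzzify(last_value), {fuzzify(last_value)})
    return float(np.mean([mids[j] for j in rhs]))   # defuzzify as mean of group midpoints

print("next-step forecast:", round(forecast(series[-1]), 1))
```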

  5. Utilization of Expert Knowledge in a Multi-Objective Hydrologic Model Automatic Calibration Process

    Quebbeman, J.; Park, G. H.; Carney, S.; Day, G. N.; Micheletty, P. D.


    Spatially distributed continuous-simulation hydrologic models have a large number of parameters available for adjustment during the calibration process. Traditional manual calibration of such a modeling system is extremely laborious, which has historically motivated the use of automatic calibration procedures. With a large selection of model parameters, high degrees of objective-space fitness - measured with typical metrics such as Nash-Sutcliffe, Kling-Gupta, and RMSE - can easily be achieved using a range of evolutionary algorithms. A concern with this approach is the high degree of compensatory calibration, with many similarly performing solutions and yet grossly varying parameter sets. To help alleviate this concern, and to mimic manual calibration processes, expert knowledge is proposed for inclusion within the multi-objective functions, which evaluate the parameter decision space. As a result, Pareto solutions are identified with high degrees of fitness, but the parameter sets also maintain and utilize available expert knowledge, resulting in more realistic and consistent solutions. This process was tested using the joint SNOW-17 and Sacramento Soil Moisture Accounting (SAC-SMA) method within the Animas River basin in Colorado. Three different elevation zones, each with a range of parameters, resulted in over 35 model parameters being calibrated simultaneously. As a result, high degrees of fitness were achieved, in addition to the development of more realistic and consistent parameter sets such as those typically achieved during manual calibration procedures.


    S. J. Tang


    Full Text Available To achieve accurate navigation within building environments, it is critical to explore a feasible way of building the connectivity relationships among 3D geographical features, the so-called in-building topology network. Traditional topology construction approaches for indoor space are usually based on 2D maps or purely geometric models, which suffer from insufficient information. In particular, intelligent navigation for different applications depends mainly on the precise geometry and semantics of the navigation network. The problems caused by existing topology construction approaches can be alleviated by employing the IFC building model, which contains detailed semantic and geometric information. In this paper, we present a method that combines a straight medial axis transformation algorithm (S-MAT) with the IFC building model to reconstruct an indoor geometric topology network. The derived topology is aimed at facilitating decision making for different in-building navigation tasks. We describe a multi-step derivation process, including semantic cleaning, walkable feature extraction, multi-storey 2D mapping and S-MAT implementation, to automatically generate topology information from existing indoor building model data given in IFC.

  7. Multilevel spatial semantic model for urban house information extraction automatically from QuickBird imagery

    Guan, Li; Wang, Ping; Liu, Xiangnan


    Based on an introduction to the characteristics and construction flow of the spatial semantic model, the feature space and context of house information in high-resolution remote sensing imagery are analyzed, and a house semantic network model for QuickBird imagery is constructed. Furthermore, the accuracy and practicability of the spatial semantic model are verified by extracting house information automatically from QuickBird imagery, after extracting candidate semantic nodes from the image using the grey division method, the window threshold method and the Hough transformation. Sample results indicate that the type coherence, shape coherence and area coherence are 96.75%, 89.5% and 88%, respectively. The extraction of houses with rectangular roofs performs best, and that of houses with herringbone and polygonal roofs is acceptable. However, the extraction of houses with round roofs is not satisfactory, and the semantic model therefore requires further refinement to achieve higher applied value.

  8. A marked point process of rectangles and segments for automatic analysis of digital elevation models.

    Ortner, Mathias; Descombe, Xavier; Zerubia, Josiane


    This work presents a framework for automatic feature extraction from images using stochastic geometry. Features in images are modeled as realizations of a spatial point process of geometrical shapes. This framework allows the incorporation of a priori knowledge on the spatial repartition of features. More specifically, we present a model based on the superposition of a process of segments and a process of rectangles. The former is dedicated to the detection of linear networks of discontinuities, while the latter aims at segmenting homogeneous areas. An energy is defined, favoring connections of segments, alignments of rectangles, as well as a relevant interaction between both types of objects. The estimation is performed by minimizing the energy using a simulated annealing algorithm. The proposed model is applied to the analysis of Digital Elevation Models (DEMs). These images are raster data representing the altimetry of a dense urban area. We present results on real data provided by the IGN (French National Geographic Institute) consisting of low-quality DEMs of various types.
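    A generic simulated-annealing skeleton of the kind used to minimize the model's energy; the true energy over segment and rectangle configurations (and the birth/death moves of the point process) is far richer, so the state, perturbation and energy below are placeholders only.

```python
# Generic simulated annealing: Metropolis acceptance with a geometric cooling schedule.
import math
import random

def energy(state):
    # placeholder energy: in the real model this scores segment/rectangle interactions
    return sum((x - 3.0) ** 2 for x in state)

def perturb(state):
    s = list(state)
    i = random.randrange(len(s))
    s[i] += random.uniform(-0.5, 0.5)     # e.g., move/add/remove an object in the real model
    return s

state = [0.0] * 4
T, cooling = 1.0, 0.999
for _ in range(20000):
    candidate = perturb(state)
    dE = energy(candidate) - energy(state)
    if dE < 0 or random.random() < math.exp(-dE / T):   # Metropolis acceptance rule
        state = candidate
    T *= cooling                                        # slowly lower the temperature

print("final energy:", round(energy(state), 4))
```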

  9. A reference model for model-based design of critical infrastructure protection systems

    Shin, Young Don; Park, Cheol Young; Lee, Jae-Chon


    Today's battlefield environment is becoming more varied, as unconventional warfare activities such as terrorist attacks and cyber-attacks have noticeably increased. The damage caused by such unconventional warfare has also turned out to be serious, particularly when the targets are critical infrastructures constructed in support of banking and finance, transportation, power, information and communication, government, and so on. Critical infrastructures are usually interconnected and thus very vulnerable to attack. As such, ensuring the security of critical infrastructures is very important, and the concept of critical infrastructure protection (CIP) has emerged. Programs to realize CIP at the national level have taken the form of statutes in many countries. On the other hand, each individual critical infrastructure also needs to be protected. The objective of this paper is to study such an effort, which can be called a CIP system (CIPS). There is a variety of ways to design CIPS's. Instead of considering the design of each individual CIPS, a reference model-based approach is taken in this paper. The reference model represents the design of all the CIPS's that have many design elements in common. The development of the reference model is carried out using a variety of model diagrams. The modeling language used is the Systems Modeling Language (SysML), which was developed and is managed by the Object Management Group (OMG) and is a de facto standard. Using SysML, the structure and operational concept of the reference model are designed to fulfil the goal of CIPS's, resulting in block definition and activity diagrams. As a case study, the operational scenario of a nuclear power plant under terrorist attack is studied using the reference model. The effectiveness of the results is also analyzed using multiple analysis models. It is thus expected that the approach taken here has some merits

  10. Reference Models for Multi-Layer Tissue Structures


    simplification to develop cost-effective models of surface manipulation of multi-layer tissues. Deliverables. Specimen- (or subject) and region-specific... simplification to develop cost-effective models of surgical manipulation. Deliverables. Specimen-specific surrogate models of upper legs confirmed against collection software and in experimentation procedures were identified through these mock-up sessions and were addressed. Testing of the subjects

  11. A Deformable Template Model, with Special Reference to Elliptical Templates

    Hobolth, Asger; Pedersen, Jan; Jensen, Eva Bjørn Vedel


    This paper suggests a high-level continuous image model for planar star-shaped objects. Under this model, a planar object is a stochastic deformation of a star-shaped template. The residual process, describing the difference between the radius-vector function of the template and the object...

  12. Modelling Diverse Soil Attributes with Visible to Longwave Infrared Spectroscopy Using PLSR Employed by an Automatic Modelling Engine

    Veronika Kopačková


    Full Text Available The study tested a data mining engine (PARACUDA®) to predict various soil attributes (BC, CEC, BS, pH, Corg, Pb, Hg, As, Zn and Cu) using reflectance data acquired for both optical and thermal infrared regions. The engine was designed to utilize large data in parallel and automatic processing to build and process hundreds of diverse models in a unified manner while avoiding bias and deviations caused by the operator(s). The system is able to systematically assess the effect of diverse preprocessing techniques; additionally, it analyses other parameters, such as different spectral resolutions and spectral coverages, that affect soil properties. Accordingly, the system was used to extract models across both the optical and thermal infrared spectral regions, which hold significant chromophores. In total, 2880 models were evaluated, where each model was generated with a different preprocessing scheme of the input spectral data. The models were assessed using statistical parameters such as the coefficient of determination (R2), the square error of prediction (SEP), the relative percentage difference (RPD) and by physical explanation (spectral assignments). It was found that the smoothing procedure is the most beneficial preprocessing stage, especially when combined with spectral derivation (1st or 2nd derivatives). Automatically and without the need of an operator, the data mining engine enabled the best prediction models to be found from all the combinations tested. Furthermore, the data mining approach used in this study and its processing scheme proved to be efficient tools for gaining a better understanding of the geochemical properties of the samples studied (e.g., mineral associations).
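    A minimal sketch of one of the many PLSR models such an engine would build, assuming synthetic reflectance spectra and a first-derivative preprocessing step; the PARACUDA® engine itself evaluates thousands of preprocessing/model combinations automatically.

```python
# PLSR prediction of a soil attribute from (synthetic) reflectance spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_bands = 120, 200
spectra = np.cumsum(rng.normal(0, 0.01, (n_samples, n_bands)), axis=1)   # smooth fake spectra
soil_attr = spectra[:, 50] * 10 + rng.normal(0, 0.05, n_samples)         # e.g., a Corg proxy

X = np.gradient(spectra, axis=1)          # 1st-derivative preprocessing (one of many schemes)
pls = PLSRegression(n_components=8)
r2 = cross_val_score(pls, X, soil_attr, cv=5, scoring="r2")
print("cross-validated R2: %.2f" % r2.mean())
```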

  13. Automatic calibration of a global flow routing model in the Amazon basin using virtual SWOT data

    Rogel, P. Y.; Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Mognard, N. M.; Biancamaria, S.; Boone, A.


    The Surface Water and Ocean Topography (SWOT) wide swath altimetry mission will provide a global coverage of surface water elevation, which will be used to help correct water height and discharge predictions from hydrological models. Here, the aim is to investigate the use of virtually generated SWOT data to improve water height and discharge simulation through calibration of model parameters (such as river width, river depth and roughness coefficient). In this work, we use the HyMAP model to estimate water height and discharge on the Amazon catchment area. Before reaching the river network, surface and subsurface runoff are delayed by a set of linear and independent reservoirs. The flow routing is performed by the kinematic wave equation. Since the SWOT mission has not yet been launched, virtual SWOT data are generated with a set of true parameters for HyMAP as well as measurement errors from a SWOT data simulator (i.e. a twin experiment approach is implemented). These virtual observations are used to calibrate key parameters of HyMAP through the minimization of a cost function defining the difference between the simulated and observed water heights over a one-year simulation period. The automatic calibration procedure is achieved using the MOCOM-UA multicriteria global optimization algorithm as well as the local optimization algorithm BC-DFO, which is considered a computationally cheaper alternative. First, to reduce the computational cost of the calibration procedure, each spatially distributed parameter (Manning coefficient, river width and river depth) is corrupted through multiplication by a spatially uniform factor that is the only factor optimized. In this case, it is shown that, when the measurement errors are small, the true water heights and discharges are easily retrieved. Because of equifinality, the true parameters are not always identified. A spatial correction of the model parameters is then investigated and the domain is divided into 4 regions

  14. Dynamic Data Driven Applications Systems (DDDAS) modeling for automatic target recognition

    Blasch, Erik; Seetharaman, Guna; Darema, Frederica


    The Dynamic Data Driven Applications System (DDDAS) concept uses applications modeling, mathematical algorithms, and measurement systems to work with dynamic systems. A dynamic system such as Automatic Target Recognition (ATR) is subject to sensor, target, and environment variations over space and time. We use the DDDAS concept to develop an ATR methodology for multiscale-multimodal analysis that seeks to integrate sensing, processing, and exploitation. In the analysis, we use computer vision techniques to explore the capabilities and analogies that DDDAS has with information fusion. The key attribute of coordination is the use of sensor management as a data-driven technique to improve performance. In addition, DDDAS supports the need for modeling, from which uncertainty and variations are used within the dynamic models for advanced performance. As an example, we use a Wide-Area Motion Imagery (WAMI) application to draw parallels and contrasts between ATR and DDDAS systems that warrant an integrated perspective. This elementary work is aimed at triggering a sequence of deeper, insightful research towards exploiting sparsely sampled piecewise dense WAMI measurements - an application where the challenges of big data with regard to mathematical fusion relationships and high-performance computations remain significant and will persist. Dynamic data-driven adaptive computations are required to effectively handle the challenges of exponentially increasing data volume for advanced information fusion systems solutions such as simultaneous target tracking and ATR.


    G. Jóźków


    Full Text Available The ideal mapping technology for transmission line inspection is airborne LiDAR executed from helicopter platforms, which allows for full 3D geometry extraction in a highly automated manner. Large-scale aerial images can also be used for this purpose; however, automation is possible only for finding transmission line positions (2D geometry), and the sag needs to be estimated manually. For longer lines, these techniques are less expensive than ground surveys, yet they are still expensive. UAS technology has the potential to reduce these costs, especially if inexpensive platforms with consumer-grade cameras are used. This study investigates the potential of using high-resolution UAS imagery for automatic modeling of transmission line 3D geometry. The key point of this experiment was to apply dense matching algorithms to appropriately acquired UAS images so that points are also created on the wires. This allowed the 3D geometry of transmission lines to be modeled similarly to LiDAR-acquired point clouds. Results showed that transmission line modeling is possible with high internal accuracy in both the horizontal and vertical directions, even when the wires were represented by a partial (sparse) point cloud.

  16. Modelling the adoption of automatic milking systems in Noord-Holland

    Matteo Floridi


    Full Text Available Innovation and new technology adoption represent two central elements of the business and industry development process in agriculture. One of the most relevant innovations in dairy farms is the robotisation of the milking process through the adoption of Automatic Milking Systems (AMS). The purpose of this paper is to assess the impact of selected Common Agricultural Policy measures on the adoption of AMS in dairy farms. The model developed is a dynamic farm-household model that is able to simulate the adoption of AMS taking into account the allocation of productive factors between on-farm and off-farm activities. The model simulates the decision to replace a traditional milking system with AMS using a Real Options approach that allows farmers to choose the optimal timing of investments. Results show that the adoption of AMS, and the timing of such a decision, is strongly affected by policy uncertainty and market conditions. The effect of this uncertainty is to postpone the decision to adopt the new technology until farmers have gathered enough information to reduce the negative effects of technological lock-in. AMS adoption results in an increase in farm size and herd size due to the reduction in the labour required for milking operations.

  17. Automatic Sex Determination of Skulls Based on a Statistical Shape Model

    Li Luo


    Full Text Available Sex determination from skeletons is an important research subject in forensic medicine. Previous skeletal sex assessments have relied on subjective visual analysis by anthropologists or on metric analysis of sexually dimorphic features. In this work, we present an automatic sex determination method for 3D digital skulls, in which a statistical shape model for skulls is constructed that projects the high-dimensional skull data into a low-dimensional shape space, and Fisher discriminant analysis is used to classify skulls in the shape space. This method combines the advantages of metric and morphological methods. It is easy to use without professional qualification or tedious manual measurement. With a group of Chinese skulls including 127 males and 81 females, we chose 92 males and 58 females to establish the discriminant model and validated the model with the remaining skulls. The correct rate is 95.7% and 91.4% for females and males, respectively. A leave-one-out test also shows that the method has a high accuracy.
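    A minimal sketch of the pipeline described above, assuming synthetic landmark vectors: PCA stands in for the statistical shape model that projects skull data into a low-dimensional shape space, and Fisher (linear) discriminant analysis classifies sex in that space.

```python
# Shape-space projection (PCA) followed by Fisher discriminant classification.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_m, n_f, n_coords = 92, 58, 300                      # training set sizes taken from the abstract
male = rng.normal(0.0, 1.0, (n_m, n_coords)) + 0.3    # synthetic "male" landmark vectors
female = rng.normal(0.0, 1.0, (n_f, n_coords)) - 0.3  # synthetic "female" landmark vectors
X = np.vstack([male, female])
y = np.array([1] * n_m + [0] * n_f)

shape_space = PCA(n_components=20).fit(X)             # low-dimensional shape space
Z = shape_space.transform(X)
clf = LinearDiscriminantAnalysis().fit(Z, y)          # Fisher discriminant in shape space
print("training accuracy:", clf.score(Z, y))
```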

  18. A computer program to automatically generate state equations and macro-models. [for network analysis and design

    Garrett, S. J.; Bowers, J. C.; Oreilly, J. E., Jr.


    A computer program, PROSE, that produces nonlinear state equations from a simple topological description of an electrical or mechanical network is described. Unnecessary states are also automatically eliminated, so that a simplified terminal circuit model is obtained. The program also prints out the eigenvalues of a linearized system and the sensitivities of the eigenvalue of largest magnitude.

  19. Automatic 3D modelling of metal frame connections from LiDAR data for structural engineering purposes

    Cabaleiro, M.; Riveiro, B.; Arias, P.; Caamaño, J. C.; Vilán, J. A.


    The automatic generation of 3D as-built models from LiDAR data is a topic where significant progress has been made in recent years. This paper describes a new method for the detection and automatic 3D modelling of frame connections and of the profiles comprising a metal frame from LiDAR data. The method has been developed by creating 2.5D density images for subsequent processing with the Hough transform. The structural connections can be automatically identified after selecting areas in the point cloud. As a result, the coordinates of the connection centre, the composition (profiles, size and shape of the haunch) and the direction of the profiles are extracted. A standard file is generated with the data obtained from the geometric and semantic characterisation of the connections. The 3D models of the connections and metal frames, which are suitable for structural engineering software, are generated automatically from this file. The algorithm presented in this paper has been tested under laboratory conditions and also with several industrial portal frames, achieving promising results. Finally, 3D models were generated and structural calculations were performed.
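    A minimal sketch of the 2.5D density-image plus Hough-transform step, assuming a synthetic point cloud: a slice of the cloud is rasterized into a density image and the dominant straight members are detected with the Hough transform. This is not the authors' algorithm, only the underlying idea.

```python
# Rasterize a point cloud slice into a density image and find straight members via Hough.
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

rng = np.random.default_rng(0)
# synthetic cloud: two perpendicular members of a frame, with a little noise
beam = np.column_stack([np.linspace(10, 90, 400),
                        np.full(400, 50.0) + rng.normal(0, 0.3, 400)])
column = np.column_stack([np.full(400, 50.0) + rng.normal(0, 0.3, 400),
                          np.linspace(10, 90, 400)])
cloud = np.vstack([beam, column])

density, _, _ = np.histogram2d(cloud[:, 0], cloud[:, 1],
                               bins=100, range=[[0, 100], [0, 100]])
h, theta, d = hough_line(density > 0)                     # Hough accumulator of the binary image
_, angles, dists = hough_line_peaks(h, theta, d, num_peaks=2)
print("detected member orientations (deg):", np.rad2deg(angles))
```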

  20. Automatic generation of a JET 3D neutronics model from CAD geometry data for Monte Carlo calculations

    Tsige-Tamirat, H. [Association FZK-Euratom, Forschungszentrum Karlsruhe, P.O. Box 3640, 76021 Karlsruhe (Germany)]; Fischer, U. [Association FZK-Euratom, Forschungszentrum Karlsruhe, P.O. Box 3640, 76021 Karlsruhe (Germany)]; Carman, P.P. [Euratom/UKAEA Fusion Association, Culham Science Center, Abingdon, Oxfordshire OX14 3DB (United Kingdom)]; Loughlin, M. [Euratom/UKAEA Fusion Association, Culham Science Center, Abingdon, Oxfordshire OX14 3DB (United Kingdom)]


    The paper describes the automatic generation of a JET 3D neutronics model from data of computer aided design (CAD) system for Monte Carlo (MC) calculations. The applied method converts suitable CAD data into a representation appropriate for MC codes. The converted geometry is fully equivalent to the CAD geometry.

  1. Improving CCTA-based lesions' hemodynamic significance assessment by accounting for partial volume modeling in automatic coronary lumen segmentation.

    Freiman, Moti; Nickisch, Hannes; Prevrhal, Sven; Schmitt, Holger; Vembar, Mani; Maurovich-Horvat, Pál; Donnelly, Patrick; Goshen, Liran


    The goal of this study was to assess the potential added benefit of accounting for partial volume effects (PVE) in an automatic coronary lumen segmentation algorithm that is used to determine the hemodynamic significance of a coronary artery stenosis from coronary computed tomography angiography (CCTA). Two sets of data were used in our work: (a) multivendor CCTA datasets of 18 subjects from the MICCAI 2012 challenge with automatically generated centerlines and 3 reference segmentations of 78 coronary segments and (b) additional CCTA datasets of 97 subjects with 132 coronary lesions that had invasive reference standard FFR measurements. We extracted the coronary artery centerlines for the 97 datasets with an automated software program, followed by manual correction where required. An automatic machine-learning-based algorithm segmented the coronary tree with and without accounting for the PVE. We obtained CCTA-based FFR measurements using a flow simulation in the coronary trees that were generated by the automatic algorithm with and without accounting for PVE. We assessed the potential added value of PVE integration as a part of the automatic coronary lumen segmentation algorithm by means of segmentation accuracy, using the MICCAI 2012 challenge framework, and by means of flow simulation overall accuracy, sensitivity, specificity, negative and positive predictive values, and the receiver operating characteristic (ROC) area under the curve. We also evaluated the potential benefit of accounting for PVE in automatic segmentation for flow simulation for lesions that were diagnosed as obstructive based on CCTA, which could have indicated a need for an invasive exam and revascularization. Our segmentation algorithm improves the maximal surface distance error by ~39% compared to a previously published method on the 18 datasets from the MICCAI 2012 challenge, with comparable Dice and mean surface distance. Results with and without accounting for PVE were comparable. In contrast

  2. Piloted Simulation Evaluation of a Model-Predictive Automatic Recovery System to Prevent Vehicle Loss of Control on Approach

    Litt, Jonathan S.; Liu, Yuan; Sowers, Thomas S.; Owen, A. Karl; Guo, Ten-Huei


    This paper describes a model-predictive automatic recovery system for aircraft on the verge of a loss-of-control situation. The system determines when it must intervene to prevent an imminent accident, resulting from a poor approach. It estimates the altitude loss that would result from a go-around maneuver at the current flight condition. If the loss is projected to violate a minimum altitude threshold, the maneuver is automatically triggered. The system deactivates to allow landing once several criteria are met. Piloted flight simulator evaluation showed the system to provide effective envelope protection during extremely unsafe landing attempts. The results demonstrate how flight and propulsion control can be integrated to recover control of the vehicle automatically and prevent a potential catastrophe.

  3. Calibration of the Hydrological Simulation Program Fortran (HSPF) model using automatic calibration and geographical information systems

    Al-Abed, N. A.; Whiteley, H. R.


    Calibrating a comprehensive, multi-parameter conceptual hydrological model, such as the Hydrological Simulation Program Fortran model, is a major challenge. This paper describes calibration procedures for the water-quantity parameters of HSPF version 10.11 using the automatic-calibration parameter estimator model coupled with a geographical information system (GIS) approach for spatially averaged properties. The study area was the Grand River watershed, located in southern Ontario, Canada, between 79°30′ and 80°57′ W longitude and 42°51′ and 44°31′ N latitude. The drainage area is 6965 km2. Calibration efforts were directed at those model parameters that produced large changes in model response during sensitivity tests run prior to undertaking calibration. A GIS was used extensively in this study. It was first used in the watershed segmentation process. During calibration, the GIS data were used to establish realistic starting values for the surface and subsurface zone parameters LZSN, UZSN, COVER, and INFILT, and physically reasonable ratios of these parameters among watersheds were preserved during calibration, with the ratios based on the known properties of the subwatersheds determined using GIS. This calibration procedure produced very satisfactory results; the percentage difference between the simulated and the measured yearly discharge ranged from 4 to 16%, which is classified as good to very good calibration. The average simulated daily discharge for the watershed outlet at Brantford for the years 1981-85 was 67 m3 s-1 and the average measured discharge at Brantford was 70 m3 s-1. The coupling of a GIS with automatic calibration produced a realistic and accurate calibration for the HSPF model with much less effort and subjectivity than would be required for unassisted calibration.

  4. A reference model of an instrument for quality measurement of semantic IS standards

    Folmer, E.J.A.; Oude Luttighuis, P.; Hillegersberg, J. van


    This study describes the design of a reference model for an instrument to measure quality of semantic Information System (IS) standards. This design satisfies requirements gathered among potential users, in a previous study. The reference model features three layers: concerned with quality, semantic

  5. The importance of the reference populations for coherent mortality forecasting models

    Kjærgaard, Søren; Canudas-Romo, Vladimir; Vaupel, James W.

    This paper examines multi-population mortality models, aiming to find the optimal set of countries to use as the reference population and to analyse the importance of the selection of countries. The two multi-population mortality models used are the Li-Lee model and the Double-Gap life expectancy forecasting model. The reference population is calculated taking into account all possible combinations of a set of 20 industrialized countries. The different reference population possibilities are compared by their forecast performance. The results show that the selection of countries for multi-population mortality models has a significant effect...

  6. Support Vector Machine Model for Automatic Detection and Classification of Seismic Events

    Barros, Vesna; Barros, Lucas


    The automated processing of multiple seismic signals to detect, localize and classify seismic events is a central tool in both natural hazards monitoring and nuclear treaty verification. However, false detections and missed detections caused by station noise and incorrect classification of arrivals are still an issue, and events are often unclassified or poorly classified. Machine learning techniques can therefore be used in automatic processing to classify the huge database of seismic recordings and provide more confidence in the final output. Applied in the context of the International Monitoring System (IMS) - a global sensor network developed for the Comprehensive Nuclear-Test-Ban Treaty (CTBT) - we propose a fully automatic method for seismic event detection and classification based on a supervised pattern recognition technique called the Support Vector Machine (SVM). According to Kortström et al. (2015), the advantages of using SVM include the ability to handle a large number of features and its effectiveness in high-dimensional spaces. Our objective is to detect seismic events from one IMS seismic station located in an area of high seismicity and mining activity and classify them as earthquakes or quarry blasts. The aim is to create a flexible and easily adjustable SVM method that can be applied to different regions and datasets. Taken a step further, accurate results for seismic stations could lead to a modification of the model and its parameters to make it applicable to other waveform technologies used to monitor nuclear explosions, such as infrasound and hydroacoustic waveforms. As an authorized user, we have direct access to all IMS data and bulletins through a secure signatory account. A set of significant seismic waveforms containing different types of events (e.g. earthquakes, quarry blasts) and noise is being analysed to train the model and learn the typical pattern of the signal from these events. Moreover, comparing the performance of the support
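    A minimal sketch of the SVM classification step, assuming synthetic feature vectors (in practice these would be spectral and amplitude features extracted from IMS waveforms) labelled as earthquake or quarry blast.

```python
# SVM classification of event feature vectors: earthquake (0) vs. quarry blast (1).
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_events, n_features = 400, 12
X = rng.normal(size=(n_events, n_features))                              # synthetic features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, n_events) > 0).astype(int)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```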

  7. Business process modelling in demand-driven agri-food supply chains : a reference framework

    Verdouw, C.N.


    Keywords: Business process models; Supply chain management; Information systems; Reference information models; Market orientation; Mass customisation; Configuration; Coordination; Control; SCOR; Pot plants; Fruit industry. Abstract: The increasing volatility and diversity of demand urge agri-food


    Sowmiya Murthy


    Full Text Available We propose a secure cloud storage model that addresses security and storage issues for cloud computing environments. Security is achieved by anonymous authentication, which ensures that cloud users remain anonymous while being duly authenticated. For achieving this goal, we propose a digital signature based authentication scheme with a decentralized architecture for distributed key management with multiple Key Distribution Centers. A homomorphic encryption scheme using the Paillier public key cryptosystem is used for encrypting the data that is stored in the cloud. We incorporate a query-driven approach for validating the access policies defined by an individual user for his/her data, i.e. access is granted to a requester only if his credentials match the hidden access policy. Further, since data is vulnerable to losses or damages due to the vagaries of the network, we propose an automatic retrieval mechanism where lost data is recovered by data replication and file replacement with a string matching algorithm. We describe a prototype implementation of our proposed model.
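    A toy, textbook Paillier example illustrating the additively homomorphic property such a storage model relies on; the key size is deliberately tiny and insecure, and this is not the paper's implementation.

```python
# Textbook Paillier with toy primes: multiplying ciphertexts adds the underlying plaintexts.
import random
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

p, q = 293, 433                              # toy primes; real keys use large random primes
n, n2 = p * q, (p * q) ** 2
g = n + 1                                    # standard choice of generator
lam = lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)  # inverse of L(g^lam mod n^2), L(u) = (u - 1) / n

def encrypt(m):
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return ((pow(c, lam, n2) - 1) // n * mu) % n

c1, c2 = encrypt(17), encrypt(25)
c_sum = (c1 * c2) % n2                       # homomorphic addition of 17 and 25
print(decrypt(c_sum))                        # -> 42
```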

  9. Gene prediction using the Self-Organizing Map: automatic generation of multiple gene models

    Smith Terry J


    Full Text Available Background: Many current gene prediction methods use only one model to represent protein-coding regions in a genome, and so are less likely to predict the location of genes that have an atypical sequence composition. It is likely that future improvements in gene finding will involve the development of methods that can adequately deal with intra-genomic compositional variation. Results: This work explores a new approach to gene prediction, based on the Self-Organizing Map, which has the ability to automatically identify multiple gene models within a genome. The current implementation, named RescueNet, uses relative synonymous codon usage as the indicator of protein-coding potential. Conclusions: While its raw accuracy rate can be less than other methods, RescueNet consistently identifies some genes that other methods do not, and should therefore be of interest to gene-prediction software developers and genome annotation teams alike. RescueNet is recommended for use in conjunction with, or as a complement to, other gene prediction methods.

  10. Artificial neural networks for automatic modelling of the pectus excavatum corrective prosthesis

    Rodrigues, Pedro L.; Moreira, António H. J.; Rodrigues, Nuno F.; Pinho, ACM; Fonseca, Jaime C.; Correia-Pinto, Jorge; Vilaça, João. L.


    Pectus excavatum is the most common deformity of the thorax, and pre-operative diagnosis usually includes a Computed Tomography (CT) examination. Aiming at eliminating the high CT radiation exposure, this work presents a new methodology for the replacement of CT by a laser scanner (radiation-free) in the treatment of pectus excavatum using a personally modeled prosthesis. The complete elimination of CT involves the determination of the ribs' external outline, at the maximum sternum depression point for prosthesis placement, based on chest wall skin surface information acquired by a laser scanner. The developed solution resorts to artificial neural networks trained with data vectors from 165 patients. Scaled Conjugate Gradient, Levenberg-Marquardt, Resilient Backpropagation and One Step Secant gradient learning algorithms were used. The training procedure was performed using the soft tissue thicknesses, determined using image processing techniques that automatically segment the skin and rib cage. The developed solution was then used to determine the ribs outline in scanner data from 20 patients. Tests revealed that ribs position can be estimated with an average error of about 6.82 ± 5.7 mm for the left and right side of the patient. Such an error range is well below current prosthesis manual modeling (11.7 ± 4.01 mm) even without CT imaging, indicating a considerable step forward towards CT replacement by a 3D scanner for prosthesis personalization.

  11. Storm Water Management Model Reference Manual Volume I, Hydrology

    SWMM is a dynamic rainfall-runoff simulation model used for single event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. The runoff component of SWMM operates on a collection of subcatchment areas that receive precipitation and gene...

  12. Toolkit for Conceptual Modeling (TCM): User's Guide and Reference

    Dehne, F.; Wieringa, R.J.


    The Toolkit for Conceptual Modeling (TCM) is a suite of graphical editors for a number of graphical notation systems that are used in software specification methods. The notations can be used to represent the conceptual structure of the software - hence the name of the suite. This manual describes

  13. Toolkit for Conceptual Modeling (TCM): User's Guide and Reference

    Dehne, F.; Wieringa, Roelf J.

    The Toolkit for Conceptual Modeling (TCM) is a suite of graphical editors for a number of graphical notation systems that are used in software specification methods. The notations can be used to represent the conceptual structure of the software - hence the name of the suite. This manual describes

  14. Selection of References in Wind Turbine Model Predictive Control Design

    Odgaard, Peter Fogh; Hovgaard, Tobias


    Lowering the cost of energy is one of the major focus areas in the wind turbine industry. Recent research has indicated that wind turbine controllers based on model predictive control methods can be useful in obtaining this objective. A number of design considerations have to be made when designi...

  15. Organisational models in agriculture with special reference to small farmers

    Zakić Nebojša


    Full Text Available Agricultural value chains can be understood as the systems of people, organizations and activities needed to create, process and deliver agricultural products from producers to consumers. Over time, and due to major changes in their environment, agricultural value chains have become highly integrated and complex. Small farmers can prosper by joining modern, higher-level agricultural value chains, but there are numerous obstacles as well. The work presents a typology of organizational models for agricultural production consisting of models organised by producers, by agribusiness companies (processors, retail chains and intermediaries), by facilitators (governments, non-governmental organisations), and completely integrated models established by some big companies. None of these models provides ideal solutions from the perspective of small producers. However, institutions such as cooperatives and small farmers' organisations represent important mechanisms for including small producers in modern value chains and for realizing cooperation with agribusiness companies and other important players. This is also important for decision-makers and governmental bodies, which should create a suitable environment and provide support so that small farmers and their organisations can integrate into modern value chains successfully.

  16. Storm Water Management Model Reference Manual Volume II – Hydraulics

    SWMM is a dynamic rainfall-runoff simulation model used for single event or long-term (continuous) simulation of runoff quantity and quality from primarily urban areas. The runoff component of SWMM operates on a collection of subcatchment areas that receive precipitation and gene...

  17. An Update to the NASA Reference Solar Sail Thrust Model

    Heaton, Andrew F.; Artusio-Glimpse, Alexandra B.


    An optical model of solar sail material originally derived at JPL in 1978 has since served as the de facto standard for NASA and other solar sail researchers. The optical model includes terms for specular and diffuse reflection, thermal emission, and non-Lambertian diffuse reflection. The standard coefficients for these terms are based on tests of 2.5 micrometer Kapton sail material coated with 100 nm of aluminum on the front side and chromium on the back side. The original derivation of these coefficients was documented in an internal JPL technical memorandum that is no longer available. Additionally more recent optical testing has taken place and different materials have been used or are under consideration by various researchers for solar sails. Here, where possible, we re-derive the optical coefficients from the 1978 model and update them to accommodate newer test results and sail material. The source of the commonly used value for the front side non-Lambertian coefficient is not clear, so we investigate that coefficient in detail. Although this research is primarily designed to support the upcoming NASA NEA Scout and Lunar Flashlight solar sail missions, the results are also of interest to the wider solar sail community.

  18. Implementing an inclusive staffing model for today's reference services a practical guide for librarians

    Nims, Julia K; Stevens, Robert


    Reference service remains a core function of modern libraries. However, how and where we provide assistance has evolved with changing technologies and the shifting habits and preferences of our users. One way libraries can provide the on-demand, in-person assistance while managing and developing new services and resources that will benefit current and future users is to reconsider how their reference points and services are staffed and adopt a staff-based reference model. In Implementing an Inclusive Staffing Model for Today's Reference Services, Nims, Storm, and Stevens describe step-by-step

  19. Parabolic Trough Reference Plant for Cost Modeling with the Solar Advisor Model (SAM)

    Turchi, C.


    This report describes a component-based cost model developed for parabolic trough solar power plants. The cost model was developed by the National Renewable Energy Laboratory (NREL), assisted by WorleyParsons Group Inc., for use with NREL's Solar Advisor Model (SAM). This report includes an overview and explanation of the model, two summary contract reports from WorleyParsons, and an Excel spreadsheet for use with SAM. The cost study uses a reference plant with a 100-MWe capacity and six hours of thermal energy storage. Wet-cooling and dry-cooling configurations are considered. The spreadsheet includes capital and operating cost by component to allow users to estimate the impact of changes in component costs.

  20. Development of skeleton model for use in polygonal-mesh-type ICRP reference phantoms

    Nguyen, Thang Tat; Yeom, Yeon Soo; Han, Min Cheol; Wang, Zhao Jun; Kim, Han Sung; Kim, Chan Hyeong [Dept.of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of)


    In order to overcome the limitations, we are currently developing the polygonal-mesh versions of the ICRP reference phantoms by converting the ICRP reference voxel phantoms to polygonal-mesh format. As a part of the ICRP reference phantom conversion project, the present study completed the conversion of skeleton, which is a very complex framework of the body, while addressing some critical problems of the skeleton of the ICRP reference voxel phantoms. The converted skeleton models were also evaluated by comparing dose values of RBM and endosteum with those of the ICRP reference voxel phantoms. As a part of the ICRP reference phantom conversion project, the present study successfully completed skeleton conversion of the ICRP reference adult male and female phantoms to polygonal-mesh format. A comprehensive study of dosimetric effects by the skeleton conversion will be performed in the future.


    Dmitry N. Bolotov


    Full Text Available The article deals with the main form of international payment - the bank transfer - and with the fees charged by correspondent banks for transiting funds through their correspondent accounts. In order to optimize the cost of international money transfers, there is a need to develop models and a toolkit for automatically computing the total commissions in international interbank settlements. Accordingly, an approach to constructing such a model was developed based on graph theory.
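    A hedged sketch of the graph-theoretic idea, with invented bank names and fees: correspondent banks are nodes, transfer legs are edges weighted by the commission charged, and the cheapest route minimizes the total commission. The abstract does not describe the authors' exact formulation; this is only one plausible reading.

```python
# Cheapest correspondent-banking route as a shortest path over fee-weighted edges.
import networkx as nx

G = nx.DiGraph()
G.add_weighted_edges_from([
    ("SenderBank", "CorrA", 12.0),          # hypothetical banks and fees
    ("SenderBank", "CorrB", 8.0),
    ("CorrA", "BeneficiaryBank", 5.0),
    ("CorrB", "CorrC", 6.0),
    ("CorrC", "BeneficiaryBank", 4.0),
], weight="fee")

route = nx.shortest_path(G, "SenderBank", "BeneficiaryBank", weight="fee")
total_fee = nx.shortest_path_length(G, "SenderBank", "BeneficiaryBank", weight="fee")
print(route, "-> total commission:", total_fee)
```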

  2. A Computer Model of the Evaporator for the Development of an Automatic Control System

    Kozin, K. A.; Efremov, E. V.; Kabrysheva, O. P.; Grachev, M. I.


    For the implementation of a closed nuclear fuel cycle it is necessary to carry out a series of experimental studies to justify the choice of technology. In addition, the operation of a radiochemical plant is impossible without high-quality automatic control systems. In spent nuclear fuel reprocessing technologies, the method of continuous evaporation is often used for solution conditioning. The effectiveness of the continuous technological process will therefore depend on the operation of the evaporation equipment, whose essential difference from similar devices is its small size. In this paper, mathematical simulation is applied to investigate a single-effect evaporator with an external heating chamber. Detailed modelling is quite difficult because the phase equilibrium dynamics of the evaporation process is not described; moreover, there is a relationship with the other process units. The results proved that the study subject is a MIMO plant, nonlinear over separate control channels and not self-balancing. Adequacy was tested using experimental data obtained at the laboratory evaporation unit.

  3. Automatic weight determination in nonlinear model predictive control of wind turbines using swarm optimization technique

    Tofighi, Elham; Mahdizadeh, Amin


    This paper addresses the problem of automatic tuning of weighting coefficients for the nonlinear model predictive control (NMPC) of wind turbines. The choice of weighting coefficients in NMPC is critical due to their explicit impact on the efficiency of wind turbine control. Classically, these weights are selected based on an intuitive understanding of the system dynamics and control objectives. The empirical methods, however, may not yield optimal solutions, especially when the number of parameters to be tuned and the nonlinearity of the system increase. In this paper, the problem of determining weighting coefficients for the cost function of the NMPC controller is formulated as a two-level optimization process in which the upper-level PSO-based optimization computes the weighting coefficients for the lower-level NMPC controller, which generates control signals for the wind turbine. The proposed method is implemented to tune the weighting coefficients of an NMPC controller driving the NREL 5-MW wind turbine. The results are compared with similar simulations for a manually tuned NMPC controller. The comparison verifies the improved performance of the controller when the weights are computed with the PSO-based technique.
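    A minimal particle-swarm sketch of the upper-level weight search, with a toy stand-in for the lower-level closed-loop NMPC simulation; the pretended optimum and all coefficients are illustrative assumptions.

```python
# Upper-level PSO: each particle is a vector of controller weighting coefficients whose
# fitness would normally come from running the lower-level NMPC-controlled simulation.
import numpy as np

rng = np.random.default_rng(0)

def closed_loop_cost(weights):
    # stand-in for the lower-level closed-loop turbine simulation with these weights
    w_power, w_fatigue = weights
    return (w_power - 3.0) ** 2 + (w_fatigue - 0.7) ** 2   # pretend optimum at (3.0, 0.7)

n_particles, dim, iters = 15, 2, 60
pos = rng.uniform(0, 5, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([closed_loop_cost(p) for p in pos])
gbest = pbest[pbest_val.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([closed_loop_cost(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmin()].copy()

print("tuned weights:", np.round(gbest, 3))
```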

  4. Development of polygonal-surface version of ICRP reference phantoms: Lymphatic node modeling

    Thang, Ngyen Tat; Yeom, Yeon Soo; Han, Min Cheol; Kim, Chan Hyeong [Hanyang University, Seoul (Korea, Republic of)


    Among the radiosensitive organs/tissues considered in ICRP Publication 103, lymphatic nodes are numerous small tissues widely distributed in the ICRP reference phantoms. It is difficult to directly convert the lymphatic nodes of the ICRP reference voxel phantoms to polygonal surfaces. Furthermore, in the ICRP reference phantoms lymphatic nodes were manually drawn only in six lymphatic node regions, and the reference number of lymphatic nodes reported in ICRP Publication 89 was not considered. To address the aforementioned limitations, the present study developed a new lymphatic node modeling method for the polygonal-surface version of the ICRP reference phantoms. By using the developed method, lymphatic nodes were modelled in the preliminary version of the ICRP male polygonal-surface phantom. Then, lymphatic node dose values were calculated and compared with those of the ICRP reference male voxel phantom to validate the developed modeling method. The present study developed the new lymphatic node modeling method and successfully modeled lymphatic nodes in the preliminary version of the ICRP male polygonal-surface phantom. The results demonstrate that the developed modeling method can be used to model lymphatic nodes in polygonal-surface versions of the ICRP reference phantoms.

  5. H∞ with reference model for active suspension system: an LMI approach

    Abdellahi, E.; Mehdi, D. [LAII ESIP, Poitiers (France); Ramirez Mendoza, R. [LAII ESIP, Poitiers (France)]|[Dept. of Mechatronic and Automation, ITESM Garza Sada, Monterrey (Mexico); M' saad, M. [LAII ESIP, Poitiers (France)]|[LAP-ISMRA, Caen (France)


    This paper deals with active suspension systems. A two-degree-of-freedom quarter-car model is used to implement an LMI-based H∞ approach with a reference model. The results are compared with those obtained with H∞ control without a reference model. The main purpose of the controller is to ensure vibration isolation between different parts of the system. Careful tuning of the weighting functions is necessary for the H∞ design method. (orig.)

  6. Automatic sequences

    Haeseler, Friedrich


    Automatic sequences are sequences produced by a finite automaton. Although they are not random, they may appear random. They are complicated in the sense of not being ultimately periodic, and it may not be easy to name the rule by which a given sequence is generated, yet such a rule always exists. The concept of automatic sequences has applications in algebra, number theory, finite automata and formal languages, and combinatorics on words. The text deals with different aspects of automatic sequences, in particular: a general introduction to automatic sequences; the basic (combinatorial) properties of automatic sequences; the algebraic approach to automatic sequences; and geometric objects related to automatic sequences.
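    As a concrete illustration (not taken from the book itself), the Thue-Morse sequence is a classic 2-automatic sequence: the n-th term is the output of a two-state automaton reading the binary digits of n, so a finite rule generates a sequence that is nevertheless not ultimately periodic.

```python
# Thue-Morse sequence: output of a 2-state automaton reading the binary digits of n
# (equivalently, the parity of the number of 1-bits in n).
def thue_morse(n: int) -> int:
    state = 0
    for bit in bin(n)[2:]:          # feed the binary digits of n to the automaton
        if bit == "1":
            state ^= 1              # toggle the state on digit 1
    return state

print([thue_morse(n) for n in range(16)])
# -> [0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]
```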

  7. A Reference Model for Software and System Inspections. White Paper

    He, Lulu; Shull, Forrest


    Software Quality Assurance (SQA) is an important component of the software development process. SQA processes provide assurance that the software products and processes in the project life cycle conform to their specified requirements by planning, enacting, and performing a set of activities to provide adequate confidence that quality is being built into the software. Typical techniques include: (1) testing, (2) simulation, (3) model checking, (4) symbolic execution, (5) management reviews, (6) technical reviews, (7) inspections, (8) walk-throughs, (9) audits, (10) analysis (complexity analysis, control flow analysis, algorithmic analysis), and (11) formal methods. Our work over the last few years has resulted in substantial knowledge about SQA techniques, especially in the areas of technical reviews and inspections. But can we apply the same QA techniques to the system development process? If yes, what kind of tailoring do we need before applying them in the system engineering context? If not, what types of QA techniques are actually used at system level? And is there any room for improvement? After a brief examination of the system engineering literature (especially focused on NASA and DoD guidance) we found that: (1) the system and software development processes interact with each other at different phases throughout the development life cycle; (2) reviews are emphasized in both system and software development - for some reviews (e.g. SRR, PDR, CDR), there are both system and software versions; (3) analysis techniques are emphasized (e.g. Fault Tree Analysis, Preliminary Hazard Analysis) and some details are given about how to apply them; (4) reviews are expected to use the outputs of the analysis techniques; in other words, these particular analyses are usually conducted in preparation for (before) reviews. The goal of our work is to explore the interaction between the Quality Assurance (QA) techniques at the system level and the software level.

  8. The Reciprocal Internal/External Frame of Reference Model Using Grades and Test Scores

    Möller, Jens; Zimmermann, Friederike; Köller, Olaf


    Background: The reciprocal I/E model (RI/EM) combines the internal/external frame of reference model (I/EM) with the reciprocal effects model (REM). The RI/EM extends the I/EM longitudinally and the REM across domains. The model predicts that, within domains, mathematics and verbal achievement (VACH) and academic self-concept have positive effects…

  9. Model Reference Adaptive Control of the Air Flow Rate of Centrifugal Compressor Using State Space Method

    Han, Jaeyoung; Jung, Mooncheong; Yu, Sangseok [Chungnam Nat’l Univ., Daejeon (Korea, Republic of); Yi, Sun [North Carolina A and T State Univ., Raleigh (United States)


    In this study, a model reference adaptive controller is developed to regulate the outlet air flow rate of a centrifugal compressor for an automotive supercharger. The centrifugal compressor model is developed using an analytical method to predict the transient operating behavior, and the designed model is validated against experimental data to confirm its accuracy. The model reference adaptive control structure consists of a compressor model and an MRAC (model reference adaptive control) mechanism. A feedback controller is not robust to variations in the system parameters, whereas the applied adaptive controller remains robust even when the system parameters change. As a result, the MRAC regulated the air flow rate to its reference. The MRAC was also found to be more robust than the feedback controller when the system parameters were changed.
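    A minimal sketch of model reference adaptive control using the MIT rule for a first-order plant; the compressor model in the study is far more detailed, and the pole, gains and reference signal below are illustrative assumptions.

```python
# MRAC with the MIT rule: adapt a feedforward gain so the plant output tracks the reference model.
dt, T = 0.01, 50.0
a = 2.0                     # plant/model pole (assumed known here)
k = 0.5                     # unknown plant gain:    dy/dt  = -a*y  + k*u
k0 = 2.0                    # reference model gain:  dym/dt = -a*ym + k0*r
gamma = 2.0                 # adaptation gain

y = ym = theta = 0.0
for step in range(int(T / dt)):
    r = 1.0 if (step * dt) % 10 < 5 else -1.0     # square-wave flow-rate reference
    u = theta * r                                 # adjustable feedforward gain
    y += dt * (-a * y + k * u)                    # plant response
    ym += dt * (-a * ym + k0 * r)                 # desired (reference model) response
    e = y - ym
    theta += dt * (-gamma * e * ym)               # MIT rule: adapt theta to shrink the error

print("adapted gain theta = %.2f (ideal k0/k = %.2f)" % (theta, k0 / k))
```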

  10. Reference Management Methodologies for Large Structural Models at Kennedy Space Center

    Jones, Corey; Bingham, Ryan; Schmidt, Rick


    There have been many challenges associated with modeling some of NASA KSC's largest structures. Given the size of the welded structures at KSC, it was critically important to properly organize model structure and carefully manage references. Additionally, because of the amount of hardware to be installed on these structures, it was very important to have a means to coordinate between different design teams and organizations, check for interferences, produce consistent drawings, and allow for simple release processes. Facing these challenges, the modeling team developed a unique reference management methodology and model fidelity methodology. This presentation will describe the techniques and methodologies that were developed for these projects. The attendees will learn about KSC's reference management and model fidelity methodologies for large structures, understand the goals of these methodologies, and appreciate the advantages of developing a reference management methodology.

  11. Citations, References and the Growth of Scientific Literature: A Model of Dynamic Interaction

    Krauze, Tadeusz K.; Hillinger, Claude


    A mathematical model is presented which explains the observed exponential growth rates of citations and references in a scientific discipline. The independent variables are the growth rate of the number of articles published and the decay rate of citation of old literature. (13 references) (Author)

  12. Some Behavioral Considerations on the GPS4GEF Cloud-Based Generator of Evaluation Forms with Automatic Feedback and References to Interactive Support Content

    Daniel HOMOCIANU


    Full Text Available The paper introduces some considerations on a previously defined general-purpose system used to dynamically generate online evaluation forms with automatic feedback immediately after responses are submitted, working with a simple, well-known data source format able to store questions, answers and links to additional support materials in order to increase the productivity of evaluation and assessment. Beyond presenting a short description of the prototype's components and outlining the advantages and limitations of using it for any user involved in assessment and evaluation processes, this paper promotes the use of such a system together with a simple technique for generating and referencing interactive support content, cited within this paper and defined together with the LIVES4IT approach. This type of content consists of scenarios with ad hoc documentation and interactive simulation components, useful when emulating concrete examples of working with real-world objects, operating devices or using software applications from any activity field.

  13. Automatic detection of alpine rockslides in continuous seismic data using hidden Markov models

    Dammeier, Franziska; Moore, Jeffrey R.; Hammer, Conny; Haslinger, Florian; Loew, Simon


    Data from continuously recording permanent seismic networks can contain information about rockslide occurrence and timing complementary to eyewitness observations and thus aid in construction of robust event catalogs. However, detecting infrequent rockslide signals within large volumes of continuous seismic waveform data remains challenging and often requires demanding manual intervention. We adapted an automatic classification method using hidden Markov models to detect rockslide signals in seismic data from two stations in central Switzerland. We first processed 21 known rockslides, with event volumes spanning 3 orders of magnitude and station event distances varying by 1 order of magnitude, which resulted in 13 and 19 successfully classified events at the two stations. Retraining the models to incorporate seismic noise from the day of the event improved the respective results to 16 and 19 successful classifications. The missed events generally had low signal-to-noise ratio and small to medium volumes. We then processed nearly 14 years of continuous seismic data from the same two stations to detect previously unknown events. After postprocessing, we classified 30 new events as rockslides, of which we could verify three through independent observation. In particular, the largest new event, with estimated volume of 500,000 m3, was not generally known within the Swiss landslide community, highlighting the importance of regional seismic data analysis even in densely populated mountainous regions. Our method can be easily implemented as part of existing earthquake monitoring systems, and with an average event detection rate of about two per month, manual verification would not significantly increase operational workload.
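
    As a rough illustration of the detection idea (not the authors' exact feature extraction or training setup), the sketch below trains one Gaussian HMM on event-like feature sequences and one on background noise with the hmmlearn package, then flags sliding windows whose event-model log-likelihood exceeds the noise-model log-likelihood; all data and window sizes are synthetic placeholders.

        # Simplified HMM-based detection sketch: train one model on known event features
        # and one on background noise, then scan the continuous stream window by window.
        import numpy as np
        from hmmlearn.hmm import GaussianHMM

        rng = np.random.default_rng(0)
        noise_feats = rng.normal(0.0, 1.0, size=(2000, 3))   # background feature vectors
        event_feats = rng.normal(2.0, 1.5, size=(400, 3))     # features from known events

        event_hmm = GaussianHMM(n_components=3, covariance_type="diag", n_iter=50).fit(event_feats)
        noise_hmm = GaussianHMM(n_components=2, covariance_type="diag", n_iter=50).fit(noise_feats)

        stream = np.vstack([noise_feats[:500], event_feats[:100], noise_feats[500:1000]])
        win = 50
        for start in range(0, len(stream) - win, win):
            w = stream[start:start + win]
            if event_hmm.score(w) > noise_hmm.score(w):        # log-likelihood comparison
                print(f"candidate event in window starting at sample {start}")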

  14. Automatic detection of volcano-seismic events by modeling state and event duration in hidden Markov models

    Bhatti, Sohail Masood; Khan, Muhammad Salman; Wuth, Jorge; Huenupan, Fernando; Curilem, Millaray; Franco, Luis; Yoma, Nestor Becerra


    In this paper we propose an automatic volcano event detection system based on a Hidden Markov Model (HMM) with state and event duration models. Since different volcanic events have different durations, the state and whole-event durations learnt from the training data are enforced on the corresponding state and event duration models within the HMM. Seismic signals from the Llaima volcano are used to train the system. Two types of events are employed in this study, Long Period (LP) and Volcano-Tectonic (VT). Experiments show that standard HMMs can detect the volcano events with high accuracy but generate false positives. The results presented in this paper show that the incorporation of duration modeling can reduce the false positive rate in event detection by as much as 31% with a true positive accuracy of 94%. Further evaluation of the false positives indicates that the false alarms generated by the system were mostly potential events according to the signal-to-noise ratio criteria recommended by a volcano expert.

  15. Path Generation for High-Performance Motion of ROVs Based on a Reference Model

    Daniel de A. Fernandes


    Full Text Available This paper deals with the generation of sufficiently smooth position, velocity, and acceleration references for guiding the motion of an ROV along purposefully defined curvature-continuous paths in automated missions. The references are meant to be employed in high-performance trajectory tracking and dynamic positioning applications. The path planning problem is not in the scope of this work. A reference model that synthesises references concerning a single Degree-of-Freedom (DoF) motion is initially described. Then, the use of the synthesised references as the parametrisation for other references concerning multiple-DoF motion along curvature-continuous paths is exploited. Results from computer simulations and full-scale sea trials, both based on NTNU's ROV Minerva, are presented and discussed.

  16. A reference model and technical framework for mobile social software for learning

    De Jong, Tim; Specht, Marcus; Koper, Rob


    De Jong, T., Specht, M., & Koper, R. (2008). A reference model and technical framework for mobile social software for learning. Presented at the IADIS m-learning 2008 Conference. April, 11-13, 2008, Carvoeiro, Portugal.

  17. Comparison of function approximation, heuristic, and derivative-based methods for automatic calibration of computationally expensive groundwater bioremediation models

    Mugunthan, Pradeep; Shoemaker, Christine A.; Regis, Rommel G.


    The performance of function approximation (FA) methods is compared to heuristic and derivative-based nonlinear optimization methods for automatic calibration of biokinetic parameters of a groundwater bioremediation model of chlorinated ethenes on a hypothetical and a real field case. For the hypothetical case, on the basis of 10 trials on two different objective functions, the FA methods had the lowest mean and smallest deviation of the objective function among all algorithms for a combined Nash-Sutcliffe objective, and among all but the derivative-based algorithm for a total squared error objective. The best algorithms in the hypothetical case were applied to calibrate eight parameters to data obtained from a site in California. In three trials the FA methods outperformed heuristic and derivative-based methods for both objective functions. This study indicates that function approximation methods could be a more efficient alternative to heuristic and derivative-based methods for automatic calibration of computationally expensive bioremediation models.

  18. DQ reference frame modeling and control of single-phase active power decoupling circuits

    Tang, Yi; Qin, Zian; Blaabjerg, Frede


    This paper presents the dq synchronous reference frame modeling of single-phase power decoupling circuits, together with a complete model describing the dynamics of the dc-link ripple voltage. The proposed model is universal and valid for both inductive and capacitive decoupling circuits, and the input of the decoupling circuit can be either dependent on or independent of its front-end converter. Based on this model, a dq synchronous reference frame controller is designed which allows the decoupling circuit to operate in two different modes because of the circuit symmetry. Simulation and experimental results are presented to verify the effectiveness of the proposed modeling and control method.

  19. Establishment of normal reference intervals for four blood coagulation parameters on the ACL-TOP 700 automatic coagulation analyzer

    陈锐; 鲁燕飞; 周志兰; 姚振国; 陈国强


    Objective: To establish normal reference intervals for four blood coagulation parameters on the ACL-TOP 700 automatic coagulation analyzer. Methods: Fasting anticoagulated blood samples were collected from 1268 inpatients and physical examination subjects, all without liver disease, history of blood disease, or coagulation dysfunction. Prothrombin time (PT), activated partial thromboplastin time (APTT), thrombin time (TT) and fibrinogen (FIB) were determined with the ACL-TOP 700 automatic coagulation analyzer produced by the US company IL, and the results were used to establish this laboratory's own normal reference intervals. Results: The precision and accuracy of the analyzer were good. The reference intervals established in this laboratory differed somewhat from those provided by the manufacturer. Conclusion: Each laboratory should establish its own normal reference intervals rather than blindly adopting the intervals provided in the reagent manual.
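
    For large reference samples such as the one described above, CLSI guidance allows a nonparametric reference interval taken as the central 95% of the observed values. The sketch below shows that computation on simulated prothrombin-time data; the values are placeholders, not the study's data.

        # Nonparametric 95% reference interval (2.5th-97.5th percentiles) on simulated PT values.
        import numpy as np

        rng = np.random.default_rng(1)
        pt_seconds = rng.normal(12.0, 1.0, size=1268)       # simulated PT of reference subjects
        low, high = np.percentile(pt_seconds, [2.5, 97.5])
        print(f"PT reference interval: {low:.1f} - {high:.1f} s")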


    Wang Bo


    Full Text Available Purpose: Development of mathematical and experimental models of the polikopter UAV NAU PKF "Aurora", an octocopter-scheme vehicle, for experimental flights in manual, semi-automatic and unmanned modes. Methods: From 14/03/2016 to 21/03/2016 a series of experimental flights (10 flights with 10 rats at an altitude of 700 meters) was carried out on the polikopter (octocopter) NAU PKF "Aurora" with a hermetic cabin, studying the animals' somatic and neurological status after the flight. Flights were also carried out with experimental animals on board for such a safety assessment. Results: The obtained logs of the autopilot's "black box" indicate very small (almost imperceptible) fluctuations in pitch, roll and yaw during the flight, minor variations in altitude during the almost stationary hovering of the polikopter at different altitudes, and vibrations and sensor parameters fully adequate to the movements and maneuvers of the aircraft. Discussion: In the course of these studies, the possibility of completely safe flight of mammals (rats) on a polikopter vehicle, even in an open cockpit, was demonstrated experimentally. With appropriate refinement it may be possible in the future to develop and construct passenger polikopter flyers for totally safe air transportation of people [6,7,8]. In terms of adverse mechanical effects on the human body (acceleration, overload, fluctuations, vibrations), polikopter transport is safer and less harmful to passengers than road transport, which is particularly important for the delivery of neurosurgical, polytraumatological, cardiological and critical-care patients in critical condition to intensive care units and operating rooms of hospitals and medical centers.

  1. Robust DTC-SVM Method for Matrix Converter Drives with Model Reference Adaptive Control Scheme

    Lee, Kyo Beum; Huh, Sunghoi; Sim, Kyung-Hun


    This paper presents a new robust DTC-SVM control system for high-performance induction motor drives fed by a matrix converter with a variable-structure model reference adaptive control scheme (VS-MRAC). It is possible to combine the advantages of matrix converters with the advantages of the DTC strategy, using space vector modulation and a deadbeat algorithm in the stator flux reference frame. The lumped disturbances, such as parameter variation and load disturbance of the system, are estimated by a neuro-sliding mode approach based on model reference adaptive control (MRAC), together with an adaptive observer.

  2. Indian summer heat wave of 2015: a biometeorological analysis using half hourly automatic weather station data with special reference to Andhra Pradesh

    Sarath Chandran, M. A.; Subba Rao, A. V. M.; Sandeep, V. M.; Pramod, V. P.; Pani, P.; Rao, V. U. M.; Visha Kumari, V.; Srinivasa Rao, Ch


    Heat wave is a hazardous weather-related extreme event that affects living beings. The 2015 summer heat wave affected many regions in India and caused the death of 2248 people across the country. An attempt has been made to quantify the intensity and duration of heat wave that resulted in high mortality across the country. Half hourly Physiologically Equivalent Temperature (PET), based on a complete heat budget of human body, was estimated using automatic weather station (AWS) data of four locations in Andhra Pradesh state, where the maximum number of deaths was reported. The heat wave characterization using PET revealed that extreme heat load conditions (PET >41) existed in all the four locations throughout May during 2012-2015, with varying intensity. The intensity and duration of heat waves characterized by "area under the curve" method showed good results for Srikakulam and Undi locations. Variations in PET during each half an hour were estimated. Such studies will help in fixing thresholds for defining heat waves, designing early warning systems, etc.

  3. SU-E-T-50: Automatic Validation of Megavoltage Beams Modeled for Clinical Use in Radiation Therapy

    Melchior, M [Terapia Radiante S.A., La Plata, Buenos Aires (Argentina); Salinas Aranda, F [Vidt Centro Medico, Ciudad Autonoma De Buenos Aires (Argentina); 21st Century Oncology, Ft. Myers, FL (United States); Sciutto, S [Universidad Nacional de La Plata, La Plata, Buenos Aires (Argentina); Dodat, D [Centro Medico Privado Dean Funes, La Plata, Buenos Aires (Argentina); Larragueta, N [Universidad Nacional de La Plata, La Plata, Buenos Aires (Argentina); Centro Medico Privado Dean Funes, La Plata, Buenos Aires (Argentina)


    Purpose: To automatically validate megavoltage beams modeled in the XiO™ 4.50 (Elekta, Stockholm, Sweden) and Varian Eclipse™ Treatment Planning Systems (TPS) (Varian Associates, Palo Alto, CA, USA), reducing validation time before beam-on for clinical use. Methods: A software application that can automatically read and analyze DICOM RT Dose and W2CAD files was developed using the MATLAB integrated development environment. TPS-calculated dose distributions, in DICOM RT Dose format, and dose values measured in different Varian Clinac beams, in W2CAD format, were compared. The experimental beam data used were those acquired for beam commissioning, collected on a water phantom with a 2D automatic beam scanning system. Two methods were chosen to evaluate the fit of the dose distributions: gamma analysis and the point tests described in Appendix E of IAEA TECDOC-1583. Depth-dose curves and beam profiles were evaluated for both open and wedged beams. The tolerance parameters chosen for gamma analysis are 3% dose and 3 mm distance, respectively. Absolute dose was measured independently at the points proposed in Appendix E of TECDOC-1583 to validate the software results. Results: TPS-calculated depth-dose distributions agree with measured beam data within fixed precision values at all depths analyzed. Measured beam dose profiles match TPS-calculated doses with high accuracy in both open and wedged beams. The depth and profile dose distribution fitting analysis shows gamma values < 1. Relative errors at the points proposed in Appendix E of TECDOC-1583 meet the tolerances recommended therein. Independent absolute dose measurements at the points proposed in Appendix E of TECDOC-1583 confirm the software results. Conclusion: Automatic validation of megavoltage beams modeled for clinical use was accomplished. The software tool developed proved efficient, giving users a convenient and reliable environment to decide whether or not to accept a beam model for clinical use. Validation time before beam-on for clinical use was thereby reduced.
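
    The gamma analysis mentioned above compares calculated and measured dose distributions under combined dose-difference and distance-to-agreement criteria. The following simplified 1-D sketch applies the 3%/3 mm criteria to synthetic depth-dose curves; clinical tools work on full 2-D/3-D dose grids, so this is only an illustration of the metric.

        # 1-D gamma-index computation (3% dose / 3 mm distance) between two synthetic curves.
        import numpy as np

        z = np.arange(0.0, 100.0, 1.0)                 # depth [mm]
        d_meas = np.exp(-z / 120.0)                    # measured dose, normalised to 1 at surface
        d_calc = np.exp(-(z - 0.5) / 120.0)            # calculated dose, slightly shifted

        dose_tol, dist_tol = 0.03, 3.0                 # 3% (of normalised max) / 3 mm criteria
        gamma = np.empty_like(z)
        for i, (zi, di) in enumerate(zip(z, d_meas)):
            dd = (d_calc - di) / dose_tol              # dose-difference term
            dz = (z - zi) / dist_tol                   # distance-to-agreement term
            gamma[i] = np.sqrt(dz**2 + dd**2).min()    # minimum over all calculated points
        print("gamma pass rate:", np.mean(gamma <= 1.0))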

  4. Exploring Automatization Processes.

    DeKeyser, Robert M.


    Presents the rationale for and the results of a pilot study attempting to document in detail how automatization takes place as the result of different kinds of intensive practice. Results show that reaction times and error rates gradually decline with practice, and the practice effect is skill-specific. (36 references) (CK)

  5. Color Image Segmentation Based on Different Color Space Models Using Automatic GrabCut

    Dina Khattab


    Full Text Available This paper presents a comparative study using different color spaces to evaluate the performance of color image segmentation using the automatic GrabCut technique. GrabCut is considered as one of the semiautomatic image segmentation techniques, since it requires user interaction for the initialization of the segmentation process. The automation of the GrabCut technique is proposed as a modification of the original semiautomatic one in order to eliminate the user interaction. The automatic GrabCut utilizes the unsupervised Orchard and Bouman clustering technique for the initialization phase. Comparisons with the original GrabCut show the efficiency of the proposed automatic technique in terms of segmentation, quality, and accuracy. As no explicit color space is recommended for every segmentation problem, automatic GrabCut is applied with RGB, HSV, CMY, XYZ, and YUV color spaces. The comparative study and experimental results using different color images show that RGB color space is the best color space representation for the set of the images used.
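
    A hedged sketch of the comparison idea is given below: the same rectangle-initialised GrabCut call is run on an image converted to several OpenCV colour spaces, and the resulting foreground masks are compared. The file name and rectangle are placeholders, CMY is omitted because OpenCV has no direct conversion for it, and the paper's automatic Orchard-Bouman initialisation is not reproduced here.

        # Run rectangle-initialised GrabCut on the same image in different colour spaces.
        import cv2
        import numpy as np

        img_bgr = cv2.imread("image.jpg")                         # hypothetical input image
        conversions = {"RGB": cv2.COLOR_BGR2RGB, "HSV": cv2.COLOR_BGR2HSV,
                       "YUV": cv2.COLOR_BGR2YUV, "XYZ": cv2.COLOR_BGR2XYZ}
        rect = (10, 10, img_bgr.shape[1] - 20, img_bgr.shape[0] - 20)  # rough foreground box

        for name, code in conversions.items():
            img = cv2.cvtColor(img_bgr, code)
            mask = np.zeros(img.shape[:2], np.uint8)
            bgd = np.zeros((1, 65), np.float64)                   # background GMM parameters
            fgd = np.zeros((1, 65), np.float64)                   # foreground GMM parameters
            cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
            fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0)
            print(name, "foreground pixels:", int(fg.sum()))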

  6. 76 FR 8917 - Special Conditions: Gulfstream Model GVI Airplane; Automatic Speed Protection for Design Dive Speed


    ...; Automatic Speed Protection for Design Dive Speed AGENCY: Federal Aviation Administration (FAA), DOT. ACTION... design features include a high speed protection system. These proposed special conditions contain the... Design Features The GVI is equipped with a high speed protection system that limits nose down...

  7. 76 FR 31454 - Special Conditions: Gulfstream Model GVI Airplane; Automatic Speed Protection for Design Dive Speed


    ... for Gulfstream GVI airplanes was published in the Federal Register on February 16, 2011 (76 FR 8917...; Automatic Speed Protection for Design Dive Speed AGENCY: Federal Aviation Administration (FAA), DOT. ACTION... high speed protection system. These special conditions contain the additional safety standards that...

  8. Performance Modelling of Automatic Identification System with Extended Field of View

    Lauersen, Troels; Mortensen, Hans Peter; Pedersen, Nikolaj Bisgaard


    This paper deals with AIS (Automatic Identification System) behavior, to investigate the severity of packet collisions in an extended field of view (FOV). This is an important issue for satellite-based AIS, and the main goal is a feasibility study to find out to what extent an increased FOV...

  9. Decentralized model reference adaptive sliding mode control based on fuzzy model

    Gu Haijun; Zhang Tianping; Shen Qikun


    A new design scheme of a decentralized model reference adaptive sliding mode controller for a class of MIMO nonlinear systems with high-order interconnections is proposed. The design is based on the universal approximation capability of Takagi-Sugeno (T-S) fuzzy systems. Motivated by the principle of certainty equivalent control, a decentralized adaptive controller is designed to achieve the tracking objective without computation of the T-S fuzzy model. Thanks to adaptive estimation, the approach does not require the upper bound of the uncertainty term to be known. By theoretical analysis, the closed-loop fuzzy control system is proven to be globally stable in the sense that all signals involved are bounded, with tracking errors converging to zero. Simulation results demonstrate the effectiveness of the approach.

  10. Fusing moving average model and stationary wavelet decomposition for automatic incident detection: case study of Tokyo Expressway

    Qinghua Liu


    Full Text Available Traffic congestion is a growing problem in urban areas all over the world, and the transport sector has been actively studying intelligent transportation systems for automatic incident detection. The functionality of automatic incident detection on expressways is a primary objective of advanced traffic management systems. In order to save lives and prevent secondary incidents, accurate and prompt incident detection is necessary. This paper presents a methodology that integrates a moving average (MA) model with stationary wavelet decomposition for automatic incident detection, in which the layer coefficient parameters are extracted from the difference between upstream and downstream occupancy. Unlike other wavelet-based methods presented before, it first smooths the raw data with the MA model. It then uses the stationary wavelet transform to decompose the signal, which achieves accurate reconstruction without shifting the signal transfer coefficients, so incidents can be detected more accurately. The threshold to trigger an incident alarm is also adjusted according to normal traffic conditions with congestion. The methodology is validated with real data from Tokyo Expressway ultrasonic sensors. Experimental results show that it is accurate and effective, and that it can differentiate traffic accidents from other conditions such as recurring traffic congestion.
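
    The two processing steps described above (moving-average smoothing followed by a stationary wavelet decomposition) can be sketched as follows with NumPy and PyWavelets; the occupancy signal, window length, wavelet and alarm threshold are illustrative choices, not the paper's calibrated values.

        # Smooth an occupancy-difference signal with a moving average, decompose it with
        # the stationary wavelet transform, and threshold a coarse-level coefficient.
        import numpy as np
        import pywt

        rng = np.random.default_rng(2)
        occ_diff = rng.normal(0.0, 0.5, 512)            # upstream minus downstream occupancy
        occ_diff[300:340] += 4.0                        # simulated incident signature

        win = 8                                         # moving-average window
        smoothed = np.convolve(occ_diff, np.ones(win) / win, mode="same")

        coeffs = pywt.swt(smoothed, "db4", level=2)     # stationary (undecimated) wavelet transform
        cA2, cD2 = coeffs[0]                            # coarsest-level approximation / detail
        alarm = np.abs(cA2) > 2.0                       # crude incident threshold
        print("samples flagged:", int(alarm.sum()), "first at index", int(np.argmax(alarm)))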

  11. Optimal reference interval for homeostasis model assessment of insulin resistance in a Japanese population.

    Yamada, Chizumi; Mitsuhashi, Toshitake; Hiratsuka, Noboru; Inabe, Fumiyo; Araida, Nami; Takahashi, Eiko


    The aim of the present study was to establish a reference interval for homeostasis model assessment of insulin resistance (HOMA-IR) in a Japanese population based on the C28-A3 document from the Clinical and Laboratory Standards Institute (CLSI). We selected healthy subjects aged 20-79 years with normal fasting plasma glucose to determine reference limits of HOMA-IR. We selected 2173 subjects as reference individuals, and 2153 subjects were used for analysis. The reference interval for HOMA-IR was established as between 0.4 and 2.4. This represents the first reference interval study for HOMA-IR that applies the stringent CLSI C28-A3 document. HOMA-IR ≥ 2.5 should be considered a reasonable indicator of insulin resistance in Japanese subjects. (J Diabetes Invest, doi: 10.1111/j.2040-1124.2011.00113.x, 2011).
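
    For reference, HOMA-IR is conventionally computed from fasting glucose (mmol/L) and fasting insulin (µU/mL) as their product divided by 22.5; a minimal sketch with illustrative values:

        # HOMA-IR = fasting glucose [mmol/L] * fasting insulin [microU/mL] / 22.5
        def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
            return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

        print(homa_ir(5.0, 8.0))   # ~1.78, inside the 0.4-2.4 reference interval reported above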

  12. Incorporation of ICRP-116 eye model into ICRP reference polygonal surface phantoms

    Nguyen, Thang Tat; Yeom, Yeon Soo; Han, Min Cheol; Wang, Zhao Jun; Kim, Han Sung; Kim, Chan Hyeong [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of)


    The ICRP adopted a detailed stylized eye model developed by Behrens et al. for the evaluation of the lens dose coefficients released in ICRP Publication 116. However, those dose coefficients were calculated with the stylized eye model incorporated into the head of the mathematical phantoms, not the ICRP reference phantoms, which may cause inconsistency in lens dose assessment. In order to keep consistency in the lens dose assessment, the present study incorporates the ICRP-116 eye model into the currently developing polygonal-mesh-type ICRP reference phantoms, which are being converted from the voxel-type ICRP reference phantoms. Lens dose values were then calculated and compared with those calculated with the mathematical phantom to see how the phantom affects lens doses. The present study incorporated the ICRP-116 eye model into the polygonal-mesh-type ICRP reference phantoms and showed significant dose differences compared with the ICRP-116 data calculated with the mathematical phantom. We believe that the ICRP reference phantoms including the detailed eye model provide a more consistent assessment of eye lens dose.

  13. Teuchos::RefCountPtr beginner's guide : an introduction to the Trilinos smart reference-counted pointer class for (almost) automatic dynamic memory management in C++.

    Bartlett, Roscoe A


    Dynamic memory management in C++ is one of the most common areas of difficulty and errors for amateur and expert C++ developers alike. The improper use of operator new and operator delete is arguably the most common cause of incorrect program behavior and segmentation faults in C++ programs. Here we introduce a templated concrete C++ class Teuchos::RefCountPtr<>, which is part of the Trilinos tools package Teuchos, that combines the concepts of smart pointers and reference counting to build a low-overhead but effective tool for simplifying dynamic memory management in C++. We discuss why the use of raw pointers for memory management, managed through explicit calls to operator new and operator delete, is so difficult to accomplish without making mistakes and how programs that use raw pointers for memory management can easily be modified to use RefCountPtr<>. In addition, explicit calls to operator delete are fragile and result in memory leaks in the presence of C++ exceptions. In its most basic usage, RefCountPtr<> automatically determines when operator delete should be called to free an object allocated with operator new and is not fragile in the presence of exceptions. The class also supports more sophisticated use cases as well. This document describes just the most basic usage of RefCountPtr<> to allow developers to get started using it right away. However, more detailed information on the design and advanced features of RefCountPtr<> is provided by the companion document 'Teuchos::RefCountPtr : The Trilinos Smart Reference-Counted Pointer Class for (Almost) Automatic Dynamic Memory Management in C++'.

  14. Modeling difference of reference surfaces of the Earth's body to solve the problem of vertical positioning

    Tucikešić Sanja S.


    Full Text Available The aim of this paper is modeling the difference between reference surfaces of the Earth's body to solve the problem of vertical positioning. With the development of GNSS technology, determining the geoid undulation has gained scientific and practical significance, especially for vertical positioning with the aim of replacing traditional geometric leveling. The paper presents the modeling of a corrective surface based on GNSS measurements through a practical example of the Local Spatial Reference Network (PLRM) Mrkonjic Grad, where the measurements were made with GNSS observations. The modeling was performed with a one-dimensional similarity transformation, and the average differences between the orthometric heights from GNSS measurements and the transformed heights were determined.

  15. Outdoor sound propagation reference model developed in the European harmonoise project

    Defrance, J.; Salomons, E.; Noordhoek, I.; Heimann, D.; Plovsing, B.; Watts, G.; Jonasson, H.; Xuetao, Z.; Premat, E.; Schmich, I.; Aballea, F.; Baulac, M.; Roo,


    The Harmonoise reference model has been developed in order to predict long-term average sound levels in road and railway situations that are geometrically relatively simple but physically complex. The present paper describes all steps of calculations with this powerful model which includes several


    Nosenko S. V.


    Full Text Available In the article, we present a mathematical model for assigning documents entering an automated system to users' spheres of responsibility. The possibility of applying the mathematical apparatus of the algebra of finite predicates as a basic means of model description is demonstrated.


    Lisoviett Pérez Pinto


    Full Text Available In this paper, the mathematical modeling and simulation of the automatic control of the quintuple-effect evaporation station of the sugar mill "El Palmar" in Venezuela is carried out. The multiple effect consists of 5 Robert-type evaporators with equal characteristics, connected in series. Starting from the desired operating conditions and control requirements (level in each evaporator vessel, cane syrup concentration, and pressure in the fifth evaporator vessel) and using mass balances, solids balances for each evaporator, and an energy balance for the barometric condenser at the output of the fifth vessel, the nonlinear model of the process is obtained, resulting in a system with multiple inputs and multiple outputs and strong interactions between variables. Since the design of the automatic process control aims to keep the variables that characterize its performance regulated at an operating point, the model is linearized around an equilibrium point, resulting in a new model in terms of the variable deviations around that point. The model is then expressed in terms of input-output relations, characterized by transfer functions in the complex frequency domain. Finally, the evaporation process is simulated, establishing the adequacy of the model to the real process.

  18. Automatic single questionnaire intensity (SQI, EMS98 scale) estimation using ranking models built on the existing BCSF database

    Schlupp, A.; Sira, C.; Schmitt, K.; Schaming, M.


    In charge of intensity estimations in France, BCSF has collected and manually analyzed more than 47000 online individual macroseismic questionnaires since 2000, up to intensity VI. These macroseismic data allow us to estimate one SQI value (Single Questionnaire Intensity) for each form following the EMS98 scale. The reliability of the automatic intensity estimation is important as these estimates are today used for automatic shakemap communication and crisis management. Today, the automatic intensity estimation at BCSF is based on the direct use of thumbnails selected on a menu by the witnesses. Each thumbnail corresponds to an EMS-98 intensity value, allowing us to quickly issue a map of communal intensity by averaging the SQIs in each city. Afterwards, an expert manually analyzes each form to determine a definitive SQI. This work is time consuming and no longer suitable considering the increasing number of testimonies at BCSF; nevertheless, it can take incoherent answers into account. We tested several automatic methods (USGS algorithm, correlation coefficient, thumbnails) (Sira et al. 2013, IASPEI) and compared them with 'expert' SQIs. These methods gave medium scores (between 50 and 60% of SQIs correctly determined, and 35 to 40% within plus or minus one intensity degree). The best fit was observed with the thumbnails. Here, we present new approaches based on three statistical ranking methods: 1) a multinomial logistic regression model, 2) discriminant analysis (DISQUAL) and 3) support vector machines (SVMs). The first two methods are standard, while the third is more recent. These methods can be applied because the BCSF already has more than 47000 forms in its database and because their questions and answers are well adapted to statistical analysis. The ranking models could then be used as an automatic method constrained by expert analysis. The performance of the automatic methods and the reliability of the estimated SQI can be evaluated through comparison with the expert SQIs.
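
    As an illustration of the first ranking method listed above, the sketch below fits a multinomial logistic regression to encoded questionnaire answers labelled with expert SQI values and measures agreement on held-out forms; the features and labels are random placeholders standing in for the BCSF questionnaire encodings.

        # Fit a multinomial logistic regression to questionnaire encodings and check agreement.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        X = rng.integers(0, 5, size=(2000, 30)).astype(float)          # encoded answers
        y = np.clip(np.round(X[:, :5].mean(axis=1)), 1, 6).astype(int)  # synthetic "expert" SQI

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)  # multinomial for multi-class labels
        print("agreement with 'expert' SQI:", clf.score(X_te, y_te))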

  19. Jobs and Economic Development Impact (JEDI) Model Geothermal User Reference Guide

    Johnson, C.; Augustine, C.; Goldberg, M.


    The Geothermal Jobs and Economic Development Impact (JEDI) model, developed through the National Renewable Energy Laboratory (NREL), is an Excel-based, user-friendly tool that estimates the local economic impacts of constructing and operating hydrothermal and Enhanced Geothermal System (EGS) power generation projects; JEDI models exist for a range of conventional and renewable energy technologies. The JEDI Model Geothermal User Reference Guide was developed to assist users in using and understanding the model. This guide provides information on the model's underlying methodology, as well as the parameters and references used to develop the cost data utilized in the model. This guide also provides basic instruction on the model's add-in features and operation, and a discussion of how the results should be interpreted.

  20. Automatic iterative segmentation of multiple sclerosis lesions using Student's t mixture models and probabilistic anatomical atlases in FLAIR images.

    Freire, Paulo G L; Ferrari, Ricardo J


    Multiple sclerosis (MS) is a demyelinating autoimmune disease that attacks the central nervous system (CNS) and affects more than 2 million people worldwide. The segmentation of MS lesions in magnetic resonance imaging (MRI) is a very important task to assess how a patient is responding to treatment and how the disease is progressing. Computational approaches have been proposed over the years to segment MS lesions and reduce the amount of time spent on manual delineation and inter- and intra-rater variability and bias. However, fully-automatic segmentation of MS lesions still remains an open problem. In this work, we propose an iterative approach using Student's t mixture models and probabilistic anatomical atlases to automatically segment MS lesions in Fluid Attenuated Inversion Recovery (FLAIR) images. Our technique resembles a refinement approach by iteratively segmenting brain tissues into smaller classes until MS lesions are grouped as the most hyperintense one. To validate our technique we used 21 clinical images from the 2015 Longitudinal Multiple Sclerosis Lesion Segmentation Challenge dataset. Evaluation using Dice Similarity Coefficient (DSC), True Positive Ratio (TPR), False Positive Ratio (FPR), Volume Difference (VD) and Pearson's r coefficient shows that our technique has a good spatial and volumetric agreement with raters' manual delineations. Also, a comparison between our proposal and the state-of-the-art shows that our technique is comparable and, in some cases, better than some approaches, thus being a viable alternative for automatic MS lesion segmentation in MRI.

  1. Stable Model Reference Adaptive Control in the Presence of Bounded Disturbances,


    IEEE Transactions on Automatic Control, Vol. AC-25, No. 3, June 1980. [2] A. Stephen Morse, "Global Stability of Parameter-Adaptive Control Systems," IEEE Transactions on Automatic Control, Vol. AC-25, No. 3, June 1980. [4] Benjamin B. Peterson and Kumpati S. Narendra, "Bounded Error ..."

  2. Automatic pelvis segmentation from x-ray images of a mouse model

    Al Okashi, Omar M.; Du, Hongbo; Al-Assam, Hisham


    The automatic detection and quantification of skeletal structures has a variety of different applications for biological research. Accurate segmentation of the pelvis from X-ray images of mice in a high-throughput project such as the Mouse Genomes Project not only saves time and cost but also helps achieving an unbiased quantitative analysis within the phenotyping pipeline. This paper proposes an automatic solution for pelvis segmentation based on structural and orientation properties of the pelvis in X-ray images. The solution consists of three stages including pre-processing image to extract pelvis area, initial pelvis mask preparation and final pelvis segmentation. Experimental results on a set of 100 X-ray images showed consistent performance of the algorithm. The automated solution overcomes the weaknesses of a manual annotation procedure where intra- and inter-observer variations cannot be avoided.

  3. Regional Image Features Model for Automatic Classification between Normal and Glaucoma in Fundus and Scanning Laser Ophthalmoscopy (SLO) Images.

    Haleem, Muhammad Salman; Han, Liangxiu; Hemert, Jano van; Fleming, Alan; Pasquale, Louis R; Silva, Paolo S; Song, Brian J; Aiello, Lloyd Paul


    Glaucoma is one of the leading causes of blindness worldwide. There is no cure for glaucoma but detection at its earliest stage and subsequent treatment can aid patients to prevent blindness. Currently, optic disc and retinal imaging facilitates glaucoma detection but this method requires manual post-imaging modifications that are time-consuming and subjective to image assessment by human observers. Therefore, it is necessary to automate this process. In this work, we have first proposed a novel computer aided approach for automatic glaucoma detection based on Regional Image Features Model (RIFM) which can automatically perform classification between normal and glaucoma images on the basis of regional information. Different from all the existing methods, our approach can extract both geometric (e.g. morphometric properties) and non-geometric based properties (e.g. pixel appearance/intensity values, texture) from images and significantly increase the classification performance. Our proposed approach consists of three new major contributions including automatic localisation of optic disc, automatic segmentation of disc, and classification between normal and glaucoma based on geometric and non-geometric properties of different regions of an image. We have compared our method with existing approaches and tested it on both fundus and Scanning laser ophthalmoscopy (SLO) images. The experimental results show that our proposed approach outperforms the state-of-the-art approaches using either geometric or non-geometric properties. The overall glaucoma classification accuracy for fundus images is 94.4% and accuracy of detection of suspicion of glaucoma in SLO images is 93.9 %.

  4. An Interactive Tool for Automatic Predimensioning and Numerical Modeling of Arch Dams

    D. J. Vicente


    Full Text Available The construction of double-curvature arch dams is an attractive solution from an economic viewpoint due to the reduced volume of concrete necessary for their construction as compared to conventional gravity dams. Due to their complex geometry, many criteria have arisen for their design. However, the most widespread methods are based on recommendations of traditional technical documents without taking into account the possibilities of computer-aided design. In this paper, an innovative software tool to design FEM models of double-curvature arch dams is presented. Several capabilities are allowed: simplified geometry creation (interesting for academic purposes, preliminary geometrical design, high-detailed model construction, and stochastic calculation performance (introducing uncertainty associated with material properties and other parameters. This paper specially focuses on geometrical issues describing the functionalities of the tool and the fundamentals of the design procedure with regard to the following aspects: topography, reference cylinder, excavation depth, crown cantilever thickness and curvature, horizontal arch curvature, excavation and concrete mass volume, and additional elements such as joints or spillways. Examples of application on two Spanish dams are presented and the results obtained analyzed.

  5. Automatic barcode recognition method based on adaptive edge detection and a mapping model

    Yang, Hua; Chen, Lianzheng; Chen, Yifan; Lee, Yong; Yin, Zhouping


    An adaptive edge detection and mapping (AEDM) algorithm to address the challenging one-dimensional barcode recognition task with the existence of both image degradation and barcode shape deformation is presented. AEDM is an edge detection-based method that has three consecutive phases. The first phase extracts the scan lines from a cropped image. The second phase involves detecting the edge points in a scan line. The edge positions are assumed to be the intersecting points between a scan line and a corresponding well-designed reference line. The third phase involves adjusting the preliminary edge positions to more reasonable positions by employing prior information of the coding rules. Thus, a universal edge mapping model is established to obtain the coding positions of each edge in this phase, followed by a decoding procedure. The Levenberg-Marquardt method is utilized to solve this nonlinear model. The computational complexity and convergence analysis of AEDM are also provided. Several experiments were implemented to evaluate the performance of AEDM algorithm. The results indicate that the efficient AEDM algorithm outperforms state-of-the-art methods and adequately addresses multiple issues, such as out-of-focus blur, nonlinear distortion, noise, nonlinear optical illumination, and situations that involve the combinations of these issues.


    N. A. Pelevin


    Full Text Available The paper presents simulation results for the dynamics of the hydrostatic bearing in the spindle assembly of a standard flexible production module with a throttled circuit. The need to increase the dynamic quality of the automatic control system of the hydrostatic bearing by means of correcting elements in the form of RC-chains is shown. The features of choosing the correction parameters, which arise from the cross-couplings in the structure of the automatic control system, are noted. We propose a block diagram of the automatic control system of the hydrostatic bearing in the Simulink environment and a cyclic algorithm, implemented in MATLAB, for determining the RC-chain parameters, taking into account the thermal processes typical of finishing treatment. A graphic-analytical method for choosing the correction parameters is presented, based on the gradient of the phase stability margin, for assessing the dynamic quality of the automatic control system. The applicability of the method when a standard metal bellows valve is used as the hydraulic capacity of the RC-chain is also investigated, and recommendations for choosing the bellows valve are formulated. The dynamic quality indicators for transient processes are checked by means of the corresponding programs developed in MATLAB. Examples are given of phase stability margin gradient plots partitioning different regions of hydrostatic bearing dynamic quality for different spindle rotation frequencies, together with a description of using the data cursor function on the MATLAB toolbar. An improvement of the hydrostatic bearing dynamics under the low loads typical of finishing treatment is noted, as well as a decrease of the dynamic indicators under the high loads of roughing treatment.

  7. Robust model-reference control for descriptor linear systems subject to parameter uncertainties

    Guangren DUAN; Biao ZHANG


    Robust model-reference control for descriptor linear systems with structural parameter uncertainties is investigated. A sufficient condition for the existence of a model-reference zero-error asymptotic tracking controller is given. It is shown that the robust model reference control problem can be decomposed into two subproblems: a robust state feedback stabilization problem for descriptor systems subject to parameter uncertainties and a robust compensation problem. The latter aims to find three coefficient matrices which satisfy four matrix equations and simultaneously minimize the effect of the uncertainties on the tracking error. Based on a complete parametric solution to a class of generalized Sylvester matrix equations, the robust compensation problem is converted into a minimization problem with quadratic cost and linear constraints. A numerical example shows the effect of the proposed approach.
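
    The compensation step above relies on solving Sylvester-type matrix equations. As a simple numerical illustration (using the standard equation AX + XB = Q rather than the generalized form treated in the paper), SciPy's solver can be used as follows; the matrices are arbitrary examples.

        # Solve the standard Sylvester equation A X + X B = Q and check the residual.
        import numpy as np
        from scipy.linalg import solve_sylvester

        A = np.array([[1.0, 2.0], [0.0, 3.0]])
        B = np.array([[4.0, 1.0], [0.0, 5.0]])
        Q = np.array([[1.0, 0.0], [0.0, 1.0]])
        X = solve_sylvester(A, B, Q)
        print("residual:", np.linalg.norm(A @ X + X @ B - Q))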

  8. The mean error estimation of TOPSIS method using a fuzzy reference models

    Wojciech Sałabun


    Full Text Available The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is a commonly used multi-criteria decision-making method. A number of authors have proposed improvements, known as extensions, of the TOPSIS method, but these extensions have not been examined with respect to accuracy. Accuracy estimation is very difficult because reference values for the obtained results are not known; therefore, the results of each extension are compared to one another. In this paper, the author proposes a new method to estimate the mean error of TOPSIS with the use of a fuzzy reference model (FRM). This method provides reference values. In experiments involving 1,000 models, 28 million cases are simulated to estimate the mean error. The results of four commonly used normalization procedures were compared. Additionally, the author demonstrates the relationship between the value of the mean error and the nonlinearity of the models and the number of alternatives.
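
    For readers unfamiliar with the baseline method being evaluated, a minimal TOPSIS implementation (vector normalisation variant) is sketched below; the decision matrix, weights and criterion directions are illustrative.

        # Minimal TOPSIS: normalise, weight, compute distances to ideal/anti-ideal, rank.
        import numpy as np

        X = np.array([[250., 16., 12., 5.],       # alternatives x criteria
                      [200., 16., 8., 3.],
                      [300., 32., 16., 4.],
                      [275., 32., 8., 4.]])
        w = np.array([0.25, 0.25, 0.25, 0.25])    # criteria weights
        benefit = np.array([False, True, True, True])   # False = cost criterion

        R = X / np.linalg.norm(X, axis=0)         # vector normalisation
        V = R * w
        ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
        anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
        d_pos = np.linalg.norm(V - ideal, axis=1)
        d_neg = np.linalg.norm(V - anti, axis=1)
        closeness = d_neg / (d_pos + d_neg)
        print("ranking (best first):", np.argsort(-closeness))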

  9. Face it a visual reference for multi-ethnic facial modeling

    Beckmann Wells, Patricia


    Face It presents practical hands-on techniques and 3D modeling and sculpting tools with Maya and ZBrush production pipelines, uniquely focused on the facial modeling of 7 ethnicity models and featuring over 100 different models ranging in age from newborn to elderly characters. Face It is a resource for academics and professionals alike. Explore the modeling possibilities beyond the digital reference galleries online. No more having to adapt medical anatomy texts to your own models! Explore the finite details of facial anatomy with a focus on skull development and muscle structure.

  10. An automatic modeling system of the reaction mechanisms for chemical vapor deposition processes using real-coded genetic algorithms.

    Takahashi, Takahiro; Nakai, Hiroyuki; Kinpara, Hiroki; Ema, Yoshinori


    The identification of appropriate reaction models is very helpful for developing chemical vapor deposition (CVD) processes. In this study, we have developed an automatic system to model reaction mechanisms in CVD processes by analyzing the experimental results, namely the cross-sectional shapes of the films deposited on substrates with micrometer- or nanometer-sized trenches. We designed the inference engine that models the reaction mechanism in the system using real-coded genetic algorithms (RCGAs). We studied the dependence of the system performance on two methods: one using simple genetic algorithms (SGAs) with the conventional GA operators, and the other using RCGAs with the blend crossover operator (BLX-alpha). Although we demonstrated that systems using both methods could successfully model the reaction mechanisms, the RCGAs showed better performance with respect to the accuracy and the calculation cost of identifying the models.
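
    The BLX-alpha blend crossover mentioned above draws each offspring gene uniformly from the interval spanned by the two parents' genes, extended by a factor alpha on each side. A minimal sketch, with illustrative parameter vectors:

        # BLX-alpha crossover: sample each child gene from [lo - a*span, hi + a*span].
        import numpy as np

        def blx_alpha(parent1: np.ndarray, parent2: np.ndarray, alpha: float = 0.5,
                      rng: np.random.Generator = np.random.default_rng()) -> np.ndarray:
            lo = np.minimum(parent1, parent2)
            hi = np.maximum(parent1, parent2)
            span = hi - lo
            return rng.uniform(lo - alpha * span, hi + alpha * span)

        p1 = np.array([0.2, 1.5, 3.0])   # e.g. candidate kinetic parameters of a reaction model
        p2 = np.array([0.4, 1.0, 2.0])
        print(blx_alpha(p1, p2))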

  11. The Model Reference Adaptive Fuzzy Control for the Vehicle Semi-Active Suspension

    管继富; 侯朝桢; 顾亮; 武云鹏


    The LQG control system is employed as the vehicle suspension's optimal target system, which has the ability to adapt to road conditions and vehicle speed within a limited bandwidth. In order to keep the optimal performance when the suspension parameters change, a model reference adaptive fuzzy control (MRAFC) strategy is presented. The LQG control system serves as the reference model in the MRAFC system. The simulation results indicate that the presented MRAFC system can adapt to parameter variations of the vehicle suspension and track the optimality of the LQG control system; the presented vehicle suspension MRAFC system has the ability to adapt to road conditions and suspension parameter changes.

  12. A Reference Model for Distribution Grid Control in the 21st Century

    Taft, Jeffrey D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); De Martini, Paul [California Inst. of Technology (CalTech), Pasadena, CA (United States); Kristov, Lorenzo [California Independent System Operator, Folsom, CA (United States)


    Intensive changes in the structure of the grid due to the penetration of new technologies, coupled with changing societal needs, are outpacing the capabilities of traditional grid control systems. The gap is widening at an accelerating rate, with the biggest impacts occurring at the distribution level due to the widespread adoption of diverse distribution-connected energy resources (DER). This paper outlines the emerging distribution grid control environment, defines the new distribution control problem, and provides a distribution control reference model. The reference model offers a schematic representation of the problem domain to inform development of system architecture and control solutions for the high-DER electric system.

  13. Transmission Line Jobs and Economic Development Impact (JEDI) Model User Reference Guide

    Goldberg, M.; Keyser, D.


    The Jobs and Economic Development Impact (JEDI) models, developed through the National Renewable Energy Laboratory (NREL), are freely available, user-friendly tools that estimate the potential economic impacts of constructing and operating power generation projects for a range of conventional and renewable energy technologies. The Transmission Line JEDI model can be used to field questions about the economic impacts of transmission lines in a given state, region, or local community. This Transmission Line JEDI User Reference Guide was developed to provide basic instruction on operating the model and understanding the results. This guide also provides information on the model's underlying methodology, as well as the parameters and references used to develop the cost data contained in the model.

  14. An automatic and accurate method of full heart segmentation from CT image based on linear gradient model

    Yang, Zili


    Heart segmentation is an important auxiliary method in the diagnosis of many heart diseases, such as coronary heart disease and atrial fibrillation, and in the planning of tumor radiotherapy. Most of the existing methods for full heart segmentation treat the heart as a whole and cannot accurately extract the bottom of the heart. In this paper, we propose a new method based on a linear gradient model to segment the whole heart from CT images automatically and accurately. Twelve cases were used to test this method; accurate segmentation results were achieved and confirmed by clinical experts. The results can provide reliable clinical support.

  15. Adaptive Software Development supported by an Automated Process: a Reference Model

    AFFONSO, F. J.


    Full Text Available This paper presents a reference model, realized as an automated process, to assist adaptive software development at runtime, also known as Self-adaptive Systems (SaS) at runtime. This type of software has specific characteristics compared to traditional software, since it allows changes (structural or behavioral) to be incorporated at runtime. Automated processes have been used as a feasible solution to conduct software adaptation at runtime, minimizing human involvement (developers) and speeding up the execution of tasks. In parallel, reference models have been used to aggregate knowledge and architectural artifacts, since they capture the essence of systems in specific domains. However, at present there is no reflection-based reference model for the automation of software adaptation at runtime. In this scenario, this paper presents a reference model based on reflection, as an automated process, for the development of software systems that require adaptation at runtime. To show the applicability of the model, a case study was conducted and a good perspective for efficiently contributing to the area of SaS was obtained.

  16. Personalized drug administration for cancer treatment using Model Reference Adaptive Control.

    Babaei, Naser; Salamci, Metin U


    A new Model Reference Adaptive Control (MRAC) approach is proposed for the nonlinear regulation problem of cancer treatment via chemotherapy. We suggest an approach for determining an optimal anticancer drug delivery scenario for cancer patients without prior knowledge of nonlinear model structure and parameters by compounding State Dependent Riccati Equation (SDRE) and MRAC which will lead to personalized drug administration. Several approaches have been proposed for eradicating cancerous cells in nonlinear tumor growth model. The main difficulty in these approaches is the requirement of nonlinear model parameters, which are unknown to physicians in reality. To cope with this shortage, we first determine the drug delivery scenario for a reference patient with known mathematical model and parameters via SDRE technique, and by using the proposed approach we adapt the drug administration scenario for another cancer patient despite unknown nonlinear model structure and model parameters. We propose an efficient approach to determine drug administration which will help physicians for prescribing a chemotherapy protocol for a cancer patient by regulating the drug delivery scenario of the reference patient. Stabilizing the tumor growth nonlinear model has been achieved via full state feedback techniques and yields a near optimal solution to cancer treatment problem. Numerical simulations show the effectiveness of the proposed algorithm for eradicating tumor lumps with different sizes in different patients.

  17. Automatic human body modeling for vision-based motion capture system using B-spline parameterization of the silhouette

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Mas, Ramon; Perales, Francisco J.


    Human motion capture has a wide variety of applications, and in vision-based motion capture systems a major issue is the human body model and its initialization. We present a computer vision algorithm for building a human body model skeleton in an automatic way. The algorithm is based on the analysis of the human shape. We decompose the body into its main parts by computing the curvature of a B-spline parameterization of the human contour. This algorithm has been applied in a context where the user is standing in front of a camera stereo pair. The process is completed after the user assumes a predefined initial posture so as to identify the main joints and construct the human model. Using this model, the initialization problem of a vision-based markerless motion capture system of the human body is solved.
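
    The curvature analysis described above can be illustrated with SciPy's B-spline routines: fit a periodic spline to 2-D contour points and evaluate its curvature, whose extrema would mark candidate body-part boundaries. The contour below is synthetic and the smoothing factor is an arbitrary choice.

        # Fit a periodic B-spline to a closed 2-D contour and compute its signed curvature.
        import numpy as np
        from scipy.interpolate import splprep, splev

        t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
        x = (1.0 + 0.3 * np.cos(5 * t)) * np.cos(t)        # synthetic closed silhouette
        y = (1.0 + 0.3 * np.cos(5 * t)) * np.sin(t)

        (tck, u) = splprep([x, y], s=0.01, per=True)       # periodic B-spline fit
        dx, dy = splev(u, tck, der=1)
        ddx, ddy = splev(u, tck, der=2)
        curvature = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
        print("max |curvature| at parameter index:", int(np.argmax(np.abs(curvature))))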

  18. Structural families in loops of homologous proteins: automatic classification, modelling and application to antibodies.

    Martin, A C; Thornton, J M


    Loop regions of polypeptide in homologous proteins may be classified into structural families. A method is described by which this classification may be performed automatically and "key residue" templates, which may be responsible for the loop adopting a given conformation, are defined. The technique has been applied to the hypervariable loops of antibodies and the results are compared with the previous definition of canonical classes. We have extended these definitions and provide complete sets of structurally determining residues (SDRs) for the observed clusters including the first set of key residues for seven-residue CDR-H3 loops.

  19. Pedestrians' intention to jaywalk: Automatic or planned? A study based on a dual-process model in China.

    Xu, Yaoshan; Li, Yongjuan; Zhang, Feng


    The present study investigates the determining factors of Chinese pedestrians' intention to violate traffic laws using a dual-process model. This model divides the cognitive processes of intention formation into controlled analytical processes and automatic associative processes. Specifically, the process explained by the augmented theory of planned behavior (TPB) is controlled, whereas the process based on past behavior is automatic. The results of a survey conducted on 323 adult pedestrian respondents showed that the two added TPB variables had different effects on the intention to violate, i.e., personal norms were significantly related to traffic violation intention, whereas descriptive norms were non-significant predictors. Past behavior significantly but uniquely predicted the intention to violate: the results of the relative weight analysis indicated that the largest percentage of variance in pedestrians' intention to violate was explained by past behavior (42%). According to the dual-process model, therefore, pedestrians' intention formation relies more on habit than on cognitive TPB components and social norms. The implications of these findings for the development of intervention programs are discussed.

  20. Automatic parametrization of non-polar implicit solvent models for the blind prediction of solvation free energies

    Wang, Bao; Zhao, Zhixiong; Wei, Guo-Wei


    In this work, a systematic protocol is proposed to automatically parametrize the non-polar part of implicit solvent models with polar and non-polar components. The proposed protocol utilizes either the classical Poisson model or the Kohn-Sham density functional theory based polarizable Poisson model for modeling polar solvation free energies. Four sets of radius parameters are combined with four sets of charge force fields to arrive at a total of 16 different parametrizations for the polar component. For the non-polar component, either the standard model of surface area, molecular volume, and van der Waals interactions or a model with atomic surface areas and molecular volume is employed. To automatically parametrize a non-polar model, we develop scoring and ranking algorithms to classify solute molecules. Their non-polar parametrization is then obtained based on the assumption that similar molecules have similar parametrizations. A large database with 668 experimental data points is collected and employed to validate the proposed protocol. The lowest leave-one-out root mean square (RMS) error for the database is 1.33 kcal/mol. Additionally, five subsets of the database, i.e., SAMPL0-SAMPL4, are employed to further demonstrate the proposed protocol. The optimal RMS errors are 0.93, 2.82, 1.90, 0.78, and 1.03 kcal/mol, respectively, for the SAMPL0, SAMPL1, SAMPL2, SAMPL3, and SAMPL4 test sets. The corresponding RMS errors for the polarizable Poisson model with the Amber Bondi radii are 0.93, 2.89, 1.90, 1.16, and 1.07 kcal/mol, respectively.

  1. Image Semantic Automatic Annotation by Relevance Feedback

    ZHANG Tong-zhen; SHEN Rui-min


    A large semantic gap exists between content-based image retrieval (CBIR) and high-level semantics, so additional semantic information should be attached to the images; this involves three aspects: the semantic representation model, semantic information building, and semantic retrieval techniques. In this paper, we introduce an associated semantic network and an automatic semantic annotation system. In the system, a semantic network model is employed as the semantic representation model; it uses semantic keywords, a linguistic ontology and low-level features in semantic similarity calculation. Through several rounds of users' relevance feedback, the semantic network is enriched automatically. To speed up the growth of the semantic network and obtain balanced annotation, semantic seeds and semantic loners are employed in particular.

  2. Model benchmarking and reference signals for angled-beam shear wave ultrasonic nondestructive evaluation (NDE) inspections

    Aldrin, John C.; Hopkins, Deborah; Datuin, Marvin; Warchol, Mark; Warchol, Lyudmila; Forsyth, David S.; Buynak, Charlie; Lindgren, Eric A.


    For model benchmark studies, the accuracy of the model is typically evaluated based on the change in response relative to a selected reference signal. The use of a side drilled hole (SDH) in a plate was investigated as a reference signal for angled beam shear wave inspection for aircraft structure inspections of fastener sites. Systematic studies were performed with varying SDH depth and size, and varying the ultrasonic probe frequency, focal depth, and probe height. Increased error was observed with the simulation of angled shear wave beams in the near-field. Even more significant, asymmetry in real probes and the inherent sensitivity of signals in the near-field to subtle test conditions were found to provide a greater challenge with achieving model agreement. To achieve quality model benchmark results for this problem, it is critical to carefully align the probe with the part geometry, to verify symmetry in probe response, and ideally avoid using reference signals from the near-field response. Suggested reference signals for angled beam shear wave inspections include using the 'through hole' corner specular reflection signal and the 'full skip' signal off of the far wall from the side drilled hole.

  3. Antecedents of Academic Emotions: Testing the Internal/External Frame of Reference Model for Academic Enjoyment

    Goetz, Thomas; Frenzel, Anne C.; Hall, Nathan C.; Pekrun, Reinhard


    The present study focused on students' academic enjoyment as predicted by achievement in multiple academic domains. Assumptions were based on Marsh's internal/external (I/E) frame of reference model and Pekrun's control-value theory of achievement emotions, and were tested in a sample of 1380 German students from grades 5 to 10. Students' academic…

  4. Power System Stabilizer Design Based on Model Reference Robust Fuzzy Control

    Mohammad Reza Yazdchi


    Full Text Available Power System Stabilizers (PSS) are used to generate supplementary damping control signals for the excitation system in order to damp the Low Frequency Oscillations (LFO) of the electric power system. The PSS is usually designed based on classical control approaches, but this Conventional PSS (CPSS) has some problems in power system control and stability enhancement. To overcome the drawbacks of the CPSS, numerous techniques have been proposed in the literature. In this study a new method based on Model Reference Robust Fuzzy Control (MRRFC) is considered to design the PSS. In this new approach, an optimal PSS is first designed for the nominal operating condition, and then power system identification is used to obtain a reference model of the power system including the optimal PSS. When the system operating condition changes from the nominal condition, the error between the obtained reference model and the power system response is sent to a fuzzy controller, and this fuzzy controller provides the stabilizing signal for damping power system oscillations just like a PSS. For the reference model identification, a PID-type PSS (PID-PSS) is considered for damping electric power system oscillations. The parameters of this PID-PSS are tuned based on a hybrid Genetic Algorithm (GA) optimization method. The proposed MRRFC is evaluated against the CPSS on a single-machine infinite-bus power system considering system parametric uncertainties. The simulation results clearly indicate the effectiveness and validity of the proposed method.

  5. A reference model and technical framework for mobile social software for learning

    De Jong, Tim; Specht, Marcus; Koper, Rob


    De Jong,T., Specht, M., & Koper, R. (2008). A reference model and technical framework for mobile social software for learning. In I. A. Sánchez & P. Isaías (Eds.), Proceedings of the IADIS Mobile Learning Conference 2008 (pp. 206-210). April, 11-13, 2008, Carvoeiro, Portugal.

  6. The Parametric Model for PLC Reference Channels and its Verification in Real PLC Environment


    For the expansion of PLC systems, it is necessary to have a detailed knowledge of the PLC transmission channel properties. This contribution briefly discusses characteristics of the PLC environment and a classification of PLC transmission channels. The main part is focused on the parametric model for PLC reference channels and its verification in the real PLC environment utilizing experimental measurements.
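    As an illustration of what a parametric PLC reference channel can look like, the sketch below evaluates a common multipath formulation in the spirit of the Zimmermann-Dostert reference channels; the path gains, path lengths and attenuation parameters are illustrative assumptions, not the parameters verified in the paper.

```python
import numpy as np

def plc_channel_response(f, gains, lengths, a0=0.0, a1=7.8e-10, k=1.0, v_p=1.5e8):
    """Parametric multipath PLC channel frequency response H(f).

    Illustrative only: follows a common Zimmermann-Dostert style reference-channel
    formulation; all numerical parameters here are assumptions for demonstration.
    """
    f = np.atleast_1d(f).astype(float)
    H = np.zeros_like(f, dtype=complex)
    for g_i, d_i in zip(gains, lengths):
        attenuation = np.exp(-(a0 + a1 * f**k) * d_i)   # cable attenuation term
        delay = np.exp(-2j * np.pi * f * d_i / v_p)     # propagation delay term
        H += g_i * attenuation * delay
    return H

# Example: a hypothetical 4-path channel evaluated from 1 to 30 MHz
freqs = np.linspace(1e6, 30e6, 500)
H = plc_channel_response(freqs, gains=[0.64, 0.38, -0.15, 0.05],
                         lengths=[200.0, 222.4, 244.8, 267.5])
```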

  7. A reference-dependent model of the price-quality heuristic

    Gneezy, A.; Gneezy, U.; Lauga, D.O.


    People often use price as a proxy for quality, resulting in a positive correlation between prices and product liking, known as the "price- quality" (P-Q) heuristic. Using data from three experiments conducted at a winery, this article offers a more complex and complete reference-dependent model of

  8. Certified reference materials for food packaging specific migration tests: development, validation and modelling

    Stoffers, N.H.


    Keywords: certified reference materials; diffusion; food contact materials; food packaging; laurolactam; migration modelling; nylon; specific migration. This thesis compiles several research topics

  9. Toward a Dexter-based model for open hypermedia: Unifying embedded references and link objects

    Grønbæk, Kaj; Trigg, Randall Hagner


    Nominated for the Doug Engelbart best paper award. This paper discusses experiences and lessons learned from the design of an open hypermedia system, one that integrates applications and data not "owned" by the hypermedia. The Dexter Hypertext Reference Model was used as the basis for the design...

  10. A reference-dependent model of the price-quality heuristic

    A. Gneezy; U. Gneezy; D.O. Lauga


    People often use price as a proxy for quality, resulting in a positive correlation between prices and product liking, known as the "price- quality" (P-Q) heuristic. Using data from three experiments conducted at a winery, this article offers a more complex and complete reference-dependent model of t

  11. Intonation in unaccompanied singing: accuracy, drift, and a model of reference pitch memory.

    Mauch, Matthias; Frieler, Klaus; Dixon, Simon


    This paper presents a study on intonation and intonation drift in unaccompanied singing, and proposes a simple model of reference pitch memory that accounts for many of the effects observed. Singing experiments were conducted with 24 singers of varying ability under three conditions (Normal, Masked, Imagined). Over the duration of a recording, ∼50 s, a median absolute intonation drift of 11 cents was observed. While smaller than the median note error (19 cents), drift was significant in 22% of recordings. Drift magnitude did not correlate with other measures of singing accuracy, singing experience, or the conditions tested. Furthermore, it is shown that neither a static intonation memory model nor a memoryless interval-based intonation model can account for the accuracy and drift behavior observed. The proposed causal model provides a better explanation as it treats the reference pitch as a changing latent variable.
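    A minimal sketch of a causal reference-pitch memory model in the spirit described above: the reference is a latent variable nudged toward each produced note, so small production errors can accumulate as drift. The update rule and the weight `alpha` are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

def simulate_reference_drift(sung_pitches_cents, alpha=0.1):
    """Toy causal reference-pitch memory model (a sketch, not the paper's exact model).

    The latent reference pitch follows each sung note with weight `alpha`, so
    intonation errors can accumulate as drift over the course of a recording.
    """
    reference = 0.0                  # start at the nominal reference (0 cents)
    trajectory = []
    for pitch in sung_pitches_cents:
        trajectory.append(reference)
        reference = (1 - alpha) * reference + alpha * pitch   # reference follows production
    return np.array(trajectory)
```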

  12. Automatic video summarization driven by a spatio-temporal attention model

    Barland, R.; Saadane, A.


    According to the literature, automatic video summarization techniques can be classified into two categories according to the nature of the output: "video skims", which are generated using portions of the original video, and "key-frame sets", which correspond to the images, selected from the original video, having a significant semantic content. The difference between these two categories is reduced when we consider automatic procedures. Most of the published approaches are based on the image signal and use either pixel characterization or histogram techniques or image decomposition by blocks. However, few of them integrate properties of the Human Visual System (HVS). In this paper, we propose to extract key-frames for video summarization by studying the variations of salient information between two consecutive frames. For each frame, a saliency map is produced simulating the human visual attention by a bottom-up (signal-dependent) approach. This approach includes three parallel channels for processing three early visual features: intensity, color and temporal contrasts. For each channel, the variations of the salient information between two consecutive frames are computed. These outputs are then combined to produce the global saliency variation which determines the key-frames. Psychophysical experiments have been defined and conducted to analyze the relevance of the proposed key-frame extraction algorithm.
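    The key-frame selection step can be sketched as follows, assuming the per-frame saliency maps (the fusion of the intensity, color and temporal channels) have already been computed; the simple thresholding rule is an illustrative assumption.

```python
import numpy as np

def select_keyframes(saliency_maps, threshold=None):
    """Pick key-frames where the global saliency variation between consecutive
    frames is large (a simplified sketch of the idea described above).

    `saliency_maps` is an array of shape (frames, H, W).
    """
    maps = np.asarray(saliency_maps, dtype=float)
    variation = np.abs(np.diff(maps, axis=0)).mean(axis=(1, 2))   # global variation per frame pair
    if threshold is None:
        threshold = variation.mean() + variation.std()            # assumed simple threshold
    return np.where(variation > threshold)[0] + 1                 # indices of selected key-frames
```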

  13. Automatic construction of 3D basic-semantic models of inhabited interiors using laser scanners and RFID sensors.

    Valero, Enrique; Adan, Antonio; Cerrada, Carlos


    This paper is focused on the automatic construction of 3D basic-semantic models of inhabited interiors using laser scanners with the help of RFID technologies. This is an innovative approach, in whose field scarce publications exist. The general strategy consists of carrying out a selective and sequential segmentation from the cloud of points by means of different algorithms which depend on the information that the RFID tags provide. The identification of basic elements of the scene, such as walls, floor, ceiling, windows, doors, tables, chairs and cabinets, and the positioning of their corresponding models can then be calculated. The fusion of both technologies thus allows a simplified 3D semantic indoor model to be obtained. This method has been tested in real scenes under difficult clutter and occlusion conditions, and has yielded promising results.

  14. Conceptual Model for Automatic Early Warning Information System of Infectious Diseases Based on Internet Reporting Surveillance System



    Objective To establish a conceptual model of automatic early warning of infectious diseases based on the internet reporting surveillance system, with a view to realizing an automated warning system on a daily basis and identifying potential outbreaks of infectious diseases in a timely manner. Methods The statistical conceptual model was established using historic surveillance data with the moving percentile method. Results Based on the infectious disease surveillance information platform, the conceptual model for early warning was established. The parameters, threshold, and revised sensitivity and specificity of the early warning value were adjusted to realize dynamic alerting of infectious diseases on a daily basis. Conclusion The instructive conceptual model of dynamic alerting can be used as a validating tool by institutions of infectious disease surveillance in different districts.
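    A toy version of the moving-percentile alert rule is sketched below; the percentile level and the way historical periods are chosen are assumptions for illustration, standing in for the tuned parameters and thresholds mentioned above.

```python
import numpy as np

def early_warning(current_count, historical_counts, percentile=80):
    """Moving-percentile alert rule (illustrative sketch).

    `historical_counts` would hold case counts from comparable periods of previous
    years; the percentile level is an assumed tuning parameter that trades off
    sensitivity against specificity.
    """
    threshold = np.percentile(historical_counts, percentile)
    return current_count > threshold, threshold

# Example: today's count vs. counts from the same calendar week in past years
alert, thr = early_warning(37, [12, 18, 9, 22, 15, 30, 11], percentile=80)
```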

  15. Automatic white balance: whitebalPR using the dichromatic reflection model

    Sajjaa, Matthias; Fischer, Gregor


    The current color constancy methods are based on processing of the sensor's RGB data to estimate the color of illumination. Unlike previous methods, whitebalPR measures the illuminant by separating diffuse and specular components in a scene, taking advantage of the polarizing effect occurring upon light reflection. Polarization difference imaging (PDI) detects the polarization degree of the neutrally reflected (specular) parts and eliminates the remitted (diffuse) non-polarized colored parts. Different experiments explore the signal level within the polarization difference image in relation to multicolored objects, different object surfaces and the arrangement of light source, camera and object. The results exhibit a high accuracy of measuring the color of illumination for glossy and matte surfaces. As these setups work best for achromatic objects, this new approach to data analysis combines the ideas of the dichromatic reflection model (DRM) and whitebalPR and delivers reliable results for mainly colored objects. Whereas the DRM needs to segment the image according to the objects in the scene, the new proposal (polarization difference line imaging, PDLI) is independent of any knowledge of the image content. A further, arbitrary segmentation of the image into macro-pixels of any size reduces the computational effort and diminishes the impact of noise on the PDI signal. A corresponding experiment visualizes the relationship between the size of the macro-pixels, the angle of incidence and the accuracy of the process. To sum up, by means of the segmentation the PDLI process gains further stabilization in detecting the color of the illuminant while the computational effort decreases.

  16. Extended intrinsic mean spin tensor for turbulence modelling in non-inertial frame of reference

    HUANG Yu-ning; MA Hui-yang


    We investigate the role of the extended intrinsic mean spin tensor introduced in this work for turbulence modelling in a non-inertial frame of reference. It is described by the Euclidean group of transformations and, in particular, its significance and importance lie in the approach of algebraic Reynolds stress modelling, such as in a nonlinear K-ε model. To this end, and to illustrate the effect of the extended intrinsic spin tensor on turbulence modelling, we examine several recently developed nonlinear K-ε models and compare their performance in predicting the homogeneous turbulent shear flow in a rotating frame of reference with LES data. Our results and analysis indicate that, only if the deficiencies of these models and the like are well understood and properly corrected, may more sophisticated nonlinear K-ε models be developed in the near future to better predict complex turbulent flows in a non-inertial frame of reference.

  17. Petroleum Refinery Jobs and Economic Development Impact (JEDI) Model User Reference Guide

    Goldberg, M.


    The Jobs and Economic Development Impact (JEDI) models, developed through the National Renewable Energy Laboratory (NREL), are user-friendly tools utilized to estimate the economic impacts at the local level of constructing and operating fuel and power generation projects for a range of conventional and renewable energy technologies. The JEDI Petroleum Refinery Model User Reference Guide was developed to assist users in employing and understanding the model. This guide provides information on the model's underlying methodology, as well as the parameters and references used to develop the cost data utilized in the model. This guide also provides basic instruction on model add-in features, operation of the model, and a discussion of how the results should be interpreted. Based on project-specific inputs from the user, the model estimates job creation, earning and output (total economic activity) for a given petroleum refinery. This includes the direct, indirect and induced economic impacts to the local economy associated with the refinery's construction and operation phases. Project cost and job data used in the model are derived from the most current cost estimations available. Local direct and indirect economic impacts are estimated using economic multipliers derived from IMPLAN software. By determining the regional economic impacts and job creation for a proposed refinery, the JEDI Petroleum Refinery model can be used to field questions about the added value refineries may bring to the local community.

  18. Evaluation of models proposed for the 1991 revision of the International Geomagnetic Reference Field

    Peddie, N.W.


    The 1991 revision of the International Geomagnetic Reference Field (IGRF) comprises a definitive main-field model for 1985.0, a main-field model for 1990.0, and a forecast secular-variation model for the period 1990-1995. The five 1985.0 main-field models and five 1990.0 main-field models that were proposed have been evaluated by comparing them with one another, with magnetic observatory data, and with Project MAGNET aerial survey data. The comparisons indicate that the main-field models proposed by IZMIRAN, and the secular-variation model proposed jointly by the British Geological Survey and the US Naval Oceanographic Office, should be assigned relatively lower weight in the derivation of the new IGRF models. -Author

  19. Characterization and adaptive fuzzy model reference control for a magnetic levitation system

    J.J. Hernández-Casañas


    Full Text Available This paper shows the implementation of a fuzzy controller applied to magnetic levitation. To this end, the magnetic actuator was characterized using ANSYS® analysis. The control law was a Mamdani PD implemented with two microcontrollers; to obtain a smooth control signal, a reference model was used. A learning scheme was used to update the consequents of the fuzzy rules. Different reference signals and disturbances were applied to the system to show the robustness of the controller. Finally, LabVIEW® was used to plot the results.

  20. Neural networks for action representation underlying automatic mimicry: A functional magnetic-resonance imaging and dynamic causal modeling study

    Akihiro T Sasaki


    Full Text Available Automatic mimicry is based on the tight linkage between motor and perceptual action representations in which internal models play a key role. Based on the anatomical connection, we hypothesized that the direct effective connectivity from the posterior superior temporal sulcus (pSTS) to the ventral premotor area (PMv) formed an inverse internal model, converting visual representation into a motor plan, and that reverse connectivity formed a forward internal model, converting the motor plan into a sensory outcome of action. To test this hypothesis, we employed dynamic causal-modeling analysis with functional magnetic-resonance imaging. Twenty-four normal participants underwent a change-detection task involving two visually-presented balls that were either manually rotated by the investigator’s right hand (‘Hand’) or automatically rotated. The effective connectivity from the pSTS to the PMv was enhanced by hand observation and suppressed by execution, corresponding to the inverse model. Opposite effects were observed from the PMv to the pSTS, suggesting the forward model. Additionally, both execution and hand observation commonly enhanced the effective connectivity from the pSTS to the inferior parietal lobule (IPL), the IPL to the primary sensorimotor cortex (S/M1), the PMv to the IPL, and the PMv to the S/M1. Representation of the hand action therefore was implemented in the motor system including the S/M1. During hand observation, effective connectivity toward the pSTS was suppressed whereas that toward the PMv and S/M1 was enhanced. Thus the action-representation network acted as a dynamic feedback-control system during action observation.

  1. Automatic calibration of a parsimonious ecohydrological model in a sparse basin using the spatio-temporal variation of the NDVI

    Ruiz-Pérez, Guiomar; Manfreda, Salvatore; Caylor, Kelly; Francés, Félix


    Drylands are extensive, covering 30% of the Earth's land surface and 50% of Africa. In these water-controlled areas, vegetation plays a key role in the water cycle. Ecohydrological models provide a tool to investigate the relationships between vegetation and water resources. However, studies in Africa often face the problem that many ecohydrological models have quite extensive parametric requirements, while available data are scarce. Therefore, there is a need to search for new sources of information such as satellite data. The advantages of using satellite data in dry regions have been thoroughly demonstrated and studied, but the use of this kind of data requires introducing the concept of spatio-temporal information. In this context, we have to deal with the fact that there is a lack of statistics and methodologies for incorporating spatio-temporal data during the calibration and validation processes. This research is intended as a contribution in that direction. The ecohydrological model used was calibrated in the Upper Ewaso river basin in Kenya using only NDVI (Normalized Difference Vegetation Index) data from MODIS. An automatic calibration methodology based on Singular Value Decomposition techniques was proposed in order to calibrate the model taking into account the temporal variation and also the spatial pattern of the observed NDVI and the simulated LAI. The obtained results have demonstrated that: (1) satellite data are an extraordinarily useful source of information and can be used to implement ecohydrological models in dry regions; (2) the proposed model, calibrated only using satellite data, is able to reproduce the vegetation dynamics (in time and in space) and also the observed discharge at the outlet point; and (3) the proposed automatic calibration methodology works satisfactorily and includes spatio-temporal data, in other words, it takes into account the temporal variation and the spatial pattern of the analyzed data.
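    One plausible way to score the agreement between simulated LAI and observed NDVI in both time and space is to compare their leading SVD modes, as sketched below; this is an illustrative objective in the spirit of the methodology, not the authors' exact formulation.

```python
import numpy as np

def svd_pattern_error(sim_lai, obs_ndvi, n_modes=3):
    """Compare spatio-temporal patterns of simulated LAI and observed NDVI using
    their leading SVD modes (an illustrative objective, not the paper's exact one).

    Both inputs are (time, pixels) matrices over the same grid and dates.
    """
    def leading_modes(X):
        X = X - X.mean(axis=0)                           # remove temporal mean per pixel
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return U[:, :n_modes], Vt[:n_modes], s[:n_modes] / s.sum()

    Us, Vs, _ = leading_modes(np.asarray(sim_lai, dtype=float))
    Uo, Vo, wo = leading_modes(np.asarray(obs_ndvi, dtype=float))
    # Penalize mismatch of temporal (U) and spatial (V) modes, weighted by variance explained
    temporal = np.sum(wo * (1 - np.abs(np.sum(Us * Uo, axis=0))))
    spatial = np.sum(wo * (1 - np.abs(np.sum(Vs * Vo, axis=1))))
    return temporal + spatial
```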

  2. Earth Global Reference Atmospheric Model 2007 (Earth-GRAM07) Applications for the NASA Constellation Program

    Leslie, Fred W.; Justus, C. G.


    Engineering models of the atmosphere are used extensively by the aerospace community for design issues related to vehicle ascent and descent. The Earth Global Reference Atmosphere Model version 2007 (Earth-GRAM07) is the latest in this series and includes a number of new features. Like previous versions, Earth-GRAM07 provides both mean values and perturbations for density, temperature, pressure, and winds, as well as monthly- and geographically-varying trace constituent concentrations. From 0 km to 27 km, thermodynamics and winds are based on the National Oceanic and Atmospheric Administration Global Upper Air Climatic Atlas (GUACA) climatology. For altitudes between 20 km and 120 km, the model uses data from the Middle Atmosphere Program (MAP). Above 120 km, Earth-GRAM07 now provides users with a choice of three thermosphere models: the Marshall Engineering Thermosphere (MET-2007) model; the Jacchia-Bowman 2006 thermosphere model (JB2006); and the Naval Research Laboratory Mass Spectrometer and Incoherent Scatter Radar Extended model (NRLMSISE-00) with the associated Harmonic Wind Model (HWM-93). In place of these datasets, Earth-GRAM07 has the option of using the new 2006 revised Range Reference Atmosphere (RRA) data, the earlier (1983) RRA data, or the user may also provide their own data as an auxiliary profile. Refinements of the perturbation model are also discussed, which include wind shears more similar to those observed at the Kennedy Space Center than in the previous version, Earth-GRAM99.

  3. Model reference control of conceptual clutched train substituted for vehicular friction clutch

    Huang Wei


    Full Text Available In order to essentially reduce the heat generation and frictional dissipation caused by friction clutch engagement, a conceptual design of a clutched train combined with a hydrostatic braking system is proposed as a novel substitute for the vehicular friction clutch. Potential collateral merits of the clutched train include improved service life and control accuracy, since less friction heat is generated during the synchronization process. The parameters of the clutched train are obtained by Genetic Algorithm optimization aiming at axial space saving and light weight. A control-oriented model of the proposed concept is derived and used in Model Reference Control development. Based on the optimum parameters of the clutched train, the simulation results demonstrate the functionality of the clutched train during vehicle standing-start and the well-behaved Model Reference Control in smoothing the clutched train synchronization process.

  4. A reference Earth model for the heat producing elements and associated geoneutrino flux

    Huang, Yu; Mantovani, Fabio; Rudnick, Roberta L; McDonough, William F


    The recent geoneutrino experimental results from the KamLAND and Borexino detectors reveal the usefulness of analyzing the Earth's geoneutrino flux, as it provides a constraint on the strength of the radiogenic heat power and this, in turn, provides a test of compositional models of the bulk silicate Earth (BSE). This flux is dependent on the amount and distribution of heat producing elements (HPEs: U, Th and K) in the Earth's interior. We have developed a geophysically-based, three-dimensional global reference model for the abundances and distributions of HPEs in the BSE. The structure and composition of the outermost portion of the Earth, the crust and underlying lithospheric mantle, is detailed in the reference model; this portion of the Earth has the greatest influence on the geoneutrino fluxes. The reference model combines three existing geophysical models of the global crust and yields an average crustal thickness of 34.4 ± 4.1 km in the continents and 8.0 ± 2.7 km in the oceans. In situ seismic velocity provided...

  5. A hybrid model for automatic identification of risk factors for heart disease.

    Yang, Hui; Garibaldi, Jonathan M


    Coronary artery disease (CAD) is the leading cause of death in both the UK and worldwide. The detection of related risk factors and tracking their progress over time is of great importance for early prevention and treatment of CAD. This paper describes an information extraction system that was developed to automatically identify risk factors for heart disease in medical records while the authors participated in the 2014 i2b2/UTHealth NLP Challenge. Our approaches rely on several natural language processing (NLP) techniques such as machine learning, rule-based methods, and dictionary-based keyword spotting to cope with complicated clinical contexts inherent in a wide variety of risk factors. Our system achieved encouraging performance on the challenge test data with an overall micro-averaged F-measure of 0.915, which was competitive with the best system (F-measure of 0.927) of this challenge task.
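    The micro-averaged F-measure quoted above pools true positives, false positives and false negatives over all documents before computing precision and recall. A generic sketch (not the official i2b2 evaluation script) follows; the annotation format is a hypothetical set of (document, risk-factor) pairs.

```python
def micro_f_measure(system_spans, gold_spans):
    """Micro-averaged F-measure over pooled annotations (generic sketch).

    Each element of the input lists is a set of (document_id, risk_factor) pairs
    for one document, produced by the system and by the gold standard respectively.
    """
    tp = sum(len(sys & gold) for sys, gold in zip(system_spans, gold_spans))
    fp = sum(len(sys - gold) for sys, gold in zip(system_spans, gold_spans))
    fn = sum(len(gold - sys) for sys, gold in zip(system_spans, gold_spans))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical example with one document
f1 = micro_f_measure([{("d1", "CAD"), ("d1", "smoker")}], [{("d1", "CAD")}])
```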

  6. A system for environmental protection. Reference dose models for fauna and flora

    Pentreath, R.J. [Environment Agency, Bristol (United Kingdom); Woodhead, D.S.


    Ideas have already been published on how the current problems relating to environmental protection could be explicitly addressed. One of the basic cornerstones of the proposed system is the use of reference dose models for fauna and flora, in a manner analogous to those used for the human species. The concept is that, for a number of both aquatic and terrestrial fauna and flora types, 'reference' dose models, and dose per unit (internal and external) exposure tables, could be compiled. These would then be used to draw broad conclusions on the likely effects for such organisms in relation to three broad environmental end points of concern: life shortening; impairment of reproductive capacity; and scorable, cytogenetic damage. The level of complexity of the dose models needs to be commensurate with the morphological complexity of the modelled organism, its size, and the data bases which are either available or could reasonably be obtained. The most basic models considered are either solid ellipsoids or spheres, with fixed dimensions. Secondary models contain internal, but relatively simple geometric features representative of those key organs or tissues for which more precise estimates of dose are required. Their level of complexity is also a function of different internal and external sources of radiation, and expected differences in radiosensitivities. Tertiary models, of greater complexity, are considered to be of value only for higher vertebrates. The potential derivation and use of all three sets of models is briefly discussed. (author)

  7. An active contour-based atlas registration model applied to automatic subthalamic nucleus targeting on MRI: method and validation.

    Duay, Valérie; Bresson, Xavier; Castro, Javier Sanchez; Pollo, Claudio; Cuadra, Meritxell Bach; Thiran, Jean-Philippe


    This paper presents a new non-parametric atlas registration framework, derived from the optical flow model and the active contour theory, applied to automatic subthalamic nucleus (STN) targeting in deep brain stimulation (DBS) surgery. In a previous work, we demonstrated that the STN position can be predicted based on the position of surrounding visible structures, namely the lateral and third ventricles. A STN targeting process can thus be obtained by registering these structures of interest between a brain atlas and the patient image. Here we aim to improve the results of the state-of-the-art targeting methods and at the same time to reduce the computational time. Our simultaneous segmentation and registration model shows mean STN localization errors statistically similar to those of the best-performing registration algorithms tested so far and to the targeting expert's variability. Moreover, the computational time of our registration method is much lower, which is a worthwhile improvement from a clinical point of view.

  8. AMME: an Automatic Mental Model Evaluation to analyse user behaviour traced in a finite, discrete state space.

    Rauterberg, M


    To support the human factors engineer in designing a good user interface, a method has been developed to analyse the empirical data of the interactive user behaviour traced in a finite discrete state space. The sequences of actions produced by the user contain valuable information about the mental model of this user, the individual problem solution strategies for a given task and the hierarchical structure of the task-subtasks relationships. The presented method, AMME, can analyse the action sequences and automatically generate (1) a net description of the task dependent model of the user, (2) a complete state transition matrix, and (3) various quantitative measures of the user's task solving process. The behavioural complexity of task-solving processes carried out by novices has been found to be significantly larger than the complexity of task-solving processes carried out by experts.

  9. Automatic brain matter segmentation of computed tomography images using a statistical model: A tool to gain working time!

    Bertè, Francesco; Lamponi, Giuseppe; Bramanti, Placido; Calabrò, Rocco S


    Brain computed tomography (CT) is a useful diagnostic tool for the evaluation of several neurological disorders due to its accuracy, reliability, safety and wide availability. In this field, a potentially interesting research topic is the automatic segmentation and recognition of medical regions of interest (ROIs). Herein, we propose a novel automated method, based on the use of the active appearance model (AAM), for the segmentation of brain matter in CT images to assist radiologists in the evaluation of the images. The method, which was applied to 54 CT images from a sample of outpatients affected by cognitive impairment, enabled us to generate a model overlapping the original image with quite good precision. Since CT neuroimaging is in widespread use for detecting neurological disease, including neurodegenerative conditions, the development of automated tools enabling technicians and physicians to reduce working time and reach a more accurate diagnosis is needed.

  10. Language modeling for automatic speech recognition of inflective languages an applications-oriented approach using lexical data

    Donaj, Gregor


    This book covers language modeling and automatic speech recognition for inflective languages (e.g. Slavic languages), which represent roughly half of the languages spoken in Europe. These languages do not perform as well as English in speech recognition systems and it is therefore harder to develop an application with sufficient quality for the end user. The authors describe the most important language features for the development of a speech recognition system. This is then presented through the analysis of errors in the system and the development of language models and their inclusion in speech recognition systems, which specifically address the errors that are relevant for targeted applications. The error analysis is done with regard to morphological characteristics of the word in the recognized sentences. The book is oriented towards speech recognition with large vocabularies and continuous and even spontaneous speech. Today such applications work with a rather small number of languages compared to the nu...

  11. Application of Model Reference Adaptive Control System to Instrument Pointing System /IPS/

    Waites, H. B.


    A Model Reference Adaptive Controller (MRAC) is derived for a Shuttle payload called the Instrument Pointing System (IPS). The unique features of this MRAC design are that total state feedback is not required, that the internal structure of the model is independent of the internal structure of the IPS, and that the model input is of bounded variation and not required a priori. An application of Liapunov's stability theorems is used to synthesize a control signal which assures MRAC asymptotic stability. Exponential observers are used to obtain the necessary state information to implement the control synthesis. Results are presented which show how effectively the MRAC can maneuver the IPS.

  12. Wave Disturbance Reduction of a Floating Wind Turbine Using a Reference Model-based Predictive Control

    Christiansen, Søren; Tabatabaeipour, Seyed Mojtaba; Bak, Thomas;


    Floating wind turbines are considered as a new and promising solution for reaching higher wind resources beyond the water depth restriction of monopile wind turbines. But on a floating structure, the wave-induced loads significantly increase the oscillations of the structure. Furthermore, using a controller designed for an onshore wind turbine yields instability in the fore-aft rotation. In this paper, we propose a general framework, where a reference model models the desired closed-loop behavior of the system. Model predictive control combined with a state estimator finds the optimal rotor blade… compared to a baseline floating wind turbine controller at the cost of more pitch action…

  13. Towards SWOT data assimilation for hydrology : automatic calibration of global flow routing model parameters in the Amazon basin

    Mouffe, M.; Getirana, A.; Ricci, S. M.; Lion, C.; Biancamaria, S.; Boone, A.; Mognard, N. M.; Rogel, P.


    The Surface Water and Ocean Topography (SWOT) mission is a swath mapping radar interferometer that will provide global measurements of water surface elevation (WSE). The number of revisits depends upon latitude and varies from two (low latitudes) to ten (high latitudes) per 22-day orbit repeat period. The high resolution and the global coverage of the SWOT data open the way for new hydrology studies. Here, the aim is to investigate the use of virtually generated SWOT data to improve discharge simulation using data assimilation techniques. In the framework of the SWOT virtual mission (VM), this study presents the first results of the automatic calibration of a global flow routing (GFR) scheme using SWOT VM measurements for the Amazon basin. The Hydrological Modeling and Analysis Platform (HyMAP) is used along with the MOCOM-UA multi-criteria global optimization algorithm. HyMAP has a 0.25-degree spatial resolution and runs at the daily time step to simulate discharge, water levels and floodplains. The surface runoff and baseflow drainage derived from the Interactions Sol-Biosphère-Atmosphère (ISBA) model are used as inputs for HyMAP. Previous works showed that the use of ENVISAT data enables the reduction of the uncertainty on some of the hydrological model parameters, such as river width and depth, Manning roughness coefficient and groundwater time delay. In the framework of the SWOT preparation work, the automatic calibration procedure was applied using SWOT VM measurements. For this Observing System Experiment (OSE), the synthetic data were obtained by applying an instrument simulator (representing realistic SWOT errors) for one hydrological year to HyMAP-simulated WSE using a "true" set of parameters. Only pixels representing rivers larger than 100 meters within the Amazon basin are considered to produce SWOT VM measurements. The automatic calibration procedure leads to the estimation of optimal parameters minimizing objective functions that formulate the difference

  14. Automatic media-adventitia IVUS image segmentation based on sparse representation framework and dynamic directional active contour model.

    Zakeri, Fahimeh Sadat; Setarehdan, Seyed Kamaledin; Norouzi, Somayye


    Segmentation of the arterial wall boundaries from intravascular ultrasound images is an important image processing task in order to quantify arterial wall characteristics such as shape, area, thickness and eccentricity. Since manual segmentation of these boundaries is a laborious and time-consuming procedure, many researchers have attempted to develop (semi-)automatic segmentation techniques as a powerful tool for educational and clinical purposes, but as yet there is no clinically approved method on the market. This paper presents a deterministic-statistical strategy for automatic media-adventitia border detection by a fourfold algorithm. First, a smoothed initial contour is extracted based on classification in the sparse representation framework, which is combined with the dynamic directional convolution vector field. Next, an active contour model is utilized for the propagation of the initial contour toward the borders of interest. Finally, the extracted contour is refined in the leakage, side branch opening and calcification regions based on the image texture patterns. The performance of the proposed algorithm is evaluated by comparing the results to borders manually traced by an expert on 312 different IVUS images obtained from four different patients. The statistical analysis of the results demonstrates the efficiency of the proposed method in media-adventitia border detection, with sufficient consistency in the leakage and calcification regions.

  15. A Combined Methodology of H∞ Fuzzy Tracking Control and Virtual Reference Model for a PMSM

    Djamel Ounnas


    Full Text Available The aim of this paper is to present a new fuzzy tracking strategy for a permanent magnet synchronous machine (PMSM) by using Takagi-Sugeno (T-S) models. A feedback-based fuzzy control with H∞ tracking performance and a concept of virtual reference model are combined to develop a fuzzy tracking controller capable of tracking a reference signal and ensuring a minimum effect of disturbance on the PMSM system. First, a T-S fuzzy model is used to represent the PMSM nonlinear system with disturbance. Next, an integral fuzzy tracking control based on the concept of virtual desired variables (VDVs) is formulated to simplify the design of the virtual reference model and the control law. Finally, based on this concept, a two-stage design procedure is developed: (i) determine the VDVs from the nonlinear system output equation and generalized kinematic constraints; (ii) calculate the feedback controller gains by solving a set of linear matrix inequalities (LMIs). Simulation results are provided to demonstrate the validity and effectiveness of the proposed method.

  16. Fit Gap Analysis – The Role of Business Process Reference Models

    Dejan Pajk


    Full Text Available Enterprise resource planning (ERP systems support solutions for standard business processes such as financial, sales, procurement and warehouse. In order to improve the understandability and efficiency of their implementation, ERP vendors have introduced reference models that describe the processes and underlying structure of an ERP system. To select and successfully implement an ERP system, the capabilities of that system have to be compared with a company’s business needs. Based on a comparison, all of the fits and gaps must be identified and further analysed. This step usually forms part of ERP implementation methodologies and is called fit gap analysis. The paper theoretically overviews methods for applying reference models and describes fit gap analysis processes in detail. The paper’s first contribution is its presentation of a fit gap analysis using standard business process modelling notation. The second contribution is the demonstration of a process-based comparison approach between a supply chain process and an ERP system process reference model. In addition to its theoretical contributions, the results can also be practically applied to projects involving the selection and implementation of ERP systems.

  17. Modelo de referência para estruturar o Seis Sigma nas organizações Reference model to structure the Six Sigma in organizations

    Adriana Barbosa Santos


    Full Text Available This paper presents a reference model for structuring Six Sigma, which results from the incorporation of theories that help increase the strategic potential of Six Sigma for improving organizational performance. The proposed reference model encompasses guidance on certain requirements that are essential for the success of a Six Sigma program. Its theoretical basis was built from studies on the influence of the following factors: strategic orientation and strategic alignment; measurement and management of organizational performance; use of statistics (statistical thinking); people training and specialization; project implementation and management; and use of information technology. Complementing the model proposition, the paper provides empirical evidence on the contribution of the factors identified in the formulation of the reference model, reporting results from case studies carried out in four Brazilian subsidiaries of large multinational companies. The data analysis provided positive evidence that the aforementioned factors effectively influence the success and consolidation of Six Sigma in the companies studied.

  18. Design Novel Model Reference Artificial Intelligence Based Methodology to Optimized Fuel Ratio in IC Engine



    Full Text Available In this research, model reference fuzzy-based control is presented as a robust control for IC engines. The objective of the study is to design controls for IC engines without knowledge of the bounds of the uncertainties or of the dynamic information, by using a fuzzy model reference PD plus mass-of-air controller, while improving the robustness of the PD plus mass-of-air control. A PD plus mass-of-air controller provides for elimination of the mass-of-air effect and ultimate accuracy in the presence of the bounded disturbance/uncertainties, although this method also causes some oscillation. The fuzzy PD plus mass-of-air controller is proposed as a solution to the problems created by this instability. This method has good performance in the presence of uncertainty.

  19. Automatic delineation of geomorphological slope units with r.slopeunits v1.0 and their optimization for landslide susceptibility modeling

    Alvioli, Massimiliano; Marchesini, Ivan; Reichenbach, Paola; Rossi, Mauro; Ardizzone, Francesca; Fiorucci, Federica; Guzzetti, Fausto


    Automatic subdivision of landscapes into terrain units remains a challenge. Slope units are terrain units bounded by drainage and divide lines, but their use in hydrological and geomorphological studies is limited because of the lack of reliable software for their automatic delineation. We present the r.slopeunits software for the automatic delineation of slope units, given a digital elevation model and a few input parameters. We further propose an approach for the selection of optimal parameters controlling the terrain subdivision for landslide susceptibility modeling. We tested the software and the optimization approach in central Italy, where terrain, landslide, and geo-environmental information was available. The software was capable of capturing the variability of the landscape and partitioning the study area into slope units suited for landslide susceptibility modeling and zonation. We expect r.slopeunits to be used in different physiographical settings for the production of reliable and reproducible landslide susceptibility zonations.

  20. Automatic Verification of Biochemical Network Using Model Checking Method%基于模型校核的生化网络自动辨别方法

    Jinkyung Kim; Younghee Lee; Il Moon


    This study focuses on automatic searching and verifying methods for the reachability, transition logics and hierarchical structure in all possible paths of biological processes using model checking. The automatic search and verification of alternative paths within complex and large networks in biological processes can provide a considerable number of solutions, which are difficult to handle manually. Model checking is an automatic method for verifying whether a circuit or a condition, expressed as a concurrent transition system, satisfies a set of properties expressed in a temporal logic, such as computational tree logic (CTL). This article demonstrates that model checking is feasible in biochemical network verification and shows certain advantages over simulation for querying and searching of special behavioral properties in biochemical processes.
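    The reachability queries mentioned above can be illustrated with a minimal breadth-first search over a finite transition system; the toy network and the target predicate below are hypothetical, and a real model checker would additionally handle nested CTL operators.

```python
from collections import deque

def reachable(transitions, start, target_predicate):
    """Exhaustive reachability search over a finite transition system, the kind of
    query a model checker answers automatically (an illustrative sketch only).

    `transitions` maps a state to the set of its successor states.
    """
    seen, queue = {start}, deque([start])
    while queue:
        state = queue.popleft()
        if target_predicate(state):
            return True
        for nxt in transitions.get(state, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical toy network: can the pathway ever reach species state "D"?
net = {"A": {"B", "C"}, "B": {"D"}, "C": {"D"}, "D": set()}
print(reachable(net, "A", lambda s: s == "D"))   # True (corresponds to the CTL query EF target)
```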

  1. Engineering model of the electric drives of separation device for simulation of automatic control systems of reactive power compensation by means of serially connected capacitors

    Juromskiy, V. M.


    A mathematical model is developed for the electric drive of a high-speed separation device in the Simulink dynamic-system modeling environment of MATLAB. The model is focused on the study of automatic control systems for the power factor (cos φ) of an actuator, compensating the reactive component of the total power by switching a capacitor bank in series with the actuator. The model is based on the methodology of structural modeling of dynamic processes.
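    The arithmetic behind series compensation can be illustrated with a simple R-L drive model: choosing the capacitive reactance equal to the inductive reactance drives cos φ toward unity. The component values below are illustrative assumptions, not parameters of the Simulink model described above.

```python
import math

def series_capacitor_for_unity_pf(R_ohm, L_henry, f_hz=50.0):
    """Series capacitance that cancels the inductive reactance of a simple R-L drive
    model so that cos(phi) approaches 1 (illustrative arithmetic only; the paper's
    model switches a capacitor bank rather than sizing a single capacitor).
    """
    X_L = 2 * math.pi * f_hz * L_henry          # inductive reactance of the drive
    C = 1.0 / (2 * math.pi * f_hz * X_L)        # choose X_C = X_L for full compensation
    cos_phi_before = R_ohm / math.hypot(R_ohm, X_L)
    return C, cos_phi_before

# Hypothetical drive: R = 4 ohm, L = 50 mH at 50 Hz
C, pf = series_capacitor_for_unity_pf(R_ohm=4.0, L_henry=0.05)
print(f"C = {C * 1e6:.0f} uF, power factor before compensation = {pf:.2f}")
```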

  2. Low-rank and sparse decomposition based shape model and probabilistic atlas for automatic pathological organ segmentation.

    Shi, Changfa; Cheng, Yuanzhi; Wang, Jinke; Wang, Yadong; Mori, Kensaku; Tamura, Shinichi


    One major limiting factor that prevents the accurate delineation of human organs has been the presence of severe pathology and pathology affecting organ borders. Overcoming these limitations is exactly what we are concerned with in this study. We propose an automatic method for accurate and robust pathological organ segmentation from CT images. The method is grounded in the active shape model (ASM) framework. It leverages techniques from low-rank and sparse decomposition (LRSD) theory to robustly recover a subspace from grossly corrupted data. We first present a population-specific LRSD-based shape prior model, called LRSD-SM, to handle non-Gaussian gross errors caused by weak and misleading appearance cues of large lesions, complex shape variations, and poor adaptation to the finer local details in a unified framework. For the shape model initialization, we introduce a method based on a patient-specific LRSD-based probabilistic atlas (PA), called LRSD-PA, to deal with large errors in atlas-to-target registration and low likelihood of the target organ. Furthermore, to make our segmentation framework more efficient and robust against local minima, we develop a hierarchical ASM search strategy. Our method is tested on the SLIVER07 liver segmentation competition database, and ranks 3rd among the published state-of-the-art automatic methods. Our method is also evaluated on some pathological organs (pathological liver and right lung) from 95 clinical CT scans and its results are compared with the three closely related methods. The applicability of the proposed method to segmentation of various pathological organs (including some highly severe cases) is demonstrated with good results on both quantitative and qualitative experimentation; our segmentation algorithm can delineate organ boundaries that reach a level of accuracy comparable with those of human raters.
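    The LRSD building block referred to above decomposes a data matrix into a low-rank part plus a sparse error part. A generic principal component pursuit sketch via an inexact augmented Lagrangian is shown below; it illustrates the technique, not the authors' solver, and the default λ and μ follow common conventions.

```python
import numpy as np

def rpca(M, lam=None, mu=None, n_iter=200):
    """Low-rank + sparse decomposition M ≈ L + S by principal component pursuit
    (inexact augmented Lagrangian; a generic sketch of the LRSD idea).

    `M` is a float numpy matrix; returns the low-rank part L and sparse part S.
    """
    m, n = M.shape
    lam = lam or 1.0 / np.sqrt(max(m, n))          # common default weight on the sparse term
    mu = mu or m * n / (4.0 * np.abs(M).sum())     # common default penalty parameter
    shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(n_iter):
        # Singular-value thresholding step for the low-rank part
        U, s, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
        # Soft-thresholding step for the sparse part
        S = shrink(M - L + Y / mu, lam / mu)
        # Dual variable update
        Y = Y + mu * (M - L - S)
    return L, S
```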

  3. An Optimal Control Modification to Model-Reference Adaptive Control for Fast Adaptation

    Nguyen, Nhan T.; Krishnakumar, Kalmanje; Boskovic, Jovan


    This paper presents a method that can achieve fast adaptation for a class of model-reference adaptive control. It is well-known that standard model-reference adaptive control exhibits high-gain control behaviors when a large adaptive gain is used to achieve fast adaptation in order to reduce tracking error rapidly. High-gain control creates high-frequency oscillations that can excite unmodeled dynamics and can lead to instability. The fast adaptation approach is based on the minimization of the squares of the tracking error, which is formulated as an optimal control problem. The necessary condition of optimality is used to derive an adaptive law using the gradient method. This adaptive law is shown to result in uniform boundedness of the tracking error by means of Lyapunov's direct method. Furthermore, this adaptive law allows a large adaptive gain to be used without causing undesired high-gain control effects. The method is shown to be more robust than standard model-reference adaptive control. Simulations demonstrate the effectiveness of the proposed method.
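    A textbook first-order sketch of model-reference adaptive control with a Lyapunov-based gradient adaptive law is shown below for orientation; the optimal control modification discussed in the abstract adds a further damping term to these adaptive laws that is not reproduced here, and all numerical values are illustrative.

```python
def simulate_mrac(a=-1.0, b=3.0, a_m=-4.0, b_m=4.0, gamma=2.0, dt=1e-3, t_end=20.0):
    """First-order model-reference adaptive control with a Lyapunov-based gradient
    adaptive law (textbook sketch, not the paper's modified law).

    Plant: x' = a*x + b*u.  Reference model: x_m' = a_m*x_m + b_m*r.
    Control: u = kx*x + kr*r, with kx, kr adapted from the tracking error.
    """
    n = int(t_end / dt)
    x = x_m = 0.0
    kx = kr = 0.0                                   # adaptive feedback/feedforward gains
    for i in range(n):
        r = 1.0 if (i * dt) % 10 < 5 else -1.0      # square-wave reference command
        u = kx * x + kr * r
        e = x - x_m                                 # tracking error
        # Gradient adaptive laws (assumes sign(b) > 0 is known)
        kx += dt * (-gamma * e * x)
        kr += dt * (-gamma * e * r)
        # Forward-Euler integration of plant and reference model
        x += dt * (a * x + b * u)
        x_m += dt * (a_m * x_m + b_m * r)
    return kx, kr   # should approach the ideal gains (a_m - a)/b and b_m/b

print(simulate_mrac())
```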

  4. Improving Inverse Dynamics Accuracy in a Planar Walking Model Based on Stable Reference Point

    Alaa Abdulrahman


    Full Text Available Physiologically and biomechanically, the human body represents a complicated system with an abundance of degrees of freedom (DOF). When developing mathematical representations of the body, a researcher has to decide on how many of those DOF to include in the model. Though accuracy can be enhanced at the cost of complexity by including more DOF, their necessity must be rigorously examined. In this study a planar seven-segment human body walking model with single-DOF joints was developed. A reference point was added to the model to track the body's global position while moving. Due to the kinematic instability of the pelvis, the top of the head was selected as the reference point, which also assimilates the vestibular sensor position. Inverse dynamics methods were used to formulate and solve the equations of motion based on Newton-Euler formulae. The torques and ground reaction forces generated by the planar model during a regular gait cycle were compared with similar results from a more complex three-dimensional OpenSim model with muscles, which resulted in correlation coefficients in the range of 0.9–0.98. The close comparison between the two torque outputs supports the use of planar models in gait studies.

  5. Automatic generation of groundwater model hydrostratigraphy from AEM resistivity and boreholes

    Marker, Pernille Aabye; Foged, N.; Christiansen, A. V.;


    Regional hydrological models are important tools in water resources management. Model prediction uncertainty is primarily due to structural (geological) non-uniqueness, which makes sampling of the structural model space necessary to estimate prediction uncertainties. Geological structures and hete…

  6. U.S. Department of Energy Reference Model Program RM1: Experimental Results.

    Hill, Craig [Univ. of Minnesota, Minneapolis, MN (United States); Neary, Vincent Sinclair [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gunawan, Budi [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guala, Michele [Univ. of Minnesota, Minneapolis, MN (United States); Sotiropoulos, Fotis [Univ. of Minnesota, Minneapolis, MN (United States)


    The Reference Model Project (RMP), sponsored by the U.S. Department of Energy’s (DOE) Wind and Water Power Technologies Program within the Office of Energy Efficiency & Renewable Energy (EERE), aims at expediting industry growth and efficiency by providing non-proprietary Reference Models (RM) of MHK technology designs as study objects for open-source research and development (Neary et al. 2014a,b). As part of this program, MHK turbine models were tested in a large open channel facility at the University of Minnesota’s St. Anthony Falls Laboratory (UMN-SAFL). Reference Model 1 (RM1) is a 1:40 geometric scale dual-rotor axial flow horizontal axis device with counter-rotating rotors, each with a rotor diameter dT = 0.5 m. Precise blade angular position and torque measurements were synchronized with three acoustic Doppler velocimeters (ADVs) aligned with each rotor and the midpoint for RM1. Flow conditions for each case were controlled such that depth, h = 1 m, and volumetric flow rate, Qw = 2.425 m³ s⁻¹, resulting in a hub height velocity of approximately Uhub = 1.05 m s⁻¹ and blade chord length Reynolds numbers of Rec ≈ 3.0×10⁵. Vertical velocity profiles collected in the wake of each device from 1 to 10 rotor diameters are used to estimate the velocity recovery and turbulent characteristics in the wake, as well as the interaction of the counter-rotating rotor wakes. The development of this high resolution laboratory investigation provides a robust dataset that enables assessing turbulence performance models and their ability to accurately predict device performance metrics, including computational fluid dynamics (CFD) models that can be used to predict turbulent inflow environments, reproduce wake velocity deficit, recovery and higher order turbulent statistics, as well as device performance metrics.

  7. U.S. Department of Energy Reference Model Program RM1: Experimental Results

    Hill, Craig [Univ. of Minnesota, Minneapolis, MN (United States); Neary, Vincent Sinclair [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Gunawan, Budi [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Guala, Michele [Univ. of Minnesota, Minneapolis, MN (United States); Sotiropoulos, Fotis [Univ. of Minnesota, Minneapolis, MN (United States)


    The Reference Model Project (RMP), sponsored by the U.S. Department of Energy’s (DOE) Wind and Water Power Technologies Program within the Office of Energy Efficiency & Renewable Energy (EERE), aims at expediting industry growth and efficiency by providing non-proprietary Reference Models (RM) of MHK technology designs as study objects for open-source research and development (Neary et al. 2014a,b). As part of this program, MHK turbine models were tested in a large open channel facility at the University of Minnesota’s St. Anthony Falls Laboratory (UMN-SAFL). Reference Model 1 (RM1) is a 1:40 geometric scale dual-rotor axial flow horizontal axis device with counter-rotating rotors, each with a rotor diameter dT = 0.5 m. Precise blade angular position and torque measurements were synchronized with three acoustic Doppler velocimeters (ADVs) aligned with each rotor and the midpoint for RM1. Flow conditions for each case were controlled such that depth, h = 1 m, and volumetric flow rate, Qw = 2.425 m³ s⁻¹, resulting in a hub height velocity of approximately Uhub = 1.05 m s⁻¹ and blade chord length Reynolds numbers of Rec ≈ 3.0×10⁵. Vertical velocity profiles collected in the wake of each device from 1 to 10 rotor diameters are used to estimate the velocity recovery and turbulent characteristics in the wake, as well as the interaction of the counter-rotating rotor wakes. The development of this high resolution laboratory investigation provides a robust dataset that enables assessing turbulence performance models and their ability to accurately predict device performance metrics, including computational fluid dynamics (CFD) models that can be used to predict turbulent inflow environments, reproduce wake velocity deficit, recovery and higher order turbulent statistics, as well as device performance metrics.

  8. U.S. Department of Energy Commercial Reference Building Models of the National Building Stock

    Deru, M.; Field, K.; Studer, D.; Benne, K.; Griffith, B.; Torcellini, P.; Liu, B.; Halverson, M.; Winiarski, D.; Rosenberg, M.; Yazdanian, M.; Huang, J.; Crawley, D.


    The U.S. Department of Energy (DOE) Building Technologies Program has set the aggressive goal of producing marketable net-zero energy buildings by 2025. This goal will require collaboration between the DOE laboratories and the building industry. We developed standard or reference energy models for the most common commercial buildings to serve as starting points for energy efficiency research. These models represent fairly realistic buildings and typical construction practices. Fifteen commercial building types and one multifamily residential building were determined by consensus between DOE, the National Renewable Energy Laboratory, Pacific Northwest National Laboratory, and Lawrence Berkeley National Laboratory, and represent approximately two-thirds of the commercial building stock.

  9. Reference model of future ubiquitous convergent network and context-aware telecommunication service platform

    QIAO Xiu-quan; LI Xiao-feng; LIANG Shou-qing


    A reference model for the future ubiquitous convergent network is analyzed. To provide user-centric, intelligent, personalized services, this article presents a context-aware telecommunication service platform (CaTSP) that adapts to dynamically changing context. The article focuses on the new design method for the context-aware telecommunication service platform and its architecture. Through the use of model-driven architecture (MDA) and semantic web technologies, CaTSP can enable context reasoning and service personalization adaptation. This article explores a new approach to service intelligence, personalization, and adaptability in the semantic web service computing era.

  10. Automatic 3D City Modeling Using a Digital Map and Panoramic Images from a Mobile Mapping System

    Hyungki Kim


    Full Text Available Three-dimensional city models are becoming a valuable resource because of their close geospatial, geometrical, and visual relationship with the physical world. However, ground-oriented applications in virtual reality, 3D navigation, and civil engineering require a novel modeling approach, because the existing large-scale 3D city modeling methods do not provide rich visual information at ground level. This paper proposes a new framework for generating 3D city models that satisfy both the visual and the physical requirements for ground-oriented virtual reality applications. To ensure its usability, the framework must be cost-effective and allow for automated creation. To achieve these goals, we leverage a mobile mapping system that automatically gathers high-resolution images and supplements sensor information such as the position and direction of the captured images. To resolve problems stemming from sensor noise and occlusions, we develop a fusion technique to incorporate digital map data. This paper describes the major processes of the overall framework and the proposed techniques for each step and presents experimental results from a comparison with an existing 3D city model.

  11. Production of Referring Expressions for an Unknown Audience: A Computational Model of Communal Common Ground.

    Kutlak, Roman; van Deemter, Kees; Mellish, Chris


    This article presents a computational model of the production of referring expressions under uncertainty over the hearer's knowledge. Although situations where the hearer's knowledge is uncertain have seldom been addressed in the computational literature, they are common in ordinary communication, for example when a writer addresses an unknown audience, or when a speaker addresses a stranger. We propose a computational model composed of three complementary heuristics based on, respectively, an estimation of the recipient's knowledge, an estimation of the extent to which a property is unexpected, and the question of what is the optimum number of properties in a given situation. The model was tested in an experiment with human readers, in which it was compared against the Incremental Algorithm and human-produced descriptions. The results suggest that the new model outperforms the Incremental Algorithm in terms of the proportion of correctly identified entities and in terms of the perceived quality of the generated descriptions.

  12. Automatic Reading



    Reading is the key to school success and, like any skill, it takes practice. A child learns to walk by practising until he no longer has to think about how to put one foot in front of the other. The great athlete practises until he can play quickly, accurately and without thinking. Educators call it automaticity.

  13. A statistically based seasonal precipitation forecast model with automatic predictor selection and its application to central and south Asia

    Gerlitz, Lars; Vorogushyn, Sergiy; Apel, Heiko; Gafurov, Abror; Unger-Shayesteh, Katy; Merz, Bruno


    The study presents a statistically based seasonal precipitation forecast model, which automatically identifies suitable predictors from globally gridded sea surface temperature (SST) and climate variables by means of an extensive data-mining procedure and explicitly avoids the utilization of typical large-scale climate indices. This leads to an enhanced flexibility of the model and enables its automatic calibration for any target area without any prior assumption concerning adequate predictor variables. Potential predictor variables are derived by means of a cell-wise correlation analysis of precipitation anomalies with gridded global climate variables under consideration of varying lead times. Significantly correlated grid cells are subsequently aggregated to predictor regions by means of a variability-based cluster analysis. Finally, for every month and lead time, an individual random-forest-based forecast model is constructed, by means of the preliminary generated predictor variables. Monthly predictions are aggregated to running 3-month periods in order to generate a seasonal precipitation forecast. The model is applied and evaluated for selected target regions in central and south Asia. Particularly for winter and spring in westerly-dominated central Asia, correlation coefficients between forecasted and observed precipitation reach values up to 0.48, although the variability of precipitation rates is strongly underestimated. Likewise, for the monsoonal precipitation amounts in the south Asian target area, correlations of up to 0.5 were detected. The skill of the model for the dry winter season over south Asia is found to be low. A sensitivity analysis with well-known climate indices, such as the El Niño- Southern Oscillation (ENSO), the North Atlantic Oscillation (NAO) and the East Atlantic (EA) pattern, reveals the major large-scale controlling mechanisms of the seasonal precipitation climate for each target area. For the central Asian target areas, both
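    As a rough illustration of the final modelling step described above, the sketch below fits a random-forest regression on pre-aggregated predictor regions and sums monthly predictions into a 3-month forecast. All data, dimensions, and variable names are hypothetical placeholders, not the authors' calibration pipeline.

```python
# Minimal sketch (not the authors' pipeline): fit one random-forest model on
# hypothetical, pre-aggregated SST/climate predictor regions and issue a
# 3-month precipitation forecast by summing monthly predictions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Hypothetical training data: rows = years, columns = mean anomalies of the
# predictor regions obtained from the correlation/cluster analysis for one
# calendar month and one lead time.
X_train = rng.normal(size=(30, 8))          # 30 years, 8 predictor regions
y_train = rng.normal(size=30)               # observed monthly precipitation anomaly

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# Forecast three consecutive months (one fitted model per month in practice;
# here one model stands in for all three) and aggregate to a season.
X_new = rng.normal(size=(3, 8))
seasonal_forecast = model.predict(X_new).sum()
print(f"3-month precipitation anomaly forecast: {seasonal_forecast:.2f}")
```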

  14. Automatically updating predictive modeling workflows support decision-making in drug design.

    Muegge, Ingo; Bentzien, Jörg; Mukherjee, Prasenjit; Hughes, Robert O


    Using predictive models for early decision-making in drug discovery has become standard practice. We suggest that model building needs to be automated with minimum input and low technical maintenance requirements. Models perform best when tailored to answering specific compound optimization related questions. If qualitative answers are required, 2-bin classification models are preferred. Integrating predictive modeling results with structural information stimulates better decision making. For in silico models supporting rapid structure-activity relationship cycles the performance deteriorates within weeks. Frequent automated updates of predictive models ensure best predictions. Consensus between multiple modeling approaches increases the prediction confidence. Combining qualified and nonqualified data optimally uses all available information. Dose predictions provide a holistic alternative to multiple individual property predictions for reaching complex decisions.
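    The core recommendation, a 2-bin classification model that is refit automatically as new data arrive, can be sketched as follows. The descriptors, the pIC50 activity cutoff, and the update cadence are assumptions made for illustration, not details taken from the article.

```python
# Sketch of the core idea only (hypothetical data and threshold): a 2-bin
# activity classifier that is re-fit automatically whenever new assay results
# are appended, so predictions never rely on a stale model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

THRESHOLD = 6.0                      # hypothetical pIC50 cutoff for "active"

def retrain(descriptors: np.ndarray, pic50: np.ndarray) -> RandomForestClassifier:
    labels = (pic50 >= THRESHOLD).astype(int)   # 2-bin classification
    clf = RandomForestClassifier(n_estimators=300, random_state=0)
    clf.fit(descriptors, labels)
    return clf

rng = np.random.default_rng(1)
X, y = rng.normal(size=(200, 16)), rng.uniform(4, 9, size=200)
model = retrain(X, y)

# A weekly update cycle would simply append the new measurements and re-fit.
X_new, y_new = rng.normal(size=(20, 16)), rng.uniform(4, 9, size=20)
model = retrain(np.vstack([X, X_new]), np.concatenate([y, y_new]))
print(model.predict_proba(rng.normal(size=(1, 16))))
```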

  15. Formulation of probabilistic models of protein structure in atomic detail using the reference ratio method.

    Valentin, Jan B; Andreetta, Christian; Boomsma, Wouter; Bottaro, Sandro; Ferkinghoff-Borg, Jesper; Frellsen, Jes; Mardia, Kanti V; Tian, Pengfei; Hamelryck, Thomas


    We propose a method to formulate probabilistic models of protein structure in atomic detail, for a given amino acid sequence, based on Bayesian principles, while retaining a close link to physics. We start from two previously developed probabilistic models of protein structure on a local length scale, which concern the dihedral angles in main chain and side chains, respectively. Conceptually, this constitutes a probabilistic and continuous alternative to the use of discrete fragment and rotamer libraries. The local model is combined with a nonlocal model that involves a small number of energy terms according to a physical force field, and some information on the overall secondary structure content. In this initial study we focus on the formulation of the joint model and the evaluation of the use of an energy vector as a descriptor of a protein's nonlocal structure; hence, we derive the parameters of the nonlocal model from the native structure without loss of generality. The local and nonlocal models are combined using the reference ratio method, which is a well-justified probabilistic construction. For evaluation, we use the resulting joint models to predict the structure of four proteins. The results indicate that the proposed method and the probabilistic models show considerable promise for probabilistic protein structure prediction and related applications.
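    For orientation, the reference ratio construction that combines a local model with a nonlocal descriptor model can be written as below. The notation (q for the local model, p for the nonlocal model over a descriptor y = f(x)) is our paraphrase of the general method, not a formula quoted from this abstract.

```latex
% q(x): local (fine-grained) model over conformations x
% p(y): nonlocal model over a descriptor y = f(x), e.g. an energy vector
% q(y): marginal distribution of the descriptor implied by q
p(x) \;=\; \frac{p\!\left(y(x)\right)}{q\!\left(y(x)\right)}\; q(x)
```

    Sampling from the local model and reweighting by the ratio p(y)/q(y) imposes the desired nonlocal statistics while leaving the local structure otherwise intact.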

  16. Reference Manual for the System Advisor Model's Wind Power Performance Model

    Freeman, J.; Jorgenson, J.; Gilman, P.; Ferguson, T.


    This manual describes the National Renewable Energy Laboratory's System Advisor Model (SAM) wind power performance model. The model calculates the hourly electrical output of a single wind turbine or of a wind farm. The wind power performance model requires information about the wind resource, wind turbine specifications, wind farm layout (if applicable), and costs. In SAM, the performance model can be coupled to one of the financial models to calculate economic metrics for residential, commercial, or utility-scale wind projects. This manual describes the algorithms used by the wind power performance model, which is available in the SAM user interface and as part of the SAM Simulation Core (SSC) library, and is intended to supplement the user documentation that comes with the software.
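    A minimal sketch of the general idea, hourly turbine output obtained by interpolating a power curve over hub-height wind speeds, is shown below. The power curve, cut-in/cut-out speeds, and synthetic wind resource are hypothetical, and this is not SAM's actual algorithm, which also accounts for losses, wake effects, and farm layout.

```python
# Minimal sketch (not SAM's actual algorithm): hourly turbine output from a
# hypothetical power curve via interpolation, with cut-in/cut-out handling.
import numpy as np

# Hypothetical power curve: wind speed (m/s) -> electrical output (kW)
curve_ws = np.array([3, 5, 7, 9, 11, 13, 25], dtype=float)
curve_kw = np.array([0, 150, 550, 1100, 1500, 1500, 1500], dtype=float)
CUT_IN, CUT_OUT = 3.0, 25.0

def hourly_output_kw(hub_wind_speed_ms: np.ndarray) -> np.ndarray:
    power = np.interp(hub_wind_speed_ms, curve_ws, curve_kw)
    power[(hub_wind_speed_ms < CUT_IN) | (hub_wind_speed_ms > CUT_OUT)] = 0.0
    return power

rng = np.random.default_rng(0)
wind = rng.weibull(2.0, size=8760) * 8.0            # synthetic hourly resource
annual_energy_kwh = hourly_output_kw(wind).sum()    # 1-hour timesteps
print(f"Annual energy: {annual_energy_kwh / 1e3:.1f} MWh")
```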

  17. Reference tissue modeling with parameter coupling: application to a study of SERT binding in HIV

    Endres, Christopher J.; Hammoud, Dima A.; Pomper, Martin G.


    When applicable, it is generally preferred to evaluate positron emission tomography (PET) studies using a reference tissue-based approach as that avoids the need for invasive arterial blood sampling. However, most reference tissue methods have been shown to have a bias that is dependent on the level of tracer binding, and the variability of parameter estimates may be substantially affected by noise level. In a study of serotonin transporter (SERT) binding in HIV dementia, it was determined that applying parameter coupling to the simplified reference tissue model (SRTM) reduced the variability of parameter estimates and yielded the strongest between-group significant differences in SERT binding. The use of parameter coupling makes the application of SRTM more consistent with conventional blood input models and reduces the total number of fitted parameters, thus should yield more robust parameter estimates. Here, we provide a detailed evaluation of the application of parameter constraint and parameter coupling to [11C]DASB PET studies. Five quantitative methods, including three methods that constrain the reference tissue clearance (kr2) to a common value across regions were applied to the clinical and simulated data to compare measurement of the tracer binding potential (BPND). Compared with standard SRTM, either coupling of kr2 across regions or constraining kr2 to a first-pass estimate improved the sensitivity of SRTM to measuring a significant difference in BPND between patients and controls. Parameter coupling was particularly effective in reducing the variance of parameter estimates, which was less than 50% of the variance obtained with standard SRTM. A linear approach was also improved when constraining kr2 to a first-pass estimate, although the SRTM-based methods yielded stronger significant differences when applied to the clinical study. This work shows that parameter coupling reduces the variance of parameter estimates and may better discriminate between
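    The sketch below illustrates the coupling idea on synthetic data: the standard SRTM operational equation is fitted to several regional time-activity curves with one shared reference-tissue clearance (kr2 above, written k2p in the code) and per-region R1 and BP parameters. The curves, time grid, and noise levels are invented for illustration; this is not the authors' fitting code.

```python
# Illustrative sketch only (synthetic data, simplified notation): coupling the
# reference-tissue clearance k2' across regions when fitting the simplified
# reference tissue model (SRTM) by least squares.
import numpy as np
from scipy.optimize import least_squares

t = np.arange(0, 90, 0.5)          # minutes
dt = t[1] - t[0]
c_ref = t * np.exp(-t / 20.0)      # synthetic reference-region TAC

def srtm(params_region, k2p, c_ref, t, dt):
    """SRTM tissue curve for one region given (R1, BP) and the shared k2'."""
    r1, bp = params_region
    k2 = r1 * k2p
    k2a = k2 / (1.0 + bp)
    conv = np.convolve(c_ref, np.exp(-k2a * t))[: len(t)] * dt
    return r1 * c_ref + (k2 - r1 * k2a) * conv

# Synthetic "measured" curves for three regions sharing k2' = 0.10 /min.
true = [(1.0, 2.0), (0.8, 1.2), (1.2, 0.6)]
rng = np.random.default_rng(0)
data = [srtm(p, 0.10, c_ref, t, dt) + rng.normal(0, 0.05, t.size) for p in true]

def residuals(x):
    # x = [k2', R1_1, BP_1, R1_2, BP_2, R1_3, BP_3]
    k2p, rest = x[0], x[1:].reshape(-1, 2)
    return np.concatenate(
        [srtm(p, k2p, c_ref, t, dt) - d for p, d in zip(rest, data)]
    )

x0 = np.array([0.05] + [1.0, 1.0] * len(data))
fit = least_squares(residuals, x0, bounds=(0, np.inf))
print("shared k2':", fit.x[0])
print("per-region (R1, BP):", fit.x[1:].reshape(-1, 2))
```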

  18. An Empirical Approach to Temporal Reference Resolution

    Wiebe, Janyce; O'Hara, Tom; McKeever, Kenneth; Öhrström-Sandgren, Thorsten


    This paper presents the results of an empirical investigation of temporal reference resolution in scheduling dialogs. The algorithm adopted is primarily a linear-recency based approach that does not include a model of global focus. A fully automatic system has been developed and evaluated on unseen test data with good results. This paper presents the results of an intercoder reliability study, a model of temporal reference resolution that supports linear recency and has very good coverage, the results of the system evaluated on unseen test data, and a detailed analysis of the dialogs assessing the viability of the approach.

  19. BIOSMILE: A semantic role labeling system for biomedical verbs using a maximum-entropy model with automatically generated template features

    Tsai Richard


    Full Text Available Abstract Background Bioinformatics tools for automatic processing of biomedical literature are invaluable for both the design and interpretation of large-scale experiments. Many information extraction (IE systems that incorporate natural language processing (NLP techniques have thus been developed for use in the biomedical field. A key IE task in this field is the extraction of biomedical relations, such as protein-protein and gene-disease interactions. However, most biomedical relation extraction systems usually ignore adverbial and prepositional phrases and words identifying location, manner, timing, and condition, which are essential for describing biomedical relations. Semantic role labeling (SRL is a natural language processing technique that identifies the semantic roles of these words or phrases in sentences and expresses them as predicate-argument structures. We construct a biomedical SRL system called BIOSMILE that uses a maximum entropy (ME machine-learning model to extract biomedical relations. BIOSMILE is trained on BioProp, our semi-automatic, annotated biomedical proposition bank. Currently, we are focusing on 30 biomedical verbs that are frequently used or considered important for describing molecular events. Results To evaluate the performance of BIOSMILE, we conducted two experiments to (1 compare the performance of SRL systems trained on newswire and biomedical corpora; and (2 examine the effects of using biomedical-specific features. The experimental results show that using BioProp improves the F-score of the SRL system by 21.45% over an SRL system that uses a newswire corpus. It is noteworthy that adding automatically generated template features improves the overall F-score by a further 0.52%. Specifically, ArgM-LOC, ArgM-MNR, and Arg2 achieve statistically significant performance improvements of 3.33%, 2.27%, and 1.44%, respectively. Conclusion We demonstrate the necessity of using a biomedical proposition bank for training
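    As a toy illustration of the maximum-entropy classification step (equivalent to multinomial logistic regression over hand-crafted features), the sketch below labels candidate arguments of a verb. The features and training examples are hypothetical and far simpler than BIOSMILE's template features.

```python
# Toy illustration of the maximum-entropy idea behind SRL systems such as
# BIOSMILE: a multinomial logistic-regression classifier over hand-made
# features of candidate arguments. Features and labels here are hypothetical.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_feats = [
    {"head": "IL-2", "phrase_type": "NP", "before_verb": True,  "verb": "activate"},
    {"head": "NF-kB", "phrase_type": "NP", "before_verb": False, "verb": "activate"},
    {"head": "in nucleus", "phrase_type": "PP", "before_verb": False, "verb": "activate"},
    {"head": "rapidly", "phrase_type": "ADVP", "before_verb": False, "verb": "activate"},
]
train_labels = ["Arg0", "Arg1", "ArgM-LOC", "ArgM-MNR"]

srl = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
srl.fit(train_feats, train_labels)

test = {"head": "p53", "phrase_type": "NP", "before_verb": False, "verb": "activate"}
print(srl.predict([test])[0])
```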

  20. Model reference adaptive impedance control for physical human-robot interaction

    Bakur ALQAUDI; Hamidreza MODARES; Isura RANATUNGA; Shaikh M TOUSIF; Frank L LEWIS; Dan O POPA


    This paper presents a novel enhanced human-robot interaction system based on model reference adaptive control. The presented method delivers guaranteed stability and task performance and has two control loops. A robot-specific inner loop, which is a neuroadaptive controller, learns the robot dynamics online and makes the robot respond like a prescribed impedance model. This loop uses no task information, including no prescribed trajectory. A task-specific outer loop takes into account the human operator dynamics and adapts the prescribed robot impedance model so that the combined human-robot system has desirable characteristics for task performance. This design is based on model reference adaptive control, but of a nonstandard form. The net result is a controller with both adaptive impedance characteristics and assistive inputs that augment the human operator to provide improved task performance of the human-robot team. Simulations verify the performance of the proposed controller in a repetitive point-to-point motion task. Actual experimental implementations on a PR2 robot further corroborate the effectiveness of the approach.
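    The model-reference-following idea itself can be illustrated on a scalar first-order plant with the standard Lyapunov-based adaptive law, as sketched below. This is a generic textbook MRAC loop with invented gains, not the paper's two-loop neuroadaptive impedance controller.

```python
# Minimal scalar MRAC sketch (not the paper's two-loop neuroadaptive design):
# a first-order plant is driven to follow a prescribed reference model using
# the standard Lyapunov-based adaptive law.
import numpy as np

dt, T = 0.001, 20.0
a, b = 1.0, 3.0            # "unknown" plant:    dy/dt  = -a*y  + b*u
am, bm = 4.0, 4.0          # reference model:    dym/dt = -am*ym + bm*r
gamma = 2.0                # adaptation gain

y = ym = 0.0
theta_r = theta_y = 0.0    # adaptive feedforward / feedback gains

for k in range(int(T / dt)):
    r = 1.0 if (k * dt) % 10 < 5 else -1.0       # square-wave command
    u = theta_r * r + theta_y * y
    e = y - ym                                   # model-following error
    # Lyapunov-based update (sign of b assumed known and positive)
    theta_r += dt * (-gamma * e * r)
    theta_y += dt * (-gamma * e * y)
    y  += dt * (-a * y + b * u)
    ym += dt * (-am * ym + bm * r)

print(f"final tracking error: {abs(y - ym):.4f}")
```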

  1. Modelling reference conditions for the upper limit of Posidonia oceanica meadows: a morphodynamic approach

    Vacchi, Matteo; Misson, Gloria; Montefalcone, Monica; Archetti, Renata; Nike Bianchi, Carlo; Ferrari, Marco


    The upper portion of the meadows of the protected Mediterranean seagrass Posidonia oceanica occurs in the region of the seafloor mostly affected by surf-related effects. Evaluation of its status is part of monitoring programs, but proper conclusions are difficult to draw due to the lack of definite reference conditions. Comparing the position of the meadow upper limit with the beach morphodynamics (i.e. the distinctive type of beach produced by topography and wave climate) provided evidence that the natural landwards extension of meadows can be predicted. Here we present an innovative predictive cartographic approach able to identify the seafloor portion where the meadow upper limit should naturally lies (i.e. its reference conditions). The conceptual framework of this model is based on 3 essential components: i) Definition of the breaking depth geometry: the breaking limit represents the major constrain for the landward meadow development. We modelled the breaking limit (1 year return time) using the software Mike 21 sw. ii) Definition of the morphodynamic domain of the beach using the surf scaling index ɛ; iii) Definition of the P. oceanica upper limit geometry. We coupled detailed aerial photo with thematic bionomic cartography. In GIS environment, we modelled the seafloor extent where the meadow should naturally lies according to the breaking limit position and the morphodynamic domain of the beach. Then, we added the GIS layer with the meadow upper limit geometry. Therefore, the final output shows, on the same map, both the reference condition and the actual location of the upper limit. It make possible to assess the status of the landward extent of a given P. oceanica meadow and quantify any suspected or observed regression caused by anthropic factors. The model was elaborated and validated along the Ligurian coastline (NW Mediteraanean) and was positively tested in other Mediterranean areas.
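    For reference, one common definition of the surf scaling index mentioned above is reproduced below; the symbols (breaker amplitude a_b, radian wave frequency ω, beach slope β) are our assumption about the intended formulation, not notation quoted from the abstract.

```latex
% Surf scaling index: a_b breaker amplitude, \omega = 2\pi/T radian wave
% frequency, \beta beach slope, g gravitational acceleration.
\varepsilon \;=\; \frac{a_b\,\omega^{2}}{g\,\tan^{2}\beta}
```

    Small values of ε correspond to reflective beaches and large values to dissipative ones, with intermediate morphodynamic states in between.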

  2. Automatic Camera Control

    Burelli, Paolo; Preuss, Mike


    Automatically generating computer animations is a challenging and complex problem with applications in games and film production. In this paper, we investigate how to translate a shot list for a virtual scene into a series of virtual camera configurations — i.e., automatically controlling the virtual camera. We approach this problem by modelling it as a dynamic multi-objective optimisation problem and show how this metaphor allows a much richer expressiveness than a classical single objective approach. Finally, we showcase the application of a multi-objective evolutionary algorithm to generate a shot...

  3. Automatic Insall-Salvati ratio measurement on lateral knee x-ray images using model-guided landmark localization

    Chen, Hsin-Chen; Wu, Chia-Hsing; Sun, Yung-Nien [Department of Computer Science and Information Engineering, National Cheng Kung University, 1 University Road, Tainan 701, Taiwan (China); Lin, Chii-Jeng [Department of Orthopedics, College of Medicine, National Cheng Kung University, 138 Sheng Li Road, Tainan 704, Taiwan (China); Wang, Chien-Kuo [Department of Radiology, National Cheng Kung University Hospital, 138 Sheng Li Road, Tainan 704, Taiwan (China)


    The Insall-Salvati ratio (ISR) is important for detecting two common clinical signs of knee disease: patella alta and patella baja. Furthermore, large inter-operator differences in ISR measurement make an objective measurement system necessary for better clinical evaluation. In this paper, we define three specific bony landmarks for determining the ISR and then propose an x-ray image analysis system to localize these landmarks and measure the ISR. Due to inherent artifacts in x-ray images, such as unevenly distributed intensities, which make landmark localization difficult, we hence propose a registration-assisted active-shape model (RAASM) to localize these landmarks. We first construct a statistical model from a set of training images based on x-ray image intensity and patella shape. Since a knee x-ray image contains specific anatomical structures, we then design an algorithm, based on edge tracing, for patella feature extraction in order to automatically align the model to the patella image. We can estimate the landmark locations as well as the ISR after registration-assisted model fitting. Our proposed method successfully overcomes drawbacks caused by x-ray image artifacts. Experimental results show great agreement between the ISRs measured by the proposed method and by orthopedic clinicians.

  4. Semi-automatic liver tumor segmentation with hidden Markov measure field model and non-parametric distribution estimation.

    Häme, Yrjö; Pollari, Mika


    A novel liver tumor segmentation method for CT images is presented. The aim of this work was to reduce the manual labor and time required in the treatment planning of radiofrequency ablation (RFA), by providing accurate and automated tumor segmentations reliably. The developed method is semi-automatic, requiring only minimal user interaction. The segmentation is based on non-parametric intensity distribution estimation and a hidden Markov measure field model, with application of a spherical shape prior. A post-processing operation is also presented to remove the overflow to adjacent tissue. In addition to the conventional approach of using a single image as input data, an approach using images from multiple contrast phases was developed. The accuracy of the method was validated with two sets of patient data, and artificially generated samples. The patient data included preoperative RFA images and a public data set from "3D Liver Tumor Segmentation Challenge 2008". The method achieved very high accuracy with the RFA data, and outperformed other methods evaluated with the public data set, receiving an average overlap error of 30.3% which represents an improvement of 2.3% points to the previously best performing semi-automatic method. The average volume difference was 23.5%, and the average, the RMS, and the maximum surface distance errors were 1.87, 2.43, and 8.09 mm, respectively. The method produced good results even for tumors with very low contrast and ambiguous borders, and the performance remained high with noisy image data.

  5. An approach of crater automatic recognition based on contour digital elevation model from Chang'E Missions

    Zuo, W.; Li, C.; Zhang, Z.; Li, H.; Feng, J.


    In order to provide fundamental information for exploration and related scientific research on the Moon and other planets, we propose a new automatic method to recognize craters on the lunar surface based on contour data extracted from a digital elevation model (DEM). First, we map the 16-bit DEM to 256 gray scales for data compression; then, for better visualization, the grayscale is converted into an RGB image. After that, a median filter is applied twice to the DEM for data optimization, which produces smooth, continuous outlines for subsequent construction of the contour plane. Considering that the morphology of a crater on the contour plane can be approximately expressed as an ellipse or circle, we extract the outer boundaries of the contour plane with the same color (gray value) as targets for further identification through an 8-neighborhood counterclockwise searching method. Then, a library of training samples is constructed from the targets calculated from sample DEM data, in which real crater targets are labeled manually as positive samples and non-crater objects as negative ones. Several morphological features are calculated for all these samples: the major axis (L), circumference (C), area inside the boundary (S), and radius of the largest inscribed circle (R). We use R/L, R/S, C/L, C/S, R/C, S/L as the key factors for identifying craters, and apply the Fisher discrimination method to the sample library to calculate the weight of each factor and determine the discrimination formula, which is then applied to DEM data for identifying lunar craters. The method has been tested and verified with DEM data from CE-1 and CE-2, showing strong recognition ability and robustness, and it is applicable to the recognition of craters with various diameters and significant morphological differences, making fast and accurate automatic crater recognition possible.
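    A minimal sketch of the classification step is given below: the six ratio features are computed for each candidate boundary and Fisher's linear discriminant separates craters from non-craters. The candidate geometries are synthetic stand-ins, not CE-1/CE-2 data.

```python
# Sketch of the classification step with hypothetical training data: compute
# the six ratio features named above for each candidate boundary and apply
# Fisher's linear discriminant (LDA) to separate craters from non-craters.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def ratio_features(L, C, S, R):
    """Six morphological ratios: R/L, R/S, C/L, C/S, R/C, S/L."""
    return np.array([R / L, R / S, C / L, C / S, R / C, S / L])

rng = np.random.default_rng(0)

def synthetic_candidate(crater: bool):
    # Hypothetical geometry: craters are close to circular, so their largest
    # inscribed circle nearly fills the outline; non-craters are irregular.
    R = rng.uniform(5, 50)
    elong = 1.05 if crater else rng.uniform(1.5, 3.0)
    L = 2 * R * elong                       # major axis
    C = np.pi * L * rng.uniform(1.0, 1.2)   # circumference
    S = np.pi * (L / 2) ** 2 / elong        # enclosed area
    return ratio_features(L, C, S, R)

X = np.array([synthetic_candidate(i < 100) for i in range(200)])
y = np.array([1] * 100 + [0] * 100)         # 1 = crater, 0 = non-crater

lda = LinearDiscriminantAnalysis().fit(X, y)
print("training accuracy:", lda.score(X, y))
```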

  6. NOAA/NGDC candidate models for the 12th generation International Geomagnetic Reference Field

    Alken, Patrick; Maus, Stefan; Chulliat, Arnaud; Manoj, Chandrasekharan


    The International Geomagnetic Reference Field (IGRF) is a model of the geomagnetic main field and its secular variation, produced every 5 years from candidate models proposed by a number of international research institutions. For this 12th generation IGRF, three candidate models were solicited: a main field model for the 2010.0 epoch, a main field model for the 2015.0 epoch, and the predicted secular variation for the five-year period 2015 to 2020. The National Geophysical Data Center (NGDC), part of the National Oceanic and Atmospheric Administration (NOAA), has produced three candidate models for consideration in IGRF-12. The 2010 main field candidate was produced from Challenging Minisatellite Payload (CHAMP) satellite data, while the 2015 main field and secular variation candidates were produced from Swarm and Ørsted satellite data. Careful data selection was performed to minimize the influence of magnetospheric and ionospheric fields. The secular variation predictions of our parent models, from which the candidate models were derived, have been validated against independent ground observatory data.

  7. OPENICRA: Towards A Generic Model for Automatic Deployment of Applications in the Cloud Computing

    Gadhgadhi Ridha


    Full Text Available This paper focuses on the design and the implementation of a new generic model for automated deployment of applications in the cloud to mitigate the effects of barriers to entry, reduce the complexity of application development and simplify the process of deploying cloud services. Our proposed model, called OpenICRA, implements a layered architecture that hides the implementation details, allowing for a simple deployment process. We conducted two real case studies to validate our proposed model. Our empirical results demonstrate the effectiveness of our proposed model in deploying different types of applications without any change in their source code.


    Mohamed H. Haggag


    Full Text Available Due to the environmental pressures on organizations, the demand for Business Process Management (BPM automation suites has increased. This has led to a growing need for managing process-related risks, and the management of risks in business processes has therefore been the subject of much research during the past few years. However, most of this research focuses mainly on one or two stages of the BPM life cycle and provides support only for those stages. This paper aims to provide a reference model for Risk-Aware BPM that addresses all stages of the BPM life cycle, and it lists some current techniques for implementing this model. Additionally, a case study of a business process in an Egyptian university is introduced in order to apply the model in a real-world environment. The results are analyzed and conclusions drawn.

  9. A coupled 2×2D Babcock-Leighton solar dynamo model. II. Reference dynamo solutions

    Lemerle, Alexandre


    In this paper we complete the presentation of a new hybrid 2×2D flux transport dynamo (FTD) model of the solar cycle based on the Babcock-Leighton mechanism of poloidal magnetic field regeneration via the surface decay of bipolar magnetic regions (BMRs). This hybrid model is constructed by allowing the surface flux transport (SFT) simulation described in Lemerle et al. 2015 to provide the poloidal source term to an axisymmetric FTD simulation defined in a meridional plane, which in turn generates the BMRs required by the SFT. A key aspect of this coupling is the definition of an emergence function describing the probability of BMR emergence as a function of the spatial distribution of the internal axisymmetric magnetic field. We use a genetic algorithm to calibrate this function, together with other model parameters, against observed cycle 21 emergence data. We present a reference dynamo solution reproducing many solar cycle characteristics, including good hemispheric coupling, phase relationship betwe...

  10. A nonlinear model reference adaptive inverse control algorithm with pre-compensator


    In this paper, reduced-order modeling (ROM) technology and its corresponding linear theory are extended from linear dynamic systems to nonlinear ones, and H∞ control theory is employed in the frequency domain to design a pre-compensator for the nonlinear system. The adaptive model inverse control (AMIC) theory for nonlinear systems is improved as well, yielding the model reference adaptive inverse control with pre-compensator (PCMRAIC). The aim of the algorithm is to construct an overall control strategy. As a practical example of the application, a numerical simulation was carried out in MATLAB, and the numerical results are given. The proposed strategy achieves linearizing control of the nonlinear dynamic system and performs well in handling the nonlinearity.

  11. Automatic calibration of an erosion and sediment yield distributed conceptual model: application to the Goodwin Creek experimental river basin (USA)

    Bussi, G.; Francés, F.


    In the last decades, distributed hydrological models have achieved a fundamental importance in Hydrology, mainly for their capacity to describe the spatial variability of the basin processes. TETIS is a distributed conceptual model created to simulate rainfall-runoff processes. In the same way, a distributed approach to erosion and sediment yield modelling can lead to improvements for the solution of several sedimentological and geomorphological problems, such as sediment redistribution, localization of heavy erosion and soil loss zones, estimation of soil erosion and sediment yield and assessment of land use change effects on the sediment cycle. Following these considerations, the TETIS model has been coupled with a sediment cycle module with the purpose of representing erosion and sediment transport at basin scale. TETIS-SED is the result of integrating the erosion submodel of CASC2D-SED into the hydrological model TETIS. In the TETIS-SED model, the erosion/sedimentation rates are calculated as a function of the hydraulic properties of the flow, the physical properties of the soil and the surface characteristics. The modified Kilinc-Richardson equation is used to determine the upland sediment transport by grain size (silt, clay, and sand) from one cell into the next one. Sediment by size fraction is routed in the channels and the Engelund and Hansen equation is used to compute the transport capacity in one dimension. This formulation in both cases depends on hydraulic parameters (hydraulic radius, flow velocity and friction slope) and particle characteristics (specific gravity and particle diameter). Due to the uncertainty affecting the sediment parameters, the calibration stage may be a key issue in erosion and sediment yield modelling. In the TETIS model, automatic calibration is carried out by adjusting up to 9 hydrological correction factors with an automatic calibration algorithm, the Shuffled Complex Evolution (SCE-UA). In this work, 3 sedimentological
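    The automatic-calibration loop itself can be sketched as below, with a toy stand-in for a TETIS-SED run and scipy's differential evolution used in place of the SCE-UA algorithm; the correction factors, forcing series, and observations are all hypothetical.

```python
# Sketch of the automatic-calibration loop only, with a toy stand-in model and
# scipy's differential evolution in place of the SCE-UA algorithm used by TETIS.
import numpy as np
from scipy.optimize import differential_evolution

observed = np.array([12.0, 30.0, 8.0, 55.0, 20.0])    # hypothetical sediment loads
forcing = np.array([10.0, 25.0, 7.0, 45.0, 17.0])     # hypothetical storm forcing

def toy_model(correction_factors, forcing):
    """Stand-in for a TETIS-SED run: output scales with two correction factors."""
    cf_erosion, cf_transport = correction_factors
    return cf_erosion * forcing ** 1.2 * cf_transport

def objective(correction_factors):
    simulated = toy_model(correction_factors, forcing)
    return np.sqrt(np.mean((simulated - observed) ** 2))   # RMSE

result = differential_evolution(objective, bounds=[(0.1, 5.0), (0.1, 5.0)], seed=0)
print("calibrated correction factors:", result.x, "RMSE:", result.fun)
```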

  12. Computation of a Reference Model for Robust Fault Detection and Isolation Residual Generation

    Emmanuel Mazars


    Full Text Available This paper considers matrix inequality procedures to address the robust fault detection and isolation (FDI problem for linear time-invariant systems subject to disturbances, faults, and polytopic or norm-bounded uncertainties. We propose a design procedure for an FDI filter that aims to minimize a weighted combination of the sensitivity of the residual signal to disturbances and modeling errors, and the deviation of the faults to residual dynamics from a fault to residual reference model, using the ℋ∞-norm as a measure. A key step in our procedure is the design of an optimal fault reference model. We show that the optimal design requires the solution of a quadratic matrix inequality (QMI optimization problem. Since the solution of the optimal problem is intractable, we propose a linearization technique to derive a numerically tractable suboptimal design procedure that requires the solution of a linear matrix inequality (LMI optimization. A jet engine example is employed to demonstrate the effectiveness of the proposed approach.

  13. Fuzzy virtual reference model sensorless tracking control for linear induction motors.

    Hung, Cheng-Yao; Liu, Peter; Lian, Kuang-Yow


    This paper introduces a fuzzy virtual reference model (FVRM) synthesis method for linear induction motor (LIM) speed sensorless tracking control. First, we represent the LIM as a Takagi-Sugeno fuzzy model. Second, we estimate the immeasurable mover speed and secondary flux by a fuzzy observer. Third, to convert the speed tracking control into a stabilization problem, we define the internal desired states for state tracking via an FVRM. Finally, by solving a set of linear matrix inequalities (LMIs), we obtain the observer gains and the control gains where exponential convergence is guaranteed. The contributions of the approach in this paper are threefold: 1) simplified approach--speed tracking problem converted into stabilization problem; 2) omit need of actual reference model--FVRM generates internal desired states; and 3) unification of controller and observer design--control objectives are formulated into an LMI problem where powerful numerical toolboxes solve controller and observer gains. Finally, experiments are carried out to verify the theoretical results and show satisfactory performance both in transient response and robustness.

  14. Aircraft automatic digital flight control system with inversion of the model in the feed-forward path

    Smith, G. A.; Meyer, G.


    A full-flight-envelope automatic trajectory control system concept is being investigated at Ames Research Center. This concept was developed for advanced aircraft configurations with severe nonlinear characteristics. A feature of the system is an inverse of the complete nonlinear aircraft model as part of the feed-forward control path. Simulation and flight tests have been reported at previous Digital Avionics Systems conferences. A new method for the continuous real-time inversion of the aircraft model using a Newton-Raphson trim algorithm instead of the original inverse table look-up procedure has been developed. The results of a simulation study of a vertical attitude takeoff and landing aircraft using the new inversion technique are presented. Maneuvers were successfully carried out in all directions in the vertical-attitude hover mode. Transition runs from conventional flight through the region of lift-curve-slope reversal at an angle of attack of about 32 deg and to hover at zero speed in the vertical attitude showed satisfactory transient response. Simulations were also conducted in conventional flight at high subsonic speed in steep climb and with turns up to 4 g. Successful flight tests of the system with the new model-inversion technique in a UH-1H helicopter have recently been carried out.
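    The continuous model-inversion idea can be sketched as a Newton-Raphson solve for the control that produces a commanded response, as below. The control-to-acceleration map is a hypothetical scalar stand-in, not the Ames aircraft model or trim algorithm.

```python
# Conceptual sketch (not the Ames implementation): invert a nonlinear plant
# model with a Newton-Raphson iteration, solving f(u) = a_cmd for the control
# u that produces a commanded acceleration.
import numpy as np

def plant_accel(u):
    """Hypothetical nonlinear control-to-acceleration map."""
    return 4.0 * np.tanh(0.5 * u) + 0.3 * u

def invert(a_cmd, u0=0.0, tol=1e-8, max_iter=20):
    u = u0
    for _ in range(max_iter):
        f = plant_accel(u) - a_cmd
        if abs(f) < tol:
            break
        df = (plant_accel(u + 1e-6) - plant_accel(u - 1e-6)) / 2e-6  # numeric slope
        u -= f / df
    return u

u_trim = invert(a_cmd=2.5)
print(f"u = {u_trim:.4f}, achieved accel = {plant_accel(u_trim):.4f}")
```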

  15. Automatic categorization of web pages and user clustering with mixtures of hidden Markov models

    Ypma, A.; Heskes, T.M.


    We propose mixtures of hidden Markov models for modelling clickstreams of web surfers. Hence, the page categorization is learned from the data without the need for a (possibly cumbersome) manual categorization. We provide an EM algorithm for training a mixture of HMMs and show that additional static

  16. Automatic thoracic anatomy segmentation on CT images using hierarchical fuzzy models and registration

    Sun, Kaioqiong; Udupa, Jayaram K.; Odhner, Dewey; Tong, Yubing; Torigian, Drew A.


    This paper proposes a thoracic anatomy segmentation method based on hierarchical recognition and delineation guided by a built fuzzy model. Labeled binary samples for each organ are registered and aligned into a 3D fuzzy set representing the fuzzy shape model for the organ. The gray intensity distributions of the corresponding regions of the organ in the original image are recorded in the model. The hierarchical relation and mean location relation between different organs are also captured in the model. Following the hierarchical structure and location relation, the fuzzy shape model of different organs is registered to the given target image to achieve object recognition. A fuzzy connected delineation method is then used to obtain the final segmentation result of organs with seed points provided by recognition. The hierarchical structure and location relation integrated in the model provide the initial parameters for registration and make the recognition efficient and robust. The 3D fuzzy model combined with hierarchical affine registration ensures that accurate recognition can be obtained for both non-sparse and sparse organs. The results on real images are presented and shown to be better than a recently reported fuzzy model-based anatomy recognition strategy.

  17. Automatic categorization of web pages and user clustering with mixtures of hidden Markov models

    Ypma, A.; Heskes, T.M.


    We propose mixtures of hidden Markov models for modelling clickstreams of web surfers. Hence, the page categorization is learned from the data without the need for a (possibly cumbersome) manual categorization. We provide an EM algorithm for training a mixture of HMMs and show that additional static

  18. Automatic control of finite element models for temperature-controlled radiofrequency ablation

    Haemmerich Dieter


    Full Text Available Abstract Background The finite element method (FEM has been used to simulate cardiac and hepatic radiofrequency (RF ablation. The FEM allows modeling of complex geometries that cannot be solved by analytical methods or finite difference models. In both hepatic and cardiac RF ablation a common control mode is temperature-controlled mode. Commercial FEM packages don't support automating temperature control. Most researchers manually control the applied power by trial and error to keep the tip temperature of the electrodes constant. Methods We implemented a PI controller in a control program written in C++. The program checks the tip temperature after each step and controls the applied voltage to keep temperature constant. We created a closed loop system consisting of a FEM model and the software controlling the applied voltage. The control parameters for the controller were optimized using a closed loop system simulation. Results We present results of a temperature controlled 3-D FEM model of a RITA model 30 electrode. The control software effectively controlled applied voltage in the FEM model to obtain, and keep electrodes at target temperature of 100°C. The closed loop system simulation output closely correlated with the FEM model, and allowed us to optimize control parameters. Discussion The closed loop control of the FEM model allowed us to implement temperature controlled RF ablation with minimal user input.
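    A minimal sketch of the control idea is shown below, with a lumped first-order thermal model standing in for the FEM solver: a PI controller adjusts the applied voltage to hold the electrode tip at the 100 °C target. The gains, plant coefficients, and time step are invented for illustration.

```python
# Minimal sketch of the control idea with a lumped first-order thermal model
# standing in for the FEM solver: a PI controller adjusts applied voltage to
# hold the electrode tip at the 100 degC target.
T_TARGET = 100.0      # degC
KP, KI = 0.9, 0.22    # hypothetical controller gains
dt, steps = 0.1, 3000

tip_temp, integral = 37.0, 0.0
for _ in range(steps):
    error = T_TARGET - tip_temp
    integral += error * dt
    voltage = max(0.0, KP * error + KI * integral)      # actuator cannot go negative
    # Toy plant: heating proportional to voltage, convective cooling toward 37 degC
    tip_temp += dt * (0.4 * voltage - 0.05 * (tip_temp - 37.0))

print(f"tip temperature after {steps * dt:.0f} s: {tip_temp:.1f} degC")
```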

  19. A reference data model of a metadata registry preserving semantics and representations of data elements.

    Löpprich, Martin; Jones, Jennifer; Meinecke, Marie-Claire; Goldschmidt, Hartmut; Knaup, Petra


    Integration and analysis of clinical data collected in multiple data sources over a long period of time is a major challenge even when data warehouses and metadata registries are used. Since most metadata registries focus on describing data elements to establish domain consistent data definition and providing item libraries, hierarchical and temporal dependencies cannot be mapped. Therefore we developed and validated a reference data model, based on ISO/IEC 11179, which allows revision and branching control of conceptually similar data elements with heterogeneous definitions and representations.

  20. Performance Optimizing Multi-Objective Adaptive Control with Time-Varying Model Reference Modification

    Nguyen, Nhan T.; Hashemi, Kelley E.; Yucelen, Tansel; Arabi, Ehsan


    This paper presents a new adaptive control approach that involves a performance optimization objective. The problem is cast as a multi-objective optimal control. The control synthesis involves the design of a performance optimizing controller from a subset of control inputs. The effect of the performance optimizing controller is to introduce an uncertainty into the system that can degrade tracking of the reference model. An adaptive controller from the remaining control inputs is designed to reduce the effect of the uncertainty while maintaining a notion of performance optimization in the adaptive control system.

  1. Speed Estimation of Induction Motor Using Model Reference Adaptive System with Kalman Filter

    Pavel Brandstetter


    Full Text Available The paper deals with speed estimation of the induction motor using an observer with a Model Reference Adaptive System and a Kalman filter. For simulation, the Hardware-in-the-Loop Simulation method is used. The first part of the paper includes the mathematical description of the observer for speed estimation of the induction motor. The second part describes the Kalman filter. The third part describes the Hardware-in-the-Loop Simulation method and its realization using the MF 624 multifunction card. In the last section of the paper, simulation results are shown for different changes of the induction motor speed, which confirm the high dynamic properties of the induction motor drive with sensorless control.
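    For orientation only, the sketch below shows the generic discrete Kalman filter recursion on a toy speed-tracking problem; it is not the paper's MRAS observer or its induction-motor model, and all matrices are invented.

```python
# Generic discrete Kalman filter recursion (toy speed-tracking example), shown
# only to illustrate the filtering building block; this is not the paper's
# MRAS observer for the induction motor.
import numpy as np

dt = 0.01
F = np.array([[1.0, dt], [0.0, 1.0]])     # state: [speed, acceleration]
H = np.array([[1.0, 0.0]])                # only (noisy) speed is measured
Q = np.diag([1e-4, 1e-2])                 # process noise covariance
R = np.array([[0.5]])                     # measurement noise covariance

x = np.zeros(2)                           # state estimate
P = np.eye(2)                             # estimate covariance

rng = np.random.default_rng(0)
true_speed = 0.0
for k in range(500):
    true_speed += 0.5 * dt                              # slow ramp-up of the drive
    z = true_speed + rng.normal(0, np.sqrt(R[0, 0]))

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + (K @ (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated speed: {x[0]:.3f}, true speed: {true_speed:.3f}")
```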

  2. Dietary Reference Intakes for Zinc May Require Adjustment for Phytate Intake Based upon Model Predictions

    Hambidge, K Michael; Miller, Leland V.; Westcott, Jamie E.; Krebs, Nancy F


    The quantity of total dietary zinc (Zn) and phytate are the principal determinants of the quantity of absorbed Zn. Recent estimates of Dietary Reference Intakes (DRI) for Zn by the Institute of Medicine (IOM) were based on data from low-phytate or phytate-free diets. The objective of this project was to estimate the effects of increasing quantities of dietary phytate on these DRI. We used a trivariate model of the quantity of Zn absorbed as a function of dietary Zn and phytate with updated pa...

  3. Experimental Wave Tank Test for Reference Model 3 Floating-Point Absorber Wave Energy Converter Project

    Yu, Y. H. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Lawson, M. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Li, Y. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Previsic, M. [Re Vision Consulting, Sacramento, CA (United States); Epler, J. [Re Vision Consulting, Sacramento, CA (United States); Lou, J. [Oregon State Univ., Corvallis, OR (United States)


    The U.S. Department of Energy established a reference model project to benchmark a set of marine and hydrokinetic technologies including current (tidal, open-ocean, and river) turbines and wave energy converters. The objectives of the project were to first evaluate the status of these technologies and their readiness for commercial applications. Second, to evaluate the potential cost of energy and identify cost-reduction pathways and areas where additional research could be best applied to accelerate technology development to market readiness.

  4. UAV Aerial Survey: Accuracy Estimation for Automatically Generated Dense Digital Surface Model and Orthophoto Plan

    Altyntsev, M. A.; Arbuzov, S. A.; Popov, R. A.; Tsoi, G. V.; Gromov, M. O.


    A dense digital surface model is one of the products generated by using UAV aerial survey data. Today more and more specialized software are supplied with modules for generating such kind of models. The procedure for dense digital model generation can be completely or partly automated. Due to the lack of reliable criterion of accuracy estimation it is rather complicated to judge the generation validity of such models. One of such criterion can be mobile laser scanning data as a source for the detailed accuracy estimation of the dense digital surface model generation. These data may be also used to estimate the accuracy of digital orthophoto plans created by using UAV aerial survey data. The results of accuracy estimation for both kinds of products are presented in the paper.

  5. The International Reference Ionosphere 2012 – a model of international collaboration

    Bilitza Dieter


    Full Text Available The International Reference Ionosphere (IRI project was established jointly by the Committee on Space Research (COSPAR and the International Union of Radio Science (URSI in the late sixties with the goal to develop an international standard for the specification of plasma parameters in the Earth’s ionosphere. COSPAR needed such a specification for the evaluation of environmental effects on spacecraft and experiments in space, and URSI for radiowave propagation studies and applications. At the request of COSPAR and URSI, IRI was developed as a data-based model to avoid the uncertainty of theory-based models which are only as good as the evolving theoretical understanding. Being based on most of the available and reliable observations of the ionospheric plasma from the ground and from space, IRI describes monthly averages of electron density, electron temperature, ion temperature, ion composition, and several additional parameters in the altitude range from 60 km to 2000 km. A working group of about 50 international ionospheric experts is in charge of developing and improving the IRI model. Over time as new data became available and new modeling techniques emerged, steadily improved editions of the IRI model have been published. This paper gives a brief history of the IRI project and describes the latest version of the model, IRI-2012. It also briefly discusses efforts to develop a real-time IRI model. The IRI homepage is at

  6. Jobs and Economic Development Impact (JEDI) User Reference Guide: Fast Pyrolysis Biorefinery Model

    Zhang, Yimin [National Renewable Energy Lab. (NREL), Golden, CO (United States); Goldberg, Marshall [MRG and Associates, Nevada City, CA (United States)


    This guide -- the JEDI Fast Pyrolysis Biorefinery Model User Reference Guide -- was developed to assist users in operating and understanding the JEDI Fast Pyrolysis Biorefinery Model. The guide provides information on the model's underlying methodology, as well as the parameters and data sources used to develop the cost data utilized in the model. This guide also provides basic instruction on model add-in features and a discussion of how the results should be interpreted. Based on project-specific inputs from the user, the JEDI Fast Pyrolysis Biorefinery Model estimates local (e.g., county- or state-level) job creation, earnings, and output from total economic activity for a given fast pyrolysis biorefinery. These estimates include the direct, indirect and induced economic impacts to the local economy associated with the construction and operation phases of biorefinery projects.Local revenue and supply chain impacts as well as induced impacts are estimated using economic multipliers derived from the IMPLAN software program. By determining the local economic impacts and job creation for a proposed biorefinery, the JEDI Fast Pyrolysis Biorefinery Model can be used to field questions about the added value biorefineries might bring to a local community.


    G. K. Aslanov


    Full Text Available In this article, a model is developed that demonstrates the formation of the antenna-system pattern of an aerodrome quasi-Doppler automatic radio direction-finding station, implemented in the LabVIEW development environment from National Instruments.

  8. U.S. Department of Energy Reference Model Program RM2: Experimental Results

    Hill, Craig [Univ. of Minnesota, Minneapolis, MN (United States). St. Anthony Falls Laboratory (UMN-SAFL); Neary, Vincent Sinclair [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Gunawan, Budi [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Guala, Michele [Univ. of Minnesota, Minneapolis, MN (United States). St. Anthony Falls Laboratory (UMN-SAFL); Sotiropoulos, Fotis [Univ. of Minnesota, Minneapolis, MN (United States). St. Anthony Falls Laboratory (UMN-SAFL)


    The Reference Model Project (RMP), sponsored by the U.S. Department of Energy’s (DOE) Wind and Water Power Technologies Program within the Office of Energy Efficiency & Renewable Energy (EERE), aims at expediting industry growth and efficiency by providing non-proprietary Reference Models (RM) of MHK technology designs as study objects for open-source research and development (Neary et al. 2014a,b). As part of this program, MHK turbine models were tested in a large open channel facility at the University of Minnesota’s St. Anthony Falls Laboratory (UMN-SAFL). Reference Model 2 (RM2) is a 1:15 geometric scale dual-rotor cross flow vertical axis device with counter-rotating rotors, each with a rotor diameter dT = 0.43m and rotor height, hT = 0.323m. RM2 is a river turbine designed for a site modeled after a reach in the lower Mississippi River near Baton Rouge, Louisiana (Barone et al. 2014). Precise blade angular position and torque measurements were synchronized with three acoustic Doppler velocimeters (ADV) aligned with each rotor and the midpoint for RM2. Flow conditions for each case were controlled such that depth, h = 1m, and volumetric flow rate, Qw = 2.35m3s-1, resulting in a hub height velocity of approximately Uhub = 1.2ms-1 and blade chord length Reynolds numbers of Rec = 6.1x104. Vertical velocity profiles collected in the wake of each device from 1 to 10 rotor diameters are used to estimate the velocity recovery and turbulent characteristics in the wake, as well as the interaction of the counter-rotating rotor wakes. The development of this high resolution laboratory investigation provides a robust dataset that enables assessing computational fluid dynamics (CFD) models and their ability to accurately simulate turbulent inflow environments, device performance metrics, and to reproduce wake velocity deficit, recovery and higher order

  9. Modelling the influence of automaticity of behaviour on physical activity motivation, intention and actual behaviour

    Rietdijk, Yara


    In research and in practice, social-cognitive models, such as the theory of planned behaviour (TPB), are used to predict physical activity behaviour. These models mainly focus on reflective cognitive processes. As a reflective process, intention is thought to be the most proximal predictor of behaviour. Nevertheless, research suggests that the relation between intention and actual behaviour, the so-called intention-behaviour gap, is moderate. Many health-related actions in d...

  10. Free Model of Sentence Classifier for Automatic Extraction of Topic Sentences

    M.L. Khodra; D.H. Widyantoro; E.A. Aziz; B.R. Trilaksono


    This research employs a free model that uses only sentential features, without paragraph context, to extract the topic sentences of a paragraph. To find the optimal combination of features, corpus-based classification is used to construct a sentence classifier as the model. The sentence classifier is trained using a Support Vector Machine (SVM). The experiment shows that position and meta-discourse features are more important than syntactic features for extracting topic sentences, and the best perfor...

  11. Towards Automatic and Topologically Consistent 3D Regional Geological Modeling from Boundaries and Attitudes

    Jiateng Guo


    Full Text Available Three-dimensional (3D geological models are important representations of the results of regional geological surveys. However, the process of constructing 3D geological models from two-dimensional (2D geological elements remains difficult and is not necessarily robust. This paper proposes a method of migrating from 2D elements to 3D models. First, the geological interfaces were constructed using the Hermite Radial Basis Function (HRBF to interpolate the boundaries and attitude data. Then, the subsurface geological bodies were extracted from the spatial map area using the Boolean method between the HRBF surface and the fundamental body. Finally, the top surfaces of the geological bodies were constructed by coupling the geological boundaries to digital elevation models. Based on this workflow, a prototype system was developed, and typical geological structures (e.g., folds, faults, and strata were simulated. Geological models were constructed through this workflow based on realistic regional geological survey data. The model construction process was rapid, and the resulting models accorded with the constraints of the original data. This method could also be used in other fields of study, including mining geology and urban geotechnical investigations.
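    A simplified stand-in for the interface-construction step is sketched below using scipy's standard RBF interpolator; unlike the Hermite RBF used in the paper, it honors only boundary points, not attitude (gradient) data, and the points here are synthetic.

```python
# Simplified stand-in for the paper's interface construction: interpolate an
# elevation surface through scattered boundary points with a standard RBF
# (scipy's RBFInterpolator). The actual HRBF additionally honors attitude
# (gradient) data, which plain RBF interpolation does not.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
xy = rng.uniform(0, 1000, size=(50, 2))                  # boundary points (m)
z = (200 + 0.05 * xy[:, 0] - 0.02 * xy[:, 1]
     + 10 * np.sin(xy[:, 0] / 150))                      # synthetic interface elevation

surface = RBFInterpolator(xy, z, kernel="thin_plate_spline")

# Evaluate the interpolated geological interface on a regular grid.
gx, gy = np.meshgrid(np.linspace(0, 1000, 101), np.linspace(0, 1000, 101))
grid_z = surface(np.column_stack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
print("interface elevation range:", grid_z.min(), grid_z.max())
```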

  12. FRankenstein becomes a cyborg: the automatic recombination and realignment of fold recognition models in CASP6.

    Kosinski, Jan; Gajda, Michal J; Cymerman, Iwona A; Kurowski, Michal A; Pawlowski, Marcin; Boniecki, Michal; Obarska, Agnieszka; Papaj, Grzegorz; Sroczynska-Obuchowicz, Paulina; Tkaczuk, Karolina L; Sniezynska, Paulina; Sasin, Joanna M; Augustyn, Anna; Bujnicki, Janusz M; Feder, Marcin


    In the course of CASP6, we generated models for all targets using a new version of the "FRankenstein's monster approach." Previously (in CASP5) we were able to build many very accurate full-atom models by selection and recombination of well-folded fragments obtained from crude fold recognition (FR) results, followed by optimization of the sequence-structure fit and assessment of alternative alignments on the structural level. This procedure was however very arduous, as most of the steps required extensive visual and manual input from the human modeler. Now, we have automated the most tedious steps, such as superposition of alternative models, extraction of best-scoring fragments, and construction of a hybrid "monster" structure, as well as generation of alternative alignments in the regions that remain poorly scored in the refined hybrid model. We have also included the ROSETTA method to construct those parts of the target for which no reasonable structures were generated by FR methods (such as long insertions and terminal extensions). The analysis of successes and failures of the current version of the FRankenstein approach in modeling of CASP6 targets reveals that the considerably streamlined and automated method performs almost as well as the initial, mostly manual version, which suggests that it may be a useful tool for accurate protein structure prediction even in the hands of nonexperts.

  13. Discovering novel phenotypes with automatically inferred dynamic models: a partial melanocyte conversion in Xenopus

    Lobo, Daniel; Lobikin, Maria; Levin, Michael


    Progress in regenerative medicine requires reverse-engineering cellular control networks to infer perturbations with desired systems-level outcomes. Such dynamic models allow phenotypic predictions for novel perturbations to be rapidly assessed in silico. Here, we analyzed a Xenopus model of conversion of melanocytes to a metastatic-like phenotype only previously observed in an all-or-none manner. Prior in vivo genetic and pharmacological experiments showed that individual animals either fully convert or remain normal, at some characteristic frequency after a given perturbation. We developed a Machine Learning method which inferred a model explaining this complex, stochastic all-or-none dataset. We then used this model to ask how a new phenotype could be generated: animals in which only some of the melanocytes converted. Systematically performing in silico perturbations, the model predicted that a combination of altanserin (5HTR2 inhibitor), reserpine (VMAT inhibitor), and VP16-XlCreb1 (constitutively active CREB) would break the all-or-none concordance. Remarkably, applying the predicted combination of three reagents in vivo revealed precisely the expected novel outcome, resulting in partial conversion of melanocytes within individuals. This work demonstrates the capability of automated analysis of dynamic models of signaling networks to discover novel phenotypes and predictively identify specific manipulations that can reach them. PMID:28128301
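
    The screening step can be pictured with a toy simulation. The "model" below is a placeholder, not the authors' inferred Xenopus signaling network, and the numerical effects assigned to the three reagents are invented purely to show the combinatorial search loop:

        # Toy screen over perturbation combinations against a stochastic model.
        # It maps a set of perturbations to a per-cell conversion probability and
        # to how strongly cells are coupled (the all-or-none behaviour).
        import itertools
        import random

        random.seed(0)

        def simulate(perturbations, n_cells=50):
            """Return the fraction of converted cells in one simulated animal."""
            p_convert = 0.1 + 0.2 * len(perturbations)          # placeholder effect size
            coupled = len(perturbations) < 3                     # 3 reagents break coupling
            trigger = random.random() < p_convert
            converted = 0
            for _ in range(n_cells):
                if coupled:                  # fully coupled: all cells follow the trigger
                    converted += trigger
                else:                        # decoupled: each cell decides independently
                    converted += random.random() < p_convert
            return converted / n_cells

        reagents = ["altanserin", "reserpine", "VP16-XlCreb1"]
        for combo in itertools.chain.from_iterable(
                itertools.combinations(reagents, r) for r in range(1, 4)):
            fractions = [simulate(combo) for _ in range(200)]
            partial = sum(0.05 < f < 0.95 for f in fractions) / len(fractions)
            print(f"{'+'.join(combo):40s} partial-conversion rate: {partial:.2f}")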

  14. Enabling HCCI modeling: The RIOT/CMCS Web Service for Automatic Reaction Mechanism Reduction

    Oluwole, O; Pitz, W J; Schuchardt, K; Rahn, L A; Green, Jr., W H; Leahy, D; Pancerella, C; Sjöberg, M; Dec, J


    New approaches are being developed to facilitate multidisciplinary collaborative research of Homogeneous Charge Compression Ignition (HCCI) combustion processes. In this paper, collaborative sharing of the Range Identification and Optimization Toolkit (RIOT) and related data and models is discussed. RIOT is a developmental approach to reduce the computational complexity of detailed chemical kinetic mechanisms, enabling their use in modeling kinetically-controlled combustion applications such as HCCI. These approaches are being developed and piloted as a part of the Collaboratory for Multiscale Chemical Sciences (CMCS) project. The capabilities of the RIOT code are shared through a portlet in the CMCS portal that allows easy specification and processing of RIOT inputs, remote execution of RIOT, tracking of data pedigree and translation of RIOT outputs (such as the reduced model) to a table view and to the commonly-used CHEMKIN mechanism format. The reduced model is thus immediately ready to be used for more efficient simulation of the chemically reacting system of interest. This effort is motivated by the need to improve computational efficiency in modeling HCCI systems. Preliminary use of the web service to obtain reduced models for this application has yielded computational speedup factors of up to 20 as presented in this paper.
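
    RIOT's own interfaces are not documented here, so the sketch below only illustrates the general idea of skeletal mechanism reduction (drop species judged unimportant, then drop every reaction that involves them) on an invented toy mechanism with invented importance scores; it is not RIOT:

        # Generic illustration of skeletal mechanism reduction: remove species
        # whose importance falls below a threshold, then remove every reaction
        # involving them. Mechanism and scores are invented.
        toy_mechanism = [
            {"eq": "H + O2 -> OH + O",   "species": {"H", "O2", "OH", "O"}},
            {"eq": "CO + OH -> CO2 + H", "species": {"CO", "OH", "CO2", "H"}},
            {"eq": "N2 + O -> NO + N",   "species": {"N2", "O", "NO", "N"}},
        ]

        # Hypothetical importance metric per species (e.g. from sensitivity analysis).
        importance = {"H": 1.0, "O2": 1.0, "OH": 0.9, "O": 0.8,
                      "CO": 0.7, "CO2": 0.7, "N2": 0.05, "NO": 0.02, "N": 0.01}

        def reduce_mechanism(mechanism, importance, threshold):
            keep = {s for s, w in importance.items() if w >= threshold}
            return [r for r in mechanism if r["species"] <= keep]

        reduced = reduce_mechanism(toy_mechanism, importance, threshold=0.1)
        for r in reduced:
            print(r["eq"])   # the NOx reaction is dropped at this threshold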

  16. Multicomponent quantitative spectroscopic analysis without reference substances based on ICA modelling.

    Monakhova, Yulia B; Mushtakova, Svetlana P


    A fast and reliable spectroscopic method for the multicomponent quantitative analysis of targeted compounds with overlapping signals in complex mixtures has been established. The innovative analytical approach is based on the preliminary chemometric extraction of qualitative and quantitative information from the UV-vis and IR spectral profiles of a calibration system using independent component analysis (ICA). Using this quantitative model and the ICA resolution results from spectral profiling of "unknown" model mixtures, the absolute analyte concentrations in multicomponent mixtures and authentic samples were then calculated without reference solutions. Good recoveries, generally between 95% and 105%, were obtained. The method can be applied to any spectroscopic data that obey the Beer-Lambert-Bouguer law. The proposed method was tested on the analysis of vitamins and caffeine in energy drinks and of aromatic hydrocarbons in motor fuel, with errors within 10%. The results demonstrate that the proposed method is a promising tool for rapid simultaneous multicomponent analysis in cases of spectral overlap and the absence or inaccessibility of reference materials.
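
    A rough sketch of the general workflow (independent components extracted from calibration mixture spectra, then a regression of the component scores against known concentrations), using scikit-learn's FastICA on synthetic data; all spectra, concentrations, and component counts below are made up:

        # Reference-free quantification via ICA: extract component spectra from
        # calibration mixtures, regress their mixing scores against known
        # concentrations, then predict concentrations of a new mixture.
        import numpy as np
        from sklearn.decomposition import FastICA
        from sklearn.linear_model import LinearRegression

        rng = np.random.default_rng(0)
        wl = np.linspace(250, 450, 200)                       # wavelength axis, nm

        # Two hypothetical pure-component spectra (Gaussian absorption bands).
        pure = np.vstack([np.exp(-((wl - 300) / 15) ** 2),
                          np.exp(-((wl - 380) / 20) ** 2)])

        # Calibration mixtures: Beer-Lambert additivity plus a little noise.
        conc_cal = rng.uniform(0.1, 1.0, size=(8, 2))
        spectra_cal = conc_cal @ pure + rng.normal(0, 0.005, (8, 200))

        # Treat wavelengths as observations so the independent sources are the
        # component spectra; the mixing matrix then holds per-mixture scores.
        ica = FastICA(n_components=2, random_state=0)
        S = ica.fit_transform(spectra_cal.T)                  # (n_wavelengths, 2)
        scores_cal = ica.mixing_                              # (n_mixtures, 2)

        # Calibration: linear map from ICA scores to known concentrations.
        calib = LinearRegression().fit(scores_cal, conc_cal)

        # "Unknown" sample: fit its spectrum with the extracted components
        # (plus an offset), then map the fitted scores to concentrations.
        conc_true = np.array([0.30, 0.70])
        spectrum_new = conc_true @ pure + rng.normal(0, 0.005, 200)
        design = np.column_stack([S, np.ones_like(wl)])
        coef, *_ = np.linalg.lstsq(design, spectrum_new, rcond=None)
        print("true:", conc_true, "predicted:", calib.predict(coef[:2][None, :]).round(2))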

  17. 230Th-234U Model-Ages of Some Uranium Standard Reference Materials

    Williams, R W; Gaffney, A M; Kristo, M J; Hutcheon, I D


    The 'age' of a sample of uranium is an important aspect of a nuclear forensic investigation and of the attribution of the material to its source. To the extent that the sample obeys the standard rules of radiochronometry, then the production ages of even very recent material can be determined using the 230Th-234U chronometer. These standard rules may be summarized as (a) the daughter/parent ratio at time=zero must be known, and (b) there has been no daughter/parent fractionation since production. For most samples of uranium, the 'ages' determined using this chronometer are semantically 'model-ages' because (a) some assumption of the initial 230Th content in the sample is required and (b) closed-system behavior is assumed. The uranium standard reference materials originally prepared and distributed by the former US National Bureau of Standards and now distributed by New Brunswick Laboratory as certified reference materials (NBS SRM = NBL CRM) are good candidates for samples where both rules are met. The U isotopic standards have known purification and production dates, and closed-system behavior in the solid form (U3O8) may be assumed with confidence. We present here 230Th-234U model-ages for several of these standards, determined by isotope dilution mass spectrometry using a multicollector ICP-MS, and compare these ages with their known production history.
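
    As a worked illustration of the chronometer (not the authors' measurement procedure), the sketch below inverts the standard 230Th ingrowth equation for the model age, assuming zero initial 230Th and closed-system behavior; the half-lives are rounded literature values and the measured atom ratio is invented:

        # 230Th-234U model age: with zero initial 230Th and a closed system,
        # the atom ratio grows as
        #   R(t) = lam234 / (lam230 - lam234) * (1 - exp(-(lam230 - lam234) * t)),
        # which can be inverted for t.
        import math

        T_HALF_U234 = 245_500.0   # years (approximate literature value)
        T_HALF_TH230 = 75_690.0   # years (approximate literature value)

        lam234 = math.log(2) / T_HALF_U234
        lam230 = math.log(2) / T_HALF_TH230

        def model_age(ratio_230_234):
            """Model age in years from a measured 230Th/234U atom ratio."""
            k = lam230 - lam234
            return -math.log(1.0 - ratio_230_234 * k / lam234) / k

        # Hypothetical measured atom ratio for a recently purified uranium material.
        print(f"model age: {model_age(2.0e-4):.1f} years")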

  18. Structural modeling of G-protein coupled receptors: An overview on automatic web-servers.

    Busato, Mirko; Giorgetti, Alejandro


    Despite the significant efforts and discoveries of the last few years in G protein-coupled receptor (GPCR) expression and crystallization, the receptors with known structures to date represent only a small fraction of human GPCRs. The lack of experimental three-dimensional structures of the receptors is a strong limitation that hampers a deep understanding of their function. Computational techniques are thus a valid alternative strategy for modeling three-dimensional structures. Indeed, recent advances in the field, together with extraordinary developments in crystallography, in particular its ability to capture GPCRs in different activation states, have led to encouraging results in the generation of accurate models. This has prompted the community of modelers to make their methods publicly available through dedicated databases and web-servers. Here, we present an extensive overview of these services, focusing on their advantages, drawbacks, and role in successful applications. Future challenges in the field of GPCR modeling, such as the prediction of long loop regions and the modeling of receptor activation states, are presented as well.

  19. Automatically multi-paradigm requirements modeling and analyzing: An ontology-based approach


    There are several purposes for modeling and analyzing the problem domain before starting the software requirements analysis. First, it focuses on the problem domain, so that the domain users can be involved easily. Second, a comprehensive description of the problem domain helps in obtaining a comprehensive software requirements model. This paper proposes an ontology-based approach for modeling the problem domain. It interacts with the domain users using terminology that they can understand and guides them to provide the relevant information. A multi-paradigm analysis approach, based on the description of the problem domain, is also presented. Three criteria are proposed: the rationality of the organization structure, the achievability of the organization goals, and the feasibility of the organization process. The results of the analysis can be used as feedback to guide the domain users in providing further information on the problem domain. The models of the problem domain can also serve as documentation for the pre-requirements analysis phase and as the basis for further software requirements modeling.
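
    A toy sketch of one of the three criteria, goal achievability, under the assumption that organization goals are decomposed into sub-goals and a leaf goal is achievable only if some role is assigned a task covering it; the goal tree and role assignments below are invented:

        # Toy check of goal achievability over an AND-decomposed goal tree:
        # a leaf goal is achievable if some role's tasks cover it; a composite
        # goal is achievable if all of its sub-goals are.
        goal_tree = {
            "serve_customers": ["take_orders", "deliver_orders"],
            "take_orders": [],          # leaf
            "deliver_orders": [],       # leaf
        }

        # role -> goals covered by the tasks assigned to that role (hypothetical).
        assignments = {
            "clerk": {"take_orders"},
            "courier": set(),           # no delivery task assigned yet
        }

        def achievable(goal):
            subgoals = goal_tree[goal]
            if not subgoals:   # leaf: needs some role whose tasks cover it
                return any(goal in covered for covered in assignments.values())
            return all(achievable(g) for g in subgoals)

        for g in goal_tree:
            print(f"{g}: {'achievable' if achievable(g) else 'NOT achievable'}")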


    V. A. Lakhno


    Full Text Available Purpose. This work further develops mathematical models and algorithms for automated decision support in the dispatching management of city passenger traffic. Methodology. Dispatching management systems for city passenger transport are intended to keep routes running according to schedule, with minimal deviations from the planned timetable, through appropriate control actions. The system's algorithm focuses on selecting control actions that compensate for disturbances. The minimum waiting time for bus and taxi passengers at stops is proposed as the criterion for evaluating the performance of dispatching control systems. Findings. Based on an analysis of research within the existing theory of vehicle traffic flow, a model is proposed for the dispatching management of urban passenger vehicles that accounts for the most important stochastic factors affecting the schedules of buses and taxis in large cities. The resulting system of equations, which models movement along the bus routes, allows the influence of disturbances on passenger service quality indicators to be assessed quickly and, if necessary, an optimal schedule to be drawn up. Originality. The authors propose a new decision-support model for the dispatching management of city passenger transport. It takes into account the effect of the most important stochastic factors (overcrowded buses and taxis, vehicles leaving the line, delays, deviations from the route speed limit, etc.) on service quality indicators, and it also optimizes the schedule. Practical value. The results make it possible to improve approaches to building the models used in dispatching management systems for urban bus routes, as well as the selection of control actions for similar systems in large cities of Ukraine.
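
    The waiting-time criterion can be made concrete with a standard result from transit theory (not taken from this paper): for passengers arriving at random, the expected wait at a stop is E[W] = E[H^2] / (2 E[H]), where H is the headway between consecutive vehicles, so irregular headways raise the wait even when the mean headway is unchanged. The headway samples below are invented:

        # Expected passenger waiting time at a stop for randomly arriving passengers:
        #   E[W] = E[H^2] / (2 * E[H]) = (E[H] / 2) * (1 + CV(H)^2)
        # where H is the headway between consecutive vehicles.
        import statistics

        def expected_wait(headways_min):
            mean_h = statistics.fmean(headways_min)
            mean_h2 = statistics.fmean(h * h for h in headways_min)
            return mean_h2 / (2 * mean_h)

        on_schedule = [10, 10, 10, 10, 10]          # regular 10-minute headways
        disturbed = [4, 16, 6, 18, 6]               # same 10-minute mean, but bunched

        print(f"regular service:   {expected_wait(on_schedule):.1f} min average wait")
        print(f"disturbed service: {expected_wait(disturbed):.1f} min average wait")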