WorldWideScience

Sample records for modeling technique applied

  1. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    Science.gov (United States)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all of the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
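
    The core idea, propagating input uncertainty through a deterministic capability model to obtain a distribution rather than a single value, can be sketched as follows. This is a minimal Monte Carlo illustration: the stand-in model `space_power_capability` and all input distributions are fabricated assumptions, and the NASA techniques mentioned above are faster probabilistic approximations rather than brute-force sampling.

    ```python
    import numpy as np

    def space_power_capability(solar_flux, cell_efficiency, battery_capacity):
        """Hypothetical stand-in for a deterministic power-capability model
        such as SPACE; returns an available-power figure in kW."""
        return solar_flux * cell_efficiency * 0.08 + 0.1 * battery_capacity

    rng = np.random.default_rng(42)
    n = 10_000

    # Sample the uncertain inputs from assumed distributions.
    solar_flux = rng.normal(1361.0, 15.0, n)         # W/m^2
    cell_efficiency = rng.normal(0.14, 0.01, n)      # fraction
    battery_capacity = rng.normal(81.0, 4.0, n)      # kWh

    power = space_power_capability(solar_flux, cell_efficiency, battery_capacity)

    # The probabilistic result: a distribution instead of a single value.
    print(f"mean power capability: {power.mean():.1f} kW")
    print("5th/95th percentiles:", np.percentile(power, [5, 95]))
    ```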

  2. Manifold learning techniques and model reduction applied to dissipative PDEs

    CERN Document Server

    Sonday, Benjamin E; Gear, C William; Kevrekidis, Ioannis G

    2010-01-01

    We link nonlinear manifold learning techniques for data analysis/compression with model reduction techniques for evolution equations with time scale separation. In particular, we demonstrate a "nonlinear extension" of the POD-Galerkin approach to obtaining reduced dynamic models of dissipative evolution equations. The approach is illustrated through a reaction-diffusion PDE, and the performance of different simulators on the full and the reduced models is compared. We also discuss the relation of this nonlinear extension with the so-called "nonlinear Galerkin" methods developed in the context of Approximate Inertial Manifolds.
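
    For orientation, the linear step that the paper's nonlinear method extends, proper orthogonal decomposition (POD) of a snapshot matrix followed by Galerkin projection, can be sketched in a few lines; the snapshot data below are fabricated, and only the SVD-based mode extraction is the point.

    ```python
    import numpy as np

    # Snapshot matrix: each column is the PDE state u(x, t_k) on a spatial grid.
    # The snapshots below are fabricated, decaying reaction-diffusion-like fields.
    x = np.linspace(0.0, 1.0, 200)
    t = np.linspace(0.0, 2.0, 100)
    snapshots = np.array([np.exp(-tk) * np.sin(np.pi * x)
                          + 0.05 * np.exp(-4.0 * tk) * np.sin(3.0 * np.pi * x)
                          for tk in t]).T                  # shape (200, 100)

    # POD modes are the left singular vectors of the snapshot matrix.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.999)) + 1            # 99.9% of the energy
    basis = U[:, :r]

    # Galerkin reduction: evolve only the r modal coefficients a(t) = basis^T u.
    reduced_coords = basis.T @ snapshots                   # shape (r, 100)
    print(f"retained {r} POD modes; coefficients shape {reduced_coords.shape}")
    ```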

  3. Applying Modern Techniques and Carrying Out English Extracurricular Activities: On the Model United Nations Activity

    Institute of Scientific and Technical Information of China (English)

    Xu Xiaoyu; Wang Jian

    2004-01-01

    This paper introduces the extracurricular activity of the Model United Nations in Northwestern Polytechnical University (NPU) and focuses on the application of modern techniques in the activity and the pedagogical theories applied in it. Interview and questionnaire research reveals the influence of the Model United Nations.

  4. Modelling the effects of the sterile insect technique applied to Eldana saccharina Walker in sugarcane

    Directory of Open Access Journals (Sweden)

    L Potgieter

    2012-12-01

    Full Text Available A mathematical model is formulated for the population dynamics of an Eldana saccharina Walker infestation of sugarcane under the influence of partially sterile released insects. The model describes the population growth of and interaction between normal and sterile E. saccharina moths in a temporally variable, but spatially homogeneous environment. The model consists of a deterministic system of difference equations subject to strictly positive initial data. The primary objective of this model is to determine suitable parameters in terms of which the above population growth and interaction may be quantified and according to which E. saccharina infestation levels and the associated sugarcane damage may be measured. Although many models have been formulated in the past describing the sterile insect technique, few of these models describe the technique for Lepidopteran species with more than one life stage and where F1-sterility is relevant. In addition, none of these models consider the technique when fully sterile females and partially sterile males are being released. The model formulated is also the first to describe the technique applied specifically to E. saccharina, and to consider the economic viability of applying the technique to this species. Pertinent decision support is provided to farm managers in terms of the best timing for releases, release ratios and release frequencies.

  5. Mixture experiment techniques for reducing the number of components applied for modeling waste glass sodium release

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, G.; Redgate, T. [Pacific Northwest National Lab., Richland, WA (United States). Statistics Group

    1997-12-01

    Statistical mixture experiment techniques were applied to a waste glass data set to investigate the effects of the glass components on Product Consistency Test (PCT) sodium release (NR) and to develop a model for PCT NR as a function of the component proportions. The mixture experiment techniques indicate that the waste glass system can be reduced from nine to four components for purposes of modeling PCT NR. Empirical mixture models containing four first-order terms and one or two second-order terms fit the data quite well, and can be used to predict the NR of any glass composition in the model domain. The mixture experiment techniques produce a better model in less time than required by another approach.
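
    The reduced empirical model described here is, in form, a Scheffé-type mixture polynomial. A sketch of the four-component version with selected second-order terms follows; whether the fitted response is NR itself or a transform of it (e.g., ln NR) is an assumption here, not stated in the record:

    ```latex
    \mathrm{NR} \;=\; \sum_{i=1}^{4} b_i\, x_i \;+\; \sum_{i<j} b_{ij}\, x_i x_j ,
    \qquad \sum_{i=1}^{4} x_i = 1, \quad x_i \ge 0 ,
    ```

    where the x_i are the component proportions, only one or two of the b_ij cross terms are retained, and the coefficients are estimated by least squares.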

  6. Applied ALARA techniques

    Energy Technology Data Exchange (ETDEWEB)

    Waggoner, L.O.

    1998-02-05

    The presentation focuses on some of the time-proven and new technologies being used to accomplish radiological work. These techniques can be applied at nuclear facilities to reduce radiation doses and protect the environment. The last reactor plants and processing facilities were shut down and Hanford was given a new mission to put the facilities in a safe condition, decontaminate them, and prepare them for decommissioning. The skills that were necessary to operate these facilities were different than the skills needed today to clean up Hanford. Workers were not familiar with many of the tools, equipment, and materials needed to accomplish the new mission, which includes clean up of contaminated areas in and around all the facilities, recovery of reactor fuel from spent fuel pools, and the removal of millions of gallons of highly radioactive waste from 177 underground tanks. In addition, this work has to be done with a reduced number of workers and a smaller budget. At Hanford, facilities contain a myriad of radioactive isotopes that are located inside plant systems, underground tanks, and the soil. As cleanup work at Hanford began, it became obvious early on that in order to get workers to apply ALARA and use new tools and equipment to accomplish the radiological work, it was necessary to plan the work in advance and get radiological control and/or ALARA committee personnel involved early in the planning process. Emphasis was placed on applying ALARA techniques to reduce dose, limit contamination spread and minimize the amount of radioactive waste generated. Progress on the cleanup has been steady and Hanford workers have learned to use different types of engineered controls and ALARA techniques to perform radiological work. The purpose of this presentation is to share the lessons learned on how Hanford is accomplishing radiological work.

  7. Mathematical Model and Artificial Intelligent Techniques Applied to a Milk Industry through DSM

    Science.gov (United States)

    Babu, P. Ravi; Divya, V. P. Sree

    2011-08-01

    The resources for electrical energy are depleting and hence the gap between supply and demand is continuously increasing. Under such circumstances, the option left is optimal utilization of available energy resources. The main objective of this chapter is to discuss peak load management and how to overcome the problems associated with it in processing industries such as the milk industry with the help of DSM techniques. The chapter presents a generalized mathematical model for minimizing the total operating cost of the industry subject to the constraints. The work presented in this chapter also deals with the results of applying Neural Network, Fuzzy Logic and Demand Side Management (DSM) techniques to a medium-scale milk industrial consumer in India to achieve an improvement in load factor, a reduction in Maximum Demand (MD) and savings in the consumer's energy bill.
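
    For reference, the two quantities the DSM optimisation targets here are the standard ones, related by

    ```latex
    \text{Load Factor} \;=\; \frac{P_{\mathrm{avg}}}{P_{\mathrm{max}}}
    \;=\; \frac{\tfrac{1}{T}\int_0^T P(t)\,\mathrm{d}t}{\mathrm{MD}} ,
    ```

    so reducing MD while keeping energy consumption constant raises the load factor and lowers the demand charge in the energy bill.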

  8. Downscaling Statistical Model Techniques for Climate Change Analysis Applied to the Amazon Region

    Directory of Open Access Journals (Sweden)

    David Mendes

    2014-01-01

    Full Text Available The Amazon is an area covered predominantly by dense tropical rainforest with relatively small inclusions of several other types of vegetation. In the last decades, scientific research has suggested a strong link between the health of the Amazon and the integrity of the global climate: tropical forests and woodlands (e.g., savannas) exchange vast amounts of water and energy with the atmosphere and are thought to be important in controlling local and regional climates. Considering the importance of the Amazon biome to global climate change impacts, the role of protected areas in the conservation of biodiversity, and the state of the art of downscaling techniques for climate models, this work calibrates and runs a downscaling technique based on Artificial Neural Networks (ANNs), applied to the Amazon region in order to obtain regional and local predicted climate data (e.g., precipitation). The ANN results show good agreement with observations in the cities of Belém and Manaus, with correlations of approximately 88.9% and 91.3%, respectively, and a good fit in the spatial distribution, especially after the correction process.

  9. Towards a human eye behavior model by applying Data Mining Techniques on Gaze Information from IEC

    CERN Document Server

    Pallez, Denis; Baccino, Thierry

    2008-01-01

    In this paper, we first present Interactive Evolutionary Computation (IEC) and briefly describe how we have combined this artificial intelligence technique with an eye-tracker for visual optimization. Next, in order to parameterize our application correctly, we present results from applying data mining techniques to gaze information coming from experiments conducted on about 80 human individuals.

  10. The Double Layer Methodology and the Validation of Eigenbehavior Techniques Applied to Lifestyle Modeling

    Science.gov (United States)

    Lamichhane, Bishal

    2017-01-01

    A novel methodology, the double layer methodology (DLM), for modeling an individual's lifestyle and its relationships with health indicators is presented. The DLM is applied to model behavioral routines emerging from self-reports of daily diet and activities, annotated by 21 healthy subjects over 2 weeks. Unsupervised clustering on the first layer of the DLM separated our population into two groups. Using eigendecomposition techniques on the second layer of the DLM, we could find activity and diet routines, predict behaviors in a portion of the day (with an accuracy of 88% for diet and 66% for activity), determine between-day and between-individual similarities, and detect an individual's membership in a group based on behavior (with an accuracy up to 64%). We found that clustering based on health indicators mapped back into activity behaviors, but not into diet behaviors. In addition, we showed the limitations of eigendecomposition for lifestyle applications, in particular when applied to noisy and sparse behavioral data such as dietary information. Finally, we proposed the use of the DLM for supporting adaptive and personalized recommender systems for stimulating behavior change. PMID:28133607
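
    The second-layer eigendecomposition is in the spirit of eigenbehavior analysis: each day is a behavior vector, the principal components of the day-by-slot matrix are the routines, and partial-day prediction is a least-squares projection onto the leading modes. A minimal sketch with fabricated binary data follows; the 48-slot encoding and k=3 modes are illustrative assumptions, not the paper's settings.

    ```python
    import numpy as np

    # Rows: days; columns: binary indicators of an activity/diet item per time
    # slot (fabricated data; 48 slots and k=3 modes are illustrative choices).
    rng = np.random.default_rng(0)
    days = rng.integers(0, 2, size=(14, 48)).astype(float)   # 2 weeks

    mean_day = days.mean(axis=0)
    U, s, Vt = np.linalg.svd(days - mean_day, full_matrices=False)
    eigenbehaviors = Vt                       # rows are principal routines

    # Predict the rest of a day from its first 24 observed slots by fitting
    # the observed part to the leading k eigenbehaviors (least squares).
    k, observed = 3, 24
    partial = days[0, :observed] - mean_day[:observed]
    coeffs, *_ = np.linalg.lstsq(eigenbehaviors[:k, :observed].T, partial,
                                 rcond=None)
    predicted_rest = mean_day[observed:] + coeffs @ eigenbehaviors[:k, observed:]
    print(np.round(predicted_rest[:8], 2))
    ```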

  11. Structure-selection techniques applied to continuous-time nonlinear models

    Science.gov (United States)

    Aguirre, Luis A.; Freitas, Ubiratan S.; Letellier, Christophe; Maquet, Jean

    2001-10-01

    This paper addresses the problem of choosing the multinomials that should compose a polynomial mathematical model starting from data. The mathematical representation used is a nonlinear differential equation of the polynomial type. Some approaches that have been used in the context of discrete-time models are adapted and applied to continuous-time models. Two examples are included to illustrate the main ideas. Models obtained with and without structure selection are compared using topological analysis. The main differences between structure-selected models and complete structure models are: (i) the former are more parsimonious than the latter, (ii) a predefined fixed-point configuration can be guaranteed for the former, and (iii) the former set of models produce attractors that are topologically closer to the original attractor than those produced by the complete structure models.

  12. Applied methods and techniques for mechatronic systems modelling, identification and control

    CERN Document Server

    Zhu, Quanmin; Cheng, Lei; Wang, Yongji; Zhao, Dongya

    2014-01-01

    Applied Methods and Techniques for Mechatronic Systems brings together the relevant studies in mechatronic systems with the latest research from interdisciplinary theoretical studies, computational algorithm development and exemplary applications. Readers can easily tailor the techniques in this book to accommodate their ad hoc applications. The clear structure of each paper, background - motivation - quantitative development (equations) - case studies/illustration/tutorial (curve, table, etc.) is also helpful. It is mainly aimed at graduate students, professors and academic researchers in related fields, but it will also be helpful to engineers and scientists from industry. Lei Liu is a lecturer at Huazhong University of Science and Technology (HUST), China; Quanmin Zhu is a professor at University of the West of England, UK; Lei Cheng is an associate professor at Wuhan University of Science and Technology, China; Yongji Wang is a professor at HUST; Dongya Zhao is an associate professor at China University o...

  13. Modelling laser speckle photographs of decayed teeth by applying a digital image information technique

    Science.gov (United States)

    Ansari, M. Z.; da Silva, L. C.; da Silva, J. V. P.; Deana, A. M.

    2016-09-01

    We report on the application of a digital image model to assess early carious lesions on teeth. Lesions in the early stages of decay were illuminated with a laser and laser speckle images were obtained. Due to the differences in the optical properties between healthy and carious tissue, the two regions produced different scatter patterns. The digital image information technique allowed us to produce colour-coded 3D surface plots of the intensity information in the speckle images, where the height (on the z-axis) and the colour in the rendering correlate with the intensity of a pixel in the image. The quantitative changes in colour component density enhance the contrast between decayed and sound tissue, making the carious lesions significantly more evident. Therefore, the proposed technique may be adopted in the early diagnosis of carious lesions.

  14. Applying Intelligent Computing Techniques to Modeling Biological Networks from Expression Data

    Institute of Scientific and Technical Information of China (English)

    Wei-Po Lee; Kung-Cheng Yang

    2008-01-01

    Constructing biological networks is one of the most important issues in systems biology. However, constructing a network from data manually takes a considerable amount of time, therefore an automated procedure is advocated. To automate the procedure of network construction, in this work we use two intelligent computing techniques, genetic programming and neural computation, to infer two kinds of network models that use continuous variables. To verify the presented approaches, experiments have been conducted and the preliminary results show that both approaches can be used to infer networks successfully.

  15. Applying Reflective Middleware Techniques to Optimize a QoS-enabled CORBA Component Model Implementation

    Science.gov (United States)

    Wang, Nanbor; Parameswaran, Kirthika; Kircher, Michael; Schmidt, Douglas

    2003-01-01

    Although existing CORBA specifications, such as Real-time CORBA and CORBA Messaging, address many end-to-end quality-of-service (QoS) properties, they do not define strategies for configuring these properties into applications flexibly, transparently, and adaptively. Therefore, application developers must make these configuration decisions manually and explicitly, which is tedious, error-prone, and often sub-optimal. Although the recently adopted CORBA Component Model (CCM) does define a standard configuration framework for packaging and deploying software components, conventional CCM implementations focus on functionality rather than adaptive quality-of-service, which makes them unsuitable for next-generation applications with demanding QoS requirements. This paper presents three contributions to the study of middleware for QoS-enabled component-based applications. It outlines reflective middleware techniques designed to adaptively (1) select optimal communication mechanisms, (2) manage QoS properties of CORBA components in their containers, and (3) (re)configure selected component executors dynamically. Based on our ongoing research on CORBA and the CCM, we believe the application of reflective techniques to component middleware will provide a dynamically adaptive and (re)configurable framework for COTS software that is well-suited for the QoS demands of next-generation applications.

  16. Applied impulsive mathematical models

    CERN Document Server

    Stamova, Ivanka

    2016-01-01

    Using the theory of impulsive differential equations, this book focuses on mathematical models which reflect current research in biology, population dynamics, neural networks and economics. The authors provide the basic background from the fundamental theory and give a systematic exposition of recent results related to the qualitative analysis of impulsive mathematical models. Consisting of six chapters, the book presents many applicable techniques, making them available in a single source easily accessible to researchers interested in mathematical models and their applications. Serving as a valuable reference, this text is addressed to a wide audience of professionals, including mathematicians, applied researchers and practitioners.

  17. New techniques on oil spill modelling applied in the Eastern Mediterranean sea

    Science.gov (United States)

    Zodiatis, George; Kokinou, Eleni; Alves, Tiago; Lardner, Robin

    2016-04-01

    Small or large oil spills resulting from accidents on oil and gas platforms or from maritime traffic are a major environmental threat to all marine and coastal systems, and they are responsible for huge economic losses to human infrastructure and tourism. This work presents the integration of oil-spill model, bathymetric, meteorological, oceanographic, geomorphological and geological data to assess the impact of oil spills in maritime regions such as bays, as well as in the open sea, carried out in the Eastern Mediterranean Sea within the frame of the NEREIDs, MEDESS-4MS and RAOP-Med EU projects. The MEDSLIK oil spill predictions are successfully combined with bathymetric analyses, shoreline susceptibility and hazard mapping to predict the oil slick trajectories and the extent of the coastal areas affected. Based on MEDSLIK results, oil spill spreading and dispersion scenarios are produced both for non-mitigated and mitigated oil spills. The MEDSLIK model considers three response methods for combating floating oil spills: a) mechanical recovery using skimmers or similar mechanisms; b) destruction by fire; c) use of dispersants or other bio-chemical means and deployment of booms. A shoreline susceptibility map can be compiled for the study areas based on the Environmental Susceptibility Index (ESI). The ESI classification considers a range of values between 1 and 9, with level 1 (ESI 1) representing areas of low susceptibility, impermeable to oil spilt during accidents, such as linear shorelines with rocky cliffs. In contrast, ESI 9 shores are highly vulnerable, and often coincide with natural reserves and special protected areas. Additionally, hazard maps of the maritime and coastal areas possibly exposed to the danger of an oil spill evaluate and categorize the hazard in levels from low to very high. This is important because a) Prior to an oil spill accident, hazard and shoreline susceptibility maps are made available to design

  18. Fuzzy control technique applied to modified mathematical model for malaria control

    African Journals Online (AJOL)

    ABSTRACT. In this paper, fuzzy control technique is applied to the modified mathematical model for malaria control presented ... be devised for rule-based systems that deals with continuous ... necessary to use fuzzy logic as it is not easy to follow a particular .... point movement and control is realized and designed. (e.g. α1 ...

  19. Computational optimization techniques applied to microgrids planning

    DEFF Research Database (Denmark)

    Gamarra, Carlos; Guerrero, Josep M.

    2015-01-01

    …appear along the planning process. In this context, the technical literature about optimization techniques applied to microgrid planning has been reviewed, and guidelines for innovative planning methodologies focused on economic feasibility can be defined. Finally, some trending techniques and new...

  20. Applying Model-Based Techniques for Aerospace Projects in Accordance with DO-178C, DO-331, and DO-333

    OpenAIRE

    Eisemann, Ulrich

    2016-01-01

    The new standard for software development in civil aviation, DO-178C, mainly differs from its predecessor DO-178B in that it has standard supplements to provide greater scope for using new software development methods. The most important standard supplements are DO-331 on the methods of model-based development and model-based verification and DO-333 on the use of formal methods such as model checking and abstract interpretation. These key software design techniques of...

  1. Gaussian closure technique applied to the hysteretic Bouc model with non-zero mean white noise excitation

    Science.gov (United States)

    Waubke, Holger; Kasess, Christian H.

    2016-11-01

    Devices that emit structure-borne sound are commonly decoupled by elastic components to shield the environment from acoustical noise and vibrations. The elastic elements often have a hysteretic behavior that is typically neglected. In order to take hysteretic behavior into account, Bouc developed a differential equation for such materials, especially joints made of rubber or equipped with dampers. In this work, the Bouc model is solved by means of the Gaussian closure technique based on the Kolmogorov equation. Kolmogorov developed a method to derive probability density functions for arbitrary explicit first-order vector differential equations under white noise excitation, using a partial differential equation of a multivariate conditional probability distribution. Up to now no analytical solution of the Kolmogorov equation in conjunction with the Bouc model exists. Therefore a wide range of approximate solutions, especially statistical linearization, was developed. Using the Gaussian closure technique, an approximation to the Kolmogorov equation that assumes a multivariate Gaussian distribution, an analytic solution is derived in this paper for the Bouc model. For the stationary case the two methods yield equivalent results; however, in contrast to statistical linearization, the presented solution allows the transient behavior to be calculated explicitly. Further, the stationary case leads to an implicit set of equations that can be solved iteratively with a small number of iterations and without instabilities for specific parameter sets.
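
    For context, the hysteretic law usually meant by "the Bouc model" is the Bouc-Wen form; the paper's exact parameterization may differ:

    ```latex
    m\ddot{x} + c\dot{x} + k\bigl[\alpha x + (1-\alpha)\,z\bigr] = w(t) ,
    \qquad
    \dot{z} = A\dot{x} \;-\; \beta\,\lvert\dot{x}\rvert\,\lvert z\rvert^{\,n-1} z
              \;-\; \gamma\,\dot{x}\,\lvert z\rvert^{\,n} .
    ```

    Gaussian closure then writes the moment equations for (x, ẋ, z) driven by the white noise w(t) and evaluates the non-polynomial expectations, e.g. E[|ẋ| |z|^(n-1) z], under a joint Gaussian assumption, which closes the otherwise open hierarchy of moment equations.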

  2. A spatial model with pulsed releases to compare strategies for the sterile insect technique applied to the mosquito Aedes aegypti.

    Science.gov (United States)

    Oléron Evans, Thomas P; Bishop, Steven R

    2014-08-01

    We present a simple mathematical model to replicate the key features of the sterile insect technique (SIT) for controlling pest species, with particular reference to the mosquito Aedes aegypti, the main vector of dengue fever. The model differs from the majority of those studied previously in that it is simultaneously spatially explicit and involves pulsed, rather than continuous, sterile insect releases. The spatially uniform equilibria of the model are identified and analysed. Simulations are performed to analyse the impact of varying the number of release sites, the interval between pulsed releases and the overall volume of sterile insect releases on the effectiveness of SIT programmes. Results show that, given a fixed volume of available sterile insects, increasing the number of release sites and the frequency of releases increases the effectiveness of SIT programmes. It is also observed that programmes may become completely ineffective if the interval between pulsed releases is greater than a certain threshold value and that, beyond a certain point, increasing the overall volume of sterile insects released does not improve the effectiveness of SIT. It is also noted that insect dispersal drives a rapid recolonisation of areas in which the species has been eradicated and we argue that understanding the density dependent mortality of released insects is necessary to develop efficient, cost-effective SIT programmes.
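
    A non-spatial caricature of such a pulsed-release model is easy to write down and already exhibits the threshold behaviour described above. Everything below (the logistic growth, the fertile-mating fraction N/(N+S), and all parameter values) is an illustrative assumption, not the authors' model:

    ```python
    import numpy as np

    def simulate_sit(n0, r=1.8, K=10_000.0, mortality=0.2, mu_s=0.1,
                     release=5_000.0, interval=7, days=365):
        """Minimal non-spatial sketch of pulsed sterile releases (an assumed
        form, not the paper's model): wild insects N mate fertile with
        probability N/(N+S); sterile insects S decay at rate mu_s and are
        replenished every `interval` days."""
        n, s, history = n0, 0.0, []
        for day in range(days):
            if day % interval == 0:
                s += release                          # pulsed release
            fertile = n / (n + s) if n + s > 0 else 0.0
            n += r * n * fertile * (1.0 - n / K) - mortality * n
            n = max(n, 0.0)
            s *= 1.0 - mu_s                           # sterile cohort decay
            history.append(n)
        return np.array(history)

    wild = simulate_sit(n0=2_000.0)
    print(f"wild population after one year: {wild[-1]:.0f}")
    ```

    Varying `interval` and `release` in such a toy model reproduces the qualitative findings above: frequent small pulses outperform rare large ones for the same total volume.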

  3. Communication Analysis modelling techniques

    CERN Document Server

    España, Sergio; Pastor, Óscar; Ruiz, Marcela

    2012-01-01

    This report describes and illustrates several modelling techniques proposed by Communication Analysis; namely the Communicative Event Diagram, Message Structures and Event Specification Templates. The Communicative Event Diagram is a business process modelling technique that adopts a communicational perspective by focusing on communicative interactions when describing the organizational work practice, instead of focusing on physical activities; at this abstraction level, we refer to business activities as communicative events. Message Structures is a technique based on structured text that allows specifying the messages associated with communicative events. Event Specification Templates are a means to organise the requirements concerning a communicative event. This report can be useful to analysts and business process modellers in general, since, according to our industrial experience, it is possible to apply many Communication Analysis concepts, guidelines and criteria to other business process modelling notation...

  4. Neural networks techniques applied to reservoir engineering

    Energy Technology Data Exchange (ETDEWEB)

    Flores, M. [Gerencia de Proyectos Geotermoelectricos, Morelia (Mexico); Barragan, C. [RockoHill de Mexico, Indiana (Mexico)

    1995-12-31

    Neural Networks are considered the greatest technological advance since the transistor. They are expected to be a common household item by the year 2000. An attempt has been made to apply Neural Networks to an important geothermal problem: predicting well production and well completion during drilling in a geothermal field. This was done in the Los Humeros geothermal field, using two common types of Neural Network models available in commercial software. Results show the learning capacity of the developed model and its precision in the predictions that were made.

  5. Use of KNN technique to improve the efficiency of SCE-UA optimisation method applied to the calibration of HBV Rainfall-Runoff model

    Science.gov (United States)

    Dakhlaoui, H.; Bargaoui, Z.

    2007-12-01

    The calibration of Rainfall-Runoff models can be viewed as an optimisation problem involving an objective function that measures the model performance, expressed as a distance between observed and calculated discharges. Effectiveness (the ability to find the optimum) and efficiency (the cost, expressed in the number of objective function evaluations needed to reach the optimum) are the main criteria for choosing an optimisation method. SCE-UA is known as one of the most effective and efficient optimisation methods. In this work we try to improve the efficiency of SCE-UA for the calibration of the HBV model by using the KNN technique to estimate the objective function. After a number of SCE-UA iterations in which the objective function is evaluated by model simulation, a database of explored parameters and their respective objective function values is built. From this database it is proposed to estimate the objective function in further iterations by interpolation using nearest neighbours in a normalised parameter space with a weighted Euclidean distance. Weights are chosen proportional to the sensitivity of the objective function to each parameter, which gives more importance to sensitive parameters. Evaluation of model output is done through the objective function RV = R² - w|RD|, where R² is the Nash-Sutcliffe coefficient for discharges, w is a weight and RD is the relative bias. Applied to theoretical and practical cases in several catchments under different climatic conditions, Rottweil (Germany) and Tessa, Barbra, and Sejnane (Tunisia), the hybrid SCE-UA improves on the efficiency of the original SCE-UA by about 20 to 30%. By using other techniques such as parameter space transformation and SCE-UA modification (2), we may obtain an algorithm two to three times faster. (1) Avi Ostfeld, Shani Salomons, "A hybrid genetic-instance learning algorithm for CE-QUAL-W2 calibration", Journal of Hydrology 310 (2005) 122-125. (2) Nitin Muttil and Shie-Yui Liong, "Improved robustness and Efficiency
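
    A sketch of the interpolation step, the archive lookup with sensitivity-weighted Euclidean distance, is given below; the function names and the inverse-distance weighting are illustrative assumptions, not the authors' code:

    ```python
    import numpy as np

    def knn_objective_estimate(theta, archive_params, archive_values,
                               sensitivity, k=5):
        """Estimate the calibration objective at parameter vector `theta` from
        an archive of already-simulated points, using a sensitivity-weighted
        Euclidean distance and inverse-distance weighting (an illustrative
        reading of the approach, not the authors' code)."""
        diffs = (archive_params - theta) * sensitivity
        dist = np.sqrt((diffs**2).sum(axis=1))
        idx = np.argsort(dist)[:k]
        w = 1.0 / (dist[idx] + 1e-12)
        return float((w * archive_values[idx]).sum() / w.sum())

    # Usage: inside SCE-UA, substitute this estimate for some model runs once
    # the archive of (normalised parameters, RV) pairs is large enough.
    rng = np.random.default_rng(1)
    params = rng.random((200, 8))        # 8 normalised HBV-like parameters
    values = rng.random(200)             # stored RV values (fabricated)
    print(knn_objective_estimate(rng.random(8), params, values,
                                 sensitivity=np.ones(8)))
    ```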

  6. Applying DEA Technique to Library Evaluation in Academic Research Libraries.

    Science.gov (United States)

    Shim, Wonsik

    2003-01-01

    This study applied an analytical technique called Data Envelopment Analysis (DEA) to calculate the relative technical efficiency of 95 academic research libraries, all members of the Association of Research Libraries. DEA, with the proper model of library inputs and outputs, can reveal best practices in the peer groups, as well as the technical…

  7. Applied Bayesian modelling

    CERN Document Server

    Congdon, Peter

    2014-01-01

    This book provides an accessible approach to Bayesian computing and data analysis, with an emphasis on the interpretation of real data sets. Following in the tradition of the successful first edition, this book aims to make a wide range of statistical modeling applications accessible using tested code that can be readily adapted to the reader's own applications. The second edition has been thoroughly reworked and updated to take account of advances in the field. A new set of worked examples is included. The novel aspect of the first edition was the coverage of statistical modeling using WinBU

  8. Revisión de los principales modelos para aplicar técnicas de Minería de Procesos (Review of models for applying process mining techniques)

    National Research Council Canada - National Science Library

    Arturo Orellana García; Damián Pérez Alfonso; Vivian Estrada Sentí

    2016-01-01

    … The research focuses on collecting information on the models proposed by authors of worldwide reference in the process mining topic, in order to apply techniques for discovery, conformance checking and process improvement...

  9. Nuclear radioactive techniques applied to materials research

    CERN Document Server

    Correia, João Guilherme; Wahl, Ulrich

    2011-01-01

    In this paper we review materials characterization techniques using radioactive isotopes at the ISOLDE/CERN facility. At ISOLDE intense beams of chemically clean radioactive isotopes are provided by selective ion-sources and high-resolution isotope separators, which are coupled on-line with particle accelerators. There, new experiments are performed by an increasing number of materials researchers, who use nuclear spectroscopic techniques such as Mössbauer, Perturbed Angular Correlations (PAC), beta-NMR and Emission Channeling with short-lived isotopes not available elsewhere. Additionally, diffusion studies and traditionally non-radioactive techniques such as Deep Level Transient Spectroscopy, Hall effect and Photoluminescence measurements are performed on radioactively doped samples, providing in this way the element signature upon correlation of the time dependence of the signal with the isotope transmutation half-life. Current developments, applications and perspectives of using radioactive ion beams and tech...

  10. Sensor Data Qualification Technique Applied to Gas Turbine Engines

    Science.gov (United States)

    Csank, Jeffrey T.; Simon, Donald L.

    2013-01-01

    This paper applies a previously developed sensor data qualification technique to a commercial aircraft engine simulation known as the Commercial Modular Aero-Propulsion System Simulation 40,000 (C-MAPSS40k). The sensor data qualification technique is designed to detect, isolate, and accommodate faulty sensor measurements. It features sensor networks, which group various sensors together and relies on an empirically derived analytical model to relate the sensor measurements. Relationships between all member sensors of the network are analyzed to detect and isolate any faulty sensor within the network.

  11. Applying Mixed Methods Techniques in Strategic Planning

    Science.gov (United States)

    Voorhees, Richard A.

    2008-01-01

    In its most basic form, strategic planning is a process of anticipating change, identifying new opportunities, and executing strategy. The use of mixed methods, blending quantitative and qualitative analytical techniques and data, in the process of assembling a strategic plan can help to ensure a successful outcome. In this article, the author…

  13. Digital Speckle Technique Applied to Flow Visualization

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    Digital speckle technique uses a laser, a CCD camera, and digital processing to generate interference fringes at the television framing rate. Its most obvious advantage is that neither darkroom facilities nor photographic wet chemical processing is required. In addition, it can be used in harsh engineering environments. This paper discusses the strengths and weaknesses of three digital speckle methodologies. (1) Digital speckle pattern interferometry (DSPI) uses an optical polarization phase shifter for visualization and measurement of the density field in a flow field. (2) Digital shearing speckle interferometry (DSSI) utilizes speckle-shearing interferometry in addition to optical polarization phase shifting. (3) Digital speckle photography (DSP) with computer reconstruction. The discussion describes the concepts, the principles and the experimental arrangements with some experimental results. The investigation shows that these three digital speckle techniques provide an excellent method for visualizing flow fields and for measuring density distributions in fluid mechanics and thermal flows.

  14. Applying Cooperative Techniques in Teaching Problem Solving

    Directory of Open Access Journals (Sweden)

    Krisztina Barczi

    2013-12-01

    Full Text Available Teaching how to solve problems – from solving simple equations to solving difficult competition tasks – has been one of the greatest challenges for mathematics education for many years. Trying to find an effective method is an important educational task. Among others, the question arises as to whether a method in which students help each other might be useful. The present article describes part of an experiment that was designed to determine the effects of cooperative teaching techniques on the development of problem-solving skills.

  15. Autoadjustable sutures and modified Seldinger technique applied to laparoscopic jejunostomy.

    Science.gov (United States)

    Pili, Diego; Ciotola, Franco; Riganti, Juan Martín; Badaloni, Adolfo; Nieponice, Alejandro

    2015-02-01

    This is a simple technique to be applied to those patients requiring an alternative feeding method. This technique has been successfully applied to 25 patients suffering from esophageal carcinoma. The procedure involves laparoscopic approach, suture of the selected intestinal loop to the abdominal wall and jejunostomy using Seldinger technique and autoadjustable sutures. No morbidity or mortality was reported.

  16. Guidelines for a Digital Reinterpretation of Architectural Restoration Work: Reality-Based Models and Reverse Modelling Techniques Applied to the Architectural Decoration of the Teatro Marittimo, Villa Adriana

    Science.gov (United States)

    Adembri, B.; Cipriani, L.; Bertacchi, G.

    2017-05-01

    The Maritime Theatre is one of the iconic buildings of Hadrian's Villa, Tivoli. The state of conservation of the theatre is not only the result of weathering over time, but also of restoration work carried out during the 1950s. Although this anastylosis process had the virtue of partially restoring a few of the fragments of the compound's original image, it now reveals diverse inconsistencies and genuine errors in the reassembly of the fragments. This study aims at carrying out a digital reinterpretation of the restoration of the architectural fragments in relation to the architectural order, with particular reference to the miscellaneous decoration of the frieze of the Teatro Marittimo (vestibule and atrium). Over the course of the last few years the Teatro Marittimo has been the target of numerous surveying campaigns using digital methodologies (laser scanning and SfM/MVS photogrammetry). Starting with the study of the remains of the opus caementicium on the ground, it is possible to identify surfaces that are then used in the model for subsequent cross sections, so as to obtain the best fitting circumferences to use as reference points to put the fragments back into place.
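
    The "best fitting circumferences" step can be illustrated with a generic algebraic (Kasa) least-squares circle fit to points taken from a horizontal section of the reality-based model; this is a textbook method, not necessarily the authors' exact procedure:

    ```python
    import numpy as np

    def fit_circle(points):
        """Algebraic (Kasa) least-squares circle fit, solving
        x^2 + y^2 + D*x + E*y + F = 0 for D, E, F."""
        x, y = points[:, 0], points[:, 1]
        A = np.column_stack([x, y, np.ones_like(x)])
        b = -(x**2 + y**2)
        (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
        cx, cy = -D / 2.0, -E / 2.0
        radius = np.sqrt(cx**2 + cy**2 - F)
        return (cx, cy), radius

    # Points sampled from a noisy arc, as a horizontal section through a
    # curved wall of the point cloud might yield.
    theta = np.linspace(0.2, 1.8, 60)
    pts = np.column_stack([5.0 + 3.2 * np.cos(theta), 2.0 + 3.2 * np.sin(theta)])
    pts += np.random.default_rng(3).normal(0.0, 0.02, pts.shape)
    print(fit_circle(pts))   # centre near (5, 2), radius near 3.2
    ```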

  17. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

    ""Engaging, elegantly written."" - Applied Mathematical ModellingMathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models.The author begins with a discussion of the term ""model,"" followed by clearly presented examples of the different types of mode

  18. Applying Stylometric Analysis Techniques to Counter Anonymity in Cyberspace

    Directory of Open Access Journals (Sweden)

    Jianwen Sun

    2012-02-01

    Full Text Available Due to the ubiquitous nature of cyberspace and the abuse of the anonymity it affords, it is difficult to trace criminal identities in cybercrime investigations. Writeprint identification offers a valuable tool to counter anonymity by applying stylometric analysis techniques to help identify individuals based on textual traces. In this study, a framework for online writeprint identification is proposed. Variable-length character n-grams are used to represent the author's writing style. The technique of IG-seeded GA-based feature selection for Ensemble (IGAE) is also developed to build an identification model based on individual author-level features. Several specific components for dealing with the individual feature set are integrated to improve the performance. The proposed feature and technique are evaluated on a real-world data set encompassing reviews posted by 50 Amazon customers. The experimental results show the effectiveness of the proposed framework, with accuracy over 94% for 20 authors and over 80% for 50. Compared with the baseline technique (Support Vector Machine), a higher performance is achieved by using IGAE, resulting in a 2% and 8% improvement over SVM for 20 and 50 authors respectively. Moreover, it has been shown that IGAE is more scalable in terms of the number of authors than author-group-level based methods.
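
    A minimal version of the character n-gram writeprint pipeline, shown here with the SVM baseline the abstract compares against; the IGAE feature-selection stage is omitted, and the texts are toy stand-ins for the Amazon reviews:

    ```python
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Character n-grams of length 2-4 as writeprint features.
    texts = ["I really enjoyed this product, works exactly as described...",
             "Terrible. Broke after two days, do not recommend!!",
             "Works as described, shipping was quick and easy.",
             "Do NOT buy. Broke instantly, total waste of money!!"]
    authors = ["A", "B", "A", "B"]

    model = make_pipeline(
        TfidfVectorizer(analyzer="char", ngram_range=(2, 4)),
        LinearSVC())
    model.fit(texts, authors)
    print(model.predict(["Quick shipping, enjoyed it, works great..."]))
    ```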

  19. Models of signal validation using artificial intelligence techniques applied to a nuclear reactor; Modelos de validacao de sinal utilizando tecnicas de inteligencia artificial aplicados a um reator nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Mauro V. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil); Schirru, Roberto [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia

    2000-07-01

    This work presents two models of signal validation in which the analytical redundancy of the monitored signals from a nuclear plant is provided by neural networks. In one model the analytical redundancy is provided by a single neural network, while in the other it is provided by several neural networks, each one working in a specific part of the entire operating region of the plant. Four clustering techniques were tested to separate the entire operating region into several specific regions. Additional information on the systems' reliability is supplied by a fuzzy inference system. The models were implemented in C language and tested with signals acquired from the Angra I nuclear power plant, from start-up to 100% of power. (author)

  20. Data flow modeling techniques

    Science.gov (United States)

    Kavi, K. M.

    1984-01-01

    There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.

  1. A New Experimental Technique for Applying Impulse Tension Loading

    OpenAIRE

    Fan, Z. S.; Yu, H. P.; Su, H; Zhang, X.; Li, C. F.

    2016-01-01

    This paper deals with a new experimental technique for applying impulse tension loads. Briefly, the technique is based on the use of pulsed-magnetic-driven tension loading. Electromagnetic forming (EMF) can be quite effective in increasing the forming limits of metal sheets, such as aluminium and magnesium alloys. Yet why the forming limit is increased is still an open question. One reason for this is the difficulty of letting forming proceed monotonically under a given influence: ...

  2. Applied Regression Modeling: A Business Approach

    CERN Document Server

    Pardoe, Iain

    2012-01-01

    An applied and concise treatment of statistical regression techniques for business students and professionals who have little or no background in calculus. Regression analysis is an invaluable statistical methodology in business settings and is vital to model the relationship between a response variable and one or more predictor variables, as well as the prediction of a response value given values of the predictors. In view of the inherent uncertainty of business processes, such as the volatility of consumer spending and the presence of market uncertainty, business professionals use regression a

  3. Technique applied in electrical power distribution for Satellite Launch Vehicle

    Directory of Open Access Journals (Sweden)

    João Maurício Rosário

    2010-09-01

    Full Text Available The Satellite Launch Vehicle electrical network, which is currently being developed in Brazil, is sub-divided for analysis into the following parts: Service Electrical Network, Controlling Electrical Network, Safety Electrical Network and Telemetry Electrical Network. During the pre-launching and launching phases, these electrical networks are associated electrically and mechanically to the structure of the vehicle. In order to succeed in the integration of these electrical networks it is necessary to employ electrical power distribution techniques appropriate to Launch Vehicle systems. This work presents the most important techniques to be considered in the characterization of the electrical power supply applied to Launch Vehicle systems. Such techniques are primarily designed to allow the electrical networks, when subjected to a single-phase fault to ground, to keep supplying power to the loads.

  4. Applying recursive numerical integration techniques for solving high dimensional integrals

    Energy Technology Data Exchange (ETDEWEB)

    Ammon, Andreas [IVU Traffic Technologies AG, Berlin (Germany); Genz, Alan [Washington State Univ., Pullman, WA (United States). Dept. of Mathematics; Hartung, Tobias [King' s College, London (United Kingdom). Dept. of Mathematics; Jansen, Karl; Volmer, Julia [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Leoevey, Hernan [Humboldt Univ. Berlin (Germany). Inst. fuer Mathematik

    2016-11-15

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with N samples behaves like 1/√(N). This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points m that is at least exponential.
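
    In its most literal form, RNI applies a 1-D Gaussian rule one dimension at a time. The sketch below does exactly that for a rotor-like integrand with nearest-neighbour cosine couplings; note that this naive recursion costs m^dim evaluations, whereas the actual method exploits the coupling structure (a transfer-matrix-style factorization) to avoid the exponential blow-up. The integrand and all parameters are illustrative assumptions:

    ```python
    import numpy as np

    def recursive_gauss(f, dim, m=8, a=0.0, b=2.0 * np.pi):
        """Integrate f over [a,b]^dim by applying a 1-D Gauss-Legendre rule
        recursively, one dimension at a time (the RNI idea in minimal form)."""
        nodes, weights = np.polynomial.legendre.leggauss(m)
        nodes = 0.5 * (b - a) * nodes + 0.5 * (b + a)   # map [-1,1] -> [a,b]
        weights = 0.5 * (b - a) * weights

        def integrate(level, point):
            if level == dim:
                return f(np.array(point))
            return sum(w * integrate(level + 1, point + [x])
                       for x, w in zip(nodes, weights))

        return integrate(0, [])

    # Rotor-like integrand with nearest-neighbour cosine couplings.
    f = lambda phi: np.exp(np.cos(phi - np.roll(phi, 1)).sum())
    print(recursive_gauss(f, dim=3, m=8))
    ```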

  5. Applying recursive numerical integration techniques for solving high dimensional integrals

    CERN Document Server

    Ammon, Andreas; Hartung, Tobias; Jansen, Karl; Leövey, Hernan; Volmer, Julia

    2016-01-01

    The error scaling for Markov-Chain Monte Carlo techniques (MCMC) with $N$ samples behaves like $1/\\sqrt{N}$. This scaling makes it often very time intensive to reduce the error of computed observables, in particular for applications in lattice QCD. It is therefore highly desirable to have alternative methods at hand which show an improved error scaling. One candidate for such an alternative integration technique is the method of recursive numerical integration (RNI). The basic idea of this method is to use an efficient low-dimensional quadrature rule (usually of Gaussian type) and apply it iteratively to integrate over high-dimensional observables and Boltzmann weights. We present the application of such an algorithm to the topological rotor and the anharmonic oscillator and compare the error scaling to MCMC results. In particular, we demonstrate that the RNI technique shows an error scaling in the number of integration points $m$ that is at least exponential.

  6. Monitoring dam structural health from space: Insights from novel InSAR techniques and multi-parametric modeling applied to the Pertusillo dam, Basilicata, Italy

    Science.gov (United States)

    Milillo, Pietro; Perissin, Daniele; Salzer, Jacqueline T.; Lundgren, Paul; Lacava, Giusy; Milillo, Giovanni; Serio, Carmine

    2016-10-01

    The availability of new constellations of synthetic aperture radar (SAR) sensors is leading to important advances in infrastructure monitoring. These constellations offer the advantage of reduced revisit times, providing low-latency data that enable analysis that can identify infrastructure instability and dynamic deformation processes. In this paper we use COSMO-SkyMed (CSK) and TerraSAR-X (TSX) data to monitor seasonal induced deformation at the Pertusillo dam (Basilicata, Italy) using multi-temporal SAR data analysis. We analyzed 198 images spanning 2010-2015 using a coherent and incoherent PS approach to merge COSMO-SkyMed adjacent tracks and TerraSAR-X acquisitions, respectively. We used hydrostatic-seasonal-temporal (HST) and hydrostatic-temperature-temporal (HTT) models to interpret the non-linear deformation at the dam wall using ground measurements together with SAR time-series analysis. Different look geometries allowed us to characterize the horizontal deformation field typically observed at dams. Within the limits of our models and the SAR acquisition sampling we found that most of the deformation at the Pertusillo dam can be explained by taking into account only thermal seasonal dilation and hydrostatic pressure. The different models show slightly different results when interpreting the aging term at the dam wall. The results highlight how short-revisit SAR satellites in combination with models widely used in the literature for interpreting pendulum and GPS data can be used for supporting structural health monitoring and provide valuable information to ground users directly involved in field measurements.

  7. Applying Business Process Modeling Techniques: Case Study

    Directory of Open Access Journals (Sweden)

    Bartosz Marcinkowski

    2010-12-01

    Full Text Available Selection and proper application of business process modeling methods and techniques have a significant impact on organizational improvement capabilities, as well as on a proper understanding of the functionality of the information systems that shall support the activity of the organization. A number of business process modeling notations have been implemented in practice in recent decades. The most significant of these notations include ARIS, the Business Process Modeling Notation (OMG BPMN) and several Unified Modeling Language (OMG UML) extensions. In this paper, we assess whether one of the most flexible and strictly standardized contemporary business process modeling notations, i.e. the Rational UML Profile for Business Modeling, enables business analysts to prepare business models that are all-embracing and understandable by all the stakeholders. After the introduction, the methodology of research is discussed. The following section presents selected case study results. The paper is concluded with a summary.

  8. Structural geology and 4D evolution of a half-graben: New digital outcrop modelling techniques applied to the Nukhul half-graben, Suez rift, Egypt

    Science.gov (United States)

    Wilson, Paul; Hodgetts, David; Rarity, Franklin; Gawthorpe, Rob L.; Sharp, Ian R.

    2009-03-01

    LIDAR-based digital outcrop mapping, in conjunction with a new surface modelling approach specifically designed to deal with outcrop datasets, is used to examine the evolution of a half-graben scale normal fault array in the Suez rift. Syn-rift deposition in the Nukhul half-graben was controlled by the graben-bounding Nukhul fault. The fault can be divided into four segments based on the strike of the fault, the morphology of hangingwall strata, and the variation in throw along strike. The segments of the fault became geometrically linked within the first 2.5 m.y. of rifting, as evidenced by the presence of early syn-rift Abu Zenima Formation strata at the segment linkage points. Fault-perpendicular folds in the hangingwall related to along-strike variations in throw associated with precursor fault segments persist for a further 1.8 m.y. after linkage of the segments, suggesting that the fault remains kinematically segmented. We suggest this occurs because of sudden changes in fault strike at the segment linkage points that inhibit earthquake rupture propagation, or because displacement is geometrically inhibited at fault linkage points where the orientation of the intersection line of the segments is significantly different from the orientation of the slip vector on the fault system. Length/throw plots and throw contour patterns for minor faults show that some faults initiated in pre-rift strata, whereas late east-striking faults initiated in the syn-rift basin fill. The late initiating faults are spatially associated with the east-striking Baba-Markha fault, which was active throughout the rift history, but developed as a transfer fault between major block-bounding fault systems around 6-7 Ma after rift initiation.

  9. Finite element models applied in active structural acoustic control

    NARCIS (Netherlands)

    Oude Nijhuis, Marco H.H.; Boer, de André; Rao, Vittal S.

    2002-01-01

    This paper discusses the modeling of systems for active structural acoustic control. The finite element method is applied to model structures including the dynamics of piezoelectric sensors and actuators. A model reduction technique is presented to make the finite element model suitable for controll

  10. Data Mining Techniques Applied to Hydrogen Lactose Breath Test

    Science.gov (United States)

    Nepomuceno-Chamorro, Isabel; Pontes-Balanza, Beatriz; Hernández-Mendoza, Yoedusvany; Rodríguez-Herrera, Alfonso

    2017-01-01

    In this work, we present the results of applying data mining techniques to hydrogen breath test data. Disposal of H2 gas is of utmost relevance to maintain efficient microbial fermentation processes. Objectives: Analyze a set of hydrogen breath test data using data mining tools and identify new patterns of H2 production. Methods: Hydrogen breath test data sets were analyzed by applying k-means clustering to a dataset of 2571 patients. Results: Six different patterns were extracted upon analysis of the hydrogen breath test data. We have also shown the relevance of each of the samples taken throughout the test. Conclusions: Analysis of the hydrogen breath test data sets using data mining techniques has identified new patterns of hydrogen generation upon lactose absorption. We can see the potential of applying data mining techniques to clinical data sets. These results offer promising data for future research on the relations between gut-microbiota-produced hydrogen and its link to clinical symptoms. PMID:28125620
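
    The clustering step reported here amounts to running k-means on per-patient H2 time profiles. A minimal sketch with fabricated breath-test curves follows; the 7-sample profile and the cluster count of six simply mirror the abstract:

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Each row: one patient's H2 readings (ppm) at successive sample times of
    # the lactose breath test (fabricated values for illustration).
    rng = np.random.default_rng(7)
    flat = rng.normal(5.0, 2.0, (40, 7))                       # non-producers
    risers = np.cumsum(rng.normal(8.0, 3.0, (40, 7)), axis=1)  # producers
    curves = np.vstack([flat, risers]).clip(min=0.0)

    kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(curves)
    print(np.bincount(kmeans.labels_))   # size of each H2-production pattern
    ```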

  11. Revisión de los principales modelos para aplicar técnicas de Minería de Procesos (Review of models for applying process mining techniques

    Directory of Open Access Journals (Sweden)

    Arturo Orellana García

    2016-03-01

    Full Text Available Process mining is a novel alternative to improve processes in a variety of application domains. It aims to extract information from the data stored in the trace records of information systems, looking for errors, inconsistencies, vulnerabilities and variability in the processes being executed. Process mining techniques are used in multiple sectors such as industry, web services, business intelligence and health. However, to apply these techniques there are several models to follow and little information on which to apply, as no comparative analysis between them is available. The research focused on collecting information on the main models proposed by authors of worldwide reference in process mining for applying techniques for the discovery, conformance checking and improvement of processes. An analysis of these models is carried out in order to select the elements and characteristics useful for their application in the hospital environment. The current research contributes to the development of a model for the detection and analysis of variability in hospital processes using process mining techniques. It allows researchers to have, in a centralized way, the criteria to decide which model to use, or which phases of one or more models to employ.

  12. Applied groundwater modeling, 2nd Edition

    Science.gov (United States)

    Anderson, Mary P.; Woessner, William W.; Hunt, Randall J.

    2015-01-01

    This second edition is extensively revised throughout with expanded discussion of modeling fundamentals and coverage of advances in model calibration and uncertainty analysis that are revolutionizing the science of groundwater modeling. The text is intended for undergraduate and graduate level courses in applied groundwater modeling and as a comprehensive reference for environmental consultants and scientists/engineers in industry and governmental agencies.

  13. Signal processing techniques applied to a small circular seismic array

    Science.gov (United States)

    Mosher, C. C.

    1980-03-01

    The travel time method (TTM) for locating earthquakes and the wavefront curvature method (WCM), which determines distance to an event by measuring the curvature of the wavefront, can be combined in a procedure referred to as Apparent Velocity Mapping (AVM). Apparent velocities for mine blasts and local earthquakes computed by the WCM are inverted for a velocity structure. The velocity structure is used in the TTM to relocate events. Model studies indicate that AVM can adequately resolve the velocity structure for the case of a linear velocity-depth gradient. Surface waves from mine blasts recorded by the Central Minnesota Seismic Array were analyzed using a modification of the multiple filter analysis (MFA) technique to determine group arrival times at several stations of an array. The advantages of array MFA are that the source location need not be known, lateral refraction can be detected and removed, and multiple arrivals can be separated. A modeling procedure that can be used with array MFA is described.
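
    The multiple filter analysis step lends itself to a short sketch: pass the seismogram through narrow Gaussian bandpass filters and pick the envelope peak at each centre frequency as the group arrival time. The filter width alpha, the frequency grid and the synthetic dispersed wavetrain are assumptions for illustration.

```python
import numpy as np
from scipy.signal import hilbert

def mfa_group_arrivals(trace, dt, centre_freqs, alpha=50.0):
    """Group arrival time at each centre frequency via Gaussian narrowband filters."""
    n = len(trace)
    freqs = np.fft.rfftfreq(n, dt)
    spec = np.fft.rfft(trace)
    arrivals = []
    for fc in centre_freqs:
        window = np.exp(-alpha * ((freqs - fc) / fc) ** 2)   # narrowband filter
        narrow = np.fft.irfft(spec * window, n)
        envelope = np.abs(hilbert(narrow))
        arrivals.append(np.argmax(envelope) * dt)            # envelope peak time
    return np.array(arrivals)

# Synthetic dispersed wavetrain for demonstration.
t = np.arange(0, 60.0, 0.01)
trace = np.sin(2 * np.pi * (0.2 + 0.01 * t) * t) * np.exp(-((t - 30) / 10) ** 2)
print(mfa_group_arrivals(trace, 0.01, centre_freqs=[0.2, 0.4, 0.8]))
```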

  14. Analytical techniques applied to study cultural heritage objects

    Energy Technology Data Exchange (ETDEWEB)

    Rizzutto, M.A.; Curado, J.F.; Bernardes, S.; Campos, P.H.O.V.; Kajiya, E.A.M.; Silva, T.F.; Rodrigues, C.L.; Moro, M.; Tabacniks, M.; Added, N., E-mail: rizzutto@if.usp.br [Universidade de Sao Paulo (USP), SP (Brazil). Instituto de Fisica

    2015-07-01

    The scientific study of artistic and cultural heritage objects has been routinely performed in Europe and the United States for decades. In Brazil this research area is growing, mainly through the use of physical and chemical characterization methods. Since 2003 the Group of Applied Physics with Particle Accelerators of the Physics Institute of the University of Sao Paulo (GFAA-IF) has been working with various methodologies for material characterization and analysis of cultural objects, initially using ion beam analysis performed with Particle Induced X-Ray Emission (PIXE), Rutherford Backscattering (RBS) and, more recently, Ion Beam Induced Luminescence (IBIL) for the determination of the elements and chemical compounds in the surface layers. These techniques are widely used in the Laboratory of Materials Analysis with Ion Beams (LAMFI-USP). Recently, the GFAA expanded the studies to other possibilities of analysis enabled by imaging techniques that, coupled with elemental and compositional characterization, provide a better understanding of the materials and techniques used in the creative process in the manufacture of objects. The imaging analysis, mainly used to examine and document artistic and cultural heritage objects, is performed through images with visible light, infrared reflectography (IR), fluorescence with ultraviolet radiation (UV), tangential light and digital radiography. Expanding the possibilities of analysis further, new capabilities were added using portable equipment such as Energy Dispersive X-Ray Fluorescence (ED-XRF) and Raman Spectroscopy that can be used for analysis 'in situ' at the museums. The results of these analyses are providing valuable information on the manufacturing process and have provided new information on objects of different University of Sao Paulo museums. To improve the arsenal of cultural heritage analysis, a 3D robotic stage was recently constructed for the precise positioning of samples in the external beam setup.

  15. Research on key techniques of virtual reality applied in mining industry

    Institute of Scientific and Technical Information of China (English)

    LIAO Jun; LU Guo-bin

    2009-01-01

    Drawing on applications of virtual reality technology in many fields, this paper introduces the basic concepts, system structures and related technical developments of virtual reality; surveys current applications of virtual reality techniques in the mining industry; and examines the core software and hardware techniques involved, especially the optimization of the various 3D modelling techniques, implementing a virtual scene that can be explored in real time through stereoscopic display. It then proposes software and hardware virtual reality solutions for the mining industry that can satisfy demands at different levels and in different areas. Finally, it outlines the promising prospects of virtual reality techniques applied in the mining industry.

  16. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programming languages has been attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state of the art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  18. Improving operating room efficiency by applying bin-packing and portfolio techniques to surgical case scheduling

    NARCIS (Netherlands)

    Houdenhoven, van M.; Oostrum, van J.M.; Hans, E.W.; Wullink, G.; Kazemier, G.

    2013-01-01

    BACKGROUND: An operating room (OR) department has adopted an efficient business model and subsequently investigated how efficiency could be further improved. The aim of this study is to show the efficiency improvement of lowering organizational barriers and applying advanced mathematical techniques.
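
    The abstract names bin-packing as one of the mathematical techniques; a minimal sketch is first-fit-decreasing packing of surgical cases into OR-day "bins". The case durations and the 480-minute OR day are invented for illustration, and the paper's actual models (including the portfolio technique) are more elaborate.

```python
def first_fit_decreasing(case_durations, capacity=480):
    """Pack surgical case durations (minutes) into as few OR days as possible."""
    bins = []                                   # each bin: cases for one OR day
    for d in sorted(case_durations, reverse=True):
        for b in bins:
            if sum(b) + d <= capacity:
                b.append(d)
                break
        else:
            bins.append([d])                    # open a new OR day
    return bins

cases = [120, 90, 240, 60, 150, 45, 200, 75]    # planned durations (min)
schedule = first_fit_decreasing(cases)
print(len(schedule), "OR days:", schedule)
```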

  20. Free Radical Imaging Techniques Applied to Hydrocarbon Flames Diagnosis

    Institute of Scientific and Technical Information of China (English)

    A. Caldeira-Pires

    2001-01-01

    This paper evaluates the utilization of free radical chemiluminescence imaging and tomographic reconstruction techniques to obtain advanced information on reacting flows. Two different laboratory flow configurations were analyzed, including unconfined non-premixed jet flame measurements to evaluate flame fuel/air mixing patterns at the burner-port of a typical glass-furnace burner. The second case characterized the reaction zone of premixed flames within gas turbine combustion chambers, based on a laboratory-scale model of a lean prevaporized premixed (LPP) combustion chamber. The analysis shows that advanced imaging diagnosis can provide new information on the characterization of flame mixing and reacting phenomena. The utilization of local C2 and CH chemiluminescence can provide useful information on the quality of the combustion process, which can be used to improve the design of practical combustors.

  1. Modern Techniques and Technologies Applied to Training and Performance Monitoring.

    Science.gov (United States)

    Sands, William A; Kavanaugh, Ashley A; Murray, Steven R; McNeal, Jeni R; Jemni, Monèm

    2016-12-05

    Athlete preparation and performance continue to increase in complexity and cost. Modern coaches are shifting from reliance on personal memory, experience, and opinion to evidence from collected training load data. Training load monitoring may hold vital information for developing monitoring systems that follow the training process with such precision that both performance prediction and day-to-day management of training become an adjunct to preparation and performance. Time series data collection and analyses in sport are still in their infancy, with considerable effort being applied to "big-data" analytics and to models of the appropriate variables to monitor and methods for doing so. Training monitoring has already garnered important applications, but lacks a theoretical framework from which to develop further. As such, we propose a framework involving the following: analyses of individuals, trend analyses, rules-based analysis, and statistical process control.
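
    Of the four proposed analysis types, statistical process control is the easiest to sketch: flag training days whose load falls outside control limits derived from a baseline window. The simulated load series, the 4-week baseline and the 3-sigma limits are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
load = rng.normal(400, 40, size=60)        # daily training load, arbitrary units
load[45] = 650                             # inject an anomalous spike

baseline = load[:28]                       # first four weeks define the limits
centre, sigma = baseline.mean(), baseline.std(ddof=1)
ucl, lcl = centre + 3 * sigma, centre - 3 * sigma
flags = np.flatnonzero((load > ucl) | (load < lcl))
print(f"limits [{lcl:.0f}, {ucl:.0f}] -> flagged days: {flags}")
```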

  2. Volcanic Monitoring Techniques Applied to Controlled Fragmentation Experiments

    Science.gov (United States)

    Kueppers, U.; Alatorre-Ibarguengoitia, M. A.; Hort, M. K.; Kremers, S.; Meier, K.; Scharff, L.; Scheu, B.; Taddeucci, J.; Dingwell, D. B.

    2010-12-01

    Volcanic eruptions are an inevitable natural threat. The range of eruptive styles is large, and short-term fluctuations of explosivity or vent position pose a large risk that is not necessarily confined to the immediate vicinity of a volcano. Rather, explosive eruptions may also affect aviation, infrastructure and climate, regionally as well as globally. Multiparameter monitoring networks are deployed on many active volcanoes to record signs of magmatic processes and help elucidate the secrets of volcanic phenomena. However, our mechanistic understanding of many processes hiding in recorded signals is still poor. As a direct consequence, a solid interpretation of the state of a volcano is still a challenge. In an attempt to bridge this gap, we combined volcanic monitoring and experimental volcanology. We performed 15 well-monitored, field-based experiments and fragmented natural rock samples from Colima volcano (Mexico) by rapid decompression. We used cylindrical samples of 60 mm height and 25 mm and 60 mm diameter, respectively, and 25 and 35 vol.% open porosity. The applied pressure range was from 4 to 18 MPa. Using different experimental set-ups, the pressurised volume above the samples ranged from 60 to 170 cm3. The experiments were performed at ambient conditions and at controlled sample porosity and size, confinement geometry, and applied pressure. The experiments have been thoroughly monitored with 1) Doppler Radar (DR), 2) high-speed and high-definition cameras, 3) acoustic and infrasound sensors, 4) pressure transducers, and 5) electrically conducting wires. Our aim was to check for common results achieved by the different approaches and, if so, calibrate state-of-the-art monitoring tools. We present how the velocity of the ejected pyroclasts was measured by and evaluated for the different approaches, and how it was affected by the experimental conditions and sample characteristics. We show that all deployed instruments successfully measured the pyroclast ejection velocity.

  3. Educational software design: applying models of learning

    Directory of Open Access Journals (Sweden)

    Stephen Richards

    1996-12-01

    Full Text Available The model of learning adopted within this paper is the 'spreading ripples' (SR) model proposed by Race (1994). This model was chosen for two important reasons. First, it makes use of accessible ideas and language, and is therefore simple. Second, Race suggests that the model can be used in the design of educational and training programmes (and can thereby be applied to the design of computer-based learning materials).

  4. Geostatistical methods applied to field model residuals

    DEFF Research Database (Denmark)

    Maule, Fox; Mosegaard, K.; Olsen, Nils

    The field model residual (which consists of measurement errors and unmodelled signal) is typically assumed to be uncorrelated and Gaussian distributed. We have applied geostatistical methods to analyse the residuals of the Oersted(09d/04) field model [http://www.dsri.dk/Oersted/Field_models/IGRF_2005_candidates/], which is based...
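
    One elementary geostatistical tool that could be applied to such residuals is the empirical semivariogram, which quantifies how correlated residuals are as a function of separation. The 1-D positions and the synthetic correlated residuals below are stand-ins, not Oersted data.

```python
import numpy as np

def empirical_semivariogram(x, z, bin_edges):
    """gamma(h) = 0.5 * mean[(z_i - z_j)^2] over pairs whose separation falls in a bin."""
    iu = np.triu_indices(len(x), k=1)                    # every pair once
    h = np.abs(x[:, None] - x[None, :])[iu]
    g = 0.5 * (z[:, None] - z[None, :])[iu] ** 2
    idx = np.digitize(h, bin_edges)
    return np.array([g[idx == k].mean() for k in range(1, len(bin_edges))])

rng = np.random.default_rng(2)
x = rng.uniform(0, 100, 200)
z = np.sin(x / 10) + rng.normal(0, 0.2, 200)             # correlated signal + noise
print(empirical_semivariogram(x, z, bin_edges=np.arange(0, 50, 5)))
```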

  5. Applying data mining techniques to improve diagnosis in neonatal jaundice

    Directory of Open Access Journals (Sweden)

    Ferreira Duarte

    2012-12-01

    Full Text Available Abstract Background Hyperbilirubinemia is emerging as an increasingly common problem in newborns due to a decreasing hospital length of stay after birth. Jaundice is the most common disease of the newborn and, although benign in most cases, it can lead to severe neurological consequences if poorly evaluated. In different areas of medicine, data mining has contributed to improve the results obtained with other methodologies. Hence, the aim of this study was to improve the diagnosis of neonatal jaundice with the application of data mining techniques. Methods This study followed the different phases of the Cross Industry Standard Process for Data Mining model as its methodology. This observational study was performed at the Obstetrics Department of a central hospital (Centro Hospitalar Tâmega e Sousa – EPE), from February to March of 2011. A total of 227 healthy newborn infants with 35 or more weeks of gestation were enrolled in the study. Over 70 variables were collected and analyzed. Also, transcutaneous bilirubin levels were measured from birth to hospital discharge with maximum time intervals of 8 hours between measurements, using a noninvasive bilirubinometer. Different attribute subsets were used to train and test classification models using algorithms included in Weka data mining software, such as decision trees (J48) and neural networks (multilayer perceptron). The accuracy results were compared with the traditional methods for prediction of hyperbilirubinemia. Results The application of different classification algorithms to the collected data allowed predicting subsequent hyperbilirubinemia with high accuracy. In particular, at 24 hours of life of newborns, the accuracy for the prediction of hyperbilirubinemia was 89%. The best results were obtained using the following algorithms: naive Bayes, multilayer perceptron and simple logistic. Conclusions The findings of our study sustain that new approaches, such as data mining, may support medical decision making, contributing to improved diagnosis in neonatal jaundice.

  6. Applying data mining techniques to improve diagnosis in neonatal jaundice.

    Science.gov (United States)

    Ferreira, Duarte; Oliveira, Abílio; Freitas, Alberto

    2012-12-07

    Hyperbilirubinemia is emerging as an increasingly common problem in newborns due to a decreasing hospital length of stay after birth. Jaundice is the most common disease of the newborn and, although benign in most cases, it can lead to severe neurological consequences if poorly evaluated. In different areas of medicine, data mining has contributed to improve the results obtained with other methodologies. Hence, the aim of this study was to improve the diagnosis of neonatal jaundice with the application of data mining techniques. This study followed the different phases of the Cross Industry Standard Process for Data Mining model as its methodology. This observational study was performed at the Obstetrics Department of a central hospital (Centro Hospitalar Tâmega e Sousa--EPE), from February to March of 2011. A total of 227 healthy newborn infants with 35 or more weeks of gestation were enrolled in the study. Over 70 variables were collected and analyzed. Also, transcutaneous bilirubin levels were measured from birth to hospital discharge with maximum time intervals of 8 hours between measurements, using a noninvasive bilirubinometer. Different attribute subsets were used to train and test classification models using algorithms included in Weka data mining software, such as decision trees (J48) and neural networks (multilayer perceptron). The accuracy results were compared with the traditional methods for prediction of hyperbilirubinemia. The application of different classification algorithms to the collected data allowed predicting subsequent hyperbilirubinemia with high accuracy. In particular, at 24 hours of life of newborns, the accuracy for the prediction of hyperbilirubinemia was 89%. The best results were obtained using the following algorithms: naive Bayes, multilayer perceptron and simple logistic. The findings of our study sustain that new approaches, such as data mining, may support medical decision making, contributing to improved diagnosis in neonatal jaundice.
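
    A compact version of the classification setup, using scikit-learn analogues of the Weka algorithms (DecisionTreeClassifier for J48, MLPClassifier for the multilayer perceptron). The synthetic features stand in for the study's 70+ collected variables.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(227, 5))              # e.g. bilirubin trend features (invented)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, 227) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
for model in (DecisionTreeClassifier(random_state=0),
              MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)):
    acc = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(model).__name__, f"test accuracy: {acc:.2f}")
```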

  7. Digital prototyping technique applied for redesigning plastic products

    Science.gov (United States)

    Pop, A.; Andrei, A.

    2015-11-01

    After products have been on the market for some time, they often need to be redesigned to meet new market requirements. New products are generally derived from similar but outdated products. Redesigning a product is an important part of the production and development process. The purpose of this paper is to show that using modern technology like Digital Prototyping in industry is an effective way to produce new products. This paper tries to demonstrate and highlight the effectiveness of the concept of Digital Prototyping, both in reducing the design time of a new product and in reducing the costs required for implementing this step. The results of this paper show that using Digital Prototyping techniques in designing a new product and its mould, starting from an existing one available on the market, offers a significant reduction in manufacturing time and cost. The ability to simulate and test a new product with modern CAD-CAM programs in all aspects of production (designing of the 3D model, simulation of the structural resistance, analysis of the injection process and beautification) offers a helpful tool for engineers. The whole process can be realised by one skilled engineer very quickly and effectively.

  8. [Basics of PCR and related techniques applied in veterinary parasitology].

    Science.gov (United States)

    Ben Abderrazak, S

    2004-01-01

    Through this general review of the basics of PCR (Polymerase Chain Reaction) and related techniques, we attempt to introduce their main applications in veterinary parasitology. A major problem restricting the application possibilities of molecular biology techniques is of a quantitative nature. Amplification techniques represent a real revolution, for they make possible the production of tens, even hundreds, of nanogrammes of sequences when starting from very small quantities. The PCR technique has dramatically transformed the strategies used so far in molecular biology and, subsequently, research and medical diagnosis.

  9. APPLYING ARTIFICIAL INTELLIGENCE TECHNIQUES TO HUMAN-COMPUTER INTERFACES

    DEFF Research Database (Denmark)

    Sonnenwald, Diane H.

    1988-01-01

    A description is given of UIMS (User Interface Management System), a system using a variety of artificial intelligence techniques to build knowledge-based user interfaces combining functionality and information from a variety of computer systems that maintain, test, and configure customer telephone...... and data networks. Three artificial intelligence (AI) techniques used in UIMS are discussed, namely, frame representation, object-oriented programming languages, and rule-based systems. The UIMS architecture is presented, and the structure of the UIMS is explained in terms of the AI techniques....

  11. Model building techniques for analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Walther, Howard P.; McDaniel, Karen Lynn; Keener, Donald; Cordova, Theresa Elena; Henry, Ronald C.; Brooks, Sean; Martin, Wilbur D.

    2009-09-01

    The practice of mechanical engineering for product development has evolved into a complex activity that requires a team of specialists for success. Sandia National Laboratories (SNL) has product engineers, mechanical designers, design engineers, manufacturing engineers, mechanical analysts and experimentalists, qualification engineers, and others that contribute through product realization teams to develop new mechanical hardware. The goal of SNL's Design Group is to change product development by enabling design teams to collaborate within a virtual model-based environment whereby analysis is used to guide design decisions. Computer-aided design (CAD) models using PTC's Pro/ENGINEER software tools are heavily relied upon in the product definition stage of parts and assemblies at SNL. The three-dimensional CAD solid model acts as the design solid model that is filled with all of the detailed design definition needed to manufacture the parts. Analysis is an important part of the product development process. The CAD design solid model (DSM) is the foundation for the creation of the analysis solid model (ASM). Creating an ASM from the DSM currently is a time-consuming effort; the turnaround time for results of a design needs to be decreased to have an impact on the overall product development. This effort can be decreased immensely through simple Pro/ENGINEER modeling techniques that come down to the way features are created in a part model. This document contains recommended modeling techniques that increase the efficiency of the creation of the ASM from the DSM.

  12. Applying the WEAP Model to Water Resource

    DEFF Research Database (Denmark)

    Gao, Jingjing; Christensen, Per; Li, Wei

    Water resources assessment is a tool to provide decision makers with an appropriate basis to make informed judgments regarding the objectives and targets to be addressed during the Strategic Environmental Assessment (SEA) process. The study shows how water resources assessment can be applied in SEA...... in assessing the effects on water resources using a case study on a Coal Industry Development Plan in an arid region in North Western China. In the case the WEAP model (Water Evaluation And Planning System) were used to simulate various scenarios using a diversity of technological instruments like irrigation...... efficiency, treatment and reuse of water. The WEAP model was applied to the Ordos catchment where it was used for the first time in China. The changes in water resource utilization in Ordos basin were assessed with the model. It was found that the WEAP model is a useful tool for water resource assessment...

  14. Applied probability models with optimization applications

    CERN Document Server

    Ross, Sheldon M

    1992-01-01

    Concise advanced-level introduction to stochastic processes that frequently arise in applied probability. Largely self-contained text covers Poisson process, renewal theory, Markov chains, inventory theory, Brownian motion and continuous time optimization models, much more. Problems and references at chapter ends. ""Excellent introduction."" - Journal of the American Statistical Association. Bibliography. 1970 edition.
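
    As a taste of the book's subject matter, here is a short simulation of a homogeneous Poisson process, one of the first models such texts treat: inter-arrival times are independent exponentials with rate lambda. The rate and horizon are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
lam, horizon = 2.0, 10.0                  # events per unit time; observation window
arrivals = np.cumsum(rng.exponential(1.0 / lam, size=100))
arrivals = arrivals[arrivals < horizon]
print(f"{len(arrivals)} events in [0, {horizon}); expected about {lam * horizon:.0f}")
```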

  15. Data Mining Techniques Applied to Hydrogen Lactose Breath Test.

    Science.gov (United States)

    Rubio-Escudero, Cristina; Valverde-Fernández, Justo; Nepomuceno-Chamorro, Isabel; Pontes-Balanza, Beatriz; Hernández-Mendoza, Yoedusvany; Rodríguez-Herrera, Alfonso

    2017-01-01

    Analyze a set of data of hydrogen breath tests by use of data mining tools. Identify new patterns of H2 production. We applied k-means clustering, as the data mining technique, to a dataset of hydrogen breath tests from 2571 patients. Six different patterns have been extracted upon analysis of the hydrogen breath test data. We have also shown the relevance of each of the samples taken throughout the test. Analysis of the hydrogen breath test data sets using data mining techniques has identified new patterns of hydrogen generation upon lactose absorption. We can see the potential of application of data mining techniques to clinical data sets. These results offer promising data for future research on the relations between gut microbiota-produced hydrogen and its link to clinical symptoms.

  16. Bioremediation techniques applied to aqueous media contaminated with mercury.

    Science.gov (United States)

    Velásquez-Riaño, Möritz; Benavides-Otaya, Holman D

    2016-12-01

    In recent years, the environmental and human health impacts of mercury contamination have driven the search for alternative, eco-efficient techniques different from the traditional physicochemical methods for treating this metal. One of these alternative processes is bioremediation. A comprehensive analysis of the different variables that can affect this process is presented. It focuses on determining the effectiveness of different techniques of bioremediation, with a specific consideration of three variables: the removal percentage, time needed for bioremediation and initial concentration of mercury to be treated in an aqueous medium.

  17. Adaptive Meshing Technique Applied to an Orthopaedic Finite Element Contact Problem

    OpenAIRE

    Roarty, Colleen M; Grosland, Nicole M.

    2004-01-01

    Finite element methods have been applied extensively and with much success in the analysis of orthopaedic implants [6,7,12,13,15]. Recently a growing interest has developed, in the orthopaedic biomechanics community, in how numerical models can be constructed for the optimal solution of problems in contact mechanics. New developments in this area are of paramount importance in the design of improved implants for orthopaedic surgery. Finite element and other computational techniques are widely applied...

  18. Applying data-mining techniques in honeypot analysis

    CSIR Research Space (South Africa)

    Veerasamy, N

    2006-07-01

    Full Text Available This paper proposes the use of data mining techniques to analyse the data recorded by the honeypot. This data can also be used to train Intrusion Detection Systems (IDS) in identifying attacks. Since the training is based on real data...

  19. Filter back—projection technique applied to Abel inversion

    Institute of Scientific and Technical Information of China (English)

    Jiang Shao-En; Liu Zhong-Li; et al.

    1997-01-01

    The inverse Abel transform is applicable to optically thin plasma with cylindrical symmetry, which is often encountered in plasma physics and inertial (or magnetic) confinement fusion. The filter back-projection technique is modified, and then a new method of inverse Abel transform is presented.
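
    For context, the classical (unfiltered) inverse Abel transform can be sketched directly from its defining integral, f(r) = -(1/pi) * integral from r to R of F'(y) / sqrt(y^2 - r^2) dy; the paper's modified filter back-projection method is a refinement of this idea and is not reproduced here. The top-hat test profile is an invented check.

```python
import numpy as np

def inverse_abel(F, r):
    """Inverse Abel transform of projection F(y) on a uniform radial grid r."""
    dF = np.gradient(F, r[1] - r[0])
    f = np.zeros_like(F)
    for i, ri in enumerate(r):
        y = r[i + 1:]                     # start one sample out to avoid y = r
        if y.size:
            f[i] = -np.trapz(dF[i + 1:] / np.sqrt(y ** 2 - ri ** 2), y) / np.pi
    return f

# Check: f(r) = 1 for r < 1 has the projection F(y) = 2 * sqrt(1 - y^2).
r = np.linspace(0.0, 1.0, 200)
F = 2.0 * np.sqrt(np.clip(1.0 - r ** 2, 0.0, None))
print(inverse_abel(F, r)[:5])             # close to 1 near the axis
```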

  20. Applied Integer Programming Modeling and Solution

    CERN Document Server

    Chen, Der-San; Dang, Yu

    2011-01-01

    An accessible treatment of the modeling and solution of integer programming problems, featuring modern applications and software In order to fully comprehend the algorithms associated with integer programming, it is important to understand not only how algorithms work, but also why they work. Applied Integer Programming features a unique emphasis on this point, focusing on problem modeling and solution using commercial software. Taking an application-oriented approach, this book addresses the art and science of mathematical modeling related to the mixed integer programming (MIP) framework and
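
    In the book's spirit of problem modeling and solution with software, a tiny MIP can be stated in a few lines with the open-source PuLP modeling library (assumed installed; it bundles the CBC solver). The 0/1 knapsack data are invented.

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, value

weights, profits, capacity = [3, 4, 5, 8], [2, 3, 4, 6], 10

prob = LpProblem("knapsack", LpMaximize)
x = [LpVariable(f"x{i}", cat="Binary") for i in range(len(weights))]
prob += lpSum(p * xi for p, xi in zip(profits, x))               # objective
prob += lpSum(w * xi for w, xi in zip(weights, x)) <= capacity   # weight limit
prob.solve()
print("take items:", [i for i, xi in enumerate(x) if value(xi) == 1])
```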

  1. Software engineering techniques applied to agricultural systems an object-oriented and UML approach

    CERN Document Server

    Papajorgji, Petraq J

    2014-01-01

    Software Engineering Techniques Applied to Agricultural Systems presents cutting-edge software engineering techniques for designing and implementing better agricultural software systems based on the object-oriented paradigm and the Unified Modeling Language (UML). The focus is on the presentation of  rigorous step-by-step approaches for modeling flexible agricultural and environmental systems, starting with a conceptual diagram representing elements of the system and their relationships. Furthermore, diagrams such as sequential and collaboration diagrams are used to explain the dynamic and static aspects of the software system.    This second edition includes: a new chapter on Object Constraint Language (OCL), a new section dedicated to the Model-VIEW-Controller (MVC) design pattern, new chapters presenting details of two MDA-based tools – the Virtual Enterprise and Olivia Nova, and a new chapter with exercises on conceptual modeling.  It may be highly useful to undergraduate and graduate students as t...

  2. Research Techniques Made Simple: Skin Carcinogenesis Models: Xenotransplantation Techniques.

    Science.gov (United States)

    Mollo, Maria Rosaria; Antonini, Dario; Cirillo, Luisa; Missero, Caterina

    2016-02-01

    Xenotransplantation is a widely used technique to test the tumorigenic potential of human cells in vivo using immunodeficient mice. Here we describe basic technologies and recent advances in xenotransplantation applied to study squamous cell carcinomas (SCCs) of the skin. SCC cells isolated from tumors can either be cultured to generate a cell line or injected directly into mice. Several immunodeficient mouse models are available for selection based on the experimental design and the type of tumorigenicity assay. Subcutaneous injection is the most widely used technique for xenotransplantation because it involves a simple procedure allowing the use of a large number of cells, although it may not mimic the original tumor environment. SCC cell injections at the epidermal-to-dermal junction or grafting of organotypic cultures containing human stroma have also been used to more closely resemble the tumor environment. Mixing of SCC cells with cancer-associated fibroblasts can allow the study of their interaction and reciprocal influence, which can be followed in real time by intradermal ear injection using conventional fluorescent microscopy. In this article, we will review recent advances in xenotransplantation technologies applied to study behavior of SCC cells and their interaction with the tumor environment in vivo.

  3. Ion beam analysis techniques applied to large scale pollution studies

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, D.D.; Bailey, G.; Martin, J.; Garton, D.; Noorman, H.; Stelcer, E.; Johnson, P. [Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW (Australia)

    1993-12-31

    Ion Beam Analysis (IBA) techniques are ideally suited to analyse the thousands of filter papers a year that may originate from a large-scale aerosol sampling network. They are fast, multi-elemental and, for the most part, non-destructive, so other analytical methods such as neutron activation and ion chromatography can be performed afterwards. ANSTO, in collaboration with the NSW EPA, Pacific Power and the Universities of NSW and Macquarie, has established a large-area fine aerosol sampling network covering nearly 80,000 square kilometres of NSW with 25 fine particle samplers. This network, known as ASP, was funded by the Energy Research and Development Corporation (ERDC) and commenced sampling on 1 July 1991. The cyclone sampler at each site has a 2.5 {mu}m particle diameter cut-off and runs for 24 hours every Sunday and Wednesday using one Gillman 25mm diameter stretched Teflon filter for each day. These filters are ideal targets for ion beam analysis work. Currently ANSTO receives 300 filters per month from this network for analysis using its accelerator-based ion beam techniques on the 3 MV Van de Graaff accelerator. One week a month of accelerator time is dedicated to this analysis. Four simultaneous accelerator-based IBA techniques are used at ANSTO to analyse for the following 24 elements: H, C, N, O, F, Na, Al, Si, P, S, Cl, K, Ca, Ti, V, Cr, Mn, Fe, Cu, Ni, Co, Zn, Br and Pb. The IBA techniques proved invaluable in identifying sources of fine particles and their spatial and seasonal variations across the large area sampled by the ASP network. 3 figs.

  4. Recent Developments in Computational Techniques for Applied Hydrodynamics.

    Science.gov (United States)

    1979-12-07

    Recently developed techniques for the numerical solution of fluid equations are reviewed. Among the topics covered are solitons, which appear in numerical solutions of the KdV equation and, more significantly, are seen in nature and in solutions of 'exact' hyperbolic systems. (Report reference: Tappert, F., 'Numerical Solutions of the KdV Equation and Its Generalizations by the Split-Step Fourier Method,' Lect. Appl. Math. 15, AMS, 1975.)
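
    The split-step Fourier idea cited there is easy to demonstrate: for the KdV equation u_t + 6 u u_x + u_xxx = 0, advance the dispersive part exactly in Fourier space and the nonlinear part with a short explicit step. The grid, time step and single-soliton initial condition are illustrative choices, not taken from the report.

```python
import numpy as np

N, L, dt = 256, 50.0, 1e-4
x = np.linspace(0.0, L, N, endpoint=False)
k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)

c = 1.0                                           # soliton speed
u = 0.5 * c / np.cosh(0.5 * np.sqrt(c) * (x - 10.0)) ** 2

for _ in range(20000):                            # integrate to t = 2
    u_hat = np.fft.fft(u) * np.exp(1j * k ** 3 * dt)          # exact dispersive step
    u = np.real(np.fft.ifft(u_hat))
    u_hat = np.fft.fft(u) - 3j * k * np.fft.fft(u ** 2) * dt  # nonlinear (Euler) step
    u = np.real(np.fft.ifft(u_hat))

print("soliton peak near x =", x[np.argmax(u)], "(started at 10, speed 1)")
```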

  5. Technology Assessment of Dust Suppression Techniques Applied During Structural Demolition

    Energy Technology Data Exchange (ETDEWEB)

    Boudreaux, J.F.; Ebadian, M.A.; Williams, P.T.; Dua, S.K.

    1998-10-20

    Hanford, Fernald, Savannah River, and other sites are currently reviewing technologies that can be implemented to demolish buildings in a cost-effective manner. In order to demolish a structure properly and, at the same time, minimize the amount of dust generated from a given technology, an evaluation must be conducted to choose the most appropriate dust suppression technology given site-specific conditions. Thus, the purpose of this research, which was carried out at the Hemispheric Center for Environmental Technology (HCET) at Florida International University, was to conduct an experimental study of dust aerosol abatement (dust suppression) methods as applied to nuclear D and D. This experimental study targeted the problem of dust suppression during the demolition of nuclear facilities. The resulting data were employed to assist in the development of mathematical correlations that can be applied to predict dust generation during structural demolition.

  6. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    Full Text Available We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and it allows model selection, calculation of model posterior probabilities and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature in our algorithm is the fact that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval of GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account in assessing the uncertainty of the estimates.
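
    The building block beneath AARJ, a Metropolis MCMC step, fits in a few lines; the reversible-jump machinery for hopping between the four aerosol models is substantially more involved and is not shown. The Gaussian target and proposal width below are toy assumptions.

```python
import numpy as np

def log_post(theta):
    return -0.5 * (theta - 3.0) ** 2          # unnormalized toy posterior

rng = np.random.default_rng(5)
theta, chain = 0.0, []
for _ in range(5000):
    proposal = theta + rng.normal(0.0, 0.5)   # random-walk proposal
    if np.log(rng.uniform()) < log_post(proposal) - log_post(theta):
        theta = proposal                      # accept; otherwise keep current
    chain.append(theta)

print("posterior mean estimate:", np.mean(chain[1000:]))   # after burn-in, ~3
```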

  7. Applied research in uncertainty modeling and analysis

    CERN Document Server

    Ayyub, Bilal

    2005-01-01

    Uncertainty has been a concern to engineers, managers, and scientists for many years. For a long time uncertainty has been considered synonymous with random, stochastic, statistic, or probabilistic. Since the early sixties views on uncertainty have become more heterogeneous. In the past forty years numerous tools that model uncertainty, above and beyond statistics, have been proposed by several engineers and scientists. The tool/method to model uncertainty in a specific context should really be chosen by considering the features of the phenomenon under consideration, not independent of what is known about the system and what causes uncertainty. In this fascinating overview of the field, the authors provide broad coverage of uncertainty analysis/modeling and its application. Applied Research in Uncertainty Modeling and Analysis presents the perspectives of various researchers and practitioners on uncertainty analysis and modeling outside their own fields and domain expertise. Rather than focusing explicitly on...

  8. Applied Mathematics, Modelling and Computational Science

    CERN Document Server

    Kotsireas, Ilias; Makarov, Roman; Melnik, Roderick; Shodiev, Hasan

    2015-01-01

    The Applied Mathematics, Modelling, and Computational Science (AMMCS) conference aims to promote interdisciplinary research and collaboration. The contributions in this volume cover the latest research in mathematical and computational sciences, modeling, and simulation as well as their applications in natural and social sciences, engineering and technology, industry, and finance. The 2013 conference, the second in a series of AMMCS meetings, was held August 26–30 and organized in cooperation with AIMS and SIAM, with support from the Fields Institute in Toronto, and Wilfrid Laurier University. There were many young scientists at AMMCS-2013, both as presenters and as organizers. This proceedings contains refereed papers contributed by the participants of the AMMCS-2013 after the conference. This volume is suitable for researchers and graduate students, mathematicians and engineers, industrialists, and anyone who would like to delve into the interdisciplinary research of applied and computational mathematics ...

  9. Applying incentive sensitization models to behavioral addiction

    DEFF Research Database (Denmark)

    Rømer Thomsen, Kristine; Fjorback, Lone; Møller, Arne

    2014-01-01

    The incentive sensitization theory is a promising model for understanding the mechanisms underlying drug addiction, and has received support in animal and human studies. So far the theory has not been applied to the case of behavioral addictions like Gambling Disorder, despite sharing clinical...... symptoms and underlying neurobiology. We examine the relevance of this theory for Gambling Disorder and point to predictions for future studies. The theory promises a significant contribution to the understanding of behavioral addiction and opens new avenues for treatment....

  10. Applying Supervised Opinion Mining Techniques on Online User Reviews

    Directory of Open Access Journals (Sweden)

    Ion SMEUREANU

    2012-01-01

    Full Text Available In recent years, the spectacular development of web technologies has led to an enormous quantity of user-generated information in online systems. This large amount of information on web platforms makes them viable for use as data sources in applications based on opinion mining and sentiment analysis. The paper proposes an algorithm for detecting sentiment in movie user reviews, based on a naive Bayes classifier. We analyze the opinion mining domain, the techniques used in sentiment analysis, and its applicability. We implemented the proposed algorithm, tested its performance, and suggested directions of development.
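
    A minimal working version of the described classifier: naive Bayes over bag-of-words counts, here via scikit-learn rather than a from-scratch implementation. The four toy reviews stand in for a real labelled corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

reviews = ["a wonderful, moving film", "boring plot and bad acting",
           "great performances throughout", "a dull, disappointing movie"]
labels = [1, 0, 1, 0]                          # 1 = positive sentiment

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(reviews, labels)
print(model.predict(["a great film", "bad and boring"]))    # expect [1 0]
```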

  11. Image analysis technique applied to lock-exchange gravity currents

    OpenAIRE

    Nogueira, Helena; Adduce, Claudia; Alves, Elsa; Franca, Mário Jorge Rodrigues Pereira da

    2013-01-01

    An image analysis technique is used to estimate the two-dimensional instantaneous density field of unsteady gravity currents produced by full-depth lock-release of saline water. An experiment reproducing a gravity current was performed in a 3.0 m long, 0.20 m wide and 0.30 m deep Perspex flume with horizontal smooth bed and recorded with a 25 Hz CCD video camera under controlled light conditions. Using dye concentration as a tracer, a calibration procedure was established for each pixel in the image...
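
    The per-pixel calibration step can be sketched as follows: calibration images at known dye concentrations give each pixel its own grey-level-to-concentration curve, which is then inverted by interpolation for every frame. The image size, gain field and concentrations are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
cal_conc = np.array([0.0, 0.5, 1.0, 2.0])            # known dye concentrations
gain = rng.uniform(80.0, 120.0, size=(10, 10))       # per-pixel sensitivity
cal_img = 200.0 - gain[None, :, :] * cal_conc[:, None, None]  # calibration stack

frame = 200.0 - gain * 1.3                           # instantaneous image
conc = np.empty_like(frame)
for i in range(frame.shape[0]):
    for j in range(frame.shape[1]):
        # np.interp needs ascending x, so flip the (decreasing) calibration curve.
        conc[i, j] = np.interp(frame[i, j], cal_img[::-1, i, j], cal_conc[::-1])
print(conc.mean())                                   # recovers ~1.3 everywhere
```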

  12. Unconventional Coding Technique Applied to Multi-Level Polarization Modulation

    Science.gov (United States)

    Rutigliano, G. G.; Betti, S.; Perrone, P.

    2016-05-01

    A new technique is proposed to improve information confidentiality in optical-fiber communications without bandwidth consumption. A pseudorandom vectorial sequence was generated by a dynamic system algorithm and used to codify a multi-level polarization modulation based on the Stokes vector. Optical-fiber birefringence, usually considered as a disturbance, was exploited to obfuscate the signal transmission. At the receiver end, the same pseudorandom sequence was generated and used to decode the multi-level polarization modulated signal. The proposed scheme, working at the physical layer, provides strong information security without introducing complex processing and thus latency.

  13. Filling-Based Techniques Applied to Object Projection Feature Estimation

    CERN Document Server

    Quesada, Luis

    2012-01-01

    3D motion tracking is a critical task in many computer vision applications. Unsupervised markerless 3D motion tracking systems determine the most relevant object in the screen and then track it by continuously estimating its projection features (center and area) from the edge image and a point inside the relevant object projection (namely, inner point), until the tracking fails. Existing object projection feature estimation techniques are based on ray-casting from the inner point. These techniques present three main drawbacks: when the inner point is surrounded by edges, rays may not reach other relevant areas; as a consequence of that issue, the estimated features may greatly vary depending on the position of the inner point relative to the object projection; and finally, increasing the number of rays being cast and the ray-casting iterations (which would make the results more accurate and stable) increases the processing time to the point that the tracking cannot be performed on the fly. In this paper, we analyze...

  14. Canvas and cosmos: Visual art techniques applied to astronomy data

    Science.gov (United States)

    English, Jayanne

    Bold color images from telescopes act as extraordinary ambassadors for research astronomers because they pique the public’s curiosity. But are they snapshots documenting physical reality? Or are we looking at artistic spacescapes created by digitally manipulating astronomy images? This paper provides a tour of how original black and white data, from all regimes of the electromagnetic spectrum, are converted into the color images gracing popular magazines, numerous websites, and even clothing. The history and method of the technical construction of these images is outlined. However, the paper focuses on introducing the scientific reader to visual literacy (e.g. human perception) and techniques from art (e.g. composition, color theory) since these techniques can produce not only striking but politically powerful public outreach images. When created by research astronomers, the cultures of science and visual art can be balanced and the image can illuminate scientific results sufficiently strongly that the images are also used in research publications. Included are reflections on how they could feedback into astronomy research endeavors and future forms of visualization as well as on the relevance of outreach images to visual art. (See the color online PDF version at http://dx.doi.org/10.1142/S0218271817300105; the figures can be enlarged in PDF viewers.)

  15. Compressed Sensing Techniques Applied to Ultrasonic Imaging of Cargo Containers

    Science.gov (United States)

    Álvarez López, Yuri; Martínez Lorenzo, José Ángel

    2017-01-01

    One of the key issues in the fight against the smuggling of goods has been the development of scanners for cargo inspection. X-ray-based radiographic system scanners are the most developed sensing modality. However, they are costly and use bulky sources that emit hazardous, ionizing radiation. Aiming to improve the probability of threat detection, an ultrasonic-based technique, capable of detecting the footprint of metallic containers or compartments concealed within the metallic structure of the inspected cargo, has been proposed. The system consists of an array of acoustic transceivers that is attached to the metallic structure-under-inspection, creating a guided acoustic Lamb wave. Reflections due to discontinuities are detected in the images, provided by an imaging algorithm. Taking into consideration that the majority of those images are sparse, this contribution analyzes the application of Compressed Sensing (CS) techniques in order to reduce the amount of measurements needed, thus achieving faster scanning, without compromising the detection capabilities of the system. A parametric study of the image quality, as a function of the samples needed in spatial and frequency domains, is presented, as well as the dependence on the sampling pattern. For this purpose, realistic cargo inspection scenarios have been simulated. PMID:28098841
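
    A generic sparse-recovery sketch conveys the compressed-sensing step: ISTA (iterative soft-thresholding) reconstructs a sparse vector from fewer random measurements than unknowns. This is a standard algorithm, not the authors' specific imaging pipeline; the sizes and regularization weight are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
n, m, k = 200, 60, 5                            # unknowns, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(0.0, 1.0, k)
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))   # random sensing matrix
y = A @ x_true                                  # compressed measurements

step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of gradient
x, lam = np.zeros(n), 0.01
for _ in range(500):
    g = x - step * (A.T @ (A @ x - y))                        # gradient step
    x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```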

  18. Surgical treatment of scoliosis: a review of techniques currently applied

    Directory of Open Access Journals (Sweden)

    Maruyama Toru

    2008-04-01

    Full Text Available Abstract In this review, basic knowledge and recent innovations in the surgical treatment of scoliosis will be described. Surgical treatment for scoliosis is indicated, in general, for curves exceeding 45 or 50 degrees by the Cobb's method, on the grounds that: 1) curves larger than 50 degrees progress even after skeletal maturity; 2) curves of greater magnitude cause loss of pulmonary function, and much larger curves cause respiratory failure; and 3) the larger the curve progresses, the more difficult it is to treat with surgery. Posterior fusion with instrumentation has been a standard of the surgical treatment for scoliosis. In modern instrumentation systems, more anchors are used to connect the rod and the spine, resulting in better correction and less frequent implant failures. Segmental pedicle screw constructs or hybrid constructs using pedicle screws, hooks, and wires are the trend of today. Anterior instrumentation surgery had been a choice of treatment for thoracolumbar and lumbar scoliosis because better correction can be obtained with shorter fusion levels. Recently, the superiority of anterior surgery for thoracolumbar and lumbar scoliosis has been lost. Initial enthusiasm for anterior instrumentation for the thoracic curve using the video-assisted thoracoscopic surgery technique has faded out. Various attempts are being made with the use of fusionless surgery. To control growth, epiphysiodesis on the convex side of the deformity, with or without instrumentation, is a technique to provide gradual progressive correction and to arrest the deterioration of the curves. To avoid fusion for skeletally immature children with spinal cord injury or myelodysplasia, vertebral wedge osteotomies are performed for the treatment of progressive paralytic scoliosis. For right thoracic curves with idiopathic scoliosis, multiple vertebral wedge osteotomies without fusion are performed. To provide correction and maintain it during the growing years while allowing spinal growth for

  19. Quantitative Portfolio Optimization Techniques Applied to the Brazilian Stock Market

    Directory of Open Access Journals (Sweden)

    André Alves Portela Santos

    2012-09-01

    Full Text Available In this paper we assess the out-of-sample performance of two alternative quantitative portfolio optimization techniques - mean-variance and minimum-variance optimization - and compare their performance with respect to a naive 1/N (or equally-weighted) portfolio and also to the market portfolio given by the Ibovespa. We focus on short-selling-constrained portfolios and consider alternative estimators for the covariance matrices: the sample covariance matrix, RiskMetrics, and three covariance estimators proposed by Ledoit and Wolf (2003), Ledoit and Wolf (2004a) and Ledoit and Wolf (2004b). Taking into account alternative portfolio re-balancing frequencies, we compute out-of-sample performance statistics which indicate that the quantitative approaches delivered improved results in terms of lower portfolio volatility and better risk-adjusted returns. Moreover, the use of more sophisticated estimators for the covariance matrix generated optimal portfolios with lower turnover over time.
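
    The minimum-variance leg of the comparison has a closed form when short selling is allowed: w = Sigma^{-1} 1 / (1' Sigma^{-1} 1). The sketch below uses simulated returns as stand-ins for Ibovespa constituents and ignores the paper's no-short-sale constraint, which would require a quadratic-programming solver instead.

```python
import numpy as np

rng = np.random.default_rng(8)
returns = rng.normal(0.001, 0.02, size=(500, 8))   # 500 days, 8 assets (simulated)
sigma = np.cov(returns, rowvar=False)              # sample covariance estimator

ones = np.ones(sigma.shape[0])
w = np.linalg.solve(sigma, ones)
w /= w.sum()                                       # unconstrained min-variance weights
print(np.round(w, 3), "portfolio vol:", float(np.sqrt(w @ sigma @ w)))
```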

  20. Active Learning Techniques Applied to an Interdisciplinary Mineral Resources Course.

    Science.gov (United States)

    Aird, H. M.

    2015-12-01

    An interdisciplinary active learning course was introduced at the University of Puget Sound entitled 'Mineral Resources and the Environment'. Various formative assessment and active learning techniques that have been effective in other courses were adapted and implemented to improve student learning, increase retention and broaden knowledge and understanding of course material. This was an elective course targeted towards upper-level undergraduate geology and environmental majors. The course provided an introduction to the mineral resources industry, discussing geological, environmental, societal and economic aspects, legislation and the processes involved in exploration, extraction, processing, reclamation/remediation and recycling of products. Lectures and associated weekly labs were linked in subject matter; relevant readings from the recent scientific literature were assigned and discussed in the second lecture of the week. Peer-based learning was facilitated through weekly reading assignments with peer-led discussions and through group research projects, in addition to in-class exercises such as debates. Writing and research skills were developed through student groups designing, carrying out and reporting on their own semester-long research projects around the lasting effects of the historical Ruston Smelter on the biology and water systems of Tacoma. The writing of their mini grant proposals and final project reports was carried out in stages to allow for feedback before the deadline. Speakers from industry were invited to share their specialist knowledge as guest lecturers, and students were encouraged to interact with them, with a view to employment opportunities. Formative assessment techniques included jigsaw exercises, gallery walks, placemat surveys, think pair share and take-home point summaries. Summative assessment included discussion leadership, exams, homeworks, group projects, in-class exercises, field trips, and pre-discussion reading exercises

  1. Signal Processing Techniques Applied in RFI Mitigation of Radio Astronomy

    Directory of Open Access Journals (Sweden)

    Sixiu Wang

    2012-08-01

    Full Text Available Radio broadcast and telecommunications are present at different power levels everywhere on Earth. Radio Frequency Interference (RFI) substantially limits the sensitivity of existing radio telescopes in several frequency bands and may prove to be an even greater obstacle for the next generation of telescopes (or arrays) to overcome. A variety of RFI detection and mitigation techniques have been developed in recent years. This study describes various signal processing methods for RFI mitigation in radio astronomy and chooses time-frequency domain cancellation to eliminate certain interference and effectively improve the signal-to-noise ratio in pulsar observations. Finally, RFI mitigation research and implementations in Chinese radio astronomy are also presented.
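
    A bare-bones version of time-frequency domain cancellation: compute a spectrogram, flag cells whose power exceeds a robust threshold, zero them, and resynthesize the time series. The synthetic RFI tone and the 5x-median threshold are illustrative assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

rng = np.random.default_rng(9)
fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
x = rng.normal(0.0, 1.0, t.size)                 # noise-like astronomical signal
x += 5.0 * np.sin(2.0 * np.pi * 150.0 * t)       # narrowband RFI tone

f, tt, Z = stft(x, fs=fs, nperseg=256)
power = np.abs(Z) ** 2
mask = power > 5.0 * np.median(power)            # robust excision threshold
Z[mask] = 0.0                                    # cancel flagged time-frequency cells
_, clean = istft(Z, fs=fs, nperseg=256)
print("cells excised:", int(mask.sum()))
```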

  2. Neoliberal Optimism: Applying Market Techniques to Global Health.

    Science.gov (United States)

    Mei, Yuyang

    2017-01-01

    Global health and neoliberalism are becoming increasingly intertwined as organizations utilize markets and profit motives to solve the traditional problems of poverty and population health. I use field work conducted over 14 months in a global health technology company to explore how the promise of neoliberalism re-envisions humanitarian efforts. In this company's vaccine refrigerator project, staff members expect their investors and their market to allow them to achieve scale and develop accountability to their users in developing countries. However, the translation of neoliberal techniques to the global health sphere falls short of the ideal, as profits are meager and purchasing power remains with donor organizations. The continued optimism in market principles amidst such a non-ideal market reveals the tenacious ideological commitment to neoliberalism in these global health projects.

  3. Object detection techniques applied on mobile robot semantic navigation.

    Science.gov (United States)

    Astua, Carlos; Barber, Ramon; Crespo, Jonathan; Jardon, Alberto

    2014-04-11

    The future of robotics predicts that robots will integrate themselves more every day with human beings and their environments. To achieve this integration, robots need to acquire information about the environment and its objects. There is a big need for algorithms that provide robots with this sort of skill, from locating the objects needed to accomplish a task up to treating these objects as information about the environment. This paper presents a way to provide mobile robots with the ability to detect objects for semantic navigation. It aims to use current trends in robotics and, at the same time, techniques that can be exported to other platforms. Two methods to detect objects are proposed, contour detection and a descriptor-based technique, and both of them are combined to overcome their respective limitations. Finally, the code is tested on a real robot, to prove its accuracy and efficiency.
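
    A hedged sketch of the two routes the abstract names, contour detection and a descriptor-based matcher, here combined with OpenCV; the choice of ORB and the placeholder image files are assumptions, since the abstract does not fix a descriptor.

        import cv2

        img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)        # placeholder file names
        template = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)

        # Route 1: contour candidates from a binarized image
        _, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        candidates = [c for c in contours if cv2.contourArea(c) > 500]

        # Route 2: descriptor matches against a known object template
        orb = cv2.ORB_create()
        kp1, des1 = orb.detectAndCompute(template, None)
        kp2, des2 = orb.detectAndCompute(img, None)
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

        print(len(candidates), "contour candidates,", len(matches), "descriptor matches")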

  5. Applying total quality management techniques to improve software development.

    Science.gov (United States)

    Mezher, T; Assem Abdul Malak, M; el-Medawar, H

    1998-01-01

    Total Quality Management (TQM) is a new management philosophy and a set of guiding principles that represent the basis of a continuously improving organization. This paper sheds light on the application of TQM concepts for software development. A fieldwork study was conducted on a Lebanese software development firm and its customers to determine the major problems affecting the organization's operation and to assess the level of adoption of TQM concepts. Detailed questionnaires were prepared and handed out to the firm's managers, programmers, and customers. The results of the study indicate many deficiencies in applying TQM concepts, especially in the areas of planning, defining customer requirements, teamwork, relationship with suppliers, and adopting standards and performance measures. One of the major consequences of these deficiencies is considerably increased programming errors and delays in delivery. Recommendations on achieving quality are discussed.

  6. Feasibility of Applying Controllable Lubrication Techniques to Reciprocating Machines

    DEFF Research Database (Denmark)

    Pulido, Edgar Estupinan

    modified hydrostatic lubrication. In this case, the hydrostatic lubrication is modified by injecting oil at controllable pressures, through orifices circumferentially located around the bearing surface. In order to study the performance of journal bearings of reciprocating machines, operating under … conventional lubrication conditions, a mathematical model of a reciprocating mechanism connected to a rigid / flexible rotor via thin fluid films was developed. The mathematical model involves the use of multibody dynamics theory for the modelling of the reciprocating mechanism (rigid bodies), finite elements … of the reciprocating engine, obtained with the help of multibody dynamics (rigid components) and the finite element method (flexible components), and the global system of equations is numerically solved. The analysis of the results was carried out with focus on the behaviour of the journal orbits, maximum fluid film …

  7. Technology assessment of applied techniques for exploitation of geothermal energy

    Energy Technology Data Exchange (ETDEWEB)

    1977-04-01

    Studies were made to elucidate the effects of technological development of natural steam and hot water on the general social and industrial environments. These were followed by studies of enhanced methods for the forecasting of these impacts. The studies included assessments of actual conditions and the preparation of regional models, ranging from rural to urban-fringe situations. The economic implications of geothermal development in various regional situations are discussed, and the models developed provide for the integration of new data and their extrapolation to as yet uncertain situations.

  8. Optical Trapping Techniques Applied to the Study of Cell Membranes

    Science.gov (United States)

    Morss, Andrew J.

    Optical tweezers allow for manipulating micron-sized objects using pN level optical forces. In this work, we use an optical trapping setup to aid in three separate experiments, all related to the physics of the cellular membrane. In the first experiment, in conjunction with Brian Henslee, we use optical tweezers to allow for precise positioning and control of cells in suspension to evaluate the cell size dependence of electroporation. Theory predicts that all cells porate at a transmembrane potential VTM of roughly 1 V. The Schwan equation predicts that the transmembrane potential depends linearly on the cell radius r, thus predicting that cells should porate at threshold electric fields that go as 1/r. The threshold field required to induce poration is determined by applying a low voltage pulse to the cell and then applying additional pulses of greater and greater magnitude, checking for poration at each step using propidium iodide dye. We find that, contrary to expectations, cells do not porate at a constant value of the transmembrane potential but at a constant value of the electric field, which we find to be 692 V/cm for K562 cells. Delivering precise dosages of nanoparticles into cells is of importance for assessing toxicity of nanoparticles or for genetic research. In the second experiment, we conduct nano-electroporation—a novel method of applying precise doses of transfection agents to cells—by using optical tweezers in conjunction with a confocal microscope to manipulate cells into contact with 100 nm wide nanochannels. This work was done in collaboration with Pouyan Boukany of Dr. Lee's group. The small cross-sectional area of these nanochannels means that the electric field within them is extremely large, 60 MV/m, which allows them to electrophoretically drive transfection agents into the cell. We find that nano-electroporation results in excellent dose control (to within 10% in our experiments) compared to bulk electroporation. We also find that
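
    For readers unfamiliar with the reasoning above, the steady-state Schwan relation (standard textbook form, not quoted from this thesis) makes the 1/r prediction explicit:

        % Steady-state Schwan relation for a spherical cell of radius r in a uniform
        % field E, at angle \theta from the field axis (textbook form, an assumption here):
        \[
          V_{TM} = 1.5\, r\, E \cos\theta ,
        \]
        % so a fixed poration threshold V_{TM} \approx 1 V at the pole (\theta = 0)
        % would give a threshold field E_{th} \propto 1/r -- the expectation that the
        % constant-field result (692 V/cm for K562 cells) contradicts.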

  9. Dust tracking techniques applied to the STARDUST facility: First results

    Energy Technology Data Exchange (ETDEWEB)

    Malizia, A., E-mail: malizia@ing.uniroma2.it [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); Camplani, M. [Grupo de Tratamiento de Imágenes, E.T.S.I de Telecomunicación, Universidad Politécnica de Madrid (Spain); Gelfusa, M. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); Lupelli, I. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); EURATOM/CCFE Association, Culham Science Centre, Abingdon (United Kingdom); Richetta, M.; Antonelli, L.; Conetta, F.; Scarpellini, D.; Carestia, M.; Peluso, E.; Bellecci, C. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy); Salgado, L. [Grupo de Tratamiento de Imágenes, E.T.S.I de Telecomunicación, Universidad Politécnica de Madrid (Spain); Video Processing and Understanding Laboratory, Universidad Autónoma de Madrid (Spain); Gaudio, P. [Associazione EURATOM-ENEA, Department of Industrial Engineering, University of Rome “Tor Vergata”, Via del Politecnico 1, 00133 Rome (Italy)

    2014-10-15

    Highlights: • Use of the experimental facility STARDUST to analyze the dust resuspension problem inside a tokamak in case of a loss of vacuum accident. • PIV technique implementation to track the dust during a LOVA reproduction inside STARDUST. • Data imaging techniques to analyze the dust velocity field: first results and data discussion. -- Abstract: An important issue related to future nuclear fusion reactors fueled with deuterium and tritium is the creation of large amounts of dust due to several mechanisms (disruptions, ELMs and VDEs). The dust size expected in nuclear fusion experiments (such as ITER) is on the order of microns (between 0.1 and 1000 μm). Almost the total amount of this dust remains in the vacuum vessel (VV). This radiological dust can re-suspend in case of LOVA (loss of vacuum accident) and these phenomena can cause explosions and serious damage to the health of the operators and to the integrity of the device. The authors have developed a facility, STARDUST, in order to reproduce thermo-fluid-dynamic conditions comparable to those expected inside the VV of the next generation of experiments such as ITER in case of LOVA. The dust used inside the STARDUST facility presents particle sizes and physical characteristics comparable with those created inside the VV of nuclear fusion experiments. In this facility an experimental campaign has been conducted with the purpose of tracking the dust re-suspended at low pressurization rates (comparable to those expected in case of LOVA in ITER and suggested by the General Safety and Security Report ITER-GSSR) using a fast camera with a frame rate from 1000 to 10,000 images per second. The velocity fields of the mobilized dust are derived from the imaging of a two-dimensional slice of the flow illuminated by an optically adapted laser beam. The aim of this work is to demonstrate the possibility of dust tracking by means of image processing with the objective of determining the velocity field values.
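
    A minimal sketch of the PIV principle behind this dust tracking: the displacement of an interrogation window between consecutive frames is read off the peak of their FFT-based cross-correlation. The synthetic frames and window size are illustrative assumptions.

        import numpy as np

        def window_displacement(win_a, win_b):
            """Peak of the FFT-based cross-correlation of two interrogation windows."""
            a = win_a - win_a.mean()
            b = win_b - win_b.mean()
            corr = np.fft.irfft2(np.fft.rfft2(a) * np.conj(np.fft.rfft2(b)), s=a.shape)
            dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
            # map circular FFT indices to signed pixel shifts
            dy = dy - a.shape[0] if dy > a.shape[0] // 2 else dy
            dx = dx - a.shape[1] if dx > a.shape[1] // 2 else dx
            return dy, dx

        rng = np.random.default_rng(2)
        frame1 = rng.random((32, 32))
        frame2 = np.roll(frame1, shift=(3, -2), axis=(0, 1))   # dust moved by (3, -2) pixels
        print(window_displacement(frame2, frame1))             # recovers (3, -2)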

  10. Acoustic Emission Technique Applied in Textiles Mechanical Characterization

    Directory of Open Access Journals (Sweden)

    Rios-Soberanis Carlos Rolando

    2017-01-01

    Full Text Available Common textile architectures/geometries are woven, braided, knitted, stitch-bonded, and Z-pinned. Fibres in textile form exhibit good out-of-plane properties and good fatigue and impact resistance; additionally, they have better dimensional stability and conformability. Besides the nature of the textile, the architecture has a great role in the mechanical behaviour and mechanisms of damage in textiles; therefore, damage mechanisms and the mechanical performance of textiles in structural applications have been a major concern. Mechanical damage occurs to a large extent during the service lifetime, so it is vital to understand the material's mechanical behaviour by identifying its mechanisms of failure, such as onset of damage, crack generation and propagation. In this work, textiles of different architectures were used to manufacture epoxy-based composites in order to study failure events under tensile load by using the acoustic emission technique, which is a powerful characterization tool due to the link between AE data and fracture mechanics, a relation that is very useful from the engineering point of view.

  11. Sputtering as a Technique for Applying Tribological Coatings

    Science.gov (United States)

    Ramalingam, S.

    1984-01-01

    Friction and wear-induced mechanical failures may be controlled to extend the life of tribological components through the interposition of selected solid materials between contacting surfaces. Thin solid films of soft and hard materials are appropriate to lower friction and enhance the wear resistance of precision tribo-elements. Tribological characteristics of thin hard coats deposited on a variety of ferrous and non-ferrous substrates were tested. The thin hard coats used were titanium nitride films deposited by reactive magnetron sputtering of metallic titanium. High contact stress, low speed tests showed wear rate reductions of one or more magnitude, even with films a few micrometers in thickness. Low contact stress, high speed tests carried out under rather severe test conditions showed that thin films of TiN afforded significant friction reduction and wear protection. Thin hard coats were shown to improve the friction and wear performance of rolling contacts. Satisfactory film-to-substrate adhesion strengths can be obtained with reactive magnetron sputtering. X-ray diffraction and microhardness tests were employed to assess the effectiveness of the sputtering technique.

  12. Time-resolved infrared spectroscopic techniques as applied to Channelrhodopsin

    Directory of Open Access Journals (Sweden)

    Eglof Ritter

    2015-07-01

    Full Text Available Among optogenetic tools, channelrhodopsins, the light-gated ion channels of the plasma membrane from green algae, play the most important role. Properties like channel selectivity, timing parameters or color can be influenced by the exchange of selected amino acids. Although widely used, in the field of neurosciences for example, there is still little known about their photocycles and the mechanism of ion channel gating and conductance. One of the preferred methods for these studies is infrared spectroscopy, since it allows observation of proteins and their function at a molecular level and in a near-native environment. The absorption of a photon in channelrhodopsin leads to retinal isomerization within femtoseconds, the conductive states are reached on the microsecond time scale, and the return to the fully dark-adapted state may take more than minutes. To cover all these time regimes, a range of different spectroscopic approaches is necessary. This mini-review focuses on time-resolved applications of infrared techniques to study channelrhodopsins and other light-triggered proteins. We will discuss the approaches with respect to their suitability for the investigation of channelrhodopsin and related proteins.

  13. Remote sensing techniques applied to seismic vulnerability assessment

    Science.gov (United States)

    Juan Arranz, Jose; Torres, Yolanda; Hahgi, Azade; Gaspar-Escribano, Jorge

    2016-04-01

    Advances in remote sensing and photogrammetry techniques have increased the degree of accuracy and resolution in the record of the earth's surface. This has expanded the range of possible applications of these data. In this research, we have used these data to document the construction characteristics of the urban environment of Lorca, Spain. An exposure database has been created with the gathered information, to be used in seismic vulnerability assessment. To this end, we have used data from photogrammetric flights at different periods, using orthorectified images in both the visible and infrared spectra. Furthermore, the analysis is completed using LiDAR data. From the combination of these data, it has been possible to delineate the building footprints and characterize the constructions with attributes such as the approximate date of construction, area, type of roof and even building materials. To carry out the calculation, we have developed different algorithms to compare images from different times, segment images, classify LiDAR data, and use the infrared data to remove vegetation or to compute roof surfaces with height value, tilt and spectral fingerprint. In addition, the accuracy of our results has been validated with ground truth data. Keywords: LiDAR, remote sensing, seismic vulnerability, Lorca

  14. Cleaning techniques for applied-B ion diodes

    Energy Technology Data Exchange (ETDEWEB)

    Cuneo, M.E.; Menge, P.R.; Hanson, D.L. [and others

    1995-09-01

    Measurements and theoretical considerations indicate that the lithium-fluoride (LiF) lithium ion source operates by electron-assisted field-desorption, and provides a pure lithium beam for 10--20 ns. Evidence on both the SABRE (1 TW) and PBFA-II (20 TW) accelerators indicates that the lithium beam is replaced by a beam of protons, and carbon resulting from electron thermal desorption of hydrocarbon surface and bulk contamination with subsequent avalanche ionization. Appearance of contaminant ions in the beam is accompanied by rapid impedance collapse, possibly resulting from loss of magnetic insulation in the rapidly expanding and ionizing, neutral layer. Electrode surface and source substrate cleaning techniques are being developed on the SABRE accelerator to reduce beam contamination, plasma formation, and impedance collapse. We have increased lithium current density a factor of 3 and lithium energy a factor of 5 through a combination of in-situ surface and substrate coatings, impermeable substrate coatings, and field profile modifications.

  15. Semantic Data And Visualization Techniques Applied To Geologic Field Mapping

    Science.gov (United States)

    Houser, P. I. Q.; Royo-Leon, M.; Munoz, R.; Estrada, E.; Villanueva-Rosales, N.; Pennington, D. D.

    2015-12-01

    Geologic field mapping involves the use of technology before, during, and after visiting a site. Geologists utilize hardware such as Global Positioning Systems (GPS) connected to mobile computing platforms such as tablets, with software such as ESRI's ArcPad, to produce maps and figures for a final analysis and report. Hand-written field notes contain important information and drawings or sketches of specific areas within the field study. Our goal is to collect and geo-tag final and raw field data into a cyber-infrastructure environment with an ontology that allows for large data processing, visualization, sharing, and searching, aiding in connecting field research with prior research in the same area and/or aiding experiment replication. Online searches of a specific field area return results such as weather data from NOAA and QuakeML seismic data from USGS. These results can then be saved to a field mobile device and searched while in the field where there is no Internet connection. To accomplish this we created the GeoField ontology service using the Web Ontology Language (OWL) and Protégé software. Advanced queries on the dataset can be made using reasoning capabilities that go beyond those of a standard database service. These improvements include the automated discovery of data relevant to a specific field site and visualization techniques aimed at enhancing analysis and collaboration while in the field by draping data over mobile views of the site using augmented reality. A case study is being performed at the University of Texas at El Paso's Indio Mountains Research Station located near Van Horn, Texas, an active multi-disciplinary field study site. The user can interactively move the camera around the study site and view their data digitally. Geologists can check their data against the site in real time and improve collaboration, as both parties have the same interactive view of the data.

  16. Applying perceptual and adaptive learning techniques for teaching introductory histopathology

    Directory of Open Access Journals (Sweden)

    Sally Krasne

    2013-01-01

    Full Text Available Background: Medical students are expected to master the ability to interpret histopathologic images, a difficult and time-consuming process. A major problem is the issue of transferring information learned from one example of a particular pathology to a new example. Recent advances in cognitive science have identified new approaches to address this problem. Methods: We adapted a new approach for enhancing pattern recognition of basic pathologic processes in skin histopathology images that utilizes perceptual learning techniques, allowing learners to see relevant structure in novel cases, along with adaptive learning algorithms that space and sequence the different categories (e.g., diagnoses) that appear during a learning session based on each learner's accuracy and response time (RT). We developed a perceptual and adaptive learning module (PALM) that utilized 261 unique images of cell injury, inflammation, neoplasia, or normal histology at low and high magnification. Accuracy and RT were tracked and integrated into a 'Score' that reflected students' rapid recognition of the pathologies, and pre- and post-tests were given to assess the effectiveness. Results: Accuracy, RT and Scores significantly improved from the pre- to post-test, with Scores showing much greater improvement than accuracy alone. Delayed post-tests with previously unseen cases, given after 6-7 weeks, showed a decline in accuracy relative to the post-test for first-year students, but not significantly so for second-year students. However, the delayed post-test scores maintained a significant and large improvement relative to those of the pre-test for both first- and second-year students, suggesting good retention of pattern recognition. Student evaluations were very favorable. Conclusion: A web-based learning module based on the principles of cognitive science showed evidence of improved recognition of histopathology patterns by medical students.
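
    A hedged sketch of the adaptive-sequencing idea: categories answered slowly or inaccurately are scheduled sooner. The scoring rule and the recency window below are illustrative assumptions, not the actual PALM algorithm.

        import random
        from collections import defaultdict

        history = defaultdict(list)  # category -> list of (correct: bool, rt_seconds: float)

        def mastery(category):
            trials = history[category][-5:]                 # most recent trials only
            if not trials:
                return 0.0                                  # unseen -> highest priority
            acc = sum(c for c, _ in trials) / len(trials)
            mean_rt = sum(rt for _, rt in trials) / len(trials)
            return acc / (1.0 + mean_rt / 10.0)             # fast and accurate -> high mastery

        def next_category(categories):
            # the lowest-mastery category is presented next; ties broken at random
            low = min(mastery(c) for c in categories)
            return random.choice([c for c in categories if mastery(c) == low])

        cats = ["cell injury", "inflammation", "neoplasia", "normal"]
        history["normal"] += [(True, 3.0), (True, 2.5)]
        print(next_category(cats))   # one of the three not-yet-seen categories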

  17. Azimuthally Varying Noise Reduction Techniques Applied to Supersonic Jets

    Science.gov (United States)

    Heeb, Nicholas S.

    An experimental investigation into the effect of azimuthal variance of chevrons and fluidically enhanced chevrons applied to supersonic jets is presented. Flow field measurements of streamwise and cross-stream particle imaging velocimetry were employed to determine the causes of noise reduction, which was demonstrated through acoustic measurements. Results were obtained in the over- and under-expanded regimes, and at the design condition, though emphasis was placed on the overexpanded regime due to practical application. Surveys of chevron geometry, number, and arrangement were undertaken in an effort to reduce noise and/or the incurred performance penalties. Penetration was found to be positively correlated with noise reduction in the overexpanded regime, and negatively correlated in underexpanded operation, due to increased effective penetration and high frequency penalty, respectively. The effect of arrangement indicated that the beveled configuration achieved optimal abatement in the ideally and underexpanded regimes due to superior BSAN reduction. The symmetric configuration achieved optimal overexpanded noise reduction due to LSS suppression from improved vortex persistence. Increases in chevron number generally improved reduction of all noise components for lower penetration configurations. Higher penetration configurations reached levels of saturation in the four-chevron range, with the potential to introduce secondary shock structures and generate additional noise with higher number. Alternation of penetration generated limited benefit, with slight reduction of the high frequency penalty caused by increased shock spacing. The combination of alternating penetration with beveled and clustered configurations achieved noise reduction comparable to the standard counterparts. Analysis of the entire data set indicated initial improvements with projected area that saturated after a given level and either plateaued or degraded with additional increases. Optimal reductions

  18. Markov Model Applied to Gene Evolution

    Institute of Scientific and Technical Information of China (English)

    季星来; 孙之荣

    2001-01-01

    The study of nucleotide substitution is very important both to our understanding of gene evolution and to reliable estimation of phylogenetic relationships. In this paper nucleotide substitution is assumed to be random and the Markov model is applied to the study of the evolution of genes. A non-linear optimization approach is then proposed for estimating the substitution matrix of real sequences, called the "Nucleotide State Transfer Matrix". One of the most important conclusions from this work is that gene sequence evolution conforms to the Markov process. Also, some theoretical evidence for random evolution is given from an energy analysis of DNA replication.
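
    A minimal sketch of the counting step behind such a matrix, assuming aligned ancestral and derived sequences; the paper estimates its matrix by non-linear optimization, which this toy row-normalized count only approximates.

        import numpy as np

        BASES = "ACGT"
        IDX = {b: i for i, b in enumerate(BASES)}

        def transfer_matrix(seq_from, seq_to):
            """Row-stochastic nucleotide transition matrix from an aligned sequence pair."""
            counts = np.zeros((4, 4))
            for a, b in zip(seq_from, seq_to):
                counts[IDX[a], IDX[b]] += 1
            return counts / counts.sum(axis=1, keepdims=True)  # each row sums to 1 (Markov)

        P = transfer_matrix("ACGTACGTAAGG", "ACGTACATAAGA")     # toy aligned sequences
        print(np.round(P, 2))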

  19. Software factory techniques applied to process control at CERN

    CERN Document Server

    Dutour, Mathias D

    2008-01-01

    The CERN Large Hadron Collider (LHC) requires constant monitoring and control of quantities of parameters to guarantee operational conditions. For this purpose, a methodology called UNICOS (UNIfied Industrial COntrols Systems) has been implemented to standardize the design of process control applications. To further accelerate the development of these applications, we migrated our existing UNICOS tooling suite toward a software factory in charge of assembling project, domain and technical information seamlessly into deployable PLC (Programmable logic Controller) - SCADA (Supervisory Control And Data Acquisition) systems. This software factory delivers consistently high quality by reducing human error and repetitive tasks, and adapts to user specifications in a cost-efficient way. Hence, this production tool is designed to encapsulate and hide the PLC and SCADA target platforms, enabling the experts to focus on the business model rather than specific syntaxes and grammars. Based on industry standard software, ...

  1. Geophysical techniques applied to urban planning in complex near surface environments. Examples of Zaragoza, NE Spain

    Science.gov (United States)

    Pueyo-Anchuela, Ó.; Casas-Sainz, A. M.; Soriano, M. A.; Pocoví-Juan, A.

    Complex geological shallow subsurface environments represent an important handicap in urban and building projects. The geological features of the Central Ebro Basin, with sharp lateral changes in Quaternary deposits, alluvial karst phenomena and anthropic activity, cannot be characterized for future urban areas from isolated geomechanical tests or from incorrectly dimensioned geophysical surveys alone. This complexity is analyzed here in two different test fields: (i) one linked to flat-bottomed valleys with an irregular distribution of Quaternary deposits related to sharp lateral facies changes and an irregular position of the preconsolidated substratum, and (ii) a second one with similar complexities in the alluvial deposits and karst activity linked to solution of the underlying evaporite substratum. The results show that different geophysical techniques yield similar geological models in the first case (flat-bottomed valleys), whereas only the combined application of several geophysical techniques can correctly evaluate the complexities of the geological model in the second case (alluvial karst). In this second case, the geological and surface information makes it possible to refine the sensitivity of the applied geophysical techniques to different indicators of karst activity. In both cases 3D models are needed to correctly distinguish alluvial lateral sedimentary changes from superimposed karst activity.

  2. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    There are many business process modelling techniques in use today. This article presents research on the differences between them. For each technique, the definition and the structure are explained. The paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation and how the technique works when implemented at Somerleyton Animal Park. Each technique is discussed along with its advantages and disadvantages. The final conclusion recommends business process modelling techniques that are easy to use and that serve as the basis for evaluating further modelling techniques.

  3. Applying waste logistics modeling to regional planning

    Energy Technology Data Exchange (ETDEWEB)

    Holter, G.M.; Khawaja, A.; Shaver, S.R.; Peterson, K.L.

    1995-05-01

    Waste logistics modeling is a powerful analytical technique that can be used for effective planning of future solid waste storage, treatment, and disposal activities. Proper waste management is essential for preventing unacceptable environmental degradation from ongoing operations, and is also a critical part of any environmental remediation activity. Logistics modeling allows for analysis of alternate scenarios for future waste flowrates and routings, facility schedules, and processing or handling capacities. Such analyses provide an increased understanding of the critical needs for waste storage, treatment, transport, and disposal while there is still adequate lead time to plan accordingly. They also provide a basis for determining the sensitivity of these critical needs to the various system parameters. This paper discusses the application of waste logistics modeling concepts to regional planning. In addition to ongoing efforts to aid in planning for a large industrial complex, the Pacific Northwest Laboratory (PNL) is currently involved in implementing waste logistics modeling as part of the planning process for material recovery and recycling within a multi-city region in the western US.

  4. A general diagnostic model applied to language testing data.

    Science.gov (United States)

    von Davier, Matthias

    2008-11-01

    Probabilistic models with one or more latent variables are designed to report on a corresponding number of skills or cognitive attributes. Multidimensional skill profiles offer additional information beyond what a single test score can provide, if the reported skills can be identified and distinguished reliably. Many recent approaches to skill profile models are limited to dichotomous data and have made use of computationally intensive estimation methods such as Markov chain Monte Carlo, since standard maximum likelihood (ML) estimation techniques were deemed infeasible. This paper presents a general diagnostic model (GDM) that can be estimated with standard ML techniques and applies to polytomous response variables as well as to skills with two or more proficiency levels. The paper uses one member of a larger class of diagnostic models, a compensatory diagnostic model for dichotomous and partial credit data. Many well-known models, such as univariate and multivariate versions of the Rasch model and the two-parameter logistic item response theory model, the generalized partial credit model, as well as a variety of skill profile models, are special cases of this GDM. In addition to an introduction to this model, the paper presents a parameter recovery study using simulated data and an application to real data from the field test for TOEFL Internet-based testing.
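
    One common way to write the compensatory GDM for a dichotomous item, consistent with the description above (a sketch of the general form, not a quotation from the paper):

        % Compensatory GDM for dichotomous item i, skill profile a = (a_1,...,a_K),
        % Q-matrix entries q_{ik}, slopes \gamma_{ik} and difficulty \beta_i (sketch of
        % the general form; the paper also covers polytomous responses):
        \[
          P(X_i = 1 \mid a)
            = \frac{\exp\!\bigl(\beta_i + \sum_{k=1}^{K} \gamma_{ik}\, q_{ik}\, a_k\bigr)}
                   {1 + \exp\!\bigl(\beta_i + \sum_{k=1}^{K} \gamma_{ik}\, q_{ik}\, a_k\bigr)} .
        \]
        % With K = 1 and a continuous skill a_1 this collapses to the 2PL IRT model,
        % illustrating how the familiar models arise as special cases.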

  5. Remote sensing applied to numerical modelling. [water resources pollution

    Science.gov (United States)

    Sengupta, S.; Lee, S. S.; Veziroglu, T. N.; Bland, R.

    1975-01-01

    Progress and remaining difficulties in the construction of predictive mathematical models of large bodies of water as ecosystems are reviewed. Surface temperature is at present the only variable that can be measured accurately and reliably by remote sensing techniques, but satellite infrared data are of sufficient resolution for macro-scale modeling of oceans and large lakes, and airborne radiometers are useful in meso-scale analysis (of lakes, bays, and thermal plumes). Finite-element and finite-difference techniques applied to the solution of relevant coupled time-dependent nonlinear partial differential equations are compared, and the specific problem of the Biscayne Bay and environs ecosystem is tackled in a finite-difference treatment using the rigid-lid model and a rigid-line grid system.
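
    A minimal sketch of the finite-difference route, assuming the simplest member of that family: one explicit FTCS step of 2D heat diffusion, a toy relative of the coupled transport equations used for bay-scale thermal modeling. All parameters are illustrative.

        import numpy as np

        def diffuse_step(T, kappa, dx, dt):
            """Explicit FTCS update of dT/dt = kappa * laplacian(T) on a periodic grid."""
            lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
                   np.roll(T, 1, 1) + np.roll(T, -1, 1) - 4 * T) / dx**2
            return T + dt * kappa * lap

        T = np.zeros((64, 64))
        T[28:36, 28:36] = 5.0                   # warm plume anomaly, degrees C
        kappa, dx = 1.0e2, 500.0                # eddy diffusivity m^2/s, grid spacing m
        dt = 0.2 * dx**2 / (4 * kappa)          # keep well inside the explicit stability limit
        for _ in range(100):
            T = diffuse_step(T, kappa, dx, dt)
        print(float(T.max()))                   # the plume spreads and its peak decays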

  6. Terahertz spectroscopy applied to food model systems

    DEFF Research Database (Denmark)

    Møller, Uffe

    Water plays a crucial role in the quality of food. Apart from the natural water content of a food product, the state of that water is very important. Water can be found integrated into the biological material or it can be added during production of the product. Currently it is difficult to differentiate between these types of water in subsequent quality controls. This thesis describes terahertz time-domain spectroscopy applied to aqueous food model systems, with particular focus on ethanol-water mixtures and confined water pools in inverse micelles.

  7. Wavelets, Curvelets and Multiresolution Analysis Techniques Applied to Implosion Symmetry Characterization of ICF Targets

    CERN Document Server

    Afeyan, Bedros; Starck, Jean Luc; Cuneo, Michael

    2012-01-01

    We introduce wavelets, curvelets and multiresolution analysis techniques to assess the symmetry of X-ray driven imploding shells in ICF targets. After denoising X-ray backlighting images, we determine the Shell Thickness Averaged Radius (STAR) of maximum density, r*(N, θ), where N is the percentage of the shell thickness over which to average. The non-uniformities of r*(N, θ) are quantified by a Legendre polynomial decomposition in the angle θ. Undecimated wavelet decompositions outperform decimated ones in denoising, and both are surpassed by the curvelet transform. In each case, hard thresholding based on noise modeling is used. We have also applied combined wavelet and curvelet filter techniques with variational minimization as a way to select the significant coefficients. Gains are minimal over curvelets alone in the images we have analyzed.
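
    A hedged sketch of the wavelet hard-thresholding stage using PyWavelets (curvelets, which the paper finds superior, require a separate package); the universal threshold rule is a standard assumption, not necessarily the paper's noise model.

        import numpy as np
        import pywt

        rng = np.random.default_rng(3)
        clean = np.zeros((128, 128))
        clean[40:90, 40:90] = 1.0                      # stand-in for a backlit shell image
        noisy = clean + 0.2 * rng.normal(size=clean.shape)

        coeffs = pywt.wavedec2(noisy, "db4", level=3)  # decimated; pywt.swt2 gives undecimated
        sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745       # noise level from finest detail
        thresh = sigma * np.sqrt(2 * np.log(noisy.size))         # universal threshold
        denoised_coeffs = [coeffs[0]] + [
            tuple(pywt.threshold(c, thresh, mode="hard") for c in level)
            for level in coeffs[1:]
        ]
        denoised = pywt.waverec2(denoised_coeffs, "db4")
        print(float(np.abs(denoised - clean).mean()))  # mean reconstruction error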

  8. Applying Data Mining Techniques to Improve Information Security in the Cloud: A Single Cache System Approach

    Directory of Open Access Journals (Sweden)

    Amany AlShawi

    2016-01-01

    Full Text Available Presently, the popularity of cloud computing is gradually increasing day by day. The purpose of this research was to enhance the security of the cloud using techniques such as data mining with specific reference to the single cache system. From the findings of the research, it was observed that the security in the cloud could be enhanced with the single cache system. For future purposes, an Apriori algorithm can be applied to the single cache system. This can be applied by all cloud providers, vendors, data distributors, and others. Further, data objects entered into the single cache system can be extended into 12 components. Database and SPSS modelers can be used to implement the same.

  9. VIDEOGRAMMETRIC RECONSTRUCTION APPLIED TO VOLCANOLOGY: PERSPECTIVES FOR A NEW MEASUREMENT TECHNIQUE IN VOLCANO MONITORING

    Directory of Open Access Journals (Sweden)

    Emmanuelle Cecchi

    2011-05-01

    Full Text Available This article deals with videogrammetric reconstruction of volcanic structures. As a first step, the method is tested in the laboratory. The objective is to reconstruct small sand and plaster cones, analogous to volcanoes, that deform with time. The initial stage consists of modelling the sensor (internal parameters) and calculating its orientation and position in space, using a multi-view calibration method. In practice two sets of views are taken: a first one around a calibration target and a second one around the studied object. Both sets are combined in the calibration software to simultaneously compute the internal parameters modelling the sensor and the external parameters giving the spatial location of each view around the cone. Following this first stage, an N-view reconstruction process is carried out. The principle is as follows: an initial 3D model of the cone is created and then iteratively deformed to fit the real object. The deformation of the meshed model is based on a texture coherence criterion. At present, this reconstruction method and its precision are being validated at laboratory scale. The objective will then be to follow analogue model deformation with time using successive reconstructions. In the future, the method will be applied to real volcanic structures. Modifications of the initial code will certainly be required; however, excellent reconstruction accuracy and valuable simplicity and flexibility are expected compared to the classic stereophotogrammetric techniques used in volcanology.
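
    A hedged sketch of the first stage, multi-view calibration, using OpenCV with a chessboard target as an assumption; file names are placeholders and the article's own calibration software may differ.

        import glob
        import cv2
        import numpy as np

        pattern = (9, 6)                                    # inner corners of the target
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

        obj_points, img_points, size = [], [], None
        for fname in glob.glob("target_*.png"):             # first set: views of the target
            gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                obj_points.append(objp)
                img_points.append(corners)
                size = gray.shape[::-1]

        # K holds the internal parameters; rvecs/tvecs give each view's orientation and position
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
        print("reprojection RMS:", rms)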

  10. A Cooperation Model Applied in a Kindergarten

    Directory of Open Access Journals (Sweden)

    Jose I. Rodriguez

    2011-10-01

    Full Text Available The need for collaboration in a global world has become a key success factor for many organizations and individuals. However, in several regions and organizations of the world it has not happened yet. One of the settings where major obstacles to collaboration occur is the business arena, mainly because of the competitive belief that cooperation could hurt profitability. We have found such behavior in a wide variety of countries, in both advanced and developing economies. Such cultural traits lead entrepreneurs to work in isolation, forgoing the possibility of building clusters to promote regional development. The need to improve the essential abilities that make up cooperation is evident. It is also very difficult to change such conduct in adults. So we decided to work with children, to prepare future generations to live in a cooperative world, so badly hit by greed and individualism nowadays. We have validated that working with children at an early age improves such behavior. This paper develops a model to enhance the essential abilities needed to improve cooperation. The model has been validated by applying it at a kindergarten school.

  11. Improving Skill Development: An Exploratory Study Comparing a Philosophical and an Applied Ethical Analysis Technique

    Science.gov (United States)

    Al-Saggaf, Yeslam; Burmeister, Oliver K.

    2012-01-01

    This exploratory study compares and contrasts two types of critical thinking techniques; one is a philosophical and the other an applied ethical analysis technique. The two techniques analyse an ethically challenging situation involving ICT that a recent media article raised to demonstrate their ability to develop the ethical analysis skills of…

  13. Surface-bounded growth modeling applied to human mandibles

    DEFF Research Database (Denmark)

    Andresen, Per Rønsholt

    1999-01-01

    This thesis presents mathematical and computational techniques for three-dimensional growth modeling applied to human mandibles. The longitudinal shape changes make the mandible a complex bone. The teeth erupt and the condylar processes change direction, from pointing predominantly backward … to yield a spatially dense field. Different methods for constructing the sparse field are compared. Adaptive Gaussian smoothing is the preferred method since it is parameter free and yields good results in practice. A new method, geometry-constrained diffusion, is used to simplify … The most successful growth model is linear and based on results from shape analysis and principal component analysis. The growth model is tested in a cross-validation study with good results. The worst-case mean modeling error in the cross-validation study is 3.7 mm. It occurs when modeling the shape and size of a 12 years…
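
    A minimal sketch of the linear, PCA-based modeling step described above, on random stand-in data: shapes are decomposed into a mean plus principal modes, and growth is expressed as movement along a mode. The dimensions and the extrapolation rule are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(4)
        shapes = rng.normal(size=(30, 300))        # 30 mandibles x 100 3-D points, flattened

        mean = shapes.mean(axis=0)
        U, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
        modes = Vt[:3]                             # first three principal modes of variation
        scores = (shapes - mean) @ modes.T         # each subject's coordinates in mode space

        # a simple linear "growth" prediction: extrapolate one subject along mode 1
        subject = scores[0].copy()
        subject[0] += 2.0 * S[0] / np.sqrt(len(shapes))   # move roughly 2 SD along mode 1
        predicted_shape = mean + subject @ modes
        print(predicted_shape.shape)               # (300,) -> 100 predicted 3-D points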

  14. Molecular modeling: An open invitation for applied mathematics

    Science.gov (United States)

    Mezey, Paul G.

    2013-10-01

    Molecular modeling methods provide a very wide range of challenges for innovative mathematical and computational techniques, where often high dimensionality, large sets of data, and complicated interrelations imply a multitude of iterative approximations. The physical and chemical basis of these methodologies involves quantum mechanics with several non-intuitive aspects, where classical interpretation and classical analogies are often misleading or outright wrong. Hence, instead of the everyday, common sense approaches which work so well in engineering, in molecular modeling one often needs to rely on rather abstract mathematical constraints and conditions, again emphasizing the high level of reliance on applied mathematics. Yet, the interdisciplinary aspects of the field of molecular modeling also generates some inertia and perhaps too conservative reliance on tried and tested methodologies, that is at least partially caused by the less than up-to-date involvement in the newest developments in applied mathematics. It is expected that as more applied mathematicians take up the challenge of employing the latest advances of their field in molecular modeling, important breakthroughs may follow. In this presentation some of the current challenges of molecular modeling are discussed.

  15. Improving operating room efficiency by applying bin-packing and portfolio techniques to surgical case scheduling.

    Science.gov (United States)

    Van Houdenhoven, Mark; van Oostrum, Jeroen M; Hans, Erwin W; Wullink, Gerhard; Kazemier, Geert

    2007-09-01

    An operating room (OR) department has adopted an efficient business model and subsequently investigated how efficiency could be further improved. The aim of this study is to show the efficiency improvement gained by lowering organizational barriers and applying advanced mathematical techniques. We applied advanced mathematical algorithms in combination with scenarios that model the relaxation of various organizational barriers, using prospectively collected data. The setting is the main inpatient OR department of a university hospital, which sets its surgical case schedules 2 wk in advance using a block planning method. The main outcome measures are the number of freed OR blocks and OR utilization. Lowering organizational barriers and applying mathematical algorithms can yield a 4.5 percentage point increase in OR utilization (95% confidence interval 4.0%-5.0%). This is obtained by reducing the total required OR time. Efficient OR departments can further improve their efficiency. The paper shows that a radical cultural change that comprises the use of mathematical algorithms and the lowering of organizational barriers improves OR utilization.
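
    A minimal sketch of the bin-packing ingredient, assuming a plain first-fit-decreasing heuristic and 480-minute OR blocks; the paper's actual algorithm and its portfolio-style slack planning are more sophisticated, and the case durations below are made up.

        def first_fit_decreasing(cases, block_minutes=480):
            """Pack case durations (minutes) into fixed-length OR blocks."""
            blocks = []                                    # each block: list of case durations
            for case in sorted(cases, reverse=True):       # longest cases first
                for block in blocks:
                    if sum(block) + case <= block_minutes:
                        block.append(case)
                        break
                else:
                    blocks.append([case])                  # open a new OR block
            return blocks

        cases = [190, 120, 240, 75, 300, 45, 160, 220, 90, 130]
        packed = first_fit_decreasing(cases)
        print(len(packed), "blocks used;", [sum(b) for b in packed])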

  16. Spatial analysis techniques applied to uranium prospecting in Chihuahua State, Mexico

    Science.gov (United States)

    Hinojosa de la Garza, Octavio R.; Montero Cabrera, María Elena; Sanín, Luz H.; Reyes Cortés, Manuel; Martínez Meyer, Enrique

    2014-07-01

    To estimate the distribution of uranium minerals in Chihuahua, the advanced statistical model "Maximum Entropy Method" (MaxEnt) was applied. A distinguishing feature of this method is that it can fit complex models even with small datasets, as is the case for the locations of uranium ores in the State of Chihuahua. For georeferencing uranium ores, a database from the United States Geological Survey and a workgroup of experts in Mexico was used. The main contribution of this paper is the proposal of maximum entropy techniques to obtain the mineral's potential distribution. The model used 24 environmental layers, such as topography, gravimetry, climate (WorldClim) and soil properties, to project the uranium distribution across the study area. For the validation of the places predicted by the model, comparisons were made with other research by the Mexican Geological Survey, with direct exploration of specific areas, and through interviews with former exploration workers of the company "Uranio de Mexico". Results: new uranium areas predicted by the model were validated, and some relationship was found between the model predictions and geological faults. Conclusions: modeling by spatial analysis provides additional information to the energy and mineral resources sectors.
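
    A hedged sketch of the presence-background idea underlying MaxEnt-style distribution modeling, using penalized logistic regression as a well-known stand-in rather than the MaxEnt software itself; all sites and environmental layers below are synthetic.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(5)
        presence = rng.normal(loc=[1.0, -0.5, 0.3], scale=0.5, size=(60, 3))   # known ore sites
        background = rng.normal(loc=0.0, scale=1.0, size=(1000, 3))            # random cells

        X = np.vstack([presence, background])          # columns = environmental layers
        y = np.r_[np.ones(len(presence)), np.zeros(len(background))]

        model = LogisticRegression(C=1.0).fit(X, y)    # L2 penalty plays MaxEnt's regularizer role
        suitability = model.predict_proba(background[:5])[:, 1]
        print(suitability.round(3))                    # relative potential of unexplored cells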

  17. Agrochemical fate models applied in agricultural areas from Colombia

    Science.gov (United States)

    Garcia-Santos, Glenda; Yang, Jing; Andreoli, Romano; Binder, Claudia

    2010-05-01

    The misuse of pesticides in mainly agricultural catchments can lead to severe problems for humans and the environment. Especially in developing countries, where overuse of agrochemicals is common and water quality monitoring at local and regional levels is incipient or lacking, models are needed for decision making and hot-spot identification. However, the complexity of the water cycle contrasts strongly with the scarce data availability, limiting the number of analyses, techniques, and models available to researchers. There is therefore a strong need for model simplification that matches model complexity to the data while still representing the relevant processes. We have developed a new model, called Westpa-Pest, to improve water quality management of an agricultural catchment located in the highlands of Colombia. Westpa-Pest is based on the fully distributed hydrologic model Wetspa and a pesticide fate module. We applied a multi-criteria analysis for model selection under the conditions and data availability found in the region and compared the result with the newly developed Westpa-Pest model. Furthermore, both models were empirically calibrated and validated. The following questions were addressed: i) what are the strengths and weaknesses of the models?, ii) which are the most sensitive parameters of each model?, iii) what happens with uncertainties in soil parameters?, and iv) how sensitive are the transfer coefficients?

  18. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  19. Selected Logistics Models and Techniques.

    Science.gov (United States)

    1984-09-01

    ACCESS PROCEDURE: On-Line System (OLS), UNINET. RCA maintains proprietary control of this model, and the model is available only through a lease arrangement. • SPONSOR: ASD/ACCC

  20. Applying stereotactic technique to establish C6 brain glioma models for laser interstitial thermotherapy research

    Institute of Scientific and Technical Information of China (English)

    石键; 张宏; 卜文良; 陈鹏; 赵洪洋; 傅伟明

    2010-01-01

    Objective: To establish C6 brain glioma models with stereotactic technique and to study laser interstitial thermotherapy (LITT) in SD rat C6 intracranial glioma models. Methods: C6 cells cultured in vitro were stereotaxically implanted into the right caudate nucleus of SD rat brains (20 μl of serum-free DMEM per rat, at a concentration of 1×10^11/L). Tumor growth was then assessed by MRI. Tumors were confirmed by ⅧR, GFAP and S-100 immunohistochemical staining. After MRI scanning and correction of the tumor location, the models were divided into groups according to treatment time and laser power, from 2 to 10 W. Semiconductor laser optical fibers were inserted into the tumors for LITT; simultaneously, the cortical temperature conducted from the central target was measured by a ThermaCAM S65 infrared thermograph, and (or) the temperature of deep tissue around the target was measured by thermocouple. Results: Inoculated with the optimized stereotactic technique, rat C6 gliomas resembled the histopathological features of human glioma. The model was reliable and reproducible, with a 96.67% yield of intracranial tumors and no extracranial growth extension. The difference between the cortical temperature conducted from the central target and the deep tissue temperature around the target was not statistically significant (P>0.05). Conclusion: The rat C6 brain glioma model resembles the histopathological features of human glioma and is a suitable model for studying LITT of glioma. Infrared thermography measures temperature conveniently, effectively and non-invasively, and the data can be processed by software in LITT research; combined with thermocouple measurement of deep tissue temperature, the results are even better.

  1. Applying Analytical Derivative and Sparse Matrix Techniques to Large-Scale Process Optimization Problems

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The performance of analytical derivative and sparse matrix techniques applied to a traditional dense sequential quadratic programming (SQP) method is studied, and the strategy utilizing those techniques is also presented. Computational results on two typical chemical optimization problems demonstrate significant enhancement in efficiency, which shows this strategy is promising and suitable for large-scale process optimization problems.
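
    A minimal sketch of the analytical-derivative idea, with SciPy's SLSQP standing in for the dense SQP code: supplying an exact gradient avoids finite-difference evaluations. The toy problem and constraint are illustrative.

        import numpy as np
        from scipy.optimize import minimize

        def f(x):                       # objective (Rosenbrock-type)
            return (x[0] - 1.0) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

        def grad_f(x):                  # analytical derivative, no finite differencing
            return np.array([
                2.0 * (x[0] - 1.0) - 400.0 * x[0] * (x[1] - x[0] ** 2),
                200.0 * (x[1] - x[0] ** 2),
            ])

        res = minimize(f, x0=np.array([-1.2, 1.0]), jac=grad_f, method="SLSQP",
                       constraints=[{"type": "ineq", "fun": lambda x: 1.5 - x[0] - x[1]}])
        print(res.x, res.nfev)          # the analytic jac cuts the function-evaluation count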

  2. Operations research techniques applied to service center logistics in power distribution users

    Directory of Open Access Journals (Sweden)

    Maria Teresinha Arns Steiner

    2006-12-01

    Full Text Available This paper deals with the optimization of the logistics for services demanded by users of power distribution lines, served by the Portão office, located in Curitiba, PR, Brazil, and operated by COPEL (Paranaense Power Company). Through the use of Operations Research techniques, an Integer Programming mathematical model and the Floyd algorithm, a method was defined to determine, in an optimized way, the number of teams needed by the selected office, as well as the optimized assignment of teams to the sites in need, in order to offer efficient services to the users, with immediate execution in emergencies and, for the other services, according to parameters set by the National Power Agency together with COPEL. The methodology hereby presented is generic, so that it could be applied to any power network (or any of its lines), and it has presented very satisfactory results for the case in analysis.
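
    A minimal sketch of the Floyd(-Warshall) step on a toy four-node network: all-pairs shortest travel times, from which team-to-site assignment costs can be derived. The network and travel times are illustrative.

        INF = float("inf")

        def floyd_warshall(dist):
            """All-pairs shortest paths; dist is a square matrix of direct travel times."""
            n = len(dist)
            d = [row[:] for row in dist]
            for k in range(n):                      # allow node k as an intermediate stop
                for i in range(n):
                    for j in range(n):
                        if d[i][k] + d[k][j] < d[i][j]:
                            d[i][j] = d[i][k] + d[k][j]
            return d

        # travel times in minutes between the office (node 0) and three service sites
        times = [[0, 7, INF, 12],
                 [7, 0, 5, INF],
                 [INF, 5, 0, 4],
                 [12, INF, 4, 0]]
        print(floyd_warshall(times))                # e.g. 0 -> 2 now costs 12 via node 1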

  3. Removal of benzaldehyde from a water/ethanol mixture by applying scavenging techniques

    DEFF Research Database (Denmark)

    Mitic, Aleksandar; Skov, Thomas; Gernaey, Krist V.

    2017-01-01

    derivatization agents as the scavengers. Discovery chemistry is performed in the beginning as a screening procedure, followed by the process design of a small-scale continuous process for benzaldehyde removal with in-line real-time monitoring. Applications of tris(hydroxymethyl) aminomethane (TRIS) are found......The presence of carbonyl compounds is very common in the food industry. The nature of such compounds is to be reactive, and thus many products involve aldehydes/ketones in their synthetic routes. By contrast, the high reactivity of carbonyl compounds can also lead to the formation of undesired compounds, such as genotoxic impurities. It can therefore be important to remove carbonyl compounds by implementing suitable removal techniques, with the aim of protecting final product quality. This work is focused on benzaldehyde as a model component, studying its removal from a water/ethanol mixture by applying different...

  4. Applying BI Techniques To Improve Decision Making And Provide Knowledge Based Management

    Directory of Open Access Journals (Sweden)

    Alexandra Maria Ioana FLOREA

    2015-07-01

    Full Text Available The paper focuses on BI techniques, and especially data mining algorithms, that can support and improve the decision-making process, with applications within the financial sector. We consider data mining techniques to be the more efficient option, and thus we applied several supervised and unsupervised learning algorithms. The case study in which these algorithms have been implemented regards the activity of a banking institution, with a focus on the management of lending activities.

  5. Optimal control applied to a thoraco-abdominal CPR model.

    Science.gov (United States)

    Jung, Eunok; Lenhart, Suzanne; Protopopescu, Vladimir; Babbs, Charles

    2008-06-01

    The techniques of optimal control are applied to a validated blood circulation model of cardiopulmonary resuscitation (CPR), consisting of a system of seven difference equations. In this system, the non-homogeneous forcing terms are chest and abdominal pressures acting as the 'controls'. We seek to maximize the blood flow, as measured by the pressure difference between the thoracic aorta and the right atrium. By applying optimal control methods, we characterize the optimal waveforms for external chest and abdominal compression during cardiac arrest and CPR in terms of the solutions of the circulation model and of the corresponding adjoint system. Numerical results are given for various scenarios. The optimal waveforms confirm the previously discovered positive effects of active decompression and interposed abdominal compression. These waveforms can be implemented with manual (Lifestick-like) and mechanical (vest-like) devices to achieve levels of blood flow substantially higher than those provided by standard CPR, a technique which, despite its long history, is far from optimal.
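
    A hedged, generic statement of the discrete-time problem the abstract describes (the exact objective and dynamics are in the paper; this is the standard template):

        % Generic discrete-time optimal control template matching the description:
        % states x_t (compartment pressures), controls u_t (chest/abdominal pressures),
        % dynamics f given by the model's seven difference equations.
        \[
          \max_{u}\; J(u) = \sum_{t=0}^{T-1} \bigl(P^{\mathrm{ao}}_t - P^{\mathrm{ra}}_t\bigr)
          \qquad \text{s.t.} \qquad x_{t+1} = f(x_t, u_t), \quad x_0 \ \text{given},
        \]
        % with the optimal controls characterized via the adjoint (costate) sequence
        % \lambda_t = \nabla_x H(x_t, u_t, \lambda_{t+1}) of the Hamiltonian
        % H = (P^{\mathrm{ao}}_t - P^{\mathrm{ra}}_t) + \lambda_{t+1}^{\top} f(x_t, u_t).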

  6. Applying mechanistic models in bioprocess development

    DEFF Research Database (Denmark)

    Lencastre Fernandes, Rita; Bodla, Vijaya Krishna; Carlquist, Magnus

    2013-01-01

    Mechanistic models should be combined with proper model analysis tools, such as uncertainty and sensitivity analysis. When assuming distributed inputs, the resulting uncertainty in the model outputs can be decomposed using sensitivity analysis to determine which input parameters are responsible for the major part of the output uncertainty. Such information can be used as guidance for experimental work; i.e., only parameters with a significant influence on model outputs need to be determined experimentally. The use of mechanistic models and model analysis tools is demonstrated in this chapter. As a practical case study, experimental data from Saccharomyces cerevisiae fermentations are used. The data are described with the well-known model of Sonnleitner and Käppeli (Biotechnol Bioeng 28:927-937, 1986) and the model is analyzed further. The methods used are generic, and can be transferred easily to other, more complex case…
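
    A minimal sketch of the uncertainty-propagation and sensitivity step described here, using a simple Monod growth model as a stand-in for the full Sonnleitner and Käppeli model; the parameter values, ranges and rank-correlation sensitivity measure are illustrative choices:

```python
import numpy as np
from scipy.integrate import odeint
from scipy.stats import spearmanr

def monod(y, t, mu_max, Ks, Yxs):
    """Simple Monod growth model: biomass X grows on substrate S."""
    X, S = y
    mu = mu_max * S / (Ks + S)
    return [mu * X, -mu * X / Yxs]

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 50)
samples, finals = [], []
for _ in range(500):
    # Distributed inputs: +/-10% uniform bands around nominal values
    p = (rng.uniform(0.36, 0.44),   # mu_max [1/h]
         rng.uniform(0.09, 0.11),   # Ks [g/L]
         rng.uniform(0.45, 0.55))   # Yxs [g/g]
    X_end = odeint(monod, [0.1, 10.0], t, args=p)[-1, 0]
    samples.append(p)
    finals.append(X_end)

# Crude global sensitivity: rank correlation of each input with the output
S, f = np.array(samples), np.array(finals)
for name, col in zip(("mu_max", "Ks", "Yxs"), S.T):
    rho, _ = spearmanr(col, f)
    print(f"{name}: Spearman rho = {rho:+.2f}")
```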

  7. Applied Creativity: The Creative Marketing Breakthrough Model

    Science.gov (United States)

    Titus, Philip A.

    2007-01-01

    Despite the increasing importance of personal creativity in today's business environment, few conceptual creativity frameworks have been presented in the marketing education literature. The purpose of this article is to advance the integration of creativity instruction into marketing classrooms by presenting an applied creative marketing…

  8. Applying Modeling Tools to Ground System Procedures

    Science.gov (United States)

    Di Pasquale, Peter

    2012-01-01

    As part of a long-term effort to revitalize the Ground Systems (GS) Engineering Section practices, Systems Modeling Language (SysML) and Business Process Model and Notation (BPMN) have been used to model existing GS products and the procedures GS engineers use to produce them.

  9. Applying MDL to Learning Best Model Granularity

    CERN Document Server

    Gao, Q; Vitanyi, P; Gao, Qiong; Li, Ming; Vitanyi, Paul

    2000-01-01

    The Minimum Description Length (MDL) principle is solidly based on a provably ideal method of inference using Kolmogorov complexity. We test how the theory behaves in practice on a general problem in model selection: that of learning the best model granularity. The performance of a model depends critically on the granularity, for example the choice of precision of the parameters. Too high precision generally involves modeling of accidental noise and too low precision may lead to confusion of models that should be distinguished. This precision is often determined ad hoc. In MDL the best model is the one that most compresses a two-part code of the data set: this embodies ``Occam's Razor.'' In two quite different experimental settings the theoretical value determined using MDL coincides with the best value found experimentally. In the first experiment the task is to recognize isolated handwritten characters in one subject's handwriting, irrespective of size and orientation. Based on a new modification of elastic...
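
    In the same MDL spirit, the sketch below selects a granularity parameter (the bin count of a histogram density model) by minimizing a two-part code length; the code-length formula is a standard textbook approximation, not the paper's exact construction:

```python
import numpy as np

def two_part_code_length(data, k):
    """Crude two-part MDL score for a k-bin histogram density model."""
    n = len(data)
    counts, edges = np.histogram(data, bins=k)
    widths = np.diff(edges)
    nz = counts > 0
    # Data cost: negative log2-likelihood under the histogram density
    density = (counts[nz] / n) / widths[nz]
    data_bits = -np.sum(counts[nz] * np.log2(density))
    # Model cost: ~0.5*log2(n) bits per free parameter (k-1 bin probabilities)
    model_bits = 0.5 * (k - 1) * np.log2(n)
    return data_bits + model_bits

rng = np.random.default_rng(2)
data = rng.normal(size=2000)
best = min(range(2, 101), key=lambda k: two_part_code_length(data, k))
print("MDL-selected bin count:", best)
```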

  10. Biplot models applied to cancer mortality rates.

    Science.gov (United States)

    Osmond, C

    1985-01-01

    "A graphical method developed by Gabriel to display the rows and columns of a matrix is applied to tables of age- and period-specific cancer mortality rates. It is particularly useful when the pattern of age-specific rates changes with time. Trends in age-specific rates and changes in the age distribution are identified as projections. Three examples [from England and Wales] are given."

  11. Modeling Techniques: Theory and Practice

    OpenAIRE

    Odd A. Asbjørnsen

    1985-01-01

    A survey is given of some crucial concepts in chemical process modeling. Those are the concepts of physical unit invariance, of reaction invariance and stoichiometry, the chromatographic effect in heterogeneous systems, the conservation and balance principles and the fundamental structures of cause and effect relationships. As an example, it is shown how the concept of reaction invariance may simplify the homogeneous reactor modeling to a large extent by an orthogonal decomposition of the pro...

  12. Model assisted qualification of NDE techniques

    Science.gov (United States)

    Ballisat, Alexander; Wilcox, Paul; Smith, Robert; Hallam, David

    2017-02-01

    The costly and time consuming nature of empirical trials typically performed for NDE technique qualification is a major barrier to the introduction of NDE techniques into service. The use of computational models has been proposed as a method by which the process of qualification can be accelerated. However, given the number of possible parameters present in an inspection, the number of combinations of parameter values scales to a power law and running simulations at all of these points rapidly becomes infeasible. Given that many NDE inspections result in a single valued scalar quantity, such as a phase or amplitude, using suitable sampling and interpolation methods significantly reduces the number of simulations that have to be performed. This paper presents initial results of applying Latin Hypercube Designs and Multivariate Adaptive Regression Splines to the inspection of a fastener hole using an oblique ultrasonic shear wave inspection. It is demonstrated that an accurate mapping of the response of the inspection for the variations considered can be achieved by sampling only a small percentage of the parameter space of variations and that the required percentage decreases as the number of parameters and the number of possible sample points increases. It is then shown how the outcome of this process can be used to assess the reliability of the inspection through commonly used metrics such as probability of detection, thereby providing an alternative methodology to the current practice of performing empirical probability of detection trials.
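
    The Latin Hypercube sampling step can be reproduced with SciPy's quasi-Monte Carlo module; the inspection parameters and their ranges below are invented placeholders:

```python
from scipy.stats import qmc

# Three illustrative inspection parameters: probe angle [deg],
# hole diameter [mm], crack length [mm]
lower = [35.0, 4.8, 0.5]
upper = [55.0, 5.2, 3.0]

sampler = qmc.LatinHypercube(d=3, seed=0)
unit_samples = sampler.random(n=50)          # 50 points in [0, 1)^3
samples = qmc.scale(unit_samples, lower, upper)
print(samples[:3])
```

    Each sampled point would then be fed to the forward simulation, and a spline regression in the spirit of MARS fitted to the resulting scalar responses.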

  13. Applying the Sport Education Model to Tennis

    Science.gov (United States)

    Ayvazo, Shiri

    2009-01-01

    The physical education field abounds with theoretically sound curricular approaches such as fitness education, skill theme approach, tactical approach, and sport education. In an era that emphasizes authentic sport experiences, the Sport Education Model includes unique features that sets it apart from other curricular models and can be a valuable…

  14. Strategy for applying scaling technique to water retention curves of forest soils

    Science.gov (United States)

    Hayashi, Y.; Kosugi, K.; Mizuyama, T.

    2009-12-01

    Describing the infiltration of water in soils on a forested hillslope requires information on the spatial variability of the water retention curve (WRC). By using a scaling technique, Hayashi et al. (2009) found that porosity mostly characterizes the spatial variability of the WRCs on a forested hillslope. This scaling technique was based on a model which assumes a lognormal pore size distribution and contains three parameters: the median of log-transformed pore radius, ψm, the variance of log-transformed pore radius, σ, and the effective porosity, θe. Thus, in the scaling method proposed by Hayashi et al. (2009), θe is a scaling factor, which should be determined for each individual soil, while ψm and σ are reference parameters common to the whole data set. They examined this scaling method using θe calculated as the difference between the observed saturated water content and the water content observed at ψ = -1000 cm for each sample, and ψm and σ derived from the whole data set of WRCs on the slope. It was then shown that this scaling method could explain almost 90 % of the spatial variability in WRCs on the forested hillslope. However, this method requires the whole data set of WRCs for deriving the reference parameters (ψm and σ). For applying the scaling technique more practically, in this study we tested a scaling method using reference parameters derived from the WRCs at a small part of the slope. In order to examine the proposed scaling method, the WRCs of 246 undisturbed forest soil samples, collected at 15 points distributed from downslope to upslope segments, were observed. In the proposed scaling method, we optimized the common ψm and σ to the WRCs for six soil samples collected at one point on the middle slope, and applied these parameters as reference parameters for the whole data set. The scaling method proposed by this study exhibited an increase of only 6 % in the residual sum of squares as compared with that of the method…
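
    The lognormal pore-size model referred to here is Kosugi's retention function. A minimal sketch of the scaling idea, in which ψm and σ are shared reference parameters and only the effective porosity θe varies from sample to sample; the parameter values are illustrative and the residual water content is ignored:

```python
import numpy as np
from scipy.special import erfc

def kosugi_theta(h, theta_e, h_m, sigma):
    """Kosugi-type retention curve: water content at suction h (cm).

    theta_e: effective porosity (per-sample scaling factor)
    h_m, sigma: median and spread of the log-transformed pore radius
    distribution, shared reference values for the whole hillslope.
    """
    Se = 0.5 * erfc(np.log(h / h_m) / (np.sqrt(2.0) * sigma))
    return theta_e * Se

h = np.logspace(0, 3, 7)            # suctions from 1 to 1000 cm
ref = dict(h_m=100.0, sigma=1.5)    # illustrative shared parameters
for theta_e in (0.35, 0.50):        # two samples differing only in porosity
    print(theta_e, np.round(kosugi_theta(h, theta_e, **ref), 3))
```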

  15. Creep lifing methodologies applied to a single crystal superalloy by use of small scale test techniques

    Energy Technology Data Exchange (ETDEWEB)

    Jeffs, S.P., E-mail: s.p.jeffs@swansea.ac.uk [Institute of Structural Materials, Swansea University, Singleton Park SA2 8PP (United Kingdom); Lancaster, R.J. [Institute of Structural Materials, Swansea University, Singleton Park SA2 8PP (United Kingdom); Garcia, T.E. [IUTA (University Institute of Industrial Technology of Asturias), University of Oviedo, Edificio Departamental Oeste 7.1.17, Campus Universitario, 33203 Gijón (Spain)

    2015-06-11

    In recent years, advances in creep data interpretation have been achieved either by modified Monkman–Grant relationships or through the more contemporary Wilshire equations, which offer the opportunity of predicting long term behaviour extrapolated from short term results. Long term lifing techniques prove extremely useful in creep dominated applications, such as in the power generation industry and in particular nuclear, where large static loads are applied; equally, a reduction in lead time for new alloy implementation within the industry is critical. The latter requirement brings about the utilisation of the small punch (SP) creep test, a widely recognised approach for obtaining useful mechanical property information from limited material volumes, as is typically the case with novel alloy development and for any in-situ mechanical testing that may be required. The ability to correlate SP creep results with uniaxial data is vital when considering the benefits of the technique. As such, an equation has been developed, known as the k_SP method, which has been proven to be an effective tool across several material systems. The current work now explores the application of the aforementioned empirical approaches to correlate small punch creep data obtained on a single crystal superalloy over a range of elevated temperatures. Finite element modelling through ABAQUS software based on the uniaxial creep data has also been implemented to characterise the SP deformation and help corroborate the experimental results.
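
    A minimal sketch of rupture-life prediction with the Wilshire equation, which links normalized stress to temperature-compensated rupture time; the constants k1, u and Q below are invented for illustration, not values fitted to the superalloy studied:

```python
import numpy as np

R = 8.314          # gas constant, J/(mol K)

def wilshire_rupture_time(stress, sigma_ts, T, k1, u, Q):
    """Rupture time t_f from the Wilshire equation:
    stress/sigma_ts = exp(-k1 * (t_f * exp(-Q/(R*T)))**u)."""
    ratio = stress / sigma_ts
    return (-np.log(ratio) / k1) ** (1.0 / u) * np.exp(Q / (R * T))

# Illustrative constants only (k1, u, Q would be fitted to test data)
t_f = wilshire_rupture_time(stress=300e6, sigma_ts=1000e6, T=1123.0,
                            k1=50.0, u=0.15, Q=300e3)
print(f"predicted rupture life: {t_f:.3g} s")
```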

  16. Modeling Techniques: Theory and Practice

    Directory of Open Access Journals (Sweden)

    Odd A. Asbjørnsen

    1985-07-01

    Full Text Available A survey is given of some crucial concepts in chemical process modeling. Those are the concepts of physical unit invariance, of reaction invariance and stoichiometry, the chromatographic effect in heterogeneous systems, the conservation and balance principles and the fundamental structures of cause and effect relationships. As an example, it is shown how the concept of reaction invariance may simplify the homogeneous reactor modeling to a large extent by an orthogonal decomposition of the process variables. This allows residence time distribution function parameters to be estimated with the reaction in situ, but without any correlation between the estimated residence time distribution parameters and the estimated reaction kinetic parameters. A general word of warning is given to the choice of wrong mathematical structure of models.

  17. Applied mathematics: Models, Discretizations, and Solvers

    Institute of Scientific and Technical Information of China (English)

    D.E. Keyes

    2007-01-01

    Computational plasma physicists inherit decades of developments in mathematical models, numerical algorithms, computer architecture, and software engineering, whose recent coming together marks the beginning of a new era of large-scale simulation.

  18. Applied modelling and computing in social science

    CERN Document Server

    Povh, Janez

    2015-01-01

    In social science, outstanding results are yielded by advanced simulation methods, based on state-of-the-art software technologies and an appropriate combination of qualitative and quantitative methods. This book presents examples of successful applications of modelling and computing in social science: business and logistic process simulation and optimization, deeper knowledge extraction from big data, better understanding and prediction of social behaviour, and modelling of health and environmental changes.

  19. Bioclimatic and vegetation mapping of a topographically complex oceanic island applying different interpolation techniques.

    Science.gov (United States)

    Garzón-Machado, Víctor; Otto, Rüdiger; del Arco Aguilar, Marcelino José

    2014-07-01

    Different spatial interpolation techniques have been applied to construct objective bioclimatic maps of La Palma, Canary Islands. Interpolation of climatic data on this topographically complex island with strong elevation and climatic gradients represents a challenge. Furthermore, meteorological stations are not evenly distributed over the island, with few stations at high elevations. We carried out spatial interpolations of the compensated thermicity index (Itc) and the annual ombrothermic Index (Io), in order to obtain appropriate bioclimatic maps by using automatic interpolation procedures, and to establish their relation to potential vegetation units for constructing a climatophilous potential natural vegetation map (CPNV). For this purpose, we used five interpolation techniques implemented in a GIS: inverse distance weighting (IDW), ordinary kriging (OK), ordinary cokriging (OCK), multiple linear regression (MLR) and MLR followed by ordinary kriging of the regression residuals. Two topographic variables (elevation and aspect), derived from a high-resolution digital elevation model (DEM), were included in OCK and MLR. The accuracy of the interpolation techniques was examined by the results of the error statistics of test data derived from comparison of the predicted and measured values. Best results for both bioclimatic indices were obtained with the MLR method with interpolation of the residuals showing the highest R2 of the regression between observed and predicted values and lowest values of root mean square errors. MLR with correction of interpolated residuals is an attractive interpolation method for bioclimatic mapping on this oceanic island since it permits one to fully account for easily available geographic information but also takes into account local variation of climatic data.
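
    Of the five interpolators compared, inverse distance weighting is the simplest to state compactly. A minimal sketch with invented station coordinates and index values (note that on La Palma it was MLR with kriging of the residuals, not IDW, that performed best):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_new, power=2.0):
    """Inverse distance weighted interpolation at the points xy_new."""
    d = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)            # avoid division by zero at stations
    w = d ** -power
    return (w @ z_obs) / w.sum(axis=1)

# Invented stations (x, y in km) and a bioclimatic index value
xy_obs = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
z_obs = np.array([120.0, 180.0, 90.0])
grid = np.array([[4.0, 3.0], [8.0, 1.0]])
print(idw(xy_obs, z_obs, grid))
```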

  20. Applying Machine Trust Models to Forensic Investigations

    Science.gov (United States)

    Wojcik, Marika; Venter, Hein; Eloff, Jan; Olivier, Martin

    Digital forensics involves the identification, preservation, analysis and presentation of electronic evidence for use in legal proceedings. In the presence of contradictory evidence, forensic investigators need a means to determine which evidence can be trusted. This is particularly true in a trust model environment where computerised agents may make trust-based decisions that influence interactions within the system. This paper focuses on the analysis of evidence in trust-based environments and the determination of the degree to which evidence can be trusted. The trust model proposed in this work may be implemented in a tool for conducting trust-based forensic investigations. The model takes into account the trust environment and parameters that influence interactions in a computer network being investigated. Also, it allows for crimes to be reenacted to create more substantial evidentiary proof.

  1. Multistructure Statistical Model Applied To Factor Analysis

    Science.gov (United States)

    Bentler, Peter M.

    1976-01-01

    A general statistical model for the multivariate analysis of mean and covariance structures is described. Matrix calculus is used to develop the statistical aspects of one new special case in detail. This special case separates the confounding of principal components and factor analysis. (DEP)

  2. Support vector machine applied in QSAR modelling

    Institute of Scientific and Technical Information of China (English)

    MEI Hu; ZHOU Yuan; LIANG Guizhao; LI Zhiliang

    2005-01-01

    Support vector machine (SVM), partial least squares (PLS), and Back-Propagation artificial neural network (ANN) approaches were employed to establish QSAR models of 2 dipeptide datasets. In order to validate the predictive capabilities of the resulting models on external datasets, both internal and external validations were performed. The division of each dataset into training and test sets was carried out by D-optimal design. The results showed that support vector machine (SVM) behaved well in both calibration and prediction. For the dataset of 48 bitter tasting dipeptides (BTD), the results obtained by support vector regression (SVR) were superior to those by PLS in both calibration and prediction. When compared with the BP artificial neural network, SVR showed less calibration power but more predictive capability. For the dataset of angiotensin-converting enzyme (ACE) inhibitors, the results obtained by support vector machine (SVM) regression were equivalent to those by PLS and the BP artificial neural network. In both datasets, SVR using a linear kernel function behaved as well as that using a radial basis kernel function. The results showed that there are wide prospects for the application of support vector machine (SVM) in QSAR modeling.
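
    A minimal sketch of the SVR-versus-PLS comparison pattern, run on synthetic descriptor data since the dipeptide descriptors are not reproduced here:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.normal(size=(48, 6))                 # 48 peptides, 6 descriptors
y = X[:, 0] - 0.5 * X[:, 1] ** 2 + rng.normal(scale=0.2, size=48)

for name, model in [
    ("PLS", PLSRegression(n_components=3)),
    ("SVR-linear", make_pipeline(StandardScaler(), SVR(kernel="linear"))),
    ("SVR-rbf", make_pipeline(StandardScaler(), SVR(kernel="rbf"))),
]:
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: cross-validated R^2 = {r2:.2f}")
```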

  3. Improving skill development: an exploratory study comparing a philosophical and an applied ethical analysis technique

    Science.gov (United States)

    Al-Saggaf, Yeslam; Burmeister, Oliver K.

    2012-09-01

    This exploratory study compares and contrasts two types of critical thinking techniques: one a philosophical and the other an applied ethical analysis technique. The two techniques are applied to an ethically challenging situation involving ICT, raised in a recent media article, to demonstrate their ability to develop the ethical analysis skills of ICT students and professionals. In particular, the skill development focused on includes: being able to recognise ethical challenges and formulate coherent responses; distancing oneself from subjective judgements; developing ethical literacy; identifying stakeholders; and communicating ethical decisions made, to name a few.

  4. Strategies and techniques of communication and public relations applied to non-profit sector

    Directory of Open Access Journals (Sweden)

    Ioana – Julieta Josan

    2010-05-01

    Full Text Available The aim of this paper is to summarize the strategies and techniques of communication and public relations applied to the non-profit sector. The approach of the paper is to identify the most appropriate strategies and techniques that the non-profit sector can use to accomplish its objectives, to highlight the specific differences between the strategies and techniques of the profit and non-profit sectors, and to identify potential communication and public relations actions that can increase visibility among the target audience, create brand awareness, and turn the target audience's perception of the non-profit sector into positive brand sentiment.

  5. Model checking timed automata : techniques and applications

    NARCIS (Netherlands)

    Hendriks, Martijn.

    2006-01-01

    Model checking is a technique to automatically analyse systems that have been modeled in a formal language. The timed automaton framework is such a formal language. It is suitable to model many realistic problems in which time plays a central role. Examples are distributed algorithms, protocols, and embedded systems…

  6. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines, assumes an understanding of graduate level multivariate statistics, including an introduction to SEM.

  7. Model output statistics applied to wind power prediction

    Energy Technology Data Exchange (ETDEWEB)

    Joensen, A.; Giebel, G.; Landberg, L. [Risoe National Lab., Roskilde (Denmark); Madsen, H.; Nielsen, H.A. [The Technical Univ. of Denmark, Dept. of Mathematical Modelling, Lyngby (Denmark)

    1999-03-01

    Being able to predict the output of a wind farm online for a day or two in advance has significant advantages for utilities, such as a better possibility to schedule fossil fuelled power plants and a better position on electricity spot markets. In this paper prediction methods based on Numerical Weather Prediction (NWP) models are considered. The spatial resolution used in NWP models implies that these predictions are not valid locally at a specific wind farm. Furthermore, due to the non-stationary nature and complexity of the processes in the atmosphere, and occasional changes of NWP models, the deviation between the predicted and the measured wind will be time dependent. If observational data is available, and if the deviation between the predictions and the observations exhibits systematic behavior, this should be corrected for; if statistical methods are used, this approach is usually referred to as MOS (Model Output Statistics). The influence of atmospheric turbulence intensity, topography, prediction horizon length and auto-correlation of wind speed and power is considered, and to take the time-variations into account, adaptive estimation methods are applied. Three estimation techniques are considered and compared: Extended Kalman Filtering, recursive least squares and a new modified recursive least squares algorithm. (au) EU-JOULE-3. 11 refs.
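
    A minimal sketch of the recursive least squares estimator with a forgetting factor, the adaptive building block behind such MOS corrections; the regressors and 'observations' below are invented:

```python
import numpy as np

def rls_update(theta, P, x, y, lam=0.98):
    """One recursive-least-squares step with forgetting factor lam.

    theta: current coefficient vector; P: inverse covariance matrix;
    x: regressor vector (e.g., a bias term plus the NWP-predicted wind);
    y: observed value. Returns updated (theta, P).
    """
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    theta = theta + k * (y - x @ theta)
    P = (P - np.outer(k, Px)) / lam
    return theta, P

rng = np.random.default_rng(4)
theta = np.zeros(2)
P = np.eye(2) * 1e3
for t in range(200):
    nwp = rng.uniform(3, 15)                      # predicted wind speed
    x = np.array([1.0, nwp])                      # bias + prediction
    y = -2.0 + 0.9 * nwp + rng.normal(scale=0.5)  # 'observed' power
    theta, P = rls_update(theta, P, x, y)
print("adapted MOS coefficients:", np.round(theta, 2))
```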

  8. Applying a Schema for Studying the Instructive Techniques Employed by Authors of Four Novels for Adolescents.

    Science.gov (United States)

    Severin, Mary Susan

    The purpose of this study was to apply a schema to adolescent novels, to determine what lessons the authors teach and what techniques they employ in their teaching. A historical review of literary criticism established a background for interpreting the educational function of literature. A schema of questions based on the historical background was…

  9. Recommendations for learners are different: Applying memory-based recommender system techniques to lifelong learning

    NARCIS (Netherlands)

    Drachsler, Hendrik; Hummel, Hans; Koper, Rob

    2007-01-01

    Drachsler, H., Hummel, H. G. K., & Koper, R. (2007). Recommendations for learners are different: applying memory-based recommender system techniques to lifelong learning. Paper presented at the SIRTEL workshop at the EC-TEL 2007 Conference. September, 17-20, 2007, Crete, Greece.

  10. Space-mapping techniques applied to the optimization of a safety isolating transformer

    NARCIS (Netherlands)

    T.V. Tran; S. Brisset; D. Echeverria (David); D.J.P. Lahaye (Domenico); P. Brochet

    2007-01-01

    Space-mapping optimization techniques make it possible to align low-fidelity and high-fidelity models in order to reduce the computational time and increase the accuracy of the solution. The main idea is to build an approximate model from the difference in response between both models. Therefore…

  11. Using Visualization Techniques in Multilayer Traffic Modeling

    Science.gov (United States)

    Bragg, Arnold

    We describe visualization techniques for multilayer traffic modeling - i.e., traffic models that span several protocol layers, and traffic models of protocols that cross layers. Multilayer traffic modeling is challenging, as one must deal with disparate traffic sources; control loops; the effects of network elements such as IP routers; cross-layer protocols; asymmetries in bandwidth, session lengths, and application behaviors; and an enormous number of complex interactions among the various factors. We illustrate by using visualization techniques to identify relationships, transformations, and scaling; to smooth simulation and measurement data; to examine boundary cases, subtle effects and interactions, and outliers; to fit models; and to compare models with others that have fewer parameters. Our experience suggests that visualization techniques can provide practitioners with extraordinary insight about complex multilayer traffic effects and interactions that are common in emerging next-generation networks.

  12. Difficulties applying recent blind source separation techniques to EEG and MEG

    CERN Document Server

    Knuth, Kevin H

    2015-01-01

    High temporal resolution measurements of human brain activity can be performed by recording the electric potentials on the scalp surface (electroencephalography, EEG), or by recording the magnetic fields near the surface of the head (magnetoencephalography, MEG). The analysis of the data is problematic due to the fact that multiple neural generators may be simultaneously active and the potentials and magnetic fields from these sources are superimposed on the detectors. It is highly desirable to un-mix the data into signals representing the behaviors of the original individual generators. This general problem is called blind source separation and several recent techniques utilizing maximum entropy, minimum mutual information, and maximum likelihood estimation have been applied. These techniques have had much success in separating signals such as natural sounds or speech, but appear to be ineffective when applied to EEG or MEG signals. Many of these techniques implicitly assume that the source distributions hav...

  13. Adaptive meshing technique applied to an orthopaedic finite element contact problem.

    Science.gov (United States)

    Roarty, Colleen M; Grosland, Nicole M

    2004-01-01

    Finite element methods have been applied extensively and with much success in the analysis of orthopaedic implants. Recently a growing interest has developed, in the orthopaedic biomechanics community, in how numerical models can be constructed for the optimal solution of problems in contact mechanics. New developments in this area are of paramount importance in the design of improved implants for orthopaedic surgery. Finite element and other computational techniques are widely applied in the analysis and design of hip and knee implants, with additional joints (ankle, shoulder, wrist) attracting increased attention. The objective of this investigation was to develop a simplified adaptive meshing scheme to facilitate the finite element analysis of a dual-curvature total wrist implant. Using currently available software, the analyst has great flexibility in mesh generation, but must prescribe element sizes and refinement schemes throughout the domain of interest. Unfortunately, it is often difficult to predict in advance a mesh spacing that will give acceptable results. Adaptive finite-element mesh capabilities operate to continuously refine the mesh to improve accuracy where it is required, with minimal intervention by the analyst. Such mesh adaptation generally means that in certain areas of the analysis domain, the size of the elements is decreased (or increased) and/or the order of the elements may be increased (or decreased). In concept, mesh adaptation is very appealing. Although there have been several previous applications of adaptive meshing for in-house FE codes, we have coupled an adaptive mesh formulation with the pre-existing commercial programs PATRAN (MacNeal-Schwendler Corp., USA) and ABAQUS (Hibbit Karlson and Sorensen, Pawtucket, RI). In doing so, we have retained several attributes of the commercial software, which are very attractive for orthopaedic implant applications.

  14. Statistical Techniques Used in Three Applied Linguistics Journals: "Language Learning,""Applied Linguistics" and "TESOL Quarterly," 1980-1986: Implications for Readers and Researchers.

    Science.gov (United States)

    Teleni, Vicki; Baldauf, Richard B., Jr.

    A study investigated the statistical techniques used by applied linguists and reported in three journals, "Language Learning,""Applied Linguistics," and "TESOL Quarterly," between 1980 and 1986. It was found that 47% of the published articles used statistical procedures. In these articles, 63% of the techniques used could be called basic, 28%…

  15. Comparison of multivariate preprocessing techniques as applied to electronic tongue based pattern classification for black tea

    Energy Technology Data Exchange (ETDEWEB)

    Palit, Mousumi [Department of Electronics and Telecommunication Engineering, Central Calcutta Polytechnic, Kolkata 700014 (India); Tudu, Bipan, E-mail: bt@iee.jusl.ac.in [Department of Instrumentation and Electronics Engineering, Jadavpur University, Kolkata 700098 (India); Bhattacharyya, Nabarun [Centre for Development of Advanced Computing, Kolkata 700091 (India); Dutta, Ankur; Dutta, Pallab Kumar [Department of Instrumentation and Electronics Engineering, Jadavpur University, Kolkata 700098 (India); Jana, Arun [Centre for Development of Advanced Computing, Kolkata 700091 (India); Bandyopadhyay, Rajib [Department of Instrumentation and Electronics Engineering, Jadavpur University, Kolkata 700098 (India); Chatterjee, Anutosh [Department of Electronics and Communication Engineering, Heritage Institute of Technology, Kolkata 700107 (India)

    2010-08-18

    In an electronic tongue, preprocessing on raw data precedes pattern analysis, and the choice of the appropriate preprocessing technique is crucial for the performance of the pattern classifier. While attempting to classify different grades of black tea using a voltammetric electronic tongue, different preprocessing techniques have been explored and a comparison of their performances is presented in this paper. The preprocessing techniques are compared first by a quantitative measurement of separability followed by principal component analysis; then two different supervised pattern recognition models based on neural networks are used to evaluate the performance of the preprocessing techniques.
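
    A minimal sketch of the comparison pattern described: apply two candidate preprocessing schemes to sensor responses and inspect class separability in the leading principal components. The voltammetric tea data are not public, so the classes and signatures below are synthetic:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
# Invented: 3 tea grades x 30 samples x 40-point voltammetric signatures
grades = np.repeat([0, 1, 2], 30)
base = rng.normal(size=(3, 40))
X = base[grades] + rng.normal(scale=0.4, size=(90, 40))

for name, Xp in [("mean-centred", X - X.mean(axis=0)),
                 ("autoscaled", StandardScaler().fit_transform(X))]:
    scores = PCA(n_components=2).fit_transform(Xp)
    # Crude separability: between-class spread over within-class spread
    cent = np.array([scores[grades == g].mean(axis=0) for g in range(3)])
    within = np.mean([scores[grades == g].std(axis=0).mean()
                      for g in range(3)])
    print(name, round(cent.std(axis=0).mean() / within, 2))
```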

  17. Applying the manifold theory to Milky Way models : First steps on morphology and kinematics

    NARCIS (Netherlands)

    Romero-Gomez, M.; Athanassoula, E.; Antoja Castelltort, Teresa; Figueras, F.; Reylé, C.; Robin, A.; Schultheis, M.

    We present recent results obtained by applying invariant manifold techniques to analytical models of the Milky Way. It has been shown that invariant manifolds can successfully reproduce the spiral arms and rings in external barred galaxies. Here, for the first time, we apply this theory to Milky Way models…

  18. Phase-shifting technique applied to circular harmonic-based joint transform correlator

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The phase-shifting technique is applied to the circular harmonic expansion-based joint transform correlator. Computer simulation has shown that the light efficiency and the discrimination capability are greatly enhanced, and the full rotation invariance is preserved after the phase-shifting technique has been used. A rotation-invariant optical pattern recognition with high discrimination capability and high light efficiency is obtained. The influence of the additive noise on the performance of the correlator is also investigated. However, the anti-noise capability of this kind of correlator still needs improving.

  19. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  20. Comparison of multivariate calibration techniques applied to experimental NIR data sets

    OpenAIRE

    Centner, V; Verdu-Andres, J; Walczak, B; Jouan-Rimbaud, D; Despagne, F; Pasti, L; Poppi, R; Massart, DL; de Noord, OE

    2000-01-01

    The present study compares the performance of different multivariate calibration techniques applied to four near-infrared data sets when test samples are well within the calibration domain. Three types of problems are discussed: the nonlinear calibration, the calibration using heterogeneous data sets, and the calibration in the presence of irrelevant information in the set of predictors. Recommendations are derived from the comparison, which should help to guide a nonchemometrician through th...

  1. Influence of a laser profile in impedance mismatch techniques applied to carbon EOS measurement

    Institute of Scientific and Technical Information of China (English)

    A.Aliverdiev; D.Batani; R.Dezulian

    2013-01-01

    We present a recent numerical analysis of the impedance mismatch technique applied to carbon equation of state measurements. We consider high-power laser pulses with a Gaussian temporal profile of different durations. We show that for the laser intensity (≈10^14 W/cm^2) and the target design considered in this paper, laser pulses with a rise time of less than 150 ps are needed.

  2. Applying XML for designing and interchanging information for multidimensional model

    Institute of Scientific and Technical Information of China (English)

    Lu Changhui; Deng Su; Zhang Weiming

    2005-01-01

    In order to exchange and share information among the conceptual models of data warehouses, and to build a solid base for the integration and sharing of metadata, a new multidimensional conceptual model based on XML is presented and its DTD is defined, which can fully describe the various semantic characteristics of a multidimensional conceptual model. Based on the UML-founded multidimensional conceptual modeling technique, the mapping algorithm between the XML-based multidimensional conceptual model and the UML class diagram is described, and an application base for the wide use of this technique is given.

  3. How High Is the Tramping Track? Mathematising and Applying in a Calculus Model-Eliciting Activity

    Science.gov (United States)

    Yoon, Caroline; Dreyfus, Tommy; Thomas, Michael O. J.

    2010-01-01

    Two complementary processes involved in mathematical modelling are mathematising a realistic situation and applying a mathematical technique to a given realistic situation. We present and analyse work from two undergraduate students and two secondary school teachers who engaged in both processes during a mathematical modelling task that required…

  4. Thin-Wire Modeling Techniques Applied to Antenna Analysis.

    Science.gov (United States)

    1974-10-11

    Lievens, J. L. and Olson, I. C., "MF Antenna System Design for Patrol Hydrofoil (Missile) (PHM)," NELC Technical Document TD 26…, 20 August…

  5. Prediction of survival with alternative modeling techniques using pseudo values.

    Directory of Open Access Journals (Sweden)

    Tjeerd van der Ploeg

    Full Text Available BACKGROUND: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo values enable statistically appropriate analyses of survival outcomes when used in seven alternative modeling techniques. METHODS: In this case study, we analyzed survival of 1282 Dutch patients with newly diagnosed Head and Neck Squamous Cell Carcinoma (HNSCC) with conventional Kaplan-Meier and Cox regression analysis. We subsequently calculated pseudo values to reflect the individual survival patterns. We used these pseudo values to compare recursive partitioning (RPART), neural nets (NNET), logistic regression (LR), general linear models (GLM) and three variants of support vector machines (SVM) with respect to dichotomous 60-month survival, and continuous pseudo values at 60 months or estimated survival time. We used the area under the ROC curve (AUC) and the root of the mean squared error (RMSE) to compare the performance of these models using bootstrap validation. RESULTS: Of a total of 1282 patients, 986 patients died during a median follow-up of 66 months (60-month survival: 52% [95% CI: 50%-55%]). The LR model had the highest optimism-corrected AUC (0.791) to predict 60-month survival, followed by the SVM model with a linear kernel (AUC 0.787). The GLM model had the smallest optimism-corrected RMSE when continuous pseudo values were considered for 60-month survival or the estimated survival time, followed by SVM models with a linear kernel. The estimated importance of predictors varied substantially by the specific aspect of survival studied and modeling technique used. CONCLUSIONS: The use of pseudo values makes it readily possible to apply alternative modeling techniques to survival problems, to compare their performance and to search further for promising…
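
    The pseudo-value construction used here is the jackknife of the Kaplan-Meier estimator: for subject i, the pseudo-observation at time t0 is n*S(t0) - (n-1)*S_-i(t0), where S_-i is the estimate with subject i left out. A minimal sketch on invented right-censored data:

```python
import numpy as np

def km_surv(time, event, t0):
    """Kaplan-Meier estimate of S(t0) for right-censored data."""
    order = np.argsort(time)
    t_sorted, e_sorted = time[order], event[order]
    s, at_risk = 1.0, len(time)
    for t, e in zip(t_sorted, e_sorted):
        if t > t0:
            break
        if e:                        # an observed event shrinks S(t)
            s *= 1.0 - 1.0 / at_risk
        at_risk -= 1                 # events and censorings leave the risk set
    return s

def pseudo_values(time, event, t0):
    """Jackknife pseudo-observations for survival at t0."""
    n = len(time)
    full = km_surv(time, event, t0)
    idx = np.arange(n)
    loo = np.array([km_surv(time[idx != i], event[idx != i], t0)
                    for i in range(n)])
    return n * full - (n - 1) * loo

rng = np.random.default_rng(6)
time = rng.exponential(60, size=100)
event = rng.random(100) < 0.75          # ~25% censored
print(pseudo_values(time, event, t0=60.0)[:5].round(2))
```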

  6. Numerical modeling techniques for flood analysis

    Science.gov (United States)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to find out the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies due to their use of hydrological parameters, which are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and their estimation in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO-2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO-2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness in grids, were found, which can be improved through a 3D model. Therefore, a 3D model was found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open channel flows have already been developed recently, but not for floodplains. Hence, it is suggested that a 3D model for floodplains should be developed by considering all the hydrological and high-resolution topographic parameter models discussed in this review, to enhance the findings on the causes and effects of flooding.

  7. Teaching students to apply multiple physical modeling methods

    NARCIS (Netherlands)

    Wiegers, T.; Verlinden, J.C.; Vergeest, J.S.M.

    2014-01-01

    Design students should be able to explore a variety of shapes before elaborating one particular shape. Current modelling courses don’t address this issue. We developed the course Rapid Modelling, which teaches students to explore multiple shape models in a short time, applying different methods and

  9. Nonlinear Eddy Viscosity Models applied to Wind Turbine Wakes

    DEFF Research Database (Denmark)

    Laan, van der, Paul Maarten; Sørensen, Niels N.; Réthoré, Pierre-Elouan;

    2013-01-01

    The linear k−ε eddy viscosity model and modified versions of two existing nonlinear eddy viscosity models are applied to single wind turbine wake simulations using a Reynolds Averaged Navier-Stokes code. Results are compared with field wake measurements. The nonlinear models give better results...

  10. A Biomechanical Modeling Guided CBCT Estimation Technique.

    Science.gov (United States)

    Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing

    2017-02-01

    Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks.

  11. Wire-mesh and ultrasound techniques applied for the characterization of gas-liquid slug flow

    Energy Technology Data Exchange (ETDEWEB)

    Ofuchi, Cesar Y.; Sieczkowski, Wytila Chagas; Neves Junior, Flavio; Arruda, Lucia V.R.; Morales, Rigoberto E.M.; Amaral, Carlos E.F.; Silva, Marco J. da [Federal University of Technology of Parana, Curitiba, PR (Brazil)], e-mails: ofuchi@utfpr.edu.br, wytila@utfpr.edu.br, neves@utfpr.edu.br, lvrarruda@utfpr.edu.br, rmorales@utfpr.edu.br, camaral@utfpr.edu.br, mdasilva@utfpr.edu.br

    2010-07-01

    Gas-liquid two-phase flows are found in a broad range of industrial applications, such as chemical, petrochemical and nuclear industries and quite often determine the efficiency and safety of process and plants. Several experimental techniques have been proposed and applied to measure and quantify two-phase flows so far. In this experimental study the wire-mesh sensor and an ultrasound technique are used and comparatively evaluated to study two-phase slug flows in horizontal pipes. The wire-mesh is an imaging technique and thus appropriated for scientific studies while ultrasound-based technique is robust and non-intrusive and hence well suited for industrial applications. Based on the measured raw data it is possible to extract some specific slug flow parameters of interest such as mean void fraction and characteristic frequency. The experiments were performed in the Thermal Sciences Laboratory (LACIT) at UTFPR, Brazil, in which an experimental two-phase flow loop is available. The experimental flow loop comprises a horizontal acrylic pipe of 26 mm diameter and 9 m length. Water and air were used to produce the two phase flow under controlled conditions. The results show good agreement between the techniques. (author)

  13. Statistical learning techniques applied to epidemiology: a simulated case-control comparison study with logistic regression

    Directory of Open Access Journals (Sweden)

    Land Walker H

    2011-01-01

    Full Text Available Abstract Background When investigating covariate interactions and group associations with standard regression analyses, the relationship between the response variable and exposure may be difficult to characterize. When the relationship is nonlinear, linear modeling techniques do not capture the nonlinear information content. Statistical learning (SL) techniques with kernels are capable of addressing nonlinear problems without making parametric assumptions. However, these techniques do not produce findings relevant for epidemiologic interpretations. A simulated case-control study was used to contrast the information embedding characteristics and separation boundaries produced by a specific SL technique with logistic regression (LR) modeling representing a parametric approach. The SL technique was comprised of a kernel mapping in combination with a perceptron neural network. Because the LR model has an important epidemiologic interpretation, the SL method was modified to produce the analogous interpretation and generate odds ratios for comparison. Results The SL approach is capable of generating odds ratios for main effects and risk factor interactions that better capture nonlinear relationships between exposure variables and outcome in comparison with LR. Conclusions The integration of SL methods in epidemiology may improve both the understanding and interpretation of complex exposure/disease relationships.
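
    For reference, the LR benchmark in such comparisons yields odds ratios directly by exponentiating fitted coefficients; a minimal sketch on simulated case-control-style data with an invented exposure-covariate interaction:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 1000
exposure = rng.normal(size=n)
covariate = rng.normal(size=n)
# Invented ground truth: exposure OR of exp(0.8), plus an interaction
lin = -0.5 + 0.8 * exposure + 0.3 * exposure * covariate
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-lin))).astype(float)

X = sm.add_constant(np.column_stack([exposure, covariate,
                                     exposure * covariate]))
fit = sm.Logit(y, X).fit(disp=0)
print("odds ratios (exposure, covariate, interaction):",
      np.round(np.exp(fit.params[1:]), 2))
```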

  14. Modeling Techniques for IN/Internet Interworking

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper focuses on the authors' contributions to ITU-T to develop the network modeling for the support of IN/Internet interworking. Following an introduction to the benchmark interworking services, the paper describes the consensus enhanced DFP architecture, which was reached based on the IETF reference model and the authors' proposal. Then the proposed information flows for the benchmark services are presented, with new or updated flows identified. Finally, a brief description is given of implementation techniques.

  15. Review of Intelligent Techniques Applied for Classification and Preprocessing of Medical Image Data

    Directory of Open Access Journals (Sweden)

    H S Hota

    2013-01-01

    Full Text Available Medical image data like ECG, EEG and MRI, CT-scan images are the most important means of diagnosing diseases of human beings in a precise way and are widely used by physicians, since problems can be clearly identified with the help of these medical images. A robust model can classify medical image data in a better way. In this paper, intelligent techniques like neural networks and fuzzy logic are explored on MRI medical image data to identify tumors in the human brain, and the need for preprocessing of medical image data is examined. Classification techniques have been used extensively in the field of medical imaging. The conventional method in medical science for medical image data classification is human inspection, which may result in misclassification of data; such problem identification is impractical for large amounts of data and for noisy data. Noisy data may be produced by a technical fault of the machine or by human error, and can lead to misclassification of medical image data. We have collected a number of papers based on neural networks and fuzzy logic, along with hybrid techniques, to explore the efficiency and robustness of the models for brain MRI data. It is found that an intelligent model along with data preprocessing using principal component analysis (PCA) and segmentation may be the competitive model in this domain.

  16. A parameter estimation and identifiability analysis methodology applied to a street canyon air pollution model

    DEFF Research Database (Denmark)

    Ottosen, Thor Bjørn; Ketzel, Matthias; Skov, Henrik

    2016-01-01

    Mathematical models are increasingly used in environmental science, thus increasing the importance of uncertainty and sensitivity analyses. In the present study, an iterative parameter estimation and identifiability analysis methodology is applied to an atmospheric model – the Operational Street Pollution Model (OSPM®). To assess the predictive validity of the model, the data is split into an estimation and a prediction data set using two data splitting approaches, and data preparation techniques (clustering and outlier detection) are analysed. The sensitivity analysis, being part…

  17. Dynamical real space renormalization group applied to sandpile models.

    Science.gov (United States)

    Ivashkevich, E V; Povolotsky, A M; Vespignani, A; Zapperi, S

    1999-08-01

    A general framework for the renormalization group analysis of self-organized critical sandpile models is formulated. The usual real space renormalization scheme for lattice models when applied to nonequilibrium dynamical models must be supplemented by feedback relations coming from the stationarity conditions. On the basis of these ideas the dynamically driven renormalization group is applied to describe the boundary and bulk critical behavior of sandpile models. A detailed description of the branching nature of sandpile avalanches is given in terms of the generating functions of the underlying branching process.

  18. Comparison of two multiaxial fatigue models applied to dental implants

    Directory of Open Access Journals (Sweden)

    JM. Ayllon

    2015-07-01

    Full Text Available This paper presents two multiaxial fatigue life prediction models applied to a commercial dental implant. One model, called the Variable Initiation Length Model, takes into account both the crack initiation and propagation phases. The second model combines the Theory of Critical Distances with a critical plane damage model to characterise the initiation and initial propagation of micro/meso cracks in the material. This paper discusses which material properties are necessary for the implementation of these models and how to obtain them in the laboratory from simple test specimens. It also describes the FE models developed for the stress/strain and stress intensity factor characterisation in the implant. The results of applying both life prediction models are compared with experimental results arising from the application of the ISO-14801 standard to a commercial dental implant.

  19. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem being addressed, but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large amount of computationally-expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).
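
    A minimal sketch of the surrogate idea: fit a cheap emulator to a handful of expensive simulator runs, then query the emulator instead of the code. The 'simulator' below is a toy function standing in for a long-running thermal-hydraulic calculation, and the Gaussian process is just one of several possible reduced order model choices:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def expensive_simulator(x):
    """Toy stand-in for a long-running thermal-hydraulic simulation."""
    return np.sin(3 * x) + 0.5 * x

# A small design of 'simulation runs'
X_train = np.linspace(0, 2, 8).reshape(-1, 1)
y_train = expensive_simulator(X_train).ravel()

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(), alpha=1e-8)
gp.fit(X_train, y_train)

# The surrogate now answers in microseconds, with uncertainty estimates
X_new = np.linspace(0, 2, 5).reshape(-1, 1)
mean, std = gp.predict(X_new, return_std=True)
print(np.round(mean, 2), np.round(std, 3))
```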

  20. A TECHNIQUE OF DIGITAL SURFACE MODEL GENERATION

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    It is usually a time-consuming process to set up a 3D digital surface model (DSM) of an object with a complex surface in real time. On the basis of the architectural survey project "Chilin Nunnery Reconstruction", this paper investigates an easy and feasible way, namely applying digital close-range photogrammetry and CAD techniques on the project site to establish the DSM for simulating ancient architectures with complex surfaces. The method has proved very effective in practice.

  1. Applying traditional signal processing techniques to social media exploitation for situational understanding

    Science.gov (United States)

    Abdelzaher, Tarek; Roy, Heather; Wang, Shiguang; Giridhar, Prasanna; Al Amin, Md. Tanvir; Bowman, Elizabeth K.; Kolodny, Michael A.

    2016-05-01

    Signal processing techniques such as filtering, detection, estimation and frequency domain analysis have long been applied to extract information from noisy sensor data. This paper describes the exploitation of these signal processing techniques to extract information from social networks, such as Twitter and Instagram. Specifically, we view social networks as noisy sensors that report events in the physical world. We then present a data processing stack for detection, localization, tracking, and veracity analysis of reported events using social network data. We show using a controlled experiment that the behavior of social sources as information relays varies dramatically depending on context. In benign contexts, there is general agreement on events, whereas in conflict scenarios, a significant amount of collective filtering is introduced by conflicted groups, creating a large data distortion. We describe signal processing techniques that mitigate such distortion, resulting in meaningful approximations of actual ground truth, given noisy reported observations. Finally, we briefly present an implementation of the aforementioned social network data processing stack in a sensor network analysis toolkit, called Apollo. Experiences with Apollo show that our techniques are successful at identifying and tracking credible events in the physical world.

  2. Effectiveness of applying progressive muscle relaxation technique on quality of life of patients with multiple sclerosis.

    Science.gov (United States)

    Ghafari, Somayeh; Ahmadi, Fazlolah; Nabavi, Masoud; Anoshirvan, Kazemnejad; Memarian, Robabe; Rafatbakhsh, Mohamad

    2009-08-01

    To identify the effects of applying the Progressive Muscle Relaxation Technique on quality of life of patients with multiple sclerosis. In view of the growing caring options in multiple sclerosis, improvement of quality of life has become increasingly relevant as a caring intervention. Complementary therapies are widely used by multiple sclerosis patients, and the Progressive Muscle Relaxation Technique is one form of complementary therapy. Quasi-experimental study. Multiple sclerosis patients (n = 66) were selected through non-probability sampling and then assigned to experimental and control groups (33 patients in each group). Means of data collection included: an individual information questionnaire, the SF-8 Health Survey, and a self-reported checklist. PMRT was performed for 63 sessions by the experimental group during two months, while no intervention was done for the control group. Statistical analysis was done with SPSS software. Student's t-test showed no significant difference between the two groups in mean scores of health-related quality of life before the study, but a significant difference between the two groups one and two months after the intervention (p Relaxation Technique on quality of life of multiple sclerosis patients, further research is required to determine better methods to promote quality of life of patients suffering from multiple sclerosis and other chronic diseases. The Progressive Muscle Relaxation Technique is practically feasible and is associated with an increase in the quality of life of multiple sclerosis patients, so health professionals need to update their knowledge about complementary therapies.

  3. Evaluation of hippocampal volume based on MRI applying manual and automatic segmentation techniques

    Energy Technology Data Exchange (ETDEWEB)

    Doring, Thomas M.; Gasparetto, Emerson L. [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil)]; Kubo, Tadeu T.A.; Domingues, Romeu C. [Clinica de Diagnostico por Imagem (CDPI), Rio de Janeiro, RJ (Brazil)]

    2010-03-15

    Various segmentation techniques using MR sequences, including manual and automatic protocols, have been developed to optimize the determination of hippocampal volume. For clinical application, automated methods with high reproducibility and accuracy may be more efficient than manual volumetry. This study aims to compare the hippocampal volumes obtained from manual and automatic segmentation methods (FreeSurfer and FSL). The automatic segmentation method FreeSurfer showed high correlation with manual segmentation. Comparing absolute hippocampal volumes, there is an overestimation by the automated methods. With a correction factor applied, the automatic method may be an alternative for estimating the absolute hippocampal volume. (author)

  4. Mathematical analysis techniques for modeling the space network activities

    Science.gov (United States)

    Foster, Lisa M.

    1992-01-01

    The objective of the present work was to explore and identify mathematical analysis techniques, and in particular, the use of linear programming. This topic was then applied to the Tracking and Data Relay Satellite System (TDRSS) in order to understand the space network better. Finally, a small scale version of the system was modeled, variables were identified, data was gathered, and comparisons were made between actual and theoretical data.
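    A hedged, toy illustration of the linear-programming step: allocating contact hours between two relay links subject to capacity and power limits. All coefficients are invented for illustration and are unrelated to actual TDRSS data.

```python
# Toy LP in the spirit of network scheduling: maximize returned data
# over two relay links subject to contact-hour and power budgets.
from scipy.optimize import linprog

# maximize 3*x1 + 2*x2  ->  minimize -(3*x1 + 2*x2)
c = [-3.0, -2.0]                       # data returned per contact hour
A_ub = [[1.0, 1.0],                    # total contact hours <= 10
        [2.0, 1.0]]                    # power units consumed <= 14
b_ub = [10.0, 14.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                 # optimal hours per link, total data
```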

  5. Geographically Weighted Logistic Regression Applied to Credit Scoring Models

    Directory of Open Access Journals (Sweden)

    Pedro Henrique Melo Albuquerque

    This study used real data from a Brazilian financial institution on transactions involving Consumer Direct Credit (CDC), granted to clients residing in the Distrito Federal (DF), to construct credit scoring models via Logistic Regression and Geographically Weighted Logistic Regression (GWLR) techniques. The aims were: to verify whether the factors that influence credit risk differ according to the borrower's geographic location; to compare the set of models estimated via GWLR with the global model estimated via Logistic Regression, in terms of predictive power and financial losses for the institution; and to verify the viability of using the GWLR technique to develop credit scoring models. The metrics used to compare the models developed via the two techniques were the AICc information criterion, the accuracy of the models, the percentage of false positives, the sum of the value of false positive debt, and the expected monetary value of portfolio default compared with the monetary value of defaults observed. The models estimated for each region in the DF were distinct in their variables and coefficients (parameters), leading to the conclusion that credit risk was influenced differently in each region in the study. The Logistic Regression and GWLR methodologies presented very close results in terms of predictive power and financial losses for the institution, and the study demonstrated the viability of using the GWLR technique to develop credit scoring models for the target population of the study.
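    A minimal sketch of the geographically weighted idea, assuming synthetic data: each local model is an ordinary logistic regression fitted with distance-based kernel weights, so coefficients vary across locations. The Gaussian kernel, bandwidth, and data below are illustrative assumptions, not the study's specification.

```python
# Geographically weighted logistic regression sketch: weight borrowers
# by distance to a target location, then fit a weighted logistic model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
coords = rng.uniform(0, 10, size=(500, 2))            # borrower locations
X = rng.normal(size=(500, 3))                         # credit features
beta = np.array([1.0, -0.5, 0.2])
logit = X @ beta + 0.3 * coords[:, 0]                 # risk drifts with space
y = (rng.uniform(size=500) < 1 / (1 + np.exp(-logit))).astype(int)

def local_model(target, bandwidth=2.0):
    d2 = ((coords - target) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))            # Gaussian kernel weights
    m = LogisticRegression().fit(X, y, sample_weight=w)
    return m.coef_.ravel()

print(local_model(np.array([1.0, 5.0])))              # coefficients differ
print(local_model(np.array([9.0, 5.0])))              # by region
```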

  6. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers a four-stage productivity cycle, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program, which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through a change in equipment, and the model can be easily applied to both manufacturing and service industries.
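    The selection problem has a knapsack-like structure that can be sketched as a small mixed integer program; the sketch below uses scipy.optimize.milp (available in SciPy 1.9+) with invented gains, costs, and budget rather than the paper's fifty-four techniques.

```python
# Knapsack-style selection: pick a subset of improvement techniques
# maximizing total productivity gain under an implementation budget.
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

gain = np.array([4.0, 7.0, 2.5, 5.0, 3.0])   # gain per technique (invented)
cost = np.array([3.0, 6.0, 1.0, 4.0, 2.0])   # implementation cost (invented)
budget = 8.0

res = milp(c=-gain,                           # milp minimizes, so negate gains
           constraints=LinearConstraint(cost, ub=budget),
           integrality=np.ones(5),            # binary decision variables
           bounds=Bounds(0, 1))
print(res.x.round(), -res.fun)                # chosen techniques, total gain
```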

  7. Automated Techniques for the Qualitative Analysis of Ecological Models: Continuous Models

    Directory of Open Access Journals (Sweden)

    Lynn van Coller

    1997-06-01

    The mathematics required for a detailed analysis of the behavior of a model can be formidable. In this paper, I demonstrate how various computer packages can aid qualitative analyses by implementing techniques from dynamical systems theory. Because computer software is used to obtain the results, the techniques can be used by nonmathematicians as well as mathematicians. In-depth analyses of complicated models that were previously very difficult to study can now be done. Because the paper is intended as an introduction to applying the techniques to ecological models, I have included an appendix describing some of the ideas and terminology. A second appendix shows how the techniques can be applied to a fairly simple predator-prey model and establishes the reliability of the computer software. The main body of the paper discusses a ratio-dependent model. The new techniques highlight some limitations of isocline analyses in this three-dimensional setting and show that the model is structurally unstable. Another appendix describes a larger model of a sheep-pasture-hyrax-lynx system. Dynamical systems techniques are compared with a traditional sensitivity analysis and are found to give more information. As a result, an incomplete relationship in the model is highlighted. I also discuss the resilience of these models to both parameter and population perturbations.
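    The core computation behind such qualitative analyses, locating an equilibrium and inspecting Jacobian eigenvalues, can be sketched numerically; the predator-prey system and parameter values below are illustrative assumptions, not the paper's ratio-dependent model.

```python
# Numerical stability check of a predator-prey model: find an interior
# equilibrium, then examine the eigenvalues of a finite-difference Jacobian.
import numpy as np
from scipy.optimize import fsolve

r, K, a, h, e, m = 1.0, 10.0, 1.0, 0.5, 0.6, 0.4   # illustrative parameters

def rhs(z):
    x, y = z                                 # prey, predator densities
    f = a * x / (1 + a * h * x)              # Holling type-II response
    return [r * x * (1 - x / K) - f * y, e * f * y - m * y]

eq = fsolve(rhs, [2.0, 2.0])                 # interior equilibrium

def jacobian(z, eps=1e-6):
    J = np.zeros((2, 2))
    for j in range(2):
        dz = np.zeros(2); dz[j] = eps
        J[:, j] = (np.array(rhs(z + dz)) - np.array(rhs(z - dz))) / (2 * eps)
    return J

eigs = np.linalg.eigvals(jacobian(eq))
print("equilibrium:", eq, "eigenvalues:", eigs)   # Re > 0 => unstable
```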

  8. Applying the ARCS Motivation Model in Technological and Vocational Education

    Science.gov (United States)

    Liao, Hung-Chang; Wang, Ya-huei

    2008-01-01

    This paper describes the incorporation of Keller's ARCS (Attention, Relevance, Confidence, and Satisfaction) motivation model into traditional classroom instruction-learning process. Viewing that technological and vocational students have low confidence and motivation in learning, the authors applied the ARCS motivation model not only in the…

  9. The HPT Model Applied to a Kayak Company's Registration Process

    Science.gov (United States)

    Martin, Florence; Hall, Herman A., IV; Blakely, Amanda; Gayford, Matthew C.; Gunter, Erin

    2009-01-01

    This case study describes the step-by-step application of the traditional human performance technology (HPT) model at a premier kayak company located on the coast of North Carolina. The HPT model was applied to address lost revenues related to three specific business issues: misinformed customers, dissatisfied customers, and guides not showing up…

  10. An applied general equilibrium model for Dutch agribusiness policy analysis.

    NARCIS (Netherlands)

    Peerlings, J.H.M.

    1993-01-01

    The purpose of this thesis was to develop a basic static applied general equilibrium (AGE) model to analyse the effects of agricultural policy changes on Dutch agribusiness. In particular the effects on inter-industry transactions, factor demand, income, and trade are of interest. The model is fairly

  11. Opto-physiological modeling applied to photoplethysmographic cardiovascular assessment.

    Science.gov (United States)

    Hu, Sijung; Azorin-Peris, Vicente; Zheng, Jia

    2013-01-01

    This paper presents opto-physiological (OP) modeling and its application in cardiovascular assessment techniques based on photoplethysmography (PPG). Existing contact point measurement techniques, i.e., pulse oximetry probes, are compared with the next generation non-contact and imaging implementations, i.e., non-contact reflection and camera-based PPG. The further development of effective physiological monitoring techniques relies on novel approaches to OP modeling that can better inform the design and development of sensing hardware and applicable signal processing procedures. With the help of finite-element optical simulation, fundamental research into OP modeling of photoplethysmography is being exploited towards the development of engineering solutions for practical biomedical systems. This paper reviews a body of research comprising two OP models that have led to significant progress in the design of transmission mode pulse oximetry probes, and approaches to 3D blood perfusion mapping for the interpretation of cardiovascular performance.

  12. Opto-Physiological Modeling Applied to Photoplethysmographic Cardiovascular Assessment

    Directory of Open Access Journals (Sweden)

    Sijung Hu

    2013-01-01

    This paper presents opto-physiological (OP) modeling and its application in cardiovascular assessment techniques based on photoplethysmography (PPG). Existing contact point measurement techniques, i.e., pulse oximetry probes, are compared with the next generation non-contact and imaging implementations, i.e., non-contact reflection and camera-based PPG. The further development of effective physiological monitoring techniques relies on novel approaches to OP modeling that can better inform the design and development of sensing hardware and applicable signal processing procedures. With the help of finite-element optical simulation, fundamental research into OP modeling of photoplethysmography is being exploited towards the development of engineering solutions for practical biomedical systems. This paper reviews a body of research comprising two OP models that have led to significant progress in the design of transmission mode pulse oximetry probes, and approaches to 3D blood perfusion mapping for the interpretation of cardiovascular performance.

  13. Applying Statistical Models and Parametric Distance Measures for Music Similarity Search

    Science.gov (United States)

    Lukashevich, Hanna; Dittmar, Christian; Bastuck, Christoph

    Automatic derivation of similarity relations between music pieces is an inherent field of music information retrieval research. Due to the nearly unrestricted amount of musical data, real-world similarity search algorithms have to be highly efficient and scalable. A possible solution is to represent each music excerpt with a statistical model (e.g., a Gaussian mixture model) and thus to reduce the computational costs by applying parametric distance measures between the models. In this paper we discuss combinations of different parametric modelling techniques and distance measures and weigh the benefits of each one against the others.
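    One standard parametric distance of this kind is the closed-form Kullback-Leibler divergence between Gaussian feature models, symmetrized below; the feature data are synthetic stand-ins for, say, per-excerpt MFCC frames.

```python
# Closed-form KL divergence between two Gaussian feature models, the
# kind of parametric distance used for fast music-similarity search.
import numpy as np

def kl_gauss(mu_p, cov_p, mu_q, cov_q):
    d = len(mu_p)
    inv_q = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    return 0.5 * (np.trace(inv_q @ cov_p) + diff @ inv_q @ diff - d
                  + np.log(np.linalg.det(cov_q) / np.linalg.det(cov_p)))

def sym_kl(mu_p, cov_p, mu_q, cov_q):      # symmetrized distance
    return kl_gauss(mu_p, cov_p, mu_q, cov_q) + kl_gauss(mu_q, cov_q, mu_p, cov_p)

# each excerpt summarized by the mean/covariance of its feature frames
rng = np.random.default_rng(0)
f1 = rng.normal(size=(500, 4))
f2 = rng.normal(loc=0.5, size=(500, 4))
print(sym_kl(f1.mean(0), np.cov(f1.T), f2.mean(0), np.cov(f2.T)))
```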

  14. Pulsed remote field eddy current technique applied to non-magnetic flat conductive plates

    Science.gov (United States)

    Yang, Binfeng; Zhang, Hui; Zhang, Chao; Zhang, Zhanbin

    2013-12-01

    Non-magnetic metal plates are widely used in aviation and industrial applications. The detection of cracks in thick plate structures, such as the multilayered structures of aircraft fuselages, has been a challenge for the nondestructive evaluation community. The remote field eddy current (RFEC) technique has shown the advantages of deep penetration and high sensitivity to deeply buried anomalies. However, the RFEC technique is mainly used to evaluate ferromagnetic tubes, and several problems must be solved before this technique can be extended to the inspection of non-magnetic conductive plates. In this article, the pulsed remote field eddy current (PRFEC) technique for the detection of defects in non-magnetic conducting plates was investigated. First, the principle of the PRFEC technique was analysed, followed by an analysis of the differences between detecting defects in ferromagnetic and non-magnetic planar structures. Three different models of the PRFEC probe were simulated using ANSYS. The location of the transition zone, the defect detection sensitivity and the ability to detect defects in thick plates using the three probes were analysed and compared. The simulation results showed that the probe with a ferrite core had the highest detection ability. The conclusions derived from the simulation study were also validated by conducting experiments.

  15. Field Assessment Techniques for Bank Erosion Modeling

    Science.gov (United States)

    1990-11-22

    Field Assessment Techniques for Bank Erosion Modeling: First Interim Report. Prepared for the US Army European Research Office (USARDSG), Edison House. Includes sedimentation analysis sheets and guidelines for the use of sedimentation analysis sheets in the field, prepared for the US Army Engineer Waterways Experiment Station.

  16. Advanced interaction techniques for medical models

    OpenAIRE

    Monclús, Eva

    2014-01-01

    Advances in medical visualization allow the analysis of anatomical structures using 3D models reconstructed from a stack of intensity-based images acquired through different techniques, with the Computerized Tomography (CT) modality being one of the most common. A general medical volume graphics application usually includes an exploration task, which is sometimes preceded by an analysis process in which the anatomical structures of interest are first identified. ...

  17. LEARNING SEMANTICS-ENHANCED LANGUAGE MODELS APPLIED TO UNSUPERVISED WSD

    Energy Technology Data Exchange (ETDEWEB)

    VERSPOOR, KARIN [Los Alamos National Laboratory]; LIN, SHOU-DE [Los Alamos National Laboratory]

    2007-01-29

    An N-gram language model aims at capturing statistical syntactic word order information from corpora. Although the concept of language models has been applied extensively to handle a variety of NLP problems with reasonable success, the standard model does not incorporate semantic information, and consequently limits its applicability to semantic problems such as word sense disambiguation. We propose a framework that integrates semantic information into the language model schema, allowing a system to exploit both syntactic and semantic information to address NLP problems. Furthermore, acknowledging the limited availability of semantically annotated data, we discuss how the proposed model can be learned without annotated training examples. Finally, we report on a case study showing how the semantics-enhanced language model can be applied to unsupervised word sense disambiguation with promising results.
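    As a point of reference for the record's approach, here is a minimal add-one-smoothed bigram model, the standard syntactic N-gram baseline that the semantics-enhanced scheme extends; the toy corpus is invented.

```python
# Baseline add-one (Laplace) smoothed bigram language model.
from collections import Counter
import math

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = set(corpus)
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def bigram_logprob(w1, w2):
    # add-one smoothing over the vocabulary
    return math.log((bigrams[(w1, w2)] + 1) / (unigrams[w1] + len(vocab)))

sentence = "the cat sat on the rug".split()
logp = sum(bigram_logprob(a, b) for a, b in zip(sentence, sentence[1:]))
print(logp)       # higher (less negative) = more probable word order
```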

  18. Modeling in applied sciences a kinetic theory approach

    CERN Document Server

    Pulvirenti, Mario

    2000-01-01

    Modeling complex biological, chemical, and physical systems, in the context of spatially heterogeneous media, is a challenging task for scientists and engineers using traditional methods of analysis. Modeling in Applied Sciences is a comprehensive survey of modeling large systems using kinetic equations, and in particular the Boltzmann equation and its generalizations. An interdisciplinary group of leading authorities carefully develops the foundations of kinetic models and discusses the connections and interactions between model theories, qualitative and computational analysis, and real-world applications. This book provides a thoroughly accessible and lucid overview of the different aspects, models, computations, and methodology for the kinetic-theory modeling process. Topics and Features * Integrated modeling perspective utilized in all chapters * Fluid dynamics of reacting gases * Self-contained introduction to kinetic models * Becker–Döring equations * Nonlinear kinetic models with chemical reactions * Kinet...

  19. Micropillar compression technique applied to micron-scale mudstone elasto-plastic deformation.

    Energy Technology Data Exchange (ETDEWEB)

    Michael, Joseph Richard; Chidsey, Thomas (Utah Geological Survey, Salt Lake City, UT); Heath, Jason E.; Dewers, Thomas A.; Boyce, Brad Lee; Buchheit, Thomas Edward

    2010-12-01

    Mudstone mechanical testing is often limited by poor core recovery and by sample size, preservation and preparation issues, which can lead to sampling bias, damage, and time-dependent effects. A micropillar compression technique, originally developed by Uchic et al. (2004), is here applied to elasto-plastic deformation of small volumes of mudstone, in the range of cubic microns. This study examines the behavior of the Gothic shale, the basal unit of the Ismay zone of the Pennsylvanian Paradox Formation and a potential shale gas play in southeastern Utah, USA. Micropillars 5 microns in diameter and 10 microns in length are precision-manufactured using an ion-milling method. Characterization of samples is carried out using: dual focused ion/scanning electron beam imaging of nano-scale pores and of the distribution of matrix clay and quartz, as well as pore-filling organics; laser scanning confocal microscopy (LSCM) 3D imaging of natural fractures; and gas permeability, among other techniques. Compression testing of micropillars under load control is performed using two different nanoindenter techniques. Deformation of cores 0.5 cm in diameter by 1 cm in length is carried out and visualized by a microscope loading stage and laser scanning confocal microscopy. Axisymmetric multistage compression testing and multi-stress-path testing is carried out using 2.54 cm plugs. Discussion of results addresses the size of representative elementary volumes applicable to continuum-scale mudstone deformation, anisotropy, and size-scale plasticity effects. Other issues include fabrication-induced damage, alignment, and the influence of the substrate.

  20. Optical coherence tomography: a non-invasive technique applied to conservation of paintings

    Science.gov (United States)

    Liang, Haida; Gomez Cid, Marta; Cucu, Radu; Dobre, George; Kudimov, Boris; Pedro, Justin; Saunders, David; Cupitt, John; Podoleanu, Adrian

    2005-06-01

    It is current practice to take tiny samples from a painting and mount and examine them in cross-section under a microscope. However, since conservation practice and ethics limit sampling to a minimum and to areas along cracks and edges of paintings, which are often unrepresentative of the whole painting, results from such analyses cannot be taken as representative of the painting as a whole. Recently, in a preliminary study, we demonstrated that near-infrared Optical Coherence Tomography (OCT) can be used directly on paintings to examine the cross-section of paint and varnish layers without contact and without the need to take samples. OCT is an optical interferometric technique developed for in vivo imaging of the eye and biological tissues; it is essentially a scanning Michelson interferometer with a "broad-band" source that has the spatial coherence of a laser. The low temporal coherence and high spatial concentration of the source are the keys to high depth resolution and high-sensitivity 3D imaging. The technique is non-invasive and non-contact, with a typical working distance of 2 cm, and enables cross-sections to be examined anywhere on a painting. In this paper, we report new results on applying near-infrared en-face OCT to painting conservation and extend the application to the examination of underdrawings, drying processes, and quantitative measurement of the optical properties of paint and varnish layers.

  1. Applying machine learning techniques for forecasting flexibility of virtual power plants

    DEFF Research Database (Denmark)

    MacDougall, Pamela; Kosek, Anna Magdalena; Bindner, Henrik W.

    2016-01-01

    hidden layer artificial neural network (ANN). Both techniques are used to model a relationship between the aggregator portfolio state and requested ramp power to the longevity of the delivered flexibility. Using validated individual household models, a smart controlled aggregated virtual power plant...... is simulated. A hierarchical market-based supply-demand matching control mechanism is used to steer the heating devices in the virtual power plant. For both the training and validation set of clusters, a random number of households, between 200 and 2000, is generated with day ahead profile scaled accordingly...
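    A hedged sketch of the regression task this record describes: mapping an aggregator portfolio state to the longevity of delivered flexibility with a small neural network. The feature set, the invented relationship, and the data below are assumptions for illustration only, not the validated household models.

```python
# Map portfolio state (households, requested ramp, outdoor temperature)
# to flexibility longevity with a small feed-forward network.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
n = 2000
households = rng.integers(200, 2000, n)          # portfolio size, as in the record
ramp_kw = rng.uniform(50, 500, n)                # requested ramp power
temp_c = rng.uniform(-10, 15, n)                 # outdoor temperature
longevity_min = (60 * households / 2000 - 0.1 * ramp_kw
                 + 2 * temp_c + rng.normal(scale=5, size=n))  # invented relation

X = np.column_stack([households, ramp_kw, temp_c])
X_tr, X_te, y_tr, y_te = train_test_split(X, longevity_min, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32,),
                                   max_iter=2000, random_state=0))
print(model.fit(X_tr, y_tr).score(X_te, y_te))   # R^2 on held-out data
```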

  2. Pattern Recognition Techniques Applied to the Study of Leishmanial Glyceraldehyde-3-Phosphate Dehydrogenase Inhibition

    Directory of Open Access Journals (Sweden)

    Norka B. H. Lozano

    2014-02-01

    Chemometric pattern recognition techniques were employed in order to obtain Structure-Activity Relationship (SAR) models relating the structures of a series of adenosine compounds to their affinity for the glyceraldehyde 3-phosphate dehydrogenase of Leishmania mexicana (LmGAPDH). A training set of 49 compounds was used to build the models, and the best ones were obtained with one geometrical and four electronic descriptors. Classification models were externally validated by predictions for a test set of 14 compounds not used in the model building process. Results of good quality were obtained, as verified by the correct classifications achieved. Moreover, the results are in good agreement with previous SAR studies on these molecules, to such an extent that we can suggest that these findings may help in further investigations on ligands of LmGAPDH capable of improving treatment of leishmaniasis.

  3. Information-theoretic model selection applied to supernovae data

    CERN Document Server

    Biesiada, M

    2007-01-01

    There are several different theoretical ideas invoked to explain dark energy, with relatively little guidance as to which one of them might be right. Therefore the emphasis of ongoing and forthcoming research in this field shifts from estimating specific parameters of a cosmological model to model selection. In this paper we apply an information-theoretic model selection approach based on the Akaike criterion as an estimator of Kullback-Leibler entropy. In particular, we present the proper way of ranking the competing models based on Akaike weights (in Bayesian language, posterior probabilities of the models). Out of many particular models of dark energy we focus on four: quintessence, quintessence with a time-varying equation of state, the brane-world model, and the generalized Chaplygin gas model, and test them on Riess' Gold sample. As a result we obtain that the best model, in terms of the Akaike criterion, is the quintessence model. The odds suggest that although there exist differences in the support given to specific scenario...
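    The ranking step reduces to computing Akaike weights from AIC values; the sketch below uses invented AIC numbers, not the paper's fits to the Gold sample.

```python
# Akaike weights: w_i = exp(-Delta_i/2) / sum_j exp(-Delta_j/2),
# with Delta_i = AIC_i - min(AIC). Weights act like model probabilities.
import numpy as np

def akaike_weights(aic):
    delta = np.asarray(aic) - np.min(aic)
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# illustrative AIC values for four models (not the paper's numbers)
aic = {"quintessence": 178.1, "var-w quintessence": 180.3,
       "brane-world": 181.0, "Chaplygin gas": 183.5}
for name, w in zip(aic, akaike_weights(list(aic.values()))):
    print(f"{name}: {w:.3f}")
```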

  4. Forecast model applied to quality control with autocorrelational data

    Directory of Open Access Journals (Sweden)

    Adriano Mendonça Souza

    2013-11-01

    This research approaches prediction models applied to industrial processes, in order to check the stability of the process by means of control charts applied to the residuals from linear modeling. The data used for the analysis refer to the moisture content, permeability, and green compression strength (RCV) from the green sand molding casting process in Company A, which operates in casting and machining, for which a dynamic multivariate regression model was fitted. As the observations were autocorrelated, it was necessary to seek a mathematical model that produces independent and identically distributed residuals. The models found make it possible to understand the behavior of the variables, assisting in producing the forecasts and in monitoring the process. Thus, it can be stated that the moisture content is very unstable compared to the other variables.
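    The record's residual-chart idea can be sketched as follows, assuming a simple AR(1) in place of the study's dynamic multivariate regression: fit the autoregression, then apply 3-sigma Shewhart limits to the approximately independent residuals.

```python
# Control chart on model residuals: fit an AR(1) to autocorrelated
# process data, then flag residuals outside 3-sigma limits.
import numpy as np

rng = np.random.default_rng(2)
x = np.zeros(200)
for t in range(1, 200):                      # simulated autocorrelated data
    x[t] = 0.7 * x[t - 1] + rng.normal(scale=0.5)

phi = np.polyfit(x[:-1], x[1:], 1)[0]        # least-squares AR(1) coefficient
resid = x[1:] - phi * x[:-1]                 # approximately independent

center, sigma = resid.mean(), resid.std(ddof=1)
out = np.where(np.abs(resid - center) > 3 * sigma)[0]
print(f"phi={phi:.2f}, out-of-control points: {out}")
```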

  5. Methods for model selection in applied science and engineering.

    Energy Technology Data Exchange (ETDEWEB)

    Field, Richard V., Jr.

    2004-10-01

    Mathematical models are developed and used to study the properties of complex systems and/or modify these systems to satisfy some performance requirements in just about every area of applied science and engineering. A particular reason for developing a model, e.g., performance assessment or design, is referred to as the model use. Our objective is the development of a methodology for selecting a model that is sufficiently accurate for an intended use. Information on the system being modeled is, in general, incomplete, so that there may be two or more models consistent with the available information. The collection of these models is called the class of candidate models. Methods are developed for selecting the optimal member from a class of candidate models for the system. The optimal model depends on the available information, the selected class of candidate models, and the model use. Classical methods for model selection, including the method of maximum likelihood and Bayesian methods, as well as a method employing a decision-theoretic approach, are formulated to select the optimal model for numerous applications. There is no requirement that the candidate models be random. Classical methods for model selection ignore model use and require data to be available. Examples are used to show that these methods can be unreliable when data is limited. The decision-theoretic approach to model selection does not have these limitations, and model use is included through an appropriate utility function. This is especially important when modeling high risk systems, where the consequences of using an inappropriate model for the system can be disastrous. The decision-theoretic method for model selection is developed and applied for a series of complex and diverse applications. These include the selection of the: (1) optimal order of the polynomial chaos approximation for non-Gaussian random variables and stationary stochastic processes, (2) optimal pressure load model to be

  6. 2D and 3D optical diagnostic techniques applied to Madonna dei Fusi by Leonardo da Vinci

    Science.gov (United States)

    Fontana, R.; Gambino, M. C.; Greco, M.; Marras, L.; Materazzi, M.; Pampaloni, E.; Pelagotti, A.; Pezzati, L.; Poggi, P.; Sanapo, C.

    2005-06-01

    3D measurement and modelling have traditionally been applied to statues, buildings, archaeological sites or similar large structures, but rarely to paintings. Recently, however, 3D measurements have been performed successfully also on easel paintings, allowing the painting's surface to be detected and documented. We used 3D models to integrate the results of various 2D imaging techniques on a common reference frame. These applications show how the 3D shape information, complemented with 2D colour maps as well as with other types of sensory data, provides the most interesting information. The 3D data acquisition was carried out by means of two devices: a high-resolution laser micro-profilometer, composed of a commercial distance meter mounted on a scanning device, and a laser-line scanner. The 2D data acquisitions were carried out using a scanning device for simultaneous RGB colour imaging and IR reflectography, and a UV fluorescence multispectral image acquisition system. We present here the results of the techniques described, applied to the analysis of an important painting of the Italian Renaissance: 'Madonna dei Fusi', attributed to Leonardo da Vinci.

  7. Zoneless and Mixture techniques applied to the Integrated Brazilian PSHA using GEM-OpenQuake

    Science.gov (United States)

    Pirchiner, M.; Drouet, S.; Assumpcao, M.

    2013-12-01

    The main goal of this work is to propose some variations on the classic Probabilistic Seismic Hazard Analysis (PSHA) calculations: on one hand, applying the zoneless methodology to characterize seismic source activity and, on the other hand, using Gaussian mixture models to combine Ground Motion Prediction Equation (GMPE) models into a mixed model. Our current knowledge of Brazilian intraplate seismicity does not allow us to identify the causative neotectonic active faults with confidence. This issue makes it difficult to characterize the main seismic sources and to compute the Gutenberg-Richter relation. Indeed, seismic zonings made by different specialists can differ considerably, while the zoneless approach imposes a quantitative method for seismic source characterization, avoiding subjective source zone definition. In addition, the low seismicity rate and the limited coverage in space and time of the seismic networks do not offer enough observations to fit a confident GMPE for this region. In this case, our purpose was to use a Gaussian mixture model to estimate a composite model, from pre-existing well-fitted GMPE models, which better describes the observed peak ground motion data. The other methodological evaluation is the use of the OpenQuake engine (a Global Earthquake Model initiative) for the hazard calculation. The logic tree input will allow us, in the near future, to combine, with weights, other hazard models from different specialists. We expect that these results will offer a new and solid basis on which to upgrade the Brazilian civil engineering seismic rules.

  8. Level of detail technique for plant models

    Institute of Scientific and Technical Information of China (English)

    Xiaopeng ZHANG; Qingqiong DENG; Marc JAEGER

    2006-01-01

    Realistic modelling and interactive rendering of forests and landscapes is a challenge in computer graphics and virtual reality. Recent developments in plant growth modelling and simulation lead to plant models faithful to botanical structure and development, representing not only the complex architecture of a real plant but also its functioning in interaction with its environment. The complex geometry and material of a large group of plants is a heavy burden even for high-performance computers, and they often overwhelm both the numerical calculation power and the graphic rendering power. Thus, in order to accelerate the rendering speed of a group of plants, software techniques are often developed. In this paper, we focus on plant organs, i.e., leaves, flowers, fruits and internodes. Our approach is a simplification process applied to all sparse organs at the same time, i.e., Level of Detail (LOD) and multi-resolution models for plants. We explain here the principle and construction of plant simplification. These are used to construct LOD and multi-resolution models of sparse organs and branches of big trees. The approach benefits from basic knowledge of plant architecture, clustering tree organs according to biological structures. We illustrate the potential of our approach on several big virtual plants for geometrical compression or LOD model definition. Finally, we demonstrate the efficiency of the proposed LOD models for realistic rendering with a virtual scene composed of 184 mature trees.

  9. Development of a computational system for radiotherapic planning with the IMRT technique applied to the MCNP computer code with 3D graphic interface for voxel models

    Energy Technology Data Exchange (ETDEWEB)

    Fonseca, Telma Cristina Ferreira

    2009-07-01

    Intensity Modulated Radiation Therapy (IMRT) is an advanced treatment technique used worldwide in oncology. In this master's work, a software package for simulating the IMRT protocol, namely SOFT-RT, was developed within the research group 'Nucleo de Radiacoes Ionizantes' (NRI) at UFMG. The computational system SOFT-RT allows simulating the absorbed dose of the radiotherapeutic treatment through a three-dimensional voxel model of the patient. The SISCODES code, from the NRI research group, helps in producing the voxel model of the region of interest from a set of digitalized CT or MRI images. SOFT-RT also allows rotation and translation of the model about the coordinate system axes for better visualization of the model and the beam. SOFT-RT collects and exports the necessary parameters to the MCNP code, which carries out the nuclear radiation transport towards the tumor and adjacent healthy tissues for each orientation and position of the beam in the plan. Through three-dimensional visualization of the voxel model of a patient, it is possible to focus on a tumoral region while preserving the healthy tissues around it. The system takes into account exactly where the radiation beam passes, which tissues are affected, and how much dose is deposited in each tissue. The Out-module of SOFT-RT imports the results and expresses the dose response by superimposing the dose and the voxel model, in gray scale, in a three-dimensional graphic representation. The present master's thesis presents the new computational system for radiotherapeutic treatment planning, the SOFT-RT code, which has been developed using the robust and multi-platform C++ programming language with the OpenGL graphics package. The Linux operating system was adopted with the goal of running it on an open-source, free-access platform. Preliminary simulation results for a cerebral tumor case are reported, as well as some dosimetric evaluations. (author)

  10. New Region Growing based on Thresholding Technique Applied to MRI Data

    Directory of Open Access Journals (Sweden)

    A. Afifi

    2015-06-01

    This paper proposes an optimal region growing threshold for the segmentation of magnetic resonance images (MRIs). The proposed algorithm combines a local search procedure with thresholding region growing to achieve better generic seeds and optimal thresholds for the region growing method. A procedure is used to detect the best possible seeds from a set of data distributed all over the image, taken as high accumulators of the histogram. The output seeds are fed to the local search algorithm to extract the best seeds around the initial seeds. Optimal thresholds are used to overcome the limitations of the region growing algorithm and to select the pixels sequentially in a random walk starting at the seed point. The proposed algorithm works automatically without any predefined parameters. It is applied to the challenging "gray matter/white matter" segmentation datasets. The experimental results, compared with other segmentation techniques, show that the proposed algorithm produces more accurate and stable results.
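    A bare-bones region-growing kernel of the kind this record builds on (without its seed search or optimal thresholding) can be sketched as a breadth-first absorption of neighbours within a tolerance of the running region mean; the image and tolerance below are synthetic.

```python
# Threshold-based region growing from a seed pixel: neighbours are
# absorbed while their intensity stays within `tol` of the region mean.
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    mask = np.zeros(img.shape, dtype=bool)
    queue = deque([seed])
    mask[seed] = True
    total, count = float(img[seed]), 1          # running region mean
    while queue:
        x, y = queue.popleft()
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            u, v = x + dx, y + dy
            if (0 <= u < img.shape[0] and 0 <= v < img.shape[1]
                    and not mask[u, v]
                    and abs(float(img[u, v]) - total / count) <= tol):
                mask[u, v] = True
                total += float(img[u, v]); count += 1
                queue.append((u, v))
    return mask

img = np.full((64, 64), 100.0); img[20:40, 20:40] = 160.0   # bright square
print(region_grow(img, (30, 30)).sum())                     # ~400 pixels
```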

  11. Solar coronal magnetic fields derived using seismology techniques applied to omnipresent sunspot waves

    CERN Document Server

    Jess, D B; Ryans, R S I; Christian, D J; Keys, P H; Mathioudakis, M; Mackay, D H; Prasad, S Krishna; Banerjee, D; Grant, S D T; Yau, S; Diamond, C

    2016-01-01

    Sunspots on the surface of the Sun are the observational signatures of intense manifestations of tightly packed magnetic field lines, with near-vertical field strengths exceeding 6,000 G in extreme cases. It is well accepted that both the plasma density and the magnitude of the magnetic field strength decrease rapidly away from the solar surface, making high-cadence coronal measurements through traditional Zeeman and Hanle effects difficult, since the observational signatures are fraught with low-amplitude signals that can become swamped by instrumental noise. Magneto-hydrodynamic (MHD) techniques have previously been applied to coronal structures, with single and spatially isolated magnetic field strengths estimated as 9-55 G. A drawback of previous MHD approaches is that they rely on particular wave modes alongside the detectability of harmonic overtones. Here we show, for the first time, how omnipresent magneto-acoustic waves, originating from within the underlying sunspot and propagating radially outward...

  12. A systematic review of applying modern software engineering techniques to developing robotic systems

    Directory of Open Access Journals (Sweden)

    Claudia Pons

    2012-04-01

    Robots have become collaborators in our daily life. While robotic systems become more and more complex, the need to engineer their software development grows as well. The traditional approaches used in developing these software systems are reaching their limits; currently used methodologies and tools fall short of addressing the needs of such complex software development. Separating robotics knowledge from short-cycled implementation technologies is essential to foster reuse and maintenance. This paper presents a systematic literature review (SLR) of the current use of modern software engineering techniques for developing robotic software systems and their actual automation level. The survey is aimed at summarizing existing evidence concerning the application of such technologies to the field of robotic systems, identifying gaps in current research, suggesting areas for further investigation, and providing a background for positioning new research activities.

  13. Electron Correlation Microscopy: A New Technique for Studying Local Atom Dynamics Applied to a Supercooled Liquid.

    Science.gov (United States)

    He, Li; Zhang, Pei; Besser, Matthew F; Kramer, Matthew Joseph; Voyles, Paul M

    2015-08-01

    Electron correlation microscopy (ECM) is a new technique that utilizes time-resolved coherent electron nanodiffraction to study dynamic atomic rearrangements in materials. It is the electron scattering equivalent of photon correlation spectroscopy, with the added advantage of nanometer-scale spatial resolution. We have applied ECM to a Pd40Ni40P20 metallic glass, heated inside a scanning transmission electron microscope into a supercooled liquid, to measure the structural relaxation time τ between the glass transition temperature Tg and the crystallization temperature Tx. τ determined from the mean diffraction intensity autocorrelation function g2(t) decreases with temperature, following an Arrhenius relationship between Tg and Tg + 25 K, and then increases as temperature approaches Tx. The distribution of τ determined from the g2(t) of single speckles is broad and changes significantly with temperature.
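    The central quantity here is the intensity autocorrelation g2(t) = <I(t')I(t'+t)> / <I(t')>^2; a minimal sketch of its computation on a synthetic speckle trace follows, with the relaxation time an invented value.

```python
# Intensity autocorrelation g2(t), the quantity ECM extracts from
# time-resolved diffraction speckle series.
import numpy as np

def g2(intensity, max_lag):
    I = np.asarray(intensity, dtype=float)
    mean2 = I.mean() ** 2
    return np.array([np.mean(I[:len(I) - k] * I[k:]) / mean2
                     for k in range(1, max_lag + 1)])

rng = np.random.default_rng(3)
n, tau = 2000, 50.0                     # invented relaxation time, in frames
noise = rng.normal(size=n)
signal = np.convolve(noise, np.exp(-np.arange(200) / tau), mode="same")
trace = 1.0 + 0.3 * signal / signal.std()   # synthetic speckle intensity

curve = g2(trace, 200)
print("g2 decays from", curve[0].round(3), "toward", curve[-1].round(3))
```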

  14. Applied data analysis and modeling for energy engineers and scientists

    CERN Document Server

    Reddy, T Agami

    2011-01-01

    ""Applied Data Analysis and Modeling for Energy Engineers and Scientists"" discusses mathematical models, data analysis, and decision analysis in modeling. The approach taken in this volume focuses on the modeling and analysis of thermal systems in an engineering environment, while also covering a number of other critical areas. Other material covered includes the tools that researchers and engineering professionals will need in order to explore different analysis methods, use critical assessment skills and reach sound engineering conclusions. The book also covers process and system design and

  15. Applying under-sampling techniques and cost-sensitive learning methods on risk assessment of breast cancer.

    Science.gov (United States)

    Hsu, Jia-Lien; Hung, Ping-Cheng; Lin, Hung-Yen; Hsieh, Chung-Ho

    2015-04-01

    Breast cancer is one of the most common causes of cancer mortality. Early detection through mammography screening could significantly reduce mortality from breast cancer. However, most screening methods consume a large amount of resources. We propose a computational model, based solely on personal health information, for breast cancer risk assessment. Our model can serve as a pre-screening program in low-cost settings. In our study, the data set, consisting of 3976 records, was collected from Taipei City Hospital from 2008/1/1 to 2008/12/31. Based on the dataset, we first apply sampling techniques and a dimension reduction method to preprocess the data. Then, we construct various kinds of classifiers (including basic classifiers, ensemble methods, and cost-sensitive methods) to predict the risk. The cost-sensitive method with a random forest classifier is able to achieve a recall (or sensitivity) of 100%. At a recall of 100%, the precision (positive predictive value, PPV) and specificity of the cost-sensitive method with a random forest classifier were 2.9% and 14.87%, respectively. In our study, we build a breast cancer risk assessment model using data mining techniques. Our model has the potential to serve as an assisting tool in breast cancer screening.
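    The cost-sensitive step can be sketched with class weights in a random forest, penalizing false negatives heavily; the synthetic imbalanced data and the weight of 50 below are illustrative assumptions, not the study's tuned values.

```python
# Cost-sensitive random forest: mis-classifying a positive case (false
# negative) is penalized far more than a false alarm, pushing recall up.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score, precision_score

X, y = make_classification(n_samples=4000, weights=[0.97], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = RandomForestClassifier(class_weight={0: 1, 1: 50}, random_state=0)
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("recall:", recall_score(y_te, pred),
      "precision:", precision_score(y_te, pred))
```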

  16. A general technique to train language models on language models

    NARCIS (Netherlands)

    Nederhof, MJ

    2005-01-01

    We show that under certain conditions, a language model can be trained on the basis of a second language model. The main instance of the technique trains a finite automaton on the basis of a probabilistic context-free grammar, such that the Kullback-Leibler distance between grammar and trained automaton

  17. Mathematical models applied in inductive non-destructive testing

    Energy Technology Data Exchange (ETDEWEB)

    Wac-Wlodarczyk, A.; Goleman, R.; Czerwinski, D. [Technical University of Lublin, 20 618 Lublin, Nadbystrzycka St 38a (Poland); Gizewski, T. [Technical University of Lublin, 20 618 Lublin, Nadbystrzycka St 38a (Poland)], E-mail: t.gizewski@pollub.pl

    2008-10-15

    Non-destructive testing comprises a wide group of investigative methods for non-homogeneous materials. Computer tomography, ultrasonic, magnetic and inductive methods, still under development, are widely applied in industry. In apparatus used for non-destructive tests, the analysis of signals is made on the basis of the complex system's answers. The answer is linearized using a model of the research system. In this paper, the authors discuss the applications of mathematical models applied in investigations of inductive magnetic materials. Statistical models, and others gathered in similarity classes, are taken into consideration. Investigation of mathematical models allows choosing the correct method, which in consequence leads to a precise representation of the inner structure of the examined object. Inductive testing of conductive media, especially those with ferromagnetic properties, is run with a high-frequency magnetic field (eddy-current method), which considerably decreases the penetration depth.

  18. Availability modeling methodology applied to solar power systems

    Science.gov (United States)

    Unione, A.; Burns, E.; Husseiny, A.

    1981-01-01

    Availability is discussed as a measure for estimating the expected performance for solar- and wind-powered generation systems and for identifying causes of performance loss. Applicable analysis techniques, ranging from simple system models to probabilistic fault tree analysis, are reviewed. A methodology incorporating typical availability models is developed for estimating reliable plant capacity. Examples illustrating the impact of design and configurational differences on the expected capacity of a solar-thermal power plant with a fossil-fired backup unit are given.

  19. Geostatistical techniques applied to mapping limnological variables and quantify the uncertainty associated with estimates

    Directory of Open Access Journals (Sweden)

    Cristiano Cigagna

    2015-12-01

    Aim: This study aimed to map the concentrations of limnological variables in a reservoir employing semivariogram geostatistical techniques and kriging estimates for unsampled locations, as well as to quantify the uncertainty associated with the estimates. Methods: We established twenty-seven points distributed in a regular mesh for sampling. We then determined the concentrations of chlorophyll-a, total nitrogen and total phosphorus. Subsequently, a spatial variability analysis was performed, the semivariogram function was modeled for all variables, and the variographic mathematical models were established. The main geostatistical estimation technique was ordinary kriging. The work was developed by estimating a dense grid of points for each variable, which formed the basis of the interpolated maps. Results: Through the semivariogram analysis it was possible to identify the random component as not significant for the estimation process of chlorophyll-a, and as significant for total nitrogen and total phosphorus. Geostatistical maps were produced from the kriging for each variable, and the respective standard deviations of the estimates were calculated. These measurements allowed us to map the concentrations of limnological variables throughout the reservoir. The calculation of standard deviations indicated the quality of the estimates and, consequently, the reliability of the final product. Conclusions: The use of the kriging technique to estimate a dense mesh of points, associated with the error dispersion (standard deviation of the estimate), made it possible to produce quality, reliable maps of the estimated variables. Concentrations of limnological variables were in general higher in the lacustrine zone and decreased towards the riverine zone. The chlorophyll-a and total nitrogen were correlated when comparing the grids generated by kriging. Although the use of kriging is more laborious compared to other interpolation methods, this
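    A bare-bones ordinary kriging sketch, assuming an exponential variogram with invented nugget, sill, and range: solve the kriging system for the weights, then report the estimate and its kriging standard deviation (the uncertainty the study maps).

```python
# Ordinary kriging at a single target location with an exponential
# variogram; the 27 sampling points echo the study's regular mesh.
import numpy as np

def variogram(h, nugget=0.1, sill=1.0, rng_=3.0):     # exponential model
    return nugget + (sill - nugget) * (1 - np.exp(-h / rng_))

def ordinary_krige(coords, values, target):
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)
    G = variogram(d)
    np.fill_diagonal(G, 0.0)                  # variogram is 0 at zero lag
    A = np.ones((n + 1, n + 1)); A[:n, :n] = G; A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = variogram(np.linalg.norm(coords - target, axis=1))
    sol = np.linalg.solve(A, b)               # weights + Lagrange multiplier
    w, mu = sol[:n], sol[n]
    estimate = w @ values
    krige_var = w @ b[:n] + mu                # kriging (estimation) variance
    return estimate, np.sqrt(max(krige_var, 0.0))

rng = np.random.default_rng(4)
coords = rng.uniform(0, 10, size=(27, 2))     # 27 sampling points
values = np.sin(coords[:, 0]) + rng.normal(scale=0.1, size=27)
print(ordinary_krige(coords, values, np.array([5.0, 5.0])))
```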

  20. Transtheoretical Model of Health Behavior Change Applied to Voice Therapy

    OpenAIRE

    2007-01-01

    Studies of patient adherence to health behavior programs, such as physical exercise, smoking cessation, and diet, have resulted in the formulation and validation of the Transtheoretical Model (TTM) of behavior change. Although widely accepted as a guide for the development of health behavior interventions, this model has not been applied to vocal rehabilitation. Because resolution of vocal difficulties frequently depends on a patient’s ability to make changes in vocal and health behaviors, th...

  1. Dynamical behavior of the Niedermayer algorithm applied to Potts models

    OpenAIRE

    Girardi, D.; Penna, T. J. P.; Branco, N. S.

    2012-01-01

    In this work we make a numerical study of the dynamic universality class of the Niedermayer algorithm applied to the two-dimensional Potts model with 2, 3, and 4 states. This algorithm updates clusters of spins and has a free parameter, $E_0$, which controls the size of these clusters, such that $E_0=-1$ corresponds to the Metropolis algorithm and $E_0=0$ regains the Wolff algorithm, for the Potts model. For $-1

  2. Improving Credit Scorecard Modeling Through Applying Text Analysis

    Directory of Open Access Journals (Sweden)

    Omar Ghailan

    2016-04-01

    In credit card scoring and loan management, the prediction of an applicant's future behavior is an important decision-support tool and a key factor in reducing the risk of loan default. Many data mining and classification approaches have been developed for credit scoring. To the best of our knowledge, building a credit scorecard by analyzing the textual data in the application form has not been explored so far. This paper proposes a comprehensive credit scorecard modeling technique that improves credit scorecard modeling through employing textual data analysis. The study uses a sample of loan application forms from a financial institution providing loan services in Yemen, which represents a real-world situation of credit scoring and loan management. The sample contains a set of Arabic textual data attributes describing the applicants. A credit scoring model based on text mining pre-processing and logistic regression techniques is proposed and evaluated through a comparison with a group of credit scorecard modeling techniques that use only the numeric attributes in the application form. The results show that adding textual attribute analysis achieves higher classification effectiveness and outperforms the traditional numerical-only data analysis techniques.

  3. An applied general equilibrium model for Dutch agribusiness policy analysis

    NARCIS (Netherlands)

    Peerlings, J.

    1993-01-01

    The purpose of this thesis was to develop a basic static applied general equilibrium (AGE) model to analyse the effects of agricultural policy changes on Dutch agribusiness. In particular the effects on inter-industry transactions, factor demand, income, and trade are of interest.

  4. An applied general equilibrium model for Dutch agribusiness policy analysis

    NARCIS (Netherlands)

    Peerlings, J.

    1993-01-01

    The purpose of this thesis was to develop a basic static applied general equilibrium (AGE) model to analyse the effects of agricultural policy changes on Dutch agribusiness. In particular the effects on inter-industry transactions, factor demand, income, and trade are of interest.

  5. Knowledge Growth: Applied Models of General and Individual Knowledge Evolution

    Science.gov (United States)

    Silkina, Galina Iu.; Bakanova, Svetlana A.

    2016-01-01

    The article considers the mathematical models of the growth and accumulation of scientific and applied knowledge since it is seen as the main potential and key competence of modern companies. The problem is examined on two levels--the growth and evolution of objective knowledge and knowledge evolution of a particular individual. Both processes are…

  6. Incorporation of RAM techniques into simulation modeling

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.C. Jr.; Haire, M.J.; Schryver, J.C.

    1995-07-01

    This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model that represents the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next-generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve the operational performance of the vehicles and reduce manpower. The network simulation model used in this work is task based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment, and includes failure management subnetworks. RAM information and other performance measures that have an impact on design requirements are collected. Design changes are evaluated through "what if" questions, sensitivity studies, and battle scenario changes.

  7. Remarks on orthotropic elastic models applied to wood

    Directory of Open Access Journals (Sweden)

    Nilson Tadeu Mascia

    2006-09-01

    Wood is generally considered an anisotropic material. In terms of engineering elastic models, wood is usually treated as an orthotropic material. This paper presents an analysis of two principal anisotropic elastic models that are usually applied to wood. The first one, the linear orthotropic model, where the material axes L (longitudinal), R (radial) and T (tangential) are coincident with the Cartesian axes (x, y, z), is the more widely accepted wood elastic model. The other one, the cylindrical orthotropic model, is more adequate to the growth characteristics of wood but mathematically more complex to adopt in practical terms. Specifically, due to its importance for the elastic parameters of wood, this paper deals with the influence of fiber orientation in these models through an appropriate transformation of coordinates. As a final result, some examples of the linear model showing the variation of the elastic moduli, i.e., Young's modulus and shear modulus, with fiber orientation are presented.
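    The coordinate-transformation effect the paper studies can be illustrated with the classical off-axis compliance formula for an orthotropic material; the spruce-like property values below are illustrative, not the paper's data.

```python
# Off-axis Young's modulus of an orthotropic material as the load axis
# rotates away from the grain by angle theta (classical transformation):
# 1/E(theta) = cos^4/E_L + sin^4/E_T + (1/G_LT - 2*nu_LT/E_L)*sin^2*cos^2
import numpy as np

E_L, E_T, G_LT, nu_LT = 11000.0, 550.0, 700.0, 0.37   # spruce-like, MPa

def E_offaxis(theta_deg):
    c = np.cos(np.radians(theta_deg))
    s = np.sin(np.radians(theta_deg))
    inv_E = (c**4 / E_L + s**4 / E_T
             + (1.0 / G_LT - 2.0 * nu_LT / E_L) * (c * s) ** 2)
    return 1.0 / inv_E

for theta in (0, 15, 30, 45, 90):
    print(theta, round(E_offaxis(theta)))   # stiffness drops rapidly off-axis
```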

  8. Applying computer simulation models as learning tools in fishery management

    Science.gov (United States)

    Johnson, B.L.

    1995-01-01

    Computer models can be powerful tools for addressing many problems in fishery management, but uncertainty about how to apply models and how they should perform can lead to a cautious approach to modeling. Within this approach, we expect models to make quantitative predictions but only after all model inputs have been estimated from empirical data and after the model has been tested for agreement with an independent data set. I review the limitations to this approach and show how models can be more useful as tools for organizing data and concepts, learning about the system to be managed, and exploring management options. Fishery management requires deciding what actions to pursue to meet management objectives. Models do not make decisions for us but can provide valuable input to the decision-making process. When empirical data are lacking, preliminary modeling with parameters derived from other sources can help determine priorities for data collection. When evaluating models for management applications, we should attempt to define the conditions under which the model is a useful, analytical tool (its domain of applicability) and should focus on the decisions made using modeling results, rather than on quantitative model predictions. I describe an example of modeling used as a learning tool for the yellow perch Perca flavescens fishery in Green Bay, Lake Michigan.

  9. Socio-optics: optical knowledge applied in modeling social phenomena

    Science.gov (United States)

    Chisleag, Radu; Chisleag Losada, Ioana-Roxana

    2011-05-01

    The term "Socio-optics" (as a natural part of Socio-physics), is rather not found in literature or at Congresses. In Optics books, there are not made references to optical models applied to explain social phenomena, in spite of Optics relying on the duality particle-wave which seems convenient to model relationships among society and its members. The authors, who have developed a few models applied to explain social phenomena based on knowledge in Optics, along with a few other models applying, in Social Sciences, knowledge from other branches of Physics, give their own examples of such optical models, f. e., of relationships among social groups and their sub-groups, by using kowledge from partially coherent optical phenomena or to explain by tunnel effect, the apparently impossible penetration of social barriers by individuals. They consider that the term "Socio-optics" may come to life. There is mentioned the authors' expertise in stimulating Socio-optics approach by systematically asking students taken courses in Optics to find applications of the newly got Wave and Photon Optics knowledge, to model social and even everyday life phenomena, eventually engaging in such activities other possibly interested colleagues.

  10. How Can Synchrotron Radiation Techniques Be Applied for Detecting Microstructures in Amorphous Alloys?

    Directory of Open Access Journals (Sweden)

    Gu-Qing Guo

    2015-11-01

    Full Text Available In this work, how synchrotron radiation techniques can be applied to detect the microstructure of metallic glass (MG) is studied. Unit cells are the basic structural units in crystals, whereas it has been suggested that the co-existence of various clusters may be the universal structural feature in MG. It is therefore a challenge to detect the microstructure of MG, even at the short-range scale, by directly using synchrotron radiation techniques such as X-ray diffraction and X-ray absorption methods. Here, a feasible scheme is developed in which state-of-the-art synchrotron radiation-based experiments are combined with simulations to investigate the microstructure of MG. By studying a typical MG composition (Zr70Pd30), it is found that various clusters do co-exist in its microstructure, with icosahedral-like clusters being the prevalent structural units. This structural feature explains why an icosahedral quasicrystalline phase precipitates prior to the glass-to-crystal transformation when Zr70Pd30 MG is heated.
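
    Experiment-simulation comparisons of this kind typically go through the pair distribution function g(r). A minimal histogram-based sketch for a simulated atomic configuration follows; periodic boundary images and element-resolved partials, which a real analysis would need, are omitted for brevity.

        import numpy as np

        def pair_distribution(positions, box_length, n_bins=100, r_max=None):
            """Histogram-based g(r) for N atoms in a cubic box (no periodic images)."""
            n = len(positions)
            r_max = r_max or box_length / 2.0
            # All pairwise distances (upper triangle, each pair counted once).
            diff = positions[:, None, :] - positions[None, :, :]
            dist = np.sqrt((diff**2).sum(-1))[np.triu_indices(n, k=1)]
            hist, edges = np.histogram(dist, bins=n_bins, range=(0.0, r_max))
            r = 0.5 * (edges[1:] + edges[:-1])
            shell_vol = 4.0 * np.pi * r**2 * (edges[1] - edges[0])
            density = n / box_length**3
            # Normalize by the ideal-gas expectation to get g(r).
            g = hist / (shell_vol * density * n / 2.0)
            return r, g

        rng = np.random.default_rng(0)
        pos = rng.uniform(0.0, 20.0, size=(500, 3))   # random "atoms" in a 20 A box
        r, g = pair_distribution(pos, box_length=20.0)
        # Fluctuates around 1 for an uncorrelated configuration; it sags at
        # large r here because periodic images are ignored.
        print(g[-10:])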

  11. Applying machine learning and image feature extraction techniques to the problem of cerebral aneurysm rupture

    Directory of Open Access Journals (Sweden)

    Steren Chabert

    2017-01-01

    Full Text Available Cerebral aneurysm is a cerebrovascular disorder characterized by a bulging in a weak area of the wall of an artery that supplies blood to the brain. It is relevant to understand the mechanisms leading to the formation of aneurysms, to their growth and, more importantly, to their rupture. The purpose of this study is to examine the impact on aneurysm rupture of the combination of different parameters, instead of focusing on only one factor at a time as is frequent in the literature, using machine learning and feature extraction techniques. This discussion is relevant in the context of the complex decision physicians face when choosing which therapy to apply, as each intervention bears its own risks and requires a complex ensemble of resources (human resources, operating rooms, etc.) in hospitals that operate under very high workload. This project was raised in our current working team, composed of an interventional neuroradiologist, a radiologic technologist, informatics engineers and biomedical engineers, from the Valparaíso public hospital, Hospital Carlos van Buren, and from Universidad de Valparaíso – Facultad de Ingeniería and Facultad de Medicina. This team has been working together for the last few years, and is now participating in the implementation of an "interdisciplinary platform for innovation in health", as part of a bigger project led by Universidad de Valparaíso (PMI UVA1402). It is worth emphasizing that this project is made feasible by the existence of this network between physicians and engineers, and by the existence of data already registered in an orderly manner, structured and recorded in digital format. The present proposal arises from the description in the current literature that the available indicators, whether based on the morphological description of the aneurysm or on the characterization of biomechanical or other factors, were shown not to provide sufficient information in order
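
    A schematic of the kind of multi-parameter analysis described above: a classifier trained on several features at once rather than a single index. The feature names and data are invented placeholders, not the study's variables or results.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        n = 200
        # Hypothetical morphological features: aspect ratio, size ratio, nonsphericity.
        X = rng.normal(size=(n, 3))
        # Synthetic rupture labels correlated with a combination of features.
        y = (0.8 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
        print("cross-validated AUC: %.2f +/- %.2f" % (scores.mean(), scores.std()))

        # Feature importances indicate which parameters drive the prediction jointly.
        clf.fit(X, y)
        print("importances:", clf.feature_importances_)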

  12. An acceleration technique for the Gauss-Seidel method applied to symmetric linear systems

    Directory of Open Access Journals (Sweden)

    Jesús Cajigas

    2014-06-01

    Full Text Available A preconditioning technique to improve the convergence of the Gauss-Seidel method applied to symmetric linear systems while preserving symmetry is proposed. The preconditioner is of the form I + K and can be applied an arbitrary number of times. It is shown that, under certain conditions, applying the preconditioner a finite number of times reduces the system matrix to a diagonal. A series of numerical experiments using matrices from spatial discretizations of partial differential equations demonstrates that both versions of the preconditioner, point and block, exhibit lower iteration counts than the non-symmetric version.
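
    For reference, a bare Gauss-Seidel iteration for a symmetric positive definite system; this sketch shows only the baseline method being accelerated, not the paper's I + K preconditioner.

        import numpy as np

        def gauss_seidel(A, b, x0=None, tol=1e-10, max_iter=10000):
            """Solve Ax = b by Gauss-Seidel; converges for symmetric positive definite A."""
            n = len(b)
            x = np.zeros(n) if x0 is None else x0.astype(float).copy()
            for it in range(max_iter):
                for i in range(n):
                    # Use already-updated entries x[:i] and old entries x[i+1:].
                    s = A[i, :i] @ x[:i] + A[i, i + 1:] @ x[i + 1:]
                    x[i] = (b[i] - s) / A[i, i]
                if np.linalg.norm(A @ x - b) < tol:
                    return x, it + 1
            return x, max_iter

        # Small SPD test system (1D Poisson stencil).
        n = 50
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        x, iters = gauss_seidel(A, b)
        print("iterations:", iters, "residual:", np.linalg.norm(A @ x - b))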

  13. Micropillar Compression Technique Applied to Micron-Scale Mudstone Elasto-Plastic Deformation

    Science.gov (United States)

    Dewers, T. A.; Boyce, B.; Buchheit, T.; Heath, J. E.; Chidsey, T.; Michael, J.

    2010-12-01

    Mudstone mechanical testing is often limited by poor core recovery and by sample size, preservation and preparation issues, which can lead to sampling bias, damage, and time-dependent effects. A micropillar compression technique, originally developed by Uchic et al. 2004, is here applied to the elasto-plastic deformation of small volumes of mudstone, in the range of cubic microns. This study examines the behavior of the Gothic shale, the basal unit of the Ismay zone of the Pennsylvanian Paradox Formation and a potential shale-gas play in southeastern Utah, USA. Micropillars 5 microns in diameter and 10 microns in length are precisely manufactured using an ion-milling method. Characterization of the samples is carried out using: dual focused ion - scanning electron beam imaging of nano-scale pores and of the distribution of matrix clay and quartz, as well as pore-filling organics; laser scanning confocal microscopy (LSCM) 3D imaging of natural fractures; and gas permeability, among other techniques. Compression testing of the micropillars under load control is performed using two different nanoindenter techniques. Cores 0.5 cm in diameter by 1 cm in length are deformed on a microscope loading stage and visualized by laser scanning confocal microscopy. Axisymmetric multistage compression testing and multi-stress-path testing is carried out using 2.54 cm plugs. Discussion of the results addresses the size of representative elementary volumes applicable to continuum-scale mudstone deformation, anisotropy, and size-scale plasticity effects. Other issues include fabrication-induced damage, alignment, and the influence of the substrate. This work is funded by the US Department of Energy, Office of Basic Energy Sciences. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  14. Applying Particle Tracking Model In The Coastal Modeling System

    Science.gov (United States)

    2011-01-01

    (Figure captions from the source report: "Figure 1. CMS domain, grid, and bathymetry"; "Figure 11. Specifications of ...") CMS-Flow is driven by ... through the simulation. At the end of the simulation, about 65 percent of the released clay particles are considered "dead," which means that they are either permanently buried at the sea bed or have moved out of the model domain.

  15. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B.; Fagan, J.R. Jr.

    1995-12-31

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. This will be accomplished in a cooperative program by Penn State University and the Allison Engine Company. The program tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor.

  16. Geometrical geodesy techniques in Goddard earth models

    Science.gov (United States)

    Lerch, F. J.

    1974-01-01

    The method for combining geometrical data with satellite dynamical and gravimetry data for the solution of geopotential and station location parameters is discussed. Geometrical tracking data (simultaneous events) from the global network of BC-4 stations are currently being processed in a solution that will greatly enhance the geodetic world system of stations. Previously, the stations in Goddard earth models were derived only from dynamical tracking data. A linear regression model is formulated for combining the data, based upon the statistical technique of weighted least squares. Reduced normal equations, independent of satellite and instrumental parameters, are derived for the solution of the geodetic parameters. Exterior standards for the evaluation of the solution and for the scale of the earth's figure are discussed.
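
    The weighted least-squares combination mentioned above amounts to solving the normal equations (X^T W X) beta = X^T W y. A generic sketch with made-up observations (not geodetic data):

        import numpy as np

        def weighted_least_squares(X, y, w):
            """Solve the weighted normal equations (X^T W X) beta = X^T W y."""
            W = np.diag(w)
            XtW = X.T @ W
            return np.linalg.solve(XtW @ X, XtW @ y)

        rng = np.random.default_rng(2)
        n = 100
        X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])
        beta_true = np.array([1.5, -0.3])
        sigma = rng.uniform(0.1, 1.0, n)            # heteroscedastic noise levels
        y = X @ beta_true + rng.normal(scale=sigma)
        w = 1.0 / sigma**2                          # weights = inverse variances

        print("estimated:", weighted_least_squares(X, y, w))
        print("true:     ", beta_true)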

  17. Interpolation techniques in robust constrained model predictive control

    Science.gov (United States)

    Kheawhom, Soorathep; Bumroongsri, Pornchai

    2017-05-01

    This work investigates interpolation techniques that can be employed in off-line robust constrained model predictive control for a discrete time-varying system. A sequence of feedback gains is determined by solving off-line a series of optimal control problems. A sequence of nested robustly positive invariant sets, either ellipsoidal or polyhedral, is then constructed. At each sampling time, the smallest invariant set containing the current state is determined. If the current invariant set is the innermost set, the pre-computed gain associated with the innermost set is applied. Otherwise, the feedback gain is determined by linear interpolation of the pre-computed gains. The proposed algorithms are illustrated with case studies of a two-tank system. The simulation results show that the proposed interpolation techniques significantly improve the control performance of off-line robust model predictive control without sacrificing much on-line computational performance.
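
    A sketch of the on-line selection-and-interpolation step for ellipsoidal sets {x : x^T P x <= 1}. The sets, gains and interpolation weighting below are illustrative placeholders, not those computed by the paper's off-line optimization.

        import numpy as np

        # Hypothetical nested ellipsoids E_j = {x : x^T P_j x <= 1}, innermost last,
        # each with an associated pre-computed feedback gain K_j.
        P_list = [np.diag([0.05, 0.05]), np.diag([0.2, 0.2]), np.diag([1.0, 1.0])]
        K_list = [np.array([[-0.4, -0.1]]), np.array([[-0.7, -0.2]]),
                  np.array([[-1.2, -0.4]])]

        def interpolated_gain(x):
            """Pick the smallest invariant set containing x; interpolate gains.

            Assumes x lies at least in the outermost set.
            """
            levels = [float(x @ P @ x) for P in P_list]   # <= 1 means x is inside
            j = max(i for i, lv in enumerate(levels) if lv <= 1.0)
            if j == len(P_list) - 1:
                return K_list[j]              # innermost set: use its gain directly
            # One simple weighting: slide between the two ellipsoid boundaries.
            lam = (1.0 - levels[j]) / ((1.0 - levels[j]) + (levels[j + 1] - 1.0))
            return (1.0 - lam) * K_list[j] + lam * K_list[j + 1]

        x = np.array([1.5, -0.5])
        print("u =", interpolated_gain(x) @ x)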

  18. A procedure for Applying a Maturity Model to Process Improvement

    Directory of Open Access Journals (Sweden)

    Elizabeth Pérez Mergarejo

    2014-09-01

    Full Text Available A maturity model is an evolutionary roadmap for implementing the vital practices of one or more domains of organizational processes. Maturity models are little used in the Latin-American context. This paper presents a procedure for applying the Process and Enterprise Maturity Model developed by Michael Hammer [1]. The procedure is divided into three steps: preparation, evaluation and improvement plan. Hammer's maturity model, together with the proposed procedure, can be used by organizations to improve their processes, involving managers and employees.

  19. Predictive control applied to an evaporator mathematical model

    Directory of Open Access Journals (Sweden)

    Daniel Alonso Giraldo Giraldo

    2010-07-01

    Full Text Available This paper outlines the design of a predictive control model (PCM) applied to a mathematical model of a falling-film evaporator with mechanical steam compression, like those used in the dairy industry. The controller was designed using the Connoisseur software package and data gathered from the simulation of a non-linear mathematical model. A control law was obtained by minimising a cost function subject to dynamic system constraints, using a quadratic programming (QP) algorithm. A linear programming (LP) algorithm was used for finding a sub-optimal operating point for the process in the stationary state.
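
    A minimal illustration of the receding-horizon idea behind such a controller: an unconstrained finite-horizon quadratic cost minimised over a linear model. The plant, horizon and weights are invented for the sketch; the paper's QP formulation additionally handles constraints.

        import numpy as np

        # Toy first-order discrete plant x[k+1] = a*x[k] + b*u[k] standing in for
        # the linearised evaporator model; values are illustrative only.
        a, b = 0.9, 0.1
        N, lam = 10, 0.01        # horizon and input weighting
        r = 1.0                  # setpoint

        def mpc_input(x0):
            """Minimise sum (x[k]-r)^2 + lam*u[k]^2 over the horizon; apply first input."""
            # Prediction: x = F*x0 + G*u with F[k] = a^(k+1), G[k,j] = a^(k-j)*b.
            F = np.array([a**k for k in range(1, N + 1)])
            G = np.zeros((N, N))
            for k in range(N):
                for j in range(k + 1):
                    G[k, j] = a**(k - j) * b
            # Least-squares form: minimise ||G u - (r - F x0)||^2 + lam ||u||^2.
            H = G.T @ G + lam * np.eye(N)
            g = G.T @ (r - F * x0)
            u = np.linalg.solve(H, g)
            return u[0]          # receding horizon: apply only the first move

        x = 0.0
        for step in range(30):
            x = a * x + b * mpc_input(x)
        print("state after 30 steps:", x)   # approaches the setpoint r = 1.0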

  20. 3D-Laser-Scanning Technique Applied to Bulk Density Measurements of Apollo Lunar Samples

    Science.gov (United States)

    Macke, R. J.; Kent, J. J.; Kiefer, W. S.; Britt, D. T.

    2015-01-01

    In order to better interpret gravimetric data from orbiters such as GRAIL and LRO to understand the subsurface composition and structure of the lunar crust, it is important to have a reliable database of the density and porosity of lunar materials. To this end, we have been surveying these physical properties in both lunar meteorites and Apollo lunar samples. To measure porosity, both grain density and bulk density are required. For bulk density, our group has historically relied on sub-mm bead immersion techniques, though several factors have made this technique problematic for our work with Apollo samples. Samples allocated for measurement are often smaller than optimal for the technique, leading to large error bars. Also, for some samples we were required to use pure alumina beads instead of our usual glass beads. The alumina beads were subject to undesirable static effects, producing unreliable results. Other investigators have tested the use of 3D laser scanners on meteorites for measuring bulk volumes. Early work, though promising, was plagued with difficulties including poor response on dark or reflective surfaces, difficulty reproducing sharp edges, and long processing times for producing shape models. Due to progress in technology, however, laser scanners have improved considerably in recent years. We tested this technique on 27 lunar samples in the Apollo collection using a scanner at NASA Johnson Space Center. We found it to be reliable and more precise than beads, with the added benefit that it involves no direct contact with the sample, enabling the study of particularly friable samples for which bead immersion is not possible.

  1. Applying Model Based Systems Engineering to NASA's Space Communications Networks

    Science.gov (United States)

    Bhasin, Kul; Barnes, Patrick; Reinert, Jessica; Golden, Bert

    2013-01-01

    System engineering practices for complex systems and networks now require that requirements, architecture, and concept-of-operations product development teams simultaneously harmonize their activities to provide timely, useful and cost-effective products. When dealing with complex systems of systems, traditional systems engineering methodology quickly falls short of achieving project objectives. That approach is encumbered by the use of a number of disparate hardware and software tools, spreadsheets and documents to grasp the concept of the network design and operation. In the case of NASA's space communication networks, since the networks are geographically distributed, and so are their subject matter experts, the team is challenged to create a common language and tools to produce its products. Using Model Based Systems Engineering methods and tools allows for a unified representation of the system in a model that enables a highly interrelated level of detail. To date, the Program System Engineering (PSE) team has been able to model each network from its top-level operational activities and system functions down to the atomic level through relational modeling decomposition. These models allow for a better understanding of the relationships between NASA's stakeholders and internal organizations, and of the impacts to all related entities due to the integration and sustainment of existing systems. Understanding the existing systems is essential to an accurate and detailed study of the integration options being considered. In this paper, we identify the challenges the PSE team faced in its quest to unify complex legacy space communications networks and their operational processes. We describe the initial approaches undertaken and the evolution toward model based system engineering applied to produce Space Communication and Navigation (SCaN) PSE products. We demonstrate the practice of Model Based System Engineering applied to integrating space communication networks and the summary of its

  2. Size-specific sensitivity: Applying a new structured population model

    Energy Technology Data Exchange (ETDEWEB)

    Easterling, M.R.; Ellner, S.P.; Dixon, P.M.

    2000-03-01

    Matrix population models require the population to be divided into discrete stage classes. In many cases, especially when classes are defined by a continuous variable, such as length or mass, there are no natural breakpoints, and the division is artificial. The authors introduce the integral projection model, which eliminates the need for division into discrete classes, without requiring any additional biological assumptions. Like a traditional matrix model, the integral projection model provides estimates of the asymptotic growth rate, stable size distribution, reproductive values, and sensitivities of the growth rate to changes in vital rates. However, where the matrix model represents the size distributions, reproductive value, and sensitivities as step functions (constant within a stage class), the integral projection model yields smooth curves for each of these as a function of individual size. The authors describe a method for fitting the model to data, and they apply this method to data on an endangered plant species, northern monkshood (Aconitum noveboracense), with individuals classified by stem diameter. The matrix and integral models yield similar estimates of the asymptotic growth rate, but the reproductive values and sensitivities in the matrix model are sensitive to the choice of stage classes. The integral projection model avoids this problem and yields size-specific sensitivities that are not affected by stage duration. These general properties of the integral projection model will make it advantageous for other populations where there is no natural division of individuals into stage classes.
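
    The discretization at the heart of the integral projection model can be sketched compactly: build the kernel K(y, x) on a fine mesh by the midpoint rule and take the dominant eigenvalue of the resulting matrix as the asymptotic growth rate. The vital-rate functions below are invented placeholders, not the monkshood estimates.

        import numpy as np

        # Mesh over the size range (e.g., stem diameter), midpoint rule.
        L, U, m = 0.0, 10.0, 200
        h = (U - L) / m
        x = L + h * (np.arange(m) + 0.5)

        # Placeholder vital rates as smooth functions of size.
        def survival(x):
            return 1.0 / (1.0 + np.exp(-(x - 3.0)))

        def growth(y, x):   # probability density of growing from size x to size y
            return np.exp(-0.5 * ((y - (0.9 * x + 1.0)) / 0.8)**2) / (0.8 * np.sqrt(2 * np.pi))

        def fecundity(y, x):  # offspring of size y produced by a parent of size x
            seeds = 0.5 * np.maximum(x - 4.0, 0.0)
            recruit_size = np.exp(-0.5 * ((y - 1.0) / 0.5)**2) / (0.5 * np.sqrt(2 * np.pi))
            return seeds * recruit_size

        Y, X = np.meshgrid(x, x, indexing="ij")
        K = (survival(X) * growth(Y, X) + fecundity(Y, X)) * h   # kernel matrix

        eigvals, eigvecs = np.linalg.eig(K)
        lead = np.argmax(eigvals.real)
        print("asymptotic growth rate lambda =", eigvals.real[lead])
        stable_dist = np.abs(eigvecs[:, lead].real)
        stable_dist /= stable_dist.sum()   # stable size distribution, smooth in size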

  3. Applying CFD in the Analysis of Heavy-Oil Transportation in Curved Pipes Using Core-Flow Technique

    Directory of Open Access Journals (Sweden)

    S Conceição

    2017-06-01

    Full Text Available Multiphase flow of oil, gas and water occurs in the petroleum industry from the reservoir to the processing units. The occurrence of heavy oils in the world is increasing significantly and points to the need for greater investment in reservoir exploitation and, consequently, for the development of new technologies for the production and transport of this oil. It is therefore worthwhile to improve techniques that increase energy efficiency in the transport of this oil. The core-flow technique is one of the most advantageous methods of lifting and transporting oil. The core-flow technique does not alter the oil viscosity, but changes the flow pattern and thus reduces friction during heavy-oil transportation. This flow pattern is characterized by a thin water film that forms close to the inner wall of the pipe, acting as a lubricant for the oil flowing in the core of the pipe. In this sense, the objective of this paper is to study the isothermal flow of heavy oil in curved pipelines employing the core-flow technique. A three-dimensional, transient and isothermal mathematical model using the mixture and k-ε turbulence models to address the gas-water-heavy-oil three-phase flow in the pipe was applied for the analysis. Simulations with different flow patterns of the involved phases (oil-gas-water) have been carried out in order to optimize the transport of heavy oils. Results for the pressure and volumetric fraction distributions of the involved phases are presented and analyzed. It was verified that an oil core lubricated by a thin water layer flowing in the pipe considerably decreases the pressure drop.

  4. How high is the tramping track? Mathematising and applying in a calculus model-eliciting activity

    Science.gov (United States)

    Yoon, Caroline; Dreyfus, Tommy; Thomas, Michael O. J.

    2010-09-01

    Two complementary processes involved in mathematical modelling are mathematising a realistic situation and applying a mathematical technique to a given realistic situation. We present and analyse work from two undergraduate students and two secondary school teachers who engaged in both processes during a mathematical modelling task that required them to find a graphical representation of an anti-derivative of a function. When determining the value of the anti-derivative as a measure of height, they mathematised the situation to develop a mathematical model, and attempted to apply their knowledge of integration that they had previously learned in class. However, the participants favoured their more primitive mathematised knowledge over the formal knowledge they tried to apply. We use these results to argue for calculus instruction to include more modelling activities that promote mathematising rather than the application of knowledge.

  5. Sensitivity analysis techniques for models of human behavior.

    Energy Technology Data Exchange (ETDEWEB)

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods produce similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
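
    A sketch of one such global, interaction-aware method: a first-order Sobol index estimated with the standard two-matrix (Saltelli-style) sampling scheme on a toy nonlinear model. The model and sample size are arbitrary choices for illustration.

        import numpy as np

        def model(X):
            """Toy nonlinear model with an interaction between inputs 0 and 1."""
            return np.sin(X[:, 0]) + 0.5 * X[:, 1]**2 + X[:, 0] * X[:, 1] + 0.1 * X[:, 2]

        rng = np.random.default_rng(3)
        N, d = 100000, 3
        A = rng.uniform(-np.pi, np.pi, size=(N, d))
        B = rng.uniform(-np.pi, np.pi, size=(N, d))
        fA, fB = model(A), model(B)
        var = np.var(np.concatenate([fA, fB]))

        for i in range(d):
            ABi = A.copy()
            ABi[:, i] = B[:, i]          # replace column i of A with column i of B
            fABi = model(ABi)
            # Saltelli (2010) estimator of the first-order index S_i.
            Si = np.mean(fB * (fABi - fA)) / var
            print(f"S_{i} ~ {Si:.3f}")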

  6. Applied systems ecology: models, data, and statistical methods

    Energy Technology Data Exchange (ETDEWEB)

    Eberhardt, L L

    1976-01-01

    In this report, systems ecology is largely equated to mathematical or computer simulation modelling. The need for models in ecology stems from the necessity to have an integrative device for the diversity of ecological data, much of which is observational, rather than experimental, as well as from the present lack of a theoretical structure for ecology. Different objectives in applied studies require specialized methods. The best predictive devices may be regression equations, often non-linear in form, extracted from much more detailed models. A variety of statistical aspects of modelling, including sampling, are discussed. Several aspects of population dynamics and food-chain kinetics are described, and it is suggested that the two presently separated approaches should be combined into a single theoretical framework. It is concluded that future efforts in systems ecology should emphasize actual data and statistical methods, as well as modelling.

  7. Model Driven Mutation Applied to Adaptative Systems Testing

    CERN Document Server

    Bartel, Alexandre; Munoz, Freddy; Klein, Jacques; Mouelhi, Tejeddine; Traon, Yves Le

    2012-01-01

    Dynamically Adaptive Systems modify their behavior and structure in response to changes in their surrounding environment and according to an adaptation logic. Critical systems increasingly incorporate dynamic adaptation capabilities; examples include disaster relief and space exploration systems. In this paper, we focus on mutation testing of the adaptation logic. We propose a fault model for adaptation logics that classifies faults into environmental completeness and adaptation correctness. Since there are several adaptation logic languages relying on the same underlying concepts, the fault model is expressed independently of specific adaptation languages. Taking benefit from model-driven engineering technology, we express these common concepts in a metamodel and define the operational semantics of mutation operators at this level. Mutation is applied to model elements, and model transformations are used to propagate these changes to a given adaptation policy in the chosen formalism. Preliminary resul...

  8. Neutron scatter and diffraction techniques applied to nucleosome and chromatin structure.

    Science.gov (United States)

    Bradbury, E M; Baldwin, J P

    1986-12-01

    Neutron scatter and diffraction techniques have made substantial contributions to our understanding of the structure of the nucleosome, the structure of the 10-nm filament, the "10-nm to 30-nm" filament transition, and the structure of the "34-nm" supercoil or solenoid of nucleosomes. Neutron techniques are unique in their properties, which allow the spatial arrangements of histones and of DNA in nucleosomes and chromatin to be separated. They have equally powerful applications in structural studies of any complex two-component biological system. A major success for the application of neutron techniques was the first clear proof that DNA was located on the outside of the histone octamer in the core particle. A full analysis of the neutron-scatter data gave the parameters of Table 3 and the low-resolution structure of the core particle in solution shown in Fig. 6. Initial low-resolution X-ray diffraction studies of core particle crystals gave a model with a lower DNA pitch of 2.7 nm. Higher-resolution X-ray diffraction studies now give a structure with a DNA pitch of 3.0 nm and a hole of 0.8 nm along the axis of the DNA supercoil. The neutron-scatter solution structure and the X-ray crystal structure of the core particle are thus in full agreement within the resolution of the neutron-scatter techniques. The model for the chromatosome is largely based on the structural parameters of the DNA supercoil in the core particle, on nuclease digestion results showing protection of a 168-bp DNA length by histone H1 and H1 peptide, and on the conformational properties of H1. The path of the DNA outside the chromatosome is not known, and this information is crucial for our understanding of higher-order chromatin structure. The interactions of the flexible basic N- and C-terminal regions of H1 within chromatin, and how these interactions are modulated by H1 phosphorylation, are not known. The N- and C-terminal regions of H1 represent a new type of protein behavior, i.e., extensive

  9. Electrochemical microfluidic chip based on molecular imprinting technique applied for therapeutic drug monitoring.

    Science.gov (United States)

    Liu, Jiang; Zhang, Yu; Jiang, Min; Tian, Liping; Sun, Shiguo; Zhao, Na; Zhao, Feilang; Li, Yingchun

    2017-05-15

    In this work, a novel electrochemical detection platform was established by integrating the molecular imprinting technique with a microfluidic chip, and applied to the trace measurement of three therapeutic drugs. The chip substrate is an acrylic panel with designed grooves. In the detection cell of the chip, a Pt wire is used as the counter and reference electrode, and a Au-Ag alloy microwire (NPAMW) with a 3D nanoporous surface, modified with an electro-polymerized molecularly imprinted polymer (MIP) film, serves as the working electrode. Detailed characterization of the chip and the working electrode was performed, and their properties were explored by cyclic voltammetry and electrochemical impedance spectroscopy. Two methods, based respectively on electrochemical catalysis and on the MIP gate effect, were employed for detecting warfarin sodium using the prepared chip. The linear range of the electrochemical catalysis method was 5×10⁻⁶-4×10⁻⁴ M, which fails to meet clinical testing demands. By contrast, the linear range of the gate-effect method was 2×10⁻¹¹-4×10⁻⁹ M, with a remarkably low detection limit of 8×10⁻¹² M (S/N=3), which is sufficient for clinical assays. The system was then applied to 24-h monitoring of the drug concentration in plasma after administration of warfarin sodium to a rabbit, and the corresponding pharmacokinetic parameters were obtained. In addition, the microfluidic chip was successfully adopted to analyze cyclophosphamide and carbamazepine, demonstrating its versatility. It is expected that this novel electrochemical microfluidic chip can act as a promising format for point-of-care testing by monitoring different analytes sensitively and conveniently.

  10. UQ and V&V techniques applied to experiments and simulations of heated pipes pressurized to failure.

    Energy Technology Data Exchange (ETDEWEB)

    Romero, Vicente Jose; Dempsey, J. Franklin; Antoun, Bonnie R.

    2014-05-01

    This report demonstrates versatile and practical model validation and uncertainty quantification techniques applied to the accuracy assessment of a computational model of heated steel pipes pressurized to failure. The Real Space validation methodology segregates aleatory and epistemic uncertainties to form straightforward model validation metrics especially suited for assessing models to be used in the analysis of performance and safety margins. The methodology handles difficulties associated with representing and propagating interval and/or probabilistic uncertainties from multiple correlated and uncorrelated sources in the experiments and simulations including: material variability characterized by non-parametric random functions (discrete temperature dependent stress-strain curves); very limited (sparse) experimental data at the coupon testing level for material characterization and at the pipe-test validation level; boundary condition reconstruction uncertainties from spatially sparse sensor data; normalization of pipe experimental responses for measured input-condition differences among tests and for random and systematic uncertainties in measurement/processing/inference of experimental inputs and outputs; numerical solution uncertainty from model discretization and solver effects.

  12. An observational model for biomechanical assessment of sprint kayaking technique.

    Science.gov (United States)

    McDonnell, Lisa K; Hume, Patria A; Nolte, Volker

    2012-11-01

    Sprint kayaking stroke phase descriptions for biomechanical analysis of technique vary among the kayaking literature, with inconsistencies not conducive to the advancement of applied biomechanics services or research. We aimed to provide a consistent basis for the categorisation and analysis of sprint kayak technique by proposing a clear observational model. Electronic databases were searched using the key words kayak, sprint, technique, and biomechanics, with 20 sources reviewed. Nine phase-defining positions were identified within the kayak literature and were divided into three distinct types based on how the positions were defined: water-contact-defined positions, paddle-shaft-defined positions, and body-defined positions. Videos of elite paddlers from multiple camera views were reviewed to determine the visibility of the positions used to define phases. The water-contact-defined positions of catch, immersion, extraction, and release were visible from multiple camera views, and were therefore suitable for practical use by coaches and researchers. Using these positions, phases and sub-phases were created for a new observational model. We recommend that kayaking data be reported using single strokes and described using two phases: water and aerial. For more detailed analysis without disrupting the basic two-phase model, a four-sub-phase model consisting of entry, pull, exit, and aerial sub-phases should be used.

  13. Nonstandard Analysis Applied to Advanced Undergraduate Mathematics - Infinitesimal Modeling

    OpenAIRE

    Herrmann, Robert A.

    2003-01-01

    This is a Research and Instructional Development Project from the U.S. Naval Academy. In this monograph, the basic methods of nonstandard analysis for n-dimensional Euclidean spaces are presented. Specific rules are developed, and these methods and rules are applied to rigorous integral and differential modeling. The topics include Robinson infinitesimals, limited and infinite numbers; convergence theory, continuity, *-transfer, internal definition, hyperfinite summation, Riemann-Stieltjes int...

  14. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
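
    The essence of the additive-flux approach — vary an extra effective diffusivity until the predicted profile matches the measured one — can be sketched on a toy steady-state diffusion problem. The transport model, profiles and optimizer below are simple stand-ins for FACETS::Core and DAKOTA, not their actual interfaces.

        import numpy as np
        from scipy.optimize import minimize_scalar

        x = np.linspace(0.0, 1.0, 51)          # normalized radius
        n_exp = 1.0 - 0.8 * x**2               # mock "experimental" density profile
        S = np.ones_like(x)                    # constant particle source

        def predicted_profile(D_add, D_model=0.5):
            """Steady 1D slab diffusion d/dx(D dn/dx) = -S, dn/dx(0)=0, n(1) fixed.

            For constant D and S the solution is n(x) = n_edge + S(1-x^2)/(2D).
            """
            D = D_model + D_add
            return n_exp[-1] + S * (1.0 - x**2) / (2.0 * D)

        def mismatch(D_add):
            return np.sum((predicted_profile(D_add) - n_exp)**2)

        res = minimize_scalar(mismatch, bounds=(0.0, 10.0), method="bounded")
        print("optimal additional diffusivity:", res.x)   # ~0.125 for these choices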

  15. Combustion and flow modelling applied to the OMV VTE

    Science.gov (United States)

    Larosiliere, Louis M.; Jeng, San-Mou

    1990-01-01

    A predictive tool for hypergolic bipropellant spray combustion and flow evolution in the OMV VTE (orbital maneuvering vehicle variable thrust engine) is described. It encompasses a computational technique for the gas phase governing equations, a discrete particle method for liquid bipropellant sprays, and constitutive models for combustion chemistry, interphase exchanges, and unlike impinging liquid hypergolic stream interactions. Emphasis is placed on the phenomenological modelling of the hypergolic liquid bipropellant gasification processes. An application to the OMV VTE combustion chamber is given in order to show some of the capabilities and inadequacies of this tool.

  16. Formulation of Indomethacin Colon Targeted Delivery Systems Using Polysaccharides as Carriers by Applying Liquisolid Technique

    Directory of Open Access Journals (Sweden)

    Kadria A. Elkhodairy

    2014-01-01

    Full Text Available The present study aimed at the formulation of matrix tablets for a colon-specific drug delivery (CSDD) system of indomethacin (IDM) by applying the liquisolid (LS) technique. A CSDD system based on time-dependent polymethacrylates and enzyme-degradable polysaccharides was established. Eudragit RL 100 (E-RL 100) was employed as the time-dependent polymer, whereas bacterially degradable polysaccharides were presented as LS systems loaded with the drug. Indomethacin-loaded LS systems were prepared using different polysaccharides, namely guar gum (GG), pectin (PEC), and chitosan (CH), as carriers, separately or in mixtures at ratios of 1:3, 1:1, and 3:1. Liquisolid systems that displayed promising results concerning the drug release rate at both pH 1.2 and pH 6.8 were compressed into tablets after the addition of the calculated amount of E-RL 100 and lubrication with magnesium stearate and talc at a ratio of 1:9. It was found that E-RL 100 improved the flowability and compressibility of all LS formulations. The release data revealed that all formulations succeeded in sustaining drug release over a period of 24 hours. A stability study indicated that the PEC-based LS system, as well as its matrix tablets, was stable over the storage period (one year) and could provide a minimum shelf life of two years.

  17. Advanced examination techniques applied to the qualification of critical welds for the ITER correction coils

    CERN Document Server

    Sgobba, Stefano; Libeyre, Paul; Marcinek, Dawid Jaroslaw; Piguiet, Aline; Cécillon, Alexandre

    2015-01-01

    The ITER correction coils (CCs) consist of three sets of six coils located in between the toroidal (TF) and poloidal field (PF) magnets. The CCs rely on a Cable-in-Conduit Conductor (CICC), whose supercritical cooling at 4.5 K is provided by helium inlets and outlets. The assembly of the nozzles to the stainless steel conductor conduit includes fillet welds requiring full penetration through the thickness of the nozzle. Static and cyclic stresses have to be sustained by the inlet welds during operation. The entire volume of the helium inlet and outlet welds, which are subject to the most stringent quality levels for imperfections according to the standards in force, is virtually uninspectable with sufficient resolution by conventional or computed radiography or by ultrasonic testing. On the other hand, X-ray computed tomography (CT) was successfully applied to inspect the full weld volume of several dozen helium inlet qualification samples. The extensive use of CT techniques allowed significant progress in the ...

  18. Mathematical modelling applied to LiDAR data

    Directory of Open Access Journals (Sweden)

    Javier Estornell

    2013-06-01

    Full Text Available The aim of this article is to explain the application of several mathematical methods to LiDAR (Light Detection And Ranging) data in order to estimate vegetation parameters and to model the relief of a forest area in the town of Chiva (Valencia). To represent the surface that describes the topography of the area, morphological filters were first applied iteratively to select the LiDAR ground points. From these data, a Triangulated Irregular Network (TIN) structure was used to model the relief of the area. The canopy height model (CHM) was also calculated from the LiDAR data. This model allowed mapping bare soil, shrub and tree vegetation in the study area. In addition, biomass was estimated from measurements taken in the field in 39 circular plots of radius 0.5 m and from the 95th percentile of the LiDAR heights included in each plot. The results indicated a strong relationship between the two variables (measured biomass and 95th percentile), with a coefficient of determination (R²) of 0.73. These results reveal the importance of using mathematical modelling to obtain information on the vegetation and land relief from LiDAR data.
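
    The biomass relationship above is a simple univariate regression; a sketch of the fit and R² computation on synthetic stand-in data (the 39 field plots themselves are not reproduced):

        import numpy as np

        rng = np.random.default_rng(4)
        p95 = rng.uniform(2.0, 20.0, 39)                       # 95th percentile of LiDAR heights (m)
        biomass = 1.8 * p95 + rng.normal(scale=5.0, size=39)   # synthetic field biomass

        # Ordinary least-squares line: biomass = a * p95 + b.
        a, b = np.polyfit(p95, biomass, deg=1)
        pred = a * p95 + b
        ss_res = np.sum((biomass - pred)**2)
        ss_tot = np.sum((biomass - biomass.mean())**2)
        r2 = 1.0 - ss_res / ss_tot
        print(f"biomass ~ {a:.2f} * P95 + {b:.2f},  R^2 = {r2:.2f}")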

  19. Joint regression analysis and AMMI model applied to oat improvement

    Science.gov (United States)

    Oliveira, A.; Oliveira, T. A.; Mejza, S.

    2012-09-01

    In our work we present an application of some biometrical methods useful in genotype stability evaluation, namely the AMMI model, Joint Regression Analysis (JRA) and multiple comparison tests. A genotype stability analysis of oat (Avena sativa L.) grain yield was carried out using data from the Portuguese Plant Breeding Board on a sample of 22 different genotypes grown during the years 2002, 2003 and 2004 in six locations. In Ferreira et al. (2006) the authors state the relevance of regression models and of the Additive Main Effects and Multiplicative Interactions (AMMI) model for studying and estimating phenotypic stability effects. As computational techniques we use the Zigzag algorithm to estimate the regression coefficients and the agricolae package available in the R software for the AMMI model analysis.
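
    The core of the AMMI decomposition — an additive two-way model for main effects plus an SVD of the interaction residuals — can be sketched directly. The yield table is synthetic; the study's data and the agricolae implementation are not reproduced.

        import numpy as np

        rng = np.random.default_rng(5)
        G, E = 22, 6                                  # genotypes x environments
        Y = (50 + rng.normal(0, 3, (G, 1)) + rng.normal(0, 2, (1, E))
                + rng.normal(0, 1.5, (G, E)))         # synthetic yield table

        # Additive part: grand mean, genotype and environment main effects.
        mu = Y.mean()
        g_eff = Y.mean(axis=1, keepdims=True) - mu
        e_eff = Y.mean(axis=0, keepdims=True) - mu
        resid = Y - mu - g_eff - e_eff                # genotype-by-environment interaction

        # Multiplicative part: SVD of the interaction matrix (AMMI axes).
        U, s, Vt = np.linalg.svd(resid, full_matrices=False)
        explained = s**2 / np.sum(s**2)
        print("variance explained by IPCA1, IPCA2:", explained[:2])

        # Genotype and environment scores on the first interaction axis (biplot coordinates).
        ipca1_gen = U[:, 0] * np.sqrt(s[0])
        ipca1_env = Vt[0, :] * np.sqrt(s[0])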

  20. Applying Tiab’s direct synthesis technique to dilatant non-Newtonian/Newtonian fluids

    Directory of Open Access Journals (Sweden)

    Javier Andrés Martínez

    2011-08-01

    Full Text Available Non-Newtonian fluids, such as polymer solutions, have been used by the oil industry for many years as fracturing agents and drilling muds. These solutions, which normally include thickened water and jelled fluids, are injected into the formation to enhance oil recovery by improving sweep efficiency. It is worth noting that some heavy oils behave as non-Newtonian fluids. Non-Newtonian fluids do not exhibit direct proportionality between applied shear stress and shear rate, and their viscosity varies with shear rate depending on whether the fluid is pseudoplastic or dilatant. Viscosity decreases as shear rate increases for the former, whilst the reverse takes place for dilatants. Mathematical models for conventional fluids thus fail when applied to non-Newtonian fluids. The pressure derivative curve is introduced in this descriptive work for a dilatant fluid and its pattern is observed. Tiab's direct synthesis (TDS) methodology was used as a tool for interpreting pressure-transient data to estimate effective permeability, skin factors and the non-Newtonian bank radius. The methodology was successfully verified by application to synthetic examples. Also, compared to pseudoplastic behaviour, it was found that the radial flow regime in the Newtonian zone of dilatant fluids takes longer to form, depending on both the flow behavior index and the consistency factor.
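
    The pseudoplastic/dilatant distinction follows from the power-law (Ostwald-de Waele) model, mu_app = K * gamma_dot^(n-1): apparent viscosity falls with shear rate for n < 1 and rises for n > 1. A short illustration with an arbitrary consistency factor:

        # Power-law (Ostwald-de Waele) apparent viscosity: mu_app = K * gamma_dot**(n - 1).
        K = 0.5  # consistency factor (arbitrary units)
        for n, label in [(0.6, "pseudoplastic"), (1.4, "dilatant")]:
            mus = [K * gd**(n - 1) for gd in (1.0, 10.0, 100.0)]
            print(f"{label:13s} n={n}: mu_app at shear rates 1, 10, 100 -> {mus}")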

  1. GPU peer-to-peer techniques applied to a cluster interconnect

    CERN Document Server

    Ammendola, Roberto; Biagioni, Andrea; Bisson, Mauro; Fatica, Massimiliano; Frezza, Ottorino; Cicero, Francesca Lo; Lonardo, Alessandro; Mastrostefano, Enrico; Paolucci, Pier Stanislao; Rossetti, Davide; Simula, Francesco; Tosoratto, Laura; Vicini, Piero

    2013-01-01

    Modern GPUs support special protocols to exchange data directly across the PCI Express bus. While these protocols could be used to reduce GPU data transmission times, basically by avoiding staging to host memory, they require specific hardware features which are not available on current-generation network adapters. In this paper we describe the architectural modifications required to implement peer-to-peer access to NVIDIA Fermi- and Kepler-class GPUs on an FPGA-based cluster interconnect. We also discuss the current software implementation, which integrates this feature by minimally extending the RDMA programming model, as well as some issues raised while employing it in a higher-level API like MPI. Finally, the current limits of the technique are studied by analyzing the performance improvements on low-level benchmarks and on two GPU-accelerated applications, showing when and how they seem to benefit from the GPU peer-to-peer method.

  2. Phase-ratio technique as applied to the assessment of lunar surface roughness

    Science.gov (United States)

    Kaydash, Vadym; Videen, Gorden; Shkuratov, Yuriy

    Regoliths of atmosphereless celestial bodies demonstrate prominent light backscattering that is common for particulate surfaces. This occurs over a wide range of phase angles and can be seen in the phase function [1]. The slope of the function may characterize the complexity of planetary surface structure. Imagery of such a parameter suggests that information can be obtained about the surface, like variations of unresolved surface roughness and microtopography [2]. Phase-ratio imagery allows one to characterize the phase function slope. This imagery requires the ratio of two co-registered images acquired at different phase angles. One important advantage of the procedure is that the inherent albedo variations of the surface are suppressed, and, therefore, the resulting image is sensitive to the surface structure variation [2,3]. The phase-ratio image characterizes surface roughness variation at spatial scales on the order of the incident wavelengths to that of the image resolution. Applying the phase-ratio technique to ground-based telescope data has allowed us to find new lunar surface formations in the southern part of Oceanus Procellarum. These are suggested to be weak swirls [4]. We also combined the phase-ratio technique with the space-derived photometry data acquired from the NASA Lunar Reconnaissance Orbiter with high spatial resolution. Thus we exploited the method to analyze the sites of Apollo landings and Soviet sample-return missions. Phase-ratio imagery has revealed anomalies of the phase-curve slope indicating a smoothing of the surface microstructure at the sites caused by dust uplifted by the engine jets of the descent and ascent modules [5,6]. Analysis of phase-ratios helps to understand how the regolith properties have been affected by robotic and human activity on the Moon [7,8]. We have demonstrated the use of the method to search for fresh natural disturbances of surface structure, e.g., to detect areas of fresh slumps, accumulated material on

  3. Online traffic flow model applying dynamic flow-density relation

    CERN Document Server

    Kim, Y

    2002-01-01

    This dissertation describes a new approach to online traffic flow modelling based on the hydrodynamic traffic flow model and an online process that adapts the flow-density relation dynamically. The new modelling approach was tested on real traffic situations in various homogeneous motorway sections and in a motorway section with ramps, and gave encouraging simulation results. This work is composed of two parts: first, the analysis of traffic flow characteristics, and second, the development of a new online traffic flow model applying these characteristics. For homogeneous motorway sections, traffic flow is classified into six different traffic states with different characteristics. Delimitation criteria were developed to separate these states. Hysteresis phenomena were analysed during the transitions between these traffic states. The traffic states and the transitions are represented on a state diagram with the flow axis and the density axis. For motorway sections with ramps, the complicated traffic fl...
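
    A minimal sketch of the hydrodynamic (LWR-type) ingredient: a first-order Godunov scheme for the scalar conservation law rho_t + q(rho)_x = 0 with a Greenshields flow-density relation. The parameters are illustrative; the dissertation's dynamically adapted relation is not reproduced.

        import numpy as np

        v_free, rho_max = 30.0, 0.12        # m/s, veh/m -- illustrative Greenshields values
        q = lambda rho: v_free * rho * (1.0 - rho / rho_max)
        rho_crit = rho_max / 2.0            # density at maximum flow

        def godunov_flux(rl, rr):
            """Exact Riemann flux for the concave Greenshields fundamental diagram."""
            if rl <= rr:
                return min(q(rl), q(rr))
            return q(rho_crit) if rl > rho_crit > rr else max(q(rl), q(rr))

        # Road discretization and an initial jam in the middle of the section.
        nx, dx, dt = 200, 50.0, 1.0         # CFL: v_free*dt/dx = 0.6 < 1
        rho = np.where((np.arange(nx) > 80) & (np.arange(nx) < 120), 0.10, 0.02)

        for _ in range(300):
            F = np.array([godunov_flux(rho[i], rho[i + 1]) for i in range(nx - 1)])
            rho[1:-1] -= dt / dx * (F[1:] - F[:-1])   # boundary cells held fixed
        print("density range after 300 s:", rho.min(), rho.max())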

  4. Inverting travel times with a triplication. [spline fitting technique applied to lunar seismic data reduction

    Science.gov (United States)

    Jarosch, H. S.

    1982-01-01

    A method based on the use of constrained spline fits is used to overcome the difficulties arising when body-wave data in the form of T-delta are reduced to the tau-p form in the presence of cusps. In comparison with unconstrained spline fits, the method proposed here tends to produce much smoother models which lie approximately in the middle of the bounds produced by the extremal method. The method is noniterative and, therefore, computationally efficient. The method is applied to the lunar seismic data, where at least one triplication is presumed to occur in the P-wave travel-time curve. It is shown, however, that because of an insufficient number of data points for events close to the antipode of the center of the lunar network, the present analysis is not accurate enough to resolve the problem of a possible lunar core.

  5. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B. [Pennsylvania State Univ., University Park, PA (United States); Fagan, J.R. Jr. [Allison Engine Company, Indianapolis, IN (United States)

    1995-10-01

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbo-machinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. The program tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in a high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive the momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution-periodic and aperiodic components arising from various blade rows, and non-deterministic (which includes random) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery, and will drastically reduce the time required for the design and development cycle of turbomachinery.

  6. Apply a hydrological model to estimate local temperature trends

    Science.gov (United States)

    Igarashi, Masao; Shinozawa, Tatsuya

    2014-03-01

    Continuous time series {f(x)}, such as the depth of water, are written f(x) = T(x)+P(x)+S(x)+C(x) in hydrological science, where T(x), P(x), S(x) and C(x) are called the trend, periodic, stochastic and catastrophic components, respectively. We simplify this model and apply it to local temperature data such as those given by E. Halley (1693) and records for the UK (1853-2010), Germany (1880-2010) and Japan (1876-2010). We also apply the model to CO2 data. The model coefficients are evaluated by symbolic computation on a standard personal computer. The accuracy of the obtained nonlinear curve is evaluated by the arithmetic mean of the relative errors between the data and the estimations. E. Halley estimated the temperature of Gresham College from 11/1692 to 11/1693. The simplified model shows that the temperature at that time was rather cold compared with recent London temperatures. The UK and German data sets show that the maximum and minimum temperatures increased slowly from the 1890s to the 1940s, increased rapidly from the 1940s to the 1980s and have been decreasing since the 1980s, with the exception of a few local stations. The trend for Japan is similar to these results.
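
    A sketch of fitting the trend and periodic components by least squares on a synthetic series; the paper's symbolic-computation approach and the actual station data are not reproduced.

        import numpy as np

        rng = np.random.default_rng(6)
        t = np.arange(0.0, 100.0, 1.0 / 12.0)                    # 100 years, monthly
        series = (9.0 + 0.01 * t + 4.0 * np.sin(2 * np.pi * t)
                  + rng.normal(scale=1.0, size=t.size))          # T(x) + P(x) + S(x)

        # Design matrix for f(x) ~ a + b*x + c*sin(2*pi*x) + d*cos(2*pi*x).
        X = np.column_stack([np.ones_like(t), t,
                             np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
        coef, *_ = np.linalg.lstsq(X, series, rcond=None)
        print("trend slope (deg/yr):", coef[1])                  # close to 0.01

        # Arithmetic mean of relative errors between data and estimation.
        est = X @ coef
        print("mean relative error:", np.mean(np.abs((series - est) / series)))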

  7. Investigation about the efficiency of the bioaugmentation technique when applied to diesel oil contaminated soils

    Directory of Open Access Journals (Sweden)

    Adriano Pinto Mariano

    2009-10-01

    Full Text Available This work investigated the efficiency of the bioaugmentation technique when applied to diesel-oil-contaminated soils collected at three service stations. Batch biodegradation experiments were carried out in Bartha biometer flasks (250 mL) used to measure the microbial CO2 production. Biodegradation efficiency was also measured by quantifying the concentration of hydrocarbons. In addition to the biodegradation experiments, the capability of the studied cultures and of the native microorganisms to biodegrade diesel oil purchased from a local service station was verified using a technique based on the redox indicator 2,6-dichlorophenol indophenol (DCPIP). Results obtained with this test showed that the inocula used in the biodegradation experiments were able to degrade the diesel oil, and the tests carried out with the native microorganisms indicated that these soils had a microbiota adapted to degrade the hydrocarbons. In general, no gain was obtained with the addition of microorganisms, and in some cases negative effects were observed in the biodegradation experiments.

  8. Simple predictive electron transport models applied to sawtoothing plasmas

    Science.gov (United States)

    Kim, D.; Merle, A.; Sauter, O.; Goodman, T. P.

    2016-05-01

    In this work, we introduce two simple transport models to evaluate the time evolution of electron temperature and density profiles during sawtooth cycles (i.e. over a sawtooth period time-scale). Since the aim of these simulations is to estimate reliable profiles within a short calculation time, two simplified ad hoc models have been developed. The goal of these models is to rely on a few easy-to-check free parameters, such as the confinement time scaling factor and the profiles' averaged scale-lengths. Due to the simplicity and short calculation time of the models, it is expected that they can also be applied to real-time transport simulations. We show that they work well for Ohmic and EC heated L- and H-mode plasmas. The differences between the models are discussed and we show that their predictive capabilities are similar. Thus only one model is used to reproduce with simulations the results of sawtooth control experiments on the TCV tokamak. For sawtooth pacing, the calculated time delays between the EC power switch-off and the sawtooth crash agree well with the experimental results. The map of the possible locking range is also well reproduced by the simulation.

  9. A Comparison of Evolutionary Computation Techniques for IIR Model Identification

    Directory of Open Access Journals (Sweden)

    Erik Cuevas

    2014-01-01

    Full Text Available System identification is a complex optimization problem which has recently attracted attention in the fields of science and engineering. In particular, the use of infinite impulse response (IIR) models for identification is preferred over their equivalent FIR (finite impulse response) models since the former yield more accurate models of physical plants for real world applications. However, IIR structures tend to produce multimodal error surfaces whose cost functions are significantly difficult to minimize. Evolutionary computation techniques (ECT) are used to estimate the solution to complex optimization problems. They are often designed to meet the requirements of particular problems because no single optimization algorithm can solve all problems competitively. Therefore, when new algorithms are proposed, their relative efficacies must be appropriately evaluated. Several comparisons among ECT have been reported in the literature. Nevertheless, they suffer from one limitation: their conclusions are based on the performance of popular evolutionary approaches over a set of synthetic functions with exact solutions and well-known behaviors, without considering the application context or including recent developments. This study presents a comparison of various evolutionary computation optimization techniques applied to IIR model identification. Results over several models are presented and statistically validated.
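
    As a toy version of this identification task, the sketch below uses SciPy's differential evolution (one common evolutionary technique, not necessarily among those compared in the paper) to recover the coefficients of a known second-order IIR plant from input-output data. The plant, excitation and parameter bounds are all invented for illustration.

        import numpy as np
        from scipy.optimize import differential_evolution
        from scipy.signal import lfilter

        rng = np.random.default_rng(2)

        # Unknown "plant": a second-order IIR system with transfer function b0 / (1 + a1 z^-1 + a2 z^-2).
        b_true, a_true = [0.3], [1.0, -1.2, 0.5]
        x = rng.standard_normal(2000)            # excitation signal
        y = lfilter(b_true, a_true, x)           # plant output (noise-free here)

        def mse(theta):
            """Mean squared output error of a candidate IIR model."""
            b0, a1, a2 = theta
            y_hat = lfilter([b0], [1.0, a1, a2], x)
            err = np.mean((y - y_hat) ** 2)
            return err if np.isfinite(err) else 1e12   # guard against unstable candidates

        result = differential_evolution(mse, bounds=[(-2, 2), (-2, 2), (-1, 1)], seed=3, tol=1e-10)
        print("estimated [b0, a1, a2]:", result.x)      # should approach [0.3, -1.2, 0.5]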

  10. Applying a Dynamic Resource Supply Model in a Smart Grid

    Directory of Open Access Journals (Sweden)

    Kaiyu Wan

    2014-09-01

    Full Text Available Dynamic resource supply is a complex issue to resolve in a cyber-physical system (CPS). In our previous work, a resource model called the dynamic resource supply model (DRSM) was proposed to handle resource specification, management and allocation in CPS. In this paper, we integrate the DRSM with a service-oriented architecture and apply it to a smart grid (SG), one of the most complex CPS examples. We give the detailed design of the SG for electricity charging requests and electricity allocation between plug-in hybrid electric vehicles (PHEV) and the DRSM through the Android system. In the design, we explain a mechanism for electricity consumption with data collection and re-allocation through a ZigBee network. With this design, we verify the correctness of the resource model for the expected electricity allocation.

  11. Dynamic Decision Making for Graphical Models Applied to Oil Exploration

    CERN Document Server

    Martinelli, Gabriele; Hauge, Ragnar

    2012-01-01

    We present a framework for sequential decision making in problems described by graphical models. The setting is given by dependent discrete random variables with associated costs or revenues. In our examples, the dependent variables are the potential outcomes (oil, gas or dry) when drilling a petroleum well. The goal is to develop an optimal selection strategy that incorporates a chosen utility function within an approximated dynamic programming scheme. We propose and compare different approximations, from simple heuristics to more complex iterative schemes, and we discuss their computational properties. We apply our strategies to oil exploration over multiple prospects modeled by a directed acyclic graph, and to a reservoir drilling decision problem modeled by a Markov random field. The results show that the suggested strategies clearly improve on the simpler intuitive constructions, and this is useful when selecting exploration policies.

  12. Curve Fitting And Interpolation Model Applied In Nonel Dosage Detection

    Directory of Open Access Journals (Sweden)

    Jiuling Li

    2013-06-01

    Full Text Available Curve fitting and interpolation models are applied for the first time to Nonel dosage detection in this paper, forecasting the gray values of the continuous explosive in the Nonel tube. Traditional infrared equipment establishes a relationship between explosive dosage and light intensity, but its forecast accuracy is very low. Therefore, gray prediction models based on curve fitting and on interpolation are framed separately, and the deviations of the different models are compared. Combining this with the features of the sample library, the cubic polynomial fitting curve, which has higher precision, is used to predict gray values, and the gray values for 5 mg-28 mg Nonel are calculated with MATLAB. Through the predicted values, dosage detection operations are simplified and the rate of missed Nonel defects is reduced. Finally, the quality of the Nonel is improved.
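
    The cubic fit at the core of this approach is straightforward to reproduce. The sketch below fits a third-degree polynomial to hypothetical dosage/gray calibration pairs (the paper's actual sample library is not public, so the numbers are invented) and uses it to predict gray values across the 5-28 mg working range.

        import numpy as np

        # Illustrative (hypothetical) calibration pairs: dosage in mg vs. mean image gray value.
        dosage = np.array([5, 8, 11, 14, 17, 20, 23, 26, 28], dtype=float)
        gray   = np.array([182, 171, 162, 150, 141, 133, 124, 117, 112], dtype=float)

        # Cubic polynomial fit, as selected in the paper for its higher precision.
        coeffs = np.polyfit(dosage, gray, deg=3)
        predict = np.poly1d(coeffs)

        # Predict gray values in the 5-28 mg range and check the fit deviation.
        for d in (5.0, 12.5, 28.0):
            print(f"gray({d} mg) ~ {predict(d):.1f}")
        print("max abs deviation at calibration points:",
              np.max(np.abs(predict(dosage) - gray)))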

  13. Three-Dimensional Gravity Model Applied to Underwater Navigation

    Institute of Scientific and Technical Information of China (English)

    YAN Lei; FENG Hao; DENG Zhongliang; GAO Zhengbing

    2004-01-01

    At present, a new integrated navigation approach, which uses the location function of a reference gravity anomaly map to control the errors of the inertial navigation system (INS), has been developed in marine navigation. It is named the gravity-aided INS. Both the INS and the real-time computation of gravity anomalies need a 3-D marine normal gravity model. Conventionally, a reduction method applied in geophysical surveys is directly introduced into observed-data processing. This reduction does not separate the anomaly from normal gravity in the observed data, so errors cannot be avoided. The 3-D marine normal gravity model was derived from the J2 gravity model, and is suitable for regions whose depth is less than 1000 m.
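
    To give a feel for a J2-based normal field, the sketch below evaluates the radial gravity derived from the J2 zonal-harmonic potential. It neglects the centrifugal term and higher-order harmonics, so it is a simplification for illustration, not a reproduction of the paper's 3-D model.

        import math

        GM = 3.986004418e14   # Earth's gravitational constant, m^3/s^2
        A  = 6378137.0        # reference (equatorial) radius, m
        J2 = 1.08263e-3       # second zonal harmonic coefficient

        def radial_gravity(r, lat_rad):
            """Radial gravity (m/s^2) from the J2 potential, neglecting the
            centrifugal term and higher-order harmonics (simplified sketch)."""
            s = math.sin(lat_rad)
            p2 = 0.5 * (3.0 * s * s - 1.0)          # Legendre polynomial P2(sin(lat))
            return GM / r**2 * (1.0 - 3.0 * J2 * (A / r)**2 * p2)

        # Example: 500 m below the sea surface at 45 degrees latitude.
        r = A - 500.0
        print(f"g ~ {radial_gravity(r, math.radians(45.0)):.5f} m/s^2")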

  14. Data Selection for Fast Projection Techniques Applied to Adaptive Nulling: A Comparative Study of Performance

    Science.gov (United States)

    1991-12-01

    ... from the point of view of interferer nulling, the latter being slower but giving better nulling. Indeed, these algorithms give a ... these techniques with that of the "sample matrix inversion" (SMI) technique for three different scenarios; these three demonstrate the effects of the number of ... eigenvector analysis, such as the MUSIC technique [2], are effective for both interference suppression and spectral estimation. These techniques yield

  15. Study for applying microwave power saturation technique on fingernail/EPR dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Park, Byeong Ryong; Choi, Hoon; Nam, Hyun Ill; Lee, Byung Ill [Radiation Health Research Institute, Seoul (Korea, Republic of)

    2012-10-15

    There is growing recognition worldwide of the need to develop effective dosimetry methods to assess unexpected exposure to radiation in the event of a large-scale event. One of the physically based dosimetry methods, electron paramagnetic resonance (EPR) spectroscopy, has been applied to perform retrospective radiation dosimetry using extracted samples of tooth enamel and nail (fingernail and toenail), following radiation accidents and exposures resulting from weapon use, testing, and production. Human fingernails are composed largely of keratin, which consists of α-helical peptide chains that are twisted into a left-handed coil and strengthened by disulphide cross-links. Ionizing radiation generates free radicals in the keratin matrix, and these radicals are stable over a relatively long period (days to weeks). Most importantly, the number of radicals is proportional to the magnitude of the dose over a wide dose range (0-30 Gy). Also, dose can be estimated at four different locations on the human body, providing information on the homogeneity of the radiation exposure, and the results from EPR nail dosimetry are immediately available. However, a relatively large background signal (BKS), arising from the mechanically induced signal (MIS) created by the cutting of the fingernail, normally overlaps with the radiation-induced signal (RIS), making it difficult to estimate the dose from an accidental exposure accurately. Therefore, estimation methods using dose-response curves could not ensure reliability below 5 Gy. In this study, in order to overcome these disadvantages, we measured the responses of the RIS and BKS (MIS) as the microwave power level was changed, and investigated the applicability of the power saturation technique at low doses.

  16. Applying Mechanistic Dam Breach Models to Historic Levee Breaches

    Directory of Open Access Journals (Sweden)

    Risher Paul

    2016-01-01

    Full Text Available Hurricane Katrina elevated levee risk in the US national consciousness, motivating agencies to assess and improve their levee risk assessment methodology. Accurate computation of the flood flow magnitude and timing associated with a levee breach remains one of the most difficult and uncertain components of levee risk analysis. Contemporary methods are largely empirical and approximate, introducing substantial uncertainty into the damage and life loss models. Levee breach progressions are often extrapolated to the final width and breach formation time based on limited experience with past breaches or using regression equations developed from a limited database of dam failures. Physically based embankment erosion models could improve levee breach modeling. However, while several mechanistic embankment breach models are available, they were developed for dams. Several aspects of the levee breach problem are distinct, departing from dam breach assumptions. This study applies three embankment models developed for dam breach analysis (DL Breach, HR BREACH, and WinDAM C) to historic levee breaches with observed (or inferred) breach rates, assessing the limitations and applicability of each model to the levee breach problem.

  17. Recent developments of surface complexation models applied to environmental aquatic chemistry

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Based on numerous recent references, the current developments in surface complexation, surface precipitation and the corresponding models (SCMs and SPMs) are reviewed. The contents involve a comparison of the surface charge composition and layer structure of the solid-solution interface for the classical 1-pK and 2-pK models. In addition, the fundamental concepts and relations of the newer models, i.e., the multi-site complexation (MUSIC) and charge-distribution (CD) MUSIC models, are described as well. To avoid misuse or abuse, it must be emphasized that the applicability and limitations of each model should be considered carefully when selecting the model(s) concerned. In addition, some new powerful techniques for surface characterization and analysis applied to model establishment and modification are also briefly introduced.

  18. Model-free kinetics applied to sugarcane bagasse combustion

    Energy Technology Data Exchange (ETDEWEB)

    Ramajo-Escalera, B.; Espina, A.; Garcia, J.R. [Department of Organic and Inorganic Chemistry, University of Oviedo, 33006 Oviedo (Spain); Sosa-Arnao, J.H. [Mechanical Engineering Faculty, State University of Campinas (UNICAMP), P.O. Box 6122, 13083-970 Campinas, SP (Brazil); Nebra, S.A. [Interdisciplinary Center of Energy Planning, State University of Campinas (UNICAMP), R. Shigeo Mori 2013, 13083-770 Campinas, SP (Brazil)

    2006-09-15

    Vyazovkin's model-free kinetic algorithms were applied to determine the conversion, isoconversion and apparent activation energy for both the dehydration and the combustion of sugarcane bagasse. Three different steps were detected, with apparent activation energies of 76.1±1.7, 333.3±15.0 and 220.1±4.0 kJ/mol in the conversion ranges of 2-5%, 15-60% and 70-90%, respectively. The first step is associated with the endothermic process of drying and the release of water. The others correspond to the combustion (and carbonization) of organic matter (mainly cellulose, hemicellulose and lignin) and to the combustion of the products of pyrolysis. (author)
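
    Model-free (isoconversional) kinetics can be illustrated with the Kissinger-Akahira-Sunose relation, one member of the family of methods to which Vyazovkin's algorithms belong (the paper's exact algorithm is not reproduced here). At each fixed conversion, the slope of ln(beta/T^2) against 1/T across several heating rates yields the apparent activation energy; the temperatures below are invented for the demonstration.

        import numpy as np

        R = 8.314  # gas constant, J/(mol K)

        # Illustrative temperatures (K) reached at fixed conversion alpha for three
        # heating rates beta (K/min); these numbers are made up for demonstration only.
        beta = np.array([5.0, 10.0, 20.0])
        T_alpha = {0.10: np.array([540.0, 552.0, 565.0]),
                   0.50: np.array([598.0, 610.0, 623.0])}

        # Kissinger-Akahira-Sunose relation at fixed conversion:
        #   ln(beta / T^2) = const - Ea / (R * T)
        # so the slope of ln(beta/T^2) vs 1/T gives -Ea/R.
        for alpha, T in T_alpha.items():
            slope, _ = np.polyfit(1.0 / T, np.log(beta / T**2), 1)
            print(f"alpha = {alpha:.2f}:  Ea ~ {-slope * R / 1000:.0f} kJ/mol")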

  19. Relative Binding Free Energy Calculations Applied to Protein Homology Models.

    Science.gov (United States)

    Cappel, Daniel; Hall, Michelle Lynn; Lenselink, Eelke B; Beuming, Thijs; Qi, Jun; Bradner, James; Sherman, Woody

    2016-12-27

    A significant challenge and potential high-value application of computer-aided drug design is the accurate prediction of protein-ligand binding affinities. Free energy perturbation (FEP) using molecular dynamics (MD) sampling is among the most suitable approaches to achieve accurate binding free energy predictions, due to the rigorous statistical framework of the methodology, correct representation of the energetics, and thorough treatment of the important degrees of freedom in the system (including explicit waters). Recent advances in sampling methods and force fields coupled with vast increases in computational resources have made FEP a viable technology to drive hit-to-lead and lead optimization, allowing for more efficient cycles of medicinal chemistry and the possibility to explore much larger chemical spaces. However, previous FEP applications have focused on systems with high-resolution crystal structures of the target as starting points, something that is not always available in drug discovery projects. As such, the ability to apply FEP to homology models would greatly expand its domain of applicability in drug discovery. In this work we apply a particular implementation of FEP, called FEP+, to congeneric ligand series binding to four diverse targets: a kinase (Tyk2), an epigenetic bromodomain (BRD4), a transmembrane GPCR (A2A), and a protein-protein interaction interface (BCL-2 family protein MCL-1). We apply FEP+ using both crystal structures and homology models as starting points and find that the performance using homology models is generally on a par with the results when using crystal structures. The robustness of the calculations to structural variations in the input models can likely be attributed to the conformational sampling in the molecular dynamics simulations, which allows the modeled receptor to adapt to the "real" conformation for each ligand in the series. This work exemplifies the advantages of using all-atom simulation methods with
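
    The core estimator behind free energy perturbation can be stated compactly. The sketch below implements the Zwanzig exponential-averaging formula (the general principle underlying FEP, not the FEP+ product itself) and checks it against the analytic result for Gaussian-distributed energy differences.

        import numpy as np

        rng = np.random.default_rng(4)
        kT = 0.596  # kcal/mol at ~300 K

        # Zwanzig free energy perturbation: dF = -kT * ln < exp(-dU/kT) >_0 ,
        # averaging over configurations sampled in the reference state.
        def fep_estimate(dU, kT):
            # log-sum-exp form for numerical stability of the exponential average
            m = np.max(-dU / kT)
            return -kT * (m + np.log(np.mean(np.exp(-dU / kT - m))))

        # Toy check: for Gaussian dU ~ N(mu, sigma^2) the exact answer is mu - sigma^2/(2 kT).
        mu, sigma = 1.0, 0.5
        dU = rng.normal(mu, sigma, 200_000)
        print("FEP estimate :", fep_estimate(dU, kT))
        print("exact (Gauss):", mu - sigma**2 / (2 * kT))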

  20. 3D-QSPR Method of Computational Technique Applied on Red Reactive Dyes by Using CoMFA Strategy

    Science.gov (United States)

    Mahmood, Uzma; Rashid, Sitara; Ali, S. Ishrat; Parveen, Rasheeda; Zaheer-ul-Haq; Ambreen, Nida; Khan, Khalid Mohammed; Perveen, Shahnaz; Voelter, Wolfgang

    2011-01-01

    Cellulose fiber is a tremendous natural resource that has broad application in various productions including the textile industry. The dyes, which are commonly used for cellulose printing, are “reactive dyes” because of their high wet fastness and brilliant colors. The interaction of various dyes with the cellulose fiber depends upon the physiochemical properties that are governed by specific features of the dye molecule. The binding pattern of the reactive dye with cellulose fiber is called the ligand-receptor concept. In the current study, the three dimensional quantitative structure property relationship (3D-QSPR) technique was applied to understand the red reactive dyes interactions with the cellulose by the Comparative Molecular Field Analysis (CoMFA) method. This method was successfully utilized to predict a reliable model. The predicted model gives satisfactory statistical results and in the light of these, it was further analyzed. Additionally, the graphical outcomes (contour maps) help us to understand the modification pattern and to correlate the structural changes with respect to the absorptivity. Furthermore, the final selected model has potential to assist in understanding the characteristics of the external test set. The study could be helpful to design new reactive dyes with better affinity and selectivity for the cellulose fiber. PMID:22272108

  1. 3D-QSPR Method of Computational Technique Applied on Red Reactive Dyes by Using CoMFA Strategy

    Directory of Open Access Journals (Sweden)

    Shahnaz Perveen

    2011-12-01

    Full Text Available Cellulose fiber is a tremendous natural resource that has broad application in various productions including the textile industry. The dyes, which are commonly used for cellulose printing, are “reactive dyes” because of their high wet fastness and brilliant colors. The interaction of various dyes with the cellulose fiber depends upon the physiochemical properties that are governed by specific features of the dye molecule. The binding pattern of the reactive dye with cellulose fiber is called the ligand-receptor concept. In the current study, the three dimensional quantitative structure property relationship (3D-QSPR) technique was applied to understand the red reactive dyes interactions with the cellulose by the Comparative Molecular Field Analysis (CoMFA) method. This method was successfully utilized to predict a reliable model. The predicted model gives satisfactory statistical results and in the light of these, it was further analyzed. Additionally, the graphical outcomes (contour maps) help us to understand the modification pattern and to correlate the structural changes with respect to the absorptivity. Furthermore, the final selected model has potential to assist in understanding the characteristics of the external test set. The study could be helpful to design new reactive dyes with better affinity and selectivity for the cellulose fiber.

  2. The bi-potential method applied to the modeling of dynamic problems with friction

    Science.gov (United States)

    Feng, Z.-Q.; Joli, P.; Cros, J.-M.; Magnain, B.

    2005-10-01

    The bi-potential method has been successfully applied to the modeling of frictional contact problems in static cases. This paper presents an extension of this method for the dynamic analysis of impact problems with deformable bodies. A first-order algorithm is applied to the numerical integration of the time-discretized equation of motion. Using Object-Oriented Programming (OOP) techniques in C++ and OpenGL graphical support, a finite element code including the pre/postprocessor FER/Impact is developed. The numerical results show that, at the present stage of development, this approach is robust and efficient in terms of numerical stability and precision compared with the penalty method.

  3. Modern Chemistry Techniques Applied to Metal Behavior and Chelation in Medical and Environmental Systems - Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Sutton, M; Andresen, B; Burastero, S R; Chiarappa-Zucca, M L; Chinn, S C; Coronado, P R; Gash, A E; Perkins, J; Sawvel, A M; Szechenyi, S C

    2005-02-03

    This report details the research and findings generated over the course of a 3-year research project funded by Lawrence Livermore National Laboratory (LLNL) Laboratory Directed Research and Development (LDRD). Originally tasked with studying beryllium chemistry and chelation for the treatment of Chronic Beryllium Disease and the environmental remediation of beryllium-contaminated environments, this work has yielded results on beryllium and uranium solubility and speciation associated with toxicology; specific and effective chelation agents for beryllium, capable of lowering beryllium tissue burden and increasing urinary excretion in mice, and of dissolving beryllium contamination at LLNL Site 300; 9Be NMR studies previously unstudied at LLNL; secondary ion mass spectrometry (SIMS) imaging of beryllium in spleen and lung tissue; and beryllium interactions with aerogel/GAC material for environmental cleanup. The results show that chelator development using modern chemical techniques, such as chemical thermodynamic modeling, was successful in identifying and utilizing tried and tested beryllium chelators for use in medical and environmental scenarios. Additionally, a study of uranium speciation in simulated biological fluids identified the uranium species present in urine, gastric juice, pancreatic fluid, airway surface fluid, simulated lung fluid, bile, saliva, plasma, interstitial fluid and intracellular fluid.

  4. Imaging techniques applied to the study of fluids in porous media

    Energy Technology Data Exchange (ETDEWEB)

    Tomutsa, L.; Brinkmeyer, A.; Doughty, D.

    1993-04-01

    A synergistic rock characterization methodology has been developed. It derives reservoir engineering parameters from X-ray computed tomography (CT) scanning, computer-assisted petrographic image analysis, minipermeameter measurements, and nuclear magnetic resonance imaging (NMRI). This rock characterization methodology is used to investigate the effect of small-scale rock heterogeneity on oil distribution and recovery. It is also used to investigate the applicability of imaging technologies to the development of scale-up procedures from core plug to whole core, by comparing the results of detailed simulations with the images of the fluid distributions observed by CT scanning. By using the detailed rock and fluid data generated by the imaging technology described, one can verify various scale-up techniques directly in the laboratory. As an example, realizations of rock properties statistically and spatially compatible with the observed values are generated by one of the various stochastic methods available (turning bands) and used as simulator input. The simulation results were compared both with the simulation results using the true rock properties and with the fluid distributions observed by CT. Conclusions regarding the effect of the various permeability models on waterflood oil recovery were formulated.

  5. Applying satellite remote sensing technique in disastrous rainfall systems around Taiwan

    Science.gov (United States)

    Liu, Gin-Rong; Chen, Kwan-Ru; Kuo, Tsung-Hua; Liu, Chian-Yi; Lin, Tang-Huang; Chen, Liang-De

    2016-05-01

    Many people in Asian regions suffer from disastrous rainfall year after year. The rainfall from typhoons or tropical cyclones (TCs) is one of their key water supply sources, but such TCs may also bring unexpected heavy rainfall, causing flash floods, mudslides or other disasters. Present techniques cannot stop or change a TC's route or intensity. However, we could significantly mitigate the heavy casualties and economic losses if we could detect a TC's formation earlier and estimate its rainfall amount and distribution more accurately before landfall. In light of these problems, this short article presents methods to detect a TC's formation as early as possible and to delineate its rainfall potential pattern more accurately in advance. For the first part, satellite-retrieved air-sea parameters are obtained and used to estimate the thermal and dynamic energy fields and their variation over the open ocean, delineating the ocean areas and cloud clusters with a high possibility of typhoon formation. For the second part, an improved tropical rainfall potential (TRaP) model is proposed, with better assumptions than the original TRaP for TC rainfall band rotation, rainfall amount estimation, and topographic effect correction, to obtain more accurate TC rainfall distributions, especially for hilly and mountainous areas such as Taiwan.

  6. Digital Image Correlation Techniques Applied to Large Scale Rocket Engine Testing

    Science.gov (United States)

    Gradl, Paul R.

    2016-01-01

    Rocket engine hot-fire ground testing is necessary to understand component performance, reliability and engine system interactions during development. The J-2X upper stage engine completed a series of developmental hot-fire tests that derived the performance of the engine and its components, validated analytical models and provided the necessary data to identify where design changes, process improvements and technology development were needed. The J-2X development engines were heavily instrumented to provide the data necessary to support these activities, which enabled the team to investigate any anomalies experienced during the test program. This paper describes the development of an optical digital image correlation technique to augment the data provided by traditional strain gauges, which are prone to debonding at elevated temperatures and limited to localized measurements. The feasibility of this optical measurement system was demonstrated during full scale hot-fire testing of J-2X, during which a digital image correlation system, incorporating a pair of high speed cameras to measure three-dimensional, real-time displacements and strains, was installed and operated under the extreme environments present on the test stand. The camera and facility setup, pre-test calibrations, hot-fire test data collection, and post-test analysis and results are presented in this paper.

  7. Applying the manifold theory to Milky Way models: First steps on morphology and kinematics

    Directory of Open Access Journals (Sweden)

    Antoja T.

    2012-02-01

    Full Text Available We present recent results obtained by applying invariant manifold techniques to analytical models of the Milky Way. It has been shown that invariant manifolds can successfully reproduce the spiral arms and rings in external barred galaxies. Here, for the first time, we apply this theory to Milky Way models. We select five different models from the literature and, using the parameters chosen by the authors of the papers, consider three different cases: Case 1, where only the COBE/DIRBE bar is included in the potential; Case 2, where the COBE/DIRBE bar and the Long bar are aligned; and Case 3, where the COBE/DIRBE bar and the Long bar are misaligned. In each case and for each model we compute the orbits trapped by the manifolds. In general, the global morphology of the manifolds can account for the 3-kpc arms and for the Galactic Molecular Ring.

  8. Applying a realistic evaluation model to occupational safety interventions

    DEFF Research Database (Denmark)

    Pedersen, Louise Møller

    2017-01-01

    Background: Recent literature characterizes occupational safety interventions as complex social activities, applied in complex and dynamic social systems. Hence, the actual outcomes of an intervention will vary, depending on the intervention, the implementation process, context, personal characteristics of key actors (defined mechanisms), and the interplay between them, and can be categorized as expected or unexpected. However, little is known about ’how’ to include context and mechanisms in evaluations of intervention effectiveness. A revised realistic evaluation model has been introduced for the evaluation of occupational safety interventions. Conclusion: The revised realistic evaluation model can help safety science forward in identifying key factors for the success of occupational safety interventions. However, future research should strengthen the link between the immediate intervention results and outcome.

  9. Nature preservation acceptance model applied to tanker oil spill simulations

    DEFF Research Database (Denmark)

    Friis-Hansen, Peter; Ditlevsen, Ove Dalager

    2003-01-01

    The nature preservation acceptance model is exemplified by a study of oil spills due to simulated tanker collisions in the Danish straits. It is found that the distribution of the oil spill volume per spill is well represented by an exponential distribution both in Oeresund and in Great Belt. When applied in the Poisson model, a risk profile reasonably close to the standard lognormal profile is obtained. Moreover, based on data pairs (volume, cost) for worldwide oil spills it is inferred that the conditional distribution of the costs given the spill volume is well modeled by a lognormal distribution. By unconditioning with the exponential distribution of the single oil spill, a risk profile for the costs is obtained that is indistinguishable from the standard lognormal risk profile. Finally, the question of formulating a public risk acceptance criterion is addressed following Ditlevsen, and it is argued that a Nature Preservation Willingness Index can

  10. Compact Models and Measurement Techniques for High-Speed Interconnects

    CERN Document Server

    Sharma, Rohit

    2012-01-01

    Compact Models and Measurement Techniques for High-Speed Interconnects provides detailed analysis of issues related to high-speed interconnects from the perspective of modeling approaches and measurement techniques. Particular focus is laid on the unified approach (the variational method combined with the transverse transmission line technique) to develop efficient compact models for planar interconnects. This book gives a qualitative summary of the various reported modeling techniques and approaches, and will help researchers and graduate students gain deeper insights into interconnect models in particular and interconnects in general. Time domain and frequency domain measurement techniques and simulation methodology are also explained in this book.

  11. Applying the luminosity function statistics in the fireshell model

    Science.gov (United States)

    Rangel Lemos, L. J.; Bianco, C. L.; Ruffini, R.

    2015-12-01

    The luminosity function (LF) statistics applied to the data of BATSE, GBM/Fermi and BAT/Swift is the theme approached in this work. The LF is a strong statistical tool for extracting useful information from astrophysical samples, and the key point of this statistical analysis is the detector sensitivity, for which we have performed a careful analysis. We applied the LF statistics to three GRB classes predicted by the fireshell model, producing predicted distributions of peak flux N(Fph pk), redshift N(z) and peak luminosity N(Lpk) for the three classes; we also used three GRB rates. We looked for differences among the distributions, and we did in fact find them. We performed a comparison between the predicted and observed distributions (with and without redshifts), for which we built a list of 217 GRBs with known redshifts. Our goal is to transform GRBs into standard candles, and one alternative is to find a correlation between the isotropic luminosity and the Band peak spectral energy (Liso - Epk).

  12. Statistical Mechanics Ideas and Techniques Applied to Selected Problems in Ecology

    Directory of Open Access Journals (Sweden)

    Hugo Fort

    2013-11-01

    Full Text Available Ecosystem dynamics provides an interesting arena for the application of a plethora of concepts and techniques from statistical mechanics. Here I review three examples, each corresponding to an important problem in ecology. First, I start with an analytical derivation of the clumpy patterns in species relative abundances (SRA) empirically observed in several ecological communities involving a high number n of species, a phenomenon which has puzzled ecologists for decades. An interesting point is that this derivation uses results obtained from a statistical mechanics model for ferromagnets. Second, going beyond the mean field approximation, I study the spatial version of a popular ecological model involving just one species representing vegetation. The goal is to address the phenomenon of catastrophic shifts, gradual cumulative variations in some control parameter that suddenly lead to an abrupt change in the system, illustrating it by means of the process of desertification of arid lands. The focus is on the aggregation processes and the effects of diffusion that, combined, lead to the formation of non-trivial spatial vegetation patterns. It is shown that different quantities, like the variance, the two-point correlation function and the patchiness, may serve as early warnings for the desertification of arid lands. Remarkably, at the onset of a desertification transition the distribution of vegetation patches exhibits the scale invariance typical of many physical systems in the vicinity of a phase transition. I comment on similarities of and differences between these catastrophic shifts and paradigmatic thermodynamic phase transitions like the liquid-vapor change of state for a fluid. Third, I analyze the case of many species interacting in space. I choose tropical forests, which are mega-diverse ecosystems that exhibit remarkable dynamics. Therefore these ecosystems represent a research paradigm both for studies of complex systems dynamics as well as to
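
    The catastrophic-shift idea can be made concrete with a one-variable toy model (a standard grazing-type vegetation model chosen here for illustration; it is not the specific spatial model of the article). Sweeping the stress parameter up and then back down reveals hysteresis: the vegetated and desertified branches coexist over a range of stress, so the system's state depends on its history.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy bistable vegetation model: logistic growth minus a saturating
        # grazing/stress loss whose strength is the control parameter c.
        def dVdt(t, V, c):
            return V * (1.0 - V) - c * V**2 / (V**2 + 0.0025)

        def equilibrium(V0, c):
            sol = solve_ivp(dVdt, (0.0, 500.0), [V0], args=(c,), rtol=1e-8)
            return sol.y[0, -1]

        # Sweep the stress parameter up, then back down, starting each run
        # from the previous equilibrium (mimicking a slow environmental drift).
        cs = np.linspace(0.05, 0.30, 11)
        V, up, down = 1.0, [], []
        for c in cs:
            V = equilibrium(V, c)
            up.append(V)
        for c in cs[::-1]:
            V = equilibrium(V, c)
            down.append(V)

        # Differing branches at the same c reveal hysteresis, i.e. a catastrophic shift.
        for c, vu, vd in zip(cs, up, down[::-1]):
            print(f"c = {c:.3f}:  up-sweep V = {vu:.3f}   down-sweep V = {vd:.3f}")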

  13. Finite-element technique applied to heat conduction in solids with temperature dependent thermal conductivity

    Science.gov (United States)

    Aguirre-Ramirez, G.; Oden, J. T.

    1969-01-01

    Finite element method applied to heat conduction in solids with temperature-dependent thermal conductivity, using a nonlinear constitutive equation for heat conduction.
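
    A minimal sketch of this class of problem is given below: one-dimensional steady conduction discretized with linear finite elements, where the temperature dependence k(T) makes the system nonlinear and is handled by Picard (successive substitution) iteration. The conductivity law and boundary values are assumptions for the demonstration, not taken from the paper.

        import numpy as np

        def k(T):
            return 1.0 + 0.05 * T          # assumed linear k(T), W/(m K)

        n_el = 40                          # linear elements on the unit interval
        n = n_el + 1
        x = np.linspace(0.0, 1.0, n)
        h = x[1] - x[0]
        T_left, T_right = 0.0, 100.0       # Dirichlet boundary temperatures

        T = np.linspace(T_left, T_right, n)            # initial guess
        for it in range(50):
            K = np.zeros((n, n))                       # global stiffness matrix
            for e in range(n_el):
                ke = k(0.5 * (T[e] + T[e + 1]))        # conductivity at element mean T
                K[e:e+2, e:e+2] += ke / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
            b = np.zeros(n)                            # no volumetric heat source
            for idx, val in ((0, T_left), (n - 1, T_right)):   # impose Dirichlet BCs
                K[idx, :] = 0.0
                K[idx, idx] = 1.0
                b[idx] = val
            T_new = np.linalg.solve(K, b)
            if np.max(np.abs(T_new - T)) < 1e-8:       # Picard convergence check
                break
            T = T_new

        print(f"converged in {it} iterations; T(0.5) ~ {T[n // 2]:.2f}")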

  14. A Bidirectional Coupling Procedure Applied to Multiscale Respiratory Modeling.

    Science.gov (United States)

    Kuprat, A P; Kabilan, S; Carson, J P; Corley, R A; Einstein, D R

    2013-07-01

    In this study, we present a novel multiscale computational framework for efficiently linking multiple lower-dimensional models describing the distal lung mechanics to imaging-based 3D computational fluid dynamics (CFD) models of the upper pulmonary airways in order to incorporate physiologically appropriate outlet boundary conditions. The framework is an extension of the Modified Newton's Method with nonlinear Krylov accelerator developed by Carlson and Miller [1, 2, 3]. Our extensions include the retention of subspace information over multiple timesteps, and a special correction at the end of a timestep that allows for corrections to be accepted with verified low residual with as little as a single residual evaluation per timestep on average. In the case of a single residual evaluation per timestep, the method has zero additional computational cost compared to uncoupled or unidirectionally coupled simulations. We expect these enhancements to be generally applicable to other multiscale coupling applications where timestepping occurs. In addition we have developed a "pressure-drop" residual which allows for stable coupling of flows between a 3D incompressible CFD application and another (lower-dimensional) fluid system. We expect this residual to also be useful for coupling non-respiratory incompressible fluid applications, such as multiscale simulations involving blood flow. The lower-dimensional models that are considered in this study are sets of simple ordinary differential equations (ODEs) representing the compliant mechanics of symmetric human pulmonary airway trees. To validate the method, we compare the predictions of hybrid CFD-ODE models against an ODE-only model of pulmonary airflow in an idealized geometry. Subsequently, we couple multiple sets of ODEs describing the distal lung to an imaging-based human lung geometry. Boundary conditions in these models consist of atmospheric pressure at the mouth and intrapleural pressure applied to the multiple sets

  15. A bidirectional coupling procedure applied to multiscale respiratory modeling

    Science.gov (United States)

    Kuprat, A. P.; Kabilan, S.; Carson, J. P.; Corley, R. A.; Einstein, D. R.

    2013-07-01

    pressure applied to the multiple sets of ODEs. In both the simplified geometry and the imaging-based geometry, the performance of the method was comparable to that of monolithic schemes, in most cases requiring only a single CFD evaluation per time step. Thus, this new accelerator allows us to begin combining pulmonary CFD models with lower-dimensional models of pulmonary mechanics with little computational overhead. Moreover, because the CFD and lower-dimensional models are totally separate, this framework affords great flexibility in terms of the type and breadth of the adopted lower-dimensional model, allowing the biomedical researcher to appropriately focus on model design. Research funded by National Heart, Lung, and Blood Institute Award 1RO1HL073598.
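
    The nonlinear Krylov accelerator used in this work is closely related to Anderson acceleration of fixed-point iterations. The sketch below shows that generic pattern on a toy contractive coupling map; it illustrates the idea only and is not the authors' implementation or their pressure-drop residual.

        import numpy as np

        def anderson(g, x0, m=5, tol=1e-10, maxit=100):
            """Anderson(m) acceleration of the fixed-point iteration x = g(x)."""
            x = x0.copy()
            X, F = [], []                      # histories of iterates and residuals
            for it in range(maxit):
                fx = g(x) - x                  # residual of the coupling map
                if np.linalg.norm(fx) < tol:
                    return x, it
                X.append(x.copy()); F.append(fx.copy())
                if len(F) > m + 1:             # keep a sliding window of history
                    X.pop(0); F.pop(0)
                if len(F) == 1:
                    x = x + fx                 # plain Picard step to start
                else:
                    dF = np.array([F[i+1] - F[i] for i in range(len(F) - 1)]).T
                    dX = np.array([X[i+1] - X[i] for i in range(len(X) - 1)]).T
                    gamma, *_ = np.linalg.lstsq(dF, fx, rcond=None)
                    x = x + fx - (dX + dF) @ gamma   # Anderson mixing update
            return x, maxit

        # Toy coupling map: a contractive linear surrogate for an interface problem.
        A = np.array([[0.6, 0.2], [0.1, 0.5]])
        b = np.array([1.0, -2.0])
        x, iters = anderson(lambda p: A @ p + b, np.zeros(2))
        print("fixed point:", x, "in", iters, "iterations")   # solves p = A p + b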

  16. A soil-plant model applied to phytoremediation of metals.

    Science.gov (United States)

    Lugli, Francesco; Mahler, Claudio Fernando

    2016-01-01

    This study reports a phytoremediation pot experiment modeled with an open-source program. Unsaturated water flow was described by the Richards equation and solute transport by the advection-dispersion equation. Sink terms in the governing flow and transport equations accounted for root water and solute uptake, respectively. The experimental data relate to the application of Vetiver grass to soil contaminated by metal ions. Sensitivity analysis revealed that, due to the specific experimental set-up (bottom flux not allowed), the hydraulic model parameters did not influence root water (and contaminant) uptake. In contrast, the results were highly correlated with the plant's solar radiation interception efficiency (leaf area index). The amounts of metals accumulated in the plant tissue were compared to the numerical values of cumulative uptake. Pb(2+) and Zn(2+) uptake was satisfactorily described using a passive model. However, for Ni(2+) and Cd(2+), a specific calibration of the active uptake model was necessary. The calibrated MM (Michaelis-Menten) parameters for Ni(2+), Cd(2+), and Pb(2+) were compared to values in the literature, generally suggesting lower rates and an earlier onset of saturation. A parameter (the saturation ratio) was introduced to assess the efficiency of contaminant uptake. Numerical analysis applying actual field conditions showed the limitation of the active model, namely its independence from the transpiration rate.

  17. TCSC impedance regulator applied to the second benchmark model

    Energy Technology Data Exchange (ETDEWEB)

    Hamel, J.P.; Dessaint, L.A. [Ecole de Technologie Superieure, Montreal, PQ (Canada). Dept. of Electrical Engineering; Champagne, R. [Ecole de Technologie Superieure, Montreal, PQ (Canada). Dept. of Software and IT Engineering; Pare, D. [Institut de Recherche d' Hydro-Quebec, Varennes, PQ (Canada)

    2008-07-01

    Due to the combination of electrical demand growth and the high cost of building new power transmission lines, series compensation is increasingly used in power systems all around the world. Series compensation has been proposed as a way to transfer more power on existing lines: by adding series compensation to an existing line (a relatively small change), the power transfer can be increased significantly. One of the means used for line compensation is the addition of capacitive elements in series with the line. This paper presents a thyristor-controlled series capacitor (TCSC) model that uses impedance as its reference, has individual controls for each phase, includes a linearization module and considers only the fundamental frequency for impedance computations, without using any filter. The model's dynamic behavior was validated by applying it to the second benchmark model for subsynchronous resonance (SSR). Simulation results from the proposed model, obtained using EMTP-RV and SimPowerSystems, are presented. It was concluded that SSR was mitigated by the proposed approach. 19 refs., 19 figs.

  18. Applying the model of excellence in dental healthcare

    Directory of Open Access Journals (Sweden)

    Tekić Jasmina

    2015-01-01

    Full Text Available Introduction. Models of excellence are considered a practical tool in the field of management that should help a variety of organizations, including dental ones, to carry out the measurement of the quality of provided services, and so define their position in relation to excellence. The quality of healthcare implies the degree to which the system of healthcare and health services increases the likelihood of a positive treatment outcome. Objective. The aim of the present study was to define a model of excellence in the field of dental healthcare (DHC) in the Republic of Serbia and suggest a model of DHC whose services will have the characteristics of outstanding service in dental practice. Methods. In this study a specially designed questionnaire was used for the assessment of the maturity level of quality management applied in healthcare organizations of the Republic of Serbia. The questionnaire consists of 13 units and a total of 240 questions. Results. The results of the study are discussed in four areas: (1) defining the main criteria and sub-criteria, (2) the elements of excellence of DHC in the Republic of Serbia, (3) the quality of DHC in the Republic of Serbia, and (4) defining the framework of the model of excellence for DHC in the Republic of Serbia. The main criteria which defined the framework and implementation model of excellence in the field of DHC in Serbia were: leadership, management, human resources, policy and strategy, other resources, processes, patients’ satisfaction, employees’ satisfaction, impact on society and business results. The model has two main parts: the enablers, covering the first five criteria, and the results, covering the other four criteria. Conclusion. Excellence in DHC business as well as the excellence of provided dental services are increasingly becoming the norm and good practice, and progressively less the exception.

  19. Técnicas moleculares aplicadas à microbiologia de alimentos = Molecular techniques applied to food microbiology

    Directory of Open Access Journals (Sweden)

    Eliezer Ávila Gandra

    2008-01-01

    Full Text Available Beginning in the 1980s, molecular techniques became an alternative to the phenotypic methods traditionally used in food microbiology, a substitution that was sped up by the advent of the polymerase chain reaction (PCR). This article reviews the main molecular techniques used as tools in food microbiology, from those developed first, such as plasmid profile analysis, to contemporary techniques such as real-time PCR, discussing the characteristics, advantages and disadvantages of these techniques and evaluating their potential to overcome the limitations of traditional techniques.

  20. A multiblock grid generation technique applied to a jet engine configuration

    Science.gov (United States)

    Stewart, Mark E. M.

    1992-01-01

    Techniques are presented for quickly finding a multiblock grid for a 2D geometrically complex domain from geometrical boundary data. An automated technique for determining a block decomposition of the domain is explained. Techniques for representing this domain decomposition and transforming it are also presented. Further, a linear optimization method may be used to solve the equations which determine grid dimensions within the block decomposition. These algorithms automate many stages in the domain decomposition and grid formation process and limit the need for human intervention and inputs. They are demonstrated for the meridional or throughflow geometry of a bladed jet engine configuration.

  1. Dynamical behavior of the Niedermayer algorithm applied to Potts models

    Science.gov (United States)

    Girardi, D.; Penna, T. J. P.; Branco, N. S.

    2012-08-01

    In this work, we make a numerical study of the dynamic universality class of the Niedermayer algorithm applied to the two-dimensional Potts model with 2, 3, and 4 states. This algorithm updates clusters of spins and has a free parameter, E0, which controls the size of these clusters, such that E0=1 is the Metropolis algorithm and E0=0 regains the Wolff algorithm, for the Potts model. For -1≤E0<0, only clusters of equal spins can be formed: we show that the mean size of the clusters of (possibly) turned spins initially grows with the linear size of the lattice, L, but eventually saturates at a given lattice size L˜, which depends on E0. For L≥L˜, the Niedermayer algorithm is in the same dynamic universality class as the Metropolis one, i.e., they have the same dynamic exponent. For E0>0, spins in different states may be added to the cluster, but the dynamic behavior is less efficient than for the Wolff algorithm (E0=0). Therefore, our results show that the Wolff algorithm is the best choice for Potts models, when compared to Niedermayer's generalization.
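
    For reference, the E0=0 limit is easy to state in code. The following is a minimal Wolff single-cluster update for the q-state Potts model, written from the standard textbook description of the algorithm rather than taken from the paper, and run at the exactly known critical coupling.

        import numpy as np

        rng = np.random.default_rng(5)

        def wolff_step(spins, beta, q):
            """One Wolff cluster flip for the q-state Potts model (J = 1)."""
            L = spins.shape[0]
            p_add = 1.0 - np.exp(-beta)            # bond probability between equal spins
            i, j = rng.integers(L, size=2)
            old, new = spins[i, j], rng.integers(q)
            while new == old:                      # pick a different target state
                new = rng.integers(q)
            stack = [(i, j)]
            spins[i, j] = new                      # flip as we add: marks site as visited
            size = 1
            while stack:
                a, b = stack.pop()
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = (a + da) % L, (b + db) % L      # periodic boundaries
                    if spins[na, nb] == old and rng.random() < p_add:
                        spins[na, nb] = new
                        stack.append((na, nb))
                        size += 1
            return size

        L, q = 32, 3
        beta = np.log(1 + np.sqrt(q))              # critical point of the 2D Potts model
        spins = rng.integers(q, size=(L, L))
        sizes = [wolff_step(spins, beta, q) for _ in range(2000)]
        print("mean cluster size at criticality:", np.mean(sizes))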

  2. Spectral Aging Model Applied to Meteosat First Generation Visible Band

    Directory of Open Access Journals (Sweden)

    Ilse Decoster

    2014-03-01

    Full Text Available The Meteosat satellites have been operational since the early eighties, creating so far a continuous time period of observations of more than 30 years. In order to use these data for climate data records, a consistent calibration is necessary between the consecutive instruments. Studies have shown that the Meteosat First Generation (MFG) satellites (1982-2006) suffer from in-flight degradation which is spectral in nature and is not corrected by the official calibration of EUMETSAT. Continuing on previous published work by the same authors, this paper applies the spectral aging model to a set of clear-sky and cloudy targets, and derives the model parameters for all six MFG satellites (Meteosat-2 to -7). Several problems have been encountered, due both to the instrument and to geophysical occurrences, and these are discussed and illustrated here in detail. The paper shows how the spectral aging model is an improvement on the EUMETSAT calibration method, with a stability of 1%-2% for Meteosat-4 to -7, which increases up to 6% for ocean sites using the full MFG time period.

  3. Linear model applied to the evaluation of pharmaceutical stability data

    Directory of Open Access Journals (Sweden)

    Renato Cesar Souza

    2013-09-01

    Full Text Available The expiry date on the packaging of a product gives the consumer confidence that the product will retain its identity, content, quality and purity throughout its period of validity. In the pharmaceutical industry, the definition of this term is based on the stability data obtained during product registration. In view of the above, this work aims to apply linear regression, according to guideline ICH Q1E (2003), to evaluate some aspects of a product undergoing registration in Brazil. With this purpose, the evaluation was carried out with the development center of a multinational company in Brazil, on samples of three different batches composed of two active pharmaceutical ingredients in two different packages. Based on the preliminary results obtained, it was possible to observe the different degradation tendencies of the product in the two packages and the relationship between the variables studied, adding knowledge so that new linear models can be applied and developed for other products.
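
    The ICH Q1E-style evaluation essentially reduces to fitting a straight line to assay-versus-time data and intersecting a one-sided 95% confidence bound with the acceptance criterion. The sketch below shows that calculation with invented stability data and a hypothetical 95% specification; it illustrates the guideline's approach, not the company's actual analysis.

        import numpy as np
        from scipy import stats

        # Illustrative (hypothetical) stability data: assay (% label claim) vs. months.
        t = np.array([0, 3, 6, 9, 12, 18, 24], dtype=float)
        y = np.array([100.1, 99.6, 99.2, 98.7, 98.4, 97.5, 96.8])
        spec = 95.0                        # assumed lower acceptance criterion

        res = stats.linregress(t, y)
        n = t.size
        resid = y - (res.intercept + res.slope * t)
        s = np.sqrt(np.sum(resid ** 2) / (n - 2))          # residual standard error
        t95 = stats.t.ppf(0.95, n - 2)                     # one-sided 95% quantile

        def lower_bound(x):
            """One-sided 95% lower confidence bound on the mean regression line."""
            se = s * np.sqrt(1 / n + (x - t.mean()) ** 2 / np.sum((t - t.mean()) ** 2))
            return res.intercept + res.slope * x - t95 * se

        # Shelf life = earliest time where the lower bound crosses the specification.
        grid = np.linspace(0, 60, 6001)
        shelf = grid[np.argmax(lower_bound(grid) < spec)]
        print(f"slope {res.slope:.3f} %/month; estimated shelf life ~ {shelf:.1f} months")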

  4. Sensorless position estimator applied to nonlinear IPMC model

    Science.gov (United States)

    Bernat, Jakub; Kolota, Jakub

    2016-11-01

    This paper addresses the issue of estimating position for an ionic polymer metal composite (IPMC), a type of electroactive polymer (EAP). The key step is the construction of a sensorless model considering only current feedback. This work takes into account the nonlinearities caused by electrochemical effects in the material. Owing to recent observer design techniques, the authors obtained both a Lyapunov-function-based estimation law and a sliding mode observer. To accomplish the observer design, the IPMC model was identified through a series of experiments comprising time domain measurements. The identification process was completed by means of geometric scaling of three test samples. In the proposed design, the estimated position accurately tracks the polymer position, which is illustrated by the experiments.

  5. Applying a Hybrid MCDM Model for Six Sigma Project Selection

    Directory of Open Access Journals (Sweden)

    Fu-Kwun Wang

    2014-01-01

    Full Text Available Six Sigma is a project-driven methodology; the projects that provide the maximum financial benefits and other impacts to the organization must be prioritized. Project selection (PS) is a type of multiple criteria decision making (MCDM) problem. In this study, we present a hybrid MCDM model combining the decision-making trial and evaluation laboratory (DEMATEL) technique, the analytic network process (ANP), and the VIKOR method to evaluate and improve Six Sigma projects and to reduce performance gaps in each criterion and dimension. We consider the film printing industry of Taiwan as an empirical case. The results show that our approach not only selects the best project, but can also be used to analyze the gaps between existing performance values and aspiration levels, improving the gaps in each dimension and criterion based on the influential network relation map.
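
    The final ranking stage of such a hybrid model can be illustrated with a bare-bones VIKOR computation. In the sketch below the decision matrix, weights and strategy parameter are all made-up numbers; in the paper the weights would come from the DEMATEL/ANP stages rather than being fixed by hand.

        import numpy as np

        # Hypothetical decision matrix: rows are candidate Six Sigma projects,
        # columns are benefit criteria scores (higher is better).
        F = np.array([[7.0, 8.0, 6.5],
                      [6.0, 9.0, 7.0],
                      [8.0, 6.5, 8.0]])
        w = np.array([0.4, 0.35, 0.25])    # criterion weights (assumed, e.g. from ANP)
        v = 0.5                            # weight of the "group utility" strategy

        f_best, f_worst = F.max(axis=0), F.min(axis=0)
        gap = (f_best - F) / (f_best - f_worst)        # normalized gaps to the ideal

        S = (w * gap).sum(axis=1)          # group utility (weighted sum of gaps)
        R = (w * gap).max(axis=1)          # individual regret (worst weighted gap)
        Q = (v * (S - S.min()) / (S.max() - S.min())
             + (1 - v) * (R - R.min()) / (R.max() - R.min()))

        for i in np.argsort(Q):            # smaller Q ranks better in VIKOR
            print(f"project {i + 1}: S={S[i]:.3f} R={R[i]:.3f} Q={Q[i]:.3f}")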

  6. Applying direct observation to model workflow and assess adoption.

    Science.gov (United States)

    Unertl, Kim M; Weinger, Matthew B; Johnson, Kevin B

    2006-01-01

    Lack of understanding about workflow can impair health IT system adoption. Observational techniques can provide valuable information about clinical workflow. A pilot study using direct observation was conducted in an outpatient chronic disease clinic. The goals of the study were to assess workflow and information flow and to develop a general model of workflow and information behavior. Over 55 hours of direct observation showed that the pilot site utilized many of the features of the informatics systems available to them, but also employed multiple non-electronic artifacts and workarounds. Gaps existed between clinic workflow and informatics tool workflow, as well as between institutional expectations of informatics tool use and actual use. Concurrent use of both paper-based and electronic systems resulted in duplication of effort and inefficiencies. A relatively short period of direct observation revealed important information about workflow and informatics tool adoption.

  7. Transient heat conduction in a pebble fuel applying fractional model

    Energy Technology Data Exchange (ETDEWEB)

    Gomez A, R.; Espinosa P, G. [Universidad Autonoma Metropolitana, Unidad Iztapalapa, Area de Ingenieria en Recursos Energeticos, Av. San Rafael Atlixco 186, Col. Vicentina, 09340 Mexico D. F. (Mexico)], e-mail: gepe@xanum.uam.mx

    2009-10-15

    In this paper we present the equation of thermal diffusion of time-fractional order in one-dimensional space in spherical coordinates, with the objective of analyzing the heat transfer between the fuel and the coolant in a fuel element of a Pebble Bed Modular Reactor. The pebble fuel is a heterogeneous system made of microspheres constituted by UO2, pyrolytic carbon and silicon carbide, mixed with graphite. To describe the heat transfer phenomena in the pebble fuel, we apply a fractional (non-Fourier) constitutive law, and a numerical model is developed in order to analyze the transient behaviour of the temperature distribution in the pebble fuel with anomalous thermal diffusion effects. (Author)

  8. State of the Art Review for Applying Computational Intelligence and Machine Learning Techniques to Portfolio Optimisation

    CERN Document Server

    Hurwitz, Evan

    2009-01-01

    Computational techniques have shown much promise in the field of finance, owing to their ability to extract sense out of dauntingly complex systems. This paper reviews the most promising of these techniques, from traditional computational intelligence methods to their machine learning siblings, with a particular view to their application in optimising the management of a portfolio of financial instruments. The current state of the art is assessed, and prospective further work is identified and recommended.

  9. A Survey on Data Mining Techniques Applied to Electricity-Related Time Series Forecasting

    OpenAIRE

    Francisco Martínez-Álvarez; Alicia Troncoso; Gualberto Asencio-Cortés; Riquelme, José C

    2015-01-01

    Data mining has become an essential tool during the last decade to analyze large sets of data. The variety of techniques it includes and the successful results obtained in many application fields make this family of approaches powerful and widely used. In particular, this work explores the application of these techniques to time series forecasting. Although classical statistical-based methods provide reasonably good results, the results of the application of data mining outperform those of ...

  10. Active lubrication applied to radial gas journal bearings. Part 2: Modelling improvement and experimental validation

    DEFF Research Database (Denmark)

    Pierart, Fabián G.; Santos, Ilmar F.

    2016-01-01

    Actively-controlled lubrication techniques are applied to radial gas bearings with the aim of enhancing one of their most critical drawbacks, their lack of damping. A model-based control design approach is presented using simple feedback control laws, i.e. proportional controllers. The design approach ... by the finite element method, and the global model is used as the control design tool. Active lubrication allows for a significant increase in the damping factor of the rotor-bearing system. Very good agreement between theory and experiment is obtained, supporting the multi-physics design tool developed.

  11. Acceptance and Mindfulness Techniques as Applied to Refugee and Ethnic Minority Populations with PTSD: Examples from "Culturally Adapted CBT"

    Science.gov (United States)

    Hinton, Devon E.; Pich, Vuth; Hofmann, Stefan G.; Otto, Michael W.

    2013-01-01

    In this article we illustrate how we utilize acceptance and mindfulness techniques in our treatment (Culturally Adapted CBT, or CA-CBT) for traumatized refugees and ethnic minority populations. We present a Nodal Network Model (NNM) of Affect to explain the treatment's emphasis on body-centered mindfulness techniques and its focus on psychological…

  12. Synchrotron and simulations techniques applied to problems in materials science: catalysts and Azul Maya pigments.

    Science.gov (United States)

    Chianelli, Russell R; Perez De la Rosa, Myriam; Meitzner, George; Siadati, Mohammed; Berhault, Gilles; Mehta, Apurva; Pople, John; Fuentes, Sergio; Alonzo-Nuñez, Gabriel; Polette, Lori A

    2005-03-01

    Development of synchrotron techniques for the determination of the structure of disordered, amorphous and surface materials has exploded over the past 20 years owing to the increasing availability of high-flux synchrotron radiation and the continuing development of increasingly powerful synchrotron techniques. These techniques are available to materials scientists who are not necessarily synchrotron scientists through interaction with the effective user communities that exist at synchrotrons such as the Stanford Synchrotron Radiation Laboratory. In this article the application of multiple synchrotron characterization techniques to two classes of materials defined as 'surface compounds' is reviewed. One class of surface compounds comprises materials like MoS(2-x)C(x) that are widely used petroleum catalysts, employed to improve the environmental properties of transportation fuels. These compounds may be viewed as 'sulfide-supported carbides' in their catalytically active states. The second class of 'surface compounds' comprises the 'Maya blue' pigments that are based on technology created by the ancient Maya. These compounds are organic/inorganic 'surface complexes' consisting of the dye indigo and palygorskite, a common clay. The identification of both surface compounds relies on the application of synchrotron techniques as described here.

  13. Synchrotron and Simulations Techniques Applied to Problems in Materials Science: Catalysts and Azul Maya Pigments

    Energy Technology Data Exchange (ETDEWEB)

    Chianelli, R.

    2005-01-12

    Development of synchrotron techniques for the determination of the structure of disordered, amorphous and surface materials has exploded over the past twenty years due to the increasing availability of high flux synchrotron radiation and the continuing development of increasingly powerful synchrotron techniques. These techniques are available to materials scientists who are not necessarily synchrotron scientists through interaction with effective user communities that exist at synchrotrons such as the Stanford Synchrotron Radiation Laboratory (SSRL). In this article we review the application of multiple synchrotron characterization techniques to two classes of materials defined as "surface compounds." One class of surface compounds comprises materials like MoS(2-x)C(x) that are widely used as petroleum catalysts to improve the environmental properties of transportation fuels. These compounds may be viewed as "sulfide supported carbides" in their catalytically active states. The second class of "surface compounds" comprises the "Maya Blue" pigments that are based on technology created by the ancient Maya. These compounds are organic/inorganic "surface complexes" consisting of the dye indigo and palygorskite, a common clay. The identification of both surface compounds relies on the application of synchrotron techniques as described in this report.

  14. Hierarchic stochastic modelling applied to intracellular Ca(2+) signals.

    Directory of Open Access Journals (Sweden)

    Gregor Moenke

    Full Text Available Important biological processes like cell signalling and gene expression have noisy components and are very complex at the same time. Mathematical analysis of such systems has often been limited to the study of isolated subsystems, or approximations are used that are difficult to justify. Here we extend a recently published method (Thurley and Falcke, PNAS 2011) which is formulated in observable system configurations instead of molecular transitions. This reduces the number of system states by several orders of magnitude and avoids fitting of kinetic parameters. The method is applied to Ca(2+) signalling. Ca(2+) is a ubiquitous second messenger transmitting information by stochastic sequences of concentration spikes, which arise by coupling of subcellular Ca(2+) release events (puffs). We derive analytical expressions for a mechanistic Ca(2+) model, based on recent data from live cell imaging, and calculate Ca(2+) spike statistics in dependence on cellular parameters like stimulus strength or number of Ca(2+) channels. The new approach substantiates a generic Ca(2+) model, which is a very convenient way to simulate Ca(2+) spike sequences with correct spiking statistics.
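
    The spike statistics the record refers to can be made concrete with a small simulation. The sketch below is not the paper's hierarchical model; it merely illustrates a stochastic spike sequence generated as a renewal process with gamma-distributed inter-spike intervals, whose shape and scale parameters are invented for illustration.

```python
# A minimal sketch (not the paper's model): a Ca(2+) spike sequence
# simulated as a renewal process with gamma-distributed inter-spike
# intervals. The shape/scale values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(seed=1)

shape, scale = 2.5, 20.0                       # assumed gamma parameters (scale in s)
isi = rng.gamma(shape, scale, size=50)         # inter-spike intervals [s]
spike_times = np.cumsum(isi)                   # spike time stamps [s]

# Basic spike statistics of the kind the paper computes (mean period, CV)
mean_isi = isi.mean()
cv = isi.std() / mean_isi
print(f"mean period: {mean_isi:.1f} s, coefficient of variation: {cv:.2f}")
```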

  15. Effect of the reinforcement bar arrangement on the efficiency of electrochemical chloride removal technique applied to reinforced concrete structures

    Energy Technology Data Exchange (ETDEWEB)

    Garces, P. [Dpto. Ing. de la Construccion, Obras Publicas e Infraestructura Urbana, Universidad de Alicante. Apdo. 99, E-03080 Alicante (Spain)]. E-mail: pedro.garces@ua.es; Sanchez de Rojas, M.J. [Dpto. Ing. de la Construccion, Obras Publicas e Infraestructura Urbana, Universidad de Alicante. Apdo. 99, E-03080 Alicante (Spain); Climent, M.A. [Dpto. Ing. de la Construccion, Obras Publicas e Infraestructura Urbana, Universidad de Alicante. Apdo. 99, E-03080 Alicante (Spain)

    2006-03-15

    This paper reports on the research done to find out the effect that different bar arrangements may have on the efficiency of the electrochemical chloride removal (ECR) technique when applied to a reinforced concrete structural member. Five different types of bar arrangement were considered, corresponding to typical structural members such as columns (with single and double bar reinforcement), slabs, beams and footings. ECR was applied in several steps. We observe that the extraction efficiency depends on the reinforcing bar arrangement: a uniform layer set-up favours chloride extraction. Electrochemical techniques were also used to estimate the corrosion state of the reinforcing bars, measuring the corrosion potential and the instant corrosion rate based on the polarization resistance technique. After ECR treatment, a reduction in corrosion levels to below the depassivation threshold is observed.

  16. Energy saving techniques applied over a nation-wide mobile network

    DEFF Research Database (Denmark)

    Perez, Eva; Frank, Philipp; Micallef, Gilbert;

    2014-01-01

    Traffic carried over wireless networks has grown significantly in recent years and actual forecasts show that this trend is expected to continue. However, the rapid mobile data explosion and the need for higher data rates come at a cost of increased complexity and energy consumption of the mobile networks. Although base station equipment is improving its energy efficiency by means of new power amplifiers and increased processing power, additional techniques are required to further reduce the energy consumption. In this paper, we evaluate different energy saving techniques and study their impact on the energy consumption based on a nation-wide network of a leading European operator. By means of an extensive analysis, we show that with the proposed techniques significant energy savings can be realized.

  17. A Survey on Data Mining Techniques Applied to Electricity-Related Time Series Forecasting

    Directory of Open Access Journals (Sweden)

    Francisco Martínez-Álvarez

    2015-11-01

    Full Text Available Data mining has become an essential tool during the last decade to analyze large sets of data. The variety of techniques it includes, and the successful results obtained in many application fields, make this family of approaches powerful and widely used. In particular, this work explores the application of these techniques to time series forecasting. Although classical statistical-based methods provide reasonably good results, the results of applying data mining techniques outperform those of classical methods. Hence, this work faces two main challenges: (i) to provide a compact mathematical formulation of the mainly used techniques; (ii) to review the latest works of time series forecasting and, as a case study, those related to electricity price and demand markets.
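
    One member of the family of techniques such surveys cover is nearest-neighbour forecasting. The sketch below is an illustrative stand-in, not the survey's benchmark: it embeds a synthetic daily-cycle "demand" series into windows and predicts the next value from the k most similar historical windows; the window length and k are assumptions.

```python
# Minimal sketch of a data mining forecaster of the kind surveyed:
# k-nearest-neighbour prediction on a univariate series (e.g. hourly
# electricity demand). Series, window length and k are illustrative.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1000)
series = 100 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)

w = 24  # embedding window: one day of hourly values (assumption)

# Library of historical windows and the value that followed each one
windows = np.lib.stride_tricks.sliding_window_view(series[:-1], w)
targets = series[w:]

query = series[-w:]                        # the most recent day
d = np.linalg.norm(windows - query, axis=1)
k = 5
nearest = np.argsort(d)[:k]
forecast = targets[nearest].mean()         # average of the k continuations
print(f"next-hour forecast: {forecast:.2f}")
```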

  18. Schlieren technique applied to the arc temperature measurement in a high energy density cutting torch

    Science.gov (United States)

    Prevosto, L.; Artana, G.; Mancinelli, B.; Kelly, H.

    2010-01-01

    Plasma temperature and radial density profiles of the plasma species in a high energy density cutting arc have been obtained by using a quantitative schlieren technique. A Z-type two-mirror schlieren system was used in this research. Due to its great sensitivity, this technique allows the measurement of plasma composition and temperature from the arc axis to the surrounding medium by processing the gray-level contrast values of digital schlieren images recorded at the observation plane for a given position of a transverse knife located at the exit focal plane of the system. The technique has provided a good visualization of the plasma flow emerging from the nozzle and its interactions with the surrounding medium and the anode. The obtained temperature values are in good agreement with those values previously obtained by the authors on the same torch using Langmuir probes.

  19. Applied techniques for high bandwidth data transfers across wide area networks

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Jason; Gunter, Dan; Tierney, Brian; Allcock, Bill; Bester, Joe; Bresnahan, John; Tuecke, Steve

    2001-04-30

    Large distributed systems such as Computational/Data Grids require large amounts of data to be co-located with the computing facilities for processing. Ensuring that the data is there in time for the computation in today's Internet is a massive problem. From our work developing a scalable distributed network cache, we have gained experience with techniques necessary to achieve high data throughput over high bandwidth Wide Area Networks (WAN). In this paper, we discuss several hardware and software design techniques and issues, and then describe their application to an implementation of an enhanced FTP protocol called GridFTP. We also describe results from two applications using these techniques, which were obtained at the Supercomputing 2000 conference.

  20. New Control Technique Applied in Dynamic Voltage Restorer for Voltage Sag Mitigation

    Directory of Open Access Journals (Sweden)

    Rosli Omar

    2010-01-01

    Full Text Available The Dynamic Voltage Restorer (DVR) is a power electronics device able to compensate voltage sags on critical loads dynamically. The DVR consists of a VSC, injection transformers, passive filters and energy storage (lead acid battery). By injecting an appropriate voltage, the DVR restores the voltage waveform and ensures constant load voltage. Many types of control technique are used in DVRs for mitigating voltage sags. The efficiency of the DVR depends on the efficiency of the control technique involved in switching the inverter. Problem statement: Simulation and experimental investigation toward the development of new algorithms based on SVPWM, understanding the nature of the DVR, and performance comparison between the various controller technologies available. The proposed controller using space vector modulation techniques obtains higher amplitude modulation indexes than conventional SPWM techniques. Moreover, space vector modulation techniques can be easily implemented using digital processors. Space vector PWM can produce about 15% higher output voltage than standard sinusoidal PWM. Approach: The purpose of this research was to study the implementation of SVPWM in a DVR. The proposed control algorithm was investigated through computer simulation using PSCAD/EMTDC software. Results: Simulation and experimental results showed the effectiveness and efficiency of the proposed controller based on SVPWM in mitigating voltage sags in low voltage distribution systems. It was concluded that the controller also works well under both balanced and unbalanced voltage conditions. Conclusion/Recommendations: The simulation and experimental results of a DVR using PSCAD/EMTDC software based on the SVPWM technique showed clearly the performance of the DVR in mitigating voltage sags. The DVR operates without any difficulties, injecting the appropriate voltage component to correct rapidly any anomaly in the supply voltage to keep the
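
    The core SVPWM computation the abstract refers to can be sketched in a few lines: locate the sector of the reference voltage vector and compute the dwell times of the two adjacent active vectors. The formulas below are the standard textbook SVPWM relations, and the DC-link voltage, switching period and reference values are assumed examples, not the paper's DVR parameters.

```python
# Illustrative sketch of the core space vector PWM (SVPWM) computation
# (standard equations; not the paper's DVR controller itself).
import math

def svpwm_dwell_times(v_alpha, v_beta, v_dc, t_s):
    v_ref = math.hypot(v_alpha, v_beta)
    theta = math.atan2(v_beta, v_alpha) % (2 * math.pi)
    sector = int(theta // (math.pi / 3)) + 1           # sectors 1..6
    theta_rel = theta - (sector - 1) * math.pi / 3     # angle inside the sector
    m = math.sqrt(3) * v_ref / v_dc                    # modulation index (<= 1 linear)
    t1 = t_s * m * math.sin(math.pi / 3 - theta_rel)   # first active vector
    t2 = t_s * m * math.sin(theta_rel)                 # second active vector
    t0 = t_s - t1 - t2                                 # zero vectors
    return sector, t1, t2, t0

# Example: one sampling instant, 560 V DC link, 100 us switching period
print(svpwm_dwell_times(v_alpha=200.0, v_beta=150.0, v_dc=560.0, t_s=100e-6))
```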

  1. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    Haan, de G.; Veer, van der G.C.; Vliet, van J.C.

    1991-01-01

    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in human-computer interaction.

  2. Applied predictive analytics principles and techniques for the professional data analyst

    CERN Document Server

    Abbott, Dean

    2014-01-01

    Learn the art and science of predictive analytics - techniques that get results. Predictive analytics is what translates big data into meaningful, usable business information. Written by a leading expert in the field, this guide examines the science of the underlying algorithms as well as the principles and best practices that govern the art of predictive analytics. It clearly explains the theory behind predictive analytics, teaches the methods, principles, and techniques for conducting predictive analytics projects, and offers tips and tricks that are essential for successful predictive modeling.

  3. People Recognition for Loja ECU911 applying artificial vision techniques

    Directory of Open Access Journals (Sweden)

    Diego Cale

    2016-05-01

    Full Text Available This article presents a technological proposal based on artificial vision which aims to search for people in an intelligent way using IP video cameras. Currently, the manual searching process is time- and resource-demanding in contrast to an automated one, which means that it could be replaced. In order to obtain optimal results, three different artificial vision techniques were analyzed (Eigenfaces, Fisherfaces, Local Binary Patterns Histograms). The selection process considered factors like lighting changes, image quality and changes in the angle of focus of the camera. Besides, a literature review was conducted to evaluate several points of view regarding artificial vision techniques.
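
    Of the three techniques compared, Local Binary Patterns Histograms is the simplest to illustrate. The sketch below is a bare-bones LBPH in pure NumPy on random arrays standing in for aligned face crops; it shows the encoding and the histogram comparison only, not the article's full recognition pipeline.

```python
# Minimal sketch of the LBPH idea: threshold the 8 neighbours of each
# pixel into a code, histogram the codes, compare histograms.
import numpy as np

def lbp_histogram(img):
    c = img[1:-1, 1:-1]
    # 8 neighbours, clockwise from top-left; each contributes one bit
    shifts = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        n = img[dy:dy + c.shape[0], dx:dx + c.shape[1]]
        code |= (n >= c).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

def chi_square(h1, h2, eps=1e-10):
    return np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

rng = np.random.default_rng(0)
a = rng.integers(0, 256, (64, 64))      # stand-ins for grayscale face crops
b = rng.integers(0, 256, (64, 64))
print("distance a-a:", chi_square(lbp_histogram(a), lbp_histogram(a)))
print("distance a-b:", chi_square(lbp_histogram(a), lbp_histogram(b)))
```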

  4. Applied Techniques for High Bandwidth Data Transfers across Wide Area Networks

    Institute of Scientific and Technical Information of China (English)

    Jason Lee; Bill Allcock; et al.

    2001-01-01

    Large distributed systems such as Computational/Data Grids require large amounts of data to be co-located with the computing facilities for processing. From our work developing a scalable distributed network cache, we have gained experience with techniques necessary to achieve high data throughput over high bandwidth Wide Area Networks (WAN). In this paper, we discuss several hardware and software design techniques, and then describe their application to an implementation of an enhanced FTP protocol called GridFTP. We describe results from the Supercomputing 2000 conference.

  5. Detecting Environmental Change Using Self-Organizing Map Techniques Applied to the ERA-40 Database

    Directory of Open Access Journals (Sweden)

    Mohamed Gebri

    2011-05-01

    Full Text Available Data mining is a valuable tool in meteorological applications. Properly selected data mining techniques enable researchers to process and analyze massive amounts of data collected by satellites and other instruments. Large spatial-temporal datasets can be analyzed using different linear and nonlinear methods. The Self-Organizing Map (SOM) is a promising tool for clustering and visualizing high dimensional data and mapping spatial-temporal datasets describing nonlinear phenomena. We present results of the application of the SOM technique in regions of interest within the European re-analysis data set. The possibility of detecting climate change signals through the visualization capability of SOM tools is examined.
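
    A minimal SOM training loop makes the clustering idea concrete. The sketch below trains a small map on synthetic feature vectors; the grid size, learning rate and neighbourhood schedule are illustrative assumptions, not the study's configuration for the ERA-40 data.

```python
# Minimal sketch of Self-Organizing Map (SOM) training on synthetic
# feature vectors standing in for spatial-temporal climate features.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(500, 3))           # 500 samples, 3 features

gx, gy = 6, 6                              # 6x6 map (assumption)
w = rng.normal(size=(gx, gy, 3))           # codebook vectors
coords = np.stack(np.meshgrid(np.arange(gx), np.arange(gy),
                              indexing="ij"), axis=-1)

n_iter, lr0, sigma0 = 2000, 0.5, 3.0
for it in range(n_iter):
    x = data[rng.integers(len(data))]
    # best matching unit (BMU): closest codebook vector
    bmu = np.unravel_index(np.argmin(((w - x) ** 2).sum(axis=2)), (gx, gy))
    # decaying learning rate and neighbourhood radius
    frac = it / n_iter
    lr = lr0 * (1 - frac)
    sigma = sigma0 * (1 - frac) + 0.5
    dist2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
    h = np.exp(-dist2 / (2 * sigma ** 2))[..., None]   # neighbourhood kernel
    w += lr * h * (x - w)                              # pull units toward x

print("trained codebook shape:", w.shape)
```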

  6. Monitoring of applied stress in concrete using ultrasonic full-waveform comparison techniques

    Science.gov (United States)

    Hafiz, Ali; Schumacher, Thomas

    2017-04-01

    Ultrasonic testing is a non-destructive approach commonly used to evaluate concrete structures. A challenge with concrete is that it is heterogeneous, which causes multiple wave scattering resulting in longer and more complex wave paths. The recorded ultrasonic waveform can be divided into two portions: the coherent (or early) portion and the diffuse (or coda) portion. While conventional methods only use the coherent portion, e.g. the first wave arrival to determine the wave velocity, we are interested in the entire waveform, i.e. until the wave amplitude is completely dampened out. The objective of this study was to determine what portion of the signal is most sensitive to applied stress and the associated formation and propagation of cracks. For this purpose, the squared Pearson correlation coefficient, R², was used, which provides a measure for the strength of the linear relationship (or similarity) between a reference waveform under no stress and a waveform recorded at a certain level of applied stress. Additionally, a signal energy-based filter was developed and used to detect signals that captured acoustic emissions generated during the loading process. The experimental work for this study consisted of an active monitoring approach employing a pitch-catch setup with two ultrasonic transducers, one transmitter and one receiver, attached to ⌀152 x 305 mm concrete cylinder specimens, which were loaded monotonically to failure. Our results show that applied stress correlates well with R², with remarkable sensitivity to small applied stresses. Also, the relationship between R² and applied stress is linear for applied stresses less than 50% of the ultimate stress.
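
    The similarity measure itself is a one-liner; the sketch below computes R² between a synthetic reference waveform and a perturbed copy standing in for a recording under load. The decaying sine is an invented stand-in for the recorded ultrasonic signals.

```python
# Sketch of the study's similarity measure: squared Pearson correlation
# R^2 between a reference waveform and a waveform recorded under load.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1e-3, 4096)                                 # 1 ms record
reference = np.sin(2 * np.pi * 54e3 * t) * np.exp(-t / 3e-4)   # synthetic coda
stressed = np.roll(reference, 5) + 0.05 * rng.normal(size=t.size)

r = np.corrcoef(reference, stressed)[0, 1]
r_squared = r ** 2
print(f"R^2 = {r_squared:.3f}")   # drops as applied stress perturbs the coda
```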

  7. Essays on Applied Resource Economics Using Bioeconomic Optimization Models

    Science.gov (United States)

    Affuso, Ermanno

    With rising demographic growth, there is increasing interest in analytical studies that assess alternative policies to provide an optimal allocation of scarce natural resources while ensuring environmental sustainability. This dissertation consists of three essays in applied resource economics that are interconnected methodologically within the agricultural production sector of economics. The first chapter examines the sustainability of biofuels by simulating and evaluating an agricultural voluntary program that aims to increase the land use efficiency in the production of first-generation biofuels in the state of Alabama. The results show that participatory decisions may increase the net energy value of biofuels by 208% and reduce emissions by 26%, significantly contributing to the state energy goals. The second chapter tests the hypothesis of overuse of fertilizers and pesticides in U.S. peanut farming with respect to other inputs and addresses genetic research to reduce the use of the most overused chemical input. The findings suggest that peanut producers overuse fungicide with respect to any other input and that fungus-resistant genetically engineered peanuts may increase the producer welfare by up to 36.2%. The third chapter implements a bioeconomic model, which consists of a biophysical model and a stochastic dynamic recursive model that is used to measure the potential economic and environmental welfare of cotton farmers derived from a rotation scheme that uses peanut as a complementary crop. The results show that the rotation scenario would lower farming costs by 14% due to nitrogen credits from prior peanut land use and reduce non-point source pollution from nitrogen runoff by 6.13% compared to continuous cotton farming.

  8. Applying the INN model to the MaxClique problem

    Energy Technology Data Exchange (ETDEWEB)

    Grossman, T.

    1993-09-01

    Max-Clique is the problem of finding the largest clique in a given graph. It is not only NP-hard, but, as recent results suggest, even hard to approximate. Nevertheless it is still very important to develop and test practical algorithms that will find approximate solutions for the maximum clique problem on various graphs stemming from numerous applications. Indeed, many different types of algorithmic approaches are applied to that problem. Several neural networks and related algorithms were applied recently to combinatorial optimization problems in general and to the Max-Clique problem in particular. These neural nets are dynamical systems which minimize a cost (or computational "energy") function that represents the optimization problem, the Max-Clique in our case. Therefore they all belong to the class of integer programming algorithms surveyed in the Pardalos and Xue review. The work presented here is a development and improvement of a neural network algorithm that was introduced recently. In the previous work, we have considered two Hopfield type neural networks, the INN and the HcN, and their application to the max-clique problem. In this paper, I concentrate on the INN network and present an improved version of the t-A algorithm that was introduced in that work. The rest of this paper is organized as follows: in section 2, I describe the INN model and how it implements a given graph. In section 3, it is characterized in terms of graph theory. In particular, the stable states of the network are mapped to the maximal cliques of its underlying graph. In section 4, I present the t-Annealing algorithm and an improved version of it, the Adaptive t-Annealing. Several experiments done with these algorithms on benchmark graphs are reported in section 5, and the efficiency of the new algorithm is demonstrated. I conclude with a short discussion.
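
    The energy-minimisation idea can be illustrated without reproducing the INN or the t-Annealing algorithm. The sketch below anneals a generic binary-state energy that rewards clique size and penalises non-adjacent pairs; graph density, penalty weight and cooling schedule are all invented for illustration.

```python
# Generic sketch of energy minimisation for Max-Clique (not the INN or
# t-A algorithm): anneal E(x) = -|x| + P * (#non-edges inside the set).
import numpy as np

rng = np.random.default_rng(0)
n = 30
A = (rng.random((n, n)) < 0.5).astype(int)
A = np.triu(A, 1); A = A + A.T                       # random undirected graph

P = 2.0                                              # penalty weight > 1

def energy(x):
    sel = np.flatnonzero(x)
    pairs = len(sel) * (len(sel) - 1) // 2
    edges = A[np.ix_(sel, sel)].sum() // 2
    return -x.sum() + P * (pairs - edges)            # non-edges are penalised

x = np.zeros(n, dtype=int)
e = energy(x)
T = 1.0
for step in range(20000):
    i = rng.integers(n)
    x[i] ^= 1                                        # propose flipping one unit
    e_new = energy(x)
    if e_new <= e or rng.random() < np.exp((e - e_new) / T):
        e = e_new                                    # accept the flip
    else:
        x[i] ^= 1                                    # reject: undo
    T = max(0.01, T * 0.9997)                        # geometric cooling

sel = np.flatnonzero(x)
is_clique = np.all(A[np.ix_(sel, sel)] + np.eye(len(sel), dtype=int))
print("clique found:" if is_clique else "infeasible set:", sel)
```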

  9. A Technical Review of Electrochemical Techniques Applied to Microbiologically Influenced Corrosion

    Science.gov (United States)

    1991-01-01

    in the literature for the study of MIC phenomena. Videla [65] has used this technique in a study of the action of Cladosporium resinae growth on the ... ROSALES, Corrosion 44, 638 (1988). 65. H. A. VIDELA, The action of Cladosporium resinae growth on the electrochemical behavior of aluminum. Proc. Int. Conf.

  10. Reverse Time Migration: A Seismic Imaging Technique Applied to Synthetic Ultrasonic Data

    Directory of Open Access Journals (Sweden)

    Sabine Müller

    2012-01-01

    Full Text Available Ultrasonic echo testing is a more and more frequently used technique in civil engineering to investigate concrete building elements, to measure thickness as well as to locate and characterise built-in components or inhomogeneities. Currently the Synthetic Aperture Focusing Technique (SAFT), which is closely related to Kirchhoff migration, is used in most cases for imaging. However, this method is known to have difficulty imaging steeply dipping interfaces as well as lower boundaries of tubes, voids or similar objects. We have transferred a processing technique from geophysics, Reverse Time Migration (RTM), to improve the imaging of complicated geometries. By using the information from wide angle reflections as well as from multiple events there are fewer limitations compared to SAFT. As a drawback, the required computing power is significantly higher than for the techniques currently used. Synthetic experiments have been performed on polyamide and concrete specimens to show the improvements compared to SAFT. We have been able to image vertical interfaces of step-like structures as well as the lower boundaries of circular objects. It has been shown that RTM is a step forward for ultrasonic testing in civil engineering.

  11. Study of Phase Reconstruction Techniques applied to Smith-Purcell Radiation Measurements

    CERN Document Server

    Delerue, Nicolas; Bezshyyko, Oleg; Khodnevych, Vitalii

    2015-01-01

    Measurements of coherent radiation at accelerators typically give the absolute value of the beam profile Fourier transform but not its phase. Phase reconstruction techniques such as the Hilbert transform or Kramers-Kronig reconstruction are used to recover this phase. We report a study of the performance of these methods and of how to optimize the reconstructed profiles.
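
    The Hilbert-transform route mentioned here has a compact form: the minimum-phase estimate is the negative Hilbert transform of the log-magnitude spectrum. The sketch below applies this relation to a synthetic Gaussian "bunch" profile standing in for beam data; it is a bare illustration of the relation, not the authors' full reconstruction procedure.

```python
# Sketch of phase recovery from a measured magnitude spectrum via the
# minimum-phase (Hilbert / Kramers-Kronig) relation: phi = -H{ln|F|}.
import numpy as np
from scipy.signal import hilbert

n = 1024
z = np.linspace(-5, 5, n)
profile = np.exp(-z**2 / 2)                  # true (unknown) bunch profile

spectrum = np.fft.fft(profile)
magnitude = np.abs(spectrum)                 # what the experiment measures

log_mag = np.log(magnitude + 1e-12)
phi = -np.imag(hilbert(log_mag))             # imag(hilbert(x)) is H{x}

reconstructed = np.real(np.fft.ifft(magnitude * np.exp(1j * phi)))
print("peak of reconstructed profile at sample:", reconstructed.argmax())
```

    Note that the reconstruction carries the usual minimum-phase caveats: the recovered profile may appear shifted or skewed relative to the true one, which is part of what such performance studies quantify.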

  12. Study of Phase Reconstruction Techniques applied to Smith-Purcell Radiation Measurements

    CERN Document Server

    Delerue, Nicolas; Vieille-Grosjean, Mélissa; Bezshyyko, Oleg; Khodnevych, Vitalii

    2014-01-01

    Measurements of coherent radiation at accelerators typically give the absolute value of the beam profile Fourier transform but not its phase. Phase reconstruction techniques such as the Hilbert transform or Kramers-Kronig reconstruction are used to recover this phase. We report a study of the performance of these methods and of how to optimize the reconstructed profiles.

  13. X-ray micro-beam techniques and phase contrast tomography applied to biomaterials

    Science.gov (United States)

    Fratini, Michela; Campi, Gaetano; Bukreeva, Inna; Pelliccia, Daniele; Burghammer, Manfred; Tromba, Giuliana; Cancedda, Ranieri; Mastrogiacomo, Maddalena; Cedola, Alessia

    2015-12-01

    A deeper comprehension of the biomineralization (BM) process is at the basis of tissue engineering and regenerative medicine developments. Several in-vivo and in-vitro studies were dedicated to this purpose via the application of 2D and 3D diagnostic techniques. Here, we develop a new methodology, based on different complementary experimental techniques (X-ray phase contrast tomography, micro-X-ray diffraction and micro-X-ray fluorescence scanning technique) coupled to new analytical tools. A qualitative and quantitative structural investigation, from the atomic to the micrometric length scale, is obtained for engineered bone tissues. The high spatial resolution achieved by X-ray scanning techniques allows us to monitor the bone formation at the first-formed mineral deposit at the organic-mineral interface within a porous scaffold. This work aims at providing a full comprehension of the morphology and functionality of the biomineralization process, which is of key importance for developing new drugs for preventing and healing bone diseases and for the development of bio-inspired materials.

  14. X-ray micro-beam techniques and phase contrast tomography applied to biomaterials

    Energy Technology Data Exchange (ETDEWEB)

    Fratini, Michela, E-mail: michela.fratini@gmail.com [Museo Storico della Fisica e Centro Studi e Ricerche Enrico Fermi, 00184 Roma (Italy); Dipartimento di Scienze, Università di Roma Tre, 00144 Roma (Italy); Campi, Gaetano [Institute of Crystallography, CNR, 00015 Monterotondo, Roma (Italy); Bukreeva, Inna [CNR NANOTEC-Institute of Nanotechnology, 00195 Roma (Italy); P.N. Lebedev Physical Institute RAS, 119991 Moscow (Russian Federation); Pelliccia, Daniele [School of Physics, Monash University, Victoria 3800 (Australia); Burghammer, Manfred [ESRF-The European Synchrotron, 3800 Grenoble (France); Tromba, Giuliana [Sincrotrone Trieste SCpA, 34149 Basovizza, Trieste (Italy); Cancedda, Ranieri; Mastrogiacomo, Maddalena [Dipartimento di Medicina Sperimentale dell’Università di Genova & AUO San Martino-IST Istituto Nazionale per la Ricerca sul Cancro, 16132 Genova (Italy); Cedola, Alessia [CNR NANOTEC-Institute of Nanotechnology, 00195 Roma (Italy)

    2015-12-01

    A deeper comprehension of the biomineralization (BM) process is at the basis of tissue engineering and regenerative medicine developments. Several in-vivo and in-vitro studies were dedicated to this purpose via the application of 2D and 3D diagnostic techniques. Here, we develop a new methodology, based on different complementary experimental techniques (X-ray phase contrast tomography, micro-X-ray diffraction and micro-X-ray fluorescence scanning technique) coupled to new analytical tools. A qualitative and quantitative structural investigation, from the atomic to the micrometric length scale, is obtained for engineered bone tissues. The high spatial resolution achieved by X-ray scanning techniques allows us to monitor the bone formation at the first-formed mineral deposit at the organic–mineral interface within a porous scaffold. This work aims at providing a full comprehension of the morphology and functionality of the biomineralization process, which is of key importance for developing new drugs for preventing and healing bone diseases and for the development of bio-inspired materials.

  15. Do trained practice nurses apply motivational interviewing techniques in primary care consultations?

    NARCIS (Netherlands)

    Noordman, J.; Lee, I. van; Nielen, M.; Vlek, H.; Weijden, T. van; Dulmen, S. van

    2012-01-01

    BACKGROUND: Reducing the prevalence of unhealthy lifestyle behaviour could positively influence health. Motivational interviewing (MI) is used to promote change in unhealthy lifestyle behaviour as part of primary or secondary prevention. Whether MI is actually applied as taught is unknown. Practice

  16. Do trained practice nurses apply motivational interviewing techniques in primary care consultations?

    NARCIS (Netherlands)

    Noordman, J.; Lee, I. van der; Nielen, M.; Vlek, H.; Weijden, T. van der; Dulmen, S. van

    2012-01-01

    Background: Reducing the prevalence of unhealthy lifestyle behaviour could positively influence health. Motivational interviewing (MI) is used to promote change in unhealthy lifestyle behaviour as part of primary or secondary prevention. Whether MI is actually applied as taught is unknown. Practice

  17. Time-lapse motion picture technique applied to the study of geological processes

    Science.gov (United States)

    Miller, R.D.; Crandell, D.R.

    1959-01-01

    Light-weight, battery-operated timers were built and coupled to 16-mm motion-picture cameras having apertures controlled by photoelectric cells. The cameras were placed adjacent to Emmons Glacier on Mount Rainier. The film obtained confirms the view that exterior time-lapse photography can be applied to the study of slow-acting geologic processes.

  18. Quantitative model validation techniques: new insights

    CERN Document Server

    Ling, You

    2012-01-01

    This paper develops new insights into quantitative methods for the validation of computational model prediction. Four types of methods are investigated, namely classical and Bayesian hypothesis testing, a reliability-based method, and an area metric-based method. Traditional Bayesian hypothesis testing is extended based on interval hypotheses on distribution parameters and equality hypotheses on probability distributions, in order to validate models with deterministic/stochastic output for given inputs. Two types of validation experiments are considered - fully characterized (all the model/experimental inputs are measured and reported as point values) and partially characterized (some of the model/experimental inputs are not measured or are reported as intervals). Bayesian hypothesis testing can minimize the risk in model selection by properly choosing the model acceptance threshold, and its results can be used in model averaging to avoid Type I/II errors. It is shown that Bayesian interval hypothesis testing...
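
    One of the four method families named, the reliability-based metric, has a particularly direct Monte Carlo form: the probability that the model-observation difference falls within a tolerance. The sketch below uses invented normal distributions and tolerance; it shows the metric only, not the paper's hypothesis-testing extensions.

```python
# Sketch of a reliability-based validation metric:
# r = P(|model prediction - observation| < eps), estimated by Monte Carlo.
import numpy as np

rng = np.random.default_rng(0)

# Stochastic model output and experimental observations (synthetic)
model_pred = rng.normal(loc=10.0, scale=0.8, size=100_000)
observation = rng.normal(loc=10.3, scale=0.5, size=100_000)

eps = 1.0                                    # validation tolerance (assumed)
r = np.mean(np.abs(model_pred - observation) < eps)
print(f"reliability metric r = {r:.3f}")     # accept the model if r exceeds a threshold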

  19. Cellular Automata Models Applied to the Study of Landslide Dynamics

    Science.gov (United States)

    Liucci, Luisa; Melelli, Laura; Suteanu, Cristian

    2015-04-01

    Landslides are caused by complex processes controlled by the interaction of numerous factors. Increasing efforts are being made to understand the spatial and temporal evolution of this phenomenon, and the use of remote sensing data is making significant contributions to improving forecasts. This paper studies landslides seen as complex dynamic systems, in order to investigate their potential Self Organized Critical (SOC) behavior, and in particular, scale-invariant aspects of processes governing the spatial development of landslides and their temporal evolution, as well as the mechanisms involved in driving the system and keeping it in a critical state. For this purpose, we build Cellular Automata Models, which have been shown to be capable of reproducing the complexity of real world features using a small number of variables and simple rules, thus allowing for the reduction of the number of input parameters commonly used in the study of processes governing landslide evolution, such as those linked to the geomechanical properties of soils. This type of model has already been successfully applied in studying the dynamics of other natural hazards, such as earthquakes and forest fires. The basic structure of the model is composed of three modules: (i) An initialization module, which defines the topographic surface at time zero as a grid of square cells, each described by an altitude value; the surface is acquired from real Digital Elevation Models (DEMs). (ii) A transition function, which defines the rules used by the model to update the state of the system at each iteration. The rules use a stability criterion based on the slope angle and introduce a variable describing the weakening of the material over time, caused for example by rainfall. The weakening brings some sites of the system out of equilibrium thus causing the triggering of landslides, which propagate within the system through local interactions between neighboring cells. By using different rates of
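
    A toy automaton in the same spirit fits in a few lines: cells hold altitude, a time-dependent weakening lowers the stability threshold, and over-steep cells shed material to their lowest neighbour. Every rule and constant below is an illustrative assumption, not the authors' calibrated model, and the synthetic slope stands in for a real DEM.

```python
# Toy cellular automaton for slope instability (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
h = np.cumsum(rng.random((50, 50)), axis=0)[::-1] * 2.0   # synthetic DEM (a slope)

critical_slope = 1.5          # stability criterion (assumed threshold)
weakening_rate = 0.001        # e.g. rainfall-driven weakening per step

for step in range(200):
    threshold = critical_slope - weakening_rate * step    # material weakens in time
    moved = 0
    for i in range(1, h.shape[0] - 1):
        for j in range(1, h.shape[1] - 1):
            nbrs = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            k = min(nbrs, key=lambda p: h[p])             # lowest neighbour
            if h[i, j] - h[k] > threshold:                # cell is unstable
                dq = (h[i, j] - h[k]) / 2
                h[i, j] -= dq; h[k] += dq                 # local mass transfer
                moved += 1
    if step % 50 == 0:
        print(f"step {step}: {moved} unstable cells toppled")
```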

  20. Techniques and Simulation Models in Risk Management

    OpenAIRE

    Mirela GHEORGHE

    2012-01-01

    In the present paper, the scientific approach of the research starts from the theoretical framework of the simulation concept and then continues in the setting of practical reality, thus providing simulation models for a broad range of inherent risks specific to any organization, and simulating those models using the informatics instrument @Risk (Palisade). The reason behind this research lies in the need for simulation models that will allow the person in charge of decision making i...
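
    The kind of Monte Carlo risk aggregation performed with @Risk can be sketched in plain NumPy: each inherent risk gets a triangular loss distribution (minimum, most likely, maximum) and the simulated losses are aggregated. The three risks and all parameter values below are invented for illustration.

```python
# Sketch of @Risk-style Monte Carlo risk simulation in plain NumPy.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

losses = (
    rng.triangular(10_000, 25_000, 80_000, n)    # operational risk (assumed)
    + rng.triangular(0, 5_000, 40_000, n)        # compliance risk (assumed)
    + rng.triangular(2_000, 10_000, 30_000, n)   # IT risk (assumed)
)

print(f"expected annual loss: {losses.mean():,.0f}")
print(f"95th percentile (VaR-style): {np.percentile(losses, 95):,.0f}")
```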

  1. Wavelet-based Adaptive Techniques Applied to Turbulent Hypersonic Scramjet Intake Flows

    CERN Document Server

    Frauholz, Sarah; Reinartz, Birgit U; Müller, Siegfried; Behr, Marek

    2013-01-01

    The simulation of hypersonic flows is computationally demanding due to large gradients of the flow variables caused by strong shock waves and thick boundary or shear layers. The resolution of those gradients imposes the use of extremely small cells in the respective regions. Taking turbulence into account intensifies the variation in scales even more. Furthermore, hypersonic flows have been shown to be extremely grid sensitive. For the simulation of three-dimensional configurations of engineering applications, this results in a huge number of cells and prohibitive computational time. Therefore, modern adaptive techniques can provide a gain with respect to computational costs and accuracy, allowing the generation of locally highly resolved flow regions where they are needed and retaining an otherwise smooth distribution. An h-adaptive technique based on wavelets is employed for the solution of hypersonic flows. The compressible Reynolds averaged Navier-Stokes equations are solved using a differential Reynolds s...

  2. Magnetic Resonance Techniques Applied to the Diagnosis and Treatment of Parkinson’s Disease

    Science.gov (United States)

    de Celis Alonso, Benito; Hidalgo-Tobón, Silvia S.; Menéndez-González, Manuel; Salas-Pacheco, José; Arias-Carrión, Oscar

    2015-01-01

    Parkinson’s disease (PD) affects at least 10 million people worldwide. It is a neurodegenerative disease, which is currently diagnosed by neurological examination. No neuroimaging investigation or blood biomarker is available to aid diagnosis and prognosis. Most effort toward diagnosis using magnetic resonance (MR) has been focused on the use of structural/anatomical neuroimaging and diffusion tensor imaging (DTI). However, deep brain stimulation, a current strategy for treating PD, is guided by MR imaging (MRI). For clinical prognosis, diagnosis, and follow-up investigations, blood oxygen level-dependent MRI, DTI, spectroscopy, and transcranial magnetic stimulation have been used. These techniques represent the state of the art in the last 5 years. Here, we focus on MR techniques for the diagnosis and treatment of Parkinson’s disease. PMID:26191037

  3. Simple parameter estimation for complex models — Testing evolutionary techniques on 3-dimensional biogeochemical ocean models

    Science.gov (United States)

    Mattern, Jann Paul; Edwards, Christopher A.

    2017-01-01

    Parameter estimation is an important part of numerical modeling and often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy to implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than random, uninformed parameter search but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.
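
    The workflow the abstract describes, minimising a model-observation cost over bounded parameters with an evolutionary technique, can be sketched with SciPy's differential evolution on a toy surrogate. The exponential "model", synthetic observations and bounds below are stand-ins for the 3-D biogeochemical simulation, not the paper's setup.

```python
# Sketch of evolutionary parameter estimation against a misfit cost.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 50)
true_params = (0.7, 1.8)
obs = true_params[0] * np.exp(-true_params[1] * t) + rng.normal(0, 0.01, t.size)

def cost(p):
    sim = p[0] * np.exp(-p[1] * t)          # cheap surrogate for the ocean model
    return np.mean((sim - obs) ** 2)        # model-observation misfit

result = differential_evolution(cost, bounds=[(0.1, 2.0), (0.1, 5.0)],
                                maxiter=100, popsize=15, seed=1)
print("estimated parameters:", result.x, "cost:", result.fun)
```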

  4. Improving throughput and user experience for information intensive websites by applying HTTP compression technique.

    Science.gov (United States)

    Malla, Ratnakar

    2008-11-06

    HTTP compression is a technique specified as part of the W3C HTTP 1.0 standard. It allows HTTP servers to take advantage of the GZIP compression technology that is built into the latest browsers. A brief survey of medical informatics websites shows that compression is not enabled. With compression enabled, downloaded file sizes are reduced by more than 50% and the typical transaction time is also reduced from 20 to 8 minutes, thus providing a better user experience.
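
    The claimed size reduction is easy to reproduce for repetitive HTML with Python's standard library; real deployments enable this on the server side (for example via the web server's compression module) rather than in application code. The payload below is synthetic.

```python
# Sketch of the saving GZIP gives on repetitive HTML-like content.
import gzip

html = b"<tr><td>patient record</td><td>value</td></tr>\n" * 2000
compressed = gzip.compress(html)

ratio = 100 * (1 - len(compressed) / len(html))
print(f"original: {len(html)} bytes, gzipped: {len(compressed)} bytes "
      f"({ratio:.0f}% smaller)")
```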

  5. Microstrip coupling techniques applied to thin-film Josephson junctions at microwave frequencies

    DEFF Research Database (Denmark)

    Sørensen, O H; Pedersen, Niels Falsig; Mygind, Jesper

    1981-01-01

    Three different schemes for coupling to low impedance Josephson devices have been investigated. They all employ superconducting thin-film microstrip circuit techniques. The schemes are: (i) a quarterwave stepped impedance transformer, (ii) a microstrip resonator, (iii) an adjustable impedance...... transformer in inverted microstrip. Using single microbridges to probe the performance we found that the most primising scheme in terms of coupling efficiency and useful bandwidth was the adjustable inverted microstrip transformer....

  6. Vibrational techniques applied to photosynthesis: Resonance Raman and fluorescence line-narrowing.

    Science.gov (United States)

    Gall, Andrew; Pascal, Andrew A; Robert, Bruno

    2015-01-01

    Resonance Raman spectroscopy may yield precise information on the conformation of, and the interactions assumed by, the chromophores involved in the first steps of the photosynthetic process. Selectivity is achieved via resonance with the absorption transition of the chromophore of interest. Fluorescence line-narrowing spectroscopy is a complementary technique, in that it provides the same level of information (structure, conformation, interactions), but in this case for the emitting pigment(s) only (whether isolated or in an ensemble of interacting chromophores). The selectivity provided by these vibrational techniques allows for the analysis of pigment molecules not only when they are isolated in solvents, but also when embedded in soluble or membrane proteins and even, as shown recently, in vivo. They can be used, for instance, to relate the electronic properties of these pigment molecules to their structure and/or the physical properties of their environment. These techniques are even able to follow subtle changes in chromophore conformation associated with regulatory processes. After a short introduction to the physical principles that govern resonance Raman and fluorescence line-narrowing spectroscopies, the information content of the vibrational spectra of chlorophyll and carotenoid molecules is described in this article, together with the experiments which helped in determining which structural parameter(s) each vibrational band is sensitive to. A selection of applications is then presented, in order to illustrate how these techniques have been used in the field of photosynthesis, and what type of information has been obtained. This article is part of a Special Issue entitled: Vibrational spectroscopies and bioenergetic systems.

  7. A neuro-evolutive technique applied for predicting the liquid crystalline property of some organic compounds

    Science.gov (United States)

    Drăgoi, Elena-Niculina; Curteanu, Silvia; Lisa, Cătălin

    2012-10-01

    A simple self-adaptive version of the differential evolution algorithm was applied for simultaneous architectural and parametric optimization of feed-forward neural networks, used to classify the liquid crystalline property of a series of organic compounds. The developed optimization methodology was called self-adaptive differential evolution neural network (SADE-NN) and has the following characteristics: the base vector used is chosen as the best individual in the current population, two differential terms participate in the mutation process, the crossover type is binomial, a simple self-adaptive mechanism is employed to determine the near-optimal control parameters of the algorithm, and the integration of the neural network into the differential evolution algorithm is performed using a direct encoding scheme. It was found that a network with one hidden layer is able to make accurate predictions, indicating that the proposed methodology is efficient and, owing to its flexibility, it can be applied to a large range of problems.
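
    The listed ingredients (best base vector, two differential terms, binomial crossover, self-adapted control parameters) can be assembled into a compact sketch. The self-adaptation rule below follows the common jDE scheme, which may differ from the paper's exact mechanism, and a toy sphere function stands in for the network training error.

```python
# Sketch of a self-adaptive DE with best/2 mutation and binomial crossover.
import numpy as np

rng = np.random.default_rng(0)

def objective(x):                   # toy stand-in for the NN classification error
    return np.sum(x ** 2)

n_pop, dim, n_gen = 20, 5, 200
pop = rng.uniform(-5, 5, (n_pop, dim))
fit = np.array([objective(x) for x in pop])
F = np.full(n_pop, 0.5)
CR = np.full(n_pop, 0.9)
tau1 = tau2 = 0.1                   # self-adaptation probabilities (jDE-style)

for gen in range(n_gen):
    best = pop[fit.argmin()]
    for i in range(n_pop):
        # occasionally resample each individual's own control parameters
        Fi = 0.1 + 0.9 * rng.random() if rng.random() < tau1 else F[i]
        CRi = rng.random() if rng.random() < tau2 else CR[i]
        r = rng.choice([j for j in range(n_pop) if j != i], 4, replace=False)
        # best base vector plus two differential terms (best/2 mutation)
        mutant = best + Fi * (pop[r[0]] - pop[r[1]]) + Fi * (pop[r[2]] - pop[r[3]])
        cross = rng.random(dim) < CRi
        cross[rng.integers(dim)] = True               # binomial crossover
        trial = np.where(cross, mutant, pop[i])
        f_trial = objective(trial)
        if f_trial <= fit[i]:                         # greedy selection; winning
            pop[i], fit[i], F[i], CR[i] = trial, f_trial, Fi, CRi   # params survive

print("best error:", fit.min())
```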

  8. BCS-Hubbard model applied to anisotropic superconductors

    Energy Technology Data Exchange (ETDEWEB)

    Millan, J.S., E-mail: smillan@pampano.unacar.mx [Facultad de Ingenieria, Universidad Autonoma del Carmen, Cd. del Carmen, 24180 Campeche (Mexico); Perez, L.A. [Instituto de Fisica, Universidad Nacional Autonoma de Mexico, A.P. 20-364, 01000, Mexico D.F. (Mexico); Wang, C. [Instituto de Investigaciones en Materiales, Universidad Nacional Autonoma de Mexico, A.P. 70-360, 04510, Mexico D.F. (Mexico)

    2011-11-15

    The BCS formalism applied to a Hubbard model, including correlated hoppings, is used to study d-wave superconductors. The theoretical Tc vs. n relationship is compared with experimental data from BiSr(2-x)La(x)CuO(6+δ) and La(2-x)Sr(x)CuO4. The results suggest a nontrivial correlation between the hole and the doping concentrations. Based on the BCS formalism, we study the critical temperature (Tc) as a function of electron density (n) in a square lattice by means of a generalized Hubbard model, in which first (Δt) and second neighbour (Δt3) correlated-hopping interactions are included in addition to the repulsive Coulomb ones. We compare the theoretical Tc vs. n relationship with experimental data of the cuprate superconductors BiSr(2-x)La(x)CuO(6+δ) (BSCO) and La(2-x)Sr(x)CuO4 (LSCO). The theory agrees very well with the BSCO data despite the complicated association between Sr concentration (x) and hole doping (p). For the LSCO system, it is observed that in the underdoped regime the Tc vs. n behavior can be associated to different systems with small variations of t'. For the overdoped regime, a more complicated dependence n = 1 - p/2 fits better than n = 1 - p. On the other hand, it is proposed that the second neighbour hopping ratio (t'/t) should be replaced by the effective mean field hopping ratio t'(MF)/t(MF), which can be very sensitive to small changes of t' due to the doping.

  9. Defining Requirements and Applying Information Modeling for Protecting Enterprise Assets

    Science.gov (United States)

    Fortier, Stephen C.; Volk, Jennifer H.

    The advent of terrorist threats has heightened local, regional, and national governments' interest in emergency response and disaster preparedness. The threat of natural disasters also challenges emergency responders to act swiftly and in a coordinated fashion. When a disaster occurs, an ad hoc coalition of pre-planned groups usually forms to respond to the incident. History has shown that these “systems of systems” do not interoperate very well. Communications between fire, police and rescue components either do not work or are inefficient. Government agencies, non-governmental organizations (NGOs), and private industry use a wide array of software platforms for managing data about emergency conditions, resources and response activities. Most of these are stand-alone systems with very limited capability for data sharing with other agencies or other levels of government. Information technology advances have facilitated the movement towards an integrated and coordinated approach to emergency management. Other communication mechanisms, such as video teleconferencing, digital television and radio broadcasting, are being utilized to combat the challenges of emergency information exchange. Recent disasters, such as Hurricane Katrina and the tsunami in Indonesia, have illuminated the weaknesses in emergency response. This paper will discuss the need for defining requirements for components of ad hoc coalitions which are formed to respond to disasters. A goal of our effort was to develop a proof of concept that applying information modeling to the business processes used to protect and mitigate potential loss of an enterprise was feasible. These activities would be modeled both pre- and post-incident.

  10. A spectrophotometric model applied to cluster galaxies: the WINGS dataset

    CERN Document Server

    Fritz, J; Bettoni, D; Cava, A; Couch, W J; D'Onofrio, M; Dressler, A; Fasano, G; Kjaergaard, P; Moles, M; Varela, J

    2007-01-01

    [Abridged] The WIde-field Nearby Galaxy-cluster Survey (WINGS) is a project aiming at the study of the galaxy populations in clusters in the local universe (0.04 < z < 0.07). [...] A key feature of the model is the possibility of treating dust extinction as a function of age, allowing younger stars to be more obscured than older ones. Our technique, for the first time, takes into account this feature in a spectral fitting code. A set of template spectra spanning a wide range of star formation histories is built, with features closely resembling those of typical spectra in our sample in terms of spectral resolution, noise and wavelength coverage. Our method of analyzing these spectra allows us to test the reliability and the uncertainties related to each physical parameter we are inferring. The well-known degeneracy problem, i.e. the non-unique...

  11. IPR techniques applied to a multimedia environment in the HYPERMEDIA project

    Science.gov (United States)

    Munoz, Alberto; Ribagorda, Arturo; Sierra, Jose M.

    1999-04-01

    Watermarking techniques have proved to be a good method for protecting intellectual copyrights in digital formats. But the ease of processing information in digital form also offers many opportunities for eliminating marks embedded in the data, owing to the wide variety of techniques available to modify digital information. This paper analyzes a selection of the most interesting methods for image watermarking in order to test their qualities. The comparison of these watermarking techniques has revealed new interesting lines of work. Some changes and extensions to these methods are proposed to increase their robustness against some usual attacks and specific watermark attacks. This work has been carried out in order to provide the HYPERMEDIA project with an efficient tool for protecting IPR. The objective of this project is to establish an experimental stage on continuous multimedia material (audiovisuals) handling and delivering in a multimedia service environment, allowing the user to navigate in the hyperspace through databases which belong to actors of the service chain, while protecting the IPR of authors or owners.
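
    To make the robustness trade-off concrete, the sketch below embeds a binary mark in the least-significant-bit plane of an image, the simplest (and most fragile) watermarking scheme; the methods compared in HYPERMEDIA are more elaborate precisely because an LSB mark does not survive common attacks. The arrays stand in for decoded images.

```python
# Minimal sketch of least-significant-bit (LSB) image watermarking.
import numpy as np

rng = np.random.default_rng(0)
image = rng.integers(0, 256, (128, 128), dtype=np.uint8)   # host image
mark = rng.integers(0, 2, (128, 128), dtype=np.uint8)      # binary watermark

watermarked = (image & 0xFE) | mark          # overwrite the LSB plane
extracted = watermarked & 1                  # recover the mark

print("mark recovered exactly:", np.array_equal(extracted, mark))
# Note: any re-quantisation (e.g. JPEG) destroys an LSB mark -- which is
# why robustness against usual attacks drives the compared techniques.
```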

  12. An efficient permeability scaling-up technique applied to the discretized flow equations

    Energy Technology Data Exchange (ETDEWEB)

    Urgelli, D.; Ding, Yu [Institut Francais du Petrole, Rueil Malmaison (France)

    1997-08-01

    Grid-block permeability scaling-up for numerical reservoir simulations has been discussed for a long time in the literature. It is now recognized that a full permeability tensor is needed to get an accurate reservoir description at large scale. However, two major difficulties are encountered: (1) grid-block permeability cannot be properly defined because it depends on boundary conditions; (2) discretization of flow equations with a full permeability tensor is not straightforward and little work has been done on this subject. In this paper, we propose a new method, which allows us to get around both difficulties. As the two major problems are closely related, a global approach will preserve the accuracy. So, in the proposed method, the permeability up-scaling technique is integrated in the discretized numerical scheme for flow simulation. The permeability is scaled-up via the transmissibility term, in accordance with the fluid flow calculation in the numerical scheme. A finite-volume scheme is particularly studied, and the transmissibility scaling-up technique for this scheme is presented. Some numerical examples are tested for flow simulation. This new method is compared with some published numerical schemes for full permeability tensor discretization where the full permeability tensor is scaled-up through various techniques. Comparing the results with fine grid simulations shows that the new method is more accurate and more efficient.
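
    The transmissibility term through which permeability enters such a finite-volume scheme has a simple scalar form: the half-block permeabilities of two neighbouring cells combine harmonically. The sketch below shows this scalar case only; the paper's full-tensor treatment integrated into the discretization is considerably more involved.

```python
# Sketch of a two-point flux transmissibility with harmonic averaging.
import numpy as np

def transmissibility(k1, k2, area, dx):
    """Transmissibility between two equal-sized neighbouring grid blocks."""
    k_harmonic = 2.0 * k1 * k2 / (k1 + k2)   # harmonic average of permeabilities
    return k_harmonic * area / dx            # flow rate = T * (p1 - p2) / viscosity

# Fine-scale permeabilities in series are dominated by the low value:
print(transmissibility(k1=500e-15, k2=5e-15, area=1.0, dx=10.0))
```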

  13. Reviewing current knowledge in snatch performance and technique: the need for future directions in applied research.

    Science.gov (United States)

    Ho, Lester K W; Lorenzen, Christian; Wilson, Cameron J; Saunders, John E; Williams, Morgan D

    2014-02-01

    This is a review of current research trends in weightlifting literature relating to the understanding of technique and its role in successful snatch performance. Reference to the world records in the snatch from the 1960s onwards indicates little progress across all weight categories. With such mediocre advances in performance at the International level, there is a need to better understand how snatch technique can improve performance even if only by a small margin. Methods of data acquisition for technical analysis of the snatch have involved mostly 2-dimensional barbell and joint kinematics. Although key variables which play a role in the successful outcome of a snatch lift have been heavily investigated, few studies have combined variables relating both the barbell and the weightlifter in their analyses. This suggests the need for a more detailed approach integrating both barbell-related and weightlifter-related data to enhance understanding of the mechanics of a successful lift. Currently, with the aid of technical advances in motion analysis, data acquisition, and methods of analysis, a more accurate representation of the movement can be provided. Better ways of understanding the key characteristics of technique in the snatch could provide the opportunity for more effective individualized feedback from the coach to the athlete, which should in turn lead to improved performance in competition.

  14. A photoacoustic technique applied to detection of ethylene emissions in edible coated passion fruit

    Energy Technology Data Exchange (ETDEWEB)

    Alves, G V L; Santos, W C dos; Vargas, H; Silva, M G da [Laboratorio de Ciencias FIsicas, Universidade Estadual do Norte Fluminense Darcy Ribeiro, Av. Alberto Lamego 2000, 28013-602, Campos dos Goytacazes, RJ (Brazil); Waldman, W R [Laboratorio de Ciencias QuImicas, Universidade Estadual do Norte Fluminense Darcy Ribeiro (Brazil); Oliveira, J G, E-mail: mgs@uenf.b [Laboratorio de Melhoramento Genetico Vegetal, Universidade Estadual do Norte Fluminense Darcy Ribeiro (Brazil)

    2010-03-01

    Photoacoustic spectroscopy was applied to study the physiological behavior of passion fruit when coated with edible films. The results have shown a reduction of the ethylene emission rate. Weight loss monitoring has not shown any significant differences between the coated and uncoated passion fruit. On the other hand, slower color changes of coated samples suggest a slowdown of the ripening process in coated passion fruit.

  15. Applied genre analysis: a multi-perspective model

    Directory of Open Access Journals (Sweden)

    Vijay K Bhatia

    2002-04-01

    Full Text Available Genre analysis can be viewed from two different perspectives: it may be seen as a reflection of the complex realities of the world of institutionalised communication, or it may be seen as a pedagogically effective and convenient tool for the design of language teaching programmes, often situated within simulated contexts of classroom activities. This paper makes an attempt to understand and resolve the tension between these two seemingly contentious perspectives to answer the question: "Is generic description a reflection of reality, or a convenient fiction invented by applied linguists?". The paper also discusses issues related to the nature and use of linguistic description in a genre-based educational enterprise, claiming that instead of using generic descriptions as models for linguistic reproduction of conventional forms to respond to recurring social contexts, as is often the case in many communication based curriculum contexts, they can be used as analytical resource to understand and manipulate complex inter-generic and multicultural realisations of professional discourse, which will enable learners to use generic knowledge to respond to novel social contexts and also to create new forms of discourse to achieve pragmatic success as well as other powerful human agendas.

  16. Hybrid multicore/vectorisation technique applied to the elastic wave equation on a staggered grid

    Science.gov (United States)

    Titarenko, Sofya; Hildyard, Mark

    2017-07-01

    In modern physics it has become common to find the solution of a problem by solving numerically a set of PDEs. Whether solving them on a finite difference grid or by a finite element approach, the main calculations are often applied to a stencil structure. In the last decade it has become usual to work with so-called big data problems, where calculations are very heavy and accelerators and modern architectures are widely used. Although CPU and GPU clusters are often used to solve such problems, parallelisation of any calculation ideally starts from single-processor optimisation. Unfortunately, it is impossible to vectorise a stencil-structured loop with high-level instructions. In this paper we suggest a new approach to rearranging the data structure which makes it possible to apply high-level vectorisation instructions to a stencil loop and which results in significant acceleration. The suggested method allows further acceleration if shared memory APIs are used. We show the effectiveness of the method by applying it to an elastic wave propagation problem on a finite difference grid. We have chosen Intel architecture for the test problem and OpenMP (Open Multi-Processing), since they are extensively used in many applications.
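
    The underlying idea, restructuring the computation so a stencil update runs as vector operations rather than a scalar loop, can be illustrated with NumPy whole-array slices standing in for SIMD lanes. The 5-point stencil and grid below are generic stand-ins, not the paper's elastic-wave scheme or its Intel/OpenMP implementation.

```python
# Scalar-loop vs vectorised update of a generic 5-point stencil.
import numpy as np

n = 512
u = np.random.default_rng(0).random((n, n))

def stencil_loop(u):
    out = u.copy()
    for i in range(1, n - 1):
        for j in range(1, n - 1):          # scalar inner loop: hard to vectorise
            out[i, j] = 0.25 * (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1])
    return out

def stencil_vectorised(u):
    out = u.copy()                         # shifted contiguous slices vectorise well
    out[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                              + u[1:-1, :-2] + u[1:-1, 2:])
    return out

assert np.allclose(stencil_loop(u), stencil_vectorised(u))
```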

  17. Assessment of ground-based monitoring techniques applied to landslide investigations

    Science.gov (United States)

    Uhlemann, S.; Smith, A.; Chambers, J.; Dixon, N.; Dijkstra, T.; Haslam, E.; Meldrum, P.; Merritt, A.; Gunn, D.; Mackay, J.

    2016-01-01

    A landslide complex in the Whitby Mudstone Formation at Hollin Hill, North Yorkshire, UK is periodically re-activated in response to rainfall-induced pore-water pressure fluctuations. This paper compares long-term measurements (i.e., 2009-2014) obtained from a combination of monitoring techniques that have been employed together for the first time on an active landslide. The results highlight the relative performance of the different techniques, and can provide guidance for researchers and practitioners for selecting and installing appropriate monitoring techniques to assess unstable slopes. Particular attention is given to the spatial and temporal resolutions offered by the different approaches that include: Real Time Kinematic-GPS (RTK-GPS) monitoring of a ground surface marker array, conventional inclinometers, Shape Acceleration Arrays (SAA), tilt meters, active waveguides with Acoustic Emission (AE) monitoring, and piezometers. High spatial resolution information has allowed locating areas of stability and instability across a large slope. This has enabled identification of areas where further monitoring efforts should be focused. High temporal resolution information allowed the capture of 'S'-shaped slope displacement-time behaviour (i.e. phases of slope acceleration, deceleration and stability) in response to elevations in pore-water pressures. This study shows that a well-balanced suite of monitoring techniques that provides high temporal and spatial resolutions on both measurement and slope scale is necessary to fully understand failure and movement mechanisms of slopes. In the case of the Hollin Hill landslide it enabled detailed interpretation of the geomorphological processes governing landslide activity. It highlights the benefit of regularly surveying a network of GPS markers to determine areas for installation of movement monitoring techniques that offer higher resolution both temporally and spatially. The small sensitivity of tilt meter measurements

  18. Applying data mining techniques for increasing implantation rate by selecting best sperms for intra-cytoplasmic sperm injection treatment.

    Science.gov (United States)

    Mirroshandel, Seyed Abolghasem; Ghasemian, Fatemeh; Monji-Azad, Sara

    2016-12-01

    Aspiration of a good-quality sperm during intracytoplasmic sperm injection (ICSI) is one of the main concerns. Understanding the influence of individual sperm morphology on fertilization, embryo quality, and pregnancy probability is one of the most important subjects in male factor infertility. Embryologists need to decide on the best sperm for injection in real time during an ICSI cycle. Our objective is to predict the quality of zygote, embryo, and implantation outcome before injection of each sperm in an ICSI cycle for male factor infertility, with the aim of providing a decision support system for sperm selection. The information was collected from 219 patients with male factor infertility at the infertility therapy center of Alzahra hospital in Rasht from 2012 through 2014. The prepared dataset included the quality of zygote, embryo, and implantation outcome of 1544 sperms injected into the related oocytes. In our study, embryo transfer was performed at day 3. Each sperm was represented by thirteen clinical features. Data preprocessing was the first step in the proposed data mining algorithm. After applying more than 30 classifiers, 9 successful classifiers were selected and evaluated by the 10-fold cross-validation technique using precision, recall, F1, and AUC measures. Another important experiment was measuring the effect of each feature in the prediction process. In zygote and embryo quality prediction, the IBK and RandomCommittee models provided 79.2% and 83.8% F1, respectively. In implantation outcome prediction, the KStar model achieved 95.9% F1, which is even better than the predictions of human experts. All these predictions can be done in real time. A machine learning-based decision support system would be helpful in the sperm selection phase of an ICSI cycle to improve the success rate of ICSI treatment. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
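
    As a hedged sketch of the evaluation protocol described above (classifiers scored with F1 under 10-fold cross-validation), the snippet below uses scikit-learn with synthetic placeholder data; k-nearest neighbours stands in for the Weka IBK classifier used in the study.

    ```python
    # Sketch of the reported protocol: F1-scored 10-fold cross-validation.
    # Data are synthetic placeholders (1544 sperms, 13 clinical features);
    # KNeighborsClassifier stands in for Weka's IBK.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    X, y = make_classification(n_samples=1544, n_features=13, random_state=0)
    clf = KNeighborsClassifier(n_neighbors=5)
    f1 = cross_val_score(clf, X, y, cv=10, scoring="f1")
    print(f"10-fold F1: {f1.mean():.3f} +/- {f1.std():.3f}")
    ```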

  19. International Conference on Applied Mathematics, Modeling and Computational Science & Annual meeting of the Canadian Applied and Industrial Mathematics

    CERN Document Server

    Bélair, Jacques; Kunze, Herb; Makarov, Roman; Melnik, Roderick; Spiteri, Raymond J

    2016-01-01

    Focusing on five main groups of interdisciplinary problems, this book covers a wide range of topics in mathematical modeling, computational science and applied mathematics. It presents a wealth of new results in the development of modeling theories and methods, advancing diverse areas of applications and promoting interdisciplinary interactions between mathematicians, scientists, engineers and representatives from other disciplines. The book offers a valuable source of methods, ideas, and tools developed for a variety of disciplines, including the natural and social sciences, medicine, engineering, and technology. Original results are presented on both the fundamental and applied level, accompanied by an ample number of real-world problems and examples emphasizing the interdisciplinary nature and universality of mathematical modeling, and providing an excellent outline of today’s challenges. Mathematical modeling, with applied and computational methods and tools, plays a fundamental role in modern science a...

  20. Applying Error Diagram for Evaluating Spatial Forecasting Model of Large Aftershocks

    Science.gov (United States)

    Shebalin, Peter; Sergey, Baranov

    2016-04-01

    The difficulty of using forecasting results formulated in probability terms is well known in statistical seismology: small values of the probability of earthquake occurrence cannot be directly used for decision making to reduce losses due to seismic hazard. In this research we suggest a technique for applying Molchan's error diagram to evaluate a model of seismic hazard forecasting and to make practical recommendations, applied specifically to the hazard after large earthquakes. We illustrate the suggested technique with the example of a retrospective forecast of the area where a strong aftershock (M6+) can be expected. The forecast model is based on data for the first 12 hours after the mainshock. We found an optimal variant among the many tested by minimizing, as a loss function, the rate of missed targets (strong aftershocks) together with the rate of alarm space. Analyzing the error diagram, we suggest three forecast strategies, "soft", "neutral", and "hard", giving different sizes of the alarm area where one may expect strong aftershocks. The suggested technique can be used for decision making under various conditions to reduce losses due to seismic hazard after a strong earthquake. This research was carried out at the expense of the Russian Science Foundation (Project No. 16-17-00093).
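
    A minimal sketch of the error-diagram calculation described above, with synthetic stand-in data: for each alarm threshold we record the fraction of space under alarm and the fraction of missed targets, then minimize their sum as the loss function.

    ```python
    # Molchan-style error diagram on synthetic data: sweep an alarm threshold
    # over a per-cell hazard score, record the alarm-space fraction and the
    # miss rate, and pick the threshold minimizing their sum.
    import numpy as np

    rng = np.random.default_rng(0)
    hazard = rng.random(10_000)                  # forecast score per spatial cell
    events = rng.random(10_000) < 0.02 * hazard  # synthetic strong-aftershock cells

    thresholds = np.linspace(0.0, 1.0, 101)
    alarm_fraction = np.array([(hazard >= t).mean() for t in thresholds])
    miss_rate = np.array([1.0 - events[hazard >= t].sum() / events.sum()
                          for t in thresholds])

    loss = miss_rate + alarm_fraction            # one possible loss function
    k = loss.argmin()
    print(f"optimal threshold {thresholds[k]:.2f}: "
          f"alarm fraction {alarm_fraction[k]:.2f}, miss rate {miss_rate[k]:.2f}")
    ```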

  1. Zero order and signal processing spectrophotometric techniques applied for resolving interference of metronidazole with ciprofloxacin in their pharmaceutical dosage form

    Science.gov (United States)

    Attia, Khalid A. M.; Nassar, Mohammed W. I.; El-Zeiny, Mohamed B.; Serag, Ahmed

    2016-02-01

    Four rapid, simple, accurate and precise spectrophotometric methods were used for the determination of ciprofloxacin in the presence of metronidazole as an interferent. The methods under study are area under the curve and simultaneous equation, in addition to smart signal-processing techniques for manipulating ratio spectra, namely Savitzky-Golay filters and continuous wavelet transform. All the methods were validated according to the ICH guidelines, where accuracy, precision and repeatability were found to be within the acceptable limits. The selectivity of the proposed methods was tested using laboratory-prepared mixtures and assessed by applying the standard addition technique. They can therefore be used for the routine analysis of ciprofloxacin in quality-control laboratories.
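
    As an illustration of one signal-processing step named above, the snippet below applies SciPy's Savitzky-Golay filter to a synthetic ratio spectrum; the wavelength grid, band shapes, and filter parameters are placeholders, not those of the paper.

    ```python
    # Savitzky-Golay smoothing/differentiation of a synthetic ratio spectrum.
    import numpy as np
    from scipy.signal import savgol_filter

    wavelength = np.linspace(220, 380, 801)                # nm, illustrative grid
    drug = np.exp(-((wavelength - 280) / 15) ** 2)         # stand-in drug band
    interferent = np.exp(-((wavelength - 320) / 25) ** 2)  # stand-in interferent band

    ratio = (drug + interferent) / (interferent + 1e-9)    # ratio spectrum
    first_derivative = savgol_filter(ratio, window_length=21, polyorder=3, deriv=1)
    ```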

  3. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  4. Multivariation calibration techniques applied to NIRA (near infrared reflectance analysis) and FTIR (Fourier transform infrared) data

    Science.gov (United States)

    Long, C. L.

    1991-02-01

    Multivariate calibration techniques can reduce the time required for routine testing and can provide new methods of analysis. Multivariate calibration is commonly used with near infrared reflectance analysis (NIRA) and Fourier transform infrared (FTIR) spectroscopy. Two feasibility studies were performed to determine the capability of NIRA, using multivariate calibration techniques, to perform analyses on the types of samples that are routinely analyzed at this laboratory. The first study included a variety of samples and indicated that NIRA would be well suited to determining selected material properties such as the water content and hydroxyl number of polyol samples, the epoxy content of epoxy resins, the water content of desiccants, and the amine values of various amine cure agents. A second study was performed to assess the capability of NIRA to perform quantitative analysis of the hydroxyl numbers and water contents of hydroxyl-containing materials. Hydroxyl number and water content were selected for determination because these tests are frequently run on polyol materials and the hydroxyl number determination is time consuming. This study pointed out the necessity of obtaining calibration standards identical to the samples being analyzed for each type of polyol or other material being analyzed. Multivariate calibration techniques are frequently used with FTIR data to determine the composition of a large variety of complex mixtures. A literature search indicated many applications of multivariate calibration to FTIR data. Areas identified where quantitation by FTIR would provide a new capability are the quantitation of components in epoxy and silicone resins, of polychlorinated biphenyls (PCBs) in oils, and of additives to polymers.
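
    A hedged sketch of the multivariate calibration workflow discussed above, using partial least squares regression from scikit-learn on synthetic stand-in spectra (the original work used NIRA/FTIR data and its own calibration software).

    ```python
    # Partial least squares calibration on synthetic spectra: 60 calibration
    # standards, 400 wavelengths, property values tied to a few channels.
    # Purely illustrative of the workflow, not the laboratory's data.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    spectra = rng.normal(size=(60, 400))
    hydroxyl_number = (3.0 * spectra[:, 120] - 1.5 * spectra[:, 250]
                       + rng.normal(scale=0.1, size=60))

    pls = PLSRegression(n_components=5)
    r2 = cross_val_score(pls, spectra, hydroxyl_number, cv=5, scoring="r2")
    print(f"cross-validated R^2: {r2.mean():.2f}")
    ```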

  5. Applying Squeezing Technique to Clayrocks: Lessons Learned from Experiments at Mont Terri Rock Laboratory

    Energy Technology Data Exchange (ETDEWEB)

    Fernandez, A. M.; Sanchez-Ledesma, D. M.; Tournassat, C.; Melon, A.; Gaucher, E.; Astudillo, E.; Vinsot, A.

    2013-07-01

    Knowledge of the pore water chemistry in clay rock formations plays an important role in determining radionuclide migration in the context of nuclear waste disposal. Among the different in situ and ex situ techniques for pore water sampling in clay sediments and soils, the squeezing technique dates back 115 years. Although different studies have been performed on the reliability and representativeness of squeezed pore waters, most of them were carried out on high-porosity, high-water-content and unconsolidated clay sediments. Very few of them tackled the analysis of squeezed pore water from low-porosity, low-water-content and highly consolidated clay rocks. In this work, a specially designed and fabricated one-dimensional compression cell with two-directional fluid flow was used to extract and analyse the pore water composition of Opalinus Clay core samples from Mont Terri (Switzerland). The reproducibility of the technique is good, and no ionic ultrafiltration, chemical fractionation or anion exclusion was found in the range of pressures analysed: 70-200 MPa. Pore waters extracted in this range of pressures do not decrease in concentration, which would indicate a dilution of water by mixing of the free pore water and the outer layers of double-layer water (Donnan water). A threshold (safety) squeezing pressure of 175 MPa was established for avoiding membrane effects (ion filtering, anion exclusion, etc.) from clay particles induced by increasing pressures. Moreover, the pore waters extracted at these pressures are representative of the Opalinus Clay formation, based on a direct comparison against in situ collected borehole waters. (Author)

  6. Enhanced nonlinear iterative techniques applied to a non-equilibrium plasma flow

    Energy Technology Data Exchange (ETDEWEB)

    Knoll, D.A.; McHugh, P.R. [Idaho National Engineering Lab., Idaho Falls, ID (United States)

    1996-12-31

    We study the application of enhanced nonlinear iterative methods to the steady-state solution of a system of two-dimensional convection-diffusion-reaction partial differential equations that describe the partially-ionized plasma flow in the boundary layer of a tokamak fusion reactor. This system of equations is characterized by multiple time and spatial scales, and contains highly anisotropic transport coefficients due to a strong imposed magnetic field. We use Newton's method to linearize the nonlinear system of equations resulting from an implicit, finite volume discretization of the governing partial differential equations on a staggered Cartesian mesh. The resulting linear systems are neither symmetric nor positive definite, and are poorly conditioned. Preconditioned Krylov iterative techniques are employed to solve these linear systems. We investigate both a modified and a matrix-free Newton-Krylov implementation, with the goal of reducing the CPU cost associated with the numerical formation of the Jacobian. A combination of a damped iteration, one-way multigrid and a pseudo-transient continuation technique is used to enhance global nonlinear convergence and CPU efficiency. GMRES is employed as the Krylov method, with Incomplete Lower-Upper (ILU) factorization preconditioning. The goal is to construct a combination of nonlinear and linear iterative techniques for this complex physical problem that optimizes trade-offs between robustness, CPU time, memory requirements, and code complexity. It is shown that a one-way multigrid implementation provides significant CPU savings for fine grid calculations. Performance comparisons of the modified Newton-Krylov and matrix-free Newton-Krylov algorithms will be presented.
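
    The paper's solver was purpose-built, but the same matrix-free Newton-Krylov idea with an inner GMRES iteration is available today in SciPy; a toy sketch on a simple nonlinear reaction-diffusion residual (not the plasma edge equations) is shown below.

    ```python
    # Toy matrix-free Newton-Krylov solve with SciPy: Jacobian-vector products
    # are finite-differenced internally, and GMRES is the inner Krylov method.
    # The residual is a simple reaction-diffusion stand-in.
    import numpy as np
    from scipy.optimize import newton_krylov

    n = 32
    def residual(u):
        """Five-point Laplacian of u plus the algebraic term u**2 - 1."""
        r = u**2 - 1.0
        r[1:-1, 1:-1] += (4 * u[1:-1, 1:-1] - u[:-2, 1:-1] - u[2:, 1:-1]
                          - u[1:-1, :-2] - u[1:-1, 2:]) * (n - 1) ** 2
        return r

    solution = newton_krylov(residual, 0.5 * np.ones((n, n)), method="gmres")
    print(f"max |residual| = {np.abs(residual(solution)).max():.2e}")
    ```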

  7. Imaging techniques applied to the study of fluids in porous media

    Energy Technology Data Exchange (ETDEWEB)

    Tomutsa, L.; Doughty, D.; Brinkmeyer, A.; Mahmood, S.

    1992-06-01

    Improved imaging techniques were used to study the dynamics of fluid flow and trapping at various scales in porous media. Two-phase and three-phase floods were performed and monitored by computed tomography (CT) scanning and/or nuclear magnetic resonance imaging (NMRI) microscopy. Permeability-porosity correlations obtained from image analysis were combined with porosity distributions from CT scanning to generate spatial permeability distributions within the core which were used in simulations of two-phase floods. Simulation-derived saturation distributions of two-phase processes showed very good agreement with the CT measured values.

  8. Full-field speckle correlation technique as applied to blood flow monitoring

    Science.gov (United States)

    Vilensky, M. A.; Agafonov, D. N.; Timoshina, P. A.; Shipovskaya, O. V.; Zimnyakov, D. A.; Tuchin, V. V.; Novikov, P. A.

    2011-03-01

    The results of an experimental study of monitoring the microcirculation in superficial tissue layers of the internal organs during gastro-duodenal hemorrhage, using the laser speckle contrast analysis technique, are presented. The microcirculation monitoring was performed in real time in the course of laparotomy of the rat abdominal cavity. Microscopic hemodynamics were analyzed for the small intestine and stomach under different conditions (normal state, provoked ischemia, administration of vasodilative agents such as papaverine and lidocaine). The prospects and problems of internal monitoring of micro-vascular flow under clinical conditions are discussed.
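
    A hedged sketch of the core of laser speckle contrast analysis: the local contrast K = std/mean computed over a small sliding window, with lower K indicating faster flow. The frame below is a synthetic stand-in, not experimental data.

    ```python
    # Local speckle contrast K = std/mean over a sliding window (scipy.ndimage).
    import numpy as np
    from scipy.ndimage import uniform_filter

    def speckle_contrast(image, window=7):
        img = image.astype(float)
        mean = uniform_filter(img, window)
        mean_sq = uniform_filter(img**2, window)
        std = np.sqrt(np.maximum(mean_sq - mean**2, 0.0))
        return std / (mean + 1e-12)

    frame = np.random.poisson(lam=50.0, size=(256, 256))  # stand-in raw frame
    K = speckle_contrast(frame)
    print(f"mean contrast: {K.mean():.3f}")  # lower K corresponds to faster flow
    ```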

  9. Micro-spectroscopic techniques applied to characterization of varnished archeological findings

    Science.gov (United States)

    Barone, G.; Ioppolo, S.; Majolino, D.; Migliardo, P.; Ponterio, R.

    2000-04-01

    This work reports an analysis of varnished terracotta findings recovered in the east Sicily area (Messina). We have performed FTIR micro-spectroscopy and scanning electron microscopy (SEM) measurements in order to recognize the elemental constituents of the varnished surfaces. Furthermore, for all the samples, a study of the bulk has been performed by Fourier transform infrared absorption. The analyzed samples consist of a number of pottery fragments belonging to the archaic and classical ages, varnished in black and red colors. The data obtained furnished useful information about the composition of the decorated surfaces and bulk matrixes, the baking temperature, manufacturing techniques, and the alteration mechanisms of the findings due to long burial.

  10. Hyphenated GC-FTIR and GC-MS techniques applied in the analysis of bioactive compounds

    Science.gov (United States)

    Gosav, Steluta; Paduraru, Nicoleta; Praisler, Mirela

    2014-08-01

    The drugs of abuse, which affect human nature and cause numerous crimes, have become a serious problem throughout the world. There are hundreds of amphetamine analogues on the black market. They consist of various alterations of the basic amphetamine molecular structure, which are not yet included in the lists of forbidden compounds although they retain or only slightly modify the hallucinogenic effects of their parent compound. It is this great variety that makes their identification quite a challenge. A number of analytical procedures for the identification of amphetamines and their analogues have recently been reported. We present the profile of the main hallucinogenic amphetamines obtained with the hyphenated techniques recommended for the identification of illicit amphetamines, i.e. gas chromatography combined with mass spectrometry (GC-MS) and gas chromatography coupled with Fourier transform infrared spectrometry (GC-FTIR). The infrared spectra of the analyzed hallucinogenic amphetamines present some absorption bands (1490 cm-1, 1440 cm-1, 1245 cm-1, 1050 cm-1 and 940 cm-1) that are very stable in position and shape, while their intensity depends on the side-chain substitution. The specific ionic fragment of the studied hallucinogenic compounds is the 3,4-methylenedioxybenzyl cation (m/e = 135), which has a small relative abundance (less than 20%). The complementarity of the above-mentioned techniques for the identification of hallucinogenic compounds is discussed.

  11. Predicting Performance of Schools by Applying Data Mining Techniques on Public Examination Results

    Directory of Open Access Journals (Sweden)

    J. Macklin Abraham Navamani

    2015-02-01

    Full Text Available This study presents a systematic analysis of various features of the higher grade school public examination results data in the state of Tamil Nadu, India, using different data mining classification algorithms to predict the performance of schools. Nowadays parents aim to select the right city, school and other factors which contribute to the success of their children's school results. Factors such as ethnic mix, medium of study and geography could make a difference in results. The proposed work focuses on two goals: applying machine learning algorithms to predict school performance with satisfactory accuracy, and evaluating which data mining technique gives the better accuracy among the learning algorithms. It was found that there exist some apparent and some less noticeable attributes that demonstrate a strong correlation with student performance. Data were collected from a credible source, followed by data preparation and correlation analysis. The findings revealed that the public examination results data was a very helpful predictor of the performance of a school, and the overall accuracy was further improved with the help of the AdaBoost technique.
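
    The abstract credits AdaBoost with the accuracy improvement; a minimal, hedged scikit-learn sketch of boosting on a placeholder feature matrix of school/examination attributes follows.

    ```python
    # Boosting sketch (scikit-learn AdaBoost) on placeholder data standing in
    # for the examination-results feature matrix; only the workflow is shown.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=500, n_features=12, random_state=0)
    boosted = AdaBoostClassifier(n_estimators=100, random_state=0)
    print(f"5-fold accuracy: {cross_val_score(boosted, X, y, cv=5).mean():.3f}")
    ```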

  12. Applied feline oral anatomy and tooth extraction techniques: an illustrated guide.

    Science.gov (United States)

    Reiter, Alexander M; Soltero-Rivera, Maria M

    2014-11-01

    Tooth extraction is one of the most commonly performed surgical procedures in small animal practice. The clinician must be familiar with normal oral anatomy, utilize nomenclature accepted in dentistry and oral surgery, use the modified Triadan system for numbering teeth, identify normal structures on a dental radiograph, understand the tissues that hold the teeth in the jaws, know the biomechanical principles of tooth extraction, be able to choose the most appropriate instrument for removal of a tooth, extract teeth using closed and open techniques, and create tension-free flaps for closure of extraction sites. This review is intended to familiarize both the general and referral practitioner with feline oral anatomy and tooth extraction techniques. Tooth extraction is predominantly performed in cats with tooth resorption, chronic gingivostomatitis and periodontal disease. The basic contents of a feline tooth extraction kit are explained. The guidance contained within this review is based on a combination of the published literature, the authors' personal experience and the experience of colleagues. © ISFM and AAFP 2014.

  13. Fragrance composition of Dendrophylax lindenii (Orchidaceae) using a novel technique applied in situ

    Directory of Open Access Journals (Sweden)

    James J. Sadler

    2012-02-01

    Full Text Available The ghost orchid, Dendrophylax lindenii (Lindley) Bentham ex Rolfe (Orchidaceae), is one of North America's rarest and best-known orchids. Native to Cuba and SW Florida, where it frequents shaded swamps as an epiphyte, the species has experienced steady decline. Little information exists on D. lindenii's biology in situ, raising conservation concerns. During the summer of 2009, at an undisclosed population in Collier County, FL, a substantial number (ca. 13) of plants initiated anthesis, offering a unique opportunity to study this species in situ. We report a new technique aimed at capturing the floral headspace of D. lindenii in situ, and identified volatile compounds using gas chromatography mass spectrometry (GC/MS). All components of the floral scent were identified as terpenoids with the exception of methyl salicylate. The most abundant compound was the sesquiterpene (E,E)-α-farnesene (71%), followed by (E)-β-ocimene (9%) and methyl salicylate (8%). Other compounds were: linalool (5%), sabinene (4%), (E)-α-bergamotene (2%), α-pinene (1%), and 3-carene (1%). Interestingly, (E,E)-α-farnesene has previously been associated with pestiferous insects (e.g., Hemiptera). The other compounds are common floral scent constituents in other angiosperms, suggesting that our in situ technique was effective. Volatile capture was, therefore, possible without imposing physical harm (e.g., inflorescence detachment) on this rare orchid.

  14. Therapeutic techniques applied in the heavy-ion therapy at IMP

    Science.gov (United States)

    Li, Qiang; Sihver, Lembit

    2011-04-01

    Superficially-placed tumors have been treated with carbon ions at the Institute of Modern Physics (IMP), Chinese Academy of Sciences (CAS), since November 2006. Up to now, 103 patients have been irradiated in the therapy terminal of the heavy ion research facility in Lanzhou (HIRFL) at IMP, where carbon-ion beams with energies up to 100 MeV/u can be supplied and a passive beam delivery system has been developed and commissioned. A number of therapeutic and clinical experiences concerning heavy-ion therapy have been acquired at IMP. To extend the heavy-ion therapy project to deep-seated tumor treatment, a horizontal beam line dedicated to this has been constructed in the cooling storage ring (CSR), which is a synchrotron connected to the HIRFL as an injector, and is now in operation. Therapeutic high-energy carbon-ion beams, extracted from the HIRFL-CSR through slow extraction techniques, have been supplied in the deep-seated tumor therapy terminal. After the beam delivery, shaping and monitoring devices installed in the therapy terminal at HIRFL-CSR were validated through therapeutic beam tests, deep-seated tumor treatment with high-energy carbon ions started in March 2009. The therapeutic techniques in terms of beam delivery system, conformal irradiation method and treatment planning used at IMP are introduced in this paper.

  15. Internet enabled modelling of extended manufacturing enterprises using the process based techniques

    OpenAIRE

    Cheng, K; Popov, Y

    2004-01-01

    The paper presents the preliminary results of an ongoing research project on Internet-enabled process-based modelling of extended manufacturing enterprises. It is proposed to apply the Open System Architecture for CIM (CIMOSA) modelling framework alongside object-oriented Petri Net models of enterprise processes and object-oriented techniques for extended enterprise modelling. The main features of the proposed approach are described and some components discussed. Elementary examples of ...

  16. Modelling of the carburizing and quenching process applied to caterpillar track bushings

    Science.gov (United States)

    Ferro, P.; Bonollo, F.

    2014-03-01

    The carburizing-quenching process applied to caterpillar track bushings was studied by means of experimental and numerical analyses. The numerical model was developed on the basis of the real cycle. The purpose of this work is to predict the carbon profiles, microstructural phase changes, hardness and residual stress that occur during quenching using finite element techniques. Good agreement was obtained between the experimental and numerical results in terms of carbon diffusion and hardness profiles. The Sysweld® numerical code was used to perform the simulations.

  17. A comparison of new, old and future densiometric techniques as applied to volcanologic study.

    Science.gov (United States)

    Pankhurst, Matthew; Moreland, William; Dobson, Kate; Þórðarson, Þorvaldur; Fitton, Godfrey; Lee, Peter

    2015-04-01

    The density of any material imposes a primary control upon its potential or actual physical behaviour in relation to its surroundings. It follows that a thorough understanding of the physical behaviour of dynamic, multi-component systems, such as active volcanoes, requires knowledge of the density of each component. If we are to accurately predict the physical behaviour of synthesized or natural volcanic systems, quantitative densiometric measurements are vital. The theoretical density of melt, crystal and bubble phases may be calculated using composition, structure, temperature and pressure inputs. However, measuring the density of natural, non-ideal, poly-phase materials remains problematic, especially if phase-specific measurement is important. Here we compare three methods, Archimedes' principle, He-displacement pycnometry and X-ray micro computed tomography (XMT), and discuss the utility and drawbacks of each in the context of modern volcanologic study. We have measured tephra, ash and lava from the 934 AD Eldgjá eruption (Iceland) and the 2010 AD Eyjafjallajökull eruption (Iceland) using each technique. These samples exhibit a range of particle sizes, phases and textures. We find that while the Archimedes method remains a useful, low-cost technique to generate whole-rock density data, relative precision is problematic at small particle sizes. Pycnometry offers a more precise whole-rock density value, at a comparable cost per sample. However, this technique is based upon the assumption that pore spaces within the sample are equally available for gas exchange, which may or may not be the case. XMT produces 3D images, at resolutions from nm to tens of µm per voxel, where X-ray attenuation is a qualitative measure of relative electron density, expressed as greyscale number/brightness (usually 16-bit). Phases and individual particles can be digitally segmented according to their greyscale and other characteristics. This represents a distinct advantage over both
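
    For the Archimedes method discussed above, the density calculation itself is one line: bulk density follows from the dry mass and the apparent submerged mass in a fluid of known density. The sample values below are invented for illustration.

    ```python
    # Bulk density from dry and submerged weighings (Archimedes' principle).
    def archimedes_density(m_dry_g, m_submerged_g, rho_fluid_g_cm3=1.0):
        """Density in g/cm^3, assuming full wetting and a sealed surface."""
        return m_dry_g * rho_fluid_g_cm3 / (m_dry_g - m_submerged_g)

    print(f"{archimedes_density(12.40, 7.85):.2f} g/cm^3")  # illustrative sample
    ```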

  18. Analysis and Design of International Emission Trading Markets Applying System Dynamics Techniques

    Science.gov (United States)

    Hu, Bo; Pickl, Stefan

    2010-11-01

    The design and analysis of international emission trading markets is an important current challenge. Time-discrete models are needed to understand and optimize these procedures. We give an introduction to this scientific area and present current modeling approaches. Furthermore, we develop a model which is embedded in a holistic problem solution. Measures for energy efficiency are characterized. The economic time-discrete "cap-and-trade" mechanism is influenced by various underlying anticipatory effects. With a systematic dynamic approach, these effects can be examined. First numerical results show that fair international emissions trading can only be conducted with the use of protective export duties. Furthermore, a comparatively high price, which evokes emission reduction, inevitably has an inhibiting effect on economic growth according to our model. As has always been expected, it is not without difficulty to find a balance between economic growth and emission reduction. Using our System Dynamics model simulation, it can be anticipated that substantial changes must take place before international emissions trading markets can contribute to global GHG emissions mitigation.

  19. Moving beyond regression techniques in cardiovascular risk prediction: applying machine learning to address analytic challenges.

    Science.gov (United States)

    Goldstein, Benjamin A; Navar, Ann Marie; Carter, Rickey E

    2016-07-19

    Risk prediction plays an important role in clinical cardiology research. Traditionally, most risk models have been based on regression models. While useful and robust, these statistical methods are limited to using a small number of predictors, which operate in the same way on everyone and uniformly throughout their range. The purpose of this review is to illustrate the use of machine-learning methods for the development of risk prediction models. Typically presented as black-box approaches, most machine-learning methods are aimed at solving particular challenges that arise in data analysis that are not well addressed by typical regression approaches. To illustrate these challenges, as well as how different methods can address them, we consider trying to predict mortality after diagnosis of acute myocardial infarction. We use data derived from our institution's electronic health record and abstract data on 13 regularly measured laboratory markers. We walk through different challenges that arise in modelling these data and then introduce different machine-learning approaches. Finally, we discuss general issues in the application of machine-learning methods, including tuning parameters, loss functions, variable importance, and missing data. Overall, this review serves as an introduction for those working on risk modelling to approach the diffuse field of machine learning.

  20. Nuclear analytical techniques applied to forensic chemistry; Aplicacion de tecnicas analiticas nucleares en quimica forense

    Energy Technology Data Exchange (ETDEWEB)

    Nicolau, Veronica; Montoro, Silvia [Universidad Nacional del Litoral, Santa Fe (Argentina). Facultad de Ingenieria Quimica. Dept. de Quimica Analitica; Pratta, Nora; Giandomenico, Angel Di [Consejo Nacional de Investigaciones Cientificas y Tecnicas, Santa Fe (Argentina). Centro Regional de Investigaciones y Desarrollo de Santa Fe

    1999-11-01

    Gun shot residues produced by firing guns are mainly composed of visible particles. The individual characterization of these particles allows distinguishing those containing heavy metals, originating from gun shot residues, from those having a different origin or history. In this work, the results obtained from the study of gun shot residue particles collected from hands are presented. The aim of the analysis is to establish whether a person has shot a firing gun or has been in contact with one after the shot has been produced. As reference samples, particles collected from the hands of persons engaged in different activities were studied for comparison. The complete study was based on the application of nuclear analytical techniques such as Scanning Electron Microscopy, Energy Dispersive X Ray Electron Probe Microanalysis and Graphite Furnace Atomic Absorption Spectrometry. The assays can be completed within a time compatible with forensic requirements. (author) 5 refs., 3 figs., 1 tab.; e-mail: csedax e adigian at arcride.edu.ar

  1. A new technique for fractal analysis applied to human, intracerebrally recorded, ictal electroencephalographic signals.

    Science.gov (United States)

    Bullmore, E; Brammer, M; Alarcon, G; Binnie, C

    1992-11-09

    Application of a new method of fractal analysis to human, intracerebrally recorded, ictal electroencephalographic (EEG) signals is reported. 'Frameshift-Richardson' (FR) analysis involves estimation of fractal dimension (1 < FD < 2) of EEG data; it is suggested that this technique offers significant operational advantages over the use of algorithms for FD estimation requiring preliminary reconstruction of EEG data in phase space. FR analysis was found to reduce substantially the volume of EEG data without loss of diagnostically important information concerning onset, propagation and evolution of ictal EEG discharges. Arrhythmic EEG events were correlated with relatively increased FD; rhythmic EEG events with relatively decreased FD. It is proposed that development of this method may lead to: (i) enhanced definition and localisation of initial ictal changes in the EEG presumed due to multi-unit activity; and (ii) synoptic visualisation of long periods of EEG data.
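
    The 'Frameshift-Richardson' algorithm itself is not specified in this record; as a hedged stand-in, the sketch below estimates waveform fractal dimension with the closely related curve-length (Higuchi) approach, which likewise yields 1 < FD < 2 for a sampled signal.

    ```python
    # Generic curve-length estimator of waveform fractal dimension (Higuchi's
    # method), used as a stand-in for the unspecified 'Frameshift-Richardson'
    # variant; noisy signals approach FD = 2, smooth signals FD = 1.
    import numpy as np

    def curve_length_fd(x, k_max=8):
        n = len(x)
        ks = np.arange(1, k_max + 1)
        lengths = []
        for k in ks:
            per_offset = []
            for m in range(k):
                idx = np.arange(m, n, k)
                if len(idx) < 2:
                    continue
                dist = np.abs(np.diff(x[idx])).sum()
                per_offset.append(dist * (n - 1) / ((len(idx) - 1) * k) / k)
            lengths.append(np.mean(per_offset))
        slope, _ = np.polyfit(np.log(1.0 / ks), np.log(lengths), 1)
        return slope

    rng = np.random.default_rng(0)
    print(curve_length_fd(rng.normal(size=2048)))             # close to 2
    print(curve_length_fd(np.sin(np.linspace(0, 20, 2048))))  # close to 1
    ```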

  2. Applying the behaviour change technique (BCT) taxonomy v1: a study of coder training.

    Science.gov (United States)

    Wood, Caroline E; Richardson, Michelle; Johnston, Marie; Abraham, Charles; Francis, Jill; Hardeman, Wendy; Michie, Susan

    2015-06-01

    Behaviour Change Technique Taxonomy v1 (BCTTv1) has been used to detect active ingredients of interventions. The purpose of this study was to evaluate the effectiveness of user training in improving reliable, valid and confident application of BCTTv1 to code BCTs in intervention descriptions. One hundred sixty-one trainees (109 in workshops and 52 in group tutorials) were trained to code frequent BCTs. The following measures were taken before and after training: (i) inter-coder agreement, (ii) trainee agreement with expert consensus, (iii) confidence ratings and (iv) coding competence. Coding was assessed for 12 BCTs (workshops) and for 17 BCTs (tutorials). Trainees completed a course evaluation. Both methods of training improved agreement with expert consensus (p < .05), did not improve inter-coder agreement (p = .08 and p = .57, respectively) and increased confidence for the BCTs assessed (both p < .001). Training effectiveness varied according to BCT.
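
    Inter-coder agreement of the kind measured above is commonly quantified with Cohen's kappa; a minimal sketch using scikit-learn, with invented example codings for two coders, follows (the study's own statistics are not reproduced).

    ```python
    # Cohen's kappa for two coders' binary BCT codings (invented toy data).
    from sklearn.metrics import cohen_kappa_score

    coder_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
    coder_b = [1, 0, 0, 1, 0, 1, 0, 1, 1, 1]
    print(f"kappa = {cohen_kappa_score(coder_a, coder_b):.2f}")
    ```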

  3. Emerging and Innovative Techniques for Arsenic Removal Applied to a Small Water Supply System

    Directory of Open Access Journals (Sweden)

    António J. Alçada

    2009-12-01

    Full Text Available The impact of arsenic on human health has led to its drinking water MCL being drastically reduced from 50 to 10 ppb. Consequently, arsenic levels in many water supply sources have become critical. This has resulted in technical and operational impacts on many drinking water treatment plants that have required onerous upgrading to meet the new standard. This becomes a very sensitive issue in the context of water scarcity and climate change, given the expected increasing demand on groundwater sources. This work presents a case study that describes the development of low-cost techniques for efficient arsenic control in drinking water. The results obtained at the Manteigas WTP (Portugal) demonstrate the successful implementation of an effective and flexible process of reactive filtration using iron oxide. At real scale, very high removal efficiencies of over 95% were obtained.

  4. Synchrotron radiation X-ray powder diffraction techniques applied in hydrogen storage materials - A review

    Directory of Open Access Journals (Sweden)

    Honghui Cheng

    2017-02-01

    Full Text Available Synchrotron radiation is an advanced collimated light source with high intensity. It has particular advantages in the structural characterization of materials on the atomic or molecular scale. Synchrotron radiation X-ray powder diffraction (SR-XRPD) has been successfully applied to various areas of hydrogen storage materials. In this paper, we give a brief introduction to hydrogen storage materials, X-ray powder diffraction (XRPD), and synchrotron radiation light sources. The applications of ex situ and in situ time-resolved SR-XRPD to hydrogen storage materials are reviewed in detail. Future trends and proposals for the application of advanced XRPD techniques to hydrogen storage materials are also discussed.

  5. Study of different filtering techniques applied to spectra from airborne gamma spectrometry.

    Science.gov (United States)

    Wilhelm, Emilien; Gutierrez, Sébastien; Arbor, Nicolas; Ménard, Stéphanie; Nourreddine, Abdel-Mjid

    2016-11-01

    One of the features of the spectra obtained by airborne gamma spectrometry is the low counting statistics, due to a short acquisition time (1 s) and a large source-detector distance (40 m), which lead to large statistical fluctuations. These fluctuations introduce large uncertainty into radionuclide identification and the determination of their respective activities with the window method recommended by the IAEA, especially for low-level radioactivity. Different types of filter can be used on the spectra in order to remove these statistical fluctuations. The present work compares the results obtained with the filters, in terms of errors over the whole gamma energy range of the filtered spectra, against the window method. These results are used to determine which filtering technique is the most suitable in combination with a method for total stripping of the spectrum.

  6. An Encoding Technique for Multiobjective Evolutionary Algorithms Applied to Power Distribution System Reconfiguration

    Directory of Open Access Journals (Sweden)

    J. L. Guardado

    2014-01-01

    Full Text Available Network reconfiguration is an alternative to reduce power losses and optimize the operation of power distribution systems. In this paper, an encoding scheme for evolutionary algorithms is proposed in order to search efficiently for the Pareto-optimal solutions during the reconfiguration of power distribution systems considering multiobjective optimization. The encoding scheme is based on the edge window decoder (EWD) technique, which was embedded in the Strength Pareto Evolutionary Algorithm 2 (SPEA2) and the Nondominated Sorting Genetic Algorithm II (NSGA-II). The effectiveness of the encoding scheme was proved by solving a test problem for which the true Pareto-optimal solutions are known in advance. In order to prove the practicability of the encoding scheme, a real distribution system was used to find the near Pareto-optimal solutions for different objective functions to optimize.

  7. Acoustic emission partial discharge detection technique applied to fault diagnosis: Case studies of generator transformers

    Directory of Open Access Journals (Sweden)

    Shanker Tangella Bhavani

    2016-01-01

    Full Text Available In power transformers, locating the partial discharge (PD) source is as important as identifying it. Acoustic Emission (AE) sensing offers a good solution for both PD detection and PD source location identification. In this paper the principle of the AE technique is discussed, along with in-situ findings from the online acoustic emission signals captured from partial discharges on a number of Generator Transformers (GTs). Of the two cases discussed, the first deals with Acoustic Emission Partial Discharge (AEPD) tests on two identical transformers, and the second deals with the AEPD measurements of a transformer carried out on different occasions (years). These transformers are from a hydropower station and a thermal power station in India. Tests conducted on identical transformers provide the opportunity to compare AE signal amplitudes from the two transformers. These case studies also help in assessing the efficacy of integrating Dissolved Gas Analysis (DGA) data with AEPD test results in detecting and locating the PD source.

  9. Global tropospheric NO2 profiles obtained from a cloud-slicing technique applied to the Aura OMI observations

    Science.gov (United States)

    Choi, S.; Joiner, J.; Lamsal, L. N.; Marchenko, S. V.; Krotkov, N. A.

    2016-12-01

    Nitrogen dioxide (NO2) is an important trace species in the troposphere; it has adverse human health effects and also contributes to the formation of tropospheric ozone, a criteria pollutant and climate agent. We derive tropospheric NO2 volume mixing ratio (VMR) profiles by applying a cloud slicing technique to data from the Ozone Monitoring Instrument (OMI) on the Aura satellite. In the cloud-slicing approach, the slope of the above-cloud NO2 column versus the cloud scene pressure is proportional to the NO2 VMR. We apply this technique to OMI O2-O2 cloud scene pressures and above-cloud NO2 vertical column densities from a differential optical absorption spectroscopy (DOAS) algorithm. We derived a global seasonal climatology of tropospheric NO2 VMR profiles in cloudy conditions and compare the results with aircraft profiles measured during the NASA Intercontinental Chemical Transport Experiment Phase B (INTEX-B) campaign in 2006. An analysis of our cloud slicing NO2 profiles indicates signatures of uplifted and transported anthropogenic NOx in the middle troposphere as well as lightning-generated NOx in the upper troposphere. We expect that this technique can be applied to future geostationary missions including the NASA Earth Ventures Instrument (EVI) 1 selected mission Tropospheric Emissions: Monitoring of Pollution (TEMPO) over North America, the Korean Geostationary Environment Monitoring Spectrometer (GEMS) over the Asia-Pacific region, and the European Space Agency (ESA) Sentinel-4 over Europe.
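
    A hedged sketch of the cloud-slicing relation described above: regress the above-cloud NO2 column against cloud scene pressure, and convert the slope to a volume mixing ratio. The conversion assumes hydrostatic air columns; all values are invented and this is not the OMI operational algorithm.

    ```python
    # Cloud-slicing sketch: the slope of above-cloud NO2 column vs. cloud scene
    # pressure gives the mixing ratio, converted here assuming hydrostatic air
    # columns (dN_air = dp / (m_air * g)).
    import numpy as np

    M_AIR = 4.81e-26  # mean mass of an air molecule, kg
    G = 9.81          # gravitational acceleration, m s^-2

    scene_pressure = np.array([45e3, 50e3, 55e3, 60e3, 65e3])             # Pa
    above_cloud_no2 = np.array([6.2e19, 6.7e19, 7.1e19, 7.6e19, 8.0e19])  # molec m^-2

    slope = np.polyfit(scene_pressure, above_cloud_no2, 1)[0]  # molec m^-2 Pa^-1
    vmr = slope * M_AIR * G                                    # dimensionless
    print(f"NO2 VMR ~ {vmr * 1e9:.2f} ppbv")
    ```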

  10. Microwave Diffraction Techniques from Macroscopic Crystal Models

    Science.gov (United States)

    Murray, William Henry

    1974-01-01

    Discusses the construction of a diffractometer table and four microwave models which are built of styrofoam balls with implanted metallic reflecting spheres and designed to simulate the structures of carbon (graphite structure), sodium chloride, tin oxide, and palladium oxide. Included are samples of Bragg patterns and computer-analysis results.…

  11. Erasing the Milky Way: New Cleaning Technique Applied to GBT Intensity Mapping Data

    Science.gov (United States)

    Wolz, L.; Blake, C.; Abdalla, F. B.; Anderson, C. J.; Chang, T.-C.; Li, Y.-C.; Masi, K.W.; Switzer, E.; Pen, U.-L.; Voytek, T. C.; et al.

    2016-01-01

    We present the first application of a new foreground removal pipeline to the current leading HI intensity mapping dataset, obtained by the Green Bank Telescope (GBT). We study the 15- and 1-h field data of the GBT observations previously presented in Masui et al. (2013) and Switzer et al. (2013), covering about 41 square degrees at 0.6 < z < 1.0, for which cross-correlations may be measured with the galaxy distribution of the WiggleZ Dark Energy Survey. In the presented pipeline, we subtract the Galactic foreground continuum and the point source contamination using an independent component analysis technique (fastica), and develop a Fourier-based optimal estimator to compute the temperature power spectrum of the intensity maps and the cross-correlation with the galaxy survey data. We show that fastica is a reliable tool to subtract diffuse and point-source emission through the non-Gaussian nature of their probability distributions. The temperature power spectra of the intensity maps are dominated by instrumental noise on small scales, which fastica, as a conservative subtraction technique for non-Gaussian signals, cannot mitigate. However, we determine GBT-WiggleZ cross-correlation measurements similar to those obtained by the Singular Value Decomposition (SVD) method, and confirm that foreground subtraction with fastica is robust against 21cm signal loss, as seen by the converged amplitude of these cross-correlation measurements. We conclude that SVD and fastica are complementary methods to investigate the foregrounds and noise systematics present in intensity mapping datasets.
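
    A minimal sketch of the foreground-subtraction idea with scikit-learn's FastICA: treat each frequency map as a mixture of spectrally smooth, non-Gaussian foregrounds plus a faint signal, reconstruct the ICA components, and subtract them. The maps below are synthetic placeholders, not GBT data.

    ```python
    # Foreground-removal sketch: frequency-by-pixel maps are decomposed with
    # FastICA and the reconstructed (foreground-dominated) part is subtracted.
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    n_freq, n_pix = 64, 4096
    foreground = np.outer(np.linspace(1.0, 2.0, n_freq), rng.lognormal(size=n_pix))
    signal_21cm = 1e-3 * rng.normal(size=(n_freq, n_pix))
    maps = foreground + signal_21cm

    ica = FastICA(n_components=4, random_state=0)
    sources = ica.fit_transform(maps.T)          # samples = pixels, features = freqs
    foreground_model = ica.inverse_transform(sources)
    cleaned = maps - foreground_model.T          # residual ~ 21cm signal + noise
    print(f"residual rms: {cleaned.std():.2e}")
    ```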

  12. Applying stakeholder Delphi techniques for planning sustainable use of aquatic resources

    DEFF Research Database (Denmark)

    Lund, Søren; Banta, Gary Thomas; Bunting, Stuart W

    2015-01-01

    The HighARCS (Highland Aquatic Resources Conservation and Sustainable Development) project was a participatory research effort to map and better understand the patterns of resource use and livelihoods of communities who utilize highland aquatic resources in five sites across China, India...... and Vietnam. The purpose of this paper is to give an account of how the stakeholder Delphi method was adapted and applied to support the participatory integrated action planning for sustainable use of aquatic resources facilitated within the HighARCS project. An account of the steps taken and results recorded...... planning. It was found that the tool was not as effective as expected in creating stakeholder consensus where issues had already been the object of previous research and discussions with local stakeholders or where asymmetrical power relations between stakeholder groups constrained the reliability...

  13. Research on Key Techniques for Video Surveillance System Applied to Shipping Channel Management

    Institute of Scientific and Technical Information of China (English)

    WANG Lin; ZHUANG Yan-bin; ZHENG Cheng-zeng

    2007-01-01

    A video patrol and inspection system is an important part of the government's shipping channel information management. This system is mainly applied to video information gathering and processing as a patrol is carried out. The system described in this paper can preview, edit, and add essential explanation messages to the collected video data. It then transfers these data and messages to a video server for the leaders and engineering and technical personnel to retrieve, play, chart, download or print. Each department of the government will use the system's functions according to that department's mission. The system can provide an effective means for managing the shipping enterprise. It also provides a valuable reference for the modernizing of waterborne shipping.

  14. Classifying of Nellore cattle beef on Normal and DFD applying a non conventional technique

    Science.gov (United States)

    Nubiato, Keni Eduardo Zanoni; Mazon, Madeline Rezende; Antonelo, Daniel Silva; da Luz e Silva, Saulo

    2016-09-01

    The aim of this study was to evaluate the accuracy of Normal and DFD classification of Nellore beef using a bench-top hyperspectral imaging system. A hyperspectral imaging system (λ = 928-2524 nm) was used to collect hyperspectral images of the Longissimus thoracis et lumborum (n = 78) of Nellore cattle. The images were processed: regions of interest were selected, image spectra were extracted, and the wavelengths considered most important for the traits evaluated were selected. Six linear discriminant models were developed to classify beef samples as Normal or DFD. The model using all wavelengths, associated with the reflectance and absorbance spectra transformed with a 2nd-derivative pretreatment, resulted in an overall accuracy of 93.6% for both pretreatments. In this configuration, the model correctly classified 73 samples out of 78. The results demonstrate that the hyperspectral imaging system may be considered a viable technology for classifying beef as Normal or DFD.

  15. An Iterative Learning Control Technique for Point-to-Point Maneuvers Applied on an Overhead Crane

    Directory of Open Access Journals (Sweden)

    Khaled A. Alhazza

    2014-01-01

    Full Text Available An iterative learning control (ILC) strategy is proposed and implemented on simple pendulum and double pendulum models of an overhead crane undergoing simultaneous traveling and hoisting maneuvers. The approach is based on generating shaped commands, using the full nonlinear equations of motion combined with iterative learning control, to use as acceleration commands to the jib of the crane. These acceleration commands are tuned to eliminate residual oscillations in rest-to-rest maneuvers. The performance of the proposed strategy is tested using an experimental scaled model of an overhead crane with hoisting. The shaped command is derived analytically and validated experimentally. The results obtained show that the proposed ILC strategy is capable of eliminating travel and residual oscillations in simple and double pendulum models with hoisting. It is also shown, in all cases, that the proposed approach has low sensitivity to the initial cable lengths.
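
    The essence of an iterative learning update of this kind is to repeat the maneuver and correct the command with a learning gain times the recorded error, u_{j+1} = u_j + L e_j. The toy plant and gain below are placeholders, not the paper's crane model.

    ```python
    # Iterative learning control in its simplest (P-type) form; the error is
    # shifted one sample to account for the plant's input delay.
    import numpy as np

    def plant(u):
        """Toy first-order lag standing in for the crane dynamics."""
        y = np.zeros_like(u)
        for k in range(1, len(u)):
            y[k] = 0.9 * y[k - 1] + 0.1 * u[k - 1]
        return y

    n = 200
    reference = 0.5 * (1.0 - np.cos(np.linspace(0.0, np.pi, n)))  # rest-to-rest move
    u = np.zeros(n)
    for trial in range(200):             # repeat, measure, correct
        error = reference - plant(u)
        u[:-1] += 0.5 * error[1:]        # u_{j+1}[k] = u_j[k] + L * e_j[k+1]

    print(f"max tracking error after learning: {np.abs(reference - plant(u)).max():.2e}")
    ```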

  16. Modelling and Design of a Microstrip Band-Pass Filter Using Space Mapping Techniques

    CERN Document Server

    Tavakoli, Saeed; Mohanna, Shahram

    2010-01-01

    Determination of design parameters based on electromagnetic simulations of microwave circuits is an iterative and often time-consuming procedure. Space mapping is a powerful technique for optimizing such complex models by efficiently substituting accurate but expensive electromagnetic models (fine models) with fast and approximate models (coarse models). In this paper, we apply two space mapping techniques, an explicit space mapping as well as an implicit and response residual space mapping, to a case study application, a microstrip band-pass filter. First, we model the case study application and optimize its design parameters using the explicit space mapping modelling approach. Then, we use the implicit and response residual space mapping approach to optimize the filter's design parameters. Finally, the performance of each design method is evaluated. It is shown that the use of the above-mentioned techniques leads to satisfactory design solutions with a minimum number of computationally expensive fine model eval...

  17. Appropriate Mathematical Model of DC Servo Motors Applied in SCARA Robots

    Directory of Open Access Journals (Sweden)

    Attila L. Bencsik

    2004-11-01

    Full Text Available In the first part of the presentation, a detailed description of the modular technical system built up of electric components and end-effectors is given. Each of these components was developed separately at different industrial companies. The particular mechatronic unit under consideration was constructed with the use of appropriate mathematical models of these units. The aim of this presentation is to publish the results achieved by the use of a mathematical modeling technique invented and applied in the development of different mechatronic units, such as drives and actuators. The unified model describing the whole system was developed by integrating the models valid for the particular components. In the testing phase of the models, a program approximating typical realistic situations in terms of work-loads and the physical state of the system during operation was developed and applied. The main innovation presented here consists in integrating the conclusions of the professional experience the developers gained during their former R&D activity in different professional environments. The control system is constructed on the basis of classical methods; therefore the results of the model investigations can immediately be utilized by the developer of the whole complex system, which may be, for instance, an industrial robot.
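
    A hedged sketch of the kind of DC servo model referred to above: the standard armature-circuit and rotor equations integrated with a simple Euler step. Parameter values are illustrative, not those of the actual drives.

    ```python
    # Standard DC servo motor model: di/dt = (v - R*i - Ke*w)/L and
    # J*dw/dt = Kt*i - b*w, integrated with forward Euler. Parameters invented.
    R, L = 1.2, 5e-3      # armature resistance (ohm) and inductance (H)
    KT = KE = 0.05        # torque and back-EMF constants
    J, B = 2e-5, 1e-5     # rotor inertia (kg m^2) and viscous friction

    def step(i, w, v, dt=1e-5):
        di = (v - R * i - KE * w) / L
        dw = (KT * i - B * w) / J
        return i + dt * di, w + dt * dw

    i = w = 0.0
    for _ in range(20_000):   # 0.2 s of a 12 V step response
        i, w = step(i, w, v=12.0)
    print(f"speed approaching steady state: {w:.0f} rad/s")
    ```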

  18. Validation technique using mean and variance of kriging model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ho Sung; Jung, Jae Jun; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)

    2007-07-01

    Rigorously validating the accuracy of a metamodel is an important research area in metamodeling techniques. A leave-k-out cross-validation technique not only requires considerable computational cost but also cannot quantitatively measure the fidelity of the metamodel. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even if the kriging model is still inaccurate. In this research, we propose a new validation technique using the average and the variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique is more efficient and accurate than the cross-validation technique, because it explicitly integrates the kriging model to achieve an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a trend similar to the root mean squared error, so that it can be used as a stop criterion for sequential sampling.
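
    A sketch of the idea in scikit-learn terms: fit a Gaussian-process (kriging) model and use its predictive mean and variance directly, rather than leave-k-out refits, to decide when sampling may stop. The function, kernel, and tolerance below are illustrative stand-ins for the paper's criterion.

    ```python
    # Sequential sampling with a kriging (Gaussian-process) metamodel: sample
    # where the predictive variance is largest, stop when the mean predictive
    # standard deviation falls under a tolerance.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def f(x):
        return np.sin(3 * x) + 0.5 * x  # stand-in for the expensive response

    X = np.array([[0.0], [1.0], [2.0]])
    grid = np.linspace(0.0, 2.0, 200).reshape(-1, 1)

    for _ in range(20):
        gp = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-8).fit(X, f(X).ravel())
        mean, std = gp.predict(grid, return_std=True)
        if std.mean() < 1e-3:  # variance-based validation / stopping criterion
            break
        X = np.vstack([X, grid[std.argmax()]])

    print(f"stopped with {len(X)} samples; mean predictive std {std.mean():.2e}")
    ```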

  19. A unified framework for benchmark dose estimation applied to mixed models and model averaging

    DEFF Research Database (Denmark)

    Ritz, Christian; Gerhard, Daniel; Hothorn, Ludwig A.

    2013-01-01

    This article develops a framework for benchmark dose estimation that allows intrinsically nonlinear dose-response models to be used for continuous data in much the same way as is already possible for quantal data. This means that the same dose-response model equations may be applied to both...

  20. Atomistic Method Applied to Computational Modeling of Surface Alloys

    Science.gov (United States)

    Bozzolo, Guillermo H.; Abel, Phillip B.

    2000-01-01

    The formation of surface alloys is a growing research field that, in terms of the surface structure of multicomponent systems, defines the frontier both for experimental and theoretical techniques. Because of the impact that the formation of surface alloys has on surface properties, researchers need reliable methods to predict new surface alloys and to help interpret unknown structures. The structure of surface alloys and when, and even if, they form are largely unpredictable from the known properties of the participating elements. No unified theory or model to date can infer surface alloy structures from the constituents' properties or their bulk alloy characteristics. In spite of these severe limitations, a growing catalogue of such systems has been developed during the last decade, and only recently are global theories being advanced to fully understand the phenomenon. None of the methods used in other areas of surface science can properly model even the already known cases. Aware of these limitations, the Computational Materials Group at the NASA Glenn Research Center at Lewis Field has developed a useful, computationally economical, and physically sound methodology to enable the systematic study of surface alloy formation in metals. This tool has been tested successfully on several known systems for which hard experimental evidence exists and has been used to predict ternary surface alloy formation (results to be published: Garces, J.E.; Bozzolo, G.; and Mosca, H.: Atomistic Modeling of Pd/Cu(100) Surface Alloy Formation. Surf. Sci., 2000 (in press); Mosca, H.; Garces J.E.; and Bozzolo, G.: Surface Ternary Alloys of (Cu,Au)/Ni(110). (Accepted for publication in Surf. Sci., 2000.); and Garces, J.E.; Bozzolo, G.; Mosca, H.; and Abel, P.: A New Approach for Atomistic Modeling of Pd/Cu(110) Surface Alloy Formation. (Submitted to Appl. Surf. Sci.)). Ternary alloy formation is a field yet to be fully explored experimentally. The computational tool, which is based on

  1. Discrete classification technique applied to TV advertisements liking recognition system based on low-cost EEG headsets.

    Science.gov (United States)

    Soria Morillo, Luis M; Alvarez-Garcia, Juan A; Gonzalez-Abril, Luis; Ortega Ramírez, Juan A

    2016-07-15

    In this paper a new approach is applied to the area of marketing research. The aim is to recognize how brain activity responds during the visualization of short video advertisements using discrete classification techniques. By means of low-cost electroencephalography (EEG) devices, the activation level of several brain regions has been studied while the ads are shown to users. One may ask how useful neuroscience knowledge is in marketing, what neuroscience could provide to the marketing sector, or why this approach can improve accuracy and final user acceptance compared to other works. By using discrete techniques over the EEG frequency bands of a generated dataset, C4.5, an artificial neural network (ANN), and a new recognition system based on Ameva, a discretization algorithm, are applied to obtain the score given by subjects to each TV ad. The proposed technique reaches more than 75% accuracy, which is an excellent result taking into account the typology of the EEG sensors used in this work. Furthermore, the time consumption of the proposed algorithm is reduced by up to 30% compared to the other techniques presented in this paper. This brings about a battery lifetime improvement on the devices where the algorithm runs, extending the experience in the ubiquitous context where the new approach has been tested.
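
    Ameva itself is not available in common libraries, so as a stand-in the sketch below discretizes synthetic band-power features with quantile binning and trains an entropy-based decision tree (a C4.5-like classifier). The feature layout and labels are invented purely to show the pipeline's shape.

        import numpy as np
        from sklearn.preprocessing import KBinsDiscretizer
        from sklearn.tree import DecisionTreeClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.gamma(2.0, 1.0, size=(200, 4))     # per-trial theta/alpha/beta/gamma power (toy)
        y = rng.integers(0, 5, size=200)           # ad score given by the subject (toy labels)

        disc = KBinsDiscretizer(n_bins=5, encode="ordinal", strategy="quantile")
        Xd = disc.fit_transform(X)                 # discrete features, as in Ameva-style pipelines
        tree = DecisionTreeClassifier(criterion="entropy", random_state=0)
        print(cross_val_score(tree, Xd, y, cv=5).mean())   # chance level here: labels are random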

  2. Measuring the Length Distribution of a Fibril System: a Flow Birefringence Technique applied to Amyloid Fibrils

    NARCIS (Netherlands)

    Rogers, S.S.; Venema, P.; Sagis, L.M.C.; Linden, van der E.; Donald, A.M.

    2005-01-01

    Relaxation of flow birefringence can give a direct measure of the rotational diffusion of rodlike objects in solution. With a suitable model of the rotational diffusivity, a length distribution can be sought by fitting the decay curve. We have measured the flow birefringence decay from solutions of

  3. Evaluation of Pre-Service Teachers' Opinions about Teaching Methods and Techniques Applied by Instructors

    Science.gov (United States)

    Aykac, Necdet

    2016-01-01

    Problem Statement: Training qualified teachers depends on the quality of the trainers. From this point of view, the quality of teacher educators and their instruction in the classroom are important to train qualified teachers. This is because teachers tend to see teacher educators who have trained them as role models, and during their school…

  4. Comparative Analysis of Vehicle Make and Model Recognition Techniques

    Directory of Open Access Journals (Sweden)

    Faiza Ayub Syed

    2014-03-01

    Full Text Available Vehicle Make and Model Recognition (VMMR) has emerged as a significant element of vision-based systems because of its applications in access control systems, traffic control and monitoring systems, security systems and surveillance systems, etc. So far a number of techniques have been developed for vehicle recognition, each following a different methodology and classification approach. The evaluation results highlight the recognition technique with the highest accuracy level. In this paper we describe the working of various vehicle make and model recognition techniques and compare these techniques on the basis of methodology, principles, classification approach, classifier and level of recognition. After comparing these factors we conclude that Locally Normalized Harris Corner Strengths (LHNS) performs best as compared to other techniques. LHNS uses Bayes and k-NN classification approaches for vehicle classification and extracts information from the frontal view of vehicles for vehicle make and model recognition.
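
    As a loose illustration of the LHNS idea under assumed details, the sketch below computes a Harris corner-strength map with OpenCV, applies a crude global normalization in place of the paper's local normalization, and feeds the flattened map to a k-NN classifier; the images and labels are synthetic stand-ins for frontal vehicle views.

        import numpy as np
        import cv2
        from sklearn.neighbors import KNeighborsClassifier

        def harris_feature(gray):
            # Harris corner response map, crudely normalized and flattened
            resp = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
            return (resp / (np.abs(resp).max() + 1e-12)).ravel()

        rng = np.random.default_rng(0)
        imgs = rng.integers(0, 255, size=(40, 64, 64)).astype(np.uint8)  # toy "frontal views"
        labels = np.repeat([0, 1], 20)                                   # two make/model classes
        X = np.array([harris_feature(im) for im in imgs])
        knn = KNeighborsClassifier(n_neighbors=3).fit(X, labels)
        print(knn.predict(X[:2]))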

  5. Applying Discourse Analysis in ELT: a Five Cs Model

    Institute of Scientific and Technical Information of China (English)

    肖巧慧

    2009-01-01

    Based on a discussion of definitions of discourse analysis, discourse is regarded as layers consisting of five elements: cohesion, coherence, culture, critique and context. Moreover, we focus on applying DA in ELT.

  6. Quality of service in banks by applying the mystery shopping technique

    Directory of Open Access Journals (Sweden)

    Dašić Danijela

    2014-01-01

    Full Text Available In our present times the quality of service is beyond any doubt the most significant category in the banking sector. Focus on consumers, i.e. clients, has over the last few years been expanding in service-oriented activities, especially in banks. The needs of users of financial services have also changed dynamically, and banks must develop long-term business relationships with their clients in order to satisfy those needs and keep their own business profitable. In addition, robust competition prevails on the banking market, and quality of service is very often a bank's competitive advantage. One way to measure quality of service is mystery shopping, which measures the conduct of staff employed in an organisation: bank employees are judged during their service-offering interactions, as staff are the first link in the chain of communication between the client and the bank. The paper presents the manner in which mystery shopping is conducted in banks, together with the results of mystery shopping in the major banks operating in Novi Sad. The objective of this research is to present the significance of mystery shopping in banks, addressing the problem of the quality of services offered. The methodology applied was interviewing of staff and the study and analysis of internal materials received from the banks. Based on the results obtained, we point out the ways in which mystery shopping affects client satisfaction, and the ways in which mystery shopping can help achieve a quality of service beyond expectations in the banking sector. Emphasis is also placed on the importance of the human factor, i.e. the staff, as an important category in measuring the quality of service offered.

  7. Molecular dynamics techniques for modeling G protein-coupled receptors.

    Science.gov (United States)

    McRobb, Fiona M; Negri, Ana; Beuming, Thijs; Sherman, Woody

    2016-10-01

    G protein-coupled receptors (GPCRs) constitute a major class of drug targets and modulating their signaling can produce a wide range of pharmacological outcomes. With the growing number of high-resolution GPCR crystal structures, we have the unprecedented opportunity to leverage structure-based drug design techniques. Here, we discuss a number of advanced molecular dynamics (MD) techniques that have been applied to GPCRs, including long time scale simulations, enhanced sampling techniques, water network analyses, and free energy approaches to determine relative binding free energies. On the basis of the many success stories, including those highlighted here, we expect that MD techniques will be increasingly applied to aid in structure-based drug design and lead optimization for GPCRs.

  8. Dosimetric properties of bio minerals applied to high-dose dosimetry using the TSEE technique

    Energy Technology Data Exchange (ETDEWEB)

    Vila, G. B.; Caldas, L. V. E., E-mail: gbvila@ipen.br [Instituto de Pesquisas Energeticas e Nucleares / CNEN, Av. Lineu Prestes 2242, Cidade Universitaria, 05508-000 Sao Paulo (Brazil)

    2014-08-15

    The study of dosimetric properties such as reproducibility, residual signal, lower detection dose, dose-response curve and fading of the thermally stimulated exoelectron emission (TSEE) signal of Brazilian biominerals has shown that these materials have potential for use as radiation dosimeters. The reproducibility within ±10% for oyster shell, mother-of-pearl and coral reef samples showed that the signal dispersion is small compared with the mean value of the measurements. The study showed that the residual signal can be eliminated with a thermal treatment at 300 °C for 1 h. The lower detection doses of 9.8 Gy determined for the oyster shell samples exposed to beta radiation and 1.6 Gy for oyster shell and mother-of-pearl samples exposed to gamma radiation can be considered good, taking into account the high doses of this study. The materials presented linear dose-response curves in some ranges; the lack of linearity in other cases presents no problem, since a good mathematical description is possible. The fading study showed that the loss of TSEE signal can be minimized if the samples are protected from interferences such as light, heat and humidity. Taking into account the useful linearity range as the main dosimetric characteristic, the tiger shell and oyster shell samples are the most suitable for high-dose dosimetry using the TSEE technique. (Author)

  9. Comparison of motion correction techniques applied to functional near-infrared spectroscopy data from children

    Science.gov (United States)

    Hu, Xiao-Su; Arredondo, Maria M.; Gomba, Megan; Confer, Nicole; DaSilva, Alexandre F.; Johnson, Timothy D.; Shalinsky, Mark; Kovelman, Ioulia

    2015-12-01

    Motion artifacts are the most significant source of noise in pediatric brain-imaging designs and data analyses, especially in applications of functional near-infrared spectroscopy (fNIRS), where they can severely degrade the quality of the acquired data. Different methods have been developed to correct motion artifacts in fNIRS data, but the relative effectiveness of these methods for data from child and infant subjects (which is often significantly noisier than adult data) remains largely unexplored. The issue is further complicated by the heterogeneity of fNIRS data artifacts. We compared the efficacy of the six most prevalent motion artifact correction techniques, namely wavelet, spline interpolation, principal component analysis, moving average (MA), correlation-based signal improvement, and a combination of wavelet and MA, using fNIRS data acquired from children participating in a language acquisition task. The evaluation of five predefined metrics suggests that the MA and wavelet methods yield the best outcomes. These findings elucidate the varied nature of fNIRS data artifacts and the efficacy of artifact correction methods with pediatric populations, and help inform both the theory and practice of optical brain imaging analysis.
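
    Of the six techniques compared, the moving average is the simplest to state; a minimal sketch, with an invented 10 Hz channel and spike artifacts, is below. The window length is a placeholder, and the study's exact implementation may differ.

        import numpy as np

        def moving_average(signal, win=25):
            # smooth the channel so brief, high-amplitude motion spikes are attenuated
            kernel = np.ones(win) / win
            return np.convolve(signal, kernel, mode="same")

        rng = np.random.default_rng(0)
        t = np.linspace(0, 60, 600)                       # 60 s at 10 Hz
        hemo = 0.5 * np.sin(2 * np.pi * 0.05 * t)         # toy hemodynamic oscillation
        spikes = np.zeros_like(t)
        spikes[rng.choice(600, 5, replace=False)] = 4.0   # motion spikes
        corrected = moving_average(hemo + spikes)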

  10. Imaging techniques applied to the study of fluids in porous media

    Energy Technology Data Exchange (ETDEWEB)

    Tomutsa, L.; Doughty, D.; Mahmood, S.; Brinkmeyer, A.; Madden, M.P.

    1991-01-01

    A detailed understanding of rock structure and its influence on fluid entrapment, storage capacity, and flow behavior can improve the effective utilization and design of methods to increase the recovery of oil and gas from petroleum reservoirs. The dynamics of fluid flow and trapping phenomena in porous media was investigated. Miscible and immiscible displacement experiments in heterogeneous Berea and Shannon sandstone samples were monitored using X-ray computed tomography (CT scanning) to determine the effect of heterogeneities on fluid flow and trapping. The statistical analysis of pore and pore throat sizes in thin sections cut from these sandstone samples enabled the delineation of small-scale spatial distributions of porosity and permeability. Multiphase displacement experiments were conducted with micromodels constructed using thin slabs of the sandstones. The combination of the CT scanning, thin section, and micromodel techniques enables the investigation of how variations in pore characteristics influence fluid front advancement, fluid distributions, and fluid trapping. Plugs cut from the sandstone samples were investigated using high resolution nuclear magnetic resonance imaging permitting the visualization of oil, water or both within individual pores. The application of these insights will aid in the proper interpretation of relative permeability, capillary pressure, and electrical resistivity data obtained from whole core studies. 7 refs., 14 figs., 2 tabs.

  11. Morphological analysis of the flippers in the Franciscana dolphin, Pontoporia blainvillei, applying X-ray technique.

    Science.gov (United States)

    Del Castillo, Daniela Laura; Panebianco, María Victoria; Negri, María Fernanda; Cappozzo, Humberto Luis

    2014-07-01

    Pectoral flippers of cetaceans function to provide stability and maneuverability during locomotion. Directional asymmetry (DA) is a common feature among odontocete cetaceans, as is sexual dimorphism (SD). For the first time, DA, allometry, physical maturity, and SD of the flipper skeleton of Pontoporia blainvillei were analyzed by X-ray technique. The number of carpals, metacarpals and phalanges, and morphometric characters from the humerus, radius, ulna, and digit two were studied in franciscana dolphins from Buenos Aires, Argentina. The number of visible epiphyses and their degree of fusion at the proximal and distal ends of the humerus, radius, and ulna were also analyzed. The flipper skeleton was symmetrical, showing a negative allometric trend, with similar growth patterns in both sexes with the exception of the width of the radius (P ≤ 0.01). SD was found in the number of phalanges of digit two (P ≤ 0.01) and in ulna and digit two lengths: females showed a relatively longer ulna and a relatively shorter digit two, and the opposite occurred in males (P ≤ 0.01). The epiphyseal fusion pattern proved to be a tool to determine a dolphin's age; franciscana dolphins with a mature flipper were at least four years old. This study indicates that the flippers of franciscana dolphins are symmetrical, both sexes show a negative allometric trend, SD is observed in the radius, ulna, and digit two, and the flipper skeleton allows determination of the age class of the dolphins.

  12. Experimental Studies of Active and Passive Flow Control Techniques Applied in a Twin Air-Intake

    Directory of Open Access Journals (Sweden)

    Akshoy Ranjan Paul

    2013-01-01

    Full Text Available The flow control in twin air-intakes is necessary to improve the performance characteristics, since the flow traveling through curved and diffused paths becomes complex, especially after merging. The paper presents a comparison between two well-known techniques of flow control: active and passive. It presents an effective design of a vortex generator jet (VGJ) and a vane-type passive vortex generator (VG) and uses them in a twin air-intake duct in different combinations to establish their effectiveness in improving the performance characteristics. The VGJ is designed to insert flow from the side wall at pitch angles of 90 degrees and 45 degrees. Corotating (parallel) and counterrotating (V-shape) are the configurations of the vane-type VG. It is observed that the VGJ has the potential to change the flow pattern drastically as compared to the vane-type VG. When the VGJ is directed perpendicular to the side walls of the air-intake at a pitch angle of 90 degrees, static pressure recovery is increased by 7.8% and total pressure loss is reduced by 40.7%, which is the best among all cases tested for the VGJ. For the bigger-sized VG attached to the side walls of the air-intake, static pressure recovery is increased by 5.3%, but total pressure loss is reduced by only 4.5% as compared to all other cases of VG.

  13. From birds to bees: applying video observation techniques to invertebrate pollinators

    Directory of Open Access Journals (Sweden)

    C J Lortie

    2012-01-01

    Full Text Available Observation is a critical element of behavioural ecology and ethology. Here, we propose a similar set of techniques to enhance the study of the diversity patterns of invertebrate pollinators and associated plant species. In a body of avian research, cameras are set up on nests in blinds to examine chick and parent interactions. This avoids observer bias, minimizes interference, and provides numerous other benefits, including timestamps, the capacity to record the frequency and duration of activities, and a permanent archive of activity for later analyses. Hence, we propose that small video cameras in blinds can also be used to continuously monitor pollinator activity on plants, thereby capitalizing on those same benefits. This method was tested in 2010 in the alpine in BC, Canada, on target focal plant species and on open mixed assemblages of plant species. Apple iPod nanos successfully recorded activity for an entire day at a time, totalling 450 hours, and provided sufficient resolution and field of view to both identify pollinators to recognizable taxonomic units and monitor movement and visitation rates at a scale of view of approximately 50 cm2. This method is not a replacement for pan traps or sweep nets but an opportunity to enhance these datasets with more detailed, finer-resolution data. Importantly, the test of this specific method also indicates that far more hours of observation - using any method - are likely required to accurately estimate pollinator diversity than most current published ecological studies employ.

  14. A Morphing Technique Applied to Lung Motions in Radiotherapy: Preliminary Results

    Directory of Open Access Journals (Sweden)

    R. Laurent

    2010-01-01

    Full Text Available Organ motion leads to dosimetric uncertainties during a patient's treatment. Much work has been done to quantify the dosimetric effects of lung movement during radiation treatment. There is a particular need for a good description and prediction of organ motion. To describe lung motion more precisely, we have examined the possibility of using a computer technique: a morphing algorithm. Morphing is an iterative method which consists of blending one image into another image. To evaluate the use of morphing, a Four-Dimensional Computed Tomography (4DCT) acquisition of a patient was performed. The lungs were automatically segmented for the different phases, and morphing was performed using the end-inspiration and end-expiration phase scans only. Intermediate morphing files were compared with the 4DCT intermediate images. The results showed good agreement between morphing images and 4DCT images: fewer than 2% of the 512 by 256 voxels were wrongly classified as belonging/not belonging to a lung section. This paper presents preliminary results, and our morphing algorithm needs improvement. We can infer that morphing offers considerable advantages in terms of radiation protection of the patient during the diagnosis phase, handling of artifacts, definition of organ contours and description of organ motion.
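
    True morphing also warps voxel coordinates between the two phases; the sketch below only shows the simpler blending half of the idea, linearly interpolating two binary lung masks and re-thresholding to get an intermediate-phase estimate. The mask shapes and geometry are invented.

        import numpy as np

        def blend_masks(mask_insp, mask_exp, t):
            # naive intermediate phase: linear blend of the two extreme-phase masks
            mix = (1.0 - t) * mask_insp.astype(float) + t * mask_exp.astype(float)
            return mix >= 0.5

        mask_insp = np.zeros((512, 256), bool)
        mask_insp[100:400, 50:200] = True            # toy end-inspiration lung section
        mask_exp = np.zeros((512, 256), bool)
        mask_exp[120:380, 60:190] = True             # toy end-expiration lung section
        mid_phase = blend_masks(mask_insp, mask_exp, 0.5)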

  15. A Comparative Analysis of the 'Green' Techniques Applied for Polyphenols Extraction from Bioresources.

    Science.gov (United States)

    Talmaciu, Adina Iulia; Volf, Irina; Popa, Valentin I

    2015-11-01

    Of all the valuable biomass extractives, polyphenols are a widespread group of secondary metabolites found in all plants, representing the most desirable phytochemicals due to their potential to be used as additives in the food industry, cosmetics, medicine, and other fields. At present, there is an increased interest in recovering them from plants of the spontaneous flora, cultivated plants, and wastes resulting from agriculture and the food industry. That is why many efforts have been made to provide highly sensitive, efficient, and eco-friendly methods for the extraction of polyphenols, according to the green chemistry and sustainable development concepts. Many extraction procedures are known, each with advantages and disadvantages. For these reasons, the aim of this article is to provide a comparative analysis of the technical and economical aspects of the most innovative extraction techniques studied recently: microwave-assisted extraction (MAE), supercritical fluid extraction (SFE), and ultrasound-assisted extraction (UAE). Copyright © 2015 Verlag Helvetica Chimica Acta AG, Zürich.

  16. Erasing the Milky Way: new cleaning technique applied to GBT intensity mapping data

    CERN Document Server

    Wolz, L; Abdalla, F B; Anderson, C M; Chang, T -C; Li, Y -C; Masui, K W; Switzer, E; Pen, U -L; Voytek, T C; Yadav, J

    2015-01-01

    We present the first application of a new foreground removal pipeline to the current leading HI intensity mapping dataset, obtained by the Green Bank Telescope (GBT). We study the 15hr and 1hr field data of the GBT observations previously presented in Masui et al. (2013) and Switzer et al. (2013) covering about 41 square degrees at 0.6 < z < 1.0 which overlaps with the WiggleZ galaxy survey employed for the cross-correlation with the maps. In the presented pipeline, we subtract the Galactic foreground continuum and the point source contaminations using an independent component analysis technique (fastica) and develop a description for a Fourier-based optimal weighting estimator to compute the temperature power spectrum of the intensity maps and cross-correlation with the galaxy survey data. We show that fastica is a reliable tool to subtract diffuse and point-source emission by using the non-Gaussian nature of their probability functions. The power spectra of the intensity maps and the cross-correlation...
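
    A toy version of the fastica foreground subtraction can be sketched as follows: the data cube is decomposed along frequency into a few independent components that absorb the smooth, bright foregrounds, and the residual after reconstruction retains the faint signal. The component count and the synthetic cube are arbitrary choices, not the paper's settings.

        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(0)
        n_freq, n_pix = 64, 4096
        freqs = np.linspace(0.7, 0.9, n_freq)[:, None]
        foreground = 100.0 * freqs ** -2.7 * rng.lognormal(0.0, 1.0, n_pix)  # bright, smooth
        signal = 0.01 * rng.normal(size=(n_freq, n_pix))                     # faint HI-like part
        maps = foreground + signal

        ica = FastICA(n_components=4, random_state=0, max_iter=1000)
        S = ica.fit_transform(maps.T)            # pixels as samples, frequencies as features
        fg_model = ica.inverse_transform(S).T    # what the few components capture
        residual = maps - fg_model               # left over: the faint signal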

  17. Blade Displacement Measurement Technique Applied to a Full-Scale Rotor Test

    Science.gov (United States)

    Abrego, Anita I.; Olson, Lawrence E.; Romander, Ethan A.; Barrows, Danny A.; Burner, Alpheus W.

    2012-01-01

    Blade displacement measurements using multi-camera photogrammetry were acquired during the full-scale wind tunnel test of the UH-60A Airloads rotor, conducted in the National Full-Scale Aerodynamics Complex 40- by 80-Foot Wind Tunnel. The objectives were to measure the blade displacement and deformation of the four rotor blades as they rotated through the entire rotor azimuth. These measurements are expected to provide a unique dataset to aid in the development and validation of rotorcraft prediction techniques. They are used to resolve the blade shape and position, including pitch, flap, lag and elastic deformation. Photogrammetric data encompass advance ratios from 0.15 to slowed rotor simulations of 1.0, thrust coefficient to rotor solidity ratios from 0.01 to 0.13, and rotor shaft angles from -10.0 to 8.0 degrees. An overview of the blade displacement measurement methodology and system development, descriptions of image processing, uncertainty considerations, preliminary results covering static and moderate advance ratio test conditions and future considerations are presented. Comparisons of experimental and computational results for a moderate advance ratio forward flight condition show good trend agreements, but also indicate significant mean discrepancies in lag and elastic twist. Blade displacement pitch measurements agree well with both the wind tunnel commanded and measured values.

  18. Comparison Study of Different Lossy Compression Techniques Applied on Digital Mammogram Images

    Directory of Open Access Journals (Sweden)

    Ayman AbuBaker

    2016-12-01

    Full Text Available The huge growth in internet usage increases the need to transfer and store multimedia files. Mammogram images are among these files, having large size and high resolution. Compression of these images is used to reduce file size without degrading quality, especially in the suspicious regions of the mammogram images. Reducing the size of these images makes it possible to store more images and minimizes the cost of transmission when radiologists exchange information. Many techniques exist in the literature to address the loss of information in images. In this paper, two types of compression transformations are used: Singular Value Decomposition (SVD), which transforms the image into a series of eigenvectors whose number depends on the dimensions of the image, and the Discrete Cosine Transform (DCT), which converts the image from the spatial domain into the frequency domain. A Computer Aided Diagnosis (CAD) system is implemented to evaluate the appearance of microcalcifications in mammogram images after the two compression transformations, and the performance of SVD and DCT is subjectively compared by a radiologist. As a result, the DCT algorithm can effectively reduce the size of the mammogram images by 65% while preserving the high-quality appearance of microcalcification regions.
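
    The SVD half of the comparison is easy to sketch: keeping only the leading singular triplets gives a low-rank approximation whose storage cost is rank*(H + W + 1) values instead of H*W. The rank and the random stand-in image below are placeholders; the paper's pipeline additionally runs a CAD evaluation.

        import numpy as np

        def svd_compress(img, rank):
            # low-rank approximation from the top-`rank` singular triplets
            U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
            return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]

        rng = np.random.default_rng(0)
        img = rng.integers(0, 256, size=(256, 256)).astype(float)  # stand-in for a mammogram
        approx = svd_compress(img, rank=40)
        rel_err = np.linalg.norm(img - approx) / np.linalg.norm(img)
        print(round(rel_err, 3))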

  19. Applying stereotactic injection technique to study genetic effects on animal behaviors.

    Science.gov (United States)

    McSweeney, Colleen; Mao, Yingwei

    2015-05-10

    Stereotactic injection is a useful technique to deliver high titer lentiviruses to targeted brain areas in mice. Lentiviruses can either overexpress or knockdown gene expression in a relatively focused region without significant damage to the brain tissue. After recovery, the injected mouse can be tested on various behavioral tasks such as the Open Field Test (OFT) and the Forced Swim Test (FST). The OFT is designed to assess locomotion and the anxious phenotype in mice by measuring the amount of time that a mouse spends in the center of a novel open field. A more anxious mouse will spend significantly less time in the center of the novel field compared to controls. The FST assesses the anti-depressive phenotype by quantifying the amount of time that mice spend immobile when placed into a bucket of water. A mouse with an anti-depressive phenotype will spend significantly less time immobile compared to control animals. The goal of this protocol is to use the stereotactic injection of a lentivirus in conjunction with behavioral tests to assess how genetic factors modulate animal behaviors.

  20. Experimental studies of active and passive flow control techniques applied in a twin air-intake.

    Science.gov (United States)

    Paul, Akshoy Ranjan; Joshi, Shrey; Jindal, Aman; Maurya, Shivam P; Jain, Anuj

    2013-01-01

    The flow control in twin air-intakes is necessary to improve the performance characteristics, since the flow traveling through curved and diffused paths becomes complex, especially after merging. The paper presents a comparison between two well-known techniques of flow control: active and passive. It presents an effective design of a vortex generator jet (VGJ) and a vane-type passive vortex generator (VG) and uses them in twin air-intake duct in different combinations to establish their effectiveness in improving the performance characteristics. The VGJ is designed to insert flow from side wall at pitch angle of 90 degrees and 45 degrees. Corotating (parallel) and counterrotating (V-shape) are the configuration of vane type VG. It is observed that VGJ has the potential to change the flow pattern drastically as compared to vane-type VG. While the VGJ is directed perpendicular to the side walls of the air-intake at a pitch angle of 90 degree, static pressure recovery is increased by 7.8% and total pressure loss is reduced by 40.7%, which is the best among all other cases tested for VGJ. For bigger-sized VG attached to the side walls of the air-intake, static pressure recovery is increased by 5.3%, but total pressure loss is reduced by only 4.5% as compared to all other cases of VG.

  1. Machine Learning Techniques Applied to Sensor Data Correction in Building Technologies

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Matt K [ORNL; Castello, Charles C [ORNL; New, Joshua Ryan [ORNL

    2013-01-01

    Since commercial and residential buildings account for nearly half of the United States' energy consumption, making them more energy-efficient is a vital part of the nation's overall energy strategy. Sensors play an important role in this research by collecting the data needed to analyze the performance of components, systems, and whole buildings. Given this reliance on sensors, ensuring that sensor data are valid is a crucial problem. The solutions being researched are machine learning techniques, namely artificial neural networks and Bayesian networks. The types of data investigated in this study are: (1) temperature; (2) humidity; (3) refrigerator energy consumption; (4) heat pump liquid pressure; and (5) water flow. These data are taken from Oak Ridge National Laboratory's (ORNL) ZEBRAlliance research project, which is composed of four single-family homes in Oak Ridge, TN. Results show that for the temperature, humidity, pressure, and flow sensors, data can mostly be predicted with root-mean-square error (RMSE) of less than 10% of the respective sensor's mean value. Results for the energy sensor are not as good; RMSE values are centered about 100% of the mean value and are often well above 200%. Bayesian networks have RMSE of less than 5% of the respective sensor's mean value, but took substantially longer to train.
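
    A minimal version of the neural-network half of this approach, with invented features and targets: predict one sensor's reading from related channels and report RMSE as a fraction of the sensor's mean, the metric quoted above.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 5))                  # readings from related sensors (toy)
        w = np.array([1.2, -0.7, 0.3, 0.0, 0.5])
        y = 20.0 + X @ w + 0.1 * rng.normal(size=2000)  # target sensor with mean near 20

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        net = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                           random_state=0).fit(X_tr, y_tr)
        rmse = np.sqrt(np.mean((net.predict(X_te) - y_te) ** 2))
        print(rmse / abs(y_te.mean()))                  # RMSE as a fraction of the mean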

  2. Hyperspectral imaging techniques applied to the monitoring of wine waste anaerobic digestion process

    Science.gov (United States)

    Serranti, Silvia; Fabbri, Andrea; Bonifazi, Giuseppe

    2012-11-01

    An anaerobic digestion process, finalized to biogas production, is characterized by different steps involving the variation of some chemical and physical parameters related to the presence of specific biomasses, such as pH, chemical oxygen demand (COD), volatile solids, nitrate (NO3-) and phosphate (PO43-). A correct process characterization requires periodic sampling of the organic mixture in the reactor and further analysis of the samples by traditional chemical-physical methods. Such an approach is discontinuous, time-consuming and expensive. A new analytical approach based on hyperspectral imaging in the NIR field (1000 to 1700 nm) is investigated and critically evaluated with reference to the monitoring of the wine waste anaerobic digestion process. The application of the proposed technique was aimed at identifying and demonstrating the correlation, in terms of quality and reliability of the results, between "classical" chemical-physical parameters and spectral features of the digestate samples. Good results were obtained, ranging from R2=0.68 and RMSECV=12.83 mg/l for nitrate to R2=0.90 and RMSECV=5495.16 mg O2/l for COD. The proposed approach seems very useful in setting up innovative control strategies allowing for full, continuous control of the anaerobic digestion process.
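
    The abstract reports R2 and RMSECV, the figures of merit of a chemometric calibration; the paper's exact regression model is not stated here, so the sketch below uses partial least squares on synthetic spectra as one plausible choice.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)
        spectra = rng.normal(size=(60, 200)).cumsum(axis=1)           # smooth-ish toy NIR spectra
        cod = 30000 + 500 * spectra[:, 80] + rng.normal(0, 2000, 60)  # toy COD values, mg O2/l

        pls = PLSRegression(n_components=5)
        pred = cross_val_predict(pls, spectra, cod, cv=10).ravel()
        rmsecv = np.sqrt(np.mean((pred - cod) ** 2))
        r2 = 1 - np.sum((cod - pred) ** 2) / np.sum((cod - cod.mean()) ** 2)
        print(round(r2, 2), round(rmsecv, 1))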

  3. Muscle stiffness estimation using a system identification technique applied to evoked mechanomyogram during cycling exercise.

    Science.gov (United States)

    Uchiyama, Takanori; Saito, Kaito; Shinjo, Katsuya

    2015-12-01

    The aims of this study were to develop a method to extract the evoked mechanomyogram (MMG) during cycling exercise and to clarify muscle stiffness at various cadences, workloads, and power levels. Ten young healthy male participants were instructed to pedal a cycle ergometer at cadences of 40 and 60 rpm, with loads of 4.9, 9.8, 14.7, and 19.6 N. One electrical stimulus per two pedal rotations was applied to the vastus lateralis muscle at a knee angle of 80° in the down phase. MMGs were measured using a capacitor microphone and divided into stimulated and non-stimulated sequences, and each sequence was synchronously averaged. The synchronously averaged non-stimulated MMG was subtracted from the synchronously averaged stimulated MMG to extract the evoked MMG. The evoked MMG system was identified, and the poles of the transfer function were calculated. The poles and the mass of the vastus lateralis muscle were used to estimate muscle stiffness. Results showed that muscle stiffness was 186-626 N/m and proportional to the workload and power. In conclusion, our method can be used to assess muscle stiffness proportional to the workload and power.
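
    The stiffness estimate follows from elementary second-order system identification: for m*x'' + c*x' + k*x = F, the continuous poles p = -zeta*wn +/- j*wd satisfy |p| = wn, so k = m*wn^2. The mass and pole below are placeholders chosen to land inside the reported 186-626 N/m range.

        # second-order muscle model: m*x'' + c*x' + k*x = F
        m = 0.4                       # assumed moving mass of the vastus lateralis, kg
        pole = complex(-12.0, 35.0)   # identified dominant pole, rad/s (placeholder)
        wn = abs(pole)                # natural frequency: |p| = wn
        k = m * wn ** 2               # stiffness, N/m
        print(round(k, 1))            # 547.6 N/m for these values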

  4. Evaluation and optimisation of bacterial genomic DNA extraction for no-culture techniques applied to vinegars.

    Science.gov (United States)

    Mamlouk, Dhouha; Hidalgo, Claudio; Torija, María-Jesús; Gullo, Maria

    2011-10-01

    Direct genomic DNA extraction from vinegars was set up, and its suitability for PCR assays was verified by PCR/DGGE and sequencing of the 16S rRNA gene. The method was tested on 12 intermediary products of special vinegars, fruit vinegars and condiments produced from different raw materials and procedures. DNA extraction was performed on pellets by chemical, enzymatic and resin-mediated methods and their modifications. Suitable yield and DNA purity were obtained by modification of a method based on the use of PVP/CTAB to remove polyphenolic components and exopolysaccharides. By sequencing of bands from the DGGE gel, Gluconacetobacter europaeus, Acetobacter malorum/cerevisiae and Acetobacter orleanensis were detected as the main species in samples having more than 4% acetic acid content. In samples with no acetic acid content, sequences retrieved from excised bands revealed high similarity with prokaryotes with no function in vinegar fermentation: Burkholderia spp., Cupriavidus spp., Lactococcus lactis and Leuconostoc mesenteroides. The method is suitable for the no-culture study of vinegars containing polyphenols and exopolysaccharides, allowing a more complete assessment of vinegar bacteria. Copyright © 2011 Elsevier Ltd. All rights reserved.

  5. A non-intrusive measurement technique applying CARS for concentration measurement in a gas mixing flow

    CERN Document Server

    Yamamoto, Ken; Moriya, Madoka; Kuriyama, Reiko; Sato, Yohei

    2015-01-01

    A coherent anti-Stokes Raman scattering (CARS) microscope system was built and applied to non-intrusive gas concentration measurement of a mixing flow in a millimeter-scale channel. Carbon dioxide and nitrogen were chosen as test fluids, and CARS signals from the fluids were generated by adjusting the wavelengths of the Pump and the Stokes beams. The generated CARS signals, whose wavelengths differ from those of the Pump and the Stokes beams, were captured by an EM-CCD camera after filtering out the excitation beams. A calibration experiment was performed in order to confirm the applicability of the built-up CARS system by measuring the intensity of the CARS signal from known concentrations of the samples. After confirming that the measured CARS intensity was proportional to the second power of the concentration, as theoretically predicted, the CARS intensities in the gas mixing flow channel were measured. Ten different measurement points were set and concentrations of both carbon dioxide and nitrog...
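
    Because the CARS intensity scales with the square of concentration, a single calibration point at a known concentration fixes the proportionality constant, and unknown concentrations follow by a square root. The counts below are invented.

        import numpy as np

        c_cal = 1.0                                # known CO2 fraction in the calibration cell
        I_cal = 8.4e3                              # measured CARS counts at c_cal (placeholder)
        a = I_cal / c_cal ** 2                     # from the model I = a * c**2

        I_meas = np.array([2.1e3, 4.6e3, 7.5e3])   # counts at measurement points
        c_est = np.sqrt(I_meas / a)
        print(c_est.round(3))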

  6. Costs of Lost opportunities: Applying Non-Market Valuation Techniques to Potential REDD+ Participants in Cameroon

    Directory of Open Access Journals (Sweden)

    Dara Y. Thompson

    2017-03-01

    Full Text Available Reduced Emissions from Deforestation and Forest Degradation (REDD+ has been systematically advanced within the UN Framework Convention on Climate Change (UNFCCC. However, implementing REDD+ in a populated landscape requires information on local costs and acceptability of changed practices. To supply such information, many studies have adopted approaches that explore the opportunity cost of maintaining land as forest rather than converting it to agricultural uses. These approaches typically assume that the costs to the smallholder are borne exclusively through the loss or gain of the production values associated with specific categories of land use. However, evaluating the value of land to smallholders in incomplete and messy institutional and economic contexts entails other considerations, such as varying portfolios of land holdings, tenure arrangements, restricted access to capital, and unreliable food markets. We suggest that contingent valuation (CV methods may provide a more complete reflection of the viability of REDD+ in multiple-use landscapes than do opportunity cost approaches. The CV approach eliminates the need to assume a homogenous smallholder, and instead assumes heterogeneity around social, economic and institutional contexts. We apply this approach in a southern rural Cameroonian context, through the lens of a hypothetical REDD+ contract. Our findings suggest local costs of REDD+ contracts to be higher and much more variable than opportunity cost estimates.

  7. Metamaterials modelling, fabrication and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refraction index are also ... parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, various approaches for determining the value of the refractive index

  8. Metamaterials modelling, fabrication, and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    2012-01-01

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited, but epsilon-near-zero and sub-unitary refraction index are also ... parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of the metamaterials research field that we deal with at our department. From the modelling part, we will present our approach for determining the field enhancement in slits...

  9. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...

  10. BiasMDP: Carrier lifetime characterization technique with applied bias voltage

    Energy Technology Data Exchange (ETDEWEB)

    Jordan, Paul M., E-mail: paul.jordan@namlab.com; Simon, Daniel K.; Dirnstorfer, Ingo [Nanoelectronic Materials Laboratory gGmbH (NaMLab), Nöthnitzer Straße 64, 01187 Dresden (Germany); Mikolajick, Thomas [Nanoelectronic Materials Laboratory gGmbH (NaMLab), Nöthnitzer Straße 64, 01187 Dresden (Germany); Technische Universität Dresden, Institut für Halbleiter- und Mikrosystemtechnik, 01062 Dresden (Germany)

    2015-02-09

    A characterization method is presented which determines fixed charge and interface defect densities in passivation layers. The method is based on a bias voltage applied to an electrode on top of the passivation layer. During a voltage sweep, the effective carrier lifetime is measured by means of microwave detected photoconductivity. When the external voltage compensates the electric field of the fixed charges, the lifetime drops to a minimum value. This minimum correlates with the flat-band voltage determined in reference impedance measurements. The correlation is measured on p-type silicon passivated by Al2O3 and Al2O3/HfO2 stacks with different fixed charge densities and layer thicknesses. Negative fixed charges with densities of 3.8 × 10^12 cm^-2 and 0.7 × 10^12 cm^-2 are determined for Al2O3 layers without and with an ultra-thin HfO2 interface, respectively. The voltage and illumination dependencies of the effective carrier lifetime are simulated with Shockley-Read-Hall surface recombination at continuous defects with parabolic capture cross-section distributions for electrons and holes. The best match with the measured data is achieved with a very low interface defect density of 1 × 10^10 eV^-1 cm^-2 for the Al2O3 sample with the HfO2 interface.

  11. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear space static data structures ... for range reporting problems in the pointer machine and the I/O-model. With this technique, we tighten the gap between the known upper bound and lower bound for the most fundamental range reporting problem, orthogonal range reporting.

  12. On the Performance of Classification Techniques with Pixel Removal Applied to Digit Recognition

    Directory of Open Access Journals (Sweden)

    Jozette V. Roberts

    2016-08-01

    Full Text Available The successive loss of the outermost pixel values, or frames, in the digital representation of handwritten digits is postulated to have an increasing impact on the accuracy of categorization of these digits. This removal of frames is referred to as trimming. The first few frames do not contain significant amounts of information, so their impact on accuracy should be negligible. As more frames are trimmed, the impact on the ability of each classification model to correctly identify digits becomes more significant. This study focuses on the effects of trimming frames of pixels on the ability of the Recursive Partitioning and Classification Trees method, the Naive Bayes method, the k-Nearest Neighbour method and the Support Vector Machine method to categorize handwritten digits. The results from the k-Nearest Neighbour and Recursive Partitioning and Classification Trees methods exemplified the white-noise effect in the trimming of the first few frames, whilst the Naive Bayes and Support Vector Machine methods did not. With respect to runtime, all models saw a relative decrease in time from the initial dataset. The k-Nearest Neighbour method had the greatest decreases, whilst the Support Vector Machine had significantly fluctuating times.
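
    Trimming is just the removal of the outer rings of pixels; the sketch below reproduces the idea on scikit-learn's 8x8 digits (a stand-in for the study's dataset) with a k-Nearest Neighbour classifier, printing accuracy as successive frames are removed.

        import numpy as np
        from sklearn.datasets import load_digits
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        def trim(images, k):
            # drop the outermost k frames (rings) of pixels from every image
            return images if k == 0 else images[:, k:-k, k:-k]

        digits = load_digits()                     # 8x8 handwritten digits
        for k in range(3):                         # trim 0, 1, 2 frames
            X = trim(digits.images, k).reshape(len(digits.images), -1)
            acc = cross_val_score(KNeighborsClassifier(3), X, digits.target, cv=5).mean()
            print(k, round(acc, 3))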

  13. Results of error correction techniques applied on two high accuracy coordinate measuring machines

    Energy Technology Data Exchange (ETDEWEB)

    Pace, C.; Doiron, T.; Stieren, D.; Borchardt, B.; Veale, R. (Sandia National Labs., Albuquerque, NM (USA); National Inst. of Standards and Technology, Gaithersburg, MD (USA))

    1990-01-01

    The Primary Standards Laboratory at Sandia National Laboratories (SNL) and the Precision Engineering Division at the National Institute of Standards and Technology (NIST) are in the process of implementing software error correction on two nearly identical high-accuracy coordinate measuring machines (CMMs). Both machines are Moore Special Tool Company M-48 CMMs fitted with laser positioning transducers. Although both machines were manufactured to high tolerance levels, the overall volumetric accuracy was insufficient for calibrating standards to the levels both laboratories require. The error mapping procedure was developed at NIST in the mid-1970s on an earlier but similar model. The original procedure was very complicated and did not make any assumptions about the rigidity of the machine as it moved; each of the possible error motions was measured at each point of the error map independently. A simpler mapping procedure, developed during the early 1980s, assumed rigid-body motion of the machine. This method has been used to calibrate lower-accuracy machines with a high degree of success, and similar software correction schemes have been implemented by many CMM manufacturers. The rigid-body model has not yet been used on highly repeatable CMMs such as the M48. In this report we present early mapping data for the two M48 CMMs. The SNL CMM was manufactured in 1985 and has been in service for approximately four years, whereas the NIST CMM was delivered in early 1989. 4 refs., 5 figs., 2 tabs.

  14. Procedures and Compliance of a Video Modeling Applied Behavior Analysis Intervention for Brazilian Parents of Children with Autism Spectrum Disorders

    Science.gov (United States)

    Bagaiolo, Leila F.; Mari, Jair de J.; Bordini, Daniela; Ribeiro, Tatiane C.; Martone, Maria Carolina C.; Caetano, Sheila C.; Brunoni, Decio; Brentani, Helena; Paula, Cristiane S.

    2017-01-01

    Video modeling using applied behavior analysis techniques is one of the most promising and cost-effective ways to improve social skills for parents of children with autism spectrum disorder. The main objectives were: (1) to elaborate/describe videos to improve eye contact and joint attention, and to decrease disruptive behaviors of autism spectrum…

  15. Empirical modeling and data analysis for engineers and applied scientists

    CERN Document Server

    Pardo, Scott A

    2016-01-01

    This textbook teaches advanced undergraduate and first-year graduate students in Engineering and Applied Sciences to gather and analyze empirical observations (data) in order to aid in making design decisions. While science is about discovery, the primary paradigm of engineering and "applied science" is design. Scientists are in the discovery business and want, in general, to understand the natural world rather than to alter it. In contrast, engineers and applied scientists design products, processes, and solutions to problems. That said, statistics, as a discipline, is mostly oriented toward the discovery paradigm. Young engineers come out of their degree programs having taken courses such as "Statistics for Engineers and Scientists" without any clear idea as to how they can use statistical methods to help them design products or processes. Many seem to think that statistics is only useful for demonstrating that a device or process actually does what it was designed to do. Statistics courses emphasize creati...

  16. APPLIED PHYTO-REMEDIATION TECHNIQUES USING HALOPHYTES FOR OIL AND BRINE SPILL SCARS

    Energy Technology Data Exchange (ETDEWEB)

    M.L. Korphage; Bruce G. Langhus; Scott Campbell

    2003-03-01

    Produced salt water from historical oil and gas production was often managed with inadequate care and unfortunate consequences. In Kansas, production practices in the 1930s and 1940s--before statewide anti-pollution laws--were such that fluids were often produced to surface impoundments, where the oil would segregate from the salt water. The oil was pumped off the pits, and the salt water was able to infiltrate into the subsurface soil zones and underlying bedrock. Over the years, oil producing practices changed so that segregation of fluids was accomplished in steel tanks and salt water was isolated from the natural environment. But before that could happen, significant areas of the state were scarred by salt water. These areas are now in need of economical remediation. Remediation of salt-scarred land can be facilitated with soil amendments, land management, and selection of appropriate salt-tolerant plants. Current research on the salt scars around the old Leon Waterflood, in Butler County, Kansas, shows the relative efficiency of remediation options. Based upon these research findings, it is possible to recommend cost-efficient remediation techniques for slight, medium, and heavy salt water damage to soil. Slight salt damage includes soils with electrical conductivity (EC) values of 4.0 mS/cm or less. Operators can treat these soils with sufficient amounts of gypsum, install irrigation systems, and till the soil. Appropriate plants can be introduced via transplants or seeding. Medium salt damage includes soils with EC values between 4.0 and 16 mS/cm. Operators will add amendments of gypsum, till the soil, and arrange for irrigation. Some particularly salt-tolerant plants can be added, but most planting ought to be reserved until the second season of remediation. Severe salt damage includes soil with EC values in excess of 16 mS/cm. Operators will add at least part of the gypsum required, till the soil, and arrange for irrigation. The following
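
    The tiered recommendations reduce to a simple threshold rule on electrical conductivity; a sketch of that decision rule, with the treatment summaries abbreviated from the text above:

        def remediation_plan(ec_ms_cm):
            # thresholds from the report: <=4, 4-16, >16 mS/cm
            if ec_ms_cm <= 4.0:
                return "slight: gypsum, irrigation, tillage; plant immediately"
            elif ec_ms_cm <= 16.0:
                return "medium: gypsum, tillage, irrigation; defer most planting to season two"
            return "severe: partial gypsum, tillage, irrigation; planting deferred"

        print(remediation_plan(7.2))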

  17. Lipase immobilized by different techniques on various support materials applied in oil hydrolysis

    Directory of Open Access Journals (Sweden)

    VILMA MINOVSKA

    2005-04-01

    Full Text Available Batch hydrolysis of olive oil was performed by Candida rugosa lipase immobilized on Amberlite IRC-50 and Al2O3. These two supports were selected out of 16 carriers: inorganic materials (sand, silica gel, infusorial earth, Al2O3), inorganic salts (CaCO3, CaSO4), ion-exchange resins (Amberlite IRC-50 and IR-4B, Dowex 2X8), a natural resin (colophony), a natural biopolymer (sodium alginate), synthetic polymers (polypropylene, polyethylene) and zeolites. Lipase immobilization was carried out by simple adsorption, adsorption followed by cross-linking, adsorption on ion-exchange resins, combined adsorption and precipitation, pure precipitation and gel entrapment. The suitability of the supports and techniques for the immobilization of lipase was evaluated by estimating the enzyme activity, protein loading, immobilization efficiency and reusability of the immobilizates. Most of the immobilizates exhibited either a low enzyme activity or difficulties during the hydrolytic reaction. Only those prepared by ionic adsorption on Amberlite IRC-50 and by combined adsorption and precipitation on Al2O3 showed better activity, 2000 and 430 U/g support, respectively, and demonstrated satisfactory behavior when used repeatedly. The hydrolysis was studied as a function of several parameters: surfactant concentration, enzyme concentration, pH and temperature. The immobilized preparation with Amberlite IRC-50 was stable and active over the whole range of pH (4 to 9) and temperature (20 to 50 °C), demonstrating a 99% degree of hydrolysis. In repeated usage it was stable and active, having a half-life of 16 batches, which corresponds to an operation time of 384 h. Its storage stability was remarkable too, since after 9 months it had lost only 25% of the initial activity. The immobilizate with Al2O3 was less stable and less active. At optimal environmental conditions, the degree of hydrolysis did not exceed 79%. In repeated usage, after the fourth batch, the degree of

  18. Robust Decision-making Applied to Model Selection

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Laboratory

    2012-08-06

    The scientific and engineering communities are relying more and more on numerical models to simulate ever-increasingly complex phenomena. Selecting a model, from among a family of models that meets the simulation requirements, presents a challenge to modern-day analysts. To address this concern, a framework is adopted anchored in info-gap decision theory. The framework proposes to select models by examining the trade-offs between prediction accuracy and sensitivity to epistemic uncertainty. The framework is demonstrated on two structural engineering applications by asking the following question: Which model, of several numerical models, approximates the behavior of a structure when parameters that define each of those models are unknown? One observation is that models that are nominally more accurate are not necessarily more robust, and their accuracy can deteriorate greatly depending upon the assumptions made. It is posited that, as reliance on numerical models increases, establishing robustness will become as important as demonstrating accuracy.

  19. Novel multiscale modeling tool applied to Pseudomonas aeruginosa biofilm formation.

    Science.gov (United States)

    Biggs, Matthew B; Papin, Jason A

    2013-01-01

    Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool.

  20. Novel multiscale modeling tool applied to Pseudomonas aeruginosa biofilm formation.

    Directory of Open Access Journals (Sweden)

    Matthew B Biggs

    Full Text Available Multiscale modeling is used to represent biological systems with increasing frequency and success. Multiscale models are often hybrids of different modeling frameworks and programming languages. We present the MATLAB-NetLogo extension (MatNet) as a novel tool for multiscale modeling. We demonstrate the utility of the tool with a multiscale model of Pseudomonas aeruginosa biofilm formation that incorporates both an agent-based model (ABM) and constraint-based metabolic modeling. The hybrid model correctly recapitulates oxygen-limited biofilm metabolic activity and predicts increased growth rate via anaerobic respiration with the addition of nitrate to the growth media. In addition, a genome-wide survey of metabolic mutants and biofilm formation exemplifies the powerful analyses that are enabled by this computational modeling tool.

  1. Low noise techniques applied to a piezoceramic receiver for gas coupled ultrasonic flaw detection

    CERN Document Server

    Farlow, R

    1998-01-01

    Piezoelectric plate transducers are commonly used for the generation and detection of ultrasonic signals and have applications in, for example, non-destructive testing and medical imaging. A rigorous theoretical investigation of thermal noise in plate transducers has been undertaken with the aim of establishing the absolute limits of receiver sensitivity in terms of both Minimum Detectable Power (MDP) and Minimum Detectable Force (MDF). The central feature of the work has been the development of two independent theories which provide identical results. One theory is based on an electrical approach which makes use of an extensively modified version of Hayward's linear systems model of the piezoelectric plate transducer, along with the well known work of Johnson and Nyquist. The other theory is based on a mechanical approach which makes use of the less well known work of Callen and Welton. Both theories indicate that only two parameters are required in order to determine the MDP and MDF of an open circuit trans...
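
    The Johnson/Nyquist result underlying the electrical treatment sets the noise floor directly; a back-of-the-envelope evaluation with placeholder transducer values:

        import math

        kB = 1.380649e-23     # Boltzmann constant, J/K
        T = 293.0             # temperature, K
        R = 50e3              # assumed real part of the transducer impedance, ohm
        df = 1e6              # measurement bandwidth, Hz
        V_n = math.sqrt(4 * kB * T * R * df)     # rms thermal noise voltage
        print(f"{V_n * 1e6:.1f} uV rms")         # about 28 uV for these placeholder values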

  2. Virtual Manufacturing Techniques Designed and Applied to Manufacturing Activities in the Manufacturing Integration and Technology Branch

    Science.gov (United States)

    Shearrow, Charles A.

    1999-01-01

    One of the identified goals of EM3 is to implement virtual manufacturing by the end of the year 2000. To realize this goal of a true virtual manufacturing enterprise, the initial development of a machinability database and the supporting infrastructure must be completed. This will consist of containing the existing EM-NET problems and developing machine, tooling, and common materials databases. To integrate the virtual manufacturing enterprise with normal day-to-day operations, a parallel virtual manufacturing machinability database, virtual manufacturing database, virtual manufacturing paradigm, implementation/integration procedure, and testable verification models must be constructed. The common and virtual machinability databases will include the four distinct areas of machine tools, available tooling, common machine tool loads, and a materials database. The machine tools database will include the machine envelope, special machine attachments, tooling capacity, location within NASA-JSC or with a contractor, and availability/scheduling. The tooling database will include available standard tooling, custom in-house tooling, tool properties, and availability. The common materials database will include material thickness ranges, strengths, types, and their availability. The virtual manufacturing databases will consist of virtual machines and virtual tooling directly related to the common and machinability databases. The items to be completed are the design and construction of the machinability databases, the virtual manufacturing paradigm for NASA-JSC, the implementation timeline, a VNC model of one bridge mill, and troubleshooting of existing software and hardware problems with EN4NET. The final step of this virtual manufacturing project will be to integrate other production sites into the databases, bringing JSC's EM3 into a position of becoming a clearinghouse for NASA's digital manufacturing needs and creating a true virtual manufacturing enterprise.

  3. Adequateness of applying the Zmijewski model on Serbian companies

    Directory of Open Access Journals (Sweden)

    Pavlović Vladan

    2012-12-01

    Full Text Available The aim of the paper is to determine the accuracy of the predictions of the Zmijewski model in Serbia on an eligible sample. At the same time, the paper identifies the model's strengths and weaknesses and the limitations of its possible application. Bearing in mind that the economic environment in Serbia is not similar to that of the United States at the time the model was developed, the Zmijewski model is surprisingly accurate in the case of Serbian companies. The accuracy was slightly weaker than the model's results in the U.S. in its original form, but much better than the results the model gave in the U.S. in the periods 1988-1991 and 1992-1999. The model also gave better results in Serbia than in Croatia, even though in Croatia the model was adjusted.
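    For reference, a minimal sketch of the Zmijewski probit score with its commonly cited coefficients; these should be verified against the original Zmijewski (1984) paper before any real use.

```python
# Zmijewski (1984) probit score sketch with commonly cited coefficients;
# verify the coefficients against the original paper before real use.
from statistics import NormalDist

def zmijewski_score(net_income, total_assets, total_liabilities,
                    current_assets, current_liabilities):
    roa = net_income / total_assets                    # profitability
    leverage = total_liabilities / total_assets        # financial leverage
    liquidity = current_assets / current_liabilities   # current ratio
    x = -4.336 - 4.513 * roa + 5.679 * leverage + 0.004 * liquidity
    return x, NormalDist().cdf(x)                      # probit: P(distress)

score, p = zmijewski_score(net_income=2.0, total_assets=100.0,
                           total_liabilities=60.0, current_assets=30.0,
                           current_liabilities=25.0)
print(f"X = {score:.3f}, P(distress) = {p:.2%}")
```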

  4. Fractal and Multifractal Models Applied to Porous Media - Editorial

    Science.gov (United States)

    Given the current high level of interest in the use of fractal geometry to characterize natural porous media, a special issue of the Vadose Zone Journal was organized in order to expose established fractal analysis techniques and cutting-edge new developments to a wider Earth science audience. The ...

  5. Fielding the magnetically applied pressure-shear technique on the Z accelerator (completion report for MRT 4519).

    Energy Technology Data Exchange (ETDEWEB)

    Alexander, C. Scott; Haill, Thomas A.; Dalton, Devon Gardner; Rovang, Dean Curtis; Lamppa, Derek C.

    2013-09-01

    The recently developed Magnetically Applied Pressure-Shear (MAPS) experimental technique to measure material shear strength at high pressures on magneto-hydrodynamic (MHD) drive pulsed power platforms was fielded on August 16, 2013 on shot Z2544 utilizing hardware set A0283A. Several technical and engineering challenges were overcome in the process leading to the attempt to measure the dynamic strength of NNSA Ta at 50 GPa. The MAPS technique relies on the ability to apply an external magnetic field properly aligned and time correlated with the MHD pulse. The load design had to be modified to accommodate the external field coils and additional support was required to manage stresses from the pulsed magnets. Further, this represents the first time transverse velocity interferometry has been applied to diagnose a shot at Z. All subsystems performed well with only minor issues related to the new feed design which can be easily addressed by modifying the current pulse shape. Despite the success of each new component, the experiment failed to measure strength in the samples due to spallation failure, most likely in the diamond anvils. To address this issue, hydrocode simulations are being used to evaluate a modified design using LiF windows to minimize tension in the diamond and prevent spall. Another option to eliminate the diamond material from the experiment is also being investigated.

  6. Applying the Job Characteristics Model to the College Education Experience

    Science.gov (United States)

    Kass, Steven J.; Vodanovich, Stephen J.; Khosravi, Jasmine Y.

    2011-01-01

    Boredom is one of the most common complaints among university students, with studies suggesting its link to poor grades, drop out, and behavioral problems. Principles borrowed from industrial-organizational psychology may help prevent boredom and enrich the classroom experience. In the current study, we applied the core dimensions of the job…

  7. Ontological Relations and the Capability Maturity Model Applied in Academia

    Science.gov (United States)

    de Oliveira, Jerônimo Moreira; Campoy, Laura Gómez; Vilarino, Lilian

    2015-01-01

    This work presents a new approach to the discovery, identification and connection of ontological elements within the domain of characterization in learning organizations. In particular, the study can be applied to contexts where organizations require planning, logic, balance, and cognition in knowledge creation scenarios, which is the case for the…

  8. An effectiveness-NTU technique for characterising a finned tubes PCM system using a CFD model

    OpenAIRE

    Tay, N. H. Steven; Belusko, M.; Castell, Albert; Cabeza, Luisa F.; Bruno, F.

    2014-01-01

    Numerical modelling is commonly used to design, analyse and optimise tube-in-tank phase change thermal energy storage systems with fins. A new simplified two dimensional mathematical model, based on the effectiveness-number of transfer units technique, has been developed to characterise tube-in-tank phase change material systems, with radial round fins. The model applies an empirically derived P factor which defines the proportion of the heat flow which is parallel and isothermal....
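    For context, the classical effectiveness-NTU limit this characterisation builds on: a PCM melting or freezing at a nearly constant temperature acts like a heat exchanger whose heat-capacity ratio tends to zero (the paper's empirically derived P factor is not reproduced here).

```latex
% Classical effectiveness-NTU limit relevant to an isothermal PCM: the
% phase-change side has effectively infinite heat capacity, so C_r -> 0 and
\varepsilon = 1 - e^{-NTU}, \qquad NTU = \frac{U A}{\dot{m}\, c_p}
```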

  9. An Improved Technique Based on Firefly Algorithm to Estimate the Parameters of the Photovoltaic Model

    Directory of Open Access Journals (Sweden)

    Issa Ahmed Abed

    2016-12-01

    Full Text Available This paper presents a method to enhance the firefly algorithm by coupling it with a local search. The constructed technique is applied to identify the parameters of the solar photovoltaic model, where the method has proved its ability to obtain them. The standard firefly algorithm (FA), the electromagnetism-like (EM) algorithm, and the electromagnetism-like algorithm without local search (EMW) are all compared with the suggested method to test its capability to solve this model.
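    A minimal sketch of the standard firefly algorithm that serves as the baseline here; the paper's local-search coupling is not reproduced, and the alpha/beta0/gamma values are illustrative. Costs are refreshed once per generation, as in common reference implementations.

```python
# Minimal standard firefly algorithm (FA) sketch for parameter estimation.
import math
import random

def firefly(f, dim, n=25, iters=200, lo=-5.0, hi=5.0,
            alpha=0.2, beta0=1.0, gamma=1.0):
    """Minimize cost function f over [lo, hi]^dim with n fireflies."""
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        cost = [f(x) for x in X]          # brightness = inverse of cost
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:     # firefly j is brighter: move i toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)   # attractiveness
                    X[i] = [min(hi, max(lo, a + beta * (b - a)
                                        + alpha * (random.random() - 0.5)))
                            for a, b in zip(X[i], X[j])]
    return min(X, key=f)

# Toy usage: recover the minimizer of a sphere function.
best = firefly(lambda x: sum(v * v for v in x), dim=3)
print([round(v, 3) for v in best])
```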

  10. Applying Meta-Analysis to Structural Equation Modeling

    Science.gov (United States)

    Hedges, Larry V.

    2016-01-01

    Structural equation models play an important role in the social sciences. Consequently, there is an increasing use of meta-analytic methods to combine evidence from studies that estimate the parameters of structural equation models. Two approaches are used to combine evidence from structural equation models: A direct approach that combines…

  11. Large scale flow visualization and anemometry applied to lab on chip models of porous media

    CERN Document Server

    Paiola, Johan; Bodiguel, Hugues

    2016-01-01

    We report an experimental technique that quantifies and maps the velocity field at very high resolution, with simple equipment, in large 2D devices. A simple Schlieren technique is proposed to reinforce the contrast in the images and allow detection of seeded particles that are pixel-sized or even smaller. The velocimetry technique reported here is based on autocorrelation functions of the pixel intensity, which we show are directly related to the magnitude of the local average velocity. The characteristic time involved in the decorrelation of the signal is proportional to the tracer size and inversely proportional to the average velocity. We report a detailed discussion of the optimization of the relevant parameters, the spatial resolution and the accuracy of the method. The technique is then applied to a model porous medium made of a random channel network. We show that it is highly efficient for determining the magnitude of the flow in each o...
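    A rough sketch of the autocorrelation idea described in this record: estimate, per pixel, the lag at which the intensity autocorrelation decays, then map it to a velocity magnitude via v ~ d/tau. The 1/e threshold and calibration constant are assumptions, not the paper's calibrated values.

```python
# Per-pixel decorrelation-time velocimetry sketch (toy code, synthetic data).
import numpy as np

def decorrelation_time(stack, threshold=np.exp(-1)):
    """stack: (T, H, W) array of frames. Returns an (H, W) map of the lag
    (in frames) at which the per-pixel intensity autocorrelation first
    drops below the threshold."""
    T = stack.shape[0]
    x = stack - stack.mean(axis=0)
    var = (x ** 2).mean(axis=0) + 1e-12
    tau = np.full(stack.shape[1:], float(T))
    done = np.zeros(stack.shape[1:], dtype=bool)
    for lag in range(1, T):
        corr = (x[:-lag] * x[lag:]).mean(axis=0) / var
        newly = (~done) & (corr < threshold)
        tau[newly] = lag
        done |= newly
    return tau

def velocity_map(stack, tracer_size_px, frame_dt, calib=1.0):
    """v ~ d / tau: faster flow decorrelates the pixel signal sooner."""
    tau = decorrelation_time(stack)
    return calib * tracer_size_px / (tau * frame_dt)   # px per second

# Toy usage with synthetic frames:
frames = np.random.rand(200, 32, 32)
v = velocity_map(frames, tracer_size_px=1.0, frame_dt=0.01)
print(v.shape, float(v.mean()))
```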

  12. Applying Functional Modeling for Accident Management of Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Lind, Morten; Zhang Xinxin [Harbin Engineering University, Harbin (China)

    2014-08-15

    The paper investigates applications of functional modeling for accident management in complex industrial plants, with special reference to nuclear power production. Main applications for information sharing among decision makers and for decision support are identified. An overview of Multilevel Flow Modeling is given, a detailed presentation of the foundational means-end concepts is provided, and the conditions for their proper use in modelling accidents are identified. It is shown that Multilevel Flow Modeling can be used for modelling and reasoning about design basis accidents. Its possible role for information sharing and decision support in accidents beyond the design basis is also indicated. A modelling example demonstrating the application of Multilevel Flow Modelling and reasoning for a PWR LOCA is presented.

  13. Symmetry and partial order reduction techniques in model checking Rebeca

    NARCIS (Netherlands)

    Jaghouri, M.M.; Sirjani, M.; Mousavi, M.R.; Movaghar, A.

    2007-01-01

    Rebeca is an actor-based language with formal semantics that can be used in modeling concurrent and distributed software and protocols. In this paper, we study the application of partial order and symmetry reduction techniques to model checking dynamic Rebeca models. Finding symmetry based equivalen

  14. Prediction of survival with alternative modeling techniques using pseudo values

    NARCIS (Netherlands)

    T. van der Ploeg (Tjeerd); F.R. Datema (Frank); R.J. Baatenburg de Jong (Robert Jan); E.W. Steyerberg (Ewout)

    2014-01-01

    textabstractBackground: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo

  15. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Directory of Open Access Journals (Sweden)

    Nadia Said

    Full Text Available Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
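    The workflow described here, treating the cognitive model's simulation as a black-box objective, can be sketched generically as follows; Nelder-Mead stands in for the paper's optimization methods, and the objective is a toy stand-in, not the ACT-R model.

```python
# Generic black-box parameter optimization sketch (not the paper's specific
# mathematical reformulation). Requires scipy.
from scipy.optimize import minimize

def simulate_task(params):
    """Stand-in for running the instance-based learning model; returns a
    loss such as deviation from target performance. Toy landscape."""
    noise, decay = params
    return (noise - 0.25) ** 2 + (decay - 0.5) ** 2

res = minimize(simulate_task, x0=[1.0, 1.0], method="Nelder-Mead")
print(res.x)   # recovered parameter values
```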

  16. Use of surgical techniques in the rat pancreas transplantation model

    National Research Council Canada - National Science Library

    Ma, Yi; Guo, Zhi-Yong

    2008-01-01

    ... (also called type 1 diabetes). With the improvement of microsurgical techniques, pancreas transplantation in rats has been the major model for physiological and immunological experimental studies in the past 20 years...

  17. Applying Model Checking to Industrial-Sized PLC Programs

    CERN Document Server

    AUTHOR|(CDS)2079190; Darvas, Daniel; Blanco Vinuela, Enrique; Tournier, Jean-Charles; Bliudze, Simon; Blech, Jan Olaf; Gonzalez Suarez, Victor M

    2015-01-01

    Programmable logic controllers (PLCs) are embedded computers widely used in industrial control systems. Ensuring that PLC software complies with its specification is a challenging task. Formal verification has become a recommended practice to ensure the correctness of safety-critical software but is still underused in industry due to the complexity of building and managing formal models of real applications. In this paper, we propose a general methodology to perform automated model checking of complex properties expressed in temporal logics (e.g., CTL, LTL) on PLC programs. This methodology is based on an intermediate model (IM), meant to transform PLC programs written in various standard languages (ST, SFC, etc.) to different modeling languages of verification tools. We present the syntax and semantics of the IM and the transformation rules of the ST and SFC languages to the nuXmv model checker, passing through the intermediate model. Finally, two real case studies of CERN PLC programs, written mainly in th...

  18. Applying Functional Modeling for Accident Management of Nuclear Power Plant

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2014-01-01

    The paper investigates applications of functional modeling for accident management in complex industrial plants with special reference to nuclear power production. Main applications for information sharing among decision makers and decision support are identified. An overview of Multilevel Flow...... for information sharing and decision support in accidents beyond design basis is also indicated. A modelling example demonstrating the application of Multilevel Flow Modelling and reasoning for a PWR LOCA is presented....

  19. Applying aerial digital photography as a spectral remote sensing technique for macrophytic cover assessment in small rural streams

    Science.gov (United States)

    Anker, Y.; Hershkovitz, Y.; Gasith, A.; Ben-Dor, E.

    2011-12-01

    Although remote sensing of fluvial ecosystems is well developed, the tradeoff between spectral and spatial resolutions prevents its application in small streams (cognitive color) and high spatial resolution of aerial photography provides noise filtration and better sub-water detection capabilities than the HSR technique. C. Only the SRGB method applies for habitat and section scales; hence, its application together with in-situ grid transects for validation, may be optimal for use in similar scenarios. The HSR dataset was first degraded to 17 bands with the same spectral range as the RGB dataset and also to a dataset with 3 equivalent bands

  20. System Response Analysis and Model Order Reduction, Using Conventional Method, Bond Graph Technique and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Lubna Moin

    2009-04-01

    Full Text Available This research paper explores and compares different modeling and analysis techniques, and then examines the model order reduction approach and its significance. Traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, but the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. Bond Graphs are also used for analyzing linear and nonlinear dynamic production systems, artificial intelligence, image processing, robotics and industrial automation. This paper describes a technique for generating the genetic design from the tree-structured transfer function obtained from a Bond Graph. The work combines Bond Graphs for model representation with Genetic Programming for exploring different ideas in the design space; the tree-structured transfer function results from replacing each typical Bond Graph element with its impedance equivalent, specifying impedance laws for the Bond Graph multiport. The tree-structured form thus obtained from the Bond Graph is used to generate the genetic tree. Application studies identify key issues for advancing this approach towards an effective and efficient design tool for synthesizing electrical system designs. In the first phase, the system is modeled using the Bond Graph technique. Its system response and transfer function obtained by the conventional and Bond Graph methods are analyzed, and then an approach to model order reduction is pursued. The suggested algorithm and other known modern model order reduction techniques are applied, with different approaches, to an 11th-order high-pass filter [1]. The model order reduction technique developed in this paper has the least reduction errors and, secondly, the final reduced model retains structural information. The system response and the stability analysis of the system transfer function obtained by the conventional and the Bond Graph methods are compared and
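    One of the "known modern model order reduction techniques" of the kind such studies compare against is balanced truncation; a compact square-root sketch for a stable LTI system follows (a generic method, not the paper's Bond-Graph/Genetic-Programming algorithm).

```python
# Square-root balanced truncation of a stable LTI system (A, B, C).
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    """Reduce a stable system (A, B, C) to order r; returns reduced
    (Ar, Br, Cr) and the Hankel singular values."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # A Wc + Wc A^T = -B B^T
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # A^T Wo + Wo A = -C^T C
    Lc = cholesky(Wc, lower=True)                  # Wc = Lc Lc^T
    Lo = cholesky(Wo, lower=True)                  # Wo = Lo Lo^T
    U, s, Vt = svd(Lo.T @ Lc)                      # s: Hankel singular values
    Sih = np.diag(s[:r] ** -0.5)
    T = Lc @ Vt[:r].T @ Sih                        # projection (n x r)
    Ti = Sih @ U[:, :r].T @ Lo.T                   # left inverse (r x n)
    return Ti @ A @ T, Ti @ B, C @ T, s

# Toy usage: reduce a random stable 8th-order system to order 3.
n = 8
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
A = A - (abs(np.linalg.eigvals(A)).max() + 1.0) * np.eye(n)  # shift to stability
B = rng.standard_normal((n, 1))
C = rng.standard_normal((1, n))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=3)
print("Hankel singular values:", np.round(hsv, 4))
```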

  1. System Response Analysis and Model Order Reduction, Using Conventional Method, Bond Graph Technique and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Shahid Ali

    2009-04-01

    Full Text Available This research paper explores and compares different modeling and analysis techniques, and then examines the model order reduction approach and its significance. Traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, but the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. Bond Graphs are also used for analyzing linear and nonlinear dynamic production systems, artificial intelligence, image processing, robotics and industrial automation. This paper describes a technique for generating the genetic design from the tree-structured transfer function obtained from a Bond Graph. The work combines Bond Graphs for model representation with Genetic Programming for exploring different ideas in the design space; the tree-structured transfer function results from replacing each typical Bond Graph element with its impedance equivalent, specifying impedance laws for the Bond Graph multiport. The tree-structured form thus obtained from the Bond Graph is used to generate the genetic tree. Application studies identify key issues for advancing this approach towards an effective and efficient design tool for synthesizing electrical system designs. In the first phase, the system is modeled using the Bond Graph technique. Its system response and transfer function obtained by the conventional and Bond Graph methods are analyzed, and then an approach to model order reduction is pursued. The suggested algorithm and other known modern model order reduction techniques are applied, with different approaches, to an 11th-order high-pass filter [1]. The model order reduction technique developed in this paper has the least reduction errors and, secondly, the final reduced model retains structural information. The system response and the stability analysis of the system transfer function obtained by the conventional and the Bond Graph methods are compared and

  2. An Online Gravity Modeling Method Applied for High Precision Free-INS.

    Science.gov (United States)

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-09-23

    For real-time solution of an inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity; instead, the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear characteristic of the regional disturbing potential. Firstly, deflections of the vertical (DOVs) on dense grids are calculated with the SHM in an external computer. Then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously in the above computer. Compared with the high-degree SHM, the polynomial model takes less storage and computational time at the expense of a minor loss of precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied for high-precision free-INS.
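    The fitting step described here, a least-squares fit of a two-dimensional second-order polynomial to gridded deflection-of-the-vertical values, can be sketched as follows; the grid and DOV values below are synthetic, not derived from an SHM.

```python
# Least-squares fit of a 2D second-order polynomial to gridded values.
import numpy as np

def fit_quadratic_2d(lat, lon, dov):
    """Fit dov ~ c0 + c1*lat + c2*lon + c3*lat^2 + c4*lat*lon + c5*lon^2."""
    A = np.column_stack([np.ones_like(lat), lat, lon,
                         lat**2, lat*lon, lon**2])
    coef, *_ = np.linalg.lstsq(A, dov, rcond=None)
    return coef

def eval_quadratic_2d(coef, lat, lon):
    return (coef[0] + coef[1]*lat + coef[2]*lon
            + coef[3]*lat**2 + coef[4]*lat*lon + coef[5]*lon**2)

# Toy usage on a synthetic grid:
la, lo = np.meshgrid(np.linspace(30, 31, 20), np.linspace(110, 111, 20))
xi = 2e-6*(la - 30)**2 + 1e-6*(la - 30)*(lo - 110)   # synthetic DOV surface
c = fit_quadratic_2d(la.ravel(), lo.ravel(), xi.ravel())
print(np.round(c, 8))
```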

  3. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    3D city model is a digital representation of the Earth's surface and its related objects such as buildings, trees, vegetation, and man-made features belonging to urban areas. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". 3D city models are basically computerized or digital models of a city containing the graphic representation of buildings and other objects in 2.5 or 3D. Generally, three main geomatics approaches are used for generating virtual 3D city models: in the first approach, researchers use conventional techniques such as vector map data, DEMs, and aerial images; the second approach is based on high-resolution satellite images with laser scanning; and in the third method, many researchers use terrestrial images via close-range photogrammetry with DSM and texture mapping. We start this paper with an introduction to the various geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on automation (automatic, semi-automatic, and manual methods), and another based on data-input techniques (photogrammetry and laser techniques). After a detailed study of these, we give the conclusions of the work, together with a short discussion of justification and analysis and of present trends in 3D city modeling. This paper gives an overview of the techniques for generating virtual 3D city models using geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern geomatics techniques play a major role in creating a virtual 3D city model. Each technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3

  4. Application of a systematic finite-element model modification technique to dynamic analysis of structures

    Science.gov (United States)

    Robinson, J. C.

    1982-01-01

    A systematic finite-element model modification technique has been applied to two small problems and a model of the main wing box of a research drone aircraft. The procedure determines the sensitivity of the eigenvalues and eigenvector components to specific structural changes, calculates the required changes and modifies the finite-element model. Good results were obtained where large stiffness modifications were required to satisfy large eigenvalue changes. Sensitivity matrix conditioning problems required the development of techniques to insure existence of a solution and accelerate its convergence. A method is proposed to assist the analyst in selecting stiffness parameters for modification.
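    The eigenvalue sensitivities such a procedure assembles are the standard first-order results for the generalized eigenproblem; for mass-normalized modes:

```latex
% First-order eigenvalue sensitivity (standard result) for
% (K - \lambda_i M)\varphi_i = 0 with \varphi_i^{T} M \varphi_i = 1:
\frac{\partial \lambda_i}{\partial p}
  = \varphi_i^{T}\!\left(\frac{\partial K}{\partial p}
    - \lambda_i \frac{\partial M}{\partial p}\right)\varphi_i
```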

  5. Applying reliability models to the maintenance of Space Shuttle software

    Science.gov (United States)

    Schneidewind, Norman F.

    1992-01-01

    Software reliability models provide the software manager with a powerful tool for predicting, controlling, and assessing the reliability of software during maintenance. We show how a reliability model can be effectively employed for reliability prediction and the development of maintenance strategies using the Space Shuttle Primary Avionics Software Subsystem as an example.
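    As an illustration of the model family involved, the sketch below fits a generic Goel-Okumoto NHPP reliability growth curve, not necessarily the specific Schneidewind model used in the study; the failure counts are synthetic.

```python
# Generic software reliability growth fit (Goel-Okumoto NHPP). Requires scipy.
import numpy as np
from scipy.optimize import curve_fit

def mean_failures(t, a, b):
    """Goel-Okumoto mean value function: expected cumulative failures."""
    return a * (1.0 - np.exp(-b * t))

weeks = np.arange(1, 21, dtype=float)
cum_failures = np.array([5, 9, 13, 16, 19, 21, 23, 25, 26, 27,
                         28, 29, 30, 30, 31, 31, 32, 32, 32, 33], dtype=float)
(a, b), _ = curve_fit(mean_failures, weeks, cum_failures, p0=(40.0, 0.1))
print(f"expected total faults a = {a:.1f}, detection rate b = {b:.3f}")
print(f"predicted remaining faults: {a - cum_failures[-1]:.1f}")
```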

  6. Trailing edge noise model applied to wind turbine airfoils

    DEFF Research Database (Denmark)

    Bertagnolio, Franck

    The aim of this work is firstly to provide a quick introduction to the theories of noise generation that are relevant to wind turbine technology, with a focus on trailing edge noise. Secondly, the so-called TNO trailing edge noise model developed by Parchen [1] is described in more detail. The model...

  7. Hydrologic and water quality terminology as applied to modeling

    Science.gov (United States)

    A survey of literature and examination in particular of terminology use in a previous special collection of modeling calibration and validation papers has been conducted to arrive at a list of consistent terminology recommended for writing about hydrologic and water quality model calibration and val...

  8. Applying the General Linear Model to Repeated Measures Problems.

    Science.gov (United States)

    Pohlmann, John T.; McShane, Michael G.

    The purpose of this paper is to demonstrate the use of the general linear model (GLM) in problems with repeated measures on a dependent variable. Such problems include pretest-posttest designs, multitrial designs, and groups by trials designs. For each of these designs, a GLM analysis is demonstrated wherein full models are formed and restrictions…
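    The full-versus-restricted comparison underlying each of these GLM analyses is the standard F test:

```latex
% Test of the restrictions imposed on a full GLM: with error sums of
% squares SSE_f (full model, df_f error degrees of freedom) and SSE_r
% (restricted model, df_r),
F = \frac{(SSE_r - SSE_f)/(df_r - df_f)}{SSE_f / df_f}
```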

  9. Community Mobilization Model Applied to Support Grandparents Raising Grandchildren

    Science.gov (United States)

    Miller, Jacque; Bruce, Ann; Bundy-Fazioli, Kimberly; Fruhauf, Christine A.

    2010-01-01

    This article discusses the application of a community mobilization model through a case study of one community's response to address the needs of grandparents raising grandchildren. The community mobilization model presented is one that is replicable in addressing diverse community identified issues. Discussed is the building of the partnerships,…

  10. [Applying multilevel models in evaluation of bioequivalence (I)].

    Science.gov (United States)

    Liu, Qiao-lan; Shen, Zhuo-zhi; Chen, Feng; Li, Xiao-song; Yang, Min

    2009-12-01

    This study aims to explore the application value of multilevel models for bioequivalence evaluation. Using a real example of a 2x4 cross-over experimental design evaluating the bioequivalence of an antihypertensive drug, this paper explores the complex variance components corresponding to the criterion statistics of existing methods recommended by the FDA, but obtained via multilevel model analysis. Results are compared with those from the FDA's standard Method of Moments, specifically on the feasibility and applicability of multilevel models in directly assessing average bioequivalence (ABE), population bioequivalence (PBE) and individual bioequivalence (IBE). When measuring ln(AUC), all variance components of the test and reference groups, such as the total variances (σ²(TT) and σ²(TR)), between-subject variances (σ²(BT) and σ²(BR)) and within-subject variances (σ²(WT) and σ²(WR)), estimated by simple 2-level models are very close to those obtained using the FDA Method of Moments. In practice, bioequivalence evaluation can be carried out directly by multilevel models, or by FDA criteria based on variance components estimated from multilevel models. Both approaches produce consistent results. Multilevel models can be used to evaluate bioequivalence in cross-over test designs. Compared to the FDA methods, this approach is more flexible in decomposing the total variance into subcomponents in order to evaluate ABE, PBE and IBE. The multilevel model provides a new way into the practice of bioequivalence evaluation.
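    For reference, the ABE criterion mentioned above is conventionally stated on the log scale:

```latex
% Conventional average-bioequivalence decision: with \mu_T, \mu_R the test
% and reference means of ln(AUC), ABE is declared when the 90% confidence
% interval for \mu_T - \mu_R lies within
\ln(0.80) \;\le\; \mu_T - \mu_R \;\le\; \ln(1.25)
```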

  11. Nonstandard Finite Difference Method Applied to a Linear Pharmacokinetics Model

    Directory of Open Access Journals (Sweden)

    Oluwaseun Egbelowo

    2017-05-01

    Full Text Available We extend the nonstandard finite difference method of solution to the study of pharmacokinetic-pharmacodynamic models. Pharmacokinetic (PK) models are commonly used to predict drug concentrations that drive controlled intravenous (I.V.) transfers (or infusions) and oral transfers, while pharmacokinetic and pharmacodynamic (PD) interaction models are used to provide predictions of drug concentrations affecting the response to these clinical drugs. We structure a nonstandard finite difference (NSFD) scheme for the relevant system of equations which models this pharmacokinetic process. We compare the results obtained to standard methods. The scheme is dynamically consistent and reliable in replicating the complex dynamic properties of the relevant continuous models for varying step sizes. This study provides assistance in understanding the long-term behavior of the drug in the system, and validates the efficiency of the nonstandard finite difference scheme as the method of choice.
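    The flavor of the NSFD construction can be shown on the simplest linear elimination step (a single-compartment illustration, not the paper's full PK-PD system):

```latex
% Mickens-type NSFD scheme for the linear elimination step dy/dt = -k y:
% the step size h is replaced by a "denominator function" \phi(h),
\frac{y_{n+1} - y_n}{\phi(h)} = -k\, y_n, \qquad
\phi(h) = \frac{1 - e^{-k h}}{k},
% which yields y_{n+1} = e^{-k h} y_n, the exact decay for any step size h.
```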

  12. A new HBV-model applied to an arctic watershed

    Energy Technology Data Exchange (ETDEWEB)

    Bruland, O.

    1995-12-31

    This paper describes the HBV-model, which was developed in the Nordic joint venture project "Climate change and energy production". The HBV-model is a precipitation-runoff model made mainly to produce runoff forecasts for hydroelectric power plants. The model has been tested in an arctic watershed, the Bayelva drainage basin at Svalbard. The model was calibrated by means of data for the period 1989-1993 and tested on data for the period 1974-1978. For both periods, snowmelt, rainfall and glacier melt events are well predicted. The largest disagreement between observed and simulated runoff occurred on warm days with heavy rain. This may be due to the precipitation measurements, which may not be representative for such events. Measurements show a larger negative glacier mass balance than the simulated one, although the parameters controlling glacier melt in the model are set high. Glacier mass balance simulations in which the temperature index depends on albedo and radiation are more correct and improve model efficiency. 5 refs., 4 figs., 1 table
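    The snow and glacier melt events mentioned here are driven, in HBV-type models, by a degree-day routine; a minimal sketch follows, with illustrative parameter values rather than the calibrated Bayelva ones.

```python
# Degree-day melt routine of the kind used in HBV-type models; TT is the
# threshold temperature and CFMAX the degree-day factor (illustrative values).
def degree_day_melt(temp_c, tt=0.0, cfmax=3.5):
    """Daily melt (mm/day): M = CFMAX * (T - TT) when T > TT, else 0."""
    return max(0.0, cfmax * (temp_c - tt))

def simulate_snowpack(temps, precip, tt=0.0, cfmax=3.5):
    """Accumulate precipitation as snow below TT; melt above it."""
    swe, runoff = 0.0, []
    for t, p in zip(temps, precip):
        if t <= tt:
            swe += p                 # snowfall accumulates as water equivalent
            melt = 0.0
        else:
            melt = min(swe, degree_day_melt(t, tt, cfmax))
            swe -= melt
            melt += p                # rain joins the meltwater
        runoff.append(melt)
    return runoff

print(simulate_snowpack([-2, -1, 1, 4, 6], [5, 3, 0, 2, 0]))
```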

  13. Blue sky catastrophe as applied to modeling of cardiac rhythms

    Science.gov (United States)

    Glyzin, S. D.; Kolesov, A. Yu.; Rozov, N. Kh.

    2015-07-01

    A new mathematical model for the electrical activity of the heart is proposed. The model represents a special singularly perturbed three-dimensional system of ordinary differential equations with one fast and two slow variables. A characteristic feature of the system is that its solution performs nonclassical relaxation oscillations and simultaneously undergoes a blue sky catastrophe bifurcation. Both these factors make it possible to achieve a phenomenological proximity between the time dependence of the fast component in the model and an ECG of the human heart.

  14. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. It begins with an introduction to circuit analysis techniques, laws, and frequency- and time-domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  15. Simple queueing model applied to the city of Portland

    Energy Technology Data Exchange (ETDEWEB)

    Simon, P.M.; Nagel, K. [Los Alamos National Lab., NM (United States)]|[Santa Fe Inst., NM (United States)

    1998-07-31

    The authors present a simple traffic micro-simulation model that captures the effects of capacity cut-off, i.e. the queue build-up that occurs when demand exceeds capacity, and of queue spillback, i.e. the effect that queues can spill back across intersections when a congested link fills up. They derive the model's fundamental diagrams and explain them. The simulation is used to simulate traffic on the emme/2 network of the Portland (Oregon) metropolitan region (20,000 links). Demand is generated by a simplified home-to-work assignment which generates about half a million trips for the AM peak. Route assignment is done by iterative feedback between micro-simulation and router. Relaxation of the route assignment for the above problem can be achieved within about half a day of computing time on a desktop workstation.
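    The two mechanisms named here, capacity cut-off and queue spillback, can be sketched in a few lines; the network, capacities and storage limits below are illustrative, not the Portland configuration.

```python
# Toy queue model: links with a flow capacity (queue builds when demand
# exceeds it) and a storage limit (a full link blocks upstream links).
class Link:
    def __init__(self, capacity, storage):
        self.capacity = capacity   # vehicles that may exit per time step
        self.storage = storage     # vehicles the link can hold
        self.queue = []            # FIFO of vehicles; each is its route list

def step(links):
    """One time step: vehicles leave each link up to its capacity, but only
    if the next link on their route has space (else the queue spills back).
    Links are processed in dict order, which is fine for a sketch."""
    for link in links.values():
        moved = 0
        while link.queue and moved < link.capacity:
            route = link.queue[0]
            if route:                               # more links ahead
                nxt = links[route[0]]
                if len(nxt.queue) >= nxt.storage:
                    break                           # downstream full: spillback
                link.queue.pop(0)
                nxt.queue.append(route[1:])
            else:
                link.queue.pop(0)                   # trip ends on this link
            moved += 1

# Toy usage: A -> B -> C, with B small enough to spill back onto A.
links = {"A": Link(5, 50), "B": Link(2, 4), "C": Link(5, 50)}
links["A"].queue = [["B", "C"] for _ in range(20)]
for t in range(10):
    step(links)
    print(t, {k: len(v.queue) for k, v in links.items()})
```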

  16. Lithospheric structure models applied for locating the Romanian seismic events

    Directory of Open Access Journals (Sweden)

    V. Oancea

    1994-06-01

    Full Text Available The paper presents our attempts to improve the locations obtained for local seismic events by using refined lithospheric structure models. The location program (based on the Geiger method) assumes a known model. The program is run for several seismic sequences which occurred in different regions of the Romanian territory, using three velocity models for each sequence: (1) 7 layers of constant seismic wave velocity, as an average structure of the lithosphere for the whole territory; (2) a site-dependent structure (below each station), based on geophysical and geological information on the crust; (3) curves describing the dependence of propagation velocities on depth in the lithosphere, characterizing the 7 structural units delineated on the Romanian territory. The results obtained using the different velocity models are compared. Station corrections are computed for each data set. Finally, the locations determined for some quarry blasts are compared with the real ones.

  17. Pressure Sensitive Paint Applied to Flexible Models Project

    Science.gov (United States)

    Schairer, Edward T.; Kushner, Laura Kathryn

    2014-01-01

    One gap in current pressure-measurement technology is a high-spatial-resolution method for accurately measuring pressures on spatially and temporally varying wind-tunnel models such as Inflatable Aerodynamic Decelerators (IADs), parachutes, and sails. Conventional pressure taps only provide sparse measurements at discrete points and are difficult to integrate with the model structure without altering structural properties. Pressure Sensitive Paint (PSP) provides pressure measurements with high spatial resolution, but its use has been limited to rigid or semi-rigid models. Extending the use of PSP from rigid surfaces to flexible surfaces would allow direct, high-spatial-resolution measurements of the unsteady surface pressure distribution. Once developed, this new capability will be combined with existing stereo photogrammetry methods to simultaneously measure the shape of a dynamically deforming model in a wind tunnel. Presented here are the results and methodology for using PSP on flexible surfaces.

  18. Applying Functional Modeling for Accident Management of Nuclear Power Plant

    DEFF Research Database (Denmark)

    Lind, Morten; Zhang, Xinxin

    2014-01-01

    The paper investigate applications of functional modeling for accident management in complex industrial plant with special reference to nuclear power production. Main applications for information sharing among decision makers and decision support are identified. An overview of Multilevel Flow...

  19. A Model-Based Prognostics Approach Applied to Pneumatic Valves

    Data.gov (United States)

    National Aeronautics and Space Administration — Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain...

  20. A Model-based Prognostics Approach Applied to Pneumatic Valves

    Data.gov (United States)

    National Aeronautics and Space Administration — Within the area of systems health management, the task of prognostics centers on predicting when components will fail. Model-based prognostics exploits domain...