WorldWideScience

Sample records for models microcomputer techniques

  1. Noninvasive technique for monitoring drug transport through the murine cochlea using micro-computed tomography.

    Science.gov (United States)

    Haghpanahi, Masoumeh; Gladstone, Miriam B; Zhu, Xiaoxia; Frisina, Robert D; Borkholder, David A

    2013-10-01

    Local delivery of drugs to the inner ear has the potential to treat inner ear disorders including permanent hearing loss or deafness. Current mathematical models describing the pharmacokinetics of drug delivery to the inner ear have been based on large rodent studies with invasive measurements of concentration at few locations within the cochlea. Hence, estimates of clearance and diffusion parameters are based on fitting measured data with limited spatial resolution to a model. To overcome these limitations, we developed a noninvasive imaging technique to monitor and characterize drug delivery inside the mouse cochlea using micro-computed tomography (μCT). To increase the measurement accuracy, we performed a subject-atlas image registration to exploit the information readily available in the atlas image of the mouse cochlea and pass segmentation or labeling information from the atlas to our μCT scans. The approach presented here has the potential to quantify concentrations at any point along fluid-filled scalae of the inner ear. This may permit determination of spatially dependent diffusion and clearance parameters for enhanced models.
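
    The clearance and diffusion parameters mentioned above are typically estimated by fitting a one-dimensional transport model along the unrolled, fluid-filled scala. The sketch below is a generic illustration of such a model (explicit finite differences for diffusion plus first-order clearance), not the authors' method; every parameter value is an illustrative placeholder.

```python
# Minimal sketch (not the authors' model): 1D diffusion with first-order
# clearance along an unrolled scala, solved by explicit finite differences.
# All parameter values below are illustrative placeholders.
import numpy as np

L = 6e-3          # scala length [m] (illustrative)
N = 120           # grid points
D = 1e-10         # diffusion coefficient [m^2/s] (illustrative)
k = 5e-4          # first-order clearance rate [1/s] (illustrative)
dx = L / (N - 1)
dt = 0.4 * dx**2 / D          # satisfies the explicit stability limit

c = np.zeros(N)
c[0] = 1.0                    # constant source at the basal end (e.g., round window)

t_end = 2 * 3600.0            # simulate two hours
for _ in range(int(t_end / dt)):
    lap = (np.roll(c, -1) - 2 * c + np.roll(c, 1)) / dx**2
    lap[0] = lap[-1] = 0.0    # boundaries handled explicitly below
    c += dt * (D * lap - k * c)
    c[0] = 1.0                # Dirichlet source
    c[-1] = c[-2]             # zero-flux apical end

print("apical/basal concentration ratio:", c[-1] / c[0])
```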

  2. Micro-computed tomography and bond strength analysis of different root canal filling techniques

    Directory of Open Access Journals (Sweden)

    Juliane Nhata

    2014-01-01

    Introduction: The aim of this study was to evaluate the quality and bond strength of three root filling techniques (lateral compaction, continuous wave of condensation, and Tagger's hybrid technique [THT]) using micro-computed tomography (micro-CT) images and push-out tests, respectively. Materials and Methods: Thirty mandibular incisors were prepared using the same protocol and randomly divided into three groups (n = 10): lateral condensation technique (LCT), continuous wave of condensation technique (CWCT), and THT. All specimens were filled with gutta-percha (GP) cones and AH Plus sealer. Five specimens of each group were randomly chosen for micro-CT analysis, and all of them were sectioned into 1 mm slices and subjected to push-out tests. Results: Micro-CT analysis revealed fewer empty spaces when GP was heated within the root canals in CWCT and THT than in LCT. Push-out tests showed that LCT and THT had significantly higher displacement resistance (P < 0.05) than CWCT. Bond strength was lower in the apical and middle thirds than in the coronal third. Conclusions: LCT and THT were associated with higher bond strengths to intraradicular dentine than CWCT. However, LCT was associated with more empty voids than the other techniques.

  3. Microcomputer controlled X-ray diffractometer for investigations in transmission technique

    Energy Technology Data Exchange (ETDEWEB)

    Mueller, E.C.; Kroeber, R.; Wessel, R.; Mueller, J.J.; Gernat, C.; Pleissner, K.P.; Damaschun, G.; Lucius, H. (Akademie der Wissenschaften der DDR, Berlin. Zentralinstitut fuer Molekularbiologie)

    1984-01-01

    The coupling of the microcomputer system MPS 4944 with an X-ray wide-angle diffractometer is presented. The configuration of the system, algorithms and programs are described. The microcomputer system controls and monitors the goniometer and performs accumulation and normalization of the scattered intensities. Coupling a microcomputer to the diffractometer increased measurement precision, shortened the time needed for the scattering experiment and made the diffractometer easier to operate. New experimental variants are rendered possible, for instance structural investigations with program-controlled temperature regulation of the samples.

  4. The microcomputer scientific software series 2: general linear model--regression.

    Science.gov (United States)

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...
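
    The kind of output the GLMR program produces (an ANOVA table, coefficient estimates and their confidence intervals) can be illustrated with a minimal ordinary least-squares sketch. This is a generic example on synthetic data, not the GLMR program itself.

```python
# Minimal sketch of ordinary least-squares regression with an ANOVA table and
# coefficient confidence intervals (illustrative; not the GLMR program itself).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, p = 50, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])   # intercept + 2 predictors
beta_true = np.array([1.0, 2.0, -0.5])
y = X @ beta_true + rng.normal(scale=0.8, size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
df_model, df_resid = X.shape[1] - 1, n - X.shape[1]
ss_total = np.sum((y - y.mean())**2)
ss_resid = np.sum(resid**2)
ss_model = ss_total - ss_resid
ms_model, ms_resid = ss_model / df_model, ss_resid / df_resid
F = ms_model / ms_resid

cov = ms_resid * np.linalg.inv(X.T @ X)       # covariance of the estimators
se = np.sqrt(np.diag(cov))
t_crit = stats.t.ppf(0.975, df_resid)

print(f"ANOVA: F({df_model},{df_resid}) = {F:.2f}, "
      f"p = {1 - stats.f.cdf(F, df_model, df_resid):.4f}")
for b, s in zip(beta, se):
    print(f"coef = {b:+.3f}   95% CI = [{b - t_crit*s:+.3f}, {b + t_crit*s:+.3f}]")
```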

  5. Doing Physics with Microcomputers.

    Science.gov (United States)

    Bak, Per

    1983-01-01

    Describes how microcomputers can perform very demanding/large-scale physics calculations at speeds not much slower than those of modern, full-size computers. Among the examples provided are a Monte Carlo simulation of the three-dimensional Ising model and a program (for the Apple microcomputer) using the time-independent Schrodinger Equation. (JN)
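
    As a rough illustration of the first example cited (a Monte Carlo Ising simulation small enough for a microcomputer), the sketch below implements the standard Metropolis algorithm, reduced to two dimensions for brevity; the lattice size, temperature and sweep count are arbitrary.

```python
# Minimal Metropolis Monte Carlo sketch for the Ising model (2D here for
# brevity; the abstract's example was a 3D simulation). Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
L, T, sweeps = 16, 2.27, 200          # lattice size, temperature (J = kB = 1), MC sweeps
spins = rng.choice([-1, 1], size=(L, L))

for _ in range(sweeps):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j] +
              spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nn     # energy change if spin (i, j) flips
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] *= -1

print("magnetisation per spin:", abs(spins.sum()) / L**2)
```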

  6. Microcomputer interface for computer-controlled enzyme kinetic studies with the monolayer technique

    NARCIS (Netherlands)

    Haas, G. de; Donné-Op den Kelder, G.M.; Wildt, H. van der

    1984-01-01

    Abstract A microcomputer interface for computer-assisted monolayer experiments was developed, tested, and used for analysis of the enzymatic hydrolysis by pancreatic phospholipases A2 (EC 3.1.1.4) of 1,2-didodecanoyl-sn-glycero-3-sulfate monitored under constant surface pressure. The interface descr

  7. A simple dynamic model and transient simulation of the nuclear power reactor on microcomputers

    Energy Technology Data Exchange (ETDEWEB)

    Han, Yang Gee; Park, Cheol [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1997-12-31

    A simple dynamic model is developed for the transient simulation of the nuclear power reactor. The dynamic model includes the normalized neutron kinetics model with reactivity feedback effects and the core thermal-hydraulics model. The main objective of this paper is to demonstrate the capability of the developed dynamic model to simulate various important variables of interest during a nuclear power reactor transient. Some representative results of transient simulations show the expected trends in all cases, even though no data are available for comparison. In this work, transient simulations are performed on a microcomputer using the DESIRE/N96T continuous system simulation language, which is applicable to nuclear power reactor transient analysis. 3 refs., 9 figs. (Author)
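
    A normalized neutron kinetics model with reactivity feedback of the kind described can be sketched with one delayed-neutron group and a lumped fuel-temperature feedback term. The example below is illustrative only; it is not the DESIRE/N96T model, and all constants are placeholders.

```python
# Minimal sketch of one-delayed-group point reactor kinetics with a simple
# fuel-temperature reactivity feedback (illustrative parameters; not the
# authors' DESIRE/N96T model).
beta, lam, Lambda = 0.0065, 0.08, 2e-5    # delayed fraction, decay const [1/s], gen. time [s]
alpha_f = -3e-5                            # fuel temperature coefficient [dk/k per K]
rho_ext = 0.0010                           # external reactivity step

n = 1.0                                    # normalized power
C = beta * n / (lam * Lambda)              # precursor concentration at equilibrium
dT = 0.0                                   # fuel temperature rise [K]

dt, t_end = 1e-4, 5.0
for _ in range(int(t_end / dt)):
    rho = rho_ext + alpha_f * dT
    dn = ((rho - beta) / Lambda) * n + lam * C
    dC = (beta / Lambda) * n - lam * C
    ddT = 0.5 * n - 0.1 * dT               # crude lumped fuel heat balance (illustrative)
    n, C, dT = n + dt * dn, C + dt * dC, dT + dt * ddT

print(f"relative power after {t_end} s: {n:.3f}")
```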

  8. Mapping the calcification of bovine pericardium in rat model by enhanced micro-computed tomography.

    Science.gov (United States)

    Liu, Jing; Zhong, Shengping; Lan, Hualin; Meng, Xu; Zhang, Haibo; Fan, Yubo; Wang, Yuxing; Wang, Chunren; Wang, Zhaoxu

    2014-09-01

    The calcification initiation and progression of the bioprosthetic heart valve were investigated in a rat model by enhanced micro-computed tomography, together with histologic study and scanning electron microscope analysis. The early-stage implantation data showed apparent dendritic patterns in the radiographic images of the glutaraldehyde-treated bovine pericardium, and this dendritic pattern was verified to be associated with the vessel distribution in the tissue. Histologic study and scanning electron microscope analysis both indicated that the calcium deposits in the pericardial vessel regions were more severe than those scattered in the collagen fibers in the first two weeks after implantation. Subsequently, calcification spread and the entire sample was severely calcified by 60 days. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Comparative Micro-computed Tomographic Evaluation of the Sealing Quality of ProRoot MTA and MTA Angelus Apical Plugs Placed with Various Techniques.

    Science.gov (United States)

    Sisli, Selen Nihal; Ozbas, Hakan

    2017-01-01

    This study compared the effects of different mixing and placement techniques on sealing of ProRoot MTA (Dentsply Maillefer, Ballaigues, Switzerland) and MTA Angelus (Soluçoes Odontologicas, Londrina, Brazil) apical plugs using micro-computed tomographic (micro-CT) imaging. Standardized divergent open apex models were created using 120 extracted maxillary incisors and divided into 8 groups (n = 15). ProRoot MTA and MTA Angelus were mixed manually or mechanically and introduced to form apical plugs by hand condensation or indirect ultrasonic activation for 10 seconds. The samples were scanned using micro-CT imaging, and volumetric analysis of the voids between the dentin walls and the apical plugs and the porosity inside MTA was performed. Irrespective of the mixing and placement techniques used, the voids between the dentin walls and the apical plugs in the MTA Angelus groups were greater than those in the ProRoot MTA groups (P MTA than combined manual mixing and hand condensation (P MTA, both mixed and placed manually, was greater than for MTA Angelus (P  .05). ProRoot MTA showed better marginal adaptation than MTA Angelus. Mechanically mixed products had better handling characteristics than the manually mixed product. Indirect ultrasonic activation did improve the adaptation of manually mixed MTA Angelus to the dentin walls. Copyright © 2016 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  10. "Speaking" Microcomputer

    Science.gov (United States)

    Mandy, C.; Annerton, J.

    1986-01-01

    Microcomputer system allows blind person to use portable computer and communicate with remote mainframe computer. System prepares data for transmission to mainframe, translates data transmitted from mainframe for its own use, and converts data ordinarily displayed on its video screen into synthesized speech.

  11. "Speaking" Microcomputer

    Science.gov (United States)

    Mandy, C.; Annerton, J.

    1986-01-01

    Microcomputer system allows blind person to use portable computer and communicate with remote mainframe computer. System prepares data for transmission to mainframe, translates data transmitted from mainframe for its own use, and converts data ordinarily displayed on its video screen into synthesized speech.

  12. Modeling survival of juvenile salmon during downriver migration in the Columbia River on a microcomputer

    Energy Technology Data Exchange (ETDEWEB)

    Peloquin, R.A.; McKenzie, D.H.

    1994-10-01

    A compartmental model has been implemented on a microcomputer as an aid in the analysis of alternative solutions to a problem. The model, entitled Smolt Survival Simulator, simulates the survival of juvenile salmon during their downstream migration and passage of hydroelectric dams in the Columbia River. The model is designed to function in a workshop environment where resource managers and fisheries biologists can study alternative measures that may potentially increase juvenile anadromous fish survival during downriver migration. The potential application of the model has placed several requirements on the implementing software. It must be available for use in workshop settings. The software must be easy to use with minimal computer knowledge. Scenarios must be created and executed quickly and efficiently. Results must be immediately available. Because of these requirements, software design emphasis was placed on the user interface. The discussion focuses on methods used in the development of the SSS software user interface. These methods should reduce user stress and allow thorough and easy parameter modification.

  13. Evaluation of Angiogenesis Using Micro-Computed Tomography in a Xenograft Mouse Model of Lung Cancer

    Directory of Open Access Journals (Sweden)

    Rajkumar Savai

    2009-01-01

    Quantitative evaluation of lung tumor angiogenesis using immunohistochemical techniques has been limited by difficulties in generating reproducible data. To analyze intrapulmonary tumor angiogenesis, we used high-resolution micro-computed tomography (micro-CT) of lung tumors of mice inoculated with mouse Lewis lung carcinoma (LLC1) or human adenocarcinoma (A549) cell lines. The lung vasculature was filled with the radiopaque silicone rubber, Microfil, through the jugular vein (in vivo application) or pulmonary artery (ex vivo application). In addition, human adenocarcinoma lung tumor-bearing mice were treated site-specifically with a humanized monoclonal antibody (bevacizumab) against vascular endothelial growth factor. Quantitative analysis of lung tumor microvessels imaged with micro-CT showed that more vessels (mainly small, <0.02 mm2) were filled using the in vivo (5.4%) compared with the ex vivo (2.1%) method. Furthermore, bevacizumab-treated lung tumor-bearing mice showed significantly reduced lung tumor volume and lung tumor angiogenesis compared with untreated mice, as assessed by micro-CT. Interestingly, microvascularization of mainly the smaller vessels (<0.02 mm2) was reduced after bevacizumab treatment. This observation with micro-CT correlated well with immunohistochemical measurement of microvessels. Therefore, micro-CT is a novel method for investigating lung tumor angiogenesis, and it might be considered an additional complementary tool for precise quantification of angiogenesis.

  14. A Quantitative Volumetric Micro-Computed Tomography Method to Analyze Lung Tumors in Genetically Engineered Mouse Models

    Directory of Open Access Journals (Sweden)

    Brian B. Haines

    2009-01-01

    Two genetically engineered, conditional mouse models of lung tumor formation, K-rasLSL-G12D and K-rasLSL-G12D/p53LSL-R270H, are commonly used to model human lung cancer. Developed by Tyler Jacks and colleagues, these models have been invaluable for studying in vivo lung cancer initiation and progression in a genetically and physiologically relevant context. However, the heterogeneity, multiplicity and complexity of tumor formation in these models make it challenging to monitor tumor growth in vivo and have limited the application of these models in oncology drug discovery. Here, we describe a novel analytical method to quantitatively measure total lung tumor burden in live animals using micro-computed tomography imaging. Applying this methodology, we studied the kinetics of tumor development and response to targeted therapy in vivo in K-ras and K-ras/p53 mice. Consistent with previous reports, lung tumors in both models developed in a time- and dose (Cre recombinase)-dependent manner. Furthermore, the compound K-rasLSL-G12D/p53LSL-R270H mice developed tumors faster and more robustly than mice harboring a single K-rasLSL-G12D oncogene, as expected. Erlotinib, a small molecule inhibitor of the epidermal growth factor receptor, significantly inhibited tumor growth in K-rasLSL-G12D/p53LSL-R270H mice. These results demonstrate that this novel imaging technique can be used to monitor both tumor progression and response to treatment and therefore supports a broader application of these genetically engineered mouse models in oncology drug discovery and development.

  15. Application of Micro-Computed Tomography with Iodine Staining to Cardiac Imaging, Segmentation and Computational Model Development

    Science.gov (United States)

    Aslanidi, OV; Nikolaidou, T; Zhao, J; Smaill, BH; Gilbert, SH; Holden, AV; Lowe, T; Withers, PJ; Jarvis, JC; Stephenson, RS; Hart, G; Hancox, JC; Boyett, MR; Zhang, H

    2012-01-01

    Micro-computed tomography (micro-CT) has been widely used to generate high-resolution 3D tissue images from small animals non-destructively, especially for mineralized skeletal tissues. However, its application to the analysis of soft cardiovascular tissues has been limited by poor inter-tissue contrast. Recent ex vivo studies have shown that contrast between muscular and connective tissue in micro-CT images can be enhanced by staining with iodine. In the present study, we apply this novel technique for imaging of cardiovascular structures in canine hearts. We optimize the method to obtain high resolution X-ray micro-CT images of the canine atria and its distinctive regions - including the Bachmann’s bundle, atrioventricular node, pulmonary arteries and veins - with clear inter-tissue contrast. The imaging results are used to reconstruct and segment the detailed 3D geometry of the atria. Structure tensor analysis shows that the arrangement of atrial fibres can also be characterised using the enhanced micro-CT images, as iodine preferentially accumulates within the muscular fibres rather than in connective tissues. This novel technique can be particularly useful in non-destructive imaging of 3D cardiac architectures from large animals and humans, due to the combination of relatively high speed (~1 hour/scan of a large canine heart) and high voxel resolution (36 μm) provided. In summary, contrast micro-CT facilitates fast and non-destructive imaging and segmenting of detailed 3D cardiovascular geometries, as well as measuring fibre orientation, which are crucial in constructing biophysically detailed computational cardiac models. PMID:22829390

  16. X-Ray Micro-Computed Tomography of Apollo Samples as a Curation Technique Enabling Better Research

    Science.gov (United States)

    Ziegler, R. A.; Almeida, N. V.; Sykes, D.; Smith, C. L.

    2014-01-01

    X-ray micro-computed tomography (micro-CT) is a technique that has been used to research meteorites for some time, and recently it is becoming a more common tool for the curation of meteorites and Apollo samples. Micro-CT is ideally suited to the characterization of astromaterials in the curation process as it can provide textural and compositional information at a small spatial resolution rapidly, nondestructively, and without compromising the cleanliness of the samples (e.g., samples can be scanned sealed in Teflon bags). This data can then inform scientists and curators when making and processing future sample requests for meteorites and Apollo samples. Here we present some preliminary results on micro-CT scans of four Apollo regolith breccias. Methods: Portions of four Apollo samples were used in this study: 14321, 15205, 15405, and 60639. All samples were 8-10 cm in their longest dimension and approximately equant. These samples were micro-CT scanned on the Nikon HMXST 225 System at the Natural History Museum in London. Scans were made at 205-220 kV, 135-160 microamps beam current, with an effective voxel size of 21-44 microns. Results: Initial examination of the data identifies a variety of mineral clasts (including sub-voxel FeNi metal grains) and lithic clasts within the regolith breccias. Textural information within some of the lithic clasts was also discernable. Of particular interest was a large basalt clast (approx. 1.3 cc) found within sample 60639, which appears to have a sub-ophitic texture. Additionally, internal void space, e.g., fractures and voids, is readily identifiable. Discussion: It is clear from the preliminary data that micro-CT analyses are able to identify important "new" clasts within the Apollo breccias, and better characterize previously described clasts or igneous samples. For example, the 60639 basalt clast was previously believed to be quite small based on its approx. 0.5 sq cm exposure on the surface of the main mass

  17. Software design methodologies for microcomputers

    Science.gov (United States)

    Dowsing, R. D.

    1982-05-01

    It is the development phase of the software lifecycle which has attracted a great deal of interest in the last few years. This paper surveys the stages in software production and the need for a rigorous, formal approach. This type of approach is illustrated by the development of a program for a microcomputer using structured programming techniques.

  18. MICRO-VERS. Micro-computer Software for the Vocational Education Reporting System. Version 3.1. Radio Shack TRS-80 Model III.

    Science.gov (United States)

    Illinois State Board of Education, Springfield. Dept. of Adult, Vocational and Technical Education.

    This manual is intended to accompany a software system for the TRS-80 Model III computer that is designed to aid local districts in completing vocational education enrollment claims and Vocational Education Data System (VEDS) reports. Part I, Introduction, gives a brief overview of the Microcomputer Vocational Education Reporting System…

  19. Communication Analysis modelling techniques

    CERN Document Server

    España, Sergio; Pastor, Óscar; Ruiz, Marcela

    2012-01-01

    This report describes and illustrates several modelling techniques proposed by Communication Analysis; namely Communicative Event Diagram, Message Structures and Event Specification Templates. The Communicative Event Diagram is a business process modelling technique that adopts a communicational perspective by focusing on communicative interactions when describing the organizational work practice, instead of focusing on physical activities1; at this abstraction level, we refer to business activities as communicative events. Message Structures is a technique based on structured text that allows specifying the messages associated to communicative events. Event Specification Templates are a means to organise the requirements concerning a communicative event. This report can be useful to analysts and business process modellers in general, since, according to our industrial experience, it is possible to apply many Communication Analysis concepts, guidelines and criteria to other business process modelling notation...

  20. Thermal-stress analysis of ceramic laminate veneer restorations with different incisal preparations using micro-computed tomography-based 3D finite element models.

    Science.gov (United States)

    Celebi, Alper Tunga; Icer, Esra; Eren, Meltem Mert; Baykasoglu, Cengiz; Mugan, Ata; Yildiz, Esra

    2017-11-01

    The main objective of this study is to investigate the thermal behavior of ceramic laminate veneer restorations of the maxillary central incisor with different incisal preparations, namely butt joint and palatinal chamfer, using the finite element method. In addition, it is also aimed to understand the effect of different thermal loads, which simulate hot and cold liquids imbibed in the mouth. Three-dimensional solid models of the sound tooth and prepared veneer restorations were obtained using micro-computed tomography images. Each ceramic veneer restoration was made up of ceramic, luting resin cement and adhesive layers, which were generated based on the scanned images using computer-aided design software. Our solid model also included the remaining dental tissues such as the periodontal ligament and surrounding cortical and spongy bones. Time-dependent linear thermal analyses were carried out to compare temperature changes and stress distributions of the sound and restored tooth models. The liquid is first in contact with the crown area, where the maximum stresses were obtained. For the restorations, stresses on palatinal surfaces were found to be larger than on buccal surfaces. Through interior tissues, the effect of the thermal load diminished and smaller stress distributions were obtained near the pulp and root-dentin regions. We found that the palatinal chamfer restoration presents comparatively larger stresses than the butt joint preparation. In addition, cold thermal loading showed larger temperature changes and stress distributions than hot thermal loading, independent of the restoration technique. Copyright © 2017 Elsevier Ltd. All rights reserved.
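
    The time-dependent linear thermal analysis described above can be illustrated, in a much reduced form, by a one-dimensional transient conduction calculation through a ceramic/cement/dentin layer stack subjected to a cold-liquid surface temperature step. Layer thicknesses and material properties below are generic placeholders, not values from the study.

```python
# Minimal sketch of a 1D transient heat-conduction calculation through a
# ceramic / resin-cement / dentin layer stack under a cold-liquid surface
# temperature step (all thicknesses and material properties are illustrative
# placeholders, not taken from the study).
import numpy as np

# layer: thickness [m], thermal conductivity [W/mK], density [kg/m^3], cp [J/kgK]
layers = [(0.7e-3, 1.5, 2400, 840),    # ceramic veneer
          (0.1e-3, 0.3, 1200, 1500),   # luting cement
          (2.0e-3, 0.6, 2100, 1300)]   # dentin

dx = 2e-5
k, rho_cp = [], []
for th, kk, rho, cp in layers:
    nodes = int(round(th / dx))
    k += [kk] * nodes
    rho_cp += [rho * cp] * nodes
k, rho_cp = np.array(k), np.array(rho_cp)
N = k.size

T = np.full(N, 36.5)                   # start at body temperature [degC]
T_surface, T_pulp = 5.0, 36.5          # cold drink on the veneer, pulp side held constant
dt = 0.2 * dx**2 * rho_cp.min() / k.max()   # explicit stability margin

for _ in range(int(2.0 / dt)):         # simulate 2 s of contact
    Ti = T.copy()
    Ti[0], Ti[-1] = T_surface, T_pulp
    flux_r = k[:-1] * (Ti[1:] - Ti[:-1]) / dx      # inter-node heat fluxes
    dTdt = np.zeros(N)
    dTdt[1:-1] = (flux_r[1:] - flux_r[:-1]) / (dx * rho_cp[1:-1])
    T = Ti + dt * dTdt

print(f"temperature near the pulpal side after 2 s: {T[-2]:.1f} degC")
```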

  1. RAMS Model for Terrestrial Pathways Version 3.0 (for microcomputers). Model-Simulation

    Energy Technology Data Exchange (ETDEWEB)

    Niebla, E.

    1989-01-01

    The RAMS Model for Terrestrial Pathways is a computer program for calculation of numeric criteria for land application and distribution and marketing of sludges under the sewage-sludge regulations at 40 CFR Part 503. The risk-assessment models covered assume that municipal sludge with specified characteristics is spread across a defined area of ground at a known rate once each year for a given number of years. Risks associated with direct land application of sludge and with sludge applied after distribution and marketing are both calculated. The computer program calculates the maximum annual loading of contaminants that can be land applied and still meet the risk criteria specified as input. Software Description: The program is written in the Turbo/Basic programming language for implementation on IBM PC/AT or compatible machines using the DOS 3.0 or higher operating system. Minimum core storage is 512K.

  2. Measurement of kidney stone formation in the rat model using micro-computed tomography

    Science.gov (United States)

    Umoh, Joseph U.; Pitelka, Vasek; Goldberg, Harvey A.; Holdsworth, David W.

    2012-03-01

    Kidney stones were induced in 5 rats by treating them with 1% ethylene glycol and 1% ammonium chloride through free drinking water for six weeks. The animals were anesthetized and imaged in vivo before the treatment at week 0, to obtain baseline data, then at weeks 2 and 6 to monitor kidney stone formation. Micro-CT imaging was performed with an x-ray tube voltage of 90 kV and a current of 40 mA. At week 2, kidney stone formation was observed. A micro-computed tomography methodology for estimating the volume and hydroxyapatite-equivalent mineral content of the kidney stone is presented. It determines the threshold CT number (390 HU) that separates the kidney stone from the tissue. The mean volume of the stones in the 10 kidneys significantly increased from 3.81+/-0.72 mm3 at week 2 to 23.96+/-9.12 mm3 at week 6 (p < 0.05); the error was about 4%. This method allows analysis of kidney stone formation to be carried out in vivo, with fewer experimental animals compared with other ex vivo methods, in which animals are sacrificed. It is precise, accurate, non-destructive, and could be used in pre-clinical research to study the formation of kidney stones in live small animals.
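
    The threshold-based volume and mineral-content estimation described above can be sketched as follows on a synthetic CT volume: voxels above the threshold (390 HU in the study) are counted as stone, volume follows from the voxel size, and mineral content from a linear HU-to-hydroxyapatite calibration. The voxel size and calibration coefficients here are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of threshold-based stone volume and HA-equivalent mineral
# content measurement on a synthetic CT volume. Voxel size and the HU-to-HA
# calibration coefficients are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(2)
vol = rng.normal(40.0, 30.0, size=(200, 200, 200))     # soft-tissue background [HU]
vol[90:110, 95:112, 98:115] = 900.0                    # synthetic "stone"

voxel_mm = 0.05                                        # isotropic voxel size [mm]
threshold_hu = 390.0
stone = vol > threshold_hu

stone_volume = stone.sum() * voxel_mm**3               # [mm^3]
# illustrative linear calibration: mineral density [mg HA/cm^3] = a*HU + b
a, b = 0.7, 0.0
density = a * vol[stone] + b                           # mg HA / cm^3 per voxel
mineral_mg = density.mean() * stone_volume * 1e-3      # mm^3 -> cm^3

print(f"stone volume: {stone_volume:.2f} mm^3")
print(f"HA-equivalent mineral content: {mineral_mg:.2f} mg")
```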

  3. Data flow modeling techniques

    Science.gov (United States)

    Kavi, K. M.

    1984-01-01

    There have been a number of simulation packages developed for the purpose of designing, testing and validating computer systems, digital systems and software systems. Complex analytical tools based on Markov and semi-Markov processes have been designed to estimate the reliability and performance of simulated systems. Petri nets have received wide acceptance for modeling complex and highly parallel computers. In this research, data flow models for computer systems are investigated. Data flow models can be used to simulate both software and hardware in a uniform manner. Data flow simulation techniques provide the computer systems designer with a CAD environment which enables highly parallel complex systems to be defined, evaluated at all levels and finally implemented in either hardware or software. Inherent in the data flow concept is the hierarchical handling of complex systems. In this paper we describe how data flow can be used to model computer systems.
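
    The core idea of data flow simulation, nodes that fire as soon as tokens are present on all of their inputs, can be shown in a few lines of code. The tiny graph below (two adders feeding a multiplier) is purely illustrative and not taken from the paper.

```python
# Minimal sketch of data-flow firing semantics: a node fires when a token is
# available on each of its inputs, consumes them, and places a result token
# on its output edge. The example graph is illustrative only.
from collections import deque

# node -> (operation, list of input edges, output edge)
graph = {
    "add1": (lambda a, b: a + b, ["x", "y"], "s1"),
    "add2": (lambda a, b: a + b, ["u", "v"], "s2"),
    "mul":  (lambda a, b: a * b, ["s1", "s2"], "out"),
}
tokens = {"x": deque([2]), "y": deque([3]), "u": deque([4]), "v": deque([5])}

fired = True
while fired:                       # keep firing until no node is enabled
    fired = False
    for name, (op, ins, out) in graph.items():
        if all(tokens.get(e) for e in ins):
            args = [tokens[e].popleft() for e in ins]
            tokens.setdefault(out, deque()).append(op(*args))
            fired = True

print("result token on 'out':", tokens["out"][0])   # (2+3)*(4+5) = 45
```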

  4. Compression-recovery model of absorptive glass mat (AGM) separator guided by X-ray micro-computed tomography analysis

    Science.gov (United States)

    Kameswara Rao, P. V.; Rawal, Amit; Kumar, Vijay; Rajput, Krishn Gopal

    2017-10-01

    Absorptive glass mat (AGM) separators play a key role in enhancing the cycle life of the valve regulated lead acid (VRLA) batteries by maintaining the elastic characteristics under a defined level of compression force with the plates of the electrodes. Inevitably, there are inherent challenges to maintain the required level of compression characteristics of AGM separators during the charge and discharge of the battery. Herein, we report a three-dimensional (3D) analytical model for predicting the compression-recovery behavior of AGM separators by formulating a direct relationship with the constituent fiber and structural parameters. The analytical model of compression-recovery behavior of AGM separators has successfully included the fiber slippage criterion and internal friction losses. The presented work uses, for the first time, 3D data of fiber orientation from X-ray micro-computed tomography, for predicting the compression-recovery behavior of AGM separators. A comparison has been made between the theoretical and experimental results of compression-recovery behavior of AGM samples with defined fiber orientation characteristics. In general, the theory agreed reasonably well with the experimental results of AGM samples in both dry and wet states. Through theoretical modeling, fiber volume fraction was established as one of the key structural parameters that modulates the compression hysteresis of an AGM separator.

  5. Problem Solving Using Microcomputers.

    Science.gov (United States)

    Demana, Franklin; Waits, Bert

    1987-01-01

    It is argued that microcomputer technology has evolved to the stage that it should be routinely used by mathematics students at all levels. It is shown how the use of microcomputers can change the way problems are solved. Computer-generated graphics are highlighted. (PK)

  6. Sketching with a microcomputer

    DEFF Research Database (Denmark)

    Jacobi, P.

    This report describes the use of a microcomputer as a tool for the sketch design phase of the building process. A housing development scheme comprising 175 dwellings is chosen for illustrating the procedures. Here the microcomputer is utilized for analysing the landscape, for the three-dimensiona...

  7. The Microcomputer Revolution.

    Science.gov (United States)

    Kinne, Harold C.

    The history of the development of the microcomputer industry since its inception in 1975 is explored in this brief paper, which pays special attention to the advent of the small business computer. The author describes the growth of sales, the development of successful companies, the early resistance to use of microcomputers in business, and the…

  8. A micro-computed tomography assessment of the efficacy of rotary and reciprocating techniques for filling material removal in root canal retreatment.

    Science.gov (United States)

    Monguilhott Crozeta, Bruno; Damião de Sousa-Neto, Manoel; Bianchi Leoni, Graziela; Francisco Mazzi-Chaves, Jardel; Terezinha Corrêa Silva-Sousa, Yara; Baratto-Filho, Flares

    2016-11-01

    The aim of this study was to compare the efficacy of rotary and reciprocating techniques for removing filling material from root canals, using micro-computed tomography (micro-CT) imaging. The distal root canals of 42 human mandibular molars were instrumented with hand files up to size 40 according to a step-down technique and obturated with gutta-percha and an epoxy resin-based sealer using Tagger's hybrid technique. Teeth were divided into six groups (n = 7) according to the instruments used for removal of filling material: group PTUR (Protaper Universal Retreatment), group W40 (WaveOne 40.08), group R40 (Reciproc 40.06), group R50 (Reciproc 50.05), group W25/W40 (WaveOne 25.08/40.08), and group R25/R40/R50 (Reciproc 25.08/40.06/50.05). Teeth were scanned with a micro-CT device before and after use of the instruments to calculate the percentage of remaining filling material. Data were compared using one-way ANOVA and Fisher's LSD test at the 5% significance level. The mean percentage of remaining filling material was significantly lower when canals were retreated with the set of instruments R25/R40/R50 and single instruments W40 and R50 compared with the set of instruments W25/W40 (P < 0.05). None of the systems removed the root fillings completely. PTUR instruments were equally effective regarding filling removal compared with W40, R40, R50, W25/W40, and R25/R40/R50. For WaveOne, the use of a single instrument (size 40, taper 0.08) was more effective in removing filling material, while Reciproc showed similar cleaning ability with a single instrument or a combination of instruments. The results of this in vitro study provide consistent information on the filling material removal capacity of mechanized systems during retreatment at the different root canal thirds.

  9. Analysis of single point and continuous wave of condensation root filling techniques by micro-computed tomography

    Directory of Open Access Journals (Sweden)

    Daniele Angerame

    2012-01-01

    The aim of the present microtomographic study was to investigate the quality of root canal filling and void formation in canals of extracted teeth instrumented with a simultaneous technique and filled with two different methods. Twenty-four single-rooted teeth were assigned to two experimental groups (n = 12); canals were shaped with NiTi rotary files, irrigated with NaOCl, and filled either with the single point technique (group 1) or the continuous wave of condensation technique (group 2). Specimens underwent microtomographic scanning. Collected data were statistically analyzed by nonparametric methods. Mean void percentages were found to be limited and similar between the two groups; the single point technique led to greater sealer thickness in partially oval canals.

  10. Marginal adaptation of mineral trioxide aggregate (MTA) to root dentin surface with orthograde/retrograde application techniques: A microcomputed tomographic analysis.

    Science.gov (United States)

    Al Fouzan, Khalid; Awadh, Mohammed; Badwelan, Moahmmed; Gamal, Abeer; Geevarghese, Amrita; Babhair, Samar; Al-Rejaie, Mansour; Al Hezaimi, Khalid; Rotstein, Ilan

    2015-01-01

    Achieving a good apical seal for root canals is known to be associated with good mineral trioxide aggregate (MTA) adaptation to dentin. This study aims to compare the marginal adaptation of MTA with root dentin between orthograde and retrograde application techniques using microcomputed tomography (micro-CT) analysis. Fifty-two single-rooted human teeth were divided into four equal groups: (Group 1) Retrograde MTA (RMTA), (Group 2) Orthograde MTA (OMTA), (Group 3) Etched RMTA (ERMTA), and (Group 4) Etched OMTA (EOMTA). For Group 1, 3-mm retrograde cavities were prepared and filled with MTA. For Group 2, the apical 6 mm of the canals were filled with MTA and sealed with sealer cement and warm gutta-percha. In Groups 3 and 4, canals were treated the same as Groups 1 and 2, respectively, except that before placing the MTA, canals were irrigated with 17% ethylenediaminetetraacetic acid (EDTA). After 48 hours, all the teeth were analyzed using a micro-CT scanner. Mean dentin-MTA contact and the mean length and width of each gap was analysed using one-way analysis of variance (ANOVA). Statistical significance was set at an α level of 5%. No significant difference in gap volumes was observed in the dentin-MTA adaptation in both orthograde and retrograde application techniques. However, significant difference in the gap volumes was observed between RMTA and ERMTA (P = 0.045). Etching significantly improved the MTA-Dentin adaptation (P MTA adaptation, instead with the use of 17% EDTA, a significant improvement could be achieved. Within the limitations of the present study, it concludes that MTA adaptation to dentin tooth structure is not significantly different between an orthograde and retrograde approach. However, the use of EDTA significantly improved the MTA-Dentin adaptation.

  11. Airway remodeling in a mouse asthma model assessed by in-vivo respiratory-gated micro-computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lederlin, Mathieu; Montaudon, Michel [Universite Bordeaux 2, Laboratoire de Physiologie Cellulaire Respiratoire, Bordeaux (France); Institut National de la Sante et de la Recherche Medicale (INSERM), U885, Bordeaux (France); CHU Bordeaux, Unite d' Imagerie Thoracique, Pessac (France); Ozier, Annaig; Begueret, Hugues; Ousova, Olga; Marthan, Roger; Berger, Patrick [Universite Bordeaux 2, Laboratoire de Physiologie Cellulaire Respiratoire, Bordeaux (France); Institut National de la Sante et de la Recherche Medicale (INSERM), U885, Bordeaux (France); Laurent, Francois [Universite Bordeaux 2, Laboratoire de Physiologie Cellulaire Respiratoire, Bordeaux (France); Institut National de la Sante et de la Recherche Medicale (INSERM), U885, Bordeaux (France); CHU Bordeaux, Unite d' Imagerie Thoracique, Pessac (France); CHU de Bordeaux, Hopital du Haut-Leveque, Hopital Cardiologique, Unite d' Imagerie Thoracique et Cardiovasculaire, Pessac (France)

    2010-01-15

    The aim of our study was to evaluate the feasibility of non-invasive respiratory-gated micro-computed tomography (micro-CT) for assessment of airway remodelling in a mouse asthma model. Six female BALB/c mice were challenged intranasally with ovalbumin. A control group of six mice received saline inhalation. All mice underwent plethysmographic study and micro-CT. For each mouse, peribronchial attenuation values of 12 bronchi were measured, from which a peribronchial density index (PBDI) was computed. Mice were then sacrificed and lungs examined histologically. Final analysis involved 10 out of 12 mice. Agreement of measurements across observers and over time was very good (intraclass correlation coefficients: 0.94-0.98). There was a significant difference in PBDI between asthmatic and control mice (-210 vs. -338.9 HU, P=0.008). PBDI values were correlated to bronchial muscle area (r=0.72, P=0.018). This study shows that respiratory-gated micro-CT may allow non-invasive monitoring of bronchial remodelling in asthmatic mice and evaluation of innovative treatment effects. (orig.)

  12. Establishment of temporomandibular joint puncture technique in rats using in vivo micro-computed tomography (R_mCT®).

    Science.gov (United States)

    Kameoka, S; Matsumoto, K; Kai, Y; Yonehara, Y; Arai, Y; Honda, K

    2010-10-01

    The aim of the report was to establish puncture techniques for the temporomandibular joint (TMJ) cavity in rats. The experimental sample comprised 30 male Sprague-Dawley rats. Under general anaesthesia, the superior joint cavity of the rat was punctured either laterally (lateral puncture technique (LPT), n = 11), anterosuperiorly (anterosuperior puncture technique (ASPT), n = 13) or anteroinferiorly (anteroinferior puncture technique (AIPT), n = 6) using a 27-gauge needle. After micro-CT (R-mCT®, Rigaku, Tokyo, Japan) confirmed that the tip of the needle was located on the mandibular fossa, 0.05 ml of contrast media was injected under micro-CT fluoroscopic guidance. After confirmation that the joint cavity was filled with contrast media, micro-CT imaging was carried out. The LPT was accurate in 5 of the 11 animals. The ASPT was accurate in all 13 animals. The AIPT was accurate in 3 of the 6 animals. Furthermore, the ASPT and AIPT demonstrated improved preservation of the needle; it was harder to detach the needle, which led to greater stability. These results suggest that the ASPT assisted by R-mCT® is useful for basic research, including drug discovery and the pathogenesis of TMJ diseases.

  13. Microcomputer Applications in Analytical Chemistry.

    Science.gov (United States)

    Long, Joseph W.

    The first part of this paper addresses the following topics: (1) the usefulness of microcomputers; (2) applications for microcomputers in analytical chemistry; (3) costs; (4) major microcomputer systems and subsystems; and (5) which microcomputer to buy. Following these brief comments, the major focus of the paper is devoted to a discussion of…

  14. Assessment of three root canal preparation techniques on root canal geometry using micro-computed tomography: In vitro study

    Directory of Open Access Journals (Sweden)

    Shaikha M Al-Ali

    2012-01-01

    Aim: To assess the effects of three root canal preparation techniques on canal volume and surface area using three-dimensionally reconstructed root canals in extracted human maxillary molars. Materials and Methods: Thirty extracted human maxillary molars having three separate roots and similar root shape were randomly selected from a pool of extracted teeth for this study and stored in normal saline solution until used. A computed tomography scanner (Philips Brilliance CT 64-slice) was used to analyze root canals in the extracted maxillary molars. Specimens were scanned before and after canals were prepared using stainless steel K-Files, Ni-Ti rotary ProTaper and rotary SafeSiders instruments. Differences in dentin volume removed, surface area, the proportion of unchanged area and canal transportation were calculated using specially developed software. Results: Instrumentation of canals increased volume and surface area. Statistical analysis found a statistically significant difference among the three groups in total change in volume (P = 0.001) and total change in surface area (P = 0.13). Significant differences were found when testing both groups against group III (SafeSiders). Significant differences in change of volume were noted when grouping was made with respect to canal type (in MB and DB) (P < 0.05). Conclusion: The current study used computed tomography, an innovative and non-destructive technique, to illustrate changes in canal geometry. Overall, there were few statistically significant differences between the three instrumentation techniques used. SafeSiders stainless steel 40/0.02 instruments exhibit greater cutting efficiency on dentin than K-Files and ProTaper. CT is a new and valuable tool to study root canal geometry and changes after preparation in great detail. Further studies with 3D techniques are required to fully understand the biomechanical aspects of root canal preparation.

  15. A technique for evaluating bone ingrowth into 3D printed, porous Ti6Al4V implants accurately using X-ray micro-computed tomography and histomorphometry.

    Science.gov (United States)

    Palmquist, Anders; Shah, Furqan A; Emanuelsson, Lena; Omar, Omar; Suska, Felicia

    2017-03-01

    This paper investigates the application of X-ray micro-computed tomography (micro-CT) to accurately evaluate bone formation within 3D printed, porous Ti6Al4V implants manufactured using Electron Beam Melting (EBM), retrieved after six months of healing in sheep femur and tibia. All samples were scanned twice (i.e., before and after resin embedding), using fast, low-resolution scans (Skyscan 1172; Bruker micro-CT, Kontich, Belgium), and were analysed by 2D and 3D morphometry. The main questions posed were: (i) Can low resolution, fast scans provide morphometric data of bone formed inside (and around) metal implants with a complex, open-pore architecture?, (ii) Can micro-CT be used to accurately quantify both the bone area (BA) and bone-implant contact (BIC)?, (iii) What degree of error is introduced in the quantitative data by varying the threshold values?, and (iv) Does resin embedding influence the accuracy of the analysis? To validate the accuracy of micro-CT measurements, each data set was correlated with a corresponding centrally cut histological section. The results show that quantitative histomorphometry corresponds strongly with 3D measurements made by micro-CT, where a high correlation exists between the two techniques for bone area/volume measurements around and inside the porous network. On the contrary, the direct bone-implant contact is challenging to estimate accurately or reproducibly. Large errors may be introduced in micro-CT measurements when segmentation is performed without calibrating the data set against a corresponding histological section. Generally, the bone area measurement is strongly influenced by the lower threshold limit, while the upper threshold limit has little or no effect. Resin embedding does not compromise the accuracy of micro-CT measurements, although there is a change in the contrast distributions and optimisation of the threshold ranges is required. Copyright © 2016 Elsevier Ltd. All rights reserved.
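
    The paper's observation that the bone area measurement is strongly influenced by the lower threshold limit, while the upper limit has little effect, can be illustrated with a toy sweep over lower-threshold values on a synthetic slice; the grey values and thresholds below are arbitrary, uncalibrated placeholders.

```python
# Minimal sketch of threshold sensitivity: bone area in a synthetic micro-CT
# slice is the fraction of pixels whose grey value lies in [lower, upper],
# swept over several lower-threshold choices. Values are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
slice_img = rng.normal(60, 15, size=(512, 512))                       # marrow / soft tissue
slice_img[100:300, 100:300] = rng.normal(160, 20, size=(200, 200))    # "bone" patch

upper = 255.0
for lower in (110, 120, 130, 140, 150):
    bone = (slice_img >= lower) & (slice_img <= upper)
    print(f"lower threshold {lower:3d}: bone area fraction = {bone.mean():.3f}")
```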

  16. Tevatron extraction microcomputer

    Energy Technology Data Exchange (ETDEWEB)

    Chapman, L.; Finley, D.A.; Harrison, M.; Merz, W.

    1985-06-01

    Extraction in the Fermilab Tevatron is controlled by a multi-processor Multibus microcomputer system called QXR (Quad eXtraction Regulator). QXR monitors several analog beam signals and controls three sets of power supplies: the "bucker" and "pulse" magnets at a rate of 5760 Hz, and the "QXR" magnets at 720 Hz. QXR supports multiple slow spills (up to a total of 35 seconds) with multiple fast pulses intermixed. It linearizes the slow spill and bucks out the high frequency components. Fast extraction is done by outputting a variable pulse waveform. Closed loop learning techniques are used to improve performance from cycle to cycle for both slow and fast extraction. The system is connected to the Tevatron clock system so that it can track the machine cycle. QXR is also connected to the rest of the Fermilab control system, ACNET. Through ACNET, human operators and central computers can monitor and control extraction through communications with QXR. The controls hardware and software both employ some standard and some specialized components. This paper gives an overview of QXR as a control system; another paper summarizes performance.

  17. Mathematical modelling techniques

    CERN Document Server

    Aris, Rutherford

    1995-01-01

    ""Engaging, elegantly written."" - Applied Mathematical ModellingMathematical modelling is a highly useful methodology designed to enable mathematicians, physicists and other scientists to formulate equations from a given nonmathematical situation. In this elegantly written volume, a distinguished theoretical chemist and engineer sets down helpful rules not only for setting up models but also for solving the mathematical problems they pose and for evaluating models.The author begins with a discussion of the term ""model,"" followed by clearly presented examples of the different types of mode

  18. Microcomputers in Education: Why Is Earlier Better?

    Science.gov (United States)

    Cuffaro, Harriet K.

    1984-01-01

    Microcomputers are not necessarily a desirable teaching/learning tool for young children. Learning styles of the preschool child are not often compatible with computer assisted instruction techniques. An examination of the types of available programing activities and software is presented. (DF)

  19. Micro-computed tomography (CT) based assessment of dental regenerative therapy in the canine mandible model

    Science.gov (United States)

    Khobragade, P.; Jain, A.; Setlur Nagesh, S. V.; Andreana, S.; Dziak, R.; Sunkara, S. K.; Sunkara, S.; Bednarek, D. R.; Rudin, S.; Ionita, C. N.

    2015-03-01

    High-resolution 3D bone-tissue structure measurements may provide information critical to the understanding of bone regeneration processes and to bone strength assessment. Tissue engineering studies rely on such nondestructive measurements to monitor the bone graft regeneration area. In this study, we measured bone yield, fractal dimension and trabecular thickness through micro-CT slices for different grafts and controls. Eight canines underwent surgery to remove a bone volume (defect) in the canine's jaw at a total of 44 different locations. We kept 11 defects empty for control and filled the remaining ones with three regenerative materials: NanoGen (NG), an FDA-approved material (n=11), a novel NanoCalcium Sulfate (NCS) material (n=11) and an NCS alginate (NCS+alg) material (n=11). After a minimum of four and eight weeks, the canines were sacrificed and the jaw samples were extracted. We used a custom-built micro-CT system to acquire the data volume and developed software to measure the bone yield, fractal dimension and trabecular thickness. The software used a segmentation algorithm based on histograms derived from volumes of interest indicated by the operator. Using bone yield and fractal dimension as indices, we are able to differentiate between the control and regenerative materials (p < 0.05). The process and quality of bone were dependent upon the position of the defect and the time period of healing. This study presents one of the first quantitative comparisons using non-destructive micro-CT analysis for bone regenerative material in a large animal with a critical defect model. Our results indicate that micro-CT measurement could be used to monitor in vivo bone regeneration studies for greater understanding of the regenerative process.
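
    Of the three indices measured, the fractal dimension is the least standard; a common way to estimate it from a segmented slice is box counting, sketched below on a synthetic binary image. The pattern and box sizes are illustrative, and this is not necessarily the authors' exact algorithm.

```python
# Minimal sketch of a box-counting fractal-dimension estimate on a segmented
# (binary) micro-CT slice. Synthetic pattern and box sizes are illustrative.
import numpy as np

rng = np.random.default_rng(4)
img = rng.random((256, 256)) > 0.7          # synthetic binary "trabecular" pattern

sizes, counts = [], []
for s in (2, 4, 8, 16, 32):
    # count boxes of side s that contain at least one foreground pixel
    n = 0
    for i in range(0, 256, s):
        for j in range(0, 256, s):
            if img[i:i + s, j:j + s].any():
                n += 1
    sizes.append(s)
    counts.append(n)

# slope of log(count) vs log(1/size) gives the box-counting dimension
slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
print(f"box-counting fractal dimension ~ {slope:.2f}")
```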

  20. Teaching with Microcomputers.

    Science.gov (United States)

    Gerber, Michael M.

    1986-01-01

    The introduction to the special section on microcomputers in special education reviews the trend toward increasing computer use, raises such issues as the need for special educators to provide leadership in the use of computers for individualized instruction, and summarizes remaining articles in the section. (DB)

  1. Survey of semantic modeling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Smith, C.L.

    1975-07-01

    The analysis of the semantics of programming languages has been attempted with numerous modeling techniques. By providing a brief survey of these techniques together with an analysis of their applicability for answering semantic issues, this report attempts to illuminate the state of the art in this area. The intent is to be illustrative rather than thorough in the coverage of semantic models. A bibliography is included for the reader who is interested in pursuing this area of research in more detail.

  3. Microcomputer interfacing and applications

    CERN Document Server

    Mustafa, M A

    1990-01-01

    This is the applications guide to interfacing microcomputers. It offers practical non-mathematical solutions to interfacing problems in many applications including data acquisition and control. Emphasis is given to the definition of the objectives of the interface, then comparing possible solutions and producing the best interface for every situation. Dr Mustafa A Mustafa is a senior designer of control equipment and has written many technical articles and papers on the subject of computers and their application to control engineering.

  4. The Microcomputer in the Library: I. Introduction.

    Science.gov (United States)

    Leggate, Peter; Dyer, Hilary

    1985-01-01

    This first in a series of six articles introducing microcomputer applications in smaller libraries discusses microcomputer components (hardware, operating systems, applications software); microcomputer history; current hardware; library applications; performance criteria; factors influencing performance; integrated systems; information retrieval;…

  5. Life sciences flight experiments microcomputer

    Science.gov (United States)

    Bartram, Peter N.

    1987-01-01

    A promising microcomputer configuration for the Spacelab Life Sciences Laboratory Equipment inventory consists of multiple processors. One processor's use is reserved, with additional processors dedicated to real-time input and output operations. A simple form of such a configuration, with a processor board for analog-to-digital conversion and another processor board for digital-to-analog conversion, was studied. The system used digital parallel data lines between the boards, operating independently of the system bus. Good performance of individual components was demonstrated: the analog-to-digital converter operated at over 10,000 samples per second. The combination of the data transfer between boards with the input or output functions on each board slowed performance, with a maximum throughput of 2800 to 2900 analog samples per second. Any of several techniques, such as use of the system bus for data transfer or the addition of direct memory access hardware to the processor boards, should give significantly improved performance.

  6. Microcomputer-controlled high-altitude data acquisition system

    Science.gov (United States)

    1985-05-01

    A new microcomputer controlled high altitude data acquisition system was developed. The system provides a new technique for data acquisition from China's astronomical, meteorological and other high altitude experiments and opens up new territory in microcomputer applications. This microcomputer controlled high altitude data acquisition system is made up of a Z80 single board computer, 10 K memory expansion board, and keyboard and display board which can collect 16 analog signals simultaneously, and through analog/digital conversion can convert external analog signals into digital signals then encode them in a certain form through program modulation and store them on audio cassette. The data is immediately retrieved from the tape and sent to the surface microcomputer system for data processing and analysis.

  7. Model building techniques for analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Walther, Howard P.; McDaniel, Karen Lynn; Keener, Donald; Cordova, Theresa Elena; Henry, Ronald C.; Brooks, Sean; Martin, Wilbur D.

    2009-09-01

    The practice of mechanical engineering for product development has evolved into a complex activity that requires a team of specialists for success. Sandia National Laboratories (SNL) has product engineers, mechanical designers, design engineers, manufacturing engineers, mechanical analysts and experimentalists, qualification engineers, and others that contribute through product realization teams to develop new mechanical hardware. The goal of SNL's Design Group is to change product development by enabling design teams to collaborate within a virtual model-based environment whereby analysis is used to guide design decisions. Computer-aided design (CAD) models using PTC's Pro/ENGINEER software tools are heavily relied upon in the product definition stage of parts and assemblies at SNL. The three-dimensional CAD solid model acts as the design solid model that is filled with all of the detailed design definition needed to manufacture the parts. Analysis is an important part of the product development process. The CAD design solid model (DSM) is the foundation for the creation of the analysis solid model (ASM). Creating an ASM from the DSM currently is a time-consuming effort; the turnaround time for results of a design needs to be decreased to have an impact on the overall product development. This effort can be decreased immensely through simple Pro/ENGINEER modeling techniques that come down to the way features are created in a part model. This document contains recommended modeling techniques that increase the efficiency of the creation of the ASM from the DSM.

  8. A General Introduction to Microcomputers.

    Science.gov (United States)

    Muiznieks, Viktors

    This basic introduction to microcomputers provides the neophyte with the terminology, definitions, and concepts that explain the microcomputer and computing technology in general. Mathematical operations with binary numbers, computer storage, controlling logic, and the concepts of stack and interrupt are explained. (RAO)

  9. Trends In Microcomputer Image Processing

    Science.gov (United States)

    Strum, William E.

    1988-05-01

    We have seen, in the last four years, the microcomputer become the platform of choice for many image processing applications. By 1991, Frost and Sullivan forecasts that 75% of all image processing will be carried out on microcomputers. Many factors have contributed to this trend and will be discussed in the following paper.

  10. Microcomputers in Education. Selected Bibliography.

    Science.gov (United States)

    Arsulich, Michael

    The more than 300 recent publications on microcomputers in education which are listed include guides, several books, ERIC documents, and journal articles. Entries are categorized by six primary topic areas: microcomputers; computer literacy; computer assisted instruction (drill and practice, tutorial, and simulation applications); computer managed…

  11. Exchanger network synthesis on a microcomputer

    Energy Technology Data Exchange (ETDEWEB)

    Govind, R.; Mocsny, D.; Cosson, P.; Klei, J.

    1986-07-01

    Recent years have seen two developments with the potential to benefit the process industries: the spread of low-cost microcomputers and the coming of age of process synthesis, the systematic approach to the creation of process flowsheets. In this paper the authors explore the two themes and outline the potential for the advancement of process synthesis with the microcomputer as an instrument of technology transfer. As an illustration, a microcomputer program that aids in the synthesis of heat exchanger networks is described. Heat exchanger network synthesis is the subproblem of process synthesis in which heat integration is performed on a partially completed flowsheet. When reactors, separators, etc., are specified, they have an associated set of streams with heating and cooling requirements. It is desirable to utilize the heating requirements of 'cold' streams to supply the cooling requirements of the 'hot' streams to the maximum extent possible. Heat exchanger network synthesis has the twofold aim of determining the limit on this process heat recovery and of specifying the network of exchangers that achieves it at the minimum investment cost. The technology is demonstrating its worth; there are reports of significant savings resulting from its application to industrial processes. The spread of this valuable technique is retarded somewhat by the shortage of experts to spearhead its implementation. The authors offer a program that can perform heat exchanger network synthesis, not as a replacement for a process designer, but as an aid for learning and a tool for the solution of design problems. The fact that it runs on the most widely accepted microcomputer means that it is accessible to any interested engineer.
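
    The abstract does not state which algorithm the program uses; a common way to determine the limit on process heat recovery mentioned above is the problem table (pinch) method. The Python sketch below illustrates it for a made-up set of streams and an assumed minimum approach temperature; none of the numbers come from the paper.

    DT_MIN = 10.0  # assumed minimum approach temperature, K

    # (kind, supply T, target T, heat-capacity flowrate CP in kW/K) -- illustrative data
    streams = [
        ("hot",  180.0,  60.0, 3.0),
        ("hot",  150.0,  30.0, 1.5),
        ("cold",  20.0, 135.0, 2.0),
        ("cold",  80.0, 140.0, 4.0),
    ]

    def shifted(kind, t):
        """Shift hot streams down and cold streams up by half the approach temperature."""
        return t - DT_MIN / 2 if kind == "hot" else t + DT_MIN / 2

    # Temperature-interval boundaries from all shifted supply/target temperatures.
    bounds = sorted({shifted(k, t) for k, ts, tt, cp in streams for t in (ts, tt)},
                    reverse=True)

    # Net heat surplus of each interval: (sum of hot CP - sum of cold CP) * interval width.
    surpluses = []
    for hi_t, lo_t in zip(bounds, bounds[1:]):
        net_cp = 0.0
        for kind, ts, tt, cp in streams:
            top = max(shifted(kind, ts), shifted(kind, tt))
            bot = min(shifted(kind, ts), shifted(kind, tt))
            if top >= hi_t and bot <= lo_t:          # stream is present across the interval
                net_cp += cp if kind == "hot" else -cp
        surpluses.append(net_cp * (hi_t - lo_t))

    # Cascade the surpluses; the largest deficit sets the minimum hot utility,
    # and the closing balance gives the minimum cold utility.
    cascade, running = [], 0.0
    for s in surpluses:
        running += s
        cascade.append(running)
    q_hot_min = max(0.0, -min(cascade))
    q_cold_min = q_hot_min + sum(surpluses)
    print(f"minimum hot utility:  {q_hot_min:.1f} kW")
    print(f"minimum cold utility: {q_cold_min:.1f} kW")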

  12. Using Microcomputers for Institutional Research.

    Science.gov (United States)

    Suttle, J. Lloyd

    1984-01-01

    Many institutional researchers will find that the microcomputer leads to greater efficiency in everything that they do, especially in the two most critical elements of their jobs: thinking and communicating. (Author/MLW)

  13. A Comparative of business process modelling techniques

    Science.gov (United States)

    Tangkawarow, I. R. H. T.; Waworuntu, J.

    2016-04-01

    There are now many business process modelling techniques. This article reports research on the differences among them: for each technique, the definition and structure are explained. The paper presents a comparative analysis of some popular business process modelling techniques. The comparative framework is based on two criteria: notation and how the technique works when implemented in Somerleyton Animal Park. Each technique is summarised with its advantages and disadvantages. The conclusion recommends business process modelling techniques that are easy to use and can serve as the basis for evaluating further modelling techniques.

  14. Microcomputers and School Libraries in the United Kingdom: Part I.

    Science.gov (United States)

    Pain-Lewins, Helen; Watson, Linda

    1990-01-01

    Discussion of the use of microcomputers in school libraries in the United Kingdom focuses on three models of school library computerization: (1) a library management-centered model, which addresses routine library operations; (2) a pupil-centered information skills model; and (3) a curriculum-centered model, which furthers pupil learning in…

  15. Microcomputed tomography-based assessment of retrieved dental implants

    NARCIS (Netherlands)

    Narra, N.; Antalainen, A.K.; Zipprich, H.; Sándor, G.K.; Wolff, J.

    2015-01-01

    Purpose: The aim of this study was to demonstrate the potential of microcomputed tomography (micro-CT) technology in the assessment of retrieved dental implants. Cases are presented to illustrate the value of micro-CT imaging techniques in determining possible mechanical causes for dental implant

  16. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  17. Selected Logistics Models and Techniques.

    Science.gov (United States)

    1984-09-01

    • ACCESS PROCEDURE: On-Line System (OLS), UNINET. RCA maintains proprietary control of this model, and the model is available only through a lease arrangement. • SPONSOR: ASD/ACCC

  18. Comparison of Synchrotron Radiation-based Propagation Phase Contrast Imaging and Conventional Micro-computed Tomography for Assessing Intervertebral Discs and Endplates in a Murine Model.

    Science.gov (United States)

    Hu, Jianzhong; Ni, Shuangfei; Cao, Yong; Wang, Xiaokai; Liao, Shenghui; Lu, Hongbin

    2017-08-01

    Synchrotron radiation-based micro-computed tomography (SRμCT) and micro-CT (μCT) were applied to comparatively assess the intervertebral disc (IVD) and endplate (EP), with the aim of exploring a new approach to evaluate the detailed structure of the IVD and EP during maturation and aging in a murine model. To date, methods to observe the morphological changes in the IVD and EP of rodents have been relatively limited. SRμCT has been recognized as a potential way to visualize structures containing both sclerous and soft tissue. Our study focused on comparing the capacity of SRμCT and μCT to evaluate the detailed structure of the IVD and EP. Both SRμCT and μCT were performed to depict the structure of spinal tissue from 4-month-old mice. The imaging quality was then evaluated in the three-dimensional (3D) reconstructed model. Further, the changes in the EP and IVD during the maturation and aging process were assessed morphologically and quantitatively using SRμCT. The 3D reconstructed model of the EP from both μCT and SRμCT provided detailed information on its inner structure; however, the IVD was only depicted using SRμCT. Multi-angle observations of the 3D structure of the EP and IVD from mice of different ages (15 days, 2 months, 4 months, and 18 months) were performed dynamically by SRμCT. Quantitative evaluations indicated that the total volume of the EP and IVD, the average height of the IVD, and the canal-to-total volume ratio of the EP increased from 15-day-old mice to 4-month-old mice and decreased in 18-month-old mice. The EP and IVD were clearly visualized using SRμCT. Compared with μCT, SRμCT provided better ultrahigh-resolution images of soft and hard tissue simultaneously, which makes it a promising approach for the noninvasive study of disc degeneration.

  19. History Microcomputer Games: Update 2.

    Science.gov (United States)

    Sargent, James E.

    1985-01-01

    Provides full narrative reviews of B-1 Nuclear Bomber (Avalon, 1982); American History Adventure (Social Science Microcomputer Review Software, 1985); Government Simulations (Prentice-Hall, 1985); and The Great War, FDR and the New Deal, and Hitler's War, all from New Worlds Software, 1985. Lists additional information on five other history and…

  20. Microcomputers and the Department Image.

    Science.gov (United States)

    Donaghy, William C.; Donaghy, Carole D.

    Ways in which various uses or applications of a microcomputer can help improve the image of a college or university department are discussed in this paper. The paper discusses applications such as instruction, word processing, management, programing, communications, statistics, and graphics, and audiences for these applications, such as department…

  1. Three-phase voltage-source AC-AC frequency-variable inverter controlled by a single-chip microcomputer

    Institute of Scientific and Technical Information of China (English)

    王建国

    2001-01-01

    This paper introduces a three-phase voltage-source AC-AC frequency-variable inverter controlled by a single-chip microcomputer. The construction of the system, the control scheme, and the solutions to several difficult problems are described.

  2. An automated microcomputer-based system for determining canine paw pressure quantitatively in the dog synovitis model.

    Science.gov (United States)

    Carlson, R P; Datko, L J; Welch, T M; Purvis, W F; Shaw, G W; Thompson, J L; Brunner, T R

    1986-04-01

    The automated system for measuring canine paw pressure is a very useful tool for the evaluation of nonsteroid antiinflammatory drugs such as indomethacin and ibuprofen in the dog synovitis model. The apparatus has been designed to give the operator control over the dog's posture while measuring hind-paw pressure. The stability of the measuring platform permits reproducible measurement of paw pressures. The software package performs direct data recording and data reduction and eliminates tedious manual calculations. Subsequently, the software produces a printout summarizing the experimental results in tabular and graphic form.

  3. Abel model: Evaluates claims of inability to afford penalties and compliance costs, version 2.6 (for microcomputers). Software

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-12-01

    The easy-to-use ABEL software evaluates for-profit company claims of inability to afford penalties, clean-up costs, or compliance costs. Violators raise the issue of inability to pay in most of EPA's enforcement actions regardless of whether there is any hard evidence supporting those claims. The program enables Federal, State and local enforcement professionals to quickly determine if there is any validity to those claims. ABEL is a tool that promotes quick settlements by performing screening analyses of defendants and potentially responsible parties (PRPs) to determine their financial capacity. If ABEL indicates the firm can afford the full penalty, compliance or clean-up cost, then EPA makes no adjustments for inability to pay. If it indicates that the firm cannot afford the full amount, it directs the enforcement personnel to review other financial reports before making any adjustments. After analyzing some basic financial ratios that reflect a company's solvency, ABEL assesses the firm's ability to pay by focusing on projected cash flows. The model explicitly calculates the value of projected, internally generated cash flows from historical tax information, and compares these cash flows to the proposed environmental expenditure(s). The software is extremely easy to use. Users are taken through a series of prompts to enter specified data. On-screen 'help' information is available at any time.

  4. Imaging techniques for visualizing and phenotyping congenital heart defects in murine models.

    Science.gov (United States)

    Liu, Xiaoqin; Tobita, Kimimasa; Francis, Richard J B; Lo, Cecilia W

    2013-06-01

    The mouse model is ideal for investigating the genetic and developmental etiology of congenital heart disease. However, cardiovascular phenotyping for the precise diagnosis of structural heart defects in mice remains challenging. With rapid advances in imaging techniques, high-throughput phenotyping tools are now available for the diagnosis of structural heart defects. In this review, we discuss the efficacy of four different imaging modalities for congenital heart disease diagnosis in fetal/neonatal mice: noninvasive fetal echocardiography, micro-computed tomography (micro-CT), micro-magnetic resonance imaging (micro-MRI), and episcopic fluorescence image capture (EFIC) histopathology. The experience we have gained in the use of these imaging modalities in a large-scale mouse mutagenesis screen has validated their efficacy for congenital heart defect diagnosis in the tiny hearts of fetal and newborn mice. These cutting-edge phenotyping tools will be invaluable for furthering our understanding of the developmental etiology of congenital heart disease.

  5. Dual-energy X-ray absorptiometry, peripheral quantitative computed tomography, and micro-computed tomography techniques are discordant for bone density and geometry measurements in the guinea pig.

    Science.gov (United States)

    Mak, Ivy L; DeGuire, Jason R; Lavery, Paula; Agellon, Sherry; Weiler, Hope A

    2016-05-01

    This study aims to examine agreement among bone mineral content (BMC) and density (BMD) estimates obtained using dual-energy X-ray absorptiometry (DXA), peripheral quantitative computed tomography (pQCT), and micro-computed tomography (μCT) against high-resolution μCT and bone ash of the guinea pig femur. Middle-aged (n = 40, 86 weeks) male guinea pigs underwent in vivo followed by ex vivo DXA (Hologic QDR 4500A) scanning for intact and excised femur BMC and areal density. To assess bone architecture and strength, excised femurs were scanned on pQCT (Stratec XCT 2000L) as well as on two μCT scanners (LaTheta LCT-200; Skyscan 1174), followed by three-point bending test. Reproducibility was determined using triplicate scans; and agreement assessed using Bland-Altman plots with reference methods being high-resolution μCT (Skyscan) for BMD and bone ashing for BMC. All techniques showed satisfactory ex vivo precision (CV 0.05-4.3 %). However, bias compared to the reference method was highest (207.5 %) in trabecular bone volume fraction (BV/TV) measured by LaTheta, and unacceptable in most total femur and cortical bone measurements. Volumetric BMD (vBMD) and BV/TV derived by LaTheta and pQCT at the distal metaphysis were biased from the Skyscan by an average of 49.3 and 207.5 %, respectively. Variability of vBMD, BV/TV and cross-sectional area at the diaphysis ranged from -5.5 to 30.8 %. LaTheta best quantified total femur BMC with an upper bias of 3.3 %. The observed differences among imaging techniques can be attributable to inherent dissimilarity in construction design, calibration, segmentation and scanning resolution used. These bone imaging tools are precise but are not comparable, at least when assessing guinea pig bones.
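
    The agreement analysis described above (bias against a reference method with Bland-Altman limits, and precision as a coefficient of variation) can be reproduced in a few lines; the Python sketch below uses made-up paired measurements, not the study's data.

    import numpy as np

    # Made-up paired measurements (mg/cm^3) from a test technique and the reference.
    reference = np.array([612.0, 587.5, 640.2, 598.8, 570.1])   # e.g. high-resolution uCT
    test      = np.array([655.3, 610.4, 689.9, 632.7, 601.5])   # e.g. pQCT

    # Bland-Altman statistics: bias and 95% limits of agreement.
    diff = test - reference
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd

    # Bias expressed as a percentage of the reference method, as in the abstract.
    percent_bias = 100.0 * diff / reference

    # Precision of triplicate scans expressed as a coefficient of variation (CV%).
    triplicate = np.array([611.8, 613.0, 610.5])
    cv_percent = 100.0 * triplicate.std(ddof=1) / triplicate.mean()

    print(f"bias = {bias:.1f} mg/cm^3, limits of agreement = [{loa_low:.1f}, {loa_high:.1f}]")
    print(f"mean percent bias = {percent_bias.mean():.1f}%, triplicate CV = {cv_percent:.2f}%")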

  6. Modeling Techniques: Theory and Practice

    OpenAIRE

    Odd A. Asbjørnsen

    1985-01-01

    A survey is given of some crucial concepts in chemical process modeling. Those are the concepts of physical unit invariance, of reaction invariance and stoichiometry, the chromatographic effect in heterogeneous systems, the conservation and balance principles and the fundamental structures of cause and effect relationships. As an example, it is shown how the concept of reaction invariance may simplify the homogeneous reactor modeling to a large extent by an orthogonal decomposition of the pro...

  7. The Training of Microcomputer Users: Insights from Two Disciplines.

    Science.gov (United States)

    Stine, Walter D.; Wildemuth, Barbara M.

    1992-01-01

    Research in human computer interaction and instructional design can provide insights into effective methods for training microcomputer users. Suggestions include utilizing error recognition and recovery, development of mental models, simplification of documentation, and using cognitive approaches that allow transfer of knowledge to new situations.…

  8. Microcomputers and School Libraries in the United Kingdom: Part II.

    Science.gov (United States)

    Pain-Lewins, Helen; Watson, Linda

    1990-01-01

    Describes study of seven local education authorities in the United Kingdom which was conducted to discover how models of microcomputer use were being adopted by school libraries. Highlights include database information retrieval; computerized catalogs; improving students' information-handling skills; curriculum-centered plans; integrated systems;…

  9. Modeling Techniques: Theory and Practice

    Directory of Open Access Journals (Sweden)

    Odd A. Asbjørnsen

    1985-07-01

    Full Text Available A survey is given of some crucial concepts in chemical process modeling. Those are the concepts of physical unit invariance, of reaction invariance and stoichiometry, the chromatographic effect in heterogeneous systems, the conservation and balance principles and the fundamental structures of cause and effect relationships. As an example, it is shown how the concept of reaction invariance may simplify the homogeneous reactor modeling to a large extent by an orthogonal decomposition of the process variables. This allows residence time distribution function parameters to be estimated with the reaction in situ, but without any correlation between the estimated residence time distribution parameters and the estimated reaction kinetic parameters. A general word of warning is given to the choice of wrong mathematical structure of models.

  10. NDT-COMP9 microcomputer

    Energy Technology Data Exchange (ETDEWEB)

    Dodd, C.V.; Cowan, R.F.

    1980-09-01

    An 8080-based microcomputer system, the NDT-COMP9, has been designed for instrumentation control and data analysis in eddy-current tests. The NDT-COMP9 represents a significantly more powerful computer system than the NDT-COMP8 microcomputer from which it was developed. The NDT-COMP9 system is contained on a 240- by 120-mm (9.5- by 4.8-in.) circuit board and will fit in a four-wide Nuclear Instrumentation Module (NIM) BIN with 26-pin edge connectors. In addition to the 8080-compatible central processing unit (CPU), an arithmetic processing unit (APU) is available to provide up to 32-bit fixed- or floating-point, basic or transcendental math functions. The 16K of read only memory (ROM) and random access memory (RAM), one serial input-output (I/O) port (RS-232-C at a maximum speed of 9600 baud), and 72 parallel I/O ports are available. The baud rate is under software control. A system monitor and math package are available for use with the microcomputer.

  11. Microcomputer Checks Butt-Weld Accuracy

    Science.gov (United States)

    Clisham, W.; Garner, W.; Cohen, C.; Beal, J.; Polen, R.; Lloyd, J.

    1982-01-01

    Electrical gage and microcomputer eliminate time-consuming manual measurements. Alignment and angle of plates on either side of a butt weld are measured and recorded automatically by a hand-held gage and desk-top microcomputer. The gage and microcomputer quickly determine whether the weld is within dimensional tolerances or whether reworking is needed. The microcomputer prints out measurements while the operator moves the gage from point to point along the weld. Out-of-tolerance measurements are marked by an asterisk on the printout.
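
    A minimal Python sketch of the tolerance check and asterisk-flagged printout described above; the tolerance limits and readings are invented for illustration and are not from the NASA brief.

    # Assumed tolerance limits and gage readings; not values from the NASA brief.
    ALIGNMENT_TOL_MM = 1.5   # maximum allowed plate offset across the weld
    ANGLE_TOL_DEG    = 3.0   # maximum allowed angular mismatch between plates

    readings = [  # (station along weld in mm, offset in mm, angle in degrees)
        (0,   0.4, 1.2),
        (50,  1.8, 0.9),
        (100, 0.7, 3.6),
    ]

    print(f"{'station':>8} {'offset':>8} {'angle':>8}")
    for station, offset, angle in readings:
        out_of_tol = abs(offset) > ALIGNMENT_TOL_MM or abs(angle) > ANGLE_TOL_DEG
        flag = " *" if out_of_tol else ""        # asterisk marks out-of-tolerance points
        print(f"{station:>8} {offset:>8.2f} {angle:>8.2f}{flag}")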

  12. Model checking timed automata : techniques and applications

    NARCIS (Netherlands)

    Hendriks, Martijn.

    2006-01-01

    Model checking is a technique to automatically analyse systems that have been modeled in a formal language. The timed automaton framework is such a formal language. It is suitable to model many realistic problems in which time plays a central role. Examples are distributed algorithms, protocols, emb

  13. Advanced structural equation modeling issues and techniques

    CERN Document Server

    Marcoulides, George A

    2013-01-01

    By focusing primarily on the application of structural equation modeling (SEM) techniques in example cases and situations, this book provides an understanding and working knowledge of advanced SEM techniques with a minimum of mathematical derivations. The book was written for a broad audience crossing many disciplines, assumes an understanding of graduate level multivariate statistics, including an introduction to SEM.

  14. Using Visualization Techniques in Multilayer Traffic Modeling

    Science.gov (United States)

    Bragg, Arnold

    We describe visualization techniques for multilayer traffic modeling - i.e., traffic models that span several protocol layers, and traffic models of protocols that cross layers. Multilayer traffic modeling is challenging, as one must deal with disparate traffic sources; control loops; the effects of network elements such as IP routers; cross-layer protocols; asymmetries in bandwidth, session lengths, and application behaviors; and an enormous number of complex interactions among the various factors. We illustrate by using visualization techniques to identify relationships, transformations, and scaling; to smooth simulation and measurement data; to examine boundary cases, subtle effects and interactions, and outliers; to fit models; and to compare models with others that have fewer parameters. Our experience suggests that visualization techniques can provide practitioners with extraordinary insight about complex multilayer traffic effects and interactions that are common in emerging next-generation networks.

  15. "Hack" Is Not A Dirty Word--The Tenth Anniversary of Patron Access Microcomputer Centers in Libraries.

    Science.gov (United States)

    Dewey, Patrick R.

    1986-01-01

    The history of patron access microcomputers in libraries is described as carrying on a tradition that information and computer power should be shared. Questions that all types of libraries need to ask in planning microcomputer centers are considered and several model centers are described. (EM)

  16. "Hack" Is Not A Dirty Word--The Tenth Anniversary of Patron Access Microcomputer Centers in Libraries.

    Science.gov (United States)

    Dewey, Patrick R.

    1986-01-01

    The history of patron access microcomputers in libraries is described as carrying on a tradition that information and computer power should be shared. Questions that all types of libraries need to ask in planning microcomputer centers are considered and several model centers are described. (EM)

  17. Automating Relational Database Design for Microcomputer Users.

    Science.gov (United States)

    Pu, Hao-Che

    1991-01-01

    Discusses issues involved in automating the relational database design process for microcomputer users and presents a prototype of a microcomputer-based system (RA, Relation Assistant) that is based on expert systems technology and helps avoid database maintenance problems. Relational database design is explained and the importance of easy input…

  18. Microcomputer Polling Improvements for AFSATCOM.

    Science.gov (United States)

    1981-06-01

    This report describes a breadboard microcomputer system based on an Intel SBC 80/20 Single Board Computer (SBC), with a National Semiconductor BLC 416 16K PROM board, a 16K RAM board, and I/O and display circuits, which fulfills the project demonstration

  19. Possible Radiation-Induced Damage to the Molecular Structure of Wooden Artifacts Due to Micro-Computed Tomography, Handheld X-Ray Fluorescence, and X-Ray Photoelectron Spectroscopic Techniques

    Directory of Open Access Journals (Sweden)

    Madalena Kozachuk

    2016-05-01

    Full Text Available This study was undertaken to ascertain whether radiation produced by X-ray photoelectron spectroscopy (XPS, micro-computed tomography (μCT and/or portable handheld X-ray fluorescence (XRF equipment might damage wood artifacts during analysis. Changes at the molecular level were monitored by Fourier transform infrared (FTIR analysis. No significant changes in FTIR spectra were observed as a result of μCT or handheld XRF analysis. No substantial changes in the collected FTIR spectra were observed when XPS analytical times on the order of minutes were used. However, XPS analysis collected over tens of hours did produce significant changes in the FTIR spectra.

  20. A Method to Test Model Calibration Techniques

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-08-26

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.
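
    The goodness-of-fit criteria mentioned (BPI-2400, ASHRAE Guideline 14) are typically expressed as normalized mean bias error (NMBE) and coefficient of variation of the RMSE (CV(RMSE)). The Python sketch below computes those two statistics plus the other two figures of merit named in the abstract, using made-up monthly data in place of the surrogate utility bills; the thresholds shown are commonly cited monthly limits, not values taken from the paper.

    import numpy as np

    # 1) Goodness of fit of the calibrated model to 12 monthly utility bills (kWh).
    bills     = np.array([820, 760, 700, 640, 600, 690, 780, 810, 720, 650, 700, 790], float)
    predicted = np.array([805, 772, 690, 655, 612, 678, 770, 820, 715, 660, 692, 781], float)
    nmbe   = 100.0 * (predicted - bills).sum() / ((len(bills) - 1) * bills.mean())
    cvrmse = 100.0 * np.sqrt(((predicted - bills) ** 2).mean()) / bills.mean()
    fit_ok = abs(nmbe) <= 5.0 and cvrmse <= 15.0   # commonly cited monthly limits

    # 2) Accuracy of the post-retrofit savings prediction against the surrogate "truth".
    true_savings, predicted_savings = 1450.0, 1320.0            # kWh/yr, assumed
    savings_error_pct = 100.0 * (predicted_savings - true_savings) / true_savings

    # 3) Closure on the "true" input parameter values used to generate the surrogate data.
    true_params       = {"infiltration_ach": 0.45, "roof_R": 30.0}
    calibrated_params = {"infiltration_ach": 0.52, "roof_R": 27.5}
    closure = {k: calibrated_params[k] - v for k, v in true_params.items()}

    print(f"NMBE = {nmbe:.2f}%  CV(RMSE) = {cvrmse:.2f}%  fit acceptable: {fit_ok}")
    print(f"savings prediction error = {savings_error_pct:.1f}%")
    print(f"parameter closure errors = {closure}")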

  1. Research Techniques Made Simple: Skin Carcinogenesis Models: Xenotransplantation Techniques.

    Science.gov (United States)

    Mollo, Maria Rosaria; Antonini, Dario; Cirillo, Luisa; Missero, Caterina

    2016-02-01

    Xenotransplantation is a widely used technique to test the tumorigenic potential of human cells in vivo using immunodeficient mice. Here we describe basic technologies and recent advances in xenotransplantation applied to study squamous cell carcinomas (SCCs) of the skin. SCC cells isolated from tumors can either be cultured to generate a cell line or injected directly into mice. Several immunodeficient mouse models are available for selection based on the experimental design and the type of tumorigenicity assay. Subcutaneous injection is the most widely used technique for xenotransplantation because it involves a simple procedure allowing the use of a large number of cells, although it may not mimic the original tumor environment. SCC cell injections at the epidermal-to-dermal junction or grafting of organotypic cultures containing human stroma have also been used to more closely resemble the tumor environment. Mixing of SCC cells with cancer-associated fibroblasts can allow the study of their interaction and reciprocal influence, which can be followed in real time by intradermal ear injection using conventional fluorescent microscopy. In this article, we will review recent advances in xenotransplantation technologies applied to study behavior of SCC cells and their interaction with the tumor environment in vivo.

  2. Periodic precipitation a microcomputer analysis of transport and reaction processes in diffusion media, with software development

    CERN Document Server

    Henisch, H K

    1991-01-01

    Containing illustrations, worked examples, graphs and tables, this book deals with periodic precipitation (also known as Liesegang Ring formation) in terms of mathematical models and their logical consequences, and is entirely concerned with microcomputer analysis and software development. Three distinctive periodic precipitation mechanisms are included: binary diffusion-reaction; solubility modulation, and competitive particle growth. The book provides didactic illustrations of a valuable investigational procedure, in the form of hypothetical experimentation by microcomputer. The development

  3. Numerical modeling techniques for flood analysis

    Science.gov (United States)

    Anees, Mohd Talha; Abdullah, K.; Nawawi, M. N. M.; Ab Rahman, Nik Norulaini Nik; Piah, Abd. Rahni Mt.; Zakaria, Nor Azazi; Syakir, M. I.; Mohd. Omar, A. K.

    2016-12-01

    Topographic and climatic changes are the main causes of abrupt flooding in tropical areas, and there is a need to find out the exact causes and effects of these changes. Numerical modeling techniques play a vital role in such studies because they use hydrological parameters that are strongly linked with topographic changes. In this review, some of the widely used models utilizing hydrological and river modeling parameters, and their estimation in data-sparse regions, are discussed. Shortcomings of 1D and 2D numerical models and the possible improvements over these models through 3D modeling are also discussed. It is found that the HEC-RAS and FLO-2D models are best in terms of economical and accurate flood analysis for river and floodplain modeling, respectively. Limitations of FLO-2D in floodplain modeling, mainly floodplain elevation differences and vertical roughness within grid cells, were found; these can be improved through a 3D model. A 3D model was therefore found to be more suitable than 1D and 2D models in terms of vertical accuracy in grid cells. It was also found that 3D models for open-channel flows have been developed recently, but not for floodplains. Hence, it is suggested that a 3D model for floodplains should be developed by considering all the hydrological and high-resolution topographic parameters discussed in this review, to clarify the causes and effects of flooding.

  4. A Biomechanical Modeling Guided CBCT Estimation Technique.

    Science.gov (United States)

    Zhang, You; Tehrani, Joubin Nasehi; Wang, Jing

    2017-02-01

    Two-dimensional-to-three-dimensional (2D-3D) deformation has emerged as a new technique to estimate cone-beam computed tomography (CBCT) images. The technique is based on deforming a prior high-quality 3D CT/CBCT image to form a new CBCT image, guided by limited-view 2D projections. The accuracy of this intensity-based technique, however, is often limited in low-contrast image regions with subtle intensity differences. The solved deformation vector fields (DVFs) can also be biomechanically unrealistic. To address these problems, we have developed a biomechanical modeling guided CBCT estimation technique (Bio-CBCT-est) by combining 2D-3D deformation with finite element analysis (FEA)-based biomechanical modeling of anatomical structures. Specifically, Bio-CBCT-est first extracts the 2D-3D deformation-generated displacement vectors at the high-contrast anatomical structure boundaries. The extracted surface deformation fields are subsequently used as the boundary conditions to drive structure-based FEA to correct and fine-tune the overall deformation fields, especially those at low-contrast regions within the structure. The resulting FEA-corrected deformation fields are then fed back into 2D-3D deformation to form an iterative loop, combining the benefits of intensity-based deformation and biomechanical modeling for CBCT estimation. Using eleven lung cancer patient cases, the accuracy of the Bio-CBCT-est technique has been compared to that of the 2D-3D deformation technique and the traditional CBCT reconstruction techniques. The accuracy was evaluated in the image domain, and also in the DVF domain through clinician-tracked lung landmarks.

  5. Modeling Techniques for IN/Internet Interworking

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper focuses on the authors' contributions to ITU-T to develop the network modeling for the support of IN/Internet interworking. Following an introduction to benchmark interworking services, the paper describes the consensus enhanced DFP architecture, which is reached based on IETF reference model and the authors' proposal. Then the proposed information flows for benchmark services are presented with new or updated flows identified. Finally a brief description is given to implementation techniques.

  6. Pattern-Based Medical Diagnosis on a Microcomputer*

    OpenAIRE

    Fisher, Paul R.; Kurlander, David J.

    1980-01-01

    A differential diagnosis microcomputer program has been written that utilizes both pattern recognition and logical analysis in its algorithm. Together with auxiliary routines, the program (called DX) performs medical diagnosis, stores and retrieves patient information, creates new model symptom sets using information from the patient pool, and trains its own data matrices. Designed to be user oriented, DX can communicate the reasoning behind its decisions, thereby complementing the physician...

  7. 275 C Downhole Microcomputer System

    Energy Technology Data Exchange (ETDEWEB)

    Chris Hutchens; Hooi Miin Soo

    2008-08-31

    An HC11 controller IC, along with serial SRAM and ROM support ICs, was developed as a chip set to support data acquisition and control under extreme-temperature/harsh-environment conditions greater than 275 C. The 68HC11 microprocessor is widely used in well logging tools for control, data acquisition, and signal processing applications and was the logical choice for a downhole controller. This extreme-temperature version of the 68HC11 enables new high-temperature designs and additionally allows 68HC11-based well logging tools and MWD tools to be upgraded for high-temperature operation in deep gas reservoirs. The microcomputer chip consists of the microprocessor ALU, a small boot ROM, 4 kbyte data RAM, counter/timer unit, serial peripheral interface (SPI), asynchronous serial interface (SCI), and the A, B, C, and D parallel ports. The chip is code compatible with the single-chip-mode commercial 68HC11 except for the absence of the analog-to-digital converter system. To avoid mask-programmed internal ROM, a boot program is used to load the microcomputer program from an external mask SPI ROM. A SPI RAM IC completes the chip set and allows data RAM to be added in 4 kbyte increments. The HC11 controller IC chip set is implemented in the Peregrine Semiconductor 0.5 micron Silicon-on-Sapphire (SOS) process using a custom high-temperature cell library developed at Oklahoma State University. Yield data are presented for all three chips: the HC11, the SPI RAM, and the ROM. The lessons learned in this project were extended to the successful development of two high-temperature versions of the LEON3 and a companion 8 Kbyte SRAM: a 200 C version for the Navy and a 275 C version for the gas industry.

  8. Mathematics for seismic data processing with basic microcomputer programs

    Energy Technology Data Exchange (ETDEWEB)

    Camina, A.R.; Janacek, G.J.

    1985-01-01

    The new, refined techniques for oil and gas exploration and the growth of computer use have led to a far wider application of mathematics, to the interpretation of seismic data in particular. The authors explain in detail the mathematical principles required for signal processing and clarify for the reader what are sometimes very complex concepts. The book starts at precalculus level and progresses through to advanced theories and principles. A collection of BASIC microcomputer programs is included to give the reader a practical feel for the mathematics and their application to these techniques.
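
    The book's programs are written in BASIC; as an illustration of the kind of signal-processing computation involved, the Python sketch below builds a synthetic trace by convolving a sparse reflectivity series with a Ricker wavelet (the convolutional trace model). All values are illustrative and the example is not taken from the book.

    import numpy as np

    dt = 0.004                                   # sample interval, seconds
    t = np.arange(0, 0.256, dt)

    # Ricker wavelet, a common analytic source pulse.
    f0 = 25.0                                    # dominant frequency, Hz
    tau = t - 0.05
    wavelet = (1 - 2 * (np.pi * f0 * tau) ** 2) * np.exp(-(np.pi * f0 * tau) ** 2)

    # Sparse reflectivity series (reflection coefficients at a few travel times).
    reflectivity = np.zeros_like(t)
    reflectivity[[20, 35, 52]] = [0.3, -0.2, 0.15]

    # Convolutional model of the seismic trace.
    trace = np.convolve(reflectivity, wavelet)[:len(t)]
    print(f"peak trace amplitude: {trace.max():.3f}")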

  9. Aerosol model selection and uncertainty modelling by adaptive MCMC technique

    Directory of Open Access Journals (Sweden)

    M. Laine

    2008-12-01

    Full Text Available We present a new technique for the model selection problem in atmospheric remote sensing. The technique is based on Monte Carlo sampling and it allows model selection, calculation of model posterior probabilities, and model averaging in a Bayesian way.

    The algorithm developed here is called the Adaptive Automatic Reversible Jump Markov chain Monte Carlo method (AARJ). It uses the Markov chain Monte Carlo (MCMC) technique and its extension called Reversible Jump MCMC. Both of these techniques have been used extensively in statistical parameter estimation problems in a wide range of applications since the late 1990s. The novel feature of our algorithm is that it is fully automatic and easy to use.

    We show how the AARJ algorithm can be implemented and used for model selection and averaging, and to directly incorporate the model uncertainty. We demonstrate the technique by applying it to the statistical inversion problem of gas profile retrieval for the GOMOS instrument on board the ENVISAT satellite. Four simple models are used simultaneously to describe the dependence of the aerosol cross-sections on wavelength. During the AARJ estimation all the models are used, and we obtain a probability distribution characterizing how probable each model is. By using model averaging, the uncertainty related to selecting the aerosol model can be taken into account when assessing the uncertainty of the estimates.
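
    The last step above, folding model-selection uncertainty into the error bars via model averaging, is illustrated by the Python sketch below. The posterior model probabilities and per-model estimates are made-up stand-ins for what the AARJ sampler would produce; the sketch is not the AARJ algorithm itself.

    import numpy as np

    p_model  = np.array([0.55, 0.30, 0.10, 0.05])      # posterior model probabilities
    estimate = np.array([1.02, 0.97, 1.10, 0.90])      # per-model parameter estimate
    variance = np.array([0.004, 0.006, 0.010, 0.012])  # per-model posterior variance

    # Model-averaged estimate.
    mean_bma = np.sum(p_model * estimate)

    # Total variance = within-model variance + between-model spread, so
    # model-selection uncertainty is folded into the error bar.
    var_bma = np.sum(p_model * (variance + (estimate - mean_bma) ** 2))

    print(f"model-averaged estimate: {mean_bma:.3f} +/- {np.sqrt(var_bma):.3f}")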

  10. Microcomputer Instruction for the Learning Disabled.

    Science.gov (United States)

    Schiffman, Gilbert; And Others

    1982-01-01

    The advantages of using microcomputers, particularly with learning-disabled (LD) students, are pointed out; and an example of a successful program utilizing computers at the Johns Hopkins University is described. (SW)

  11. Microcomputers: "A New Era at Ramapo Catskill."

    Science.gov (United States)

    Freund, Alfred L.

    1983-01-01

    Discussion of the use of microcomputers in a cooperative public library system notes library management applications in areas of clerical work, word processing, book ordering, inventories, special collection catalogs, mailing lists, and a union list of serials. (EJS)

  12. Field Assessment Techniques for Bank Erosion Modeling

    Science.gov (United States)

    1990-11-22

    Field Assessment Techniques for Bank Erosion Modeling: First Interim Report, prepared for the US Army European Research Office. Includes sedimentation analysis sheets and guidelines for the use of sedimentation analysis sheets in the field, prepared for the US Army Engineer Waterways Experiment Station.

  13. Advanced interaction techniques for medical models

    OpenAIRE

    Monclús, Eva

    2014-01-01

    Advances in Medical Visualization allows the analysis of anatomical structures with the use of 3D models reconstructed from a stack of intensity-based images acquired through different techniques, being Computerized Tomographic (CT) modality one of the most common. A general medical volume graphics application usually includes an exploration task which is sometimes preceded by an analysis process where the anatomical structures of interest are first identified. ...

  14. Simulation of Single-Chip Microcomputer Energy-Saving Scheduling Model Based on Partition-Thinking Classification

    Institute of Scientific and Technical Information of China (English)

    马宏骞

    2015-01-01

    To reduce the total energy consumption of single-chip microcomputer (MCU) process scheduling, an energy-saving scheduling method is proposed that combines a traveling salesman algorithm with a genetic algorithm under partition-thinking classification. Using the partition-based classification, the total scheduling energy is divided into process-switching energy, transition (adjustment) energy, and steady-state running energy, and the scheduling problem is turned into an energy optimization over multiple MCU processes. Treating each steady scheduling mode as a node and each transition mode as a branch, a directed-graph model of the total process-scheduling energy is built, and the process energy optimization is cast as a classical traveling salesman problem. Through stepwise multi-objective improvement of the genetic algorithm combined with the path-optimization principle of the traveling salesman algorithm, the best processing parameters for each scheduling mode and the best production order of the processes are obtained, lowering the total energy consumption of MCU process scheduling. Experimental results indicate that the proposed model improves the efficiency of MCU process scheduling and reduces scheduling energy consumption.
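
    As a rough illustration of searching for a low-energy production order with a genetic algorithm, the Python sketch below evolves an ordering of processes that minimizes the summed transition energies along a directed graph. The transition-energy matrix, population size, and GA operators are all invented for illustration; they are not the paper's parameters.

    import random

    random.seed(1)
    N = 6
    # transition[i][j]: energy cost of switching from process i to process j (made up).
    transition = [[0 if i == j else random.uniform(1.0, 9.0) for j in range(N)]
                  for i in range(N)]

    def order_energy(order):
        """Total transition energy along a scheduling order (open path)."""
        return sum(transition[a][b] for a, b in zip(order, order[1:]))

    def crossover(p1, p2):
        """Order crossover: keep a slice of p1, fill the rest in p2's order."""
        a, b = sorted(random.sample(range(N), 2))
        child = [None] * N
        child[a:b] = p1[a:b]
        rest = [g for g in p2 if g not in child]
        return [g if g is not None else rest.pop(0) for g in child]

    def mutate(order, rate=0.2):
        """Swap two positions with a small probability."""
        if random.random() < rate:
            i, j = random.sample(range(N), 2)
            order[i], order[j] = order[j], order[i]
        return order

    population = [random.sample(range(N), N) for _ in range(30)]
    for _ in range(200):
        population.sort(key=order_energy)
        parents = population[:10]                      # keep the fittest schedules
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(20)]
        population = parents + children

    best = min(population, key=order_energy)
    print("best process order:", best, "energy:", round(order_energy(best), 2))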

  15. Level of detail technique for plant models

    Institute of Scientific and Technical Information of China (English)

    Xiaopeng ZHANG; Qingqiong DENG; Marc JAEGER

    2006-01-01

    Realistic modelling and interactive rendering of forestry and landscape is a challenge in computer graphics and virtual reality. Recent developments in plant growth modelling and simulation lead to plant models faithful to botanical structure and development, representing not only the complex architecture of a real plant but also its functioning in interaction with its environment. The complex geometry and materials of a large group of plants are a big burden even for high-performance computers, and they often overwhelm the numerical calculation power and graphics rendering power. Thus, in order to accelerate the rendering of a group of plants, software techniques are often developed. In this paper, we focus on plant organs, i.e. leaves, flowers, fruits and inter-nodes. Our approach is a simultaneous simplification process for all sparse organs, i.e. Level of Detail (LOD) and multi-resolution models for plants. We explain here the principle and construction of plant simplification. These are used to construct LOD and multi-resolution models of sparse organs and branches of big trees. The approach takes benefit from basic knowledge of plant architecture, clustering tree organs according to biological structures. We illustrate the potential of our approach on several big virtual plants for geometrical compression or LOD model definition. Finally we prove the efficiency of the proposed LOD models for realistic rendering with a virtual scene composed of 184 mature trees.

  16. A general technique to train language models on language models

    NARCIS (Netherlands)

    Nederhof, MJ

    2005-01-01

    We show that under certain conditions, a language model can be trained on the basis of a second language model. The main instance of the technique trains a finite automaton on the basis of a probabilistic context-free grammar, such that the Kullback-Leibler distance between grammar and trained auto

  17. Incorporation of RAM techniques into simulation modeling

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, S.C. Jr.; Haire, M.J.; Schryver, J.C.

    1995-07-01

    This work concludes that reliability, availability, and maintainability (RAM) analytical techniques can be incorporated into computer network simulation modeling to yield an important new analytical tool. This paper describes the incorporation of failure and repair information into network simulation to build a stochastic computer model that represents the RAM performance of two vehicles being developed for the US Army: the Advanced Field Artillery System (AFAS) and the Future Armored Resupply Vehicle (FARV). The AFAS is the US Army's next-generation self-propelled cannon artillery system. The FARV is a resupply vehicle for the AFAS. Both vehicles utilize automation technologies to improve the operational performance of the vehicles and reduce manpower. The network simulation model used in this work is task based. The model programmed in this application represents a typical battle mission and the failures and repairs that occur during that battle. Each task that the FARV performs--upload, travel to the AFAS, refuel, perform tactical/survivability moves, return to logistic resupply, etc.--is modeled. Such a model reproduces operational phenomena (e.g., failures and repairs) that are likely to occur in actual performance. Simulation tasks are modeled as discrete chronological steps; after the completion of each task, decisions are programmed that determine the next path to be followed. The result is a complex logic diagram or network. The network simulation model is developed within a hierarchy of vehicle systems, subsystems, and equipment and includes failure management subnetworks. RAM information and other performance measures are collected which have an impact on design requirements. Design changes are evaluated through 'what if' questions, sensitivity studies, and battle scenario changes.

  18. ABEL model: Evaluates claims of inability to afford penalities and compliance costs (version 2.8) (for microcomputers). Model-Simulation

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The easy-to-use ABEL software evaluates for-profit company claims of inability to afford penalties, clean-up costs, or compliance costs. Violators raise the issue of inability to pay in most of EPA's enforcement actions regardless of whether there is any hard evidence supporting those claims. The program enables Federal, State and local enforcement professionals to quickly determine if there is any validity to those claims. ABEL is a tool that promotes quick settlements by performing screening analyses of defendants and potentially responsible parties (PRPs) to determine their financial capacity. If ABEL indicates the firm can afford the full penalty, compliance or clean-up costs, then EPA makes no adjustments for inability to pay. If it indicates that the firm cannot afford the full amount, it directs the enforcement personnel to review other financial reports before making any adjustments. After analyzing some basic financial ratios that reflect a company's solvency, ABEL assesses the firm's ability to pay by focusing on projected cash flows. The model explicitly calculates the value of projected, internally generated cash flows from historical tax information, and compares these cash flows to the proposed environmental expenditure(s). The software is extremely easy to use. Users are taken through a series of prompts to enter specified data. On-screen 'help' information is available at any time.
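
    EPA does not publish the ABEL formulas in this abstract; as a loose illustration of the kind of screening described (projecting internally generated cash flows and comparing them with a proposed expenditure), the Python sketch below discounts an assumed share of projected cash flows and compares the total with the proposed amount. Every figure, rate, and the available-share assumption is invented and does not reflect EPA methodology.

    # All inputs below are illustrative assumptions, not EPA defaults.
    historical_cash_flows = [180_000, 210_000, 195_000, 225_000, 240_000]  # from tax data
    proposed_expenditure = 650_000       # penalty plus compliance cost, dollars
    growth_rate = 0.03                   # assumed annual growth of cash flow
    discount_rate = 0.10                 # assumed cost of capital
    years = 5                            # projection horizon
    available_share = 0.5                # fraction of cash flow assumed available

    base = sum(historical_cash_flows) / len(historical_cash_flows)

    # Present value of the share of projected cash flows available for the expenditure.
    pv_available = sum(
        available_share * base * (1 + growth_rate) ** t / (1 + discount_rate) ** t
        for t in range(1, years + 1)
    )

    if pv_available >= proposed_expenditure:
        print(f"Screen suggests the firm can afford the full amount "
              f"(available PV {pv_available:,.0f} >= {proposed_expenditure:,.0f}).")
    else:
        print(f"Screen suggests inability to pay the full amount "
              f"(available PV {pv_available:,.0f} < {proposed_expenditure:,.0f}); "
              f"review detailed financial reports before adjusting.")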

  19. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B.; Fagan, J.R. Jr.

    1995-12-31

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbomachinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. This will be accomplished in a cooperative program by Penn State University and the Allison Engine Company. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor.

  20. Geometrical geodesy techniques in Goddard earth models

    Science.gov (United States)

    Lerch, F. J.

    1974-01-01

    The method for combining geometrical data with satellite dynamical and gravimetry data for the solution of geopotential and station location parameters is discussed. Geometrical tracking data (simultaneous events) from the global network of BC-4 stations are currently being processed in a solution that will greatly enhance the geodetic world system of stations. Previously the stations in Goddard earth models have been derived only from dynamical tracking data. A linear regression model is formulated for combining the data, based upon the statistical technique of weighted least squares. Reduced normal equations, independent of satellite and instrumental parameters, are derived for the solution of the geodetic parameters. Exterior standards for the evaluation of the solution and for the scale of the earth's figure are discussed.

  1. [General-purpose microcomputer for medical laboratory instruments].

    Science.gov (United States)

    Vil'ner, G A; Dudareva, I E; Kurochkin, V E; Opalev, A A; Polek, A M

    1984-01-01

    The paper presents a microcomputer based on the KP580 microprocessor set. Debugging of the hardware and software using a dedicated debugging stand developed on the basis of the "Electronica-60" microcomputer is discussed.

  2. Model assisted qualification of NDE techniques

    Science.gov (United States)

    Ballisat, Alexander; Wilcox, Paul; Smith, Robert; Hallam, David

    2017-02-01

    The costly and time-consuming nature of the empirical trials typically performed for NDE technique qualification is a major barrier to the introduction of NDE techniques into service. The use of computational models has been proposed as a method by which the process of qualification can be accelerated. However, given the number of possible parameters present in an inspection, the number of combinations of parameter values scales as a power law, and running simulations at all of these points rapidly becomes infeasible. Given that many NDE inspections result in a single-valued scalar quantity, such as a phase or amplitude, using suitable sampling and interpolation methods significantly reduces the number of simulations that have to be performed. This paper presents initial results of applying Latin Hypercube Designs and Multivariate Adaptive Regression Splines to the inspection of a fastener hole using an oblique ultrasonic shear wave inspection. It is demonstrated that an accurate mapping of the response of the inspection for the variations considered can be achieved by sampling only a small percentage of the parameter space of variations, and that the required percentage decreases as the number of parameters and the number of possible sample points increases. It is then shown how the outcome of this process can be used to assess the reliability of the inspection through commonly used metrics such as probability of detection, thereby providing an alternative methodology to the current practice of performing empirical probability of detection trials.
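
    The Python sketch below illustrates the sampling-and-interpolation idea: draw a Latin hypercube sample of two inspection parameters, evaluate a stand-in "simulation" at those points, fit a cheap surrogate, and estimate a probability of detection. The response function, threshold, and parameter ranges are invented, and a simple least-squares fit stands in for the paper's MARS surrogate and ultrasonic model.

    import numpy as np

    rng = np.random.default_rng(0)

    def latin_hypercube(n, dims):
        """Latin hypercube in [0, 1]^dims: one point per stratum in every dimension."""
        sample = np.empty((n, dims))
        for d in range(dims):
            strata = (np.arange(n) + rng.random(n)) / n
            sample[:, d] = rng.permutation(strata)
        return sample

    # Two inspection parameters, scaled to assumed physical ranges.
    n = 40
    unit = latin_hypercube(n, 2)
    crack_mm  = 0.5 + unit[:, 0] * (3.0 - 0.5)      # crack length, mm
    angle_deg = 40.0 + unit[:, 1] * (50.0 - 40.0)   # shear-wave angle, degrees

    # Stand-in for the full simulation: response grows with crack size and falls
    # off away from a nominal 45-degree angle.
    amplitude = 2.0 * crack_mm - 0.08 * (angle_deg - 45.0) ** 2 + rng.normal(0, 0.1, n)

    # Cheap surrogate (linear in crack length, quadratic in angle) fitted by least squares.
    X = np.column_stack([np.ones(n), crack_mm, angle_deg - 45.0, (angle_deg - 45.0) ** 2])
    coef, *_ = np.linalg.lstsq(X, amplitude, rcond=None)

    # Probability of detection vs. crack length: the fraction of the angle range in
    # which the surrogate's predicted response exceeds an assumed detection threshold.
    threshold = 3.0
    angles = np.linspace(40.0, 50.0, 101)
    for a_mm in (1.0, 1.5, 2.0, 2.5, 3.0):
        Xq = np.column_stack([np.ones_like(angles), np.full_like(angles, a_mm),
                              angles - 45.0, (angles - 45.0) ** 2])
        pod = float(np.mean(Xq @ coef > threshold))
        print(f"crack {a_mm:.1f} mm: POD over the angle range = {pod:.2f}")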

  3. The Surge of Micro-computers

    Science.gov (United States)

    Morita, Written By Ichiko; Matsumura, Translated By Tamiko

    Libraries in the United States are now experiencing another phase of rapid change brought on by the sudden invasion of micro-computers. Their versatility has not only made word processing and spreadsheet capabilities available but also introduced new approaches to large research libraries' information networks. OCLC's microenhancers are one example of what is now available: they can be used for retrospective conversion, provide instruction in the use of large online catalogs, and function as front-end terminals for them. Various new ideas for the application of micro-computers will bring interesting developments in library automation.

  4. Improved modeling techniques for turbomachinery flow fields

    Energy Technology Data Exchange (ETDEWEB)

    Lakshminarayana, B. [Pennsylvania State Univ., University Park, PA (United States); Fagan, J.R. Jr. [Allison Engine Company, Indianapolis, IN (United States)

    1995-10-01

    This program has the objective of developing an improved methodology for modeling turbomachinery flow fields, including the prediction of losses and efficiency. Specifically, the program addresses the treatment of the mixing stress tensor terms attributed to deterministic flow field mechanisms required in steady-state Computational Fluid Dynamic (CFD) models for turbo-machinery flow fields. These mixing stress tensors arise due to spatial and temporal fluctuations (in an absolute frame of reference) caused by rotor-stator interaction due to various blade rows and by blade-to-blade variation of flow properties. These tasks include the acquisition of previously unavailable experimental data in a high-speed turbomachinery environment, the use of advanced techniques to analyze the data, and the development of a methodology to treat the deterministic component of the mixing stress tensor. Penn State will lead the effort to make direct measurements of the momentum and thermal mixing stress tensors in high-speed multistage compressor flow field in the turbomachinery laboratory at Penn State. They will also process the data by both conventional and conditional spectrum analysis to derive momentum and thermal mixing stress tensors due to blade-to-blade periodic and aperiodic components, revolution periodic and aperiodic components arising from various blade rows and non-deterministic (which includes random components) correlations. The modeling results from this program will be publicly available and generally applicable to steady-state Navier-Stokes solvers used for turbomachinery component (compressor or turbine) flow field predictions. These models will lead to improved methodology, including loss and efficiency prediction, for the design of high-efficiency turbomachinery and drastically reduce the time required for the design and development cycle of turbomachinery.

  5. Micro-Computers in Biology Inquiry.

    Science.gov (United States)

    Barnato, Carolyn; Barrett, Kathy

    1981-01-01

    Describes the modification of computer programs (BISON and POLLUT) to accommodate species and areas indigenous to the Pacific Coast area. Suggests that these programs, suitable for PET microcomputers, may foster a long-term, ongoing, inquiry-directed approach in biology. (DS)

  6. Microcomputer Courseware: Characteristics and Design Trends.

    Science.gov (United States)

    Bialo, Ellen R.; Erickson, Lisa B.

    A total of 163 microcomputer programs evaluated by the Educational Products Information Exchange (EPIE) Institute through December 1983 were examined in order to identify strengths and weaknesses in instructional and technical design. Programs were evaluated in a variety of areas including the arts, business education, computer languages, computer…

  7. A microcomputer spreadsheet for aminoglycoside kinetics.

    Science.gov (United States)

    Kiacz, B J

    1990-05-01

    Development of an aminoglycoside monitoring program need not entail large capital expenditures for pharmacokinetic software. Microsoft's Excel spreadsheet was used to develop a single compartment, first-order kinetics template for individualized aminoglycoside dosing. The formulas employed may be adapted to virtually any other microcomputer spreadsheet package to provide accurate professional results.
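
    A short Python sketch of the single-compartment, first-order calculations such a spreadsheet performs is given below: it derives the elimination rate constant, half-life, extrapolated peak/trough, and volume of distribution from a measured peak and trough (a Sawchuk-Zaske-style calculation). The drug levels, times, and dose are invented, and the formulas are the standard textbook ones, not necessarily those of the cited template.

    import math

    # Illustrative measured levels and regimen (not from the article).
    c_peak, t_peak = 7.8, 1.0      # peak level (mg/L) and time after start of infusion (h)
    c_trough, t_trough = 1.1, 7.5  # trough level (mg/L) and its time (h)
    dose_mg, tau_h = 120.0, 8.0    # dose and dosing interval
    t_inf = 0.5                    # infusion duration, h

    # Elimination rate constant and half-life from the two post-distribution levels.
    ke = math.log(c_peak / c_trough) / (t_trough - t_peak)
    t_half = math.log(2) / ke

    # Extrapolate to the end of infusion (Cmax) and to the end of the interval (Cmin).
    c_max = c_peak * math.exp(ke * (t_peak - t_inf))
    c_min = c_max * math.exp(-ke * (tau_h - t_inf))

    # Volume of distribution from the steady-state intermittent-infusion equation.
    vd = (dose_mg / t_inf) * (1 - math.exp(-ke * t_inf)) / (
        ke * (c_max - c_min * math.exp(-ke * t_inf)))

    print(f"ke = {ke:.3f} /h, t1/2 = {t_half:.1f} h, Vd = {vd:.1f} L")
    print(f"extrapolated Cmax = {c_max:.1f} mg/L, Cmin = {c_min:.1f} mg/L")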

  8. Microcomputer Peripheral Service Technician. Teacher Edition.

    Science.gov (United States)

    Brown, A. O., III; Fulkerson, Dan, Ed.

    This manual is the third of a three-text microcomputer service and repair series. This text is designed to assist instructors in teaching service and repair procedures for floppy disk drives, printers, and monitors. The manual contains five units. Each instructional unit includes some or all of these basic components: performance objectives,…

  9. Application of a microcomputer-based system to control and monitor bacterial growth.

    Science.gov (United States)

    Titus, J A; Luli, G W; Dekleva, M L; Strohl, W R

    1984-02-01

    A modular microcomputer-based system was developed to control and monitor various modes of bacterial growth. The control system was composed of an Apple II Plus microcomputer with 64-kilobyte random-access memory; a Cyborg ISAAC model 91A multichannel analog-to-digital and digital-to-analog converter; paired MRR-1 pH, pO(2), and foam control units; and in-house-designed relay, servo control, and turbidimetry systems. To demonstrate the flexibility of the system, we grew bacteria under various computer-controlled and monitored modes of growth, including batch, turbidostat, and chemostat systems. The Apple-ISAAC system was programmed in Labsoft BASIC (extended Applesoft) with an average control program using ca. 6 to 8 kilobytes of memory and up to 30 kilobytes for datum arrays. This modular microcomputer-based control system was easily coupled to laboratory scale fermentors for a variety of fermentations.
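
    As a simplified illustration of the turbidostat mode described above, the Python sketch below shows the shape of the feedback loop: read the turbidity channel, compare it with a setpoint, and drive the dilution-pump relay accordingly. The hardware calls are mocked out and the setpoint and readings are invented; the original control programs were written in Labsoft BASIC on an Apple II with an ISAAC 91A converter.

    import random

    SETPOINT_OD = 0.60      # assumed target culture density (arbitrary OD units)

    def read_turbidity():
        """Stand-in for an analog-to-digital read of the turbidimeter channel."""
        return 0.55 + random.uniform(-0.05, 0.10)

    def set_dilution_pump(on):
        """Stand-in for a digital output driving the dilution-pump relay."""
        print("pump", "ON " if on else "off", end="  ")

    # One pass per sampling interval; a real loop would also log data and hand
    # control to the pH, pO2, and foam controllers.
    for _ in range(8):
        od = read_turbidity()
        set_dilution_pump(od > SETPOINT_OD)   # dilute only while density exceeds setpoint
        print(f"OD = {od:.2f}")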

  10. Compact Models and Measurement Techniques for High-Speed Interconnects

    CERN Document Server

    Sharma, Rohit

    2012-01-01

    Compact Models and Measurement Techniques for High-Speed Interconnects provides detailed analysis of issues related to high-speed interconnects from the perspective of modeling approaches and measurement techniques. Particular focus is laid on the unified approach (variational method combined with the transverse transmission line technique) to develop efficient compact models for planar interconnects. This book will give a qualitative summary of the various reported modeling techniques and approaches and will help researchers and graduate students with deeper insights into interconnect models in particular and interconnect in general. Time domain and frequency domain measurement techniques and simulation methodology are also explained in this book.

  11. Enlist micros: Training science teachers to use microcomputers

    Science.gov (United States)

    Baird, William E.; Ellis, James D.; Kuerbis, Paul J.

    A National Science Foundation grant to the Biological Sciences Curriculum Study (BSCS) at The Colorado College supported the design and production of training materials to encourage literacy of science teachers in the use of microcomputers. ENLIST Micros is based on results of a national needs assessment that identified 22 competencies needed by K-12 science teachers to use microcomputers for instruction. A writing team developed the 16-hour training program in the summer of 1985, and field-test coordinators tested it with 18 preservice or in-service groups during the 1985-86 academic year at 15 sites within the United States. The training materials consist of video programs, interactive computer disks for the Apple II series microcomputer, a training manual for participants, and a guide for the group leader. The experimental materials address major areas of educational computing: awareness, applications, implementation, evaluation, and resources. Each chapter contains activities developed for this program, such as viewing video segments of science teachers who are using computers effectively and running commercial science and training courseware. Role playing and small-group interaction help the teachers overcome their reluctance to use computers and plan for effective implementation of microcomputers in the school. This study examines the implementation of educational computing among 47 science teachers who completed the ENLIST Micros training at a southern university. We present results of formative evaluation for that site. Results indicate that both elementary and secondary teachers benefit from the training program and demonstrate gains in attitudes toward computer use. Participating teachers said that the program met its stated objectives and helped them obtain needed skills. Only 33 percent of these teachers, however, reported using computers one year after the training. In June 1986, the BSCS initiated a follow-up to the ENLIST Micros curriculum to

  12. Microcomputer for controlled substance record keeping.

    Science.gov (United States)

    Stein, R L; Motta, L J; Yee, A D

    1984-01-01

    The use of a microcomputer to maintain controlled substance inventory and record keeping is described. The system maintains perpetual inventories of the central narcotic vault and proof-of-use controlled drug records outstanding at nursing stations. The computerized system has eliminated (1) the delay previously encountered in the posting of transactions from the numeric log to perpetual inventory logs and (2) the potential addition and subtraction errors inherent in a manual system. Computerizing the controlled drug record-keeping system has saved approximately 166 minutes of labor per day, a cost savings of approximately $26. The new system also helps prevent diversion of controlled substances. The computer may also be used for other tasks while not running the controlled substance program. A microcomputer is well suited to the task of controlled-substance record-keeping functions, and the cost of the system (less than $4000) can be quickly recouped in labor savings.

  13. Three-dimensional analysis of the pulp cavity on surface models of molar teeth, using X-ray micro-computed tomography

    DEFF Research Database (Denmark)

    Markvart, Merete; Bjørndal, Lars; Darvann, Tron Andre

    2012-01-01

    In summary, three-dimensional surface models were made with high precision; an increased accumulation of mineral deposits was noted in molars with small pulp chambers and, combined with the consistent pattern of intra-radicular connections, the potential endodontic treatment complexity is underlined...

  14. Formal modelling techniques in human-computer interaction

    NARCIS (Netherlands)

    Haan, de G.; Veer, van der G.C.; Vliet, van J.C.

    1991-01-01

    This paper is a theoretical contribution, elaborating the concept of models as used in Cognitive Ergonomics. A number of formal modelling techniques in human-computer interaction will be reviewed and discussed. The analysis focusses on different related concepts of formal modelling techniques in human-computer interaction.

  15. Challenge in Numerical Software for Microcomputers

    Energy Technology Data Exchange (ETDEWEB)

    Cody, W J

    1977-09-02

    Microcomputers are now capable of serious numerical computation using programmed floating-point arithmetic and BASIC compilers. Unless numerical software designers for these machines exploit experience gained in providing software for larger machines, history will repeat with the initial spread of treacherous software. This paper discusses good software, especially for the elementary functions, in terms of reliability and robustness. The emphasis is on insight rather than detailed algorithms, to show why certain things are important and how they may be achieved.

  16. Quantitative model validation techniques: new insights

    CERN Document Server

    Ling, You

    2012-01-01

    This paper develops new insights into quantitative methods for the validation of computational model prediction. Four types of methods are investigated, namely classical and Bayesian hypothesis testing, a reliability-based method, and an area metric-based method. Traditional Bayesian hypothesis testing is extended based on interval hypotheses on distribution parameters and equality hypotheses on probability distributions, in order to validate models with deterministic/stochastic output for given inputs. Two types of validation experiments are considered - fully characterized (all the model/experimental inputs are measured and reported as point values) and partially characterized (some of the model/experimental inputs are not measured or are reported as intervals). Bayesian hypothesis testing can minimize the risk in model selection by properly choosing the model acceptance threshold, and its results can be used in model averaging to avoid Type I/II errors. It is shown that Bayesian interval hypothesis testing...
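
    One of the four method families mentioned, the area metric, measures the area between the model output distribution and the empirical distribution of the experimental data. A minimal Python sketch is given below; the normal distributions used to generate the "model" and "experimental" samples are assumptions for illustration only.

        import numpy as np

        def ecdf(samples, x):
            """Empirical CDF of `samples` evaluated at points `x`."""
            samples = np.sort(samples)
            return np.searchsorted(samples, x, side="right") / samples.size

        def area_metric(model_samples, data_samples):
            """Area between the model output CDF and the experimental ECDF."""
            pooled = np.sort(np.concatenate([model_samples, data_samples]))
            x_left = pooled[:-1]                 # both ECDFs are constant between pooled points
            widths = np.diff(pooled)
            gap = np.abs(ecdf(model_samples, x_left) - ecdf(data_samples, x_left))
            return np.sum(gap * widths)

        rng = np.random.default_rng(0)
        model = rng.normal(10.0, 1.0, 2000)      # stochastic model predictions
        data = rng.normal(10.5, 1.2, 30)         # sparse experimental observations
        print(f"area metric = {area_metric(model, data):.3f}")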

  17. Techniques and Simulation Models in Risk Management

    OpenAIRE

    Mirela GHEORGHE

    2012-01-01

    In the present paper, the scientific approach of the research starts from the theoretical framework of the simulation concept and then continues in the setting of the practical reality, thus providing simulation models for a broad range of inherent risks specific to any organization and simulation of those models, using the informatics instrument @Risk (Palisade). The reason behind this research lies in the need for simulation models that will allow the person in charge with decision taking i...
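
    A spreadsheet add-in such as @Risk essentially runs Monte Carlo simulation over a risk register. The Python sketch below reproduces that idea with a hypothetical register (the occurrence probabilities and lognormal impact parameters are invented for illustration) and reports the expected annual loss and a percentile of the simulated loss distribution.

        import numpy as np

        rng = np.random.default_rng(42)

        # Hypothetical risk register: occurrence probability and lognormal impact parameters
        risks = [
            {"name": "supplier failure", "p": 0.10, "mu": np.log(50_000), "sigma": 0.5},
            {"name": "IT outage",        "p": 0.25, "mu": np.log(10_000), "sigma": 0.8},
            {"name": "regulatory fine",  "p": 0.05, "mu": np.log(200_000), "sigma": 0.4},
        ]
        budget_trials = 100_000

        def simulate_total_loss(n_trials=budget_trials):
            total = np.zeros(n_trials)
            for r in risks:
                occurs = rng.random(n_trials) < r["p"]                 # does the risk materialise?
                impact = rng.lognormal(r["mu"], r["sigma"], n_trials)  # loss if it does
                total += occurs * impact
            return total

        losses = simulate_total_loss()
        print(f"expected annual loss: {losses.mean():,.0f}")
        print(f"95th percentile loss: {np.percentile(losses, 95):,.0f}")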

  18. A new microcomputer-based ECG analysis system.

    Science.gov (United States)

    Kyle, M C; Klingeman, J D; Conrad, J D; Freis, E D; Pipberger, H V

    1983-09-01

    A new automated ECG system using advances in microprocessor technology and computerized electrocardiography is described. This microcomputer-based system is self-contained and mobile. It acquires both the 12-lead and orthogonal lead (Frank) electrocardiograms and analyzes the latter within minutes. Software includes the program developed in the Veterans Administration which uses advanced statistical classification techniques and a large well-documented patient data base. Diagnostic probabilities are computed using a Bayesian approach. Diagnostic performance has been tested using independent clinical criteria and found to be quite accurate. This system enables the clinician to immediately review the computer's identifications, measurements, and diagnostic classifications and quickly use these results in clinical decision making. Serial comparisons are readily made since all previous recordings are stored on floppy diskettes. The use of microprocessors in this system makes it economically feasible for practicing physicians.

  19. Development of a microcomputer-based magnetic heading sensor

    Science.gov (United States)

    Garner, H. D.

    1987-01-01

    This paper explores the development of a flux-gate magnetic heading reference using a single-chip microcomputer to process heading information and to present it to the pilot in appropriate form. This instrument is intended to replace the conventional combination of mechanical compass and directional gyroscope currently in use in general aviation aircraft, at appreciable savings in cost and reduction in maintenance. Design of the sensing element, the signal processing electronics, and the computer algorithms which calculate the magnetic heading of the aircraft from the magnetometer data have been integrated in such a way as to minimize hardware requirements and simplify calibration procedures. Damping and deviation errors are avoided by the inherent design of the device, and a technique for compensating for northerly-turning-error is described.
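
    The core computation of such an instrument is the conversion of the two horizontal flux-gate components into a heading angle. A minimal Python sketch is shown below; the axis orientation and sign convention are assumptions, and tilt compensation and deviation correction, which the actual instrument design addresses, are omitted.

        import math

        def magnetic_heading(bx, by, variation_deg=0.0):
            """Heading in degrees (0-360) from two horizontal flux-gate components.

            bx is assumed aligned with the aircraft's longitudinal axis and by with
            the lateral axis; the sensor is assumed level (no tilt compensation shown).
            """
            heading = math.degrees(math.atan2(-by, bx))   # sign convention is an assumption
            heading += variation_deg                      # optional magnetic -> true correction
            return heading % 360.0

        print(magnetic_heading(0.20, -0.20))   # roughly north-east under this convention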

  20. A TECHNIQUE OF DIGITAL SURFACE MODEL GENERATION

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    It is usually a time-consuming process to set up in real time a 3D digital surface model (DSM) of an object with a complex surface. On the basis of the architectural survey project "Chilin Nunnery Reconstruction", this paper investigates an easy and feasible way, that is, applying digital close-range photogrammetry and CAD techniques on the project site to establish the DSM for simulating ancient architectures with complex surfaces. The method has been proved very effective in practice.

  1. Moving objects management models, techniques and applications

    CERN Document Server

    Meng, Xiaofeng; Xu, Jiajie

    2014-01-01

    This book describes the topics of moving objects modeling and location tracking, indexing and querying, clustering, location uncertainty, traffic aware navigation and privacy issues as well as the application to intelligent transportation systems.

  2. Microcomputer-based digital image processing - A tutorial package for exploration geologists

    Science.gov (United States)

    Harrington, J. A., Jr.; Cartin, K. F.

    1985-01-01

    An Apple II microcomputer-based software package for analysis of digital data developed at the University of Oklahoma, the Digital Image Analysis System (DIAS), provides a relatively low-cost, portable alternative to large, dedicated minicomputers for digital image processing education. Digital processing techniques for analysis of Landsat MSS data and a series of tutorial exercises for exploration geologists are described and evaluated. DIAS allows in-house training that does not interfere with computer-based prospect analysis objectives.

  3. Microwave Diffraction Techniques from Macroscopic Crystal Models

    Science.gov (United States)

    Murray, William Henry

    1974-01-01

    Discusses the construction of a diffractometer table and four microwave models which are built of styrofoam balls with implanted metallic reflecting spheres and designed to simulate the structures of carbon (graphite structure), sodium chloride, tin oxide, and palladium oxide. Included are samples of Bragg patterns and computer-analysis results.…

  4. Validation technique using mean and variance of kriging model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ho Sung; Jung, Jae Jun; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)

    2007-07-01

    Rigorously validating the accuracy of a metamodel is an important research area in metamodel techniques. A leave-k-out cross-validation technique not only incurs considerable computational cost but also cannot quantitatively measure the fidelity of the metamodel. Recently, the average validation technique has been proposed. However, the average validation criterion may stop a sampling process prematurely even if the kriging model is still inaccurate. In this research, we propose a new validation technique using the average and the variance of the response during a sequential sampling method, such as maximum entropy sampling. The proposed validation technique is more efficient and accurate than the cross-validation technique because it explicitly integrates the kriging model to obtain an accurate average and variance, rather than relying on numerical integration. The proposed validation technique shows a similar trend to the root mean squared error, so it can be used as a stop criterion for sequential sampling.
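
    One possible reading of the proposed criterion is sketched below in Python using a Gaussian-process (kriging) surrogate: after each maximum-uncertainty sample is added, the domain-averaged prediction and predictive variance are computed directly from the model, and sampling stops once both stabilize. The test function, kernel and thresholds are assumptions for illustration, not the authors' settings.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def true_response(x):                         # stand-in for an expensive simulation (assumed)
            return np.sin(3 * x) + 0.5 * x

        x_grid = np.linspace(0, 2, 200).reshape(-1, 1)
        X = np.array([[0.1], [1.0], [1.9]])           # initial samples
        prev_mean = None

        for it in range(10):
            gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
            gp.fit(X, true_response(X).ravel())
            mu, std = gp.predict(x_grid, return_std=True)

            avg_response = mu.mean()                  # average of the kriging prediction over the domain
            avg_variance = (std ** 2).mean()          # averaged predictive variance

            if prev_mean is not None and abs(avg_response - prev_mean) < 1e-3 and avg_variance < 1e-3:
                print(f"stopping after {it} refinements")
                break
            prev_mean = avg_response

            x_new = x_grid[np.argmax(std)]            # add the point with the largest predictive uncertainty
            X = np.vstack([X, x_new.reshape(1, -1)])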

  5. Comparative Analysis of Vehicle Make and Model Recognition Techniques

    Directory of Open Access Journals (Sweden)

    Faiza Ayub Syed

    2014-03-01

    Full Text Available Vehicle Make and Model Recognition (VMMR) has emerged as a significant element of vision-based systems because of its application in access control systems, traffic control and monitoring systems, security systems, surveillance systems, etc. So far a number of techniques have been developed for vehicle recognition, each following a different methodology and classification approach. The evaluation results highlight the recognition technique with the highest accuracy level. In this paper we describe the working of various vehicle make and model recognition techniques and compare these techniques on the basis of methodology, principles, classification approach, classifier and level of recognition. After comparing these factors we concluded that Locally Normalized Harris Corner Strengths (LHNS) performs best as compared to other techniques. LHNS uses Bayes and K-NN classification approaches for vehicle classification. It extracts information from the frontal view of vehicles for vehicle make and model recognition.

  6. Metamaterials modelling, fabrication and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited but epsilon-near-zero and sub-unitary refraction index are also...... parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of metamaterials research field that we deal with at our department. From the modelling part, various approaches for determining the value of the refractive index...

  7. Metamaterials modelling, fabrication, and characterisation techniques

    DEFF Research Database (Denmark)

    Malureanu, Radu; Zalkovskij, Maksim; Andryieuski, Andrei

    2012-01-01

    Metamaterials are artificially designed media that show averaged properties not yet encountered in nature. Among such properties, the possibility of obtaining optical magnetism and negative refraction are the ones mainly exploited but epsilon-near-zero and sub-unitary refraction index are also...... parameters that can be obtained. Such behaviour enables unprecedented applications. Within this work, we will present various aspects of metamaterials research field that we deal with at our department. From the modelling part, we will present tour approach for determining the field enhancement in slits...

  8. Applications of Local Area Networks of Microcomputers in Libraries.

    Science.gov (United States)

    Levert, Virginia M.

    1985-01-01

    Important features of local area networks (LAN) are reviewed, and several microcomputer LANs are described (ARCnet, Hinet, ShareNet, Ethernet, Omninet, PLAN 4000). Results of survey of 10 libraries using or planning to use a microcomputer LAN and considerations in choosing a LAN are reported. Forty-one references are cited. (EJS)

  9. Profile of Change in Education: A High School Uses Microcomputers.

    Science.gov (United States)

    Grossnickle, Donald R.; Laird, Bruce A.

    1981-01-01

    This report on the introduction of microcomputers into a secondary school in a Chicago suburb briefly describes the setting for change and discusses the history of computers in the school, the emergence of change agents, microcomputer hardware selection, inservice teacher training, and the current status of the project. (MER)

  10. Microcomputers: Software Evaluation. Evaluation Guides. Guide Number 17.

    Science.gov (United States)

    Gray, Peter J.

    This guide discusses three critical steps in selecting microcomputer software and hardware: setting the context, software evaluation, and managing microcomputer use. Specific topics addressed include: (1) conducting an informal task analysis to determine how the potential user's time is spent; (2) identifying tasks amenable to computerization and…

  11. Microcomputers: Communication Software. Evaluation Guides. Guide Number 13.

    Science.gov (United States)

    Gray, Peter J.

    This guide discusses four types of microcomputer-based communication programs that could prove useful to evaluators: (1) the direct communication of information generated by one computer to another computer; (2) using the microcomputer as a terminal to a mainframe computer to input, direct the analysis of, and/or output data using a statistical…

  12. Handbook and Annotated Software Bibliography. Microcomputers in ABE.

    Science.gov (United States)

    Holter, Mary Patricia; Johnson, Carmen

    This handbook and annotated bibliography presents discussions, ideas, and resources useful to adult basic education (ABE) program teachers and administrators in implementing educational microcomputing, and describes microcomputer software programs that have been used successfully in ABE. The first part of the book, the handbook, is organized in…

  13. Playing the American Revolution: A Historical Microcomputer Simulation.

    Science.gov (United States)

    Sargent, James E.; Hueston, Stephen P.

    1988-01-01

    Explanation of the purpose of microcomputer simulations in history classes at the college or secondary school level focuses on the development of one simulation, The American Revolution. Highlights include effects on learning processes of using simulations in a microcomputer laboratory; student evaluations of the simulation; and creation of the…

  14. Use of Microcomputers for School Hearing Screening and Evaluation Records.

    Science.gov (United States)

    Jackson, Coleen O'Rourke

    A pilot project evaluated the use of a microcomputer database system to maintain hearing screening, evaluation, and followup records in a school for physically, emotionally, or educationally handicapped children (6 months-18 years). Using a universal database management system for a microcomputer, a program was designed which would allow for easy…

  15. Model order reduction techniques with applications in finite element analysis

    CERN Document Server

    Qu, Zu-Qing

    2004-01-01

    Despite the continued rapid advance in computing speed and memory the increase in the complexity of models used by engineers persists in outpacing them. Even where there is access to the latest hardware, simulations are often extremely computationally intensive and time-consuming when full-blown models are under consideration. The need to reduce the computational cost involved when dealing with high-order/many-degree-of-freedom models can be offset by adroit computation. In this light, model-reduction methods have become a major goal of simulation and modeling research. Model reduction can also ameliorate problems in the correlation of widely used finite-element analyses and test analysis models produced by excessive system complexity. Model Order Reduction Techniques explains and compares such methods focusing mainly on recent work in dynamic condensation techniques: - Compares the effectiveness of static, exact, dynamic, SEREP and iterative-dynamic condensation techniques in producing valid reduced-order mo...
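
    As a minimal example of the condensation methods the book compares, the Python sketch below performs classic Guyan (static) condensation of a small spring-mass system onto a chosen set of master degrees of freedom; the 3-DOF chain is an invented test case.

        import numpy as np

        def guyan_condensation(K, M, master):
            """Guyan (static) condensation of stiffness K and mass M onto the master DOFs."""
            n = K.shape[0]
            master = np.asarray(master)
            slave = np.setdiff1d(np.arange(n), master)

            Kss_inv_Ksm = np.linalg.solve(K[np.ix_(slave, slave)], K[np.ix_(slave, master)])
            T = np.zeros((n, master.size))
            T[master, np.arange(master.size)] = 1.0          # identity on the master set
            T[np.ix_(slave, np.arange(master.size))] = -Kss_inv_Ksm

            return T.T @ K @ T, T.T @ M @ T                  # reduced stiffness and mass

        # 3-DOF spring-mass chain, keep DOFs 0 and 2
        K = np.array([[ 2., -1.,  0.],
                      [-1.,  2., -1.],
                      [ 0., -1.,  1.]])
        M = np.eye(3)
        Kr, Mr = guyan_condensation(K, M, master=[0, 2])
        print(Kr)
        print(Mr)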

  16. Evolution of microcomputer-based medical instrumentation.

    Science.gov (United States)

    Tompkins, Willis J

    2009-01-01

    This paper provides a historical review of the evolution of the technologies that led to modern microcomputer-based medical instrumentation. I review the history of the microprocessor-based system because of the importance of the microprocessor in the design of modern medical instruments. I then give some examples of medical instruments in which the microprocessor has played a key role and in some cases has even empowered us to develop new instruments that were not possible before. I include a discussion of the role of the microprocessor-based personal computer in development of medical instruments.

  17. Models and Techniques for Proving Data Structure Lower Bounds

    DEFF Research Database (Denmark)

    Larsen, Kasper Green

    In this dissertation, we present a number of new techniques and tools for proving lower bounds on the operational time of data structures. These techniques provide new lines of attack for proving lower bounds in the cell probe model, the group model, the pointer machine model and the I/O-model. In all cases, we push the frontiers further by proving lower bounds higher than what could possibly be proved using previously known techniques. For the cell probe model, our results have the following consequences: the first Ω(lg n) query time lower bound for linear space static data structures… We also present a new technique for proving lower bounds for range reporting problems in the pointer machine and the I/O-model. With this technique, we tighten the gap between the known upper bound and lower bound for the most fundamental range reporting problem, orthogonal range reporting.

  18. Symmetry and partial order reduction techniques in model checking Rebeca

    NARCIS (Netherlands)

    Jaghouri, M.M.; Sirjani, M.; Mousavi, M.R.; Movaghar, A.

    2007-01-01

    Rebeca is an actor-based language with formal semantics that can be used in modeling concurrent and distributed software and protocols. In this paper, we study the application of partial order and symmetry reduction techniques to model checking dynamic Rebeca models. Finding symmetry based equivalen

  19. Prediction of survival with alternative modeling techniques using pseudo values

    NARCIS (Netherlands)

    T. van der Ploeg (Tjeerd); F.R. Datema (Frank); R.J. Baatenburg de Jong (Robert Jan); E.W. Steyerberg (Ewout)

    2014-01-01

    textabstractBackground: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo

  1. Information survey for microcomputer systems integration

    Energy Technology Data Exchange (ETDEWEB)

    Hake, K.A.

    1991-12-01

    One goal of the PM-AIM is to provide US Army Project Managers (PMs) and Project Executive Officers (PEOs) with a fundamental microcomputing resource to help perform acquisition information management and its concomitant reporting requirements. Providing key application software represents one means of accomplishing this goal. This workstation would furnish a broad range of capabilities needed in the PM and PEO office settings as well as software tools for specific project management and acquisition information. Although still in the conceptual phase, the practical result of this exercise in systems integration will likely be a system called the Project Manager's Information System (PMIS) or the AIM workstation. It would include such software as, Project Manager's System Software (PMSS), Defense Acquisition Executive Summary (DAES), and Consolidated Acquisition Reporting System (CARS) and would conform to open systems architecture as accepted by the Department of Defense. ORNL has assisted PM-AIM in the development of technology ideas for the PMIS workstation concept. This paper represents the compilation of information gained during this process. This information is presented as a body of knowledge (or knowledge domain) defining the complex technology of microcomputing. The concept of systems integration or tying together all hardware and software components reflects the nature of PM-AIM's task in attempting to field a PMIS or AIM workstation.

  2. Use of surgical techniques in the rat pancreas transplantation model

    National Research Council Canada - National Science Library

    Ma, Yi; Guo, Zhi-Yong

    2008-01-01

    ... (also called type 1 diabetes). With the improvement of microsurgical techniques, pancreas transplantation in rats has been the major model for physiological and immunological experimental studies in the past 20 years...

  3. Virtual 3d City Modeling: Techniques and Applications

    Science.gov (United States)

    Singh, S. P.; Jain, K.; Mandla, V. R.

    2013-08-01

    A 3D city model is a digital representation of the Earth's surface and its related objects, such as buildings, trees, vegetation, and man-made features belonging to an urban area. Various terms are used for 3D city models, such as "Cybertown", "Cybercity", "Virtual City", or "Digital City". A 3D city model is basically a computerized or digital model of a city containing the graphic representation of buildings and other objects in 2.5D or 3D. Generally, three main Geomatics approaches are used for generating virtual 3D city models: in the first approach, researchers use conventional techniques such as vector map data, DEMs and aerial images; the second approach is based on high-resolution satellite images with laser scanning; in the third method, many researchers use terrestrial images with close-range photogrammetry, DSM and texture mapping. This paper starts with an introduction to the various Geomatics techniques for 3D city modeling. These techniques are divided into two main categories: one based on the degree of automation (automatic, semi-automatic and manual methods), and another based on data-input techniques (photogrammetry and laser techniques). The paper then draws conclusions from this study, gives a short view of justification and analysis, and notes present trends in 3D city modeling. It gives an overview of the techniques for generating virtual 3D city models using Geomatics techniques and of the applications of virtual 3D city models. Photogrammetry (close-range, aerial, satellite), lasergrammetry, GPS, or a combination of these modern Geomatics techniques plays a major role in creating a virtual 3D city model. Each technique and method has some advantages and some drawbacks. Point cloud models are a modern trend for virtual 3D city models. Photo-realistic, scalable, geo-referenced virtual 3

  4. An Analysis of the Theoretical Foundations for the Use of Microcomputers in Early Childhood Education.

    Science.gov (United States)

    Streibel, Michael J.

    This discussion first analyzes major claims of a cognitive-developmentalist perspective on the use of microcomputers in early childhood education. Five topics are specifically addressed: microcomputers and intellectual structures, microcomputers as cultural events, microcomputers and strategy repertoires, instruction and the self-construction of…

  5. Prediction of survival with alternative modeling techniques using pseudo values.

    Directory of Open Access Journals (Sweden)

    Tjeerd van der Ploeg

    Full Text Available BACKGROUND: The use of alternative modeling techniques for predicting patient survival is complicated by the fact that some alternative techniques cannot readily deal with censoring, which is essential for analyzing survival data. In the current study, we aimed to demonstrate that pseudo values enable statistically appropriate analyses of survival outcomes when used in seven alternative modeling techniques. METHODS: In this case study, we analyzed survival of 1282 Dutch patients with newly diagnosed Head and Neck Squamous Cell Carcinoma (HNSCC) with conventional Kaplan-Meier and Cox regression analysis. We subsequently calculated pseudo values to reflect the individual survival patterns. We used these pseudo values to compare recursive partitioning (RPART), neural nets (NNET), logistic regression (LR), general linear models (GLM) and three variants of support vector machines (SVM) with respect to dichotomous 60-month survival, and continuous pseudo values at 60 months or estimated survival time. We used the area under the ROC curve (AUC) and the root of the mean squared error (RMSE) to compare the performance of these models using bootstrap validation. RESULTS: Of a total of 1282 patients, 986 patients died during a median follow-up of 66 months (60-month survival: 52% [95% CI: 50%-55%]). The LR model had the highest optimism-corrected AUC (0.791) to predict 60-month survival, followed by the SVM model with a linear kernel (AUC 0.787). The GLM model had the smallest optimism-corrected RMSE when continuous pseudo values were considered for 60-month survival or the estimated survival time, followed by SVM models with a linear kernel. The estimated importance of predictors varied substantially by the specific aspect of survival studied and modeling technique used. CONCLUSIONS: The use of pseudo values makes it readily possible to apply alternative modeling techniques to survival problems, to compare their performance and to search further for promising
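
    The pseudo values referred to are jackknife pseudo-observations of the survival function: for subject i, n·Ŝ(t) − (n−1)·Ŝ₋ᵢ(t), where Ŝ is the Kaplan-Meier estimate computed with and without that subject. A small self-contained Python sketch is given below with synthetic follow-up data (the HNSCC data are of course not reproduced here); the resulting pseudo values can then be fed to any regression or machine-learning technique as ordinary outcomes.

        import numpy as np

        def km_survival(time, event, t):
            """Kaplan-Meier survival estimate at time t."""
            s = 1.0
            for ti in np.sort(np.unique(time[event == 1])):
                if ti > t:
                    break
                at_risk = np.sum(time >= ti)
                deaths = np.sum((time == ti) & (event == 1))
                s *= 1.0 - deaths / at_risk
            return s

        def pseudo_values(time, event, t):
            """Jackknife pseudo-observations of S(t), one per subject."""
            n = len(time)
            s_full = km_survival(time, event, t)
            pv = np.empty(n)
            for i in range(n):
                mask = np.arange(n) != i
                pv[i] = n * s_full - (n - 1) * km_survival(time[mask], event[mask], t)
            return pv

        rng = np.random.default_rng(1)
        time = rng.exponential(60, 50)                  # synthetic follow-up times (months)
        event = (rng.random(50) < 0.7).astype(int)      # 1 = died, 0 = censored
        pv = pseudo_values(time, event, t=60)
        print(pv[:5])                                    # these become the regression targets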

  6. Circuit oriented electromagnetic modeling using the PEEC techniques

    CERN Document Server

    Ruehli, Albert; Jiang, Lijun

    2017-01-01

    This book provides intuitive solutions to electromagnetic problems by using the Partial Element Equivalent Circuit (PEEC) method. The book begins with an introduction to circuit analysis techniques, laws, and frequency and time domain analyses. The authors also treat Maxwell's equations, capacitance computations, and inductance computations through the lens of the PEEC method. Next, readers learn to build PEEC models in various forms: equivalent circuit models, non-orthogonal PEEC models, skin-effect models, PEEC models for dielectrics, incident and radiated field models, and scattering PEEC models. The book concludes by considering issues such as stability and passivity, and includes five appendices, some with formulas for partial elements.

  7. Using data mining techniques for building fusion models

    Science.gov (United States)

    Zhang, Zhongfei; Salerno, John J.; Regan, Maureen A.; Cutler, Debra A.

    2003-03-01

    Over the past decade many techniques have been developed which attempt to predict possible events through the use of given models or patterns of activity. These techniques work quite well provided that one has a model or a valid representation of activity. In reality, however, this is usually not the case. Models that do exist were in many cases hand-crafted, required many man-hours to develop, and are very brittle in the dynamic world in which we live. Data mining techniques have shown some promise in providing a set of solutions. In this paper we provide the details of our motivation, theory and the techniques we have developed, as well as the results of a set of experiments.

  8. On a Graphical Technique for Evaluating Some Rational Expectations Models

    DEFF Research Database (Denmark)

    Johansen, Søren; Swensen, Anders R.

    2011-01-01

    In addition to getting a visual impression of the fit of the model, the purpose is to see if the two spreads are nevertheless similar as measured by correlation, variance ratio, and noise ratio. We extend these techniques to a number of rational expectation models and give a general definition of spread...

  9. Matrix eigenvalue model: Feynman graph technique for all genera

    Energy Technology Data Exchange (ETDEWEB)

    Chekhov, Leonid [Steklov Mathematical Institute, ITEP and Laboratoire Poncelet, Moscow (Russian Federation); Eynard, Bertrand [SPhT, CEA, Saclay (France)

    2006-12-15

    We present the diagrammatic technique for calculating the free energy of the matrix eigenvalue model (the model with arbitrary power β of the Vandermonde determinant) to all orders of the 1/N expansion in the case where the limiting eigenvalue distribution spans an arbitrary (but fixed) number of disjoint intervals (curves)

  10. Use of single chip microcomputer in hydraulic digital adaptive control system

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Presents a one-grade adaptive controller with one reference model, built according to δ-MRACS adaptive control theory and used to control an actual high-order hydraulic system. Describes the whole hardware system used, which includes an AT89C51 single-chip microcomputer, a 74LS373 flip-latch, 6116 memory, an eight-bit ADC0809, and so on, and the satisfactory results obtained in the study of the hydraulic control system.
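
    The δ-MRACS scheme itself is not reproduced here, but the Python sketch below illustrates the general model-reference adaptive control idea with a textbook-style MIT rule on a first-order plant; the plant parameters, adaptation gain and reference signal are assumptions for illustration only.

        # Plant (parameters unknown to the controller): dy/dt = -a*y + b*u
        a, b = 1.0, 0.5
        # Reference model: dym/dt = -am*ym + bm*r
        am, bm = 2.0, 2.0
        gamma, dt = 1.0, 0.01

        y = ym = rf = yf = 0.0
        theta1 = theta2 = 0.0
        for k in range(20000):                            # 200 s of simulated time
            r = 1.0 if (k * dt) % 20 < 10 else -1.0       # square-wave reference
            u = theta1 * r - theta2 * y                   # adjustable control law
            e = y - ym                                    # model-following error

            # MIT-rule adaptation with first-order sensitivity filters am/(p + am)
            rf += am * (r - rf) * dt
            yf += am * (y - yf) * dt
            theta1 += -gamma * e * rf * dt
            theta2 += gamma * e * yf * dt

            y += (-a * y + b * u) * dt                    # plant update (explicit Euler)
            ym += (-am * ym + bm * r) * dt                # reference model update

        print(f"theta1 = {theta1:.2f} (ideal {bm / b:.2f}), theta2 = {theta2:.2f} (ideal {(am - a) / b:.2f})")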

  11. Manifold learning techniques and model reduction applied to dissipative PDEs

    CERN Document Server

    Sonday, Benjamin E; Gear, C William; Kevrekidis, Ioannis G

    2010-01-01

    We link nonlinear manifold learning techniques for data analysis/compression with model reduction techniques for evolution equations with time scale separation. In particular, we demonstrate a "nonlinear extension" of the POD-Galerkin approach to obtaining reduced dynamic models of dissipative evolution equations. The approach is illustrated through a reaction-diffusion PDE, and the performance of different simulators on the full and the reduced models is compared. We also discuss the relation of this nonlinear extension with the so-called "nonlinear Galerkin" methods developed in the context of Approximate Inertial Manifolds.
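
    The linear counterpart of this nonlinear extension, POD of a snapshot matrix via the SVD, is easy to sketch and shows what the reduced basis looks like; the travelling-pulse snapshot data in the Python sketch below are invented for illustration.

        import numpy as np

        # Snapshot matrix: each column is the PDE state at one time instant
        x = np.linspace(0, 1, 200)
        t = np.linspace(0, 1, 80)
        snapshots = np.array([np.exp(-50 * (x - 0.3 - 0.4 * ti) ** 2) for ti in t]).T

        mean_field = snapshots.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(snapshots - mean_field, full_matrices=False)

        energy = np.cumsum(s ** 2) / np.sum(s ** 2)
        r = int(np.searchsorted(energy, 0.99)) + 1        # modes needed for 99% of the energy
        basis = U[:, :r]                                  # POD basis for a Galerkin projection
        print(f"{r} POD modes capture 99% of the snapshot energy")

        # reduced coordinates of one snapshot and its reconstruction
        a = basis.T @ (snapshots[:, [40]] - mean_field)
        reconstruction = mean_field + basis @ a
        print("reconstruction error:", np.linalg.norm(reconstruction - snapshots[:, [40]]))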

  12. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    Science.gov (United States)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
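
    A minimal Python sketch of the comparison, assuming a one-dimensional test function with multiple local extrema (not one of the paper's actual test problems), fits a quadratic polynomial by least squares and a Gaussian-process (kriging-style) interpolator to the same samples and reports their errors.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        def response(x):                       # test function with multiple local extrema
            return np.sin(5 * x) + 0.3 * x ** 2

        rng = np.random.default_rng(3)
        x_train = rng.uniform(-2, 2, 15)
        y_train = response(x_train)
        x_test = np.linspace(-2, 2, 200)

        # quadratic polynomial fitted by least squares
        coeffs = np.polyfit(x_train, y_train, deg=2)
        poly_pred = np.polyval(coeffs, x_test)

        # kriging-style interpolating model (Gaussian process)
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
        gp.fit(x_train.reshape(-1, 1), y_train)
        gp_pred = gp.predict(x_test.reshape(-1, 1))

        truth = response(x_test)
        print("quadratic RMSE:", np.sqrt(np.mean((poly_pred - truth) ** 2)))
        print("kriging  RMSE:", np.sqrt(np.mean((gp_pred - truth) ** 2)))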

  13. 3D Modeling Techniques for Print and Digital Media

    Science.gov (United States)

    Stephens, Megan Ashley

    In developing my thesis, I looked to gain skills using ZBrush to create 3D models, 3D scanning, and 3D printing. The models created compared the hearts of several vertebrates and were intended for students attending Comparative Vertebrate Anatomy. I used several resources to create a model of the human heart and was able to work from life while creating heart models from other vertebrates. I successfully learned ZBrush and 3D scanning, and successfully printed 3D heart models. ZBrush allowed me to create several intricate models for use in both animation and print media. The 3D scanning technique did not fit my needs for the project, but may be of use for later projects. I was able to 3D print using two different techniques as well.

  14. A finite element parametric modeling technique of aircraft wing structures

    Institute of Scientific and Technical Information of China (English)

    Tang Jiapeng; Xi Ping; Zhang Baoyuan; Hu Bifu

    2013-01-01

    A finite element parametric modeling method for aircraft wing structures is proposed in this paper because of the time-consuming nature of finite element analysis pre-processing. The main research is positioned in the preliminary design phase of aircraft structures. A knowledge-driven system for fast finite element modeling is built. Based on this method, employing a template parametric technique, knowledge including design methods, rules, and expert experience in the process of modeling is encapsulated and a finite element model is established automatically, which greatly improves the speed, accuracy, and degree of standardization of modeling. The skeleton model, geometric mesh model, and finite element model, including finite element mesh and property data, are established with parametric description and automatic update. The outcomes of the research show that the method settles a series of problems of parameter association and model update in the process of finite element modeling, which establishes a key technical basis for finite element parametric analysis and optimization design.

  15. A Method to Test Model Calibration Techniques: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, Ron; Polly, Ben; Neymark, Joel

    2016-09-01

    This paper describes a method for testing model calibration techniques. Calibration is commonly used in conjunction with energy retrofit audit models. An audit is conducted to gather information about the building needed to assemble an input file for a building energy modeling tool. A calibration technique is used to reconcile model predictions with utility data, and then the 'calibrated model' is used to predict energy savings from a variety of retrofit measures and combinations thereof. Current standards and guidelines such as BPI-2400 and ASHRAE-14 set criteria for 'goodness of fit' and assume that if the criteria are met, then the calibration technique is acceptable. While it is logical to use the actual performance data of the building to tune the model, it is not certain that a good fit will result in a model that better predicts post-retrofit energy savings. Therefore, the basic idea here is that the simulation program (intended for use with the calibration technique) is used to generate surrogate utility bill data and retrofit energy savings data against which the calibration technique can be tested. This provides three figures of merit for testing a calibration technique, 1) accuracy of the post-retrofit energy savings prediction, 2) closure on the 'true' input parameter values, and 3) goodness of fit to the utility bill data. The paper will also discuss the pros and cons of using this synthetic surrogate data approach versus trying to use real data sets of actual buildings.

  16. An Empirical Study of Smoothing Techniques for Language Modeling

    CERN Document Server

    Chen, S F; Chen, Stanley F.; Goodman, Joshua T.

    1996-01-01

    We present an extensive empirical comparison of several smoothing techniques in the domain of language modeling, including those described by Jelinek and Mercer (1980), Katz (1987), and Church and Gale (1991). We investigate for the first time how factors such as training data size, corpus (e.g., Brown versus Wall Street Journal), and n-gram order (bigram versus trigram) affect the relative performance of these methods, which we measure through the cross-entropy of test data. In addition, we introduce two novel smoothing techniques, one a variation of Jelinek-Mercer smoothing and one a very simple linear interpolation technique, both of which outperform existing methods.
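
    The simplest of the interpolation techniques discussed, Jelinek-Mercer smoothing, mixes the maximum-likelihood bigram estimate with the unigram distribution. A toy Python sketch follows; the tiny corpus and the fixed interpolation weight are assumptions for illustration (in practice the weight is tuned on held-out data).

        from collections import Counter

        corpus = "the cat sat on the mat the cat ate the rat".split()

        unigrams = Counter(corpus)
        bigrams = Counter(zip(corpus[:-1], corpus[1:]))
        N = len(corpus)

        def p_unigram(w):
            return unigrams[w] / N

        def p_bigram_ml(w_prev, w):
            return bigrams[(w_prev, w)] / unigrams[w_prev] if unigrams[w_prev] else 0.0

        def p_jelinek_mercer(w_prev, w, lam=0.7):
            """Interpolate the maximum-likelihood bigram with the unigram distribution."""
            return lam * p_bigram_ml(w_prev, w) + (1 - lam) * p_unigram(w)

        print(p_jelinek_mercer("the", "cat"))   # seen bigram
        print(p_jelinek_mercer("cat", "on"))    # unseen bigram still gets probability mass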

  17. Optimization using surrogate models - by the space mapping technique

    DEFF Research Database (Denmark)

    Søndergaard, Jacob

    2003-01-01

    mapping surrogate has a lower approximation error for long steps. For short steps, however, the Taylor model of the expensive model is best, due to exact interpolation at the model origin. Five algorithms for space mapping optimization are presented and the numerical performance is evaluated. Three...... conditions are satisfied. So hybrid methods, combining the space mapping technique with classical optimization methods, should be used if convergence to high accuracy is wanted. Approximation abilities of the space mapping surrogate are compared with those of a Taylor model of the expensive model. The space...

  19. Microcomputer array processor system. [design for electronic warfare

    Science.gov (United States)

    Slezak, K. D.

    1980-01-01

    The microcomputer array system is discussed, with specific attention given to its electronic warfare applications. Several aspects of the system architecture are described as well as some of its distinctive characteristics.

  20. Team mental models: techniques, methods, and analytic approaches.

    Science.gov (United States)

    Langan-Fox, J; Code, S; Langfield-Smith, K

    2000-01-01

    Effective team functioning requires the existence of a shared or team mental model among members of a team. However, the best method for measuring team mental models is unclear. Methods reported vary in terms of how mental model content is elicited and analyzed or represented. We review the strengths and weaknesses of various methods that have been used to elicit, represent, and analyze individual and team mental models and provide recommendations for method selection and development. We describe the nature of mental models and review techniques that have been used to elicit and represent them. We focus on a case study on selecting a method to examine team mental models in industry. The processes involved in the selection and development of an appropriate method for eliciting, representing, and analyzing team mental models are described. The criteria for method selection were (a) applicability to the problem under investigation; (b) practical considerations - suitability for collecting data from the targeted research sample; and (c) theoretical rationale - the assumption that associative networks in memory are a basis for the development of mental models. We provide an evaluation of the method matched to the research problem and make recommendations for future research. The practical applications of this research include the provision of a technique for analyzing team mental models in organizations, the development of methods and processes for eliciting a mental model from research participants in their normal work environment, and a survey of available methodologies for mental model research.

  1. Selection of productivity improvement techniques via mathematical modeling

    Directory of Open Access Journals (Sweden)

    Mahassan M. Khater

    2011-07-01

    Full Text Available This paper presents a new mathematical model to select an optimal combination of productivity improvement techniques. The proposed model considers four-stage cycle productivity, and productivity is assumed to be a linear function of fifty-four improvement techniques. The model is implemented for a real-world case study of a manufacturing plant. The resulting problem is formulated as a mixed integer program, which can be solved to optimality using traditional methods. The preliminary results of the implementation indicate that productivity can be improved through a change of equipment, and the model can be easily applied to both manufacturing and service industries.
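
    A much-reduced sketch of such a selection model is shown below in Python using the PuLP library: binary variables choose among hypothetical improvement techniques to maximize the estimated productivity gain under a budget constraint. The technique names, gains, costs and budget are invented, and the real model's four-stage productivity function and fifty-four techniques are not reproduced.

        import pulp

        # Hypothetical improvement techniques: estimated productivity gain and implementation cost
        techniques = {
            "operator training":      {"gain": 0.04, "cost": 20},
            "preventive maintenance": {"gain": 0.06, "cost": 35},
            "layout redesign":        {"gain": 0.09, "cost": 60},
            "new fixtures":           {"gain": 0.05, "cost": 25},
            "scheduling software":    {"gain": 0.07, "cost": 40},
        }
        budget = 90

        prob = pulp.LpProblem("technique_selection", pulp.LpMaximize)
        x = {name: pulp.LpVariable(name.replace(" ", "_"), cat="Binary") for name in techniques}

        prob += pulp.lpSum(techniques[n]["gain"] * x[n] for n in techniques)             # total gain
        prob += pulp.lpSum(techniques[n]["cost"] * x[n] for n in techniques) <= budget   # budget limit

        prob.solve(pulp.PULP_CBC_CMD(msg=False))
        chosen = [n for n in techniques if x[n].value() == 1]
        print("selected techniques:", chosen)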

  2. Concerning the Feasibility of Example-driven Modelling Techniques

    OpenAIRE

    Thorne, Simon; Ball, David; Lawson, Zoe Frances

    2008-01-01

    We report on a series of experiments concerning the feasibility of example-driven modelling. The main aim was to establish experimentally, within an academic environment, the relationship between error and task complexity using a) traditional spreadsheet modelling, b) example-driven techniques. We report on the experimental design, sampling, research methods and the tasks set for both control and treatment groups. Analysis of the completed tasks allows comparison of several...

  3. Advanced Phase noise modeling techniques of nonlinear microwave devices

    OpenAIRE

    Prigent, M.; J. C. Nallatamby; R. Quere

    2004-01-01

    In this paper we present a coherent set of tools allowing an accurate and predictive design of low phase noise oscillators. Advanced phase noise modelling techniques in nonlinear microwave devices must be supported by a proven combination of the following: - Electrical modeling of low-frequency noise of semiconductor devices, oriented to circuit CAD. The local noise sources will be either cyclostationary noise sources or quasistationary noise sources. - Theoretic...

  4. Modeling and design techniques for RF power amplifiers

    CERN Document Server

    Raghavan, Arvind; Laskar, Joy

    2008-01-01

    The book covers RF power amplifier design, from device and modeling considerations to advanced circuit design architectures and techniques. It focuses on recent developments and advanced topics in this area, including numerous practical designs to back the theoretical considerations. It presents the challenges in designing power amplifiers in silicon and helps the reader improve the efficiency of linear power amplifiers, and design more accurate compact device models, with faster extraction routines, to create cost effective and reliable circuits.

  5. Validation of Models : Statistical Techniques and Data Availability

    NARCIS (Netherlands)

    Kleijnen, J.P.C.

    1999-01-01

    This paper shows which statistical techniques can be used to validate simulation models, depending on which real-life data are available. Concerning this availability three situations are distinguished (i) no data, (ii) only output data, and (iii) both input and output data. In case (i) - no real

  6. Techniques and tools for efficiently modeling multiprocessor systems

    Science.gov (United States)

    Carpenter, T.; Yalamanchili, S.

    1990-01-01

    System-level tools and methodologies associated with an integrated approach to the development of multiprocessor systems are examined. Tools for capturing initial program structure, automated program partitioning, automated resource allocation, and high-level modeling of the combined application and resource are discussed. The primary language focus of the current implementation is Ada, although the techniques should be appropriate for other programming paradigms.

  7. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    Full Text Available When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom's taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive level validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, so improvements are needed in describing the cognitive skills measured by the items.

  8. AN OVERVIEW OF REDUCED ORDER MODELING TECHNIQUES FOR SAFETY APPLICATIONS

    Energy Technology Data Exchange (ETDEWEB)

    Mandelli, D.; Alfonsi, A.; Talbot, P.; Wang, C.; Maljovec, D.; Smith, C.; Rabiti, C.; Cogliati, J.

    2016-10-01

    The RISMC project is developing new advanced simulation-based tools to perform Computational Risk Analysis (CRA) for the existing fleet of U.S. nuclear power plants (NPPs). These tools numerically model not only the thermal-hydraulic behavior of the reactor's primary and secondary systems, but also external event temporal evolution and component/system ageing. Thus, this is not only a multi-physics problem but also a multi-scale problem (both spatial, µm-mm-m, and temporal, seconds-hours-years). As part of the RISMC CRA approach, a large number of computationally expensive simulation runs may be required. An important aspect is that even though computational power is growing, the overall computational cost of a RISMC analysis using brute-force methods may not be viable for certain cases. A solution being evaluated to address the computational issue is the use of reduced order modeling techniques. During FY2015, we investigated and applied reduced order modeling techniques to decrease the RISMC analysis computational cost by decreasing the number of simulation runs; for this analysis improvement we used surrogate models instead of the actual simulation codes. This article focuses on the use of reduced order modeling techniques that can be applied to RISMC analyses in order to generate, analyze, and visualize data. In particular, we focus on surrogate models that approximate the simulation results but in a much faster time (microseconds instead of hours/days).

  9. Comparing modelling techniques for analysing urban pluvial flooding.

    Science.gov (United States)

    van Dijk, E; van der Meulen, J; Kluck, J; Straatman, J H M

    2014-01-01

    Short peak rainfall intensities cause sewer systems to overflow leading to flooding of streets and houses. Due to climate change and densification of urban areas, this is expected to occur more often in the future. Hence, next to their minor (i.e. sewer) system, municipalities have to analyse their major (i.e. surface) system in order to anticipate urban flooding during extreme rainfall. Urban flood modelling techniques are powerful tools in both public and internal communications and transparently support design processes. To provide more insight into the (im)possibilities of different urban flood modelling techniques, simulation results have been compared for an extreme rainfall event. The results show that, although modelling software is tending to evolve towards coupled one-dimensional (1D)-two-dimensional (2D) simulation models, surface flow models, using an accurate digital elevation model, prove to be an easy and fast alternative to identify vulnerable locations in hilly and flat areas. In areas at the transition between hilly and flat, however, coupled 1D-2D simulation models give better results since catchments of major and minor systems can differ strongly in these areas. During the decision making process, surface flow models can provide a first insight that can be complemented with complex simulation models for critical locations.

  10. Separable Watermarking Technique Using the Biological Color Model

    Directory of Open Access Journals (Sweden)

    David Nino

    2009-01-01

    Full Text Available Problem statement: The issue of having robust and fragile watermarking is still a main focus for various researchers worldwide. The performance of a watermarking technique depends on how complex it is as well as how feasible it is to implement. These issues are tested using various kinds of attacks, including geometric and transformation attacks. Watermarking techniques for color images are more challenging than for gray images in terms of complexity and information handling. In this study, we focused on the implementation of a watermarking technique for color images using the biological color model. Approach: We proposed a novel method for watermarking using the spatial and the Discrete Cosine Transform (DCT) domains. The proposed method dealt with colored images in the biological color model, the Hue, Saturation and Intensity (HSI) model. The technique was implemented and tested on various colored images, including standard ones such as the pepper image. The experiments were carried out using various attacks such as cropping, transformation and geometric attacks. Results: The method showed high accuracy in data retrieval, and the technique is fragile against geometric attacks. Conclusion: Watermark security was increased by using the Hadamard transform matrix. The watermarks used were meaningful and of varying sizes and details.
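
    Embedding in the HSI model first requires converting RGB pixels to hue, saturation and intensity. A standard conversion is sketched below in Python (the guard for grey pixels is an implementation detail added here); the watermark embedding itself is not shown.

        import math

        def rgb_to_hsi(r, g, b):
            """Convert normalised RGB (0-1) to the HSI colour model (H in degrees, S and I in 0-1)."""
            i = (r + g + b) / 3.0
            s = 0.0 if i == 0 else 1.0 - min(r, g, b) / i
            num = 0.5 * ((r - g) + (r - b))
            den = math.sqrt((r - g) ** 2 + (r - b) * (g - b)) or 1e-12     # guard against grey pixels
            h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
            if b > g:
                h = 360.0 - h
            return h, s, i

        print(rgb_to_hsi(0.8, 0.4, 0.2))   # an orange-ish pixel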

  11. Impact of Domain Modeling Techniques on the Quality of Domain Model: An Experiment

    Directory of Open Access Journals (Sweden)

    Hiqmat Nisa

    2016-10-01

    Full Text Available The Unified Modeling Language (UML) is widely used to analyze and design different software development artifacts in object-oriented development. The domain model is a significant artifact that models the problem domain and visually represents real-world objects and the relationships among them. It facilitates the comprehension process by identifying the vocabulary and key concepts of the business world. The category list technique identifies concepts and associations with the help of predefined categories, which are important to business information systems, whereas the noun phrasing technique performs a grammatical analysis of the use case description to recognize concepts and associations. Both of these techniques are used for the construction of the domain model; however, no empirical evidence exists that evaluates the quality of the resultant domain model constructed via these two basic techniques. A controlled experiment was performed to investigate the impact of the category list and noun phrasing techniques on the quality of the domain model. The constructed domain model is evaluated for completeness, correctness and the effort required for its design. The obtained results show that the category list technique is better than the noun phrasing technique for the identification of concepts, as it avoids generating unnecessary elements, i.e. extra concepts, associations and attributes, in the domain model. The noun phrasing technique produces a comprehensive domain model and requires less effort as compared to the category list. There is no statistically significant difference between the two techniques in the case of correctness.
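
    As a purely illustrative aside on the noun phrasing idea (grammatical analysis of use case text to propose candidate concepts), the toy sketch below uses a crude stop-word and verb filter in place of a real part-of-speech tagger; the use case text and word lists are invented and this is not the experimental procedure of the paper.

        import re
        from collections import Counter

        # Toy use case description; the text and heuristic are illustrative only.
        use_case = """The customer places an order. The system checks the stock
        level for each order line and reserves the items. The clerk confirms the
        order and the system prints an invoice for the customer."""

        STOP_WORDS = {"the", "a", "an", "and", "for", "each", "of", "to"}
        # Crude stand-in for verb detection; a real analysis would use a POS tagger.
        VERBS = {"places", "checks", "reserves", "confirms", "prints"}

        words = re.findall(r"[a-z]+", use_case.lower())
        candidates = [w for w in words if w not in STOP_WORDS and w not in VERBS]

        # Frequent remaining words become candidate concepts for the domain model.
        for concept, count in Counter(candidates).most_common(6):
            print(f"candidate concept: {concept:10s} (mentioned {count}x)")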

  13. Probabilistic Analysis Techniques Applied to Complex Spacecraft Power System Modeling

    Science.gov (United States)

    Hojnicki, Jeffrey S.; Rusick, Jeffrey J.

    2005-01-01

    Electric power system performance predictions are critical to spacecraft, such as the International Space Station (ISS), to ensure that sufficient power is available to support all the spacecraft's power needs. In the case of the ISS power system, analyses to date have been deterministic, meaning that each analysis produces a single-valued result for power capability because of the complexity and large size of the model. As a result, the deterministic ISS analyses did not account for the sensitivity of the power capability to uncertainties in model input variables. Over the last 10 years, the NASA Glenn Research Center has developed advanced, computationally fast, probabilistic analysis techniques and successfully applied them to large (thousands of nodes) complex structural analysis models. These same techniques were recently applied to large, complex ISS power system models. This new application enables probabilistic power analyses that account for input uncertainties and produce results that include variations caused by these uncertainties. Specifically, N&R Engineering, under contract to NASA, integrated these advanced probabilistic techniques with Glenn's internationally recognized ISS power system model, System Power Analysis for Capability Evaluation (SPACE).
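
    A minimal sketch of how a probabilistic analysis turns a single-valued capability prediction into a distribution by sampling uncertain inputs; the toy solar-array model, distributions, and numbers below are invented for illustration and have no relation to SPACE or actual ISS data.

        import numpy as np

        rng = np.random.default_rng(42)
        n_samples = 10000

        # Hypothetical uncertain inputs of a toy solar-array power model
        # (means, spreads, and the model itself are illustrative).
        solar_flux = rng.normal(1361.0, 10.0, n_samples)      # W/m^2
        array_area = rng.normal(375.0, 2.0, n_samples)        # m^2
        efficiency = rng.normal(0.14, 0.005, n_samples)       # conversion efficiency
        degradation = rng.uniform(0.95, 1.00, n_samples)      # seasonal/aging factor

        # Deterministic capability model evaluated for every input sample.
        power_kw = solar_flux * array_area * efficiency * degradation / 1000.0

        print(f"mean capability : {power_kw.mean():7.1f} kW")
        print(f"5th percentile  : {np.percentile(power_kw, 5):7.1f} kW")
        print(f"95th percentile : {np.percentile(power_kw, 95):7.1f} kW")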

  14. Spatial Modeling of Geometallurgical Properties: Techniques and a Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Deutsch, Jared L., E-mail: jdeutsch@ualberta.ca [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Palmer, Kevin [Teck Resources Limited (Canada); Deutsch, Clayton V.; Szymanski, Jozef [University of Alberta, School of Mining and Petroleum Engineering, Department of Civil and Environmental Engineering (Canada); Etsell, Thomas H. [University of Alberta, Department of Chemical and Materials Engineering (Canada)

    2016-06-15

    High-resolution spatial numerical models of metallurgical properties constrained by geological controls and more extensively by measured grade and geomechanical properties constitute an important part of geometallurgy. Geostatistical and other numerical techniques are adapted and developed to construct these high-resolution models accounting for all available data. Important issues that must be addressed include unequal sampling of the metallurgical properties versus grade assays, measurements at different scales, and complex nonlinear averaging of many metallurgical parameters. This paper establishes techniques to address each of these issues with the required implementation details and also demonstrates geometallurgical mineral deposit characterization for a copper–molybdenum deposit in South America. High-resolution models of grades and comminution indices are constructed, checked, and rigorously validated. The workflow demonstrated in this case study is applicable to many other deposit types.

  15. Model-checking techniques based on cumulative residuals.

    Science.gov (United States)

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
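
    A simplified numerical sketch of the cumulative-residual idea for an ordinary linear model: the observed cumulative sum of residuals over a covariate is compared with reference processes generated under the assumed model. The sign-flip resampling used here is a crude stand-in for the paper's zero-mean Gaussian processes (it ignores the correction for estimating the regression coefficients), and the data are simulated.

        import numpy as np

        rng = np.random.default_rng(7)
        n = 200

        # Simulated data with a mild quadratic effect that a linear fit misses.
        x = rng.uniform(0, 2, n)
        y = 1.0 + 2.0 * x + 0.8 * x ** 2 + rng.normal(0, 0.5, n)

        # Fit the (misspecified) linear working model y = b0 + b1 * x.
        X = np.column_stack([np.ones(n), x])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        residuals = y - X @ beta

        # Observed cumulative residual process over the covariate values.
        order = np.argsort(x)
        observed = np.cumsum(residuals[order]) / np.sqrt(n)
        observed_sup = np.max(np.abs(observed))

        # Reference processes: random sign flips of the residuals, a crude
        # stand-in for the zero-mean Gaussian processes in the paper.
        sups = []
        for _ in range(1000):
            perturbed = residuals[order] * rng.choice([-1.0, 1.0], n)
            sups.append(np.max(np.abs(np.cumsum(perturbed) / np.sqrt(n))))

        p_value = np.mean(np.array(sups) >= observed_sup)
        print(f"sup |W(x)| = {observed_sup:.2f}, approximate p-value = {p_value:.3f}")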

  16. Inside marginal adaptation of crowns by X-ray micro-computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Dos Santos, T. M.; Lima, I.; Lopes, R. T. [Nuclear Instrumentation Laboratory, Nuclear Engineering Program, Federal University of Rio de Janeiro, RJ, (Brazil); Author, S. B. Jr. [Department of Physics, Colorado State University, Ft. Collins, CO 80523, (United States)

    2015-07-01

    The objective of this work was to assess the dental arcade by using X-ray micro-computed tomography. For this purpose a high-resolution system was used and three groups were studied: the Zirkonzahn CAD-CAM system, IPS e.max Press, and metal ceramic. The three systems assessed in this study showed marginal and discrepancy gaps within clinically accepted limits. The 2D and 3D evaluations showed that the technique used is a powerful method to investigate quantitative characteristics of the dental arcade. (authors)

  17. Videogrammetric Model Deformation Measurement Technique for Wind Tunnel Applications

    Science.gov (United States)

    Barrows, Danny A.

    2006-01-01

    Videogrammetric measurement technique developments at NASA Langley were driven largely by the need to quantify model deformation at the National Transonic Facility (NTF). This paper summarizes recent wind tunnel applications and issues at the NTF and other NASA Langley facilities including the Transonic Dynamics Tunnel, 31-Inch Mach 10 Tunnel, 8-Ft High Temperature Tunnel, and the 20-Ft Vertical Spin Tunnel. In addition, several adaptations of wind tunnel techniques to non-wind tunnel applications are summarized. These applications include wing deformation measurements on vehicles in flight, determining aerodynamic loads based on optical elastic deformation measurements, measurements on ultra-lightweight and inflatable space structures, and the use of an object-to-image plane scaling technique to support NASA's Space Exploration program.

  18. An observational model for biomechanical assessment of sprint kayaking technique.

    Science.gov (United States)

    McDonnell, Lisa K; Hume, Patria A; Nolte, Volker

    2012-11-01

    Sprint kayaking stroke phase descriptions for biomechanical analysis of technique vary across the kayaking literature, with inconsistencies that are not conducive to the advancement of applied biomechanics services or research. We aimed to provide a consistent basis for the categorisation and analysis of sprint kayak technique by proposing a clear observational model. Electronic databases were searched using key words kayak, sprint, technique, and biomechanics, with 20 sources reviewed. Nine phase-defining positions were identified within the kayak literature and were divided into three distinct types based on how positions were defined: water-contact-defined positions, paddle-shaft-defined positions, and body-defined positions. Videos of elite paddlers from multiple camera views were reviewed to determine the visibility of positions used to define phases. The water-contact-defined positions of catch, immersion, extraction, and release were visible from multiple camera views, and therefore were suitable for practical use by coaches and researchers. Using these positions, phases and sub-phases were created for a new observational model. We recommend that kayaking data should be reported using single strokes and described using two phases: water and aerial. For more detailed analysis without disrupting the basic two-phase model, a four-sub-phase model consisting of entry, pull, exit, and aerial sub-phases should be used.

  19. One technique for refining the global Earth gravity models

    Science.gov (United States)

    Koneshov, V. N.; Nepoklonov, V. B.; Polovnev, O. V.

    2017-01-01

    The results of the theoretical and experimental research on the technique for refining global Earth geopotential models such as EGM2008 in the continental regions are presented. The discussed technique is based on high-resolution satellite data for the Earth's surface topography, which makes it possible to allow for the fine structure of the Earth's gravitational field without additional gravimetry data. The experimental studies are conducted using the example of the new GGMplus global gravity model of the Earth with a resolution of about 0.5 km, which is obtained by expanding the EGM2008 model to degree 2190 with corrections for the topography calculated from the SRTM data. The GGMplus and EGM2008 models are compared with regional geoid models in 21 regions of North America, Australia, Africa, and Europe. The obtained estimates largely support the possibility of refining global geopotential models such as EGM2008 by the procedure implemented in GGMplus, particularly in regions with relatively high elevation differences.

  20. Interpolation techniques in robust constrained model predictive control

    Science.gov (United States)

    Kheawhom, Soorathep; Bumroongsri, Pornchai

    2017-05-01

    This work investigates interpolation techniques that can be employed in off-line robust constrained model predictive control for a discrete time-varying system. A sequence of feedback gains is determined by solving off-line a series of optimal control optimization problems. A corresponding sequence of nested robustly positive invariant sets, either ellipsoidal or polyhedral, is then constructed. At each sampling time, the smallest invariant set containing the current state is determined. If the current invariant set is the innermost set, the pre-computed gain associated with the innermost set is applied. Otherwise, the feedback gain is variable and determined by a linear interpolation of the pre-computed gains. The proposed algorithms are illustrated with case studies of a two-tank system. The simulation results showed that the proposed interpolation techniques significantly improve the control performance of off-line robust model predictive control without sacrificing much on-line computational performance.
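
    A small sketch of the gain-selection logic described above: given a nested family of ellipsoidal invariant sets {x : x'P_i x <= 1} with pre-computed gains, the smallest set containing the current state is found and, when the state is not in the innermost set, the gain is blended between neighbouring sets. The matrices, gains, and interpolation weight below are made up for illustration rather than obtained from the off-line optimization problems.

        import numpy as np

        # Hypothetical nested ellipsoidal invariant sets {x : x^T P_i x <= 1},
        # ordered from outermost to innermost, with their pre-computed gains.
        # Real values would come from the off-line optimal control problems.
        P = [np.diag([0.05, 0.05]), np.diag([0.2, 0.2]), np.diag([1.0, 1.0])]
        K = [np.array([[-0.3, -0.1]]), np.array([[-0.6, -0.2]]), np.array([[-1.0, -0.4]])]

        def interpolated_gain(x):
            """Return a feedback gain for state x by interpolating between the
            gains of the two smallest nested sets that bracket x."""
            levels = [float(x @ Pi @ x) for Pi in P]          # x^T P_i x for each set
            inside = [i for i, v in enumerate(levels) if v <= 1.0]
            if not inside:
                raise ValueError("state outside the outermost invariant set")
            i = max(inside)                                   # smallest set containing x
            if i == len(P) - 1:
                return K[i]                                   # innermost set: use its gain
            # Blend gain i (current set) and gain i+1 (next smaller set) with a
            # weight based on how close x is to the smaller set's boundary.
            lam = np.clip((1.0 - levels[i]) / (levels[i + 1] - levels[i]), 0.0, 1.0)
            return (1.0 - lam) * K[i] + lam * K[i + 1]

        x = np.array([1.5, -0.5])
        print("control input:", interpolated_gain(x) @ x)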

  1. Mathematical analysis techniques for modeling the space network activities

    Science.gov (United States)

    Foster, Lisa M.

    1992-01-01

    The objective of the present work was to explore and identify mathematical analysis techniques, and in particular, the use of linear programming. This topic was then applied to the Tracking and Data Relay Satellite System (TDRSS) in order to understand the space network better. Finally, a small scale version of the system was modeled, variables were identified, data was gathered, and comparisons were made between actual and theoretical data.
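
    As an illustration of the kind of linear program such an analysis might use, the sketch below allocates a fixed amount of relay contact time among three users to maximize priority-weighted service; the priorities, demands, and capacity are hypothetical and unrelated to actual TDRSS data.

        import numpy as np
        from scipy.optimize import linprog

        # Toy allocation problem: divide 24 hours of relay contact time among
        # three user spacecraft to maximize priority-weighted service.
        priority = np.array([3.0, 2.0, 1.0])        # value per allocated hour
        max_demand = [10.0, 12.0, 14.0]             # each user's requested hours

        # linprog minimizes, so negate the objective to maximize weighted time.
        c = -priority
        A_ub = np.ones((1, 3))                      # total allocated hours
        b_ub = np.array([24.0])                     # cannot exceed 24 hours of contact
        bounds = [(0.0, d) for d in max_demand]

        result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        print("allocated hours per user:", result.x)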

  2. EXPERIENCE WITH SYNCHRONOUS GENERATOR MODEL USING PARTICLE SWARM OPTIMIZATION TECHNIQUE

    OpenAIRE

    N.RATHIKA; Dr.A.Senthil kumar; A.ANUSUYA

    2014-01-01

    This paper addresses the modeling of a polyphase synchronous generator and the minimization of power losses using the Particle Swarm Optimization (PSO) technique with a constriction factor. Use of a polyphase synchronous generator allows the total power circulating in the system to be distributed across all phases. Another advantage of a polyphase system is that a fault in one winding does not lead to system shutdown. Process optimization is the discipline of adjusting a process so as...

  3. Equivalence and Differences between Structural Equation Modeling and State-Space Modeling Techniques

    Science.gov (United States)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, Ellen L.; Dolan, Conor V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and differences through analytic comparisons and…

  4. Equivalence and differences between structural equation modeling and state-space modeling techniques

    NARCIS (Netherlands)

    Chow, Sy-Miin; Ho, Moon-ho R.; Hamaker, E.L.; Dolan, C.V.

    2010-01-01

    State-space modeling techniques have been compared to structural equation modeling (SEM) techniques in various contexts but their unique strengths have often been overshadowed by their similarities to SEM. In this article, we provide a comprehensive discussion of these 2 approaches' similarities and

  6. A Comparison of Evolutionary Computation Techniques for IIR Model Identification

    Directory of Open Access Journals (Sweden)

    Erik Cuevas

    2014-01-01

    Full Text Available System identification is a complex optimization problem which has recently attracted attention in the fields of science and engineering. In particular, the use of infinite impulse response (IIR) models for identification is preferred over their equivalent FIR (finite impulse response) models since the former yield more accurate models of physical plants for real-world applications. However, IIR structures tend to produce multimodal error surfaces whose cost functions are significantly difficult to minimize. Evolutionary computation techniques (ECT) are used to estimate the solution to complex optimization problems. They are often designed to meet the requirements of particular problems because no single optimization algorithm can solve all problems competitively. Therefore, when new algorithms are proposed, their relative efficacies must be appropriately evaluated. Several comparisons among ECT have been reported in the literature. Nevertheless, they suffer from one limitation: their conclusions are based on the performance of popular evolutionary approaches over a set of synthetic functions with exact solutions and well-known behaviors, without considering the application context or including recent developments. This study presents the comparison of various evolutionary computation optimization techniques applied to IIR model identification. Results over several models are presented and statistically validated.
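
    The sketch below casts IIR model identification as the optimization problem described above and solves it with differential evolution, one evolutionary computation technique (not necessarily one of those compared in the paper); the plant coefficients, model order, and noise level are illustrative.

        import numpy as np
        from scipy.signal import lfilter
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(3)

        # Unknown plant: a second-order IIR system driven by white noise.
        b_true, a_true = [0.3, -0.4], [1.0, -1.2, 0.5]
        u = rng.standard_normal(500)
        y = lfilter(b_true, a_true, u) + 0.01 * rng.standard_normal(500)

        def output_error(params):
            """Mean squared output error between the plant and a candidate IIR model."""
            b0, b1, a1, a2 = params
            y_hat = lfilter([b0, b1], [1.0, a1, a2], u)
            mse = np.mean((y - y_hat) ** 2)
            # Unstable candidates can overflow; map them to a large finite penalty.
            return float(np.nan_to_num(mse, nan=1e12, posinf=1e12))

        bounds = [(-2, 2), (-2, 2), (-2, 2), (-2, 2)]
        result = differential_evolution(output_error, bounds, seed=3, tol=1e-8)
        print("estimated parameters:", np.round(result.x, 3))
        print("true parameters     :", [0.3, -0.4, -1.2, 0.5])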

  7. Sensitivity analysis techniques for models of human behavior.

    Energy Technology Data Exchange (ETDEWEB)

    Bier, Asmeret Brooke

    2010-09-01

    Human and social modeling has emerged as an important research area at Sandia National Laboratories due to its potential to improve national defense-related decision-making in the presence of uncertainty. To learn about which sensitivity analysis techniques are most suitable for models of human behavior, different promising methods were applied to an example model, tested, and compared. The example model simulates cognitive, behavioral, and social processes and interactions, and involves substantial nonlinearity, uncertainty, and variability. Results showed that some sensitivity analysis methods create similar results, and can thus be considered redundant. However, other methods, such as global methods that consider interactions between inputs, can generate insight not gained from traditional methods.
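
    As a hedged illustration of one sampling-based global sensitivity measure (the squared correlation of each input with the output, whose shortfall from 1 hints at interaction effects), the sketch below uses an invented toy "behavioral" model; it is not the Sandia model or the specific methods compared in the report.

        import numpy as np

        rng = np.random.default_rng(11)
        n = 20000

        # Hypothetical model of an agent's decision propensity with an interaction
        # between social influence and uncertainty (purely illustrative).
        influence = rng.uniform(0, 1, n)
        uncertainty = rng.uniform(0, 1, n)
        memory = rng.uniform(0, 1, n)
        output = 1.0 * influence + 0.5 * memory + 4.0 * influence * uncertainty

        # Sampling-based first-order sensitivity approximation for each input.
        inputs = {"influence": influence, "uncertainty": uncertainty, "memory": memory}
        total = 0.0
        for name, values in inputs.items():
            s1 = np.corrcoef(values, output)[0, 1] ** 2
            total += s1
            print(f"{name:12s} first-order index ~ {s1:.2f}")
        print(f"sum of first-order indices ~ {total:.2f} (< 1 suggests interactions)")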

  8. Multi-Model Combination techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N K; Duan, Q; Gao, X; Sorooshian, S

    2005-04-11

    This paper examines several multi-model combination techniques: the Simple Multi-model Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.
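
    A compact sketch of two of the combination schemes named above, the Simple Multi-model Average and a least-squares Weighted Average Method with an intercept acting as a crude bias correction; the synthetic observed and model streamflow series are invented for illustration and the weighting scheme may differ in detail from the paper's.

        import numpy as np

        rng = np.random.default_rng(5)
        t = np.arange(200)

        # Synthetic observed streamflow and three biased, noisy model predictions.
        observed = 50 + 20 * np.sin(2 * np.pi * t / 50)
        models = np.column_stack([
            observed + rng.normal(5, 4, t.size),        # model 1: positive bias
            0.8 * observed + rng.normal(0, 6, t.size),  # model 2: underestimates peaks
            observed + rng.normal(-3, 8, t.size),       # model 3: noisy, negative bias
        ])

        # Simple Multi-model Average (SMA): equal weights, no bias correction.
        sma = models.mean(axis=1)

        # Weighted Average Method (WAM): regression weights (plus an intercept as
        # a crude bias correction) fitted on a calibration period, then applied.
        calib = slice(0, 100)
        A = np.column_stack([np.ones(100), models[calib]])
        weights, *_ = np.linalg.lstsq(A, observed[calib], rcond=None)
        wam = np.column_stack([np.ones(t.size), models]) @ weights

        def rmse(pred, period):
            return np.sqrt(np.mean((pred[period] - observed[period]) ** 2))

        verif = slice(100, 200)
        print(f"RMSE  SMA: {rmse(sma, verif):.2f}   WAM: {rmse(wam, verif):.2f}")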

  9. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.

  10. A New Mathematical Modeling Technique for Pull Production Control Systems

    Directory of Open Access Journals (Sweden)

    O. Srikanth

    2013-12-01

    Full Text Available The Kanban Control System is widely used to control the release of parts in a multistage manufacturing system operating under a pull production control system. Most of the work on the Kanban Control System deals with multi-product manufacturing systems. In this paper, we propose a regression modeling technique for a multistage manufacturing system that coordinates the release of parts into each stage of the system with the arrival of customer demands for final products. We also compare two variants of the Kanban Control System model, combining a mathematical model and a Simulink model for the coordination of parts production in an assembly manufacturing system. In both variants, the production of a new subassembly is authorized only when an assembly Kanban is available. Assembly kanbans become available when finished product is consumed. A simulation environment for the product line system is generated with the proposed model, and the mathematical model is implemented against the simulation model in MATLAB. Both the simulation and model outputs provide an in-depth analysis of each of the resulting control systems for modeling a product line system.

  11. Evolution of Modelling Techniques for Service Oriented Architecture

    Directory of Open Access Journals (Sweden)

    Mikit Kanakia

    2014-07-01

    Full Text Available Service-oriented architecture (SOA) is a software design and architecture pattern based on independent pieces of software providing functionality as services to other applications. The benefit of SOA in the IT infrastructure is to allow parallel use and data exchange between programs which are services to the enterprise. The Unified Modelling Language (UML) is a standardized general-purpose modelling language in the field of software engineering. UML includes a set of graphic notation techniques to create visual models of object-oriented software systems. We want to make UML available for SOA as well. SoaML (Service oriented architecture Modelling Language) is an open source specification project from the Object Management Group (OMG), describing a UML profile and meta-model for the modelling and design of services within a service-oriented architecture. BPMN was also extended for SOA, but there were a few pitfalls. There is a need for a modelling framework dedicated to SOA. Michael Bell authored a framework called the Service Oriented Modelling Framework (SOMF), which is dedicated to SOA.

  12. A DYNAMO application of microcomputer-based simulation in health sciences teaching.

    Science.gov (United States)

    Sanchez Navarro, J D; Toval Alvarez, J A; Palacios Ortega, F; Sanchez Casado, M P; Perez Polo, M

    1993-10-01

    This paper presents a concrete application of microcomputer-based simulation in health sciences education. It describes a pharmacological system dynamics model that represents, graphically and numerically, the behaviour of and relations between variables, and that in addition allows interaction with students for experimentation. The work also illustrates a strategy for introducing this kind of model in the classroom, as well as the suitability of implementing such models in educational environments using professional tools such as DYNAMO and DYNEX. Subsequent evaluation of the results and comments given by the students revealed more positive attitudes towards the discipline.

  13. Multi-Model Combination Techniques for Hydrological Forecasting: Application to Distributed Model Intercomparison Project Results

    Energy Technology Data Exchange (ETDEWEB)

    Ajami, N; Duan, Q; Gao, X; Sorooshian, S

    2006-05-08

    This paper examines several multi-model combination techniques: the Simple Multimodel Average (SMA), the Multi-Model Super Ensemble (MMSE), Modified Multi-Model Super Ensemble (M3SE) and the Weighted Average Method (WAM). These model combination techniques were evaluated using the results from the Distributed Model Intercomparison Project (DMIP), an international project sponsored by the National Weather Service (NWS) Office of Hydrologic Development (OHD). All of the multi-model combination results were obtained using uncalibrated DMIP model outputs and were compared against the best uncalibrated as well as the best calibrated individual model results. The purpose of this study is to understand how different combination techniques affect the skill levels of the multi-model predictions. This study revealed that the multi-model predictions obtained from uncalibrated single model predictions are generally better than any single member model predictions, even the best calibrated single model predictions. Furthermore, more sophisticated multi-model combination techniques that incorporated bias correction steps work better than simple multi-model average predictions or multi-model predictions without bias correction.

  14. System identification and model reduction using modulating function techniques

    Science.gov (United States)

    Shen, Yan

    1993-01-01

    Weighted least squares (WLS) and adaptive weighted least squares (AWLS) algorithms are initiated for continuous-time system identification using Fourier-type modulating function techniques. Two stochastic signal models are examined using the mean square properties of the stochastic calculus: an equation error signal model with white noise residuals, and a more realistic white measurement noise signal model. The covariance matrices in each model are shown to be banded and sparse, and a joint likelihood cost function is developed which links the real and imaginary parts of the modulated quantities. The superior performance of the above algorithms is demonstrated by comparing them with the LS/MFT and the popular prediction error method (PEM) through 200 Monte Carlo simulations. A model reduction problem is formulated with the AWLS/MFT algorithm, and comparisons are made via six examples with a variety of model reduction techniques, including the well-known balanced realization method. Here the AWLS/MFT algorithm manifests higher accuracy in almost all cases, and exhibits its unique flexibility and versatility. Armed with this model reduction, the AWLS/MFT algorithm is extended into MIMO transfer function system identification problems. The impact due to the discrepancy in bandwidths and gains among subsystems is explored through five examples. Finally, as a comprehensive application, the stability derivatives of the longitudinal and lateral dynamics of an F-18 aircraft are identified using physical flight data provided by NASA. A pole-constrained SIMO and MIMO AWLS/MFT algorithm is devised and analyzed. Monte Carlo simulations illustrate its high noise-rejecting properties. Utilizing the flight data, comparisons among different MFT algorithms are tabulated and the AWLS is found to be strongly favored in almost all facets.

  15. Use of surgical techniques in the rat pancreas transplantation model

    Institute of Scientific and Technical Information of China (English)

    Yi Ma; Zhi-Yong Guo

    2008-01-01

    BACKGROUND: Pancreas transplantation is currently considered to be the most reliable and effective treatment for insulin-dependent diabetes mellitus (also called type 1 diabetes). With the improvement of microsurgical techniques, pancreas transplantation in rats has been the major model for physiological and immunological experimental studies in the past 20 years. We investigated the surgical techniques of pancreas transplantation in rats by analysing the difference between cervical segmental pancreas transplantation and abdominal pancreaticoduodenal transplantation. METHODS: Two hundred and forty male adult Wistar rats weighing 200-300 g were used, 120 as donors and 120 as recipients. Sixty cervical segmental pancreas transplants and 60 abdominal pancreaticoduodenal transplants were carried out and vessel anastomoses were made with microsurgical techniques. RESULTS: The time of donor pancreas harvesting in the cervical and abdominal groups was 31±6 and 37.6±3.8 min, respectively, and the lengths of the recipient operations were 49.2±5.6 and 60.6±7.8 min. The time for the donor operation was not significantly different (P>0.05), but the recipient operation time in the abdominal group was longer than that in the cervical group (P<0.05). CONCLUSIONS: Both pancreas transplantation methods are stable models for immunological and physiological studies in pancreas transplantation. Since each has its own advantages and disadvantages, the designer can choose the appropriate method according to the requirements of the study.

  16. Crop Yield Forecasted Model Based on Time Series Techniques

    Institute of Scientific and Technical Information of China (English)

    Li Hong-ying; Hou Yan-lin; Zhou Yong-juan; Zhao Hui-ming

    2012-01-01

    Traditional studies on potential yield mainly referred to attainable yield: the maximum yield which could be reached by a crop in a given environment. A new concept of crop yield under average climate conditions, which is affected by the advancement of science and technology, is defined in this paper. Based on this new concept of crop yield, time series techniques relying on past yield data were employed to set up a forecasting model. The model was tested using average grain yields of Liaoning Province in China from 1949 to 2005. The testing combined dynamic n-choosing and micro tendency rectification, and the average forecasting error was 1.24%. A turning point may occur in the trend line of yield change, in which case the inflexion model was used to handle the yield turning point.
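
    A simplified sketch of fitting a trend-plus-autoregressive model to an annual yield series and issuing a one-step-ahead forecast; the synthetic yields and the AR(1) structure are illustrative, and the paper's dynamic n-choosing and inflexion refinements are not reproduced.

        import numpy as np

        rng = np.random.default_rng(9)

        # Synthetic annual grain yields (t/ha): technology-driven trend plus noise.
        years = np.arange(1949, 2006)
        yields = 1.0 + 0.06 * (years - 1949) + rng.normal(0, 0.15, years.size)

        # Trend + AR(1) model: y_t = a + b*t + c*y_{t-1}, fitted by least squares.
        t = years - years[0]
        X = np.column_stack([np.ones(years.size - 1), t[1:], yields[:-1]])
        coef, *_ = np.linalg.lstsq(X, yields[1:], rcond=None)

        # One-step-ahead forecast for the year after the series ends.
        forecast = coef[0] + coef[1] * (t[-1] + 1) + coef[2] * yields[-1]
        print(f"forecast yield for {years[-1] + 1}: {forecast:.2f} t/ha")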

  17. Cooperative cognitive radio networking system model, enabling techniques, and performance

    CERN Document Server

    Cao, Bin; Mark, Jon W

    2016-01-01

    This SpringerBrief examines the active cooperation between users of Cooperative Cognitive Radio Networking (CCRN), exploring the system model, enabling techniques, and performance. The brief provides a systematic study on active cooperation between primary users and secondary users, i.e., (CCRN), followed by the discussions on research issues and challenges in designing spectrum-energy efficient CCRN. As an effort to shed light on the design of spectrum-energy efficient CCRN, they model the CCRN based on orthogonal modulation and orthogonally dual-polarized antenna (ODPA). The resource allocation issues are detailed with respect to both models, in terms of problem formulation, solution approach, and numerical results. Finally, the optimal communication strategies for both primary and secondary users to achieve spectrum-energy efficient CCRN are analyzed.

  18. Automated Techniques for the Qualitative Analysis of Ecological Models: Continuous Models

    Directory of Open Access Journals (Sweden)

    Lynn van Coller

    1997-06-01

    Full Text Available The mathematics required for a detailed analysis of the behavior of a model can be formidable. In this paper, I demonstrate how various computer packages can aid qualitative analyses by implementing techniques from dynamical systems theory. Because computer software is used to obtain the results, the techniques can be used by nonmathematicians as well as mathematicians. In-depth analyses of complicated models that were previously very difficult to study can now be done. Because the paper is intended as an introduction to applying the techniques to ecological models, I have included an appendix describing some of the ideas and terminology. A second appendix shows how the techniques can be applied to a fairly simple predator-prey model and establishes the reliability of the computer software. The main body of the paper discusses a ratio-dependent model. The new techniques highlight some limitations of isocline analyses in this three-dimensional setting and show that the model is structurally unstable. Another appendix describes a larger model of a sheep-pasture-hyrax-lynx system. Dynamical systems techniques are compared with a traditional sensitivity analysis and are found to give more information. As a result, an incomplete relationship in the model is highlighted. I also discuss the resilience of these models to both parameter and population perturbations.

  19. Theoretical modeling techniques and their impact on tumor immunology.

    Science.gov (United States)

    Woelke, Anna Lena; Murgueitio, Manuela S; Preissner, Robert

    2010-01-01

    Currently, cancer is one of the leading causes of death in industrial nations. While conventional cancer treatment usually results in the patient suffering from severe side effects, immunotherapy is a promising alternative. Nevertheless, some questions remain unanswered with regard to using immunotherapy to treat cancer hindering it from being widely established. To help rectify this deficit in knowledge, experimental data, accumulated from a huge number of different studies, can be integrated into theoretical models of the tumor-immune system interaction. Many complex mechanisms in immunology and oncology cannot be measured in experiments, but can be analyzed by mathematical simulations. Using theoretical modeling techniques, general principles of tumor-immune system interactions can be explored and clinical treatment schedules optimized to lower both tumor burden and side effects. In this paper, we aim to explain the main mathematical and computational modeling techniques used in tumor immunology to experimental researchers and clinicians. In addition, we review relevant published work and provide an overview of its impact to the field.
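
    As a minimal example of the kind of differential-equation model used in theoretical tumor immunology, the sketch below integrates a toy tumour-effector system (logistic tumour growth, immune-mediated killing, tumour-stimulated effector recruitment); the equations follow the general predator-prey structure of published tumour-immune models, but the specific form and parameter values are invented for illustration.

        import numpy as np
        from scipy.integrate import odeint

        def tumor_immune(state, t, r, k, a, s, d, g):
            """Toy tumour-immune interaction: logistic tumour growth with
            immune-mediated killing, and effector cells recruited by the tumour."""
            T, E = state
            dT = r * T * (1 - T / k) - a * T * E
            dE = s + g * T * E / (1.0 + T) - d * E
            return [dT, dE]

        params = (0.18, 5e8, 1e-7, 1.3e4, 0.04, 0.12)   # illustrative values only
        t = np.linspace(0, 300, 1000)
        sol = odeint(tumor_immune, [1e6, 3e5], t, args=params)
        print("final tumour burden:", f"{sol[-1, 0]:.3e} cells")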

  20. A formal model for integrity protection based on DTE technique

    Institute of Scientific and Technical Information of China (English)

    JI Qingguang; QING Sihan; HE Yeping

    2006-01-01

    In order to provide integrity protection for a secure operating system that satisfies the structured protection class requirements, a DTE-technique-based integrity protection formal model is proposed after the implications and structures of the integrity policy have been analyzed in detail. This model consists of some basic rules for configuring DTE and a state transition model, which are used to specify how the domains and types are set, and how security invariants obtained from the initial configuration are maintained in the process of system transition, respectively. In this model, ten invariants are introduced; in particular, some new invariants dealing with information flow are proposed, and their relations with corresponding invariants described in the literature are also discussed. Thirteen transition rules with well-formed atomicity are presented in an operational manner. The basic security theorems corresponding to these invariants and transition rules are proved. The rationale for proposing the invariants is further explained by analyzing the differences between this model and the ones described in the literature. Last but not least, future work is outlined; in particular, it is pointed out that it is possible to use this model to analyze SE-Linux security.

  1. Spoken Document Retrieval Leveraging Unsupervised and Supervised Topic Modeling Techniques

    Science.gov (United States)

    Chen, Kuan-Yu; Wang, Hsin-Min; Chen, Berlin

    This paper describes the application of two attractive categories of topic modeling techniques to the problem of spoken document retrieval (SDR), viz. document topic model (DTM) and word topic model (WTM). Apart from using the conventional unsupervised training strategy, we explore a supervised training strategy for estimating these topic models, imagining a scenario that user query logs along with click-through information of relevant documents can be utilized to build an SDR system. This attempt has the potential to associate relevant documents with queries even if they do not share any of the query words, thereby improving on retrieval quality over the baseline system. Likewise, we also study a novel use of pseudo-supervised training to associate relevant documents with queries through a pseudo-feedback procedure. Moreover, in order to lessen SDR performance degradation caused by imperfect speech recognition, we investigate leveraging different levels of index features for topic modeling, including words, syllable-level units, and their combination. We provide a series of experiments conducted on the TDT (TDT-2 and TDT-3) Chinese SDR collections. The empirical results show that the methods deduced from our proposed modeling framework are very effective when compared with a few existing retrieval approaches.

  2. Virtual casting of stab wounds in cartilage using micro-computed tomography.

    Science.gov (United States)

    Pounder, Derrick J; Sim, Louise J

    2011-06-01

    In homicidal stabbings using a serrated knife, stab wounds involving costal cartilage leave striations on the cut surface of the cartilage from the serration points on the blade edge. Class characteristics of the serrated blade can be determined from the striation marks, and individualizing characteristics may be seen also. The traditional method for recording the striation marks involves the pernickety technique of casting with dental impression material. We assessed the potential utility of micro-computed tomography scanning of the stab track as a technique for nondestructive recording of striation patterns and virtual casting of them. Stab tracks in porcine cartilage, produced with a coarsely serrated blade, were scanned with a bench-top micro-computed tomography scanner. The typical irregularly regular striation pattern could be demonstrated, and the images manipulated, using proprietary software to produce a virtual cast. Whether the technology will have sufficient resolution to image not only class characteristic tool marks but also the much finer individualizing tool marks remains to be evaluated, but the technology shows considerable promise.

  3. Use of advanced modeling techniques to optimize thermal packaging designs.

    Science.gov (United States)

    Formato, Richard M; Potami, Raffaele; Ahmed, Iftekhar

    2010-01-01

    Through a detailed case study the authors demonstrate, for the first time, the capability of using advanced modeling techniques to correctly simulate the transient temperature response of a convective flow-based thermal shipper design. The objective of this case study was to demonstrate that simulation could be utilized to design a 2-inch-wall polyurethane (PUR) shipper to hold its product box temperature between 2 and 8 °C over the prescribed 96-h summer profile (product box is the portion of the shipper that is occupied by the payload). Results obtained from numerical simulation are in excellent agreement with empirical chamber data (within ±1 °C at all times), and geometrical locations of simulation maximum and minimum temperature match well with the corresponding chamber temperature measurements. Furthermore, a control simulation test case was run (results taken from identical product box locations) to compare the coupled conduction-convection model with a conduction-only model, which to date has been the state-of-the-art method. For the conduction-only simulation, all fluid elements were replaced with "solid" elements of identical size and assigned thermal properties of air. While results from the coupled thermal/fluid model closely correlated with the empirical data (±1 °C), the conduction-only model was unable to correctly capture the payload temperature trends, showing a sizeable error compared to empirical values (ΔT > 6 °C). A modeling technique capable of correctly capturing the thermal behavior of passively refrigerated shippers can be used to quickly evaluate and optimize new packaging designs. Such a capability provides a means to reduce the cost and required design time of shippers while simultaneously improving their performance. Another advantage comes from using thermal modeling (assuming a validated model is available) to predict the temperature distribution in a shipper that is exposed to ambient temperatures which were not bracketed

  4. ADVANCED TECHNIQUES FOR RESERVOIR SIMULATION AND MODELING OF NONCONVENTIONAL WELLS

    Energy Technology Data Exchange (ETDEWEB)

    Louis J. Durlofsky; Khalid Aziz

    2004-08-20

    Nonconventional wells, which include horizontal, deviated, multilateral and ''smart'' wells, offer great potential for the efficient management of oil and gas reservoirs. These wells are able to contact larger regions of the reservoir than conventional wells and can also be used to target isolated hydrocarbon accumulations. The use of nonconventional wells instrumented with downhole inflow control devices allows for even greater flexibility in production. Because nonconventional wells can be very expensive to drill, complete and instrument, it is important to be able to optimize their deployment, which requires the accurate prediction of their performance. However, predictions of nonconventional well performance are often inaccurate. This is likely due to inadequacies in some of the reservoir engineering and reservoir simulation tools used to model and optimize nonconventional well performance. A number of new issues arise in the modeling and optimization of nonconventional wells. For example, the optimal use of downhole inflow control devices has not been addressed for practical problems. In addition, the impact of geological and engineering uncertainty (e.g., valve reliability) has not been previously considered. In order to model and optimize nonconventional wells in different settings, it is essential that the tools be implemented into a general reservoir simulator. This simulator must be sufficiently general and robust and must in addition be linked to a sophisticated well model. Our research under this five year project addressed all of the key areas indicated above. The overall project was divided into three main categories: (1) advanced reservoir simulation techniques for modeling nonconventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and for coupling the well to the simulator (which includes the accurate calculation of well index and the modeling of multiphase flow

  5. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

    One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and exhibit dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturizing world phenomena within the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. However, a key challenge of ABMS is the difficulty of its validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  6. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

    Full Text Available One of the most interesting aspects of modelling and simulation is describing real-world phenomena that have specific properties, especially those that occur at large scales and exhibit dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturizing world phenomena within the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. However, a key challenge of ABMS is the difficulty of its validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  7. Concerning the Feasibility of Example-driven Modelling Techniques

    CERN Document Server

    Thorne, Simon R; Lawson, Z

    2008-01-01

    We report on a series of experiments concerning the feasibility of example driven modelling. The main aim was to establish experimentally within an academic environment: the relationship between error and task complexity using a) Traditional spreadsheet modelling; b) example driven techniques. We report on the experimental design, sampling, research methods and the tasks set for both control and treatment groups. Analysis of the completed tasks allows comparison of several different variables. The experimental results compare the performance indicators for the treatment and control groups by comparing accuracy, experience, training, confidence measures, perceived difficulty and perceived completeness. The various results are thoroughly tested for statistical significance using: the Chi squared test, Fisher's exact test for significance, Cochran's Q test and McNemar's test on difficulty.

  8. Advanced computer modeling techniques expand belt conveyor technology

    Energy Technology Data Exchange (ETDEWEB)

    Alspaugh, M.

    1998-07-01

    Increased mining production is continuing to challenge engineers and manufacturers to keep up. The pressure to produce larger and more versatile equipment is increasing. This paper will show some recent major projects in the belt conveyor industry that have pushed the limits of design and engineering technology. Also, it will discuss the systems engineering discipline and advanced computer modeling tools that have helped make these achievements possible. Several examples of technologically advanced designs will be reviewed. However, new technology can sometimes produce increased problems with equipment availability and reliability if not carefully developed. Computer modeling techniques that help one design larger equipment can also compound operational headaches if engineering processes and algorithms are not carefully analyzed every step of the way.

  9. EXPERIENCE WITH SYNCHRONOUS GENERATOR MODEL USING PARTICLE SWARM OPTIMIZATION TECHNIQUE

    Directory of Open Access Journals (Sweden)

    N.RATHIKA

    2014-07-01

    Full Text Available This paper addresses the modeling of a polyphase synchronous generator and the minimization of power losses using the Particle Swarm Optimization (PSO) technique with a constriction factor. Use of a polyphase synchronous generator allows the total power circulating in the system to be distributed across all phases. Another advantage of a polyphase system is that a fault in one winding does not lead to system shutdown. Process optimization is the discipline of adjusting a process so as to optimize some stipulated set of parameters without violating some constraint. Accurate values can be extracted using PSO and the model can be reformulated. Modeling and simulation of the machine are carried out. MATLAB/Simulink has been used to implement and validate the results.
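
    A generic sketch of particle swarm optimization with Clerc's constriction factor, minimizing a stand-in loss function; it illustrates the optimization technique named in this record, not the authors' generator model, and the objective, bounds, and swarm settings are arbitrary.

        import numpy as np

        rng = np.random.default_rng(2)

        def loss(x):
            """Stand-in for the machine's power-loss objective (illustrative)."""
            return np.sum((x - np.array([0.8, -0.3, 1.5])) ** 2, axis=-1)

        # Constriction factor (Clerc & Kennedy): chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|
        c1 = c2 = 2.05
        phi = c1 + c2
        chi = 2.0 / abs(2.0 - phi - np.sqrt(phi ** 2 - 4.0 * phi))

        n_particles, dim, iters = 30, 3, 200
        pos = rng.uniform(-5, 5, (n_particles, dim))
        vel = np.zeros((n_particles, dim))
        pbest = pos.copy()
        pbest_val = loss(pbest)
        gbest = pbest[np.argmin(pbest_val)].copy()

        for _ in range(iters):
            r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
            vel = chi * (vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos))
            pos = pos + vel
            val = loss(pos)
            improved = val < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], val[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()

        print("best parameters found:", np.round(gbest, 3))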

  10. Updates on measurements and modeling techniques for expendable countermeasures

    Science.gov (United States)

    Gignilliat, Robert; Tepfer, Kathleen; Wilson, Rebekah F.; Taczak, Thomas M.

    2016-10-01

    The potential threat of recently-advertised anti-ship missiles has instigated research at the United States (US) Naval Research Laboratory (NRL) into the improvement of measurement techniques for visual band countermeasures. The goal of measurements is the collection of radiometric imagery for use in the building and validation of digital models of expendable countermeasures. This paper will present an overview of measurement requirements unique to the visual band and differences between visual band and infrared (IR) band measurements. A review of the metrics used to characterize signatures in the visible band will be presented and contrasted to those commonly used in IR band measurements. For example, the visual band measurements require higher fidelity characterization of the background, including improved high-transmittance measurements and better characterization of solar conditions to correlate results more closely with changes in the environment. The range of relevant engagement angles has also been expanded to include higher altitude measurements of targets and countermeasures. In addition to the discussion of measurement techniques, a top-level qualitative summary of modeling approaches will be presented. No quantitative results or data will be presented.

  11. Techniques to Access Databases and Integrate Data for Hydrologic Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Whelan, Gene; Tenney, Nathan D.; Pelton, Mitchell A.; Coleman, Andre M.; Ward, Duane L.; Droppo, James G.; Meyer, Philip D.; Dorow, Kevin E.; Taira, Randal Y.

    2009-06-17

    This document addresses techniques to access and integrate data for defining site-specific conditions and behaviors associated with ground-water and surface-water radionuclide transport applicable to U.S. Nuclear Regulatory Commission reviews. Environmental models typically require input data from multiple internal and external sources that may include, but are not limited to, stream and rainfall gage data, meteorological data, hydrogeological data, habitat data, and biological data. These data may be retrieved from a variety of organizations (e.g., federal, state, and regional) and source types (e.g., HTTP, FTP, and databases). Available data sources relevant to hydrologic analyses for reactor licensing are identified and reviewed. The data sources described can be useful to define model inputs and parameters, including site features (e.g., watershed boundaries, stream locations, reservoirs, site topography), site properties (e.g., surface conditions, subsurface hydraulic properties, water quality), and site boundary conditions, input forcings, and extreme events (e.g., stream discharge, lake levels, precipitation, recharge, flood and drought characteristics). Available software tools for accessing established databases, retrieving the data, and integrating it with models were identified and reviewed. The emphasis in this review was on existing software products with minimal required modifications to enable their use with the FRAMES modeling framework. The ability of four of these tools to access and retrieve the identified data sources was reviewed. These four software tools were the Hydrologic Data Acquisition and Processing System (HDAPS), Integrated Water Resources Modeling System (IWRMS) External Data Harvester, Data for Environmental Modeling Environmental Data Download Tool (D4EM EDDT), and the FRAMES Internet Database Tools. The IWRMS External Data Harvester and the D4EM EDDT were identified as the most promising tools based on their ability to access and

  12. Online Searching with a Microcomputer--Getting Started.

    Science.gov (United States)

    Casbon, Susan

    1983-01-01

    Based on online searching experiences on microcomputer at a small liberal arts college, this article outlines for the novice advantages and disadvantages of micro-searching, legal implications, future trends, and factors to consider in selecting hardware and software. A 16-item bibliography arranged in order of usefulness and 10 references are…

  13. Microcomputer Learning Project. Willow Creek School Division No. 28.

    Science.gov (United States)

    Millar, Garnet; MacLeod, Alan

    Comparative research was conducted using gifted and average upper elementary and junior high school students to determine the effectiveness of utilizing microcomputers for the development of computer literacy and the implementation of curriculum courseware. Eighty students were selected as the treatment group and received instruction in the…

  14. The Surface Features of Four Microcomputer Reading Programs.

    Science.gov (United States)

    Bradley, Virginia N.

    1984-01-01

    Uses the Bradley Inventory of Reading Comprehension Software to evaluate surface characteristics (including documentation and instructional design) of microcomputer reading comprehension programs produced by Scott Foresman, Random House, Control Data Corporation, and Basic Learning Systems. All four programs received high scores, but the Scott…

  15. Use of Microcomputers for Spelling Assessment: Reasons to Be Cautious.

    Science.gov (United States)

    Varnhagen, Stanley; Gerber, Michael M.

    1984-01-01

    Two versions, via microcomputer, of the Test of Written Spelling were administered to underachieving students from a regular third-grade class and from an upper-grade, self-contained class for learning handicapped (LH) students. Results indicated that students took less time and spelled more words correctly on the standard written version.…

  16. Utilization of the Microcomputer in the Mathematics Classroom.

    Science.gov (United States)

    Pruett, Poppy L.; And Others

    1993-01-01

    Reports a study investigating the instructional use of microcomputers by secondary mathematics teachers, and discusses results from a sample of 128 completed questionnaires showing that computer utilization is hampered by inadequate access to equipment, lack of software appropriate to the mathematics curricula, and a lack of guidance for…

  17. An Observational Study of Social Behavior in Microcomputer Classrooms.

    Science.gov (United States)

    Feldmann, Shirley C.; And Others

    1991-01-01

    This study examined the effects of five variables--student grouping at the computer, keyboarding status, academic discipline, student gender, and gender of partner--on student social behavior, both verbal and affective, in microcomputer classrooms in a public business high school. The effect of these variables on teacher behavior was also…

  18. Microcomputers: Statistical Analysis Software. Evaluation Guide Number 5.

    Science.gov (United States)

    Gray, Peter J.

    This guide discusses six sets of features to examine when purchasing a microcomputer-based statistics program: hardware requirements; data management; data processing; statistical procedures; printing; and documentation. While the current statistical packages have several negative features, they are cost saving and convenient for small to moderate…

  19. Microcomputers: Word Processing. Evaluation Guides. Guide Number 3.

    Science.gov (United States)

    Gray, Peter J.

    Designed to provide guidance in selecting the appropriate microcomputer-based word processing program, this document discusses the key characteristics of word processing software, including formatting, editing, merging, and printing. Possible capabilities of word processing features are identified, i.e., indent, tab, center, creation of footnotes,…

  20. Microcomputers and Evaluation. Evaluation Guides: Guide Number 1.

    Science.gov (United States)

    Gray, Peter J.

    The potential uses of microcomputers in evaluation research are discussed in this pamphlet. At the beginning, a matrix is provided showing the relationship between the steps in the evaluation research process and common types of computer software. Thereafter, the guide is organized sequentially around the evaluation research activities that are…

  1. Integrating Microcomputers and Microelectronics into the Physics Curriculum.

    Science.gov (United States)

    Gale, Douglas S.

    1980-01-01

    Describes an interdisciplinary microcomputer and microelectronics program offered jointly by the Physics and Computer Science Departments of East Texas State University. The program operates on both the graduate and undergraduate level. Content as well as structure of the program are discussed. (Author/DS)

  2. Multi-microcomputer system for Monte-Carlo calculations

    CERN Document Server

    Berg, B; Krasemann, H

    1981-01-01

    The authors propose a microcomputer system that allows parallel processing for Monte Carlo calculations in lattice gauge theories, simulations of high energy physics experiments and many other fields of current interest. The master-n-slave multiprocessor system is based on the Motorola MC 68000 microprocessor. One attraction of this processor is that it allows up to 16 MByte of random access memory.
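
    To make the master-n-slave idea concrete, the following minimal sketch distributes independent Monte Carlo runs across worker processes and lets the master combine the results. It is an illustration in Python using a modern multiprocessing pool, not the MC 68000 hardware described in the record, and the pi-estimation kernel is a toy stand-in for a lattice-gauge or detector simulation.

```python
# Minimal sketch of the master-n-slave idea: a master process hands independent
# Monte Carlo jobs (distinguished only by their random seeds) to worker processes
# and averages the returned estimates. The pi-estimation kernel is illustrative.
import random
from multiprocessing import Pool

def slave(seed, n_samples=100_000):
    """One independent Monte Carlo run: estimate pi by sampling the unit square."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / n_samples

if __name__ == "__main__":
    n_slaves = 8                      # number of worker processes ("slaves")
    with Pool(n_slaves) as pool:
        estimates = pool.map(slave, range(n_slaves))   # master distributes seeds
    print("combined estimate:", sum(estimates) / len(estimates))
```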

  3. Matrix algebra routines for the Acorn Archimedes microcomputer: example applications.

    Science.gov (United States)

    Fielding, A

    1988-08-01

    A set of matrix algebra routines has been written, as BASICV procedures, for the Acorn Archimedes microcomputer. It is shown that these procedures execute so quickly that programs which require matrix algebra computations can be written in interpreted BASIC. Two example applications, reciprocal averaging and principal components analysis, are demonstrated.
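
    As an illustration of one of the two example applications, the sketch below performs a small principal components analysis with NumPy matrix algebra. It does not reproduce the original BASICV procedures, and the sites-by-variables data matrix is hypothetical.

```python
# Minimal sketch of principal components analysis, one of the two example
# applications mentioned above, expressed with NumPy matrix algebra.
import numpy as np

def pca(data, n_components=2):
    centred = data - data.mean(axis=0)              # centre each variable
    cov = np.cov(centred, rowvar=False)             # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)          # symmetric eigendecomposition
    order = np.argsort(eigvals)[::-1]               # sort by decreasing variance
    components = eigvecs[:, order[:n_components]]
    scores = centred @ components                   # project observations
    return scores, eigvals[order]

data = np.random.default_rng(0).normal(size=(30, 5))   # hypothetical data table
scores, variances = pca(data)
print(scores.shape, variances[:2])
```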

  4. Planning the Use of Microcomputers in Higher Education Administration.

    Science.gov (United States)

    Slovacek, Simeon P.; Dolence, Michael G.

    The process of planning the role of the microcomputer in higher education administration is investigated through a survey of a sample of universities and colleges in California engaged in such efforts, and through a review of literature in education as well as computing. A major objective of the study was to systematically investigate the…

  5. Use of Microcomputers for Spelling Assessment: Reasons to Be Cautious.

    Science.gov (United States)

    Varnhagen, Stanley; Gerber, Michael M.

    1984-01-01

    Two versions, via microcomputer, of the Test of Written Spelling were administered to underachieving students from a regular third-grade class and from an upper-grade, self-contained class for learning handicapped (LH) students. Results indicated that students took less time and spelled more words correctly on the standard written version.…

  6. A Micro-Computer Based Tutor for Teaching Arithmetic Skills.

    Science.gov (United States)

    Attisha, M.; Yazdani, M.

    1983-01-01

    Describes a knowledge-based tutoring system which provides pupil interaction with the microcomputer to diagnose pupils' errors in subtraction operations. Current subtraction methods; nature and origin of subtraction errors; and the structure, achievements, and future developments of the computer system are included. Thirteen references and a…

  7. Microcomputer-Based Digital Signal Processing Laboratory Experiments.

    Science.gov (United States)

    Tinari, Jr., Rocco; Rao, S. Sathyanarayan

    1985-01-01

    Describes a system (Apple II microcomputer interfaced to flexible, custom-designed digital hardware) which can provide: (1) Fast Fourier Transform (FFT) computation on real-time data with a video display of spectrum; (2) frequency synthesis experiments using the inverse FFT; and (3) real-time digital filtering experiments. (JN)
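
    The first capability listed, an FFT of sampled data with a displayed spectrum, can be sketched as follows with NumPy in place of the Apple II hardware; the 1 kHz test tone and 8 kHz sampling rate are arbitrary choices for the illustration.

```python
# Illustration of item (1) above: computing an FFT of sampled data and the
# magnitude spectrum that would be displayed.
import numpy as np

fs = 8000                                   # sampling rate in Hz
t = np.arange(1024) / fs
signal = np.sin(2 * np.pi * 1000 * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)

spectrum = np.fft.rfft(signal * np.hanning(t.size))      # windowed FFT
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
magnitude_db = 20 * np.log10(np.abs(spectrum) + 1e-12)   # spectrum in dB

print("peak at %.0f Hz" % freqs[np.argmax(magnitude_db)])
```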

  8. An Integrated Library System from Existing Microcomputer Programs.

    Science.gov (United States)

    Kuntz, Lynda S.

    1988-01-01

    Demonstrates how three commercial microcomputer software packages--PC-Talk III, Wordstar, and dBase III--were combined to produce an integrated library system at the U.S. Army Concepts Analysis Agency library. The retrospective conversion process is discussed, and the four modules of the system are described: acquisitions/cataloging; online…

  9. Instructional Microcomputer Applications by Business Teachers in Minnesota.

    Science.gov (United States)

    Lambrecht, Judith J.

    Data were collected from Minnesota secondary and postsecondary business teachers regarding their instructional microcomputer applications and their attitudes about several instructional computing issues. Usable surveys were returned by 342 teachers in 236 schools. The predominant brand of computer at the secondary level was the Apple II; most…

  10. Using a Microcomputer to Communicate: Part 2: Specialized Software.

    Science.gov (United States)

    Fenichel, Carol Hansen; Murphy, John J.

    1985-01-01

    Discusses microcomputer software designed specifically to be used in the online bibliographic database search process. The specialized communications software discussed includes programs that aid in database selection, logging on, uploading, searching, downloading, post-processing, and recordkeeping. A directory of specialized communications…

  11. Surface area and volume measurements of volcanic ash particles using micro-computed tomography (micro-CT): A comparison with scanning electron microscope (SEM) stereoscopic imaging and Brunauer-Emmett-Teller (BET) model

    Science.gov (United States)

    Ersoy, Orkun; Şen, Erdal; Aydar, Erkan; Tatar, İlkan; Çelik, H. Hamdi

    2010-05-01

    Volcanic ash particles are important components of explosive eruptions, and their surface texture is the subject of intense research. Characterization of ash surfaces is crucial for understanding the physics of volcanic plumes, remote sensing measurements of ash and aerosols, interfacial processes, modelling transportation and deposition of tephra, and characterizing eruptive styles. A number of different methods have been used over the years to arrive at surface area estimates. The more common methods include estimates based on geometric considerations (geometric surface area) and on the physisorption of gas molecules on the surface of interest (physical surface area). In this study, micro-computed tomography (micro-CT), a non-destructive method providing three-dimensional data, enabled the measurement of surface areas and volumes of individual ash particles. Specific surface area estimates for ash particles were also obtained using nitrogen as the gas adsorbent and the BET (Brunauer-Emmett-Teller) model. Results were compared with the values obtained from SEM stereoscopic imaging and geometric considerations. Surface area estimates from micro-CT and SEM stereoscopic imaging overlap, with mean specific surface area results of 0.0167 and 0.0214 m²/g, respectively. However, ash particle surface textures deviate considerably from their idealized geometric forms, and both the sphere and the ellipsoid approximations proved inadequate for representing real ash surfaces. The much higher surface area estimate (> 0.4 m²/g) obtained from the technique based on physical sorption of gases (here the BET model) was attributed to its ability to capture surface area associated even with angstrom-sized pores. SEM stereoscopic and/or micro-CT imaging were suggested for characterizing textures in the macro-pore regions of ash particles.
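
    For context on the scale of the geometric estimates quoted above, the sketch below computes the specific surface area of a solid sphere, the simplest of the geometric approximations mentioned. The 100 μm diameter and the 2500 kg/m³ glass density are hypothetical values chosen only to illustrate why geometric and micro-CT/SEM estimates sit far below the BET value.

```python
# Sketch of the "geometric surface area" baseline discussed above: the specific
# surface area (m^2/g) of a sphere of a given diameter, for comparison with the
# micro-CT and SEM estimates quoted in the abstract. Diameter and density are
# hypothetical values chosen only to illustrate the order of magnitude.
import math

def sphere_specific_surface_area(diameter_m, density_kg_m3):
    """Surface area per unit mass of a solid sphere, in m^2/g."""
    surface = math.pi * diameter_m ** 2
    volume = math.pi * diameter_m ** 3 / 6.0
    return surface / (volume * density_kg_m3) / 1000.0   # m^2/kg -> m^2/g

print(sphere_specific_surface_area(diameter_m=100e-6, density_kg_m3=2500.0))
# ~0.024 m^2/g: the same order as the micro-CT/SEM values, far below the BET estimate.
```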

  12. A novel in vivo vascular imaging approach for hierarchical quantification of vasculature using contrast enhanced micro-computed tomography.

    Directory of Open Access Journals (Sweden)

    Laura Nebuloni

    Full Text Available The vasculature of body tissues is continuously subject to remodeling processes originating at the micro-vascular level. The formation of new blood vessels (angiogenesis is essential for a number of physiological and pathophysiological processes such as tissue regeneration, tumor development and the integration of artificial tissues. There are currently no time-lapsed in vivo imaging techniques providing information on the vascular network at the capillary level in a non-destructive, three-dimensional and high-resolution fashion. This paper presents a novel imaging framework based on contrast enhanced micro-computed tomography (micro-CT for hierarchical in vivo quantification of blood vessels in mice, ranging from largest to smallest structures. The framework combines for the first time a standard morphometric approach with densitometric analysis. Validation tests showed that the method is precise and robust. Furthermore, the framework is sensitive in detecting different perfusion levels after the implementation of a murine ischemia-reperfusion model. Correlation with both histological data and micro-CT analysis of vascular corrosion casts confirmed accuracy of the method. The newly developed time-lapsed imaging approach shows high potential for in vivo monitoring of a number of different physiological and pathological conditions in angiogenesis and vascular development.

  13. NVC Based Model for Selecting Effective Requirement Elicitation Technique

    Directory of Open Access Journals (Sweden)

    Md. Rizwan Beg

    2012-10-01

    Full Text Available The requirements engineering process starts with the gathering of requirements, i.e., requirements elicitation. Requirements elicitation (RE) is the base building block for a software project and has a very high impact on the subsequent design and build phases. Failure to capture system requirements accurately is a major factor in the failure of most software projects. Given the criticality and impact of this phase, it is very important to perform requirements elicitation as close to perfectly as possible. One of the most difficult jobs for the elicitor is to select an appropriate technique for eliciting the requirements. Interviewing and interacting with stakeholders during the elicitation process is a communication-intensive activity involving verbal and non-verbal communication (NVC). The elicitor should give emphasis to non-verbal communication along with verbal communication so that requirements are recorded more efficiently and effectively. In this paper we propose a model in which stakeholders are classified by observing non-verbal communication, and this classification is used as a basis for elicitation technique selection. We also propose an efficient plan for requirements elicitation intended to overcome the constraints faced by the elicitor.

  14. Total laparoscopic gastrocystoplasty: experimental technique in a porcine model

    Directory of Open Access Journals (Sweden)

    Frederico R. Romero

    2007-02-01

    Full Text Available OBJECTIVE: Describe a unique simplified experimental technique for total laparoscopic gastrocystoplasty in a porcine model. MATERIAL AND METHODS: We performed laparoscopic gastrocystoplasty on 10 animals. The gastroepiploic arch was identified and carefully mobilized from its origin at the pylorus to the beginning of the previously demarcated gastric wedge. The gastric segment was resected with sharp dissection. Both gastric suturing and gastrovesical anastomosis were performed with absorbable running sutures. The complete procedure and stages of gastric dissection, gastric closure, and gastrovesical anastomosis were separately timed for each laparoscopic gastrocystoplasty. The end-result of the gastric suturing and the bladder augmentation were evaluated by fluoroscopy or endoscopy. RESULTS: Mean total operative time was 5.2 (range 3.5 - 8) hours: 84.5 (range 62 - 110) minutes for the gastric dissection, 56 (range 28 - 80) minutes for the gastric suturing, and 170.6 (range 70 - 200) minutes for the gastrovesical anastomosis. A cystogram showed a small leakage from the vesical anastomosis in the first two cases. No extravasation from gastric closure was observed in the postoperative gastrogram. CONCLUSIONS: Total laparoscopic gastrocystoplasty is a feasible but complex procedure that currently has limited clinical application. With the increasing use of laparoscopy in reconstructive surgery of the lower urinary tract, gastrocystoplasty may become an attractive option because of its potential advantages over techniques using small and large bowel segments.

  15. Model Rockets and Microchips.

    Science.gov (United States)

    Fitzsimmons, Charles P.

    1986-01-01

    Points out the instructional applications and program possibilities of a unit on model rocketry. Describes the ways that microcomputers can assist in model rocket design and in problem calculations. Provides a descriptive listing of model rocket software for the Apple II microcomputer. (ML)

  16. CIVA workstation for NDE: mixing of NDE techniques and modeling

    Energy Technology Data Exchange (ETDEWEB)

    Benoist, P.; Besnard, R. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Procedes et Systemes Avances]; Bayon, G. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Reacteurs Experimentaux]; Boutaine, J.L. [CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France). Dept. des Applications et de la Metrologie des Rayonnements Ionisants]

    1994-12-31

    In order to compare the capabilities of different NDE techniques, or to use complementary inspection methods, the same components are examined with different procedures. It is then very useful to have a single evaluation tool allowing direct comparison of the methods: CIVA is an open system for processing NDE data; it is adapted to a standard work station (UNIX, C, MOTIF) and can read different supports on which the digitized data are stored. It includes a large library of signal and image processing methods accessible and adapted to NDE data (filtering, deconvolution, 2D and 3D spatial correlations...). Different CIVA application examples are described: brazing inspection (neutronography, ultrasonic), tube inspection (eddy current, ultrasonic), aluminium welds examination (UT and radiography). Modelling and experimental results are compared. 16 fig., 7 ref.

  17. Demand Management Based on Model Predictive Control Techniques

    Directory of Open Access Journals (Sweden)

    Yasser A. Davizón

    2014-01-01

    Full Text Available Demand management (DM) is the process that helps companies to sell the right product to the right customer, at the right time, and for the right price. Therefore, the challenge for any company is to determine how much to sell, at what price, and to which market segment while maximizing its profits. DM also helps managers efficiently allocate undifferentiated units of capacity to the available demand with the goal of maximizing revenue. This paper introduces a control system approach to demand management with dynamic pricing (DP) using the model predictive control (MPC) technique. In addition, we present a proper dynamical system analogy based on active suspension, and a stability analysis is provided via the Lyapunov direct method.

  18. Microcomputer-Assisted Biomass Determination of Plankton Bacteria on Scanning Electron Micrographs

    Science.gov (United States)

    Krambeck, Christiane; Krambeck, Hans-Jürgen; Overbeck, Jürgen

    1981-01-01

    Although biovolume is a better measure of biomass than is cell number, biovolumes have rarely been measured because their evaluation is extremely time-consuming. We developed a microcomputer system that assists cell size measurements on images of filtered plankton: scanning electron micrograph negatives were projected on a digitizer field, bacterial length and width were marked by a cursor, and coordinates were directly transferred to an MOS 6502 microcomputer (KIM 1). The dialogue program BABI organized and controlled the digitizer measurements in cooperation with the user, enabled corrections, and printed out results with 95% confidence limits and sample description. The time for scanning electron micrograph preparation was reduced to 15 min (quick transfer to Freon 113 during filtration and air drying). Altogether, this biovolume determination took about 2.5 h for confidence limits of ±15%. Examples are given for applications of the method: (i) comparison of 10 lakes (with specific activities for glucose uptake and for heterotrophic CO2 fixation); (ii) ranges of biomass parameters in one lake; (iii) diurnal cycles (with synchronizing effects, uptake of algal exudates, and calculation of daily growth). This method is discussed in relation to other biomass methods (epifluorescent microscopy, lipopolysaccharide technique, frequency of dividing cells) and the problem of biovolume-to-carbon conversions. PMID:16345807
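
    The per-cell bookkeeping described above can be sketched as follows. The abstract does not state the volume formula used by the BABI program, so the common rod-with-hemispherical-caps convention is assumed here, the 95% confidence limits use a normal approximation, and the digitizer measurements are hypothetical.

```python
# Sketch of the per-cell biovolume bookkeeping described above. The volume
# formula V = (pi/4) * w^2 * (l - w/3) for rods with hemispherical caps is an
# assumption, as is the list of digitizer measurements.
import math
import statistics

def cell_volume(length_um, width_um):
    return math.pi / 4.0 * width_um ** 2 * (length_um - width_um / 3.0)

# hypothetical digitizer measurements (length, width) in micrometres
cells = [(1.2, 0.5), (0.9, 0.4), (1.5, 0.6), (1.1, 0.5), (1.0, 0.45)]
volumes = [cell_volume(l, w) for l, w in cells]

mean = statistics.mean(volumes)
half_width = 1.96 * statistics.stdev(volumes) / math.sqrt(len(volumes))
print(f"mean biovolume {mean:.3f} um^3  (95% CI +/- {half_width:.3f})")
```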

  19. Micro-Computed Tomographic Evaluation of Hard Tissue Debris Removal after Different Irrigation Methods and Its Influence on the Filling of Curved Canals.

    Science.gov (United States)

    Freire, Laila Gonzales; Iglecias, Elaine Faga; Cunha, Rodrigo Sanches; Dos Santos, Marcelo; Gavini, Giulio

    2015-10-01

    The aim of this study was to compare the efficacy of passive ultrasonic irrigation (PUI) and the EndoVac (EV) System (Discus Dental, Culver City, CA) in hard tissue debris removal and its influence on the quality of the root canal filling with the aid of micro-computed tomographic scanner. Twenty-four mandibular molars were subjected to 4 microtomographic scannings (ie, before and after instrumentation, after final irrigation, and after obturation) using the SkyScan 1176 X-ray microtomograph (Bruker microCT, Kontich, Belgium) at a resolution of 17.42 μm. Mesial canals were prepared using R25 Reciproc instruments (VDW GmbH, Munich, Germany) and divided into 2 groups according to the final irrigation method: the PUI group (n = 12) and the EV group (n = 12). All specimens were filled with the continuous wave of condensation technique. CTAn and CTvol software (Bruker microCT) were used for volumetric analysis and 3-dimensional model reconstruction of the root canals, hard tissue debris, and the filling material. Data were statistically analyzed using the Student t test. Analysis of the micro-computed tomographic scans revealed debris accumulated inside the root canals, occupying an average of 3.4% of the canal's volume. Irrigation with PUI and the EV system reduced the volume of hard tissue debris in 55.55% and 53.65%, respectively, with no statistical difference between them (P > .05). Also, there was no difference among the groups with regard to the volume of filling material and voids (P > .05). PUI and the EV system were equally efficient in the removal of hard tissue debris and the quality of root canal filling was similar in both groups, with no influence from the irrigation method. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.
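
    The percentages reported above follow from simple voxel bookkeeping on the segmented micro-CT volumes; a sketch of that arithmetic is shown below, with hypothetical boolean masks standing in for the CTAn segmentations.

```python
# Sketch of the volumetric bookkeeping behind the percentages reported above:
# debris volume as a fraction of canal volume, and the percent reduction after
# the final irrigation step. The boolean voxel masks are hypothetical stand-ins
# for the segmented micro-CT data.
import numpy as np

rng = np.random.default_rng(2)
canal = rng.random((64, 64, 64)) < 0.20                    # voxels inside the root canal
debris_before = canal & (rng.random(canal.shape) < 0.034)  # debris after instrumentation
debris_after = debris_before & (rng.random(canal.shape) < 0.45)  # debris after irrigation

voxel_volume_mm3 = (17.42e-3) ** 3                         # 17.42 um isotropic voxels

share = debris_before.sum() / canal.sum() * 100
reduction = (1 - debris_after.sum() / debris_before.sum()) * 100
print(f"debris before irrigation: {debris_before.sum() * voxel_volume_mm3:.4f} mm^3 "
      f"({share:.1f}% of canal volume); reduced by {reduction:.1f}%")
```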

  20. A microcomputer based system for current-meter data acquisition

    Science.gov (United States)

    Cheng, R.T.; Gartner, J.W.

    1979-01-01

    The U.S. Geological Survey is conducting current measurements as part of an interdisciplinary study of the San Francisco Bay estuarine system. The current meters used in the study record current speed, direction, temperature, and conductivity in digital codes on magnetic tape cartridges. Upon recovery of the current meters, the data tapes are translated by a tape reader into computer codes for further analyses. Quite often the importance of the data processing phase of a current-measurement program is underestimated and downplayed. In this paper a data-processing system which performs the complete data processing and analyses is described. The system, which is configured around an LSI-11 microcomputer, has been assembled to provide the capabilities of data translation, reduction, and tabulation and graphical display immediately following recovery of current meters. The flexibility inherent in a microcomputer has made it available to perform many other research functions which would normally be done on an institutional computer.

  1. Study of Semi-Span Model Testing Techniques

    Science.gov (United States)

    Gatlin, Gregory M.; McGhee, Robert J.

    1996-01-01

    An investigation has been conducted in the NASA Langley 14- by 22-Foot Subsonic Tunnel in order to further the development of semi-span testing capabilities. A twin engine, energy efficient transport (EET) model with a four-element wing in a takeoff configuration was used for this investigation. Initially a full span configuration was tested and force and moment data, wing and fuselage surface pressure data, and fuselage boundary layer measurements were obtained as a baseline data set. The semi-span configurations were then mounted on the wind tunnel floor, and the effects of fuselage standoff height and shape as well as the effects of the tunnel floor boundary layer height were investigated. The effectiveness of tangential blowing at the standoff/floor juncture as an active boundary-layer control technique was also studied. Results indicate that the semi-span configuration was more sensitive to variations in standoff height than to variations in floor boundary layer height. A standoff height equivalent to 30 percent of the fuselage radius resulted in better correlation with full span data than no standoff or the larger standoff configurations investigated. Undercut standoff leading edges or the use of tangential blowing in the standoff/ floor juncture improved correlation of semi-span data with full span data in the region of maximum lift coefficient.

  2. Verification Techniques for Parameter Selection and Bayesian Model Calibration Presented for an HIV Model

    Science.gov (United States)

    Wentworth, Mami Tonoe

    Uncertainty quantification plays an important role when making predictive estimates of model responses. In this context, uncertainty quantification is defined as quantifying and reducing uncertainties, and the objective is to quantify uncertainties in parameters, models, and measurements, and propagate the uncertainties through the model, so that one can make a predictive estimate with quantified uncertainties. Two of the aspects of uncertainty quantification that must be performed prior to propagating uncertainties are model calibration and parameter selection. There are several efficient techniques for these processes; however, the accuracy of these methods is often not verified. This is the motivation for our work, and in this dissertation, we present and illustrate verification frameworks for model calibration and parameter selection in the context of biological and physical models. First, HIV models, developed and improved by [2, 3, 8], describe the viral infection dynamics of HIV disease. These are also used to make predictive estimates of viral loads and T-cell counts and to construct an optimal control for drug therapy. Estimating input parameters is an essential step prior to uncertainty quantification. However, not all the parameters are identifiable, implying that they cannot be uniquely determined by the observations. These unidentifiable parameters can be partially removed by performing parameter selection, a process in which parameters that have minimal impacts on the model response are determined. We provide verification techniques for Bayesian model calibration and parameter selection for an HIV model. As an example of a physical model, we employ a heat model with experimental measurements presented in [10]. A steady-state heat model represents a prototypical behavior for the heat conduction and diffusion processes involved in a thermal-hydraulic model, which is a part of nuclear reactor models. We employ this simple heat model to illustrate verification

  3. Simulation of Heat Transfer and Electromagnetic Fields of Protected Microcomputers

    Directory of Open Access Journals (Sweden)

    Josef Lakatos

    2006-01-01

    Full Text Available The paper presents results of a collaboration between the Department of Mechatronics and Electronics at the University of Žilina and VÚVT Engineering a.s. Žilina in the area of heat transfer simulations and simulations of disturbing electromagnetic radiation in computer construction. The simulation results were used in the development of protected microcomputer prototypes within the applied research carried out at both workplaces.

  4. Use of Data Base Microcomputer Software in Descriptive Nursing Research

    OpenAIRE

    Chapman, Judy Jean

    1985-01-01

    Data base microcomputer software was used to design a file for data storage and retrieval in a qualitative nursing research project. The needs of 50 breast feeding mothers from birth to four months were studied. One thousand records with descriptive nursing data were entered into the file. The search and retrieval capability of data base software facilitated this qualitative research. The findings will be discussed in three areas: (1) infant concerns, (2) postpartum concerns, and (3) breast c...

  5. A Method for Evaluation of Microcomputers for Tactical Applications.

    Science.gov (United States)

    1980-06-01

    Excerpts from the report discuss the Ada programming language, named after the world's first computer programmer, Lady Ada Lovelace. Ada, the Defense Department's high-order language (HOL), is being developed to serve programming needs for real-time processing, including microprocessor applications, and is intended for use as a program development tool hosted on small development systems.

  6. The microcomputer: A tool for personal language learning

    Directory of Open Access Journals (Sweden)

    David H. Wyatt

    2013-02-01

    Full Text Available Computer-assisted methods of teaching and learning languages have been surrounded by controversy and debate for over a decade. In 1979, however, microcomputers began to appear in a form suitable for educational applications, offering for the first time an alternative to both the cost and the approach of large computer systems. The impact of the microcomputer has been limited by a number of factors, and microcomputer-assisted learning is still in a relative state of infancy. The main implications for language teaching and learning are only now beginning to be understood, just as the limiting factors are starting to disappear. This paper will assess the present situation and outline some likely future developments in the use of microcomputers in language learning. Computer-assisted methods for the teaching and learning of languages have been surrounded by disagreement for more than a decade. In 1979, however, microcomputers began to appear in a form suitable for educational purposes. For the first time there was an alternative to the cost of, and the approach taken by, large computer systems. The impact of the microcomputer has been hampered by a number of factors, and computer-assisted instruction is still in its infancy. The most important implications for the learning and teaching of languages are only now beginning to register as the limiting factors start to disappear. This article assesses the current situation and outlines possible future developments in the use of microcomputers in language learning.

  7. A Rutherford Scattering Simulation with Microcomputer Graphics.

    Science.gov (United States)

    Calle, Carlos I.; Wright, Lavonia F.

    1989-01-01

    Lists a program for a simulation of Rutherford's gold foil experiment in BASIC for both Apple II and IBM compatible computers. Compares Rutherford's model of the atom with Thomson's plum pudding model of the atom. (MVL)

  8. Semantic techniques for enabling knowledge reuse in conceptual modelling

    NARCIS (Netherlands)

    Gracia, J.; Liem, J.; Lozano, E.; Corcho, O.; Trna, M.; Gómez-Pérez, A.; Bredeweg, B.

    2010-01-01

    Conceptual modelling tools allow users to construct formal representations of their conceptualisations. These models are typically developed in isolation, unrelated to other user models, thus losing the opportunity of incorporating knowledge from other existing models or ontologies that might enrich

  9. Micro-computed tomography imaging and analysis in developmental biology and toxicology.

    Science.gov (United States)

    Wise, L David; Winkelmann, Christopher T; Dogdas, Belma; Bagchi, Ansuman

    2013-06-01

    Micro-computed tomography (micro-CT) is a high resolution imaging technique that has expanded and strengthened in use since it was last reviewed in this journal in 2004. The technology has expanded to include more detailed analysis of bone, as well as soft tissues, by use of various contrast agents. It is increasingly applied to questions in developmental biology and developmental toxicology. Relatively high-throughput protocols now provide a powerful and efficient means to evaluate embryos and fetuses subjected to genetic manipulations or chemical exposures. This review provides an overview of the technology, including scanning, reconstruction, visualization, segmentation, and analysis of micro-CT generated images. This is followed by a review of more recent applications of the technology in some common laboratory species that highlight the diverse issues that can be addressed.

  10. Autonomous selection of PDE inpainting techniques vs. exemplar inpainting techniques for void fill of high resolution digital surface models

    Science.gov (United States)

    Rahmes, Mark; Yates, J. Harlan; Allen, Josef DeVaughn; Kelley, Patrick

    2007-04-01

    High resolution Digital Surface Models (DSMs) may contain voids (missing data) due to the data collection process used to obtain the DSM, inclement weather conditions, low returns, system errors/malfunctions for various collection platforms, and other factors. DSM voids are also created during bare earth processing where culture and vegetation features have been extracted. The Harris LiteSite TM Toolkit handles these void regions in DSMs via two novel techniques. We use both partial differential equations (PDEs) and exemplar based inpainting techniques to accurately fill voids. The PDE technique has its origin in fluid dynamics and heat equations (a particular subset of partial differential equations). The exemplar technique has its origin in texture analysis and image processing. Each technique is optimally suited for different input conditions. The PDE technique works better where the area to be void filled does not have disproportionately high frequency data in the neighborhood of the boundary of the void. Conversely, the exemplar based technique is better suited for high frequency areas. Both are autonomous with respect to detecting and repairing void regions. We describe a cohesive autonomous solution that dynamically selects the best technique as each void is being repaired.
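
    A minimal sketch of the selection step described above is given below: estimate the high-frequency content of the DSM in a ring around a void and dispatch the void to a diffusion-style (PDE) fill or an exemplar-style fill accordingly. The gradient-energy measure and the threshold are assumptions for illustration, not the LiteSite implementation.

```python
# Sketch of the autonomous selection step: measure how much high-frequency relief
# surrounds a void and route the void to a diffusion-style (PDE) fill or an
# exemplar-style fill. The roughness measure and threshold are assumptions.
import numpy as np
from scipy import ndimage

def choose_fill_method(dsm, void_mask, ring_width=5, threshold=4.0):
    """Return 'pde' or 'exemplar' for one void region in a DSM (NaN-free array)."""
    ring = ndimage.binary_dilation(void_mask, iterations=ring_width) & ~void_mask
    gy, gx = np.gradient(dsm)
    roughness = np.sqrt(gy**2 + gx**2)[ring].mean()   # mean slope magnitude near the void
    return "exemplar" if roughness > threshold else "pde"

dsm = np.cumsum(np.random.default_rng(3).normal(size=(200, 200)), axis=0)  # synthetic surface
void = np.zeros_like(dsm, dtype=bool)
void[90:110, 90:110] = True
print(choose_fill_method(dsm, void))
```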

  11. PENGEMBANGAN MODEL INTERNALISASI NILAI KARAKTER DALAM PEMBELAJARAN SEJARAH MELALUI MODEL VALUE CLARIFICATION TECHNIQUE [Development of a Model for Internalizing Character Values in History Learning through the Value Clarification Technique Model]

    Directory of Open Access Journals (Sweden)

    Nunuk Suryani

    2013-07-01

    Full Text Available This research produces a model for internalizing character values in history learning through the Value Clarification Technique (VCT), as a revitalization of the role of social studies in the formation of national character. In general, the research consists of three stages: (1) a pre-survey identifying the current condition of character-value learning in junior secondary (SMP) history instruction; (2) development of a model based on the pre-survey findings, using the Dick and Carey model; and (3) validation of the model. Model development was carried out with limited trials and extensive testing. The findings lead to the conclusion that the VCT model is effective for internalizing character values in history learning and for increasing the role of history learning in the formation of student character. It can be concluded that the VCT model is effective for improving the quality of the processes and products of learning character values in junior secondary social studies, especially in Surakarta. Keywords: internalization, character values, VCT model, history learning, social studies learning. This research aims to produce a model for internalizing character values in social studies (IPS) learning through the Value Clarification Technique model, as a revitalization of the role of social studies learning in the formation of national character. Broadly, the research stages comprise (1) a pre-survey to identify the current state of character-value learning in ongoing junior secondary history and social studies instruction, (2) development of a model based on the pre-survey results, using the Dick and Carey model, and (3) validation of the model. Model development was carried out with limited and extensive trials. The findings of this research lead to the conclusion that the VCT model is effective in internalizing character values in history learning. The VCT model is effective for increasing the role of history learning in

  12. Establishment of C6 brain glioma models through stereotactic technique for laser interstitial thermotherapy research

    Directory of Open Access Journals (Sweden)

    Jian Shi

    2015-01-01

    Conclusion: The rat C6 brain glioma model established in this study was a perfect model for studying LITT of glioma. The infrared thermography technique measured temperature conveniently and effectively; it is noninvasive, and the acquired data can be further processed with the software used in LITT research. To measure deep-tissue temperature, combining a thermocouple with infrared thermography would give better results.

  13. Frequency Weighted Model Order Reduction Technique and Error Bounds for Discrete Time Systems

    Directory of Open Access Journals (Sweden)

    Muhammad Imran

    2014-01-01

    for the whole frequency range. However, certain applications (like controller reduction) require frequency weighted approximation, which introduces the concept of using frequency weights in model reduction techniques. Limitations of some existing frequency weighted model reduction techniques include lack of stability of reduced order models (for the two sided weighting case) and the absence of frequency response error bounds. A new frequency weighted technique for balanced model reduction for discrete time systems is proposed. The proposed technique guarantees stable reduced order models even for the case when two sided weightings are present. An efficient technique for computing frequency weighted Gramians is also proposed. Results are compared with other existing frequency weighted model reduction techniques for discrete time systems. Moreover, the proposed technique yields frequency response error bounds.
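
    The frequency-weighted algorithm and its error bounds are not reproduced here, but the sketch below shows the unweighted discrete-time balanced truncation that such techniques extend: solve the two discrete Lyapunov equations, balance the Gramians, and keep the states with the largest Hankel singular values. The example system is arbitrary and assumed stable and minimal.

```python
# Sketch of unweighted discrete-time balanced truncation, the baseline that
# frequency-weighted techniques such as the one above extend.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov, cholesky, svd

def balanced_truncation(A, B, C, r):
    P = solve_discrete_lyapunov(A, B @ B.T)        # controllability Gramian
    Q = solve_discrete_lyapunov(A.T, C.T @ C)      # observability Gramian
    Lp = cholesky(P, lower=True)
    Lq = cholesky(Q, lower=True)
    U, s, Vt = svd(Lq.T @ Lp)                      # s = Hankel singular values
    T = Lp @ Vt.T @ np.diag(s ** -0.5)             # balancing transformation
    Ti = np.diag(s ** -0.5) @ U.T @ Lq.T
    Ar, Br, Cr = (Ti @ A @ T)[:r, :r], (Ti @ B)[:r], (C @ T)[:, :r]
    return Ar, Br, Cr, s

A = np.diag([0.9, 0.5, 0.1]); B = np.ones((3, 1)); C = np.ones((1, 3))
Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=2)
print("Hankel singular values:", np.round(hsv, 4))
```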

  14. A Titration Technique for Demonstrating a Magma Replenishment Model.

    Science.gov (United States)

    Hodder, A. P. W.

    1983-01-01

    Conductiometric titrations can be used to simulate subduction-setting volcanism. Suggestions are made as to the use of this technique in teaching volcanic mechanisms and geochemical indications of tectonic settings. (JN)

  15. New Developments and Techniques in Structural Equation Modeling

    CERN Document Server

    Marcoulides, George A

    2001-01-01

    Featuring contributions from some of the leading researchers in the field of SEM, most chapters are written by the author(s) who originally proposed the technique and/or contributed substantially to its development. Content highlights include latent varia

  16. Molecular dynamics techniques for modeling G protein-coupled receptors.

    Science.gov (United States)

    McRobb, Fiona M; Negri, Ana; Beuming, Thijs; Sherman, Woody

    2016-10-01

    G protein-coupled receptors (GPCRs) constitute a major class of drug targets and modulating their signaling can produce a wide range of pharmacological outcomes. With the growing number of high-resolution GPCR crystal structures, we have the unprecedented opportunity to leverage structure-based drug design techniques. Here, we discuss a number of advanced molecular dynamics (MD) techniques that have been applied to GPCRs, including long time scale simulations, enhanced sampling techniques, water network analyses, and free energy approaches to determine relative binding free energies. On the basis of the many success stories, including those highlighted here, we expect that MD techniques will be increasingly applied to aid in structure-based drug design and lead optimization for GPCRs.

  17. Application of microcomputers for electric arc heating processes. Session 2. 2b N. 2. 2. 13

    Energy Technology Data Exchange (ETDEWEB)

    Gitarts, D.A.; Izakson-Demidov, Y.A.; Kalistratov, V.A.; Edemsky, V.M.

    1984-01-01

    Control systems employed with furnaces for steel production and ore smelting, together with electroslag and vacuum-arc remelting installations, are described. Details of the mathematical models and control algorithms are given, in addition to the schematic electrical circuits and lists of component parts of the control systems. A particular feature of the mathematical model is the use of general state parameters and static methods for their identification and adaptation. The variety of control system types employed in electroheat that can make use of microcomputers makes it expedient to use a single, unified approach for designing the algorithms, the program, and the hardware. Provision of the hardware for the real-time system is based on a developed system of interrupts and the use of a single library of hardware and software modules, the latter having a common layout across the different systems.

  18. Examining Interior Grid Nudging Techniques Using Two-Way Nesting in the WRF Model for Regional Climate Modeling

    Science.gov (United States)

    This study evaluates interior nudging techniques using the Weather Research and Forecasting (WRF) model for regional climate modeling over the conterminous United States (CONUS) using a two-way nested configuration. NCEP–Department of Energy Atmospheric Model Intercomparison Pro...

  19. Simple parameter estimation for complex models — Testing evolutionary techniques on 3-dimensional biogeochemical ocean models

    Science.gov (United States)

    Mattern, Jann Paul; Edwards, Christopher A.

    2017-01-01

    Parameter estimation is an important part of numerical modeling and often required when a coupled physical-biogeochemical ocean model is first deployed. However, 3-dimensional ocean model simulations are computationally expensive and models typically contain upwards of 10 parameters suitable for estimation. Hence, manual parameter tuning can be lengthy and cumbersome. Here, we present four easy to implement and flexible parameter estimation techniques and apply them to two 3-dimensional biogeochemical models of different complexities. Based on a Monte Carlo experiment, we first develop a cost function measuring the model-observation misfit based on multiple data types. The parameter estimation techniques are then applied and yield a substantial cost reduction over ∼ 100 simulations. Based on the outcome of multiple replicate experiments, they perform on average better than random, uninformed parameter search but performance declines when more than 40 parameters are estimated together. Our results emphasize the complex cost function structure for biogeochemical parameters and highlight dependencies between different parameters as well as different cost function formulations.
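
    The general pattern described above (wrap the model run and the model-observation misfit in a cost function and hand it to a population-based optimizer) can be sketched as follows. The two-parameter exponential "model" and the synthetic observations are toy stand-ins for a 3-dimensional biogeochemical simulation, and differential evolution stands in for the techniques compared in the paper.

```python
# Minimal sketch of cost-function-based parameter estimation: the "model run" is
# a toy exponential, and the misfit against synthetic observations is minimized
# with an evolutionary optimizer.
import numpy as np
from scipy.optimize import differential_evolution

t = np.linspace(0, 10, 50)
observations = 2.0 * np.exp(-0.3 * t) + np.random.default_rng(4).normal(0, 0.05, t.size)

def cost(params):
    amplitude, decay = params
    simulated = amplitude * np.exp(-decay * t)        # stand-in for a model run
    return np.mean((simulated - observations) ** 2)   # model-observation misfit

result = differential_evolution(cost, bounds=[(0.1, 5.0), (0.01, 1.0)], seed=0, maxiter=50)
print("estimated parameters:", np.round(result.x, 3), "cost:", round(float(result.fun), 5))
```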

  20. Modelling tick abundance using machine learning techniques and satellite imagery

    DEFF Research Database (Denmark)

    Kjær, Lene Jung; Korslund, L.; Kjelland, V.

    satellite images to run Boosted Regression Tree machine learning algorithms to predict overall distribution (presence/absence of ticks) and relative tick abundance of nymphs and larvae in southern Scandinavia. For nymphs, the predicted abundance had a positive correlation with observed abundance...... the predicted distribution of larvae was mostly even throughout Denmark, it was primarily around the coastlines in Norway and Sweden. Abundance was fairly low overall except in some fragmented patches corresponding to forested habitats in the region. Machine learning techniques allow us to predict for larger...... the collected ticks for pathogens and using the same machine learning techniques to develop prevalence maps of the ScandTick region....
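
    A sketch of the Boosted Regression Tree workflow is shown below, using scikit-learn's gradient boosting on hypothetical satellite-derived predictors such as NDVI and land-surface temperature; the synthetic response stands in for the field-collected nymph counts, and none of the authors' data or settings are reproduced.

```python
# Sketch of a Boosted Regression Tree fit to hypothetical satellite-derived
# predictors, standing in for the tick-abundance modelling described above.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
X = rng.random((500, 4))              # hypothetical NDVI, LST, elevation, forest cover
abundance = 10 * X[:, 0] + 5 * X[:, 3] + rng.normal(0, 1, 500)   # synthetic nymph counts

X_train, X_test, y_train, y_test = train_test_split(X, abundance, random_state=0)
brt = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
brt.fit(X_train, y_train)
print("R^2 on held-out sites:", round(brt.score(X_test, y_test), 3))
print("relative influence of predictors:", np.round(brt.feature_importances_, 3))
```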

  1. Adaptive subdomain modeling: A multi-analysis technique for ocean circulation models

    Science.gov (United States)

    Altuntas, Alper; Baugh, John

    2017-07-01

    Many coastal and ocean processes of interest operate over large temporal and geographical scales and require a substantial amount of computational resources, particularly when engineering design and failure scenarios are also considered. This study presents an adaptive multi-analysis technique that improves the efficiency of these computations when multiple alternatives are being simulated. The technique, called adaptive subdomain modeling, concurrently analyzes any number of child domains, with each instance corresponding to a unique design or failure scenario, in addition to a full-scale parent domain providing the boundary conditions for its children. To contain the altered hydrodynamics originating from the modifications, the spatial extent of each child domain is adaptively adjusted during runtime depending on the response of the model. The technique is incorporated in ADCIRC++, a re-implementation of the popular ADCIRC ocean circulation model with an updated software architecture designed to facilitate this adaptive behavior and to utilize concurrent executions of multiple domains. The results of our case studies confirm that the method substantially reduces computational effort while maintaining accuracy.

  2. OFF-LINE HANDWRITING RECOGNITION USING VARIOUS HYBRID MODELING TECHNIQUES AND CHARACTER N-GRAMS

    NARCIS (Netherlands)

    Brakensiek, A.; Rottland, J.; Kosmala, A.; Rigoll, G.

    2004-01-01

    In this paper a system for off-line cursive handwriting recognition is described. The system is based on Hidden Markov Models (HMMs) using discrete and hybrid modeling techniques. Here, we focus on two aspects of the recognition system. First, we present different hybrid modeling techniques, whereas

  3. Prescribed wind shear modelling with the actuator line technique

    DEFF Research Database (Denmark)

    Mikkelsen, Robert Flemming; Sørensen, Jens Nørkær; Troldborg, Niels

    2007-01-01

    A method for prescribing arbitrary steady atmospheric wind shear profiles combined with CFD is presented. The method is furthermore combined with the actuator line technique governing the aerodynamic loads on a wind turbine. Computations are carried out on a wind turbine exposed to a representative...

  4. Effects of Microcomputer versus Electric Element Typewriter Instruction on Straight Copy and Production Keyboarding Performance.

    Science.gov (United States)

    Davison, Leslie J.

    1990-01-01

    One group of secondary keyboarding students was taught on typewriters and switched to microcomputers after six weeks, the other used microcomputers first, then typewriters. Using computers, students showed faster completion times and fewer typographical errors. Transfer from computers to typewriters slowed times and increased errors. Overall,…

  5. Usage Patterns at Western Kentucky University Microcomputer Lab (U.S.A.): Past and Present.

    Science.gov (United States)

    Hahn, Jung Sun

    This paper describes the results of a study of the microcomputer use patterns reflected by more than 13,000 users' records from the microcomputer laboratory at Western Kentucky University for 1983 to 1986. The data analyzed focused on: (1) numbers of users; (2) frequent user groups; (3) sex differences; (4) prime time usage; (5) the use of…

  6. Microcomputers and Evaluation Research: Potential Benefits and Problems. Paper and Report Series, No. 98.

    Science.gov (United States)

    Gray, Peter J.

    This paper deals with two questions: (1) what are the applications that might be made of microcomputers in evaluation? and (2) what are the kinds of problems (personnel, organizational, and technical) that must be overcome if microcomputers are to be used effectively? The applications include proposals and planning, management, data…

  7. A Design Method for a State Feedback Microcomputer Controller of a Wide Bandwidth Analog Plant.

    Science.gov (United States)

    1983-12-01

    Excerpt from the report's table of contents: the design method covers the state feedback control system and microcomputer controller design with time delay, followed by the design of a state feedback control system with a microcomputer and its control algorithm; listed figures include a signal flow diagram of the state feedback system, a feedback control system with PD control, and a Bode diagram.

  8. The Development and Application of Advanced Video and Microcomputer- Based Command and Control (C2) Systems

    Science.gov (United States)

    1982-12-01

    Keywords: video recorders; microcomputers; spatial data management; shared-data microcomputer software design; 6502 microprocessor; Apple II. Excerpt: "... maintain a key role in supporting the continued development of the technology. In 1979, work began at the Computer Corporation of America to ..."

  9. A DMA interface between a Biomation 8100 and an Intel MDS-800 microcomputer development system.

    Science.gov (United States)

    Lynk, E T

    1979-09-01

    An interface is described which permits high-speed data transfer between a Biomation 8100 transient recorder and an Intel MDS-800 microcomputer system equipped with an MDS-501 DMA (direct memory access) channel controller. The interface is especially useful for data acquisition situations in which many successive traces must be recorded. For example, signal averaging can be performed within the microcomputer system.

  10. Using the Analogue Input Port on the BBC Microcomputer: Some General Principles and a Specific Example.

    Science.gov (United States)

    Millar, R. H.; Underwood, C. I.

    1984-01-01

    Discusses an easy way of interfacing physics experiments to a microcomputer, providing information on the "analog input port" of a BBC microcomputer. Also describes a capacitor discharge experiment with suggestions for several student activities such as investigating decay curve shapes. Program listing is available from author. (DH)

  11. MicroComputers: New Directions and Methods for the Preparation of Special Education Personnel.

    Science.gov (United States)

    McCann, Scott K.; Kelemen, Eve J.

    1984-01-01

    The article discusses the dual role of the microcomputer as a focus and a medium for training special educators. Current issues, applications, and outcomes surrounding the impact of microcomputer technology on the preparation of special education personnel are explored. Implications for inservice and preservice training of special educators are…

  12. Sand and gravel mine operations and reclamation planning using microcomputers

    Energy Technology Data Exchange (ETDEWEB)

    Ariffin, J.B.

    1990-02-01

    The purpose of this study is to focus on the application of microcomputers, also known as personal computers, in planning for sand and gravel mine operations and reclamation at a site in Story County, Iowa. This site, called the Arrasmith Pit, is operated by Martin Marietta Aggregates, Inc. The Arrasmith site, which encompasses an area of about 25 acres, is a relatively small site for aggregate mining. However, planning for the concurrent mine operation and reclamation program at this site is just as critical as with larger sites and the planning process is the same.

  13. Micro-computed tomography: Introducing new dimensions to taxonomy.

    Science.gov (United States)

    Faulwetter, Sarah; Vasileiadou, Aikaterini; Kouratoras, Michail; Dailianis, Thanos; Arvanitidis, Christos

    2013-01-01

    Continuous improvements in the resolution of three-dimensional imaging have led to an increased application of these techniques in conventional taxonomic research in recent years. Coupled with an ever increasing research effort in cybertaxonomy, three-dimensional imaging could give a boost to the development of virtual specimen collections, allowing rapid and simultaneous access to accurate virtual representations of type material. This paper explores the potential of micro-computed tomography (X-ray micro-tomography), a non-destructive three-dimensional imaging technique based on mapping X-ray attenuation in the scanned object, for supporting research in systematics and taxonomy. The subsequent use of these data as virtual type material, so-called "cybertypes", and the creation of virtual collections lie at the core of this potential. Sample preparation, image acquisition, data processing and presentation of results are demonstrated using polychaetes (bristle worms), a representative taxon of macro-invertebrates, as a study object. Effects of the technique on the morphological, anatomical and molecular identity of the specimens are investigated. The paper evaluates the results and discusses the potential and the limitations of the technique for creating cybertypes. It also discusses the challenges that the community might face to establish virtual collections. Potential future applications of three-dimensional information in taxonomic research are outlined, including an outlook to new ways of producing, disseminating and publishing taxonomic information.

  14. Microcomputers in Art Education: An Introductory Vocabulary for Art Teachers Who Need To Make Sense of New Technology.

    Science.gov (United States)

    Brouch, Ginny

    This booklet provides introductory information and terminology for art teachers concerning the use of microcomputers in elementary and secondary school art classes. Part 1 describes the sizes and uses of microcomputers and distinguishes between maxi, mini, micro, lap, and handheld computers. Part 2 discusses: (1) basic microcomputer hardware and…

  15. Three-dimensional arrangement of β-tricalcium phosphate granules evaluated by microcomputed tomography and fractal analysis.

    Science.gov (United States)

    Ndiaye, Mambaye; Terranova, Lisa; Mallet, Romain; Mabilleau, Guillaume; Chappard, Daniel

    2015-01-01

    The macrophysical properties of granular biomaterials used to fill bone defects have rarely been considered. Granules of a given biomaterial occupy three-dimensional (3-D) space when packed together and create a macroporosity suitable for the invasion of vascular and bone cells. Granules of β-tricalcium phosphate were prepared using polyurethane foam technology and increasing the amount of material powder in the slurry (10, 11, 15, 18, 21 and 25 g). After sintering, granules of 1000-2000 μm were prepared by sieving. They were analyzed morphologically by scanning electron microscopy and placed in polyethylene test tubes to produce 3-D scaffolds. Microcomputed tomography (microCT) was used to image the scaffolds and to determine porosity and fractal dimension in three dimensions. Two-dimensional sections of the microCT models were binarized and used to compute classical morphometric parameters describing porosity (interconnectivity index, strut analysis and star volumes) and fractal dimensions. In addition, two newly important fractal parameters (lacunarity and succolarity) were measured. Compression analysis of the stacks of granules was done. Porosity decreased as the amount of material in the slurry increased but non-linear relationships were observed between microarchitectural parameters describing the pores and porosity. Lacunarity increased in the series of granules but succolarity (reflecting the penetration of a fluid) was maximal in the 15-18 g groups and decreased noticeably in the 25 g group. The 3-D arrangement of biomaterial granules studied by these new fractal techniques allows the optimal formulation to be derived based on the lowest amount of material, suitable mechanical resistance during crushing and the creation of large interconnected pores. Copyright © 2014 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.
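
    One of the classical fractal descriptors mentioned above, the box-counting dimension of a binarized two-dimensional section, can be sketched as follows; lacunarity and succolarity require additional bookkeeping and are not reproduced, and the random binary image is a hypothetical stand-in for a segmented micro-CT slice.

```python
# Sketch of a box-counting fractal dimension on a binarized 2-D section
# (square image assumed). The random image stands in for a segmented slice.
import numpy as np

def box_count_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32)):
    counts = []
    for size in box_sizes:
        n = binary_image.shape[0] // size
        trimmed = binary_image[:n * size, :n * size]
        blocks = trimmed.reshape(n, size, n, size).any(axis=(1, 3))  # occupied boxes
        counts.append(blocks.sum())
    # slope of log(count) vs log(1/size) gives the box-counting dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
    return slope

section = np.random.default_rng(6).random((256, 256)) < 0.4
print("box-counting dimension:", round(box_count_dimension(section), 3))
```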

  16. Coronary artery wall imaging in mice using osmium tetroxide and micro-computed tomography (micro-CT)

    Energy Technology Data Exchange (ETDEWEB)

    Pai, Vinay M.; Kozlowski, Megan; Donahue, Danielle; Miller, Elishiah; Xiao, Xianghui; Chen, Marcus Y.; Yu, Zu-Xi; Connelly, Patricia; Jeffries, Kenneth; Wen, Han (NIH)

    2012-05-10

    The high spatial resolution of micro-computed tomography (micro-CT) is ideal for 3D imaging of coronary arteries in intact mouse heart specimens. Previously, micro-CT of mouse heart specimens utilized intravascular contrast agents that hardened within the vessel lumen and allowed a vascular cast to be made. However, for mouse coronary artery disease models, it is highly desirable to image coronary artery walls and highlight plaques. For this purpose, we describe an ex vivo contrast-enhanced micro-CT imaging technique based on tissue staining with osmium tetroxide (OsO4) solution. As a tissue-staining contrast agent, OsO4 is retained in the vessel wall and surrounding tissue during the fixation process and cleared from the vessel lumens. Its high X-ray attenuation makes the artery wall visible in CT. Additionally, since OsO4 preferentially binds to lipids, it highlights lipid deposition in the artery wall. We performed micro-CT of heart specimens of 5- to 25-week-old C57BL/6 wild-type mice and 5- to 13-week-old apolipoprotein E knockout (apoE-/-) mice at 10 μm resolution. The results show that walls of coronary arteries as small as 45 μm in diameter are visible using a table-top micro-CT scanner. Similar image clarity was achieved with 1/2000th the scan time using a synchrotron CT scanner. In 13-week-old apoE mice, lipid-rich plaques are visible in the aorta. Our study shows that the combination of OsO4 and micro-CT permits the visualization of the coronary artery wall in intact mouse hearts.

  17. Application of experimental design techniques to structural simulation meta-model building using neural network

    Institute of Scientific and Technical Information of China (English)

    费庆国; 张令弥

    2004-01-01

    Neural networks are being used to construct meta-models in the numerical simulation of structures. In addition to network structures and training algorithms, training samples also greatly affect the accuracy of neural network models. In this paper, the main existing sampling techniques are evaluated, including techniques based on experimental design theory, random selection, and rotating sampling. First, the advantages and disadvantages of each technique are reviewed. Then, seven techniques are used to generate samples for training radial basis function neural network models for two benchmarks: an antenna model and an aircraft model. Results show that uniform design, which considers both the number of samples and the mean square error of the network models, is the best sampling technique for neural-network-based meta-model building.
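
    A minimal sketch of the sampling comparison described above, assuming a cheap analytic test function in place of the antenna and aircraft benchmarks: a Gaussian RBF surrogate is trained once on randomly selected samples and once on a grid-style uniform design, and the two designs are compared by test mean square error.

        # Minimal sketch (assumed test function, not the paper's benchmarks): compare random
        # sampling with a grid-style uniform design for training a Gaussian RBF surrogate.
        import numpy as np

        def f(x):  # cheap stand-in for an expensive structural simulation
            return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1]) + 0.5 * x[:, 0]

        def fit_rbf(X, y, eps=1.5):
            d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
            Phi = np.exp(-(eps * d) ** 2)
            return np.linalg.solve(Phi + 1e-8 * np.eye(len(X)), y)

        def predict_rbf(X_train, w, X, eps=1.5):
            d = np.linalg.norm(X[:, None, :] - X_train[None, :, :], axis=-1)
            return np.exp(-(eps * d) ** 2) @ w

        rng = np.random.default_rng(1)
        X_test = rng.uniform(0, 1, (500, 2))

        # Design 1: purely random selection of 25 training samples.
        X_rand = rng.uniform(0, 1, (25, 2))
        # Design 2: a 5x5 uniform grid over the same domain.
        g = np.linspace(0, 1, 5)
        X_grid = np.array([[a, b] for a in g for b in g])

        for name, X_tr in [("random", X_rand), ("uniform grid", X_grid)]:
            w = fit_rbf(X_tr, f(X_tr))
            mse = np.mean((predict_rbf(X_tr, w, X_test) - f(X_test)) ** 2)
            print(f"{name:12s}  test MSE = {mse:.4f}")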

  18. Wave propagation in fluids models and numerical techniques

    CERN Document Server

    Guinot, Vincent

    2012-01-01

    This second edition, with four additional chapters, presents the physical principles and solution techniques for transient propagation in fluid mechanics and hydraulics. The application domains vary, including contaminant transport with or without sorption, the motion of immiscible hydrocarbons in aquifers, pipe transients, open channel and shallow water flow, and compressible gas dynamics. The mathematical formulation is covered from the angle of conservation laws, with an emphasis on multidimensional problems and discontinuous flows, such as steep fronts and shock waves. Finite

  19. A vortex model for Darrieus turbine using finite element techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ponta, Fernando L. [Universidad de Buenos Aires, Dept. de Electrotecnia, Grupo ISEP, Buenos Aires (Argentina); Jacovkis, Pablo M. [Universidad de Buenos Aires, Dept. de Computacion and Inst. de Calculo, Buenos Aires (Argentina)

    2001-09-01

    Since 1970 several aerodynamic prediction models have been formulated for the Darrieus turbine. We can identify two families of models: stream-tube and vortex. The former needs much less computation time but the latter is more accurate. The purpose of this paper is to show a new option for modelling the aerodynamic behaviour of Darrieus turbines. The idea is to combine a classic free vortex model with a finite element analysis of the flow in the surroundings of the blades. This avoids some of the remaining deficiencies in classic vortex models. The agreement between analysis and experiment when predicting instantaneous blade forces and near wake flow behind the rotor is better than the one obtained in previous models. (Author)

  20. TESTING DIFFERENT SURVEY TECHNIQUES TO MODEL ARCHITECTONIC NARROW SPACES

    Directory of Open Access Journals (Sweden)

    A. Mandelli

    2017-08-01

    Full Text Available In the architectural survey field, a vast number of automated techniques has spread. However, it is important to underline the gap that exists between the technical specification sheet of a particular instrument and its usability, accuracy and level of automation reachable in real-case scenarios, especially in the Cultural Heritage (CH) field. In fact, even if the technical specifications (range, accuracy and field of view) are known for each instrument, their functioning and features are influenced by the environment, shape and materials of the object. The results depend more on how techniques are employed than on the nominal specifications of the instruments. The aim of this article is to evaluate the real usability, for the 1:50 architectonic restitution scale, of common and less common survey techniques applied to the complex scenario of dark, intricate and narrow spaces such as service areas, corridors and stairs of Milan's cathedral indoors. Tests have shown that the quality of the results is strongly affected by side issues, such as the impossibility of following the theoretically ideal methodology when surveying such spaces. The tested instruments are: the laser scanner Leica C10, the GeoSLAM ZEB1, the DOT DPI 8 and two photogrammetric setups, a full frame camera with a fisheye lens and the NCTech iSTAR, a panoramic camera. Each instrument presents advantages and limits concerning both the sensors themselves and the acquisition phase.

  1. Testing Different Survey Techniques to Model Architectonic Narrow Spaces

    Science.gov (United States)

    Mandelli, A.; Fassi, F.; Perfetti, L.; Polari, C.

    2017-08-01

    In the architectural survey field, a vast number of automated techniques has spread. However, it is important to underline the gap that exists between the technical specification sheet of a particular instrument and its usability, accuracy and level of automation reachable in real-case scenarios, especially in the Cultural Heritage (CH) field. In fact, even if the technical specifications (range, accuracy and field of view) are known for each instrument, their functioning and features are influenced by the environment, shape and materials of the object. The results depend more on how techniques are employed than on the nominal specifications of the instruments. The aim of this article is to evaluate the real usability, for the 1:50 architectonic restitution scale, of common and less common survey techniques applied to the complex scenario of dark, intricate and narrow spaces such as service areas, corridors and stairs of Milan's cathedral indoors. Tests have shown that the quality of the results is strongly affected by side issues, such as the impossibility of following the theoretically ideal methodology when surveying such spaces. The tested instruments are: the laser scanner Leica C10, the GeoSLAM ZEB1, the DOT DPI 8 and two photogrammetric setups, a full frame camera with a fisheye lens and the NCTech iSTAR, a panoramic camera. Each instrument presents advantages and limits concerning both the sensors themselves and the acquisition phase.

  2. Micro-Computed Tomography and Finite Element Method Study of Open-Cell Porous Materials

    Directory of Open Access Journals (Sweden)

    Wejrzanowski Tomasz

    2015-01-01

    Full Text Available In the present paper the characterization of the structure and properties of open-cell porous materials by high-resolution x-ray micro-computed tomography (μCT) and the finite element method (FEM) is addressed. The unique properties of open-porosity foams make them interesting for a range of applications in science and engineering, such as energy absorbers, lightweight construction materials or heat insulators. Consequently, a detailed knowledge of the structure as well as the mechanical properties (i.e. Young's Modulus, Poisson's Ratio) of such foams is essential. The resulting pixel size of the μCT was 40 μm, which enabled satisfactory visualization of the complex foam structure and quantitative characterization. Foam morphology was studied on post-processed computed tomography images, while mechanical properties were analyzed with the finite element method on a numerical model obtained from the μCT results.

  3. Experimental technique of calibration of symmetrical air pollution models

    Indian Academy of Sciences (India)

    P Kumar

    2005-10-01

    Based on the inherent property of symmetry of air pollution models, a Symmetrical Air Pollution Model Index (SAPMI) has been developed to calibrate the accuracy of predictions made by such models, where the initial quantity of release at the source is not known. For exact prediction the value of SAPMI should be equal to 1. If the predicted values are overestimating then SAPMI is > 1, and if they are underestimating then SAPMI is < 1. A specific design for the layout of receptors has been suggested as a requirement for the calibration experiments. SAPMI is applicable to all variations of symmetrical air pollution dispersion models.

  4. Artificial intelligence techniques for modeling database user behavior

    Science.gov (United States)

    Tanner, Steve; Graves, Sara J.

    1990-01-01

    The design and development of the adaptive modeling system is described. This system models how a user accesses a relational database management system in order to improve its performance by discovering use access patterns. In the current system, these patterns are used to improve the user interface and may be used to speed data retrieval, support query optimization and support a more flexible data representation. The system models both syntactic and semantic information about the user's access and employs both procedural and rule-based logic to manipulate the model.

  5. Reduction of thermal models of buildings: improvement of techniques using meteorological influence models; Reduction de modeles thermiques de batiments: amelioration des techniques par modelisation des sollicitations meteorologiques

    Energy Technology Data Exchange (ETDEWEB)

    Dautin, S.

    1997-04-01

    This work concerns the modeling of thermal phenomena inside buildings for the evaluation of the energy exploitation costs of thermal installations and for the modeling of thermal and aeraulic transient phenomena. This thesis comprises 7 chapters dealing with: (1) the thermal phenomena inside buildings and the CLIM2000 calculation code, (2) the ETNA and GENEC experimental cells and their modeling, (3) the model reduction techniques tested (Marshall's truncation, the Michailesco aggregation method and Moore truncation) with their algorithms and their encoding in the MATRED software, (4) the application of model reduction methods to the GENEC and ETNA cells and to a medium-size dual-zone building, (5) the modeling of meteorological influences classically applied to buildings (external temperature and solar flux), (6) the analytical expression of these modeled meteorological influences. The last chapter presents the results of these improved methods on the GENEC and ETNA cells and on a lower-inertia building. These new methods are compared to classical methods. (J.S.) 69 refs.

  6. Study on modeling of vehicle dynamic stability and control technique

    Institute of Scientific and Technical Information of China (English)

    GAO Yun-ting; LI Pan-feng

    2012-01-01

    In order to solve the problem of enhancing vehicle driving stability and safety, which has been a key research question for scientists and engineers in the vehicle industry, a new control method was investigated. After an analysis of tire motion characteristics and the vehicle stress analysis, a tire model based on the extended Pacejka magic formula, combining longitudinal and lateral motion, was developed, and a nonlinear vehicle dynamic stability model with seven degrees of freedom was built. A new model reference adaptive control scheme was designed that takes the slip angle and yaw rate of the vehicle body as the output and feedback variables and adjusts the torque on the vehicle body to control vehicle stability. A simulation model was also built in Matlab/Simulink to evaluate this control scheme. It is made up of several mathematical subsystem models, mainly including the tire model module, the yaw moment calculation module, the center-of-mass parameter calculation module, the tire parameter calculation module, and so forth. The severe lane-change simulation results show that this vehicle model and the model reference adaptive control method perform excellently.

  7. Variational Data Assimilation Technique in Mathematical Modeling of Ocean Dynamics

    Science.gov (United States)

    Agoshkov, V. I.; Zalesny, V. B.

    2012-03-01

    Problems of the variational data assimilation for the primitive equation ocean model constructed at the Institute of Numerical Mathematics, Russian Academy of Sciences are considered. The model has a flexible computational structure and consists of two parts: a forward prognostic model, and its adjoint analog. The numerical algorithm for the forward and adjoint models is constructed based on the method of multicomponent splitting. The method includes splitting with respect to physical processes and space coordinates. Numerical experiments are performed with the use of the Indian Ocean and the World Ocean as examples. These numerical examples support the theoretical conclusions and demonstrate the rationality of the approach using an ocean dynamics model with an observed data assimilation procedure.

  8. Wave Propagation in Fluids Models and Numerical Techniques

    CERN Document Server

    Guinot, Vincent

    2007-01-01

    This book presents the physical principles of wave propagation in fluid mechanics and hydraulics. The mathematical techniques that allow the behavior of the waves to be analyzed are presented, along with existing numerical methods for the simulation of wave propagation. Particular attention is paid to discontinuous flows, such as steep fronts and shock waves, and their mathematical treatment. A number of practical examples are taken from various areas of fluid mechanics and hydraulics, such as contaminant transport, the motion of immiscible hydrocarbons in aquifers, river flow, pipe transients an

  9. Simulation technique for hard-disk models in two dimensions

    DEFF Research Database (Denmark)

    Fraser, Diane P.; Zuckermann, Martin J.; Mouritsen, Ole G.

    1990-01-01

    A method is presented for studying hard-disk systems by Monte Carlo computer-simulation techniques within the NpT ensemble. The method is based on the Voronoi tessellation, which is dynamically maintained during the simulation. By an analysis of the Voronoi statistics, a quantity is identified that is extremely sensitive to structural changes in the system. This quantity, which is derived from the edge-length distribution function of the Voronoi polygons, displays a dramatic change at the solid-liquid transition. This is found to be more useful for locating the transition than either the defect density...

  10. Household water use and conservation models using Monte Carlo techniques

    Science.gov (United States)

    Cahill, R.; Lund, J. R.; DeOreo, B.; Medellín-Azuara, J.

    2013-10-01

    The increased availability of end-use measurement studies allows for mechanistic and detailed approaches to estimating household water demand and conservation potential. This study simulates water use in a single-family residential neighborhood using end-water-use parameter probability distributions generated from Monte Carlo sampling. This model represents existing water use conditions in 2010 and is calibrated to 2006-2011 metered data. A two-stage mixed integer optimization model is then developed to estimate the least-cost combination of long- and short-term conservation actions for each household. This least-cost conservation model provides an estimate of the upper bound of reasonable conservation potential for varying pricing and rebate conditions. The models were adapted from previous work in Jordan and are applied to a neighborhood in San Ramon, California, in the eastern San Francisco Bay Area. The existing conditions model produces seasonal use results very close to the metered data. The least-cost conservation model suggests that clothes washer rebates are among the most cost-effective rebate programs for indoor uses. Retrofit of faucets and toilets is also cost-effective and holds the highest potential for water savings among indoor uses. This mechanistic modeling approach can improve understanding of water demand and estimate the cost-effectiveness of water conservation programs.
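
    A hedged sketch of the Monte Carlo end-use idea described above; the distributions and parameter values below are invented for illustration and are not the study's calibrated inputs.

        # Illustrative Monte Carlo sketch of household indoor water demand; the distributions
        # and parameter values are assumptions for demonstration, not the study's data.
        import numpy as np

        rng = np.random.default_rng(42)
        n_households = 10_000

        # Sample end-use parameters per household (hypothetical gallon-based values).
        people       = rng.integers(1, 6, n_households)                 # residents per home
        showers_pp   = rng.normal(0.7, 0.2, n_households).clip(0.1)     # showers/person/day
        shower_gal   = rng.normal(17.0, 5.0, n_households).clip(5)      # gallons per shower
        flushes_pp   = rng.normal(5.0, 1.0, n_households).clip(1)       # flushes/person/day
        toilet_gal   = rng.choice([1.28, 1.6, 3.5], n_households,
                                  p=[0.3, 0.4, 0.3])                    # gallons per flush
        washer_loads = rng.normal(0.3, 0.1, n_households).clip(0.05)    # loads/person/day
        washer_gal   = rng.choice([15.0, 40.0], n_households, p=[0.5, 0.5])

        daily_use = people * (showers_pp * shower_gal
                              + flushes_pp * toilet_gal
                              + washer_loads * washer_gal)

        print("mean daily indoor use  : %.1f gal/household" % daily_use.mean())
        print("10th-90th percentile   : %.1f - %.1f gal" % tuple(np.percentile(daily_use, [10, 90])))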

  11. Multiple Fan-Beam Optical Tomography: Modelling Techniques

    Directory of Open Access Journals (Sweden)

    Pang Jon Fea

    2009-10-01

    Full Text Available This paper explains in detail the solution to the forward and inverse problem faced in this research. In the forward problem section, the projection geometry and the sensor modelling are discussed. The dimensions, distributions and arrangements of the optical fibre sensors are determined based on the real hardware constructed and these are explained in the projection geometry section. The general idea in sensor modelling is to simulate an artificial environment, but with similar system properties, to predict the actual sensor values for various flow models in the hardware system. The sensitivity maps produced from the solution of the forward problems are important in reconstructing the tomographic image.

  12. Size reduction techniques for vital compliant VHDL simulation models

    Science.gov (United States)

    Rich, Marvin J.; Misra, Ashutosh

    2006-08-01

    A method and system select delay values from a VHDL standard delay file that correspond to an instance of a logic gate in a logic model. Then the system collects all the delay values of the selected instance and builds super generics for the rise-time and the fall-time of the selected instance. Then, the system repeats this process for every delay value in the standard delay file (310) that correspond to every instance of every logic gate in the logic model. The system then outputs a reduced size standard delay file (314) containing the super generics for every instance of every logic gate in the logic model.

  13. Factor analysis and predictive validity of microcomputer-based tests

    Science.gov (United States)

    Kennedy, R. S.; Baltzley, D. R.; Turnage, J. J.; Jones, M. B.

    1989-01-01

    Eleven tests were selected from two microcomputer-based performance test batteries because these tests had previously exhibited rapid stability (less than 10 min. of practice) and high retest reliability efficiencies (r greater than 0.707 for each 3 min. of testing). The battery was administered three times to each of 108 college students (48 men and 60 women) and a factor analysis was performed. Two of the three identified factors appear to be related to information processing ("encoding" and "throughput/decoding"), and the third was named an "output/speed" factor. The spatial, memory, and verbal tests loaded on the "encoding" factor and included Grammatical Reasoning, Pattern Comparison, Continuous Recall, and Matrix Rotation. The "throughput/decoding" tests included perceptual/numerical tests like Math Processing, Code Substitution, and Pattern Comparison. The output/speed factor was identified by Tapping and Reaction Time tests. The Wonderlic Personnel Test was group administered before the first and after the last administration of the performance tests. The multiple Rs in the total sample between the combined Wonderlic as a criterion and less than 5 min. of microcomputer testing on Grammatical Reasoning and Math Processing as predictors ranged between 0.41 and 0.52 across the three test administrations. Based on these results, the authors recommend a core battery which, if time permits, would consist of two tests from each factor. Such a battery is now known to permit stable, reliable, and efficient assessment.

  14. Analysis of fault using microcomputer protection by symmetrical component method

    Directory of Open Access Journals (Sweden)

    Mr. Ashish Choubey

    2012-09-01

    Full Text Available To enhance power supply reliability for user terminals in the distribution system, to avoid further interference by the fault, and to rapidly complete automatic identification, positioning, automatic fault isolation and network reconfiguration until supply to the non-fault sections is restored, a microprocessor-based relay protection device has been developed. As fault-component theory is widely used in microcomputer protection, and the fault component exists in the fault-component network, it is necessary to build up the fault-component network when a short-circuit fault emerges and to draw the current and voltage component phasor diagram at the fault point. In order to understand microcomputer protection based on the symmetrical-component principle, we obtained the sequence currents and sequence voltages according to the concept of symmetrical components. Distribution lines supply power directly to users, so the reliability of their operation determines the quality and level of the electricity supply. In recent decades, thanks to the tireless efforts of scientists and technicians, relay protection technology and the application level of the equipment have been greatly improved, but domestically produced computer hardware protection devices are still outdated systems. Software development suffers from maintenance difficulties and a short survival time. With weak factory automation system interface functions, the network communication cannot meet the actual requirements. The protection principle configuration and the device manufacturing process also need to be improved, and so on.

  15. Analysis of fault using microcomputer protection by symmetrical component method

    Directory of Open Access Journals (Sweden)

    Ashish Choubey

    2012-09-01

    Full Text Available To enhance power supply reliability for user terminals in the distribution system, to avoid further interference by the fault, and to rapidly complete automatic identification, positioning, automatic fault isolation and network reconfiguration until supply to the non-fault sections is restored, a microprocessor-based relay protection device has been developed. As fault-component theory is widely used in microcomputer protection, and the fault component exists in the fault-component network, it is necessary to build up the fault-component network when a short-circuit fault emerges and to draw the current and voltage component phasor diagram at the fault point. In order to understand microcomputer protection based on the symmetrical-component principle, we obtained the sequence currents and sequence voltages according to the concept of symmetrical components. Distribution lines supply power directly to users, so the reliability of their operation determines the quality and level of the electricity supply. In recent decades, thanks to the tireless efforts of scientists and technicians, relay protection technology and the application level of the equipment have been greatly improved, but domestically produced computer hardware protection devices are still outdated systems. Software development suffers from maintenance difficulties and a short survival time. With weak factory automation system interface functions, the network communication cannot meet the actual requirements. The protection principle configuration and the device manufacturing process also need to be improved, and so on.
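
    The sequence quantities referred to above come from the classical symmetrical-component transformation. A short sketch is given below; the example phasors are arbitrary illustrative values, not data from the paper.

        # Sketch of the classical symmetrical-component transformation used in such relays;
        # the example phase currents are arbitrary illustrative values.
        import cmath, math

        a = cmath.rect(1.0, math.radians(120))  # 120-degree rotation operator

        def sequence_components(Ia, Ib, Ic):
            """Return (zero, positive, negative) sequence phasors from the phase phasors."""
            I0 = (Ia + Ib + Ic) / 3
            I1 = (Ia + a * Ib + a**2 * Ic) / 3
            I2 = (Ia + a**2 * Ib + a * Ic) / 3
            return I0, I1, I2

        # Hypothetical unbalanced fault currents, entered as (magnitude A, angle deg) phasors.
        Ia = cmath.rect(1200, math.radians(0))
        Ib = cmath.rect(300, math.radians(-110))
        Ic = cmath.rect(320, math.radians(115))

        for name, I in zip(("I0", "I1", "I2"), sequence_components(Ia, Ib, Ic)):
            print(f"{name}: {abs(I):7.1f} A at {math.degrees(cmath.phase(I)):6.1f} deg")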

  16. Liquid propellant analogy technique in dynamic modeling of launch vehicle

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    The coupling effects among the lateral, longitudinal and torsional modes of a launch vehicle cannot be taken into account in traditional dynamic analysis using a lateral beam model and a longitudinal spring-mass model individually. To deal with the problem, propellant analogy methods based on a beam model are proposed and the coupled mass matrix of the liquid propellant is constructed through additional mass in the present study. Then an integrated model of the launch vehicle for free vibration analysis is established, by which research on the interactions between the longitudinal and lateral modes, and between the longitudinal and torsional modes, of the launch vehicle can be carried out. Numerical examples for tandem tanks validate the present method and its necessity.

  17. Evaluation of dynamical models: dissipative synchronization and other techniques.

    Science.gov (United States)

    Aguirre, Luis Antonio; Furtado, Edgar Campos; Tôrres, Leonardo A B

    2006-12-01

    Some recent developments for the validation of nonlinear models built from data are reviewed. Besides giving an overall view of the field, a procedure is proposed and investigated based on the concept of dissipative synchronization between the data and the model, which is very useful in validating models that should reproduce dominant dynamical features, like bifurcations, of the original system. In order to assess the discriminating power of the procedure, four well-known benchmarks have been used: namely, Duffing-Ueda, Duffing-Holmes, and van der Pol oscillators, plus the Hénon map. The procedure, developed for discrete-time systems, is focused on the dynamical properties of the model, rather than on statistical issues. For all the systems investigated, it is shown that the discriminating power of the procedure is similar to that of bifurcation diagrams--which in turn is much greater than, say, that of correlation dimension--but at a much lower computational cost.

  18. Evaluation of dynamical models: Dissipative synchronization and other techniques

    Science.gov (United States)

    Aguirre, Luis Antonio; Furtado, Edgar Campos; Tôrres, Leonardo A. B.

    2006-12-01

    Some recent developments for the validation of nonlinear models built from data are reviewed. Besides giving an overall view of the field, a procedure is proposed and investigated based on the concept of dissipative synchronization between the data and the model, which is very useful in validating models that should reproduce dominant dynamical features, like bifurcations, of the original system. In order to assess the discriminating power of the procedure, four well-known benchmarks have been used: namely, Duffing-Ueda, Duffing-Holmes, and van der Pol oscillators, plus the Hénon map. The procedure, developed for discrete-time systems, is focused on the dynamical properties of the model, rather than on statistical issues. For all the systems investigated, it is shown that the discriminating power of the procedure is similar to that of bifurcation diagrams—which in turn is much greater than, say, that of correlation dimension—but at a much lower computational cost.

  19. Use of machine learning techniques for modeling of snow depth

    Directory of Open Access Journals (Sweden)

    G. V. Ayzel

    2017-01-01

    Full Text Available Snow exerts a significant regulating effect on the land hydrological cycle, since it controls the intensity of heat and water exchange between the soil-vegetative cover and the atmosphere. Estimating spring flood runoff or rain floods on mountainous rivers requires understanding of the snow cover dynamics on a watershed. In our work, the problem of snow cover depth modeling is solved using both available databases of hydro-meteorological observations and easily accessible scientific software that allows complete reproduction of the investigation results and further development of this theme by the scientific community. In this research we used daily observational data on the snow cover and surface meteorological parameters, obtained at three stations situated in different geographical regions: Col de Porte (France), Sodankyla (Finland), and Snoquamie Pass (USA). Statistical modeling of the snow cover depth is based on a set of freely distributed, present-day machine learning models: decision trees, adaptive boosting, and gradient boosting. It is demonstrated that the combination of modern machine learning methods with available meteorological data provides good accuracy of snow cover modeling. The best results of snow cover depth modeling for every investigated site were obtained by the ensemble method of gradient boosting over decision trees; this model reproduces well both the periods of snow cover accumulation and its melting. The purposeful character of the learning process for models of the gradient boosting type, their ensemble character, and the use of the combined redundancy of a test sample in the learning procedure make this type of model a good and sustainable research tool. The results obtained can be used for estimating snow cover characteristics for river basins where hydro-meteorological information is absent or insufficient.
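
    A small sketch of gradient boosting over decision trees in the spirit of the study, using synthetic daily meteorology instead of the Col de Porte, Sodankyla and Snoquamie Pass records:

        # Sketch of gradient boosting over decision trees for snow-depth modelling; the
        # synthetic daily "meteorology" below stands in for the station records used in the paper.
        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_error

        rng = np.random.default_rng(7)
        n_days = 2000
        day_of_year = rng.integers(1, 366, n_days)
        air_temp    = 10 * np.sin(2 * np.pi * (day_of_year - 200) / 365) + rng.normal(0, 3, n_days)
        precip      = rng.gamma(0.8, 4.0, n_days)
        wind        = rng.gamma(2.0, 1.5, n_days)

        # Toy "snow depth": accumulates with cold-season precipitation, shrinks with warmth.
        snow_depth = np.maximum(0, 5 * precip * (air_temp < 0) - 2 * np.maximum(air_temp, 0)
                                + rng.normal(0, 2, n_days))

        X = np.column_stack([day_of_year, air_temp, precip, wind])
        X_tr, X_te, y_tr, y_te = train_test_split(X, snow_depth, test_size=0.3, random_state=0)

        model = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05, max_depth=3)
        model.fit(X_tr, y_tr)
        print("MAE on held-out days: %.2f (arbitrary depth units)"
              % mean_absolute_error(y_te, model.predict(X_te)))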

  20. Advanced modeling techniques in application to plasma pulse treatment

    Science.gov (United States)

    Pashchenko, A. F.; Pashchenko, F. F.

    2016-06-01

    Different approaches are considered for the simulation of the plasma pulse treatment process. The assumption of significant non-linearity of the processes involved in the treatment of oil wells has been confirmed. A method of functional transformations and fuzzy logic methods are suggested for the construction of a mathematical model. It is shown that models based on fuzzy logic are able to provide satisfactory accuracy in the simulation and prediction of the observed non-linear processes.

  1. Analysis of computational modeling techniques for complete rotorcraft configurations

    Science.gov (United States)

    O'Brien, David M., Jr.

    Computational fluid dynamics (CFD) provides the helicopter designer with a powerful tool for identifying problematic aerodynamics. Through the use of CFD, design concepts can be analyzed in a virtual wind tunnel long before a physical model is ever created. Traditional CFD analysis tends to be a time consuming process, where much of the effort is spent generating a high quality computational grid. Recent increases in computing power and memory have created renewed interest in alternative grid schemes such as unstructured grids, which facilitate rapid grid generation by relaxing restrictions on grid structure. Three rotor models have been incorporated into a popular fixed-wing unstructured CFD solver to increase its capability and facilitate availability to the rotorcraft community. The benefit of unstructured grid methods is demonstrated through rapid generation of high fidelity configuration models. The simplest rotor model is the steady state actuator disk approximation. By transforming the unsteady rotor problem into a steady state one, the actuator disk can provide rapid predictions of performance parameters such as lift and drag. The actuator blade and overset blade models provide a depiction of the unsteady rotor wake, but incur a larger computational cost than the actuator disk. The actuator blade model is convenient when the unsteady aerodynamic behavior needs to be investigated, but the computational cost of the overset approach is too large. The overset or chimera method allows the blades loads to be computed from first-principles and therefore provides the most accurate prediction of the rotor wake for the models investigated. The physics of the flow fields generated by these models for rotor/fuselage interactions are explored, along with efficiencies and limitations of each method.

  2. A novel dual ex vivo lung perfusion technique improves immediate outcomes in an experimental model of lung transplantation.

    Science.gov (United States)

    Tanaka, Y; Noda, K; Isse, K; Tobita, K; Maniwa, Y; Bhama, J K; D'Cunha, J; Bermudez, C A; Luketich, J D; Shigemura, N

    2015-05-01

    The lungs are dually perfused by the pulmonary artery and the bronchial arteries. This study aimed to test the feasibility of dual-perfusion techniques with the bronchial artery circulation and pulmonary artery circulation synchronously perfused using ex vivo lung perfusion (EVLP) and evaluate the effects of dual-perfusion on posttransplant lung graft function. Using rat heart-lung blocks, we developed a dual-perfusion EVLP circuit (dual-EVLP), and compared cellular metabolism, expression of inflammatory mediators, and posttransplant graft function in lung allografts maintained with dual-EVLP, standard-EVLP, or cold static preservation. The microvasculature in lung grafts after transplant was objectively evaluated using microcomputed tomography angiography. Lung grafts subjected to dual-EVLP exhibited significantly better lung graft function with reduced proinflammatory profiles and more mitochondrial biogenesis, leading to better posttransplant function and compliance, as compared with standard-EVLP or static cold preservation. Interestingly, lung grafts maintained on dual-EVLP exhibited remarkably increased microvasculature and perfusion as compared with lungs maintained on standard-EVLP. Our results suggest that lung grafts can be perfused and preserved using dual-perfusion EVLP techniques that contribute to better graft function by reducing proinflammatory profiles and activating mitochondrial respiration. Dual-EVLP also yields better posttransplant graft function through increased microvasculature and better perfusion of the lung grafts after transplantation. © Copyright 2015 The American Society of Transplantation and the American Society of Transplant Surgeons.

  3. Increasing the reliability of ecological models using modern software engineering techniques

    Science.gov (United States)

    Robert M. Scheller; Brian R. Sturtevant; Eric J. Gustafson; Brendan C. Ward; David J. Mladenoff

    2009-01-01

    Modern software development techniques are largely unknown to ecologists. Typically, ecological models and other software tools are developed for limited research purposes, and additional capabilities are added later, usually in an ad hoc manner. Modern software engineering techniques can substantially increase scientific rigor and confidence in ecological models and...

  4. Fusing Observations and Model Results for Creation of Enhanced Ozone Spatial Fields: Comparison of Three Techniques

    Science.gov (United States)

    This paper presents three simple techniques for fusing observations and numerical model predictions. The techniques rely on model/observation bias being considered either as error free, or containing some uncertainty, the latter mitigated with a Kalman filter approach or a spati...

  5. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    Energy Technology Data Exchange (ETDEWEB)

    Durlofsky, Louis J.

    2000-08-28

    This project targets the development of (1) advanced reservoir simulation techniques for modeling non-conventional wells; (2) improved techniques for computing well productivity (for use in reservoir engineering calculations) and well index (for use in simulation models), including the effects of wellbore flow; and (3) accurate approaches to account for heterogeneity in the near-well region.

  6. Generalization Technique for 2D+SCALE Dhe Data Model

    Science.gov (United States)

    Karim, Hairi; Rahman, Alias Abdul; Boguslawski, Pawel

    2016-10-01

    Different users or applications need models at different scales, especially in computer applications such as game visualization and GIS modelling. Some issues have been raised about fulfilling the GIS requirement of retaining detail while minimizing the redundancy of the scale datasets. Previous researchers suggested and attempted to add another dimension, such as scale and/or time, to a 3D model, but the implementation of a scale dimension faces some problems due to the limitations and availability of data structures and data models. Nowadays, various data structures and data models have been proposed to support a variety of applications and dimensionalities, but little research has been conducted on supporting the scale dimension. Generally, the Dual Half Edge (DHE) data structure was designed to work with any perfect 3D spatial object such as buildings. In this paper, we attempt to expand the capability of the DHE data structure toward integration with the scale dimension. The description of the concept and implementation of generating 3D-scale (2D spatial + scale dimension) models for the DHE data structure forms the major discussion of this paper. We strongly believe that advantages such as local modification and topological elements (navigation, query and semantic information) in the scale dimension could be used for future 3D-scale applications.

  7. A modeling technique for STOVL ejector and volume dynamics

    Science.gov (United States)

    Drummond, C. K.; Barankiewicz, W. S.

    1990-01-01

    New models for thrust augmenting ejector performance prediction and feeder duct dynamic analysis are presented and applied to a proposed Short Take Off and Vertical Landing (STOVL) aircraft configuration. Central to the analysis is the nontraditional treatment of the time-dependent volume integrals in the otherwise conventional control-volume approach. In the case of the thrust augmenting ejector, the analysis required a new relationship for transfer of kinetic energy from the primary flow to the secondary flow. Extraction of the required empirical corrections from current steady-state experimental data is discussed; a possible approach for modeling insight through Computational Fluid Dynamics (CFD) is presented.

  8. Teaching Behavioral Modeling and Simulation Techniques for Power Electronics Courses

    Science.gov (United States)

    Abramovitz, A.

    2011-01-01

    This paper suggests a pedagogical approach to teaching the subject of behavioral modeling of switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The methodology is oriented toward electrical engineering (EE) students at the undergraduate level, enrolled in courses such as "Power Electronics,"…

  9. Teaching Behavioral Modeling and Simulation Techniques for Power Electronics Courses

    Science.gov (United States)

    Abramovitz, A.

    2011-01-01

    This paper suggests a pedagogical approach to teaching the subject of behavioral modeling of switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The methodology is oriented toward electrical engineering (EE) students at the undergraduate level, enrolled in courses such as "Power…

  10. Application of Yamaguchi's technique for the Rescorla-Wagner model.

    Science.gov (United States)

    Yamaguchi, Makoto

    2007-12-01

    Yamaguchi in 2006 solved for the first time a problem concerning a 1972 mathematical model of classical conditioning by Rescorla and Wagner. That derivation is not an isolated contribution. Here it is shown that the same line of derivation can be successfully applied to another experimental situation involving more stimuli.
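
    For context, the Rescorla-Wagner update rule itself (the well-known 1972 model the abstract refers to; this sketch does not reproduce Yamaguchi's analytical derivation) can be simulated in a few lines:

        # The standard Rescorla-Wagner (1972) update rule, simulated for simple acquisition
        # with a single conditioned stimulus; not Yamaguchi's closed-form derivation.
        def rescorla_wagner(trials, alpha=0.3, beta=1.0, lam=1.0, v0=0.0):
            """Associative strength V after each trial: dV = alpha * beta * (lam - V)."""
            v, history = v0, []
            for _ in range(trials):
                v += alpha * beta * (lam - v)
                history.append(v)
            return history

        for t, v in enumerate(rescorla_wagner(10), start=1):
            print(f"trial {t:2d}: V = {v:.3f}")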

  11. Suitability of sheet bending modelling techniques in CAPP applications

    NARCIS (Netherlands)

    Streppel, A.H.; de Vin, L.J.; de Vin, L.J.; Brinkman, J.; Brinkman, J.; Kals, H.J.J.

    1993-01-01

    The use of CNC machine tools, together with decreasing lot sizes and stricter tolerance prescriptions, has led to changes in sheet-metal part manufacturing. In this paper, problems introduced by the difference between the actual material behaviour and the results obtained from analytical models and

  12. Teaching Behavioral Modeling and Simulation Techniques for Power Electronics Courses

    Science.gov (United States)

    Abramovitz, A.

    2011-01-01

    This paper suggests a pedagogical approach to teaching the subject of behavioral modeling of switch-mode power electronics systems through simulation by general-purpose electronic circuit simulators. The methodology is oriented toward electrical engineering (EE) students at the undergraduate level, enrolled in courses such as "Power…

  13. A review of propeller modelling techniques based on Euler methods

    NARCIS (Netherlands)

    Zondervan, G.J.D.

    1998-01-01

    Future generation civil aircraft will be powered by new, highly efficient propeller propulsion systems. New, advanced design tools like Euler methods will be needed in the design process of these aircraft. This report describes the application of Euler methods to the modelling of flowfields generate

  14. Ionospheric scintillation forecasting model based on NN-PSO technique

    Science.gov (United States)

    Sridhar, M.; Venkata Ratnam, D.; Padma Raju, K.; Sai Praharsha, D.; Saathvika, K.

    2017-09-01

    The forecasting and modeling of ionospheric scintillation effects are crucial for precise satellite positioning and navigation applications. In this paper, a Neural Network model, trained using Particle Swarm Optimization (PSO) algorithm, has been implemented for the prediction of amplitude scintillation index (S4) observations. The Global Positioning System (GPS) and Ionosonde data available at Darwin, Australia (12.4634° S, 130.8456° E) during 2013 has been considered. The correlation analysis between GPS S4 and Ionosonde drift velocities (hmf2 and fof2) data has been conducted for forecasting the S4 values. The results indicate that forecasted S4 values closely follow the measured S4 values for both the quiet and disturbed conditions. The outcome of this work will be useful for understanding the ionospheric scintillation phenomena over low latitude regions.
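
    A compact, hedged sketch of the PSO-trained network idea, with synthetic inputs standing in for the GPS/Ionosonde features and a plain global-best PSO over the flattened weights (this is not the paper's actual architecture or data):

        # Compact sketch of a PSO-trained feedforward network (one hidden layer) on synthetic
        # data; a stand-in for the paper's S4 forecasting setup, not its actual model or data.
        import numpy as np

        rng = np.random.default_rng(3)
        X = rng.uniform(-1, 1, (200, 2))                       # placeholders for two input features
        y = np.tanh(1.5 * X[:, 0] - 0.8 * X[:, 1]) + rng.normal(0, 0.05, 200)

        H = 6                                                  # hidden units
        n_w = 2 * H + H + H + 1                                # W1 (2xH) + b1 (H) + w2 (H) + b2 (1)

        def forward(w, X):
            W1 = w[:2 * H].reshape(2, H); b1 = w[2 * H:3 * H]
            w2 = w[3 * H:4 * H];          b2 = w[4 * H]
            return np.tanh(X @ W1 + b1) @ w2 + b2

        def mse(w):
            return np.mean((forward(w, X) - y) ** 2)

        # Plain global-best PSO over the flattened weight vector.
        n_particles, iters = 30, 200
        pos = rng.uniform(-1, 1, (n_particles, n_w))
        vel = np.zeros_like(pos)
        pbest = pos.copy(); pbest_val = np.array([mse(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, n_w))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = pos + vel
            vals = np.array([mse(p) for p in pos])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()

        print("training MSE of PSO-trained network: %.4f" % mse(gbest))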

  15. Detecting feature interactions in Web services with model checking techniques

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    As a platform-independent software system, a Web service is designed to offer interoperability among diverse and heterogeneous applications. With the introduction of service composition in Web service creation, the various message interactions among the atomic services result in a problem resembling the feature interaction problem in the telecommunication area. This article defines the problem as feature interaction in Web services and proposes a model-checking-based detection method. In the method, the Web service description is translated into the Promela language, the input language of the model checker Simple Promela Interpreter (SPIN), and the specific properties, expressed as linear temporal logic (LTL) formulas, are formulated according to our classification of feature interaction. Then, SPIN is used to check these specific properties to detect feature interaction in Web services.

  16. A Memory Insensitive Technique for Large Model Simplification

    Energy Technology Data Exchange (ETDEWEB)

    Lindstrom, P; Silva, C

    2001-08-07

    In this paper we propose three simple, but significant, improvements to the OoCS (Out-of-Core Simplification) algorithm of Lindstrom [20] which increase the quality of approximations and extend the applicability of the algorithm to an even larger class of computer systems. The original OoCS algorithm has memory complexity that depends on the size of the output mesh, but no dependency on the size of the input mesh. That is, it can be used to simplify meshes of arbitrarily large size, but the complexity of the output mesh is limited by the amount of memory available. Our first contribution is a version of OoCS that removes the dependency of having enough memory to hold (even) the simplified mesh. With our new algorithm, the whole process is made essentially independent of the available memory on the host computer. Our new technique uses disk instead of main memory, but it is carefully designed to avoid costly random accesses. Our two other contributions improve the quality of the approximations generated by OoCS. We propose a scheme for preserving surface boundaries which does not use connectivity information, and a scheme for constraining the position of the "representative vertex" of a grid cell to an optimal position inside the cell.

  17. A Model-Following Technique for Insensitive Aircraft Control Systems.

    Science.gov (United States)

    1981-01-01

    Harvey and Pope [13] and Vinkler [30] compared several different methods in their works, while Shenkar [26] and Ashkenazi [2] extended the most promising... Following for Insensitive Control works, let us consider the simple, first-order system used by Shenkar [26]. The plant is described by x' = -(1 + Δr)x + u ... representative of the methods of Vinkler, Ashkenazi, and Shenkar), and Model Following for Insensitive Control (MFIC). For the LQR design, we assume that our

  18. IMPROVED SOFTWARE QUALITY ASSURANCE TECHNIQUES USING SAFE GROWTH MODEL

    Directory of Open Access Journals (Sweden)

    M.Sangeetha

    2010-09-01

    Full Text Available Our lives are governed by large, complex systems with increasingly complex software, and the safety, security, and reliability of these systems has become a major concern. As the software in today's systems grows larger, it has more defects, and these defects adversely affect the safety, security, and reliability of the systems. Software engineering is the application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software. Software quality divides into two pieces: internal and external quality characteristics. External quality characteristics are those parts of a product that face its users, whereas internal quality characteristics are those that do not. Quality is conformance to product requirements and should be free. This research concerns the role of software quality. Software reliability is an important facet of software quality. It is the probability of failure-free operation of a computer program in a specified environment for a specified time. In software reliability modeling, the parameters of the model are typically estimated from the test data of the corresponding component. However, the widely used point estimators are subject to random variations in the data, resulting in uncertainties in these estimated parameters. This research describes a new approach to the problem of software testing. The approach is based on Bayesian graphical models and presents formal mechanisms for the logical structuring of the software testing problem, the probabilistic and statistical treatment of the uncertainties to be addressed, the test design and analysis process, and the incorporation and implication of test results. Once constructed, the models produced are dynamic representations of the software testing problem. It explains why the common test-and-fix software quality strategy is no longer adequate, and characterizes the properties of the quality strategy.

  19. Modeling and Analyzing Terrain Data Acquired by Modern Mapping Techniques

    Science.gov (United States)

    2009-09-22

    enhanced by new terrain mapping technologies such as laser altimetry (LIDAR), ground-based laser scanning and Real-Time Kinematic GPS (RTK-GPS) that ... developed and implemented an approach that has the following features: it is modular, so that a user can use different models for each of the modules ... support some way of connecting separate modules together to form pipelines; however, this requires manual intervention. While a typical GIS can manage

  20. Groundwater Resources Assessment For Joypurhat District Using Mathematical Modelling Technique

    Directory of Open Access Journals (Sweden)

    Md. Iquebal Hossain

    2015-06-01

    Full Text Available In this study, potential recharge as well as groundwater availability for five upazilas (Akkelpur, Kalai, Joypurhat Sadar, Khetlal and Panchbibi) of Joypurhat district has been estimated using the MIKE SHE modelling tools. The main aquifers of the study area are dominated by medium sands, and medium and coarse sands with a little gravel. The top of the aquifers ranges from 15 m to 24 m and the screenable thickness of the aquifers ranges from 33 m to 46 m within the depth range from 57 m to 87 m. Heavy abstraction of groundwater for agricultural, industrial and domestic uses results in excessive lowering of the water table, making the shallow and hand tubewells inoperable in the dry season. The upazila-wise potential recharge for the study area was estimated through a mathematical model using the MIKE SHE modelling tools in an integrated approach. The required data were collected from the relevant organisations. The potential recharge of the present study varies from 452 mm to 793 mm. The maximum depth to the groundwater table in most places occurs at the end of April. At this time, the groundwater table in most parts of Kalai, Khetlal, Akkelpur and Panchbibi goes below the suction limit, causing HTWs and STWs to become partially or fully inoperable.

  1. An improved calibration technique for wind tunnel model attitude sensors

    Science.gov (United States)

    Tripp, John S.; Wong, Douglas T.; Finley, Tom D.; Tcheng, Ping

    1993-01-01

    Aerodynamic wind tunnel tests at NASA Langley Research Center (LaRC) require accurate measurement of model attitude. Inertial accelerometer packages have been the primary sensor used to measure model attitude to an accuracy of +/- 0.01 deg as required for aerodynamic research. The calibration parameters of the accelerometer package are currently obtained from a seven-point tumble test using a simplified empirical approximation. The inaccuracy due to the approximation exceeds the accuracy requirement as the misalignment angle between the package axis and the model body axis increases beyond 1.4 deg. This paper presents the exact solution derived from the coordinate transformation to eliminate inaccuracy caused by the approximation. In addition, a new calibration procedure is developed in which the data taken from the seven-point tumble test is fit to the exact solution by means of a least-squares estimation procedure. Validation tests indicate that the new calibration procedure provides +/- 0.005-deg accuracy over large package misalignments, which is not possible with the current procedure.
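
    One way to picture the tumble-test fit described above is the following hedged sketch (not the LaRC procedure): the output of a single accelerometer axis is modeled as a = S*sin(theta + phi) + b, and sensitivity, misalignment and bias are recovered by rewriting it as a linear least-squares problem.

        # Hedged sketch of a tumble-test calibration fit (not the LaRC procedure): model the
        # accelerometer output as a = S*sin(theta + phi) + b and recover S, phi, b by rewriting
        # it as the linear model a = A*sin(theta) + B*cos(theta) + b.
        import numpy as np

        rng = np.random.default_rng(5)
        theta = np.radians(np.linspace(0, 180, 7))            # seven tumble-test orientations

        # Synthetic "truth" for the package: sensitivity, misalignment, bias (assumed values).
        S_true, phi_true, b_true = 0.998, np.radians(2.0), 0.004
        readings = S_true * np.sin(theta + phi_true) + b_true + rng.normal(0, 1e-4, theta.size)

        # Linear design matrix: columns sin(theta), cos(theta), 1.
        M = np.column_stack([np.sin(theta), np.cos(theta), np.ones_like(theta)])
        A, B, b = np.linalg.lstsq(M, readings, rcond=None)[0]

        S_hat, phi_hat = np.hypot(A, B), np.arctan2(B, A)     # S = hypot(A, B), phi = atan2(B, A)
        print("sensitivity %.4f  misalignment %.3f deg  bias %.4f"
              % (S_hat, np.degrees(phi_hat), b))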

  2. Establishment of reproducible osteosarcoma rat model using orthotopic implantation technique.

    Science.gov (United States)

    Yu, Zhe; Sun, Honghui; Fan, Qingyu; Long, Hua; Yang, Tongtao; Ma, Bao'an

    2009-05-01

    In experimental musculoskeletal oncology, there remains a need for animal models that can be used to assess the efficacy of new and innovative treatment methodologies for bone tumors. The rat plays a very important role in the bone field, especially in the evaluation of metabolic bone diseases. The objective of this study was to develop a rat osteosarcoma model for the evaluation of new surgical and molecular methods of treatment for extremity sarcoma. One hundred male SD rats weighing 125.45±8.19 g were divided into 5 groups and anesthetized intraperitoneally with 10% chloral hydrate. Orthotopic implantation models of rat osteosarcoma were produced by injecting tumor cells directly into the femur of SD rats with an inoculation needle. In the first step of the experiment, 2×10^5 to 1×10^6 UMR106 cells in 50 μl were injected intraosseously into the median or distal part of the femoral shaft and the tumor take rate was determined. The second stage consisted of determining tumor volume, correlating findings from ultrasound with findings from necropsy, and determining time of survival. In the third stage, the orthotopically implanted tumors and lung nodules were resected entirely, sectioned, and then counterstained with hematoxylin and eosin for histopathologic evaluation. The tumor take rate was 100% for implants with 8×10^5 tumor cells or more, which was much less than the amount required for subcutaneous implantation, with a high lung metastasis rate of 93.0%. Ultrasound and necropsy findings matched closely (r=0.942), making ultrasound a suitable technique for measuring the tumors at any stage. The tumor growth curve showed that orthotopically implanted tumors expanded vigorously over time, especially in the first 3 weeks. The median time of survival was 38 days and surgical mortality was 0%. The UMR106 cell line has strong carcinogenic capability and high lung metastasis frequency. The present rat osteosarcoma model was shown to be feasible: the take rate was high, surgical mortality was

  3. A titration model for evaluating calcium hydroxide removal techniques

    Directory of Open Access Journals (Sweden)

    Mark PHILLIPS

    2015-02-01

    Full Text Available Objective: Calcium hydroxide (Ca(OH)2) has been used in endodontics as an intracanal medicament due to its antimicrobial effects and its ability to inactivate bacterial endotoxin. The inability to totally remove this intracanal medicament from the root canal system, however, may interfere with the setting of eugenol-based sealers or inhibit the bonding of resin to dentin, thus presenting clinical challenges for endodontic treatment. This study used a chemical titration method to measure the residual Ca(OH)2 left after different endodontic irrigation methods. Material and Methods: Eighty-six human canine roots were prepared for obturation. Thirty teeth were filled with known but different amounts of Ca(OH)2 for 7 days, which were dissolved out and titrated to quantitate the residual Ca(OH)2 recovered from each root and so produce a standard curve. Forty-eight of the remaining teeth were filled with equal amounts of Ca(OH)2, followed by gross Ca(OH)2 removal using hand files and randomized treatment with either: (1) syringe irrigation; (2) syringe irrigation with use of an apical file; (3) syringe irrigation with an added 30 s of passive ultrasonic irrigation (PUI); or (4) syringe irrigation with apical file and PUI (n=12/group). Residual Ca(OH)2 was dissolved with glycerin and titrated to measure the residual Ca(OH)2 left in the root. Results: No method completely removed all residual Ca(OH)2. The addition of 30 s of PUI, with or without apical file use, removed Ca(OH)2 significantly better than irrigation alone. Conclusions: This technique allowed quantification of residual Ca(OH)2. The use of PUI (with or without an apical file) resulted in significantly lower Ca(OH)2 residue compared to irrigation alone.
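
    The standard-curve step can be illustrated with a hedged sketch (the masses and titrant volumes below are invented, not the study's measurements):

        # Sketch of the standard-curve idea (illustrative numbers only): fit titrant volume
        # against known Ca(OH)2 masses, then invert the line to estimate residual Ca(OH)2.
        import numpy as np

        known_mass_mg  = np.array([2, 4, 6, 8, 10, 12], dtype=float)       # hypothetical standards
        titrant_vol_ml = np.array([0.55, 1.08, 1.61, 2.18, 2.70, 3.21])    # hypothetical readings

        slope, intercept = np.polyfit(known_mass_mg, titrant_vol_ml, 1)

        def residual_mass(volume_ml):
            """Invert the standard curve: mass = (volume - intercept) / slope."""
            return (volume_ml - intercept) / slope

        sample_vol = 1.30   # titrant consumed by Ca(OH)2 recovered from a treated root (made up)
        print("estimated residual Ca(OH)2: %.2f mg" % residual_mass(sample_vol))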

  4. The Establishment of Parallel Systems on Microcomputer and the Development of Parallel Programs

    Institute of Scientific and Technical Information of China (English)

    王顺绪; 李志英

    2001-01-01

    The significance of establishing parallel systems on microcomputers and performing parallel simulation is illustrated, with methods given for installing PVM on a microcomputer. In addition, the .cshrc file that makes PVM run correctly and examples of application programs written in the master/slave programming model are also presented.

  5. Computable General Equilibrium Techniques for Carbon Tax Modeling

    Directory of Open Access Journals (Sweden)

    Al-Amin

    2009-01-01

    Full Text Available Problem statement: In the absence of proper environmental models, environmental pollution is now a serious problem in many developing countries, particularly in Malaysia. Empirical studies worldwide reveal that the imposition of a carbon tax significantly decreases carbon emissions and does not dramatically reduce economic growth. To our knowledge, no research has been done to simulate the economic impact of emission control policies in Malaysia. Approach: Therefore, this study developed an environmental computable general equilibrium model for Malaysia and investigated carbon tax policy responses in the economy by applying exogenously different degrees of carbon tax into the model. Three simulations were carried out using a Malaysian social accounting matrix. Results: The carbon tax policy illustrated that a 1.21% reduction of carbon emissions reduced the nominal GDP by 0.82% and exports by 2.08%; a 2.34% reduction of carbon emissions reduced the nominal GDP by 1.90% and exports by 3.97%; and a 3.40% reduction of carbon emissions reduced the nominal GDP by 3.17% and exports by 5.71%. Conclusion/Recommendations: Imposition of successively higher carbon taxes increases government revenue from the baseline by 26.67, 53.07 and 79.28%, respectively. However, fixed capital investment increased in scenario 1a by 0.43% and decreased in scenarios 1b and 1c by 0.26 and 1.79%, respectively, from the baseline. According to our policy findings, policy makers should consider the first (scenario 1a) carbon tax policy. This policy achieves reasonably good environmental impacts without losing investment, fixed capital investment, the investment share of nominal GDP and government revenue.

  6. Modern EMC analysis techniques II models and applications

    CERN Document Server

    Kantartzis, Nikolaos V

    2008-01-01

    The objective of this two-volume book is the systematic and comprehensive description of the most competitive time-domain computational methods for the efficient modeling and accurate solution of modern real-world EMC problems. Intended to be self-contained, it performs a detailed presentation of all well-known algorithms, elucidating on their merits or weaknesses, and accompanies the theoretical content with a variety of applications. Outlining the present volume, numerical investigations delve into printed circuit boards, monolithic microwave integrated circuits, radio frequency microelectro

  7. Ecological Footprint Model Using the Support Vector Machine Technique

    Science.gov (United States)

    Ma, Haibo; Chang, Wenjuan; Cui, Guangbai

    2012-01-01

    The per capita ecological footprint (EF) is one of the most widely recognized measures of environmental sustainability. It aims to quantify the Earth's biological resources required to support human activity. In this paper, we summarize relevant previous literature and present five factors that influence per capita EF. These factors are: national gross domestic product (GDP), urbanization (independent of economic development), distribution of income (measured by the Gini coefficient), export dependence (measured by the percentage of exports to total GDP), and service intensity (measured by the percentage of services in total GDP). A new ecological footprint model based on a support vector machine (SVM), a machine-learning method based on the structural risk minimization principle from statistical learning theory, was developed to calculate the per capita EF of 24 nations using data from 123 nations. The calculation accuracy was measured by average absolute error and average relative error, which were 0.004883 and 0.351078% respectively. Our results demonstrate that the SVM-based EF model has good calculation performance. PMID:22291949
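
    The following sketch shows the general shape of such an SVM regression model using scikit-learn; the national-indicator data are synthetic stand-ins, and the kernel and hyperparameters are illustrative assumptions rather than the values used in the paper.

```python
# Sketch of an SVM (support vector regression) model for per capita EF,
# trained on synthetic stand-in data; the paper used indicators for 123 nations.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 123
# Five synthetic predictors: GDP, urbanization, Gini, export share, service share
X = rng.uniform(size=(n, 5))
# Synthetic "true" EF, used only to make the example runnable
y = 1.5 * X[:, 0] + 0.8 * X[:, 1] - 0.5 * X[:, 2] + rng.normal(0, 0.05, n)

train, test = slice(0, 99), slice(99, 123)        # hold out 24 nations, as in the study
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X[train], y[train])

pred = model.predict(X[test])
avg_abs_err = np.mean(np.abs(pred - y[test]))
avg_rel_err = np.mean(np.abs((pred - y[test]) / y[test])) * 100.0
print(f"average absolute error: {avg_abs_err:.6f}")
print(f"average relative error: {avg_rel_err:.4f} %")
```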

  8. Comparative Studies of Clustering Techniques for Real-Time Dynamic Model Reduction

    CERN Document Server

    Hogan, Emilie; Halappanavar, Mahantesh; Huang, Zhenyu; Lin, Guang; Lu, Shuai; Wang, Shaobu

    2015-01-01

    Dynamic model reduction in power systems is necessary for improving computational efficiency. Traditional model reduction using linearized models or offline analysis would not be adequate to capture power system dynamic behaviors, especially as the new mix of intermittent generation and intelligent consumption makes the power system more dynamic and non-linear. Real-time dynamic model reduction emerges as an important need. This paper explores the use of clustering techniques to analyze real-time phasor measurements to determine generator groups and representative generators for dynamic model reduction. Two clustering techniques -- graph clustering and evolutionary clustering -- are studied in this paper. Various implementations of these techniques are compared and also compared with a previously developed Singular Value Decomposition (SVD)-based dynamic model reduction approach. Various methods exhibit different levels of accuracy when comparing the reduced model simulation against the original model. But some ...
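
    As a rough illustration of the idea, the sketch below clusters synthetic generator trajectories with k-means (a stand-in for the graph and evolutionary clustering studied in the paper) and picks the generator closest to each cluster centroid as the group representative.

```python
# Group generators from (synthetic) phasor-like trajectories and pick one
# representative per group; k-means stands in for the paper's graph and
# evolutionary clustering.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_gen, n_samples = 20, 200
t = np.linspace(0, 2, n_samples)

# Synthetic rotor-speed deviations: three underlying coherent groups
group_of = rng.integers(0, 3, size=n_gen)
base = np.vstack([np.sin(2 * np.pi * (0.5 + g) * t) for g in range(3)])
traj = base[group_of] + 0.05 * rng.normal(size=(n_gen, n_samples))

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(traj)
for c in range(3):
    members = np.where(kmeans.labels_ == c)[0]
    dist = np.linalg.norm(traj[members] - kmeans.cluster_centers_[c], axis=1)
    rep = members[np.argmin(dist)]
    print(f"cluster {c}: generators {members.tolist()}, representative {rep}")
```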

  9. Modelling and Design of a Microstrip Band-Pass Filter Using Space Mapping Techniques

    CERN Document Server

    Tavakoli, Saeed; Mohanna, Shahram

    2010-01-01

    Determination of design parameters based on electromagnetic simulations of microwave circuits is an iterative and often time-consuming procedure. Space mapping is a powerful technique for optimizing such complex models by efficiently substituting accurate but expensive electromagnetic models (fine models) with fast and approximate models (coarse models). In this paper, we apply two space mapping techniques, explicit space mapping as well as implicit and response residual space mapping, to a case study application, a microstrip band-pass filter. First, we model the case study application and optimize its design parameters using the explicit space mapping modelling approach. Then, we use the implicit and response residual space mapping approach to optimize the filter's design parameters. Finally, the performance of each design method is evaluated. It is shown that the use of the above-mentioned techniques leads to satisfactory design solutions with a minimum number of computationally expensive fine model eval...
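
    A minimal sketch of the underlying idea, using generic one-dimensional aggressive input space mapping on toy response curves; the explicit, implicit and response residual variants compared in the paper are more elaborate, and all functions and values here are assumptions for illustration.

```python
# One-dimensional aggressive input space mapping sketch: a cheap coarse model is
# repeatedly re-aligned with an "expensive" fine model through parameter
# extraction, steering the fine-model design toward the target response.
import numpy as np
from scipy.optimize import minimize_scalar

freqs = np.linspace(0, 6, 121)

def coarse(x):                         # cheap model: resonance centred at x
    return 1.0 / (1.0 + (freqs - x) ** 2)

def fine(x):                           # "expensive" model: centre shifted by 0.6
    return 1.0 / (1.0 + (freqs - (x + 0.6)) ** 2)

target = coarse(3.0)                   # desired response
# Step 1: coarse-model optimum (here trivially x_c* = 3.0)
x_c_star = minimize_scalar(lambda z: np.sum((coarse(z) - target) ** 2),
                           bounds=(0.0, 6.0), method="bounded").x

x = x_c_star
for it in range(10):
    f_resp = fine(x)                   # one expensive fine-model evaluation
    # Parameter extraction: coarse input reproducing the fine response
    p = minimize_scalar(lambda z: np.sum((coarse(z) - f_resp) ** 2),
                        bounds=(0.0, 6.0), method="bounded").x
    step = p - x_c_star
    if abs(step) < 1e-4:
        break
    x -= step                          # ASM update with a unit Broyden matrix
print(f"coarse optimum {x_c_star:.3f}, space-mapped fine design {x:.3f}")
```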

  10. Application of nonlinear forecasting techniques for meteorological modeling

    Directory of Open Access Journals (Sweden)

    V. Pérez-Muñuzuri

    Full Text Available A nonlinear forecasting method was used to predict the behavior of a cloud coverage time series several hours in advance. The method is based on the reconstruction of a chaotic strange attractor using four years of cloud absorption data obtained from half-hourly Meteosat infrared images from Northwestern Spain. An exhaustive nonlinear analysis of the time series was carried out to reconstruct the phase space of the underlying chaotic attractor. The forecast values are used by a non-hydrostatic meteorological model ARPS for daily weather prediction and their results compared with surface temperature measurements from a meteorological station and a vertical sounding. The effect of noise in the time series is analyzed in terms of the prediction results.

    Key words: Meteorology and atmospheric dynamics (mesoscale meteorology; general) – General (new fields)
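
    The generic skeleton of this kind of nonlinear forecasting is delay-coordinate (Takens) embedding followed by nearest-neighbour prediction in the reconstructed phase space. The sketch below applies it to a synthetic chaotic series; the paper applies the same idea to half-hourly Meteosat cloud-absorption data and feeds the forecasts to ARPS. Embedding dimension, delay and neighbour count are illustrative choices.

```python
# Delay-embedding (Takens) reconstruction plus nearest-neighbour forecasting.
# Data here are synthetic (logistic map) rather than the Meteosat series.
import numpy as np

N = 2000
x = np.empty(N)
x[0] = 0.4
for i in range(N - 1):                 # chaotic logistic map as a stand-in signal
    x[i + 1] = 3.9 * x[i] * (1.0 - x[i])

m, tau, horizon, k = 3, 1, 5, 10       # embedding dim, delay, lead time, neighbours

def embed(series, m, tau):
    last = len(series) - (m - 1) * tau
    return np.column_stack([series[i * tau: i * tau + last] for i in range(m)])

train = x[:1500]
vectors = embed(train, m, tau)
# Each embedded vector must have a known 'horizon'-step future inside the train set
usable = vectors[:len(vectors) - horizon]
futures = train[(m - 1) * tau + horizon:]

def forecast(recent):
    """Predict 'horizon' steps ahead from the most recent embedded state."""
    d = np.linalg.norm(usable - recent, axis=1)
    idx = np.argsort(d)[:k]
    return futures[idx].mean()

# One-shot test: forecast from the last embedded state before t = 1500
state = x[1500 - 1 - (m - 1) * tau: 1500: tau]
print("forecast:", forecast(state), "actual:", x[1500 - 1 + horizon])
```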

  11. Micro-computed tomography assisted distal femur metaphyseal blunt punch compression for determining trabecular bone strength in mice.

    Science.gov (United States)

    Sankar, Uma; Pritchard, Zachary J; Voor, Michael J

    2016-05-03

    Shorter generation time and the power of genetic manipulation make mice an ideal model system to study bone biology as well as bone diseases. However, their small size presents a challenge for strength measurements, particularly of the weight-bearing cancellous bone in the murine long bones. We recently developed an improved method to measure the axial compressive strength of the cancellous bone in the distal femur metaphysis in mice. Transverse micro-computed tomography image slices that are 7 µm thick were used to locate the position where the epiphysis-metaphysis transition occurs. This enabled the removal of the distal femur epiphysis at the exact transition point, exposing the full extent of metaphyseal trabecular bone and allowing more accurate and consistent measurement of its strength. When applied to a murine model system consisting of five-month-old male wild-type (WT) and Ca(2+)/calmodulin dependent protein kinase kinase 2 (CaMKK2) knockout (KO) Camkk2(-/-) mice that possess recorded differences in trabecular bone volume, data collected using this method showed good correlation between bone volume fraction and strength of trabecular bone. In combination with micro-computed tomography and histology, this method will provide a comprehensive and consistent assessment of the microarchitecture and tissue strength of the cancellous bone in murine models.

  12. Micro-computed tomography for small animal imaging: Technological details

    Institute of Scientific and Technical Information of China (English)

    Hao Li; Hui Zhang; Zhiwei Tang; Guangshu Hu

    2008-01-01

    The high-resolution micro-computed tomography (micro-CT) system has now become an important tool for biological research. The micro-CT system enables a non-invasive inspection to screen anatomical changes in small animals. The promising advantages include high spatial resolution, high sensitivity to bone and lung, short scan time and cost-effectiveness. The dose received by the small animal might be a critical concern in the research. In this article, the choice of the components, fundamental physical problems, the image reconstruction algorithm and the representative applications of micro-CT are summarized. Some results from our research group are also presented to show high-resolution images obtained by the micro-CT system.

  13. Microcomputer based instrument for measuring a novel pulmonary function test

    Science.gov (United States)

    Craine, Brian L.; Craine, Eric R.

    1996-08-01

    The design of a prototype instrument for measuring the end-tidal concentration of carbon monoxide during human respiration is presented. The instrument automatically samples the final sixty cubic centimeters of exhaled breath, from successive breathing cycles, by coordinating a pump and the breathing cycle with a set of vacuum and pressure sensors. The concentration of carbon monoxide is measured using a nondispersive infrared spectrophotometer. The amount of carbon monoxide present is measured relative to the source air concentration eliminating the need for calibrating the instrument. The testing protocol and measurements can be controlled by a microcomputer connected to the instrument through a standard RS-232 serial interface. When at equilibrium, the end-tidal concentration of CO can be measured in a simple and reproducible fashion. This simplified technology allows for the construction of a small, portable, easy to use instrument that will allow the application of this new pulmonary function test at the point of contact with patients.

  14. An advanced microcomputer design for processing of semiconductor materials

    Science.gov (United States)

    Bjoern, L.; Lindkvist, L.; Zaar, J.

    1988-01-01

    In the Get Away Special 330 payload, two germanium samples doped with gallium will be processed. The aim of the experiments is to create a planar solid/liquid interface, and to study the breakdown of this interface as the crystal growth rate increases. For the experiments a gradient furnace was designed which is heated by resistive heaters. Cooling is provided by circulating gas from the atmosphere in the canister through cooling channels in the furnace. The temperatures along the sample are measured by platinum/rhodium thermocouples. The furnace is controlled by a microcomputer system based upon the 80C88 processor. A data acquisition system is integrated into the system. In order to synchronize the different actions in time, a multitask manager is used.

  15. Application of Krylov Reduction Technique for a Machine Tool Multibody Modelling

    Directory of Open Access Journals (Sweden)

    M. Sulitka

    2014-02-01

    Full Text Available Quick calculation of machine tool dynamic response represents one of the major requirements for machine tool virtual modelling and virtual machining, aiming at simulating the machining process performance, quality, and precision of a workpiece. Enhanced time effectiveness in machine tool dynamic simulations may be achieved by employing model order reduction (MOR) techniques on the full finite element (FE) models. The paper provides a case study aimed at comparing the Krylov subspace-based and mode truncation techniques. The application of both reduction techniques for creating a machine tool multibody model is evaluated. The Krylov subspace reduction technique shows high quality in terms of the dynamic properties of the reduced multibody model and, at the same time, very low time demands.
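
    A minimal sketch of one-sided Krylov (Arnoldi) model order reduction on a small synthetic state-space model; the machine-tool FE models and the specific implementation used in the paper are of course far larger, and the expansion point and reduced order below are arbitrary illustrative choices.

```python
# One-sided Krylov (Arnoldi) model order reduction sketch: project a state-space
# model (A, b, c) onto a rational Krylov subspace about expansion point s0 so
# that the reduced model matches leading transfer-function moments at s0.
import numpy as np

rng = np.random.default_rng(3)
n, r, s0 = 200, 10, 0.0                      # full order, reduced order, expansion point
A = -np.diag(rng.uniform(1, 100, n)) + 0.1 * rng.normal(size=(n, n))
b = rng.normal(size=n)
c = rng.normal(size=n)

def arnoldi(matvec, v0, k):
    """Orthonormal basis of span{v0, M v0, ..., M^{k-1} v0} (no breakdown handling)."""
    V = np.zeros((len(v0), k))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(k - 1):
        w = matvec(V[:, j])
        for i in range(j + 1):               # modified Gram-Schmidt
            w = w - (V[:, i] @ w) * V[:, i]
        V[:, j + 1] = w / np.linalg.norm(w)
    return V

shifted = A - s0 * np.eye(n)
solve_shift = lambda v: np.linalg.solve(shifted, v)
V = arnoldi(solve_shift, solve_shift(b), r)

Ar, br, cr = V.T @ A @ V, V.T @ b, V.T @ c   # Galerkin projection

def tf(Amat, bvec, cvec, s):
    return cvec @ np.linalg.solve(s * np.eye(len(bvec)) - Amat, bvec)

for s in (0.1j, 1j, 10j):                    # transfer-function error at test points
    print(s, abs(tf(A, b, c, s) - tf(Ar, br, cr, s)))
```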

  16. Multivariate moment closure techniques for stochastic kinetic models

    Science.gov (United States)

    Lakatos, Eszter; Ale, Angelique; Kirk, Paul D. W.; Stumpf, Michael P. H.

    2015-09-01

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, it comes to an interplay between the nonlinearities and the stochastic dynamics, which is much harder to capture correctly by such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closure and illustrate their use in the context of two models that have proved challenging to the previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinases signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.
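
    As a small illustration of moment closure, the sketch below applies a univariate normal (Gaussian) closure to a toy production/dimerization system; the multivariate Gaussian, gamma and lognormal closures and the p53/Hes1 models treated in the paper go well beyond this.

```python
# Normal (Gaussian) moment-closure sketch for a toy network
#   0  --k1--> X       (propensity k1)
#   2X --k2--> 0       (propensity k2*n*(n-1)/2, removes two molecules)
# The raw-moment ODEs require <n^3>; the closure sets the third central moment
# to zero, i.e. <n^3> ~= 3<n><n^2> - 2<n>^3, closing the system for m1, m2.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 20.0, 0.02

def closed_moments(t, y):
    m1, m2 = y
    m3 = 3.0 * m1 * m2 - 2.0 * m1 ** 3           # Gaussian closure
    n_n1 = m2 - m1                                # <n(n-1)>
    n_n1sq = m3 - 2.0 * m2 + m1                   # <n(n-1)^2>
    dm1 = k1 - k2 * n_n1
    dm2 = k1 * (2.0 * m1 + 1.0) - 2.0 * k2 * n_n1sq
    return [dm1, dm2]

sol = solve_ivp(closed_moments, (0.0, 5.0), [0.0, 0.0])
m1, m2 = sol.y[:, -1]
print(f"closure: mean = {m1:.2f}, variance = {m2 - m1 ** 2:.2f}")
```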

  17. Multivariate moment closure techniques for stochastic kinetic models

    Energy Technology Data Exchange (ETDEWEB)

    Lakatos, Eszter, E-mail: e.lakatos13@imperial.ac.uk; Ale, Angelique; Kirk, Paul D. W.; Stumpf, Michael P. H., E-mail: m.stumpf@imperial.ac.uk [Department of Life Sciences, Centre for Integrative Systems Biology and Bioinformatics, Imperial College London, London SW7 2AZ (United Kingdom)

    2015-09-07

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, it comes to an interplay between the nonlinearities and the stochastic dynamics, which is much harder to capture correctly by such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closure and illustrate their use in the context of two models that have proved challenging to the previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinases signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.

  18. A MODEL FOR OVERLAPPING TRIGRAM TECHNIQUE FOR TELUGU SCRIPT

    Directory of Open Access Journals (Sweden)

    B.Vishnu Vardhan

    2007-09-01

    Full Text Available N-grams are consecutive overlapping N-character sequences formed from an input stream. N-grams are used as alternatives to word-based retrieval in a number of systems. In this paper we propose a model applicable to the categorization of Telugu documents. Telugu is an official language derived from the ancient Brahmi script and is the official language of the state of Andhra Pradesh. Brahmi-based scripts are noted for complex conjunct formations. The canonical structure is described as ((C)C)CV. Under this structure, any character is formed from a set of basic syllables known as vowels and consonants, where the consonant-vowel (CV) core is the basic unit, optionally preceded by one or two consonants. A huge set of characters that capture the phonetic nature of the language with an equivalent character shape is derived from this canonical structure. Words formed from this set evolved into a large corpus, and stringent grammar rules for word formation are part of this corpus. Certain word combinations that result in the formation of a single word, where the last character of the first word and the first character of the successive word are combined, also have to be addressed. Keeping these complexities in view, we propose a trigram-based system that provides a reasonable alternative to a word-based system in achieving document categorization for the Telugu language.
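
    A minimal sketch of overlapping character trigrams used for categorization: build a trigram frequency profile per category and assign a document to the most similar profile. Python strings are Unicode, so the same code works on Telugu script; the tiny English corpus and the cosine-similarity scorer are illustrative assumptions, not the paper's exact classifier.

```python
# Overlapping character-trigram categorization sketch: per-category trigram
# frequency profiles, with a new document assigned to the closest profile.
from collections import Counter
import math

def trigrams(text):
    text = " ".join(text.split())               # normalise whitespace
    return [text[i:i + 3] for i in range(len(text) - 2)]

def profile(docs):
    c = Counter()
    for d in docs:
        c.update(trigrams(d))
    return c

def cosine(a, b):
    common = set(a) & set(b)
    num = sum(a[g] * b[g] for g in common)
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

categories = {
    "sports":   profile(["the team won the match", "players scored two goals"]),
    "politics": profile(["the parliament passed the bill", "ministers debated policy"]),
}

def categorize(doc):
    scores = {name: cosine(profile([doc]), prof) for name, prof in categories.items()}
    return max(scores, key=scores.get)

print(categorize("the team scored in the final match"))
```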

  19. Multivariate moment closure techniques for stochastic kinetic models.

    Science.gov (United States)

    Lakatos, Eszter; Ale, Angelique; Kirk, Paul D W; Stumpf, Michael P H

    2015-09-07

    Stochastic effects dominate many chemical and biochemical processes. Their analysis, however, can be computationally prohibitively expensive and a range of approximation schemes have been proposed to lighten the computational burden. These, notably the increasingly popular linear noise approximation and the more general moment expansion methods, perform well for many dynamical regimes, especially linear systems. At higher levels of nonlinearity, it comes to an interplay between the nonlinearities and the stochastic dynamics, which is much harder to capture correctly by such approximations to the true stochastic processes. Moment-closure approaches promise to address this problem by capturing higher-order terms of the temporally evolving probability distribution. Here, we develop a set of multivariate moment-closures that allows us to describe the stochastic dynamics of nonlinear systems. Multivariate closure captures the way that correlations between different molecular species, induced by the reaction dynamics, interact with stochastic effects. We use multivariate Gaussian, gamma, and lognormal closure and illustrate their use in the context of two models that have proved challenging to the previous attempts at approximating stochastic dynamics: oscillations in p53 and Hes1. In addition, we consider a larger system, Erk-mediated mitogen-activated protein kinases signalling, where conventional stochastic simulation approaches incur unacceptably high computational costs.

  20. Quantification and characterization of radiation-induced changes to mandibular vascularity using micro-computed tomography.

    Science.gov (United States)

    Deshpande, Sagar Satish; Donneys, Alexis; Farberg, Aaron Samuel; Tchanque-Fossuo, Catherine N; Felice, Peter A; Buchman, Steven Richard

    2014-01-01

    Perhaps the most vexing and exigent problem confronting head and neck cancer reconstruction is overcoming the impediments of collateral damage imposed by radiation therapy (XRT) on normal surrounding tissue. Radiation therapy is detrimental to bone and soft tissue repair, resulting in an unacceptably high incidence of devastating wound healing complications as well as the associated morbidity of late pathologic fractures, reduced bone healing, and osteoradionecrosis. The consequences of XRT on bone vasculature, long known to be affected by radiation, have been poorly understood. The purpose of this study was to analyze the degree to which irradiation degrades existing bone vascularity using a powerful micro-computed tomography technique to attain highly precise quantitative metrics of the vascular tree. Fourteen 400-g male Sprague-Dawley rats underwent 35 Gy of fractionated XRT at 7 Gy/d. The animals were euthanized after 28 days, and the left ventricle was fixed and injected with Microfil (MV-122; Flow Tech, Carver, Mass) contrast. Left hemimandibles were dissected and scanned using high-resolution micro-computed tomography (18-μm voxels). The vessel number, thickness, separation, connectivity, and vessel volume fraction were analyzed for the region of interest, defined to be the volume behind the third molar spanning a total distance of 5.1 mm. Stereologic analysis and subsequent analysis of variance test demonstrated a significant and quantifiable diminution in the irradiated vasculature when compared with control animals. The vessel volume fraction (0.016 vs 0.032, P ≤ 0.003) and vessel thickness (0.042 vs 0.067 mm, P ≤ 0.001) were markedly reduced. Interestingly, further analysis demonstrated no significant differences between vessel separation and vessel number. The results of our study specifically quantify the corrosive effects of XRT on the vasculature of the mandible. The data from this novel technique go even further and imply retention of blood

  1. Energy iteration model research of DCM Buck converter with multilevel pulse train technique

    Science.gov (United States)

    Qin, Ming; Li, Xiang

    2017-08-01

    Since the essence of a switching converter is energy transfer, the energy iteration model of the Multilevel Pulse Train (MPT) technique is studied in this paper. The energy iteration model of a DCM Buck converter with the MPT technique can reflect the control law and excellent transient performance of the MPT technique. The iteration relation of energy transfer in the switching converter is discussed. The structure and operation principle of the DCM Buck converter with the MPT technique are introduced, and the energy iteration model of this converter is set up. The energy tracks of the MPT-controlled Buck converter and the PT converter are studied and compared to show that the ratio of steady-state control pulses satisfies the expectation for the MPT technique and that the MPT-controlled switching converter has much lower output voltage ripple than the PT converter.

  2. Developing a Teaching Model Using an Online Collaboration Approach for a Digital Technique Practical Work

    Science.gov (United States)

    Muchlas

    2015-01-01

    This research aims to produce a teaching model and its supporting instruments using a collaboration approach for a digital technique practical work attended by higher education students. The model is found to be flexible and relatively low cost. Through this research, the feasibility and learning impact of the model will be determined. The model…

  3. Development of Reservoir Characterization Techniques and Production Models for Exploiting Naturally Fractured Reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Wiggins, Michael L.; Brown, Raymon L.; Civan, Frauk; Hughes, Richard G.

    2001-08-15

    Research continues on characterizing and modeling the behavior of naturally fractured reservoir systems. Work has progressed on developing techniques for estimating fracture properties from seismic and well log data, developing naturally fractured wellbore models, and developing a model to characterize the transfer of fluid from the matrix to the fracture system for use in the naturally fractured reservoir simulator.

  4. A novel method to estimate model uncertainty using machine learning techniques

    NARCIS (Netherlands)

    Solomatine, D.P.; Lal Shrestha, D.

    2009-01-01

    A novel method is presented for model uncertainty estimation using machine learning techniques and its application in rainfall runoff modeling. In this method, first, the probability distribution of the model error is estimated separately for different hydrological situations and second, the

  5. INTELLIGENT CAR STYLING TECHNIQUE AND SYSTEM BASED ON A NEW AERODYNAMIC-THEORETICAL MODEL

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A car styling technique based on a new theoretical model of automotive aerodynamics is introduced, which is proved to be feasible and effective by wind tunnel tests. The development of a multi-module software system from this technique, including modules for knowledge processing, referential styling and ANN aesthetic evaluation, capable of assisting car styling work in an intelligent way, is also presented and discussed.

  6. Optimization of corrosion control for lead in drinking water using computational modeling techniques

    Science.gov (United States)

    Computational modeling techniques have been used to very good effect in the UK in the optimization of corrosion control for lead in drinking water. A “proof-of-concept” project with three US/CA case studies sought to demonstrate that such techniques could work equally well in the...

  7. Continuing Education in Micro-Computers for Academic and Practicing Psychologists.

    Science.gov (United States)

    Lupo, James V.; Ware, Mark E.

    1984-01-01

    A workshop designed to teach participants how to use microcomputers for practice management, psychological testing, word processing, peripheral interface, and data analysis was found to meet the interests and needs of both academic and practicing psychologists. (RM)

  8. Microcomputer design and analysis of the cable catenary large space antenna system

    Science.gov (United States)

    Akle, W.

    1984-01-01

    The use of microcomputers in the design of a cable catenary large space antenna system is discussed. The development of a system design capability, data base utilization, systems integration, program structure and logic, and integrated graphics output are discussed.

  9. Assessing Inexpensive Microcomputers for Classroom Use: A Product-Oriented Course to Promote Instructional Computing Literacy.

    Science.gov (United States)

    Ricketts, Dick; Seay, Jean A.

    1979-01-01

    A teacher inservice class that focused on evaluating inexpensive microcomputer systems is described. Evaluative criteria used by the class are included, and the findings and preferences of the class are summarized. (Author/IRT)

  10. Digital TV-echelle spectrograph for simultaneous multielemental analysis using microcomputer control

    Energy Technology Data Exchange (ETDEWEB)

    Davidson, J.B.; Case, A.L.

    1980-12-01

    A digital TV-echelle spectrograph with microcomputer control was developed for simultaneous multielemental analysis. The optical system is a commercially available unit originally equipped for film and photomultiplier (single element) readout. The film port was adapted for the intensifier camera. The camera output is digitized and stored in a microcomputer-controlled, 512 x 512 x 12 bit memory and image processor. Multiple spectra over the range of 200 to 800 nm are recorded in a single exposure. Spectra lasting from nanoseconds to seconds are digitized and stored in 0.033 s and displayed on a TV monitor. An inexpensive microcomputer controls the exposure, reads and displays the intensity of predetermined spectral lines, and calculates wavelengths of unknown lines. The digital addresses of unknown lines are determined by superimposing a cursor on the TV display. The microcomputer also writes into memory wavelength fiducial marks for alignment of the TV camera.

  11. Microcomputer Decisions for the 1990s [and] Apple's Macintosh: A Viable Choice.

    Science.gov (United States)

    Grosch, Audrey N.

    1989-01-01

    Discussion of the factors that should be considered when purchasing or upgrading a microcomputer focuses on the MS-DOS and OS/2 operating systems. Macintosh purchasing decisions are discussed in a sidebar. A glossary is provided. (CLB)

  12. X-Ray Micro-Computed Tomography Imaging of the Buzzard Coulee Chondrite

    Science.gov (United States)

    Melanson, D.; Samson, C.; Herd, R. K.; Fry, C.; McCausland, P. J. A.; Umoh, J.; Holdsworth, D. W.

    2012-03-01

    This abstract outlines research and some results of X-ray micro-computed tomography imaging of the Buzzard Coulee H4 chondrite. A comparison of bulk density results and an analysis of radio-density profile curves are discussed.

  13. System Response Analysis and Model Order Reduction, Using Conventional Method, Bond Graph Technique and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Lubna Moin

    2009-04-01

    Full Text Available This research paper explores and compares different modeling and analysis techniques, and then it also explores the model order reduction approach and its significance. The traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, but the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. They are also used for analyzing linear and non-linear dynamic production systems, artificial intelligence, image processing, robotics and industrial automation. This paper describes a unique technique of generating the Genetic design from the tree-structured transfer function obtained from the Bond Graph. This research work combines bond graphs for model representation with Genetic programming for exploring different ideas in the design space; tree-structured transfer functions result from replacing typical bond graph elements with their impedance equivalents, specifying impedance laws for Bond Graph multiports. The tree-structured form thus obtained from the Bond Graph is applied for generating the Genetic Tree. Application studies will identify key issues and their importance for advancing this approach towards becoming an effective and efficient design tool for synthesizing designs for electrical systems. In the first phase, the system is modeled using the Bond Graph technique. Its system response and transfer function with the conventional and Bond Graph methods are analyzed, and then an approach towards model order reduction is observed. The suggested algorithm and other known modern model order reduction techniques are applied to an 11th order high pass filter [1], with different approaches. The model order reduction technique developed in this paper has the least reduction errors and, secondly, the final model retains structural information. The system response and the stability analysis of the system transfer function obtained by the conventional and by the Bond Graph method are compared and

  14. System Response Analysis and Model Order Reduction, Using Conventional Method, Bond Graph Technique and Genetic Programming

    Directory of Open Access Journals (Sweden)

    Shahid Ali

    2009-04-01

    Full Text Available This research paper explores and compares different modeling and analysis techniques, and then it also explores the model order reduction approach and its significance. The traditional modeling and simulation techniques for dynamic systems are generally adequate for single-domain systems only, but the Bond Graph technique provides new strategies for reliable solutions of multi-domain systems. They are also used for analyzing linear and non-linear dynamic production systems, artificial intelligence, image processing, robotics and industrial automation. This paper describes a unique technique of generating the Genetic design from the tree-structured transfer function obtained from the Bond Graph. This research work combines bond graphs for model representation with Genetic programming for exploring different ideas in the design space; tree-structured transfer functions result from replacing typical bond graph elements with their impedance equivalents, specifying impedance laws for Bond Graph multiports. The tree-structured form thus obtained from the Bond Graph is applied for generating the Genetic Tree. Application studies will identify key issues and their importance for advancing this approach towards becoming an effective and efficient design tool for synthesizing designs for electrical systems. In the first phase, the system is modeled using the Bond Graph technique. Its system response and transfer function with the conventional and Bond Graph methods are analyzed, and then an approach towards model order reduction is observed. The suggested algorithm and other known modern model order reduction techniques are applied to an 11th order high pass filter [1], with different approaches. The model order reduction technique developed in this paper has the least reduction errors and, secondly, the final model retains structural information. The system response and the stability analysis of the system transfer function obtained by the conventional and by the Bond Graph method are compared and

  15. Geomatic techniques for the generation of building information models towards their introduction in Integrated Management Systems

    OpenAIRE

    Diaz Vilariño, Lucia

    2015-01-01

    This research project proposes the use of geomatic techniques to reconstruct in a highly automated way semantic building models that might be subjected to energy analysis. Other non-destructive techniques such as infrared thermography are explored to obtain descriptive attributes for enriching the models. Building stock is considered as an important contributor to the global energy consumption and buildings energy efficiency has become a priority strategy in the European energy policy. Bu...

  16. Direct microcomputer controlled determination of zinc in human serum by flow injection atomic absorption spectrometry

    DEFF Research Database (Denmark)

    Simonsen, Kirsten Wiese; Nielsen, Bent; Jensen, Arne

    1986-01-01

    A procedure is described for the direct determination of zinc in human serum by fully automated, microcomputer controlled flow injection atomic absorption spectrometry (FI-AAS). The FI system is pumpless, using the negative pressure created by the nebuliser. It only consists of a three-way valve, programmable from the microcomputer, to control the sample volume. No pre-treatment of the samples is necessary. The limit of detection is 0.14 mg l–1, and only small amounts of serum (

  17. Modelling the effects of the sterile insect technique applied to Eldana saccharina Walker in sugarcane

    Directory of Open Access Journals (Sweden)

    L Potgieter

    2012-12-01

    Full Text Available A mathematical model is formulated for the population dynamics of an Eldana saccharina Walker infestation of sugarcane under the influence of partially sterile released insects. The model describes the population growth of and interaction between normal and sterile E.saccharina moths in a temporally variable, but spatially homogeneous environment. The model consists of a deterministic system of difference equations subject to strictly positive initial data. The primary objective of this model is to determine suitable parameters in terms of which the above population growth and interaction may be quantified and according to which E.saccharina infestation levels and the associated sugarcane damage may be measured. Although many models have been formulated in the past describing the sterile insect technique, few of these models describe the technique for Lepidopteran species with more than one life stage and where F1-sterility is relevant. In addition, none of these models consider the technique when fully sterile females and partially sterile males are being released. The model formulated is also the first to describe the technique applied specifically to E.saccharina, and to consider the economic viability of applying the technique to this species. Pertinent decision support is provided to farm managers in terms of the best timing for releases, release ratios and release frequencies.
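
    For orientation only, the sketch below implements a one-stage Knipling-type difference equation in which a constant number of sterile insects is released each generation; it is far simpler than the paper's multi-stage E. saccharina model with F1 sterility and partially sterile males, and all parameter values are illustrative.

```python
# One-stage Knipling-type sketch of the sterile insect technique: a constant
# number S of sterile males is released each generation, so only a fraction
# N/(N+S) of matings are fertile.
import numpy as np

r = 3.0          # per-generation growth factor (illustrative)
K = 10_000.0     # density-dependence scale (illustrative)
S = 6_000.0      # sterile males released per generation (illustrative)
generations = 30

N = 2_000.0
history = []
for t in range(generations):
    fertile_fraction = N / (N + S) if N + S > 0 else 0.0
    # Ricker-type density dependence keeps the uncontrolled population bounded
    N = r * N * fertile_fraction * np.exp(-N / K)
    history.append(N)

print("population trajectory under releases:")
print(np.round(history, 1))
```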

  18. STUDY OF CODING GENERATOR BASED ON IN-SYSTEM PROGRAMMING TECHNIQUE AND DEVICES

    Institute of Scientific and Technical Information of China (English)

    Liu Duren; Jin Yajing; Ren Zhichun

    2002-01-01

    This paper presents a design for a coding waveform generator controlled by a microcomputer or single-chip microcomputer, realizing arbitrary coding waveform combinations based on the In-System Programming (ISP) technique and a High Density Programmable Logic Device (HDPLD), using a latch register, a control counter and an easily expanded PS (Parallel in & Serial out) shift register array. This scheme can overcome some shortcomings of past schemes, so that the hardware design can be realized by means of software.

  19. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    Science.gov (United States)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
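
    The contrast between the two estimation approaches can be sketched on a toy first-order top-oil temperature model: equation-error least squares regresses noisy measured outputs on themselves and is biased by output noise, while output error simulates the model and fits the measured trajectory. The model structure, noise level and parameter values below are illustrative assumptions, not MIT's model.

```python
# Equation-error least squares versus output-error estimation for a first-order
# model T[k+1] = a*T[k] + b*u[k] (u lumps squared load current and ambient
# effects for brevity). With noise on the measured temperatures, equation-error
# LS is biased, while output error is not.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(7)
a_true, b_true, N = 0.95, 0.08, 500
u = 1.0 + 0.5 * np.sin(np.linspace(0, 20, N))          # synthetic input
T = np.zeros(N)
for k in range(N - 1):
    T[k + 1] = a_true * T[k] + b_true * u[k]
T_meas = T + rng.normal(0, 0.05, N)                    # measurement noise on output

# 1) Equation-error least squares: regress T_meas[k+1] on (T_meas[k], u[k])
Phi = np.column_stack([T_meas[:-1], u[:-1]])
a_ls, b_ls = np.linalg.lstsq(Phi, T_meas[1:], rcond=None)[0]

# 2) Output error: simulate the model and minimise the output residual
def simulate(params):
    a, b = params
    Ts = np.zeros(N)
    for k in range(N - 1):
        Ts[k + 1] = a * Ts[k] + b * u[k]
    return Ts

fit = least_squares(lambda p: simulate(p) - T_meas, x0=[0.9, 0.1])
print(f"true      a={a_true:.4f} b={b_true:.4f}")
print(f"eq-error  a={a_ls:.4f} b={b_ls:.4f}")
print(f"out-error a={fit.x[0]:.4f} b={fit.x[1]:.4f}")
```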

  20. THE IMPROVEMENT OF THE COMPUTATIONAL PERFORMANCE OF THE ZONAL MODEL POMA USING PARALLEL TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Yao Yu

    2014-01-01

    Full Text Available The zonal modeling approach is a new simplified computational method used to predict temperature distribution, energy use in multi-zone buildings, and the thermal behavior of indoor airflow. Although this approach is known to use fewer computer resources than CFD models, the computational time is still an issue, especially when buildings are characterized by complicated geometry and indoor layouts of furnishings. Therefore, applying a new computing technique to the current zonal models in order to reduce the computational time is a promising way to further improve model performance and promote the wide application of zonal models. Parallel computing techniques provide a way to accomplish these purposes. Unlike the serial computations that are commonly used in current zonal models, these parallel techniques decompose the serial program into several discrete instructions which can be executed simultaneously on different processors/threads. As a result, the computational time of the parallelized program can be significantly reduced compared to that of the traditional serial program. In this article, a parallel computing technique, Open Multi-Processing (OpenMP), is applied to the zonal model Pressurized zOnal Model with the Air diffuser (POMA) in order to improve the model's computational performance, including the reduction of computational time and the investigation of the model's scalability.

  1. The Integrated Use of Enterprise and System Dynamics Modelling Techniques in Support of Business Decisions

    Directory of Open Access Journals (Sweden)

    K. Agyapong-Kodua

    2012-01-01

    Full Text Available Enterprise modelling techniques support business process (re)engineering by capturing existing processes and, based on perceived outputs, support the design of future process models capable of meeting enterprise requirements. System dynamics modelling tools, on the other hand, are used extensively for policy analysis and for modelling aspects of dynamics which impact on businesses. In this paper, the use of enterprise and system dynamics modelling techniques has been integrated to facilitate qualitative and quantitative reasoning about the structures and behaviours of processes and resource systems used by a manufacturing enterprise during the production of composite bearings. The case study testing reported has led to the specification of a new modelling methodology for analysing and managing dynamics and complexities in production systems. This methodology is based on a systematic transformation process, which synergises the use of a selection of public domain enterprise modelling, causal loop and continuous simulation modelling techniques. The success of the modelling process defined relies on the creation of useful CIMOSA process models which are then converted to causal loops. The causal loop models are then structured and translated to equivalent dynamic simulation models using the proprietary continuous simulation modelling tool iThink.

  2. Research on the Propagation Models and Defense Techniques of Internet Worms

    Institute of Scientific and Technical Information of China (English)

    Tian-Yun Huang

    2008-01-01

    Internet worms are harmful to network security, and they have become a research hotspot in recent years. A thorough survey of the propagation models and defense techniques of Internet worms is made in this paper. We first give a strict definition and discuss the working mechanism. We then analyze and compare some representative worm propagation models proposed in recent years, such as the K-M model, the two-factor model, the worm-anti-worm model (WAW), the firewall-based model, the quarantine-based model and the hybrid benign worm-based model. Some typical defense techniques such as virtual honeypots, active worm prevention and agent-oriented worm defense are also discussed. The future direction of worm defense systems is pointed out.
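
    The common core of these propagation models is the random-scanning simple-epidemic equation, sketched below with synthetic parameters; the two-factor model adds host removal (patching) and a time-varying infection rate on top of this.

```python
# Random-scanning worm spread sketch (the simple-epidemic core of models such as
# K-M): each infected host scans the address space at a fixed rate, and a scan
# hits a still-vulnerable host with probability (N - I)/Omega.
import numpy as np

Omega = 2 ** 32          # size of the scanned address space
N = 360_000              # vulnerable hosts (Code-Red-like scale, illustrative)
scan_rate = 358.0        # scans per infected host per time step (illustrative)
steps = 800

I = 1.0
infected = []
for t in range(steps):
    new_infections = scan_rate * I * (N - I) / Omega
    I = min(N, I + new_infections)
    infected.append(I)

print("infected hosts at selected times:",
      [round(infected[t]) for t in (100, 300, 500, 799)])
```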

  3. Mixture experiment techniques for reducing the number of components applied for modeling waste glass sodium release

    Energy Technology Data Exchange (ETDEWEB)

    Piepel, G.; Redgate, T. [Pacific Northwest National Lab., Richland, WA (United States). Statistics Group

    1997-12-01

    Statistical mixture experiment techniques were applied to a waste glass data set to investigate the effects of the glass components on Product Consistency Test (PCT) sodium release (NR) and to develop a model for PCT NR as a function of the component proportions. The mixture experiment techniques indicate that the waste glass system can be reduced from nine to four components for purposes of modeling PCT NR. Empirical mixture models containing four first-order terms and one or two second-order terms fit the data quite well, and can be used to predict the NR of any glass composition in the model domain. The mixture experiment techniques produce a better model in less time than required by another approach.
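
    A Scheffé mixture model of the kind described (first-order terms plus selected second-order cross products, with no intercept because the component proportions sum to one) can be fitted by ordinary least squares, as in the sketch below on synthetic stand-in data.

```python
# Scheffe mixture-model sketch: component proportions sum to one, so the model
# has no intercept; columns are first-order terms plus a selected second-order
# cross product. Data below are synthetic stand-ins for glass compositions and
# PCT sodium release.
import numpy as np

rng = np.random.default_rng(4)
n = 40
# Four pseudo-components (e.g. SiO2, B2O3, Na2O, "others"), proportions sum to 1
X = rng.dirichlet(alpha=[6, 2, 2, 1], size=n)

# Synthetic "true" response, used only to make the sketch runnable
true_b = np.array([0.5, 2.0, 4.0, 1.0])
y = X @ true_b + 3.0 * X[:, 1] * X[:, 2] + rng.normal(0, 0.05, n)

# Design matrix: 4 first-order terms + one second-order term (x2*x3)
D = np.column_stack([X, X[:, 1] * X[:, 2]])
coef, *_ = np.linalg.lstsq(D, y, rcond=None)
pred = D @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)
print("fitted coefficients:", np.round(coef, 3), " R^2 =", round(r2, 4))
```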

  4. Design of Current Controller for Two Quadrant DC Motor Drive by Using Model Order Reduction Technique

    CERN Document Server

    Ramesh, K; Nirmalkumar, A; Gurusamy, G

    2010-01-01

    In this paper, the design of a current controller for a two-quadrant DC motor drive is proposed with the help of a model order reduction technique. The calculation of the current controller gain, with some approximations in the conventional design process, is replaced by the proposed model order reduction method. The model order reduction technique proposed in this paper gives a better controller gain value for the DC motor drive. The proposed model order reduction method is a mixed method, where the numerator polynomial of the reduced order model is obtained by using the stability equation method and the denominator polynomial is obtained by using an approximation technique described in this paper. The designed controller's responses were simulated with the help of MATLAB to show the validity of the proposed method.

  5. Assessment of Venous Thrombosis in Animal Models.

    Science.gov (United States)

    Grover, Steven P; Evans, Colin E; Patel, Ashish S; Modarai, Bijan; Saha, Prakash; Smith, Alberto

    2016-02-01

    Deep vein thrombosis and common complications, including pulmonary embolism and post-thrombotic syndrome, represent a major source of morbidity and mortality worldwide. Experimental models of venous thrombosis have provided considerable insight into the cellular and molecular mechanisms that regulate thrombus formation and subsequent resolution. Here, we critically appraise the ex vivo and in vivo techniques used to assess venous thrombosis in these models. Particular attention is paid to imaging modalities, including magnetic resonance imaging, micro-computed tomography, and high-frequency ultrasound that facilitate longitudinal assessment of thrombus size and composition.

  6. Internet enabled modelling of extended manufacturing enterprises using the process based techniques

    OpenAIRE

    Cheng, K; Popov, Y

    2004-01-01

    The paper presents the preliminary results of an ongoing research project on Internet enabled process-based modelling of extended manufacturing enterprises. It is proposed to apply the Open System Architecture for CIM (CIMOSA) modelling framework alongside object-oriented Petri Net models of enterprise processes and object-oriented techniques for extended enterprise modelling. The main features of the proposed approach are described and some components discussed. Elementary examples of ...

  7. Maternal, Infant Characteristics, Breastfeeding Techniques, and Initiation: Structural Equation Modeling Approaches

    OpenAIRE

    2015-01-01

    Objectives The aim of this study was to examine the relationships among maternal and infant characteristics, breastfeeding techniques, and exclusive breastfeeding initiation in different modes of birth using structural equation modeling approaches. Methods We examined a hypothetical model based on integrating concepts of a breastfeeding decision-making model, a breastfeeding initiation model, and a social cognitive theory among 952 mother-infant dyads. The LATCH breastfeeding assessment tool ...

  8. Biomaterial porosity determined by fractal dimensions, succolarity and lacunarity on microcomputed tomographic images

    Energy Technology Data Exchange (ETDEWEB)

    N' Diaye, Mambaye [LUNAM Université, GEROM Groupe Etudes Remodelage Osseux et bioMatériaux-LHEA, IRIS-IBS Institut de Biologie en Santé, CHU d' Angers, 49933 ANGERS Cedex (France); Degeratu, Cristinel [LUNAM Université, GEROM Groupe Etudes Remodelage Osseux et bioMatériaux-LHEA, IRIS-IBS Institut de Biologie en Santé, CHU d' Angers, 49933 ANGERS Cedex (France); University Politehnica of Bucharest, Faculty of Applied Chemistry and Materials Science, Department of Bioresources and Polymer Science, Calea Victoriei 149, 010072, Sector 1, Bucharest (Romania); Bouler, Jean-Michel [Inserm UMR 791, LIOAD, University of Nantes, 44000 Nantes (France); Chappard, Daniel, E-mail: daniel.chappard@univ-angers.fr [LUNAM Université, GEROM Groupe Etudes Remodelage Osseux et bioMatériaux-LHEA, IRIS-IBS Institut de Biologie en Santé, CHU d' Angers, 49933 ANGERS Cedex (France)

    2013-05-01

    Porous structures are becoming more and more important in biology and material science because they help in reducing the density of the grafted material. For biomaterials, porosity also increases the accessibility of cells and vessels inside the grafted area. However, descriptors of porosity are scanty. We have used a series of biomaterials with different types of porosity (created by various porogens: fibers, beads …). Blocks were studied by microcomputed tomography for the measurement of 3D porosity. 2D sections were re-sliced to analyze the microarchitecture of the pores and were transferred to image analysis programs: star volumes, interconnectivity index, Minkowski–Bouligand and Kolmogorov fractal dimensions were determined. Lacunarity and succolarity, two recently described fractal dimensions, were also computed. These parameters provided a precise description of porosity and pores' characteristics. Non-linear relationships were found between several descriptors e.g. succolarity and star volume of the material. A linear correlation was found between lacunarity and succolarity. These techniques appear suitable in the study of biomaterials usable as bone substitutes. Highlights: ► Interconnected porosity is important in the development of bone substitutes. ► Porosity was evaluated by 2D and 3D morphometry on microCT images. ► Euclidean and fractal descriptors measure interconnectivity on 2D microCT images. ► Lacunarity and succolarity were evaluated on a series of porous biomaterials.
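
    Of the descriptors mentioned, lacunarity is easy to sketch: the gliding-box algorithm below computes Lambda(r) = <M^2>/<M>^2 for a synthetic binary image standing in for a re-sliced micro-CT section; the fractal-dimension and succolarity calculations of the paper are not shown.

```python
# Gliding-box lacunarity sketch for a binary 2-D image (a stand-in for a
# re-sliced micro-CT section). For each box size r, slide an r-by-r window over
# the image, record the pore "mass" in each box, and compute
#   Lambda(r) = <M^2> / <M>^2 .
import numpy as np

rng = np.random.default_rng(5)
img = (rng.random((128, 128)) < 0.3).astype(int)    # synthetic 30% porosity

def lacunarity(image, r):
    h, w = image.shape
    masses = []
    for i in range(h - r + 1):
        for j in range(w - r + 1):
            masses.append(image[i:i + r, j:j + r].sum())
    masses = np.asarray(masses, dtype=float)
    return masses.var() / masses.mean() ** 2 + 1.0   # <M^2>/<M>^2 = var/mean^2 + 1

for r in (2, 4, 8, 16, 32):
    print(f"box size {r:2d}: lacunarity = {lacunarity(img, r):.3f}")
```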

  9. Biomaterial porosity determined by fractal dimensions, succolarity and lacunarity on microcomputed tomographic images.

    Science.gov (United States)

    N'Diaye, Mambaye; Degeratu, Cristinel; Bouler, Jean-Michel; Chappard, Daniel

    2013-05-01

    Porous structures are becoming more and more important in biology and material science because they help in reducing the density of the grafted material. For biomaterials, porosity also increases the accessibility of cells and vessels inside the grafted area. However, descriptors of porosity are scanty. We have used a series of biomaterials with different types of porosity (created by various porogens: fibers, beads …). Blocks were studied by microcomputed tomography for the measurement of 3D porosity. 2D sections were re-sliced to analyze the microarchitecture of the pores and were transferred to image analysis programs: star volumes, interconnectivity index, Minkowski-Bouligand and Kolmogorov fractal dimensions were determined. Lacunarity and succolarity, two recently described fractal dimensions, were also computed. These parameters provided a precise description of porosity and pores' characteristics. Non-linear relationships were found between several descriptors e.g. succolarity and star volume of the material. A linear correlation was found between lacunarity and succolarity. These techniques appear suitable in the study of biomaterials usable as bone substitutes. Copyright © 2013 Elsevier B.V. All rights reserved.

  10. Micro-computed tomography image-based evaluation of 3D anisotropy degree of polymer scaffolds.

    Science.gov (United States)

    Pérez-Ramírez, Ursula; López-Orive, Jesús Javier; Arana, Estanislao; Salmerón-Sánchez, Manuel; Moratal, David

    2015-01-01

    Anisotropy is one of the most meaningful determinants of biomechanical behaviour. This study employs micro-computed tomography (μCT) and image techniques for analysing the anisotropy of regenerative medicine polymer scaffolds. For this purpose, three three-dimensional anisotropy evaluation image methods were used: ellipsoid of inertia (EI), mean intercept length (MIL) and tensor scale (t-scale). These were applied to three patterns (a sphere, a cube and a right prism) and to two polymer scaffold topologies (cylindrical orthogonal pore mesh and spherical pores). For the patterns, the three methods provided good results. Regarding the scaffolds, EI mistook both topologies (0.0158, [-0.5683; 0.6001]; mean difference and 95% confidence interval), and MIL showed no significant differences (0.3509, [0.0656; 0.6362]). T-scale is the preferable method because it gave the best capability (0.3441, [0.1779; 0.5102]) to differentiate both topologies. This methodology results in the development of non-destructive tools to engineer biomimetic scaffolds, incorporating anisotropy as a fundamental property to be mimicked from the original tissue and permitting its assessment by means of μCT image analysis.

  11. Micro-computed tomography of false starts produced on bone by different hand-saws.

    Science.gov (United States)

    Pelletti, Guido; Viel, Guido; Fais, Paolo; Viero, Alessia; Visentin, Sindi; Miotto, Diego; Montisci, Massimo; Cecchetto, Giovanni; Giraudo, Chiara

    2017-05-01

    The analysis of macro- and microscopic characteristics of saw marks on bones can provide useful information about the class of the tool utilized to produce the injury. The aim of the present study was to test micro-computed tomography (micro-CT) for the analysis of false starts experimentally produced on 32 human bone sections using 4 different hand-saws in order to verify the potential utility of micro-CT for distinguishing false starts produced by different saws and to correlate the morphology of the tool with that of the bone mark. Each sample was analysed through stereomicroscopy and micro-CT. Stereomicroscopic analysis allowed the identification of the false starts and the detection of the number of tool marks left by each saw. Micro-CT scans, through the integration of 3D renders and multiplanar reconstructions (MPR), allowed the identification of the shape of each false start correlating it to the injuring tool. Our results suggest that micro-CT could be a useful technique for assessing false starts produced by different classes of saws, providing accurate morphological profiles of the bone marks with all the advantages of high resolution 3D imaging (e.g., high accuracy, non-destructive analysis, preservation and documentation of evidence). However, further studies are necessary to integrate qualitative data with quantitative metrical analysis in order to further characterize the false start and the related injuring tool. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. An EMG-assisted model calibration technique that does not require MVCs.

    Science.gov (United States)

    Dufour, Jonathan S; Marras, William S; Knapik, Gregory G

    2013-06-01

    As personalized biologically-assisted models of the spine have evolved, the normalization of raw electromyographic (EMG) signals has become increasingly important. The traditional method of normalizing myoelectric signals, relative to measured maximum voluntary contractions (MVCs), is susceptible to error and is problematic for evaluating symptomatic low back pain (LBP) patients. Additionally, efforts to circumvent MVCs have not been validated during complex free-dynamic exertions. Therefore, the objective of this study was to develop an MVC-independent biologically-assisted model calibration technique that overcomes the limitations of previous normalization efforts, and to validate this technique over a variety of complex free-dynamic conditions including symmetrical and asymmetrical lifting. The newly developed technique (non-MVC) eliminates the need to collect MVCs by combining gain (maximum strength per unit area) and MVC into a single muscle property (gain ratio) that can be determined during model calibration. Ten subjects (five male, five female) were evaluated to compare gain ratio prediction variability, spinal load predictions, and model fidelity between the new non-MVC and established MVC-based model calibration techniques. The new non-MVC model calibration technique demonstrated at least as low gain ratio prediction variability, similar spinal loads, and similar model fidelity when compared to the MVC-based technique, indicating that it is a valid alternative to traditional MVC-based EMG normalization. Spinal loading for individuals who are unwilling or unable to produce reliable MVCs can now be evaluated. In particular, this technique will be valuable for evaluating symptomatic LBP patients, which may provide significant insight into the underlying nature of the LBP disorder.

  13. Application of separable parameter space techniques to multi-tracer PET compartment modeling

    Science.gov (United States)

    Zhang, Jeff L.; Morey, A. Michael; Kadrmas, Dan J.

    2016-02-01

    Multi-tracer positron emission tomography (PET) can image two or more tracers in a single scan, characterizing multiple aspects of biological functions to provide new insights into many diseases. The technique uses dynamic imaging, resulting in time-activity curves that contain contributions from each tracer present. The process of separating and recovering separate images and/or imaging measures for each tracer requires the application of kinetic constraints, which are most commonly applied by fitting parallel compartment models for all tracers. Such multi-tracer compartment modeling presents challenging nonlinear fits in multiple dimensions. This work extends separable parameter space kinetic modeling techniques, previously developed for fitting single-tracer compartment models, to fitting multi-tracer compartment models. The multi-tracer compartment model solution equations were reformulated to maximally separate the linear and nonlinear aspects of the fitting problem, and separable least-squares techniques were applied to effectively reduce the dimensionality of the nonlinear fit. The benefits of the approach are then explored through a number of illustrative examples, including characterization of separable parameter space multi-tracer objective functions and demonstration of exhaustive search fits which guarantee the true global minimum to within arbitrary search precision. Iterative gradient-descent algorithms using Levenberg-Marquardt were also tested, demonstrating improved fitting speed and robustness as compared to corresponding fits using conventional model formulations. The proposed technique overcomes many of the challenges in fitting simultaneous multi-tracer PET compartment models.
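
    The separable (variable projection) idea can be sketched on a sum-of-exponentials stand-in: for any candidate values of the nonlinear rate constants, the linear amplitudes are solved exactly by least squares, so the outer nonlinear search runs in a reduced parameter space. Real multi-tracer PET compartment models add convolution with tracer input functions, and the values below are illustrative.

```python
# Separable-parameter-space (variable projection) sketch for
#   y(t) = a1*exp(-k1*t) + a2*exp(-k2*t)
# The amplitudes a1, a2 are linear and solved exactly for each candidate
# (k1, k2), so the outer nonlinear search is only two-dimensional.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(6)
t = np.linspace(0, 10, 60)
true_k, true_a = (0.3, 1.5), (4.0, 2.0)
y = (true_a[0] * np.exp(-true_k[0] * t)
     + true_a[1] * np.exp(-true_k[1] * t)
     + rng.normal(0, 0.02, t.size))

def projected_residual(log_k):
    k1, k2 = np.exp(log_k)                          # keep rates positive
    basis = np.column_stack([np.exp(-k1 * t), np.exp(-k2 * t)])
    a, *_ = np.linalg.lstsq(basis, y, rcond=None)   # linear step: exact amplitudes
    r = y - basis @ a
    return np.sum(r ** 2), a

res = minimize(lambda p: projected_residual(p)[0], x0=np.log([0.1, 1.0]),
               method="Nelder-Mead")
k_fit = np.exp(res.x)
a_fit = projected_residual(res.x)[1]
order = np.argsort(k_fit)
print("rates:", np.round(k_fit[order], 3), "amplitudes:", np.round(a_fit[order], 3))
```

    Eliminating the amplitudes analytically is what reduces the dimensionality of the nonlinear fit, which is the property such separable formulations exploit for exhaustive-search and gradient-based fitting.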

  14. Techniques for Down-Sampling a Measured Surface Height Map for Model Validation

    Science.gov (United States)

    Sidick, Erkin

    2012-01-01

    This software allows one to down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. The software tool implementing the two new techniques can be used in all optical model validation processes involving large space optical surfaces.

  15. Models and techniques for evaluating the effectiveness of aircraft computing systems

    Science.gov (United States)

    Meyer, J. F.

    1978-01-01

    Progress in the development of system models and techniques for the formulation and evaluation of aircraft computer system effectiveness is reported. Topics covered include: analysis of functional dependence; a prototype software package, METAPHOR, developed to aid the evaluation of performability; and a comprehensive performability modeling and evaluation exercise involving the SIFT computer.

  16. Applying Modern Techniques and Carrying Out English Extracurricular——On the Model United Nations Activity

    Institute of Scientific and Technical Information of China (English)

    Xu Xiaoyu; Wang Jian

    2004-01-01

    This paper is an introduction to the extracurricular activity of the Model United Nations at Northwestern Polytechnical University (NPU), focusing on the application of modern techniques in the activity and the pedagogical theories applied in it. Interview and questionnaire research reveals the influence of the Model United Nations.

  17. Using Game Theory Techniques and Concepts to Develop Proprietary Models for Use in Intelligent Games

    Science.gov (United States)

    Christopher, Timothy Van

    2011-01-01

    This work is about analyzing games as models of systems. The goal is to understand the techniques that have been used by game designers in the past, and to compare them to the study of mathematical game theory. Through the study of a system or concept a model often emerges that can effectively educate students about making intelligent decisions…

  18. Advanced Techniques for Reservoir Simulation and Modeling of Non-Conventional Wells

    Energy Technology Data Exchange (ETDEWEB)

    Durlofsky, Louis J.; Aziz, Khalid

    2001-08-23

    Research results for the second year of this project on the development of improved modeling techniques for non-conventional (e.g., horizontal, deviated or multilateral) wells were presented. The overall program entails the development of enhanced well modeling and general simulation capabilities. A general formulation for black-oil and compositional reservoir simulation was presented.

  19. Accuracy and reproducibility of dental replica models reconstructed by different rapid prototyping techniques

    NARCIS (Netherlands)

    Hazeveld, Aletta; Huddleston Slater, James J. R.; Ren, Yijin

    INTRODUCTION: Rapid prototyping is a fast-developing technique that might play a significant role in the eventual replacement of plaster dental models. The aim of this study was to investigate the accuracy and reproducibility of physical dental models reconstructed from digital data by several rapid

  20. Identification techniques for phenomenological models of hysteresis based on the conjugate gradient method

    Energy Technology Data Exchange (ETDEWEB)

    Andrei, Petru [Electrical and Computer Engineering Department, Florida State University, Tallahassee, FL 32310 (United States) and Electrical and Computer Engineering Department, Florida A and M University, Tallahassee, FL 32310 (United States)]. E-mail: pandrei@eng.fsu.edu; Oniciuc, Liviu [Electrical and Computer Engineering Department, Florida State University, Tallahassee, FL 32310 (United States); Stancu, Alexandru [Faculty of Physics, 'Al. I. Cuza' University, Iasi 700506 (Romania); Stoleriu, Laurentiu [Faculty of Physics, 'Al. I. Cuza' University, Iasi 700506 (Romania)

    2007-09-15

    An identification technique for the parameters of phenomenological models of hysteresis is presented. The basic idea of our technique is to set up a system of equations for the parameters of the model as a function of known quantities on the major or minor hysteresis loops (e.g. coercive force, susceptibilities at various points, remanence), or other magnetization curves. This system of equations can be either over or underspecified and is solved by using the conjugate gradient method. Numerical results related to the identification of parameters in the Energetic, Jiles-Atherton, and Preisach models are presented.
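    A minimal sketch of this identification idea, assuming a linearized relation between the measured loop quantities and the model parameters: the (possibly over-specified) system A p = b is solved through the normal equations with a conjugate-gradient iteration. The matrix and "measurements" below are synthetic, not quantities from the Energetic, Jiles-Atherton or Preisach models.

```python
# Sketch of the identification idea: collect more (or fewer) measured loop
# quantities than unknown parameters, linearize them as A p = b, and solve
# the normal equations with a conjugate-gradient iteration.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(2)
A = rng.standard_normal((12, 5))       # 12 measured quantities, 5 parameters (made up)
p_true = np.array([1.0, -0.5, 0.2, 0.8, 0.1])
b = A @ p_true + 0.01 * rng.standard_normal(12)

# Over-specified system -> solve A^T A p = A^T b (normal equations) with CG.
AtA, Atb = A.T @ A, A.T @ b
p_est, info = cg(AtA, Atb)
print("converged" if info == 0 else "not converged", p_est)
```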

  1. Application of a systematic finite-element model modification technique to dynamic analysis of structures

    Science.gov (United States)

    Robinson, J. C.

    1982-01-01

    A systematic finite-element model modification technique has been applied to two small problems and a model of the main wing box of a research drone aircraft. The procedure determines the sensitivity of the eigenvalues and eigenvector components to specific structural changes, calculates the required changes and modifies the finite-element model. Good results were obtained where large stiffness modifications were required to satisfy large eigenvalue changes. Sensitivity matrix conditioning problems required the development of techniques to insure existence of a solution and accelerate its convergence. A method is proposed to assist the analyst in selecting stiffness parameters for modification.

  2. Non-linear control logics for vibrations suppression: a comparison between model-based and non-model-based techniques

    Science.gov (United States)

    Ripamonti, Francesco; Orsini, Lorenzo; Resta, Ferruccio

    2015-04-01

    Non-linear behavior is present in many mechanical system operating conditions. In these cases, a common engineering practice is to linearize the equation of motion around a particular operating point, and to design a linear controller. The main disadvantage is that the stability properties and validity of the controller are local. In order to improve the controller performance, non-linear control techniques represent a very attractive solution for many smart structures. The aim of this paper is to compare non-linear model-based and non-model-based control techniques. In particular the model-based sliding-mode-control (SMC) technique is considered because of its easy implementation and the strong robustness of the controller even under heavy model uncertainties. Among the non-model-based control techniques, the fuzzy control (FC), allowing designing the controller according to if-then rules, has been considered. It defines the controller without a system reference model, offering many advantages such as an intrinsic robustness. These techniques have been tested on the pendulum nonlinear system.
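    As a rough illustration of the model-based branch, the Python sketch below applies a sliding-mode controller (equivalent control plus a smoothed switching term) to a damped pendulum regulated to a constant set point. The gains, model constants and set point are invented and do not correspond to the smart-structure test case of the paper.

```python
# Minimal sliding-mode control sketch for a damped pendulum.
import numpy as np

g, L, c = 9.81, 1.0, 0.1            # gravity, length, viscous damping (made up)
lam, K = 4.0, 8.0                   # sliding-surface slope and switching gain
dt, theta_ref = 1e-3, 0.5           # time step (s) and constant set point (rad)

theta, omega = 0.0, 0.0
for _ in range(5000):                        # 5 s of simulated time
    e, edot = theta - theta_ref, omega
    s = edot + lam * e                       # sliding surface
    # equivalent control (cancels the nominal dynamics) + smoothed switching term
    u = (g / L) * np.sin(theta) + c * omega - lam * edot - K * np.tanh(s / 0.05)
    alpha = -(g / L) * np.sin(theta) - c * omega + u   # closed-loop acceleration
    omega += alpha * dt
    theta += omega * dt

print("final angle (rad):", round(theta, 3))     # settles near the 0.5 rad set point
```

    The tanh boundary layer replaces the discontinuous sign function, which is a common way of trading a little tracking accuracy for reduced chattering.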

  3. Improving predictive mapping of deep-water habitats: Considering multiple model outputs and ensemble techniques

    Science.gov (United States)

    Robert, Katleen; Jones, Daniel O. B.; Roberts, J. Murray; Huvenne, Veerle A. I.

    2016-07-01

    In the deep sea, biological data are often sparse; hence models capturing relationships between observed fauna and environmental variables (acquired via acoustic mapping techniques) are often used to produce full coverage species assemblage maps. Many statistical modelling techniques are being developed, but there remains a need to determine the most appropriate mapping techniques. Predictive habitat modelling approaches (redundancy analysis, maximum entropy and random forest) were applied to a heterogeneous section of seabed on Rockall Bank, NE Atlantic, for which landscape indices describing the spatial arrangement of habitat patches were calculated. The predictive maps were based on remotely operated vehicle (ROV) imagery transects and high-resolution autonomous underwater vehicle (AUV) sidescan backscatter maps. Area under the curve (AUC) and accuracy indicated similar performances for the three models tested, but performance varied by species assemblage, with the transitional species assemblage showing the weakest predictive performances. Spatial predictions of habitat suitability differed between statistical approaches, but niche similarity metrics showed redundancy analysis and random forest predictions to be most similar. As one statistical technique could not be found to outperform the others when all assemblages were considered, ensemble mapping techniques, where the outputs of many models are combined, were applied. They showed higher accuracy than any single model. Different statistical approaches for predictive habitat modelling possess varied strengths and weaknesses and by examining the outputs of a range of modelling techniques and their differences, more robust predictions, with better described variation and areas of uncertainties, can be achieved. As improvements to prediction outputs can be achieved without additional costly data collection, ensemble mapping approaches have clear value for spatial management.
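    The simplest form of the ensemble idea described above is to average the suitability probabilities predicted by several models. The Python sketch below uses a random forest and a logistic regression as stand-ins for the paper's three techniques, with synthetic environmental layers and occurrence data.

```python
# Toy ensemble of two presence/absence models: the averaged probability map
# is the simplest form of ensemble prediction.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.standard_normal((300, 4))                 # e.g. depth, slope, backscatter... (synthetic)
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.standard_normal(300) > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
lr = LogisticRegression().fit(X, y)

grid = rng.standard_normal((1000, 4))             # "full coverage" prediction points
p_ensemble = (rf.predict_proba(grid)[:, 1] + lr.predict_proba(grid)[:, 1]) / 2.0
print("mean predicted suitability:", p_ensemble.mean().round(3))
```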

  4. BREEDER: a microcomputer program for financial analysis of a large-scale prototype breeder reactor

    Energy Technology Data Exchange (ETDEWEB)

    Giese, R.F.

    1984-04-01

    This report describes a microcomputer-based, single-project financial analysis program: BREEDER. BREEDER is a user-friendly model designed to facilitate frequent and rapid analyses of the financial implications associated with alternative design and financing strategies for electric generating plants and large-scale prototype breeder (LSPB) reactors in particular. The model has proved to be a useful tool in establishing cost goals for LSPB reactors. The program is available on floppy disks for use on an IBM personal computer (or IBM look-a-like) running under PC-DOS or a Kaypro II transportable computer running under CP/M (and many other CP/M machines). The report documents version 1.5 of BREEDER and contains a user's guide. The report also includes a general overview of BREEDER, a summary of hardware requirements, a definition of all required program inputs, a description of all algorithms used in performing the construction-period and operation-period analyses, and a summary of all available reports. The appendixes contain a complete source-code listing, a cross-reference table, a sample interactive session, several sample runs, and additional documentation of the net-equity program option.

  5. An experimental comparison of modelling techniques for speaker recognition under limited data condition

    Indian Academy of Sciences (India)

    H S Jayanna; S R Mahadeva Prasanna

    2009-10-01

    Most of the existing modelling techniques for the speaker recognition task make an implicit assumption of sufficient data for speaker modelling and hence may lead to poor modelling under limited data condition. The present work gives an experimental evaluation of the modelling techniques like Crisp Vector Quantization (CVQ), Fuzzy Vector Quantization (FVQ), Self-Organizing Map (SOM), Learning Vector Quantization (LVQ), and Gaussian Mixture Model (GMM) classifiers. An experimental evaluation of the most widely used Gaussian Mixture Model–Universal Background Model (GMM–UBM) is also made. The experimental knowledge is then used to select a subset of classifiers for obtaining the combined classifiers. It is proposed that the combined LVQ and GMM–UBM classifier provides relatively better performance compared to all the individual as well as combined classifiers.
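    A minimal sketch of the GMM side of this comparison, assuming short sequences of feature vectors per speaker: fit a small mixture per speaker and assign a test utterance to the model with the highest average log-likelihood. The "MFCC" features below are random stand-ins, and no UBM adaptation is performed.

```python
# Minimal GMM speaker-modelling sketch under limited data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
speaker_a = rng.normal(0.0, 1.0, size=(200, 13))     # "MFCC" frames, speaker A (synthetic)
speaker_b = rng.normal(0.7, 1.2, size=(200, 13))     # speaker B
test = rng.normal(0.7, 1.2, size=(50, 13))           # unknown utterance (drawn from B)

gmm_a = GaussianMixture(n_components=4, covariance_type="diag", random_state=0).fit(speaker_a)
gmm_b = GaussianMixture(n_components=4, covariance_type="diag", random_state=0).fit(speaker_b)

scores = {"A": gmm_a.score(test), "B": gmm_b.score(test)}   # mean log-likelihood per frame
print("decided speaker:", max(scores, key=scores.get), scores)
```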

  6. A quantitative comparison of the TERA modeling and DFT magnetic resonance image reconstruction techniques.

    Science.gov (United States)

    Smith, M R; Nichols, S T; Constable, R T; Henkelman, R M

    1991-05-01

    The resolution of magnetic resonance images reconstructed using the discrete Fourier transform (DFT) algorithm is limited by the effective window generated by the finite data length. The transient error reconstruction approach (TERA) is an alternative reconstruction method based on autoregressive moving average (ARMA) modeling techniques. Quantitative measurements comparing the truncation artifacts present during DFT and TERA image reconstruction show that the modeling method substantially reduces these artifacts on "full" (256 X 256), "truncated" (256 X 192), and "severely truncated" (256 X 128) data sets without introducing the global amplitude distortion found in other modeling techniques. Two global measures for determining the success of modeling are suggested. Problem areas for one-dimensional modeling are examined and reasons for considering two-dimensional modeling discussed. Analysis of both medical and phantom data reconstructions are presented.

  7. Automatic parameter extraction techniques in IC-CAP for a compact double gate MOSFET model

    Science.gov (United States)

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin

    2013-05-01

    In this paper, automatic parameter extraction techniques of Agilent's IC-CAP modeling package are presented to extract our explicit compact model parameters. This model is developed based on a surface potential model and coded in Verilog-A. The model has been adapted to Trigate MOSFETs, includes short channel effects (SCEs) and allows accurate simulations of the device characteristics. The parameter extraction routines provide an effective way to extract the model parameters. The techniques minimize the discrepancy and error between the simulation results and the available experimental data for more accurate parameter values and reliable circuit simulation. Behavior of the second derivative of the drain current is also verified and proves to be accurate and continuous through the different operating regimes. The results show good agreement with measured transistor characteristics under different conditions and through all operating regimes.

  8. Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling

    DEFF Research Database (Denmark)

    Dotto, C. B.; Mannina, G.; Kleidorfer, M.

    2012-01-01

    The aim of this paper is the assessment and comparison of different techniques generally used in the uncertainty assessment of the parameters of water models. This paper compares a number of these techniques: the Generalized Likelihood Uncertainty Estimation (GLUE), the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multiobjective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty techniques, common criteria have been set for the likelihood formulation, defining the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside...
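    The GLUE procedure mentioned above can be summarised in a few lines: sample parameter sets from a prior, keep the "behavioural" sets whose likelihood exceeds a threshold, and derive prediction bounds from the behavioural simulations. The Python sketch below uses a toy one-parameter recession model and unweighted percentile bounds for brevity; the model, noise level and threshold are invented, not the stormwater case study.

```python
# Bare-bones GLUE-style uncertainty analysis on a toy recession model.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(50)
k_true = 0.15
obs = np.exp(-k_true * t) + 0.02 * rng.standard_normal(t.size)   # "observed" series

def model(k):
    return np.exp(-k * t)

samples = rng.uniform(0.01, 0.5, size=5000)                      # prior parameter draws
sims = np.array([model(k) for k in samples])
likelihood = np.exp(-np.sum((sims - obs) ** 2, axis=1) / (2 * 0.02 ** 2))

behavioural = sims[likelihood > 0.01 * likelihood.max()]         # acceptance threshold
lower = np.percentile(behavioural, 5, axis=0)                    # 5-95% prediction bounds
upper = np.percentile(behavioural, 95, axis=0)
print("behavioural sets:", len(behavioural),
      "mean 5-95% band width:", (upper - lower).mean().round(3))
```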

  9. Q-DPM: An Efficient Model-Free Dynamic Power Management Technique

    CERN Document Server

    Li, Min; Yao, Richard; Yan, Xiaolang

    2011-01-01

    When applying the Dynamic Power Management (DPM) technique to pervasively deployed embedded systems, the technique needs to be very efficient so that it is feasible to implement it on a low-end processor with a tight memory budget. Furthermore, it should have the capability to track time-varying behavior rapidly, because such variation is an inherent characteristic of real-world systems. Existing methods, which are usually model-based, may not satisfy the aforementioned requirements. In this paper, we propose a model-free DPM technique based on Q-Learning. Q-DPM is much more efficient because it removes the overhead of the parameter estimator and mode-switch controller. Furthermore, its policy optimization is performed via consecutive online trials, which also leads to very rapid response to time-varying behavior.
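    A toy tabular Q-learning sketch of the power-management idea is shown below: the agent learns whether to keep an idle device awake or put it to sleep by minimising an invented cost that trades idle power against the penalty of sleeping when work arrives. The states, costs and workload model are illustrative only and far simpler than Q-DPM.

```python
# Tabular Q-learning sketch of a two-state dynamic power management policy.
import random

random.seed(0)
actions = ["stay_awake", "sleep"]
Q = {(busy, a): 0.0 for busy in (0, 1) for a in actions}
alpha, gamma, eps = 0.1, 0.9, 0.1            # learning rate, discount, exploration

def cost(busy, action):
    if action == "sleep":
        return 5.0 if busy else 0.2          # big penalty if work arrives while asleep
    return 1.0                               # idle/active power while awake

busy = 1
for step in range(20000):
    a = random.choice(actions) if random.random() < eps else min(actions, key=lambda x: Q[(busy, x)])
    c = cost(busy, a)
    next_busy = 1 if random.random() < 0.3 else 0        # bursty workload stand-in
    target = c + gamma * min(Q[(next_busy, x)] for x in actions)
    Q[(busy, a)] += alpha * (target - Q[(busy, a)])      # Q-learning update (cost minimisation)
    busy = next_busy

print({k: round(v, 2) for k, v in Q.items()})
print("learned policy when idle:", min(actions, key=lambda x: Q[(0, x)]))
```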

  10. Message Structures: a modelling technique for information systems analysis and design

    CERN Document Server

    España, Sergio; Pastor, Óscar; Ruiz, Marcela

    2011-01-01

    Despite the increasing maturity of model-driven software development (MDD), some research challenges remain open in the field of information systems (IS). For instance, there is a need to improve modelling techniques so that they cover several development stages in an integrated way, and they facilitate the transition from analysis to design. This paper presents Message Structures, a technique for the specification of communicative interactions between the IS and organisational actors. This technique can be used both in the analysis stage and in the design stage. During analysis, it allows abstracting from the technology that will support the IS, and to complement business process diagramming techniques with the specification of the communicational needs of the organisation. During design, Message Structures serves two purposes: (i) it allows to systematically derive a specification of the IS memory (e.g. a UML class diagram), (ii) and it allows to reason the user interface design using abstract patterns. Thi...

  11. Fluid-Structure Interaction in Abdominal Aortic Aneurysm: Effect of Modeling Techniques

    Directory of Open Access Journals (Sweden)

    Shengmao Lin

    2017-01-01

    Full Text Available In this work, the impact of modeling techniques on predicting the mechanical behaviors of abdominal aortic aneurysm (AAA is systematically investigated. The fluid-structure interaction (FSI model for simultaneously capturing the transient interaction between blood flow dynamics and wall mechanics was compared with its simplified techniques, that is, computational fluid dynamics (CFD or computational solid stress (CSS model. Results demonstrated that CFD exhibited relatively smaller vortexes and tends to overestimate the fluid wall shear stress, compared to FSI. On the contrary, the minimal differences in wall stresses and deformation were observed between FSI and CSS models. Furthermore, it was found that the accuracy of CSS prediction depends on the applied pressure profile for the aneurysm sac. A large pressure drop across AAA usually led to the underestimation of wall stresses and thus the AAA rupture. Moreover, the assumed isotropic AAA wall properties, compared to the anisotropic one, will aggravate the difference between the simplified models with the FSI approach. The present work demonstrated the importance of modeling techniques on predicting the blood flow dynamics and wall mechanics of the AAA, which could guide the selection of appropriate modeling technique for significant clinical implications.

  12. Fluid-Structure Interaction in Abdominal Aortic Aneurysm: Effect of Modeling Techniques.

    Science.gov (United States)

    Lin, Shengmao; Han, Xinwei; Bi, Yonghua; Ju, Siyeong; Gu, Linxia

    2017-01-01

    In this work, the impact of modeling techniques on predicting the mechanical behaviors of abdominal aortic aneurysm (AAA) is systematically investigated. The fluid-structure interaction (FSI) model for simultaneously capturing the transient interaction between blood flow dynamics and wall mechanics was compared with its simplified techniques, that is, computational fluid dynamics (CFD) or computational solid stress (CSS) model. Results demonstrated that CFD exhibited relatively smaller vortexes and tends to overestimate the fluid wall shear stress, compared to FSI. On the contrary, the minimal differences in wall stresses and deformation were observed between FSI and CSS models. Furthermore, it was found that the accuracy of CSS prediction depends on the applied pressure profile for the aneurysm sac. A large pressure drop across AAA usually led to the underestimation of wall stresses and thus the AAA rupture. Moreover, the assumed isotropic AAA wall properties, compared to the anisotropic one, will aggravate the difference between the simplified models with the FSI approach. The present work demonstrated the importance of modeling techniques on predicting the blood flow dynamics and wall mechanics of the AAA, which could guide the selection of appropriate modeling technique for significant clinical implications.

  13. Fluid-Structure Interaction in Abdominal Aortic Aneurysm: Effect of Modeling Techniques

    Science.gov (United States)

    Lin, Shengmao; Han, Xinwei; Bi, Yonghua; Ju, Siyeong

    2017-01-01

    In this work, the impact of modeling techniques on predicting the mechanical behaviors of abdominal aortic aneurysm (AAA) is systematically investigated. The fluid-structure interaction (FSI) model for simultaneously capturing the transient interaction between blood flow dynamics and wall mechanics was compared with its simplified techniques, that is, computational fluid dynamics (CFD) or computational solid stress (CSS) model. Results demonstrated that CFD exhibited relatively smaller vortexes and tends to overestimate the fluid wall shear stress, compared to FSI. On the contrary, the minimal differences in wall stresses and deformation were observed between FSI and CSS models. Furthermore, it was found that the accuracy of CSS prediction depends on the applied pressure profile for the aneurysm sac. A large pressure drop across AAA usually led to the underestimation of wall stresses and thus the AAA rupture. Moreover, the assumed isotropic AAA wall properties, compared to the anisotropic one, will aggravate the difference between the simplified models with the FSI approach. The present work demonstrated the importance of modeling techniques on predicting the blood flow dynamics and wall mechanics of the AAA, which could guide the selection of appropriate modeling technique for significant clinical implications. PMID:28321413

  14. The FORTRAN NALAP code adapted to a microcomputer compiler

    Energy Technology Data Exchange (ETDEWEB)

    Lobo, Paulo David de Castro; Borges, Eduardo Madeira; Braz Filho, Francisco Antonio; Guimaraes, Lamartine Nogueira Frutuoso, E-mail: plobo.a@uol.com.b, E-mail: eduardo@ieav.cta.b, E-mail: fbraz@ieav.cta.b, E-mail: guimarae@ieav.cta.b [Instituto de Estudos Avancados (IEAv/CTA), Sao Jose dos Campos, SP (Brazil)

    2010-07-01

    The Nuclear Energy Division of the Institute for Advanced Studies (IEAv) is conducting the TERRA project (TEcnologia de Reatores Rapidos Avancados), the Technology for Advanced Fast Reactors project, aimed at a space reactor application. In this work, in support of the TERRA project, the NALAP code adapted to a microcomputer compiler called Compaq Visual Fortran (Version 6.6) is presented. This code, adapted from the light water reactor transient code RELAP 3B, simulates thermal-hydraulic responses for sodium cooled fast reactors. The strategy to run the code on a PC was divided into several steps: removing unnecessary routines, eliminating obsolete statements, introducing new ones and enabling extended-precision mode. The source program was able to solve three sample cases under conditions of protected transients suggested in the literature: the normal reactor shutdown, with a delay of 200 ms to start the control rod movement and a delay of 500 ms to stop the pumps; reactor scram after a loss-of-flow transient; and transients protected from overpower. Comparisons were made with results from the time when the NALAP code was acquired by the IEAv, back in the 80's. All the responses for these three simulations reproduced the calculations performed with the CDC compiler in 1985. Further modifications will include the use of gas as coolant for the nuclear reactor to allow a Closed Brayton Cycle Loop (CBCL) to be used as a heat/electric converter. (author)

  15. Long term stability of rat sleep quantified by microcomputer analysis.

    Science.gov (United States)

    Vivaldi, E A; Pastel, R H; Fernstrom, J D; Hobson, J A

    1984-09-01

    An inexpensive microcomputer system is described for the direct recording of electrographic data from animals. Using this system, electrographic data can be recorded continuously on a polygraph and simultaneously quantitated by the computer, for days or weeks. Our system quantifies the amounts of delta waves, spindle bursts, hippocampal RSA activity and movement spikes for 15 sec epochs. These electrographic data are stored by the computer and subsequently can be used to score sleep stages. We find that the computer reliably counts waves; using these data, it can then score sleep stages off-line with about 90% accuracy. With this system, we find that the minutes/hour of both SWS and REM are remarkably stable from day to day. The absolute number of delta waves, spindle bursts, trains of RSA and movement counts/hour also remain stable from day to day. This type of system should find significant application in situations where quantitation of longterm effects of drugs, diets and other environmental inputs on sleep states or EEG wave bands are of interest.

  16. An effectiveness-NTU technique for characterising a finned tubes PCM system using a CFD model

    OpenAIRE

    Tay, N. H. Steven; Belusko, M.; Castell, Albert; Cabeza, Luisa F.; Bruno, F.

    2014-01-01

    Numerical modelling is commonly used to design, analyse and optimise tube-in-tank phase change thermal energy storage systems with fins. A new simplified two dimensional mathematical model, based on the effectiveness-number of transfer units technique, has been developed to characterise tube-in-tank phase change material systems, with radial round fins. The model applies an empirically derived P factor which defines the proportion of the heat flow which is parallel and isothermal....

  17. Some meta-modeling and optimization techniques for helicopter pre-sizing.

    OpenAIRE

    Tremolet, A.; Basset, P.M.

    2012-01-01

    Optimization and meta-models are key elements of modern engineering techniques. The Multidisciplinary Design Optimization (MDO) allows solving strongly coupled physical problems aiming at the global system optimization. For these multidisciplinary optimizations, meta-models can be required as surrogates for complex and high computational cost codes. Meta-modeling is also used for catching general trends and underlying relationships between parameters within a database. The application of thes...

  18. Study on ABCD Analysis Technique for Business Models, business strategies, Operating Concepts & Business Systems

    OpenAIRE

    Aithal, Sreeramana

    2016-01-01

    When studying the implications of a business model, choosing success strategies, developing viable operational concepts or evolving a functional system, it is important to analyse it in all dimensions. For this purpose, various analysis techniques/frameworks are used. This paper is a discussion of how to use an innovative analysis framework called the ABCD model on a given business model, business strategy, operational concept/idea or business system. Based on four constructs Advantages,...

  19. An Improved Technique Based on Firefly Algorithm to Estimate the Parameters of the Photovoltaic Model

    Directory of Open Access Journals (Sweden)

    Issa Ahmed Abed

    2016-12-01

    Full Text Available This paper presents a method to enhance the firefly algorithm by coupling it with a local search. The constructed technique is applied to identify the parameters of the solar cell model, where the method has proved its ability to obtain the photovoltaic model parameters. The standard firefly algorithm (FA), the electromagnetism-like (EM) algorithm, and the electromagnetism-like algorithm without local search (EMW) are all compared with the suggested method to test its capability to solve this model.
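    The core firefly update, moving each candidate towards brighter (better) candidates with an attractiveness that decays with distance, is sketched below on a simple quadratic stand-in for the photovoltaic fitting error; the local-search enhancement of the paper is not included and all constants are generic defaults.

```python
# Compact firefly-algorithm sketch minimising a 2-parameter test function.
import numpy as np

def objective(p):                       # stand-in for the PV model fitting error
    return (p[0] - 1.2) ** 2 + (p[1] - 0.7) ** 2

rng = np.random.default_rng(6)
n, dim, iters = 15, 2, 100
beta0, gamma, alpha = 1.0, 1.0, 0.05    # attractiveness, absorption, randomisation
x = rng.uniform(-2, 2, size=(n, dim))
f = np.apply_along_axis(objective, 1, x)

for _ in range(iters):
    for i in range(n):
        for j in range(n):
            if f[j] < f[i]:             # move firefly i towards brighter firefly j
                r2 = np.sum((x[i] - x[j]) ** 2)
                beta = beta0 * np.exp(-gamma * r2)
                x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                f[i] = objective(x[i])

best = x[np.argmin(f)]
print("best parameters:", best.round(3), "error:", f.min().round(5))
```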

  20. Numerical Time-Domain Modeling of Lamb Wave Propagation Using Elastodynamic Finite Integration Technique

    OpenAIRE

    Hussein Rappel; Aghil Yousefi-Koma; Jalil Jamali; Ako Bahari

    2014-01-01

    This paper presents a numerical model of Lamb wave propagation in a homogeneous steel plate using the elastodynamic finite integration technique (EFIT), as well as its validation with analytical results. The Lamb wave method is a long-range inspection technique which is considered to have a unique future in the field of structural health monitoring. One of the main problems facing the Lamb wave method is how to choose the most appropriate frequency to generate the waves for adequate transmission capab...

  1. AN ACCURACY ASSESSMENT OF AUTOMATED PHOTOGRAMMETRIC TECHNIQUES FOR 3D MODELING OF COMPLEX INTERIORS

    OpenAIRE

    Georgantas, A.; M. Brédif; Pierrot-Desseilligny, M.

    2012-01-01

    This paper presents a comparison of automatic photogrammetric techniques to terrestrial laser scanning for 3D modelling of complex interior spaces. We try to evaluate the automated photogrammetric techniques not only in terms of their geometric quality compared to laser scanning but also in terms of cost in money, acquisition and computational time. To this purpose we chose as test site a modern building’s stairway. APERO/MICMAC (©IGN), which is an Open Source photogrammetric softwar...

  2. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems

    Science.gov (United States)

    Timmis, Jon; Qwarnstrom, Eva E.

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model. PMID:27571414

  3. Statistical Techniques Complement UML When Developing Domain Models of Complex Dynamical Biosystems.

    Science.gov (United States)

    Williams, Richard A; Timmis, Jon; Qwarnstrom, Eva E

    2016-01-01

    Computational modelling and simulation is increasingly being used to complement traditional wet-lab techniques when investigating the mechanistic behaviours of complex biological systems. In order to ensure computational models are fit for purpose, it is essential that the abstracted view of biology captured in the computational model, is clearly and unambiguously defined within a conceptual model of the biological domain (a domain model), that acts to accurately represent the biological system and to document the functional requirements for the resultant computational model. We present a domain model of the IL-1 stimulated NF-κB signalling pathway, which unambiguously defines the spatial, temporal and stochastic requirements for our future computational model. Through the development of this model, we observe that, in isolation, UML is not sufficient for the purpose of creating a domain model, and that a number of descriptive and multivariate statistical techniques provide complementary perspectives, in particular when modelling the heterogeneity of dynamics at the single-cell level. We believe this approach of using UML to define the structure and interactions within a complex system, along with statistics to define the stochastic and dynamic nature of complex systems, is crucial for ensuring that conceptual models of complex dynamical biosystems, which are developed using UML, are fit for purpose, and unambiguously define the functional requirements for the resultant computational model.

  4. Antenna pointing system for satellite tracking based on Kalman filtering and model predictive control techniques

    Science.gov (United States)

    Souza, André L. G.; Ishihara, João Y.; Ferreira, Henrique C.; Borges, Renato A.; Borges, Geovany A.

    2016-12-01

    The present work proposes a new approach for an antenna pointing system for satellite tracking. Such a system uses the received signal to estimate the beam pointing deviation and then adjusts the antenna pointing. The present work has two contributions. First, the estimation is performed by a Kalman filter based conical scan technique. This technique uses the Kalman filter avoiding the batch estimator and applies a mathematical manipulation avoiding the linearization approximations. Secondly, a control technique based on the model predictive control together with an explicit state feedback solution are obtained in order to reduce the computational burden. Numerical examples illustrate the results.
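    The estimation half of such a scheme can be illustrated with a textbook linear Kalman filter tracking a slowly drifting two-axis pointing deviation from noisy measurements. The sketch below is generic: it does not reproduce the conical-scan signal model or the model predictive controller of the paper, and all noise levels are invented.

```python
# Minimal Kalman-filter sketch for tracking a drifting pointing deviation.
import numpy as np

rng = np.random.default_rng(7)
F = np.eye(2)                         # deviation modelled as a random walk (az, el)
H = np.eye(2)                         # the deviation is "measured" directly
Q = 1e-6 * np.eye(2)                  # process (drift) noise covariance
R = 1e-3 * np.eye(2)                  # measurement noise covariance

x_hat, P = np.zeros(2), np.eye(2)     # initial estimate and covariance
truth = np.array([0.01, -0.02])       # true deviation (rad), slowly drifting

for k in range(200):
    truth = truth + rng.normal(0, 1e-3, 2)           # drift
    z = truth + rng.normal(0, np.sqrt(R[0, 0]), 2)   # noisy measurement
    # predict
    x_hat, P = F @ x_hat, F @ P @ F.T + Q
    # update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_hat = x_hat + K @ (z - H @ x_hat)
    P = (np.eye(2) - K @ H) @ P

print("true:", truth.round(4), "estimated:", x_hat.round(4))
```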

  5. A comparative assessment of efficient uncertainty analysis techniques for environmental fate and transport models: application to the FACT model

    Science.gov (United States)

    Balakrishnan, Suhrid; Roy, Amit; Ierapetritou, Marianthi G.; Flach, Gregory P.; Georgopoulos, Panos G.

    2005-06-01

    This work presents a comparative assessment of efficient uncertainty modeling techniques, including Stochastic Response Surface Method (SRSM) and High Dimensional Model Representation (HDMR). This assessment considers improvement achieved with respect to conventional techniques of modeling uncertainty (Monte Carlo). Given that traditional methods for characterizing uncertainty are very computationally demanding, when they are applied in conjunction with complex environmental fate and transport models, this study aims to assess how accurately these efficient (and hence viable) techniques for uncertainty propagation can capture complex model output uncertainty. As a part of this effort, the efficacy of HDMR, which has primarily been used in the past as a model reduction tool, is also demonstrated for uncertainty analysis. The application chosen to highlight the accuracy of these new techniques is the steady state analysis of the groundwater flow in the Savannah River Site General Separations Area (GSA) using the subsurface Flow And Contaminant Transport (FACT) code. Uncertain inputs included three-dimensional hydraulic conductivity fields, and a two-dimensional recharge rate field. The output variables under consideration were the simulated stream baseflows and hydraulic head values. Results show that the uncertainty analysis outcomes obtained using SRSM and HDMR are practically indistinguishable from those obtained using the conventional Monte Carlo method, while requiring orders of magnitude fewer model simulations.
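    In the same spirit as these efficient techniques, a surrogate can be fitted to a handful of expensive model runs and then sampled cheaply. The Python sketch below fits a quadratic response surface, a loose analogue of SRSM's polynomial expansion rather than the actual method or the FACT application, and propagates uncertainty through it by Monte Carlo; the "expensive model" and input distributions are invented.

```python
# Illustrative surrogate-based uncertainty propagation.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

def expensive_model(x):               # stand-in for the groundwater flow code
    return np.exp(-x[:, 0]) + 0.5 * x[:, 1] ** 2 + 0.1 * x[:, 0] * x[:, 1]

rng = np.random.default_rng(8)
X_train = rng.normal(size=(30, 2))                   # only 30 "expensive" runs
y_train = expensive_model(X_train)

surrogate = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
surrogate.fit(X_train, y_train)

X_mc = rng.normal(size=(100000, 2))                  # cheap sampling of the surrogate
y_mc = surrogate.predict(X_mc)
print("output mean %.3f, std %.3f" % (y_mc.mean(), y_mc.std()))
```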

  6. Double-wire sternal closure technique in bovine animal models for total artificial heart implant.

    Science.gov (United States)

    Karimov, Jamshid H; Sunagawa, Gengo; Golding, Leonard A R; Moazami, Nader; Fukamachi, Kiyotaka

    2015-08-01

    In vivo preclinical testing of mechanical circulatory devices requires large animal models that provide reliable physiological and hemodynamic conditions by which to test the device and investigate design and development strategies. Large bovine species are commonly used for mechanical circulatory support device research. The animals used for chronic in vivo support require high-quality care and excellent surgical techniques as well as advanced methods of postoperative care. These techniques are constantly being updated and new methods are emerging. We report results of our double steel-wire closure technique in large bovine models used for Cleveland Clinic's continuous-flow total artificial heart development program. This is the first report of double-wire sternal fixation used in large bovine models.

  7. Determination of Complex-Valued Parametric Model Coefficients Using Artificial Neural Network Technique

    Directory of Open Access Journals (Sweden)

    A. M. Aibinu

    2010-01-01

    Full Text Available A new approach for determining complex-valued autoregressive (CAR) and complex-valued autoregressive moving average (CARMA) model coefficients using a complex-valued neural network (CVNN) technique is discussed in this paper. The CAR and complex-valued moving average (CMA) coefficients which constitute a CARMA model are computed simultaneously from the adaptive weights and coefficients of the linear activation functions in a two-layered CVNN. The performance of the proposed technique has been evaluated using simulated complex-valued data (CVD) with three different types of activation functions. The results show that the proposed method can accurately determine the model coefficients provided that the network is properly trained. Furthermore, application of the developed CVNN-based technique for MRI K-space reconstruction results in images with improved resolution.

  8. Forecasting performances of three automated modelling techniques during the economic crisis 2007-2009

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    2014-01-01

    into a linear model selection and estimation problem. To this end, we employ three automatic modelling devices. One of them is White’s QuickNet, but we also consider Autometrics, which is well known to time series econometricians, and the Marginal Bridge Estimator, which is better known to statisticians. … The performances of these three model selectors are compared by looking at the accuracy of the forecasts of the estimated neural network models. We apply the neural network model and the three modelling techniques to monthly industrial production and unemployment series from the G7 countries and the four … In this work we consider the forecasting of macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feed-forward autoregressive neural network models. What makes these models interesting in the present context is the fact

  9. Modelling of pulverized coal boilers: review and validation of on-line simulation techniques

    Energy Technology Data Exchange (ETDEWEB)

    Diez, L.I.; Cortes, C.; Campo, A. [University of Zaragoza, Zaragoza (Spain). Centro de Investigacion de Recursos y Consumos Energeticos (CIRCE)

    2005-07-01

    Thermal modelling of large pulverized fuel utility boilers has reached a remarkable level of development through the application of CFD techniques and other advanced mathematical methods. However, due to the computational requirements, on-line monitoring and simulation tools still rely on lumped models and semiempirical approaches, which are often strongly simplified and not well connected with a sound theoretical basis. This paper reviews on-line modelling techniques, aiming at the improvement of their capabilities by means of the revision and modification of conventional lumped models and the integration of off-line CFD predictions. The paper illustrates the coherence of monitoring calculations as well as the validation of the on-line thermal simulator, starting from real operation data from a case-study unit. The outcome is that it is possible to significantly improve the accuracy of on-line calculations provided by conventional models, taking into account the singularities of large combustion systems and coupling off-line CFD predictions for selected scenarios.

  10. Kerf modelling in abrasive waterjet milling using evolutionary computation and ANOVA techniques

    Science.gov (United States)

    Alberdi, A.; Rivero, A.; Carrascal, A.; Lamikiz, A.

    2012-04-01

    Many researchers demonstrated the capability of Abrasive Waterjet (AWJ) technology for precision milling operations. However, the concurrence of several input parameters along with the stochastic nature of this technology leads to a complex process control, which requires a work focused in process modelling. This research work introduces a model to predict the kerf shape in AWJ slot milling in Aluminium 7075-T651 in terms of four important process parameters: the pressure, the abrasive flow rate, the stand-off distance and the traverse feed rate. A hybrid evolutionary approach was employed for kerf shape modelling. This technique allowed characterizing the profile through two parameters: the maximum cutting depth and the full width at half maximum. On the other hand, based on ANOVA and regression techniques, these two parameters were also modelled as a function of process parameters. Combination of both models resulted in an adequate strategy to predict the kerf shape for different machining conditions.

  11. MODELING AND COMPENSATION TECHNIQUE FOR THE GEOMETRIC ERRORS OF FIVE-AXIS CNC MACHINE TOOLS

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    One of the important trends in precision machining is the development of real-time error compensation techniques. Error compensation for multi-axis CNC machine tools is very difficult and attractive. A model for the geometric error of five-axis CNC machine tools based on multi-body systems is proposed, and the key technique of the compensation, the identification of geometric error parameters, is developed. A simulation of workpiece cutting to verify the multi-body-system-based model is also considered.

  12. A Review of Domain Modelling and Domain Imaging Techniques in Ferroelectric Crystals

    Directory of Open Access Journals (Sweden)

    John E. Huber

    2011-02-01

    Full Text Available The present paper reviews models of domain structure in ferroelectric crystals, thin films and bulk materials. Common crystal structures in ferroelectric materials are described and the theory of compatible domain patterns is introduced. Applications to multi-rank laminates are presented. Alternative models employing phase-field and related techniques are reviewed. The paper then presents methods of observing ferroelectric domain structure, including optical, polarized light, scanning electron microscopy, X-ray and neutron diffraction, atomic force microscopy and piezo-force microscopy. Use of more than one technique for unambiguous identification of the domain structure is also described.

  13. Modelling the potential spatial distribution of mosquito species using three different techniques.

    Science.gov (United States)

    Cianci, Daniela; Hartemink, Nienke; Ibáñez-Justicia, Adolfo

    2015-02-27

    Models for the spatial distribution of vector species are important tools in the assessment of the risk of establishment and subsequent spread of vector-borne diseases. The aims of this study are to define the environmental conditions suitable for several mosquito species through species distribution modelling techniques, and to compare the results produced with the different techniques. Three different modelling techniques, i.e., non-linear discriminant analysis, random forest and generalised linear model, were used to investigate the environmental suitability in the Netherlands for three indigenous mosquito species (Culiseta annulata, Anopheles claviger and Ochlerotatus punctor). Results obtained with the three statistical models were compared with regard to: (i) environmental suitability maps, (ii) environmental variables associated with occurrence, (iii) model evaluation. The models indicated that precipitation, temperature and population density were associated with the occurrence of Cs. annulata and An. claviger, whereas land surface temperature and vegetation indices were associated with the presence of Oc. punctor. The maps produced with the three different modelling techniques showed consistent spatial patterns for each species, but differences in the ranges of the predictions. Non-linear discriminant analysis had lower predictions than other methods. The model with the best classification skills for all the species was the random forest model, with specificity values ranging from 0.89 to 0.91, and sensitivity values ranging from 0.64 to 0.95. We mapped the environmental suitability for three mosquito species with three different modelling techniques. For each species, the maps showed consistent spatial patterns, but the level of predicted environmental suitability differed; NLDA gave lower predicted probabilities of presence than the other two methods. The variables selected as important in the models were in agreement with the existing knowledge about
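    For the random forest technique, the presence/absence workflow with the sensitivity and specificity evaluation used above can be sketched in a few lines of Python; the "environmental layers" and occurrence labels below are synthetic, not the Dutch mosquito data.

```python
# Minimal random-forest species-distribution sketch with sensitivity/specificity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(9)
X = rng.standard_normal((500, 3))                    # e.g. temperature, precipitation, density
y = ((X[:, 0] + 0.8 * X[:, 1]) > 0).astype(int)      # synthetic presence/absence labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)

tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
print("sensitivity:", round(tp / (tp + fn), 2), "specificity:", round(tn / (tn + fp), 2))
```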

  14. Bone diagnosis by X-ray techniques

    Energy Technology Data Exchange (ETDEWEB)

    Lima, I. [Nuclear Engineering Program/COPPE/UFRJ, P.O. Box 68509, Av. Horacio Macedo, 2030, Sala I-133, Cidade Universitaria, Zip Code: 21941-972 Rio de Janeiro, RJ (Brazil)], E-mail: inaya@lin.ufrj.br; Anjos, M.J. [Nuclear Engineering Program/COPPE/UFRJ, P.O. Box 68509, Av. Horacio Macedo, 2030, Sala I-133, Cidade Universitaria, Zip Code: 21941-972 Rio de Janeiro, RJ (Brazil); Physics Institute, UERJ (Brazil); Farias, M.L.F. [University Hospital, UFRJ (Brazil); Parcegoni, N.; Rosenthal, D. [Biophysics Institute, UFRJ (Brazil); Duarte, M.E.L. [Histologic and Embriology Department, UFRJ (Brazil); Lopes, R.T. [Nuclear Engineering Program/COPPE/UFRJ, P.O. Box 68509, Av. Horacio Macedo, 2030, Sala I-133, Cidade Universitaria, Zip Code: 21941-972 Rio de Janeiro, RJ (Brazil)

    2008-12-15

    In this work, the two X-ray techniques used were 3D micro-computed tomography (micro-CT) and X-ray microfluorescence (micro-XRF), applied to investigate the internal structure of the bone samples. These two techniques work together, i.e. as a complement to each other, to characterize bone structure and composition. Initially, the specimens were scanned in the micro-computed tomography system, and the second step consisted of the X-ray microfluorescence analysis. The results show that both techniques are powerful methods for analyzing, inspecting and characterizing bone samples: they are alternative procedures for examining bone structure and composition, and they are complementary.

  15. Comparison of the execution speed of Java on the Raspberry Pi microcomputer

    Directory of Open Access Journals (Sweden)

    O. I. Didukh

    2015-03-01

    Full Text Available Introduction. The execution speed of a specific set of instructions is important when working with a high-level programming language on microcomputers. The method of comparing the execution times of two functionally different programs on different versions of the Java Virtual Machine is used to determine the performance of Java on the Raspberry Pi microcomputer. A modern fast algorithm for finding primes up to a given integer N (the sieve of Atkin) and a program performing basic floating-point operations are implemented. Results. Both programs are tested at microcomputer operating frequencies of 700 MHz, 900 MHz and 1000 MHz. Two versions of Java for the Raspberry Pi are used: Oracle Java and OpenJDK. Conclusions. On the Raspberry Pi microcomputer, the elementary floating-point operations run faster than the complex prime search algorithm. It is established that Oracle Java is 13-14 times faster than OpenJDK. Oracle Java on the Raspberry Pi gives good performance and is recommended for installation and use on the microcomputer.

  16. Quantification of arthritic bone degradation by analysis of 3D micro-computed tomography data

    Science.gov (United States)

    Svensson, Carl-Magnus; Hoffmann, Bianca; Irmler, Ingo M.; Straßburger, Maria; Figge, Marc Thilo; Saluz, Hans Peter

    2017-01-01

    The use of animal models of arthritis is a key component in the evaluation of therapeutic strategies against the human disease rheumatoid arthritis (RA). Here we present quantitative measurements of bone degradation characterised by the cortical bone profile using glucose-6-phosphate isomerase (G6PI) induced arthritis. We applied micro-computed tomography (μCT) during three arthritis experiments and one control experiment to image the metatarsals of the hind paws and to investigate the effect of experimental arthritis on their cortical bone profile. For measurements of the cortical profile we automatically identified slices that are orthogonal to individual metatarsals, thereby making the measurements independent of animal placement in the scanner. We measured the average cortical thickness index (CTI) of the metatarsals, as well as the thickness changes along the metatarsal. In this study we introduced the cortical thickness gradient (CTG) as a new measure and we investigated how arthritis affects this measure. We found that in general both CTI and CTG are able to quantify arthritic progression, whilst CTG was found to be the more sensitive measure. PMID:28290525

  17. NEW TECHNIQUE FOR OBESITY SURGERY: INTERNAL GASTRIC PLICATION TECHNIQUE USING INTRAGASTRIC SINGLE-PORT (IGS-IGP) IN EXPERIMENTAL MODEL.

    Science.gov (United States)

    Müller, Verena; Fikatas, Panagiotis; Gül, Safak; Noesser, Maximilian; Fuehrer, Kirsten; Sauer, Igor; Pratschke, Johann; Zorron, Ricardo

    2017-01-01

    Bariatric surgery is currently the most effective method to ameliorate the co-morbidities of morbidly obese patients with a BMI over 35 kg/m2. Endoscopic techniques have been developed to treat patients with mild obesity and ameliorate comorbidities, but endoscopic skills are needed, besides the costs of the devices. To report a new technique for internal gastric plication using an intragastric single port device in an experimental swine model. Twenty experiments using fresh pig cadaver stomachs in a laparoscopic trainer were performed. The procedure was performed as follows in ten pigs: 1) volume measurement; 2) insufflation of the stomach with CO2; 3) extroversion of the stomach through the simulator and installation of the single port device (Gelpoint Applied Mini) through a gastrotomy close to the pylorus; 4) performance of four intragastric handsewn 4-point sutures with Prolene 2-0, from the gastric fundus to the antrum; 5) after the procedure, the residual volume was measured. Sleeve gastrectomy was also performed in a further ten pigs and pre- and post-procedure gastric volumes were measured. The internal gastric plication technique was performed successfully in the ten swine experiments. The mean procedure time was 27±4 min. It produced a mean reduction of gastric volume of 51%, and sleeve gastrectomy a mean of 90%, in this swine model. The internal gastric plication technique using an intragastric single port device required few skills to perform, had low operative time and achieved good reduction (51%) of gastric volume in an in vitro experimental model. Bariatric surgery is currently the most effective method to improve the co-morbidities resulting from morbid obesity with a BMI above 35 kg/m2. Endoscopic techniques have been developed to treat patients with mild obesity and improve comorbidities, but endoscopic skills are required, in addition to the costs. To report a new technique for internal gastric plication

  18. Forecasting performance of three automated modelling techniques during the economic crisis 2007-2009

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    problem. To this end we employ three automatic modelling devices. One of them is White’s QuickNet, but we also consider Autometrics, well known to time series econometricians, and the Marginal Bridge Estimator, better known to statisticians and microeconometricians. The performance of these three model selectors is compared by looking at the accuracy of the forecasts of the estimated neural network models. We apply the neural network model and the three modelling techniques to monthly industrial production and unemployment series of the G7 countries and the four Scandinavian ones, and focus on forecasting … In this work we consider forecasting macroeconomic variables during an economic crisis. The focus is on a specific class of models, the so-called single hidden-layer feedforward autoregressive neural network models. What makes these models interesting in the present context is that they form a class

  19. A Hybrid Model for the Mid-Long Term Runoff Forecasting by Evolutionary Computation Techniques

    Institute of Scientific and Technical Information of China (English)

    Zou Xiu-fen; Kang Li-shan; Cae Hong-qing; Wu Zhi-jian

    2003-01-01

    Mid-long term hydrology forecasting is one of the most challenging problems in hydrological studies. This paper proposes an efficient dynamical system prediction model using evolutionary computation techniques. The new model overcomes some disadvantages of conventional hydrology forecasting ones. The observed data are divided into two parts: the slow "smooth and steady" data, and the fast "coarse and fluctuating" data. Under the divide and conquer strategy, the behavior of the smooth data is modeled by ordinary differential equations based on evolutionary modeling, and that of the coarse data is modeled using a gray correlative forecasting method. Our model is verified on the test data of the mid-long term hydrology forecast in the northeast region of China. The experimental results show that the model is superior to the gray system prediction model (GSPM).

  20. Low level waste management: a compilation of models and monitoring techniques. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Mosier, J.E.; Fowler, J.R.; Barton, C.J. (comps.)

    1980-04-01

    In support of the National Low-Level Waste (LLW) Management Research and Development Program being carried out at Oak Ridge National Laboratory, Science Applications, Inc., conducted a survey of models and monitoring techniques associated with the transport of radionuclides and other chemical species from LLW burial sites. As a result of this survey, approximately 350 models were identified. For each model the purpose and a brief description are presented. To the extent possible, a point of contact and reference material are identified. The models are organized into six technical categories: atmospheric transport, dosimetry, food chain, groundwater transport, soil transport, and surface water transport. About 4% of the models identified covered other aspects of LLW management and are placed in a miscellaneous category. A preliminary assessment of all these models was performed to determine their ability to analyze the transport of other chemical species. The models that appeared to be applicable are identified. A brief survey of the state-of-the-art techniques employed to monitor LLW burial sites is also presented, along with a very brief discussion of up-to-date burial techniques.

  1. Hybrid models for hydrological forecasting: integration of data-driven and conceptual modelling techniques

    NARCIS (Netherlands)

    Corzo Perez, G.A.

    2009-01-01

    This book presents the investigation of different architectures of integrating hydrological knowledge and models with data-driven models for the purpose of hydrological flow forecasting. The models resulting from such integration are referred to as hybrid models. The book addresses the following top

  2. Hybrid models for hydrological forecasting: Integration of data-driven and conceptual modelling techniques

    NARCIS (Netherlands)

    Corzo Perez, G.A.

    2009-01-01

    This book presents the investigation of different architectures of integrating hydrological knowledge and models with data-driven models for the purpose of hydrological flow forecasting. The models resulting from such integration are referred to as hybrid models. The book addresses the following top

  3. Hybrid models for hydrological forecasting: integration of data-driven and conceptual modelling techniques

    NARCIS (Netherlands)

    Corzo Perez, G.A.

    2009-01-01

    This book presents the investigation of different architectures of integrating hydrological knowledge and models with data-driven models for the purpose of hydrological flow forecasting. The models resulting from such integration are referred to as hybrid models. The book addresses the following

  4. Hybrid models for hydrological forecasting: Integration of data-driven and conceptual modelling techniques

    NARCIS (Netherlands)

    Corzo Perez, G.A.

    2009-01-01

    This book presents the investigation of different architectures of integrating hydrological knowledge and models with data-driven models for the purpose of hydrological flow forecasting. The models resulting from such integration are referred to as hybrid models. The book addresses the following

  5. Machine learning techniques for astrophysical modelling and photometric redshift estimation of quasars in optical sky surveys

    CERN Document Server

    Kumar, N Daniel

    2008-01-01

    Machine learning techniques are utilised in several areas of astrophysical research today. This dissertation addresses the application of ML techniques to two classes of problems in astrophysics, namely, the analysis of individual astronomical phenomena over time and the automated, simultaneous analysis of thousands of objects in large optical sky surveys. Specifically investigated are (1) techniques to approximate the precise orbits of the satellites of Jupiter and Saturn given Earth-based observations as well as (2) techniques to quickly estimate the distances of quasars observed in the Sloan Digital Sky Survey. Learning methods considered include genetic algorithms, particle swarm optimisation, artificial neural networks, and radial basis function networks. The first part of this dissertation demonstrates that GAs and PSOs can both be efficiently used to model functions that are highly non-linear in several dimensions. It is subsequently demonstrated in the second part that ANNs and RBFNs can be used as ef...

  6. Evaluation of mesh morphing and mapping techniques in patient specific modelling of the human pelvis.

    Science.gov (United States)

    Salo, Zoryana; Beek, Maarten; Whyne, Cari Marisa

    2012-08-01

    Robust generation of pelvic finite element models is necessary to understand variation in mechanical behaviour resulting from differences in gender, aging, disease and injury. The objective of this study was to apply and evaluate mesh morphing and mapping techniques to facilitate the creation and structural analysis of specimen-specific finite element (FE) models of the pelvis. A specimen-specific pelvic FE model (source mesh) was generated following a traditional user-intensive meshing scheme. The source mesh was morphed onto a computed tomography scan-generated target surface of a second pelvis using a landmark-based approach, in which exterior source nodes were shifted to target surface vertices, while constrained along a normal. A second copy of the morphed model was further refined through mesh mapping, in which surface nodes of the initial morphed model were selected in patches and remapped onto the surfaces of the target model. Computed tomography intensity-based material properties were assigned to each model. The source, target, morphed and mapped models were analyzed under axial compression using linear static FE analysis, and their strain distributions were evaluated. Morphing and mapping techniques were effectively applied to generate good-quality and geometrically complex specimen-specific pelvic FE models. Mapping significantly improved strain concurrence with the target pelvis FE model.
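
    A minimal sketch of the morphing step described above, in which exterior source nodes are projected onto the nearest vertices of the CT-derived target surface, is given below. The arrays, the use of scipy's cKDTree, and the omission of the normal-direction constraint are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def morph_surface_nodes(source_nodes, target_vertices):
    """Project each exterior source node onto its nearest target surface vertex.

    source_nodes    : (N, 3) array of exterior node coordinates of the source mesh
    target_vertices : (M, 3) array of vertices of the CT-derived target surface
    Returns the morphed (N, 3) node coordinates.
    """
    tree = cKDTree(target_vertices)         # fast nearest-neighbour lookup
    _, idx = tree.query(source_nodes, k=1)  # closest target vertex per source node
    return target_vertices[idx]

# Toy point clouds standing in for the pelvic surfaces.
rng = np.random.default_rng(0)
src = rng.random((100, 3))
tgt = rng.random((500, 3))
morphed = morph_surface_nodes(src, tgt)
print(morphed.shape)  # (100, 3)
```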

  7. Comparative analysis of system identification techniques for nonlinear modeling of the neuron-microelectrode junction.

    Science.gov (United States)

    Khan, Saad Ahmad; Thakore, Vaibhav; Behal, Aman; Bölöni, Ladislau; Hickman, James J

    2013-03-01

    Applications of non-invasive neuroelectronic interfacing in the fields of whole-cell biosensing, biological computation and neural prosthetic devices depend critically on an efficient decoding and processing of information retrieved from a neuron-electrode junction. This necessitates development of mathematical models of the neuron-electrode interface that realistically represent the extracellular signals recorded at the neuroelectronic junction without being computationally expensive. Extracellular signals recorded using planar microelectrode or field effect transistor arrays have, until now, primarily been represented using linear equivalent circuit models that fail to reproduce the correct amplitude and shape of the signals recorded at the neuron-microelectrode interface. In this paper, to explore viable alternatives for a computationally inexpensive and efficient modeling of the neuron-electrode junction, input-output data from the neuron-electrode junction is modeled using a parametric Wiener model and a Nonlinear Auto-Regressive network with eXogenous input trained using a dynamic Neural Network model (NARX-NN model). Results corresponding to a validation dataset from these models are then employed to compare and contrast the computational complexity and efficiency of the aforementioned modeling techniques with the Lee-Schetzen technique of cross-correlation for estimating a nonlinear dynamic model of the neuroelectronic junction.
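
    As a rough illustration of the NARX idea (not the authors' dynamic-neural-network implementation), the sketch below regresses the current output on lagged outputs and inputs using a small multilayer perceptron; scikit-learn and the toy input/output data are assumptions standing in for the recorded junction signals.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_lagged_matrix(u, y, nu=3, ny=3):
    """Assemble regressors [y(t-1..t-ny), u(t-1..t-nu)] for a NARX-style model."""
    start = max(nu, ny)
    X, t = [], []
    for k in range(start, len(y)):
        X.append(np.r_[y[k - ny:k][::-1], u[k - nu:k][::-1]])
        t.append(y[k])
    return np.array(X), np.array(t)

# Toy data generated by a mildly nonlinear system.
rng = np.random.default_rng(1)
u = rng.standard_normal(2000)
y = np.zeros_like(u)
for k in range(2, len(u)):
    y[k] = 0.5 * y[k - 1] - 0.2 * y[k - 2] + np.tanh(u[k - 1])

X, t = build_lagged_matrix(u, y)
model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
model.fit(X[:1500], t[:1500])
print("validation R^2:", model.score(X[1500:], t[1500:]))
```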

  8. Modeling of PV Systems Based on Inflection Points Technique Considering Reverse Mode

    Directory of Open Access Journals (Sweden)

    Bonie J. Restrepo-Cuestas

    2013-11-01

    Full Text Available This paper proposes a methodology for photovoltaic (PV) systems modeling, considering their behavior in both direct and reverse operating modes and considering mismatching conditions. The proposed methodology is based on the inflection points technique, with a linear approximation to model the bypass diode and a simplified PV model. The proposed mathematical model makes it possible to evaluate the energetic performance of a PV system, exhibiting short simulation times even for large PV systems. In addition, the methodology makes it possible to estimate the condition of modules affected by partial shading, since the power dissipated due to their operation in the second quadrant can be determined.
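
    To make the ingredients concrete, the sketch below evaluates a simplified single-diode PV module current and a piecewise-linear bypass diode; all parameter values are illustrative assumptions, and the inflection-points methodology itself is not reproduced here.

```python
import numpy as np

def pv_module_current(v, isc=8.0, voc=37.0, ns=60, n=1.3, vt=0.02585):
    """Simplified single-diode PV module model (series/shunt resistances neglected).

    I = Isc - I0*(exp(V/(n*Ns*Vt)) - 1), with I0 chosen so that I(Voc) = 0.
    """
    i0 = isc / (np.exp(voc / (n * ns * vt)) - 1.0)
    return isc - i0 * (np.exp(v / (n * ns * vt)) - 1.0)

def bypass_diode_current(v, v_on=0.7, r_on=0.01):
    """Piecewise-linear bypass diode: blocking below v_on, resistive slope above it."""
    return np.where(v > v_on, (v - v_on) / r_on, 0.0)

v = np.linspace(0.0, 37.0, 200)
i = pv_module_current(v)
print("max power (W):", np.max(v * i))
```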

  9. STATISTICAL INFERENCES FOR VARYING-COEFFICIENT MODELS BASED ON LOCALLY WEIGHTED REGRESSION TECHNIQUE

    Institute of Scientific and Technical Information of China (English)

    梅长林; 张文修; 梁怡

    2001-01-01

    Some fundamental issues in statistical inference relating to varying-coefficient regression models are addressed and studied. An exact testing procedure is proposed for checking the goodness of fit of a varying-coefficient model fitted by the locally weighted regression technique versus an ordinary linear regression model. Also, an appropriate statistic for testing variation of model parameters over the locations where the observations are collected is constructed, and a formal testing approach which is essential to exploring spatial non-stationarity in geography science is suggested.
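
    A minimal sketch of the locally weighted (kernel-weighted) regression idea underlying the varying-coefficient fit is shown below; the Gaussian kernel, bandwidth, and toy data are assumptions, and the paper's test statistics are not reproduced.

```python
import numpy as np

def local_linear_fit(x, y, x0, bandwidth=0.3):
    """Locally weighted linear regression at location x0 (Gaussian kernel weights).

    Returns the local intercept and slope, i.e. the varying coefficients at x0.
    """
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)    # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])    # local linear design matrix
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # weighted least squares
    return beta                                       # [value at x0, local slope]

# Toy data whose underlying relationship varies smoothly over location.
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 300))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)
print(local_linear_fit(x, y, x0=0.25))  # local level and slope near x = 0.25
```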

  10. Application of Discrete Fracture Modeling and Upscaling Techniques to Complex Fractured Reservoirs

    Science.gov (United States)

    Karimi-Fard, M.; Lapene, A.; Pauget, L.

    2012-12-01

    During the last decade, an important effort has been made to improve data acquisition (seismic and borehole imaging) and workflow for reservoir characterization which has greatly benefited the description of fractured reservoirs. However, the geological models resulting from the interpretations need to be validated or calibrated against dynamic data. Flow modeling in fractured reservoirs remains a challenge due to the difficulty of representing mass transfers at different heterogeneity scales. The majority of the existing approaches are based on dual continuum representation where the fracture network and the matrix are represented separately and their interactions are modeled using transfer functions. These models are usually based on idealized representation of the fracture distribution which makes the integration of real data difficult. In recent years, due to increases in computer power, discrete fracture modeling techniques (DFM) are becoming popular. In these techniques the fractures are represented explicitly allowing the direct use of data. In this work we consider the DFM technique developed by Karimi-Fard et al. [1] which is based on an unstructured finite-volume discretization. The mass flux between two adjacent control-volumes is evaluated using an optimized two-point flux approximation. The result of the discretization is a list of control-volumes with the associated pore-volumes and positions, and a list of connections with the associated transmissibilities. Fracture intersections are simplified using a connectivity transformation which contributes considerably to the efficiency of the methodology. In addition, the method is designed for general purpose simulators and any connectivity based simulator can be used for flow simulations. The DFM technique is either used standalone or as part of an upscaling technique. The upscaling techniques are required for large reservoirs where the explicit representation of all fractures and faults is not possible
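
    The mass flux between adjacent control volumes in such discretizations is commonly built from half transmissibilities combined harmonically; the sketch below shows this standard two-point flux construction with made-up matrix and fracture properties, and does not reproduce the optimization or the connectivity transformation of the referenced method.

```python
def half_transmissibility(perm, area, dist):
    """One-sided transmissibility of a control-volume face: T_i = k_i * A / d_i."""
    return perm * area / dist

def connection_transmissibility(t1, t2):
    """Harmonic combination of the two half transmissibilities (two-point flux)."""
    return t1 * t2 / (t1 + t2)

# Matrix cell connected to a fracture cell across a shared face (illustrative SI values).
t_matrix = half_transmissibility(perm=1e-15, area=1.0, dist=0.5)
t_fract = half_transmissibility(perm=1e-12, area=1.0, dist=5e-4)
T = connection_transmissibility(t_matrix, t_fract)
print("connection transmissibility:", T)
```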

  11. Microcomputers and minipopulations: the 1981 Cook Islands census.

    Science.gov (United States)

    Hayes, G R

    1983-01-01

    This paper presents some of the background behind the installation and use of a microcomputer for census processing and briefly reviews the results of the 1981 census in the light of the Cook Islands' special demographic situation. Since independence from New Zealand in 1965, the Cook Islands has gradually developed a capacity for statistics gathering in general and census taking in particular. The 1981 quinquennial census represents a watershed in the development of local personnel without the aid of external advisors; it was the 1st census to be processed within the Cook Islands by means of a computer. Both the installation of the microcomputer and the processing of the 1981 census are counted as successes. This is due to a number of factors, namely: a long period of careful planning procedures which preceded the choice of system and installation; the enthusiasm of the local staff closely involved in the planning; the emphasis on training; tests of the equipment before installation by computer practitioners with knowledge of local needs and capacities; reasonable goal setting; and the supply, with the system, of adequate spare parts, tools, and maintenance and training manuals. Like many island populations, that of the Cook Islands is characterized by considerable instability as illustrated by the 1981 census results. As the rate and direction of population change varies from island to island, generalizations across the nation as a whole are difficult. For example, while some of the northern atolls experienced population decline during the latest intercensal period, most had their population increase in 1981. All the southern islands declined during the 1976-81 period at annual rates varying from 0.6% to 3.2%. The explanation for these patterns of change lies primarily in the different rates and direction of external migration on each island, as Cook Islanders are exempt from immigration regulations to New Zealand and Australia. In recent years, women have formed a

  12. Animal models in bariatric surgery--a review of the surgical techniques and postsurgical physiology.

    Science.gov (United States)

    Rao, Raghavendra S; Rao, Venkatesh; Kini, Subhash

    2010-09-01

    Bariatric surgery is considered the most effective current treatment for morbid obesity. Since the first publication of an article by Kremen, Linner, and Nelson, many experiments have been performed using animal models. The initial experiments used only malabsorptive procedures like intestinal bypass, which have now largely been abandoned. These experimental models have been used to assess feasibility and safety as well as to refine techniques particular to each procedure. We will discuss the surgical techniques and the postsurgical physiology of the four major current bariatric procedures (namely, Roux-en-Y gastric bypass, gastric banding, sleeve gastrectomy, and biliopancreatic diversion). We have also reviewed the anatomy and physiology of animal models. We have reviewed the literature and presented it such that it would be a reference to an investigator interested in animal experiments in bariatric surgery. Experimental animal models are further divided into two categories: large mammals that include dogs, cats, rabbits, and pigs, and small mammals that include rats and mice.

  13. A technique using a nonlinear helicopter model for determining trims and derivatives

    Science.gov (United States)

    Ostroff, A. J.; Downing, D. R.; Rood, W. J.

    1976-01-01

    A technique is described for determining the trims and quasi-static derivatives of a flight vehicle for use in a linear perturbation model; both the coupled and uncoupled forms of the linear perturbation model are included. Since this technique requires a nonlinear vehicle model, detailed equations with constants and nonlinear functions for the CH-47B tandem rotor helicopter are presented. Tables of trims and derivatives are included for airspeeds between -40 and 160 knots and rates of descent between ±10.16 m/sec (±2000 ft/min). As a verification, the calculated and referenced values of comparable trims, derivatives, and linear model poles are shown to have acceptable agreement.
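
    The general idea of extracting quasi-static derivatives from a nonlinear model can be illustrated with central finite differences about a trim point, as sketched below; the toy dynamics, step size, and function names are assumptions and are not the CH-47B equations of the report.

```python
import numpy as np

def quasi_static_derivatives(f, x_trim, u_trim, eps=1e-4):
    """Central-difference Jacobians of a nonlinear model x_dot = f(x, u) about a trim.

    Returns (A, B), the state and control derivative matrices of the linear
    perturbation model  delta_x_dot = A @ delta_x + B @ delta_u.
    """
    n, m = len(x_trim), len(u_trim)
    A = np.zeros((n, n))
    B = np.zeros((n, m))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        A[:, j] = (f(x_trim + dx, u_trim) - f(x_trim - dx, u_trim)) / (2 * eps)
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x_trim, u_trim + du) - f(x_trim, u_trim - du)) / (2 * eps)
    return A, B

# Toy nonlinear "vehicle" model standing in for the full helicopter equations.
def f(x, u):
    return np.array([x[1], -np.sin(x[0]) - 0.1 * x[1] + u[0]])

A, B = quasi_static_derivatives(f, x_trim=np.zeros(2), u_trim=np.zeros(1))
print(A, B, sep="\n")
```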

  14. Visualization of Three-Dimensional Nephron Structure With Microcomputed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Bentley,M.; Jorgensen, S.; Lerman, L.; Ritman, E.; Romero, J.

    2007-01-01

    The three-dimensional architecture of nephrons in situ and their interrelationship with other nephrons are difficult to visualize by microscopic methods. The present study uses microcomputed X-ray tomography (micro-CT) to visualize intact nephrons in situ. Rat kidneys were perfusion-fixed with buffered formalin and their vasculature was subsequently perfused with radiopaque silicone. Cortical tissue was stained en bloc with osmium tetroxide, embedded in plastic, scanned, and reconstructed at voxel resolutions of 6, 2, and 1 μm. At 6 μm resolution, large blood vessels and glomeruli could be visualized but nephrons and their lumens were small and difficult to visualize. Optimal images were obtained using a synchrotron radiation source at 2 μm resolution where nephron components could be identified, correlated with histological sections, and traced. Proximal tubules had large diameters and opaque walls, whereas distal tubules, connecting tubules, and collecting ducts had smaller diameters and less opaque walls. Blood vessels could be distinguished from nephrons by the luminal presence of radiopaque silicone. Proximal tubules were three times longer than distal tubules. Proximal and distal tubules were tightly coiled in the outer cortex but were loosely coiled in the middle and inner cortex. The connecting tubules had the narrowest diameters of the tubules and converged to form arcades that paralleled the radial vessels as they extended to the outer cortex. These results illustrate a potential use of micro-CT to obtain three-dimensional information about nephron architecture and nephron interrelationships, which could be useful in evaluating experimental tubular hypertrophy, atrophy, and necrosis.

  15. Visualization of three-dimensional nephron structure with microcomputed tomography.

    Science.gov (United States)

    Bentley, Michael D; Jorgensen, Steven M; Lerman, Lilach O; Ritman, Erik L; Romero, J Carlos

    2007-03-01

    The three-dimensional architecture of nephrons in situ and their interrelationship with other nephrons are difficult to visualize by microscopic methods. The present study uses microcomputed X-ray tomography (micro-CT) to visualize intact nephrons in situ. Rat kidneys were perfusion-fixed with buffered formalin and their vasculature was subsequently perfused with radiopaque silicone. Cortical tissue was stained en bloc with osmium tetroxide, embedded in plastic, scanned, and reconstructed at voxel resolutions of 6, 2, and 1 μm. At 6 μm resolution, large blood vessels and glomeruli could be visualized but nephrons and their lumens were small and difficult to visualize. Optimal images were obtained using a synchrotron radiation source at 2 μm resolution where nephron components could be identified, correlated with histological sections, and traced. Proximal tubules had large diameters and opaque walls, whereas distal tubules, connecting tubules, and collecting ducts had smaller diameters and less opaque walls. Blood vessels could be distinguished from nephrons by the luminal presence of radiopaque silicone. Proximal tubules were three times longer than distal tubules. Proximal and distal tubules were tightly coiled in the outer cortex but were loosely coiled in the middle and inner cortex. The connecting tubules had the narrowest diameters of the tubules and converged to form arcades that paralleled the radial vessels as they extended to the outer cortex. These results illustrate a potential use of micro-CT to obtain three-dimensional information about nephron architecture and nephron interrelationships, which could be useful in evaluating experimental tubular hypertrophy, atrophy, and necrosis.

  16. A microcomputer-controlled gas phase microreactor system

    Energy Technology Data Exchange (ETDEWEB)

    Morris, R.M.

    1983-08-01

    Although automated reactors are effective tools for studying a single type of reaction or optimizing catalyst performance, they may not be well suited for exploratory research. These reactors generally have several shortcomings. First, they may have limited versatility since they are usually designed with a single application in mind. Second, computer systems used for process control and data acquisition are often expensive and complex, so that once they are set up for a given application, it is quite difficult to adapt them for another. Because of these restrictions, experimental reactors are often operated manually, requiring a full-time operator to monitor operations and acquire data. This is a greater problem in laboratories where projects are often short-term, and the costs of setting up an automated reactor may outweigh the benefits of automation. For an automated reactor to be cost-effective in such an environment, both reactor hardware and control software must be versatile enough that they can be easily modified and adapted for different experiments. An automated gas-flow microreactor has been designed and constructed which is both inexpensive and flexible. The reactor is capable of performing three different types of experiments, 1) continuous reagent feed with analysis of the product stream, 2) pulsed-flow experiments, and 3) temperature-programmed desorption (TPD) and reaction (TPR). Conversion of the reactor from one configuration to another requires less than one hour. Process control and data acquisition are performed using an Apple II Plus microcomputer (Apple Computer Corp., Cupertino, Calif.) and an ISAAC interface device (Cyborg Corp., Newton, Mass.).

  17. Microcomputed tomography and shock microdeformation studies on shatter cones

    Science.gov (United States)

    Zaag, Patrice Tristan; Reimold, Wolf Uwe; Hipsley, Christy Anna

    2016-08-01

    One of the aspects of impact cratering that are still not fully understood is the formation of shatter cones and related fracturing phenomena. Yet, shatter cones have been applied as an impact-diagnostic criterion for decades without the role of shock waves and target rock defects in their formation ever having been elucidated. We have tested the application of the nondestructive microcomputed tomography (μCT) method to visualize the interior of shatter cones in order to possibly resolve links between fracture patterns and shatter cone surface features (striations and intervening "valleys"). Shatter-coned samples from different impact sites and in different lithologies were investigated for their μCT suitability, with a shatter cone in sandstone from the Serra da Cangalha impact structure (Brazil) remaining as the most promising candidate because of the fracture resolution achieved. To validate the obtained CT data, the scanned specimen was cut into three orthogonal sets of thin sections. Scans with 13 μm resolution were obtained. μCT scans and microscopic analysis unraveled an orientation of subplanar fractures and related fluid inclusion trails, and planar fracture (PF) orientations in the interior of shatter cones. Planar deformation features (PDF) were observed predominantly near the shatter cone surface. Previously undescribed varieties of feather features (FF), in the form of lamellae emanating from curviplanar and curved fractures, as well as an "arrowhead"-like FF development with microlamellae originating from both sides of a PF, were observed. The timing of shatter cone formation was investigated by establishing temporal relations to the generation of various shock microscopic effects. Shatter cones are, thus, generated post- or syn-formation of PF, FF, subplanar fractures, and PDF. The earliest possible time for shatter cone formation is during the late stage of the compressional phase, that is, shock wave passage, of an impact event.

  18. Modeling and comparative study of various detection techniques for FMCW LIDAR using optisystem

    Science.gov (United States)

    Elghandour, Ahmed H.; Ren, Chen D.

    2013-09-01

    In this paper we investigate different detection techniques, specifically direct detection, coherent heterodyne detection, and coherent homodyne detection, for an FMCW LIDAR system using the Optisystem package. Models for the target, the propagation channel, and the various detection techniques were developed in Optisystem, and a comparative study among the detection techniques for FMCW LIDAR systems was then carried out analytically and by simulation with the developed model. The performance of direct detection, heterodyne detection, and homodyne detection for the FMCW LIDAR system was calculated and simulated using the Optisystem package, and the simulated performance was checked against results from a MATLAB simulator. The results show that direct detection is sensitive to the intensity of the received electromagnetic signal and has the advantage of lower system complexity over the other detection architectures, at the expense of thermal noise being the dominant noise source and relatively poor sensitivity. In addition, much higher detection sensitivity can be achieved using coherent optical mixing, as performed by heterodyne and homodyne detection.
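
    The core FMCW mechanism, mixing the transmitted chirp with the delayed echo to obtain a beat frequency proportional to range, can be sketched in a few lines; the chirp parameters, target range, and sampling rate below are illustrative assumptions and the Optisystem/MATLAB models of the paper are not reproduced.

```python
import numpy as np

B, T, c = 150e6, 1e-3, 3e8          # sweep bandwidth (Hz), sweep time (s), speed of light
R = 600.0                            # assumed target range (m)
tau = 2 * R / c                      # round-trip delay
fs = 2e6                             # sampling rate (Hz)
t = np.arange(0, T, 1 / fs)

phase = lambda tt: np.pi * (B / T) * tt ** 2        # baseband linear-FM chirp phase
tx = np.cos(phase(t))
rx = np.cos(phase(t - tau))                         # delayed echo (attenuation ignored)

beat = tx * rx                                      # coherent mixing; the difference term
spectrum = np.abs(np.fft.rfft(beat * np.hanning(t.size)))   # dominates the spectrum
freqs = np.fft.rfftfreq(t.size, 1 / fs)
f_beat = freqs[np.argmax(spectrum[1:]) + 1]
print("estimated range (m):", f_beat * c * T / (2 * B))
```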

  19. Investigation of the Stability of POD-Galerkin Techniques for Reduced Order Model Development

    Science.gov (United States)

    2016-01-09

    Report documentation and figure-caption fragments only (author: Huang, C.). The report investigates approaches to mitigate the stability issues encountered in developing a reduced order model (ROM) for combustion response to specified excitations using the Euler equations. Figure caption fragment: CFD solutions comparison of Case A at x/L = 0.5 for cases in Table 4; the remaining three cases have multiple frequencies in the forcing function.
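
    For orientation, the basic POD-Galerkin construction named in the title, a reduced basis from the SVD of a snapshot matrix followed by projection of the full-order operator, is sketched below with synthetic data; the combustion-response and Euler-equation specifics of the report are not reproduced.

```python
import numpy as np

# Toy snapshot matrix: each column is one flow-field snapshot (synthetic data).
rng = np.random.default_rng(3)
n_dof, n_snap = 400, 60
modes_true = rng.standard_normal((n_dof, 3))
snapshots = modes_true @ rng.standard_normal((3, n_snap)) \
    + 1e-3 * rng.standard_normal((n_dof, n_snap))

# POD basis from the singular value decomposition of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 3
Phi = U[:, :r]                        # reduced basis (POD modes)

# Galerkin projection of a (here arbitrary) linear full-order operator A.
A = -np.eye(n_dof) + 0.01 * rng.standard_normal((n_dof, n_dof))
A_r = Phi.T @ A @ Phi                 # r x r reduced operator
print("reduced operator shape:", A_r.shape)
print("energy captured by first modes:", (s[:r] ** 2).sum() / (s ** 2).sum())
```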

  20. Stakeholder approach, Stakeholders mental model: A visualization test with cognitive mapping technique

    Directory of Open Access Journals (Sweden)

    Garoui Nassreddine

    2012-04-01

    Full Text Available The idea of this paper is to determine the mental models of actors in the firm with respect to the stakeholder approach of corporate governance. Cognitive maps are used to visualize these mental models and to show the ways of thinking about, and conceptualization of, the stakeholder approach. The paper takes a corporate governance perspective, discusses the stakeholder model, and applies a cognitive mapping technique.

  1. Spatio–temporal rain attenuation model for application to fade mitigation techniques

    OpenAIRE

    2004-01-01

    We present a new stochastic-dynamic model useful for the planning and design of gigahertz satellite communications using fade mitigation techniques. It is a generalization of the Maseng–Bakken model and targets dual-site, dual-frequency rain-attenuated satellite links. The outcome is a consistent and comprehensive model capable of yielding theoretical descriptions of: 1) long-term power spectral density of rain attenuation; 2) rain fade slope; 3) rain frequency scaling factor; 4) site diversity; a...

  2. Financial-Economic Time Series Modeling and Prediction Techniques – Review

    OpenAIRE

    2014-01-01

    Financial-economic time series are distinguished from other time series because they contain a portion of uncertainty. Because of this, statistical theory and methods play an important role in their analysis. Moreover, the external influence of various parameters on the values in the time series makes them non-linear, which in turn suggests the use of more complex techniques for their modeling. To cope with this challenging problem many researchers and scientists have developed various models a...

  3. Comparison of different uncertainty techniques in urban stormwater quantity and quality modelling.

    Science.gov (United States)

    Dotto, Cintia B S; Mannina, Giorgio; Kleidorfer, Manfred; Vezzaro, Luca; Henrichs, Malte; McCarthy, David T; Freni, Gabriele; Rauch, Wolfgang; Deletic, Ana

    2012-05-15

    Urban drainage models are important tools used by both practitioners and scientists in the field of stormwater management. These models are often conceptual and usually require calibration using local datasets. The quantification of the uncertainty associated with the models is a must, although it is rarely practiced. The International Working Group on Data and Models, which works under the IWA/IAHR Joint Committee on Urban Drainage, has been working on the development of a framework for defining and assessing uncertainties in the field of urban drainage modelling. A part of that work is the assessment and comparison of different techniques generally used in the uncertainty assessment of the parameters of water models. This paper compares a number of these techniques: the Generalized Likelihood Uncertainty Estimation (GLUE), the Shuffled Complex Evolution Metropolis algorithm (SCEM-UA), an approach based on a multi-objective auto-calibration (a multialgorithm, genetically adaptive multi-objective method, AMALGAM) and a Bayesian approach based on a simplified Markov Chain Monte Carlo method (implemented in the software MICA). To allow a meaningful comparison among the different uncertainty techniques, common criteria have been set for the likelihood formulation, the number of simulations, and the measure of uncertainty bounds. Moreover, all the uncertainty techniques were implemented for the same case study, in which the same stormwater quantity and quality model was used alongside the same dataset. The comparison results for a well-posed rainfall/runoff model showed that the four methods provide similar probability distributions of model parameters, and model prediction intervals. For the ill-posed water quality model the differences between the results were much wider, and the paper provides the specific advantages and disadvantages of each method. In relation to computational efficiency (i.e. number of iterations required to generate the probability
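
    A stripped-down sketch of one of the compared techniques, GLUE, is shown below: Monte Carlo parameter sampling, an informal likelihood, selection of behavioural parameter sets, and percentile-based prediction bounds. The toy model, likelihood formulation, and acceptance threshold are assumptions, not those used in the study.

```python
import numpy as np

def model(theta, x):
    """Toy rainfall/runoff-style model standing in for the stormwater model."""
    a, b = theta
    return a * x / (b + x)

rng = np.random.default_rng(4)
x = np.linspace(0.1, 10, 50)
obs = model((2.0, 1.5), x) + 0.05 * rng.standard_normal(x.size)

# GLUE: Monte Carlo sampling, informal likelihood, behavioural subset, prediction bounds.
n = 5000
thetas = np.column_stack([rng.uniform(0.5, 4, n), rng.uniform(0.1, 5, n)])
sims = np.array([model(th, x) for th in thetas])
sse = ((sims - obs) ** 2).sum(axis=1)
likelihood = np.exp(-sse / sse.min())     # informal likelihood measure
behavioural = likelihood > 0.05           # acceptance threshold (arbitrary)

lo, hi = [np.percentile(sims[behavioural], q, axis=0) for q in (5, 95)]
print("behavioural sets:", behavioural.sum())
print("mean width of the 90% uncertainty band:", (hi - lo).mean())
```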

  4. Technique for Early Reliability Prediction of Software Components Using Behaviour Models

    Science.gov (United States)

    Ali, Awad; N. A. Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad

    2016-01-01

    Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction. PMID:27668748

  5. Technique for Early Reliability Prediction of Software Components Using Behaviour Models.

    Science.gov (United States)

    Ali, Awad; N A Jawawi, Dayang; Adham Isa, Mohd; Imran Babar, Muhammad

    Behaviour models are the most commonly used input for predicting the reliability of a software system at the early design stage. A component behaviour model reveals the structure and behaviour of the component during the execution of system-level functionalities. There are various challenges related to component reliability prediction at the early design stage based on behaviour models. For example, most of the current reliability techniques do not provide fine-grained sequential behaviour models of individual components and fail to consider the loop entry and exit points in the reliability computation. Moreover, some of the current techniques do not tackle the problem of operational data unavailability and the lack of analysis results that can be valuable for software architects at the early design stage. This paper proposes a reliability prediction technique that, pragmatically, synthesizes system behaviour in the form of a state machine, given a set of scenarios and corresponding constraints as input. The state machine is utilized as a base for generating the component-relevant operational data. The state machine is also used as a source for identifying the nodes and edges of a component probabilistic dependency graph (CPDG). Based on the CPDG, a stack-based algorithm is used to compute the reliability. The proposed technique is evaluated by a comparison with existing techniques and the application of sensitivity analysis to a robotic wheelchair system as a case study. The results indicate that the proposed technique is more relevant at the early design stage compared to existing works, and can provide a more realistic and meaningful prediction.
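
    The flavour of computing system reliability from a component dependency graph can be sketched as below, where per-component reliabilities are combined along transition probabilities of a small DAG; this follows the general Cheung-style idea and is not the stack-based CPDG algorithm of the paper, and the graph and numbers are made up.

```python
# node -> list of (successor, transition probability); reliabilities per component.
graph = {
    "A": [("B", 0.6), ("C", 0.4)],
    "B": [("D", 1.0)],
    "C": [("D", 1.0)],
    "D": [],                       # terminal component
}
reliability = {"A": 0.99, "B": 0.95, "C": 0.97, "D": 0.98}

def system_reliability(node, memo=None):
    """Probability of executing successfully from `node` to a terminal component."""
    if memo is None:
        memo = {}
    if node in memo:
        return memo[node]
    r = reliability[node]
    successors = graph[node]
    if successors:  # weight successor reliabilities by their transition probabilities
        r *= sum(p * system_reliability(nxt, memo) for nxt, p in successors)
    memo[node] = r
    return r

print("system reliability from A:", round(system_reliability("A"), 4))
```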

  6. Combined rock-physical modelling and seismic inversion techniques for characterisation of stacked sandstone reservoir

    NARCIS (Netherlands)

    Justiniano, A.; Jaya, Y.; Diephuis, G.; Veenhof, R.; Pringle, T.

    2015-01-01

    The objective of the study is to characterise the Triassic massive stacked sandstone deposits of the Main Buntsandstein Subgroup at Block Q16 located in the West Netherlands Basin. The characterisation was carried out through combining rock-physics modelling and seismic inversion techniques. The app

  7. Modeling and teaching techniques for conceptual and logical relational database design.

    Science.gov (United States)

    Thompson, Cheryl Bagley; Sward, Katherine

    2005-10-01

    This paper proposes a series of techniques to be used in teaching database design. Common ERD notations are discussed. The authors developed an ERD notation, adapted from the Unified Modeling Language, which facilitates student learning of the database design process. The paper presents a specific step by step process for representing the ERD components as tables and for normalizing the resulting set of tables.

  8. The commercial use of segmentation and predictive modeling techniques for database marketing in the Netherlands

    NARCIS (Netherlands)

    Verhoef, PC; Spring, PN; Hoekstra, JC; Leeflang, PSH

    2003-01-01

    Although the application of segmentation and predictive modeling is an important topic in the database marketing (DBM) literature, no study has yet investigated the extent of adoption of these techniques. We present the results of a Dutch survey involving 228 database marketing companies. We find th

  10. Combined Rock-physical Modelling and Seismic Inversion Techniques for Characterisation of the Posidonia Shale Formation

    NARCIS (Netherlands)

    Justiniano, A.; Jaya, M.; Diephuis, G.

    2015-01-01

    The objective of this study is to characterise the Jurassic Posidonia Shale Formation at Block Q16 located in the West Netherlands Basin. The characterisation was carried out through combining rock-physics modelling and seismic inversion techniques. The results show that the Posidonia Shale Formatio

  11. Prediction of Monthly Summer Monsoon Rainfall Using Global Climate Models Through Artificial Neural Network Technique

    Science.gov (United States)

    Nair, Archana; Singh, Gurjeet; Mohanty, U. C.

    2017-08-01

    The monthly prediction of summer monsoon rainfall is very challenging because of its complex and chaotic nature. In this study, a non-linear technique known as Artificial Neural Network (ANN) has been employed on the outputs of Global Climate Models (GCMs) to bring out the vagaries inherent in monthly rainfall prediction. The GCMs that are considered in the study are from the International Research Institute (IRI) (2-tier CCM3v6) and the National Centre for Environmental Prediction (Coupled-CFSv2). The ANN technique is applied on different ensemble members of the individual GCMs to obtain monthly scale prediction over India as a whole and over its spatial grid points. In the present study, a double-cross-validation and simple randomization technique was used to avoid the over-fitting during training process of the ANN model. The performance of the ANN-predicted rainfall from GCMs is judged by analysing the absolute error, box plots, percentile and difference in linear error in probability space. Results suggest that there is significant improvement in prediction skill of these GCMs after applying the ANN technique. The performance analysis reveals that the ANN model is able to capture the year to year variations in monsoon months with fairly good accuracy in extreme years as well. ANN model is also able to simulate the correct signs of rainfall anomalies over different spatial points of the Indian domain.

  12. Particle Markov Chain Monte Carlo Techniques of Unobserved Component Time Series Models Using Ox

    DEFF Research Database (Denmark)

    Nonejad, Nima

    This paper details Particle Markov chain Monte Carlo techniques for analysis of unobserved component time series models using several economic data sets. PMCMC combines the particle filter with the Metropolis-Hastings algorithm. Overall PMCMC provides a very compelling, computationally fast...

  14. Surgical technique: establishing a pre-clinical large animal model to test aortic valve leaflet substitute

    Science.gov (United States)

    Knirsch, Walter; Cesarovic, Niko; Krüger, Bernard; Schmiady, Martin; Frauenfelder, Thomas; Frese, Laura; Dave, Hitendu; Hoerstrup, Simon Philipp; Hübler, Michael

    2016-01-01

    To overcome current limitations of valve substitutes and tissue substitutes, the technology of tissue engineering (TE) continues to offer new perspectives in congenital cardiac surgery. We report our experiences and results implanting a decellularized TE patch in nine sheep in orthotopic position as an aortic valve leaflet substitute. Establishing the animal model, feasibility, cardiopulmonary bypass issues and operative technique are highlighted. PMID:28149571

  15. Removal of filling materials from oval-shaped canals using laser irradiation: a micro-computed tomographic study.

    Science.gov (United States)

    Keleş, Ali; Arslan, Hakan; Kamalak, Aliye; Akçay, Merve; Sousa-Neto, Manoel D; Versiani, Marco Aurélio

    2015-02-01

    The aim of this study was to assess the efficacy of lasers in removing filling remnants from oval-shaped canals after retreatment procedures with rotary instruments using micro-computed tomographic imaging. The root canals of 42 mandibular canines were prepared and obturated using the warm vertical compaction technique. Retreatment was performed with rotary instruments, and the specimens were distributed in 3 groups (n = 14) according to the laser device used in a later stage of the retreatment procedure: Er:YAG, Er:YAG laser-based photon-induced photoacoustic streaming, and Nd:YAG. The specimens were scanned in a micro-computed tomographic device after root canal filling and each stage of retreatment at a resolution of 13.68 μm. The percentage differences of the remaining filling material before and after laser application within and between groups were statistically compared using the paired sample t test and 1-way analysis of variance test, respectively. Significance level was set at 5%. Overall, filling residues were located mainly in the apical third and into canal irregularities after the retreatment procedures. After using rotary instruments, the mean percentage volume of the filling remnants ranged from 13%-16%, with no statistically significant difference between groups (P > .05). Within groups, additional laser application led to a significant reduction in the amount of the remaining filling materials (P < .05), although none of the techniques completely removed the filling materials. The additional use of lasers improved the removal of filling material after the retreatment procedure with rotary instruments. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  16. Numerical modelling of radon-222 entry into houses: An outline of techniques and results

    DEFF Research Database (Denmark)

    Andersen, C.E.

    2001-01-01

    Numerical modelling is a powerful tool for studies of soil gas and radon-222 entry into houses. It is the purpose of this paper to review some main techniques and results. In the past, modelling has focused on Darcy flow of soil gas (driven by indoor–outdoor pressure differences) and combined diffusive and advective transport of radon. Models of different complexity have been used. The simpler ones are finite-difference models with one or two spatial dimensions. The more complex models allow for full three-dimensional and time dependency. Advanced features include: soil heterogeneity, anisotropy, fractures, moisture, non-uniform soil temperature, non-Darcy flow of gas, and flow caused by changes in the atmospheric pressure. Numerical models can be used to estimate the importance of specific factors for radon entry. Models are also helpful when results obtained in special laboratory or test structure
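
    A very small example of the simpler class of models mentioned above, a one-dimensional finite-difference solution of steady diffusive-advective radon transport with decay and generation in a soil column, is sketched below; all parameter values and boundary conditions are illustrative assumptions rather than values from the reviewed studies.

```python
import numpy as np

L, n = 2.0, 201                  # column depth (m), number of grid points
z = np.linspace(0, L, n)
dz = z[1] - z[0]
D = 2e-6                         # effective diffusion coefficient (m^2/s), illustrative
v = 1e-6                         # pore-air (Darcy-driven) velocity (m/s), illustrative
lam = 2.1e-6                     # Rn-222 decay constant (1/s)
G = 1e-2                         # radon generation rate (Bq m^-3 s^-1), illustrative

# Discretize  D*C'' - v*C' - lam*C = -G  on interior nodes.
A = np.zeros((n, n))
b = np.full(n, -G)
for i in range(1, n - 1):
    A[i, i - 1] = D / dz**2 + v / (2 * dz)
    A[i, i] = -2 * D / dz**2 - lam
    A[i, i + 1] = D / dz**2 - v / (2 * dz)
A[0, 0] = 1.0; b[0] = 0.0                          # open surface: C = 0
A[-1, -1], A[-1, -2] = 1.0, -1.0; b[-1] = 0.0      # deep soil: zero gradient
C = np.linalg.solve(A, b)
print("radon concentration at depth (Bq/m^3):", round(C[-1], 1))
```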

  17. 48 CFR 1552.239-103 - Acquisition of Energy Star Compliant Microcomputers, Including Personal Computers, Monitors and...

    Science.gov (United States)

    2010-10-01

    ... Compliant Microcomputers, Including Personal Computers, Monitors and Printers. 1552.239-103 Section 1552.239... Star Compliant Microcomputers, Including Personal Computers, Monitors and Printers. As prescribed in... Personal Computers, Monitors, and Printers (APR 1996) (a) The Contractor shall provide computer products...

  18. Finding of Correction Factor and Dimensional Error in Bio-AM Model by FDM Technique

    Science.gov (United States)

    Manmadhachary, Aiamunoori; Ravi Kumar, Yennam; Krishnanand, Lanka

    2016-06-01

    Additive Manufacturing (AM) is a rapid manufacturing process in which input data can be provided from various sources, such as 3-Dimensional (3D) Computer Aided Design (CAD), Computed Tomography (CT), Magnetic Resonance Imaging (MRI) and 3D scanner data. From CT/MRI data, Biomedical Additive Manufacturing (Bio-AM) models can be manufactured. The Bio-AM model gives a better lead on preplanning of oral and maxillofacial surgery. However, manufacturing an accurate Bio-AM model remains an unsolved problem. The current paper quantifies the error between the Standard Triangle Language (STL) model and the Bio-AM model of a dry mandible and determines a correction factor for Bio-AM models produced with the Fused Deposition Modelling (FDM) technique. In the present work, dry mandible CT images were acquired with a CT scanner and converted into a 3D CAD model in the form of an STL model. The data were then sent to an FDM machine for fabrication of the Bio-AM model. The difference between the Bio-AM and STL model dimensions is considered the dimensional error, and the ratio of the STL to Bio-AM model dimensions is considered the correction factor. This correction factor helps to fabricate AM models with accurate dimensions of the patient anatomy. These true-dimensional Bio-AM models increase the safety and accuracy in pre-planning of oral and maxillofacial surgery. The correction factor for the Dimension SST 768 FDM machine is 1.003 and the dimensional error is limited to 0.3 %.
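
    The two quantities defined above can be illustrated with a few lines of arithmetic; the measured dimensions below are made-up numbers, not the paper's data.

```python
# Dimensional error and correction factor from paired STL and Bio-AM measurements.
stl_dims = [45.20, 30.10, 12.05]    # reference dimensions from the STL model (mm), hypothetical
biam_dims = [45.05, 30.02, 12.01]   # corresponding dimensions on the Bio-AM part (mm), hypothetical

for d_stl, d_am in zip(stl_dims, biam_dims):
    error_pct = 100 * (d_am - d_stl) / d_stl
    correction = d_stl / d_am        # ratio of STL to Bio-AM dimension
    print(f"error = {error_pct:+.2f} %, correction factor = {correction:.4f}")
```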

  19. Top-level modeling of an ALS system utilizing object-oriented techniques

    Science.gov (United States)

    Rodriguez, L. F.; Kang, S.; Ting, K. C.

    The possible configuration of an Advanced Life Support (ALS) System capable of supporting human life for long-term space missions continues to evolve as researchers investigate potential technologies and configurations. To facilitate the decision process, the development of acceptable, flexible, and dynamic mathematical computer modeling tools capable of system level analysis is desirable. Object-oriented techniques have been adopted to develop a dynamic top-level model of an ALS system. This approach has several advantages; among these, object-oriented abstractions of systems are inherently modular in architecture. Thus, models can initially be somewhat simplistic, while allowing for adjustments and improvements. In addition, by coding the model in Java, the model can be implemented via the World Wide Web, greatly encouraging the utilization of the model. Systems analysis is further enabled with the utilization of a readily available backend database containing information supporting the model. The subsystem models of the ALS system model include Crew, Biomass Production, Waste Processing and Resource Recovery, Food Processing and Nutrition, and the Interconnecting Space. Each subsystem model and an overall model have been developed. Presented here is the procedure utilized to develop the modeling tool, the vision of the modeling tool, and the current focus for each of the subsystem models.
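
    The modularity argument can be made concrete with a minimal object-oriented sketch of a top-level system composed of subsystem objects; the original tool was written in Java and backed by a database, so the Python classes and the toy mass-balance logic below are purely illustrative, with only the subsystem names taken from the record.

```python
class Subsystem:
    def __init__(self, name):
        self.name = name

    def step(self, state):
        """Advance the subsystem by one time step, updating the shared state dict."""
        raise NotImplementedError

class Crew(Subsystem):
    def step(self, state):
        state["co2"] = state.get("co2", 0.0) + 1.0    # crew produces CO2 (arbitrary units)
        state["food"] = state.get("food", 0.0) - 1.0  # and consumes food

class BiomassProduction(Subsystem):
    def step(self, state):
        absorbed = min(state.get("co2", 0.0), 0.8)
        state["co2"] -= absorbed                      # plants take up CO2
        state["food"] = state.get("food", 0.0) + absorbed

class ALSSystemModel:
    """Top-level model: iterates over interchangeable subsystem modules."""
    def __init__(self, subsystems):
        self.subsystems = subsystems
        self.state = {"co2": 0.0, "food": 10.0}

    def run(self, steps):
        for _ in range(steps):
            for s in self.subsystems:
                s.step(self.state)
        return self.state

model = ALSSystemModel([Crew("crew"), BiomassProduction("biomass")])
print(model.run(steps=5))
```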

  20. Nonlinear modelling of polymer electrolyte membrane fuel cell stack using nonlinear cancellation technique

    Energy Technology Data Exchange (ETDEWEB)

    Barus, R. P. P., E-mail: rismawan.ppb@gmail.com [Engineering Physics, Faculty of Industrial Technology, Institut Teknologi Bandung, Jalan Ganesa 10 Bandung and Centre for Material and Technical Product, Jalan Sangkuriang No. 14 Bandung (Indonesia); Tjokronegoro, H. A.; Leksono, E. [Engineering Physics, Faculty of Industrial Technology, Institut Teknologi Bandung, Jalan Ganesa 10 Bandung (Indonesia); Ismunandar [Chemistry Study, Faculty of Mathematics and Science, Institut Teknologi Bandung, Jalan Ganesa 10 Bandung (Indonesia)

    2014-09-25

    Fuel cells are promising new energy conversion devices that are friendly to the environment. A set of control systems is required in order to operate a fuel cell based power plant system optimally. For the purpose of control system design, an accurate fuel cell stack model that describes the dynamics of the real system is needed. Currently, linear models are widely used for fuel cell stack control purposes, but they are valid only over a narrow operation range, while nonlinear models lead to nonlinear control implementations that are more complex and computationally demanding. In this research, a nonlinear cancellation technique is used to transform a nonlinear model into a linear form while maintaining the nonlinear characteristics. The transformation is done by replacing the input of the original model with a certain virtual input that has a nonlinear relationship with the original input. The equivalence of the two models is then tested by running a series of simulations. Input variations of H2, O2 and H2O as well as the disturbance input I (current load) are studied by simulation. The error between the proposed model and the original nonlinear model is less than 1 %. Thus we can conclude that the nonlinear cancellation technique can be used to represent a fuel cell nonlinear model in a simple linear form while maintaining the nonlinear characteristics and therefore retaining the wide operation range.
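
    The substitution of a virtual input can be illustrated with a toy system that is nonlinear in its physical input; the dynamics below are an assumption chosen only to show the transformation and are not the fuel cell stack model of the paper.

```python
import numpy as np

# Nonlinear-cancellation sketch: a model nonlinear in the physical input u is rewritten
# as a model linear in the virtual input v = ln(u); u is recovered as u = exp(v).
a, k, dt = 0.5, 2.0, 0.01

def f_nonlinear(x, u):
    return -a * x + k * np.log(u)    # nonlinear in the physical input u

def f_linear(x, v):
    return -a * x + k * v            # linear in the virtual input v

x1 = x2 = 0.0
for step in range(1000):
    u = 1.5 + 0.5 * np.sin(0.01 * step)   # physical input trajectory
    v = np.log(u)                         # virtual input that cancels the nonlinearity
    x1 += dt * f_nonlinear(x1, u)         # original nonlinear model
    x2 += dt * f_linear(x2, v)            # equivalent linear-in-v model
print("max discrepancy between the two models:", abs(x1 - x2))
```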

  1. Meso-damage modelling of polymer based particulate composites using finite element technique

    Science.gov (United States)

    Tsui, Chi Pong

    Developing a new particulate polymer composite (PPC) with desired mechanical properties is usually accomplished by an experimental trial-and-error approach. A new technique, which predicts the damage mechanism and its effects on the mechanical properties of PPC, has been proposed. This meso-mechanical modelling technique, which offers a means to bridge the micro-damage mechanism and the macro-structural behaviour, has been implemented in a finite element code. A three-dimensional finite element meso-cell model has been designed and constructed to simulate the damage mechanism of PPC. The meso-cell model consists of a micro-particle, an interface, and a matrix. The initiation of the particle/polymer matrix debonding process has been predicted on the basis of a tensile criterion. By considering the meso-cell model as a representative volume element (RVE), the effects of damage on the macro-structural constitutive behaviour of PPC have been determined. An experimental investigation has been made on glass beads (GB) reinforced polyphenylene oxides (PPO) for verification of the meso-cell model and the meso-mechanical finite element technique. The predicted constitutive relation has been found to be in good agreement with the experimental results. The results of the in-situ microscopic test also verify the correctness of the meso-cell model. The application of the meso-mechanical finite element modelling technique has been extended to a macro-structural analysis to simulate the response of an engineering structure made of PPC under a static load. In the simulation, a damage variable has been defined in terms of the computational results of the cell model in meso-scale. Hence, the damage-coupled constitutive relation of the GB/PPO composite could be derived. A user-defined subroutine VUMAT in FORTRAN language describing the damage-coupled constitutive behaviour has then been incorporated into the ABAQUS finite element code. On a macro-scale, the ABAQUS finite element code

  2. Effects of industrial microcomputer use in control systems for machines and conveyors in surface mining

    Energy Technology Data Exchange (ETDEWEB)

    Borczyk, Z.; Rosseger, A. (Poltegor-Instytut, Wroclaw (Poland))

    1991-01-01

    Evaluates use of the ComPan Pzaz microcomputers manufactured in Poland for control of bucket wheel excavators without thrust action and for belt conveyors used in brown coal surface mines in Poland. The microcomputer consists of 14 standard microprocessor controller modules and 4 specialized modules. Structure of the control system, types of operations controlled by the system and effects of computerized control on safety and efficiency of operation of bucket wheel excavators are analyzed. Use of the ComPan Pzaz microcomputers for control of steep-angle conveyor systems (with a drive power from 1,000 kW to 4,000 kW) is evaluated. The system controls conveyor start-up and conveyor braking. Selected aspects of ComPan Pzaz reliability and service are also discussed. 2 refs.

  3. Nondestructive material characterization of meteorites with synchrotron-based high energy x-ray phase micro-computed tomography

    Science.gov (United States)

    Liu, Huiqiang; Xiao, Tiqiao; Xie, Honglan; Fu, Yanan; Zhang, Xueliang; Fan, Xiaoxi

    2017-02-01

    Synchrotron-radiation-based x-ray propagation-based micro-computed tomography (SRPCT) has been widely used to nondestructively access 3D structural information in many fields over the last decade. However, for strongly absorbing objects composed of materials with small density differences, the conventional SRPCT technique fails to provide high-contrast images of the objects' characteristic features, apart from edge enhancement at interfaces or sample boundaries. In this study, we successfully applied the SRPCT technique with phase retrieval, using the high-energy x-ray phase-attenuation-duality (PAD) algorithm, to the nondestructive material characterization of invaluable meteorite samples, owing to the greatly enhanced phase contrast between different bulk material areas compared with conventional SRPCT on an equal-dose basis. Our experimental results demonstrated that the PAD-SRPCT technique is superior to the conventional SRPCT technique for accessing the density and structure distributions of different meteorite compositions with high density resolution, owing to its striking contrast-to-noise ratio (CNR). In addition, a new mass-density measurement method was presented to estimate the mass density of different compositions in the meteorite sample based on the calibration of the imaging system.

  4. Comparison of optical coherence tomography, microcomputed tomography, and histology at a three-dimensionally imaged trabecular bone sample

    Science.gov (United States)

    Kasseck, Christoph; Kratz, Marita; Torcasio, Antonia; Gerhardt, Nils C.; van Lenthe, G. Harry; Gambichler, Thilo; Hoffmann, Klaus; Jones, David B.; Hofmann, Martin R.

    2010-07-01

    We investigate optical coherence tomography (OCT) as a method for imaging bone. The OCT images are compared directly to those of the standard methods of bone histology and microcomputed tomography (μCT) on a single, fixed human femoral trabecular bone sample. An advantage of OCT over bone histology is its noninvasive nature. OCT also images the lamellar structure of trabeculae at slightly higher contrast than normal bone histology. While μCT visualizes the trabecular framework of the whole sample, OCT can additionally image cells, with a penetration depth limited to approximately 1 mm. The most significant advantage of OCT, however, is the absence of toxic effects (no ionizing radiation), i.e., continuous images may be made and individual cell tracking may be performed. The penetration depth of OCT, however, limits its use to small animal models and small bone organ cultures.

  5. Modeling and Control of a Photovoltaic Energy System Using the State-Space Averaging Technique

    Directory of Open Access Journals (Sweden)

    Mohd S. Jamri

    2010-01-01

    Full Text Available Problem statement: This study presented the modeling and control of a stand-alone Photovoltaic (PV) system using the state-space averaging technique. Approach: The PV module was modeled based on the parameters obtained from a commercial PV data sheet, while the state-space method was used to model the power converter. A DC-DC boost converter was chosen to step up the input DC voltage of the PV module, while a DC-AC single-phase full-bridge square-wave inverter was chosen to convert the DC output of the boost converter into AC. The integrated state-space model was simulated under constant and varying solar irradiance and temperature. In addition, a maximum power point tracking method was also included in the model to ensure optimum use of the PV module. A circuitry simulation was performed under the same test conditions in order to validate the state-space model. Results: The state-space averaging model yielded performance similar to that produced by the circuitry simulation in terms of the voltage, current and power generated. Conclusion/Recommendations: The state-space averaging technique is simple to implement in the modeling and control of either simple or complex systems, and yields performance similar to that of the circuitry method.
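
    The state-space averaging step for the boost converter stage can be sketched as below: the switch-on and switch-off state matrices are weighted by the duty cycle and the averaged model is integrated in time. Component values, the duty cycle, and the ideal-converter assumptions are illustrative, and the PV source, MPPT, and inverter stages of the study are omitted.

```python
import numpy as np

# Ideal DC-DC boost converter, states x = [iL, vC]; illustrative parameters.
Vin, Lind, C, R, d = 17.0, 1e-3, 470e-6, 20.0, 0.4

A_on = np.array([[0.0, 0.0],
                 [0.0, -1.0 / (R * C)]])          # switch closed
A_off = np.array([[0.0, -1.0 / Lind],
                  [1.0 / C, -1.0 / (R * C)]])     # switch open
B = np.array([1.0 / Lind, 0.0])                   # same input matrix in both states

A_avg = d * A_on + (1.0 - d) * A_off              # duty-cycle-weighted average model

x = np.zeros(2)
dt = 5e-6
for _ in range(60_000):                           # integrate the averaged model for 0.3 s
    x = x + dt * (A_avg @ x + B * Vin)
print("steady-state output voltage:", round(x[1], 2),
      "V (ideal boost ratio gives Vin/(1-d) =", round(Vin / (1 - d), 2), "V)")
```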

  6. Studies on Pumice Lightweight Aggregate Concrete with Quarry Dust Using Mathematical Modeling Aid of ACO Techniques

    Directory of Open Access Journals (Sweden)

    J. Rex

    2016-01-01

    Full Text Available Lightweight aggregate is an aggregate that weighs less than the usual rock aggregate, and quarry dust is a rock particle used in the concrete for the experimentation. The main intention of the proposed technique is to frame a mathematical model with the aid of optimization techniques. The mathematical modeling is done to minimize the cost and time consumed in extending the real-time experiment. The proposed mathematical model is used to predict four output parameters: compressive strength (MPa), split tensile strength (MPa), flexural strength (MPa), and deflection (mm). Here, the modeling is carried out with three different optimization techniques, namely genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO), with 80% of the experimental data used for training and the remaining 20% for validation. Finally, in testing, the error value is minimized and the performance obtained with ACO for the parameters compressive strength, split tensile strength, flexural strength, and deflection is 91%, 98%, 87%, and 94% of the predicted values, respectively, in the mathematical modeling.

  7. The feasibility of computational modelling technique to detect the bladder cancer.

    Science.gov (United States)

    Keshtkar, Ahmad; Mesbahi, Asghar; Rasta, S H; Keshtkar, Asghar

    2010-01-01

    A numerical technique, finite element analysis (FEA), was used to model the electrical properties, namely the bioimpedance, of the bladder tissue in order to predict bladder cancer. The model results showed that normal bladder tissue has significantly higher impedance than malignant tissue, which was the opposite of the impedance measurements, i.e. the experimental results. This difference can be explained by the effects of inflammation and oedema on the urothelium and by the property of the bladder as a distensible organ. Furthermore, the different current distributions inside the bladder tissue (in its histological layers) in normal and malignant cases and, finally, the different pressures applied over the bladder tissue can cause different impedances for the bladder tissue. It is believed that further studies have to be carried out to characterise the human bladder tissue using electrical impedance measurement and modelling techniques.

  8. Finite-element-model updating using computational intelligence techniques applications to structural dynamics

    CERN Document Server

    Marwala, Tshilidzi

    2010-01-01

    Finite element models (FEMs) are widely used to understand the dynamic behaviour of various systems. FEM updating allows FEMs to be tuned better to reflect measured data and may be conducted using two different statistical frameworks: the maximum likelihood approach and Bayesian approaches. Finite Element Model Updating Using Computational Intelligence Techniques applies both strategies to the field of structural mechanics, an area vital for aerospace, civil and mechanical engineering. Vibration data is used for the updating process. Following an introduction a number of computational intelligence techniques to facilitate the updating process are proposed; they include: • multi-layer perceptron neural networks for real-time FEM updating; • particle swarm and genetic-algorithm-based optimization methods to accommodate the demands of global versus local optimization models; • simulated annealing to put the methodologies into a sound statistical basis; and • response surface methods and expectation m...

  9. A New Profile Learning Model for Recommendation System based on Machine Learning Technique

    Directory of Open Access Journals (Sweden)

    Shereen H. Ali

    2016-03-01

    Full Text Available Recommender systems (RSs) have been used to successfully address the information overload problem by providing personalized and targeted recommendations to the end users. RSs are software tools and techniques providing suggestions for items to be of use to a user; hence, they typically apply techniques and methodologies from Data Mining. The main contribution of this paper is to introduce a new user profile learning model to promote the recommendation accuracy of vertical recommendation systems. The proposed profile learning model employs the vertical classifier that has been used in the multi-classification module of the Intelligent Adaptive Vertical Recommendation (IAVR) system to discover the user’s area of interest, and then build the user’s profile accordingly. Experimental results have proven the effectiveness of the proposed profile learning model, which accordingly will promote the recommendation accuracy.

  10. A Multi-Model Reduction Technique for Optimization of Coupled Structural-Acoustic Problems

    DEFF Research Database (Denmark)

    Creixell Mediante, Ester; Jensen, Jakob Søndergaard; Brunskog, Jonas;

    2016-01-01

    Finite Element models of structural-acoustic coupled systems can become very large for complex structures with multiple connected parts. Optimization of the performance of the structure based on harmonic analysis of the system requires solving the coupled problem iteratively and for several...... frequencies, which can become highly time consuming. Several modal-based model reduction techniques for structure-acoustic interaction problems have been developed in the literature. The unsymmetric nature of the pressure-displacement formulation of the problem poses the question of how the reduction modal...... base should be formed, given that the modal vectors are not orthogonal due to the asymmetry of the system matrices. In this paper, a multi-model reduction (MMR) technique for structure-acoustic interaction problems is developed. In MMR, the reduction base is formed with the modal vectors of a family...

  11. Ex Vivo Aneurysm models mimicking real cases for the preoperative training of the clipping technique

    Directory of Open Access Journals (Sweden)

    Martin D.

    2016-06-01

    Full Text Available Training in a specialty like cerebrovascular neurosurgery is becoming more and more difficult, as access to training is limited by the increasing number of neurosurgical departments and the lack of expert centers for specific pathologies. This is why increased investment in experimental training is seen in many centers worldwide. The best models for training the clipping technique are ex vivo aneurysm models on cadaveric heads, animal models, or augmented reality models. We present several ex vivo models of aneurysms mimicking ACoA, MCA bifurcation, and basilar tip aneurysms using a pulsed continuous perfusion system. Clipping training on aneurysm models is an invaluable tool both for residents and for specialists with a special interest in cerebrovascular surgery.

  12. Velocity Modeling and Inversion Techniques for Locating Microseismic Events in Unconventional Reservoirs

    Institute of Scientific and Technical Information of China (English)

    Jianzhong Zhang; Han Liu; Zhihui Zou; Zhonglai Huang

    2015-01-01

    A velocity model is an important factor influencing microseismic event locations. We review the velocity modeling and inversion techniques for locating microseismic events in exploration for unconventional oil and gas reservoirs. We first describe the geological and geophysical characteristics of reservoir formations related to hydraulic fracturing in terms of heterogeneity, anisotropy, and variability; then discuss the influences of velocity estimation, the anisotropy model, and their time-lapse changes on the accuracy of microseismic event locations; and then survey some typical methods for building velocity models used in locating events. We conclude that the three entangled physical attributes of reservoirs make microseismic monitoring very challenging. Uncertainties in the velocity model, and ignoring its anisotropy and its variation during hydraulic fracturing, can cause systematic mislocations of microseismic events that are unacceptable in microseismic monitoring. We therefore propose some potential ways of building accurate velocity models.

  13. Constructing an Urban Population Model for Medical Insurance Scheme Using Microsimulation Techniques

    Directory of Open Access Journals (Sweden)

    Linping Xiong

    2012-01-01

    Full Text Available China launched a pilot project of medical insurance reform in 79 cities in 2007 to cover urban nonworking residents. In this paper, an urban population model was created for China’s medical insurance scheme using microsimulation techniques. The model clarifies for policy makers the population distributions of the different groups of potential urban residents entering the medical insurance scheme. The income trends of individual and family units were also obtained. These factors are essential for the challenging policy decisions involved in balancing the long-term financial sustainability of the medical insurance scheme.

  14. Constructing an urban population model for medical insurance scheme using microsimulation techniques.

    Science.gov (United States)

    Xiong, Linping; Zhang, Lulu; Tang, Weidong; Ma, Yuqin

    2012-01-01

    China launched a pilot project of medical insurance reform in 79 cities in 2007 to cover urban nonworking residents. In this paper, an urban population model was created for China's medical insurance scheme using microsimulation techniques. The model clarifies for policy makers the population distributions of the different groups of potential urban residents entering the medical insurance scheme. The income trends of individual and family units were also obtained. These factors are essential for the challenging policy decisions involved in balancing the long-term financial sustainability of the medical insurance scheme.

  15. Evaluating machine learning and statistical prediction techniques for landslide susceptibility modeling

    Science.gov (United States)

    Goetz, J. N.; Brenning, A.; Petschko, H.; Leopold, P.

    2015-08-01

    Statistical and, more recently, machine learning prediction methods have been gaining popularity in the field of landslide susceptibility modeling. In particular, these data-driven approaches show promise for mapping landslide-prone areas over large regions that may lack the geotechnical data needed for physically based methods. Currently, there is no single best method for empirical susceptibility modeling. This study therefore presents a comparison of traditional statistical and novel machine learning models for regional-scale landslide susceptibility modeling. The methods were evaluated by spatial k-fold cross-validation estimates of predictive performance, by assessment of variable importance to gain insight into model behavior, and by the appearance of the prediction (i.e. susceptibility) map. The modeling techniques applied were logistic regression (GLM), generalized additive models (GAM), weights of evidence (WOE), the support vector machine (SVM), random forest classification (RF), and bootstrap-aggregated classification trees (bundling) with penalized discriminant analysis (BPLDA). The methods were tested in three areas of the province of Lower Austria, Austria, characterized by different geological and morphological settings. Random forest and bundling classification techniques had the best overall predictive performance. However, the performances of the modeling techniques were for the most part not significantly different from each other; depending on the area of interest, the differences in overall median estimated area under the receiver operating characteristic curve (AUROC) ranged from 2.9 to 8.9 percentage points, and the differences in overall median estimated true positive rate (TPR) at a 10% false positive rate (FPR) ranged from 11 to 15 percentage points. The relative importance of each predictor generally differed between the modeling methods. However, slope angle, surface roughness and plan
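
    As an illustration of the comparison workflow described above (the study's landslide inventory is not available here), the sketch below scores a logistic regression (GLM) and a random forest by cross-validated AUROC on synthetic data; ordinary k-fold cross-validation stands in for the paper's spatial cross-validation, and all data are invented.

```python
# Illustrative sketch only: synthetic data, non-spatial k-fold CV.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Imbalanced binary data stand in for landslide / non-landslide grid cells.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5,
                           weights=[0.9, 0.1], random_state=1)

models = {
    "GLM (logistic regression)": LogisticRegression(max_iter=1000),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=1),
}

for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: median AUROC = {np.median(auc):.3f}")
```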

  16. Floating Node Method and Virtual Crack Closure Technique for Modeling Matrix Cracking-Delamination Interaction

    Science.gov (United States)

    DeCarvalho, N. V.; Chen, B. Y.; Pinho, S. T.; Baiz, P. M.; Ratcliffe, J. G.; Tay, T. E.

    2013-01-01

    A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.

  17. Study of optical techniques for the Ames unitary wind tunnels. Part 4: Model deformation

    Science.gov (United States)

    Lee, George

    1992-01-01

    A survey of systems capable of model deformation measurements was conducted. The survey included stereo-cameras, scanners, and digitizers. Moire, holographic, and heterodyne interferometry techniques were also looked at. Stereo-cameras with passive or active targets are currently being deployed for model deformation measurements at NASA Ames and LaRC, Boeing, and ONERA. Scanners and digitizers are widely used in robotics, motion analysis, medicine, etc., and some of the scanner and digitizers can meet the model deformation requirements. Commercial stereo-cameras, scanners, and digitizers are being improved in accuracy, reliability, and ease of operation. A number of new systems are coming onto the market.

  18. Floating Node Method and Virtual Crack Closure Technique for Modeling Matrix Cracking-Delamination Migration

    Science.gov (United States)

    DeCarvalho, Nelson V.; Chen, B. Y.; Pinho, Silvestre T.; Baiz, P. M.; Ratcliffe, James G.; Tay, T. E.

    2013-01-01

    A novel approach is proposed for high-fidelity modeling of progressive damage and failure in composite materials that combines the Floating Node Method (FNM) and the Virtual Crack Closure Technique (VCCT) to represent multiple interacting failure mechanisms in a mesh-independent fashion. In this study, the approach is applied to the modeling of delamination migration in cross-ply tape laminates. Delamination, matrix cracking, and migration are all modeled using fracture mechanics based failure and migration criteria. The methodology proposed shows very good qualitative and quantitative agreement with experiments.

  19. Updating prediction models by dynamical relaxation - An examination of the technique. [for numerical weather forecasting

    Science.gov (United States)

    Davies, H. C.; Turner, R. E.

    1977-01-01

    A dynamical relaxation technique for updating prediction models is analyzed with the help of the linear and nonlinear barotropic primitive equations. It is assumed that a complete four-dimensional time history of some prescribed subset of the meteorological variables is known. The rate of adaptation of the flow variables toward the true state is determined for a linearized f-model, and for mid-latitude and equatorial beta-plane models. The results of the analysis are corroborated by numerical experiments with the nonlinear shallow-water equations.

  20. Modeling and simulation of atmosphere interference signal based on FTIR spectroscopy technique

    Science.gov (United States)

    Zhang, Yugui; Li, Qiang; Yu, Zhengyang; Liu, Zhengmin

    2016-09-01

    Fourier transform infrared (FTIR) spectroscopy, featuring a wide frequency range and high spectral resolution, is becoming a research focus in spectrum analysis and is increasingly used for atmospheric detection in the aerospace field. In this paper, the principle of atmospheric interference signal generation is derived theoretically on the basis of the FTIR technique, and its mathematical model and simulation are presented. Finally, the intrinsic characteristics of the interference signal in the time and frequency domains are analyzed, providing a theoretical foundation for the design of the electrical signal processing performance parameters.
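
    As a highly simplified illustration of the relationship between an interferogram and its spectrum (not the paper's atmospheric model), the sketch below builds an idealized interferogram as a sum of cosines over two invented emission lines and recovers the spectrum with an FFT; all wavenumbers and amplitudes are assumptions.

```python
# Idealized FTIR illustration: interferogram = sum of cosines, FFT recovers spectrum.
import numpy as np

n = 4096
dx = 1.0e-4                       # optical path difference step [cm]
x = np.arange(n) * dx             # optical path difference axis

# Two made-up emission lines (wavenumber in cm^-1, relative amplitude).
lines = [(800.0, 1.0), (1200.0, 0.6)]

interferogram = np.zeros(n)
for nu, amp in lines:
    interferogram += amp * np.cos(2 * np.pi * nu * x)

# Recover the spectrum: FFT of the interferogram, frequency axis in cm^-1.
spectrum = np.abs(np.fft.rfft(interferogram))
wavenumber = np.fft.rfftfreq(n, d=dx)

for nu, _ in lines:
    k = np.argmin(np.abs(wavenumber - nu))
    print(f"peak near {wavenumber[k]:.1f} cm^-1, amplitude {spectrum[k]:.1f}")
```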

  1. Accuracy Enhanced Stability and Structure Preserving Model Reduction Technique for Dynamical Systems with Second Order Structure

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    A method for model reduction of dynamical systems with the second order structure is proposed in this paper. The proposed technique preserves the second order structure of the system, and also preserves the stability of the original systems. The method uses the controllability and observability...... gramians within the time interval to build the appropriate Petrov-Galerkin projection for dynamical systems within the time interval of interest. The bound on approximation error is also derived. The numerical results are compared with the counterparts from other techniques. The results confirm...

  2. Comparing modelling techniques when designing VPH gratings for BigBOSS

    Science.gov (United States)

    Poppett, Claire; Edelstein, Jerry; Lampton, Michael; Jelinsky, Patrick; Arns, James

    2012-09-01

    BigBOSS is a Stage IV dark energy instrument based on the baryon acoustic oscillation (BAO) and redshift-space distortion (RSD) techniques, using spectroscopic data from 20 million ELG and LRG galaxies at redshifts above 0.5. Volume phase holographic (VPH) gratings have been identified as a key technology that will enable the efficiency requirement to be met; however, it is important to be able to accurately predict their performance. In this paper we quantitatively compare different modelling techniques in order to assess the parameter space over which each is capable of accurately predicting measured performance. Finally, we present baseline parameters for the grating designs most suitable for the BigBOSS instrument.

  3. Applications of soft computing in time series forecasting simulation and modeling techniques

    CERN Document Server

    Singh, Pritpal

    2016-01-01

    This book reports on an in-depth study of fuzzy time series (FTS) modeling. It reviews and summarizes previous research work in FTS modeling and also provides a brief introduction to other soft-computing techniques, such as artificial neural networks (ANNs), rough sets (RS) and evolutionary computing (EC), focusing on how these techniques can be integrated into different phases of the FTS modeling approach. In particular, the book describes novel methods resulting from the hybridization of FTS modeling approaches with neural networks and particle swarm optimization. It also demonstrates how a new ANN-based model can be successfully applied in the context of predicting Indian summer monsoon rainfall. Thanks to its easy-to-read style and the clear explanations of the models, the book can be used as a concise yet comprehensive reference guide to fuzzy time series modeling, and will be valuable not only for graduate students, but also for researchers and professionals working for academic, business and governmen...

  4. Ensembles of signal transduction models using Pareto Optimal Ensemble Techniques (POETs).

    Science.gov (United States)

    Song, Sang Ok; Chakrabarti, Anirikh; Varner, Jeffrey D

    2010-07-01

    Mathematical modeling of complex gene expression programs is an emerging tool for understanding disease mechanisms. However, identification of large models sometimes requires training using qualitative, conflicting or even contradictory data sets. One strategy to address this challenge is to estimate experimentally constrained model ensembles using multiobjective optimization. In this study, we used Pareto Optimal Ensemble Techniques (POETs) to identify a family of proof-of-concept signal transduction models. POETs integrate Simulated Annealing (SA) with Pareto optimality to identify models near the optimal tradeoff surface between competing training objectives. We modeled a prototypical signaling network using mass-action kinetics within an ordinary differential equation (ODE) framework (64 ODEs in total). The true model was used to generate synthetic immunoblots from which the POET algorithm identified the 117 unknown model parameters. POET generated an ensemble of signaling models, which collectively exhibited population-like behavior. For example, scaled gene expression levels were approximately normally distributed over the ensemble following the addition of extracellular ligand. Also, the ensemble recovered robust and fragile features of the true model, despite significant parameter uncertainty. Taken together, these results suggest that experimentally constrained model ensembles could capture qualitatively important network features without exact parameter information.
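
    As a minimal illustration of the Pareto-ranking ingredient of such multiobjective ensemble methods (not the full simulated-annealing algorithm described above), the sketch below extracts the non-dominated candidates from a set of invented two-objective error values.

```python
# Minimal Pareto-ranking illustration: keep candidates not dominated on two objectives.
import numpy as np

rng = np.random.default_rng(2)
errors = rng.random((50, 2))      # rows: candidate parameter sets, cols: training objectives

def is_dominated(i, errs):
    """True if some other candidate is <= on all objectives and < on at least one."""
    others = np.delete(errs, i, axis=0)
    return np.any(np.all(others <= errs[i], axis=1) &
                  np.any(others < errs[i], axis=1))

pareto_front = [i for i in range(len(errors)) if not is_dominated(i, errors)]
print("non-dominated candidates (rank 0):", pareto_front)
```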

  5. Hydrological time series modeling: A comparison between adaptive neuro-fuzzy, neural network and autoregressive techniques

    Science.gov (United States)

    Lohani, A. K.; Kumar, Rakesh; Singh, R. D.

    2012-06-01

    Time series modeling is necessary for the planning and management of reservoirs. More recently, soft computing techniques have been used in hydrological modeling and forecasting. In this study, the potential of artificial neural networks and neuro-fuzzy systems for monthly reservoir inflow forecasting is examined by developing and comparing monthly inflow prediction models based on autoregressive (AR) models, artificial neural networks (ANNs) and the adaptive neural-based fuzzy inference system (ANFIS). To account for the effect of monthly periodicity in the flow data, cyclic terms are also included in the ANN and ANFIS models. Working with time series flow data of the Sutlej River at Bhakra Dam, India, several ANN and adaptive neuro-fuzzy models are trained with different input vectors, and their performance is compared with that of AR models. The ANFIS model trained with an input vector including previous inflows and cyclic terms of monthly periodicity shows a significant improvement in forecast accuracy compared with ANFIS models trained on previous inflows alone. In all cases ANFIS gives more accurate forecasts than the AR and ANN models. The proposed ANFIS model coupled with cyclic terms is shown to provide a better representation of monthly inflow for reservoir planning and operation.
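
    As a rough illustration of the simplest of the compared model classes (not the paper's ANN or ANFIS models, and on a synthetic rather than the Sutlej inflow series), the sketch below fits an AR(2) model augmented with cyclic sine/cosine terms for monthly periodicity by ordinary least squares.

```python
# Rough sketch: AR(2) plus annual-cycle terms, fitted by least squares on synthetic inflow.
import numpy as np

rng = np.random.default_rng(3)
months = np.arange(240)                                    # 20 years of monthly data
inflow = 100 + 40 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 5, months.size)

p = 2                                                      # AR order
y = inflow[p:]
X = np.column_stack([
    np.ones(y.size),
    inflow[p - 1:-1],                                      # lag 1
    inflow[p - 2:-2],                                      # lag 2
    np.sin(2 * np.pi * months[p:] / 12),                   # cyclic terms for
    np.cos(2 * np.pi * months[p:] / 12),                   # monthly periodicity
])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

one_step_pred = X @ coef
rmse = np.sqrt(np.mean((one_step_pred - y) ** 2))
print("in-sample one-step RMSE:", round(rmse, 2))
```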

  6. Numerical Time-Domain Modeling of Lamb Wave Propagation Using Elastodynamic Finite Integration Technique

    Directory of Open Access Journals (Sweden)

    Hussein Rappel

    2014-01-01

    Full Text Available This paper presents numerical time-domain modeling of Lamb wave propagation using the elastodynamic finite integration technique (EFIT), as well as its validation against analytical results. The Lamb wave method is a long-range inspection technique considered to have a unique future in the field of structural health monitoring. One of the main problems facing the Lamb wave method is how to choose the most appropriate frequency to generate waves capable of adequate transmission, proper propagation in the material, interaction with defects/damage, and reception in good condition. Modern simulation tools based on numerical methods such as the finite integration technique (FIT), the finite element method (FEM), and the boundary element method (BEM) may be used for modeling. In this paper, two sets of simulations are performed. In the first set, group velocities of Lamb waves in a steel plate are obtained numerically; the results are then compared with analytical results to validate the simulation. In the second set, EFIT is employed to study the interaction of the fundamental symmetric mode with a surface-breaking defect.

  7. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    Science.gov (United States)

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.

  8. Forecasting Macroeconomic Variables using Neural Network Models and Three Automated Model Selection Techniques

    DEFF Research Database (Denmark)

    Kock, Anders Bredahl; Teräsvirta, Timo

    In this paper we consider the forecasting performance of a well-defined class of flexible models, the so-called single hidden-layer feedforward neural network models. A major aim of our study is to find out whether they, due to their flexibility, are as useful tools in economic forecasting as some...... previous studies have indicated. When forecasting with neural network models one faces several problems, all of which influence the accuracy of the forecasts. First, neural networks are often hard to estimate due to their highly nonlinear structure. In fact, their parameters are not even globally...... on the linearisation idea: the Marginal Bridge Estimator and Autometrics. Second, one must decide whether forecasting should be carried out recursively or directly. Comparisons of these two methods exist for linear models and here these comparisons are extended to neural networks. Finally, a nonlinear model...
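
    As a toy illustration of the recursive-versus-direct choice mentioned above (with simple autoregressions standing in for the neural network models of the study), the following sketch produces a two-step-ahead forecast both ways on a synthetic series.

```python
# Toy contrast: recursive vs direct two-step-ahead forecasting with simple AR fits.
import numpy as np

rng = np.random.default_rng(4)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.8 * y[t - 1] + rng.normal()

# Fit AR(1) by least squares: y_t = a * y_{t-1}.
a = np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])

# Recursive: iterate the one-step model twice.
recursive_2step = a * (a * y[-1])

# Direct: fit a separate model that maps y_{t-2} straight to y_t.
b = np.dot(y[2:], y[:-2]) / np.dot(y[:-2], y[:-2])
direct_2step = b * y[-1]

print("recursive 2-step forecast:", round(recursive_2step, 3))
print("direct    2-step forecast:", round(direct_2step, 3))
```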

  9. Limitations in paleomagnetic data and modelling techniques and their impact on Holocene geomagnetic field models

    DEFF Research Database (Denmark)

    Panovska, S.; Korte, M.; Finlay, Chris;

    2015-01-01

    Characterization of geomagnetic field behaviour on timescales of centuries to millennia is necessary to understand the mechanisms that sustain the geodynamo and drive its evolution. As Holocene paleomagnetic and archeomagnetic data have become more abundant, strategies for regularized inversion...... of modern field data have been adapted to produce numerous time-varying global field models. We evaluate the effectiveness of several approaches to inversion and data handling, by assessing both global and regional properties of the resulting models. Global Holocene field models cannot resolve Southern...... hemisphere regional field variations without the use of sediments. A standard data set is used to construct multiple models using two different strategies for relative paleointensity calibration and declination orientation and a selection of starting models in the inversion procedure. When data uncertainties...

  10. Chronology of DIC technique based on the fundamental mathematical modeling and dehydration impact.

    Science.gov (United States)

    Alias, Norma; Saipol, Hafizah Farhah Saipan; Ghani, Asnida Che Abd

    2014-12-01

    A chronology of mathematical models of the heat and mass transfer equations is proposed for predicting moisture and temperature behavior during drying using DIC (Détente Instantanée Contrôlée), the instant controlled pressure drop technique. The DIC technique has the potential to become a widely used dehydration method for high-value foods, preserving nutrition and providing the best possible quality for food storage. The modeling starts from a regression model, followed by 2D Fick and Fourier parabolic equations and a 2D elliptic-parabolic equation in a rectangular slice; shrinkage and radiation effects are neglected. Simulations of the parabolic and elliptic-parabolic heat and mass transfer equations using numerical methods based on the finite difference method (FDM) are illustrated. Intel Core 2 Duo processors with the Linux operating system and the C programming language were used as the computational platform for the simulation. Qualitative and quantitative differences between the DIC technique and conventional drying methods are presented as a comparison.
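
    As a generic illustration of the finite-difference approach mentioned above (not the paper's DIC model), the sketch below time-steps the 2D heat equation explicitly on a rectangular slice; the grid, diffusivity, and boundary temperature are all invented.

```python
# Generic explicit FDM sketch for dT/dt = alpha * (d2T/dx2 + d2T/dy2) on a rectangle.
import numpy as np

nx, ny = 40, 20
dx = dy = 1.0e-3                    # grid spacing [m]
alpha = 1.4e-7                      # thermal diffusivity [m^2/s]
dt = 0.2 * dx**2 / alpha            # chosen below the explicit stability limit

T = np.full((ny, nx), 20.0)         # initial temperature [deg C]
T[:, 0] = T[:, -1] = T[0, :] = T[-1, :] = 80.0   # fixed heated boundary

for _ in range(2000):
    lap = ((T[1:-1, 2:] - 2 * T[1:-1, 1:-1] + T[1:-1, :-2]) / dx**2 +
           (T[2:, 1:-1] - 2 * T[1:-1, 1:-1] + T[:-2, 1:-1]) / dy**2)
    T[1:-1, 1:-1] += alpha * dt * lap

print("centre temperature after stepping:", round(T[ny // 2, nx // 2], 2), "deg C")
```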

  11. Modelling, analysis and validation of microwave techniques for the characterisation of metallic nanoparticles

    Science.gov (United States)

    Sulaimalebbe, Aslam

    In the last decade, the study of nanoparticle (NP) systems has become a large and interesting research area due to their novel properties and functionalities, which are different from those of the bulk materials, and also their potential applications in different fields. It is vital to understand the behaviour and properties of nano-materials aiming at implementing nanotechnology, controlling their behaviour and designing new material systems with superior performance. Physical characterisation of NPs falls into two main categories, property and structure analysis, where the properties of the NPs cannot be studied without the knowledge of size and structure. The direct measurement of the electrical properties of metal NPs presents a key challenge and necessitates the use of innovative experimental techniques. There have been numerous reports of two/four point resistance measurements of NPs films and also electrical conductivity of NPs films using the interdigitated microarray (IDA) electrode. However, using microwave techniques such as open ended coaxial probe (OCP) and microwave dielectric resonator (DR) for electrical characterisation of metallic NPs are much more accurate and effective compared to other traditional techniques. This is because they are inexpensive, convenient, non-destructive, contactless, hazardless (i.e. at low power) and require no special sample preparation. This research is the first attempt to determine the microwave properties of Pt and Au NP films, which were appealing materials for nano-scale electronics, using the aforementioned microwave techniques. The ease of synthesis, relatively cheap, unique catalytic activities and control over the size and the shape were the main considerations in choosing Pt and Au NPs for the present study. The initial phase of this research was to implement and validate the aperture admittance model for the OCP measurement through experiments and 3D full wave simulation using the commercially available Ansoft

  12. An open microcomputer-based laboratory system for perceptional experimentality

    Science.gov (United States)

    Hamalainen, Ari

    A computer, equipped with hardware for acquiring data about the properties of a physical system and programs for processing that data, is a powerful tool for physics research and instruction. There is strong evidence that utilizing microcomputer-based laboratories (MBLs) in instruction can lead to significantly improved learning. The perceptional approach is a method for physics instruction, developed at the Department of Physics, University of Helsinki. Its main arguments are that the meanings of the concepts must be learnt before their formal definitions and adoption, and that learning and research are fundamentally similar concept formation processes. Applying the perceptional approach requires the ability to perform quantitative experiments, either as students' laboratory exercises or as lecture demonstrations, and to process their results. MBL tools are essential for this. In student's laboratory exercises, they reduce the routine work and leave more time for the actual learning. In lecture demonstrations, they make it possible to perform the experiments in the tight time limits. At a previous stage of the research, a set of requirements was found that the perceptional approach places on MBL systems. The primary goal of this thesis is to build a prototype of a MBL system that would fulfil these requirements. A secondary goal is to describe technical aspects of a computerized measurement system from the standpoint of educational use. The prototype was built using mostly commercial sensors and data acquisition units. The software was written with a visual programming language, designed for instrumentation applications. The prototype system was developed and tested with a set of demonstrations of various topics in the Finnish high school physics curriculum, which were implemented according to the perceptional approach. Limited usability tests were also performed. The prototype was improved, until it could perform the test demonstrations. It was found to meet the

  13. Using Data Mining Techniques to Build a Classification Model for Predicting Employees Performance

    Directory of Open Access Journals (Sweden)

    Qasem A. Al-Radaideh

    2012-02-01

    Full Text Available Human capital is of high concern to company management, whose main interest is in hiring highly qualified personnel who are expected to perform well. Recently, there has been growing interest in data mining, where the objective is the discovery of knowledge that is correct and of high benefit to users. In this paper, data mining techniques were used to build a classification model for predicting employee performance. The CRISP-DM data mining methodology was adopted to build the model, with a decision tree as the main data mining tool, from which several classification rules were generated. To validate the model, several experiments were conducted using real data collected from several companies. The model is intended to be used for predicting the performance of new applicants.
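
    As an illustration of the general decision-tree workflow (the paper's HR data are not public, so a synthetic table with invented features and an invented labeling rule stands in), the sketch below trains a small tree to predict a binary performance label and prints the resulting rules.

```python
# Illustrative sketch only: synthetic applicant data, small decision tree.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(5)
n = 500
X = np.column_stack([
    rng.integers(0, 15, n),        # years of experience
    rng.integers(1, 5, n),         # education level (1-4)
    rng.uniform(0, 1, n),          # interview score
])
# Invented rule generating the "high performer" label, plus noise.
y = ((0.05 * X[:, 0] + 0.1 * X[:, 1] + 0.6 * X[:, 2]
      + rng.normal(0, 0.1, n)) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

print("test accuracy:", round(tree.score(X_test, y_test), 3))
print(export_text(tree, feature_names=["experience", "education", "interview"]))
```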

  14. New Diagnostic, Launch and Model Control Techniques in the NASA Ames HFFAF Ballistic Range

    Science.gov (United States)

    Bogdanoff, David W.

    2012-01-01

    This report presents new diagnostic, launch and model control techniques used in the NASA Ames HFFAF ballistic range. High speed movies were used to view the sabot separation process and the passage of the model through the model splap paper. Cavities in the rear of the sabot, to catch the muzzle blast of the gun, were used to control sabot finger separation angles and distances. Inserts were installed in the powder chamber to greatly reduce the ullage volume (empty space) in the chamber. This resulted in much more complete and repeatable combustion of the powder and hence, in much more repeatable muzzle velocities. Sheets of paper or cardstock, impacting one half of the model, were used to control the amplitudes of the model pitch oscillations.

  15. Automatic parameter extraction technique for gate leakage current modeling in double gate MOSFET

    Science.gov (United States)

    Darbandy, Ghader; Gneiting, Thomas; Alius, Heidrun; Alvarado, Joaquín; Cerdeira, Antonio; Iñiguez, Benjamin

    2013-11-01

    Direct tunneling (DT) and trap-assisted tunneling (TAT) gate leakage current parameters have been extracted and verified using an automatic parameter extraction approach. The industry-standard package IC-CAP is used to extract the leakage current model parameters. The model is coded in Verilog-A, and comparison between the model and measured data yields the model parameter values and their correlations/relations. The model and parameter extraction techniques have been used to study, on the basis of the extracted values, the impact of the parameters on the gate leakage current. It is shown that the gate leakage current depends more strongly on the interfacial barrier height than on the barrier height of the dielectric layer; much the same holds for the carrier effective masses in the interfacial and dielectric layers. Comparison between the simulated results and available measured gate leakage current characteristics of Trigate MOSFETs shows good agreement.
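
    As a generic illustration of parameter extraction by fitting a model to measured characteristics (not the IC-CAP/Verilog-A flow used in the record), the sketch below recovers the two parameters of a toy exponential gate-leakage model from noisy synthetic I-V data; the model form and all numbers are invented.

```python
# Generic extraction sketch: toy model I = i0 * exp(a * V), fitted via log(I).
import numpy as np

rng = np.random.default_rng(6)
v = np.linspace(0.1, 1.2, 30)
i_true = 1e-12 * np.exp(8.0 * v)                       # synthetic "device"
i_meas = i_true * (1 + rng.normal(0, 0.05, v.size))    # add 5% measurement noise

a_fit, log_i0 = np.polyfit(v, np.log(i_meas), 1)       # log(I) = log(i0) + a*V
print(f"extracted i0 = {np.exp(log_i0):.2e} A, a = {a_fit:.2f} 1/V")
```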

  16. A stochastic delay model for pricing debt and equity: Numerical techniques and applications

    Science.gov (United States)

    Tambue, Antoine; Kemajou Brown, Elisabeth; Mohammed, Salah

    2015-01-01

    Delayed nonlinear models for pricing corporate liabilities and European options were recently developed. In the accompanying paper (Kemajou et al., 2012) [14], using a self-financing strategy and duplication, we derived a random partial differential equation (RPDE) whose solutions describe the evolution of the debt and equity values of a firm over the last delay-period interval. In this paper, we provide robust numerical techniques for solving the delayed nonlinear model for the firm value, along with the corresponding RPDEs modeling the firm's debt and equity values. Using financial data from several firms, we forecast and compare numerical solutions from both the nonlinear delayed model and the classical Merton model with the real corporate data. This comparison indicates that in corporate finance the past dependence of the firm value process may be an important feature and therefore should not be ignored.

  17. Potential and limitations of X-Ray micro-computed tomography in arthropod neuroanatomy: a methodological and comparative survey.

    Science.gov (United States)

    Sombke, Andy; Lipke, Elisabeth; Michalik, Peter; Uhl, Gabriele; Harzsch, Steffen

    2015-06-01

    Classical histology or immunohistochemistry combined with fluorescence or confocal laser scanning microscopy are common techniques in arthropod neuroanatomy, and these methods often require time-consuming and difficult dissections and sample preparations. Moreover, these methods are prone to artifacts due to compression and distortion of tissues, which often result in information loss and especially affect the spatial relationships of the examined parts of the nervous system in their natural anatomical context. Noninvasive approaches such as X-ray micro-computed tomography (micro-CT) can overcome such limitations and have been shown to be a valuable tool for understanding and visualizing internal anatomy and structural complexity. Nevertheless, knowledge about the potential of this method for analyzing the anatomy and organization of nervous systems, especially of taxa with smaller body size (e.g., many arthropods), is limited. This study set out to analyze the brains of selected arthropods with micro-CT, and to compare these results with available histological and immunohistochemical data. Specifically, we explored the influence of different sample preparation procedures. Our study shows that micro-CT is highly suitable for analyzing arthropod neuroarchitecture in situ and allows specific neuropils to be distinguished within the brain to extract quantitative data such as neuropil volumes. Moreover, data acquisition is considerably faster compared with many classical histological techniques. Thus, we conclude that micro-CT is highly suitable for targeting neuroanatomy, as it reduces the risk of artifacts and is faster than classical techniques. © 2015 Wiley Periodicals, Inc.

  18. High frequency magnetic field technique: mathematical modelling and development of a full scale water fraction meter

    Energy Technology Data Exchange (ETDEWEB)

    Cimpan, Emil

    2004-09-15

    This work is concerned with the development of a new on-line measuring technique to be used in measurements of the water concentration in a two component oil/water or three component (i.e. multiphase) oil/water/gas flow. The technique is based on using non-intrusive coil detectors and experiments were performed both statically (medium at rest) and dynamically (medium flowing through a flow rig). The various coil detectors were constructed with either one or two coils and specially designed electronics were used. The medium was composed by air, machine oil, and water having different conductivity values, i.e. seawater and salt water with various conductivities (salt concentrations) such as 1 S/m, 4.9 S/m and 9.3 S/m. The experimental measurements done with the different mixtures were further used to mathematically model the physical principle used in the technique. This new technique is based on measuring the coil impedance and signal frequency at the self-resonance frequency of the coil to determine the water concentration in the mix. By using numerous coils it was found, experimentally, that generally both the coil impedance and the self-resonance frequency of the coil decreased as the medium conductivity increased. Both the impedance and the self-resonance frequency of the coil depended on the medium loss due to the induced eddy currents within the conductive media in the mixture, i.e. water. In order to detect relatively low values of the medium loss, the self-resonance frequency of the coil and also of the magnetic field penetrating the media should be relatively high (within the MHz range and higher). Therefore, the technique was called and referred to throughout the entire work as the high frequency magnetic field technique (HFMFT). To practically use the HFMFT, it was necessary to circumscribe an analytical frame to this technique. This was done by working out a mathematical model that relates the impedance and the self-resonance frequency of the coil to the

  19. Hybrid Model Testing Technique for Deep-Sea Platforms Based on Equivalent Water Depth Truncation

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    In this paper, an inner-turret-moored FPSO operating in 320 m of water is selected to study the so-called "passively-truncated + numerical-simulation" type of hybrid model testing technique, with a truncated water depth of 160 m and a model scale of λ=80. During the investigation, the optimal design of the equivalent-depth truncated system is obtained by using the similarity of the static characteristics between the truncated system and the full-depth one as the objective function. The corresponding physical test model is built according to the truncated system. By adopting a coupled time-domain simulation method, the truncated-system model test is numerically reconstructed to carefully verify the computer simulation software and to adjust the corresponding hydrodynamic parameters. Based on this work, numerical extrapolation to the full-depth system is performed using the verified software and the adjusted hydrodynamic parameters. The full-depth system model test is then performed in the basin and the results are compared with those from the numerical extrapolation. Finally, the implementation procedure and the key techniques of hybrid model testing of deep-sea platforms are summarized and presented. From these investigations, some useful conclusions are drawn.

  20. Using Interior Point Method Optimization Techniques to Improve 2- and 3-Dimensional Models of Earth Structures

    Science.gov (United States)

    Zamora, A.; Gutierrez, A. E.; Velasco, A. A.

    2014-12-01

    2- and 3-dimensional models obtained from the inversion of geophysical data are widely used to represent the structural composition of the Earth and to constrain independent models obtained from other geological data (e.g. core samples, seismic surveys, etc.). However, inverse modeling of gravity data is a very unstable and ill-posed mathematical problem, since solutions are non-unique and small changes in the parameters (position and density contrast of an anomalous body) can strongly affect the resulting model. By implementing an interior-point constrained optimization technique, we improve 2-D and 3-D models of Earth structures with known density contrasts, mapping anomalous bodies in uniform regions and boundaries between layers in layered environments. The proposed techniques are applied to synthetic data and to gravity data from the Rio Grande Rift and the Cooper Flat Mine region in Sierra County, New Mexico. Specifically, we improve the 2- and 3-D Earth models by eliminating unacceptable solutions (those that do not satisfy the required constraints or are geologically unfeasible), thereby reducing the solution space.
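
    As a sketch of bound-constrained gravity inversion in the same spirit (scipy's 'trust-constr' solver, which handles inequality constraints with an interior-point-style barrier, stands in for the paper's implementation), the example below recovers the depth and density contrast of a buried sphere from synthetic gravity data, with bounds excluding geologically unfeasible solutions; the sphere radius, profile, and all numbers are invented.

```python
# Sketch only: bound-constrained fit of a buried sphere's depth and density contrast.
import numpy as np
from scipy.optimize import minimize

G = 6.674e-11                         # gravitational constant [m^3 kg^-1 s^-2]
R = 50.0                              # assumed sphere radius [m]
x = np.linspace(-500, 500, 101)       # profile coordinates [m]

def gz(params, x):
    """Vertical gravity anomaly (in mGal) of a sphere at depth z with contrast drho."""
    z, drho = params
    mass = (4.0 / 3.0) * np.pi * R**3 * drho
    return 1e5 * G * mass * z / (x**2 + z**2) ** 1.5

rng = np.random.default_rng(7)
true = (120.0, 400.0)                                # depth [m], contrast [kg/m^3]
data = gz(true, x) + rng.normal(0, 0.002, x.size)    # synthetic observed anomaly

def misfit(p):
    return np.sum((gz(p, x) - data) ** 2)

bounds = [(10.0, 1000.0), (0.0, 1000.0)]             # bounds on depth and contrast

result = minimize(misfit, x0=(200.0, 200.0), bounds=bounds,
                  method="trust-constr", options={"gtol": 1e-10, "xtol": 1e-12})
print("recovered depth, density contrast:", np.round(result.x, 1))
```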